Sample records for larger scale implementation

  1. LES Modeling of Lateral Dispersion in the Ocean on Scales of 10 m to 10 km

    DTIC Science & Technology

    2015-10-20

    ocean on scales of 0.1-10 km that can be implemented in larger-scale ocean models. These parameterizations will incorporate the effects of local...www.fields.utoronto.ca/video-archive/static/2013/06/166-1766/mergedvideo.ogv) and at the Nonlinear Effects in Internal Waves Conference held at Cornell University

  2. Multi-scale Slip Inversion Based on Simultaneous Spatial and Temporal Domain Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Liu, W.; Yao, H.; Yang, H. Y.

    2017-12-01

    Finite fault inversion is a widely used method to study earthquake rupture processes. Some previous studies have proposed different methods to implement finite fault inversion, including time-domain, frequency-domain, and wavelet-domain methods. Many previous studies have found that different frequency bands show different characteristics of the seismic rupture (e.g., Wang and Mori, 2011; Yao et al., 2011, 2013; Uchide et al., 2013; Yin et al., 2017). Generally, lower frequency waveforms correspond to larger-scale rupture characteristics while higher frequency data are representative of smaller-scale ones. Therefore, multi-scale analysis can help us understand the earthquake rupture process thoroughly from larger scale to smaller scale. By using the wavelet transform, wavelet-domain methods can analyze both the time and frequency content of signals at different scales. Traditional wavelet-domain methods (e.g., Ji et al., 2002) implement finite fault inversion with both lower and higher frequency signals together to recover larger-scale and smaller-scale characteristics of the rupture process simultaneously. Here we propose an alternative strategy with a two-step procedure, i.e., first constraining the larger-scale characteristics with lower frequency signals, and then resolving the smaller-scale ones with higher frequency signals. We have designed synthetic tests to verify our strategy and compare it with the traditional one. We have also applied our strategy to study the 2015 Gorkha, Nepal earthquake using tele-seismic waveforms. Both the traditional method and our two-step strategy only analyze the data in different temporal scales (i.e., different frequency bands), while the spatial distribution of model parameters also shows multi-scale characteristics. A more sophisticated strategy is to transform the slip model into different spatial scales, and then analyze the smooth slip distribution (larger scales) with lower frequency data first and the more detailed slip distribution (smaller scales) with higher frequency data subsequently. We are now implementing the slip inversion using both spatial and temporal domain wavelets. This multi-scale analysis can help us better understand frequency-dependent rupture characteristics of large earthquakes.
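
    A minimal sketch of the two-step idea in this record, assuming hypothetical linear forward operators, synthetic data, and arbitrary band edges (this is not the authors' code): the lower-frequency band is inverted first for the coarse slip, and the higher-frequency residual is then inverted for the finer-scale slip.

    ```python
    # Two-step, frequency-banded linear inversion sketch. G_coarse, G_fine, the
    # band edges, and the noise level are all illustrative assumptions.
    import numpy as np
    from scipy.signal import butter, filtfilt

    rng = np.random.default_rng(0)
    n_t, n_coarse, n_fine = 2048, 8, 32          # samples, coarse/fine model sizes
    fs = 20.0                                     # assumed sampling rate (Hz)

    G_coarse = rng.standard_normal((n_t, n_coarse))   # coarse-scale forward operator
    G_fine = rng.standard_normal((n_t, n_fine))       # fine-scale forward operator
    m_c_true = rng.standard_normal(n_coarse)
    m_f_true = 0.3 * rng.standard_normal(n_fine)
    d = G_coarse @ m_c_true + G_fine @ m_f_true + 0.05 * rng.standard_normal(n_t)

    def bandpass(x, lo, hi):
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        return filtfilt(b, a, x)

    # Step 1: the lower-frequency band constrains the larger-scale (coarse) slip.
    d_low = bandpass(d, 0.01, 0.1)
    G_low = np.column_stack([bandpass(G_coarse[:, j], 0.01, 0.1) for j in range(n_coarse)])
    m_c, *_ = np.linalg.lstsq(G_low, d_low, rcond=None)

    # Step 2: the higher-frequency residual resolves the smaller-scale (fine) slip.
    d_high = bandpass(d - G_coarse @ m_c, 0.1, 2.0)
    G_high = np.column_stack([bandpass(G_fine[:, j], 0.1, 2.0) for j in range(n_fine)])
    m_f, *_ = np.linalg.lstsq(G_high, d_high, rcond=None)
    ```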

  3. Quantum interference in heterogeneous superconducting-photonic circuits on a silicon chip.

    PubMed

    Schuck, C; Guo, X; Fan, L; Ma, X; Poot, M; Tang, H X

    2016-01-21

    Quantum information processing holds great promise for communicating and computing data efficiently. However, scaling current photonic implementation approaches to larger system size remains an outstanding challenge for realizing disruptive quantum technology. Two main ingredients of quantum information processors are quantum interference and single-photon detectors. Here we develop a hybrid superconducting-photonic circuit system to show how these elements can be combined in a scalable fashion on a silicon chip. We demonstrate the suitability of this approach for integrated quantum optics by interfering and detecting photon pairs directly on the chip with waveguide-coupled single-photon detectors. Using a directional coupler implemented with silicon nitride nanophotonic waveguides, we observe 97% interference visibility when measuring photon statistics with two monolithically integrated superconducting single-photon detectors. The photonic circuit and detector fabrication processes are compatible with standard semiconductor thin-film technology, making it possible to implement more complex and larger scale quantum photonic circuits on silicon chips.

  4. IMPLEMENTATION STRATEGY FOR PRODUCTION OF NATIONAL LAND-COVER DATA (NLCD) FROM THE LANDSAT 7 THEMATIC MAPPER SATELLITE

    EPA Science Inventory

    As environmental programs within and outside the federal government continue to move away from point-based studies to larger and larger spatial (not cartographic) scale, the need for land-cover and other geographic data has become ineluctable. The national land-cover mapping pr...

  5. International Perspectives on Educational Reform and Policy Implementation.

    ERIC Educational Resources Information Center

    Carter, David S. G., Ed.; O'Neill, Marnie H., Ed.

    This book focuses on educational change processes in the context of larger scale educational reform. The first of 2 volumes, the book contains 11 chapters that examine the historical, social, and economic forces at work in the formulation and implementation of educational policy. The chapters present different cross-cultural experiences of…

  6. CONVERTING CAMPUS WASTE STREAMS INTO LOCALLY USED ENERGY PRODUCTS THROUGH STEAM HYDROGASIFICATION AND METHANE REFORMATION

    EPA Science Inventory

    We expect to find that the application of our technology can significantly benefit the school, environmentally and economically. Our lab-scale demonstration results would lead to larger demonstration scale investigations and ultimately implementation on campus showcasing the p...

  7. Quantum interference in heterogeneous superconducting-photonic circuits on a silicon chip

    PubMed Central

    Schuck, C.; Guo, X.; Fan, L.; Ma, X.; Poot, M.; Tang, H. X.

    2016-01-01

    Quantum information processing holds great promise for communicating and computing data efficiently. However, scaling current photonic implementation approaches to larger system size remains an outstanding challenge for realizing disruptive quantum technology. Two main ingredients of quantum information processors are quantum interference and single-photon detectors. Here we develop a hybrid superconducting-photonic circuit system to show how these elements can be combined in a scalable fashion on a silicon chip. We demonstrate the suitability of this approach for integrated quantum optics by interfering and detecting photon pairs directly on the chip with waveguide-coupled single-photon detectors. Using a directional coupler implemented with silicon nitride nanophotonic waveguides, we observe 97% interference visibility when measuring photon statistics with two monolithically integrated superconducting single-photon detectors. The photonic circuit and detector fabrication processes are compatible with standard semiconductor thin-film technology, making it possible to implement more complex and larger scale quantum photonic circuits on silicon chips. PMID:26792424

  8. Numerical Simulations of Vortical Mode Stirring: Effects of Large Scale Shear and Strain

    DTIC Science & Technology

    2015-09-30

    Numerical Simulations of Vortical Mode Stirring: Effects of Large-Scale Shear and Strain M.-Pascale Lelong NorthWest Research Associates...can be implemented in larger-scale ocean models. These parameterizations will incorporate the effects of local ambient conditions including latitude...talk at the Nonlinear Effects in Internal Waves Conference held

  9. Deployment Pulmonary Health

    DTIC Science & Technology

    2015-02-11

    A similar risk-based approach may be appropriate for deploying military personnel. e) If DoD were to consider implementing a large-scale pre...quality of existing spirometry programs prior to considering a larger scale pre-deployment effort. Identifying an accelerated decrease in spirometry...baseline spirometry on a wider scale. e) Conduct pre-deployment baseline spirometry if there is a significant risk of exposure to a pulmonary hazard based

  10. How Does Scale of Implementation Impact the Environmental Sustainability of Wastewater Treatment Integrated with Resource Recovery?

    PubMed

    Cornejo, Pablo K; Zhang, Qiong; Mihelcic, James R

    2016-07-05

    Energy and resource consumptions required to treat and transport wastewater have led to efforts to improve the environmental sustainability of wastewater treatment plants (WWTPs). Resource recovery can reduce the environmental impact of these systems; however, limited research has considered how the scale of implementation impacts the sustainability of WWTPs integrated with resource recovery. Accordingly, this research uses life cycle assessment (LCA) to evaluate how the scale of implementation impacts the environmental sustainability of wastewater treatment integrated with water reuse, energy recovery, and nutrient recycling. Three systems were selected: a septic tank with aerobic treatment at the household scale, an advanced water reclamation facility at the community scale, and an advanced water reclamation facility at the city scale. Three sustainability indicators were considered: embodied energy, carbon footprint, and eutrophication potential. This study determined that as with economies of scale, there are benefits to centralization of WWTPs with resource recovery in terms of embodied energy and carbon footprint; however, the community scale was shown to have the lowest eutrophication potential. Additionally, technology selection, nutrient control practices, system layout, and topographical conditions may have a larger impact on environmental sustainability than the implementation scale in some cases.

  11. Hierarchical population monitoring of greater sage-grouse (Centrocercus urophasianus) in Nevada and California—Identifying populations for management at the appropriate spatial scale

    USGS Publications Warehouse

    Coates, Peter S.; Prochazka, Brian G.; Ricca, Mark A.; Wann, Gregory T.; Aldridge, Cameron L.; Hanser, Steven E.; Doherty, Kevin E.; O'Donnell, Michael S.; Edmunds, David R.; Espinosa, Shawn P.

    2017-08-10

    Population ecologists have long recognized the importance of ecological scale in understanding processes that guide observed demographic patterns for wildlife species. However, directly incorporating spatial and temporal scale into monitoring strategies that detect whether trajectories are driven by local or regional factors is challenging and rarely implemented. Identifying the appropriate scale is critical to the development of management actions that can attenuate or reverse population declines. We describe a novel example of a monitoring framework for estimating annual rates of population change for greater sage-grouse (Centrocercus urophasianus) within a hierarchical and spatially nested structure. Specifically, we conducted Bayesian analyses on a 17-year dataset (2000–2016) of lek counts in Nevada and northeastern California to estimate annual rates of population change, and compared trends across nested spatial scales. We identified leks and larger scale populations in immediate need of management, based on the occurrence of two criteria: (1) crossing of a destabilizing threshold designed to identify significant rates of population decline at a particular nested scale; and (2) crossing of decoupling thresholds designed to identify rates of population decline at smaller scales that decouple from rates of population change at a larger spatial scale. This approach establishes how declines affected by local disturbances can be separated from those operating at larger scales (for example, broad-scale wildfire and region-wide drought). Given the threshold output from our analysis, this adaptive management framework can be implemented readily and annually to facilitate responsive and effective actions for sage-grouse populations in the Great Basin. The rules of the framework can also be modified to identify populations responding positively to management action or demonstrating strong resilience to disturbance. Similar hierarchical approaches might be beneficial for other species occupying landscapes with heterogeneous disturbance and climatic regimes.
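
    The two-criterion flagging rule described above can be expressed compactly. In the sketch below the threshold values are invented for illustration; the published framework estimates rates of population change with a Bayesian model.

    ```python
    # Hypothetical flagging rule: a unit is flagged when it (1) declines past a
    # destabilizing threshold and (2) decouples from the larger spatial scale.
    def flag_for_management(rate_local, rate_regional,
                            destabilizing=-0.10, decoupling=-0.05):
        crosses_destabilizing = rate_local < destabilizing
        decouples_from_region = (rate_local - rate_regional) < decoupling
        return crosses_destabilizing and decouples_from_region

    # Example: a lek declining 15%/yr inside a region declining 2%/yr is flagged.
    print(flag_for_management(-0.15, -0.02))   # True
    ```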

  12. Kinetics of Aggregation with Choice

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ben-Naim, Eli; Krapivsky, Paul

    Here we generalize the ordinary aggregation process to allow for choice. In ordinary aggregation, two random clusters merge and form a larger aggregate. In our implementation of choice, a target cluster and two candidate clusters are randomly selected and the target cluster merges with the larger of the two candidate clusters. We study the long-time asymptotic behavior and find that as in ordinary aggregation, the size density adheres to the standard scaling form. However, aggregation with choice exhibits a number of different features. First, the density of the smallest clusters exhibits anomalous scaling. Second, both the small-size and the large-size tails of the density are overpopulated, at the expense of the density of moderate-size clusters. Finally, we also study the complementary case where the smaller candidate cluster participates in the aggregation process and find an abundance of moderate clusters at the expense of small and large clusters. Additionally, we investigate aggregation processes with choice among multiple candidate clusters and a symmetric implementation where the choice is between two pairs of clusters.

  13. The Impact of Large, Multi-Function/Multi-Site Competitions

    DTIC Science & Technology

    2003-08-01

    this approach generates larger savings and improved service quality, and is less expensive to implement. Moreover, it is a way to meet the President's...of the study is to assess the degree to which large-scale competitions completed have resulted in increased savings and service quality and decreased

  14. Lessons from a pilot program to induce stove replacements in Chile: design, implementation and evaluation

    NASA Astrophysics Data System (ADS)

    Gómez, Walter; Chávez, Carlos; Salgado, Hugo; Vásquez, Felipe

    2017-11-01

    We present the design, implementation, and evaluation of a subsidy program to introduce cleaner and more efficient household wood combustion technologies. The program was conducted in the city of Temuco, one of the most polluted cities in southern Chile, as a pilot study to design a new national stove replacement initiative for pollution control. In this city, around 90% of the total emissions of suspended particulate matter is caused by households burning wood. We created a simulated market in which households could choose among different combustion technologies with an assigned subsidy. The subsidy was a relevant factor in the decision to participate, and the inability to secure credit was a significant constraint for the participation of low-income households. Due to several practical difficulties and challenges associated with the implementation of large-scale programs that encourage technological innovation at the household level, it is strongly advisable to start with a small-scale pilot that can provide useful insights into the final design of a fuller, larger-scale program.

  15. Small-Scale Design Experiments as Working Space for Larger Mobile Communication Challenges

    ERIC Educational Resources Information Center

    Lowe, Sarah; Stuedahl, Dagny

    2014-01-01

    In this paper, a design experiment using Instagram as a cultural probe is submitted as a method for analyzing the challenges that arise when considering the implementation of social media within a distributed communication space. It outlines how small, iterative investigations can reveal deeper research questions relevant to the education of…

  16. Kinetics of Aggregation with Choice

    DOE PAGES

    Ben-Naim, Eli; Krapivsky, Paul

    2016-12-01

    Here we generalize the ordinary aggregation process to allow for choice. In ordinary aggregation, two random clusters merge and form a larger aggregate. In our implementation of choice, a target cluster and two candidate clusters are randomly selected and the target cluster merges with the larger of the two candidate clusters. We study the long-time asymptotic behavior and find that as in ordinary aggregation, the size density adheres to the standard scaling form. However, aggregation with choice exhibits a number of different features. First, the density of the smallest clusters exhibits anomalous scaling. Second, both the small-size and the large-size tails of the density are overpopulated, at the expense of the density of moderate-size clusters. Finally, we also study the complementary case where the smaller candidate cluster participates in the aggregation process and find an abundance of moderate clusters at the expense of small and large clusters. Additionally, we investigate aggregation processes with choice among multiple candidate clusters and a symmetric implementation where the choice is between two pairs of clusters.
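
    A minimal Monte Carlo sketch of the "aggregation with choice" rule described here: a randomly selected target cluster merges with the larger of two randomly selected candidate clusters. Cluster counts and step numbers are illustrative, not from the paper.

    ```python
    # Aggregation with choice: target merges with the LARGER of two candidates.
    import random

    def aggregate_with_choice(n_monomers=20000, n_steps=18000, seed=1):
        random.seed(seed)
        clusters = [1] * n_monomers                      # start from unit-mass monomers
        for _ in range(n_steps):
            if len(clusters) < 3:
                break
            i_target, i_a, i_b = random.sample(range(len(clusters)), 3)
            i_chosen = i_a if clusters[i_a] >= clusters[i_b] else i_b
            merged = clusters[i_target] + clusters[i_chosen]
            # remove the two participating clusters (largest index first), add the merger
            for idx in sorted((i_target, i_chosen), reverse=True):
                clusters.pop(idx)
            clusters.append(merged)
        return clusters

    sizes = aggregate_with_choice()   # inspect the resulting cluster-size distribution
    ```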

  17. MODFLOW-LGR: Practical application to a large regional dataset

    NASA Astrophysics Data System (ADS)

    Barnes, D.; Coulibaly, K. M.

    2011-12-01

    In many areas of the US, including southwest Florida, large regional-scale groundwater models have been developed to aid in decision making and water resources management. These models are subsequently used as a basis for site-specific investigations. Because the large scale of these regional models is not appropriate for local application, refinement is necessary to analyze the local effects of pumping wells and groundwater related projects at specific sites. The most commonly used approach to date is Telescopic Mesh Refinement or TMR. It allows the extraction of a subset of the large regional model with boundary conditions derived from the regional model results. The extracted model is then updated and refined for local use using a variable sized grid focused on the area of interest. MODFLOW-LGR, local grid refinement, is an alternative approach which allows model discretization at a finer resolution in areas of interest and provides coupling between the larger "parent" model and the locally refined "child." In the present work, these two approaches are tested on a mining impact assessment case in southwest Florida using a large regional dataset (The Lower West Coast Surficial Aquifer System Model). Various metrics for performance are considered. They include: computation time, water balance (as compared to the variable sized grid), calibration, implementation effort, and application advantages and limitations. The results indicate that MODFLOW-LGR is a useful tool to improve local resolution of regional scale models. While performance metrics, such as computation time, are case-dependent (model size, refinement level, stresses involved), implementation effort, particularly when regional models of suitable scale are available, can be minimized. The creation of multiple child models within a larger scale parent model makes it possible to reuse the same calibrated regional dataset with minimal modification. In cases similar to the Lower West Coast model, where a model is larger than optimal for direct application as a parent grid, a combination of TMR and LGR approaches should be used to develop a suitable parent grid.
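
    Below is a minimal, hypothetical sketch of the TMR idea described in this record: perimeter heads for a locally refined child grid are interpolated from a coarse parent-grid solution to serve as boundary conditions. Grid sizes and the head field are synthetic; this is not MODFLOW or flopy code.

    ```python
    # Conceptual TMR-style boundary extraction: parent heads -> child perimeter.
    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    # Parent grid: 100 x 100 cells at 1 km spacing, with a synthetic head field.
    xp = np.arange(100) * 1000.0
    yp = np.arange(100) * 1000.0
    heads_parent = 50.0 - 0.0001 * (xp[None, :] + yp[:, None])

    interp = RegularGridInterpolator((yp, xp), heads_parent)

    # Child grid: a 1 km x 1 km window refined to 10 m cells.
    x0, y0, dx, n = 40000.0, 40000.0, 10.0, 100
    xc = x0 + np.arange(n) * dx
    yc = y0 + np.arange(n) * dx

    # Heads along the child-grid perimeter become specified-head boundaries.
    perimeter = ([(yc[0], x) for x in xc] + [(yc[-1], x) for x in xc] +
                 [(y, xc[0]) for y in yc] + [(y, xc[-1]) for y in yc])
    bc_heads = interp(np.array(perimeter))
    ```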

  18. Software engineering risk factors in the implementation of a small electronic medical record system: the problem of scalability.

    PubMed

    Chiang, Michael F; Starren, Justin B

    2002-01-01

    The successful implementation of clinical information systems is difficult. In examining the reasons and potential solutions for this problem, the medical informatics community may benefit from the lessons of a rich body of software engineering and management literature about the failure of software projects. Based on previous studies, we present a conceptual framework for understanding the risk factors associated with large-scale projects. However, the vast majority of existing literature is based on large, enterprise-wide systems, and it is unclear whether those results may be scaled down and applied to smaller projects such as departmental medical information systems. To examine this issue, we discuss the case study of a delayed electronic medical record implementation project in a small specialty practice at Columbia-Presbyterian Medical Center. While the factors contributing to the delay of this small project share some attributes with those found in larger organizations, there are important differences. The significance of these differences for groups implementing small medical information systems is discussed.

  19. Two Large-Scale Professional Development Programs for Mathematics Teachers and Their Impact on Student Achievement

    ERIC Educational Resources Information Center

    Lindvall, Jannika

    2017-01-01

    This article reports on two professional development programs for mathematics teachers and their effects on student achievement. The projects' design and their implementation within a larger municipality in Sweden, working together with over 90 teachers and 5000 students in elementary school, are described by using a set of core critical features…

  20. The Vision Is Set, Now Help Chronicle the Change

    ERIC Educational Resources Information Center

    Woodin, Terry; Feser, Jason; Herrera, Jose

    2012-01-01

    The Vision and Change effort to explore and implement needed changes in undergraduate biology education has been ongoing since 2006. It is now time to take stock of changes that have occurred at the faculty and single-course levels, and to consider how to accomplish the larger-scale changes needed at departmental and institutional levels. This…

  1. Challenges in Disseminating Model Programs: A Qualitative Analysis of the Strengthening Washington DC Families Program

    ERIC Educational Resources Information Center

    Fox, Danielle Polizzi; Gottfredson, Denise C.; Kumpfer, Karol K.; Beatty, Penny D.

    2004-01-01

    This article discusses the challenges faced when a popular model program, the Strengthening Families Program, which in the past has been implemented on a smaller scale in single organizations, moves to a larger, multiorganization endeavor. On the basis of 42 interviews conducted with program staff, the results highlight two main themes that…

  2. Sparse maps—A systematic infrastructure for reduced-scaling electronic structure methods. II. Linear scaling domain based pair natural orbital coupled cluster theory

    NASA Astrophysics Data System (ADS)

    Riplinger, Christoph; Pinski, Peter; Becker, Ute; Valeev, Edward F.; Neese, Frank

    2016-01-01

    Domain based local pair natural orbital coupled cluster theory with single-, double-, and perturbative triple excitations (DLPNO-CCSD(T)) is a highly efficient local correlation method. It is known to be accurate and robust and can be used in a black box fashion in order to obtain coupled cluster quality total energies for large molecules with several hundred atoms. While previous implementations showed near linear scaling up to a few hundred atoms, several nonlinear scaling steps limited the applicability of the method for very large systems. In this work, these limitations are overcome and a linear scaling DLPNO-CCSD(T) method for closed shell systems is reported. The new implementation is based on the concept of sparse maps that was introduced in Part I of this series [P. Pinski, C. Riplinger, E. F. Valeev, and F. Neese, J. Chem. Phys. 143, 034108 (2015)]. Using the sparse map infrastructure, all essential computational steps (integral transformation and storage, initial guess, pair natural orbital construction, amplitude iterations, triples correction) are achieved in a linear scaling fashion. In addition, a number of additional algorithmic improvements are reported that lead to significant speedups of the method. The new, linear-scaling DLPNO-CCSD(T) implementation typically is 7 times faster than the previous implementation and consumes 4 times less disk space for large three-dimensional systems. For linear systems, the performance gains and memory savings are substantially larger. Calculations with more than 20 000 basis functions and 1000 atoms are reported in this work. In all cases, the time required for the coupled cluster step is comparable to or lower than for the preceding Hartree-Fock calculation, even if this is carried out with the efficient resolution-of-the-identity and chain-of-spheres approximations. The new implementation even reduces the error in absolute correlation energies by about a factor of two, compared to the already accurate previous implementation.

  3. Browndye: A Software Package for Brownian Dynamics

    PubMed Central

    McCammon, J. Andrew

    2010-01-01

    A new software package, Browndye, is presented for simulating the diffusional encounter of two large biological molecules. It can be used to estimate second-order rate constants and encounter probabilities, and to explore reaction trajectories. Browndye builds upon previous knowledge and algorithms from software packages such as UHBD, SDA, and Macrodox, while implementing algorithms that scale to larger systems. PMID:21132109

  4. Communities of Practice and On-Line Support for Dissemination and Implementation of Innovation.

    ERIC Educational Resources Information Center

    Stuckey, Bronwyn; Buehring, Anna; Fraser, Sally

    New tools, whether developed organizationally, commercially, or within a domain may represent innovation in the workplace and can be part of larger scale reform and change. The focus of this research is on exploring the roster of issues arising as the theory of communities of practice is applied to specific cases of online professional development…

  5. Cost-effective description of strong correlation: Efficient implementations of the perfect quadruples and perfect hextuples models

    DOE PAGES

    Lehtola, Susi; Parkhill, John; Head-Gordon, Martin

    2016-10-07

    Novel implementations based on dense tensor storage are presented here for the singlet-reference perfect quadruples (PQ) [J. A. Parkhill et al., J. Chem. Phys. 130, 084101 (2009)] and perfect hextuples (PH) [J. A. Parkhill and M. Head-Gordon, J. Chem. Phys. 133, 024103 (2010)] models. The methods are obtained as block decompositions of conventional coupled-cluster theory that are exact for four electrons in four orbitals (PQ) and six electrons in six orbitals (PH), but that can also be applied to much larger systems. PQ and PH have storage requirements that scale as the square, and as the cube of the number of active electrons, respectively, and exhibit quartic scaling of the computational effort for large systems. Applications of the new implementations are presented for full-valence calculations on linear polyenes (CnHn+2), which highlight the excellent computational scaling of the present implementations that can routinely handle active spaces of hundreds of electrons. The accuracy of the models is studied in the π space of the polyenes, in hydrogen chains (H50), and in the π space of polyacene molecules. In all cases, the results compare favorably to density matrix renormalization group values. With the novel implementation of PQ, active spaces of 140 electrons in 140 orbitals can be solved in a matter of minutes on a single core workstation, and the relatively low polynomial scaling means that very large systems are also accessible using parallel computing.

  6. Cost-effective description of strong correlation: Efficient implementations of the perfect quadruples and perfect hextuples models

    NASA Astrophysics Data System (ADS)

    Lehtola, Susi; Parkhill, John; Head-Gordon, Martin

    2016-10-01

    Novel implementations based on dense tensor storage are presented for the singlet-reference perfect quadruples (PQ) [J. A. Parkhill et al., J. Chem. Phys. 130, 084101 (2009)] and perfect hextuples (PH) [J. A. Parkhill and M. Head-Gordon, J. Chem. Phys. 133, 024103 (2010)] models. The methods are obtained as block decompositions of conventional coupled-cluster theory that are exact for four electrons in four orbitals (PQ) and six electrons in six orbitals (PH), but that can also be applied to much larger systems. PQ and PH have storage requirements that scale as the square, and as the cube of the number of active electrons, respectively, and exhibit quartic scaling of the computational effort for large systems. Applications of the new implementations are presented for full-valence calculations on linear polyenes (CnHn+2), which highlight the excellent computational scaling of the present implementations that can routinely handle active spaces of hundreds of electrons. The accuracy of the models is studied in the π space of the polyenes, in hydrogen chains (H50), and in the π space of polyacene molecules. In all cases, the results compare favorably to density matrix renormalization group values. With the novel implementation of PQ, active spaces of 140 electrons in 140 orbitals can be solved in a matter of minutes on a single core workstation, and the relatively low polynomial scaling means that very large systems are also accessible using parallel computing.

  7. Stochastic Convection Parameterizations: The Eddy-Diffusivity/Mass-Flux (EDMF) Approach (Invited)

    NASA Astrophysics Data System (ADS)

    Teixeira, J.

    2013-12-01

    In this presentation it is argued that moist convection parameterizations need to be stochastic in order to be realistic - even in deterministic atmospheric prediction systems. A new unified convection and boundary layer parameterization (EDMF) that optimally combines the Eddy-Diffusivity (ED) approach for smaller-scale boundary layer mixing with the Mass-Flux (MF) approach for larger-scale plumes is discussed. It is argued that for realistic simulations stochastic methods have to be employed in this new unified EDMF. Positive results from the implementation of the EDMF approach in atmospheric models are presented.
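
    As context for the EDMF decomposition mentioned above, the sketch below evaluates a turbulent flux as the sum of an eddy-diffusivity (local, down-gradient) term and a mass-flux (non-local plume) term. All profiles and coefficients are illustrative assumptions, not values from an operational scheme.

    ```python
    # EDMF flux sketch: total flux = eddy-diffusivity part + mass-flux part.
    import numpy as np

    z = np.linspace(0.0, 2000.0, 81)             # height (m)
    theta = 300.0 + 0.003 * z                    # mean potential temperature (K)
    K = 50.0 * np.exp(-z / 1000.0)               # assumed eddy diffusivity (m^2/s)
    M = 0.03 * np.exp(-z / 800.0)                # assumed kinematic plume mass flux (m/s)
    theta_up = theta + 0.5 * np.exp(-z / 500.0)  # assumed updraft excess over environment

    dtheta_dz = np.gradient(theta, z)
    flux_ED = -K * dtheta_dz                     # local, down-gradient transport
    flux_MF = M * (theta_up - theta)             # non-local plume transport
    flux_total = flux_ED + flux_MF
    ```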

  8. Design of a Minimum Surface-Effect Three Degree-of-Freedom Micromanipulator

    NASA Technical Reports Server (NTRS)

    Goldfarb, Michael; Speich, John E.

    1997-01-01

    This paper describes the fundamental physical motivations for small-scale minimum surface-effect design, and presents a three degree-of-freedom micromanipulator design that incorporates a minimum surface-effect approach. The primary focus of the design is the split-tube flexure, a unique small-scale revolute joint that exhibits a considerably larger range of motion and significantly better multi-axis revolute joint characteristics than a conventional flexure. The development of this joint enables the implementation of a small-scale spatially-loaded revolute joint-based manipulator with well-behaved kinematic characteristics and without the backlash and stick-slip behavior that would otherwise prevent precision control.

  9. Development of a superconductor magnetic suspension and balance prototype facility for studying the feasibility of applying this technique to large scale aerodynamic testing

    NASA Technical Reports Server (NTRS)

    Zapata, R. N.; Humphris, R. R.; Henderson, K. C.

    1975-01-01

    The basic research and development work towards proving the feasibility of operating an all-superconductor magnetic suspension and balance device for aerodynamic testing is presented. The feasibility of applying a quasi-six-degree-of freedom free support technique to dynamic stability research was studied along with the design concepts and parameters for applying magnetic suspension techniques to large-scale aerodynamic facilities. A prototype aerodynamic test facility was implemented. Relevant aspects of the development of the prototype facility are described in three sections: (1) design characteristics; (2) operational characteristics; and (3) scaling to larger facilities.

  10. Cost-effective description of strong correlation: Efficient implementations of the perfect quadruples and perfect hextuples models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lehtola, Susi; Parkhill, John; Head-Gordon, Martin

    Novel implementations based on dense tensor storage are presented here for the singlet-reference perfect quadruples (PQ) [J. A. Parkhill et al., J. Chem. Phys. 130, 084101 (2009)] and perfect hextuples (PH) [J. A. Parkhill and M. Head-Gordon, J. Chem. Phys. 133, 024103 (2010)] models. The methods are obtained as block decompositions of conventional coupled-cluster theory that are exact for four electrons in four orbitals (PQ) and six electrons in six orbitals (PH), but that can also be applied to much larger systems. PQ and PH have storage requirements that scale as the square, and as the cube of the number of active electrons, respectively, and exhibit quartic scaling of the computational effort for large systems. Applications of the new implementations are presented for full-valence calculations on linear polyenes (CnHn+2), which highlight the excellent computational scaling of the present implementations that can routinely handle active spaces of hundreds of electrons. The accuracy of the models is studied in the π space of the polyenes, in hydrogen chains (H50), and in the π space of polyacene molecules. In all cases, the results compare favorably to density matrix renormalization group values. With the novel implementation of PQ, active spaces of 140 electrons in 140 orbitals can be solved in a matter of minutes on a single core workstation, and the relatively low polynomial scaling means that very large systems are also accessible using parallel computing.

  11. Measuring Implementation Fidelity in a Community-Based Parenting Intervention

    PubMed Central

    Breitenstein, Susan M.; Fogg, Louis; Garvey, Christine; Hill, Carri; Resnick, Barbara; Gross, Deborah

    2012-01-01

    Background Establishing the feasibility and validity of implementation fidelity monitoring strategies is an important methodological step in implementing evidence-based interventions on a large scale. Objectives The objective of the study was to examine the reliability and validity of the Fidelity Checklist, a measure designed to assess group leader adherence and competence delivering a parent training intervention (the Chicago Parent Program) in child care centers serving low-income families. Method The sample included 9 parent groups (12 group sessions each), 12 group leaders, and 103 parents. Independent raters reviewed 106 audiotaped parent group sessions and coded group leaders’ fidelity on the Adherence and Competence Scales of the Fidelity Checklist. Group leaders completed self-report adherence checklists and a measure of parent engagement in the intervention. Parents completed measures of consumer satisfaction and child behavior. Results High interrater agreement (Adherence Scale = 94%, Competence Scale = 85%) and adequate intraclass correlation coefficients (Adherence Scale = .69, Competence Scale = .91) were achieved for the Fidelity Checklist. Group leader adherence changed over time, but competence remained stable. Agreement between group leader self-report and independent ratings on the Adherence Scale was 85%; disagreements were more frequently due to positive bias in group leader self-report. Positive correlations were found between group leader adherence and parent attendance and engagement in the intervention and between group leader competence and parent satisfaction. Although child behavior problems improved, improvements were not related to fidelity. Discussion The results suggest that the Fidelity Checklist is a feasible, reliable, and valid measure of group leader implementation fidelity in a group-based parenting intervention. Future research will be focused on testing the Fidelity Checklist with diverse and larger samples and generalizing to other group-based interventions using a similar intervention model. PMID:20404777

  12. A Hybrid Effectiveness-Implementation Trial of an Evidence-Based Exercise Intervention for Breast Cancer Survivors

    PubMed Central

    Beidas, Rinad S.; Paciotti, Breah; Barg, Fran; Branas, Andrea R.; Brown, Justin C.; Glanz, Karen; DeMichele, Angela; DiGiovanni, Laura; Salvatore, Domenick

    2014-01-01

    Background The primary aims of this hybrid Type 1 effectiveness-implementation trial were to quantitatively assess whether an evidence-based exercise intervention for breast cancer survivors, Strength After Breast Cancer, was safe and effective in a new setting and to qualitatively assess barriers to implementation. Methods A cohort of 84 survivors completed measurements related to limb volume, muscle strength, and body image at baseline; 67 survivors completed measurements 12 months later. Qualitative methods were used to understand barriers to implementation experienced by referring oncology clinicians and physical therapists who delivered the program. Results Similar to the efficacy trial, the revised intervention demonstrated safety with regard to lymphedema, and led to improvements in lymphedema symptoms, muscular strength, and body image. Comparison of effects in the effectiveness trial to effects in the efficacy trial revealed larger strength increases in the efficacy trial than in the effectiveness trial (P < .04), but few other differences were found. Qualitative implementation data suggested significant barriers around intervention characteristics, payment, eligibility criteria, the referral process, the need for champions (ie, advocates), and the need to adapt during implementation of the intervention, which should be considered in future dissemination and implementation efforts. Conclusions This trial successfully demonstrated that a physical therapy-led strength training program for breast cancer survivors can be implemented in a community setting while retaining the effectiveness and safety of the clinical trial. However, during the translation process, strategies to reduce barriers to implementation are required. This new program can inform larger scale dissemination and implementation efforts. PMID:25749601

  13. Using an Adaptive Expertise Lens to Understand the Quality of Teachers' Classroom Implementation of Computer-Supported Complex Systems Curricula in High School Science

    ERIC Educational Resources Information Center

    Yoon, Susan A.; Koehler-Yom, Jessica; Anderson, Emma; Lin, Joyce; Klopfer, Eric

    2015-01-01

    Background: This exploratory study is part of a larger-scale research project aimed at building theoretical and practical knowledge of complex systems in students and teachers with the goal of improving high school biology learning through professional development and a classroom intervention. Purpose: We propose a model of adaptive expertise to…

  14. Snow Micro-Structure Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Micah Johnson, Andrew Slaughter

    PIKA is a MOOSE-based application for modeling micro-structure evolution of seasonal snow. The model will be useful for environmental, atmospheric, and climate scientists. Possible applications include energy balance models, ice sheet modeling, and avalanche forecasting. The model implements physics from published, peer-reviewed articles. The main purpose is to foster university and laboratory collaboration to build a larger multi-scale snow model using MOOSE. The main feature of the code is that it is implemented using the MOOSE framework, thus making features such as multiphysics coupling, adaptive mesh refinement, and parallel scalability native to the application. PIKA implements three equations: the phase-field equation for tracking the evolution of the ice-air interface within seasonal snow at the grain-scale; the heat equation for computing the temperature of both the ice and air within the snow; and the mass transport equation for monitoring the diffusion of water vapor in the pore space of the snow.
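
    As a simple illustration of one of the three equations listed above, the sketch below advances a 1-D heat equation with an explicit finite-difference step. The grid, thermal diffusivity, and boundary temperatures are made-up values; PIKA itself solves the coupled system with MOOSE's finite-element machinery.

    ```python
    # Explicit finite-difference step for dT/dt = alpha * d2T/dx2 (illustrative only).
    import numpy as np

    nx, dx, dt = 101, 1e-3, 0.01             # 10 cm snow column discretized at 1 mm
    alpha = 2.0e-7                            # assumed thermal diffusivity (m^2/s)
    T = np.full(nx, 263.15)                   # initial temperature (K)
    T[0], T[-1] = 253.15, 273.15              # fixed boundary temperatures

    for _ in range(1000):
        lap = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2
        T[1:-1] += dt * alpha * lap           # march the interior points forward in time
    ```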

  15. A derivation and scalable implementation of the synchronous parallel kinetic Monte Carlo method for simulating long-time dynamics

    NASA Astrophysics Data System (ADS)

    Byun, Hye Suk; El-Naggar, Mohamed Y.; Kalia, Rajiv K.; Nakano, Aiichiro; Vashishta, Priya

    2017-10-01

    Kinetic Monte Carlo (KMC) simulations are used to study long-time dynamics of a wide variety of systems. Unfortunately, the conventional KMC algorithm is not scalable to larger systems, since its time scale is inversely proportional to the simulated system size. A promising approach to resolving this issue is the synchronous parallel KMC (SPKMC) algorithm, which makes the time scale size-independent. This paper introduces a formal derivation of the SPKMC algorithm based on local transition-state and time-dependent Hartree approximations, as well as its scalable parallel implementation based on a dual linked-list cell method. The resulting algorithm has achieved a weak-scaling parallel efficiency of 0.935 on 1024 Intel Xeon processors for simulating biological electron transfer dynamics in a 4.2 billion-heme system, as well as decent strong-scaling parallel efficiency. The parallel code has been used to simulate a lattice of cytochrome complexes on a bacterial-membrane nanowire, and it is broadly applicable to other problems such as computational synthesis of new materials.
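
    For readers unfamiliar with KMC, the sketch below implements a standard serial residence-time (Gillespie-type) step; the SPKMC algorithm in this record additionally partitions the system spatially and advances the sub-domains with a synchronized time step. The rate list here is an arbitrary placeholder.

    ```python
    # Serial KMC (residence-time algorithm) sketch with placeholder rates.
    import math
    import random

    def kmc_step(rates):
        """Pick one event with probability proportional to its rate and
        return (event_index, time_increment)."""
        total = sum(rates)
        r = random.uniform(0.0, total)
        acc = 0.0
        for i, k in enumerate(rates):
            acc += k
            if r <= acc:
                break
        dt = -math.log(1.0 - random.random()) / total   # exponential waiting time
        return i, dt

    t, events = 0.0, []
    rates = [0.5, 1.2, 0.1, 3.0]                        # hypothetical transition rates
    for _ in range(1000):
        i, dt = kmc_step(rates)
        t += dt
        events.append(i)
    ```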

  16. Multiscale Approach to Small River Plumes off California

    NASA Astrophysics Data System (ADS)

    Basdurak, N. B.; Largier, J. L.; Nidzieko, N.

    2012-12-01

    While larger scale plumes have received significant attention, the dynamics of plumes associated with small rivers typical of California are little studied. Since small streams are not dominated by a momentum flux, their plumes are more susceptible to conditions in the coastal ocean such as wind and waves. In order to correctly model water transport at smaller scales, there is a need to capture larger scale processes. To do this, one-way nested grids with varying grid resolution (1 km and 10 m for the parent and the child grid respectively) were constructed. CENCOOS (Central and Northern California Ocean Observing System) model results were used as boundary conditions to the parent grid. Semi-idealized model results for Santa Rosa Creek, California are presented from an implementation of the Regional Ocean Modeling System (ROMS v3.0), a three-dimensional, free-surface, terrain-following numerical model. In these preliminary results, the interaction between tides, winds, and buoyancy forcing in plume dynamics is explored for scenarios including different strengths of freshwater flow with different modes (steady and pulsed). Seasonal changes in transport dynamics and dispersion patterns are analyzed.

  17. Training Research: Practical Recommendations for Maximum Impact

    PubMed Central

    Beidas, Rinad S.; Koerner, Kelly; Weingardt, Kenneth R.; Kendall, Philip C.

    2011-01-01

    This review offers practical recommendations regarding research on training in evidence-based practices for mental health and substance abuse treatment. When designing training research, we recommend: (a) aligning with the larger dissemination and implementation literature to consider contextual variables and clearly defining terminology, (b) critically examining the implicit assumptions underlying the stage model of psychotherapy development, (c) incorporating research methods from other disciplines that embrace the principles of formative evaluation and iterative review, and (d) thinking about how technology can be used to take training to scale throughout all stages of a training research project. An example demonstrates the implementation of these recommendations. PMID:21380792

  18. Large-scale virtual screening on public cloud resources with Apache Spark.

    PubMed

    Capuccini, Marco; Ahmed, Laeeq; Schaal, Wesley; Laure, Erwin; Spjuth, Ola

    2017-01-01

    Structure-based virtual screening is an in-silico method to screen a target receptor against a virtual molecular library. Applying docking-based screening to large molecular libraries can be computationally expensive; however, it constitutes a trivially parallelizable task. Most of the available parallel implementations are based on message passing interface, relying on low failure rate hardware and fast network connection. Google's MapReduce revolutionized large-scale analysis, enabling the processing of massive datasets on commodity hardware and cloud resources, providing transparent scalability and fault tolerance at the software level. Open source implementations of MapReduce include Apache Hadoop and the more recent Apache Spark. We developed a method to run existing docking-based screening software on distributed cloud resources, utilizing the MapReduce approach. We benchmarked our method, which is implemented in Apache Spark, docking a publicly available target receptor against approximately 2.2 M compounds. The performance experiments show a good parallel efficiency (87%) when running in a public cloud environment. Our method enables parallel structure-based virtual screening on public cloud resources or commodity computer clusters. The degree of scalability that we achieve allows for trying out our method on relatively small libraries first and then to scale to larger libraries. Our implementation is named Spark-VS and it is freely available as open source from GitHub (https://github.com/mcapuccini/spark-vs).
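
    A minimal PySpark sketch of the map-style parallelization described here, with a placeholder scoring function and a hypothetical library path; the actual Spark-VS pipeline (linked above) differs in detail.

    ```python
    # Map a (placeholder) docking function over a compound library with Spark.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("docking-screen").getOrCreate()
    sc = spark.sparkContext

    def run_docking(smiles):
        """Placeholder scoring function; a real pipeline would invoke a docking
        engine (e.g., via subprocess) against the target receptor."""
        return (smiles, float(len(smiles) % 13))          # dummy score

    # Each line of the library file is one compound (e.g., a SMILES string).
    library = sc.textFile("s3://bucket/compound_library.smi")   # hypothetical path
    scores = library.map(run_docking)                           # map: dock each compound
    top_hits = scores.takeOrdered(100, key=lambda kv: -kv[1])   # reduce: keep best 100
    ```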

  19. The Grand Challenge of Scale in Scientific Hydrology: Some Personal Reflections

    NASA Astrophysics Data System (ADS)

    Gupta, V. K.

    2009-12-01

    Scale issues in hydrology have shaped my entire scientific career. I first recognized the challenge of scale during the 1970s in linking multi-scale hydrologic processes through collaborative work on solute transport in saturated porous media. Linking geometry, dynamics and statistics, and the role of diagnostics in testing theoretical predictions against experimental observations, played a foundational role. This foundation has guided the rest of my multi-scale research on larger space-time scales of river basins, regional, and global. After the blue book was published in 1991, NSF needed a futuristic implementation plan for the blue book, but did not communicate it to Pete. I came to know of it in 1998 after six years of pursuing an ‘open-ended agenda’ in which Doug played a key role. The upper management of the Geosciences Directorate first mentioned to me in 1998 that the blue book needed a broad and futuristic implementation plan. It led to the Water, Earth, and Biota (WEB) report in 2000 following an NSF-funded workshop in 1999. The multi-scale nature of hydrology served as the central organizing theme for the WEB report. The history from 1984 to 2001 is summarized on the CUAHSI web page under “history”, so I will only share a few personal reflections from this period. Where do we go from here? My perspective is that an urgent need exists to modernize hydrology curriculum that should include the progress that has been made in addressing multi-scale challenges. I will share some personal reflections, both intellectual and administrative, from my experiences in implementing a graduate hydrology science program at the University of Colorado after joining it in 1989.

  20. Simulating Forest Carbon Dynamics in Response to Large-scale Fuel Reduction Treatments Under Projected Climate-fire Interactions in the Sierra Nevada Mountains, USA

    NASA Astrophysics Data System (ADS)

    Liang, S.; Hurteau, M. D.

    2016-12-01

    The interaction of warmer, drier climate and increasing large wildfires, coupled with increasing fire severity resulting from fire exclusion, is anticipated to undermine forest carbon (C) stock stability and C sink strength in the Sierra Nevada forests. Treatments, including thinning and prescribed burning, to reduce biomass and restore forest structure have proven effective at reducing fire severity and lessening C loss when treated stands are burned by wildfire. However, the current pace and scale of treatment implementation is limited, especially given recent increases in area burned by wildfire. In this study, we used a forest landscape model (LANDIS-II) to evaluate the role of implementation timing of large-scale fuel reduction treatments in influencing forest C stocks and fluxes of Sierra Nevada forests under projected climate and larger wildfires. We ran 90-year simulations using climate and wildfire projections from three general circulation models driven by the A2 emission scenario. We simulated two different treatment implementation scenarios: a 'distributed' scenario (treatments implemented throughout the simulation) and an 'accelerated' scenario (treatments implemented during the first half century). We found that across the study area, accelerated implementation had 0.6-10.4 Mg ha-1 higher late-century aboveground biomass (AGB) and 1.0-2.2 g C m-2 yr-1 higher mean C sink strength than the distributed scenario, depending on the specific climate-wildfire projection. Cumulative wildfire emissions over the simulation period were 0.7-3.9 Mg C ha-1 higher for distributed implementation relative to accelerated implementation. However, simulations with both implementation practices have considerably higher AGB and C sink strength, as well as lower wildfire emissions, than simulations without fuel reduction treatments. The results demonstrate the potential for implementing large-scale fuel reduction treatments to enhance forest C stock stability and C sink strength under projected climate-wildfire interactions. Given that climate and wildfire are projected to become more stressful after mid-century, earlier management action would yield greater C benefits.

  1. Cost of Community Integrated Prevention Campaign for Malaria, HIV, and Diarrhea in Rural Kenya

    PubMed Central

    2011-01-01

    Background Delivery of community-based prevention services for HIV, malaria, and diarrhea is a major priority and challenge in rural Africa. Integrated delivery campaigns may offer a mechanism to achieve high coverage and efficiency. Methods We quantified the resources and costs to implement a large-scale integrated prevention campaign in Lurambi Division, Western Province, Kenya that reached 47,133 individuals (and 83% of eligible adults) in 7 days. The campaign provided HIV testing, condoms, and prevention education materials; a long-lasting insecticide-treated bed net; and a water filter. Data were obtained primarily from logistical and expenditure data maintained by implementing partners. We estimated the projected cost of a Scaled-Up Replication (SUR), assuming reliance on local managers, potential efficiencies of scale, and other adjustments. Results The cost per person served was $41.66 for the initial campaign and was projected at $31.98 for the SUR. The SUR cost included 67% for commodities (mainly water filters and bed nets) and 20% for personnel. The SUR projected unit cost per person served, by disease, was $6.27 for malaria (nets and training), $15.80 for diarrhea (filters and training), and $9.91 for HIV (test kits, counseling, condoms, and CD4 testing at each site). Conclusions A large-scale, rapidly implemented, integrated health campaign provided services to 80% of a rural Kenyan population with relatively low cost. Scaling up this design may provide similar services to larger populations at lower cost per person. PMID:22189090
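
    A quick arithmetic check of the unit costs quoted above (the per-disease SUR figures sum to the projected cost per person served):

    ```python
    # Cross-check the per-person cost figures reported in this record.
    per_disease = {"malaria": 6.27, "diarrhea": 15.80, "HIV": 9.91}
    print(round(sum(per_disease.values()), 2))   # 31.98, the projected SUR cost per person

    # Total cost implied by the initial campaign: $41.66/person for 47,133 people.
    print(round(41.66 * 47133, 2))               # about 1.96 million USD
    ```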

  2. ED(MF)n: Humidity-Convection Feedbacks in a Mass Flux Scheme Based on Resolved Size Densities

    NASA Astrophysics Data System (ADS)

    Neggers, R.

    2014-12-01

    Cumulus cloud populations remain at least partially unresolved in present-day numerical simulations of global weather and climate, and accordingly their impact on the larger-scale flow has to be represented through parameterization. Various methods have been developed over the years, ranging in complexity from the early bulk models relying on a single plume to more recent approaches that attempt to reconstruct the underlying probability density functions, such as statistical schemes and multiple plume approaches. Most of these "classic" methods capture key aspects of cumulus cloud populations, and have been successfully implemented in operational weather and climate models. However, the ever finer discretizations of operational circulation models, driven by advances in the computational efficiency of supercomputers, is creating new problems for existing sub-grid schemes. Ideally, a sub-grid scheme should automatically adapt its impact on the resolved scales to the dimension of the grid-box within which it is supposed to act. It can be argued that this is only possible when i) the scheme is aware of the range of scales of the processes it represents, and ii) it can distinguish between contributions as a function of size. How to conceptually represent this knowledge of scale in existing parameterization schemes remains an open question that is actively researched. This study considers a relatively new class of models for sub-grid transport in which ideas from the field of population dynamics are merged with the concept of multi plume modelling. More precisely, a multiple mass flux framework for moist convective transport is formulated in which the ensemble of plumes is created in "size-space". It is argued that thus resolving the underlying size-densities creates opportunities for introducing scale-awareness and scale-adaptivity in the scheme. The behavior of an implementation of this framework in the Eddy Diffusivity Mass Flux (EDMF) model, named ED(MF)n, is examined for a standard case of subtropical marine shallow cumulus. We ask if a system of multiple independently resolved plumes is able to automatically create the vertical profile of bulk (mass) flux at which the sub-grid scale transport balances the imposed larger-scale forcings in the cloud layer.
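
    A rough sketch of the "resolved size density" idea: discretize plume size space into bins, assign each bin an assumed number density, fractional area, and updraft velocity, and sum the contributions into a bulk flux. Every number below is an illustrative assumption, not a value from the ED(MF)n scheme.

    ```python
    # Bulk mass flux assembled from an ensemble of plumes resolved in size space.
    import numpy as np

    sizes = np.linspace(100.0, 2000.0, 20)             # plume diameters (m)
    number_density = sizes ** -1.7                       # assumed power-law size density
    number_density /= number_density.sum()

    w_up = 1.0 + 0.5 * np.log(sizes / sizes[0] + 1.0)   # assume larger plumes rise faster
    area_frac = 0.05 * number_density * (sizes / sizes.mean()) ** 2

    bulk_mass_flux = np.sum(area_frac * w_up)            # sum over the resolved ensemble
    ```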

  3. Approaches to the implementation of the Water Framework Directive: targeting mitigation measures at critical source areas of diffuse phosphorus in Irish catchments.

    PubMed

    Doody, D G; Archbold, M; Foy, R H; Flynn, R

    2012-01-01

    The Water Framework Directive (WFD) has initiated a shift towards a targeted approach to implementation through its focus on river basin districts as management units and the natural ecological characteristics of waterbodies. Due to its role in eutrophication, phosphorus (P) has received considerable attention, resulting in a significant body of research, which now forms the evidence base for the programme of measures (POMs) adopted in WFD River Basin Management Plans (RBMP). Targeting POMs at critical source areas (CSAs) of P could significantly improve the environmental efficiency and cost effectiveness of proposed mitigation strategies. This paper summarises the progress made towards targeting mitigation measures at CSAs in Irish catchments. A review of current research highlights that knowledge related to P export at field scale is relatively comprehensive; however, the availability of site-specific data and tools limits widespread identification of CSAs at this scale. Increasing complexity of hydrological processes at larger scales limits accurate identification of CSAs at catchment scale. Implementation of a tiered approach, using catchment scale tools in conjunction with field-by-field surveys, could decrease uncertainty and provide a more practical and cost effective method of delineating CSAs in a range of catchments. Despite scientific and practical uncertainties, development of a tiered CSA-based approach to assist in the development of supplementary measures would provide a means of developing catchment-specific and cost-effective programmes of measures for diffuse P. The paper presents a conceptual framework for such an approach, which would have particular relevance for the development of supplementary measures in High Status Waterbodies (HSW). The cost and resources necessary for implementation are justified based on HSWs' value as undisturbed reference-condition ecosystems. Copyright © 2011 Elsevier Ltd. All rights reserved.

  4. Lessons Learned from Managing a Petabyte

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Becla, J

    2005-01-20

    The amount of data collected and stored by the average business doubles each year. Many commercial databases are already approaching hundreds of terabytes, and at this rate, will soon be managing petabytes. More data enables new functionality and capability, but the larger scale reveals new problems and issues hidden in "smaller" terascale environments. This paper presents some of these new problems along with implemented solutions in the framework of a petabyte dataset for a large High Energy Physics experiment. Through experience with two persistence technologies, a commercial database and a file-based approach, we expose format-independent concepts and issues prevalent at this new scale of computing.

  5. Assessing the impact of continuous quality improvement/total quality management: concept versus implementation.

    PubMed Central

    Shortell, S M; O'Brien, J L; Carman, J M; Foster, R W; Hughes, E F; Boerstler, H; O'Connor, E J

    1995-01-01

    OBJECTIVE: This study examines the relationships among organizational culture, quality improvement processes and selected outcomes for a sample of up to 61 U. S. hospitals. DATA SOURCES AND STUDY SETTING: Primary data were collected from 61 U. S. hospitals (located primarily in the midwest and the west) on measures related to continuous quality improvement/total quality management (CQI/TQM), organizational culture, implementation approaches, and degree of quality improvement implementation based on the Baldrige Award criteria. These data were combined with independently collected data on perceived impact and objective measures of clinical efficiency (i.e., charges and length of stay) for six clinical conditions. STUDY DESIGN: The study involved cross-sectional examination of the named relationships. DATA COLLECTION/EXTRACTION METHODS: Reliable and valid scales for the organizational culture and quality improvement implementation measures were developed based on responses from over 7,000 individuals across the 61 hospitals with an overall completion rate of 72 percent. Independent data on perceived impact were collected from a national survey and independent data on clinical efficiency from a companion study of managed care. PRINCIPAL FINDINGS: A participative, flexible, risk-taking organizational culture was significantly related to quality improvement implementation. Quality improvement implementation, in turn, was positively associated with greater perceived patient outcomes and human resource development. Larger-size hospitals experienced lower clinical efficiency with regard to higher charges and higher length of stay, due in part to having more bureaucratic and hierarchical cultures that serve as a barrier to quality improvement implementation. CONCLUSIONS: What really matters is whether or not a hospital has a culture that supports quality improvement work and an approach that encourages flexible implementation. Larger-size hospitals face more difficult challenges in this regard. PMID:7782222

  6. Assessing the impact of continuous quality improvement/total quality management: concept versus implementation.

    PubMed

    Shortell, S M; O'Brien, J L; Carman, J M; Foster, R W; Hughes, E F; Boerstler, H; O'Connor, E J

    1995-06-01

    This study examines the relationships among organizational culture, quality improvement processes and selected outcomes for a sample of up to 61 U. S. hospitals. Primary data were collected from 61 U. S. hospitals (located primarily in the midwest and the west) on measures related to continuous quality improvement/total quality management (CQI/TQM), organizational culture, implementation approaches, and degree of quality improvement implementation based on the Baldrige Award criteria. These data were combined with independently collected data on perceived impact and objective measures of clinical efficiency (i.e., charges and length of stay) for six clinical conditions. The study involved cross-sectional examination of the named relationships. Reliable and valid scales for the organizational culture and quality improvement implementation measures were developed based on responses from over 7,000 individuals across the 61 hospitals with an overall completion rate of 72 percent. Independent data on perceived impact were collected from a national survey and independent data on clinical efficiency from a companion study of managed care. A participative, flexible, risk-taking organizational culture was significantly related to quality improvement implementation. Quality improvement implementation, in turn, was positively associated with greater perceived patient outcomes and human resource development. Larger-size hospitals experienced lower clinical efficiency with regard to higher charges and higher length of stay, due in part to having more bureaucratic and hierarchical cultures that serve as a barrier to quality improvement implementation. What really matters is whether or not a hospital has a culture that supports quality improvement work and an approach that encourages flexible implementation. Larger-size hospitals face more difficult challenges in this regard.

  7. Implementation of a 3D version of ponderomotive guiding center solver in particle-in-cell code OSIRIS

    NASA Astrophysics Data System (ADS)

    Helm, Anton; Vieira, Jorge; Silva, Luis; Fonseca, Ricardo

    2016-10-01

    Laser-driven accelerators have gained increased attention over the past decades. Typical modeling techniques for laser wakefield acceleration (LWFA) are based on particle-in-cell (PIC) simulations. PIC simulations, however, are very computationally expensive due to the disparity of the relevant scales ranging from the laser wavelength, in the micrometer range, to the acceleration length, currently beyond the ten centimeter range. To minimize the gap between these disparate scales, the ponderomotive guiding center (PGC) algorithm is a promising approach. By describing the evolution of the laser pulse envelope separately, only the scales larger than the plasma wavelength are required to be resolved in the PGC algorithm, leading to speedups of several orders of magnitude. Previous work was limited to two dimensions. Here we present the implementation of the 3D version of a PGC solver into the massively parallel, fully relativistic PIC code OSIRIS. We extended the solver to include periodic boundary conditions and parallelization in all spatial dimensions. We present benchmarks for distributed and shared memory parallelization. We also discuss the stability of the PGC solver.
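
    Written schematically, the averaged (ponderomotive) force that the envelope exerts on the macro-particles in guiding-center descriptions takes the textbook form below, in terms of the normalized laser vector potential a; this is the generic expression, not necessarily the exact discretization used in OSIRIS:

      \mathbf{F}_p \simeq -\,\frac{m_e c^2}{2\gamma}\,\nabla \langle a^2 \rangle ,
      \qquad
      \gamma = \sqrt{\,1 + \frac{|\mathbf{p}|^2}{m_e^2 c^2} + \frac{\langle a^2 \rangle}{2}\,}

    Because only the envelope quantity ⟨a²⟩, which varies on the plasma-wavelength scale, enters the particle push, the laser wavelength itself no longer has to be resolved.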

  8. Geostatistical analysis of 3D microCT images of porous media for stochastic upscaling of spatially variable reactive surfaces

    NASA Astrophysics Data System (ADS)

    De Lucia, Marco; Kühn, Michael

    2015-04-01

    The 3D imaging of porous media through micro tomography allows the characterization of porous space and mineral abundances with unprecedented resolution. Such images can be used to perform computational determination of permeability and to obtain a realistic measure of the mineral surfaces exposed to fluid flow and thus to chemical interactions. However, the volume of the plugs that can be analysed with such detail is on the order of 1 cm³, so that their representativity at a larger scale, i.e. as needed for reactive transport modelling at Darcy scale, is questionable at best. In fact, the fine scale heterogeneity (from plug to plug at a few cm distance within the same core) would originate substantially different readings of the investigated properties. Therefore, a comprehensive approach including the spatial variability and heterogeneity at the micro- and plug scale needs to be adopted to gain full advantage from the high resolution images in view of the upscaling to Darcy scale. In the framework of the collaborative project H2STORE, micro-CT imaging of different core samples from potential H2-storage sites has been performed by partners at TU Clausthal and Jena University before and after treatment with H2/CO2 mixtures in pressurized autoclaves. We present here the workflow which has been implemented to extract the relevant features from the available data concerning the heterogeneity of the medium at the microscopic and plug scale and to correlate the observed chemical reactions and changes in the porous structure with the geometrical features of the medium. First, a multivariate indicator-based geostatistical model for the microscopic structure of the plugs has been built and fitted to the available images. This involved the implementation of exploratory analysis algorithms such as experimental indicator variograms and cross-variograms. The implemented methods are able to efficiently deal with images on the order of 1000³ voxels, making use of parallelization. Sequential Indicator Simulations are then employed to generate equi-probable realizations of microscopic structures with varying mineral proportions and porosity but constrained to the spatial variability observed in the plugs. The statistics computed on the ensemble of realizations (essentially the distribution of mineral reactive surfaces exposed to porous space) are integrated at a larger, Darcy scale. In a further step, the analysis of the microscopic changes in the plugs after exposure to reactive solution establishes the correlations between the amount of chemical reactions and changes in the spatial models, thus deriving some effective correlations which can be injected into the reactive transport modelling. In this contribution, we demonstrate the implemented workflow on a series of images obtained from plugs from a German depleted gas field exposed to H2 and CO2-charged brines. The geostatistical evaluation of microscale variability of the porous media contributes to the upscaling of relevant variables and helps estimate - if not reduce - the uncertainty due to the heterogeneity across scales of the natural systems.
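
    A minimal sketch of the experimental indicator variogram along one lattice direction of a segmented 3D image, assuming the image is stored as a numpy array of integer phase codes (the production workflow additionally computes cross-variograms and is parallelized):

      import numpy as np

      def indicator_variogram(labels, phase, lags, axis=0):
          """gamma_I(h) = 1/(2 N(h)) * sum [ I(x+h) - I(x) ]^2 for one phase code."""
          ind = (labels == phase).astype(np.float64)        # indicator transform of the image
          gamma = []
          for h in lags:
              a = np.take(ind, range(0, ind.shape[axis] - h), axis=axis)
              b = np.take(ind, range(h, ind.shape[axis]), axis=axis)
              gamma.append(0.5 * np.mean((b - a) ** 2))     # experimental semivariance at lag h
          return np.array(gamma)

      # toy uncorrelated two-phase image of 100^3 voxels (real plugs are spatially correlated)
      rng = np.random.default_rng(0)
      img = (rng.random((100, 100, 100)) < 0.3).astype(int)
      print(indicator_variogram(img, phase=1, lags=[1, 2, 4, 8]))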

  9. Systematic Planning of Adaptation Options for Pluvial Flood Resilience

    NASA Astrophysics Data System (ADS)

    Babovic, Filip; Mijic, Ana; Madani, Kaveh

    2016-04-01

    Different elements of infrastructure and the built environment vary in their ability to quickly adapt to changing circumstances. Furthermore, many of the slowest, and often largest infrastructure adaptations, offer the greatest improvements to system performance. In the context of de-carbonation of individual buildings Brand (1995) identified six potential layers of adaptation based on their renewal times ranging from daily to multi-decadal time scales. Similar layers exist in urban areas with regards to Water Sensitive Urban Design (WSUD) and pluvial flood risk. These layers range from appliances within buildings to changes in the larger urban form. Changes in low-level elements can be quickly implemented, but are limited in effectiveness, while larger interventions occur at a much slower pace but offer greater benefits as a part of systemic change. In the context of urban adaptation this multi-layered approach provides information on how to order urban adaptations. This information helps to identify potential pathways by prioritising relatively quick adaptations to be implemented in the short term while identifying options which require more long term planning with respect to both uncertainty and flexibility. This information is particularly critical in the evolution towards more resilient and water sensitive cities (Brown, 2009). Several potential adaptation options were identified ranging from small to large-scale adaptations. The time needed for the adaptation to be implemented was estimated and curves representing the added drainage capacity per year were established. The total drainage capacity added by each option was then established. This methodology was utilised on a case study in the Cranbrook Catchment in the North East of London. This information was able to provide insight on how to best renew or extend the life of critical ageing infrastructure.

  10. eHealth in Switzerland - building consensus, awareness and architecture.

    PubMed

    Lovis, Christian; Looser, Hansjorg; Schmid, Adrian; Wagner, Judith; Wyss, Stefan

    2011-01-01

    This paper reports on the process of the Swiss national strategy to define and implement eHealth. Switzerland is a federal political organization with 26 cantons that are autonomous with respect to their health legal frameworks. Switzerland must also provide support for four national languages. Thus, this experience addresses many challenges that are experienced at the European level at a much larger scale. Also, Switzerland benefits from the major projects ongoing in Europe, such as epSOS, to define its own strategy.

  11. Segmenting healthcare terminology users: a strategic approach to large scale evolutionary development.

    PubMed

    Price, C; Briggs, K; Brown, P J

    1999-01-01

    Healthcare terminologies have become larger and more complex, aiming to support a diverse range of functions across the whole spectrum of healthcare activity. Prioritization of development, implementation and evaluation can be achieved by regarding the "terminology" as an integrated system of content-based and functional components. Matching these components to target segments within the healthcare community, supports a strategic approach to evolutionary development and provides essential product differentiation to enable terminology providers and systems suppliers to focus on end-user requirements.

  12. Honeycomb: Visual Analysis of Large Scale Social Networks

    NASA Astrophysics Data System (ADS)

    van Ham, Frank; Schulz, Hans-Jörg; Dimicco, Joan M.

    The rise in the use of social network sites allows us to collect large amounts of user-reported data on social structures, and analysis of these data could provide useful insights for many of the social sciences. This analysis is typically the domain of Social Network Analysis, and visualization of these structures often proves invaluable in understanding them. However, currently available visual analysis tools are not very well suited to handle the massive scale of this network data, and often resort to displaying small ego networks or heavily abstracted networks. In this paper, we present Honeycomb, a visualization tool that is able to deal with much larger-scale data (with millions of connections), which we illustrate by using a large-scale corporate social networking site as an example. Additionally, we introduce a new probability-based network metric to guide users to potentially interesting or anomalous patterns and discuss lessons learned during design and implementation.

  13. Telepathology Impacts and Implementation Challenges: A Scoping Review.

    PubMed

    Meyer, Julien; Paré, Guy

    2015-12-01

    Telepathology is a particular form of telemedicine that fundamentally alters the way pathology services are delivered. Prior reviews in this area have mostly focused on 2 themes, namely technical feasibility issues and diagnosis accuracy. To synthesize the literature on telepathology implementation challenges and broader organizational and societal impacts and to propose a research agenda to guide future efforts in this domain. Two complementary databases were systematically searched: MEDLINE (PubMed) and ABI/INFORM (ProQuest). Peer-reviewed articles and conference proceedings were considered. The final sample consisted of 159 papers published between 1992 and 2013. This review highlights the diversity of telepathology networks and the importance of considering these distinctions when interpreting research findings. Various network structures are associated with different benefits. Although the dominant rationale in single-site projects is financial, larger centralized and decentralized telepathology networks are targeting a more diverse set of benefits, including extending access to pathology to a whole region, achieving substantial economies of scale in workforce and equipment, and improving quality by standardizing care. Importantly, our synthesis reveals that the nature and scale of encountered implementation challenges also varies depending on the network structure. In smaller telepathology networks, organizational concerns are less prominent, and implementers are more focused on usability issues. As the network scope widens, organizational and legal issues gain prominence.

  14. Supporting shared decision making beyond consumer-prescriber interactions: Initial development of the CommonGround fidelity scale

    PubMed Central

    Fukui, Sadaaki; Salyers, Michelle P.; Rapp, Charlie; Goscha, Rick; Young, Leslie; Mabry, Ally

    2015-01-01

    Shared decision-making has become a central tenet of recovery-oriented, person-centered mental health care, yet the practice is not always transferred to the routine psychiatric visit. Supporting the practice at the system level, beyond the interactions of consumers and medication prescribers, is needed for successful adoption of shared decision-making. CommonGround is a systemic approach, intended to be part of a larger integration of shared decision-making tools and practices at the system level. We discuss the organizational components that CommonGround uses to facilitate shared decision-making, and we present a fidelity scale to assess how well the system is being implemented. PMID:28090194

  15. Computational modeling of electrostatic charge and fields produced by hypervelocity impact

    DOE PAGES

    Crawford, David A.

    2015-05-19

    Following prior experimental evidence of electrostatic charge separation, electric and magnetic fields produced by hypervelocity impact, we have developed a model of electrostatic charge separation based on plasma sheath theory and implemented it into the CTH shock physics code. Preliminary assessment of the model shows good qualitative and quantitative agreement between the model and prior experiments at least in the hypervelocity regime for the porous carbonate material tested. The model agrees with the scaling analysis of experimental data performed in the prior work, suggesting that electric charge separation and the resulting electric and magnetic fields can be a substantial effect at larger scales, higher impact velocities, or both.

  16. Impact of SCALE-UP on science teaching self-efficacy of students in general education science courses

    NASA Astrophysics Data System (ADS)

    Cassani, Mary Kay Kuhr

    The objective of this study was to evaluate the effect of two pedagogical models used in general education science on non-majors' science teaching self-efficacy. Science teaching self-efficacy can be influenced by inquiry and cooperative learning, through cognitive mechanisms described by Bandura (1997). The Student Centered Activities for Large Enrollment Undergraduate Programs (SCALE-UP) model of inquiry and cooperative learning incorporates cooperative learning and inquiry-guided learning in large enrollment combined lecture-laboratory classes (Oliver-Hoyo & Beichner, 2004). SCALE-UP was adopted by a small but rapidly growing public university in the southeastern United States in three undergraduate, general education science courses for non-science majors in the Fall 2006 and Spring 2007 semesters. Students in these courses were compared with students in three other general education science courses for non-science majors taught with the standard teaching model at the host university. The standard model combines lecture and laboratory in the same course, with smaller enrollments and utilizes cooperative learning. Science teaching self-efficacy was measured using the Science Teaching Efficacy Belief Instrument - B (STEBI-B; Bleicher, 2004). A science teaching self-efficacy score was computed from the Personal Science Teaching Efficacy (PTSE) factor of the instrument. Using non-parametric statistics, no significant difference was found between teaching models, between genders, within models, among instructors, or among courses. The number of previous science courses was significantly correlated with PTSE score. Student responses to open-ended questions indicated that students felt the larger enrollment in the SCALE-UP room reduced individual teacher attention but that the large round SCALE-UP tables promoted group interaction. Students responded positively to cooperative and hands-on activities, and would encourage inclusion of more such activities in all of the courses. The large enrollment SCALE-UP model as implemented at the host university did not increase science teaching self-efficacy of non-science majors, as hypothesized. This was likely due to limited modification of standard cooperative activities according to the inquiry-guided SCALE-UP model. It was also found that larger SCALE-UP enrollments did not decrease science teaching self-efficacy when standard cooperative activities were used in the larger class.

  17. Tensor hypercontracted ppRPA: Reducing the cost of the particle-particle random phase approximation from O(r⁶) to O(r⁴)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shenvi, Neil; Yang, Yang; Yang, Weitao

    In recent years, interest in the random-phase approximation (RPA) has grown rapidly. At the same time, tensor hypercontraction has emerged as an intriguing method to reduce the computational cost of electronic structure algorithms. In this paper, we combine the particle-particle random phase approximation with tensor hypercontraction to produce the tensor-hypercontracted particle-particle RPA (THC-ppRPA) algorithm. Unlike previous implementations of ppRPA which scale as O(r⁶), the THC-ppRPA algorithm scales asymptotically as only O(r⁴), albeit with a much larger prefactor than the traditional algorithm. We apply THC-ppRPA to several model systems and show that it yields the same results as traditional ppRPA to within mH accuracy. Our method opens the door to the development of post-Kohn Sham functionals based on ppRPA without the excessive asymptotic cost of traditional ppRPA implementations.

  18. Tensor hypercontracted ppRPA: Reducing the cost of the particle-particle random phase approximation from O(r⁶) to O(r⁴)

    NASA Astrophysics Data System (ADS)

    Shenvi, Neil; van Aggelen, Helen; Yang, Yang; Yang, Weitao

    2014-07-01

    In recent years, interest in the random-phase approximation (RPA) has grown rapidly. At the same time, tensor hypercontraction has emerged as an intriguing method to reduce the computational cost of electronic structure algorithms. In this paper, we combine the particle-particle random phase approximation with tensor hypercontraction to produce the tensor-hypercontracted particle-particle RPA (THC-ppRPA) algorithm. Unlike previous implementations of ppRPA which scale as O(r⁶), the THC-ppRPA algorithm scales asymptotically as only O(r⁴), albeit with a much larger prefactor than the traditional algorithm. We apply THC-ppRPA to several model systems and show that it yields the same results as traditional ppRPA to within mH accuracy. Our method opens the door to the development of post-Kohn Sham functionals based on ppRPA without the excessive asymptotic cost of traditional ppRPA implementations.
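
    The reduced scaling rests on the generic tensor-hypercontraction factorization of the two-electron integrals, shown here in its standard form (the working ppRPA equations of the paper build on this factorization but are not reproduced here):

      (pq|rs) \;\approx\; \sum_{P,Q} X_p^{P}\, X_q^{P}\; Z^{PQ}\; X_r^{Q}\, X_s^{Q}

    Because the four-index integral tensor never has to be formed explicitly, contractions that would otherwise scale as O(r⁶) can be reorganized into sequences of O(r⁴) matrix operations.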

  19. Lay Social Resources for Support of Adherence to Antiretroviral Prophylaxis for HIV Prevention Among Serodiscordant Couples in sub-Saharan Africa: A Qualitative Study.

    PubMed

    Ware, Norma C; Pisarski, Emily E; Haberer, Jessica E; Wyatt, Monique A; Tumwesigye, Elioda; Baeten, Jared M; Celum, Connie L; Bangsberg, David R

    2015-05-01

    Effectiveness of antiretroviral pre-exposure prophylaxis (PrEP) for HIV prevention will require high adherence. Using qualitative data, this paper identifies potential lay social resources for support of PrEP adherence by HIV serodiscordant couples in Uganda, laying the groundwork for incorporation of these resources into adherence support initiatives as part of implementation. The qualitative analysis characterizes support for PrEP adherence provided by HIV-infected spouses, children, extended family members, and the larger community. Results suggest social resources for support of PrEP adherence in Africa are plentiful outside formal health care settings and health systems and that couples will readily use them. The same shortage of health professionals that impeded scale-up of antiretroviral treatment for HIV/AIDS in Africa promises to challenge delivery of PrEP. Building on the treatment scale-up experience, implementers can address this challenge by examining the value of lay social resources for adherence support in developing strategies for delivery of PrEP.

  20. What work has to be done to implement collaborative care for depression? Process evaluation of a trial utilizing the Normalization Process Model

    PubMed Central

    2010-01-01

    Background There is a considerable evidence base for 'collaborative care' as a method to improve quality of care for depression, but an acknowledged gap between efficacy and implementation. This study utilises the Normalisation Process Model (NPM) to inform the process of implementation of collaborative care in both a future full-scale trial, and the wider health economy. Methods Application of the NPM to qualitative data collected in both focus groups and one-to-one interviews before and after an exploratory randomised controlled trial of a collaborative model of care for depression. Results Findings are presented as they relate to the four factors of the NPM (interactional workability, relational integration, skill-set workability, and contextual integration) and a number of necessary tasks are identified. Using the model, it was possible to observe that predictions about necessary work to implement collaborative care that could be made from analysis of the pre-trial data relating to the four different factors of the NPM were indeed borne out in the post-trial data. However, additional insights were gained from the post-trial interview participants who, unlike those interviewed before the trial, had direct experience of a novel intervention. The professional freedom enjoyed by more senior mental health workers may work both for and against normalisation of collaborative care as those who wish to adopt new ways of working have the freedom to change their practice but are not obliged to do so. Conclusions The NPM provides a useful structure for both guiding and analysing the process by which an intervention is optimized for testing in a larger scale trial or for subsequent full-scale implementation. PMID:20181163

  1. What is needed for taking emergency obstetric and neonatal programmes to scale?

    PubMed

    Bergh, Anne-Marie; Allanson, Emma; Pattinson, Robert C

    2015-11-01

    Scaling up an emergency obstetric and neonatal care (EmONC) programme entails reaching a larger number of people in a potentially broader geographical area. Multiple strategies requiring simultaneous attention should be deployed. This paper provides a framework for understanding the implementation, scale-up and sustainability of such programmes. We reviewed the existing literature and drew on our experience in scaling up the Essential Steps in the Management of Obstetric Emergencies (ESMOE) programme in South Africa. We explore the non-linear change process and conditions to be met for taking an existing EmONC programme to scale. Important concepts cutting across all components of a programme are equity, quality and leadership. Conditions to be met include appropriate awareness across the board and a policy environment that leads to the following: commitment, health systems-strengthening actions, allocation of resources (human, financial and capital/material), dissemination and training, supportive supervision and monitoring and evaluation. Copyright © 2015 Elsevier Ltd. All rights reserved.

  2. Parallel processes: using motivational interviewing as an implementation coaching strategy.

    PubMed

    Hettema, Jennifer E; Ernst, Denise; Williams, Jessica Roberts; Miller, Kristin J

    2014-07-01

    In addition to its clinical efficacy as a communication style for strengthening motivation and commitment to change, motivational interviewing (MI) has been hypothesized to be a potential tool for facilitating evidence-based practice adoption decisions. This paper reports on the rationale and content of MI-based implementation coaching Webinars that, as part of a larger active dissemination strategy, were found to be more effective than passive dissemination strategies at promoting adoption decisions among behavioral health and health providers and administrators. The Motivational Interviewing Treatment Integrity scale (MITI 3.1.1) was used to rate coaching Webinars from 17 community behavioral health organizations and 17 community health centers. The MITI coding system was found to be applicable to the coaching Webinars, and raters achieved high levels of agreement on global and behavior count measurements of fidelity to MI. Results revealed that implementation coaches maintained fidelity to the MI model, exceeding competency benchmarks for almost all measures. Findings suggest that it is feasible to implement MI as a coaching tool.

  3. Estimating the Cost of Providing Foundational Public Health Services.

    PubMed

    Mamaril, Cezar Brian C; Mays, Glen P; Branham, Douglas Keith; Bekemeier, Betty; Marlowe, Justin; Timsina, Lava

    2017-12-28

    To estimate the cost of resources required to implement a set of Foundational Public Health Services (FPHS) as recommended by the Institute of Medicine. A stochastic simulation model was used to generate probability distributions of input and output costs across 11 FPHS domains. We used an implementation attainment scale to estimate costs of fully implementing FPHS. We use data collected from a diverse cohort of 19 public health agencies located in three states that implemented the FPHS cost estimation methodology in their agencies during 2014-2015. The average agency incurred costs of $48 per capita implementing FPHS at their current attainment levels with a coefficient of variation (CV) of 16 percent. Achieving full FPHS implementation would require $82 per capita (CV=19 percent), indicating an estimated resource gap of $34 per capita. Substantial variation in costs exists across communities in resources currently devoted to implementing FPHS, with even larger variation in resources needed for full attainment. Reducing geographic inequities in FPHS may require novel financing mechanisms and delivery models that allow health agencies to have robust roles within the health system and realize a minimum package of public health services for the nation. © Health Research and Educational Trust.
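
    As a rough illustration of the per-capita arithmetic only (the study's stochastic simulation is far more detailed, and the lognormal distributional assumptions below are hypothetical), draws matched to the reported means and coefficients of variation reproduce the $34 per-capita gap on average:

      import numpy as np

      def lognormal_from_mean_cv(mean, cv, size, rng):
          """Draw lognormal samples with a given mean and coefficient of variation."""
          sigma2 = np.log(1.0 + cv**2)
          mu = np.log(mean) - 0.5 * sigma2
          return rng.lognormal(mu, np.sqrt(sigma2), size)

      rng = np.random.default_rng(1)
      current = lognormal_from_mean_cv(48.0, 0.16, 100_000, rng)   # $/capita at current attainment
      full    = lognormal_from_mean_cv(82.0, 0.19, 100_000, rng)   # $/capita at full attainment
      gap = full - current
      print(round(gap.mean()), round(np.percentile(gap, 10)), round(np.percentile(gap, 90)))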

  4. GenASiS Basics: Object-oriented utilitarian functionality for large-scale physics simulations

    DOE PAGES

    Cardall, Christian Y.; Budiardja, Reuben D.

    2015-06-11

    Aside from numerical algorithms and problem setup, large-scale physics simulations on distributed-memory supercomputers require more basic utilitarian functionality, such as physical units and constants; display to the screen or standard output device; message passing; I/O to disk; and runtime parameter management and usage statistics. Here we describe and make available Fortran 2003 classes furnishing extensible object-oriented implementations of this sort of rudimentary functionality, along with individual `unit test' programs and larger example problems demonstrating their use. Lastly, these classes compose the Basics division of our developing astrophysics simulation code GenASiS (General Astrophysical Simulation System), but their fundamental nature makes themmore » useful for physics simulations in many fields.« less

  5. Green infrastructure retrofits on residential parcels: Ecohydrologic modeling for stormwater design

    NASA Astrophysics Data System (ADS)

    Miles, B.; Band, L. E.

    2014-12-01

    To meet water quality goals stormwater utilities and not-for-profit watershed organizations in the U.S. are working with citizens to design and implement green infrastructure on residential land. Green infrastructure, as an alternative and complement to traditional (grey) stormwater infrastructure, has the potential to contribute to multiple ecosystem benefits including stormwater volume reduction, carbon sequestration, urban heat island mitigation, and to provide amenities to residents. However, in small (1-10-km2) medium-density urban watersheds with heterogeneous land cover it is unclear whether stormwater retrofits on residential parcels significantly contributes to reduce stormwater volume at the watershed scale. In this paper, we seek to improve understanding of how small-scale redistribution of water at the parcel scale as part of green infrastructure implementation affects urban water budgets and stormwater volume across spatial scales. As study sites we use two medium-density headwater watersheds in Baltimore, MD and Durham, NC. We develop ecohydrology modeling experiments to evaluate the effectiveness of redirecting residential rooftop runoff to un-altered pervious surfaces and to engineered rain gardens to reduce stormwater runoff. As baselines for these experiments, we performed field surveys of residential rooftop hydrologic connectivity to adjacent impervious surfaces, and found low rates of connectivity. Through simulations of pervasive adoption of downspout disconnection to un-altered pervious areas or to rain garden stormwater control measures (SCM) in these catchments, we find that most parcel-scale changes in stormwater fate are attenuated at larger spatial scales and that neither SCM alone is likely to provide significant changes in streamflow at the watershed scale.

  6. Assignment of boundary conditions in embedded ground water flow models

    USGS Publications Warehouse

    Leake, S.A.

    1998-01-01

    Many small-scale ground water models are too small to incorporate distant aquifer boundaries. If a larger-scale model exists for the area of interest, flow and head values can be specified for boundaries in the smaller-scale model using values from the larger-scale model. Flow components along rows and columns of a large-scale block-centered finite-difference model can be interpolated to compute horizontal flow across any segment of a perimeter of a small-scale model. Head at cell centers of the larger-scale model can be interpolated to compute head at points on a model perimeter. Simple linear interpolation is proposed for horizontal interpolation of horizontal-flow components. Bilinear interpolation is proposed for horizontal interpolation of head values. The methods of interpolation provided satisfactory boundary conditions in tests using models of hypothetical aquifers.
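
    A minimal sketch of the two proposed interpolations, assuming heads are available at the four surrounding cell centres of the larger-scale model and flows at two neighbouring cell faces (hypothetical helper functions, not the USGS implementation):

      def bilinear_head(x, y, x0, x1, y0, y1, h00, h10, h01, h11):
          """Bilinear interpolation of head at (x, y) from four cell-centre heads.
          h00 = head at (x0, y0), h10 at (x1, y0), h01 at (x0, y1), h11 at (x1, y1)."""
          tx = (x - x0) / (x1 - x0)
          ty = (y - y0) / (y1 - y0)
          return (h00 * (1 - tx) * (1 - ty) + h10 * tx * (1 - ty)
                  + h01 * (1 - tx) * ty + h11 * tx * ty)

      def linear_flow(s, s0, s1, q0, q1):
          """Linear interpolation of a horizontal-flow component between two faces."""
          return q0 + (q1 - q0) * (s - s0) / (s1 - s0)

      # head at a perimeter point inside a 100 m x 100 m cell block (illustrative values)
      print(bilinear_head(25.0, 75.0, 0.0, 100.0, 0.0, 100.0, 10.0, 12.0, 11.0, 14.0))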

  7. FAST: A multi-processed environment for visualization of computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Bancroft, Gordon V.; Merritt, Fergus J.; Plessel, Todd C.; Kelaita, Paul G.; Mccabe, R. Kevin

    1991-01-01

    Three-dimensional, unsteady, multi-zoned fluid dynamics simulations over full scale aircraft are typical of the problems being investigated at NASA Ames' Numerical Aerodynamic Simulation (NAS) facility on CRAY2 and CRAY-YMP supercomputers. With multiple processor workstations available in the 10-30 Mflop range, we feel that these new developments in scientific computing warrant a new approach to the design and implementation of analysis tools. These larger, more complex problems create a need for new visualization techniques not possible with the existing software or systems available as of this writing. The visualization techniques will change as the supercomputing environment, and hence the scientific methods employed, evolves even further. The Flow Analysis Software Toolkit (FAST), an implementation of a software system for fluid mechanics analysis, is discussed.

  8. Extreme Scale Plasma Turbulence Simulations on Top Supercomputers Worldwide

    DOE PAGES

    Tang, William; Wang, Bei; Ethier, Stephane; ...

    2016-11-01

    The goal of the extreme scale plasma turbulence studies described in this paper is to expedite the delivery of reliable predictions on confinement physics in large magnetic fusion systems by using world-class supercomputers to carry out simulations with unprecedented resolution and temporal duration. This has involved architecture-dependent optimizations of performance scaling and addressing code portability and energy issues, with the metrics for multi-platform comparisons being 'time-to-solution' and 'energy-to-solution'. Realistic results addressing how confinement losses caused by plasma turbulence scale from present-day devices to the much larger $25 billion international ITER fusion facility have been enabled by innovative advances in the GTC-P code including (i) implementation of one-sided communication from MPI 3.0 standard; (ii) creative optimization techniques on Xeon Phi processors; and (iii) development of a novel performance model for the key kernels of the PIC code. Our results show that modeling data movement is sufficient to predict performance on modern supercomputer platforms.

  9. A Frequency-Domain Implementation of a Sliding-Window Traffic Sign Detector for Large Scale Panoramic Datasets

    NASA Astrophysics Data System (ADS)

    Creusen, I. M.; Hazelhoff, L.; De With, P. H. N.

    2013-10-01

    In large-scale automatic traffic sign surveying systems, the primary computational effort is concentrated at the traffic sign detection stage. This paper focuses on reducing the computational load of the sliding-window object detection algorithm in particular, which is employed for traffic sign detection. Sliding-window object detectors often use a linear SVM to classify the features in a window. In this case, the classification can be seen as a convolution of the feature maps with the SVM kernel. It is well known that convolution can be efficiently implemented in the frequency domain, for kernels larger than a certain size. We show that by careful reordering of sliding-window operations, most of the frequency-domain transformations can be eliminated, leading to a substantial increase in efficiency. Additionally, we suggest using the overlap-add method to keep the memory use within reasonable bounds. This allows us to keep all the transformed kernels in memory, thereby eliminating even more domain transformations, and allows all scales in a multiscale pyramid to be processed using the same set of transformed kernels. For a typical sliding-window implementation, we have found that the detector execution performance improves by a factor of 5.3. As a bonus, many of the detector improvements from the literature, e.g. chi-squared kernel approximations, sub-class splitting algorithms etc., can be more easily applied at a lower performance penalty because of the improved scalability.
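
    The key observation is that scoring every window with a linear SVM is a cross-correlation of the feature map with the SVM weights, which can be evaluated in the frequency domain with the kernel transform computed once. A minimal single-channel, single-scale sketch (the paper's detector operates on multi-channel feature maps and uses overlap-add to bound memory):

      import numpy as np

      def svm_score_map(feature_map, svm_weights, bias=0.0):
          """Dense sliding-window SVM scores via frequency-domain cross-correlation."""
          H, W = feature_map.shape
          h, w = svm_weights.shape
          F = np.fft.rfft2(feature_map)
          K = np.conj(np.fft.rfft2(svm_weights, s=(H, W)))  # conjugate turns convolution into correlation
          scores = np.fft.irfft2(F * K, s=(H, W))
          return scores[:H - h + 1, :W - w + 1] + bias      # one score per valid window position

      scores = svm_score_map(np.random.rand(480, 640), np.random.rand(16, 16))
      print(scores.shape)                                   # (465, 625) candidate window positions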

  10. Greater sage-grouse population trends across Wyoming

    USGS Publications Warehouse

    Edmunds, David; Aldridge, Cameron L.; O'Donnell, Michael; Monroe, Adrian

    2018-01-01

    The scale at which analyses are performed can have an effect on model results and often one scale does not accurately describe the ecological phenomena of interest (e.g., population trends) for wide-ranging species; yet, most ecological studies are performed at a single, arbitrary scale. To best determine local and regional trends for greater sage-grouse (Centrocercus urophasianus) in Wyoming, USA, we modeled density-independent and -dependent population growth across multiple spatial scales relevant to management and conservation (Core Areas [habitat encompassing approximately 83% of the sage-grouse population on ∼24% of surface area in Wyoming], local Working Groups [7 regional areas for which groups of local experts are tasked with implementing Wyoming's statewide sage-grouse conservation plan at the local level], Core Area status (Core Area vs. Non-Core Area) by Working Groups, and Core Areas by Working Groups). Our goal was to determine the influence of fine-scale population trends (Core Areas) on larger-scale populations (Working Group Areas). We modeled the natural log of change in population size (peak male lek counts) over time to calculate the finite rate of population growth (λ) for each population of interest from 1993 to 2015. We found that in general when Core Area status (Core Area vs. Non-Core Area) was investigated by Working Group Area, the 2 populations trended similarly and agreed with the overall trend of the Working Group Area. However, at the finer scale where Core Areas were analyzed separately, Core Areas within the same Working Group Area often trended differently and a few large Core Areas could influence the overall Working Group Area trend and mask trends occurring in smaller Core Areas. Relatively close fine-scale populations of sage-grouse can trend differently, indicating that large-scale trends may not accurately depict what is occurring across the landscape (e.g., local effects of gas and oil fields may be masked by increasing larger populations).
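
    In this framework the finite rate of growth follows from the slope of log counts against year; a minimal density-independent version with hypothetical peak male lek counts:

      import numpy as np

      years = np.arange(1993, 2016)
      counts = 1200 * 0.97 ** (years - 1993) * np.exp(np.random.default_rng(2).normal(0, 0.1, years.size))

      slope, intercept = np.polyfit(years, np.log(counts), 1)
      lam = np.exp(slope)                 # finite rate of population growth
      print(f"lambda = {lam:.3f}")        # ~0.97 here, i.e. a declining population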

  11. Treecode-based generalized Born method

    NASA Astrophysics Data System (ADS)

    Xu, Zhenli; Cheng, Xiaolin; Yang, Haizhao

    2011-02-01

    We have developed a treecode-based O(N log N) algorithm for the generalized Born (GB) implicit solvation model. Our treecode-based GB (tGB) is based on the GBr6 [J. Phys. Chem. B 111, 3055 (2007)], an analytical GB method with a pairwise descreening approximation for the R6 volume integral expression. The algorithm is composed of a cutoff scheme for the effective Born radii calculation, and a treecode implementation of the GB charge-charge pair interactions. Test results demonstrate that the tGB algorithm can reproduce the vdW-surface-based Poisson solvation energy with an average relative error less than 0.6% while providing an almost linear-scaling calculation for a representative set of 25 proteins with different sizes (from 2815 atoms to 65456 atoms). For a typical system of 10k atoms, the tGB calculation is three times faster than the direct summation as implemented in the original GBr6 model. Thus, our tGB method provides an efficient way for performing implicit solvent GB simulations of larger biomolecular systems at longer time scales.
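
    For orientation, the pairwise solvation term whose O(N²) sum the treecode accelerates has the familiar Still-type form shown below (the GBr6 variant differs mainly in how the effective radii R_i are obtained, via an R⁻⁶ descreening integral):

      \Delta G_{\mathrm{GB}} \;=\; -\frac{1}{2}\left(\frac{1}{\epsilon_{\mathrm{in}}}-\frac{1}{\epsilon_{\mathrm{out}}}\right)\sum_{i,j}\frac{q_i q_j}{f_{\mathrm{GB}}(r_{ij})},
      \qquad
      f_{\mathrm{GB}} = \sqrt{\,r_{ij}^2 + R_i R_j\, e^{-r_{ij}^2/(4 R_i R_j)}\,}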

  12. Gas-solid fluidized bed reactors: Scale-up, flow regimes identification and hydrodynamics

    NASA Astrophysics Data System (ADS)

    Zaid, Faraj Muftah

    This research studied the scale-up, flow regimes identification and hydrodynamics of fluidized beds using 6-inch and 18-inch diameter columns and different particles. One of the objectives was to advance the scale-up of gas-solid fluidized bed reactors by developing a new mechanistic methodology for hydrodynamic similarity based on matching the radial or diameter profile of gas phase holdup, since gas dynamics dictate the hydrodynamics of these reactors. This has been successfully achieved. However, the literature-reported scale-up methodology based on matching selected dimensionless groups was examined, and it was found that it was not easy to match the dimensionless groups; hence, there was some deviation in the hydrodynamics of the two different fluidized beds studied. A new technique based on gamma ray densitometry (GRD) was successfully developed and utilized for on-line monitoring of the implementation of scale-up, to identify the flow regime, and to measure the radial or diameter profiles of gas and solids holdups. CFD has been demonstrated as a valuable tool to enable the implementation of the newly developed scale-up methodology based on finding the conditions that provide similar or closer radial profile or cross sectional distribution of the gas holdup. As gas velocity increases, solids holdup in the center region of the column decreases in the fully developed region of both 6-inch and 18-inch diameter columns. Solids holdup increased with the increase in particle size and density. Upflowing particle velocity increased with the gas velocity and its profile became steeper at high superficial gas velocity at all axial heights, where the centerline velocity became higher than that in the wall region. Smaller particle size and lower density gave larger upflowing particle velocities. Minimum fluidization velocity and transition velocity from bubbly to churn turbulent flow regimes were found to be lower in the 18-inch diameter column compared to those obtained in the 6-inch diameter column. Also, the absolute fluctuation of upflowing particle velocity multiplied by solids holdup, one of the terms for solids mass flux estimation, was found to be larger in the 18-inch diameter column than in the 6-inch diameter column using the same particle size and density.

  13. Leading for the long haul: a mixed-method evaluation of the Sustainment Leadership Scale (SLS).

    PubMed

    Ehrhart, Mark G; Torres, Elisa M; Green, Amy E; Trott, Elise M; Willging, Cathleen E; Moullin, Joanna C; Aarons, Gregory A

    2018-01-19

    Despite our progress in understanding the organizational context for implementation and specifically the role of leadership in implementation, its role in sustainment has received little attention. This paper took a mixed-method approach to examine leadership during the sustainment phase of the Exploration, Preparation, Implementation, Sustainment (EPIS) framework. Utilizing the Implementation Leadership Scale as a foundation, we sought to develop a short, practical measure of sustainment leadership that can be used for both applied and research purposes. Data for this study were collected as a part of a larger mixed-method study of evidence-based intervention, SafeCare®, sustainment. Quantitative data were collected from 157 providers using web-based surveys. Confirmatory factor analysis was used to examine the factor structure of the Sustainment Leadership Scale (SLS). Qualitative data were collected from 95 providers who participated in one of 15 focus groups. A framework approach guided qualitative data analysis. Mixed-method integration was also utilized to examine convergence of quantitative and qualitative findings. Confirmatory factor analysis supported the a priori higher order factor structure of the SLS with subscales indicating a single higher order sustainment leadership factor. The SLS demonstrated excellent internal consistency reliability. Qualitative analyses offered support for the dimensions of sustainment leadership captured by the quantitative measure, in addition to uncovering a fifth possible factor, available leadership. This study found qualitative and quantitative support for the pragmatic SLS measure. The SLS can be used for assessing leadership of first-level leaders to understand how staff perceive leadership during sustainment and to suggest areas where leaders could direct more attention in order to increase the likelihood that EBIs are institutionalized into the normal functioning of the organization.

  14. An evolving effective stress approach to anisotropic distortional hardening

    DOE PAGES

    Lester, B. T.; Scherzinger, W. M.

    2018-03-11

    A new yield surface with an evolving effective stress definition is proposed for consistently and efficiently describing anisotropic distortional hardening. Specifically, a new internal state variable is introduced to capture the thermodynamic evolution between different effective stress definitions. The corresponding yield surface and evolution equations of the internal variables are derived from thermodynamic considerations enabling satisfaction of the second law. A closest point projection return mapping algorithm for the proposed model is formulated and implemented for use in finite element analyses. Finally, select constitutive and larger scale boundary value problems are solved to explore the capabilities of the model and examine the impact of distortional hardening on constitutive and structural responses. Importantly, these simulations demonstrate the tractability of the proposed formulation in investigating large-scale problems of interest.
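
    As context for the return-mapping step, a minimal closest-point projection for the special case of J2 (von Mises) plasticity with linear isotropic hardening is sketched below; the paper's distortional-hardening surface requires a more general local Newton iteration that is not shown.

      import numpy as np

      def radial_return(stress_trial_dev, sigma_y, H, mu, eps_p_bar):
          """Closest-point projection (radial return) for J2 plasticity.
          stress_trial_dev : deviatoric trial stress (3x3), mu : shear modulus,
          H : linear isotropic hardening modulus, eps_p_bar : accumulated plastic strain."""
          norm_trial = np.linalg.norm(stress_trial_dev)
          f_trial = norm_trial - np.sqrt(2.0 / 3.0) * (sigma_y + H * eps_p_bar)
          if f_trial <= 0.0:
              return stress_trial_dev, eps_p_bar           # elastic step, trial state admissible
          dgamma = f_trial / (2.0 * mu + 2.0 / 3.0 * H)    # plastic multiplier from consistency
          n = stress_trial_dev / norm_trial                # return direction (normal to the surface)
          stress_dev = stress_trial_dev - 2.0 * mu * dgamma * n
          return stress_dev, eps_p_bar + np.sqrt(2.0 / 3.0) * dgamma

      s_trial = np.diag([150.0, -75.0, -75.0])             # illustrative deviatoric trial stress [MPa]
      print(radial_return(s_trial, sigma_y=100.0, H=1000.0, mu=80000.0, eps_p_bar=0.0))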

  15. An evolving effective stress approach to anisotropic distortional hardening

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lester, B. T.; Scherzinger, W. M.

    A new yield surface with an evolving effective stress definition is proposed for consistently and efficiently describing anisotropic distortional hardening. Specifically, a new internal state variable is introduced to capture the thermodynamic evolution between different effective stress definitions. The corresponding yield surface and evolution equations of the internal variables are derived from thermodynamic considerations enabling satisfaction of the second law. A closest point projection return mapping algorithm for the proposed model is formulated and implemented for use in finite element analyses. Finally, select constitutive and larger scale boundary value problems are solved to explore the capabilities of the model and examine the impact of distortional hardening on constitutive and structural responses. Importantly, these simulations demonstrate the tractability of the proposed formulation in investigating large-scale problems of interest.

  16. Development of fine-resolution analyses and expanded large-scale forcing properties. Part II: Scale-awareness and application to single-column model experiments

    DOE PAGES

    Feng, Sha; Vogelmann, Andrew M.; Li, Zhijin; ...

    2015-01-20

    Fine-resolution three-dimensional fields have been produced using the Community Gridpoint Statistical Interpolation (GSI) data assimilation system for the U.S. Department of Energy’s Atmospheric Radiation Measurement Program (ARM) Southern Great Plains region. The GSI system is implemented in a multi-scale data assimilation framework using the Weather Research and Forecasting model at a cloud-resolving resolution of 2 km. From the fine-resolution three-dimensional fields, large-scale forcing is derived explicitly at grid-scale resolution; a subgrid-scale dynamic component is derived separately, representing subgrid-scale horizontal dynamic processes. Analyses show that the subgrid-scale dynamic component is often a major component over the large-scale forcing for grid scales larger than 200 km. The single-column model (SCM) of the Community Atmospheric Model version 5 (CAM5) is used to examine the impact of the grid-scale and subgrid-scale dynamic components on simulated precipitation and cloud fields associated with a mesoscale convective system. It is found that grid-scale size impacts simulated precipitation, resulting in an overestimation for grid scales of about 200 km but an underestimation for smaller grids. The subgrid-scale dynamic component has an appreciable impact on the simulations, suggesting that grid-scale and subgrid-scale dynamic components should be considered in the interpretation of SCM simulations.

  17. Dispersion interactions in Density Functional Theory

    NASA Astrophysics Data System (ADS)

    Andrinopoulos, Lampros; Hine, Nicholas; Mostofi, Arash

    2012-02-01

    Semilocal functionals in Density Functional Theory (DFT) achieve high accuracy simulating a wide range of systems, but miss the effect of dispersion (vdW) interactions, important in weakly bound systems. We study two different methods to include vdW in DFT: First, we investigate a recent approach [1] to evaluate the vdW contribution to the total energy using maximally-localized Wannier functions. Using a set of simple dimers, we show that it has a number of shortcomings that hamper its predictive power; we then develop and implement a series of improvements [2] and obtain binding energies and equilibrium geometries in closer agreement to quantum-chemical coupled-cluster calculations. Second, we implement the vdW-DF functional [3], using Soler's method [4], within ONETEP [5], a linear-scaling DFT code, and apply it to a range of systems. This method within a linear-scaling DFT code allows the simulation of weakly bound systems of larger scale, such as organic/inorganic interfaces, biological systems and implicit solvation models. [1] P. Silvestrelli, JPC A 113, 5224 (2009). [2] L. Andrinopoulos et al., JCP 135, 154105 (2011). [3] M. Dion et al., PRL 92, 246401 (2004). [4] G. Román-Pérez, J.M. Soler, PRL 103, 096102 (2009). [5] C. Skylaris et al., JCP 122, 084119 (2005).
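
    The second approach evaluates the nonlocal correlation energy of vdW-DF [3], which in the standard formulation is a double integral over the density with a tabulated two-point kernel; Soler's method [4] makes this affordable by recasting the integral as a set of convolutions evaluated in reciprocal space:

      E_c^{\mathrm{nl}} \;=\; \frac{1}{2}\int d^{3}r\, d^{3}r'\; n(\mathbf{r})\,\phi(\mathbf{r},\mathbf{r}')\,n(\mathbf{r}')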

  18. The spatial and temporal domains of modern ecology.

    PubMed

    Estes, Lyndon; Elsen, Paul R; Treuer, Timothy; Ahmed, Labeeb; Caylor, Kelly; Chang, Jason; Choi, Jonathan J; Ellis, Erle C

    2018-05-01

    To understand ecological phenomena, it is necessary to observe their behaviour across multiple spatial and temporal scales. Since this need was first highlighted in the 1980s, technology has opened previously inaccessible scales to observation. To help to determine whether there have been corresponding changes in the scales observed by modern ecologists, we analysed the resolution, extent, interval and duration of observations (excluding experiments) in 348 studies that have been published between 2004 and 2014. We found that observational scales were generally narrow, because ecologists still primarily use conventional field techniques. In the spatial domain, most observations had resolutions ≤1 m² and extents ≤10,000 ha. In the temporal domain, most observations were either unreplicated or infrequently repeated (>1 month interval) and ≤1 year in duration. Compared with studies conducted before 2004, observational durations and resolutions appear largely unchanged, but intervals have become finer and extents larger. We also found a large gulf between the scales at which phenomena are actually observed and the scales those observations ostensibly represent, raising concerns about observational comprehensiveness. Furthermore, most studies did not clearly report scale, suggesting that it remains a minor concern. Ecologists can better understand the scales represented by observations by incorporating autocorrelation measures, while journals can promote attentiveness to scale by implementing scale-reporting standards.

  19. Accelerating Time Integration for the Shallow Water Equations on the Sphere Using GPUs

    DOE PAGES

    Archibald, R.; Evans, K. J.; Salinger, A.

    2015-06-01

    The push towards larger and larger computational platforms has made it possible for climate simulations to resolve climate dynamics across multiple spatial and temporal scales. This direction in climate simulation has created a strong need to develop scalable timestepping methods capable of accelerating throughput on high performance computing. This study details the recent advances in the implementation of implicit time stepping of the spectral element dynamical core within the United States Department of Energy (DOE) Accelerated Climate Model for Energy (ACME) on graphical processing units (GPU) based machines. We demonstrate how solvers in the Trilinos project are interfaced with ACME and GPU kernels to increase computational speed of the residual calculations in the implicit time stepping method for the atmosphere dynamics. We demonstrate the optimization gains and data structure reorganization that facilitates the performance improvements.
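
    The general pattern being accelerated is an implicit step solved with a matrix-free Newton-Krylov method, in which only residual evaluations (the part offloaded to GPU kernels in the paper) are required. A small CPU-only sketch with a finite-difference Jacobian-vector product, standing in for, not reproducing, the ACME/Trilinos machinery:

      import numpy as np
      from scipy.sparse.linalg import LinearOperator, gmres

      def backward_euler_step(f, u_n, dt, newton_iters=10, tol=1e-10):
          """Solve u - u_n - dt*f(u) = 0 with Newton; each linear solve uses GMRES
          and a finite-difference Jacobian-vector product (matrix-free)."""
          u = u_n.copy()
          for _ in range(newton_iters):
              r = u - u_n - dt * f(u)                   # nonlinear residual
              if np.linalg.norm(r) < tol:
                  break
              eps = 1e-7
              def jv(v):                                # J(u) @ v by finite differences
                  return (u + eps * v - u_n - dt * f(u + eps * v) - r) / eps
              J = LinearOperator((u.size, u.size), matvec=jv)
              du, _ = gmres(J, -r)
              u = u + du
          return u

      # toy stiff relaxation toward u = 1, advanced by one implicit step
      print(backward_euler_step(lambda u: -1000.0 * (u - 1.0), np.array([0.0]), dt=0.1))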

  20. A General Purpose High Performance Linux Installation Infrastructure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wachsmann, Alf

    2002-06-17

    With more and more and larger and larger Linux clusters, the question arises of how to install them. This paper addresses this question by proposing a solution using only standard software components. This installation infrastructure scales well for a large number of nodes. It is also usable for installing desktop machines or diskless Linux clients; thus, it is not designed for cluster installations in particular but is, nevertheless, highly performant. The infrastructure proposed uses PXE as the network boot component on the nodes. It uses DHCP and TFTP servers to get IP addresses and a bootloader to all nodes. It then uses kickstart to install Red Hat Linux over NFS. We have implemented this installation infrastructure at SLAC with our given server hardware and installed a 256 node cluster in 30 minutes. This paper presents the measurements from this installation and discusses the bottlenecks in our installation.

  1. The dynamical analysis of modified two-compartment neuron model and FPGA implementation

    NASA Astrophysics Data System (ADS)

    Lin, Qianjin; Wang, Jiang; Yang, Shuangming; Yi, Guosheng; Deng, Bin; Wei, Xile; Yu, Haitao

    2017-10-01

    The complexity of neural models is increasing with the investigation of larger biological neural networks, more diverse ionic channels and more detailed morphologies, and the implementation of biological neural networks is a task with huge computational complexity and power consumption. This paper presents an efficient digital design using piecewise linearization on a field-programmable gate array (FPGA) to succinctly implement a reduced two-compartment model that retains essential features of more complicated models. The design proposes an approximate neuron model composed of a set of piecewise linear equations, and it can reproduce different dynamical behaviors to depict the mechanisms of a single neuron model. The consistency of the hardware implementation is verified in terms of dynamical behaviors and bifurcation analysis, and the simulation results, including varied ion channel characteristics, coincide with the biological neuron model with high accuracy. Hardware synthesis on FPGA demonstrates that the proposed model has reliable performance and lower hardware resource usage compared with the original two-compartment model. These investigations are conducive to the scalability of biological neural networks in reconfigurable large-scale neuromorphic systems.
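
    The core idea of piecewise linearization, replacing an expensive nonlinear term with straight-line segments so it can be evaluated with adders and lookup tables on an FPGA, can be sketched in a few lines. The example below approximates a sigmoidal gating curve with a handful of breakpoints; the function, breakpoints and error check are hypothetical illustrations, not the equations from the paper.

```python
import numpy as np

def gate(v):
    """Hypothetical sigmoidal steady-state gating curve (expensive to evaluate in hardware)."""
    return 1.0 / (1.0 + np.exp(-(v + 40.0) / 5.0))

# Choose a small set of breakpoints over the relevant voltage range.
breakpoints = np.linspace(-80.0, 0.0, 9)   # 8 linear segments
table = gate(breakpoints)                  # values that would be stored in a lookup table

def gate_pwl(v):
    """Piecewise linear approximation: interpolate between tabulated breakpoints."""
    return np.interp(v, breakpoints, table)

v = np.linspace(-80.0, 0.0, 1001)
err = np.abs(gate(v) - gate_pwl(v)).max()
print(f"max absolute error of the 8-segment approximation: {err:.4f}")
```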

  2. Optimizing the Performance of Reactive Molecular Dynamics Simulations for Multi-core Architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aktulga, Hasan Metin; Coffman, Paul; Shan, Tzu-Ray

    2015-12-01

    Hybrid parallelism allows high-performance computing applications to better leverage the increasing on-node parallelism of modern supercomputers. In this paper, we present a hybrid parallel implementation of the widely used LAMMPS/ReaxC package, in which the construction of bonded and nonbonded lists and the evaluation of complex ReaxFF interactions are implemented efficiently using OpenMP parallelism. Additionally, the performance of the QEq charge equilibration scheme is examined and a dual-solver is implemented. We present the performance of the resulting ReaxC-OMP package on Mira, a state-of-the-art multi-core IBM BlueGene/Q supercomputer. For system sizes ranging from 32 thousand to 16.6 million particles, speedups in the range of 1.5-4.5x are observed using the new ReaxC-OMP software. Sustained performance improvements have been observed for up to 262,144 cores (1,048,576 processes) of Mira, with a weak scaling efficiency of 91.5% in larger simulations containing 16.6 million particles.
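
    The QEq scheme mentioned above is, in one common formulation, a pair of sparse linear solves per step, H s = -χ and H t = -1, combined under a charge-neutrality constraint. The sketch below illustrates that structure with SciPy's conjugate-gradient solver on a small made-up system; the matrix and electronegativity values are hypothetical, and this is not the ReaxC-OMP dual-solver implementation itself.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

n = 200
# Hypothetical symmetric positive-definite "hardness + shielded Coulomb" matrix.
H = diags([np.full(n - 1, -0.3), np.full(n, 2.0), np.full(n - 1, -0.3)],
          [-1, 0, 1], format="csr")
chi = np.random.default_rng(1).normal(size=n)   # hypothetical electronegativities

# Two independent solves (the "dual" structure): H s = -chi and H t = -1.
s, info_s = cg(H, -chi)
t, info_t = cg(H, -np.ones(n))
assert info_s == 0 and info_t == 0

# Combine under total charge neutrality: q = s - (sum(s)/sum(t)) * t.
q = s - (s.sum() / t.sum()) * t
print("total charge (should be ~0):", q.sum())
```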

  3. Scale-up of networked HIV treatment in Nigeria: creation of an integrated electronic medical records system.

    PubMed

    Chaplin, Beth; Meloni, Seema; Eisen, Geoffrey; Jolayemi, Toyin; Banigbe, Bolanle; Adeola, Juliette; Wen, Craig; Reyes Nieva, Harry; Chang, Charlotte; Okonkwo, Prosper; Kanki, Phyllis

    2015-01-01

    The implementation of PEPFAR programs in resource-limited settings was accompanied by the need to document patient care on a scale unprecedented in environments where paper-based records were the norm. We describe the development of an electronic medical records system (EMRS) put in place at the beginning of a large HIV/AIDS care and treatment program in Nigeria. Databases were created with a relational database program to record laboratory results, medications prescribed and dispensed, and clinical assessments. A collection of stand-alone files recorded different elements of patient care, linked together by utilities that aggregated data on national standard indicators, assessed patient care for quality improvement, tracked patients requiring follow-up, generated counts of ART regimens dispensed, and provided 'snapshots' of a patient's response to treatment. A secure server was used to store patient files for backup and transfer. By February 2012, when the program transitioned to local in-country management by APIN, the EMRS was used in 33 hospitals across the country, with 4,947,433 adult, pediatric and PMTCT records created and still available for use in patient care. Ongoing training for data managers, along with an iterative process of implementing changes to the databases and forms based on user feedback, was needed. As the program scaled up and the volume of laboratory tests increased, results were produced in a digital format, wherever possible, that could be automatically transferred to the EMRS. Many larger clinics began to link some or all of the databases to local area networks, making them available to a larger group of staff members or providing the ability to enter information simultaneously where needed. The EMRS improved patient care, enabled efficient reporting to the Government of Nigeria and to U.S. funding agencies, and allowed program managers and staff to conduct quality control audits. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
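
    A minimal sketch of the kind of relational structure such a record system relies on, stand-alone tables linked by a shared patient identifier plus an aggregation query for an indicator report, is shown below using SQLite. The table and column names are hypothetical illustrations, not the actual EMRS schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE patients      (patient_id TEXT PRIMARY KEY, clinic TEXT, enrolled DATE);
CREATE TABLE lab_results   (patient_id TEXT, test TEXT, value REAL, taken DATE,
                            FOREIGN KEY (patient_id) REFERENCES patients(patient_id));
CREATE TABLE dispensations (patient_id TEXT, regimen TEXT, dispensed DATE,
                            FOREIGN KEY (patient_id) REFERENCES patients(patient_id));
""")

# Hypothetical entries.
conn.execute("INSERT INTO patients VALUES ('P001', 'Clinic A', '2009-03-01')")
conn.execute("INSERT INTO lab_results VALUES ('P001', 'CD4', 350, '2009-03-01')")
conn.execute("INSERT INTO dispensations VALUES ('P001', 'AZT/3TC/NVP', '2009-03-05')")

# Indicator-style aggregation: count of ART regimens dispensed per clinic.
rows = conn.execute("""
    SELECT p.clinic, d.regimen, COUNT(*) AS n_dispensed
    FROM dispensations d JOIN patients p USING (patient_id)
    GROUP BY p.clinic, d.regimen
""").fetchall()
print(rows)
```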

  4. Tai Chi: moving for better balance -- development of a community-based falls prevention program.

    PubMed

    Li, Fuzhong; Harmer, Peter; Mack, Karin A; Sleet, David; Fisher, K John; Kohn, Melvin A; Millet, Lisa M; Xu, Junheng; Yang, Tingzhong; Sutton, Beth; Tompkins, Yvaughn

    2008-05-01

    This study was designed to develop an evidence- and community-based falls prevention program -- Tai Chi: Moving for Better Balance. A mixed qualitative and quantitative approach was used to develop a package of materials for program implementation and evaluation. The developmental work was conducted in 2 communities in the Pacific Northwest. Participants included a panel of experts, senior service program managers or activity coordinators, and older adults. Outcome measures involved program feasibility and satisfaction. Through an iterative process, a program package was developed. The package contained an implementation plan and class training materials (i.e., instructor's manual, videotape, and user's guidebook). Pilot testing of program materials showed that the content was appropriate for the targeted users (community-living older adults) and providers (local senior service organizations). A feasibility survey indicated interest and support from users and providers for program implementation. A 2-week pilot evaluation showed that the program implementation was feasible and evidenced good class attendance, high participant satisfaction, and interest in continuing Tai Chi. The package of materials developed in this study provides a solid foundation for larger-scale implementation and evaluation of the program in community settings.

  5. Application of LANDSAT-2 data to the implementation and enforcement of the Pennsylvania Surface Mining Conservation and Reclamation Act

    NASA Technical Reports Server (NTRS)

    Russell, O. R. (Principal Investigator); Nichols, D. A.; Anderson, R.

    1977-01-01

    The author has identified the following significant results. Evaluation of LANDSAT imagery indicates severe limitations in its utility for surface mine land studies. Image striping resulting from unequal detector response on the satellite degrades the image quality to the extent that images at scales larger than 1:125,000 are of limited value for manual interpretation. Computer processing of LANDSAT data to improve image quality is essential; the removal of scan-line striping and enhancement of mine land reflectance data, combined with color composite printing, permits useful photographic enlargements to approximately 1:60,000.
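
    Scan-line (detector) striping of the kind described above is commonly reduced by normalizing each detector's rows to a common gain and offset. The sketch below shows that idea on a synthetic image; it is a generic moment-matching destripe, not the processing actually used in the study, and the detector count and test scene are hypothetical.

```python
import numpy as np

def destripe(img, n_detectors=6):
    """Moment-matching destripe: every n-th row belongs to the same detector;
    rescale each detector's rows to the global mean and standard deviation."""
    out = img.astype(float).copy()
    g_mean, g_std = out.mean(), out.std()
    for d in range(n_detectors):
        rows = out[d::n_detectors]
        out[d::n_detectors] = (rows - rows.mean()) / rows.std() * g_std + g_mean
    return out

# Synthetic scene with a gradient plus per-detector gain/offset errors.
rng = np.random.default_rng(2)
scene = np.linspace(50, 200, 120)[:, None] * np.ones((120, 120))
gains, offsets = 1.0 + 0.1 * rng.normal(size=6), 10.0 * rng.normal(size=6)
striped = scene.copy()
for d in range(6):
    striped[d::6] = scene[d::6] * gains[d] + offsets[d]

clean = destripe(striped)
print("mean abs error vs. true scene:",
      round(np.abs(striped - scene).mean(), 2), "->", round(np.abs(clean - scene).mean(), 2))
```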

  6. Strategy for an Extensible Microcomputer-Based Mumps System for Private Practice

    PubMed Central

    Walters, Richard F.; Johnson, Stephen L.

    1979-01-01

    A macro expander technique has been adopted to generate a machine independent single user version of ANSI Standard MUMPS running on an 8080 Microcomputer. This approach makes it possible to have the medically oriented MUMPS language available on inexpensive systems suitable for small group practice settings. Substitution of another macro expansion set allows the same interpreter to be implemented on another computer, thereby providing compatibility with comparable or larger scale systems. Furthermore, since the global file handler can be separated from the interpreter, this approach permits development of a distributed MUMPS system with no change in applications software.

  7. Microcracking in Composite Laminates: Simulation of Crack-Induced Ultrasound Attenuation

    NASA Technical Reports Server (NTRS)

    Leckey, C. A. C.; Rogge, M. D.; Parker, F. R.

    2012-01-01

    Microcracking in composite laminates is a known precursor to the growth of inter-ply delaminations and larger-scale damage. Microcracking can lead to the attenuation of ultrasonic waves due to crack-induced scattering. A 3D elastodynamic finite integration technique (EFIT) has been implemented to explore the scattering of ultrasonic waves by microcracks in anisotropic composite laminates. X-ray microfocus computed tomography data were input directly into the EFIT simulation for this purpose. The validated anisotropic 3D EFIT code is shown to be a useful tool for exploring the complex multiple scattering that arises from extensive microcracking.

  8. A phase code for memory could arise from circuit mechanisms in entorhinal cortex

    PubMed Central

    Hasselmo, Michael E.; Brandon, Mark P.; Yoshida, Motoharu; Giocomo, Lisa M.; Heys, James G.; Fransen, Erik; Newman, Ehren L.; Zilli, Eric A.

    2009-01-01

    Neurophysiological data reveals intrinsic cellular properties that suggest how entorhinal cortical neurons could code memory by the phase of their firing. Potential cellular mechanisms for this phase coding in models of entorhinal function are reviewed. This mechanism for phase coding provides a substrate for modeling the responses of entorhinal grid cells, as well as the replay of neural spiking activity during waking and sleep. Efforts to implement these abstract models in more detailed biophysical compartmental simulations raise specific issues that could be addressed in larger scale population models incorporating mechanisms of inhibition. PMID:19656654

  9. Efficient implementation of core-excitation Bethe-Salpeter equation calculations

    NASA Astrophysics Data System (ADS)

    Gilmore, K.; Vinson, John; Shirley, E. L.; Prendergast, D.; Pemmaraju, C. D.; Kas, J. J.; Vila, F. D.; Rehr, J. J.

    2015-12-01

    We present an efficient implementation of the Bethe-Salpeter equation (BSE) method for obtaining core-level spectra including X-ray absorption (XAS), X-ray emission (XES), and both resonant and non-resonant inelastic X-ray scattering spectra (N/RIXS). Calculations are based on density functional theory (DFT) electronic structures generated either by ABINIT or QuantumESPRESSO, both plane-wave basis, pseudopotential codes. This electronic structure is improved through the inclusion of a GW self-energy. The projector augmented wave technique is used to evaluate transition matrix elements between core-level and band states. Final two-particle scattering states are obtained with the NIST core-level BSE solver (NBSE). We have previously reported this implementation, which we refer to as OCEAN (Obtaining Core Excitations from Ab initio electronic structure and NBSE) (Vinson et al., 2011). Here, we present additional efficiencies that enable us to evaluate spectra for systems ten times larger than previously possible, containing up to a few thousand electrons. These improvements include the implementation of optimal basis functions that reduce the cost of the initial DFT calculations, more complete parallelization of the screening calculation and of the action of the BSE Hamiltonian, and various memory reductions. Scaling is demonstrated on supercells of SrTiO3, and example spectra for the organic light-emitting molecule Tris-(8-hydroxyquinoline)aluminum (Alq3) are presented. The ability to perform large-scale spectral calculations is particularly advantageous for investigating dilute or non-periodic systems such as doped materials, amorphous systems, or complex nano-structures.

  10. Experimental implementation of parallel riverbed erosion to study vegetation uprooting by flow

    NASA Astrophysics Data System (ADS)

    Perona, Paolo; Edmaier, Katharina; Crouzy, Benoît

    2014-05-01

    In nature, flow erosion leading to the uprooting of vegetation is often a delayed process that gradually reduces anchoring by root exposure and correspondingly increases drag on the exposed biomass. The process determining scouring or deposition of the riverbed, and consequently plant root exposure, is complex and scale dependent. At the local scale, it is hydrodynamically driven and depends on obstacle porosity as well as the sediment-to-obstacle size ratio. At a larger scale, it results from morphodynamic conditions, which mostly depend on riverbed topography and stream bedload transport capacity. In the latter case, ablation of sediment gradually reduces local bed elevation around the obstacle at a scale larger than the obstacle size, and uprooting eventually occurs when flow drag exceeds the residual anchoring. Ideally, one would study the timescales of vegetation uprooting by flow by inducing parallel bed erosion. This condition is not trivial to obtain experimentally because bed elevation adjustments occur in relation to longitudinal changes in sediment supply, as described by Exner's equation. In this work, we study the physical conditions leading to parallel bed erosion by reducing the Exner equation, closed for bedload transport, to a nonlinear partial differential equation, and we show that this is a particular "boundary value" problem. Finally, we use the data of Edmaier (2014) from a small-scale mobile-bed flume setup to verify the proposed theoretical framework and to show how such a simple experiment can provide useful insights into the timescales of the uprooting process (Edmaier et al., 2011). REFERENCES - Edmaier, K., P. Burlando, and P. Perona (2011). Mechanisms of vegetation uprooting by flow in alluvial non-cohesive sediment. Hydrology and Earth System Sciences, vol. 15, p. 1615-1627. - Edmaier, K. Uprooting mechanisms of juvenile vegetation by flow. PhD thesis, EPFL, in preparation.
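
    For reference, the one-dimensional Exner equation referred to above, together with a generic bedload closure, can be written as below. The closure shown (a threshold power law of Meyer-Peter and Müller type) is one common choice and is an assumption here, not necessarily the exact closure used by the authors; parallel (spatially uniform) bed erosion requires the streamwise divergence of the bedload flux to be independent of the streamwise coordinate.

```latex
% Exner equation for bed elevation \eta(x,t), bed porosity \lambda_p, bedload flux q_s:
(1-\lambda_p)\,\frac{\partial \eta}{\partial t} = -\,\frac{\partial q_s}{\partial x},
\qquad
q_s = \alpha\,\bigl(\tau^{*}-\tau_c^{*}\bigr)^{3/2}
\quad \text{(e.g., a Meyer-Peter--M\"uller-type closure, } \tau^{*}>\tau_c^{*}\text{)}.
% Parallel erosion: \partial\eta/\partial t independent of x
% \iff \partial q_s/\partial x = \text{const in } x.
```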

  11. Transfer of movement sequences: bigger is better.

    PubMed

    Dean, Noah J; Kovacs, Attila J; Shea, Charles H

    2008-02-01

    Experiment 1 was conducted to determine if proportional transfer from "small to large" scale movements is as effective as transferring from "large to small." We hypothesize that the learning of larger scale movements will require the participant to learn to manage the generation, storage, and dissipation of forces better than when practicing smaller scale movements. Thus, we predict an advantage for transfer of larger scale movements to smaller scale movements relative to transfer from smaller to larger scale movements. Experiment 2 was conducted to determine if adding a load to a smaller scale movement would enhance later transfer to a larger scale movement sequence. It was hypothesized that the added load would require the participants to consider the dynamics of the movement to a greater extent than without the load. The results replicated earlier findings of effective transfer from large to small movements, but consistent with our hypothesis, transfer was less effective from small to large (Experiment 1). However, when a load was added during acquisition, transfer from small to large was enhanced even though the load was removed during the transfer test. These results are consistent with the notion that the transfer asymmetry noted in Experiment 1 was due to factors related to movement dynamics that were enhanced during practice of the larger scale movement sequence, but not during the practice of the smaller scale movement sequence. The finding that the movement structure is unaffected by transfer direction while the movement dynamics are influenced by it is consistent with hierarchical models of sequence production.

  12. Multiscale mechanics of the lateral pressure effect on enhancing the load transfer between polymer coated CNTs.

    PubMed

    Yazdandoost, Fatemeh; Mirzaeifar, Reza; Qin, Zhao; Buehler, Markus J

    2017-05-04

    While individual carbon nanotubes (CNTs) are among the strongest fibers known, even the strongest fabricated macroscale CNT yarns and fibers are still significantly weaker than individual nanotubes. The loss in mechanical properties arises mainly because the deformation mechanism of CNT fibers is highly governed by the weak shear strength associated with the sliding of nanotubes on each other. Adding a polymer coating to the bundles and twisting the CNT yarns to enhance the intertube interactions are both efficient methods to improve the mechanical properties of macroscale yarns. Here, we perform molecular dynamics (MD) simulations to unravel the unknown deformation mechanism in the intertube polymer chains and also the local deformations of the CNTs at the atomistic scale. Our results show that the lateral pressure can have both beneficial and adverse effects on the shear strength of polymer-coated CNTs, depending on the local deformations at the atomistic scale. In this paper we also introduce a bottom-up bridging strategy between a full atomistic model and a coarse-grained (CG) model. Our trained CG model is capable of incorporating the atomistic-scale local deformations of each CNT into the larger-scale collective behavior of bundles, which enables the model to accurately predict the effect of lateral pressure on larger CNT bundles and yarns. The developed multiscale CG model is implemented to study the effect of lateral pressure on the shear strength of straight polymer-coated CNT yarns, and also the effect of twisting on the pull-out force of bundles in spun CNT yarns.

  13. Challenges in Upscaling Geomorphic Transport Laws: Scale-dependence of Local vs. Non-local Formalisms and Derivation of Closures (Invited)

    NASA Astrophysics Data System (ADS)

    Foufoula-Georgiou, E.; Ganti, V. K.; Passalacqua, P.

    2010-12-01

    Nonlinear geomorphic transport laws are often derived from mechanistic considerations at a point, and yet they are implemented on 90 m or 30 m DEMs, presenting a mismatch between the scales of derivation and application of the flux laws. Since estimates of local slopes and curvatures are known to depend on the scale of the DEM used in their computation, two questions arise: (1) how to meaningfully compensate for the scale dependence, if any, of local transport laws? and (2) how to formally derive, via upscaling, constitutive laws that are applicable at larger scales? Recently, non-local geomorphic transport laws for sediment transport on hillslopes have been introduced using the concept of an integral flux that depends on topographic attributes in the vicinity of a point of interest. In this paper, we demonstrate the scale dependence of local nonlinear hillslope sediment transport laws and derive a closure term via upscaling (Reynolds averaging). We also show that the non-local hillslope transport laws are inherently scale independent owing to their non-local, scale-free nature. These concepts are demonstrated via an application to a small subbasin of the Oregon Coast Range using 2 m LiDAR topographic data.
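
    As a concrete point of reference for the local vs. non-local distinction, two forms that appear frequently in this literature are sketched below: a local nonlinear (slope-dependent) hillslope flux and a non-local flux written as a convolution over upslope topography. These generic forms are illustrative assumptions and are not reproduced from the abstract itself.

```latex
% Local nonlinear hillslope sediment flux (slope |\nabla z| approaching a critical slope S_c):
\mathbf{q}_{\mathrm{local}} = -\,\frac{K\,\nabla z}{1-\left(|\nabla z|/S_c\right)^{2}}

% Non-local flux: an integral of upslope slopes weighted by a kernel g, so the flux at x
% depends on topographic attributes in a neighborhood of x rather than at x alone:
q_{\mathrm{nonlocal}}(x) = -\,K \int_{0}^{\infty} g(r)\,
  \left.\frac{\partial z}{\partial x}\right|_{x-r}\,\mathrm{d}r
```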

  14. Evaluation of models and data for assessing whooping crane habitat in the central Platte River, Nebraska

    USGS Publications Warehouse

    Farmer, Adrian H.; Cade, Brian S.; Terrell, James W.; Henriksen, Jim H.; Runge, Jeffery T.

    2005-01-01

    The primary objectives of this evaluation were to improve the performance of the Whooping Crane Habitat Suitability model (C4R) used by the U.S. Fish and Wildlife Service (Service) for defining the relationship between river discharge and habitat availability, and to assist the Service in implementing improved model(s) with existing hydraulic files. The C4R habitat model is applied at the scale of individual river cross-sections, but the model outputs are scaled up to larger reaches of the river using a decision support “model” composed of other data and procedures. Hence, the validity of the habitat model depends at least partially on how its outputs are incorporated into this larger context. For that reason, we also evaluated other procedures including the PHABSIM data files, the FORTRAN computer programs used to implement the model, and other parameters used to simulate the relationship between river flows and the availability of Whooping Crane roosting habitat along more than 100 miles of heterogeneous river channels. An equally important objective of this report was to fully document these related procedures as well as the model and evaluation results so that interested parties could readily understand the technical basis for the Service’s recommendations.

  15. Line overlap and self-shielding of molecular hydrogen in galaxies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gnedin, Nickolay Y.; Draine, Bruce T., E-mail: gnedin@fnal.gov, E-mail: andrey@oddjob.uchicago.edu, E-mail: draine@astro.princeton.edu

    2014-11-01

    The effect of line overlap in the Lyman and Werner bands, often ignored in galactic studies of the atomic-to-molecular transition, greatly enhances molecular hydrogen self-shielding in low-metallicity environments and dominates over dust shielding for metallicities below about 10% solar. We implement that effect in cosmological hydrodynamics simulations with an empirical model, calibrated against the observational data, and provide fitting formulae for the molecular hydrogen fraction as a function of gas density on various spatial scales and in environments with varied dust abundance and interstellar radiation field. We find that line overlap, while important for detailed radiative transfer in the Lyman and Werner bands, has only a minor effect on star formation on galactic scales, which, to a much larger degree, is regulated by stellar feedback.

  16. Detecting magnetic ordering with atomic size electron probes

    DOE PAGES

    Idrobo, Juan Carlos; Rusz, Ján; Spiegelberg, Jakob; ...

    2016-05-27

    While magnetism originates at the atomic scale, the existing spectroscopic techniques sensitive to magnetic signals only produce spectra with spatial resolution on a larger scale. However, recently it has been theoretically argued that atomic-size electron probes with customized phase distributions can detect magnetic circular dichroism. Here, we report a direct experimental real-space detection of magnetic circular dichroism in aberration-corrected scanning transmission electron microscopy (STEM). Using an atomic-size, aberrated electron probe with a customized phase distribution, we reveal the checkerboard antiferromagnetic ordering of Mn moments in LaMnAsO by observing a dichroic signal in the Mn L-edge. The novel experimental setup presented here, which can easily be implemented in aberration-corrected STEM, opens new paths for probing dichroic signals in materials with unprecedented spatial resolution.

  17. Effect of ambient light on monoclonal antibody product quality during small-scale mammalian cell culture process in clear glass bioreactors.

    PubMed

    Mallaney, Mary; Wang, Szu-Han; Sreedhara, Alavattam

    2014-01-01

    During a small-scale cell culture process producing a monoclonal antibody, a larger-than-expected difference was observed in the charge variant profile of the harvested cell culture fluid (HCCF) between the 2 L and larger scales (e.g., 400 L and 12 kL). Small-scale studies performed at the 2 L scale consistently showed an increase in acidic species when compared with the material made at larger scale. Since the 2 L bioreactors were made of clear transparent glass while the larger-scale reactors were made of stainless steel, the effect of ambient laboratory light on the cell culture process in 2 L bioreactors, as well as on handling of the HCCF, was carefully evaluated. Photoreactions in the 2 L glass bioreactors, including a light-mediated increase in acidic variants in HCCF and formulation buffers, were identified and carefully analyzed. While the acidic variants comprised a mixture of sialylated, reduced-disulfide, crosslinked (nonreducible), glycated, and deamidated forms, an increase in the nonreducible forms, deamidation and Met oxidation was predominantly observed under light stress. The monoclonal antibody produced in glass bioreactors that were protected from light behaved similarly to the one produced at the larger scale. Our data clearly indicate that care should be taken when glass bioreactors are used in cell culture studies during monoclonal antibody production. © 2014 American Institute of Chemical Engineers.

  18. A program for handling map projections of small-scale geospatial raster data

    USGS Publications Warehouse

    Finn, Michael P.; Steinwand, Daniel R.; Trent, Jason R.; Buehler, Robert A.; Mattli, David M.; Yamamoto, Kristina H.

    2012-01-01

    Scientists routinely accomplish small-scale geospatial modeling using raster datasets of global extent. Such use often requires the projection of global raster datasets onto a map or the reprojection from a given map projection associated with a dataset. The distortion characteristics of these projection transformations can have significant effects on modeling results. Distortions associated with the reprojection of global data are generally greater than distortions associated with reprojections of larger-scale, localized areas. The accuracy of areas in projected raster datasets of global extent is dependent on spatial resolution. To address these problems of projection and the associated resampling that accompanies it, methods for framing the transformation space, direct point-to-point transformations rather than gridded transformation spaces, a solution to the wrap-around problem, and an approach to alternative resampling methods are presented. The implementations of these methods are provided in an open-source software package called MapImage (or mapIMG, for short), which is designed to function on a variety of computer architectures.
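
    The direct point-to-point transformation idea, projecting each raster cell's coordinates individually rather than through a gridded transformation space, can be illustrated with the pyproj library, as in the sketch below. The choice of source and target projections and the sample grid are hypothetical; this is not the mapIMG implementation.

```python
import numpy as np
from pyproj import Transformer

# Transform geographic coordinates (lon/lat, WGS84) directly into an equal-area
# world projection (Mollweide), one point per raster cell center, and back again.
fwd = Transformer.from_crs("EPSG:4326", "+proj=moll +datum=WGS84", always_xy=True)
inv = Transformer.from_crs("+proj=moll +datum=WGS84", "EPSG:4326", always_xy=True)

# Hypothetical coarse global grid of cell centers (10-degree spacing).
lons, lats = np.meshgrid(np.arange(-175.0, 180.0, 10.0), np.arange(-85.0, 90.0, 10.0))
x, y = fwd.transform(lons.ravel(), lats.ravel())   # point-to-point forward projection
lon_back, lat_back = inv.transform(x, y)           # inverse projection for a round-trip check

print("max round-trip error (degrees):",
      max(np.abs(lon_back - lons.ravel()).max(), np.abs(lat_back - lats.ravel()).max()))
```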

  19. Does implementing a development plan for user participation in a mental hospital change patients' experience? A non-randomized controlled study.

    PubMed

    Rise, Marit B; Steinsbekk, Aslak

    2015-10-01

    Governments in several countries attempt to strengthen user participation by instructing health-care organizations to implement user participation initiatives. There is, however, little knowledge of the effect that comprehensive plans for enhancing user participation across whole health service organizations have on patients' experience. The aim was to investigate whether implementing a development plan intended to enhance user participation in a mental hospital had any effect on the patients' experience of user participation. A non-randomized controlled study was conducted, including patients in three mental hospitals in Central Norway: one intervention hospital and two control hospitals. A development plan intended to enhance user participation was implemented in the intervention hospital as part of a larger reorganizational process. The plan included establishment of a patient education centre and a user office, purchase of user expertise, appointment of contact professionals for next of kin and improvement of the centre's information and the professional culture. Outcomes were measured with the Perceptions of Care questionnaire, the Inpatient Treatment Alliance Scale and questions developed for this study. A total of 1651 patients participated. Implementing a development plan intended to enhance user participation in a mental hospital had no significant effect on the patients' experience of user participation. The lack of effect can be due to inappropriate initiatives or challenges in implementation processes. Further research should ensure that initiatives and implementation processes are appropriate to impact the patients' experience. © 2013 John Wiley & Sons Ltd.

  20. Image segmentation by hierarchical agglomeration of polygons using ecological statistics

    DOEpatents

    Prasad, Lakshman; Swaminarayan, Sriram

    2013-04-23

    A method for rapid hierarchical image segmentation based on perceptually driven contour completion and scene statistics is disclosed. The method begins with an initial fine-scale segmentation of an image, such as obtained by perceptual completion of partial contours into polygonal regions using region-contour correspondences established by Delaunay triangulation of edge pixels as implemented in VISTA. The resulting polygons are analyzed with respect to their size and color/intensity distributions and the structural properties of their boundaries. Statistical estimates of granularity of size, similarity of color, texture, and saliency of intervening boundaries are computed and formulated into logical (Boolean) predicates. The combined satisfiability of these Boolean predicates by a pair of adjacent polygons at a given segmentation level qualifies them for merging into a larger polygon representing a coarser, larger-scale feature of the pixel image and collectively obtains the next level of polygonal segments in a hierarchy of fine-to-coarse segmentations. The iterative application of this process precipitates textured regions as polygons with highly convolved boundaries and helps distinguish them from objects which typically have more regular boundaries. The method yields a multiscale decomposition of an image into constituent features that enjoy a hierarchical relationship with features at finer and coarser scales. This provides a traversable graph structure from which feature content and context in terms of other features can be derived, aiding in automated image understanding tasks. The method disclosed is highly efficient and can be used to decompose and analyze large images.
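
    A much-simplified sketch of the merge rule described above, in which adjacent regions are merged only when a conjunction of Boolean predicates on size, color similarity and boundary saliency is satisfied, is given below. The thresholds, region attributes and example regions are hypothetical placeholders, not the patented VISTA implementation.

```python
from dataclasses import dataclass

@dataclass
class Region:
    area: float          # polygon area in pixels
    color: float         # mean intensity (a color vector in a fuller version)
    boundary_grad: dict  # neighbor id -> mean gradient along the shared boundary

def should_merge(a, b, b_id, max_area=500.0, max_dcolor=12.0, max_saliency=8.0):
    """Conjunction of Boolean predicates: both regions small enough, similar enough
    in color, and separated by a weak (non-salient) intervening boundary."""
    small = a.area <= max_area and b.area <= max_area
    similar = abs(a.color - b.color) <= max_dcolor
    weak_boundary = a.boundary_grad.get(b_id, 0.0) <= max_saliency
    return small and similar and weak_boundary

# Hypothetical adjacent regions from a fine-scale segmentation level.
r1 = Region(area=120.0, color=100.0, boundary_grad={2: 3.5})
r2 = Region(area=340.0, color=108.0, boundary_grad={1: 3.5})
print("merge r1 and r2 at this level:", should_merge(r1, r2, b_id=2))
```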

  1. A strategy for implementing genomics into nursing practice informed by three behaviour change theories.

    PubMed

    Leach, Verity; Tonkin, Emma; Lancastle, Deborah; Kirk, Maggie

    2016-06-01

    Genomics is an ever increasing aspect of nursing practice, with focus being directed towards improving health. The authors present an implementation strategy for the incorporation of genomics into nursing practice within the UK, based on three behaviour change theories and the identification of individuals who are likely to provide support for change. Individuals identified as Opinion Leaders and Adopters of genomics illustrate how changes in behaviour might occur among the nursing profession. The core philosophy of the strategy is that genomic nurse Adopters and Opinion Leaders who have direct interaction with their peers in practice will be best placed to highlight the importance of genomics within the nursing role. The strategy discussed in this paper provides scope for continued nursing education and development of genomics within nursing practice on a larger scale. The recommendations might be of particular relevance for senior staff and management. © 2016 John Wiley & Sons Australia, Ltd.

  2. Implementing Parquet equations using HPX

    NASA Astrophysics Data System (ADS)

    Kellar, Samuel; Wagle, Bibek; Yang, Shuxiang; Tam, Ka-Ming; Kaiser, Hartmut; Moreno, Juana; Jarrell, Mark

    A new C++ runtime system (HPX) enables simulations of complex systems to run more efficiently on parallel and heterogeneous systems. This increased efficiency allows larger simulations of the parquet approximation for a system with impurities. The relevance of the parquet equations depends upon the ability to solve systems that require long runs and large amounts of memory. These limitations, in addition to numerical complications arising from the stability of the solutions, necessitate running on large distributed systems. As computational resources trend towards the exascale and the limitations they impose recede, the efficiency of large-scale simulations becomes a focus. HPX facilitates efficient simulations through intelligent overlapping of computation and communication. Simulations such as the parquet equations, which require the transfer of large amounts of data, should benefit from HPX implementations. Supported by the NSF EPSCoR Cooperative Agreement No. EPS-1003897 with additional support from the Louisiana Board of Regents.
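
    The key idea invoked above, overlapping computation with communication so that data transfers hide behind useful work, can be illustrated generically with a thread pool: start the transfer, compute everything that does not depend on it, then wait. The sketch below is a language-agnostic illustration of that pattern, not HPX itself; the "transfer" and "work" functions are hypothetical stand-ins.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fetch_remote_block(delay=0.5):
    """Stand-in for a communication step (e.g., receiving a block of vertex-function data)."""
    time.sleep(delay)
    return [float(i) for i in range(1000)]

def local_work():
    """Stand-in for computation that does not depend on the incoming data."""
    return sum(i * i for i in range(200_000))

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=1) as pool:
    future = pool.submit(fetch_remote_block)   # communication proceeds in the background
    partial = local_work()                     # ...while independent computation runs
    block = future.result()                    # wait only when the data is actually needed
combined = partial + sum(block)
print(f"overlapped compute + transfer finished in {time.perf_counter() - start:.2f} s")
```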

  3. Feasibility of utilizing Cherenkov Telescope Array gamma-ray telescopes as free-space optical communication ground stations.

    PubMed

    Carrasco-Casado, Alberto; Vilera, Mariafernanda; Vergaz, Ricardo; Cabrero, Juan Francisco

    2013-04-10

    The signals that will be received on Earth from deep-space probes in future implementations of free-space optical communication will be extremely weak, and new ground stations will have to be developed in order to support these links. This paper addresses the feasibility of using the technology developed for the gamma-ray telescopes that will make up the Cherenkov Telescope Array (CTA) observatory in the implementation of a new kind of ground station. Among the main advantages these telescopes provide are the much larger apertures needed to overcome the power limitation shared by ground-based gamma-ray astronomy and optical communication. Also, the large number of big telescopes that will be built for CTA will make it possible to reduce costs through economies of scale in production, enabling optical communication with the large telescopes that will be needed for future deep-space links.

  4. Implementation of sediment dynamics in a global integrated assessment model for an improved simulation of nutrient retention and transfers in surface freshwaters

    NASA Astrophysics Data System (ADS)

    Vilmin, L.; Beusen, A.; Mogollón, J.; Bouwman, L.

    2017-12-01

    Sediment dynamics play a significant role in river biogeochemical functioning. They notably control the transfer of particle-bound nutrients and have a direct influence on light availability for primary production, and particle accumulation can affect the oxic conditions of river beds. With the aim of improving our current understanding of large-scale nutrient fluxes in rivers, it is therefore necessary to include these dynamics in global models. To this end, we implement particle accumulation and remobilization in a coupled global hydrology-nutrient model (IMAGE-GNM) at a spatial resolution of 0.5°. The transfer of soil loss from natural and agricultural lands is simulated mechanistically, from headwater streams to estuaries. First tests of the model are performed in the Mississippi river basin. At a yearly time step for the period 1978-2000, the average difference between simulated and measured suspended sediment concentrations at the most downstream monitoring station is 25%. Sediment retention is estimated in the different Strahler stream orders and in lakes and reservoirs. We discuss: 1) the distribution of sediment loads to small streams, which has a significant effect on transfers through watersheds and larger-scale river fluxes, and 2) the potential effect of damming on the fate of particle-bound nutrients. These new developments are crucial for future assessments of large-scale nutrient and carbon fluxes in river systems.

  5. Establishing a Framework for Community Modeling in Hydrologic Science: Recommendations from the CUAHSI CHyMP Initiative

    NASA Astrophysics Data System (ADS)

    Arrigo, J. S.; Famiglietti, J. S.; Murdoch, L. C.; Lakshmi, V.; Hooper, R. P.

    2012-12-01

    The Consortium of Universities for the Advancement of Hydrologic Science, Inc. (CUAHSI) continues a major effort towards supporting Community Hydrologic Modeling. From 2009 to 2011, the Community Hydrologic Modeling Platform (CHyMP) initiative held three workshops, the ultimate goal of which was to produce recommendations and an implementation plan to establish a community modeling program that enables comprehensive simulation of water anywhere on the North American continent. Such an effort would include connections to and advances in global climate models, biogeochemistry, and efforts of other disciplines that require an understanding of water patterns and processes in the environment. To achieve such a vision will require substantial investment in human and cyber-infrastructure and significant advances in the science of hydrologic modeling and spatial scaling. CHyMP concluded with a final workshop, held in March 2011, and produced several recommendations. CUAHSI and the university community continue to advance community modeling and implement these recommendations through several related and follow-on efforts. Key results from the final 2011 workshop included agreement among participants that the community is ready to move forward with implementation. It is recognized that initial implementation of this larger effort can begin with simulation capabilities that currently exist, or that can be easily developed. CHyMP identified four key activities in support of community modeling: benchmarking, dataset evaluation and development, platform evaluation, and developing a national water model framework. Key findings included: 1) The community supported the idea of a National Water Model framework; a community effort is needed to explore what the ultimate implementation of a National Water Model is. A true community modeling effort would support the modeling of "water anywhere" and would include all relevant scales and processes. 2) Implementation of a community modeling program could initially focus on continental-scale modeling of water quantity (rather than quality). The goal of this initial model is the comprehensive description of water stores and fluxes in such a way as to permit linkage to GCMs, biogeochemical, ecological, and geomorphic models. This continental-scale focus allows systematic evaluation of our current state of knowledge and data, leverages existing efforts by large-scale modelers, contributes to scientific discovery that informs globally and societally relevant questions, and provides an initial framework to evaluate hydrologic information relevant to other disciplines and a structure into which to incorporate other classes of hydrologic models. 3) Dataset development will be a key aspect of any successful national water model implementation. Our current knowledge of the subsurface is limiting our ability to truly integrate soil and groundwater into large-scale models, and to answer critical science questions with societal relevance (i.e., groundwater's influence on climate). 4) The CHyMP workshops and efforts to date have achieved collaboration between university scientists, government agencies and the private sector that must be maintained. Follow-on efforts in community modeling should aim at leveraging and maintaining this collaboration for maximum scientific and societal benefit.

  6. Politics is nothing but medicine at a larger scale: reflections on public health's biggest idea.

    PubMed

    Mackenbach, J P

    2009-03-01

    This article retraces the historical origins and contemporary resonances of Rudolf Virchow's famous statement "Medicine is a social science, and politics nothing but medicine at a larger scale". Virchow was convinced that social inequality was a root cause of ill-health, and that medicine therefore had to be a social science. Because of their intimate knowledge of the problems of society, doctors, according to Virchow, also were better statesmen. Although Virchow's analogies between biology and sociology are out of date, some of his core ideas still resonate in public health. This applies particularly to the notion that whole populations can be sick, and that political action may be needed to cure them. Aggregate population health may well be different from the sum (or average) of the health statuses of all individual members: populations sometimes operate as malfunctioning systems, and positive feedback loops will let population health diverge from the aggregate of individual health statuses. There is considerable controversy among epidemiologists and public health professionals about how far one should go in influencing political processes. A "ladder of political activism" is proposed to help clarify this issue, and examples of recent public health successes are given which show that some political action has often been required before effective public health policies and interventions could be implemented.

  7. Image-Enhancement Aid For The Partially Sighted

    NASA Technical Reports Server (NTRS)

    Lawton, T. A.; Gennery, D. B.

    1989-01-01

    Digital filtering enhances the ability to read and to recognize objects. It is possible to construct a portable vision aid by combining miniature video equipment to observe the scene and display images with very-large-scale integrated circuits that implement real-time digital image-data processing. The afflicted observer views the scene through a magnifier to shift spatial frequencies downward and thereby improve the perceived image. However, the less magnification needed, the larger the scene that can be observed. Thus, one measure of the effectiveness of the new system is the amount of magnification required with and without it. In a series of tests, afflicted observers were found to need 27 to 70 percent more magnification to recognize unfiltered words than to recognize filtered words.

  8. eqMAXEL: A new automatic earthquake location algorithm implementation for Earthworm

    NASA Astrophysics Data System (ADS)

    Lisowski, S.; Friberg, P. A.; Sheen, D. H.

    2017-12-01

    A common problem with automated earthquake location systems for a local- to regional-scale seismic network is false triggering and false locations inside the network caused by earthquakes at larger regional to teleseismic distances. This false-location issue also presents a problem for earthquake early warning systems, where the societal impacts of false alarms can be very expensive. Towards solving this issue, Sheen et al. (2016) implemented a robust maximum-likelihood earthquake location algorithm known as MAXEL. It was shown, with both synthetics and real data for a small number of arrivals, that large regional events are easily identifiable through metrics in the MAXEL algorithm. In the summer of 2017, we collaboratively implemented the MAXEL algorithm as a fully functional Earthworm module and tested it in regions of the USA where false detections and alarms are observed. We show robust improvement in the ability of the Earthworm system to filter out regional and teleseismic events that would have been falsely located inside the network using the traditional Earthworm hypoinverse solution. We also explore using different grid sizes in the implementation of the MAXEL algorithm, which was originally designed with South Korea as the target network size.
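
    A toy version of grid-based maximum-likelihood location, evaluating an arrival-time misfit over a grid of trial epicenters and picking the best node, is sketched below. It uses a uniform velocity and a robust L1 misfit purely for illustration; the station geometry, velocity and misfit are hypothetical choices, and this is not the MAXEL algorithm itself.

```python
import numpy as np

# Hypothetical station coordinates (km) and observed P arrival times (s).
stations = np.array([[0.0, 0.0], [40.0, 5.0], [10.0, 35.0], [45.0, 40.0]])
true_src, v, t0 = np.array([22.0, 18.0]), 6.0, 1.0
obs = t0 + np.linalg.norm(stations - true_src, axis=1) / v

# Trial grid of epicenters; the unknown origin time is absorbed by demeaning residuals.
xs, ys = np.meshgrid(np.linspace(0, 50, 101), np.linspace(0, 50, 101))
best, best_misfit = None, np.inf
for x, y in zip(xs.ravel(), ys.ravel()):
    pred = np.linalg.norm(stations - np.array([x, y]), axis=1) / v
    resid = obs - pred
    misfit = np.abs(resid - np.median(resid)).sum()   # robust L1 misfit
    if misfit < best_misfit:
        best, best_misfit = (x, y), misfit

print("recovered epicenter:", best, " true:", tuple(true_src))
```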

  9. Demonstration of a small programmable quantum computer with atomic qubits.

    PubMed

    Debnath, S; Linke, N M; Figgatt, C; Landsman, K A; Wright, K; Monroe, C

    2016-08-04

    Quantum computers can solve certain problems more efficiently than any possible conventional computer. Small quantum algorithms have been demonstrated on multiple quantum computing platforms, many specifically tailored in hardware to implement a particular algorithm or execute a limited number of computational paths. Here we demonstrate a five-qubit trapped-ion quantum computer that can be programmed in software to implement arbitrary quantum algorithms by executing any sequence of universal quantum logic gates. We compile algorithms into a fully connected set of gate operations that are native to the hardware and have a mean fidelity of 98 per cent. Reconfiguring these gate sequences provides the flexibility to implement a variety of algorithms without altering the hardware. As examples, we implement the Deutsch-Jozsa and Bernstein-Vazirani algorithms with average success rates of 95 and 90 per cent, respectively. We also perform a coherent quantum Fourier transform on five trapped-ion qubits for phase estimation and period finding with average fidelities of 62 and 84 per cent, respectively. This small quantum computer can be scaled to larger numbers of qubits within a single register, and can be further expanded by connecting several such modules through ion shuttling or photonic quantum channels.

  10. Demonstration of a small programmable quantum computer with atomic qubits

    NASA Astrophysics Data System (ADS)

    Debnath, S.; Linke, N. M.; Figgatt, C.; Landsman, K. A.; Wright, K.; Monroe, C.

    2016-08-01

    Quantum computers can solve certain problems more efficiently than any possible conventional computer. Small quantum algorithms have been demonstrated on multiple quantum computing platforms, many specifically tailored in hardware to implement a particular algorithm or execute a limited number of computational paths. Here we demonstrate a five-qubit trapped-ion quantum computer that can be programmed in software to implement arbitrary quantum algorithms by executing any sequence of universal quantum logic gates. We compile algorithms into a fully connected set of gate operations that are native to the hardware and have a mean fidelity of 98 per cent. Reconfiguring these gate sequences provides the flexibility to implement a variety of algorithms without altering the hardware. As examples, we implement the Deutsch-Jozsa and Bernstein-Vazirani algorithms with average success rates of 95 and 90 per cent, respectively. We also perform a coherent quantum Fourier transform on five trapped-ion qubits for phase estimation and period finding with average fidelities of 62 and 84 per cent, respectively. This small quantum computer can be scaled to larger numbers of qubits within a single register, and can be further expanded by connecting several such modules through ion shuttling or photonic quantum channels.

  11. Toward a more efficient and scalable checkpoint/restart mechanism in the Community Atmosphere Model

    NASA Astrophysics Data System (ADS)

    Anantharaj, Valentine

    2015-04-01

    The number of cores (both CPU and accelerator) in large-scale systems has been increasing rapidly over the past several years. In 2008, there were only 5 systems in the Top500 list that had over 100,000 total cores (including accelerator cores), whereas the number of systems with such capability had jumped to 31 by Nov 2014. This growth, however, has also increased the risk of hardware failures, necessitating the implementation of fault tolerance mechanisms in applications. The checkpoint and restart (C/R) approach is commonly used to save the state of the application and restart at a later time, either after a failure or to continue execution of experiments. The implementation of an efficient C/R mechanism will make it more affordable to output the necessary C/R files more frequently. The availability of larger systems (more nodes, memory and cores) has also facilitated the scaling of applications. Nowadays, it is more common to conduct coupled global climate simulation experiments at 1 deg horizontal resolution (atmosphere), often requiring about 10³ cores. At the same time, a few climate modeling teams that have access to a dedicated cluster and/or large-scale systems are involved in modeling experiments at 0.25 deg horizontal resolution (atmosphere) and 0.1 deg resolution for the ocean. These ultrascale configurations require on the order of 10⁴ to 10⁵ cores. It is not only necessary for the numerical algorithms to scale efficiently but the input/output (IO) mechanism must also scale accordingly. An ongoing series of ultrascale climate simulations, using the Titan supercomputer at the Oak Ridge Leadership Computing Facility (ORNL), is based on the spectral element dynamical core of the Community Atmosphere Model (CAM-SE), which is a component of the Community Earth System Model and the DOE Accelerated Climate Model for Energy (ACME). The CAM-SE dynamical core for a 0.25 deg configuration has been shown to scale efficiently across 100,000 CPU cores. At this scale, there is an increased risk that the simulation could be terminated due to hardware failures, resulting in a loss that could be as high as 10⁵-10⁶ Titan core-hours. Increasing the frequency of the output of C/R files could mitigate this loss, but at the cost of additional C/R overhead. We are testing a more efficient C/R mechanism in CAM-SE. Our early implementation has demonstrated a nearly 3X performance improvement for a 1 deg CAM-SE (with CAM5 physics and MOZART chemistry) configuration using nearly 10³ cores. We are in the process of scaling our implementation to 10⁵ cores. This would allow us to run ultrascale simulations with more sophisticated physics and chemistry options while making better use of resources.
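
    The checkpoint/restart pattern itself is simple to sketch: periodically serialize the model state together with the step counter, and on startup resume from the newest valid checkpoint if one exists. The sketch below is a generic single-process illustration (the hard part at 10⁵ cores is the parallel I/O, which it does not address); the file names and the toy "model state" are hypothetical.

```python
import glob
import os
import pickle

import numpy as np

CKPT_PATTERN = "ckpt_step{:06d}.pkl"

def save_checkpoint(step, state):
    tmp = CKPT_PATTERN.format(step) + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump({"step": step, "state": state}, f)
    os.replace(tmp, CKPT_PATTERN.format(step))     # atomic rename: never leave a torn file

def load_latest_checkpoint():
    files = sorted(glob.glob("ckpt_step*.pkl"))
    if not files:
        return 0, np.zeros(1000)                   # cold start
    with open(files[-1], "rb") as f:
        data = pickle.load(f)
    return data["step"] + 1, data["state"]         # resume after the last saved step

start_step, state = load_latest_checkpoint()
for step in range(start_step, start_step + 50):
    state = state + 0.01 * np.sin(state + step)    # stand-in for one model time step
    if step % 10 == 0:
        save_checkpoint(step, state)
```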

  12. Advances in Landslide Nowcasting: Evaluation of a Global and Regional Modeling Approach

    NASA Technical Reports Server (NTRS)

    Kirschbaum, Dalia Bach; Peters-Lidard, Christa; Adler, Robert; Hong, Yang; Kumar, Sujay; Lerner-Lam, Arthur

    2011-01-01

    The increasing availability of remotely sensed data offers a new opportunity to address landslide hazard assessment at larger spatial scales. A prototype global satellite-based landslide hazard algorithm has been developed to identify areas that may experience landslide activity. This system combines a calculation of static landslide susceptibility with satellite-derived rainfall estimates and uses a threshold approach to generate a set of nowcasts that classify potentially hazardous areas. A recent evaluation of this algorithm framework found that while this tool represents an important first step in larger-scale near real-time landslide hazard assessment efforts, it requires several modifications before it can be fully realized as an operational tool. This study draws upon a prior work's recommendations to develop a new approach for considering landslide susceptibility and hazard at the regional scale. This case study calculates a regional susceptibility map using remotely sensed and in situ information and a database of landslides triggered by Hurricane Mitch in 1998 over four countries in Central America. The susceptibility map is evaluated with a regional rainfall intensity-duration triggering threshold and results are compared with the global algorithm framework for the same event. Evaluation of this regional system suggests that this empirically based approach provides one plausible way to approach some of the data and resolution issues identified in the global assessment. The presented methodology is straightforward to implement, improves upon the global approach, and allows for results to be transferable between regions. The results also highlight several remaining challenges, including the empirical nature of the algorithm framework and adequate information for algorithm validation. Conclusions suggest that integrating additional triggering factors such as soil moisture may help to improve algorithm performance accuracy. The regional algorithm scenario represents an important step forward in advancing regional and global-scale landslide hazard assessment.

  13. Costs and cost-effectiveness of vector control in Eritrea using insecticide-treated bed nets.

    PubMed

    Yukich, Joshua O; Zerom, Mehari; Ghebremeskel, Tewolde; Tediosi, Fabrizio; Lengeler, Christian

    2009-03-30

    While insecticide-treated nets (ITNs) are a recognized effective method for preventing malaria, there has been an extensive debate in recent years about the best large-scale implementation strategy. Implementation costs and cost-effectiveness are important elements to consider when planning ITN programmes, but so far little information on these aspects is available from national programmes. This study uses a standardized methodology, as part of a larger comparative study, to collect cost data and cost-effectiveness estimates from a large programme providing ITNs at the community level and ante-natal care facilities in Eritrea. This is a unique model of ITN implementation fully integrated into the public health system. Base case analysis results indicated that the average annual cost of ITN delivery (2005 USD 3.98) was very attractive when compared with past ITN delivery studies at different scales. Financing was largely from donor sources though the Eritrean government and net users also contributed funding. The intervention's cost-effectiveness was in a highly attractive range for sub-Saharan Africa. The cost per DALY averted was USD 13 - 44. The cost per death averted was USD 438-1449. Distribution of nets coincided with significant increases in coverage and usage of nets nationwide, approaching or exceeding international targets in some areas. ITNs can be cost-effectively delivered at a large scale in sub-Saharan Africa through a distribution system that is highly integrated into the health system. Operating and sustaining such a system still requires strong donor funding and support as well as a functional and extensive system of health facilities and community health workers already in place.

  14. Study of metallic powder behavior in very low pressure plasma spraying (VLPPS) — Application to the manufacturing of titanium–aluminum coatings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vautherin, B.; Planche, M.-P.; Montavon, G.

    2015-08-28

    In this study, metallic materials made of aluminum and titanium were manufactured implementing very low pressure plasma spraying (VLPPS). Aluminum was selected first as a demonstrative material due to its rather low vaporization enthalpy (i.e., 381.9 kJ·mol⁻¹). Developments were then carried out with titanium, which exhibits a higher vaporization enthalpy (i.e., 563.6 kJ·mol⁻¹). Optical emission spectroscopy (OES) was implemented to analyze the behavior of each solid precursor (metallic powders) when injected into the plasma jet under very low pressure (i.e., in the 150 Pa range). In addition, aluminum, titanium and titanium–aluminum coatings were deposited under the same conditions implementing a stick-cathode plasma torch operated at 50 kW maximum power. Coating phase compositions were identified by X-Ray Diffraction (XRD). Coating elementary compositions were quantified by Glow Discharge Optical Emission Spectroscopy (GDOES) and Energy Dispersive Spectroscopy (EDS) analyses. The coating structures were observed by Scanning Electron Microscopy (SEM). The coating void content was determined by Ultra-Small Angle X-ray Scattering (USAXS). The coatings exhibit a two-scale structure corresponding to condensed vapors (smaller scale) and solidified areas (larger scale). Titanium–aluminum sprayed coatings, with various Ti/Al atomic ratios, are constituted of three phases: metastable α-Ti, Al and metastable α₂-Ti₃Al. The latter is formed at elevated temperature in the plasma flow, before being condensed. Its rather small fraction, limited by the rather small amount of vaporized Ti, does not, however, allow modification of the coating hardness.

  15. Characterization of UMT2013 Performance on Advanced Architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Howell, Louis

    2014-12-31

    This paper presents part of a larger effort to make detailed assessments of several proxy applications on various advanced architectures, with the eventual goal of extending these assessments to codes of programmatic interest running more realistic simulations. The focus here is on UMT2013, a proxy implementation of deterministic transport for unstructured meshes. I present weak and strong MPI scaling results and studies of OpenMP efficiency on the Sequoia BG/Q system at LLNL, with comparison against similar tests on an Intel Sandy Bridge TLCC2 system. The hardware counters on BG/Q provide detailed information on many aspects of on-node performance, while information from the mpiP tool gives insight into the reasons for the differing scaling behavior on these two different architectures. Preliminary tests that exploit NVRAM as extended memory on an Ivy Bridge machine designed for "Big Data" applications are also included.

  16. Promotion and advocacy for improved complementary feeding: can we apply the lessons learned from breastfeeding?

    PubMed

    Piwoz, Ellen G; Huffman, Sandra L; Quinn, Victoria J

    2003-03-01

    Although many successes have been achieved in promoting breastfeeding, this has not been the case for complementary feeding. Some successes in promoting complementary feeding at the community level have been documented, but few of these efforts have expanded to a larger scale and become sustained. To discover the reasons for this difference, the key factors for the successful promotion of breastfeeding on a large scale were examined and compared with the efforts made in complementary feeding. These factors include definition and rationale, policy support, funding, advocacy, private-sector involvement, availability and use of monitoring data, integration of research into action, and the existence of a well-articulated series of steps for successful implementation. The lessons learned from the promotion of breastfeeding should be applied to complementary feeding, and the new Global Strategy for Infant and Young Child Feeding provides an excellent first step in this process.

  17. Overspill avalanching in a dense reservoir network

    PubMed Central

    Mamede, George L.; Araújo, Nuno A. M.; Schneider, Christian M.; de Araújo, José Carlos; Herrmann, Hans J.

    2012-01-01

    Sustainability of communities, agriculture, and industry is strongly dependent on an effective storage and supply of water resources. In some regions the economic growth has led to a level of water demand that can only be accomplished through efficient reservoir networks. Such infrastructures are not always planned at larger scale but rather made by farmers according to their local needs of irrigation during droughts. Based on extensive data from the upper Jaguaribe basin, one of the world's largest systems of reservoirs, located in the Brazilian semiarid northeast, we reveal that, surprisingly, it self-organizes into a scale-free network, exhibiting also a power law in the distribution of lake sizes and avalanches of discharges. With a new self-organized-criticality-type model we manage to explain the novel critical exponents. Implementing a flow model, we are able to reproduce the measured overspill evolution, providing a tool for catastrophe mitigation and future planning. PMID:22529343
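
    For readers who want to check a claimed power law such as the avalanche-size distribution described above, the standard maximum-likelihood estimator for a continuous power law can be applied. The sketch below uses synthetic data, not the Jaguaribe records, and the estimator alpha = 1 + n / sum(ln(x_i / x_min)).

```python
import numpy as np

# Maximum-likelihood estimate of a power-law exponent (continuous case):
#   alpha_hat = 1 + n / sum(ln(x_i / x_min))
# The data here are synthetic; the paper's avalanche/lake-size data are not reproduced.

rng = np.random.default_rng(0)
alpha_true, x_min, n = 2.5, 1.0, 50_000
u = rng.random(n)
x = x_min * (1.0 - u) ** (-1.0 / (alpha_true - 1.0))   # inverse-CDF sampling of a power law

alpha_hat = 1.0 + n / np.sum(np.log(x / x_min))
stderr = (alpha_hat - 1.0) / np.sqrt(n)                # asymptotic standard error
print(f"estimated exponent: {alpha_hat:.3f} +/- {stderr:.3f}")
```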

  18. MOBE-ChIP: Probing Cell Type-Specific Binding Through Large-Scale Chromatin Immunoprecipitation.

    PubMed

    Wang, Shenqi; Lau, On Sun

    2018-01-01

    In multicellular organisms, the initiation and maintenance of specific cell types often require the activity of cell type-specific transcriptional regulators. Understanding their roles in gene regulation is crucial, but probing their DNA targets in vivo, especially in a genome-wide manner, remains a technical challenge because of their limited expression. To improve the sensitivity of chromatin immunoprecipitation (ChIP) for detecting cell type-specific signals, we have developed the Maximized Objects for Better Enrichment (MOBE)-ChIP, where ChIP is performed at a substantially larger experimental scale and under low background conditions. Here, we describe the procedure in the study of transcription factors in the model plant Arabidopsis. However, with some modifications, the technique should also be applicable to other systems. Besides cell type-specific studies, MOBE-ChIP can also be used as a general strategy to improve ChIP signals.

  19. GENASIS Mathematics : Object-oriented manifolds, operations, and solvers for large-scale physics simulations

    NASA Astrophysics Data System (ADS)

    Cardall, Christian Y.; Budiardja, Reuben D.

    2018-01-01

    The large-scale computer simulation of a system of physical fields governed by partial differential equations requires some means of approximating the mathematical limit of continuity. For example, conservation laws are often treated with a 'finite-volume' approach in which space is partitioned into a large number of small 'cells,' with fluxes through cell faces providing an intuitive discretization modeled on the mathematical definition of the divergence operator. Here we describe and make available Fortran 2003 classes furnishing extensible object-oriented implementations of simple meshes and the evolution of generic conserved currents thereon, along with individual 'unit test' programs and larger example problems demonstrating their use. These classes inaugurate the Mathematics division of our developing astrophysics simulation code GENASIS (General Astrophysical Simulation System), which will be expanded over time to include additional meshing options, mathematical operations, solver types, and solver variations appropriate for many multiphysics applications.
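
    As a minimal illustration of the finite-volume idea summarized above (cell averages updated from fluxes through cell faces), the sketch below advects a pulse in 1-D with an upwind flux. It is written in Python for brevity and is not part of the GENASIS Fortran classes.

```python
import numpy as np

# Minimal 1-D finite-volume update for linear advection, u_t + a u_x = 0.
# Cell averages are updated from fluxes through cell faces (upwind flux),
# mirroring the divergence-based discretization described above.
# Illustrative sketch only, not code from GENASIS.

a, n_cells, L = 1.0, 200, 1.0
dx = L / n_cells
dt = 0.4 * dx / a                       # CFL-limited time step
x = (np.arange(n_cells) + 0.5) * dx
u0 = np.exp(-200.0 * (x - 0.3) ** 2)    # initial condition: a Gaussian pulse
u = u0.copy()

for _ in range(int(0.4 / dt)):
    flux = a * np.roll(u, 1)            # upwind flux at each cell's left face (periodic domain)
    u += -(dt / dx) * (np.roll(flux, -1) - flux)   # right-face minus left-face flux

print("total 'mass' conserved:", np.isclose(u.sum(), u0.sum()))
```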

  20. Entanglement-Based Machine Learning on a Quantum Computer

    NASA Astrophysics Data System (ADS)

    Cai, X.-D.; Wu, D.; Su, Z.-E.; Chen, M.-C.; Wang, X.-L.; Li, Li; Liu, N.-L.; Lu, C.-Y.; Pan, J.-W.

    2015-03-01

    Machine learning, a branch of artificial intelligence, learns from previous experience to optimize performance, which is ubiquitous in various fields such as computer sciences, financial analysis, robotics, and bioinformatics. A challenge is that machine learning with the rapidly growing "big data" could become intractable for classical computers. Recently, quantum machine learning algorithms [Lloyd, Mohseni, and Rebentrost, arXiv.1307.0411] were proposed which could offer an exponential speedup over classical algorithms. Here, we report the first experimental entanglement-based classification of two-, four-, and eight-dimensional vectors to different clusters using a small-scale photonic quantum computer, which are then used to implement supervised and unsupervised machine learning. The results demonstrate the working principle of using quantum computers to manipulate and classify high-dimensional vectors, the core mathematical routine in machine learning. The method can, in principle, be scaled to larger numbers of qubits, and may provide a new route to accelerate machine learning.
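
    The classical counterpart of the task demonstrated here is distance-based assignment of a vector to the nearest reference cluster. The sketch below shows that classical routine on toy vectors; it does not emulate the entanglement-based distance estimation used in the experiment.

```python
import numpy as np

# Classical core routine of the classification task: assign a query vector to the
# cluster whose reference (centroid) vector is closest. The quantum algorithm cited
# above estimates such distances with entangled states; this sketch only shows the
# classical counterpart on small synthetic vectors.

centroids = {
    "A": np.array([1.0, 0.0, 0.0, 0.0]),
    "B": np.array([0.0, 0.7, 0.7, 0.0]),
}
query = np.array([0.1, 0.6, 0.8, 0.0])

def classify(v, refs):
    # Normalize so that distance depends only on direction, as for state vectors.
    v = v / np.linalg.norm(v)
    dists = {label: np.linalg.norm(v - c / np.linalg.norm(c)) for label, c in refs.items()}
    return min(dists, key=dists.get), dists

label, dists = classify(query, centroids)
print(label, {k: round(d, 3) for k, d in dists.items()})
```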

  1. Impacts of subgrid-scale orography parameterization on simulated atmospheric fields over Korea using a high-resolution atmospheric forecast model

    NASA Astrophysics Data System (ADS)

    Lim, Kyo-Sun Sunny; Lim, Jong-Myoung; Shin, Hyeyum Hailey; Hong, Jinkyu; Ji, Young-Yong; Lee, Wanno

    2018-06-01

    A substantial over-prediction bias at low-to-moderate wind speeds in the Weather Research and Forecasting (WRF) model has been reported in the previous studies. Low-level wind fields play an important role in dispersion of air pollutants, including radionuclides, in a high-resolution WRF framework. By implementing two subgrid-scale orography parameterizations (Jimenez and Dudhia in J Appl Meteorol Climatol 51:300-316, 2012; Mass and Ovens in WRF model physics: problems, solutions and a new paradigm for progress. Preprints, 2010 WRF Users' Workshop, NCAR, Boulder, Colo. http://www.mmm.ucar.edu/wrf/users/workshops/WS2010/presentations/session%204/4-1_WRFworkshop2010Final.pdf, 2010), we tried to compare the performance of parameterizations and to enhance the forecast skill of low-level wind fields over the central western part of South Korea. Even though both subgrid-scale orography parameterizations significantly alleviated the positive bias at 10-m wind speed, the parameterization by Jimenez and Dudhia revealed a better forecast skill in wind speed under our modeling configuration. Implementation of the subgrid-scale orography parameterizations in the model did not affect the forecast skills in other meteorological fields including 10-m wind direction. Our study also brought up the problem of discrepancy in the definition of "10-m" wind between model physics parameterizations and observations, which can cause overestimated winds in model simulations. The overestimation was larger in stable conditions than in unstable conditions, indicating that the weak diurnal cycle in the model could be attributed to the representation error.

  2. Do Performance-Safety Tradeoffs Cause Hypometric Metabolic Scaling in Animals?

    PubMed

    Harrison, Jon F

    2017-09-01

    Hypometric scaling of aerobic metabolism in animals has been widely attributed to constraints on oxygen (O2) supply in larger animals, but recent findings demonstrate that O2 supply balances with need regardless of size. Larger animals also do not exhibit evidence of compensation for O2 supply limitation. Because declining metabolic rates (MRs) are tightly linked to fitness, this provides significant evidence against the hypothesis that constraints on supply drive hypometric scaling. As an alternative, ATP demand might decline in larger animals because of performance-safety tradeoffs. Larger animals, which typically reproduce later, exhibit risk-reducing strategies that lower MR. Conversely, smaller animals are more strongly selected for growth and costly neurolocomotory performance, elevating metabolism. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. A Study on Mutil-Scale Background Error Covariances in 3D-Var Data Assimilation

    NASA Astrophysics Data System (ADS)

    Zhang, Xubin; Tan, Zhe-Min

    2017-04-01

    The construction of background error covariances is a key component of three-dimensional variational data assimilation. There are different scale background errors and interactions among them in numerical weather prediction. However, the influence of these errors and their interactions cannot be represented in the background error covariance statistics when estimated by the leading methods. So, it is necessary to construct background error covariances influenced by multi-scale interactions among errors. With the NMC method, this article first estimates the background error covariances at given model-resolution scales. Then the information of errors whose scales are larger and smaller than the given ones is introduced respectively, using different nesting techniques, to estimate the corresponding covariances. The comparisons of the three background error covariance statistics influenced by information of errors at different scales reveal that the background error variances enhance particularly at large scales and higher levels when introducing the information of larger-scale errors by the lateral boundary condition provided by a lower-resolution model. On the other hand, the variances reduce at medium scales at the higher levels, while they show slight improvement at lower levels in the nested domain, especially at medium and small scales, when introducing the information of smaller-scale errors by nesting a higher-resolution model. In addition, the introduction of information of larger- (smaller-) scale errors leads to larger (smaller) horizontal and vertical correlation scales of background errors. Considering the multivariate correlations, the Ekman coupling increases (decreases) with the information of larger- (smaller-) scale errors included, whereas the geostrophic coupling in the free atmosphere weakens in both situations. The three covariances obtained in the above work are used in a data assimilation and model forecast system respectively, and the analysis-forecast cycles for a period of 1 month are conducted. Through the comparison of both analyses and forecasts from this system, it is found that the trends for variation in analysis increments with information of different scale errors introduced are consistent with those for variation in variances and correlations of background errors. In particular, introduction of smaller-scale errors leads to larger amplitude of analysis increments for winds at medium scales at the height of both high- and low-level jets, and analysis increments for both temperature and humidity are greater at the corresponding scales at middle and upper levels under this circumstance. These analysis increments improve the intensity of the jet-convection system, which includes jets at different levels and coupling between them associated with latent heat release, and these changes in analyses contribute to better forecasts for winds and temperature in the corresponding areas. When smaller-scale errors are included, analysis increments for humidity enhance significantly at large scales at lower levels to moisten southern analyses. This humidification contributes to correcting the dry bias there and eventually improves the forecast skill of humidity. Moreover, inclusion of larger- (smaller-) scale errors is beneficial for the forecast quality of heavy (light) precipitation at large (small) scales due to the amplification (diminution) of intensity and area in precipitation forecasts but tends to overestimate (underestimate) light (heavy) precipitation.
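
    The NMC method mentioned above estimates background error statistics from differences between forecasts of different lead times that are valid at the same analysis time. A minimal sketch of that estimation step is given below, using synthetic stand-in forecast fields rather than the authors' model output.

```python
import numpy as np

# Sketch of the NMC method: approximate the background error covariance B from
# differences between, e.g., 48 h and 24 h forecasts valid at the same time,
# accumulated over many cases. Fields here are synthetic random surrogates.

rng = np.random.default_rng(1)
n_state, n_cases = 100, 400
f48 = rng.standard_normal((n_cases, n_state))
f24 = f48 + 0.3 * rng.standard_normal((n_cases, n_state))   # stand-in forecast pairs

d = f48 - f24                        # forecast differences, one row per case
d -= d.mean(axis=0)                  # remove the mean difference
B = (d.T @ d) / (n_cases - 1)        # sample covariance; in practice rescaled by a tunable factor

print("estimated background error variances (first 5):", np.round(np.diag(B)[:5], 3))
```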

  4. Integration of a nationally procured electronic health record system into user work practices.

    PubMed

    Cresswell, Kathrin M; Worth, Allison; Sheikh, Aziz

    2012-03-08

    Evidence suggests that many small- and medium-scale Electronic Health Record (EHR) implementations encounter problems, these often stemming from users' difficulties in accommodating the new technology into their work practices. There is the possibility that these challenges may be exacerbated in the context of the larger-scale, more standardised, implementation strategies now being pursued as part of major national modernisation initiatives. We sought to understand how England's centrally procured and delivered EHR software was integrated within the work practices of users in selected secondary and specialist care settings. We conducted a qualitative longitudinal case study-based investigation drawing on sociotechnical theory in three purposefully selected sites implementing early functionality of a nationally procured EHR system. The complete dataset comprised semi-structured interview data from a total of 66 different participants, 38.5 hours of non-participant observation of use of the software in context, accompanying researcher field notes, and hospital documents (including project initiation and lessons learnt reports). Transcribed data were analysed thematically using a combination of deductive and inductive approaches, and drawing on NVivo8 software to facilitate coding. The nationally led "top-down" implementation and the associated focus on interoperability limited the opportunity to customise software to local needs. Lack of system usability led users to employ a range of workarounds unanticipated by management to compensate for the perceived shortcomings of the system. These had a number of knock-on effects relating to the nature of collaborative work, patterns of communication, the timeliness and availability of records (including paper) and the ability for hospital management to monitor organisational performance. This work has highlighted the importance of addressing potentially adverse unintended consequences of workarounds associated with the introduction of EHRs. This can be achieved with customisation, which is inevitably somewhat restricted in the context of attempts to implement national solutions. The tensions and potential trade-offs between achieving large-scale interoperability and meeting local requirements are likely to be the subject of continuous debate in England and beyond, with no easy answers in sight.

  5. Engineering web maps with gradual content zoom based on streaming vector data

    NASA Astrophysics Data System (ADS)

    Huang, Lina; Meijers, Martijn; Šuba, Radan; van Oosterom, Peter

    2016-04-01

    Vario-scale data structures have been designed to support gradual content zoom and the progressive transfer of vector data, for use with arbitrary map scales. The focus to date has been on the server side, especially on how to convert geographic data into the proposed vario-scale structures by means of automated generalisation. This paper contributes to the ongoing vario-scale research by focusing on the client side and communication, particularly on how this works in a web-services setting. It is claimed that these functionalities are urgently needed, as many web-based applications, both desktop and mobile, require gradual content zoom, progressive transfer and a high performance level. The web-client prototypes developed in this paper make it possible to assess the behaviour of vario-scale data and to determine how users will actually see the interactions. Several different options of web-services communication architectures are possible in a vario-scale setting. These options are analysed and tested with various web-client prototypes, with respect to functionality, ease of implementation and performance (amount of transmitted data and response times). We show that the vario-scale data structure can fit in with current web-based architectures and efforts to standardise map distribution on the internet. However, to maximise the benefits of vario-scale data, a client needs to be aware of this structure. When a client needs a map to be refined (by means of a gradual content zoom operation), only the 'missing' data will be requested. This data will be sent incrementally to the client from a server. In this way, the amount of data transferred at one time is reduced, shortening the transmission time. In addition to these conceptual architecture aspects, there are many implementation and tooling design decisions at play. These will also be elaborated on in this paper. Based on the experiments conducted, we conclude that the vario-scale approach indeed supports gradual content zoom and the progressive web transfer of vector data. This is a big step forward in making vector data at arbitrary map scales available to larger user groups.

  6. Preferential flow from pore to landscape scales

    NASA Astrophysics Data System (ADS)

    Koestel, J. K.; Jarvis, N.; Larsbo, M.

    2017-12-01

    In this presentation, we give a brief personal overview of some recent progress in quantifying preferential flow in the vadose zone, based on our own work and those of other researchers. One key challenge is to bridge the gap between the scales at which preferential flow occurs (i.e. pore to Darcy scales) and the scales of interest for management (i.e. fields, catchments, regions). We present results of recent studies that exemplify the potential of 3-D non-invasive imaging techniques to visualize and quantify flow processes at the pore scale. These studies should lead to a better understanding of how the topology of macropore networks control key state variables like matric potential and thus the strength of preferential flow under variable initial and boundary conditions. Extrapolation of this process knowledge to larger scales will remain difficult, since measurement technologies to quantify macropore networks at these larger scales are lacking. Recent work suggests that the application of key concepts from percolation theory could be useful in this context. Investigation of the larger Darcy-scale heterogeneities that generate preferential flow patterns at the soil profile, hillslope and field scales has been facilitated by hydro-geophysical measurement techniques that produce highly spatially and temporally resolved data. At larger regional and global scales, improved methods of data-mining and analyses of large datasets (machine learning) may help to parameterize models as well as lead to new insights into the relationships between soil susceptibility to preferential flow and site attributes (climate, land uses, soil types).

  7. The physics behind the larger scale organization of DNA in eukaryotes.

    PubMed

    Emanuel, Marc; Radja, Nima Hamedani; Henriksson, Andreas; Schiessel, Helmut

    2009-07-01

    In this paper, we discuss in detail the organization of chromatin during a cell cycle at several levels. We show that current experimental data on large-scale chromatin organization have not yet reached the level of precision to allow for detailed modeling. We speculate in some detail about the possible physics underlying the larger scale chromatin organization.

  8. Soil organic carbon across scales.

    PubMed

    O'Rourke, Sharon M; Angers, Denis A; Holden, Nicholas M; McBratney, Alex B

    2015-10-01

    Mechanistic understanding of scale effects is important for interpreting the processes that control the global carbon cycle. Greater attention should be given to scale in soil organic carbon (SOC) science so that we can devise better policy to protect/enhance existing SOC stocks and ensure sustainable use of soils. Global issues such as climate change require consideration of SOC stock changes at the global and biosphere scale, but human interaction occurs at the landscape scale, with consequences at the pedon, aggregate and particle scales. This review evaluates our understanding of SOC across all these scales in the context of the processes involved in SOC cycling at each scale and with emphasis on stabilizing SOC. Current synergy between science and policy is explored at each scale to determine how well each is represented in the management of SOC. An outline of how SOC might be integrated into a framework of soil security is examined. We conclude that SOC processes at the biosphere to biome scales are not well understood. Instead, SOC has come to be viewed as a large-scale pool subject to carbon flux. Better understanding exists for SOC processes operating at the scales of the pedon, aggregate and particle. At the landscape scale, the influence of large- and small-scale processes has the greatest interaction and is exposed to the greatest modification through agricultural management. Policy implemented at regional or national scale tends to focus at the landscape scale without due consideration of the larger scale factors controlling SOC or the impacts of policy for SOC at the smaller SOC scales. What is required is a framework that can be integrated across a continuum of scales to optimize SOC management. © 2015 John Wiley & Sons Ltd.

  9. PARVMEC: An Efficient, Scalable Implementation of the Variational Moments Equilibrium Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seal, Sudip K; Hirshman, Steven Paul; Wingen, Andreas

    The ability to sustain magnetically confined plasma in a state of stable equilibrium is crucial for optimal and cost-effective operations of fusion devices like tokamaks and stellarators. The Variational Moments Equilibrium Code (VMEC) is the de-facto serial application used by fusion scientists to compute magnetohydrodynamics (MHD) equilibria and study the physics of three dimensional plasmas in confined configurations. Modern fusion energy experiments have larger system scales with more interactive experimental workflows, both demanding faster analysis turnaround times on computational workloads that are stressing the capabilities of sequential VMEC. In this paper, we present PARVMEC, an efficient, parallel version of its sequential counterpart, capable of scaling to thousands of processors on distributed memory machines. PARVMEC is a non-linear code, with multiple numerical physics modules, each with its own computational complexity. A detailed speedup analysis supported by scaling results on 1,024 cores of a Cray XC30 supercomputer is presented. Depending on the mode of PARVMEC execution, speedup improvements of one to two orders of magnitude are reported. PARVMEC equips fusion scientists for the first time with a state-of-the-art capability for rapid, high fidelity analyses of magnetically confined plasmas at unprecedented scales.
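
    Speedup figures like those reported here are often summarized through Amdahl's law, which converts a measured speedup S on p cores into an implied serial fraction f = (p/S - 1)/(p - 1). The sketch below shows that arithmetic with hypothetical numbers, not the paper's measurements.

```python
# Amdahl's-law bookkeeping: from a measured speedup S on p cores, the effective
# serial fraction is f = (p/S - 1) / (p - 1). The measurements below are
# hypothetical placeholders, not figures from the PARVMEC paper.

measurements = [(64, 48.0), (256, 130.0), (1024, 310.0)]   # (cores, speedup), hypothetical

for p, s in measurements:
    f = (p / s - 1.0) / (p - 1.0)
    print(f"{p:5d} cores: speedup {s:7.1f}, implied serial fraction {f:.4f}")
```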

  10. Application of optimized multiscale mathematical morphology for bearing fault diagnosis

    NASA Astrophysics Data System (ADS)

    Gong, Tingkai; Yuan, Yanbin; Yuan, Xiaohui; Wu, Xiaotao

    2017-04-01

    In order to suppress noise effectively and extract the impulsive features in the vibration signals of faulty rolling element bearings, an optimized multiscale morphology (OMM) based on conventional multiscale morphology (CMM) and iterative morphology (IM) is presented in this paper. Firstly, the operator used in the IM method must be non-idempotent; therefore, an optimized difference (ODIF) operator has been designed. Furthermore, in the iterative process the current operation is performed on the basis of the previous one. This means that if a larger scale is employed, more fault features are inhibited. Thereby, a unit scale is proposed as the structuring element (SE) scale in IM. According to the above definitions, the IM method is implemented on the results over different scales obtained by CMM. The validity of the proposed method is first evaluated by a simulated signal. Subsequently, for an outer race fault, two vibration signals sampled by different accelerometers are analyzed by OMM and CMM, respectively. The same is done for an inner race fault. The results show that the optimized method is effective in diagnosing the two bearing faults. Compared with the CMM method, the OMM method can extract many more fault features under a strong noise background.
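
    A minimal sketch of the conventional multiscale morphology (CMM) baseline that the proposed OMM builds on: a closing-minus-opening difference computed with flat structuring elements of increasing length and averaged over scales. The signal below is synthetic and the scipy-based code is illustrative only, not the authors' implementation.

```python
import numpy as np
from scipy import ndimage

# Conventional multiscale morphology (CMM) baseline: for flat structuring elements
# of increasing length, take the difference between grey closing and grey opening
# and average the results across scales. Synthetic impulsive signal, not the
# paper's optimized OMM operator.

rng = np.random.default_rng(2)
t = np.arange(0, 1.0, 1.0 / 5000)
signal = 0.3 * np.sin(2 * np.pi * 30 * t) + 0.05 * rng.standard_normal(t.size)
signal[::500] += 1.0                              # periodic impulses mimicking a bearing fault

scales = range(2, 8)                              # structuring-element lengths, in samples
diffs = [ndimage.grey_closing(signal, size=(k,)) - ndimage.grey_opening(signal, size=(k,))
         for k in scales]
cmm_output = np.mean(diffs, axis=0)               # multiscale average of the difference operator

print("largest responses at samples:", np.argsort(cmm_output)[-5:])
```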

  11. The on-line coupled atmospheric chemistry model system MECO(n) - Part 5: Expanding the Multi-Model-Driver (MMD v2.0) for 2-way data exchange including data interpolation via GRID (v1.0)

    NASA Astrophysics Data System (ADS)

    Kerkweg, Astrid; Hofmann, Christiane; Jöckel, Patrick; Mertens, Mariano; Pante, Gregor

    2018-03-01

    As part of the Modular Earth Submodel System (MESSy), the Multi-Model-Driver (MMD v1.0) was developed to couple online the regional Consortium for Small-scale Modeling (COSMO) model into a driving model, which can be either the regional COSMO model or the global European Centre Hamburg general circulation model (ECHAM) (see Part 2 of the model documentation). The coupled system is called MECO(n), i.e., MESSy-fied ECHAM and COSMO models nested n times. In this article, which is part of the model documentation of the MECO(n) system, the second generation of MMD is introduced. MMD comprises the message-passing infrastructure required for the parallel execution (multiple programme multiple data, MPMD) of different models and the communication of the individual model instances, i.e. between the driving and the driven models. Initially, the MMD library was developed for a one-way coupling between the global chemistry-climate ECHAM/MESSy atmospheric chemistry (EMAC) model and an arbitrary number of (optionally cascaded) instances of the regional chemistry-climate model COSMO/MESSy. Thus, MMD (v1.0) provided only functions for unidirectional data transfer, i.e. from the larger-scale to the smaller-scale models. Soon, extended applications requiring data transfer from the small-scale model back to the larger-scale model became of interest. For instance, the original fields of the larger-scale model can directly be compared to the upscaled small-scale fields to analyse the improvements gained through the small-scale calculations, after the results are upscaled. Moreover, the fields originating from the two different models might be fed into the same diagnostic tool, e.g. the online calculation of the radiative forcing calculated consistently with the same radiation scheme. Last but not least, enabling the two-way data transfer between two models is the first important step on the way to a fully dynamical and chemical two-way coupling of the various model instances. In MMD (v1.0), interpolation between the base model grids is performed via the COSMO preprocessing tool INT2LM, which was implemented into the MMD submodel for online interpolation, specifically for mapping onto the rotated COSMO grid. A more flexible algorithm is required for the backward mapping. Thus, MMD (v2.0) uses the new MESSy submodel GRID for the generalised definition of arbitrary grids and for the transformation of data between them. In this article, we explain the basics of the MMD expansion and the newly developed generic MESSy submodel GRID (v1.0) and show some examples of the abovementioned applications.

  12. Experimental Evaluation of Suitability of Selected Multi-Criteria Decision-Making Methods for Large-Scale Agent-Based Simulations.

    PubMed

    Tučník, Petr; Bureš, Vladimír

    2016-01-01

    Multi-criteria decision-making (MCDM) can be formally implemented by various methods. This study compares suitability of four selected MCDM methods, namely WPM, TOPSIS, VIKOR, and PROMETHEE, for future applications in agent-based computational economic (ACE) models of larger scale (i.e., over 10 000 agents in one geographical region). These four MCDM methods were selected according to their appropriateness for computational processing in ACE applications. Tests of the selected methods were conducted on four hardware configurations. For each method, 100 tests were performed, which represented one testing iteration. With four testing iterations conducted on each hardware setting and separated testing of all configurations with the -server parameter de/activated, altogether, 12,800 data points were collected and consequently analyzed. An illustrative decision-making scenario was used which allows the mutual comparison of all of the selected decision-making methods. Our test results suggest that although all methods are convenient and can be used in practice, the VIKOR method accomplished the tests with the best results and thus can be recommended as the most suitable for simulations of large-scale agent-based models.
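
    For orientation, one of the compared methods, TOPSIS, can be written in a few lines: normalize the decision matrix, weight it, locate the ideal and anti-ideal points, and rank alternatives by relative closeness. The matrix, weights, and benefit/cost flags below are invented for illustration; VIKOR, WPM, and PROMETHEE follow analogous steps but are not shown.

```python
import numpy as np

# Compact TOPSIS sketch on an invented decision matrix (rows = alternatives,
# columns = criteria). Weights and the benefit/cost flags are illustrative only.

X = np.array([[250.0, 16, 12, 5],
              [200.0, 16,  8, 3],
              [300.0, 32, 16, 4],
              [275.0, 32,  8, 4]])
weights = np.array([0.25, 0.25, 0.25, 0.25])
benefit = np.array([False, True, True, True])     # first criterion is a cost

R = X / np.linalg.norm(X, axis=0)                 # vector normalization per criterion
V = R * weights                                   # weighted normalized matrix

ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti = np.where(benefit, V.min(axis=0), V.max(axis=0))

d_best = np.linalg.norm(V - ideal, axis=1)        # distance to the ideal point
d_worst = np.linalg.norm(V - anti, axis=1)        # distance to the anti-ideal point
closeness = d_worst / (d_best + d_worst)

print("ranking (best first):", np.argsort(-closeness))
```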

  13. IMa2p - Parallel MCMC and inference of ancient demography under the Isolation with Migration (IM) model

    PubMed Central

    Sethuraman, Arun; Hey, Jody

    2015-01-01

    IMa2 and related programs are used to study the divergence of closely related species and of populations within species. These methods are based on the sampling of genealogies using MCMC, and they can proceed quite slowly for larger data sets. We describe a parallel implementation, called IMa2p, that provides a nearly linear increase in genealogy sampling rate with the number of processors in use. IMa2p is written in OpenMPI and C++, and scales well for demographic analyses of a large number of loci and populations, which are difficult to study using the serial version of the program. PMID:26059786

  14. Economic viability and critical influencing factors assessment of black water and grey water source-separation sanitation system.

    PubMed

    Thibodeau, C; Monette, F; Glaus, M; Laflamme, C B

    2011-01-01

    The black water and grey water source-separation sanitation system aims at efficient use of energy (biogas), water and nutrients but currently lacks evidence of economic viability to be considered a credible alternative to the conventional system. This study intends to demonstrate economic viability, identify main cost contributors and assess critical influencing factors. A technico-economic model was built based on a new neighbourhood in a Canadian context. Three implementation scales of source-separation system are defined: 500, 5,000 and 50,000 inhabitants. The results show that the source-separation system is 33% to 118% more costly than the conventional system, with the larger cost differential obtained by lower source-separation system implementation scales. A sensitivity analysis demonstrates that vacuum toilet flow reduction from 1.0 to 0.25 L/flush decreases source-separation system cost between 23 and 27%. It also shows that high resource costs can be beneficial or unfavourable to the source-separation system depending on whether the vacuum toilet flow is low or normal. Therefore, the future of this configuration of the source-separation system lies mainly in vacuum toilet flow reduction or the introduction of new efficient effluent volume reduction processes (e.g. reverse osmosis).

  15. New Evidence on Self-Affirmation Effects and Theorized Sources of Heterogeneity from Large-Scale Replications

    PubMed Central

    Hanselman, Paul; Rozek, Christopher S.; Grigg, Jeffrey; Borman, Geoffrey D.

    2016-01-01

    Brief, targeted self-affirmation writing exercises have recently been offered as a way to reduce racial achievement gaps, but evidence about their effects in educational settings is mixed, leaving ambiguity about the likely benefits of these strategies if implemented broadly. A key limitation in interpreting these mixed results is that they come from studies conducted by different research teams with different procedures in different settings; it is therefore impossible to isolate whether different effects are the result of theorized heterogeneity, unidentified moderators, or idiosyncratic features of the different studies. We addressed this limitation by conducting a well-powered replication of self-affirmation in a setting where a previous large-scale field experiment demonstrated significant positive impacts, using the same procedures. We found no evidence of effects in this replication study and estimates were precise enough to reject benefits larger than an effect size of 0.10. These null effects were significantly different from persistent benefits in the prior study in the same setting, and extensive testing revealed that currently theorized moderators of self-affirmation effects could not explain the difference. These results highlight the potential fragility of self-affirmation in educational settings when implemented widely and the need for new theory, measures, and evidence about the necessary conditions for self-affirmation success. PMID:28450753

  16. Integration and application of optical chemical sensors in microbioreactors.

    PubMed

    Gruber, Pia; Marques, Marco P C; Szita, Nicolas; Mayr, Torsten

    2017-08-08

    The quantification of key variables such as oxygen, pH, carbon dioxide, glucose, and temperature provides essential information for biological and biotechnological applications and their development. Microfluidic devices offer an opportunity to accelerate research and development in these areas due to their small scale, and the fine control over the microenvironment, provided that these key variables can be measured. Optical sensors are well-suited for this task. They offer non-invasive and non-destructive monitoring of the mentioned variables, and the establishment of time-course profiles without the need for sampling from the microfluidic devices. They can also be implemented in larger systems, facilitating cross-scale comparison of analytical data. This tutorial review presents an overview of the optical sensors and their technology, with a view to support current and potential new users in microfluidics and biotechnology in the implementation of such sensors. It introduces the benefits and challenges of sensor integration, including, their application for microbioreactors. Sensor formats, integration methods, device bonding options, and monitoring options are explained. Luminescent sensors for oxygen, pH, carbon dioxide, glucose and temperature are showcased. Areas where further development is needed are highlighted with the intent to guide future development efforts towards analytes for which reliable, stable, or easily integrated detection methods are not yet available.

  17. Identifying the Threshold of Dominant Controls on Fire Spread in a Boreal Forest Landscape of Northeast China

    PubMed Central

    Liu, Zhihua; Yang, Jian; He, Hong S.

    2013-01-01

    The relative importance of fuel, topography, and weather on fire spread varies at different spatial scales, but how the relative importance of these controls responds to changing spatial scales is poorly understood. We designed a "moving window" resampling technique that allowed us to quantify the relative importance of controls on fire spread at continuous spatial scales using boosted regression tree methods. This quantification allowed us to identify the threshold value for fire size at which the dominant control switches from fuel at small sizes to weather at large sizes. Topography had a fluctuating effect on fire spread across the spatial scales, explaining 20–30% of relative importance. With increasing fire size, the dominant control switched from bottom-up controls (fuel and topography) to top-down controls (weather). Our analysis suggested that there is a threshold for fire size, above which fires are driven primarily by weather and more likely lead to larger fire size. We suggest that this threshold, which may be ecosystem-specific, can be identified using our "moving window" resampling technique. Although the threshold derived from this analytical method may rely heavily on the sampling technique, our study introduced an easily implemented approach to identify scale thresholds in wildfire regimes. PMID:23383247
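
    The "moving window" resampling idea can be sketched as follows: sort fires by size, fit boosted trees inside overlapping windows, and record the relative importance of each predictor per window. The code below uses scikit-learn's gradient boosting and synthetic data as stand-ins for the boosted regression trees and the fire records used in the study.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Sketch of the "moving window" idea: fit boosted trees inside overlapping windows
# of fire size and track how the relative importance of fuel, topography, and
# weather predictors shifts with scale. Data are synthetic placeholders.

rng = np.random.default_rng(3)
n = 5000
fire_size = rng.lognormal(mean=3.0, sigma=1.5, size=n)
X = rng.standard_normal((n, 3))                          # columns: fuel, topography, weather
# Synthetic response: fuel matters for small fires, weather for large ones.
logit = np.where(fire_size < 50, 2.0 * X[:, 0], 2.0 * X[:, 2]) + 0.5 * X[:, 1]
spread = (logit + rng.standard_normal(n) > 0).astype(int)

order = np.argsort(fire_size)
window, step = 1000, 500
for start in range(0, n - window + 1, step):
    idx = order[start:start + window]
    model = GradientBoostingClassifier(n_estimators=100, max_depth=2).fit(X[idx], spread[idx])
    fuel, topo, weather = model.feature_importances_
    print(f"median fire size {np.median(fire_size[idx]):8.1f}: "
          f"fuel {fuel:.2f}  topography {topo:.2f}  weather {weather:.2f}")
```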

  18. Do learning collaboratives strengthen communication? A comparison of organizational team communication networks over time.

    PubMed

    Bunger, Alicia C; Lengnick-Hall, Rebecca

    Collaborative learning models were designed to support quality improvements, such as innovation implementation, by promoting communication within organizational teams. Yet the effect of collaborative learning approaches on organizational team communication during implementation is untested. The aim of this study was to explore change in communication patterns within teams from children's mental health organizations during a year-long learning collaborative focused on implementing a new treatment. We adopt a social network perspective to examine intraorganizational communication within each team and assess change in (a) the frequency of communication among team members, (b) communication across organizational hierarchies, and (c) the overall structure of team communication networks. A pretest-posttest design compared communication among 135 participants from 21 organizational teams at the start and end of a learning collaborative. At both time points, participants were asked to list the members of their team and rate the frequency of communication with each along a 7-point Likert scale. Several individual, pair-wise, and team level communication network metrics were calculated and compared over time. At the individual level, participants reported communicating with more team members by the end of the learning collaborative. Cross-hierarchical communication did not change. At the team level, these changes manifested differently depending on team size. In large teams, communication frequency increased, and networks grew denser and slightly less centralized. In small teams, communication frequency declined, and networks grew sparser and more centralized. Results suggest that team communication patterns change minimally but evolve differently depending on size. Learning collaboratives may be more helpful for enhancing communication among larger teams; thus, managers might consider selecting and sending larger staff teams to learning collaboratives. This study highlights key future research directions that can disentangle the relationship between learning collaboratives and team networks.
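
    Team-level measures of the kind compared before and after the collaborative, such as communication density and degree centralization, can be computed with networkx; the toy edge lists below are invented and only illustrate the calculation.

```python
import networkx as nx

# Toy illustration of team-level network metrics discussed above: communication
# density and (Freeman) degree centralization at two time points. The edge lists
# are invented; the study's survey data are not reproduced.

def centralization(G):
    n = G.number_of_nodes()
    degrees = [d for _, d in G.degree()]
    return sum(max(degrees) - d for d in degrees) / ((n - 1) * (n - 2))

pre = nx.Graph([("A", "B"), ("B", "C"), ("C", "D")])                            # sparse, chain-like
post = nx.Graph([("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"), ("B", "D")])   # denser

for label, G in (("pre", pre), ("post", post)):
    print(f"{label}: density {nx.density(G):.2f}, centralization {centralization(G):.2f}")
```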

  19. Sustaining program effectiveness after implementation: The case of the self-management of well-being group intervention for older adults.

    PubMed

    Goedendorp, Martine M; Kuiper, Daphne; Reijneveld, Sijmen A; Sanderman, Robbert; Steverink, Nardi

    2017-06-01

    The Self-Management of Well-being (SMW) group intervention for older women was implemented in health and social care. Our aim was to assess whether effects of the SMW intervention were comparable with the original randomized controlled trial (RCT). Furthermore, we investigated threats to effectiveness, such as participant adherence, group reached, and program fidelity. In the implementation study (IMP), 287 women participated, compared with 142 in the RCT. We compared scores on self-management ability and well-being of the IMP and RCT. For adherence, drop-out rates and session attendance were compared. Regarding reach, we compared participants' baseline characteristics. Professionals completed questions regarding program fidelity. No significant differences were found on effect outcomes and adherence between IMP and RCT (all p≥0.135). Intervention effect sizes were comparable (0.47-0.59). IMP participants were significantly less lonely and more likely to be married, but had lower well-being. Most professionals followed the protocol, with only minimal deviations. The effectiveness of the SMW group intervention was reproduced after implementation, with similar participant adherence, minimal changes in the group reached, and high program fidelity. The SMW group intervention can be transferred to health and social care without loss of effectiveness. Implementation at a larger scale is warranted. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Large-scale model-based assessment of deer-vehicle collision risk.

    PubMed

    Hothorn, Torsten; Brandl, Roland; Müller, Jörg

    2012-01-01

    Ungulates, in particular the Central European roe deer Capreolus capreolus and the North American white-tailed deer Odocoileus virginianus, are economically and ecologically important. The two species are risk factors for deer-vehicle collisions and as browsers of palatable trees have implications for forest regeneration. However, no large-scale management systems for ungulates have been implemented, mainly because of the high efforts and costs associated with attempts to estimate population sizes of free-living ungulates living in a complex landscape. Attempts to directly estimate population sizes of deer are problematic owing to poor data quality and lack of spatial representation on larger scales. We used data on >74,000 deer-vehicle collisions observed in 2006 and 2009 in Bavaria, Germany, to model the local risk of deer-vehicle collisions and to investigate the relationship between deer-vehicle collisions and both environmental conditions and browsing intensities. An innovative modelling approach for the number of deer-vehicle collisions, which allows nonlinear environment-deer relationships and assessment of spatial heterogeneity, was the basis for estimating the local risk of collisions for specific road types on the scale of Bavarian municipalities. Based on this risk model, we propose a new "deer-vehicle collision index" for deer management. We show that the risk of deer-vehicle collisions is positively correlated to browsing intensity and to harvest numbers. Overall, our results demonstrate that the number of deer-vehicle collisions can be predicted with high precision on the scale of municipalities. In the densely populated and intensively used landscapes of Central Europe and North America, a model-based risk assessment for deer-vehicle collisions provides a cost-efficient instrument for deer management on the landscape scale. The measures derived from our model provide valuable information for planning road protection and defining hunting quota. Open-source software implementing the model can be used to transfer our modelling approach to wildlife-vehicle collisions elsewhere.

  1. Large-Scale Model-Based Assessment of Deer-Vehicle Collision Risk

    PubMed Central

    Hothorn, Torsten; Brandl, Roland; Müller, Jörg

    2012-01-01

    Ungulates, in particular the Central European roe deer Capreolus capreolus and the North American white-tailed deer Odocoileus virginianus, are economically and ecologically important. The two species are risk factors for deer–vehicle collisions and as browsers of palatable trees have implications for forest regeneration. However, no large-scale management systems for ungulates have been implemented, mainly because of the high efforts and costs associated with attempts to estimate population sizes of free-living ungulates living in a complex landscape. Attempts to directly estimate population sizes of deer are problematic owing to poor data quality and lack of spatial representation on larger scales. We used data on 74,000 deer–vehicle collisions observed in 2006 and 2009 in Bavaria, Germany, to model the local risk of deer–vehicle collisions and to investigate the relationship between deer–vehicle collisions and both environmental conditions and browsing intensities. An innovative modelling approach for the number of deer–vehicle collisions, which allows nonlinear environment–deer relationships and assessment of spatial heterogeneity, was the basis for estimating the local risk of collisions for specific road types on the scale of Bavarian municipalities. Based on this risk model, we propose a new “deer–vehicle collision index” for deer management. We show that the risk of deer–vehicle collisions is positively correlated to browsing intensity and to harvest numbers. Overall, our results demonstrate that the number of deer–vehicle collisions can be predicted with high precision on the scale of municipalities. In the densely populated and intensively used landscapes of Central Europe and North America, a model-based risk assessment for deer–vehicle collisions provides a cost-efficient instrument for deer management on the landscape scale. The measures derived from our model provide valuable information for planning road protection and defining hunting quota. Open-source software implementing the model can be used to transfer our modelling approach to wildlife–vehicle collisions elsewhere. PMID:22359535

  2. Beta-diversity of ectoparasites at two spatial scales: nested hierarchy, geography and habitat type.

    PubMed

    Warburton, Elizabeth M; van der Mescht, Luther; Stanko, Michal; Vinarski, Maxim V; Korallo-Vinarskaya, Natalia P; Khokhlova, Irina S; Krasnov, Boris R

    2017-06-01

    Beta-diversity of biological communities can be decomposed into (a) dissimilarity of communities among units of finer scale within units of broader scale and (b) dissimilarity of communities among units of broader scale. We investigated compositional, phylogenetic/taxonomic and functional beta-diversity of compound communities of fleas and gamasid mites parasitic on small Palearctic mammals in a nested hierarchy at two spatial scales: (a) continental scale (across the Palearctic) and (b) regional scale (across sites within Slovakia). At each scale, we analyzed beta-diversity among smaller units within larger units and among larger units with partitioning based on either geography or ecology. We asked (a) whether compositional, phylogenetic/taxonomic and functional dissimilarities of flea and mite assemblages are scale dependent; (b) how geographical (partitioning of sites according to geographic position) or ecological (partitioning of sites according to habitat type) characteristics affect phylogenetic/taxonomic and functional components of dissimilarity of ectoparasite assemblages and (c) whether assemblages of fleas and gamasid mites differ in their degree of dissimilarity, all else being equal. We found that compositional, phylogenetic/taxonomic, or functional beta-diversity was greater on a continental rather than a regional scale. Compositional and phylogenetic/taxonomic components of beta-diversity were greater among larger units than among smaller units within larger units, whereas functional beta-diversity did not exhibit any consistent trend regarding site partitioning. Geographic partitioning resulted in higher values of beta-diversity of ectoparasites than ecological partitioning. Compositional and phylogenetic components of beta-diversity were higher in fleas than mites but the opposite was true for functional beta-diversity in some, but not all, traits.

  3. Enrichment scale determines herbivore control of primary producers.

    PubMed

    Gil, Michael A; Jiao, Jing; Osenberg, Craig W

    2016-03-01

    Anthropogenic nutrient enrichment stimulates primary production and threatens natural communities worldwide. Herbivores may counteract deleterious effects of enrichment by increasing their consumption of primary producers. However, field tests of herbivore control are often done by adding nutrients at small (e.g., sub-meter) scales, while enrichment in real systems often occurs at much larger scales (e.g., kilometers). Therefore, experimental results may be driven by processes that are not relevant at larger scales. Using a mathematical model, we show that herbivores can control primary producer biomass in experiments by concentrating their foraging in small enriched plots; however, at larger, realistic scales, the same mechanism may not lead to herbivore control of primary producers. Instead, other demographic mechanisms are required, but these are not examined in most field studies (and may not operate in many systems). This mismatch between experiments and natural processes suggests that many ecosystems may be less resilient to degradation via enrichment than previously believed.

  4. Adaptation and psychometric assessment of the Hebrew version of the Recovery Promoting Relationships Scale (RPRS).

    PubMed

    Moran, Galia S; Zisman-Ilani, Yaara; Garber-Epstein, Paula; Roe, David

    2014-03-01

    Recovery is supported by relationships that are characterized by human centeredness, empowerment and a hopeful approach. The Recovery Promoting Relationships Scale (RPRS; Russinova, Rogers, & Ellison, 2006) assesses consumer-provider relationships from the consumer perspective. Here we present the adaptation and psychometric assessment of a Hebrew version of the RPRS. The RPRS was translated to Hebrew (RPRS-Heb) using multiple strategies to assure conceptual soundness. Then 216 mental health consumers were administered the RPRS-Heb as part of a larger project initiative implementing an illness management and recovery (IMR) intervention in community settings. Psychometric testing included assessment of the factor structure, reliability, and validity using the Hope Scale, the Working Alliance Inventory, and the Recovery Assessment Scale. The RPRS-Heb replicated the two-factor structure found in the original scale with minor exceptions. Reliability estimates were good: Cronbach's alpha was 0.94 for the total scale, 0.93 for the Recovery-Promoting Strategies factor, and 0.86 for the Core Relationship factor. Concurrent validity was confirmed using the Working Alliance Scale (rp = .51, p < .001) and the Hope Scale (rp = .43, p < .001). Criterion validity was examined using the Recovery Assessment Scale (rp = .355, p < .05). The study yielded a 23-item RPRS-Heb version with a psychometrically sound factor structure, satisfactory reliability, and concurrent validity tested against the Hope, Alliance, and Recovery Assessment scales. Outcomes are discussed in the context of the original scale properties and a similar Dutch initiative. The RPRS-Heb can serve as a valuable tool for studying recovery promoting relationships with Hebrew-speaking populations.

  5. Community-based livestock breeding programmes: essentials and examples.

    PubMed

    Mueller, J P; Rischkowsky, B; Haile, A; Philipsson, J; Mwai, O; Besbes, B; Valle Zárate, A; Tibbo, M; Mirkena, T; Duguma, G; Sölkner, J; Wurzinger, M

    2015-04-01

    Breeding programmes described as community-based (CBBP) typically relate to low-input systems with farmers having a common interest to improve and share their genetic resources. CBBPs are more frequent with keepers of small ruminants, in particular smallholders of local breeds, than with cattle, pigs or chickens with which farmers may have easier access to alternative programmes. Constraints that limit the adoption of conventional breeding technologies in low-input systems cover a range of organizational and technical aspects. The analysis of 8 CBBPs located in countries of Latin-America, Africa and Asia highlights the importance of bottom-up approaches and involvement of local institutions in the planning and implementation stages. The analysis also reveals a high dependence of these programmes on organizational, technical and financial support. Completely self-sustained CBBPs seem to be difficult to realize. There is a need to implement and document formal socio-economic evaluations of CBBPs to provide governments and other development agencies with the information necessary for creating sustainable CBBPs at larger scales. © 2015 Blackwell Verlag GmbH.

  6. High-efficiency Thin-film Fe2SiS4 and Fe2GeS4-based Solar Cells Prepared from Low-Cost Solution Precursors. Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Radu, Daniela Rodica; Liu, Mimi; Hwang, Po-yu

    The project aimed to provide solar energy education to students from underrepresented groups and to develop a novel, nano-scale approach to utilizing Fe2SiS4 and Fe2GeS4 materials as precursors to the absorber layer in photovoltaic thin-film devices. The objectives of the project were as follows: 1. Develop and implement one solar-related course at Delaware State University and train two graduate students in solar research. 2. Fabricate and characterize high-efficiency (larger than 7%) Fe2SiS4 and Fe2GeS4-based solar devices. The project was successful in the educational components, implementing the solar course at DSU, as well as in developing multiple routes to prepare the Fe2GeS4 with high purity and in large quantities. The project did not meet the efficiency objective; however, a functional solar device was demonstrated.

  7. Analyzing large-scale spiking neural data with HRLAnalysis™

    PubMed Central

    Thibeault, Corey M.; O'Brien, Michael J.; Srinivasa, Narayan

    2014-01-01

    The additional capabilities provided by high-performance neural simulation environments and modern computing hardware have allowed for the modeling of increasingly larger spiking neural networks. This is important for exploring more anatomically detailed networks, but the corresponding accumulation in data can make analyzing the results of these simulations difficult. This is further compounded by the fact that many existing analysis packages were not developed with large spiking data sets in mind. Presented here is a software suite developed to not only process the increased amount of spike-train data in a reasonable amount of time, but also provide a user friendly Python interface. We describe the design considerations, implementation and features of the HRLAnalysis™ suite. In addition, performance benchmarks demonstrating the speedup of this design compared to a published Python implementation are also presented. The result is a high-performance analysis toolkit that is not only usable and readily extensible, but also straightforward to interface with existing Python modules. PMID:24634655
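
    As an example of the kind of spike-train post-processing such a suite targets, the sketch below bins synthetic spike times into a population firing-rate histogram with numpy; it is independent of the HRLAnalysis code itself.

```python
import numpy as np

# Minimal spike-train analysis in the spirit described above: bin spike times from
# many neurons into a population firing-rate histogram. The spike data are
# synthetic; this is not HRLAnalysis code.

rng = np.random.default_rng(4)
n_neurons, duration, rate = 1000, 10.0, 5.0           # seconds, spikes/s per neuron
spike_counts = rng.poisson(rate * duration, n_neurons)
spike_times = [np.sort(rng.uniform(0, duration, c)) for c in spike_counts]

bin_width = 0.05                                      # 50 ms bins
bins = np.arange(0.0, duration + bin_width, bin_width)
all_spikes = np.concatenate(spike_times)
counts, _ = np.histogram(all_spikes, bins=bins)
pop_rate = counts / (bin_width * n_neurons)           # spikes/s per neuron

print("mean population rate:", round(pop_rate.mean(), 2), "spikes/s")
```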

  8. Processing large remote sensing image data sets on Beowulf clusters

    USGS Publications Warehouse

    Steinwand, Daniel R.; Maddox, Brian; Beckmann, Tim; Schmidt, Gail

    2003-01-01

    High-performance computing is often concerned with the speed at which floating-point calculations can be performed. The architectures of many parallel computers and/or their network topologies are based on these investigations. Often, benchmarks resulting from these investigations are compiled with little regard to how a large dataset would move about in these systems. This part of the Beowulf study addresses that concern by looking at specific applications software and system-level modifications. Applications include an implementation of a smoothing filter for time-series data, a parallel implementation of the decision tree algorithm used in the Landcover Characterization project, a parallel Kriging algorithm used to fit point data collected in the field on invasive species to a regular grid, and modifications to the Beowulf project's resampling algorithm to handle larger, higher resolution datasets at a national scale. Systems-level investigations include a feasibility study on Flat Neighborhood Networks and modifications of that concept with Parallel File Systems.
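
    Of the applications listed, the time-series smoothing filter is the simplest to illustrate. A serial moving-average version is sketched below on synthetic data; the study's Beowulf implementation distributed this work across cluster nodes, which is not reproduced here.

```python
import numpy as np

# Serial sketch of a time-series smoothing filter like the one parallelized in the
# study: a centered moving average applied to a single time series. The data are
# synthetic; the cluster decomposition of the work is not shown.

def moving_average(series, window=5):
    kernel = np.ones(window) / window
    return np.convolve(series, kernel, mode="same")

rng = np.random.default_rng(5)
series = np.sin(np.linspace(0, 2 * np.pi, 46)) + 0.2 * rng.standard_normal(46)
smoothed = moving_average(series, window=5)
print("raw std %.3f -> smoothed std %.3f" % (series.std(), smoothed.std()))
```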

  9. Diversification of Processors Based on Redundancy in Instruction Set

    NASA Astrophysics Data System (ADS)

    Ichikawa, Shuichi; Sawada, Takashi; Hata, Hisashi

    By diversifying processor architecture, computer software is expected to be more resistant to plagiarism, analysis, and attacks. This study presents a new method to diversify instruction set architecture (ISA) by utilizing the redundancy in the instruction set. Our method is particularly suited for embedded systems implemented with FPGA technology, and realizes a genuine instruction set randomization, which has not been provided by the preceding studies. The evaluation results on four typical ISAs indicate that our scheme can provide a far larger degree of freedom than the preceding studies. Diversified processors based on MIPS architecture were actually implemented and evaluated with Xilinx Spartan-3 FPGA. The increase of logic scale was modest: 5.1% in Specialized design and 3.6% in RAM-mapped design. The performance overhead was also modest: 3.4% in Specialized design and 11.6% in RAM-mapped design. From these results, our scheme is regarded as a practical and promising way to secure FPGA-based embedded systems.

  10. A three-dimensional bioprinting system for use with a hydrogel-based biomaterial and printing parameter characterization.

    PubMed

    Song, Seung-Joon; Choi, Jaesoon; Park, Yong-Doo; Lee, Jung-Joo; Hong, So Young; Sun, Kyung

    2010-11-01

    Bioprinting is an emerging technology for constructing tissue or bioartificial organs with complex three-dimensional (3D) structures. It provides high-precision spatial shape-forming ability on a larger scale than conventional tissue engineering methods, together with the ability to compose multiple components simultaneously. Bioprinting utilizes a computer-controlled 3D printer mechanism for 3D biological structure construction. To implement minimal pattern width in a hydrogel-based bioprinting system, a study on printing characteristics was performed by varying printer control parameters. The experimental results showed that printing pattern width depends on associated printer control parameters such as printing flow rate, nozzle diameter, and nozzle velocity. The system under development showed acceptable feasibility of potential use for accurate printing pattern implementation in tissue engineering applications and is another example of novel techniques for regenerative medicine based on a computer-aided biofabrication system. © 2010, Copyright the Authors. Artificial Organs © 2010, International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
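
    A rough mass-conservation estimate, offered here as an assumption for illustration rather than the paper's characterization, connects the parameters named above: the deposited strand's cross-sectional area is approximately Q/v, so for an assumed strand height h the line width is roughly w ≈ Q/(v·h).

```python
# Rough mass-conservation estimate (an assumption for illustration, not the paper's
# characterization): deposited cross-section area ≈ Q / v, so for strand height h
# the expected line width is w ≈ Q / (v * h). All values below are hypothetical.

Q = 0.5e-9        # volumetric flow rate, m^3/s (hypothetical)
v = 10e-3         # nozzle travel speed, m/s (hypothetical)
h = 200e-6        # assumed strand height, m (hypothetical)

width = Q / (v * h)
print(f"estimated line width: {width * 1e6:.0f} micrometres")
```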

  11. Social marketing and public health intervention.

    PubMed

    Lefebvre, R C; Flora, J A

    1988-01-01

    The rapid proliferation of community-based health education programs has out-paced the knowledge base of behavior change strategies that are appropriate and effective for public health interventions. However, experiences from a variety of large-scale studies suggest that principles and techniques of social marketing may help bridge this gap. This article discusses eight essential aspects of the social marketing process: the use of a consumer orientation to develop and market intervention techniques, exchange theory as a model from which to conceptualize service delivery and program participation, audience analysis and segmentation strategies, the use of formative research in program design and pretesting of intervention materials, channel analysis for devising distribution systems and promotional campaigns, employment of the "marketing mix" concept in intervention planning and implementation, development of a process tracking system, and a management process of problem analysis, planning, implementation, feedback and control functions. Attention to such variables could result in more cost-effective programs that reach larger numbers of the target audience.

  12. Energy Efficient Microwave Hybrid Processing of Lime for Cement, Steel, and Glass Industries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fall, Morgana L; Yakovlev, Vadim; Sahi, Catherine

    2012-02-10

    In this study, the microwave materials interactions were studied through dielectric property measurements, process modeling, and lab scale microwave hybrid calcination tests. Characterization and analysis were performed to evaluate material reactions and energy usage. Processing parameters for laboratory scale and larger scale calcining experiments were developed for MAT limestone calcination. Early stage equipment design concepts were developed, with a focus on microwave post-heating treatment. The retrofitting of existing rotary calcining equipment in the lime industry was assessed and found to be feasible. Ceralink sought to address some of the major barriers to the uptake of MAT, identified as the need for (1) a team approach with end users, technology partners, and equipment manufacturers, and (2) modeling that incorporates kiln materials and variations to the design of industrial microwave equipment. This project has furthered the commercialization effort of MAT by working closely with an industrial lime manufacturer to educate them regarding MAT, identifying an equipment manufacturer to supply microwave equipment, and developing sophisticated MAT modeling with WPI, the university partner. MAT was shown to enhance calcining through lower energy consumption and faster reaction rates compared to conventional processing. Laboratory testing concluded that a 23% reduction in energy was possible for calcining small batches (5 kg). Scale-up testing indicated that the energy savings increased as a function of load size, and 36% energy savings was demonstrated (22 kg). A sophisticated model was developed which combines simultaneous microwave and conventional heating. Continued development of this modeling software could be used for larger scale calcining simulations, which would be a beneficial low-cost tool for exploring equipment design prior to actual building. Based on these findings, estimates for production scale MAT calcining benefits were calculated, assuming uptake of MAT in the US lime industry. This estimate showed that 7.3 TBTU/year could be saved, with a reduction of 270 MMlbs of CO2 emissions and $29 MM/year in economic savings. Taking into account estimates for MAT implementation in the US cement industry, an additional 39 TBTU/year, 3 Blbs of CO2, and $155 MM/year could be saved. One of the main remaining barriers to commercialization of MAT for the lime and cement industries is the sheer size of production. Through this project, it was realized that a production-size MAT rotary calciner was not feasible, and a different approach was adopted. The concept of a microwave post-heat section located in the upper portion of the cooler was devised and appears to be a more realistic approach for MAT implementation. Commercialization of this technology will require (1) continued pilot scale calcining demonstrations, (2) involvement of lime kiln companies, and (3) involvement of an industrial microwave equipment provider. An initial design concept for a MAT post-heat treatment section was conceived as a retrofit into the cooler sections of existing lime rotary calciners with a 1.4 year payback. Retrofitting will help spur implementation of this technology, as the capital investment will be minimal for enhancing the efficiency of current rotary lime kilns. Retrofits would likely be attractive to lime manufacturers, as the purchase of a new lime kiln is on the order of a $30 million investment, whereas a MAT retrofit is estimated on the order of $1 million.
The path for commercialization lies in partnering with existing lime kiln companies, who will be able to implement the microwave post-heat sections in existing and new-build kilns. A microwave equipment provider has been identified, who would make up part of the continued development and commercialization team.

  13. Image processing for quantifying fracture orientation and length scale transitions during brittle deformation

    NASA Astrophysics Data System (ADS)

    Rizzo, R. E.; Healy, D.; Farrell, N. J.

    2017-12-01

    We have implemented a novel image processing tool, namely two-dimensional (2D) Morlet wavelet analysis, capable of detecting changes occurring in fracture patterns at different scales of observation, and able to recognise the dominant fracture orientations and spatial configurations at progressively larger (or smaller) scales of analysis. Because of its inherent anisotropy, the Morlet wavelet proves to be an excellent choice for detecting directional linear features, i.e. regions where the amplitude of the signal is regular along one direction and has sharp variation along the perpendicular direction. The performance of the Morlet wavelet is tested against the 'classic' Mexican hat wavelet using a complex synthetic fracture network. When applied to a natural fracture network, formed by triaxially (σ1>σ2=σ3) deforming a core sample of the Hopeman sandstone, the combination of the 2D Morlet wavelet and wavelet coefficient maps allows for the detection of characteristic orientation and length-scale transitions associated with the shift from distributed damage to the growth of a localised macroscopic shear fracture. A complementary outcome arises from the wavelet coefficient maps produced by increasing the wavelet scale parameter. These maps can be used to chart the variations in the spatial distribution of the analysed entities, meaning that it is possible to retrieve information on the density of fracture patterns at specific length scales during deformation.
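
    For readers unfamiliar with directional wavelets, the sketch below builds a complex 2D Morlet kernel (a plane wave under a Gaussian envelope) and convolves it with a synthetic fracture map at one scale and orientation. The kernel size, scale, and central wavenumber are illustrative choices, not the authors' processing parameters.

```python
# Minimal sketch (illustrative parameters, not the authors' exact tool) of a
# directional 2D Morlet wavelet filter applied to a binary fracture map.
import numpy as np
from scipy.signal import fftconvolve

def morlet_2d(size, scale, theta, k0=5.6):
    """Complex 2D Morlet kernel: plane wave along theta under a Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1] / float(scale)
    rot = x * np.cos(theta) + y * np.sin(theta)   # coordinate along the wave vector
    envelope = np.exp(-0.5 * (x**2 + y**2))
    return np.exp(1j * k0 * rot) * envelope / scale

def wavelet_coefficients(image, scale, theta):
    """Magnitude of the wavelet response at one scale and orientation."""
    kernel = morlet_2d(size=8 * scale, scale=scale, theta=theta)
    return np.abs(fftconvolve(image, kernel, mode="same"))

# Example: a synthetic map with one vertical 'fracture' responds most strongly
# when the wavelet's wave vector is perpendicular to it (theta = 0).
img = np.zeros((128, 128))
img[:, 64] = 1.0
for theta in (0.0, np.pi / 4, np.pi / 2):
    print(theta, wavelet_coefficients(img, scale=4, theta=theta).max())
```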

  14. A new precipitation-based method of baseflow separation and event identification for small watersheds (<50 km2)

    NASA Astrophysics Data System (ADS)

    Koskelo, Antti I.; Fisher, Thomas R.; Utz, Ryan M.; Jordan, Thomas E.

    2012-07-01

    Baseflow separation methods are often impractical, require expensive materials and time-consuming methods, and/or are not designed for individual events in small watersheds. To provide a simple baseflow separation method for small watersheds, we describe a new precipitation-based technique known as the Sliding Average with Rain Record (SARR). The SARR uses rainfall data to justify each separation of the hydrograph. SARR has several advantages: it shows better consistency with the precipitation and discharge records, it is easier and more practical to implement, and it includes a method of event identification based on precipitation and quickflow response. SARR was derived from the United Kingdom Institute of Hydrology (UKIH) method with several key modifications to adapt it for small watersheds (<50 km2). We tested SARR on watersheds in the Choptank Basin on the Delmarva Peninsula (US Mid-Atlantic region) and compared the results with the UKIH method at the annual scale and the hydrochemical method at the individual event scale. Annually, SARR calculated a baseflow index that was ~10% higher than the UKIH method due to the finer time step of SARR (1 d) compared to UKIH (5 d). At the watershed scale, hydric soils were an important driver of the annual baseflow index, likely due to increased groundwater retention in hydric areas. At the event scale, SARR calculated less baseflow than the hydrochemical method, again because of the differences in time step (hourly for hydrochemical) and different definitions of baseflow. Both SARR and hydrochemical baseflow increased with event size, suggesting that baseflow contributions are more important during larger storms. To make SARR easy to implement, we have written a MATLAB program to automate the calculations, which requires only daily rainfall and daily flow data as inputs.
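
    The authors' MATLAB program is not reproduced here; the sketch below is only a loose illustration of the general idea of a precipitation-justified separation: a daily sliding-minimum baseflow estimate in which hydrograph rises are attributed to quickflow only when the rain record shows recent rainfall. Window lengths and thresholds are arbitrary placeholders, not SARR's actual rules.

```python
# Simplified illustration in the spirit of a rain-justified baseflow separation
# (not the authors' SARR/MATLAB code): daily flow and rain series in, baseflow out.
import numpy as np

def simple_baseflow(flow, rain, window=5, rain_lookback=2, rain_threshold=1.0):
    """flow, rain: daily series of equal length. Returns a baseflow series <= flow."""
    flow = np.asarray(flow, dtype=float)
    rain = np.asarray(rain, dtype=float)
    n = len(flow)
    baseflow = flow.copy()
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        recent_rain = rain[max(0, i - rain_lookback):i + 1].sum()
        if recent_rain > rain_threshold:
            # Recent rain justifies separating quickflow: take the local minimum.
            baseflow[i] = flow[lo:hi].min()
        # Otherwise the whole flow on that day is treated as baseflow.
    return np.minimum(baseflow, flow)

flow = np.array([2, 2, 8, 15, 9, 4, 2.5, 2.2, 2.1, 2.0])
rain = np.array([0, 12, 20, 3, 0, 0, 0, 0, 0, 0])
bf = simple_baseflow(flow, rain)
print("baseflow index:", bf.sum() / flow.sum())
```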

  15. High-resolution mapping of forest carbon stocks in the Colombian Amazon

    NASA Astrophysics Data System (ADS)

    Asner, G. P.; Clark, J. K.; Mascaro, J.; Galindo García, G. A.; Chadwick, K. D.; Navarrete Encinales, D. A.; Paez-Acosta, G.; Cabrera Montenegro, E.; Kennedy-Bowdoin, T.; Duque, Á.; Balaji, A.; von Hildebrand, P.; Maatoug, L.; Bernal, J. F. Phillips; Yepes Quintero, A. P.; Knapp, D. E.; García Dávila, M. C.; Jacobson, J.; Ordóñez, M. F.

    2012-07-01

    High-resolution mapping of tropical forest carbon stocks can assist forest management and improve implementation of large-scale carbon retention and enhancement programs. Previous high-resolution approaches have relied on field plot and/or light detection and ranging (LiDAR) samples of aboveground carbon density, which are typically upscaled to larger geographic areas using stratification maps. Such efforts often rely on detailed vegetation maps to stratify the region for sampling, but existing tropical forest maps are often too coarse and field plots too sparse for high-resolution carbon assessments. We developed a top-down approach for high-resolution carbon mapping in a 16.5 million ha region (> 40%) of the Colombian Amazon - a remote landscape seldom documented. We report on three advances for large-scale carbon mapping: (i) employing a universal approach to airborne LiDAR-calibration with limited field data; (ii) quantifying environmental controls over carbon densities; and (iii) developing stratification- and regression-based approaches for scaling up to regions outside of LiDAR coverage. We found that carbon stocks are predicted by a combination of satellite-derived elevation, fractional canopy cover and terrain ruggedness, allowing upscaling of the LiDAR samples to the full 16.5 million ha region. LiDAR-derived carbon maps have 14% uncertainty at 1 ha resolution, and the regional map based on stratification has 28% uncertainty in any given hectare. High-resolution approaches with quantifiable pixel-scale uncertainties will provide the most confidence for monitoring changes in tropical forest carbon stocks. Improved confidence will allow resource managers and decision makers to more rapidly and effectively implement actions that better conserve and utilize forests in tropical regions.
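
    The regression-based upscaling step described above can be pictured with a small sketch: fit LiDAR-sampled carbon density against the satellite-derived covariates named in the abstract (elevation, fractional canopy cover, terrain ruggedness) and then predict carbon for hectares outside LiDAR coverage. The data below are synthetic and the linear form is an assumption for illustration, not the authors' calibrated model.

```python
# Sketch of regression-based upscaling (synthetic data, not the authors' model):
# fit LiDAR-derived carbon density to satellite covariates, then predict carbon
# density for pixels that lack LiDAR coverage.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Covariates for LiDAR-sampled hectares: elevation (m), canopy cover (0-1),
# terrain ruggedness (arbitrary index).
n = 500
X_lidar = np.column_stack([
    rng.uniform(50, 500, n),
    rng.uniform(0.3, 1.0, n),
    rng.uniform(0.0, 5.0, n),
])
# Hypothetical relationship used only to generate synthetic training targets.
carbon = 40 + 0.05 * X_lidar[:, 0] + 60 * X_lidar[:, 1] - 3 * X_lidar[:, 2]
carbon += rng.normal(0, 5, n)

model = LinearRegression().fit(X_lidar, carbon)

# Predict for hectares outside LiDAR coverage using only satellite covariates.
X_region = np.column_stack([
    rng.uniform(50, 500, 10),
    rng.uniform(0.3, 1.0, 10),
    rng.uniform(0.0, 5.0, 10),
])
print(model.predict(X_region))  # aboveground carbon density estimates (Mg C / ha)
```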

  16. High-resolution Mapping of Forest Carbon Stocks in the Colombian Amazon

    NASA Astrophysics Data System (ADS)

    Asner, G. P.; Clark, J. K.; Mascaro, J.; Galindo García, G. A.; Chadwick, K. D.; Navarrete Encinales, D. A.; Paez-Acosta, G.; Cabrera Montenegro, E.; Kennedy-Bowdoin, T.; Duque, Á.; Balaji, A.; von Hildebrand, P.; Maatoug, L.; Bernal, J. F. Phillips; Knapp, D. E.; García Dávila, M. C.; Jacobson, J.; Ordóñez, M. F.

    2012-03-01

    High-resolution mapping of tropical forest carbon stocks can assist forest management and improve implementation of large-scale carbon retention and enhancement programs. Previous high-resolution approaches have relied on field plot and/or Light Detection and Ranging (LiDAR) samples of aboveground carbon density, which are typically upscaled to larger geographic areas using stratification maps. Such efforts often rely on detailed vegetation maps to stratify the region for sampling, but existing tropical forest maps are often too coarse and field plots too sparse for high resolution carbon assessments. We developed a top-down approach for high-resolution carbon mapping in a 16.5 million ha region (>40 %) of the Colombian Amazon - a remote landscape seldom documented. We report on three advances for large-scale carbon mapping: (i) employing a universal approach to airborne LiDAR-calibration with limited field data; (ii) quantifying environmental controls over carbon densities; and (iii) developing stratification- and regression-based approaches for scaling up to regions outside of LiDAR coverage. We found that carbon stocks are predicted by a combination of satellite-derived elevation, fractional canopy cover and terrain ruggedness, allowing upscaling of the LiDAR samples to the full 16.5 million ha region. LiDAR-derived carbon mapping samples had 14.6 % uncertainty at 1 ha resolution, and regional maps based on stratification and regression approaches had 25.6 % and 29.6 % uncertainty, respectively, in any given hectare. High-resolution approaches with reported local-scale uncertainties will provide the most confidence for monitoring changes in tropical forest carbon stocks. Improved confidence will allow resource managers and decision-makers to more rapidly and effectively implement actions that better conserve and utilize forests in tropical regions.

  17. Advancing our understanding of the onshore propagation of tsunami bores over rough surfaces through numerical modeling

    NASA Astrophysics Data System (ADS)

    Marras, S.; Suckale, J.; Eguzkitza, B.; Houzeaux, G.; Vázquez, M.; Lesage, A. C.

    2016-12-01

    The propagation of tsunamis in the open ocean has been studied in detail, with many excellent numerical approaches available to researchers. Our understanding of the processes that govern the onshore propagation of tsunamis is less advanced. Yet, the reach of tsunamis on land is an important predictor of the damage associated with a given event, highlighting the need to investigate the factors that govern tsunami propagation onshore. In this study, we specifically focus on understanding the effect of bottom roughness at a variety of scales. The term roughness is to be understood broadly, as it represents scales ranging from small features like rocks, to vegetation, up to the size of larger structures and topography. In this poster, we link applied mathematics, computational fluid dynamics, and tsunami physics to analyze the small-scale features of coastal hydrodynamics and the effect of roughness on the motion of tsunamis as they run up a sloping beach and propagate inland. We solve the three-dimensional Navier-Stokes equations of incompressible flows with a free surface, which is tracked by a level set function in combination with an accurate re-distancing scheme. We discretize the equations via linear finite elements for space approximation and fully implicit time integration. Stabilization is achieved via the variational multiscale method, whereas the subgrid scales for our large eddy simulations are modeled using a dynamically adaptive Smagorinsky eddy viscosity. As the geometrical characteristics of roughness in this study vary greatly across different scales, we implement a scale-dependent representation of the roughness elements. We model the smallest sub-grid scale roughness features by the use of a properly defined law of the wall. Furthermore, we utilize a Manning formula to compute the shear stress at the boundary. As the geometrical scales become larger, we resolve the geometry explicitly and compute the effective volume drag introduced by large-scale immersed bodies. This study is a necessary step to verify and validate our model before proceeding further into the simulation of sediment transport in turbulent free-surface flows. The simulation of such problems requires a space- and time-dependent viscosity to model the effect of solid bodies transported by the incoming flow on onshore tsunami propagation.
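
    The Manning-based boundary shear stress mentioned for the sub-grid roughness is a standard open-channel relation, τ_b = ρ g n² U|U| / h^(1/3). The sketch below evaluates it for a tsunami bore front; the roughness coefficients and flow values are illustrative, not the authors' calibrated numbers.

```python
# Sketch of the Manning-based bed shear stress used for sub-grid roughness:
#   tau_b = rho * g * n^2 * U * |U| / h**(1/3)
# Coefficients and flow conditions below are illustrative placeholders.
import numpy as np

RHO = 1000.0   # water density, kg/m^3
G = 9.81       # gravitational acceleration, m/s^2

def manning_shear_stress(velocity, depth, n_manning):
    """Bed shear stress (Pa) from depth-averaged velocity (m/s) and depth (m)."""
    return RHO * G * n_manning**2 * velocity * np.abs(velocity) / depth**(1.0 / 3.0)

# Example: a 2 m deep bore front moving at 5 m/s over smooth ground (n ~ 0.015)
# versus vegetated/rough ground (n ~ 0.05).
for n in (0.015, 0.05):
    print(f"n = {n}: tau_b = {manning_shear_stress(5.0, 2.0, n):.1f} Pa")
```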

  18. Using Anatomic Magnetic Resonance Image Information to Enhance Visualization and Interpretation of Functional Images: A Comparison of Methods Applied to Clinical Arterial Spin Labeling Images

    PubMed Central

    Dai, Weiying; Soman, Salil; Hackney, David B.; Wong, Eric T.; Robson, Philip M.; Alsop, David C.

    2017-01-01

    Functional imaging provides hemodynamic and metabolic information and is increasingly being incorporated into clinical diagnostic and research studies. Typically functional images have reduced signal-to-noise ratio and spatial resolution compared to other non-functional cross sectional images obtained as part of a routine clinical protocol. We hypothesized that enhancing visualization and interpretation of functional images with anatomic information could provide preferable quality and superior diagnostic value. In this work, we implemented five methods (frequency addition, frequency multiplication, wavelet transform, non-subsampled contourlet transform and intensity-hue-saturation) and a newly proposed ShArpening by Local Similarity with Anatomic images (SALSA) method to enhance the visualization of functional images, while preserving the original functional contrast and quantitative signal intensity characteristics over larger spatial scales. Arterial spin labeling blood flow MR images of the brain were visualization enhanced using anatomic images with multiple contrasts. The algorithms were validated on a numerical phantom and their performance on images of brain tumor patients were assessed by quantitative metrics and neuroradiologist subjective ratings. The frequency multiplication method had the lowest residual error for preserving the original functional image contrast at larger spatial scales (55%–98% of the other methods with simulated data and 64%–86% with experimental data). It was also significantly more highly graded by the radiologists (p<0.005 for clear brain anatomy around the tumor). Compared to other methods, the SALSA provided 11%–133% higher similarity with ground truth images in the simulation and showed just slightly lower neuroradiologist grading score. Most of these monochrome methods do not require any prior knowledge about the functional and anatomic image characteristics, except the acquired resolution. Hence, automatic implementation on clinical images should be readily feasible. PMID:27723582
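
    The paper's specific algorithms are not reproduced here, but the general frequency-domain fusion idea behind methods of the "frequency addition" type can be sketched as follows: keep the low spatial frequencies of the functional image, which carry the quantitative contrast over larger spatial scales, and add high-frequency detail from the anatomic image. The cutoff and the toy images are assumptions for illustration only.

```python
# Generic frequency-domain fusion sketch (one plausible reading of the
# "frequency addition" class of methods; not the paper's exact algorithm).
import numpy as np

def lowpass_mask(shape, cutoff):
    """Binary radial low-pass mask in normalized spatial-frequency units."""
    ky = np.fft.fftfreq(shape[0])[:, None]
    kx = np.fft.fftfreq(shape[1])[None, :]
    return (np.sqrt(kx**2 + ky**2) <= cutoff).astype(float)

def frequency_addition(functional, anatomic, cutoff=0.08):
    """Low frequencies from the functional image + high frequencies from anatomy."""
    mask = lowpass_mask(functional.shape, cutoff)
    f_low = np.fft.ifft2(np.fft.fft2(functional) * mask).real
    a_high = np.fft.ifft2(np.fft.fft2(anatomic) * (1.0 - mask)).real
    return f_low + a_high

# Toy example with random arrays standing in for an ASL perfusion map and a
# T1-weighted anatomic image of the same matrix size.
rng = np.random.default_rng(1)
perfusion = rng.normal(size=(64, 64))
anatomy = rng.normal(size=(64, 64))
fused = frequency_addition(perfusion, anatomy)
print(fused.shape, fused.mean())
```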

  19. Modeling of molecular diffusion and thermal conduction with multi-particle interaction in compressible turbulence

    NASA Astrophysics Data System (ADS)

    Tai, Y.; Watanabe, T.; Nagata, K.

    2018-03-01

    A mixing volume model (MVM) originally proposed for molecular diffusion in incompressible flows is extended as a model for molecular diffusion and thermal conduction in compressible turbulence. The model, established for implementation in Lagrangian simulations, is based on the interactions among spatially distributed notional particles within a finite volume. The MVM is tested with the direct numerical simulation of compressible planar jets with the jet Mach number ranging from 0.6 to 2.6. The MVM well predicts molecular diffusion and thermal conduction for a wide range of the size of mixing volume and the number of mixing particles. In the transitional region of the jet, where the scalar field exhibits a sharp jump at the edge of the shear layer, a smaller mixing volume is required for an accurate prediction of mean effects of molecular diffusion. The mixing time scale in the model is defined as the time scale of diffusive effects at a length scale of the mixing volume. The mixing time scale is well correlated for passive scalar and temperature. Probability density functions of the mixing time scale are similar for molecular diffusion and thermal conduction when the mixing volume is larger than a dissipative scale because the mixing time scale at small scales is easily affected by different distributions of intermittent small-scale structures between passive scalar and temperature. The MVM with an assumption of equal mixing time scales for molecular diffusion and thermal conduction is useful in the modeling of the thermal conduction when the modeling of the dissipation rate of temperature fluctuations is difficult.

  20. Implementation and application of an interactive user-friendly validation software for RADIANCE

    NASA Astrophysics Data System (ADS)

    Sundaram, Anand; Boonn, William W.; Kim, Woojin; Cook, Tessa S.

    2012-02-01

    RADIANCE extracts CT dose parameters from dose sheets using optical character recognition and stores the data in a relational database. To facilitate validation of RADIANCE's performance, a simple user interface was initially implemented and about 300 records were evaluated. Here, we extend this interface to achieve a wider variety of functions and perform a larger-scale validation. The validator uses some data from the RADIANCE database to prepopulate quality-testing fields, such as correspondence between calculated and reported total dose-length product. The interface also displays relevant parameters from the DICOM headers. A total of 5,098 dose sheets were used to test the performance accuracy of RADIANCE in dose data extraction. Several search criteria were implemented. All records were searchable by accession number, study date, or dose parameters beyond chosen thresholds. Validated records were searchable according to additional criteria from validation inputs. An error rate of 0.303% was demonstrated in the validation. Dose monitoring is increasingly important and RADIANCE provides an open-source solution with a high level of accuracy. The RADIANCE validator has been updated to enable users to test the integrity of their installation and verify that their dose monitoring is accurate and effective.
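
    One of the quality-testing fields mentioned above is the correspondence between calculated and reported total dose-length product. A minimal sketch of that kind of check is shown below; the field names and tolerance are hypothetical and do not reflect RADIANCE's actual schema.

```python
# Sketch of a dose-sheet consistency check of the kind the validator prepopulates
# (field names are hypothetical, not RADIANCE's schema): compare the sum of
# per-series DLP values against the reported total DLP.
def dlp_consistent(series_dlps, reported_total_dlp, tolerance=0.02):
    """True if the summed per-series DLP matches the reported total within tolerance."""
    calculated = sum(series_dlps)
    if reported_total_dlp == 0:
        return calculated == 0
    return abs(calculated - reported_total_dlp) / reported_total_dlp <= tolerance

record = {
    "accession": "A12345",                 # hypothetical record
    "series_dlp_mGy_cm": [312.4, 128.9],   # per-series DLP values from OCR
    "total_dlp_mGy_cm": 441.3,             # total DLP reported on the dose sheet
}
print(dlp_consistent(record["series_dlp_mGy_cm"], record["total_dlp_mGy_cm"]))
```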

  1. Pore-scale simulation of microbial growth using a genome-scale metabolic model: Implications for Darcy-scale reactive transport

    NASA Astrophysics Data System (ADS)

    Tartakovsky, G. D.; Tartakovsky, A. M.; Scheibe, T. D.; Fang, Y.; Mahadevan, R.; Lovley, D. R.

    2013-09-01

    Recent advances in microbiology have enabled the quantitative simulation of microbial metabolism and growth based on genome-scale characterization of metabolic pathways and fluxes. We have incorporated a genome-scale metabolic model of the iron-reducing bacterium Geobacter sulfurreducens into a pore-scale simulation of microbial growth based on coupling of iron reduction to oxidation of a soluble electron donor (acetate). In our model, fluid flow and solute transport are governed by a combination of the Navier-Stokes and advection-diffusion-reaction equations. Microbial growth occurs only on the surface of soil grains where solid-phase mineral iron oxides are available. Mass fluxes of chemical species associated with microbial growth are described by the genome-scale microbial model, implemented using a constraint-based metabolic model, and provide the Robin-type boundary condition for the advection-diffusion equation at soil grain surfaces. Conventional models of microbially-mediated subsurface reactions use a lumped reaction model that does not consider individual microbial reaction pathways, and describe reaction rates using empirically-derived rate formulations such as Monod-type kinetics. We have used our pore-scale model to explore the relationship between genome-scale metabolic models and Monod-type formulations, and to assess the manifestation of pore-scale variability (microenvironments) in terms of apparent Darcy-scale microbial reaction rates. The genome-scale model predicted lower biomass yield, and different stoichiometry for iron consumption, in comparison to prior Monod formulations based on energetics considerations. We were able to fit an equivalent Monod model, by modifying the reaction stoichiometry and biomass yield coefficient, that could effectively match results of the genome-scale simulation of microbial behaviors under excess nutrient conditions, but predictions of the fitted Monod model deviated from those of the genome-scale model under conditions in which one or more nutrients were limiting. The fitted Monod kinetic model was also applied at the Darcy scale; that is, to simulate average reaction processes at the scale of the entire pore-scale model domain. As we expected, even under excess nutrient conditions for which the Monod and genome-scale models predicted equal reaction rates at the pore scale, the Monod model over-predicted the rates of biomass growth and iron and acetate utilization when applied at the Darcy scale. This discrepancy is caused by an inherent assumption of perfect mixing over the Darcy-scale domain, which is clearly violated in the pore-scale models. These results help to explain the need to modify the flux constraint parameters in order to match observations in previous applications of the genome-scale model at larger scales. These results also motivate further investigation of quantitative multi-scale relationships between fundamental behavior at the pore scale (where genome-scale models are appropriately applied) and observed behavior at larger scales (where predictions of reactive transport phenomena are needed).
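
    The lumped Monod-type formulation that the genome-scale model is compared against can be sketched as a dual-limitation rate law in acetate and Fe(III). The parameter values below are placeholders, not the fitted values from the study; only the functional form is the point of the illustration.

```python
# Illustrative dual-Monod rate law of the kind used in lumped reactive-transport
# models (parameter values are placeholders, not the study's fitted constants):
# growth limited by both the electron donor (acetate) and the Fe(III) acceptor.
MU_MAX = 0.2      # 1/h, assumed maximum specific growth rate
K_ACETATE = 0.05  # mM, assumed half-saturation constant for acetate
K_FE = 1.0        # mM, assumed half-saturation constant for Fe(III)
YIELD = 0.01      # biomass produced per unit acetate consumed (assumed)

def monod_rates(biomass, acetate, fe3):
    """Return (biomass growth rate, acetate consumption rate, Fe(III) reduction rate)."""
    mu = MU_MAX * acetate / (K_ACETATE + acetate) * fe3 / (K_FE + fe3)
    growth = mu * biomass
    acetate_rate = -growth / YIELD
    fe3_rate = 8.0 * acetate_rate   # assumed stoichiometry: 8 Fe(III) per acetate oxidized
    return growth, acetate_rate, fe3_rate

print(monod_rates(biomass=0.1, acetate=1.0, fe3=10.0))
```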

  2. Pore-scale simulation of microbial growth using a genome-scale metabolic model: Implications for Darcy-scale reactive transport

    NASA Astrophysics Data System (ADS)

    Scheibe, T. D.; Tartakovsky, G.; Tartakovsky, A. M.; Fang, Y.; Mahadevan, R.; Lovley, D. R.

    2012-12-01

    Recent advances in microbiology have enabled the quantitative simulation of microbial metabolism and growth based on genome-scale characterization of metabolic pathways and fluxes. We have incorporated a genome-scale metabolic model of the iron-reducing bacterium Geobacter sulfurreducens into a pore-scale simulation of microbial growth based on coupling of iron reduction to oxidation of a soluble electron donor (acetate). In our model, fluid flow and solute transport are governed by a combination of the Navier-Stokes and advection-diffusion-reaction equations. Microbial growth occurs only on the surface of soil grains where solid-phase mineral iron oxides are available. Mass fluxes of chemical species associated with microbial growth are described by the genome-scale microbial model, implemented using a constraint-based metabolic model, and provide the Robin-type boundary condition for the advection-diffusion equation at soil grain surfaces. Conventional models of microbially-mediated subsurface reactions use a lumped reaction model that does not consider individual microbial reaction pathways, and describe reaction rates using empirically-derived rate formulations such as Monod-type kinetics. We have used our pore-scale model to explore the relationship between genome-scale metabolic models and Monod-type formulations, and to assess the manifestation of pore-scale variability (microenvironments) in terms of apparent Darcy-scale microbial reaction rates. The genome-scale model predicted lower biomass yield, and different stoichiometry for iron consumption, in comparison to prior Monod formulations based on energetics considerations. We were able to fit an equivalent Monod model, by modifying the reaction stoichiometry and biomass yield coefficient, that could effectively match results of the genome-scale simulation of microbial behaviors under excess nutrient conditions, but predictions of the fitted Monod model deviated from those of the genome-scale model under conditions in which one or more nutrients were limiting. The fitted Monod kinetic model was also applied at the Darcy scale; that is, to simulate average reaction processes at the scale of the entire pore-scale model domain. As we expected, even under excess nutrient conditions for which the Monod and genome-scale models predicted equal reaction rates at the pore scale, the Monod model over-predicted the rates of biomass growth and iron and acetate utilization when applied at the Darcy scale. This discrepancy is caused by an inherent assumption of perfect mixing over the Darcy-scale domain, which is clearly violated in the pore-scale models. These results help to explain the need to modify the flux constraint parameters in order to match observations in previous applications of the genome-scale model at larger scales. These results also motivate further investigation of quantitative multi-scale relationships between fundamental behavior at the pore scale (where genome-scale models are appropriately applied) and observed behavior at larger scales (where predictions of reactive transport phenomena are needed).

  3. Pore-scale simulation of microbial growth using a genome-scale metabolic model: Implications for Darcy-scale reactive transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tartakovsky, Guzel D.; Tartakovsky, Alexandre M.; Scheibe, Timothy D.

    2013-09-07

    Recent advances in microbiology have enabled the quantitative simulation of microbial metabolism and growth based on genome-scale characterization of metabolic pathways and fluxes. We have incorporated a genome-scale metabolic model of the iron-reducing bacterium Geobacter sulfurreducens into a pore-scale simulation of microbial growth based on coupling of iron reduction to oxidation of a soluble electron donor (acetate). In our model, fluid flow and solute transport are governed by a combination of the Navier-Stokes and advection-diffusion-reaction equations. Microbial growth occurs only on the surface of soil grains where solid-phase mineral iron oxides are available. Mass fluxes of chemical species associated with microbial growth are described by the genome-scale microbial model, implemented using a constraint-based metabolic model, and provide the Robin-type boundary condition for the advection-diffusion equation at soil grain surfaces. Conventional models of microbially-mediated subsurface reactions use a lumped reaction model that does not consider individual microbial reaction pathways, and describe reaction rates using empirically-derived rate formulations such as Monod-type kinetics. We have used our pore-scale model to explore the relationship between genome-scale metabolic models and Monod-type formulations, and to assess the manifestation of pore-scale variability (microenvironments) in terms of apparent Darcy-scale microbial reaction rates. The genome-scale model predicted lower biomass yield, and different stoichiometry for iron consumption, in comparison to prior Monod formulations based on energetics considerations. We were able to fit an equivalent Monod model, by modifying the reaction stoichiometry and biomass yield coefficient, that could effectively match results of the genome-scale simulation of microbial behaviors under excess nutrient conditions, but predictions of the fitted Monod model deviated from those of the genome-scale model under conditions in which one or more nutrients were limiting. The fitted Monod kinetic model was also applied at the Darcy scale; that is, to simulate average reaction processes at the scale of the entire pore-scale model domain. As we expected, even under excess nutrient conditions for which the Monod and genome-scale models predicted equal reaction rates at the pore scale, the Monod model over-predicted the rates of biomass growth and iron and acetate utilization when applied at the Darcy scale. This discrepancy is caused by an inherent assumption of perfect mixing over the Darcy-scale domain, which is clearly violated in the pore-scale models. These results help to explain the need to modify the flux constraint parameters in order to match observations in previous applications of the genome-scale model at larger scales. These results also motivate further investigation of quantitative multi-scale relationships between fundamental behavior at the pore scale (where genome-scale models are appropriately applied) and observed behavior at larger scales (where predictions of reactive transport phenomena are needed).

  4. Scaling NASA Applications to 1024 CPUs on Origin 3K

    NASA Technical Reports Server (NTRS)

    Taft, Jim

    2002-01-01

    The long and highly successful joint SGI-NASA research effort in ever larger SSI systems was to a large degree the result of the successful development of the MLP scalable parallel programming paradigm developed at ARC: 1) MLP scaling in real production codes justified ever larger systems at NAS; 2) MLP scaling on the 256p Origin 2000 gave SGI the impetus to productize 256p; 3) MLP scaling on 512p gave SGI the courage to build the 1024p O3K; and 4) the history of MLP success resulted in the IBM Star Cluster-based MLP effort.

  5. Dynamics of a neural system with a multiscale architecture

    PubMed Central

    Breakspear, Michael; Stam, Cornelis J

    2005-01-01

    The architecture of the brain is characterized by a modular organization repeated across a hierarchy of spatial scales—neurons, minicolumns, cortical columns, functional brain regions, and so on. It is important to consider that the processes governing neural dynamics at any given scale are not only determined by the behaviour of other neural structures at that scale, but also by the emergent behaviour of smaller scales, and the constraining influence of activity at larger scales. In this paper, we introduce a theoretical framework for neural systems in which the dynamics are nested within a multiscale architecture. In essence, the dynamics at each scale are determined by a coupled ensemble of nonlinear oscillators, which embody the principal scale-specific neurobiological processes. The dynamics at larger scales are 'slaved' to the emergent behaviour of smaller scales through a coupling function that depends on a multiscale wavelet decomposition. The approach is first explicated mathematically. Numerical examples are then given to illustrate phenomena such as between-scale bifurcations, and how synchronization in small-scale structures influences the dynamics in larger structures in an intuitive manner that cannot be captured by existing modelling approaches. A framework for relating the dynamical behaviour of the system to measured observables is presented, and further extensions to capture wave phenomena and mode coupling are suggested. PMID:16087448

  6. PARALLEL HOP: A SCALABLE HALO FINDER FOR MASSIVE COSMOLOGICAL DATA SETS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Skory, Stephen; Turk, Matthew J.; Norman, Michael L.

    2010-11-15

    Modern N-body cosmological simulations contain billions (10^9) of dark matter particles. These simulations require hundreds to thousands of gigabytes of memory and employ hundreds to tens of thousands of processing cores on many compute nodes. In order to study the distribution of dark matter in a cosmological simulation, the dark matter halos must be identified using a halo finder, which establishes the halo membership of every particle in the simulation. The resources required for halo finding are similar to the requirements for the simulation itself. In particular, simulations have become too extensive to use commonly employed halo finders, such that the computational requirements to identify halos must now be spread across multiple nodes and cores. Here, we present a scalable parallel halo finding method called Parallel HOP for large-scale cosmological simulation data. Based on the halo finder HOP, it utilizes the message passing interface (MPI) and domain decomposition to distribute the halo finding workload across multiple compute nodes, enabling analysis of much larger data sets than is possible with the strictly serial or previous parallel implementations of HOP. We provide a reference implementation of this method as a part of the toolkit yt, an analysis toolkit for adaptive mesh refinement data that includes complementary analysis modules. Additionally, we discuss a suite of benchmarks that demonstrate that this method scales well up to several hundred tasks and data sets in excess of 2000^3 particles. The Parallel HOP method and our implementation can be readily applied to any kind of N-body simulation data and is therefore widely applicable.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Allen, Melissa R.; Aziz, H. M. Abdul; Coletti, Mark A.

    Changing human activity within a geographical location may have significant influence on the global climate, but that activity must be parameterized in such a way as to allow these high-resolution sub-grid processes to affect global climate within that modeling framework. Additionally, we must have tools that provide decision support and inform local and regional policies regarding mitigation of and adaptation to climate change. The development of next-generation earth system models that can produce actionable results with minimum uncertainties depends on understanding global climate change and human activity interactions at policy implementation scales. Unfortunately, at best we currently have only limited schemes for relating high-resolution sectoral emissions to real-time weather, ultimately to become part of larger regions and a well-mixed atmosphere. Moreover, even our understanding of meteorological processes at these scales is imperfect. This workshop addresses these shortcomings by providing a forum for discussion of what we know about these processes, what we can model, where we have gaps in these areas, and how we can rise to the challenge to fill these gaps.

  8. Three fundamental devices in one: a reconfigurable multifunctional device in two-dimensional WSe2

    NASA Astrophysics Data System (ADS)

    Dhakras, Prathamesh; Agnihotri, Pratik; Lee, Ji Ung

    2017-06-01

    The three pillars of semiconductor device technologies are (1) the p-n diode, (2) the metal-oxide-semiconductor field-effect transistor and (3) the bipolar junction transistor. They have enabled the unprecedented growth in the field of information technology that we see today. Until recently, the technological revolution for better, faster and more efficient devices has been governed by scaling down the device dimensions following Moore’s Law. With the slowing of Moore’s law, there is a need for alternative materials and computing technologies that can continue the advancement in functionality. Here, we describe a single, dynamically reconfigurable device that implements these three fundamental device functions. The device uses buried gates to achieve n- and p-channels and fits into a larger effort to develop devices with enhanced functionalities, including logic functions, over device scaling. As they are all surface conducting devices, we use one material parameter, the interface trap density of states, to describe the key figure-of-merit of each device.

  9. Increasing the sampling efficiency of protein conformational transition using velocity-scaling optimized hybrid explicit/implicit solvent REMD simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Yuqi; Wang, Jinan; Shao, Qiang, E-mail: qshao@mail.shcnc.ac.cn, E-mail: Jiye.Shi@ucb.com, E-mail: wlzhu@mail.shcnc.ac.cn

    2015-03-28

    The application of temperature replica exchange molecular dynamics (REMD) simulation to protein motion is limited by its huge requirement of computational resources, particularly when an explicit solvent model is implemented. In a previous study, we developed a velocity-scaling optimized hybrid explicit/implicit solvent REMD method with the hope of reducing the number of temperatures (replicas) while maintaining high sampling efficiency. In this study, we utilized this method to characterize and energetically identify the conformational transition pathway of a protein model, the N-terminal domain of calmodulin. In comparison to the standard explicit solvent REMD simulation, the hybrid REMD is much less computationally expensive but, meanwhile, gives accurate evaluation of the structural and thermodynamic properties of the conformational transition, which are in good agreement with the standard REMD simulation. Therefore, the hybrid REMD could greatly increase the computational efficiency and thus expand the application of REMD simulation to larger protein systems.

  10. Hydrogen Research for Spaceport and Space-Based Applications: Hydrogen Production, Storage, and Transport. Part 3

    NASA Technical Reports Server (NTRS)

    Anderson, Tim; Balaban, Canan

    2008-01-01

    The activities presented are a broad based approach to advancing key hydrogen related technologies in areas such as fuel cells, hydrogen production, and distributed sensors for hydrogen-leak detection, laser instrumentation for hydrogen-leak detection, and cryogenic transport and storage. Presented are the results from research projects, education and outreach activities, system and trade studies. The work will aid in advancing the state-of-the-art for several critical technologies related to the implementation of a hydrogen infrastructure. Activities conducted are relevant to a number of propulsion and power systems for terrestrial, aeronautics and aerospace applications. Hydrogen storage and in-space hydrogen transport research focused on developing and verifying design concepts for efficient, safe, lightweight liquid hydrogen cryogenic storage systems. Research into hydrogen production had a specific goal of further advancing proton conducting membrane technology in the laboratory at a larger scale. System and process trade studies evaluated the proton conducting membrane technology, specifically, scale-up issues.

  11. Large Scale Analysis of Geospatial Data with Dask and XArray

    NASA Astrophysics Data System (ADS)

    Zender, C. S.; Hamman, J.; Abernathey, R.; Evans, K. J.; Rocklin, M.; Zender, C. S.; Rocklin, M.

    2017-12-01

    The analysis of geospatial data with high-level languages has accelerated innovation and the impact of existing data resources. However, as datasets grow beyond single-machine memory, data structures within these high-level languages can become a bottleneck. New libraries like Dask and XArray resolve some of these scalability issues, providing interactive workflows that are familiar to high-level-language researchers while also scaling out to much larger datasets. This broadens the access of researchers to larger datasets on high-performance computers and, through interactive development, reduces time-to-insight when compared to traditional parallel programming techniques (MPI). This talk describes Dask, a distributed dynamic task scheduler; Dask.array, a multi-dimensional array that copies the popular NumPy interface; and XArray, a library that wraps NumPy/Dask.array with labeled and indexed axes, implementing the CF conventions. We discuss both the basic design of these libraries and how they change interactive analysis of geospatial data, as well as recent benefits and challenges of distributed computing on clusters of machines.
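
    A minimal example of the chunked, labeled workflow described above is shown below. The file name and variable name ("sst") are hypothetical placeholders; running it requires xarray, dask, and a netCDF file with those contents.

```python
# Minimal sketch of an out-of-core xarray + dask workflow on labeled axes.
import xarray as xr

# Opening with `chunks=` backs the arrays with dask instead of loading them
# eagerly into memory.
ds = xr.open_dataset("sst_daily.nc", chunks={"time": 365})  # hypothetical file

# Operations on labeled dimensions build a lazy dask task graph...
climatology = ds["sst"].groupby("time.month").mean(dim="time")

# ...which only executes (in parallel, possibly on a cluster) when computed.
result = climatology.compute()
print(result)
```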

  12. Why does offspring size affect performance? Integrating metabolic scaling with life-history theory

    PubMed Central

    Pettersen, Amanda K.; White, Craig R.; Marshall, Dustin J.

    2015-01-01

    Within species, larger offspring typically outperform smaller offspring. While the relationship between offspring size and performance is ubiquitous, the cause of this relationship remains elusive. By linking metabolic and life-history theory, we provide a general explanation for why larger offspring perform better than smaller offspring. Using high-throughput respirometry arrays, we link metabolic rate to offspring size in two species of marine bryozoan. We found that metabolism scales allometrically with offspring size in both species: while larger offspring use absolutely more energy than smaller offspring, larger offspring use proportionally less of their maternally derived energy throughout the dependent, non-feeding phase. The increased metabolic efficiency of larger offspring while dependent on maternal investment may explain offspring size effects—larger offspring reach nutritional independence (feed for themselves) with a higher proportion of energy relative to structure than smaller offspring. These findings offer a potentially universal explanation for why larger offspring tend to perform better than smaller offspring but studies on other taxa are needed. PMID:26559952

  13. Coastal erosion vulnerability and risk assessment focusing in tourism beach use.

    NASA Astrophysics Data System (ADS)

    Alexandrakis, George

    2016-04-01

    It is well established that the global market for tourism services is a key source of economic growth. Especially among Mediterranean countries, the tourism sector is one of the principal sectors driving national economies. With the majority of mass tourism activities concentrated around coastal areas, coastal erosion, inter alia, poses a significant threat to coastal economies that depend heavily on revenues from tourism. The economic implications of beach erosion have mainly been assessed in terms of the cost of coastal protection measures, rather than the revenue losses from tourism. For this, the vulnerability of the coast to sea level rise and associated erosion, in terms of expected land loss and economic activity, needs to be identified. To achieve this, a joint environmental and economic evaluation approach can provide a managerial tool to mitigate the impact of beach erosion on tourism, through realistic cost-benefit scenarios for planning alternative protection measures. Such a multipurpose tool needs to consider social, economic and environmental factors, whose relationships can be better understood when distributed and analyzed across geographical space. The risk assessment is implemented through the estimation of the vulnerability and exposure variables of the coast at two scales. The larger scale estimates vulnerability at a regional level, using environmental factors through a Coastal Vulnerability Index (CVI); the exposure variable is estimated from socioeconomic factors. Subsequently, a smaller scale focuses on highly vulnerable beaches with high social and economic value. The response of the beach, given its environmental characteristics, to natural processes is assessed with the Beach Vulnerability Index (BVI) method. As the exposure variable, the value of beach width that is capitalized in revenues is estimated through a hedonic pricing model. In this econometric model, beach value is related to economic and environmental attributes of each sector (beach width, distance from the city) and to tourism attributes (coastal businesses, number of hotels, number of hotel rooms, room price, beach attendance). All calculations are implemented in a GIS database organised in five levels. The island of Crete is selected as the case study area for the application of the method, and four beach tourist destinations on the island, with different vulnerabilities, are used for the small-scale analysis. In the small-scale vulnerability analysis, the sectors of each beach that are most vulnerable were identified, and a risk analysis was performed based on the revenue losses. Acknowledgments: This work was implemented within the framework of the "Post-Doctoral Excellence Scholarship, State Scholarships Foundation, Greece IKY-Siemens Action".

  14. Predicting weak lensing statistics from halo mass reconstructions - Final Paper

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Everett, Spencer

    2015-08-20

    As dark matter does not absorb or emit light, its distribution in the universe must be inferred through indirect effects such as the gravitational lensing of distant galaxies. While most sources are only weakly lensed, the systematic alignment of background galaxies around a foreground lens can constrain the mass of the lens, which is largely in the form of dark matter. In this paper, I have implemented a framework to reconstruct all of the mass along lines of sight using a best-case dark matter halo model in which the halo mass is known. This framework is then used to make predictions of the weak lensing of 3,240 generated source galaxies through a 324 arcmin² field of the Millennium Simulation. The lensed source ellipticities are characterized by the ellipticity-ellipticity and galaxy-mass correlation functions and compared to the same statistics for the intrinsic and ray-traced ellipticities. In the ellipticity-ellipticity correlation function, I find that the framework systematically under-predicts the shear power by an average factor of 2.2 and fails to capture correlation from dark matter structure at scales larger than 1 arcminute. The model-predicted galaxy-mass correlation function is in agreement with the ray-traced statistic from scales of 0.2 to 0.7 arcminutes, but systematically under-predicts shear power at scales larger than 0.7 arcminutes by an average factor of 1.2. Optimization of the framework code has reduced the mean CPU time per lensing prediction by 70% to 24 ± 5 ms. Physical and computational shortcomings of the framework are discussed, as well as potential improvements for upcoming work.
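
    The ellipticity-ellipticity statistic referred to above can be illustrated with a brute-force estimator of the "+" correlation, which for this component reduces to ξ₊(θ) = ⟨e1ₐe1_b + e2ₐe2_b⟩ averaged over pairs in separation bins. The positions, ellipticities, and bin edges below are synthetic stand-ins, not the paper's catalog or pipeline.

```python
# Sketch of a brute-force xi_plus ellipticity-ellipticity correlation estimator
# on synthetic inputs (not the paper's pipeline). For the "+" component the
# rotation to the pair frame cancels, so xi_plus = <e1_a*e1_b + e2_a*e2_b>.
import numpy as np

def xi_plus(pos, e1, e2, bins):
    """pos: (N,2) positions in arcmin; e1, e2: ellipticity components; bins: edges in arcmin."""
    dx = pos[:, None, :] - pos[None, :, :]
    sep = np.hypot(dx[..., 0], dx[..., 1])
    prod = e1[:, None] * e1[None, :] + e2[:, None] * e2[None, :]
    iu = np.triu_indices(len(pos), k=1)      # each pair counted once, no self-pairs
    sep, prod = sep[iu], prod[iu]
    which = np.digitize(sep, bins) - 1
    return np.array([prod[which == b].mean() if np.any(which == b) else np.nan
                     for b in range(len(bins) - 1)])

rng = np.random.default_rng(2)
pos = rng.uniform(0, 18, size=(500, 2))             # roughly an 18x18 arcmin field
e1, e2 = rng.normal(0, 0.03, 500), rng.normal(0, 0.03, 500)
print(xi_plus(pos, e1, e2, bins=np.array([0.2, 0.7, 1.0, 2.0, 5.0])))
```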

  15. Ultra-wideband three-dimensional optoacoustic tomography.

    PubMed

    Gateau, Jérôme; Chekkoury, Andrei; Ntziachristos, Vasilis

    2013-11-15

    Broadband optoacoustic waves generated by biological tissues excited with nanosecond laser pulses carry information corresponding to a wide range of geometrical scales. Typically, the frequency content present in the signals generated during optoacoustic imaging is much broader than the frequency band captured by common ultrasonic detectors, which typically act as bandpass filters. To image optical absorption within structures ranging from entire organs to microvasculature in three dimensions, we implemented optoacoustic tomography with two ultrasound linear arrays featuring center frequencies of 6 and 24 MHz, respectively. In the present work, we show that complementary information on anatomical features could be retrieved, providing a better understanding of the localization of structures in the general anatomy by analyzing multi-bandwidth datasets acquired on a freshly excised kidney.

  16. Understanding violence: a school initiative for violence prevention.

    PubMed

    Nikitopoulos, Christina E; Waters, Jessica S; Collins, Erin; Watts, Caroline L

    2009-01-01

    The present study evaluates Understanding Violence, a violence prevention initiative implemented in a Boston-area elementary school whose students experience high rates of community violence. Understanding Violence draws on the educational and personal skills of youths and allows them to practice positive alternatives to violence. Participating 5th graders (n = 123) completed a survey that included rating scale items and open-ended questions to assess the program. Results indicate high levels of satisfaction with and learning from the program. Participants responded positively to the program's use of diverse components and community engagement. Developed as part of a larger community mental health outreach program, Understanding Violence offers an example of a school-based initiative to mitigate the effects of community violence.

  17. Atomistically derived cohesive zone model of intergranular fracture in polycrystalline graphene

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guin, Laurent; Department of Mechanical Engineering, Columbia University, New York, New York 10027; Raphanel, Jean L.

    2016-06-28

    Pristine single crystal graphene is the strongest known two-dimensional material, and its nonlinear anisotropic mechanical properties are well understood from the atomic length scale up to a continuum description. However, experiments indicate that grain boundaries in the polycrystalline form degrade the mechanical performance of polycrystalline graphene. Herein, we perform atomistic-scale molecular dynamics simulations of the deformation and fracture of graphene grain boundaries and express the results as continuum cohesive zone models (CZMs) that embed notions of the grain boundary ultimate strength and fracture toughness. To facilitate energy balance, we employ a new methodology that simulates a quasi-static controlled crack propagation, which renders the kinetic energy contribution to the total energy negligible. We verify good agreement between Griffith's critical energy release rate and the work of separation of the CZM, and we note that the energy of crack edges and the fracture toughness differ by about 35%, which is attributed to the phenomenon of bond trapping. This justifies the implementation of the CZM within the context of the finite element method (FEM). To enhance computational efficiency in the FEM implementation, we discuss the use of scaled traction-separation laws (TSLs) for larger element sizes. As a final result, we have established that the failure characteristics of pristine graphene and high tilt angle bicrystals differ by less than 10%. This result suggests that a single TSL, or a few typical TSLs, could serve as a good approximation for the CZMs used in mechanical simulations of polycrystalline graphene.
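
    A cohesive zone model is commonly expressed as a traction-separation law (TSL) parameterized by an ultimate strength and a work of separation. The sketch below evaluates a generic bilinear TSL and checks that its area recovers the prescribed toughness; the values and the bilinear shape are placeholders for illustration, not the paper's fitted curves. For the element-size scaling mentioned above, one common practice is to lower the peak strength while holding the work of separation fixed, so the enclosed area (the fracture energy) is preserved.

```python
# Sketch of a bilinear traction-separation law (TSL): linear rise to sigma_max,
# then linear softening to zero, with total area equal to the work of separation.
# Numerical values are placeholders, not the study's fitted parameters.
import numpy as np

def bilinear_tsl(delta, sigma_max, work_of_separation, delta_peak_frac=0.1):
    """Traction as a function of opening displacement delta."""
    delta_final = 2.0 * work_of_separation / sigma_max   # area = 0.5*sigma_max*delta_final
    delta_peak = delta_peak_frac * delta_final
    rise = sigma_max * delta / delta_peak
    soften = sigma_max * (delta_final - delta) / (delta_final - delta_peak)
    return np.where(delta <= delta_peak, rise, np.clip(soften, 0.0, None))

sigma_max = 30.0          # placeholder strength (consistent units assumed)
G_c = 15.0                # placeholder work of separation (same unit system)
delta = np.linspace(0.0, 1.2, 2000)
traction = bilinear_tsl(delta, sigma_max, G_c)
print(np.trapz(traction, delta))   # ~= G_c, the prescribed fracture energy
```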

  18. Development of a scaled-down aerobic fermentation model for scale-up in recombinant protein vaccine manufacturing.

    PubMed

    Farrell, Patrick; Sun, Jacob; Gao, Meg; Sun, Hong; Pattara, Ben; Zeiser, Arno; D'Amore, Tony

    2012-08-17

    A simple approach to the development of an aerobic scaled-down fermentation model is presented to obtain more consistent process performance during the scale-up of recombinant protein manufacture. Using a constant volumetric oxygen mass transfer coefficient (kLa) as the criterion for the scale-down process, the scaled-down model can be "tuned" to match the kLa of any larger-scale target by varying the impeller rotational speed. This approach is demonstrated for a protein vaccine candidate expressed in recombinant Escherichia coli, where process performance is shown to be consistent among the 2-L, 20-L, and 200-L scales. An empirical correlation for kLa has also been employed to extrapolate to larger manufacturing scales. Copyright © 2012 Elsevier Ltd. All rights reserved.
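
    The matching idea can be sketched numerically: given an empirical correlation of the widely used form kLa = a·(P/V)^α·vs^β and an estimate of impeller power draw, solve for the small-scale stirrer speed that reproduces the large-scale kLa. The correlation constants, vessel sizes, and impeller numbers below are illustrative assumptions, not the paper's fitted values.

```python
# Sketch of kLa matching between scales (all constants are illustrative):
# choose the small-scale stirrer speed so kLa equals the large-scale target.
from scipy.optimize import brentq

A, ALPHA, BETA = 0.002, 0.7, 0.4   # assumed kLa = A * (P/V)^ALPHA * vs^BETA (SI units)
RHO, NP = 1000.0, 5.0              # broth density (kg/m^3) and impeller power number

def kla(n_rps, d_impeller, volume, vs):
    power = NP * RHO * n_rps**3 * d_impeller**5   # ungassed impeller power draw, W
    return A * (power / volume) ** ALPHA * vs ** BETA

# Target kLa evaluated at a hypothetical 200-L scale.
target = kla(n_rps=3.0, d_impeller=0.20, volume=0.2, vs=0.005)

# Solve for the 2-L scale impeller speed that reproduces that kLa.
match = brentq(lambda n: kla(n, d_impeller=0.06, volume=0.002, vs=0.005) - target,
               0.5, 50.0)
print(f"target kLa = {target:.4f} 1/s, matched small-scale speed = {match:.1f} rps")
```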

  19. A narrative account of implementation lessons learnt from the dissemination of an up-scaled state-wide child obesity management program in Australia: PEACH™ (Parenting, Eating and Activity for Child Health) Queensland.

    PubMed

    Croyden, Debbie L; Vidgen, Helen A; Esdaile, Emma; Hernandez, Emely; Magarey, Anthea; Moores, Carly J; Daniels, Lynne

    2018-03-13

    PEACH™ QLD translated the PEACH™ Program, designed to manage overweight/obesity in primary school-aged children, from an efficacious RCT and a small-scale community trial to a larger state-wide program. This paper describes the lessons learnt when upscaling to universal health coverage. The 6-month, family-focussed program was delivered in Queensland, Australia from 2013 to 2016. Its implementation was planned by the researchers who developed the program and conducted the RCT, together with experienced project managers and practitioners across the health continuum. The intervention targeted parents as the agents of change and was delivered via parent-only group sessions. Concurrently, children attended fun, non-competitive activity sessions. Sessions were delivered by facilitators who received standardised training and were employed by a range of service providers. Participants were referred by health professionals or self-referred in response to extensive promotion and marketing. A pilot phase and a quality improvement framework were planned to respond to emerging challenges. Implementation challenges included engagement of the health system, participant recruitment, and participant engagement. A total of 1513 children (1216 families) enrolled, with 1122 children (919 families) in the face-to-face program (105 groups in 50 unique venues) and 391 children (297 families) in PEACH™ Online. Self-referral generated 68% of enrolments. Unexpected, concurrent, and far-reaching public health system changes contributed to poor program uptake by the sector (only 56 [53%] groups were delivered by publicly-funded health organisations), requiring substantial modification of the original implementation plan. Process evaluation during the pilot phase and an ongoing quality improvement framework informed program adaptations that included changing from fortnightly to weekly sessions aligned with school terms, revision of parent materials, modification of eligibility criteria to include healthy weight children, and provision of services privately. Comparisons between the pilot and state-wide waves showed a comparable prevalence of families not attending any sessions (25% vs 28%) but an improved number of sessions attended (median = 5 vs 7) and completion rates (43% vs 56%). Translating programs developed in the research context to enable implementation at scale is complex and presents substantial challenges. Planning must ensure there is flexibility to accommodate and proactively manage the system changes that are inevitable over time. ACTRN12617000315314. This trial was registered retrospectively on 28 February, 2017.

  20. Learning before leaping: integration of an adaptive study design process prior to initiation of BetterBirth, a large-scale randomized controlled trial in Uttar Pradesh, India.

    PubMed

    Hirschhorn, Lisa Ruth; Semrau, Katherine; Kodkany, Bhala; Churchill, Robyn; Kapoor, Atul; Spector, Jonathan; Ringer, Steve; Firestone, Rebecca; Kumar, Vishwajeet; Gawande, Atul

    2015-08-14

    Pragmatic and adaptive trial designs are increasingly used in quality improvement (QI) interventions to provide the strongest evidence for effective implementation and impact prior to broader scale-up. We previously showed that an on-site coaching intervention focused on the World Health Organization Safe Childbirth Checklist (SCC) improved performance of essential birth practices (EBPs) in one facility in Karnataka, India. We report on the process and outcomes of adapting the intervention prior to larger-scale implementation in a randomized controlled trial in Uttar Pradesh (UP), India. Initially, we trained a local team of physicians and nurses to coach birth attendants in SCC use at two public facilities for 4-6 weeks. Trained observers evaluated adherence to EBPs before and after coaching. Using mixed methods and a systematic adaptation process, we modified and strengthened the intervention. The modified intervention was implemented in three additional facilities. Pre/post-change in EBP prevalence aggregated across facilities was analyzed. In the first two facilities, limited improvement was seen in EBPs with the exception of post-partum oxytocin. Checklists were used in <25% of observations. We identified challenges in physicians coaching nurses, a need to engage district and facility leadership to address system gaps, and an inadequate strategy for motivating SCC uptake. Revisions included change to peer-to-peer coaching (nurse to nurse, physician to physician); strengthened coach training on behavior and system change; adapted strategy for effective leadership engagement; and an explicit motivation strategy to enhance professional pride and effectiveness. These modifications resulted in improvement in multiple EBPs from baseline, including taking maternal blood pressure (0 to 16 %), post-partum oxytocin (36 to 97 %), early breastfeeding initiation (3 to 64 %), as well as checklist use (range 32 to 88 %), all p < 0.01. Further adaptations were implemented to increase effectiveness prior to the full trial launch. The adaptive study design of implementation, evaluation, and feedback drove iterative redesign and successfully developed a SCC-focused coaching intervention that improved EBPs in UP facilities. This work was critical to develop a replicable BetterBirth package tailored to the local context. The multi-center pragmatic trial is underway measuring impact of the BetterBirth program on EBP and maternal-neonatal morbidity and mortality. NCT02148952.

  1. LASSIE: simulating large-scale models of biochemical systems on GPUs.

    PubMed

    Tangherloni, Andrea; Nobile, Marco S; Besozzi, Daniela; Mauri, Giancarlo; Cazzaniga, Paolo

    2017-05-10

    Mathematical modeling and in silico analysis are widely acknowledged as complementary tools to biological laboratory methods, to achieve a thorough understanding of emergent behaviors of cellular processes in both physiological and perturbed conditions. However, the simulation of large-scale models, consisting of hundreds or thousands of reactions and molecular species, can rapidly exceed the capabilities of Central Processing Units (CPUs). The purpose of this work is to exploit alternative high-performance computing solutions, such as Graphics Processing Units (GPUs), to allow the investigation of these models at reduced computational costs. LASSIE is a "black-box" GPU-accelerated deterministic simulator, specifically designed for large-scale models and not requiring any expertise in mathematical modeling, simulation algorithms or GPU programming. Given a reaction-based model of a cellular process, LASSIE automatically generates the corresponding system of Ordinary Differential Equations (ODEs), assuming mass-action kinetics. The numerical solution of the ODEs is obtained by automatically switching between the Runge-Kutta-Fehlberg method in the absence of stiffness, and the Backward Differentiation Formulae of first order in the presence of stiffness. The computational performance of LASSIE is assessed using a set of randomly generated synthetic reaction-based models of increasing size, ranging from 64 to 8192 reactions and species, and compared to a CPU implementation of the LSODA numerical integration algorithm. LASSIE adopts a novel fine-grained parallelization strategy to distribute on the GPU cores all the calculations required to solve the system of ODEs. By virtue of this implementation, LASSIE achieves up to 92× speed-up with respect to LSODA, thereby reducing the running time from approximately 1 month to 8 h for models consisting of, for instance, four thousand reactions and species. Notably, thanks to its smaller memory footprint, LASSIE is able to perform fast simulations of even larger models, for which the tested CPU implementation of LSODA failed to reach termination. LASSIE is therefore expected to make an important breakthrough in Systems Biology applications, for the execution of faster and in-depth computational analyses of large-scale models of complex biological systems.
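
    As a rough illustration of the workflow described (not the GPU-accelerated LASSIE code), the sketch below assembles mass-action ODEs from a toy reaction-based model and integrates them on the CPU with SciPy's LSODA, which switches automatically between non-stiff and stiff methods, analogous to the RKF/BDF switching in the abstract. The reactions and rate constants are invented for illustration.

```python
# Minimal CPU sketch (not LASSIE): assemble mass-action ODEs dX/dt = S @ v(X)
# from a reaction-based model and integrate with LSODA, which switches
# automatically between non-stiff and stiff methods.
import numpy as np
from scipy.integrate import solve_ivp

# Toy model (illustrative, not from the paper): A + B -> C, C -> A + B, C -> D
species = ["A", "B", "C", "D"]
reactants = np.array([[1, 1, 0, 0],   # reactant stoichiometry per reaction
                      [0, 0, 1, 0],
                      [0, 0, 1, 0]])
products = np.array([[0, 0, 1, 0],
                     [1, 1, 0, 0],
                     [0, 0, 0, 1]])
k = np.array([1.0, 0.5, 0.1])          # rate constants
S = (products - reactants).T            # net stoichiometric matrix (species x reactions)

def rhs(t, x):
    # Mass-action rates: v_r = k_r * prod_i x_i^(reactant order of species i)
    v = k * np.prod(np.power(x, reactants), axis=1)
    return S @ v

x0 = np.array([10.0, 8.0, 0.0, 0.0])
sol = solve_ivp(rhs, (0.0, 20.0), x0, method="LSODA", rtol=1e-6, atol=1e-9)
print(dict(zip(species, sol.y[:, -1])))
```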

  2. Tests of dynamic Lagrangian eddy viscosity models in Large Eddy Simulations of flow over three-dimensional bluff bodies

    NASA Astrophysics Data System (ADS)

    Tseng, Yu-Heng; Meneveau, Charles; Parlange, Marc B.

    2004-11-01

    Large Eddy Simulations (LES) of atmospheric boundary-layer air movement in urban environments are especially challenging due to complex ground topography. Typically in such applications, fairly coarse grids must be used where the subgrid-scale (SGS) model is expected to play a crucial role. An LES code using pseudo-spectral discretization in horizontal planes and second-order differencing in the vertical is implemented in conjunction with the immersed boundary method to incorporate complex ground topography, with the classic equilibrium log-law boundary condition in the near-wall region, and with several versions of the eddy-viscosity model: (1) the constant-coefficient Smagorinsky model, (2) the dynamic, scale-invariant Lagrangian model, and (3) the dynamic, scale-dependent Lagrangian model. Other planar-averaged dynamic models are not suitable because spatial averaging is not possible without directions of statistical homogeneity. These SGS models are tested in LES of flow around a square cylinder and of flow over surface-mounted cubes. Effects on the mean flow are documented and found not to be major. Dynamic Lagrangian models give a physically more realistic SGS viscosity field, and in general, the scale-dependent Lagrangian model produces a larger Smagorinsky coefficient than the scale-invariant one, leading to reduced resolved rms velocities, especially in the boundary layers near the bluff bodies.
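
    For orientation, the sketch below evaluates the constant-coefficient Smagorinsky eddy viscosity, the first of the three SGS models listed, on a uniform grid: nu_t = (Cs*Delta)^2 |S| with |S| = sqrt(2 Sij Sij). The dynamic Lagrangian variants additionally evolve the coefficient along fluid pathlines, which is beyond this sketch; the grid, velocity field and Cs value are illustrative.

```python
# Minimal sketch of the constant-coefficient Smagorinsky SGS viscosity,
# nu_t = (Cs * Delta)^2 * |S|, |S| = sqrt(2 Sij Sij), on a uniform grid.
# The dynamic Lagrangian models in the abstract instead evolve Cs along
# pathlines; that machinery is omitted here.
import numpy as np

def smagorinsky_viscosity(u, v, w, dx, Cs=0.16):
    """u, v, w: 3-D velocity arrays on a uniform grid with spacing dx."""
    grads = [np.gradient(f, dx) for f in (u, v, w)]  # grads[i][j] = d u_i / d x_j
    S2 = 0.0
    for i in range(3):
        for j in range(3):
            Sij = 0.5 * (grads[i][j] + grads[j][i])   # strain-rate tensor
            S2 += 2.0 * Sij * Sij
    return (Cs * dx) ** 2 * np.sqrt(S2)

# Example on a random field (illustrative only)
rng = np.random.default_rng(0)
u, v, w = (rng.standard_normal((16, 16, 16)) for _ in range(3))
nu_t = smagorinsky_viscosity(u, v, w, dx=0.1)
print(nu_t.mean())
```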

  3. Implementation of a boundary element method to solve for the near field effects of an array of WECs

    NASA Astrophysics Data System (ADS)

    Oskamp, J. A.; Ozkan-Haller, H. T.

    2010-12-01

    When Wave Energy Converters (WECs) are installed, they affect the shoreline wave climate by removing some of the wave energy which would have reached the shore. Before large WEC projects are launched, it is important to understand the potential coastal impacts of these installations. The high cost associated with ocean scale testing invites the use of hydrodynamic models to play a major role in estimating these effects. In this study, a wave structure interaction program (WAMIT) is used to model an array of WECs. The program predicts the wave field throughout the array using a boundary element method to solve the potential flow fluid problem, taking into account the incident waves, the power dissipated, and the way each WEC moves and interacts with the others. This model is appropriate for a small domain near the WEC array in order to resolve the details in the interactions, but not extending to the coastline (where the far-field effects must be assessed). To propagate these effects to the coastline, the waves leaving this small domain will be used as boundary conditions for a larger model domain which will assess the shoreline effects caused by the array. The immediate work is concerned with setting up the WAMIT model for a small array of point absorbers. A 1:33 scale lab test is planned and will provide data to validate the WAMIT model on this small domain before it is nested with the larger domain to estimate shoreline effects.

  4. Community-Based Management: Under What Conditions Do Sámi Pastoralists Manage Pastures Sustainably?

    PubMed Central

    Hausner, Vera H.; Fauchald, Per; Jernsletten, Johnny-Leo

    2012-01-01

    Community-based management (CBM) has been implemented in socio-ecological systems (SES) worldwide. CBM has also been the prevailing policy in Sámi pastoral SES in Norway, but the outcomes tend to vary extensively among resource groups (“siidas”). We asked why some siidas self-organize to manage common pool resources sustainably while others do not. To answer this question, we used a mixed-methods approach. First, we statistically analyzed the relationship between sustainability indicators and structural variables. We found that small winter pastures that are shared by few siidas were managed more sustainably than larger pastures. Seasonal siida stability, i.e., a low turnover of pastoralists working together throughout the year, and equality among herders, also contributed to more sustainable outcomes. Second, interviews were conducted in the five largest pastures to explain the relationships between the structural variables and sustainability. The pastoralists expressed a high level of agreement with respect to sustainable policies, but reported a low level of trust and cooperation among the siidas. The pastoralists requested siida tenures or clear rules and sanctioning mechanisms by an impartial authority rather than flexible organization or more autonomy for the siidas. The lack of nestedness in self-organization for managing pastures on larger scales, combined with the past economic policies, could explain why CBM is less sustainable on the largest winter pastures. We conclude that the scale mismatch between self-organization and formal governance is a key condition for sustainability. PMID:23240003

  5. Efficient parallelization for AMR MHD multiphysics calculations; implementation in AstroBEAR

    NASA Astrophysics Data System (ADS)

    Carroll-Nellenback, Jonathan J.; Shroyer, Brandon; Frank, Adam; Ding, Chen

    2013-03-01

    Current adaptive mesh refinement (AMR) simulations require algorithms that are highly parallelized and manage memory efficiently. As compute engines grow larger, AMR simulations will require algorithms that achieve new levels of efficient parallelization and memory management. We have attempted to employ new techniques to achieve both of these goals. Patch or grid based AMR often employs ghost cells to decouple the hyperbolic advances of each grid on a given refinement level. This decoupling allows each grid to be advanced independently. In AstroBEAR we utilize this independence by threading the grid advances on each level with preference going to the finer level grids. This allows for global load balancing instead of level by level load balancing and allows for greater parallelization across both physical space and AMR level. Threading of level advances can also improve performance by interleaving communication with computation, especially in deep simulations with many levels of refinement. While we see improvements of up to 30% on deep simulations run on a few cores, the speedup is typically more modest (5-20%) for larger scale simulations. To improve memory management we have employed a distributed tree algorithm that requires processors to only store and communicate local sections of the AMR tree structure with neighboring processors. Using this distributed approach we are able to get reasonable scaling efficiency (>80%) out to 12288 cores and up to 8 levels of AMR - independent of the use of threading.

  6. HESS Opinions: The complementary merits of competing modelling philosophies in hydrology

    NASA Astrophysics Data System (ADS)

    Hrachowitz, Markus; Clark, Martyn P.

    2017-08-01

    In hydrology, two somewhat competing philosophies form the basis of most process-based models. At one endpoint of this continuum are detailed, high-resolution descriptions of small-scale processes that are numerically integrated to larger scales (e.g. catchments). At the other endpoint of the continuum are spatially lumped representations of the system that express the hydrological response via, in the extreme case, a single linear transfer function. Many other models, developed starting from these two contrasting endpoints, plot along this continuum with different degrees of spatial resolutions and process complexities. A better understanding of the respective basis as well as the respective shortcomings of different modelling philosophies has the potential to improve our models. In this paper we analyse several frequently communicated beliefs and assumptions to identify, discuss and emphasize the functional similarity of the seemingly competing modelling philosophies. We argue that deficiencies in model applications largely do not depend on the modelling philosophy, although some models may be more suitable for specific applications than others and vice versa, but rather on the way a model is implemented. Based on the premises that any model can be implemented at any desired degree of detail and that any type of model remains to some degree conceptual, we argue that a convergence of modelling strategies may hold some value for advancing the development of hydrological models.

  7. Carbon Dioxide Collection and Purification System for Mars

    NASA Technical Reports Server (NTRS)

    Clark, D. Larry; Trevathan, Joseph R.

    2001-01-01

    One of the most abundant resources available on Mars is the atmosphere. The primary constituent, carbon dioxide, can be used to produce a wide variety of consumables including propellants and breathing air. The residual gases can be used for additional pressurization tasks including supplementing the oxygen partial pressure in human habitats. A system is presented that supplies pure, high-pressure carbon dioxide and a separate stream of residual gases ready for further processing. This power-efficient method freezes the carbon dioxide directly from the atmosphere using a pulse-tube cryocooler. The resulting CO2 mass is later thawed in a closed pressure vessel, resulting in a compact source of liquefied gas at the vapor pressure of the bulk fluid. Results from a demonstration system are presented along with analysis and system scaling factors for implementation at larger scales. Trace gases in the Martian atmosphere challenge the system designer for all carbon dioxide acquisition concepts. The approximately five percent of other gases build up as local concentrations of CO2 are removed, resulting in diminished performance of the collection process. The presented system takes advantage of this fact and draws the concentrated residual gases away as a useful byproduct. The presented system represents an excellent volume and mass solution for collecting and compressing this valuable Martian resource. Recent advances in pulse-tube cryocooler technology have enabled this concept to be realized in a reliable, low power implementation.

  8. Scaling Effects of Riparian Peatlands on Stable Isotopes in Runoff and DOC Mobilization

    NASA Astrophysics Data System (ADS)

    Tetzlaff, D.; Tunaley, C.; Soulsby, C.

    2016-12-01

    We combined 13 months of daily isotope measurements in stream water with daily DOC and 15 minute FDOM (fluorescent component of dissolved organic matter) data at three nested scales to identify how riparian peatlands generate runoff and influence DOC dynamics in streams. We investigated how runoff generation processes in a small, riparian peatland dominated headwater catchment (0.65 km2) propagate to larger scales (3.2 km2 and 31 km2) with decreasing percentage of riparian peatland coverage. Isotope damping was most pronounced in the 0.65 km2 headwater catchment due to high water storage in the organic soils which encourage tracer mixing. At the largest scale, stream flow and water isotope dynamics showed a more flashy response. The isotopic difference between the sites was most pronounced in the summer months when stream water signatures were enriched. During the winter months, the inter-site difference reduced. The isotopes also revealed evaporative fractionation in the peatland dominated catchment, in particular during summer low flows, which implied high hydrological connectivity in the form of constant seepage from the peatlands sustaining high baseflows at the headwater scale. This connectivity resulted in high DOC concentrations at the peatland site during baseflow (∼5 mg l-1). In contrast, at the larger scales, DOC was minimal during low flows (∼2 mg l-1) due to increased groundwater influence and the disconnection between DOC sources and the stream. High frequency data also revealed diel variability during low flows. Insights into event dynamics through the analysis of hysteresis loops showed slight dilution on the rising limb, the strong influence of dry antecedent conditions and a quick recovery between events at the riparian peatland site. Again, these dynamics are driven by the tight coupling and high connectivity of the landscape to the stream. At larger scales, the disconnection between the landscape units increases and the variable connectivity controls runoff generation and DOC dynamics. The results presented here suggest that the processes occurring in riparian peatlands in headwater catchments are less evident at larger scales, which may have implications for the larger scale impact of peatland restoration projects.

  9. Scaling effects of riparian peatlands on stable isotopes in runoff and DOC mobilisation

    NASA Astrophysics Data System (ADS)

    Tunaley, C.; Tetzlaff, D.; Soulsby, C.

    2017-06-01

    We combined 13 months of daily isotope measurements in stream water with daily DOC and 15 min FDOM (fluorescent component of dissolved organic matter) data at three nested scales to identify how riparian peatlands generate runoff and influence DOC dynamics in streams. We investigated how runoff generation processes in a small, riparian peatland-dominated headwater catchment (0.65 km2) propagate to larger scales (3.2 km2 and 31 km2) with decreasing percentage of riparian peatland coverage. Isotope damping was most pronounced in the 0.65 km2 headwater catchment due to high water storage in the organic soils encouraging tracer mixing. At the largest scale, stream flow and water isotope dynamics showed a more flashy response. The isotopic difference between the sites was most pronounced in the summer months when stream water signatures were enriched. During the winter months, the inter-site difference reduced. The isotopes also revealed evaporative fractionation in the peatland dominated catchment, in particular during summer low flows, which implied high hydrological connectivity in the form of constant seepage from the peatlands sustaining high baseflows at the headwater scale. This connectivity resulted in high DOC concentrations at the peatland site during baseflow (∼5 mg l-1). In contrast, at the larger scales, DOC was minimal during low flows (∼2 mg l-1) due to increased groundwater influence and the disconnection between DOC sources and the stream. High frequency data also revealed diel variability during low flows. Insights into event dynamics through the analysis of hysteresis loops showed slight dilution on the rising limb, the strong influence of dry antecedent conditions and a quick recovery between events at the riparian peatland site. Again, these dynamics are driven by the tight coupling and high connectivity of the landscape to the stream. At larger scales, the disconnection between the landscape units increases and the variable connectivity controls runoff generation and DOC dynamics. The results presented here suggest that the processes occurring in riparian peatlands in headwater catchments are less evident at larger scales which may have implications for the larger scale impact of peatland restoration projects.

  10. Scale dependence of entrainment-mixing mechanisms in cumulus clouds

    DOE PAGES

    Lu, Chunsong; Liu, Yangang; Niu, Shengjie; ...

    2014-12-17

    This work empirically examines the dependence of entrainment-mixing mechanisms on the averaging scale in cumulus clouds using in situ aircraft observations during the Routine Atmospheric Radiation Measurement Aerial Facility Clouds with Low Optical Water Depths Optical Radiative Observations (RACORO) field campaign. A new measure of homogeneous mixing degree is defined that can encompass all types of mixing mechanisms. Analysis of the dependence of the homogeneous mixing degree on the averaging scale shows that, on average, the homogeneous mixing degree decreases with increasing averaging scales, suggesting that apparent mixing mechanisms gradually shift from homogeneous mixing toward extreme inhomogeneous mixing with increasing scale. The scale dependence can be well quantified by an exponential function, providing a first attempt at developing a scale-dependent parameterization for the entrainment-mixing mechanism. The influences of three factors on the scale dependence are further examined: droplet-free filament properties (size and fraction), microphysical properties (mean volume radius and liquid water content of cloud droplet size distributions adjacent to droplet-free filaments), and relative humidity of entrained dry air. It is found that the decreasing rate of homogeneous mixing degree with increasing averaging scales becomes larger with larger droplet-free filament size and fraction, larger mean volume radius and liquid water content, or higher relative humidity. The results underscore the necessity and possibility of considering averaging scale in the representation of entrainment-mixing processes in atmospheric models.
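
    The exact functional form and fitted coefficients of the exponential scale dependence are not reproduced here, so the sketch below simply fits a generic decaying exponential, psi(L) = psi0*exp(-L/L0), to hypothetical homogeneous-mixing-degree estimates at several averaging scales; the data values are placeholders.

```python
# Minimal sketch: fit a generic decaying exponential, psi(L) = psi0 * exp(-L / L0),
# to homogeneous-mixing-degree estimates at several averaging scales. The exact
# functional form and coefficients used in the cited study are not reproduced
# here; the data below are illustrative placeholders.
import numpy as np
from scipy.optimize import curve_fit

def psi_model(L, psi0, L0):
    return psi0 * np.exp(-L / L0)

scales = np.array([10., 20., 50., 100., 200., 500.])       # averaging scale [m]
psi_obs = np.array([0.85, 0.74, 0.55, 0.40, 0.24, 0.08])   # mixing degree (made up)

popt, pcov = curve_fit(psi_model, scales, psi_obs, p0=(1.0, 100.0))
print("psi0 = %.2f, e-folding scale L0 = %.1f m" % tuple(popt))
```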

  11. Rossby waves and two-dimensional turbulence in a large-scale zonal jet

    NASA Technical Reports Server (NTRS)

    Shepherd, Theodor G.

    1987-01-01

    Homogeneous barotropic beta-plane turbulence is investigated, taking into account the effects of spatial inhomogeneity in the form of a zonal shear flow. Attention is given to the case of zonal flows that are barotropically stable and of larger scale than the resulting transient eddy field. Numerical simulations reveal that large-scale zonal flows alter the picture of classical beta-plane turbulence. It is found that the disturbance field penetrates to the largest scales of motion, that the larger disturbance scales show a tendency to meridional rather than zonal anisotropy, and that the initial spectral transfer rate away from an isotropic intermediate-scale source is enhanced by the shear-induced transfer associated with straining by the zonal flow.

  12. Consecutive anaerobic-aerobic treatment of the organic fraction of municipal solid waste and lignocellulosic materials in laboratory-scale landfill-bioreactors.

    PubMed

    Pellera, Frantseska-Maria; Pasparakis, Emmanouil; Gidarakos, Evangelos

    2016-10-01

    The scope of this study is to evaluate the use of laboratory-scale landfill-bioreactors, operated consecutively under anaerobic and aerobic conditions, for the combined treatment of the organic fraction of municipal solid waste (OFMSW) with two different co-substrates of lignocellulosic nature, namely green waste (GW) and dried olive pomace (DOP). According to the results, such a system would represent a promising option for eventual larger scale applications. Similar variation patterns among bioreactors indicate a relatively defined sequence of processes. Initially operating the systems under anaerobic conditions would allow energetic exploitation of the substrates, while the implementation of a leachate treatment system ultimately aiming at nutrient recovery, especially during the anaerobic phase, could be a profitable option for the whole system, due to the high organic load that characterizes this effluent. In order to improve the overall effectiveness of such a system, measures towards enhancing the methane content of the produced biogas, such as substrate pretreatment, should be investigated. Moreover, the subsequent aerobic phase should have the goal of stabilizing the residual materials and finally obtaining an end material suitable for other purposes. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. Experimental Evaluation of Suitability of Selected Multi-Criteria Decision-Making Methods for Large-Scale Agent-Based Simulations

    PubMed Central

    2016-01-01

    Multi-criteria decision-making (MCDM) can be formally implemented by various methods. This study compares the suitability of four selected MCDM methods, namely WPM, TOPSIS, VIKOR, and PROMETHEE, for future applications in agent-based computational economic (ACE) models of larger scale (i.e., over 10 000 agents in one geographical region). These four MCDM methods were selected according to their appropriateness for computational processing in ACE applications. Tests of the selected methods were conducted on four hardware configurations. For each method, 100 tests were performed, which represented one testing iteration. With four testing iterations conducted on each hardware configuration, and with all configurations tested separately with the -server parameter activated and deactivated, altogether 12,800 data points were collected and subsequently analyzed. An illustrative decision-making scenario was used that allows mutual comparison of all the selected decision-making methods. Our test results suggest that although all methods are convenient and can be used in practice, the VIKOR method accomplished the tests with the best results and thus can be recommended as the most suitable for simulations of large-scale agent-based models. PMID:27806061
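
    A compact sketch of VIKOR, the method the study found most suitable, is given below under the assumption that all criteria are benefit-type ("larger is better"); the decision matrix and weights are illustrative and are not the study's test scenario.

```python
# Compact VIKOR sketch. Assumes all criteria are benefit-type; the matrix and
# weights below are illustrative, not the study's decision-making scenario.
import numpy as np

def vikor(X, w, v=0.5):
    """X: alternatives x criteria (benefit criteria), w: weights summing to 1."""
    f_best, f_worst = X.max(axis=0), X.min(axis=0)
    span = np.where(f_best == f_worst, 1.0, f_best - f_worst)  # avoid divide-by-zero
    d = w * (f_best - X) / span           # weighted, normalized regret per criterion
    S = d.sum(axis=1)                     # group utility
    R = d.max(axis=1)                     # individual regret
    S_s, S_w = S.min(), S.max()
    R_s, R_w = R.min(), R.max()
    Q = v * (S - S_s) / (S_w - S_s + 1e-12) + (1 - v) * (R - R_s) / (R_w - R_s + 1e-12)
    return Q                              # lower Q = better alternative

X = np.array([[250., 16., 12.],
              [200., 20., 8.],
              [300., 11., 14.]])
w = np.array([0.4, 0.35, 0.25])
Q = vikor(X, w)
print("Ranking (best first):", np.argsort(Q))
```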

  14. Applications of species accumulation curves in large-scale biological data analysis.

    PubMed

    Deng, Chao; Daley, Timothy; Smith, Andrew D

    2015-09-01

    The species accumulation curve, or collector's curve, of a population gives the expected number of observed species or distinct classes as a function of sampling effort. Species accumulation curves allow researchers to assess and compare diversity across populations or to evaluate the benefits of additional sampling. Traditional applications have focused on ecological populations but emerging large-scale applications, for example in DNA sequencing, are orders of magnitude larger and present new challenges. We developed a method to estimate accumulation curves for predicting the complexity of DNA sequencing libraries. This method uses rational function approximations to a classical non-parametric empirical Bayes estimator due to Good and Toulmin [Biometrika, 1956, 43, 45-63]. Here we demonstrate how the same approach can be highly effective in other large-scale applications involving biological data sets. These include estimating microbial species richness, immune repertoire size, and k-mer diversity for genome assembly applications. We show how the method can be modified to address populations containing an effectively infinite number of species where saturation cannot practically be attained. We also introduce a flexible suite of tools implemented as an R package that make these methods broadly accessible.
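
    For context, the sketch below implements the classical Good-Toulmin alternating series that the cited method builds on: given n_j, the number of species seen exactly j times, the expected number of new species after additional sampling equal to t times the original effort is sum_j (-1)^(j+1) t^j n_j. The rational function approximation used in the cited method to stabilize this series for t > 1 is not reproduced here; the counts are made up.

```python
# Minimal sketch of the classical Good-Toulmin estimator for the expected number
# of new species observed after additional sampling equal to t times the original
# effort. The rational-function stabilization used in the cited method (needed
# because the raw series diverges for t > 1) is not reproduced here.
from collections import Counter

def counts_histogram(species_counts):
    """n[j] = number of species seen exactly j times in the initial sample."""
    return Counter(species_counts)

def good_toulmin(species_counts, t):
    """Expected number of previously unseen species for extrapolation fraction t."""
    hist = counts_histogram(species_counts)
    return sum(((-1) ** (j + 1)) * (t ** j) * n_j for j, n_j in hist.items())

# Example with made-up per-species counts from an initial sample
counts = [5, 3, 1, 1, 1, 2, 1, 4, 1, 2]
print(good_toulmin(counts, t=0.5))   # well-behaved for t <= 1
```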

  15. Applications of species accumulation curves in large-scale biological data analysis

    PubMed Central

    Deng, Chao; Daley, Timothy; Smith, Andrew D

    2016-01-01

    The species accumulation curve, or collector’s curve, of a population gives the expected number of observed species or distinct classes as a function of sampling effort. Species accumulation curves allow researchers to assess and compare diversity across populations or to evaluate the benefits of additional sampling. Traditional applications have focused on ecological populations but emerging large-scale applications, for example in DNA sequencing, are orders of magnitude larger and present new challenges. We developed a method to estimate accumulation curves for predicting the complexity of DNA sequencing libraries. This method uses rational function approximations to a classical non-parametric empirical Bayes estimator due to Good and Toulmin [Biometrika, 1956, 43, 45–63]. Here we demonstrate how the same approach can be highly effective in other large-scale applications involving biological data sets. These include estimating microbial species richness, immune repertoire size, and k-mer diversity for genome assembly applications. We show how the method can be modified to address populations containing an effectively infinite number of species where saturation cannot practically be attained. We also introduce a flexible suite of tools implemented as an R package that make these methods broadly accessible. PMID:27252899

  16. Semantic Representation and Scale-Up of Integrated Air Traffic Management Data

    NASA Technical Reports Server (NTRS)

    Keller, Richard M.; Ranjan, Shubha; Wei, Mei Y.; Eshow, Michelle M.

    2016-01-01

    Each day, the global air transportation industry generates a vast amount of heterogeneous data from air carriers, air traffic control providers, and secondary aviation entities handling baggage, ticketing, catering, fuel delivery, and other services. Generally, these data are stored in isolated data systems, separated from each other by significant political, regulatory, economic, and technological divides. These realities aside, integrating aviation data into a single, queryable, big data store could enable insights leading to major efficiency, safety, and cost advantages. In this paper, we describe an implemented system for combining heterogeneous air traffic management data using semantic integration techniques. The system transforms data from its original disparate source formats into a unified semantic representation within an ontology-based triple store. Our initial prototype stores only a small sliver of air traffic data covering one day of operations at a major airport. The paper also describes our analysis of difficulties ahead as we prepare to scale up data storage to accommodate successively larger quantities of data -- eventually covering all US commercial domestic flights over an extended multi-year timeframe. We review several approaches to mitigating scale-up related query performance concerns.

  17. Spatial Estimation of Populations at Risk from Radiological Dispersion Device Terrorism Incidents

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Regens, J.L.; Gunter, J.T.

    2008-07-01

    Delineation of the location and size of the population potentially at risk of exposure to ionizing radiation is one of the key analytical challenges in estimating accurately the severity of the potential health effects associated with a radiological terrorism incident. Regardless of spatial scale, the geographical units for which population data commonly are collected rarely coincide with the geographical scale necessary for effective incident management and medical response. This paper identifies major government and commercial open sources of U.S. population data and presents a GIS-based approach for allocating publicly available population data, including age distributions, to geographical units appropriate for planning and implementing incident management and medical response strategies. In summary: The gravity model offers a straightforward, empirical tool for estimating population flows, especially when geographical areas are relatively well-defined in terms of accessibility and spatial separation. This is particularly important for several reasons. First, the spatial scale for the area impacted by an RDD terrorism event is unlikely to match fully the spatial scale of available population data. That is, the plume spread typically will not uniformly overlay the impacted area. Second, the number of people within the impacted area varies as a function of whether an attack occurs during the day or night. For example, the population of a central business district or industrial area typically is larger during the day while predominately residential areas have larger night time populations. As a result, interpolation techniques are needed that link population data to geographical units and allocate those data by time-frame at a spatial scale relevant to enhancing preparedness and response. The gravity model's main advantage is that it efficiently allocates readily available, open source population data to geographical units appropriate for planning and implementing incident management and medical monitoring strategies. The importance of being able to link population estimates to geographic areas during the course of an RDD incident can be understood intuitively: the spatial distribution of actual total dose equivalents of ionizing radiation is likely to vary due to changes in meteorological parameters as an event evolves over time; the size of the geographical area affected is also likely to vary as a function of the actual release scenario; the ability to identify the location and size of the populations that may be exposed to doses of ionizing radiation is critical to carrying out appropriate treatment and post-event medical monitoring; and once a spatial interaction model has been validated for a city or a region, it can be used for simulation and prediction purposes to assess the possible human health consequences of different release scenarios. (authors)
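
    A minimal sketch of a production-constrained gravity model of the kind described, reallocating residential (night-time) population to destination zones such as daytime workplaces, is shown below; T_ij is taken proportional to O_i*A_j/d_ij^beta and rows are rescaled so each origin's population is conserved. All zone populations, attractiveness values, distances and the distance exponent are illustrative placeholders.

```python
# Minimal sketch of a production-constrained gravity model for reallocating
# residential (night-time) population to destination zones (e.g., daytime
# workplaces): T_ij proportional to O_i * A_j / d_ij^beta, with each row scaled
# so the origin's population is conserved. All numbers below are illustrative.
import numpy as np

def gravity_allocation(origin_pop, dest_attractiveness, distance, beta=2.0):
    """Return T[i, j]: people from origin zone i present in destination zone j."""
    weight = dest_attractiveness[None, :] / distance ** beta
    weight /= weight.sum(axis=1, keepdims=True)   # production constraint per origin
    return origin_pop[:, None] * weight

origin_pop = np.array([12000., 8000., 5000.])          # residents per origin zone
attract = np.array([300., 1500., 200., 900.])          # e.g., jobs per destination zone
dist = np.array([[2., 5., 9., 4.],                      # origin-destination distances [km]
                 [6., 3., 7., 5.],
                 [8., 6., 2., 7.]])
T = gravity_allocation(origin_pop, attract, dist)
print("Daytime population by destination zone:", T.sum(axis=0))
```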

  18. Bottom-up implementation of disease-management programmes: results of a multisite comparison.

    PubMed

    Lemmens, K M M; Nieboer, A P; Rutten-Van Mölken, M P M H; van Schayck, C P; Spreeuwenberg, C; Asin, J D; Huijsman, R

    2011-01-01

    To evaluate the implementation of three regional disease-management programmes on chronic obstructive pulmonary disease (COPD) based on bottlenecks experienced in professional practice. The authors performed a multisite comparison of three Dutch regional disease-management programmes combining patient-related, professional-directed and organisational interventions. Process (Assessing Chronic Illness Care survey) and outcome (disease-specific quality of life (clinical COPD questionnaire (CCQ); chronic respiratory questionnaire (CRQ)), Medical Research Council dyspnoea and patients' experiences) data were collected for 370 COPD patients and their care providers. Bottlenecks in region A were mostly related to patient involvement, in region B to organisational issues and in region C to both. Selected interventions related to identified bottlenecks were implemented in all programmes, except for patient-related interventions in programme A. Within programmes, significant improvements were found on dyspnoea and patients' experiences with practice nurses. Outcomes on quality of life differed between programmes: programme A did not show any significant improvements; programme B showed significant improvements on CCQ total (p<0.001), functional (p=0.011) and symptom (p<0.001), CRQ fatigue (p<0.001) and emotional scales (p<0.001); in programme C, CCQ symptom (p<0.001) improved significantly, whereas CCQ mental score (p<0.001) deteriorated significantly. Regression analyses showed that programmes with better implementation of selected interventions resulted in relatively larger improvements in quality of life (CCQ). Bottom-up implementation of COPD disease-management programmes is a feasible approach, which in multiple settings leads to significant improvements in outcomes of care. Programmes with a better fit between implemented interventions and bottlenecks showed more positive changes in outcomes.

  19. Building managed primary care practice networks to deliver better clinical care: a qualitative semi-structured interview study.

    PubMed

    Pawa, Jasmine; Robson, John; Hull, Sally

    2017-11-01

    Primary care practices are increasingly working in larger groups. In 2009, all 36 primary care practices in the London borough of Tower Hamlets were grouped geographically into eight managed practice networks to improve the quality of care they delivered. Quantitative evaluation has shown improved clinical outcomes. To provide insight into the process of network implementation, including the aims, facilitating factors, and barriers, from both the clinical and managerial perspectives. A qualitative study of network implementation in the London borough of Tower Hamlets, which serves a socially disadvantaged and ethnically diverse population. Nineteen semi-structured interviews were carried out with doctors, nurses, and managers, and were informed by existing literature on integrated care and GP networks. Interviews were recorded and transcribed, and thematic analysis was used to analyse emerging themes. Interviewees agreed that networks improved clinical care and reduced variation in practice performance. Network implementation was facilitated by the balance struck between 'a given structure' and network autonomy to adopt local solutions. Improved use of data, including patient recall and peer performance indicators, was viewed as a critical factor. Targeted investment provided the necessary resources to achieve this. Barriers to implementing networks included differences in practice culture, a reluctance to share data, and increased workload. Commissioners and providers were positive about the implementation of GP networks as a way to improve the quality of clinical care in Tower Hamlets. The issues that arose may be of relevance to other areas implementing similar quality improvement programmes at scale. © British Journal of General Practice 2017.

  20. GPGPU-based explicit finite element computations for applications in biomechanics: the performance of material models, element technologies, and hardware generations.

    PubMed

    Strbac, V; Pierce, D M; Vander Sloten, J; Famaey, N

    2017-12-01

    Finite element (FE) simulations are increasingly valuable in assessing and improving the performance of biomedical devices and procedures. Due to high computational demands, such simulations may become difficult or even infeasible, especially when considering nearly incompressible and anisotropic material models prevalent in analyses of soft tissues. Implementations of GPGPU-based explicit FEs predominantly cover isotropic materials, e.g. the neo-Hookean model. To elucidate the computational expense of anisotropic materials, we implement the Gasser-Ogden-Holzapfel dispersed, fiber-reinforced model and compare solution times against the neo-Hookean model. Implementations of GPGPU-based explicit FEs conventionally rely on single-point (under) integration. To elucidate the expense of full and selective-reduced integration (more reliable), we implement both and compare corresponding solution times against those generated using underintegration. To better understand the advancement of hardware, we compare results generated using representative Nvidia GPGPUs from three recent generations: Fermi (C2075), Kepler (K20c), and Maxwell (GTX980). We explore scaling by solving the same boundary value problem (an extension-inflation test on a segment of human aorta) with progressively larger FE meshes. Our results demonstrate substantial improvements in simulation speeds relative to two benchmark FE codes (up to 300× while maintaining accuracy), and thus open many avenues to novel applications in biomechanics and medicine.
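
    As a point of reference for the simplest material model benchmarked, the sketch below evaluates the Cauchy stress of one common compressible neo-Hookean form at a single material point from a deformation gradient; it is not the Gasser-Ogden-Holzapfel model, not an FE solver, and contains nothing GPU-specific. The moduli and deformation are illustrative.

```python
# Minimal single-point sketch of one common compressible neo-Hookean form,
# sigma = (mu/J)(B - I) + lambda*(ln J / J) I, evaluated from a deformation
# gradient F. This is the simplest of the material models benchmarked in the
# study (not the Gasser-Ogden-Holzapfel model) and contains nothing GPU-specific.
import numpy as np

def neo_hookean_cauchy(F, mu=80.0, lam=400.0):
    """Cauchy stress (illustrative moduli) for deformation gradient F (3x3)."""
    J = np.linalg.det(F)
    B = F @ F.T                                  # left Cauchy-Green tensor
    I = np.eye(3)
    return (mu / J) * (B - I) + (lam * np.log(J) / J) * I

# Example: uniaxial stretch of 1.2 with no lateral contraction (illustrative)
F = np.diag([1.2, 1.0, 1.0])
print(neo_hookean_cauchy(F))
```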

  1. Smallholder Irrigation and Crop Diversification under Climate Change in sub-Saharan Africa: Evidence and Potential for Simultaneous Food Security, Adaptation, and Mitigation

    NASA Astrophysics Data System (ADS)

    Naylor, R.; Burney, J. A.; Postel, S.

    2011-12-01

    The poorest populations in sub-Saharan Africa live in rural areas and depend on smallholder agricultural production for their livelihoods. Over 90% of all farmed area in Sub-Saharan Africa is rainfed, with crop production centering on 3-5 months of rainfall. Rapid population growth is reducing land per capita ratios, and low yields for staple crops make food security an increasingly challenging goal. Malnutrition, most noticeable among children, peaks during the dry season. Recent data on aggregate economic growth and investment in Africa hide these patterns of seasonal hunger and income disparity. Perhaps most perversely, smallholder farmers in the dry tropical regions of sub-Saharan Africa are (and will continue to be) some of the earliest and hardest hit by climate change. Our research focuses on the role distributed, small-scale irrigation can play in food security and climate change adaptation in sub-Saharan Africa. As Asia's agricultural success has demonstrated, irrigation, when combined with the availability of inputs (fertilizer) and improved crop varieties, can enable year-round production, growth in rural incomes, and a dramatic reduction in hunger. The situation in Africa is markedly different: agroecological conditions are far more heterogeneous than in Asia and evaporation rates are relatively high; most smallholders lack access to fertilizers; and market integration is constrained by infrastructure, information, and private sector incentives. Yet from a resource perspective, national- and regional-level estimates suggest that Internal Renewable Water Resources (IRWR) are nowhere near fully exploited in Sub-Saharan Africa -- even in the Sudano-Sahel, which is considered to be one of the driest regions of the continent. Irrigation can thus be implemented on a much larger scale sustainably. We will present (a) results from controlled, experimental field studies of solar-powered drip irrigation systems in the rural Sudano-Sahel region of West Africa. We have shown that such systems can be implemented in a cost-competitive and environmentally responsible manner, with significant and sustained impact on livelihoods. These findings will be coupled with (b) case studies of successful and failed irrigation projects across the continent that reveal technical and institutional requirements for success; and (c) regional and continental data that quantify the larger-scale food security, development, adaptation, and mitigation potentials of these types of smallholder systems.

  2. What Is the Contribution of City-Scale Actions to the Overall Food System's Environmental Impacts?: Assessing Water, Greenhouse Gas, and Land Impacts of Future Urban Food Scenarios.

    PubMed

    Boyer, Dana; Ramaswami, Anu

    2017-10-17

    This paper develops a methodology for individual cities to use to analyze the in- and trans-boundary water, greenhouse gas (GHG), and land impacts of city-scale food system actions. Applied to Delhi, India, the analysis demonstrates that city-scale action can rival typical food policy interventions that occur at larger scales, although no single city-scale action can rival in all three environmental impacts. In particular, improved food-waste management within the city (7% system-wide GHG reduction) matches the GHG impact of preconsumer trans-boundary food waste reduction. The systems approach is particularly useful in illustrating key trade-offs and co-benefits. For instance, multiple diet shifts that can reduce GHG emissions have trade-offs that increase water and land impacts. Vertical farming technology (VFT) with current applications for fruits and vegetables can provide modest system-wide water (4%) and land reductions (3%), although implementation within the city itself may raise questions of constraints in water-stressed cities, with such a shift in Delhi increasing community-wide direct water use by 16%. Improving the nutrition status for the bottom 50% of the population to the median diet is accompanied by proportionally smaller increases of water, GHG, and land impacts (4%, 9%, and 8%, systemwide): increases that can be offset through simultaneous city-scale actions, e.g., improved food-waste management and VFT.

  3. Links between soil properties and steady-state solute transport through cultivated topsoil at the field scale

    NASA Astrophysics Data System (ADS)

    Koestel, J. K.; Norgaard, T.; Luong, N. M.; Vendelboe, A. L.; Moldrup, P.; Jarvis, N. J.; Lamandé, M.; Iversen, B. V.; Wollesen de Jonge, L.

    2013-02-01

    It is known that solute transport through soil is heterogeneous at all spatial scales. However, little data are available to allow quantification of these heterogeneities at the field scale or larger. In this study, we investigated the spatial patterns of soil properties, hydrologic state variables, and tracer breakthrough curves (BTCs) at the field scale for inert solute transport under a steady-state irrigation rate that produced near-saturated conditions. Sixty-five undisturbed soil columns approximately 20 cm in height and diameter were sampled from the loamy topsoil of an agricultural field site in Silstrup (Denmark) at a sampling distance of approximately 15 m (with a few exceptions), covering an area of approximately 1 ha (60 m × 165 m). For 64 of the 65 investigated soil columns, we observed BTC shapes indicating strong preferential transport. The strength of preferential transport was positively correlated with the bulk density and the degree of water saturation. The latter suggests that preferential macropore transport was the dominating transport process. Increased bulk densities were presumably related to a decrease in near-saturated hydraulic conductivities and, as a consequence, to larger water saturation and the activation of larger macropores. Our study provides further evidence that it should be possible to estimate solute transport properties from soil properties such as soil texture or bulk density. We also demonstrated that estimation approaches established for the column scale have to be upscaled when applied to the field scale or larger.

  4. Community shifts under climate change: mechanisms at multiple scales.

    PubMed

    Gornish, Elise S; Tylianakis, Jason M

    2013-07-01

    Processes that drive ecological dynamics differ across spatial scales. Therefore, the pathways through which plant communities and plant-insect relationships respond to changing environmental conditions are also expected to be scale-dependent. Furthermore, the processes that affect individual species or interactions at single sites may differ from those affecting communities across multiple sites. We reviewed and synthesized peer-reviewed literature to identify patterns in biotic or abiotic pathways underpinning changes in the composition and diversity of plant communities under three components of climate change (increasing temperature, CO2, and changes in precipitation) and how these differ across spatial scales. We also explored how these changes to plants affect plant-insect interactions. The relative frequency of biotic vs. abiotic pathways of climate effects at larger spatial scales often differ from those at smaller scales. Local-scale studies show variable responses to climate drivers, often driven by biotic factors. However, larger scale studies identify changes to species composition and/or reduced diversity as a result of abiotic factors. Differing pathways of climate effects can result from different responses of multiple species, habitat effects, and differing effects of invasions at local vs. regional to global scales. Plant community changes can affect higher trophic levels as a result of spatial or phenological mismatch, foliar quality changes, and plant abundance changes, though studies on plant-insect interactions at larger scales are rare. Climate-induced changes to plant communities will have considerable effects on community-scale trophic exchanges, which may differ from the responses of individual species or pairwise interactions.

  5. Microclimate Data Improve Predictions of Insect Abundance Models Based on Calibrated Spatiotemporal Temperatures.

    PubMed

    Rebaudo, François; Faye, Emile; Dangles, Olivier

    2016-01-01

    A large body of literature has recently recognized the role of microclimates in controlling the physiology and ecology of species, yet the relevance of fine-scale climatic data for modeling species performance and distribution remains a matter of debate. Using a 6-year monitoring of three potato moth species, major crop pests in the tropical Andes, we asked whether the spatiotemporal resolution of temperature data affects the predictions of models of moth performance and distribution. For this, we used three different climatic data sets: (i) the WorldClim dataset (global dataset), (ii) air temperature recorded using data loggers (weather station dataset), and (iii) air crop canopy temperature (microclimate dataset). We developed a statistical procedure to calibrate all datasets to monthly and yearly variation in temperatures, while keeping both spatial and temporal variances (air monthly temperature at 1 km² for the WorldClim dataset, air hourly temperature for the weather station, and air minute temperature over 250 m radius disks for the microclimate dataset). Then, we computed pest performances based on these three datasets. Results for temperatures ranging from 9 to 11°C revealed discrepancies in the simulation outputs in both survival and development rates depending on the spatiotemporal resolution of the temperature dataset. Temperature and simulated pest performances were then combined into multiple linear regression models to compare predicted vs. field data. We used an additional set of study sites to test the ability of the results of our model to be extrapolated over larger scales. Results showed that the model implemented with microclimatic data best predicted observed pest abundances for our study sites, but was less accurate than the global dataset model when performed at larger scales. Our simulations therefore stress the importance of considering different temperature datasets depending on the issue to be solved in order to accurately predict species abundances. In conclusion, keeping in mind that the mismatch between the size of organisms and the scale at which climate data are collected and modeled remains a key issue, temperature dataset selection should be balanced by the desired output spatiotemporal scale for better predicting pest dynamics and developing efficient pest management strategies.

  6. Microclimate Data Improve Predictions of Insect Abundance Models Based on Calibrated Spatiotemporal Temperatures

    PubMed Central

    Rebaudo, François; Faye, Emile; Dangles, Olivier

    2016-01-01

    A large body of literature has recently recognized the role of microclimates in controlling the physiology and ecology of species, yet the relevance of fine-scale climatic data for modeling species performance and distribution remains a matter of debate. Using a 6-year monitoring of three potato moth species, major crop pests in the tropical Andes, we asked whether the spatiotemporal resolution of temperature data affects the predictions of models of moth performance and distribution. For this, we used three different climatic data sets: (i) the WorldClim dataset (global dataset), (ii) air temperature recorded using data loggers (weather station dataset), and (iii) air crop canopy temperature (microclimate dataset). We developed a statistical procedure to calibrate all datasets to monthly and yearly variation in temperatures, while keeping both spatial and temporal variances (air monthly temperature at 1 km² for the WorldClim dataset, air hourly temperature for the weather station, and air minute temperature over 250 m radius disks for the microclimate dataset). Then, we computed pest performances based on these three datasets. Results for temperatures ranging from 9 to 11°C revealed discrepancies in the simulation outputs in both survival and development rates depending on the spatiotemporal resolution of the temperature dataset. Temperature and simulated pest performances were then combined into multiple linear regression models to compare predicted vs. field data. We used an additional set of study sites to test the ability of the results of our model to be extrapolated over larger scales. Results showed that the model implemented with microclimatic data best predicted observed pest abundances for our study sites, but was less accurate than the global dataset model when performed at larger scales. Our simulations therefore stress the importance of considering different temperature datasets depending on the issue to be solved in order to accurately predict species abundances. In conclusion, keeping in mind that the mismatch between the size of organisms and the scale at which climate data are collected and modeled remains a key issue, temperature dataset selection should be balanced by the desired output spatiotemporal scale for better predicting pest dynamics and developing efficient pest management strategies. PMID:27148077

  7. Asymmetric multiscale detrended fluctuation analysis of California electricity spot price

    NASA Astrophysics Data System (ADS)

    Fan, Qingju

    2016-01-01

    In this paper, we develop a new method called asymmetric multiscale detrended fluctuation analysis, which is an extension of asymmetric detrended fluctuation analysis (A-DFA) and can assess the asymmetric correlation properties of series over a variable scale range. We investigate the asymmetric correlations in the California 1999-2000 power market after filtering some periodic trends by empirical mode decomposition (EMD). Our findings show the coexistence of symmetric and asymmetric correlations in the price series of 1999 and strong asymmetric correlations in 2000. What is more, we detect subtle correlation properties of the upward and downward price series for most larger scale intervals in 2000. Meanwhile, the fluctuations of Δα(s) (asymmetry) and |Δα(s)| (absolute asymmetry) are more significant in 2000 than in 1999 for larger scale intervals, and they have similar characteristics for smaller scale intervals. We conclude that the strong asymmetry property and different correlation properties of upward and downward price series for larger scale intervals in 2000 have important implications for the collapse of the California power market, and our findings shed new light on the underlying mechanisms of power price.
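
    A minimal sketch of asymmetric DFA, the building block of the proposed multiscale method, is given below: windows are classified as upward or downward by the sign of a linear trend fitted to the original series segment, and DFA fluctuation functions are computed separately for the two classes. The multiscale extension over variable scale ranges and the EMD pre-filtering used in the paper are omitted, and the input series is a random-walk placeholder rather than electricity prices.

```python
# Minimal sketch of asymmetric DFA (A-DFA): windows are classified as "up" or
# "down" by the sign of a linear trend fitted to the original segment, and DFA
# fluctuation functions are computed separately per class. The multiscale
# extension and EMD pre-filtering in the paper are omitted; the series is a
# random-walk placeholder.
import numpy as np

def adfa_fluctuations(x, scales):
    y = np.cumsum(x - x.mean())                  # profile of the series
    t = np.arange(len(x))
    F_up, F_down = [], []
    for s in scales:
        f2_up, f2_down = [], []
        for k in range(len(x) // s):
            idx = slice(k * s, (k + 1) * s)
            slope = np.polyfit(t[idx], x[idx], 1)[0]            # direction of segment
            trend = np.polyval(np.polyfit(t[idx], y[idx], 1), t[idx])
            f2 = np.mean((y[idx] - trend) ** 2)                  # local detrended variance
            (f2_up if slope >= 0 else f2_down).append(f2)
        F_up.append(np.sqrt(np.mean(f2_up)) if f2_up else np.nan)
        F_down.append(np.sqrt(np.mean(f2_down)) if f2_down else np.nan)
    return np.array(F_up), np.array(F_down)

rng = np.random.default_rng(1)
x = np.cumsum(rng.standard_normal(4096))         # placeholder "price" series
scales = np.array([16, 32, 64, 128, 256])
F_up, F_down = adfa_fluctuations(x, scales)
valid = np.isfinite(F_up) & np.isfinite(F_down)
alpha_up = np.polyfit(np.log(scales[valid]), np.log(F_up[valid]), 1)[0]
alpha_down = np.polyfit(np.log(scales[valid]), np.log(F_down[valid]), 1)[0]
print("alpha_up = %.2f, alpha_down = %.2f, asymmetry = %.2f"
      % (alpha_up, alpha_down, alpha_up - alpha_down))
```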

  8. 78 FR 922 - Revisions to the California State Implementation Plan, Imperial County Air Pollution Control...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-07

    ... ) emissions from sources of fugitive dust such as unpaved roads and disturbed soils in open and agricultural... trespass and stabilize disturbed soil on open areas larger than 0.5 acres in urban areas, and larger than...

  9. PrEP implementation in the Asia-Pacific region: opportunities, implementation and barriers

    PubMed Central

    Zablotska, Iryna; Grulich, Andrew E; Phanuphak, Nittaya; Anand, Tarandeep; Janyam, Surang; Poonkasetwattana, Midnight; Baggaley, Rachel; van Griensven, Frits; Lo, Ying-Ru

    2016-01-01

    Introduction HIV epidemics in the Asia-Pacific region are concentrated among men who have sex with men (MSM) and other key populations. Pre-exposure prophylaxis (PrEP) is an effective HIV prevention intervention and could be a potential game changer in the region. We discuss the progress towards PrEP implementation in the Asia-Pacific region, including opportunities and barriers. Discussion Awareness about PrEP in the Asia-Pacific is still low and so are its levels of use. A high proportion of MSM who are aware of PrEP are willing to use it. Key PrEP implementation barriers include poor knowledge about PrEP, limited access to PrEP, weak or non-existent HIV prevention programmes for MSM and other key populations, high cost of PrEP, stigma and discrimination against key populations and restrictive laws in some countries. Only several clinical trials, demonstration projects and a few larger-scale implementation studies have been implemented so far in Thailand and Australia. However, novel approaches to PrEP implementation have emerged: researcher-, facility- and community-led models of care, with PrEP services for fee and for free. The WHO consolidated guidelines on HIV testing, treatment and prevention call for an expanded access to PrEP worldwide and have provided guidance on PrEP implementation in the region. Some countries like Australia have released national PrEP guidelines. There are growing community leadership and consultation processes to initiate PrEP implementation in Asia and the Pacific. Conclusions Countries of the Asia-Pacific region will benefit from adding PrEP to their HIV prevention packages, but for many this is a critical step that requires resourcing. Having an impact on the HIV epidemic requires investment. The next years should see the region transitioning from limited PrEP implementation projects to growing access to PrEP and expansion of HIV prevention programmes. PMID:27760688

  10. PrEP implementation in the Asia-Pacific region: opportunities, implementation and barriers.

    PubMed

    Zablotska, Iryna; Grulich, Andrew E; Phanuphak, Nittaya; Anand, Tarandeep; Janyam, Surang; Poonkasetwattana, Midnight; Baggaley, Rachel; van Griensven, Frits; Lo, Ying-Ru

    2016-01-01

    HIV epidemics in the Asia-Pacific region are concentrated among men who have sex with men (MSM) and other key populations. Pre-exposure prophylaxis (PrEP) is an effective HIV prevention intervention and could be a potential game changer in the region. We discuss the progress towards PrEP implementation in the Asia-Pacific region, including opportunities and barriers. Awareness about PrEP in the Asia-Pacific is still low and so are its levels of use. A high proportion of MSM who are aware of PrEP are willing to use it. Key PrEP implementation barriers include poor knowledge about PrEP, limited access to PrEP, weak or non-existent HIV prevention programmes for MSM and other key populations, high cost of PrEP, stigma and discrimination against key populations and restrictive laws in some countries. So far, only a handful of clinical trials, demonstration projects and a few larger-scale implementation studies have been conducted, in Thailand and Australia. However, novel approaches to PrEP implementation have emerged: researcher-, facility- and community-led models of care, with PrEP services offered both for a fee and free of charge. The WHO consolidated guidelines on HIV testing, treatment and prevention call for expanded access to PrEP worldwide and have provided guidance on PrEP implementation in the region. Some countries, like Australia, have released national PrEP guidelines. There are growing community leadership and consultation processes to initiate PrEP implementation in Asia and the Pacific. Countries of the Asia-Pacific region will benefit from adding PrEP to their HIV prevention packages, but for many this is a critical step that requires resourcing. Having an impact on the HIV epidemic requires investment. The next years should see the region transitioning from limited PrEP implementation projects to growing access to PrEP and expansion of HIV prevention programmes.

  11. Benchmarking urban flood models of varying complexity and scale using high resolution terrestrial LiDAR data

    NASA Astrophysics Data System (ADS)

    Fewtrell, Timothy J.; Duncan, Alastair; Sampson, Christopher C.; Neal, Jeffrey C.; Bates, Paul D.

    2011-01-01

    This paper describes benchmark testing of a diffusive and an inertial formulation of the de St. Venant equations implemented within the LISFLOOD-FP hydraulic model using high resolution terrestrial LiDAR data. The models are applied to a hypothetical flooding scenario in a section of Alcester, UK, which experienced significant surface water flooding during the June and July 2007 floods. The sensitivity of water elevation and velocity simulations to model formulation and grid resolution is analyzed. The differences in depth and velocity estimates between the diffusive and inertial approximations are within 10% of the simulated value, but inertial effects persist at the wetting front in steep catchments. Both models portray a similar scale dependency between 50 cm and 5 m resolution which reiterates previous findings that errors in coarse scale topographic data sets are significantly larger than differences between numerical approximations. In particular, these results confirm the need to distinctly represent the camber and curbs of roads in the numerical grid when simulating surface water flooding events. Furthermore, although water depth estimates at grid scales coarser than 1 m appear robust, velocity estimates at these scales seem to be inconsistent compared to the 50 cm benchmark. The inertial formulation is shown to reduce computational cost by up to three orders of magnitude at high resolutions thus making simulations at this scale viable in practice compared to diffusive models. For the first time, this paper highlights the utility of high resolution terrestrial LiDAR data to inform small-scale flood risk management studies.
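
    For readers unfamiliar with the inertial formulation, the local-inertia update of Bates et al. (2010), on which LISFLOOD-FP's inertial solver is commonly described as being based, can be written as below with Manning friction treated semi-implicitly; the exact discretization in the benchmarked code may differ in detail.

```latex
% q: unit-width discharge between cells, h_f: flow depth, z: bed elevation,
% n: Manning's coefficient, \Delta t: time step.
q^{\,t+\Delta t} \;=\;
  \frac{\,q^{\,t} \;-\; g\,h_f\,\Delta t\,\dfrac{\partial (h+z)}{\partial x}\,}
       {\,1 \;+\; g\,h_f\,\Delta t\,n^{2}\,\lvert q^{\,t}\rvert \big/ h_f^{10/3}\,}
```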

  12. Scaling properties of European research units

    PubMed Central

    Jamtveit, Bjørn; Jettestuen, Espen; Mathiesen, Joachim

    2009-01-01

    A quantitative characterization of the scale-dependent features of research units may provide important insight into how such units are organized and how they grow. The relative importance of top-down versus bottom-up controls on their growth may be revealed by their scaling properties. Here we show that the number of support staff in Scandinavian research units, ranging in size from 20 to 7,800 staff members, is related to the number of academic staff by a power law. The scaling exponent of ≈1.30 is broadly consistent with a simple hierarchical model of the university organization. Similar scaling behavior between small and large research units with a wide range of ambitions and strategies argues against top-down control of the growth. Top-down effects, and externally imposed effects from changing political environments, can be observed as fluctuations around the main trend. The observed scaling law implies that cost-benefit arguments for merging research institutions into larger and larger units may have limited validity unless the productivity per academic staff and/or the quality of the products are considerably higher in larger institutions. Despite the hierarchical structure of most large-scale research units in Europe, the network structures represented by the academic component of such units are strongly antihierarchical and suboptimal for efficient communication within individual units. PMID:19625626
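
    The reported exponent comes from fitting a power law of the form N_support ∝ N_academic^β; a minimal sketch of such a fit in log-log space is shown below (the staff counts are illustrative values, not the study's data).

```python
import numpy as np

# Hypothetical (illustrative) staff counts for a handful of research units.
academic = np.array([15, 60, 250, 900, 4000])
support = np.array([5, 30, 160, 700, 3500])

# Power law N_support = c * N_academic**beta  <=>  a straight line in log-log space.
beta, log_c = np.polyfit(np.log(academic), np.log(support), 1)
print(f"scaling exponent beta ~ {beta:.2f}")   # the paper reports ~1.30
```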

  13. Approaches to the simulation of unconfined flow and perched groundwater flow in MODFLOW

    USGS Publications Warehouse

    Bedekar, Vivek; Niswonger, Richard G.; Kipp, Kenneth; Panday, Sorab; Tonkin, Matthew

    2012-01-01

    Various approaches have been proposed to manage the nonlinearities associated with the unconfined flow equation and to simulate perched groundwater conditions using the MODFLOW family of codes. The approaches comprise a variety of numerical techniques to prevent dry cells from becoming inactive and to achieve a stable solution focused on formulations of the unconfined, partially-saturated, groundwater flow equation. Keeping dry cells active avoids a discontinuous head solution which in turn improves the effectiveness of parameter estimation software that relies on continuous derivatives. Most approaches implement an upstream weighting of intercell conductance and Newton-Raphson linearization to obtain robust convergence. In this study, several published approaches were implemented in a stepwise manner into MODFLOW for comparative analysis. First, a comparative analysis of the methods is presented using synthetic examples that create convergence issues or difficulty in handling perched conditions with the more common dry-cell simulation capabilities of MODFLOW. Next, a field-scale three-dimensional simulation is presented to examine the stability and performance of the discussed approaches in larger, practical, simulation settings.
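
    A toy illustration of the upstream-weighting idea mentioned above (the function, cell geometry, and bottom-elevation handling are hypothetical; actual MODFLOW-NWT/USG formulations add smoothing of the saturated thickness so the Newton-Raphson linearization has continuous derivatives).

```python
def intercell_conductance(h1, h2, bot1, bot2, K, dx, dy):
    """Upstream weighting for unconfined flow: the saturated thickness is taken
    from the cell with the higher head, so the conductance varies smoothly and
    cells are not switched inactive as they drain (a sketch of the idea only)."""
    z_bot = max(bot1, bot2)                  # simplistic shared bottom elevation
    h_up = max(h1, h2)                       # upstream (higher-head) cell controls
    sat_thickness = max(h_up - z_bot, 0.0)   # zero thickness when fully drained
    return K * sat_thickness * dy / dx
```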

  14. The RTS,S/AS01 malaria vaccine in children 5 to 17 months of age at first vaccination.

    PubMed

    Vandoolaeghe, Pascale; Schuerman, Lode

    2016-12-01

    The RTS,S/AS01 malaria vaccine received a positive scientific opinion from the European Medicines Agency in July 2015. The World Health Organization recommended pilot implementation of the vaccine in children at least 5 months of age according to an initial 3-dose schedule given at least 1 month apart, and a 4th dose 15-18 months post-dose 3. Clinical trials and mathematical modeling demonstrated that the partial protection provided by RTS,S/AS01 against malaria has the potential to provide substantial public health benefit when used in parallel with other malaria interventions, especially in highly endemic regions. The highest impact was seen with 4 vaccine doses in children aged 5 months or older. The vaccine will be evaluated in real-life settings to further assess its impact on mortality, vaccine safety in the context of routine immunization, and programmatic feasibility of delivering a 4-dose vaccination schedule requiring new immunization contacts. If successful, this will pave the way for larger-scale implementation.

  15. The Contextualized Technology Adaptation Process (CTAP): Optimizing Health Information Technology to Improve Mental Health Systems.

    PubMed

    Lyon, Aaron R; Wasse, Jessica Knaster; Ludwig, Kristy; Zachry, Mark; Bruns, Eric J; Unützer, Jürgen; McCauley, Elizabeth

    2016-05-01

    Health information technologies have become a central fixture in the mental healthcare landscape, but few frameworks exist to guide their adaptation to novel settings. This paper introduces the contextualized technology adaptation process (CTAP) and presents data collected during Phase 1 of its application to measurement feedback system development in school mental health. The CTAP is built on models of human-centered design and implementation science and incorporates repeated mixed methods assessments to guide the design of technologies to ensure high compatibility with a destination setting. CTAP phases include: (1) Contextual evaluation, (2) Evaluation of the unadapted technology, (3) Trialing and evaluation of the adapted technology, (4) Refinement and larger-scale implementation, and (5) Sustainment through ongoing evaluation and system revision. Qualitative findings from school-based practitioner focus groups are presented, which provided information for CTAP Phase 1, contextual evaluation, surrounding education sector clinicians' workflows, types of technologies currently available, and influences on technology use. Discussion focuses on how findings will inform subsequent CTAP phases, as well as their implications for future technology adaptation across content domains and service sectors.

  16. Characterization of a multi-module tunable EC-QCL system for mid-infrared biofluid spectroscopy for hospital use and personalized diabetes technology

    NASA Astrophysics Data System (ADS)

    Grafen, M.; Nalpantidis, K.; Ostendorf, A.; Ihrig, D.; Heise, H. M.

    2016-03-01

    Blood glucose monitoring systems are important point-of-care devices for the hospital and personalised diabetes technology. FTIR-spectrometers have been successfully employed for the development of continuous bed-side monitoring systems in combination with micro-dialysis. For implementation in miniaturised portable systems, external-cavity quantum cascade lasers (EC-QCL) are well suited. An ultra-broadly tunable pulsed EC-QCL system, covering a spectral range from 1920 to 780 cm-1, has been characterised with regard to the spectral emission profiles and wavenumber scale accuracy. The measurement of glucose in aqueous solution is presented and problems with signal linearity using Peltier-cooled MCT-detectors are discussed. The use of larger optical sample pathlengths for attenuating the laser power in transmission measurements has recently been suggested and implemented, and its implications for broad mid-infrared measurements have now been investigated. The utilization of discrete wavenumber variables as an alternative for sweep-tune measurements has also been studied and sparse multivariate calibration models intended for clinical chemistry applications are described for glucose and lactate.

  17. Joining and Integration of Silicon Carbide-Based Materials for High Temperature Applications

    NASA Technical Reports Server (NTRS)

    Halbig, Michael C.; Singh, Mrityunjay

    2016-01-01

    Advanced joining and integration technologies of silicon carbide-based ceramics and ceramic matrix composites are enabling for their implementation in wide-scale aerospace and ground-based applications. The robust joining and integration technologies allow for large and complex shapes to be fabricated and integrated with the larger system. Potential aerospace applications include lean-direct fuel injectors, thermal actuators, turbine vanes, blades, shrouds, combustor liners and other hot section components. Ground based applications include components for energy and environmental systems. Performance requirements and processing challenges are identified for the successful implementation of different joining technologies. An overview will be provided of several joining approaches which have been developed for high temperature applications. In addition, various characterization approaches were pursued to provide an understanding of the processing-microstructure-property relationships. Microstructural analysis of the joint interfaces was conducted using optical, scanning electron, and transmission electron microscopy to identify phases and evaluate the bond quality. Mechanical testing results will be presented along with the need for new standardized test methods. The critical need for tailoring interlayer compositions for optimum joint properties will also be highlighted.

  18. Stanford Hardware Development Program

    NASA Technical Reports Server (NTRS)

    Peterson, A.; Linscott, I.; Burr, J.

    1986-01-01

    Architectures for high-performance digital signal processing, particularly for high-resolution, wide-band spectrum analysis, were developed. These developments are intended to provide instrumentation for NASA's Search for Extraterrestrial Intelligence (SETI) program. The real-time signal processing work is both formal and experimental. The efficient organization and optimal scheduling of signal processing algorithms were investigated. The work is complemented by efforts in processor architecture design and implementation. A high resolution, multichannel spectrometer that incorporates special purpose microcoded signal processors is being tested. A general purpose signal processor for the data from the multichannel spectrometer was designed to function as the processing element in a highly concurrent machine. The processor performance required for the spectrometer is in the range of 1000 to 10,000 million instructions per second (MIPS). Multiple node processor configurations, where each node performs at 100 MIPS, are sought. The nodes are microprogrammable and are interconnected through a network with high bandwidth for neighboring nodes, and medium bandwidth for nodes at larger distances. The implementation of both the current multichannel spectrometer and the signal processor as Very Large Scale Integration CMOS chip sets was commenced.

  19. Interval follow up of a 4-day pilot program to implement the WHO surgical safety checklist at a Congolese hospital.

    PubMed

    White, Michelle C; Peterschmidt, Jennifer; Callahan, James; Fitzgerald, J Edward; Close, Kristin L

    2017-06-29

    The World Health Organisation Surgical Safety Checklist (SSC) improves surgical outcomes and the research question is no longer 'does the SSC work?' but 'how to make the SSC work?' Evidence for implementation strategies in low-income countries is sparse and existing strategies are heavily based on long-term external support. Short but effective implementation programs are required if widespread scale-up is to be achieved. We designed and delivered a four-day pilot SSC training course at a single hospital centre in the Republic of Congo, and evaluated the implementation after one year. We hypothesised that participants would still be using the checklist over 50% of the time. We taught the four-day SSC training course at Dolisie hospital in February 2014, and undertook a mixed methods impact evaluation based on the Kirkpatrick model in May 2015. SSC implementation was evaluated using a self-reported questionnaire with a 3-point Likert scale to assess six key process measures. Learning, behaviour, organisational change and facilitators and inhibitors to change were evaluated with questionnaires, interviews and focus group discussion. Seventeen individuals participated in the training and seven (40%) were available for impact evaluation at 15 months. No participant had used the SSC prior to training. Over half the participants were following the six process measures always or most of the time: confirmation of patient identity and the surgical procedure (57%), assessment of difficult intubation risk (72%), assessment of the risk of major blood loss (86%), antibiotic prophylaxis given before skin incision (86%), use of a pulse oximeter (86%), and counting sponges and instruments (71%). All participants reported positive improvements in teamwork, organisation and safe anesthesia. Most participants reported they worked in a helpful, supportive and respectful atmosphere; and could speak up if they saw something that might harm a patient. However, less than half felt able to challenge those in authority. Our study demonstrates that a 4-day pilot course for SSC implementation resulted in over 50% of participants using the SSC at 15 months, positive changes in learning, behaviour and organisational change, but less impact on hierarchical culture. The next step is to test our novel implementation strategy in a larger hospital setting.

  20. Implementation of a generalized actuator line model for wind turbine parameterization in the Weather Research and Forecasting model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marjanovic, Nikola; Mirocha, Jeffrey D.; Kosović, Branko

    A generalized actuator line (GAL) wind turbine parameterization is implemented within the Weather Research and Forecasting model to enable high-fidelity large-eddy simulations of wind turbine interactions with boundary layer flows under realistic atmospheric forcing conditions. Numerical simulations using the GAL parameterization are evaluated against both an already implemented generalized actuator disk (GAD) wind turbine parameterization and two field campaigns that measured the inflow and near-wake regions of a single turbine. The representation of wake wind speed, variance, and vorticity distributions is examined by comparing fine-resolution GAL and GAD simulations and GAD simulations at both fine and coarse resolutions. The higher-resolution simulations show slightly larger and more persistent velocity deficits in the wake and substantially increased variance and vorticity when compared to the coarse-resolution GAD. The GAL generates distinct tip and root vortices that maintain coherence as helical tubes for approximately one rotor diameter downstream. Coarse-resolution simulations using the GAD produce similar aggregated wake characteristics to both fine-scale GAD and GAL simulations at a fraction of the computational cost. The GAL parameterization provides the capability to resolve near wake physics, including vorticity shedding and wake expansion.
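
    At the core of actuator-line methods is the projection of blade-element forces onto the flow grid with a Gaussian regularization kernel; the sketch below shows only that projection step. The function, names, and kernel width ε are illustrative, and the WRF GAL implementation additionally computes blade-element forces, blade rotation, and tip corrections not shown here.

```python
import numpy as np

def project_actuator_forces(grid_points, actuator_points, actuator_forces, epsilon):
    """Spread point forces from actuator-line elements onto grid cells using a
    3-D Gaussian kernel eta(r) = exp(-(r/eps)^2) / (eps^3 * pi^(3/2)).
    grid_points: (Ng, 3), actuator_points: (Na, 3), actuator_forces: (Na, 3).
    Returns the body-force density sampled at each grid point."""
    body_force = np.zeros((grid_points.shape[0], 3))
    norm = (epsilon * np.sqrt(np.pi))**3
    for p, f in zip(actuator_points, actuator_forces):
        r2 = np.sum((grid_points - p)**2, axis=1)
        kernel = np.exp(-r2 / epsilon**2) / norm
        body_force += kernel[:, None] * f[None, :]
    return body_force
```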

  1. Development of Experimental Icing Simulation Capability for Full-Scale Swept Wings: Hybrid Design Process, Years 1 and 2

    NASA Technical Reports Server (NTRS)

    Fujiwara, Gustavo; Bragg, Mike; Triphahn, Chris; Wiberg, Brock; Woodard, Brian; Loth, Eric; Malone, Adam; Paul, Bernard; Pitera, David; Wilcox, Pete

    2017-01-01

    This report presents the key results from the first two years of a program to develop experimental icing simulation capabilities for full-scale swept wings. This investigation was undertaken as a part of a larger collaborative research effort on ice accretion and aerodynamics for large-scale swept wings. Ice accretion and the resulting aerodynamic effect on large-scale swept wings present a significant airplane design and certification challenge to airframe manufacturers, certification authorities, and research organizations alike. While the effect of ice accretion on straight wings has been studied in detail for many years, the available data on swept-wing icing are much more limited, especially for larger scales.

  2. Permanent Supportive Housing for Transition-Age Youths: Service Costs and Fidelity to the Housing First Model.

    PubMed

    Gilmer, Todd P

    2016-06-01

    Permanent supportive housing (PSH) programs are being implemented nationally and on a large scale. However, little is known about PSH for transition-age youths (ages 18 to 24). This study estimated health services costs associated with participation in PSH among youths and examined the relationship between fidelity to the Housing First model and health service outcomes. Administrative data were used in a quasi-experimental, difference-in-differences design with a propensity score-matched contemporaneous control group to compare health service costs among 2,609 youths in PSH and 2,609 youths with serious mental illness receiving public mental health services in California from January 1, 2004, through June 30, 2010. Data from a survey of PSH program practices were merged with the administrative data to examine changes in service use among 1,299 youths in 63 PSH programs by level of fidelity to the Housing First model. Total service costs increased by $13,337 among youths in PSH compared with youths in the matched control group. Youths in higher-fidelity programs had larger declines in use of inpatient services and larger increases in outpatient visits compared with youths in lower-fidelity programs. PSH for youths was associated with substantial increases in costs. Higher-fidelity PSH programs may be more effective than lower-fidelity programs in reducing use of inpatient services and increasing use of outpatient services. As substantial investments are made in PSH for youths, it is imperative that these programs are designed and implemented to maximize their effectiveness and their impact on youth outcomes.
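
    The study's quasi-experimental design is a difference-in-differences comparison on a propensity-matched sample; a minimal sketch of how such an estimate is typically obtained is shown below (the file name, column names, and clustering choice are hypothetical, not the study's analysis code).

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel: one row per youth per period, with a PSH participation flag,
# a pre/post indicator, and total service costs for that period.
df = pd.read_csv("matched_youth_costs.csv")   # hypothetical file

# Difference-in-differences: the coefficient on the interaction term is the estimate.
model = smf.ols("cost ~ psh * post", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["youth_id"]})
print(model.params["psh:post"])
```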

  3. Conditions for success in introducing telemedicine in diabetes foot care: a qualitative inquiry.

    PubMed

    Kolltveit, Beate-Christin Hope; Gjengedal, Eva; Graue, Marit; Iversen, Marjolein M; Thorne, Sally; Kirkevold, Marit

    2017-01-01

    The uptake of various telehealth technologies to deliver health care services at a distance is expanding; however more knowledge is needed to help understand vital components for success in using telehealth in different work settings. This study was part of a larger trial designed to investigate the effect of an interactive telemedicine platform. The platform consisted of a web based ulcer record linked to a mobile phone to provide care for people with diabetic foot ulcers in outpatient clinics in specialist hospital care in collaboration with primary health care. The aim of this qualitative study was to identify perceptions of health care professionals in different working settings with respect to facilitators to engagement and participation in the application of telemedicine. Ten focus groups were conducted with health care professionals and leaders in Western Norway between January 2014 and June 2015 using Interpretive Description, an applied qualitative research strategy. Four key conditions for success in using telemedicine as a new technology in diabetes foot care were identified: technology and training that were user-friendly; having a telemedicine champion in the work setting; the support of committed and responsible leaders; and effective communication channels at the organizational level. Successful larger scale implementation of telemedicine must involve consideration of complex contextual and organizational factors associated with different work settings. This form of new care technology in diabetes foot care often involves health care professionals working across different settings with different management systems and organizational cultures. Therefore, attention to the distinct needs of each staff group seems an essential condition for effective implementation.

  4. The Open Access Association? EAHIL's new model for sustainability.

    PubMed

    McSean, Tony; Jakobsson, Arne

    2009-12-01

    To discover a governance structure and a business model for the European Association for Health Information and Libraries (EAHIL) which will be economically sustainable in the medium term, arresting a long-term gradual decline in membership numbers and implementing new revenue streams to sustain association activity. We reviewed the survival strategies of other professional associations, investigated the potential of emerging interactive web technologies, and investigated alternative revenue streams based around the 'franchise' of the annual EAHIL conferences and workshops. A fully worked-through and costed alternative structure was produced, based on abolition of the subscription, web-based procedures and functions, increased income from advertising and sponsorship and a large measure of member participation and engagement. Statutes and Rules of Procedure were rewritten to reflect the changes. This plan was put through the Association's approval cycle and implemented in 2005. The new financial model has proved itself sustainable on the basis of the first 2 years' operations. The long-term gradual decline in membership was reversed, with membership numbers trebling across the EAHIL region. The software worked with minimal problems, including the online electoral process. With no identified precedent from other professional associations, the changes represented a considerable risk, which was justifiable because long-term projections made it clear that continuing the traditional model was not viable. The result is a larger, healthier association with a stronger link to its membership. Long-term risks include the reliance on a high level of member commitment and expertise. There are also important questions about scalability: diseconomies of scale probably limit the applicability of the overall open access model to larger associations.

  5. Public attitudes toward larger cigarette pack warnings: Results from a nationally representative U.S. sample

    PubMed Central

    2017-01-01

    A large body of evidence supports the effectiveness of larger health warnings on cigarette packages. However, there is limited research examining attitudes toward such warning labels, which has potential implications for implementation of larger warning labels. The purpose of the current study was to examine attitudes toward larger warning sizes on cigarette packages and examine variables associated with more favorable attitudes. In a nationally representative survey of U.S. adults (N = 5,014), participants were randomized to different warning size conditions, assessing attitude toward “a health warning that covered (25, 50, 75) % of a cigarette pack.” SAS logistic regression survey procedures were used to account for the complex survey design and sampling weights. Across experimental groups, nearly three-quarters (72%) of adults had attitudes supportive of larger warning labels on cigarette packs. Among the full sample and smokers only (N = 1,511), most adults had favorable attitudes toward labels that covered 25% (78.2% and 75.2%, respectively), 50% (70% and 58.4%, respectively), and 75% (67.9% and 61%, respectively) of a cigarette pack. Young adults, females, racial/ethnic minorities, and non-smokers were more likely to have favorable attitudes toward larger warning sizes. Among smokers only, females and those with higher quit intentions held more favorable attitudes toward larger warning sizes. Widespread support exists for larger warning labels on cigarette packages among U.S. adults, including among smokers. Our findings support the implementation of larger health warnings on cigarette packs in the U.S. as required by the 2009 Tobacco Control Act. PMID:28253257

  6. Modified-Signed-Digit Optical Computing Using Fan-Out

    NASA Technical Reports Server (NTRS)

    Liu, Hua-Kuang; Zhou, Shaomin; Yeh, Pochi

    1996-01-01

    An experimental optical computing system containing optical fan-out elements implements modified-signed-digit (MSD) arithmetic and logic. In comparison with previous optical implementations of MSD arithmetic, this one is characterized by larger throughput, greater flexibility, and simpler optics.

  7. A Particle Module for the PLUTO Code. I. An Implementation of the MHD–PIC Equations

    NASA Astrophysics Data System (ADS)

    Mignone, A.; Bodo, G.; Vaidya, B.; Mattia, G.

    2018-05-01

    We describe an implementation of a particle physics module available for the PLUTO code appropriate for the dynamical evolution of a plasma consisting of a thermal fluid and a nonthermal component represented by relativistic charged particles or cosmic rays (CRs). While the fluid is approached using standard numerical schemes for magnetohydrodynamics, CR particles are treated kinetically using conventional Particle-In-Cell (PIC) techniques. The module can be used either to describe test-particle motion in the fluid electromagnetic field or to solve the fully coupled magnetohydrodynamics (MHD)–PIC system of equations with particle backreaction on the fluid as originally introduced by Bai et al. Particle backreaction on the fluid is included in the form of momentum–energy feedback and by introducing the CR-induced Hall term in Ohm’s law. The hybrid MHD–PIC module can be employed to study CR kinetic effects on scales larger than the (ion) skin depth provided that the Larmor gyration scale is properly resolved. When applicable, this formulation avoids resolving microscopic scales, offering substantial computational savings with respect to PIC simulations. We present a fully conservative formulation that is second-order accurate in time and space, and extends to either the Runge–Kutta (RK) or the corner transport upwind time-stepping schemes (for the fluid), while a standard Boris integrator is employed for the particles. For highly energetic relativistic CRs and in order to overcome the time-step restriction, a novel subcycling strategy that retains second-order accuracy in time is presented. Numerical benchmarks and applications including Bell instability, diffusive shock acceleration, and test-particle acceleration in reconnecting layers are discussed.
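
    The abstract notes that a standard Boris integrator advances the particles; below is a minimal non-relativistic sketch of a single Boris step (the PLUTO module handles relativistic CRs, code units, and subcycling, so this is an illustration of the scheme rather than the module's implementation).

```python
import numpy as np

def boris_push(x, v, E, B, q_over_m, dt):
    """One Boris step: half electric kick, magnetic rotation, half electric kick,
    then a leapfrog position update. x, v, E, B are length-3 arrays."""
    v_minus = v + 0.5 * q_over_m * E * dt          # first half acceleration
    t = 0.5 * q_over_m * B * dt                    # rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)        # velocity after rotation
    v_new = v_plus + 0.5 * q_over_m * E * dt       # second half acceleration
    x_new = x + v_new * dt                         # drift
    return x_new, v_new
```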

  8. A new method to measure galaxy bias by combining the density and weak lensing fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pujol, Arnau; Chang, Chihway; Gaztañaga, Enrique

    We present a new method to measure redshift-dependent galaxy bias by combining information from the galaxy density field and the weak lensing field. This method is based on the work of Amara et al., who use the galaxy density field to construct a bias-weighted convergence field κg. The main difference between Amara et al.'s work and our new implementation is that here we present another way to measure galaxy bias, using tomography instead of bias parametrizations. The correlation between κg and the true lensing field κ allows us to measure galaxy bias using different zero-lag correlations of κg and κ. Our method measures the linear bias factor on linear scales, under the assumption of no stochasticity between galaxies and matter. We use the Marenostrum Institut de Ciències de l'Espai (MICE) simulation to measure the linear galaxy bias for a flux-limited sample (i < 22.5) in tomographic redshift bins using this method. This article is the first to study the accuracy and systematic uncertainties associated with the implementation of the method and the regime in which it is consistent with the linear galaxy bias defined by projected two-point correlation functions (2PCF). We find that our method is consistent with a linear bias at the per cent level for scales larger than 30 arcmin, while non-linearities appear at smaller scales. This measurement is a good complement to other measurements of bias, since it does not depend strongly on σ8 as do the 2PCF measurements. We will apply this method to the Dark Energy Survey Science Verification data in a follow-up article.
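
    Under the stated assumptions (linear bias, no stochasticity), κg = bκ, so a zero-lag ratio of second moments of the two maps returns b directly; the sketch below uses one such ratio for illustration (the particular estimator chosen here is an assumption, as the paper considers several combinations).

```python
import numpy as np

def bias_zero_lag(kappa_g, kappa):
    """Zero-lag estimator of the linear galaxy bias, assuming kappa_g = b * kappa
    with no stochasticity: <kappa_g * kappa> / <kappa * kappa> -> b."""
    kappa_g = np.asarray(kappa_g, dtype=float).ravel()
    kappa = np.asarray(kappa, dtype=float).ravel()
    return np.mean(kappa_g * kappa) / np.mean(kappa * kappa)
```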

  9. GIS-based niche modeling for mapping species' habitats

    USGS Publications Warehouse

    Rotenberry, J.T.; Preston, K.L.; Knick, S.

    2006-01-01

    Ecological 'niche modeling' using presence-only locality data and large-scale environmental variables provides a powerful tool for identifying and mapping suitable habitat for species over large spatial extents. We describe a niche modeling approach that identifies a minimum (rather than an optimum) set of basic habitat requirements for a species, based on the assumption that constant environmental relationships in a species' distribution (i.e., variables that maintain a consistent value where the species occurs) are most likely to be associated with limiting factors. Environmental variables that take on a wide range of values where a species occurs are less informative because they do not limit a species' distribution, at least over the range of variation sampled. This approach is operationalized by partitioning Mahalanobis D2 (standardized difference between values of a set of environmental variables for any point and mean values for those same variables calculated from all points at which a species was detected) into independent components. The smallest of these components represents the linear combination of variables with minimum variance; increasingly larger components represent larger variances and are increasingly less limiting. We illustrate this approach using the California Gnatcatcher (Polioptila californica Brewster) and provide SAS code to implement it.
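
    The partitioning described above can be sketched as an eigen-decomposition of the covariance of environmental variables at presence locations: each principal axis contributes one independent component of D2, and the smallest-variance components are read as the most limiting requirements. The original authors provide SAS code; the Python sketch below is an illustration of the partitioning step, not a translation of their code.

```python
import numpy as np

def partitioned_mahalanobis(presence_env, grid_env):
    """presence_env: (n_presences, k) environmental values at species detections;
    grid_env: (n_cells, k) values at cells to be scored.
    Returns the partitioned D^2 components (one per principal axis, in order of
    increasing variance) and the full Mahalanobis D^2 (their sum)."""
    mu = presence_env.mean(axis=0)
    cov = np.cov(presence_env, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
    centered = grid_env - mu
    scores = centered @ eigvecs                 # projections onto principal axes
    components = scores**2 / eigvals            # independent D^2 components
    d2_full = components.sum(axis=1)            # equals (x-mu)' Sigma^-1 (x-mu)
    return components, d2_full
```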

  10. Extended arrays for nonlinear susceptibility magnitude imaging

    PubMed Central

    Ficko, Bradley W.; Giacometti, Paolo; Diamond, Solomon G.

    2016-01-01

    This study implements nonlinear susceptibility magnitude imaging (SMI) with multifrequency intermodulation and phase encoding. An imaging grid was constructed of cylindrical wells of 3.5-mm diameter and 4.2-mm height on a hexagonal two-dimensional 61-voxel pattern with 5-mm spacing. Patterns of sample wells were filled with 40-μl volumes of Fe3O4 starch-coated magnetic nanoparticles (mNPs) with a hydrodynamic diameter of 100 nm and a concentration of 25 mg/ml. The imaging hardware was configured with three excitation coils and three detection coils in anticipation that a larger imaging system will have arrays of excitation and detection coils. Hexagonal and bar patterns of mNP were successfully imaged (R2 > 0.9) at several orientations. This SMI demonstration extends our prior work to feature a larger coil array, enlarged field-of-view, effective phase encoding scheme, reduced mNP sample size, and more complex imaging patterns to test the feasibility of extending the method beyond the pilot scale. The results presented in this study show that nonlinear SMI holds promise for further development into a practical imaging system for medical applications. PMID:26124044

  11. Accessibility and implementation in UK services of an effective depression relapse prevention programme – mindfulness-based cognitive therapy (MBCT): ASPIRE study protocol

    PubMed Central

    2014-01-01

    Background Mindfulness-based cognitive therapy (MBCT) is a cost-effective psychosocial prevention programme that helps people with recurrent depression stay well in the long term. It was singled out in the 2009 National Institute for Health and Clinical Excellence (NICE) Depression Guideline as a key priority for implementation. Despite good evidence and guideline recommendations, its roll-out and accessibility across the UK appears to be limited and inequitably distributed. The study aims to describe the current state of MBCT accessibility and implementation across the UK, develop an explanatory framework of what is hindering and facilitating its progress in different areas, and develop an Implementation Plan and related resources to promote better and more equitable availability and use of MBCT within the UK National Health Service. Methods/Design This project is a two-phase qualitative, exploratory and explanatory research study, using an interview survey and in-depth case studies theoretically underpinned by the Promoting Action on Research Implementation in Health Services (PARIHS) framework. Interviews will be conducted with stakeholders involved in commissioning, managing and implementing MBCT services in each of the four UK countries, and will include areas where MBCT services are being implemented successfully and where implementation is not working well. In-depth case studies will be undertaken on a range of MBCT services to develop a detailed understanding of the barriers and facilitators to implementation. Guided by the study’s conceptual framework, data will be synthesized across Phase 1 and Phase 2 to develop a fit-for-purpose implementation plan. Discussion Promoting the uptake of evidence-based treatments into routine practice and understanding what influences these processes has the potential to support the adoption and spread of nationally recommended interventions like MBCT. This study could inform a larger scale implementation trial and feed into future implementation of MBCT with other long-term conditions and associated co-morbidities. It could also inform the implementation of interventions that are acceptable and effective, but are not widely accessible or implemented. PMID:24884603

  12. Implementation and dissemination of a transition of care program for rural veterans: a controlled before and after study.

    PubMed

    Leonard, Chelsea; Lawrence, Emily; McCreight, Marina; Lippmann, Brandi; Kelley, Lynette; Mayberry, Ashlea; Ladebue, Amy; Gilmartin, Heather; Côté, Murray J; Jones, Jacqueline; Rabin, Borsika A; Ho, P Michael; Burke, Robert

    2017-10-23

    Adapting promising health care interventions to local settings is a critical component in the dissemination and implementation process. The Veterans Health Administration (VHA) rural transitions nurse program (TNP) is a nurse-led, Veteran-centered intervention designed to improve transitional care for rural Veterans, funded by VA national offices for dissemination to other VA sites serving a predominantly rural Veteran population. Here, we describe our novel approach to the implementation and evaluation of the TNP. This is a controlled before and after study that assesses both implementation and intervention outcomes. During pre-implementation, we assessed site context using a mixed method approach with data from diverse sources including facility-level quantitative data, key informant and Veteran interviews, observations of the discharge process, and a group brainstorming activity. We used the Practical Robust Implementation and Sustainability Model (PRISM) to inform our inquiries, to integrate data from all sources, and to identify factors that may affect implementation. In the implementation phase, we will use internal and external facilitation, paired with audit and feedback, to encourage appropriate contextual adaptations. We will use a modified Stirman framework to document adaptations. During the evaluation phase, we will measure intervention and implementation outcomes at each site using the RE-AIM framework (Reach, Effectiveness, Adoption, Implementation, and Maintenance). We will conduct a difference-in-differences analysis with propensity-matched Veterans and VA facilities as a control. Our primary intervention outcome is 30-day readmission and Emergency Department visit rates. We will use our findings to develop an implementation toolkit that will inform the larger scale-up of the TNP across the VA. The use of PRISM to inform pre-implementation evaluation and synthesize data from multiple sources, coupled with internal and external facilitation, is a novel approach to engaging sites in adapting interventions while promoting fidelity to the intervention. Our application of PRISM to pre-implementation and midline evaluation, as well as documentation of adaptations, provides an opportunity to identify and address contextual factors that may impede or enhance implementation and sustainability of health interventions and inform dissemination.

  13. A Model for Dissipation of Solar Wind Magnetic Turbulence by Kinetic Alfvén Waves at Electron Scales: Comparison with Observations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schreiner, Anne; Saur, Joachim, E-mail: schreiner@geo.uni-koeln.de

    In hydrodynamic turbulence, it is well established that the length of the dissipation scale depends on the energy cascade rate, i.e., the larger the energy input rate per unit mass, the more the turbulent fluctuations need to be driven to increasingly smaller scales to dissipate the larger energy flux. Observations of magnetic spectral energy densities indicate that this intuitive picture is not valid in solar wind turbulence. Dissipation seems to set in at the same length scale for different solar wind conditions independently of the energy flux. To investigate this difference in more detail, we present an analytic dissipation model for solar wind turbulence at electron scales, which we compare with observed spectral densities. Our model combines the energy transport from large to small scales and collisionless damping, which removes energy from the magnetic fluctuations in the kinetic regime. We assume wave–particle interactions of kinetic Alfvén waves (KAWs) to be the main damping process. Wave frequencies and damping rates of KAWs are obtained from the hot plasma dispersion relation. Our model assumes a critically balanced turbulence, where larger energy cascade rates excite larger parallel wavenumbers for a certain perpendicular wavenumber. If the dissipation is additionally wave driven such that the dissipation rate is proportional to the parallel wavenumber—as with KAWs—then an increase of the energy cascade rate is counterbalanced by an increased dissipation rate for the same perpendicular wavenumber, leading to a dissipation length independent of the energy cascade rate.

  14. A Model for Dissipation of Solar Wind Magnetic Turbulence by Kinetic Alfvén Waves at Electron Scales: Comparison with Observations

    NASA Astrophysics Data System (ADS)

    Schreiner, Anne; Saur, Joachim

    2017-02-01

    In hydrodynamic turbulence, it is well established that the length of the dissipation scale depends on the energy cascade rate, i.e., the larger the energy input rate per unit mass, the more the turbulent fluctuations need to be driven to increasingly smaller scales to dissipate the larger energy flux. Observations of magnetic spectral energy densities indicate that this intuitive picture is not valid in solar wind turbulence. Dissipation seems to set in at the same length scale for different solar wind conditions independently of the energy flux. To investigate this difference in more detail, we present an analytic dissipation model for solar wind turbulence at electron scales, which we compare with observed spectral densities. Our model combines the energy transport from large to small scales and collisionless damping, which removes energy from the magnetic fluctuations in the kinetic regime. We assume wave-particle interactions of kinetic Alfvén waves (KAWs) to be the main damping process. Wave frequencies and damping rates of KAWs are obtained from the hot plasma dispersion relation. Our model assumes a critically balanced turbulence, where larger energy cascade rates excite larger parallel wavenumbers for a certain perpendicular wavenumber. If the dissipation is additionally wave driven such that the dissipation rate is proportional to the parallel wavenumber—as with KAWs—then an increase of the energy cascade rate is counterbalanced by an increased dissipation rate for the same perpendicular wavenumber, leading to a dissipation length independent of the energy cascade rate.
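
    A schematic version of the cancellation argument, written here with MHD-range critical-balance scalings purely for illustration (the paper's kinetic Alfvén wave expressions differ in detail), is sketched below.

```latex
% Critical balance: k_parallel v_A ~ k_perp * delta v; Kolmogorov-like amplitudes.
\begin{align*}
  \delta v_{k_\perp} &\sim \left(\frac{\epsilon}{k_\perp}\right)^{1/3}, &
  k_\parallel &\sim \frac{k_\perp\,\delta v_{k_\perp}}{v_A}
     \sim \frac{\epsilon^{1/3} k_\perp^{2/3}}{v_A},\\
  \omega_{\rm nl} &\sim k_\perp \,\delta v_{k_\perp} \sim \epsilon^{1/3} k_\perp^{2/3}, &
  \gamma &\propto k_\parallel \propto \epsilon^{1/3} k_\perp^{2/3}
  \;\Rightarrow\; \frac{\gamma}{\omega_{\rm nl}}\ \text{independent of } \epsilon .
\end{align*}
% A larger cascade rate raises the damping rate and the transfer rate in step,
% so the scale at which damping overtakes the cascade does not move.
```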

  15. Oklahoma Downbursts and Their Asymmetry.

    DTIC Science & Technology

    1986-11-01

    velocity across the divergence center of at least 10 m s-1. Further, downbursts are called microbursts when they are 0.4-4 km in diameter, and macrobursts ... outflows investigated in this study are larger-scale downbursts (macrobursts) that were imbedded in large intense convective storms. This does not ... observed in this study were associated with intense convective storms and were generally of much larger horizontal scale (macrobursts). However, due to

  16. An increase in aerosol burden due to the land-sea warming contrast

    NASA Astrophysics Data System (ADS)

    Hassan, T.; Allen, R.; Randles, C. A.

    2017-12-01

    Climate models simulate an increase in most aerosol species in response to warming, particularly over the tropics and Northern Hemisphere midlatitudes. This increase in aerosol burden is related to a decrease in wet removal, primarily due to reduced large-scale precipitation. Here, we show that the increase in aerosol burden, and the decrease in large-scale precipitation, is related to a robust climate change phenomenon—the land/sea warming contrast. Idealized simulations with two state of the art climate models, the National Center for Atmospheric Research Community Atmosphere Model version 5 (NCAR CAM5) and the Geophysical Fluid Dynamics Laboratory Atmospheric Model 3 (GFDL AM3), show that muting the land-sea warming contrast negates the increase in aerosol burden under warming. This is related to smaller decreases in near-surface relative humidity over land, and in turn, smaller decreases in large-scale precipitation over land—especially in the NH midlatitudes. Furthermore, additional idealized simulations with an enhanced land/sea warming contrast lead to the opposite result—larger decreases in relative humidity over land, larger decreases in large-scale precipitation, and larger increases in aerosol burden. Our results, which relate the increase in aerosol burden to the robust climate projection of enhanced land warming, adds confidence that a warmer world will be associated with a larger aerosol burden.

  17. The Implementation of an Interdisciplinary Co-planning Team Model Among Mathematics and Science Teachers

    NASA Astrophysics Data System (ADS)

    Brown, Michelle Cetner

    In recent years, Science, Technology, Engineering, and Mathematics (STEM) education has become a significant focus of numerous theoretical and commentary articles as researchers have advocated for active and conceptually integrated learning in classrooms. Drawing connections between previously isolated subjects, especially mathematics and science, has been shown to increase student engagement, performance, and critical thinking skills. However, obstacles exist to the widespread implementation of integrated curricula in schools, such as teacher knowledge and school structure and culture. The Interdisciplinary Co-planning Team (ICT) model, in which teachers of different subjects come together regularly to discuss connections between content and to plan larger interdisciplinary activities and smaller examples and discussion points, offers a method for teachers to create sustainable interdisciplinary experiences for students within the bounds of the current school structure. The ICT model is designed to be an iterative, flexible model, providing teachers with both a regular time to come together as "experts" and "teach" each other important concepts from their separate disciplines, and then to bring their shared knowledge and language back to their own classrooms to implement with their students in ways that fit their individual classes. In this multiple-case study, which aims to describe the nature of the co-planning process, the nature of plans, and changes in teacher beliefs as a result of co-planning, three pairs of secondary mathematics and science teachers participated in a 10-week intervention with the ICT model. Each pair constituted one case. Data included observations, interviews, and artifact collection. All interviews, whole-group sessions, and co-planning sessions were transcribed and coded using both theory-based and data-based codes. Finally, a cross-case comparison was used to present similarities and differences across cases. Findings suggest that the ICT model can be implemented with pairs of mathematics and science teachers to create a sustainable way to share experience and expertise, and to create powerful interdisciplinary experiences for their students. In addition, there is evidence that participation with the ICT model positively influences teacher beliefs about the nature of mathematics and science, about teaching and learning, and about interdisciplinary connections. These findings seem to hold across grades, school type, and personal experience. Future implementation of the ICT model on a larger scale is recommended to continue to observe the effects on teachers and students.

  18. Factors affecting economies of scale in combined sewer systems.

    PubMed

    Maurer, Max; Wolfram, Martin; Herlyn, Anja

    2010-01-01

    A generic model is introduced that represents the combined sewer infrastructure of a settlement quantitatively. A catchment area module first calculates the length and size distribution of the required sewer pipes on the basis of rain patterns, housing densities and area size. These results are fed into the sewer-cost module in order to estimate the combined sewer costs of the entire catchment area. A detailed analysis of the relevant input parameters for Swiss settlements is used to identify the influence of size on costs. The simulation results confirm that an economy of scale exists for combined sewer systems. This is the result of two main opposing cost factors: (i) increased construction costs for larger sewer systems due to larger pipes and increased rain runoff in larger settlements, and (ii) lower costs due to higher population and building densities in larger towns. In Switzerland, the more or less organically grown settlement structures and limited land availability emphasise the second factor to show an apparent economy of scale. This modelling approach proved to be a powerful tool for understanding the underlying factors affecting the cost structure for water infrastructures.

  19. Adaptation and validation of the Evidence-Based Practice Belief and Implementation scales for French-speaking Swiss nurses and allied healthcare providers.

    PubMed

    Verloo, Henk; Desmedt, Mario; Morin, Diane

    2017-09-01

    To evaluate two psychometric properties of the French versions of the Evidence-Based Practice Beliefs and Evidence-Based Practice Implementation scales, namely their internal consistency and construct validity. The Evidence-Based Practice Beliefs and Evidence-Based Practice Implementation scales developed by Melnyk et al. are recognised as valid, reliable instruments in English. However, no psychometric validation for their French versions existed. Secondary analysis of a cross sectional survey. Source data came from a cross-sectional descriptive study sample of 382 nurses and other allied healthcare providers. Cronbach's alpha was used to evaluate internal consistency, and principal axis factor analysis and varimax rotation were computed to determine construct validity. The French Evidence-Based Practice Beliefs and Evidence-Based Practice Implementation scales showed excellent reliability, with Cronbach's alphas close to the scores established by Melnyk et al.'s original versions. Principal axis factor analysis showed medium-to-high factor loading scores without obtaining collinearity. Principal axis factor analysis with varimax rotation of the 16-item Evidence-Based Practice Beliefs scale resulted in a four-factor loading structure. Principal axis factor analysis with varimax rotation of the 17-item Evidence-Based Practice Implementation scale revealed a two-factor loading structure. Further research should attempt to understand why the French Evidence-Based Practice Implementation scale showed a two-factor loading structure but Melnyk et al.'s original has only one. The French versions of the Evidence-Based Practice Beliefs and Evidence-Based Practice Implementation scales can both be considered valid and reliable instruments for measuring Evidence-Based Practice beliefs and implementation. The results suggest that the French Evidence-Based Practice Beliefs and Evidence-Based Practice Implementation scales are valid and reliable and can therefore be used to evaluate the effectiveness of organisational strategies aimed at increasing professionals' confidence in Evidence-Based Practice, supporting its use and implementation. © 2017 John Wiley & Sons Ltd.
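
    For reference, the internal-consistency statistic used here is Cronbach's alpha; below is a minimal sketch of its computation from an item-response matrix (a hypothetical helper, not the authors' analysis code).

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k_items) array of item scores.
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)
```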

  20. Field-scale effective matrix diffusion coefficient for fractured rock: results from literature survey.

    PubMed

    Zhou, Quanlin; Liu, Hui-Hai; Molz, Fred J; Zhang, Yingqi; Bodvarsson, Gudmundur S

    2007-08-15

    Matrix diffusion is an important mechanism for solute transport in fractured rock. We recently conducted a literature survey on the effective matrix diffusion coefficient, D_m^e, a key parameter for describing matrix diffusion processes at the field scale. Forty field tracer tests at 15 fractured geologic sites were surveyed and selected for the study, based on data availability and quality. Field-scale D_m^e values were calculated, either directly using data reported in the literature, or by reanalyzing the corresponding field tracer tests. The reanalysis was conducted for the selected tracer tests using analytic or semi-analytic solutions for tracer transport in linear, radial, or interwell flow fields. Surveyed data show that the scale factor of the effective matrix diffusion coefficient (defined as the ratio of D_m^e to the lab-scale matrix diffusion coefficient, D_m, of the same tracer) is generally larger than one, indicating that the effective matrix diffusion coefficient in the field is comparatively larger than the matrix diffusion coefficient at the rock-core scale. This larger value can be attributed to the many mass-transfer processes at different scales in naturally heterogeneous, fractured rock systems. Furthermore, we observed a moderate average trend toward a systematic increase in the scale factor with observation scale. This trend suggests that the effective matrix diffusion coefficient is likely to be statistically scale-dependent. The scale-factor value ranges from 0.5 to 884 for observation scales from 5 to 2000 m. At a given scale, the scale factor varies by two orders of magnitude, reflecting the influence of differing degrees of fractured rock heterogeneity at different geologic sites. In addition, the surveyed data indicate that field-scale longitudinal dispersivity generally increases with observation scale, which is consistent with previous studies. The scale-dependent field-scale matrix diffusion coefficient (and dispersivity) may have significant implications for assessing long-term, large-scale radionuclide and contaminant transport events in fractured rock, both for nuclear waste disposal and contaminant remediation.
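
    The scale factor is simply the ratio of the field-derived to the lab-derived matrix diffusion coefficient; a minimal sketch of computing it and checking its scale dependence with a log-log fit is shown below (the numeric values are illustrative placeholders, not the surveyed data).

```python
import numpy as np

def scale_factor(d_field, d_lab):
    """Scale factor F = D_m^e (field) / D_m (lab-scale) for the same tracer."""
    return d_field / d_lab

# Hypothetical (illustrative) observation scales L (m) and scale factors F.
L = np.array([5.0, 30.0, 100.0, 400.0, 2000.0])
F = np.array([0.8, 3.0, 12.0, 60.0, 500.0])

# A log-log fit quantifies how strongly F grows with observation scale.
slope, intercept = np.polyfit(np.log10(L), np.log10(F), 1)
print(f"F ~ L^{slope:.2f}")
```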

  1. Comparison of the Computational Efficiency of the Original Versus Reformulated High-Fidelity Generalized Method of Cells

    NASA Technical Reports Server (NTRS)

    Arnold, Steven M; Bednarcyk, Brett; Aboudi, Jacob

    2004-01-01

    The High-Fidelity Generalized Method of Cells (HFGMC) micromechanics model has recently been reformulated by Bansal and Pindera (in the context of elastic phases with perfect bonding) to maximize its computational efficiency. This reformulated version of HFGMC has now been extended to include both inelastic phases and imperfect fiber-matrix bonding. The present paper presents an overview of the HFGMC theory in both its original and reformulated forms and a comparison of the results of the two implementations. The objective is to establish the correlation between the two HFGMC formulations and document the improved efficiency offered by the reformulation. The results compare the macro and micro scale predictions of the continuous reinforcement (doubly-periodic) and discontinuous reinforcement (triply-periodic) versions of both formulations into the inelastic regime, and, in the case of the discontinuous reinforcement version, with both perfect and weak interfacial bonding. The results demonstrate that identical predictions are obtained using either the original or reformulated implementations of HFGMC aside from small numerical differences in the inelastic regime due to the different implementation schemes used for the inelastic terms present in the two formulations. Finally, a direct comparison of execution times is presented for the original formulation and reformulation code implementations. It is shown that as the discretization employed in representing the composite repeating unit cell becomes increasingly refined (requiring a larger number of sub-volumes), the reformulated implementation becomes significantly (approximately an order of magnitude at best) more computationally efficient in both the continuous reinforcement (doubly-periodic) and discontinuous reinforcement (triply-periodic) cases.

  2. Organizational readiness in specialty mental health care.

    PubMed

    Hamilton, Alison B; Cohen, Amy N; Young, Alexander S

    2010-01-01

    Implementing quality improvement efforts in clinics is challenging. Assessment of organizational "readiness" for change can set the stage for implementation by providing information regarding existing strengths and deficiencies, thereby increasing the chance of a successful improvement effort. This paper discusses organizational assessment in specialty mental health, in preparation for improving care for individuals with schizophrenia. The objective was to assess organizational readiness for change in specialty mental health in order to facilitate locally tailored implementation strategies. EQUIP-2 is a site-level controlled trial at nine VA medical centers (four intervention, five control). Providers at all sites completed an organizational readiness for change (ORC) measure, and key stakeholders at the intervention sites completed a semi-structured interview at baseline. At the four intervention sites, 16 administrators and 43 clinical staff completed the ORC, and 38 key stakeholders were interviewed. The readiness domains of training needs, communication, and change had lower mean scores (i.e., potential deficiencies), ranging from a low of 23.8 to a high of 36.2 on a scale of 10-50, while staff attributes of growth and adaptability had higher mean scores (i.e., potential strengths), ranging from a low of 35.4 to a high of 41.1. Semi-structured interviews revealed that staff perceptions and experiences of change and decision-making are affected by larger structural factors such as change mandates from VA headquarters. Motivation for change, organizational climate, staff perceptions and beliefs, and prior experience with change efforts contribute to readiness for change in specialty mental health. Sites with less readiness for change may require more flexibility in the implementation of a quality improvement intervention. We suggest that uptake of evidence-based practices can be enhanced by tailoring implementation efforts to the strengths and deficiencies of the organizations that are implementing quality improvement changes.

  3. Stream-groundwater exchange and hydrologic turnover at the network scale

    NASA Astrophysics Data System (ADS)

    Covino, Tim; McGlynn, Brian; Mallard, John

    2011-12-01

    The exchange of water between streams and groundwater can influence stream water quality and hydrologic mass balances, and can attenuate solute export from watersheds. We used conservative tracer injections (chloride, Cl-) across 10 stream reaches to investigate stream water gains and losses from and to groundwater at larger spatial and temporal scales than typically associated with hyporheic exchanges. We found strong relationships between reach discharge, median tracer velocity, and gross hydrologic loss across a range of stream morphologies and sizes in the 11.4 km2 Bull Trout Watershed of central Idaho. We implemented these empirical relationships in a numerical network model and simulated stream water gains and losses and subsequent fractional hydrologic turnover across the stream network. We found that stream gains and losses from and to groundwater can influence source water contributions and stream water compositions across stream networks. Quantifying proportional influences of source water contributions from runoff generation locations across the network on stream water composition can provide insight into the internal mechanisms that partially control the hydrologic and biogeochemical signatures observed along networks and at watershed outlets.
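
    As an illustrative mixing calculation (not the authors' network model; the per-reach gains and losses below are hypothetical numbers rather than values from their empirical discharge relationships), hydrologic turnover can be tracked by routing flow downstream and diluting the headwater-derived fraction at every gross gain:

```python
def route_network(q_headwater, reaches):
    """Route discharge downstream through successive reaches.

    reaches: iterable of (gross_loss, gross_gain) tuples in the same units as
    q_headwater (e.g., L/s). Returns the downstream discharge and the fractional
    turnover, i.e., the share of flow gained from groundwater within the network.
    """
    q = float(q_headwater)
    f_headwater = 1.0                       # fraction still traceable to the headwater input
    for loss, gain in reaches:
        q = max(q - loss, 0.0)              # a gross loss removes the current mixture proportionally
        if q + gain > 0.0:
            f_headwater *= q / (q + gain)   # a gross gain dilutes the headwater-derived fraction
        q += gain
    return q, 1.0 - f_headwater

# hypothetical reach-by-reach (loss, gain) values in L/s
q_out, turnover = route_network(100.0, [(12.0, 20.0), (8.0, 15.0), (5.0, 30.0)])
print(f"downstream discharge: {q_out:.1f} L/s, hydrologic turnover: {turnover:.0%}")
```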

  4. The Design and Evaluation of a Large-Scale Real-Walking Locomotion Interface

    PubMed Central

    Peck, Tabitha C.; Fuchs, Henry; Whitton, Mary C.

    2014-01-01

    Redirected Free Exploration with Distractors (RFED) is a large-scale real-walking locomotion interface developed to enable people to walk freely in virtual environments that are larger than the tracked space in their facility. This paper describes the RFED system in detail and reports on a user study that evaluated RFED by comparing it to walking-in-place and joystick interfaces. The RFED system is composed of two major components, redirection and distractors. This paper discusses design challenges, implementation details, and lessons learned during the development of two working RFED systems. The evaluation study examined the effect of the locomotion interface on users’ cognitive performance on navigation and wayfinding measures. The results suggest that participants using RFED were significantly better at navigating and wayfinding through virtual mazes than participants using walking-in-place and joystick interfaces. Participants traveled shorter distances, made fewer wrong turns, pointed to hidden targets more accurately and more quickly, and were able to place and label targets on maps more accurately, and more accurately estimate the virtual environment size. PMID:22184262

  5. Design of a decentralized reusable research database architecture to support data acquisition in large research projects.

    PubMed

    Iavindrasana, Jimison; Depeursinge, Adrien; Ruch, Patrick; Spahni, Stéphane; Geissbuhler, Antoine; Müller, Henning

    2007-01-01

    The diagnostic and therapeutic processes, as well as the development of new treatments, are hindered by the fragmentation of information which underlies them. In a multi-institutional research study database, the clinical information system (CIS) contains the primary data input. A substantial share of the budget of large-scale clinical studies is often spent on data creation and maintenance. The objective of this work is to design a decentralized, scalable, reusable database architecture with lower maintenance costs for managing and integrating distributed heterogeneous data required as the basis for a large-scale research project. Technical and legal aspects are taken into account based on various use case scenarios. The architecture contains four layers: data storage and access decentralized at the production source, a connector acting as a proxy between the CIS and the external world, an information mediator serving as the data access point, and the client side. The proposed design will be implemented inside six clinical centers participating in the @neurIST project as part of a larger system on data integration and reuse for aneurysm treatment.

  6. An updated protocol for a systematic review of implementation-related measures.

    PubMed

    Lewis, Cara C; Mettert, Kayne D; Dorsey, Caitlin N; Martinez, Ruben G; Weiner, Bryan J; Nolen, Elspeth; Stanick, Cameo; Halko, Heather; Powell, Byron J

    2018-04-25

    Implementation science is the study of strategies used to integrate evidence-based practices into real-world settings (Eccles and Mittman, Implement Sci. 1(1):1, 2006). Central to the identification of replicable, feasible, and effective implementation strategies is the ability to assess the impact of contextual constructs and intervention characteristics that may influence implementation, but several measurement issues make this work quite difficult. For instance, it is unclear which constructs have no measures and which measures have any evidence of psychometric properties like reliability and validity. As part of a larger set of studies to advance implementation science measurement (Lewis et al., Implement Sci. 10:102, 2015), we will complete systematic reviews of measures that map onto the Consolidated Framework for Implementation Research (Damschroder et al., Implement Sci. 4:50, 2009) and the Implementation Outcomes Framework (Proctor et al., Adm Policy Ment Health. 38(2):65-76, 2011), the protocol for which is described in this manuscript. Our primary databases will be PubMed and Embase. Our search strings will be comprised of five levels: (1) the outcome or construct term; (2) terms for measure; (3) terms for evidence-based practice; (4) terms for implementation; and (5) terms for mental health. Two trained research specialists will independently review all titles and abstracts followed by full-text review for inclusion. The research specialists will then conduct measure-forward searches using the "cited by" function to identify all published empirical studies using each measure. The measure and associated publications will be compiled in a packet for data extraction. Data relevant to our Psychometric and Pragmatic Evidence Rating Scale (PAPERS) will be independently extracted and then rated using a worst score counts methodology reflecting "poor" to "excellent" evidence. We will build a centralized, accessible, searchable repository through which researchers, practitioners, and other stakeholders can identify psychometrically and pragmatically strong measures of implementation contexts, processes, and outcomes. By facilitating the employment of psychometrically and pragmatically strong measures identified through this systematic review, the repository would enhance the cumulativeness, reproducibility, and applicability of research findings in the rapidly growing field of implementation science.
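
    As a purely illustrative sketch (the term lists below are hypothetical placeholders, not the protocol's actual vocabulary), a five-level search string of the kind described above can be assembled by OR-ing synonyms within each level and AND-ing the five levels together:

```python
# Hypothetical term lists; the real protocol's vocabulary is defined elsewhere.
levels = {
    "construct":      ["acceptability"],
    "measure":        ["measure", "scale", "questionnaire", "instrument"],
    "evidence_based": ["evidence-based practice", "evidence-based treatment"],
    "implementation": ["implementation", "adoption", "uptake"],
    "mental_health":  ["mental health", "psychiatric", "behavioral health"],
}

def build_query(levels):
    # OR the synonyms within each level, then AND the levels together
    blocks = ["(" + " OR ".join(f'"{term}"' for term in terms) + ")"
              for terms in levels.values()]
    return " AND ".join(blocks)

print(build_query(levels))
```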

  7. Simulation-Based Probabilistic Seismic Hazard Assessment Using System-Level, Physics-Based Models: Assembling Virtual California

    NASA Astrophysics Data System (ADS)

    Rundle, P. B.; Rundle, J. B.; Morein, G.; Donnellan, A.; Turcotte, D.; Klein, W.

    2004-12-01

    The research community is rapidly moving towards the development of an earthquake forecast technology based on the use of complex, system-level earthquake fault system simulations. Using these topologically and dynamically realistic simulations, it is possible to develop ensemble forecasting methods similar to those used in weather and climate research. To effectively carry out such a program, one needs 1) a topologically realistic model to simulate the fault system; 2) data sets to constrain the model parameters through a systematic program of data assimilation; 3) a computational technology making use of modern paradigms of high performance and parallel computing systems; and 4) software to visualize and analyze the results. In particular, we focus attention on a new version of our code Virtual California (version 2001) in which we model all of the major strike slip faults in California, from the Mexico-California border to the Mendocino Triple Junction. Virtual California is a "backslip model", meaning that the long term rate of slip on each fault segment in the model is matched to the observed rate. We use the historic data set of earthquakes with magnitude M > 6 to define the frictional properties of 650 fault segments (degrees of freedom) in the model. To compute the dynamics and the associated surface deformation, we use message passing as implemented in the MPICH standard distribution on a Beowulf cluster consisting of >10 CPUs. We will also report results from implementing the code on significantly larger machines so that we can begin to examine much finer spatial scales of resolution, and to assess scaling properties of the code. We present results of simulations both as static images and as mpeg movies, so that the dynamical aspects of the computation can be assessed by the viewer. We compute a variety of statistics from the simulations, including magnitude-frequency relations, and compare these with data from real fault systems. We report recent results on use of Virtual California for probabilistic earthquake forecasting for several sub-groups of major faults in California. These methods have the advantage that system-level fault interactions are explicitly included, as well as laboratory-based friction laws.

  8. A spectrophotometric method for detecting substellar companions to late-type M stars

    NASA Astrophysics Data System (ADS)

    Oetiker, Brian Glen

    The most common stars in the Galaxy are the main-sequence M stars, yet current techniques are not optimized for detecting companions around the lowest mass stars: those with spectral designations ranging from M6 to M10. Described in this study is a search for companions around such stars using two methods: a unique implementation of the transit method, and a newly designed differential spectrophotometric method. The TEP project focuses on the detection of transits of terrestrial-sized and larger companions in the eclipsing binary system CM Draconis. The newly designed spectrophotometric technique combines the strengths of the spectroscopic and photometric methods, while minimizing their inherent weaknesses. This unique method relies on the placement of three narrow-band optical filters on and around the Titanium Oxide (TiO) bandhead near 8420 Å, a feature commonly seen in the atmospheres of late M stars. One filter is placed on the slope of the bandhead feature, while the remaining two are located on the adjacent continuum portions of the star's spectrum. The companion-induced motion of the star results in a Doppler shifting of the bandhead feature, which in turn causes a change in flux passing through the filter located on the slope of the TiO bandhead. The spectrophotometric method is optimized for detecting compact systems containing brown dwarfs and giant planets. Because of its low dispersion-high photon efficiency design, this method is well suited for surveying large numbers of faint M stars. A small scale survey has been implemented, producing a candidate brown dwarf-class companion of the star WX UMa. Applying the spectrophotometric method to a larger scale survey for brown dwarf and giant planet companions, coupled with a photometric transit study, addresses two key astronomical issues. By detecting or placing limits on compact late type M star systems, a discrimination among competing theories of planetary formation may be gained. Furthermore, searching for a broad range of companion masses may result in a better understanding of the substellar mass function.
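
    As a rough sketch of the signal the method exploits (notation ours, not taken from the study): a radial-velocity variation v_r Doppler-shifts the TiO bandhead, and the fractional flux change through the filter sitting on the bandhead slope is approximately the local logarithmic slope of the spectrum times that shift, while the two continuum filters provide the photometric reference:

```latex
\Delta\lambda \simeq \lambda_0 \,\frac{v_r}{c}, \qquad
\frac{\Delta F_\text{slope}}{F_\text{slope}} \simeq
\left.\frac{d\ln F}{d\lambda}\right|_{\lambda_0} \Delta\lambda .
```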

  9. Structural practices for controlling sediment transport from erosion

    NASA Astrophysics Data System (ADS)

    Gabriels, Donald; Verbist, Koen; Van de Linden, Bruno

    2013-04-01

    Erosion on agricultural fields in the hilly regions of Flanders, Belgium has been recognized as an important economic and ecological problem that requires effective control measures. This has led to the implementation of on-site and off-site measures such as reduced tillage and the installation of grass buffer strips, and dams made of vegetative materials. Dams made out of coir (coconut) and wood chips were evaluated on three different levels of complexity. Under laboratory conditions, one-meter-long dams were subjected to two different discharges and three sediment concentrations at two different slopes, to assess the sediment delivery ratios under variable conditions. At the field scale, discharge and sediment concentrations were monitored under natural rainfall conditions on six 3 m wide plots, of which three were equipped with coir dams, while the other three served as control plots. The same plots were also used for rainfall simulations, which allowed controlling sediment delivery boundary conditions more precisely. Results show a clear ability of these dams to reduce discharge by at least 49% under both field and laboratory conditions. Sediment delivery ratios (SDR) were very small under laboratory and field rainfall simulations (4-9% and 2% respectively), while larger SDRs were observed under natural conditions (43%), probably due to the small sediment concentrations (1-5 g L-1) observed and hence a larger influence of boundary effects. A clear enrichment of larger sand particles (+167%) was also observed behind the dams, showing a significant selective filtering effect.
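
    For reference, the sediment delivery ratio quoted above is conventionally defined as the fraction of incoming sediment that passes the dam (the abstract does not give the study's exact operational definition):

```latex
\mathrm{SDR} \;=\; \frac{\text{sediment mass passing the dam}}{\text{sediment mass delivered to the dam}} \times 100\% .
```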

  10. Development, implementation, and test results on integrated optics switching matrix

    NASA Technical Reports Server (NTRS)

    Rutz, E.

    1982-01-01

    A small integrated optics switching matrix, which was developed, implemented, and tested, indicates high performance. The matrix serves as a model for the design of larger switching matrices. The larger integrated optics switching matrix should form the integral part of a switching center with high data rate throughput of up to 300 megabits per second. The switching matrix technique can accomplish the design goals of low crosstalk and low distortion. About 50 illustrations help explain and depict the many phases of the integrated optics switching matrix. Many equations used to explain and calculate the experimental data are also included.

  11. Scattering Solar Thermal Concentrators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giebink, Noel C.

    2015-01-31

    This program set out to explore a scattering-based approach to concentrate sunlight with the aim of improving collector field reliability and of eliminating wind loading and gross mechanical movement through the use of a stationary collection optic. The approach is based on scattering sunlight from the focal point of a fixed collection optic into the confined modes of a sliding planar waveguide, where it is transported to stationary tubular heat transfer elements located at the edges. Optical design for the first stage of solar concentration, which entails focusing sunlight within a plane over a wide range of incidence angles (>120 degree full field of view) at fixed tilt, led to the development of a new, folded-path collection optic that dramatically out-performs the current state-of-the-art in scattering concentration. Rigorous optical simulation and experimental testing of this collection optic have validated its performance. In the course of this work, we also identified an opportunity for concentrating photovoltaics involving the use of high efficiency microcells made in collaboration with partners at the University of Illinois. This opportunity exploited the same collection optic design as used for the scattering solar thermal concentrator and was therefore pursued in parallel. This system was experimentally demonstrated to achieve >200x optical concentration with >70% optical efficiency over a full day by tracking with <1 cm of lateral movement at fixed latitude tilt. The entire scattering concentrator waveguide optical system has been simulated, tested, and assembled at small scale to verify ray tracing models. These models were subsequently used to predict the full system optical performance at larger, deployment scale ranging up to >1 meter aperture width. Simulations at aperture widths less than approximately 0.5 m with geometric gains ~100x predict an overall optical efficiency in the range 60-70% for angles up to 50 degrees from normal. However, the concentrator optical efficiency was found to decrease significantly with increasing aperture width beyond 0.5 m due to parasitic waveguide out-coupling loss and low-level absorption that become dominant at larger scale. A heat transfer model was subsequently implemented to predict collector fluid heat gain and outlet temperature as a function of flow rate using the optical model as a flux input. It was found that the aperture width size limitation imposed by the optical efficiency characteristics of the waveguide limits the absolute optical power delivered to the heat transfer element per unit length. As compared to state-of-the-art parabolic trough CSP system aperture widths approaching 5 m, this limitation leads to approximately an order-of-magnitude increase in heat transfer tube length to achieve the same heat transfer fluid outlet temperature. The conclusion of this work is that scattering solar thermal concentration cannot be implemented at the scale and efficiency required to compete with the performance of current parabolic trough CSP systems. Applied within the alternate context of CPV, however, the results of this work have likely opened up a transformative new path that enables quasi-static, high efficiency CPV to be implemented on rooftops in the form factor of traditional fixed-panel photovoltaics.

  12. Efficient parallel simulation of CO2 geologic sequestration in saline aquifers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Keni; Doughty, Christine; Wu, Yu-Shu

    2007-01-01

    An efficient parallel simulator for large-scale, long-term CO2 geologic sequestration in saline aquifers has been developed. The parallel simulator is a three-dimensional, fully implicit model that solves large, sparse linear systems arising from discretization of the partial differential equations for mass and energy balance in porous and fractured media. The simulator is based on the ECO2N module of the TOUGH2 code and inherits all the process capabilities of the single-CPU TOUGH2 code, including a comprehensive description of the thermodynamics and thermophysical properties of H2O-NaCl-CO2 mixtures, modeling single and/or two-phase isothermal or non-isothermal flow processes, two-phase mixtures, fluid phases appearing or disappearing, as well as salt precipitation or dissolution. The new parallel simulator uses MPI for parallel implementation, the METIS software package for simulation domain partitioning, and the iterative parallel linear solver package Aztec for solving linear equations by multiple processors. In addition, the parallel simulator has been implemented with an efficient communication scheme. Test examples show that a linear or super-linear speedup can be obtained on Linux clusters as well as on supercomputers. Because of the significant improvement in both simulation time and memory requirement, the new simulator provides a powerful tool for tackling larger scale and more complex problems than can be solved by single-CPU codes. A high-resolution simulation example is presented that models buoyant convection, induced by a small increase in brine density caused by dissolution of CO2.

  13. An improved neutral landscape model for recreating real landscapes and generating landscape series for spatial ecological simulations.

    PubMed

    van Strien, Maarten J; Slager, Cornelis T J; de Vries, Bauke; Grêt-Regamey, Adrienne

    2016-06-01

    Many studies have assessed the effect of landscape patterns on spatial ecological processes by simulating these processes in computer-generated landscapes with varying composition and configuration. To generate such landscapes, various neutral landscape models have been developed. However, the limited set of landscape-level pattern variables included in these models is often inadequate to generate landscapes that reflect real landscapes. In order to achieve more flexibility and variability in the generated landscape patterns, a more complete set of class- and patch-level pattern variables should be implemented in these models. These enhancements have been implemented in Landscape Generator (LG), software that uses optimization algorithms to generate landscapes that match user-defined target values. LG was originally developed for participatory spatial planning at small scales; we enhanced its usability and demonstrated how it can be used for larger-scale ecological studies. First, we used LG to recreate landscape patterns from a real landscape (i.e., a mountainous region in Switzerland). Second, we generated landscape series with incrementally changing pattern variables, which could be used in ecological simulation studies. We found that LG was able to recreate landscape patterns that approximate those of real landscapes. Furthermore, we successfully generated landscape series that would not have been possible with traditional neutral landscape models. LG is a promising novel approach for generating neutral landscapes and enables testing of new hypotheses regarding the influence of landscape patterns on ecological processes. LG is freely available online.
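
    The following is a minimal, hypothetical sketch of the general idea behind such pattern-targeted landscape generation (it is not the Landscape Generator code): cells of a categorical raster are swapped, and a swap is kept only when a simple configuration score improves, while the class composition stays fixed. LG itself optimizes a much richer set of class- and patch-level metrics.

```python
import numpy as np

rng = np.random.default_rng(0)

def generate(shape, target_props, n_iter=20000):
    classes = np.arange(len(target_props))
    # start with the target class composition (approximately) already satisfied
    grid = rng.choice(classes, size=shape, p=target_props)

    def aggregation(g):
        # crude configuration score: count of like-valued rook neighbours
        return int(np.sum(g[:, 1:] == g[:, :-1]) + np.sum(g[1:, :] == g[:-1, :]))

    score = aggregation(grid)
    for _ in range(n_iter):
        r = rng.integers(0, shape[0], size=2)
        c = rng.integers(0, shape[1], size=2)
        candidate = grid.copy()
        # swapping two cells keeps the class composition fixed
        candidate[r[0], c[0]], candidate[r[1], c[1]] = candidate[r[1], c[1]], candidate[r[0], c[0]]
        new_score = aggregation(candidate)
        if new_score >= score:          # hill-climb toward a more aggregated pattern
            grid, score = candidate, new_score
    return grid

landscape = generate((50, 50), target_props=[0.5, 0.3, 0.2])
```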

  14. SQC: secure quality control for meta-analysis of genome-wide association studies.

    PubMed

    Huang, Zhicong; Lin, Huang; Fellay, Jacques; Kutalik, Zoltán; Hubaux, Jean-Pierre

    2017-08-01

    Due to the limited power of small-scale genome-wide association studies (GWAS), researchers tend to collaborate and establish a larger consortium in order to perform large-scale GWAS. Genome-wide association meta-analysis (GWAMA) is a statistical tool that aims to synthesize results from multiple independent studies to increase the statistical power and reduce false-positive findings of GWAS. However, it has been demonstrated that the aggregate data of individual studies are subject to inference attacks, hence privacy concerns arise when researchers share study data in GWAMA. In this article, we propose a secure quality control (SQC) protocol, which enables checking the quality of data in a privacy-preserving way without revealing sensitive information to a potential adversary. SQC employs state-of-the-art cryptographic and statistical techniques for privacy protection. We implement the solution in a meta-analysis pipeline with real data to demonstrate the efficiency and scalability on commodity machines. The distributed execution of SQC on a cluster of 128 cores for one million genetic variants takes less than one hour, which is a modest cost considering the 10-month time span usually observed for the completion of the QC procedure that includes timing of logistics. SQC is implemented in Java and is publicly available at https://github.com/acs6610987/secureqc.

  15. Divide-and-conquer density functional theory on hierarchical real-space grids: Parallel implementation and applications

    NASA Astrophysics Data System (ADS)

    Shimojo, Fuyuki; Kalia, Rajiv K.; Nakano, Aiichiro; Vashishta, Priya

    2008-02-01

    A linear-scaling algorithm based on a divide-and-conquer (DC) scheme has been designed to perform large-scale molecular-dynamics (MD) simulations, in which interatomic forces are computed quantum mechanically in the framework of the density functional theory (DFT). Electronic wave functions are represented on a real-space grid, which is augmented with a coarse multigrid to accelerate the convergence of iterative solutions and with adaptive fine grids around atoms to accurately calculate ionic pseudopotentials. Spatial decomposition is employed to implement the hierarchical-grid DC-DFT algorithm on massively parallel computers. The largest benchmark tests include an 11.8×10^6-atom (1.04×10^12 electronic degrees of freedom) calculation on 131,072 IBM BlueGene/L processors. The DC-DFT algorithm has well-defined parameters to control the data locality, with which the solutions converge rapidly. Also, the total energy is well conserved during the MD simulation. We perform first-principles MD simulations based on the DC-DFT algorithm, in which the large system sizes yield excellent agreement with x-ray scattering measurements for the pair-distribution function of liquid Rb and allow the description of low-frequency vibrational modes of graphene. The band gap of a CdSe nanorod calculated by the DC-DFT algorithm agrees well with the available conventional DFT results. With the DC-DFT algorithm, the band gap is calculated for larger system sizes until the result reaches the asymptotic value.

  16. Large-scale molecular dynamics simulation of DNA: implementation and validation of the AMBER98 force field in LAMMPS.

    PubMed

    Grindon, Christina; Harris, Sarah; Evans, Tom; Novik, Keir; Coveney, Peter; Laughton, Charles

    2004-07-15

    Molecular modelling played a central role in the discovery of the structure of DNA by Watson and Crick. Today, such modelling is done on computers: the more powerful these computers are, the more detailed and extensive can be the study of the dynamics of such biological macromolecules. To fully harness the power of modern massively parallel computers, however, we need to develop and deploy algorithms which can exploit the structure of such hardware. The Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) is a scalable molecular dynamics code including long-range Coulomb interactions, which has been specifically designed to function efficiently on parallel platforms. Here we describe the implementation of the AMBER98 force field in LAMMPS and its validation for molecular dynamics investigations of DNA structure and flexibility against the benchmark of results obtained with the long-established code AMBER6 (Assisted Model Building with Energy Refinement, version 6). Extended molecular dynamics simulations on the hydrated DNA dodecamer d(CTTTTGCAAAAG)(2), which has previously been the subject of extensive dynamical analysis using AMBER6, show that it is possible to obtain excellent agreement in terms of static, dynamic and thermodynamic parameters between AMBER6 and LAMMPS. In comparison with AMBER6, LAMMPS shows greatly improved scalability in massively parallel environments, opening up the possibility of efficient simulations of order-of-magnitude larger systems and/or for order-of-magnitude greater simulation times.

  17. Does Size Matter? Scaling of CO2 Emissions and U.S. Urban Areas

    PubMed Central

    Fragkias, Michail; Lobo, José; Strumsky, Deborah; Seto, Karen C.

    2013-01-01

    Urban areas consume more than 66% of the world’s energy and generate more than 70% of global greenhouse gas emissions. With the world’s population expected to reach 10 billion by 2100, nearly 90% of whom will live in urban areas, a critical question for planetary sustainability is how the size of cities affects energy use and carbon dioxide (CO2) emissions. Are larger cities more energy and emissions efficient than smaller ones? Do larger cities exhibit gains from economies of scale with regard to emissions? Here we examine the relationship between city size and CO2 emissions for U.S. metropolitan areas using a production accounting allocation of emissions. We find that for the time period of 1999–2008, CO2 emissions scale proportionally with urban population size. Contrary to theoretical expectations, larger cities are not more emissions efficient than smaller ones. PMID:23750213
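
    A minimal illustration of the scaling test described above, on synthetic data: regress ln(emissions) on ln(population) and inspect whether the exponent is close to 1 (proportional scaling) or below 1 (economies of scale). The numbers below are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
population = 10 ** rng.uniform(4.5, 7.0, 300)                       # hypothetical city sizes
emissions = 2.0 * population ** 1.0 * rng.lognormal(0.0, 0.3, 300)  # built with beta = 1

# fit ln(emissions) = ln(c) + beta * ln(population)
beta, log_c = np.polyfit(np.log(population), np.log(emissions), 1)
print(f"estimated scaling exponent beta = {beta:.2f}")              # ~1.0: proportional scaling
```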

  18. Parallel Simulation of Unsteady Turbulent Flames

    NASA Technical Reports Server (NTRS)

    Menon, Suresh

    1996-01-01

    Time-accurate simulation of turbulent flames in high Reynolds number flows is a challenging task since both fluid dynamics and combustion must be modeled accurately. To numerically simulate this phenomenon, very large computer resources (both time and memory) are required. Although current vector supercomputers are capable of providing adequate resources for simulations of this nature, the high cost and their limited availability make practical use of such machines less than satisfactory. At the same time, the explicit time integration algorithms used in unsteady flow simulations often possess a very high degree of parallelism, making them very amenable to efficient implementation on large-scale parallel computers. Under these circumstances, distributed memory parallel computers offer an excellent near-term solution for greatly increased computational speed and memory, at a cost that may render the unsteady simulations of the type discussed above more feasible and affordable. This paper discusses the study of unsteady turbulent flames using a simulation algorithm that is capable of retaining high parallel efficiency on distributed memory parallel architectures. Numerical studies are carried out using large-eddy simulation (LES). In LES, the scales larger than the grid are computed using a time- and space-accurate scheme, while the unresolved small scales are modeled using eddy viscosity based subgrid models. This is acceptable for the moment/energy closure since the small scales primarily provide a dissipative mechanism for the energy transferred from the large scales. However, for combustion to occur, the species must first undergo mixing at the small scales and then come into molecular contact. Therefore, global models cannot be used. Recently, a new model for turbulent combustion was developed, in which the combustion is modeled within the subgrid (small scales) using a methodology that simulates the mixing, the molecular transport, and the chemical kinetics within each LES grid cell. Finite-rate kinetics can be included without any closure and this approach actually provides a means to predict the turbulent rates and the turbulent flame speed. The subgrid combustion model requires resolution of the local time scales associated with small-scale mixing, molecular diffusion and chemical kinetics and, therefore, within each grid cell, a significant amount of computations must be carried out before the large-scale (LES resolved) effects are incorporated. Therefore, this approach is uniquely suited for parallel processing and has been implemented on various systems, such as the Intel Paragon, IBM SP-2, Cray T3D and SGI Power Challenge (PC), using the system-independent Message Passing Interface (MPI) compiler. In this paper, timing data on these machines is reported along with some characteristic results.

  19. Validating Large Scale Networks Using Temporary Local Scale Networks

    USDA-ARS?s Scientific Manuscript database

    The USDA NRCS Soil Climate Analysis Network and NOAA Climate Reference Networks are nationwide meteorological and land surface data networks with soil moisture measurements in the top layers of soil. There is considerable interest in scaling these point measurements to larger scales for validating ...

  20. Measure Twice, Build Once: Bench-Scale Testing to Evaluate Bioretention Media Design

    EPA Science Inventory

    The paper discusses the utility of conducting bench-scale testing on selected bioretention media and media amendments to validate hydrologic properties before installing media and amendments in larger pilot- or full-scale rain garden installations. The bench-scale study conclude...

  1. Variability in Soil Properties at Different Spatial Scales (1 m to 1 km) in a Deciduous Forest Ecosystem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garten Jr, Charles T; Kang, S.; Brice, Deanne Jane

    2007-01-01

    The purpose of this research was to test the hypothesis that variability in 11 soil properties, related to soil texture and soil C and N, would increase from small (1 m) to large (1 km) spatial scales in a temperate, mixed-hardwood forest ecosystem in east Tennessee, USA. The results were somewhat surprising and indicated that a fundamental assumption in geospatial analysis, namely that variability increases with increasing spatial scale, did not apply for at least five of the 11 soil properties measured over a 0.5-km2 area. Composite mineral soil samples (15 cm deep) were collected at 1, 5, 10, 50, 250, and 500 m distances from a center point along transects in a north, south, east, and westerly direction. A null hypothesis of equal variance at different spatial scales was rejected (P ≤ 0.05) for mineral soil C concentration, silt content, and the C-to-N ratios in particulate organic matter (POM), mineral-associated organic matter (MOM), and whole surface soil. Results from different tests of spatial variation, based on coefficients of variation or a Mantel test, led to similar conclusions about measurement variability and geographic distance for eight of the 11 variables examined. Measurements of mineral soil C and N concentrations, C concentrations in MOM, extractable soil NH4-N, and clay contents were just as variable at smaller scales (1-10 m) as they were at larger scales (50-500 m). On the other hand, measurement variation in mineral soil C-to-N ratios, MOM C-to-N ratios, and the fraction of soil C in POM clearly increased from smaller to larger spatial scales. With the exception of extractable soil NH4-N, measured soil properties in the forest ecosystem could be estimated (with 95% confidence) to within 15% of their true mean with a relatively modest number of sampling points (n ≤ 25). For some variables, scaling up variation from smaller to larger spatial domains within the ecosystem could be relatively easy because small-scale variation may be indicative of variation at larger scales.
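
    As a small illustration of the type of comparison reported above (synthetic numbers, not the study's data), the coefficient of variation of a soil property can be computed separately for samples grouped by separation distance:

```python
import numpy as np

rng = np.random.default_rng(2)
small_scale = rng.normal(3.0, 0.45, 40)   # hypothetical soil C (%) at 1-10 m spacing
large_scale = rng.normal(3.0, 0.50, 40)   # hypothetical soil C (%) at 50-500 m spacing

def cv(x):
    # coefficient of variation in percent
    return 100.0 * np.std(x, ddof=1) / np.mean(x)

print(f"CV at 1-10 m spacing: {cv(small_scale):.1f}%, CV at 50-500 m spacing: {cv(large_scale):.1f}%")
```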

  2. A Hessian-based methodology for automatic surface crack detection and classification from pavement images

    NASA Astrophysics Data System (ADS)

    Ghanta, Sindhu; Shahini Shamsabadi, Salar; Dy, Jennifer; Wang, Ming; Birken, Ralf

    2015-04-01

    Around 3 trillion vehicle miles are traveled annually on the US transportation system alone. In addition to improving road traffic safety, maintaining the road infrastructure in a sound condition promotes a more productive and competitive economy. Due to the significant amounts of financial and human resources required to detect surface cracks by visual inspection, detection of these surface defects is often delayed, resulting in deferred maintenance operations. This paper introduces an automatic system for acquisition, detection, classification, and evaluation of pavement surface cracks by unsupervised analysis of images collected from a camera mounted on the rear of a moving vehicle. A Hessian-based multi-scale filter has been utilized to detect ridges in these images at various scales. Post-processing on the extracted features has been implemented to produce statistics of length, width, and area covered by cracks, which are crucial for roadway agencies to assess pavement quality. This process has been realized on three sets of roads with different pavement conditions in the city of Brockton, MA. A manually labeled ground truth dataset is made available to evaluate this algorithm; results showed more than 90% segmentation accuracy, demonstrating the feasibility of employing this approach at a larger scale.
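
    A simplified sketch of Hessian-based multi-scale ridge detection of the kind described above (not the paper's pipeline; the scale normalization and the choice of scales are illustrative): at each Gaussian scale the image Hessian is formed, its dominant eigenvalue is taken as the ridge response, and the maximum response over scales flags candidate crack pixels.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_ridge_response(image, scales=(1, 2, 4, 8)):
    image = image.astype(float)
    response = np.zeros_like(image)
    for s in scales:
        # second derivatives at scale s (gamma-normalized by s**2)
        Ixx = gaussian_filter(image, s, order=(0, 2)) * s ** 2
        Iyy = gaussian_filter(image, s, order=(2, 0)) * s ** 2
        Ixy = gaussian_filter(image, s, order=(1, 1)) * s ** 2
        # larger eigenvalue of the 2x2 Hessian [[Ixx, Ixy], [Ixy, Iyy]]
        root = np.sqrt((Ixx - Iyy) ** 2 + 4.0 * Ixy ** 2)
        lam_max = 0.5 * (Ixx + Iyy + root)
        # dark cracks on brighter pavement give a strongly positive lam_max
        response = np.maximum(response, lam_max)
    return response
```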

  3. Tensor-decomposed vibrational coupled-cluster theory: Enabling large-scale, highly accurate vibrational-structure calculations

    NASA Astrophysics Data System (ADS)

    Madsen, Niels Kristian; Godtliebsen, Ian H.; Losilla, Sergio A.; Christiansen, Ove

    2018-01-01

    A new implementation of vibrational coupled-cluster (VCC) theory is presented, where all amplitude tensors are represented in the canonical polyadic (CP) format. The CP-VCC algorithm solves the non-linear VCC equations without ever constructing the amplitudes or error vectors in full dimension but still formally includes the full parameter space of the VCC[n] model in question, resulting in the same vibrational energies as the conventional method. In a previous publication, we have described the non-linear-equation solver for CP-VCC calculations. In this work, we discuss the general algorithm for evaluating VCC error vectors in CP format including the rank-reduction methods used during the summation of the many terms in the VCC amplitude equations. Benchmark calculations for studying the computational scaling and memory usage of the CP-VCC algorithm are performed on a set of molecules including thiadiazole and an array of polycyclic aromatic hydrocarbons. The results show that the reduced scaling and memory requirements of the CP-VCC algorithm allow for performing high-order VCC calculations on systems with up to 66 vibrational modes (anthracene), which indeed are not possible using the conventional VCC method. This paves the way for obtaining highly accurate vibrational spectra and properties of larger molecules.
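
    As a minimal illustration of the canonical polyadic format itself (not the CP-VCC code), an order-N tensor of rank R is stored as N factor matrices, so storage grows linearly rather than exponentially with the number of modes:

```python
import numpy as np

rng = np.random.default_rng(0)
dims, rank = (6, 6, 6, 6), 4                        # four modes, CP rank 4
factors = [rng.standard_normal((d, rank)) for d in dims]

# T[a,b,c,d] = sum_r A[a,r] * B[b,r] * C[c,r] * D[d,r]
full = np.einsum('ar,br,cr,dr->abcd', *factors)     # reconstruction, only for checking

dense_entries = full.size                           # 6**4 = 1296
cp_entries = sum(f.size for f in factors)           # 4 * (6 * 4) = 96
print(dense_entries, cp_entries)
```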

  4. Using Stable Isotopes to Infer the Impacts of Habitat Change on the Diets and Vertical Stratification of Frugivorous Bats in Madagascar.

    PubMed

    Reuter, Kim E; Wills, Abigail R; Lee, Raymond W; Cordes, Erik E; Sewall, Brent J

    2016-01-01

    Human-modified habitats are expanding rapidly; many tropical countries have highly fragmented and degraded forests. Preserving biodiversity in these areas involves protecting species-like frugivorous bats-that are important to forest regeneration. Fruit bats provide critical ecosystem services including seed dispersal, but studies of how their diets are affected by habitat change have often been rather localized. This study used stable isotope analyses (δ15N and δ13C measurement) to examine how two fruit bat species in Madagascar, Pteropus rufus (n = 138) and Eidolon dupreanum (n = 52) are impacted by habitat change across a large spatial scale. Limited data for Rousettus madagascariensis are also presented. Our results indicated that the three species had broadly overlapping diets. Differences in diet were nonetheless detectable between P. rufus and E. dupreanum, and these diets shifted when they co-occurred, suggesting resource partitioning across habitats and vertical strata within the canopy to avoid competition. Changes in diet were correlated with a decrease in forest cover, though at a larger spatial scale in P. rufus than in E. dupreanum. These results suggest fruit bat species exhibit differing responses to habitat change, highlight the threats fruit bats face from habitat change, and clarify the spatial scales at which conservation efforts could be implemented.

  5. Large-Scale Linear Optimization through Machine Learning: From Theory to Practical System Design and Implementation

    DTIC Science & Technology

    2016-08-10

    Report AFRL-AFOSR-JP-TR-2016-0073, 2016. Abstract (fragment): "...performances on various machine learning tasks and it naturally lends itself to fast parallel implementations. Despite this, very little work has been..."

  6. Implementation of the Agitated Behavior Scale in the Electronic Health Record.

    PubMed

    Wilson, Helen John; Dasgupta, Kritis; Michael, Kathleen

    The purpose of the study was to implement an Agitated Behavior Scale through an electronic health record and to evaluate the usability of the scale in a brain injury unit at a rehabilitation hospital. A quality improvement project was conducted in the brain injury unit at a large rehabilitation hospital with registered nurses as participants selected by convenience sampling. The project consisted of three phases and included education, implementation of the scale in the electronic health record, and administration of the survey questionnaire, which utilized the system usability scale. The Agitated Behavior Scale was found to be usable, and there was 92.2% compliance with the use of the electronic Agitated Behavior Scale. The Agitated Behavior Scale was effectively implemented in the electronic health record and was found to be usable in the assessment of agitation. Utilization of the scale through the electronic health record on a daily basis will allow for an early identification of agitation in patients with traumatic brain injury and enable prompt interventions to manage agitation.

  7. Large Pilot Scale Testing of Linde/BASF Post-Combustion CO2 Capture Technology at the Abbott Coal-Fired Power Plant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Brien, Kevin C.

    The work summarized in this report is the first step towards a project that will re-train and create jobs for personnel in the coal industry and continue regional economic development to benefit regions impacted by previous downturns. The larger project is aimed at capturing ~300 tons/day (272 metric tonnes/day) CO2 at a 90% capture rate from existing coal-fired boilers at the Abbott Power Plant on the campus of University of Illinois (UI). It will employ the Linde-BASF novel amine-based advanced CO2 capture technology, which has already shown the potential to be cost-effective, energy efficient and compact at the 0.5-1.5 MWe pilot scales. The overall objective of the project is to design and install a scaled-up system of nominal 15 MWe size, integrate it with the Abbott Power Plant flue gas, steam and other utility systems, and demonstrate the viability of continuous operation under realistic conditions with high efficiency and capacity. The project will also begin to build a workforce that understands how to operate and maintain the capture plants by including students from regional community colleges and universities in the operation and evaluation of the capture system. This project will also lay the groundwork for follow-on projects that pilot utilization of the captured CO2 from coal-fired power plants. The net impact will be to demonstrate a replicable means to (1) use a standardized procedure to evaluate power plants for their ability to be retrofitted with a pilot capture unit; (2) design and construct reliable capture systems based on the Linde-BASF technology; (3) operate and maintain these systems; (4) implement training programs with local community colleges and universities to establish a workforce to operate and maintain the systems; and (5) prepare to evaluate at the large pilot scale level various methods to utilize the resulting captured CO2. Towards the larger project goal, the UI-led team, together with Linde, has completed a preliminary design for the carbon capture pilot plant with basic engineering and cost estimates, established permitting needs, identified approaches to address Environmental, Health, and Safety concerns related to pilot plant installation and operation, developed approaches for long-term use of the captured carbon, and established strategies for workforce development and job creation that will re-train coal operators to operate carbon capture plants. This report describes Phase I accomplishments and demonstrates that the project team is well-prepared for full implementation of Phase 2, to design, build, and operate the carbon capture pilot plant.

  8. Coastal erosion risk assessment using natural and human factors in different scales.

    NASA Astrophysics Data System (ADS)

    Alexandrakis, George; Kampanis, Nikolaos

    2015-04-01

    Climate change, including sea-level rise and increasing storms, raises the threat of coastal erosion. Mitigating and adapting to coastal erosion risks in areas of human interest, such as urban areas, cultural heritage sites, and areas of economic interest, presents a major challenge for society. In this context, decision making needs to be based on reliable risk assessment that includes environmental, social and economic factors. By integrating coastal hazard and risk assessment maps into coastal management plans, risks in areas of interest can be reduced. To address this, the vulnerability of the coast to sea level rise and associated erosion, in terms of expected land loss and socioeconomic importance, needs to be identified. A holistic risk assessment based on environmental, socioeconomic and economic factors can provide managers with information on how to mitigate the impact of coastal erosion and plan protection measures. Such an approach needs to consider social, economic and environmental factors, whose interactions can be better assessed when distributed and analysed across geographical space. In this work, estimates of climate change impacts on the coastline are based on a combination of environmental and economic data analysed in a GIS database. The risk assessment is implemented through the estimation of the vulnerability and exposure variables of the coast at two scales. The larger scale estimates vulnerability at a regional level using environmental factors through a Coastal Vulnerability Index (CVI); the exposure variable is estimated from socioeconomic factors. Subsequently, a smaller scale focuses on highly vulnerable beaches with high social and economic value. The vulnerability of the beach to natural processes, given its environmental characteristics, is estimated with the Beach Vulnerability Index. As the exposure variable, the value of beach width that is capitalized in revenues is estimated through a hedonic pricing model; in this econometric modelling, beach value is related to economic and environmental attributes of the beach. All calculations are implemented in a GIS database organised in five levels: in the first level, raw data are gathered; in the second, data are organised at different scales; the third concerns the generation of new thematic data for further use; risk assessment and cost-benefit analysis for protection measures are carried out in the fourth; and in the fifth, the results are transformed into a user-friendly form for coastal managers. The island of Crete is selected as the case study area, with the city of Rethymnon, found to be highly vulnerable in the regional analysis, used for the small-scale application. In the small-scale vulnerability analysis, the most vulnerable sectors of the beach were identified, and a risk analysis was carried out based on revenue losses. Acknowledgments: This work was implemented within the framework of the Action «Supporting Postdoctoral Researchers» of the Operational Program "Education and Lifelong Learning" (Action's Beneficiary: General Secretariat for Research and Technology), and is co-financed by the European Social Fund (ESF) and the Greek State.
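
    A generic semi-log hedonic specification of the kind alluded to above (the abstract does not give the study's exact functional form) relates capitalized beach value to beach width and other attributes:

```latex
\ln V_i \;=\; \beta_0 + \beta_w \ln W_i + \sum_k \beta_k x_{ik} + \varepsilon_i ,
```

    where V_i is the revenue-based value of beach i, W_i its width, and x_{ik} other economic and environmental attributes; the coefficient on width prices the beach area lost to erosion.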

  9. Validation of whitecap fraction and breaking wave parameters from WAVEWATCH-III using in situ and remote-sensing data

    NASA Astrophysics Data System (ADS)

    Leckler, F.; Hanafin, J. A.; Ardhuin, F.; Filipot, J.; Anguelova, M. D.; Moat, B. I.; Yelland, M.; Prytherch, J.

    2012-12-01

    Whitecaps are the main sink of wave energy. Although the exact processes are still unknown, it is clear that they play a significant role in momentum exchange between atmosphere and ocean, and also influence gas and aerosol exchange. Recently, modeling of whitecap properties was implemented in the spectral wave model WAVEWATCH-III ®. This modeling takes place in the context of the Oceanflux-Greenhouse Gas project, to provide a climatology of breaking waves for gas transfer studies. We present here a validation study for two different wave breaking parameterizations implemented in the spectral wave model WAVEWATCH-III ®. The model parameterizations use different approaches related to the steepness of the carrying waves to estimate breaking wave probabilities. That of Ardhuin et al. (2010) is based on the hypothesis that breaking probabilities become significant when the saturation spectrum exceeds a threshold, and includes a modification to allow for greater breaking in the mean wave direction, to agree with observations. It also includes suppression of shorter waves by longer breaking waves. In the second, (Filipot and Ardhuin, 2012) breaking probabilities are defined at different scales using wave steepness, then the breaking wave height distribution is integrated over all scales. We also propose an adaptation of the latter to make it self-consistent. The breaking probabilities parameterized by Filipot and Ardhuin (2012) are much larger for dominant waves than those from the other parameterization, and show better agreement with modeled statistics of breaking crest lengths measured during the FAIRS experiment. This stronger breaking also has an impact on the shorter waves due to the parameterization of short wave damping associated with large breakers, and results in a different distribution of the breaking crest lengths. Converted to whitecap coverage using Reul and Chapron (2003), both parameterizations agree reasonably well with commonly-used empirical fits of whitecap coverage against wind speed (Monahan and Woolf, 1989) and with the global whitecap coverage of Anguelova and Webster (2006), derived from space-borne radiometry. This is mainly due to the fact that the breaking of larger waves in the parametrization by Filipot and Ardhuin (2012) is compensated for by the intense breaking of smaller waves in that of Ardhuin et al. (2010). Comparison with in situ data collected during research ship cruises in the North and South Atlantic (SEASAW, DOGEE and WAGES), and the Norwegian Sea (HiWASE) between 2006 and 2011 also shows good agreement. However, as large scale breakers produce a thicker foam layer, modeled mean foam thickness clearly depends on the scale of the breakers. Foam thickness is thus a more interesting parameter for calibrating and validating breaking wave parameterizations, as the differences in scale can be determined. With this in mind, we present the initial results of validation using an estimation of mean foam thickness using multiple radiometric bands from satellites SMOS and AMSR-E.

  10. Scaling properties of Polish rain series

    NASA Astrophysics Data System (ADS)

    Licznar, P.

    2009-04-01

    Scaling properties and the multifractal nature of precipitation time series had not been studied for local Polish conditions until recently, owing to the lack of long series of high-resolution data. The first Polish study of precipitation time series scaling phenomena was based on pluviograph data from the Wroclaw University of Environmental and Life Sciences meteorological station, located in the south-western part of the country. The 38 annual rainfall records from the years 1962-2004 were converted into digital format and transformed into standard 5-minute time series. The scaling properties and multifractal character of this material were studied by means of several different techniques: power spectral density analysis, functional box-counting, probability distribution/multiple scaling and trace moment methods. The results demonstrated the general scaling character of the time series over time scales ranging from 5 minutes up to at least 24 hours. At the same time, some characteristic breaks in scaling behavior were recognized. It is believed that the breaks were artificial, arising from the limited measuring precision of the pluviograph rain gauge. In particular, the limited precision of the pluviograph in recording low-intensity precipitation was found to be the main reason for the artificial break in the energy spectra, as reported previously by other authors. The analysis of codimension and moment scaling functions showed signs of a first-order multifractal phase transition. Such behavior is typical for dressed multifractal processes that are observed by spatial or temporal averaging on scales larger than the inner scale of those processes. The fractal dimension of the rainfall process support, derived from the geometry of the codimension and moment scaling functions, was found to be 0.45. The same fractal dimension estimated by means of the functional box-counting method was equal to 0.58. In the final part of the study, implementation of the double trace moment method allowed for estimation of local universal multifractal rainfall parameters (α=0.69; C1=0.34; H=-0.01). The research confirmed the fractal character of the rainfall process support and the multifractal character of the variability of rainfall intensity values among the analyzed time series. The scaling of local Wroclaw rainfall at time scales from 24 hours down to 5 minutes opens the door for future research concerning, for example, the implementation of random cascades for disaggregating daily precipitation totals into smaller time intervals. The output of such random cascades, in the form of 5-minute artificial rainfall scenarios, could be of great practical use for urban hydrology and for the design and hydrodynamic modeling of stormwater and combined sewage conveyance systems.
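
    For context, the universal multifractal parameters quoted above enter the standard moment scaling function estimated by the double trace moment method (standard form from the universal multifractal literature; it is not reproduced in the abstract):

```latex
K(q) \;=\; \frac{C_1}{\alpha - 1}\left(q^{\alpha} - q\right), \qquad \alpha \neq 1 ,
```

    with α the Lévy multifractality index, C1 the codimension of the mean field, and H describing the degree of non-conservation of the process.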

  11. Chondrocyte Deformations as a Function of Tibiofemoral Joint Loading Predicted by a Generalized High-Throughput Pipeline of Multi-Scale Simulations

    PubMed Central

    Sibole, Scott C.; Erdemir, Ahmet

    2012-01-01

    Cells of the musculoskeletal system are known to respond to mechanical loading and chondrocytes within the cartilage are not an exception. However, understanding how joint level loads relate to cell level deformations, e.g. in the cartilage, is not a straightforward task. In this study, a multi-scale analysis pipeline was implemented to post-process the results of a macro-scale finite element (FE) tibiofemoral joint model to provide joint mechanics based displacement boundary conditions to micro-scale cellular FE models of the cartilage, for the purpose of characterizing chondrocyte deformations in relation to tibiofemoral joint loading. It was possible to identify the load distribution within the knee among its tissue structures and ultimately within the cartilage among its extracellular matrix, pericellular environment and resident chondrocytes. Various cellular deformation metrics (aspect ratio change, volumetric strain, cellular effective strain and maximum shear strain) were calculated. To illustrate further utility of this multi-scale modeling pipeline, two micro-scale cartilage constructs were considered: an idealized single cell at the centroid of a 100×100×100 μm block commonly used in past research studies, and an anatomically based (11 cell model of the same volume) representation of the middle zone of tibiofemoral cartilage. In both cases, chondrocytes experienced amplified deformations compared to those at the macro-scale, predicted by simulating one body weight compressive loading on the tibiofemoral joint. In the 11 cell case, all cells experienced less deformation than the single cell case, and also exhibited a larger variance in deformation compared to other cells residing in the same block. The coupling method proved to be highly scalable due to micro-scale model independence that allowed for exploitation of distributed memory computing architecture. The method’s generalized nature also allows for substitution of any macro-scale and/or micro-scale model providing application for other multi-scale continuum mechanics problems. PMID:22649535

  12. Analysis of Multiallelic CNVs by Emulsion Haplotype Fusion PCR.

    PubMed

    Tyson, Jess; Armour, John A L

    2017-01-01

    Emulsion-fusion PCR recovers long-range sequence information by combining products in cis from individual genomic DNA molecules. Emulsion droplets act as very numerous small reaction chambers in which different PCR products from a single genomic DNA molecule are condensed into short joint products, uniting sequences in cis from widely separated genomic sites. These products can therefore provide information about the arrangement of sequences and variants at a larger scale than established long-read sequencing methods. The method has been useful for defining the phase of variants in haplotypes, typing inversions, and determining the configuration of sequence variants in multiallelic CNVs. In this description we outline the rationale for the application of emulsion-fusion PCR methods to the analysis of multiallelic CNVs, and give practical details for our own implementation of the method in that context.

  13. Evaluation of the Plant-Craig stochastic convection scheme in an ensemble forecasting system

    NASA Astrophysics Data System (ADS)

    Keane, R. J.; Plant, R. S.; Tennant, W. J.

    2015-12-01

    The Plant-Craig stochastic convection parameterization (version 2.0) is implemented in the Met Office Regional Ensemble Prediction System (MOGREPS-R) and is assessed in comparison with the standard convection scheme, which includes only a simple stochastic element arising from random parameter variation. A set of 34 ensemble forecasts, each with 24 members, is considered, over the month of July 2009. Deterministic and probabilistic measures of the precipitation forecasts are assessed. The Plant-Craig parameterization is found to improve probabilistic forecast measures, particularly the results for lower precipitation thresholds. The impact on deterministic forecasts at the grid scale is neutral, although the Plant-Craig scheme does deliver improvements when forecasts are made over larger areas. The improvements found are greater in conditions of relatively weak synoptic forcing, for which convective precipitation is likely to be less predictable.

  14. Capture of white sturgeon larvae downstream of The Dalles Dam, Columbia River, Oregon and Washington, 2012

    USGS Publications Warehouse

    Parsley, Michael J.; Kofoot, Eric

    2013-01-01

    Wild-spawned white sturgeon (Acipenser transmontanus) larvae, captured and reared in aquaculture facilities and subsequently released, are increasingly being used in sturgeon restoration programs in the Columbia River Basin. A reconnaissance study was conducted to determine where to deploy nets to capture white sturgeon larvae downstream of a known white sturgeon spawning area. As a result of the study, 103 white sturgeon larvae and 5 newly hatched free-swimming embryos were captured at 3 of 5 reconnaissance netting sites. The netting, conducted downstream of The Dalles Dam on the Columbia River during June 25–29, 2012, provided information for potentially implementing full-scale collection efforts of large numbers of larvae for rearing in aquaculture facilities and for subsequent release at a larger size in white sturgeon restoration programs.

  15. Implementation Fidelity in Community-Based Interventions

    PubMed Central

    Breitenstein, Susan M.; Gross, Deborah; Garvey, Christine; Hill, Carri; Fogg, Louis; Resnick, Barbara

    2012-01-01

    Implementation fidelity is the degree to which an intervention is delivered as intended and is critical to successful translation of evidence-based interventions into practice. Diminished fidelity may be why interventions that work well in highly controlled trials may fail to yield the same outcomes when applied in real life contexts. The purpose of this paper is to define implementation fidelity and describe its importance for the larger science of implementation, discuss data collection methods and current efforts in measuring implementation fidelity in community-based prevention interventions, and present future research directions for measuring implementation fidelity that will advance implementation science. PMID:20198637

  16. Coarse graining flow of spin foam intertwiners

    NASA Astrophysics Data System (ADS)

    Dittrich, Bianca; Schnetter, Erik; Seth, Cameron J.; Steinhaus, Sebastian

    2016-12-01

    Simplicity constraints play a crucial role in the construction of spin foam models, yet their effective behavior on larger scales is scarcely explored. In this article we introduce intertwiner and spin net models for the quantum group SU(2)_k × SU(2)_k, which implement the simplicity constraints analogous to four-dimensional Euclidean spin foam models, namely the Barrett-Crane (BC) and the Engle-Pereira-Rovelli-Livine/Freidel-Krasnov (EPRL/FK) model. These models are numerically coarse grained via tensor network renormalization, allowing us to trace the flow of simplicity constraints to larger scales. In order to perform these simulations we have substantially adapted tensor network algorithms, which we discuss in detail as they can be of use in other contexts. The BC and the EPRL/FK model behave very differently under coarse graining: While the unique BC intertwiner model is a fixed point and therefore constitutes a two-dimensional topological phase, BC spin net models flow away from the initial simplicity constraints and converge to several different topological phases. Most of these phases correspond to decoupling spin foam vertices; however we find also a new phase in which this is not the case, and in which a nontrivial version of the simplicity constraints holds. The coarse graining flow of the BC spin net models indicates furthermore that the transitions between these phases are not of second order. The EPRL/FK model by contrast reveals a far more intricate and complex dynamics. We observe an immediate flow away from the original simplicity constraints; however, with the truncation employed here, the models generically do not converge to a fixed point. The results show that the imposition of simplicity constraints can indeed lead to interesting and also very complex dynamics. Thus we need to further develop coarse graining tools to efficiently study the large scale behavior of spin foam models, in particular for the EPRL/FK model.
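
    The elementary operation behind the tensor network renormalization used above is a truncated singular value decomposition of a bond. The sketch below shows only that generic truncation step (applied to a random tensor, with a hypothetical bond dimension cutoff), not the adapted algorithms developed by the authors:

```python
import numpy as np

def truncate_bond(theta, chi_max):
    """Split a four-index tensor theta[a, b, c, d] into two three-index
    tensors along the (a,b)|(c,d) cut, keeping at most chi_max singular
    values.  This SVD truncation is the elementary step of tensor network
    renormalization schemes."""
    a, b, c, d = theta.shape
    mat = theta.reshape(a * b, c * d)
    u, s, vh = np.linalg.svd(mat, full_matrices=False)
    chi = min(chi_max, np.count_nonzero(s > 1e-12))
    # Distribute the singular values symmetrically over both pieces.
    sq = np.sqrt(s[:chi])
    left = (u[:, :chi] * sq).reshape(a, b, chi)
    right = (sq[:, None] * vh[:chi, :]).reshape(chi, c, d)
    truncation_error = 1.0 - (s[:chi] ** 2).sum() / (s ** 2).sum()
    return left, right, truncation_error

# Toy example: truncate a random 4x4x4x4 tensor down to bond dimension 8.
rng = np.random.default_rng(1)
theta = rng.normal(size=(4, 4, 4, 4))
left, right, err = truncate_bond(theta, chi_max=8)
print(left.shape, right.shape, "relative truncation error:", err)
```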

  17. A Composite Medium Approximation for Moisture Tension-Dependent Anisotropy in Unsaturated Layered Sediments

    NASA Astrophysics Data System (ADS)

    Pruess, K.

    2001-12-01

    Sedimentary formations often have a layered structure in which hydrogeologic properties have substantially larger correlation length in the bedding plane than perpendicular to it. Laboratory and field experiments and observations have shown that even small-scale layering, down to millimeter-size laminations, can substantially alter and impede the downward migration of infiltrating liquids, while enhancing lateral flow. The fundamental mechanism is that of a capillary barrier: at increasingly negative moisture tension (capillary suction pressure), coarse-grained layers with large pores desaturate more quickly than finer-grained media. This strongly reduces the hydraulic conductivity of the coarser (higher saturated hydraulic conductivity) layers, which then act as barriers to downward flow, forcing water to accumulate and spread near the bottom of the overlying finer-grained material. We present a "composite medium approximation" (COMA) for anisotropic flow behavior on a typical grid block scale (0.1 - 1 m or larger) in finite-difference models. On this scale the medium is conceptualized as consisting of homogeneous horizontal layers with uniform thickness, and capillary equilibrium is assumed to prevail locally. Directionally-dependent relative permeabilities are obtained by considering horizontal flow to proceed via "conductors in parallel," while vertical flow involves "resistors in series." The model is formulated for the general case of N layers, and implementation of a simplified two-layer (fine-coarse) approximation in the multiphase flow simulator TOUGH2 is described. The accuracy of COMA is evaluated by comparing numerical simulations of plume migration in 1-D and 2-D unsaturated flow with results of fine-grid simulations in which all layers are discretized explicitly. Applications to water seepage and solute transport at the Hanford site are also described. This work was supported by the U.S. Department of Energy under Contract No. DE-AC03-76SF00098 through Memorandum Purchase Order 248861-A-B2 between Pacific Northwest National Laboratory and Lawrence Berkeley National Laboratory.
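
    The "conductors in parallel / resistors in series" averaging can be illustrated directly. The sketch below uses hypothetical layer thicknesses and unsaturated conductivities and is not the TOUGH2 implementation described above:

```python
import numpy as np

def effective_conductivities(thickness, k_layer):
    """Direction-dependent effective hydraulic conductivity of a stack of
    horizontal layers in local capillary equilibrium.

    Horizontal flow: layers act as conductors in parallel
        -> thickness-weighted arithmetic mean.
    Vertical flow:   layers act as resistors in series
        -> thickness-weighted harmonic mean.
    """
    thickness = np.asarray(thickness, dtype=float)
    k_layer = np.asarray(k_layer, dtype=float)
    w = thickness / thickness.sum()
    k_horizontal = np.sum(w * k_layer)
    k_vertical = 1.0 / np.sum(w / k_layer)
    return k_horizontal, k_vertical

# Hypothetical fine/coarse lamination: at a given moisture tension the coarse
# layer has desaturated and its unsaturated conductivity has dropped far
# below that of the fine layer.
thickness = [0.01, 0.01]          # m
k_unsat = [1e-6, 1e-9]            # m/s (fine, coarse)
kh, kv = effective_conductivities(thickness, k_unsat)
print(f"K_horizontal = {kh:.2e} m/s, K_vertical = {kv:.2e} m/s")
print(f"anisotropy ratio K_h/K_v = {kh / kv:.1f}")
```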

  18. Scaling considerations related to interactions of hydrologic, pedologic and geomorphic processes (Invited)

    NASA Astrophysics Data System (ADS)

    Sidle, R. C.

    2013-12-01

    Hydrologic, pedologic, and geomorphic processes are strongly interrelated and affected by scale. These interactions exert important controls on runoff generation, preferential flow, contaminant transport, surface erosion, and mass wasting. Measurement of hydraulic conductivity (K) and infiltration capacity at small scales generally underestimates these values for application at larger field, hillslope, or catchment scales. Both vertical and slope-parallel saturated flow and related contaminant transport are often influenced by interconnected networks of preferential flow paths, which are not captured in K measurements derived from soil cores. Using such K values in models may underestimate water and contaminant fluxes and runoff peaks. As shown in small-scale runoff plot studies, infiltration rates are typically lower than integrated infiltration across a hillslope or in headwater catchments. The resultant greater infiltration-excess overland flow in small plots compared with larger landscapes is attributed to the lack of preferential flow continuity; plot border effects; greater homogeneity of rainfall inputs, topography and soil physical properties; and magnified effects of hydrophobicity in small plots. At the hillslope scale, isolated areas with high infiltration capacity can greatly reduce surface runoff and surface erosion. These hydropedologic and hydrogeomorphic processes are also relevant to both the occurrence and timing of landslides. Many landslide studies have focused either on small-scale vadose zone processes and how these affect soil mechanical properties, or on larger-scale, more descriptive geomorphic investigations. One of the issues in translating laboratory-based investigations of the geotechnical behavior of soils to the field scales at which landslides occur is the characterization of large-scale hydrological processes and flow paths in heterogeneous and anisotropic porous media. These processes are affected not only by the spatial distribution of soil physical properties and bioturbation, but also by geomorphic attributes. Interactions among preferential flow paths can induce rapid pore water pressure response within soil mantles and trigger landslides during storm peaks. Alternatively, in poorly developed and unstructured soils, infiltration occurs mainly through the soil matrix and a lag time exists between the rainfall peak and the development of pore water pressures at depth. Deep, slow-moving mass failures are also strongly controlled by secondary porosity within the regolith, with the timing of activation linked to recharge dynamics. As such, understanding both small- and larger-scale processes is needed to estimate geomorphic impacts, as well as streamflow generation and contaminant migration.

  19. Assessment of environmental impacts and operational costs of the implementation of an innovative source-separated urine treatment.

    PubMed

    Igos, Elorri; Besson, Mathilde; Navarrete Gutiérrez, Tomás; Bisinella de Faria, Ana Barbara; Benetto, Enrico; Barna, Ligia; Ahmadi, Aras; Spérandio, Mathieu

    2017-12-01

    Innovative treatment technologies and management methods are necessary to valorise the constituents of wastewater, in particular nutrients from urine, which are highly concentrated and whose recovery can significantly reduce the impacts associated with artificial fertilizer production. The FP7 project ValuefromUrine proposed a new two-step process (called VFU) based on struvite precipitation and a microbial electrolysis cell (MEC) to recover ammonia, which is further transformed into ammonium sulphate. The environmental and economic impacts of its prospective implementation in the Netherlands were evaluated using life cycle assessment (LCA) methodology and operational costs. To tackle the lack of stable data from the pilot plant and the complex effects on the wastewater treatment plant (WWTP), process simulation was coupled with LCA and cost assessment using the Python programming language. Particular attention was also given to the propagation and analysis of input uncertainties. Five scenarios of VFU implementation were compared with the conventional treatment of 1 m³ of wastewater. Inventory data for the WWTP operation were obtained from the SUMO software. The LCA was based on the Brightway2 software (using the ecoinvent database and the ReCiPe method). The results, based on 500 iterations sampled from input distributions (foreground parameters, ecoinvent background data and market prices), showed a significant advantage of the VFU technology, both at a small and decentralized scale and at a large and centralized scale (95% confidence intervals not including zero). The benefits mainly concern the production of fertilizers, the decreased effort at the WWTP, the water savings from toilet flushing, and the lower infrastructure volumes if the WWTP is redesigned (in case of a significant reduction of the nutrient load in the wastewater). The modelling approach, which could be applied to other case studies, improves the representativeness and interpretation of the results (e.g. complex relationships, global sensitivity analysis) but requires additional effort (computing and engineering knowledge, longer calculation time). Finally, the sustainability assessment should be refined as the technology is developed at larger scale, to update these preliminary conclusions before commercialization. Copyright © 2017 Elsevier Ltd. All rights reserved.
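
    The 500-iteration uncertainty propagation can be illustrated with a generic Monte Carlo sketch; the parameter distributions below are hypothetical placeholders, and the actual study couples SUMO process simulation with Brightway2 rather than using this simplified calculation:

```python
import numpy as np

rng = np.random.default_rng(42)
n_iter = 500   # number of Monte Carlo iterations, as in the study design

# Hypothetical input distributions (illustrative values, not project data):
# net impact per m3 of wastewater = impacts of the new process minus credits
# for recovered fertilizer and decreased effort at the WWTP.
electricity_kwh   = rng.normal(0.8, 0.1, n_iter)               # MEC electricity use
impact_per_kwh    = rng.lognormal(np.log(0.4), 0.2, n_iter)    # kg CO2e per kWh
fertilizer_credit = rng.normal(0.25, 0.05, n_iter)             # kg CO2e avoided
wwtp_credit       = rng.normal(0.10, 0.03, n_iter)             # kg CO2e avoided

net_impact = electricity_kwh * impact_per_kwh - fertilizer_credit - wwtp_credit

low, high = np.percentile(net_impact, [2.5, 97.5])
print(f"median net impact: {np.median(net_impact):+.3f} kg CO2e per m3")
print(f"95% interval: [{low:+.3f}, {high:+.3f}]")
print("interval excludes zero:", (low > 0) or (high < 0))
```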

  20. The Spatial Pattern and Interactions of Woody Plants on the Temperate Savanna of Inner Mongolia, China: The Effects of Alternating Seasonal Grazing-Mowing Regimes

    PubMed Central

    2015-01-01

    Ulmus pumila tree-dominated temperate savanna, which is distributed widely throughout the forest-steppe ecotone on the Mongolian Plateau, is a relatively stable woody-herbaceous complex ecosystem in northern China. Relatively more attention has been paid to the degradation of typical steppe areas, whereas less focus has been placed on the succession of this typical temperate savanna under the present management regime. In this study, we established 3 sample plots 100 m×100 m in size along a gradient of fixed distances from one herder’s stationary site and then surveyed all the woody plants in these plots. A spatial point pattern analysis was employed to clarify the spatial distribution and interaction of these woody plants. The results indicated that old U. pumila trees (DBH ≥ 20 cm) showed a random distribution and that medium U. pumila trees (5 cm ≤ DBH < 20 cm) showed an aggregated distribution at a smaller scale and a random distribution at a larger scale; few or no juvenile trees (DBH < 5 cm) were present, and seedlings (without DBH) formed aggregations in all 3 plots. These findings can be explained by an alternate seasonal grazing-mowing regime (exclosure in summer, mowing in autumn and grazing in winter and spring); the shrubs in all 3 plots exist along a grazing gradient that harbors xerophytic and mesophytic shrubs. Of these shrubs, xerophytic shrubs show significant aggregation at a smaller scale (0-5.5 m), whereas mesophytic shrubs show significant aggregation at a larger scale (0-25 m), which may be the result of the dual effects of grazing pressure and climate change. Medium trees and seedlings significantly facilitate the distributions of xerophytic shrubs and compete significantly with mesophytic shrubs due to differences in water use strategies. We conclude that the implementation of an alternative grazing-mowing regime results in xerophytic shrub encroachment or existence, breaking the chain of normal succession in a U. pumila tree community in this typical temperate savanna ecosystem. This might eventually result in the degradation of the original tree-dominated savanna to a xerophytic shrub-dominated savanna. PMID:26196956
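
    A common estimator behind this kind of spatial point pattern analysis is Ripley's K. The sketch below is a simplified version (no edge correction, synthetic point coordinates), not the analysis used in the study:

```python
import numpy as np

def ripley_k(points, radii, area):
    """Naive Ripley's K estimator for a 2-D point pattern (no edge
    correction): K(r) = area / (n*(n-1)) * #{ordered pairs with d_ij <= r}.
    Under complete spatial randomness K(r) ~ pi*r**2; larger values indicate
    aggregation at that scale."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)               # exclude self-pairs
    return np.array([area / (n * (n - 1)) * np.sum(d <= r) for r in radii])

# Toy example on a 100 m x 100 m plot: clustered "shrubs" versus a random
# pattern with the same number of points.
rng = np.random.default_rng(3)
centres = rng.uniform(0, 100, size=(10, 2))
clustered = (centres[:, None, :] + rng.normal(0, 2.0, size=(10, 20, 2))).reshape(-1, 2)
random_pts = rng.uniform(0, 100, size=(200, 2))

radii = np.array([2.0, 5.0, 10.0, 25.0])
print("K clustered:", ripley_k(clustered, radii, 100 * 100).round(0))
print("K random   :", ripley_k(random_pts, radii, 100 * 100).round(0))
print("pi*r^2     :", (np.pi * radii ** 2).round(0))
```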

  1. Integrating multi-disciplinary field and laboratory methods to investigate the response and recovery of beach-dune systems in Ireland to extreme events

    NASA Astrophysics Data System (ADS)

    Farrell, E.; Lynch, K.; Wilkes Orozco, S.; Castro Camba, G.; Scullion, A.

    2017-12-01

    This two-year field monitoring project examines the response and recovery of 1.2 km of a coastal beach-dune system on the west coast of Ireland (The Maharees, Brandon Bay, Co. Kerry) to storms. The results from this project initiated a larger-scale study to assess the long-term evolution of Brandon Bay (12 km) and patterns of meso-scale rotation. On a bay scale, historic shoreline analyses were completed using historic Ordnance Survey maps, aerial photography, and DGPS surveys input to the Digital Shoreline Analysis System. These were coupled with a GSTA-wavemeter experiment that collected 410 sediment samples along the beach and nearshore to identify preferred sediment transport pathways along the bay. On a local scale (1.2 km), geomorphological changes of the beach and nearshore were monitored using repeated monthly DGPS surveys and drone technology. Topographical data were correlated with atmospheric data obtained from a locally installed automatic weather station, oceanographic data from secondary sources, and photogrammetry using a camera installed at the site collecting pictures every 10 minutes during daylight hours. Changes in surface elevation landward of the foredune from aeolian processes were measured using five pin transects across the dune. The contribution of local blowout dynamics was measured using drone imagery and structure-from-motion technology. The results establish that the average shoreline recession along the 1.2 km site is 72 m during the past 115 years. The topographic surveys illustrate that natural beach building processes initiate system recovery after storms, including elevated foreshores and backshores and nearshore sand bar migration across the entire 1.2 km stretch of coastline. In parallel with the scientific work, the local community have mobilized and are working closely with the lead scientists to implement short-term coastal management strategies such as signage, information booklets, sand trap fencing, walkways, wooden revetments, and dune planting, in order to support the end goal of obtaining financial support from government for a larger, long-term coastal protection plan.

  2. Using stable isotopes to identify the scaling effects of riparian peatlands on runoff generation processes and DOC mobilisation

    NASA Astrophysics Data System (ADS)

    Tunaley, Claire; Tetzlaff, Doerthe; Soulsby, Chris

    2017-04-01

    Knowledge of hydrological sources, flow paths, and their connectivity is fundamental to understanding stream flow generation and surface water quality in peatlands. Stable isotopes are proven tools for tracking the sources and flow paths of runoff. However, relatively few studies have used isotopes in peat-dominated catchments. Here, we combined 13 months (June 2014 - July 2015) of daily isotope measurements in stream water with daily DOC and 15-minute FDOM (fluorescent component of dissolved organic matter) data, at three nested scales in NE Scotland, to identify the hydrological processes occurring in riparian peatlands. We investigated how runoff generation processes in a small, riparian peatland dominated headwater catchment (0.65 km2) propagate to larger scales (3.2 km2 and 31 km2) with a decreasing percentage of riparian peatland coverage. Isotope damping was most pronounced in the 0.65 km2 catchment due to high water storage in the organic soils, which encouraged tracer mixing and resulted in attenuated runoff peaks. At the largest scale, stream flow and water isotope dynamics showed a more flashy response. Particularly insightful in this study was calculating the deviation of the isotopes from the local meteoric water line, the lc-excess. The lc-excess revealed evaporative fractionation in the peatland dominated catchment, particularly during summer low flows. This implied high hydrological connectivity in the form of constant seepage from the peatlands sustaining high baseflows at the headwater scale. This constant connectivity resulted in high DOC concentrations at the peatland site during baseflow (≈5 mg l⁻¹). In contrast, at the larger scales, DOC was minimal during low flows (≈2 mg l⁻¹) due to increased groundwater influence and the disconnection between DOC sources and the stream. Insights into event dynamics through the analysis of DOC hysteresis loops showed slight dilution on the rising limb, the strong influence of dry antecedent conditions, and a quick recovery between events at the riparian peatland site. Again, these dynamics were driven by the tight coupling and high connectivity of the landscape to the stream. At larger scales, the disconnection between the landscape units increased and the variable connectivity controlled runoff generation and DOC dynamics. The results presented here suggest that the hydrological processes occurring in riparian peatlands in headwater catchments are less evident at larger scales, which may have implications for the larger-scale impact of peatland restoration projects.

  3. Heart Rate and Heart Rate Variability in Dairy Cows with Different Temperament and Behavioural Reactivity to Humans

    PubMed Central

    Tőzsér, János; Szenci, Ottó; Póti, Péter; Pajor, Ferenc

    2015-01-01

    From the 1990s, extensive research was started on the physiological aspects of individual traits in animals. Previous research has established two extreme (proactive and reactive) coping styles in several animal species, but the relationship of reactivity with autonomic nervous system (ANS) activity has not yet been investigated in cattle. The aim of this study was to characterize cardiac autonomic activity under different conditions in cows with different individual characteristics. For this purpose, we investigated heart rate and ANS-related heart rate variability (HRV) parameters of dairy cows (N = 282) on smaller- and larger-scale farms grouped by (1) temperament and (2) behavioural reactivity to humans (BRH). Animals with high BRH scores were defined as impulsive, while animals with low BRH scores were defined as reserved. Cardiac parameters were calculated for undisturbed lying (baseline) and for milking bouts, the latter in the presence of an unfamiliar person (stressful situation). Sympathetic tone was higher, while vagal activity was lower, in temperamental cows than in calm animals during rest on both smaller- and larger-scale farms. During milking, HRV parameters were indicative of higher sympathetic and lower vagal activity in temperamental cows compared with calm ones on farms of both sizes. Basal heart rate did not differ between BRH groups on either smaller- or larger-scale farms. Differences between the basal ANS activity of impulsive and reserved cows reflected a higher resting vagal and lower sympathetic activity of reserved animals compared with impulsive ones on both smaller- and larger-scale farms. There was no difference in either heart rate or HRV parameters between BRH groups during milking on either smaller- or larger-scale farms. These two groupings allowed us to draw possible parallels between personality and cardiac autonomic activity during both rest and milking in dairy cows. Heart rate and HRV seem to be useful for the characterisation of physiological differences related to temperament and BRH. PMID:26291979

  4. DEVELOPMENT OF RIPARIAN ZONE INDICATORS (INT. GRANT)

    EPA Science Inventory

    Landscape features (e.g., land use) influence water quality characteristics on a variety of spatial scales. For example, while land use is controlled by anthropogenic features at a local scale, geologic features are set at larger spatial, and longer temporal scales. Individual ...

  5. Lagrangian Statistics and Intermittency in Gulf of Mexico.

    PubMed

    Lin, Liru; Zhuang, Wei; Huang, Yongxiang

    2017-12-12

    Due to the nonlinear interaction between different flow patterns, for instance ocean currents, meso-scale eddies, and waves, the movement of the ocean is extremely complex, and a multiscale statistical description is therefore relevant. In this work, a high time-resolution velocity record with a time step of 15 minutes, obtained from a Lagrangian drifter deployed in the Gulf of Mexico (GoM) from July 2012 to October 2012, is considered. The measured Lagrangian velocity correlation function shows a strong daily cycle due to the diurnal tidal cycle. The estimated Fourier power spectrum E(f) implies a dual-power-law behavior that is separated by the daily cycle. The corresponding scaling exponents are close to -1.75 for time scales larger than 1 day (0.1 ≤ f ≤ 0.4 day⁻¹) and -2.75 for time scales smaller than 1 day (2 ≤ f ≤ 8 day⁻¹). A Hilbert-based approach is then applied to this data set to identify the possible multifractal property of the cascade process. The results show intermittent dynamics for time scales larger than 1 day and less intermittent dynamics for time scales smaller than 1 day. It is speculated that energy is partially injected via the diurnal tidal movement and then transferred to both larger and smaller scales through a complex cascade process, which needs more study in the near future.
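
    The dual-power-law fit can be sketched as two log-log regressions of the periodogram on either side of the daily cycle; the drifter velocity below is synthetic, and the procedure is only illustrative of the analysis described:

```python
import numpy as np

def band_slope(freq, power, fmin, fmax):
    """Least-squares power-law slope of E(f) within the band [fmin, fmax]."""
    sel = (freq >= fmin) & (freq <= fmax)
    return np.polyfit(np.log(freq[sel]), np.log(power[sel]), 1)[0]

# Synthetic 15-minute "drifter velocity": a diurnal oscillation superposed on
# red noise (illustrative only, not the GoM data set).
dt_days = 15.0 / (60 * 24)
n = 2 ** 14
rng = np.random.default_rng(7)
t = np.arange(n) * dt_days
v = 0.3 * np.sin(2 * np.pi * t) + np.cumsum(rng.normal(0, 0.01, n))

freq = np.fft.rfftfreq(n, d=dt_days)                 # cycles per day
power = np.abs(np.fft.rfft(v - v.mean())) ** 2 / n   # periodogram

# Fit separate power laws below and above the daily cycle, mirroring the
# dual-power-law analysis described in the abstract.
print("slope, f in [0.1, 0.4] cpd:", round(band_slope(freq, power, 0.1, 0.4), 2))
print("slope, f in [2, 8] cpd   :", round(band_slope(freq, power, 2.0, 8.0), 2))
```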

  6. Modeling Open Architecture and Evolutionary Acquisition: Implementation Lessons from the ARCI Program for the Rapid Capability Insertion Process

    DTIC Science & Technology

    2009-04-22

    Implementation Issues Another RCIP implementation risk is program management burnout. The ACRI program manager specifically identified the potential...of burnout in his program management team due to the repeated, intense Integration phases. To investigate the possibility and severity of this risk to...the ACRI simulation. This suggests that the burnout risk will be larger for RCIP than it was for ACRI. Successfully implementing a sustainable RCIP

  7. Implementation of external cephalic version in the Netherlands: a retrospective cohort study.

    PubMed

    Vlemmix, Floortje; Rosman, Ageeth N; te Hoven, Susan; van de Berg, Suzanne; Fleuren, Margot A H; Rijnders, Marlies E; Beuckens, Antje; Opmeer, Brent C; Mol, Ben Willem J; Kok, Marjolein

    2014-12-01

    External cephalic version (ECV) reduces the rate of elective cesarean sections as a result of breech presentation. Several studies have shown that not all eligible women undergo an ECV attempt. The aim of this study was to evaluate the implementation of ECV in the Netherlands and to explain variation in implementation rates with hospital characteristics and individual factors. We invited 40 hospitals to participate in this retrospective cohort study. We reviewed hospital charts for all singleton breech deliveries from 36 weeks' gestation and onwards between January 2008 and December 2009. We documented whether an ECV attempt was performed, reasons for not performing an attempt, mode of delivery, and hospital characteristics. We included 4,770 women from 36 hospitals. ECV was performed in 2,443 women (62.2% of eligible women, range 8.2-83.6% in different hospitals). Implementation rates were higher in teaching hospitals, hospitals with special office hours for ECV, larger obstetric units, and hospitals located in larger cities. Suboptimal implementation was mainly caused by health care providers who did not offer ECV. ECV implementation rates vary widely among hospitals. Suboptimal implementation is mostly caused by the care provider not offering the treatment and secondly due to women not opting for the offered attempt. A prerequisite for designing a proper implementation strategy is a detailed understanding of the exact reasons for not offering and not opting for ECV. © 2014 Wiley Periodicals, Inc.

  8. Constraint-based strain design using continuous modifications (CosMos) of flux bounds finds new strategies for metabolic engineering.

    PubMed

    Cotten, Cameron; Reed, Jennifer L

    2013-05-01

    In recent years, a growing number of metabolic engineering strain design techniques have employed constraint-based modeling to determine metabolic and regulatory network changes which are needed to improve chemical production. These methods use systems-level analysis of metabolism to help guide experimental efforts by identifying deletions, additions, downregulations, and upregulations of metabolic genes that will increase biological production of a desired metabolic product. In this work, we propose a new strain design method with continuous modifications (CosMos) that provides strategies for deletions, downregulations, and upregulations of fluxes that will lead to the production of the desired products. The method is conceptually simple and easy to implement, and can provide additional strategies over current approaches. We found that the method was able to find strain design strategies that required fewer modifications and had larger predicted yields than strategies from previous methods in example and genome-scale networks. Using CosMos, we identified modification strategies for producing a variety of metabolic products, compared strategies derived from Escherichia coli and Saccharomyces cerevisiae metabolic models, and examined how imperfect implementation may affect experimental outcomes. This study gives a powerful and flexible technique for strain engineering and examines some of the unexpected outcomes that may arise when strategies are implemented experimentally. Copyright © 2013 WILEY‐VCH Verlag GmbH & Co. KGaA, Weinheim.
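
    Strain design by modifying flux bounds builds on the standard flux balance analysis (FBA) linear program. The sketch below applies a continuous downregulation to a toy two-metabolite network; it illustrates the general idea rather than the CosMos formulation or the genome-scale models used in the study:

```python
import numpy as np
from scipy.optimize import linprog

# Toy stoichiometric network (not the CosMos formulation or a genome-scale
# model): metabolites A and B, reactions
#   R0: -> A (fixed uptake), R1: A -> B, R2: A -> biomass, R3: B -> product
S = np.array([[1, -1, -1,  0],    # balance of A
              [0,  1,  0, -1]])   # balance of B

def fba(lower, upper, objective_rxn):
    """Standard FBA linear program: maximise v[objective_rxn] subject to the
    steady-state condition S v = 0 and the bounds lower <= v <= upper."""
    c = np.zeros(S.shape[1])
    c[objective_rxn] = -1.0                    # linprog minimises, so negate
    res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]),
                  bounds=list(zip(lower, upper)), method="highs")
    return res.x

# Uptake fixed at 10; the cell is assumed to maximise biomass (R2).
lb = [10.0, 0.0, 0.0, 0.0]
ub = [10.0, 1000.0, 1000.0, 1000.0]
print("wild type fluxes     :", fba(lb, ub, objective_rxn=2))

# Continuous modification in the spirit of CosMos: downregulate (rather than
# delete) the biomass reaction by tightening its upper flux bound.
ub_down = ub.copy()
ub_down[2] = 2.0
print("biomass downregulated:", fba(lb, ub_down, objective_rxn=2))
# The second solution redirects the remaining uptake through R1 and R3,
# i.e. the product flux rises from 0 to 8 while some growth is still possible.
```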

  9. A study of the adequacy of quasi-geostrophic dynamics for modeling the effect of frontal cyclones on the larger scale flow

    NASA Technical Reports Server (NTRS)

    Mudrick, S.

    1985-01-01

    The validity of quasi-geostrophic (QG) dynamics was tested against primitive equation (PE) dynamics for modeling the effect of cyclone waves on the larger scale flow. The formation of frontal cyclones and the dynamics of occluded frontogenesis were studied. Surface friction runs with the PE model and the wavelength of maximum instability are described, as is a fine-resolution PE simulation of a polar low.

  10. On the Spatio-Temporal Variability of Field-Aligned Currents Observed with the Swarm Satellite Constellation: Implications for the Energetics of Magnetosphere-Ionosphere Coupling

    NASA Astrophysics Data System (ADS)

    Pakhotin, I.; Mann, I. R.; Forsyth, C.; Rae, J.; Burchill, J. K.; Knudsen, D. J.; Murphy, K. R.; Gjerloev, J. W.; Ozeke, L.; Balasis, G.; Daglis, I. A.

    2016-12-01

    With the advent of the Swarm mission with its multi-satellite capacity, it became possible for the first time to make systematic close separation multi-satellite measurements of the magnetic fields associated with field-aligned currents (FACs) at a 50 Hz cadence using fluxgate magnetometers. Initial studies have revealed an even greater level of detail and complexity and spatio-temporal non-stationarity than previously understood. On inter-satellite separation scales of 10 seconds along-track and <120 km cross-track, the peak-to-peak magnitudes of the small scale and poorly correlated inter-spacecraft magnetic field fluctuations can reach tens to hundreds of nanoteslas. These magnitudes are directly comparable to those associated with larger scale magnetic perturbations such as the global scale Region 1 and 2 FAC systems characterised by Iijima and Potemra 40 years ago. We evaluate the impact of these smaller scale magnetic perturbations relative to the larger scale FAC systems statistically as a function of the total number of FAC crossings observed, and as a function of geomagnetic indices, spatial location, and season. Further case studies incorporating Swarm electric field measurements enable estimates of the Poynting flux associated with the small scale and non-stationary magnetic fields. We interpret the small scale structures as Alfvenic, suggesting that Alfven waves play a much larger and more energetically significant role in magnetosphere-ionosphere coupling than previously thought. We further examine what causes such high variability among low-Earth orbit FAC systems to be observed under some conditions but not in others.

  11. Collaboration, negotiation, and coalescence for interagency-collaborative teams to scale-up evidence-based practice.

    PubMed

    Aarons, Gregory A; Fettes, Danielle L; Hurlburt, Michael S; Palinkas, Lawrence A; Gunderson, Lara; Willging, Cathleen E; Chaffin, Mark J

    2014-01-01

    Implementation and scale-up of evidence-based practices (EBPs) is often portrayed as involving multiple stakeholders collaborating harmoniously in the service of a shared vision. In practice, however, collaboration is a more complex process that may involve shared and competing interests and agendas, and negotiation. The present study examined the scale-up of an EBP across an entire service system using the Interagency Collaborative Team approach. Participants were key stakeholders in a large-scale county-wide implementation of an EBP to reduce child neglect, SafeCare. Semistructured interviews and/or focus groups were conducted with 54 individuals representing diverse constituents in the service system, followed by an iterative approach to coding and analysis of transcripts. The study was conceptualized using the Exploration, Preparation, Implementation, and Sustainment framework. Although community stakeholders eventually coalesced around implementation of SafeCare, several challenges affected the implementation process. These challenges included differing organizational cultures, strategies, and approaches to collaboration; competing priorities across levels of leadership; power struggles; and role ambiguity. Each of the factors identified influenced how stakeholders approached the EBP implementation process. System-wide scale-up of EBPs involves multiple stakeholders operating in a nexus of differing agendas, priorities, leadership styles, and negotiation strategies. The term collaboration may oversimplify the multifaceted nature of the scale-up process. Implementation efforts should openly acknowledge and consider this nexus when individual stakeholders and organizations enter into EBP implementation through collaborative processes.

  12. Collaboration, Negotiation, and Coalescence for Interagency-Collaborative Teams to Scale-up Evidence-Based Practice

    PubMed Central

    Aarons, Gregory A.; Fettes, Danielle; Hurlburt, Michael; Palinkas, Lawrence; Gunderson, Lara; Willging, Cathleen; Chaffin, Mark

    2014-01-01

    Objective Implementation and scale-up of evidence-based practices (EBPs) is often portrayed as involving multiple stakeholders collaborating harmoniously in the service of a shared vision. In practice, however, collaboration is a more complex process that may involve shared and competing interests and agendas, and negotiation. The present study examined the scale-up of an EBP across an entire service system using the Interagency Collaborative Team (ICT) approach. Methods Participants were key stakeholders in a large-scale county-wide implementation of an EBP to reduce child neglect, SafeCare®. Semi-structured interviews and/or focus groups were conducted with 54 individuals representing diverse constituents in the service system, followed by an iterative approach to coding and analysis of transcripts. The study was conceptualized using the Exploration, Preparation, Implementation, and Sustainment (EPIS) framework. Results Although community stakeholders eventually coalesced around implementation of SafeCare, several challenges affected the implementation process. These challenges included differing organizational cultures, strategies, and approaches to collaboration, competing priorities across levels of leadership, power struggles, and role ambiguity. Each of the factors identified influenced how stakeholders approached the EBP implementation process. Conclusions System wide scale-up of EBPs involves multiple stakeholders operating in a nexus of differing agendas, priorities, leadership styles, and negotiation strategies. The term collaboration may oversimplify the multifaceted nature of the scale-up process. Implementation efforts should openly acknowledge and consider this nexus when individual stakeholders and organizations enter into EBP implementation through collaborative processes. PMID:24611580

  13. Experimentally simulating the dynamics of quantum light and matter at ultrastrong coupling using circuit QED (1) - implementation and matter dynamics -

    NASA Astrophysics Data System (ADS)

    Kounalakis, M.; Langford, N. K.; Sagastizabal, R.; Dickel, C.; Bruno, A.; Luthi, F.; Thoen, D. J.; Endo, A.; Dicarlo, L.

    The field dipole coupling of quantum light and matter, described by the quantum Rabi model, leads to exotic phenomena when the coupling strength g becomes comparable to or larger than the atom and photon frequencies ωq,r. In this ultra-strong coupling regime, excitations are not conserved, leading to collapse-revival dynamics in atom and photon parity and Schrödinger-cat-like atom-photon entanglement. We realize a quantum simulation of the Rabi model using a transmon qubit coupled to a resonator. In this first part, we describe our analog-digital approach to implement up to 90 symmetric Trotter steps, combining single-qubit gates with the Jaynes-Cummings interaction naturally present in our circuit QED system. Controlling the phase of microwave pulses defines a rotating frame and enables simulation of arbitrary parameter regimes of the Rabi model. We demonstrate measurements of qubit parity dynamics showing revivals at g/ωr > 0.8 for ωq = 0, and characteristic dynamics for nondegenerate ωq from g/4 to g. Funding from the EU FP7 Project ScaleQIT, an ERC Grant, the Dutch Research Organization NWO, and Microsoft Research.

  14. Extending the range of real time density matrix renormalization group simulations

    NASA Astrophysics Data System (ADS)

    Kennes, D. M.; Karrasch, C.

    2016-03-01

    We discuss a few simple modifications to time-dependent density matrix renormalization group (DMRG) algorithms which allow one to access larger time scales. We specifically aim at beginners and present practical aspects of how to implement these modifications within any standard matrix product state (MPS) based formulation of the method. Most importantly, we show how to 'combine' the Schrödinger and Heisenberg time evolutions of arbitrary pure states |ψ⟩ and operators A in the evaluation of ⟨A⟩ψ(t) = ⟨ψ|A(t)|ψ⟩. This includes quantum quenches. The generalization to (non-)thermal mixed state dynamics ⟨A⟩ρ(t) = Tr[ρA(t)] induced by an initial density matrix ρ is straightforward. In the context of linear response (ground state or finite temperature T > 0) correlation functions, one can extend the simulation time by a factor of two by 'exploiting time translation invariance', which is efficiently implementable within MPS DMRG. We present a simple analytic argument for why a recently introduced disentangler succeeds in reducing the effort of time-dependent simulations at T > 0. Finally, we advocate the Python programming language as an elegant option for beginners to set up a DMRG code.
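
    The 'combination' of Schrödinger and Heisenberg evolutions can be checked exactly on a tiny chain without any MPS machinery. The sketch below assumes a small Heisenberg chain and exact matrix exponentials, purely to illustrate why splitting the evolution over t/2 reproduces ⟨A⟩ψ(t):

```python
import numpy as np
from scipy.linalg import expm

# Exact, small-system illustration of <psi|A(t)|psi> = <psi(t/2)|A(t/2)|psi(t/2)>:
# the operator is evolved in the Heisenberg picture over t/2 and the state in
# the Schroedinger picture over t/2.
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]], dtype=complex) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2

def heisenberg_chain(L):
    """Spin-1/2 Heisenberg Hamiltonian on an open chain of L sites."""
    H = np.zeros((2 ** L, 2 ** L), dtype=complex)
    for i in range(L - 1):
        for op in (sx, sy, sz):
            term = np.eye(1, dtype=complex)
            for j in range(L):
                term = np.kron(term, op if j in (i, i + 1) else np.eye(2))
            H += term
    return H

L, t = 6, 4.0
H = heisenberg_chain(L)
A = np.kron(heisenberg_chain(2), np.eye(2 ** (L - 2)))   # observable S_1 . S_2

psi = np.zeros(2 ** L, dtype=complex)
psi[0b010101] = 1.0                                       # Neel product state

U_full = expm(-1j * H * t)
U_half = expm(-1j * H * (t / 2))

direct = np.vdot(U_full @ psi, A @ (U_full @ psi)).real   # evolve state by t
A_half = U_half.conj().T @ A @ U_half                     # evolve A by t/2
psi_half = U_half @ psi                                   # evolve psi by t/2
split = np.vdot(psi_half, A_half @ psi_half).real

print(direct, split)   # agree up to numerical round-off
```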

  15. Vectorization of a particle simulation method for hypersonic rarefied flow

    NASA Technical Reports Server (NTRS)

    Mcdonald, Jeffrey D.; Baganoff, Donald

    1988-01-01

    An efficient particle simulation technique for hypersonic rarefied flows is presented at an algorithmic and implementation level. The implementation is for a vector computer architecture, specifically the Cray-2. The method models an ideal diatomic Maxwell molecule with three translational and two rotational degrees of freedom. Algorithms are designed specifically for compatibility with fine grain parallelism by reducing the number of data dependencies in the computation. By insisting on this compatibility, the method is capable of performing simulation on a much larger scale than previously possible. A two-dimensional simulation of supersonic flow over a wedge is carried out for the near-continuum limit, where the gas is in equilibrium and the ideal solution can be used as a check on the accuracy of the gas model employed in the method. Also, a three-dimensional, Mach 8, rarefied flow about a finite-span flat plate at a 45 degree angle of attack was simulated. It utilized over 10⁷ particles carried through 400 discrete time steps in less than one hour of Cray-2 CPU time. This problem was chosen to exhibit the capability of the method in handling a large number of particles and a true three-dimensional geometry.
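
    The key property exploited for vectorization, the absence of loop-carried data dependencies in the particle update, can be illustrated with a toy free-flight and wall-reflection step written entirely as array operations; this is an illustration of the principle, not the Cray-2 algorithm:

```python
import numpy as np

def advance_particles(x, v, dt, x_max):
    """Advance all particles one time step: free flight followed by specular
    reflection at the walls x = 0 and x = x_max.  Every particle is updated
    independently, so the whole step is expressed as array operations with no
    loop-carried data dependencies (the property that enables fine grain
    vectorisation)."""
    x = x + v * dt
    over = x > x_max
    under = x < 0.0
    # np.where avoids any per-particle branching.
    x = np.where(over, 2.0 * x_max - x, x)
    x = np.where(under, -x, x)
    v = np.where(over | under, -v, v)
    return x, v

rng = np.random.default_rng(5)
n = 10 ** 6                      # large particle counts are unproblematic here
x = rng.uniform(0.0, 1.0, n)
v = rng.normal(0.0, 1.0, n)
for _ in range(100):
    x, v = advance_particles(x, v, dt=1e-3, x_max=1.0)
print(x.min(), x.max())          # all particles remain inside the domain
```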

  16. Using Motion-Sensor Games to Encourage Physical Activity for Adults with Intellectual Disability.

    PubMed

    Taylor, Michael J; Taylor, David; Gamboa, Patricia; Vlaev, Ivo; Darzi, Ara

    2016-01-01

    Adults with Intellectual Disability (ID) are at high risk of being in poor health as a result of exercising infrequently; recent evidence indicates this is often due to a lack of opportunities to exercise. This pilot study investigated the use of motion-sensor game technology to enable and encourage exercise for this population. Five adults (two female, three male; aged 34-74 [M = 55.20, SD = 16.71]) with ID used motion-sensor games to exercise at weekly sessions at a day-centre. Session attendees reported that they enjoyed using the games and that they would like to use the games in future. Interviews were conducted with six day-centre staff (four female, two male; aged 27-51 [M = 40.20, SD = 11.28]), which indicated ways in which the motion-sensor games could be improved for use by adults with ID, and barriers to consider in relation to their possible future implementation. Findings indicate motion-sensor games provide a useful, enjoyable and accessible way for adults with ID to exercise. Future research could investigate implementation of motion-sensor games as a method for exercise promotion for this population on a larger scale.

  17. Fast Updating National Geo-Spatial Databases with High Resolution Imagery: China's Methodology and Experience

    NASA Astrophysics Data System (ADS)

    Chen, J.; Wang, D.; Zhao, R. L.; Zhang, H.; Liao, A.; Jiu, J.

    2014-04-01

    Geospatial databases are an irreplaceable national treasure of immense importance. Their up-to-dateness, i.e. their consistency with respect to the real world, plays a critical role in their value and applications. The continuous updating of map databases at 1:50,000 scale is a massive and difficult task for larger countries covering several million square kilometers. This paper presents the research and technological development supporting national map updating at 1:50,000 scale in China, including the development of updating models and methods, production tools and systems for large-scale and rapid updating, as well as the design and implementation of the continuous updating workflow. The use of many data sources, and their integration into a high-accuracy, quality-checked product, was required. This in turn required up-to-date techniques of image matching, semantic integration, generalization, database management and conflict resolution. Specific software tools and packages were designed and developed to support large-scale updating production with high-resolution imagery and large-scale data generalization, such as map generalization, GIS-supported change interpretation from imagery, DEM interpolation, image-matching-based orthophoto generation, and data control at different levels. A national 1:50,000 database updating strategy and its production workflow were designed, including a full-coverage updating pattern characterized by all-element topographic data modeling, change detection in all related areas, and whole-process data quality control, a series of technical production specifications, and a network of updating production units in different geographic places across the country.

  18. Transition from geostrophic turbulence to inertia-gravity waves in the atmospheric energy spectrum.

    PubMed

    Callies, Jörn; Ferrari, Raffaele; Bühler, Oliver

    2014-12-02

    Midlatitude fluctuations of the atmospheric winds on scales of thousands of kilometers, the most energetic of such fluctuations, are strongly constrained by the Earth's rotation and the atmosphere's stratification. As a result of these constraints, the flow is quasi-2D and energy is trapped at large scales—nonlinear turbulent interactions transfer energy to larger scales, but not to smaller scales. Aircraft observations of wind and temperature near the tropopause indicate that fluctuations at horizontal scales smaller than about 500 km are more energetic than expected from these quasi-2D dynamics. We present an analysis of the observations that indicates that these smaller-scale motions are due to approximately linear inertia-gravity waves, contrary to recent claims that these scales are strongly turbulent. Specifically, the aircraft velocity and temperature measurements are separated into two components: one due to the quasi-2D dynamics and one due to linear inertia-gravity waves. Quasi-2D dynamics dominate at scales larger than 500 km; inertia-gravity waves dominate at scales smaller than 500 km.

  19. Transition from geostrophic turbulence to inertia–gravity waves in the atmospheric energy spectrum

    PubMed Central

    Callies, Jörn; Ferrari, Raffaele; Bühler, Oliver

    2014-01-01

    Midlatitude fluctuations of the atmospheric winds on scales of thousands of kilometers, the most energetic of such fluctuations, are strongly constrained by the Earth’s rotation and the atmosphere’s stratification. As a result of these constraints, the flow is quasi-2D and energy is trapped at large scales—nonlinear turbulent interactions transfer energy to larger scales, but not to smaller scales. Aircraft observations of wind and temperature near the tropopause indicate that fluctuations at horizontal scales smaller than about 500 km are more energetic than expected from these quasi-2D dynamics. We present an analysis of the observations that indicates that these smaller-scale motions are due to approximately linear inertia–gravity waves, contrary to recent claims that these scales are strongly turbulent. Specifically, the aircraft velocity and temperature measurements are separated into two components: one due to the quasi-2D dynamics and one due to linear inertia–gravity waves. Quasi-2D dynamics dominate at scales larger than 500 km; inertia–gravity waves dominate at scales smaller than 500 km. PMID:25404349

  20. Current challenges in quantifying preferential flow through the vadose zone

    NASA Astrophysics Data System (ADS)

    Koestel, John; Larsbo, Mats; Jarvis, Nick

    2017-04-01

    In this presentation, we give an overview of current challenges in quantifying preferential flow through the vadose zone. A review of the literature suggests that current generation models do not fully reflect the present state of process understanding and empirical knowledge of preferential flow. We believe that the development of improved models will be stimulated by the increasingly widespread application of novel imaging technologies as well as future advances in computational power and numerical techniques. One of the main challenges in this respect is to bridge the large gap between the scales at which preferential flow occurs (pore to Darcy scales) and the scale of interest for management (fields, catchments, regions). Studies at the pore scale are being supported by the development of 3-D non-invasive imaging and numerical simulation techniques. These studies are leading to a better understanding of how macropore network topology and initial/boundary conditions control key state variables like matric potential and thus the strength of preferential flow. Extrapolation of this knowledge to larger scales would require support from theoretical frameworks such as key concepts from percolation and network theory, since we lack measurement technologies to quantify macropore networks at these large scales. Linked hydro-geophysical measurement techniques that produce highly spatially and temporally resolved data enable investigation of the larger-scale heterogeneities that can generate preferential flow patterns at pedon, hillslope and field scales. At larger regional and global scales, improved methods of data-mining and analyses of large datasets (machine learning) may help in parameterizing models as well as lead to new insights into the relationships between soil susceptibility to preferential flow and site attributes (climate, land uses, soil types).

  1. Concurrent heterogeneous neural model simulation on real-time neuromimetic hardware.

    PubMed

    Rast, Alexander; Galluppi, Francesco; Davies, Sergio; Plana, Luis; Patterson, Cameron; Sharp, Thomas; Lester, David; Furber, Steve

    2011-11-01

    Dedicated hardware is becoming increasingly essential to simulate emerging very-large-scale neural models. Equally, however, it needs to be able to support multiple models of the neural dynamics, possibly operating simultaneously within the same system. This may be necessary either to simulate large models with heterogeneous neural types, or to simplify simulation and analysis of detailed, complex models in a large simulation by isolating the new model to a small subpopulation of a larger overall network. The SpiNNaker neuromimetic chip is a dedicated neural processor able to support such heterogeneous simulations. Implementing these models on-chip uses an integrated library-based tool chain incorporating the emerging PyNN interface that allows a modeller to input a high-level description and use an automated process to generate an on-chip simulation. Simulations using both LIF and Izhikevich models demonstrate the ability of the SpiNNaker system to generate and simulate heterogeneous networks on-chip, while illustrating, through the network-scale effects of wavefront synchronisation and burst gating, methods that can provide effective behavioural abstractions for large-scale hardware modelling. SpiNNaker's asynchronous virtual architecture permits greater scope for model exploration, with scalable levels of functional and temporal abstraction, than conventional (or neuromorphic) computing platforms. The complete system illustrates a potential path to understanding the neural model of computation, by building (and breaking) neural models at various scales, connecting the blocks, then comparing them against the biology: computational cognitive neuroscience. Copyright © 2011 Elsevier Ltd. All rights reserved.
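
    A heterogeneous network description in the PyNN interface mentioned above might look like the sketch below; it assumes a PyNN 0.9-style API with a generic simulator backend (the backend import, population sizes and parameters are illustrative, and a SpiNNaker deployment would import its own backend module instead):

```python
# Minimal sketch of a heterogeneous LIF + Izhikevich network in PyNN
# (assumed PyNN 0.9-style API; backend choice is illustrative only).
import pyNN.nest as sim

sim.setup(timestep=1.0)   # ms

# Two neuron populations with different dynamical models in one network.
lif = sim.Population(80, sim.IF_curr_exp(tau_m=20.0), label="LIF pool")
izh = sim.Population(20, sim.Izhikevich(a=0.02, b=0.2, c=-65.0, d=8.0),
                     label="Izhikevich pool")
noise = sim.Population(80, sim.SpikeSourcePoisson(rate=50.0), label="input")

# Random projections: input -> LIF -> Izhikevich.
sim.Projection(noise, lif, sim.FixedProbabilityConnector(0.2),
               synapse_type=sim.StaticSynapse(weight=0.5, delay=1.0))
sim.Projection(lif, izh, sim.FixedProbabilityConnector(0.1),
               synapse_type=sim.StaticSynapse(weight=0.5, delay=1.0))

lif.record("spikes")
izh.record("spikes")
sim.run(500.0)            # ms

print(izh.get_data().segments[0].spiketrains[:3])
sim.end()
```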

  2. Alignment and political will: upscaling an Australian respectful relationships program.

    PubMed

    Joyce, Andrew; Green, Celia; Kearney, Sarah; Leung, Loksee; Ollis, Debbie

    2018-05-29

    Many small scale efficacious programs and interventions need to be 'scaled-up' in order to reach a larger population. Although it has been argued that interventions deemed suitable for upscaling need to have demonstrated effectiveness, be able to be implemented cost-effectively and be accepted by intended recipients, these factors alone are insufficient in explaining which programs are adopted more broadly. Upscaling research often identifies political will as a key factor in explaining whether programs are supported and up-scaled, but this research lacks any depth into how political will is formed and has not applied policy theories to understanding the upscaling process. This article uses a political science lens to examine the key factors in the upscaling process of a Respectful Relationships in Schools Program. Focus groups and interviews were conducted with project staff, managers and community organizations involved in the program. The results reveal how a key focusing event related to a highly profiled personal tragedy propelled family violence into the national spotlight. At the same time, the organization leading the respectful relationships program leveraged their networks to position the program within the education department which enabled the government to quickly respond to the issue. The study highlights that political will is not a stand-alone factor as depicted by up-scaling models, but rather is the end point of a complex process that involves many elements including the establishment of networks and aligned programs that can capitalize when opportunities arise.

  3. Why is a landscape perspective important in studies of primates?

    PubMed

    Arroyo-Rodríguez, Víctor; Fahrig, Lenore

    2014-10-01

    With accelerated deforestation and fragmentation throughout the tropics, assessing the impact that landscape spatial changes may have on biodiversity is paramount, as this information is required to design and implement effective management and conservation plans. Primates are expected to be particularly dependent on the landscape context; yet our understanding of this topic is limited, as the majority of primate studies are at the local scale, meaning that landscape-scale inferences are not possible. To encourage primatologists to assess the impact of landscape changes on primates, and to help future studies on the topic, we describe the meaning of a "landscape perspective" and evaluate important assumptions of using such a methodological approach. We also summarize a number of important, but unanswered, questions that can be addressed using a landscape-scale study design. For example, it is still unclear whether habitat loss has consistently larger negative effects on primates than habitat fragmentation per se. Furthermore, interaction effects between habitat area and other landscape effects (e.g., fragmentation) are unknown for primates. We also do not know if primates are affected by synergistic interactions among factors at the landscape scale (e.g., habitat loss and diseases, habitat loss and climate change, hunting, and land-use change), or whether landscape complexity (or landscape heterogeneity) is important for primate conservation. Testing for patterns in the responses of primates to landscape change will facilitate the development of new guidelines and principles for improving primate conservation. © 2014 Wiley Periodicals, Inc.

  4. 78 FR 72755 - Version 5 Critical Infrastructure Protection Reliability Standards

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-03

    ... (e.g., thumb drives and laptop computers) that fall outside of the BES Cyber Asset definition... implementation study is part of a larger program that includes the development of guidance, outreach to industry... the goals of the implementation study include: (1) Improving industry's understanding of the technical...

  5. Enabling global collaborations through policy engagement and CMS applications

    NASA Astrophysics Data System (ADS)

    Escobar, V. M.; Sepulveda Carlo, E.; Delgado Arias, S.

    2015-12-01

    Different spatial scales prompt different discussions among carbon data stakeholders. NASA's Carbon Monitoring System (CMS) initiative has enabled collaboration opportunities with stakeholders whose data needs and requirements are unique to the spatial scope of their work: from the county to the international scale. At the very local level, the Sonoma County Agricultural Preservation and Open Space District leverages CMS high-resolution biomass estimates to develop a Monitoring, Reporting, and Verification (MRV) system in support of the District's 10-year land stewardship plan and California's Global Warming Solutions Act (AB32). On the eastern coast, at the state level, the Maryland Department of Natural Resources utilizes the same high-resolution biomass estimates on a larger scale to better strategize in achieving the goal of 40% canopy cover statewide by 2020. At a regional scale that encompasses the three states of Maryland, Delaware, and Pennsylvania, LiDAR data collection over the Chesapeake Bay watershed dominates the stakeholder discussions. By collaborating with the U.S. Geological Survey's 3-D Elevation Program (3DEP), high-resolution LiDAR data will fill critical data gaps to help implement watershed protection strategies such as increasing riparian forest buffers to reduce runoff. Outside of the U.S., the World Resources Institute seeks to harness CMS reforestation products and technical expertise in addressing land restoration priorities specific to each Latin American country. CMS applications efforts expand beyond the forest carbon examples discussed above to include carbon markets, ocean acidification, national greenhouse gas inventories, and wetlands. The broad array of case studies and lessons learned through CMS Applications in scaling carbon science for policy development at different spatial scales is providing unique opportunities that leverage science through policy needs.

  6. A review of surface energy balance models for estimating actual evapotranspiration with remote sensing at high spatiotemporal resolution over large extents

    USGS Publications Warehouse

    McShane, Ryan R.; Driscoll, Katelyn P.; Sando, Roy

    2017-09-27

    Many approaches have been developed for measuring or estimating actual evapotranspiration (ETa), and research over many years has led to the development of remote sensing methods that are reliably reproducible and effective in estimating ETa. Several remote sensing methods can be used to estimate ETa at the high spatial resolution of agricultural fields and the large extent of river basins. More complex remote sensing methods apply an analytical approach to ETa estimation using physically based models of varied complexity that require a combination of ground-based and remote sensing data, and are grounded in the theory behind the surface energy balance model. This report, funded through cooperation with the International Joint Commission, provides an overview of selected remote sensing methods used for estimating water consumed through ETa and focuses on Mapping Evapotranspiration at High Resolution with Internalized Calibration (METRIC) and Operational Simplified Surface Energy Balance (SSEBop), two energy balance models for estimating ETa that are currently applied successfully in the United States. The METRIC model can produce maps of ETa at high spatial resolution (30 meters using Landsat data) for specific areas smaller than several hundred square kilometers in extent, an improvement in practice over methods used more generally at larger scales. Many studies validating METRIC estimates of ETa against measurements from lysimeters have shown model accuracies on daily to seasonal time scales ranging from 85 to 95 percent. The METRIC model is accurate, but the greater complexity of METRIC results in greater data requirements, and the internalized calibration of METRIC leads to greater skill required for implementation. In contrast, SSEBop is a simpler model, having reduced data requirements and greater ease of implementation without a substantial loss of accuracy in estimating ETa. The SSEBop model has been used to produce maps of ETa over very large extents (the conterminous United States) using lower spatial resolution (1 kilometer) Moderate Resolution Imaging Spectroradiometer (MODIS) data. Model accuracies ranging from 80 to 95 percent on daily to annual time scales have been shown in numerous studies that validated ETa estimates from SSEBop against eddy covariance measurements. The METRIC and SSEBop models can incorporate low and high spatial resolution data from MODIS and Landsat, but the high spatiotemporal resolution of ETa estimates using Landsat data over large extents takes immense computing power. Cloud computing is providing an opportunity for processing an increasing amount of geospatial “big data” in a decreasing period of time. For example, Google Earth EngineTM has been used to implement METRIC with automated calibration for regional-scale estimates of ETa using Landsat data. The U.S. Geological Survey also is using Google Earth EngineTM to implement SSEBop for estimating ETa in the United States at a continental scale using Landsat data.
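
    The SSEBop calculation lends itself to a compact illustration. The sketch below assumes the commonly published form of the model, in which the ET fraction is interpolated between predefined hot and cold boundary temperatures and scaled by reference ET; the function name, sample values, and clipping range are illustrative and are not taken from this report.

```python
import numpy as np

def ssebop_eta(ts, t_cold, dt, eto, k=1.0):
    """Approximate SSEBop-style ETa from land surface temperature (temperatures in K).

    ts     : observed land surface temperature (e.g., from Landsat or MODIS)
    t_cold : cold-boundary temperature (well-watered, fully transpiring pixel)
    dt     : predefined hot/cold temperature difference (t_hot = t_cold + dt)
    eto    : reference evapotranspiration (mm/day)
    k      : scaling coefficient relating reference ET to maximum ETa
    """
    et_fraction = (t_cold + dt - ts) / dt          # 1 at the cold boundary, 0 at the hot boundary
    et_fraction = np.clip(et_fraction, 0.0, 1.05)  # keep within an assumed plausible range
    return k * et_fraction * eto

# toy example: a pixel midway between the cold and hot boundaries
print(ssebop_eta(ts=308.0, t_cold=300.0, dt=16.0, eto=6.5))  # ~3.25 mm/day
```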

  7. Significance of connectivity and post-wildfire runoff

    USDA-ARS?s Scientific Manuscript database

    Amplified hillslope soil loss from rain storms following wildfire results from the evolution of runoff and erosion processes across spatial scales. At point to small-plot scales, soil is detached and transported a short distance by rainsplash and sheetflow. Soil transport by water over larger scales...

  8. COMPARING AND LINKING PLUMES ACROSS MODELING APPROACHES

    EPA Science Inventory

    River plumes carry many pollutants, including microorganisms, into lakes and the coastal ocean. The physical scales of many stream and river plumes often lie between the scales for mixing zone plume models, such as the EPA Visual Plumes model, and larger-sized grid scales for re...

  9. Computing and Visualizing the Complex Dynamics of Earthquake Fault Systems: Towards Ensemble Earthquake Forecasting

    NASA Astrophysics Data System (ADS)

    Rundle, J.; Rundle, P.; Donnellan, A.; Li, P.

    2003-12-01

    We consider the problem of the complex dynamics of earthquake fault systems, and whether numerical simulations can be used to define an ensemble forecasting technology similar to that used in weather and climate research. To effectively carry out such a program, we need 1) a topologically realistic model to simulate the fault system; 2) data sets to constrain the model parameters through a systematic program of data assimilation; 3) a computational technology making use of modern paradigms of high performance and parallel computing systems; and 4) software to visualize and analyze the results. In particular, we focus attention on a new version of our code Virtual California (version 2001) in which we model all of the major strike slip faults extending throughout California, from the Mexico-California border to the Mendocino Triple Junction. We use the historic data set of earthquakes larger than magnitude M > 6 to define the frictional properties of all 654 fault segments (degrees of freedom) in the model. Previous versions of Virtual California had used only 215 fault segments to model the strike slip faults in southern California. To compute the dynamics and the associated surface deformation, we use message passing as implemented in the MPICH standard distribution on a small Beowulf cluster consisting of 10 CPUs. We are also planning to run the code on significantly larger machines so that we can begin to examine much finer spatial scales of resolution, and to assess scaling properties of the code. We present results of simulations both as static images and as MPEG movies, so that the dynamical aspects of the computation can be assessed by the viewer. We also compute a variety of statistics from the simulations, including magnitude-frequency relations, and compare these with data from real fault systems.
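
    As a rough illustration of the message-passing layout described here (not the Virtual California code itself), the following sketch distributes fault segments across MPI ranks with mpi4py and gathers the per-segment updates on a root process; the round-robin decomposition and the placeholder physics are assumptions.

```python
# Conceptual sketch: distribute fault segments across MPI ranks and gather results.
# Run with, e.g.:  mpiexec -n 10 python this_script.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

N_SEGMENTS = 654                              # fault segments (degrees of freedom)
my_segments = range(rank, N_SEGMENTS, size)   # simple round-robin decomposition

def update_segment(i, t):
    """Placeholder for the per-segment stress/slip update at time step t."""
    return np.sin(0.01 * t + i)               # stand-in for the real friction/stress physics

local = {i: update_segment(i, t=0) for i in my_segments}
all_results = comm.gather(local, root=0)      # root assembles the full state each step

if rank == 0:
    state = {k: v for d in all_results for k, v in d.items()}
    print(f"gathered {len(state)} segment updates")
```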

  10. Nesting ecology of Spectacled Eiders Somateria fischeri on the Indigirka River Delta, Russia

    USGS Publications Warehouse

    Pearce, John M.; Esler, Daniel N.; Degtyarev, Andrei G.

    1998-01-01

    In 1994 and 1995 we investigated breeding biology and nest site habitat of Spectacled Eiders on two study areas within the coastal fringe of the Indigirka River Delta, Russia (71°20' N, 150°20' E). Spectacled Eiders were first observed on 6 June in both years and nesting commenced by mid-June. Average clutch size declined with later nest initiation dates by 0.10 eggs per day; clutches were larger in 1994 than 1995 and were slightly larger on a coastal island study area compared to an interior area. Nesting success varied substantially between years, with estimates of 1.6% in 1994 and 27.6% in 1995. Total egg loss, through avian or mammalian predation, occurred more frequently than partial egg loss. Partial egg loss was detected in 16 nests and appeared unrelated to nest initiation date or clutch size. We found no difference among survival rates of nests visited weekly, biweekly, and those at which the hen was never flushed, suggesting that researcher presence did not adversely affect nesting success. A comparison of nine habitat variables within each study area revealed little difference between nest sites and a comparable number of randomly located sites, leading us to conclude that Spectacled Eiders nest randomly with respect to most small scale habitat features. We propose that large scale landscape features are more important indicators of nesting habitat as they may afford greater protection from land-based predators, such as the Arctic Fox. Demographic data collected during this study, along with recent conservation measures implemented by the Republic of Sakha (Yakutia), lead us to conclude that there are few threats to the Indigirka River Delta Spectacled Eider population. Presently, the Indigirka River Delta contains the largest concentration of nesting Spectacled Eiders and deserves continued monitoring and conservation.

  11. Fractal assembly of micrometre-scale DNA origami arrays with arbitrary patterns.

    PubMed

    Tikhomirov, Grigory; Petersen, Philip; Qian, Lulu

    2017-12-06

    Self-assembled DNA nanostructures enable nanometre-precise patterning that can be used to create programmable molecular machines and arrays of functional materials. DNA origami is particularly versatile in this context because each DNA strand in the origami nanostructure occupies a unique position and can serve as a uniquely addressable pixel. However, the scale of such structures has been limited to about 0.05 square micrometres, hindering applications that demand a larger layout and integration with more conventional patterning methods. Hierarchical multistage assembly of simple sets of tiles can in principle overcome this limitation, but so far has not been sufficiently robust to enable successful implementation of larger structures using DNA origami tiles. Here we show that by using simple local assembly rules that are modified and applied recursively throughout a hierarchical, multistage assembly process, a small and constant set of unique DNA strands can be used to create DNA origami arrays of increasing size and with arbitrary patterns. We illustrate this method, which we term 'fractal assembly', by producing DNA origami arrays with sizes of up to 0.5 square micrometres and with up to 8,704 pixels, allowing us to render images such as the Mona Lisa and a rooster. We find that self-assembly of the tiles into arrays is unaffected by changes in surface patterns on the tiles, and that the yield of the fractal assembly process corresponds to about 0.95^(m-1) for arrays containing m tiles. When used in conjunction with a software tool that we developed that converts an arbitrary pattern into DNA sequences and experimental protocols, our assembly method is readily accessible and will facilitate the construction of sophisticated materials and devices with sizes similar to that of a bacterium using DNA nanostructures.

  12. Fractal assembly of micrometre-scale DNA origami arrays with arbitrary patterns

    NASA Astrophysics Data System (ADS)

    Tikhomirov, Grigory; Petersen, Philip; Qian, Lulu

    2017-12-01

    Self-assembled DNA nanostructures enable nanometre-precise patterning that can be used to create programmable molecular machines and arrays of functional materials. DNA origami is particularly versatile in this context because each DNA strand in the origami nanostructure occupies a unique position and can serve as a uniquely addressable pixel. However, the scale of such structures has been limited to about 0.05 square micrometres, hindering applications that demand a larger layout and integration with more conventional patterning methods. Hierarchical multistage assembly of simple sets of tiles can in principle overcome this limitation, but so far has not been sufficiently robust to enable successful implementation of larger structures using DNA origami tiles. Here we show that by using simple local assembly rules that are modified and applied recursively throughout a hierarchical, multistage assembly process, a small and constant set of unique DNA strands can be used to create DNA origami arrays of increasing size and with arbitrary patterns. We illustrate this method, which we term ‘fractal assembly’, by producing DNA origami arrays with sizes of up to 0.5 square micrometres and with up to 8,704 pixels, allowing us to render images such as the Mona Lisa and a rooster. We find that self-assembly of the tiles into arrays is unaffected by changes in surface patterns on the tiles, and that the yield of the fractal assembly process corresponds to about 0.95^(m-1) for arrays containing m tiles. When used in conjunction with a software tool that we developed that converts an arbitrary pattern into DNA sequences and experimental protocols, our assembly method is readily accessible and will facilitate the construction of sophisticated materials and devices with sizes similar to that of a bacterium using DNA nanostructures.

  13. Solar forcing of the stream flow of a continental scale South American river.

    PubMed

    Mauas, Pablo J D; Flamenco, Eduardo; Buccino, Andrea P

    2008-10-17

    Solar forcing on climate has been reported in several studies although the evidence so far remains inconclusive. Here, we analyze the stream flow of one of the largest rivers in the world, the Paraná in southeastern South America. For the last century, we find a strong correlation with the sunspot number on multidecadal time scales, with larger solar activity corresponding to larger stream flow. The correlation coefficient is r=0.78, significant to a 99% level. On shorter time scales we find a strong correlation with El Niño. These results are a step toward flood prediction, which might have great social and economic impacts.
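
    The kind of multidecadal correlation reported here can be reproduced in outline on synthetic data. The sketch below smooths two toy series with an 11-year running mean before computing the Pearson correlation; the window length and the synthetic series are assumptions for demonstration only and do not reproduce the paper's result.

```python
# Illustrative sketch (synthetic data): correlate multidecadal-smoothed stream flow
# with the sunspot number.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1900, 2001)
sunspots = 80 + 60 * np.sin(2 * np.pi * (years - 1900) / 11) + rng.normal(0, 15, years.size)
flow = 17000 + 30 * sunspots + rng.normal(0, 2000, years.size)   # toy stream flow (m^3/s)

def running_mean(x, w=11):
    """Simple boxcar smoother used here to isolate multidecadal variability."""
    return np.convolve(x, np.ones(w) / w, mode="valid")

s, f = running_mean(sunspots), running_mean(flow)
r = np.corrcoef(s, f)[0, 1]
print(f"Pearson r on smoothed series: {r:.2f}")
```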

  14. Pollutant Transport and Fate: Relations Between Flow-paths and Downstream Impacts of Human Activities

    NASA Astrophysics Data System (ADS)

    Thorslund, J.; Jarsjo, J.; Destouni, G.

    2017-12-01

    The quality of freshwater resources is increasingly impacted by human activities. Humans also extensively change the structure of landscapes, which may alter natural hydrological processes. To manage and maintain freshwater of good water quality, it is critical to understand how pollutants are released into, transported and transformed within the hydrological system. Some key scientific questions include: What are net downstream impacts of pollutants across different hydroclimatic and human disturbance conditions, and on different scales? What are the functions within and between components of the landscape, such as wetlands, in mitigating pollutant load delivery to downstream recipients? We explore these questions by synthesizing results from several relevant case study examples of intensely human-impacted hydrological systems. These case study sites have been specifically evaluated in terms of net impact of human activities on pollutant input to the aquatic system, as well as flow-path distributions through wetlands as a potential ecosystem service of pollutant mitigation. Results show that although individual wetlands have high retention capacity, efficient net retention effects were not always achieved at a larger landscape scale. Evidence suggests that the function of wetlands as mitigation solutions to pollutant loads is largely controlled by large-scale parallel and circular flow-paths, through which multiple wetlands are interconnected in the landscape. To achieve net mitigation effects at large scale, a large fraction of the polluted large-scale flows must be transported through multiple connected wetlands. Although such large-scale flow interactions are critical for assessing water pollution spreading and fate through the landscape, our synthesis shows a frequent lack of knowledge at such scales. We suggest ways forward for addressing the mismatch between the large scales at which key pollutant pressures and water quality changes take place and the relatively small scale at which most studies and implementations are currently made. These suggestions can help bridge critical knowledge gaps, as needed for improving water quality predictions and mitigation solutions under human and environmental changes.

  15. Exploring the Alignment of the Intended and Implemented Curriculum through Teachers' Interpretation: A Case Study of A-Level Biology Practical Work

    ERIC Educational Resources Information Center

    Phaeton, Mukaro Joe; Stears, Michèle

    2017-01-01

    The research reported on here is part of a larger study exploring the alignment of the intended, implemented and attained curriculum with regard to practical work in the Zimbabwean A-level Biology curriculum. In this paper we focus on the alignment between the intended and implemented A-Level Biology curriculum through the lens of teachers'…

  16. The influence of differential response on decision-making in child protective service agencies.

    PubMed

    Janczewski, Colleen E

    2015-01-01

    Differential response (DR) profoundly changes the decision pathways of public child welfare systems, yet little is known about how DR shapes the experiences of children whose reports receive an investigation rather than an alternate response. Using data from the National Child Abuse and Neglect Data System (NCANDS), this study examined the relationship between DR implementation and decision outcomes in neglect cases, as measured by investigation, substantiation, and removal rates in 297 U.S. counties. Multivariate regression models included county-level measures of child poverty and proportions of African American children. Path analyses were also conducted to identify mediating effects of prior decision points and moderating effects of DR on poverty and race's influence on decision outcomes. Results indicate that compared to non-DR counties, those implementing DR have significantly lower investigation and substantiation rates within county populations but higher substantiation rates among investigated cases. Regression models showed significant reductions in removal rates associated with DR implementation, but these effects became insignificant in path models that accounted for mediation effects of previous decision points. Findings also suggest that DR implementation may reduce the positive association between child poverty rates and investigation rates, but additional studies with larger samples are needed to confirm this moderation effect. Two methods of calculating decision outcomes, population- and decision-based enumeration, were used, and policy and research implications of each are discussed. This study demonstrates that despite their inherent complexity, large administrative datasets such as NCANDS can be used to assess the impact of wide-scale system change across jurisdictions. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. A Comprehensive Onboarding and Orientation Plan for Neurocritical Care Advanced Practice Providers.

    PubMed

    Langley, Tamra M; Dority, Jeremy; Fraser, Justin F; Hatton, Kevin W

    2018-06-01

    As the role of advanced practice providers (APPs) expands to include increasingly complex patient care within the intensive care unit, the educational needs of these providers must also be expanded. An onboarding process was designed for APPs in the neurocritical care service line. Onboarding for new APPs revolved around 5 specific areas: candidate selection, proctor assignment, a 3-phased orientation process, remediation, and mentorship. To ensure effective training for APPs, using the most time-conscious approach, the backbone of the process is a structured curriculum. This was developed and integrated within the standard orientation and onboarding process. The curriculum design incorporated measurable learning goals, objective assessments of phased goal achievements, and opportunities for remediation. The neurocritical care service implemented an onboarding process in 2014. Four APPs (3 nurse practitioners and 1 physician assistant) were employed by the department before the implementation of the orientation program. The length of employment ranged from 1 to 4 years. Lack of clinical knowledge and/or sufficient training was cited as a reason for departure from the position for 2 of the 4 APPs, either by self-report or through peer evaluation. Since implementation of this program, 12 APPs have completed the program, of which 10 remain within the division, creating an 83% retention rate. The onboarding process, including a 3-phased, structured orientation plan for neurocritical care, has increased APP retention since its implementation. The educational model, along with proctoring and mentorship, has improved clinical knowledge and increased nurse practitioner retention. A larger-scale study would help to support the validity of this onboarding process.

  18. Natural Flood Management Plus: Scaling Up Nature Based Solutions to Larger Catchments

    NASA Astrophysics Data System (ADS)

    Quinn, Paul; Nicholson, Alex; Adams, Russ

    2017-04-01

    It has been established that networks of NFM features, such as ponds and wetlands, can have a significant effect on flood flow and pollution at local scales (less than 10 km2). However, it is much less certain that NFM and NBS can have an impact at larger scales and protect larger cities. This is especially true for recent storms in the UK such as Storm Desmond, which caused devastation across the north of England. It is possible, using observed rainfall and runoff data, to estimate the amounts of storage that would be required to impact extreme flood events. Here we show a toolkit that estimates the amount of storage that can be accrued through a dense network of NFM features. The analysis suggests that the use of many hundreds of small NFM features can have a significant impact on peak flow; however, we still require more storage in order to address extreme events and to satisfy flood engineers who may propose more traditional flood defences. We will also show case studies of larger NFM features positioned on flood plains that can store significantly more flood flow. Example designs of NFM Plus features will be shown. The storage aggregation tool will then show the degree to which storing large amounts of flood flow in NFM Plus features can contribute to flood management and estimate the likely costs. Smaller and larger NFM features used together can produce significant flood storage at a much lower cost than traditional schemes.
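
    A storage-aggregation calculation of the kind described can be sketched in a few lines. The feature types, counts, volumes, and costs below are hypothetical placeholders, not values from the study; the point is only to show how aggregated NFM storage compares with the runoff volume of a design storm.

```python
# Hypothetical storage-aggregation sketch: sum the storage offered by many NFM
# features and compare it with the quick-runoff volume of a design storm.
# All counts, volumes and costs are illustrative assumptions (cost units arbitrary).

features = {
    "leaky barrier":            {"count": 300, "volume_m3": 150,    "cost_per_unit": 2_000},
    "attenuation pond":         {"count": 40,  "volume_m3": 2_500,  "cost_per_unit": 25_000},
    "floodplain NFM Plus store": {"count": 5,   "volume_m3": 60_000, "cost_per_unit": 400_000},
}

catchment_area_km2 = 250
effective_rain_mm = 40                      # rainfall converted to quick runoff
storm_volume_m3 = catchment_area_km2 * 1e6 * effective_rain_mm / 1000

total_storage = sum(f["count"] * f["volume_m3"] for f in features.values())
total_cost = sum(f["count"] * f["cost_per_unit"] for f in features.values())

print(f"storm runoff volume : {storm_volume_m3 / 1e6:.1f} million m3")
print(f"aggregated storage  : {total_storage / 1e6:.2f} million m3 "
      f"({100 * total_storage / storm_volume_m3:.1f}% of the storm) at cost {total_cost:,.0f}")
```

    Under these assumed numbers the network stores only a few percent of an extreme storm, which is consistent with the abstract's point that dense networks of small features help peak flow but extreme events still demand additional, larger storage.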

  19. Chronic care coordination by integrating care through a team-based, population-driven approach: a case study.

    PubMed

    van Eeghen, Constance O; Littenberg, Benjamin; Kessler, Rodger

    2018-05-23

    Patients with chronic conditions frequently experience behavioral comorbidities to which primary care cannot easily respond. This study observed a Vermont family medicine practice with integrated medical and behavioral health services that use a structured approach to implement a chronic care management system with Lean. The practice chose to pilot a population-based approach to improve outcomes for patients with poorly controlled Type 2 diabetes using a stepped-care model with an interprofessional team including a community health nurse. This case study observed the team's use of Lean, with which it designed and piloted a clinical algorithm composed of patient self-assessment, endorsement of behavioral goals, shared documentation of goals and plans, and follow-up. The team redesigned workflows and measured reach (patients who engaged to the end of the pilot), outcomes (HbA1c results), and process (days between HbA1c tests). The researchers evaluated practice member self-reports about the use of Lean and facilitators and barriers to move from pilot to larger scale applications. Of 20 eligible patients recruited over 3 months, 10 agreed to participate and 9 engaged fully (45%); 106 patients were controls. Relative to controls, outcomes and process measures improved but lacked significance. Practice members identified barriers that prevented implementation of all changes needed but were in agreement that the pilot produced useful outcomes. A systematized, population-based, chronic care management service is feasible in a busy primary care practice. To test at scale, practice leadership will need to allocate staffing, invest in shared documentation, and standardize workflows to streamline office practice responsibilities.

  20. Root water uptake and lateral interactions among root systems in a temperate forest

    NASA Astrophysics Data System (ADS)

    Agee, E.; He, L.; Bisht, G.; Gough, C. M.; Couvreur, V.; Matheny, A. M.; Bohrer, G.; Ivanov, V. Y.

    2016-12-01

    A growing body of research has highlighted the importance of root architecture and hydraulic properties to the maintenance of the transpiration stream under water limitation and drought. Detailed studies of single plant systems have shown the ability of root systems to adjust zones of uptake due to the redistribution of local water potential gradients, thereby delaying the onset of stress under drying conditions. An open question is how lateral interactions and competition among neighboring plants impact individual and community resilience to water stress. While computational complexity has previously hindered the implementation of microscopic root system structure and function in larger scale hydrological models, newer hybrid approaches allow for the resolution of these properties at the plot scale. Using a modified version of the PFLOTRAN model, which represents the 3-D physics of variably saturated soil, we model root water uptake in a one-hectare temperate forest plot under natural and synthetic forcings. Two characteristic hydraulic architectures, tap roots and laterally sprawling roots, are implemented in an ensemble of simulations. Variations of root architecture, their hydraulic properties, and degree of system interactions produce variable local response to water limitation and provide insights on individual and community response to changing meteorological conditions. Results demonstrate the ability of interacting systems to shift areas of active uptake based on local gradients, allowing individuals to meet water demands despite competition from their peers. These results further illustrate how inter- and intra-species variations in root properties may influence not only individual response to water stress, but also help quantify the margins of resilience for forest ecosystems under changing climate.

  1. Using qualitative and quantitative methods to evaluate small-scale disease management pilot programs.

    PubMed

    Esposito, Dominick; Taylor, Erin Fries; Gold, Marsha

    2009-02-01

    Interest in disease management programs continues to grow as managed care plans, the federal and state governments, and other organizations consider such efforts as a means to improve health care quality and reduce costs. These efforts vary in size, scope, and target population. While large-scale programs provide the means to measure impacts, evaluation of smaller interventions remains valuable as they often represent the early planning stages of larger initiatives. This paper describes a multi-method approach for evaluating small interventions that sought to improve the quality of care for Medicaid beneficiaries with multiple chronic conditions. Our approach relied on quantitative and qualitative methods to develop a complete understanding of each intervention. Quantitative data in the form of both process measures, such as case manager contacts, and outcome measures, such as hospital use, were reported and analyzed. Qualitative information was collected through interviews and the development of logic models to document the flow of intervention activities and how they were intended to affect outcomes. The logic models helped us to understand the underlying reasons for the success or lack thereof of each intervention. The analysis provides useful information on several fronts. First, qualitative data provided valuable information about implementation. Second, process measures helped determine whether implementation occurred as anticipated. Third, outcome measures indicated the potential for favorable results later, possibly suggesting further study. Finally, the evaluation of qualitative and quantitative data in combination helped us assess the potential promise of each intervention and identify common themes and challenges across all interventions.

  2. Coevolution of Information Sharing and Implementation of Evidence-Based Practices Among North American Tobacco Cessation Quitlines

    PubMed Central

    Saul, Jessie E.; Lemaire, Robin H.; Valente, Thomas W.; Leischow, Scott J.

    2015-01-01

    Objectives. We examined the coevolution of information sharing and implementation of evidence-based practices among US and Canadian tobacco cessation quitlines within the North American Quitline Consortium (NAQC). Methods. Web-based surveys were used to collect data from key respondents representing each of 74 participating funders of NAQC quitlines during the summer and fall of 2009, 2010, and 2011. We used stochastic actor-based models to estimate changes in information sharing and practice implementation in the NAQC network. Results. Funders were more likely to share information within their own country and with funders that contracted with the same service provider. Funders contracting with larger service providers shared less information but implemented significantly more practices. Funders connected to larger numbers of tobacco control researchers more often received information from other funders. Intensity of ties to the NAQC network administrative organization did not influence funders’ decisions to share information or implement practices. Conclusions. Our findings show the importance of monitoring the NAQC network over time. We recommend increased cross-border information sharing and sharing of information between funders contracting with different and smaller service providers. PMID:26180993

  3. Architectural frameworks: defining the structures for implementing learning health systems.

    PubMed

    Lessard, Lysanne; Michalowski, Wojtek; Fung-Kee-Fung, Michael; Jones, Lori; Grudniewicz, Agnes

    2017-06-23

    The vision of transforming health systems into learning health systems (LHSs) that rapidly and continuously transform knowledge into improved health outcomes at lower cost is generating increased interest in government agencies, health organizations, and health research communities. While existing initiatives demonstrate that different approaches can succeed in making the LHS vision a reality, they are too varied in their goals, focus, and scale to be reproduced without undue effort. Indeed, the structures necessary to effectively design and implement LHSs on a larger scale are lacking. In this paper, we propose the use of architectural frameworks to develop LHSs that adhere to a recognized vision while being adapted to their specific organizational context. Architectural frameworks are high-level descriptions of an organization as a system; they capture the structure of its main components at varied levels, the interrelationships among these components, and the principles that guide their evolution. Because these frameworks support the analysis of LHSs and allow their outcomes to be simulated, they act as pre-implementation decision-support tools that identify potential barriers and enablers of system development. They thus increase the chances of successful LHS deployment. We present an architectural framework for LHSs that incorporates five dimensions-goals, scientific, social, technical, and ethical-commonly found in the LHS literature. The proposed architectural framework is comprised of six decision layers that model these dimensions. The performance layer models goals, the scientific layer models the scientific dimension, the organizational layer models the social dimension, the data layer and information technology layer model the technical dimension, and the ethics and security layer models the ethical dimension. We describe the types of decisions that must be made within each layer and identify methods to support decision-making. In this paper, we outline a high-level architectural framework grounded in conceptual and empirical LHS literature. Applying this architectural framework can guide the development and implementation of new LHSs and the evolution of existing ones, as it allows for clear and critical understanding of the types of decisions that underlie LHS operations. Further research is required to assess and refine its generalizability and methods.

  4. Impact of an interprofessional shared decision-making and goal-setting decision aid for patients with diabetes on decisional conflict--study protocol for a randomized controlled trial.

    PubMed

    Yu, Catherine H; Ivers, Noah M; Stacey, Dawn; Rezmovitz, Jeremy; Telner, Deanna; Thorpe, Kevin; Hall, Susan; Settino, Marc; Kaplan, David M; Coons, Michael; Sodhi, Sumeet; Sale, Joanna; Straus, Sharon E

    2015-06-27

    Competing health concerns present real obstacles to people living with diabetes and other chronic diseases as well as to their primary care providers. Guideline implementation interventions rarely acknowledge this, leaving both patients and providers feeling overwhelmed by the volume of recommended actions. Interprofessional (IP) shared decision-making (SDM) with the use of decision aids may help to set treatment priorities. We developed an evidence-based SDM intervention for patients with diabetes and other conditions that was framed by the IP-SDM model and followed a user-centered approach. Our objective in the present study is to pilot an IP-SDM and goal-setting toolkit following the Knowledge-to-Action Framework to assess (1) intervention fidelity and the feasibility of conducting a larger trial and (2) impact on decisional conflict, diabetes distress, health-related quality of life and patient assessment of chronic illness care. A two-step, parallel-group, clustered randomized controlled trial (RCT) will be conducted, with the primary goal being to assess intervention fidelity and the feasibility of conducting a larger RCT. The first step is a provider-directed implementation only; the second (after a 6-month delay) involves both provider- and patient-directed implementation. Half of the clusters will be assigned to receive the IP-SDM toolkit, and the other will be assigned to be mailed a diabetes guidelines summary. Individual interviews with patients, their family members and health care providers will be conducted upon trial completion to explore toolkit use. A secondary purpose of this trial is to gather estimates of the toolkit's impact on decisional conflict. Secondary outcomes include diabetes distress, quality of life and chronic illness care, which will be assessed on the basis of patient-completed questionnaires of validated scales at baseline and at 6 and 12 months. Multilevel hierarchical regression models will be used to account for the clustered nature of the data. An individualized approach to patients with multiple chronic conditions using SDM and goal setting is a desirable strategy for achieving guideline-concordant treatment in a patient-centered fashion. Our pilot trial will provide insights regarding strategies for the routine implementation of such interventions in clinical practice, and it will offer an assessment of the impact of this approach. Clinicaltrials.gov Identifier: NCT02379078. Date of Registration: 11 February 2015.

  5. Effects of Implementing Subgrid-Scale Cloud-Radiation Interactions in a Regional Climate Model

    NASA Astrophysics Data System (ADS)

    Herwehe, J. A.; Alapaty, K.; Otte, T.; Nolte, C. G.

    2012-12-01

    Interactions between atmospheric radiation, clouds, and aerosols are the most important processes that determine the climate and its variability. In regional scale models, when used at relatively coarse spatial resolutions (e.g., larger than 1 km), convective cumulus clouds need to be parameterized as subgrid-scale clouds. Like many groups, our regional climate modeling group at the EPA uses the Weather Research & Forecasting model (WRF) as a regional climate model (RCM). One of the findings from our RCM studies is that the summertime convective systems simulated by the WRF model are highly energetic, leading to excessive surface precipitation. We also found that the WRF model does not consider the interactions between convective clouds and radiation, thereby omitting an important process that drives the climate. Thus, the subgrid-scale cloudiness associated with convective clouds (from shallow cumuli to thunderstorms) does not exist and radiation passes through the atmosphere nearly unimpeded, potentially leading to overly energetic convection. This also has implications for air quality modeling systems that are dependent upon cloud properties from the WRF model, as the failure to account for subgrid-scale cloudiness can lead to problems such as the underrepresentation of aqueous chemistry processes within clouds and the overprediction of ozone from overactive photolysis. In an effort to advance the climate science of the cloud-aerosol-radiation (CAR) interactions in RCM systems, as a first step we have focused on linking the cumulus clouds with the radiation processes. To this end, our research group has implemented into WRF's Kain-Fritsch (KF) cumulus parameterization a cloudiness formulation that is widely used in global earth system models (e.g., CESM/CAM5). Estimated grid-scale cloudiness and associated condensate are adjusted to account for the subgrid clouds and then passed to WRF's Rapid Radiative Transfer Model - Global (RRTMG) radiation schemes to affect the shortwave and longwave radiative processes. To evaluate the effects of implementing the subgrid-scale cloud-radiation interactions on WRF regional climate simulations, a three-year study period (1988-1990) was simulated over the CONUS using two-way nested domains with 108 km and 36 km horizontal grid spacing, without and with the cumulus feedbacks to radiation, and without and with some form of four dimensional data assimilation (FDDA). Initial and lateral boundary conditions (as well as data for the FDDA, when enabled) were supplied from downscaled NCEP-NCAR Reanalysis II (R2) data sets. Evaluation of the simulation results will be presented comparing regional surface precipitation and temperature statistics with North American Regional Reanalysis (NARR) data and Climate Forecast System Reanalysis (CFSR) data, respectively, as well as comparison with available surface radiation (SURFRAD) and satellite (CERES) observations. This research supports improvements in the EPA's WRF-CMAQ modeling system, leading to better predictions of present and future air quality and climate interactions in order to protect human health and the environment.
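
    The coupling step described above can be illustrated conceptually. The sketch below (not WRF or CAM5 code) augments the resolved cloud fraction and condensate with a subgrid convective contribution before they would be handed to the radiation scheme; the maximum-overlap combination, variable names, and sample profiles are assumptions.

```python
# Conceptual sketch of passing subgrid convective cloudiness to radiation:
# combine resolved (grid-scale) and convective (subgrid) cloud fraction and
# condensate per model layer before the radiative transfer call.
import numpy as np

def clouds_for_radiation(cf_grid, qc_grid, cf_conv, qc_conv):
    """Return total cloud fraction and condensate seen by the radiation scheme."""
    cf_total = np.maximum(cf_grid, cf_conv)      # assume maximum overlap within each layer
    qc_total = qc_grid + cf_conv * qc_conv       # add area-weighted in-cloud convective condensate
    return np.clip(cf_total, 0.0, 1.0), qc_total

# one column, 4 layers: resolved cloud near the top, shallow convective cloud below
cf, qc = clouds_for_radiation(
    cf_grid=np.array([0.0, 0.0, 0.1, 0.6]),
    qc_grid=np.array([0.0, 0.0, 1e-5, 8e-5]),
    cf_conv=np.array([0.3, 0.4, 0.2, 0.0]),
    qc_conv=np.array([2e-4, 3e-4, 1e-4, 0.0]),
)
print(cf, qc)   # these adjusted fields would then be passed to the radiation scheme
```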

  6. Collaboration in national forest management

    Treesearch

    Susan Charnley; Jonathan W. Long; Frank K. Lake

    2014-01-01

    National forest management efforts have generally moved toward collaborative and participatory approaches at a variety of scales. This includes, at a larger scale, greater public participation in transparent and inclusive democratic processes and, at a smaller scale, more engagement with local communities. Participatory approaches are especially important for an all-...

  7. Snake scales, partial exposure, and the Snake Detection Theory: A human event-related potentials study.

    PubMed

    Van Strien, Jan W; Isbell, Lynne A

    2017-04-07

    Studies of event-related potentials in humans have established larger early posterior negativity (EPN) in response to pictures depicting snakes than to pictures depicting other creatures. Ethological research has recently shown that macaques and wild vervet monkeys respond strongly to partially exposed snake models and scale patterns on the snake skin. Here, we examined whether snake skin patterns and partially exposed snakes elicit a larger EPN in humans. In Task 1, we employed pictures with close-ups of snake skins, lizard skins, and bird plumage. In task 2, we employed pictures of partially exposed snakes, lizards, and birds. Participants watched a random rapid serial visual presentation of these pictures. The EPN was scored as the mean activity (225-300 ms after picture onset) at occipital and parieto-occipital electrodes. Consistent with previous studies, and with the Snake Detection Theory, the EPN was significantly larger for snake skin pictures than for lizard skin and bird plumage pictures, and for lizard skin pictures than for bird plumage pictures. Likewise, the EPN was larger for partially exposed snakes than for partially exposed lizards and birds. The results suggest that the EPN snake effect is partly driven by snake skin scale patterns which are otherwise rare in nature.

  8. Snake scales, partial exposure, and the Snake Detection Theory: A human event-related potentials study

    PubMed Central

    Van Strien, Jan W.; Isbell, Lynne A.

    2017-01-01

    Studies of event-related potentials in humans have established larger early posterior negativity (EPN) in response to pictures depicting snakes than to pictures depicting other creatures. Ethological research has recently shown that macaques and wild vervet monkeys respond strongly to partially exposed snake models and scale patterns on the snake skin. Here, we examined whether snake skin patterns and partially exposed snakes elicit a larger EPN in humans. In Task 1, we employed pictures with close-ups of snake skins, lizard skins, and bird plumage. In task 2, we employed pictures of partially exposed snakes, lizards, and birds. Participants watched a random rapid serial visual presentation of these pictures. The EPN was scored as the mean activity (225–300 ms after picture onset) at occipital and parieto-occipital electrodes. Consistent with previous studies, and with the Snake Detection Theory, the EPN was significantly larger for snake skin pictures than for lizard skin and bird plumage pictures, and for lizard skin pictures than for bird plumage pictures. Likewise, the EPN was larger for partially exposed snakes than for partially exposed lizards and birds. The results suggest that the EPN snake effect is partly driven by snake skin scale patterns which are otherwise rare in nature. PMID:28387376

  9. 25 CFR 169.6 - Maps.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... the map of the last section may include any excess of 10 miles or less. (c) The scale of maps showing... larger scale when necessary and when an increase in scale cannot be avoided through the use of separate field notes, but the scale must not be increased to such extent as to make the maps too cumbersome for...

  10. Evaluating scale-up rules of a high-shear wet granulation process.

    PubMed

    Tao, Jing; Pandey, Preetanshu; Bindra, Dilbir S; Gao, Julia Z; Narang, Ajit S

    2015-07-01

    This work aimed to evaluate the commonly used scale-up rules for high-shear wet granulation process using a microcrystalline cellulose-lactose-based low drug loading formulation. Granule properties such as particle size, porosity, flow, and tabletability, and tablet dissolution were compared across scales using scale-up rules based on different impeller speed calculations or extended wet massing time. Constant tip speed rule was observed to produce slightly less granulated material at the larger scales. Longer wet massing time can be used to compensate for the lower shear experienced by the granules at the larger scales. Constant Froude number and constant empirical stress rules yielded granules that were more comparable across different scales in terms of compaction performance and tablet dissolution. Granule porosity was shown to correlate well with blend tabletability and tablet dissolution, indicating the importance of monitoring granule densification (porosity) during scale-up. It was shown that different routes can be chosen during scale-up to achieve comparable granule growth and densification by altering one of the three parameters: water amount, impeller speed, and wet massing time. © 2015 Wiley Periodicals, Inc. and the American Pharmacists Association.
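
    The scale-up rules compared in this study can be summarized as a single power-law relation between impeller speed and bowl diameter. The sketch below assumes the usual exponents of 1 for constant tip speed and 0.5 for constant Froude number, plus an assumed intermediate exponent of 0.8 for the empirical stress rule; the bowl sizes and starting speed are illustrative.

```python
# Hedged sketch of impeller-speed scale-up: N2 = N1 * (D1 / D2) ** n, where the
# exponent n encodes the rule (1.0 constant tip speed, 0.5 constant Froude number,
# and an assumed 0.8 for the empirical shear/stress rule).
RULES = {"constant tip speed": 1.0, "constant Froude number": 0.5, "empirical stress": 0.8}

def scaled_impeller_speed(n1_rpm, d1_m, d2_m, exponent):
    """Impeller speed at the larger scale implied by a given scale-up exponent."""
    return n1_rpm * (d1_m / d2_m) ** exponent

n1, d1, d2 = 300.0, 0.30, 0.60       # 300 rpm in a 0.30 m bowl, scaling to a 0.60 m bowl
for rule, n_exp in RULES.items():
    print(f"{rule:>24}: {scaled_impeller_speed(n1, d1, d2, n_exp):6.1f} rpm")

# Constant tip speed gives the lowest speed (least shear) at the larger scale,
# consistent with the slightly less granulated material reported above.
```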

  11. Access-in-turn test architecture for low-power test application

    NASA Astrophysics Data System (ADS)

    Wang, Weizheng; Wang, JinCheng; Wang, Zengyun; Xiang, Lingyun

    2017-03-01

    This paper presents a novel access-in-turn test architecture (AIT-TA) for testing of very large scale integrated (VLSI) designs. In the proposed scheme, each scan cell in a chain receives test data from the shift-in line in turn while pushing its test response to the shift-out line. It solves the power problem of conventional scan architecture to a great extent and significantly suppresses the switching activity during shift and capture operations with acceptable hardware overhead. Thus, it can help to implement the test at much higher operating frequencies, resulting in shorter test application time. The proposed test approach enhances the architecture of conventional scan flip-flops and is backward compatible with existing test pattern generation and simulation techniques. Experimental results obtained for some larger ISCAS'89 and ITC'99 benchmark circuits illustrate the effectiveness of the proposed low-power test application scheme.

  12. Multidimensional signaling via wavelet packets

    NASA Astrophysics Data System (ADS)

    Lindsey, Alan R.

    1995-04-01

    This work presents a generalized signaling strategy for orthogonally multiplexed communication. Wavelet packet modulation (WPM) employs the basis functions from an arbitrary pruning of a full dyadic tree structured filter bank as orthogonal pulse shapes for conventional QAM symbols. The multi-scale modulation (MSM) and M-band wavelet modulation (MWM) schemes which have been recently introduced are handled as special cases, with the added benefit of an entire library of potentially superior sets of basis functions. The figures of merit are derived and it is shown that the power spectral density is equivalent to that for QAM (in fact, QAM is another special case) and hence directly applicable in existing systems employing this standard modulation. Two key advantages of this method are increased flexibility in time-frequency partitioning and an efficient all-digital filter bank implementation, making the WPM scheme more robust to a larger set of interferences (both temporal and sinusoidal) and computationally attractive as well.
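
    The modulation idea can be illustrated with a wavelet packet library. In the hedged sketch below, symbol amplitudes are assigned to the leaves of a fully decomposed depth-3 wavelet packet tree and the inverse transform synthesizes the transmit waveform; the 4-PAM mapping, tree depth, and 'db4' wavelet are illustrative choices rather than the paper's parameters.

```python
# Minimal wavelet packet modulation sketch using PyWavelets: one symbol per leaf
# basis function, waveform obtained by the inverse wavelet packet transform.
from itertools import product
import numpy as np
import pywt

level = 3
symbols = np.random.choice([-3.0, -1.0, 1.0, 3.0], size=2 ** level)   # 4-PAM amplitudes

wp = pywt.WaveletPacket(data=None, wavelet="db4", mode="periodization", maxlevel=level)
for path, s in zip(("".join(p) for p in product("ad", repeat=level)), symbols):
    wp[path] = np.array([s])            # place one symbol on each leaf of the full tree

waveform = wp.reconstruct(update=False) # superposition of the orthogonal pulse shapes
print(waveform.shape, symbols)

# A matched receiver would run the forward wavelet packet transform on `waveform`
# and read each symbol back from its leaf coefficient.
```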

  13. Line-imaging velocimetry for observing spatially heterogeneous mechanical and chemical responses in plastic bonded explosives during impact.

    PubMed

    Bolme, C A; Ramos, K J

    2013-08-01

    A line-imaging velocity interferometer was implemented on a single-stage light gas gun to probe the spatial heterogeneity of mechanical response, chemical reaction, and initiation of detonation in explosives. The instrument is described in detail, and then data are presented on several shock-compressed materials to demonstrate the instrument performance on both homogeneous and heterogeneous samples. The noise floor of this diagnostic was determined to be 0.24 rad with a shot on elastically compressed sapphire. The diagnostic was then applied to two heterogeneous plastic bonded explosives: 3,3'-diaminoazoxyfurazan (DAAF) and PBX 9501, where significant spatial velocity heterogeneity was observed during the build up to detonation. In PBX 9501, the velocity heterogeneity was consistent with the explosive grain size, however in DAAF, we observed heterogeneity on a much larger length scale than the grain size that was similar to the imaging resolution of the instrument.

  14. Line-imaging velocimetry for observing spatially heterogeneous mechanical and chemical responses in plastic bonded explosives during impact

    NASA Astrophysics Data System (ADS)

    Bolme, C. A.; Ramos, K. J.

    2013-08-01

    A line-imaging velocity interferometer was implemented on a single-stage light gas gun to probe the spatial heterogeneity of mechanical response, chemical reaction, and initiation of detonation in explosives. The instrument is described in detail, and then data are presented on several shock-compressed materials to demonstrate the instrument performance on both homogeneous and heterogeneous samples. The noise floor of this diagnostic was determined to be 0.24 rad with a shot on elastically compressed sapphire. The diagnostic was then applied to two heterogeneous plastic bonded explosives: 3,3'-diaminoazoxyfurazan (DAAF) and PBX 9501, where significant spatial velocity heterogeneity was observed during the build up to detonation. In PBX 9501, the velocity heterogeneity was consistent with the explosive grain size, however in DAAF, we observed heterogeneity on a much larger length scale than the grain size that was similar to the imaging resolution of the instrument.

  15. Managing competing elastic Grid and Cloud scientific computing applications using OpenNebula

    NASA Astrophysics Data System (ADS)

    Bagnasco, S.; Berzano, D.; Lusso, S.; Masera, M.; Vallero, S.

    2015-12-01

    Elastic cloud computing applications, i.e. applications that automatically scale according to computing needs, work on the ideal assumption of infinite resources. While large public cloud infrastructures may be a reasonable approximation of this condition, scientific computing centres like WLCG Grid sites usually work in a saturated regime, in which applications compete for scarce resources through queues, priorities and scheduling policies, and keeping a fraction of the computing cores idle to allow for headroom is usually not an option. In our particular environment one of the applications (a WLCG Tier-2 Grid site) is much larger than all the others and cannot autoscale easily. Nevertheless, other smaller applications can benefit from automatic elasticity; the implementation of this property in our infrastructure, based on the OpenNebula cloud stack, will be described, and the very first operational experiences with a small number of strategies for timely allocation and release of resources will be discussed.

  16. Coarse-graining using the relative entropy and simplex-based optimization methods in VOTCA

    NASA Astrophysics Data System (ADS)

    Rühle, Victor; Jochum, Mara; Koschke, Konstantin; Aluru, N. R.; Kremer, Kurt; Mashayak, S. Y.; Junghans, Christoph

    2014-03-01

    Coarse-grained (CG) simulations are an important tool to investigate systems on larger time and length scales. Several methods for systematic coarse-graining were developed, varying in complexity and the property of interest. Thus, the question arises which method best suits a specific class of system and desired application. The Versatile Object-oriented Toolkit for Coarse-graining Applications (VOTCA) provides a uniform platform for coarse-graining methods and allows for their direct comparison. We present recent advances of VOTCA, namely the implementation of the relative entropy method and downhill simplex optimization for coarse-graining. The methods are illustrated by coarse-graining SPC/E bulk water and a water-methanol mixture. Both CG models reproduce the pair distributions accurately. SYM is supported by AFOSR under grant 11157642 and by NSF under grant 1264282. CJ was supported in part by the NSF PHY11-25915 at KITP. K. Koschke acknowledges funding by the Nestle Research Center.
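
    As a generic illustration of the simplex-based route (not the VOTCA implementation), the sketch below uses Nelder-Mead to adjust the parameters of a Lennard-Jones-form CG potential until a crude low-density estimate of the radial distribution function matches a target; the target RDF and the g(r) ~ exp(-u/kBT) stand-in for a real CG simulation are assumptions.

```python
# Generic downhill simplex (Nelder-Mead) sketch for fitting a CG pair potential
# to a target RDF. The "model" RDF is a cheap analytic stand-in, not an MD run.
import numpy as np
from scipy.optimize import minimize

r = np.linspace(0.25, 1.2, 120)                          # separation, nm
g_target = 1 + 0.9 * np.exp(-((r - 0.45) / 0.08) ** 2)   # assumed target RDF (e.g. from atomistic MD)

def predicted_rdf(params, r, kBT=2.49):                  # kJ/mol at ~300 K
    eps, sigma = params
    u = 4 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6) # LJ-form CG pair potential
    return np.exp(-u / kBT)                              # low-density approximation g(r) ~ exp(-u/kBT)

def cost(params):
    return np.sum((predicted_rdf(params, r) - g_target) ** 2)

res = minimize(cost, x0=[1.0, 0.4], method="Nelder-Mead")
print("fitted epsilon, sigma:", res.x)
```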

  17. Similar problems, different solutions: comparing refuse collection in the Netherlands and Spain.

    PubMed

    Bel, Germà; Fageda, Xavier; Dijkgraaf, Elbert; Gradus, Raymond

    2010-01-01

    Because of differences in institutional arrangements, public service markets, and national traditions regarding government intervention, local public service provision can vary greatly. In this paper we compare the procedures adopted by the local governments of The Netherlands and Spain in arranging for the provision of solid waste collection. We find that Spain faces a problem of consolidation, opting more frequently to implement policies of privatization and cooperation, at the expense of competition. By contrast, The Netherlands, which has larger municipalities on average, resorts somewhat less to privatization and cooperation, and more to competition. Both options-cooperation and competition-have their merits when striving to strike a balance between transaction costs and scale economies. The choices made in organizational reform seem to be related to several factors, among which the nature of the political system and the size of municipalities appear to be relevant.

  18. High-efficiency multiphoton boson sampling

    NASA Astrophysics Data System (ADS)

    Wang, Hui; He, Yu; Li, Yu-Huai; Su, Zu-En; Li, Bo; Huang, He-Liang; Ding, Xing; Chen, Ming-Cheng; Liu, Chang; Qin, Jian; Li, Jin-Peng; He, Yu-Ming; Schneider, Christian; Kamp, Martin; Peng, Cheng-Zhi; Höfling, Sven; Lu, Chao-Yang; Pan, Jian-Wei

    2017-06-01

    Boson sampling is considered as a strong candidate to demonstrate 'quantum computational supremacy' over classical computers. However, previous proof-of-principle experiments suffered from small photon number and low sampling rates owing to the inefficiencies of the single-photon sources and multiport optical interferometers. Here, we develop two central components for high-performance boson sampling: robust multiphoton interferometers with 99% transmission rate and actively demultiplexed single-photon sources based on a quantum dot-micropillar with simultaneously high efficiency, purity and indistinguishability. We implement and validate three-, four- and five-photon boson sampling, and achieve sampling rates of 4.96 kHz, 151 Hz and 4 Hz, respectively, which are over 24,000 times faster than previous experiments. Our architecture can be scaled up for a larger number of photons and with higher sampling rates to compete with classical computers, and might provide experimental evidence against the extended Church-Turing thesis.
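
    The classical hardness at the heart of boson sampling can be made concrete. In the sketch below, the un-normalised probability of a collision-free n-photon output pattern is computed as |Perm(U_S)|^2 from an n x n submatrix of a random interferometer unitary, using Ryser's O(n·2^n) formula for the permanent; the random unitary and the mode choices are illustrative.

```python
# Hedged sketch: output probabilities in (collision-free) boson sampling are
# squared moduli of matrix permanents, which are exponentially costly to compute.
from itertools import combinations
import numpy as np
from scipy.stats import unitary_group

def permanent(a):
    """Permanent of a square matrix via Ryser's formula, O(n * 2^n)."""
    n = a.shape[0]
    total = 0.0 + 0.0j
    for subset in range(1, 2 ** n):
        cols = [j for j in range(n) if (subset >> j) & 1]
        total += (-1) ** len(cols) * np.prod(a[:, cols].sum(axis=1))
    return (-1) ** n * total

n_photons, n_modes = 3, 8
U = unitary_group.rvs(n_modes, random_state=1)
inputs = list(range(n_photons))                          # photons enter modes 0..n-1
for outputs in list(combinations(range(n_modes), n_photons))[:3]:
    sub = U[np.ix_(inputs, outputs)]                     # n x n submatrix for this pattern
    print(outputs, abs(permanent(sub)) ** 2)             # un-normalised output probability
```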

  19. Information Security Analysis Using Game Theory and Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schlicher, Bob G; Abercrombie, Robert K

    Information security analysis can be performed using game theory implemented in dynamic simulations of Agent Based Models (ABMs). Such simulations can be verified with the results from game theory analysis and further used to explore larger scale, real world scenarios involving multiple attackers, defenders, and information assets. Our approach addresses imperfect information and scalability, which allows us to also address previous limitations of current stochastic game models. Such models only consider perfect information, assuming that the defender is always able to detect attacks; assuming that the state transition probabilities are fixed before the game; assuming that the players' actions are always synchronous; and most models are not scalable with the size and complexity of systems under consideration. Our use of ABMs yields results of selected experiments that demonstrate our proposed approach and provides a quantitative measure for realistic information systems and their related security scenarios.

  20. ID201202961, DOE S-124,539, Information Security Analysis Using Game Theory and Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abercrombie, Robert K; Schlicher, Bob G

    Information security analysis can be performed using game theory implemented in dynamic simulations of Agent Based Models (ABMs). Such simulations can be verified with the results from game theory analysis and further used to explore larger scale, real world scenarios involving multiple attackers, defenders, and information assets. Our approach addresses imperfect information and scalability, which allows us to also address previous limitations of current stochastic game models. Such models only consider perfect information, assuming that the defender is always able to detect attacks; assuming that the state transition probabilities are fixed before the game; assuming that the players' actions are always synchronous; and most models are not scalable with the size and complexity of systems under consideration. Our use of ABMs yields results of selected experiments that demonstrate our proposed approach and provides a quantitative measure for realistic information systems and their related security scenarios.

  1. [The network of health promoting companies (WHP) in the province of Bergamo].

    PubMed

    Moretti, R; Cremaschini, M; Brembilla, G; Franchin, D; Barbaglio, G; Sarnataro, F; Spada, P; Mologni, G; Fiandri, R

    2012-01-01

    To create, by 2012, a network of health promoting companies in the Province of Bergamo, with at least 10% of companies with over 90 employees (about 10,000 workers) participating, rising to 15% by 2015. The work was carried out by building partnerships and collaboration with Confindustria Bergamo and the main healthcare and union stakeholders in the province, selecting good practices and testing their feasibility and effectiveness in two mid-sized companies before extending the proposal. A system of accreditation was defined. Member companies should implement at least 18 good practices over three years. The areas of good practice are: nutrition, tobacco, physical activity, road safety, alcohol and substance use, and wellbeing. The results are surprising in terms of network growth and adherence. Currently 46 companies are involved (over 9,200 employees). The model seems to work well and in our opinion is extensible on a larger scale.

  2. US/Brazil joint pilot project objectives

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1997-12-01

    This paper describes a joint US/Brazil pilot project for rural electrification, whose major goals are: to establish technical, institutional, and economic confidence in using renewable energy (PV and wind) to meet the needs of the citizens of rural Brazil; to establish on-going institutional, individual and business relationships necessary to implement sustainable programs and commitments; to lay the groundwork for larger scale rural electrification through the use of distributed renewable technologies. The projects have supported low power home lighting systems, lighting and refrigeration for schools and medical centers, and water pumping systems. This is viewed as a long term project, where much of the equipment will come from the US, but Brazil will be responsible for program management, and sharing data gained from the program. The paper describes in detail the Brazilian program which was instituted to support this phased project.

  3. Training pediatric clinical pharmacology and therapeutics specialists of the future: the needs, the reality, and opportunities for international networking.

    PubMed

    Gazarian, Madlen

    2009-01-01

    In recent years there has been a rapid and marked increase in global recognition of the need for better medicines for children, with various initiatives being implemented at global and regional levels. These exciting developments are matched by recognition of the need to build greater capacity in the field of pediatric clinical pharmacology and therapeutics to help deliver on the promise of better medicines for children. A range of pediatric medicines researchers, educators, clinical therapeutics practitioners, and experts in drug evaluation, regulation, and broader medicines policy are needed on a larger scale, in both developed and developing world settings. The current and likely future training needs to meet these diverse challenges, the current realities of trying to meet such needs, and the opportunities for international networking to help meet future training needs are discussed from a global perspective.

  4. Evaluation of the Plant-Craig stochastic convection scheme (v2.0) in the ensemble forecasting system MOGREPS-R (24 km) based on the Unified Model (v7.3)

    NASA Astrophysics Data System (ADS)

    Keane, Richard J.; Plant, Robert S.; Tennant, Warren J.

    2016-05-01

    The Plant-Craig stochastic convection parameterization (version 2.0) is implemented in the Met Office Regional Ensemble Prediction System (MOGREPS-R) and is assessed in comparison with the standard convection scheme with a simple stochastic scheme only, from random parameter variation. A set of 34 ensemble forecasts, each with 24 members, is considered, over the month of July 2009. Deterministic and probabilistic measures of the precipitation forecasts are assessed. The Plant-Craig parameterization is found to improve probabilistic forecast measures, particularly the results for lower precipitation thresholds. The impact on deterministic forecasts at the grid scale is neutral, although the Plant-Craig scheme does deliver improvements when forecasts are made over larger areas. The improvements found are greater in conditions of relatively weak synoptic forcing, for which convective precipitation is likely to be less predictable.
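
    For readers unfamiliar with the probabilistic measures referred to above, a common one for precipitation ensembles is the Brier score for exceeding a threshold. The sketch below uses made-up numbers, not data from the study: the forecast probability is taken as the fraction of ensemble members above the threshold and scored against the 0/1 observation.

      import numpy as np

      members = np.array([[0.2, 1.5, 0.0, 3.1],      # rows = forecast cases,
                          [0.0, 0.1, 0.0, 0.4],      # columns = ensemble members
                          [2.2, 4.0, 1.9, 2.8]])     # (24 members in the real system)
      observed = np.array([1.2, 0.0, 3.5])           # observed precipitation (mm)
      threshold = 1.0                                 # a "lower precipitation threshold"

      p_forecast = (members > threshold).mean(axis=1) # probability from the ensemble
      outcome = (observed > threshold).astype(float)  # 0/1 observation
      brier = np.mean((p_forecast - outcome) ** 2)    # 0 = perfect; lower is better
      print(round(brier, 3))                          # 0.083 for these toy numbers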

  5. Threshold secret sharing scheme based on phase-shifting interferometry.

    PubMed

    Deng, Xiaopeng; Shi, Zhengang; Wen, Wei

    2016-11-01

    We propose a new method for secret image sharing with the (3,N) threshold scheme based on phase-shifting interferometry. The secret image, which is multiplied with an encryption key in advance, is first encrypted by using Fourier transformation. Then, the encoded image is shared into N shadow images based on the recording principle of phase-shifting interferometry. Based on the reconstruction principle of phase-shifting interferometry, any three or more shadow images can retrieve the secret image, while any two or fewer shadow images cannot obtain any information of the secret image. Thus, a (3,N) threshold secret sharing scheme can be implemented. Compared with our previously reported method, the algorithm of this paper is suited for not only a binary image but also a gray-scale image. Moreover, the proposed algorithm can obtain a larger threshold value t. Simulation results are presented to demonstrate the feasibility of the proposed method.
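
    A minimal numerical sketch of the recording and reconstruction principle follows (our simplification with three fixed phase shifts, not the authors' full (3,N) scheme): each shadow image is an intensity-only interferogram, and three of them determine the complex Fourier-domain secret through a per-pixel linear solve.

      import numpy as np

      rng = np.random.default_rng(0)
      secret = rng.random((64, 64))                              # stand-in secret image
      key_phase = np.exp(2j * np.pi * rng.random((64, 64)))      # stand-in encryption key
      field = np.fft.fft2(secret * key_phase)                    # Fourier-domain secret

      A_ref = float(np.abs(field).max())                         # reference amplitude (assumed)
      deltas = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])     # three phase shifts

      # Shadow images: intensity-only records, one per phase shift.
      shadows = [np.abs(A_ref * np.exp(1j * d) + field) ** 2 for d in deltas]

      # Per pixel: I_k = c + 2*A*cos(d_k)*Re(O) + 2*A*sin(d_k)*Im(O), linear in (c, Re, Im).
      M = np.column_stack([np.ones(3), 2 * A_ref * np.cos(deltas), 2 * A_ref * np.sin(deltas)])
      rhs = np.stack([s.ravel() for s in shadows])               # shape (3, n_pixels)
      c, re, im = np.linalg.solve(M, rhs)
      recovered = (re + 1j * im).reshape(field.shape)

      assert np.allclose(recovered, field, atol=1e-6)            # three shadows suffice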

  6. A quantum Fredkin gate.

    PubMed

    Patel, Raj B; Ho, Joseph; Ferreyrol, Franck; Ralph, Timothy C; Pryde, Geoff J

    2016-03-01

    Minimizing the resources required to build logic gates into useful processing circuits is key to realizing quantum computers. Although the salient features of a quantum computer have been shown in proof-of-principle experiments, difficulties in scaling quantum systems have made more complex operations intractable. This is exemplified in the classical Fredkin (controlled-SWAP) gate for which, despite theoretical proposals, no quantum analog has been realized. By adding control to the SWAP unitary, we use photonic qubit logic to demonstrate the first quantum Fredkin gate, which promises many applications in quantum information and measurement. We implement example algorithms and generate the highest-fidelity three-photon Greenberger-Horne-Zeilinger states to date. The technique we use allows one to add a control operation to a black-box unitary, something that is impossible in the standard circuit model. Our experiment represents the first use of this technique to control a two-qubit operation and paves the way for larger controlled circuits to be realized efficiently.
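
    For concreteness, the gate itself is easy to write down classically: it is the SWAP unitary promoted to one block of a larger, controlled unitary. The sketch below is ours, purely to illustrate the matrix structure rather than the photonic implementation; it builds the 8 x 8 controlled-SWAP and checks that the targets are exchanged only when the control qubit is |1>.

      import numpy as np

      SWAP = np.array([[1, 0, 0, 0],
                       [0, 0, 1, 0],
                       [0, 1, 0, 0],
                       [0, 0, 0, 1]])

      # Controlled-SWAP (Fredkin): identity on the target pair when the control
      # qubit (most significant bit) is |0>, SWAP when it is |1>.
      FREDKIN = np.block([[np.eye(4), np.zeros((4, 4))],
                          [np.zeros((4, 4)), SWAP]])

      state = np.zeros(8)
      state[0b101] = 1.0                     # |control=1, targets=0,1>
      out = FREDKIN @ state
      assert out[0b110] == 1.0               # targets swapped only because control is 1
      assert np.allclose(FREDKIN @ FREDKIN.conj().T, np.eye(8))   # unitary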

  7. Fluids and Combustion Facility Acoustic Emissions Controlled by Aggressive Low-Noise Design Process

    NASA Technical Reports Server (NTRS)

    Cooper, Beth A.; Young, Judith A.

    2004-01-01

    The Fluids and Combustion Facility (FCF) is a dual-rack microgravity research facility that is being developed by Northrop Grumman Information Technology (NGIT) for the International Space Station (ISS) at the NASA Glenn Research Center. As an on-orbit test bed, FCF will host a succession of experiments in fluid and combustion physics. The Fluids Integrated Rack (FIR) and the Combustion Integrated Rack (CIR) must meet ISS acoustic emission requirements (ref. 1), which support speech communication and hearing-loss-prevention goals for ISS crew. To meet these requirements, the NGIT acoustics team implemented an aggressive low-noise design effort that incorporated frequent acoustic emission testing for all internal noise sources, larger-scale systems, and fully integrated racks (ref. 2). Glenn's Acoustical Testing Laboratory (ref. 3) provided acoustical testing services (see the following photograph) as well as specialized acoustical engineering support as part of the low-noise design process (ref. 4).

  8. Milestone Deliverable: FY18-Q1: Deploy production sliding mesh capability with linear solver benchmarking.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Domino, Stefan P.

    2017-12-01

    This milestone was focused on deploying and verifying a “sliding-mesh interface,” and establishing baseline timings for blade-resolved simulations of a sub-MW-scale turbine. In the ExaWind project, we are developing both sliding-mesh and overset-mesh approaches for handling the rotating blades in an operating wind turbine. In the sliding-mesh approach, the turbine rotor and its immediate surrounding fluid are captured in a “disk” that is embedded in the larger fluid domain. The embedded fluid is simulated in a coordinate system that rotates with the rotor. It is important that the coupling algorithm (and its implementation) between the rotating and inertial discrete models maintains the accuracy of the numerical methods on either side of the interface, i.e., the interface is “design order.”

  9. Electronics Shielding and Reliability Design Tools

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; ONeill, P. M.; Zang, Thomas A., Jr.; Pandolf, John E.; Koontz, Steven L.; Boeder, P.; Reddell, B.; Pankop, C.

    2006-01-01

    It is well known that electronics placement in large-scale human-rated systems provides opportunity to optimize electronics shielding through materials choice and geometric arrangement. For example, several hundred single event upsets (SEUs) occur within the Shuttle avionic computers during a typical mission. An order of magnitude larger SEU rate would occur without careful placement in the Shuttle design. These results used basic physics models (linear energy transfer (LET), track structure, Auger recombination) combined with limited SEU cross section measurements allowing accurate evaluation of target fragment contributions to Shuttle avionics memory upsets. Electronics shielding design on human-rated systems provides opportunity to minimize radiation impact on critical and non-critical electronic systems. Implementation of shielding design tools requires adequate methods for evaluation of design layouts, guiding qualification testing, and an adequate follow-up on final design evaluation including results from a systems/device testing program tailored to meet design requirements.

  10. Shared decision-making in medication management: development of a training intervention

    PubMed Central

    Stead, Ute; Morant, Nicola; Ramon, Shulamit

    2017-01-01

    Shared decision-making is a collaborative process in which clinicians and patients make treatment decisions together. Although it is considered essential to patient-centred care, the adoption of shared decision-making into routine clinical practice has been slow, and there is a need to increase implementation. This paper describes the development and delivery of a training intervention to promote shared decision-making in medication management in mental health as part of the Shared Involvement in Medication Management Education (ShIMME) project. Three stakeholder groups (service users, care coordinators and psychiatrists) received training in shared decision-making, and their feedback was evaluated. The programme was mostly well received, with all groups rating interaction with peers as the best aspect of the training. This small-scale pilot shows that it is feasible to deliver training in shared decision-making to several key stakeholders. Larger studies will be required to assess the effectiveness of such training. PMID:28811918

  11. Shared decision-making in medication management: development of a training intervention.

    PubMed

    Stead, Ute; Morant, Nicola; Ramon, Shulamit

    2017-08-01

    Shared decision-making is a collaborative process in which clinicians and patients make treatment decisions together. Although it is considered essential to patient-centred care, the adoption of shared decision-making into routine clinical practice has been slow, and there is a need to increase implementation. This paper describes the development and delivery of a training intervention to promote shared decision-making in medication management in mental health as part of the Shared Involvement in Medication Management Education (ShIMME) project. Three stakeholder groups (service users, care coordinators and psychiatrists) received training in shared decision-making, and their feedback was evaluated. The programme was mostly well received, with all groups rating interaction with peers as the best aspect of the training. This small-scale pilot shows that it is feasible to deliver training in shared decision-making to several key stakeholders. Larger studies will be required to assess the effectiveness of such training.

  12. Auditory and visual cortex of primates: a comparison of two sensory systems

    PubMed Central

    Rauschecker, Josef P.

    2014-01-01

    A comparative view of the brain, comparing related functions across species and sensory systems, offers a number of advantages. In particular, it allows separating the formal purpose of a model structure from its implementation in specific brains. Models of auditory cortical processing can be conceived by analogy to the visual cortex, incorporating neural mechanisms that are found in both the visual and auditory systems. Examples of such canonical features on the columnar level are direction selectivity, size/bandwidth selectivity, as well as receptive fields with segregated versus overlapping on- and off-sub-regions. On a larger scale, parallel processing pathways have been envisioned that represent the two main facets of sensory perception: 1) identification of objects and 2) processing of space. Expanding this model in terms of sensorimotor integration and control offers an overarching view of cortical function independent of sensory modality. PMID:25728177

  13. Context Matters for Social-Emotional Learning: Examining Variation in Program Impact by Dimensions of School Climate.

    PubMed

    McCormick, Meghan P; Cappella, Elise; O'Connor, Erin E; McClowry, Sandee G

    2015-09-01

    This paper examines whether three dimensions of school climate-leadership, accountability, and safety/respect-moderated the impacts of the INSIGHTS program on students' social-emotional, behavioral, and academic outcomes. Twenty-two urban schools and N = 435 low-income racial/ethnic minority students were enrolled in the study and received intervention services across the course of 2 years, in both kindergarten and first grade. Intervention effects on math and reading achievement were larger for students enrolled in schools with lower overall levels of leadership, accountability, and safety/respect at baseline. Program impacts on disruptive behaviors were greater in schools with lower levels of accountability at baseline; impacts on sustained attention were greater in schools with lower levels of safety/respect at baseline. Implications for Social-Emotional Learning program implementation, replication, and scale-up are discussed.

  14. Experimental realization of a one-way quantum computer algorithm solving Simon's problem.

    PubMed

    Tame, M S; Bell, B A; Di Franco, C; Wadsworth, W J; Rarity, J G

    2014-11-14

    We report an experimental demonstration of a one-way implementation of a quantum algorithm solving Simon's problem-a black-box period-finding problem that has an exponential gap between the classical and quantum runtime. Using an all-optical setup and modifying the bases of single-qubit measurements on a five-qubit cluster state, key representative functions of the logical two-qubit version's black box can be queried and solved. To the best of our knowledge, this work represents the first experimental realization of the quantum algorithm solving Simon's problem. The experimental results are in excellent agreement with the theoretical model, demonstrating the successful performance of the algorithm. With a view to scaling up to larger numbers of qubits, we analyze the resource requirements for an n-qubit version. This work helps highlight how one-way quantum computing provides a practical route to experimentally investigating the quantum-classical gap in the query complexity model.

  15. Socio-contextual Network Mining for User Assistance in Web-based Knowledge Gathering Tasks

    NASA Astrophysics Data System (ADS)

    Rajendran, Balaji; Kombiah, Iyakutti

    Web-based Knowledge Gathering (WKG) is a specialized and complex information-seeking task carried out by many users on the web for their various learning and decision-making requirements. We construct a contextual semantic structure by observing the actions of the users involved in the WKG task, in order to gain an understanding of their task and requirements. We also build a knowledge warehouse in the form of a master Semantic Link Network (SLX) that accommodates and assimilates all the contextual semantic structures. This master SLX, which is a socio-contextual network, is then mined to provide contextual inputs to the current users through their agents. We validated our approach through experiments and analyzed the benefits to the users in terms of resource explorations and the time saved. The results are positive enough to motivate us to implement the approach on a larger scale.

  16. Community assembly of the ferns of Florida.

    PubMed

    Sessa, Emily B; Chambers, Sally M; Li, Daijiang; Trotta, Lauren; Endara, Lorena; Burleigh, J Gordon; Baiser, Benjamin

    2018-03-01

    Many ecological and evolutionary processes shape the assembly of organisms into local communities from a regional pool of species. We analyzed phylogenetic and functional diversity to understand community assembly of the ferns of Florida at two spatial scales. We built a phylogeny for 125 of the 141 species of ferns in Florida using five chloroplast markers. We calculated mean pairwise dissimilarity (MPD) and mean nearest taxon distance (MNTD) from phylogenetic distances and functional trait data for both spatial scales and compared the results to null models to assess significance. Our results for over vs. underdispersion in functional and phylogenetic diversity differed depending on spatial scale and metric considered. At the county scale, MPD revealed evidence for phylogenetic overdispersion, while MNTD revealed phylogenetic and functional underdispersion, and at the conservation area scale, MPD revealed phylogenetic and functional underdispersion while MNTD revealed evidence only of functional underdispersion. Our results are consistent with environmental filtering playing a larger role at the smaller, conservation area scale. The smaller spatial units are likely composed of fewer local habitat types that are selecting for closely related species, with the larger-scale units more likely to be composed of multiple habitat types that bring together a larger pool of species from across the phylogeny. Several aspects of fern biology, including their unique physiology and water relations and the importance of the independent gametophyte stage of the life cycle, make ferns highly sensitive to local, microhabitat conditions. © 2018 The Authors. American Journal of Botany is published by Wiley Periodicals, Inc. on behalf of the Botanical Society of America.
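
    The two dispersion metrics used above are simple to compute from a pairwise distance matrix. The sketch below is ours, not the authors' code (which additionally uses specific null models and software): it computes MPD, MNTD and a standardized effect size against a shuffled-membership null, where negative values indicate underdispersion (clustering) and positive values indicate overdispersion.

      import numpy as np

      def mpd_mntd(dist, members):
          """Mean pairwise distance and mean nearest-taxon distance for one
          community, given a square distance matrix over the regional pool
          (phylogenetic or functional) and the indices of the community members."""
          sub = dist[np.ix_(members, members)]
          pairs = sub[np.triu_indices(len(members), k=1)]
          mpd = pairs.mean()
          np.fill_diagonal(sub, np.inf)                 # ignore self-distances
          mntd = sub.min(axis=1).mean()
          return mpd, mntd

      def null_ses(dist, members, n_null=999, seed=0):
          """Standardized effect sizes against a null that draws random communities
          of the same size from the pool (one common choice among several)."""
          rng = np.random.default_rng(seed)
          obs = np.array(mpd_mntd(dist, members))
          null = np.array([mpd_mntd(dist, rng.choice(len(dist), size=len(members),
                                                     replace=False))
                           for _ in range(n_null)])
          return (obs - null.mean(axis=0)) / null.std(axis=0)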

  17. WE-E-17A-06: Assessing the Scale of Tumor Heterogeneity by Complete Hierarchical Segmentation On MRI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gensheimer, M; Trister, A; Ermoian, R

    2014-06-15

    Purpose: In many cancers, intratumoral heterogeneity exists in vascular and genetic structure. We developed an algorithm which uses clinical imaging to interrogate different scales of heterogeneity. We hypothesize that heterogeneity of perfusion at large distance scales may correlate with propensity for disease recurrence. We applied the algorithm to initial diagnosis MRI of rhabdomyosarcoma patients to predict recurrence. Methods: The Spatial Heterogeneity Analysis by Recursive Partitioning (SHARP) algorithm recursively segments the tumor image. The tumor is repeatedly subdivided, with each dividing line chosen to maximize signal intensity difference between the two subregions. This process continues to the voxel level, producing segments at multiple scales. Heterogeneity is measured by comparing signal intensity histograms between each segmented region and the adjacent region. We measured the scales of contrast enhancement heterogeneity of the primary tumor in 18 rhabdomyosarcoma patients. Using Cox proportional hazards regression, we explored the influence of heterogeneity parameters on relapse-free survival (RFS). To compare with existing methods, fractal and Haralick texture features were also calculated. Results: The complete segmentation produced by SHARP allows extraction of diverse features, including the amount of heterogeneity at various distance scales, the area of the tumor with the most heterogeneity at each scale, and for a given point in the tumor, the heterogeneity at different scales. 10/18 rhabdomyosarcoma patients suffered disease recurrence. On contrast-enhanced MRI, larger scale of maximum signal intensity heterogeneity, relative to tumor diameter, predicted for shorter RFS (p=0.05). Fractal dimension, fractal fit, and three Haralick features did not predict RFS (p=0.09-0.90). Conclusion: SHARP produces an automatic segmentation of tumor regions and reports the amount of heterogeneity at various distance scales. In rhabdomyosarcoma, RFS was shorter when the primary tumor exhibited larger scale of heterogeneity on contrast-enhanced MRI. If validated on a larger dataset, this imaging biomarker could be useful to help personalize treatment.
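
    As a rough sketch of the recursive-partitioning idea (ours and greatly simplified: the published SHARP algorithm compares full signal-intensity histograms between adjacent segments rather than means), each region is bisected along the horizontal or vertical line that maximizes the contrast between the two halves, and that contrast is recorded together with the scale at which it occurs.

      import numpy as np

      def sharp_split(img):
          """Recursively bisect a 2-D region along the row or column split that
          maximizes the mean-intensity difference between the two halves,
          returning (heterogeneity, region shape) pairs at every scale."""
          if img.size < 2:
              return []
          best = None                                  # (difference, axis, split index)
          for axis in (0, 1):
              for i in range(1, img.shape[axis]):
                  a, b = np.split(img, [i], axis=axis)
                  diff = float(abs(a.mean() - b.mean()))
                  if best is None or diff > best[0]:
                      best = (diff, axis, i)
          diff, axis, i = best
          a, b = np.split(img, [i], axis=axis)
          return [(diff, img.shape)] + sharp_split(a) + sharp_split(b)

      # Toy "tumor" with a bright half: the first, largest-scale split captures it.
      img = np.ones((8, 8)); img[:, 4:] = 3.0
      print(sharp_split(img)[0])                       # (2.0, (8, 8))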

  18. Peptides and Anti-peptide Antibodies for Small and Medium Scale Peptide and Anti-peptide Affinity Microarrays: Antigenic Peptide Selection, Immobilization, and Processing.

    PubMed

    Zhang, Fan; Briones, Andrea; Soloviev, Mikhail

    2016-01-01

    This chapter describes the principles of selection of antigenic peptides for the development of anti-peptide antibodies for use in microarray-based multiplex affinity assays and also with mass-spectrometry detection. The methods described here are mostly applicable to small to medium scale arrays. Although the same principles of peptide selection would be suitable for larger scale arrays (with 100+ features) the actual informatics software and printing methods may well be different. Because of the sheer number of proteins/peptides to be processed and analyzed dedicated software capable of processing all the proteins and an enterprise level array robotics may be necessary for larger scale efforts. This report aims to provide practical advice to those who develop or use arrays with up to ~100 different peptide or protein features.

  19. Automated expert modeling for automated student evaluation.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abbott, Robert G.

    The 8th International Conference on Intelligent Tutoring Systems provides a leading international forum for the dissemination of original results in the design, implementation, and evaluation of intelligent tutoring systems and related areas. The conference draws researchers from a broad spectrum of disciplines ranging from artificial intelligence and cognitive science to pedagogy and educational psychology. The conference explores intelligent tutoring systems' increasing real-world impact on an increasingly global scale. Improved authoring tools and learning object standards enable fielding systems and curricula in real-world settings on an unprecedented scale. Researchers deploy ITS's in ever larger studies and increasingly use data from real students, tasks, and settings to guide new research. With high volumes of student interaction data, data mining, and machine learning, tutoring systems can learn from experience and improve their teaching performance. The increasing number of realistic evaluation studies also broadens researchers' knowledge about the educational contexts for which ITS's are best suited. At the same time, researchers explore how to expand and improve ITS/student communications, for example, how to achieve more flexible and responsive discourse with students, help students integrate Web resources into learning, use mobile technologies and games to enhance student motivation and learning, and address multicultural perspectives.

  20. Tigres Workflow Library: Supporting Scientific Pipelines on HPC Systems

    DOE PAGES

    Hendrix, Valerie; Fox, James; Ghoshal, Devarshi; ...

    2016-07-21

    The growth in scientific data volumes has resulted in the need for new tools that enable users to operate on and analyze data on large-scale resources. In the last decade, a number of scientific workflow tools have emerged. These tools often target distributed environments, and often need expert help to compose and execute the workflows. Data-intensive workflows are often ad hoc; they involve an iterative development process that includes users composing and testing their workflows on desktops, and scaling up to larger systems. In this paper, we present the design and implementation of Tigres, a workflow library that supports the iterative workflow development cycle of data-intensive workflows. Tigres provides an application programming interface to a set of programming templates (i.e., sequence, parallel, split, merge) that can be used to compose and execute computational and data pipelines. We discuss the results of our evaluation of scientific and synthetic workflows, showing that Tigres performs with minimal template overheads (mean of 13 seconds over all experiments). We also discuss various factors (e.g., I/O performance, execution mechanisms) that affect the performance of scientific workflows on HPC systems.
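
    The template idea is easy to picture with a toy stand-in (the helper names below are ours, not the actual Tigres API): a parallel template fans the same task out over many inputs, and a sequence template chains stages, so a split/analyze/merge pipeline composes from plain functions.

      from concurrent.futures import ThreadPoolExecutor

      def sequence(tasks, value):
          """Run tasks one after another, feeding each output to the next stage."""
          for task in tasks:
              value = task(value)
          return value

      def parallel(task, values):
          """Run the same task over many inputs concurrently (a split),
          returning the list of per-input results for a later merge."""
          with ThreadPoolExecutor() as pool:
              return list(pool.map(task, values))

      def analyze(chunk):
          return sum(chunk) / len(chunk)

      def merge(parts):
          return sum(parts) / len(parts)

      chunks = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
      result = sequence([merge], parallel(analyze, chunks))
      print(result)                                    # 5.0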

  1. Investigation related to hydrogen isotopes separation by cryogenic distillation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bornea, A.; Zamfirache, M.; Stefanescu, I.

    2008-07-15

    Research conducted in the last fifty years has shown that one of the most efficient techniques of removing tritium from the heavy water used as moderator and coolant in CANDU reactors (as that operated at Cernavoda (Romania)) is hydrogen cryogenic distillation. Designing and implementing the concept of cryogenic distillation columns require experiments to be conducted as well as computer simulations. Particularly, computer simulations are of great importance when designing and evaluating the performances of a column or a series of columns. Experimental data collected from laboratory work will be used as input for computer simulations run at larger scale (formore » The Pilot Plant for Tritium and Deuterium Separation) in order to increase the confidence in the simulated results. Studies carried out were focused on the following: - Quantitative analyses of important parameters such as the number of theoretical plates, inlet area, reflux flow, flow-rates extraction, working pressure, etc. - Columns connected in series in such a way to fulfil the separation requirements. Experiments were carried out on a laboratory-scale installation to investigate the performance of contact elements with continuous packing. The packing was manufactured in our institute. (authors)« less

  2. Jade: using on-demand cloud analysis to give scientists back their flow

    NASA Astrophysics Data System (ADS)

    Robinson, N.; Tomlinson, J.; Hilson, A. J.; Arribas, A.; Powell, T.

    2017-12-01

    The UK's Met Office generates 400 TB of weather and climate data every day by running physical models on its Top 20 supercomputer. As data volumes explode, there is a danger that analysis workflows become dominated by watching progress bars rather than thinking about science. We have been researching how we can use distributed computing to allow analysts to process these large volumes of high-velocity data in a way that is easy, effective and cheap. Our prototype analysis stack, Jade, tries to encapsulate this. Functionality includes: an under-the-hood Dask engine, which parallelises and distributes computations without the need to retrain analysts; hybrid compute clusters (AWS, Alibaba, and local compute) comprising many thousands of cores; clusters which autoscale up/down in response to calculation load using Kubernetes, with the cluster balanced across providers based on the current price of compute; and lazy data access from cloud storage via containerised OpenDAP. This technology stack allows us to perform calculations many orders of magnitude faster than is possible on local workstations. It is also possible to outperform dedicated local compute clusters, as cloud compute can, in principle, scale to much larger sizes. The use of ephemeral compute resources also makes this implementation cost efficient.
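
    The "under-the-hood Dask engine" point is worth making concrete: the analyst writes ordinary array code, and Dask cuts it into chunked tasks that a cluster executes in parallel. The sketch below is ours (a local cluster standing in for Jade's autoscaling cloud cluster), not the Jade codebase.

      import dask.array as da
      from dask.distributed import Client

      client = Client()                        # local workers here; in a Jade-like setup
                                               # this would point at a remote scheduler

      # Stand-in for a large gridded field (~18 GB), never held fully in memory.
      field = da.random.random((8760, 500, 500), chunks=(24, 500, 500))

      hourly_mean = field.mean(axis=(1, 2))    # lazy task graph, one task per chunk
      result = hourly_mean.compute()           # executed in parallel across the workers
      print(result.shape)                      # (8760,)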

  3. Line Scanning Thermography for Rapid Nondestructive Inspection of Large Scale Composites

    NASA Astrophysics Data System (ADS)

    Chung, S.; Ley, O.; Godinez, V.; Bandos, B.

    2011-06-01

    As next-generation structures utilize larger amounts of composite materials, a rigorous and reliable method is needed to inspect these structures in order to prevent catastrophic failure and extend service life. Current inspection methods, such as ultrasonic inspection, generally require extended downtime and man-hours, as they are typically carried out via point-by-point measurements. A novel Line Scanning Thermography (LST) system has been developed for the non-contact, large-scale field inspection of composite structures with faster scanning times than conventional thermography systems. LST is a patented dynamic thermography technique where the heat source and thermal camera move in tandem, which allows the continuous scan of long surfaces without loss of resolution. The current system can inspect an area of 10 in² per second, and has a resolution of 0.05 × 0.03 in². Advanced data-gathering protocols have been implemented for near-real-time damage visualization, along with post-analysis algorithms for damage interpretation. The system has been used to successfully detect defects (delamination, dry areas) in fiber-reinforced composite sandwich panels for Navy applications, as well as impact damage in composite missile cases and armor ceramic panels.

  4. Simulation of reaction diffusion processes over biologically relevant size and time scales using multi-GPU workstations

    PubMed Central

    Hallock, Michael J.; Stone, John E.; Roberts, Elijah; Fry, Corey; Luthey-Schulten, Zaida

    2014-01-01

    Simulation of in vivo cellular processes with the reaction-diffusion master equation (RDME) is a computationally expensive task. Our previous software enabled simulation of inhomogeneous biochemical systems for small bacteria over long time scales using the MPD-RDME method on a single GPU. Simulations of larger eukaryotic systems exceed the on-board memory capacity of individual GPUs, and long time simulations of modest-sized cells such as yeast are impractical on a single GPU. We present a new multi-GPU parallel implementation of the MPD-RDME method based on a spatial decomposition approach that supports dynamic load balancing for workstations containing GPUs of varying performance and memory capacity. We take advantage of high-performance features of CUDA for peer-to-peer GPU memory transfers and evaluate the performance of our algorithms on state-of-the-art GPU devices. We present parallel efficiency and performance results for simulations using multiple GPUs as system size, particle counts, and number of reactions grow. We also demonstrate multi-GPU performance in simulations of the Min protein system in E. coli. Moreover, our multi-GPU decomposition and load balancing approach can be generalized to other lattice-based problems. PMID:24882911
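
    The load-balancing ingredient can be illustrated without any GPU code at all (this sketch is ours, not the authors' implementation): the simulation lattice is cut along one axis into slabs whose sizes are proportional to each device's memory capacity, so a workstation with unequal GPUs still fills each card proportionally.

      import numpy as np

      def partition_rows(n_rows, gpu_mem_gb):
          """Split n_rows lattice rows into contiguous slabs, one per GPU,
          with slab sizes proportional to each GPU's memory capacity."""
          weights = np.asarray(gpu_mem_gb, dtype=float)
          bounds = np.round(np.cumsum(weights) / weights.sum() * n_rows).astype(int)
          starts = np.concatenate(([0], bounds[:-1]))
          return list(zip(starts.tolist(), bounds.tolist()))

      # e.g. a workstation with one 12 GB card and two 6 GB cards
      print(partition_rows(1024, [12, 6, 6]))    # [(0, 512), (512, 768), (768, 1024)]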

  5. Simulation of reaction diffusion processes over biologically relevant size and time scales using multi-GPU workstations.

    PubMed

    Hallock, Michael J; Stone, John E; Roberts, Elijah; Fry, Corey; Luthey-Schulten, Zaida

    2014-05-01

    Simulation of in vivo cellular processes with the reaction-diffusion master equation (RDME) is a computationally expensive task. Our previous software enabled simulation of inhomogeneous biochemical systems for small bacteria over long time scales using the MPD-RDME method on a single GPU. Simulations of larger eukaryotic systems exceed the on-board memory capacity of individual GPUs, and long time simulations of modest-sized cells such as yeast are impractical on a single GPU. We present a new multi-GPU parallel implementation of the MPD-RDME method based on a spatial decomposition approach that supports dynamic load balancing for workstations containing GPUs of varying performance and memory capacity. We take advantage of high-performance features of CUDA for peer-to-peer GPU memory transfers and evaluate the performance of our algorithms on state-of-the-art GPU devices. We present parallel efficiency and performance results for simulations using multiple GPUs as system size, particle counts, and number of reactions grow. We also demonstrate multi-GPU performance in simulations of the Min protein system in E. coli. Moreover, our multi-GPU decomposition and load balancing approach can be generalized to other lattice-based problems.

  6. Tigres Workflow Library: Supporting Scientific Pipelines on HPC Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hendrix, Valerie; Fox, James; Ghoshal, Devarshi

    The growth in scientific data volumes has resulted in the need for new tools that enable users to operate on and analyze data on large-scale resources. In the last decade, a number of scientific workflow tools have emerged. These tools often target distributed environments, and often need expert help to compose and execute the workflows. Data-intensive workflows are often ad hoc; they involve an iterative development process that includes users composing and testing their workflows on desktops, and scaling up to larger systems. In this paper, we present the design and implementation of Tigres, a workflow library that supports the iterative workflow development cycle of data-intensive workflows. Tigres provides an application programming interface to a set of programming templates (i.e., sequence, parallel, split, merge) that can be used to compose and execute computational and data pipelines. We discuss the results of our evaluation of scientific and synthetic workflows, showing that Tigres performs with minimal template overheads (mean of 13 seconds over all experiments). We also discuss various factors (e.g., I/O performance, execution mechanisms) that affect the performance of scientific workflows on HPC systems.

  7. Organelle Size Scaling of the Budding Yeast Vacuole by Relative Growth and Inheritance.

    PubMed

    Chan, Yee-Hung M; Reyes, Lorena; Sohail, Saba M; Tran, Nancy K; Marshall, Wallace F

    2016-05-09

    It has long been noted that larger animals have larger organs compared to smaller animals of the same species, a phenomenon termed scaling [1]. Julian Huxley proposed an appealingly simple model of "relative growth"-in which an organ and the whole body grow with their own intrinsic rates [2]-that was invoked to explain scaling in organs from fiddler crab claws to human brains. Because organ size is regulated by complex, unpredictable pathways [3], it remains unclear whether scaling requires feedback mechanisms to regulate organ growth in response to organ or body size. The molecular pathways governing organelle biogenesis are simpler than organogenesis, and therefore organelle size scaling in the cell provides a more tractable case for testing Huxley's model. We ask the question: is it possible for organelle size scaling to arise if organelle growth is independent of organelle or cell size? Using the yeast vacuole as a model, we tested whether mutants defective in vacuole inheritance, vac8Δ and vac17Δ, tune vacuole biogenesis in response to perturbations in vacuole size. In vac8Δ/vac17Δ, vacuole scaling increases with the replicative age of the cell. Furthermore, vac8Δ/vac17Δ cells continued generating vacuole at roughly constant rates even when they had significantly larger vacuoles compared to wild-type. With support from computational modeling, these results suggest there is no feedback between vacuole biogenesis rates and vacuole or cell size. Rather, size scaling is determined by the relative growth rates of the vacuole and the cell, thus representing a cellular version of Huxley's model. Copyright © 2016 Elsevier Ltd. All rights reserved.
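
    Huxley's model referred to above can be stated in two lines (standard allometry, included here for convenience rather than taken from the paper): if the organelle and the cell each grow at their own constant specific rates, with no feedback,

        \frac{1}{y}\frac{dy}{dt} = \alpha, \qquad \frac{1}{x}\frac{dx}{dt} = \beta
        \quad\Longrightarrow\quad
        \frac{d\ln y}{d\ln x} = \frac{\alpha}{\beta}
        \quad\Longrightarrow\quad
        y = a\,x^{\alpha/\beta},

    so a constant ratio of relative growth rates alone yields power-law size scaling, without the organelle needing to sense its own size or that of the cell.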

  8. UNDERSTANDING METHANE EMISSIONS SOURCES AND VIABLE MITIGATION MEASURES IN THE NATURAL GAS TRANSMISSION SYSTEMS: RUSSIAN AND U.S. EXPERIENCE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ishkov, A.; Akopova, Gretta; Evans, Meredydd

    This article will compare the natural gas transmission systems in the U.S. and Russia and review experience with methane mitigation technologies in the two countries. Russia and the United States (U.S.) are the world's largest consumers and producers of natural gas, and consequently, have some of the largest natural gas infrastructure. This paper compares the natural gas transmission systems in Russia and the U.S., their methane emissions and experiences in implementing methane mitigation technologies. Given the scale of the two systems, many international oil and natural gas companies have expressed interest in better understanding the methane emission volumes and trends as well as the methane mitigation options. This paper compares the two transmission systems and documents experiences in Russia and the U.S. in implementing technologies and programs for methane mitigation. The systems are inherently different. For instance, while the U.S. natural gas transmission system is represented by many companies, which operate pipelines with various characteristics, in Russia predominately one company, Gazprom, operates the gas transmission system. However, companies in both countries found that reducing methane emissions can be feasible and profitable. Examples of technologies in use include replacing wet seals with dry seals, implementing Directed Inspection and Maintenance (DI&M) programs, performing pipeline pump-down, applying composite wrap for non-leaking pipeline defects and installing low-bleed pneumatics. The research methodology for this paper involved a review of information on methane emissions trends and mitigation measures, analytical and statistical data collection; accumulation and analysis of operational data on compressor seals and other emission sources; and analysis of technologies used in both countries to mitigate methane emissions in the transmission sector. Operators of natural gas transmission systems have many options to reduce natural gas losses. Depending on the value of gas, simple, low-cost measures, such as adjusting leaking equipment components, or larger-scale measures, such as installing dry seals on compressors, can be applied.

  9. Science of Integrated Approaches to Natural Resources Management

    NASA Astrophysics Data System (ADS)

    Tengberg, Anna; Valencia, Sandra

    2017-04-01

    To meet multiple environmental objectives, integrated programming is becoming increasingly important for the Global Environmental Facility (GEF), the financial mechanism of the multilateral environmental agreements, including the United Nations Convention to Combat Desertification (UNCCD). Integration of multiple environmental, social and economic objectives also contributes to the achievement of the Sustainable Development Goals (SDGs) in a timely and cost-effective way. However, integration is often not well defined. This paper therefore focuses on identifying key aspects of integration and assessing their implementation in natural resources management (NRM) projects. To that end, we draw on systems thinking literature, and carry out an analysis of a random sample of GEF integrated projects and in-depth case studies demonstrating lessons learned and good practices in addressing land degradation and other NRM challenges. We identify numerous challenges and opportunities of integrated approaches that need to be addressed in order to maximise the catalytic impact of the GEF during problem diagnosis, project design, implementation and governance. We highlight the need for projects to identify clearer system boundaries and main feedback mechanisms within those boundaries, in order to effectively address drivers of environmental change. We propose a theory of change for Integrated Natural Resources Management (INRM) projects, where short-term environmental and socio-economic benefits will first accrue at the local level. Implementation of improved INRM technologies and practices at the local level can be extended through spatial planning, strengthening of innovation systems, and financing and incentive mechanisms at the watershed and/or landscape/seascape level to sustain and enhance ecosystem services at larger scales and longer time spans. We conclude that the evolving scientific understanding of factors influencing social, technical and institutional innovations and transitions towards sustainable management of natural resources should be harnessed and integrated into GEF's influencing models and theory of change, and be coupled with updated approaches for learning, adaptive management and scaling up.

  10. How Difficult is it to Reduce Low-Level Cloud Biases With the Higher-Order Turbulence Closure Approach in Climate Models?

    NASA Technical Reports Server (NTRS)

    Xu, Kuan-Man

    2015-01-01

    Low-level clouds cover nearly half of the Earth and play a critical role in regulating the energy and hydrological cycles. Despite the great effort put into advancing modeling and observational capability in recent years, low-level clouds remain one of the largest uncertainties in the projection of future climate change. Low-level cloud feedbacks dominate the uncertainty in the total cloud feedback in climate sensitivity and projection studies. These clouds are notoriously difficult to simulate in climate models due to their complicated interactions with aerosols, cloud microphysics, boundary-layer turbulence and cloud dynamics. The biases in both low-cloud coverage/water content and cloud radiative effects (CREs) remain large, and a simultaneous reduction in both cloud and CRE biases remains elusive. This presentation first reviews the effort of implementing the higher-order turbulence closure (HOC) approach to representing subgrid-scale turbulence and low-level cloud processes in climate models. Two HOCs have been implemented in climate models; they differ in how many third-order moments are used. CLUBB is implemented in both the CAM5 and GFDL models and is compared with IPHOC, which is implemented in CAM5 by our group. IPHOC uses three third-order moments while CLUBB uses only one; both use a joint double-Gaussian distribution to represent the subgrid-scale variability. Although HOC is more physically consistent and produces more realistic low-cloud geographic distributions and transitions between cumulus and stratocumulus regimes, GCMs with traditional cloud parameterizations perform better in CREs because this type of model has been tuned more extensively than those with HOCs. We perform several tuning experiments with CAM5 implemented with IPHOC in an attempt to produce nearly balanced global radiative budgets without deteriorating the low-cloud simulation. One of the issues in CAM5-IPHOC is that cloud water content is much higher than in CAM5, which combines with higher low-cloud coverage to produce larger shortwave CREs in some low-cloud-prevailing regions; the cloud-radiative feedbacks are thus exaggerated there. The tuning exercise is focused on microphysical parameters, which are also commonly used for tuning in climate models. The results will be discussed in this presentation.

  11. Using Stable Isotopes to Infer the Impacts of Habitat Change on the Diets and Vertical Stratification of Frugivorous Bats in Madagascar

    PubMed Central

    Reuter, Kim E.; Wills, Abigail R.; Lee, Raymond W.; Cordes, Erik E.; Sewall, Brent J.

    2016-01-01

    Human-modified habitats are expanding rapidly; many tropical countries have highly fragmented and degraded forests. Preserving biodiversity in these areas involves protecting species–like frugivorous bats–that are important to forest regeneration. Fruit bats provide critical ecosystem services including seed dispersal, but studies of how their diets are affected by habitat change have often been rather localized. This study used stable isotope analyses (δ15N and δ13C measurement) to examine how two fruit bat species in Madagascar, Pteropus rufus (n = 138) and Eidolon dupreanum (n = 52) are impacted by habitat change across a large spatial scale. Limited data for Rousettus madagascariensis are also presented. Our results indicated that the three species had broadly overlapping diets. Differences in diet were nonetheless detectable between P. rufus and E. dupreanum, and these diets shifted when they co-occurred, suggesting resource partitioning across habitats and vertical strata within the canopy to avoid competition. Changes in diet were correlated with a decrease in forest cover, though at a larger spatial scale in P. rufus than in E. dupreanum. These results suggest fruit bat species exhibit differing responses to habitat change, highlight the threats fruit bats face from habitat change, and clarify the spatial scales at which conservation efforts could be implemented. PMID:27097316

  12. The latest developments and outlook for hydrogen liquefaction technology

    NASA Astrophysics Data System (ADS)

    Ohlig, K.; Decker, L.

    2014-01-01

    Liquefied hydrogen is presently mainly used for space applications and the semiconductor industry. While clean energy applications, for e.g. the automotive sector, currently contribute to this demand with a small share only, their demand may see a significant boost in the next years with the need for large scale liquefaction plants exceeding the current plant sizes by far. Hydrogen liquefaction for small scale plants with a maximum capacity of 3 tons per day (tpd) is accomplished with a Brayton refrigeration cycle using helium as refrigerant. This technology is characterized by low investment costs but lower process efficiency and hence higher operating costs. For larger plants, a hydrogen Claude cycle is used, characterized by higher investment but lower operating costs. However, liquefaction plants meeting the potentially high demand in the clean energy sector will need further optimization with regard to energy efficiency and hence operating costs. The present paper gives an overview of the currently applied technologies, including their thermodynamic and technical background. Areas of improvement are identified to derive process concepts for future large scale hydrogen liquefaction plants meeting the needs of clean energy applications with optimized energy efficiency and hence minimized operating costs. Compared to studies in this field, this paper focuses on application of new technology and innovative concepts which are either readily available or will require short qualification procedures. They will hence allow implementation in plants in the close future.

  13. Direct pore-scale reactive transport modelling of dynamic wettability changes induced by surface complexation

    NASA Astrophysics Data System (ADS)

    Maes, Julien; Geiger, Sebastian

    2018-01-01

    Laboratory experiments have shown that oil production from sandstone and carbonate reservoirs by waterflooding could be significantly increased by manipulating the composition of the injected water (e.g. by lowering the ionic strength). Recent studies suggest that a change of wettability induced by a change in surface charge is likely to be one of the driving mechanism of the so-called low-salinity effect. In this case, the potential increase of oil recovery during waterflooding at low ionic strength would be strongly impacted by the inter-relations between flow, transport and chemical reaction at the pore-scale. Hence, a new numerical model that includes two-phase flow, solute reactive transport and wettability alteration is implemented based on the Direct Numerical Simulation of the Navier-Stokes equations and surface complexation modelling. Our model is first used to match experimental results of oil droplet detachment from clay patches. We then study the effect of wettability change on the pore-scale displacement for simple 2D calcite micro-models and evaluate the impact of several parameters such as water composition and injected velocity. Finally, we repeat the simulation experiments on a larger and more complex pore geometry representing a carbonate rock. Our simulations highlight two different effects of low-salinity on oil production from carbonate rocks: a smaller number of oil clusters left in the pores after invasion, and a greater number of pores invaded.

  14. Identification of attributes that promote the adoption and implementation of 2005 Dietary Guidelines for Americans

    USDA-ARS?s Scientific Manuscript database

    As part of a larger study, this research was to identify attributes of the Dietary Guidelines for Americans (DGAs) that would promote their adoption and implementation by participants in a nutrition intervention. Project procedures were guided by the Diffusion of Innovations (DOI) theory. To identif...

  15. Engaging with Faculty to Develop, Implement, and Pilot Electronic Performance Assessments of Student Teachers Using Mobile Devices

    ERIC Educational Resources Information Center

    Haughton, Noela A.; Keil, Virginia L.

    2009-01-01

    This article discusses the development and implementation of a technology-supported student teacher performance assessment that supports integration with a larger electronic assessment system. The authors spearheaded a multidisciplinary team to develop a comprehensive performance assessment based on the Pathwise framework. The team collaborated…

  16. Strengthening organizations to implement evidence-based clinical practices.

    PubMed

    VanDeusen Lukas, Carol; Engle, Ryann L; Holmes, Sally K; Parker, Victoria A; Petzel, Robert A; Nealon Seibert, Marjorie; Shwartz, Michael; Sullivan, Jennifer L

    2010-01-01

    Despite recognition that implementation of evidence-based clinical practices (EBPs) usually depends on the structure and processes of the larger health care organizational context, the dynamics of implementation are not well understood. This project's aim was to deepen that understanding by implementing and evaluating an organizational model hypothesized to strengthen the ability of health care organizations to facilitate EBPs. CONCEPTUAL MODEL: The model posits that implementation of EBPs will be enhanced through the presence of three interacting components: active leadership commitment to quality, robust clinical process redesign incorporating EBPs into routine operations, and use of management structures and processes to support and align redesign. In a mixed-methods longitudinal comparative case study design, seven medical centers in one network in the Department of Veterans Affairs participated in an intervention to implement the organizational model over 3 years. The network was selected randomly from three interested in using the model. The target EBP was hand-hygiene compliance. Measures included ratings of implementation fidelity, observed hand-hygiene compliance, and factors affecting model implementation drawn from interviews. Analyses support the hypothesis that greater fidelity to the organizational model was associated with higher compliance with hand-hygiene guidelines. High-fidelity sites showed larger effect sizes for improvement in hand-hygiene compliance than lower-fidelity sites. Adherence to the organizational model was in turn affected by factors in three categories: urgency to improve, organizational environment, and improvement climate. Implementation of EBPs, particularly those that cut across multiple processes of care, is a complex process with many possibilities for failure. The results provide the basis for a refined understanding of relationships among components of the organizational model and factors in the organizational context affecting them. This understanding suggests practical lessons for future implementation efforts and contributes to theoretical understanding of the dynamics of the implementation of EBPs.

  17. Australian smokers' support for plain or standardised packs before and after implementation: findings from the ITC Four Country Survey.

    PubMed

    Swift, Elena; Borland, Ron; Cummings, K Michael; Fong, Geoffrey T; McNeill, Ann; Hammond, David; Thrasher, James F; Partos, Timea R; Yong, Hua-Hie

    2015-11-01

    Plain packaging (PP) for tobacco products was fully implemented in Australia on 1 December 2012 along with larger graphic health warnings. Using longitudinal data from the Australian arm of the ITC Four Country Survey, we examined attitudes to the new packs before and after implementation, predictors of attitudinal change, and the relationship between support and quitting activity. A population-based cohort study design, with some cross-sectional analyses. Surveys of Australian smokers assessed attitudes to PP at four time points prior to implementation (from 2007 to 2012) and one post-implementation wave collected (early/mid-2013). Trend analysis showed a slight rise in opposition to PP among smokers in the waves leading up to their implementation, but no change in support. Support for PP increased significantly after implementation (28.2% pre vs 49% post), such that post-PP more smokers were supportive than opposed (49% vs 34.7%). Multivariate analysis showed support either before or after implementation was predicted by belief in greater adverse health impacts of smoking, desire to quit and lower addiction. Among those not supportive before implementation, having no clear opinion about PP (versus being opposed) prior to the changes also predicted support post-implementation. Support for PP was prospectively associated with higher levels of quitting activity. Since implementation of PP along with larger warnings, support among Australian smokers has increased. Support is related to lower addiction, stronger beliefs in the negative health impacts of smoking, and higher levels of quitting activity. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  18. Going to Scale: Experiences Implementing a School-Based Trauma Intervention

    ERIC Educational Resources Information Center

    Nadeem, Erum; Jaycox, Lisa H.; Kataoka, Sheryl H.; Langley, Audra K.; Stein, Bradley D.

    2011-01-01

    This article describes implementation experiences "scaling up" the Cognitive Behavioral Intervention for Trauma in Schools (CBITS)--an intervention developed using a community partnered research framework. Case studies from two sites that have successfully implemented CBITS are used to examine macro- and school-level implementation…

  19. Between a Map and a Data Rod

    NASA Technical Reports Server (NTRS)

    Teng, William; Rui, Hualan; Strub, Richard; Vollmer, Bruce

    2015-01-01

    A Digital Divide has long stood between how NASA and other satellite-derived data are typically archived (time-step arrays or maps) and how hydrology and other point-time series oriented communities prefer to access those data. In essence, the desired method of data access is orthogonal to the way the data are archived. Our approach to bridging the Divide is part of a larger NASA-supported data rods project to enhance access to and use of NASA and other data by the Consortium of Universities for the Advancement of Hydrologic Science, Inc. (CUAHSI) Hydrologic Information System (HIS) and the larger hydrology community. Our main objective was to determine a way to reorganize data that is optimal for these communities. Two related objectives were to optimally reorganize data in a way that (1) is operational and fits in and leverages the existing Goddard Earth Sciences Data and Information Services Center (GES DISC) operational environment and (2) addresses the scaling up of data sets available as time series from those archived at the GES DISC to potentially include those from other Earth Observing System Data and Information System (EOSDIS) data archives. Through several prototype efforts and lessons learned, we arrived at a non-database solution that satisfied our objectives/constraints. We describe, in this presentation, how we implemented the operational production of pre-generated data rods and, considering the tradeoffs between length of time series (or number of time steps), resources needed, and performance, how we implemented the operational production of on-the-fly (virtual) data rods. For the virtual data rods, we leveraged a number of existing resources, including the NASA Giovanni Cache and NetCDF Operators (NCO), and used data cubes processed in parallel. Our current benchmark performance for virtual generation of data rods is about a year's worth of time series for hourly data (about 9,000 time steps) in about 90 seconds. Our approach is a specific implementation of the general optimal strategy of reorganizing data to match the desired means of access. Results from our project have already significantly extended NASA data to the large and important hydrology user community that has been, heretofore, mostly unable to easily access and use NASA data.
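
    The core reorganization is simple to picture (a minimal NumPy sketch of the idea, ours rather than the GES DISC production code): a year of hourly maps stored time-step-major is transposed so that each grid point's full time series, its data rod, is contiguous and can be read without touching the rest of the grid.

      import numpy as np

      n_t, n_lat, n_lon = 8760, 100, 100                       # hourly for one year; coarse grid
      maps = np.zeros((n_t, n_lat, n_lon), dtype=np.float32)   # "map" (time-step-major) layout

      rods = np.ascontiguousarray(maps.transpose(1, 2, 0))     # point-major ("data rod") layout

      series = rods[45, 60]                                    # one point's 8760-value time series
      assert series.shape == (n_t,) and series.flags["C_CONTIGUOUS"]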

  20. Between a Map and a Data Rod

    NASA Astrophysics Data System (ADS)

    Teng, W. L.; Rui, H.; Strub, R. F.; Vollmer, B.

    2015-12-01

    A "Digital Divide" has long stood between how NASA and other satellite-derived data are typically archived (time-step arrays or "maps") and how hydrology and other point-time series oriented communities prefer to access those data. In essence, the desired method of data access is orthogonal to the way the data are archived. Our approach to bridging the Divide is part of a larger NASA-supported "data rods" project to enhance access to and use of NASA and other data by the Consortium of Universities for the Advancement of Hydrologic Science, Inc. (CUAHSI) Hydrologic Information System (HIS) and the larger hydrology community. Our main objective was to determine a way to reorganize data that is optimal for these communities. Two related objectives were to optimally reorganize data in a way that (1) is operational and fits in and leverages the existing Goddard Earth Sciences Data and Information Services Center (GES DISC) operational environment and (2) addresses the scaling up of data sets available as time series from those archived at the GES DISC to potentially include those from other Earth Observing System Data and Information System (EOSDIS) data archives. Through several prototype efforts and lessons learned, we arrived at a non-database solution that satisfied our objectives/constraints. We describe, in this presentation, how we implemented the operational production of pre-generated data rods and, considering the tradeoffs between length of time series (or number of time steps), resources needed, and performance, how we implemented the operational production of on-the-fly ("virtual") data rods. For the virtual data rods, we leveraged a number of existing resources, including the NASA Giovanni Cache and NetCDF Operators (NCO) and used data cubes processed in parallel. Our current benchmark performance for virtual generation of data rods is about a year's worth of time series for hourly data (~9,000 time steps) in ~90 seconds. Our approach is a specific implementation of the general optimal strategy of reorganizing data to match the desired means of access. Results from our project have already significantly extended NASA data to the large and important hydrology user community that has been, heretofore, mostly unable to easily access and use NASA data.

  1. Sources and Loading of Nitrogen to U.S. Estuaries

    EPA Science Inventory

    Previous assessments of land-based nitrogen loading and sources to U.S. estuaries have been limited to estimates for larger systems with watersheds at the scale of 8-digit HUCs and larger, in part due to the coarse resolution of available data, including estuarine watershed bound...

  2. Is orbital volume associated with eyeball and visual cortex volume in humans?

    PubMed

    Pearce, Eiluned; Bridge, Holly

    2013-01-01

    In humans orbital volume increases linearly with absolute latitude. Scaling across mammals between visual system components suggests that these larger orbits should translate into larger eyes and visual cortices in high latitude humans. Larger eyes at high latitudes may be required to maintain adequate visual acuity and enhance visual sensitivity under lower light levels. To test the assumption that orbital volume can accurately index eyeball and visual cortex volumes specifically in humans. Structural Magnetic Resonance Imaging (MRI) techniques are employed to measure eye and orbit (n = 88) and brain and visual cortex (n = 99) volumes in living humans. Facial dimensions and foramen magnum area (a proxy for body mass) were also measured. A significant positive linear relationship was found between (i) orbital and eyeball volumes, (ii) eyeball and visual cortex grey matter volumes and (iii) different visual cortical areas, independently of overall brain volume. In humans the components of the visual system scale from orbit to eye to visual cortex volume independently of overall brain size. These findings indicate that orbit volume can index eye and visual cortex volume in humans, suggesting that larger high latitude orbits do translate into larger visual cortices.

  3. Is orbital volume associated with eyeball and visual cortex volume in humans?

    PubMed Central

    Pearce, Eiluned; Bridge, Holly

    2013-01-01

    Background In humans orbital volume increases linearly with absolute latitude. Scaling across mammals between visual system components suggests that these larger orbits should translate into larger eyes and visual cortices in high latitude humans. Larger eyes at high latitudes may be required to maintain adequate visual acuity and enhance visual sensitivity under lower light levels. Aim To test the assumption that orbital volume can accurately index eyeball and visual cortex volumes specifically in humans. Subjects & Methods Structural Magnetic Resonance Imaging (MRI) techniques are employed to measure eye and orbit (N=88), and brain and visual cortex (N=99) volumes in living humans. Facial dimensions and foramen magnum area (a proxy for body mass) were also measured. Results A significant positive linear relationship was found between (i) orbital and eyeball volumes, (ii) eyeball and visual cortex grey matter volumes, (iii) different visual cortical areas, independently of overall brain volume. Conclusion In humans the components of the visual system scale from orbit to eye to visual cortex volume independently of overall brain size. These findings indicate that orbit volume can index eye and visual cortex volume in humans, suggesting that larger high latitude orbits do translate into larger visual cortices. PMID:23879766

  4. Primary Accretion and Turbulent Cascades: Scale-Dependence of Particle Concentration Multiplier Probability Distribution Functions

    NASA Astrophysics Data System (ADS)

    Cuzzi, Jeffrey N.; Weston, B.; Shariff, K.

    2013-10-01

    Primitive bodies with 10s-100s of km diameter (or even larger) may form directly from small nebula constituents, bypassing the step-by-step “incremental growth” that faces a variety of barriers at cm, m, and even 1-10km sizes. In the scenario of Cuzzi et al (Icarus 2010 and LPSC 2012; see also Chambers Icarus 2010) the immediate precursors of 10-100km diameter asteroid formation are dense clumps of chondrule-(mm-) size objects. These predictions utilize a so-called cascade model, which is popular in turbulence studies. One of its usual assumptions is that certain statistical properties of the process (the so-called multiplier pdfs p(m)) are scale-independent within a cascade of energy from large eddy scales to smaller scales. In similar analyses, Pan et al (2011 ApJ) found discrepancies with results of Cuzzi and coworkers; one possibility was that p(m) for particle concentration is not scale-independent. To assess the situation we have analyzed recent 3D direct numerical simulations of particles in turbulence covering a much wider range of scales than analyzed by either Cuzzi and coworkers or by Pan and coworkers (see Bec et al 2010, J. Flu. Mech 646, 527). We calculated p(m) at scales ranging from 45-1024η, where η is the Kolmogorov scale, both for particles with a range of stopping times spanning the optimum value and for energy dissipation in the fluid. For comparison, the p(m) for dissipation have been observed to be scale-independent in atmospheric flows (at much larger Reynolds number) for scales of at least 30-3000η. We found that, in the numerical simulations, the multiplier distributions for both particle concentration and fluid dissipation are as expected at scales of tens of η, but both become narrower and less intermittent at larger scales. This is consistent with observations of atmospheric flows showing scale independence to >3000η if scale-free behavior is established only after some number (of order 10) of large-scale bifurcations (at scales perhaps 10x smaller than the largest scales in the flow), with the cascade becoming scale-free at smaller scales. Predictions of primitive body initial mass functions can now be redone using a slightly modified cascade model.
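
    To make the multiplier statistics concrete, the toy 1-D sketch below shows how p(m) is typically measured: coarse-grain a field over nested intervals and record the fraction m of each parent interval's total that falls into one of its two children, repeating at several parent scales. The synthetic lognormal field and the specific levels are placeholders; the study itself analyzes 3-D DNS fields between 45 and 1024 η.

      # Toy 1-D illustration of cascade multipliers: m is the fraction of a
      # parent interval's "mass" (e.g., particle concentration) that lies in
      # its left child. The synthetic field below is a stand-in, not DNS data.
      import numpy as np

      rng = np.random.default_rng(0)
      field = rng.lognormal(mean=0.0, sigma=1.0, size=4096)

      def multipliers(x, level):
          """Multipliers at one coarse-graining level (parent = 2**level bins)."""
          parent = x.reshape(-1, 2 ** level).sum(axis=1)
          left_child = x.reshape(-1, 2 ** (level - 1)).sum(axis=1)[0::2]
          return left_child / parent

      for level in (2, 4, 6):  # increasing parent scale
          m = multipliers(field, level)
          print(f"level {level}: mean m = {m.mean():.3f}, std m = {m.std():.3f}")

    Scale independence would show up as p(m) (summarized here by its spread) being the same at every level; the narrowing reported above corresponds to the spread shrinking at larger parent scales.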

  5. Implementation of Goal Attainment Scaling in Community Intellectual Disability Services

    ERIC Educational Resources Information Center

    Chapman, Melanie; Burton, Mark; Hunt, Victoria; Reeves, David

    2006-01-01

    The authors describe the evaluation of the implementation of an outcome measurement system (Goal Attainment Scaling-GAS) within the context of an interdisciplinary and interagency intellectual disability services setting. The GAS database allowed analysis of follow-up goals and indicated the extent of implementation, while a rater study evaluated…

  6. Empirical Comparison of Visualization Tools for Larger-Scale Network Analysis

    DOE PAGES

    Pavlopoulos, Georgios A.; Paez-Espino, David; Kyrpides, Nikos C.; ...

    2017-07-18

    Gene expression, signal transduction, protein/chemical interactions, biomedical literature cooccurrences, and other concepts are often captured in biological network representations where nodes represent a certain bioentity and edges the connections between them. While many tools to manipulate, visualize, and interactively explore such networks already exist, only few of them can scale up and follow today’s indisputable information growth. In this review, we shortly list a catalog of available network visualization tools and, from a user-experience point of view, we identify four candidate tools suitable for larger-scale network analysis, visualization, and exploration. Lastly, we comment on their strengths and their weaknesses and empirically discuss their scalability, user friendliness, and postvisualization capabilities.

  7. Empirical Comparison of Visualization Tools for Larger-Scale Network Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pavlopoulos, Georgios A.; Paez-Espino, David; Kyrpides, Nikos C.

    Gene expression, signal transduction, protein/chemical interactions, biomedical literature cooccurrences, and other concepts are often captured in biological network representations where nodes represent a certain bioentity and edges the connections between them. While many tools to manipulate, visualize, and interactively explore such networks already exist, only few of them can scale up and follow today’s indisputable information growth. In this review, we shortly list a catalog of available network visualization tools and, from a user-experience point of view, we identify four candidate tools suitable for larger-scale network analysis, visualization, and exploration. Lastly, we comment on their strengths and their weaknesses and empirically discuss their scalability, user friendliness, and postvisualization capabilities.

  8. A feasibility study for the application of seismic interferometry by multidimensional deconvolution for lithospheric-scale imaging

    NASA Astrophysics Data System (ADS)

    Ruigrok, Elmer; van der Neut, Joost; Djikpesse, Hugues; Chen, Chin-Wu; Wapenaar, Kees

    2010-05-01

    Active-source surveys are widely used for the delineation of hydrocarbon accumulations. Most source and receiver configurations are designed to illuminate the first 5 km of the earth. For a deep understanding of the evolution of the crust, much larger depths need to be illuminated. The use of large-scale active surveys is feasible, but rather costly. As an alternative, we use passive acquisition configurations, aiming at detecting responses from distant earthquakes, in combination with seismic interferometry (SI). SI refers to the principle of generating new seismic responses by combining seismic observations at different receiver locations. We apply SI to the earthquake responses to obtain responses as if there was a source at each receiver position in the receiver array. These responses are subsequently migrated to obtain an image of the lithosphere. Conventionally, SI is applied by a crosscorrelation of responses. Recently, an alternative implementation was proposed as SI by multidimensional deconvolution (MDD) (Wapenaar et al. 2008). SI by MDD compensates both for the source-sampling and the source wavelet irregularities. Another advantage is that the MDD relation also holds for media with severe anelastic losses. A severe restriction for the implementation of MDD, though, was the need to estimate responses without free-surface interaction from the earthquake responses. To mitigate this restriction, Groenestijn and Verschuur (2009) proposed to introduce the incident wavefield as an additional unknown in the inversion process. As an alternative solution, van der Neut et al. (2010) showed that the required wavefield separation may be implemented after a crosscorrelation step. These last two approaches facilitate the application of MDD for lithospheric-scale imaging. In this work, we study the feasibility of implementing MDD when considering teleseismic wavefields. We address specific problems for teleseismic wavefields, such as long and complicated source wavelets, source-side reverberations and illumination gaps. We exemplify the feasibility of SI by MDD on synthetic data, based on field data from the Laramie and the POLARIS-MIT array. van Groenestijn, G.J.A. & Verschuur, D.J., 2009. Estimation of primaries by sparse inversion from passive seismic data, Expanded abstracts, 1597-1601, SEG. van der Neut, J.R, Ruigrok, E.N., Draganov, D.S., & Wapenaar, K., 2010. Retrieving the earth's reflection response by multi-dimensional deconvolution of ambient seismic noise, Extended abstracts, submitted, EAGE. Wapenaar, K., van der Neut, J., & Ruigrok, E.N., 2008. Passive seismic interferometry by multidimensional deconvolution, Geophysics, 75, A51-A56.
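
    The contrast between the two SI implementations can be sketched in the frequency domain. This is a schematic only (the notation is simplified, and the actual formulation in Wapenaar et al. 2008 separates in- and out-going wavefields): SI by crosscorrelation stacks correlated recordings over sources s,

      $\hat{G}_{\mathrm{corr}}(x_B, x_A, \omega) \;\approx\; \sum_{s} u(x_B, \omega; s)\, u^{*}(x_A, \omega; s),$

    whereas SI by MDD treats the recordings as a convolutional (matrix) relation,

      $u(x_B, \omega; s) \;=\; \sum_{x_A} \bar{G}(x_B, x_A, \omega)\, \bar{u}(x_A, \omega; s),$

    and solves for $\bar{G}$ by regularized least-squares inversion over all available sources, which is why MDD can compensate for the source wavelet and for irregular source sampling.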

  9. Termination Shock Transition in Multi-ion Multi-fluid MHD Models of the Heliosphere

    NASA Astrophysics Data System (ADS)

    Zieger, B.; Opher, M.; Toth, G.

    2013-12-01

    As evidenced by Voyager 2 observations, pickup ions (PUIs) play a significant role in the termination shock (TS) transition of the solar wind [Richardson et al., Nature, 2008]. Recent kinetic simulations [Ariad and Gedalin, JGR, 2013] came to the conclusion that the contribution of the high energy tail of PUIs is negligible at the shock transition. The Rankine-Hugoniot (R-H) relations are determined by the low energy body of PUIs. Particle-in-cell simulations by Wu et al. [JGR, 2010] have shown that the sum of the thermal solar wind and non-thermal PUI distributions downstream of the TS can be approximated with a 2-Maxwellian distribution. It is important to note that this 2-Maxwellian distribution neglects the suprathermal tail population that has a characteristic power-law distribution. These results justify the fluid description of PUIs in our large-scale multi-ion multi-fluid MHD simulations of the heliospheric interface [Prested et al., JGR, 2013; Zieger et al., GRL, 2013]. The closure of the multi-ion MHD equations could be implemented with separate momentum and energy equations for the different ion species (thermal solar wind and PUIs), where the transfer rates of momentum and energy between the two ion species are considered as source terms, as in Glocer et al. [JGR, 2009]. Another option is to solve for the total energy equation with an additional equation for the PUI pressure, as suggested by Fahr and Chalov [A&A, 2008]. In this paper, we validate the energy conservation and the R-H relations across the TS in different numerical implementations of our latest multi-ion multi-fluid MHD model. We assume an instantaneous pickup process, where the convection velocity of the two ion fluids is the same, and the so-called strong scattering approximation, where newly born PUIs attain their spherical shell distribution within a short distance on fluid scales (spatial scales much larger than the respective ion gyroradius).
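
    For orientation, the jump conditions being validated are of Rankine-Hugoniot type. The familiar single-fluid, unmagnetized form expresses continuity of the mass, momentum, and energy fluxes across the shock; the multi-ion MHD closures compared above generalize these with magnetic terms and a separate (or additional) pickup-ion pressure, which are not written here:

      $[\rho u_n] = 0, \qquad [\rho u_n^{2} + p] = 0, \qquad \left[ u_n \left( \tfrac{1}{2}\rho u^{2} + \tfrac{\gamma}{\gamma - 1}\, p \right) \right] = 0,$

    where $[\,\cdot\,]$ denotes the downstream-minus-upstream jump and $u_n$ is the flow speed normal to the shock in the shock rest frame.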

  10. Architectural Optimization of Digital Libraries

    NASA Technical Reports Server (NTRS)

    Biser, Aileen O.

    1998-01-01

    This work investigates performance and scaling issues relevant to large scale distributed digital libraries. Presently, performance and scaling studies focus on specific implementations of production or prototype digital libraries. Although useful information is gained to aid these designers and other researchers with insights into performance and scaling issues, the broader issues relevant to very large scale distributed libraries are not addressed. Specifically, no current studies look at the extreme or worst case possibilities in digital library implementations. A survey of digital library research issues is presented. Scaling and performance issues are mentioned frequently in the digital library literature but are generally not the focus of much of the current research. In this thesis a model for a Generic Distributed Digital Library (GDDL) and nine cases of typical user activities are defined. This model is used to facilitate some basic analysis of scaling issues: specifically, the calculation of Internet traffic generated for different configurations of the study parameters and an estimate of the future bandwidth needed for a large scale distributed digital library implementation. This analysis demonstrates the potential impact a future distributed digital library implementation would have on the Internet traffic load and raises questions concerning the architecture decisions being made for future distributed digital library designs.

  11. Religion and Wellbeing: Concurrent Validation of the Spiritual Well-Being Scale.

    ERIC Educational Resources Information Center

    Bufford, Rodger K.; Parker, Thomas G., Jr.

    This study was designed to explore the concurrent validity of the Spiritual Well-being Scale (SWB). Ninety first-year student volunteers at an evangelical seminary served as subjects. As part of a larger study, the students completed the SWB and the Interpersonal Behavior Survey (IBS). The SWB Scale is a 20-item self-report scale. Ten items…

  12. Urbanisation at multiple scales is associated with larger size and higher fecundity of an orb-weaving spider.

    PubMed

    Lowe, Elizabeth C; Wilder, Shawn M; Hochuli, Dieter F

    2014-01-01

    Urbanisation modifies landscapes at multiple scales, impacting the local climate and changing the extent and quality of natural habitats. These habitat modifications significantly alter species distributions and can result in increased abundance of select species which are able to exploit novel ecosystems. We examined the effect of urbanisation at local and landscape scales on the body size, lipid reserves and ovary weight of Nephila plumipes, an orb weaving spider commonly found in both urban and natural landscapes. Habitat variables at landscape, local and microhabitat scales were integrated to create a series of indexes that quantified the degree of urbanisation at each site. Spider size was negatively associated with vegetation cover at a landscape scale, and positively associated with hard surfaces and anthropogenic disturbance on a local and microhabitat scale. Ovary weight increased in higher socioeconomic areas and was positively associated with hard surfaces and leaf litter at a local scale. The larger size and increased reproductive capacity of N. plumipes in urban areas show that some species benefit from the habitat changes associated with urbanisation. Our results also highlight the importance of incorporating environmental variables from multiple scales when quantifying species responses to landscape modification.

  13. Urbanisation at Multiple Scales Is Associated with Larger Size and Higher Fecundity of an Orb-Weaving Spider

    PubMed Central

    Lowe, Elizabeth C.; Wilder, Shawn M.; Hochuli, Dieter F.

    2014-01-01

    Urbanisation modifies landscapes at multiple scales, impacting the local climate and changing the extent and quality of natural habitats. These habitat modifications significantly alter species distributions and can result in increased abundance of select species which are able to exploit novel ecosystems. We examined the effect of urbanisation at local and landscape scales on the body size, lipid reserves and ovary weight of Nephila plumipes, an orb weaving spider commonly found in both urban and natural landscapes. Habitat variables at landscape, local and microhabitat scales were integrated to create a series of indexes that quantified the degree of urbanisation at each site. Spider size was negatively associated with vegetation cover at a landscape scale, and positively associated with hard surfaces and anthropogenic disturbance on a local and microhabitat scale. Ovary weight increased in higher socioeconomic areas and was positively associated with hard surfaces and leaf litter at a local scale. The larger size and increased reproductive capacity of N. plumipes in urban areas show that some species benefit from the habitat changes associated with urbanisation. Our results also highlight the importance of incorporating environmental variables from multiple scales when quantifying species responses to landscape modification. PMID:25140809

  14. Strategies for implementing Climate Smart Agriculture and creating marketable Greenhouse emission reduction credits, for small scale rice farmers in Asia

    NASA Astrophysics Data System (ADS)

    Ahuja, R.; Kritee, K.; Rudek, J.; Van Sanh, N.; Thu Ha, T.

    2014-12-01

    Industrial agriculture systems, mostly in developed and some emerging economies, are far different from the small holder farms that dot the landscapes in Asia and Africa. At Environmental Defense Fund, along with our partners from non-governmental, corporate, academic and government sectors and farmers, we have worked actively in India and Vietnam for the last four years to better understand how small scale farmers working on rice paddy (and other upland crops) cultivation can best deal with climate change. Some of the questions we have tried to answer are: What types of implementable best practices, both old and new, on small farm systems lend themselves to improved yields, farm incomes, climate resilience and mitigation? Can these practices be replicated everywhere or is the change more landscape and people driven? What are the institutional, cultural, financial and risk-perception related barriers that prevent scaling up of these practices? How do we innovate and overcome these barriers? The research community needs to work more closely together and leverage multiple scientific, economic and policy disciplines to fully answer these questions. In the case of small farm systems, we find that it helps to follow certain steps if the climate-smart (or low carbon) farming programs are to succeed and the greenhouse credits generated are to be marketed: (1) demographic data collection and plot demarcation; (2) farmer networks and diaries; (3) rigorous baseline determination via surveys; (4) alternative practice determination via consultation with local universities/experts; (5) measurements on representative plots for 3-4 years (including GHG emissions, yields, inputs, economic and environmental savings) to help calibrate biogeochemical models and/or calculate regional emission factors; (6) propagation of alternative practices across the landscape via local NGOs/governments; and (7) recording of parameters necessary to extrapolate representative-plot GHG emission reductions to all farmers in a given landscape under several existing and new carbon offset methodologies. In this presentation, we will discuss our initial encouraging results, on the basis of which our wider team now seeks to identify and recommend policies that would enable local governments to scale up climate smart agriculture to larger jurisdictional levels.

  15. Microprocessor-Based Systems Control for the Rigidized Inflatable Get-Away-Special Experiment

    DTIC Science & Technology

    2004-03-01

    communications and faster data throughput increase, satellites are becoming larger. Larger satellite antennas help to provide the needed gain to...increase communications in space. Compounding the performance and size trade-offs are the payload weight and size limit imposed by the launch vehicles...increased communications capacity, and reduce launch costs. This thesis develops and implements the computer control system and power system to

  16. The AppScale Cloud Platform

    PubMed Central

    Krintz, Chandra

    2013-01-01

    AppScale is an open source distributed software system that implements a cloud platform as a service (PaaS). AppScale makes cloud applications easy to deploy and scale over disparate cloud fabrics, implementing a set of APIs and architecture that also makes apps portable across the services they employ. AppScale is API-compatible with Google App Engine (GAE) and thus executes GAE applications on-premise or over other cloud infrastructures, without modification. PMID:23828721
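
    The portability claim can be illustrated with a minimal App Engine application of the kind AppScale targets (the Python 2.7/webapp2 generation of GAE). Nothing below is AppScale-specific, which is the point: per the abstract, such an app, together with its usual app.yaml descriptor, should run on AppScale without modification. The handler is a generic example, not code from the AppScale project.

      # main.py -- minimal Google App Engine (Python 2.7 runtime) application.
      import webapp2

      class MainPage(webapp2.RequestHandler):
          def get(self):
              self.response.headers['Content-Type'] = 'text/plain'
              self.response.write('Hello from a portable GAE app')

      app = webapp2.WSGIApplication([('/', MainPage)], debug=True)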

  17. Maximum entropy production allows a simple representation of heterogeneity in semiarid ecosystems.

    PubMed

    Schymanski, Stanislaus J; Kleidon, Axel; Stieglitz, Marc; Narula, Jatin

    2010-05-12

    Feedbacks between water use, biomass and infiltration capacity in semiarid ecosystems have been shown to lead to the spontaneous formation of vegetation patterns in a simple model. The formation of patterns permits the maintenance of larger overall biomass at low rainfall rates compared with homogeneous vegetation. This results in a bias in models that are run at larger scales and neglect subgrid-scale variability. In the present study, we investigate whether subgrid-scale heterogeneity can be parameterized as the outcome of optimal partitioning between bare soil and vegetated area. We find that a two-box model reproduces the time-averaged biomass of the patterns emerging in a 100 x 100 grid model if the vegetated fraction is optimized for maximum entropy production (MEP). This suggests that the proposed optimality-based representation of subgrid-scale heterogeneity may be generally applicable to different systems and at different scales. The implications for our understanding of self-organized behaviour and its modelling are discussed.
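
    The optimization step can be caricatured as a one-parameter search: choose the vegetated fraction f that maximizes an entropy-production proxy. The functional forms in the sketch below are invented purely for illustration (they are not the paper's two-box model); only the structure of the calculation, a scalar objective maximized over f in [0, 1], reflects the approach.

      # Toy illustration of the optimization step only. The functions are made up;
      # they are NOT the two-box model of the paper.
      import numpy as np

      def entropy_production_proxy(f, rainfall=1.0):
          # Hypothetical trade-off: transpiration rises with vegetated area, but
          # water available per unit vegetated area falls, giving an interior optimum.
          transpiration = rainfall * (1.0 - np.exp(-3.0 * f))
          efficiency = 1.0 - f
          return transpiration * efficiency

      f_grid = np.linspace(0.0, 1.0, 1001)
      f_opt = f_grid[np.argmax(entropy_production_proxy(f_grid))]
      print(f"toy optimal vegetated fraction: {f_opt:.2f}")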

  18. Laboratory generated M -6 earthquakes

    USGS Publications Warehouse

    McLaskey, Gregory C.; Kilgore, Brian D.; Lockner, David A.; Beeler, Nicholas M.

    2014-01-01

    We consider whether mm-scale earthquake-like seismic events generated in laboratory experiments are consistent with our understanding of the physics of larger earthquakes. This work focuses on a population of 48 very small shocks that are foreshocks and aftershocks of stick–slip events occurring on a 2.0 m by 0.4 m simulated strike-slip fault cut through a large granite sample. Unlike the larger stick–slip events that rupture the entirety of the simulated fault, the small foreshocks and aftershocks are contained events whose properties are controlled by the rigidity of the surrounding granite blocks rather than characteristics of the experimental apparatus. The large size of the experimental apparatus, high fidelity sensors, rigorous treatment of wave propagation effects, and in situ system calibration separates this study from traditional acoustic emission analyses and allows these sources to be studied with as much rigor as larger natural earthquakes. The tiny events have short (3–6 μs) rise times and are well modeled by simple double couple focal mechanisms that are consistent with left-lateral slip occurring on a mm-scale patch of the precut fault surface. The repeatability of the experiments indicates that they are the result of frictional processes on the simulated fault surface rather than grain crushing or fracture of fresh rock. Our waveform analysis shows no significant differences (other than size) between the M -7 to M -5.5 earthquakes reported here and larger natural earthquakes. Their source characteristics such as stress drop (1–10 MPa) appear to be entirely consistent with earthquake scaling laws derived for larger earthquakes.

  19. Measuring the topology of large-scale structure in the universe

    NASA Technical Reports Server (NTRS)

    Gott, J. Richard, III

    1988-01-01

    An algorithm for quantitatively measuring the topology of large-scale structure has now been applied to a large number of observational data sets. The present paper summarizes and provides an overview of some of these observational results. On scales significantly larger than the correlation length, larger than about 1200 km/s, the cluster and galaxy data are fully consistent with a sponge-like random phase topology. At a smoothing length of about 600 km/s, however, the observed genus curves show a small shift in the direction of a meatball topology. Cold dark matter (CDM) models show similar shifts at these scales but not generally as large as those seen in the data. Bubble models, with voids completely surrounded on all sides by wall of galaxies, show shifts in the opposite direction. The CDM model is overall the most successful in explaining the data.

  20. Measuring the topology of large-scale structure in the universe

    NASA Astrophysics Data System (ADS)

    Gott, J. Richard, III

    1988-11-01

    An algorithm for quantitatively measuring the topology of large-scale structure has now been applied to a large number of observational data sets. The present paper summarizes and provides an overview of some of these observational results. On scales significantly larger than the correlation length, larger than about 1200 km/s, the cluster and galaxy data are fully consistent with a sponge-like random phase topology. At a smoothing length of about 600 km/s, however, the observed genus curves show a small shift in the direction of a meatball topology. Cold dark matter (CDM) models show similar shifts at these scales but not generally as large as those seen in the data. Bubble models, with voids completely surrounded on all sides by wall of galaxies, show shifts in the opposite direction. The CDM model is overall the most successful in explaining the data.

  1. Prediction skill of tropical synoptic scale transients from ECMWF and NCEP ensemble prediction systems

    DOE PAGES

    Taraphdar, S.; Mukhopadhyay, P.; Leung, L. Ruby; ...

    2016-12-05

    The prediction skill of tropical synoptic scale transients (SSTR), such as monsoon lows and depressions, during the boreal summer of 2007–2009 is assessed using high resolution ECMWF and NCEP TIGGE forecast data. By analyzing 246 forecasts for lead times up to 10 days, it is found that the models have good skills in forecasting the planetary scale means but the skills of SSTR remain poor, with the latter showing no skill beyond 2 days for the global tropics and Indian region. Consistent forecast skills among precipitation, velocity potential, and vorticity provide evidence that convection is the primary process responsible for precipitation. The poor skills of SSTR can be attributed to the larger random error in the models as they fail to predict the locations and timings of SSTR. Strong correlation between the random error and synoptic precipitation suggests that the former starts to develop from regions of convection. As the NCEP model has larger biases of synoptic scale precipitation, it has a tendency to generate more random error that ultimately reduces the prediction skill of synoptic systems in that model. Finally, the larger biases in NCEP may be attributed to the model moist physics and/or coarser horizontal resolution compared to ECMWF.

  2. Constraints on muscle performance provide a novel explanation for the scaling of posture in terrestrial animals.

    PubMed

    Usherwood, James R

    2013-08-23

    Larger terrestrial animals tend to support their weight with more upright limbs. This makes structural sense, reducing the loading on muscles and bones, which is disproportionately challenging in larger animals. However, it does not account for why smaller animals are more crouched; instead, they could enjoy relatively more slender supporting structures or higher safety factors. Here, an alternative account for the scaling of posture is proposed, with close parallels to the scaling of jump performance. If the costs of locomotion are related to the volume of active muscle, and the active muscle volume required depends on both the work and the power demanded during the push-off phase of each step (not just the net positive work), then the disproportional scaling of requirements for work and push-off power are revealing. Larger animals require relatively greater active muscle volumes for dynamically similar gaits (e.g. top walking speed)-which may present an ultimate constraint to the size of running animals. Further, just as for jumping, animals with shorter legs and briefer push-off periods are challenged to provide the power (not the work) required for push-off. This can be ameliorated by having relatively long push-off periods, potentially accounting for the crouched stance of small animals.

  3. Constraints on muscle performance provide a novel explanation for the scaling of posture in terrestrial animals

    PubMed Central

    Usherwood, James R.

    2013-01-01

    Larger terrestrial animals tend to support their weight with more upright limbs. This makes structural sense, reducing the loading on muscles and bones, which is disproportionately challenging in larger animals. However, it does not account for why smaller animals are more crouched; instead, they could enjoy relatively more slender supporting structures or higher safety factors. Here, an alternative account for the scaling of posture is proposed, with close parallels to the scaling of jump performance. If the costs of locomotion are related to the volume of active muscle, and the active muscle volume required depends on both the work and the power demanded during the push-off phase of each step (not just the net positive work), then the disproportional scaling of requirements for work and push-off power are revealing. Larger animals require relatively greater active muscle volumes for dynamically similar gaits (e.g. top walking speed)—which may present an ultimate constraint to the size of running animals. Further, just as for jumping, animals with shorter legs and briefer push-off periods are challenged to provide the power (not the work) required for push-off. This can be ameliorated by having relatively long push-off periods, potentially accounting for the crouched stance of small animals. PMID:23825086

  4. The Triggering Mechanism of coronal jets and CMEs: Flux Cancelation

    NASA Technical Reports Server (NTRS)

    Panesar, Navdeep K.; Sterling, Alphonse C.; Moore, Ronald L.

    2017-01-01

    Recent investigations show that coronal jets are driven by the eruption of a small-scale filament (10,000 - 20,000 km long, called a minifilament) following magnetic flux cancelation at the neutral line underneath the minifilament. Minifilament eruptions appear to be analogous to larger-scale solar filament eruptions: they both reside, before the eruption, in the highly sheared field between adjacent opposite-polarity magnetic flux patches (the neutral line); jet-producing minifilaments and larger-scale solar filaments first show a slow rise, followed by a fast rise as they erupt; and during the jet-producing minifilament eruption a jet bright point (JBP) appears at the location where the minifilament was rooted before the eruption, analogous to the situation with CME-producing larger-scale filament eruptions where a solar flare arcade forms during the filament eruption along the neutral line along which the filament resided prior to its eruption. In the present study we investigate the triggering mechanism of CME-producing large solar filament eruptions, and find that enduring flux cancelation at the neutral line of the filaments often triggers their eruptions. This corresponds to the finding that persistent flux cancelation at the neutral line is the cause of jet-producing minifilament eruptions. Thus our observations support coronal jets being miniature versions of CMEs.

  5. The interior of 67P/C-G comet as seen by CONSERT bistatic radar on ROSETTA, key results and implications.

    NASA Astrophysics Data System (ADS)

    Kofman, W.; Herique, A.; Ciarletti, V.; Lasue, J.; Levasseur-Regourd, AC.; Zine, S.; Plettemeier, D.

    2017-09-01

    The structure of the nucleus is one of the major unknowns in cometary science. The scientific objectives of the Comet Nucleus Sounding Experiment by Radiowave Transmission (CONSERT) aboard ESA's spacecraft Rosetta are to perform an interior characterization of the nucleus of comet 67P/Churyumov-Gerasimenko. This is done by means of a bistatic sounding between the lander Philae, lying on the comet's surface, and the orbiter Rosetta. Current interpretation of the CONSERT signals is consistent with a highly porous, carbon-rich primitive body. Internal inhomogeneities are not detected at the wavelength scale and are either smaller, or present a low dielectric contrast. Given the high bulk porosity of 75% inside the sounded part of the nucleus, a likely interior model would be obtained by a mixture, at this 3-m size scale, of voids (vacuum) and blobs of material made of ices and dust with porosity larger than 60%. The absence of any pulse spreading due to scattering allows us to exclude heterogeneity with higher contrast (0.25) and larger size (3 m) (but smaller than a few wavelengths, since larger scales would be responsible for multipath propagation). CONSERT is the first successful radar probe to study the sub-surface of a small body.

  6. Prediction skill of tropical synoptic scale transients from ECMWF and NCEP ensemble prediction systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taraphdar, S.; Mukhopadhyay, P.; Leung, L. Ruby

    The prediction skill of tropical synoptic scale transients (SSTR), such as monsoon lows and depressions, during the boreal summer of 2007–2009 is assessed using high resolution ECMWF and NCEP TIGGE forecast data. By analyzing 246 forecasts for lead times up to 10 days, it is found that the models have good skills in forecasting the planetary scale means but the skills of SSTR remain poor, with the latter showing no skill beyond 2 days for the global tropics and Indian region. Consistent forecast skills among precipitation, velocity potential, and vorticity provide evidence that convection is the primary process responsible for precipitation. The poor skills of SSTR can be attributed to the larger random error in the models as they fail to predict the locations and timings of SSTR. Strong correlation between the random error and synoptic precipitation suggests that the former starts to develop from regions of convection. As the NCEP model has larger biases of synoptic scale precipitation, it has a tendency to generate more random error that ultimately reduces the prediction skill of synoptic systems in that model. Finally, the larger biases in NCEP may be attributed to the model moist physics and/or coarser horizontal resolution compared to ECMWF.

  7. School-wide PBIS: An Example of Applied Behavior Analysis Implemented at a Scale of Social Importance.

    PubMed

    Horner, Robert H; Sugai, George

    2015-05-01

    School-wide Positive Behavioral Interventions and Supports (PBIS) is an example of applied behavior analysis implemented at a scale of social importance. In this paper, PBIS is defined and the contributions of behavior analysis in shaping both the content and implementation of PBIS are reviewed. Specific lessons learned from implementation of PBIS over the past 20 years are summarized.

  8. Scale issues in soil hydrology related to measurement and simulation: A case study in Colorado

    USDA-ARS?s Scientific Manuscript database

    State variables, such as soil water content (SWC), are typically measured or inferred at very small scales while being simulated at larger scales relevant to spatial management or hillslope areas. Thus there is an implicit spatial disparity that is often ignored. Surface runoff, on the other hand, ...

  9. Everyday Scale Errors

    ERIC Educational Resources Information Center

    Ware, Elizabeth A.; Uttal, David H.; DeLoache, Judy S.

    2010-01-01

    Young children occasionally make "scale errors"--they attempt to fit their bodies into extremely small objects or attempt to fit a larger object into another, tiny, object. For example, a child might try to sit in a dollhouse-sized chair or try to stuff a large doll into it. Scale error research was originally motivated by parents' and…

  10. International bioenergy synthesis-lessons learned and opportunities for the western United States

    Treesearch

    D.L. Nicholls; R. Monserud; D. Dykstra

    2009-01-01

    This synthesis examines international opportunities for utilizing biomass for energy at several different scales, with an emphasis on larger scale electrical power generation at stand-alone facilities as well as smaller scale thermal heating applications such as those at governmental, educational, or other institutional facilities. It identifies barriers that can...

  11. Competition between stink bug and heliothine caterpillar pests on cotton at within-plant spatial scales

    USDA-ARS?s Scientific Manuscript database

    We investigated competition between Heliothine larvae and the secondary pests, southern green and brown stink bugs at the single boll and multiple boll scales; if competition does not occur with this very close association, it might be unlikely at larger scales with less close association. In both ...

  12. Losers in the 'Rock-Paper-Scissors' game: The role of non-hierarchical competition and chaos as biodiversity sustaining agents in aquatic systems

    EPA Science Inventory

    Processes occurring within small areas (patch-scale) that influence species richness and spatial heterogeneity of larger areas (landscape-scale) have long been an interest of ecologists. This research focused on the role of patch-scale deterministic chaos arising in phytoplankton...

  13. Estimating landscape-scale impacts of agricultural management on soil carbon using measurements and models.

    USDA-ARS?s Scientific Manuscript database

    Agriculture covers 40% of Earth’s ice-free land area and has broad impacts on global biogeochemical cycles. While some agricultural management changes are small in scale or impact, others have the potential to shift biogeochemical cycles at landscape and larger scales if widely adopted. Understandin...

  14. Implementation of enhanced recovery programme after pancreatoduodenectomy: a single-centre UK pilot study.

    PubMed

    Abu Hilal, Mohammed; Di Fabio, Francesco; Badran, Abdallah; Alsaati, Hani; Clarke, Hannah; Fecher, Imogen; Armstrong, Thomas H; Johnson, Colin D; Pearce, Neil W

    2013-01-01

    Data on enhanced recovery programmes after pancreatoduodenectomy (ERP-PD) are limited. The aim of this pilot study was to evaluate the feasibility, safety and clinical outcomes of ERP-PD when implemented at a high-volume UK university referral centre. This was an observational single-surgeon case-control study (before-and-after pathway). A total of 20 consecutive patients were prospectively enrolled for the ERP-PD and compared with 24 consecutive patients previously treated during an equal time frame. Patients in the ERP-PD group had a significantly shorter time to removal of the naso-gastric tube (median of 5 vs. 7 days, p = 0.0001), to starting a liquid diet (median of 2 vs. 5 days, p < 0.0001), to starting solid food (median of 4 vs. 9 days, p < 0.0001), and to passing stools (median of 6 vs. 7 days, p = 0.002), and had a shorter length of stay (median of 8.5 days vs. 13 days, p = 0.015) compared to the pre-pathway group. Postoperative complications were overall less frequent but not significantly different in the ERP-PD group (p = 0.077). No difference in mortality and readmission rates was found. Our findings support the feasibility and safety of ERP-PD. Improved patient outcomes, significant bed-day savings and increased National Health Service productivity are anticipated with implementation of ERP-PD on a larger scale. Copyright © 2012 IAP and EPC. Published by Elsevier B.V. All rights reserved.

  15. A feasibility study to assess the effectiveness of safe dates for teen mothers.

    PubMed

    Herrman, Judith W; Waterhouse, Julie K

    2014-01-01

    To determine the effectiveness of the adapted Safe Dates curriculum as an intervention for pregnant and/or parenting teens to prevent teen dating violence (TDV). This pre-/posttest, single-sample study provided a means to assess the effectiveness of an adapted Safe Dates curriculum for teen mothers. The adapted Safe Dates curriculum was implemented in three schools designed for the unique needs of teens who are pregnant and/or parenting. The final sample of 41 teen participants, with a mean age of 16.27, completed 80% of the curriculum and two of the three assessments. Most of the teens were pregnant during participation in the curriculum, and six had infants between age 1 and 3 months. The teen mothers completed the pretest, participated in the 10-session adapted Safe Dates curriculum, and completed the posttest at the end of the program and 1 month after program completion. The pre/posttest was adapted from the Safe Dates curriculum-specific evaluation instrument. Senior, undergraduate nursing students were trained in and implemented the curriculum. Participation in the adapted Safe Dates program yielded significant differences in the areas of responses to anger, gender stereotyping, awareness of resources for perpetrators and victims, and psychological violence perpetration. This adapted program may be effective in changing selected outcomes. The implementation of a larger scale, experimental/control group study may demonstrate the program's efficacy at reducing the incidence of TDV among teen mothers. © 2014 AWHONN, the Association of Women's Health, Obstetric and Neonatal Nurses.

  16. Using an adaptive expertise lens to understand the quality of teachers' classroom implementation of computer-supported complex systems curricula in high school science

    NASA Astrophysics Data System (ADS)

    Yoon, Susan A.; Koehler-Yom, Jessica; Anderson, Emma; Lin, Joyce; Klopfer, Eric

    2015-05-01

    Background: This exploratory study is part of a larger-scale research project aimed at building theoretical and practical knowledge of complex systems in students and teachers with the goal of improving high school biology learning through professional development and a classroom intervention. Purpose: We propose a model of adaptive expertise to better understand teachers' classroom practices as they attempt to navigate myriad variables in the implementation of biology units that include working with computer simulations, and learning about and teaching through complex systems ideas. Sample: Research participants were three high school biology teachers, two females and one male, ranging in teaching experience from six to 16 years. Their teaching contexts also ranged in student achievement from 14-47% advanced science proficiency. Design and methods: We used a holistic multiple case study methodology and collected data during the 2011-2012 school year. Data sources include classroom observations, teacher and student surveys, and interviews. Data analyses and trustworthiness measures were conducted through qualitative mining of data sources and triangulation of findings. Results: We illustrate the characteristics of adaptive expertise of more or less successful teaching and learning when implementing complex systems curricula. We also demonstrate differences between case study teachers in terms of particular variables associated with adaptive expertise. Conclusions: This research contributes to scholarship on practices and professional development needed to better support teachers to teach through a complex systems pedagogical and curricular approach.

  17. Environmental stochasticity controls soil erosion variability

    PubMed Central

    Kim, Jongho; Ivanov, Valeriy Y.; Fatichi, Simone

    2016-01-01

    Understanding soil erosion by water is essential for a range of research areas but the predictive skill of prognostic models has been repeatedly questioned because of scale limitations of empirical data and the high variability of soil loss across space and time scales. Improved understanding of the underlying processes and their interactions are needed to infer scaling properties of soil loss and better inform predictive methods. This study uses data from multiple environments to highlight temporal-scale dependency of soil loss: erosion variability decreases at larger scales but the reduction rate varies with environment. The reduction of variability of the geomorphic response is attributed to a ‘compensation effect’: temporal alternation of events that exhibit either source-limited or transport-limited regimes. The rate of reduction is related to environment stochasticity and a novel index is derived to reflect the level of variability of intra- and inter-event hydrometeorologic conditions. A higher stochasticity index implies a larger reduction of soil loss variability (enhanced predictability at the aggregated temporal scales) with respect to the mean hydrologic forcing, offering a promising indicator for estimating the degree of uncertainty of erosion assessments. PMID:26925542

  18. A New Framework for Cumulus Parametrization - A CPT in action

    NASA Astrophysics Data System (ADS)

    Jakob, C.; Peters, K.; Protat, A.; Kumar, V.

    2016-12-01

    The representation of convection in climate models remains a major Achilles heel in our pursuit of better predictions of global and regional climate. The basic principle underpinning the parametrisation of tropical convection in global weather and climate models is that there exist discernible interactions between the resolved model scale and the parametrised cumulus scale. Furthermore, there must be at least some predictive power in the larger scales for the statistical behaviour on small scales for us to be able to formally close the parametrised equations. The presentation will discuss a new framework for cumulus parametrisation based on the idea of separating the prediction of cloud area from that of velocity. This idea is put into practice by combining an existing multi-scale stochastic cloud model with observations to arrive at the prediction of the area fraction for deep precipitating convection. Using mid-tropospheric humidity and vertical motion as predictors, the model is shown to reproduce the observed behaviour of both the mean and variability of deep convective area fraction well. The framework allows for the inclusion of convective organisation and can - in principle - be made resolution-aware or resolution-independent. When combined with simple assumptions about cloud-base vertical motion the model can be used as a closure assumption in any existing cumulus parametrisation. Results of applying this idea in the ECHAM model indicate significant improvements in the simulation of tropical variability, including but not limited to the MJO. This presentation will highlight how the close collaboration of the observational, theoretical and model development communities in the spirit of the climate process teams can lead to significant progress in long-standing issues in climate modelling while preserving the freedom of individual groups in pursuing their specific implementation of an agreed framework.

  19. Short-term prediction of rain attenuation level and volatility in Earth-to-Satellite links at EHF band

    NASA Astrophysics Data System (ADS)

    de Montera, L.; Mallet, C.; Barthès, L.; Golé, P.

    2008-08-01

    This paper shows how nonlinear models originally developed in the finance field can be used to predict rain attenuation level and volatility in Earth-to-Satellite links operating at the Extremely High Frequencies band (EHF, 20-50 GHz). A common approach to solving this problem is to consider that the prediction error corresponds only to scintillations, whose variance is assumed to be constant. Nevertheless, this assumption does not seem to be realistic because of the heteroscedasticity of error time series: the variance of the prediction error is found to be time-varying and has to be modeled. Since rain attenuation time series behave similarly to certain stocks or foreign exchange rates, a switching ARIMA/GARCH model was implemented. The originality of this model is that not only the attenuation level but also the conditional distribution of the error is predicted. It allows an accurate upper bound on the future attenuation to be estimated in real time, which minimizes the cost of Fade Mitigation Techniques (FMT) and therefore enables the communication system to reach a high percentage of availability. The performance of the switching ARIMA/GARCH model was estimated using a measurement database of the Olympus satellite 20/30 GHz beacons and this model is shown to significantly outperform other existing models. The model also includes frequency scaling from the downlink frequency to the uplink frequency. The attenuation effects (gases, clouds and rain) are first separated with a neural network and then scaled using specific scaling factors. As to the resulting uplink prediction error, the error contribution of the frequency scaling step is shown to be larger than that of the downlink prediction, indicating that further study should focus on improving the accuracy of the scaling factor.
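
    The core level-plus-volatility idea can be sketched with off-the-shelf Python tools: an ARIMA model forecasts the attenuation level, a GARCH(1,1) model fitted to the residuals forecasts the time-varying error variance, and the two are combined into an adaptive upper bound. This is a plain (non-switching) sketch on synthetic data, not the paper's switching ARIMA/GARCH model with frequency scaling, and the two-sigma margin is an arbitrary illustrative choice.

      # Level forecast (ARIMA) + time-varying error variance (GARCH) -> adaptive
      # upper bound on attenuation. Synthetic data; illustrative only.
      import numpy as np
      from statsmodels.tsa.arima.model import ARIMA
      from arch import arch_model

      rng = np.random.default_rng(1)
      atten_db = 3.0 + np.cumsum(rng.normal(0.0, 0.1, 2000))  # stand-in attenuation series

      level_fit = ARIMA(atten_db, order=(1, 1, 1)).fit()
      level_next = level_fit.forecast(steps=1)[0]

      # GARCH(1,1) on the one-step residuals captures the heteroscedasticity.
      vol_fit = arch_model(level_fit.resid, vol="Garch", p=1, q=1).fit(disp="off")
      sigma_next = float(np.sqrt(vol_fit.forecast(horizon=1).variance.values[-1, 0]))

      upper_bound = level_next + 2.0 * sigma_next  # margin choice is illustrative
      print(f"level: {level_next:.2f} dB, adaptive upper bound: {upper_bound:.2f} dB")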

  20. Moving Teachers: Implementation of Transfer Incentives in Seven Districts. Executive Summary. NCEE 2012-4052

    ERIC Educational Resources Information Center

    Glazerman, Steven; Protik, Ali; Teh, Bing-ru; Bruch, Julie; Seftor, Neil

    2012-01-01

    This report describes the implementation and intermediate impacts of an intervention designed to provide incentives for a school district's highest-performing teachers to work in its lowest-achieving schools. The report is part of a larger study in which random assignment was used to form two equivalent groups of classrooms organized into teacher…

  1. The Relationship Between School Climate and Implementation of an Innovation in Elementary Schools.

    ERIC Educational Resources Information Center

    Young, I. Phillip; Kasten, Katherine

    As part of a larger project on studies of implementation, specifically of Individually Guided Education (IGE), this paper describes the preliminary results of research on school climate, an important factor in retarding or promoting change. A review of the literature on school climate includes a description of Likert and Likert's Profile of a…

  2. Challenges for a New Bilingual Program: Implementing the International Baccalaureate Primary Years Programme in Four Colombian Schools

    ERIC Educational Resources Information Center

    Lochmiller, Chad R.; Lucero, Audrey; Lester, Jessica Nina

    2016-01-01

    The International Baccalaureate (IB) has expanded in Latin America. Drawing from a larger multi-sited qualitative case study, we examined the challenges associated with the implementation of the IB Primary Years Programme (PYP) in a Colombian and bilingual context. Findings highlight (1) the intersecting nature of challenges associated with the…

  3. Lecturers' Perceptions of the Implementation of the Revised English Language Nigeria Certificate in Education Curriculum

    ERIC Educational Resources Information Center

    Tom-Lawyer, Oris Oritsebemigho

    2015-01-01

    This paper examines the perceptions of English language lecturers from three colleges of education on the factors that inhibit the implementation process of the revised English Language Nigeria Certificate Education Curriculum. The study which is underpinned by the CIPP Evaluation model is part of a larger study on the evaluation of the…

  4. Advancing Perspectives of Sustainability and Large-Scale Implementation of Design Teams in Ghana's Polytechnics: Issues and Opportunities

    ERIC Educational Resources Information Center

    Bakah, Marie Afua Baah; Voogt, Joke M.; Pieters, Jules M.

    2012-01-01

    Polytechnic staff perspectives are sought on the sustainability and large-scale implementation of design teams (DT), as a means for collaborative curriculum design and teacher professional development in Ghana's polytechnics, months after implementation. Data indicates that teachers still collaborate in DTs for curriculum design and professional…

  5. Implementing Small Scale ICT Projects in Developing Countries--How Challenging Is It?

    ERIC Educational Resources Information Center

    Karunaratne, Thashmee; Peiris, Colombage; Hansson, Henrik

    2018-01-01

    This paper summarises experiences of efforts made by twenty individuals when implementing small-scale ICT development projects in their organizations located in seven developing countries. The main focus of these projects was the use of ICT in educational settings. Challenges encountered and the contributing factors for implementation success of…

  6. Enabling and challenging factors in institutional reform: The case of SCALE-UP

    NASA Astrophysics Data System (ADS)

    Foote, Kathleen; Knaub, Alexis; Henderson, Charles; Dancy, Melissa; Beichner, Robert J.

    2016-06-01

    While many innovative teaching strategies exist, integration into undergraduate science teaching has been frustratingly slow. This study aims to understand the low uptake of research-based instructional innovations by studying 21 successful implementations of the Student-Centered Active Learning Environment with Upside-down Pedagogies (SCALE-UP) instructional reform. SCALE-UP significantly restructures the classroom environment and pedagogy to promote highly active and interactive instruction. Although originally designed for university introductory physics courses, SCALE-UP has spread to many other disciplines at hundreds of departments around the world. This study reports findings from in-depth, open-ended interviews with 21 key contact people involved with successful secondary implementations of SCALE-UP throughout the United States. We defined successful implementations as those that restructured their pedagogy and classroom and sustained and/or spread the change. Interviews were coded to identify the most common enabling and challenging factors during reform implementation and compared to the theoretical framework of Kotter's 8-step Change Model. The most common enabling influences that emerged are documenting and leveraging evidence of local success, administrative support, interaction with outside SCALE-UP user(s), and funding. Many challenges are linked to the lack of these enabling factors, including difficulty finding funding, space, and administrative and/or faculty support for reform. Our focus on successful secondary implementations meant that most interviewees were able to overcome challenges. Presentation of results is illuminated with case studies, quotes, and examples that can help secondary implementers with SCALE-UP reform efforts specifically. We also discuss the implications for policy makers, researchers, and the higher education community concerned with initiating structural change.

  7. Crystal Face Distributions and Surface Site Densities of Two Synthetic Goethites: Implications for Adsorption Capacities as a Function of Particle Size.

    PubMed

    Livi, Kenneth J T; Villalobos, Mario; Leary, Rowan; Varela, Maria; Barnard, Jon; Villacís-García, Milton; Zanella, Rodolfo; Goodridge, Anna; Midgley, Paul

    2017-09-12

    Two synthetic goethites of varying crystal size distributions were analyzed by BET, conventional TEM, cryo-TEM, atomic resolution STEM and HRTEM, and electron tomography in order to determine the effects of crystal size, shape, and atomic scale surface roughness on their adsorption capacities. The two samples were determined by BET to have very different site densities based on Cr(VI) adsorption experiments. Model specific surface areas generated from TEM observations showed that, based on size and shape, there should be little difference in their adsorption capacities. Electron tomography revealed that both samples crystallized with an asymmetric {101} tablet habit. STEM and HRTEM images showed a significant increase in atomic-scale surface roughness of the larger goethite. This difference in roughness was quantified based on measurements of relative abundances of crystal faces {101} and {210} for the two goethites, and a reactive surface site density was calculated for each goethite. Singly coordinated sites on face {210} are 2.5 times more dense than on face {101}, and the larger goethite showed an average total of 36% {210} as compared to 14% for the smaller goethite. This difference explains the considerably larger adsorption capacity of the larger goethite vs. the smaller sample and points toward the necessity of knowing the atomic scale surface structure in predicting mineral adsorption processes.
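
    The face-weighted comparison in the abstract reduces to simple arithmetic: an average site density weighted by the fractional abundance of each face. The sketch below reproduces that calculation; the absolute site density assumed for {101} is a placeholder, only the 2.5x ratio and the 36%/14% face abundances come from the abstract.

```python
# Face-weighted singly coordinated site density, following the abstract:
# {210} faces are taken as 2.5x as dense in singly coordinated sites as
# {101} faces. The absolute {101} density is an illustrative value only.
d_101 = 3.0                       # sites/nm^2 on {101} (placeholder)
d_210 = 2.5 * d_101               # sites/nm^2 on {210}

def weighted_density(frac_210):
    """Average site density for a crystal exposing frac_210 of {210} faces."""
    return frac_210 * d_210 + (1.0 - frac_210) * d_101

larger = weighted_density(0.36)   # larger goethite: 36% {210}
smaller = weighted_density(0.14)  # smaller goethite: 14% {210}
print(f"relative adsorption capacity (larger/smaller): {larger / smaller:.2f}")
```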

  8. Assessing organizational implementation context in the education sector: confirmatory factor analysis of measures of implementation leadership, climate, and citizenship.

    PubMed

    Lyon, Aaron R; Cook, Clayton R; Brown, Eric C; Locke, Jill; Davis, Chayna; Ehrhart, Mark; Aarons, Gregory A

    2018-01-08

    A substantial literature has established the role of the inner organizational setting on the implementation of evidence-based practices in community contexts, but very little of this research has been extended to the education sector, one of the most common settings for the delivery of mental and behavioral health services to children and adolescents. The current study examined the factor structure, psychometric properties, and interrelations of an adapted set of pragmatic organizational instruments measuring key aspects of the organizational implementation context in schools: (1) strategic implementation leadership, (2) strategic implementation climate, and (3) implementation citizenship behavior. The Implementation Leadership Scale (ILS), Implementation Climate Scale (ICS), and Implementation Citizenship Behavior Scale (ICBS) were adapted by a research team that included the original scale authors and experts in the implementation of evidence-based practices in schools. These instruments were then administered to a geographically representative sample (n = 196) of school-based mental/behavioral health consultants to assess the reliability and structural validity via a series of confirmatory factor analyses. Overall, the original factor structures for the ILS, ICS, and ICBS were confirmed in the current sample. The one exception was poor functioning of the Rewards subscale of the ICS, which was removed in the final ICS model. Correlations among the revised measures, evaluated as part of an overarching model of the organizational implementation context, indicated both unique and shared variance. The current analyses suggest strong applicability of the revised instruments to implementation of evidence-based mental and behavioral practices in the education sector. The one poorly functioning subscale (Rewards on the ICS) was attributed to typical educational policies that do not allow for individual financial incentives to personnel. Potential directions for future expansion, revision, and application of the instruments in schools are discussed.

  9. The Implementation Leadership Scale (ILS): development of a brief measure of unit level implementation leadership.

    PubMed

    Aarons, Gregory A; Ehrhart, Mark G; Farahnak, Lauren R

    2014-04-14

    In healthcare and allied healthcare settings, leadership that supports effective implementation of evidence-based practices (EBPs) is a critical concern. However, there are no empirically validated measures to assess implementation leadership. This paper describes the development, factor structure, and initial reliability and convergent and discriminant validity of a very brief measure of implementation leadership: the Implementation Leadership Scale (ILS). Participants were 459 mental health clinicians working in 93 different outpatient mental health programs in Southern California, USA. Initial item development was supported as part of two United States National Institutes of Health (NIH) studies focused on developing implementation leadership training and on implementation measure development. Clinician work group/team-level data were randomly assigned to be utilized for an exploratory factor analysis (n = 229; k = 46 teams) or for a confirmatory factor analysis (n = 230; k = 47 teams). The confirmatory factor analysis controlled for the multilevel, nested data structure. Reliability and validity analyses were then conducted with the full sample. The exploratory factor analysis resulted in a 12-item scale with four subscales representing proactive leadership, knowledgeable leadership, supportive leadership, and perseverant leadership. Confirmatory factor analysis supported an a priori higher order factor structure with subscales contributing to a single higher order implementation leadership factor. The scale demonstrated excellent internal consistency reliability as well as convergent and discriminant validity. The ILS is a brief and efficient measure of unit level leadership for EBP implementation. The availability of the ILS will allow researchers to assess strategic leadership for implementation in order to advance understanding of leadership as a predictor of organizational context for implementation. The ILS also holds promise as a tool for leader and organizational development to improve EBP implementation.

  10. The implementation leadership scale (ILS): development of a brief measure of unit level implementation leadership

    PubMed Central

    2014-01-01

    Background In healthcare and allied healthcare settings, leadership that supports effective implementation of evidence-based practices (EBPs) is a critical concern. However, there are no empirically validated measures to assess implementation leadership. This paper describes the development, factor structure, and initial reliability and convergent and discriminant validity of a very brief measure of implementation leadership: the Implementation Leadership Scale (ILS). Methods Participants were 459 mental health clinicians working in 93 different outpatient mental health programs in Southern California, USA. Initial item development was supported as part of two United States National Institutes of Health (NIH) studies focused on developing implementation leadership training and on implementation measure development. Clinician work group/team-level data were randomly assigned to be utilized for an exploratory factor analysis (n = 229; k = 46 teams) or for a confirmatory factor analysis (n = 230; k = 47 teams). The confirmatory factor analysis controlled for the multilevel, nested data structure. Reliability and validity analyses were then conducted with the full sample. Results The exploratory factor analysis resulted in a 12-item scale with four subscales representing proactive leadership, knowledgeable leadership, supportive leadership, and perseverant leadership. Confirmatory factor analysis supported an a priori higher order factor structure with subscales contributing to a single higher order implementation leadership factor. The scale demonstrated excellent internal consistency reliability as well as convergent and discriminant validity. Conclusions The ILS is a brief and efficient measure of unit level leadership for EBP implementation. The availability of the ILS will allow researchers to assess strategic leadership for implementation in order to advance understanding of leadership as a predictor of organizational context for implementation. The ILS also holds promise as a tool for leader and organizational development to improve EBP implementation. PMID:24731295

  11. A pilot mixed methods study of patient satisfaction with chiropractic care for back pain.

    PubMed

    Rowell, Robert M; Polipnick, Judith

    2008-10-01

    Patient satisfaction is important to payers, clinicians, and patients. The concept of satisfaction is multifactorial and measurement is challenging. Our objective was to explore the use of a mixed-methods design to examine patient satisfaction with chiropractic care for low back pain. Patients were treated 3 times per week for 3 weeks. Outcomes were collected at week 3 and week 4. Qualitative interviews were conducted by the treating clinician and a nontreating staff member. Outcome measures were the Roland Morris Back Pain Disability Questionnaire, the visual analog scale for pain, and the Patient Satisfaction Scale. Interviews were recorded and transcribed and analyzed for themes and constructs of satisfaction. We compared qualitative interview data with quantitative outcomes, and qualitative data from 2 different interviewers. All patients reported high levels of satisfaction. Clinical outcomes were unremarkable with little change noted on visual analog scale and Roland Morris Back Pain Disability Questionnaire scores. We categorized patient comments into the same constructs of satisfaction as those identified for the Patient Satisfaction Scale: Information, Effectiveness, and Caring. An additional construct (Quality of Care) and additional subcategories were identified. Satisfaction with care is not explained by outcome alone. The qualitative data collected from 2 different interviewers had few differences. The results of this study suggest that it is feasible to use a mixed-methods design to examine patient satisfaction. We were able to refine data collection and analysis procedures for the outcome measures and qualitative interview data. We identified limitations and offer recommendations for the next step: the implementation of a larger study.

  12. Smokefree implementation in Colombia: Monitoring, outside funding, and business support.

    PubMed

    Uang, Randy; Crosbie, Eric; Glantz, Stanton A

    2017-01-01

    The objective was to analyze successful national smokefree policy implementation in Colombia, a middle-income country. Key informants at the national and local levels were interviewed, and news sources and government ministry resolutions were reviewed. Colombia's Ministry of Health coordinated local implementation practices, which were strongest in larger cities with supportive leadership. Nongovernmental organizations provided technical assistance and highlighted noncompliance. Organizations outside Colombia funded some of these efforts. The bar owners' association provided concerted education campaigns. Tobacco interests did not openly challenge implementation. Health organization monitoring, external funding, and hospitality industry support contributed to effective implementation, and could be cultivated in other low- and middle-income countries.

  13. Setting Learning Analytics in Context: Overcoming the Barriers to Large-Scale Adoption

    ERIC Educational Resources Information Center

    Ferguson, Rebecca; Macfadyen, Leah P.; Clow, Doug; Tynan, Belinda; Alexander, Shirley; Dawson, Shane

    2014-01-01

    A core goal for most learning analytic projects is to move from small-scale research towards broader institutional implementation, but this introduces a new set of challenges because institutions are stable systems, resistant to change. To avoid failure and maximize success, implementation of learning analytics at scale requires explicit and…

  14. Multiple Flow Loop SCADA System Implemented on the Production Prototype Loop

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baily, Scott A.; Dalmas, Dale Allen; Wheat, Robert Mitchell

    2015-11-16

    The following report covers FY 15 activities to develop a supervisory control and data acquisition (SCADA) system for the Northstar Moly99 production prototype gas flow loop. The goal of this effort is to expand the existing system to include a second flow loop with a larger, production-sized blower. Besides testing the larger blower, this system will demonstrate the scalability of our solution to multiple flow loops.

  15. A simple shear limited, single size, time dependent flocculation model

    NASA Astrophysics Data System (ADS)

    Kuprenas, R.; Tran, D. A.; Strom, K.

    2017-12-01

    This research focuses on the modeling of flocculation of cohesive sediment due to turbulent shear, specifically investigating the dependency of flocculation on the concentration of cohesive sediment. Flocculation is important in larger sediment transport models because cohesive particles can create aggregates that are orders of magnitude larger than their unflocculated state. As the settling velocity of each particle is determined by the sediment size, density, and shape, accounting for this aggregation is important in determining where the sediment is deposited. This study provides a new formulation for flocculation of cohesive sediment by modifying the Winterwerp (1998) flocculation model (W98) so that it limits floc size to that of the Kolmogorov micro length scale. The W98 model is a simple approach that calculates the average floc size as a function of time. Because of its simplicity, the W98 model is ideal for implementing into larger sediment transport models; however, the model tends to overpredict the dependency of the floc size on concentration. It was found that modifying the coefficients within the original model did not allow the model to capture the dependency on concentration. Therefore, a new term within the breakup kernel of the W98 formulation was added. The new formulation results in a single-size, shear-limited, and time-dependent flocculation model that effectively captures the dependency of the equilibrium floc size on both suspended sediment concentration and the time to equilibrium. The overall behavior of the new model is explored and shown to align well with other studies on flocculation. Winterwerp, J. C. (1998). A simple model for turbulence induced flocculation of cohesive sediment. Journal of Hydraulic Research, 36(3):309-326.
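
    To illustrate the kind of single-size, shear-limited model described above, the sketch below integrates a simplified aggregation-breakup ODE in which growth is suppressed as the floc diameter approaches the Kolmogorov microscale. It mirrors the spirit of the modified W98 approach but uses simplified kinetics and placeholder coefficients (ka, kb), not the paper's calibrated formulation.

```python
import numpy as np

# Illustrative single-size flocculation model: growth by shear-driven
# aggregation, breakup by turbulent shear, with growth suppressed as the floc
# diameter approaches the Kolmogorov microscale eta. Coefficients are
# placeholders chosen only to demonstrate the qualitative behavior.

def floc_size(c, G, nu=1e-6, D0=20e-6, ka=0.5, kb=2e-5, dt=1.0, t_end=3600.0):
    """c: sediment concentration (kg/m^3), G: shear rate (1/s).
    Returns the time series of floc diameter D (m)."""
    eps = nu * G**2                        # dissipation rate implied by G
    eta = (nu**3 / eps) ** 0.25            # Kolmogorov microscale (m)
    D = D0
    out = []
    for _ in np.arange(0.0, t_end, dt):
        growth = ka * c * G * D * max(0.0, 1.0 - D / eta)   # capped at eta
        breakup = kb * G**1.5 * D**2
        D = max(D0, D + (growth - breakup) * dt)
        out.append(D)
    return np.array(out)

sizes = floc_size(c=0.1, G=20.0)
print(f"equilibrium diameter ~ {sizes[-1] * 1e6:.1f} microns")
```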

  16. Triggering of Solar Magnetic Eruptions on Various Size Scales

    NASA Technical Reports Server (NTRS)

    Sterling, A.C.

    2010-01-01

    A solar eruption that produces a coronal mass ejection (CME) together with a flare is driven by the eruption of a closed-loop magnetic arcade that has a sheared-field core. Before eruption, the sheared core envelops a polarity inversion line along which cool filament material may reside. The sheared-core arcade erupts when there is a breakdown in the balance between the confining downward-directed magnetic tension of the overall arcade field and the upward-directed force of the pent-up magnetic pressure of the sheared field in the core of the arcade. What triggers the breakdown in this balance in favor of the upward-directed force is still an unsettled question. We consider several eruption examples, using imaging data from the SoHO, TRACE and Hinode satellites, and other sources, along with information about the magnetic field of the erupting regions. In several cases, observations of large-scale eruptions, where the magnetic neutral line spans a few × 10,000 km, are consistent with magnetic flux cancellation being the trigger of the eruption's onset, even though the amount of flux canceled is only a few percent of the total magnetic flux of the erupting region. In several other cases, an initial compact (small size-scale) eruption occurs embedded inside of a larger closed magnetic loop system, so that the smaller eruption destabilizes and causes the eruption of the much larger system. In this way, small-scale eruptive events can result in eruption of much larger-scale systems.

  17. Scaling theory of temporal correlations and size-dependent fluctuations in the traded value of stocks

    NASA Astrophysics Data System (ADS)

    Eisler, Zoltán; Kertész, János

    2006-04-01

    Records of the traded value f_i of stocks display fluctuation scaling, a proportionality between the standard deviation σ_i and the average ⟨f_i⟩: σ_i ∝ ⟨f_i⟩^α, with a strong time scale dependence α(Δt). The nontrivial (i.e., neither 0.5 nor 1) value of α may have different origins and provides information about the microscopic dynamics. We present a set of stylized facts and then show their connection to such behavior. The functional form α(Δt) originates from two aspects of the dynamics: stocks of larger companies both tend to be traded in larger packages and also display stronger correlations of traded value. The results are integrated into a general framework that can be applied to a wide range of complex systems.
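
    The scaling exponent α(Δt) can be estimated directly from per-stock traded-value series: aggregate each series into windows of length Δt, compute the mean and standard deviation across windows for each stock, and fit the log-log slope across stocks. The sketch below does exactly that on synthetic Poisson data (where α ≈ 0.5 is expected); the data and window length are placeholders.

```python
import numpy as np

# Estimate the fluctuation scaling exponent alpha(dt):
# sigma_i ~ <f_i>^alpha across stocks, fit by log-log regression.

def alpha_exponent(traded_value, dt):
    """traded_value: array of shape (n_stocks, n_ticks); dt: window length."""
    n_stocks, n = traded_value.shape
    n_win = n // dt
    f = traded_value[:, :n_win * dt].reshape(n_stocks, n_win, dt).sum(axis=2)
    mean_f = f.mean(axis=1)
    std_f = f.std(axis=1)
    slope, _ = np.polyfit(np.log(mean_f), np.log(std_f), 1)
    return slope

rng = np.random.default_rng(1)
sizes = np.exp(rng.uniform(0, 6, 200))            # heterogeneous stock sizes
data = rng.poisson(lam=sizes[:, None], size=(200, 5000)).astype(float)
print(f"alpha(dt=10) = {alpha_exponent(data, 10):.2f}")   # ~0.5 for Poisson
```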

  18. Development and implementation of a Bayesian-based aquifer vulnerability assessment in Florida

    USGS Publications Warehouse

    Arthur, J.D.; Wood, H.A.R.; Baker, A.E.; Cichon, J.R.; Raines, G.L.

    2007-01-01

    The Florida Aquifer Vulnerability Assessment (FAVA) was designed to provide a tool for environmental, regulatory, resource management, and planning professionals to facilitate protection of groundwater resources from surface sources of contamination. The FAVA project implements weights-of-evidence (WofE), a data-driven, Bayesian-probabilistic model to generate a series of maps reflecting relative aquifer vulnerability of Florida's principal aquifer systems. The vulnerability assessment process, from project design to map implementation, is described herein in reference to the Floridan aquifer system (FAS). The WofE model calculates weighted relationships between hydrogeologic data layers that influence aquifer vulnerability and ambient groundwater parameters in wells that reflect relative degrees of vulnerability. Statewide model input data layers (evidential themes) include soil hydraulic conductivity, density of karst features, thickness of aquifer confinement, and hydraulic head difference between the FAS and the water table. Wells with median dissolved nitrogen concentrations exceeding statistically established thresholds serve as training points in the WofE model. The resulting vulnerability map (response theme) reflects classified posterior probabilities based on spatial relationships between the evidential themes and training points. The response theme is subjected to extensive sensitivity and validation testing. Among the model validation techniques is calculation of a response theme based on a different water-quality indicator of relative recharge or vulnerability: dissolved oxygen. Successful implementation of the FAVA maps was facilitated by the overall project design, which included a needs assessment and iterative technical advisory committee input and review. Ongoing programs to protect Florida's springsheds have led to development of larger-scale WofE-based vulnerability assessments. Additional applications of the maps include land-use planning amendments and prioritization of land purchases to protect groundwater resources. © International Association for Mathematical Geology 2007.
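
    The core weights-of-evidence calculation for a single binary evidential theme is compact: positive and negative weights are log ratios of the probability of observing the evidence given the presence or absence of training points. The sketch below shows that calculation on synthetic grid-cell data; the evidence layer, training-point rates, and cell count are all hypothetical.

```python
import numpy as np

# Minimal weights-of-evidence sketch for one binary evidential theme
# (e.g., "karst density above a threshold") against training points
# (wells exceeding the nitrogen threshold). Data are synthetic placeholders.

def woe_weights(evidence, training):
    """evidence, training: boolean arrays over the study-area grid cells."""
    p_e_d = (evidence & training).sum() / training.sum()        # P(E | D)
    p_e_nd = (evidence & ~training).sum() / (~training).sum()   # P(E | ~D)
    w_plus = np.log(p_e_d / p_e_nd)
    w_minus = np.log((1 - p_e_d) / (1 - p_e_nd))
    return w_plus, w_minus, w_plus - w_minus                    # contrast

rng = np.random.default_rng(2)
cells = 10_000
evidence = rng.random(cells) < 0.3
# training points are more likely where the evidence is present (synthetic)
training = rng.random(cells) < np.where(evidence, 0.10, 0.02)
w_plus, w_minus, contrast = woe_weights(evidence, training)
print(f"W+={w_plus:.2f}  W-={w_minus:.2f}  contrast={contrast:.2f}")
```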

  19. Texas Medication Algorithm Project: development and feasibility testing of a treatment algorithm for patients with bipolar disorder.

    PubMed

    Suppes, T; Swann, A C; Dennehy, E B; Habermacher, E D; Mason, M; Crismon, M L; Toprac, M G; Rush, A J; Shon, S P; Altshuler, K Z

    2001-06-01

    Use of treatment guidelines for the treatment of major psychiatric illnesses has increased in recent years. The Texas Medication Algorithm Project (TMAP) was developed to study the feasibility and process of developing and implementing guidelines for bipolar disorder, major depressive disorder, and schizophrenia in the public mental health system of Texas. This article describes the consensus process used to develop the first set of TMAP algorithms for the Bipolar Disorder Module (Phase 1) and the trial testing the feasibility of their implementation in inpatient and outpatient psychiatric settings across Texas (Phase 2). The feasibility trial answered core questions regarding implementation of treatment guidelines for bipolar disorder. A total of 69 patients were treated with the original algorithms for bipolar disorder developed in Phase 1 of TMAP. Results indicate that physicians accepted the guidelines, followed recommendations to see patients at certain intervals, and utilized sequenced treatment steps differentially over the course of treatment. While improvements in clinical symptoms (24-item Brief Psychiatric Rating Scale) were observed over the course of enrollment in the trial, these conclusions are limited by the fact that physician volunteers were utilized for both treatment and ratings, and there was no control group. Results from Phases 1 and 2 indicate that it is possible to develop and implement a treatment guideline for patients with a history of mania in public mental health clinics in Texas. TMAP Phase 3, a recently completed larger and controlled trial assessing the clinical and economic impact of treatment guidelines and patient and family education in the public mental health system of Texas, improves upon this methodology.

  20. Community Readiness Within Systems of Care: The Validity and Reliability of the System of Care Readiness and Implementation Measurement Scale (SOC-RIMS).

    PubMed

    Rosas, Scott R; Behar, Lenore B; Hydaker, William M

    2016-01-01

    Establishing a system of care requires communities to identify ways to successfully implement strategies and support positive outcomes for children and their families. Such community transformation is complex and communities vary in terms of their readiness for implementing sustainable community interventions. Assessing community readiness and guiding implementation, specifically for the funded communities implementing a system of care, requires a well-designed tool with sound psychometric properties. This scale development study used the results of a previously published concept mapping study to create, administer, and assess the psychometric characteristics of the System of Care Readiness and Implementation Measurement Scale (SOC-RIMS). The results indicate the SOC-RIMS possesses excellent internal consistency characteristics, measures clearly discernible dimensions of community readiness, and demonstrates the target constructs exist within a broad network of content. The SOC-RIMS can be a useful part of a comprehensive assessment in communities where system of care practices, principles, and philosophies are implemented and evaluated.

  1. Taking School-Based Substance Abuse Prevention to Scale: District-Wide Implementation of Keep a Clear Mind

    ERIC Educational Resources Information Center

    Jowers, Keri L.; Bradshaw, Catherine P.; Gately, Sherry

    2007-01-01

    Public schools are under increased pressure to implement evidence-based substance abuse prevention programs. A number of model programs have been identified, but little research has examined the effectiveness of these programs when "brought to scale" or implemented district-wide. The current paper summarizes the application of the Adelman and…

  2. Status of States' Progress in Implementing Part H of IDEA: Report #3.

    ERIC Educational Resources Information Center

    Harbin, Gloria L.; And Others

    This report focuses on progress in the implementation of Part H of the Individuals with Disabilities Education Act (IDEA) through a comparison of states' status on three yearly administrations of the State Progress Scale. The scale was designed to monitor implementation of the required 14 components in the stages of policy development, policy…

  3. Rotational quenching of H2O by He: mixed quantum/classical theory and comparison with quantum results.

    PubMed

    Ivanov, Mikhail; Dubernet, Marie-Lise; Babikov, Dmitri

    2014-04-07

    The mixed quantum/classical theory (MQCT) formulated in the space-fixed reference frame is used to compute quenching cross sections of several rotationally excited states of the water molecule by impact of a He atom in a broad range of collision energies, and is tested against the full-quantum calculations on the same potential energy surface. In the current implementation of the MQCT method, there are two major sources of error: one affects results at energies below 10 cm(-1), while the other shows up at energies above 500 cm(-1). Namely, when the collision energy E is below the state-to-state transition energy ΔE, the MQCT method becomes less accurate due to its intrinsic classical approximation, although employment of the average-velocity principle (scaling of collision energy in order to satisfy microscopic reversibility) helps dramatically. At higher energies, MQCT is expected to be accurate, but in the current implementation, in order to make calculations computationally affordable, we had to cut off the basis set size. This can be avoided by using a more efficient body-fixed formulation of MQCT. Overall, the errors of the MQCT method are within 20% of the full-quantum results almost everywhere through the four-orders-of-magnitude range of collision energies, except near resonances, where the errors are somewhat larger.

  4. Parallel Fokker–Planck-DSMC algorithm for rarefied gas flow simulation in complex domains at all Knudsen numbers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Küchlin, Stephan, E-mail: kuechlin@ifd.mavt.ethz.ch; Jenny, Patrick

    2017-01-01

    A major challenge for the conventional Direct Simulation Monte Carlo (DSMC) technique lies in the fact that its computational cost becomes prohibitive in the near continuum regime, where the Knudsen number (Kn)—characterizing the degree of rarefaction—becomes small. In contrast, the Fokker–Planck (FP) based particle Monte Carlo scheme allows for computationally efficient simulations of rarefied gas flows in the low and intermediate Kn regime. The Fokker–Planck collision operator—instead of performing binary collisions employed by the DSMC method—integrates continuous stochastic processes for the phase space evolution in time. This allows for time step and grid cell sizes larger than the respective collisional scales required by DSMC. Dynamically switching between the FP and the DSMC collision operators in each computational cell is the basis of the combined FP-DSMC method, which has been proven successful in simulating flows covering the whole Kn range. Until recently, this algorithm had only been applied to two-dimensional test cases. In this contribution, we present the first general purpose implementation of the combined FP-DSMC method. Utilizing both shared- and distributed-memory parallelization, this implementation provides the capability for simulations involving many particles and complex geometries by exploiting state-of-the-art computer cluster technologies.
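
    The essence of the combined method is a per-cell choice of collision operator driven by a local rarefaction measure. The sketch below shows one plausible form of that decision logic using a hard-sphere mean free path and a Knudsen-number threshold; the threshold value, molecular diameter, and mean-free-path estimate are illustrative assumptions, not the criterion used in the cited implementation.

```python
import numpy as np

# Sketch of per-cell operator selection in a combined FP-DSMC solver:
# cells with a small local Knudsen number use the Fokker-Planck operator,
# cells with a large one fall back to binary DSMC collisions.

KN_SWITCH = 0.05          # illustrative switching threshold

def mean_free_path(n_density, d_mol=3.7e-10):
    """Hard-sphere mean free path for number density n_density (1/m^3)."""
    return 1.0 / (np.sqrt(2.0) * np.pi * d_mol**2 * n_density)

def choose_operator(n_density, cell_size):
    kn_local = mean_free_path(n_density) / cell_size
    return "DSMC" if kn_local > KN_SWITCH else "Fokker-Planck"

for n in (1e25, 1e22, 1e20):                      # near-continuum to rarefied
    print(f"n = {n:.0e} 1/m^3 -> {choose_operator(n, cell_size=1e-3)}")
```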

  5. Path durations for use in the stochastic‐method simulation of ground motions

    USGS Publications Warehouse

    Boore, David M.; Thompson, Eric M.

    2014-01-01

    The stochastic method of ground‐motion simulation assumes that the energy in a target spectrum is spread over a duration DT. DT is generally decomposed into the duration due to source effects (DS) and to path effects (DP). For the most commonly used source, seismological theory directly relates DS to the source corner frequency, accounting for the magnitude scaling of DT. In contrast, DP is related to propagation effects that are more difficult to represent by analytic equations based on the physics of the process. We are primarily motivated to revisit DT because the function currently employed by many implementations of the stochastic method for active tectonic regions underpredicts observed durations, leading to an overprediction of ground motions for a given target spectrum. Further, there is some inconsistency in the literature regarding which empirical duration corresponds to DT. Thus, we begin by clarifying the relationship between empirical durations and DT as used in the first author’s implementation of the stochastic method, and then we develop a new DP relationship. The new DP function gives significantly longer durations than in the previous DP function, but the relative contribution of DP to DT still diminishes with increasing magnitude. Thus, this correction is more important for small events or subfaults of larger events modeled with the stochastic finite‐fault method.
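
    The decomposition described above, DT = DS + DP with DS tied to the source corner frequency and DP interpolated from distance-dependent nodes, can be written in a few lines. In the sketch below the node distances and path durations are hypothetical placeholders, not the published coefficients; only the structure of the calculation follows the abstract.

```python
import numpy as np

# Sketch of the duration model used in stochastic-method simulations:
# total duration DT = DS + DP, with source duration DS = 1/fc (corner
# frequency) and path duration DP interpolated from distance nodes.
# The node values below are hypothetical, not published values.

R_NODES = np.array([0.0, 10.0, 70.0, 130.0, 200.0])     # distance (km)
DP_NODES = np.array([0.0, 2.4, 16.0, 26.0, 40.0])        # path duration (s)

def total_duration(fc, r_km):
    ds = 1.0 / fc                                         # source duration (s)
    dp = np.interp(r_km, R_NODES, DP_NODES)               # path duration (s)
    return ds + dp

print(f"DT at fc=0.2 Hz, R=100 km: {total_duration(0.2, 100.0):.1f} s")
```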

  6. Creating Energy for Change.

    ERIC Educational Resources Information Center

    Neiman, Robert A.

    2002-01-01

    Describes the use of small-scale change projects by Philadelphia's Department of Human Services to generate new outcomes and new skills and experience that improved basic day-to-day operations, strategic planning, and cumulatively produced larger-scale changes in service, financing, and performance. (Author/LRW)

  7. ASIC implementation of recursive scaled discrete cosine transform algorithm

    NASA Astrophysics Data System (ADS)

    On, Bill N.; Narasimhan, Sam; Huang, Victor K.

    1994-05-01

    A program to implement the Recursive Scaled Discrete Cosine Transform (DCT) algorithm as proposed by H. S. Hou has been undertaken at the Institute of Microelectronics. Implementation of the design was done using a top-down design methodology with VHDL (VHSIC Hardware Description Language) for chip modeling. Once the VHDL simulation was satisfactorily completed, the design was synthesized into gates using a synthesis tool. The architecture of the design consists of two processing units together with a memory module for data storage and transpose. Each processing unit is composed of four pipelined stages, which allow the internal clock to run at one-eighth (1/8) the speed of the pixel clock. Each stage operates on eight pixels in parallel. As the data flow through each stage, various adders and multipliers transform them into the desired coefficients. The Scaled IDCT was implemented in a similar fashion with the adders and multipliers rearranged to perform the inverse DCT algorithm. The chip has been verified using Field Programmable Gate Array devices. The design is operational. The combination of fewer required multiplications and a pipelined architecture gives Hou's Recursive Scaled DCT good potential for achieving high performance at low cost in a Very Large Scale Integration (VLSI) implementation.
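
    The "scaled" in a scaled DCT means the transform produces coefficients that differ from the standard DCT by known per-coefficient factors, which are then folded into a later step (typically quantization), saving multiplications in the transform itself. The sketch below demonstrates only that deferral idea with scipy's DCT-II as the reference; it is not Hou's recursive factorization, and the scale vector is chosen arbitrarily for illustration.

```python
import numpy as np
from scipy.fft import dct

# Demonstrates the "scaled DCT" idea: the transform outputs coefficients up to
# a known per-coefficient scale factor, and that diagonal scaling is folded
# into the quantization step. The scale vector is illustrative only.

N = 8
scale = np.array([1.0, 0.9, 0.8, 0.7, 0.7, 0.8, 0.9, 1.0])   # illustrative

def scaled_dct(x):
    """Coefficients that differ from the orthonormal DCT-II by `scale`."""
    return dct(x, type=2, norm="ortho") / scale

def quantize(coeffs, q_step):
    """Quantization with the deferred scale factors folded into the step."""
    return np.round(coeffs * scale / q_step)

x = np.linspace(0.0, 1.0, N)
reference = np.round(dct(x, type=2, norm="ortho") / 0.05)
combined = quantize(scaled_dct(x), q_step=0.05)
print(np.array_equal(reference, combined))        # True: scaling was deferred
```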

  8. Economies of Scale and Scope in E-Learning

    ERIC Educational Resources Information Center

    Morris, David

    2008-01-01

    Economies of scale are often cited in the higher education literature as being one of the drivers for the deployment of e-learning. They are variously used to support the notions that higher education is becoming more global, that national policy towards e-learning should promote scale efficiencies, that larger institutions will be better able to…

  9. A Multi-Scale Perspective of the Effects of Forest Fragmentation on Birds in Eastern Forests

    Treesearch

    Frank R. Thompson; Therese M. Donovan; Richard M. DeGraff; John Faaborg; Scott K. Robinson

    2002-01-01

    We propose a model that considers forest fragmentation within a spatial hierarchy that includes regional or biogeographic effects, landscape-level fragmentation effects, and local habitat effects. We hypothesize that effects operate "top down" in that larger scale effects provide constraints or context for smaller scale effects. Bird species' abundance...

  10. Examining the Variability of Sleep Patterns during Treatment for Chronic Insomnia: Application of a Location-Scale Mixed Model

    PubMed Central

    Ong, Jason C.; Hedeker, Donald; Wyatt, James K.; Manber, Rachel

    2016-01-01

    Study Objectives: The purpose of this study was to introduce a novel statistical technique called the location-scale mixed model that can be used to analyze the mean level and intra-individual variability (IIV) using longitudinal sleep data. Methods: We applied the location-scale mixed model to examine changes from baseline in sleep efficiency on data collected from 54 participants with chronic insomnia who were randomized to an 8-week Mindfulness-Based Stress Reduction (MBSR; n = 19), an 8-week Mindfulness-Based Therapy for Insomnia (MBTI; n = 19), or an 8-week self-monitoring control (SM; n = 16). Sleep efficiency was derived from daily sleep diaries collected at baseline (days 1–7), early treatment (days 8–21), late treatment (days 22–63), and post week (days 64–70). The behavioral components (sleep restriction, stimulus control) were delivered during late treatment in MBTI. Results: For MBSR and MBTI, the pre-to-post change in mean levels of sleep efficiency were significantly larger than the change in mean levels for the SM control, but the change in IIV was not significantly different. During early and late treatment, MBSR showed a larger increase in mean levels of sleep efficiency and a larger decrease in IIV relative to the SM control. At late treatment, MBTI had a larger increase in the mean level of sleep efficiency compared to SM, but the IIV was not significantly different. Conclusions: The location-scale mixed model provides a two-dimensional analysis on the mean and IIV using longitudinal sleep diary data with the potential to reveal insights into treatment mechanisms and outcomes. Citation: Ong JC, Hedeker D, Wyatt JK, Manber R. Examining the variability of sleep patterns during treatment for chronic insomnia: application of a location-scale mixed model. J Clin Sleep Med 2016;12(6):797–804. PMID:26951414
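
    To make the "two-dimensional" idea concrete, the sketch below fits a heavily simplified location-scale model by maximum likelihood: the mean (location) and the log within-group variance (scale) of sleep efficiency are each modeled as linear functions of a treatment indicator. A full location-scale mixed model would also include random subject effects on both parts; those are omitted here, and the data, coefficients, and starting values are synthetic placeholders.

```python
import numpy as np
from scipy.optimize import minimize

# Simplified location-scale illustration: jointly model the mean and the
# log-variance of nightly sleep efficiency as functions of a treatment flag.
# Random subject effects are omitted to keep the sketch short.

def neg_loglik(theta, y, treat):
    b0, b1, t0, t1 = theta
    mu = b0 + b1 * treat                     # location model
    sigma2 = np.exp(t0 + t1 * treat)         # scale model (log-linear variance)
    return 0.5 * np.sum(np.log(2 * np.pi * sigma2) + (y - mu) ** 2 / sigma2)

rng = np.random.default_rng(3)
treat = np.repeat([0, 1], 500)                           # synthetic nights
y = rng.normal(80 + 8 * treat, np.sqrt(np.exp(3.5 - 0.8 * treat)))
fit = minimize(neg_loglik, x0=[75.0, 0.0, 3.0, 0.0], args=(y, treat))
print(dict(zip(["b0", "b1", "log_var0", "log_var1"], np.round(fit.x, 2))))
```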

  11. Theoretical prediction and impact of fundamental electric dipole moments

    DOE PAGES

    Ellis, Sebastian A. R.; Kane, Gordon L.

    2016-01-13

    The predicted Standard Model (SM) electric dipole moments (EDMs) of electrons and quarks are tiny, providing an important window to observe new physics. Theories beyond the SM typically allow relatively large EDMs. The EDMs depend on the relative phases of terms in the effective Lagrangian of the extended theory, which are generally unknown. Underlying theories, such as string/M-theories compactified to four dimensions, could predict the phases and thus EDMs in the resulting supersymmetric (SUSY) theory. Earlier one of us, with collaborators, made such a prediction and found, unexpectedly, that the phases were predicted to be zero at tree level in the theory at the unification or string scale ~O(10^16 GeV). Electroweak (EW) scale EDMs still arise via running from the high scale, and depend only on the SM Yukawa couplings that also give the CKM phase. Here we extend the earlier work by studying the dependence of the low scale EDMs on the constrained but not fully known fundamental Yukawa couplings. The dominant contribution is from two loop diagrams and is not sensitive to the choice of Yukawa texture. The electron EDM should not be found to be larger than about 5 × 10^-30 e cm, and the neutron EDM should not be larger than about 5 × 10^-29 e cm. These values are quite a bit smaller than the reported predictions from Split SUSY and typical effective theories, but much larger than the Standard Model prediction. Also, since models with random phases typically give much larger EDMs, it is a significant testable prediction of compactified M-theory that the EDMs should not be above these upper limits. The actual EDMs can be below the limits, so once they are measured they could provide new insight into the fundamental Yukawa couplings of leptons and quarks. As a result, we comment also on the role of strong CP violation. EDMs probe fundamental physics near the Planck scale.

  12. Theoretical prediction and impact of fundamental electric dipole moments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ellis, Sebastian A. R.; Kane, Gordon L.

    The predicted Standard Model (SM) electric dipole moments (EDMs) of electrons and quarks are tiny, providing an important window to observe new physics. Theories beyond the SM typically allow relatively large EDMs. The EDMs depend on the relative phases of terms in the effective Lagrangian of the extended theory, which are generally unknown. Underlying theories, such as string/M-theories compactified to four dimensions, could predict the phases and thus EDMs in the resulting supersymmetric (SUSY) theory. Earlier one of us, with collaborators, made such a prediction and found, unexpectedly, that the phases were predicted to be zero at tree level in the theory at the unification or string scale ~O(10^16 GeV). Electroweak (EW) scale EDMs still arise via running from the high scale, and depend only on the SM Yukawa couplings that also give the CKM phase. Here we extend the earlier work by studying the dependence of the low scale EDMs on the constrained but not fully known fundamental Yukawa couplings. The dominant contribution is from two loop diagrams and is not sensitive to the choice of Yukawa texture. The electron EDM should not be found to be larger than about 5 × 10^-30 e cm, and the neutron EDM should not be larger than about 5 × 10^-29 e cm. These values are quite a bit smaller than the reported predictions from Split SUSY and typical effective theories, but much larger than the Standard Model prediction. Also, since models with random phases typically give much larger EDMs, it is a significant testable prediction of compactified M-theory that the EDMs should not be above these upper limits. The actual EDMs can be below the limits, so once they are measured they could provide new insight into the fundamental Yukawa couplings of leptons and quarks. As a result, we comment also on the role of strong CP violation. EDMs probe fundamental physics near the Planck scale.

  13. The scaling of postcranial muscles in cats (Felidae) I: forelimb, cervical, and thoracic muscles.

    PubMed

    Cuff, Andrew R; Sparkes, Emily L; Randau, Marcela; Pierce, Stephanie E; Kitchener, Andrew C; Goswami, Anjali; Hutchinson, John R

    2016-07-01

    The body masses of cats (Mammalia, Carnivora, Felidae) span a ~300-fold range from the smallest to largest species. Despite this range, felid musculoskeletal anatomy remains remarkably conservative, including the maintenance of a crouched limb posture at unusually large sizes. The forelimbs in felids are important for body support and other aspects of locomotion, as well as climbing and prey capture, with the assistance of the vertebral (and hindlimb) muscles. Here, we examine the scaling of the anterior postcranial musculature across felids to assess scaling patterns between different species spanning the range of felid body sizes. The muscle architecture (lengths and masses of the muscle-tendon unit components) for the forelimb, cervical and thoracic muscles was quantified to analyse how the muscles scale with body mass. Our results demonstrate that physiological cross-sectional areas of the forelimb muscles scale positively with increasing body mass (i.e. becoming relatively larger). Many significantly allometric variables pertain to shoulder support, whereas the rest of the limb muscles become relatively weaker in larger felid species. However, when phylogenetic relationships were corrected for, most of these significant relationships disappeared, leaving no significantly allometric muscle metrics. The majority of cervical and thoracic muscle metrics are not significantly allometric, despite there being many allometric skeletal elements in these regions. When forelimb muscle data were considered in isolation or in combination with those of the vertebral muscles in principal components analyses and MANOVAs, there was no significant discrimination among species by either size or locomotory mode. Our results support the inference that larger felid species have relatively weaker anterior postcranial musculature compared with smaller species, due to an absence of significant positive allometry of forelimb or vertebral muscle architecture. This difference in strength is consistent with behavioural changes in larger felids, such as a reduction of maximal speed and other aspects of locomotor abilities. © 2016 Anatomical Society.

  14. Observing Triggered Earthquakes Across Iran with Calibrated Earthquake Locations

    NASA Astrophysics Data System (ADS)

    Karasozen, E.; Bergman, E.; Ghods, A.; Nissen, E.

    2016-12-01

    We investigate earthquake triggering phenomena in Iran by analyzing patterns of aftershock activity around mapped surface ruptures. Iran has an intense level of seismicity (> 40,000 events listed in the ISC Bulletin since 1960) due to it accommodating a significant portion of the continental collision between Arabia and Eurasia. There are nearly thirty mapped surface ruptures associated with earthquakes of M 6-7.5, mostly in eastern and northwestern Iran, offering a rich potential to study the kinematics of earthquake nucleation, rupture propagation, and subsequent triggering. However, catalog earthquake locations are subject to up to 50 km of location bias from the combination of unknown Earth structure and unbalanced station coverage, making it challenging to assess both the rupture directivity of larger events and the spatial patterns of their aftershocks. To overcome this limitation, we developed a new two-tiered multiple-event relocation approach to obtain hypocentral parameters that are minimally biased and have realistic uncertainties. In the first stage, locations of small clusters of well-recorded earthquakes at local spatial scales (100s of events across 100 km length scales) are calibrated either by using near-source arrival times or independent location constraints (e.g. local aftershock studies, InSAR solutions), using an implementation of the Hypocentroidal Decomposition relocation technique called MLOC. Epicentral uncertainties are typically less than 5 km. Then, these events are used as prior constraints in the code BayesLoc, a Bayesian relocation technique that can handle larger datasets, to yield region-wide calibrated hypocenters (1000s of events over 1000 km length scales). With locations and errors both calibrated, the pattern of aftershock activity can reveal the type of the earthquake triggering: dynamic stress changes promote an increase in the seismicity rate in the direction of unilateral propagation, whereas static stress changes should not be biased by rupture propagation direction. Here we present results from Ahar, Baladeh, Qom, Rigan, Silakhour and Zirkuh clusters, that include early-instrumental and modern mainshock-aftershock sequences. These will in turn provide a greatly improved basis for research into seismic hazards in this region.

  15. Cryopreservation of Brain Endothelial Cells Derived from Human Induced Pluripotent Stem Cells Is Enhanced by Rho-Associated Coiled Coil-Containing Kinase Inhibition.

    PubMed

    Wilson, Hannah K; Faubion, Madeline G; Hjortness, Michael K; Palecek, Sean P; Shusta, Eric V

    2016-12-01

    The blood-brain barrier (BBB) maintains brain homeostasis but also presents a major obstacle to brain drug delivery. Brain microvascular endothelial cells (BMECs) form the principal barrier and therefore represent the major cellular component of in vitro BBB models. Such models are often used for mechanistic studies of the BBB in health and disease and for drug screening. Recently, human induced pluripotent stem cells (iPSCs) have emerged as a new source for generating BMEC-like cells for use in in vitro human BBB studies. However, the inability to cryopreserve iPSC-BMECs has impeded implementation of this model by requiring a fresh differentiation to generate cells for each experiment. Cryopreservation of differentiated iPSC-BMECs would have a number of distinct advantages, including enabling production of larger scale lots, decreasing lead time to generate purified iPSC-BMEC cultures, and facilitating use of iPSC-BMECs in large-scale screening. In this study, we demonstrate that iPSC-BMECs can be successfully cryopreserved at multiple differentiation stages. Cryopreserved iPSC-BMECs retain high viability, express standard endothelial and BBB markers, and reach a high transendothelial electrical resistance (TEER) of ∼3000 Ω·cm², equivalent to nonfrozen controls. Rho-associated coiled coil-containing kinase (ROCK) inhibitor Y-27632 substantially increased survival and attachment of cryopreserved iPSC-BMECs, as well as stabilized TEER above 800 Ω·cm² out to 7 days post-thaw. Overall, cryopreservation will ease handling and storage of high-quality iPSC-BMECs, reducing a key barrier to greater implementation of these cells in modeling the human BBB.

  16. Adverse Effects of Daylight Saving Time on Adolescents' Sleep and Vigilance

    PubMed Central

    Medina, Diana; Ebben, Matthew; Milrad, Sara; Atkinson, Brianna; Krieger, Ana C.

    2015-01-01

    Study Objectives: Daylight saving time (DST) has been established with the intent to reduce energy expenditure, however unintentional effects on sleep and vigilance have not been consistently measured. The objective of this study was to test the hypothesis that DST adversely affects high school students' sleep and vigilance on the school days following its implementation. Methods: A natural experiment design was used to assess baseline and post-DST differences in objective and subjective measures of sleep and vigilance by actigraphy, sleep diary, sleepiness scale, and psychomotor vigilance testing (PVT). Students were tested during school days immediately preceding and following DST. Results: A total of 40 high school students were enrolled in this study; 35 completed the protocol. Sleep duration declined by an average of 32 minutes on the weeknights post-DST, reflecting a cumulative sleep loss of 2 h 42 min as compared to the baseline week (p = 0.001). This finding was confirmed by sleep diary analyses, reflecting an average sleep loss of 27 min/night (p = 0.004) post-DST. Vigilance significantly deteriorated, with a decline in PVT performance post-DST, resulting in longer reaction times (p < 0.001) and increased lapses (p < 0.001). Increased daytime sleepiness was also demonstrated (p < 0.001). Conclusions: The early March DST onset adversely affected sleep and vigilance in high school students resulting in increased daytime sleepiness. Larger scale evaluations of sleep impairments related to DST are needed to further quantify this problem in the population. If confirmed, measures to attenuate sleep loss post-DST should be implemented. Citation: Medina D, Ebben M, Milrad S, Atkinson B, Krieger AC. Adverse effects of daylight saving time on adolescents' sleep and vigilance. J Clin Sleep Med 2015;11(8):879–884. PMID:25979095

  17. Observed decrease in atmospheric mercury explained by global decline in anthropogenic emissions

    PubMed Central

    Zhang, Yanxu; Jacob, Daniel J.; Horowitz, Hannah M.; Chen, Long; Amos, Helen M.; Krabbenhoft, David P.; Slemr, Franz; St. Louis, Vincent L.; Sunderland, Elsie M.

    2016-01-01

    Observations of elemental mercury (Hg(0)) at sites in North America and Europe show large decreases (∼1–2% y(-1)) from 1990 to present. Observations in background northern hemisphere air, including Mauna Loa Observatory (Hawaii) and CARIBIC (Civil Aircraft for the Regular Investigation of the atmosphere Based on an Instrument Container) aircraft flights, show weaker decreases (<1% y(-1)). These decreases are inconsistent with current global emission inventories indicating flat or increasing emissions over that period. However, the inventories have three major flaws: (i) they do not account for the decline in atmospheric release of Hg from commercial products; (ii) they are biased in their estimate of artisanal and small-scale gold mining emissions; and (iii) they do not properly account for the change in Hg(0)/Hg(II) speciation of emissions from coal-fired utilities after implementation of emission controls targeted at SO2 and NOx. We construct an improved global emission inventory for the period 1990 to 2010 accounting for the above factors and find a 20% decrease in total Hg emissions and a 30% decrease in anthropogenic Hg(0) emissions, with much larger decreases in North America and Europe offsetting the effect of increasing emissions in Asia. Implementation of our inventory in a global 3D atmospheric Hg simulation [GEOS-Chem (Goddard Earth Observing System-Chemistry)] coupled to land and ocean reservoirs reproduces the observed large-scale trends in atmospheric Hg(0) concentrations and in Hg(II) wet deposition. The large trends observed in North America and Europe reflect the phase-out of Hg from commercial products as well as the cobenefit from SO2 and NOx emission controls on coal-fired utilities. PMID:26729866

  18. Observed decrease in atmospheric mercury explained by global decline in anthropogenic emissions.

    PubMed

    Zhang, Yanxu; Jacob, Daniel J; Horowitz, Hannah M; Chen, Long; Amos, Helen M; Krabbenhoft, David P; Slemr, Franz; St Louis, Vincent L; Sunderland, Elsie M

    2016-01-19

    Observations of elemental mercury (Hg(0)) at sites in North America and Europe show large decreases (∼ 1-2% y(-1)) from 1990 to present. Observations in background northern hemisphere air, including Mauna Loa Observatory (Hawaii) and CARIBIC (Civil Aircraft for the Regular Investigation of the atmosphere Based on an Instrument Container) aircraft flights, show weaker decreases (<1% y(-1)). These decreases are inconsistent with current global emission inventories indicating flat or increasing emissions over that period. However, the inventories have three major flaws: (i) they do not account for the decline in atmospheric release of Hg from commercial products; (ii) they are biased in their estimate of artisanal and small-scale gold mining emissions; and (iii) they do not properly account for the change in Hg(0)/Hg(II) speciation of emissions from coal-fired utilities after implementation of emission controls targeted at SO2 and NOx. We construct an improved global emission inventory for the period 1990 to 2010 accounting for the above factors and find a 20% decrease in total Hg emissions and a 30% decrease in anthropogenic Hg(0) emissions, with much larger decreases in North America and Europe offsetting the effect of increasing emissions in Asia. Implementation of our inventory in a global 3D atmospheric Hg simulation [GEOS-Chem (Goddard Earth Observing System-Chemistry)] coupled to land and ocean reservoirs reproduces the observed large-scale trends in atmospheric Hg(0) concentrations and in Hg(II) wet deposition. The large trends observed in North America and Europe reflect the phase-out of Hg from commercial products as well as the cobenefit from SO2 and NOx emission controls on coal-fired utilities.

  19. Elaboration d'une structure de collecte des matieres residuelles selon la Theorie Constructale

    NASA Astrophysics Data System (ADS)

    Al-Maalouf, George

    Currently, more than 80% of waste management costs are attributed to the waste collection phase. One common way to reduce these costs is the implementation of waste transfer stations, where at least three collection vehicles transfer their loads into a larger hauling truck. This cost reduction rests on the principle of economies of scale applied to the transportation sector. This solution improves the efficiency of the system; nevertheless, it does not optimize it. Recent studies show that the compactor trucks used in the collection phase generate significant economic losses, mainly due to frequent stops and to transportation to transfer stations that are often far from the collection area. This study proposes restructuring the waste collection process into two phases: collection, and transportation to the transfer station. To achieve this, a deterministic framework, the Constructal Theory (CT), is used. The results show that above a certain density threshold, applying the CT minimizes energy losses in the system. Collection is optimal when low-capacity vehicles collect door to door and transfer their loads into high-capacity trucks, which then haul the load to the transfer station. To minimize labor costs, this study proposes the use of Cybernetic Transport Systems (CTS) as automated vehicles for collecting small amounts of waste. Finally, the proposed optimization method is part of a decentralized approach to the collection and treatment of waste, which allows the implementation of multi-process waste treatment facilities at a territorial scale. Keywords: Waste collection, Constructal Theory, Cybernetic Transportation Systems.

  20. Studying Regional Wave Source Time Functions Using the Empirical Green's Function Method: Application to Central Asia

    NASA Astrophysics Data System (ADS)

    Xie, J.; Schaff, D. P.; Chen, Y.; Schult, F.

    2013-12-01

    Reliably estimated source time functions (STFs) from high-frequency regional waveforms, such as Lg, Pn and Pg, provide important input for seismic source studies, explosion detection and discrimination, and minimization of parameter trade-off in attenuation studies. We have searched for candidate pairs of larger and smaller earthquakes in and around China that share the same focal mechanism but differ significantly in magnitude, so that the empirical Green's function (EGF) method can be applied to study the STFs of the larger events. We conducted about a million deconvolutions using waveforms from 925 earthquakes, and screened the deconvolved traces to exclude those from event pairs that involved different mechanisms. Only 2,700 traces passed this screening and could be further analyzed using the EGF method. We have developed a series of codes to speed up the final EGF analysis by implementing automation and graphical user interface procedures. The codes have been fully tested with a subset of screened data, and we are currently applying them to all the screened data. We will present a large number of deconvolved STFs retrieved using various phases (Lg, Pn, Sn, Pg, and coda), with information on any directivity, on possible dependence of pulse durations on wave type, on scaling relations between pulse durations and event sizes, and on the estimated source static stress drops.
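
    The basic EGF deconvolution step can be illustrated in a few lines: the record of the larger event is divided by the record of a smaller colocated event in the frequency domain, with a water level to stabilize small spectral amplitudes, yielding the relative source time function. The sketch below uses synthetic signals and a generic water-level scheme; it does not reproduce the processing chain of the cited study.

```python
import numpy as np

# Sketch of empirical Green's function deconvolution with a water level.
# Synthetic signals only; the water level and tapering choices are arbitrary.

def egf_deconvolve(big, small, water_level=0.05):
    n = len(big)
    B, S = np.fft.rfft(big, n), np.fft.rfft(small, n)
    floor = water_level * np.abs(S).max()
    S_stab = np.where(np.abs(S) < floor, floor * np.exp(1j * np.angle(S)), S)
    return np.fft.irfft(B / S_stab, n)        # relative source time function

# Synthetic example: the large event is the small event convolved with a boxcar STF
rng = np.random.default_rng(4)
small = rng.normal(size=512) * np.exp(-np.arange(512) / 80.0)
stf = np.zeros(512)
stf[:20] = 1.0 / 20.0                         # 20-sample boxcar STF
big = np.convolve(small, stf)[:512]
rstf = egf_deconvolve(big, small)
print(f"recovered STF duration ~ {np.sum(rstf > 0.5 * rstf.max())} samples")
```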

  1. Regional effects of agricultural conservation practices on nutrient transport in the Upper Mississippi River Basin

    USGS Publications Warehouse

    Garcia, Ana Maria.; Alexander, Richard B.; Arnold, Jeffrey G.; Norfleet, Lee; White, Michael J.; Robertson, Dale M.; Schwarz, Gregory E.

    2016-01-01

    Despite progress in the implementation of conservation practices, related improvements in water quality have been challenging to measure in larger river systems. In this paper we quantify these downstream effects by applying the empirical U.S. Geological Survey water-quality model SPARROW to investigate whether spatial differences in conservation intensity were statistically correlated with variations in nutrient loads. In contrast to other forms of water quality data analysis, the application of SPARROW controls for confounding factors such as hydrologic variability, multiple sources and environmental processes. A measure of conservation intensity was derived from the USDA-CEAP regional assessment of the Upper Mississippi River and used as an explanatory variable in a model of the Upper Midwest. The spatial pattern of conservation intensity was negatively correlated (p = 0.003) with the total nitrogen loads in streams in the basin. Total phosphorus loads were weakly negatively correlated with conservation (p = 0.25). Regional nitrogen reductions were estimated to range from 5 to 34% and phosphorus reductions from 1 to 10% in major river basins of the Upper Mississippi region. The statistical associations between conservation and nutrient loads are consistent with hydrological and biogeochemical processes such as denitrification. The results provide empirical evidence at the regional scale that conservation practices have had a larger statistically detectable effect on nitrogen than on phosphorus loadings in streams and rivers of the Upper Mississippi Basin.
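
    As a rough illustration of the statistical idea, rather than of SPARROW itself, the sketch below regresses a synthetic log nutrient load on source and transport covariates plus a conservation-intensity term and reads off the sign and p-value of that term. All variable names and data are invented.

```python
# Toy regression (not SPARROW): is a conservation-intensity covariate negatively
# associated with log nutrient load after controlling for other terms?
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
catchments = pd.DataFrame({
    "log_fertilizer": rng.normal(2.0, 0.5, n),
    "log_runoff": rng.normal(0.0, 0.3, n),
    "conservation": rng.uniform(0.0, 1.0, n),   # fraction of cropland treated (made up)
})
# Synthetic loads built with a negative conservation effect of -0.4.
catchments["log_tn_load"] = (0.8 * catchments["log_fertilizer"]
                             + 0.5 * catchments["log_runoff"]
                             - 0.4 * catchments["conservation"]
                             + rng.normal(0.0, 0.2, n))

fit = smf.ols("log_tn_load ~ log_fertilizer + log_runoff + conservation",
              data=catchments).fit()
print(fit.params["conservation"], fit.pvalues["conservation"])  # negative, small p
```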

  2. Regional Effects of Agricultural Conservation Practices on Nutrient Transport in the Upper Mississippi River Basin.

    PubMed

    García, Ana María; Alexander, Richard B; Arnold, Jeffrey G; Norfleet, Lee; White, Michael J; Robertson, Dale M; Schwarz, Gregory

    2016-07-05

    Despite progress in the implementation of conservation practices, related improvements in water quality have been challenging to measure in larger river systems. In this paper we quantify these downstream effects by applying the empirical U.S. Geological Survey water-quality model SPARROW to investigate whether spatial differences in conservation intensity were statistically correlated with variations in nutrient loads. In contrast to other forms of water quality data analysis, the application of SPARROW controls for confounding factors such as hydrologic variability, multiple sources and environmental processes. A measure of conservation intensity was derived from the USDA-CEAP regional assessment of the Upper Mississippi River and used as an explanatory variable in a model of the Upper Midwest. The spatial pattern of conservation intensity was negatively correlated (p = 0.003) with the total nitrogen loads in streams in the basin. Total phosphorus loads were weakly negatively correlated with conservation (p = 0.25). Regional nitrogen reductions were estimated to range from 5 to 34% and phosphorus reductions from 1 to 10% in major river basins of the Upper Mississippi region. The statistical associations between conservation and nutrient loads are consistent with hydrological and biogeochemical processes such as denitrification. The results provide empirical evidence at the regional scale that conservation practices have had a larger statistically detectable effect on nitrogen than on phosphorus loadings in streams and rivers of the Upper Mississippi Basin.

  3. Power Scaling and Seasonal Evolution of Floe Areas in the Arctic East Siberian Sea

    NASA Astrophysics Data System (ADS)

    Barton, C. C.; Geise, G. R.; Tebbens, S. F.

    2016-12-01

    The subjects of this paper are the size distribution of floes, its evolution during the Arctic summer season, and a model of fragmentation that generates a power-law scaling distribution of fragment sizes. This topic is relevant to marine vessels that encounter floes, to the calculation of sea ice albedo, to the determination of Arctic heat exchange, which is strongly influenced by ice concentrations and the amount of open water between floes, and to photosynthetic marine organisms that depend on sunlight penetrating the spaces between floes. Floes are 2-3 m thick and initially range in area from one to millions of square meters. The cumulative number versus floe area distribution of seasonal sea floes from six satellite images of the Arctic Ocean during the summer breakup and melting is well fit by two scale-invariant power-law scaling regimes for floe areas ranging from 30 m² to 28,400,000 m². Scaling exponents, B, for larger floe areas range from -0.6 to -1.0 with an average of -0.8. Scaling exponents, B, for smaller floe areas range from -0.3 to -0.6 with an average of -0.5. The inflection point between the two scaling regimes ranges from 283 × 10² m² to 4850 × 10² m² and generally moves from larger to smaller floe areas through the summer melting season. We observe that the two scaling regimes and the inflection between them are established during the initial breakup of sea ice solely by the process of fracture. The floe size distributions retain their scaling exponents as the floe pack evolves from larger to smaller floe areas from the initial breakup through the summer season, due to grinding, crushing, fracture, and melting. The scaling exponents for the floe area distribution are in the same range as those reported in previous studies of Arctic floes and as the single scaling exponents found for crushed and ground geologic materials, including streambed gravel, lunar debris, and artificially crushed quartz. A probabilistic fragmentation model that produces a power-law distribution of particle sizes has been developed and will be presented.
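
    Estimating a scaling exponent B of the kind reported above amounts to fitting a straight line to log cumulative number versus log floe area within each regime. The sketch below does this on a synthetic two-regime distribution; the inflection area and amplitudes are assumptions chosen only to reproduce exponents of roughly -0.5 and -0.8.

```python
# Fit power-law scaling exponents B to a cumulative number-vs-area distribution
# by least squares on log-log axes, separately above and below the inflection.
import numpy as np

def scaling_exponent(areas_m2, counts, a_min, a_max):
    """Slope of log10(N >= A) versus log10(A) within [a_min, a_max]."""
    mask = (areas_m2 >= a_min) & (areas_m2 <= a_max)
    slope, _ = np.polyfit(np.log10(areas_m2[mask]), np.log10(counts[mask]), 1)
    return slope

# Synthetic two-regime distribution (continuous at a hypothetical inflection area).
areas = np.logspace(1.5, 7.5, 60)                # ~30 m^2 to ~3e7 m^2
inflection = 3.0e4
counts = np.where(areas < inflection,
                  1e4 * areas ** -0.5,                        # shallow regime, small floes
                  1e4 * inflection ** 0.3 * areas ** -0.8)    # steep regime, large floes

print(scaling_exponent(areas, counts, 30, inflection))   # ~ -0.5
print(scaling_exponent(areas, counts, inflection, 3e7))  # ~ -0.8
```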

  4. Examining the Variability of Sleep Patterns during Treatment for Chronic Insomnia: Application of a Location-Scale Mixed Model.

    PubMed

    Ong, Jason C; Hedeker, Donald; Wyatt, James K; Manber, Rachel

    2016-06-15

    The purpose of this study was to introduce a novel statistical technique, the location-scale mixed model, that can be used to analyze the mean level and intra-individual variability (IIV) of longitudinal sleep data. We applied the location-scale mixed model to examine changes from baseline in sleep efficiency using data collected from 54 participants with chronic insomnia who were randomized to an 8-week Mindfulness-Based Stress Reduction program (MBSR; n = 19), an 8-week Mindfulness-Based Therapy for Insomnia (MBTI; n = 19), or an 8-week self-monitoring control (SM; n = 16). Sleep efficiency was derived from daily sleep diaries collected at baseline (days 1-7), early treatment (days 8-21), late treatment (days 22-63), and a post-treatment week (days 64-70). The behavioral components (sleep restriction, stimulus control) were delivered during late treatment in MBTI. For MBSR and MBTI, the pre-to-post change in mean levels of sleep efficiency was significantly larger than the change in mean levels for the SM control, but the change in IIV was not significantly different. During early and late treatment, MBSR showed a larger increase in mean levels of sleep efficiency and a larger decrease in IIV relative to the SM control. At late treatment, MBTI had a larger increase in the mean level of sleep efficiency compared to SM, but the IIV was not significantly different. The location-scale mixed model provides a two-dimensional analysis of the mean and IIV using longitudinal sleep diary data, with the potential to reveal insights into treatment mechanisms and outcomes. © 2016 American Academy of Sleep Medicine.
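
    The location-scale mixed model estimates the mean (location) and the within-person variance (scale) jointly; a full implementation is beyond a short example, but the two quantities it targets can be approximated descriptively. The sketch below, with invented data and column names, summarizes each participant's mean sleep efficiency and within-person SD per phase and then averages them by group; it is a simplified stand-in, not the model used in the study.

```python
# Two-stage descriptive stand-in for the location-scale idea: per-participant
# mean (location) and within-person SD (IIV proxy) of sleep efficiency by phase,
# averaged within groups. Data and column names are invented.
import numpy as np
import pandas as pd

def location_and_iiv(diary: pd.DataFrame) -> pd.DataFrame:
    """diary columns: group, participant, phase, sleep_efficiency."""
    per_person = (diary
                  .groupby(["group", "participant", "phase"])["sleep_efficiency"]
                  .agg(mean_se="mean", iiv_sd="std")
                  .reset_index())
    return per_person.groupby(["group", "phase"])[["mean_se", "iiv_sd"]].mean()

# Tiny synthetic diary standing in for the real data.
rng = np.random.default_rng(0)
rows = []
for group, iiv in [("MBSR", 4.0), ("MBTI", 5.0), ("SM", 8.0)]:
    for pid in range(5):
        for phase, level in [("baseline", 72.0), ("post", 80.0)]:
            for eff in rng.normal(level, iiv, size=7):       # one week of diary entries
                rows.append(dict(group=group, participant=f"{group}{pid}",
                                 phase=phase, sleep_efficiency=eff))
print(location_and_iiv(pd.DataFrame(rows)))
```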

  5. Synchrotron imaging of the grasshopper tracheal system : morphological and physiological components of tracheal hypermetry.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greenlee, K. J.; Henry, J. R.; Kirkton, S. D.

    2009-11-01

    As grasshoppers increase in size during ontogeny, they have mass-specifically greater whole-body tracheal and tidal volumes and ventilation than predicted by an isometric relationship with body mass and body volume. However, the morphological and physiological bases of this respiratory hypermetry are unknown. In this study, we use synchrotron imaging to demonstrate that tracheal hypermetry in developing grasshoppers (Schistocerca americana) is due to increases in air sacs and tracheae and occurs in all three body segments, providing evidence against the hypothesis that hypermetry is due to gaining flight ability. We also assessed the scaling of air sac structure and function by measuring volume changes of focal abdominal air sacs. Ventilatory frequencies increased in larger animals during hypoxia (5% O₂) but did not scale in normoxia. For grasshoppers in normoxia, inflated and deflated air sac volumes and ventilation scaled hypermetrically. During hypoxia (5% O₂), many grasshoppers compressed air sacs nearly completely regardless of body size, and air sac volumes scaled isometrically. Together, these results demonstrate that whole-body tracheal hypermetry and enhanced ventilation in larger/older grasshoppers are primarily due to proportionally larger air sacs and higher ventilation frequencies in larger animals during hypoxia. Prior studies showed reduced whole-body tracheal volumes and tidal volume in late-stage grasshoppers, suggesting that tissue growth compresses air sacs. In contrast, we found that inflated volumes, percent volume changes, and ventilation were identical in abdominal air sacs of late-stage fifth-instar and early-stage animals, suggesting that the decreasing volume of the tracheal system later in the instar occurs in other body regions that have harder exoskeleton.
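
    Scaling statements such as "hypermetric" versus "isometric" come from the exponent b of an allometric fit V = a·M^b, estimated as the slope of a log-log regression. The sketch below shows that calculation on made-up mass and air sac volume data; the numbers are illustrative only.

```python
# Allometric exponent from a log-log regression: V = a * M**b, with b estimated
# as the slope of log10(V) on log10(M). Mass and volume values are invented.
import numpy as np

def allometric_exponent(body_mass, volume):
    slope, intercept = np.polyfit(np.log10(body_mass), np.log10(volume), 1)
    return slope, 10.0 ** intercept

mass_g = np.array([0.05, 0.12, 0.30, 0.70, 1.50])   # hypothetical instar masses
air_sac_volume_ul = 40.0 * mass_g ** 1.3            # constructed to be hypermetric
b, a = allometric_exponent(mass_g, air_sac_volume_ul)
print(f"scaling exponent b = {b:.2f}")              # ~1.3 (> 1, i.e. hypermetric)
```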

  6. Maximizing School Counselors' Efforts by Implementing School-Wide Positive Behavioral Interventions and Supports: A Case Study from the Field

    ERIC Educational Resources Information Center

    Goodman-Scott, Emily

    2014-01-01

    School-Wide Positive Behavioral Interventions and Supports (PBIS) are school-wide, data-driven frameworks for promoting safe schools and student learning. This article explains PBIS and provides practical examples of PBIS implementation by describing a school counselor-run PBIS framework in one elementary school, as part of a larger, district-wide…

  7. Voltage Imaging of Waking Mouse Cortex Reveals Emergence of Critical Neuronal Dynamics

    PubMed Central

    Scott, Gregory; Fagerholm, Erik D.; Mutoh, Hiroki; Leech, Robert; Sharp, David J.; Shew, Woodrow L.

    2014-01-01

    Complex cognitive processes require neuronal activity to be coordinated across multiple scales, ranging from local microcircuits to cortex-wide networks. However, multiscale cortical dynamics are not well understood because few experimental approaches have provided sufficient support for hypotheses involving multiscale interactions. To address these limitations, we used, in experiments involving mice, genetically encoded voltage indicator imaging, which measures cortex-wide electrical activity at high spatiotemporal resolution. Here we show that, as mice recovered from anesthesia, scale-invariant spatiotemporal patterns of neuronal activity gradually emerge. We show for the first time that this scale-invariant activity spans four orders of magnitude in awake mice. In contrast, we found that the cortical dynamics of anesthetized mice were not scale invariant. Our results bridge empirical evidence from disparate scales and support theoretical predictions that the awake cortex operates in a dynamical regime known as criticality. The criticality hypothesis predicts that small-scale cortical dynamics are governed by the same principles as those governing larger-scale dynamics. Importantly, these scale-invariant principles also optimize certain aspects of information processing. Our results suggest that during the emergence from anesthesia, criticality arises as information processing demands increase. We expect that, as measurement tools advance toward larger scales and greater resolution, the multiscale framework offered by criticality will continue to provide quantitative predictions and insight on how neurons, microcircuits, and large-scale networks are dynamically coordinated in the brain. PMID:25505314

  8. Large Scale Density Estimation of Blue and Fin Whales: Utilizing Sparse Array Data to Develop and Implement a New Method for Estimating Blue and Fin Whale Density

    DTIC Science & Technology

    2015-09-30

    Large Scale Density Estimation of Blue and Fin Whales: Utilizing Sparse Array Data to Develop and Implement a New Method for Estimating Blue and Fin Whale Density. Len Thomas & Danielle Harris, Centre… to develop and implement a new method for estimating blue and fin whale density that is effective over large spatial scales and is designed to cope

  9. 'Scaling-up is a craft not a science': Catalysing scale-up of health innovations in Ethiopia, India and Nigeria.

    PubMed

    Spicer, Neil; Bhattacharya, Dipankar; Dimka, Ritgak; Fanta, Feleke; Mangham-Jefferies, Lindsay; Schellenberg, Joanna; Tamire-Woldemariam, Addis; Walt, Gill; Wickremasinghe, Deepthi

    2014-11-01

    Donors and other development partners commonly introduce innovative practices and technologies to improve health in low and middle income countries. Yet many innovations that are effective in improving health and survival are slow to be translated into policy and implemented at scale. Understanding the factors influencing scale-up is important. We conducted a qualitative study involving 150 semi-structured interviews with government, development partners, civil society organisations and externally funded implementers, professional associations and academic institutions in 2012/13 to explore scale-up of innovative interventions targeting mothers and newborns in Ethiopia, the Indian state of Uttar Pradesh and the six states of northeast Nigeria, which are settings with high burdens of maternal and neonatal mortality. Interviews were analysed using a common analytic framework developed for cross-country comparison and themes were coded using Nvivo. We found that programme implementers across the three settings require multiple steps to catalyse scale-up. Advocating for government to adopt and finance health innovations requires: designing scalable innovations; embedding scale-up in programme design and allocating time and resources; building implementer capacity to catalyse scale-up; adopting effective approaches to advocacy; presenting strong evidence to support government decision making; involving government in programme design; invoking policy champions and networks; strengthening harmonisation among external programmes; aligning innovations with health systems and priorities. Other steps include: supporting government to develop policies and programmes and strengthening health systems and staff; promoting community uptake by involving media, community leaders, mobilisation teams and role models. We conclude that scale-up has no magic bullet solution - implementers must embrace multiple activities, and require substantial support from donors and governments in doing so. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.

  10. Scaling Personalization: Exploring the Implementation of an Academic and Social-Emotional Innovation in High Schools

    ERIC Educational Resources Information Center

    Rutledge, Stacey A.; Brown, Stephanie; Petrova, Kitchka

    2017-01-01

    Scaling in educational settings has tended to focus on replication of external programs with less focus on the nature of adaptation. In this article, we explore the scaling of Personalization for Academic and Social-emotional Learning (PASL), a systemic high school reform effort that was intentionally identified, developed, and implemented with…

  11. Scaling Personalization: Exploring the Implementation of an Academic and Social Emotional Innovation in High Schools

    ERIC Educational Resources Information Center

    Rutledge, Stacey; Brown, Stephanie; Petrova, Kitchka

    2017-01-01

    Scaling in educational settings has tended to focus on replication of external programs with less focus on the nature of adaptation. In this article, we explore the scaling of Personalization for Academic and Social-emotional Learning (PASL), a systemic high school reform effort that was intentionally identified, developed, and implemented with…

  12. Implementation of emergency department transfer communication measures in Minnesota critical access hospitals.

    PubMed

    Klingner, Jill; Moscovice, Ira; Casey, Michelle; McEllistrem Evenson, Alex

    2015-01-01

    Previously published findings based on field tests indicated that emergency department patient transfer communication measures are feasible and worthwhile to implement in rural hospitals. This study aims to expand those findings by focusing on the wide-scale implementation of these measures in the 79 Critical Access Hospitals (CAHs) in Minnesota from 2011 to 2013. Information was obtained from interviews with key informants involved in implementing the emergency department patient transfer communication measures in Minnesota as part of required statewide quality reporting. The first set of interviews targeted state-level organizations regarding their experiences working with providers. A second set of interviews targeted quality and administrative staff from CAHs regarding their experiences implementing measures. Implementing the measures in Minnesota CAHs proved to be successful in a number of respects, but informants also faced new challenges. Our recommendations, addressed to those seeking to successfully implement these measures in other states, take these challenges into account. Field-testing new quality measure implementations with volunteers may not be indicative of a full-scale implementation that requires facilities to participate. The implementation team's composition, communication efforts, prior relationships with facilities and providers, and experience with data collection and abstraction tools are critical factors in successfully implementing required reporting of quality measures on a wide scale. © 2014 National Rural Health Association.

  13. Tailoring Enterprise Systems Engineering Policy for Project Scale and Complexity

    NASA Technical Reports Server (NTRS)

    Cox, Renee I.; Thomas, L. Dale

    2014-01-01

    Space systems are characterized by varying degrees of scale and complexity. Accordingly, cost-effective implementation of systems engineering also varies depending on scale and complexity. Recognizing that systems engineering and integration happen everywhere and at all levels of a given system, and that the life cycle is an integrated process necessary to mature a design, the National Aeronautics and Space Administration's (NASA's) Marshall Space Flight Center (MSFC) has developed a suite of customized implementation approaches based on project scale and complexity. While it may be argued that a top-level systems engineering process is common to, and indeed desirable across, an enterprise for all space systems, implementation of that top-level process and the associated products developed as a result differ from system to system. The implementation approaches used for developing a scientific instrument necessarily differ from those used for a space station.

  14. Large-scale fortification of condiments and seasonings as a public health strategy: equity considerations for implementation.

    PubMed

    Zamora, Gerardo; Flores-Urrutia, Mónica Crissel; Mayén, Ana-Lucia

    2016-09-01

    Fortification of staple foods with vitamins and minerals is an effective approach to increase micronutrient intake and improve nutritional status. The specific use of condiments and seasonings as vehicles in large-scale fortification programs is a relatively new public health strategy. This paper underscores equity considerations for the implementation of large-scale fortification of condiments and seasonings as a public health strategy by examining nonexhaustive examples of programmatic experiences and pilot projects in various settings. An overview of conceptual elements in implementation research and equity is presented, followed by an examination of equity considerations for five implementation strategies: (1) enhancing the capabilities of the public sector, (2) improving the performance of implementing agencies, (3) strengthening the capabilities and performance of frontline workers, (4) empowering communities and individuals, and (5) supporting multiple stakeholders engaged in improving health. Finally, specific considerations related to intersectoral action are discussed. Large-scale fortification of condiments and seasonings cannot be a standalone strategy and needs to be implemented with concurrent and coordinated public health strategies, which should be informed by a health equity lens. © 2016 New York Academy of Sciences.

  15. Embarking on large-scale qualitative research: reaping the benefits of mixed methods in studying youth, clubs and drugs

    PubMed Central

    Hunt, Geoffrey; Moloney, Molly; Fazio, Adam

    2012-01-01

    Qualitative research is often conceptualized as inherently small-scale research, primarily conducted by a lone researcher enmeshed in extensive and long-term fieldwork or involving in-depth interviews with a small sample of 20 to 30 participants. In the study of illicit drugs, traditionally this has often been in the form of ethnographies of drug-using subcultures. Such small-scale projects have produced important interpretive scholarship that focuses on the culture and meaning of drug use in situated, embodied contexts. Larger-scale projects are often assumed to be solely the domain of quantitative researchers, using formalistic survey methods and descriptive or explanatory models. In this paper, however, we will discuss qualitative research done on a comparatively larger scale—with in-depth qualitative interviews with hundreds of young drug users. Although this work incorporates some quantitative elements into the design, data collection, and analysis, the qualitative dimension and approach has nevertheless remained central. Larger-scale qualitative research shares some of the challenges and promises of smaller-scale qualitative work including understanding drug consumption from an emic perspective, locating hard-to-reach populations, developing rapport with respondents, generating thick descriptions and a rich analysis, and examining the wider socio-cultural context as a central feature. However, there are additional challenges specific to the scale of qualitative research, which include data management, data overload and problems of handling large-scale data sets, time constraints in coding and analyzing data, and personnel issues including training, organizing and mentoring large research teams. Yet large samples can prove to be essential for enabling researchers to conduct comparative research, whether that be cross-national research within a wider European perspective undertaken by different teams or cross-cultural research looking at internal divisions and differences within diverse communities and cultures. PMID:22308079

  16. HIPAA is larger and more complex than Y2K.

    PubMed

    Tempesco, J W

    2000-07-01

    The Health Insurance Portability and Accountability Act of 1996 (HIPAA) is a larger and more complex problem than Y2K ever was. According to the author, the costs associated with a project of such unending scope and in support of intrusion into both information and operational systems of every health care transaction will be incalculable. Some estimate that the administrative simplification policies implemented through HIPAA will save billions of dollars annually, but it remains to be seen whether the savings will outweigh implementation and ongoing expenses associated with systemwide application of the regulations. This article addresses the rules established for electronic data interchange, data set standards for diagnostic and procedure codes, unique identifiers, coordination of benefits, privacy of individual health care information, electronic signatures, and security requirements.

  17. SITE TECHNOLOGY CAPSULE: SONOTECH PULSE COMBUSTION SYSTEM

    EPA Science Inventory

    Sonotech has targeted waste incineration as a potential application for this technology. Based on bench-scale rotary-kiln simulator tests, Sonotech proposed a demonstration under the SITE program to evaluate the Sonotech pulse combustion system on a larger scale at EPA's IRF in J...

  18. Current implementation and future plans on new code architecture, programming language and user interface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brun, B.

    1997-07-01

    Computer technology has improved tremendously in recent years, with larger media capacity, more memory, and more computational power. Visual computing with high-performance graphical interfaces and desktop computational power has changed the way engineers accomplish everyday tasks, development work, and safety analysis studies. The emergence of parallel computing will permit simulation over larger domains. In addition, new development methods, languages and tools have appeared in the last several years.

  19. Galactic outflows, star formation histories, and time-scales in starburst dwarf galaxies from STARBIRDS

    NASA Astrophysics Data System (ADS)

    McQuinn, Kristen B. W.; Skillman, Evan D.; Heilman, Taryn N.; Mitchell, Noah P.; Kelley, Tyler

    2018-07-01

    Winds are predicted to be ubiquitous in low-mass, actively star-forming galaxies. Observationally, winds have been detected in relatively few local dwarf galaxies, with even fewer constraints placed on their time-scales. Here, we compare galactic outflows traced by diffuse, soft X-ray emission from Chandra X-ray Observatory archival observations to the star formation histories derived from Hubble Space Telescope imaging of the resolved stellar populations in six starburst dwarfs. We constrain the longevity of a wind to have an upper limit of 25 Myr based on galaxies whose starburst activity has already declined, although a larger sample is needed to confirm this result. We find an average 16 per cent efficiency for converting the mechanical energy of stellar feedback to thermal, soft X-ray emission on the 25 Myr time-scale, somewhat higher than simulations predict. The outflows have likely been sustained for time-scales comparable to the duration of the starbursts (i.e. hundreds of Myr), after taking into account the time for the development and cessation of the wind. The wind time-scales imply that material is driven to larger distances in the circumgalactic medium than estimated by assuming short, 5-10 Myr starburst durations, and that less material is recycled back to the host galaxy on short time-scales. In the detected outflows, the expelled hot gas shows various morphologies that are not consistent with a simple biconical outflow structure. The sample and analysis are part of a larger program, the STARBurst IRregular Dwarf Survey (STARBIRDS), aimed at understanding the life cycle and impact of starburst activity in low-mass systems.

  20. Primary Care-Mental Health Integration in the Veterans Affairs Health System: Program Characteristics and Performance.

    PubMed

    Cornwell, Brittany L; Brockmann, Laurie M; Lasky, Elaine C; Mach, Jennifer; McCarthy, John F

    2018-06-01

    The Veterans Health Administration (VHA) has achieved substantial national implementation of primary care-mental health integration (PC-MHI) services. However, little is known regarding program characteristics, variation in characteristics across settings, or associations between program fidelity and performance. This study identified core elements of PC-MHI services and evaluated their associations with program characteristics and performance. A principal-components analysis (PCA) of reports from 349 sites identified factors associated with PC-MHI fidelity. Analyses assessed the correlation among factors and between each factor and facility type (medical center or community-based outpatient clinic), primary care population size, and performance indicators (receipt of PC-MHI services, same-day access to mental health and primary care services, and extended duration of services). PCA identified seven factors: core implementation, care management (CM) assessments and supervision, CM supervision receipt, colocated collaborative care (CCC) by prescribing providers, CCC by behavioral health providers, participation in patient aligned care teams (PACTs) for special populations, and treatment of complex mental health conditions. Sites serving larger populations had greater core implementation scores. Medical centers and sites serving larger populations had greater scores for CCC by prescribing providers, CM assessments and supervision, and participation in PACTs. Greater core implementation scores were associated with greater same-day access. Sites with greater scores for CM assessments and supervision had lower scores for treatment of complex conditions. Outpatient clinics and sites serving smaller populations experienced challenges in integrated care implementation. To enhance same-day access, VHA should continue to prioritize PC-MHI implementation. Providing brief, problem-focused care may enhance CM implementation.
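
    A hedged sketch of the analysis pattern described above: standardize the site-level program-characteristic items, extract principal components as "factors", and correlate a factor score with a performance indicator such as same-day access. The item set, the synthetic data and the variable names are assumptions, and this is not the study's actual code.

```python
# Toy factor extraction and correlation with a performance indicator.
import numpy as np
import pandas as pd
from scipy.stats import pearsonr
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def program_factors(site_items: pd.DataFrame, n_factors: int = 7) -> pd.DataFrame:
    """One row per site, one column per program-characteristic item."""
    z = StandardScaler().fit_transform(site_items)
    scores = PCA(n_components=n_factors).fit_transform(z)
    return pd.DataFrame(scores, index=site_items.index,
                        columns=[f"factor_{i + 1}" for i in range(n_factors)])

# Synthetic stand-in for 349 site reports with 20 items each.
rng = np.random.default_rng(0)
items = pd.DataFrame(rng.normal(size=(349, 20)),
                     index=[f"site_{i}" for i in range(349)])
scores = program_factors(items)

# Correlate the first factor with a (synthetic) same-day access rate.
same_day_access = 0.5 * scores["factor_1"] + rng.normal(scale=1.0, size=349)
r, p = pearsonr(scores["factor_1"], same_day_access)
print(r, p)
```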

  1. HOW GALACTIC ENVIRONMENT REGULATES STAR FORMATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meidt, Sharon E.

    2016-02-10

    In a new simple model I reconcile two contradictory views on the factors that determine the rate at which molecular clouds form stars (internal structure versus external, environmental influences), providing a unified picture for the regulation of star formation in galaxies. In the presence of external pressure, the pressure gradient set up within a self-gravitating turbulent (isothermal) cloud leads to a non-uniform density distribution. Thus the local environment of a cloud influences its internal structure. In the simple equilibrium model, the fraction of gas at high density in the cloud interior is determined simply by the cloud surface density, which is itself inherited from the pressure in the immediate surroundings. This idea is tested using measurements of the properties of local clouds, which are found to show remarkable agreement with the simple equilibrium model. The model also naturally predicts the star formation relation observed on cloud scales and at the same time provides a mapping between this relation and the closer-to-linear molecular star formation relation measured on larger scales in galaxies. The key is that pressure regulates not only the molecular content of the ISM but also the cloud surface density. I provide a straightforward prescription for the pressure regulation of star formation that can be directly implemented in numerical models. Predictions for the dense gas fraction and star formation efficiency measured on large scales within galaxies are also presented, establishing the basis for a new picture of star formation regulated by galactic environment.

  2. Consumer-Operated Service Programs: monetary and donated costs and cost-effectiveness.

    PubMed

    Yates, Brian T; Mannix, Danyelle; Freed, Michael C; Campbell, Jean; Johnsen, Matthew; Jones, Kristine; Blyler, Crystal R

    2011-01-01

    Examine cost differences between Consumer Operated Service Programs (COSPs) as possibly determined by a) size of program, b) use of volunteers and other donated resources, c) cost-of-living differences between program locales, d) COSP model applied, and e) delivery system used to implement the COSP model. As part of a larger evaluation of COSP, data on operating costs, enrollments, and mobilization of donated resources were collected for eight programs representing three COSP models (drop-in centers, mutual support, and education/advocacy training). Because the 8 programs were operated in geographically diverse areas of the US, costs were examined with and without adjustment for differences in local cost of living. Because some COSPs use volunteers and other donated resources, costs were measured with and without these resources being monetized. Scale of operation also was considered as a mediating variable for differences in program costs. Cost per visit, cost per consumer per quarter, and total program cost were calculated separately for funds spent and for resources donated for each COSP. Differences between COSPs in cost per consumer and cost per visit seem better explained by economies of scale and delivery system used than by cost-of-living differences between program locations or COSP model. Given others' findings that different COSP models produce little variation in service effectiveness, service costs can be minimized by maximizing scale of operation while using a delivery system that allows staff and facility resources to be increased or decreased quickly to match the number of consumers seeking services.

  3. Promotion of emotional wellbeing in oncology inpatients using VR.

    PubMed

    Espinoza, Macarena; Baños, Rosa M; García-Palacios, Azucena; Cervera, José M; Esquerdo, Gaspar; Barrajón, Enrique; Botella, Cristina

    2012-01-01

    In psycho-oncology, VR has been used mainly to manage pain and distress associated with medical procedures and chemotherapy, with very few applications aimed at promoting wellbeing in hospitalized patients. Considering this, a psychological intervention that uses VR to induce positive emotions in adult oncology inpatients was implemented, with the purpose of evaluating its utility for improving emotional wellbeing in this population. The sample was composed of 33 patients (69.7% men, aged from 41 to 85 years old; mean = 62.1; SD = 10.77). The intervention lasted 4 sessions of 30 minutes over one week. In these sessions, two virtual environments designed to induce joy or relaxation were used. Symptoms of depression and anxiety (Hospital Anxiety and Depression Scale, HADS) and level of happiness (Fordyce Scale) were assessed before and after the VR intervention. Also, Visual Analogue Scales (VAS) were used to assess emotional state and physical discomfort before and after each session. There were significant improvements in distress and level of happiness after the VR intervention. An increase in positive emotions and a decrease in negative emotions after sessions were also detected. Results emphasize the potential of VR as a positive technology that can be used to promote wellbeing during hospitalization, especially considering the brevity of the intervention and the advanced disease state of the participants. Despite these encouraging results, it is necessary to confirm them in studies with larger samples and control groups.

  4. The latest developments and outlook for hydrogen liquefaction technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ohlig, K.; Decker, L.

    2014-01-29

    Liquefied hydrogen is presently used mainly for space applications and the semiconductor industry. While clean energy applications, e.g. in the automotive sector, currently contribute only a small share of this demand, their demand may see a significant boost in the next years, with the need for large-scale liquefaction plants exceeding current plant sizes by far. Hydrogen liquefaction for small-scale plants with a maximum capacity of 3 tons per day (tpd) is accomplished with a Brayton refrigeration cycle using helium as refrigerant. This technology is characterized by low investment costs but lower process efficiency and hence higher operating costs. For larger plants, a hydrogen Claude cycle is used, characterized by higher investment but lower operating costs. However, liquefaction plants meeting the potentially high demand in the clean energy sector will need further optimization with regard to energy efficiency and hence operating costs. The present paper gives an overview of the currently applied technologies, including their thermodynamic and technical background. Areas of improvement are identified to derive process concepts for future large-scale hydrogen liquefaction plants meeting the needs of clean energy applications with optimized energy efficiency and hence minimized operating costs. Compared to other studies in this field, this paper focuses on the application of new technology and innovative concepts which are either readily available or will require short qualification procedures, and which will hence allow implementation in plants in the near future.

  5. Field study of in situ remediation of petroleum hydrocarbon contaminated soil on site using microwave energy.

    PubMed

    Chien, Yi-Chi

    2012-01-15

    Many laboratory-scale studies have strongly suggested that remediation of petroleum hydrocarbon contaminated soil by microwave heating is very effective; however, little definitive field data existed to support the laboratory-scale observations. This study aimed to evaluate the performance of a field-scale microwave heating system for remediating petroleum hydrocarbon contaminated soil. A constant microwave power of 2 kW was applied directly in the contaminated area during a 3.5 h decontamination process without water input. The C10-C40 hydrocarbons were destroyed, desorbed or co-evaporated with moisture from the soil by microwave heating. The moisture may play an important role in the absorption of microwaves and in the distribution of heat. The success of this study paved the way for a second and much larger field test of in-place remediation of petroleum hydrocarbon contaminated soil by microwave heating. Implemented in its full configuration for the first time at a real site, microwave heating has demonstrated its robustness and cost-effectiveness in cleaning up petroleum hydrocarbon contaminated soil in place. Economically, the concept for supplying microwave energy to the soil would be a network of independent antennas, each powered by an individual low-power microwave generator. A microwave heating system with low-power generators is very flexible and low cost, and imposes no restrictions on the number and arrangement of the antennas. Copyright © 2011 Elsevier B.V. All rights reserved.

  6. Impacts devalue the potential of large-scale terrestrial CO2 removal through biomass plantations

    NASA Astrophysics Data System (ADS)

    Boysen, L. R.; Lucht, W.; Gerten, D.; Heck, V.

    2016-09-01

    Large-scale biomass plantations (BPs) are often considered a feasible and safe climate engineering proposal for extracting carbon from the atmosphere and, thereby, reducing global mean temperatures. However, the capacity of such terrestrial carbon dioxide removal (tCDR) strategies and their larger Earth system impacts remain to be comprehensively studied, even more so under higher carbon emissions and progressing climate change. Here, we use a spatially explicit process-based biosphere model to systematically quantify the potentials and trade-offs of a range of BP scenarios dedicated to tCDR, representing different assumptions about which areas are convertible. Based on a moderate CO2 concentration pathway resulting in a global mean warming of 2.5 °C above preindustrial level by the end of this century, similar to the Representative Concentration Pathway (RCP) 4.5, we assume tCDR to be implemented when a warming of 1.5 °C is reached in year 2038. Our results show that BPs can slow down the progression of increasing cumulative carbon in the atmosphere sufficiently only if emissions are reduced simultaneously, as in the underlying RCP4.5 trajectory. The potential of tCDR to balance additional, unabated emissions leading towards a business-as-usual pathway akin to RCP8.5 is therefore very limited. Furthermore, in the required large-scale applications, these plantations would induce significant trade-offs with food production and biodiversity and exert impacts on forest extent, biogeochemical cycles and biogeophysical properties.

  7. Large-scale inverse model analyses employing fast randomized data reduction

    NASA Astrophysics Data System (ADS)

    Lin, Youzuo; Le, Ellen B.; O'Malley, Daniel; Vesselinov, Velimir V.; Bui-Thanh, Tan

    2017-08-01

    When the number of observations is large, it is computationally challenging to apply classical inverse modeling techniques. We have developed a new computationally efficient technique for solving inverse problems with a large number of observations (e.g., on the order of 10⁷ or greater). Our method, which we call the randomized geostatistical approach (RGA), is built upon the principal component geostatistical approach (PCGA). We employ a data reduction technique combined with the PCGA to improve the computational efficiency and reduce the memory usage. Specifically, we employ a randomized numerical linear algebra technique based on a so-called "sketching" matrix to effectively reduce the dimension of the observations without losing the information content needed for the inverse analysis. In this way, the computational and memory costs for RGA scale with the information content rather than the size of the calibration data. Our algorithm is coded in Julia and implemented in the MADS open-source high-performance computational framework (http://mads.lanl.gov). We apply our new inverse modeling method to invert for a synthetic transmissivity field. Compared to a standard geostatistical approach (GA), our method is more efficient when the number of observations is large. Most importantly, our method is capable of solving larger inverse problems than the standard GA and PCGA approaches. Therefore, our new model inversion method is a powerful tool for solving large-scale inverse problems. The method can be applied in any field and is not limited to hydrogeological applications such as the characterization of aquifer heterogeneity.
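
    The data-reduction idea can be shown in isolation: a random "sketching" matrix S with far fewer rows than observations compresses the observation vector (and, in the full method, the corresponding model outputs) before the inversion, so cost scales with the sketch size rather than with the number of observations. The sketch below covers only this reduction step with toy sizes, not the full RGA/PCGA algorithm.

```python
# Gaussian sketching of a large observation vector: d_reduced = S @ d, with S
# having far fewer rows than there are observations. Toy sizes only.
import numpy as np

def make_sketch(n_obs: int, n_sketch: int, seed: int = 0) -> np.ndarray:
    """Dense Gaussian sketching matrix of shape (n_sketch, n_obs)."""
    rng = np.random.default_rng(seed)
    return rng.standard_normal((n_sketch, n_obs)) / np.sqrt(n_sketch)

n_obs, n_sketch = 50_000, 200          # kept small so the dense S fits in memory
S = make_sketch(n_obs, n_sketch)

d = np.random.default_rng(1).standard_normal(n_obs)   # stand-in observation vector
d_reduced = S @ d                                      # sketched data for the inversion

# A Gaussian sketch approximately preserves Euclidean norms (hence data misfits),
# which is why the reduced problem retains the information needed for calibration.
print(np.linalg.norm(d), np.linalg.norm(d_reduced))
```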

  8. Pulmonary diffusional screening and the scaling laws of mammalian metabolic rates

    NASA Astrophysics Data System (ADS)

    Hou, Chen; Mayo, Michael

    2011-12-01

    Theoretical considerations suggest that the mammalian metabolic rate is linearly proportional to the surface areas of mitochondrial, capillary, and alveolar membranes. However, the scaling exponents of these surface areas with body mass (approximately 0.9-1) are higher than the exponent of the resting metabolic rate (RMR) with body mass (approximately 0.75), although similar to that of the exercise metabolic rate (EMR); the underlying physiological cause of this mismatch remains unclear. The analysis presented here shows that discrepancies between the scaling exponents of the RMR and the relevant surface areas may originate, at least for the system of alveolar membranes in mammalian lungs, from the facts that (i) not all of the surface area is involved in gas exchange and (ii) larger mammals host a smaller effective surface area that participates in the material exchange. A result of these facts is that lung surface areas unused at rest are activated under heavy breathing conditions (e.g., exercise), wherein larger mammals support larger activated surface areas that provide a higher capability to increase the gas-exchange rate, allowing mammals to meet, for example, the high energetic demands of foraging and predation.

  9. Nowhere safe? Exploring the influence of urbanization across mainland and insular seashores in continental Portugal and the Azorean Archipelago.

    PubMed

    Bertocci, Iacopo; Arenas, Francisco; Cacabelos, Eva; Martins, Gustavo M; Seabra, Maria I; Álvaro, Nuno V; Fernandes, Joana N; Gaião, Raquel; Mamede, Nuno; Mulas, Martina; Neto, Ana I

    2017-01-30

    Differences in the structure and functioning of intensively urbanized vs. less human-affected systems are reported, but such evidence is available to a much larger extent in terrestrial than in marine systems. We examined the hypotheses that (i) urbanization was associated with different patterns of variation of intertidal assemblages between urban and extra-urban environments; (ii) such patterns were consistent across mainland and insular systems, across spatial scales from tens of centimetres to hundreds of kilometres, and over a three-month period. Several trends emerged: (i) a more homogeneous distribution of most algal groups in the urban compared to the extra-urban condition and the opposite pattern for most invertebrates; (ii) smaller/larger variances of most organisms where these were, respectively, less/more abundant; (iii) the largest variability of most response variables at the small scale; (iv) no facilitation of invasive species by urbanization and larger cover of canopy-forming algae in the insular extra-urban condition. Present findings confirm the acknowledged notion that future management strategies will require the inclusion of representative assemblages and their relevant scales of variation associated with urbanization gradients on both the mainland and the islands. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. What is This Thing Called Tremor?

    NASA Astrophysics Data System (ADS)

    Rubin, A. M.; Bostock, M. G.

    2017-12-01

    Tremor has many enigmatic attributes. The LFEs that comprise it have a dearth of large events, implying a characteristic scale. Bostock et al. (2015) found LFE duration beneath Vancouver Island to be nearly independent of magnitude. That duration (about 0.4 s), multiplied by a shear wave speed, defines a length scale far larger than the spatial separation between consecutive but non-colocated detections. If one LFE ruptures multiple brittle patches in a ductile matrix its propagation speed can be slowed to the extent that consecutive events don't overlap, but then why aren't there larger and smaller LFEs with larger and smaller durations? Perhaps there are. Tremor seismograms from Vancouver Island are often saturated with direct arrivals, by which we mean time lags between events shorter than typical event durations. Direct evidence of this, given the small coda amplitude of LFE stacks, is that seismograms at stations many kilometers apart often track each other wiggle for wiggle. We see this behavior over the full range of tremor amplitudes, from close to the noise level on a tremor-free day to 10 times larger. If the LFE magnitude-frequency relation is time-independent, this factor of 10 implies that the LFE occurrence rate during loud tremor is 10^2 = 100 times that during quiet tremor (>250 LFEs per second). We investigate the implications of this by comparing observed seismograms to synthetics made from the superposition of "LFEs" that are Poissonian in time over a range of average rates. We find that provided the LFEs have a characteristic scale (whether exponential or power law), saturation completely obscures the moment-duration scaling of the contributing events; that is, the moment-duration scaling of LFEs may be identical to that of regular earthquakes. Nonetheless, there are subtle differences between our synthetics and real seismograms, remarkably independent of tremor amplitude, that remain to be explained. Foremost among these is a slightly greater affinity of tremor for the positive than the negative LFE template. In this respect tremor appears most similar to "slightly saturated" synthetics, implying a time-dependent moment-frequency distribution (larger LFEs when tremor is loud). One possibility is that tremor consists of aborted earthquakes quenched by reflections from the base of the high Vp/Vs layer.
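
    The synthetic-tremor construction described above can be sketched by superposing a single LFE-like wavelet at Poissonian occurrence times and comparing quiet versus loud rates. The template, rates and amplitude scatter below are invented for illustration; the point is that incoherent superposition makes RMS amplitude grow roughly with the square root of the event rate, so a tenfold amplitude increase implies roughly a hundredfold rate increase.

```python
# Superpose one LFE-like wavelet at Poissonian times; compare quiet vs loud rates.
import numpy as np

def synthetic_tremor(template, rate_per_s, duration_s, dt, seed=0):
    rng = np.random.default_rng(seed)
    n = int(duration_s / dt)
    trace = np.zeros(n + len(template))                # padded to avoid overflow
    n_events = rng.poisson(rate_per_s * duration_s)
    onsets = rng.integers(0, n, size=n_events)         # Poissonian => uniform onsets
    amps = rng.lognormal(0.0, 0.5, size=n_events)      # modest event-size scatter
    for i, a in zip(onsets, amps):
        trace[i:i + len(template)] += a * template
    return trace[:n]

dt = 0.02
t = np.arange(0.0, 0.4, dt)                            # ~0.4 s LFE duration
template = np.sin(2 * np.pi * 4 * t) * np.hanning(len(t))   # toy 4 Hz wavelet

quiet = synthetic_tremor(template, rate_per_s=2.5, duration_s=60, dt=dt)
loud = synthetic_tremor(template, rate_per_s=250.0, duration_s=60, dt=dt, seed=1)
print(quiet.std(), loud.std())   # roughly a factor of 10 apart (sqrt of the rate ratio)
```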

  11. CMOL/CMOS hardware architectures and performance/price for Bayesian memory - The building block of intelligent systems

    NASA Astrophysics Data System (ADS)

    Zaveri, Mazad Shaheriar

    The semiconductor/computer industry has been following Moore's law for several decades and has reaped the benefits in speed and density of the resultant scaling. Transistor density has reached almost one billion per chip, and transistor delays are in picoseconds. However, scaling has slowed down, and the semiconductor industry is now facing several challenges. Hybrid CMOS/nano technologies, such as CMOL, are considered an interim solution to some of the challenges. Another potential architectural solution includes specialized architectures for applications/models in the intelligent computing domain, one aspect of which includes abstract computational models inspired by the neuro/cognitive sciences. Consequently, in this dissertation, we focus on the hardware implementations of Bayesian Memory (BM), which is a (Bayesian) Biologically Inspired Computational Model (BICM). This model is a simplified version of George and Hawkins' model of the visual cortex, which includes an inference framework based on Judea Pearl's belief propagation. We then present a "hardware design space exploration" methodology for implementing and analyzing the (digital and mixed-signal) hardware for the BM. This particular methodology involves: analyzing the computational/operational cost and the related micro-architecture, exploring candidate hardware components, proposing various custom hardware architectures using both traditional CMOS and hybrid nanotechnology - CMOL, and investigating the baseline performance/price of these architectures. The results suggest that CMOL is a promising candidate for implementing a BM. Such implementations can utilize the very high density storage/computation benefits of these new nano-scale technologies much more efficiently; for example, the throughput per 858 mm² (TPM) obtained for CMOL-based architectures is 32 to 40 times better than the TPM for a CMOS-based multiprocessor/multi-FPGA system, and almost 2000 times better than the TPM for a PC implementation. We later use this methodology to investigate the hardware implementations of a cortex-scale spiking neural system, which is an approximate neural equivalent of a BICM-based cortex-scale system. The results of this investigation also suggest that CMOL is a promising candidate to implement such large-scale neuromorphic systems. In general, the assessment of such hypothetical baseline hardware architectures provides the prospects for building large-scale (mammalian cortex-scale) implementations of neuromorphic/Bayesian/intelligent systems using state-of-the-art and beyond state-of-the-art silicon structures.

  12. Scaling the Pyramid Model across Complex Systems Providing Early Care for Preschoolers: Exploring How Models for Decision Making May Enhance Implementation Science

    ERIC Educational Resources Information Center

    Johnson, LeAnne D.

    2017-01-01

    Bringing effective practices to scale across large systems requires attending to how information and belief systems come together in decisions to adopt, implement, and sustain those practices. Statewide scaling of the Pyramid Model, a framework for positive behavior intervention and support, across different types of early childhood programs…

  13. Smokefree implementation in Colombia: Monitoring, outside funding, and business support

    PubMed Central

    Uang, Randy; Crosbie, Eric; Glantz, Stanton A

    2017-01-01

    Objective: To analyze successful national smokefree policy implementation in Colombia, a middle income country. Materials and methods: Key informants at the national and local levels were interviewed and news sources and government ministry resolutions were reviewed. Results: Colombia's Ministry of Health coordinated local implementation practices, which were strongest in larger cities with supportive leadership. Nongovernmental organizations provided technical assistance and highlighted noncompliance. Organizations outside Colombia funded some of these efforts. The bar owners' association provided concerted education campaigns. Tobacco interests did not openly challenge implementation. Conclusions: Health organization monitoring, external funding, and hospitality industry support contributed to effective implementation, and could be cultivated in other low and middle income countries. PMID:28562713

  14. Multispectral Image Enhancement Through Adaptive Wavelet Fusion

    DTIC Science & Technology

    2016-09-14

    This research developed a multiresolution image fusion scheme based on guided filtering. Guided filtering can effectively reduce noise while preserving detail boundaries. When applied in an iterative mode, guided filtering selectively eliminates small-scale details while restoring larger-scale edges. The proposed multi-scale image fusion scheme achieves spatial consistency by using guided filtering both at
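
    As an illustration of the filtering behaviour described above, the sketch below implements a basic self-guided filter with box filters (the standard formulation, not the report's implementation) and applies it iteratively, which progressively removes small-scale detail while keeping strong edges.

```python
# Basic self-guided filter (box-filter formulation) applied iteratively.
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=4, eps=1e-3):
    size = 2 * radius + 1
    mean_i = uniform_filter(guide, size)
    mean_p = uniform_filter(src, size)
    var_i = uniform_filter(guide * guide, size) - mean_i * mean_i
    cov_ip = uniform_filter(guide * src, size) - mean_i * mean_p
    a = cov_ip / (var_i + eps)          # local linear model: q = a * guide + b
    b = mean_p - a * mean_i
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

def iterative_guided_filter(image, iterations=3, radius=4, eps=1e-3):
    """Repeated self-guided filtering strips small-scale detail, keeps strong edges."""
    out = image.astype(float)
    for _ in range(iterations):
        out = guided_filter(out, out, radius, eps)
    return out

# Toy image: a step edge plus fine-grained noise.
rng = np.random.default_rng(0)
img = np.tile(np.repeat([0.2, 0.8], 64), (128, 1)) + 0.05 * rng.standard_normal((128, 128))
smoothed = iterative_guided_filter(img, iterations=3)
```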

  15. Hierarchical subdivisions of the Columbia Plateau and Blue Mountains ecoregions, Oregon and Washington.

    Treesearch

    Sharon E. Clarke; Sandra A. Bryce

    1997-01-01

    This document presents two spatial scales of a hierarchical, ecoregional framework and provides a connection to both larger and smaller scale ecological classifications. The two spatial scales are subregions (1:250,000) and landscape-level ecoregions (1:100,000), or Level IV and Level V ecoregions. Level IV ecoregions were developed by the Environmental Protection...

  16. Robust Mokken Scale Analysis by Means of the Forward Search Algorithm for Outlier Detection

    ERIC Educational Resources Information Center

    Zijlstra, Wobbe P.; van der Ark, L. Andries; Sijtsma, Klaas

    2011-01-01

    Exploratory Mokken scale analysis (MSA) is a popular method for identifying scales from larger sets of items. As with any statistical method, in MSA the presence of outliers in the data may result in biased results and wrong conclusions. The forward search algorithm is a robust diagnostic method for outlier detection, which we adapt here to…

  17. Multiscale spatial and small-scale temporal variation in the composition of Riverine fish communities.

    PubMed

    Growns, Ivor; Astles, Karen; Gehrke, Peter

    2006-03-01

    We studied the multiscale (sites, river reaches and rivers) and short-term temporal (monthly) variability in a freshwater fish assemblage. We found that small-scale spatial variation and short-term temporal variability significantly influenced fish community structure in the Macquarie and Namoi Rivers. However, larger scale spatial differences between rivers were the largest source of variation in the data. The interaction between temporal change and spatial variation in fish community structure, whilst statistically significant, was smaller than the variation between rivers. This suggests that although the fish communities within each river changed between sampling occasions, the underlying differences between rivers were maintained. In contrast, the strongest interaction between temporal and spatial effects occurred at the smallest spatial scale, at the level of individual sites. This means whilst the composition of the fish assemblage at a given site may fluctuate, the magnitude of these changes is unlikely to affect larger scale differences between reaches within rivers or between rivers. These results suggest that sampling at any time within a single season will be sufficient to show spatial differences that occur over large spatial scales, such as comparisons between rivers or between biogeographical regions.

  18. Influence of injection mode on transport properties in kilometer-scale three-dimensional discrete fracture networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hyman, Jeffrey De'Haven; Painter, S. L.; Viswanathan, H.

    We investigate how the choice of injection mode impacts transport properties in kilometer-scale three-dimensional discrete fracture networks (DFN). The choice of injection mode, resident and flux-weighted, is designed to mimic different physical phenomena. It has been hypothesized that solute plumes injected under resident conditions evolve to behave similarly to solutes injected under flux-weighted conditions. Previously, computational limitations have prohibited the large-scale simulations required to investigate this hypothesis. We investigate this hypothesis by using a high-performance DFN suite, dfnWorks, to simulate flow in kilometer-scale three-dimensional DFNs based on fractured granite at the Forsmark site in Sweden, and adopt a Lagrangian approach to simulate transport therein. Results show that after traveling through a pre-equilibrium region, both injection methods exhibit linear scaling of the first moment of travel time and power law scaling of the breakthrough curve with similar exponents, slightly larger than 2. Lastly, the physical mechanisms behind this evolution appear to be the combination of in-network channeling of mass into larger fractures, which offer reduced resistance to flow, and in-fracture channeling, which results from the topology of the DFN.

  19. Influence of injection mode on transport properties in kilometer-scale three-dimensional discrete fracture networks

    DOE PAGES

    Hyman, Jeffrey De'Haven; Painter, S. L.; Viswanathan, H.; ...

    2015-09-12

    We investigate how the choice of injection mode impacts transport properties in kilometer-scale three-dimensional discrete fracture networks (DFN). The choice of injection mode, resident and flux-weighted, is designed to mimic different physical phenomena. It has been hypothesized that solute plumes injected under resident conditions evolve to behave similarly to solutes injected under flux-weighted conditions. Previously, computational limitations have prohibited the large-scale simulations required to investigate this hypothesis. We investigate this hypothesis by using a high-performance DFN suite, dfnWorks, to simulate flow in kilometer-scale three-dimensional DFNs based on fractured granite at the Forsmark site in Sweden, and adopt a Lagrangian approach to simulate transport therein. Results show that after traveling through a pre-equilibrium region, both injection methods exhibit linear scaling of the first moment of travel time and power law scaling of the breakthrough curve with similar exponents, slightly larger than 2. Lastly, the physical mechanisms behind this evolution appear to be the combination of in-network channeling of mass into larger fractures, which offer reduced resistance to flow, and in-fracture channeling, which results from the topology of the DFN.

  20. Solar radiation variability over La Réunion island and associated larger-scale dynamics

    NASA Astrophysics Data System (ADS)

    Mialhe, Pauline; Morel, Béatrice; Pohl, Benjamin; Bessafi, Miloud; Chabriat, Jean-Pierre

    2017-04-01

    This study aims to examine the solar radiation variability over La Réunion island and its relationship with the large-scale circulation. The Satellite Application Facility on Climate Monitoring (CM SAF) produces a Shortwave Incoming Solar radiation (SIS) data record called Solar surfAce RAdiation Heliosat - East (SARAH-E). A comparison with in situ observations from Météo-France measurement networks quantifies the skill of the SARAH-E grids, which we use as the dataset. As a first step, mean irradiance cycles are calculated to describe the diurnal and seasonal behaviour of SIS over La Réunion island. By analogy with climate anomalies, instantaneous deviations are computed after removal of the mean states. Finally, we associate these anomalies with larger-scale atmospheric dynamics over the South West Indian Ocean by applying multivariate clustering analyses (Hierarchical Ascending Classification, k-means).
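
    The final clustering step can be sketched as follows: remove the mean state from daily irradiance maps, flatten each anomaly map into a feature vector, and classify the days with k-means. The array shapes, number of clusters and synthetic data below are assumptions for illustration.

```python
# Classify daily SIS anomaly maps into recurrent regimes with k-means.
import numpy as np
from sklearn.cluster import KMeans

def classify_anomalies(sis, n_clusters=6, seed=0):
    """sis: array of shape (n_days, n_lat, n_lon) of daily-mean irradiance."""
    anomalies = sis - sis.mean(axis=0)                 # remove the mean state
    x = anomalies.reshape(sis.shape[0], -1)            # one flattened map per day
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(x)
    centers = km.cluster_centers_.reshape(n_clusters, *sis.shape[1:])
    return km.labels_, centers

# Synthetic data standing in for SARAH-E grids (one year, 20 x 20 cells, W m^-2).
sis = np.random.default_rng(0).random((365, 20, 20)) * 300.0
labels, regimes = classify_anomalies(sis)
```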
