DOT National Transportation Integrated Search
2016-08-31
A major challenge for achieving large-scale adoption of EVs is accessible infrastructure for communities. The societal benefits of large-scale adoption of EVs cannot be realized without adequate deployment of publicly accessible charging stati...
Creating Grander Families: Older Adults Adopting Younger Kin and Nonkin
ERIC Educational Resources Information Center
Hinterlong, James; Ryan, Scott
2008-01-01
Purpose: There is a dearth of research on older adoptive parents caring for minor children, despite a growing number of such adoptions finalized each year. This study offers a large-scale investigation of adoptive families headed by older parents. We describe these families and explore how preadoptive kinship between the adoptive parent and the…
Drivers and barriers to e-invoicing adoption in Greek large scale manufacturing industries
NASA Astrophysics Data System (ADS)
Marinagi, Catherine; Trivellas, Panagiotis; Reklitis, Panagiotis; Skourlas, Christos
2015-02-01
This paper investigates the drivers and barriers that large-scale Greek manufacturing industries experience in adopting electronic invoices (e-invoices), based on three case studies of organizations with an international presence in many countries. The study focuses on the drivers that may increase the adoption and use of e-invoicing, including customer demand for e-invoices and sufficient know-how and adoption of e-invoicing within organizations. In addition, the study reveals important barriers that prevent the expansion of e-invoicing, such as suppliers' reluctance to implement e-invoicing and IT infrastructure incompatibilities. Other issues examined include the observed benefits of e-invoicing implementation and the financial priorities of the organizations that e-invoicing is assumed to support.
NASA Astrophysics Data System (ADS)
Fitzgerald, Michael; Danaia, Lena; McKinnon, David H.
2017-07-01
In recent years, calls for the adoption of inquiry-based pedagogies in the science classroom have formed a part of the recommendations for large-scale high school science reforms. However, these pedagogies have been problematic to implement at scale. This research explores the perceptions of 34 positively inclined early-adopter teachers in relation to their implementation of inquiry-based pedagogies. The teachers were part of a large-scale Australian high school intervention project based around astronomy. In a series of semi-structured interviews, the teachers identified a number of common barriers that prevented them from implementing inquiry-based approaches. The most important barriers identified include the extreme time restrictions on all scales, the poverty of their common professional development experiences, their lack of good models and definitions for what inquiry-based teaching actually is, and the lack of good resources enabling the capacity for change. Implications for expectations of teachers and their professional learning during educational reform and curriculum change are discussed.
ERIC Educational Resources Information Center
Gaffney, Jon D. H.; Richards, Evan; Kustusch, Mary Bridget; Ding, Lin; Beichner, Robert J.
2008-01-01
The SCALE-UP (Student-Centered Activities for Large Enrollment Undergraduate Programs) project was developed to implement reforms designed for small classes into large physics classes. Over 50 schools across the country, ranging from Wake Technical Community College to the Massachusetts Institute of Technology (MIT), have adopted it for classes of…
ERIC Educational Resources Information Center
Badilescu-Buga, Emil
2012-01-01
Learning Activity Management System (LAMS) has been trialled and used by users from many countries around the globe, but despite the positive attitude towards its potential benefits to pedagogical processes its adoption in practice has been uneven, reflecting how difficult it is to make a new technology based concept an integral part of the…
Aqueous Two-Phase Systems at Large Scale: Challenges and Opportunities.
Torres-Acosta, Mario A; Mayolo-Deloisa, Karla; González-Valdez, José; Rito-Palomares, Marco
2018-06-07
Aqueous two-phase systems (ATPS) have proved to be an efficient and integrative operation for enhancing the recovery of industrially relevant bioproducts. Since the discovery of ATPS, a variety of works have been published on their scaling from 10 to 1000 L. Although ATPS have achieved high recovery and purity yields, there is still a gap between their bench-scale use and potential industrial applications. In this context, this review critically analyzes ATPS scale-up strategies to enhance the potential for industrial adoption. In particular, it discusses large-scale operation considerations, different phase separation procedures, the available optimization techniques (univariate, response surface methodology, and genetic algorithms) for maximizing recovery and purity, and economic modeling for predicting large-scale costs. ATPS intensification to increase the amount of sample processed per system, the development of recycling strategies, and the creation of highly efficient predictive models remain areas of great significance that can be further exploited with high-throughput techniques. Moreover, the development of novel ATPS can maximize their specificity, increasing the possibilities for future industrial adoption of ATPS. This review presents the areas of opportunity for increasing the attractiveness of ATPS at industrial levels.
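The genetic-algorithm optimization mentioned in the review can be sketched as follows. Everything here is a hypothetical stand-in: the quadratic recovery surface, the PEG/salt bounds, and the GA parameters are invented for illustration and are not fitted to any real ATPS data.

```python
import random

# Hypothetical recovery surface: a smooth function of PEG and salt
# concentrations (% w/w) with a single optimum at (14, 9); purely
# illustrative, not derived from any real ATPS experiment.
def recovery(peg, salt):
    return 100 - (peg - 14.0) ** 2 - 2.0 * (salt - 9.0) ** 2

def genetic_optimize(pop_size=30, generations=40, seed=1):
    rng = random.Random(seed)
    # Each individual is a (peg, salt) pair within plausible bounds.
    pop = [(rng.uniform(5, 25), rng.uniform(2, 16)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: recovery(*ind), reverse=True)
        elite = pop[: pop_size // 2]          # selection: keep the best half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)  # crossover
            # Gaussian mutation keeps the search exploring.
            child = (child[0] + rng.gauss(0, 0.5), child[1] + rng.gauss(0, 0.5))
            children.append(child)
        pop = elite + children                # elitism: best individuals survive
    return max(pop, key=lambda ind: recovery(*ind))

best = genetic_optimize()
```

Because the elite half is carried over unchanged, the best-so-far solution never regresses, and the population contracts toward the optimum of the assumed surface.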
Co-governing decentralised water systems: an analytical framework.
Yu, C; Brown, R; Morison, P
2012-01-01
Current discourses in urban water management emphasise a diversity of water sources and scales of infrastructure for resilience and adaptability. During the last two decades in particular, various small-scale systems have emerged and developed, so the debate has largely moved from centralised versus decentralised water systems toward governing integrated and networked systems of provision and consumption in which small-scale technologies are embedded in large-scale centralised infrastructures. However, while centralised systems have established boundaries of ownership and management, decentralised water systems (such as stormwater harvesting technologies at the street and allotment/house scales) do not; the viability of adoption and/or continued use of decentralised water systems is therefore challenged. This paper brings together insights from the literature on public sector governance, co-production, and the social practices model to develop an analytical framework for co-governing such systems. The framework provides urban water practitioners with guidance when designing co-governance arrangements for decentralised water systems so that these systems continue to exist, and become widely adopted, within the established urban water regime.
Nadeem, Erum; Ringle, Vanesa
2017-01-01
The de-adoption of evidence-based practices (EBPs) is a largely understudied topic. The present study examined factors related to the de-adoption of an EBP for students exposed to traumatic events in a large urban school district. Qualitative interviews conducted with school clinicians and district administrators two years after the district embarked on a large-scale roll-out of the EBP distinguished between factors that impacted partial de-adoption after one year (phase 1) and complete de-adoption by the district after two years (phase 2). Phase 1 factors included organizational consistency, workforce stability, prior success, positive student outcomes, school- and district- level supports, innovation-setting fit, and innovation-related issues. Phase 2 factors included district-level leadership changes, financial and workforce instability, and shifting priorities. Study results suggest that sustainment-enhancing strategies should be included in the early stages of program implementation to most effectively adapt to school- and system- level changes. PMID:28775793
ADOPT: Automotive Deployment Options Projection Tool | Transportation
ADOPT projects new model options by combining high-selling powertrains and high-selling vehicle platforms. [Screenshot of the ADOPT user interface, with two simulation scenario options (low tech and high tech).]
Towards a Critical Theory of Educational Technology
ERIC Educational Resources Information Center
Okan, Zuhal
2007-01-01
The purpose of this study is to offer a critical consideration of current initiatives, and common sense discourses, forcing educators to adopt and integrate educational technology on a large scale. This study argues that it is time, in the relative absence of a critical debate, to ask questions that should precede a wholesale adoption of…
Organizational Learning and Large-Scale Change: Adoption of Electronic Medical Records
ERIC Educational Resources Information Center
Chavis, Virginia D.
2010-01-01
Despite implementation of electronic medical record (EMR) systems in the United States and other countries, there is no organizational development model that addresses medical professionals' attitudes toward technology adoption in a learning organization. The purpose of this study was to assess whether a model would change those attitudes toward…
Building the Case for Large Scale Behavioral Education Adoptions
ERIC Educational Resources Information Center
Layng, Zachary R.; Layng, T. V. Joe
2012-01-01
Behaviorally-designed educational programs are often based on a research tradition that is not widely understood by potential users of the programs. Though the data may be sound and the prediction of outcomes for individual learners quite good, those advocating adoption of behaviorally-designed educational programs may need to do more in order to…
Setting Learning Analytics in Context: Overcoming the Barriers to Large-Scale Adoption
ERIC Educational Resources Information Center
Ferguson, Rebecca; Macfadyen, Leah P.; Clow, Doug; Tynan, Belinda; Alexander, Shirley; Dawson, Shane
2014-01-01
A core goal for most learning analytic projects is to move from small-scale research towards broader institutional implementation, but this introduces a new set of challenges because institutions are stable systems, resistant to change. To avoid failure and maximize success, implementation of learning analytics at scale requires explicit and…
Combined heat and power systems: economic and policy barriers to growth.
Kalam, Adil; King, Abigail; Moret, Ellen; Weerasinghe, Upekha
2012-04-23
Combined Heat and Power (CHP) systems can provide a range of benefits to users with regard to efficiency, reliability, costs and environmental impact. Furthermore, increasing the amount of electricity generated by CHP systems in the United States has been identified as having significant potential for impressive economic and environmental outcomes on a national scale. Given the benefits of increasing the adoption of CHP technologies, there is value in improving our understanding of how desired increases in CHP adoption can best be achieved. Obstacles to such increases are currently understood to stem from regulatory as well as economic and technological barriers. In our research, we answer the following question: given the current policy and economic environment facing the CHP industry, what changes need to take place for CHP systems to be competitive in the energy market? We focus our analysis primarily on CHP systems that use natural gas turbines, and our analysis takes a two-pronged approach. We first conduct a statistical analysis of the impact of state policies on increases in electricity generated from CHP systems. Second, we conduct a cost-benefit analysis to determine the circumstances in which funding incentives are necessary to make CHP technologies cost-competitive. Our policy analysis shows that regulatory improvements do not explain the growth in adoption of CHP technologies but hold the potential to encourage increases in electricity generated from CHP systems in small-scale applications. Our cost-benefit analysis shows that CHP systems are only cost-competitive in large-scale applications and that funding incentives would be necessary to make CHP technology cost-competitive in small-scale applications.
From the synthesis of these analyses we conclude that because large-scale applications of natural gas turbines are already cost-competitive, policy initiatives aimed at a CHP market dominated primarily by large-scale (and therefore already cost-competitive) systems have not been effectively directed. Our recommendation is that for CHP technologies using natural gas turbines, policy focuses should be on increasing CHP growth in small-scale systems. This result can be best achieved through redirection of state and federal incentives, research and development, adoption of smart grid technology, and outreach and education.
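The cost-benefit logic described above reduces to comparing net present values (NPV) with and without an incentive. A minimal sketch follows; every dollar figure, lifetime, and discount rate is a hypothetical placeholder, not a number from the study.

```python
# Illustrative NPV comparison for CHP systems; all costs, savings, and
# rates below are invented for illustration, not figures from the study.
def npv(capital_cost, annual_savings, years, discount_rate, incentive=0.0):
    # Present value of a level stream of annual savings, minus net capex.
    pv_savings = sum(annual_savings / (1 + discount_rate) ** t
                     for t in range(1, years + 1))
    return pv_savings - (capital_cost - incentive)

# Large-scale system: economies of scale give a favorable savings-to-cost ratio.
large = npv(capital_cost=10_000_000, annual_savings=1_500_000,
            years=15, discount_rate=0.07)

# Small-scale system: higher unit cost; NPV is negative without support.
small = npv(capital_cost=900_000, annual_savings=80_000,
            years=15, discount_rate=0.07)

# The same small system with a hypothetical grant covering part of capex.
small_with_incentive = npv(capital_cost=900_000, annual_savings=80_000,
                           years=15, discount_rate=0.07, incentive=300_000)
```

Under these assumed numbers the large system is cost-competitive on its own while the small system turns positive only with the incentive, mirroring the qualitative conclusion of the analysis.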
Determinants of Taxpayers' Adoption of Electronic Filing Methods in Taiwan: An Exploratory Study
ERIC Educational Resources Information Center
Fu, Jen-Ruei; Chao, Wen-Pin; Farn, Cheng-Kiang
2004-01-01
Using the personal income tax filing as an example of e-governmental services, this paper seeks to develop an understanding of the factors that influence citizens' adoption of electronic tax-filing services based on empirical data gathered from a large-scale nationwide survey. The taxpayers' satisfaction and usage intention regarding the use of…
ERIC Educational Resources Information Center
Johnson, LeAnne D.
2017-01-01
Bringing effective practices to scale across large systems requires attending to how information and belief systems come together in decisions to adopt, implement, and sustain those practices. Statewide scaling of the Pyramid Model, a framework for positive behavior intervention and support, across different types of early childhood programs…
NASA Astrophysics Data System (ADS)
Xu, Jincheng; Liu, Wei; Wang, Jin; Liu, Linong; Zhang, Jianfeng
2018-02-01
De-absorption pre-stack time migration (QPSTM) compensates for the absorption and dispersion of seismic waves by introducing an effective Q parameter, making it an effective tool for 3D, high-resolution imaging of seismic data. Although the optimal aperture obtained via stationary-phase migration reduces the computational cost of 3D QPSTM and yields 3D stationary-phase QPSTM, computational efficiency remains the main problem in processing 3D, high-resolution images of real large-scale seismic data. In the current paper, we propose a division method for large-scale, 3D seismic data to optimize the performance of stationary-phase QPSTM on clusters of graphics processing units (GPUs). We then design an imaging-point parallel strategy to achieve optimal parallel computing performance, and adopt an asynchronous double-buffering scheme for multi-stream GPU/CPU parallel computing. Moreover, several key optimization strategies for computation and storage based on the compute unified device architecture (CUDA) were adopted to accelerate the 3D stationary-phase QPSTM algorithm. Compared with the initial GPU code, the implementation of the key optimization steps, including thread optimization, shared memory optimization, register optimization and special function units (SFU), greatly improved the efficiency. A numerical example employing real large-scale, 3D seismic data showed that our scheme is nearly 80 times faster than the CPU-QPSTM algorithm. Our GPU/CPU heterogeneous parallel computing framework significantly reduces the computational cost and facilitates 3D high-resolution imaging for large-scale seismic data.
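The asynchronous double-buffering idea can be illustrated with a CPU-only sketch: while one buffer of data is being processed, the next is loaded in the background. In the real GPU code this corresponds to CUDA streams overlapping host-to-device copies with kernel execution; the functions `load_chunk`, `migrate_chunk`, and `pipeline` below are our hypothetical stand-ins, not the paper's routines.

```python
import concurrent.futures

# CPU-only sketch of asynchronous double buffering: while one buffer is
# "computed" (migration of a chunk), the next chunk is "transferred"
# (loaded) by a background worker. Both stages are toy stand-ins.
def load_chunk(i):
    # Stand-in for a host-to-device transfer of seismic traces.
    return [i * 10 + k for k in range(4)]

def migrate_chunk(chunk):
    # Stand-in for the QPSTM kernel applied to one chunk.
    return sum(chunk)

def pipeline(n_chunks):
    results = []
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        pending = pool.submit(load_chunk, 0)      # prefetch buffer A
        for i in range(n_chunks):
            chunk = pending.result()              # wait for the current buffer
            if i + 1 < n_chunks:                  # prefetch into the other buffer
                pending = pool.submit(load_chunk, i + 1)
            results.append(migrate_chunk(chunk))  # compute overlaps the prefetch
    return results

out = pipeline(3)
```

The key point is that the `migrate_chunk` call for chunk *i* runs concurrently with the background load of chunk *i+1*, so transfer latency is hidden behind computation.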
Modelling the large-scale redshift-space 3-point correlation function of galaxies
NASA Astrophysics Data System (ADS)
Slepian, Zachary; Eisenstein, Daniel J.
2017-08-01
We present a configuration-space model of the large-scale galaxy 3-point correlation function (3PCF) based on leading-order perturbation theory and including redshift-space distortions (RSD). This model should be useful in extracting distance-scale information from the 3PCF via the baryon acoustic oscillation method. We include the first redshift-space treatment of biasing by the baryon-dark matter relative velocity. Overall, on large scales the effect of RSD is primarily a renormalization of the 3PCF that is roughly independent of both physical scale and triangle opening angle; for our adopted Ωm and bias values, the rescaling is a factor of ˜1.8. We also present an efficient scheme for computing 3PCF predictions from our model, important for allowing fast exploration of the space of cosmological parameters in future analyses.
An Investigation on Computer-Adaptive Multistage Testing Panels for Multidimensional Assessment
ERIC Educational Resources Information Center
Wang, Xinrui
2013-01-01
Computer-adaptive multistage testing (ca-MST) has been developed as an alternative to computerized adaptive testing (CAT) and has been increasingly adopted in large-scale assessments. Current research and practice focus only on ca-MST panels for credentialing purposes. The ca-MST test mode, therefore, is designed to gauge a single scale. The…
Comparing Validity Evidence of Two ECERS-R Scoring Systems
ERIC Educational Resources Information Center
Zeng, Songtian
2017-01-01
Over 30 states have adopted the Early Childhood Environmental Rating Scale-Revised (ECERS-R) as a component of their program quality assessment systems, but the use of the ECERS-R on such a large scale has raised important questions about implementation. One of the most pressing questions centers upon decisions users must make between two scoring…
NASA Astrophysics Data System (ADS)
Yang, Bo; Wang, Mi; Xu, Wen; Li, Deren; Gong, Jianya; Pi, Yingdong
2017-12-01
The potential of large-scale block adjustment (BA) without ground control points (GCPs) has long been a concern among photogrammetric researchers and offers valuable guidance for global mapping. However, significant problems with the accuracy and efficiency of this method remain to be solved. In this study, we analyzed the effects of geometric errors on BA and then developed a step-wise BA method for integrated processing of large-scale ZY-3 satellite images without GCPs. We first pre-processed the BA data by adopting a geometric calibration (GC) method based on the viewing-angle model to compensate for systematic errors, so that the BA input images were of good initial geometric quality. The second step was integrated BA without GCPs, in which a series of technical methods were used to solve bottleneck problems and ensure accuracy and efficiency. The BA model, based on virtual control points (VCPs), was constructed to address the rank-deficiency problem caused by the lack of absolute constraints. We then developed a parallel matching strategy to improve the efficiency of tie-point (TP) matching, and adopted a three-array data structure based on sparsity to relieve the storage and calculation burden of the high-order modified equation. Finally, we used the conjugate gradient method to improve the speed of solving the high-order equations. To evaluate the feasibility of the presented large-scale BA method, we conducted three experiments on real data collected by the ZY-3 satellite. The experimental results indicate that the presented method can effectively improve the geometric accuracy of ZY-3 satellite images. This study demonstrates the feasibility of large-scale mapping without GCPs.
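The conjugate-gradient step used to solve the high-order normal equations can be sketched in a minimal, dependency-free form. The 2×2 symmetric positive-definite system at the end is a toy stand-in for a normal-equation block, and all function names here are ours, not the authors'; real block adjustment would pair this with the sparse three-array storage the paper describes.

```python
# Minimal conjugate-gradient solver for a symmetric positive-definite
# system A x = b. Dense lists stand in for the sparse storage used at
# real scale; the algorithm itself is the standard CG iteration.
def matvec(A, x):
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    x = [0.0] * len(b)
    r = [bi - ai for bi, ai in zip(b, matvec(A, x))]  # initial residual
    p = r[:]
    rs_old = dot(r, r)
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rs_old / dot(p, Ap)                   # optimal step length
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new < tol:                              # converged
            break
        # New search direction, conjugate to the previous ones.
        p = [ri + (rs_new / rs_old) * pi for ri, pi in zip(r, p)]
        rs_old = rs_new
    return x

# Small SPD example (stand-in for one normal-equation block):
# 4x + y = 1, x + 3y = 2  ->  x = 1/11, y = 7/11.
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = conjugate_gradient(A, b)
```

For an n×n SPD system, CG converges in at most n iterations in exact arithmetic, which is why it suits the large sparse systems that arise in GCP-free block adjustment.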
IS THE SMALL-SCALE MAGNETIC FIELD CORRELATED WITH THE DYNAMO CYCLE?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karak, Bidya Binay; Brandenburg, Axel, E-mail: bbkarak@nordita.org
2016-01-01
The small-scale magnetic field is ubiquitous at the solar surface—even at high latitudes. From observations we know that this field is uncorrelated (or perhaps even weakly anticorrelated) with the global sunspot cycle. Our aim is to explore the origin, and particularly the cycle dependence, of such a phenomenon using three-dimensional dynamo simulations. We adopt a simple model of a turbulent dynamo in a shearing box driven by helically forced turbulence. Depending on the dynamo parameters, large-scale (global) and small-scale (local) dynamos can be excited independently in this model. Based on simulations in different parameter regimes, we find that, when only the large-scale dynamo is operating in the system, the small-scale magnetic field generated through shredding and tangling of the large-scale magnetic field is positively correlated with the global magnetic cycle. However, when both dynamos are operating, the small-scale field is produced from both the small-scale dynamo and the tangling of the large-scale field. In this situation, when the large-scale field is weaker than the equipartition value of the turbulence, the small-scale field is almost uncorrelated with the large-scale magnetic cycle. On the other hand, when the large-scale field is stronger than the equipartition value, we observe an anticorrelation between the small-scale field and the large-scale magnetic cycle. This anticorrelation can be interpreted as a suppression of the small-scale dynamo. Based on our studies we conclude that the observed small-scale magnetic field in the Sun is generated by the combined mechanisms of a small-scale dynamo and tangling of the large-scale field.
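As a reminder (this is the standard textbook definition, not a result of the paper), the "equipartition value" used as the threshold above is the field strength at which magnetic energy density equals the turbulent kinetic energy density. In Gaussian units:

```latex
\frac{B_{\mathrm{eq}}^{2}}{8\pi} \;=\; \frac{1}{2}\,\rho\,u_{\mathrm{rms}}^{2}
\qquad\Longrightarrow\qquad
B_{\mathrm{eq}} \;=\; \sqrt{4\pi\rho}\;u_{\mathrm{rms}},
```

where \(\rho\) is the gas density and \(u_{\mathrm{rms}}\) the rms turbulent velocity.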
Impact of the HITECH financial incentives on EHR adoption in small, physician-owned practices.
Cohen, Martin F
2016-10-01
Physicians in small physician-owned practices in the United States have been slower to adopt EHRs than physicians in large practices or practices owned by large organizations. The Health Information Technology for Economic and Clinical Health (HITECH) Act of 2009 included provisions intended to address many of the potential barriers to EHR adoption cited in the literature, including a financial incentives program that has paid physicians and other professionals $13 billion through December 2015. Given the range of factors that may be influencing physicians' decisions on whether to adopt an EHR, and given the level of HITECH expenditures to date, there is significant policy value in assessing whether the HITECH incentives have actually had an impact on EHR adoption decisions among U.S. physicians in small, physician-owned practices. This study addresses this question by analyzing physicians' own views on the influence of the HITECH incentives as well as other potential considerations in their decision-making on whether to adopt an EHR. Using data from a national survey of physicians, five composite scales were created from groups of survey items to reflect physician views on different potential facilitators and barriers for EHR adoption as of 2011, after the launch of the HITECH incentives program. Multinomial and binary logistic regression models were specified to test which of these physician-reported considerations have a significant relationship with EHR adoption status among 1043 physicians working in physician-owned practices with no more than 10 physicians. Physicians' views on the importance of the HITECH financial incentives are strongly associated with EHR adoption during the first three years of the HITECH period (2010-2012). 
In the study's primary model, a one-point increase on a three-point scale for physician-reported influence of the HITECH financial incentives increases the relative risk of being in the process of adoption in 2011, compared to the risk of remaining a non-adopter, by a factor of 4.02 (p<0.001, 95% CI of 2.06-7.85). In a second model which excludes pre-HITECH adopters from the data, a one-point increase on the incentives scale increases the relative risk of having become a new EHR user in 2010 or 2011, compared to the risk of remaining a non-adopter, by a factor of 3.98 (p<0.01, 95% CI of 1.48-10.68) and also increases the relative risk of being in the process of adoption in 2011 by a factor of 5.73 (p<0.001, 95% CI of 2.57-12.76), compared to the risk of remaining a non-adopter in 2011. In contrast, a composite scale that reflects whether physicians viewed choosing a specific EHR vendor as challenging is not associated with adoption status. This study's principal finding is that the HITECH financial incentives were influential in accelerating EHR adoption among small, physician-owned practices in the United States. A second finding is that physician decision-making on EHR adoption in the United States has not matched what would be predicted by the literature on network effects. The market's failure to converge on a dominant design in the absence of interoperability means it will be difficult to achieve widespread exchange of patients' clinical information among different health care provider organizations.
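For readers interpreting these effect sizes: in a multinomial logit, the relative-risk ratio (RRR) for a one-point predictor increase is exp(coefficient), and an approximate 95% CI is exp(b ± 1.96·se). The coefficient and standard error below are back-computed from the reported RRR of 4.02 (95% CI 2.06-7.85), not taken directly from the paper's output.

```python
import math

# Convert a multinomial-logit coefficient and standard error into a
# relative-risk ratio with an approximate Wald 95% confidence interval.
def rrr_with_ci(b, se, z=1.96):
    return math.exp(b), math.exp(b - z * se), math.exp(b + z * se)

# b and se are inferred from the reported RRR 4.02 (95% CI 2.06-7.85),
# assuming a symmetric Wald interval on the log scale.
b = math.log(4.02)                                    # ≈ 1.391
se = (math.log(7.85) - math.log(2.06)) / (2 * 1.96)   # ≈ 0.341

point, lo, hi = rrr_with_ci(b, se)
```

Recovering roughly (4.02, 2.06, 7.85) confirms the reported interval is consistent with a standard Wald CI on the log-coefficient scale.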
Haase, Doreen; Puan, Kia Joo; Starke, Mireille; Lai, Tuck Siong; Soh, Melissa Yan Ling; Karunanithi, Iyswariya; San Luis, Boris; Poh, Tuang Yeow; Yusof, Nurhashikin; Yeap, Chun Hsien; Phang, Chew Yen; Chye, Willis Soon Yuan; Chan, Marieta; Koh, Mickey Boon Chai; Goh, Yeow Tee; Bertin-Maghit, Sebastien; Nardin, Alessandra; Ho, Liam Pock; Rotzschke, Olaf
2015-01-01
Adoptive cell therapy is an emerging treatment strategy for a number of serious diseases. Regulatory T (Treg) cells represent 1 cell type of particular interest for therapy of inflammatory conditions, as they are responsible for controlling unwanted immune responses. Initial clinical trials of adoptive transfer of Treg cells in patients with graft-versus-host disease were shown to be safe. However, obtaining sufficient numbers of highly pure and functional Treg cells with minimal contamination remains a challenge. We developed a novel approach to isolate "untouched" human Treg cells from healthy donors on the basis of negative selection using the surface markers CD49d and CD127. This procedure, which uses an antibody cocktail and magnetic beads for separation in an automated system (RoboSep), was scaled up and adapted to be compatible with good manufacturing practice conditions. With this setup we performed 9 Treg isolations from large-scale leukapheresis samples in a good manufacturing practice facility. These runs yielded sufficient numbers of "untouched" Treg cells for immediate use in clinical applications. The cell preparations consisted of viable highly pure FoxP3-positive Treg cells that were functional in suppressing the proliferation of effector T cells. Contamination with CD4 effector T cells was <10%. All other cell types did not exceed 2% in the final product. Remaining isolation reagents were reduced to levels that are considered safe. Treg cells isolated with this procedure will be used in a phase I clinical trial of adoptive transfer into leukemia patients developing graft-versus-host disease after stem cell transplantation.
ERIC Educational Resources Information Center
York, Travis; Becker, Christian
2012-01-01
Despite increased attention for environmental sustainability programming, large-scale adoption of pro-environmental behaviors has been slow and largely short-term. This article analyzes the crucial role of ethics in this respect. The authors utilize an interdisciplinary approach drawing on virtue ethics and cognitive development theory to…
Jesse D. Young; Nathaniel M. Anderson; Helen T. Naughton; Katrina Mullan
2018-01-01
Abundant stocks of woody biomass that are associated with active forest management can be used as fuel for bioenergy in many applications. Though factors driving large-scale biomass use in industrial settings have been studied extensively, small-scale biomass combustion systems commonly used by institutions for heating have received less attention. A zero inflated...
Effects of Scaled-Up Professional Development Courses about Inquiry-Based Learning on Teachers
ERIC Educational Resources Information Center
Maass, Katja; Engeln, Katrin
2018-01-01
Although well researched in educational studies, inquiry-based learning, a student-centred way of teaching, is far from being implemented in day-to-day science and mathematics teaching on a large scale. It is a challenge for teachers to adopt this new way of teaching in an often unsupportive school context. Therefore, it is important to…
Modeling the adoption of innovations in the presence of geographic and media influences.
Toole, Jameson L; Cha, Meeyoung; González, Marta C
2012-01-01
While there is a large body of work examining the effects of social network structure on innovation adoption, models to date have lacked considerations of real geography or mass media. In this article, we show these features are crucial to making more accurate predictions of social contagion and technology adoption at a city-to-city scale. Using data from the adoption of the popular micro-blogging platform Twitter, we present a model of adoption on a network that places friendships in real geographic space and exposes individuals to mass media influence. We show that homophily, both among individuals with similar propensities to adopt a technology and in geographic location, is critical to reproducing features of real spatiotemporal adoption. Furthermore, we estimate that mass media was responsible for increasing Twitter's user base two- to four-fold. To reflect this strength, we extend traditional contagion models to include an endogenous mass media agent that responds to those adopting an innovation as well as influencing agents to adopt themselves.
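The kind of extension described here, a contagion process augmented with an endogenous mass media agent, can be illustrated with a minimal simulation. This is a generic sketch, not the authors' model: the ring network, the parameter values and the linear media response are all illustrative assumptions.

```python
import random

def simulate_adoption(n=200, p_peer=0.3, media_gain=0.05, steps=60, seed=1):
    """SI-style adoption on a ring network plus an endogenous media agent:
    each step, a non-adopter adopts via an adopting neighbour (prob p_peer)
    or via media, whose pressure grows with the current adopter fraction.
    Returns the final adopted fraction."""
    rng = random.Random(seed)
    adopted = [False] * n
    adopted[0] = True                      # a single seed adopter
    for _ in range(steps):
        frac = sum(adopted) / n
        p_media = media_gain * frac        # endogenous media pressure
        nxt = list(adopted)
        for i in range(n):
            if adopted[i]:
                continue
            neighbours = [adopted[(i - 1) % n], adopted[(i + 1) % n]]
            if (any(neighbours) and rng.random() < p_peer) \
                    or rng.random() < p_media:
                nxt[i] = True
        adopted = nxt
    return sum(adopted) / n
```

With media_gain set to zero the model reduces to plain peer contagion, so the marginal effect of the media term can be read off directly by comparing runs.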
Large-scale dynamos in rapidly rotating plane layer convection
NASA Astrophysics Data System (ADS)
Bushby, P. J.; Käpylä, P. J.; Masada, Y.; Brandenburg, A.; Favier, B.; Guervilly, C.; Käpylä, M. J.
2018-05-01
Context. Convectively driven flows play a crucial role in the dynamo processes that are responsible for producing magnetic activity in stars and planets. It is still not fully understood why many astrophysical magnetic fields have a significant large-scale component. Aims: Our aim is to investigate the dynamo properties of compressible convection in a rapidly rotating Cartesian domain, focusing upon a parameter regime in which the underlying hydrodynamic flow is known to be unstable to a large-scale vortex instability. Methods: The governing equations of three-dimensional non-linear magnetohydrodynamics (MHD) are solved numerically. Different numerical schemes are compared and we propose a possible benchmark case for other similar codes. Results: In keeping with previous related studies, we find that convection in this parameter regime can drive a large-scale dynamo. The components of the mean horizontal magnetic field oscillate, leading to a continuous overall rotation of the mean field. Whilst the large-scale vortex instability dominates the early evolution of the system, the large-scale vortex is suppressed by the magnetic field and makes a negligible contribution to the mean electromotive force that is responsible for driving the large-scale dynamo. The cycle period of the dynamo is comparable to the ohmic decay time, with longer cycles for dynamos in convective systems that are closer to onset. In these particular simulations, large-scale dynamo action is found only when vertical magnetic field boundary conditions are adopted at the upper and lower boundaries. Strongly modulated large-scale dynamos are found at higher Rayleigh numbers, with periods of reduced activity (grand minima-like events) occurring during transient phases in which the large-scale vortex temporarily re-establishes itself, before being suppressed again by the magnetic field.
Biotechnology: herbicide-resistant crops
USDA-ARS?s Scientific Manuscript database
Transgenic, herbicide-resistant (HR) crops are planted on about 80% of the land covered by transgenic crops. More than 90% of HR crops are glyphosate-resistant (GR) crops, the others being resistant to glufosinate. The wide-scale adoption of HR crops, largely for economic reasons, has been the mos...
Solution-Processed Metal Coating to Nonwoven Fabrics for Wearable Rechargeable Batteries.
Lee, Kyulin; Choi, Jin Hyeok; Lee, Hye Moon; Kim, Ki Jae; Choi, Jang Wook
2017-12-27
Wearable rechargeable batteries require electrode platforms that can withstand various physical motions, such as bending, folding, and twisting. To this end, conductive textiles and paper have been highlighted, as their porous structures can accommodate the stress built during various physical motions. However, fabrics with plain weaves or knit structures have been mostly adopted without exploration of nonwoven counterparts. Also, the integration of conductive materials, such as carbon or metal nanomaterials, to achieve sufficient conductivity as current collectors is not well-aligned with large-scale processing in terms of cost and quality control. Here, the superiority of nonwoven fabrics is reported in electrochemical performance and bending capability compared to currently dominant woven counterparts, due to smooth morphology near the fiber intersections and the homogeneous distribution of fibers. Moreover, solution-processed electroless deposition of aluminum and nickel-copper composite is adopted for cathodes and anodes, respectively, demonstrating the large-scale feasibility of conductive nonwoven platforms for wearable rechargeable batteries. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Cormode, Graham; Dasgupta, Anirban; Goyal, Amit; Lee, Chi Hoon
2018-01-01
Many modern applications of AI such as web search, mobile browsing, image processing, and natural language processing rely on finding similar items from a large database of complex objects. Due to the very large scale of data involved (e.g., users' queries from commercial search engines), computing such near or nearest neighbors is a non-trivial task, as the computational cost grows significantly with the number of items. To address this challenge, we adopt Locality Sensitive Hashing (a.k.a. LSH) methods and evaluate four variants in a distributed computing environment (specifically, Hadoop). We identify several optimizations which improve performance, suitable for deployment in very large scale settings. The experimental results demonstrate that our variants of LSH achieve robust performance with better recall compared with "vanilla" LSH, even when using the same amount of space.
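As a rough illustration of the LSH family involved, the sketch below implements random-hyperplane LSH for cosine similarity in plain Python. It is a single-table toy, not the authors' Hadoop variants; the function names and parameters are hypothetical.

```python
import random

def hyperplane_signature(vec, planes):
    """Bit signature: sign of the dot product with each random hyperplane.
    Vectors with small cosine distance tend to share signatures."""
    return tuple(1 if sum(p * v for p, v in zip(plane, vec)) >= 0 else 0
                 for plane in planes)

def build_lsh_index(vectors, dim, n_planes=8, seed=42):
    """Hash every vector into a bucket keyed by its bit signature."""
    rng = random.Random(seed)
    planes = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n_planes)]
    buckets = {}
    for idx, vec in enumerate(vectors):
        buckets.setdefault(hyperplane_signature(vec, planes), []).append(idx)
    return planes, buckets

def query(vec, planes, buckets):
    """Candidate near neighbours: items sharing the query's bucket.
    Only candidates are re-ranked exactly, which is the LSH cost saving."""
    return buckets.get(hyperplane_signature(vec, planes), [])
```

Production systems typically use many such tables (or band the signature bits) to trade recall against the number of candidates scanned.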
Australian Employers' Adoption of Traineeships
ERIC Educational Resources Information Center
Smith, Erica; Comyn, Paul; Brennan Kemmis, Ros; Smith, Andy
2011-01-01
Traineeships are apprenticeship-like training arrangements that were initiated in Australia in 1985. They were designed to introduce apprenticeship training to a broader range of industries, occupations and individuals; they are available in occupations outside the traditional trades and crafts. Many companies use them on a large scale, some…
On Adaptive Extended Compatibility Changing Type of Product Design Strategy
NASA Astrophysics Data System (ADS)
Wenwen, Jiang; Zhibin, Xie
This article draws on research into enterprise localization and on companies' development trajectories to study strategies for product design and development. It shows that, at different stages of development, different kinds of enterprises adopt product design and development policies of different modes, and that a company's development trajectory is closely tied to its core technology and products. The results indicate that enterprises leading the market in technology and brand adopt a pioneer strategy of product research and development; enterprises that rely on providing complementary services to large-scale leading enterprises adopt a passive-imitation strategy; enterprises with partial advantages in technology, market, management or brand adopt a follower strategy; and enterprises in a relatively advantageous position adopt an applied-technology strategy centred on optimizing services, in the areas of brand culture and market service.
Constraints on a scale-dependent bias from galaxy clustering
NASA Astrophysics Data System (ADS)
Amendola, L.; Menegoni, E.; Di Porto, C.; Corsi, M.; Branchini, E.
2017-01-01
We forecast the future constraints on scale-dependent parametrizations of galaxy bias and their impact on the estimate of cosmological parameters from the power spectrum of galaxies measured in a spectroscopic redshift survey. For the latter we assume a wide survey at relatively large redshifts, similar to the planned Euclid survey, as the baseline for future experiments. To assess the impact of the bias we perform a Fisher matrix analysis, and we adopt two different parametrizations of scale-dependent bias. The fiducial models for galaxy bias are calibrated using mock catalogs of H α emitting galaxies mimicking the expected properties of the objects that will be targeted by the Euclid survey. In our analysis we have obtained two main results. First of all, allowing for a scale-dependent bias does not significantly increase the errors on the other cosmological parameters apart from the rms amplitude of density fluctuations, σ8, and the growth index γ, whose uncertainties increase by a factor up to 2, depending on the bias model adopted. Second, we find that the accuracy in the linear bias parameter b0 can be estimated to within 1%-2% at various redshifts regardless of the fiducial model. The nonlinear bias parameters have significantly large errors that depend on the model adopted. Despite this, in the more realistic scenarios departures from the simple linear bias prescription can be detected with a ~2σ significance at each redshift explored. Finally, we use the Fisher matrix formalism to assess the impact of assuming an incorrect bias model and find that the systematic errors induced on the cosmological parameters are similar or even larger than the statistical ones.
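The Fisher matrix machinery behind forecasts like this one can be sketched in a few lines. The example below is a generic Gaussian-likelihood Fisher forecast with finite-difference derivatives, applied to a toy linear model; it is not the paper's power-spectrum analysis, and all names are illustrative.

```python
def fisher_matrix(model, theta, x_points, sigma, eps=1e-6):
    """F_ij = sum_k (dmu_k/dtheta_i)(dmu_k/dtheta_j) / sigma_k^2 for
    independent Gaussian data points, with central-difference derivatives."""
    n = len(theta)
    def deriv(i, x):
        tp, tm = list(theta), list(theta)
        tp[i] += eps
        tm[i] -= eps
        return (model(tp, x) - model(tm, x)) / (2 * eps)
    return [[sum(deriv(i, x) * deriv(j, x) / s ** 2
                 for x, s in zip(x_points, sigma))
             for j in range(n)] for i in range(n)]

def marginalized_errors_2x2(F):
    """1-sigma marginalized errors: sqrt of the diagonal of F^{-1},
    written out explicitly for the 2-parameter case."""
    det = F[0][0] * F[1][1] - F[0][1] * F[1][0]
    return (F[1][1] / det) ** 0.5, (F[0][0] / det) ** 0.5
```

For a linear model mu = a + b*x sampled at x = 0, 1, 2 with unit errors, F = [[3, 3], [3, 5]], so the marginalized error on b is sqrt(3/6) ≈ 0.71; fixing a instead would give the smaller unmarginalized error 1/sqrt(5).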
Observing relationships in Finnish adoptive families: Oulu Family Rating Scale.
Tienari, Pekka; Wynne, Lyman C; Sorri, Anneli; Lahti, Ilpo; Moring, Juha; Nieminen, Pentti; Joukamaa, Matti; Naarala, Mikko; Seitamaa, Markku; Wahlberg, Karl-Erik; Miettunen, Jouko
2005-01-01
Adoption studies were intended to separate genetic from environmental "causal" factors. In earlier adoption studies, psychiatric diagnostic labels for the adoptive parents were used as a proxy for the multiple dimensions of the family rearing environment. In the Finnish Adoption Study, the research design provided the opportunity to study the adoptive family rearing environment directly. For this purpose, 33 sub-scales were selected, creating what we call the Oulu Family Rating Scale (OPAS, Oulun PerheArviointiSkaala). In this paper, the manual for scoring these sub-scales is presented.
USDA-ARS?s Scientific Manuscript database
While hydrotreated renewable jet fuel (HRJ) has been demonstrated for use in commercial and military aviation, a challenge to large-scale adoption is availability of cost competitive feedstocks. Brassica oilseed crops like Brassica napus, B. rapa, B. juncea, B. carinata, Sinapis alba, and Camelina s...
Stemming the Tide: Retaining and Supporting Science Teachers
ERIC Educational Resources Information Center
Pirkle, Sheila F.
2011-01-01
Chronically high rates of new and experienced science teacher attrition and the findings of new large-scale mentoring programs indicate that administrators should adopt new approaches. A science teacher's role encompasses demanding responsibilities, such as observing laboratory safety and OSHA mandates, as well as management of a business-like,…
USDA-ARS?s Scientific Manuscript database
Micropropagation of Psidium guajava L. (guava) is a viable alternative to currently adopted techniques for large-scale plant propagation of commercial cultivars. Assessment of clonal fidelity in micropropagated plants is the first step towards ensuring genetic uniformity in mass production of planti...
NASA Astrophysics Data System (ADS)
Plebe, Alice; Grasso, Giorgio
2016-12-01
This paper describes a system for simulating flames inside Blender, an open-source 3D computer graphics package, with the aim of analyzing hazard scenarios in large-scale industrial plants in virtual reality. Blender offers two advantages: it renders the very complex structure of large industrial plants at high resolution, and it embeds a physics engine based on smoothed particle hydrodynamics. This particle system is used to evolve a simulated fire. The interaction of this fire with the components of the plant is computed using polyhedron separation distance, adopting a Voronoi-based strategy that optimizes the number of feature distance computations. Results on a real oil and gas refinery are presented.
[Perception of parental socialization strategies in adoptive and non-adoptive families].
Bernedo Muñoz, Isabel María; Fuentes Rebollo, María Jesús; Fernández-Molina, M; Bersabé Morán, Rosa
2007-11-01
Although parental socialization styles have been investigated in recent years, little research has been carried out on the issue of parental styles in adoptive families. The aim of this research is to analyse parental styles both from the point of view of the parents and of adopted and non-adopted adolescents, taking as covariables the adolescents' sex and age. The sample was made up of 55 adopted adolescents (20 boys and 35 girls with an age range of 11-17 years) and their 55 adoptive parents, and 402 non-adopted adolescents (200 boys and 202 girls with an age range of 11-17 years), and their 258 parents. Two scales evaluated parental styles: the Affect Scale and the Rules and Demands Scale. The results showed that, both from the point of view of the parents and of the adolescents, adoptive families are more affective, communicative and inductive, and less critical and indulgent than non-adoptive families. No differences were found between adopted and non-adopted adolescents on the Parents' Rigidity Scale.
Deployment dynamics and control of large-scale flexible solar array system with deployable mast
NASA Astrophysics Data System (ADS)
Li, Hai-Quan; Liu, Xiao-Feng; Guo, Shao-Jing; Cai, Guo-Ping
2016-10-01
In this paper, the deployment dynamics and control of a large-scale flexible solar array system with a deployable mast are investigated. The adopted solar array system is introduced first, including the system configuration, the deployable mast and the solar arrays with their several mechanisms. Then the dynamic equation of the solar array system is established via the Jourdain velocity variation principle, and a method for handling dynamics with topology changes is introduced. In addition, a PD controller with disturbance estimation is designed to eliminate the drift of the spacecraft main body. Finally, the validity of the dynamic model is verified through a comparison with ADAMS software, and the deployment process and dynamic behavior of the system are studied in detail. Simulation results indicate that the proposed model effectively describes the deployment dynamics of large-scale flexible solar arrays and that the proposed controller is practical for eliminating the drift of the spacecraft main body.
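A PD law with disturbance estimation of the kind mentioned can be illustrated on a toy double-integrator plant (position drift under a constant unknown acceleration). This is a generic sketch under assumed gains and a simple low-pass residual observer, not the controller from the paper.

```python
def simulate_pd(kp=4.0, kd=3.0, disturbance=0.5, dt=0.01, steps=5000,
                estimate=True, obs_gain=2.0):
    """Plant x'' = u + d with constant unknown disturbance d.
    Control law: u = -kp*x - kd*v - d_hat, where d_hat low-pass filters
    the residual between realized and commanded acceleration.
    Returns the final position and disturbance estimate."""
    x, v, d_hat = 1.0, 0.0, 0.0
    for _ in range(steps):
        u = -kp * x - kd * v - (d_hat if estimate else 0.0)
        a = u + disturbance                      # realized acceleration
        # observer: d_hat tracks the residual (a - u) = d exponentially
        d_hat += obs_gain * ((a - u) - d_hat) * dt
        v += a * dt                              # explicit Euler step
        x += v * dt
    return x, d_hat
```

Without the estimator the PD loop settles at the steady-state offset x = d/kp (here 0.5/4 = 0.125); with it, d_hat converges to d and the offset is driven to zero, which is the drift-elimination idea.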
Manufacture of tumor- and virus-specific T lymphocytes for adoptive cell therapies
Wang, X; Rivière, I
2015-01-01
Adoptive transfer of tumor-infiltrating lymphocytes (TILs) and genetically engineered T lymphocytes expressing chimeric antigen receptors (CARs) or conventional alpha/beta T-cell receptors (TCRs), collectively termed adoptive cell therapy (ACT), is an emerging novel strategy to treat cancer patients. Application of ACT has been constrained by the ability to isolate and expand functional tumor-reactive T cells. The transition of ACT from a promising experimental regimen to an established standard of care treatment relies largely on the establishment of safe, efficient, robust and cost-effective cell manufacturing protocols. The manufacture of cellular products under current good manufacturing practices (cGMPs) has a critical role in the process. Herein, we review current manufacturing methods for the large-scale production of clinical-grade TILs, virus-specific and genetically modified CAR or TCR transduced T cells in the context of phase I/II clinical trials as well as the regulatory pathway to get these complex personalized cellular products to the clinic. PMID:25721207
Adoption of a High-Impact Innovation in a Homogeneous Population
NASA Astrophysics Data System (ADS)
Weiss, Curtis H.; Poncela-Casasnovas, Julia; Glaser, Joshua I.; Pah, Adam R.; Persell, Stephen D.; Baker, David W.; Wunderink, Richard G.; Nunes Amaral, Luís A.
2014-10-01
Adoption of innovations, whether new ideas, technologies, or products, is crucially important to knowledge societies. The landmark studies of adoption dealt with innovations having great societal impact (such as antibiotics or hybrid crops) but where determining the utility of the innovation was straightforward (such as fewer side effects or greater yield). Recent large-scale studies of adoption were conducted within heterogeneous populations and focused on products with little societal impact. Here, we focus on a case with great practical significance: adoption by small groups of highly trained individuals of innovations with large societal impact but for which it is impractical to determine the true utility of the innovation. Specifically, we study experimentally the adoption by critical care physicians of a diagnostic assay that complements current protocols for the diagnosis of life-threatening bacterial infections and for which a physician cannot estimate the true accuracy of the assay based on personal experience. We show through computational modeling of the experiment that infection-spreading models—which have been formalized as generalized contagion processes—are not consistent with the experimental data, while a model inspired by opinion models is able to reproduce the empirical data. Our modeling approach enables us to investigate the efficacy of different intervention schemes on the rate and robustness of innovation adoption in the real world. While our study is focused on critical care physicians, our findings have implications for other settings in education, research, and business, where small groups of highly qualified peers make decisions about the adoption of innovations whose utility is difficult if not impossible to gauge.
Role of optometry school in single day large scale school vision testing
Anuradha, N; Ramani, Krishnakumar
2015-01-01
Background: School vision testing aims at identification and management of refractive errors. Large-scale school vision testing using conventional methods is time-consuming and demands a lot of chair time from the eye care professionals. A new strategy involving a school of optometry in single day large scale school vision testing is discussed. Aim: The aim was to describe a new approach of performing vision testing of school children on a large scale in a single day. Materials and Methods: A single day vision testing strategy was implemented wherein 123 members (20 teams comprising optometry students and headed by optometrists) conducted vision testing for children in 51 schools. School vision testing included basic vision screening, refraction, frame measurements, frame choice and referrals for other ocular problems. Results: A total of 12448 children were screened, among whom 420 (3.37%) were identified to have refractive errors. Of these, 28 (1.26%) children belonged to the primary, 163 (9.80%) to the middle, 129 (4.67%) to the secondary, and 100 (1.73%) to the higher secondary levels of education, respectively. 265 (2.12%) children were referred for further evaluation. Conclusion: Single day large scale school vision testing can be adopted by schools of optometry to reach a higher number of children within a short span. PMID:25709271
Incentivizing the Production and Use of Open Educational Resources in Higher Education Institutions
ERIC Educational Resources Information Center
Annand, David; Jensen, Tilly
2017-01-01
Substituting open educational resources (OER) for commercially-produced textbooks results in demonstrable cost savings for students in most higher education institutions. Yet OER are still not widely used, and progress toward large-scale adoption in most colleges and universities has been slow. This article reviews the literature informing…
Educational Interventions for Children with ASD: A Systematic Literature Review 2008-2013
ERIC Educational Resources Information Center
Bond, Caroline; Symes, Wendy; Hebron, Judith; Humphrey, Neil; Morewood, Gareth; Woods, Kevin
2016-01-01
Systematic literature reviews can play a key role in underpinning evidence-based practice. To date, large-scale reviews of interventions for individuals with Autism Spectrum Disorder (ASD) have focused primarily on research quality. To assist practitioners, the current review adopted a broader framework which allowed for greater consideration of…
International Students' and Employers' Use of Rankings: A Cross-National Analysis
ERIC Educational Resources Information Center
Souto-Otero, Manuel; Enders, Jürgen
2017-01-01
The article examines, primarily based on large-scale survey data, the functionalist proposition that HE customers, students and employers, demand rankings to be able to adopt informed decisions on where to study and who to recruit respectively. This is contrasted to a Weberian "conflict" perspective on rankings in which positional…
Educational Games and Virtual Reality as Disruptive Technologies
ERIC Educational Resources Information Center
Psotka, Joseph
2013-01-01
New technologies often have the potential for disrupting existing established practices, but nowhere is this so pertinent as in education and training today. And yet, education has been glacially slow to adopt these changes on a large scale, and innovations seem to be driven more by students and their changing social lifestyles than…
ERIC Educational Resources Information Center
Wallace, Mike
This paper explores how characteristics of complex educational change may virtually dictate the leadership strategies adopted by those charged with bringing about change. The change in question here is the large-scale reorganization of local education authorities (LEAs) across England. The article focuses on how across-the-board initiatives to…
Web-based Visual Analytics for Extreme Scale Climate Science
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steed, Chad A; Evans, Katherine J; Harney, John F
In this paper, we introduce a Web-based visual analytics framework for democratizing advanced visualization and analysis capabilities pertinent to large-scale earth system simulations. We address significant limitations of present climate data analysis tools such as tightly coupled dependencies, inefficient data movements, complex user interfaces, and static visualizations. Our Web-based visual analytics framework removes critical barriers to the widespread accessibility and adoption of advanced scientific techniques. Using distributed connections to back-end diagnostics, we minimize data movements and leverage HPC platforms. We also mitigate system dependency issues by employing a RESTful interface. Our framework embraces the visual analytics paradigm via new visual navigation techniques for hierarchical parameter spaces, multi-scale representations, and interactive spatio-temporal data mining methods that retain details. Although generalizable to other science domains, the current work focuses on improving exploratory analysis of large-scale Community Land Model (CLM) and Community Atmosphere Model (CAM) simulations.
Approximate Computing Techniques for Iterative Graph Algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Panyala, Ajay R.; Subasi, Omer; Halappanavar, Mahantesh
Approximate computing enables processing of large-scale graphs by trading off quality for performance. Approximate computing techniques have become critical not only due to the emergence of parallel architectures but also the availability of large scale datasets enabling data-driven discovery. Using two prototypical graph algorithms, PageRank and community detection, we present several approximate computing heuristics to scale the performance with minimal loss of accuracy. We present several heuristics including loop perforation, data caching, incomplete graph coloring and synchronization, and evaluate their efficiency. We demonstrate performance improvements of up to 83% for PageRank and up to 450x for community detection, with low impact on accuracy for both algorithms. We expect the proposed approximate techniques will enable scalable graph analytics on data of importance to several applications in science and their subsequent adoption to scale similar graph algorithms.
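Loop perforation, one of the heuristics named in this abstract, can be illustrated on PageRank: skip a fraction of the power-iteration sweeps and accept slightly stale ranks in exchange for proportional speedup. The sketch below is a generic illustration under assumed parameters, not the authors' implementation.

```python
def pagerank_perforated(adj, d=0.85, iters=60, skip_every=0):
    """Power-iteration PageRank on an adjacency list. Loop perforation:
    every skip_every-th sweep is skipped entirely, so with skip_every=2
    only half the sweeps execute."""
    n = len(adj)
    rank = [1.0 / n] * n
    for it in range(iters):
        if skip_every and it % skip_every == 0:
            continue                          # perforated (skipped) sweep
        new = [(1.0 - d) / n] * n
        for u, nbrs in enumerate(adj):
            if nbrs:
                share = d * rank[u] / len(nbrs)
                for v in nbrs:
                    new[v] += share
            else:                             # dangling node: spread evenly
                for v in range(n):
                    new[v] += d * rank[u] / n
        rank = new
    return rank
```

Because power iteration contracts toward the same fixed point, dropping sweeps degrades accuracy gracefully rather than catastrophically, which is what makes this algorithm a good candidate for perforation.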
Hudson, Judith N; Farmer, Elizabeth A; Weston, Kathryn M; Bushnell, John A
2015-01-16
Particularly when undertaken on a large scale, implementing innovation in higher education poses many challenges. Sustaining the innovation requires early adoption of a coherent implementation strategy. Using an example from clinical education, this article describes a process used to implement a large-scale innovation with the intent of achieving sustainability. Desire to improve the effectiveness of undergraduate medical education has led to growing support for a longitudinal integrated clerkship (LIC) model. This involves a move away from the traditional clerkship of 'block rotations' with frequent changes in disciplines, to a focus upon clerkships with longer duration and opportunity for students to build sustained relationships with supervisors, mentors, colleagues and patients. A growing number of medical schools have adopted the LIC model for a small percentage of their students. At a time when increasing medical school numbers and class sizes are leading to competition for clinical supervisors it is however a daunting challenge to provide a longitudinal clerkship for an entire medical school class. This challenge is presented to illustrate the strategy used to implement sustainable large scale innovation. A strategy to implement and build a sustainable longitudinal integrated community-based clerkship experience for all students was derived from a framework arising from Roberto and Levesque's research in business. The framework's four core processes: chartering, learning, mobilising and realigning, provided guidance in preparing and rolling out the 'whole of class' innovation. Roberto and Levesque's framework proved useful for identifying the foundations of the implementation strategy, with special emphasis on the relationship building required to implement such an ambitious initiative. Although this was innovation in a new School it required change within the school, wider university and health community. 
Challenges encountered included some resistance to moving away from traditional hospital-centred education, initial student concern, resource limitations, workforce shortage and potential burnout of the innovators. Large-scale innovations in medical education may productively draw upon research from other disciplines for guidance on how to lay the foundations for successfully achieving sustainability.
Mechanisation of large-scale agricultural fields in developing countries - a review.
Onwude, Daniel I; Abdulstter, Rafia; Gomes, Chandima; Hashim, Norhashila
2016-09-01
Mechanisation of large-scale agricultural fields often requires the application of modern technologies such as mechanical power, automation, control and robotics. These technologies are generally associated with relatively well developed economies. The application of these technologies in some developing countries in Africa and Asia is limited by factors such as technology compatibility with the environment, availability of resources to facilitate the technology adoption, cost of technology purchase, government policies, adequacy of technology and appropriateness in addressing the needs of the population. As a result, many of the available resources have been used inadequately by farmers, who continue to rely mostly on conventional means of agricultural production, using traditional tools and equipment in most cases. This has led to low productivity and high cost of production, among other problems. Therefore this paper attempts to evaluate the application of present day technology and its limitations to the advancement of large-scale mechanisation in developing countries of Africa and Asia. Particular emphasis is given to a general understanding of the various levels of mechanisation, present day technology, its management and application to large-scale agricultural fields. This review also emphasizes a future outlook that will enable a gradual, evolutionary and sustainable technological change. The study concludes that large-scale agricultural farm mechanisation for sustainable food production in Africa and Asia must be anchored on a coherent strategy based on the actual needs and priorities of the large-scale farmers. © 2016 Society of Chemical Industry.
Creating grander families: older adults adopting younger kin and nonkin.
Hinterlong, James; Ryan, Scott
2008-08-01
There is a dearth of research on older adoptive parents caring for minor children, despite a growing number of such adoptions finalized each year. This study offers a large-scale investigation of adoptive families headed by older parents. We describe these families and explore how preadoptive kinship between the adoptive parent and the child impacts adoption outcomes. We analyze data from kin (n = 98) and nonkin (n = 310) adoptive families headed by adults aged 60 years and older. We find that older kin adoptive families are smaller, report lower income, and include adoptive mothers with less formal education. Children in these families had less severe needs for special care at the time of placement. Although kin and nonkin older parents offer similar assessments of their parent-child relationships, kin adopters indicate a greater willingness to adopt the same child again and yet report less positive current family functioning. Multivariate regression analyses reveal that preadoptive kinship predicts more negative parental assessment of the adoption's impact on the family and less positive family functioning net of other parent, family, and child characteristics. Externalizing behavior by the child (e.g., delinquency or aggression) is the strongest predictor of deleterious outcomes for both groups. Kin adoption by older adults creates new families under strain but does not reduce parental commitment to the child. We conclude that older adults serve as effective adoptive parents but would benefit from preadoption and postadoption services to assist them in preparing for and positively addressing the challenging behaviors exhibited by adopted children.
The PMA Scale: A Measure of Physicians' Motivation to Adopt Medical Devices.
Hatz, Maximilian H M; Sonnenschein, Tim; Blankart, Carl Rudolf
2017-04-01
Studies have often stated that individual-level determinants are important drivers for the adoption of medical devices. Empirical evidence supporting this claim is, however, scarce. At the individual level, physicians' adoption motivation has often been considered important in the context of adoption decisions, but a clear notion of its dimensions and corresponding measurement scales is not available. The aim of this study was to develop and subsequently validate a scale to measure hospital-based physicians' motivation to adopt medical devices. The development and validation of the physician-motivation-adoption (PMA) scale were based on a literature search, internal expert meetings, a pilot study with physicians, and a three-stage online survey. The data collected in the online survey were analyzed using exploratory factor analysis (EFA), and the PMA scale was revised according to the results. Confirmatory factor analysis (CFA) was conducted to test the results from the EFA in the third stage. Reliability and validity tests and subgroup analyses were also conducted. Overall, 457 questionnaires were completed by medical personnel of the National Health Service England. The EFA favored a six-factor solution to appropriately describe physicians' motivation. The CFA confirmed the results from the EFA. Our tests indicated good reliability and validity of the PMA scale. This is the first reliable and valid scale to measure physicians' adoption motivation. Future adoption studies assessing the individual level should include the PMA scale to obtain more information about the role of physicians' motivation in the broader adoption context. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Comparative Approaches to Genetic Discrimination: Chasing Shadows?
Joly, Yann; Feze, Ida Ngueng; Song, Lingqiao; Knoppers, Bartha M
2017-05-01
Genetic discrimination (GD) is one of the most pervasive issues associated with genetic research and its large-scale implementation. An increasing number of countries have adopted public policies to address this issue. Our research presents a worldwide comparative review and typology of these approaches. We conclude with suggestions for public policy development. Copyright © 2017 Elsevier Ltd. All rights reserved.
Evaluating English Language Teaching Software for Kids: Education or Entertainment or Both?
ERIC Educational Resources Information Center
Kazanci, Zekeriya; Okan, Zuhal
2009-01-01
The purpose of this study is to offer a critical consideration of instructional software designed particularly for children. Since the early 1990s computer applications integrating education with entertainment have been adopted on a large scale by both educators and parents. It is expected that through edutainment software the process of learning…
Leadership Identity in Ethnically Diverse Schools in South Africa and England
ERIC Educational Resources Information Center
Lumby, Jacky; Heystek, Jan
2012-01-01
This article adopts an international perspective to examine the perceptions and practice of leaders in a South African and an English primary school and the leadership implications. Both schools have experienced a relatively swift and large scale diversification of learners away from the previous white majority. In each case the educators have not…
Application of Open-Source Enterprise Information System Modules: An Empirical Study
ERIC Educational Resources Information Center
Lee, Sang-Heui
2010-01-01
Although there have been a number of studies on large scale implementation of proprietary enterprise information systems (EIS), open-source software (OSS) for EIS has received limited attention in spite of its potential as a disruptive innovation. Cost saving is the main driver for adopting OSS among the other possible benefits including security…
Whatever Happened to All Those Plans to Hire More Minority Professors?
ERIC Educational Resources Information Center
Gose, Ben
2008-01-01
Nationwide, minority and female faculty members were trailblazers in the 1960s and 1970s. Only in the past generation have most colleges adopted large-scale plans to diversify their faculties. This article revisited ambitious plans announced at five universities during the past two decades to see how they have fared. They are Duke University,…
Miake-Lye, Isomi M; Chuang, Emmeline; Rodriguez, Hector P; Kominski, Gerald F; Yano, Elizabeth M; Shortell, Stephen M
2017-08-24
Theories, models, and frameworks used by implementation science, including Diffusion of Innovations, tend to focus on the adoption of one innovation, when often organizations may be facing multiple simultaneous adoption decisions. For instance, despite evidence that care management practices (CMPs) are helpful in managing chronic illness, there is still uneven adoption by physician organizations. This exploratory paper leverages this natural variation in uptake to describe inter-organizational patterns in adoption of CMPs and to better understand how adoption choices may be related to one another. We assessed a cross section of national survey data from physician organizations reporting on the use of 20 CMPs (5 each for asthma, congestive heart failure, depression, and diabetes). Item response theory was used to explore patterns in adoption, first considering all 20 CMPs together and then by subsets according to disease focus or CMP type (e.g., registries, patient reminders). Mokken scale analysis explored whether adoption choices were linked by disease focus or CMP type and whether a consistent ordering of adoption choices was present. The Mokken scale for all 20 CMPs demonstrated medium scalability (H = 0.43), but no consistent ordering. Scales for subsets of CMPs sharing a disease focus had medium scalability (0.4 < H < 0.5), while subsets sharing a CMP type had strong scalability (H > 0.5). Scales for CMP type consistently ranked diabetes CMPs as most adoptable and depression CMPs as least adoptable. Within disease focus scales, patient reminders were ranked as the most adoptable CMP, while clinician feedback and patient education were ranked the least adoptable. Patterns of adoption indicate that innovation characteristics may influence adoption. CMP dissemination efforts may be strengthened by encouraging traditionally non-adopting organizations to focus on more adoptable practices first and then describing a pathway for the adoption of subsequent CMPs. Clarifying why certain CMPs are "less adoptable" may also provide insights into how to overcome CMP adoption constraints.
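For readers unfamiliar with Mokken scaling, the scalability coefficient H reported above can be illustrated with a short sketch (illustrative code and hypothetical example data, not the authors' implementation or survey data):

```python
import numpy as np

def loevinger_H(X):
    """Loevinger/Mokken scalability coefficient H for binary items.
    X: (n_respondents, n_items) 0/1 matrix, e.g. CMP adopted or not.
    H = sum of inter-item covariances divided by the maximum
    covariances attainable given the item marginals."""
    X = np.asarray(X, dtype=float)
    p = X.mean(axis=0)                        # item popularities
    cov = np.cov(X, rowvar=False, ddof=0)     # population covariances
    num = den = 0.0
    k = X.shape[1]
    for i in range(k):
        for j in range(i + 1, k):
            p_hi, p_lo = max(p[i], p[j]), min(p[i], p[j])
            num += cov[i, j]
            den += p_lo * (1.0 - p_hi)        # max covariance given marginals
    return num / den

# A perfect Guttman ("cumulative") adoption pattern yields H = 1;
# by convention H > 0.5 is a strong scale and 0.4-0.5 a medium one.
X = [[1, 0, 0], [1, 1, 0], [1, 1, 1], [0, 0, 0]]
H = loevinger_H(X)
```

Departures from the cumulative pattern (an organization adopting a "hard" practice but not an "easy" one) pull H below 1, which is why H also signals whether a consistent adoption ordering exists.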
Yoshikawa, Toshiaki; Takahara, Masashi; Tomiyama, Mai; Nieda, Mie; Maekawa, Ryuji; Nakatsura, Tetsuya
2014-11-01
Specific cellular immunotherapy for cancer requires efficient generation and expansion of cytotoxic T lymphocytes (CTLs) that recognize tumor-associated antigens. However, it is difficult to isolate and expand functionally active T-cells ex vivo. In this study, we investigated the efficacy of a new method to induce expansion of antigen-specific CTLs for adoptive immunotherapy. We used tumor-associated antigen glypican-3 (GPC3)-derived peptide and cytomegalovirus (CMV)-derived peptide as antigens. Treatment of human peripheral blood mononuclear cells (PBMCs) with zoledronate is a method that enables large-scale γδ T-cell expansion. To induce expansion of γδ T cells and antigen-specific CTLs, the PBMCs of healthy volunteers or patients vaccinated with GPC3 peptide were cultured with both peptide and zoledronate for 14 days. The expansion of γδ T cells and peptide-specific CTLs from a few PBMCs using zoledronate yields cell numbers sufficient for adoptive transfer. The rate of increase of GPC3‑specific CTLs was approximately 24- to 170,000-fold. These CD8(+) cells, including CTLs, showed GPC3-specific cytotoxicity against SK-Hep-1/hGPC3 and T2 pulsed with GPC3 peptide, but not against SK-Hep-1/vec and T2 pulsed with human immunodeficiency virus peptide. On the other hand, CD8(-) cells, including γδ T cells, showed cytotoxicity against SK-Hep-1/hGPC3 and SK-Hep-1/vec, but did not show GPC3 specificity. Furthermore, adoptive cell transfer of CD8(+) cells, CD8(-) cells, and total cells after expansion significantly inhibited tumor growth in an NOD/SCID mouse model. This study indicates that simultaneous expansion of γδ T cells and peptide-specific CTLs using zoledronate is useful for adoptive immunotherapy.
NASA Astrophysics Data System (ADS)
Fitzgerald, Michael; McKinnon, David H.; Danaia, Lena
2015-12-01
In this paper, we outline the theory behind the educational design used to implement a large-scale high school astronomy education project. This design was created in response to the realization of ineffective educational design in the initial stages of the project. The new design follows an iterative improvement model in which the materials and general approach can evolve in response to solicited feedback. The improvement cycle concentrates on avoiding overly positive self-evaluation, addressing relevant external school and community factors, and backward mapping from clearly set goals. Limiting factors, including time, resources, support and the potential for failure in the classroom, are dealt with as much as possible in the large-scale design, allowing teachers the best chance of successful implementation in their real-world classrooms. The actual approach adopted following the principles of this design, which has seen success in bringing real astronomical data and access to telescopes into the high school classroom, is also outlined.
Large-scale fabrication of micro-lens array by novel end-fly-cutting-servo diamond machining.
Zhu, Zhiwei; To, Suet; Zhang, Shaojian
2015-08-10
Fast/slow tool servo (FTS/STS) diamond turning is a very promising technique for the generation of micro-lens arrays (MLAs). However, it remains a challenge to process MLAs at large scale due to certain inherent limitations of the technique. In the present study, a novel ultra-precision diamond cutting method, known as the end-fly-cutting-servo (EFCS) system, is adopted and investigated for large-scale generation of MLAs. After a detailed discussion of its characteristic advantages for processing MLAs, the optimal toolpath generation strategy for the EFCS is developed with consideration of the geometry and installation pose of the diamond tool. A typical aspheric MLA over a large area is experimentally fabricated, and the resulting form accuracy, surface micro-topography and machining efficiency are critically investigated. The results indicate that an MLA with homogeneous quality over the whole area is obtained. In addition, high machining efficiency, an extremely small volume of control points for the toolpath, and optimal usage of the system dynamics of the machine tool during cutting are simultaneously achieved.
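For orientation, the lenslet surfaces targeted in such machining work are commonly described by the standard conic/aspheric sag equation; a minimal sketch (the parameter values are illustrative assumptions, not taken from the paper):

```python
import math

def aspheric_sag(r, R, k=0.0):
    """Sag z(r) of a conic lenslet surface with vertex radius of
    curvature R and conic constant k (standard aspheric formula):
    z = c*r^2 / (1 + sqrt(1 - (1+k)*c^2*r^2)), with c = 1/R."""
    c = 1.0 / R
    return c * r * r / (1.0 + math.sqrt(1.0 - (1.0 + k) * c * c * r * r))

# Illustrative lenslet: 2 mm vertex radius, 0.25 mm half-aperture.
z_edge = aspheric_sag(0.25, R=2.0)
```

A toolpath generator would evaluate such a sag function over the array's lenslet centers to obtain the target height map that the servo axis must follow.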
Optimal variable-grid finite-difference modeling for porous media
NASA Astrophysics Data System (ADS)
Liu, Xinxin; Yin, Xingyao; Li, Haishan
2014-12-01
Numerical modeling of poroelastic waves by the finite-difference (FD) method is more expensive than that of acoustic or elastic waves. To improve the accuracy and computational efficiency of seismic modeling, variable-grid FD methods have been developed. In this paper, we derive optimal staggered-grid finite-difference schemes with variable grid spacing and time step for seismic modeling in porous media. FD operators with small grid spacing and time step are adopted for low-velocity or small-scale geological bodies, while FD operators with large grid spacing and time step are adopted for high-velocity or large-scale regions. The dispersion relations of the FD schemes are derived based on plane-wave theory, and the FD coefficients are obtained using Taylor expansion. Dispersion analysis and modeling results demonstrate that the proposed method achieves higher accuracy at lower computational cost for poroelastic wave simulation in heterogeneous reservoirs.
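The Taylor-expansion step for obtaining staggered-grid FD coefficients can be sketched generically (this is the textbook construction for a uniform grid, not the authors' variable-grid scheme itself):

```python
import numpy as np

def staggered_fd_coefficients(M):
    """Coefficients c_1..c_M of the 2M-th order staggered-grid
    first-derivative operator
        f'(x) ~ (1/h) * sum_m c_m * [f(x+(2m-1)h/2) - f(x-(2m-1)h/2)],
    found by matching Taylor expansions, which reduces to the linear
    system  sum_m c_m * (2m-1)^(2j-1) = delta_{j1},  j = 1..M."""
    odd = 2 * np.arange(1, M + 1) - 1           # half-node offsets 1, 3, 5, ...
    A = np.array([[o ** (2 * j - 1) for o in odd] for j in range(1, M + 1)],
                 dtype=float)
    b = np.zeros(M)
    b[0] = 1.0                                  # match f' exactly; kill higher odd terms
    return np.linalg.solve(A, b)

# M=2 reproduces the classic 4th-order coefficients 9/8 and -1/24.
c = staggered_fd_coefficients(2)
```

Variable-grid schemes apply the same matching idea but with non-uniform offsets, so the Vandermonde-like system is rebuilt wherever the spacing changes.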
Evolving from bioinformatics in-the-small to bioinformatics in-the-large.
Parker, D Stott; Gorlick, Michael M; Lee, Christopher J
2003-01-01
We argue the significance of a fundamental shift in bioinformatics, from in-the-small to in-the-large. Adopting a large-scale perspective is a way to manage the problems endemic to the world of the small: constellations of incompatible tools for which the effort required to assemble an integrated system exceeds the perceived benefit of the integration. Where bioinformatics in-the-small is about data and tools, bioinformatics in-the-large is about metadata and dependencies. Dependencies represent the complexities of large-scale integration, including the requirements and assumptions governing the composition of tools. The popular make utility is a very effective system for defining and maintaining simple dependencies, and it offers a number of insights about the essence of bioinformatics in-the-large. Keeping an in-the-large perspective has been very useful to us in large bioinformatics projects. We give two fairly different examples and extract lessons from them showing how it has helped. Both examples suggest the benefit of explicitly defining and managing knowledge flows and knowledge maps (which represent metadata regarding types, flows, and dependencies), and also suggest approaches for developing bioinformatics database systems. Generally, we argue that large-scale engineering principles can be successfully adapted from disciplines such as software engineering and data management, and that having an in-the-large perspective will be a key advantage in the next phase of bioinformatics development.
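The make-style dependency management the authors highlight can be sketched in a few lines: a hypothetical pipeline (the file names are illustrative, not from the paper) ordered with Python's standard-library topological sorter:

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical bioinformatics pipeline, in the spirit of make rules:
# each target lists the files it depends on.
rules = {
    "alignment.bam": ["reads.fastq", "genome.fa"],
    "variants.vcf":  ["alignment.bam", "genome.fa"],
    "report.html":   ["variants.vcf"],
    "reads.fastq":   [],
    "genome.fa":     [],
}

# Dependencies come before the targets that need them.
order = list(TopologicalSorter(rules).static_order())
```

A make-like driver would then walk this order and rebuild only the targets that are older than their dependencies, which is exactly the "simple dependencies" discipline the abstract credits to make.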
New-type planar field emission display with superaligned carbon nanotube yarn emitter.
Liu, Peng; Wei, Yang; Liu, Kai; Liu, Liang; Jiang, Kaili; Fan, Shoushan
2012-05-09
With superaligned carbon nanotube yarn as the emitter, we fabricated a 16 × 16 pixel field emission display prototype using screen printing and laser cutting technologies. A planar diode field emission structure was adopted, and a very sharp carbon nanotube yarn tip emitter was formed by laser cutting. Low-voltage phosphor was also coated onto the anode electrodes by screen printing. With a specially designed circuit, we demonstrated dynamic character display on the prototype. Both the emitter material and the fabrication technologies reported here are easy to scale up to large areas.
Solar Energy Technologies Office FY 2017 Budget At-A-Glance
DOE Office of Scientific and Technical Information (OSTI.GOV)
None, None
2016-03-01
The Solar Energy Technologies Office supports the SunShot Initiative goal to make solar energy technologies cost competitive with conventional energy sources by 2020. Reducing the total installed cost for utility-scale solar electricity by approximately 75% (2010 baseline) to roughly $0.06 per kWh without subsidies will enable rapid, large-scale adoption of solar electricity across the United States. This investment will help re-establish American technological and market leadership in solar energy, reduce environmental impacts of electricity generation, and strengthen U.S. economic competitiveness.
Computational solutions to large-scale data management and analysis
Schadt, Eric E.; Linderman, Michael D.; Sorenson, Jon; Lee, Lawrence; Nolan, Garry P.
2011-01-01
Today we can generate hundreds of gigabases of DNA and RNA sequencing data in a week for less than US$5,000. The astonishing rate of data generation by these low-cost, high-throughput technologies in genomics is being matched by that of other technologies, such as real-time imaging and mass spectrometry-based flow cytometry. Success in the life sciences will depend on our ability to properly interpret the large-scale, high-dimensional data sets that are generated by these technologies, which in turn requires us to adopt advances in informatics. Here we discuss how we can master the different types of computational environments that exist — such as cloud and heterogeneous computing — to successfully tackle our big data problems. PMID:20717155
ERIC Educational Resources Information Center
Burt, S. Alexandra
2010-01-01
A recent large-scale meta-analysis of twin and adoption studies indicated that shared environmental influences make important contributions to most forms of child and adolescent psychopathology (Burt, 2009b). The sole exception to this robust pattern of results was observed for attention-deficit/hyperactivity disorder (ADHD), which appeared to be…
ERIC Educational Resources Information Center
Perz, Stephen G.; Cabrera, Liliana; Carvalho, Lucas Araujo; Castillo, Jorge; Barnes, Grenville
2010-01-01
Recent years have witnessed an expansion in international investment in large-scale infrastructure projects with the goal of achieving global economic integration. We focus on one such project, the Inter-Oceanic Highway in the "MAP" region, a trinational frontier where Bolivia, Brazil, and Peru meet in the southwestern Amazon. We adopt a…
Video games: a route to large-scale STEM education?
Mayo, Merrilea J
2009-01-02
Video games have enormous mass appeal, reaching audiences in the hundreds of thousands to millions. They also embed many pedagogical practices known to be effective in other environments. This article reviews the sparse but encouraging data on learning outcomes for video games in science, technology, engineering, and math (STEM) disciplines, then reviews the infrastructural obstacles to wider adoption of this new medium.
ERIC Educational Resources Information Center
You, Zhuran; Hu, Yingzi
2013-01-01
The past decade or so has witnessed a large-scale reform of the Chinese national college entrance exam (the gaokao) system, which nonetheless has been trapped within a dilemma of balancing diversification and equality. Specifically speaking, the reform needs to reconcile the clash between adopting diverse and holistic college admissions to fix the…
The Views of International Students Regarding University Support Services in Australia: A Case Study
ERIC Educational Resources Information Center
Roberts, Pam; Boldy, Duncan; Dunworth, Katie
2015-01-01
This paper reports on a study aimed at developing an improved understanding of the support needs of international students. Using a case study approach at one Australian university, a three stage data collection process was adopted: interviews with key support service providers in the university, student focus groups, and a large-scale survey.…
ERIC Educational Resources Information Center
Tack, Hanne; Valcke, Martin; Rots, Isabel; Struyven, Katrien; Vanderlinde, Ruben
2018-01-01
Taking into account the pressing need to understand more about what characterises teacher educators' professional development, this article adopts a mixed-method approach to explore Flemish (Dutch-speaking part of Belgium) teacher educators' professional development needs and opportunities. Analysis results of a large-scale survey study with 611…
Soil carbon accounting and assumptions for forestry and forest-related land use change
Linda S. Heath; James E. Smith
2000-01-01
Comprehensive, large-scale carbon accounting systems are needed as nations agree to work toward reducing their greenhouse gas (GHG) emissions. However, adopting a standard accounting system is difficult because multiple science and policy uses for such a system help fuel the debate about the nature of an appropriate system. Accounting systems must address all major...
Kushniruk, A; Kaipio, J; Nieminen, M; Hyppönen, H; Lääveri, T; Nohr, C; Kanstrup, A M; Berg Christiansen, M; Kuo, M-H; Borycki, E
2014-08-15
The objective of this paper is to explore approaches to understanding the usability of health information systems at regional and national levels. Several different methods are discussed in case studies from Denmark, Finland and Canada. They range from small scale qualitative studies involving usability testing of systems to larger scale national level questionnaire studies aimed at assessing the use and usability of health information systems by entire groups of health professionals. It was found that regional and national usability studies can complement smaller scale usability studies, and that they are needed in order to understand larger trends regarding system usability. Despite adoption of EHRs, many health professionals rate the usability of the systems as low. A range of usability issues have been noted when data is collected on a large scale through use of widely distributed questionnaires and websites designed to monitor user perceptions of usability. As health information systems are deployed on a widespread basis, studies that examine systems used regionally or nationally are required. In addition, collection of large scale data on the usability of specific IT products is needed in order to complement smaller scale studies of specific systems.
Changing classroom designs: Easy; Changing instructors' pedagogies: Not so easy...
NASA Astrophysics Data System (ADS)
Lasry, Nathaniel; Charles, Elizabeth; Whittaker, Chris; Dedic, Helena; Rosenfield, Steven
2013-01-01
Technology-rich student-centered classrooms such as SCALE-UP and TEAL are designed to actively engage students. We examine what happens when instructors adopt the classroom but not the pedagogy that goes with it. We measure the effect of using socio-technological spaces on students' conceptual change and compare learning gains made in groups using different pedagogies (active learning vs. conventional instruction). We also correlate instructors' self-reported instructional approach (teacher-centered, student-centered) with their classes' normalized FCI gains. We find that technology-rich spaces are only effective when implemented with student-centered active pedagogies. In their absence, the technology-rich classroom is not significantly different from conventional teacher-centered classrooms. We also find that instructors' self-reported perception of student-centeredness accounts for a large fraction of the variance (r² = 0.83) in their classes' average normalized gain. Adopting student-centered pedagogies appears to be a necessary condition for the effective use of technology-rich spaces. However, adopting a new pedagogy seems more difficult than adopting new technology.
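The normalized gain referred to here is the standard Hake gain; as a reminder of the formula (this is the conventional definition, not code from the paper):

```python
def normalized_gain(pre, post, max_score=100.0):
    """Hake's class-average normalized gain <g>: the fraction of the
    possible pre-to-post improvement actually realized,
    g = (post - pre) / (max_score - pre)."""
    return (post - pre) / (max_score - pre)

# e.g. a class averaging 30% on the FCI pre-test and 65% post-test
g = normalized_gain(30.0, 65.0)   # -> 0.5
```

Because it rescales by the room left for improvement, <g> lets classes with different pre-test scores be compared on one axis, which is what makes the reported r² = 0.83 correlation meaningful.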
Adiabatic quantum-flux-parametron cell library adopting minimalist design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takeuchi, Naoki, E-mail: takeuchi-naoki-kx@ynu.jp; Yamanashi, Yuki; Yoshikawa, Nobuyuki
We herein build an adiabatic quantum-flux-parametron (AQFP) cell library adopting minimalist design and a symmetric layout. In the proposed minimalist design, every logic cell is designed by arraying four types of building block cells: buffer, NOT, constant, and branch cells. Therefore, minimalist design enables us to effectively build and customize an AQFP cell library. The symmetric layout reduces unwanted parasitic magnetic coupling and ensures a large mutual inductance in an output transformer, which enables very long wiring between logic cells. We design and fabricate several logic circuits using the minimal AQFP cell library so as to test logic cells in the library. Moreover, we experimentally investigate the maximum wiring length between logic cells. Finally, we present an experimental demonstration of an 8-bit carry look-ahead adder designed using the minimal AQFP cell library and demonstrate that the proposed cell library is sufficiently robust to realize large-scale digital circuits.
Kearney, Sean Patrick; Coops, Nicholas C; Chan, Kai M A; Fonte, Steven J; Siles, Pablo; Smukler, Sean M
2017-11-01
Agroforestry management in smallholder agriculture can provide climate change mitigation and adaptation benefits and has been promoted as 'climate-smart agriculture' (CSA), yet has generally been left out of international and voluntary carbon (C) mitigation agreements. A key reason for this omission is the cost and uncertainty of monitoring C at the farm scale in heterogeneous smallholder landscapes. A largely overlooked alternative is to monitor C at more aggregated scales and develop C contracts with groups of landowners, community organizations or C aggregators working across entire landscapes (e.g., watersheds, communities, municipalities, etc.). In this study we use a 100-km² agricultural area in El Salvador to demonstrate how high-spatial-resolution optical satellite imagery can be used to map aboveground woody biomass (AGWB) C at the landscape scale with very low uncertainty (95% probability of a deviation of less than 1%). Uncertainty of AGWB-C estimates remained low (<5%) for areas as small as 250 ha, despite high uncertainties at the farm and plot scale (34-99%). We estimate that CSA adoption could more than double AGWB-C stocks on agricultural lands in the study area, and that utilizing AGWB-C maps to target denuded areas could increase C gains per unit area by 46%. The potential value of C credits under a plausible adoption scenario would range from $38,270 to $354,000 yr⁻¹ for the study area, or about $13 to $124 ha⁻¹ yr⁻¹, depending on C prices. Considering farm sizes in smallholder landscapes rarely exceed 1-2 ha, relying solely on direct C payments to farmers may not lead to widespread CSA adoption, especially if farm-scale monitoring is required. Instead, landscape-scale approaches to C contracting, supported by satellite-based monitoring methods such as ours, could be a key strategy to reduce costs and uncertainty of C monitoring in heterogeneous smallholder landscapes, thereby incentivizing more widespread CSA adoption. Copyright © 2017. Published by Elsevier Ltd.
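The pattern of high plot-scale but very low landscape-scale uncertainty is what one expects when roughly independent per-plot errors average out; a back-of-the-envelope sketch (the 50% per-plot error and the plot counts are illustrative assumptions, and spatial autocorrelation in real data would slow this decay):

```python
import math

plot_rel_err = 0.50   # assumed relative error of a single plot estimate
for n_plots, label in [(1, "plot"), (250, "farm cluster"), (40000, "landscape")]:
    # For independent errors, the relative error of the aggregate
    # shrinks as 1/sqrt(n).
    rel_err = plot_rel_err / math.sqrt(n_plots)
    print(f"{label:>13}: {rel_err:.2%}")
```

Under these assumptions the landscape-scale relative error falls to a fraction of a percent, which is the basic statistical reason aggregated C contracting can tolerate noisy per-plot measurements.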
NASA Astrophysics Data System (ADS)
Block, P. J.; Alexander, S.; WU, S.
2017-12-01
Skillful season-ahead predictions conditioned on local and large-scale hydro-climate variables can provide valuable knowledge to farmers and reservoir operators, enabling informed water resource allocation and management decisions. In Ethiopia, the potential for advancing agriculture and hydropower management, and subsequently economic growth, is substantial, yet evidence suggests a weak adoption of prediction information by sectoral audiences. To address common critiques, including skill, scale, and uncertainty, probabilistic forecasts are developed at various scales - temporally and spatially - for the Finchaa hydropower dam and the Koga agricultural scheme in an attempt to promote uptake and application. Significant prediction skill is evident across scales, particularly for statistical models. This raises questions regarding other potential barriers to forecast utilization at community scales, which are also addressed.
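Skill of probabilistic forecasts like these is commonly summarized with the Brier score and its skill score against a climatological reference; a generic sketch (standard verification formulas, not the authors' code):

```python
import numpy as np

def brier_score(p, o):
    """Mean squared error between forecast probabilities p and
    binary outcomes o (0/1); lower is better."""
    p, o = np.asarray(p, float), np.asarray(o, float)
    return np.mean((p - o) ** 2)

def brier_skill_score(p, o, p_clim):
    """Skill relative to always forecasting the climatological
    probability p_clim: 1 is perfect, 0 matches climatology,
    negative is worse than climatology."""
    o = np.asarray(o, float)
    ref = brier_score(np.full_like(o, p_clim), o)
    return 1.0 - brier_score(p, o) / ref
```

Reporting skill relative to climatology directly addresses the "is this better than what I already know?" question that often decides whether sectoral users adopt a forecast.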
Skyscape Archaeology: an emerging interdiscipline for archaeoastronomers and archaeologists
NASA Astrophysics Data System (ADS)
Henty, Liz
2016-02-01
For historical reasons, archaeoastronomy and archaeology differ in their approach to prehistoric monuments, and this has created a divide between the disciplines, which adopt seemingly incompatible methodologies. The reasons behind the impasse will be explored to show how these different approaches gave rise to their respective methods. Archaeological investigations tend to concentrate on single-site analysis, whereas archaeoastronomical surveys tend to be data driven, examining large numbers of similar sites. A comparison will be made between traditional archaeoastronomical data gathering and an emerging methodology which looks at sites on a small scale and combines archaeology and astronomy. Silva's recent research in Portugal and this author's survey in Scotland have explored this methodology and termed it skyscape archaeology. This paper argues that this type of phenomenological skyscape archaeology offers an alternative to large-scale statistical studies which analyse astronomical data obtained from a large number of superficially similar archaeological sites.
Gao, Tia; Kim, Matthew I.; White, David; Alm, Alexander M.
2006-01-01
We have developed a system for real-time patient monitoring during large-scale disasters. Our system is designed with scalable algorithms to monitor large numbers of patients, an intuitive interface to support the overwhelmed responders, and ad-hoc mesh networking capabilities to maintain connectivity to patients in the chaotic settings. This paper describes an iterative approach to user-centered design adopted to guide development of our system. This system is a part of the Advanced Health and Disaster Aid Network (AID-N) architecture. PMID:17238348
Kozica, Samantha L; Teede, Helena J; Harrison, Cheryce L; Klein, Ruth; Lombard, Catherine B
2016-01-01
The prevalence of obesity in rural and remote areas is elevated in comparison to urban populations, highlighting the need for interventions targeting obesity prevention in these settings. Implementing evidence-based obesity prevention programs is challenging. This study aimed to investigate factors influencing the implementation of obesity prevention programs, including adoption, program delivery, community uptake, and continuation, specifically within rural settings. Nested within a large-scale randomized controlled trial, a qualitative exploratory approach was adopted, with purposive sampling techniques utilized, to recruit stakeholders from 41 small rural towns in Australia. In-depth semistructured interviews were conducted with clinical health professionals, health service managers, and local government employees. Open coding was completed independently by 2 investigators and thematic analysis undertaken. In-depth interviews revealed that obesity prevention programs were valued by the rural workforce. Program implementation is influenced by interrelated factors across: (1) contextual factors and (2) organizational capacity. Key recommendations to manage the challenges of implementing evidence-based programs focused on reducing program delivery costs, aided by the provision of a suite of implementation and evaluation resources. Informing the scale-up of future prevention programs, stakeholders highlighted the need to build local rural capacity through developing supportive university partnerships, generating local program ownership and promoting active feedback to all program partners. We demonstrate that the rural workforce places a high value on obesity prevention programs. Our results inform the future scale-up of obesity prevention programs, providing an improved understanding of strategies to optimize implementation of evidence-based prevention programs. © 2015 National Rural Health Association.
Velasco, Veronica; Griffin, Kenneth W; Antichi, Mariella; Celata, Corrado
2015-10-01
Across developed countries, experimentation with alcohol, tobacco, and other drugs often begins in the early adolescent years. Several evidence-based programs have been developed to prevent adolescent substance use. Many of the most rigorously tested and empirically supported prevention programs were initially developed and tested in the United States. Increasingly, these interventions are being adopted for use in Europe and throughout the world. This paper reports on a large-scale comprehensive initiative designed to select, adapt, implement, and sustain an evidence-based drug abuse prevention program in Italy. As part of a large-scale regionally funded collaboration in the Lombardy region of Italy, we report on processes through which a team of stakeholders selected, translated and culturally adapted, planned, implemented and evaluated the Life Skills Training (LST) school-based drug abuse prevention program, an evidence-based intervention developed in the United States. We discuss several challenges and lessons learned and implications for prevention practitioners and researchers attempting to undertake similar international dissemination projects. We review several published conceptual models designed to promote the replication and widespread dissemination of effective programs, and discuss their strengths and limitations in the context of planning and implementing a complex, large-scale real-world dissemination effort. Copyright © 2015 Elsevier Ltd. All rights reserved.
Dwarshuis, Nate J; Parratt, Kirsten; Santiago-Miranda, Adriana; Roy, Krishnendu
2017-05-15
Therapeutic cells hold tremendous promise in treating currently incurable, chronic diseases since they perform multiple, integrated, complex functions in vivo compared to traditional small-molecule drugs or biologics. However, they also pose significant challenges as therapeutic products because (a) their complex mechanisms of actions are difficult to understand and (b) low-cost bioprocesses for large-scale, reproducible manufacturing of cells have yet to be developed. Immunotherapies using T cells and dendritic cells (DCs) have already shown great promise in treating several types of cancers, and human mesenchymal stromal cells (hMSCs) are now extensively being evaluated in clinical trials as immune-modulatory cells. Despite these exciting developments, the full potential of cell-based therapeutics cannot be realized unless new engineering technologies enable cost-effective, consistent manufacturing of high-quality therapeutic cells at large-scale. Here we review cell-based immunotherapy concepts focused on the state-of-the-art in manufacturing processes including cell sourcing, isolation, expansion, modification, quality control (QC), and culture media requirements. We also offer insights into how current technologies could be significantly improved and augmented by new technologies, and how disciplines must converge to meet the long-term needs for large-scale production of cell-based immunotherapies. Copyright © 2017 Elsevier B.V. All rights reserved.
Hou, Deyi; Guthrie, Peter; Rigby, Mark
2016-12-15
Over the past decade, sustainable remediation has grown from an emerging concept into a widely accepted new institutional norm. Scholarly literature increased exponentially from nearly none in the late 1990s to over 400 publications per year in 2014. The present study used a questionnaire survey conducted in 2012 and 2014 to assess the global trend in the awareness and practice of sustainable remediation. A total of 373 responses were received from survey participants located in 22 countries. The survey found that the US and the UK had the highest levels of awareness and adoption rates of sustainable remediation. Asia and other developing countries had much lower awareness levels and/or adoption rates. For all regions, the adoption rates were significantly lower than the awareness levels, indicating a large gap between awareness and practice. One specific example is minimizing greenhouse gas emissions, a focal point in the sustainable remediation literature that nevertheless showed a very low adoption rate in this survey. This study also found that the adoption rates of a few sustainable remediation considerations, such as "minimizing local scale secondary impact", "minimizing national to global scale secondary impact", and "bringing prosperity to disadvantaged communities", decreased between 2012 and 2014. On the other hand, the survey also suggests that the remediation community devoted more expertise, training, and resources to sustainable remediation between 2012 and 2014. The mixed results suggest that, in order to enhance sustainable remediation adoption, continued effort is needed to improve practitioners' understanding of sustainable remediation and to link self-interest and public interest with sustainable remediation considerations. Copyright © 2016 Elsevier Ltd. All rights reserved.
A National Survey of Early Adopters of E-Book Reading in Sweden
ERIC Educational Resources Information Center
Bergström, Annika; Höglund, Lars
2014-01-01
Introduction: Reading literature is believed to be a cornerstone of democracy and good citizenship. With a decline in book reading and an increasing e-book market, it is of importance to follow the diffusion of e-book reading. Method: Data were collected in a large-scale, mail survey of the Swedish population aged 16 to 85 years conducted in 2012.…
RELIABILITY, AVAILABILITY, AND SERVICEABILITY FOR PETASCALE HIGH-END COMPUTING AND BEYOND
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chokchai "Box" Leangsuksun
2011-05-31
Our project is a multi-institutional research effort that adopts the interplay of reliability, availability, and serviceability (RAS) aspects to solve resilience issues in high-end scientific computing for the next generation of supercomputers. Results lie in the following tracks: failure prediction in large-scale HPC; investigation of reliability issues and mitigation techniques, including in GPGPU-based HPC systems; and HPC resilience runtime and tools.
Closing the Gap Between Research and Field Applications for Multi-UAV Cooperative Missions
2013-09-01
IMU: Inertial Measurement Unit; INCOSE: International Council on Systems Engineering; ISR: Intelligence, Surveillance and Reconnaissance; ISTAR: … Light-weight and low-cost inertial measurement units (IMUs) are widely adopted for navigation of small-scale UAVs. Low-cost IMUs are characterized by high measurement noise and large measurement biases. Hence pure inertial navigation using low-cost IMUs drifts rapidly. In practice, inertial
ERIC Educational Resources Information Center
Rosen, Andrew S.
2018-01-01
Student evaluations of teaching are widely adopted across academic institutions, but there are many underlying trends and biases that can influence their interpretation. Publicly accessible web-based student evaluations of teaching are of particular relevance, due to their widespread use by students in the course selection process and the quantity…
Weighing trees with lasers: advances, challenges and opportunities
Boni Vicari, M.; Burt, A.; Calders, K.; Lewis, S. L.; Raumonen, P.; Wilkes, P.
2018-01-01
Terrestrial laser scanning (TLS) is providing exciting new ways to quantify tree and forest structure, particularly above-ground biomass (AGB). We show how TLS can address some of the key uncertainties and limitations of current approaches to estimating AGB based on empirical allometric scaling equations (ASEs) that underpin all large-scale estimates of AGB. TLS provides extremely detailed non-destructive measurements of tree form independent of tree size and shape. We show examples of three-dimensional (3D) TLS measurements from various tropical and temperate forests and describe how the resulting TLS point clouds can be used to produce quantitative 3D models of branch and trunk size, shape and distribution. These models can drastically improve estimates of AGB, provide new, improved large-scale ASEs, and deliver insights into a range of fundamental tree properties related to structure. Large quantities of detailed measurements of individual 3D tree structure also have the potential to open new and exciting avenues of research in areas where difficulties of measurement have until now prevented statistical approaches to detecting and understanding underlying patterns of scaling, form and function. We discuss these opportunities and some of the challenges that remain to be overcome to enable wider adoption of TLS methods. PMID:29503726
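The allometric scaling equations (ASEs) the abstract refers to are typically simple power laws relating biomass to easily measured tree dimensions. As an illustration only (this specific model is not taken from the abstract), the widely used pantropical equation of Chave et al. (2014) estimates AGB from wood density, trunk diameter, and height:

```python
def agb_kg(wood_density, dbh_cm, height_m):
    """Estimated above-ground biomass (kg) using the Chave et al. (2014)
    pantropical allometric model: AGB = 0.0673 * (rho * D^2 * H)^0.976,
    with wood density rho in g/cm^3, diameter at breast height D in cm,
    and tree height H in m."""
    return 0.0673 * (wood_density * dbh_cm ** 2 * height_m) ** 0.976

# A 30 cm diameter, 25 m tall tree of moderate wood density:
estimate = agb_kg(0.6, 30.0, 25.0)   # roughly 700 kg
```

TLS-derived 3D tree models allow such equations to be recalibrated non-destructively, which is the improvement the abstract describes.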
Test of the CLAS12 RICH large-scale prototype in the direct proximity focusing configuration
Anefalos Pereira, S.; Baltzell, N.; Barion, L.; ...
2016-02-11
A large area ring-imaging Cherenkov detector has been designed to provide clean hadron identification capability in the momentum range from 3 GeV/c up to 8 GeV/c for the CLAS12 experiments at the upgraded 12 GeV continuous electron beam accelerator facility of Jefferson Laboratory. The adopted solution foresees a novel hybrid optics design based on an aerogel radiator, composite mirrors, and highly packed, highly segmented photon detectors. Cherenkov light will either be imaged directly (forward tracks) or after two mirror reflections (large angle tracks). We report here the results of tests of a large-scale prototype of the RICH detector, performed with the hadron beam of the CERN T9 experimental hall for the direct detection configuration. The tests demonstrated that the proposed design provides the required pion-to-kaon rejection factor of 1:500 in the whole momentum range.
The Use of Electronic Data Capture Tools in Clinical Trials: Web-Survey of 259 Canadian Trials
Jonker, Elizabeth; Sampson, Margaret; Krleža-Jerić, Karmela; Neisa, Angelica
2009-01-01
Background Electronic data capture (EDC) tools provide automated support for data collection, reporting, query resolution, randomization, and validation, among other features, for clinical trials. There is a trend toward greater adoption of EDC tools in clinical trials, but there is also uncertainty about how many trials are actually using this technology in practice. A systematic review of EDC adoption surveys conducted up to 2007 concluded that only 20% of trials are using EDC systems, but previous surveys had weaknesses. Objectives Our primary objective was to estimate the proportion of phase II/III/IV Canadian clinical trials that used an EDC system in 2006 and 2007. The secondary objectives were to investigate the factors that can have an impact on adoption and to develop a scale to assess the extent of sophistication of EDC systems. Methods We conducted a Web survey to estimate the proportion of trials that were using an EDC system. The survey was sent to the Canadian site coordinators for 331 trials. We also developed and validated a scale using Guttman scaling to assess the extent of sophistication of EDC systems. Trials using EDC were compared by the level of sophistication of their systems. Results We had a 78.2% response rate (259/331) for the survey. It is estimated that 41% (95% CI 37.5%-44%) of clinical trials were using an EDC system. Trials funded by academic institutions, government, and foundations were less likely to use an EDC system compared to those sponsored by industry. Also, larger trials tended to be more likely to adopt EDC. The EDC sophistication scale had six levels and a coefficient of reproducibility of 0.901 (P< .001) and a coefficient of scalability of 0.79. There was no difference in sophistication based on the funding source, but pediatric trials were likely to use a more sophisticated EDC system. 
Conclusion The adoption of EDC systems in clinical trials in Canada is higher than the literature indicated: a large proportion of clinical trials in Canada use some form of automated data capture system. To inform future adoption, research should gather stronger evidence on the costs and benefits of using different EDC systems. PMID:19275984
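The coefficient of reproducibility behind the EDC sophistication scale above can be sketched as one minus the fraction of responses deviating from each respondent's ideal cumulative pattern. The error-counting convention below (deviation from the perfect scalogram implied by the total score) is a common simplification and an assumption, not necessarily the exact procedure used in the study:

```python
import numpy as np

def guttman_reproducibility(X):
    """Coefficient of reproducibility for a binary respondents-x-items
    matrix: CR = 1 - errors / (n_respondents * n_items). An 'error' is a
    cell deviating from the ideal cumulative pattern implied by the
    respondent's total score, with items ordered from most- to
    least-endorsed."""
    X = np.asarray(X)
    order = np.argsort(-X.sum(axis=0), kind="stable")  # easiest items first
    Xs = X[:, order]
    n, k = Xs.shape
    errors = 0
    for row in Xs:
        ideal = np.zeros(k, dtype=int)
        ideal[:int(row.sum())] = 1       # perfect scalogram pattern
        errors += int(np.sum(row != ideal))
    return 1.0 - errors / (n * k)

# Perfectly cumulative response patterns reproduce exactly:
perfect = np.array([[1, 0, 0], [1, 1, 0], [1, 1, 1], [0, 0, 0]])
cr = guttman_reproducibility(perfect)   # 1.0
```

By the conventional benchmark, a CR of at least 0.90 indicates a valid Guttman scale, which the study's reported 0.901 just clears.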
Eating green. Consumers' willingness to adopt ecological food consumption behaviors.
Tobler, Christina; Visschers, Vivianne H M; Siegrist, Michael
2011-12-01
Food consumption is associated with various environmental impacts, and consumers' food choices therefore represent important environmental decisions. In a large-scale survey, we examined consumers' beliefs about ecological food consumption and their willingness to adopt such behaviors. Additionally, we investigated in more detail how different motives and food-related attitudes influenced consumers' willingness to reduce meat consumption and to buy seasonal fruits and vegetables. We found consumers believed avoiding excessive packaging had the strongest impact on the environment, whereas they rated purchasing organic food and reducing meat consumption as least environmentally beneficial. Similarly, respondents appeared to be most unwilling to reduce meat consumption and purchase organic food. Taste and environmental motives influenced consumers' willingness to eat seasonal fruits and vegetables, whereas preparedness to reduce meat consumption was influenced by health and ethical motives. Women and respondents who preferred natural foods were more willing to adopt ecological food consumption patterns. Copyright © 2011 Elsevier Ltd. All rights reserved.
Pressing needs of biomedical text mining in biocuration and beyond: opportunities and challenges
Singhal, Ayush; Leaman, Robert; Catlett, Natalie; Lemberger, Thomas; McEntyre, Johanna; Polson, Shawn; Xenarios, Ioannis; Arighi, Cecilia; Lu, Zhiyong
2016-01-01
Text mining in the biomedical sciences is rapidly transitioning from small-scale evaluation to large-scale application. In this article, we argue that text-mining technologies have become essential tools in real-world biomedical research. We describe four large scale applications of text mining, as showcased during a recent panel discussion at the BioCreative V Challenge Workshop. We draw on these applications as case studies to characterize common requirements for successfully applying text-mining techniques to practical biocuration needs. We note that system ‘accuracy’ remains a challenge and identify several additional common difficulties and potential research directions including (i) the ‘scalability’ issue due to the increasing need of mining information from millions of full-text articles, (ii) the ‘interoperability’ issue of integrating various text-mining systems into existing curation workflows and (iii) the ‘reusability’ issue on the difficulty of applying trained systems to text genres that are not seen previously during development. We then describe related efforts within the text-mining community, with a special focus on the BioCreative series of challenge workshops. We believe that focusing on the near-term challenges identified in this work will amplify the opportunities afforded by the continued adoption of text-mining tools. Finally, in order to sustain the curation ecosystem and have text-mining systems adopted for practical benefits, we call for increased collaboration between text-mining researchers and various stakeholders, including researchers, publishers and biocurators. PMID:28025348
Novel approach for extinguishing large-scale coal fires using gas-liquid foams in open pit mines.
Lu, Xinxiao; Wang, Deming; Qin, Botao; Tian, Fuchao; Shi, Guangyi; Dong, Shuaijun
2015-12-01
Coal fires are a serious threat to workers' safety and to production in open pit mines. Fire sources are hidden and numerous, and large cavities are left in the coal seam after the coal has burned, making conventional extinguishing technologies difficult to apply. Foams are considered an efficient means of fire extinguishment in these large-scale workplaces. A novel foam preparation method is introduced, and an original cavitation jet device is proposed to add foaming agent stably. Jet cavitation occurs when the water flow rate and pressure ratio reach specified values. Through a self-built foaming system, high-performance foams are produced and then infused into the blast drilling holes at a large flow rate. Requiring no complicated operation, this system is found to be very suitable for extinguishing large-scale coal fires. Field application shows that foam generated with the proposed technology achieves a good extinguishing effect. The temperature reduction using foams is 6-7 times greater than with water, and the CO concentration in the drilling hole is reduced from 9.43 to 0.092‰. The coal fires were controlled successfully in open pit mines, ensuring normal production as well as the safety of personnel and equipment.
Multi scales based sparse matrix spectral clustering image segmentation
NASA Astrophysics Data System (ADS)
Liu, Zhongmin; Chen, Zhicai; Li, Zhanming; Hu, Wenjin
2018-04-01
In image segmentation, spectral clustering algorithms must adopt an appropriate scaling parameter to calculate the similarity matrix between pixels, which can have a great impact on the clustering result. Moreover, when the number of data instances is large, the computational complexity and memory use of the algorithm increase greatly. To solve these two problems, we propose a new spectral clustering image segmentation algorithm based on multiple scales and a sparse matrix. We first devise a new feature extraction method, then extract image features on different scales, and finally use the feature information to construct a sparse similarity matrix, which improves operating efficiency. Compared with the traditional spectral clustering algorithm, image segmentation experiments show that our algorithm achieves better accuracy and robustness.
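The two problems named in the abstract, scale-parameter sensitivity and the dense O(n²) similarity matrix, are commonly addressed with a sparse k-nearest-neighbour affinity. The sketch below uses scikit-learn on toy 2-D points standing in for pixel features; it illustrates the general technique, not the authors' multi-scale method:

```python
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.neighbors import kneighbors_graph

# Toy stand-in for pixel features: two well-separated 2-D point clouds.
rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0.0, 0.3, size=(50, 2)),
                  rng.normal(3.0, 0.3, size=(50, 2))])

# A k-NN graph keeps the similarity matrix sparse: O(n*k) stored entries
# instead of the dense O(n^2) matrix the abstract warns about.
knn = kneighbors_graph(data, n_neighbors=10, include_self=False)
sparsity = knn.nnz / (data.shape[0] ** 2)   # fraction of nonzero entries

# Spectral clustering on the sparse k-NN affinity; this variant needs no
# hand-tuned Gaussian scale parameter.
labels = SpectralClustering(n_clusters=2, affinity="nearest_neighbors",
                            n_neighbors=10, random_state=0,
                            assign_labels="kmeans").fit_predict(data)
```

For a real image, `data` would be per-pixel feature vectors (e.g. intensity plus coordinates) rather than synthetic points.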
Diffusion and Large-Scale Adoption of Computer-Supported Training Simulations in the Military Domain
2013-09-01
NAVAL POSTGRADUATE SCHOOL, MONTEREY, CALIFORNIA. THESIS. This thesis was performed at the MOVES Institute. Approved for public release. Performing organization: Naval Postgraduate School, Monterey, CA 93943-5000.
ERIC Educational Resources Information Center
Griffiths, Rebecca; Mulhern, Christine; Spies, Richard; Chingos, Matthew
2015-01-01
To address the paucity of data on the use of MOOCs in "traditional" postsecondary institutions, Ithaka S+R and the University System of Maryland studied the feasibility of repurposing MOOCs for use in hybrid, credit-bearing courses. In this paper we will describe the design of a large-scale study undertaken to examine the use of MOOCs in…
Zhang, Gongxuan; Wang, Yongli; Wang, Tianshu
2018-01-01
We study the problem of employing a mobile sink in a large-scale Event-Driven Wireless Sensor Network (EWSN) for the purpose of harvesting data from sensor nodes. Generally, this employment mitigates the main weakness of WSNs, namely energy consumption in battery-driven sensor nodes. The main motivation of our work is to address challenges related to the network's topology by adopting a mobile sink that moves along a predefined trajectory in the environment. Since, in this fashion, it is not possible to gather data from sensor nodes individually, we adopt the approach of designating some of the sensor nodes as Rendezvous Points (RPs) in the network. We argue that RP planning in this case is a tradeoff between minimizing the number of RPs and decreasing the number of hops a sensor node needs to transmit its data to the related RP, which leads to minimizing average energy consumption in the network. We address the problem by formulating the challenges and expectations as a Mixed Integer Linear Program (MILP). Then, after proving the NP-hardness of the problem, we propose three effective and distributed heuristics for RP planning, identifying sojourn locations, and constructing routing trees. Finally, experimental results prove the effectiveness of our approach. PMID:29734718
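The tradeoff between the number of RPs and the forwarding distance from sensors to their nearest RP can be sketched with a toy greedy heuristic. This is an illustration under assumed geometry (a straight-line sink trajectory, Euclidean distance standing in for hop count), not one of the paper's three heuristics:

```python
import math
import random

random.seed(1)

# Hypothetical deployment: 200 sensor nodes in a 100 x 100 field; the
# mobile sink's predefined trajectory is the horizontal line y = 50.
nodes = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(200)]

# RP candidates: nodes the sink passes within an assumed radio range of.
candidates = [p for p in nodes if abs(p[1] - 50.0) <= 10.0]

def avg_dist_to_nearest_rp(rps):
    # Euclidean distance as a crude stand-in for hop count.
    return sum(min(math.dist(p, r) for r in rps) for p in nodes) / len(nodes)

# Greedy farthest-point selection: grow the RP set one candidate at a
# time and record how the average forwarding distance falls, exposing
# the RP-count versus hop-distance tradeoff described in the abstract.
rps = [candidates[0]]
costs = [avg_dist_to_nearest_rp(rps)]
for _ in range(7):
    farthest = max(candidates, key=lambda c: min(math.dist(c, r) for r in rps))
    rps.append(farthest)
    costs.append(avg_dist_to_nearest_rp(rps))
```

Because adding an RP can only shorten a node's distance to its nearest RP, the recorded costs are non-increasing; choosing where to stop along that curve is the tradeoff the MILP formalizes.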
Scalable and sustainable electrochemical allylic C-H oxidation
NASA Astrophysics Data System (ADS)
Horn, Evan J.; Rosen, Brandon R.; Chen, Yong; Tang, Jiaze; Chen, Ke; Eastgate, Martin D.; Baran, Phil S.
2016-05-01
New methods and strategies for the direct functionalization of C-H bonds are beginning to reshape the field of retrosynthetic analysis, affecting the synthesis of natural products, medicines and materials. The oxidation of allylic systems has played a prominent role in this context as possibly the most widely applied C-H functionalization, owing to the utility of enones and allylic alcohols as versatile intermediates, and their prevalence in natural and unnatural materials. Allylic oxidations have featured in hundreds of syntheses, including some natural product syntheses regarded as “classics”. Despite many attempts to improve the efficiency and practicality of this transformation, the majority of conditions still use highly toxic reagents (based around toxic elements such as chromium or selenium) or expensive catalysts (such as palladium or rhodium). These requirements are problematic in industrial settings; currently, no scalable and sustainable solution to allylic oxidation exists. This oxidation strategy is therefore rarely used for large-scale synthetic applications, limiting the adoption of this retrosynthetic strategy by industrial scientists. Here we describe an electrochemical C-H oxidation strategy that exhibits broad substrate scope, operational simplicity and high chemoselectivity. It uses inexpensive and readily available materials, and represents a scalable allylic C-H oxidation (demonstrated on 100 grams), enabling the adoption of this C-H oxidation strategy in large-scale industrial settings without substantial environmental impact.
Hering, Daniel; Carvalho, Laurence; Argillier, Christine; Beklioglu, Meryem; Borja, Angel; Cardoso, Ana Cristina; Duel, Harm; Ferreira, Teresa; Globevnik, Lidija; Hanganu, Jenica; Hellsten, Seppo; Jeppesen, Erik; Kodeš, Vit; Solheim, Anne Lyche; Nõges, Tiina; Ormerod, Steve; Panagopoulos, Yiannis; Schmutz, Stefan; Venohr, Markus; Birk, Sebastian
2015-01-15
Water resources globally are affected by a complex mixture of stressors resulting from a range of drivers, including urban and agricultural land use, hydropower generation and climate change. Understanding how stressors interfere and impact upon ecological status and ecosystem services is essential for developing effective River Basin Management Plans and shaping future environmental policy. This paper details the nature of these problems for Europe's water resources and the need to find solutions at a range of spatial scales. In terms of the latter, we describe the aims and approaches of the EU-funded project MARS (Managing Aquatic ecosystems and water Resources under multiple Stress) and the conceptual and analytical framework that it is adopting to provide this knowledge, understanding and tools needed to address multiple stressors. MARS is operating at three scales: At the water body scale, the mechanistic understanding of stressor interactions and their impact upon water resources, ecological status and ecosystem services will be examined through multi-factorial experiments and the analysis of long time-series. At the river basin scale, modelling and empirical approaches will be adopted to characterise relationships between multiple stressors and ecological responses, functions, services and water resources. The effects of future land use and mitigation scenarios in 16 European river basins will be assessed. At the European scale, large-scale spatial analysis will be carried out to identify the relationships amongst stress intensity, ecological status and service provision, with a special focus on large transboundary rivers, lakes and fish. 
The project will support managers and policy makers in the practical implementation of the Water Framework Directive (WFD), of related legislation and of the Blueprint to Safeguard Europe's Water Resources by advising the 3rd River Basin Management Planning cycle, the revision of the WFD and by developing new tools for diagnosing and predicting multiple stressors. Copyright © 2014. Published by Elsevier B.V.
Effectiveness of a Scaled-Up Arthritis Self-Management Program in Oregon: Walk With Ease.
Conte, Kathleen P; Odden, Michelle C; Linton, Natalie M; Harvey, S Marie
2016-12-01
To evaluate the effectiveness of Walk With Ease (WWE), an evidence-based arthritis self-management program that was scaled up in Oregon in 2012 to 2014. Guided by the RE-AIM (Reach, Effectiveness, Adoption, Implementation, Maintenance) framework, we collected participant surveys and attendance records and conducted observations. Preprogram and postprogram, participants self-reported pain and fatigue (scale: 0-10 points; high scores indicate more pain and fatigue) and estimated episodes of physical activity per week in the last month. Recruitment successfully reached the targeted population-sedentary adults with arthritis (n = 598). Participants reported significant reduction in pain (-0.47 points; P = .006) and fatigue (-0.58 points; P = .021) and increased physical activity (0.86 days/week; P < .001). WWE was adopted by workplaces and medical, community, faith, and retirement centers. Most WWE programs were delivered with high fidelity; average attendance was 47%. WWE is suitable for implementation by diverse organizations. Effect sizes for pain and fatigue were less than those in the original WWE studies, but this is to be expected for a large-scale implementation. Public health implications: WWE can be effectively translated to diverse, real-world contexts to help sedentary adults increase physical activity and reduce pain and fatigue.
Spatial confinement of active microtubule networks induces large-scale rotational cytoplasmic flow
Suzuki, Kazuya; Miyazaki, Makito; Takagi, Jun; Itabashi, Takeshi; Ishiwata, Shin’ichi
2017-01-01
Collective behaviors of motile units through hydrodynamic interactions induce directed fluid flow on a larger length scale than individual units. In cells, active cytoskeletal systems composed of polar filaments and molecular motors drive fluid flow, a process known as cytoplasmic streaming. The motor-driven elongation of microtubule bundles generates turbulent-like flow in purified systems; however, it remains unclear whether and how microtubule bundles induce large-scale directed flow like the cytoplasmic streaming observed in cells. Here, we adopted Xenopus egg extracts as a model system of the cytoplasm and found that microtubule bundle elongation induces directed flow for which the length scale and timescale depend on the existence of geometrical constraints. At lower dynein activity, kinesins bundle and slide microtubules, organizing extensile microtubule bundles. In bulk extracts, the extensile bundles connected with each other and formed a random network, and vortex flows with a length scale comparable to the bundle length continually emerged and persisted for 1 min at multiple places. When the extracts were encapsulated in droplets, the extensile bundles pushed the droplet boundary. This pushing force initiated symmetry breaking of the randomly oriented bundle network, leading to bundles aligning into a rotating vortex structure. This vortex induced rotational cytoplasmic flows on length scales and timescales 10- to 100-fold larger than those of the vortex flows emerging in bulk extracts. Our results suggest that microtubule systems use not only hydrodynamic interactions but also mechanical interactions to induce large-scale temporally stable cytoplasmic flow. PMID:28265076
Why Online Education Will Attain Full Scale
ERIC Educational Resources Information Center
Sener, John
2010-01-01
Online higher education has attained scale and is poised to take the next step in its growth. Although significant obstacles to a full scale adoption of online education remain, we will see full scale adoption of online higher education within the next five to ten years. Practically all higher education students will experience online education in…
Uncovering Nature’s 100 TeV Particle Accelerators in the Large-Scale Jets of Quasars
NASA Astrophysics Data System (ADS)
Georganopoulos, Markos; Meyer, Eileen; Sparks, William B.; Perlman, Eric S.; Van Der Marel, Roeland P.; Anderson, Jay; Sohn, S. Tony; Biretta, John A.; Norman, Colin Arthur; Chiaberge, Marco
2016-04-01
Since the first jet X-ray detections sixteen years ago, the adopted paradigm for the X-ray emission has been the IC/CMB model, which requires highly relativistic (Lorentz factors of 10-20), extremely powerful (sometimes super-Eddington) kpc-scale jets. I will discuss recently obtained strong evidence, from two different avenues, IR to optical polarimetry for PKS 1136-135 and gamma-ray observations for 3C 273 and PKS 0637-752, ruling out the IC/CMB model. Our work constrains the jet Lorentz factors to less than ~a few, and leaves as the only reasonable alternative synchrotron emission from ~100 TeV jet electrons, accelerated hundreds of kpc away from the central engine. This refutes over a decade of work on the jet X-ray emission mechanism and overall energetics and, if confirmed in more sources, will constitute a paradigm shift in our understanding of powerful large-scale jets and their role in the universe. Two important findings emerging from our work will also be discussed: (i) the solid-angle-integrated luminosity of the large-scale jet is comparable to that of the jet core, contrary to the current belief that the core is the dominant jet radiative outlet, and (ii) the large-scale jets are the main source of TeV photons in the universe, something potentially important, as TeV photons have been suggested to heat up the intergalactic medium and reduce the number of dwarf galaxies formed.
NASA Astrophysics Data System (ADS)
Millstein, D.; Brown, N. J.; Zhai, P.; Menon, S.
2012-12-01
We use the WRF/Chem model (Weather Research and Forecasting model with chemistry) and pollutant emissions based on the EPA National Emission Inventories from 2005 and 2008 to model regional climate and air quality over the continental United States. Additionally, 2030 emission scenarios are developed to investigate the effects of future enhancements to solar power generation. Modeling covered 6 summer and 6 winter weeks each year. We model feedback between aerosols and meteorology and thus capture direct and indirect aerosol effects. The grid resolution is 25 km and includes no nesting. Between 2005 and 2008 significant emission reductions were reported in the National Emission Inventory. The 2008 weekday emissions over the continental U.S. of SO2 and NO were reduced from 2005 values by 28% and 16%, respectively. Emission reductions of this magnitude are similar in scale to the potential emission reductions from various energy policy initiatives. By evaluating modeled and observed air quality changes from 2005 to 2008, we analyze how well the model represents the effects of historical emission changes. We also gain insight into how well the model might predict the effects of future emission changes. In addition to direct comparisons of model outputs to ground and satellite observations, we compare observed differences between 2005 and 2008 to corresponding modeled differences. Modeling was extended to future scenarios (2030) to simulate air quality and regional climate effects of large-scale adoption of solar power. The year 2030 was selected to allow time for development of solar generation infrastructure. The 2030 emission scenario was scaled, with separate factors for different economic sectors, from the 2008 National Emissions Inventory.
The changes to emissions caused by the introduction of large-scale solar power (here assumed to be 10% of total energy generation) are based on results from a parallel project that used an electricity grid model applied over multiple regions across the country. The regional climate and air quality effects of future large-scale solar power adoption are analyzed in the context of uncertainty quantified by the dynamic evaluation of the historical (2005 and 2008) WRF/Chem simulations.
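The sector-scaled scenario construction described above can be sketched as follows. The sector names, pollutant species, and scaling factors below are illustrative assumptions, not values from the study; in the study itself the power-sector change comes from the parallel electricity grid model.

```python
# Sketch of building a future emission scenario by scaling a base-year
# inventory with per-sector factors. Sector names, species, and factors
# are illustrative assumptions, not values from the study.

BASE_2008 = {  # tons/day per sector (illustrative)
    "power_generation": {"SO2": 1000.0, "NOx": 800.0},
    "on_road_mobile": {"SO2": 50.0, "NOx": 1200.0},
}

# Assumed factors: solar penetration trims fossil generation, while
# mobile-source emissions follow an independent projection.
SCALE_2030 = {"power_generation": 0.90, "on_road_mobile": 0.75}

def scale_inventory(base, factors):
    """Return a new inventory with every species in a sector scaled by
    that sector's factor."""
    return {
        sector: {spec: amount * factors[sector]
                 for spec, amount in species.items()}
        for sector, species in base.items()
    }

scenario_2030 = scale_inventory(BASE_2008, SCALE_2030)
print(scenario_2030["power_generation"]["SO2"])  # 900.0
```

A real inventory would carry many more sectors and species, but the scaling step itself stays this simple: one multiplicative factor per sector applied uniformly to its species.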
The build up of the correlation between halo spin and the large-scale structure
NASA Astrophysics Data System (ADS)
Wang, Peng; Kang, Xi
2018-01-01
Both simulations and observations have confirmed that the spin of haloes/galaxies is correlated with the large-scale structure (LSS) with a mass dependence such that the spin of low-mass haloes/galaxies tend to be parallel with the LSS, while that of massive haloes/galaxies tend to be perpendicular with the LSS. It is still unclear how this mass dependence is built up over time. We use N-body simulations to trace the evolution of the halo spin-LSS correlation and find that at early times the spin of all halo progenitors is parallel with the LSS. As time goes on, mass collapsing around massive halo is more isotropic, especially the recent mass accretion along the slowest collapsing direction is significant and it brings the halo spin to be perpendicular with the LSS. Adopting the fractional anisotropy (FA) parameter to describe the degree of anisotropy of the large-scale environment, we find that the spin-LSS correlation is a strong function of the environment such that a higher FA (more anisotropic environment) leads to an aligned signal, and a lower anisotropy leads to a misaligned signal. In general, our results show that the spin-LSS correlation is a combined consequence of mass flow and halo growth within the cosmic web. Our predicted environmental dependence between spin and large-scale structure can be further tested using galaxy surveys.
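The abstract does not give the formula behind the fractional anisotropy (FA) parameter it adopts. One standard normalization, defined from the eigenvalues λ1, λ2, λ3 of the local deformation tensor, is the following; the paper's exact convention may differ, so this is stated as an assumption:

```latex
% Standard fractional-anisotropy normalization (an assumption here),
% with \bar\lambda = (\lambda_1+\lambda_2+\lambda_3)/3:
\mathrm{FA} = \sqrt{\frac{3}{2}}\,
  \sqrt{\frac{(\lambda_1-\bar\lambda)^2 + (\lambda_2-\bar\lambda)^2
              + (\lambda_3-\bar\lambda)^2}
             {\lambda_1^2 + \lambda_2^2 + \lambda_3^2}}
```

Under this definition FA vanishes for fully isotropic collapse (λ1 = λ2 = λ3) and approaches 1 in maximally anisotropic, filament- or sheet-like environments, consistent with the abstract's use of higher FA for a more anisotropic environment.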
Pevnick, Joshua M.; Fuller, Garth; Duncan, Ray; Spiegel, Brennan M. R.
2016-01-01
Background: Personal fitness trackers (PFT) have substantial potential to improve healthcare. Objective: To quantify and characterize early adopters who shared their PFT data with providers. Methods: We used bivariate statistics and logistic regression to compare patients who shared any PFT data vs. patients who did not. Results: A patient portal was used to invite 79,953 registered portal users to share their data. Of 66,105 users included in our analysis, 499 (0.8%) uploaded data during an initial 37-day study period. Bivariate and regression analysis showed that early adopters were more likely than non-adopters to be younger, male, white, health system employees, and to have higher BMIs. Neither comorbidities nor utilization predicted adoption. Conclusion: Our results demonstrate that patients had little intrinsic desire to share PFT data with their providers, and suggest that patients most at risk for poor health outcomes are least likely to share PFT data. Marketing, incentives, and/or cultural change may be needed to induce such data-sharing. PMID:27846287
An efficient and reliable predictive method for fluidized bed simulation
Lu, Liqiang; Benyahia, Sofiane; Li, Tingwen
2017-06-13
In past decades, the continuum approach was the only practical technique to simulate large-scale fluidized bed reactors because discrete approaches suffer from the cost of tracking huge numbers of particles and their collisions. This study significantly improved the computation speed of discrete particle methods in two steps: First, the time-driven hard-sphere (TDHS) algorithm with a larger time-step is proposed allowing a speedup of 20-60 times; second, the number of tracked particles is reduced by adopting the coarse-graining technique gaining an additional 2-3 orders of magnitude speedup of the simulations. A new velocity correction term was introduced and validated in TDHS to solve the over-packing issue in dense granular flow. The TDHS was then coupled with the coarse-graining technique to simulate a pilot-scale riser. The simulation results compared well with experiment data and proved that this new approach can be used for efficient and reliable simulations of large-scale fluidized bed systems.
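The coarse-graining step above can be sketched minimally as follows; the parcel weight and particle size are illustrative assumptions, not the paper's settings. Each tracked parcel stands in for w real particles, and parcel mass is conserved by enlarging the diameter by w^(1/3):

```python
# Illustrative coarse-graining sketch (parameters are assumptions, not
# the paper's): each parcel represents w real particles, so the tracked
# count drops by a factor w while total mass is conserved because the
# parcel volume grows w-fold, i.e. d_cg = d * w**(1/3).

def coarse_grain(n_particles, diameter, w):
    """Return (parcel count, parcel diameter) for statistical weight w."""
    n_parcels = n_particles // w
    d_cg = diameter * w ** (1.0 / 3.0)
    return n_parcels, d_cg

# 1e8 particles of 100 um, grouped 1000 real particles per parcel:
n_cg, d_cg = coarse_grain(100_000_000, 100e-6, 1000)
print(n_cg)               # 100000 parcels to track instead of 1e8
print(round(d_cg * 1e6))  # 1000 (um): diameter grows by 10x
```

This is where the quoted 2-3 orders of magnitude of speedup comes from: collision detection cost scales with the number of tracked bodies, which drops by the weight w.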
Pirotte, Geert; Kesters, Jurgen; Verstappen, Pieter; Govaerts, Sanne; Manca, Jean; Lutsen, Laurence; Vanderzande, Dirk; Maes, Wouter
2015-10-12
Organic photovoltaics (OPV) have attracted great interest as a solar cell technology with appealing mechanical, aesthetical, and economies-of-scale features. To drive OPV toward economic viability, low-cost, large-scale module production has to be realized in combination with increased top-quality material availability and minimal batch-to-batch variation. To this extent, continuous flow chemistry can serve as a powerful tool. In this contribution, a flow protocol is optimized for the high performance benzodithiophene-thienopyrroledione copolymer PBDTTPD and the material quality is probed through systematic solar-cell evaluation. A stepwise approach is adopted to turn the batch process into a reproducible and scalable continuous flow procedure. Solar cell devices fabricated using the obtained polymer batches deliver an average power conversion efficiency of 7.2 %. Upon incorporation of an ionic polythiophene-based cathodic interlayer, the photovoltaic performance could be enhanced to a maximum efficiency of 9.1 %. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
International Halley Watch: Discipline specialists for large scale phenomena
NASA Technical Reports Server (NTRS)
Brandt, J. C.; Niedner, M. B., Jr.
1986-01-01
The largest-scale structures of comets, their tails, are extremely interesting from a physical point of view, and some of their properties are among the most spectacular displayed by comets. Because the tail(s) is an important component of a comet, the Large-Scale Phenomena (L-SP) Discipline was created as one of eight different observational methods in which Halley data would be encouraged and collected from all around the world under the auspices of the International Halley Watch (IHW). The L-SP Discipline Specialist (DS) Team resides at NASA/Goddard Space Flight Center under the leadership of John C. Brandt, Malcolm B. Niedner, and their team of image-processing and computer specialists; Jürgen Rahe at NASA Headquarters completes the formal DS science staff. The team has adopted the study of disconnection events (DEs) as its principal science target, and it is because of the rapid changes which occur in connection with DEs that such extensive global coverage was deemed necessary to assemble a complete record.
An efficient and reliable predictive method for fluidized bed simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Liqiang; Benyahia, Sofiane; Li, Tingwen
2017-06-29
In past decades, the continuum approach was the only practical technique to simulate large-scale fluidized bed reactors because discrete approaches suffer from the cost of tracking huge numbers of particles and their collisions. This study significantly improved the computation speed of discrete particle methods in two steps: First, the time-driven hard-sphere (TDHS) algorithm with a larger time-step is proposed allowing a speedup of 20-60 times; second, the number of tracked particles is reduced by adopting the coarse-graining technique gaining an additional 2-3 orders of magnitude speedup of the simulations. A new velocity correction term was introduced and validated in TDHS to solve the over-packing issue in dense granular flow. The TDHS was then coupled with the coarse-graining technique to simulate a pilot-scale riser. The simulation results compared well with experiment data and proved that this new approach can be used for efficient and reliable simulations of large-scale fluidized bed systems.
NASA Astrophysics Data System (ADS)
Okamoto, Taro; Takenaka, Hiroshi; Nakamura, Takeshi; Aoki, Takayuki
2010-12-01
We adopted the GPU (graphics processing unit) to accelerate the large-scale finite-difference simulation of seismic wave propagation. The simulation can benefit from the high memory bandwidth of the GPU because it is a "memory intensive" problem. In a single-GPU case we achieved a performance of about 56 GFlops, which was about 45-fold faster than that achieved by a single core of the host central processing unit (CPU). We confirmed that the optimized use of fast shared memory and registers was essential for performance. In the multi-GPU case with three-dimensional domain decomposition, the non-contiguous memory alignment in the ghost zones was found to impose a long data-transfer time between the GPU and the host node. This problem was solved by using contiguous memory buffers for ghost zones. We achieved a performance of about 2.2 TFlops by using 120 GPUs and 330 GB of total memory: nearly (or more than) 2200 cores of host CPUs would be required to achieve the same performance. The weak scaling was nearly proportional to the number of GPUs. We therefore conclude that GPU computing for large-scale simulation of seismic wave propagation is a promising approach as a faster simulation is possible with reduced computational resources compared to CPUs.
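The fix described above, staging non-contiguous ghost-zone data in contiguous buffers before the host-GPU copy, can be illustrated with a small pure-Python sketch. The array layout and helper names are hypothetical; a real implementation would pack into pinned buffers and issue a single cudaMemcpy per face instead of many strided copies.

```python
# Illustrative sketch (not the authors' code): gather a non-contiguous
# ghost-zone face of a 3D field into one contiguous buffer, so a single
# bulk copy replaces many small strided transfers.

def pack_x_face(field, ix):
    """Gather the x = ix face of field[nx][ny][nz] into a flat list."""
    return [field[ix][j][k]
            for j in range(len(field[0]))
            for k in range(len(field[0][0]))]

def unpack_x_face(field, ix, buf):
    """Scatter a flat buffer back into the x = ix face."""
    ny, nz = len(field[0]), len(field[0][0])
    for j in range(ny):
        for k in range(nz):
            field[ix][j][k] = buf[j * nz + k]

# Tiny 3x2x2 field whose values encode their (i, j, k) indices.
nx, ny, nz = 3, 2, 2
field = [[[100 * i + 10 * j + k for k in range(nz)]
          for j in range(ny)] for i in range(nx)]

buf = pack_x_face(field, 2)  # one contiguous message for the whole face
print(buf)  # [200, 201, 210, 211]
```

The same pack/unpack pair applies on the receiving side of the domain decomposition: the neighbor unpacks the buffer into its own ghost layer before the next time step.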
NASA Astrophysics Data System (ADS)
Sheng, Jie; Zhu, Qiaoming; Cao, Shijie; You, Yang
2017-05-01
This paper studies the relationship between the photovoltaic power generation of a large-scale “fishing and PV complementary” grid-tied photovoltaic system and meteorological parameters, using multi-time-scale power data from the photovoltaic power station and meteorological data over the same period of a whole year. The results indicate that PV power generation correlates most strongly with global solar irradiation, followed by diurnal temperature range, sunshine hours, daily maximum temperature and daily average temperature. Across months, the maximum monthly average power generation appears in August, which is related to the greater global solar irradiation and longer sunshine hours in that month. However, the maximum daily average power generation appears in October, because the drop in temperature improves the efficiency of the PV panels. A comparison of the monthly average performance ratio (PR) with the monthly average temperature shows that the larger values of monthly average PR appear in April and October, while PR is smaller in summer when temperatures are higher. The results indicate that temperature has a great influence on the performance ratio of a large-scale grid-tied PV power system, and that it is important to adopt effective measures to lower the operating temperature of the PV plant.
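The performance ratio discussed above is conventionally the plant's actual yield divided by the yield a loss-free plant of the same rated capacity would deliver for the same irradiation. A minimal sketch, with illustrative plant numbers rather than the station's data:

```python
# Hedged sketch of the monthly performance ratio (PR). The 1 kW/m^2
# reference irradiance follows the usual convention; the plant size and
# monthly figures below are illustrative, not from the station.

G_STC = 1.0  # reference irradiance, kW/m^2

def performance_ratio(energy_kwh, rated_kw, irradiation_kwh_m2):
    """PR = actual yield / reference yield for the same irradiation."""
    reference_yield = rated_kw * irradiation_kwh_m2 / G_STC
    return energy_kwh / reference_yield

# A 10 MW plant producing 1.32 GWh in a month with 165 kWh/m^2 of
# in-plane irradiation:
pr = performance_ratio(1_320_000, 10_000, 165.0)
print(round(pr, 2))  # 0.8
```

Because irradiation is divided out, PR isolates loss mechanisms such as module temperature, which is why the abstract can compare PR across months with very different irradiation.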
NASA Astrophysics Data System (ADS)
Zhong, Hua; Zhang, Song; Hu, Jian; Sun, Minhong
2017-12-01
This paper deals with the imaging problem for one-stationary bistatic synthetic aperture radar (BiSAR) with high-squint, large-baseline configuration. In this bistatic configuration, accurate focusing of BiSAR data is a difficult issue due to the relatively large range cell migration (RCM), severe range-azimuth coupling, and inherent azimuth-geometric variance. To circumvent these issues, an enhanced azimuth nonlinear chirp scaling (NLCS) algorithm based on an ellipse model is investigated in this paper. In the range processing, a method combining deramp operation and keystone transform (KT) is adopted to remove linear RCM completely and mitigate range-azimuth cross-coupling. In the azimuth focusing, an ellipse model is established to analyze and depict the characteristic of azimuth-variant Doppler phase. Based on the new model, an enhanced azimuth NLCS algorithm is derived to focus one-stationary BiSAR data. Simulating results exhibited at the end of this paper validate the effectiveness of the proposed algorithm.
Blueprints for green biotech: development and application of standards for plant synthetic biology.
Patron, Nicola J
2016-06-15
Synthetic biology aims to apply engineering principles to the design and modification of biological systems and to the construction of biological parts and devices. The ability to programme cells by providing new instructions written in DNA is a foundational technology of the field. Large-scale de novo DNA synthesis has accelerated synthetic biology by offering custom-made molecules at ever decreasing costs. However, for large fragments and for experiments in which libraries of DNA sequences are assembled in different combinations, assembly in the laboratory is still desirable. Biological assembly standards allow DNA parts, even those from multiple laboratories and experiments, to be assembled together using the same reagents and protocols. The adoption of such standards for plant synthetic biology has been cohesive for the plant science community, facilitating the application of genome editing technologies to plant systems and streamlining progress in large-scale, multi-laboratory bioengineering projects. © 2016 The Author(s). published by Portland Press Limited on behalf of the Biochemical Society.
Advances in DNA sequencing technologies for high resolution HLA typing.
Cereb, Nezih; Kim, Hwa Ran; Ryu, Jaejun; Yang, Soo Young
2015-12-01
This communication describes our experience in large-scale G group-level high resolution HLA typing using three different DNA sequencing platforms - ABI 3730 xl, Illumina MiSeq and PacBio RS II. Recent advances in DNA sequencing technologies, so-called next generation sequencing (NGS), have brought breakthroughs in deciphering the genetic information in all living species at a large scale and at an affordable level. The NGS DNA indexing system allows sequencing multiple genes for large number of individuals in a single run. Our laboratory has adopted and used these technologies for HLA molecular testing services. We found that each sequencing technology has its own strengths and weaknesses, and their sequencing performances complement each other. HLA genes are highly complex and genotyping them is quite challenging. Using these three sequencing platforms, we were able to meet all requirements for G group-level high resolution and high volume HLA typing. Copyright © 2015 American Society for Histocompatibility and Immunogenetics. Published by Elsevier Inc. All rights reserved.
Adoption of dental innovations
Ramoni, Rachel B.; Etolue, Jini; Tokede, Oluwabunmi; McClellan, Lyle; Simmons, Kristen; Yansane, Alfa; White, Joel M.; Walji, Muhammad F.; Kalenderian, Elsbeth
2017-01-01
Background: Standardized dental diagnostic terminologies (SDDxTs) were introduced decades ago. Their use has been on the rise, accompanying the adoption of electronic health records (EHRs). One of the most broadly used terminologies is the Dental Diagnostic System (DDS). Our aim was to assess the adoption of SDDxTs by US dental schools by using the Rogers diffusion of innovations framework, focusing on the DDS. Methods: The authors electronically surveyed clinic deans in all US dental schools (n = 61) to determine use of an EHR and SDDxT, perceived barriers to adoption of an SDDxT, and the effect of implementing an SDDxT on clinical productivity. Results: The response rate was 57%. Of the 35 responses, 91% reported using an EHR to document patient care, with 84% using axiUm, and 69% reported using an SDDxT to document patient diagnoses; 41% used the DDS. Fifty-four percent of those who did not use an SDDxT had considered adopting the DDS, but 39% had not, citing barriers such as complexity and compatibility. Conclusions: Adoption of an SDDxT, particularly the DDS, is on the rise. Nevertheless, a large number of institutions are in the Rogers late majority and laggards categories with respect to adoption. Several factors may discourage adoption, including the inability to try out the terminology on a small scale, poor usability within the EHR, the fact that it would be a cultural shift in practice, and a perception of unclear benefits. However, the consolidation of the DDS and American Dental Association terminology efforts stands to encourage adoption. PMID:28364948
Bioprocessing Data for the Production of Marine Enzymes
Sarkar, Sreyashi; Pramanik, Arnab; Mitra, Anindita; Mukherjee, Joydeep
2010-01-01
This review is a synopsis of different bioprocess engineering approaches adopted for the production of marine enzymes. Three major modes of operation (batch, fed-batch and continuous) have been used for production of enzymes (such as protease, chitinase, agarase, peroxidase) mainly from marine bacteria and fungi at laboratory bioreactor and pilot plant scales. Submerged, immobilized and solid-state processes in batch mode were widely employed. The fed-batch process was also applied in several bioprocesses. Continuous processes with suspended cells as well as with immobilized cells have been used. Investigations in shake flasks were conducted with the prospect of large-scale processing in reactors. PMID:20479981
Parsec-Scale Obscuring Accretion Disk with Large-Scale Magnetic Field in AGNs
NASA Technical Reports Server (NTRS)
Dorodnitsyn, A.; Kallman, T.
2017-01-01
A magnetic field dragged from the galactic disk, along with inflowing gas, can provide vertical support to the geometrically and optically thick pc (parsec)-scale torus in AGNs (Active Galactic Nuclei). Using the Soloviev solution initially developed for Tokamaks, we derive an analytical model for a rotating torus that is supported and confined by a magnetic field. We further perform three-dimensional magneto-hydrodynamic simulations of X-ray irradiated, pc-scale, magnetized tori. We follow the time evolution and compare models that adopt initial conditions derived from our analytic model with simulations in which the initial magnetic flux is entirely contained within the gas torus. Numerical simulations demonstrate that the initial conditions based on the analytic solution produce a longer-lived torus that produces obscuration that is generally consistent with observed constraints.
Parsec-scale Obscuring Accretion Disk with Large-scale Magnetic Field in AGNs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dorodnitsyn, A.; Kallman, T.
A magnetic field dragged from the galactic disk, along with inflowing gas, can provide vertical support to the geometrically and optically thick pc-scale torus in AGNs. Using the Soloviev solution initially developed for Tokamaks, we derive an analytical model for a rotating torus that is supported and confined by a magnetic field. We further perform three-dimensional magneto-hydrodynamic simulations of X-ray irradiated, pc-scale, magnetized tori. We follow the time evolution and compare models that adopt initial conditions derived from our analytic model with simulations in which the initial magnetic flux is entirely contained within the gas torus. Numerical simulations demonstrate that the initial conditions based on the analytic solution produce a longer-lived torus that produces obscuration that is generally consistent with observed constraints.
A large-scale clinical validation of an integrated monitoring system in the emergency department.
Clifton, David A; Wong, David; Clifton, Lei; Wilson, Sarah; Way, Rob; Pullinger, Richard; Tarassenko, Lionel
2013-07-01
We consider an integrated patient monitoring system, combining electronic patient records with high-rate acquisition of patient physiological data. There remain many challenges in increasing the robustness of "e-health" applications to a level at which they are clinically useful, particularly in the use of automated algorithms used to detect and cope with artifact in data contained within the electronic patient record, and in analyzing and communicating the resultant data for reporting to clinicians. There is a consequential "plague of pilots," in which engineering prototype systems do not enter into clinical use. This paper describes an approach in which, for the first time, the Emergency Department (ED) of a major research hospital has adopted such systems for use during a large clinical trial. We describe the disadvantages of existing evaluation metrics when applied to such large trials, and propose a solution suitable for large-scale validation. We demonstrate that machine learning technologies embedded within healthcare information systems can provide clinical benefit, with the potential to improve patient outcomes in the busy environment of a major ED and other high-dependence areas of patient care.
The cosmological principle is not in the sky
NASA Astrophysics Data System (ADS)
Park, Chan-Gyung; Hyun, Hwasu; Noh, Hyerim; Hwang, Jai-chan
2017-08-01
The homogeneity of matter distribution at large scales, known as the cosmological principle, is a central assumption in the standard cosmological model. This assumption is testable, however, and thus need no longer be taken as a principle. Here we perform a test for spatial homogeneity using the Sloan Digital Sky Survey Luminous Red Galaxies (LRG) sample by counting galaxies within a specified volume with the radius scale varying up to 300 h-1 Mpc. We directly confront the large-scale structure data with the definition of spatial homogeneity by comparing the averages and dispersions of galaxy number counts with the ranges allowed for a homogeneous random distribution. The LRG sample shows significantly larger dispersions of number counts than the random catalogues up to the 300 h-1 Mpc scale, and even the average lies far outside the range allowed in the random distribution; the deviations are statistically impossible to realize in the random distribution. This implies that the cosmological principle does not hold even at such large scales. The same analysis of mock galaxies derived from an N-body simulation, however, suggests that the LRG sample is consistent with the current paradigm of cosmology; thus the simulation is also not homogeneous on that scale. We conclude that the cosmological principle is neither in the observed sky nor demanded to be there by the standard cosmological world model. This reveals the nature of the cosmological principle adopted in the modern cosmology paradigm, and opens a new field of research in theoretical cosmology.
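The counts-in-cells comparison described above can be sketched on synthetic data. These are purely illustrative mocks, not the LRG sample: for a homogeneous Poisson-like process the variance of counts equals the mean, so variance well above the mean signals clustering.

```python
# Illustrative counts-in-cells dispersion test on synthetic mocks (not
# the LRG data): compare the variance/mean ratio of galaxy counts in
# equal volumes against the Poisson expectation of ~1.
import random

random.seed(1)

def count_dispersion(counts):
    """Return (mean, variance) of counts over equal-volume cells."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / n
    return mean, var

# Homogeneous mock: ~Binomial(10000, 0.01) counts per cell, so the
# variance stays close to the mean.
homogeneous = [sum(random.random() < 0.01 for _ in range(10_000))
               for _ in range(200)]
# Clustered mock: the same counts doubled in half of the cells, which
# inflates the dispersion far beyond the mean.
clustered = [c * 2 if i % 2 else c for i, c in enumerate(homogeneous)]

m_h, v_h = count_dispersion(homogeneous)
m_c, v_c = count_dispersion(clustered)
# The homogeneous ratio stays near 1; the clustered ratio is far larger.
print(v_h / m_h < 1.5 < v_c / m_c)
```

The paper's test is the same comparison in spirit, with spheres of growing radius in the survey volume and random catalogues providing the homogeneous reference.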
Pressing needs of biomedical text mining in biocuration and beyond: opportunities and challenges.
Singhal, Ayush; Leaman, Robert; Catlett, Natalie; Lemberger, Thomas; McEntyre, Johanna; Polson, Shawn; Xenarios, Ioannis; Arighi, Cecilia; Lu, Zhiyong
2016-01-01
Text mining in the biomedical sciences is rapidly transitioning from small-scale evaluation to large-scale application. In this article, we argue that text-mining technologies have become essential tools in real-world biomedical research. We describe four large scale applications of text mining, as showcased during a recent panel discussion at the BioCreative V Challenge Workshop. We draw on these applications as case studies to characterize common requirements for successfully applying text-mining techniques to practical biocuration needs. We note that system 'accuracy' remains a challenge and identify several additional common difficulties and potential research directions including (i) the 'scalability' issue due to the increasing need of mining information from millions of full-text articles, (ii) the 'interoperability' issue of integrating various text-mining systems into existing curation workflows and (iii) the 'reusability' issue on the difficulty of applying trained systems to text genres that are not seen previously during development. We then describe related efforts within the text-mining community, with a special focus on the BioCreative series of challenge workshops. We believe that focusing on the near-term challenges identified in this work will amplify the opportunities afforded by the continued adoption of text-mining tools. Finally, in order to sustain the curation ecosystem and have text-mining systems adopted for practical benefits, we call for increased collaboration between text-mining researchers and various stakeholders, including researchers, publishers and biocurators. Published by Oxford University Press 2016. This work is written by US Government employees and is in the public domain in the US.
NASA Astrophysics Data System (ADS)
Gu, Xiaodan; Zhou, Yan; Gu, Kevin; Kurosawa, Tadanori; Yan, Hongping; Wang, Cheng; Toney, Michael; Bao, Zhenan
The challenge of continuous printing in high efficiency large-area organic solar cells is a key limiting factor for their widespread adoption. We present a materials design concept for achieving large-area, solution coated all-polymer bulk heterojunction (BHJ) solar cells with stable phase separation morphology between the donor and acceptor. The key concept lies in inhibiting strong crystallization of donor and acceptor polymers, thus forming intermixed, low crystallinity and mostly amorphous blends. Based on experiments using donors and acceptors with different degree of crystallinity, our results showed that microphase separated donor and acceptor domain sizes are inversely proportional to the crystallinity of the conjugated polymers. This methodology of using low crystallinity donors and acceptors has the added benefit of forming a consistent and robust morphology that is insensitive to different processing conditions, allowing one to easily scale up the printing process from a small scale solution shearing coater to a large-scale continuous roll-to-roll (R2R) printer. We were able to continuously roll-to-roll slot die print large area all-polymer solar cells with power conversion efficiencies of 5%, with combined cell area up to 10 cm2. This is among the highest efficiencies realized with R2R coated active layer organic materials on flexible substrate. DOE BRIDGE sunshot program. Office of Naval Research.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gu, Xiaodan; Zhou, Yan; Gu, Kevin
The challenge of continuous printing in high-efficiency large-area organic solar cells is a key limiting factor for their widespread adoption. We present a materials design concept for achieving large-area, solution-coated all-polymer bulk heterojunction solar cells with stable phase separation morphology between the donor and acceptor. The key concept lies in inhibiting strong crystallization of donor and acceptor polymers, thus forming intermixed, low crystallinity, and mostly amorphous blends. Based on experiments using donors and acceptors with different degrees of crystallinity, the results show that microphase separated donor and acceptor domain sizes are inversely proportional to the crystallinity of the conjugated polymers. This particular methodology of using low crystallinity donors and acceptors has the added benefit of forming a consistent and robust morphology that is insensitive to different processing conditions, allowing one to easily scale up the printing process from a small-scale solution shearing coater to a large-scale continuous roll-to-roll (R2R) printer. Large-area all-polymer solar cells are continuously roll-to-roll slot die printed with power conversion efficiencies of 5%, with combined cell area up to 10 cm2. This is among the highest efficiencies realized with R2R-coated active layer organic materials on flexible substrate.
Gu, Xiaodan; Zhou, Yan; Gu, Kevin; ...
2017-03-07
Zhao, Yongli; He, Ruiying; Chen, Haoran; Zhang, Jie; Ji, Yuefeng; Zheng, Haomian; Lin, Yi; Wang, Xinbo
2014-04-21
Software defined networking (SDN) has become a focus in the current information and communication technology area because of its flexibility and programmability. It has been introduced into various network scenarios, such as datacenter networks, carrier networks, and wireless networks. The optical transport network is also regarded as an important application scenario for SDN, where SDN is adopted as the enabling technology of the data communication network (DCN) instead of generalized multi-protocol label switching (GMPLS). However, the practical performance of SDN-based DCN for large-scale optical networks, which is very important for technology selection in future optical network deployment, has not yet been evaluated. In this paper we have built a large-scale flexi-grid optical network testbed with 1000 virtual optical transport nodes to evaluate the performance of SDN-based DCN, including network scalability, DCN bandwidth limitation, and restoration time. A series of network performance parameters, including blocking probability, bandwidth utilization, average lightpath provisioning time, and failure restoration time, have been demonstrated under various network environments, such as different traffic loads and different DCN bandwidths. The demonstration in this work can be taken as a proof of concept for future network deployment.
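The blocking probability measured on such a testbed is an empirical quantity, but its qualitative dependence on traffic load can be illustrated with the classical Erlang-B loss formula for a single link. This is a textbook stand-in, not the evaluation method used in the paper:

```python
def erlang_b(offered_load, channels):
    """Erlang-B blocking probability for a link with a fixed number of
    channels (an illustrative stand-in for one wavelength-limited fibre
    link; real testbeds measure blocking empirically).

    Uses the numerically stable recursion B(0) = 1,
    B(n) = A*B(n-1) / (n + A*B(n-1)), avoiding large factorials."""
    b = 1.0
    for n in range(1, channels + 1):
        b = offered_load * b / (n + offered_load * b)
    return b

# Blocking grows with offered load and shrinks with added capacity.
blocking_small = erlang_b(2.0, 2)    # 2 Erlangs on 2 channels
blocking_large = erlang_b(2.0, 10)   # same load, 10 channels
```

The recursion makes the load/capacity trade-off explored in the testbed (traffic load versus blocking probability) easy to reason about analytically.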
Adoption of telemedicine: from pilot stage to routine delivery
2012-01-01
Background Today there is much debate about why telemedicine has stalled. Teleradiology is the only widespread telemedicine application. Other telemedicine applications appear to be promising candidates for widespread use, but they remain in the early adoption stage. The objective of this debate paper is to achieve a better understanding of the adoption of telemedicine, to assist those trying to move applications from pilot stage to routine delivery. Discussion We have investigated the reasons why telemedicine has stalled by focusing on two high-level topics: 1) the process of adoption of telemedicine in comparison with other technologies; and 2) the factors involved in the widespread adoption of telemedicine. For each topic, we have formulated hypotheses. First, the advantages for users are the crucial determinant of the speed of adoption of technology in healthcare. Second, the adoption of telemedicine is similar to that of other health technologies and follows an S-shaped logistic growth curve. Third, evidence of cost-effectiveness is a necessary but not sufficient condition for the widespread adoption of telemedicine. Fourth, personal incentives for the health professionals involved in service provision are needed before the widespread adoption of telemedicine will occur. Summary The widespread adoption of telemedicine is a major -- and still underdeveloped -- challenge that needs to be strengthened through new research directions. We have formulated four hypotheses, which are all susceptible to experimental verification. In particular, we believe that data about the adoption of telemedicine should be collected from applications implemented on a large scale, to test the assumption that the adoption of telemedicine follows an S-shaped growth curve. This will lead to a better understanding of the process, which will in turn accelerate the adoption of new telemedicine applications in future.
Research is also required to identify suitable financial and professional incentives for potential telemedicine users and understand their importance for widespread adoption. PMID:22217121
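The S-shaped logistic growth curve hypothesized above has a standard closed form. The sketch below uses illustrative parameter values (not estimates from any telemedicine data) to show how cumulative adoption rises slowly, accelerates at the inflection point, and saturates:

```python
import math

def logistic_adoption(t, k=1.0, t_mid=5.0, saturation=100.0):
    """Cumulative adopters at time t on an S-shaped logistic curve.

    saturation: eventual ceiling of adopters; k: growth rate;
    t_mid: inflection point, where adoption reaches half of saturation.
    All parameter values here are illustrative.
    """
    return saturation / (1.0 + math.exp(-k * (t - t_mid)))

# Strictly increasing S-curve over t = 0..10
curve = [logistic_adoption(t) for t in range(11)]
```

Fitting such a curve to large-scale adoption data, as the authors propose, would amount to estimating k, t_mid, and the saturation level from observed uptake.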
Shock wave propagation in layered planetary embryos
NASA Astrophysics Data System (ADS)
Arkani-Hamed, Jafar; Ivanov, Boris A.
2014-05-01
The propagation of impact-induced shock wave inside a planetary embryo is investigated using the Hugoniot equations and a new scaling law, governing the particle velocity variations along a shock ray inside a spherical body. The scaling law is adopted to determine the impact heating of a growing embryo in its early stage when it is an undifferentiated and uniform body. The new scaling law, similar to other existing scaling laws, is not suitable for a large differentiated embryo consisting of a silicate mantle overlying an iron core. An algorithm is developed in this study on the basis of the ray theory in a spherically symmetric body which relates the shock parameters at the top of the core to those at the base of the mantle, thus enabling the adoption of scaling laws to estimate the impact heating of both the mantle and the core. The algorithm is applied to two embryo models: a simple two-layered model with a uniform mantle overlying a uniform core, and a model where the pre-shock density and acoustic velocity of the embryo are radially dependent. The former illustrates details of the particle velocity, shock pressure, and temperature increase behind the shock front in a 2D axisymmetric geometry. The latter provides a means to compare the results with those obtained by a hydrocode simulation. The agreement between the results of the two techniques in revealing the effects of the core-mantle boundary on the shock wave transmission across the boundary is encouraging.
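The Hugoniot equations referred to above relate conditions across the shock front. A minimal sketch, using the standard linear shock-speed relation U = c0 + s*u_p together with the Rankine-Hugoniot mass and momentum jump conditions; the material constants are mantle-like values chosen purely for illustration, not the paper's:

```python
def hugoniot_state(u_p, rho0=3300.0, c0=5000.0, s=1.4):
    """Shock state behind the front from the particle velocity u_p (m/s).

    rho0: pre-shock density (kg/m^3), c0: bulk sound speed (m/s),
    s: slope of the linear U-u_p Hugoniot -- all illustrative values.
    Assumes negligible pre-shock pressure."""
    U = c0 + s * u_p              # shock-front speed
    p = rho0 * U * u_p            # pressure jump (Pa), momentum conservation
    rho1 = rho0 * U / (U - u_p)   # compressed density, mass conservation
    return U, p, rho1

# A 1 km/s particle velocity gives roughly a 21 GPa shock for these constants.
U, p, rho1 = hugoniot_state(1000.0)
```

The scaling law described in the abstract governs how u_p decays along a shock ray; given u_p at any depth, relations like these recover the local shock pressure and compression.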
Forecasting success via early adoptions analysis: A data-driven study
Milli, Letizia; Giannotti, Fosca; Pedreschi, Dino
2017-01-01
Innovations are continuously launched over markets, such as new products over the retail market or new artists over the music scene. Some innovations become a success; others don’t. Forecasting which innovations will succeed at the beginning of their lifecycle is hard. In this paper, we provide a data-driven, large-scale account of the existence of a special niche among early adopters, individuals that consistently tend to adopt successful innovations before they reach success: we will call them Hit-Savvy. Hit-Savvy can be discovered in very different markets and retain over time their ability to anticipate the success of innovations. As our second contribution, we devise a predictive analytical process, exploiting Hit-Savvy as signals, which achieves high accuracy in the early-stage prediction of successful innovations, far beyond the reach of state-of-the-art time series forecasting models. Indeed, our findings and predictive model can be fruitfully used to support marketing strategies and product placement. PMID:29216255
Forecasting success via early adoptions analysis: A data-driven study.
Rossetti, Giulio; Milli, Letizia; Giannotti, Fosca; Pedreschi, Dino
2017-01-01
Innovations are continuously launched over markets, such as new products over the retail market or new artists over the music scene. Some innovations become a success; others don't. Forecasting which innovations will succeed at the beginning of their lifecycle is hard. In this paper, we provide a data-driven, large-scale account of the existence of a special niche among early adopters, individuals that consistently tend to adopt successful innovations before they reach success: we will call them Hit-Savvy. Hit-Savvy can be discovered in very different markets and retain over time their ability to anticipate the success of innovations. As our second contribution, we devise a predictive analytical process, exploiting Hit-Savvy as signals, which achieves high accuracy in the early-stage prediction of successful innovations, far beyond the reach of state-of-the-art time series forecasting models. Indeed, our findings and predictive model can be fruitfully used to support marketing strategies and product placement.
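A toy version of the Hit-Savvy idea can be expressed as a score per early adopter: the fraction of a user's early adoptions that later became hits. The data structures and scoring below are hypothetical simplifications of the paper's analytical process:

```python
def hit_savvy_scores(adoption_log, hits):
    """Score each user by the fraction of their early adoptions that
    later became hits. A purely illustrative reduction of the paper's
    predictive process.

    adoption_log: {user: set of items adopted before those items peaked}
    hits: set of items that eventually succeeded."""
    scores = {}
    for user, items in adoption_log.items():
        if items:  # skip users with no early adoptions
            scores[user] = len(items & hits) / len(items)
    return scores

log = {"ann": {"a", "b", "c"}, "bob": {"c", "d"}}
scores = hit_savvy_scores(log, hits={"a", "b"})  # ann anticipates hits, bob does not
```

In the paper's setting, the adoption counts of high-scoring users would then serve as an early signal for forecasting an innovation's eventual success.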
Development of Taiwan College Students' Sense of Life Meaning Scale
ERIC Educational Resources Information Center
Wu, Ho-Tang; Chou, Mei-Ju; Lei, Meng-Shan; Hou, Jing-Fang; Wu, Ming-Hsyang
2015-01-01
The research aims to develop "Sense of Life Meaning Scale" of Taiwan college students. In accordance with the related literature, most Western scholars adopted Frankl's Logotherapy for developing "Sense of Life Meaning Scale", which consists of freedom of will, will to meaning and meaning of life. The research also adopts these…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adamek, Julian; Daverio, David; Durrer, Ruth
We present a new N-body code, gevolution, for the evolution of large-scale structure in the Universe. Our code is based on a weak-field expansion of General Relativity and calculates all six metric degrees of freedom in Poisson gauge. N-body particles are evolved by solving the geodesic equation, which we write in terms of a canonical momentum such that it remains valid also for relativistic particles. We validate the code by considering the Schwarzschild solution and, in the Newtonian limit, by comparing with the Newtonian N-body codes Gadget-2 and RAMSES. We then proceed with a simulation of large-scale structure in a Universe with massive neutrinos, where we study the gravitational slip induced by the neutrino shear stress. The code can be extended to include different kinds of dark energy or modified gravity models, going beyond the usually adopted quasi-static approximation. Our code is publicly available.
NASA Technical Reports Server (NTRS)
Nguyen, D. T.; Watson, Willie R. (Technical Monitor)
2005-01-01
The overall objectives of this research work are to formulate and validate efficient parallel algorithms, and to efficiently design/implement computer software for solving large-scale acoustic problems arising from the unified frameworks of the finite element procedures. The adopted parallel Finite Element (FE) Domain Decomposition (DD) procedures should take full advantage of the multiple processing capabilities offered by most modern high performance computing platforms for efficient parallel computation. To achieve this objective, the formulation needs to integrate efficient sparse (and dense) assembly techniques, hybrid (or mixed) direct and iterative equation solvers, proper preconditioning strategies, unrolling strategies, and effective processor communication schemes. Finally, the numerical performance of the developed parallel finite element procedures will be evaluated by solving a series of structural and acoustic (symmetrical and unsymmetrical) problems on different computing platforms. Comparisons with existing "commercialized" and/or "public domain" software are also included, whenever possible.
NASA Astrophysics Data System (ADS)
Peng, Heng; Liu, Yinghua; Chen, Haofeng
2018-05-01
In this paper, a novel direct method called the stress compensation method (SCM) is proposed for limit and shakedown analysis of large-scale elastoplastic structures. Without needing to solve a specific mathematical programming problem, the SCM is a two-level iterative procedure based on a sequence of linear elastic finite element solutions in which the global stiffness matrix is decomposed only once. In the inner loop, the statically admissible residual stress field for shakedown analysis is constructed. In the outer loop, a series of decreasing load multipliers is updated to approach the shakedown limit multiplier by using an efficient and robust iteration control technique, where the static shakedown theorem is adopted. Three numerical examples with up to about 140,000 finite element nodes confirm the applicability and efficiency of this method for two-dimensional and three-dimensional elastoplastic structures, with detailed discussions on the convergence and accuracy of the proposed algorithm.
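The computational appeal of the SCM is that the global stiffness matrix is factorized once and then reused for every load multiplier in the outer loop. A minimal sketch of that factor-once/solve-many pattern, with a tiny stand-in matrix instead of a real finite element stiffness matrix:

```python
def lu_factor(A):
    """Doolittle LU factorization without pivoting (adequate for the small
    SPD demo matrix below). Performed ONCE, mirroring the single stiffness
    decomposition shared by all SCM iterations."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = 1.0
        for j in range(i, n):
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

def lu_solve(L, U, b):
    """Cheap forward/back substitution reusing the stored factors."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        y[i] = b[i] - sum(L[i][k] * y[k] for k in range(i))
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))) / U[i][i]
    return x

K = [[4.0, 1.0], [1.0, 3.0]]   # stand-in "stiffness matrix"
L, U = lu_factor(K)            # decomposed only once
f = [1.0, 2.0]                 # reference load vector
# Outer loop: re-solve cheaply for a sequence of load multipliers.
solutions = [lu_solve(L, U, [m * fi for fi in f]) for m in (0.5, 1.0, 1.5)]
```

For large models the factorization dominates the cost, so amortizing it over all outer-loop multipliers is what makes direct methods like the SCM competitive.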
Preconditioning strategies for nonlinear conjugate gradient methods, based on quasi-Newton updates
NASA Astrophysics Data System (ADS)
Andrea, Caliciotti; Giovanni, Fasano; Massimo, Roma
2016-10-01
This paper reports two proposals of possible preconditioners for the Nonlinear Conjugate Gradient (NCG) method in large-scale unconstrained optimization. On one hand, the common idea of our preconditioners is inspired by L-BFGS quasi-Newton updates; on the other hand, we aim at explicitly approximating, in some sense, the inverse of the Hessian matrix. Since we deal with large-scale optimization problems, we propose matrix-free approaches where the preconditioners are built using symmetric low-rank updating formulae. Our distinctive new contributions rely on using information on the objective function collected as a by-product of the NCG at previous iterations. Broadly speaking, our first approach exploits the secant equation, in order to impose interpolation conditions on the objective function. In the second proposal we adopt an ad hoc modified-secant approach, in order to possibly guarantee some additional theoretical properties.
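The secant equation underlying such quasi-Newton constructions requires the updated matrix B to map the step s onto the gradient difference y. A small sketch verifying this for a single dense BFGS update; this is a related, simpler construction than the paper's matrix-free low-rank preconditioners:

```python
def bfgs_update(B, s, y):
    """One BFGS update of B (list-of-lists). By construction the updated
    matrix satisfies the secant equation B_new @ s = y, the interpolation
    condition that secant-based preconditioners are built around."""
    n = len(s)
    Bs = [sum(B[i][k] * s[k] for k in range(n)) for i in range(n)]
    sBs = sum(s[i] * Bs[i] for i in range(n))
    ys = sum(y[i] * s[i] for i in range(n))
    return [[B[i][j] - Bs[i] * Bs[j] / sBs + y[i] * y[j] / ys
             for j in range(n)] for i in range(n)]

B0 = [[1.0, 0.0], [0.0, 1.0]]   # start from the identity, as L-BFGS does
s, y = [1.0, 2.0], [3.0, 1.0]   # step and gradient-difference pair (toy data)
B1 = bfgs_update(B0, s, y)      # now B1 @ s == y holds
```

In a matrix-free setting one never forms B1 explicitly; only the (s, y) pairs are stored and the update is applied on the fly, which is what keeps such preconditioners viable at large scale.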
Emergency response to mass casualty incidents in Lebanon.
El Sayed, Mazen J
2013-08-01
The emergency response to mass casualty incidents in Lebanon lacks uniformity. Three recent large-scale incidents have challenged the existing emergency response process and have raised the need to improve and develop incident management for better resilience in times of crisis. We describe some simple emergency management principles that are currently applied in the United States. These principles can be easily adopted by Lebanon and other developing countries to standardize and improve their emergency response systems using existing infrastructure.
Stoves or Sugar? Willingness to Adopt Improved Cookstoves in Malawi
Jagger, Pamela; Jumbe, Charles
2016-01-01
Malawi has set a target of adoption of two million improved cookstoves (ICS) by 2020. Meeting this objective requires knowledge about the determinants of adoption, particularly in rural areas where the costs of traditional cooking technologies and fuels are non-monetary, and where people have limited capacity to purchase an ICS. We conducted a discrete choice experiment with 383 households in rural Malawi, asking them if they would choose a locally made ICS or a package of sugar and salt of roughly equal value. Six months later, we assessed adoption and stove use patterns. Sixty-six percent of households chose the ICS. We find that having a larger share of crop residues in the household fuel supply, awareness of the environmental impacts of woodfuel reliance, time the primary cook devotes to collecting fuelwood, and peer effects at the village level increase the odds of choosing the ICS. Having a large labor supply for fuelwood collection and experience with a non-traditional cooking technology decreased the odds of choosing the ICS. In a rapid assessment six months after the stoves were distributed, we found 80% of households were still using the ICS, but not exclusively. Our findings suggest considerable potential for wide-scale adoption of ICS in Malawi. PMID:27346912
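Odds-based statements like those above typically come from a logit-type discrete choice model, where the exponential of a coefficient is that covariate's odds ratio. The sketch below uses made-up coefficients, not the study's estimates, to show how covariates translate into choice probabilities:

```python
import math

def choice_probability(x, beta, intercept=0.0):
    """P(choose ICS) under a simple logit model.

    x: covariate values; beta: coefficients (hypothetical here, not the
    study's estimates). exp(beta[j]) is the odds ratio for covariate j."""
    z = intercept + sum(b * xi for b, xi in zip(beta, x))
    return 1.0 / (1.0 + math.exp(-z))

beta = [0.8, 0.5]   # e.g. crop-residue share, village peer adoption (made up)
p_low = choice_probability([0.0, 0.0], beta)   # baseline household
p_high = choice_probability([1.0, 1.0], beta)  # both factors present
```

Under this toy parameterization, a unit increase in the first covariate multiplies the odds of choosing the ICS by exp(0.8), roughly 2.2.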
a Model Study of Small-Scale World Map Generalization
NASA Astrophysics Data System (ADS)
Cheng, Y.; Yin, Y.; Li, C. M.; Wu, W.; Guo, P. P.; Ma, X. L.; Hu, F. M.
2018-04-01
With globalization and rapid development, every field is taking an increasing interest in physical geography and human economics. There is a surging demand worldwide for small-scale world maps in large formats. Further study of automated mapping technology, especially the realization of small-scale world map production on a large scale, is a key problem the cartographic field needs to solve. In light of this, this paper adopts an improved map-generalization model (with map and data separated), which separates geographic data from mapping data and mainly comprises a cross-platform symbol library and an automatic map-making knowledge engine. In the cross-platform symbol library, the symbols and the physical symbols in the geographic information are configured at all scale levels. The automatic map-making knowledge engine consists of 97 types, 1086 subtypes, 21845 basic algorithms, and over 2500 relevant functional modules. To evaluate the accuracy and visual effect of our model on topographic and thematic maps, we take small-scale world map generalization as an example. After the generalization process, combining and simplifying scattered islands makes the map clearer at 1:2.1 billion scale, and the map features more complete and accurate. Not only does it significantly enhance map generalization at various scales, but it also achieves integration among map-making at various scales, suggesting that this model provides a reference for cartographic generalization at various scales.
Or, Calvin; Tong, Ellen; Tan, Joseph; Chan, Summer
2018-05-29
The health care reform initiative led by the Hong Kong government's Food and Health Bureau has started the implementation of an electronic sharing platform to provide an information infrastructure that enables public hospitals and private clinics to share their electronic medical records (EMRs) for improved access to patients' health care information. However, previous attempts to convince the private clinics to adopt EMRs to document health information have faced challenges, as EMR adoption has been voluntary. The lack of electronic data shared by private clinics directly affects the efficacy of electronic record sharing between public and private health care providers. To increase the likelihood of buy-in, it is essential to proactively identify the users' and organizations' needs and capabilities before large-scale implementation. As part of the reform initiative, this study examined factors affecting the adoption of EMRs in small or solo private general practice clinics by analyzing the experiences and opinions of the physicians and clinical assistants during the pilot implementation of the technology, with the purpose of learning from it before full-scale rollout. In-depth, semistructured interviews were conducted with 23 physicians and clinical assistants from seven small or solo private general practice clinics to evaluate their experiences, expectations, and opinions regarding the deployment of EMRs. Interview transcripts were content analyzed to identify key factors. Factors affecting the adoption of EMRs to record and manage health care information were identified as follows: system interface design; system functions; stability and reliability of hardware, software, and computing networks; financial and time costs; task and outcome performance, work practice, and clinical workflow; physical space in clinics; trust in technology; users' information technology literacy; training and technical support; and social and organizational influences.
These factors are interrelated. The adoption factors identified are multifaceted, spanning technological characteristics, clinician-technology interactions, skills and knowledge, and the user-workflow-technology fit. Other findings, which have been relatively underrepresented in previous studies, contribute unique insights about the influence of the work and social environment on the adoption of EMRs, including limited clinic space and the effect of physicians' decisions to use the technology on clinical staff's adoption decisions. Potential strategies to address the concerns, overcome adoption barriers, and define relevant policies are discussed.
NASA Technical Reports Server (NTRS)
Senocak, I.; Ackerman, A. S.; Kirkpatrick, M. P.; Stevens, D. E.; Mansour, N. N.
2004-01-01
Large-eddy simulation (LES) is a widely used technique in atmospheric modeling research. In LES, large, unsteady, three-dimensional structures are resolved, and small structures that are not resolved on the computational grid are modeled. A filtering operation is applied to distinguish between resolved and unresolved scales. We present two near-surface models that have found use in atmospheric modeling. We also suggest a simpler eddy viscosity model that adopts Prandtl's mixing length model (Prandtl 1925) in the vicinity of the surface and blends with the dynamic Smagorinsky model (Germano et al., 1991) away from the surface. We evaluate the performance of these surface models by simulating a neutrally stratified atmospheric boundary layer.
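The suggested blend of Prandtl's mixing-length model near the surface with a Smagorinsky-type model aloft can be sketched as follows. The linear blending weight, the blend height, and the fixed Smagorinsky constant are illustrative assumptions; the paper's dynamic procedure computes the coefficient on the fly:

```python
def eddy_viscosity(z, strain_rate, delta, blend_height,
                   kappa=0.41, c_s=0.17):
    """Blend a Prandtl mixing-length eddy viscosity (kappa*z)^2 * |S|
    near the surface into a Smagorinsky-type value (c_s*delta)^2 * |S|
    aloft. The linear blend and its height are illustrative choices.

    z: height above the surface; strain_rate: |S|, the resolved strain-rate
    magnitude; delta: grid filter width."""
    nu_wall = (kappa * z) ** 2 * strain_rate       # mixing-length value
    nu_smag = (c_s * delta) ** 2 * strain_rate     # Smagorinsky value
    w = min(z / blend_height, 1.0)                 # 0 at surface, 1 aloft
    return (1.0 - w) * nu_wall + w * nu_smag
```

The viscosity vanishes at the surface (as the mixing-length limit requires) and reverts to the pure Smagorinsky value at and above the blend height.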
DNA-encoded chemistry: enabling the deeper sampling of chemical space.
Goodnow, Robert A; Dumelin, Christoph E; Keefe, Anthony D
2017-02-01
DNA-encoded chemical library technologies are increasingly being adopted in drug discovery for hit and lead generation. DNA-encoded chemistry enables the exploration of chemical spaces four to five orders of magnitude more deeply than is achievable by traditional high-throughput screening methods. Operation of this technology requires developing a range of capabilities including aqueous synthetic chemistry, building block acquisition, oligonucleotide conjugation, large-scale molecular biological transformations, selection methodologies, PCR, sequencing, sequence data analysis and the analysis of large chemistry spaces. This Review provides an overview of the development and applications of DNA-encoded chemistry, highlighting the challenges and future directions for the use of this technology.
The Study of Adopting Problem Based Learning in Normal Scale Class Course Design
ERIC Educational Resources Information Center
Hsu, Chia-ling
2014-01-01
This study adopts Problem Based Learning (PBL) for pre-service teachers in a teacher education program. The reasons to adopt PBL are that the class is not small in scale, the content is too extensive to teach directly, and the technologies are ready to be used in the classroom. This study used an intermediary, a movie, as a scenario for students to define the…
Resistivity scaling and electron relaxation times in metallic nanowires
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moors, Kristof, E-mail: kristof@itf.fys.kuleuven.be; Imec, Kapeldreef 75, B-3001 Leuven; Sorée, Bart
2014-08-14
We study the resistivity scaling in nanometer-sized metallic wires due to surface roughness and grain boundaries, currently the main causes of electron scattering in nanoscaled interconnects. The resistivity has been obtained with the Boltzmann transport equation, adopting the relaxation time approximation of the distribution function and the effective mass approximation for the conducting electrons. The relaxation times are calculated exactly, using Fermi's golden rule, resulting in a correct relaxation time for every sub-band state contributing to the transport. In general, the relaxation time strongly depends on the sub-band state, something that remained unclear with the methods of previous work. The resistivity scaling is obtained for different roughness and grain-boundary properties, showing large differences in scaling behavior and relaxation times. Our model clearly indicates that the resistivity is dominated by grain-boundary scattering, easily surpassing the surface roughness contribution by a factor of 10.
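When independent scattering mechanisms are combined, their resistivity contributions add (equivalently, the scattering rates 1/τ add), which is Matthiessen's rule. This textbook approximation, not the paper's exact sub-band-resolved treatment, is sketched below with illustrative numbers in which the grain-boundary term dominates surface roughness by about the factor of 10 reported above:

```python
def total_resistivity(rho_bulk, rho_grain, rho_surface):
    """Matthiessen's rule: independent scattering mechanisms contribute
    additive resistivity terms (since the scattering rates 1/tau add).
    A textbook approximation, used here only for illustration."""
    return rho_bulk + rho_grain + rho_surface

# Illustrative values in micro-ohm-cm (not from the paper): bulk copper-like
# term, a dominant grain-boundary term, and a ~10x smaller roughness term.
rho = total_resistivity(rho_bulk=1.7, rho_grain=10.0, rho_surface=1.0)
```

In this regime the total resistivity tracks the grain-boundary term almost entirely, which is the qualitative conclusion the paper reaches with its exact relaxation-time calculation.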
Inquiring Minds Want to Know: Progress Report on SCALE-UP Physics at Penn State Erie
NASA Astrophysics Data System (ADS)
Hall, Jonathan
2008-03-01
SCALE-UP (Student-Centered Activities for Large Enrollment Undergraduate Programs) is a ``studio'' approach to learning developed by Bob Beichner at North Carolina State University. SCALE-UP was adapted for teaching and learning in the introductory calculus-based mechanics course at Penn State Erie, The Behrend College, starting in Spring 2007. We are presently doing quantitative and qualitative research on using inquiry-based learning with first-year college students, in particular how it affects female students and students from groups that are traditionally under-represented in STEM fields. Using field notes from observations of the classes, focus groups, and the collection of quantitative data, the feedback generated by the research is also being used to improve the delivery of the course, and in planning the adoption of SCALE-UP for the second-semester course on electromagnetism in the Fall 2008 semester.
NASA Astrophysics Data System (ADS)
Fung, K. M.; Tai, A. P. K.; Yong, T.; Liu, X.
2017-12-01
The fast-growing world population will impose severe pressure on our current global food production system. Meanwhile, boosting crop yield by increasing fertilizer use comes with a cascade of environmental problems, including air pollution. In China, agricultural activities contribute 95% of total ammonia emissions. Such emissions are attributable to 20% of the fine particulate matter (PM2.5) formed in the downwind regions, which imposes severe health risks on citizens. Field studies of soybean intercropping have demonstrated its potential to enhance crop yield, lower fertilizer use, and thus reduce ammonia emissions by taking advantage of legume nitrogen fixation and enabling mutualistic crop-crop interactions between legumes and non-legume crops. In our work, we revise the process-based biogeochemical model DeNitrification-DeComposition (DNDC) to capture the belowground interactions of intercropped crops and show that, with intercropping, only 58% of the fertilizer is required to match the maize production of its monoculture counterpart, corresponding to a reduction in ammonia emissions of 43% over China. Using the GEOS-Chem global 3-D chemical transport model, we estimate that such an ammonia reduction can lessen downwind inorganic PM2.5 by up to 2.1% (equivalent to 1.3 μg m-3), which reduces Chinese air pollution-related health costs by up to US$1.5 billion each year. With the enhanced crop growth and land management algorithms in the Community Land Model (CLM), we also implement into CLM the new parametrization of the belowground interactions to simulate large-scale adoption of intercropping around the globe and study its beneficial effects on food production, fertilizer usage, and ammonia reduction. This study can serve as a scientific basis for policy makers and intergovernmental organizations to consider promoting large-scale intercropping to maintain a sustainable global food supply and secure both future crop production and air quality.
Progressing Deployment of Solar Photovoltaic Installations in the United States
NASA Astrophysics Data System (ADS)
Kwan, Calvin Lee
2011-07-01
This dissertation evaluates the likelihood of solar PV playing a larger role in national and state-level renewable energy portfolios. I examine the feasibility of large-scale solar PV arrays on college campuses, the financials associated with large-scale solar PV arrays, and finally, the influence of environmental, economic, social, and political variables on the distribution of residential solar PV arrays in the United States. Chapter two investigates the challenges and feasibility of college campuses adopting a net-zero energy policy. Using energy consumption data, local solar insolation data, and projected campus growth, I present a method to identify the minimum-sized solar PV array required for the City College campus of the Los Angeles Community College District to achieve net-zero energy status. I document how current energy demand can be reduced using strategic demand-side management, with remaining energy demand being met using a solar PV array. Chapter three focuses on the financial feasibility of large-scale solar PV arrays, using the proposed City College campus array as an example. I document that even after demand-side energy management initiatives and financial incentives, large-scale solar PV arrays continue to have payback periods greater than 25 years. I find that traditional financial evaluation methods are not suitable for environmental projects such as solar PV installations, as externalities are not taken into account, and I therefore call for the development of alternative financial valuation methods. Chapter four investigates the influence of environmental, social, economic, and political variables on the distribution of residential solar PV arrays across the United States using ZIP-code-level data from the 2000 US Census. Using data from the National Renewable Energy Laboratory's Open PV project, I document where residential solar PVs are currently located. A zero-inflated negative binomial model was run to evaluate the influence of selected variables.
Using the same model, predicted residential solar PV shares were generated and illustrated using GIS software. The results of this model indicate that solar insolation, state energy deregulation, and the cost of electricity are statistically significant factors positively correlated with the adoption of residential solar PV arrays. With this information, policymakers at the town and city level can establish effective solar-PV-promoting policies and regulations for their respective locations.
Smith, Rachel A; Kim, Youllee; Zhu, Xun; Doudou, Dimi Théodore; Sternberg, Eleanore D; Thomas, Matthew B
2018-01-01
This study documents an investigation into the adoption and diffusion of eave tubes, a novel mosquito vector control, during a large-scale scientific field trial in West Africa. The diffusion of innovations (DOI) and the integrated model of behavior (IMB) were integrated (i.e., innovation attributes with attitudes and social pressures with norms) to predict participants' (N = 329) diffusion intentions. The findings showed that positive attitudes about the innovation's attributes were a consistent positive predictor of diffusion intentions: adopting it, maintaining it, and talking with others about it. As expected by the DOI and the IMB, the social pressure created by a descriptive norm positively predicted intentions to adopt and maintain the innovation. Drawing upon sharing research, we argued that the descriptive norm may dampen future talk about the innovation, because it may no longer be seen as a novel, useful topic to discuss. As predicted, the results showed that as the descriptive norm increased, the intention to talk about the innovation decreased. These results provide broad support for integrating the DOI and the IMB to predict diffusion and for efforts to draw on other research to understand motivations for social diffusion.
Large-eddy simulations of a forced homogeneous isotropic turbulence with polymer additives
NASA Astrophysics Data System (ADS)
Wang, Lu; Cai, Wei-Hua; Li, Feng-Chen
2014-03-01
Large-eddy simulations (LES) based on the temporal approximate deconvolution model were performed for a forced homogeneous isotropic turbulence (FHIT) with polymer additives at moderate Taylor Reynolds number. The finitely extensible nonlinear elastic model in the Peterlin approximation (FENE-P) was adopted as the constitutive equation for the filtered conformation tensor of the polymer molecules. The LES results were verified through comparisons with direct numerical simulation results. Using the LES database of the FHIT in the Newtonian fluid and polymer solution flows, the polymer effects on important parameters such as strain, vorticity and drag reduction were studied. By extracting the vortex structures and exploring the flatness factor through a high-order correlation function of the velocity derivative and wavelet analysis, it is found that the small-scale vortex structures and small-scale intermittency in the FHIT are both inhibited by the presence of the polymers. The extended self-similarity scaling law in the polymer solution flow shows no apparent difference from that in the Newtonian fluid flow at the currently simulated ranges of Reynolds and Weissenberg numbers.
2013-06-01
and adopts a laissez - faire approach to advancing free-market and democratic ideals, which globalization seems to facilitate by itself. According to...uniquely preeminent role in protecting the system and sustaining the United States’ leadership position within it. By helping to prevent large-scale war...the means thereby marginalizing the Navy’s ability to influence U.S. strategy. In short, the style of U.S. defense leadership was industrial-managerial
2011-12-01
road oil, aviation gasoline, kerosene, lubricants, naphtha-type jet fuel, pentanes plus, petrochemical feedstocks, special naphthas, still gas... refinery gas), waxes, miscellaneous products, and crude oil burned as fuel. Figure 2. Uses of Oil (EIA, 2010a, p. 148) There is no significant body of...1. Large-Scale Efforts in the 1990s There have been efforts in the past to bring about the adoption of EVs or other zero- emissions vehicles. There
Balancing Green Power; How to deal with variable energy sources
NASA Astrophysics Data System (ADS)
Elliott, David
2016-04-01
Renewable energy sources are large but some are variable and intermittent. The wide-scale use of renewable energy sources for energy supply will require the adoption of ways to compensate for their variability. This book reviews the technical options looking at their pros and cons and how they might work together to support a reliable and sustainable energy system. This is a rapidly advancing area of research and practice and Balancing Green Power offers an ideal introduction to the field.
Barstow, Christina K; Nagel, Corey L; Clasen, Thomas F; Thomas, Evan A
2016-07-16
In an effort to reduce the disease burden in rural Rwanda, decrease poverty associated with expenditures for fuel, and minimize the environmental impact on forests and greenhouse gases from inefficient combustion of biomass, the Rwanda Ministry of Health (MOH) partnered with DelAgua Health (DelAgua), a private social enterprise, to distribute and promote the use of improved cookstoves and advanced water filters to the poorest quarter of households (Ubudehe 1 and 2) nationally, beginning in Western Province under a program branded Tubeho Neza ("Live Well"). The project is privately financed and earns revenue from carbon credits under the United Nations Clean Development Mechanism. During a 3-month period in late 2014, over 470,000 people living in over 101,000 households were provided free water filters and cookstoves. Following the distribution, community health workers visited nearly 98 % of households to perform household-level education and training activities. Over 87 % of households were visited again within 6 months, when a basic survey was conducted. Detailed adoption surveys were conducted among a sample of households: 1000 in the first round, 187 in the second. Approximately a year after distribution, reported water filter use was above 90 % (+/-4 % CI) and water was observed in the filter in over 76 % (+/-6 % CI) of households, while reported use of the improved stove as the primary stove was nearly 90 % (+/-4.4 % CI); of households cooking at the time of the visit, over 83 % (+/-5.3 % CI) were cooking on the improved stove. There was no observed association between household size and stove-stacking behavior. This program suggests that free distribution is not a determinant of low adoption. It is plausible that continued engagement with households, enabled by Ministry of Health support and carbon-financed revenue, contributed to the high adoption rates.
Overall, the program was able to demonstrate a privately financed, public health intervention can achieve high levels of initial adoption and usage of household level water filtration and improved cookstoves at a large scale.
Bini, Stefano A; Mahajan, John
2016-11-01
Little is known about the implementation rate of clinical practice guidelines (CPGs). Our purpose was to report on the adoption rate of CPGs created and implemented by a large orthopedic group using the Delphi consensus method. The draft CPGs were created before the group's annual meeting by 5 teams, each assigned a subset of topics. The draft guidelines included a statement and a summary of the available evidence. Each guideline was debated in both small-group and plenary sessions. Voting was anonymous and a 75% supermajority was required for passage. A Likert scale was used to survey attendees' experience with the process at 1 week, and the Kirkpatrick evaluation model was used to gauge the efficacy of the process over a 6-month time frame. Eighty-five orthopedic surgeons attended the meeting. Fifteen guidelines grouped into 5 topics were created. All passed. Eighty-six percent of attendees found the process effective and 84% felt that participating in the process made it more likely that they would adopt the guidelines. At 1 week, an average of 62% of attendees stated they were practicing the guidelines as written (range: 35%-72%), and at 6 months, 96% stated they were practicing them (range: 82%-100%). We have demonstrated that a modified Delphi method for reaching consensus can be very effective in both creating CPGs and leading to their adoption. Further, we have shown that the process is well received by participants and that an inclusionary approach can be highly successful.
NASA Astrophysics Data System (ADS)
Bellón, Beatriz; Bégué, Agnès; Lo Seen, Danny; Lebourgeois, Valentine; Evangelista, Balbino Antônio; Simões, Margareth; Demonte Ferraz, Rodrigo Peçanha
2018-06-01
Cropping systems' maps at fine scale over large areas provide key information for further agricultural production and environmental impact assessments, and thus represent a valuable tool for effective land-use planning. There is, therefore, a growing interest in mapping cropping systems in an operational manner over large areas, and remote sensing approaches based on vegetation index time series analysis have proven to be an efficient tool. However, supervised pixel-based approaches are commonly adopted, requiring resource-consuming field campaigns to gather training data. In this paper, we present a new object-based unsupervised classification approach tested on an annual MODIS 16-day composite Normalized Difference Vegetation Index time series and a Landsat 8 mosaic of the State of Tocantins, Brazil, for the 2014-2015 growing season. Two variants of the approach are compared: a hyperclustering approach, and a landscape-clustering approach involving a prior stratification of the study area into landscape units on which the clustering is then performed. The main cropping systems of Tocantins, characterized by the crop types and cropping patterns, were efficiently mapped with the landscape-clustering approach. Results show that stratification prior to clustering significantly improves the classification accuracies for underrepresented and sparsely distributed cropping systems. This study illustrates the potential of unsupervised classification for large-area cropping systems' mapping and contributes to the development of generic tools for supporting large-scale agricultural monitoring across regions.
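The unsupervised clustering at the heart of both variants can be illustrated with a minimal Lloyd's k-means over per-object NDVI time series. The synthetic single- and double-cycle profiles and the deterministic initialization below are our illustrative assumptions, not the paper's actual MODIS processing chain:

```python
def kmeans(series, k, iters=20):
    """Minimal Lloyd's k-means; each item is a vector (an NDVI time series)."""
    # deterministic init: pick items spread across the input list
    centroids = [list(series[i * (len(series) - 1) // max(k - 1, 1)]) for i in range(k)]
    labels = [0] * len(series)
    for _ in range(iters):
        # assignment step: nearest centroid by squared Euclidean distance
        for i, s in enumerate(series):
            labels[i] = min(range(k),
                            key=lambda c: sum((a - b) ** 2 for a, b in zip(s, centroids[c])))
        # update step: each centroid becomes the mean of its assigned series
        for c in range(k):
            members = [s for i, s in enumerate(series) if labels[i] == c]
            if members:
                centroids[c] = [sum(v) / len(members) for v in zip(*members)]
    return labels, centroids

# Two synthetic "cropping systems": single-cycle vs double-cycle NDVI profiles
single = [[0.2, 0.5, 0.8, 0.5, 0.2, 0.2] for _ in range(3)]
double = [[0.2, 0.7, 0.3, 0.7, 0.3, 0.2] for _ in range(3)]
labels, _ = kmeans(single + double, k=2)
```

Grouping pixels into objects first (the landscape-clustering variant) simply changes what each input vector represents, not the clustering step itself.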
Recent advances in large-eddy simulation of spray and coal combustion
NASA Astrophysics Data System (ADS)
Zhou, L. X.
2013-07-01
Large-eddy simulation (LES) is developing rapidly and is recognized as a possible second generation of CFD methods for engineering use. Spray and coal combustion is widely used in power, transportation, chemical and metallurgical, iron and steel making, and aeronautical and astronautical engineering; hence LES of spray and coal two-phase combustion is particularly important for engineering applications. LES of two-phase combustion is attracting more and more attention, since it can give the detailed instantaneous flow and flame structures and more exact statistical results than Reynolds-averaged (RANS) modeling. One of the key problems in LES is developing sub-grid scale (SGS) models, including SGS stress models and combustion models. Different investigators have proposed or adopted various SGS models. In this paper the author reviews advances in LES of spray and coal combustion, including studies done by the author and his colleagues. The SGS models adopted by different investigators are described, some of their main results are summarized, and finally some research needs are discussed.
Efficient and Extensible Quasi-Explicit Modular Nonlinear Multiscale Battery Model: GH-MSMD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Gi-Heon; Smith, Kandler; Lawrence-Simon, Jake
Complex physics and long computation time hinder the adoption of computer aided engineering models in the design of large-format battery cells and systems. A modular, efficient battery simulation model -- the multiscale multidomain (MSMD) model -- was previously introduced to aid the scale-up of Li-ion material and electrode designs to complete cell and pack designs, capturing electrochemical interplay with 3-D electronic current pathways and thermal response. Here, this paper enhances the computational efficiency of the MSMD model using a separation of time-scales principle to decompose model field variables. The decomposition provides a quasi-explicit linkage between the multiple length-scale domains and thus reduces time-consuming nested iteration when solving model equations across multiple domains. In addition to particle-, electrode- and cell-length scales treated in the previous work, the present formulation extends to bus bar- and multi-cell module-length scales. We provide example simulations for several variants of GH electrode-domain models.
Le Mouël, Jean-Louis; Allègre, Claude J.; Narteau, Clément
1997-01-01
A scaling law approach is used to simulate the dynamo process of the Earth's core. The model is made of embedded turbulent domains of increasing dimensions, up to the largest, whose size is comparable with the size of the core, pervaded by large-scale magnetic fields. Left-handed or right-handed cyclones appear at the lowest scale, the scale of the elementary domains of the hierarchical model, and disappear. These elementary domains then behave like electromotor generators with opposite polarities depending on whether they contain a left-handed or a right-handed cyclone. To transfer the behavior of the elementary domains to larger ones, a dynamic renormalization approach is used. A simple rule is adopted to determine whether a domain of scale l is a generator—and what its polarity is—as a function of the state of the (l − 1) domains it is made of. This mechanism is used as the main ingredient of a kinematic dynamo model, which displays polarity intervals, excursions, and reversals of the geomagnetic field. PMID:11038547
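The scale-transfer rule described above can be sketched as a majority vote propagated up the hierarchy. The branching factor of 3 and the tie-goes-to-neutral convention are our illustrative assumptions; the paper's actual rule and dynamics may differ:

```python
def polarity(children):
    """Polarity of a scale-l domain from its (l-1) sub-domains:
    +1 or -1 if one sign dominates, 0 (no generator) on a tie."""
    s = sum(children)
    return (s > 0) - (s < 0)

def renormalize(elementary, branching=3):
    """Propagate polarities from the elementary scale up to the largest domain."""
    level = list(elementary)
    while len(level) > 1:
        level = [polarity(level[i:i + branching])
                 for i in range(0, len(level), branching)]
    return level[0]

# Nine elementary cyclones: left-handed (+1), right-handed (-1), or inactive (0)
top = renormalize([+1, +1, -1, -1, -1, -1, +1, 0, +1], branching=3)
```

Re-drawing the elementary polarities over time and tracking the top-level sign is what produces polarity intervals and occasional reversals in such toy hierarchies.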
Identifying the scale-dependent motifs in atmospheric surface layer by ordinal pattern analysis
NASA Astrophysics Data System (ADS)
Li, Qinglei; Fu, Zuntao
2018-07-01
Ramp-like structures in various atmospheric surface layer time series have long been studied, but the presence of finer-scale motifs embedded within larger-scale ramp-like structures has largely been overlooked in the reported literature. Here a novel, objective and well-adapted methodology, ordinal pattern analysis, is adopted to study the finer-scaled motifs in atmospheric boundary-layer (ABL) time series. The studies show that the motifs represented by different ordinal patterns cluster in time, and that 6 dominant motifs out of the 24 possible motifs account for about 45% of the time series at particular scales, which indicates the higher contribution of the finer-scale motifs to the series. Further studies indicate that motif statistics are similar for stable and unstable conditions at larger scales, but large discrepancies are found at smaller scales, where the frequencies of motifs "1234" and/or "4321" are somewhat higher under stable conditions than under unstable conditions. Under stable conditions, the occurrence frequencies of motifs "1234" and "4321" change greatly: the occurrence frequency of motif "1234" decreases from nearly 24% to 4.5% as the scale factor increases, and the occurrence frequency of motif "4321" changes nonlinearly with increasing scale. These scale-dependent changes in the dominant motifs can be taken as an indicator to quantify changes in flow structure under different stability conditions, and a motif entropy can be defined from the 6 dominant motifs alone to quantify this time-scale-independent property of the motifs. All these results suggest that the defining scale of the finer-scale motifs should be carefully taken into consideration in the interpretation of turbulent coherent structures.
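Ordinal pattern extraction itself is compact: each window of `order` points, spaced by the scale factor, is labelled by the ranks of its values, so a rising ramp gives "1234" and a falling one "4321". A minimal sketch follows; the windowing and tie-breaking conventions are ours:

```python
from collections import Counter

def ordinal_patterns(series, order=4, scale=1):
    """Label each window of `order` points, spaced `scale` apart, by value ranks."""
    labels = []
    span = (order - 1) * scale
    for i in range(len(series) - span):
        window = [series[i + j * scale] for j in range(order)]
        by_value = sorted(range(order), key=lambda j: (window[j], j))  # ties by time
        rank = [0] * order
        for pos, j in enumerate(by_value):
            rank[j] = pos + 1
        labels.append("".join(map(str, rank)))   # e.g. a rising ramp -> "1234"
    return labels

def motif_frequencies(series, order=4, scale=1):
    """Relative frequency of each ordinal pattern, the basis for motif entropy."""
    labels = ordinal_patterns(series, order, scale)
    n = len(labels)
    return {p: c / n for p, c in Counter(labels).items()}

freqs = motif_frequencies([0, 1, 3, 2, 4, 6, 5, 7], order=4, scale=1)
```

Increasing `scale` while keeping `order` fixed is what produces the scale-dependent motif statistics discussed in the abstract.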
Carbone, Chris; Teacher, Amber; Rowcliffe, J. Marcus
2007-01-01
Mammalian carnivores fall into two broad dietary groups: smaller carnivores (<20 kg) that feed on very small prey (invertebrates and small vertebrates) and larger carnivores (>20 kg) that specialize in feeding on large vertebrates. We develop a model that predicts the mass-related energy budgets and limits of carnivore size within these groups. We show that the transition from small to large prey can be predicted by the maximization of net energy gain; larger carnivores achieve a higher net gain rate by concentrating on large prey. However, because it requires more energy to pursue and subdue large prey, this leads to a 2-fold step increase in energy expenditure, as well as increased intake. Across all species, energy expenditure and intake both follow a three-fourths scaling with body mass. However, when each dietary group is considered individually they both display a shallower scaling. This suggests that carnivores at the upper limits of each group are constrained by intake and adopt energy conserving strategies to counter this. Given predictions of expenditure and estimates of intake, we predict a maximum carnivore mass of approximately a ton, consistent with the largest extinct species. Our approach provides a framework for understanding carnivore energetics, size, and extinction dynamics. PMID:17227145
Low-energy transmission electron diffraction and imaging of large-area graphene
Zhao, Wei; Xia, Bingyu; Lin, Li; Xiao, Xiaoyang; Liu, Peng; Lin, Xiaoyang; Peng, Hailin; Zhu, Yuanmin; Yu, Rong; Lei, Peng; Wang, Jiangtao; Zhang, Lina; Xu, Yong; Zhao, Mingwen; Peng, Lianmao; Li, Qunqing; Duan, Wenhui; Liu, Zhongfan; Fan, Shoushan; Jiang, Kaili
2017-01-01
Two-dimensional (2D) materials have attracted interest because of their excellent properties and potential applications. A key step in realizing industrial applications is to synthesize wafer-scale single-crystal samples. Until now, single-crystal samples, such as graphene domains up to the centimeter scale, have been synthesized. However, a new challenge is to efficiently characterize large-area samples. Currently, the crystalline characterization of these samples still relies on selected-area electron diffraction (SAED) or low-energy electron diffraction (LEED), which is more suitable for characterizing very small local regions. This paper presents a highly efficient characterization technique that adopts a low-energy electrostatically focused electron gun and a super-aligned carbon nanotube (SACNT) film sample support. It allows rapid crystalline characterization of large-area graphene through a single photograph of a transmission-diffracted image at a large beam size. Additionally, the low-energy electron beam enables the observation of a unique diffraction pattern of adsorbates on the suspended graphene at room temperature. This work presents a simple and convenient method for characterizing the macroscopic structures of 2D materials, and the instrument we constructed allows the study of the weak interaction with 2D materials. PMID:28879233
A Coherent vorticity preserving eddy-viscosity correction for Large-Eddy Simulation
NASA Astrophysics Data System (ADS)
Chapelier, J.-B.; Wasistho, B.; Scalo, C.
2018-04-01
This paper introduces a new approach to Large-Eddy Simulation (LES) in which subgrid-scale (SGS) dissipation is applied proportionally to the degree of local spectral broadening, and hence mitigated or deactivated in regions dominated by large-scale and/or laminar vortical motion. The proposed coherent-vorticity preserving (CvP) LES methodology is based on the evaluation of the ratio of the test-filtered to resolved (or grid-filtered) enstrophy, σ. Values of σ close to 1 indicate low sub-test-filter turbulent activity, justifying local deactivation of the SGS dissipation. The intensity of the SGS dissipation is progressively increased for σ < 1, which corresponds to small-scale spectral broadening. The SGS dissipation is then fully activated in developed turbulence characterized by σ ≤ σeq, where the value σeq is derived assuming a Kolmogorov spectrum. The proposed approach can be applied to any eddy-viscosity model, is algorithmically simple, and is computationally inexpensive. LES of Taylor-Green vortex breakdown demonstrates that the CvP methodology improves the performance of traditional, non-dynamic dissipative SGS models, capturing the peak of total turbulent kinetic energy dissipation during transition. Similar accuracy is obtained by adopting Germano's dynamic procedure, albeit at more than twice the computational overhead. A CvP-LES of a pair of unstable periodic helical vortices is shown to predict accurately the experimentally observed growth rate using coarse resolutions. The ability of the CvP methodology to dynamically sort the coherent, large-scale motion from the smaller, broadband scales during transition is demonstrated via flow visualizations. LES of compressible channel flow is carried out and shows a good match with a reference DNS.
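The CvP sensor can be caricatured in one dimension: apply a top-hat test filter to a vorticity-like signal and take the ratio of test-filtered to resolved enstrophy. The 1-D setup, filter width, and signals below are our illustrative assumptions; the actual method operates on 3-D enstrophy fields with a derived equilibrium threshold σeq:

```python
from math import sin, pi

def box_filter(w, width=3):
    """Top-hat test filter (interior points only)."""
    h = width // 2
    return [sum(w[i - h:i + h + 1]) / width for i in range(h, len(w) - h)]

def enstrophy(w):
    """Mean squared 'vorticity' of the signal."""
    return sum(v * v for v in w) / len(w)

def sigma(w):
    """CvP-style sensor: ratio of test-filtered to resolved enstrophy."""
    return enstrophy(box_filter(w)) / enstrophy(w)

smooth = [sin(2 * pi * i / 64) for i in range(256)]    # large-scale, coherent motion
grid_scale = [(-1) ** i for i in range(256)]           # grid-scale oscillation

# sigma near 1 -> little sub-test-filter activity, deactivate SGS dissipation;
# sigma small  -> broadband small scales, fully activate SGS dissipation.
```

A coherent large-scale signal passes through the test filter almost unchanged (σ near 1), while a grid-scale oscillation is strongly attenuated (σ well below 1), which is exactly the distinction the CvP blending exploits.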
Framework for rapid assessment and adoption of new vector control tools.
Vontas, John; Moore, Sarah; Kleinschmidt, Immo; Ranson, Hilary; Lindsay, Steve; Lengeler, Christian; Hamon, Nicholas; McLean, Tom; Hemingway, Janet
2014-04-01
Evidence-informed health policy making is reliant on systematic access to, and appraisal of, the best available research evidence. This review suggests a strategy to improve the speed at which evidence is gathered on new vector control tools (VCTs) using a framework based on measurements of the vectorial capacity of an insect population to transmit disease. We explore links between indicators of VCT efficacy measurable in small-scale experiments that are relevant to entomological and epidemiological parameters measurable only in large-scale proof-of-concept randomised control trials (RCTs). We hypothesise that once RCTs establish links between entomological and epidemiological indicators then rapid evaluation of new products within the same product category may be conducted through smaller scale experiments without repetition of lengthy and expensive RCTs.
Path Searching Based Crease Detection for Large Scale Scanned Document Images
NASA Astrophysics Data System (ADS)
Zhang, Jifu; Li, Yi; Li, Shutao; Sun, Bin; Sun, Jun
2017-12-01
Since large documents are usually folded for preservation, creases occur in the scanned images. In this paper, a crease detection method is proposed to locate crease pixels for further processing. Owing to the imaging process of contactless scanners, the shading on the two sides of a crease usually differs considerably. Based on this observation, a convex hull based algorithm is adopted to extract the shading information of the scanned image. Then, candidate crease paths are obtained by applying a vertical filter and morphological operations to the shading image. Finally, the accurate crease is located via Dijkstra path searching. Experimental results on a dataset of real scanned newspapers demonstrate that the proposed method can obtain accurate locations of creases in large document images.
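The final path-searching step can be sketched as a standard Dijkstra search for a minimum-cost top-to-bottom path over a per-pixel crease cost. The downward-only moves and the toy cost grid are our illustrative assumptions, not the paper's exact formulation:

```python
import heapq

def crease_path(cost):
    """Minimum-cost path from any top-row pixel to any bottom-row pixel.
    cost[r][c] is the penalty for including pixel (r, c) in the crease."""
    rows, cols = len(cost), len(cost[0])
    dist = {(0, c): cost[0][c] for c in range(cols)}   # start anywhere on top row
    parent = {}
    heap = [(d, rc) for rc, d in dist.items()]
    heapq.heapify(heap)
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if d > dist.get((r, c), float("inf")):
            continue                                    # stale heap entry
        if r == rows - 1:                               # reached the bottom row
            path = [(r, c)]
            while path[-1] in parent:
                path.append(parent[path[-1]])
            return path[::-1]
        for dr, dc in ((1, -1), (1, 0), (1, 1)):        # move one row down
            nr, nc = r + dr, c + dc
            if 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    parent[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    return None

grid = [[9, 1, 9],
        [9, 1, 9],
        [9, 9, 1]]
path = crease_path(grid)
```

In practice the cost image would be derived from the shading analysis, so that likely crease pixels are cheap and the search snakes along the fold.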
Adoption of routine telemedicine in Norwegian hospitals: progress over 5 years.
Zanaboni, Paolo; Wootton, Richard
2016-09-20
Although Norway is well known for its early use of telemedicine to provide services for people in rural and remote areas in the Arctic, little is known about the pace of telemedicine adoption in Norway. The aim of the present study was to explore the nationwide implementation of telemedicine in Norwegian hospitals over time, and analyse its adoption and level of use. Data on outpatient visits and telemedicine consultations delivered by Norwegian hospitals from 2009 to 2013 were collected from the national health registry. Data were stratified by health region, hospital, year, and clinical specialty. All four health regions used telemedicine, i.e. there was 100 % adoption at the regional level. The use of routine telemedicine differed between health regions, and telemedicine appeared to be used mostly in the regions of lower centrality and population density, such as Northern Norway. Only Central Norway seemed to be atypical. Twenty-one out of 28 hospitals reported using telemedicine, i.e. there was 75 % adoption at the hospital level. Neurosurgery and rehabilitation were the clinical specialties where telemedicine was used most frequently. Despite the growing trend and the high adoption, the relative use of telemedicine compared to that of outpatient visits was low. Adoption of telemedicine in Norway was high, with all the health regions and most of the hospitals reporting using telemedicine. The use of telemedicine appeared to increase over the 5-year study period. However, the proportion of telemedicine consultations relative to the number of outpatient visits was low. The use of telemedicine in Norway was low in comparison with that reported in large-scale telemedicine networks in other countries. To facilitate future comparisons, data on adoption and utilisation over time should be reported routinely by nationwide or network-based telemedicine services.
Using ADOPT Algorithm and Operational Data to Discover Precursors to Aviation Adverse Events
NASA Technical Reports Server (NTRS)
Janakiraman, Vijay; Matthews, Bryan; Oza, Nikunj
2018-01-01
The US National Airspace System (NAS) is making its transition to the NextGen system, and assuring safety is one of the top priorities in NextGen. At present, safety is managed reactively (corrective action after an unsafe event occurs). While this strategy works for current operations, it may soon become ineffective for future airspace designs and high-density operations. There is a need for proactive management of safety risks by identifying hidden and "unknown" risks and evaluating their impacts on future operations. To this end, NASA Ames has developed data mining algorithms that find anomalies and precursors (high-risk states) to safety issues in the NAS. In this paper, we describe a recently developed algorithm called ADOPT that analyzes large volumes of data and automatically identifies precursors from real-world data. Precursors help in detecting safety risks early so that the operator can mitigate the risk in time. In addition, precursors also help identify causal factors and help predict the safety incident. The ADOPT algorithm scales well to large data sets and to multidimensional time series, reduces analyst time significantly, and quantifies multiple safety risks, giving a holistic view of safety, among other benefits. This paper details the algorithm and includes several case studies to demonstrate its application to discover the "known" and "unknown" safety precursors in aviation operations.
NASA Astrophysics Data System (ADS)
Vahmani, P.; Ban-Weiss, G.
2016-08-01
During 2012-2014, drought in California resulted in policies to reduce water consumption. One measure pursued was replacing lawns with landscapes that minimize water consumption, such as drought-tolerant vegetation. If implemented at broad scale, this strategy would result in reductions in irrigation and changes in land surface characteristics. In this study, we employ a modified regional climate model to assess the climatic consequences of adopting drought-tolerant vegetation over the Los Angeles metropolitan area. Transforming lawns to drought-tolerant vegetation resulted in daytime warming of up to 1.9°C, largely due to decreases in irrigation that shifted surface energy partitioning toward higher sensible and lower latent heat flux. During nighttime, however, adopting drought-tolerant vegetation caused mean cooling of 3.2°C, due to changes in soil thermodynamic properties and heat exchange dynamics between the surface and subsurface. Our results show that nocturnal cooling effects, which are larger in magnitude and of great importance for public health during heat events, could counterbalance the daytime warming attributed to the studied water conservation strategy. A more aggressive implementation, assuming all urban vegetation was replaced with drought-tolerant vegetation, resulted in an average daytime cooling of 0.2°C, largely due to strengthened sea breeze patterns, highlighting the important role of land surface roughness in this coastal megacity.
NASA Astrophysics Data System (ADS)
Ban-Weiss, G. A.; Vahmani, P.
2016-12-01
During 2012-2014, drought in California resulted in policies to reduce water consumption. One measure pursued was replacing lawns with landscapes that minimize water consumption, such as drought tolerant vegetation. If implemented at broad scale, this strategy would result in reductions in irrigation, and changes in land surface characteristics. In this study, we employ a modified regional climate model to assess the climatic consequences of adopting drought tolerant vegetation over the Los Angeles metropolitan area. Transforming lawns to drought tolerant vegetation resulted in daytime warming of up to 1.9°C, largely due to decreases in irrigation that shifted surface energy partitioning toward higher sensible and lower latent heat flux. During nighttime, however, adopting drought tolerant vegetation caused mean cooling of about 3°C, due to changes in soil thermodynamic properties and heat exchange dynamics between the surface and ground. Our results show that nocturnal cooling effects, which are larger in magnitude and of great importance for public health during heat events, could counterbalance the daytime warming attributed to the studied water conservation strategy. A more aggressive implementation, assuming all urban vegetation was replaced with drought tolerant vegetation, resulted in an average daytime cooling of 0.2°C, largely due to weakened sea-breeze patterns, highlighting the important role of land surface roughness in this coastal megacity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Muchero, Wellington; Labbe, Jessy L; Priya, Ranjan
2014-01-01
To date, Populus ranks among a few plant species with a complete genome sequence and other highly developed genomic resources. With the first genome sequence among all tree species, Populus has been adopted as a suitable model organism for genomic studies in trees. However, far from being just a model species, Populus is a key renewable economic resource that plays a significant role in providing raw materials for the biofuel and pulp and paper industries. Therefore, aside from leading frontiers of basic tree molecular biology and ecological research, Populus leads frontiers in addressing global economic challenges related to fuel and fiber production. The latter fact suggests that research aimed at improving the quality and quantity of Populus as a raw material will likely drive the pursuit of more targeted and deeper research in order to unlock the economic potential tied up in the molecular biology processes that drive this tree species. Advances in genome sequence-driven technologies, such as resequencing individual genotypes, which in turn facilitates large-scale SNP discovery and identification of large-scale polymorphisms, are key determinants of future success in these initiatives. In this treatise we discuss the implications of genome sequence-enabled technologies for Populus genomic and genetic studies of complex and specialized traits.
Institutionalizing Telemedicine Applications: The Challenge of Legitimizing Decision-Making
Lettieri, Emanuele
2011-01-01
During the last decades a variety of telemedicine applications have been trialed worldwide. However, telemedicine is still an example of major potential benefits that have not been fully attained. Health care regulators are still debating why institutionalizing telemedicine applications on a large scale has been so difficult and why health care professionals are often averse or indifferent to telemedicine applications, thus preventing them from becoming part of everyday clinical routines. We believe that the lack of consolidated procedures for supporting decision making by health care regulators is a major weakness. We aim to further the current debate on how to legitimize decision making about the institutionalization of telemedicine applications on a large scale. We discuss (1) three main requirements—rationality, fairness, and efficiency—that should underpin decision making so that the relevant stakeholders perceive them as being legitimate, and (2) the domains and criteria for comparing and assessing telemedicine applications—benefits and sustainability. According to these requirements and criteria, we illustrate a possible reference process for legitimate decision making about which telemedicine applications to implement on a large scale. This process adopts the health care regulators’ perspective and is made up of 2 subsequent stages, in which a preliminary proposal and then a full proposal are reviewed. PMID:21955510
Institutionalizing telemedicine applications: the challenge of legitimizing decision-making.
Zanaboni, Paolo; Lettieri, Emanuele
2011-09-28
During the last decades a variety of telemedicine applications have been trialed worldwide. However, telemedicine is still an example of major potential benefits that have not been fully attained. Health care regulators are still debating why institutionalizing telemedicine applications on a large scale has been so difficult and why health care professionals are often averse or indifferent to telemedicine applications, thus preventing them from becoming part of everyday clinical routines. We believe that the lack of consolidated procedures for supporting decision making by health care regulators is a major weakness. We aim to further the current debate on how to legitimize decision making about the institutionalization of telemedicine applications on a large scale. We discuss (1) three main requirements--rationality, fairness, and efficiency--that should underpin decision making so that the relevant stakeholders perceive them as being legitimate, and (2) the domains and criteria for comparing and assessing telemedicine applications--benefits and sustainability. According to these requirements and criteria, we illustrate a possible reference process for legitimate decision making about which telemedicine applications to implement on a large scale. This process adopts the health care regulators' perspective and is made up of 2 subsequent stages, in which a preliminary proposal and then a full proposal are reviewed.
ERIC Educational Resources Information Center
Celik, Ismail; Sahin, Ismail; Aydin, Mustafa
2014-01-01
In this study, a mobile learning adoption scale (MLAS) was developed on the basis of Rogers' (2003) Diffusion of Innovations Theory. The scale that was developed consists of four sections. These sections are as follows: Stages in the innovation-decision process, Types of m-learning decision, Innovativeness level and attributes of m-learning. There…
Final Report: Enabling Exascale Hardware and Software Design through Scalable System Virtualization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bridges, Patrick G.
2015-02-01
In this grant, we enhanced the Palacios virtual machine monitor to increase its scalability and suitability for addressing exascale system software design issues. This included a wide range of research on core Palacios features, large-scale system emulation, fault injection, performance monitoring, and VMM extensibility. This research resulted in a large number of high-impact publications in well-known venues, the support of a number of students, and the graduation of two Ph.D. students and one M.S. student. In addition, our enhanced version of the Palacios virtual machine monitor has been adopted as a core element of the Hobbes operating system under active DOE-funded research and development.
GraphReduce: Processing Large-Scale Graphs on Accelerator-Based Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sengupta, Dipanjan; Song, Shuaiwen; Agarwal, Kapil
2015-11-15
Recent work on real-world graph analytics has sought to leverage the massive amount of parallelism offered by GPU devices, but challenges remain due to the inherent irregularity of graph algorithms and limitations in GPU-resident memory for storing large graphs. We present GraphReduce, a highly efficient and scalable GPU-based framework that operates on graphs that exceed the device’s internal memory capacity. GraphReduce adopts a combination of edge- and vertex-centric implementations of the Gather-Apply-Scatter programming model and operates on multiple asynchronous GPU streams to fully exploit the high degrees of parallelism in GPUs with efficient graph data movement between the host and device.
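The Gather-Apply-Scatter (GAS) programming model mentioned in the abstract can be sketched as follows. This is an illustrative, single-threaded Python toy; GraphReduce itself fuses edge- and vertex-centric GPU implementations and streams out-of-memory graphs. The PageRank kernel and all names here are our own choices, not the paper's code.

```python
def gather(graph, values, v):
    """Gather: accumulate rank contributions from the in-neighbors of v."""
    return sum(values[u] / len(graph[u]) for u in graph if v in graph[u])

def gas_pagerank(graph, damping=0.85, iters=50):
    """GAS-style PageRank sweeps: gather per vertex, apply, then publish."""
    n = len(graph)
    values = {v: 1.0 / n for v in graph}
    for _ in range(iters):
        # Apply: combine the gathered sum with the teleport term per vertex.
        new = {v: (1 - damping) / n + damping * gather(graph, values, v)
               for v in graph}
        # Scatter: publish the updated values for the next sweep.
        values = new
    return values

# Tiny directed 3-cycle: adjacency as vertex -> set of out-neighbors.
ranks = gas_pagerank({"a": {"b"}, "b": {"c"}, "c": {"a"}})
```

On this symmetric cycle every vertex ends up with rank 1/3, and the ranks always sum to one, which is a quick sanity check on the apply step.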
Design of energy storage system to improve inertial response for large scale PV generation
Wang, Xiaoyu; Yue, Meng
2016-07-01
With high-penetration levels of renewable generating sources being integrated into the existing electric power grid, conventional generators are being replaced and grid inertial response is deteriorating. This technical challenge is more severe with photovoltaic (PV) generation than with wind generation because PV generation systems cannot provide inertial response unless special countermeasures are adopted. To enhance the inertial response, this paper proposes to use battery energy storage systems (BESS) as the remediation approach to accommodate the degrading inertial response when high penetrations of PV generation are integrated into the existing power grid. A sample power system was adopted and simulated using PSS/E software. Here, impacts of different penetration levels of PV generation on the system inertial response were investigated and then BESS was incorporated to improve the frequency dynamics.
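The inertial-response problem the abstract describes can be illustrated with a single-machine swing-equation toy model. All parameter values and names below are illustrative assumptions, not taken from the paper (which uses PSS/E), and the BESS is modeled crudely as added virtual inertia.

```python
def freq_response(h_sys, h_bess=0.0, dp=-0.1, damping=1.0, dt=0.01, t_end=10.0):
    """Per-unit frequency deviation after a step power imbalance dp.

    Swing equation: 2*(H_sys + H_bess) * d(df)/dt = dp - D*df.
    Returns (initial ROCOF, frequency nadir over the simulated window).
    """
    h = h_sys + h_bess
    rocof = dp / (2.0 * h)          # initial rate of change of frequency
    df, nadir = 0.0, 0.0
    for _ in range(int(t_end / dt)):  # forward-Euler integration
        df += (dp - damping * df) / (2.0 * h) * dt
        nadir = min(nadir, df)
    return rocof, nadir

weak = freq_response(h_sys=2.0)                # low-inertia grid (high PV share)
helped = freq_response(h_sys=2.0, h_bess=2.0)  # same grid plus BESS virtual inertia
```

Doubling the effective inertia halves the initial ROCOF and slows the frequency decline, which is the remediation effect the paper targets with BESS.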
NASA Astrophysics Data System (ADS)
Song, Z. N.; Sui, H. G.
2018-04-01
High-resolution remote sensing images carry strategically important information, especially for quickly finding time-sensitive targets such as airplanes, ships, and cars. Often the first problem we face is how to rapidly judge whether a particular target appears anywhere in a large, arbitrary remote sensing image, rather than detecting it on a given image. Finding time-sensitive targets in a huge image poses two great challenges: 1) complex backgrounds lead to high miss and false-alarm rates when detecting tiny objects in large-scale images; 2) unlike traditional image retrieval, the task is not merely to compare the similarity of image blocks, but to quickly find specific targets in a huge image. Taking airplanes as an example, this paper presents an effective method for searching for aircraft targets in large-scale optical remote sensing images. First, an improved visual attention model that combines saliency detection with a line segment detector is used to quickly locate suspected regions in a large and complicated remote sensing image. Then, for each region, a single neural network that predicts bounding boxes and class probabilities directly from full images in one evaluation, without a region proposal step, is adopted to search for small airplane objects. Unlike sliding-window and region-proposal techniques, the network sees the entire image (region) during training and test time, so it implicitly encodes contextual information about classes as well as their appearance. Experimental results show that the proposed method quickly identifies airplanes in large-scale images.
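The two-stage search strategy (cheap screening for suspected regions, then a detector run only on those regions) can be illustrated with a minimal first-stage sketch. Local brightness deviation stands in for the paper's visual attention model with saliency and line-segment cues; the function name, tile size, and threshold are all illustrative assumptions.

```python
def suspected_regions(image, tile=2, thresh=50):
    """Flag (row, col) tile offsets whose mean brightness deviates strongly
    from the global mean: a crude stand-in for saliency-based screening."""
    rows, cols = len(image), len(image[0])
    flat = [p for row in image for p in row]
    mean = sum(flat) / len(flat)
    hits = []
    for r in range(0, rows - tile + 1, tile):
        for c in range(0, cols - tile + 1, tile):
            block = [image[r + i][c + j]
                     for i in range(tile) for j in range(tile)]
            if abs(sum(block) / len(block) - mean) > thresh:
                hits.append((r, c))
    return hits

# 6x6 dark scene with one bright 2x2 "target" block at offset (2, 4).
scene = [[0] * 6 for _ in range(6)]
for i in range(2, 4):
    for j in range(4, 6):
        scene[i][j] = 255
```

Only the tile containing the bright block is flagged, so the (much more expensive) second-stage detector would run on one region instead of the whole image.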
NASA Astrophysics Data System (ADS)
Gao, Zhenxun; Wang, Jingying; Jiang, Chongwen; Lee, Chunhian
2014-11-01
In the framework of Reynolds-averaged Navier-Stokes simulation, supersonic turbulent combustion flows at the German Aerospace Centre (DLR) combustor and Japan Aerospace Exploration Agency (JAXA) integrated scramjet engine are numerically simulated using the flamelet model. Based on the DLR combustor case, theoretical analysis and numerical experiments conclude that: the finite rate model only implicitly considers the large-scale turbulent effect and, due to the lack of the small-scale non-equilibrium effect, it would overshoot the peak temperature compared to the flamelet model in general. Furthermore, high-Mach-number compressibility affects the flamelet model mainly through two ways: the spatial pressure variation and the static enthalpy variation due to the kinetic energy. In the flamelet library, the mass fractions of the intermediate species, e.g. OH, are more sensitive to the above two effects than the main species such as H2O. Additionally, in the combustion flowfield where the pressure is larger than the value adopted in the generation of the flamelet library or the conversion from the static enthalpy to the kinetic energy occurs, the temperature obtained by the flamelet model without taking compressibility effects into account would be undershot, and vice versa. The static enthalpy variation effect has only little influence on the temperature simulation of the flamelet model, while the effect of the spatial pressure variation may cause relatively large errors. From the JAXA case, it is found that the flamelet model cannot in general be used for an integrated scramjet engine. The existence of the inlet together with the transverse injection scheme could cause large spatial variations of pressure, so the pressure value adopted for the generation of a flamelet library should be fine-tuned according to a pre-simulation of pure mixing.
Vibart, Ronaldo; Vogeler, Iris; Dennis, Samuel; Kaye-Blake, William; Monaghan, Ross; Burggraaf, Vicki; Beautrais, Josef; Mackay, Alec
2015-06-01
Using a novel approach that links geospatial land resource information with individual farm-scale simulation, we conducted a regional assessment of nitrogen (N) and phosphorous (P) losses to water and greenhouse gas (GHG) emissions to air from the predominant mix of pastoral industries in Southland, New Zealand. An evaluation of the cost-effectiveness of several nutrient loss mitigation strategies applied at the farm-scale, set primarily for reducing N and P losses and grouped by capital cost and potential ease of adoption, followed an initial baseline assessment. Grouped nutrient loss mitigation strategies were applied on an additive basis on the assumption of full adoption, and were broadly identified as 'improved nutrient management' (M1), 'improved animal productivity' (M2), and 'restricted grazing' (M3). Estimated annual nitrate-N leaching losses occurring under representative baseline sheep and beef (cattle) farms, and representative baseline dairy farms for the region were 10 ± 2 and 32 ± 6 kg N/ha (mean ± standard deviation), respectively. Both sheep and beef and dairy farms were responsive to N leaching loss mitigation strategies in M1, at a low cost per kg N-loss mitigated. Only dairy farms were responsive to N leaching loss abatement from adopting M2, at no additional cost per kg N-loss mitigated. Dairy farms were also responsive to N leaching loss abatement from adopting M3, but this reduction came at a greater cost per kg N-loss mitigated. Only dairy farms were responsive to P-loss mitigation strategies, in particular by adopting M1. Only dairy farms were responsive to GHG abatement; greater abatement was achieved by the most intensified dairy farm system simulated. 
Overall, M1 provided for high levels of regional scale N- and P-loss abatement at a low cost per farm without affecting overall farm production, M2 provided additional N-loss abatement but only marginal P-loss abatement, whereas M3 provided the greatest N-loss abatement, but delivered no additional P abatement, and came at a large financial cost to farmers, sheep and beef farmers in particular. The modelling approach provides a farm-scale framework that can be extended to other regions to accommodate different farm production systems and performances, capturing the interactions between farm types, land use capabilities and production levels, as these influence nutrient losses and GHG emissions, and the effectiveness of mitigation strategies. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Dabney, James B.; Arthur, James Douglas
2017-01-01
Agile methods have gained wide acceptance over the past several years, to the point that they are now a standard management and execution approach for small-scale software development projects. While conventional Agile methods are not generally applicable to large multi-year and mission-critical systems, Agile hybrids are now being developed (such as SAFe) to exploit the productivity improvements of Agile while retaining the necessary process rigor and coordination needs of these projects. From the perspective of Independent Verification and Validation (IVV), however, the adoption of these hybrid Agile frameworks is becoming somewhat problematic. Hence, we find it prudent to question the compatibility of conventional IVV techniques with (hybrid) Agile practices. This paper documents our investigation of (a) relevant literature, (b) the modification and adoption of Agile frameworks to accommodate the development of large-scale, mission-critical systems, and (c) the compatibility of standard IVV techniques within hybrid Agile development frameworks. Specific to the latter, we found that the IVV methods employed within a hybrid Agile process can be divided into three groups: (1) early-lifecycle IVV techniques that are fully compatible with the hybrid lifecycles; (2) IVV techniques that focus on tracing requirements, test objectives, etc., which are somewhat incompatible but can be tailored with modest effort; and (3) IVV techniques involving assessments that require artifact completeness, which are simply not compatible with hybrid Agile processes, e.g., those that assume complete requirement specification early in the development lifecycle.
NK cell-based immunotherapy for malignant diseases
Cheng, Min; Chen, Yongyan; Xiao, Weihua; Sun, Rui; Tian, Zhigang
2013-01-01
Natural killer (NK) cells play critical roles in host immunity against cancer. In response, cancers develop mechanisms to escape NK cell attack or induce defective NK cells. Current NK cell-based cancer immunotherapy aims to overcome NK cell paralysis using several approaches. One approach uses expanded allogeneic NK cells, which are not inhibited by self histocompatibility antigens like autologous NK cells, for adoptive cellular immunotherapy. Another adoptive transfer approach uses stable allogeneic NK cell lines, which is more practical for quality control and large-scale production. A third approach is genetic modification of fresh NK cells or NK cell lines to highly express cytokines, Fc receptors and/or chimeric tumor-antigen receptors. Therapeutic NK cells can be derived from various sources, including peripheral or cord blood cells, stem cells or even induced pluripotent stem cells (iPSCs), and a variety of stimulators can be used for large-scale production in laboratories or good manufacturing practice (GMP) facilities, including soluble growth factors, immobilized molecules or antibodies, and other cellular activators. A list of NK cell therapies to treat several types of cancer in clinical trials is reviewed here. Several different approaches to NK-based immunotherapy, such as tissue-specific NK cells, killer receptor-oriented NK cells and chemically treated NK cells, are discussed. A few new techniques or strategies to monitor NK cell therapy by non-invasive imaging, predetermine the efficiency of NK cell therapy by in vivo experiments and evaluate NK cell therapy approaches in clinical trials are also introduced. PMID:23604045
Scalable and Sustainable Electrochemical Allylic C–H Oxidation
Chen, Yong; Tang, Jiaze; Chen, Ke; Eastgate, Martin D.; Baran, Phil S.
2016-01-01
New methods and strategies for the direct functionalization of C–H bonds are beginning to reshape the fabric of retrosynthetic analysis, impacting the synthesis of natural products, medicines, and even materials1. The oxidation of allylic systems has played a prominent role in this context as possibly the most widely applied C–H functionalization due to the utility of enones and allylic alcohols as versatile intermediates, along with their prevalence in natural and unnatural materials2. Allylic oxidations have been featured in hundreds of syntheses, including some natural product syntheses regarded as “classics”3. Despite many attempts to improve the efficiency and practicality of this powerful transformation, the vast majority of conditions still employ highly toxic reagents (based around toxic elements such as chromium, selenium, etc.) or expensive catalysts (palladium, rhodium, etc.)2. These requirements are highly problematic in industrial settings; currently, no scalable and sustainable solution to allylic oxidation exists. As such, this oxidation strategy is rarely embraced for large-scale synthetic applications, limiting the adoption of this important retrosynthetic strategy by industrial scientists. In this manuscript, we describe an electrochemical solution to this problem that exhibits broad substrate scope, operational simplicity, and high chemoselectivity. This method employs inexpensive and readily available materials, representing the first example of a scalable allylic C–H oxidation (demonstrated on 100 grams), finally opening the door for the adoption of this C–H oxidation strategy in large-scale industrial settings without significant environmental impact. PMID:27096371
Flexible services for the support of research.
Turilli, Matteo; Wallom, David; Williams, Chris; Gough, Steve; Curran, Neal; Tarrant, Richard; Bretherton, Dan; Powell, Andy; Johnson, Matt; Harmer, Terry; Wright, Peter; Gordon, John
2013-01-28
Cloud computing has been increasingly adopted by users and providers to promote flexible, scalable and tailored access to computing resources. Nonetheless, the consolidation of this paradigm has uncovered some of its limitations. Initially devised by corporations with direct control over large amounts of computational resources, cloud computing is now being endorsed by organizations with limited resources or with a more articulated, less direct control over these resources. The challenge for these organizations is to leverage the benefits of cloud computing while dealing with limited and often widely distributed computing resources. This study focuses on the adoption of cloud computing by higher education institutions and addresses two main issues: flexible and on-demand access to a large amount of storage resources, and scalability across a heterogeneous set of cloud infrastructures. The proposed solutions leverage a federated approach to cloud resources in which users access multiple and largely independent cloud infrastructures through a highly customizable broker layer. This approach allows for a uniform authentication and authorization infrastructure, a fine-grained policy specification and the aggregation of accounting and monitoring. Within a loosely coupled federation of cloud infrastructures, users can access vast amounts of data without copying them across cloud infrastructures and can scale their resource provisions when the local cloud resources become insufficient.
NASA Astrophysics Data System (ADS)
Sun, P.; Jokipii, J. R.; Giacalone, J.
2016-12-01
Anisotropy in astrophysical turbulence has long been proposed and observed. Recent observations adopting multi-scale analysis techniques have provided a detailed description of the scale-dependent power spectrum of the magnetic field parallel and perpendicular to the scale-dependent magnetic field line at different scales in the solar wind. In previous work, we proposed a multi-scale method to synthesize non-isotropic turbulent magnetic fields with pre-determined power spectra of the fluctuating magnetic field as a function of scale. Here we present the effects on test-particle transport in the resulting field with a two-scale algorithm. We find that scale-dependent turbulence anisotropy affects charged-particle transport significantly differently than isotropy or global anisotropy does. It is important to apply this field synthesis method to the solar wind magnetic field based on spacecraft data; however, this relies on how we extract the power spectra of the turbulent magnetic field across different scales. In this study, we propose a power spectrum synthesis method based on Fourier analysis to extract the large- and small-scale power spectra from a single-spacecraft observation with a sufficiently long period and a high sampling frequency. We apply the method to solar wind measurements by the magnetometer onboard the ACE spacecraft and regenerate the large-scale isotropic 2D spectrum and the small-scale anisotropic 2D spectrum. We run test-particle simulations in the magnetic field generated in this way to estimate the transport coefficients and to compare with the isotropic turbulence model.
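The spectrum-extraction step described above can be sketched with a simple periodogram estimate from a single sampled series. This is a generic pure-Python illustration using a naive DFT; the paper's actual method separates parallel and perpendicular 2D spectra across scales, which this toy does not attempt.

```python
import math

def power_spectrum(signal):
    """Periodogram |X_k|^2 / N via a naive DFT over the positive bins.
    Fine for short records; a real pipeline would use an FFT."""
    n = len(signal)
    spec = []
    for k in range(n // 2):
        re = sum(x * math.cos(2 * math.pi * k * t / n)
                 for t, x in enumerate(signal))
        im = sum(-x * math.sin(2 * math.pi * k * t / n)
                 for t, x in enumerate(signal))
        spec.append((re * re + im * im) / n)
    return spec

# A pure tone completing 4 cycles per record puts all power in bin 4.
tone = [math.sin(2 * math.pi * 4 * t / 64) for t in range(64)]
spec = power_spectrum(tone)
```

Applied to a long magnetometer time series, the same idea (in FFT form, with appropriate windowing) yields the power at each frequency bin from which scale-dependent spectra can be assembled.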
Masso, Malcolm; Thompson, Cristina
2016-01-01
The context for the paper was the evaluation of a national program in Australia to investigate extended scopes of practice for health professionals (paramedics, physiotherapists, and nurses). The design of the evaluation involved a mixed-methods approach with multiple data sources. Four multidisciplinary models of extended scope of practice were tested over an 18-month period, involving 26 organizations, 224 health professionals, and 36 implementation sites. The evaluation focused on what could be learned to inform scaling up the extended scopes of practice on a national scale. The evaluation findings were used to develop a conceptual framework for use by clinicians, managers, and policy makers to determine appropriate strategies for scaling up effective innovations. Development of the framework was informed by the literature on the diffusion of innovations, particularly an understanding that certain attributes of innovations influence adoption. The framework recognizes the role played by three groups of stakeholders: evidence producers, evidence influencers, and evidence adopters. The use of the framework is illustrated with four case studies from the evaluation. The findings demonstrate how the scaling up of innovations can be influenced by three quite distinct approaches - letting adoption take place in an uncontrolled, unplanned, way; actively helping the process of adoption; or taking deliberate steps to ensure that adoption takes place. Development of the conceptual framework resulted in two sets of questions to guide decisions about scalability, one for those considering whether to adopt the innovation (evidence adopters), and the other for those trying to decide on the optimal strategy for dissemination (evidence influencers).
NASA Astrophysics Data System (ADS)
Trujillo, E.; Giometto, M. G.; Leonard, K. C.; Maksym, T. L.; Meneveau, C. V.; Parlange, M. B.; Lehning, M.
2014-12-01
Sea ice-atmosphere interactions are major drivers of patterns of sea ice drift and deformation in the Polar regions, and affect snow erosion and deposition at the surface. Here, we combine analyses of sea ice surface topography at very high resolutions (1-10 cm) with Large Eddy Simulations (LES) to study surface drag and snow erosion and deposition patterns from process scales to floe scales (1 cm - 100 m). The snow/ice elevations were obtained using a Terrestrial Laser Scanner during the SIPEX II (Sea Ice Physics and Ecosystem eXperiment II) research voyage to East Antarctica (September-November 2012). LES are performed on a regular domain adopting a mixed pseudo-spectral/finite difference spatial discretization. A scale-dependent dynamic subgrid-scale model based on Lagrangian time averaging is adopted to determine the eddy viscosity in the bulk of the flow. Effects of larger-scale features of the surface on wind flows (those features that can be resolved in the LES) are accounted for through an immersed boundary method. Conversely, drag forces caused by subgrid-scale features of the surface should be accounted for through a parameterization. However, the effective aerodynamic roughness parameter z0 for snow/ice is not known. Hence, a novel dynamic approach is utilized, in which z0 is determined using the constraint that the total momentum flux (drag) must be independent of the grid-filter scale. We focus on three ice floe surfaces. The first of these surfaces (October 6, 2012) is used to test the performance of the model, validate the algorithm, and study the spatially distributed fields of resolved and modeled stress components. The following two surfaces, scanned at the same location before and after a snow storm event (October 20/23, 2012), are used in an application studying how spatially resolved mean flow and turbulence relate to observed patterns of snow erosion and deposition. 
We show how erosion and deposition patterns are correlated with the computed stresses, with modeled stresses having higher explanatory power. Deposition is mainly occurring in wake regions of specific ridges that strongly affect wind flow patterns. These larger ridges also lock in place elongated streaks of relatively high speeds with axes along the stream-wise direction, and which are largely responsible for the observed erosion.
New tuberculosis technologies: challenges for retooling and scale-up.
Pai, M; Palamountain, K M
2012-10-01
The availability of new tools does not mean that they will be adopted, used correctly, scaled up or have public health impact. Experience to date with new diagnostics suggests that many national tuberculosis programmes (NTPs) in high-burden countries are reluctant to adopt and scale up new tools, even when these are backed by evidence and global policy recommendations. We suggest that there are several common barriers to effective national adoption and scale-up of new technologies: global policy recommendations that do not provide sufficient information for scale-up, complex decision-making processes and weak political commitment at the country level, limited engagement of and support to NTP managers, high cost of tools and poor fit with user needs, unregulated markets and inadequate business models, limited capacity for laboratory strengthening and implementation research, and insufficient advocacy and donor support. Overcoming these barriers will require enhanced country-level advocacy, resources, technical assistance and political commitment. Some of the BRICS (Brazil, Russia, India, China, South Africa) countries are emerging as early adopters of policies and technologies, and are increasing their investments in TB control. They may provide the first opportunities to fully assess the public health impact of new tools.
The value of the Semantic Web in the laboratory.
Frey, Jeremy G
2009-06-01
The Semantic Web is beginning to have an impact on the wider chemical and physical sciences, beyond the earlier-adopting field of bioinformatics. While useful in large-scale data-driven science with automated processing, these technologies can also help integrate the work of smaller-scale laboratories producing diverse data. The semantics aid discovery and reliable re-use of data, provide improved provenance, and facilitate automated processing through increased resilience to changes in presentation and reduced ambiguity. The Semantic Web, its tools and collections are not yet competitive with well-established solutions to current problems. It is in the reduced cost of instituting solutions to new problems that the versatility of Semantic Web-enabled data and resources will make their mark once the more general-purpose tools are more available.
The cartography of Venus with Magellan data
NASA Technical Reports Server (NTRS)
Kirk, R. L.; Morgan, H. F.; Russell, J. F.
1993-01-01
Maps of Venus based on Magellan data are being compiled at 1:50,000,000, 1:5,000,000 and 1:1,500,000 scales. Topographic contour lines based on radar altimetry data are overprinted on the image maps, along with feature nomenclature. Map controls are based on existing knowledge of the spacecraft orbit; photogrammetric triangulation, a traditional basis for geodetic control for bodies where framing cameras were used, is not feasible with the radar images of Venus. Preliminary synthetic aperture radar (SAR) image maps have some data gaps and cosmetic inconsistencies, which will be corrected on final compilations. Eventual revision of geodetic controls and of the adopted Venusian spin-axis location will result in geometric adjustments, particularly on large-scale maps.
2012-01-01
Background A commitment to Electronic Health Record (EHR) systems now constitutes a core part of many governments’ healthcare reform strategies. The resulting politically-initiated large-scale or national EHR endeavors are challenging because of their ambitious agendas of change, the scale of resources needed to make them work, the (relatively) short timescales set, and the large number of stakeholders involved, all of whom pursue somewhat different interests. These initiatives need to be evaluated to establish if they improve care and represent value for money. Methods Critical reflections on these complexities in the light of experience of undertaking the first national, longitudinal, and sociotechnical evaluation of the implementation and adoption of England’s National Health Service’s Care Records Service (NHS CRS). Results/discussion We advance two key arguments. First, national programs for EHR implementations are likely to take place in the shifting sands of evolving sociopolitical and sociotechnical contexts, which are likely to shape them in significant ways. This poses challenges to conventional evaluation approaches which draw on a model of baseline operations → intervention → changed operations (outcome). Second, evaluation of such programs must account for this changing context by adapting to it. This requires careful and creative choice of ontological, epistemological and methodological assumptions. Summary New and significant challenges are faced in evaluating national EHR implementation endeavors. 
Based on experiences from this national evaluation of the implementation and adoption of the NHS CRS in England, we argue for an approach to these evaluations which moves away from seeing EHR systems as Information and Communication Technologies (ICT) projects requiring an essentially outcome-centred assessment towards a more interpretive approach that reflects the situated and evolving nature of EHR seen within multiple specific settings and reflecting a constantly changing milieu of policies, strategies and software, with constant interactions across such boundaries. PMID:22545646
Takian, Amirhossein; Petrakaki, Dimitra; Cornford, Tony; Sheikh, Aziz; Barber, Nicholas
2012-04-30
A commitment to Electronic Health Record (EHR) systems now constitutes a core part of many governments' healthcare reform strategies. The resulting politically-initiated large-scale or national EHR endeavors are challenging because of their ambitious agendas of change, the scale of resources needed to make them work, the (relatively) short timescales set, and the large number of stakeholders involved, all of whom pursue somewhat different interests. These initiatives need to be evaluated to establish if they improve care and represent value for money. Critical reflections on these complexities in the light of experience of undertaking the first national, longitudinal, and sociotechnical evaluation of the implementation and adoption of England's National Health Service's Care Records Service (NHS CRS). We advance two key arguments. First, national programs for EHR implementations are likely to take place in the shifting sands of evolving sociopolitical and sociotechnical contexts, which are likely to shape them in significant ways. This poses challenges to conventional evaluation approaches which draw on a model of baseline operations → intervention → changed operations (outcome). Second, evaluation of such programs must account for this changing context by adapting to it. This requires careful and creative choice of ontological, epistemological and methodological assumptions. New and significant challenges are faced in evaluating national EHR implementation endeavors. 
Based on experiences from this national evaluation of the implementation and adoption of the NHS CRS in England, we argue for an approach to these evaluations which moves away from seeing EHR systems as Information and Communication Technologies (ICT) projects requiring an essentially outcome-centred assessment towards a more interpretive approach that reflects the situated and evolving nature of EHR seen within multiple specific settings and reflecting a constantly changing milieu of policies, strategies and software, with constant interactions across such boundaries. PMID:22545646
Deep convolutional neural network based antenna selection in multiple-input multiple-output system
NASA Astrophysics Data System (ADS)
Cai, Jiaxin; Li, Yan; Hu, Ying
2018-03-01
Antenna selection in wireless communication systems has attracted increasing attention due to the challenge of balancing communication performance against computational complexity in large-scale Multiple-Input Multiple-Output antenna systems. Recently, deep learning based methods have achieved promising performance for large-scale data processing and analysis in many application fields. This paper is the first attempt to introduce the deep learning technique into the field of Multiple-Input Multiple-Output antenna selection in wireless communications. First, the label of the attenuation-coefficient channel matrix is generated by minimizing the key performance indicator on the training antenna systems. Then, a deep convolutional neural network that explicitly exploits the massive latent cues in the attenuation coefficients is learned on the training antenna systems. Finally, we use the trained deep convolutional neural network to classify the channel-matrix labels of test antennas and select the optimal antenna subset. Simulation results demonstrate that our method achieves better performance than state-of-the-art baselines for data-driven wireless antenna selection.
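The label-generation step described in this abstract can be illustrated with a minimal sketch. Assuming the key performance indicator is Shannon channel capacity (the abstract does not name the exact KPI), each training channel matrix is labeled with the index of the antenna subset that maximizes it; a CNN would then be trained to predict that label from the raw matrix. All function names and the capacity criterion here are illustrative assumptions, not the paper's code.

```python
import numpy as np
from itertools import combinations

def subset_capacity(H, subset, snr=10.0):
    """Shannon capacity of the MIMO channel restricted to the given receive antennas."""
    Hs = H[list(subset), :]                      # rows = selected receive antennas
    k = Hs.shape[1]
    G = Hs.conj().T @ Hs                         # k x k Gram matrix
    return np.log2(np.linalg.det(np.eye(k) + (snr / k) * G).real)

def label_channel(H, n_select):
    """Return the index of the capacity-maximizing antenna subset (the training label)."""
    subsets = list(combinations(range(H.shape[0]), n_select))
    caps = [subset_capacity(H, s) for s in subsets]
    return int(np.argmax(caps)), subsets

rng = np.random.default_rng(0)
# 6 receive antennas, 2 transmit antennas; pick the best 2-antenna receive subset
H = (rng.standard_normal((6, 2)) + 1j * rng.standard_normal((6, 2))) / np.sqrt(2)
label, subsets = label_channel(H, 2)
print(subsets[label])   # this optimal subset becomes the class label for the CNN
```

For realistic array sizes the exhaustive search over subsets is exactly the cost the trained network is meant to avoid at test time.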
A visualization tool to support decision making in environmental and biological planning
Romañach, Stephanie S.; McKelvy, James M.; Conzelmann, Craig; Suir, Kevin J.
2014-01-01
Large-scale ecosystem management involves consideration of many factors for informed decision making. The EverVIEW Data Viewer is a cross-platform desktop decision support tool to help decision makers compare simulation model outputs from competing plans for restoring Florida's Greater Everglades. The integration of NetCDF metadata conventions into EverVIEW allows end-users from multiple institutions within and beyond the Everglades restoration community to share information and tools. Our development process incorporates continuous interaction with targeted end-users for increased likelihood of adoption. One of EverVIEW's signature features is side-by-side map panels, which can be used to simultaneously compare species or habitat impacts from alternative restoration plans. Other features include examination of potential restoration plan impacts across multiple geographic or tabular displays, and animation through time. As a result of an iterative, standards-driven approach, EverVIEW is relevant to large-scale planning beyond Florida, and is used in multiple biological planning efforts in the United States.
A large-scale solar dynamics observatory image dataset for computer vision applications.
Kucuk, Ahmet; Banda, Juan M; Angryk, Rafal A
2017-01-01
The National Aeronautics and Space Administration (NASA) Solar Dynamics Observatory (SDO) mission has given us unprecedented insight into the Sun's activity. By capturing approximately 70,000 images a day, this mission has created one of the richest and largest repositories of solar image data available to mankind. With such massive amounts of information, researchers have been able to produce great advances in detecting solar events. In this resource, we compile SDO solar data into a single repository in order to provide the computer vision community with a standardized and curated large-scale dataset of several hundred thousand solar events found on high-resolution solar images. This publicly available resource, along with the generation source code, will accelerate computer vision research on NASA's solar image data by reducing the amount of time spent performing data acquisition and curation from the multiple sources we have compiled. By improving the quality of the data with thorough curation, we anticipate wider adoption and interest from both the computer vision and the solar physics communities.
Collaborative Mining and Interpretation of Large-Scale Data for Biomedical Research Insights
Tsiliki, Georgia; Karacapilidis, Nikos; Christodoulou, Spyros; Tzagarakis, Manolis
2014-01-01
Biomedical research is becoming increasingly interdisciplinary and collaborative in nature. Researchers need to efficiently and effectively collaborate and make decisions by meaningfully assembling, mining and analyzing available large-scale volumes of complex multi-faceted data residing in different sources. In line with related research directives revealing that, in spite of the recent advances in data mining and computational analysis, humans can easily detect patterns which computer algorithms may have difficulty in finding, this paper reports on the practical use of an innovative web-based collaboration support platform in a biomedical research context. Arguing that dealing with data-intensive and cognitively complex settings is not a technical problem alone, the proposed platform adopts a hybrid approach that builds on the synergy between machine and human intelligence to facilitate the underlying sense-making and decision-making processes. User experience shows that the platform enables more informed and quicker decisions by displaying aggregated information according to users' needs, while also exploiting the associated human intelligence. PMID:25268270
Mobilization strategy to overcome global crisis of water consumption
NASA Astrophysics Data System (ADS)
Suzdaleva, Antonina; Goryunova, Svetlana; Marchuk, Aleksey; Borovkov, Valery
2017-10-01
Today, the global water consumption crisis is one of the main threats that can disrupt the socio-economic and environmental conditions of life of the majority of the world's population. The water consumption mobilization strategy is based on the idea of increasing the available water resources. The main direction for the implementation of this strategy is the construction of anti-rivers: systems for inter-basin (interregional) redistribution of water resources. Anti-rivers are intended for the controlled redistribution of water resources from regions with a catastrophic excess to regions with a critical shortage. The creation of anti-rivers, taking into account the requirements of environmental safety, will form large-scale managed natural-engineering systems and implement the principle of sustainable development adopted by the United Nations. The aim of the article is to substantiate a new methodological approach to this problem, whose implementation can prevent the large-scale humanitarian and environmental disasters expected in the coming years.
Shaikh, Babar Tasneem; Mazhar, Arslan; Khan, Shahzad Ali; Hafeez, Assad
2011-01-01
Globally, a billion people cannot seek appropriate and timely healthcare because they are not covered by any social protection or health insurance system. In countries where government financing for health care is meagre, the situation is even worse. Pakistan, with its slowly improving indicators of maternal and child health, makes a classical case for instigating a social protection mechanism for the poor segments of the population. Government safety nets are unable to cater to the large proportion of the population that is poor. NGOs partially cover the rural areas where the majority of the vulnerable population lives, but need to expand their scope of work. Donors have presented a variety of models and frameworks, which were seldom considered in the concerned quarters. All stakeholders ought to strategise their plans to adopt and scale up the successful interventions (vouchers, cash transfers, micro-credits, community-based insurance, etc.) which have been operating, but on a very small scale or for other types of health services, and none for reproductive health care per se. Adoption of risk-pooling mechanisms and provision of accessible and quality reproductive health services seems feasible through a meaningful and integrated public-private partnership in the times to come.
From Global Stresses to Local Cell Packing During Development
NASA Astrophysics Data System (ADS)
Lubensky, David
2011-03-01
To perform their functions, cells in epithelial tissues must often adopt highly regular packings. It is still not fully understood how these ordered arrangements of cells arise from disordered, proliferative epithelia during development. I will use experimental and theoretical studies on an attractive model system, the cone cell mosaic in fish retina, to illustrate some ways that mechanical forces and cell signaling can interact to produce this transformation. Experiments examining the response to surgical lesions suggest that the correct mechanical environment at the tissue scale is essential to induce cone cells to rearrange into a rectangular lattice. Starting from this observation, I will argue that large-scale mechanical stresses naturally couple to and orient cell polarization and that this coupling can lead cells to line up in regular rows, as observed in the fish retina. This model predicts that cells in the rows will adopt characteristic trapezoidal shapes and that fragments of rows will persist even in tissue where the mosaic pattern is disrupted by lesions; these predictions are borne out by an analysis of cell packings at the level of the zonula occludens in wildtype and lesioned retinas. Supported by NSF grant IOS-0952873.
Integrating Green and Blue Water Management Tools for Land and Water Resources Planning
NASA Astrophysics Data System (ADS)
Jewitt, G. P. W.
2009-04-01
The role of land use and land use change in the hydrological cycle is well known. However, the impacts of large-scale land use change are poorly considered in water resources planning unless they require direct abstraction of water resources and associated development of infrastructure, e.g. irrigation schemes. However, large-scale deforestation for the supply of raw materials, expansion of the areas of plantation forestry, increasing areas under food production and major plans for the cultivation of biofuels in many developing countries are likely to result in extensive land use change. Given the spatial extent and temporal longevity of these proposed developments, major impacts on water resources are inevitable. It is imperative that managers and planners consider the consequences of such developments for downstream ecosystems and users. However, many popular tools, such as the virtual water approach, provide only coarse-scale "order of magnitude" estimates with poor consideration of, and limited usefulness for, land use planning. In this paper, a framework for considering the impacts of large-scale land use change on water resources at a range of temporal and spatial scales is presented. Drawing on experiences from South Africa, where the establishment of exotic commercial forest plantations is only permitted once a water use license has been granted, the framework adopts the "green water" concept for the identification of potential high-impact areas of land use change and provides for integration with traditional "blue water" resources planning tools for more detailed planning.
Appropriate tools, ranging from simple spreadsheet solutions to more sophisticated remote sensing and hydrological models, are described, and the application of the framework to the water resources impacts associated with the establishment of large-scale Tectona grandis, sugar cane and Jatropha curcas plantations is illustrated through examples in Mozambique and South Africa. Keywords: land use change, water resources, green water, blue water, biofuels, developing countries
NASA Astrophysics Data System (ADS)
Jayakumarai, G.; Gokulpriya, C.; Sudhapriya, R.; Sharmila, G.; Muthukumaran, C.
2015-12-01
A simple, effective and rapid approach for the green synthesis of copper oxide nanoparticles (CONPs) using Albizia lebbeck leaf extract was investigated in this study. Various instrumental techniques were adopted to characterize the synthesized CONPs, viz. UV-Vis spectroscopy, SEM, TEM, EDS and XRD. The synthesized CONPs were found to be spherical in shape, with sizes less than 100 nm. It can be concluded that A. lebbeck leaf extract can be used as a cheap and effective reducing agent for large-scale CONP production.
Azobenzene-functionalized carbon nanotubes as high-energy density solar thermal fuels.
Kolpak, Alexie M; Grossman, Jeffrey C
2011-08-10
Solar thermal fuels, which reversibly store solar energy in molecular bonds, are a tantalizing prospect for clean, renewable, and transportable energy conversion/storage. However, large-scale adoption requires enhanced energy storage capacity and thermal stability. Here we present a novel solar thermal fuel, composed of azobenzene-functionalized carbon nanotubes, with the volumetric energy density of Li-ion batteries. Our work also demonstrates that the inclusion of nanoscale templates is an effective strategy for design of highly cyclable, thermally stable, and energy-dense solar thermal fuels.
Identification of Curie temperature distributions in magnetic particulate systems
NASA Astrophysics Data System (ADS)
Waters, J.; Berger, A.; Kramer, D.; Fangohr, H.; Hovorka, O.
2017-09-01
This paper develops a methodology for extracting the Curie temperature distribution from magnetisation versus temperature measurements which are realizable by standard laboratory magnetometry. The method is integral in nature, robust against various sources of measurement noise, and can be adapted to a wide range of granular magnetic materials and magnetic particle systems. The validity and practicality of the method are demonstrated using large-scale Monte Carlo simulations of an Ising-like model as a proof of concept, and general conclusions are drawn about its applicability to different classes of systems and experimental conditions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gillingham, Kenneth; Bollinger, Bryan
This is the final report for a systematic, evidence-based project using an unprecedented series of large-scale field experiments to examine the effectiveness and cost-effectiveness of novel approaches to reduce the soft costs of residential solar photovoltaics. The approaches were based around grassroots marketing campaigns, called 'Solarize' campaigns, designed to lower costs and increase adoption of solar technology. This study quantified the effectiveness and cost-effectiveness of the Solarize programs and tested new approaches to further improve the model.
NASA Astrophysics Data System (ADS)
Favata, Antonino; Micheletti, Andrea; Ryu, Seunghwa; Pugno, Nicola M.
2016-10-01
An analytical benchmark and a simple consistent Mathematica program are proposed for graphene and carbon nanotubes, which may serve to test any molecular dynamics code implemented with REBO potentials. By exploiting the benchmark, we checked the results produced by LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator) when adopting the second-generation Brenner potential, showed that this code in its current implementation produces results which are offset from those of the benchmark by a significant amount, and provide evidence of the reason.
Scaled-model guidelines for formation-flying solar coronagraph missions.
Landini, Federico; Romoli, Marco; Baccani, Cristian; Focardi, Mauro; Pancrazzi, Maurizio; Galano, Damien; Kirschner, Volker
2016-02-15
Stray light suppression is the main concern in designing a solar coronagraph. The main contribution to the stray light for an externally occulted space-borne solar coronagraph is the light diffracted by the occulter and scattered by the optics. It is mandatory to carefully evaluate the diffraction generated by an external occulter and the impact that it has on the stray light signal on the focal plane. The scientific need for observations to cover a large portion of the heliosphere with an inner field of view as close as possible to the photospheric limb supports the ambition of launching formation-flying giant solar coronagraphs. Their dimension prevents the possibility of replicating the flight geometry in a clean laboratory environment, and the strong need for a scaled model is thus envisaged. The problem of scaling a coronagraph has already been faced for exoplanets, for a single point source on axis at infinity. We face the problem here by adopting an original approach and by introducing the scaling of the solar disk as an extended source.
Red, Straight, no bends: primordial power spectrum reconstruction from CMB and large-scale structure
NASA Astrophysics Data System (ADS)
Ravenni, Andrea; Verde, Licia; Cuesta, Antonio J.
2016-08-01
We present a minimally parametric, model independent reconstruction of the shape of the primordial power spectrum. Our smoothing spline technique is well-suited to search for smooth features such as deviations from scale invariance, and deviations from a power law such as running of the spectral index or small-scale power suppression. We use a comprehensive set of state-of-the-art cosmological data: Planck observations of the temperature and polarisation anisotropies of the cosmic microwave background, WiggleZ and Sloan Digital Sky Survey Data Release 7 galaxy power spectra and the Canada-France-Hawaii Lensing Survey correlation function. This reconstruction strongly supports the evidence for a power law primordial power spectrum with a red tilt and disfavours deviations from a power law power spectrum including small-scale power suppression such as that induced by significantly massive neutrinos. This offers a powerful confirmation of the inflationary paradigm, justifying the adoption of the inflationary prior in cosmological analyses.
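The smoothing-spline idea in this abstract can be sketched in miniature: fit a penalized spline to a noisy mock power spectrum in log-log space and read the tilt off its slope. This is an illustrative toy using `scipy.interpolate.UnivariateSpline`, not the authors' pipeline; the mock spectral index and the smoothing parameter are assumptions.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(1)
ns = 0.965                        # assumed red spectral tilt (n_s < 1)
k = np.logspace(-4, 0, 200)       # mock wavenumbers
# Mock "measured" log-spectrum: a pure power law P(k) ∝ k^(n_s - 1) plus noise
lnP = (ns - 1.0) * np.log(k) + rng.normal(0.0, 0.02, k.size)

# Cubic smoothing spline in log-log space; s sets the residual (smoothness) budget
spl = UnivariateSpline(np.log(k), lnP, k=3, s=k.size * 0.02**2)

# For a power law, d(ln P)/d(ln k) is constant and equals n_s - 1
slopes = spl.derivative()(np.log(k))
print(round(float(np.mean(slopes)) + 1.0, 3))   # recovered spectral index
```

A genuine feature (e.g. small-scale suppression) would show up as a systematic departure of `slopes` from a constant, which is exactly what the minimally parametric reconstruction is designed to detect.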
DOE Office of Scientific and Technical Information (OSTI.GOV)
Katti, Amogh; Di Fatta, Giuseppe; Naughton, Thomas
Future extreme-scale high-performance computing systems will be required to work under frequent component failures. The MPI Forum's User Level Failure Mitigation proposal has introduced an operation, MPI_Comm_shrink, to synchronize the alive processes on the list of failed processes, so that applications can continue to execute even in the presence of failures by adopting algorithm-based fault tolerance techniques. This MPI_Comm_shrink operation requires a failure detection and consensus algorithm. This paper presents three novel failure detection and consensus algorithms using gossiping. The proposed algorithms were implemented and tested using the Extreme-scale Simulator. The results show that in all algorithms the number of gossip cycles to achieve global consensus scales logarithmically with system size. The second algorithm also shows better scalability in terms of memory and network bandwidth usage and a perfect synchronization in achieving global consensus. The third approach is a three-phase distributed failure detection and consensus algorithm and provides consistency guarantees even in very large and extreme-scale systems while at the same time being memory and bandwidth efficient.
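A minimal simulation conveys why gossip-based consensus on the failed-process list scales logarithmically: each cycle, every alive process merges its suspicion set with a random peer's, so knowledge of failures spreads roughly by doubling. This sketch is illustrative only and does not reproduce any of the paper's three algorithms.

```python
import random

def gossip_consensus(n, failed, seed=0):
    """Simulate push-pull gossip: each alive process ORs its failed-process view
    with one random peer per cycle until all alive views agree."""
    rng = random.Random(seed)
    alive = [p for p in range(n) if p not in failed]
    # Each failure is initially observed by a single (random) alive process
    views = {p: set() for p in alive}
    for f in failed:
        views[rng.choice(alive)].add(f)
    cycles = 0
    while any(v != set(failed) for v in views.values()):
        cycles += 1
        for p in alive:
            q = rng.choice(alive)
            merged = views[p] | views[q]   # exchange and merge suspicion lists
            views[p] = views[q] = set(merged)
        if cycles > 10 * n:                # safety cut-off for the sketch
            break
    return cycles

print(gossip_consensus(64, failed={3, 17}))  # cycles until global consensus
```

Rerunning with larger `n` shows the cycle count growing roughly with log(n), the scaling the paper reports for its simulator experiments.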
Application of the resource-based relative value scale system to pediatrics.
Gerstle, Robert S; Molteni, Richard A; Andreae, Margie C; Bradley, Joel F; Brewer, Eileen D; Calabrese, Jamie; Krug, Steven E; Liechty, Edward A; Linzer, Jeffrey F; Pillsbury, Julia M; Tuli, Sanjeev Y
2014-06-01
The majority of public and private payers in the United States currently use the Medicare Resource-Based Relative Value Scale as the basis for physician payment. Many large group and academic practices have adopted this objective system of physician work to benchmark physician productivity, including using it, wholly or in part, to determine compensation. The Resource-Based Relative Value Scale survey instrument, used to value physician services, was designed primarily for procedural services, leading to current concerns that American Medical Association/Specialty Society Relative Value Scale Update Committee (RUC) surveys may undervalue nonprocedural evaluation and management services. The American Academy of Pediatrics is represented on the RUC, the committee charged with maintaining accurate physician work values across specialties and age groups. The Academy, working closely with other primary care and subspecialty societies, actively pursues a balanced RUC membership and a survey instrument that will ensure appropriate work relative value unit assignments, thereby allowing pediatricians to receive appropriate payment for their services relative to other services.
TDat: An Efficient Platform for Processing Petabyte-Scale Whole-Brain Volumetric Images.
Li, Yuxin; Gong, Hui; Yang, Xiaoquan; Yuan, Jing; Jiang, Tao; Li, Xiangning; Sun, Qingtao; Zhu, Dan; Wang, Zhenyu; Luo, Qingming; Li, Anan
2017-01-01
Three-dimensional imaging of whole mammalian brains at single-neuron resolution has generated terabyte (TB)- and even petabyte (PB)-sized datasets. Due to their size, processing these massive image datasets can be hindered by the computer hardware and software typically found in biological laboratories. To fill this gap, we have developed an efficient platform named TDat, which adopts a novel data reformatting strategy by reading cuboid data and employing parallel computing. In data reformatting, TDat is more efficient than any other software. In data accessing, we adopted parallelization to fully exploit the capability for data transmission in computers. We applied TDat to large-volume rigid registration and neuron tracing in whole-brain data with single-neuron resolution, which has never been demonstrated in other studies. We also showed its compatibility with various computing platforms, image processing software and imaging systems.
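The cuboid-reading strategy can be sketched as follows: partition the volume into cuboid blocks and hand them to a worker pool, as TDat does during data reformatting. The block size, the tiny in-memory array, and the stand-in "work" (a block sum) are assumptions for illustration; TDat itself streams petabyte-scale files from disk.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor
from itertools import product

def cuboid_blocks(shape, block):
    """Yield slice tuples covering a 3-D volume in cuboid chunks."""
    for z, y, x in product(*(range(0, s, b) for s, b in zip(shape, block))):
        yield (slice(z, min(z + block[0], shape[0])),
               slice(y, min(y + block[1], shape[1])),
               slice(x, min(x + block[2], shape[2])))

# Tiny stand-in volume; a real whole-brain dataset would be memory-mapped
volume = np.arange(4 * 6 * 8, dtype=np.float32).reshape(4, 6, 8)

def process(sl):
    return volume[sl].sum()       # stand-in for reformat-and-write work

with ThreadPoolExecutor(max_workers=4) as pool:
    partial = list(pool.map(process, cuboid_blocks(volume.shape, (2, 3, 4))))

print(len(partial))               # number of cuboid blocks processed
```

Because each cuboid is independent, the same pattern parallelizes across threads, processes, or cluster nodes without changing the block iterator.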
Emergence of highly transparent photovoltaics for distributed applications
NASA Astrophysics Data System (ADS)
Traverse, Christopher J.; Pandey, Richa; Barr, Miles C.; Lunt, Richard R.
2017-11-01
Solar energy offers a viable solution to our growing energy need. While adoption of conventional photovoltaics on rooftops and in solar farms has grown rapidly in the last decade, there is still plenty of opportunity for expansion. See-through solar technologies with partial light transmission developed over the past 30 years have initiated methods of integration not possible with conventional modules. The large-scale deployment necessary to offset global energy consumption could be further accelerated by developing fully invisible solar cells that selectively absorb ultraviolet and near-infrared light, allowing many of the surfaces of our built environment to be turned into solar harvesting arrays without impacting the function or aesthetics. Here, we review recent advances in photovoltaics with varying degrees of visible light transparency. We discuss the figures of merit necessary to characterize transparent photovoltaics, and outline the requirements to enable their widespread adoption in buildings, windows, electronic device displays, and automobiles.
Masso, Malcolm; Thompson, Cristina
2016-01-01
The context for the paper was the evaluation of a national program in Australia to investigate extended scopes of practice for health professionals (paramedics, physiotherapists, and nurses). The design of the evaluation involved a mixed-methods approach with multiple data sources. Four multidisciplinary models of extended scope of practice were tested over an 18-month period, involving 26 organizations, 224 health professionals, and 36 implementation sites. The evaluation focused on what could be learned to inform scaling up the extended scopes of practice on a national scale. The evaluation findings were used to develop a conceptual framework for use by clinicians, managers, and policy makers to determine appropriate strategies for scaling up effective innovations. Development of the framework was informed by the literature on the diffusion of innovations, particularly an understanding that certain attributes of innovations influence adoption. The framework recognizes the role played by three groups of stakeholders: evidence producers, evidence influencers, and evidence adopters. The use of the framework is illustrated with four case studies from the evaluation. The findings demonstrate how the scaling up of innovations can be influenced by three quite distinct approaches – letting adoption take place in an uncontrolled, unplanned way; actively helping the process of adoption; or taking deliberate steps to ensure that adoption takes place. Development of the conceptual framework resulted in two sets of questions to guide decisions about scalability, one for those considering whether to adopt the innovation (evidence adopters), and the other for those trying to decide on the optimal strategy for dissemination (evidence influencers). PMID:27616889
NASA Astrophysics Data System (ADS)
Cassani, Mary Kay Kuhr
The objective of this study was to evaluate the effect of two pedagogical models used in general education science on non-majors' science teaching self-efficacy. Science teaching self-efficacy can be influenced by inquiry and cooperative learning, through cognitive mechanisms described by Bandura (1997). The Student Centered Activities for Large Enrollment Undergraduate Programs (SCALE-UP) model of inquiry and cooperative learning incorporates cooperative learning and inquiry-guided learning in large enrollment combined lecture-laboratory classes (Oliver-Hoyo & Beichner, 2004). SCALE-UP was adopted by a small but rapidly growing public university in the southeastern United States in three undergraduate, general education science courses for non-science majors in the Fall 2006 and Spring 2007 semesters. Students in these courses were compared with students in three other general education science courses for non-science majors taught with the standard teaching model at the host university. The standard model combines lecture and laboratory in the same course, with smaller enrollments and utilizes cooperative learning. Science teaching self-efficacy was measured using the Science Teaching Efficacy Belief Instrument - B (STEBI-B; Bleicher, 2004). A science teaching self-efficacy score was computed from the Personal Science Teaching Efficacy (PTSE) factor of the instrument. Using non-parametric statistics, no significant difference was found between teaching models, between genders, within models, among instructors, or among courses. The number of previous science courses was significantly correlated with PTSE score. Student responses to open-ended questions indicated that students felt the larger enrollment in the SCALE-UP room reduced individual teacher attention but that the large round SCALE-UP tables promoted group interaction. 
Students responded positively to cooperative and hands-on activities, and would encourage inclusion of more such activities in all of the courses. The large enrollment SCALE-UP model as implemented at the host university did not increase science teaching self-efficacy of non-science majors, as hypothesized. This was likely due to limited modification of standard cooperative activities according to the inquiry-guided SCALE-UP model. It was also found that larger SCALE-UP enrollments did not decrease science teaching self-efficacy when standard cooperative activities were used in the larger class.
GraphReduce: Large-Scale Graph Analytics on Accelerator-Based HPC Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sengupta, Dipanjan; Agarwal, Kapil; Song, Shuaiwen
2015-09-30
Recent work on real-world graph analytics has sought to leverage the massive amount of parallelism offered by GPU devices, but challenges remain due to the inherent irregularity of graph algorithms and limitations in GPU-resident memory for storing large graphs. We present GraphReduce, a highly efficient and scalable GPU-based framework that operates on graphs that exceed the device's internal memory capacity. GraphReduce adopts a combination of both edge- and vertex-centric implementations of the Gather-Apply-Scatter programming model and operates on multiple asynchronous GPU streams to fully exploit the high degrees of parallelism in GPUs, with efficient graph data movement between the host and the device.
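As a rough illustration of the Gather-Apply-Scatter model that GraphReduce builds on, the sketch below runs PageRank as gather (accumulate contributions along in-edges), apply (damped update), and an implicit scatter (the new ranks are read along out-edges next round). It is a CPU/NumPy toy that assumes no dangling vertices, not GraphReduce's GPU implementation.

```python
import numpy as np

def gas_pagerank(edges, n, d=0.85, iters=20):
    """PageRank expressed in the Gather-Apply-Scatter style.
    Assumes every vertex has at least one out-edge (no dangling vertices)."""
    src = np.array([s for s, _ in edges])
    dst = np.array([t for _, t in edges])
    out_deg = np.bincount(src, minlength=n).astype(float)
    rank = np.full(n, 1.0 / n)
    for _ in range(iters):
        # Gather: each vertex sums rank/out_degree over its incoming edges
        contrib = rank[src] / out_deg[src]
        gathered = np.bincount(dst, weights=contrib, minlength=n)
        # Apply: damped update; Scatter happens implicitly when rank is reread
        rank = (1 - d) / n + d * gathered
    return rank

edges = [(0, 1), (1, 2), (2, 0), (2, 1)]
r = gas_pagerank(edges, 3)
print(np.argmax(r))   # vertex with the highest rank
```

An edge-centric variant would stream the `src`/`dst` arrays in partitions, which is what makes the model amenable to out-of-memory GPU processing.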
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khokhra, Richa; Kumar, Rajesh, E-mail: rajesh.kumar@juit.ac.in
2015-05-15
A facile room-temperature, aqueous solution-based chemical method has been adopted for the large-scale synthesis of Fe-doped ZnO nanosheets. The XRD and SEM results reveal that the as-synthesized products are well crystalline and composed of a large number of interwoven nanosheets, respectively. Energy dispersive spectroscopy data confirmed Fe doping of the ZnO nanosheets at varying Fe concentrations. The photoluminescence spectra reveal a continuous suppression of defect-related emission intensity with increasing Fe ion concentration. The photocatalytic activity of these samples under sunlight irradiation in the mineralization of methylene blue dye was investigated. The photocatalytic activity of the Fe-doped ZnO nanosheets depends upon the presence of surface oxygen vacancies.
Low-temperature synthesis of graphene on nickel foil by microwave plasma chemical vapor deposition.
Kim, Y; Song, W; Lee, S Y; Jeon, C; Jung, W; Kim, M; Park, C-Y
2011-06-27
Microwave plasma chemical vapor deposition (MPCVD) was employed to synthesize high-quality, centimeter-scale graphene films at low temperatures. Monolayer graphene was obtained by setting the gas mixing ratio of hydrogen to methane at 80:1. Exploiting the advantages of MPCVD, the synthesis temperature was decreased from 750 °C down to 450 °C. Optical microscopy and Raman mapping images showed that large-area monolayer graphene was synthesized regardless of temperature. Since an overall transparency of 89% and low sheet resistances ranging from 590 to 1855 Ω/sq were achieved at considerably low synthesis temperatures, MPCVD can be adopted in manufacturing future large-area electronic devices based on graphene film.
Low-temperature synthesis of graphene on nickel foil by microwave plasma chemical vapor deposition
NASA Astrophysics Data System (ADS)
Kim, Y.; Song, W.; Lee, S. Y.; Jeon, C.; Jung, W.; Kim, M.; Park, C.-Y.
2011-06-01
Microwave plasma chemical vapor deposition (MPCVD) was employed to synthesize high-quality, centimeter-scale graphene films at low temperatures. Monolayer graphene was obtained by setting the gas mixing ratio of hydrogen to methane at 80:1. Exploiting the advantages of MPCVD, the synthesis temperature was decreased from 750 °C down to 450 °C. Optical microscopy and Raman mapping images showed that large-area monolayer graphene was synthesized regardless of temperature. Since an overall transparency of 89% and low sheet resistances ranging from 590 to 1855 Ω/sq were achieved at considerably low synthesis temperatures, MPCVD can be adopted in manufacturing future large-area electronic devices based on graphene film.
Arrossi, Silvina; Paolino, Melisa; Thouyaret, Laura; Laudi, Rosa; Campanera, Alicia
2017-02-13
Self-collection has been proposed as a strategy to increase cervical screening coverage among hard-to-reach women. However, evaluations of the implementation of this strategy on a large scale are scarce. This paper describes the process and measurement of the scaling-up of self-collection offered by community health workers during home visits as a strategy to reach under-screened women aged 30+ with public health coverage, defined as the target women. We used an adaptation of the Health System Framework to analyze key drivers of scaling-up. A content analysis approach was used to collect and analyze information from different sources. The RE-AIM (Reach, Effectiveness, Adoption, Implementation, and Maintenance) model was used to evaluate the impact of the strategy. HPV self-collection was scaled up in the province of Jujuy in 2014 after an RCT (Self-collection Modality Trial, initials EMA in Spanish) was carried out locally in 2012 and demonstrated the effectiveness of the strategy in increasing screening uptake. Facilitators of scaling-up were the organizational capacity of the provincial health system, sustainable funding for HPV testing, and local consensus about the value of the technology. Reach: In 2014, 9% (2983/33,245) of target women were screened through self-collection in the Jujuy public health sector. Effectiveness: In 2014, 17% (n = 5657/33,245) of target women were screened with any HPV test (self-collected and clinician-collected tests) vs. 11.7% (4579/38,981) in 2013, the pre-scaling-up period (p < 0.0001). Training about the strategy was provided to 84.2% (n = 609/723) of all community health workers (CHWs). Of 414 HPV+ women, 77.5% (n = 320) had follow-up procedures. Of 113 women with positive triage, 66.4% (n = 75) had colposcopic diagnosis. Treatment was provided to 80.7% of CIN2+ women (n = 21/26).
Adoption: Of trained CHWs, 69.3% (n = 422/609) had at least one woman with self-collection; 85.2% (n = 315/368) of CHWs who responded to an evaluation survey were satisfied with the self-collection strategy. Maintenance: During 2015, 100.0% (723/723) of CHWs were operational and 63.8% (461/723) had at least one woman with self-collection. The strategy was successfully scaled up, with a high level of adoption among CHWs, which resulted in increased screening among socially vulnerable under-screened women.
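The RE-AIM indicators quoted above can be recomputed directly from the raw counts the abstract reports; the sketch below is just that arithmetic, with dictionary labels of our own choosing.

```python
# Recompute the headline RE-AIM percentages from the counts reported in the
# abstract (numerator, denominator). The keys are our own shorthand labels.

indicators = {
    "reach_2014": (2983, 33245),          # self-collection screens / target women
    "effectiveness_2014": (5657, 33245),  # any HPV test / target women
    "effectiveness_2013": (4579, 38981),  # pre-scaling-up baseline
    "chw_trained": (609, 723),            # trained CHWs / all CHWs
}

for name, (num, den) in indicators.items():
    print(f"{name}: {100 * num / den:.1f}%")
```

Running this reproduces the abstract's rounded figures (9%, 17%, 11.7%, and 84.2%).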
Evaluating the Properties of the Evidence-Based Practice Attitude Scale (EBPAS) in Health Care
ERIC Educational Resources Information Center
Melas, Christos D.; Zampetakis, Leonidas A.; Dimopoulou, Anastasia; Moustakis, Vassilis
2012-01-01
The Evidence-Based Practice Attitude Scale (EBPAS; Aarons, 2004) is a relatively new construct for the study of attitudes toward the adoption of innovation and evidence-based practices (EBPs) in mental health service settings. Despite widespread interest in measuring the attitudes of health care providers in conjunction with the adoption of EBPs,…
Generating descriptive visual words and visual phrases for large-scale image applications.
Zhang, Shiliang; Tian, Qi; Hua, Gang; Huang, Qingming; Gao, Wen
2011-09-01
Bag-of-visual Words (BoWs) representation has been applied to various problems in the fields of multimedia and computer vision. The basic idea is to represent images as visual documents composed of repeatable and distinctive visual elements, which are comparable to text words. Notwithstanding its great success and wide adoption, the visual vocabulary created from single-image local descriptors is often not as effective as desired. In this paper, descriptive visual words (DVWs) and descriptive visual phrases (DVPs) are proposed as the visual correspondences to text words and phrases, where visual phrases refer to frequently co-occurring visual word pairs. Since images are the carriers of visual objects and scenes, a descriptive visual element set can be composed of the visual words and their combinations that are effective in representing certain visual objects or scenes. Based on this idea, a general framework is proposed for generating DVWs and DVPs for image applications. In a large-scale image database containing 1506 object and scene categories, the visual words and visual word pairs descriptive of certain objects or scenes are identified and collected as the DVWs and DVPs. Experiments show that the DVWs and DVPs are informative and descriptive and, thus, more comparable with text words than the classic visual words. We apply the identified DVWs and DVPs in several applications, including large-scale near-duplicate image retrieval, image search re-ranking, and object recognition. The combination of DVWs and DVPs outperforms the state of the art in large-scale near-duplicate image retrieval in terms of accuracy, efficiency, and memory consumption. The proposed image search re-ranking algorithm, DWPRank, outperforms the state-of-the-art algorithm by 12.4% in mean average precision and is about 11 times faster.
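The core notion of a descriptive visual phrase, a frequently co-occurring visual word pair, can be sketched as a simple co-occurrence count over per-image word sets. This is an illustrative stand-in, not the paper's actual selection procedure, which additionally weighs how descriptive each word or pair is for specific object and scene categories.

```python
from collections import Counter
from itertools import combinations

# Count visual-word pairs that co-occur within the same image and keep the
# frequent ones, mirroring the idea of descriptive visual phrases (DVPs).

def frequent_pairs(image_words, min_count):
    """image_words: list of per-image sets of visual-word ids."""
    pair_counts = Counter()
    for words in image_words:
        # Each unordered pair present in this image counts once.
        for pair in combinations(sorted(set(words)), 2):
            pair_counts[pair] += 1
    return {p: c for p, c in pair_counts.items() if c >= min_count}

images = [{1, 2, 3}, {1, 2}, {2, 3}, {1, 2, 4}]
print(frequent_pairs(images, min_count=3))  # {(1, 2): 3}
```

In a real system the per-image sets would come from quantizing local descriptors against a learned vocabulary, and the counts would be normalized per category.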
Massive gravity wrapped in the cosmic web
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shim, Junsup; Lee, Jounghun; Li, Baojiu, E-mail: jsshim@astro.snu.ac.kr, E-mail: jounghun@astro.snu.ac.kr
We study how the filamentary pattern of the cosmic web changes if the true gravity deviates from general relativity (GR) on a large scale. The f(R) gravity, whose strength is controlled to satisfy the current observational constraints on the cluster scale, is adopted as our fiducial model and a large, high-resolution N-body simulation is utilized for this study. By applying the minimal spanning tree algorithm to the halo catalogs from the simulation at various epochs, we identify the main stems of the rich superclusters located in the most prominent filamentary section of the cosmic web and determine their spatial extents per member cluster to be the degree of their straightness. It is found that the f(R) gravity has the effect of significantly bending the superclusters and that the effect becomes stronger as the universe evolves. Even in the case where the deviation from GR is too small to be detectable by any other observables, the degree of the supercluster straightness exhibits a conspicuous difference between the f(R) and the GR models. Our results also imply that the supercluster straightness could be a useful discriminator of f(R) gravity from the coupled dark energy since it is shown to evolve differently between the two models. As a final conclusion, the degree of the straightness of the rich superclusters should provide a powerful cosmological test of large scale gravity.
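The first step described above, building a minimal spanning tree over halo positions, can be sketched in a few lines with Prim's algorithm; tracing the tree's main stem and measuring spatial extent per member cluster would follow. The toy coordinates below are made up for illustration.

```python
from math import dist

# Build a minimal spanning tree (Prim's algorithm) over a set of halo
# positions. The tree's branches trace the filamentary pattern; its longest
# stem is what the straightness measurement would be applied to.

def mst_edges(points):
    """Return the MST over points as a list of (i, j, length) triples."""
    n = len(points)
    in_tree = {0}
    edges = []
    while len(in_tree) < n:
        # Cheapest edge crossing from the tree to a new point.
        i, j = min(((a, b) for a in in_tree for b in range(n) if b not in in_tree),
                   key=lambda ab: dist(points[ab[0]], points[ab[1]]))
        edges.append((i, j, dist(points[i], points[j])))
        in_tree.add(j)
    return edges

halos = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (2.0, 1.0)]
total = sum(length for _, _, length in mst_edges(halos))
print(total)  # 3.0: three unit-length links in this toy configuration
```

For catalog-scale inputs one would use an O(n log n) implementation with spatial indexing rather than this quadratic scan.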
Moores, Carly Jane; Miller, Jacqueline; Perry, Rebecca Anne; Chan, Lily Lai Hang; Daniels, Lynne Allison; Vidgen, Helen Anna; Magarey, Anthea Margaret
2017-11-29
Translation encompasses the continuum from clinical efficacy to widespread adoption within the healthcare service and ultimately routine clinical practice. The Parenting, Eating and Activity for Child Health (PEACH™) program has previously demonstrated clinical effectiveness in the management of child obesity, and has recently been implemented as a large-scale community intervention in Queensland, Australia. This paper aims to describe the translation of the evaluation framework from a randomised controlled trial (RCT) to a large-scale community intervention (PEACH™ QLD). Tensions between the RCT paradigm and implementation research will be discussed along with lived evaluation challenges, responses to overcome these, and key learnings for future evaluation conducted at scale. The translation of evaluation from the PEACH™ RCT to the large-scale community intervention PEACH™ QLD is described. While the CONSORT Statement was used to report findings from two previous RCTs, the RE-AIM framework was more suitable for the evaluation of upscaled delivery of the PEACH™ program. Evaluation of PEACH™ QLD was undertaken during the project delivery period from 2013 to 2016. Experiential learnings from conducting the evaluation of PEACH™ QLD according to the described evaluation framework are presented for the purpose of informing the future evaluation of upscaled programs. Evaluation changes in response to real-time changes in the delivery of the PEACH™ QLD Project were necessary at stages during the project term. Key evaluation challenges encountered included the collection of complete evaluation data from a diverse and geographically dispersed workforce and the systematic collection of process evaluation data in real time to support program changes during the project. Evaluation of large-scale community interventions in the real world is challenging and divergent from RCTs, which are rigorously evaluated within a more tightly controlled clinical research setting.
Constructs explored in an RCT are inadequate in describing the enablers and barriers of upscaled community program implementation. Methods for data collection, analysis and reporting also require consideration. We present a number of experiential reflections and suggestions for the successful evaluation of future upscaled community programs which are scarcely reported in the literature. PEACH™ QLD was retrospectively registered with the Australian New Zealand Clinical Trials Registry on 28 February 2017 (ACTRN12617000315314).
Syed-Abdul, Shabbir; Hsu, Min-Huei; Iqbal, Usman; Scholl, Jeremiah; Huang, Chih-Wei; Nguyen, Phung Anh; Lee, Peisan; García-Romero, Maria Teresa; Li, Yu-Chuan Jack; Jian, Wen-Shan
2015-09-01
Recent discussions have focused on using health information technology (HIT) to support goals related to universal healthcare delivery. These discussions have generally not reflected on the experience of countries with a large amount of experience using HIT to support universal healthcare on a national level. HIT was compared globally by using data from the Ministry of the Interior, Republic of China (Taiwan). Taiwan has been providing universal healthcare since 1995 and began to strategically implement HIT on a national level at that time. Today the national-level HIT system is more extensive in Taiwan than in many other countries and is used to aid administration, clinical care, and public health. The experience of Taiwan thus can provide an illustration of how HIT can be used to support universal healthcare delivery. In this article we present an overview of some key historical developments and successes in the adoption of HIT in Taiwan over a 17-year period, as well as some more recent developments. We use this experience to offer some strategic perspectives on how it can aid in the adoption of large-scale HIT systems and on how HIT can be used to support universal healthcare delivery.
The study of integration about measurable image and 4D production
NASA Astrophysics Data System (ADS)
Zhang, Chunsen; Hu, Pingbo; Niu, Weiyun
2008-12-01
In this paper, we create geospatial data for three-dimensional (3D) modeling by combining digital photogrammetry and digital close-range photogrammetry. For the large-scale geographical background, we establish a 3D landscape model combining DEM and DOM, based on digital photogrammetry, which uses aerial image data to produce the "4D" products (DOM: Digital Orthophoto Map; DEM: Digital Elevation Model; DLG: Digital Line Graphic; DRG: Digital Raster Graphic). For buildings and other artificial features of interest to users, we achieve 3D reconstruction of the real features with digital close-range photogrammetry, following these steps: data collection with non-metric cameras, camera calibration, feature extraction, image matching, and so on. Finally, we combine the three-dimensional background with locally measured real images of this large-scale geographic data, realizing the integration of measurable real imagery and the 4D products. The article discusses the complete workflow and technology, achieving 3D reconstruction and the integration of the large-scale 3D landscape with the metric building.
Validating the simulation of large-scale parallel applications using statistical characteristics
Zhang, Deli; Wilke, Jeremiah; Hendry, Gilbert; ...
2016-03-01
Simulation is a widely adopted method to analyze and predict the performance of large-scale parallel applications. Validating the hardware model is highly important for complex simulations with a large number of parameters. Common practice involves calculating the percent error between the projected and the real execution time of a benchmark program. However, in a high-dimensional parameter space, this coarse-grained approach often suffers from parameter insensitivity, which may not be known a priori. Moreover, the traditional approach cannot be applied to the validation of software models, such as application skeletons used in online simulations. In this work, we present a methodology and a toolset for validating both hardware and software models by quantitatively comparing fine-grained statistical characteristics obtained from execution traces. Although statistical information has been used in tasks like performance optimization, this is the first attempt to apply it to simulation validation. Lastly, our experimental results show that the proposed evaluation approach offers significant improvement in fidelity when compared to evaluation using total execution time, and the proposed metrics serve as reliable criteria that progress toward automating the simulation tuning process.
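The contrast the abstract draws between coarse and fine-grained validation can be illustrated with a toy example: two timing profiles with identical totals (zero percent error) but clearly different distributions. The empirical-CDF distance used below is an illustrative stand-in for the paper's statistical characteristics.

```python
# Coarse metric: percent error on total runtime. Fine metric: a simple
# Kolmogorov-Smirnov-style distance between per-event timing distributions.

def percent_error(simulated_total, real_total):
    return abs(simulated_total - real_total) / real_total * 100.0

def cdf_distance(xs, ys):
    """Max gap between the empirical CDFs of two samples."""
    grid = sorted(set(xs) | set(ys))
    def ecdf(sample, t):
        return sum(1 for v in sample if v <= t) / len(sample)
    return max(abs(ecdf(xs, t) - ecdf(ys, t)) for t in grid)

real = [1.0, 1.0, 1.0, 5.0]   # bursty real event times
sim = [2.0, 2.0, 2.0, 2.0]    # smooth simulated event times, same total
print(percent_error(sum(sim), sum(real)))  # 0.0: totals agree perfectly
print(cdf_distance(sim, real))             # 0.75: distributions clearly differ
```

A validation based only on total time would accept this simulation; the distributional comparison exposes the mismatch.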
NASA Astrophysics Data System (ADS)
Wu, Dongxu; Qiao, Zheng; Wang, Bo; Wang, Huiming; Li, Guo
2014-08-01
In this paper, a four-axis ultra-precision lathe for machining a large-scale drum mould with a microstructured surface is presented. Firstly, because of the large dimensions and weight of the drum workpiece, as well as the high machining-accuracy requirements, the design guidelines and component parts of this drum lathe are introduced in detail, including the control system, moving and driving components, position feedback system, and so on. Additionally, since the weight of the drum workpiece would result in structural deformation of the lathe, this paper analyses the effect of structural deformation on machining accuracy by means of ANSYS. The position change is approximately 16.9 nm in the X-direction (the sensitive direction), which is negligible. Finally, in order to study the impact of bearing parameters on the load characteristics of the aerostatic journal bearing, the computational fluid dynamics (CFD) software FLUENT is adopted, and a series of simulations are carried out. The results show that the aerostatic spindle has superior load capacity and stiffness; it is possible for this lathe to bear drum workpieces weighing up to 1000 kg since there are two aerostatic spindles in the headstock and tailstock.
Fish Gill Inspired Crossflow for Efficient and Continuous Collection of Spilled Oil.
Dou, Yuhai; Tian, Dongliang; Sun, Ziqi; Liu, Qiannan; Zhang, Na; Kim, Jung Ho; Jiang, Lei; Dou, Shi Xue
2017-03-28
Developing an effective system to clean up large-scale oil spills is of great significance due to their contribution to severe environmental pollution and destruction. Superwetting membranes have been widely studied for oil/water separation. The separation, however, adopts a gravity-driven approach that is inefficient and discontinuous due to quick fouling of the membrane by oil. Herein, inspired by the crossflow filtration behavior in fish gills, we propose a crossflow approach via a hydrophilic, tilted gradient membrane for spilled oil collection. In crossflow collection, as the oil/water flows parallel to the hydrophilic membrane surface, water is gradually filtered through the pores, while oil is repelled, transported, and finally collected for storage. Owing to the selective gating behavior of the water-sealed gradient membrane, the large pores at the bottom with high water flux favor fast water filtration, while the small pores at the top with strong oil repellency allow easy oil transportation. In addition, the gradient membrane exhibits excellent antifouling properties due to the protection of the water layer. Therefore, this bioinspired crossflow approach enables highly efficient and continuous spilled oil collection, which is very promising for the cleanup of large-scale oil spills.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-24
... Large Appliance and Metal Furniture Coatings AGENCY: Environmental Protection Agency (EPA). ACTION... Techniques Guidelines (CTG) standards for large appliance and metal furniture coatings. In the Final Rules...; Pennsylvania; Adoption of Control Techniques Guidelines for Large Appliance and Metal Furniture Coatings...
Astrophysical constraints on Planck scale dissipative phenomena.
Liberati, Stefano; Maccione, Luca
2014-04-18
The emergence of a classical spacetime from any quantum gravity model is still a subtle and only partially understood issue. If indeed spacetime is arising as some sort of large scale condensate of more fundamental objects, then it is natural to expect that matter, being a collective excitation of the spacetime constituents, will present modified kinematics at sufficiently high energies. We consider here the phenomenology of the dissipative effects necessarily arising in such a picture. Adopting dissipative hydrodynamics as a general framework for the description of the energy exchange between collective excitations and the spacetime fundamental degrees of freedom, we discuss how rates of energy loss for elementary particles can be derived from dispersion relations and used to provide strong constraints on the base of current astrophysical observations of high-energy particles.
Single-Pass, Closed-System Rapid Expansion of Lymphocyte Cultures for Adoptive Cell Therapy
Klapper, Jacob A.; Thomasian, Armen A.; Smith, Douglas M.; Gorgas, Gayle C.; Wunderlich, John R.; Smith, Franz O.; Hampson, Brian S.; Rosenberg, Steven A.; Dudley, Mark E.
2009-01-01
Adoptive cell therapy (ACT) for metastatic melanoma involves the ex vivo expansion and re-infusion of tumor infiltrating lymphocytes (TIL) obtained from resected specimens. With an overall objective response rate of fifty-six percent, this T-cell immunotherapy provides an appealing alternative to other therapies, including conventional therapies with lower response rates. However, there are significant regulatory and logistical concerns associated with the ex vivo activation and large scale expansion of these cells. The best current practice uses a rapid expansion protocol (REP) consisting of an ex vivo process that occurs in tissue culture flasks (T-flasks) and gas-permeable bags, utilizes OKT3 (anti-CD3 monoclonal antibody), recombinant human interleukin-2, and irradiated peripheral blood mononuclear cells to initiate rapid lymphocyte growth. A major limitation to the widespread delivery of therapy to large numbers of melanoma patients is the open system in which a REP is initiated. To address this problem, we have investigated the initiation, expansion and harvest at clinical scale of TIL in a closed-system continuous perfusion bioreactor. Each cell product met all safety criteria for patient treatment and by head-to-head comparison had a similar potency and phenotype as cells grown in control T-flasks and gas-permeable bags. However, the currently available bioreactor cassettes were limited in the total cell numbers that could be generated. This bioreactor may simplify the process of the rapid expansion of TIL under stringent regulatory conditions thereby enabling other institutions to pursue this form of ACT. PMID:19389403
Jeuland, Marc A.; Pattanayak, Subhrendu K.
2012-01-01
Current attention to improved cook stoves (ICS) focuses on the “triple benefits” they provide, in improved health and time savings for households, in preservation of forests and associated ecosystem services, and in reducing emissions that contribute to global climate change. Despite the purported economic benefits of such technologies, however, progress in achieving large-scale adoption and use has been remarkably slow. This paper uses Monte Carlo simulation analysis to evaluate the claim that households will always reap positive and large benefits from the use of such technologies. Our analysis allows for better understanding of the variability in economic costs and benefits of ICS use in developing countries, which depend on unknown combinations of numerous uncertain parameters. The model results suggest that the private net benefits of ICS will sometimes be negative, and in many instances highly so. Moreover, carbon financing and social subsidies may help enhance incentives to adopt, but will not always be appropriate. The costs and benefits of these technologies are most affected by their relative fuel costs, time and fuel use efficiencies, the incidence and cost-of-illness of acute respiratory illness, and the cost of household cooking time. Combining these results with the fact that households often find these technologies to be inconvenient or culturally inappropriate leads us to understand why uptake has been disappointing. Given the current attention to the scale up of ICS, this analysis is timely and important for highlighting some of the challenges for global efforts to promote ICS. PMID:22348005
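The Monte Carlo argument above can be sketched by drawing uncertain parameters and counting how often household net benefits come out negative; every distribution and dollar value below is a made-up placeholder, not a figure from the paper.

```python
import random

# Toy Monte Carlo of annual household net benefits from an improved cook
# stove (ICS): benefits minus annualized costs, under uncertain parameters.

random.seed(42)  # deterministic run for reproducibility

def net_benefit():
    fuel_savings = random.uniform(0, 60)     # annual fuel cost saved ($)
    time_savings = random.uniform(0, 40)     # value of cooking time saved ($)
    health_benefit = random.uniform(0, 50)   # avoided illness cost ($)
    stove_cost = random.uniform(20, 120)     # annualized stove + maintenance ($)
    return fuel_savings + time_savings + health_benefit - stove_cost

draws = [net_benefit() for _ in range(10000)]
share_negative = sum(1 for b in draws if b < 0) / len(draws)
print(f"share of draws with negative net benefit: {share_negative:.2f}")
```

Even with mean benefits exceeding mean costs, a nontrivial share of draws lands below zero, which is the qualitative point the abstract makes about ICS adoption incentives.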
Climate Change and Macro-Economic Cycles in Pre-Industrial Europe
Pei, Qing; Zhang, David D.; Lee, Harry F.; Li, Guodong
2014-01-01
Climate change has been proven to be the ultimate cause of social crisis in pre-industrial Europe at a large scale. However, detailed analyses on climate change and macro-economic cycles in the pre-industrial era remain lacking, especially within different temporal scales. Therefore, fine-grained, paleo-climate, and economic data were employed with statistical methods to quantitatively assess the relations between climate change and agrarian economy in Europe during AD 1500 to 1800. In the study, the Butterworth filter was adopted to filter the data series into a long-term trend (low-frequency) and short-term fluctuations (high-frequency). Granger Causality Analysis was conducted to scrutinize the associations between climate change and macro-economic cycle at different frequency bands. Based on quantitative results, climate change can only show significant effects on the macro-economic cycle within the long-term. In terms of the short-term effects, society can relieve the influences from climate variations by social adaptation methods and self-adjustment mechanism. On a large spatial scale, temperature holds higher importance for the European agrarian economy than precipitation. By examining the supply-demand mechanism in the grain market, population during the study period acted as the producer in the long term, whereas as the consumer in the short term. These findings merely reflect the general interactions between climate change and macro-economic cycles at the large spatial region with a long-term study period. The findings neither illustrate individual incidents that can temporarily distort the agrarian economy nor explain some specific cases. In the study, the scale thinking in the analysis is raised as an essential methodological issue for the first time to interpret the associations between climatic impact and macro-economy in the past agrarian society within different temporal scales. PMID:24516601
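The filtering step described above can be sketched with SciPy's Butterworth design: a low-pass filter isolates the long-term trend, and the residual gives the short-term fluctuations. The cutoff, filter order, and synthetic series below are arbitrary illustrations, not the paper's settings.

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Decompose a series into a long-term (low-frequency) trend and short-term
# (high-frequency) fluctuations using a low-pass Butterworth filter applied
# forward-backward (filtfilt) to avoid phase shift.

def decompose(series, cutoff=0.05, order=4):
    b, a = butter(order, cutoff)        # low-pass design, cutoff in Nyquist units
    trend = filtfilt(b, a, series)      # long-term component
    return trend, series - trend        # fluctuations = residual

t = np.arange(400)
series = 0.01 * t + np.sin(2 * np.pi * t / 10)   # slow trend + fast cycle
trend, fluct = decompose(series)
```

The trend recovers the slow ramp while the fluctuation component carries the fast cycle; causality tests like the paper's Granger analysis would then be run separately on each band.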
Climate change and macro-economic cycles in pre-industrial europe.
Pei, Qing; Zhang, David D; Lee, Harry F; Li, Guodong
2014-01-01
Climate change has been proven to be the ultimate cause of social crisis in pre-industrial Europe at a large scale. However, detailed analyses on climate change and macro-economic cycles in the pre-industrial era remain lacking, especially within different temporal scales. Therefore, fine-grained, paleo-climate, and economic data were employed with statistical methods to quantitatively assess the relations between climate change and agrarian economy in Europe during AD 1500 to 1800. In the study, the Butterworth filter was adopted to filter the data series into a long-term trend (low-frequency) and short-term fluctuations (high-frequency). Granger Causality Analysis was conducted to scrutinize the associations between climate change and macro-economic cycle at different frequency bands. Based on quantitative results, climate change can only show significant effects on the macro-economic cycle within the long-term. In terms of the short-term effects, society can relieve the influences from climate variations by social adaptation methods and self-adjustment mechanism. On a large spatial scale, temperature holds higher importance for the European agrarian economy than precipitation. By examining the supply-demand mechanism in the grain market, population during the study period acted as the producer in the long term, whereas as the consumer in the short term. These findings merely reflect the general interactions between climate change and macro-economic cycles at the large spatial region with a long-term study period. The findings neither illustrate individual incidents that can temporarily distort the agrarian economy nor explain some specific cases. In the study, the scale thinking in the analysis is raised as an essential methodological issue for the first time to interpret the associations between climatic impact and macro-economy in the past agrarian society within different temporal scales.
Kongelf, Anine; Bandewar, Sunita V S; Bharat, Shalini; Collumbien, Martine
2015-01-01
In the last decade, community mobilisation (CM) interventions targeting female sex workers (FSWs) have been scaled-up in India's national response to the HIV epidemic. This included the Bill and Melinda Gates Foundation's Avahan programme which adopted a business approach to plan and manage implementation at scale. With the focus of evaluation efforts on measuring effectiveness and health impacts there has been little analysis thus far of the interaction of the CM interventions with the sex work industry in complex urban environments. Between March and July 2012 semi-structured, in-depth interviews and focus group discussions were conducted with 63 HIV intervention implementers, to explore challenges of HIV prevention among FSWs in Mumbai. A thematic analysis identified contextual factors that impact CM implementation. Large-scale interventions are not only impacted by, but were shown to shape the dynamic social context. Registration practices and programme monitoring were experienced as stigmatising, reflected in shifting client preferences towards women not disclosing as 'sex workers'. This combined with urban redevelopment and gentrification of traditional red light areas, forcing dispersal and more 'hidden' ways of solicitation, further challenging outreach and collectivisation. Participants reported that brothel owners and 'pimps' continued to restrict access to sex workers and the heterogeneous 'community' of FSWs remains fragmented with high levels of mobility. Stakeholder engagement was poor and mobilising around HIV prevention not compelling. Interventions largely failed to respond to community needs as strong target-orientation skewed activities towards those most easily measured and reported. Large-scale interventions have been impacted by and contributed to an increasingly complex sex work environment in Mumbai, challenging outreach and mobilisation efforts. 
Sex workers remain a vulnerable and disempowered group needing continued support and more comprehensive services.
NASA Astrophysics Data System (ADS)
Riley, W. J.; Dwivedi, D.; Ghimire, B.; Hoffman, F. M.; Pau, G. S. H.; Randerson, J. T.; Shen, C.; Tang, J.; Zhu, Q.
2015-12-01
Numerical model representations of decadal- to centennial-scale soil-carbon dynamics are a dominant cause of uncertainty in climate change predictions. Recent attempts by some Earth System Model (ESM) teams to integrate previously unrepresented soil processes (e.g., explicit microbial processes, abiotic interactions with mineral surfaces, vertical transport), poor performance of many ESM land models against large-scale and experimental manipulation observations, and complexities associated with spatial heterogeneity highlight the nascent nature of our community's ability to accurately predict future soil carbon dynamics. I will present recent work from our group to develop a modeling framework to integrate pore-, column-, watershed-, and global-scale soil process representations into an ESM (ACME), and apply the International Land Model Benchmarking (ILAMB) package for evaluation. At the column scale and across a wide range of sites, observed depth-resolved carbon stocks and their 14C derived turnover times can be explained by a model with explicit representation of two microbial populations, a simple representation of mineralogy, and vertical transport. Integrating soil and plant dynamics requires a 'process-scaling' approach, since all aspects of the multi-nutrient system cannot be explicitly resolved at ESM scales. I will show that one approach, the Equilibrium Chemistry Approximation, improves predictions of forest nitrogen and phosphorus experimental manipulations and leads to very different global soil carbon predictions. Translating model representations from the site- to ESM-scale requires a spatial scaling approach that either explicitly resolves the relevant processes, or more practically, accounts for fine-resolution dynamics at coarser scales. To that end, I will present recent watershed-scale modeling work that applies reduced order model methods to accurately scale fine-resolution soil carbon dynamics to coarse-resolution simulations. 
Finally, we contend that creating believable soil carbon predictions requires a robust, transparent, and community-available benchmarking framework. I will present an ILAMB evaluation of several of the above-mentioned approaches in ACME, and attempt to motivate community adoption of this evaluation approach.
Drawert, Brian; Trogdon, Michael; Toor, Salman; Petzold, Linda; Hellander, Andreas
2016-01-01
Computational experiments using spatial stochastic simulations have led to important new biological insights, but they require specialized tools, a complex software stack, and scalable compute and data-analysis resources, owing to the high computational cost of Monte Carlo workflows. The complexity of setting up and managing a large-scale distributed computation environment to support productive and reproducible modeling can be prohibitive for practitioners in systems biology. This results in a barrier to the adoption of spatial stochastic simulation tools, effectively limiting the type of biological questions addressed by quantitative modeling. In this paper, we present PyURDME, a new, user-friendly spatial modeling and simulation package, and MOLNs, a cloud computing appliance for distributed simulation of stochastic reaction-diffusion models. MOLNs is based on IPython and provides an interactive programming platform for development of sharable and reproducible distributed parallel computational experiments.
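The spatial stochastic simulations described above can be illustrated, at toy scale, with a voxel-hopping model of pure diffusion (a generic sketch in plain Python, not PyURDME's actual API; voxel counts, event counts, and rates are invented for illustration):

```python
import random

def simulate_diffusion(n_voxels=10, n_particles=100, n_events=1000, seed=42):
    """Toy spatial stochastic simulation: particles hop between adjacent
    voxels one event at a time, a crude stand-in for the diffusion part
    of a reaction-diffusion SSA."""
    rng = random.Random(seed)
    voxels = [0] * n_voxels
    for _ in range(n_particles):          # scatter particles uniformly
        voxels[rng.randrange(n_voxels)] += 1
    for _ in range(n_events):
        # pick a source voxel weighted by its occupancy
        src = rng.choices(range(n_voxels), weights=voxels)[0]
        # hop left or right, reflecting at the domain boundary
        dst = min(max(src + rng.choice([-1, 1]), 0), n_voxels - 1)
        voxels[src] -= 1
        voxels[dst] += 1
    return voxels

final = simulate_diffusion()
assert sum(final) == 100   # diffusion alone conserves particle number
```

Real packages replace this single-threaded loop with unstructured meshes and distributed Monte Carlo replicates, which is exactly the infrastructure burden MOLNs is meant to hide.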
A Study of NetCDF as an Approach for High Performance Medical Image Storage
NASA Astrophysics Data System (ADS)
Magnus, Marcone; Coelho Prado, Thiago; von Wangenhein, Aldo; de Macedo, Douglas D. J.; Dantas, M. A. R.
2012-02-01
The use of telemedicine systems grows every day, and PACS systems based on DICOM images have become common. This rise reflects the need for new storage systems that are more efficient and have lower computational costs. With this in mind, this article presents a study of the NetCDF data format as the basic platform for storage of DICOM images. The case study compares an ordinary database, HDF5, and NetCDF for storing the medical images. Empirical results, using a real set of images, indicate that retrieving large images from NetCDF incurs higher latency than the other two methods. In addition, the latency is proportional to the file size, which represents a drawback for a telemedicine system characterized by a large number of large image files.
Can Economics Provide Insights into Trust Infrastructure?
NASA Astrophysics Data System (ADS)
Vishik, Claire
Many security technologies require infrastructure for authentication, verification, and other processes. In many cases, viable and innovative security technologies are never adopted on a large scale because the necessary infrastructure is slow to emerge. Analyses of such technologies typically focus on their technical flaws, and research emphasizes innovative approaches to stronger implementation of the core features. However, in many cases the success of an adoption pattern depends on non-technical issues rather than technology: lack of economic incentives, difficulties in finding initial investment, and inadequate government support. While a growing body of research is dedicated to the economics of security and privacy in general, few theoretical studies in this area have been completed, and even fewer look at the economics of "trust infrastructure" beyond simple "cost of ownership" models. This exploratory paper examines some approaches in theoretical economics to determine if they can provide useful insights into the security infrastructure technologies and architectures that have the best chance of being adopted. We attempt to discover if models used in theoretical economics can help inform technology developers of the optimal business models that offer a better chance for quick infrastructure deployment.
Time to decision: the drivers of innovation adoption decisions
NASA Astrophysics Data System (ADS)
Ciganek, Andrew Paul; (Dave) Haseman, William; Ramamurthy, K.
2014-03-01
Organisations desire timeliness, which facilitates better responsiveness to changes in an organisation's external environment to either attain or maintain competitiveness. Despite its importance, decision timeliness has not been explicitly examined. Decision timeliness is measured in this study as the time taken to commit to a decision. The research objective is to identify the drivers of decision timeliness in the context of adopting service-oriented architecture (SOA), an innovation for enterprise computing. A research model rooted in the technology-organisation-environment (TOE) framework is proposed and tested with data collected in a large-scale study. The research variables have been examined before in the context of adoption, but their applicability to the timeliness of innovation decision-making has not received much attention and their salience is unclear. The results support multiple hypothesised relationships, including the finding that a risk-oriented organisational culture as well as normative and coercive pressures accelerate decision timeliness. Top management support and the traditional innovation attributes (compatibility, relative advantage and complexity/ease-of-use) were not found to significantly influence decision timeliness, which appears inconsistent with generally accepted knowledge and deserves further examination.
Zhao, Henry; Coote, Skye; Pesavento, Lauren; Churilov, Leonid; Dewey, Helen M; Davis, Stephen M; Campbell, Bruce C V
2017-03-01
Clinical large vessel occlusion (LVO) triage scales were developed to identify and bypass LVO to endovascular centers. However, there are concerns that scale misclassification of patients may cause excessive harm. We studied the settings where misclassifications were likely to occur and the consequences of these misclassifications in a representative stroke population. Prospective data were collected from consecutive ambulance-initiated stroke alerts at 2 stroke centers, with patients stratified into typical (LVO with predefined severe syndrome and non-LVO without) or atypical presentations (opposite situations). Five scales (Rapid Arterial Occlusion Evaluation [RACE], Los Angeles Motor Scale [LAMS], Field Assessment Stroke Triage for Emergency Destination [FAST-ED], Prehospital Acute Stroke Severity scale [PASS], and Cincinnati Prehospital Stroke Severity Scale [CPSSS]) were derived from the baseline National Institutes of Health Stroke Scale scored by doctors and analyzed for diagnostic performance compared with imaging. Of a total of 565 patients, atypical presentations occurred in 31 LVO (38% of LVO) and 50 non-LVO cases (10%). Most scales correctly identified >95% of typical presentations but <20% of atypical presentations. Misclassification attributable to atypical presentations would have resulted in 4 M1/internal carotid artery occlusions, with National Institutes of Health Stroke Scale score ≥6 (5% of LVO) being missed and 9 non-LVO infarcts (5%) bypassing the nearest thrombolysis center. Atypical presentations accounted for the bulk of scale misclassifications, but the majority of these misclassifications were not detrimental, and use of LVO scales would significantly increase timely delivery to endovascular centers, with only a small proportion of non-LVO infarcts bypassing the nearest thrombolysis center. 
Our findings, however, would require paramedics to score as accurately as doctors, and this translation is made difficult by weaknesses in current scales that need to be addressed before widespread adoption. © 2017 American Heart Association, Inc.
Dickson, Kim E.; Tran, Nhan T.; Samuelson, Julia L.; Njeuhmeli, Emmanuel; Cherutich, Peter; Dick, Bruce; Farley, Tim; Ryan, Caroline; Hankins, Catherine A.
2011-01-01
Background Following confirmation of the effectiveness of voluntary medical male circumcision (VMMC) for HIV prevention, the World Health Organization and the Joint United Nations Programme on HIV/AIDS issued recommendations in 2007. Less than 5 years later, priority countries are at different stages of program scale-up. This paper analyzes the progress towards the scale-up of VMMC programs. It analyzes the adoption of VMMC as an additional HIV prevention strategy and explores the factors that may have expedited or hindered the adoption of policies and initial program implementation in priority countries to date. Methods and Findings VMMCs performed in priority countries between 2008 and 2010 were recorded and used to classify countries into five adopter categories according to the Diffusion of Innovations framework. The main predictors of VMMC program adoption were determined and factors influencing subsequent scale-up explored. By the end of 2010, over 550,000 VMMCs had been performed, representing approximately 3% of the target coverage level in priority countries. The "early adopter" countries developed national VMMC policies and initiated VMMC program implementation soon after the release of the WHO recommendations. However, based on modeling using the Decision Makers' Program Planning Tool (DMPPT), only Kenya appears to be on track towards achievement of the DMPPT-estimated 80% coverage goal by 2015, having already achieved 61.5% of the DMPPT target. None of the other countries appear to be on track to achieve their targets. Potential predictors of early adoption of male circumcision programs include having a VMMC focal person, establishing a national policy, having an operational strategy, and the establishment of a pilot program. Conclusions Early adoption of VMMC policies did not necessarily result in rapid program scale-up.
A key lesson is the importance of not only being ready to adopt a new intervention but also ensuring that factors critical to supporting and accelerating scale-up are incorporated into the program. The most successful program had country ownership and sustained leadership to translate research into a national policy and program. Please see later in the article for the Editors' Summary PMID:22140368
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hyman, Jeffrey De'Haven; Painter, S. L.; Viswanathan, H.
2015-09-12
We investigate how the choice of injection mode impacts transport properties in kilometer-scale three-dimensional discrete fracture networks (DFN). The choice of injection mode, resident and flux-weighted, is designed to mimic different physical phenomena. It has been hypothesized that solute plumes injected under resident conditions evolve to behave similarly to solutes injected under flux-weighted conditions. Previously, computational limitations have prohibited the large-scale simulations required to investigate this hypothesis. We investigate this hypothesis by using a high-performance DFN suite, dfnWorks, to simulate flow in kilometer-scale three-dimensional DFNs based on fractured granite at the Forsmark site in Sweden, and adopt a Lagrangian approach to simulate transport therein. Results show that after traveling through a pre-equilibrium region, both injection methods exhibit linear scaling of the first moment of travel time and power law scaling of the breakthrough curve with similar exponents, slightly larger than 2. Lastly, the physical mechanisms behind this evolution appear to be the combination of in-network channeling of mass into larger fractures, which offer reduced resistance to flow, and in-fracture channeling, which results from the topology of the DFN.
Altınoğlu Dikmeer, Ilkiz; Erol, Neşe; Gençöz, Tülin
2014-01-01
This study aimed to investigate and compare emotional and behavioral problems in Turkish adoptees and non-adopted peers raised by their biological parents. The study included 61 adopted children (34 female and 27 male) aged 6-18 years and 62 age- and gender-matched non-adopted children (35 female and 27 male). Parents rated their children's problem behaviors using the Child Behavior Checklist/6-18, temperament characteristics using the School Age Temperament Inventory, their own personality traits using the Basic Personality Traits Inventory, and their parenting styles using the Measure of Child Rearing Styles. Children rated their parents' availability and reliability as attachment figures using the Kerns Security Scale and parenting styles using the Measure of Child Rearing Styles. Adolescents aged 11-18 years self-rated their problem behaviors using the Youth Self Report. Group differences and correlations were analyzed. There were no significant differences in any scale scores between the adopted and non-adopted groups. In contrast to the literature, age of the children at the time of adoption was not associated with problem behaviors or attachment relationships. On the other hand, the findings indicate that as the age at which the children learned that they had been adopted increased, emotional and behavioral problems also increased. Adoption alone could not explain the problem behaviors observed in the adopted children; the observed problem behaviors should be considered within the context of the developmental process.
NASA Technical Reports Server (NTRS)
Grung, B. L.; Heaps, J. D.; Schmit, F. M.; Schuldt, S. B.; Zook, J. D.
1981-01-01
The technical feasibility of producing solar-cell-quality sheet silicon to meet the Department of Energy (DOE) 1986 overall price goal of $0.70/watt was investigated. With the silicon-on-ceramic (SOC) approach, a low-cost ceramic substrate is coated with large-grain polycrystalline silicon by unidirectional solidification of molten silicon. This effort was divided into several areas of investigation in order to most efficiently meet the goals of the program. These areas include: (1) dip-coating; (2) continuous coating, designated SCIM-coating, an acronym for Silicon Coating by an Inverted Meniscus (SCIM); (3) material characterization; (4) cell fabrication and evaluation; and (5) theoretical analysis. Both coating approaches were successful in producing thin layers of large-grain, solar-cell-quality silicon. The dip-coating approach was initially investigated and considerable effort was given to this technique. The SCIM technique was adopted because of its scale-up potential and its capability to conveniently produce large areas of SOC.
Laser hardening techniques on steam turbine blade and application
NASA Astrophysics Data System (ADS)
Yao, Jianhua; Zhang, Qunli; Kong, Fanzhi; Ding, Qingming
Different laser surface hardening techniques, such as laser alloying and laser solution strengthening, were adopted to locally modify the inlet-edge region of 2Cr13 and 17-4PH steam turbine blades to prolong blade life. The microstructures, microhardness and anti-cavitation properties of the blades after laser treatment were investigated, and the hardening mechanism and technique adaptability were examined. Large-scale installation practice confirmed that the laser surface modification techniques are safe and reliable, and can greatly improve the properties of blades, with the advantages of high automation, high quality, little distortion and a simple procedure.
AQBE — QBE Style Queries for Archetyped Data
NASA Astrophysics Data System (ADS)
Sachdeva, Shelly; Yaginuma, Daigo; Chu, Wanming; Bhalla, Subhash
Large-scale adoption of electronic healthcare applications requires semantic interoperability. Recent proposals describe an advanced (multi-level) DBMS architecture providing repository services for patients' health records. Such systems also require query interfaces at multiple levels, including at the level of semi-skilled users. In this regard, a high-level user interface for querying the new form of standardized Electronic Health Records system has been examined in this study. It proposes a step-by-step graphical query interface to allow semi-skilled users to write queries. Its aim is to decrease user effort and communication ambiguities, and increase user friendliness.
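The query-by-example idea behind such an interface can be sketched generically: the user fills in only the fields they care about, and blank fields act as wildcards (an illustration in Python with invented record fields, not AQBE's actual interface or the openEHR archetype model):

```python
def qbe_match(example, record):
    """Return True if `record` satisfies the filled-in `example`:
    blank (None) fields are wildcards, filled fields must match exactly."""
    return all(record.get(field) == value
               for field, value in example.items()
               if value is not None)

# hypothetical EHR-style records (field names are made up)
records = [
    {"patient": "A", "test": "glucose", "status": "final"},
    {"patient": "B", "test": "glucose", "status": "pending"},
    {"patient": "A", "test": "lipids",  "status": "final"},
]

# the user "fills in" only the fields of interest
example = {"patient": None, "test": "glucose", "status": "final"}
hits = [r for r in records if qbe_match(example, r)]
assert [r["patient"] for r in hits] == ["A"]
```

A graphical QBE front end essentially builds the `example` structure from a form and translates it into a query against the underlying repository, sparing the user from writing the query language directly.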
Taking stock of agroforestry adoption studies
Subhrendu K. Pattanayak; D. Evan Mercer; Erin Sills; Jui-Chen Yang
2003-01-01
In light of the large number of empirical studies of agroforestry adoption published during the last decade, we believe it is time to take stock and identify general determinants of agroforestry adoption. In reviewing 120 articles on adoption of agricultural and forestry technology by smallholders, we find five categories of factors that explain technology adoption...
Spatial organization of chromatin domains and compartments in single chromosomes
NASA Astrophysics Data System (ADS)
Wang, Siyuan; Su, Jun-Han; Beliveau, Brian; Bintu, Bogdan; Moffitt, Jeffrey; Wu, Chao-Ting; Zhuang, Xiaowei
The spatial organization of chromatin critically affects genome function. Recent chromosome-conformation-capture studies have revealed topologically associating domains (TADs) as a conserved feature of chromatin organization, but how TADs are spatially organized in individual chromosomes remains unknown. Here, we developed an imaging method for mapping the spatial positions of numerous genomic regions along individual chromosomes and traced the positions of TADs in human interphase autosomes and X chromosomes. We observed that chromosome folding deviates from the ideal fractal-globule model at large length scales and that TADs are largely organized into two compartments spatially arranged in a polarized manner in individual chromosomes. Active and inactive X chromosomes adopt different folding and compartmentalization configurations. These results suggest that the spatial organization of chromatin domains can change in response to regulation.
sbtools: A package connecting R to cloud-based data for collaborative online research
Winslow, Luke; Chamberlain, Scott; Appling, Alison P.; Read, Jordan S.
2016-01-01
The adoption of high-quality tools for collaboration and reproducible research such as R and Github is becoming more common in many research fields. While Github and other version management systems are excellent resources, they were originally designed to handle code and scale poorly to large text-based or binary datasets. A number of scientific data repositories are coming online and are often focused on dataset archival and publication. To handle collaborative workflows using large scientific datasets, there is increasing need to connect cloud-based online data storage to R. In this article, we describe how the new R package sbtools enables direct access to the advanced online data functionality provided by ScienceBase, the U.S. Geological Survey’s online scientific data storage platform.
NASA Astrophysics Data System (ADS)
Stringer, L. C.; Fleskens, L.; Reed, M. S.; de Vente, J.; Zengin, M.
2014-11-01
Examples of sustainable land management (SLM) exist throughout the world. In many cases, SLM has largely evolved through local traditional practices and incremental experimentation rather than being adopted on the basis of scientific evidence. This means that SLM technologies are often only adopted across small areas. The DESIRE (DESertIfication mitigation and REmediation of degraded land) project combined local traditional knowledge on SLM with empirical evaluation of SLM technologies. The purpose was to evaluate and select options for dissemination in 16 sites across 12 countries. It involved (i) an initial workshop to evaluate stakeholder priorities (reported elsewhere), (ii) field trials/empirical modeling, and then (iii) further stakeholder evaluation workshops. This paper focuses on workshops in which stakeholders evaluated the performance of SLM technologies based on the scientific monitoring and modeling results from 15 study sites. It analyses workshop outcomes to evaluate how scientific results affected stakeholders' perceptions of local SLM technologies. It also assesses the potential of this participatory approach in facilitating wider acceptance and implementation of SLM. In several sites, stakeholder preferences for SLM technologies changed as a consequence of empirical measurements and modeling assessments of each technology. Two workshop examples are presented in depth to: (a) explore the scientific results that triggered stakeholders to change their views; and (b) discuss stakeholders' suggestions on how the adoption of SLM technologies could be up-scaled. The overall multi-stakeholder participatory approach taken is then evaluated.
It is concluded that to facilitate broad-scale adoption of SLM technologies, de-contextualized scientific generalisations must be given local context; scientific findings must be viewed alongside traditional beliefs and both scrutinized with equal rigor; and the knowledge of all kinds of experts must be recognised and considered in decision-making about SLM, whether it has been formally codified or not. The approach presented in this paper provided this opportunity and received positive feedback from stakeholders.
Lanz, M; Iafrate, R; Rosnati, R; Scabini, E
1999-12-01
The purpose of the present study was to verify whether there are some differences in parent-child communication and in adolescent self-esteem among adoptive, separated and intact non-adoptive families and to investigate the extent to which parent-child communication is related to adolescent self-esteem in the three types of families. The study sample was composed of 450 adolescents aged between 11 and 17 years (160 from intact non-adoptive families, 140 from separated or divorced families and 150 from intercountry adoptive families). Subjects completed the Parent-Adolescent Communication Scale by Barnes and Olson, the Rosenberg Self-esteem Scale and some socio-demographic items. The results show that adolescents from separated families have more difficulties in their relationships with both the mother and the father than their peers, and that adoptive children perceive a more positive communication with their parents than biological children. Moreover, adoptees showed lower self-esteem than the other two groups of adolescents. Lastly, it emerged that male and female adolescents' self-esteem is related to positive communication with both parents in intact non-adoptive families, while no link was significant for male and female children of divorced parents or for adoptees. Copyright 1999 The Association for Professionals in Services for Adolescents.
Power-law expansion of the Universe from the bosonic Lorentzian type IIB matrix model
NASA Astrophysics Data System (ADS)
Ito, Yuta; Nishimura, Jun; Tsuchiya, Asato
2015-11-01
Recent studies on the Lorentzian version of the type IIB matrix model show that a (3+1)D expanding universe emerges dynamically from the (9+1)D space-time predicted by superstring theory. Here we study a bosonic matrix model obtained by omitting the fermionic matrices. With the adopted simplification and the use of a large-scale parallel computer, we are able to perform Monte Carlo calculations with matrix size up to N = 512, which is twenty times larger than that used previously for studies of the original model. When the matrix size is larger than some critical value N_c ≃ 110, we find that a (3+1)D expanding universe emerges dynamically with a clear large-N scaling property. Furthermore, the observed increase of the spatial extent with time t at sufficiently late times is consistent with a power-law behavior t^(1/2), which is reminiscent of the expanding behavior of the Friedmann-Robertson-Walker universe in the radiation-dominated era. We discuss possible implications of this result for the original supersymmetric model including fermionic matrices.
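The t^(1/2) behavior quoted above is the textbook radiation-dominated result rather than anything specific to the matrix model: for a flat Friedmann-Robertson-Walker universe with radiation energy density ρ ∝ a⁻⁴, the Friedmann equation gives

```latex
% Flat FRW universe, radiation dominated: \rho \propto a^{-4}
\left(\frac{\dot{a}}{a}\right)^{2} = \frac{8\pi G}{3}\,\rho \;\propto\; a^{-4}
\quad\Longrightarrow\quad
a\,\dot{a} = \text{const}
\quad\Longrightarrow\quad
a(t) \propto t^{1/2}
```

so agreement of the spatial extent with t^(1/2) is what one would expect if the emergent geometry behaves like a radiation-dominated cosmology.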
A dataset of human decision-making in teamwork management.
Yu, Han; Shen, Zhiqi; Miao, Chunyan; Leung, Cyril; Chen, Yiqiang; Fauvel, Simon; Lin, Jun; Cui, Lizhen; Pan, Zhengxiang; Yang, Qiang
2017-01-17
Today, most endeavours require teamwork by people with diverse skills and characteristics. In managing teamwork, decisions are often made under uncertainty and resource constraints. The strategies and the effectiveness of the strategies different people adopt to manage teamwork under different situations have not yet been fully explored, partially due to a lack of detailed large-scale data. In this paper, we describe a multi-faceted large-scale dataset to bridge this gap. It is derived from a game simulating complex project management processes. It presents the participants with different conditions in terms of team members' capabilities and task characteristics for them to exhibit their decision-making strategies. The dataset contains detailed data reflecting the decision situations, decision strategies, decision outcomes, and the emotional responses of 1,144 participants from diverse backgrounds. To our knowledge, this is the first dataset simultaneously covering these four facets of decision-making. With repeated measurements, the dataset may help establish baseline variability of decision-making in teamwork management, leading to more realistic decision theoretic models and more effective decision support approaches.
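A record in a dataset covering the four facets described (decision situation, strategy, outcome, emotional response) might be aggregated as follows. The schema, field names, and values here are invented for illustration and are not the published dataset's format:

```python
# one synthetic record per decision round (schema is hypothetical)
records = [
    {"situation": {"team_skill": 0.8, "task_load": 5},
     "strategy": "delegate", "outcome": 0.75, "emotion": "calm"},
    {"situation": {"team_skill": 0.3, "task_load": 9},
     "strategy": "micromanage", "outcome": 0.5, "emotion": "stressed"},
    {"situation": {"team_skill": 0.6, "task_load": 7},
     "strategy": "delegate", "outcome": 0.25, "emotion": "calm"},
]

def mean_outcome_by_strategy(rows):
    """Aggregate decision outcomes per strategy adopted."""
    totals = {}
    for r in rows:
        totals.setdefault(r["strategy"], []).append(r["outcome"])
    return {s: sum(v) / len(v) for s, v in totals.items()}

summary = mean_outcome_by_strategy(records)
assert summary["delegate"] == 0.5   # (0.75 + 0.25) / 2
```

Baseline summaries of this kind are the kind of "variability of decision-making" analysis the authors suggest the repeated measurements enable.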
Membranes for redox flow battery applications.
Prifti, Helen; Parasuraman, Aishwarya; Winardi, Suminto; Lim, Tuti Mariana; Skyllas-Kazacos, Maria
2012-06-19
The need for large-scale energy storage has become a priority for integrating renewable energy sources into the electricity grid. Redox flow batteries are considered the best option to store electricity for medium- to large-scale applications. However, the current high cost of redox flow batteries impedes the widespread adoption of this technology. The membrane is a critical component of redox flow batteries as it determines the performance as well as the economic viability of the batteries. The membrane acts as a separator to prevent cross-mixing of the positive and negative electrolytes, while still allowing the transport of ions to complete the circuit during the passage of current. An ideal membrane should have high ionic conductivity, low water uptake, excellent chemical and thermal stability, and good ion-exchange capacity. Developing a low-cost, chemically stable membrane for redox flow batteries has been a major focus for many groups around the world in recent years. This paper reviews the research work on membranes for redox flow batteries, in particular for the all-vanadium redox flow battery, which has received the most attention.
SOCRAT Platform Design: A Web Architecture for Interactive Visual Analytics Applications
Kalinin, Alexandr A.; Palanimalai, Selvam; Dinov, Ivo D.
2018-01-01
The modern web is a successful platform for large scale interactive web applications, including visualizations. However, there are no established design principles for building complex visual analytics (VA) web applications that could efficiently integrate visualizations with data management, computational transformation, hypothesis testing, and knowledge discovery. This imposes a time-consuming design and development process on many researchers and developers. To address these challenges, we consider the design requirements for the development of a module-based VA system architecture, adopting existing practices of large scale web application development. We present the preliminary design and implementation of an open-source platform for Statistics Online Computational Resource Analytical Toolbox (SOCRAT). This platform defines: (1) a specification for an architecture for building VA applications with multi-level modularity, and (2) methods for optimizing module interaction, re-usage, and extension. To demonstrate how this platform can be used to integrate a number of data management, interactive visualization, and analysis tools, we implement an example application for simple VA tasks including raw data input and representation, interactive visualization and analysis. PMID:29630069
SOCRAT Platform Design: A Web Architecture for Interactive Visual Analytics Applications.
Kalinin, Alexandr A; Palanimalai, Selvam; Dinov, Ivo D
2017-04-01
The modern web is a successful platform for large scale interactive web applications, including visualizations. However, there are no established design principles for building complex visual analytics (VA) web applications that could efficiently integrate visualizations with data management, computational transformation, hypothesis testing, and knowledge discovery. This imposes a time-consuming design and development process on many researchers and developers. To address these challenges, we consider the design requirements for the development of a module-based VA system architecture, adopting existing practices of large scale web application development. We present the preliminary design and implementation of an open-source platform for Statistics Online Computational Resource Analytical Toolbox (SOCRAT). This platform defines: (1) a specification for an architecture for building VA applications with multi-level modularity, and (2) methods for optimizing module interaction, re-usage, and extension. To demonstrate how this platform can be used to integrate a number of data management, interactive visualization, and analysis tools, we implement an example application for simple VA tasks including raw data input and representation, interactive visualization and analysis.
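The module-interaction idea described above (modules registering with the platform and communicating through a mediator rather than referencing each other directly) can be sketched generically. This is an illustration in Python of the publish/subscribe pattern, not SOCRAT's actual JavaScript codebase or API:

```python
class Broker:
    """Minimal publish/subscribe hub: modules interact only through
    topics, which eases module re-use, replacement, and extension."""
    def __init__(self):
        self.handlers = {}

    def subscribe(self, topic, handler):
        self.handlers.setdefault(topic, []).append(handler)

    def publish(self, topic, payload):
        for handler in self.handlers.get(topic, []):
            handler(payload)

broker = Broker()
received = []

# a "visualization" module subscribes to cleaned data...
broker.subscribe("data.cleaned", lambda rows: received.append(len(rows)))
# ...while a "data management" module publishes it, unaware of subscribers
broker.publish("data.cleaned", [{"x": 1}, {"x": 2}])

assert received == [2]
```

Because neither module imports the other, either can be swapped out or duplicated, which is the practical payoff of the multi-level modularity the platform specifies.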
Profiling and Improving I/O Performance of a Large-Scale Climate Scientific Application
NASA Technical Reports Server (NTRS)
Liu, Zhuo; Wang, Bin; Wang, Teng; Tian, Yuan; Xu, Cong; Wang, Yandong; Yu, Weikuan; Cruz, Carlos A.; Zhou, Shujia; Clune, Tom;
2013-01-01
Exascale computing systems are soon to emerge, which will pose great challenges on the huge gap between computing and I/O performance. Many large-scale scientific applications play an important role in our daily life. The huge amounts of data generated by such applications require highly parallel and efficient I/O management policies. In this paper, we adopt a mission-critical scientific application, GEOS-5, as a case to profile and analyze the communication and I/O issues that are preventing applications from fully utilizing the underlying parallel storage systems. Through detailed architectural and experimental characterization, we observe that current legacy I/O schemes incur significant network communication overheads and are unable to fully parallelize the data access, thus degrading applications' I/O performance and scalability. To address these inefficiencies, we redesign its I/O framework along with a set of parallel I/O techniques to achieve high scalability and performance. Evaluation results on the NASA Discover cluster show that our optimization of GEOS-5 with ADIOS has led to significant performance improvements compared to the original GEOS-5 implementation.
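One reason aggregated I/O schemes outperform legacy per-record writes is simply the number of write calls issued. The toy model below (plain Python with in-memory buffers; not GEOS-5 or ADIOS code, and the record sizes are invented) shows that batching many small writes into one large buffer preserves the bytes written while collapsing the operation count:

```python
import io

def write_records(stream, records, aggregate=False):
    """Write records either one write call per record, or as a single
    aggregated buffer (the basic idea behind buffered/collective I/O)."""
    ops = 0
    if aggregate:
        stream.write(b"".join(records))   # one large contiguous write
        ops = 1
    else:
        for rec in records:               # many small writes
            stream.write(rec)
            ops += 1
    return ops

records = [b"x" * 64 for _ in range(1000)]

buf_naive, buf_agg = io.BytesIO(), io.BytesIO()
naive_ops = write_records(buf_naive, records)
agg_ops = write_records(buf_agg, records, aggregate=True)

assert buf_naive.getvalue() == buf_agg.getvalue()  # identical data "on disk"
assert (naive_ops, agg_ops) == (1000, 1)           # far fewer write calls
```

On a real parallel file system each write call also carries network and metadata costs, so reducing the call count (and coordinating writers) is where most of the reported speedup comes from.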
A dataset of human decision-making in teamwork management
Yu, Han; Shen, Zhiqi; Miao, Chunyan; Leung, Cyril; Chen, Yiqiang; Fauvel, Simon; Lin, Jun; Cui, Lizhen; Pan, Zhengxiang; Yang, Qiang
2017-01-01
Today, most endeavours require teamwork by people with diverse skills and characteristics. In managing teamwork, decisions are often made under uncertainty and resource constraints. The strategies that different people adopt to manage teamwork under different situations, and the effectiveness of those strategies, have not yet been fully explored, partially due to a lack of detailed large-scale data. In this paper, we describe a multi-faceted large-scale dataset to bridge this gap. It is derived from a game simulating complex project management processes. It presents the participants with different conditions, in terms of team members' capabilities and task characteristics, for them to exhibit their decision-making strategies. The dataset contains detailed data reflecting the decision situations, decision strategies, decision outcomes, and emotional responses of 1,144 participants from diverse backgrounds. To our knowledge, this is the first dataset simultaneously covering these four facets of decision-making. With repeated measurements, the dataset may help establish the baseline variability of decision-making in teamwork management, leading to more realistic decision-theoretic models and more effective decision support approaches. PMID:28094787
Neural ensemble communities: Open-source approaches to hardware for large-scale electrophysiology
Siegle, Joshua H.; Hale, Gregory J.; Newman, Jonathan P.; Voigts, Jakob
2014-01-01
One often-overlooked factor when selecting a platform for large-scale electrophysiology is whether or not a particular data acquisition system is “open” or “closed”: that is, whether or not the system’s schematics and source code are available to end users. Open systems have a reputation for being difficult to acquire, poorly documented, and hard to maintain. With the arrival of more powerful and compact integrated circuits, rapid prototyping services, and web-based tools for collaborative development, these stereotypes must be reconsidered. We discuss some of the reasons why multichannel extracellular electrophysiology could benefit from open-source approaches and describe examples of successful community-driven tool development within this field. In order to promote the adoption of open-source hardware and to reduce the need for redundant development efforts, we advocate a move toward standardized interfaces that connect each element of the data processing pipeline. This will give researchers the flexibility to modify their tools when necessary, while allowing them to continue to benefit from the high-quality products and expertise provided by commercial vendors. PMID:25528614
Chip-integrated optical power limiter based on an all-passive micro-ring resonator
NASA Astrophysics Data System (ADS)
Yan, Siqi; Dong, Jianji; Zheng, Aoling; Zhang, Xinliang
2014-10-01
Recent progress in silicon nanophotonics has dramatically advanced the possible realization of large-scale on-chip optical interconnect integration. Adopting photons as information carriers can break performance bottlenecks of electronic integrated circuits, such as serious thermal losses and limited processing rates. However, in integrated photonic circuits, few reported works can impose an upper limit on optical power and thereby protect optical devices from the harm caused by high power. In this study, we experimentally demonstrate a feasible integrated scheme based on a single all-passive micro-ring resonator to realize optical power limitation, with a function similar to that of a current-limiting circuit in electronics. We also analyze the performance of the optical power limiter at various signal bit rates. The results show that the proposed device can limit the signal power effectively at bit rates up to 20 Gbit/s without deteriorating the signal. Meanwhile, this ultra-compact silicon device is fully compatible with electronic technology (typically complementary metal-oxide-semiconductor technology), which may pave the way for very-large-scale integrated photonic circuits for all-optical information processors and artificial intelligence systems.
A fast learning method for large scale and multi-class samples of SVM
NASA Astrophysics Data System (ADS)
Fan, Yu; Guo, Huiming
2017-06-01
A fast learning method for multi-class SVM (Support Vector Machine) classification, based on a binary tree, is presented to address the low learning efficiency of SVMs when processing large-scale multi-class samples. A bottom-up method is adopted to build the binary-tree hierarchy; according to the resulting hierarchy, a sub-classifier learns from the corresponding samples of each node. During learning, several class clusters are generated by a first clustering of the training samples. Central points are first extracted from the class clusters that contain only one type of samples. For clusters that contain two types of samples, the cluster numbers of their positive and negative samples are set according to their degree of mixture, and a secondary clustering is undertaken, after which central points are extracted from the resulting sub-class clusters. Sub-classifiers are then obtained by learning from the reduced sample set formed by integrating the extracted central points. Simulation experiments show that this fast learning method, based on multi-level clustering, can guarantee high classification accuracy, greatly reduce sample numbers, and effectively improve learning efficiency.
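The sample-reduction step can be sketched as follows (a minimal illustration, not the authors' implementation: a plain k-means stands in for the unspecified clustering procedure, and the binary-tree construction and SVM training are omitted):

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Tiny k-means: returns k cluster centers for the rows of X."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers

def reduce_class_samples(X, k):
    """Replace one class's samples with k cluster centers (the 'central points')."""
    return kmeans(X, k)

# Toy data: 200 points per class collapse to 5 representatives each,
# so a downstream sub-classifier would train on 10 points instead of 400.
rng = np.random.default_rng(1)
pos = rng.normal(0, 1, (200, 2))
neg = rng.normal(5, 1, (200, 2))
reduced = np.vstack([reduce_class_samples(pos, 5), reduce_class_samples(neg, 5)])
print(reduced.shape)
```

The choice of five centers per class is arbitrary here; the paper sets cluster numbers according to the mixture degree of each node's samples.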
Membranes for Redox Flow Battery Applications
Prifti, Helen; Parasuraman, Aishwarya; Winardi, Suminto; Lim, Tuti Mariana; Skyllas-Kazacos, Maria
2012-01-01
The need for large-scale energy storage has become a priority for integrating renewable energy sources into the electricity grid. Redox flow batteries are considered the best option for storing electricity in medium- to large-scale applications. However, the current high cost of redox flow batteries impedes the widespread adoption of this technology. The membrane is a critical component of redox flow batteries, as it determines the performance as well as the economic viability of the batteries. The membrane acts as a separator to prevent cross-mixing of the positive and negative electrolytes, while still allowing the transport of ions to complete the circuit during the passage of current. An ideal membrane should have high ionic conductivity, low water uptake, excellent chemical and thermal stability, and good ion-exchange capacity. Developing a low-cost, chemically stable membrane for redox flow batteries has been a major focus for many groups around the world in recent years. This paper reviews the research work on membranes for redox flow batteries, in particular for the all-vanadium redox flow battery, which has received the most attention. PMID:24958177
Bentzen, Amalie Kai; Marquard, Andrea Marion; Lyngaa, Rikke; Saini, Sunil Kumar; Ramskov, Sofie; Donia, Marco; Such, Lina; Furness, Andrew J S; McGranahan, Nicholas; Rosenthal, Rachel; Straten, Per Thor; Szallasi, Zoltan; Svane, Inge Marie; Swanton, Charles; Quezada, Sergio A; Jakobsen, Søren Nyboe; Eklund, Aron Charles; Hadrup, Sine Reker
2016-10-01
Identification of the peptides recognized by individual T cells is important for understanding and treating immune-related diseases. Current cytometry-based approaches are limited to the simultaneous screening of 10-100 distinct T-cell specificities in one sample. Here we use peptide-major histocompatibility complex (MHC) multimers labeled with individual DNA barcodes to screen >1,000 peptide specificities in a single sample, and detect low-frequency CD8 T cells specific for virus- or cancer-restricted antigens. When analyzing T-cell recognition of shared melanoma antigens before and after adoptive cell therapy in melanoma patients, we observe a greater number of melanoma-specific T-cell populations compared with cytometry-based approaches. Furthermore, we detect neoepitope-specific T cells in tumor-infiltrating lymphocytes and peripheral blood from patients with non-small cell lung cancer. Barcode-labeled pMHC multimers enable the combination of functional T-cell analysis with large-scale epitope recognition profiling for the characterization of T-cell recognition in various diseases, including in small clinical samples.
Guo, Weian; Si, Chengyong; Xue, Yu; Mao, Yanfen; Wang, Lei; Wu, Qidi
2017-05-04
Particle Swarm Optimization (PSO) is a popular algorithm that is widely investigated and well implemented in many areas. However, the canonical PSO does not perform well at maintaining population diversity, which usually leads to premature convergence to local optima. To address this issue, we propose a variant of PSO named Grouping PSO with Personal-Best-Position (Pbest) Guidance (GPSO-PG), which maintains population diversity by preserving the diversity of exemplars. On one hand, we adopt a uniform random allocation strategy to assign particles into different groups, and in each group the losers learn from the winner. On the other hand, we employ the personal historical best position of each particle in social learning rather than the current global best particle. In this way, exemplar diversity increases and the influence of the global best particle is eliminated. We test the proposed algorithm on the benchmarks from CEC 2008 and CEC 2010, which concern large-scale optimization problems (LSOPs). Compared with several current peer algorithms, GPSO-PG exhibits a competitive ability to maintain population diversity and achieves satisfactory performance on these problems.
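The grouping-plus-Pbest-guidance idea can be sketched as below (a toy reading of the abstract, not the published GPSO-PG code; the coefficients, group count, and update form are illustrative assumptions):

```python
import numpy as np

def gpso_pg(f, dim=10, n=40, groups=4, iters=200, seed=0):
    """Sketch: particles are randomly regrouped every iteration; within each
    group the losers move toward the winner's personal best (pbest), so no
    single global-best particle dominates the whole swarm."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))
    v = np.zeros((n, dim))
    pbest = x.copy()
    pbest_f = np.array([f(p) for p in x])
    for _ in range(iters):
        order = rng.permutation(n)                 # uniform random grouping
        for g in np.array_split(order, groups):
            winner = g[np.argmin(pbest_f[g])]
            for i in g:
                if i == winner:
                    continue                       # winner keeps its position
                r1, r2 = rng.random(dim), rng.random(dim)
                v[i] = (0.7 * v[i] + 1.5 * r1 * (pbest[i] - x[i])
                        + 1.5 * r2 * (pbest[winner] - x[i]))
                x[i] += v[i]
                fi = f(x[i])
                if fi < pbest_f[i]:
                    pbest_f[i], pbest[i] = fi, x[i].copy()
    return pbest[np.argmin(pbest_f)], float(pbest_f.min())

best_x, best_f = gpso_pg(lambda z: float((z ** 2).sum()))  # sphere function
print(best_f)
```

Replacing the global best with each group winner's pbest is the diversity-preserving mechanism the abstract describes; everything else is standard PSO machinery.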
1km Soil Moisture from Downsampled Sentinel-1 SAR Data: Harnessing Assets and Overcoming Obstacles.
NASA Astrophysics Data System (ADS)
Bauer-Marschallinger, Bernhard; Cao, Senmao; Schaufler, Stefan; Paulik, Christoph; Naeimi, Vahid; Wagner, Wolfgang
2017-04-01
Radars onboard Earth-observing satellites allow estimating Surface Soil Moisture (SSM) regularly and globally. The use of coarse-scale measurements from active or passive radars for SSM retrieval is well established and in operational use. Thanks to the Sentinel-1 mission, launched in 2014 and deploying Synthetic Aperture Radars (SAR), high-resolution radar imagery is routinely available at the scale of 20 meters, with a high revisit frequency of 3-6 days and with unprecedented radiometric accuracy. However, the direct exploitation of high-resolution SAR data for SSM retrieval is complicated by several problems: small-scale contributions to the radar backscatter from individual ground features often obscure the soil moisture signal, rendering common algorithms insensitive to SSM. Furthermore, the influence of vegetation dynamics on the radar signal is less well understood than in the coarse-scale case, leading to biases during the vegetation period. Finally, the large data volumes of high-resolution remote sensing data place a great load on hardware systems. Consequently, the high-resolution SAR data are spatially resampled to a 500 meter sampling, allowing the exploitation of information at 10 meter sampling while effectively reducing the inherent uncertainties. The resulting 1km SSM product aims to describe soil moisture dynamics at medium scale with high quality. We adapted the TU-Wien Change Detection algorithm, which was already successfully used for retrieving SSM from ERS-1/2 and Envisat-ASAR observations, to the Sentinel-1 data. The adaptation entails a new method for SAR image resampling, including masking of pixels that do not carry soil moisture signals to prevent them from spreading during downsampling. Furthermore, the observation angle between the radar sensors and the ground is treated differently, as Sentinel-1 sensors observe from fixed orbit paths (in contrast to other radar sensors).
Here, a regression model is developed that estimates the dependency of radar backscatter on observation angle using statistical parameters from the Sentinel-1 SAR time-series archive. We present the Sentinel-1 1km-SSM product generated by the adapted change detection algorithm. The dataset covers the European continent and holds data from October 2014 onward. In addition to a validation of the SSM product, the statistical SAR parameters used during SSM retrieval are examined.
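The angle-regression step can be illustrated with a simple least-squares sketch (not the TU-Wien implementation; the reference angle, slope value, and noise level are assumptions for illustration):

```python
import numpy as np

REF_ANGLE = 40.0  # deg; reference incidence angle, assumed for illustration

def normalize_backscatter(sigma0_db, angle_deg):
    """Fit a linear model of backscatter [dB] vs. incidence angle over a
    time series, then normalize all observations to the reference angle."""
    slope, intercept = np.polyfit(angle_deg, sigma0_db, 1)
    normalized = sigma0_db - slope * (angle_deg - REF_ANGLE)
    return normalized, slope

# Synthetic time series: true slope -0.1 dB/deg plus measurement noise.
rng = np.random.default_rng(0)
angles = rng.uniform(30, 46, 200)
sigma0 = -12.0 - 0.1 * (angles - REF_ANGLE) + rng.normal(0, 0.2, 200)
norm, slope = normalize_backscatter(sigma0, angles)
print(slope)  # recovered slope, close to -0.1
```

After normalization the angle-induced variability is removed, so the remaining temporal variation can be attributed to soil moisture by the change detection algorithm.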
NASA Astrophysics Data System (ADS)
Vogt, Nathan
2005-12-01
Investigations in this portfolio of manuscripts broadly advance understanding of how institutional arrangements influence the impacts of population growth and integration into non-local markets on forest and tree-cover change. This research integrates methods of the natural and social sciences, including remote sensing, geographical information systems, vegetation plot analysis, key informant interviews, and archival research. In combination, these methods are applied for longer-term analyses of the role of institutional arrangements in land-cover change in West Mengo, Uganda. Over the past fifty years, tree cover on settled areas (cultivated and grazed lands and home-gardens) in West Mengo has increased while forest cover (particularly outside of state reserves) is more diffuse. One finding is that the underlying, traditional sociopolitical structure in West Mengo does facilitate, on aggregate, customary arrangements in identifying diverse strategies to maintain the flow of forest products and benefits under growing population and market pressures (avoiding local tragedies). But these customary arrangements may or may not be able to maintain ecosystem services (produced from large-scale forest patches) outside of the local sociopolitical unit under these conditions. Boundaries of state forest reserves in West Mengo were found to have remained stable for over fifty years despite population and market pressures. Another finding is that formal state arrangements can, but do not always, stem deforestation under conditions of high population and market pressures. When design principles for robust, large-scale commons are adopted in the process of creating adaptive arrangements for the governance of large extents of working forests, the arrangements and desired outcomes (e.g., stable forest cover and flow of subsistence products in the West Mengo case) may endure over the long term. When they are not adopted, a relatively fast breakdown of the institutional arrangement may follow, resulting in unintended outcomes for some or all stakeholders (e.g., forest degradation and loss for foresters in the Kikuyu case).
Relationship between Social Networks Adoption and Social Intelligence
ERIC Educational Resources Information Center
Gunduz, Semseddin
2017-01-01
The purpose of this study was to set forth the relationship between the individuals' states to adopt social networks and social intelligence and analyze both concepts according to various variables. Research data were collected from 1145 social network users in the online media by using the Adoption of Social Network Scale and Social Intelligence…
Roberge, Jean-Michel; Lämås, Tomas; Lundmark, Tomas; Ranius, Thomas; Felton, Adam; Nordin, Annika
2015-05-01
Over previous decades new environmental measures have been implemented in forestry. In Fennoscandia, forest management practices were modified to set aside conservation areas and to retain trees at final felling. In this study we simulated the long-term effects of set-aside establishment and tree retention practices on the future availability of large trees and dead wood, two forest structures of documented importance to biodiversity conservation. Using a forest decision support system (Heureka), we projected the amounts of these structures over 200 years in two managed north Swedish landscapes, under management scenarios with and without set-asides and tree retention. In line with common best practice, we simulated set-asides covering 5% of the productive area with priority to older stands, as well as ∼5% green-tree retention (solitary trees and forest patches) including high-stump creation at final felling. We found that only tree retention contributed to substantial increases in the future density of large (DBH ≥35 cm) deciduous trees, while both measures made significant contributions to the availability of large conifers. It took more than half a century to observe stronger increases in the densities of large deciduous trees as an effect of tree retention. The mean landscape-scale volumes of hard dead wood fluctuated widely, but the conservation measures yielded values which were, on average over the entire simulation period, about 2.5 times as high as for scenarios without these measures. While the density of large conifers increased with time in the landscape initially dominated by younger forest, best practice conservation measures did not avert a long-term decrease in large conifer density in the landscape initially comprised of more old forest. Our results highlight the need to adopt a long temporal perspective and to consider initial landscape conditions when evaluating the large-scale effects of conservation measures on forest biodiversity.
Co-C and Pd-C Fixed Points for the Evaluation of Facilities and Scales Realization at INRIM and NMC
NASA Astrophysics Data System (ADS)
Battuello, M.; Wang, L.; Girard, F.; Ang, S. H.
2014-04-01
Two hybrid cells for realizing the Co-C and Pd-C fixed points, constructed at the Istituto Nazionale di Ricerca Metrologica (INRIM), were used to evaluate the facilities and procedures adopted by INRIM and the National Metrology Institute of Singapore (NMC) for realizing the solid-liquid phase transitions of high-temperature fixed points and for determining their transition temperatures. Four different furnaces were used for the investigations: two single-zone furnaces, one of them of the direct-heating type, and two identical three-zone furnaces. The transition temperatures were measured at both institutes with different procedures for realizing the radiation scales: at INRIM, a scheme based on the extrapolation of fixed-point interpolated scales, and at NMC, an International Temperature Scale of 1990 (ITS-90) approach. The point of inflection (POI) of the melting curves was determined and assumed as a practical representation of the melting temperature. Different methods for deriving the POI were used, and differences as large as some hundredths of a kelvin were found between the approaches. The POIs of the different melting curves were analyzed with respect to the different possible operative conditions, with the aim of deriving reproducibility figures to improve the estimated uncertainty. As regards the inter-institute comparison, differences of 0.13 K and 0.29 K were found between the INRIM and NMC determinations at the Co-C and Pd-C points, respectively. Such differences are compatible with the combined standard uncertainties of the comparison, which are estimated to be 0.33 K and 0.36 K at the Co-C and Pd-C points, respectively.
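One common way to locate a POI numerically is as the maximum of the first derivative of the melting curve (an illustrative sketch only; the institutes compared several POI-derivation methods, and this idealized signal is an assumption):

```python
import numpy as np

# Idealized melting curve (arbitrary units): a smooth transition centered
# at t = 5, standing in for a measured radiance-vs-time melting plateau.
t = np.linspace(0, 10, 1001)
curve = np.tanh(t - 5)

# The POI is where the curve is steepest, i.e. the first derivative peaks.
poi_index = np.argmax(np.gradient(curve, t))
print(t[poi_index])  # near t = 5
```

On real, noisy melting curves the signal would first be smoothed (or fitted with an analytic function), which is one source of the hundredths-of-a-kelvin spread between methods reported above.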
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yeh, Sonia; Yang, Christopher; Gibbs, Michael
California aims to reduce greenhouse gas (GHG) emissions to 40% below 1990 levels by 2030. We compare six energy models that have played various roles in informing state policymakers in setting climate policy goals and targets. These models adopt a range of modeling structures, including stock-turnover back-casting models, a least-cost optimization model, macroeconomic/macro-econometric models, and an electricity dispatch model. Results from these models provide useful insights into the required transformations in the energy system, including efficiency improvements in cars, trucks, and buildings; electrification of end-uses; low- or zero-carbon electricity and fuels; aggressive adoption of zero-emission vehicles (ZEVs); demand reduction; and large reductions of non-energy GHG emissions. Some of these studies also suggest that the direct economic costs can be fairly modest or even generate net savings, while the indirect macroeconomic benefits are large, as shifts in employment and capital investments could have higher economic returns than conventional energy expenditures. These models, however, often assume perfect markets, perfect competition, and zero transaction costs. They also do not provide specific policy guidance on how these transformative changes can be achieved. Greater emphasis on modeling uncertainty, consumer behaviors, heterogeneity of impacts, and spatial modeling would further enhance policymakers' ability to design more effective and targeted policies. This paper presents an example of how policymakers, energy system modelers, and stakeholders interact and work together to develop and evaluate long-term state climate policy targets. Lastly, even though this paper focuses on California, the process of dialogue and interaction, the modeling results, and the lessons learned can be generally adopted across different regions and scales.
Commercial use of remote sensing in agriculture: a case study
NASA Astrophysics Data System (ADS)
Gnauck, Gary E.
1999-12-01
Over 25 years of research have clearly shown that analysis of remote sensing imagery can provide information on agricultural crops. Most of this research has been funded by, and directed toward the needs of, government agencies. Commercial use of agricultural remote sensing has been limited to very small-scale operations supplying remote sensing services to a few selected customers. Datron/Transco Inc. undertook an internally funded remote sensing program directed toward the California cash crop industry (strawberries, lettuce, tomatoes, other fresh vegetables, and cotton). The objectives of this program were twofold: (1) to assess the need and readiness of agricultural land managers to adopt remote sensing as a management tool, and (2) to determine what technical barriers exist to large-scale implementation of this technology on a commercial basis. The program was divided into three phases: planning, engineering test and evaluation, and commercial operations. Findings: remote sensing technology can deliver high-resolution multispectral imagery with rapid turnaround that can provide information on crop stress, insects, disease, and various soil parameters. The limiting factors to the use of remote sensing in agriculture are a lack of familiarity on the part of land managers, difficulty in translating 'information' into increased revenue or reduced cost for the land manager, and the large economies of scale needed to make the venture commercially viable.
Nonlinear dynamic analysis and optimal trajectory planning of a high-speed macro-micro manipulator
NASA Astrophysics Data System (ADS)
Yang, Yi-ling; Wei, Yan-ding; Lou, Jun-qiang; Fu, Lei; Zhao, Xiao-wei
2017-09-01
This paper reports the nonlinear dynamic modeling and optimal trajectory planning for a flexure-based macro-micro manipulator dedicated to large-scale and high-speed tasks. In particular, a macro-micro manipulator composed of a servo motor, a rigid arm, and a compliant microgripper is considered, with both flexure hinges and flexible beams taken into account. By combining the pseudo-rigid-body-model method, the assumed mode method, and the Lagrange equation, the overall dynamic model is derived. The rigid-flexible coupling characteristics are then analyzed by numerical simulations. After that, the microscopic-scale vibration excited by the large-scale motion is reduced through a trajectory planning approach. In particular, a fitness function based on the comprehensive excitation torque of the compliant microgripper is proposed. The reference curve and the interpolation curve adopt quintic polynomial trajectories. An improved genetic algorithm is then used to identify the optimal trajectory by minimizing the fitness function. Finally, numerical simulations and experiments validate the feasibility and effectiveness of the established dynamic model and the trajectory planning approach. The amplitude of the residual vibration is reduced by approximately 54.9%, and the settling time decreases by 57.1%. Operation efficiency and manipulation stability are therefore significantly improved.
Use of single large or several small policies as strategies to manage people-park interactions.
Mackenzie, Catrina A; Baird, Timothy D; Hartter, Joel
2014-12-01
Biodiversity conservation has been criticized for undermining or ignoring social well-being. Currently, efforts to mutually promote social justice, rural development, and biodiversity conservation, which have been contentious and yielded mixed results, continue to spread despite a general dearth of effective management strategies. We contend that social and economic concerns should be integral to conservation planning and propose that the scale of these phenomena is also critical. To evaluate the merit of this proposal, we adopted and expanded a conservation management strategy framework developed by Joel Heinen and examined how population density, economic disparity, and ethnic heterogeneity vary spatially around 2 contrasting protected areas in East Africa: Kibale National Park in Uganda and Tarangire National Park in Tanzania. Analyses of demographic, wealth, and ethnicity data from regional censuses and household surveys conducted in 2009 and 2010 indicated that the choice of scale (landscape or community) changed the management strategies recommended by the model. Therefore, "several small" people-park management strategies varying around a given protected area may be more appropriate than a "single large" people-park strategy applied across an entire protected area. Correspondingly, scale-adjusted Heinen recommendations offered new strategies for effective conservation management within these human landscapes that are not incorporated in current in situ management plans. © 2014 Society for Conservation Biology.
Sunlight-thin nanophotonic monocrystalline silicon solar cells
NASA Astrophysics Data System (ADS)
Depauw, Valérie; Trompoukis, Christos; Massiot, Inès; Chen, Wanghua; Dmitriev, Alexandre; Cabarrocas, Pere Roca i.; Gordon, Ivan; Poortmans, Jef
2017-09-01
Introducing nanophotonics into photovoltaics sets the path for scaling down the surface texture of crystalline-silicon solar cells from the micro- to the nanoscale, further boosting photon absorption while reducing silicon material loss. However, maintaining excellent electrical performance has proven very challenging, as the absorber is damaged by the nanotexturing and the sensitivity to surface recombination is dramatically increased. Here we realize a light-wavelength-scale nanotextured monocrystalline silicon cell with a confirmed efficiency of 8.6% and an effective thickness of only 830 nm. For this we adopt a self-assembled, large-area, industry-compatible amorphous ordered nanopatterning, combined with advanced surface passivation, achieving strongly enhanced solar light absorption while retaining efficient electron collection. This prompts the development of highly efficient flexible and semitransparent photovoltaics based on the industrially mature monocrystalline silicon technology.
Metabolic engineering of biosynthetic pathway for production of renewable biofuels.
Singh, Vijai; Mani, Indra; Chaudhary, Dharmendra Kumar; Dhar, Pawan Kumar
2014-02-01
Metabolic engineering is an important area of research that involves editing genetic networks to overproduce a certain substance by the cells. Using a combination of genetic, metabolic, and modeling methods, useful substances have been synthesized in the past at industrial scale and in a cost-effective manner. Currently, metabolic engineering is being used to produce sufficient, economical, and eco-friendly biofuels. In the recent past, a number of efforts have been made towards engineering biosynthetic pathways for large scale and efficient production of biofuels from biomass. Given the adoption of metabolic engineering approaches by the biofuel industry, this paper reviews various approaches towards the production and enhancement of renewable biofuels such as ethanol, butanol, isopropanol, hydrogen, and biodiesel. We have also identified specific areas where more work needs to be done in the future.
Wind-invariant saltation heights imply linear scaling of aeolian saltation flux with shear stress.
Martin, Raleigh L; Kok, Jasper F
2017-06-01
Wind-driven sand transport generates atmospheric dust, forms dunes, and sculpts landscapes. However, it remains unclear how the flux of particles in aeolian saltation (the wind-driven transport of sand in hopping trajectories) scales with wind speed, largely because models do not agree on how particle speeds and trajectories change with wind shear velocity. We present comprehensive measurements, from three new field sites and three published studies, showing that characteristic saltation layer heights remain approximately constant with shear velocity, in agreement with recent wind tunnel studies. These results support the assumption of constant particle speeds in recent models predicting linear scaling of saltation flux with shear stress. In contrast, our results refute widely used older models that assume that particle speed increases with shear velocity, thereby predicting nonlinear 3/2 stress-flux scaling. This conclusion is further supported by direct field measurements of saltation flux versus shear stress. Our results thus argue for adoption of linear saltation flux laws and constant saltation trajectories for modeling saltation-driven aeolian processes on Earth, Mars, and other planetary surfaces.
Epidemic failure detection and consensus for extreme parallelism
Katti, Amogh; Di Fatta, Giuseppe; Naughton, Thomas; ...
2017-02-01
Future extreme-scale high-performance computing systems will be required to work under frequent component failures. The MPI Forum's User Level Failure Mitigation proposal has introduced an operation, MPI_Comm_shrink, to synchronize the alive processes on the list of failed processes, so that applications can continue to execute even in the presence of failures by adopting algorithm-based fault tolerance techniques. This MPI_Comm_shrink operation requires a failure detection and consensus algorithm. This paper presents three novel failure detection and consensus algorithms using Gossiping. The proposed algorithms were implemented and tested using the Extreme-scale Simulator. The results show that in all algorithms the number of Gossip cycles to achieve global consensus scales logarithmically with system size. The second algorithm also shows better scalability in terms of memory and network bandwidth usage and a perfect synchronization in achieving global consensus. The third approach is a three-phase distributed failure detection and consensus algorithm and provides consistency guarantees even in very large and extreme-scale systems while at the same time being memory and bandwidth efficient.
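As an illustration of why gossip-based dissemination scales logarithmically with system size, here is a toy push-gossip simulation; it is not a reimplementation of the paper's three algorithms, and the protocol and parameters are assumptions for illustration:

```python
import math
import random

def gossip_cycles(n, seed=0):
    """Cycles of push-style gossip until every alive process knows about
    one failed process. A toy dissemination model, not the paper's method."""
    rng = random.Random(seed)
    informed = {0}              # process 0 detects the failure first
    cycles = 0
    while len(informed) < n:
        for _ in range(len(informed)):
            informed.add(rng.randrange(n))   # each informed process gossips to one random peer
        cycles += 1
    return cycles

# The cycle count grows roughly logarithmically with system size.
for n in (64, 1024, 16384):
    print(n, gossip_cycles(n), "log2(n) =", math.ceil(math.log2(n)))
```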
Tropical atmospheric circulations with humidity effects.
Hsia, Chun-Hsiung; Lin, Chang-Shou; Ma, Tian; Wang, Shouhong
2015-01-08
The main objective of this article is to study the effect of moisture on the planetary-scale atmospheric circulation over the tropics. The modelling we adopt is the Boussinesq equations coupled with a diffusive equation of humidity, and the humidity-dependent heat source is modelled by a linear approximation of the humidity. The rigorous mathematical analysis is carried out using the dynamic transition theory. In particular, we obtain mixed transitions, also known as random transitions, as described in Ma & Wang (2010 Discrete Contin. Dyn. Syst. 26, 1399-1417 (doi:10.3934/dcds.2010.26.1399); 2011 Adv. Atmos. Sci. 28, 612-622 (doi:10.1007/s00376-010-9089-0)). The analysis also indicates the need to include turbulent friction terms in the model to obtain correct convection scales for the large-scale tropical atmospheric circulations, leading in particular to the right critical temperature gradient and the length scale for the Walker circulation. In short, the analysis shows that the effect of moisture lowers the magnitude of the critical thermal Rayleigh number and does not change the essential characteristics of the dynamical behaviour of the system.
NASA Astrophysics Data System (ADS)
Haigang, Sui; Zhina, Song
2016-06-01
Reliable ship detection in optical satellite images has wide application in both military and civil fields. However, this problem is very difficult in complex backgrounds, such as waves, clouds, and small islands. Aiming at these issues, this paper explores an automatic and robust model for ship detection in large-scale optical satellite images, which relies on detecting statistical signatures of ship targets in terms of biologically inspired visual features. This model first selects salient candidate regions across large-scale images by using a mechanism based on biologically inspired visual features, combining a visual attention model with local binary patterns (CVLBP). Different from traditional studies, the proposed algorithm is high-speed and helps focus on the suspected ship areas, avoiding a separate land-sea segmentation step. Large-area images are cut into small image chips and analyzed in two complementary ways: sparse saliency using the visual attention model and detail signatures using LBP features, consistent with the sparseness of ship distribution on images. These features are then employed to classify each chip as containing ship targets or not, using a support vector machine (SVM). After getting the suspicious areas, there are still some false alarms such as waves and small ribbon clouds, so simple shape and texture analysis is adopted to distinguish between ships and non-ships in suspicious areas. Experimental results show the proposed method is insensitive to waves, clouds, illumination and ship size.
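The chip-classification step can be sketched as follows. This is a minimal stand-in: a hand-rolled 8-neighbour LBP and a nearest-centroid rule replace the paper's CVLBP features and SVM, and synthetic textured "chips" replace satellite imagery:

```python
import numpy as np

def lbp_histogram(img):
    """Normalized histogram of 8-neighbour local binary pattern codes;
    a minimal stand-in for the paper's LBP texture signature."""
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(c.shape, dtype=int)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(int) << bit   # one bit per neighbour comparison
    hist = np.bincount(code.ravel(), minlength=256)
    return hist / hist.sum()

rng = np.random.default_rng(0)
stripes = np.sin(np.arange(32))[None, :] * np.ones((32, 1))
ship_chips = [stripes + 0.1 * rng.standard_normal((32, 32)) for _ in range(10)]  # textured "ships"
sea_chips = [0.1 * rng.standard_normal((32, 32)) for _ in range(10)]             # flat "sea"

ship_mean = np.mean([lbp_histogram(c) for c in ship_chips], axis=0)
sea_mean = np.mean([lbp_histogram(c) for c in sea_chips], axis=0)

def classify(chip):
    # Nearest-centroid rule standing in for the paper's SVM stage.
    h = lbp_histogram(chip)
    return "ship" if np.linalg.norm(h - ship_mean) < np.linalg.norm(h - sea_mean) else "sea"

test_chip = stripes + 0.1 * rng.standard_normal((32, 32))
print(classify(test_chip))
```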
Structured decision making as a framework for large-scale wildlife harvest management decisions
Robinson, Kelly F.; Fuller, Angela K.; Hurst, Jeremy E.; Swift, Bryan L.; Kirsch, Arthur; Farquhar, James F.; Decker, Daniel J.; Siemer, William F.
2016-01-01
Fish and wildlife harvest management at large spatial scales often involves making complex decisions with multiple objectives and difficult tradeoffs, population demographics that vary spatially, competing stakeholder values, and uncertainties that might affect management decisions. Structured decision making (SDM) provides a formal decision analytic framework for evaluating difficult decisions by breaking decisions into component parts and separating the values of stakeholders from the scientific evaluation of management actions and uncertainty. The result is a rigorous, transparent, and values-driven process. This decision-aiding process provides the decision maker with a more complete understanding of the problem and the effects of potential management actions on stakeholder values, as well as how key uncertainties can affect the decision. We use a case study to illustrate how SDM can be used as a decision-aiding tool for management decision making at large scales. We evaluated alternative white-tailed deer (Odocoileus virginianus) buck-harvest regulations in New York designed to reduce harvest of yearling bucks, taking into consideration the values of the state wildlife agency responsible for managing deer, as well as deer hunters. We incorporated tradeoffs about social, ecological, and economic management concerns throughout the state. Based on the outcomes of predictive models, expert elicitation, and hunter surveys, the SDM process identified management alternatives that optimized competing objectives. The SDM process provided biologists and managers insight about aspects of the buck-harvest decision that helped them adopt a management strategy most compatible with diverse hunter values and management concerns.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Hongqi; Brandenburg, Axel; Sokoloff, D. D., E-mail: hzhang@bao.ac.cn
We adopt an isotropic representation of the Fourier-transformed two-point correlation tensor of the magnetic field to estimate the magnetic energy and helicity spectra as well as current helicity spectra of two individual active regions (NOAA 11158 and NOAA 11515) and the change of the spectral indices during their development as well as during the solar cycle. The departure of the spectral indices of magnetic energy and current helicity from 5/3 is analyzed, and the spectral index of current helicity is found to be lower than that of the magnetic energy spectrum. Furthermore, the fractional magnetic helicity tends to increase when the scale of the energy-carrying magnetic structures increases. The magnetic helicity of NOAA 11515 violates the expected hemispheric sign rule, which is interpreted as an effect of enhanced field strengths at scales larger than 30-60 Mm with opposite signs of helicity. This is consistent with the general cycle dependence, which shows that around the solar maximum the magnetic energy and helicity spectra are steeper, emphasizing the large-scale field.
Defects controlled wrinkling and topological design in graphene
NASA Astrophysics Data System (ADS)
Zhang, Teng; Li, Xiaoyan; Gao, Huajian
2014-07-01
Due to its atomic-scale thickness, the deformation energy in a free-standing graphene sheet can be easily released through out-of-plane wrinkles which, if controllable, may be used to tune the electrical and mechanical properties of graphene. Here we adopt a generalized von Kármán equation for a flexible solid membrane to describe graphene wrinkling induced by a prescribed distribution of topological defects such as disclinations (heptagons or pentagons) and dislocations (heptagon-pentagon dipoles). In this framework, a given distribution of topological defects in a graphene sheet is represented as an eigenstrain field which is determined from a Poisson equation and can be conveniently implemented in finite element (FEM) simulations. Comparison with atomistic simulations indicates that the proposed model, with only three parameters (i.e., bond length, stretching modulus and bending stiffness), is capable of accurately predicting the atomic-scale wrinkles near disclination/dislocation cores while also capturing the large-scale graphene configurations under specific defect distributions such as those leading to a sinusoidal surface ruga.
CD-ROM technology at the EROS data center
Madigan, Michael E.; Weinheimer, Mary C.
1993-01-01
The vast amount of digital spatial data often required by a single user has created a demand for media alternatives to 1/2" magnetic tape. One such medium that has been recently adopted at the U.S. Geological Survey's EROS Data Center is the compact disc (CD). CD's are a versatile, dynamic, and low-cost method for providing a variety of data on a single media device and are compatible with various computer platforms. CD drives are available for personal computers, UNIX workstations, and mainframe systems, either directly connected, or through a network. This medium furnishes a quick method of reproducing and distributing large amounts of data on a single CD. Several data sets are already available on CD's, including collections of historical Landsat multispectral scanner data and biweekly composites of Advanced Very High Resolution Radiometer data for the conterminous United States. The EROS Data Center intends to provide even more data sets on CD's. Plans include specific data sets on a customized disc to fulfill individual requests, and mass production of unique data sets for large-scale distribution. Requests for a single compact disc-read only memory (CD-ROM) containing a large volume of data either for archiving or for one-time distribution can be addressed with a CD-write once (CD-WO) unit. Mass production and large-scale distribution will require CD-ROM replication and mastering.
NASA Astrophysics Data System (ADS)
Suzuki, Tsuneo
2018-02-01
Blockspin transformation of topological defects is applied to the violation of the non-Abelian Bianchi identity (VNABI) on the lattice, defined as Abelian monopoles. To get rid of lattice artifacts, we introduce (1) smooth gauge fixings such as the maximal center gauge (MCG), (2) blockspin transformations and (3) the tadpole-improved gauge action. The effective action can be determined by adopting the inverse Monte Carlo method. The coupling constants F(i) of the effective action depend on the coupling of the lattice action β and the number of blocking steps n. But it is found that F(i) satisfies a beautiful scaling; that is, they are a function of the product b = n a(β) alone for lattice coupling constants 3.0 ≤ β ≤ 3.9 and blocking steps 1 ≤ n ≤ 12. The effective action showing the scaling behavior can be regarded as an almost perfect action corresponding to the continuum limit, since a → 0 as n → ∞ for fixed b. The infrared effective monopole action keeps the global color invariance when smooth gauges such as MCG keeping the invariance are adopted. The almost perfect action showing the scaling is found to be independent of the smooth gauges adopted here, as naturally expected from the gauge invariance of the continuum theory. Then we compare the results with those obtained by the analytic blocking method of topological defects from the continuum, assuming local two-point interactions are dominant in the infrared effective action. The action is formulated in the continuum limit, while the couplings of these actions can be derived from simple observables calculated numerically on lattices with a finite lattice spacing. When use is made of the Berezinskii-Kosterlitz-Thouless (BKT) transformation, the infrared monopole action can be transformed into that of the string model.
Since large b = n a(β) corresponds to the strong-coupling region in the string model, the physical string tension and the lowest glueball mass can be evaluated analytically with the use of the strong-coupling expansion of the string model. The almost perfect action gives us √σ ≃ 1.3 √σ_phys for b ≥ 1.0 σ_phys^(-1/2), whereas the scalar glueball mass is kept near M(0++) ≈ 3.7 √σ_phys. In addition, using the effective action composed of 10 simple quadratic interactions alone, we can almost explain analytically the scaling function of the squared monopole density determined numerically over a large range of b when b > 1.2 σ_phys^(-1/2).
NASA Astrophysics Data System (ADS)
Chen, Chunfeng; Liu, Hua; Fan, Ge
2005-02-01
In this paper we consider the problem of designing a network of optical cross-connects (OXCs) to provide end-to-end lightpath services to label switched routers (LSRs). Like some previous work, we select the number of OXCs as our objective. Compared with previous studies, we take into account the fault-tolerance characteristics of the logical topology. First, using a randomly generated Prüfer number, we generate a tree. By adding some edges to the tree, we obtain a physical topology consisting of a certain number of OXCs and the fiber links connecting them. Notably, we are the first to limit the number of layers of the tree produced by this method. We then design logical topologies based on these physical topologies. In principle, we select the shortest path, with some consideration of link load balancing and the limitations imposed by SRLGs. Notably, we run the routing algorithm over the nodes in increasing order of node degree. For the wavelength assignment problem, we adopt the commonly used heuristic graph-coloring algorithm. Our problem is clearly computationally intractable, especially when the network is large, so we adopt the tabu search algorithm to find a near-optimal solution for our objective. We present numerical results for up to 1000 LSRs and for a wide range of system parameters, such as the number of wavelengths supported by each fiber link and the traffic load. The results indicate that it is possible to build large-scale optical networks with rich connectivity in a cost-effective manner, using relatively few but properly dimensioned OXCs.
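The tree-generation step from a random Prüfer number can be sketched as follows; this is a standard Prüfer decoding assumed for illustration, and the paper's layer limit and edge-addition steps are not reproduced:

```python
import heapq
import random

def prufer_to_tree(seq):
    """Decode a Prüfer sequence into the edge list of a labelled tree on
    n = len(seq) + 2 nodes; a standard decoding, not the paper's variant."""
    n = len(seq) + 2
    degree = [1] * n
    for v in seq:
        degree[v] += 1
    leaves = [v for v in range(n) if degree[v] == 1]
    heapq.heapify(leaves)
    edges = []
    for v in seq:
        leaf = heapq.heappop(leaves)      # smallest remaining leaf
        edges.append((leaf, v))
        degree[v] -= 1
        if degree[v] == 1:                # v has no further appearances in seq
            heapq.heappush(leaves, v)
    # Two nodes of degree 1 remain; join them for the final edge.
    edges.append((heapq.heappop(leaves), heapq.heappop(leaves)))
    return edges

random.seed(0)
n = 8
seq = [random.randrange(n) for _ in range(n - 2)]   # a random Prüfer number
print(prufer_to_tree(seq))                          # n - 1 tree edges
```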
NASA Astrophysics Data System (ADS)
Yen, H.; White, M. J.; Arnold, J. G.; Keitzer, S. C.; Johnson, M. V. V.; Atwood, J. D.; Daggupati, P.; Herbert, M. E.; Sowa, S. P.; Ludsin, S.; Robertson, D. M.; Srinivasan, R.; Rewa, C. A.
2016-12-01
With the substantial improvement of computer technology, large-scale watershed modeling has become practically feasible for conducting detailed investigations of hydrologic, sediment, and nutrient processes. In the Western Lake Erie Basin (WLEB), water quality issues caused by anthropogenic activities are not just interesting research subjects but have implications for human health and welfare, as well as ecological integrity, resistance, and resilience. In this study, the Soil and Water Assessment Tool (SWAT) and the finest-resolution stream network, NHDPlus, were implemented on the WLEB to examine the interactions between achievable conservation scenarios and the corresponding additional projected costs. During the calibration/validation processes, both hard (temporal) and soft (non-temporal) data were used to ensure the modeling outputs are coherent with actual watershed behavior. The results showed that widespread adoption of conservation practices intended to provide erosion control could deliver average reductions of sediment and nutrients without additional nutrient management changes. On the other hand, responses of nitrate (NO3) and dissolved inorganic phosphorus (DIP) dynamics may differ from responses of total nitrogen and total phosphorus dynamics under the same conservation practice. Model results also implied that fewer financial resources are required to achieve conservation goals if the goal is to achieve reductions in targeted watershed outputs (e.g., NO3 or DIP) rather than aggregated outputs (e.g., total nitrogen or total phosphorus). In addition, it was found that the model's capacity to simulate seasonal effects and responses to changing conservation adoption on a seasonal basis could provide a useful index to help reduce costs through temporal targeting of conservation practices.
Scientists, engineers, and stakeholders can take advantage of the work performed in this study as essential information when conducting policy-making processes in the future.
Broberg, Craig S; Mitchell, Julie; Rehel, Silven; Grant, Andrew; Gianola, Ann; Beninato, Peter; Winter, Christiane; Verstappen, Amy; Valente, Anne Marie; Weiss, Joseph; Zaidi, Ali; Earing, Michael G; Cook, Stephen; Daniels, Curt; Webb, Gary; Khairy, Paul; Marelli, Ariane; Gurvitz, Michelle Z; Sahn, David J
2015-10-01
The adoption of electronic health records (EHR) has created an opportunity for multicenter data collection, yet the feasibility and reliability of this methodology is unknown. The aim of this study was to integrate EHR data into a homogeneous central repository specifically addressing the field of adult congenital heart disease (ACHD). Target data variables were proposed and prioritized by consensus of investigators at five target ACHD programs. Database analysts determined which variables were available within their institutions' EHR and stratified their accessibility, and results were compared between centers. Data for patients seen in a single calendar year were extracted to a uniform database and subsequently consolidated. From 415 proposed target variables, only 28 were available in discrete formats at all centers. For variables of highest priority, 16/28 (57%) were available at all four sites, but only 11% for those of high priority. Integration was neither simple nor straightforward. Coding schemes in use for congenital heart diagnoses varied and would require additional user input for accurate mapping. There was considerable variability in procedure reporting formats and medication schemes, often with center-specific modifications. Despite the challenges, the final acquisition included limited data on 2161 patients, and allowed for population analysis of race/ethnicity, defect complexity, and body morphometrics. Large-scale multicenter automated data acquisition from EHRs is feasible yet challenging. Obstacles stem from variability in data formats, coding schemes, and adoption of non-standard lists within each EHR. The success of large-scale multicenter ACHD research will require institution-specific data integration efforts. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
The impact of BPO on cost reduction in mid-sized health care systems.
Perry, Andy; Kocakülâh, Mehmet C
2010-01-01
At the convergence of two politico-economic "hot topics" of the day -- outsourcing and the cost of health care -- lie opportunities for mid-sized health systems to innovate, collaborate, and reduce overhead. Competition in the retail health care market can serve as both an impetus and an inhibitor to such measures, though. Here we address the motivations, influences, opportunities, and limitations facing mid-sized US non-profit health systems in business process outsourcing (BPO). Advocates cite numerous benefits to BPO, particularly in cost reduction and strategy optimization. BPO can elicit cost savings due to specialization among provider firms, returns to scale and technology, standardization and automation, and gains in resource arbitrage (off-shoring capabilities). BPO can also free an organization of non-critical tasks and focus resources on core competencies (treating patients). The surge in BPO utilization has rarely extended to the back-office functions of many mid-sized health systems. Health care providers, still a largely fragmented group with many rural, independent non-profit systems, have not experienced the consolidation and organizational scale growth that make BPO as attractive as in other industries. Smaller firms, spurning merger and acquisition pressure from large, tertiary health systems, often wish to retain their autonomy and identity; hence, they face a competitive cost disadvantage compared to their larger competitors. This article examines the functional areas for these health systems in which BPO is not currently utilized and dissects the various methods available in which to practice BPO. We assess the ongoing adoption of BPO in these areas as well as the barriers to adoption, and identify the key processes that best represent opportunities for success. An emphasis is placed on a collaborative model with other health systems compared to a single-system, unilateral BPO arrangement.
Decisions through data: analytics in healthcare.
Wills, Mary J
2014-01-01
The amount of data in healthcare is increasing at an astonishing rate. However, in general, the industry has not deployed the level of data management and analysis necessary to make use of those data. As a result, healthcare executives face the risk of being overwhelmed by a flood of unusable data. In this essay I argue that, in order to extract actionable information, leaders must take advantage of the promise of data analytics. Small data, predictive modeling expansion, and real-time analytics are three forms of data analytics. On the basis of my analysis for this study, I recommend all three for adoption. Recognizing the uniqueness of each organization's situation, I also suggest that practices, hospitals, and healthcare systems examine small data and conduct real-time analytics and that large-scale organizations managing populations of patients adopt predictive modeling. I found that all three solutions assist in the collection, management, and analysis of raw data to improve the quality of care and decrease costs.
Edison, John R; Spencer, Ryan K; Butterfoss, Glenn L; Hudson, Benjamin C; Hochbaum, Allon I; Paravastu, Anant K; Zuckermann, Ronald N; Whitelam, Stephen
2018-05-29
The conformations adopted by the molecular constituents of a supramolecular assembly influence its large-scale order. At the same time, the interactions made in assemblies by molecules can influence their conformations. Here we study this interplay in extended flat nanosheets made from nonnatural sequence-specific peptoid polymers. Nanosheets exist because individual polymers can be linear and untwisted, by virtue of polymer backbone elements adopting alternating rotational states whose twists oppose and cancel. Using molecular dynamics and quantum mechanical simulations, together with experimental data, we explore the design space of flat nanostructures built from peptoids. We show that several sets of peptoid backbone conformations are consistent with their being linear, but the specific combination observed in experiment is determined by a combination of backbone energetics and the interactions made within the nanosheet. Our results provide a molecular model of the peptoid nanosheet consistent with all available experimental data and show that its structure results from a combination of intra- and intermolecular interactions.
Social Collaborative Filtering by Trust.
Yang, Bo; Lei, Yu; Liu, Jiming; Li, Wenjie
2017-08-01
Recommender systems are used to accurately and actively provide users with potentially interesting information or services. Collaborative filtering is a widely adopted approach to recommendation, but sparse data and cold-start users are often barriers to providing high-quality recommendations. To address such issues, we propose a novel method that improves the performance of collaborative filtering recommendations by integrating the sparse rating data given by users with the sparse social trust network among these same users. It is a model-based method that adopts a matrix factorization technique to map users into low-dimensional latent feature spaces in terms of their trust relationships, aiming to reflect more accurately the users' reciprocal influence on the formation of their own opinions and to learn better preferential patterns of users for high-quality recommendations. We use four large-scale datasets to show that the proposed method performs much better, especially for cold-start users, than state-of-the-art recommendation algorithms for social collaborative filtering based on trust.
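A minimal sketch of trust-regularized matrix factorization in the spirit described above, assuming a simple SGD scheme and toy data; the paper's actual model, loss, and hyperparameters may differ:

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k = 6, 5, 3
# Sparse ratings (user, item, rating) and sparse trust edges (user, trusted user).
ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (2, 3, 2.0), (3, 4, 4.0), (4, 2, 1.0)]
trust = [(5, 0), (2, 1), (4, 3)]   # user 5 has no ratings: trust is its only signal

U = 0.1 * rng.standard_normal((n_users, k))   # user latent factors
V = 0.1 * rng.standard_normal((n_items, k))   # item latent factors
lr, lam, beta = 0.05, 0.02, 0.5               # beta weights the trust term

for _ in range(200):
    for u, i, r in ratings:                   # fit observed ratings
        e = r - U[u] @ V[i]
        U[u] += lr * (e * V[i] - lam * U[u])
        V[i] += lr * (e * U[u] - lam * V[i])
    for u, v in trust:                        # pull each user toward those they trust
        U[u] += lr * beta * (U[v] - U[u])

# Cold-start user 5 inherits user 0's preferences through the trust edge.
print(round(float(U[5] @ V[0]), 2))
```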
A Framework and Improvements of the Korea Cloud Services Certification System.
Jeon, Hangoo; Seo, Kwang-Kyu
2015-01-01
Cloud computing service is an evolving paradigm that affects a large part of the ICT industry and provides new opportunities for ICT service providers, such as the deployment of new business models and the realization of economies of scale through increased efficiency of resource utilization. However, despite the benefits of cloud services, there are some obstacles to adoption, such as the lack of means for assessing and comparing the service quality of cloud services regarding availability, security, and reliability. In order to adopt cloud services successfully and promote their uptake, it is necessary to establish a cloud service certification system to ensure the service quality and performance of cloud services. This paper proposes a framework and improvements for the Korea certification system for cloud services. To develop it, the critical issues related to service quality, performance, and certification of cloud services are identified, and a systematic framework for the certification system of cloud services and service provider domains is developed. Improvements to the developed Korea certification system for cloud services are also proposed.
ERIC Educational Resources Information Center
MacDonald, Mandi; McLoughlin, Priscilla
2016-01-01
This article combines practitioner insight and research evidence to chart how principles of partnership and paramountcy have led to birth family contact becoming the expected norm following contested adoption from care in Northern Ireland. The article highlights how practice has adapted to the delay in proposed reforms to adoption legislation…
Scaling of Device Variability and Subthreshold Swing in Ballistic Carbon Nanotube Transistors
NASA Astrophysics Data System (ADS)
Cao, Qing; Tersoff, Jerry; Han, Shu-Jen; Penumatcha, Ashish V.
2015-08-01
In field-effect transistors, the inherent randomness of dopants and other charges is a major cause of device-to-device variability. For a quasi-one-dimensional device such as carbon nanotube transistors, even a single charge can drastically change the performance, making this a critical issue for their adoption as a practical technology. Here we calculate the effect of the random charges at the gate-oxide surface in ballistic carbon nanotube transistors, finding good agreement with the variability statistics in recent experiments. A combination of experimental and simulation results further reveals that these random charges are also a major factor limiting the subthreshold swing for nanotube transistors fabricated on thin gate dielectrics. We then establish that the scaling of the nanotube device uniformity with the gate dielectric, fixed-charge density, and device dimension is qualitatively different from conventional silicon transistors, reflecting the very different device physics of a ballistic transistor with a quasi-one-dimensional channel. The combination of gate-oxide scaling and improved control of fixed-charge density should provide the uniformity needed for large-scale integration of such novel one-dimensional transistors even at extremely scaled device dimensions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baghram, Shant; Abolhasani, Ali Akbar; Firouzjahi, Hassan
We study the predictions of anomalous inflationary models for the abundance of structures in large-scale structure observations. The anomalous features encoded in the primordial curvature perturbation power spectrum are (a) a localized feature in momentum space, (b) hemispherical asymmetry and (c) statistical anisotropies. We present a model-independent expression relating the number density of structures to the changes in the matter density variance. Models with a localized feature can alleviate the tension between observations and numerical simulations of cold dark matter structures on galactic scales as a possible solution to the missing satellite problem. In models with hemispherical asymmetry we show that the abundance of structures becomes asymmetric depending on the direction of observation on the sky. In addition, we study the effects of a scale-dependent dipole amplitude on the abundance of structures. Using the quasar data and adopting the power-law scaling k^(n_A - 1) for the amplitude of the dipole, we find the upper bound n_A < 0.6 for the spectral index of the dipole asymmetry. In all cases there is a critical mass scale M_c such that for M < M_c (M > M_c) the enhancement in variance induced by the anomalous feature decreases (increases) the abundance of dark matter structures in the Universe.
Understanding middle managers' influence in implementing patient safety culture.
Gutberg, Jennifer; Berta, Whitney
2017-08-22
The past fifteen years have been marked by large-scale change efforts undertaken by healthcare organizations to improve patient safety and patient-centered care. Despite substantial investment of effort and resources, many of these large-scale or "radical change" initiatives, like those in other industries, have enjoyed limited success - with practice and behavioural changes neither fully adopted nor ultimately sustained - which has in large part been ascribed to inadequate implementation efforts. Culture change to "patient safety culture" (PSC) is among these radical change initiatives, where results to date have been mixed at best. This paper responds to calls for research that focus on explicating factors that affect efforts to implement radical change in healthcare contexts, and focuses on PSC as the radical change implementation. Specifically, this paper offers a novel conceptual model based on Organizational Learning Theory to explain the ability of middle managers in healthcare organizations to influence patient safety culture change. We propose that middle managers can capitalize on their unique position between upper and lower levels in the organization and engage in 'ambidextrous' learning that is critical to implementing and sustaining radical change. This organizational learning perspective offers an innovative way of framing the mid-level managers' role, through both explorative and exploitative activities, which further considers the necessary organizational context in which they operate.
NASA Astrophysics Data System (ADS)
Wang, Z.
2015-12-01
For decades, distributed and lumped hydrological models have furthered our understanding of hydrological systems. Hydrological simulation at large scales and high spatial resolution has refined the spatial description of hydrological behaviour. This trend, however, is accompanied by growing model complexity and parameter counts, which pose new challenges for uncertainty quantification. Generalized Likelihood Uncertainty Estimation (GLUE), a Monte Carlo method coupled with Bayesian estimation, has been widely used for uncertainty analysis of hydrological models. However, the random sampling of prior parameters adopted by GLUE is inefficient, especially in high-dimensional parameter spaces. Heuristic optimization algorithms based on iterative evolution show better convergence speed and search performance. In light of these features, this study adopted a genetic algorithm, differential evolution and the shuffled complex evolution algorithm to search the parameter space and obtain parameter sets of large likelihood. Based on this multi-algorithm sampling, model uncertainty analysis is conducted within the standard GLUE framework. To demonstrate the superiority of the new method, two hydrological models of differing complexity are examined. The results show that the adaptive method is efficient in sampling and effective in uncertainty analysis, providing an alternative path for uncertainty quantification.
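As a rough illustration of the GLUE step that follows the sampling stage, the sketch below uses plain Monte Carlo sampling on an invented two-parameter recession model and then forms likelihood-weighted prediction bounds from the behavioural parameter sets. The model, the Nash-Sutcliffe threshold of 0.7 and all constants are assumptions for illustration; the paper's heuristic samplers (GA, DE, SCE) would replace the uniform sampling.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_model(theta, t):
    # Hypothetical two-parameter "hydrological" response: exponential recession.
    k, a = theta
    return a * np.exp(-k * t)

t = np.linspace(0, 10, 50)
obs = toy_model((0.3, 2.0), t) + rng.normal(0, 0.05, t.size)

def nse(sim, obs):
    # Nash-Sutcliffe efficiency, a common GLUE likelihood measure.
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Stage 1: sample the parameter space (uniform Monte Carlo here; the paper
# concentrates samples in high-likelihood regions with GA/DE/SCE instead).
samples = rng.uniform([0.01, 0.5], [1.0, 4.0], size=(5000, 2))
scores = np.array([nse(toy_model(th, t), obs) for th in samples])

# Stage 2: classic GLUE step - keep "behavioural" sets above a threshold and
# weight their predictions by rescaled likelihood.
behavioural = scores > 0.7
weights = scores[behavioural] - 0.7
weights /= weights.sum()
preds = np.array([toy_model(th, t) for th in samples[behavioural]])

# Likelihood-weighted 5-95% prediction bounds at each time step.
order = np.argsort(preds, axis=0)
lower = np.empty(t.size)
upper = np.empty(t.size)
for j in range(t.size):
    p = preds[order[:, j], j]
    cw = np.cumsum(weights[order[:, j]])
    lower[j] = p[np.searchsorted(cw, 0.05)]
    upper[j] = p[np.searchsorted(cw, 0.95)]
```

Swapping the uniform draw for an evolutionary sampler only changes Stage 1; the behavioural filtering and weighting of Stage 2 are unchanged, which is what lets the multi-algorithm scheme plug into the typical GLUE framework.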
Puzzolo, Elisa; Pope, Daniel; Stanistreet, Debbi; Rehfuess, Eva A; Bruce, Nigel G
2016-04-01
Access to, and sustained adoption of, clean household fuels at scale remains an aspirational goal to achieve sufficient reductions in household air pollution (HAP) in order to impact on the substantial global health burden caused by reliance on solid fuels. To systematically appraise the current evidence base to identify: (i) which factors enable or limit adoption and sustained use of clean fuels (namely liquefied petroleum gas (LPG), biogas, solar cooking and alcohol fuels) in low- and middle-income countries; (ii) lessons learnt concerning equitable scaling-up of programmes of cleaner cooking fuels in relation to poverty, urban-rural settings and gender. A mixed-methods systematic review was conducted using established review methodology and extensive searches of published and grey literature sources. Data extraction and quality appraisal of quantitative, qualitative and case studies meeting inclusion criteria were conducted using standardised methods with reliability checking. Forty-four studies from Africa, Asia and Latin America met the inclusion criteria (17 on biogas, 12 on LPG, 9 on solar, 6 on alcohol fuels). A broad range of inter-related enabling and limiting factors were identified for all four types of intervention, operating across seven pre-specified domains (i.e. fuel and technology characteristics, household and setting characteristics, knowledge and perceptions, financial, tax and subsidy aspects, market development, regulation, legislation and standards, and programme and policy mechanisms) and multiple levels (i.e. household, community, national). All domains matter and the majority of factors are common to all clean fuels interventions reviewed although some are fuel and technology-specific. All factors should therefore be taken into account and carefully assessed during planning and implementation of any small- and large-scale initiative aiming at promoting clean fuels for household cooking. 
Despite limitations in quantity and quality of the evidence this systematic review provides a useful starting point for the design, delivery and evaluation of programmes to ensure more effective adoption and use of LPG, biogas, alcohol fuels and solar cooking. This review was funded by the Department for International Development (DfID) of the United Kingdom. The authors would also like to thank the Evidence for Policy and Practice Information and Co-ordinating Centre (EPPI-Centre) for their technical support. Copyright © 2016 Elsevier Inc. All rights reserved.
Improving Smallholder Farmer Biosecurity in the Mekong Region Through Change Management.
Young, J R; Evans-Kocinski, S; Bush, R D; Windsor, P A
2015-10-01
Transboundary animal diseases including foot-and-mouth disease and haemorrhagic septicaemia remain a major constraint for improving smallholder large ruminant productivity in the Mekong region, producing negative impacts on rural livelihoods and compromising efforts to reduce poverty and food insecurity. The traditional husbandry practices of smallholders largely exclude preventive health measures, increasing risks of disease transmission. Although significant efforts have been made to understand the social aspects of change development in agricultural production, attention to improving the adoption of biosecurity has been limited. This study reviews smallholder biosecurity risk factors identified in the peer-reviewed literature and from field research observations conducted in Cambodia and Laos during 2006-2013, considering these in the context of a change management perspective aimed at improving adoption of biosecurity measures. Motivation for change, resistance to change, knowledge management, cultural dimensions, systems theory and leadership are discussed. Due to geographical, physical and resource variability, the implementation of biosecurity interventions suitable for smallholders is not a 'one size fits all'. Smallholders should be educated in biosecurity principles and empowered to make personal decisions rather than adopt prescribed pre-defined interventions. Biosecurity interventions should be aligned with smallholder farmer motivations, preferably offering clear short-term risk management benefits that elicit interest from smallholders. Linking biosecurity and disease control with improved livestock productivity provides opportunities for sustainable improvements in livelihoods. Participatory research and extension that improves farmer knowledge and practices offers a pathway to elicit sustainable broad-scale social change. 
However, examples of successes need to be communicated both at the 'evidence-based level' to influence regional policy development and at the village or commune level, with 'champion farmers' and 'cross-visits' used to lead local change. The adoption of applied change management principles to improving regional biosecurity may assist current efforts to control and eradicate transboundary diseases in the Mekong region. © 2013 Blackwell Verlag GmbH.
Diffusion of innovations in Axelrod’s model
NASA Astrophysics Data System (ADS)
Tilles, Paulo F. C.; Fontanari, José F.
2015-11-01
Axelrod's model for the dissemination of culture contains two key ingredients required to model the diffusion of innovations, namely social influence (individuals become more similar when they interact) and homophily (individuals interact preferentially with similar others). The strength of these social influences is controlled by two parameters: $F$, the number of features that characterizes a culture, and $q$, the common number of states each feature can assume. Here we assume that the innovation is a new state of a cultural feature of a single individual -- the innovator -- and study how the innovation spreads through the network of individuals. For infinite regular lattices in one (1D) and two (2D) dimensions, we find that a successful innovation initially spreads linearly with time $t$, but in the long-time limit it spreads diffusively ($\sim t^{1/2}$) in 1D and sub-diffusively ($\sim t/\ln t$) in 2D. For finite lattices, the growth curves for the number of adopters are typically concave functions of $t$. For random graphs with a finite number of nodes $N$, we argue that the classical S-shaped growth curves result from a trade-off between the average connectivity $K$ of the graph and the per-feature diversity $q$. A large $q$ is needed to slow the initial spread of the innovation and thus delimit the early-adopters stage, whereas a large $K$ is necessary to ensure the onset of the take-off stage, at which the number of adopters grows superlinearly with $t$. In an infinite random graph we find that the number of adopters of a successful innovation scales as $t^\gamma$, with $\gamma = 1$ for $K > 2$ and $1/2 < \gamma < 1$ for $K = 2$. We suggest that the exponent $\gamma$ may be a useful index to characterize the diffusion of successful innovations in diverse scenarios.
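The 1D spreading experiment can be sketched as follows. Everything here (ring size, number of update steps, and the convention of encoding the innovation as the extra state value q) is an illustrative assumption, not the authors' code.

```python
import random
random.seed(1)

F, q, N, steps = 3, 4, 200, 100_000
# Ring of N agents; a culture is a list of F features, each in state 0..q-1.
culture = [[random.randrange(q) for _ in range(F)] for _ in range(N)]
culture[N // 2][0] = q   # the innovation: a brand-new state of feature 0

for _ in range(steps):
    i = random.randrange(N)
    j = (i + random.choice((-1, 1))) % N          # random lattice neighbour
    shared = sum(a == b for a, b in zip(culture[i], culture[j]))
    # Homophily: interaction probability equals cultural similarity; the
    # fully-identical and fully-different cases are inert.
    if 0 < shared < F and random.random() < shared / F:
        f = random.choice([k for k in range(F)
                           if culture[i][k] != culture[j][k]])
        culture[i][f] = culture[j][f]             # social influence

adopters = sum(c[0] == q for c in culture)        # current innovation carriers
```

Tracking `adopters` over time on lattices versus random graphs is what produces the linear, diffusive, and S-shaped growth regimes discussed in the abstract.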
Housing first on a large scale: Fidelity strengths and challenges in the VA's HUD-VASH program.
Kertesz, Stefan G; Austin, Erika L; Holmes, Sally K; DeRussy, Aerin J; Van Deusen Lukas, Carol; Pollio, David E
2017-05-01
Housing First (HF) combines permanent supportive housing and supportive services for homeless individuals and removes traditional treatment-related preconditions for housing entry. There has been little research describing strengths and shortfalls of HF implementation outside of research demonstration projects. The U.S. Department of Veterans Affairs (VA) has transitioned to an HF approach in a supportive housing program serving over 85,000 persons. This offers a naturalistic window to study fidelity when HF is adopted on a large scale. We operationalized HF into 20 criteria grouped into 5 domains. We assessed 8 VA medical centers twice (1 year apart), scoring each criterion using a scale ranging from 1 (low fidelity) to 4 (high fidelity). There were 2 HF domains (no preconditions and rapidly offering permanent housing) for which high fidelity was readily attained. There was uneven progress in prioritizing the most vulnerable clients for housing support. Two HF domains (sufficient supportive services and a modern recovery philosophy) had considerably lower fidelity. Interviews suggested that operational issues such as shortfalls in staffing and training likely hindered performance in these 2 domains. In this ambitious national HF program, the largest to date, we found substantial fidelity in focusing on permanent housing and removal of preconditions to housing entry. Areas of concern included the adequacy of supportive services and adequacy in deployment of a modern recovery philosophy. Under real-world conditions, large-scale implementation of HF is likely to require significant additional investment in client service supports to assure that results are concordant with those found in research studies. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Reforming primary healthcare: from public policy to organizational change.
Gilbert, Frédéric; Denis, Jean-Louis; Lamothe, Lise; Beaulieu, Marie-Dominique; D'amour, Danielle; Goudreau, Johanne
2015-01-01
Governments everywhere are implementing reform to improve primary care. However, the existence of a high degree of professional autonomy makes large-scale change difficult to achieve. The purpose of this paper is to elucidate the change dynamics and the involvement of professionals in a primary healthcare reform initiative carried out in the Canadian province of Quebec. An empirical approach was used to investigate change processes from the inception of a public policy to the execution of changes in professional practices. The data were analysed from a multi-level, combined contextualist-processual perspective. Results are based on a longitudinal multiple-case study of five family medicine groups, which was informed by over 100 interviews, questionnaires, and documentary analysis. The results illustrate the multiple processes observed with the introduction of planned large-scale change in primary care services. The analysis of change content revealed that similar post-change states concealed variations between groups in the scale of their respective changes. The analysis also demonstrated more precisely how change evolved through the introduction of "intermediate change" and how cycles of prescribed and emergent mechanisms distinctively drove change process and change content, from the emergence of the public policy to the change in primary care service delivery. This research was conducted among a limited number of early policy adopters. However, given the international interest in turning to the medical profession to improve primary care, the results offer avenues for both policy development and implementation. The findings offer practical insights for those studying and managing large-scale transformations. They provide a better understanding of how deliberate reforms coexist with professional autonomy through an intertwining of change content and processes. 
This research is one of few studies to examine a primary care reform from emergence to implementation using a longitudinal multi-level design.
Gustavsson, J P; Pedersen, N L; Asberg, M; Schalling, D
1996-06-01
The genetic and environmental origins of individual differences in scores on the anxiety-proneness scales from the Karolinska Scales of Personality were explored using a twin/adoption study design in a sample consisting of 15 monozygotic twin pairs reared apart, and 26 monozygotic and 29 dizygotic twin pairs reared together. The results showed that genetic factors accounted for individual differences in scores on the psychasthenia and somatic anxiety scales. The genetic determinants were not specific to each scale, but were common to both scales. Shared-rearing environmental determinants were important for individual differences in lack of assertiveness and psychic anxiety, and were common to both scales. Individual differences in muscular tension were found to be attributable to the effects of correlated environments. The most important factor explaining individual differences for all scales was the non-shared environment component. The evidence for an aetiologically heterogeneous anxiety-proneness construct emphasizes the appropriateness of a multi-dimensional approach to anxiety proneness.
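The twin-design variance decomposition behind these conclusions can be illustrated with Falconer's formulas. The correlation values below are invented for illustration and are not the study's estimates.

```python
# Invented twin correlations for one anxiety-proneness scale (illustrative
# only). Falconer's formulas split trait variance into additive genetic (h2),
# shared-rearing environment (c2) and non-shared environment (e2) components.
r_mza = 0.40   # MZ twins reared apart: a direct estimate of heritability
r_mzt = 0.45   # MZ twins reared together
r_dzt = 0.25   # DZ twins reared together

h2 = 2 * (r_mzt - r_dzt)   # 2*(0.45 - 0.25) = 0.40, consistent with r_mza
c2 = r_mzt - h2            # shared rearing environment: 0.05
e2 = 1 - r_mzt             # non-shared environment (plus error): 0.55
```

With these invented values the non-shared environment component is the largest, mirroring the abstract's conclusion that non-shared environment dominated individual differences on all scales.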
Finak, Greg; Frelinger, Jacob; Jiang, Wenxin; Newell, Evan W.; Ramey, John; Davis, Mark M.; Kalams, Spyros A.; De Rosa, Stephen C.; Gottardo, Raphael
2014-01-01
Flow cytometry is used increasingly in clinical research for cancer, immunology and vaccines. Technological advances in cytometry instrumentation are increasing the size and dimensionality of data sets, posing a challenge for traditional data management and analysis. Automated analysis methods, despite a general consensus of their importance to the future of the field, have been slow to gain widespread adoption. Here we present OpenCyto, a new BioConductor infrastructure and data analysis framework designed to lower the barrier of entry to automated flow data analysis algorithms by addressing key areas that we believe have held back wider adoption of automated approaches. OpenCyto supports end-to-end data analysis that is robust and reproducible while generating results that are easy to interpret. We have improved the existing, widely used core BioConductor flow cytometry infrastructure by allowing analysis to scale in a memory efficient manner to the large flow data sets that arise in clinical trials, and integrating domain-specific knowledge as part of the pipeline through the hierarchical relationships among cell populations. Pipelines are defined through a text-based csv file, limiting the need to write data-specific code, and are data agnostic to simplify repetitive analysis for core facilities. We demonstrate how to analyze two large cytometry data sets: an intracellular cytokine staining (ICS) data set from a published HIV vaccine trial focused on detecting rare, antigen-specific T-cell populations, where we identify a new subset of CD8 T-cells with a vaccine-regimen specific response that could not be identified through manual analysis, and a CyTOF T-cell phenotyping data set where a large staining panel and many cell populations are a challenge for traditional analysis. The substantial improvements to the core BioConductor flow cytometry packages give OpenCyto the potential for wide adoption. 
It can rapidly leverage new developments in computational cytometry and facilitate reproducible analysis in a unified environment. PMID:25167361
Electro-thermal battery model identification for automotive applications
NASA Astrophysics Data System (ADS)
Hu, Y.; Yurkovich, S.; Guezennec, Y.; Yurkovich, B. J.
This paper describes a model identification procedure for identifying an electro-thermal model of lithium ion batteries used in automotive applications. The dynamic model structure adopted is based on an equivalent circuit model whose parameters are scheduled on the state-of-charge, temperature, and current direction. Linear spline functions are used as the functional form for the parametric dependence. The model identified in this way is valid over a large range of temperatures and states-of-charge, so that the resulting model can be used for automotive applications such as on-board estimation of the state-of-charge and state-of-health. The model coefficients are identified using a multi-step, genetic-algorithm-based optimization procedure designed for large-scale optimization problems. The validity of the procedure is demonstrated experimentally for an A123 lithium ion iron-phosphate battery.
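A minimal sketch of this model structure: a first-order equivalent-circuit (Thevenin) model with parameters interpolated over state-of-charge by linear splines. All parameter tables, the cell capacity, and the omission of temperature and current-direction scheduling are simplifying assumptions for illustration.

```python
import numpy as np

# Hypothetical parameter tables scheduled on state-of-charge via linear
# splines (real tables in the paper also depend on temperature and on the
# direction of the current).
soc_grid = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
ocv_tab  = np.array([3.0, 3.2, 3.3, 3.35, 3.6])    # open-circuit voltage [V]
r0_tab   = np.array([12, 10, 9, 9, 10]) * 1e-3      # ohmic resistance [ohm]
r1_tab   = np.array([20, 15, 12, 12, 14]) * 1e-3    # RC-pair resistance [ohm]
c1_tab   = np.array([2, 3, 4, 4, 3]) * 1e3          # RC-pair capacitance [F]

def simulate(current, dt=1.0, soc0=0.9, capacity_ah=2.3):
    """First-order Thevenin model: V = OCV(soc) - I*R0(soc) - V_rc."""
    soc, v_rc, out = soc0, 0.0, []
    q = capacity_ah * 3600.0
    for i in current:                               # positive = discharge
        soc = np.clip(soc - i * dt / q, 0.0, 1.0)   # coulomb counting
        r0 = np.interp(soc, soc_grid, r0_tab)
        r1 = np.interp(soc, soc_grid, r1_tab)
        c1 = np.interp(soc, soc_grid, c1_tab)
        v_rc += dt * (i / c1 - v_rc / (r1 * c1))    # RC-branch dynamics
        out.append(np.interp(soc, soc_grid, ocv_tab) - i * r0 - v_rc)
    return np.array(out)

v = simulate(np.full(600, 2.3))   # ~1C discharge for 10 minutes
```

The identification task in the paper amounts to choosing the table entries so that simulated voltage matches measured voltage, which is what the genetic-algorithm-based optimization searches over.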
Single-chip microprocessor that communicates directly using light
NASA Astrophysics Data System (ADS)
Sun, Chen; Wade, Mark T.; Lee, Yunsup; Orcutt, Jason S.; Alloatti, Luca; Georgas, Michael S.; Waterman, Andrew S.; Shainline, Jeffrey M.; Avizienis, Rimas R.; Lin, Sen; Moss, Benjamin R.; Kumar, Rajesh; Pavanello, Fabio; Atabaki, Amir H.; Cook, Henry M.; Ou, Albert J.; Leu, Jonathan C.; Chen, Yu-Hsin; Asanović, Krste; Ram, Rajeev J.; Popović, Miloš A.; Stojanović, Vladimir M.
2015-12-01
Data transport across short electrical wires is limited by both bandwidth and power density, which creates a performance bottleneck for semiconductor microchips in modern computer systems—from mobile phones to large-scale data centres. These limitations can be overcome by using optical communications based on chip-scale electronic-photonic systems enabled by silicon-based nanophotonic devices. However, combining electronics and photonics on the same chip has proved challenging, owing to microchip manufacturing conflicts between electronics and photonics. Consequently, current electronic-photonic chips are limited to niche manufacturing processes and include only a few optical devices alongside simple circuits. Here we report an electronic-photonic system on a single chip integrating over 70 million transistors and 850 photonic components that work together to provide logic, memory, and interconnect functions. This system is a realization of a microprocessor that uses on-chip photonic devices to directly communicate with other chips using light. To integrate electronics and photonics at the scale of a microprocessor chip, we adopt a ‘zero-change’ approach to the integration of photonics. Instead of developing a custom process to enable the fabrication of photonics, which would complicate or eliminate the possibility of integration with state-of-the-art transistors at large scale and at high yield, we design optical devices using a standard microelectronics foundry process that is used for modern microprocessors. This demonstration could represent the beginning of an era of chip-scale electronic-photonic systems with the potential to transform computing system architectures, enabling more powerful computers, from network infrastructure to data centres and supercomputers.
Aćimović, Jugoslava; Mäki-Marttunen, Tuomo; Linne, Marja-Leena
2015-01-01
We developed a two-level statistical model that addresses the question of how properties of neurite morphology shape large-scale network connectivity. We adopted a low-dimensional statistical description of neurites. From the neurite model description we derived the expected number of synapses, the node degree, and the effective radius, the maximal distance between two neurons expected to form at least one synapse. We related these quantities to network connectivity described using standard measures from graph theory, such as motif counts, clustering coefficient, minimal path length, and small-world coefficient. These measures are used in a neuroscience context to study phenomena ranging from synaptic connectivity in small neuronal networks to large-scale functional connectivity in the cortex. For these measures we provide analytical solutions that clearly relate the different model properties. Neurites that sparsely cover space lead to a small effective radius. If the effective radius is small compared to the overall neuron size, the obtained networks share similarities with uniform random networks, as each neuron connects to a small number of distant neurons. Large neurites with densely packed branches lead to a large effective radius. If this effective radius is large compared to the neuron size, the obtained networks have many local connections. In between these extremes, the networks maximize the variability of connection repertoires. The presented approach connects the properties of neuron morphology with large-scale network properties without requiring heavy simulations with many model parameters. The two-step procedure provides an easier interpretation of the role of each modeled parameter. The model is flexible and each of its components can be further expanded. We identified a range of model parameters that maximizes variability in network connectivity, a property that might affect network capacity to exhibit different dynamical regimes.
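The role of the effective radius can be illustrated with a random geometric graph: neurons connect whenever their somata lie within the effective radius R, and standard graph measures are then computed on the result. N, R, and the uniform placement below are illustrative assumptions, not the model's fitted values.

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(3)

# Place N neurons uniformly in a unit square; connect any pair closer than
# the effective radius R (the distance within which at least one synapse is
# expected, in the spirit of the two-level model).
N, R = 300, 0.12
pos = rng.uniform(0, 1, size=(N, 2))
d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
adj = (d < R) & ~np.eye(N, dtype=bool)

def clustering(adj):
    # Average local clustering coefficient.
    cs = []
    for i in range(len(adj)):
        nb = np.flatnonzero(adj[i])
        k = len(nb)
        if k < 2:
            continue
        links = adj[np.ix_(nb, nb)].sum() / 2     # edges among neighbours
        cs.append(links / (k * (k - 1) / 2))
    return float(np.mean(cs))

def mean_path_length(adj):
    # Mean shortest-path length over reachable pairs (BFS from every node).
    n, total, pairs = len(adj), 0, 0
    for s in range(n):
        dist = np.full(n, -1)
        dist[s] = 0
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in np.flatnonzero(adj[u]):
                if dist[v] < 0:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += dist[dist > 0].sum()
        pairs += (dist > 0).sum()
    return total / pairs

C, L = clustering(adj), mean_path_length(adj)
```

Shrinking R toward zero pushes the graph toward sparse, random-like connectivity, while growing R produces the locally clustered networks the abstract describes, so sweeping R reproduces the qualitative regimes analytically derived in the paper.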
Agha, Sohail; Williams, Emma
2016-04-01
During the last two decades, the use of maternal health services has increased dramatically in Pakistan, with nearly 80% of Pakistani women making an antenatal care (ANC) visit during their pregnancy. Yet, this increase in use of modern health services has not translated into significant increases in the adoption of contraception. Even though Pakistan has had a national family planning programme and policies since the 1950s, contraceptive use has increased slowly to reach only 35% in 2012-13. No evidence is currently available to demonstrate whether the utilization of maternal health services is associated with contraceptive adoption in Pakistan. This study uses data from a large-scale survey conducted in Sindh province in 2013 to examine whether ANC utilization is a significant predictor of subsequent contraceptive use among women. In an analysis which controls for a range of variables known to be important for family planning adoption, the findings show that ANC is the strongest predictor of subsequent family planning use among women in Sindh. The antenatal visit represents an enormous opportunity to promote the adoption of family planning in Pakistan. The family planning programme should ensure that high-quality family planning counselling is provided to women during their ANC visits. This approach has the potential for contributing to substantial increases in contraceptive use in Pakistan. © The Author 2015. Published by Oxford University Press in association with The London School of Hygiene and Tropical Medicine.
NASA Astrophysics Data System (ADS)
Zhuang, Wei; Mountrakis, Giorgos
2014-09-01
Large footprint waveform LiDAR sensors have been widely used for numerous airborne studies. Ground peak identification in a large footprint waveform is a significant bottleneck in exploiting the full potential of waveform datasets. In the current study, an accurate and computationally efficient algorithm was developed for ground peak identification, called the Filtering and Clustering Algorithm (FICA). The method was evaluated on Land, Vegetation, and Ice Sensor (LVIS) waveform datasets acquired over Central NY. FICA incorporates a set of multi-scale second derivative filters and a k-means clustering algorithm in order to avoid detecting false ground peaks. FICA was tested in five different land cover types (deciduous trees, coniferous trees, shrub, grass and developed area) and showed more accurate results when compared to existing algorithms. More specifically, compared with Gaussian decomposition (GD), the RMSE of ground peak identification by FICA was 2.82 m (5.29 m for GD) in deciduous plots, 3.25 m (4.57 m for GD) in coniferous plots, 2.63 m (2.83 m for GD) in shrub plots, 0.82 m (0.93 m for GD) in grass plots, and 0.70 m (0.51 m for GD) in plots of developed areas. FICA performance was also relatively consistent under various slope and canopy coverage (CC) conditions. In addition, FICA showed better computational efficiency compared to existing methods. FICA's major computational and accuracy advantage results from the adopted multi-scale signal processing procedures that concentrate on local portions of the signal, as opposed to Gaussian decomposition, which uses a curve-fitting strategy applied to the entire signal. The FICA algorithm is a good candidate for large-scale implementation on future space-borne waveform LiDAR sensors.
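One scale of such a filter bank can be sketched as follows: candidate peaks are local minima of the second derivative of a Gaussian-smoothed waveform, pooled across several scales. The synthetic waveform, the amplitude threshold, and the final "lowest candidate is ground" rule are simplifying assumptions (the paper instead separates candidate clusters with k-means).

```python
import numpy as np

# Synthetic large-footprint waveform: a canopy return near 38 m and a weaker
# ground return near 12 m (illustrative; real LVIS waveforms are noisier).
z = np.arange(0, 60, 0.3)                       # elevation bins [m]
wf = 1.0 * np.exp(-((z - 38) / 3.0) ** 2) \
   + 0.6 * np.exp(-((z - 12) / 1.5) ** 2)
wf += np.random.default_rng(7).normal(0, 0.01, z.size)

def second_derivative_peaks(wf, sigma_bins):
    """Candidate peaks at one smoothing scale: local minima of the second
    derivative of a Gaussian-smoothed waveform, one element of a multi-scale
    filter bank in the spirit of FICA."""
    x = np.arange(-4 * sigma_bins, 4 * sigma_bins + 1)
    g = np.exp(-0.5 * (x / sigma_bins) ** 2)
    sm = np.convolve(wf, g / g.sum(), mode="same")
    d2 = np.gradient(np.gradient(sm))
    return [i for i in range(1, len(d2) - 1)
            if d2[i] < d2[i - 1] and d2[i] < d2[i + 1] and sm[i] > 0.1]

# Pool candidates across scales; here the lowest-elevation candidate is taken
# as ground, a stand-in for the paper's k-means clustering step.
cands = sorted({i for s in (2, 4, 8) for i in second_derivative_peaks(wf, s)})
ground_elev = z[min(cands)]
```

Because each filter only examines a local window of the signal, this multi-scale pass is cheap compared with fitting Gaussian components to the entire waveform, which is the computational advantage the abstract highlights.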
Couriot, Ophélie; Hewison, A J Mark; Saïd, Sonia; Cagnacci, Francesca; Chamaillé-Jammes, Simon; Linnell, John D C; Mysterud, Atle; Peters, Wibke; Urbano, Ferdinando; Heurich, Marco; Kjellander, Petter; Nicoloso, Sandro; Berger, Anne; Sustr, Pavel; Kroeschel, Max; Soennichsen, Leif; Sandfort, Robin; Gehr, Benedikt; Morellet, Nicolas
2018-05-01
Much research on large herbivore movement has focused on the annual scale to distinguish between resident and migratory tactics, commonly assuming that individuals are sedentary at the within-season scale. However, apparently sedentary animals may occupy a number of sub-seasonal functional home ranges (sfHR), particularly when the environment is spatially heterogeneous and/or temporally unpredictable. The roe deer (Capreolus capreolus) experiences sharply contrasting environmental conditions due to its widespread distribution, but appears markedly sedentary over much of its range. Using GPS monitoring from 15 populations across Europe, we evaluated the propensity of this large herbivore to be truly sedentary at the seasonal scale in relation to variation in environmental conditions. We studied movement using net square displacement to identify the possible use of sfHR. We expected that roe deer should be less sedentary within seasons in heterogeneous and unpredictable environments, while migratory individuals should be seasonally more sedentary than residents. Our analyses revealed that, across the 15 populations, all individuals adopted a multi-range tactic, occupying between two and nine sfHR during a given season. In addition, we showed that (i) the number of sfHR was only marginally influenced by variation in resource distribution, but decreased with increasing sfHR size; and (ii) the distance between sfHR increased with increasing heterogeneity and predictability in resource distribution, as well as with increasing sfHR size. We suggest that the multi-range tactic is likely widespread among large herbivores, allowing animals to track spatio-temporal variation in resource distribution and, thereby, to cope with changes in their local environment.
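Net squared displacement (NSD) and a crude multi-range segmentation can be sketched as follows; the two-range track, the noise level, and the displacement threshold are invented for illustration and do not reproduce the study's segmentation method.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy GPS track: an animal occupying two sub-seasonal functional home ranges
# (sfHR) with a single shift between them; coordinates in metres.
centres = np.array([[0.0, 0.0], [800.0, 300.0]])
track = np.vstack([c + rng.normal(0.0, 60.0, size=(100, 2)) for c in centres])

# Net squared displacement: squared distance of each fix from the first fix.
nsd = np.sum((track - track[0]) ** 2, axis=1)

# Crude multi-range detection: each sustained crossing of sqrt(NSD) over a
# threshold set far beyond within-range scatter marks a move to a new sfHR.
above = np.sqrt(nsd) > 400.0
n_sfhr = 1 + int(np.count_nonzero(np.diff(above.astype(int))))
```

On a truly sedentary animal the NSD curve stays flat, whereas each plateau shift adds a detected range, which is the signature used to distinguish the multi-range tactic from single-range residency.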
Network analysis of wildfire transmission and implications for risk governance
Ager, Alan A.; Evers, Cody R.; Day, Michelle A.; Preisler, Haiganoush K.; Barros, Ana M. G.; Nielsen-Pincus, Max
2017-01-01
We characterized wildfire transmission and exposure within a matrix of large land tenures (federal, state, and private) surrounding 56 communities within a 3.3 million ha fire-prone region of central Oregon, US. Wildfire simulation and network analysis were used to quantify the exchange of fire among land tenures and communities and analyze the relative contributions of human versus natural ignitions to wildfire exposure. Among the land tenures examined, the area burned by incoming fires averaged 57% of the total burned area. Community exposure from incoming fires ignited on surrounding land tenures accounted for 67% of the total area burned. The number of land tenures contributing wildfire to individual communities and surrounding wildland urban interface (WUI) varied from 3 to 20. Community firesheds, i.e., the area where ignitions can spawn fires that can burn into the WUI, covered 40% of the landscape, and were 5.5 times larger than the combined area of the community core and WUI. For the major land tenures within the study area, the amount of incoming versus outgoing fire was relatively constant, with some exceptions. The study provides a multi-scale characterization of wildfire networks within a large, mixed-tenure and fire-prone landscape, and illustrates the connectivity of risk between communities and the surrounding wildlands. We use the findings to discuss how scale mismatches in local wildfire governance result from disconnected planning systems and disparate fire management objectives among the large landowners (federal, state, private) and local communities. Local and regional risk planning processes can adopt our concepts and methods to better define and map the scale of wildfire risk from large fire events and incorporate wildfire network and connectivity concepts into risk assessments. PMID:28257416
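The incoming-versus-outgoing fire accounting described above can be sketched as a small directed, weighted graph. The transmission table below is invented for illustration (the paper's actual fireshed data are not reproduced here); the tenure names and area values are hypothetical.

```python
# Sketch: share of each land tenure's burned area caused by fire
# arriving from other tenures, modeled as directed weighted edges.
# All numbers below are illustrative, not from the study.

from collections import defaultdict

# (source tenure, receiving tenure, hectares burned on the receiver
#  by fires ignited on the source)
transmission = [
    ("federal", "private", 120.0),
    ("federal", "community", 45.0),
    ("private", "federal", 80.0),
    ("state", "community", 30.0),
    ("community", "community", 10.0),  # self-burning (local ignitions)
]

def exposure_summary(edges):
    """Fraction of each tenure's total burned area due to incoming fire."""
    incoming = defaultdict(float)  # fire arriving from other tenures
    total = defaultdict(float)     # all fire burning on the tenure
    for src, dst, area in edges:
        total[dst] += area
        if src != dst:
            incoming[dst] += area
    return {t: incoming[t] / total[t] for t in total}

shares = exposure_summary(transmission)
# shares["community"] == 75/85 ~= 0.882 (most community burn is incoming)
```

Summing edge weights into and out of each node gives exactly the kind of incoming/outgoing fire balance the abstract reports per tenure.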
Discovering Beaten Paths in Collaborative Ontology-Engineering Projects using Markov Chains
Walk, Simon; Singer, Philipp; Strohmaier, Markus; Tudorache, Tania; Musen, Mark A.; Noy, Natalya F.
2014-01-01
Biomedical taxonomies, thesauri and ontologies, in the form of the International Classification of Diseases as a taxonomy or the National Cancer Institute Thesaurus as an OWL-based ontology, play a critical role in acquiring, representing and processing information about human health. With increasing adoption and relevance, biomedical ontologies have also significantly increased in size. For example, the 11th revision of the International Classification of Diseases, which is currently under active development by the World Health Organization, contains nearly 50,000 classes representing a vast variety of different diseases and causes of death. This evolution in terms of size was accompanied by an evolution in the way ontologies are engineered. Because no single individual has the expertise to develop such large-scale ontologies, ontology-engineering projects have evolved from small-scale efforts involving just a few domain experts to large-scale projects that require effective collaboration between dozens or even hundreds of experts, practitioners and other stakeholders. Understanding the way these different stakeholders collaborate will enable us to improve editing environments that support such collaborations. In this paper, we uncover how large ontology-engineering projects, such as the International Classification of Diseases in its 11th revision, unfold by analyzing usage logs of five different biomedical ontology-engineering projects of varying sizes and scopes using Markov chains. We discover intriguing interaction patterns (e.g., which properties users frequently change after specific given ones) that suggest that large collaborative ontology-engineering projects are governed by a few general principles that determine and drive development.
From our analysis, we identify commonalities and differences between different projects that have implications for project managers, ontology editors, developers and contributors working on collaborative ontology-engineering projects and tools in the biomedical domain. PMID:24953242
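The core of the Markov-chain analysis of usage logs can be sketched as a maximum-likelihood transition-matrix estimate over a sequence of edit actions. The action names and log below are hypothetical; the paper fits chains to real editing logs from five ontology projects.

```python
# Sketch: first-order Markov chain over ontology-edit actions,
# estimated by maximum likelihood from a toy usage log.
# Action names are invented for illustration.

from collections import defaultdict

def transition_matrix(log):
    """P[a][b] = fraction of times action b directly follows action a."""
    counts = defaultdict(lambda: defaultdict(int))
    for prev, cur in zip(log, log[1:]):
        counts[prev][cur] += 1
    probs = {}
    for state, row in counts.items():
        n = sum(row.values())
        probs[state] = {nxt: c / n for nxt, c in row.items()}
    return probs

log = ["edit_title", "edit_definition", "edit_synonym",
       "edit_definition", "edit_synonym", "edit_title"]
P = transition_matrix(log)
print(P["edit_definition"]["edit_synonym"])  # → 1.0
```

Rows of the resulting matrix answer exactly the question posed in the abstract: which properties users frequently change after specific given ones.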
Advances toward field application of 3D hydraulic tomography
NASA Astrophysics Data System (ADS)
Cardiff, M. A.; Barrash, W.; Kitanidis, P. K.
2011-12-01
Hydraulic tomography (HT) is a technique that shows great potential for aquifer characterization and one that holds the promise of producing 3D hydraulic property distributions, given suitable equipment. First suggested over 15 years ago, HT assimilates distributed aquifer pressure (head) response data collected during a series of multiple pumping tests to produce estimates of aquifer property variability. Unlike traditional curve-matching analyses, which assume homogeneity or "effective" parameters within the radius of influence of a hydrologic test, HT analysis relies on numerical models with detailed heterogeneity in order to invert for the highly resolved 3D parameter distribution that jointly fits all data. Several numerical and laboratory investigations of characterization using HT have shown that property distributions can be accurately estimated between observation locations when experiments are correctly designed - a property not always shared by other, simpler 1D characterization approaches such as partially-penetrating slug tests. HT may represent one of the best methods available for obtaining detailed 3D aquifer property descriptions, especially in deep or "hard" aquifer materials, where direct-push methods may not be feasible. However, to date HT has not yet been widely adopted at contaminated field sites. We believe that current perceived impediments to HT adoption center around four key issues: 1) A paucity in the scientific literature of proven, cross-validated 3D field applications; 2) A lack of guidelines and best practices for performing field 3D HT experiments; 3) Practical difficulty and time commitment associated with the installation of a large number of high-accuracy sampling locations, and the running of a large number of pumping tests; and 4) Computational difficulty associated with solving large-scale inverse problems for parameter identification.
In this talk, we present current results in 3D HT research that address these four issues and thus bring HT closer to field practice. Topics to be discussed include: -Improving field efficiency through design and implementation of new modular, easily-installed equipment for 3D HT. -Validating field-scale 3D HT through application and cross-validation at the Boise Hydrogeophysical Research Site. -Developing guidelines for HT implementation based on field experience, numerical modeling, and a comprehensive literature review of the past 15 years of HT research. -Applying novel, fast numerical methods for large-scale HT data analysis. The results presented will focus on the application of 3D HT, but in general we also hope to provide insights on aquifer characterization that stimulate thought on the issue of continually updating estimates of aquifer characteristics while recognizing uncertainties and providing guidance for future data collection.
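The large-scale inverse problem mentioned in issue 4 can be illustrated with a toy Tikhonov-regularised linear inversion. Everything here is an assumption for illustration: the 1-D grid, the random sensitivity matrix standing in for a linearised groundwater-flow model, and the regularisation weight; real HT inversions are far larger and use physics-based forward operators.

```python
# Sketch: Tikhonov-regularised least-squares inversion of the kind
# used in hydraulic tomography, on a toy 1-D problem. The operator G,
# grid, and weights are invented; real HT linearises a flow model.

import numpy as np

rng = np.random.default_rng(0)

n_cells, n_obs = 20, 8
true_logK = np.zeros(n_cells)
true_logK[8:12] = 1.0                       # a high-conductivity lens

G = rng.random((n_obs, n_cells)) / n_cells  # toy sensitivity matrix
d = G @ true_logK + 0.001 * rng.standard_normal(n_obs)  # noisy heads

beta = 1e-3                                 # regularisation weight
# Solve min ||G m - d||^2 + beta ||m||^2 via the normal equations:
#   (G^T G + beta I) m = G^T d
m = np.linalg.solve(G.T @ G + beta * np.eye(n_cells), G.T @ d)
```

The regularisation term is what keeps the underdetermined system (8 observations, 20 unknowns) solvable; choosing beta, or replacing the identity with a geostatistical prior covariance, is a large part of practical HT analysis.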
Mlynek, Georg; Lehner, Anita; Neuhold, Jana; Leeb, Sarah; Kostan, Julius; Charnagalov, Alexej; Stolt-Bergner, Peggy; Djinović-Carugo, Kristina; Pinotsis, Nikos
2014-06-01
Expression in Escherichia coli represents the simplest and most cost-effective means for the production of recombinant proteins. This is a routine task in structural biology and biochemistry where milligrams of the target protein are required in high purity and monodispersity. To achieve these criteria, the user often needs to screen several constructs in different expression and purification conditions in parallel. We describe a pipeline, implemented in the Center for Optimized Structural Studies, that enables the systematic screening of expression and purification conditions for recombinant proteins and relies on a series of logical decisions. We first use bioinformatics tools to design a series of protein fragments, which we clone in parallel, and subsequently screen in small scale for optimal expression and purification conditions. Based on a scoring system that assesses soluble expression, we then select the top-ranking targets for large-scale purification. In the establishment of our pipeline, emphasis was put on streamlining the process so that it can be easily, but not necessarily, automated. In a typical run of about 2 weeks, we are able to prepare and perform small-scale expression screens for 20-100 different constructs followed by large-scale purification of at least 4-6 proteins. The major advantage of our approach is its flexibility, which allows for easy adoption, either partially or entirely, by any average hypothesis-driven laboratory in a manual or robot-assisted manner.
Enablers and Barriers to Large-Scale Uptake of Improved Solid Fuel Stoves: A Systematic Review
Puzzolo, Elisa; Stanistreet, Debbi; Pope, Daniel; Bruce, Nigel G.
2013-01-01
Background: Globally, 2.8 billion people rely on household solid fuels. Reducing the resulting adverse health, environmental, and development consequences will involve transitioning through a mix of clean fuels and improved solid fuel stoves (IS) of demonstrable effectiveness. To date, achieving uptake of IS has presented significant challenges. Objectives: We performed a systematic review of factors that enable or limit large-scale uptake of IS in low- and middle-income countries. Methods: We conducted systematic searches of multidisciplinary databases and specialist websites, and consulted experts. The review drew on qualitative, quantitative, and case studies and used standardized methods for screening, data extraction, critical appraisal, and synthesis. We summarized our findings as “factors” relating to one of seven domains—fuel and technology characteristics; household and setting characteristics; knowledge and perceptions; finance, tax, and subsidy aspects; market development; regulation, legislation, and standards; programmatic and policy mechanisms—and also recorded issues that impacted equity. Results: We identified 31 factors influencing uptake from 57 studies conducted in Asia, Africa, and Latin America. All domains matter. Although factors such as offering technologies that meet household needs and save fuel, user training and support, effective financing, and facilitative government action appear to be critical, none guarantee success: All factors can be influential, depending on context. The nature of available evidence did not permit further prioritization. Conclusions: Achieving adoption and sustained use of IS at a large scale requires that all factors, spanning household/community and program/societal levels, be assessed and supported by policy. We propose a planning tool that would aid this process and suggest further research to incorporate an evaluation of effectiveness. Citation: Rehfuess EA, Puzzolo E, Stanistreet D, Pope D, Bruce NG. 
2014. Enablers and barriers to large-scale uptake of improved solid fuel stoves: a systematic review. Environ Health Perspect 122:120–130; http://dx.doi.org/10.1289/ehp.1306639 PMID:24300100
Yancey, Antronette K
2009-10-01
This paper argues that substantive and sustainable population-wide improvements in physical activity can be achieved only through the large scale adoption and implementation of policies and practices that make being active the default choice and remaining inactive difficult. Meta-volition refers to the volition and collective agency of early adopter leaders who implement such changes in their own organizations to drive productivity and health improvements. Leaders, themselves, are motivated by strong incentives to accomplish their organizational missions. The meta-volition model (MVM) specifies a cascade of changes that may be sparked by structural integration of brief activity bouts into organizational routine across sectors and types of organizations. MVM builds upon inter-disciplinary social ecological change models and frameworks such as diffusion of innovations, social learning and social marketing. MVM is dynamic rather than static, integrating biological influences with psychological factors, and socio-cultural influences with organizational processes. The model proposes six levels of dissemination triggered by organizational marketing to early adopter leaders carried out by "sparkplugs," boisterous leaders in population physical activity promotion: initiating (leader-leader), catalyzing (organizational-individual), viral marketing (individual-organizational), accelerating (organizational-organizational), anchoring (organizational-community) and institutionalizing (community-individual). MVM embodies public-private partnership principles, a collective investment in the high cost of achieving and maintaining active lifestyles.
O’Shaughnessy, Eric; Nemet, Gregory F.; Darghouth, Naïm
2018-01-30
The solar photovoltaic (PV) installation industry comprises thousands of firms around the world who collectively installed nearly 200 million panels in 2015. Spatial analysis of the emerging industry has received considerable attention from the literature, especially on the demand side concerning peer effects and adopter clustering. However this research area does not include similarly sophisticated spatial analysis on the supply side of the installation industry. The lack of understanding of the spatial structure of the PV installation industry leaves PV market research to rely on jurisdictional lines, such as counties, to define geographic PV markets. We develop an approach that uses the spatial distribution of installers' activity to define geographic boundaries for PV markets. Our method is useful for PV market research and applicable in the contexts of other industries. We use our approach to demonstrate that the PV industry in the United States is spatially heterogeneous. Despite the emergence of some national-scale PV installers, installers are largely local and installer communities are unique from one region to the next. The social implications of the spatial heterogeneity of the emerging PV industry involve improving understanding of issues such as market power, industry consolidation, and how much choice potential adopters have.
Wyker, Brett A; Davison, Kirsten K
2010-01-01
Drawing on the Theory of Planned Behavior (TPB) and the Transtheoretical Model (TTM), this study (1) examines links between stages of change for following a plant-based diet (PBD) and consuming more fruits and vegetables (FV); (2) tests an integrated theoretical model predicting intention to follow a PBD; and (3) identifies associated salient beliefs. Cross-sectional. Large public university in the northeastern United States. 204 college students. TPB and TTM constructs were assessed using validated scales. Outcome, normative, and control beliefs were measured using open-ended questions. The overlap between stages of change for FV consumption and adopting a PBD was assessed using Spearman rank correlation analysis and cross-tab comparisons. The proposed model predicting adoption of a PBD was tested using structural equation modeling (SEM). Salient beliefs were coded using automatic response coding software. No association was found between stages of change for FV consumption and following a PBD. Results from SEM analyses provided support for the proposed model predicting intention to follow a PBD. Gender differences in salient beliefs for following a PBD were found. Results demonstrate the potential for effective theory-driven and stage-tailored public health interventions to promote PBDs. Copyright 2010 Society for Nutrition Education. Published by Elsevier Inc. All rights reserved.
Financial incentives: Possible options for sustainable rangeland management?
Louhaichi, Mounir; Yigezu, Yigezu A; Werner, Jutta; Dashtseren, Lojoo; El-Shater, Tamer; Ahmed, Mohamed
2016-09-15
Large-scale mismanagement of natural resources emanating from lack of appropriate policies and regulatory framework is arguably one of the reasons that led to resource degradation and poor livelihoods in many countries in the Middle East and North Africa (MENA) region. Sustainable rangeland management practices (SRMPs) are considered to be a solution to feed shortages and rangeland degradation. However, the scope for SRMP adoption has been a subject of debate. Using a case study from Syria and the application of the Minimum Data Analysis method (TOA-MD), this paper provides empirical evidence for ensuring wider adoption of SRMPs. The paper argues that the introduction of financial incentives in the form of payments for agricultural-environmental services can increase the economic viability and enhance the adoption of SRMPs and is a better alternative to the unsustainable state subsidies for fodder purchases and barley cultivation on rangelands. Model results indicate that further investment in research toward generating low-cost technologies and tailored governance strategies, including a financial incentive system, would lead to better management of rangelands and improve livelihoods in the Syrian Badia. These findings are valuable for policy makers, donors, as well as development and extension practitioners in the MENA region, as they can better inform future courses of action. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
Biswas, Amitava; Liu, Chen; Monga, Inder; ...
2016-01-01
For the last few years, there has been tremendous growth in data traffic due to the high adoption rate of mobile devices and cloud computing. The Internet of Things (IoT) will stimulate even further growth. This is increasing the scale and complexity of telecom/internet service provider (SP) and enterprise data centre (DC) compute and network infrastructures. As a result, managing these large network-compute converged infrastructures is becoming complex and cumbersome. To cope, network and DC operators are trying to automate network and system operations, administration and management (OAM) functions. OAM includes all non-functional mechanisms which keep the network running.
Tarozzi, Alessandro; Mahajan, Aprajit; Blackburn, Brian; Kopf, Dan; Krishnan, Lakshmi; Yoong, Joanne
2014-07-01
We describe findings from the first large-scale cluster randomized controlled trial in a developing country that evaluates the uptake of a health-protecting technology, insecticide-treated bednets (ITNs), through micro-consumer loans, as compared to free distribution and control conditions. Despite a relatively high price, 52 percent of sample households purchased ITNs, highlighting the role of liquidity constraints in explaining earlier low adoption rates. We find mixed evidence of improvements in malaria indices. We interpret the results and their implications within the debate about cost sharing, sustainability and liquidity constraints in public health initiatives in developing countries.
SENER molten salt tower technology. Ouarzazate NOOR III case
NASA Astrophysics Data System (ADS)
Relloso, Sergio; Gutiérrez, Yolanda
2017-06-01
The NOOR III 150 MWe project is the evolution of Gemasolar (19.9 MWe) to large-scale Molten Salt Tower plants. With more than 5 years of operational experience at Gemasolar, the lessons learned have been the starting point for the optimization of this technology, considered the leader in potential cost reduction in CSP. In addition, prototypes of plant key components (heliostat and receiver) were manufactured and thoroughly tested before project launch in order to prove the new engineering solutions adopted. The SENER proprietary technology of NOOR III will be applied in the next Molten Salt Tower plants that will follow in other countries, such as South Africa, Chile and Australia.
Onoka, Chima A; Onwujekwe, Obinna E; Uzochukwu, Benjamin S; Ezumah, Nkoli N
2013-06-13
The National Health Insurance Scheme (NHIS) in Nigeria was launched in 2005 as part of efforts by the federal government to achieve universal coverage using financial risk protection mechanisms. However, only 4% of the population, and mainly federal government employees, are currently covered by health insurance and this is primarily through the Formal Sector Social Health Insurance Programme (FSSHIP) of the NHIS. This study aimed to understand why different state (sub-national) governments decided whether or not to adopt the FSSHIP for their employees. This study used a comparative case study approach. Data were collected through document reviews and 48 in-depth interviews with policy makers, programme managers, health providers, and civil servant leaders. Although the programme's benefits seemed acceptable to state policy makers and the intended beneficiaries (employees), the feasibility of employer contributions, concerns about transparency in the NHIS and the role of states in the FSSHIP, the roles of policy champions such as state governors and resistance by employees to making contributions, all influenced the decision of state governments on adoption. Overall, the power of state governments over state-level health reforms, attributed to the prevailing system of government that allows states to deliberate on certain national-level policies, enhanced by the NHIS legislation that made adoption voluntary, enabled states to adopt or not to adopt the programme. The study demonstrates and supports observations that even when the content of a programme is generally acceptable, context, actor roles, and the wider implications of programme design on actor interests can explain decisions on policy adoption. Policy implementers involved in scaling-up the NHIS programme need to consider the prevailing contextual factors, and effectively engage policy champions to overcome known challenges in order to encourage adoption by sub-national governments. 
Policy makers and implementers in countries scaling-up health insurance coverage should, early enough, develop strategies to overcome political challenges inherent in the path to scaling-up, to avoid delay or stunting of the process. They should also consider the potential pitfalls of reforms that first focus on civil servants, especially when the use of public funds potentially compromises coverage for other citizens.
Topham, Debra; Drew, Debra
2017-12-01
CAPA is a multifaceted pain assessment tool that was adopted at a large tertiary Midwest hospital to replace the numeric scale for adult patients who could self-report their pain experience. This article describes the process of implementation and the effect on patient satisfaction scores. Use of the tool is supported by the premise that pain assessment entails more than just pain intensity and that assessment is an exchange of meaning between patients and clinicians dependent on internal and external factors. Implementation of the tool was a transformative process resulting in modest increases in patient satisfaction scores with pain management. Patient reports that "staff did everything to manage pain" had the biggest gains and were sustained for more than 2 years. The CAPA tool meets regulatory requirements for pain assessment. Copyright © 2017 American Society for Pain Management Nursing. Published by Elsevier Inc. All rights reserved.
Relativistic magnetised perturbations: magnetic pressure versus magnetic tension
NASA Astrophysics Data System (ADS)
Tseneklidou, Dimitra; Tsagas, Christos G.; Barrow, John D.
2018-06-01
We study the linear evolution of magnetised cosmological perturbations in the post-recombination epoch. Using full general relativity and adopting the ideal magnetohydrodynamic approximation, we refine and extend the previous treatments. More specifically, this is the first relativistic study that accounts for the effects of the magnetic tension, in addition to those of the field’s pressure. Our solutions show that on sufficiently large scales, larger than the (purely magnetic) Jeans length, the perturbations evolve essentially unaffected by the magnetic presence. The magnetic pressure dominates on small scales, where it forces the perturbations to oscillate and decay. Close to the Jeans length, however, the field’s tension takes over and leads to a weak growth of the inhomogeneities. These solutions clearly demonstrate the opposing action of the aforementioned two magnetic agents, namely of the field’s pressure and tension, on the linear evolution of cosmological density perturbations.
Blengini, Gian Andrea; Fantoni, Moris; Busto, Mirko; Genon, Giuseppe; Zanetti, Maria Chiara
2012-09-01
The paper summarises the main results obtained from two extensive applications of Life Cycle Assessment (LCA) to the integrated municipal solid waste management systems of Torino and Cuneo Districts in northern Italy. Scenarios with substantial differences in terms of amount of waste, percentage of separate collection and options for the disposal of residual waste are used to discuss the credibility and acceptability of the LCA results, which are adversely affected by the large influence of methodological assumptions and the local socio-economic constraints. The use of site-specific data on full scale waste treatment facilities and the adoption of a participatory approach for the definition of the most sensible LCA assumptions are used to assist local public administrators and stakeholders showing them that LCA can be operational to waste management at local scale. Copyright © 2012 Elsevier Ltd. All rights reserved.
Revolution…Now The Future Arrives for Five Clean Energy Technologies – 2015 Update
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
In 2013, the U.S. Department of Energy (DOE) released the Revolution Now report, highlighting four transformational technologies: land-based wind power, silicon photovoltaic (PV) solar modules, light-emitting diodes (LEDs), and electric vehicles (EVs). That study and its 2014 update showed how dramatic reductions in cost are driving a surge in consumer, industrial, and commercial adoption for these clean energy technologies—as well as yearly progress. In addition to presenting the continued progress made over the last year in these areas, this year’s update goes further. Two separate sections now cover large, central, utility-scale PV plants and smaller, rooftop, distributed PV systems to highlight how both have achieved significant deployment nationwide, and have done so through different innovations, such as easier access to capital for utility-scale PV and reductions of non-hardware costs and third-party ownership for distributed PV. Along with these core technologies…
Farming for Ecosystem Services: An Ecological Approach to Production Agriculture
Philip Robertson, G.; Gross, Katherine L.; Hamilton, Stephen K.; Landis, Douglas A.; Schmidt, Thomas M.; Snapp, Sieglinde S.; Swinton, Scott M.
2014-01-01
A balanced assessment of ecosystem services provided by agriculture requires a systems-level socioecological understanding of related management practices at local to landscape scales. The results from 25 years of observation and experimentation at the Kellogg Biological Station long-term ecological research site reveal services that could be provided by intensive row-crop ecosystems. In addition to high yields, farms could be readily managed to contribute clean water, biocontrol and other biodiversity benefits, climate stabilization, and long-term soil fertility, thereby helping meet society's need for agriculture that is economically and environmentally sustainable. Midwest farmers—especially those with large farms—appear willing to adopt practices that deliver these services in exchange for payments scaled to management complexity and farmstead benefit. Surveyed citizens appear willing to pay farmers for the delivery of specific services, such as cleaner lakes. A new farming for services paradigm in US agriculture seems feasible and could be environmentally significant. PMID:26955069
A Single-use Strategy to Enable Manufacturing of Affordable Biologics.
Jacquemart, Renaud; Vandersluis, Melissa; Zhao, Mochao; Sukhija, Karan; Sidhu, Navneet; Stout, Jim
2016-01-01
The current processing paradigm of large manufacturing facilities dedicated to single product production is no longer an effective approach for best manufacturing practices. Increasing competition for new indications and the launch of biosimilars for the monoclonal antibody market have put pressure on manufacturers to produce at lower cost. Single-use technologies and continuous upstream processes have proven to be cost-efficient options to increase biomass production but as of today the adoption has been only minimal for the purification operations, partly due to concerns related to cost and scale-up. This review summarizes how a single-use holistic process and facility strategy can overcome scale limitations and enable cost-efficient manufacturing to support the growing demand for affordable biologics. Technologies enabling high productivity, right-sized, small footprint, continuous, and automated upstream and downstream operations are evaluated in order to propose a concept for the flexible facility of the future.
Seo, Han-bok; Lee, Seung-Yop
2017-01-01
Structure-dependent colour is caused by the interaction of light with photonic crystal structures rather than pigments. The elytra of longhorn beetles Tmesisternus isabellae appear to be iridescent green in a dry state and turn to red when exposed to humidity. Based on the hygroscopic colouration of the longhorn beetle, we have developed centimeter-scale colorimetric opal films using a novel self-assembly method. The micro-channel assisted assembly technique adopts both natural evaporation and rotational forced drying, enhancing the surface binding of silica particles and the packing density by reducing the lattice constant and structural defects. The fabricated large-scale photonic film changes its structural colour from green to red when exposed to water vapour, similarly to the colorimetric feature of the longhorn beetle. The humidity-dependent colour change of the opal film is shown to be reversible and durable over five-hundred cycles of wetting and drying. PMID:28322307
Dale, Bruce E
2017-09-21
A sustainable chemical industry cannot exist at scale without both sustainable feedstocks and feedstock supply chains to provide the raw materials. However, most current research focus is on producing the sustainable chemicals and materials. Little attention is given to how and by whom sustainable feedstocks will be supplied. In effect, we have put the bioproducts cart before the sustainable feedstocks horse. For example, bulky, unstable, non-commodity feedstocks such as crop residues probably cannot supply a large-scale sustainable industry. Likewise, those who manage land to produce feedstocks must benefit significantly from feedstock production, otherwise they will not participate in this industry and it will never grow. However, given real markets that properly reward farmers, demand for sustainable bioproducts and bioenergy can drive the adoption of more sustainable agricultural and forestry practices, providing many societal "win-win" opportunities. Three case studies are presented to show how this "win-win" process might unfold.
Del Fiol, Guilherme; Huser, Vojtech; Strasberg, Howard R; Maviglia, Saverio M; Curtis, Clayton; Cimino, James J
2012-01-01
To support clinical decision-making, computerized information retrieval tools known as “infobuttons” deliver contextually relevant knowledge resources into clinical information systems. The Health Level Seven International (HL7) Context-Aware Knowledge Retrieval (Infobutton) Standard specifies a standard mechanism to enable infobuttons on a large scale. Objective: To examine the experience of organizations in the course of implementing the HL7 Infobutton Standard. Method: Cross-sectional online survey and in-depth phone interviews. Results: A total of 17 organizations participated in the study. Analysis of the in-depth interviews revealed 20 recurrent themes. Implementers underscored the benefits, simplicity, and flexibility of the HL7 Infobutton Standard. Yet, participants voiced the need for easier access to standard specifications and improved guidance for beginners. Implementers predicted that the Infobutton Standard will be widely or at least fairly well adopted in the next five years, but uptake will depend largely on adoption among electronic health record (EHR) vendors. To accelerate EHR adoption of the Infobutton Standard, implementers recommended that HL7-compliant infobutton capabilities be included in the United States Meaningful Use Certification Criteria for EHR systems. Limitations: Opinions and predictions should be interpreted with caution, since all the participant organizations had successfully implemented the Standard and over half of the organizations were actively engaged in the development of the Standard. Conclusion: Overall, implementers reported a very positive experience with the HL7 Infobutton Standard. Despite indications of increasing uptake, measures should be taken to stimulate adoption of the Infobutton Standard among EHR vendors. Widespread adoption of the Infobutton Standard has the potential to bring contextually relevant clinical decision support content into the healthcare provider workflow. PMID:22226933
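For context, a knowledge request under the HL7 Infobutton Standard is essentially an HTTP GET whose parameters encode the clinical context. A minimal sketch in Python (the endpoint URL is invented for illustration; the parameter names follow the standard's URL-based binding, but treat the exact set shown here as an assumption rather than a complete request):

```python
from urllib.parse import urlencode

# Hypothetical infobutton knowledge request. The endpoint is invented;
# parameter names follow the HL7 Infobutton URL binding: the main search
# criterion is a coded concept plus its code system OID, and taskContext
# identifies the workflow step that triggered the request.
context = {
    "mainSearchCriteria.v.c": "E11.9",                    # coded concept (ICD-10-CM)
    "mainSearchCriteria.v.cs": "2.16.840.1.113883.6.90",  # code system OID
    "taskContext.c.c": "PROBLISTREV",                     # task: problem-list review
    "informationRecipient": "PROV",                       # recipient is a provider
}
url = "https://resolver.example.org/infobutton?" + urlencode(context)
print(url)
```

A knowledge resource conforming to the standard parses these parameters and returns links or content relevant to the coded concept in that context.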
Drawert, Brian; Trogdon, Michael; Toor, Salman; Petzold, Linda; Hellander, Andreas
2017-01-01
Computational experiments using spatial stochastic simulations have led to important new biological insights, but they require specialized tools and a complex software stack, as well as large and scalable compute and data analysis resources due to the large computational cost associated with Monte Carlo computational workflows. The complexity of setting up and managing a large-scale distributed computation environment to support productive and reproducible modeling can be prohibitive for practitioners in systems biology. This results in a barrier to the adoption of spatial stochastic simulation tools, effectively limiting the type of biological questions addressed by quantitative modeling. In this paper, we present PyURDME, a new, user-friendly spatial modeling and simulation package, and MOLNs, a cloud computing appliance for distributed simulation of stochastic reaction-diffusion models. MOLNs is based on IPython and provides an interactive programming platform for development of sharable and reproducible distributed parallel computational experiments. PMID:28190948
Complete physico-chemical treatment for coke plant effluents.
Ghose, M K
2002-03-01
Naturally occurring coal is converted to coke suitable for metallurgical industries. The large quantities of liquid effluents produced contain high levels of suspended solids, COD, BOD, phenols, ammonia and other toxic substances, which cause serious pollution in the receiving waters to which they are discharged. There are a large number of coke plants in the vicinity of the Jharia Coal Field (JCF). Characteristics of the effluents have been evaluated, and the present effluent treatment systems were found to be inadequate. Physico-chemical treatment has been considered a suitable option for treating coke plant effluents. Ammonia removal by synthetic zeolite and activated carbon for the removal of bacteria, viruses, refractory organics, etc. were utilized, and the results are discussed. A scheme has been proposed for complete physico-chemical treatment, which can be suitably adopted for the recycling, reuse and safe disposal of the treated effluent. The various unit processes and unit operations involved in the treatment system are discussed. The process may be useful on an industrial scale at various sites.
NASA Astrophysics Data System (ADS)
Elbakary, Mohamed I.; Iftekharuddin, Khan M.; Papelis, Yiannis; Newman, Brett
2017-05-01
Air Traffic Management (ATM) concepts are commonly tested in simulation to obtain preliminary results and validate the concepts before adoption. Researchers have found, however, that simulation alone is not enough because of the complexity associated with ATM concepts: full-scale tests must eventually take place to provide compelling performance evidence before full implementation is adopted. Testing with full-scale aircraft is a high-cost approach that yields high-confidence results, whereas simulation provides a low-risk, low-cost approach with reduced confidence in the results. One possible approach to increase confidence in the results while simultaneously reducing risk and cost is to use unmanned sub-scale aircraft to test new ATM concepts. This paper presents simulation results of using unmanned sub-scale aircraft to implement ATM concepts, compared to full-scale aircraft. The simulation results show that the performance of the sub-scale aircraft is quite comparable to that of the full-scale aircraft, which supports the use of sub-scale aircraft in testing new ATM concepts.
A modeling comparison of deep greenhouse gas emissions reduction scenarios by 2030 in California
Yeh, Sonia; Yang, Christopher; Gibbs, Michael; ...
2016-10-21
California aims to reduce greenhouse gas (GHG) emissions to 40% below 1990 levels by 2030. We compare six energy models that have played various roles in informing the state policymakers in setting climate policy goals and targets. These models adopt a range of modeling structures, including stock-turnover back-casting models, a least-cost optimization model, macroeconomic/macro-econometric models, and an electricity dispatch model. Results from these models provide useful insights in terms of the transformations in the energy system required, including efficiency improvements in cars, trucks, and buildings, electrification of end-uses, low- or zero-carbon electricity and fuels, aggressive adoption of zero-emission vehicles (ZEVs), demand reduction, and large reductions of non-energy GHG emissions. Some of these studies also suggest that the direct economic costs can be fairly modest or even generate net savings, while the indirect macroeconomic benefits are large, as shifts in employment and capital investments could have higher economic returns than conventional energy expenditures. These models, however, often assume perfect markets, perfect competition, and zero transaction costs. They also do not provide specific policy guidance on how these transformative changes can be achieved. Greater emphasis on modeling uncertainty, consumer behaviors, heterogeneity of impacts, and spatial modeling would further enhance policymakers' ability to design more effective and targeted policies. This paper presents an example of how policymakers, energy system modelers and stakeholders interact and work together to develop and evaluate long-term state climate policy targets. Lastly, even though this paper focuses on California, the process of dialogue and interactions, modeling results, and lessons learned can be generally adopted across different regions and scales.
Kongelf, Anine; Bandewar, Sunita V. S.; Bharat, Shalini; Collumbien, Martine
2015-01-01
Background In the last decade, community mobilisation (CM) interventions targeting female sex workers (FSWs) have been scaled-up in India’s national response to the HIV epidemic. This included the Bill and Melinda Gates Foundation’s Avahan programme which adopted a business approach to plan and manage implementation at scale. With the focus of evaluation efforts on measuring effectiveness and health impacts there has been little analysis thus far of the interaction of the CM interventions with the sex work industry in complex urban environments. Methods and Findings Between March and July 2012 semi-structured, in-depth interviews and focus group discussions were conducted with 63 HIV intervention implementers, to explore challenges of HIV prevention among FSWs in Mumbai. A thematic analysis identified contextual factors that impact CM implementation. Large-scale interventions are not only impacted by, but were shown to shape the dynamic social context. Registration practices and programme monitoring were experienced as stigmatising, reflected in shifting client preferences towards women not disclosing as ‘sex workers’. This combined with urban redevelopment and gentrification of traditional red light areas, forcing dispersal and more ‘hidden’ ways of solicitation, further challenging outreach and collectivisation. Participants reported that brothel owners and ‘pimps’ continued to restrict access to sex workers and the heterogeneous ‘community’ of FSWs remains fragmented with high levels of mobility. Stakeholder engagement was poor and mobilising around HIV prevention not compelling. Interventions largely failed to respond to community needs as strong target-orientation skewed activities towards those most easily measured and reported. Conclusion Large-scale interventions have been impacted by and contributed to an increasingly complex sex work environment in Mumbai, challenging outreach and mobilisation efforts. 
Sex workers remain a vulnerable and disempowered group needing continued support and more comprehensive services. PMID:25811484
NASA Astrophysics Data System (ADS)
Mccoll, K. A.; Van Heerwaarden, C.; Katul, G. G.; Gentine, P.; Entekhabi, D.
2016-12-01
While the break-down in similarity between turbulent transport of heat and momentum (or Reynolds analogy) is not disputed in the atmospheric surface layer (ASL) under unstably stratified conditions, the causes of this breakdown remain the subject of some debate. One reason for the break-down is hypothesized to be due to a change in the topology of the coherent structures and how they differently transport heat and momentum. As instability increases, coherent structures that are confined to the near-wall region transition to thermal plumes, spanning the entire boundary layer depth. Monin-Obukhov Similarity Theory (MOST), which hypothesizes that only local length scales play a role in ASL turbulent transport, implicitly assumes that thermal plumes and other large-scale structures are inactive (i.e., they do not contribute to turbulent transport despite their large energy content). Widely adopted mixing-length models for the ASL also rest on this assumption. The difficulty of characterizing low-wavenumber turbulent motions with field observations motivates the use of high-resolution Direct Numerical Simulations (DNS) that are free from sub-grid scale parameterizations and ad-hoc assumptions near the boundary. Despite the low Reynolds number, mild stratification and idealized geometry, DNS-estimated MOST functions are consistent with field experiments as are key low-frequency features of the vertical velocity variance and buoyancy spectra. Parsimonious spectral models for MOST stability correction functions for momentum (φm) and heat (φh) are derived based on idealized vertical velocity variance and buoyancy spectra fit to the corresponding DNS spectra. For φm, a spectral model requiring a local length scale (evolving with local stability conditions) that matches DNS and field observations is derived. In contrast, for φh, the aforementioned model is substantially biased unless contributions from larger length scales are also included. 
These results suggest that ASL heat transport cannot be precisely MO-similar, and that the breakdown of the Reynolds analogy is at least partially caused by the influence of large eddies on turbulent heat transport.
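For reference, the MOST stability correction functions named above have the conventional surface-layer definitions (standard notation, not specific to this study's DNS):

```latex
\phi_m(\zeta) = \frac{\kappa z}{u_*}\,\frac{\partial \overline{u}}{\partial z},
\qquad
\phi_h(\zeta) = \frac{\kappa z}{\theta_*}\,\frac{\partial \overline{\theta}}{\partial z},
\qquad
\zeta = \frac{z}{L},
\qquad
L = -\frac{u_*^{3}\,\overline{\theta}}{\kappa g\,\overline{w'\theta'}_{s}},
```

where $\kappa$ is the von Kármán constant, $u_*$ the friction velocity, $\theta_* = -\overline{w'\theta'}_s / u_*$ the surface-layer temperature scale, and $L$ the Obukhov length ($L < 0$, hence $\zeta < 0$, under the unstable conditions considered here). MOST asserts that $\phi_m$ and $\phi_h$ are universal functions of $\zeta$ alone, which is exactly the locality assumption the abstract argues breaks down for heat transport.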
A Non-Modeling Exploration of Residential Solar Photovoltaic (PV) Adoption and Non-Adoption
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moezzi, Mithra; Ingle, Aaron; Lutzenhiser, Loren
Although U.S. deployment of residential rooftop solar photovoltaic (PV) systems has accelerated in recent years, PV is still installed on less than 1 percent of single-family homes. Most research on household PV adoption focuses on scaling initial markets and modeling predicted growth rather than considering more broadly why adoption occurs. Among the studies that have investigated the characteristics of PV adoption, most collected data from adopters, sometimes with additional non-adopter data, and rarely from people who considered but did not adopt PV. Yet the vast majority of Americans are non-adopters, and they are a diverse group - understanding their ways of evaluating PV adoption is important. Similarly, PV is a unique consumer product, which makes it difficult to apply findings from studies of other technologies to PV. In addition, little research addresses the experience of households after they install PV. This report helps fill some of these gaps in the existing literature. The results inform a more detailed understanding of residential PV adoption, while helping ensure that adoption is sufficiently beneficial to adopters and even non-adopters.
Nation-scale adoption of new medicines by doctors: an application of the Bass diffusion model
2012-01-01
Background The adoption of new medicines is influenced by a complex set of social processes that have been widely examined in terms of individual prescribers’ information-seeking and decision-making behaviour. However, quantitative, population-wide analyses of how long it takes for new healthcare practices to become part of mainstream practice are rare. Methods We applied a Bass diffusion model to monthly prescription volumes of 103 often-prescribed drugs in Australia (monthly time series data totalling 803 million prescriptions between 1992 and 2010), to determine the distribution of adoption rates. Our aim was to test the utility of applying the Bass diffusion model to national-scale prescribing volumes. Results The Bass diffusion model was fitted to the adoption of a broad cross-section of drugs using national monthly prescription volumes from Australia (median R2 = 0.97, interquartile range 0.95 to 0.99). The median time to adoption was 8.2 years (IQR 4.9 to 12.1). The model distinguished two classes of prescribing patterns – those where adoption appeared to be driven mostly by external forces (19 drugs) and those driven mostly by social contagion (84 drugs). Those driven more prominently by internal forces were found to have shorter adoption times (p = 0.02 in a non-parametric analysis of variance by ranks). Conclusion The Bass diffusion model may be used to retrospectively represent the patterns of adoption exhibited in prescription volumes in Australia, and distinguishes between adoption driven primarily by external forces such as regulation, or internal forces such as social contagion. The eight-year delay between the introduction of a new medicine and the adoption of the prescribing practice suggests the presence of system inertia in Australian prescribing practices. PMID:22876867
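The Bass model used above has a closed form for the cumulative adoption fraction, F(t) = (1 - e^{-(p+q)t}) / (1 + (q/p) e^{-(p+q)t}), where p captures external influence (e.g. regulation, marketing) and q internal influence (social contagion); the paper's two classes of drugs correspond to which coefficient dominates. A minimal sketch of fitting it to cumulative counts with SciPy, on synthetic data rather than the paper's prescription series:

```python
import numpy as np
from scipy.optimize import curve_fit

def bass_cumulative(t, m, p, q):
    """Cumulative adoptions under the Bass model.
    m: market potential, p: coefficient of innovation (external influence),
    q: coefficient of imitation (internal influence / social contagion)."""
    e = np.exp(-(p + q) * t)
    return m * (1.0 - e) / (1.0 + (q / p) * e)

# Synthetic monthly cumulative counts (illustrative, not the paper's data).
t = np.arange(1, 121)  # ten years of months
true = bass_cumulative(t, m=1e6, p=0.002, q=0.08)
rng = np.random.default_rng(0)
observed = true * (1 + 0.02 * rng.standard_normal(t.size))

(m, p, q), _ = curve_fit(bass_cumulative, t, observed,
                         p0=(observed[-1], 0.01, 0.1), maxfev=10000)
# q > p indicates adoption driven mostly by internal (social) forces,
# the pattern the paper reports for 84 of the 103 drugs.
print(m, p, q, q > p)
```

Fitting each drug's monthly series this way yields per-drug (p, q) estimates, from which adoption times and the external-versus-internal classification follow.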
NASA Astrophysics Data System (ADS)
Ma, Zhanshan; Liu, Qijun; Zhao, Chuanfeng; Shen, Xueshun; Wang, Yuan; Jiang, Jonathan H.; Li, Zhe; Yung, Yuk
2018-03-01
An explicit prognostic cloud-cover scheme (PROGCS) is implemented into the Global/Regional Assimilation and Prediction System (GRAPES) global medium-range numerical weather prediction system (GRAPES_GFS) to improve the model's performance in simulating cloud cover and radiation. Unlike the previous diagnostic cloud-cover scheme (DIAGCS), PROGCS considers the formation and dissipation of cloud cover by physically connecting it to the cumulus convection and large-scale stratiform condensation processes. Our simulation results show that clouds in mid-high latitudes arise mainly from large-scale stratiform condensation processes, while cumulus convection and large-scale condensation processes jointly determine cloud cover in low latitudes. Compared with DIAGCS, PROGCS captures vertical distributions of cloud cover more consistent with the observations from the Atmospheric Radiation Measurement (ARM) program at the Southern Great Plains (SGP) site and simulates a more realistic diurnal cycle of marine stratocumulus relative to the ERA-Interim reanalysis data. The low, high, and total cloud covers determined via PROGCS appear more realistic than those simulated via DIAGCS when both are compared with satellite retrievals, though the former maintains slight negative biases. In addition, the simulations of outgoing longwave radiation (OLR) at the top of the atmosphere (TOA) from PROGCS runs are considerably improved as well, resulting in smaller biases in radiative heating rates at heights below 850 hPa and above 400 hPa in GRAPES_GFS. Our results indicate that a prognostic method of cloud-cover calculation has a significant advantage over the conventional diagnostic one, and it should be adopted in both weather and climate simulation and forecasting.
Enablers and barriers to large-scale uptake of improved solid fuel stoves: a systematic review.
Rehfuess, Eva A; Puzzolo, Elisa; Stanistreet, Debbi; Pope, Daniel; Bruce, Nigel G
2014-02-01
Globally, 2.8 billion people rely on household solid fuels. Reducing the resulting adverse health, environmental, and development consequences will involve transitioning through a mix of clean fuels and improved solid fuel stoves (IS) of demonstrable effectiveness. To date, achieving uptake of IS has presented significant challenges. We performed a systematic review of factors that enable or limit large-scale uptake of IS in low- and middle-income countries. We conducted systematic searches through multidisciplinary databases, specialist websites, and consulting experts. The review drew on qualitative, quantitative, and case studies and used standardized methods for screening, data extraction, critical appraisal, and synthesis. We summarized our findings as "factors" relating to one of seven domains-fuel and technology characteristics; household and setting characteristics; knowledge and perceptions; finance, tax, and subsidy aspects; market development; regulation, legislation, and standards; programmatic and policy mechanisms-and also recorded issues that impacted equity. We identified 31 factors influencing uptake from 57 studies conducted in Asia, Africa, and Latin America. All domains matter. Although factors such as offering technologies that meet household needs and save fuel, user training and support, effective financing, and facilitative government action appear to be critical, none guarantee success: All factors can be influential, depending on context. The nature of available evidence did not permit further prioritization. Achieving adoption and sustained use of IS at a large scale requires that all factors, spanning household/community and program/societal levels, be assessed and supported by policy. We propose a planning tool that would aid this process and suggest further research to incorporate an evaluation of effectiveness.
Proposing an Educational Scaling-and-Diffusion Model for Inquiry-Based Learning Designs
ERIC Educational Resources Information Center
Hung, David; Lee, Shu-Shing
2015-01-01
Education cannot adopt the linear model of scaling used by the medical sciences. "Gold standards" cannot be replicated without considering process-in-learning, diversity, and student-variedness in classrooms. This article proposes a nuanced model of educational scaling-and-diffusion, describing the scaling (top-down supports) and…
Second-Generation Large Civil Tiltrotor 7- by 10-Foot Wind Tunnel Test Data Report
NASA Technical Reports Server (NTRS)
Theodore, Colin R.; Russell, Carl R.; Willink, Gina C.; Pete, Ashley E.; Adibi, Sierra A.; Ewert, Adam; Theuns, Lieselotte; Beierle, Connor
2016-01-01
An approximately 6-percent scale model of the NASA Second-Generation Large Civil Tiltrotor (LCTR2) Aircraft was tested in the U.S. Army 7- by 10-Foot Wind Tunnel at NASA Ames Research Center January 4 to April 19, 2012, and September 18 to November 1, 2013. The full model was tested, along with modified versions in order to determine the effects of the wing tip extensions and nacelles; the wing was also tested separately in the various configurations. In both cases, the wing and nacelles used were adopted from the U.S. Army High Efficiency Tilt Rotor (HETR) aircraft, in order to limit the cost of the experiment. The full airframe was tested in high-speed cruise and low-speed hover flight conditions, while the wing was tested only in cruise conditions, with Reynolds numbers ranging from 0 to 1.4 million. In all cases, the external scale system of the wind tunnel was used to collect data. Both models were mounted to the scale using two support struts attached underneath the wing; the full airframe model also used a third strut attached at the tail. The collected data provides insight into the performance of the preliminary design of the LCTR2 and will be used for computational fluid dynamics (CFD) validation and the development of flight dynamics simulation models.
de Oliveira, César Augusto F.; Grant, Barry J.; Zhou, Michelle; McCammon, J. Andrew
2011-01-01
Chagas' disease, caused by the protozoan parasite Trypanosoma cruzi (T. cruzi), is a life-threatening illness affecting 11–18 million people. Currently available treatments are limited, with unacceptable efficacy and safety profiles. Recent studies have revealed an essential T. cruzi proline racemase enzyme (TcPR) as an attractive candidate for improved chemotherapeutic intervention. Conformational changes associated with substrate binding to TcPR are believed to expose critical residues that elicit a host mitogenic B-cell response, a process contributing to parasite persistence and immune system evasion. Characterization of the conformational states of TcPR requires access to long-time-scale motions that are currently inaccessible by standard molecular dynamics simulations. Here we describe advanced accelerated molecular dynamics that extend the effective simulation time and capture large-scale motions of functional relevance. Conservation and fragment mapping analyses identified potential conformational epitopes located in the vicinity of newly identified transient binding pockets. The newly identified open TcPR conformations revealed by this study along with knowledge of the closed to open interconversion mechanism advances our understanding of TcPR function. The results and the strategy adopted in this work constitute an important step toward the rationalization of the molecular basis behind the mitogenic B-cell response of TcPR and provide new insights for future structure-based drug discovery. PMID:22022240
Secure access control and large scale robust representation for online multimedia event detection.
Liu, Changyu; Lu, Bin; Li, Huiling
2014-01-01
We developed an online multimedia event detection (MED) system. However, two issues arise when integrating traditional event detection algorithms into the online environment: secure access control and large-scale robust representation. For the first issue, we proposed a tree proxy-based and service-oriented access control (TPSAC) model based on the traditional role-based access control model. Verification experiments were conducted on the CloudSim simulation platform, and the results showed that the TPSAC model is suitable for the access control of dynamic online environments. For the second issue, inspired by the object-bank scene descriptor, we proposed a 1000-object-bank (1000OBK) event descriptor. Feature vectors of the 1000OBK were extracted from response pyramids of 1000 generic object detectors which were trained on standard annotated image datasets, such as the ImageNet dataset. A spatial bag-of-words tiling approach was then adopted to encode these feature vectors for bridging the gap between the objects and events. Furthermore, we performed experiments in the context of event classification on the challenging TRECVID MED 2012 dataset, and the results showed that the robust 1000OBK event descriptor outperforms the state-of-the-art approaches.
TopicLens: Efficient Multi-Level Visual Topic Exploration of Large-Scale Document Collections.
Kim, Minjeong; Kang, Kyeongpil; Park, Deokgun; Choo, Jaegul; Elmqvist, Niklas
2017-01-01
Topic modeling, which reveals underlying topics of a document corpus, has been actively adopted in visual analytics for large-scale document collections. However, due to its significant processing time and non-interactive nature, topic modeling has so far not been tightly integrated into a visual analytics workflow. Instead, most such systems are limited to utilizing a fixed, initial set of topics. Motivated by this gap in the literature, we propose a novel interaction technique called TopicLens that allows a user to dynamically explore data through a lens interface where topic modeling and the corresponding 2D embedding are efficiently computed on the fly. To support this interaction in real time while maintaining view consistency, we propose a novel efficient topic modeling method and a semi-supervised 2D embedding algorithm. Our work is based on improving state-of-the-art methods such as nonnegative matrix factorization and t-distributed stochastic neighbor embedding. Furthermore, we have built a web-based visual analytics system integrated with TopicLens. We use this system to measure the performance and the visualization quality of our proposed methods. We provide several scenarios showcasing the capability of TopicLens using real-world datasets.
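TopicLens builds on nonnegative matrix factorization (NMF) for topics and a semi-supervised t-SNE variant for the 2D embedding; its actual solvers are faster custom methods. As a minimal sketch of just the factorization building block, the following implements classic Lee-Seung multiplicative updates on a tiny toy corpus (all data and parameters here are illustrative assumptions):

```python
import numpy as np

def nmf_topics(X, k, iters=300, eps=1e-9, seed=0):
    """Minimal NMF via Lee-Seung multiplicative updates: factor a
    nonnegative doc-term matrix X as W @ H, where rows of H are topics
    (term weights) and rows of W are per-document topic loadings.
    Updates preserve nonnegativity and monotonically reduce the
    Frobenius reconstruction error."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, k)) + eps
    H = rng.random((k, m)) + eps
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Tiny corpus with two obvious topics (terms 0-1 vs. terms 2-3).
X = np.array([[3., 2., 0., 0.],
              [6., 4., 0., 0.],
              [0., 0., 4., 2.],
              [0., 0., 2., 1.]])
W, H = nmf_topics(X, k=2)
```

In an interactive lens, the same updates can be warm-started from the previous factorization, which is one way such systems keep topic recomputation fast enough for on-the-fly use.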
Isosurface Extraction in Time-Varying Fields Using a Temporal Hierarchical Index Tree
NASA Technical Reports Server (NTRS)
Shen, Han-Wei; Gerald-Yamasaki, Michael (Technical Monitor)
1998-01-01
Many high-performance isosurface extraction algorithms have been proposed in the past several years as a result of intensive research efforts. When applying these algorithms to large-scale time-varying fields, the storage overhead incurred by storing the search index often becomes overwhelming. This paper proposes an algorithm for locating isosurface cells in time-varying fields. We devise a new data structure, called the Temporal Hierarchical Index Tree, which exploits the temporal coherence that exists in a time-varying field and adaptively coalesces the cells' extreme values over time; the resulting extreme values are then used to create the isosurface cell search index. For a typical time-varying scalar data set, not only does this temporal hierarchical index tree require much less storage space, but the amount of I/O required to access the indices from disk at different time steps is also substantially reduced. We illustrate the utility and speed of our algorithm with data from several large-scale time-varying CFD simulations. Our algorithm can achieve more than 80% disk-space savings compared with existing techniques, while the isosurface extraction time is nearly optimal.
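The core idea of coalescing extreme values over time can be sketched with a flat, greedy variant: merge consecutive time steps into one stored [min, max] interval per cell while the merged range inflates no cell's per-step range by more than a tolerance. The actual paper uses a binary hierarchy over time steps; this flat version (with made-up data) only conveys the coalescing and query logic:

```python
import numpy as np

def build_temporal_index(cell_min, cell_max, tol):
    """Greedy temporal coalescing sketch: cell_min/cell_max are (T, C)
    arrays of each cell's per-time-step extreme values. Adjacent time
    steps are merged while the unioned [min, max] inflates no cell's
    per-step range by more than `tol`, so nearly-constant cells are
    stored once instead of T times. Returns (t_start, t_end, lo, hi)."""
    T, _ = cell_min.shape
    intervals = []
    t0, lo, hi = 0, cell_min[0].copy(), cell_max[0].copy()
    for t in range(1, T):
        new_lo = np.minimum(lo, cell_min[t])
        new_hi = np.maximum(hi, cell_max[t])
        if np.max((new_hi - new_lo) - (cell_max[t] - cell_min[t])) <= tol:
            lo, hi = new_lo, new_hi            # coalesce step t into interval
        else:
            intervals.append((t0, t - 1, lo, hi))
            t0, lo, hi = t, cell_min[t].copy(), cell_max[t].copy()
    intervals.append((t0, T - 1, lo, hi))
    return intervals

def isosurface_cells(intervals, t, isovalue):
    """Candidate cells at time t: those whose stored [min, max] spans the isovalue."""
    for t0, t1, lo, hi in intervals:
        if t0 <= t <= t1:
            return np.nonzero((lo <= isovalue) & (isovalue <= hi))[0]
    return np.array([], dtype=int)

# Two cells over four time steps; cell 0 jumps at the last step.
cell_min = np.array([[0., 5.], [0., 5.], [0., 5.], [10., 5.]])
cell_max = cell_min + 1.0
intervals = build_temporal_index(cell_min, cell_max, tol=0.0)
cands = isosurface_cells(intervals, t=1, isovalue=0.5)
```

Steady cells collapse into long intervals (here, steps 0-2 become one entry), which is the source of both the storage and the I/O savings.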
Canale, Natale; Vieno, Alessio; Griffiths, Mark D; Borraccino, Alberto; Lazzeri, Giacomo; Charrier, Lorena; Lemma, Patrizia; Dalmasso, Paola; Santinello, Massimo
2017-03-01
The primary aim of the present study was to examine the association between immigrant generation, family sociodemographic characteristics, and problem gambling severity in a large-scale nationally representative sample of Italian youth. Data from the 2013-2014 Health Behaviour in School-aged Children (HBSC) Survey were used for cross-sectional analyses of adolescent problem gambling. Self-administered questionnaires were completed by a representative sample of 20,791 15-year-old students. Respondents' problem gambling severity, immigrant status, family characteristics (family structure, family affluence, perceived family support), and socio-demographic characteristics were individually assessed. Rates of adolescent at-risk/problem gambling were twice as high among first-generation immigrants as among non-immigrant students; the odds of being at-risk/problem gamblers were higher among first-generation immigrants than among adolescents of other immigrant generations or non-immigrants. Not living with two biological or adoptive parents appears to be a factor that increases the risk of becoming a problem gambler among first-generation immigrants. Immigrant status and family characteristics may play a key role in contributing to adolescent problem gambling. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
Supporting Regularized Logistic Regression Privately and Efficiently.
Li, Wenfa; Liu, Hongzhe; Yang, Peng; Xie, Wei
2016-01-01
As one of the most popular statistical and machine learning models, logistic regression with regularization has found wide adoption in biomedicine, social sciences, information technology, and so on. These domains often involve data on human subjects that are subject to strict privacy regulations. Concerns over data privacy make it increasingly difficult to coordinate and conduct large-scale collaborative studies, which typically rely on cross-institution data sharing and joint analysis. Our work focuses on safeguarding regularized logistic regression, a widely used statistical model that has not yet been investigated from a data security and privacy perspective. We consider a common use scenario of multi-institution collaborative studies, such as research consortia or networks, as widely seen in genetics, epidemiology, social sciences, etc. To make our privacy-enhancing solution practical, we demonstrate a non-conventional and computationally efficient method leveraging distributed computing and strong cryptography to provide comprehensive protection over individual-level and summary data. Extensive empirical evaluations on several studies validate the privacy guarantee, efficiency, and scalability of our proposal. We also discuss the practical implications of our solution for large-scale studies and applications from various disciplines, including genetic and biomedical studies, smart grid, network analysis, etc.
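For reference, the underlying (non-private) model being protected is plain L2-regularized logistic regression. The sketch below fits it by gradient descent on toy data; in the paper's setting this same computation would run under secure distributed protocols rather than on pooled raw data, and the data, learning rate, and regularization strength here are illustrative assumptions.

```python
import numpy as np

def fit_logreg_l2(X, y, lam=0.1, lr=0.5, iters=500):
    """L2-regularized logistic regression via gradient descent:
    minimize mean log-loss + (lam / 2) * ||w||^2 over weights w and
    intercept b. The regularizer shrinks weights and stabilizes the fit
    when institutions' samples are small."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
        gw = X.T @ (p - y) / n + lam * w         # gradient w.r.t. weights
        gb = np.mean(p - y)                      # gradient w.r.t. intercept
        w -= lr * gw
        b -= lr * gb
    return w, b

# Toy linearly separable data standing in for pooled multi-site samples.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
w, b = fit_logreg_l2(X, y)
acc = np.mean(((X @ w + b) > 0) == (y == 1))
```

Because the gradient is a sum over samples, each institution can in principle compute its local contribution and combine them securely, which is what makes this model amenable to distributed privacy-preserving training.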
EVALUATING MACROINVERTEBRATE COMMUNITY ...
Since 2010, new construction in California is required to include stormwater detention and infiltration that is designed to capture rainfall from the 85th percentile of storm events in the region, preferably through green infrastructure. This study used recent macroinvertebrate community monitoring data to determine the ecological threshold for percent impervious cover prior to large scale adoption of green infrastructure using Threshold Indicator Taxa Analysis (TITAN). TITAN uses an environmental gradient and biological community data to determine individual taxa change points with respect to changes in taxa abundance and frequency across that gradient. Individual taxa change points are then aggregated to calculate the ecological threshold. This study used impervious cover data from National Land Cover Datasets and macroinvertebrate community data from California Environmental Data Exchange Network and Southern California Coastal Water Research Project. Preliminary TITAN runs for California’s Chaparral region indicated that both increasing and decreasing taxa had ecological thresholds of <1% watershed impervious cover. Next, TITAN will be used to determine shifts in the ecological threshold after the implementation of green infrastructure on a large scale. This presentation for the Society for Freshwater Scientists will discuss initial evaluation of community and taxa-specific thresholds of impairment for macroinvertebrates in California streams along
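TITAN's machinery (indicator-species scores, permutation tests, bootstrapped uncertainty) is substantial; the change-point idea at its core can nonetheless be conveyed with a greatly simplified stand-in that scans candidate splits along the gradient and keeps the one maximizing the abundance contrast. Everything below (scoring rule, synthetic data, threshold) is an illustrative assumption, not TITAN itself:

```python
import numpy as np

def taxon_change_point(gradient, abundance):
    """Simplified stand-in for TITAN's per-taxon change point: scan split
    points along the environmental gradient (here % impervious cover) and
    keep the one maximizing the squared difference in mean abundance
    between the two sides. Real TITAN aggregates IndVal-based change
    points across taxa with permutation and bootstrap procedures."""
    order = np.argsort(gradient)
    g, a = gradient[order], abundance[order]
    best_split, best_score = g[0], -np.inf
    for i in range(1, len(g)):
        score = (a[:i].mean() - a[i:].mean()) ** 2
        if score > best_score:
            best_split, best_score = (g[i - 1] + g[i]) / 2, score
    return best_split

# Synthetic sensitive taxon: abundant below ~1% impervious cover, sparse above,
# mimicking the <1% thresholds reported for the Chaparral region.
rng = np.random.default_rng(2)
cover = rng.uniform(0, 10, 200)
abund = np.where(cover < 1.0, 8.0, 1.0) + rng.normal(0, 0.5, 200)
cp = taxon_change_point(cover, abund)
```

Aggregating such per-taxon change points (weighted by their reliability) is what yields the community-level ecological threshold the abstract describes.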
Field-scale experiments reveal persistent yield gaps in low-input and organic cropping systems
Kravchenko, Alexandra N.; Snapp, Sieglinde S.; Robertson, G. Philip
2017-01-01
Knowledge of production-system performance is largely based on observations at the experimental plot scale. Although yield gaps between plot-scale and field-scale research are widely acknowledged, their extent and persistence have not been experimentally examined in a systematic manner. At a site in southwest Michigan, we conducted a 6-y experiment to test the accuracy with which plot-scale crop-yield results can inform field-scale conclusions. We compared conventional versus alternative, that is, reduced-input and biologically based–organic, management practices for a corn–soybean–wheat rotation in a randomized complete block-design experiment, using 27 commercial-size agricultural fields. Nearby plot-scale experiments (0.02-ha to 1.0-ha plots) provided a comparison of plot versus field performance. We found that plot-scale yields well matched field-scale yields for conventional management but not for alternative systems. For all three crops, at the plot scale, reduced-input and conventional managements produced similar yields; at the field scale, reduced-input yields were lower than conventional. For soybeans at the plot scale, biological and conventional managements produced similar yields; at the field scale, biological yielded less than conventional. For corn, biological management produced lower yields than conventional in both plot- and field-scale experiments. Wheat yields appeared to be less affected by the experimental scale than corn and soybean. Conventional management was more resilient to field-scale challenges than alternative practices, which were more dependent on timely management interventions; in particular, mechanical weed control. Results underscore the need for much wider adoption of field-scale experimentation when assessing new technologies and production-system performance, especially as related to closing yield gaps in organic farming and in low-resourced systems typical of much of the developing world. PMID:28096409
Petranovich, Christine L; Walz, Nicolay Chertkoff; Staat, Mary Allen; Chiu, Chung-Yiu Peter; Wade, Shari L
2017-01-01
The objectives of this study were to examine the association of structural language and pragmatic communication with behavior problems and social competence in girls adopted internationally. Participants included girls between 6-12 years of age who were internationally adopted from China (n = 32) and Eastern-Europe (n = 25) and a control group of never-adopted girls (n = 25). Children completed the Wechsler Abbreviated Scale of Intelligence. Parents completed the Child Communication Checklist- second edition, the Child Behavior Checklist, and the Home and Community Social Behavior Scales. Compared to the controls, parents in the Eastern European group reported more problems with social competence, externalizing behaviors, structural language, and pragmatic communication. The Chinese group evidenced more internalizing problems. Using generalized linear regression, interaction terms were examined to determine if the associations of pragmatic communication and structural language with behavior problems and social competence varied across groups. Controlling for general intellectual functioning, poorer pragmatic communication was associated with more externalizing problems and poorer social competence. In the Chinese group, poorer pragmatic communication was associated with more internalizing problems. Post-adoption weaknesses in pragmatic communication are associated with behavior problems and social competence. Internationally adopted children may benefit from interventions that target pragmatic communication.
The Impact of Utility Tariff Evolution on Behind-the-Meter PV Adoption
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cole, Wesley J; Gagnon, Pieter J; Frew, Bethany A
This analysis uses a new method to link the NREL Regional Energy Deployment System (ReEDS) capacity expansion model with the NREL distributed generation market demand model (dGen) to explore the impact that the evolution of retail electricity tariffs can have on the adoption of distributed photovoltaics (DPV). The evolution most notably takes the form of decreased mid-day electricity costs, as low-cost PV reduces the marginal cost of electricity during those hours and the changes are subsequently communicated to electricity consumers through tariffs. We find that even under the low PV prices of the new SunShot targets, the financial performance of DPV under evolved tariffs still motivates behind-the-meter adoption, despite the significant reduction in afternoon electricity costs driven by deployment of cheap utility-scale PV. The amount of DPV in 2050 in these low-cost futures ranged from 206 GW to 263 GW, 13-fold and 16-fold increases over 2016 adoption levels, respectively. From a utility planner's perspective, the representation of tariff evolution has noteworthy impacts on forecasted DPV adoption in scenarios with widespread time-of-use tariffs. Scenarios that projected adoption under a portfolio of time-of-use tariffs, but did not represent the evolution of those tariffs, predicted up to 36 percent more DPV in 2050 compared with scenarios that did represent that evolution. Lastly, we find that the reduction in DPV deployment resulting from evolved tariffs had a negligible impact on the total generation from PV, both utility-scale and distributed, in the scenarios we examined. Any reduction in DPV generation was replaced with utility-scale PV generation, to arrive at the quantity that makes up the least-cost portfolio.
Knispel, Alexis L; McLachlan, Stéphane M
2010-01-01
Genetically modified herbicide-tolerant (GMHT) oilseed rape (OSR; Brassica napus L.) was approved for commercial cultivation in Canada in 1995 and currently represents over 95% of the OSR grown in western Canada. After a decade of widespread cultivation, GMHT volunteers represent an increasing management problem in cultivated fields and are ubiquitous in adjacent ruderal habitats, where they contribute to the spread of transgenes. However, few studies have considered escaped GMHT OSR populations in North America, and even fewer have been conducted at large spatial scales (i.e. landscape scales). In particular, the contribution of landscape structure and large-scale anthropogenic dispersal processes to the persistence and spread of escaped GMHT OSR remains poorly understood. We conducted a multi-year survey of the landscape-scale distribution of escaped OSR plants adjacent to roads and cultivated fields. Our objective was to examine the long-term dynamics of escaped OSR at large spatial scales and to assess the relative importance of landscape and localised factors to the persistence and spread of these plants outside of cultivation. From 2005 to 2007, we surveyed escaped OSR plants along roadsides and field edges at 12 locations in three agricultural landscapes in southern Manitoba where GMHT OSR is widely grown. Data were analysed to examine temporal changes at large spatial scales and to determine factors affecting the distribution of escaped OSR plants in roadside and field edge habitats within agricultural landscapes. Additionally, we assessed the potential for seed dispersal between escaped populations by comparing the relative spatial distribution of roadside and field edge OSR. Densities of escaped OSR fluctuated over space and time in both roadside and field edge habitats, though the proportion of GMHT plants was high (93-100%). 
Escaped OSR was positively affected by agricultural landscape (indicative of cropping intensity) and by the presence of an adjacent field planted to OSR. Within roadside habitats, escaped OSR was also strongly associated with large-scale variables, including road surface (indicative of traffic intensity) and distance to the nearest grain elevator. Conversely, within field edges, OSR density was affected by localised crop management practices such as mowing, soil disturbance and herbicide application. Despite the proximity of roadsides and field edges, there was little evidence of spatial aggregation among escaped OSR populations in these two habitats, especially at very fine spatial scales (i.e. <100 m), suggesting that natural propagule exchange is infrequent. Escaped OSR populations were persistent at large spatial and temporal scales, and low density in a given landscape or year was not indicative of overall extinction. As a result of ongoing cultivation and transport of OSR crops, escaped GMHT traits will likely remain predominant in agricultural landscapes. While escaped OSR in field edge habitats generally results from local seeding and management activities occurring at the field-scale, distribution patterns within roadside habitats are determined in large part by seed transport occurring at the landscape scale and at even larger regional scales. Our findings suggest that these large-scale anthropogenic dispersal processes are sufficient to enable persistence despite limited natural seed dispersal. This widespread dispersal is likely to undermine field-scale management practices aimed at eliminating escaped and in-field GMHT OSR populations. Agricultural transport and landscape-scale cropping patterns are important determinants of the distribution of escaped GM crops. At the regional level, these factors ensure ongoing establishment and spread of escaped GMHT OSR despite limited local seed dispersal. 
Escaped populations thus play an important role in the spread of transgenes and have substantial implications for the coexistence of GM and non-GM production systems. Given the large-scale factors driving the spread of escaped transgenes, localised co-existence measures may be impracticable where they are not commensurate with regional dispersal mechanisms. To be effective, strategies aimed at reducing contamination from GM crops should be multi-scale in approach and be developed and implemented at both farm and landscape levels of organisation. Multiple stakeholders should thus be consulted, including both GM and non-GM farmers, as well as seed developers, processors, transporters and suppliers. Decisions to adopt GM crops require thoughtful and inclusive consideration of the risks and responsibilities inherent in this new technology.
Factors associated with small-scale agricultural machinery adoption in Bangladesh: Census findings.
Mottaleb, Khondoker Abdul; Krupnik, Timothy J; Erenstein, Olaf
2016-08-01
There is strong advocacy for agricultural machinery appropriate for smallholder farmers in South Asia. Such 'scale-appropriate' machinery can increase returns to land and labour, although the still substantial capital investment required can preclude smallholder ownership. Increasing machinery demand has resulted in relatively well-developed markets for rental services for tillage, irrigation, and post-harvest operations. Many smallholders thereby access agricultural machinery that may have otherwise been cost prohibitive to purchase through fee-for-service arrangements, though opportunity for expansion remains. To more effectively facilitate the development and investment in scale-appropriate machinery, there is a need to better understand the factors associated with agricultural machinery purchases and service provision. This paper first reviews Bangladesh's historical policy environment that facilitated the development of agricultural machinery markets. It then uses recent Bangladesh census data from 814,058 farm households to identify variables associated with the adoption of the most common smallholder agricultural machinery - irrigation pumps, threshers, and power tillers (mainly driven by two-wheel tractors). Multinomial probit model results indicate that machinery ownership is positively associated with household assets, credit availability, electrification, and road density. These findings suggest that donors and policy makers should focus not only on short-term projects to boost machinery adoption. Rather, sustained emphasis on improving physical and civil infrastructure and services, as well as assuring credit availability, is also necessary to create an enabling environment in which the adoption of scale-appropriate farm machinery is most likely.
Power monitoring and control for large scale projects: SKA, a case study
NASA Astrophysics Data System (ADS)
Barbosa, Domingos; Barraca, João. Paulo; Maia, Dalmiro; Carvalho, Bruno; Vieira, Jorge; Swart, Paul; Le Roux, Gerhard; Natarajan, Swaminathan; van Ardenne, Arnold; Seca, Luis
2016-07-01
Large sensor-based science infrastructures for radio astronomy like the SKA will be among the most intensive data-driven projects in the world, facing extremely demanding computation, storage, management and, above all, power requirements. The geographically wide distribution of the SKA and its associated processing requirements, in the form of tailored High Performance Computing (HPC) facilities, require a greener approach to the Information and Communications Technologies (ICT) adopted for data processing, to enable operational compliance with potentially strict power budgets. Reducing electricity costs, improving system power monitoring, and managing electricity generation at the system level are paramount to avoiding future inefficiencies and higher costs, and to enabling fulfillment of the key science cases. Here we outline major characteristics and innovation approaches to address power efficiency and long-term power sustainability for radio astronomy projects, focusing on green ICT for science and smart power monitoring and control.
Wave functions of symmetry-protected topological phases from conformal field theories
NASA Astrophysics Data System (ADS)
Scaffidi, Thomas; Ringel, Zohar
2016-03-01
We propose a method for analyzing two-dimensional symmetry-protected topological (SPT) wave functions using a correspondence with conformal field theories (CFTs) and integrable lattice models. This method generalizes the CFT approach for the fractional quantum Hall effect wherein the wave-function amplitude is written as a many-operator correlator in the CFT. Adopting a bottom-up approach, we start from various known microscopic wave functions of SPTs with discrete symmetries and show how the CFT description emerges at large scale, thereby revealing a deep connection between group cocycles and critical, sometimes integrable, models. We show that the CFT describing the bulk wave function is often also the one describing the entanglement spectrum, but not always. Using a plasma analogy, we also prove the existence of hidden quasi-long-range order for a large class of SPTs. Finally, we show how response to symmetry fluxes is easily described in terms of the CFT.
NASA Astrophysics Data System (ADS)
Cheyney, S.; Fishwick, S.; Hill, I. A.; Linford, N. T.
2015-08-01
Despite the development of advanced processing and interpretation tools for magnetic data sets in the fields of mineral and hydrocarbon industries, these methods have not achieved similar levels of adoption for archaeological or very near surface surveys. Using a synthetic data set we demonstrate that certain methodologies and assumptions used to successfully invert more regional-scale data can lead to large discrepancies between the true and recovered depths when applied to archaeological-type anomalies. We propose variations to the current approach, analysing the choice of the depth-weighting function, mesh design and parameter constraints, to develop an appropriate technique for the 3-D inversion of archaeological-scale data sets. The results show a successful recovery of a synthetic scenario, as well as a case study of a Romano-Celtic temple in the UK. For the case study, the final susceptibility model is compared with two coincident ground penetrating radar surveys, showing a high correlation with the comparative depth slices. The new approach takes interpretation of archaeological data sets beyond a simple 2-D visual interpretation based on pattern recognition.
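One of the inversion choices the abstract highlights is the depth-weighting function. A standard form in potential-field inversion is w(z) = (z + z0)^(-beta/2) (Li-Oldenburg style, beta near 3 for magnetics), which compensates for the rapid decay of sensitivity with depth; the mesh depths and parameter values below are illustrative assumptions for an archaeological-scale survey, not the paper's settings:

```python
import numpy as np

def depth_weights(z, z0, beta=3.0):
    """Li-Oldenburg-style depth weighting, w(z) = (z + z0)^(-beta / 2).
    Without it, the inversion concentrates all recovered susceptibility
    at the surface; the choice of beta and z0 is exactly the kind of
    assumption that regional-scale practice tunes differently from
    shallow archaeological surveys."""
    z = np.asarray(z, dtype=float)
    return (z + z0) ** (-beta / 2.0)

# Cell-centre depths for a hypothetical shallow archaeological mesh (metres).
z = np.linspace(0.1, 3.0, 30)
w_mag = depth_weights(z, z0=0.25, beta=3.0)   # magnetic-style decay
w_soft = depth_weights(z, z0=0.25, beta=2.0)  # gentler alternative weighting
```

Comparing the two curves shows why the weighting exponent matters at this scale: a beta tuned for kilometre-deep targets distributes model structure very differently over a 3 m deep mesh.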
Numerical and Experimental Study of Wake Redirection Techniques in a Boundary Layer Wind Tunnel
NASA Astrophysics Data System (ADS)
Wang, J.; Foley, S.; Nanos, E. M.; Yu, T.; Campagnolo, F.; Bottasso, C. L.; Zanotti, A.; Croce, A.
2017-05-01
The aim of the present paper is to validate a wind farm LES framework in the context of two distinct wake redirection techniques: yaw misalignment and individual cyclic pitch control. A test campaign was conducted using scaled wind turbine models in a boundary layer wind tunnel, where both particle image velocimetry and hot-wire thermo anemometers were used to obtain high quality measurements of the downstream flow. A LiDAR system was also employed to determine the non-uniformity of the inflow velocity field. A high-fidelity large-eddy simulation lifting-line model was used to simulate the aerodynamic behavior of the system, including the geometry of the wind turbine nacelle and tower. A tuning-free Lagrangian scale-dependent dynamic approach was adopted to improve the sub-grid scale modeling. Comparisons with experimental measurements are used to systematically validate the simulations. The LES results are in good agreement with the PIV and hot-wire data in terms of time-averaged wake profiles, turbulence intensity and Reynolds shear stresses. Discrepancies are also highlighted, to guide future improvements.
Toward economic flood loss characterization via hazard simulation
NASA Astrophysics Data System (ADS)
Czajkowski, Jeffrey; Cunha, Luciana K.; Michel-Kerjan, Erwann; Smith, James A.
2016-08-01
Among all natural disasters, floods have historically been the primary cause of human and economic losses around the world. Improving flood risk management requires a multi-scale characterization of the hazard and associated losses, the flood loss footprint, but this is typically not yet available in a precise and timely manner. To overcome this challenge, we propose a novel multidisciplinary approach that relies on a computationally efficient hydrological model simulating streamflow at scales ranging from small creeks to large rivers. We adopt a normalized index, the flood peak ratio (FPR), to characterize flood magnitude across multiple spatial scales. The simulated FPR is then shown to be a key statistical driver of the associated economic flood losses, represented by the number of insurance claims. Importantly, because it is based on a simulation procedure that uses generally available, physically based data, our flood simulation approach has the potential to be broadly applied, even for ungauged and poorly gauged basins, thus providing the necessary information for public- and private-sector actors to effectively reduce flood losses and save lives.
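The flood peak ratio normalizes each basin's event peak discharge by a basin-specific reference flood, which is what makes magnitudes comparable from creeks to main stems. The sketch below assumes a 10-year flood as the reference; the paper's exact reference-discharge definition may differ, and the discharges are invented:

```python
import numpy as np

def flood_peak_ratio(peak_q, ref_q):
    """Flood peak ratio (FPR) sketch: divide each basin's simulated event
    peak discharge by a basin-specific reference flood, so values are
    scale-free and comparable across catchment sizes. Values above 1
    flag locally severe flooding regardless of absolute basin size."""
    return np.asarray(peak_q, dtype=float) / np.asarray(ref_q, dtype=float)

# Three nested basins: a small creek, a tributary, and the main stem.
peak_q = np.array([12.0, 150.0, 900.0])    # simulated event peaks, m^3/s
ref_q = np.array([8.0, 160.0, 1200.0])     # assumed 10-yr reference floods, m^3/s
fpr = flood_peak_ratio(peak_q, ref_q)
```

Here the small creek (FPR = 1.5) flooded most severely even though its absolute discharge is tiny, which is precisely the multi-scale signal a raw peak-discharge map would miss.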
NASA Astrophysics Data System (ADS)
Tauro, F.; Piscopia, R.; Grimaldi, S.
2017-12-01
Image-based methodologies, such as large scale particle image velocimetry (LSPIV) and particle tracking velocimetry (PTV), have increased our ability to noninvasively conduct streamflow measurements by affording spatially distributed observations at high temporal resolution. However, progress in optical methodologies has not been paralleled by the implementation of image-based approaches in environmental monitoring practice. We attribute this fact to the sensitivity of LSPIV, by far the most frequently adopted algorithm, to visibility conditions and to the occurrence of visible surface features. In this work, we test both LSPIV and PTV on a data set of 12 videos captured in a natural stream wherein artificial floaters are homogeneously and continuously deployed. Further, we apply both algorithms to a video of a high flow event on the Tiber River, Rome, Italy. In our application, we propose a modified PTV approach that only takes into account realistic trajectories. Based on our findings, LSPIV largely underestimates surface velocities with respect to PTV in both favorable (12 videos in a natural stream) and adverse (high flow event in the Tiber River) conditions. On the other hand, PTV is in closer agreement than LSPIV with benchmark velocities in both experimental settings. In addition, the accuracy of PTV estimations can be directly related to the transit of physical objects in the field of view, thus providing tangible data for uncertainty evaluation.
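The "realistic trajectories" modification to PTV can be pictured as nearest-neighbour frame-to-frame linking with plausibility gates. The sketch below gates on maximum displacement and on not moving upstream of an assumed mean flow direction; these specific criteria and the toy detections are assumptions, mimicking the idea rather than the paper's exact algorithm:

```python
import numpy as np

def link_tracks(frames, max_disp, flow_dir=np.array([1.0, 0.0])):
    """Minimal PTV-style tracker: each tracked particle is linked to its
    nearest detection in the next frame, and the link is kept only if it
    is 'realistic' -- displacement below max_disp and not directed
    upstream of the assumed mean flow direction. Spurious detections far
    from any track are thereby ignored."""
    tracks = [[p] for p in frames[0]]
    for frame in frames[1:]:
        pts = np.asarray(frame, dtype=float)
        for tr in tracks:
            d = pts - tr[-1]
            j = int(np.argmin((d ** 2).sum(axis=1)))   # nearest detection
            step = pts[j] - tr[-1]
            if np.linalg.norm(step) <= max_disp and step @ flow_dir >= 0:
                tr.append(pts[j])
    return tracks

# Two floaters advecting downstream (+x) plus one spurious detection.
frames = [np.array([[0.0, 0.0], [0.0, 1.0]]),
          np.array([[1.0, 0.0], [1.1, 1.0], [9.0, 5.0]]),
          np.array([[2.1, 0.0], [2.0, 1.0]])]
tracks = link_tracks(frames, max_disp=2.0)
velocity = np.mean([t[-1][0] - t[0][0] for t in tracks]) / (len(frames) - 1)
```

Trajectory-level gating is what ties PTV accuracy to the transit of real physical objects through the field of view, giving the tangible per-track uncertainty evidence the abstract mentions.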
Genetically modified crops and small-scale farmers: main opportunities and challenges.
Azadi, Hossein; Samiee, Atry; Mahmoudi, Hossein; Jouzi, Zeynab; Khachak, Parisa Rafiaani; De Maeyer, Philippe; Witlox, Frank
2016-01-01
Although some important features of genetically modified (GM) crops, such as insect resistance, herbicide tolerance, and drought tolerance, might seem beneficial for small-scale farmers, the adoption of GM technology by smallholders is still slight. Identifying the pros and cons of this technology is important for understanding the impacts of GM crops on these farmers. This article reviews the main opportunities and challenges of GM crops for small-scale farmers in developing countries. The most significant advantages of GM crops include independence of farm size, environmental protection, improvement of occupational health, and the potential of bio-fortified crops to reduce malnutrition. Challenges that small-scale farmers face in adopting GM crops include the availability and accessibility of GM crop seeds, seed dissemination and price, and the lack of adequate information. In addition, R&D and production costs make it difficult for these farmers to adopt GM crops. Moreover, intellectual property right regulations may deprive resource-poor farmers of the advantages of GM technology. Finally, concerns about socio-economic and environmental safety issues are also addressed in this paper.
Padula, William V; Mishra, Manish K; Makic, Mary Beth F; Wald, Heidi L; Campbell, Jonathan D; Nair, Kavita V; Valuck, Robert J
2015-12-01
In 2008, the U.S. Centers for Medicare and Medicaid Services enacted a nonpayment policy for stage III and IV hospital-acquired pressure ulcers (HAPUs), which incentivized hospitals to improve prevention efforts. In response, hospitals looked for ways to support implementation of evidence-based practices for HAPU prevention, such as adoption of quality improvement (QI) interventions. The objective of this study was to quantify adoption patterns of QI interventions for supporting evidence-based practices for HAPU prevention. This study surveyed wound care specialists working at hospitals within the University HealthSystem Consortium. A questionnaire was used to retrospectively describe the adoption of 25 HAPU-specific QI interventions, grouped into four domains: leadership, staff, information technology (IT), and performance and improvement. Respondents indicated which QI interventions were implemented between 2007 and 2012, to the nearest quarter and year. Descriptive statistics defined patterns of QI adoption. A t-test and a statistical process control chart established a statistically significant increase in adoption following nonpayment policy enactment in October 2008. Increases are described in terms of scope (number of QI domains employed) and scale (number of QI interventions within domains). Fifty-three of the 55 hospitals surveyed reported implementing QI interventions for HAPU prevention. Leadership interventions were most frequent, increasing in scope from 40% to 63% between 2008 and 2012; "annual programs to promote pressure ulcer prevention" showed the greatest increase in scale. Staff interventions increased in scope from 32% to 53%; "frequent consult driven huddles" showed the greatest increase in scale. IT interventions increased in scope from 31% to 55%. Performance and improvement interventions increased in scope from 18% to 40%, with "new skin care products . . ." increasing the most. 
Academic medical centers increased adoption of QI interventions following changes in nonpayment policy. These QI interventions supported adherence to implementation of pressure ulcer prevention protocols. Changes in payment policies for prevention are effective in QI efforts. © 2015 Sigma Theta Tau International.
School Principals' Decision-Making Behaviour in the Management of Innovation.
ERIC Educational Resources Information Center
McGeown, Vincent
1979-01-01
A rating scale operationalized a model for the adoption and implementation of educational innovation. Phases were designated: creating a climate for change; analyzing antecedent conditions; generating alternatives; initiating change adoption; implementing change; and evaluating change outcomes. Principals' decision-making behavior was the best…
Challenge of biofuel: filling the tank without emptying the stomach?
NASA Astrophysics Data System (ADS)
Rajagopal, D.; Sexton, S. E.; Roland-Holst, D.; Zilberman, D.
2007-10-01
Biofuels have become a leading alternative to fossil fuel because they can be produced domestically by many countries, require only minimal changes to retail distribution and end-use technologies, are a partial response to global climate change, and have the potential to spur rural development. Production of biofuel has increased most rapidly for corn ethanol, in part because of government subsidies; yet corn ethanol offers at most a modest contribution to society's climate change goals and only a marginally positive net energy balance. Current biofuels pose long-run consequences for the provision of food and environmental amenities. In the short run, however, when gasoline supply and demand are inelastic, they serve as a buffer supply of energy, helping to reduce prices. Employing a conceptual model and back-of-the-envelope estimates of the wealth transfers resulting from biofuel production, we find that ethanol subsidies pay for themselves. Adoption of second-generation technologies may make biofuels more beneficial to society. The large-scale production of new types of crops dedicated to energy is likely to induce structural change in agriculture and change the sources, levels, and variability of farm incomes. The socio-economic impact of biofuel production will largely depend on how well the process of technology adoption by farmers and processors is understood and managed. Agricultural policy is expected to increasingly converge with environmental and energy policies.
NASA Astrophysics Data System (ADS)
Tassi, R.; Lorenzini, F.; Allasia, D. G.
2015-06-01
In recent decades, new approaches have been adopted to manage stormwater as close to its source as possible, using technologies and devices that preserve and recreate natural landscape features. Green roofs (GR) are examples of such devices and are also incentivized by city stormwater management plans. Several studies show that GRs decrease on-site runoff from impervious surfaces; however, the effect of widespread GR implementation on flood characteristics at the urban basin scale in subtropical areas is little discussed, mainly because of the absence of data. This paper therefore presents results from the monitoring of an extensive modular GR under subtropical weather conditions, the development of a rainfall-runoff model based on the modified Curve Number (CN) and SCS Triangular Unit Hydrograph (TUH) methods, and the analysis of the large-scale impact of GRs by modelling different basins. The model was calibrated against observed data and showed that the GR absorbed almost all of the smaller storms and reduced runoff even during the most intense rainfall. The overall CN was estimated at 83 (consistent with the available literature), and the shape of the hydrographs was well reproduced. Large-scale modelling (in basins ranging from 0.03 ha to several square kilometers) showed that the widespread use of GRs reduced peak flows (volumes) by around 57% (48%) at the source and 38% (32%) at the basin scale. Thus, this research validated a tool for assessing structural management measures (specifically GRs) to address changes in flood characteristics in city water management planning. From the application of this model it was concluded that even though the efficiency of GRs decreases as the basin scale increases, they still provide a good option to cope with urbanization impacts.
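The paper uses a modified Curve Number formulation whose details are not given in the abstract; as a sketch, the standard SCS-CN runoff equation behind it can be written out directly, using the overall CN of 83 reported for the green roof.

```python
# Sketch of the standard SCS Curve Number runoff equation. The paper uses a
# modified CN formulation; the standard form below is an assumption.

def scs_runoff_mm(p_mm, cn):
    """Direct runoff depth (mm) for a storm depth p_mm and curve number cn."""
    s = 25400.0 / cn - 254.0      # potential maximum retention (mm)
    ia = 0.2 * s                  # initial abstraction
    if p_mm <= ia:
        return 0.0                # small storms are fully absorbed
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# With CN = 83, a 5 mm storm produces no runoff (consistent with the finding
# that the GR absorbed almost all smaller storms), while a 50 mm storm
# produces only part of its depth as runoff.
q_small = scs_runoff_mm(5.0, 83)
q_large = scs_runoff_mm(50.0, 83)
```

Lower CN values (less impervious surfaces, such as a green roof versus a bare roof) raise the retention term `s` and thus shrink the computed runoff, which is the mechanism the paper's large-scale modelling exploits.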
Kandiah, Venu; Binder, Andrew R; Berglund, Emily Z
2017-10-01
Water reuse can serve as a sustainable alternative water source for urban areas. However, the successful implementation of large-scale water reuse projects depends on community acceptance. Because of the negative perceptions that are traditionally associated with reclaimed water, water reuse is often not considered in the development of urban water management plans. This study develops a simulation model for understanding community opinion dynamics surrounding the issue of water reuse and how individual perceptions evolve within that context, which can help in the planning and decision-making process. Based on the social amplification of risk framework, our agent-based model simulates consumer perceptions, discussion patterns, and their adoption or rejection of water reuse. The model is based on the "risk publics" model, an empirical approach that uses the concept of belief clusters to explain the adoption of new technology. Each household is represented as an agent, and the parameters that define their behavior and attributes are drawn from survey data. Community-level parameters (including social groups, relationships, and communication variables, also from survey data) are encoded to simulate the social processes that influence community opinion. The model demonstrates its capabilities to simulate opinion dynamics and consumer adoption of water reuse. In addition, based on empirical data, the model is applied to investigate water reuse behavior in different regions of the United States. Importantly, our results reveal that public opinion dynamics emerge differently based on membership in opinion clusters, frequency of discussion, and the structure of social networks. © 2017 Society for Risk Analysis.
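The mechanics of neighbor-driven adoption in an agent-based model of this kind can be sketched with a deliberately simplified threshold rule. Everything below is a toy assumption: the actual model is parameterized from survey data and the "risk publics" belief clusters, none of which is reproduced here.

```python
# Hypothetical threshold sketch of neighbor-driven adoption in an
# agent-based community model (not the paper's survey-calibrated model).

def simulate(neighbors_of, threshold, seeds, steps):
    """A household adopts water reuse once the fraction of its discussion
    partners who have already adopted reaches its threshold."""
    adopted = set(seeds)
    for _ in range(steps):
        for i, nbrs in neighbors_of.items():
            if i not in adopted and nbrs:
                frac = sum(1 for j in nbrs if j in adopted) / len(nbrs)
                if frac >= threshold:
                    adopted.add(i)
    return adopted

# Six households on a ring, each talking to its two immediate neighbors.
ring = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
final = simulate(ring, threshold=0.5, seeds={0}, steps=10)    # opinion spreads
stalled = simulate(ring, threshold=0.6, seeds={0}, steps=10)  # cascade blocked
```

Even this toy version shows the qualitative result the study reports: whether adoption spreads depends jointly on the network structure and on how resistant individual households are.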
[Overview of the US policies for health information technology and lessons learned for Israel].
Topaz, Maxim; Ash, Nachman
2013-05-01
The healthcare system in the United States (U.S.) faces a number of significant changes aimed at improving the quality and availability of medical services and reducing costs. Implementation of health information technologies, especially Electronic Health Records (EHR), is central to achieving these goals. Several recent legislative efforts in the U.S. aim at defining standards and promoting wide-scale "Meaningful Use" of these novel technologies. In Israel, the majority of healthcare providers adopted EHR throughout the last decade. Unlike the U.S., the process of EHR adoption occurred spontaneously, without governmental control or the definition of standards. In this article, we review the U.S. health information technology policies and standards and suggest potential lessons learned for Israel. First, we present the three-staged Meaningful Use regulations that require eligible healthcare practitioners to use EHR in their practice. We also describe the standards for EHR certification and national efforts to create interoperable health information technology networks. Finally, we provide a brief overview of the Israeli regulation in the field of EHR. Although the adoption of health information technology is wider in Israel, the lack of technology standards and governmental control has led to large technology gaps between providers. The example of the U.S. legislation urges the adoption of several critical steps to further enhance the quality and efficiency of the Israeli healthcare system, in particular: strengthening health information technology regulation; developing licensure criteria for health information technology; bridging the digital gap between healthcare organizations; defining quality measures; and improving the accessibility of health information for patients.
Aarons, Gregory A; Cafri, Guy; Lugo, Lindsay; Sawitzky, Angelina
2012-09-01
Mental health and social service provider attitudes toward evidence-based practice have been measured through the development and validation of the Evidence-Based Practice Attitude Scale (EBPAS; Aarons, Ment Health Serv Res 6(2):61-74, 2004). Scores on the EBPAS scales are related to provider demographic characteristics, organizational characteristics, and leadership. However, the EBPAS assesses only four domains of attitudes toward EBP. The current study identifies additional domains of attitudes toward evidence-based practice. A qualitative and quantitative mixed-methods approach was used to: (1) generate items from multiple sources (researcher, mental health program manager, clinician/therapist), (2) identify potential content domains, and (3) examine the preliminary domains and factor structure through exploratory factor analysis. Participants for item generation included the investigative team, a group of mental health program managers (n = 6), and a group of clinicians/therapists (n = 8). For quantitative analyses, a sample of 422 mental health service providers from 65 outpatient programs in San Diego County completed a survey that included the new items. Eight new EBPAS factors comprising 35 items were identified. Factor loadings were moderate to large, and internal consistency reliabilities were fair to excellent. We found that the convergence of these factors with the four previously identified evidence-based practice attitude factors (15 items) was small to moderate, suggesting that the newly identified factors represent distinct dimensions of mental health and social service provider attitudes toward adopting EBP. Combining the original 15 items with the 35 new items yields the 50-item version (EBPAS-50), which adds to our understanding of provider attitudes toward adopting EBPs. Directions for future research are discussed.
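The internal consistency reliabilities reported for the EBPAS factors are typically Cronbach's alpha values; the computation can be sketched on a hypothetical toy data set (not EBPAS data).

```python
# Cronbach's alpha for internal consistency, computed on toy data.
# The item scores below are invented for illustration only.

def cronbach_alpha(items):
    """items: list of equally long score lists, one list per scale item."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Three items answered by four respondents on a 5-point scale:
alpha = cronbach_alpha([[3, 4, 2, 5], [3, 5, 2, 4], [4, 4, 1, 5]])
```

Alpha near 1 means the items vary together (respondents who score high on one item score high on the others), which is what "fair to excellent" reliability refers to.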
Technology Adoption of Medical Faculty in Teaching: Differentiating Factors in Adopter Categories
ERIC Educational Resources Information Center
Zayim, Nese; Yildirim, Soner; Saka, Osman
2006-01-01
Despite large investments by higher education institutions in technology for faculty and student use, instructional technology is not being integrated into instruction in higher education institutions including medical education institutions. While the diffusion of instructional technologies has reached a saturation point among early adopters of…
The management of health care service quality. A physician perspective.
Bobocea, L; Gheorghe, I R; Spiridon, St; Gheorghe, C M; Purcarea, V L
2016-01-01
Applying marketing to health care services is now an essential element for every manager or policy maker. To be successful, a health care organization has to identify an accurate measurement scale for defining service quality, given competitive pressure and cost considerations. The most widely employed scale in the services sector is the SERVQUAL scale. In spite of its successful adoption in fields such as brokerage and banking, experts concluded that the SERVQUAL scale should be modified depending on the specific context. Moreover, the SERVQUAL scale focuses on the consumer's perspective on service quality. While service quality has typically been measured with the SERVQUAL scale, other experts identified a structure-process-outcome design which they considered more suitable for health care services. This approach highlights a different perspective on investigating service quality, namely, the physician's perspective. Further, we argue that the Seven Prong Model for Improving Service Quality can be adopted to effectively measure health care service quality in a Romanian context from a physician's perspective.
The Bass diffusion model on networks with correlations and inhomogeneous advertising
NASA Astrophysics Data System (ADS)
Bertotti, M. L.; Brunner, J.; Modanese, G.
2016-09-01
The Bass model, an effective forecasting tool for innovation diffusion based on large collections of empirical data, assumes a homogeneous diffusion process. We introduce a network structure into this model and investigate numerically the dynamics in the case of networks with link density $P(k)=c/k^\gamma$, where $k=1, \ldots , N$. The resulting curve of total adoptions in time is qualitatively similar to the homogeneous Bass curve for a case with the same average number of connections. The peak of the adoptions, however, tends to occur earlier, particularly when $\gamma$ and $N$ are large (i.e., when there are few hubs with a large maximum number of connections). Most interestingly, the adoption curve of the hubs anticipates the total adoption curve in a predictable way, with peak times that can be, for instance when $N=100$, between 10% and 60% of the time of the total adoption peak. This may make it possible to monitor the hubs for forecasting purposes. We also consider the case of networks with assortative and disassortative correlations, and a case of inhomogeneous advertising in which the publicity terms are "targeted" on the hubs while their total cost is kept constant.
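The homogeneous baseline that the networked variant is compared against can be sketched as a discrete-time Bass model, where new adoptions per step are driven by an innovation term $p$ and an imitation term $q$. The parameter values below are illustrative assumptions, not the paper's.

```python
# Minimal discrete-time homogeneous Bass diffusion model.
# p: innovation (external influence), q: imitation (internal influence),
# m: market size. Parameter values are illustrative assumptions.

def bass_adoptions(p, q, m, steps):
    """Return the cumulative adopters A(t) for a population of size m."""
    a, series = 0.0, []
    for _ in range(steps):
        new = (p + q * a / m) * (m - a)   # hazard rate times remaining pool
        a += new
        series.append(a)
    return series

curve = bass_adoptions(p=0.03, q=0.38, m=1000, steps=60)
```

The cumulative curve is S-shaped: per-step adoptions rise to an early peak and then decline as the pool of non-adopters empties, which is the "total adoptions" curve the hubs are said to anticipate.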
[Management of large marine ecosystem based on ecosystem approach].
Chu, Jian-song
2011-09-01
A large marine ecosystem (LME) is a large area of ocean characterized by distinct oceanography and ecology. Its natural characteristics call for management based on an ecosystem approach. A series of international treaties and regulations directly or indirectly support adopting an ecosystem approach to manage LMEs so as to achieve the sustainable utilization of marine resources. In practice, some countries such as Canada, Australia, and the U.S.A. have adopted an ecosystem-based approach to manage their oceans, and some international organizations, such as the Global Environment Facility, have carried out a number of LME programs based on the ecosystem approach. Aiming at the sustainable development of their fisheries, regional organizations such as the Caribbean Community have established regional fisheries mechanisms. However, adopting an ecosystem approach to manage LMEs is not only a scientific and legal issue but also a political matter, largely depending on the political will and the degree of mutual cooperation of the countries concerned.
Factors associated with compliance among users of solar water disinfection in rural Bolivia
2011-01-01
Background Diarrhoea is the second leading cause of childhood mortality, with an estimated 1.3 million deaths per year. Promotion of Solar Water Disinfection (SODIS) has been suggested as a strategy for reducing the global burden of diarrhoea by improving the microbiological quality of drinking water. Despite increasing support for the large-scale dissemination of SODIS, there are few reports describing the effectiveness of its implementation. It is, therefore, important to identify and understand the mechanisms that lead to adoption and regular use of SODIS. Methods We investigated the behaviours associated with SODIS adoption among households assigned to receive SODIS promotion during a cluster-randomized trial in rural Bolivia. Distinct groups of SODIS-users were identified on the basis of six compliance indicators using principal components and cluster analysis. The probability of adopting SODIS as a function of campaign exposure and household characteristics was evaluated using ordinal logistic regression models. Results Standardised, community-level SODIS-implementation in a rural Bolivian setting was associated with a median SODIS use of 32% (IQR: 17-50). Households that were more likely to use SODIS were those that participated more frequently in SODIS promotional events (OR = 1.07, 95%CI: 1.01-1.13), included women (OR = 1.18, 95%CI: 1.07-1.30), owned latrines (OR = 3.38, 95%CI: 1.07-10.70), and had severely wasted children living in the home (OR = 2.17, 95%CI: 1.34-3.49). Conclusions Most of the observed household characteristics showed limited potential to predict compliance with a comprehensive, year-long SODIS-promotion campaign; this finding reflects the complexity of behaviour change in the context of household water treatment. 
However, our findings also suggest that the motivation to adopt new water treatment habits and to acquire new knowledge about drinking water treatment is associated with prior engagements in sanitary hygiene and with the experience of contemporary family health concerns. Household-level factors like the ownership of a latrine, a large proportion of females and the presence of a malnourished child living in a home are easily assessable indicators that SODIS-programme managers could use to identify early adopters in SODIS promotion campaigns. Trial Registration ClinicalTrials.gov: NCT00731497 PMID:21463508
A hierarchy of distress and invariant item ordering in the General Health Questionnaire-12.
Doyle, F; Watson, R; Morgan, K; McBride, O
2012-06-01
Invariant item ordering (IIO) is defined as the extent to which items have the same ordering (in terms of item difficulty/severity - i.e. demonstrating whether items are difficult [rare] or less difficult [common]) for each respondent who completes a scale. IIO is therefore crucial for establishing a scale hierarchy that is replicable across samples, but no research has demonstrated IIO in scales of psychological distress. We aimed to determine if a hierarchy of distress with IIO exists in a large general population sample who completed a scale measuring distress. Data from 4107 participants who completed the 12-item General Health Questionnaire (GHQ-12) from the Northern Ireland Health and Social Wellbeing Survey 2005-6 were analysed. Mokken scaling was used to determine the dimensionality and hierarchy of the GHQ-12, and items were investigated for IIO. All items of the GHQ-12 formed a single, strong unidimensional scale (H=0.58). IIO was found for six of the 12 items (H-trans=0.55), and these symptoms reflected the following hierarchy: anhedonia, concentration, participation, coping, decision-making and worthlessness. The cross-sectional analysis needs replication. The GHQ-12 showed a hierarchy of distress, but IIO is only demonstrated for six of the items, and the scale could therefore be shortened. Adopting brief, hierarchical scales with IIO may be beneficial in both clinical and research contexts. Copyright © 2011 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Valcu-Lisman, A. M.; Gassman, P. W.; Arritt, R. W.; Kling, C.; Arbuckle, J. G.; Roesch-McNally, G. E.; Panagopoulos, Y.
2017-12-01
Projected changes in climatic patterns (higher temperatures, changes in extreme precipitation events, and higher humidity) will affect agricultural cropping and management systems in major agricultural production areas. Adaptation to new climatic or economic conditions is an important aspect of the agricultural decision-making process. Adopting cover crops, reducing tillage, extending drainage systems, and adjusting crop management are only a few examples of adaptive actions. These actions can be readily implemented as long as they have private benefits (increased profits, reduced risk). However, each adaptive action has a different impact on water quality. Cover crops and no-till usually have a positive impact on water quality, but increased tile drainage typically results in more degraded water quality, due primarily to increased export of soluble nitrogen and phosphorus. The goal of this research is to determine the changes in water quality, as well as in crop yields, as farmers undertake these adaptive measures. To answer this research question, we need to estimate the likelihood that these actions will occur, identify the agricultural areas where they are most likely to be implemented, and simulate the water quality impacts associated with each scenario. We apply our modeling efforts to the whole Upper Mississippi River Basin (UMRB) and the Ohio-Tennessee River Basin (OTRB). These two areas are critical source regions for the recurring hypoxic zone in the Gulf of Mexico. The likelihood of each adaptive agricultural action is estimated using data from a survey conducted in 2012. A large, representative sample of farmers in the Corn Belt was surveyed to elicit behavioral intentions regarding three of the most important agricultural adaptation strategies (no-till, cover crops, and tile drainage). 
We use these data to study the relationship between intent to adapt, farmer characteristics, farm characteristics, and weather characteristics, and to predict the probability of adoption for each action. Next, we use these estimated probabilities to create different scenarios for the two large-scale watersheds. Finally, we simulate the impact of these scenarios on water quality using calibrated UMRB and OTRB SWAT water quality models.
NASA Astrophysics Data System (ADS)
Leung, Juliana Y.; Srinivasan, Sanjay
2016-09-01
Modeling transport processes at large scale requires proper scale-up of subsurface heterogeneity and an understanding of its interaction with the underlying transport mechanisms. A technique based on volume averaging is applied to quantitatively assess the scaling characteristics of the effective mass transfer coefficient in heterogeneous reservoir models. The effective mass transfer coefficient represents the combined contribution of diffusion and dispersion to the transport of non-reactive solute particles within a fluid phase. Although treatment of transport problems with the volume averaging technique has been published in the past, application to geological systems exhibiting realistic spatial variability remains a challenge. Previously, the authors developed a procedure in which results from a fine-scale numerical flow simulation, reflecting the full physics of the transport process albeit over a sub-volume of the reservoir, are integrated with the volume averaging technique to provide an effective description of transport properties. The procedure is extended such that spatial averaging is performed at the local-heterogeneity scale. In this paper, the transport of a passive (non-reactive) solute is simulated on multiple reservoir models exhibiting different patterns of heterogeneity, and the scaling behavior of the effective mass transfer coefficient (Keff) is examined and compared. One such set of models exhibits power-law (fractal) characteristics, and the variability of dispersion and Keff with scale is in good agreement with analytical expressions described in the literature. This work offers insight into the impacts of heterogeneity on the scaling of effective transport parameters. A key finding is that spatial heterogeneity models with similar univariate and bivariate statistics may exhibit different scaling characteristics because of the influence of higher order statistics. More mixing is observed in the channelized models with higher-order continuity. 
It reinforces the notion that the flow response is influenced by the higher-order statistical description of heterogeneity. An important implication is that when scaling-up transport response from lab-scale results to the field scale, it is necessary to account for the scale-up of heterogeneity. Since the characteristics of higher-order multivariate distributions and large-scale heterogeneity are typically not captured in small-scale experiments, a reservoir modeling framework that captures the uncertainty in heterogeneity description should be adopted.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Unat, Didem; Dubey, Anshu; Hoefler, Torsten
The cost of data movement has always been an important concern in high performance computing (HPC) systems. It has now become the dominant factor in terms of both energy consumption and performance. Support for expression of data locality has been explored in the past, but those efforts have had only modest success in being adopted in HPC applications for various reasons. However, with the increasing complexity of the memory hierarchy and higher parallelism in emerging HPC systems, locality management has acquired a new urgency. Developers can no longer limit themselves to low-level solutions and ignore the potential for productivity and performance portability obtained by using locality abstractions. Fortunately, the trend emerging in recent literature on the topic alleviates many of the concerns that got in the way of their adoption by application developers. Data locality abstractions are available in the forms of libraries, data structures, languages and runtime systems; a common theme is increasing productivity without sacrificing performance. Furthermore, this paper examines these trends and identifies commonalities that can combine various locality concepts to develop a comprehensive approach to expressing and managing data locality on future large-scale high-performance computing systems.
Eusebio, Lidia; Capelli, Laura; Sironi, Selena
2016-01-01
Despite initial enthusiasm towards electronic noses and their possible application in different fields, and quite a few promising results, several critical issues emerge from most published research studies; as a matter of fact, the diffusion of electronic noses in real-life applications is still very limited. In general, a first step towards the large-scale diffusion of an analysis method is standardization. The aim of this paper is to describe the experimental procedure adopted to evaluate electronic nose performance, with the final purpose of establishing minimum performance requirements, which is considered a first crucial step towards standardization of the specific case of electronic nose application for environmental odor monitoring at receptors. Based on the experimental results of the performance testing of a commercialized electronic nose type with respect to three criteria (i.e., response invariability under variable atmospheric conditions, instrumental detection limit, and odor classification accuracy), it was possible to propose a logic that could be adopted for the definition of minimum performance requirements, in line with the idea that these should be technologically achievable. PMID:27657086
NASA Astrophysics Data System (ADS)
Hardy, D.; Janée, G.; Gallagher, J.; Frew, J.; Cornillon, P.
2006-12-01
The OPeNDAP Data Access Protocol (DAP) is a community standard for sharing scientific data across the Internet. Data providers using DAP have adopted a variety of metadata conventions to improve data utility, such as COARDS (1995) and CF (2003). Our results show, however, that metadata do not follow these conventions in practice. We collected metadata from over a hundred DAP servers, tens of thousands of data objects, and hundreds of collections. We found that a minority claim to adhere to a metadata convention, and a small percentage accurately adhere to their stated convention. We present descriptive statistics of our survey and highlight common traits such as well-populated attributes. Our empirical results indicate that unified search services cannot rely solely on metadata conventions. Although we encourage all providers to adopt a small subset of the CF convention for discovery purposes, we have no evidence to suggest that improved conventions would simplify the fundamental problem of heterogeneity. Large-scale discovery services must find methods for integrating incompatible metadata.
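The survey's core operation, checking whether a data object claims a convention and whether it actually carries the discovery-relevant attributes, can be sketched as below. The particular attribute subset chosen here is an illustrative assumption, not the CF subset the authors recommend.

```python
# Sketch of a metadata-convention audit for one DAP data object.
# DISCOVERY_SUBSET is a hypothetical set of discovery attributes, chosen
# here for illustration only.

DISCOVERY_SUBSET = {"Conventions", "title", "institution", "history"}

def convention_report(attributes):
    """Return (claims_cf, missing) for one object's global attributes."""
    claims_cf = "CF" in str(attributes.get("Conventions", ""))
    missing = sorted(DISCOVERY_SUBSET - set(attributes))
    return claims_cf, missing

# An object that claims CF but lacks some discovery attributes, the
# mismatch pattern the survey found to be common:
claims, missing = convention_report(
    {"Conventions": "CF-1.0", "title": "SST fields", "source": "model"}
)
```

Run over many servers, counts of `claims` versus empty `missing` lists give exactly the "claim to adhere" versus "accurately adhere" statistics the abstract reports.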
Changes and Relationships of Climatic and Hydrological Droughts in the Jialing River Basin, China.
Zeng, Xiaofan; Zhao, Na; Sun, Huaiwei; Ye, Lei; Zhai, Jianqing
2015-01-01
The comprehensive assessment of climatic and hydrological droughts, in terms of their temporal and spatial evolution, is very important for water resources management and social development at the basin scale. To study the spatial and temporal changes of climatic and hydrological droughts and the relationships between them, the SPEI and SDI are adopted to assess the changes and correlations of climatic and hydrological droughts, with the Jialing River basin, China as the research area. The SPEI and SDI at different time scales are assessed both for the entire Jialing River basin and at the regional level of its three sub-basins. The results show that the SPEI and SDI are well suited to assessing the changes and relationships of climatic and hydrological droughts in large basins. Based on the assessment, climatic and hydrological droughts in the Jialing River basin have shown an increasing tendency over recent decades; the increasing trend of climatic droughts is significant or extremely significant in the western and northern basin, while hydrological droughts show a less significant increasing trend. Additionally, climatic and hydrological droughts tend to increase in the next few years. The results also show that, on short time scales, climatic droughts affect hydrological droughts with a lag of one to two months in the north-west of the basin and a lag of one month in the south-east. The assessment of climatic and hydrological droughts based on the SPEI and SDI could be very useful for water resources management and climate change adaptation at the large-basin scale.
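As a toy illustration of the lag analysis described above, the sketch below finds the lag (in months) at which a climatic drought index best explains a hydrological one. The index values are synthetic and the helper names are invented; real SPEI and SDI series would be derived from precipitation, evapotranspiration, and streamflow records:

```python
# Minimal sketch of lagged correlation between a climatic drought index
# (SPEI-like) and a hydrological drought index (SDI-like). Pure Python;
# the twelve-month series below are synthetic.

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def best_lag(climatic, hydrological, max_lag=3):
    """Correlate hydrological[t] with climatic[t - lag]; return (lag, r)."""
    scores = []
    for lag in range(max_lag + 1):
        c = climatic[: len(climatic) - lag]
        h = hydrological[lag:]
        scores.append((pearson(c, h), lag))
    r, lag = max(scores)
    return lag, r

# Synthetic example: the hydrological index echoes the climatic one with
# a one-month delay, so the best lag should come out as 1.
spei = [0.3, -0.5, 1.2, -1.8, 0.7, -0.2, 1.5, -1.1, 0.4, -0.9, 0.8, -1.4]
sdi = [0.0] + spei[:-1]
lag, r = best_lag(spei, sdi)
```

With gridded index fields, the same scan run per cell would reproduce the kind of regional lag map the abstract describes.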
Incorporating human-water dynamics in a hyper-resolution land surface model
NASA Astrophysics Data System (ADS)
Vergopolan, N.; Chaney, N.; Wanders, N.; Sheffield, J.; Wood, E. F.
2017-12-01
The increasing demand for water, energy, and food is leading to unsustainable groundwater and surface water exploitation. As a result, the human interactions with the environment, through alteration of land and water resources dynamics, need to be reflected in hydrologic and land surface models (LSMs). Advancements in representing human-water dynamics still leave challenges related to the lack of water use data, water allocation algorithms, and modeling scales. This leads to an over-simplistic representation of human water use in large-scale models; this in turn leads to an inability to capture extreme-event signatures and to provide reliable information at stakeholder-level spatial scales. The emergence of hyper-resolution models allows one to address these challenges by simulating the hydrological processes and interactions with the human impacts at field scales. We integrated human-water dynamics into HydroBlocks - a hyper-resolution, field-scale resolving LSM. HydroBlocks explicitly solves the field-scale spatial heterogeneity of land surface processes through interacting hydrologic response units (HRUs), and its HRU-based model parallelization allows computationally efficient long-term simulations as well as ensemble predictions. The implemented human-water dynamics include groundwater and surface water abstraction to meet agricultural, domestic and industrial water demands. Furthermore, a supply-demand water allocation scheme based on relative costs helps to determine sectoral water use requirements and tradeoffs. A set of HydroBlocks simulations over the Midwest United States (daily, at 30-m spatial resolution for 30 years) is used to quantify the irrigation impacts on water availability. The model captures large reductions in total soil moisture and water table levels, as well as spatiotemporal changes in evapotranspiration and runoff peaks, with their intensity related to the adopted water management strategy.
By incorporating human-water dynamics in a hyper-resolution LSM this work allows for progress on hydrological monitoring and predictions, as well as drought preparedness and water impact assessments at relevant decision-making scales.
Health promotion, education and community participation in the Americas. Reality or myth?
Rice, M
1988-06-01
The concepts and understanding of community promotion, education and participation for health have many interpretations and applications in the countries of the Americas. Even though many of the countries have written policies indicating that these are important strategies, very few have developed ways to make them effective and operational. In most cases they are not defined, nor have countries developed concrete objectives and goals. Although the Primary Health Care strategy has been adopted in principle by countries of the Americas, they are encountering serious difficulties in implementing the concept, particularly within large-scale national health programmes. In actuality, the community promotion, education and participation concepts in the health arena are vague and often misunderstood.
NASA Astrophysics Data System (ADS)
Wang, Kunpeng; Ji, Weidong; Zhang, Feifei; Yu, Wei; Zheng, Runqing
2018-02-01
This paper, based on the enclosure reconstruction project of the coal storage yard of Shengli Power Plant, which is affiliated with Sinopec Shengli Petroleum Administration, first analyzes the significance of current dust-suppression reconstruction of open coal yards, then summarizes the methods widely adopted for dust suppression in the large-scale open coal storage yards of current thermal power plants, together with their advantages and disadvantages, and finally focuses on this project, aiming to provide some reference and assistance for future enclosure reconstruction projects of open coal storage yards in thermal power plants.
[Rhabdomyosarcoma of the soft palate: a case report].
Arias Marzán, F; De Bonis Redondo, M; Redondo Ventura, F; Betancor Martínez, L; Sanginés Yzzo, M; Arias Marzán, J; De Bonis Braun, C; Zurita Expósito, V; Reig Ripoll, F; De Lucas Carmona, G
2006-01-01
Rhabdomyosarcomas (RMS) are infrequent tumors. They are described principally in childhood and are located in the head and neck in 35% of cases. Nasopharyngeal localization is relatively rare; in these cases the tongue, palate and oral mucosa are the preferred sites. Classically, patients had very low cure rates with surgery and radiotherapy. The introduction of systemic chemotherapy as a complementary treatment in the mid-1970s improved the survival rate substantially. In this article we describe the case of an adolescent patient who presented with an RMS of the soft palate, the diagnostic procedure, and the therapeutic decision adopted after a review of the recent literature on the subject.
Coletta, Alain; Molter, Colin; Duqué, Robin; Steenhoff, David; Taminau, Jonatan; de Schaetzen, Virginie; Meganck, Stijn; Lazar, Cosmin; Venet, David; Detours, Vincent; Nowé, Ann; Bersini, Hugues; Weiss Solís, David Y
2012-11-18
Genomics datasets are increasingly useful for gaining biomedical insights, with adoption in the clinic underway. However, multiple hurdles related to data management stand in the way of their efficient large-scale utilization. The solution proposed is a web-based data storage hub. Having clear focus, flexibility and adaptability, InSilico DB seamlessly connects genomics dataset repositories to state-of-the-art and free GUI and command-line data analysis tools. The InSilico DB platform is a powerful collaborative environment, with advanced capabilities for biocuration, dataset sharing, and dataset subsetting and combination. InSilico DB is available from https://insilicodb.org.
A Structured and Unstructured grid Relocatable ocean platform for Forecasting (SURF)
NASA Astrophysics Data System (ADS)
Trotta, Francesco; Fenu, Elisa; Pinardi, Nadia; Bruciaferri, Diego; Giacomelli, Luca; Federico, Ivan; Coppini, Giovanni
2016-11-01
We present a numerical platform named Structured and Unstructured grid Relocatable ocean platform for Forecasting (SURF). The platform is developed for short-term forecasts and is designed to be embedded in any region of the large-scale Mediterranean Forecasting System (MFS) via downscaling. We employ CTD data collected during a campaign around Elba Island to calibrate and validate SURF. The model requires an initial spin-up period of a few days in order to adapt the initial interpolated fields and the subsequent solutions to the higher-resolution nested grids adopted by SURF. Through a comparison with the CTD data, we quantify the improvement obtained by the SURF model over the coarser-resolution MFS model.
To manage inland fisheries is to manage at the social-ecological watershed scale.
Nguyen, Vivian M; Lynch, Abigail J; Young, Nathan; Cowx, Ian G; Beard, T Douglas; Taylor, William W; Cooke, Steven J
2016-10-01
Approaches to managing inland fisheries vary between systems and regions but are often based on large-scale marine fisheries principles and thus limited and outdated. Rarely do they adopt holistic approaches that consider the complex interplay among humans, fish, and the environment. We argue that there is an urgent need for a shift in inland fisheries management towards holistic and transdisciplinary approaches that embrace the principles of social-ecological systems at the watershed scale. The interconnectedness of inland fisheries with their associated watershed (biotic, abiotic, and human) makes them extremely complex and challenging to manage and protect. For this reason, the watershed is a logical management unit. To assist management at this scale, we propose a framework that integrates disparate concepts and management paradigms to facilitate inland fisheries management and sustainability. We contend that inland fisheries need to be managed as social-ecological watershed systems (SEWS). The framework supports watershed-scale and transboundary governance to manage inland fisheries, and transdisciplinary projects and teams to ensure relevant and applicable monitoring and research. We discuss concepts of social-ecological feedback and interactions of multiple stressors and factors within/between the social-ecological systems. Moreover, we emphasize that management, monitoring, and research on inland fisheries at the watershed scale are needed to ensure long-term sustainable and resilient fisheries. Copyright © 2016. Published by Elsevier Ltd.
Brooks, Ronald A; Allen, Vincent C; Regan, Rotrease; Mutchler, Matt G; Cervantes-Tadeo, Ramon; Lee, Sung-Jae
2018-03-01
In the United States, black men who have sex with men (MSM) are the group most affected by the HIV/AIDS epidemic. Pre-exposure prophylaxis (PrEP) is an important new HIV prevention strategy that may help reduce new HIV infections among black MSM. This analysis examined the association between HIV/AIDS conspiracy beliefs and intentions to adopt PrEP among 224 black MSM. The likelihood of adopting PrEP was assessed and more than half (60%) of the study population indicated a high intention to adopt PrEP. HIV/AIDS genocidal and treatment-related conspiracies were assessed using scales previously validated with black MSM. Almost two-thirds (63%) endorsed at least one of eight HIV/AIDS conspiracy beliefs presented. In multivariable analyses, black MSM who agreed with the genocidal or treatment-related conspiracy beliefs scales had a lower intention to adopt PrEP (Adjusted Odds Ratio [AOR] = 0.73, 95% CI = 0.54, 0.99 and AOR = 0.36, 95% CI = 0.23, 0.55, respectively). Our findings indicate that preexisting HIV/AIDS conspiracy beliefs may deter some black MSM from adopting PrEP. We suggest strategies PrEP implementers may want to employ to address the influence that HIV/AIDS conspiracy beliefs may have on the adoption of PrEP among black MSM, a population disproportionately affected by HIV/AIDS.
Sail or sink: novel behavioural adaptations on water in aerially dispersing species.
Hayashi, Morito; Bakkali, Mohammed; Hyde, Alexander; Goodacre, Sara L
2015-07-03
Long-distance dispersal events have the potential to shape species distributions and ecosystem diversity over large spatial scales, and to influence processes such as population persistence and the pace and scale of invasion. How such dispersal strategies have evolved and are maintained within species is, however, often unclear. We have studied long-distance dispersal in a range of pest-controlling terrestrial spiders that are important predators within agricultural ecosystems. These species persist in heterogeneous environments through their ability to re-colonise vacant habitat by repeated long-distance aerial dispersal ("ballooning") using spun silk lines. Individuals are strictly terrestrial, are not thought to tolerate landing on water, and have no control over where they land once airborne. Their tendency to spread via aerial dispersal has thus been thought to be limited by the costs of encountering water, which is a frequent hazard in the landscape. In our study we find that ballooning in a subset of individuals from two groups of widely-distributed and phylogenetically distinct terrestrial spiders (linyphiids and one tetragnathid) is associated with a hitherto undescribed ability of those same individuals to survive encounters with both fresh and marine water. Individuals that showed a high tendency to adopt 'ballooning' behaviour adopted elaborate postures to seemingly take advantage of the wind current whilst on the water surface. The ability of individuals capable of long-distance aerial dispersal to survive encounters with water allows them to disperse repeatedly, thereby increasing the pace and spatial scale over which they can spread and subsequently exert an influence on the ecosystems into which they migrate. The potential for genetic connectivity between populations, which can influence the rate of localized adaptation, thus exists over much larger geographic scales than previously thought. 
Newly available habitat may be particularly influenced given the degree of ecosystem disturbance that is known to follow new predator introductions.
ERIC Educational Resources Information Center
Yaikhong, Kriangkrai; Usaha, Siriluck
2012-01-01
The present study contributes to developing a Public Speaking Class Anxiety Scale (PSCAS) to measure anxiety in the EFL public speaking class in the Thai context. Items were adopted from previous scales: Foreign Language Classroom Anxiety Scale (FLCAS) by Horwitz et al. (1986); Personal Report of Communication Apprehension (PRCA-24) and Personal…
Siira, Virva; Wahlberg, Karl-Erik; Hakko, Helinä; Tienari, Pekka
2013-11-30
Stability has been considered an important aspect of vulnerability to schizophrenia. The temporal stability of the scales in the Minnesota Multiphasic Personality Inventory (MMPI) was examined, using adoptees from the Finnish Adoptive Family Study of Schizophrenia. Adoptees who were high-risk (HR) offspring of biological mothers having a schizophrenia spectrum disorder (n=28) and low-risk (LR) controls (n=46) were evaluated using 15 MMPI scales at the initial assessment (HR, mean age 24 years; LR, mean age 23 years) and at the follow-up assessment after a mean interval of 11 years. Stability of the MMPI scales was also assessed in the groups of adoptees, assigned according to the adoptive parents' (n=44) communication style using the Communication Deviance (CD) scale as an environmental factor. Initial Lie, Frequency, Correction, Psychopathic Deviate, Schizophrenia, Manifest Hostility, Hypomania, Phobias, Psychoticism, Religious Fundamentalism, Social Maladjustment, Paranoid Schizophrenia, Golden-Meehl Indicators, Schizophrenia Proneness and 8-6 scale scores significantly predicted the MMPI scores at the follow-up assessment, indicating stability in the characteristics of thinking, affective expression, social relatedness and volition. Low CD in the family had an effect on the stabilization of personality traits such as social withdrawal and restricted affectivity assessed by Correction and Hostility. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Documenting Instructional Practices in Large Introductory STEM Lecture Courses
NASA Astrophysics Data System (ADS)
Vu, Viet Quoc
STEM education reform in higher education is framed around the need to improve student learning outcomes, increase student retention, and increase the number of underrepresented minorities and female students in STEM fields, all of which would ultimately contribute to America's competitiveness and prosperity. To achieve these goals, education reformers call for an increase in the adoption of research-based "promising practices" in classrooms. Despite efforts to increase the adoption of more promising practices in classrooms, postsecondary instructors are still likely to lecture and use traditional teaching approaches. To shed light on this adoption dilemma, a mixed-methods study was conducted. First, instructional practices in large introductory STEM courses were identified, followed by an analysis of factors that inhibit or contribute to the use of promising practices. Data were obtained from classroom observations (N = 259) of large gateway courses across STEM departments and from instructor interviews (N = 67). Results show that instructors are already aware of promising practices and that change strategies could move from focusing on the development and dissemination of promising practices to focusing on improving adoption rates. Teaching-track instructors such as lecturers with potential for security of employment (LPSOE) and lecturers with security of employment (LSOE) have adopted promising practices more than other instructors. Interview data show that LPSOEs are also effective at disseminating promising practices to their peers, but opinion leaders (influential faculty in a department) are necessary to promote adoption of promising practices by higher ranking instructors. However, hiring more LPSOEs or opinion leaders will not be enough to shift instructional practices. 
Variations in the adoption of promising practices by instructors and across departments show that any reform strategy needs to be systematic and take into consideration how information is shared through communication channels, the adoption decision-making process by potential adopters, and the contextual barriers and drivers of adoption. Additionally, the strategy should be designed with multiple stages, with each stage given time for changes to have an effect. Taking a one-size fits all approach to STEM education reform will not work and may only perpetuate the cycle of non-adoption and continued use of teacher-centered instructional practices.
NASA Astrophysics Data System (ADS)
Benchicou, Soraya; Aichouni, Mohamed; Nehari, Driss
2010-06-01
Technology-mediated education or e-learning is growing globally both in scale and delivery capacity due to the large diffusion of the ubiquitous information and communication technologies (ICT) in general and the web technologies in particular. This statement has not yet been fully supported by research, especially in developing countries such as Algeria. The purpose of this paper was to identify directions for addressing the needs of academics in higher education institutions in Algeria in order to adopt the e-learning approach as a strategy to improve quality of education. The paper will report results of an empirical study that measures the readiness of the Algerian higher education institutions towards the implementation of ICT in the educational process and the attitudes of faculty members towards the application of the e-learning approach in engineering education. Three main objectives were targeted, namely: (a) to provide an initial evaluation of faculty members' attitudes and perceptions towards web-based education; (b) reporting on their perceived requirements for implementing e-learning in university courses; (c) providing an initial input for a collaborative process of developing an institutional strategy for e-learning. Statistical analysis of the survey results indicates that the Algerian higher education institution, which adopted the Licence - Master and Doctorate educational system, is facing a big challenge to take advantage of emerging technological innovations and the advent of e-learning to further develop its teaching programmes and to enhance the quality of education in engineering fields. The successful implementation of this modern approach is shown to depend largely on a set of critical success factors, including: (1) the extent to which the institution will adopt a formal and official e-learning strategy; (2) the extent to which faculty members will adhere to and adopt this strategy and develop ownership of the various measures in the context of their teaching and research responsibilities; and (3) the extent to which the university will offer adequate support in terms of training, software platform administration, online resource development, and impact monitoring and assessment.
NASA Astrophysics Data System (ADS)
Niwa, Masaki; Takashina, Shoichi; Mori, Yojiro; Hasegawa, Hiroshi; Sato, Ken-ichi; Watanabe, Toshio
2015-01-01
With the continuous increase in Internet traffic, reconfigurable optical add-drop multiplexers (ROADMs) have been widely adopted in the core and metro core networks. Current ROADMs, however, allow only static operation. To realize future dynamic optical-network services, and to minimize any human intervention in network operation, the optical signal add/drop part should have colorless/directionless/contentionless (C/D/C) capabilities. This is possible with matrix switches or a combination of splitter-switches and optical tunable filters. The scale of the matrix switch increases with the square of the number of supported channels, and hence, the matrix-switch-based architecture is not suitable for creating future large-scale ROADMs. In contrast, the numbers of splitter ports, switches, and tunable filters increase linearly with the number of supported channels, and hence the tunable-filter-based architecture will support all future traffic. So far, we have succeeded in fabricating a compact tunable filter that consists of multi-stage cyclic arrayed-waveguide gratings (AWGs) and switches by using planar-lightwave-circuit (PLC) technologies. However, this multistage configuration suffers from large insertion loss and filter narrowing. Moreover, power-consuming temperature control is necessary since it is difficult to make cyclic AWGs athermal. We propose here novel tunable-filter architecture that sandwiches a single-stage non-cyclic athermal AWG having flatter-topped passbands between small-scale switches. With this configuration, the optical tunable filter attains low insertion loss, large passband bandwidths, low power consumption, compactness, and high cost-effectiveness. A prototype is monolithically fabricated with PLC technologies and its excellent performance is experimentally confirmed utilizing 80-channel 30-GBaud dual-polarization quadrature phase-shift-keying (QPSK) signals.
Structural diversity in social contagion
Ugander, Johan; Backstrom, Lars; Marlow, Cameron; Kleinberg, Jon
2012-01-01
The concept of contagion has steadily expanded from its original grounding in epidemic disease to describe a vast array of processes that spread across networks, notably social phenomena such as fads, political opinions, the adoption of new technologies, and financial decisions. Traditional models of social contagion have been based on physical analogies with biological contagion, in which the probability that an individual is affected by the contagion grows monotonically with the size of his or her “contact neighborhood”—the number of affected individuals with whom he or she is in contact. Whereas this contact neighborhood hypothesis has formed the underpinning of essentially all current models, it has been challenging to evaluate it due to the difficulty in obtaining detailed data on individual network neighborhoods during the course of a large-scale contagion process. Here we study this question by analyzing the growth of Facebook, a rare example of a social process with genuinely global adoption. We find that the probability of contagion is tightly controlled by the number of connected components in an individual's contact neighborhood, rather than by the actual size of the neighborhood. Surprisingly, once this “structural diversity” is controlled for, the size of the contact neighborhood is in fact generally a negative predictor of contagion. More broadly, our analysis shows how data at the size and resolution of the Facebook network make possible the identification of subtle structural signals that go undetected at smaller scales yet hold pivotal predictive roles for the outcomes of social processes. PMID:22474360
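The "structural diversity" predictor described above — the number of connected components among an individual's contacts, rather than the raw contact count — can be sketched in a few lines. The graph helper and the toy friendship data are invented for illustration; the study itself measured this on Facebook's full network:

```python
# Illustrative sketch: count connected components of the subgraph induced
# on a user's contact neighborhood (BFS over an adjacency map built from
# an edge list). Names and edges below are invented toy data.
from collections import deque

def neighborhood_components(contacts, edges):
    """Connected components of the subgraph induced on `contacts`."""
    contacts = set(contacts)
    adj = {c: set() for c in contacts}
    for u, v in edges:
        if u in contacts and v in contacts:  # keep only in-neighborhood edges
            adj[u].add(v)
            adj[v].add(u)
    seen, components = set(), 0
    for start in contacts:
        if start in seen:
            continue
        components += 1
        queue = deque([start])
        seen.add(start)
        while queue:
            node = queue.popleft()
            for nxt in adj[node] - seen:
                seen.add(nxt)
                queue.append(nxt)
    return components

# Four contacts: two coworkers who know each other, plus two mutually
# unconnected acquaintances -> three distinct social contexts.
diversity = neighborhood_components(
    ["ann", "bob", "cat", "dan"],
    [("ann", "bob"), ("bob", "zoe")],  # zoe is outside the neighborhood
)
```

Under the paper's finding, a recruiter with three separate contexts like this would be a stronger contagion candidate than one with many contacts drawn from a single clique.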
Dalen, Monica; Theie, Steinar
2012-01-01
Internationally adopted children are often delayed in their development and demonstrate more behaviour problems than nonadopted children due to adverse preadoption circumstances. This is especially true for children adopted from Eastern European countries. Few studies have focused on children adopted from non-European countries. This paper presents results from an ongoing longitudinal study of 119 internationally adopted children from non-European countries during their first two years in Norway. Several scales measuring different aspects of the children's development are included in the study: communication and gross motor development, temperamental characteristics, and behaviour problems. The results show that internationally adopted children are delayed in their general development when they first arrive in their adoptive families. After two years the children have made significant progress in development. However, they still lag behind in communication and motor skills compared to non-adopted children. The temperamental characteristics seem very stable from time of adoption until two years after adoption. The children demonstrate a low frequency of behaviour problems. However, the behaviour problems have changed during the two years. At time of adoption they show more nonphysically challenging behaviour while after two years their physically challenging behaviour has increased.
Hogg, Oliver T; Huvenne, Veerle A I; Griffiths, Huw J; Linse, Katrin
2018-06-01
In recent years very large marine protected areas (VLMPAs) have become the dominant form of spatial protection in the marine environment. Whilst seen as a holistic and geopolitically achievable approach to conservation, there is currently a mismatch between the size of VLMPAs, and the data available to underpin their establishment and inform on their management. Habitat mapping has increasingly been adopted as a means of addressing paucity in biological data, through use of environmental proxies to estimate species and community distribution. Small-scale studies have demonstrated environmental-biological links in marine systems. Such links, however, are rarely demonstrated across larger spatial scales in the benthic environment. As such, the utility of habitat mapping as an effective approach to the ecosystem-based management of VLMPAs remains, thus far, largely undetermined. The aim of this study was to assess the ecological relevance of broadscale landscape mapping. Specifically we test the relationship between broad-scale marine landscapes and the structure of their benthic faunal communities. We focussed our work at the sub-Antarctic island of South Georgia, site of one of the largest MPAs in the world. We demonstrate a statistically significant relationship between environmentally derived landscape mapping clusters, and the composition of presence-only species data from the region. To demonstrate this relationship required specific re-sampling of historical species occurrence data to balance biological rarity, biological cosmopolitism, range-restricted sampling and fine-scale heterogeneity between sampling stations. The relationship reveals a distinct biological signature in the faunal composition of individual landscapes, attributing ecological relevance to South Georgia's environmentally derived marine landscape map. 
We argue therefore, that landscape mapping represents an effective framework for ensuring representative protection of habitats in management plans. Such scientific underpinning of marine spatial planning is critical in balancing the needs of multiple stakeholders whilst maximising conservation payoff. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
Visualization of the Eastern Renewable Generation Integration Study: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gruchalla, Kenny; Novacheck, Joshua; Bloom, Aaron
The Eastern Renewable Generation Integration Study (ERGIS) explores the operational impacts of the widespread adoption of wind and solar photovoltaic (PV) resources in the U.S. Eastern Interconnection and Quebec Interconnection (collectively, EI). In order to understand some of the economic and reliability challenges of managing hundreds of gigawatts of wind and PV generation, we developed state-of-the-art tools, data, and models for simulating power system operations using hourly unit commitment and 5-minute economic dispatch over an entire year. Using NREL's high-performance computing capabilities and new methodologies to model operations, we found that the EI, as simulated with evolutionary change in 2026, could balance the variability and uncertainty of wind and PV at a 5-minute level under a variety of conditions. A large-scale display and a combination of multiple coordinated views and small multiples were used to visually analyze the four large, highly multivariate scenarios with high spatial and temporal resolutions.
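To make "economic dispatch" concrete, here is a deliberately minimal merit-order sketch: meet load with the cheapest available generation first. The fleet, costs, and function are invented for illustration; the production-cost models used in studies like ERGIS additionally handle transmission networks, ramp limits, reserves, and the unit-commitment decisions themselves:

```python
# Toy merit-order economic dispatch. Generators are (name, capacity in MW,
# marginal cost in $/MWh); all numbers are invented for the example.

def economic_dispatch(generators, load_mw):
    """Dispatch cheapest generators first; return {name: MW dispatched}."""
    dispatch = {name: 0.0 for name, _, _ in generators}
    remaining = load_mw
    # Sort by marginal cost, cheapest first (the "merit order").
    for name, capacity_mw, _cost in sorted(generators, key=lambda g: g[2]):
        take = min(capacity_mw, remaining)
        dispatch[name] = take
        remaining -= take
        if remaining <= 0:
            break
    if remaining > 0:
        raise ValueError(f"load exceeds fleet capacity by {remaining} MW")
    return dispatch

# Zero-marginal-cost wind is taken first, then coal; the peaker stays off.
fleet = [("wind", 300, 0.0), ("coal", 500, 25.0), ("gas_ct", 200, 60.0)]
plan = economic_dispatch(fleet, load_mw=650)
```

Repeating such a solve every 5 minutes over a year, for a fleet of thousands of units, is what makes the study's high-performance-computing approach necessary.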
Characteristic mega-basin water storage behavior using GRACE.
Reager, J T; Famiglietti, James S
2013-06-01
[1] A long-standing challenge for hydrologists has been a lack of observational data on global-scale basin hydrological behavior. With observations from NASA's Gravity Recovery and Climate Experiment (GRACE) mission, hydrologists are now able to study terrestrial water storage for large river basins (>200,000 km²), with monthly time resolution. Here we provide results of a time series model of basin-averaged GRACE terrestrial water storage anomaly and Global Precipitation Climatology Project precipitation for the world's largest basins. We address the short (10 year) length of the GRACE record by adopting a parametric spectral method to calculate frequency-domain transfer functions of storage response to precipitation forcing and then generalize these transfer functions based on large-scale basin characteristics, such as percent forest cover and basin temperature. Among the parameters tested, results show that temperature, soil water-holding capacity, and percent forest cover are important controls on relative storage variability, while basin area and mean terrain slope are less important. The derived empirical relationships were accurate (0.54 ≤ Ef ≤ 0.84) in modeling global-scale water storage anomaly time series for the study basins using only precipitation, average basin temperature, and two land-surface variables, offering the potential for synthesis of basin storage time series beyond the GRACE observational period. Such an approach could be applied toward gap filling between current and future GRACE missions and for predicting basin storage given predictions of future precipitation.
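The core of the method above is a frequency-domain transfer function between precipitation forcing and storage response. As a rough illustration (not the authors' parametric spectral method), the sketch below estimates such a transfer function for a hypothetical linear-reservoir basin with a Welch/CSD estimator; the memory parameter and sample counts are invented for the example.

```python
import numpy as np
from scipy.signal import csd, welch

rng = np.random.default_rng(0)

# Hypothetical monthly precipitation forcing: white noise.
n = 4096
p = rng.standard_normal(n)

# Hypothetical linear-reservoir storage: S[t] = a*S[t-1] + P[t],
# an AR(1) stand-in for a basin with memory parameter a.
a = 0.8
s = np.zeros(n)
s[0] = p[0]
for t in range(1, n):
    s[t] = a * s[t - 1] + p[t]

# H1 transfer-function estimate: H(f) = P_ps(f) / P_pp(f).
nperseg = 256
f, p_ps = csd(p, s, nperseg=nperseg)
_, p_pp = welch(p, nperseg=nperseg)
H = np.abs(p_ps / p_pp)

# Theoretical gain of the AR(1) reservoir at the same frequencies.
H_true = 1.0 / np.abs(1.0 - a * np.exp(-2j * np.pi * f))
```

At low frequencies the estimated gain approaches 1/(1-a), i.e. slow forcing accumulates in storage, and it falls off at high frequencies; it is this kind of storage-response shape that the study generalizes across basins using land-surface characteristics.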
An Overlooked Source of Auroral Arc Field-Aligned Current
NASA Astrophysics Data System (ADS)
Knudsen, D. J.
2017-12-01
The search for the elusive generator of quiet auroral arcs often focuses on magnetospheric pressure gradients, based on the static terms in the so-called Vasyliunas equation [Vasyliunas, in "Magnetospheric Currents", Geophysical Monograph 28, 1984]. However, magnetospheric pressure gradient scale sizes are much larger than the width of individual auroral arcs. This discrepancy was noted by Atkinson [JGR, 75, p4746, 1970], who proposed that auroral arcs are fed instead by steady-state polarization currents, in which large-scale convection across quasi-static electric field structures leads to an apparent time dependence in the frame co-moving with the plasma, and therefore to the generation of ion polarization currents. This mechanism has been adopted by a series of authors over several decades, relating to studies of the ionospheric feedback instability, or IFI. However, the steady-state polarization current mechanism does not require the IFI, nor even the ionosphere. Specifically, any quasi-static electric field structure that is stationary relative to large-scale plasma convection is subject to the generation of this current. This talk demonstrates that assumed convection speeds of the order of 100 m/s across typical arc field structures can lead to the generation of FAC magnitudes of several μA/m², typical of values observed at the ionospheric footpoint of auroral arcs. This current can be viewed as originating within the M-I coupling medium, along the entire field line connecting an auroral arc to its root in the magnetosphere.
Testing the limits: cautions and concerns regarding the new Wechsler IQ and Memory scales.
Loring, David W; Bauer, Russell M
2010-02-23
The Wechsler Adult Intelligence Scale (WAIS) and the Wechsler Memory Scale (WMS) are 2 of the most common psychological tests used in clinical care and research in neurology. Newly revised versions of both instruments (WAIS-IV and WMS-IV) have recently been published and are increasingly being adopted by the neuropsychology community. There have been significant changes in the structure and content of both scales, leading to the potential for inaccurate patient classification if algorithms developed using their predecessors are employed. There are presently insufficient clinical data in neurologic populations to ensure their appropriate application to neuropsychological evaluations. We provide a perspective on these important new neuropsychological instruments, comment on the pressures to adopt these tests in the absence of an appropriate evidence base supporting their incremental validity, and describe the potential negative impact on both patient care and continuing research applications.
NASA Astrophysics Data System (ADS)
Ricco, George Dante
In higher education and in engineering education in particular, changing majors is generally considered a negative event - or at least an event with negative consequences. An emergent field of study within engineering education revolves around understanding the factors and processes driving student changes of major. Of key importance to further the field of change of major research is a grasp of large-scale phenomena occurring throughout multiple systems, knowledge of previous attempts at describing such issues, and the adoption of metrics to probe them effectively. The problem posed is exacerbated by the drive in higher education institutions and among state legislatures to understand and reduce time-to-degree and student attrition. With these factors in mind, insights into large-scale processes that affect student progression are essential to evaluating the success or failure of programs. The goals of this work include describing the current educational research on switchers, identifying core concepts and stumbling blocks in my treatment of switchers, and using the Multiple Institutional Database for Investigating Engineering Longitudinal Development (MIDFIELD) to explore how those who change majors perform as a function of large-scale academic pathways within and without the engineering context. To accomplish these goals, it was first necessary to delve into a recent history of the treatment of switchers within the literature and categorize their approach. While three categories of papers exist in the literature concerning change of major, all three may or may not be applicable to a given database of students or even a single institution. Furthermore, while the term has been coined in the literature, no portable metric for discussing large-scale navigational flexibility exists in engineering education. What such a metric would look like will be discussed, as well as the delimitations involved.
The results and subsequent discussion will include a description of changes of major, how they may or may not have a deleterious effect on one's academic pathway, the special context of changes of major in the pathways of students within first-year engineering programs students labeled as undecided, an exploration of curricular flexibility by the construction of a novel metric, and proposed future work.
LASSIE: simulating large-scale models of biochemical systems on GPUs.
Tangherloni, Andrea; Nobile, Marco S; Besozzi, Daniela; Mauri, Giancarlo; Cazzaniga, Paolo
2017-05-10
Mathematical modeling and in silico analysis are widely acknowledged as complementary tools to biological laboratory methods, to achieve a thorough understanding of emergent behaviors of cellular processes in both physiological and perturbed conditions. However, the simulation of large-scale models, consisting of hundreds or thousands of reactions and molecular species, can rapidly overtake the capabilities of Central Processing Units (CPUs). The purpose of this work is to exploit alternative high-performance computing solutions, such as Graphics Processing Units (GPUs), to allow the investigation of these models at reduced computational costs. LASSIE is a "black-box" GPU-accelerated deterministic simulator, specifically designed for large-scale models and not requiring any expertise in mathematical modeling, simulation algorithms or GPU programming. Given a reaction-based model of a cellular process, LASSIE automatically generates the corresponding system of Ordinary Differential Equations (ODEs), assuming mass-action kinetics. The numerical solution of the ODEs is obtained by automatically switching between the Runge-Kutta-Fehlberg method in the absence of stiffness and the first-order Backward Differentiation Formulae in the presence of stiffness. The computational performance of LASSIE is assessed using a set of randomly generated synthetic reaction-based models of increasing size, ranging from 64 to 8192 reactions and species, and compared to a CPU implementation of the LSODA numerical integration algorithm. LASSIE adopts a novel fine-grained parallelization strategy to distribute across the GPU cores all the calculations required to solve the system of ODEs. By virtue of this implementation, LASSIE achieves up to 92× speed-up with respect to LSODA, reducing the running time from approximately 1 month down to 8 h to simulate models consisting of, for instance, four thousand reactions and species.
Notably, thanks to its smaller memory footprint, LASSIE is able to perform fast simulations of even larger models, for which the tested CPU implementation of LSODA failed to reach termination. LASSIE is therefore expected to make an important breakthrough in Systems Biology applications, enabling faster and more in-depth computational analyses of large-scale models of complex biological systems.
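LASSIE's pipeline, generating mass-action ODEs mechanically from a reaction list and solving them with a stiffness-aware integrator, can be illustrated on CPU with SciPy. The toy reversible reaction below is invented for the sketch, and LSODA stands in for LASSIE's RKF/BDF switching since it likewise swaps between nonstiff and stiff methods.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy reversible system (invented): A + B -> C (k=0.5), C -> A + B (k=0.2).
species = ["A", "B", "C"]
idx = {name: i for i, name in enumerate(species)}
reactions = [
    (("A", "B"), ("C",), 0.5),
    (("C",), ("A", "B"), 0.2),
]

def rhs(_, y):
    """Mass-action ODEs generated mechanically from the reaction list."""
    dy = np.zeros_like(y)
    for reactants, products, k in reactions:
        rate = k * np.prod([y[idx[r]] for r in reactants])
        for r in reactants:
            dy[idx[r]] -= rate
        for prod in products:
            dy[idx[prod]] += rate
    return dy

y0 = np.array([1.0, 1.0, 0.0])  # initial concentrations of A, B, C
# LSODA switches automatically between nonstiff (Adams) and stiff (BDF)
# methods, analogous to the switching strategy described above.
sol = solve_ivp(rhs, (0.0, 50.0), y0, method="LSODA", rtol=1e-8, atol=1e-10)
A, B, C = sol.y[:, -1]
```

At steady state the forward and backward fluxes balance (k1·A·B ≈ k2·C) and the conserved totals A+C and B+C are preserved, which is a quick sanity check on the generated ODEs.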
Xu, Biao; Ames Lab., Ames, IA; Feng, Tianli L.; ...
2017-01-12
In order to enhance the performance of thermoelectric materials and enable access to their widespread applications, it is beneficial yet challenging to synthesize hollow nanostructures in large quantities, with high porosity, low thermal conductivity (κ) and an excellent figure of merit (zT). We report a scalable (ca. 11.0 g per batch) and low-temperature colloidal processing route for Bi2Te2.5Se0.5 hollow nanostructures. They are sintered into porous, bulk nanocomposites (ϕ10 mm × h10 mm) with low κ (0.48 W m⁻¹ K⁻¹) and the highest zT (1.18) among state-of-the-art Bi2Te3-xSex materials. Additional benefits of the unprecedented low relative density (68-77%) are the large reduction in demand for raw materials and the improved portability. This method can be adopted to fabricate other porous phase-transition and thermoelectric chalcogenide materials and will pave the way for the implementation of hollow nanostructures in other fields.
NASA Astrophysics Data System (ADS)
Takahata, Kazuya; Moriuchi, Sadatomo; Ooba, Kouki; Takami, Shigeyuki; Iwamoto, Akifumi; Mito, Toshiyuki; Imagawa, Shinsaku
2018-04-01
The Large Helical Device (LHD) superconducting magnet system consists of two pairs of helical coils and three pairs of poloidal coils. The poloidal coils use cable-in-conduit (CIC) conductors, which have now been adopted in many fusion devices, with forced cooling by supercritical helium. The poloidal coils were first energized with the helical coils on March 27, 1998. Since that time, the coils have experienced 54,600 h of steady cooling, 10,600 h of excitation operation, and nineteen thermal cycles over twenty years. During this period, no superconducting-to-normal transition of the conductors has been observed. The stable operation of the poloidal coils demonstrates that a CIC conductor is suited to large-scale superconducting magnets. The AC loss has remained constant, even though a slight decrease was observed in the early phase of operation. The hydraulic characteristics have been maintained without obstruction over the entire period of steady cooling. The experience gained from twenty years of operation has also provided lessons regarding malfunctions of peripheral equipment.
Large-Eddy-Simulation of a flow over a submerged rigid canopy
NASA Astrophysics Data System (ADS)
Monti, Alessandro; Omidyeganeh, Mohammad; Pinelli, Alfredo
2017-11-01
We have performed a wall-resolved Large-Eddy Simulation of flow over a shallow submerged rigid canopy (H/h = 4, where H and h are the open-channel and canopy heights, respectively) in a transitional/dense regime (Nepf ARFM 44, 2011), at low Reynolds number (Reb = Ubulk H/ν = 6000). An immersed boundary method (Favier et al. JCP 261, 2013) has been adopted to represent the filamentous rigid elements of the canopy. The presence of the permeable, porous canopy induces a typical inflection point in the mean velocity profile, revealing two separate, developed layers: an outer boundary layer and a uniform in-canopy flow. The aim of the work is to explore and unravel the mechanisms of the interaction between the fluid flow and the rigid canopy by identifying the physical parameters that govern the mixing mechanisms within the different flow layers and by exploring the impact of sweep/ejection events at the canopy edge. The results show that the flow is characterised by large-scale streamwise and spanwise vortices and by regions of different dynamics that also affect the filamentous layer, and hence the mixing mechanisms.
Heritage Adoption Lessons Learned: Cover Deployment and Latch Mechanism
NASA Technical Reports Server (NTRS)
Wincentsen, James
2006-01-01
Within JPL, there is a technology thrust need to develop a larger Cover Deployment and Latch Mechanism (CDLM) for future missions. The approach taken was to adopt and scale the CDLM design used on the Galaxy Evolution Explorer (GALEX) project. The three separate mechanisms that comprise the CDLM are discussed in this paper, with a focus on heritage adoption lessons learned and specific examples. These lessons learned will be valuable to any project considering the use of heritage designs.
Probing Massive Black Hole Populations and Their Environments with LISA
NASA Astrophysics Data System (ADS)
Katz, Michael; Larson, Shane
2018-01-01
With the adoption of the LISA Mission Proposal by the European Space Agency in response to its call for L3 mission concepts, gravitational wave measurements from space are on the horizon. With data from the Illustris large-scale cosmological simulation, we provide analysis of LISA detection rates accompanied by characterization of the merging Massive Black Holes (MBHs) and their host galaxies. MBHs of total mass ~10^6-10^9 M⊙ are the main focus of this study. Using a precise treatment of the dynamical friction evolutionary process prior to gravitational wave emission, we evolve MBH simulation particle mergers from ~kpc scales until coalescence to achieve a merger distribution. Using the statistical basis of the Illustris output, we Monte Carlo synthesize many realizations of the merging massive black hole population across space and time. We use those realizations to build mock LISA detection catalogs to understand the impact of LISA mission configurations on our ability to probe massive black hole merger populations and their environments throughout the visible Universe.
Structural Element Testing in Support of the Design of the NASA Composite Crew Module
NASA Technical Reports Server (NTRS)
Kellas, Sotiris; Jackson, Wade C.; Thesken, John C.; Schleicher, Eric; Wagner, Perry; Kirsch, Michael T.
2012-01-01
In January 2007, the NASA Administrator and Associate Administrator for the Exploration Systems Mission Directorate chartered the NASA Engineering and Safety Center (NESC) to design, build, and test a full-scale Composite Crew Module (CCM). For the design and manufacturing of the CCM, the team adopted the building block approach, in which design and manufacturing risks were mitigated through manufacturing trials and structural testing at various levels of complexity. Following NASA's Structural Design Verification Requirements, a further objective was the verification of design analysis methods and the provision of design data for critical structural features. Test articles increasing in complexity, from basic material characterization coupons through structural feature elements and large structural components to full-scale structures, were evaluated. This paper discusses only four element tests, three of which include joints and one of which includes a tapering honeycomb core detail. For each test series, specimen details, instrumentation, test results, a brief analysis description, test-analysis correlation, and conclusions are included.
A Voyage through the Heliosphere (Invited)
NASA Astrophysics Data System (ADS)
Burlaga, L. F.
2009-12-01
Parker adopted the word “Heliosphere” to denote “the region of interstellar space swept out by the solar wind.” His book “Interplanetary Dynamical Processes” (1963) provided “a comprehensive self-consistent dynamical picture of interplanetary activity” on spatial scales from the Larmor radius to the outermost limits of the heliosphere and over a broad range of temporal scales. The spacecraft Voyagers 1 and 2 have taken us on a journey through much of the heliosphere: from Earth, past the termination shock near 90 AU, and into the inner heliosheath. This talk will use magnetic field observations from V1 and V2 to illustrate how Parker’s dynamical picture has been largely confirmed by observations out to ~100 AU. It will also discuss some “complicating aspects of the dynamics…which will turn up in future observations…” that Parker envisaged. With continued funding, the Voyager spacecraft will allow us to explore the heliosheath, cross the boundary of the heliosphere, and sample the local interstellar medium, guided by still untested predictions of Parker.
A Component-Based Extension Framework for Large-Scale Parallel Simulations in NEURON
King, James G.; Hines, Michael; Hill, Sean; Goodman, Philip H.; Markram, Henry; Schürmann, Felix
2008-01-01
As neuronal simulations approach larger scales with increasing levels of detail, the neurosimulator software represents only a part of a chain of tools ranging from setup, simulation, and interaction with virtual environments to analysis and visualization. Previously published approaches to abstracting simulator engines have not received widespread acceptance, which in part may be due to the fact that they tried to address the challenge of solving the model specification problem. Here, we present an approach that uses a neurosimulator, in this case NEURON, to describe and instantiate the network model in the simulator's native model language, but then replaces the main integration loop with its own. Existing parallel network models are easily adopted to run in the presented framework. The presented approach is thus an extension to NEURON but uses a component-based architecture to allow for replaceable spike exchange components and pluggable components for monitoring, analysis, or control that can run in this framework alongside the simulation. PMID:19430597
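The architecture described, a driver that owns the main integration loop and calls replaceable components at each exchange interval, can be sketched in a few lines. All class and method names below are illustrative, not the actual NEURON or framework API, and the one-line "integrator" is a stand-in for the real solver.

```python
# Minimal sketch of a component-based driver loop (names are illustrative).
class Component:
    """Interface: the framework calls step() once per exchange interval."""
    def step(self, t, state):
        pass

class SpikeExchange(Component):
    """Replaceable stand-in for a spike-exchange strategy."""
    def __init__(self):
        self.delivered = []
    def step(self, t, state):
        # "exchange" any spikes produced during the last interval
        self.delivered.extend((t, cell) for cell in state.pop("spikes", []))

class Monitor(Component):
    """Pluggable analysis component running alongside the simulation."""
    def __init__(self):
        self.trace = []
    def step(self, t, state):
        self.trace.append((t, state["v"]))

def run(components, dt=0.1, t_stop=1.0):
    """The framework's own main loop, replacing the simulator's."""
    state = {"v": 0.0}
    t = 0.0
    while t < t_stop - 1e-12:
        t += dt
        state["v"] += dt * (1.0 - state["v"])  # one-line stand-in integrator
        if state["v"] > 0.5 and "fired" not in state:
            state["fired"] = True
            state["spikes"] = ["cell0"]
        for comp in components:
            comp.step(t, state)
    return state

exchange, monitor = SpikeExchange(), Monitor()
run([exchange, monitor])
```

Because the exchange strategy and the monitors share one narrow interface, either can be swapped without touching the driver, which is the design property the framework exploits.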
Morphological filtering and multiresolution fusion for mammographic microcalcification detection
NASA Astrophysics Data System (ADS)
Chen, Lulin; Chen, Chang W.; Parker, Kevin J.
1997-04-01
Mammographic images are often of relatively low contrast and poor sharpness, with a non-stationary background or clutter, and are usually corrupted by noise. In this paper, we propose a new method for microcalcification detection using gray-scale morphological filtering followed by multiresolution fusion, and present a unified general filtering form, called the local operating transformation, for whitening filtering and adaptive thresholding. The gray-scale morphological filters are used to remove all large areas that are considered non-stationary background or clutter variations, i.e., to prewhiten the images. The multiresolution fusion decision is based on matched filter theory. In addition to the normal matched filter, the Laplacian matched filter, which is directly related through the wavelet transform to multiresolution analysis, is exploited for microcalcification feature detection. At the multiresolution fusion stage, region growing techniques are used at each resolution level. The parent-child relations between resolution levels are adopted to make the final detection decision. An FROC curve is computed from tests on the Nijmegen database.
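The prewhitening step, removing large-area background with gray-scale morphology so small bright structures stand out, can be illustrated with a white top-hat transform (image minus its morphological opening). The synthetic patch, window size, and intensity values below are invented for the sketch and are not the paper's actual filter parameters.

```python
import numpy as np
from scipy.ndimage import white_tophat

# Synthetic patch (invented): a smooth non-stationary background ramp
# plus a small bright 2x2 "microcalcification" at (30, 30).
img = np.add.outer(np.linspace(0.0, 10.0, 64), np.linspace(0.0, 10.0, 64))
img[30:32, 30:32] += 5.0

# White top-hat = image minus its gray-scale morphological opening:
# structures smaller than the 9x9 window survive, the slowly varying
# background does not, which is exactly the prewhitening effect.
flat = white_tophat(img, size=(9, 9))
peak = np.unravel_index(np.argmax(flat), flat.shape)
```

After the transform the background is flattened to near zero and the bright spot dominates, so a simple threshold or matched filter can operate on a roughly stationary residual.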
Kenny, Joseph P.; Janssen, Curtis L.; Gordon, Mark S.; ...
2008-01-01
Cutting-edge scientific computing software is complex, increasingly involving the coupling of multiple packages to combine advanced algorithms or simulations at multiple physical scales. Component-based software engineering (CBSE) has been advanced as a technique for managing this complexity, and complex component applications have been created in the quantum chemistry domain, as well as in several other simulation areas, using the component model advocated by the Common Component Architecture (CCA) Forum. While programming models do indeed enable sound software engineering practices, the selection of a programming model is just one building block in a comprehensive approach to large-scale collaborative development, which must also address interface and data standardization, and language and package interoperability. We provide an overview of the development approach utilized within the Quantum Chemistry Science Application Partnership, identifying design challenges, describing the techniques which we have adopted to address these challenges, and highlighting the advantages which the CCA approach offers for collaborative development.
Vertical Silicon Nanowire Field Effect Transistors with Nanoscale Gate-All-Around
NASA Astrophysics Data System (ADS)
Guerfi, Youssouf; Larrieu, Guilhem
2016-04-01
Nanowires are considered building blocks for the ultimate scaling of MOS transistors, capable of pushing devices to the most extreme boundaries of miniaturization thanks to their physical and geometrical properties. In particular, nanowires' suitability for forming a gate-all-around (GAA) configuration confers on the device optimum electrostatic control of the gate over the conduction channel, and hence better immunity against short channel effects (SCE). In this letter, a large-scale process for GAA vertical silicon nanowire (VNW) MOSFETs is presented. A top-down approach is adopted for the realization of VNWs with optimum reproducibility, followed by thin-layer engineering at the nanoscale. Good overall electrical performance was obtained, with excellent electrostatic behavior (a subthreshold slope (SS) of 95 mV/dec and a drain-induced barrier lowering (DIBL) of 25 mV/V) for a 15-nm gate length. Finally, a first demonstration of dual integration of n-type and p-type VNW transistors for the realization of a CMOS inverter is proposed.
NASA Astrophysics Data System (ADS)
García-Ruiz, José M.; Lana-Renault, Noemí; Beguería, Santiago; Lasanta, Teodoro; Regüés, David; Nadal-Romero, Estela; Serrano-Muela, Pilar; López-Moreno, Juan I.; Alvera, Bernardo; Martí-Bono, Carlos; Alatorre, Luis C.
2010-08-01
The hydrological and geomorphic effects of land use/land cover changes, particularly those associated with vegetation regrowth after farmland abandonment were investigated in the Central Spanish Pyrenees. The main focus was to assess the interactions among slope, catchment, basin, and fluvial channel processes over a range of spatial scales. In recent centuries most Mediterranean mountain areas have been subjected to significant human pressure through deforestation, cultivation of steep slopes, fires, and overgrazing. Depopulation commencing at the beginning of the 20th century, and particularly since the 1960s, has resulted in farmland abandonment and a reduction in livestock numbers, and this has led to an expansion of shrubs and forests. Studies in the Central Spanish Pyrenees, based on experimental plots and catchments, in large basins and fluvial channels, have confirmed that these land use changes have had hydrological and geomorphic consequences regardless of the spatial scale considered, and that processes occurring at any particular scale can be explained by such processes acting on other scales. Studies using experimental plots have demonstrated that during the period of greatest human pressure (mainly the 18th and 19th centuries), cultivation of steep slopes caused high runoff rates and extreme soil loss. Large parts of the small catchments behaved as runoff and sediment source areas, whereas the fluvial channels of large basins showed signs of high torrentiality (braided morphology, bare sedimentary bars, instability, and prevalence of bedload transport). Depopulation has concentrated most human pressure on the valley bottoms and specific locations such as resorts, whereas the remainder of the area has been affected by an almost generalized abandonment. Subsequent plant recolonization has resulted in a reduction of overland flow and declining soil erosion. 
At a catchment scale this has caused a reduction in sediment sources, and channel incision in the secondary streams. At the regional scale, the most important consequences include a reduction in the frequency of floods, reduced sediment yields, increasing stabilization of fluvial channels (colonization of sedimentary bars by riparian vegetation and a reduction in the braiding index), and stabilization of alluvial fans. These results demonstrate the complexity and multiscalar nature of the interactions among land use and runoff generation, soil erosion, sediment transport, and fluvial channel dynamics, and highlight the need to adopt a multiscale approach in other mountain areas of the world.
Performance analysis of parallel gravitational N-body codes on large GPU clusters
NASA Astrophysics Data System (ADS)
Huang, Si-Yi; Spurzem, Rainer; Berczik, Peter
2016-01-01
We compare the performance of two very different parallel gravitational N-body codes for astrophysical simulations on large Graphics Processing Unit (GPU) clusters, both of which are pioneers in their own fields as well as on certain mutual scales - NBODY6++ and Bonsai. We carry out benchmarks of the two codes by analyzing their performance, accuracy and efficiency through the modeling of structure decomposition and timing measurements. We find that both codes are heavily optimized to leverage the computational potential of GPUs, as their performance has approached half of the maximum single-precision performance of the underlying GPU cards. With such performance, we predict that a speed-up of 200-300 can be achieved when up to 1k processors and GPUs are employed simultaneously. We discuss quantitative comparisons of the two codes, finding that in the same cases Bonsai adopts larger time steps as well as larger relative energy errors than NBODY6++, typically 10-50 times larger, depending on the chosen parameters of the codes. Although the two codes are built for different astrophysical applications, in specified conditions they may overlap in performance at certain physical scales, thus allowing the user to choose either one by fine-tuning parameters accordingly.
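The reported link between time-step size and relative energy error can be illustrated with a minimal direct integrator: a leapfrog (KDK) scheme on a circular Kepler orbit, whose bounded energy error scales roughly as dt². This is a pedagogical sketch, not either code's integrator; all numbers are illustrative.

```python
import numpy as np

def max_energy_error(dt, t_end=2.0 * np.pi):
    """Max relative energy error of leapfrog (KDK) on a circular Kepler
    orbit with G*M = 1 and r = 1, integrated for one orbital period."""
    r = np.array([1.0, 0.0])
    v = np.array([0.0, 1.0])
    e0 = 0.5 * (v @ v) - 1.0 / np.linalg.norm(r)
    acc = lambda pos: -pos / np.linalg.norm(pos) ** 3
    a = acc(r)
    worst = 0.0
    for _ in range(int(t_end / dt)):
        v = v + 0.5 * dt * a            # kick
        r = r + dt * v                  # drift
        a = acc(r)
        v = v + 0.5 * dt * a            # kick
        e = 0.5 * (v @ v) - 1.0 / np.linalg.norm(r)
        worst = max(worst, abs((e - e0) / e0))
    return worst

err_small, err_large = max_energy_error(0.01), max_energy_error(0.05)
```

A 5× larger step yields roughly a 25× larger energy error for this second-order scheme, the same kind of trade-off that distinguishes the two benchmarked codes.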
Secure Access Control and Large Scale Robust Representation for Online Multimedia Event Detection
Liu, Changyu; Li, Huiling
2014-01-01
We developed an online multimedia event detection (MED) system. However, there are a secure access control issue and a large-scale robust representation issue when integrating traditional event detection algorithms into the online environment. For the first issue, we proposed a tree proxy-based and service-oriented access control (TPSAC) model based on the traditional role-based access control model. Verification experiments were conducted on the CloudSim simulation platform, and the results showed that the TPSAC model is suitable for the access control of dynamic online environments. For the second issue, inspired by the object-bank scene descriptor, we proposed a 1000-object-bank (1000OBK) event descriptor. Feature vectors of the 1000OBK were extracted from response pyramids of 1000 generic object detectors, which were trained on standard annotated image datasets such as the ImageNet dataset. A spatial bag-of-words tiling approach was then adopted to encode these feature vectors, bridging the gap between objects and events. Furthermore, we performed experiments in the context of event classification on the challenging TRECVID MED 2012 dataset, and the results showed that the robust 1000OBK event descriptor outperforms the state-of-the-art approaches. PMID:25147840
The fusion of large scale classified side-scan sonar image mosaics.
Reed, Scott; Tena Ruiz, Ioseba; Capus, Chris; Petillot, Yvan
2006-07-01
This paper presents a unified framework for the creation of classified maps of the seafloor from sonar imagery. Significant challenges in photometric correction, classification, navigation and registration, and image fusion are addressed. The techniques described are directly applicable to a range of remote sensing problems. Recent advances in side-scan data correction are incorporated to compensate for the sonar beam pattern and motion of the acquisition platform. The corrected images are segmented using pixel-based textural features and standard classifiers. In parallel, the navigation of the sonar device is processed using Kalman filtering techniques. A simultaneous localization and mapping framework is adopted to improve the navigation accuracy and produce georeferenced mosaics of the segmented side-scan data. These are fused within a Markovian framework and two fusion models are presented. The first uses a voting scheme regularized by an isotropic Markov random field and is applicable when the reliability of each information source is unknown. The Markov model is also used to inpaint regions where no final classification decision can be reached using pixel-level fusion. The second model formally introduces the reliability of each information source into a probabilistic model. Evaluation of the two models using both synthetic images and real data from a large-scale survey shows significant quantitative and qualitative improvement using the fusion approach.
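The first fusion model, a vote over information sources regularized by an isotropic Markov random field, can be sketched with iterated conditional modes (ICM) on synthetic label maps. The map size, noise level, and smoothing weight below are invented for the illustration, and ICM stands in for whichever optimizer the paper actually uses.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented ground truth: a 2-class seafloor map with a square patch of class 1.
truth = np.zeros((32, 32), dtype=int)
truth[8:24, 8:24] = 1

def classified_mosaic(m, p=0.2):
    """One source's segmented mosaic, with independent 20% label noise."""
    return np.where(rng.random(m.shape) < p, 1 - m, m)

maps = [classified_mosaic(truth), classified_mosaic(truth)]
votes = maps[0] + maps[1]            # per-pixel votes for class 1 (0, 1 or 2)

# Voting fusion regularized by an isotropic Markov prior, optimized by ICM:
# each pixel takes the label favoured by its votes plus beta times its
# 4-neighbour agreement, sweeping the grid a few times.
label = (votes == 2).astype(int)
beta = 1.0
for _ in range(5):
    for i in range(32):
        for j in range(32):
            nbrs = [label[x, y]
                    for x, y in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                    if 0 <= x < 32 and 0 <= y < 32]
            score1 = votes[i, j] + beta * sum(nbrs)
            score0 = (2 - votes[i, j]) + beta * (len(nbrs) - sum(nbrs))
            label[i, j] = int(score1 > score0)

acc_fused = float((label == truth).mean())
acc_single = max(float((m == truth).mean()) for m in maps)
```

The neighbourhood term lets agreement between sources and spatial smoothness compensate for each other, so the fused map is cleaner than either noisy input on its own.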
Dorr, David A.; Cohen, Deborah J.; Adler-Milstein, Julia
2018-01-01
Failed diffusion of innovations may be linked to an inability to use and apply data, information, and knowledge to change perceptions of current practice and motivate change. Using qualitative and quantitative data from three large-scale health care delivery innovations—accountable care organizations, advanced primary care practice, and EvidenceNOW—we assessed where data-driven innovation is occurring and where challenges lie. We found that implementation of some technological components of innovation (for example, electronic health records) has occurred among health care organizations, but core functions needed to use data to drive innovation are lacking. Deficits include the inability to extract and aggregate data from the records; gaps in sharing data; and challenges in adopting advanced data functions, particularly those related to timely reporting of performance data. The unexpectedly high costs and burden incurred during implementation of the innovations have limited organizations’ ability to address these and other deficits. Solutions that could help speed progress in data-driven innovation include facilitating peer-to-peer technical assistance, providing tailored feedback reports to providers from data aggregators, and using practice facilitators skilled in using data technology for quality improvement to help practices transform. Policy efforts that promote these solutions may enable more rapid uptake of and successful participation in innovative delivery system reforms. PMID:29401031
Data-Aware Retrodiction for Asynchronous Harmonic Measurement in a Cyber-Physical Energy System.
Liu, Youda; Wang, Xue; Liu, Yanchi; Cui, Sujin
2016-08-18
Cyber-physical energy systems provide a networked solution for safety, reliability and efficiency problems in smart grids. On the demand side, a secure and trustworthy energy supply requires real-time supervision and online power quality assessment. Harmonics measurement is necessary in power quality evaluation. However, under a large-scale distributed metering architecture, harmonic measurement faces the out-of-sequence measurement (OOSM) problem, which results from latencies in sensing or in the communication process and introduces deviations into data fusion. This paper describes a distributed measurement network for large-scale asynchronous harmonic analysis and exploits a nonlinear autoregressive model with exogenous inputs (NARX) network to reorder the out-of-sequence measuring data. The NARX network learns the characteristics of the electrical harmonics from practical data rather than from kinematic equations. Thus, the data-aware network approximates the behavior of the practical electrical parameter with real-time data and improves the retrodiction accuracy. Theoretical analysis demonstrates that the data-aware method maintains a reasonable consumption of computing resources. Experiments on a practical testbed of a cyber-physical system are implemented, and harmonic measurement and analysis accuracy are adopted to evaluate the measuring mechanism under a distributed metering network. Results demonstrate an improvement of the harmonics analysis precision and validate the asynchronous measuring method in cyber-physical energy systems.
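The retrodiction idea can be illustrated with a linear NARX-style model fitted by least squares. The paper's NARX network is a neural model; this linear sketch, with invented function names, only shows the structure (predicting a sample from lagged outputs and lagged exogenous inputs) used to fill in or reorder a late-arriving measurement.

```python
import numpy as np

def fit_narx_linear(y, u, lags=2):
    """Least-squares fit of a linear NARX-style model:
    y[t] ~ sum_k a_k * y[t-k] + sum_k b_k * u[t-k] + c."""
    rows, targets = [], []
    for t in range(lags, len(y)):
        rows.append(np.r_[y[t - lags:t], u[t - lags:t], 1.0])
        targets.append(y[t])
    coef, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return coef

def retrodict_next(coef, y_hist, u_hist, lags=2):
    """Predict the next sample from the most recent lags of y and u,
    e.g. to reconstruct a delayed (out-of-sequence) measurement."""
    x = np.r_[y_hist[-lags:], u_hist[-lags:], 1.0]
    return float(x @ coef)
```

On data generated by a known linear recursion the fitted model reproduces the held-out sample, which is the property the OOSM reordering relies on.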
Scale Construction for Graphing: An Investigation of Students' Resources
ERIC Educational Resources Information Center
Delgado, Cesar; Lucero, Margaret M.
2015-01-01
Graphing is a fundamental part of the scientific process. Scales are key but little-studied components of graphs. Adopting a resources-based framework of cognitive structure, we identify the potential intuitive resources that six undergraduates of diverse majors and years at a public US research university activated when constructing scales, and…
The Robustness of IRT-Based Vertical Scaling Methods to Violation of Unidimensionality
ERIC Educational Resources Information Center
Yin, Liqun
2013-01-01
In recent years, many states have adopted Item Response Theory (IRT) based vertically scaled tests due to their compelling features in a growth-based accountability context. However, selection of a practical and effective calibration/scaling method and proper understanding of issues with possible multidimensionality in the test data is critical to…
Estimating ecological integrity in the interior Columbia River basin.
Thomas M. Quigley; Richard W. Haynes; Wendel J. Hann
2001-01-01
The adoption of ecosystem-based management strategies focuses attention on the need for broad scale estimates of ecological conditions; this poses two challenges for the science community: estimating broad scale ecosystem conditions from highly disparate data, often observed at different spatial scales, and interpreting these conditions relative to goals such as...
USDA-ARS?s Scientific Manuscript database
Agriculture covers 40% of Earth’s ice-free land area and has broad impacts on global biogeochemical cycles. While some agricultural management changes are small in scale or impact, others have the potential to shift biogeochemical cycles at landscape and larger scales if widely adopted. Understandin...
The management of health care service quality. A physician perspective
Bobocea, L; Gheorghe, IR; Spiridon, St; Gheorghe, CM; Purcarea, VL
2016-01-01
Applying marketing to health care services is now an essential element for every manager and policy maker. To be successful under competitive and cost pressures, a health care organization has to identify an accurate measurement scale for defining service quality. The most widely employed scale in the services sector is the SERVQUAL scale. Although it has been successfully adopted in fields such as brokerage and banking, experts have concluded that the SERVQUAL scale should be modified to fit the specific context. Moreover, the SERVQUAL scale focuses on the consumer's perspective on service quality. While service quality has commonly been measured with the SERVQUAL scale, other experts have identified a structure-process-outcome design that they consider more suitable for health care services. This approach highlights a different perspective on investigating service quality, namely the physician's perspective. Further, we believe that the Seven Prong Model for Improving Service Quality could be adopted to effectively measure health care service quality in a Romanian context from a physician's perspective. PMID:27453745
Deciphering the therapeutic stem cell strategies of large and midsize pharmaceutical firms.
Vertès, Alain A
2014-01-01
The slow adoption of cytotherapeutics remains a vexing hurdle given clinical progress achieved to date with a variety of stem cell lineages. Big and midsize pharmaceutical companies as an asset class still delay large-scale investments in this arena until technological and market risks have been further reduced. Nonetheless, a handful of stem cell strategic alliance and licensing transactions have already been implemented, indicating that progress is actively monitored, although most of these involve midsize firms. The greatest difficulty is, perhaps, that the regenerative medicine industry is currently only approaching the point of inflexion of the technology development S-curve, as many more clinical trials read out. A path to accelerating technology adoption is to focus on innovation outliers among healthcare actors. These can be identified by analyzing systemic factors (e.g., national science policies and industry fragmentation) and intrinsic factors (corporate culture, e.g., nimble decision-making structures; corporate finance, e.g., opportunity costs and ownership structure; and operations, e.g., portfolio management strategies, threats on existing businesses and patent expirations). Another path is to accelerate the full clinical translation and commercialization of an allogeneic cytotherapeutic product in any indication to demonstrate the disease-modifying potential of the new products for treatment and prophylaxis, ideally for a large unmet medical need such as dry age-related macular degeneration, or for an orphan disease such as biologics-refractory acute graft-versus-host disease. In times of decreased industry-average research productivity, regenerative medicine products provide important prospects for creating new franchises with a market potential that could very well mirror that achieved with the technology of monoclonal antibodies.
Adoptive cell transfer therapy for malignant gliomas.
Ishikawa, Eiichi; Takano, Shingo; Ohno, Tadao; Tsuboi, Koji
2012-01-01
To date, various adoptive immunotherapies have been attempted for treatment of malignant gliomas using nonspecific and/or specific effector cells. Since the late 1980s, with the development of rIL-2, the efficacy of lymphokine-activated killer (LAK) cell therapy with or without rIL-2 for malignant gliomas has been tested with some modifications in therapeutic protocols. With advancements in technology, ex vivo expanded tumor-specific cytotoxic T-lymphocytes (CTL) or those lineages were used in clinical trials with higher tumor response rates. In addition, combinations of adoptive cell transfer using LAK cells, CTLs or natural killer (NK) cells with autologous tumor vaccine (ATV) therapy were attempted. Also, a strategy of high-dose (or lymphodepleting) chemotherapy followed by adoptive cell transfer has been drawing attention recently. The most important role of these clinical studies using cell therapy was to prove that these ex vivo expanded effector cells could kill tumor cells in vivo. Although recent clinical results could demonstrate radiologic tumor shrinkage in a number of cases, cell transfer therapy alone has been utilized less frequently, because of the high cost of ex vivo cell expansion, the short duration of antitumor activity in vivo, and the recent shift of interest to vaccine immunotherapy. Nevertheless, NK cell therapy using specific feeder cells or allogeneic NK cell lines has the potential to be a good treatment choice because of easy ex vivo expansion and its efficacy, especially when combined with vaccine therapy, as the two are complementary to each other. Also, further studies are expected to clarify the efficacy of high-dose chemotherapy followed by large-scale cell transfer therapy as a new therapeutic strategy for malignant gliomas.
Common origin of 3.55 keV x-ray line and gauge coupling unification with left-right dark matter
NASA Astrophysics Data System (ADS)
Borah, Debasish; Dasgupta, Arnab; Patra, Sudhanwa
2017-12-01
We present a minimal left-right dark matter framework that can simultaneously explain the recently observed 3.55 keV x-ray line from several galaxy clusters and gauge coupling unification at high energy scale. Adopting a minimal dark matter strategy, we consider both left and right handed triplet fermionic dark matter candidates which are stable by virtue of a remnant Z_2 ≃ (-1)^(B-L) symmetry arising after the spontaneous symmetry breaking of left-right gauge symmetry to that of the standard model. A scalar bitriplet field is incorporated whose first role is to allow radiative decay of right handed triplet dark matter into the left handed one and a photon with energy 3.55 keV. The other role this bitriplet field at TeV scale plays is to assist in achieving gauge coupling unification at a high energy scale within a nonsupersymmetric SO(10) model while keeping the scale of left-right gauge symmetry around the TeV corner. Apart from solving the neutrino mass problem and giving verifiable new contributions to neutrinoless double beta decay and charged lepton flavor violation, the model with TeV scale gauge bosons can also give rise to interesting collider signatures like the diboson excess and the dilepton plus two jets excess reported recently in the Large Hadron Collider data.
NASA Astrophysics Data System (ADS)
Li, Jun; Fu, Siyao; He, Haibo; Jia, Hongfei; Li, Yanzhong; Guo, Yi
2015-11-01
Large-scale regional evacuation is an important part of national security emergency response planning. The emergency evacuation of large commercial shopping areas, as typical service systems, is an active research topic. A systematic methodology based on Cellular Automata with a Dynamic Floor Field and an event-driven model is proposed and examined in a case study of evacuation from a commercial shopping mall. Pedestrian movement is modeled with Cellular Automata, while the event-driven model is adopted to simulate pedestrian movement patterns; the simulation process covers both the normal situation and emergency evacuation. The model is composed of four layers: an environment layer, a customer layer, a clerk layer, and a trajectory layer. In simulating pedestrian movement routes, the model takes into account customers' purchase intentions and pedestrian density. Based on this evacuation model combining Cellular Automata with a Dynamic Floor Field and an event-driven model, we can reflect the behavior of customers and clerks in both normal and emergency situations. The distribution of individual evacuation times as a function of initial position and the dynamics of the evacuation process are studied. Our results indicate that an evacuation model combining Cellular Automata with a Dynamic Floor Field and event-driven scheduling can be used to simulate the evacuation of pedestrian flows in indoor areas with complicated surroundings and to investigate the layout of a shopping mall.
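The floor-field mechanism underlying such CA evacuation models can be sketched as follows. This is a minimal illustration, not the authors' four-layer model: the static floor field is the breadth-first-search distance from every free cell to the nearest exit, and a pedestrian's CA update moves it to the neighbouring cell with the lowest field value (function and variable names are invented).

```python
from collections import deque
import numpy as np

def static_floor_field(grid, exits):
    """Static floor field: BFS distance from every free cell to the
    nearest exit.  grid: 2-D array with 1 = obstacle, 0 = free cell."""
    H, W = grid.shape
    field = np.full((H, W), np.inf)
    q = deque()
    for (i, j) in exits:
        field[i, j] = 0
        q.append((i, j))
    while q:
        i, j = q.popleft()
        for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            x, y = i + di, j + dj
            if (0 <= x < H and 0 <= y < W and grid[x, y] == 0
                    and field[x, y] > field[i, j] + 1):
                field[x, y] = field[i, j] + 1
                q.append((x, y))
    return field

def step_toward_exit(field, pos):
    """One CA update: move to the 4-neighbour with the lowest field value."""
    i, j = pos
    best = pos
    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        x, y = i + di, j + dj
        if (0 <= x < field.shape[0] and 0 <= y < field.shape[1]
                and field[x, y] < field[best]):
            best = (x, y)
    return best
```

A dynamic floor field would add a second, decaying grid recording where pedestrians have recently walked; the static field alone already produces shortest-path motion toward the exits.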
Morse, Wayde C; Hall, Troy E; Kruger, Linda E
2009-03-01
In this article, we examine how issues of scale affect the integration of recreation management with the management of other natural resources on public lands. We present two theories used to address scale issues in ecology and explore how they can improve the two most widely applied recreation-planning frameworks. The theory of patch dynamics and hierarchy theory are applied to the recreation opportunity spectrum (ROS) and the limits of acceptable change (LAC) recreation-planning frameworks. These frameworks have been widely adopted internationally, and improving their ability to integrate with other aspects of natural resource management has significant social and conservation implications. We propose that incorporating ecologic criteria and scale concepts into these recreation-planning frameworks will improve the foundation for integrated land management by resolving issues of incongruent boundaries, mismatched scales, and multiple-scale analysis. Specifically, we argue that whereas the spatially explicit process of the ROS facilitates integrated decision making, its lack of ecologic criteria, broad extent, and large patch size decrease its usefulness for integration at finer scales. The LAC provides explicit considerations for weighing competing values, but measurement of recreation disturbances within an LAC analysis is often done at too fine a grain and at too narrow an extent for integration with other recreation and resource concerns. We suggest that planners should perform analysis at multiple scales when making management decisions that involve trade-offs among competing values. The United States Forest Service is used as an example to discuss how resource-management agencies can improve this integration.
Family interactions in adoptive compared to nonadoptive families.
Rueter, Martha A; Keyes, Margaret A; Iacono, William G; McGue, Matt
2009-02-01
Despite the large and growing numbers of adoptive families, little research describes interactions in families with adopted adolescents. Yet, adopted adolescents' increased risk for adjustment problems, combined with the association between family interactions and adolescent adjustment in nonadoptive families, raises questions about differences in adoptive and nonadoptive family interactions. We compared observed and self-reported family interactions between 284 adoptive and 208 nonadoptive families and within 123 families with 1 adopted and 1 nonadopted adolescent. Adolescents averaged 14.9 years of age. Comparisons were made using analysis of variance incorporating hierarchical linear methods in SAS PROC MIXED to control family-related correlations in the data. Parents and children reported more conflict in adoptive families when compared with nonadoptive families. Families with 1 adopted and 1 nonadopted adolescent reported more conflict between parents and adopted adolescents. Observed parental behavior was similar across adoptive and nonadoptive children although adopted adolescents were less warm and, in families with 2 adopted children, more conflictual than nonadopted adolescents. These findings suggest a need for further investigation of the association between family interactions and adopted adolescent problem behavior.
NASA Astrophysics Data System (ADS)
Gowin, John; Bunclark, Lisa
2013-04-01
Africa is seen by many as the continent with the greatest potential for agricultural growth, but land degradation and environmental change threaten the African soil resource more severely than in many other regions of the planet. Achieving future food security will depend mainly on increasing production from rainfed agriculture. The challenge of delivering the required sustainable intensification in rainfed agriculture is most acute in the drylands - the semi-arid and dry sub-humid climatic regions. There are two broad strategies for increasing yields under these circumstances: (1) capturing more rainwater and storing it (increasing water availability), and (2) using the available water more effectively by increasing plant growth and/or reducing non-productive soil evaporation (increasing water productivity). We focus on the first of these options - water harvesting, which is defined as "the collection and concentration of rainfall runoff, or floodwaters, for plant production". The benefits of water harvesting have been documented from small-scale experimental plot studies, but evidence of successful adoption and impact is weak. As a contribution to improving the evidence base, we present results from an investigation conducted in sub-Saharan Africa (SSA) to gather information on progress with efforts to promote adoption of water harvesting. The intention was to investigate in detail the processes and outcomes on a large enough sample area to draw some common conclusions. This was not a comprehensive analysis of all that is happening in each country, nor was it a random sample; this was a purposive sample guided by available baseline information to permit comparative analysis. Water harvesting seems to have made the most progress where techniques can be adopted by individual farmers: in Burkina Faso and Niger, micro-scale zaï/tassa and demi-lune systems; in Sudan and Tanzania, meso-scale majaruba and teras systems.
Macro-scale systems requiring social organisation may offer greater potential benefits, but they are more difficult to implement; nevertheless, some success stories are apparent: e.g., micro-watersheds in Ethiopia and floodwater harvesting in Sudan and Kenya. There is a marked contrast with much of the experience in India, where there has been greater emphasis on groundwater recharge. The very limited development of groundwater in SSA explains this, but in the absence of groundwater recharge the storage of runoff for supplementary irrigation depends entirely on small ponds. The challenge now is to develop effective methods to disseminate knowledge of successful water harvesting. We consider in particular the influence of available information on soils and rainfall.
CONSORT in China: past development and future direction.
Song, Tian-Jiao; Leng, Hou-Fu; Zhong, Linda Ld; Wu, Tai-Xiang; Bian, Zhao-Xiang
2015-06-01
The Consolidated Standards of Reporting Trials (CONSORT) Statement was published in 1996 and first introduced to China in 2001. Although CONSORT has been widely accepted by high-quality international journals, little is known about how many Chinese journals have adopted the CONSORT Statement or whether the quality of reporting has improved. A systematic search of the "Instructions to authors" of all Chinese medical journals in the China Academic Journals (CAJ) Full-text Database was conducted up to February 2012, and only 7 journals officially listed the requirements of the CONSORT Statement. Research articles about randomized controlled trials (RCTs) published in 2002, 2004, 2006, 2008, and 2010 were identified from the journals that had specifically adopted the CONSORT Statement, and, as a control group, from 30 top journals based on the Chinese Science Citation Index (CSCI) 2011. The quality of both cohorts of articles was assessed using the revised CONSORT Checklist and the Jadad scale. A total of 1221 Chinese medical journals were identified. Only seven journals stated clearly in the "Instructions to authors" that authors should adopt the CONSORT requirements in clinical trial papers. None of these journals is among the control group in the CSCI 2011. In the selected years, a total of 171 articles from the 7 journals that had adopted CONSORT and 232 articles in the control group were identified as reporting RCTs. The average scores according to the revised CONSORT Checklist were 29.47 for the CONSORT-adopting journals and 25.57 for the control group, while the average scores based on the Jadad scale were 2.53 for CONSORT-adopting journals and 1.97 for the control group. Few Chinese medical journals have adopted the CONSORT Statement. The overall quality of RCT reports in the 7 journals that have adopted CONSORT was better than in the top 30 journals that have not adopted CONSORT. 
The quality of RCT reports in Chinese journals needs further improvement, and the CONSORT Statement could be a very helpful guideline.
NASA Astrophysics Data System (ADS)
Tozzi, R.; Pezzopane, M.; De Michelis, P.; Pignalberi, A.; Siciliano, F.
2016-12-01
The constellation geometry adopted by ESA for the Swarm satellites has opened the way to new investigations based on magnetic data. An example is the curl-B technique, which allows reconstructing the F-region electric current density in terms of its radial, meridional, and zonal components from the data of two satellites of the Swarm constellation (Swarm A and B), which fly at different altitudes. Here, we apply this technique to more than 2 years of Swarm magnetic vector data and investigate the average large-scale behaviour of F-region current densities as a function of local time, season, and interplanetary conditions (different strengths and directions of the three IMF components and/or geomagnetic activity levels).
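The physics behind the curl-B technique is Ampère's law, J = (1/μ0) ∇×B. The actual Swarm implementation works with dual-spacecraft data in spherical geometry; the toy Cartesian finite-difference sketch below (invented names, synthetic field) only illustrates how a current density is recovered from curl estimates of a gridded magnetic field.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (H/m)

def current_density(Bx, By, Bz, dx, dy, dz):
    """Estimate J = (1/mu0) * curl(B) on a regular Cartesian grid
    using central finite differences (np.gradient)."""
    dBx = np.gradient(Bx, dx, dy, dz)   # [dBx/dx, dBx/dy, dBx/dz]
    dBy = np.gradient(By, dx, dy, dz)
    dBz = np.gradient(Bz, dx, dy, dz)
    Jx = (dBz[1] - dBy[2]) / MU0        # dBz/dy - dBy/dz
    Jy = (dBx[2] - dBz[0]) / MU0        # dBx/dz - dBz/dx
    Jz = (dBy[0] - dBx[1]) / MU0        # dBy/dx - dBx/dy
    return Jx, Jy, Jz
```

For the linear test field B = (-y, x, 0) the curl is the constant vector (0, 0, 2), so the recovered current is purely axial, which makes the sketch easy to verify.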
A GPU-based large-scale Monte Carlo simulation method for systems with long-range interactions
NASA Astrophysics Data System (ADS)
Liang, Yihao; Xing, Xiangjun; Li, Yaohang
2017-06-01
In this work we present an efficient implementation of Canonical Monte Carlo simulation for Coulomb many body systems on graphics processing units (GPU). Our method takes advantage of the GPU Single Instruction, Multiple Data (SIMD) architectures, and adopts the sequential updating scheme of Metropolis algorithm. It makes no approximation in the computation of energy, and reaches a remarkable 440-fold speedup, compared with the serial implementation on CPU. We further use this method to simulate primitive model electrolytes, and measure very precisely all ion-ion pair correlation functions at high concentrations. From these data, we extract the renormalized Debye length, renormalized valences of constituent ions, and renormalized dielectric constants. These results demonstrate unequivocally physics beyond the classical Poisson-Boltzmann theory.
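The sequential-updating Metropolis scheme the authors port to the GPU can be sketched on the CPU as follows. This is a non-periodic toy with direct pairwise Coulomb sums (no Ewald summation, no SIMD parallelism, Gaussian units); all function names and parameters are illustrative, not from the paper.

```python
import numpy as np

def pair_energy(pos, q, i):
    """Exact Coulomb energy between particle i and all others,
    computed by a direct pairwise sum (no approximation)."""
    d = np.linalg.norm(pos - pos[i], axis=1)
    d[i] = np.inf                       # exclude the self-interaction term
    return np.sum(q[i] * q / d)

def metropolis_sweep(pos, q, beta, step, rng):
    """One sequential-update sweep: propose and accept/reject a trial
    displacement for each particle in turn (Metropolis criterion)."""
    accepted = 0
    for i in range(len(pos)):
        old = pos[i].copy()
        e_old = pair_energy(pos, q, i)
        pos[i] = old + rng.uniform(-step, step, size=3)
        delta = pair_energy(pos, q, i) - e_old
        if rng.random() < np.exp(min(0.0, -beta * delta)):
            accepted += 1               # keep the move
        else:
            pos[i] = old                # revert
    return accepted
```

Because each particle's move only changes its own interaction terms, the per-move energy difference costs O(N); the paper's GPU version exploits exactly this structure by evaluating the N pairwise terms in parallel.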
Ethics of Implementing Electronic Health Records in Developing Countries: Points to Consider
Were, Martin C.; Meslin, Eric M.
2011-01-01
Electronic Health Record systems (EHRs) are increasingly being used in many developing countries, several of which have moved beyond isolated pilot projects to active large-scale implementation as part of their national health strategies. Despite growing enthusiasm for adopting EHRs in resource poor settings, almost no attention has been paid to the ethical issues that might arise. In this article we argue that these ethical issues should be addressed now if EHRs are to be appropriately implemented in these settings. We take a systematic approach guided by a widely accepted ethical framework currently in use for developing countries to first describe the ethical issues, and then propose a set of ‘Points to Consider’ to guide further thinking and decision-making. PMID:22195214
Survey of MapReduce frame operation in bioinformatics.
Zou, Quan; Li, Xu-Bin; Jiang, Wen-Rui; Lin, Zi-Yu; Li, Gui-Lin; Chen, Ke
2014-07-01
Bioinformatics is challenged by the fact that traditional analysis tools have difficulty in processing large-scale data from high-throughput sequencing. The open source Apache Hadoop project, which adopts the MapReduce framework and a distributed file system, has recently given bioinformatics researchers an opportunity to achieve scalable, efficient and reliable computing performance on Linux clusters and on cloud computing services. In this article, we present MapReduce frame-based applications that can be employed in the next-generation sequencing and other biological domains. In addition, we discuss the challenges faced by this field as well as the future works on parallel computing in bioinformatics.
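The MapReduce pattern that Hadoop implements can be illustrated in miniature with k-mer counting, a common sequencing task. This single-process sketch (invented names, no Hadoop or HDFS involved) shows the three phases — map, shuffle/sort, reduce — that the framework distributes across a cluster.

```python
from itertools import groupby

def mapper(read, k=3):
    """Map phase: emit (k-mer, 1) for every k-mer in a sequencing read."""
    return [(read[i:i + k], 1) for i in range(len(read) - k + 1)]

def reducer(key, values):
    """Reduce phase: sum the counts emitted for one k-mer."""
    return key, sum(values)

def map_reduce(reads, k=3):
    """Run map over all reads, sort by key (the shuffle), then reduce
    each group of identical keys."""
    pairs = [kv for read in reads for kv in mapper(read, k)]
    pairs.sort()                        # shuffle/sort phase
    return dict(reducer(key, [v for _, v in group])
                for key, group in groupby(pairs, key=lambda kv: kv[0]))
```

In a real Hadoop job the same `mapper` and `reducer` logic would run on different nodes, with the framework handling the shuffle, fault tolerance, and the distributed file system.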
DOE Office of Scientific and Technical Information (OSTI.GOV)
Demaria, N.
This paper is a review of recent progress of the RD53 Collaboration. Results obtained from the study of radiation effects on 65 nm CMOS have matured enough to define first strategies to adopt in the design of analog and digital circuits. Critical building blocks and analog very-front-end chains have been designed and tested before and after 5–800 Mrad. Small prototypes of 64×64 pixels with complex digital architectures have been produced and aim to address the main issues of dealing with extremely high pixel rates while operating at very small in-time thresholds in the analog front end. Lastly, the collaboration is now proceeding at full speed towards the design of a large-scale prototype, called RD53A, in 65 nm CMOS technology.
Jieyi Li; Arandjelovic, Ognjen
2017-07-01
Computer science, and machine learning in particular, are increasingly lauded for their potential to aid medical practice. However, the highly technical nature of state-of-the-art techniques can be a major obstacle to their usability by health care professionals and thus to their adoption and actual practical benefit. In this paper we describe a software tool which focuses on the visualization of predictions made by a recently developed method which leverages data in the form of large-scale electronic records for making diagnostic predictions. Guided by risk predictions, our tool allows the user to explore interactively different diagnostic trajectories, or display cumulative long-term prognostics, in an intuitive and easily interpretable manner.
ERIC Educational Resources Information Center
Gan, Chin Lay; Balakrishnan, Vimala
2016-01-01
Use of mobile technology is widespread, particularly among the younger generation. There is a huge potential for utilizing such technology in lecture classes with large numbers of students, serving as an interaction tool between the students and lecturers. The challenge is to identify significant adoption factors to ensure effective adoption of…
ERIC Educational Resources Information Center
Kenney, Jane; Newcombe, Ellen
2011-01-01
Adopting a new teaching approach is often a daunting task especially if one is an early adopter in a limited-resource environment. This article describes the challenges encountered and the strategies used in pilot testing a blended instructional method in a large size class within the college of education at a medium-sized university. The main…
Baird, Andrew J; Haslam, Roger A
2013-12-01
Beliefs, cognitions, and behaviors relating to pain can be associated with a range of negative outcomes. In patients, certain beliefs are associated with increased levels of pain and related disability. There are few data, however, showing the extent to which beliefs of patients differ from those of the general population. This study explored pain beliefs in a large nonclinical population and a chronic low back pain (CLBP) sample using the Pain Beliefs Questionnaire (PBQ) to identify differences in scores and factor structures between and within the samples. This was a cross-sectional study. The samples comprised patients attending a rehabilitation program and respondents to a workplace survey. Pain beliefs were assessed using the PBQ, which incorporates 2 scales: organic and psychological. Exploratory factor analysis was used to explore variations in factor structure within and between samples. The relationship between the 2 scales also was examined. Patients reported higher organic scores and lower psychological scores than the nonclinical sample. Within the nonclinical sample, those who reported frequent pain scored higher on the organic scale than those who did not. Factor analysis showed variations in relation to the presence of pain. The relationship between scales was stronger in those not reporting frequent pain. This was a cross-sectional study; therefore, no causal inferences can be made. Patients experiencing CLBP adopt a more biomedical perspective on pain than nonpatients. The presence of pain is also associated with increased biomedical thinking in a nonclinical sample. However, the impact is not only on the strength of beliefs, but also on the relationship between elements of belief and the underlying belief structure.
Monitoring and Modeling Water and Energy Fluxes in North China Plain: From Field to Regional Scales
NASA Astrophysics Data System (ADS)
Shen, Y.
2012-12-01
The North China Plain (NCP) is one of the most water-deficient regions in the world. Although total withdrawals from surface water and groundwater have exceeded renewable supplies for many years, large amounts of groundwater are still extracted every year for intensive irrigation because of the region's importance in balancing China's food budget. With a winter wheat and summer maize double-cropping system, the annual grain yield of the NCP can reach a very high level of around 15 t/ha, which depends largely on timely irrigation. As a result, the ceaseless overexploitation of groundwater has caused serious environmental and ecological problems: nearly all rivers in the plain areas have dried up, groundwater levels have declined, land has subsided, and wetlands have shrunk. The decrease in precipitation over the past half century has reinforced the water shortage in the NCP. The sustainability of both water resources and agriculture has become the most important issue in this region. A key issue for the sustainable use of water resources is to improve water use efficiency and reduce agricultural water consumption. This study introduces our efforts to clarify the water and heat balances in irrigated agricultural lands and their implications for crop yield, hydrology, and water resources evolution in the NCP. In recent years we established a multi-scale observation system in the NCP to study surface water and heat processes and the agricultural aspects of the hydrological cycle. Multi-disciplinary methods are adopted, including micrometeorological, isotopic, and soil-hydrological methods at the field scale, and remote sensing and modeling to study water fluxes at the regional scale. Detailed research activities and some initial results will be presented at the workshop.
Manufacturing process scale-up of optical grade transparent spinel ceramic at ArmorLine Corporation
NASA Astrophysics Data System (ADS)
Spilman, Joseph; Voyles, John; Nick, Joseph; Shaffer, Lawrence
2013-06-01
While transparent Spinel ceramic's mechanical and optical characteristics are ideal for many ultraviolet (UV), visible, short-wave infrared (SWIR), mid-wave infrared (MWIR), and multispectral sensor window applications, commercial adoption of the material has been hampered because it has historically been available only in relatively small sizes (one square foot per window or less) and low volumes, with unreliable supply and quality. Recent efforts, most notably by Technology Assessment and Transfer (TA and T), have scaled up manufacturing processes and demonstrated the capability to produce larger windows on the order of two square feet, but with limited output not suitable for production-type programs. ArmorLine Corporation licensed the hot-pressed Spinel manufacturing know-how of TA and T in 2009 with the goal of building the world's first dedicated full-scale Spinel production facility, enabling the supply of a reliable and sufficient volume of large Transparent Armor and Optical Grade Spinel plates. With over $20 million of private investment by J.F. Lehman and Company, ArmorLine has installed and commissioned the largest vacuum hot press in the world, the largest high-temperature/high-pressure hot isostatic press in the world, and supporting manufacturing processes within 75,000 square feet of manufacturing space. ArmorLine's equipment is capable of producing window blanks as large as 50" x 30", and the facility can produce substantial volumes of material with its Lean configuration and 24/7 operation. Initial production capability was achieved in 2012. ArmorLine will discuss the challenges encountered during scale-up of the manufacturing processes, ArmorLine Optical Grade Spinel optical performance, and provide an overview of the facility and its capabilities.
Intrinsic alignments of galaxies in the MassiveBlack-II simulation: analysis of two-point statistics
NASA Astrophysics Data System (ADS)
Tenneti, Ananth; Singh, Sukhdeep; Mandelbaum, Rachel; di Matteo, Tiziana; Feng, Yu; Khandai, Nishikanta
2015-04-01
The intrinsic alignment of galaxies with the large-scale density field is an important astrophysical contaminant in upcoming weak lensing surveys. We present detailed measurements of galaxy intrinsic alignments and the associated ellipticity-direction (ED) and projected shape (wg+) correlation functions for galaxies in the cosmological hydrodynamic MassiveBlack-II simulation. We carefully assess the effects of iterative, weighted (by mass or luminosity) definitions of the (reduced and unreduced) inertia tensor on galaxy shapes, on the misalignment of the stellar component with the dark matter shape, and on two-point statistics. We find that iterative procedures must be adopted for a reliable measurement of the reduced tensor but that luminosity versus mass weighting has only negligible effects. Both ED and wg+ correlations increase in amplitude with subhalo mass (in the range 10^10 to 6.0 × 10^14 h^-1 M⊙), with a weak redshift dependence (from z = 1 to 0.06) at fixed mass. At z ~ 0.3, we predict a wg+ that is in reasonable agreement with Sloan Digital Sky Survey luminous red galaxy measurements and that decreases in amplitude by a factor of ~5-18 for galaxies in the Large Synoptic Survey Telescope survey. We also compare the intrinsic alignments of centrals and satellites, with clear detection of satellite radial alignments within their host haloes. Finally, we show that wg+ (using subhaloes as tracers of density) and wδ+ (using dark matter density) predictions from the simulations agree with those of non-linear alignment (NLA) models at scales where the two-halo term dominates the correlations (and we tabulate the associated NLA fitting parameters). The one-halo term induces a scale-dependent bias at small scales that is not captured by the NLA model.
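The iterative reduced inertia tensor mentioned in the abstract can be sketched in a few lines. The following is a minimal illustration of one common iterative scheme (particle positions are rescaled by the current best-fit ellipsoid at each step, and r^-2 weighting defines the "reduced" tensor); conventions for the rescaling step vary between studies, so this is an assumption-laden sketch, not necessarily the exact MassiveBlack-II implementation.

```python
import numpy as np

def iterative_reduced_inertia(pos, mass, n_iter=20):
    """Iteratively compute the reduced (r^-2 weighted) inertia tensor of a
    particle distribution. At each step, the ellipsoidal radius is evaluated
    in the frame of the current tensor so that the shape estimate converges
    toward the true axis ratios rather than being biased by spherical
    weighting. One common variant; details differ between studies."""
    pos = np.asarray(pos, dtype=float)
    mass = np.asarray(mass, dtype=float)
    I = np.eye(3)
    for _ in range(n_iter):
        evals, evecs = np.linalg.eigh(I)              # ascending eigenvalues
        evals = np.maximum(evals, 1e-12 * evals[-1])  # guard tiny negatives
        p = pos @ evecs                               # principal-axis frame
        axes = np.sqrt(evals / evals[-1])             # normalized axis lengths
        r2 = np.sum((p / axes) ** 2, axis=1)          # ellipsoidal radius^2
        r2[r2 == 0] = np.inf                          # ignore particles at centre
        w = mass / r2
        I = (pos.T * w) @ pos / w.sum()               # weighted second moments
    return I
```

For an anisotropic particle cloud, the sorted eigenvalues of the result reflect the ordering of the underlying axis lengths, which is the property the two-point shape statistics rely on.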
Petranovich, Christine L; Walz, Nicolay Chertkoff; Staat, Mary Allen; Chiu, Chung-Yiu Peter; Wade, Shari L
2015-01-01
The aim of this study was to investigate the association of neurocognitive functioning with internalizing and externalizing problems and school and social competence in children adopted internationally. Participants included girls between the ages of 6-12 years who were internationally adopted from China (n = 32) or Eastern Europe (n = 25) and a control group of never-adopted girls (n = 25). Children completed the Vocabulary and Matrix Reasoning subtests from the Wechsler Abbreviated Scale of Intelligence and the Score! and Sky Search subtests from the Test of Everyday Attention for Children. Parents completed the Child Behavior Checklist and the Home and Community Social Behavior Scales. Compared to the controls, the Eastern European group evidenced significantly more problems with externalizing behaviors and school and social competence and poorer performance on measures of verbal intelligence, perceptual reasoning, and auditory attention. More internalizing problems were reported in the Chinese group compared to the controls. Using generalized linear regression, interaction terms were examined to determine whether the associations of neurocognitive functioning with behavior varied across groups. Eastern European group status was associated with more externalizing problems and poorer school and social competence, irrespective of neurocognitive test performance. In the Chinese group, poorer auditory attention was associated with more problems with social competence. Neurocognitive functioning may be related to behavior in children adopted internationally. Knowledge about neurocognitive functioning may further our understanding of the impact of early institutionalization on post-adoption behavior.
Overrepresentation of Adopted Adolescents at a Hospital-Based Gender Dysphoria Clinic
Shumer, Daniel E.; Abrha, Aser; Feldman, Henry A.; Carswell, Jeremi
2017-01-01
Abstract Purpose: We have noted a greater than expected prevalence of adopted children presenting to our multidisciplinary gender program for evaluation of gender dysphoria. Methods: A retrospective review of 184 patient charts was conducted to assess the prevalence of adopted children presenting to gender clinic. Results: Fifteen of 184 patients seen were living with adoptive families (8.2%). This is significantly higher than expected based on U.S. census data. Conclusion: Adopted children are referred to our gender program more than would be expected based on the percentage of adopted children in our state and the United States at large. This may be due to a true increased risk of gender dysphoria in adopted children, or could represent presentation bias. Gender programs should be prepared to provide assessments for adopted children. Further work is needed to understand the relationship between adopted status and gender development. PMID:28861549
NASA Astrophysics Data System (ADS)
Fathonah, N. N.; Nurtono, T.; Kusdianto; Winardi, S.
2018-03-01
Single-phase turbulent flow in a vessel agitated by a side-entering inclined-blade turbine was simulated using CFD. The aim of this work is to identify the hydrodynamic characteristics of a model vessel whose geometrical configuration is adopted at industrial scale. The laboratory-scale model vessel is a flat-bottomed cylindrical tank agitated by a side-entering 4-blade inclined-blade turbine with impeller rotational speed N = 100-400 rpm. The effect of the impeller diameter on the fluid flow pattern was investigated. The fluid flow pattern in the vessel is essentially characterized by macro-instabilities, i.e. flow-pattern changes that are large in spatial scale and low in frequency. The intensity of fluid flow in the tank increases as the impeller rotational speed rises from 100 to 400 rpm, accompanied by a shift of the core of the circulation flow away from the impeller discharge stream toward the front tank wall. The intensity of fluid flow in the vessel also increases as the impeller diameter increases from d = 3 cm to d = 4 cm.
Putting the Spotlight Back on Plant Suspension Cultures
Santos, Rita B.; Abranches, Rita; Fischer, Rainer; Sack, Markus; Holland, Tanja
2016-01-01
Plant cell suspension cultures have several advantages that make them suitable for the production of recombinant proteins. They can be cultivated under aseptic conditions using classical fermentation technology, they are easy to scale-up for manufacturing, and the regulatory requirements are similar to those established for well-characterized production systems based on microbial and mammalian cells. It is therefore no surprise that taliglucerase alfa (Elelyso®)—the first licensed recombinant pharmaceutical protein derived from plants—is produced in plant cell suspension cultures. But despite this breakthrough, plant cells are still largely neglected compared to transgenic plants and the more recent plant-based transient expression systems. Here, we revisit plant cell suspension cultures and highlight recent developments in the field that show how the rise of plant cells parallels that of Chinese hamster ovary cells, currently the most widespread and successful manufacturing platform for biologics. These developments include medium optimization, process engineering, statistical experimental designs, scale-up/scale-down models, and process analytical technologies. Significant yield increases for diverse target proteins will encourage a gold rush to adopt plant cells as a platform technology, and the first indications of this breakthrough are already on the horizon. PMID:27014320
A self-scaling, distributed information architecture for public health, research, and clinical care.
McMurry, Andrew J; Gilbert, Clint A; Reis, Ben Y; Chueh, Henry C; Kohane, Isaac S; Mandl, Kenneth D
2007-01-01
This study sought to define a scalable architecture to support the National Health Information Network (NHIN). This architecture must concurrently support a wide range of public health, research, and clinical care activities. The architecture fulfils five desiderata: (1) adopt a distributed approach to data storage to protect privacy, (2) enable strong institutional autonomy to engender participation, (3) provide oversight and transparency to ensure patient trust, (4) allow variable levels of access according to investigator needs and institutional policies, (5) define a self-scaling architecture that encourages voluntary regional collaborations that coalesce to form a nationwide network. Our model has been validated by a large-scale, multi-institution study involving seven medical centers for cancer research. It is the basis of one of four open architectures developed under funding from the Office of the National Coordinator of Health Information Technology, fulfilling the biosurveillance use case defined by the American Health Information Community. The model supports broad applicability for regional and national clinical information exchanges. This model shows the feasibility of an architecture wherein the requirements of care providers, investigators, and public health authorities are served by a distributed model that grants autonomy, protects privacy, and promotes participation.
Xu, Zixiang; Zheng, Ping; Sun, Jibin; Ma, Yanhe
2013-01-01
Gene knockout has been used as a common strategy to improve microbial strains for producing chemicals. Several algorithms are available to predict the target reactions to be deleted. Most of them apply mixed integer bi-level linear programming (MIBLP) based on metabolic networks, and use duality theory to transform the bi-level optimization problem of a large-scale MIBLP into a single-level program. However, the validity of this transformation has not been proved. The solution of an MIBLP depends on the structure of the inner problem. If the inner problem is continuous, the Karush-Kuhn-Tucker (KKT) conditions can be used to reformulate the MIBLP as a single-level problem. We adopt the KKT technique in our algorithm ReacKnock to attack the intractable solution of the MIBLP, demonstrated with the genome-scale metabolic network model of E. coli for producing various chemicals such as succinate, ethanol, and threonine. Compared to previous methods, our algorithm is fast, stable, and reliable in finding the optimal solutions for all the chemical products tested, and is able to provide all the alternative deletion strategies that lead to the same industrial objective. PMID:24348984
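ReacKnock itself reformulates the full MIBLP via the KKT conditions; as a much simpler illustration of the bi-level structure it solves (outer problem: choose knockouts to maximize product secretion; inner problem: the cell maximizes growth), the sketch below enumerates single knockouts on an invented two-metabolite toy network and evaluates the worst-case product flux among growth-optimal flux distributions. All reaction names, stoichiometry, and bounds here are hypothetical placeholders, not E. coli data.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical toy network (NOT E. coli): metabolites A, B; reactions
#   R0: substrate uptake -> A   (flux fixed at 10)
#   R1: A -> B
#   R2: B -> biomass            (capacity-limited at 5)
#   R3: B -> product (secreted)
#   R4: B -> waste (secreted)
S = np.array([
    # R0  R1  R2  R3  R4
    [ 1, -1,  0,  0,  0],   # A
    [ 0,  1, -1, -1, -1],   # B
], dtype=float)
BIO, PROD = 2, 3
base_bounds = [(10, 10), (0, 10), (0, 5), (0, 10), (0, 10)]

def worst_case_product(knockouts=()):
    """Inner problem of the bi-level formulation: the cell maximizes biomass;
    we then take the pessimistic (minimum) product flux among all
    growth-optimal flux distributions. Returns (growth, product) or None."""
    bounds = [((0, 0) if j in knockouts else base_bounds[j]) for j in range(5)]
    c = np.zeros(5); c[BIO] = -1.0                      # linprog minimizes
    r1 = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
    if not r1.success:
        return None
    mu = -r1.fun
    # Fix biomass at its optimum, then minimize product flux (worst case).
    A_eq = np.vstack([S, np.eye(5)[BIO]])
    b_eq = np.append(np.zeros(2), mu)
    c2 = np.zeros(5); c2[PROD] = 1.0
    r2 = linprog(c2, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return mu, r2.fun

# Outer problem by brute force: pick the single knockout of R1..R4 that
# maximizes the guaranteed product flux (deleting the waste branch R4
# couples product secretion to growth in this toy network).
best = max(((worst_case_product((j,)), (j,)) for j in range(1, 5)),
           key=lambda t: -1.0 if t[0] is None else t[0][1])
```

Real algorithms avoid this enumeration (which is exponential in the knockout budget) by turning the inner LP into its KKT conditions and solving one mixed-integer program, which is the intractability the abstract refers to.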
Modeling Learners' Social Centrality and Performance through Language and Discourse
ERIC Educational Resources Information Center
Dowell, Nia M.; Skrypnyk, Oleksandra; Joksimovic, Srecko; Graesser, Arthur C.; Dawson, Shane; Gaševic, Dragan; Hennis, Thieme A.; de Vries, Pieter; Kovanovic, Vitomir
2015-01-01
There is an emerging trend in higher education for the adoption of massive open online courses (MOOCs). However, despite this interest in learning at scale, there has been limited work investigating the impact MOOCs can play on student learning. In this study, we adopt a novel approach, using language and discourse as a tool to explore its…
The Pacific Northwest region vegetation and monitoring system.
Timothy A. Max; Hans T. Schreuder; John W. Hazard; Daniel D. Oswald; John Teply; Jim. Alegria
1996-01-01
A grid sampling strategy was adopted for broad-scale inventory and monitoring of forest and range vegetation on National Forest System lands in the Pacific Northwest Region, USDA Forest Service. This paper documents the technical details of the adopted design and discusses alternative sampling designs that were considered. A less technical description of the selected...
Attitudes and Factors that Influence Decision-Making in Adoption from Care in Northern Ireland
ERIC Educational Resources Information Center
Barr, Lily
2004-01-01
This is a small-scale local study aimed at exploring the thinking and attitudes that inform or influence decision-making around proceeding to adoption. It also sought to explore or establish practitioners' views of potential tensions in this area and potential supports. It included open questions, attitudinal questions and required respondents to…
What Are Some Alternatives for Working Within a Regionally Adopted Science Framework?
ERIC Educational Resources Information Center
Perkes, Victor A.
Alternatives for working within a regionally adopted framework for selecting an elementary school science program are considered in this paper. The alternatives are ranked on a scale from 0 to 5 in increasing levels of modifying a set instructional pattern: Level 0, typified by indifference to any consistent program in science; Level 1, a complete…
Kim, Ki Joon
2014-01-01
Abstract This study explores the psychological effects of screen size on smartphone adoption by proposing an extended Technology Acceptance Model (TAM) that integrates an empirical comparison between large and small screens with perceived control, affective quality, and the original TAM constructs. A structural equation modeling analysis was conducted on data collected from a between-subjects experiment (N=130) in which users performed a web-based task on a smartphone with either a large (5.3 inches) or a small (3.7 inches) screen. Results show that a large screen, compared to a small screen, is likely to lead to higher smartphone adoption by simultaneously promoting both the utilitarian and hedonic qualities of smartphones, which in turn positively influence perceived ease of use of—and attitude toward—the device respectively. Implications and directions for future research are discussed. PMID:24694112
Jenkins, Marion W; Cairncross, Sandy
2010-03-01
Latrine diffusion patterns across 502 villages in Benin, West Africa, were analysed to explore factors driving initial and increasing levels of household adoption in low-coverage rural areas of sub-Saharan Africa. Variables explaining adoption related to population density, size, infrastructure/services, non-agricultural occupations, road and urban proximity, and the nearby latrine adoption rate, capturing differences in the physical and social environment, lifestyles and latrine exposure involved in stimulating status/prestige and well-being reasons for latrine adoption. Contagion was most important in explaining adoption initiation. Cluster analysis revealed four distinct village typologies of demand for latrines which provide a framework for tailoring promotional interventions to better match the different sanitation demand characteristics of communities in scaling-up sanitation development and promotion programmes.
NASA Astrophysics Data System (ADS)
Eekhout, Joris P. C.; de Vente, Joris
2017-04-01
Climate change has strong implications for many essential ecosystem services, such as provision of drinking and irrigation water, soil erosion and flood control. Particularly large impacts are expected in the Mediterranean, which is already characterised by frequent floods and droughts. The projected higher frequency of extreme weather events under climate change will lead to an increase of plant water stress, reservoir inflow and sediment yield. Sustainable Land Management (SLM) practices are increasingly promoted as a climate change adaptation strategy and to increase resilience against extreme events. However, surprisingly little is known about their impacts and trade-offs on ecosystem services at regional scales. The aim of this research is to provide insight into the potential of SLM for climate change adaptation, focusing on catchment-scale impacts on soil and water resources. We applied a spatially distributed hydrological model (SPHY), coupled with an erosion model (MUSLE), to the Segura River catchment (15,978 km2) in SE Spain. We ran the model for three periods: one reference (1981-2000) and two future scenarios (2031-2050 and 2081-2100). Climate input data for the future scenarios were based on output from 9 Regional Climate Models under two emission scenarios (RCP 4.5 and RCP 8.5). Realistic scenarios of SLM practices were developed based on a local stakeholder consultation process. The evaluated SLM scenarios focussed on reduced tillage and organic amendments under tree and cereal crops, covering 24% and 15% of the catchment, respectively. In the reference scenario, implementation of SLM at the field scale led to an increase of the infiltration capacity of the soil and a reduction of surface runoff of up to 29%, eventually reducing catchment-scale reservoir inflow by 6%. This led to a reduction of field-scale sediment yield of more than 50% and a reduced catchment-scale sediment flux to reservoirs of 5%.
SLM was able to fully mitigate the effect of climate change at the field-scale and partly at the catchment-scale. Therefore, we conclude that large-scale adoption of SLM can effectively contribute to climate change adaptation by increasing the soil infiltration capacity, the soil water retention capacity and soil moisture content in the rootzone, leading to less crop stress. These findings of regional scale impacts of SLM are of high relevance for land-owners, -managers and policy makers to design effective climate change adaptation strategies.
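The MUSLE erosion component used in the study above has a compact closed form. As a rough illustration of why runoff reductions translate nonlinearly into sediment-yield reductions, here is a sketch in the style of Williams' MUSLE as commonly written in SWAT-type models; the factor values below are invented placeholders, not the Segura catchment parameters, and the exact variable set in the SPHY-MUSLE coupling may differ.

```python
def musle_sediment_yield(q_surf, q_peak, area_ha,
                         K=0.3, C=0.2, P=1.0, LS=1.2):
    """Event sediment yield (metric tons) in the style of Williams' MUSLE:
        sed = 11.8 * (q_surf * q_peak * area_ha) ** 0.56 * K * C * P * LS
    q_surf: surface runoff volume (mm H2O), q_peak: peak runoff rate (m^3/s),
    area_ha: drainage area (ha). K, C, P, LS are the usual USLE-style soil
    erodibility, cover, support practice, and topographic factors."""
    return 11.8 * (q_surf * q_peak * area_ha) ** 0.56 * K * C * P * LS

# Runoff enters through a 0.56 exponent: reducing runoff volume alone by 29%
# (the field-scale figure reported above) cuts sediment yield by only ~17%,
# so the >50% field-scale reduction must also reflect SLM-driven changes in
# the cover and soil factors (and in the peak runoff rate).
baseline = musle_sediment_yield(25.0, 3.0, 120.0)
reduced = musle_sediment_yield(25.0 * 0.71, 3.0, 120.0)
```

Because all factors enter multiplicatively, catchment-scale totals are usually obtained by evaluating the expression per field or hydrological response unit and routing the results downstream.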
ERIC Educational Resources Information Center
Aarons, Gregory A.; Glisson, Charles; Hoagwood, Kimberly; Kelleher, Kelly; Landsverk, John; Cafri, Guy
2010-01-01
The Evidence-Based Practice Attitude Scale (EBPAS) assesses mental health and social service provider attitudes toward adopting evidence-based practices. Scores on the EBPAS derive from 4 subscales (i.e., Appeal, Requirements, Openness, and Divergence) as well as the total scale, and preliminary studies have linked EBPAS scores to clinic structure…