NASA Technical Reports Server (NTRS)
Morgan, R. P.; Singh, J. P.; Rothenberg, D.; Robinson, B. E.
1975-01-01
The needs to be served, the subsectors in which the system might be used, the technology employed, and the prospects for future utilization of an educational telecommunications delivery system are described and analyzed. Educational subsectors are analyzed with emphasis on the current status and trends within each subsector. Issues which affect future development, and prospects for future use of media, technology, and large-scale electronic delivery within each subsector are included. Information on technology utilization is presented. Educational telecommunications services are identified and grouped into categories: public television and radio, instructional television, computer aided instruction, computer resource sharing, and information resource sharing. Technology based services, their current utilization, and factors which affect future development are stressed. The role of communications satellites in providing these services is discussed. Efforts to analyze and estimate future utilization of large-scale educational telecommunications are summarized. Factors which affect future utilization are identified. Conclusions are presented.
Grid-Enabled Quantitative Analysis of Breast Cancer
2010-10-01
large-scale, multi-modality computerized image analysis. The central hypothesis of this research is that large-scale image analysis for breast cancer... research, we designed a pilot study utilizing large-scale parallel Grid computing harnessing nationwide infrastructure for medical image analysis. Also
2015-09-30
DISTRIBUTION STATEMENT A. Approved for public release; distribution is unlimited. Large Scale Density Estimation of Blue and Fin Whales ... Utilizing Sparse Array Data to Develop and Implement a New Method for Estimating Blue and Fin Whale Density. Len Thomas & Danielle Harris, Centre... to develop and implement a new method for estimating blue and fin whale density that is effective over large spatial scales and is designed to cope
Li, Chen; Yongbo, Lv; Chi, Chen
2015-01-01
Based on data from 30 provincial regions in China, an assessment and empirical analysis was carried out on the utilization and sharing of large-scale scientific equipment, using a comprehensive assessment model built on three dimensions: equipment, utilization, and sharing. The assessment results were interpreted in light of relevant policies. The results showed that, on the whole, the overall development level in the provincial regions of eastern and central China is higher than that in western China, mostly because of the large gap among provincial regions in equipment levels. In terms of utilization and sharing, however, some western provincial regions, such as Ningxia, perform well, which is worthy of attention. Policy adjustments targeted at regional differentiation, raising the capability of equipment management personnel, improving the sharing and cooperation platform, and promoting the establishment of open sharing funds are all important measures to promote the utilization and sharing of large-scale scientific equipment and to narrow the gap among regions. PMID:25937850
Charting the Emergence of Corporate Procurement of Utility-Scale PV |
Jeffrey J. Cook. Though most large-scale solar photovoltaic (PV) deployment has been driven by utility procurement, corporate interest in renewables is growing as more companies are recognizing that solar PV can provide clean power. [Figures: map of the United States highlighting states with utility-scale solar PV purchasing options; Figure 2: states with ...]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bolinger, Mark; Seel, Joachim
2015-09-01
Other than the nine Solar Energy Generation Systems (“SEGS”) parabolic trough projects built in the 1980s, virtually no large-scale or “utility-scale” solar projects – defined here to include any ground-mounted photovoltaic (“PV”), concentrating photovoltaic (“CPV”), or concentrating solar thermal power (“CSP”) project larger than 5 MW AC – existed in the United States prior to 2007. By 2012 – just five years later – utility-scale had become the largest sector of the overall PV market in the United States, a distinction that was repeated in both 2013 and 2014 and that is expected to continue for at least the next few years. Over this same short period, CSP also experienced a bit of a renaissance in the United States, with a number of large new parabolic trough and power tower systems – some including thermal storage – achieving commercial operation. With this critical mass of new utility-scale projects now online and in some cases having operated for a number of years (generating not only electricity, but also empirical data that can be mined), the rapidly growing utility-scale sector is ripe for analysis. This report, the third edition in an ongoing annual series, meets this need through in-depth, annually updated, data-driven analysis of not just installed project costs or prices – i.e., the traditional realm of solar economics analyses – but also operating costs, capacity factors, and power purchase agreement (“PPA”) prices from a large sample of utility-scale solar projects in the United States. Given its current dominance in the market, utility-scale PV also dominates much of this report, though data from CPV and CSP projects are presented where appropriate.
Charting the Emergence of Corporate Procurement of Utility-Scale PV
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heeter, Jenny S.; Cook, Jeffrey J.; Bird, Lori A.
Corporations and other institutions have contracted for more than 2,300 MW of off-site solar, using power purchase agreements, green tariffs, or bilateral deals with utilities. This paper examines the benefits, challenges, and outlooks for large-scale off-site solar purchasing in the United States. Pathways differ based on where they are available, the hedge value they can provide, and their ease of implementation. The paper features case studies of an aggregate PPA (Massachusetts Institute of Technology, Boston Medical Center, and Post Office Square), a corporation exiting its incumbent utility (MGM Resorts), a utility offering large-scale renewables to corporate customers (Alabama Power's Renewable Procurement Program), and a company with approval to sell energy into wholesale markets (Google Energy Inc.).
Wind power for the electric-utility industry: Policy incentives for fuel conservation
NASA Astrophysics Data System (ADS)
March, F.; Dlott, E. H.; Korn, D. H.; Madio, F. R.; McArthur, R. C.; Vachon, W. A.
1982-06-01
A systematic method for evaluating the economics of solar-electric/conservation technologies as fuel-savings investments for electric utilities in the presence of changing federal incentive policies is presented. The focus is on wind energy conversion systems (WECS) as the solar technology closest to near-term large scale implementation. Commercially available large WECS are described, along with computer models to calculate the economic impact of the inclusion of WECS as 10% of the base-load generating capacity on a grid. A guide to legal structures and relationships which impinge on large-scale WECS utilization is developed, together with a quantitative examination of the installation of 1000 MWe of WECS capacity by a utility in the northeast states. Engineering and financial analyses were performed, with results indicating government policy changes necessary to encourage the entrance of utilities into the field of windpower utilization.
Lai, Hsien-Tang; Kung, Pei-Tseng; Su, Hsun-Pi; Tsai, Wen-Chen
2014-09-01
Few studies with large samples have been conducted on the utilization of dental calculus scaling among people with physical or mental disabilities. This study aimed to investigate the utilization of dental calculus scaling among the national disabled population, using nationwide data from 2006 to 2008. Descriptive analysis and logistic regression were performed to identify factors influencing dental calculus scaling utilization. The dental calculus scaling utilization rate among people with physical or mental disabilities was 16.39%, and the annual utilization frequency was 0.2 times. The utilization rate was higher among female and non-aboriginal samples. It decreased with increasing age and disability severity, and increased with income, education level, urbanization of residential area, and number of chronic illnesses. Gender, age, ethnicity (aboriginal or non-aboriginal), education level, urbanization of residential area, income, catastrophic illnesses, chronic illnesses, disability type, and disability severity all significantly influenced the dental calculus scaling utilization rate. Copyright © 2014 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Burstein, Leigh
Two specific methods of analysis in large-scale evaluations are considered: structural equation modeling and selection modeling/analysis of non-equivalent control group designs. Their utility in large-scale educational program evaluation is discussed. The examination of these methodological developments indicates how people (evaluators,…
Overview and current status of DOE/UPVG's TEAM-UP Program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hester, S.
1995-11-01
An overview is given of the Utility Photovoltaic Group, whose mission is to accelerate the use of small-scale and large-scale applications of photovoltaics for the benefit of electric utilities and their customers.
Grid-Enabled Quantitative Analysis of Breast Cancer
2009-10-01
large-scale, multi-modality computerized image analysis. The central hypothesis of this research is that large-scale image analysis for breast cancer... pilot study to utilize large-scale parallel Grid computing to harness the nationwide cluster infrastructure for optimization of medical image... analysis parameters. Additionally, we investigated the use of cutting-edge data analysis/mining techniques as applied to Ultrasound, FFDM, and DCE-MRI Breast
Sakurai, Hidehiro; Masukawa, Hajime; Kitashima, Masaharu; Inoue, Kazuhito
2010-01-01
In order to decrease CO2 emissions from the burning of fossil fuels, the development of new renewable energy sources sufficiently large in quantity is essential. To meet this need, we propose large-scale H2 production on the sea surface utilizing cyanobacteria. Although many of the relevant technologies are in an early stage of development, this chapter briefly examines the feasibility of such H2 production, in order to illustrate that under certain conditions large-scale photobiological H2 production can be viable. Assuming that solar energy is converted to H2 at 1.2% efficiency, the future cost of H2 can be estimated to be about 11 (pipelines) and 26.4 (compression and marine transportation) cents kWh⁻¹, respectively.
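As a rough illustration of what the quoted 1.2% solar-to-H2 conversion efficiency implies for areal yield (the annual insolation figure below is an assumed, illustrative value, not one given in the abstract):

$$
E_{\mathrm{H_2}} \approx \eta \, G \approx 0.012 \times 1{,}750\ \mathrm{kWh\,m^{-2}\,yr^{-1}} \approx 21\ \mathrm{kWh\,m^{-2}\,yr^{-1}},
$$

so producing on the order of 10 TWh yr⁻¹ of H2 would require roughly 5×10⁸ m² (about 500 km²) of sea-surface culture area under these assumptions.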
Large Scale Density Estimation of Blue and Fin Whales (LSD)
2014-09-30
172. McDonald, MA, Hildebrand, JA, and Mesnick, S (2009). Worldwide decline in tonal frequencies of blue whale songs. Endangered Species Research 9... DISTRIBUTION STATEMENT A. Approved for public release; distribution is unlimited. Large Scale Density Estimation of Blue and Fin Whales ... estimating blue and fin whale density that is effective over large spatial scales and is designed to cope with spatial variation in animal density utilizing
NREL, California Independent System Operator, and First Solar Demonstrate Essential Reliability Services with Utility-Scale Solar
NREL, the California Independent System Operator (CAISO), and First Solar conducted a demonstration project on a large utility-scale photovoltaic (PV) power plant in California to
Charting the Emergence of Corporate Procurement of Utility-Scale PV
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heeter, Jenny S.; Cook, Jeffrey J.; Bird, Lori A.
Through July 2017, corporate customers contracted for more than 2,300 MW of utility-scale solar. This paper examines the benefits, challenges, and outlooks for large-scale off-site solar purchasing through four pathways: PPAs, retail choice, utility partnerships (green tariffs and bilateral contracts with utilities), and by becoming a licensed wholesale seller of electricity. Each pathway differs based on where in the United States it is available, the value provided to a corporate off-taker, and the ease of implementation. The paper concludes with a discussion of future pathway comparison, noting that to deploy more corporate off-site solar, new procurement pathways are needed.
Utilization of Large Scale Surface Models for Detailed Visibility Analyses
NASA Astrophysics Data System (ADS)
Caha, J.; Kačmařík, M.
2017-11-01
This article demonstrates the utilization of large-scale surface models with small spatial resolution and high accuracy, acquired from Unmanned Aerial Vehicle scanning, for visibility analyses. The importance of large-scale data for visibility analyses at the local scale, where the detail of the surface model is the most defining factor, is described. The focus is not only on classic Boolean visibility, which is usually determined within GIS, but also on so-called extended viewsheds, which aim to provide more information about visibility. A case study with examples of visibility analyses was performed on the river Opava, near the city of Ostrava (Czech Republic). The multiple Boolean viewshed analysis and global horizon viewshed were calculated to determine the most prominent features and visibility barriers of the surface. Besides that, an extended viewshed showing the angle difference above the local horizon, which describes the angular height of the target area above the barrier, is shown. The case study proved that large-scale models are an appropriate data source for visibility analyses at the local level. The discussion summarizes possible future applications and further development directions of visibility analyses.
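To make the viewshed terminology concrete, the sketch below computes, for a single observer-target pair on a raster surface model, both the Boolean visibility result and the angle difference above the local horizon described in the abstract. It is a minimal Python illustration with assumed array inputs, not the authors' GIS workflow.

```python
import numpy as np

def angular_visibility(dem, cell_size, observer, target, observer_height=1.6):
    """Minimal line-of-sight sketch on a raster DEM (hypothetical inputs).

    Returns (visible, angle_margin): `visible` is the Boolean viewshed result
    for one target cell; `angle_margin` is the vertical-angle difference (rad)
    between the target and the highest intervening horizon, i.e. an
    'extended viewshed' style quantity.
    """
    (r0, c0), (r1, c1) = observer, target
    z0 = dem[r0, c0] + observer_height
    n = int(max(abs(r1 - r0), abs(c1 - c0)))
    # sample the DEM along the straight line between observer and target
    rows = np.linspace(r0, r1, n + 1).round().astype(int)
    cols = np.linspace(c0, c1, n + 1).round().astype(int)
    dists = np.hypot(rows - r0, cols - c0) * cell_size
    angles = np.arctan2(dem[rows, cols] - z0, np.maximum(dists, 1e-9))
    target_angle = angles[-1]
    horizon_angle = angles[1:-1].max() if n > 1 else -np.inf
    margin = target_angle - horizon_angle   # > 0: target rises above the barrier
    return margin > 0, margin

# illustrative usage on a synthetic surface
dem = np.random.default_rng(0).uniform(200, 220, (100, 100))
print(angular_visibility(dem, cell_size=0.5, observer=(10, 10), target=(80, 60)))
```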
A large-scale photonic node architecture that utilizes interconnected OXC subsystems.
Iwai, Yuto; Hasegawa, Hiroshi; Sato, Ken-ichi
2013-01-14
We propose a novel photonic node architecture that is composed of interconnected small-scale optical cross-connect subsystems. We also developed an efficient dynamic network control algorithm that complies with a restriction on the number of intra-node fibers used for subsystem interconnection. Numerical evaluations verify that the proposed architecture offers almost the same performance as the equivalent single large-scale cross-connect switch, while enabling substantial hardware scale reductions.
Wen J. Wang; Hong S. He; Martin A. Spetich; Stephen R. Shifley; Frank R. Thompson III; David R. Larsen; Jacob S. Fraser; Jian Yang
2013-01-01
Two challenges confronting forest landscape models (FLMs) are how to simulate fine, stand-scale processes while making large-scale (i.e., >10^7 ha) simulation possible, and how to take advantage of extensive forest inventory data such as U.S. Forest Inventory and Analysis (FIA) data to initialize and constrain model parameters. We present the LANDIS PRO model that...
Integrating market processes into utility resource planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kahn, E.P.
1992-11-01
Integrated resource planning has resulted in an abundance of alternatives for meeting existing and new demand for electricity services: (1) utility demand-side management (DSM) programs, (2) DSM bidding, (3) competitive bidding for private power supplies, (4) utility re-powering, and (5) new utility construction. Each alternative relies on a different degree of planning for implementation and, therefore, each alternative relies on markets to a greater or lesser degree. This paper shows how the interaction of planning processes and market forces results in resource allocations among the alternatives. The discussion focuses on three phenomena that are driving forces behind the 'unanticipated consequences' of contemporary integrated resource planning efforts. These forces are: (1) large-scale DSM efforts, (2) customer bypass, and (3) large-scale independent power projects. 22 refs., 3 figs., 2 tabs.
Robust Coordination for Large Sets of Simple Rovers
NASA Technical Reports Server (NTRS)
Tumer, Kagan; Agogino, Adrian
2006-01-01
The ability to coordinate sets of rovers in an unknown environment is critical to the long-term success of many of NASA's exploration missions. Such coordination policies must have the ability to adapt in unmodeled or partially modeled domains and must be robust against environmental noise and rover failures. In addition, such coordination policies must accommodate a large number of rovers without excessive and burdensome hand-tuning. In this paper we present a distributed coordination method that addresses these issues in the domain of controlling a set of simple rovers. The application of these methods allows reliable and efficient robotic exploration in dangerous, dynamic, and previously unexplored domains. Most control policies for space missions are directly programmed by engineers or created through the use of planning tools, and are appropriate for single-rover missions or missions requiring the coordination of a small number of rovers. Such methods typically require significant amounts of domain knowledge and are difficult to scale to large numbers of rovers. The method described in this article aims to address cases where a large number of rovers need to coordinate to solve a complex, time-dependent problem in a noisy environment. In this approach, each rover decomposes a global utility, representing the overall goal of the system, into rover-specific utilities that properly assign credit to the rover's actions. Each rover then has the responsibility to create a control policy that maximizes its own rover-specific utility. We show a method of creating rover utilities that are "aligned" with the global utility, such that when the rovers maximize their own utility, they also maximize the global utility. In addition, we show that our method creates rover utilities that allow the rovers to create their control policies quickly and reliably. Our distributed learning method allows large sets of rovers to be used in unmodeled domains, while providing robustness against rover failures and changing environments. In experimental simulations we show that our method scales well with large numbers of rovers, in addition to being robust against noisy sensor inputs and noisy servo control. The results show that our method is able to scale to large numbers of rovers and achieves up to 400% performance improvement over standard machine learning methods.
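The decomposition of a global utility into rover-specific utilities can be illustrated with a difference-utility style credit assignment, in which each rover is rewarded by how much the global utility would drop if its contribution were removed. The sketch below is a simplified stand-in with hypothetical point-of-interest values and visit sets; the exact utility formulation used in the paper may differ.

```python
def global_utility(poi_values, visits):
    """Hypothetical system-level objective: total value of the points of
    interest (POIs) observed by at least one rover (duplicates add nothing)."""
    observed = set().union(*visits) if visits else set()
    return sum(poi_values[p] for p in observed)

def difference_utility(poi_values, visits, i):
    """Rover i's credit: global utility with rover i minus global utility
    with rover i removed. Maximizing this also raises the global utility,
    which is the 'alignment' property described in the abstract."""
    without_i = visits[:i] + visits[i + 1:]
    return global_utility(poi_values, visits) - global_utility(poi_values, without_i)

# illustrative usage: rover 1 alone observes POI "B", so it gets credit 3.0;
# POI "A" is observed twice, so neither observer gets credit for it.
poi_values = {"A": 5.0, "B": 3.0}
visits = [{"A"}, {"A", "B"}, set()]
print([difference_utility(poi_values, visits, i) for i in range(len(visits))])
```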
Examining the Invisible Loop: Tutors in Large Scale Teacher Development Programmes
ERIC Educational Resources Information Center
Bansilal, Sarah
2014-01-01
The recent curriculum changes in the South African education system have necessitated the development of large scale in-service training programmes for teachers. For some teacher training providers this has resulted in utilizing the services of tutors or facilitators from the various regions to deliver the programme. This article examines the role…
ERIC Educational Resources Information Center
Turner, Henry J.
2014-01-01
This dissertation of practice utilized a multiple case-study approach to examine distributed leadership within five school districts that were attempting to gain acceptance of a large-scale 1:1 technology initiative. Using frame theory and distributed leadership theory as theoretical frameworks, this study interviewed each district's…
Background/Questions/Methods As interest in continental-scale ecology increases to address large-scale ecological problems, ecologists need indicators of complex processes that can be collected quickly at many sites across large areas. We are exploring the utility of stable isot...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trujillo, Angelina Michelle
Strategy, Planning, Acquiring – very large-scale computing platforms come and go, and planning for immensely scalable machines often precedes actual procurement by 3 years; procurement can take another year or more. Integration – after acquisition, machines must be integrated into the computing environments at LANL, including connection to scalable storage via large-scale storage networking and assurance of correct and secure operation. Management and Utilization – ongoing operations, maintenance, and troubleshooting of the hardware and systems software at massive scale are required.
Hu, Michael Z.; Zhu, Ting
2015-12-04
This study reviews the experimental synthesis and engineering developments focused on various green approaches and large-scale process production routes for quantum dots. Fundamental process engineering principles are illustrated. In contrast to the small-scale hot-injection method, our discussion focuses on the non-injection route, which can be scaled up with engineered stirred-tank reactors. In addition, applications that demand quantum dots as "commodity" chemicals are discussed, including solar cells and solid-state lighting.
A novel iron-lead redox flow battery for large-scale energy storage
NASA Astrophysics Data System (ADS)
Zeng, Y. K.; Zhao, T. S.; Zhou, X. L.; Wei, L.; Ren, Y. X.
2017-04-01
The redox flow battery (RFB) is one of the most promising large-scale energy storage technologies for the massive utilization of intermittent renewables, especially wind and solar energy. This work presents a novel redox flow battery that utilizes inexpensive and abundant Fe(II)/Fe(III) and Pb/Pb(II) redox couples as redox materials. Experimental results show that both the Fe(II)/Fe(III) and Pb/Pb(II) redox couples have fast electrochemical kinetics in methanesulfonic acid, and that the coulombic efficiency and energy efficiency of the battery are, respectively, as high as 96.2% and 86.2% at 40 mA cm⁻². Furthermore, the battery exhibits stable performance in terms of efficiencies and discharge capacities during cycle testing. The inexpensive redox materials, fast electrochemical kinetics, and stable cycle performance make the present battery a promising candidate for large-scale energy storage applications.
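Assuming the usual decomposition of round-trip energy efficiency into coulombic and voltage efficiency (a standard relation, not stated explicitly in the abstract), the reported figures imply a voltage efficiency of roughly

$$
\eta_V \approx \frac{\eta_E}{\eta_C} = \frac{86.2\%}{96.2\%} \approx 89.6\% \quad \text{at } 40\ \mathrm{mA\,cm^{-2}}.
$$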
Wind Power Innovation Enables Shift to Utility-Scale - Continuum Magazine
In the 1930s, a farmer in South Dakota built a small wind turbine on his farm; today's utility-scale turbines generate enough electricity to power thousands of homes. [Aerial photo: the Siemens utility-scale wind turbine at the National Wind Technology Center, with mountains in the background.]
ERIC Educational Resources Information Center
Newman, J. N.
1979-01-01
Discussed is the utilization of surface ocean waves as a potential source of power. Simple and large-scale wave power devices and conversion systems are described. Alternative utilizations, environmental impacts, and future prospects of this alternative energy source are detailed. (BT)
ERIC Educational Resources Information Center
Morgan, Robert P.; And Others
Opportunities for utilizing large-scale educational telecommunications delivery systems to aid in meeting needs of U.S. education are extensively analyzed in a NASA-funded report. Status, trends, and issues in various educational subsectors are assessed, along with current use of telecommunications and technology and factors working for and…
Stability of large-scale systems.
NASA Technical Reports Server (NTRS)
Siljak, D. D.
1972-01-01
The purpose of this paper is to present the results obtained in stability study of large-scale systems based upon the comparison principle and vector Liapunov functions. The exposition is essentially self-contained, with emphasis on recent innovations which utilize explicit information about the system structure. This provides a natural foundation for the stability theory of dynamic systems under structural perturbations.
ERIC Educational Resources Information Center
Strietholt, Rolf; Scherer, Ronny
2018-01-01
The present paper aims to discuss how data from international large-scale assessments (ILSAs) can be utilized and combined, even with other existing data sources, in order to monitor educational outcomes and study the effectiveness of educational systems. We consider different purposes of linking data, namely, extending outcomes measures,…
Overview of Opportunities for Co-Location of Solar Energy Technologies and Vegetation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Macknick, Jordan; Beatty, Brenda; Hill, Graham
2013-12-01
Large-scale solar facilities have the potential to contribute significantly to national electricity production. Many solar installations are large-scale or utility-scale, with a capacity over 1 MW and connected directly to the electric grid. Large-scale solar facilities offer an opportunity to achieve economies of scale in solar deployment, yet there have been concerns about the amount of land required for solar projects and the impact of solar projects on local habitat. During the site preparation phase for utility-scale solar facilities, developers often grade land and remove all vegetation to minimize installation and operational costs, prevent plants from shading panels, and minimize potential fire or wildlife risks. However, the common site preparation practice of removing vegetation can be avoided in certain circumstances, and there have been successful examples where solar facilities have been co-located with agricultural operations or have native vegetation growing beneath the panels. In this study we outline some of the impacts that large-scale solar facilities can have on the local environment, provide examples of installations where impacts have been minimized through co-location with vegetation, characterize the types of co-location, and give an overview of the potential benefits from co-location of solar energy projects and vegetation. The varieties of co-location can be replicated or modified for site-specific use at other solar energy installations around the world. We conclude with opportunities to improve upon our understanding of ways to reduce the environmental impacts of large-scale solar installations.
Large-area photogrammetry based testing of wind turbine blades
NASA Astrophysics Data System (ADS)
Poozesh, Peyman; Baqersad, Javad; Niezrecki, Christopher; Avitabile, Peter; Harvey, Eric; Yarala, Rahul
2017-03-01
An optically based sensing system that can measure the displacement and strain over essentially the entire area of a utility-scale blade leads to a measurement system that can significantly reduce the time and cost associated with traditional instrumentation. This paper evaluates the performance of conventional three-dimensional digital image correlation (3D DIC) and three-dimensional point tracking (3DPT) approaches over the surface of wind turbine blades and proposes a multi-camera measurement system using dynamic spatial data stitching. The potential advantages of the proposed approach include: (1) full-field measurement distributed over a very large area, (2) elimination of time-consuming wiring and expensive sensors, and (3) elimination of the need for large-channel data acquisition systems. There are several challenges associated with extending the capability of a standard 3D DIC system to measure the entire surface of utility-scale blades to extract distributed strain, deflection, and modal parameters. This paper addresses some of these difficulties, including: (1) assessing the accuracy of the 3D DIC system in measuring full-field distributed strain and displacement over a large area, (2) understanding the geometrical constraints associated with a wind turbine testing facility (e.g. lighting, working distance, and speckle pattern size), (3) evaluating the performance of the dynamic stitching method to combine two different fields of view by extracting modal parameters from aligned point clouds, and (4) determining the feasibility of employing output-only system identification to estimate modal parameters of a utility-scale wind turbine blade from optically measured data. Within the current work, the results of an optical measurement (one stereo-vision system) performed over a large area of a 50-m utility-scale blade subjected to quasi-static and cyclic loading are presented. Blade certification and testing is typically performed using the International Electrotechnical Commission standard IEC 61400-23. For static tests, the blade is pulled in either the flap-wise or edge-wise direction to measure deflection or distributed strain at a few limited locations on a large-sized blade. Additionally, the paper explores the error associated with using a multi-camera system (two stereo-vision systems) in measuring 3D displacement and extracting structural dynamic parameters on a mock setup emulating a utility-scale wind turbine blade. The results obtained in this paper reveal that the multi-camera measurement system has the potential to identify the dynamic characteristics of a very large structure.
Maeda, Jin; Suzuki, Tatsuya; Takayama, Kozo
2012-12-01
A large-scale design space was constructed using a Bayesian estimation method with a small-scale design of experiments (DoE) and small sets of large-scale manufacturing data, without enforcing a large-scale DoE. The small-scale DoE was conducted using various Froude numbers (X1) and blending times (X2) in the lubricant blending process for theophylline tablets. The response surfaces, design space, and their reliability for the compression rate of the powder mixture (Y1), tablet hardness (Y2), and dissolution rate (Y3) on the small scale were calculated using multivariate spline interpolation, a bootstrap resampling technique, and self-organizing map clustering. A constant Froude number was applied as the scale-up rule. Three experiments under an optimal condition and two experiments under other conditions were performed on the large scale. The response surfaces on the small scale were corrected to those on the large scale by Bayesian estimation using the large-scale results. Large-scale experiments under three additional sets of conditions showed that the corrected design space was more reliable than that on the small scale, even if there was some discrepancy in pharmaceutical quality between the manufacturing scales. This approach is useful for setting up a design space in pharmaceutical development when a DoE cannot be performed at a commercial large manufacturing scale.
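The kind of Bayesian correction described here can be sketched with a conjugate Bayesian linear-regression update, in which coefficients fitted to the small-scale DoE act as the prior mean and a handful of large-scale runs shift the response surface. The model form, variances, and numbers below are illustrative assumptions; the paper itself uses spline response surfaces with bootstrap and self-organizing-map steps.

```python
import numpy as np

def bayes_update_coeffs(X_large, y_large, beta_prior, prior_var=1.0, noise_var=0.1):
    """Posterior-mean coefficients for the large-scale response surface,
    given a Gaussian prior centered on the small-scale fit (illustrative)."""
    p = len(beta_prior)
    prior_prec = np.eye(p) / prior_var                       # prior precision
    post_prec = prior_prec + X_large.T @ X_large / noise_var
    return np.linalg.solve(
        post_prec, prior_prec @ beta_prior + X_large.T @ y_large / noise_var
    )

def design(x1, x2):
    # simple response-surface basis: intercept, X1, X2, X1*X2 (illustrative)
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2])

beta_small = np.array([80.0, 2.0, -1.5, 0.3])    # fitted to the small-scale DoE (made-up)
X_big = design(np.array([0.5, 1.0, 1.5]), np.array([3.0, 5.0, 5.0]))
y_big = np.array([82.0, 85.5, 84.0])             # three large-scale runs (made-up)
beta_large = bayes_update_coeffs(X_big, y_big, beta_small)
print(beta_large)
```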
A Web-based Distributed Voluntary Computing Platform for Large Scale Hydrological Computations
NASA Astrophysics Data System (ADS)
Demir, I.; Agliamzanov, R.
2014-12-01
Distributed volunteer computing can enable researchers and scientists to form large parallel computing environments that utilize the computing power of the millions of computers on the Internet, and use them to run large-scale environmental simulations and models to serve the common good of local communities and the world. Recent developments in web technologies and standards allow client-side scripting languages to run at speeds close to native applications and to utilize the power of Graphics Processing Units (GPUs). Using a client-side scripting language like JavaScript, we have developed an open distributed computing framework that makes it easy for researchers to write their own hydrologic models and run them on volunteer computers. Website owners can easily enable their sites so that visitors can volunteer their computer resources to run advanced hydrological models and simulations. The web-based system allows users to start volunteering their computational resources within seconds, without installing any software. The framework distributes the model simulation to thousands of nodes in small spatial and computational units. A relational database system is utilized for managing data connections and queue management for the distributed computing nodes. In this paper, we present a web-based distributed volunteer computing platform that enables large-scale hydrological simulations and model runs in an open and integrated environment.
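A minimal sketch of the server-side task queue that such a volunteer-computing platform needs: simulation subtasks are queued in a relational database, checked out by volunteer nodes, and their results reported back. The table layout, field names, and use of SQLite are illustrative assumptions, not the authors' schema.

```python
import json
import sqlite3
import uuid

# illustrative queue backed by a relational database (SQLite stand-in)
db = sqlite3.connect("tasks.db")
db.execute("""CREATE TABLE IF NOT EXISTS tasks (
    id TEXT PRIMARY KEY, params TEXT, status TEXT, result TEXT)""")

def submit(params):
    """Queue one simulation subtask (e.g. a subdomain and time slice)."""
    task_id = str(uuid.uuid4())
    db.execute("INSERT INTO tasks VALUES (?, ?, ?, ?)",
               (task_id, json.dumps(params), "queued", None))
    db.commit()
    return task_id

def checkout():
    """Hand the next queued subtask to a volunteer node, marking it running."""
    row = db.execute("SELECT id, params FROM tasks WHERE status='queued' LIMIT 1").fetchone()
    if row is None:
        return None
    db.execute("UPDATE tasks SET status='running' WHERE id=?", (row[0],))
    db.commit()
    return {"id": row[0], "params": json.loads(row[1])}

def report(task_id, result):
    """Store the result returned by a volunteer node."""
    db.execute("UPDATE tasks SET status='done', result=? WHERE id=?",
               (json.dumps(result), task_id))
    db.commit()
```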
Jackin, Boaz Jessie; Watanabe, Shinpei; Ootsu, Kanemitsu; Ohkawa, Takeshi; Yokota, Takashi; Hayasaki, Yoshio; Yatagai, Toyohiko; Baba, Takanobu
2018-04-20
A parallel computation method for large-size Fresnel computer-generated hologram (CGH) is reported. The method was introduced by us in an earlier report as a technique for calculating Fourier CGH from 2D object data. In this paper we extend the method to compute Fresnel CGH from 3D object data. The scale of the computation problem is also expanded to 2 gigapixels, making it closer to real application requirements. The significant feature of the reported method is its ability to avoid communication overhead and thereby fully utilize the computing power of parallel devices. The method exhibits three layers of parallelism that favor small to large scale parallel computing machines. Simulation and optical experiments were conducted to demonstrate the workability and to evaluate the efficiency of the proposed technique. A two-times improvement in computation speed has been achieved compared to the conventional method, on a 16-node cluster (one GPU per node) utilizing only one layer of parallelism. A 20-times improvement in computation speed has been estimated utilizing two layers of parallelism on a very large-scale parallel machine with 16 nodes, where each node has 16 GPUs.
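For orientation, a brute-force (single-node) Fresnel CGH computation simply superposes spherical wavefronts from each 3D object point onto the hologram plane; the parallel method in the paper decomposes exactly this kind of summation across devices. The parameters below are illustrative, and the sketch omits the multi-layer parallel decomposition and the gigapixel scale.

```python
import numpy as np

wavelength = 633e-9                 # m (illustrative)
k = 2 * np.pi / wavelength
pitch = 8e-6                        # hologram pixel pitch, m (illustrative)
N = 1024                            # hologram is N x N pixels

ys, xs = np.meshgrid((np.arange(N) - N / 2) * pitch,
                     (np.arange(N) - N / 2) * pitch, indexing="ij")

# 3D object points: (x, y, z, amplitude), z measured from the hologram plane
points = [(0.0, 0.0, 0.10, 1.0), (0.5e-3, -0.3e-3, 0.12, 0.8)]

field = np.zeros((N, N), dtype=complex)
for px, py, pz, amp in points:
    r = np.sqrt((xs - px) ** 2 + (ys - py) ** 2 + pz ** 2)
    field += amp * np.exp(1j * k * r) / r      # spherical wave from the point

hologram = np.angle(field)          # phase-only encoding, one common choice
```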
Scale and modeling issues in water resources planning
Lins, H.F.; Wolock, D.M.; McCabe, G.J.
1997-01-01
Resource planners and managers interested in utilizing climate model output as part of their operational activities immediately confront the dilemma of scale discordance. Their functional responsibilities cover relatively small geographical areas and necessarily require data of relatively high spatial resolution. Climate models cover a large geographical, i.e. global, domain and produce data at comparatively low spatial resolution. Although the scale differences between model output and planning input are large, several techniques have been developed for disaggregating climate model output to a scale appropriate for use in water resource planning and management applications. With techniques in hand to reduce the limitations imposed by scale discordance, water resource professionals must now confront a more fundamental constraint on the use of climate models: the inability to produce accurate representations and forecasts of regional climate. Given the current capabilities of climate models, and the likelihood that the uncertainty associated with long-term climate model forecasts will remain high for some years to come, the water resources planning community may find it impractical to utilize such forecasts operationally.
Support for solar energy: Examining sense of place and utility-scale development in California
Carlisle, Juliet E.; Kane, Stephanie L.; Solan, David; ...
2014-08-20
As solar costs have declined, PV systems have experienced considerable growth since 2003, especially in China, Japan, Germany, and the U.S. Thus, a more nuanced understanding of a particular public's attitudes toward utility-scale solar development, as it arrives in a market and region, is warranted and will likely be instructive for other areas in the world where this type of development will occur in the near future. Using data collected from a 2013 telephone survey (N=594) from the six Southern Californian counties selected based on existing and proposed solar developments and available suitable land, we examine public attitudes toward solar energy and construction of large-scale solar facilities, testing whether attitudes toward such developments are the result of sense of place and attachment to place. Overall, we have mixed results. Place attachment and sense of place fail to produce significant effects except in terms of perceived positive benefits. That is, respondents interpret the change resulting from large-scale solar development in a positive way insofar as perceived positive economic impacts are positively related to support for nearby large-scale construction.
Support for solar energy: Examining sense of place and utility-scale development in California
DOE Office of Scientific and Technical Information (OSTI.GOV)
Juliet E. Carlisle; Stephanie L. Kane; David Solan
2015-07-01
As solar costs have declined, PV systems have experienced considerable growth since 2003, especially in China, Japan, Germany, and the U.S. Thus, a more nuanced understanding of a particular public's attitudes toward utility-scale solar development, as it arrives in a market and region, is warranted and will likely be instructive for other areas in the world where this type of development will occur in the near future. Using data collected from a 2013 telephone survey (N = 594) from the six Southern Californian counties selected based on existing and proposed solar developments and available suitable land, we examine public attitudes toward solar energy and construction of large-scale solar facilities, testing whether attitudes toward such developments are the result of sense of place and attachment to place. Overall, we have mixed results. Place attachment and sense of place fail to produce significant effects except in terms of perceived positive benefits. That is, respondents interpret the change resulting from large-scale solar development in a positive way insofar as perceived positive economic impacts are positively related to support for nearby large-scale construction.
Impact of Utility-Scale Distributed Wind on Transmission-Level System Operations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brancucci Martinez-Anido, C.; Hodge, B. M.
2014-09-01
This report presents a new renewable integration study that aims to assess the potential for adding distributed wind to the current power system with minimal or no upgrades to the distribution or transmission electricity systems. It investigates the impacts of integrating large amounts of utility-scale distributed wind power on bulk system operations by performing a case study on the power system of the Independent System Operator-New England (ISO-NE).
NASA Technical Reports Server (NTRS)
Criswell, D. R. (Editor)
1976-01-01
The practicality of exploiting the moon, not only as a source of materials for large habitable structures at Lagrangian points, but also as a base for colonization is discussed in abstracts of papers presented at a special session on lunar utilization. Questions and answers which followed each presentation are included after the appropriate abstract. Author and subject indexes are provided.
CFD Script for Rapid TPS Damage Assessment
NASA Technical Reports Server (NTRS)
McCloud, Peter
2013-01-01
This grid generation script creates unstructured CFD grids for rapid thermal protection system (TPS) damage aeroheating assessments. The existing manual process is cumbersome, open to errors, and slow. The invention takes a large-scale geometry grid and its large-scale CFD solution, and creates an unstructured patch grid that models the TPS damage. The flow-field boundary condition for the patch grid is then interpolated from the large-scale CFD solution. This speeds up the generation of CFD grids and solutions in the modeling of TPS damage and its aeroheating assessment. The process was successfully utilized during STS-134.
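A minimal sketch of the interpolation step described here: sampling a large-scale CFD solution at the outer-boundary nodes of a local TPS-damage patch grid to supply its flow-field boundary condition. The use of scipy's griddata and the field names are illustrative assumptions, not the actual NASA script.

```python
import numpy as np
from scipy.interpolate import griddata

def patch_boundary_conditions(coarse_xyz, coarse_fields, patch_boundary_xyz):
    """coarse_xyz: (N, 3) node coordinates of the large-scale CFD solution.
    coarse_fields: dict of (N,) solution arrays, e.g. {"p": ..., "T": ...}.
    patch_boundary_xyz: (M, 3) coordinates of the patch grid's boundary nodes.
    Returns a dict of interpolated boundary values for the patch solution."""
    return {name: griddata(coarse_xyz, vals, patch_boundary_xyz, method="linear")
            for name, vals in coarse_fields.items()}

# illustrative usage with random data standing in for a real CFD solution
rng = np.random.default_rng(0)
coarse_xyz = rng.uniform(-1, 1, (5000, 3))
fields = {"p": rng.uniform(1e3, 1e5, 5000), "T": rng.uniform(300, 2000, 5000)}
patch_nodes = rng.uniform(-0.1, 0.1, (200, 3))   # patch boundary well inside the coarse domain
bc = patch_boundary_conditions(coarse_xyz, fields, patch_nodes)
```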
Novel Miscanthus Germplasm-Based Value Chains: A Life Cycle Assessment
Wagner, Moritz; Kiesel, Andreas; Hastings, Astley; Iqbal, Yasir; Lewandowski, Iris
2017-01-01
In recent years, considerable progress has been made in miscanthus research: improvement of management practices, breeding of new genotypes, especially for marginal conditions, and development of novel utilization options. The purpose of the current study was a holistic analysis of the environmental performance of such novel miscanthus-based value chains. In addition, the relevance of the analyzed environmental impact categories was assessed. A Life Cycle Assessment was conducted to analyse the environmental performance of the miscanthus-based value chains in 18 impact categories. In order to include the substitution of a reference product, a system expansion approach was used. In addition, a normalization step was applied. This allowed the relevance of these impact categories to be evaluated for each utilization pathway. The miscanthus was cultivated on six sites in Europe (Aberystwyth, Adana, Moscow, Potash, Stuttgart and Wageningen) and the biomass was utilized in the following six pathways: (1) small-scale combustion (heat)—chips; (2) small-scale combustion (heat)—pellets; (3) large-scale combustion (CHP)—biomass baled for transport and storage; (4) large-scale combustion (CHP)—pellets; (5) medium-scale biogas plant—ensiled miscanthus biomass; and (6) large-scale production of insulation material. Thus, in total, the environmental performance of 36 site × pathway combinations was assessed. The comparatively high normalized results of human toxicity, marine and freshwater ecotoxicity, and freshwater eutrophication indicate the relevance of these impact categories in the assessment of miscanthus-based value chains. Differences between the six sites can almost entirely be attributed to variations in biomass yield. However, the environmental performance of the utilization pathways analyzed varied widely. The largest differences were shown for freshwater and marine ecotoxicity, and freshwater eutrophication. The production of insulation material had the lowest impact on the environment, with net benefits in all impact categories except three (marine eutrophication, human toxicity, agricultural land occupation). This performance can be explained by the multiple use of the biomass, first as a material and subsequently as an energy carrier, and by the substitution of an emission-intensive reference product. The results of this study emphasize the importance of assessing all environmental impacts when selecting appropriate utilization pathways. PMID:28642784
ERIC Educational Resources Information Center
Frey, Andreas; Hartig, Johannes; Rupp, Andre A.
2009-01-01
In most large-scale assessments of student achievement, several broad content domains are tested. Because more items are needed to cover the content domains than can be presented in the limited testing time to each individual student, multiple test forms or booklets are utilized to distribute the items to the students. The construction of an…
Parallel Index and Query for Large Scale Data Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chou, Jerry; Wu, Kesheng; Ruebel, Oliver
2011-07-18
Modern scientific datasets present numerous data management and analysis challenges. State-of-the-art index and query technologies are critical for facilitating interactive exploration of large datasets, but numerous challenges remain in terms of designing a system for processing general scientific datasets. The system needs to be able to run on distributed multi-core platforms, efficiently utilize underlying I/O infrastructure, and scale to massive datasets. We present FastQuery, a novel software framework that addresses these challenges. FastQuery utilizes a state-of-the-art index and query technology (FastBit) and is designed to process massive datasets on modern supercomputing platforms. We apply FastQuery to processing of a massive 50TB dataset generated by a large-scale accelerator modeling code. We demonstrate the scalability of the tool to 11,520 cores. Motivated by the scientific need to search for interesting particles in this dataset, we use our framework to reduce search time from hours to tens of seconds.
Application of Small-Scale Systems: Evaluation of Alternatives
John Wilhoit; Robert Rummer
1999-01-01
Large-scale mechanized systems are not well-suited for harvesting smaller tracts of privately owned forest land. New alternative small-scale harvesting systems are needed which utilize mechanized felling, have a low capital investment requirement, are small in physical size, and are based primarily on adaptations of current harvesting technology. This paper presents...
Owen, Sheldon F.; Berl, Jacob L.; Edwards, John W.; Ford, W. Mark; Wood, Petra Bohall
2015-01-01
We studied a raccoon (Procyon lotor) population within a managed central Appalachian hardwood forest in West Virginia to investigate the effects of intensive forest management on raccoon spatial requirements and habitat selection. Raccoon home-range (95% utilization distribution) and core-area (50% utilization distribution) sizes differed between sexes, with males maintaining larger (2×) home ranges and core areas than females. Home-range and core-area size did not differ between seasons for either sex. We used compositional analysis to quantify raccoon selection of six different habitat types at multiple spatial scales. Raccoons selected riparian corridors (riparian management zones [RMZ]) and intact forests (>70 y old) at the core-area spatial scale. RMZs likely were used by raccoons because they provided abundant denning resources (i.e., large-diameter trees) as well as access to water. Habitat composition associated with raccoon foraging locations indicated selection for intact forests, riparian areas, and regenerating harvests (stands <10 y old). Although raccoons were able to utilize multiple habitat types for foraging resources, selection of intact forest and RMZs at multiple spatial scales indicates the need for mature forest (with large-diameter trees) for this species in managed forests in the central Appalachians.
Voltage Impacts of Utility-Scale Distributed Wind
DOE Office of Scientific and Technical Information (OSTI.GOV)
Allen, A.
2014-09-01
Although most utility-scale wind turbines in the United States are added at the transmission level in large wind power plants, distributed wind power offers an alternative that could increase overall wind power penetration without the need for additional transmission. This report examines the distribution feeder-level voltage issues that can arise when adding utility-scale wind turbines to the distribution system. Four of the Pacific Northwest National Laboratory taxonomy feeders were examined in detail to study the voltage issues associated with adding wind turbines at different distances from the substation. General rules relating feeder resistance up to the point of turbine interconnection to the expected maximum voltage change levels were developed. Additional analysis examined line and transformer overvoltage conditions.
White Paper on Dish Stirling Technology: Path Toward Commercial Deployment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andraka, Charles E.; Stechel, Ellen; Becker, Peter
2016-07-01
Dish Stirling energy systems have been developed for distributed and large-scale utility deployment. This report summarizes the state of the technology in a joint project between Stirling Energy Systems, Sandia National Laboratories, and the Department of Energy in 2011. It then lays out a feasible path to large scale deployment, including development needs and anticipated cost reduction paths that will make a viable deployment product.
NASA Astrophysics Data System (ADS)
Xie, Hongbo; Mao, Chensheng; Ren, Yongjie; Zhu, Jigui; Wang, Chao; Yang, Lei
2017-10-01
In high-precision, large-scale coordinate measurement, one commonly used approach to determining the coordinates of a target point is to utilize the spatial trigonometric relationships between multiple laser transmitter stations and the target point. A light-receiving device at the target point is the key element in large-scale coordinate measurement systems. To ensure high-resolution and highly sensitive spatial coordinate measurement, a high-performance and miniaturized omnidirectional single-point photodetector (OSPD) is greatly desired. We report one design of OSPD using an aspheric lens, which achieves an enhanced reception angle of -5 deg to 45 deg in the vertical and 360 deg in the horizontal. As the heart of our OSPD, the aspheric lens is designed with a geometric model and optimized by LightTools software, which enables the reflection of a wide-angle incident light beam onto the single-point photodiode. The performance of the home-made OSPD is characterized at working distances from 1 to 13 m and further analyzed using the developed geometric model. The experimental and analytic results verify that our device is highly suitable for large-scale coordinate metrology. The developed device also holds great potential in various applications such as omnidirectional vision sensors, indoor global positioning systems, and optical wireless communication systems.
NASA Technical Reports Server (NTRS)
Klumpar, D. M. (Principal Investigator)
1981-01-01
Progress is reported in reading MAGSAT tapes and in the modeling procedure developed to compute the magnetic fields at satellite orbit due to current distributions in the ionosphere. The modeling technique utilizes a linear current element representation of the large-scale space-current system.
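The "linear current element" building block can be illustrated with the Biot-Savart field of a finite straight current segment; the full model superposes many such elements over the ionospheric current system. The sketch below is a generic implementation of that standard formula, not the MAGSAT modeling code.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, T*m/A

def segment_field(p, a, b, current):
    """Magnetic field (tesla) at point p from a straight current element
    running from a to b carrying `current` amperes (Biot-Savart, finite
    segment). Positions are 3-vectors in metres."""
    p, a, b = map(np.asarray, (p, a, b))
    seg = b - a
    l_hat = seg / np.linalg.norm(seg)
    ap, bp = p - a, p - b
    # perpendicular offset from the segment's line to the field point
    d_vec = ap - np.dot(ap, l_hat) * l_hat
    d = np.linalg.norm(d_vec)
    if d < 1e-12:
        return np.zeros(3)                 # on the element's axis: no field
    cos1 = np.dot(ap, l_hat) / np.linalg.norm(ap)
    cos2 = np.dot(bp, l_hat) / np.linalg.norm(bp)
    magnitude = MU0 * current / (4 * np.pi * d) * (cos1 - cos2)
    direction = np.cross(l_hat, d_vec) / d  # right-hand rule around the element
    return magnitude * direction

def total_field(p, elements):
    """Superpose many (a, b, current) elements representing a current system."""
    return sum(segment_field(p, a, b, i) for a, b, i in elements)
```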
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
Solar ADEPT Project: Satcon is developing a compact, lightweight power conversion device that is capable of taking utility-scale solar power and outputting it directly into the electric utility grid at distribution voltage levels—eliminating the need for large transformers. Transformers “step up” the voltage of the power that is generated by a solar power system so it can be efficiently transported through transmission lines and eventually “stepped down” to usable voltages before it enters homes and businesses. Power companies step up the voltage because less electricity is lost along transmission lines when the voltage is high and current is low. Satcon's new power conversion devices will eliminate these heavy transformers and connect a utility-scale solar power system directly to the grid. Satcon's modular devices are designed to ensure reliability—if one device fails it can be bypassed and the system can continue to run.
Impact of Large Scale Energy Efficiency Programs On Consumer Tariffs and Utility Finances in India
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abhyankar, Nikit; Phadke, Amol
2011-01-20
Large-scale EE programs would modestly increase tariffs but reduce consumers' electricity bills significantly. However, the primary benefit of EE programs is a significant reduction in power shortages, which might make these programs politically acceptable even if tariffs increase. To increase political support, utilities could pursue programs that would result in minimal tariff increases. This can be achieved in four ways: (a) focus only on low-cost programs (such as replacing electric water heaters with gas water heaters); (b) sell power conserved through the EE program to the market at a price higher than the cost of peak power purchase; (c) focus on programs where a partial utility subsidy of incremental capital cost might work; and (d) increase the number of participant consumers by offering a basket of EE programs to fit all consumer subcategories and tariff tiers. Large-scale EE programs can result in consistently negative cash flows and significantly erode the utility's overall profitability. In case the utility is facing shortages, the cash flow is very sensitive to the marginal tariff of the unmet demand. This will have an important bearing on the choice of EE programs in Indian states where low-paying rural and agricultural consumers form the majority of the unmet demand. These findings clearly call for a flexible, sustainable solution to the cash-flow management issue. One option is to include a mechanism like FAC in the utility incentive mechanism. Another sustainable solution might be to have the net program cost and revenue loss built into the utility's revenue requirement and thus into consumer tariffs up front. However, the latter approach requires institutionalization of EE as a resource. The utility incentive mechanisms would be able to address the utility disincentive of forgone long-run return but have a minor impact on consumer benefits. Fundamentally, providing incentives for EE programs to make them comparable to supply-side investments is a way of moving the electricity sector toward a model focused on providing energy services rather than providing electricity.
Pathways for Off-site Corporate PV Procurement
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heeter, Jenny S
Through July 2017, corporate customers contracted for more than 2,300 MW of utility-scale solar. This paper examines the benefits, challenges, and outlooks for large-scale off-site solar purchasing through four pathways: power purchase agreements, retail choice, utility partnerships (green tariffs and bilateral contracts with utilities), and by becoming a licensed wholesale seller of electricity. Each pathway differs based on where in the United States it is available, the value provided to a corporate off-taker, and the ease of implementation. The paper concludes with a discussion of future pathway comparison, noting that to deploy more corporate off-site solar, new procurement pathways are needed.
NASA Astrophysics Data System (ADS)
Hong, J.; Guala, M.; Chamorro, L. P.; Sotiropoulos, F.
2014-06-01
Despite major research efforts, the interaction of the atmospheric boundary layer with turbines and multi-turbine arrays at utility scale remains poorly understood today. This lack of knowledge stems from the limited number of utility-scale research facilities and a number of technical challenges associated with obtaining high-resolution measurements at field scale. We review recent results obtained at the University of Minnesota utility-scale wind energy research station (the EOLOS facility), which comprises a 130 m tall meteorological tower and a fully instrumented 2.5 MW Clipper Liberty C96 wind turbine. The results address three major areas: 1) the detailed characterization of the wake structures at a scale of 36×36 m² using a novel super-large-scale particle image velocimetry based on natural snowflakes, including the rich tip vortex dynamics and their correlation with turbine operations, control, and performance; 2) the use of a WindCube Lidar profiler to investigate how wind at various elevations influences turbine power fluctuation and to elucidate the role of wind gusts on individual blade loading; and 3) the systematic quantification of the interaction of the turbine's instantaneous power output and tower foundation strain with the incoming flow turbulence, which is measured from the meteorological tower.
Research on the impacts of large-scale electric vehicles integration into power grid
NASA Astrophysics Data System (ADS)
Su, Chuankun; Zhang, Jian
2018-06-01
Because of their special energy-driving mode, electric vehicles can improve the efficiency of energy utilization and reduce pollution to the environment, and they are therefore receiving more and more attention. However, the charging behavior of electric vehicles is random and intermittent. If electric vehicles charge in an uncoordinated manner on a large scale, they place great pressure on the structure and operation of the power grid and affect its safe and economic operation. With the development of vehicle-to-grid (V2G) technology, the study of the charging and discharging characteristics of electric vehicles is of great significance for improving the safe operation of the power grid and the efficiency of energy utilization.
Validation of Satellite Retrieved Land Surface Variables
NASA Technical Reports Server (NTRS)
Lakshmi, Venkataraman; Susskind, Joel
1999-01-01
The effective use of satellite observations of the land surface is limited by the lack of high-spatial-resolution ground data sets for validation of satellite products. Recent large-scale field experiments, including FIFE, HAPEX-Sahel, and BOREAS, provide us with data sets that have large spatial coverage and long temporal coverage. The objective of this paper is to characterize the difference between the satellite estimates and the ground observations. This study and others along similar lines will help us in utilizing satellite-retrieved data in large-scale modeling studies.
Environmental status of livestock and poultry sectors in China under current transformation stage.
Qian, Yi; Song, Kaihui; Hu, Tao; Ying, Tianyu
2018-05-01
Intensive animal husbandry has aroused great environmental concern in many developed countries, and some developing countries still suffer environmental pollution from the livestock and poultry sectors. Driven by large demand, China has experienced a remarkable increase in dairy and meat production, especially during the transformation from conventional household breeding to large-scale industrial breeding. At the same time, a large amount of manure from the livestock and poultry sector is released into water bodies and soil, causing eutrophication and soil degradation. This problem is reinforced in large-scale operations where the amount of manure exceeds the soil nutrient capacity and is not treated or utilized properly. Our research aims to analyze whether the transformation of raising scale is beneficial to the environment and to present the latest status of the livestock and poultry sectors in China. Estimating the pollutants generated and discharged from the livestock and poultry sector in China will facilitate the legislation of manure management. This paper analyzes the pollutants generated from the manure of the five principal commercial animals under different farming practices. The results show that fattening pigs contribute almost half of the pollutants released from manure. Moreover, beef cattle exert the largest environmental impact per unit of production, about 2-3 times that of pork and 5-20 times that of chicken. Animals raised in large-scale feedlots generate fewer pollutants than those raised in households. The shift toward industrial production of livestock and poultry is easier to manage from an environmental perspective, and appropriately sized large-scale operations are encouraged. Regulatory control, manure treatment, and financial subsidies for manure treatment and utilization are recommended to achieve ecological agriculture in China. Copyright © 2017 Elsevier B.V. All rights reserved.
Transmission Infrastructure | Energy Analysis | NREL
NREL's transmission analysis addresses infrastructure planning and expansion to enable large-scale deployment of renewable energy in the future, including aggregating geothermal with other complementary generating technologies in renewable energy zones, in coordination with the U.S. Department of Energy, FERC, NERC, the regional entities, transmission providers, generating companies, and utilities.
NASA Technical Reports Server (NTRS)
Klumpar, D. M. (Principal Investigator)
1982-01-01
The status of the initial testing of the modeling procedure developed to compute the magnetic fields at satellite orbit due to current distributions in the ionosphere and magnetosphere is reported. The modeling technique utilizes a linear current element representation of the large scale space-current system.
NASA Astrophysics Data System (ADS)
Luginbuhl, Molly; Rundle, John B.; Hawkins, Angela; Turcotte, Donald L.
2018-01-01
Nowcasting is a new method of statistically classifying seismicity and seismic risk (Rundle et al. 2016). In this paper, the method is applied to the induced seismicity at the Geysers geothermal region in California and to the induced seismicity due to fluid injection in Oklahoma. Nowcasting utilizes the catalogs of seismicity in these regions. Two earthquake magnitudes are selected, one large, say M_λ ≥ 4, and one small, say M_σ ≥ 2. The method uses the number of small earthquakes that occur between pairs of large earthquakes and obtains the cumulative probability distribution of these interevent counts. The earthquake potential score (EPS) is defined as the point on this cumulative distribution corresponding to the number of small earthquakes that have occurred since the last large earthquake. A major advantage of nowcasting is that it utilizes "natural time", i.e., earthquake counts between events, rather than clock time. Thus, it is not necessary to decluster aftershocks, and the results remain applicable if the level of induced seismicity varies in time. The application of natural time to the accumulation of seismic hazard depends on the applicability of Gutenberg-Richter (GR) scaling: the increasing number of small earthquakes that occur after a large earthquake can be scaled to give the risk of a large earthquake occurring. To illustrate our approach, we use the number of M_σ ≥ 2.75 earthquakes in Oklahoma to nowcast the number of M_λ ≥ 4.0 earthquakes in Oklahoma. The applicability of the scaling is illustrated during the rapid build-up of injection-induced seismicity between 2012 and 2016 and the subsequent reduction in seismicity associated with a reduction in fluid injection. The same method is applied to the geothermal-induced seismicity at the Geysers, California, for comparison.
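The counting procedure behind the EPS can be made concrete with a short sketch. The following Python implementation is illustrative only, not the authors' code; the magnitude thresholds and the synthetic Gutenberg-Richter-style catalog are assumptions chosen to demonstrate the calculation.

```python
import numpy as np

def earthquake_potential_score(magnitudes, m_large=4.0, m_small=2.75):
    """EPS: percentile of the current small-event count within the empirical
    distribution of small-event counts between past pairs of large events."""
    magnitudes = np.asarray(magnitudes)
    large_idx = np.flatnonzero(magnitudes >= m_large)
    if len(large_idx) < 2:
        raise ValueError("need at least two large events to form interevent counts")
    # Number of small events between each consecutive pair of large events
    interevent_counts = []
    for i0, i1 in zip(large_idx[:-1], large_idx[1:]):
        seg = magnitudes[i0 + 1:i1]
        interevent_counts.append(np.sum((seg >= m_small) & (seg < m_large)))
    interevent_counts = np.sort(interevent_counts)
    # Small events since the most recent large event ("natural time" elapsed)
    tail = magnitudes[large_idx[-1] + 1:]
    current = np.sum((tail >= m_small) & (tail < m_large))
    # EPS = empirical CDF of the interevent counts evaluated at the current count
    return np.searchsorted(interevent_counts, current, side="right") / len(interevent_counts)

# Synthetic catalog with Gutenberg-Richter-like magnitudes (demonstration only)
rng = np.random.default_rng(0)
mags = 2.0 + rng.exponential(scale=1.0 / np.log(10), size=5000)
print(f"EPS = {earthquake_potential_score(mags):.2f}")
```

The score is simply the empirical-CDF value of the current small-event count, so it rises monotonically in natural time until the next large event resets it.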
A new framework to increase the efficiency of large-scale solar power plants.
NASA Astrophysics Data System (ADS)
Alimohammadi, Shahrouz; Kleissl, Jan P.
2015-11-01
A new framework to estimate the spatio-temporal behavior of solar power is introduced, which predicts the statistical behavior of power output at utility-scale photovoltaic (PV) power plants. The framework is based on spatio-temporal Gaussian process regression (Kriging) models that incorporate satellite data with the UCSD version of the Weather Research and Forecasting model. The framework is designed to improve the efficiency of large-scale solar power plants. The results are validated against measurements from local pyranometer sensors, and improvements are observed in several scenarios.
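As a rough illustration of the Kriging idea, the sketch below fits a spatio-temporal Gaussian process to scattered irradiance-like observations and predicts, with uncertainty, at an unmonitored location. It assumes scikit-learn is available; the kernel choice, synthetic data, and station layout are placeholders rather than the study's configuration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
# Training inputs: (x_km, y_km, t_hours) for scattered pyranometer readings
X = rng.uniform([0, 0, 0], [10, 10, 24], size=(200, 3))
y = (0.8 * np.sin(np.pi * X[:, 2] / 24)      # diurnal cycle
     + 0.1 * np.sin(0.5 * X[:, 0])           # spatial cloud-band pattern
     + 0.02 * rng.standard_normal(200))      # sensor noise

# Separate length scales for space and time capture anisotropic correlation
kernel = RBF(length_scale=[3.0, 3.0, 2.0]) + WhiteKernel(noise_level=1e-3)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

# Predict the normalized output (with uncertainty) at an unmonitored site at noon
mean, std = gpr.predict(np.array([[5.0, 5.0, 12.0]]), return_std=True)
print(f"predicted index: {mean[0]:.3f} +/- {std[0]:.3f}")
```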
Large-Scale Power Production Potential on U.S. Department of Energy Lands
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kandt, Alicen J.; Elgqvist, Emma M.; Gagne, Douglas A.
This report summarizes the potential for independent power producers to generate large-scale power on U.S. Department of Energy (DOE) lands and export that power into a larger power market, rather than serving on-site DOE loads. The report focuses primarily on the analysis of renewable energy (RE) technologies that are commercially viable at utility scale, including photovoltaics (PV), concentrating solar power (CSP), wind, biomass, landfill gas (LFG), waste to energy (WTE), and geothermal technologies. The report also summarizes the availability of fossil fuel, uranium, or thorium resources at 55 DOE sites.
Mahjouri, Najmeh; Ardestani, Mojtaba
2011-01-01
In this paper, cooperative and non-cooperative methodologies are developed for a large-scale water allocation problem in southern Iran. The water shares of the users and their net benefits are determined using optimization models with economic objectives, subject to the physical and environmental constraints of the system. The results of the two methodologies are compared on the basis of the total economic benefit obtained, demonstrating the role of cooperation in utilizing a shared water resource. In both cases, the water quality in the rivers satisfies the standards. Comparing the results of the two approaches shows the importance of acting cooperatively to achieve maximum revenue from a surface water resource while addressing river water quantity and quality issues.
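A toy version of this comparison can be written as a small optimization problem. The sketch below is not the paper's model: it contrasts a cooperative allocation (maximizing total benefit with a linear program) against a simple sequential, non-cooperative allocation, and the benefit coefficients, demand caps, and supply limit are illustrative assumptions.

```python
from scipy.optimize import linprog

benefit = [2.5, 4.0, 3.0]        # net benefit per unit of water for three users
demand_cap = [40.0, 50.0, 60.0]  # maximum useful allocation per user
total_supply = 100.0             # shared surface-water availability

# Cooperative case: maximize total benefit (linprog minimizes, so negate)
res = linprog(c=[-b for b in benefit],
              A_ub=[[1, 1, 1]], b_ub=[total_supply],
              bounds=[(0, cap) for cap in demand_cap])
print("cooperative allocation:", res.x, "total benefit:", -res.fun)

# Non-cooperative proxy: users withdraw in priority order until supply runs out
left, alloc = total_supply, []
for cap in demand_cap:
    take = min(cap, left)
    alloc.append(take)
    left -= take
print("sequential allocation:", alloc,
      "total benefit:", sum(b * a for b, a in zip(benefit, alloc)))
```

In this toy case the cooperative solution shifts water toward the highest-value users, yielding a larger total benefit than the sequential allocation.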
Beyond Widgets -- Systems Incentive Programs for Utilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Regnier, Cindy; Mathew, Paul; Robinson, Alastair
Utility incentive programs remain one of the most significant means of deploying commercialized but underutilized building technologies at scale. However, these programs have been largely limited to component-based products (e.g., lamps, RTUs). While some utilities do provide 'custom' incentive programs with whole-building and system-level technical assistance, these programs require deeper analysis, resulting in higher program costs. As a result, custom programs are restricted to utilities with greater resources and are typically applied mainly to large or energy-intensive facilities, leaving much of the market without cost-effective access to incentives for these solutions. In addition, with increasingly stringent energy codes, cost-effective component-based solutions that achieve significant savings are dwindling. Building systems (e.g., integrated façade, HVAC, and/or lighting solutions) can deliver higher savings that translate into large sector-wide savings if deployed at the scale of these programs. However, systems applications pose a number of challenges: baseline energy use must be defined and measured; the metrics for energy and performance must be defined and tested against; and system savings must be validated under well-understood conditions. This paper presents a sample of findings from a project to develop validated utility incentive program packages for three specific integrated building systems, in collaboration with Xcel Energy (CO, MN), ComEd, and a consortium of California publicly owned utilities (the Northern California Power Agency (NCPA) and the Southern California Public Power Authority (SCPPA)). These program packages consist of system specifications, system performance, M&V protocols, streamlined assessment methods, market assessment, and implementation guidance.
Design Tools for Evaluating Multiprocessor Programs
1976-07-01
This report discusses motivations for multiprocessor designs, including advantages over large uniprocessing machines and economies of scale in manufacturing, and asks which measures of a computation are of interest and how the system can be scheduled; such measures may include speed, redundancy, (in)efficiency, resource utilization, and the economics of the components [Browne 73, Lehman 66].
Implementation of the Agitated Behavior Scale in the Electronic Health Record.
Wilson, Helen John; Dasgupta, Kritis; Michael, Kathleen
The purpose of the study was to implement the Agitated Behavior Scale through an electronic health record and to evaluate the usability of the scale in a brain injury unit at a rehabilitation hospital. A quality improvement project was conducted in the brain injury unit at a large rehabilitation hospital with registered nurses as participants, using convenience sampling. The project consisted of three phases and included education, implementation of the scale in the electronic health record, and administration of a survey questionnaire that utilized the System Usability Scale. The Agitated Behavior Scale was found to be usable, and there was 92.2% compliance with the use of the electronic Agitated Behavior Scale. The scale was effectively implemented in the electronic health record and was found to be usable in the assessment of agitation. Daily use of the scale through the electronic health record will allow early identification of agitation in patients with traumatic brain injury and enable prompt interventions to manage agitation.
NASA Technical Reports Server (NTRS)
Guhathakurta, M.; Fisher, R. R.
1994-01-01
In this paper we utilize the latitude distribution of the coronal temperature during the period 1984-1992, derived by Guhathakurta et al. (1993) from ground-based intensity observations of the green (5303 A Fe XIV) and red (6374 A Fe X) coronal forbidden lines at the National Solar Observatory at Sacramento Peak, and establish its association with the global magnetic field and the density distributions in the corona. The plasma temperature, T, was estimated from the intensity ratio Fe X/Fe XIV (where T is inversely proportional to the ratio), since both emission lines come from ionized states of Fe and the ratio is only weakly dependent on density. We observe a large-scale organization of the inferred coronal temperature distribution that is associated with the large-scale, weak magnetic field structures and bright coronal features; this organization tends to persist through most of the magnetic activity cycle. These high-temperature structures exhibit time-space characteristics similar to those of the polar crown filaments. This distribution differs in spatial and temporal character from the traditional picture of sunspot and active region evolution over the sunspot cycle, which is a manifestation of the small-scale, strong magnetic field regions.
The Preschool Learning Behaviors Scale: Dimensionality and External Validity in Head Start
ERIC Educational Resources Information Center
McDermott, Paul A.; Rikoon, Samuel H.; Waterman, Clare; Fantuzzo, John W.
2012-01-01
Given the importance of accurately gauging early childhood approaches to learning, this study reports evidence for the dimensionality and utility of the Preschool Learning Behaviors Scale for use with disadvantaged preschool children. Data from a large (N = 1,666) sample representative of urban Head Start classrooms revealed three reliable…
Using stroboscopic flow imaging to validate large-scale computational fluid dynamics simulations
NASA Astrophysics Data System (ADS)
Laurence, Ted A.; Ly, Sonny; Fong, Erika; Shusteff, Maxim; Randles, Amanda; Gounley, John; Draeger, Erik
2017-02-01
The utility and accuracy of computational modeling often require direct validation against experimental measurements. The work presented here is motivated by a combined experimental and computational approach to determine the ability of large-scale computational fluid dynamics (CFD) simulations to understand and predict the dynamics of circulating tumor cells in clinically relevant environments. We use stroboscopic light-sheet fluorescence imaging to track the paths and measure the velocities of fluorescent microspheres throughout a human aorta model. Performed over complex, physiologically realistic 3D geometries, large data sets are acquired with microscopic resolution over macroscopic distances.
ERIC Educational Resources Information Center
Lee, David G.
1974-01-01
Describes several successful attempts to utilize solar energy for heating and providing electrical energy for homes. Indicates that more research and development are needed, especially in the area of large scale usage. (SLH)
State of the Art in Large-Scale Soil Moisture Monitoring
NASA Technical Reports Server (NTRS)
Ochsner, Tyson E.; Cosh, Michael Harold; Cuenca, Richard H.; Dorigo, Wouter; Draper, Clara S.; Hagimoto, Yutaka; Kerr, Yan H.; Larson, Kristine M.; Njoku, Eni Gerald; Small, Eric E.;
2013-01-01
Soil moisture is an essential climate variable influencing land atmosphere interactions, an essential hydrologic variable impacting rainfall runoff processes, an essential ecological variable regulating net ecosystem exchange, and an essential agricultural variable constraining food security. Large-scale soil moisture monitoring has advanced in recent years creating opportunities to transform scientific understanding of soil moisture and related processes. These advances are being driven by researchers from a broad range of disciplines, but this complicates collaboration and communication. For some applications, the science required to utilize large-scale soil moisture data is poorly developed. In this review, we describe the state of the art in large-scale soil moisture monitoring and identify some critical needs for research to optimize the use of increasingly available soil moisture data. We review representative examples of 1) emerging in situ and proximal sensing techniques, 2) dedicated soil moisture remote sensing missions, 3) soil moisture monitoring networks, and 4) applications of large-scale soil moisture measurements. Significant near-term progress seems possible in the use of large-scale soil moisture data for drought monitoring. Assimilation of soil moisture data for meteorological or hydrologic forecasting also shows promise, but significant challenges related to model structures and model errors remain. Little progress has been made yet in the use of large-scale soil moisture observations within the context of ecological or agricultural modeling. Opportunities abound to advance the science and practice of large-scale soil moisture monitoring for the sake of improved Earth system monitoring, modeling, and forecasting.
Tarescavage, Anthony M; Corey, David M; Gupton, Herbert M; Ben-Porath, Yossef S
2015-01-01
Minnesota Multiphasic Personality Inventory-2-Restructured Form scores for 145 male police officer candidates were compared with supervisor ratings of field performance and problem behaviors during their initial probationary period. Results indicated that the officers produced meaningfully lower and less variant substantive scale scores compared to the general population. After applying a statistical correction for range restriction, substantive scale scores from all domains assessed by the inventory demonstrated moderate to large correlations with performance criteria. The practical significance of these results was assessed with relative risk ratio analyses that examined the utility of specific cutoffs on scales demonstrating associations with performance criteria.
Large Composite Structures Processing Technologies for Reusable Launch Vehicles
NASA Technical Reports Server (NTRS)
Clinton, R. G., Jr.; Vickers, J. H.; McMahon, W. M.; Hulcher, A. B.; Johnston, N. J.; Cano, R. J.; Belvin, H. L.; McIver, K.; Franklin, W.; Sidwell, D.
2001-01-01
Significant efforts have been devoted to establishing the technology foundation to enable the progression to large scale composite structures fabrication. We are not capable today of fabricating many of the composite structures envisioned for the second generation reusable launch vehicle (RLV). Conventional 'aerospace' manufacturing and processing methodologies (fiber placement, autoclave, tooling) will require substantial investment and lead time to scale-up. Out-of-autoclave process techniques will require aggressive efforts to mature the selected technologies and to scale up. Focused composite processing technology development and demonstration programs utilizing the building block approach are required to enable envisioned second generation RLV large composite structures applications. Government/industry partnerships have demonstrated success in this area and represent best combination of skills and capabilities to achieve this goal.
Aydmer, A.A.; Chew, W.C.; Cui, T.J.; Wright, D.L.; Smith, D.V.; Abraham, J.D.
2001-01-01
A simple and efficient method for large scale three-dimensional (3-D) subsurface imaging of inhomogeneous background is presented. One-dimensional (1-D) multifrequency distorted Born iterative method (DBIM) is employed in the inversion. Simulation results utilizing synthetic scattering data are given. Calibration of the very early time electromagnetic (VETEM) experimental waveforms is detailed along with major problems encountered in practice and their solutions. This discussion is followed by the results of a large scale application of the method to the experimental data provided by the VETEM system of the U.S. Geological Survey. The method is shown to have a computational complexity that is promising for on-site inversion.
NASA Astrophysics Data System (ADS)
McMillan, Mitchell; Hu, Zhiyong
2017-10-01
Streambank erosion is a major source of fluvial sediment, but few large-scale, spatially distributed models exist to quantify streambank erosion rates. We introduce a spatially distributed model for streambank erosion applicable to sinuous, single-thread channels. We argue that such a model can adequately characterize streambank erosion rates, measured at the outsides of bends over a 2-year period, throughout a large region. The model is based on the widely used excess-velocity equation and comprises three components: a physics-based hydrodynamic model, a large-scale one-dimensional model of average monthly discharge, and an empirical bank-erodibility parameterization. The hydrodynamic submodel requires inputs of channel centerline, slope, width, depth, friction factor, and a scour factor A; the large-scale watershed submodel utilizes watershed-averaged monthly outputs of the Noah-2.8 land surface model; bank erodibility is based on tree cover and bank height as proxies for root density. The model was calibrated with erosion rates measured in sand-bed streams throughout the northern Gulf of Mexico coastal plain. The calibrated model outperforms a purely empirical model, as well as a model based only on excess velocity, illustrating the utility of combining a physics-based hydrodynamic model with an empirical bank-erodibility relationship. The model could be improved by incorporating spatial variability in channel roughness and the hydrodynamic scour factor, which are here assumed constant. A reach-scale application of the model is illustrated on ∼1 km of a medium-sized, mixed forest-pasture stream, where the model identifies streambank erosion hotspots on forested and non-forested bends.
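To make the excess-velocity formulation concrete, the following minimal sketch computes a lateral erosion rate as erodibility times excess velocity, with erodibility decreasing with tree cover and increasing with bank height. The functional form and coefficients are assumptions for illustration, not the calibrated parameterization from the paper.

```python
import numpy as np

def bank_erosion_rate(near_bank_velocity, critical_velocity,
                      tree_cover_frac, bank_height_m,
                      k0=0.05, root_protection=0.8, height_exponent=0.5):
    """Lateral erosion rate (m/yr) ~ erodibility * excess velocity.

    Erodibility decreases with tree cover (a proxy for root density) and
    increases with bank height (roots occupy a smaller share of tall banks).
    All coefficients here are hypothetical placeholders.
    """
    excess = np.maximum(near_bank_velocity - critical_velocity, 0.0)
    erodibility = (k0 * (1.0 - root_protection * tree_cover_frac)
                   * bank_height_m ** height_exponent)
    return erodibility * excess

# Example: forested vs. pasture bend with the same hydraulics
print(bank_erosion_rate(1.2, 0.8, tree_cover_frac=0.9, bank_height_m=2.0))
print(bank_erosion_rate(1.2, 0.8, tree_cover_frac=0.1, bank_height_m=2.0))
```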
Efficient Storage Scheme of Covariance Matrix during Inverse Modeling
NASA Astrophysics Data System (ADS)
Mao, D.; Yeh, T. J.
2013-12-01
During stochastic inverse modeling, the covariance matrix of geostatistics-based methods carries information about the geologic structure, and its update during iterations reflects the decrease of uncertainty as observed data are incorporated. For large-scale problems, storing and updating this matrix costs too much memory and computation. In this study, we propose a new efficient scheme for storage and update. The Compressed Sparse Column (CSC) format is utilized to store the covariance matrix, and users can choose how many entries to store based on correlation scales, since entries beyond several correlation scales are usually not very informative for inverse modeling. After every iteration, only the diagonal terms of the covariance matrix are updated; the off-diagonal terms are calculated and updated from shortened correlation scales using a pre-assigned exponential model. The correlation scales are shortened by a coefficient, e.g., 0.95, every iteration to represent the decrease of uncertainty. There is no universal coefficient for all problems, and users are encouraged to try several values. The new scheme is tested first with 1D examples, and the estimated results and uncertainty are compared with those of the traditional full-storage method. Finally, a large-scale numerical model is utilized to validate the new scheme.
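A minimal sketch of the storage and update idea follows: covariance entries beyond a few correlation scales are dropped, the matrix is held in CSC form, and each iteration the correlation scale is shrunk (here by the example factor 0.95 mentioned above) before the off-diagonals are rebuilt from an exponential model. The 1D grid, prior variances, and variance-shrink factor are assumptions for demonstration only.

```python
import numpy as np
from scipy.sparse import csc_matrix

def build_covariance(x, variance, corr_scale, cutoff_scales=3.0):
    """Exponential covariance, keeping only entries within cutoff_scales * scale."""
    rows, cols, vals = [], [], []
    for i, xi in enumerate(x):
        for j, xj in enumerate(x):
            d = abs(xi - xj)
            if d <= cutoff_scales * corr_scale:
                rows.append(i)
                cols.append(j)
                vals.append(np.sqrt(variance[i] * variance[j]) * np.exp(-d / corr_scale))
    return csc_matrix((vals, (rows, cols)), shape=(len(x), len(x)))

x = np.linspace(0.0, 100.0, 201)   # 1-D grid of estimation points
variance = np.ones_like(x)         # prior variances (assumed)
scale = 10.0                       # initial correlation scale (assumed)
cov = build_covariance(x, variance, scale)

for iteration in range(5):         # mock inverse-modeling iterations
    variance *= 0.9                # diagonal shrinks as data are assimilated
    scale *= 0.95                  # shorten the correlation scale each iteration
    cov = build_covariance(x, variance, scale)
    print(iteration, cov.nnz, "stored entries")
```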
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shadid, John Nicolas; Lin, Paul Tinphone
2009-01-01
This preliminary study considers the scaling and performance of a finite element (FE) semiconductor device simulator on a capacity cluster with 272 compute nodes based on a homogeneous multicore node architecture utilizing 16 cores per node. The inter-node communication backbone for this Tri-Lab Linux Capacity Cluster (TLCC) machine is an InfiniBand interconnect. The nonuniform memory access (NUMA) nodes consist of 2.2 GHz quad-socket/quad-core AMD Opteron processors. The performance results for this study are obtained with a FE semiconductor device simulation code (Charon) that is based on a fully coupled Newton-Krylov solver with domain decomposition and multilevel preconditioners. Scaling and multicore performance results are presented for large-scale problems of 100+ million unknowns on up to 4096 cores. A parallel scaling comparison is also presented with the Cray XT3/4 Red Storm capability platform. The results indicate that an MPI-only programming model for utilizing the multicore nodes is reasonably efficient on all 16 cores per compute node. However, the results also indicate that the multilevel preconditioner, which is critical for large-scale capability-class simulations, scales better on the Red Storm machine than on the TLCC machine.
Wake characteristics of wind turbines in utility-scale wind farms
NASA Astrophysics Data System (ADS)
Yang, Xiaolei; Foti, Daniel; Sotiropoulos, Fotis
2017-11-01
The dynamics of turbine wakes is affected by turbine operating conditions, ambient atmospheric turbulence, and wakes from upwind turbines. Investigations of the wake from a single turbine have been carried out extensively in the literature, but studies of wake dynamics in utility-scale wind farms are relatively limited. In this work, we employ large-eddy simulation with an actuator surface or actuator line model for the turbine blades to investigate wake dynamics in utility-scale wind farms. Simulations of three wind farms, i.e., the Horns Rev wind farm in Denmark, the Pleasant Valley wind farm in Minnesota, and the Vantage wind farm in Washington, are carried out. The computed power shows good agreement with measurements. Analysis of the wake dynamics in the three wind farms is underway and will be presented at the conference. This work was supported by Xcel Energy (RD4-13). The computational resources were provided by the National Renewable Energy Laboratory.
Optimization of hybrid power system composed of SMES and flywheel MG for large pulsed load
NASA Astrophysics Data System (ADS)
Niiyama, K.; Yagai, T.; Tsuda, M.; Hamajima, T.
2008-09-01
A superconducting magnetic energy storage (SMES) system has advantages such as rapid, large power response and high storage efficiency that are superior to other energy storage systems. A flywheel motor-generator (FWMG) has large-scale capacity and high reliability and hence is broadly utilized for large pulsed loads, while it has comparatively low storage efficiency due to high mechanical loss compared with SMES. A fusion power plant such as the International Thermonuclear Experimental Reactor (ITER) presents a large, long pulsed load that causes frequency deviations in the utility power system. In order to keep the frequency within an allowable deviation, we propose a hybrid power system for the pulsed load that combines SMES and FWMG with the utility power system. We evaluate the installation cost and frequency control performance of three power systems combined with energy storage devices: (i) SMES with the utility power, (ii) FWMG with the utility power, and (iii) both SMES and FWMG with the utility power. The first system has excellent frequency control performance, but its installation cost is high. The second system has inferior frequency control performance, but its installation cost is the lowest. The third system has good frequency control performance, and its installation cost can be kept lower than that of the first system by adjusting the ratio between SMES and FWMG.
ResStock Analysis Tool | Buildings | NREL
ResStock supports large-scale residential energy analysis of energy and cost savings for U.S. homes by combining large public and private data sources; analysis with the tool has uncovered $49 billion in potential annual utility bill savings through cost-effective energy efficiency.
Hierarchical spatial models for predicting tree species assemblages across large domains
Andrew O. Finley; Sudipto Banerjee; Ronald E. McRoberts
2009-01-01
Spatially explicit data layers of tree species assemblages, referred to as forest types or forest type groups, are a key component in large-scale assessments of forest sustainability, biodiversity, timber biomass, carbon sinks and forest health monitoring. This paper explores the utility of coupling georeferenced national forest inventory (NFI) data with readily...
A hierarchical spatial framework for forest landscape planning.
Pete Bettinger; Marie Lennette; K. Norman Johnson; Thomas A. Spies
2005-01-01
A hierarchical spatial framework for large-scale, long-term forest landscape planning is presented along with example policy analyses for a 560,000 ha area of the Oregon Coast Range. The modeling framework suggests utilizing the detail provided by satellite imagery to track forest vegetation condition and for representation of fine-scale features, such as riparian...
Convergence of microclimate in residential landscapes across diverse cities in the United States
Sharon J. Hall; J. Learned; B. Ruddell; K.L. Larson; J. Cavender-Bares; N. Bettez; P.M. Groffman; Morgan Grove; J.B. Heffernan; S.E. Hobbie; J.L. Morse; C. Neill; K.C. Nelson; Jarlath O' Neil-Dunne; L. Ogden; D.E. Pataki; W.D. Pearse; C. Polsky; R. Roy Chowdhury; M.K. Steele; T.L.E. Trammell
2016-01-01
The urban heat island (UHI) is a well-documented pattern of warming in cities relative to rural areas. Most UHI research utilizes remote sensing methods at large scales, or climate sensors in single cities surrounded by standardized land cover. Relatively few studies have explored continental-scale climatic patterns within common urban microenvironments such as...
A new hybrid meta-heuristic algorithm for optimal design of large-scale dome structures
NASA Astrophysics Data System (ADS)
Kaveh, A.; Ilchi Ghazaan, M.
2018-02-01
In this article a hybrid algorithm based on a vibrating particles system (VPS) algorithm, multi-design variable configuration (Multi-DVC) cascade optimization, and an upper bound strategy (UBS) is presented for global optimization of large-scale dome truss structures. The new algorithm is called MDVC-UVPS in which the VPS algorithm acts as the main engine of the algorithm. The VPS algorithm is one of the most recent multi-agent meta-heuristic algorithms mimicking the mechanisms of damped free vibration of single degree of freedom systems. In order to handle a large number of variables, cascade sizing optimization utilizing a series of DVCs is used. Moreover, the UBS is utilized to reduce the computational time. Various dome truss examples are studied to demonstrate the effectiveness and robustness of the proposed method, as compared to some existing structural optimization techniques. The results indicate that the MDVC-UVPS technique is a powerful search and optimization method for optimizing structural engineering problems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kharchev, Nikolay; Batanov, German; Petrov, Alexandr
2008-10-15
A version of the collective backscattering diagnostic using gyrotron radiation for small-scale turbulence is described. The diagnostic is used to measure small-scale (k_s ≈ 34 cm⁻¹) plasma density fluctuations in Large Helical Device experiments on electron cyclotron heating of plasma, using the 200 kW, 82.7 GHz heating gyrotron. A good signal-to-noise ratio was obtained during the plasma production phase, while contamination by stray light increased during the plasma build-up phase. The effect of the stray radiation was investigated. The available quasioptical system of the heating setup was utilized for this purpose.
NASA Astrophysics Data System (ADS)
Klise, G. T.; Tidwell, V. C.; Macknick, J.; Reno, M. D.; Moreland, B. D.; Zemlick, K. M.
2013-12-01
In the Southwestern United States, there are many large utility-scale solar photovoltaic (PV) and concentrating solar power (CSP) facilities currently in operation, with even more under construction and planned for future development. These are locations with high solar insolation and access to large metropolitan areas and existing grid infrastructure. The Bureau of Land Management, under a reasonably foreseeable development scenario, projects a total of almost 32 GW of installed utility-scale solar project capacity in the Southwest by 2030. To determine the potential impacts to water resources and the potential limitations water resources may have on development, we utilized methods outlined by the Bureau of Land Management (BLM) to determine potential water use in designated solar energy zones (SEZs) for construction and operations & maintenance (O&M), which is then evaluated according to water availability in six Southwestern states. Our results indicate that PV facilities overall use less water, however water for construction is high compared to lifetime operational water needs. There is a transition underway from wet cooled to dry cooled CSP facilities and larger PV facilities due to water use concerns, though some water is still necessary for construction, operations, and maintenance. Overall, ten watersheds, 9 in California, and one in New Mexico were identified as being of particular concern because of limited water availability. Understanding the location of potentially available water sources can help the solar industry determine locations that minimize impacts to existing water resources, and help understand potential costs when utilizing non-potable water sources or purchasing existing appropriated water. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bolinger, Mark; Seel, Joachim
The utility-scale solar sector—defined here to include any ground-mounted photovoltaic (PV), concentrating photovoltaic (CPV), or concentrating solar power (CSP) project larger than 5 MW-AC in capacity—has led the overall U.S. solar market in terms of installed capacity since 2012. It is expected to maintain its market-leading position for at least another five years, driven in part by December 2015's three-year extension of the 30% federal investment tax credit (ITC) through 2019 (coupled with a favorable switch to a "start construction" rather than a "placed in service" eligibility requirement, and a gradual phase-down of the credit to 10% by 2022). In fact, in 2016 alone, the utility-scale sector is projected to install more than twice as much new capacity as it ever has previously in a single year. This unprecedented boom makes it difficult, yet more important than ever, to stay abreast of the latest utility-scale market developments and trends. This report—the fourth edition in an ongoing annual series—is intended to help meet this need by providing in-depth, annually updated, data-driven analysis of the utility-scale solar project fleet in the United States. Drawing on empirical project-level data from a wide range of sources, this report analyzes not just installed project costs or prices—i.e., the traditional realm of most solar economic analyses—but also operating costs, capacity factors, and power purchase agreement (PPA) prices from a large sample of utility-scale solar projects throughout the United States. Given its current dominance in the market, utility-scale PV also dominates much of this report, though data from CPV and CSP projects are also presented where appropriate.
Sultan, Mohammad M; Kiss, Gert; Shukla, Diwakar; Pande, Vijay S
2014-12-09
Given the large number of crystal structures and NMR ensembles that have been solved to date, classical molecular dynamics (MD) simulations have become powerful tools in the atomistic study of the kinetics and thermodynamics of biomolecular systems on ever increasing time scales. By virtue of the high-dimensional conformational state space that is explored, the interpretation of large-scale simulations faces difficulties not unlike those in the big data community. We address this challenge by introducing a method called clustering based feature selection (CB-FS) that employs a posterior analysis approach. It combines supervised machine learning (SML) and feature selection with Markov state models to automatically identify the relevant degrees of freedom that separate conformational states. We highlight the utility of the method in the evaluation of large-scale simulations and show that it can be used for the rapid and automated identification of relevant order parameters involved in the functional transitions of two exemplary cell-signaling proteins central to human disease states.
NASA Technical Reports Server (NTRS)
Pelletier, R. E.; Hudnall, W. H.
1987-01-01
The use of Space Shuttle Large Format Camera (LFC) color, IR/color, and B&W images in large-scale soil mapping is discussed and illustrated with sample photographs from STS-41-G (October 1984). Consideration is given to the characteristics of the film types used; the photographic scales available; geometric and stereoscopic factors; and image interpretation and classification for soil-type mapping (detecting both sharp and gradual boundaries), soil parent-material, topographic, and hydrologic assessment, natural-resources inventory, crop-type identification, and stress analysis. It is suggested that LFC photography can play an important role, filling the gap between aerial and satellite remote sensing.
Cosmic Rays and Gamma-Rays in Large-Scale Structure
NASA Astrophysics Data System (ADS)
Inoue, Susumu; Nagashima, Masahiro; Suzuki, Takeru K.; Aoki, Wako
2004-12-01
During the hierarchical formation of large scale structure in the universe, the progressive collapse and merging of dark matter should inevitably drive shocks into the gas, with nonthermal particle acceleration as a natural consequence. Two topics in this regard are discussed, emphasizing what important things nonthermal phenomena may tell us about the structure formation (SF) process itself. 1. Inverse Compton gamma-rays from large scale SF shocks and non-gravitational effects, and the implications for probing the warm-hot intergalactic medium. We utilize a semi-analytic approach based on Monte Carlo merger trees that treats both merger and accretion shocks self-consistently. 2. Production of 6Li by cosmic rays from SF shocks in the early Galaxy, and the implications for probing Galaxy formation and uncertain physics on sub-Galactic scales. Our new observations of metal-poor halo stars with the Subaru High Dispersion Spectrograph are highlighted.
The Use of Weighted Graphs for Large-Scale Genome Analysis
Zhou, Fang; Toivonen, Hannu; King, Ross D.
2014-01-01
There is an acute need for better tools to extract knowledge from the growing flood of sequence data. For example, thousands of complete genomes have been sequenced, and their metabolic networks inferred. Such data should enable a better understanding of evolution. However, most existing network analysis methods are based on pair-wise comparisons, and these do not scale to thousands of genomes. Here we propose the use of weighted graphs as a data structure to enable large-scale phylogenetic analysis of networks. We have developed three types of weighted graph for enzymes: taxonomic (these summarize phylogenetic importance), isoenzymatic (these summarize enzymatic variety/redundancy), and sequence-similarity (these summarize sequence conservation); and we applied these types of weighted graph to survey prokaryotic metabolism. To demonstrate the utility of this approach we have compared and contrasted the large-scale evolution of metabolism in Archaea and Eubacteria. Our results provide evidence for limits to the contingency of evolution. PMID:24619061
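As a small illustration of the weighted-graph data structure, the sketch below builds a "taxonomic"-style graph in which enzyme nodes and pathway edges are weighted by how frequently they occur across a set of genomes. The toy genome annotations and pathway links are assumptions, and the paper's actual graph construction may differ in detail.

```python
import networkx as nx

# Toy genome annotations: which EC numbers each genome encodes (assumed data)
genomes = {
    "org_A": {"1.1.1.1", "2.3.1.12", "4.1.1.1"},
    "org_B": {"1.1.1.1", "2.3.1.12"},
    "org_C": {"1.1.1.1", "4.1.1.1"},
}
pathway_links = [("1.1.1.1", "2.3.1.12"), ("2.3.1.12", "4.1.1.1")]

G = nx.Graph()
for ec in set().union(*genomes.values()):
    # Node weight: fraction of genomes encoding this enzyme
    G.add_node(ec, weight=sum(ec in ecs for ecs in genomes.values()) / len(genomes))
for u, v in pathway_links:
    # Edge weight: fraction of genomes encoding both linked enzymes
    both = sum(1 for ecs in genomes.values() if u in ecs and v in ecs)
    G.add_edge(u, v, weight=both / len(genomes))

print(nx.get_node_attributes(G, "weight"))
print(nx.get_edge_attributes(G, "weight"))
```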
Geospatial Optimization of Siting Large-Scale Solar Projects
DOE Office of Scientific and Technical Information (OSTI.GOV)
Macknick, Jordan; Quinby, Ted; Caulfield, Emmet
2014-03-01
Recent policy and economic conditions have encouraged renewed interest in developing large-scale solar projects in the U.S. Southwest. However, siting large-scale solar projects is complex. In addition to the quality of the solar resource, solar developers must take into consideration many environmental, social, and economic factors when evaluating a potential site. This report describes a proof-of-concept, Web-based Geographic Information Systems (GIS) tool that evaluates multiple user-defined criteria in an optimization algorithm to inform discussions and decisions regarding the locations of utility-scale solar projects. Existing siting recommendations for large-scale solar projects from governmental and non-governmental organizations are not consistent with each other, are often not transparent in their methods, and do not take into consideration the differing priorities of stakeholders. The siting assistance GIS tool we have developed improves upon existing siting guidelines by being user-driven, transparent, interactive, capable of incorporating multiple criteria, and flexible. This work provides the foundation for a dynamic siting assistance tool that can greatly facilitate siting decisions among multiple stakeholders.
ERIC Educational Resources Information Center
York, Travis; Becker, Christian
2012-01-01
Despite increased attention for environmental sustainability programming, large-scale adoption of pro-environmental behaviors has been slow and largely short-term. This article analyzes the crucial role of ethics in this respect. The authors utilize an interdisciplinary approach drawing on virtue ethics and cognitive development theory to…
Network bandwidth utilization forecast model on high bandwidth networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoo, Wuchert; Sim, Alex
With the increasing number of geographically distributed scientific collaborations and the growth in data size, it has become more challenging for users to achieve the best possible network performance on a shared network. We have developed a forecast model to predict expected bandwidth utilization for high-bandwidth wide area networks. The forecast model can improve the efficiency of resource utilization and the scheduling of data movements on high-bandwidth networks to accommodate the ever-increasing data volume of large-scale scientific data applications. A univariate model is developed with STL and ARIMA on SNMP path utilization data. Compared with a traditional approach such as the Box-Jenkins methodology, our forecast model reduces computation time by 83.2%. It also shows resilience against abrupt changes in network usage. The accuracy of the forecast model is within the standard deviation of the monitored measurements.
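A minimal sketch of the univariate STL-plus-ARIMA approach, using statsmodels, is shown below; the synthetic diurnal utilization series, seasonal period, and ARIMA order are assumptions rather than the SNMP data or tuned orders from the study.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.forecasting.stl import STLForecast

rng = np.random.default_rng(3)
idx = pd.date_range("2021-01-01", periods=24 * 14, freq="H")    # two weeks, hourly
util = (50 + 30 * np.sin(2 * np.pi * np.arange(len(idx)) / 24)  # diurnal cycle
        + 5 * rng.standard_normal(len(idx)))                    # measurement noise
series = pd.Series(util, index=idx)

# Decompose seasonality with STL, model the remainder with ARIMA, then forecast
model = STLForecast(series, ARIMA, model_kwargs={"order": (2, 0, 1)}, period=24)
forecast = model.fit().forecast(24)   # next 24 hours of predicted utilization
print(forecast.head())
```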
Modelling utility-scale wind power plants. Part 1: Economics
NASA Astrophysics Data System (ADS)
Milligan, Michael R.
1999-10-01
As the worldwide use of wind turbine generators continues to increase in utility-scale applications, it will become increasingly important to assess the economic and reliability impact of these intermittent resources. Although the utility industry in the United States appears to be moving towards a restructured environment, basic economic and reliability issues will continue to be relevant to companies involved with electricity generation. This article is the first of two which address modelling approaches and results obtained in several case studies and research projects at the National Renewable Energy Laboratory (NREL). This first article addresses the basic economic issues associated with electricity production from several generators that include large-scale wind power plants. An important part of this discussion is the role of unit commitment and economic dispatch in production cost models. This paper includes overviews and comparisons of the prevalent production cost modelling methods, including several case studies applied to a variety of electric utilities. The second article discusses various methods of assessing capacity credit and results from several reliability-based studies performed at NREL.
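To illustrate the role of economic dispatch in a production cost model, the sketch below performs a simple merit-order dispatch in which wind is treated as a zero-marginal-cost, energy-limited resource that displaces thermal generation. The unit data and load are illustrative assumptions, not values from the NREL case studies.

```python
units = [  # (name, capacity_MW, marginal_cost_$_per_MWh) -- assumed fleet
    ("wind", 150.0, 0.0),
    ("coal", 400.0, 22.0),
    ("gas_cc", 300.0, 35.0),
    ("gas_ct", 150.0, 80.0),
]

def dispatch(load_mw, wind_mw):
    """Dispatch cheapest units first; wind is limited by its current output."""
    remaining, cost, schedule = load_mw, 0.0, {}
    for name, cap, mc in sorted(units, key=lambda u: u[2]):
        avail = min(cap, wind_mw) if name == "wind" else cap
        gen = min(avail, remaining)
        schedule[name] = gen
        cost += gen * mc
        remaining -= gen
    return schedule, cost

for wind in (0.0, 75.0, 150.0):   # increasing wind output for the same hour
    sched, cost = dispatch(load_mw=700.0, wind_mw=wind)
    print(f"wind={wind:5.1f} MW  cost=${cost:8.0f}  schedule={sched}")
```

Rising wind output pushes the most expensive thermal units out of the dispatch stack, which is the basic mechanism by which production cost models quantify the economic value of intermittent generation.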
Imaging spectroscopy links aspen genotype with below-ground processes at landscape scales
Madritch, Michael D.; Kingdon, Clayton C.; Singh, Aditya; Mock, Karen E.; Lindroth, Richard L.; Townsend, Philip A.
2014-01-01
Fine-scale biodiversity is increasingly recognized as important to ecosystem-level processes. Remote sensing technologies have great potential to estimate both biodiversity and ecosystem function over large spatial scales. Here, we demonstrate the capacity of imaging spectroscopy to discriminate among genotypes of Populus tremuloides (trembling aspen), one of the most genetically diverse and widespread forest species in North America. We combine imaging spectroscopy (AVIRIS) data with genetic, phytochemical, microbial and biogeochemical data to determine how intraspecific plant genetic variation influences below-ground processes at landscape scales. We demonstrate that both canopy chemistry and below-ground processes vary over large spatial scales (continental) according to aspen genotype. Imaging spectrometer data distinguish aspen genotypes through variation in canopy spectral signature. In addition, foliar spectral variation correlates well with variation in canopy chemistry, especially condensed tannins. Variation in aspen canopy chemistry, in turn, is correlated with variation in below-ground processes. Variation in spectra also correlates well with variation in soil traits. These findings indicate that forest tree species can create spatial mosaics of ecosystem functioning across large spatial scales and that these patterns can be quantified via remote sensing techniques. Moreover, they demonstrate the utility of using optical properties as proxies for fine-scale measurements of biodiversity over large spatial scales. PMID:24733949
Cross-Scale Molecular Analysis of Chemical Heterogeneity in Shale Rocks
Hao, Zhao; Bechtel, Hans A.; Kneafsey, Timothy; ...
2018-02-07
The organic and mineralogical heterogeneity of shale at micrometer and nanometer spatial scales contributes to the quality of gas reserves, gas flow mechanisms, and gas production. Here, we demonstrate two molecular imaging approaches based on infrared spectroscopy to obtain mineral and kerogen information at these mesoscale spatial resolutions in large-sized shale rock samples. The first method is a modified microscopic attenuated total reflectance measurement that utilizes a large germanium hemisphere combined with a focal plane array detector to rapidly capture chemical images of shale rock surfaces spanning hundreds of micrometers with micrometer spatial resolution. The second method, synchrotron infrared nano-spectroscopy, utilizes a metallic atomic force microscope tip to obtain chemical images of micrometer dimensions but with nanometer spatial resolution. This chemically "deconvoluted" imaging at the nano-pore scale is then used to build a machine learning model that generates a molecular distribution map across scales spanning a factor of 1000 in spatial extent, which enables high-throughput geochemical characterization in greater detail across the nano-pore and micro-grain scales and allows us to identify co-localization of mineral phases with chemically distinct organics and even with gas-phase sorbents. This characterization is fundamental to understanding the mineral and organic compositions affecting the behavior of shales.
Subsurface Monitoring of CO2 Sequestration - A Review and Look Forward
NASA Astrophysics Data System (ADS)
Daley, T. M.
2012-12-01
The injection of CO2 into subsurface formations is at least 50 years old, with large-scale utilization of CO2 for enhanced oil recovery (CO2-EOR) beginning in the 1970s. Early monitoring efforts had limited measurements in available boreholes. With growing interest in CO2 sequestration beginning in the 1990s, along with growth in geophysical reservoir monitoring, small to mid-size sequestration monitoring projects began to appear. The overall goals of a subsurface monitoring plan are to provide measurement of CO2-induced changes in subsurface properties at a range of spatial and temporal scales. The range of spatial scales allows tracking of the location and saturation of the plume with varying detail, while finer temporal sampling (up to continuous) allows better understanding of dynamic processes (e.g., multi-phase flow) and constraining of reservoir models. Early monitoring of small-scale pilots associated with CO2-EOR (e.g., the McElroy field and the Lost Hills field) developed many of the methodologies, including tomographic imaging and multi-physics measurements. Large (reservoir) scale sequestration monitoring began with the Sleipner and Weyburn projects. Typically, large-scale monitoring, such as 4D surface seismic, has limited temporal sampling due to costs. Smaller-scale pilots can allow more frequent measurements as either individual time-lapse 'snapshots' or as continuous monitoring. Pilot monitoring examples include the Frio, Nagaoka and Otway pilots using repeated well logging, crosswell imaging, vertical seismic profiles and CASSM (continuous active-source seismic monitoring). For saline reservoir sequestration projects, characterization and monitoring are typically integrated, since the sites are not pre-characterized resource developments (oil or gas), which reinforces the need for multi-scale measurements. As we move beyond pilot sites, we need to quantify CO2 plume and reservoir properties (e.g., pressure) over large scales while still obtaining high resolution. Typically the high-resolution (spatial and temporal) tools are deployed in permanent or semi-permanent borehole installations, where special well design may be necessary, such as non-conductive casing for electrical surveys. Effective utilization of monitoring wells requires an approach of modular borehole monitoring (MBM) where multiple measurements can be made. An example is recent work at the Citronelle pilot injection site, where an MBM package with seismic, fluid sampling and distributed fiber sensing was deployed. For future large-scale sequestration monitoring, an adaptive borehole-monitoring program is proposed.
NASA Astrophysics Data System (ADS)
McClain, Bobbi J.; Porter, William F.
2000-11-01
Satellite imagery is a useful tool for large-scale habitat analysis; however, its limitations need to be tested. We tested these limitations by varying the methods of a habitat evaluation for white-tailed deer ( Odocoileus virginianus) in the Adirondack Park, New York, USA, utilizing harvest data to create and validate the assessment models. We used two classified images, one with a large minimum mapping unit but high accuracy and one with no minimum mapping unit but slightly lower accuracy, to test the sensitivity of the evaluation to these differences. We tested the utility of two methods of assessment, habitat suitability index modeling, and pattern recognition modeling. We varied the scale at which the models were applied by using five separate sizes of analysis windows. Results showed that the presence of a large minimum mapping unit eliminates important details of the habitat. Window size is relatively unimportant if the data are averaged to a large resolution (i.e., township), but if the data are used at the smaller resolution, then the window size is an important consideration. In the Adirondacks, the proportion of hardwood and softwood in an area is most important to the spatial dynamics of deer populations. The low occurrence of open area in all parts of the park either limits the effect of this cover type on the population or limits our ability to detect the effect. The arrangement and interspersion of cover types were not significant to deer populations.
NASA Astrophysics Data System (ADS)
Amoroso, Richard L.
A brief introductory survey of Unified Field Mechanics (UFM) is given from the perspective of a Holographic Anthropic Multiverse cosmology in 12 'continuous-state' dimensions. The paradigm, with many new parameters, is cast in a scale-invariant, conformal covariant Dirac polarized vacuum utilizing extended HD forms of the de Broglie-Bohm and Cramer interpretations of quantum theory. The model utilizes a unique form of M-theory based in part on the original hadronic form of string theory, which had a variable string tension, T_S, and included a tachyon. The model is experimentally testable, thus putatively able to demonstrate the existence of large-scale additional dimensionality (LSXD), to test for QED-violating tight-bound-state spectral lines in hydrogen 'below' the lowest Bohr orbit, and to surmount the quantum uncertainty principle utilizing a hyperincursive Sagnac effect resonance hierarchy.
Integrating Residential Photovoltaics With Power Lines
NASA Technical Reports Server (NTRS)
Borden, C. S.
1985-01-01
This report finds that rooftop solar-cell arrays that feed excess power to the electric-utility grid for a fee are a potentially attractive large-scale application of photovoltaic technology. It presents an assessment of the breakeven costs of these arrays under a variety of technological and economic assumptions.
Akerele, David; Ljolje, Dragan; Talundzic, Eldin; Udhayakumar, Venkatachalam; Lucchi, Naomi W
2017-01-01
Accurate diagnosis of malaria infections continues to be challenging and elusive, especially in the detection of submicroscopic infections. Developing new malaria diagnostic tools that are sensitive enough to detect low-level infections, user friendly, cost effective, and capable of performing large-scale diagnosis remains critical. We have designed novel self-quenching photo-induced electron transfer (PET) fluorogenic primers for the detection of P. ovale by real-time PCR. In our study, a total of 173 clinical samples, comprising different malaria species, were utilized to test this novel PET-PCR primer set. The sensitivity and specificity were calculated using nested PCR as the reference test. The novel primer set demonstrated a sensitivity of 97.5% and a specificity of 99.2% (95% CI 85.2-99.8% and 95.2-99.9%, respectively). Furthermore, the limit of detection for P. ovale was found to be 1 parasite/μl. The PET-PCR assay is a new molecular diagnostic tool with performance comparable to other commonly used PCR methods. It is relatively easy to perform and amenable to large-scale malaria surveillance studies and malaria control and elimination programs. Further field validation of this novel primer set will help ascertain its utility for large-scale malaria screening programs. PMID:28640824
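The reported accuracy figures can be reproduced from a 2x2 table consistent with the sample size. The sketch below assumes illustrative counts (40 nested-PCR positives and 133 negatives, totaling 173 samples) and uses the Wilson score interval; the counts are reconstructions for demonstration, not data taken from the paper.

```python
from math import sqrt

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% confidence interval for a binomial proportion."""
    p = successes / n
    center = (p + z * z / (2 * n)) / (1 + z * z / n)
    half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / (1 + z * z / n)
    return center - half, center + half

tp, fn = 39, 1     # assumed true positives / false negatives vs. nested PCR
tn, fp = 132, 1    # assumed true negatives / false positives

sens, spec = tp / (tp + fn), tn / (tn + fp)
s_lo, s_hi = wilson_ci(tp, tp + fn)
p_lo, p_hi = wilson_ci(tn, tn + fp)
print(f"sensitivity {sens:.1%} (95% CI {s_lo:.1%}-{s_hi:.1%})")
print(f"specificity {spec:.1%} (95% CI {p_lo:.1%}-{p_hi:.1%})")
```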
Akerele, David; Ljolje, Dragan; Talundzic, Eldin; Udhayakumar, Venkatachalam; Lucchi, Naomi W
2017-01-01
Accurate diagnosis of malaria infections continues to be challenging and elusive, especially in the detection of submicroscopic infections. Developing new malaria diagnostic tools that are sensitive enough to detect low-level infections, user-friendly, cost-effective, and capable of performing large-scale diagnosis remains critical. We have designed novel self-quenching photo-induced electron transfer (PET) fluorogenic primers for the detection of P. ovale by real-time PCR. In our study, a total of 173 clinical samples, consisting of different malaria species, were utilized to test this novel PET-PCR primer. The sensitivity and specificity were calculated using nested-PCR as the reference test. The novel primer set demonstrated a sensitivity of 97.5% and a specificity of 99.2% (95% CI 85.2-99.8% and 95.2-99.9%, respectively). Furthermore, the limit of detection for P. ovale was found to be 1 parasite/μl. The PET-PCR assay is a new molecular diagnostic tool with comparable performance to other commonly used PCR methods. It is relatively easy to perform and amenable to large-scale malaria surveillance studies and malaria control and elimination programs. Further field validation of this novel primer will be helpful to ascertain its utility for large-scale malaria screening programs.
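For readers who want to reproduce this kind of diagnostic summary, the sketch below computes sensitivity, specificity, and Wilson 95% confidence intervals against a reference test; the 2x2 counts are hypothetical and only roughly consistent with the percentages quoted above, not the study's actual data.

```python
# Illustrative sketch: sensitivity/specificity with Wilson 95% CIs against a
# nested-PCR reference. The 2x2 counts are hypothetical (roughly consistent
# with the percentages above), not the study's actual data.
from statsmodels.stats.proportion import proportion_confint

tp, fn = 39, 1      # hypothetical PET-PCR calls among nested-PCR positives
tn, fp = 132, 1     # hypothetical PET-PCR calls among nested-PCR negatives

sens = tp / (tp + fn)
spec = tn / (tn + fp)
sens_ci = proportion_confint(tp, tp + fn, alpha=0.05, method="wilson")
spec_ci = proportion_confint(tn, tn + fp, alpha=0.05, method="wilson")
print(f"sensitivity {sens:.3f} (95% CI {sens_ci[0]:.3f}-{sens_ci[1]:.3f})")
print(f"specificity {spec:.3f} (95% CI {spec_ci[0]:.3f}-{spec_ci[1]:.3f})")
```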
To discover novel PPI signaling hubs for lung cancer, the CTD2 Center at Emory utilized large-scale genomics datasets and the literature to compile a set of lung cancer-associated genes. A library of expression vectors was generated for these genes and utilized for detecting pairwise PPIs with cell lysate-based TR-FRET assays in a high-throughput screening format.
Feasibility of large-scale power plants based on thermoelectric effects
NASA Astrophysics Data System (ADS)
Liu, Liping
2014-12-01
Heat resources of small temperature difference are easily accessible, free, and enormous on the Earth. Thermoelectric effects provide the technology for converting these heat resources directly into electricity. We present designs for electricity generators based on thermoelectric effects that utilize heat resources of small temperature difference, e.g., ocean water at different depths and geothermal resources, and conclude that large-scale power plants based on thermoelectric effects are feasible and economically competitive. The key observation is that the power factor of thermoelectric materials, unlike the figure of merit, can be improved by orders of magnitude upon laminating good conductors and good thermoelectric materials. The predicted large-scale power generators based on thermoelectric effects, if validated, will have the advantages of scalability, renewability, and free supply of heat resources of small temperature difference on the Earth.
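The standard quantities referenced above can be written down directly; the following sketch evaluates the power factor and the figure of merit ZT for illustrative (assumed) material values, not the laminate designs analyzed in the paper.

```python
# Standard definitions referenced above, evaluated for illustrative (assumed)
# material values; this is not the paper's laminate model.
S = 200e-6        # Seebeck coefficient, V/K (illustrative)
sigma = 1e5       # electrical conductivity, S/m (illustrative)
kappa = 1.5       # thermal conductivity, W/(m K) (illustrative)
T = 300.0         # absolute temperature, K

power_factor = S**2 * sigma            # PF = S^2 * sigma, in W m^-1 K^-2
ZT = power_factor * T / kappa          # figure of merit, dimensionless
print(f"power factor = {power_factor:.3e} W m^-1 K^-2, ZT = {ZT:.2f}")
```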
NASA Astrophysics Data System (ADS)
Hotta, Arto
During recent years, once-through supercritical (OTSC) CFB technology has been developed, enabling CFB technology to proceed to medium-scale (500 MWe) utility projects such as the Łagisza Power Plant in Poland owned by Poludniowy Koncern Energetyczny S.A. (PKE), with a net efficiency of nearly 44%. The Łagisza power plant is currently under commissioning and reached full-load operation in March 2009. The initial operation shows very good performance and confirms that the CFB process has no problems with scaling up to this size. The once-through steam cycle utilizing Siemens' vertical-tube Benson technology has also performed as predicted in the CFB process. Foster Wheeler has developed the CFB design further, up to 800 MWe with a net efficiency of ≥45%.
Enzymatic regeneration of adenosine triphosphate cofactor
NASA Technical Reports Server (NTRS)
Marshall, D. L.
1974-01-01
Regenerating adenosine triphosphate (ATP) from adenosine diphosphate (ADP) by an enzymatic process that utilizes carbamyl phosphate as the phosphoryl donor is a technique used to regenerate expensive cofactors. The process allows complex enzymatic reactions to be considered as candidates for large-scale continuous processes.
Trajectory and Mixing Scaling Laws for Confined and Unconfined Transverse Jets
2012-05-01
engines, issues of confinement, very large density ratio, and super/transcritical effects complicate the utility of the ... opposite wall at a streamwise position that is one-half pipe diameter downstream of the injection location (termed moderate impaction). This ... BD, and Eq. 10 scaling laws are 0.97 and 0.90, respectively. One of the primary effects of the confinement is that the
Cost estimate for a proposed GDF Suez LNG testing program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blanchat, Thomas K.; Brady, Patrick Dennis; Jernigan, Dann A.
2014-02-01
At the request of GDF Suez, a Rough Order of Magnitude (ROM) cost estimate was prepared for the design, construction, testing, and data analysis for an experimental series of large-scale liquefied natural gas (LNG) spills on land and water that would result in the largest pool fires and vapor dispersion events ever conducted. Due to the expected cost of this large, multi-year program, the authors utilized Sandia's structured cost estimating methodology. This methodology ensures that the efforts identified can be performed for the cost proposed at a plus or minus 30 percent confidence. The scale of the LNG spill, fire, and vapor dispersion tests proposed by GDF could produce hazard distances and testing safety issues that need to be fully explored. Based on our evaluations, Sandia can utilize much of our existing fire testing infrastructure for the large fire tests and some small dispersion tests (with some modifications) in Albuquerque, but we propose to develop a new dispersion testing site at our remote test area in Nevada because of the large hazard distances. While this might impact some testing logistics, the safety aspects warrant this approach. In addition, we have included a proposal to study cryogenic liquid spills on water and subsequent vaporization in the presence of waves. Sandia is working with DOE on applications that provide infrastructure pertinent to wave production. We present an approach to conduct repeatable wave/spill interaction testing that could utilize such infrastructure.
Combined heat and power supply using Carnot engines
NASA Astrophysics Data System (ADS)
Horlock, J. H.
The Marshall Report on the thermodynamic and economic feasibility of introducing large scale combined heat and electrical power generation (CHP) into the United Kingdom is summarized. Combinations of reversible power plant (Carnot engines) to meet a given demand of power and heat production are analyzed. The Marshall Report states that fairly large scale CHP plants are an attractive energy saving option for areas of high heat load densities. Analysis shows that for given requirements, the total heat supply and utilization factor are functions of heat output, reservoir supply temperature, temperature of heat rejected to the reservoir, and an intermediate temperature for district heating.
A Combined Eulerian-Lagrangian Data Representation for Large-Scale Applications.
Sauer, Franz; Xie, Jinrong; Ma, Kwan-Liu
2017-10-01
The Eulerian and Lagrangian reference frames each provide a unique perspective when studying and visualizing results from scientific systems. As a result, many large-scale simulations produce data in both formats, and analysis tasks that simultaneously utilize information from both representations are becoming increasingly popular. However, due to their fundamentally different nature, drawing correlations between these data formats is a computationally difficult task, especially in a large-scale setting. In this work, we present a new data representation which combines both reference frames into a joint Eulerian-Lagrangian format. By reorganizing Lagrangian information according to the Eulerian simulation grid into a "unit cell" based approach, we can provide an efficient out-of-core means of sampling, querying, and operating with both representations simultaneously. We also extend this design to generate multi-resolution subsets of the full data to suit the viewer's needs and provide a fast flow-aware trajectory construction scheme. We demonstrate the effectiveness of our method using three large-scale real world scientific datasets and provide insight into the types of performance gains that can be achieved.
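A minimal sketch of the cell-based reorganization idea (assumptions only, not the authors' file format): each Lagrangian particle is mapped to the Eulerian grid cell that contains it, and particles are then sorted by cell index so that both representations can be queried together.

```python
# Minimal sketch (assumed layout, not the authors' file format): map each
# Lagrangian particle to the Eulerian cell that contains it, then group
# particles by cell index so both representations can be queried together.
import numpy as np

def bin_particles(positions, grid_min, cell_size, grid_shape):
    """Return a flat Eulerian cell index for every particle position."""
    idx = np.floor((positions - grid_min) / cell_size).astype(int)
    idx = np.clip(idx, 0, np.array(grid_shape) - 1)
    return np.ravel_multi_index((idx[:, 0], idx[:, 1], idx[:, 2]), grid_shape)

rng = np.random.default_rng(1)
particles = rng.uniform(0.0, 1.0, size=(100_000, 3))        # x, y, z in [0, 1)
cells = bin_particles(particles, grid_min=0.0, cell_size=1 / 64, grid_shape=(64, 64, 64))
order = np.argsort(cells)                                    # particle IDs grouped by "unit cell"
print(cells[order][:5])
```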
REMOVAL OF SO2 FROM INDUSTRIAL WASTE GASES
The paper discusses technology for sulfur dioxide (SO2) pollution control by flue gas cleaning (called 'scrubbing') in the utility industry, a technology that has advanced significantly during the past 5 years. Federal Regulations are resulting in increasingly large-scale applica...
NASA Astrophysics Data System (ADS)
Sasaki, Yuki; Kitaura, Ryo; Yuk, Jong Min; Zettl, Alex; Shinohara, Hisanori
2016-04-01
By utilizing graphene-sandwiched structures recently developed in this laboratory, we are able to visualize small droplets of liquids at the nanometer scale. We have found that water droplets as small as several tens of nanometers, sandwiched between two single-layer graphene sheets, are frequently observed by TEM. Due to the electron beam irradiation during the TEM observation, these sandwiched droplets frequently move from one place to another and tend to form small bubbles inside. The synthesis of large-area, high-quality single-domain graphene is essential to prepare the graphene-sandwiched cell, which safely encapsulates the nanometer-sized droplets.
Hofmeister series salts enhance purification of plasmid DNA by non-ionic detergents
Lezin, George; Kuehn, Michael R.; Brunelli, Luca
2011-01-01
Ion-exchange chromatography is the standard technique used for plasmid DNA purification, an essential molecular biology procedure. Non-ionic detergents (NIDs) have been used for plasmid DNA purification, but it is unclear whether Hofmeister series salts (HSS) change the solubility and phase separation properties of specific NIDs, enhancing plasmid DNA purification. After scaling-up NID-mediated plasmid DNA isolation, we established that NIDs in HSS solutions minimize plasmid DNA contamination with protein. In addition, large-scale NID/HSS solutions eliminated LPS contamination of plasmid DNA more effectively than Qiagen ion-exchange columns. Large-scale NID isolation/NID purification generated increased yields of high quality DNA compared to alkali isolation/column purification. This work characterizes how HSS enhance NID-mediated plasmid DNA purification, and demonstrates that NID phase transition is not necessary for LPS removal from plasmid DNA. Specific NIDs such as IGEPAL CA-520 can be utilized for rapid, inexpensive and efficient laboratory-based large-scale plasmid DNA purification, outperforming Qiagen-based column procedures. PMID:21351074
Zhao, Shanrong; Prenger, Kurt; Smith, Lance
2013-01-01
RNA-Seq is becoming a promising replacement for microarrays in transcriptome profiling and differential gene expression studies. Technical improvements have decreased sequencing costs and, as a result, the size and number of RNA-Seq datasets have increased rapidly. However, the increasing volume of data from large-scale RNA-Seq studies poses a practical challenge for data analysis in a local environment. To meet this challenge, we developed Stormbow, a cloud-based software package, to process large volumes of RNA-Seq data in parallel. The performance of Stormbow has been tested by practically applying it to analyse 178 RNA-Seq samples in the cloud. In our test, it took 6 to 8 hours to process an RNA-Seq sample with 100 million reads, and the average cost was $3.50 per sample. Utilizing Amazon Web Services as the infrastructure for Stormbow allows us to easily scale up to handle large datasets with on-demand computational resources. Stormbow is a scalable, cost-effective, open-source tool for large-scale RNA-Seq data analysis. Stormbow can be freely downloaded and used out of the box to process Illumina RNA-Seq datasets. PMID:25937948
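A quick back-of-the-envelope check of the cost figures quoted above (arithmetic only; no new data):

```python
# Back-of-the-envelope check of the figures quoted above (arithmetic only).
samples = 178
cost_per_sample = 3.50            # USD per sample, as reported
hours_per_sample = (6, 8)         # wall-clock range per 100M-read sample, as reported

total_cost = samples * cost_per_sample
print(f"approximate total compute cost: ${total_cost:,.2f}")
print(f"per-sample wall clock: {hours_per_sample[0]}-{hours_per_sample[1]} h (samples run in parallel)")
```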
On the Large-Scaling Issues of Cloud-based Applications for Earth Science Data
NASA Astrophysics Data System (ADS)
Hua, H.
2016-12-01
Next generation science data systems are needed to address the incoming flood of data from new missions such as NASA's SWOT and NISAR, whose SAR data volumes and data throughput rates are orders of magnitude larger than those of present-day missions. Existing missions, such as OCO-2, may also require high turn-around time for processing different science scenarios where on-premise and even traditional HPC computing environments may not meet the high processing needs. Additionally, traditional means of procuring hardware on-premise are already limited due to facilities capacity constraints for these new missions. Experiences have shown that embracing efficient cloud computing approaches for large-scale science data systems requires more than just moving existing code to cloud environments. At large cloud scales, we need to deal with scaling and cost issues. We present our experiences on deploying multiple instances of our hybrid-cloud computing science data system (HySDS) to support large-scale processing of Earth Science data products. We will explore optimization approaches to getting the best performance out of hybrid-cloud computing as well as common issues that arise when dealing with large-scale computing. Novel approaches were utilized to do processing on Amazon's spot market, which can potentially offer 75%-90% cost savings but with an unpredictable computing environment based on market forces.
Large scale in vivo recordings to study neuronal biophysics.
Giocomo, Lisa M
2015-06-01
Over the last several years, technological advances have enabled researchers to more readily observe single-cell membrane biophysics in awake, behaving animals. Studies utilizing these technologies have provided important insights into the mechanisms generating functional neural codes in both sensory and non-sensory cortical circuits. Crucial for a deeper understanding of how membrane biophysics control circuit dynamics, however, is a continued effort to move toward large-scale studies of membrane biophysics, in terms of the numbers of neurons and ion channels examined. Future work faces a number of theoretical and technical challenges on this front, but recent technological developments hold great promise for a larger-scale understanding of how membrane biophysics contribute to circuit coding and computation. Copyright © 2014 Elsevier Ltd. All rights reserved.
Remm, Jaanus; Hanski, Ilpo K; Tuominen, Sakari; Selonen, Vesa
2017-10-01
Animals use and select habitat at multiple hierarchical levels and at different spatial scales within each level. Still, there is little knowledge on the scale effects at different spatial levels of species occupancy patterns. The objective of this study was to examine nonlinear effects and optimal-scale landscape characteristics that affect occupancy of the Siberian flying squirrel, Pteromys volans, in South- and Mid-Finland. We used presence-absence data (n = 10,032 plots of 9 ha) and a novel approach to separate the effects on site-, landscape-, and regional-level occupancy patterns. Our main results were: landscape variables predicted the placement of population patches at least twice as well as they predicted the occupancy of particular sites; the optimal value of preferred habitat cover for landscape-level abundance is surprisingly low (10% within a 4 km buffer); and landscape metrics exert different effects on species occupancy and abundance in high versus low population density regions of our study area. We conclude that knowledge of regional variation in landscape utilization will be essential for successful conservation of the species. The results also support the view that large-scale landscape variables have high predictive power in explaining species abundance. Our study demonstrates the complex response of species occurrence to landscape structure at different levels of population configuration. The study also highlights the need for data at large spatial scales to increase the precision of biodiversity mapping and prediction of future trends.
Development of fire test methods for airplane interior materials
NASA Technical Reports Server (NTRS)
Tustin, E. A.
1978-01-01
Fire tests were conducted in a 737 airplane fuselage at NASA-JSC to characterize jet fuel fires in open steel pans (simulating post-crash fire sources and a ruptured airplane fuselage) and to characterize fires in some common combustibles (simulating in-flight fire sources). Design post-crash and in-flight fire source selections were based on these data. Large panels of airplane interior materials were exposed to closely controlled large-scale heating simulations of the two design fire sources in a Boeing fire test facility utilizing a surplus 707 fuselage section. Small samples of the same airplane materials were tested by several laboratory fire test methods. Large-scale and laboratory-scale data were examined for correlative factors. Published data for dangerous hazard levels in a fire environment were used as the basis for developing a method to select the most desirable material where trade-offs in heat, smoke, and gaseous toxicant evolution must be considered.
NASA Astrophysics Data System (ADS)
Zeng, Y. K.; Zhao, T. S.; An, L.; Zhou, X. L.; Wei, L.
2015-12-01
The promise of redox flow batteries (RFBs) utilizing soluble redox couples, such as all vanadium ions as well as iron and chromium ions, is becoming increasingly recognized for large-scale energy storage of renewables such as wind and solar, owing to their unique advantages including scalability, intrinsic safety, and long cycle life. An ongoing question associated with these two RFBs is determining whether the vanadium redox flow battery (VRFB) or iron-chromium redox flow battery (ICRFB) is more suitable and competitive for large-scale energy storage. To address this concern, a comparative study has been conducted for the two types of battery based on their charge-discharge performance, cycle performance, and capital cost. It is found that: i) the two batteries have similar energy efficiencies at high current densities; ii) the ICRFB exhibits a higher capacity decay rate than does the VRFB; and iii) the ICRFB is much less expensive in capital costs when operated at high power densities or at large capacities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hendriks, R.V.; Nolan, P.S.
1987-01-01
The paper describes and discusses the key design features of the retrofit of EPA's Limestone Injection Multistage Burner (LIMB) system to an operating, wall-fired utility boiler at Ohio Edison's Edgewater Station. It further describes results of the pertinent projects in EPA's LIMB program and shows how these results were used as the basis for the design of the system. The full-scale demonstration is expected to prove the effectiveness and cost of the LIMB concept for use on large-scale utility boilers. The equipment is now being installed at Edgewater, with system start-up scheduled for May 1987.
Habitats as ecological templates and their utility in ecological resource management
The kind, distribution and abundance of estuarine organisms largely depend upon the habitats present. Consequently, as habitats change so may organisms in the landscape together with the ecosystem goods and services they provide. At estuary scale, sediment features (sand; mud),...
Nowcasting Induced Seismicity at the Groningen Gas Field in the Netherlands
NASA Astrophysics Data System (ADS)
Luginbuhl, M.; Rundle, J. B.; Turcotte, D. L.
2017-12-01
The Groningen natural gas field in the Netherlands has recently been a topic of controversy for many residents in the surrounding area. The gas field provides energy for the majority of the country; however, for a minority of Dutch citizens who live nearby, the seismicity induced by the gas field is a cause for major concern. Since the early 2000s, the region has seen an increase in both the number and magnitude of events, the largest of which was a magnitude 3.6 in 2012. Earthquakes of this size and smaller easily cause infrastructural damage to older houses and farms built with single brick walls. Nowcasting is a new method of statistically classifying seismicity and seismic risk. In this paper, the method is applied to the induced seismicity at the natural gas field in Groningen, Netherlands. Nowcasting utilizes the catalogs of seismicity in these regions. Two earthquake magnitudes are selected, one large and one small. The method utilizes the number of small earthquakes that occur between pairs of large earthquakes, and the cumulative probability distribution of these interevent counts is obtained. The earthquake potential score (EPS) is defined as the point where the number of small earthquakes that have occurred since the last large earthquake falls on this cumulative probability distribution. A major advantage of nowcasting is that it utilizes "natural time", earthquake counts between events, rather than clock time. Thus, it is not necessary to decluster aftershocks, and the results are applicable if the level of induced seismicity varies in time, which it does in this case. The application of natural time to the accumulation of the seismic hazard depends on the applicability of Gutenberg-Richter (GR) scaling. The increasing number of small earthquakes that occur after a large earthquake can be scaled to give the risk of a large earthquake occurring. To illustrate our approach, we utilize the number of small earthquakes in Groningen to nowcast the seismic hazard in the field. The applicability of the scaling is illustrated during the rapid build-up of seismicity between 2004 and 2016. It can now be used to forecast the expected reduction in seismicity associated with reduction in gas production.
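A hedged sketch of the earthquake potential score computation described above, using natural time (counts of small events between successive large events); the magnitude thresholds and the synthetic catalog are illustrative assumptions, not the Groningen data.

```python
# Hedged sketch of the earthquake potential score (EPS) in natural time.
# The magnitude thresholds and the synthetic catalog are illustrative
# assumptions, not the Groningen data.
import numpy as np

def eps(magnitudes, m_small, m_large):
    """EPS = empirical CDF of inter-large-event small counts, evaluated at the
    small-event count accumulated since the last large event."""
    counts, current = [], 0
    for m in magnitudes:                 # catalog in time order
        if m >= m_large:
            counts.append(current)
            current = 0
        elif m >= m_small:
            current += 1
    counts = np.sort(counts)
    return np.searchsorted(counts, current, side="right") / len(counts)

rng = np.random.default_rng(2)
catalog = 1.0 + rng.exponential(0.5, size=5000)   # GR-like stand-in magnitudes
print(f"EPS = {eps(catalog, m_small=1.5, m_large=2.5):.2f}")
```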
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schauder, C.
This subcontract report was completed under the auspices of the NREL/SCE High-Penetration Photovoltaic (PV) Integration Project, which is co-funded by the U.S. Department of Energy (DOE) Office of Energy Efficiency and Renewable Energy (EERE) and the California Solar Initiative (CSI) Research, Development, Demonstration, and Deployment (RD&D) program funded by the California Public Utility Commission (CPUC) and managed by Itron. This project is focused on modeling, quantifying, and mitigating the impacts of large utility-scale PV systems (generally 1-5 MW in size) that are interconnected to the distribution system. This report discusses the concerns utilities have when interconnecting large PV systems that interconnect using PV inverters (a specific application of frequency converters). Additionally, a number of capabilities of PV inverters are described that could be implemented to mitigate the distribution system-level impacts of high-penetration PV integration. Finally, the main issues that need to be addressed to ease the interconnection of large PV systems to the distribution system are presented.
NASA Technical Reports Server (NTRS)
Scholl, R. E. (Editor)
1979-01-01
Earthquake engineering research capabilities of the National Aeronautics and Space Administration (NASA) facilities at George C. Marshall Space Flight Center (MSFC), Alabama, were evaluated. The results indicate that the NASA/MSFC facilities and supporting capabilities offer unique opportunities for conducting earthquake engineering research. Specific features that are particularly attractive for large scale static and dynamic testing of natural and man-made structures include the following: large physical dimensions of buildings and test bays; high loading capacity; wide range and large number of test equipment and instrumentation devices; multichannel data acquisition and processing systems; technical expertise for conducting large-scale static and dynamic testing; sophisticated techniques for systems dynamics analysis, simulation, and control; and capability for managing large-size and technologically complex programs. Potential uses of the facilities for near and long term test programs to supplement current earthquake research activities are suggested.
Fiber supercapacitors utilizing pen ink for flexible/wearable energy storage.
Fu, Yongping; Cai, Xin; Wu, Hongwei; Lv, Zhibin; Hou, Shaocong; Peng, Ming; Yu, Xiao; Zou, Dechun
2012-11-08
A novel type of flexible fiber/wearable supercapacitor, composed of two fiber electrodes, a helical spacer wire, and an electrolyte, is demonstrated. In the carbon-based fiber supercapacitor (FSC), which has high capacitance performance, commercial pen ink is directly utilized as the electrochemical material. FSCs have potential benefits in the pursuit of low-cost, large-scale, and efficient flexible/wearable energy storage systems. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Ozawa, Sachiko; Grewal, Simrun; Bridges, John F P
2016-04-01
Community-based health insurance (CBHI) schemes have been introduced in low- and middle-income countries to increase health service utilization and provide financial protection from high healthcare expenditures. We assess the impact of household size on decisions to enroll in CBHI and demonstrate how to correct for group disparity in scale (i.e., variance differences). A discrete choice experiment was conducted across five CBHI attributes. Preferences were elicited through forced-choice paired comparison choice tasks designed based on D-efficiency. Differences in preferences were examined between small (1-4 family members) and large (5-12 members) households using conditional logistic regression. The Swait and Louviere test was used to identify and correct for differences in scale. One hundred sixty households were surveyed in Northwest Cambodia. Increased insurance premium was associated with disutility [odds ratio (OR) 0.61, p < 0.01], while significant increases in utility were noted for higher hospital fee coverage (OR 10.58, p < 0.01), greater coverage of travel and meal costs (OR 4.08, p < 0.01), and more frequent communication with the insurer (OR 1.33, p < 0.01). While the magnitude of preference for hospital fee coverage appeared larger for the large household group (OR 14.15) compared to the small household group (OR 8.58), differences in scale were observed (p < 0.05). After adjusting for scale (k, the ratio of scale between the large and small household groups, = 1.227, 95% confidence interval 1.002-1.515), preference differences by household size became negligible. Differences in stated preferences may be due to scale, or variance differences between groups, rather than true variations in preference. Coverage of hospital fees and of travel and meal costs is given significant weight in CBHI enrollment decisions regardless of household size. Understanding how community members make decisions about health insurance can inform low- and middle-income countries' paths towards universal health coverage.
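The scale correction can be illustrated with the numbers reported above: dividing the large-household log-odds estimate by the scale ratio k brings it to roughly the small-household value, which is why the preference difference becomes negligible.

```python
# Arithmetic illustration using the reported figures only: under a multiplicative
# scale difference, observed coefficients confound preference and scale, so the
# large-household estimate is divided by k before comparison.
import math

k = 1.227                          # reported scale ratio (large : small household groups)
beta_large = math.log(14.15)       # log of the large-household OR for hospital fee coverage
beta_small = math.log(8.58)        # log of the small-household OR for the same attribute
print(round(beta_large / k, 2), round(beta_small, 2))   # about 2.16 vs 2.15: the gap vanishes
```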
NASA Astrophysics Data System (ADS)
Yu, Qing; Wang, Bowen; Chen, Zhengwei; Urabe, Go; Glover, Matthew S.; Shi, Xudong; Guo, Lian-Wang; Kent, K. Craig; Li, Lingjun
2017-09-01
Protein glycosylation, one of the most heterogeneous post-translational modifications, can play a major role in cellular signal transduction and disease progression. Traditional mass spectrometry (MS)-based large-scale glycoprotein sequencing studies heavily rely on identifying enzymatically released glycans and their original peptide backbone separately, as there is no efficient fragmentation method to produce unbiased glycan and peptide product ions simultaneously in a single spectrum and that can be conveniently applied to high-throughput glycoproteome characterization, especially for N-glycopeptides, which can have much more branched glycan side chains than relatively less complex O-linked glycans. In this study, a redefined electron-transfer/higher-energy collision dissociation (EThcD) fragmentation scheme is applied to incorporate both glycan and peptide fragments in one single spectrum, enabling complete information to be gathered and great microheterogeneity details to be revealed. Fetuin was first utilized to prove the applicability, with 19 glycopeptides and the corresponding five glycosylation sites identified. Subsequent experiments tested its utility for human plasma N-glycoproteins. Large-scale studies explored N-glycoproteomics in rat carotid arteries over the course of restenosis progression to investigate the potential role of glycosylation. The integrated fragmentation scheme provides a powerful tool for the analysis of intact N-glycopeptides and N-glycoproteomics. We also anticipate this approach can be readily applied to large-scale O-glycoproteome characterization.
Region-specific patterns and drivers of macroscale forest plant invasions
Basil V. Iannone; Christopher M. Oswalt; Andrew M. Liebhold; Qinfeng Guo; Kevin M. Potter; Gabriela C. Nunez-Mir; Sonja N. Oswalt; Bryan C. Pijanowski; Songlin Fei; Bethany Bradley
2015-01-01
Aim: Stronger inferences about biological invasions may be obtained when accounting for multiple invasion measures and the spatial heterogeneity occurring across large geographic areas. We pursued this enquiry by utilizing a multimeasure, multiregional framework to investigate forest plant invasions at a subcontinental scale. ...
NREL, NYSERDA, and Con Edison Partner on Home Energy Management Systems
At large scale, the overall impact could be a win-win for both homeowners and utilities. Founded in 1823, Con Edison provides electric, gas, and steam service to 10 million people.
A perspective on forward research and development paths for cost-effective solar energy utilization.
Lewis, Nathan S
2009-01-01
Solar electricity has long been recognized as a potential energy source that holds great promise. Several approaches towards converting sunlight into energy are elaborated in this Viewpoint, and discussed with respect to their feasibility for large-scale application.
Rapid underway profiling of water quality in Queensland estuaries.
Hodge, Jonathan; Longstaff, Ben; Steven, Andy; Thornton, Phillip; Ellis, Peter; McKelvie, Ian
2005-01-01
We present an overview of a portable underway water quality monitoring system (RUM - Rapid Underway Monitoring), developed by integrating several off-the-shelf water quality instruments to provide rapid, comprehensive, and spatially referenced 'snapshots' of water quality conditions. We demonstrate the utility of the system from studies in the Northern Great Barrier Reef (Daintree River) and the Moreton Bay region. The Brisbane dataset highlights RUM's utility in characterising plumes as well as its ability to identify the smaller-scale structure of large areas. RUM is shown to be particularly useful when measuring indicators with large small-scale variability, such as turbidity and chlorophyll-a. Additionally, the Daintree dataset shows the ability to integrate other technologies, resulting in a more comprehensive analysis, whilst sampling offshore highlights some of the analytical issues associated with sampling low-concentration data. RUM is a low-cost, highly flexible solution that can be modified for use in any water type, on most vessels, and is only limited by the available monitoring technologies.
Propeller aircraft interior noise model utilization study and validation
NASA Technical Reports Server (NTRS)
Pope, L. D.
1984-01-01
Utilization and validation of a computer program designed for aircraft interior noise prediction is considered. The program, entitled PAIN (an acronym for Propeller Aircraft Interior Noise), permits (in theory) predictions of sound levels inside propeller driven aircraft arising from sidewall transmission. The objective of the work reported was to determine the practicality of making predictions for various airplanes and the extent of the program's capabilities. The ultimate purpose was to discern the quality of predictions for tonal levels inside an aircraft occurring at the propeller blade passage frequency and its harmonics. The effort involved three tasks: (1) program validation through comparisons of predictions with scale-model test results; (2) development of utilization schemes for large (full scale) fuselages; and (3) validation through comparisons of predictions with measurements taken in flight tests on a turboprop aircraft. Findings should enable future users of the program to efficiently undertake and correctly interpret predictions.
NASA Astrophysics Data System (ADS)
Wagener, T.
2017-12-01
Current societal problems and questions demand that we increasingly build hydrologic models for regional or even continental scale assessment of global change impacts. Such models offer new opportunities for scientific advancement, for example by enabling comparative hydrology or connectivity studies, and for improved support of water management decisions, since we might better understand regional impacts on water resources from large-scale phenomena such as droughts. On the other hand, we are faced with epistemic uncertainties when we move up in scale. The term epistemic uncertainty describes those uncertainties that are not well determined by historical observations. This lack of determination can be because the future is not like the past (e.g., due to climate change), because the historical data are unreliable (e.g., because they are imperfectly recorded from proxies or missing), or because they are scarce (either because measurements are not available at the right scale or there is no observation network available at all). In this talk I will explore: (1) how we might build a bridge between what we have learned about catchment-scale processes and hydrologic model development and evaluation at larger scales; (2) how we can understand the impact of epistemic uncertainty in large-scale hydrologic models; and (3) how we might utilize large-scale hydrologic predictions to understand climate change impacts, e.g., on infectious disease risk.
Telecommunications technology and rural education in the United States
NASA Technical Reports Server (NTRS)
Perrine, J. R.
1975-01-01
The rural sector of the US is examined from the point of view of whether telecommunications technology can augment the development of rural education. Migratory farm workers and American Indians were the target groups which were examined as examples of groups with special needs in rural areas. The general rural population and the target groups were examined to identify problems and to ascertain specific educational needs. Educational projects utilizing telecommunications technology in target group settings were discussed. Large scale regional ATS-6 satellite-based experimental educational telecommunications projects were described. Costs and organizational factors were also examined for large scale rural telecommunications projects.
Zhang, Lening; Messner, Steven F; Lu, Jianhong
2007-02-01
This article discusses research experience gained from a large-scale survey of criminal victimization recently conducted in Tianjin, China. The authors review some of the more important challenges that arose in the research, their responses to these challenges, and lessons learned that might be beneficial to other scholars who are interested in conducting criminological research in China. Their experience underscores the importance of understanding the Chinese political, cultural, and academic context, and the utility of collaborating with experienced and knowledgeable colleagues "on site." Although there are some special difficulties and barriers, their project demonstrates the feasibility of original criminological data collection in China.
A practical large scale/high speed data distribution system using 8 mm libraries
NASA Technical Reports Server (NTRS)
Howard, Kevin
1993-01-01
Eight mm tape libraries are known primarily for their small size, large storage capacity, and low cost. However, many applications require an additional attribute which, heretofore, has been lacking -- high transfer rate. Transfer rate is particularly important in a large scale data distribution environment -- an environment in which 8 mm tape should play a very important role. Data distribution is a natural application for 8 mm for several reasons: most large laboratories have access to 8 mm tape drives, 8 mm tapes are upwardly compatible, 8 mm media are very inexpensive, 8 mm media are light weight (important for shipping purposes), and 8 mm media densely pack data (5 gigabytes now and 15 gigabytes on the horizon). If the transfer rate issue were resolved, 8 mm could offer a good solution to the data distribution problem. To that end Exabyte has analyzed four ways to increase its transfer rate: native drive transfer rate increases, data compression at the drive level, tape striping, and homogeneous drive utilization. Exabyte is actively pursuing native drive transfer rate increases and drive level data compression. However, for non-transmitted bulk data applications (which include data distribution) the other two methods (tape striping and homogeneous drive utilization) hold promise.
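As a rough illustration of why tape striping raises effective throughput (the drive rate and stripe width below are assumptions for illustration, not Exabyte specifications):

```python
# Rough arithmetic sketch of why striping raises effective transfer rate.
# The drive rate and stripe width are illustrative assumptions, not Exabyte specs.
drive_rate_mb_s = 0.5        # assumed native rate of a single 8 mm drive, MB/s
stripes = 4                  # data striped across four drives (assumption)
tape_capacity_gb = 5         # per-cartridge capacity quoted above

aggregate_rate = drive_rate_mb_s * stripes
hours_per_cartridge = tape_capacity_gb * 1024 / (aggregate_rate * 3600)
print(f"aggregate rate = {aggregate_rate} MB/s, about {hours_per_cartridge:.1f} h per 5 GB cartridge")
```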
de la Torre, Andrea; Metivier, Aisha; Chu, Frances; ...
2015-11-25
Methane-utilizing bacteria (methanotrophs) are capable of growth on methane and are attractive systems for bio-catalysis. However, the application of natural methanotrophic strains to large-scale production of value-added chemicals/biofuels requires a number of physiological and genetic alterations. An accurate metabolic model coupled with flux balance analysis can provide a solid interpretative framework for experimental data analyses and integration.
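Flux balance analysis itself reduces to a linear program: maximize a biomass objective subject to steady-state mass balance and flux bounds. The toy three-reaction network below is an illustrative assumption, not the methanotroph model described in the paper.

```python
# Toy flux balance analysis (assumed 3-reaction network, not the methanotroph
# model): maximize the biomass flux subject to steady state S v = 0 and bounds.
import numpy as np
from scipy.optimize import linprog

# Stoichiometric matrix: rows = metabolites (A, B), columns = reactions
# R1: uptake -> A;  R2: A -> B;  R3 (biomass drain): B ->
S = np.array([[ 1, -1,  0],
              [ 0,  1, -1]])
bounds = [(0, 10), (0, 10), (0, 10)]     # flux bounds in arbitrary units
c = np.array([0, 0, -1])                 # maximize v3 (linprog minimizes, hence the sign)

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
print("optimal biomass flux:", res.x[2])
```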
GIGGLE: a search engine for large-scale integrated genome analysis.
Layer, Ryan M; Pedersen, Brent S; DiSera, Tonya; Marth, Gabor T; Gertz, Jason; Quinlan, Aaron R
2018-02-01
GIGGLE is a genomics search engine that identifies and ranks the significance of genomic loci shared between query features and thousands of genome interval files. GIGGLE (https://github.com/ryanlayer/giggle) scales to billions of intervals and is over three orders of magnitude faster than existing methods. Its speed extends the accessibility and utility of resources such as ENCODE, Roadmap Epigenomics, and GTEx by facilitating data integration and hypothesis generation.
Microfluidic large-scale integration: the evolution of design rules for biological automation.
Melin, Jessica; Quake, Stephen R
2007-01-01
Microfluidic large-scale integration (mLSI) refers to the development of microfluidic chips with thousands of integrated micromechanical valves and control components. This technology is utilized in many areas of biology and chemistry and is a candidate to replace today's conventional automation paradigm, which consists of fluid-handling robots. We review the basic development of mLSI and then discuss design principles of mLSI to assess the capabilities and limitations of the current state of the art and to facilitate the application of mLSI to areas of biology. Many design and practical issues, including economies of scale, parallelization strategies, multiplexing, and multistep biochemical processing, are discussed. Several microfluidic components used as building blocks to create effective, complex, and highly integrated microfluidic networks are also highlighted.
Continuous Easy-Plane Deconfined Phase Transition on the Kagome Lattice
NASA Astrophysics Data System (ADS)
Zhang, Xue-Feng; He, Yin-Chen; Eggert, Sebastian; Moessner, Roderich; Pollmann, Frank
2018-03-01
We use large-scale quantum Monte Carlo simulations to study an extended Hubbard model of hard-core bosons on the kagome lattice. In the limit of strong nearest-neighbor interactions at 1/3 filling, the interplay between frustration and quantum fluctuations leads to a valence bond solid ground state. The system undergoes a quantum phase transition to a superfluid phase as the interaction strength is decreased. It is still under debate whether the transition is weakly first order or represents an unconventional continuous phase transition. We present a theory in terms of an easy-plane noncompact CP1 gauge theory describing the phase transition at 1/3 filling. Utilizing large-scale quantum Monte Carlo simulations with parallel tempering in the canonical ensemble up to 15552 spins, we provide evidence that the phase transition is continuous at exactly 1/3 filling. A careful finite-size scaling analysis reveals an unconventional scaling behavior hinting at deconfined quantum criticality.
Gannotti, Mary E; Law, Mary; Bailes, Amy F; OʼNeil, Margaret E; Williams, Uzma; DiRezze, Briano
2016-01-01
A step toward advancing research about rehabilitation service associated with positive outcomes for children with cerebral palsy is consensus about a conceptual framework and measures. A Delphi process was used to establish consensus among clinicians and researchers in North America. Directors of large pediatric rehabilitation centers, clinicians from large hospitals, and researchers with expertise in outcomes participated (N = 18). Andersen's model of health care utilization framed outcomes: consumer satisfaction, activity, participation, quality of life, and pain. Measures agreed upon included Participation and Environment Measure for Children and Youth, Measure of Processes of Care, PEDI-CAT, KIDSCREEN-10, PROMIS Pediatric Pain Interference Scale, Visual Analog Scale for pain intensity, PROMIS Global Health Short Form, Family Environment Scale, Family Support Scale, and functional classification levels for gross motor, manual ability, and communication. Universal forms for documenting service use are needed. Findings inform clinicians and researchers concerned with outcome assessment.
HiQuant: Rapid Postquantification Analysis of Large-Scale MS-Generated Proteomics Data.
Bryan, Kenneth; Jarboui, Mohamed-Ali; Raso, Cinzia; Bernal-Llinares, Manuel; McCann, Brendan; Rauch, Jens; Boldt, Karsten; Lynn, David J
2016-06-03
Recent advances in mass-spectrometry-based proteomics are now facilitating ambitious large-scale investigations of the spatial and temporal dynamics of the proteome; however, the increasing size and complexity of these data sets are overwhelming current downstream computational methods, specifically those that support the postquantification analysis pipeline. Here we present HiQuant, a novel application that enables the design and execution of a postquantification workflow, including common data-processing steps, such as assay normalization and grouping, and experimental replicate quality control and statistical analysis. HiQuant also enables the interpretation of results generated from large-scale data sets by supporting interactive heatmap analysis as well as direct export to Cytoscape and Gephi, two leading network analysis platforms. HiQuant may be run via a user-friendly graphical interface and also supports complete one-touch automation via a command-line mode. We evaluate HiQuant's performance by analyzing a large-scale, complex interactome mapping data set and demonstrate a 200-fold improvement in execution time over current methods. We also demonstrate HiQuant's general utility by analyzing proteome-wide quantification data generated from both a large-scale public tyrosine kinase siRNA knock-down study and an in-house investigation into the temporal dynamics of the KSR1 and KSR2 interactomes. Download HiQuant, sample data sets, and supporting documentation at http://hiquant.primesdb.eu.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lenee-Bluhm, P.; Rhinefrank, Ken
The overarching project objective is to demonstrate the feasibility of using an innovative Power Take-Off (PTO) Module in Columbia Power's utility-scale wave energy converter (WEC). The PTO Module uniquely combines a large-diameter, direct-drive, rotary permanent magnet generator; a patent-pending rail-bearing system; and a corrosion-resistant fiber-reinforced-plastic structure.
Daily time series evapotranspiration maps for Oklahoma and Texas panhandle
USDA-ARS's Scientific Manuscript database
Evapotranspiration (ET) is an important process in ecosystems’ water budget and closely linked to its productivity. Therefore, regional scale daily time series ET maps developed at high and medium resolutions have large utility in studying the carbon-energy-water nexus and managing water resources. ...
Evaluation of Two PCR-based Swine-specific Fecal Source Tracking Assays (Abstract)
Several PCR-based methods have been proposed to identify swine fecal pollution in environmental waters. However, the utility of these assays in identifying swine fecal contamination on a broad geographic scale is largely unknown. In this study, we evaluated the specificity, distr...
High-Resiliency and Auto-Scaling of Large-Scale Cloud Computing for OCO-2 L2 Full Physics Processing
NASA Astrophysics Data System (ADS)
Hua, H.; Manipon, G.; Starch, M.; Dang, L. B.; Southam, P.; Wilson, B. D.; Avis, C.; Chang, A.; Cheng, C.; Smyth, M.; McDuffie, J. L.; Ramirez, P.
2015-12-01
Next generation science data systems are needed to address the incoming flood of data from new missions such as SWOT and NISAR, where data volumes and data throughput rates are orders of magnitude larger than those of present-day missions. Additionally, traditional means of procuring hardware on-premise are already limited due to facilities capacity constraints for these new missions. Existing missions, such as OCO-2, may also require high turn-around time for processing different science scenarios where on-premise and even traditional HPC computing environments may not meet the high processing needs. We present our experiences on deploying a hybrid-cloud computing science data system (HySDS) for the OCO-2 Science Computing Facility to support large-scale processing of their Level-2 full physics data products. We will explore optimization approaches to getting the best performance out of hybrid-cloud computing as well as common issues that arise when dealing with large-scale computing. Novel approaches were utilized to do processing on Amazon's spot market, which can potentially offer ~10X cost savings but with an unpredictable computing environment based on market forces. We will present how we enabled high-tolerance computing in order to achieve large-scale computing as well as operational cost savings.
Reblin, Maija; Clayton, Margaret F; John, Kevin K; Ellington, Lee
2016-07-01
In this article, we present strategies for collecting and coding a large longitudinal communication data set collected across multiple sites, consisting of more than 2000 hours of digital audio recordings from approximately 300 families. We describe our methods within the context of implementing a large-scale study of communication during cancer home hospice nurse visits, but this procedure could be adapted to communication data sets across a wide variety of settings. This research is the first study designed to capture home hospice nurse-caregiver communication, a highly understudied location and type of communication event. We present a detailed example protocol encompassing data collection in the home environment, large-scale, multisite secure data management, the development of theoretically-based communication coding, and strategies for preventing coder drift and ensuring reliability of analyses. Although each of these challenges has the potential to undermine the utility of the data, reliability between coders is often the only issue consistently reported and addressed in the literature. Overall, our approach demonstrates rigor and provides a "how-to" example for managing large, digitally recorded data sets from collection through analysis. These strategies can inform other large-scale health communication research.
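An illustrative example of the kind of reliability check used to guard against coder drift (the codes and labels below are hypothetical, not the study's coding scheme): Cohen's kappa between two coders on the same utterances.

```python
# Illustrative reliability check of the kind used to guard against coder drift.
# The utterance codes below are hypothetical, not the study's coding scheme.
from sklearn.metrics import cohen_kappa_score

coder_a = ["empathy", "task", "task", "empathy", "other", "task", "empathy", "task"]
coder_b = ["empathy", "task", "other", "empathy", "other", "task", "empathy", "task"]
print(f"Cohen's kappa = {cohen_kappa_score(coder_a, coder_b):.2f}")
```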
NASA Technical Reports Server (NTRS)
DeLay, Tom K.; Munafo, Paul (Technical Monitor)
2001-01-01
The AFRL USFE project is an experimental test bed for new propulsion technologies. It will utilize ambient-temperature fuel and oxidizer (kerosene and hydrogen peroxide). The system is pressure fed, not pump fed, and will utilize a helium pressurant tank to drive the system. Mr. DeLay has developed a method for cost-effectively producing a unique, large pressurant tank that is not commercially available. The pressure vessel is a layered composite structure with an electroformed metallic permeation barrier. The design/process is scalable and easily adaptable to different configurations with minimal cost in tooling development. One-third-scale tanks have already been fabricated and are scheduled for testing. The full-scale pressure vessel (50" diameter) design will be refined based on the performance of the sub-scale tank. The pressure vessels have been designed to operate at 6,000 psi; a PV/W of 1.92 million is anticipated.
Kurowski, Brad G; Wade, Shari L; Kirkwood, Michael W; Brown, Tanya M; Stancin, Terry; Taylor, H Gerry
2013-12-01
To characterize utilization of mental health services and determine the ability of a behavior problem and clinical functioning assessment to predict utilization of such services within the first 6 months after moderate and severe traumatic brain injury in a large cohort of adolescents. Multicenter cross-sectional study. Outpatient setting of 4 tertiary pediatric hospitals, 2 tertiary general medical centers, and 1 specialized children's hospital. Adolescents age 12-17 years (n = 132), 1-6 months after moderate-to-severe traumatic brain injury. Logistic regression was used to determine the association of mental health service utilization with clinical functioning as assessed by the Child and Adolescent Functional Assessment Scale and behavior problems assessed by the Child Behavioral Checklist. Mental health service utilization measured by the Service Assessment for Children and Adolescents. Behavioral or functional impairment occurred in 37%-56%. Of the total study population, 24.2% reported receiving outpatient mental health services, 8.3% reported receiving school services, and 28.8% reported receiving any type of mental health service. Use of any (school or outpatient) mental health service was associated with borderline to impaired total Child and Adolescent Functional Assessment Scale (odds ratio 3.50 [95% confidence interval, 1.46-8.40]; P < .01) and the Child Behavioral Checklist Total Competence (odds ratio 5.08 [95% confidence interval, 2.02-12.76]; P < .01). A large proportion of participants had unmet mental health needs. Both the Child and Adolescent Functional Assessment Scale and the Child Behavioral Checklist identified individuals who would likely benefit from mental health services in outpatient or school settings. Future research should focus on methods to ensure early identification by health care providers of adolescents with traumatic brain injury in need of mental health services. Copyright © 2013 American Academy of Physical Medicine and Rehabilitation. Published by Elsevier Inc. All rights reserved.
Economics of carbon dioxide capture and utilization-a supply and demand perspective.
Naims, Henriette
2016-11-01
Lately, technical research on carbon dioxide capture and utilization (CCU) has achieved important breakthroughs. While single CO2-based innovations are entering the markets, the possible economic effects of large-scale CO2 utilization still remain unclear to policy makers and the public. Hence, this paper reviews the literature on CCU and provides insights on the motivations and potential of making use of recovered CO2 emissions as a commodity in the industrial production of materials and fuels. By analyzing data on current global CO2 supply from industrial sources, best-practice benchmark capture costs, and the demand potential of CO2 utilization and storage scenarios with comparative statics, conclusions can be drawn on the role of different CO2 sources. For near-term scenarios, the demand for the commodity CO2 can be covered from industrial processes that emit CO2 at a high purity and low benchmark capture cost of approximately 33 €/t. In the long term, with synthetic fuel production and large-scale CO2 utilization, CO2 is likely to be available from a variety of processes at benchmark costs of approximately 65 €/t. Even if fossil-fired power generation is phased out, the CO2 emissions of current industrial processes would suffice for ambitious CCU demand scenarios. At current economic conditions, the business case for CO2 utilization is technology specific and depends on whether efficiency gains or substitution of volatile-priced raw materials can be achieved. Overall, it is argued that CCU should be advanced as a complement to mitigation technologies and can unfold its potential in creating local circular economy solutions.
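The comparative-statics logic can be illustrated with the benchmark costs quoted above; the annual CO2 demand figure below is an assumed placeholder, not a value from the paper.

```python
# Comparative-statics style arithmetic using the benchmark capture costs quoted
# above; the annual CO2 demand figure is an assumed placeholder.
demand_mt_per_year = 300            # assumed CO2 utilization demand, Mt per year (illustrative)
cost_near_term_eur_t = 33           # EUR/t, high-purity industrial sources (reported benchmark)
cost_long_term_eur_t = 65           # EUR/t, broader source mix incl. synthetic fuels (reported)

for label, cost in (("near-term", cost_near_term_eur_t), ("long-term", cost_long_term_eur_t)):
    total_bn_eur = demand_mt_per_year * 1e6 * cost / 1e9
    print(f"{label}: about {total_bn_eur:.1f} bn EUR per year at {cost} EUR/t")
```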
ERIC Educational Resources Information Center
Peterson, Norman G., Ed.
As part of the United States Army's Project A, research has been conducted to develop and field test a battery of experimental tests to complement the Armed Services Vocational Aptitude Battery in predicting soldiers' job performance. Project A is the United States Army's large-scale manpower effort to improve selection, classification, and…
Detection of submicron scale cracks and other surface anomalies using positron emission tomography
Cowan, Thomas E.; Howell, Richard H.; Colmenares, Carlos A.
2004-02-17
Detection of submicron scale cracks and other mechanical and chemical surface anomalies using PET. This surface technique has sufficient sensitivity to detect single voids or pits of sub-millimeter size and single cracks or fissures of millimeter-scale length, micrometer-scale depth, and nanometer-scale width. This technique can also be applied to detect surface regions of differing chemical reactivity. It may be utilized in a scanning or survey mode to simultaneously detect such mechanical or chemical features over large interior or exterior surface areas of parts as large as about 50 cm in diameter. The technique involves exposing a surface to short-lived radioactive gas for a time period, removing the excess gas to leave a partial monolayer, determining the location and shape of the cracks, voids, porous regions, etc., and calculating the width, depth, and length thereof. Detection of 0.01 mm deep cracks using a 3 mm detector resolution has been accomplished using this technique.
In-orbit assembly mission for the Space Solar Power Station
NASA Astrophysics Data System (ADS)
Cheng, ZhengAi; Hou, Xinbin; Zhang, Xinghua; Zhou, Lu; Guo, Jifeng; Song, Chunlin
2016-12-01
The Space Solar Power Station (SSPS) is a large spacecraft that utilizes solar power in space to supply power to an electric grid on Earth. A large symmetrical integrated concept has been proposed by the China Academy of Space Technology (CAST). Considering its large scale, the SSPS requires a modular design and unitized general interfaces that can be assembled in orbit. The facilities supporting the assembly procedure, which include a reusable heavy-lift launch vehicle, orbital transfer, and space robots, are introduced. An integrated assembly scheme utilizing space robots to realize this platform SSPS concept is presented. The paper gives a preliminary discussion of minimizing the time and energy cost of the assembly mission by selecting the best assembly sequence and route. This optimized assembly mission planning allows the SSPS to be built in orbit rapidly, effectively, and reliably.
Acceleration of spiking neural network based pattern recognition on NVIDIA graphics processors.
Han, Bing; Taha, Tarek M
2010-04-01
There is currently a strong push in the research community to develop biological-scale implementations of neuron-based vision models. Systems at this scale are computationally demanding and generally utilize more accurate neuron models, such as the Izhikevich and Hodgkin-Huxley models, rather than the more popular integrate-and-fire model. We examine the feasibility of using graphics processing units (GPUs) to accelerate a spiking neural network based character recognition network to enable such large scale systems. Two versions of the network utilizing the Izhikevich and Hodgkin-Huxley models are implemented. Three NVIDIA general-purpose (GP) GPU platforms are examined, including the GeForce 9800 GX2, the Tesla C1060, and the Tesla S1070. Our results show that the GPGPUs can provide significant speedup over conventional processors. In particular, the fastest GPGPU utilized, the Tesla S1070, provided speedups of 5.6 and 84.4 over highly optimized implementations on the fastest central processing unit (CPU) tested, a quad-core 2.67 GHz Xeon processor, for the Izhikevich and the Hodgkin-Huxley models, respectively. The CPU implementation utilized all four cores and the vector data parallelism offered by the processor. The results indicate that GPUs are well suited for this application domain.
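The Izhikevich model mentioned above reduces each neuron to two coupled equations plus a reset rule, which is why it maps so cleanly onto one-thread-per-neuron GPU kernels. Below is a minimal NumPy sketch of that element-wise update (not the authors' GPU code); the parameters are the standard regular-spiking values from Izhikevich's 2003 paper.

```python
import numpy as np

def izhikevich_step(v, u, I, dt=1.0, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Advance a population of Izhikevich neurons by one time step (ms).

    v, u, I are NumPy arrays of equal length; the same element-wise update
    is what a GPU kernel would apply with one neuron per thread.
    """
    fired = v >= 30.0                       # spike threshold (mV)
    v = np.where(fired, c, v)               # reset membrane potential
    u = np.where(fired, u + d, u)           # reset recovery variable
    v = v + dt * (0.04 * v**2 + 5.0 * v + 140.0 - u + I)
    u = u + dt * a * (b * v - u)
    return v, u, fired

# Example: 1000 neurons driven by noisy input current
rng = np.random.default_rng(0)
v = np.full(1000, -65.0)
u = 0.2 * v
for _ in range(100):
    v, u, fired = izhikevich_step(v, u, I=5.0 * rng.random(1000))
```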
Design, Construction and Testing of an In-Pile Loop for PWR (Pressurized Water Reactor) Simulation.
1987-06-01
computer modeling remains at best semiempirical (C-i), this large variation in scaling factor makes extrapolation of data impossible. The DIDO Water...in a full scale PWR are not practical. The reactor plant is not controlled to tolerances necessary for research, and utilities are reluctant to vary...MIT Reactor Safeguards Committee, in revision 1 to the PCCL Safety Evaluation Report (SER), for final approval to begin in-pile testing and
Scaled Lunar Module Jet Erosion Experiments
NASA Technical Reports Server (NTRS)
Land, Norman S.; Scholl, Harland F.
1966-01-01
An experimental research program was conducted on the erosion of particulate surfaces by a jet exhaust. These experiments were scaled to represent the lunar module (LM) during landing. A conical cold-gas nozzle simulating the lunar module nozzle was utilized. The investigation was conducted within a large vacuum chamber by using gravel or glass beads as a simulated soil. The effects of thrust, descent speed, nozzle terminal height, and particle size on crater size and visibility during jet erosion were determined.
Harvey, Benjamin Simeon; Ji, Soo-Yeon
2017-01-01
As microarray data available to scientists continues to increase in size and complexity, it has become overwhelmingly important to find multiple ways to bring forth oncological inference to the bioinformatics community through the analysis of large-scale cancer genomic (LSCG) DNA and mRNA microarray data. Though there have been many attempts to elucidate the issue of bringing forth biological interpretation by means of wavelet preprocessing and classification, there has not been a research effort that focuses on a cloud-scale distributed parallel (CSDP) separable 1-D wavelet decomposition technique for denoising through differential expression thresholding and classification of LSCG microarray data. This research presents a novel methodology that utilizes a CSDP separable 1-D method for wavelet-based transformation in order to initialize a threshold which will retain significantly expressed genes through the denoising process for robust classification of cancer patients. Additionally, the overall study was implemented and encompassed within a CSDP environment. Cloud computing and wavelet-based thresholding for denoising were used for the classification of samples within the Global Cancer Map, the Cancer Cell Line Encyclopedia, and The Cancer Genome Atlas. The results showed that separable 1-D parallel distributed wavelet denoising in the cloud and differential expression thresholding increased computational performance and enabled the generation of higher quality LSCG microarray datasets, which led to more accurate classification results.
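As a rough illustration of the decompose-threshold-reconstruct step described above (not the paper's distributed cloud pipeline, and using a generic universal threshold rather than its differential-expression threshold), a single expression profile could be denoised with PyWavelets like this:

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_denoise(signal, wavelet="db4", level=3, threshold=None):
    """Denoise one profile with 1-D wavelet soft thresholding.

    Sketches only the generic decompose-threshold-reconstruct idea; the
    paper's cloud-scale pipeline and expression-based threshold differ.
    """
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    if threshold is None:
        # Universal threshold estimated from the finest detail coefficients
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745
        threshold = sigma * np.sqrt(2.0 * np.log(len(signal)))
    denoised = [coeffs[0]] + [pywt.threshold(c, threshold, mode="soft")
                              for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: len(signal)]

# Each row of a gene-by-sample matrix can be processed independently,
# which is what makes the decomposition easy to distribute.
profile = np.random.default_rng(1).normal(size=256)
clean = wavelet_denoise(profile)
```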
Utilizing Online Training for Child Sexual Abuse Prevention: Benefits and Limitations
ERIC Educational Resources Information Center
Paranal, Rechelle; Thomas, Kiona Washington; Derrick, Christina
2012-01-01
The prevalence of child sexual abuse demands innovative approaches to prevent further victimization. The online environment provides new opportunities to expand existing child sexual abuse prevention trainings that target adult gatekeepers and allow for large scale interventions that are fiscally viable. This article discusses the benefits and…
High density circuit technology, part 2
NASA Technical Reports Server (NTRS)
Wade, T. E.
1982-01-01
A multilevel metal interconnection system for very large scale integration (VLSI) systems utilizing polyimides as the interlayer dielectric material is described. A complete characterization of polyimide materials is given as well as experimental methods accomplished using a double level metal test pattern. A low temperature, double exposure polyimide patterning procedure is also presented.
ERIC Educational Resources Information Center
Wang, Qiu; Diemer, Matthew A.; Maier, Kimberly S.
2013-01-01
This study integrated Bayesian hierarchical modeling and receiver operating characteristic analysis (BROCA) to evaluate how interest strength (IS) and interest differentiation (ID) predicted low–socioeconomic status (SES) youth's interest-major congruence (IMC). Using large-scale Kuder Career Search online-assessment data, this study fit three…
Connecting with Rice: Carolina Lowcountry and Africa
ERIC Educational Resources Information Center
Mitchell, Jerry T.; Collins, Larianne; Wise, Susan S.; Caughman, Monti
2012-01-01
Though lasting less than 200 years, large-scale rice production in South Carolina and Georgia "probably represented the most significant utilization of the tidewater zone for crop agriculture ever attained in the United States." Rice is a specialty crop where successful cultivation relied heavily upon "adaptation" to nature via…
Over the next decade, data requirements to inform air quality management decisions and policies will need to be expanded to large spatial domains to accommodate decisions which more frequently cross geo-political boundaries; from urban (local) and regional scales to regional, sup...
To the Cloud! A Grassroots Proposal to Accelerate Brain Science Discovery
Vogelstein, Joshua T.; Mensh, Brett; Hausser, Michael; Spruston, Nelson; Evans, Alan; Kording, Konrad; Amunts, Katrin; Ebell, Christoph; Muller, Jeff; Telefont, Martin; Hill, Sean; Koushika, Sandhya P.; Cali, Corrado; Valdés-Sosa, Pedro Antonio; Littlewood, Peter; Koch, Christof; Saalfeld, Stephan; Kepecs, Adam; Peng, Hanchuan; Halchenko, Yaroslav O.; Kiar, Gregory; Poo, Mu-Ming; Poline, Jean-Baptiste; Milham, Michael P.; Schaffer, Alyssa Picchini; Gidron, Rafi; Okano, Hideyuki; Calhoun, Vince D; Chun, Miyoung; Kleissas, Dean M.; Vogelstein, R. Jacob; Perlman, Eric; Burns, Randal; Huganir, Richard; Miller, Michael I.
2018-01-01
The revolution in neuroscientific data acquisition is creating an analysis challenge. We propose leveraging cloud-computing technologies to enable large-scale neurodata storing, exploring, analyzing, and modeling. This utility will empower scientists globally to generate and test theories of brain function and dysfunction. PMID:27810005
A Commercialization Roadmap for Carbon-Negative Energy Systems
NASA Astrophysics Data System (ADS)
Sanchez, D.
2016-12-01
The Intergovernmental Panel on Climate Change (IPCC) envisages the need for large-scale deployment of net-negative CO2 emissions technologies by mid-century to meet stringent climate mitigation goals and yield a net drawdown of atmospheric carbon. Yet there are few commercial deployments of bioenergy with carbon capture and storage (BECCS) outside of niche markets, creating uncertainty about commercialization pathways and sustainability impacts at scale. This uncertainty is exacerbated by the absence of a strong policy framework, such as high carbon prices and research coordination. Here, we propose a strategy for the potential commercial deployment of BECCS. This roadmap proceeds via three steps: 1) capture and utilization of biogenic CO2 from existing bioenergy facilities, notably ethanol fermentation, 2) thermochemical co-conversion of biomass and fossil fuels, particularly coal, and 3) dedicated, large-scale BECCS. Although biochemical conversion is a proven first market for BECCS, this trajectory alone is unlikely to drive commercialization of BECCS at the gigatonne scale. In contrast to biochemical conversion, thermochemical conversion of coal and biomass enables large-scale production of fuels and electricity with a wide range of carbon intensities, process efficiencies, and process scales. Aside from systems integration, the barriers here are primarily technical, involving large-scale biomass logistics, gasification, and gas cleaning. Key uncertainties around large-scale BECCS deployment are not limited to commercialization pathways; rather, they include physical constraints on biomass cultivation or CO2 storage, as well as social barriers, including public acceptance of new technologies and conceptions of renewable and fossil energy, which co-conversion systems confound. Despite sustainability risks, this commercialization strategy presents a pathway by which energy suppliers, manufacturers, and governments could transition from laggards to leaders in climate change mitigation efforts.
Evolving from bioinformatics in-the-small to bioinformatics in-the-large.
Parker, D Stott; Gorlick, Michael M; Lee, Christopher J
2003-01-01
We argue the significance of a fundamental shift in bioinformatics, from in-the-small to in-the-large. Adopting a large-scale perspective is a way to manage the problems endemic to the world of the small: constellations of incompatible tools for which the effort required to assemble an integrated system exceeds the perceived benefit of the integration. Where bioinformatics in-the-small is about data and tools, bioinformatics in-the-large is about metadata and dependencies. Dependencies represent the complexities of large-scale integration, including the requirements and assumptions governing the composition of tools. The popular make utility is a very effective system for defining and maintaining simple dependencies, and it offers a number of insights about the essence of bioinformatics in-the-large. Keeping an in-the-large perspective has been very useful to us in large bioinformatics projects. We give two fairly different examples, and extract lessons from them showing how it has helped. These examples both suggest the benefit of explicitly defining and managing knowledge flows and knowledge maps (which represent metadata regarding types, flows, and dependencies), and also suggest approaches for developing bioinformatics database systems. Generally, we argue that large-scale engineering principles can be successfully adapted from disciplines such as software engineering and data management, and that having an in-the-large perspective will be a key advantage in the next phase of bioinformatics development.
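The make-style dependency management highlighted above can be made concrete with a tiny example: a hypothetical analysis pipeline declared as a dependency graph and resolved into a build order. The target and file names below are invented for illustration and are not from the paper.

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline: each target lists the artifacts it depends on,
# in the spirit of make's declarative dependency rules discussed above.
dependencies = {
    "aligned.bam":  {"reads.fastq", "reference.fa"},
    "variants.vcf": {"aligned.bam", "reference.fa"},
    "report.html":  {"variants.vcf", "annotations.gff"},
}

# A build order that respects every dependency (make resolves this
# implicitly; here it is made explicit with a topological sort).
order = list(TopologicalSorter(dependencies).static_order())
print(order)  # inputs first, report.html last
```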
NASA Astrophysics Data System (ADS)
Roadman, Jason; Mohseni, Kamran
2009-11-01
Modern technology operating in the atmospheric boundary layer could benefit from more accurate wind tunnel testing. While scaled atmospheric boundary layer tunnels have been well developed, tunnels replicating portions of the turbulence of the atmospheric boundary layer at full scale are a comparatively new concept. Testing at full-scale Reynolds numbers with full-scale turbulence in an "atmospheric wind tunnel" is sought. Many programs could utilize such a tool, including those for Micro Aerial Vehicles (MAVs) and other unmanned aircraft, the wind energy industry, fuel efficient vehicles, and the study of bird and insect flight. The construction of an active "gust generator" for a new atmospheric tunnel is reviewed and the turbulence it generates is measured utilizing single and cross hot wires. Results from this grid are compared to atmospheric turbulence, and it is shown that various gust strengths can be produced, corresponding to days ranging from calm to quite gusty. An initial test is performed in the atmospheric wind tunnel whereby the effects of various turbulence conditions on transition and separation on the upper surface of a MAV wing are investigated using oil flow visualization.
Advanced bulk processing of lightweight materials for utilization in the transportation sector
NASA Astrophysics Data System (ADS)
Milner, Justin L.
The overall objective of this research is to develop the microstructure of metallic lightweight materials via multiple advanced processing techniques with potential for industrial utilization on a large scale to meet the demands of the aerospace and automotive sectors. This work focused on (i) refining the grain structure to increase strength, (ii) controlling the texture to increase formability, and (iii) directly reducing the processing/production cost of lightweight material components. Advanced processing is conducted on a bulk scale by several severe plastic deformation techniques, including accumulative roll bonding, isolated shear rolling, and friction stir processing, to achieve the multiple targets of this research. Development and validation of the processing techniques is achieved through wide-ranging experiments along with detailed mechanical and microstructural examination of the processed material. On a broad level, this research will make advancements in the processing of bulk lightweight materials, facilitating industrial-scale implementation. Accumulative roll bonding and isolated shear rolling, which are currently feasible on an industrial scale, process bulk sheet materials capable of replacing more expensive grades of alloys and enabling low-temperature and high-strain-rate formability. Furthermore, friction stir processing to manufacture lightweight tubes, made from magnesium alloys, has the potential to increase the utilization of these materials in the automotive and aerospace sectors for high strength - high formability applications. Increased utilization of these advanced processing techniques will significantly reduce the cost associated with lightweight materials for many applications in the transportation sector.
Molecular dynamics simulations of large macromolecular complexes.
Perilla, Juan R; Goh, Boon Chong; Cassidy, C Keith; Liu, Bo; Bernardi, Rafael C; Rudack, Till; Yu, Hang; Wu, Zhe; Schulten, Klaus
2015-04-01
Connecting dynamics to structural data from diverse experimental sources, molecular dynamics simulations permit the exploration of biological phenomena in unparalleled detail. Advances in simulations are moving the atomic resolution descriptions of biological systems into the million-to-billion atom regime, in which numerous cell functions reside. In this opinion, we review the progress, driven by large-scale molecular dynamics simulations, in the study of viruses, ribosomes, bioenergetic systems, and other diverse applications. These examples highlight the utility of molecular dynamics simulations in the critical task of relating atomic detail to the function of supramolecular complexes, a task that cannot be achieved by smaller-scale simulations or existing experimental approaches alone. Copyright © 2015 Elsevier Ltd. All rights reserved.
Fundamental tests of galaxy formation theory
NASA Technical Reports Server (NTRS)
Silk, J.
1982-01-01
The structure of the universe as an environment where traces exist of the seed fluctuations from which galaxies formed is studied. The evolution of the density fluctuation modes that led to the eventual formation of matter inhomogeneities is reviewed. How the resulting clumps developed into galaxies and galaxy clusters, acquiring characteristic masses, velocity dispersions, and metallicities, is discussed. Tests are described that utilize the large scale structure of the universe, including the dynamics of the local supercluster, the large scale matter distribution, and the anisotropy of the cosmic background radiation, to probe the earliest accessible stages of evolution. Finally, the role of particle physics is described with regard to its observable implications for galaxy formation.
Large-scale Growth and Simultaneous Doping of Molybdenum Disulfide Nanosheets
Kim, Seong Jun; Kang, Min-A; Kim, Sung Ho; Lee, Youngbum; Song, Wooseok; Myung, Sung; Lee, Sun Sook; Lim, Jongsun; An, Ki-Seok
2016-01-01
A facile method that uses chemical vapor deposition (CVD) for the simultaneous growth and doping of large-scale molybdenum disulfide (MoS2) nanosheets was developed. We employed metalloporphyrin as a seeding promoter layer for the uniform growth of MoS2 nanosheets. Here, a hybrid deposition system that combines thermal evaporation and atomic layer deposition (ALD) was utilized to prepare the promoter. The doping effect of the promoter was verified by X-ray photoelectron spectroscopy and Raman spectroscopy. In addition, the carrier density of the MoS2 nanosheets was manipulated by adjusting the thickness of the metalloporphyrin promoter layers, which allowed the electrical conductivity in MoS2 to be manipulated. PMID:27044862
Exploring Cloud Computing for Large-scale Scientific Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Guang; Han, Binh; Yin, Jian
This paper explores cloud computing for large-scale data-intensive scientific applications. Cloud computing is attractive because it provides hardware and software resources on demand, which relieves the burden of acquiring and maintaining a huge amount of resources that may be used only once by a scientific application. However, unlike typical commercial applications that often just require a moderate amount of ordinary resources, large-scale scientific applications often need to process enormous amounts of data in the terabyte or even petabyte range and require special high performance hardware with low latency connections to complete computation in a reasonable amount of time. To address these challenges, we build an infrastructure that can dynamically select high performance computing hardware across institutions and dynamically adapt the computation to the selected resources to achieve high performance. We have also demonstrated the effectiveness of our infrastructure by building a system biology application and an uncertainty quantification application for carbon sequestration, which can efficiently utilize data and computation resources across several institutions.
Evaluating the Health Impact of Large-Scale Public Policy Changes: Classical and Novel Approaches
Basu, Sanjay; Meghani, Ankita; Siddiqi, Arjumand
2018-01-01
Large-scale public policy changes are often recommended to improve public health. Despite varying widely—from tobacco taxes to poverty-relief programs—such policies present a common dilemma to public health researchers: how to evaluate their health effects when randomized controlled trials are not possible. Here, we review the state of knowledge and experience of public health researchers who rigorously evaluate the health consequences of large-scale public policy changes. We organize our discussion by detailing approaches to address three common challenges of conducting policy evaluations: distinguishing a policy effect from time trends in health outcomes or preexisting differences between policy-affected and -unaffected communities (using difference-in-differences approaches); constructing a comparison population when a policy affects a population for whom a well-matched comparator is not immediately available (using propensity score or synthetic control approaches); and addressing unobserved confounders by utilizing quasi-random variations in policy exposure (using regression discontinuity, instrumental variables, or near-far matching approaches). PMID:28384086
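As a toy illustration of the first approach mentioned above, a difference-in-differences estimate can be computed from the four group-by-period means; the simulated data and effect size below are hypothetical and only sketch the mechanics.

```python
import numpy as np
import pandas as pd

# Simulated panel: a hypothetical policy adopted by the "treated" group at t=1.
rng = np.random.default_rng(42)
df = pd.DataFrame({
    "treated": np.repeat([0, 0, 1, 1], 500),
    "post":    np.tile([0, 1], 1000),
})
true_effect = -2.0  # the hypothetical policy lowers the outcome by 2 units
df["outcome"] = (10 + 3 * df["treated"] + 1.5 * df["post"]
                 + true_effect * df["treated"] * df["post"]
                 + rng.normal(0, 1, len(df)))

# DiD: (treated post - treated pre) - (control post - control pre)
means = df.groupby(["treated", "post"])["outcome"].mean()
did = (means.loc[(1, 1)] - means.loc[(1, 0)]) - (means.loc[(0, 1)] - means.loc[(0, 0)])
print(f"DiD estimate: {did:.2f}")  # close to the simulated -2.0
```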
A low-cost iron-cadmium redox flow battery for large-scale energy storage
NASA Astrophysics Data System (ADS)
Zeng, Y. K.; Zhao, T. S.; Zhou, X. L.; Wei, L.; Jiang, H. R.
2016-10-01
The redox flow battery (RFB) is one of the most promising large-scale energy storage technologies that offer a potential solution to the intermittency of renewable sources such as wind and solar. The prerequisite for widespread utilization of RFBs is low capital cost. In this work, an iron-cadmium redox flow battery (Fe/Cd RFB) with a premixed iron and cadmium solution is developed and tested. It is demonstrated that the coulombic efficiency and energy efficiency of the Fe/Cd RFB reach 98.7% and 80.2% at 120 mA cm-2, respectively. The Fe/Cd RFB exhibits stable efficiencies with capacity retention of 99.87% per cycle during the cycle test. Moreover, the Fe/Cd RFB is estimated to have a low capital cost of 108 kWh-1 for 8-h energy storage. Intrinsically low-cost active materials, high cell performance and excellent capacity retention equip the Fe/Cd RFB to be a promising solution for large-scale energy storage systems.
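For reference, the cycle efficiencies quoted above follow from standard definitions (coulombic efficiency as discharged over charged capacity, energy efficiency as discharged over charged energy). A small sketch with hypothetical single-cycle numbers, not taken from the paper:

```python
def flow_battery_efficiencies(q_charge_ah, q_discharge_ah, e_charge_wh, e_discharge_wh):
    """Standard cycle efficiencies for a redox flow battery.

    coulombic = charge recovered / charge stored
    energy    = energy recovered / energy stored
    voltage   = energy efficiency / coulombic efficiency
    """
    ce = q_discharge_ah / q_charge_ah
    ee = e_discharge_wh / e_charge_wh
    return ce, ee, ee / ce

# Hypothetical single-cycle data (not the paper's measurements):
ce, ee, ve = flow_battery_efficiencies(1.00, 0.987, 1.30, 1.04)
print(f"CE={ce:.1%}, EE={ee:.1%}, VE={ve:.1%}")
```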
Water Utility Lime Sludge Reuse – An Environmental Sorbent ...
Lime sludge can be used as an environmental sorbent to remove sulfur dioxide (SO2) and acid gases, by the ultra-fine CaCO3 particles, and to sequester mercury and other heavy metals, by the natural organic matter and residual activated carbon. The laboratory experimental setup included a simulated flue gas preparation unit, a lab-scale wet scrubber, and a mercury analyzer system. The influent mercury concentration was based on a range from 22 surveyed power plants. The reactivity of the lime sludge sample for acid neutralization was determined using a method similar to ASTM C1318-95. Similar experiments were conducted using reagent calcium carbonate and calcium sulfate to obtain baseline data for comparison with the lime sludge test results. The project also evaluated the techno-economic feasibility and sustainable benefits of reusing lime softening sludge. If implemented on a large scale, this transformative approach for recycling waste materials from water treatment utilities at power generation utilities for environmental cleanup can save both water and power utilities millions of dollars. Huge amounts of lime sludge waste, generated from hundreds of water treatment utilities across the U.S., are currently disposed of in landfills. This project evaluated a sustainable and economically attractive approach to the use of lime sludge waste as a valuable resource for power generation utilities.
NASA Technical Reports Server (NTRS)
Chen, Fei; Yates, David; LeMone, Margaret
2001-01-01
To understand the effects of land-surface heterogeneity and the interactions between the land surface and the planetary boundary layer at different scales, we develop a multiscale data set. This data set, based on the Cooperative Atmosphere-Surface Exchange Study (CASES97) observations, includes atmospheric, surface, and sub-surface observations obtained from a dense observation network covering a large region on the order of 100 km. We use this data set to drive three land-surface models (LSMs) to generate multi-scale (with three resolutions of 1, 5, and 10 kilometers) gridded surface heat flux maps for the CASES area. Upon validating these flux maps with measurements from surface stations and aircraft, we utilize them to investigate several approaches for estimating the area-integrated surface heat flux for the CASES97 domain of 71 by 74 kilometers, which is crucial for land surface model development/validation and area water and energy budget studies. This research is aimed at understanding the relative contribution of random turbulence versus organized mesoscale circulations to the area-integrated surface flux at the scale of 100 kilometers, and identifying the most important effective parameters for characterizing the subgrid-scale variability for large-scale atmosphere-hydrology models.
Nguyen, Trung T; Barber, Andrew R; Corbin, Kendall; Zhang, Wei
2017-01-01
The worldwide annual production of lobster was 165,367 tons valued at over $3.32 billion in 2004, but this figure rose to 304,000 tons in 2012. Over half the volume of the worldwide lobster production has been processed to meet the rising global demand for diversified lobster products. Lobster processing generates a large amount of by-products (heads, shells, livers, and eggs) which account for 50-70% of the starting material. Continued production of these lobster processing by-products (LPBs) without corresponding process development for efficient utilization has led to disposal issues associated with costs and pollution. This review presents the promising opportunities to maximize the utilization of LPBs by economic recovery of their valuable components to produce high value-added products. More than 50,000 tons of LPBs are globally generated, which costs lobster processing companies upward of about $7.5 million/year for disposal. This not only presents financial and environmental burdens to the lobster processors but also wastes a valuable bioresource. LPBs are rich in a range of high-value compounds such as proteins, chitin, lipids, minerals, and pigments. Extracts recovered from LPBs have been demonstrated to possess several functionalities and bioactivities, which are useful for numerous applications in water treatment, agriculture, food, nutraceutical, pharmaceutical products, and biomedicine. Although LPBs have been studied for recovery of valuable components, utilization of these materials for large-scale production is still very limited. Extraction of lobster components using microwave, ultrasonic, and supercritical fluid methods was found to be promising for large-scale production. LPBs are rich in high-value compounds that are currently being underutilized. These compounds can be extracted for use as functional ingredients, nutraceuticals, and pharmaceuticals in a wide range of commercial applications. The efficient utilization of LPBs would not only generate significant economic benefits but also reduce the problems of waste management associated with the lobster industry. This comprehensive review highlights the availability of the global LPBs, the key components in LPBs and their current applications, the limitations to the extraction techniques used, and the suggested emerging techniques which may be promising on an industrial scale for the maximized utilization of LPBs. Graphical abstract: Lobster processing by-products as a bioresource of several functional and bioactive compounds used in various value-added products.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wolfrum, E.J.; Weaver, P.F.
Researchers at the National Renewable Energy Laboratory (NREL) have been investigating the use of model photosynthetic microorganisms that use sunlight and two-carbon organic substrates (e.g., ethanol, acetate) to produce biodegradable polyhydroxyalkanoate (PHA) copolymers as carbon storage compounds. Use of these biological PHAs in single-use plastics applications, followed by their post-consumer composting or anaerobic digestion, could impact petroleum consumption as well as the overloading of landfills. The large-scale production of PHA polymers by photosynthetic bacteria will require large-scale reactor systems utilizing either sunlight or artificial illumination. The first step in the scale-up process is to quantify the microbial growth rates and the PHA production rates as a function of reaction conditions such as nutrient concentration, temperature, and light quality and intensity.
A Magnetic Bead-Integrated Chip for the Large Scale Manufacture of Normalized esiRNAs
Wang, Zhao; Huang, Huang; Zhang, Hanshuo; Sun, Changhong; Hao, Yang; Yang, Junyu; Fan, Yu; Xi, Jianzhong Jeff
2012-01-01
The chemically synthesized siRNA duplex has become a powerful and widely used tool for RNAi loss-of-function studies, but suffers from a high off-target effect problem. Recently, endoribonuclease-prepared siRNA (esiRNA) has been shown to be an attractive alternative due to its lower off-target effect and cost effectiveness. However, the current manufacturing method for esiRNA is complicated, mainly with regard to purification and normalization at a large scale. In this study, we present a magnetic bead-integrated chip that can immobilize amplification or transcription products on beads and accomplish transcription, digestion, normalization, and purification in a robust and convenient manner. This chip is equipped to manufacture ready-to-use esiRNAs at a large scale. Silencing specificity and efficiency of these esiRNAs were validated at the transcriptional, translational, and functional levels. Manufacture of several normalized esiRNAs in a single well, including those silencing PARP1 and BRCA1, was successfully achieved, and the esiRNAs were subsequently utilized to effectively investigate their synergistic effect on cell viability. A small esiRNA library targeting 68 tyrosine kinase genes was constructed for a loss-of-function study, and four genes were identified as regulating the migration capability of HeLa cells. We believe that this approach provides a more robust and cost-effective choice for manufacturing esiRNAs than current approaches, and therefore these heterogeneous RNA strands may have utility in most intensive and extensive applications. PMID:22761791
Molecular diagnosis of malaria by photo-induced electron transfer fluorogenic primers: PET-PCR.
Lucchi, Naomi W; Narayanan, Jothikumar; Karell, Mara A; Xayavong, Maniphet; Kariuki, Simon; DaSilva, Alexandre J; Hill, Vincent; Udhayakumar, Venkatachalam
2013-01-01
There is a critical need for developing new malaria diagnostic tools that are sensitive, cost effective and capable of performing large scale diagnosis. The real-time PCR methods are particularly robust for large scale screening and they can be used in malaria control and elimination programs. We have designed novel self-quenching photo-induced electron transfer (PET) fluorogenic primers for the detection of P. falciparum and the Plasmodium genus by real-time PCR. A total of 119 samples consisting of different malaria species and mixed infections were used to test the utility of the novel PET-PCR primers in the diagnosis of clinical samples. The sensitivity and specificity were calculated using a nested PCR as the gold standard and the novel primer sets demonstrated 100% sensitivity and specificity. The limits of detection for P. falciparum was shown to be 3.2 parasites/µl using both Plasmodium genus and P. falciparum-specific primers and 5.8 parasites/µl for P. ovale, 3.5 parasites/µl for P. malariae and 5 parasites/µl for P. vivax using the genus specific primer set. Moreover, the reaction can be duplexed to detect both Plasmodium spp. and P. falciparum in a single reaction. The PET-PCR assay does not require internal probes or intercalating dyes which makes it convenient to use and less expensive than other real-time PCR diagnostic formats. Further validation of this technique in the field will help to assess its utility for large scale screening in malaria control and elimination programs.
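Sensitivity and specificity against the nested-PCR gold standard reduce to simple ratios from a 2x2 table; the counts below are hypothetical and are not the study's actual tabulation.

```python
def sensitivity_specificity(tp, fp, fn, tn):
    """Diagnostic performance against a gold standard (here, nested PCR)."""
    sensitivity = tp / (tp + fn)   # fraction of true positives detected
    specificity = tn / (tn + fp)   # fraction of true negatives correctly excluded
    return sensitivity, specificity

# Hypothetical 2x2 counts for illustration only (the study reports 100%/100%):
sens, spec = sensitivity_specificity(tp=55, fp=0, fn=0, tn=60)
print(f"sensitivity={sens:.0%}, specificity={spec:.0%}")
```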
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Ye; Thornber, Ben
2016-04-12
Here, the implicit large-eddy simulation (ILES) has been utilized as an effective approach for calculating many complex flows at high Reynolds numbers. Richtmyer–Meshkov instability (RMI) induced flow can be viewed as a homogeneous decaying turbulence (HDT) after the passage of the shock. In this article, a critical evaluation of three methods for estimating the effective Reynolds number and the effective kinematic viscosity is undertaken utilizing high-resolution ILES data. Effective Reynolds numbers based on the vorticity and dissipation rate, or the integral and inner-viscous length scales, are found to be the most self-consistent when compared to the expected phenomenology and wind tunnel experiments.
Medical image classification based on multi-scale non-negative sparse coding.
Zhang, Ruijie; Shen, Jian; Wei, Fushan; Li, Xiong; Sangaiah, Arun Kumar
2017-11-01
With the rapid development of modern medical imaging technology, medical image classification has become more and more important in medical diagnosis and clinical practice. Conventional medical image classification algorithms usually neglect the semantic gap problem between low-level features and high-level image semantics, which largely degrades classification performance. To solve this problem, we propose a multi-scale non-negative sparse coding based medical image classification algorithm. Firstly, medical images are decomposed into multiple scale layers, so that diverse visual details can be extracted from different scale layers. Secondly, for each scale layer, a non-negative sparse coding model with Fisher discriminative analysis is constructed to obtain the discriminative sparse representation of medical images. Then, the obtained multi-scale non-negative sparse coding features are combined to form a multi-scale feature histogram as the final representation for a medical image. Finally, an SVM classifier is used to conduct medical image classification. The experimental results demonstrate that our proposed algorithm can effectively utilize multi-scale and contextual spatial information of medical images, reduce the semantic gap to a large degree, and improve medical image classification performance. Copyright © 2017 Elsevier B.V. All rights reserved.
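One building block of the pipeline described above is computing a non-negative sparse code for each patch against a fixed dictionary. A minimal sketch of that step follows, using a positive-constrained Lasso as the encoder; the dictionary here is random, and the paper's Fisher-discriminative dictionary learning and multi-scale pooling details are not reproduced.

```python
import numpy as np
from sklearn.linear_model import Lasso

def nn_sparse_code(patches, dictionary, alpha=0.1):
    """Non-negative sparse codes for patches given a fixed dictionary.

    Solves min ||x - D a||^2 + alpha*||a||_1 subject to a >= 0 per patch,
    one simple stand-in for the non-negative sparse coding step above.
    """
    coder = Lasso(alpha=alpha, positive=True, max_iter=5000)
    codes = np.empty((patches.shape[0], dictionary.shape[1]))
    for i, x in enumerate(patches):
        coder.fit(dictionary, x)   # columns of `dictionary` act as atoms
        codes[i] = coder.coef_
    return codes

rng = np.random.default_rng(0)
D = np.abs(rng.normal(size=(64, 32)))        # 64-dim patches, 32 atoms
patches = np.abs(rng.normal(size=(10, 64)))  # e.g., one scale layer's patches
codes = nn_sparse_code(patches, D)
histogram = codes.mean(axis=0)               # pooled per-scale representation
```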
General relativistic corrections to the weak lensing convergence power spectrum
NASA Astrophysics Data System (ADS)
Giblin, John T.; Mertens, James B.; Starkman, Glenn D.; Zentner, Andrew R.
2017-11-01
We compute the weak lensing convergence power spectrum, Cℓ^κκ, in a dust-filled universe using fully nonlinear general relativistic simulations. The spectrum is then compared to more standard, approximate calculations by computing the Bardeen (Newtonian) potentials in linearized gravity and partially utilizing the Born approximation. We find corrections to the angular power spectrum amplitude of order ten percent at very large angular scales, ℓ ~ 2-3, and percent-level corrections at intermediate angular scales of ℓ ~ 20-30.
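For context, the approximate calculation that the paper compares against is built on the standard Born/Limber expressions. A sketch of those expressions is given below, assuming a flat universe and a single source plane at comoving distance χ_s; conventions (such as ℓ versus ℓ+1/2) vary between references, and the paper's fully relativistic computation does not rely on them.

```latex
\kappa(\hat{\mathbf{n}}) \simeq \frac{3H_0^2\,\Omega_m}{2c^2}
  \int_0^{\chi_s} d\chi\, \frac{(\chi_s-\chi)\,\chi}{\chi_s}\,
  \frac{\delta\!\left(\chi\hat{\mathbf{n}},\chi\right)}{a(\chi)},
\qquad
C_\ell^{\kappa\kappa} \simeq \int_0^{\chi_s} d\chi\,
  \frac{W^2(\chi)}{\chi^2}\,
  P_\delta\!\left(k=\frac{\ell+1/2}{\chi},\,\chi\right),
\quad
W(\chi)=\frac{3H_0^2\,\Omega_m}{2c^2}\,\frac{(\chi_s-\chi)\,\chi}{\chi_s\,a(\chi)} .
```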
Lo, Yu-Chen; Senese, Silvia; Li, Chien-Ming; Hu, Qiyang; Huang, Yong; Damoiseaux, Robert; Torres, Jorge Z.
2015-01-01
Target identification is one of the most critical steps following cell-based phenotypic chemical screens aimed at identifying compounds with potential uses in cell biology and for developing novel disease therapies. Current in silico target identification methods, including chemical similarity database searches, are limited to single or sequential ligand analyses that have limited capabilities for accurate deconvolution of a large number of compounds with diverse chemical structures. Here, we present CSNAP (Chemical Similarity Network Analysis Pulldown), a new computational target identification method that utilizes chemical similarity networks for large-scale chemotype (consensus chemical pattern) recognition and drug target profiling. Our benchmark study showed that CSNAP can achieve an overall higher accuracy (>80%) of target prediction with respect to representative chemotypes in large (>200) compound sets, in comparison to the SEA approach (60-70%). Additionally, CSNAP is capable of integrating with biological knowledge-based databases (Uniprot, GO) and high-throughput biology platforms (proteomics, genetics, etc.) for system-wide drug target validation. To demonstrate the utility of the CSNAP approach, we combined CSNAP's target prediction with experimental ligand evaluation to identify the major mitotic targets of hit compounds from a cell-based chemical screen, and we highlight novel compounds targeting microtubules, an important cancer therapeutic target. The CSNAP method is freely available and can be accessed from the CSNAP web server (http://services.mbi.ucla.edu/CSNAP/). PMID:25826798
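The chemical-similarity-network idea behind chemotype clustering can be sketched with RDKit: fingerprint each compound, connect pairs whose Tanimoto similarity clears a cutoff, and read candidate chemotypes off the connected components. The SMILES strings and the 0.3 cutoff below are placeholders, not CSNAP's actual inputs or parameters.

```python
from itertools import combinations
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

# Hypothetical hit compounds (SMILES); this only sketches the similarity
# network underlying chemotype clustering, not the CSNAP service itself.
smiles = {"cmpdA": "CCO", "cmpdB": "CCN", "cmpdC": "c1ccccc1O"}
fps = {name: AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(s), 2, nBits=2048)
       for name, s in smiles.items()}

edges = []
for (a, fa), (b, fb) in combinations(fps.items(), 2):
    t = DataStructs.TanimotoSimilarity(fa, fb)
    if t >= 0.3:                      # similarity cutoff defines network edges
        edges.append((a, b, round(t, 2)))
print(edges)                          # connected components ~ candidate chemotypes
```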
SeeDB: Efficient Data-Driven Visualization Recommendations to Support Visual Analytics
Vartak, Manasi; Rahman, Sajjadur; Madden, Samuel; Parameswaran, Aditya; Polyzotis, Neoklis
2015-01-01
Data analysts often build visualizations as the first step in their analytical workflow. However, when working with high-dimensional datasets, identifying visualizations that show relevant or desired trends in data can be laborious. We propose SeeDB, a visualization recommendation engine to facilitate fast visual analysis: given a subset of data to be studied, SeeDB intelligently explores the space of visualizations, evaluates promising visualizations for trends, and recommends those it deems most “useful” or “interesting”. The two major obstacles in recommending interesting visualizations are (a) scale: evaluating a large number of candidate visualizations while responding within interactive time scales, and (b) utility: identifying an appropriate metric for assessing interestingness of visualizations. For the former, SeeDB introduces pruning optimizations to quickly identify high-utility visualizations and sharing optimizations to maximize sharing of computation across visualizations. For the latter, as a first step, we adopt a deviation-based metric for visualization utility, while indicating how we may be able to generalize it to other factors influencing utility. We implement SeeDB as a middleware layer that can run on top of any DBMS. Our experiments show that our framework can identify interesting visualizations with high accuracy. Our optimizations lead to multiple orders of magnitude speedup on relational row and column stores and provide recommendations at interactive time scales. Finally, we demonstrate via a user study the effectiveness of our deviation-based utility metric and the value of recommendations in supporting visual analytics. PMID:26779379
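One simple instantiation of the deviation-based utility described above scores a candidate group-by visualization by the distance between the normalized aggregate distributions of the target subset and the reference data. The sketch below uses an L1 distance and invented data; SeeDB itself supports other distance metrics and the pruning/sharing optimizations are not shown.

```python
import numpy as np
import pandas as pd

def deviation_utility(df, subset_mask, group_col, agg_col):
    """Deviation-based utility of one candidate group-by bar chart.

    Aggregate the measure over the grouping attribute for the target subset
    and for the reference (full) data, normalize both into distributions,
    and score their distance (L1 here, as one simple choice).
    """
    target = df[subset_mask].groupby(group_col)[agg_col].mean()
    reference = df.groupby(group_col)[agg_col].mean()
    target, reference = target.align(reference, fill_value=0.0)
    p = target / target.sum()
    q = reference / reference.sum()
    return float(np.abs(p - q).sum())

# Hypothetical example: does "avg sales by region" look different for product A?
df = pd.DataFrame({"region": list("NNSSEEWW"),
                   "product": ["A", "B"] * 4,
                   "sales": [10, 3, 8, 2, 15, 4, 1, 9]})
print(deviation_utility(df, df["product"] == "A", "region", "sales"))
```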
Huang, Yueng-Hsiang; Zohar, Dov; Robertson, Michelle M; Garabet, Angela; Murphy, Lauren A; Lee, Jin
2013-10-01
The objective of this study was to develop and test the reliability and validity of a new scale designed for measuring safety climate among mobile remote workers, using utility/electrical workers as exemplar. The new scale employs perceived safety priority as the metric of safety climate and a multi-level framework, separating the measurement of organization- and group-level safety climate items into two sub-scales. The question of the emergence of shared perceptions among remote workers was also examined. For the initial survey development, several items were adopted from a generic safety climate scale and new industry-specific items were generated based on an extensive literature review, expert judgment, 15-day field observations, and 38 in-depth individual interviews with subject matter experts (i.e., utility industry electrical workers, trainers and supervisors of electrical workers). The items were revised after 45 cognitive interviews and a pre-test with 139 additional utility/electrical workers. The revised scale was subsequently implemented with a total of 2421 workers at two large US electric utility companies (1560 participants for the pilot company and 861 for the second company). Both exploratory (EFA) and confirmatory factor analyses (CFA) were adopted to finalize the items and to ensure construct validity. Reliability of the scale was tested based on Cronbach's α. Homogeneity tests examined whether utility/electrical workers' safety climate perceptions were shared within the same supervisor group. This was followed by an analysis of the criterion-related validity, which linked the safety climate scores to self-reports of safety behavior and injury outcomes (i.e., recordable incidents, missing days due to work-related injuries, vehicle accidents, and near misses). Six dimensions (Safety pro-activity, General training, Trucks and equipment, Field orientation, Financial Investment, and Schedule flexibility) with 29 items were extracted from the EFA to measure the organization-level safety climate. Three dimensions (Supervisory care, Participation encouragement, and Safety straight talk) with 19 items were extracted to measure the group-level safety climate. Acceptable ranges of internal consistency statistics for the sub-scales were observed. Whether or not to aggregate these multi-dimensions of safety climate into a single higher-order construct (overall safety climate) was discussed. CFAs confirmed the construct validity of the developed safety climate scale for utility/electrical workers. Homogeneity tests showed that utility/electrical workers' safety climate perceptions were shared within the same supervisor group. Both the organization- and group-level safety climate scores showed a statistically significant relationship with workers' self-reported safety behaviors and injury outcomes. A valid and reliable instrument to measure the essential elements of safety climate for utility/electrical workers in the remote working situation has been introduced. The scale can provide an in-depth understanding of safety climate based on its key dimensions and show where improvements can be made at both group and organization levels. As such, it may also offer a valuable starting point for future safety interventions. Copyright © 2013 Elsevier Ltd. All rights reserved.
Solving large scale traveling salesman problems by chaotic neurodynamics.
Hasegawa, Mikio; Ikeguch, Tohru; Aihara, Kazuyuki
2002-03-01
We propose a novel approach for solving large scale traveling salesman problems (TSPs) by chaotic dynamics. First, we realize the tabu search on a neural network, by utilizing the refractory effects as the tabu effects. Then, we extend it to a chaotic neural network version. We propose two types of chaotic searching methods, which are based on two different tabu searches. While the first one requires neurons of the order of n² for an n-city TSP, the second one requires only n neurons. Moreover, an automatic parameter tuning method of our chaotic neural network is presented for easy application to various problems. Finally, we show that our method with n neurons is applicable to large TSPs such as an 85,900-city problem and exhibits better performance than the conventional stochastic searches and the tabu searches.
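For orientation, the conventional tabu search that the chaotic network emulates (with refractory effects playing the role of the tabu list) can be sketched as follows; this is a plain 2-opt tabu search on random cities, not the chaotic neurodynamics itself.

```python
import math, random

def tour_length(tour, pts):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def tabu_2opt(pts, iterations=500, tenure=20, seed=0):
    """Plain tabu search over sampled 2-opt moves for the TSP."""
    random.seed(seed)
    n = len(pts)
    tour = list(range(n))
    random.shuffle(tour)
    best, best_len = tour[:], tour_length(tour, pts)
    tabu = {}                                     # move -> iteration it stays forbidden until
    for it in range(iterations):
        candidates = []
        for _ in range(100):                      # sample candidate 2-opt moves
            i, j = sorted(random.sample(range(n), 2))
            if j - i < 2:
                continue
            new = tour[:i] + tour[i:j][::-1] + tour[j:]
            candidates.append((tour_length(new, pts), (i, j), new))
        candidates.sort()
        for length, move, new in candidates:
            if tabu.get(move, -1) < it or length < best_len:   # aspiration rule
                tour = new
                tabu[move] = it + tenure
                if length < best_len:
                    best, best_len = new[:], length
                break
    return best, best_len

pts = [(random.random(), random.random()) for _ in range(30)]
print(tabu_2opt(pts)[1])
```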
Judging Alignment of Curriculum-Based Measures in Mathematics and Common Core Standards
ERIC Educational Resources Information Center
Morton, Christopher
2013-01-01
Measurement literature supports the utility of alignment models for application with state standards and large-scale assessments. However, the literature is lacking in the application of these models to curriculum-based measures (CBMs) and common core standards. In this study, I investigate the alignment of CBMs and standards, with specific…
Municipal Sludge Application in Forests of Northern Michigan: a Case Study.
D.G. Brockway; P.V. Nguyen
1986-01-01
A large-scale operational demonstration and research project was cooperatively established by the U.S. Environmental Protection Agency, Michigan Department of Natural Resources, and Michigan State University to evaluate the practice of forest land application as an option for sludge utilization. Project objectives included completing (1) a logistic and economic...
A Longitudinal Analysis of Students' Motives for Communicating with Their Instructors
ERIC Educational Resources Information Center
Myers, Scott A.
2017-01-01
This study utilized the longitudinal survey research design using students' motives to communicate with their instructors as a test case. Participants were 282 undergraduate students enrolled in introductory communication courses at a large Mid-Atlantic university who completed the Student Communication Motives scale at three points (Time 1:…
Help-Seeking Behavior Following a Community Tragedy: An Application of the Andersen Model
ERIC Educational Resources Information Center
Cowart, Brian L.
2013-01-01
For healthcare agencies and other professionals to most efficiently provide aid following large scale community tragedies, agencies and professionals must understand the determinants that lead individuals to require and seek various forms of help. This study examined Andersen's Behavioral Model of Healthcare Use and its utility in predicting…
Design for a Study of American Youth.
ERIC Educational Resources Information Center
Flanagan, John C.; And Others
Project TALENT is a large-scale, long-range educational research effort aimed at developing methods for the identification, development, and utilization of human talents, which has involved some 440,000 students in 1,353 public, private, and parochial secondary schools in all parts of the country. Data collected through teacher-administered tests,…
ERIC Educational Resources Information Center
Vaughan, Angela L.; Lalonde, Trent L.; Jenkins-Guarnieri, Michael A.
2014-01-01
Many researchers assessing the efficacy of educational programs face challenges due to issues with non-randomization and the likelihood of dependence between nested subjects. The purpose of the study was to demonstrate a rigorous research methodology using a hierarchical propensity score matching method that can be utilized in contexts where…
Utilization of citric acid in wood bonding
USDA-ARS?s Scientific Manuscript database
Citric acid (CA) is a weak organic acid. It occurs most notably in citrus fruits, from which it takes its name. As a commodity chemical, CA is produced on a large scale by fermentation. In this chapter, we first briefly review the applied research and methods for commercial production of CA. Then we ...
Diagnosing Competency Mastery in Science: An Application of GDM to TIMSS 2011 Data
ERIC Educational Resources Information Center
Kabiri, Masoud; Ghazi-Tabatabaei, Mahmood; Bazargan, Abbas; Shokoohi-Yekta, Mohsen; Kharrazi, Kamal
2017-01-01
Numerous diagnostic studies have been conducted on large-scale assessments to illustrate the students' mastery profile in the areas of math and reading; however, for science a limited number of investigations are reported. This study investigated Iranian eighth graders' competency mastery of science and examined the utility of the General…
Ask Here PA: Large-Scale Synchronous Virtual Reference for Pennsylvania
ERIC Educational Resources Information Center
Mariner, Vince
2008-01-01
Ask Here PA is Pennsylvania's new statewide live chat reference and information service. This article discusses the key strategies utilized by Ask Here PA administrators to recruit participating libraries to contribute staff time to the service, the importance of centralized staff training, the main aspects of staff training, and activating the…
ERIC Educational Resources Information Center
Sample Mcmeeking, Laura B.; Cobb, R. Brian; Basile, Carole
2010-01-01
This paper introduces a variation on the post-test only cohort control design and addresses questions concerning both the methodological credibility and the practical utility of employing this design variation in evaluations of large-scale complex professional development programmes in mathematics education. The original design and design…
In 2010, Kansas City, MO (KCMO) signed a consent decree with EPA on combined sewer overflows. The City decided to use adaptive management in order to extensively utilize green infrastructure (GI) in lieu of, and in addition to, structural controls. KCMO installed 130 GI storm co...
ORNL Pre-test Analyses of A Large-scale Experiment in STYLE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Paul T; Yin, Shengjun; Klasky, Hilda B
Oak Ridge National Laboratory (ORNL) is conducting a series of numerical analyses to simulate a large scale mock-up experiment planned within the European Network for Structural Integrity for Lifetime Management non-RPV Components (STYLE). STYLE is a European cooperative effort to assess the structural integrity of (non-reactor pressure vessel) reactor coolant pressure boundary components relevant to ageing and life-time management and to integrate the knowledge created in the project into mainstream nuclear industry assessment codes. ORNL contributes work-in-kind support to STYLE Work Package 2 (Numerical Analysis/Advanced Tools) and Work Package 3 (Engineering Assessment Methods/LBB Analyses). This paper summarizes the current status of ORNL analyses of the STYLE Mock-Up3 large-scale experiment to simulate and evaluate crack growth in a cladded ferritic pipe. The analyses are being performed in two parts. In the first part, advanced fracture mechanics models are being developed and used to evaluate several experiment designs, taking into account the capabilities of the test facility while satisfying the test objectives. These advanced fracture mechanics models will then be utilized to simulate the crack growth in the large scale mock-up test. For the second part, the recently developed ORNL SIAM-PFM open-source, cross-platform, probabilistic computational tool will be used to generate an alternative assessment for comparison with the advanced fracture mechanics model results. The SIAM-PFM probabilistic analysis of the Mock-Up3 experiment will utilize fracture modules that are installed into a general probabilistic framework. The probabilistic results of the Mock-Up3 experiment obtained from SIAM-PFM will be compared to those generated using the deterministic 3D nonlinear finite-element modeling approach. The objective of the probabilistic analysis is to provide uncertainty bounds that will assist in assessing the more detailed 3D finite-element solutions and to also assess the level of confidence that can be placed in the best-estimate finite-element solutions.
Ma, Li; Runesha, H Birali; Dvorkin, Daniel; Garbe, John R; Da, Yang
2008-01-01
Background: Genome-wide association studies (GWAS) using single nucleotide polymorphism (SNP) markers provide opportunities to detect epistatic SNPs associated with quantitative traits and to detect the exact mode of an epistasis effect. Computational difficulty is the main bottleneck for epistasis testing in large scale GWAS. Results: The EPISNPmpi and EPISNP computer programs were developed for testing single-locus and epistatic SNP effects on quantitative traits in GWAS, including tests of three single-locus effects for each SNP (SNP genotypic effect, additive and dominance effects) and five epistasis effects for each pair of SNPs (two-locus interaction, additive × additive, additive × dominance, dominance × additive, and dominance × dominance) based on the extended Kempthorne model. EPISNPmpi is the parallel computing program for epistasis testing in large scale GWAS and achieved excellent scalability for large scale analysis and portability for various parallel computing platforms. EPISNP is the serial computing program based on the EPISNPmpi code for epistasis testing in small scale GWAS using commonly available operating systems and computer hardware. Three serial computing utility programs were developed for graphical viewing of test results and epistasis networks, and for estimating CPU time and disk space requirements. Conclusion: The EPISNPmpi parallel computing program provides an effective computing tool for epistasis testing in large scale GWAS, and the epiSNP serial computing programs are convenient tools for epistasis analysis in small scale GWAS using commonly available computer hardware. PMID:18644146
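A generic two-locus interaction test, in the spirit of the pairwise epistasis tests listed above, can be run as a genotype-by-genotype ANOVA. The sketch below uses simulated data and an ordinary F-test rather than the extended Kempthorne partition into A x A, A x D, D x A, and D x D components.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical data: genotypes coded 0/1/2 for two SNPs and a quantitative trait.
rng = np.random.default_rng(7)
n = 500
df = pd.DataFrame({"g1": rng.integers(0, 3, n), "g2": rng.integers(0, 3, n)})
df["y"] = (0.3 * df.g1 + 0.2 * df.g2
           + 0.4 * (df.g1 == 2) * (df.g2 == 2)     # simulated epistatic effect
           + rng.normal(0, 1, n))

# Two-locus ANOVA: the interaction row is the overall epistasis F-test.
model = smf.ols("y ~ C(g1) * C(g2)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2).loc["C(g1):C(g2)"])
```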
Deformation and Failure Mechanisms of Shape Memory Alloys
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daly, Samantha Hayes
2015-04-15
The goal of this research was to understand the fundamental mechanics that drive the deformation and failure of shape memory alloys (SMAs). SMAs are difficult materials to characterize because of the complex phase transformations that give rise to their unique properties, including shape memory and superelasticity. These phase transformations occur across multiple length scales (one example being the martensite-austenite twinning that underlies macroscopic strain localization) and result in a large hysteresis. In order to optimize the use of this hysteretic behavior in energy storage and damping applications, we must first have a quantitative understanding of this transformation behavior. Prior results on shape memory alloys have been largely qualitative (i.e., mapping phase transformations through cracked oxide coatings or surface morphology). The PI developed and utilized new approaches to provide a quantitative, full-field characterization of phase transformation, conducting a comprehensive suite of experiments across multiple length scales and tying these results to theoretical and computational analysis. The research funded by this award utilized new combinations of scanning electron microscopy, diffraction, digital image correlation, and custom testing equipment and procedures to study phase transformation processes at a wide range of length scales, with a focus at small length scales with spatial resolution on the order of 1 nanometer. These experiments probe the basic connections between length scales during phase transformation. In addition to the insights gained on the fundamental mechanisms driving transformations in shape memory alloys, the unique experimental methodologies developed under this award are applicable to a wide range of solid-to-solid phase transformations and other strain localization mechanisms.
Multiresolution persistent homology for excessively large biomolecular datasets
NASA Astrophysics Data System (ADS)
Xia, Kelin; Zhao, Zhixiong; Wei, Guo-Wei
2015-10-01
Although persistent homology has emerged as a promising tool for the topological simplification of complex data, it is computationally intractable for large datasets. We introduce multiresolution persistent homology to handle excessively large datasets. We match the resolution with the scale of interest so as to represent large scale datasets with appropriate resolution. We utilize the flexibility-rigidity index to assess the topological connectivity of the data set and define a rigidity density for the filtration analysis. By appropriately tuning the resolution of the rigidity density, we are able to focus the topological lens on the scale of interest. The proposed multiresolution topological analysis is validated by a hexagonal fractal image which has three distinct scales. We further demonstrate the proposed method for extracting topological fingerprints from DNA molecules. In particular, the topological persistence of a virus capsid with 273 780 atoms is successfully analyzed, which would otherwise be inaccessible to the normal point cloud method and unreliable by using coarse-grained multiscale persistent homology. The proposed method has also been successfully applied to protein domain classification, which is the first time that persistent homology is used for practical protein domain analysis, to our knowledge. The proposed multiresolution topological method has potential applications in arbitrary data sets, such as social networks, biological networks, and graphs.
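The following is a hedged sketch of a rigidity-density calculation in the spirit of the flexibility-rigidity index described above; the generalized exponential kernel, the parameter names eta and kappa, and the toy coordinates are assumptions, with eta playing the role of the tunable resolution.

```python
# Hedged sketch: an FRI-style rigidity density evaluated on a grid with a
# generalized exponential kernel. Increasing eta coarsens the density, which
# is the resolution-tuning idea described in the abstract.
import numpy as np

def rigidity_density(atoms, grid, eta=2.0, kappa=2.0, weights=None):
    """atoms: (N,3) coordinates; grid: (M,3) evaluation points."""
    w = np.ones(len(atoms)) if weights is None else weights
    d = np.linalg.norm(grid[:, None, :] - atoms[None, :, :], axis=-1)  # (M,N)
    return (w * np.exp(-(d / eta) ** kappa)).sum(axis=1)

atoms = np.random.default_rng(1).uniform(0, 10, size=(50, 3))
xs = np.linspace(0, 10, 20)
grid = np.array(np.meshgrid(xs, xs, xs)).reshape(3, -1).T
rho_fine = rigidity_density(atoms, grid, eta=1.0)    # fine-scale features
rho_coarse = rigidity_density(atoms, grid, eta=4.0)  # coarse-scale features
```

The density (at either resolution) would then be fed to a filtration for persistent homology, which is not reproduced here.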
A Ranking Approach on Large-Scale Graph With Multidimensional Heterogeneous Information.
Wei, Wei; Gao, Bin; Liu, Tie-Yan; Wang, Taifeng; Li, Guohui; Li, Hang
2016-04-01
Graph-based ranking has been extensively studied and frequently applied in many applications, such as webpage ranking. It aims at mining potentially valuable information from the raw graph-structured data. Recently, with the proliferation of rich heterogeneous information (e.g., node/edge features and prior knowledge) available in many real-world graphs, how to effectively and efficiently leverage all information to improve the ranking performance becomes a new challenging problem. Previous methods only utilize part of such information and attempt to rank graph nodes according to link-based methods, whose ranking performance is severely affected by several well-known issues, e.g., over-fitting or high computational complexity, especially when the scale of the graph is very large. In this paper, we address the large-scale graph-based ranking problem and focus on how to effectively exploit rich heterogeneous information of the graph to improve the ranking performance. Specifically, we propose an innovative and effective semi-supervised PageRank (SSP) approach to parameterize the derived information within a unified semi-supervised learning framework (SSLF-GR), and then simultaneously optimize the parameters and the ranking scores of graph nodes. Experiments on real-world large-scale graphs demonstrate that our method significantly outperforms the algorithms that consider such graph information only partially.
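As a hedged sketch (not the SSP algorithm itself), the code below runs a personalized PageRank iteration in which the teleportation distribution is parameterized by node features, one simple way heterogeneous information can enter a link-based ranker; the feature weights theta would be learned against supervision in the paper's framework, which is omitted here.

```python
# Hedged sketch: PageRank with a feature-parameterized teleport vector.
import numpy as np

def pagerank(P, teleport, alpha=0.85, iters=100):
    """P: column-stochastic transition matrix; teleport: probability vector."""
    r = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(iters):
        r = alpha * P @ r + (1 - alpha) * teleport
    return r

rng = np.random.default_rng(0)
A = (rng.random((6, 6)) < 0.4).astype(float)   # toy adjacency matrix
P = A / np.maximum(A.sum(axis=0), 1)           # column-normalize
X = rng.random((6, 3))                         # node features
theta = np.array([0.5, 0.2, 0.3])              # assumed (not learned) weights
teleport = np.exp(X @ theta)
teleport /= teleport.sum()
print(pagerank(P, teleport))
```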
NASA Astrophysics Data System (ADS)
McGranaghan, Ryan M.; Mannucci, Anthony J.; Forsyth, Colin
2017-12-01
We explore the characteristics, controlling parameters, and relationships of multiscale field-aligned currents (FACs) using a rigorous, comprehensive, and cross-platform analysis. Our unique approach combines FAC data from the Swarm satellites and the Advanced Magnetosphere and Planetary Electrodynamics Response Experiment (AMPERE) to create a database of small-scale (˜10-150 km, <1° latitudinal width), mesoscale (˜150-250 km, 1-2° latitudinal width), and large-scale (>250 km) FACs. We examine these data for the repeatable behavior of FACs across scales (i.e., the characteristics), the dependence on the interplanetary magnetic field orientation, and the degree to which each scale "departs" from nominal large-scale specification. We retrieve new information by utilizing magnetic latitude and local time dependence, correlation analyses, and quantification of the departure of smaller from larger scales. We find that (1) FAC characteristics and dependence on controlling parameters do not map between scales in a straightforward manner, (2) relationships between FAC scales exhibit local time dependence, and (3) the dayside high-latitude region is characterized by remarkably distinct FAC behavior when analyzed at different scales, and the locations of distinction correspond to "anomalous" ionosphere-thermosphere behavior. Comparing with nominal large-scale FACs, we find that differences are characterized by a horseshoe shape, maximizing across dayside local times, and that difference magnitudes increase when smaller-scale observed FACs are considered. We suggest that both new physics and increased resolution of models are required to address the multiscale complexities. We include a summary table of our findings to provide a quick reference for differences between multiscale FACs.
Design, processing and testing of LSI arrays, hybrid microelectronics task
NASA Technical Reports Server (NTRS)
Himmel, R. P.; Stuhlbarg, S. M.; Ravetti, R. G.; Zulueta, P. J.; Rothrock, C. W.
1979-01-01
Mathematical cost models previously developed for hybrid microelectronic subsystems were refined and expanded. Rework terms related to substrate fabrication, nonrecurring developmental and manufacturing operations, and prototype production are included. Sample computer programs were written to demonstrate hybrid microelectronic applications of these cost models. Computer programs were generated to calculate and analyze values for the total microelectronics costs. Large scale integrated (LSI) chips utilizing tape chip carrier technology were studied. The feasibility of interconnecting arrays of LSI chips utilizing tape chip carrier and semiautomatic wire bonding technology was demonstrated.
The development of a solar-powered residential heating and cooling system
NASA Technical Reports Server (NTRS)
1974-01-01
Efforts to demonstrate the engineering feasibility of utilizing solar power for residential heating and cooling are described. These efforts were concentrated on the analysis, design, and test of a full-scale demonstration system which is currently under construction at the National Aeronautics and Space Administration, Marshall Space Flight Center, Huntsville, Alabama. The basic solar heating and cooling system under development utilizes a flat plate solar energy collector, a large water tank for thermal energy storage, heat exchangers for space heating and water heating, and an absorption cycle air conditioner for space cooling.
NASA Astrophysics Data System (ADS)
Alberts, Samantha J.
The investigation of microgravity fluid dynamics emerged out of necessity with the advent of space exploration. In particular, capillary research took a leap forward in the 1960s with regard to liquid settling and interfacial dynamics. Due to inherent temperature variations in large spacecraft liquid systems, such as fuel tanks, forces develop on gas-liquid interfaces which induce thermocapillary flows. To date, thermocapillary flows have been studied in small, idealized research geometries usually under terrestrial conditions. The 1 to 3 m lengths in current and future large tanks and hardware are designed based on hardware rather than research, which leaves spaceflight systems designers without the technological tools to effectively create safe and efficient designs. This thesis focused on the design and feasibility of a large length-scale thermocapillary flow experiment, which utilizes temperature variations to drive a flow. The design of a helical channel geometry ranging from 1 to 2.5 m in length permits a large length-scale thermocapillary flow experiment to fit in a seemingly small International Space Station (ISS) facility such as the Fluids Integrated Rack (FIR). An initial investigation determined the proposed experiment produced measurable data while adhering to the FIR facility limitations. The computational portion of this thesis focused on the investigation of functional geometries of fuel tanks and depots using Surface Evolver. This work outlines the design of a large length-scale thermocapillary flow experiment for the ISS FIR. The results from this work improve the understanding of thermocapillary flows and thus improve technological tools for predicting heat and mass transfer in large length-scale thermocapillary flows. Without the tools to understand the thermocapillary flows in these systems, engineers are forced to design larger, heavier vehicles to assure safety and mission success.
Zhu, Jixin; Cao, Liujun; Wu, Yingsi; Gong, Yongji; Liu, Zheng; Hoster, Harry E; Zhang, Yunhuai; Zhang, Shengtao; Yang, Shubin; Yan, Qingyu; Ajayan, Pulickel M; Vajtai, Robert
2013-01-01
Various two-dimensional (2D) materials have recently attracted great attention owing to their unique properties and wide application potential in electronics, catalysis, energy storage, and conversion. However, large-scale production of ultrathin sheets and functional nanosheets remains a scientific and engineering challenge. Here we demonstrate an efficient approach for large-scale production of V2O5 nanosheets having a thickness of 4 nm and utilization as building blocks for constructing 3D architectures via a freeze-drying process. The resulting highly flexible V2O5 structures possess a surface area of 133 m² g⁻¹, ultrathin walls, and multilevel pores. Such unique features are favorable for providing easy access of the electrolyte to the structure when they are used as a supercapacitor electrode, and they also provide a large electroactive surface that is advantageous in energy storage applications. As a consequence, a high specific capacitance of 451 F g⁻¹ is achieved in a neutral aqueous Na2SO4 electrolyte as the 3D architectures are utilized for energy storage. Remarkably, the capacitance retention after 4000 cycles is more than 90%, and the energy density is up to 107 W·h·kg⁻¹ at a high power density of 9.4 kW kg⁻¹.
Corstjens, Paul L A M; Hoekstra, Pytsje T; de Dood, Claudia J; van Dam, Govert J
2017-11-01
Methodological applications of the high sensitivity genus-specific Schistosoma CAA strip test, allowing detection of single worm active infections (ultimate sensitivity), are discussed for efficient utilization in sample pooling strategies. Besides relevant cost reduction, pooling of samples rather than individual testing can provide valuable data for large scale mapping, surveillance, and monitoring. The laboratory-based CAA strip test utilizes luminescent quantitative up-converting phosphor (UCP) reporter particles and a rapid user-friendly lateral flow (LF) assay format. The test includes a sample preparation step that permits virtually unlimited sample concentration with urine, reaching ultimate sensitivity (single worm detection) at 100% specificity. This facilitates testing large urine pools from many individuals with minimal loss of sensitivity and specificity. The test determines the average CAA level of the individuals in the pool, thus indicating overall worm burden and prevalence. When test results are required at the individual level, smaller pools need to be analysed, with the pool size based on the expected prevalence or, when this is unknown, on the average CAA level of a larger group; CAA negative pools do not require individual test results and thus reduce the number of tests. Straightforward pooling strategies indicate that at sub-population level the CAA strip test is an efficient assay for general mapping, identification of hotspots, determination of stratified infection levels, and accurate monitoring of mass drug administrations (MDA). At the individual level, the number of tests can be reduced, e.g. in low-endemicity settings, since the pool size can be increased as prevalence decreases. At the sub-population level, average CAA concentrations determined in urine pools can be an appropriate measure indicating worm burden. Pooling strategies allowing this type of large scale testing are feasible with the various CAA strip test formats and do not affect sensitivity and specificity. This allows cost-efficient stratified testing and monitoring of worm burden at the sub-population level, ideally for large-scale surveillance generating hard data for performance of MDA programs and strategic planning when moving towards transmission-stop and elimination.
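As a hedged back-of-envelope illustration of why larger pools become attractive as prevalence falls (this is the classic two-stage Dorfman pooling model, not the CAA-specific protocol), the expected number of tests per person for pool size k and prevalence p is 1/k + 1 - (1-p)^k, assuming the pooled test retains full sensitivity as argued above for concentrated urine:

```python
# Hedged illustration: optimal pool size under idealized two-stage pooling.
def expected_tests_per_person(p, k):
    return 1.0 / k + 1.0 - (1.0 - p) ** k

for p in (0.20, 0.05, 0.01):
    best_k = min(range(2, 101), key=lambda k: expected_tests_per_person(p, k))
    print(f"prevalence {p:.0%}: best pool size {best_k}, "
          f"{expected_tests_per_person(p, best_k):.2f} tests/person")
```

As prevalence drops from 20% to 1%, the optimal pool size grows and the expected number of tests per person falls sharply, which is the effect the abstract describes.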
Techniques for automatic large scale change analysis of temporal multispectral imagery
NASA Astrophysics Data System (ADS)
Mercovich, Ryan A.
Change detection in remotely sensed imagery is a multi-faceted problem with a wide variety of desired solutions. Automatic change detection and analysis to assist in the coverage of large areas at high resolution is a popular area of research in the remote sensing community. Beyond basic change detection, the analysis of change is essential to provide results that positively impact an image analyst's job when examining potentially changed areas. Present change detection algorithms are geared toward low resolution imagery, and require analyst input to provide anything more than a simple pixel level map of the magnitude of change that has occurred. One major problem with this approach is that change occurs in such large volume at small spatial scales that a simple change map is no longer useful. This research strives to create an algorithm based on a set of metrics that performs a large area search for change in high resolution multispectral image sequences and utilizes a variety of methods to identify different types of change. Rather than simply mapping the magnitude of any change in the scene, the goal of this research is to create a useful display of the different types of change in the image. The techniques presented in this dissertation are used to interpret large area images and provide useful information to an analyst about small regions that have undergone specific types of change while retaining image context to make further manual interpretation easier. This analyst cueing to reduce information overload in a large area search environment will have an impact in the areas of disaster recovery, search and rescue situations, and land use surveys among others. By utilizing a feature based approach founded on applying existing statistical methods and new and existing topological methods to high resolution temporal multispectral imagery, a novel change detection methodology is produced that can automatically provide useful information about the change occurring in large area and high resolution image sequences. The change detection and analysis algorithm developed could be adapted to many potential image change scenarios to perform automatic large scale analysis of change.
Nateghi, Roshanak; Guikema, Seth D; Wu, Yue Grace; Bruss, C Bayan
2016-01-01
The U.S. federal government regulates the reliability of bulk power systems, while the reliability of power distribution systems is regulated at a state level. In this article, we review the history of regulating electric service reliability and study the existing reliability metrics, indices, and standards for power transmission and distribution networks. We assess the foundations of the reliability standards and metrics, discuss how they are applied to outages caused by large exogenous disturbances such as natural disasters, and investigate whether the standards adequately internalize the impacts of these events. Our reflections shed light on how existing standards conceptualize reliability, question the basis for treating large-scale hazard-induced outages differently from normal daily outages, and discuss whether this conceptualization maps well onto customer expectations. We show that the risk indices for transmission systems used in regulating power system reliability do not adequately capture the risks that transmission systems are prone to, particularly when it comes to low-probability high-impact events. We also point out several shortcomings associated with the way in which regulators require utilities to calculate and report distribution system reliability indices. We offer several recommendations for improving the conceptualization of reliability metrics and standards. We conclude that while the approaches taken in reliability standards have made considerable advances in enhancing the reliability of power systems and may be logical from a utility perspective during normal operation, existing standards do not provide a sufficient incentive structure for the utilities to adequately ensure high levels of reliability for end-users, particularly during large-scale events. © 2015 Society for Risk Analysis.
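To make the distribution indices discussed above concrete, here is a hedged illustration of two standard metrics (SAIDI and SAIFI, as defined in IEEE Std 1366) and of how excluding "major event days" changes them; the event data are invented and the major-event flag is a simplification of the actual classification rule.

```python
# Hedged illustration of SAIDI/SAIFI and the effect of major-event exclusion.
outages = [
    # (customers_interrupted, minutes, major_event_day)
    (1200,  90, False),
    ( 300,  45, False),
    (8000, 600, True),    # e.g., a storm-driven large-scale outage
]
customers_served = 50000

def saidi_saifi(events):
    saidi = sum(c * m for c, m, _ in events) / customers_served  # minutes/customer
    saifi = sum(c for c, _, _ in events) / customers_served      # interruptions/customer
    return saidi, saifi

print("all events:     SAIDI=%.1f min, SAIFI=%.2f" % saidi_saifi(outages))
print("excluding MEDs: SAIDI=%.1f min, SAIFI=%.2f"
      % saidi_saifi([e for e in outages if not e[2]]))
```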
DOE Office of Scientific and Technical Information (OSTI.GOV)
Willingham, Alison N.; /Ohio State U.
Statewide surveys of furbearers in Illinois indicate gray (Urocyon cinereoargenteus) and red (Vulpes vulpes) foxes have experienced substantial declines in relative abundance, whereas other species such as raccoons (Procyon lotor) and coyotes (Canis latrans) have exhibited dramatic increases during the same time period. The cause of the declines of gray and red foxes has not been identified, and the current status of gray foxes remains uncertain. Therefore, I conducted a large-scale predator survey and tracked radiocollared gray foxes from 2004 to 2007 in order to determine the distribution, survival, cause-specific mortality sources and land cover associations of gray foxes in an urbanized region of northeastern Illinois, and examined the relationships between the occurrence of gray fox and the presence of other species of mesopredators, specifically coyotes and raccoons. Although generalist mesopredators are common and can reach high densities in many urban areas, their urban ecology is poorly understood due to their secretive nature and wariness of humans. Understanding how mesopredators utilize urbanized landscapes can be useful in the management and control of disease outbreaks, mitigation of nuisance wildlife issues, and gaining insight into how mesopredators shape wildlife communities in highly fragmented areas. I examined habitat associations of raccoons, opossums (Didelphis virginiana), domestic cats (Felis catus), coyotes, foxes (gray and red), and striped skunks (Mephitis mephitis) at multiple spatial scales in an urban environment. Gray fox occurrence was rare and widely dispersed, and survival estimates were similar to other studies. Gray fox occurrence was negatively associated with natural and semi-natural land cover types. Fox home range size increased with increasing urban development, suggesting that foxes may be negatively influenced by urbanization. Gray fox occurrence was not associated with coyote or raccoon presence. However, spatial avoidance and mortality due to coyote predation was documented and disease was a major mortality source for foxes. The declining relative abundance of gray fox in Illinois is likely a result of a combination of factors. Assessment of habitat associations indicated that urban mesopredators, particularly coyotes and foxes, perceived the landscape as relatively homogeneous and that urban mesopredators interacted with the environment at scales larger than that accommodated by remnant habitat patches. Coyote and fox presence was found to be associated with a high degree of urban development at large and intermediate spatial scales. However, at a small spatial scale fox presence was associated with high density urban land cover whereas coyote presence was associated with urban development with increased forest cover. Urban habitats can offer a diversity of prey items and anthropogenic resources and natural land cover could offer coyotes daytime resting opportunities in urban areas where they may not be as tolerated as smaller foxes. Raccoons and opossums were found to utilize moderately developed landscapes with interspersed natural and semi-natural land covers at a large spatial scale, which may facilitate dispersal movements. At intermediate and small spatial scales, both species were found to utilize areas that were moderately developed and included forested land cover. These results indicated that raccoons and opossums used natural areas in proximity to anthropogenic resources.
At a large spatial scale, skunk presence was associated with highly developed landscapes with interspersed natural and semi-natural land covers. This may indicate that skunks perceived the urban matrix as more homogeneous than raccoons or opossums. At an intermediate spatial scale, skunks were associated with moderate levels of development and increased forest cover, which indicated that they might utilize natural land cover in proximity to human-dominated land cover. At the smallest spatial scale, skunk presence was associated with forested land cover surrounded by a suburban matrix. Compared to raccoons and opossums, skunks may not be tolerated in close proximity to human development in urban areas. Domestic cat presence was positively associated with increasingly urbanized and less diverse landscapes with decreased amounts of forest and urban open space at the largest spatial scale. At an intermediate spatial scale, cat presence was associated with a moderate degree of urban development characterized by increased forest cover, and at a small spatial scale cat presence was associated with a high degree of urbanization. Free-ranging domestic cats are often associated with human-dominated landscapes and likely utilize remnant natural habitat patches for hunting purposes, which may have implications for native predator and prey species existing in fragmented habitat patches in proximity to human development.
Natural snowfall reveals large-scale flow structures in the wake of a 2.5-MW wind turbine.
Hong, Jiarong; Toloui, Mostafa; Chamorro, Leonardo P; Guala, Michele; Howard, Kevin; Riley, Sean; Tucker, James; Sotiropoulos, Fotis
2014-06-24
To improve power production and structural reliability of wind turbines, there is a pressing need to understand how turbines interact with the atmospheric boundary layer. However, experimental techniques capable of quantifying or even qualitatively visualizing the large-scale turbulent flow structures around full-scale turbines do not exist today. Here we use snowflakes from a winter snowstorm as flow tracers to obtain velocity fields downwind of a 2.5-MW wind turbine in a sampling area of ~36 × 36 m². The spatial and temporal resolutions of the measurements are sufficiently high to quantify the evolution of blade-generated coherent motions, such as the tip and trailing sheet vortices, identify their instability mechanisms and correlate them with turbine operation, control and performance. Our experiment provides an unprecedented in situ characterization of flow structures around utility-scale turbines, and yields significant insights into the Reynolds number similarity issues presented in wind energy applications.
Geographic smoothing of solar PV: Results from Gujarat
Klima, Kelly; Apt, Jay
2015-09-24
We examine the potential for geographic smoothing of solar photovoltaic (PV) electricity generation using 13 months of observed power production from utility-scale plants in Gujarat, India. To our knowledge, this is the first published analysis of geographic smoothing of solar PV using actual generation data at high time resolution from utility-scale solar PV plants. We use geographic correlation and Fourier transform estimates of the power spectral density (PSD) to characterize the observed variability of operating solar PV plants as a function of time scale. Most plants show a spectrum that is linear in the log–log domain at high frequencies f, ranging from f^-1.23 to f^-1.56 (slopes of -1.23 and -1.56), thus exhibiting more relative variability at high frequencies than exhibited by wind plants. PSDs for large PV plants have a steeper slope than those for small plants, hence more smoothing at short time scales. Interconnecting 20 Gujarat plants yields an f^-1.66 spectrum, reducing fluctuations at frequencies corresponding to 6 h and 1 h by 23% and 45%, respectively. Half of this smoothing can be obtained through connecting 4-5 plants; reaching marginal improvement of 1% per added plant occurs at 12-14 plants. The largest plant (322 MW) showed an f^-1.76 spectrum. Furthermore, this suggests that in Gujarat the potential for smoothing is limited to that obtained by one large plant.
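A hedged sketch of the kind of spectral analysis described above: estimate the power spectral density of a generation time series with Welch's method and fit the log-log slope at high frequencies. The series here is synthetic, and the sampling rate, segment length, and frequency cutoff are assumptions, not the study's settings.

```python
# Hedged sketch: PSD estimation and high-frequency log-log slope fit.
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(0)
fs = 1 / 60.0                                       # assumed 1-minute sampling (Hz)
power = np.cumsum(rng.normal(size=24 * 60 * 30))    # synthetic 30-day series

f, psd = welch(power, fs=fs, nperseg=4096)
mask = f > 1 / 3600.0                               # frequencies above 1 cycle/hour
slope = np.polyfit(np.log10(f[mask]), np.log10(psd[mask]), 1)[0]
print(f"high-frequency spectral slope ~ f^{slope:.2f}")
```

Interconnection of plants would be mimicked by summing several such series before estimating the PSD; a steeper (more negative) fitted slope corresponds to more smoothing at short time scales.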
Development of Low-cost, High Energy-per-unit-area Solar Cell Modules
NASA Technical Reports Server (NTRS)
Jones, G. T.; Chitre, S.; Rhee, S. S.
1978-01-01
The development of two hexagonal solar cell process sequences, a laser-scribing process technique for scribing hexagonal and modified hexagonal solar cells, a large-throughput diffusion process, and two surface macrostructure processes suitable for large scale production is reported. Experimental analysis was made on automated spin-on anti-reflective coating equipment and high pressure wafer cleaning equipment. Six hexagonal solar cell modules were fabricated. Also covered is a detailed theoretical analysis on the optimum silicon utilization by modified hexagonal solar cells.
NASA Technical Reports Server (NTRS)
1988-01-01
ARCO Solar manufactures PV systems tailored to a broad variety of applications. PV arrays are routinely used at remote communications installations to operate large microwave repeaters, TV and radio repeaters, rural telephone systems, and small telemetry systems that monitor environmental conditions. They are also used to power agricultural water pumping systems, to provide electricity for isolated villages and medical clinics, for corrosion protection of pipelines and bridges, to power railroad signals and air/sea navigational aids, and for many types of military systems. ARCO is now moving into large scale generation for utilities.
A 32-bit NMOS microprocessor with a large register file
NASA Astrophysics Data System (ADS)
Sherburne, R. W., Jr.; Katevenis, M. G. H.; Patterson, D. A.; Sequin, C. H.
1984-10-01
Two scaled versions of a 32-bit NMOS reduced instruction set computer CPU, called RISC II, have been implemented on two different processing lines using the simple Mead and Conway layout rules with lambda values of 2 and 1.5 microns (corresponding to drawn gate lengths of 4 and 3 microns), respectively. The design utilizes a small set of simple instructions in conjunction with a large register file in order to provide high performance. This approach has resulted in two surprisingly powerful single-chip processors.
Evaluation of Sampling Methods for Bacillus Spore ...
Journal Article Following a wide area release of biological materials, mapping the extent of contamination is essential for orderly response and decontamination operations. HVAC filters process large volumes of air and therefore collect highly representative particulate samples in buildings. HVAC filter extraction may have great utility in rapidly estimating the extent of building contamination following a large-scale incident. However, until now, no studies have been conducted comparing the two most appropriate sampling approaches for HVAC filter materials: direct extraction and vacuum-based sampling.
Development of a metal-clad advanced composite shear web design concept
NASA Technical Reports Server (NTRS)
Laakso, J. H.
1974-01-01
An advanced composite web concept was developed for potential application to the Space Shuttle Orbiter main engine thrust structure. The program consisted of design synthesis, analysis, detail design, element testing, and large scale component testing. A concept was sought that offered significant weight saving by the use of Boron/Epoxy (B/E) reinforced titanium plate structure. The desired concept was one that was practical and that utilized metal to efficiently improve structural reliability. The resulting development of a unique titanium-clad B/E shear web design concept is described. Three large scale components were fabricated and tested to demonstrate the performance of the concept: a titanium-clad plus or minus 45 deg B/E web laminate stiffened with vertical B/E reinforced aluminum stiffeners.
NASA Astrophysics Data System (ADS)
Tenney, Andrew; Coleman, Thomas; Berry, Matthew; Magstadt, Andy; Gogineni, Sivaram; Kiel, Barry
2015-11-01
Shock cells and large scale structures present in a three-stream non-axisymmetric jet are studied both qualitatively and quantitatively. Large Eddy Simulation is utilized first to gain an understanding of the underlying physics of the flow and direct the focus of the physical experiment. The flow in the experiment is visualized using long exposure Schlieren photography, with time resolved Schlieren photography also a possibility. Velocity derivative diagnostics calculated from the grey-scale Schlieren images are analyzed using continuous wavelet transforms. Pressure signals are also captured in the near-field of the jet to correlate with the velocity derivative diagnostics and assist in unraveling this complex flow. We acknowledge the support of AFRL through an SBIR grant.
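A hedged sketch of the wavelet step mentioned above: a continuous wavelet transform of a one-dimensional diagnostic signal (synthetic here, standing in for a velocity-derivative trace extracted from a Schlieren image). The Morlet wavelet, sampling rate, and scale range are assumptions rather than the authors' settings.

```python
# Hedged sketch: continuous wavelet transform of a 1-D diagnostic signal.
import numpy as np
import pywt

fs = 10_000.0                              # assumed sampling rate (Hz)
t = np.arange(0, 0.1, 1 / fs)
signal = np.sin(2 * np.pi * 800 * t) + 0.5 * np.sin(2 * np.pi * 2500 * t)

scales = np.arange(1, 128)
coeffs, freqs = pywt.cwt(signal, scales, "morl", sampling_period=1 / fs)
dominant = freqs[np.argmax(np.abs(coeffs).mean(axis=1))]
print(f"dominant frequency in the scalogram: {dominant:.0f} Hz")
```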
Decentralized state estimation for a large-scale spatially interconnected system.
Liu, Huabo; Yu, Haisheng
2018-03-01
A decentralized state estimator is derived for the spatially interconnected systems composed of many subsystems with arbitrary connection relations. An optimization problem on the basis of linear matrix inequality (LMI) is constructed for the computations of improved subsystem parameter matrices. Several computationally effective approaches are derived which efficiently utilize the block-diagonal characteristic of system parameter matrices and the sparseness of subsystem connection matrix. Moreover, this decentralized state estimator is proved to converge to a stable system and obtain a bounded covariance matrix of estimation errors under certain conditions. Numerical simulations show that the obtained decentralized state estimator is attractive in the synthesis of a large-scale networked system. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Desjardins, T. R.; Gilmore, M.
2016-05-01
Grid biasing is utilized in a large-scale helicon plasma to modify an existing instability. It is shown both experimentally and with a linear stability analysis to be a hybrid drift-Kelvin-Helmholtz mode. At low magnetic field strengths, coherent fluctuations are present, while at high magnetic field strengths, the plasma is broad-band turbulent. Grid biasing is used to drive the once-coherent fluctuations to a broad-band turbulent state, as well as to suppress them. There is a corresponding change in the flow shear. When a high positive bias (10 Te) is applied to the grid electrode, a large-scale fluctuation (ñ/n ≈ 50%) is excited. This mode has been identified as the potential relaxation instability.
Computational examination of utility scale wind turbine wake interactions
Okosun, Tyamo; Zhou, Chenn Q.
2015-07-14
We performed numerical simulations of small, utility scale wind turbine groupings to determine how wakes generated by upstream turbines affect the performance of the small turbine group as a whole. Specifically, various wind turbine arrangements were simulated to better understand how turbine location influences small group wake interactions. The minimization of power losses due to wake interactions certainly plays a significant role in the optimization of wind farms. Since wind turbines extract kinetic energy from the wind, the air passing through a wind turbine decreases in velocity, and turbines downstream of the initial turbine experience flows of lower energy, resulting in reduced power output. Our study proposes two arrangements of turbines that could generate more power by exploiting the momentum of the wind to increase velocity at downstream turbines, while maintaining low wake interactions at the same time. Furthermore, simulations using Computational Fluid Dynamics are used to obtain results much more quickly than methods requiring wind tunnel models or a large scale experimental test.
NASA Technical Reports Server (NTRS)
Akle, W.
1983-01-01
This study report defines a set of tests and measurements required to characterize the performance of a Large Space System (LSS), and to scale this data to other LSS satellites. Requirements from the Mobile Communication Satellite (MSAT) configurations derived in the parent study were used. MSAT utilizes a large, mesh deployable antenna, and encompasses a significant range of LSS technology issues in the areas of structural/dynamics, control, and performance predictability. In this study, performance requirements were developed for the antenna. Special emphasis was placed on antenna surface accuracy and pointing stability. Instrumentation and measurement systems, applicable to LSS, were selected from existing or on-going technology developments. Laser ranging and angulation systems, presently in breadboard status, form the backbone of the measurements. Following this, a set of ground, STS, and GEO-operational tests was investigated. A one-third scale (15 meter) antenna system was selected for ground characterization followed by STS flight technology development. This selection ensures analytical scaling from ground-to-orbit, and size scaling. Other benefits are cost and ability to perform reasonable ground tests. Detailed costing of the various tests and measurement systems was derived and is included in the report.
Questionnaire-based assessment of executive functioning: Psychometrics.
Castellanos, Irina; Kronenberger, William G; Pisoni, David B
2018-01-01
The psychometric properties of the Learning, Executive, and Attention Functioning (LEAF) scale were investigated in an outpatient clinical pediatric sample. As a part of clinical testing, the LEAF scale, which broadly measures neuropsychological abilities related to executive functioning and learning, was administered to parents of 118 children and adolescents referred for psychological testing at a pediatric psychology clinic; 85 teachers also completed LEAF scales to assess reliability across different raters and settings. Scores on neuropsychological tests of executive functioning and academic achievement were abstracted from charts. Psychometric analyses of the LEAF scale demonstrated satisfactory internal consistency, parent-teacher inter-rater reliability in the small to large effect size range, and test-retest reliability in the large effect size range, similar to values for other executive functioning checklists. Correlations between corresponding subscales on the LEAF and other behavior checklists were large, while most correlations with neuropsychological tests of executive functioning and achievement were significant but in the small to medium range. Results support the utility of the LEAF as a reliable and valid questionnaire-based assessment of delays and disturbances in executive functioning and learning. Applications and advantages of the LEAF and other questionnaire measures of executive functioning in clinical neuropsychology settings are discussed.
A New View on Origin, Role and Manipulation of Large Scales in Turbulent Boundary Layers
NASA Technical Reports Server (NTRS)
Corke, T. C.; Nagib, H. M.; Guezennec, Y. G.
1982-01-01
The potential of passive 'manipulators' for altering the large scale turbulent structures in boundary layers was investigated. Utilizing smoke wire visualization and multisensor probes, the experiment verified that the outer scales could be suppressed by simple arrangements of parallel plates. As a result of suppressing the outer scales in turbulent layers, a decrease in the streamwise growth of the boundary layer thickness was achieved and was coupled with a 30 percent decrease in the local wall friction coefficient. After accounting for the drag on the manipulator plates, the net drag reduction reached a value of 20 percent within 55 boundary layer thicknesses downstream of the device. No evidence for the recurrence of the outer scales was present at this streamwise distance, thereby suggesting that further reductions in the net drag are attainable. The frequency of occurrence of the wall events is simultaneously dependent on the two parameters, Re_δ2 and Re_x. As a result of being able to independently control the inner and outer boundary layer characteristics with these manipulators, a different view of these layers emerged.
AirSTAR: A UAV Platform for Flight Dynamics and Control System Testing
NASA Technical Reports Server (NTRS)
Jordan, Thomas L.; Foster, John V.; Bailey, Roger M.; Belcastro, Christine M.
2006-01-01
As part of the NASA Aviation Safety Program at Langley Research Center, a dynamically scaled unmanned aerial vehicle (UAV) and associated ground based control system are being developed to investigate dynamics modeling and control of large transport vehicles in upset conditions. The UAV is a 5.5% (seven foot wingspan), twin turbine, generic transport aircraft with a sophisticated instrumentation and telemetry package. A ground based, real-time control system is located inside an operations vehicle for the research pilot and associated support personnel. The telemetry system supports over 70 channels of data plus video for the downlink and 30 channels for the control uplink. Data rates are in excess of 200 Hz. Dynamic scaling of the UAV, which includes dimensional, weight, inertial, actuation, and control system scaling, is required so that the sub-scale vehicle will realistically simulate the flight characteristics of the full-scale aircraft. This testbed will be utilized to validate modeling methods, flight dynamics characteristics, and control system designs for large transport aircraft, with the end goal being the development of technologies to reduce the fatal accident rate due to loss-of-control.
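For illustration only, the sketch below lists the standard equal-Froude-number scaling ratios implied by "dynamically scaled"; the 5.5% geometric scale comes from the text, but the assumption of equal air density at model and full scale, and the use of Froude scaling itself, are assumptions not stated in the abstract.

```python
# Hedged illustration: Froude scaling ratios for a geometrically scaled model.
n = 0.055                    # geometric scale factor (sub-scale / full-scale)
ratios = {
    "length":            n,
    "velocity":          n ** 0.5,
    "time":              n ** 0.5,
    "frequency":         n ** -0.5,
    "mass":              n ** 3,      # assumes equal air density at both scales
    "moment_of_inertia": n ** 5,
}
for quantity, ratio in ratios.items():
    print(f"{quantity:18s} sub-scale/full-scale = {ratio:.3e}")
```

Under these relations the sub-scale vehicle flies slower but its dynamics evolve faster, which is why the telemetry and control uplink rates quoted above matter for realistic simulation of the full-scale aircraft.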
Lange, Rael T; Brickell, Tracey A; Bailie, Jason M; Tulsky, David S; French, Louis M
2016-01-01
To examine the clinical utility and psychometric properties of the Traumatic Brain Injury Quality of Life (TBI-QOL) scale in a US military population. One hundred fifty-two US military service members (age: M = 34.3, SD = 9.4; 89.5% men) prospectively enrolled from the Walter Reed National Military Medical Center and other nationwide community outreach initiatives. Participants included 99 service members who had sustained a mild traumatic brain injury (TBI) and 53 injured or noninjured controls without TBI (n = 29 and n = 24, respectively). Participants completed the TBI-QOL scale and 5 other behavioral measures, on average, 33.8 months postinjury (SD = 37.9). Fourteen TBI-QOL subscales; Neurobehavioral Symptom Inventory; Posttraumatic Stress Disorder Checklist-Civilian version; Alcohol Use Disorders Identification Test; Combat Exposure Scale. The internal consistency reliability of the TBI-QOL scales ranged from α = .91 to α = .98. The convergent and discriminant validity of the 14 TBI-QOL subscales was high. The mild TBI group had significantly worse scores on 10 of the 14 TBI-QOL subscales than the control group (range, P < .001 to P = .043). Effect sizes ranged from medium to very large (d = 0.35 to d = 1.13). The largest differences were found on the Cognition-General Concerns (d = 1.13), Executive Function (d = 0.94), Grief-Loss (d = 0.88), Pain Interference (d = 0.83), and Headache Pain (d = 0.83) subscales. These results support the use of the TBI-QOL scale as a measure of health-related quality of life in a mild TBI military sample. Additional research is recommended to further evaluate the clinical utility of the TBI-QOL scale in both military and civilian settings.
Approaches to advancescientific understanding of macrosystems ecology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Levy, Ofir; Ball, Becky; Bond-Lamberty, Benjamin
Macrosystem ecological studies inherently investigate processes that interact across multiple spatial and temporal scales, requiring intensive sampling and massive amounts of data from diverse sources to incorporate complex cross-scale and hierarchical interactions. Inherent challenges associated with these characteristics include high computational demands, data standardization and assimilation, identification of important processes and scales without prior knowledge, and the need for large, cross-disciplinary research teams that conduct long-term studies. Therefore, macrosystem ecology studies must utilize a unique set of approaches that are capable of encompassing these methodological characteristics and associated challenges. Several case studies demonstrate innovative methods used in current macrosystem ecology studies.
Robopedia: Leveraging Sensorpedia for Web-Enabled Robot Control
DOE Office of Scientific and Technical Information (OSTI.GOV)
Resseguie, David R
There is a growing interest in building Internet-scale sensor networks that integrate sensors from around the world into a single unified system. In contrast, robotics application development has primarily focused on building specialized systems. These specialized systems take scalability and reliability into consideration, but generally neglect exploring the key components required to build a large scale system. Integrating robotic applications with Internet-scale sensor networks will unify specialized robotics applications and provide answers to large scale implementation concerns. We focus on utilizing Internet-scale sensor network technology to construct a framework for unifying robotic systems. Our framework web-enables a surveillance robot's sensor observations and provides a web interface to the robot's actuators. This lets robots seamlessly integrate into web applications. In addition, the framework eliminates most prerequisite robotics knowledge, allowing for the creation of general web-based robotics applications. The framework also provides mechanisms to create applications that can interface with any robot. Frameworks such as this one are key to solving large scale mobile robotics implementation problems. We provide an overview of previous Internet-scale sensor networks, Sensorpedia (an ad-hoc Internet-scale sensor network), our framework for integrating robots with Sensorpedia, two applications which illustrate our framework's ability to support general web-based robotic control, and offer experimental results that illustrate our framework's scalability, feasibility, and resource requirements.
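For illustration, a hedged sketch of the kind of web interface such a framework implies: a robot's sensor observations exposed as an HTTP feed that an Internet-scale sensor network could poll, plus an actuator command endpoint. The endpoints, payloads, and use of Flask are assumptions, not Sensorpedia's or Robopedia's actual API.

```python
# Hedged sketch of web-enabling robot sensors and actuators over HTTP.
from flask import Flask, jsonify, request

app = Flask(__name__)
robot_state = {"battery": 0.87, "heading_deg": 120.0}   # stand-in telemetry

@app.get("/observations")
def observations():
    # An Internet-scale sensor network could ingest this feed like any sensor.
    return jsonify(robot_state)

@app.post("/actuators/drive")
def drive():
    cmd = request.get_json()             # e.g. {"speed": 0.2, "turn": -0.1}
    # ...forward the command to the robot's motion controller here...
    return jsonify({"accepted": True, "command": cmd})

if __name__ == "__main__":
    app.run(port=8080)
```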
David W. MacFarlane
2015-01-01
Accurately assessing forest biomass potential is contingent upon having accurate tree biomass models to translate data from forest inventories. Building generality into these models is especially important when they are to be applied over large spatial domains, such as regional, national and international scales. Here, new, generalized whole-tree mass / volume...
To assess large-scale ecological conditions efficiently, indicators that can be collected quickly at many sites need to be developed. We explore the utility of delta 15N from basal food chain organisms to provide information on N loading and processing in lakes, rivers and stream...
NASA Technical Reports Server (NTRS)
Koblinsky, C. J.
1984-01-01
Remotely sensed signatures of ocean surface characteristics from active and passive satellite-borne radiometers in conjunction with in situ data were utilized to examine the large scale, low frequency circulation of the world's oceans. Studies of the California Current, the Gulf of California, and the Kuroshio Extension Current in the western North Pacific were reviewed briefly. The importance of satellite oceanographic tools was emphasized.
ERIC Educational Resources Information Center
Madhavan, Krishna; Johri, Aditya; Xian, Hanjun; Wang, G. Alan; Liu, Xiaomo
2014-01-01
The proliferation of digital information technologies and related infrastructure has given rise to novel ways of capturing, storing and analyzing data. In this paper, we describe the research and development of an information system called Interactive Knowledge Networks for Engineering Education Research (iKNEER). This system utilizes a framework…
Are High School Students Living in Lodgings at an Increased Risk for Internalizing Problems?
ERIC Educational Resources Information Center
Wannebo, Wenche; Wichstrom, Lars
2010-01-01
This study aimed to investigate whether leaving home to live in lodgings during senior high school can be a risk factor for the development of internalizing problems. Utilizing two large-scale prospective community studies of 2399 and 3906 Norwegian students (age range 15-19 years), respectively, the difference in internalizing symptoms between…
LANDSAT activities in the Republic of Zaire
NASA Technical Reports Server (NTRS)
Ilunga, S.
1975-01-01
An overview of the LANDSAT data utilization program of the Republic of Zaire is presented. The program emphasizes topics of economic significance to the national development program of Zaire: (1) agricultural land use capability analysis, including evaluation of the effects of large-scale burnings; (2) mineral resources evaluation; and (3) production of mapping materials for poorly covered regions.
USDA-ARS?s Scientific Manuscript database
Throughout the western United States there is increased interest in utilizing woodland biomass as an alternative energy source. We conducted a pilot study to predict one seed juniper (Juniperus monosperma) chip yield from tree-crown dimensions measured on the ground or derived from Very Large Scale ...
Symposium on Documentation Planning in Developing Countries at Bad Godesberg, 28-30 November 1967.
ERIC Educational Resources Information Center
German Foundation for International Development, Bonn (West Germany).
One reason given for the failure of the large-scale efforts in the decade 1955-1965 to increase significantly the rate of economic and technological growth in the "developing" countries of the world has been insufficient utilization of existing information essential to this development. Motivated by this belief and the opinion that this…
Moving forward: Responding to and mitigating effects of the MPB epidemic [Chapter 8
Claudia Regan; Barry Bollenbacher; Rob Gump; Mike Hillis
2014-01-01
The final webinar in the Future Forest Webinar Series provided an example of how managers utilized available science to address questions about post-epidemic forest conditions. Assessments of current conditions and projected trends, and how these compare with historical patterns, provide important information for land management planning. Large-scale disturbance events...
Impact of a Major National Evaluation Study: Israel's Van Leer Report.
ERIC Educational Resources Information Center
Alkin, Marvin C.; Lewy, Arieh
This investigation documents the impact of the Van Leer Study, a large-scale evaluation study of achievement in the primary schools of Israel. It is intended to increase understanding of the process of evaluation utilization, showing how evaluation findings and other kinds of information can work together, over time and in a variety of ways, to…
Solar power satellite: System definition study. Part 1, volume 1: Executive summary
NASA Technical Reports Server (NTRS)
1977-01-01
A study of the solar power satellite system, which represents a means of tapping baseload electric utility power from the sun on a large scale, was summarized. Study objectives, approach, and planning are presented along with an energy conversion evaluation. Basic requirements were considered in regard to space transportation, construction, and maintainability.
ERIC Educational Resources Information Center
Pommerich, Mary
2007-01-01
Computer administered tests are becoming increasingly prevalent as computer technology becomes more readily available on a large scale. For testing programs that utilize both computer and paper administrations, mode effects are problematic in that they can result in examinee scores that are artificially inflated or deflated. As such, researchers…
Hu, Hang-Wei; Wang, Jun-Tao; Singh, Brajesh K; Liu, Yu-Rong; Chen, Yong-Liang; Zhang, Yu-Jing; He, Ji-Zheng
2018-04-24
Antibiotic resistance is ancient and prevalent in natural ecosystems and evolved long before the utilization of synthetic antibiotics started, but factors influencing the large-scale distribution patterns of natural antibiotic resistance genes (ARGs) remain largely unknown. Here, a large-scale investigation over 4000 km was performed to profile soil ARGs, plant communities and bacterial communities from 300 quadrats across five forest biomes with minimal human impact. We detected diverse and abundant ARGs in forests, including over 160 genes conferring resistance to eight major categories of antibiotics. The diversity of ARGs was strongly and positively correlated with the diversity of bacteria, herbaceous plants and mobile genetic elements (MGEs). The ARG composition was strongly correlated with the taxonomic structure of bacteria and herbs. Consistent with this strong correlation, structural equation modelling demonstrated that the positive effects of bacterial and herb communities on ARG patterns were maintained even when simultaneously accounting for multiple drivers (climate, spatial predictors and edaphic factors). These findings suggest a paradigm that the interactions between aboveground and belowground communities shape the large-scale distribution of soil resistomes, providing new knowledge for tackling the emerging environmental antibiotic resistance. © 2018 Society for Applied Microbiology and John Wiley & Sons Ltd.
Molecular Diagnosis of Malaria by Photo-Induced Electron Transfer Fluorogenic Primers: PET-PCR
Lucchi, Naomi W.; Narayanan, Jothikumar; Karell, Mara A.; Xayavong, Maniphet; Kariuki, Simon; DaSilva, Alexandre J.; Hill, Vincent; Udhayakumar, Venkatachalam
2013-01-01
There is a critical need for developing new malaria diagnostic tools that are sensitive, cost effective and capable of performing large scale diagnosis. The real-time PCR methods are particularly robust for large scale screening and they can be used in malaria control and elimination programs. We have designed novel self-quenching photo-induced electron transfer (PET) fluorogenic primers for the detection of P. falciparum and the Plasmodium genus by real-time PCR. A total of 119 samples consisting of different malaria species and mixed infections were used to test the utility of the novel PET-PCR primers in the diagnosis of clinical samples. The sensitivity and specificity were calculated using a nested PCR as the gold standard and the novel primer sets demonstrated 100% sensitivity and specificity. The limit of detection for P. falciparum was shown to be 3.2 parasites/µl using both Plasmodium genus and P. falciparum-specific primers, and 5.8 parasites/µl for P. ovale, 3.5 parasites/µl for P. malariae and 5 parasites/µl for P. vivax using the genus specific primer set. Moreover, the reaction can be duplexed to detect both Plasmodium spp. and P. falciparum in a single reaction. The PET-PCR assay does not require internal probes or intercalating dyes which makes it convenient to use and less expensive than other real-time PCR diagnostic formats. Further validation of this technique in the field will help to assess its utility for large scale screening in malaria control and elimination programs. PMID:23437209
A Hybrid Coarse-graining Approach for Lipid Bilayers at Large Length and Time Scales
Ayton, Gary S.; Voth, Gregory A.
2009-01-01
A hybrid analytic-systematic (HAS) coarse-grained (CG) lipid model is developed and employed in a large-scale simulation of a liposome. The methodology is termed hybrid analytic-systematic as one component of the interaction between CG sites is variationally determined from the multiscale coarse-graining (MS-CG) methodology, while the remaining component utilizes an analytic potential. The systematic component models the in-plane center of mass interaction of the lipids as determined from an atomistic-level MD simulation of a bilayer. The analytic component is based on the well-known Gay-Berne ellipsoid of revolution liquid crystal model, and is designed to model the highly anisotropic interactions at a highly coarse-grained level. The HAS CG approach is the first step in an “aggressive” CG methodology designed to model multi-component biological membranes at very large length and timescales. PMID:19281167
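For reference, a hedged statement of the standard single-site Gay-Berne form that the analytic component is based on; the HAS model's specific parameterization, and the variationally determined MS-CG component, are not reproduced here.

```latex
% Standard Gay-Berne ellipsoid-of-revolution potential (reference form only;
% \sigma_0, \epsilon, \kappa are generic parameters, not the HAS model's values).
U_{\mathrm{GB}}(\hat{u}_1,\hat{u}_2,\hat{r},r) =
  4\,\epsilon(\hat{u}_1,\hat{u}_2,\hat{r})
  \left[\left(\frac{\sigma_0}{r-\sigma(\hat{u}_1,\hat{u}_2,\hat{r})+\sigma_0}\right)^{12}
       -\left(\frac{\sigma_0}{r-\sigma(\hat{u}_1,\hat{u}_2,\hat{r})+\sigma_0}\right)^{6}\right]
% with the anisotropic range parameter
\sigma(\hat{u}_1,\hat{u}_2,\hat{r}) = \sigma_0\left[1-\frac{\chi}{2}\left(
   \frac{(\hat{r}\cdot\hat{u}_1+\hat{r}\cdot\hat{u}_2)^2}{1+\chi\,\hat{u}_1\cdot\hat{u}_2}
  +\frac{(\hat{r}\cdot\hat{u}_1-\hat{r}\cdot\hat{u}_2)^2}{1-\chi\,\hat{u}_1\cdot\hat{u}_2}
  \right)\right]^{-1/2},
\qquad \chi=\frac{\kappa^2-1}{\kappa^2+1},
% where \kappa is the length-to-breadth ratio of the ellipsoid and
% \hat{u}_1, \hat{u}_2, \hat{r} are the particle axes and the unit separation vector.
```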
Networking for large-scale science: infrastructure, provisioning, transport and application mapping
NASA Astrophysics Data System (ADS)
Rao, Nageswara S.; Carter, Steven M.; Wu, Qishi; Wing, William R.; Zhu, Mengxia; Mezzacappa, Anthony; Veeraraghavan, Malathi; Blondin, John M.
2005-01-01
Large-scale science computations and experiments require unprecedented network capabilities in the form of large bandwidth and dynamically stable connections to support data transfers, interactive visualizations, and monitoring and steering operations. A number of component technologies dealing with the infrastructure, provisioning, transport and application mappings must be developed and/or optimized to achieve these capabilities. We present a brief account of the following technologies that contribute toward achieving these network capabilities: (a) DOE UltraScienceNet and NSF CHEETAH network testbeds that provide on-demand and scheduled dedicated network connections; (b) experimental results on transport protocols that achieve close to 100% utilization on dedicated 1 Gbps wide-area channels; (c) a scheme for optimally mapping a visualization pipeline onto a network to minimize the end-to-end delays; and (d) interconnect configuration and protocols that provide multiple Gbps flows from Cray X1 to external hosts.
Lei, Chunyang; Bie, Hongxia; Fang, Gengfa; Gaura, Elena; Brusey, James; Zhang, Xuekun; Dutkiewicz, Eryk
2016-07-18
Super dense wireless sensor networks (WSNs) have become popular with the development of Internet of Things (IoT), Machine-to-Machine (M2M) communications and Vehicular-to-Vehicular (V2V) networks. While highly-dense wireless networks provide efficient and sustainable solutions to collect precise environmental information, a new channel access scheme is needed to solve the channel collision problem caused by the large number of competing nodes accessing the channel simultaneously. In this paper, we propose a space-time random access method based on a directional data transmission strategy, by which collisions in the wireless channel are significantly decreased and channel utility efficiency is greatly enhanced. Simulation results show that our proposed method can decrease the packet loss rate to less than 2 % in large scale WSNs and in comparison with other channel access schemes for WSNs, the average network throughput can be doubled.
NASA Astrophysics Data System (ADS)
Takasaki, Koichi
This paper presents a program for the multidisciplinary optimization and identification problem of the nonlinear model of large aerospace vehicle structures. The program constructs the global matrix of the dynamic system in the time direction by the p-version finite element method (pFEM), and the basic matrix for each pFEM node in the time direction is described by a sparse matrix similarly to the static finite element problem. The algorithm used by the program does not require the Hessian matrix of the objective function and so has low memory requirements. It also has a relatively low computational cost, and is suited to parallel computation. The program was integrated as a solver module of the multidisciplinary analysis system CUMuLOUS (Computational Utility for Multidisciplinary Large scale Optimization of Undense System) which is under development by the Aerospace Research and Development Directorate (ARD) of the Japan Aerospace Exploration Agency (JAXA).
Deep learning-based fine-grained car make/model classification for visual surveillance
NASA Astrophysics Data System (ADS)
Gundogdu, Erhan; Parıldı, Enes Sinan; Solmaz, Berkan; Yücesoy, Veysel; Koç, Aykut
2017-10-01
Fine-grained object recognition is a challenging computer vision problem that has been recently addressed by utilizing deep Convolutional Neural Networks (CNNs). Nevertheless, the main disadvantage of classification methods relying on deep CNN models is the need for a considerably large amount of data. In addition, relatively little annotated data exists for a real world application, such as the recognition of car models in a traffic surveillance system. To this end, we mainly concentrate on the classification of fine-grained car make and/or models for visual scenarios with the help of two different domains. First, a large-scale dataset including approximately 900K images is constructed from a website which includes fine-grained car models. According to their labels, a state-of-the-art CNN model is trained on the constructed dataset. The second domain that is dealt with is the set of images collected from a camera integrated into a traffic surveillance system. These images, numbering over 260K, are gathered by a special license plate detection method on top of a motion detection algorithm. An appropriately sized image patch is cropped from the region of interest provided by the detected license plate location. These sets of images and their provided labels for more than 30 classes are employed to fine-tune the CNN model which is already trained on the large scale dataset described above. To fine-tune the network, the last two fully-connected layers are randomly initialized and the remaining layers are fine-tuned on the second dataset. In this work, the transfer of a model learned on a large dataset to a smaller one has been successfully performed by utilizing both the limited annotated data of the traffic field and a large scale dataset with available annotations. Our experimental results both in the validation dataset and the real field show that the proposed methodology performs favorably against training the CNN model from scratch.
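A hedged sketch of the transfer-learning step described above: re-initialize the last two fully-connected layers of a network pretrained on the large web-collected dataset and fine-tune all layers on the surveillance images. The backbone choice, hyperparameters, and checkpoint name are placeholders, not the authors' configuration.

```python
# Hedged sketch: re-initialize the last two FC layers, then fine-tune all layers.
import torch
import torch.nn as nn
from torchvision import models

num_surveillance_classes = 30
model = models.vgg16(weights=None)   # stand-in backbone, not the authors' network
# model.load_state_dict(torch.load("web_cars_pretrained.pt"))  # hypothetical checkpoint

# Randomly re-initialize the last two fully-connected layers.
model.classifier[3] = nn.Linear(4096, 4096)
model.classifier[6] = nn.Linear(4096, num_surveillance_classes)

# Fine-tune every layer (the text says only the last two FC layers are
# re-initialized, not that earlier layers are frozen).
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```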
NASA Astrophysics Data System (ADS)
Block, P. J.; Alexander, S.; WU, S.
2017-12-01
Skillful season-ahead predictions conditioned on local and large-scale hydro-climate variables can provide valuable knowledge to farmers and reservoir operators, enabling informed water resource allocation and management decisions. In Ethiopia, the potential for advancing agriculture and hydropower management, and subsequently economic growth, is substantial, yet evidence suggests a weak adoption of prediction information by sectoral audiences. To address common critiques, including skill, scale, and uncertainty, probabilistic forecasts are developed at various scales - temporally and spatially - for the Finchaa hydropower dam and the Koga agricultural scheme in an attempt to promote uptake and application. Significant prediction skill is evident across scales, particularly for statistical models. This raises questions regarding other potential barriers to forecast utilization at community scales, which are also addressed.
Won, Sungho; Choi, Hosik; Park, Suyeon; Lee, Juyoung; Park, Changyi; Kwon, Sunghoon
2015-01-01
Owing to recent improvements in genotyping technology, large-scale genetic data can be utilized to identify disease susceptibility loci, and these successful findings have substantially improved our understanding of complex diseases. However, in spite of these successes, most of the genetic effects for many complex diseases were found to be very small, which has been a big hurdle to building disease prediction models. Recently, many statistical methods based on penalized regression have been proposed to tackle the so-called "large P and small N" problem. Penalized regressions, including the least absolute shrinkage and selection operator (LASSO) and ridge regression, limit the space of parameters, and this constraint enables the estimation of effects for a very large number of SNPs. Various extensions have been suggested, and, in this report, we compare their accuracy by applying them to several complex diseases. Our results show that penalized regressions are usually robust and provide better accuracy than the existing methods, at least for the diseases under consideration.
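A minimal sketch of penalized (L1/L2) logistic regression in the "large P, small N" setting described above, using a simulated genotype matrix; the data, effect sizes and penalty strengths are made up for illustration and are not from the report:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, p = 200, 5000                                     # many more SNPs than subjects
X = rng.integers(0, 3, size=(n, p)).astype(float)    # genotypes coded 0/1/2
beta = np.zeros(p)
beta[:20] = 0.3                                      # a few weak causal SNPs
score = X @ beta + rng.normal(size=n)
y = (score > np.median(score)).astype(int)           # binary disease status

# LASSO-penalized (L1) and ridge-penalized (L2) logistic regression.
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
ridge = LogisticRegression(penalty="l2", solver="liblinear", C=0.1).fit(X, y)
print("nonzero LASSO coefficients:", int((lasso.coef_ != 0).sum()))
print("training accuracy (LASSO, ridge):", lasso.score(X, y), ridge.score(X, y))
```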
Richardson, Jeff; Iezzi, Angelo; Khan, Munir A
2015-08-01
Health state utilities measured by the major multi-attribute utility instruments differ. Understanding the reasons for this is important for the choice of instrument and for research designed to reconcile these differences. This paper investigates these reasons by explaining pairwise differences between utilities derived from six multi-attribute utility instruments in terms of (1) their implicit measurement scales; (2) the structure of their descriptive systems; and (3) 'micro-utility effects', scale-adjusted differences attributable to their utility formulae. The EQ-5D-5L, SF-6D, HUI 3, 15D and AQoL-8D were administered to 8,019 individuals. Utilities and unweighted values were calculated using each instrument. Scale effects were determined by the linear relationship between utilities, the effect of the descriptive system by comparison of scale-adjusted values, and 'micro-utility effects' by the unexplained difference between utilities and values. Overall, 66% of the differences between utilities were attributable to the descriptive systems, 30.3% to scale effects and 3.7% to micro-utility effects. The results imply that revision of utility algorithms will not reconcile differences between instruments. The dominating importance of the descriptive system highlights the need for researchers to select the instrument most capable of describing the health states relevant for a study. Reconciliation of inconsistent utilities produced by different instruments must focus primarily upon the content of the descriptive system. Utility weights primarily determine the measurement scale. Other differences, attributable to the utility formulae, are comparatively unimportant.
Extreme Cost Reductions with Multi-Megawatt Centralized Inverter Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schwabe, Ulrich; Fishman, Oleg
2015-03-20
The objective of this project was to fully develop, demonstrate, and commercialize a new type of utility-scale PV system. Based on patented technology, this includes the development of a truly centralized inverter system with capacities up to 100 MW, and a high-voltage, distributed harvesting approach. This system promises to increase the energy yield of large-scale PV systems, by reducing losses and recovering yield from mismatched arrays, and to reduce overall system costs through very cost-effective conversion and the BOS cost reductions enabled by higher-voltage operation.
Self and world: large scale installations at science museums.
Shimojo, Shinsuke
2008-01-01
This paper describes three examples of illusion installation in a science museum environment from the author's collaboration with the artist and architect. The installations amplify the illusory effects, such as vection (visually-induced sensation of self motion) and motion-induced blindness, to emphasize that perception is not just to obtain structure and features of objects, but rather to grasp the dynamic relationship between the self and the world. Scaling up the size and utilizing the live human body turned out to be keys for installations with higher emotional impact.
Solar Energy Technologies Office FY 2017 Budget At-A-Glance
DOE Office of Scientific and Technical Information (OSTI.GOV)
None, None
2016-03-01
The Solar Energy Technologies Office supports the SunShot Initiative goal to make solar energy technologies cost competitive with conventional energy sources by 2020. Reducing the total installed cost for utility-scale solar electricity by approximately 75% (2010 baseline) to roughly $0.06 per kWh without subsidies will enable rapid, large-scale adoption of solar electricity across the United States. This investment will help re-establish American technological and market leadership in solar energy, reduce environmental impacts of electricity generation, and strengthen U.S. economic competitiveness.
Recovery Act: Oxy-Combustion Technology Development for Industrial-Scale Boiler Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Levasseur, Armand
2014-04-30
Alstom Power Inc. (Alstom), under U.S. DOE/NETL Cooperative Agreement No. DE-NT0005290, is conducting a development program to generate detailed technical information needed for application of oxy-combustion technology. The program is designed to provide the necessary information and understanding for the next step of large-scale commercial demonstration of oxy-combustion in tangentially fired boilers and to accelerate the commercialization of this technology. The main project objectives include: • Design and develop an innovative oxyfuel system for existing tangentially-fired boiler units that minimizes overall capital investment and operating costs. • Evaluate performance of oxyfuel tangentially fired boiler systems in pilot-scale tests at Alstom’s 15 MWth tangentially fired Boiler Simulation Facility (BSF). • Address technical gaps for the design of oxyfuel commercial utility boilers by focused testing and improvement of engineering and simulation tools. • Develop the design, performance and costs for a demonstration-scale oxyfuel boiler and auxiliary systems. • Develop the design and costs for both industrial and utility commercial-scale reference oxyfuel boilers and auxiliary systems that are optimized for overall plant performance and cost. • Define key design considerations and develop general guidelines for application of results to utility and different industrial applications. The project was initiated in October 2008 and the scope extended in 2010 under an ARRA award. The project completion date was April 30, 2014. Central to the project is 15 MWth testing in the BSF, which provided in-depth understanding of oxy-combustion under boiler conditions, detailed data for improvement of design tools, and key information for application to commercial-scale oxy-fired boiler design. Eight comprehensive 15 MWth oxy-fired test campaigns were performed with different coals, providing detailed data on combustion, emissions, and thermal behavior over a matrix of fuels, oxy-process variables and boiler design parameters. Significant improvement of CFD modeling tools and validation against 15 MWth experimental data has been completed. Oxy-boiler demonstration and large reference designs have been developed, supported with the information and knowledge gained from the 15 MWth testing. The results from the 15 MWth testing in the BSF and complementary bench-scale testing are addressed in this volume (Volume II) of the final report. The results of the modeling efforts (Volume III) and the oxy-boiler design efforts (Volume IV) are reported in separate volumes.
NASA Astrophysics Data System (ADS)
Sinha, Neeraj; Zambon, Andrea; Ott, James; Demagistris, Michael
2015-06-01
Driven by the continuing rapid advances in high-performance computing, multi-dimensional high-fidelity modeling is an increasingly reliable predictive tool capable of providing valuable physical insight into complex post-detonation reacting flow fields. Utilizing a series of test cases featuring blast waves interacting with combustible dispersed clouds in a small-scale test setup under well-controlled conditions, the predictive capabilities of a state-of-the-art code are demonstrated and validated. Leveraging physics-based, first principle models and solving large system of equations on highly-resolved grids, the combined effects of finite-rate/multi-phase chemical processes (including thermal ignition), turbulent mixing and shock interactions are captured across the spectrum of relevant time-scales and length scales. Since many scales of motion are generated in a post-detonation environment, even if the initial ambient conditions are quiescent, turbulent mixing plays a major role in the fireball afterburning as well as in dispersion, mixing, ignition and burn-out of combustible clouds in its vicinity. Validating these capabilities at the small scale is critical to establish a reliable predictive tool applicable to more complex and large-scale geometries of practical interest.
Sze, Sing-Hoi; Parrott, Jonathan J; Tarone, Aaron M
2017-12-06
While the continued development of high-throughput sequencing has facilitated studies of entire transcriptomes in non-model organisms, the incorporation of an increasing number of RNA-Seq libraries has made de novo transcriptome assembly difficult. Although algorithms that can assemble a large amount of RNA-Seq data are available, they are generally very memory-intensive and can only be used to construct small assemblies. We develop a divide-and-conquer strategy that allows these algorithms to be utilized, by subdividing a large RNA-Seq data set into small libraries. Each individual library is assembled independently by an existing algorithm, and a merging algorithm is developed to combine these assemblies by picking a subset of high-quality transcripts to form a large transcriptome. When compared to existing algorithms that return a single assembly directly, this strategy achieves comparable or higher accuracy than memory-efficient algorithms that can process a large amount of RNA-Seq data, and comparable or slightly lower accuracy than memory-intensive algorithms that can only be used to construct small assemblies. Our divide-and-conquer strategy thus allows memory-intensive de novo transcriptome assembly algorithms to be utilized to construct large assemblies.
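A schematic sketch of the divide-and-conquer idea described above, with a placeholder assembler and a toy merging rule (keep the longest transcript per sequence-prefix class); the real merging algorithm selects high-quality transcripts by more involved criteria, and the reads below are invented:

```python
def assemble(library):
    """Placeholder for an existing de novo assembler run on one small library.
    Here every read is simply treated as an assembled transcript."""
    return library

def merge(assemblies, k=25):
    """Toy merge: index transcripts by a k-base prefix and keep the longest one
    per class, standing in for picking a subset of high-quality transcripts."""
    best = {}
    for transcripts in assemblies:
        for t in transcripts:
            key = t[:k]
            if key not in best or len(t) > len(best[key]):
                best[key] = t
    return list(best.values())

# Divide a large read set into small libraries, assemble independently, then merge.
reads = ["ACGT" * 30, "ACGT" * 40, "TTGCA" * 20, "TTGCA" * 25]
libraries = [reads[i::2] for i in range(2)]            # subdivide the data set
transcriptome = merge([assemble(lib) for lib in libraries])
print(len(transcriptome), "merged transcripts")
```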
NASA Astrophysics Data System (ADS)
Qu, Xingtian; Li, Jinlai; Yin, Zhifu
2018-04-01
Micro- and nanofluidic chips are becoming increasingly significant for biological and medical applications. Future advances in micro- and nanofluidics and their utilization in commercial applications depend on the development and fabrication of low-cost, high-fidelity, large-scale plastic micro- and nanofluidic chips. However, the majority of present fabrication methods suffer from a low bonding rate of the chip during the thermal bonding process due to air trapping between the substrate and the cover plate. In the present work, a novel bonding technique based on Ar plasma and water treatment was proposed to fully bond large-scale micro- and nanofluidic chips. The influence of the Ar plasma parameters on the water contact angle and the effect of bonding conditions on the bonding rate and the bonding strength of the chip were studied. Fluorescence tests demonstrate that a 5 × 5 cm2 poly(methyl methacrylate) chip with 180 nm wide and 180 nm deep nanochannels can be fabricated without any blockage or leakage by our newly developed method.
Ten-channel InP-based large-scale photonic integrated transmitter fabricated by SAG technology
NASA Astrophysics Data System (ADS)
Zhang, Can; Zhu, Hongliang; Liang, Song; Cui, Xiao; Wang, Huitao; Zhao, Lingjuan; Wang, Wei
2014-12-01
A 10-channel InP-based large-scale photonic integrated transmitter was fabricated by selective area growth (SAG) technology combined with butt-joint regrowth (BJR) technology. The SAG technology was utilized to fabricate the electroabsorption-modulated distributed feedback (DFB) laser (EML) arrays simultaneously. A coplanar electrode design for the electroabsorption modulators (EAMs) was used to allow flip-chip bonding packaging. The lasing wavelength of each DFB laser can be tuned by an integrated micro-heater to match the ITU grid, which requires only one electrode pad. The average output power of each channel is 250 μW at an injection current of 200 mA. The static extinction ratios of the EAMs for the 10 channels tested range from 15 to 27 dB at a reverse bias of 6 V. The 3 dB bandwidth of the chip is around 14 GHz for each channel. The novel design and simple fabrication process show enormous potential for reducing the cost of large-scale photonic integrated circuit (LS-PIC) transmitters with high chip yields.
A Large Scale Wind Tunnel for the Study of High Reynolds Number Turbulent Boundary Layer Physics
NASA Astrophysics Data System (ADS)
Priyadarshana, Paththage; Klewicki, Joseph; Wosnik, Martin; White, Chris
2008-11-01
Progress and the basic features of the University of New Hampshire's very large multi-disciplinary wind tunnel are reported. The refinement of the overall design has been greatly aided through consultations with an external advisory group. The facility test section is 73 m long, 6 m wide, and 2.5 m nominally high, and the maximum free stream velocity is 30 m/s. A very large tunnel with relatively low velocities makes the small scale turbulent motions resolvable by existing measurement systems. The maximum Reynolds number is estimated at δ+ = δuτ/ν ≈ 50000, where δ is the boundary layer thickness and uτ is the friction velocity. The effects of scale separation on the generation of the Reynolds stress gradient appearing in the mean momentum equation are briefly discussed to justify the need to attain δ+ in excess of about 40000. Lastly, plans for future utilization of the facility as a community-wide resource are outlined. This project is supported through the NSF-EPSCoR RII Program, grant number EPS0701730.
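A quick check of that order-of-magnitude estimate, using illustrative assumed values (not reported in the abstract) for the boundary layer thickness and friction velocity near the end of the test section:

```python
# delta+ = delta * u_tau / nu  (friction Reynolds number)
delta = 0.70    # boundary layer thickness [m], assumed
u_tau = 1.05    # friction velocity [m/s], roughly 3.5% of the 30 m/s free stream, assumed
nu = 1.5e-5     # kinematic viscosity of air [m^2/s]
print(delta * u_tau / nu)   # ~49000, consistent with the quoted estimate of 50000
```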
Building Indigenous Community Resilience in the Great Plains
NASA Astrophysics Data System (ADS)
Gough, B.
2014-12-01
Indigenous community resilience is rooted in the seasoned lifeways, developed over generations, incorporated into systems of knowledge, and realized in artifacts of infrastructure through keen observations of the truth and consequences of their interactions with the environment found in place over time. Their value lies, not in their nature as artifacts, but in the underlying patterns and processes of culture: how previous adaptations were derived and evolved, and how the principles and processes of detailed observation may inform future adaptations. This presentation examines how such holistic community approaches, reflected in design and practice, can be applied to contemporary issues of energy and housing in a rapidly changing climate. The Indigenous Peoples of the Great Plains seek to utilize the latest scientific climate modeling to support the development of large, utility scale distributed renewable energy projects and to re-invigorate an indigenous housing concept of straw bale construction, originating in this region. In the energy context, we explore the potential for the development of an intertribal wind energy dynamo on the Great Plains, utilizing elements of existing federal policies for Indian energy development and existing federal infrastructure initially created to serve hydropower resources, which may be significantly altered under current and prospective drought scenarios. For housing, we consider the opportunity to address the built environment in Indian Country, where Tribes have greater control as it consists largely of residences needed for their growing populations. Straw bale construction allows for greater use of local natural and renewable materials in a strategy for preparedness for the weather extremes and insurance perils already common to the region, provides solutions to chronic unemployment and increasing energy costs, while offering greater affordable comfort in both low and high temperature extremes. The development of large utility scale distributed wind gives greater systemwide flexibility to incorporate renewables and the communal construction techniques associated with straw bale housing puts high-performance shelter back into the hands of the community. Creative and distributed experimentation can result in more graceful failures forward.
Williams, Leanne M
2016-01-01
Complex emotional, cognitive and self-reflective functions rely on the activation and connectivity of large-scale neural circuits. These circuits offer a relevant scale of focus for conceptualizing a taxonomy for depression and anxiety based on specific profiles (or biotypes) of neural circuit dysfunction. Here, the theoretical review first outlined the current consensus as to what constitutes the organization of large-scale circuits in the human brain identified using parcellation and meta-analysis. The focus is on neural circuits implicated in resting reflection (“default mode”), detection of “salience”, affective processing (“threat” and “reward”), “attention” and “cognitive control”. Next, the current evidence regarding which type of dysfunctions in these circuits characterize depression and anxiety disorders was reviewed, with an emphasis on published meta-analyses and reviews of circuit dysfunctions that have been identified in at least two well-powered case:control studies. Grounded in the review of these topics, a conceptual framework is proposed for considering neural circuit-defined “biotypes”. In this framework, biotypes are defined by profiles of extent of dysfunction on each large-scale circuit. The clinical implications of a biotype approach for guiding classification and treatment of depression and anxiety is considered. Future research directions will develop the validity and clinical utility of a neural circuit biotype model that spans diagnostic categories and helps to translate neuroscience into clinical practice in the real world. PMID:27653321
77 FR 9700 - Utility Scale Wind Towers From China and Vietnam
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-17
...)] Utility Scale Wind Towers From China and Vietnam Determinations On the basis of the record \\1\\ developed... threatened with material injury by reason of imports from China of utility scale wind towers, provided for in... with material injury by reason of imports from Vietnam of utility scale wind towers, provided for in...
78 FR 10210 - Utility Scale Wind Towers From China and Vietnam
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-13
...)] Utility Scale Wind Towers From China and Vietnam Determinations On the basis of the record \\1\\ developed... with material injury by reason of imports of utility scale wind towers from China and Vietnam, provided... of imports of utility scale wind towers from China and Vietnam. Commissioner Dean A. Pinkert...
NASA Technical Reports Server (NTRS)
Klumpar, D. M. (Principal Investigator)
1982-01-01
Progress made in reducing MAGSAT data and displaying magnetic field perturbations caused primarily by external currents is reported. A periodic and repeatable perturbation pattern is described that arises from external current effects but appears as unique signatures associated with upper middle latitudes on the Earth's surface. Initial testing of the modeling procedure that was developed to compute the magnetic fields at satellite orbit due to current distributions in the ionosphere and magnetosphere is also discussed. The modeling technique utilizes a linear current element representation of the large scale space current system.
NASA Technical Reports Server (NTRS)
Klumpar, D. M. (Principal Investigator)
1982-01-01
Efforts in support of the development of a model of the magnetic fields due to ionospheric and magnetospheric electrical currents are discussed. Specifically, progress made in reading MAGSAT tapes and plotting the deviation of the measured magnetic field components with respect to a spherical harmonic model of the main geomagnetic field is reported. Initial tests of the modeling procedure developed to compute the ionosphere/magnetosphere-induced fields at satellite orbit are also described. The modeling technique utilizes a linear current element representation of the large scale current system.
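A minimal numerical sketch of a linear (straight-segment) current element representation: the magnetic perturbation at an observation point is obtained by summing Biot-Savart contributions from finite current segments. The segment geometry, current strength and observation altitude below are arbitrary illustrative values, not a model of the actual ionospheric/magnetospheric current system:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability [T*m/A]

def b_field(obs, nodes, current):
    """Biot-Savart field at point `obs` from a polyline current of given amperage,
    approximating each straight segment by its midpoint element I*dl x r / r^3."""
    b = np.zeros(3)
    for a, c in zip(nodes[:-1], nodes[1:]):
        dl = c - a
        r = obs - (a + c) / 2.0
        b += MU0 / (4 * np.pi) * current * np.cross(dl, r) / np.linalg.norm(r) ** 3
    return b

# Illustrative line current along x at 110 km altitude, observed from 450 km altitude.
nodes = np.array([[x, 0.0, 110e3] for x in np.linspace(-2000e3, 2000e3, 401)])
print(b_field(np.array([0.0, 0.0, 450e3]), nodes, current=1e5))  # perturbation in tesla
```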
Muthamilarasan, Mehanathan; Venkata Suresh, B.; Pandey, Garima; Kumari, Kajal; Parida, Swarup Kumar; Prasad, Manoj
2014-01-01
Generating genomic resources in terms of molecular markers is imperative in molecular breeding for crop improvement. Though the large-scale development and application of microsatellite markers has been reported in the model crop foxtail millet, no such large-scale study had been conducted for intron-length polymorphic (ILP) markers. Considering this, we developed 5123 ILP markers, of which 4049 were physically mapped onto the 9 chromosomes of foxtail millet. BLAST analysis of 5123 expressed sequence tags (ESTs) suggested functions for ∼71.5% of the ESTs and grouped them into 5 different functional categories. About 440 selected primer pairs representing the foxtail millet genome and the different functional groups showed a high level of cross-genera amplification, at an average of ∼85%, in eight millet and five non-millet species. The efficacy of the ILP markers for distinguishing foxtail millet is demonstrated by the observed heterozygosity (0.20) and Nei's average gene diversity (0.22). In silico comparative mapping of the physically mapped ILP markers demonstrated a substantial percentage of sequence-based orthology and syntenic relationships between foxtail millet chromosomes and sorghum (∼50%), maize (∼46%), rice (∼21%) and Brachypodium (∼21%) chromosomes. Hence, for the first time, we developed large-scale ILP markers in foxtail millet and demonstrated their utility in germplasm characterization, transferability, phylogenetics and comparative mapping studies in millets and bioenergy grass species. PMID:24086082
Asteroid Redirect Mission Concept: A Bold Approach for Utilizing Space Resources
NASA Technical Reports Server (NTRS)
Mazanek, Daniel D.; Merrill, Raymond G.; Brophy, John R.; Mueller, Robert P.
2014-01-01
The utilization of natural resources from asteroids is an idea that is older than the Space Age. The technologies are now available to transform this endeavour from an idea into reality. The Asteroid Redirect Mission (ARM) is a mission concept which includes the goal of robotically returning a small Near-Earth Asteroid (NEA) or a multi-ton boulder from a large NEA to cislunar space in the mid 2020's using an advanced Solar Electric Propulsion (SEP) vehicle and currently available technologies. The paradigm shift enabled by the ARM concept would allow in-situ resource utilization (ISRU) to be used at the human mission departure location (i.e., cislunar space) versus exclusively at the deep-space mission destination. This approach drastically reduces the barriers associated with utilizing ISRU for human deep-space missions. The successful testing of ISRU techniques and associated equipment could enable large-scale commercial ISRU operations to become a reality and enable a future space-based economy utilizing processed asteroidal materials. This paper provides an overview of the ARM concept and discusses the mission objectives, key technologies, and capabilities associated with the mission, as well as how the ARM and associated operations would benefit humanity's quest for the exploration and settlement of space.
Lunga, Dalton D.; Yang, Hsiuhan Lexie; Reith, Andrew E.; ...
2018-02-06
Satellite imagery often exhibits large spatial extents that encompass object classes with considerable variability. This often limits large-scale model generalization with machine learning algorithms. Notably, acquisition conditions, including dates, sensor position, lighting condition, and sensor type, often translate into class distribution shifts that introduce complex nonlinear factors and hamper the potential impact of machine learning classifiers. This article investigates the challenge of exploiting satellite images using convolutional neural networks (CNNs) for settlement classification where the class distribution shifts are significant. We present a large-scale human settlement mapping workflow based on multiple modules that adapt a pretrained CNN to address the negative impact of distribution shift on classification performance. To extend a locally trained classifier onto large spatial extents we introduce several submodules: first, a human-in-the-loop element for relabeling misclassified target-domain samples to generate representative examples for model adaptation; second, an efficient hashing module to minimize redundancy and noisy samples among the mass-selected examples; and third, a novel relevance ranking module to minimize the dominance of source examples on the target domain. The workflow presents a novel and practical approach to achieving large-scale domain adaptation with binary classifiers built on CNN features. Experimental evaluations are conducted on areas of interest that encompass various image characteristics, including multisensor, multitemporal, and multiangular conditions. Domain adaptation is assessed on source-target pairs through the transfer loss and transfer ratio metrics to illustrate the utility of the workflow.
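A toy sketch of the kind of hashing-based redundancy filter mentioned above: feature vectors (random stand-ins for CNN features here) are coarsely quantized and hashed so near-duplicate samples collapse onto the same key and only one representative per key is kept. This is an illustration of the idea, not the authors' module:

```python
import hashlib
import numpy as np

def dedup_by_hash(features, decimals=0):
    """Keep one sample index per hash of the coarsely quantized feature vector."""
    kept = {}
    for i, f in enumerate(features):
        key = hashlib.md5(np.round(f, decimals).tobytes()).hexdigest()
        kept.setdefault(key, i)          # first sample wins for each bucket
    return sorted(kept.values())

rng = np.random.default_rng(1)
base = rng.normal(size=(50, 128))                        # 50 distinct "CNN features"
noisy = base + rng.normal(scale=1e-6, size=base.shape)   # near-duplicates of them
features = np.vstack([base, noisy])
print(len(dedup_by_hash(features)), "of", len(features), "samples retained")
```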
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-17
... DEPARTMENT OF COMMERCE International Trade Administration [A-570-981, A-552-814] Utility Scale... duty investigations of utility scale wind towers from the People's Republic of China and the Socialist... investigations are currently due no later than June 6, 2012. \\1\\ See Utility Scale Wind Towers From the People's...
The Utility-Scale Future - Continuum Magazine | NREL
Spring 2011 / Issue 1. Continuum: Clean Energy Innovation at NREL. Feature: The Utility-Scale Future; Wind Innovation Enables Utility-Scale.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marrinan, Thomas; Leigh, Jason; Renambot, Luc
Mixed presence collaboration involves remote collaboration between multiple collocated groups. This paper presents the design and results of a user study that focused on mixed presence collaboration using large-scale tiled display walls. The research was conducted in order to compare data synchronization schemes for multi-user visualization applications. Our study compared three techniques for sharing data between display spaces with varying constraints and affordances. The results provide empirical evidence that data sharing techniques with continuous synchronization between the sites lead to improved collaboration for a search and analysis task between remotely located groups. We have also identified aspects of synchronized sessions that result in increased remote collaborator awareness and parallel task coordination. It is believed that this research will lead to better utilization of large-scale tiled display walls for distributed group work.
NASA Astrophysics Data System (ADS)
Sreekanth, J.; Moore, Catherine
2018-04-01
The application of global sensitivity and uncertainty analysis techniques to groundwater models of deep sedimentary basins is typically challenged by large computational burdens combined with associated numerical stability issues. The highly parameterized approaches required for exploring the predictive uncertainty associated with the heterogeneous hydraulic characteristics of multiple aquifers and aquitards in these sedimentary basins exacerbate these issues. A novel Patch Modelling Methodology is proposed for improving the computational feasibility of stochastic modelling analysis of large-scale and complex groundwater models. The method incorporates a nested groundwater modelling framework that enables efficient simulation of groundwater flow and transport across multiple spatial and temporal scales. The method also allows different processes to be simulated within different model scales. Existing nested model methodologies are extended by employing 'joining predictions' for extrapolating prediction-salient information from one model scale to the next. This establishes a feedback mechanism supporting the transfer of information from child models to parent models, as well as from parent models to child models, in a computationally efficient manner. This feedback mechanism is simple and flexible, and it ensures that while the salient small-scale features influencing larger-scale predictions are transferred back to the larger scale, this does not require the live coupling of models. The method allows multiple groundwater flow and transport processes to be modelled using separate groundwater models that are built for the appropriate spatial and temporal scales, within a stochastic framework, while also removing the computational burden associated with live model coupling. The utility of the method is demonstrated by application to an actual large-scale aquifer injection scheme in Australia.
Kenneth Chilman; James Vogel; Greg Brown; John H. Burde
2004-01-01
This paper has 3 purposes: to discuss 1. case study research and its utility for recreation management decisionmaking, 2. the recreation visitor inventory and monitoring process developed from case study research, and 3. a successful replication of the process in a large-scale, multi-year application. Although case study research is discussed in research textbooks as...
ERIC Educational Resources Information Center
Ker, Hsiang-Wei
2017-01-01
Motivational constructs and students' engagements have great impacts on students' mathematics achievements, yet they have not been theoretically investigated using international large-scale assessment data. This study utilized the mathematics data of the Trends in International Mathematics and Science Study 2011 to conduct a comparative and…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Joseph; Pirrung, Meg; McCue, Lee Ann
FQC is software that facilitates large-scale quality control of FASTQ files by carrying out a QC protocol, parsing results, and aggregating quality metrics within and across experiments into an interactive dashboard. The dashboard utilizes human-readable configuration files to manipulate the pages and tabs, and is extensible with CSV data.
ERIC Educational Resources Information Center
Langley, Hillary A.; Coffman, Jennifer L.; Ornstein, Peter A.
2017-01-01
Data from a large-scale, longitudinal research study with an ethnically and socioeconomically diverse sample were utilized to explore linkages between maternal elaborative conversational style and the development of children's autobiographical and deliberate memory. Assessments were made when the children were aged 3, 5, and 6 years old, and the…
Does the public notice visual resource problems on the federal estate?
John D. Peine
1979-01-01
Results of the 1977 Federal estate recreation survey are highlighted. The survey of recreation on the Federal estate represents a unique data set which was uniformly collected across all Federal land managing agencies and sections of the country. The on-site sampling procedures utilized in this survey process have never before been applied on such a large scale. Procedures followed and...
ERIC Educational Resources Information Center
Lavonen, Jari; Juuti, Kalle; Meisalo, Veijo
2003-01-01
In this study we analyse how the experiences of chemistry teachers on the use of a Microcomputer-Based Laboratory (MBL), gathered by a Likert-scale instrument, can be utilized to develop the new package "Empirica 2000." We used exploratory factor analysis to identify the essential features in a large set of questionnaire data to see how…
LaWen T. Hollingsworth; Laurie L. Kurth; Bernard R. Parresol; Roger D. Ottmar; Susan J. Prichard
2012-01-01
Landscape-scale fire behavior analyses are important to inform decisions on resource management projects that meet land management objectives and protect values from adverse consequences of fire. Deterministic and probabilistic geospatial fire behavior analyses are conducted with various modeling systems including FARSITE, FlamMap, FSPro, and Large Fire Simulation...
NASA Technical Reports Server (NTRS)
Suematsu, Y.; Iga, K.
1980-01-01
Crystal growth and the characteristics of semiconductor lasers and diodes for the long wavelength band used in optical communications are examined. It is concluded that to utilize the advantages of this band, it is necessary to have large-scale multiple-wavelength communication, along with optical cumulative circuits and optical exchangers.
Dirk Pflugmacher; Warren B. Cohen; Robert E. Kennedy; Michael. Lefsky
2008-01-01
Accurate estimates of forest aboveground biomass are needed to reduce uncertainties in global and regional terrestrial carbon fluxes. In this study we investigated the utility of the Geoscience Laser Altimeter System (GLAS) onboard the Ice, Cloud and land Elevation Satellite for large-scale biomass inventories. GLAS is the first spaceborne lidar sensor that will...
ERIC Educational Resources Information Center
Conley, David; Lombardi, Allison; Seburn, Mary; McGaughy, Charis
2009-01-01
This study reports the preliminary results from a field test of the College-readiness Performance Assessment System (C-PAS), a large-scale, 6th-12th grade criterion-referenced assessment system that utilizes classroom-embedded performance tasks to measure student progress toward the development of key cognitive skills associated with success in…
Isosurface Extraction in Time-Varying Fields Using a Temporal Hierarchical Index Tree
NASA Technical Reports Server (NTRS)
Shen, Han-Wei; Gerald-Yamasaki, Michael (Technical Monitor)
1998-01-01
Many high-performance isosurface extraction algorithms have been proposed in the past several years as a result of intensive research efforts. When applying these algorithms to large-scale time-varying fields, the storage overhead incurred from storing the search index often becomes overwhelming. This paper proposes an algorithm for locating isosurface cells in time-varying fields. We devise a new data structure, called the Temporal Hierarchical Index Tree, which utilizes the temporal coherence that exists in a time-varying field and adaptively coalesces the cells' extreme values over time; the resulting extreme values are then used to create the isosurface cell search index. For a typical time-varying scalar data set, not only does this temporal hierarchical index tree require much less storage space, but the amount of I/O required to access the indices from disk at different time steps is also substantially reduced. We illustrate the utility and speed of our algorithm with data from several large-scale time-varying CFD simulations. Our algorithm can achieve more than 80% disk-space savings when compared with existing techniques, while the isosurface extraction time remains nearly optimal.
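A simplified sketch of the idea behind such a temporal index: each cell's scalar extremes are stored per time step at the leaves, parent nodes coalesce (min, max) ranges over their time interval, and an isosurface query descends only through nodes whose coalesced range still brackets the isovalue. The tolerance-based adaptive coalescing used in the paper is omitted, and the data are toy values:

```python
class Node:
    def __init__(self, t0, t1, ranges, children=()):
        self.t0, self.t1 = t0, t1          # time interval covered [t0, t1]
        self.ranges = ranges               # per-cell (min, max) over the interval
        self.children = children

def build(extremes, t0, t1):
    """extremes[t][c] = (min, max) of cell c at time step t."""
    if t0 == t1:
        return Node(t0, t1, list(extremes[t0]))
    mid = (t0 + t1) // 2
    left, right = build(extremes, t0, mid), build(extremes, mid + 1, t1)
    merged = [(min(a0, b0), max(a1, b1))
              for (a0, a1), (b0, b1) in zip(left.ranges, right.ranges)]
    return Node(t0, t1, merged, (left, right))

def query(node, t, iso):
    """Return indices of cells whose value range brackets `iso` at time step t."""
    cand = [c for c, (lo, hi) in enumerate(node.ranges) if lo <= iso <= hi]
    if not node.children or not cand:
        return cand
    child = node.children[0] if t <= node.children[0].t1 else node.children[1]
    keep = set(cand)
    return [c for c in query(child, t, iso) if c in keep]

# Toy data: 4 time steps, 3 cells.
extremes = [[(0, 1), (2, 3), (5, 6)],
            [(0, 2), (2, 4), (5, 7)],
            [(1, 3), (3, 5), (4, 6)],
            [(2, 4), (4, 6), (3, 5)]]
root = build(extremes, 0, 3)
print(query(root, 2, iso=4.5))   # cells containing the isovalue at t = 2
```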
Loong, Bronwyn; Zaslavsky, Alan M.; He, Yulei; Harrington, David P.
2013-01-01
Statistical agencies have begun to partially synthesize public-use data for major surveys to protect the confidentiality of respondents’ identities and sensitive attributes, by replacing high disclosure risk and sensitive variables with multiple imputations. To date, there are few applications of synthetic data techniques to large-scale healthcare survey data. Here, we describe partial synthesis of survey data collected by CanCORS, a comprehensive observational study of the experiences, treatments, and outcomes of patients with lung or colorectal cancer in the United States. We review inferential methods for partially synthetic data, and discuss selection of high disclosure risk variables for synthesis, specification of imputation models, and identification disclosure risk assessment. We evaluate data utility by replicating published analyses and comparing results obtained from the original and synthetic data, and we discuss practical issues in preserving inferential conclusions. We found that important subgroup relationships must be included in the synthetic data imputation model to preserve the data utility of the observed data for a given analysis procedure. We conclude that the synthetic CanCORS data are best suited for preliminary data analysis purposes. These methods address the requirement to share data in clinical research without compromising confidentiality. PMID:23670983
Hierarchical Data Distribution Scheme for Peer-to-Peer Networks
NASA Astrophysics Data System (ADS)
Bhushan, Shashi; Dave, M.; Patel, R. B.
2010-11-01
In the past few years, peer-to-peer (P2P) networks have become an extremely popular mechanism for large-scale content sharing. P2P systems have focused on specific application domains (e.g. music files, video files) or on providing file-system-like capabilities. P2P is a powerful paradigm, which provides a large-scale and cost-effective mechanism for data sharing. A P2P system may also be used for storing data globally, but can a conventional database be implemented on a P2P system? A successful implementation of conventional databases on P2P systems is yet to be reported. In this paper we present a mathematical model for the replication of partitions and a hierarchy-based data distribution scheme for P2P networks. We also analyze the resource utilization and throughput of the P2P system with respect to availability, when a conventional database is implemented over the P2P system with a variable query rate. Simulation results show that database partitions placed on peers with a higher availability factor perform better. Degradation index, throughput and resource utilization are the parameters evaluated with respect to the availability factor.
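A toy sketch of availability-aware placement, consistent with the reported observation that partitions hosted on higher-availability peers perform better: replicas of each database partition are assigned to the peers with the highest availability factors, breaking ties by current load. Peer availabilities, partition count and replication degree are invented for illustration and this is not the paper's scheme:

```python
def place_partitions(peer_availability, n_partitions, replicas=2):
    """Assign each partition's replicas to the highest-availability peers,
    breaking ties by how many partitions a peer already hosts."""
    placement = {p: [] for p in range(n_partitions)}
    load = {peer: 0 for peer in peer_availability}
    for p in range(n_partitions):
        ranked = sorted(peer_availability,
                        key=lambda peer: (-peer_availability[peer], load[peer]))
        for peer in ranked[:replicas]:
            placement[p].append(peer)
            load[peer] += 1
    return placement

peers = {"A": 0.99, "B": 0.95, "C": 0.80, "D": 0.60}
print(place_partitions(peers, n_partitions=4, replicas=2))
```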
Sea, soil, sky - Testing solar's limits
NASA Astrophysics Data System (ADS)
Hopkinson, J.
1981-12-01
The potentials and actualities of large-scale biomass, ocean thermal, and satellite solar power systems are discussed. Biomass is an energy source already on-line, in installations ranging from home-sized wood-burning stoves to utility-sized generators fueled by sawdust and forest residue. Uses of wheat straw, fast-growing trees such as eucalyptus and alder, and euphorbia as biofuels are examined, noting restrictions imposed by land use limitations and the necessity of genetic engineering for more suitable plants. Pyrolysis and thermochemical gasification of biomass to form gaseous, solid, and liquid fuels are explored, and mention is made of utility refuse and sewage incineration for power generation. OTEC, satellite solar power systems, and tidal generator plants are considered promising for further investigation and perhaps useful in limited applications, while solar pond power plants require extremely large areas to be effective.
Lipid metabolism and potentials of biofuel and high added-value oil production in red algae.
Sato, Naoki; Moriyama, Takashi; Mori, Natsumi; Toyoshima, Masakazu
2017-04-01
Biomass production is currently explored in microalgae, macroalgae and land plants. Microalgal biofuel development has been performed mostly in green algae. In the Japanese tradition, macrophytic red algae such as Pyropia yezoensis and Gelidium crinale have been utilized as food and industrial materials. Research on the utilization of unicellular red microalgae such as Cyanidioschyzon merolae and Porphyridium purpureum started only quite recently. Red algae have relatively large plastid genomes harboring more than 200 protein-coding genes that support the biosynthetic capacity of the plastid. Engineering the plastid genome is a unique potential of red microalgae. In addition, large-scale growth facilities for P. purpureum have been developed for industrial production of biofuels. C. merolae has been studied as a model alga for cell and molecular biological analyses, with its completely determined genomes and transformation techniques. Its acidic and warm habitat makes it easy to grow this alga axenically at large scales. Its potential as a biofuel producer has recently been documented under nitrogen-limited conditions. Metabolic pathways of the accumulation of starch and triacylglycerol and the enzymes involved therein are being elucidated. Engineering these regulatory mechanisms will open the possibility of exploiting the full capability for production of biofuel and high added-value oil. In the present review, we describe the characteristics and potential of these algae as biotechnological seeds.
Verplanck, Philip L; Furlong, Edward T; Gray, James L; Phillips, Patrick J; Wolf, Ruth E; Esposito, Kathleen
2010-05-15
A primary pathway for emerging contaminants (pharmaceuticals, personal care products, steroids, and hormones) to enter aquatic ecosystems is effluent from sewage treatment plants (STPs), and identifying technologies to minimize the amount of these contaminants released is important. Quantifying the flux of these contaminants through STPs is difficult. This study evaluates the behavior of gadolinium, a rare earth element (REE) utilized as a contrast agent in magnetic resonance imaging (MRI), through four full-scale metropolitan STPs that utilize several biosolids thickening, conditioning, stabilization, and dewatering processing technologies. The organically complexed Gd from MRI has been shown to be stable in aquatic systems and has the potential to be utilized as a conservative tracer in STP operations for comparison with an emerging contaminant of interest. Influent and effluent waters display large enrichments in Gd compared to the other REEs. In contrast, most sludge samples from the STPs do not display Gd enrichments, including primary sludges and end-product sludges. The excess Gd appears to remain in the liquid phase throughout the STP operations, but detailed quantification of the input Gd load and the residence times of the various STP operations is needed to utilize Gd as a conservative tracer.
The value of residential photovoltaic systems: A comprehensive assessment
NASA Technical Reports Server (NTRS)
Borden, C. S.
1983-01-01
Utility-interactive photovoltaic (PV) arrays on residential rooftops appear to be a potentially attractive, large-scale application of PV technology. Results of a comprehensive assessment of the value (i.e., break-even cost) of utility-grid-connected residential photovoltaic power systems under a variety of technological and economic assumptions are presented. A wide range of allowable PV system costs is calculated for small (4.34 kW peak ac) residential PV systems in various locales across the United States. Primary factors in this variation are differences in local weather conditions; utility-specific electric generation capacity, fuel types, and customer-load profiles that affect purchase and sell-back rates; and non-uniform state tax considerations. Additional results from this analysis are: locations having the highest insolation values are not necessarily the most economically attractive sites; residential PV systems connected in parallel to the utility demonstrate high percentages of energy sold back to the grid; and owner financial and tax assumptions cause large variations in break-even costs. Significant cost reduction and aggressive resolution of potential institutional impediments (e.g., liability, standards, metering, and technical integration) are required for a residential PV market to become a major electric-grid-connected energy-generation source.
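A minimal sketch of a break-even (allowable) cost calculation of the kind summarized above: the present value of avoided purchases plus sell-back revenue over the system life is divided by the array rating. All numbers below are illustrative assumptions, not values from the study:

```python
def break_even_cost(kw_rating, annual_kwh, self_use_frac, buy_rate, sell_rate,
                    discount_rate, lifetime_years):
    """Allowable installed cost [$ per kW] at which PV savings just repay the system."""
    annual_value = annual_kwh * (self_use_frac * buy_rate +
                                 (1 - self_use_frac) * sell_rate)
    pv_factor = sum(1.0 / (1.0 + discount_rate) ** t
                    for t in range(1, lifetime_years + 1))
    return annual_value * pv_factor / kw_rating

# Illustrative: 4.34 kW array, 6500 kWh/yr, 60% used on site, 30-year life.
print(break_even_cost(4.34, 6500, 0.60, buy_rate=0.12, sell_rate=0.05,
                      discount_rate=0.06, lifetime_years=30))
```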
Owen, Jesse; Imel, Zac E
2016-04-01
This article introduces the special section on utilizing large data sets to explore psychotherapy processes and outcomes. The increased use of technology has provided new opportunities for psychotherapy researchers. In particular, there is a rise in large databases covering tens of thousands of clients. Additionally, there are new ways to pool valuable resources for meta-analytic purposes. At the same time, these tools also come with limitations. These issues are introduced, along with a brief overview of the articles. (c) 2016 APA, all rights reserved.
NASA Astrophysics Data System (ADS)
Sobel, A. H.; Wang, S.; Bellon, G.; Sessions, S. L.; Woolnough, S.
2013-12-01
Parameterizations of large-scale dynamics have been developed in the past decade for studying the interaction between tropical convection and large-scale dynamics, based on our physical understanding of the tropical atmosphere. A principal advantage of these methods is that they offer a pathway to attack the key question of what controls large-scale variations of tropical deep convection. These methods have been used with both single column models (SCMs) and cloud-resolving models (CRMs) to study the interaction of deep convection with several kinds of environmental forcings. While much has been learned from these efforts, different groups' efforts are somewhat hard to compare. Different models, different versions of the large-scale parameterization methods, and experimental designs that differ in other ways are used. It is not obvious which choices are consequential to the scientific conclusions drawn and which are not. The methods have matured to the point that there is value in an intercomparison project. In this context, the Global Atmospheric Systems Study - Weak Temperature Gradient (GASS-WTG) project was proposed at the Pan-GASS meeting in September 2012. The weak temperature gradient approximation is one method to parameterize large-scale dynamics, and is used in the project name for historical reasons and simplicity, but another method, the damped gravity wave (DGW) method, will also be used in the project. The goal of the GASS-WTG project is to develop community understanding of the parameterization methods currently in use. Their strengths, weaknesses, and functionality in models with different physics and numerics will be explored in detail, and their utility to improve our understanding of tropical weather and climate phenomena will be further evaluated. This presentation will introduce the intercomparison project, including background, goals, and overview of the proposed experimental design. Interested groups will be invited to join (it will not be too late), and preliminary results will be presented.
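A heavily simplified sketch of the weak temperature gradient (WTG) idea referenced above, in which a large-scale vertical velocity is diagnosed so that adiabatic cooling relaxes the column's potential temperature toward a reference profile over a fixed timescale. The profiles, relaxation timescale and grid are invented for illustration, and this is not the project's specified configuration (the damped gravity wave method is not sketched here):

```python
import numpy as np

def wtg_vertical_velocity(theta, theta_ref, dtheta_dz, tau=3 * 3600.0):
    """Diagnose w [m/s] from  w * dtheta/dz = (theta - theta_ref) / tau."""
    return (theta - theta_ref) / (tau * dtheta_dz)

z = np.linspace(1000.0, 15000.0, 15)                     # free-troposphere levels [m]
theta_ref = 300.0 + 4.0e-3 * z                           # reference potential temperature [K]
theta = theta_ref + 0.5 * np.sin(np.pi * z / 15000.0)    # simulated column, slightly warm
dtheta_dz = np.gradient(theta_ref, z)                    # static stability [K/m]

w = wtg_vertical_velocity(theta, theta_ref, dtheta_dz)
print(w)   # positive (upward) where the column is warmer than the reference
```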
Moving contact lines on vibrating surfaces
NASA Astrophysics Data System (ADS)
Solomenko, Zlatko; Spelt, Peter; Scott, Julian
2017-11-01
Large-scale simulations of flows with moving contact lines under realistic conditions generally require a subgrid-scale model (based on matched asymptotic analyses) to account for the unresolved part of the flow, given the large range of length scales involved near contact lines. Existing models for the interface shape in the contact-line region are primarily for steady flows on homogeneous substrates, with encouraging results in 3D simulations. The introduction of complexities would require further investigation of the contact-line region, however. Here we study flows with moving contact lines on planar substrates subject to vibrations, with applications in controlling wetting and dewetting. The challenge is to determine the change in interface shape near contact lines due to vibrations. To develop further insight, 2D direct numerical simulations (wherein the flow is resolved down to an imposed slip length) have been performed to enable comparison with asymptotic theory, which is also developed further. Perspectives are also presented on the final objective of the work, which is to develop a subgrid-scale model that can be utilized in large-scale simulations. The authors gratefully acknowledge the ANR for financial support (ANR-15-CE08-0031) and the meso-centre FLMSN for use of computational resources. This work was granted access to the HPC resources of CINES under allocation A0012B06893 made by GENCI.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Srivastava, V.; Fannin, K.F.; Biljetina, R.
1986-07-01
The Institute of Gas Technology (IGT) conducted a comprehensive laboratory-scale research program to develop and optimize the anaerobic digestion process for producing methane from water hyacinth and sludge blends. This study focused on digester design and operating techniques, which gave improved methane yields and production rates over those observed using conventional digesters. The final digester concept and the operating experience were utilized to design and operate a large-scale experimental test unit (ETU) at Walt Disney World, Florida. This paper describes the novel digester design, operating techniques, and the results obtained in the laboratory. The paper also discusses a kinetic model which predicts methane yield, methane production rate, and digester effluent solids as a function of retention time. This model was successfully utilized to predict the performance of the ETU. 15 refs., 6 figs., 6 tabs.
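The paper's own kinetic model is not reproduced here; as an illustration of the kind of relationship such models capture, the sketch below uses a generic first-order kinetic form in which the ultimate methane yield is approached with increasing hydraulic retention time (HRT), and the volumetric production rate is yield times loading. All parameter values are assumed and are not the IGT results:

```python
import numpy as np

def methane_yield(hrt_days, b0=0.35, k=0.15):
    """Generic first-order sketch: yield [m^3 CH4 / kg VS fed] vs retention time [days]."""
    return b0 * (1.0 - np.exp(-k * hrt_days))

def production_rate(hrt_days, s0=40.0, b0=0.35, k=0.15):
    """Volumetric rate [m^3 CH4 / (m^3 digester * day)] at influent VS of s0 kg/m^3."""
    return methane_yield(hrt_days, b0, k) * s0 / hrt_days

for hrt in (10, 20, 40):
    print(hrt, round(float(methane_yield(hrt)), 3), round(float(production_rate(hrt)), 3))
```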
Improving anaerobic and aerobic degradation by ultrasonic disintegration of biomass.
Neis, Uwe; Nickel, Klaus; Lundén, Anna
2008-11-01
Biological cell lysis is known to be the rate-limiting step of anaerobic biosolids degradation. Due to the slow pace by which this reaction occurs, it is necessary to equip treatment plants with large digesters or alternatively incorporate technological aids. High-power ultrasound used to disintegrate bacterial cells has been utilized as a pre-treatment process prior to anaerobic digestion. Through this application, as seen on pilot- and full-scales, it is possible to attain up to 30% more biogas, an increase in VS-destruction of up to 30% and a reduced sludge mass for disposal. Utilizing ultrasound technology in aerobic applications is a new and innovative approach. Improved denitrification through a more readily available internal carbon source, and less excess sludge mass can be traced to the positive effects that sonication of sludge has on the overall biological wastewater treatment process. Reference full-scale installations suggest that the technology is both technically feasible and economically sound.
Revolution…Now The Future Arrives for Five Clean Energy Technologies – 2015 Update
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
In 2013, the U.S. Department of Energy (DOE) released the Revolution Now report, highlighting four transformational technologies: land-based wind power, silicon photovoltaic (PV) solar modules, light-emitting diodes (LEDs), and electric vehicles (EVs). That study and its 2014 update showed how dramatic reductions in cost are driving a surge in consumer, industrial, and commercial adoption of these clean energy technologies, as well as yearly progress. In addition to presenting the continued progress made over the last year in these areas, this year’s update goes further. Two separate sections now cover large, central, utility-scale PV plants and smaller, rooftop, distributed PV systems to highlight how both have achieved significant deployment nationwide, and have done so through different innovations, such as easier access to capital for utility-scale PV and reductions of non-hardware costs and third-party ownership for distributed PV. Along with these core technologies...
Efficient hemodynamic event detection utilizing relational databases and wavelet analysis
NASA Technical Reports Server (NTRS)
Saeed, M.; Mark, R. G.
2001-01-01
Development of a temporal query framework for time-oriented medical databases has hitherto been a challenging problem. We describe a novel method for the detection of hemodynamic events in multiparameter trends utilizing wavelet coefficients in a MySQL relational database. Storage of the wavelet coefficients allowed for a compact representation of the trends, and provided robust descriptors for the dynamics of the parameter time series. A data model was developed to allow for simplified queries along several dimensions and time scales. Of particular importance, the data model and wavelet framework allowed for queries to be processed with minimal table-join operations. A web-based search engine was developed to allow for user-defined queries. Typical queries required between 0.01 and 0.02 seconds, with at least two orders of magnitude improvement in speed over conventional queries. This powerful and innovative structure will facilitate research on large-scale time-oriented medical databases.
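A small sketch of the general approach described above: compute a wavelet decomposition of a parameter trend (PyWavelets here), store the coarse coefficients per record in a relational table, and query them with SQL. SQLite is used to keep the example self-contained, whereas the study used MySQL; the table layout, wavelet choice and threshold are illustrative assumptions:

```python
import sqlite3
import numpy as np
import pywt

# Simulated heart-rate trend with an abrupt drop standing in for a hemodynamic event.
trend = np.concatenate([np.full(64, 80.0), np.full(64, 55.0)]) + np.random.normal(0, 1, 128)
coarse, *details = pywt.wavedec(trend, "db4", level=4)   # multiscale decomposition

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE coeffs (record_id INTEGER, idx INTEGER, value REAL)")
db.executemany("INSERT INTO coeffs VALUES (?, ?, ?)",
               [(1, i, float(v)) for i, v in enumerate(coarse)])

# Query: records whose coarse-scale coefficients span a large range (candidate events).
row = db.execute("""SELECT record_id, MAX(value) - MIN(value)
                    FROM coeffs GROUP BY record_id
                    HAVING MAX(value) - MIN(value) > 20""").fetchone()
print(row)
```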
PowderSim: Lagrangian Discrete and Mesh-Free Continuum Simulation Code for Cohesive Soils
NASA Technical Reports Server (NTRS)
Johnson, Scott; Walton, Otis; Settgast, Randolph
2013-01-01
PowderSim is a calculation tool that combines a discrete-element method (DEM) module, including calibrated interparticle-interaction relationships, with a mesh-free, continuum, SPH (smoothed-particle hydrodynamics) based module that utilizes enhanced, calibrated, constitutive models capable of mimicking both large deformations and the flow behavior of regolith simulants and lunar regolith under conditions anticipated during in situ resource utilization (ISRU) operations. The major innovation introduced in PowderSim is to use a mesh-free method (SPH-based) with a calibrated and slightly modified critical-state soil mechanics constitutive model to extend the ability of the simulation tool to also address full-scale engineering systems in the continuum sense. The PowderSim software maintains the ability to address particle-scale problems, like size segregation, in selected regions with a traditional DEM module, which has improved contact physics and electrostatic interaction models.
NASA Astrophysics Data System (ADS)
Schwaiger, Karl; Haider, Markus; Haemmerle, Martin; Steiner, Peter; Obermaier, Michael-Dario
2016-05-01
Flexible, dispatchable solar thermal electricity plants applying state-of-the-art power cycles have the potential to play a vital role in modern electricity systems and even to participate in the ancillary services market. By replacing molten salt with particles, operating temperatures can be increased and plant efficiencies of over 45% can be reached. In this work the concept for a utility-scale plant using corundum as the storage/heat-transfer material is thermodynamically modeled and its key performance data are reported. A novel indirect fluidized-bed particle receiver concept is presented, profiting from near-blackbody behavior that allows it to heat large particle flows through temperature cycles of over 500°C. Specialized fluidized-bed steam generators with negligible auxiliary power demand are applied. The performance of the key components is discussed and a rough sketch of the plant is provided.
de Beurs, Edwin; Tielen, Deirdre; Wollmann, Lisa
2014-01-01
The social interaction anxiety scale (SIAS) and the social phobia scale (SPS) assess anxiety in social interactions and fear of scrutiny by others. This study examines the psychometric properties of the Dutch versions of the SIAS and SPS using data from a large group of patients with social phobia and a community-based sample. Confirmatory factor analysis revealed that the SIAS is unidimensional, whereas the SPS is comprised of three subscales. The internal consistency of the scales and subscales was good. The concurrent and discriminant validity was supported and the scales were well able to discriminate between patients and community-based respondents. Cut-off values with excellent sensitivity and specificity are presented. Of all self-report measures included, the SPS was the most sensitive for treatment effects. Normative data are provided which can be used to assess whether clinically significant change has occurred in individual patients. PMID:24701560
Advanced Grid-Friendly Controls Demonstration Project for Utility-Scale PV Power Plants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gevorgian, Vahan; O'Neill, Barbara
A typical photovoltaic (PV) power plant consists of multiple power electronic inverters and can contribute to grid stability and reliability through sophisticated 'grid-friendly' controls. The availability and dissemination among all industry stakeholders of actual test data showing the viability of advanced utility-scale PV controls can elevate PV from being simply an energy resource to a provider of additional ancillary services that range from variability smoothing and frequency regulation to power quality. Strategically partnering with a selected utility and/or PV power plant operator is a key condition for a successful demonstration project. The U.S. Department of Energy's (DOE's) Solar Energy Technologies Office selected the National Renewable Energy Laboratory (NREL) to be a principal investigator in a two-year project with goals to (1) identify a potential partner(s), (2) develop a detailed scope of work and test plan for a field project to demonstrate the grid-friendly capabilities of utility-scale PV power plants, (3) facilitate conducting actual demonstration tests, and (4) disseminate test results among industry stakeholders via a joint NREL/DOE publication and participation in relevant technical conferences. The project implementation took place in FY 2014 and FY 2015. In FY14, NREL established collaborations with AES and First Solar Electric, LLC, to conduct demonstration testing on their utility-scale PV power plants in Puerto Rico and Texas, respectively, and developed test plans for each partner. Both the Puerto Rico Electric Power Authority and the Electric Reliability Council of Texas expressed interest in this project because of the importance of such advanced controls for the reliable operation of their power systems under high penetration levels of variable renewable generation. During FY15, testing was completed on both plants, and a large amount of test data was produced and analyzed that demonstrates the ability of PV power plants to provide various types of new grid-friendly controls.
Performance/price estimates for cortex-scale hardware: a design space exploration.
Zaveri, Mazad S; Hammerstrom, Dan
2011-04-01
In this paper, we revisit the concept of virtualization. Virtualization is useful for understanding and investigating the performance/price and other trade-offs related to the hardware design space. Moreover, it is perhaps the most important aspect of a hardware design space exploration. Such a design space exploration is a necessary part of the study of hardware architectures for large-scale computational models for intelligent computing, including AI, Bayesian, bio-inspired and neural models. A methodical exploration is needed to identify potentially interesting regions in the design space, and to assess the relative performance/price points of these implementations. As an example, in this paper we investigate the performance/price of (digital and mixed-signal) CMOS and hypothetical CMOL (nanogrid) technology based hardware implementations of human cortex-scale spiking neural systems. Through this analysis, and the resulting performance/price points, we demonstrate, in general, the importance of virtualization, and of doing these kinds of design space explorations. The specific results suggest that hybrid nanotechnology such as CMOL is a promising candidate to implement very large-scale spiking neural systems, providing a more efficient utilization of the density and storage benefits of emerging nano-scale technologies. In general, we believe that the study of such hypothetical designs/architectures will guide the neuromorphic hardware community towards building large-scale systems, and help guide research trends in intelligent computing, and computer engineering. Copyright © 2010 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Anber, U.; Wang, S.; Gentine, P.; Jensen, M. P.
2017-12-01
A framework is introduced to investigate the indirect impact of aerosol loading on tropical deep convection using three-dimensional idealized cloud-system-resolving simulations with coupled large-scale circulation. The large-scale dynamics are parameterized using a spectral weak temperature gradient approximation that utilizes the dominant balance in the tropics between adiabatic cooling and diabatic heating. The aerosol loading effect is examined by varying the number concentration of cloud condensation nuclei (CCN) available to form cloud droplets in the bulk microphysics scheme over a wide range from 30 to 5000, without including any radiative effect, as the radiative cooling is prescribed at a constant rate to isolate the microphysical effect. Increasing the aerosol number concentration causes mean precipitation to decrease monotonically, despite the increase in cloud condensate. This reduction in precipitation efficiency is attributed to a reduction in the surface enthalpy fluxes, not to the divergent circulation, as the gross moist stability remains unchanged. We derive a simple scaling argument based on the moist static energy budget that enables a direct estimation of changes in precipitation given known changes in surface enthalpy fluxes and the constant gross moist stability. The impact on cloud hydrometeors and microphysical properties is also examined and is consistent with the macro-physical picture.
Trace: a high-throughput tomographic reconstruction engine for large-scale datasets
Bicer, Tekin; Gursoy, Doga; Andrade, Vincent De; ...
2017-01-28
Here, synchrotron light source and detector technologies enable scientists to perform advanced experiments. These scientific instruments and experiments produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used data acquisition techniques at light sources is computed tomography, which can generate tens of GB/s depending on the x-ray range. A large-scale tomographic dataset, such as a mouse brain, may require hours of computation time with a medium-size workstation. In this paper, we present Trace, a data-intensive computing middleware we developed for the implementation and parallelization of iterative tomographic reconstruction algorithms. Trace provides fine-grained reconstruction of tomography datasets using both (thread-level) shared memory and (process-level) distributed memory parallelization. Trace utilizes a special data structure called the replicated reconstruction object to maximize application performance. We also present the optimizations we have done on the replicated reconstruction objects and evaluate them using a shale and a mouse brain sinogram. Our experimental evaluations show that the applied optimizations and parallelization techniques can provide 158x speedup (using 32 compute nodes) over a single-core configuration, which decreases the reconstruction time of a sinogram (with 4501 projections and 22400 detector resolution) from 12.5 hours to less than 5 minutes per iteration.
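As a rough illustration of the replicated-reconstruction-object idea, the sketch below gives each worker process a private copy of the reconstruction grid and sums the copies once at the end, so no locking of a shared array is needed while projections are accumulated. This is a toy under stated assumptions (a trivial smear operation in place of Trace's iterative reconstruction kernels, and arbitrary grid and worker counts), not the middleware's actual MPI and threading implementation.

# Hedged sketch of replicated reconstruction objects: each worker accumulates
# its share of projections into a private grid; the private grids are reduced
# (summed) at the end instead of synchronizing on a shared array.
import numpy as np
from concurrent.futures import ProcessPoolExecutor

GRID = 128        # reconstruction grid size (assumption for this toy example)
N_WORKERS = 4

def process_chunk(chunk):
    """Accumulate one worker's projections into a private (replicated) grid."""
    local = np.zeros((GRID, GRID))
    for row in chunk:
        local += np.tile(row, (GRID, 1))   # toy stand-in for backprojection
    return local

if __name__ == "__main__":
    sinogram = np.random.rand(256, GRID)             # 256 synthetic projections
    chunks = np.array_split(sinogram, N_WORKERS)     # process-level data split
    with ProcessPoolExecutor(max_workers=N_WORKERS) as pool:
        partials = list(pool.map(process_chunk, chunks))
    reconstruction = np.sum(partials, axis=0)        # reduce the replicas once
    print(reconstruction.shape)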
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xia, Kelin; Zhao, Zhixiong; Wei, Guo-Wei, E-mail: wei@math.msu.edu
Although persistent homology has emerged as a promising tool for the topological simplification of complex data, it is computationally intractable for large datasets. We introduce multiresolution persistent homology to handle excessively large datasets. We match the resolution with the scale of interest so as to represent large scale datasets with appropriate resolution. We utilize flexibility-rigidity index to access the topological connectivity of the data set and define a rigidity density for the filtration analysis. By appropriately tuning the resolution of the rigidity density, we are able to focus the topological lens on the scale of interest. The proposed multiresolution topological analysis is validated by a hexagonal fractal image which has three distinct scales. We further demonstrate the proposed method for extracting topological fingerprints from DNA molecules. In particular, the topological persistence of a virus capsid with 273 780 atoms is successfully analyzed which would otherwise be inaccessible to the normal point cloud method and unreliable by using coarse-grained multiscale persistent homology. The proposed method has also been successfully applied to the protein domain classification, which is the first time that persistent homology is used for practical protein domain analysis, to our knowledge. The proposed multiresolution topological method has potential applications in arbitrary data sets, such as social networks, biological networks, and graphs.
Kessel, Steven Thomas; Elamin, Nasreldin Alhasan; Yurkowski, David James; Chekchak, Tarik; Walter, Ryan Patrick; Klaus, Rebecca; Hill, Graham; Hussey, Nigel Edward
2017-01-01
A large reef manta ray (Manta alfredi) aggregation has been observed off the north Sudanese Red Sea coast since the 1950s. Sightings have been predominantly within the boundaries of a marine protected area (MPA), which was designated a UNESCO World Heritage Site in July 2016. Contrasting economic development trajectories have been proposed for the area (small-scale ecotourism and large-scale island development). To examine space-use, Wildlife Computers® SPOT 5 tags were secured to three manta rays. A two-state switching Bayesian state space model (BSSM), that allowed movement parameters to switch between resident and travelling, was fit to the recorded locations, and 50% and 95% kernel utilization distributions (KUD) home ranges calculated. A total of 682 BSSM locations were recorded between 30 October 2012 and 6 November 2013. Of these, 98.5% fell within the MPA boundaries; 99.5% for manta 1, 91.5% for manta 2, and 100% for manta 3. The BSSM identified that all three mantas were resident during 99% of transmissions, with 50% and 95% KUD home ranges falling mainly within the MPA boundaries. For all three mantas combined (88.4%), and all individuals (manta 1–92.4%, manta 2–64.9%, manta 3–91.9%), the majority of locations occurred within 15 km of the proposed large-scale island development. Results indicated that the MPA boundaries are spatially appropriate for manta rays in the region, however, a close association to the proposed large-scale development highlights the potential threat of disruption. Conversely, the focused nature of spatial use highlights the potential for reliable ecotourism opportunities. PMID:29069079
NASA Astrophysics Data System (ADS)
Niwa, Masaki; Takashina, Shoichi; Mori, Yojiro; Hasegawa, Hiroshi; Sato, Ken-ichi; Watanabe, Toshio
2015-01-01
With the continuous increase in Internet traffic, reconfigurable optical add-drop multiplexers (ROADMs) have been widely adopted in the core and metro core networks. Current ROADMs, however, allow only static operation. To realize future dynamic optical-network services, and to minimize any human intervention in network operation, the optical signal add/drop part should have colorless/directionless/contentionless (C/D/C) capabilities. This is possible with matrix switches or a combination of splitter-switches and optical tunable filters. The scale of the matrix switch increases with the square of the number of supported channels, and hence, the matrix-switch-based architecture is not suitable for creating future large-scale ROADMs. In contrast, the numbers of splitter ports, switches, and tunable filters increase linearly with the number of supported channels, and hence the tunable-filter-based architecture will support all future traffic. So far, we have succeeded in fabricating a compact tunable filter that consists of multi-stage cyclic arrayed-waveguide gratings (AWGs) and switches by using planar-lightwave-circuit (PLC) technologies. However, this multistage configuration suffers from large insertion loss and filter narrowing. Moreover, power-consuming temperature control is necessary since it is difficult to make cyclic AWGs athermal. We propose here novel tunable-filter architecture that sandwiches a single-stage non-cyclic athermal AWG having flatter-topped passbands between small-scale switches. With this configuration, the optical tunable filter attains low insertion loss, large passband bandwidths, low power consumption, compactness, and high cost-effectiveness. A prototype is monolithically fabricated with PLC technologies and its excellent performance is experimentally confirmed utilizing 80-channel 30-GBaud dual-polarization quadrature phase-shift-keying (QPSK) signals.
Moricoli, Diego; Muller, William Anthony; Carbonella, Damiano Cosimo; Balducci, Maria Cristina; Dominici, Sabrina; Watson, Richard; Fiori, Valentina; Weber, Evan; Cianfriglia, Maurizio; Scotlandi, Katia; Magnani, Mauro
2014-06-01
Migration of leukocytes into sites of inflammation involves several steps mediated by various families of adhesion molecules. CD99 plays a significant role in the transendothelial migration (TEM) of leukocytes. Inhibition of TEM by a specific monoclonal antibody (mAb) can provide a potent therapeutic approach to treating inflammatory conditions. However, the therapeutic utilization of whole IgG can lead to an inappropriate activation of Fc receptor-expressing cells, inducing serious adverse side effects due to cytokine release. In this regard, specific recombinant antibodies in single-chain variable fragment (scFv) format derived from a phage library may offer a solution by affecting TEM function in a safe clinical context. This approach, however, requires large-scale production of functional scFv antibodies and the absence of the toxic reagents used for the solubilization and refolding of inclusion bodies, which may otherwise discourage industrial application of these antibody fragments. In order to apply the scFv anti-CD99 named C7A in a clinical setting, we herein describe an efficient and large-scale production of the antibody fragments expressed in E. coli as insoluble periplasmic protein, avoiding a gel filtration chromatography step and laborious refolding steps pre- and post-purification. Using differential salt elution, which is a simple, reproducible and effective procedure, we are able to separate scFv in monomer format from aggregates. The purified scFv antibody C7A exhibits inhibitory activity comparable to an antagonistic conventional mAb, thus providing an excellent agent for blocking CD99 signaling. This protocol can be useful for the successful purification of other monomeric scFvs which are expressed as periplasmic inclusion bodies in bacterial systems. Copyright © 2014 Elsevier B.V. All rights reserved.
The scale-dependent market trend: Empirical evidences using the lagged DFA method
NASA Astrophysics Data System (ADS)
Li, Daye; Kou, Zhun; Sun, Qiankun
2015-09-01
In this paper we carry out an empirical study and test the efficiency of 44 important market indexes across multiple time scales. A modified method based on lagged detrended fluctuation analysis is utilized to maximize the information on long-term correlations extracted from non-zero lags while keeping the margin of error small when measuring the local Hurst exponent. Our empirical results illustrate that a common pattern can be found in the majority of the measured market indexes, which tend to be persistent (with a local Hurst exponent > 0.5) at small time scales, whereas they display significant anti-persistent characteristics at large time scales. Moreover, not only the stock markets but also the foreign exchange markets share this pattern. Considering that the exchange markets are only weakly synchronized with economic cycles, it can be concluded that economic cycles can cause anti-persistence at large time scales but that other factors are also at work. The empirical results support the view that financial markets are multi-fractal, and they indicate that deviations from efficiency and the type of model appropriate for describing the trend of market prices depend on the forecasting horizon.
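For orientation, the following is a minimal sketch of ordinary (unlagged) detrended fluctuation analysis, the baseline estimator that the lagged variant modifies; the fitted scaling exponent plays the role of the Hurst exponent, with values above 0.5 indicating persistence. The window sizes, linear detrending, and synthetic input are illustrative assumptions.

# Minimal DFA sketch: integrate the series, detrend it window by window,
# and read the Hurst-type exponent off the slope of log F(n) vs. log n.
import numpy as np

def dfa_exponent(returns, scales=(8, 16, 32, 64, 128)):
    profile = np.cumsum(returns - np.mean(returns))        # integrated series
    flucts = []
    for n in scales:
        n_win = len(profile) // n
        f2 = []
        for w in range(n_win):
            seg = profile[w * n:(w + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # local linear detrend
            f2.append(np.mean((seg - trend) ** 2))
        flucts.append(np.sqrt(np.mean(f2)))
    slope, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return slope                                           # > 0.5 => persistent

rng = np.random.default_rng(0)
print(dfa_exponent(rng.standard_normal(4096)))             # ~0.5 for white noise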
Sustainability of utility-scale solar energy: Critical environmental concepts
NASA Astrophysics Data System (ADS)
Hernandez, R. R.; Moore-O'Leary, K. A.; Johnston, D. S.; Abella, S.; Tanner, K.; Swanson, A.; Kreitler, J.; Lovich, J.
2017-12-01
Renewable energy development is an arena where ecological, political, and socioeconomic values collide. Advances in renewable energy will incur steep environmental costs to landscapes in which facilities are constructed and operated. Scientists - including those from academia, industry, and government agencies - have only recently begun to quantify trade-offs in this arena, often using ground-mounted, utility-scale solar energy facilities (USSE, ≥ 1 megawatt) as a model. Here, we discuss five critical ecological concepts applicable to the development of more sustainable USSE with benefits over fossil-fuel-generated energy: (1) more sustainable USSE development requires careful evaluation of trade-offs between land, energy, and ecology; (2) species responses to habitat modification by USSE vary; (3) cumulative and large-scale ecological impacts are complex and challenging to mitigate; (4) USSE development affects different types of ecosystems and requires customized design and management strategies; and (5) long-term ecological consequences associated with USSE sites must be carefully considered. These critical concepts provide a framework for reducing adverse environmental impacts, informing policy to establish and address conservation priorities, and improving energy production sustainability.
Toward economic flood loss characterization via hazard simulation
NASA Astrophysics Data System (ADS)
Czajkowski, Jeffrey; Cunha, Luciana K.; Michel-Kerjan, Erwann; Smith, James A.
2016-08-01
Among all natural disasters, floods have historically been the primary cause of human and economic losses around the world. Improving flood risk management requires a multi-scale characterization of the hazard and associated losses—the flood loss footprint. But this is typically not available in a precise and timely manner, yet. To overcome this challenge, we propose a novel and multidisciplinary approach which relies on a computationally efficient hydrological model that simulates streamflow for scales ranging from small creeks to large rivers. We adopt a normalized index, the flood peak ratio (FPR), to characterize flood magnitude across multiple spatial scales. The simulated FPR is then shown to be a key statistical driver for associated economic flood losses represented by the number of insurance claims. Importantly, because it is based on a simulation procedure that utilizes generally readily available physically-based data, our flood simulation approach has the potential to be broadly utilized, even for ungauged and poorly gauged basins, thus providing the necessary information for public and private sector actors to effectively reduce flood losses and save lives.
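The normalization behind the flood peak ratio can be written in one line; the sketch below is a hedged illustration in which the reference discharge is an assumed input, since the paper's specific choice of reference flood is not reproduced here.

# Hedged sketch: a flood peak ratio (FPR) normalizes a simulated peak discharge
# by a reference discharge at the same location, so that flood magnitude can be
# compared across basins of very different size.  The reference discharge is an
# assumed input, not the paper's definition.
def flood_peak_ratio(peak_discharge_m3s, reference_discharge_m3s):
    return peak_discharge_m3s / reference_discharge_m3s

# The same FPR of 2.0 describes a small creek and a large river, even though
# their absolute peak flows differ by orders of magnitude.
print(flood_peak_ratio(30.0, 15.0))       # small creek
print(flood_peak_ratio(12000.0, 6000.0))  # large river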
Sustainability of utility-scale solar energy – critical ecological concepts
Moore-O'Leary, Kara A.; Hernandez, Rebecca R.; Johnston, Dave S.; Abella, Scott R.; Tanner, Karen E.; Swanson, Amanda C.; Kreitler, Jason R.; Lovich, Jeffrey E.
2017-01-01
Renewable energy development is an arena where ecological, political, and socioeconomic values collide. Advances in renewable energy will incur steep environmental costs to landscapes in which facilities are constructed and operated. Scientists – including those from academia, industry, and government agencies – have only recently begun to quantify trade-offs in this arena, often using ground-mounted, utility-scale solar energy facilities (USSE, ≥1 megawatt) as a model. Here, we discuss five critical ecological concepts applicable to the development of more sustainable USSE with benefits over fossil-fuel-generated energy: (1) more sustainable USSE development requires careful evaluation of trade-offs between land, energy, and ecology; (2) species responses to habitat modification by USSE vary; (3) cumulative and large-scale ecological impacts are complex and challenging to mitigate; (4) USSE development affects different types of ecosystems and requires customized design and management strategies; and (5) long-term ecological consequences associated with USSE sites must be carefully considered. These critical concepts provide a framework for reducing adverse environmental impacts, informing policy to establish and address conservation priorities, and improving energy production sustainability.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yadav, Rakesh K.; Poppenhaeger, Katja; Wolk, Scott J.
Despite the lack of a shear-rich tachocline region, low-mass fully convective (FC) stars are capable of generating strong magnetic fields, indicating that a dynamo mechanism fundamentally different from the solar dynamo is at work in these objects. We present a self-consistent three-dimensional model of magnetic field generation in low-mass FC stars. The model utilizes the anelastic magnetohydrodynamic equations to simulate compressible convection in a rotating sphere. A distributed dynamo working in the model spontaneously produces a dipole-dominated surface magnetic field of the observed strength. The interaction of this field with the turbulent convection in outer layers shreds it, producing small-scale fields that carry most of the magnetic flux. The Zeeman–Doppler-Imaging technique applied to synthetic spectropolarimetric data based on our model recovers most of the large-scale field. Our model simultaneously reproduces the morphology and magnitude of the large-scale field as well as the magnitude of the small-scale field observed on low-mass FC stars.
Large increase in fracture resistance of stishovite with crack extension less than one micrometer
Yoshida, Kimiko; Wakai, Fumihiro; Nishiyama, Norimasa; Sekine, Risako; Shinoda, Yutaka; Akatsu, Takashi; Nagoshi, Takashi; Sone, Masato
2015-01-01
The development of strong, tough, and damage-tolerant ceramics requires nano/microstructure design that utilizes toughening mechanisms operating at different length scales. The toughening mechanisms known so far are effective at the micro-scale and thus require crack extensions of more than a few micrometers to increase the fracture resistance. Here, we developed a micro-mechanical test method using micro-cantilever beam specimens to determine the very early part of the resistance curve of nanocrystalline SiO2 stishovite, which exhibits fracture-induced amorphization. We revealed that this novel toughening mechanism is effective even at the nanometer length scale owing to a narrow transformation zone width of a few tens of nanometers and the large dilatational strain (from 60 to 95%) associated with the transition from the crystalline to the amorphous state. This testing method will be a powerful tool in the search for toughening mechanisms that may operate at the nanoscale for attaining both reliability and strength in structural materials. PMID:26051871
Anderson, Roger N.; Boulanger, Albert; Bagdonas, Edward P.; Xu, Liqing; He, Wei
1996-01-01
The invention utilizes 3-D and 4-D seismic surveys as a means of deriving information useful in petroleum exploration and reservoir management. The methods use both single seismic surveys (3-D) and multiple seismic surveys separated in time (4-D) of a region of interest to determine large scale migration pathways within sedimentary basins, and fine scale drainage structure and oil-water-gas regions within individual petroleum producing reservoirs. Such structure is identified using pattern recognition tools which define the regions of interest. The 4-D seismic data sets may be used for data completion for large scale structure where time intervals between surveys do not allow for dynamic evolution. The 4-D seismic data sets also may be used to find variations over time of small scale structure within individual reservoirs which may be used to identify petroleum drainage pathways, oil-water-gas regions and, hence, attractive drilling targets. After spatial orientation, and amplitude and frequency matching of the multiple seismic data sets, High Amplitude Event (HAE) regions consistent with the presence of petroleum are identified using seismic attribute analysis. High Amplitude Regions are grown and interconnected to establish plumbing networks on the large scale and reservoir structure on the small scale. Small scale variations over time between seismic surveys within individual reservoirs are identified and used to identify drainage patterns and bypassed petroleum to be recovered. The location of such drainage patterns and bypassed petroleum may be used to site wells.
Bayesian Hierarchical Modeling for Big Data Fusion in Soil Hydrology
NASA Astrophysics Data System (ADS)
Mohanty, B.; Kathuria, D.; Katzfuss, M.
2016-12-01
Soil moisture datasets from remote sensing (RS) platforms (such as SMOS and SMAP) and reanalysis products from land surface models are typically available on a coarse spatial granularity of several square km. Ground based sensors on the other hand provide observations on a finer spatial scale (meter scale or less) but are sparsely available. Soil moisture is affected by high variability due to complex interactions between geologic, topographic, vegetation and atmospheric variables. Hydrologic processes usually occur at a scale of 1 km or less and therefore spatially ubiquitous and temporally periodic soil moisture products at this scale are required to aid local decision makers in agriculture, weather prediction and reservoir operations. Past literature has largely focused on downscaling RS soil moisture for a small extent of a field or a watershed and hence the applicability of such products has been limited. The present study employs a spatial Bayesian Hierarchical Model (BHM) to derive soil moisture products at a spatial scale of 1 km for the state of Oklahoma by fusing point scale Mesonet data and coarse scale RS data for soil moisture and its auxiliary covariates such as precipitation, topography, soil texture and vegetation. It is seen that the BHM model handles change of support problems easily while performing accurate uncertainty quantification arising from measurement errors and imperfect retrieval algorithms. The computational challenge arising due to the large number of measurements is tackled by utilizing basis function approaches and likelihood approximations. The BHM model can be considered as a complex Bayesian extension of traditional geostatistical prediction methods (such as Kriging) for large datasets in the presence of uncertainties.
Neuron array with plastic synapses and programmable dendrites.
Ramakrishnan, Shubha; Wunderlich, Richard; Hasler, Jennifer; George, Suma
2013-10-01
We describe a novel neuromorphic chip architecture that models neurons for efficient computation. Traditional architectures of neuron array chips consist of large scale systems that are interfaced with AER for implementing intra- or inter-chip connectivity. We present a chip that uses AER for inter-chip communication but uses fast, reconfigurable FPGA-style routing with local memory for intra-chip connectivity. We model neurons with biologically realistic channel models, synapses and dendrites. This chip is suitable for small-scale network simulations and can also be used for sequence detection, utilizing directional selectivity properties of dendrites, ultimately for use in word recognition.
Imaging mouse cerebellum with serial optical coherence scanner (Conference Presentation)
NASA Astrophysics Data System (ADS)
Liu, Chao J.; Williams, Kristen; Orr, Harry; Taner, Akkin
2017-02-01
We present the serial optical coherence scanner (SOCS), which consists of a polarization-sensitive optical coherence tomography system and a vibratome with associated controls for serial imaging, to visualize the cerebellum and adjacent brainstem of the mouse. The cerebellar cortical layers and white matter are distinguished by using intrinsic optical contrasts. Images from serial scans reveal the large-scale anatomy in detail and map the nerve fiber pathways in the cerebellum and adjacent brainstem. The optical system, which has 5.5 μm axial resolution, utilizes a scan lens or a water-immersion microscope objective resulting in 10 μm or 4 μm lateral resolution, respectively. Large-scale brain imaging at high resolution requires an efficient way to collect large datasets, and it is important to improve the SOCS system to handle large-scale samples, and large numbers of them, in a reasonable time. The imaging and slicing procedure for a section took about 4 minutes because the vibratome blade speed was kept low to maintain slicing quality. SOCS has potential to investigate pathological changes and monitor the effects of therapeutic drugs in cerebellar diseases such as spinocerebellar ataxia 1 (SCA1). SCA1 is a neurodegenerative disease characterized by atrophy and eventual loss of Purkinje cells from the cerebellar cortex, and the optical contrasts provided by SOCS are being evaluated as biomarkers of the disease.
The Large Scale Distribution of Water Ice in the Polar Regions of the Moon
NASA Astrophysics Data System (ADS)
Jordan, A.; Wilson, J. K.; Schwadron, N.; Spence, H. E.
2017-12-01
For in situ resource utilization, one must know where water ice is on the Moon. Many datasets have revealed both surface deposits of water ice and subsurface deposits of hydrogen near the lunar poles, but it has proved difficult to resolve the differences among the locations of these deposits. Despite these datasets disagreeing on how deposits are distributed on small scales, we show that most of these datasets do agree on the large scale distribution of water ice. We present data from the Cosmic Ray Telescope for the Effects of Radiation (CRaTER) on the Lunar Reconnaissance Orbiter (LRO), LRO's Lunar Exploration Neutron Detector (LEND), the Neutron Spectrometer on Lunar Prospector (LPNS), LRO's Lyman Alpha Mapping Project (LAMP), LRO's Lunar Orbiter Laser Altimeter (LOLA), and Chandrayaan-1's Moon Mineralogy Mapper (M3). All, including those that show clear evidence for water ice, reveal surprisingly similar trends with latitude, suggesting that both surface and subsurface datasets are measuring ice. All show that water ice increases towards the poles, and most demonstrate that its signature appears at about ±70° latitude and increases poleward. This is consistent with simulations of how surface and subsurface cold traps are distributed with latitude. This large scale agreement constrains the origin of the ice, suggesting that an ancient cometary impact (or impacts) created a large scale deposit that has been rendered locally heterogeneous by subsequent impacts. Furthermore, it also shows that water ice may be available down to ±70°—latitudes that are more accessible than the poles for landing.
Turbulence in molecular clouds - A new diagnostic tool to probe their origin
NASA Technical Reports Server (NTRS)
Canuto, V. M.; Battaglia, A.
1985-01-01
A method is presented to uncover the instability responsible for the type of turbulence observed in molecular clouds and the value of the physical parameters of the 'placental medium' from which turbulence originated. The method utilizes the observational relation between velocities and sizes of molecular clouds, together with a recent model for large-scale turbulence (constructed by Canuto and Goldman, 1985).
Larry E. Laing; David Gori; James T. Jones
2005-01-01
The multi-partner Greater Huachuca Mountains fire planning effort involves over 500,000 acres of public and private lands. This large area supports distinct landscapes that have evolved with fire. Utilizing GIS as a tool, the United States Forest Service (USFS), General Ecosystem Survey (GES), and Natural Resources Conservation Service (NRCS) State Soil Geographic...
K. L. Frank; L. S. Kalkstein; B. W. Geils; H. W. Thistle
2008-01-01
This study developed a methodology to temporally classify large scale, upper level atmospheric conditions over North America, utilizing a newly-developed upper level synoptic classification (ULSC). Four meteorological variables: geopotential height, specific humidity, and u- and v-wind components, at the 500 hPa level over North America were obtained from the NCEP/NCAR...
The PR2D (Place, Route in 2-Dimensions) automatic layout computer program handbook
NASA Technical Reports Server (NTRS)
Edge, T. M.
1978-01-01
Place, Route in 2-Dimensions is a standard cell automatic layout computer program for generating large scale integrated/metal oxide semiconductor arrays. The program was utilized successfully for a number of years in both government and private sectors but until now was undocumented. The compilation, loading, and execution of the program on a Sigma V CP-V operating system is described.
Aziz Ebrahimi; Abdolkarim Zarei; Shaneka Lawson; Keith E. Woeste; M. J. M. Smulders
2016-01-01
Persian walnut (Juglans regia L.) is the world's most widely grown nut crop, but large-scale assessments and comparisons of the genetic diversity of the crop are notably lacking. To guide the conservation and utilization of Persian walnut genetic resources, genotypes (n = 189) from 25 different regions in 14 countries on...
Christopher A. Dicus; Kevin J. Osborne
2015-01-01
When managing for fire across a large landscape, the types of fuel treatments, the locations of treatments, and the percentage of the landscape being treated should all interact to impact not only potential fire size, but also carbon dynamics across that landscape. To investigate these interactions, we utilized a forest growth model (FVS-FFE) and fire simulation...
Valuing the Recreational Benefits from the Creation of Nature Reserves in Irish Forests
Riccardo Scarpa; Susan M. Chilton; W. George Hutchinson; Joseph Buongiorno
2000-01-01
Data from a large-scale contingent valuation study are used to investigate the effects of forest attributes on willingness to pay for forest recreation in Ireland. In particular, the presence of a nature reserve in the forest is found to significantly increase the visitors' willingness to pay. A random utility model is used to estimate the welfare change associated...
Late-time cosmological phase transitions
NASA Technical Reports Server (NTRS)
Schramm, David N.
1991-01-01
It is shown that the potential galaxy formation and large scale structure problems of objects existing at high redshifts (Z ≳ 5), structures existing on scales of 100 Mpc as well as velocity flows on such scales, and minimal microwave anisotropies (ΔT/T ≲ 10^-5) can be solved if the seeds needed to generate structure form in a vacuum phase transition after decoupling. It is argued that the basic physics of such a phase transition is no more exotic than that utilized in the more traditional GUT scale phase transitions, and that, just as in the GUT case, significant random Gaussian fluctuations and/or topological defects can form. Scale lengths of approx. 100 Mpc for large scale structure as well as approx. 1 Mpc for galaxy formation occur naturally. Possible support for new physics that might be associated with such a late-time transition comes from the preliminary results of the SAGE solar neutrino experiment, implying neutrino flavor mixing with values similar to those required for a late-time transition. It is also noted that a see-saw model for the neutrino masses might also imply a tau neutrino mass that is an ideal hot dark matter candidate. However, in general either hot or cold dark matter can be consistent with a late-time transition.
Time-Series Forecast Modeling on High-Bandwidth Network Measurements
Yoo, Wucherl; Sim, Alex
2016-06-24
With the increasing number of geographically distributed scientific collaborations and the growing sizes of scientific data, it has become challenging for users to achieve the best possible network performance on a shared network. In this paper, we have developed a model to forecast expected bandwidth utilization on high-bandwidth wide area networks. The forecast model can improve the efficiency of the resource utilization and scheduling of data movements on high-bandwidth networks to accommodate ever increasing data volume for large-scale scientific data applications. A univariate time-series forecast model is developed with the Seasonal decomposition of Time series by Loess (STL) and the AutoRegressive Integrated Moving Average (ARIMA) on Simple Network Management Protocol (SNMP) path utilization measurement data. Compared with the traditional approach such as the Box-Jenkins methodology for training the ARIMA model, our forecast model reduces computation time by up to 92.6%. It also shows resilience against abrupt network usage changes. Finally, our forecast model conducts a large number of multi-step forecasts, and the forecast errors are within the mean absolute deviation (MAD) of the monitored measurements.
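A compact sketch of this pipeline, written as an illustration rather than the authors' code, is to decompose the utilization series with STL, fit an ARIMA model to the deseasonalized part, and add the seasonal component back for a multi-step forecast. The ARIMA order, seasonal period, and synthetic input below are assumptions, not the values tuned in the paper.

# Hedged sketch of an STL + ARIMA forecast for a path-utilization series.
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import STL
from statsmodels.tsa.arima.model import ARIMA

# Synthetic hourly utilization series with a daily cycle (period = 24).
idx = pd.date_range("2016-01-01", periods=24 * 60, freq="h")
y = pd.Series(
    50 + 20 * np.sin(2 * np.pi * np.arange(len(idx)) / 24)
    + np.random.default_rng(1).normal(0, 3, len(idx)),
    index=idx,
)

stl = STL(y, period=24).fit()                    # trend + seasonal + residual
deseasonalized = y - stl.seasonal
model = ARIMA(deseasonalized, order=(2, 1, 1)).fit()   # order is an assumption

steps = 48
seasonal_future = np.tile(stl.seasonal.iloc[-24:].to_numpy(), steps // 24)
forecast = model.forecast(steps=steps) + seasonal_future
print(forecast.head())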
NASA Technical Reports Server (NTRS)
Valle, Gerard D.; Selig, Molly; Litteken, Doug; Oliveras, Ovidio
2012-01-01
This paper documents the integration of a large hatch penetration into an inflatable module and compares analytical load predictions with measured results obtained through strain measurement. Strain was measured photogrammetrically and with strain gages mounted to selected clevises that interface with the structural webbings. Bench testing showed good correlation between strain measurements obtained from an extensometer and photogrammetric measurements, especially after the fabric had transitioned through the low-load/high-strain region of the curve. Testing of the full-scale torus showed mixed results in the lower-load, and thus lower-strain, regions. Overall strain, and thus load, measured by strain gages and photogrammetry tracked fairly well with analytical predictions. Methods and areas of improvement are discussed.
Using First Differences to Reduce Inhomogeneity in Radiosonde Temperature Datasets.
NASA Astrophysics Data System (ADS)
Free, Melissa; Angell, James K.; Durre, Imke; Lanzante, John; Peterson, Thomas C.; Seidel, Dian J.
2004-11-01
The utility of a “first difference” method for producing temporally homogeneous large-scale mean time series is assessed. Starting with monthly averages, the method involves dropping data around the time of suspected discontinuities and then calculating differences in temperature from one year to the next, resulting in a time series of year-to-year differences for each month at each station. These first difference time series are then combined to form large-scale means, and mean temperature time series are constructed from the first difference series. When applied to radiosonde temperature data, the method introduces random errors that decrease with the number of station time series used to create the large-scale time series and increase with the number of temporal gaps in the station time series. Root-mean-square errors for annual means of datasets produced with this method using over 500 stations are estimated at no more than 0.03 K, with errors in trends less than 0.02 K decade⁻¹ for 1960–97 at 500 mb. For a 50-station dataset, errors in trends in annual global means introduced by the first differencing procedure may be as large as 0.06 K decade⁻¹ (for six breaks per series), which is greater than the standard error of the trend. Although the first difference method offers significant resource and labor advantages over methods that attempt to adjust the data, it introduces an error in large-scale mean time series that may be unacceptable in some cases.
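The arithmetic of the method is simple enough to sketch directly. The fragment below is a simplified illustration, not the authors' processing code: each station series is differenced year to year, differences spanning an assumed break are dropped, the differences are averaged over stations, and the cumulative sum rebuilds a large-scale anomaly series.

# Hedged sketch of the "first difference" idea for a stations x years array.
import numpy as np

def first_difference_mean(station_series, break_years=()):
    """station_series: 2-D array (stations x years), NaN for missing values."""
    data = np.array(station_series, dtype=float)
    diffs = data[:, 1:] - data[:, :-1]         # year-to-year differences
    for y in break_years:                      # drop diffs spanning a break
        diffs[:, y - 1] = np.nan
    mean_diff = np.nanmean(diffs, axis=0)      # large-scale mean difference
    series = np.concatenate([[0.0], np.nancumsum(mean_diff)])
    return series                              # anomaly relative to year 0

demo = np.random.randn(5, 40).cumsum(axis=1)   # 5 synthetic stations, 40 years
print(first_difference_mean(demo, break_years=(20,)))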
Feng, Wei; Xiao, Kai; Zhou, Wenbing; Zhu, Duanwei; Zhou, Yiyong; Yuan, Yu; Xiao, Naidong; Wan, Xiaoqiong; Hua, Yumei; Zhao, Jianwei
2017-01-01
Eichhornia crassipes (EC, water hyacinth) has gained attention due to its alarming reproductive capacity, which leads to serious ecological damage in many eutrophic lakes around the world. Traditional mechanical removal methods have disadvantages: they squander this valuable lignocellulosic resource. Meanwhile, the subsequent rational and efficient large-scale utilization of EC biomass after phytoremediation of polluted water remains a bottleneck. As a result, the exploration of effective EC utilization technologies has become a popular research field. After years of exploration and improvement, there have been significant breakthroughs in this research area, including the synthesis of high-performance EC cellulose-derived materials, innovative bioenergy production, etc. This review organizes research on the utilization of EC biomass across several important fields and then analyses the advantages and disadvantages of each pathway. Finally, comprehensive EC utilization technologies are proposed as a reference. Copyright © 2016 Elsevier Ltd. All rights reserved.
Kelley, Shana O.; Mirkin, Chad A.; Walt, David R.; Ismagilov, Rustem F.; Toner, Mehmet; Sargent, Edward H.
2015-01-01
Rapid progress in identifying disease biomarkers has increased the importance of creating high-performance detection technologies. Over the last decade, the design of many detection platforms has focused on either the nano or micro length scale. Here, we review recent strategies that combine nano- and microscale materials and devices to produce large improvements in detection sensitivity, speed and accuracy, allowing previously undetectable biomarkers to be identified in clinical samples. Microsensors that incorporate nanoscale features can now rapidly detect disease-related nucleic acids expressed in patient samples. New microdevices that separate large clinical samples into nanocompartments allow precise quantitation of analytes, and microfluidic systems that utilize nanoscale binding events can detect rare cancer cells in the bloodstream more accurately than before. These advances will lead to faster and more reliable clinical diagnostic devices. PMID:25466541
Ruan, Junhu; Wang, Xuping; Shi, Yan
2014-01-01
We present a two-stage approach for the “helicopters and vehicles” intermodal transportation of medical supplies in large-scale disaster responses. In the first stage, a fuzzy-based method and its heuristic algorithm are developed to select the locations of temporary distribution centers (TDCs) and assign medical aid points (MAPs) to each TDC. In the second stage, an integer-programming model is developed to determine the delivery routes. Numerical experiments verified the effectiveness of the approach and revealed several findings: (i) more TDCs often increase the efficiency and utility of medical supplies; (ii) it is not necessarily true that vehicles should load more and more medical supplies in emergency responses; (iii) the more contrasting the traveling speeds of helicopters and vehicles are, the more advantageous the intermodal transportation is. PMID:25350005
Schreier, Amy L; Grove, Matt
2014-05-01
The benefits of spatial memory for foraging animals can be assessed on two distinct spatial scales: small-scale space (travel within patches) and large-scale space (travel between patches). While the patches themselves may be distributed at low density, within patches resources are likely densely distributed. We propose, therefore, that spatial memory for recalling the particular locations of previously visited feeding sites will be more advantageous during between-patch movement, where it may reduce the distances traveled by animals that possess this ability compared to those that must rely on random search. We address this hypothesis by employing descriptive statistics and spectral analyses to characterize the daily foraging routes of a band of wild hamadryas baboons in Filoha, Ethiopia. The baboons slept on two main cliffs--the Filoha cliff and the Wasaro cliff--and daily travel began and ended on a cliff; thus four daily travel routes exist: Filoha-Filoha, Filoha-Wasaro, Wasaro-Wasaro, Wasaro-Filoha. We use newly developed partial sum methods and distribution-fitting analyses to distinguish periods of area-restricted search from more extensive movements. The results indicate a single peak in travel activity in the Filoha-Filoha and Wasaro-Filoha routes, three peaks of travel activity in the Filoha-Wasaro routes, and two peaks in the Wasaro-Wasaro routes, and are consistent with on-the-ground observations of the baboons' foraging and ranging behavior. In each of the four daily travel routes the "tipping points" identified by the partial sum analyses indicate transitions between travel in small- versus large-scale space. The correspondence between the quantitative analyses and the field observations suggests great utility in using these types of analyses to examine primate travel patterns, especially for distinguishing between movement in small- versus large-scale space. Only the distribution-fitting analyses are inconsistent with the field observations, which may be due to the scale at which these analyses were conducted. © 2013 Wiley Periodicals, Inc.
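A hedged sketch of the partial-sum idea is given below: the cumulative deviation of step lengths from their mean rises during extensive between-patch travel and falls during restricted within-patch movement, so local extrema of the partial sum are candidate tipping points. The minimum separation and synthetic step data are assumptions for illustration and are not the Filoha baboon data or the exact method of the paper.

# Hedged sketch: partial sums of step lengths and their local extrema as
# candidate switches between area-restricted search and extensive travel.
import numpy as np

def tipping_points(step_lengths_m, min_separation=5):
    steps = np.asarray(step_lengths_m, dtype=float)
    partial_sum = np.cumsum(steps - steps.mean())
    # local extrema of the partial sum mark candidate movement-mode switches
    candidates = [
        i for i in range(1, len(partial_sum) - 1)
        if (partial_sum[i] - partial_sum[i - 1]) * (partial_sum[i + 1] - partial_sum[i]) < 0
    ]
    kept = []                                   # enforce a minimum spacing
    for c in candidates:
        if not kept or c - kept[-1] >= min_separation:
            kept.append(c)
    return partial_sum, kept

rng = np.random.default_rng(2)
steps = np.concatenate([rng.gamma(2, 5, 40),    # short steps: within-patch
                        rng.gamma(2, 60, 20),   # long steps: between-patch
                        rng.gamma(2, 5, 40)])
_, switches = tipping_points(steps)
print(switches)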
Decentralization, stabilization, and estimation of large-scale linear systems
NASA Technical Reports Server (NTRS)
Siljak, D. D.; Vukcevic, M. B.
1976-01-01
In this short paper we consider three closely related aspects of large-scale systems: decentralization, stabilization, and estimation. A method is proposed to decompose a large linear system into a number of interconnected subsystems with decentralized (scalar) inputs or outputs. The procedure is preliminary to the hierarchic stabilization and estimation of linear systems and is performed on the subsystem level. A multilevel control scheme based upon the decomposition-aggregation method is developed for stabilization of input-decentralized linear systems. Local linear feedback controllers are used to stabilize each decoupled subsystem, while global linear feedback controllers are utilized to minimize the coupling effect among the subsystems. Systems stabilized by the method have a tolerance to a wide class of nonlinearities in subsystem coupling and high reliability with respect to structural perturbations. The proposed output-decentralization and stabilization schemes can be used directly to construct asymptotic state estimators for large linear systems on the subsystem level. The problem of dimensionality is resolved by constructing a number of low-order estimators, thus avoiding the design of a single estimator for the overall system.
Zhao, Yongli; He, Ruiying; Chen, Haoran; Zhang, Jie; Ji, Yuefeng; Zheng, Haomian; Lin, Yi; Wang, Xinbo
2014-04-21
Software defined networking (SDN) has become the focus in the current information and communication technology area because of its flexibility and programmability. It has been introduced into various network scenarios, such as datacenter networks, carrier networks, and wireless networks. The optical transport network is also regarded as an important application scenario for SDN, where SDN is adopted as the enabling technology of the data communication network (DCN) instead of generalized multi-protocol label switching (GMPLS). However, the practical performance of SDN-based DCN for large-scale optical networks, which is very important for technology selection in future optical network deployment, has not been evaluated up to now. In this paper we have built a large-scale flexi-grid optical network testbed with 1000 virtual optical transport nodes to evaluate the performance of SDN-based DCN, including network scalability, DCN bandwidth limitation, and restoration time. A series of network performance parameters including blocking probability, bandwidth utilization, average lightpath provisioning time, and failure restoration time have been demonstrated under various network environments, such as with different traffic loads and different DCN bandwidths. The demonstration in this work can be taken as a proof of concept for future network deployment.
NASA Technical Reports Server (NTRS)
Sanders, Bobby W.; Weir, Lois J.
2008-01-01
A new hypersonic inlet for a turbine-based combined-cycle (TBCC) engine has been designed. This split-flow inlet is designed to provide flow to an over-under propulsion system with turbofan and dual-mode scramjet engines for flight from takeoff to Mach 7. It utilizes a variable-geometry ramp, high-speed cowl lip rotation, and a rotating low-speed cowl that serves as a splitter to divide the flow between the low-speed turbofan and the high-speed scramjet and to isolate the turbofan at high Mach numbers. The low-speed inlet was designed for Mach 4, the maximum mode transition Mach number. Integration of the Mach 4 inlet into the Mach 7 inlet imposed significant constraints on the low-speed inlet design, including a large amount of internal compression. The inlet design was used to develop mechanical designs for two inlet mode transition test models: small-scale (IMX) and large-scale (LIMX) research models. The large-scale model is designed to facilitate multi-phase testing including inlet mode transition and inlet performance assessment, controls development, and integrated systems testing with turbofan and scramjet engines.
Large-Scale Advanced Prop-Fan (LAP) pitch change actuator and control design report
NASA Technical Reports Server (NTRS)
Schwartz, R. A.; Carvalho, P.; Cutler, M. J.
1986-01-01
In recent years, considerable attention has been directed toward improving aircraft fuel consumption. Studies have shown that the high inherent efficiency previously demonstrated by low speed turboprop propulsion systems may now be extended to today's higher speed aircraft if advanced high-speed propeller blades having thin airfoils and aerodynamic sweep are utilized. Hamilton Standard has designed a 9-foot diameter single-rotation Large-Scale Advanced Prop-Fan (LAP) which will be tested on a static test stand, in a high speed wind tunnel and on a research aircraft. The major objective of this testing is to establish the structural integrity of large-scale Prop-Fans of advanced construction in addition to the evaluation of aerodynamic performance and aeroacoustic design. This report describes the operation, design features and actual hardware of the LAP pitch control system. The pitch control system, which controls blade angle and propeller speed, consists of two separate assemblies. The first is the control unit which provides the hydraulic supply, speed governing and feather function for the system. The second unit is the hydro-mechanical pitch change actuator which directly changes blade angle (pitch) as scheduled by the control.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gritsenko, Marina A.; Xu, Zhe; Liu, Tao
Comprehensive, quantitative information on abundances of proteins and their post-translational modifications (PTMs) can potentially provide novel biological insights into disease pathogenesis and therapeutic intervention. Herein, we introduce a quantitative strategy utilizing isobaric stable isotope-labelling techniques combined with two-dimensional liquid chromatography-tandem mass spectrometry (2D-LC-MS/MS) for large-scale, deep quantitative proteome profiling of biological samples or clinical specimens such as tumor tissues. The workflow includes isobaric labeling of tryptic peptides for multiplexed and accurate quantitative analysis, basic reversed-phase LC fractionation and concatenation for reduced sample complexity, and nano-LC coupled to high resolution and high mass accuracy MS analysis for high confidence identification and quantification of proteins. This proteomic analysis strategy has been successfully applied for in-depth quantitative proteomic analysis of tumor samples, and can also be used for integrated proteome and PTM characterization, as well as comprehensive quantitative proteomic analysis across samples from large clinical cohorts.
HEPCloud, a New Paradigm for HEP Facilities: CMS Amazon Web Services Investigation
Holzman, Burt; Bauerdick, Lothar A. T.; Bockelman, Brian; ...
2017-09-29
Historically, high energy physics computing has been performed on large purpose-built computing systems. These began as single-site compute facilities, but have evolved into the distributed computing grids used today. Recently, there has been an exponential increase in the capacity and capability of commercial clouds. Cloud resources are highly virtualized and intended to be able to be flexibly deployed for a variety of computing tasks. There is a growing interest among the cloud providers to demonstrate the capability to perform large-scale scientific computing. In this paper, we discuss results from the CMS experiment using the Fermilab HEPCloud facility, which utilized both local Fermilab resources and virtual machines in the Amazon Web Services Elastic Compute Cloud. We discuss the planning, technical challenges, and lessons learned involved in performing physics workflows on a large-scale set of virtualized resources. Additionally, we discuss the economics and operational efficiencies when executing workflows both in the cloud and on dedicated resources.
Gritsenko, Marina A; Xu, Zhe; Liu, Tao; Smith, Richard D
2016-01-01
Comprehensive, quantitative information on abundances of proteins and their posttranslational modifications (PTMs) can potentially provide novel biological insights into disease pathogenesis and therapeutic intervention. Herein, we introduce a quantitative strategy utilizing isobaric stable isotope-labeling techniques combined with two-dimensional liquid chromatography-tandem mass spectrometry (2D-LC-MS/MS) for large-scale, deep quantitative proteome profiling of biological samples or clinical specimens such as tumor tissues. The workflow includes isobaric labeling of tryptic peptides for multiplexed and accurate quantitative analysis, basic reversed-phase LC fractionation and concatenation for reduced sample complexity, and nano-LC coupled to high resolution and high mass accuracy MS analysis for high confidence identification and quantification of proteins. This proteomic analysis strategy has been successfully applied for in-depth quantitative proteomic analysis of tumor samples and can also be used for integrated proteome and PTM characterization, as well as comprehensive quantitative proteomic analysis across samples from large clinical cohorts.
Aad, G.
2014-12-11
A search is conducted for non-resonant new phenomena in dielectron and dimuon final states, originating from either contact interactions or large extra spatial dimensions. The LHC 2012 proton–proton collision dataset recorded by the ATLAS detector is used, corresponding to 20 fb⁻¹ at √s = 8 TeV. The dilepton invariant mass spectrum is a discriminating variable in both searches, with the contact interaction search additionally utilizing the dilepton forward-backward asymmetry. No significant deviations from the Standard Model expectation are observed. Lower limits are set on the ℓℓqq contact interaction scale Λ between 15.4 TeV and 26.3 TeV, at the 95% credibility level. For large extra spatial dimensions, lower limits are set on the string scale MS between 3.2 TeV and 5.0 TeV.
Trace: a high-throughput tomographic reconstruction engine for large-scale datasets.
Bicer, Tekin; Gürsoy, Doğa; Andrade, Vincent De; Kettimuthu, Rajkumar; Scullin, William; Carlo, Francesco De; Foster, Ian T
2017-01-01
Modern synchrotron light sources and detectors produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used imaging techniques that generates data at tens of gigabytes per second is computed tomography (CT). Although CT experiments result in rapid data generation, the analysis and reconstruction of the collected data may require hours or even days of computation time with a medium-sized workstation, which hinders the scientific progress that relies on the results of analysis. We present Trace, a data-intensive computing engine that we have developed to enable high-performance implementation of iterative tomographic reconstruction algorithms for parallel computers. Trace provides fine-grained reconstruction of tomography datasets using both (thread-level) shared memory and (process-level) distributed memory parallelization. Trace utilizes a special data structure called replicated reconstruction object to maximize application performance. We also present the optimizations that we apply to the replicated reconstruction objects and evaluate them using tomography datasets collected at the Advanced Photon Source. Our experimental evaluations show that our optimizations and parallelization techniques can provide 158× speedup using 32 compute nodes (384 cores) over a single-core configuration and decrease the end-to-end processing time of a large sinogram (with 4501 × 1 × 22,400 dimensions) from 12.5 h to <5 min per iteration. The proposed tomographic reconstruction engine can efficiently process large-scale tomographic data using many compute nodes and minimize reconstruction times.
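The sketch below illustrates the "replicated reconstruction object" idea described in the preceding abstract: each worker accumulates back-projected contributions into its own private copy of the reconstruction grid, and the replicas are reduced afterwards, avoiding fine-grained locking. This is a hedged, simplified Python analogue, not the Trace implementation; the back-projection kernel is a placeholder and all sizes are arbitrary.

```python
# Minimal sketch of replica-per-worker accumulation followed by a reduction step.
import numpy as np
from concurrent.futures import ProcessPoolExecutor

GRID = (128, 128)

def partial_backprojection(sinogram_chunk):
    replica = np.zeros(GRID)                 # private replica of the reconstruction object
    for row in sinogram_chunk:
        replica += row.mean()                # placeholder for a real back-projection kernel
    return replica

def reconstruct(sinogram, n_workers=4):
    chunks = np.array_split(sinogram, n_workers)
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        replicas = list(pool.map(partial_backprojection, chunks))
    return np.sum(replicas, axis=0)          # reduction combining all replicas

if __name__ == "__main__":
    sino = np.random.rand(512, 128)          # toy sinogram
    image = reconstruct(sino)
    print(image.shape)
```

In Trace the same pattern is applied at both thread level (shared memory) and process level (distributed memory); this sketch shows only the process-level flavor.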
Thatcher, T L; Wilson, D J; Wood, E E; Craig, M J; Sextro, R G
2004-08-01
Scale modeling is a useful tool for analyzing complex indoor spaces. Scale model experiments can reduce experimental costs, improve control of flow and temperature conditions, and provide a practical method for pretesting full-scale system modifications. However, changes in physical scale and working fluid (air or water) can complicate interpretation of the equivalent effects in the full-scale structure. This paper presents a detailed scaling analysis of a water tank experiment designed to model a large indoor space, and experimental results obtained with this model to assess the influence of furniture and people on the pollutant concentration field at breathing height. Theoretical calculations are derived for predicting the effects of the loss of molecular diffusion, small-scale eddies, turbulent kinetic energy, and turbulent mass diffusivity in a scale model, even without Reynolds number matching. Pollutant dispersion experiments were performed in a water-filled 30:1 scale model of a large room, using uranine dye injected continuously from a small point source. Pollutant concentrations were measured in a plane, using laser-induced fluorescence techniques, for three interior configurations: unobstructed, table-like obstructions, and table-like and figure-like obstructions. Concentrations within the measurement plane varied by more than an order of magnitude, even after the concentration field was fully developed. Objects in the model interior had a significant effect on both the concentration field and fluctuation intensity in the measurement plane. PRACTICAL IMPLICATION: This scale model study demonstrates both the utility of scale models for investigating dispersion in indoor environments and the significant impact of turbulence created by furnishings and people on pollutant transport from floor-level sources. In a room with no furniture or occupants, the average concentration can vary by about a factor of 3 across the room. Adding furniture and occupants can increase this spatial variation by another factor of 3.
Mackin, R Scott; Delucchi, Kevin L; Bennett, Robert W; Areán, Patricia A
2011-02-01
This study was conducted to determine the effect of cognitive impairment (CI) on mental healthcare costs for older low-income adults with severe psychiatric illness. Data were collected from 62 ethnically diverse low-income older adults with severe psychiatric illness who were participating in day programming at a large community mental health center. CI was diagnosed by a neuropsychologist utilizing the Mattis Dementia Rating Scale-Second Edition and structured ratings of functional impairment (Clinical Dementia Rating Scale). Mental healthcare costs for 6, 12, and 24-month intervals before cognitive assessments were obtained for each participant. Substance abuse history was evaluated utilizing a structured questionnaire, depression symptom severity was assessed utilizing the Hamilton Depression Rating Scale, and psychiatric diagnoses were obtained through medical chart abstraction. CI was exhibited by 61% of participants and was associated with significantly increased mental healthcare costs during 6, 12, and 24-month intervals. Results of a regression analysis indicated that ethnicity and CI were both significant predictors of log transformed mental healthcare costs over 24 months with CI accounting for 13% of the variance in cost data. CI is a significant factor associated with increased mental healthcare costs in patients with severe psychiatric illness. Identifying targeted interventions to accommodate CI may lead to improving treatment outcomes and reducing the burden of mental healthcare costs for individuals with severe psychiatric illness.
Large-Scale Weather Disturbances in Mars’ Southern Extratropics
NASA Astrophysics Data System (ADS)
Hollingsworth, Jeffery L.; Kahre, Melinda A.
2015-11-01
Between late autumn and early spring, Mars’ middle and high latitudes within its atmosphere support strong mean thermal gradients between the tropics and poles. Observations from both the Mars Global Surveyor (MGS) and Mars Reconnaissance Orbiter (MRO) indicate that this strong baroclinicity supports intense, large-scale eastward traveling weather systems (i.e., transient synoptic-period waves). These extratropical weather disturbances are key components of the global circulation. Such wave-like disturbances act as agents in the transport of heat and momentum, and generalized scalar/tracer quantities (e.g., atmospheric dust, water-vapor and ice clouds). The character of large-scale, traveling extratropical synoptic-period disturbances in Mars' southern hemisphere during late winter through early spring is investigated using a moderately high-resolution Mars global climate model (Mars GCM). This Mars GCM imposes interactively lifted and radiatively active dust based on a threshold value of the surface stress. The model exhibits a reasonable "dust cycle" (i.e., globally averaged, a dustier atmosphere during southern spring and summer occurs). Compared to their northern-hemisphere counterparts, southern synoptic-period weather disturbances and accompanying frontal waves have smaller meridional and zonal scales, and are far less intense. Influences of the zonally asymmetric (i.e., east-west varying) topography on southern large-scale weather are examined. Simulations that adapt Mars’ full topography compared to simulations that utilize synthetic topographies emulating key large-scale features of the southern middle latitudes indicate that Mars’ transient barotropic/baroclinic eddies are highly influenced by the great impact basins of this hemisphere (e.g., Argyre and Hellas). The occurrence of a southern storm zone in late winter and early spring appears to be anchored to the western hemisphere via orographic influences from the Tharsis highlands, and the Argyre and Hellas impact basins. Geographically localized transient-wave activity diagnostics are constructed that illuminate dynamical differences amongst the simulations and these are presented.
Rungarunlert, Sasitorn; Ferreira, Joao N; Dinnyes, Andras
2016-01-01
Generation of cardiomyocytes from pluripotent stem cells (PSCs) is a common and valuable approach to produce large quantities of cells for various applications, including assays and models for drug development, cell-based therapies, and tissue engineering. All these applications would benefit from a reliable bioreactor-based methodology to consistently generate homogeneous PSC-derived embryoid bodies (EBs) at a large scale, which can further undergo cardiomyogenic differentiation. The goal of this chapter is to describe a scalable method to consistently generate large numbers of homogeneous and synchronized EBs from PSCs. This method utilizes a slow-turning lateral vessel bioreactor to direct EB formation and subsequent cardiomyogenic lineage differentiation.
Visualizing the Big (and Large) Data from an HPC Resource
NASA Astrophysics Data System (ADS)
Sisneros, R.
2015-10-01
Supercomputers are built to endure painfully large simulations and contend with resulting outputs. These are characteristics that scientists are all too willing to test the limits of in their quest for science at scale. The data generated during a scientist's workflow through an HPC center (large data) is the primary target for analysis and visualization. However, the hardware itself is also capable of generating volumes of diagnostic data (big data); this presents compelling opportunities to deploy analogous analytic techniques. In this paper we will provide a survey of some of the many ways in which visualization and analysis may be crammed into the scientific workflow as well as utilized on machine-specific data.
NASA Astrophysics Data System (ADS)
Lin, Y.; O'Malley, D.; Vesselinov, V. V.
2015-12-01
Inverse modeling seeks model parameters given a set of observed state variables. However, for many practical problems, because the observed data sets are often large and the model parameters are often numerous, conventional methods for solving the inverse problem can be computationally expensive. We have developed a new, computationally efficient Levenberg-Marquardt method for solving large-scale inverse modeling problems. Levenberg-Marquardt methods require the solution of a dense linear system of equations which can be prohibitively expensive to compute for large-scale inverse problems. Our novel method projects the original large-scale linear problem down to a Krylov subspace, such that the dimensionality of the measurements can be significantly reduced. Furthermore, instead of solving the linear system for every Levenberg-Marquardt damping parameter, we store the Krylov subspace computed when solving for the first damping parameter and recycle it for all the following damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by these computational techniques. We apply this new inverse modeling method to invert for a random transmissivity field. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) at each computational node in the model domain. The inversion is also aided by the use of regularization techniques. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Julia is an advanced high-level scientific programming language that allows for efficient memory management and utilization of high-performance computational resources. Compared with a Levenberg-Marquardt method using standard linear inversion techniques, our Levenberg-Marquardt method yields a speed-up ratio of 15 in a multi-core computational environment and a speed-up ratio of 45 in a single-core computational environment. Therefore, our new inverse modeling method is a powerful tool for large-scale applications.
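The dense toy sketch below illustrates the subspace idea described above: build a small Krylov basis once, then reuse it to solve the damped Levenberg-Marquardt normal equations for several damping parameters. It is a hedged illustration only, written in Python with arbitrary variable names, and is not the Julia/MADS implementation referenced in the abstract.

```python
# Hedged sketch: Krylov-subspace projection with subspace recycling across damping values.
import numpy as np

def lm_step_krylov(J, r, dampings, k=10):
    g = J.T @ r                                  # gradient direction J^T r
    # orthonormal Krylov basis of span{g, (J^T J) g, (J^T J)^2 g, ...}
    V = [g / np.linalg.norm(g)]
    for _ in range(k - 1):
        w = J.T @ (J @ V[-1])
        for v in V:                              # Gram-Schmidt orthogonalization
            w -= (v @ w) * v
        n = np.linalg.norm(w)
        if n < 1e-12:
            break
        V.append(w / n)
    V = np.stack(V, axis=1)                      # parameters x basis vectors
    JV = J @ V                                   # project the Jacobian once
    A, b = JV.T @ JV, -(JV.T @ r)                # reduced normal equations
    steps = {}
    for lam in dampings:                         # recycle the same subspace for all dampings
        y = np.linalg.solve(A + lam * np.eye(A.shape[0]), b)
        steps[lam] = V @ y
    return steps

J = np.random.rand(200, 50); r = np.random.rand(200)
steps = lm_step_krylov(J, r, dampings=[1e-2, 1e-1, 1.0])
print({lam: round(float(np.linalg.norm(s)), 3) for lam, s in steps.items()})
```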
Ding, Xiuhua; Su, Shaoyong; Nandakumar, Kannabiran; Wang, Xiaoling; Fardo, David W
2014-01-01
Large-scale genetic studies are often composed of related participants, and utilizing familial relationships can be cumbersome and computationally challenging. We present an approach to efficiently handle sequencing data from complex pedigrees that incorporates information from rare variants as well as common variants. Our method employs a 2-step procedure that sequentially regresses out correlation from familial relatedness and then uses the resulting phenotypic residuals in a penalized regression framework to test for associations with variants within genetic units. The operating characteristics of this approach are detailed using simulation data based on a large, multigenerational cohort.
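As a hedged sketch of a two-step analysis in the spirit of the approach above, the code below first removes familial correlation from the phenotype (here crudely, by centering within families rather than fitting a full kinship model) and then tests variants within a genetic unit in a penalized (lasso) regression on the residuals. All data are simulated and the specific estimators are stand-ins, not the paper's exact procedure.

```python
# Hedged sketch: 2-step family-residual + penalized regression analysis.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n, p, n_fam = 300, 50, 60
family = rng.integers(0, n_fam, size=n)               # family membership labels
genotypes = rng.integers(0, 3, size=(n, p)).astype(float)
phenotype = genotypes[:, 3] * 0.8 + rng.normal(size=n)
phenotype += rng.normal(size=n_fam)[family]           # shared family effect

# step 1: regress out familial relatedness (within-family centering as a stand-in)
counts = np.bincount(family, minlength=n_fam)
sums = np.bincount(family, weights=phenotype, minlength=n_fam)
fam_means = np.where(counts > 0, sums / np.maximum(counts, 1), 0.0)
residuals = phenotype - fam_means[family]

# step 2: penalized regression of residuals on variants within a genetic unit
model = Lasso(alpha=0.05).fit(genotypes, residuals)
print("variants with nonzero effects:", np.flatnonzero(model.coef_))
```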
Effect of nacelle on wake meandering in a laboratory scale wind turbine using LES
NASA Astrophysics Data System (ADS)
Foti, Daniel; Yang, Xiaolei; Guala, Michele; Sotiropoulos, Fotis
2015-11-01
Wake meandering, the large-scale motion in wind turbine wakes, has considerable effects on the velocity deficit and turbulence intensity in the turbine wake from laboratory-scale to utility-scale wind turbines. In the dynamic wake meandering model, the wake meandering is assumed to be caused by large-scale atmospheric turbulence. On the other hand, Kang et al. (J. Fluid Mech., 2014) demonstrated that the nacelle geometry has a significant effect on the wake meandering of a hydrokinetic turbine, through the interaction of the inner wake of the nacelle vortex with the outer wake of the tip vortices. In this work, the influence of the nacelle on the wake meandering of a miniature wind turbine previously used in experiments (Howard et al., Phys. Fluids, 2015) is demonstrated with large eddy simulations (LES) using an immersed boundary method with grids fine enough to resolve the turbine's geometric characteristics. The three-dimensionality of the wake meandering is analyzed in detail through turbulent spectra and meander reconstruction. The computed flow fields exhibit wake dynamics similar to those observed in the wind tunnel experiments and are analyzed to shed new light on the role of the energetic nacelle vortex in wake meandering. This work was supported by the Department of Energy (DE-EE0002980, DE-EE0005482 and DE-AC04-94AL85000), and Sandia National Laboratories. Computational resources were provided by Sandia National Laboratories and the University of Minnesota Supercomputing.
Numerical Simulations of Subscale Wind Turbine Rotor Inboard Airfoils at Low Reynolds Number
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blaylock, Myra L.; Maniaci, David Charles; Resor, Brian R.
2015-04-01
New blade designs are planned to support future research campaigns at the SWiFT facility in Lubbock, Texas. The sub-scale blades will reproduce specific aerodynamic characteristics of utility-scale rotors. Reynolds numbers for megawatt-class, utility-scale rotors are generally above 2-8 million. The thickness of inboard airfoils for these large rotors is typically as high as 35-40%. The thickness and the proximity to three-dimensional flow of these airfoils present design and analysis challenges, even at the full scale. However, more than a decade of experience with the airfoils in numerical simulation, in the wind tunnel, and in the field has generated confidence in their performance. Reynolds number regimes for the sub-scale rotor are significantly lower for the inboard blade, ranging from 0.7 to 1 million. Performance of the thick airfoils in this regime is uncertain because of the lack of wind tunnel data and the inherent challenge associated with numerical simulations. This report documents efforts to determine the most capable analysis tools to support these simulations in an effort to improve understanding of the aerodynamic properties of thick airfoils in this Reynolds number regime. Numerical results from various codes for four airfoils are verified against previously published wind tunnel results where data at those Reynolds numbers are available. Results are then computed for other Reynolds numbers of interest.
NASA Astrophysics Data System (ADS)
Mei, D.-M.; Wang, G.-J.; Mei, H.; Yang, G.; Liu, J.; Wagner, M.; Panth, R.; Kooi, K.; Yang, Y.-Y.; Wei, W.-Z.
2018-03-01
Light, MeV-scale dark matter (DM) is an exciting DM candidate that is undetectable by current experiments. A germanium (Ge) detector utilizing internal charge amplification for the charge carriers created by the ionization of impurities is a promising new technology with experimental sensitivity for detecting MeV-scale DM. We analyze the physics mechanisms of the signal formation, charge creation, charge internal amplification, and the projected sensitivity for directly detecting MeV-scale DM particles. We present a design for a novel Ge detector at helium temperature (~4 K) enabling ionization of impurities from DM impacts. With large localized E-fields, the ionized excitations can be accelerated to kinetic energies larger than the Ge bandgap, at which point they can create additional electron-hole pairs, producing intrinsic amplification to achieve an ultra-low energy threshold of ~0.1 eV for detecting low-mass DM particles in the MeV scale. Correspondingly, such a Ge detector with 1 kg-year exposure will have high sensitivity to a DM-nucleon cross section of ~5 × 10^{-45} cm^2 at a DM mass of ~10 MeV/c^2 and a DM-electron cross section of ~5 × 10^{-46} cm^2 at a DM mass of ~1 MeV/c^2.
USDA-ARS?s Scientific Manuscript database
Utilizing next-generation sequencing technology, combined with ChIP (Chromatin Immunoprecipitation) technology, we analyzed histone modification (acetylation) induced by butyrate and the large-scale mapping of the epigenomic landscape of normal histone H3 and acetylated histone H3K9 and H3K27. To d...
Is There Future Utility in Nuclear Weapons? Nuclear Weapons Save Lives
2014-02-13
operate with relative impunity short of large-scale conflict. Some point to a nuclear India and Pakistan as an example of instability concern. In...1997, South Asia observer Neil Joeck argued that "India and Pakistan's nuclear capabilities have not created strategic stability (and) do not reduce...elimination of illiteracy, provision of sustainable energy, debt relief for developing countries, clearance of landmines and more has been estimated
ERIC Educational Resources Information Center
Polanin, Joshua R.; Wilson, Sandra Jo
2014-01-01
The purpose of this project is to demonstrate the practical methods developed to utilize a dataset consisting of both multivariate and multilevel effect size data. The context for this project is a large-scale meta-analytic review of the predictors of academic achievement. This project is guided by three primary research questions: (1) How do we…
Unmanned Aircraft Systems Traffic Management (UTM)
NASA Technical Reports Server (NTRS)
Johnson, Ronald D.
2018-01-01
UTM is an 'air traffic management' ecosystem for uncontrolled operations. UTM utilizes industry's ability to supply services under FAA's regulatory authority where these services do not exist. UTM development will ultimately enable the management of large-scale, low-altitude UAS operations. The operational concept will address beyond-visual-line-of-sight UAS operations under 400 ft AGL, covering the information architecture, data exchange protocols, and software functions; the roles and responsibilities of the FAA and operators; and performance requirements.
High capacity immobilized amine sorbents
Gray, McMahan L. [Pittsburgh, PA]; Champagne, Kenneth J. [Fredericktown, PA]; Soong, Yee [Monroeville, PA]; Filburn, Thomas [Granby, CT]
2007-10-30
A method is provided for making low-cost CO2 sorbents that can be used in large-scale gas-solid processes. The improved method entails treating an amine to increase the number of secondary amine groups and impregnating the amine in a porous solid support. The method increases the CO2 capture capacity and decreases the cost of utilizing an amine-enriched solid sorbent in CO2 capture systems.
Compact wavelength-selective optical switch based on digital optical phase conjugation.
Li, Zhiyang; Claver, Havyarimana
2013-11-15
In this Letter, we show that digital optical phase conjugation might be utilized to construct a new kind of wavelength-selective switch. When incorporated with a multimode interferometer, these switches have wide bandwidth, high tolerance for fabrication error, and low polarization dependency. They might help to build large-scale multiwavelength nonblocking switching systems, or even to fabricate an optical cross-connecting or routing system on a chip.
Decentralized Optimal Dispatch of Photovoltaic Inverters in Residential Distribution Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dall'Anese, Emiliano; Dhople, Sairaj V.; Johnson, Brian B.
Summary form only given. Decentralized methods for computing optimal real and reactive power setpoints for residential photovoltaic (PV) inverters are developed in this paper. It is known that conventional PV inverter controllers, which are designed to extract maximum power at unity power factor, cannot address secondary performance objectives such as voltage regulation and network loss minimization. Optimal power flow techniques can be utilized to select which inverters will provide ancillary services, and to compute their optimal real and reactive power setpoints according to well-defined performance criteria and economic objectives. Leveraging advances in sparsity-promoting regularization techniques and semidefinite relaxation, this paper shows how such problems can be solved with reduced computational burden and optimality guarantees. To enable large-scale implementation, a novel algorithmic framework is introduced - based on the so-called alternating direction method of multipliers - by which optimal power flow-type problems in this setting can be systematically decomposed into sub-problems that can be solved in a decentralized fashion by the utility and customer-owned PV systems with limited exchanges of information. Since the computational burden is shared among multiple devices and the requirement of all-to-all communication can be circumvented, the proposed optimization approach scales favorably to large distribution networks.
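The sketch below is a hedged illustration of the decomposition idea described above: a generic consensus ADMM loop in which each "agent" (for example, the utility or a customer-owned PV system) updates a local copy of a shared decision variable, and only local copies and dual variables are exchanged. The quadratic local costs are hypothetical placeholders, not an actual optimal power flow formulation.

```python
# Hedged sketch: consensus ADMM with closed-form local updates.
import numpy as np

local_targets = np.array([0.9, 1.05, 1.0, 0.95])   # each agent's preferred setpoint
rho = 1.0
x = np.zeros_like(local_targets)                    # local copies of the shared variable
u = np.zeros_like(local_targets)                    # scaled dual variables
z = 0.0                                             # shared (consensus) variable

for _ in range(100):
    # local step: argmin_x (x - target)^2 / 2 + (rho/2) (x - z + u)^2, in closed form
    x = (local_targets + rho * (z - u)) / (1.0 + rho)
    z = np.mean(x + u)                              # coordination step (simple averaging)
    u = u + x - z                                   # dual update
print("consensus setpoint:", round(float(z), 4))
```

The limited-information structure is visible in the loop: each agent needs only its own cost, its dual variable, and the broadcast consensus value z.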
Loong, Bronwyn; Zaslavsky, Alan M; He, Yulei; Harrington, David P
2013-10-30
Statistical agencies have begun to partially synthesize public-use data for major surveys to protect the confidentiality of respondents' identities and sensitive attributes by replacing high disclosure risk and sensitive variables with multiple imputations. To date, there are few applications of synthetic data techniques to large-scale healthcare survey data. Here, we describe partial synthesis of survey data collected by the Cancer Care Outcomes Research and Surveillance (CanCORS) project, a comprehensive observational study of the experiences, treatments, and outcomes of patients with lung or colorectal cancer in the USA. We review inferential methods for partially synthetic data and discuss selection of high disclosure risk variables for synthesis, specification of imputation models, and identification disclosure risk assessment. We evaluate data utility by replicating published analyses, comparing results from the original and synthetic data, and discussing practical issues in preserving inferential conclusions. We found that important subgroup relationships must be included in the synthetic data imputation model to preserve the data utility of the observed data for a given analysis procedure. We conclude that synthetic CanCORS data are best suited for preliminary data analysis purposes. These methods address the requirement to share data in clinical research without compromising confidentiality. Copyright © 2013 John Wiley & Sons, Ltd.
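As a hedged sketch of partial synthesis in the spirit of the preceding abstract, the code below replaces a high-disclosure-risk variable with draws from an imputation model fit to the observed data and produces several synthetic copies. The variables and model are hypothetical; the point illustrated is that subgroup structure (here, a group indicator) must appear in the imputation model for subgroup analyses of the synthetic data to remain valid.

```python
# Hedged sketch: regression-based partial synthesis of one sensitive variable.
import numpy as np

rng = np.random.default_rng(42)
n = 500
group = rng.integers(0, 2, size=n)                        # analysis subgroup
age = rng.normal(60, 10, size=n)
income = 20 + 0.5 * age + 10 * group + rng.normal(0, 5, size=n)  # sensitive variable

# fit the synthesis (imputation) model, including the subgroup indicator
X = np.column_stack([np.ones(n), age, group])
beta, *_ = np.linalg.lstsq(X, income, rcond=None)
sigma = float(np.std(income - X @ beta))

def synthesize(m=5):
    """Return m partially synthetic copies with income replaced by model draws."""
    return [X @ beta + rng.normal(0, sigma, size=n) for _ in range(m)]

synthetic_incomes = synthesize()
print([round(float(s.mean()), 2) for s in synthetic_incomes])
```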
NASA Astrophysics Data System (ADS)
Villalobos, J. I.
2005-12-01
The modeling of basin structures is an important step in the development of plans and policies for ground water management. To facilitate the analysis of large-scale regional structures, gravity data are used to examine the overall structural trend of the region. The gravitational attraction of structures in the upper mantle and crust provides vital information about the possible structure and composition of a region. Improved availability of gravity data via the internet has promoted extensive construction and interpretation of gravity maps in the analysis of sub-surface structural anomalies. The utilization of gravity data appears to be particularly worthwhile because it is a non-invasive and inexpensive means of addressing the subsurface tectonic framework of large-scale regions. In this paper, the author intends to illustrate 1) acquisition of gravity data and its processing; 2) interpretation of gravity data; and 3) sources of uncertainty and errors, using a case study of the Jornada del Muerto basin in South-Central New Mexico, where integrated gravity data allowed several faults, sub-basins and thickness variations within the basin's structure to be inferred. The author also explores the integration of the gravity method with other geophysical methods to further refine the delineation of basins.
NASA Astrophysics Data System (ADS)
Adkins, Kevin; Elfajri, Oumnia; Sescu, Adrian
2016-11-01
Simulation and modeling have shown that wind farms have an impact on the near-surface atmospheric boundary layer (ABL), as turbulent wakes generated by the turbines enhance vertical mixing. These changes alter downstream atmospheric properties. With a large portion of wind farms hosted within an agricultural context, changes to the environment can potentially have secondary impacts, such as effects on crop productivity. With the exception of a few observational data sets that focus on the impact on near-surface temperature, little to no observational evidence exists. These few studies also lack high spatial resolution due to their use of a limited number of meteorological towers or remote sensing techniques. This study utilizes an instrumented small unmanned aerial system (sUAS) to gather in-situ field measurements from two Midwest wind farms, focusing on the impact that large utility-scale wind turbines have on relative humidity. Results are also compared to numerical experiments conducted using large eddy simulation (LES). Wind turbines are found to differentially alter the relative humidity in the downstream, spanwise and vertical directions under a variety of atmospheric stability conditions.
Effect of video server topology on contingency capacity requirements
NASA Astrophysics Data System (ADS)
Kienzle, Martin G.; Dan, Asit; Sitaram, Dinkar; Tetzlaff, William H.
1996-03-01
Video servers need to assign a fixed set of resources to each video stream in order to guarantee on-time delivery of the video data. If a server has insufficient resources to guarantee the delivery, it must reject the stream request rather than slowing down all existing streams. Large scale video servers are being built as clusters of smaller components, so as to be economical, scalable, and highly available. This paper uses a blocking model developed for telephone systems to evaluate video server cluster topologies. The goal is to achieve high utilization of the components and low per-stream cost combined with low blocking probability and high user satisfaction. The analysis shows substantial economies of scale achieved by larger server images. Simple distributed server architectures can result in partitioning of resources with low achievable resource utilization. By comparing achievable resource utilization of partitioned and monolithic servers, we quantify the cost of partitioning. Next, we present an architecture for a distributed server system that avoids resource partitioning and results in highly efficient server clusters. Finally, we show how, in these server clusters, further optimizations can be achieved through caching and batching of video streams.
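As a hedged illustration of the telephone-system blocking model alluded to in the preceding abstract, the sketch below computes Erlang B blocking probabilities and compares a monolithic server with the same capacity split into independent partitions. All loads and capacities are illustrative numbers, not values from the paper.

```python
# Hedged sketch: Erlang B blocking for monolithic versus partitioned server clusters.
def erlang_b(offered_load, n_servers):
    """Blocking probability via the standard Erlang B recursion."""
    b = 1.0
    for k in range(1, n_servers + 1):
        b = offered_load * b / (k + offered_load * b)
    return b

load, capacity, partitions = 180.0, 200, 4
monolithic = erlang_b(load, capacity)
partitioned = erlang_b(load / partitions, capacity // partitions)
print(f"monolithic blocking:  {monolithic:.4f}")
print(f"partitioned blocking: {partitioned:.4f}")  # higher, illustrating the cost of partitioning
```

At equal utilization, the larger pooled system blocks fewer requests, which is the economy of scale the abstract quantifies.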
NASA Astrophysics Data System (ADS)
George, D. L.; Iverson, R. M.
2012-12-01
Numerically simulating debris-flow motion presents many challenges due to the complicated physics of flowing granular-fluid mixtures, the diversity of spatial scales (ranging from a characteristic particle size to the extent of the debris-flow deposit), and the unpredictability of the flow domain prior to a simulation. Accurately predicting debris flows requires models that are complex enough to represent the dominant effects of granular-fluid interaction, while remaining mathematically and computationally tractable. We have developed a two-phase depth-averaged mathematical model for debris-flow initiation and subsequent motion. Additionally, we have developed software that numerically solves the model equations efficiently on large domains. A unique feature of the mathematical model is that it includes the feedback between pore-fluid pressure and the evolution of the solid grain volume fraction, a process that regulates flow resistance. This feature endows the model with the ability to represent the transition from a stationary mass to a dynamic flow. With traditional approaches, slope stability analysis and flow simulation are treated separately, and the latter models are often initialized with force balances that are unrealistically far from equilibrium. Additionally, our new model relies on relatively few dimensionless parameters that are functions of well-known material properties constrained by physical data (e.g., hydraulic permeability, pore-fluid viscosity, debris compressibility, Coulomb friction coefficient, etc.). We have developed numerical methods and software for accurately solving the model equations. By employing adaptive mesh refinement (AMR), the software can efficiently resolve an evolving debris flow as it advances through irregular topography, without needing terrain-fit computational meshes. The AMR algorithms utilize multiple levels of grid resolution, so that computationally inexpensive coarse grids can be used where the flow is absent, and much higher resolution grids evolve with the flow. The reduction in computational cost due to AMR makes very large-scale problems tractable on personal computers. Model accuracy can be tested by comparison of numerical predictions and empirical data. These comparisons utilize controlled experiments conducted at the USGS debris-flow flume, which provide detailed data about flow mobilization and dynamics. Additionally, we have simulated historical large-scale debris flows, such as the (≈50 million m^3) debris flow that originated on Mt. Meager, British Columbia in 2010. This flow took a very complex route through highly variable topography and provides a valuable benchmark for testing. Maps of the debris-flow deposit and data from seismic stations provide evidence regarding flow initiation, transit times and deposition. Our simulations reproduce many of the complex patterns of the event, such as run-out geometry and extent, and the large-scale nature of the flow and the complex topographical features demonstrate the utility of AMR in flow simulations.
The performance of low-cost commercial cloud computing as an alternative in computational chemistry.
Thackston, Russell; Fortenberry, Ryan C
2015-05-05
The growth of commercial cloud computing (CCC) as a viable means of computational infrastructure is largely unexplored for the purposes of quantum chemistry. In this work, the PSI4 suite of computational chemistry programs is installed on five different types of Amazon Web Services CCC platforms. The performance for a set of electronically excited state single-point energies is compared between these CCC platforms and typical, "in-house" physical machines. Further considerations are made for the number of cores or virtual CPUs (vCPUs, for the CCC platforms), but no considerations are made for full parallelization of the program (even though parallelization of the BLAS library is implemented), complete high-performance computing cluster utilization, or steal time. Even with this most pessimistic view of the computations, CCC resources are shown to be more cost effective for significant numbers of typical quantum chemistry computations. Large numbers of large computations are still best handled by more traditional means, but smaller-scale research may be more effectively undertaken through CCC services. © 2015 Wiley Periodicals, Inc.
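A back-of-the-envelope comparison in the spirit of the preceding study is sketched below: amortized cost per job on an owned workstation versus an hourly cloud instance. All prices, lifetimes, utilizations, and runtimes are hypothetical placeholders, not figures from the paper.

```python
# Hedged sketch: cost-per-job comparison, in-house hardware versus hourly cloud instances.
def in_house_cost_per_job(purchase_price, lifetime_years, utilization, job_hours):
    usable_hours = lifetime_years * 365 * 24 * utilization
    return purchase_price / usable_hours * job_hours

def cloud_cost_per_job(hourly_rate, job_hours):
    return hourly_rate * job_hours

job_hours = 6.0                                    # one typical single-point energy job (assumed)
print("in-house cost per job:", round(in_house_cost_per_job(8000, 4, 0.6, job_hours), 2))
print("cloud cost per job:   ", round(cloud_cost_per_job(0.40, job_hours), 2))
```

The crossover point depends strongly on how heavily the owned machine is actually utilized, which is why the abstract's conclusion differs for small-scale and large-scale workloads.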
Fluet, Gerard G.; Merians, Alma S.; Qiu, Qinyin; Lafond, Ian; Saleh, Soha; Ruano, Viviana; Delmonico, Andrea R.; Adamovich, Sergei V.
2014-01-01
Background and Purpose A majority of studies examining repetitive task practice facilitated by robots for the treatment of upper extremity paresis utilize standardized protocols applied to large groups. Others utilize interventions tailored to patients but do not describe the clinical decision-making process used to develop and modify interventions. This case report describes a robot-based intervention customized to match the goals and clinical presentation of a gentleman with upper extremity hemiparesis secondary to stroke. Methods PM is an 85-year-old man with left hemiparesis secondary to an intracerebral hemorrhage five years prior to examination. Outcomes were measured before and after a one-month period of home therapy and after a one-month robotic intervention. The intervention was designed to address specific impairments identified during his PT examination. When necessary, activities were modified based on the patient's response to his first week of treatment. Outcomes PM trained twelve sessions using six virtually simulated activities. Modifications to original configurations of these activities resulted in performance improvements in five of these activities. PM demonstrated a 35-second improvement in Jebsen Test of Hand Function time and a 44-second improvement in Wolf Motor Function Test time subsequent to the robotic training intervention. Reaching kinematics, 24-hour activity measurement and the Hand and Activities of Daily Living scales of the Stroke Impact Scale all improved as well. Discussion A customized program of robotically facilitated rehabilitation resulted in large short-term improvements in several measurements of upper extremity function in a patient with chronic hemiparesis. PMID:22592063
Klimyuk, Victor; Pogue, Gregory; Herz, Stefan; Butler, John; Haydon, Hugh
2014-01-01
This review describes the adaptation of the plant virus-based transient expression system, magnICON(®), for the at-scale manufacturing of pharmaceutical proteins. The system utilizes so-called "deconstructed" viral vectors that rely on Agrobacterium-mediated systemic delivery into the plant cells for recombinant protein production. The system is also suitable for production of hetero-oligomeric proteins like immunoglobulins. By taking advantage of well-established R&D tools for optimizing the expression of the protein of interest using this system, product concepts can reach the manufacturing stage in highly competitive time periods. At the manufacturing stage, the system offers many remarkable features including rapid production cycles, high product yield, virtually unlimited scale-up potential, and flexibility for different manufacturing schemes. The magnICON system has been successfully adapted to very different logistical manufacturing formats: (1) speedy production of multiple small batches of individualized pharmaceutical proteins (e.g., antigens comprising individualized vaccines to treat non-Hodgkin's lymphoma patients) and (2) large-scale production of other pharmaceutical proteins such as therapeutic antibodies. General descriptions of the prototype GMP-compliant manufacturing processes and facilities for the product formats that are in preclinical and clinical testing are provided.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Middleton, Richard S.; Levine, Jonathan S.; Bielicki, Jeffrey M.
CO2 capture, utilization, and storage (CCUS) technology has yet to be widely deployed at a commercial scale despite multiple high-profile demonstration projects. We suggest that developing a large-scale, visible, and financially viable CCUS network could potentially overcome many barriers to deployment and jumpstart commercial-scale CCUS. To date, substantial effort has focused on technology development to reduce the costs of CO2 capture from coal-fired power plants. Here, we propose that near-term investment could focus on implementing CO2 capture on facilities that produce high-value chemicals/products. These facilities can absorb the expected impact of the marginal increase in the cost of production on the price of their product, due to the addition of CO2 capture, more than coal-fired power plants. A financially viable demonstration of a large-scale CCUS network requires offsetting the costs of CO2 capture by using the CO2 as an input to the production of market-viable products. As a result, we demonstrate this alternative development path with the example of an integrated CCUS system where CO2 is captured from ethylene producers and used for enhanced oil recovery in the U.S. Gulf Coast region.
Simulation-optimization of large agro-hydrosystems using a decomposition approach
NASA Astrophysics Data System (ADS)
Schuetze, Niels; Grundmann, Jens
2014-05-01
In this contribution, a stochastic simulation-optimization framework for decision support for optimal planning and operation of water supply in large agro-hydrosystems is presented. It is based on a decomposition solution strategy which allows for (i) the use of numerical process models together with efficient Monte Carlo simulations for a reliable estimation of higher quantiles of the minimum agricultural water demand for full and deficit irrigation strategies at the small scale (farm level), and (ii) the utilization of the optimization results at the small scale for solving water resources management problems at the regional scale. As a secondary result of several simulation-optimization runs at the smaller scale, stochastic crop-water production functions (SCWPFs) for different crops are derived, which can be used as a basic tool for assessing the impact of climate variability on the risk for potential yield. In addition, microeconomic impacts of climate change and the vulnerability of the agro-ecological systems are evaluated. The developed methodology is demonstrated through its application to a real-world case study for the South Al-Batinah region in the Sultanate of Oman, where a coastal aquifer is affected by saltwater intrusion due to excessive groundwater withdrawal for irrigated agriculture.
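As a hedged sketch of the small-scale step described above, the code below estimates a higher quantile of minimum irrigation water demand by Monte Carlo sampling over synthetic weather realizations. The demand model is a hypothetical placeholder, not the numerical crop and process models used in the framework, and all distributions are illustrative.

```python
# Hedged sketch: Monte Carlo estimation of a high quantile of seasonal water demand.
import numpy as np

rng = np.random.default_rng(7)

def seasonal_water_demand(rainfall_mm, et_mm):
    """Toy farm-scale demand: crop water requirement minus effective rainfall."""
    return np.maximum(et_mm - 0.7 * rainfall_mm, 0.0)

n_realizations = 10_000
rainfall = rng.gamma(shape=4.0, scale=60.0, size=n_realizations)   # mm per season (assumed)
et = rng.normal(loc=900.0, scale=80.0, size=n_realizations)        # crop ET, mm (assumed)

demand = seasonal_water_demand(rainfall, et)
print("90th-percentile demand (mm):", round(float(np.quantile(demand, 0.9)), 1))
```

A quantile estimate of this kind is what the regional-scale water management problem would then take as its farm-level demand input.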
Recent "Ground Testing" Experiences in the National Full-Scale Aerodynamics Complex
NASA Technical Reports Server (NTRS)
Zell, Peter; Stich, Phil; Sverdrup, Jacobs; George, M. W. (Technical Monitor)
2002-01-01
The large test sections of the National Full-scale Aerodynamics Complex (NFAC) wind tunnels provide ideal controlled wind environments to test ground-based objects and vehicles. Though this facility was designed and provisioned primarily for aeronautical testing requirements, several experiments have been designed to utilize existing model mount structures to support "non-flying" systems. This presentation will discuss some of the ground-based testing capabilities of the facility and provide examples of groundbased tests conducted in the facility to date. It will also address some future work envisioned and solicit input from the SATA membership on ways to improve the service that NASA makes available to customers.
Cyber Security and Reliability in a Digital Cloud
2013-01-01
a higher utilization of servers, lower professional support staff needs, economies of scale for the physical facility, and the flexibility to locate...as a system, the DoD can achieve the economies of scale typically associated with large data centers. Recommendation 3: The DoD CIO and DISA...providers will help set standards for secure cloud computing across the economy. Recommendation 7: The DoD CIO and DISA should participate in the
Scale-Free Networks and Commercial Air Carrier Transportation in the United States
NASA Technical Reports Server (NTRS)
Conway, Sheila R.
2004-01-01
Network science, or the art of describing system structure, may be useful for the analysis and control of large, complex systems. For example, networks exhibiting scale-free structure have been found to be particularly well suited to deal with environmental uncertainty and large demand growth. The National Airspace System may be, at least in part, a scalable network. In fact, the hub-and-spoke structure of the commercial segment of the NAS is an often-cited example of an existing scale-free network. After reviewing the nature and attributes of scale-free networks, this assertion is put to the test: is commercial air carrier transportation in the United States well explained by this model? If so, are the positive attributes of these networks, e.g., those of efficiency, flexibility and robustness, fully realized, or could we effect substantial improvement? This paper first outlines attributes of various network types, then looks more closely at the common carrier air transportation network from the perspectives of the traveler, the airlines, and Air Traffic Control (ATC). Network models are applied within each paradigm, including discussion of implied strengths and weaknesses of each model. Finally, known limitations of scalable networks are discussed. With an eye towards NAS operations, utilizing the strengths and avoiding the weaknesses of scale-free networks are addressed.
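The sketch below is a hedged illustration of the network-science background discussed in the preceding abstract: it generates a scale-free (Barabási-Albert) graph and a random graph of similar size and compares their largest hub degrees, the feature that gives hub-and-spoke systems their characteristic efficiency and fragility. It is generic network analysis, not a model of the actual air carrier route data examined in the paper.

```python
# Hedged sketch: compare hub structure of a scale-free graph and a random graph.
import networkx as nx

n = 500
scale_free = nx.barabasi_albert_graph(n, m=2, seed=1)
random_graph = nx.gnm_random_graph(n, scale_free.number_of_edges(), seed=1)

for name, g in [("scale-free", scale_free), ("random", random_graph)]:
    degrees = sorted((d for _, d in g.degree()), reverse=True)
    print(f"{name:10s} max degree: {degrees[0]:3d}  top-5 hubs: {degrees[:5]}")
```

The heavy-tailed degree distribution of the first graph, dominated by a few highly connected hubs, is the structural signature the paper tests the commercial air transportation network against.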
Meteor Crater: Energy of formation - Implications of centrifuge scaling
NASA Technical Reports Server (NTRS)
Schmidt, R. M.
1980-01-01
Recent work on explosive cratering has demonstrated the utility of performing subscale experiments on a geotechnic centrifuge to develop scaling rules for very large energy events. The present investigation is concerned with an extension of this technique to impact cratering. Experiments have been performed using a projectile gun mounted directly on the centrifuge rotor to launch projectiles into a suitable soil container undergoing centripetal accelerations in excess of 500 G. The pump tube of a two-stage light-gas gun was used to attain impact velocities of approximately 2 km/sec. The results of the experiments indicate that the energy of formation of any large impact crater depends upon the impact velocity. This dependence, shown for the case of Meteor Crater, is consistent with analogous results for the specific energy dependence of explosives and is expected to persist to impact velocities in excess of 25 km/sec.
Accelerating Large Scale Image Analyses on Parallel, CPU-GPU Equipped Systems
Teodoro, George; Kurc, Tahsin M.; Pan, Tony; Cooper, Lee A.D.; Kong, Jun; Widener, Patrick; Saltz, Joel H.
2014-01-01
The past decade has witnessed a major paradigm shift in high performance computing with the introduction of accelerators as general purpose processors. These computing devices make available very high parallel computing power at low cost and power consumption, transforming current high performance platforms into heterogeneous CPU-GPU equipped systems. Although the theoretical performance achieved by these hybrid systems is impressive, taking practical advantage of this computing power remains a very challenging problem. Most applications are still deployed to either GPU or CPU, leaving the other resource under- or un-utilized. In this paper, we propose, implement, and evaluate a performance aware scheduling technique along with optimizations to make efficient collaborative use of CPUs and GPUs on a parallel system. In the context of feature computations in large scale image analysis applications, our evaluations show that intelligently co-scheduling CPUs and GPUs can significantly improve performance over GPU-only or multi-core CPU-only approaches. PMID:25419545
Wiley, Joshua S; Shelley, Jacob T; Cooks, R Graham
2013-07-16
We describe a handheld, wireless low-temperature plasma (LTP) ambient ionization source and its performance on a benchtop and a miniature mass spectrometer. The source, which is inexpensive to build and operate, is battery-powered and utilizes miniature helium cylinders or air as the discharge gas. Comparison of a conventional, large-scale LTP source against the handheld LTP source, which uses less helium and power than the large-scale version, revealed that the handheld source had similar or slightly better analytical performance. Another advantage of the handheld LTP source is the ability to quickly interrogate a gaseous, liquid, or solid sample without requiring any setup time. A small, 7.4-V Li-polymer battery is able to sustain plasma for 2 h continuously, while the miniature helium cylinder supplies gas flow for approximately 8 continuous hours. Long-distance ion transfer was achieved for distances up to 1 m.
NASA Technical Reports Server (NTRS)
Anyamba, Assaf; Linthicum, Kenneth J.; Small, Jennifer; Britch, S. C.; Tucker, C. J.
2012-01-01
Remotely sensed vegetation measurements for the last 30 years combined with other climate data sets such as rainfall and sea surface temperatures have come to play an important role in the study of the ecology of arthropod-borne diseases. We show that epidemics and epizootics of previously unpredictable Rift Valley fever are directly influenced by large scale flooding associated with the El Niño/Southern Oscillation. This flooding affects the ecology of disease transmitting arthropod vectors through vegetation development and other bioclimatic factors. This information is now utilized to monitor, model, and map areas of potential Rift Valley fever outbreaks and is used as an early warning system for risk reduction of outbreaks to human and animal health, trade, and associated economic impacts. The continuation of such satellite measurements is critical to anticipating, preventing, and managing disease epidemics and epizootics and other climate-related disasters.
ProMotE: an efficient algorithm for counting independent motifs in uncertain network topologies.
Ren, Yuanfang; Sarkar, Aisharjya; Kahveci, Tamer
2018-06-26
Identifying motifs in biological networks is essential in uncovering key functions served by these networks. Finding non-overlapping motif instances is however a computationally challenging task. The fact that biological interactions are uncertain events further complicates the problem, as it makes the existence of an embedding of a given motif an uncertain event as well. In this paper, we develop a novel method, ProMotE (Probabilistic Motif Embedding), to count non-overlapping embeddings of a given motif in probabilistic networks. We utilize a polynomial model to capture the uncertainty. We develop three strategies to scale our algorithm to large networks. Our experiments demonstrate that our method scales to large networks in practical time with high accuracy where existing methods fail. Moreover, our experiments on cancer and degenerative disease networks show that our method helps in uncovering key functional characteristics of biological networks.
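For intuition only, the toy sketch below shows the basic uncertainty calculation behind motif counting in probabilistic networks: the probability that one candidate embedding exists is the product of its edge probabilities, and summing over candidates gives an expected count. It ignores the non-overlap constraint and the polynomial model that ProMotE actually uses, and all edge probabilities are hypothetical.

```python
# Toy sketch (a simplification, not ProMotE's polynomial model): in a
# probabilistic network the existence of one candidate motif embedding is an
# uncertain event whose probability is the product of its edge probabilities,
# and summing over candidate embeddings gives an expected count.
# Edge probabilities below are hypothetical.
from math import prod

edge_prob = {("A", "B"): 0.9, ("B", "C"): 0.7, ("A", "C"): 0.4,
             ("C", "D"): 0.8, ("B", "D"): 0.6}

def embedding_probability(edges):
    """Probability that every edge of one candidate embedding is present."""
    return prod(edge_prob.get(tuple(sorted(e)), 0.0) for e in edges)

# Two candidate embeddings of a triangle motif (hypothetical).
candidates = [[("A", "B"), ("B", "C"), ("A", "C")],
              [("B", "C"), ("C", "D"), ("B", "D")]]

expected_count = sum(embedding_probability(c) for c in candidates)
print(f"expected motif count: {expected_count:.3f}")
```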
Promoting R & D in photobiological hydrogen production utilizing mariculture-raised cyanobacteria.
Sakurai, Hidehiro; Masukawa, Hajime
2007-01-01
This review article explores the potential of using mariculture-raised cyanobacteria as solar energy converters of hydrogen (H2). The exploitation of the sea surface for large-scale renewable energy production and the reasons for selecting the economical, nitrogenase-based systems of cyanobacteria for H2 production, are described in terms of societal benefits. Reports of cyanobacterial photobiological H2 production are summarized with respect to specific activity, efficiency of solar energy conversion, and maximum H2 concentration attainable. The need for further improvements in biological parameters such as low-light saturation properties, sustainability of H2 production, and so forth, and the means to overcome these difficulties through the identification of promising wild-type strains followed by optimization of the selected strains using genetic engineering are also discussed. Finally, a possible mechanism for the development of economical large-scale mariculture operations in conjunction with international cooperation and social acceptance is outlined.
Cheow, Lih Feng; Viswanathan, Ramya; Chin, Chee-Sing; Jennifer, Nancy; Jones, Robert C; Guccione, Ernesto; Quake, Stephen R; Burkholder, William F
2014-10-07
Homogeneous assay platforms for measuring protein-ligand interactions are highly valued due to their potential for high-throughput screening. However, the implementation of these multiplexed assays in conventional microplate formats is considerably expensive due to the large amounts of reagents required and the need for automation. We implemented a homogeneous fluorescence anisotropy-based binding assay in an automated microfluidic chip to simultaneously interrogate >2300 pairwise interactions. We demonstrated the utility of this platform in determining the binding affinities between chromatin-regulatory proteins and different post-translationally modified histone peptides. The microfluidic chip assay produces comparable results to conventional microtiter plate assays, yet requires 2 orders of magnitude less sample and an order of magnitude fewer pipetting steps. This approach enables one to use small samples for medium-scale screening and could ease the bottleneck of large-scale protein purification.
A unified framework of image latent feature learning on Sina microblog
NASA Astrophysics Data System (ADS)
Wei, Jinjin; Jin, Zhigang; Zhou, Yuan; Zhang, Rui
2015-10-01
Large-scale user-contributed images with accompanying text are rapidly increasing on social media websites such as Sina microblog. However, noise and the incomplete correspondence between the images and the texts make precise image retrieval and ranking difficult. In this paper, a hypergraph-based learning framework is proposed for image ranking, which simultaneously utilizes visual features, textual content, and social link information to estimate the relevance between images. By representing each image as a vertex in the hypergraph, complex relationships between images can be captured. Then, by updating the weights of the hyperedges throughout the hypergraph learning process, the effect of different edges can be adaptively modulated in the constructed hypergraph. Furthermore, the popularity degree of each image is employed to re-rank the retrieval results. Comparative experiments on a large-scale Sina microblog data set demonstrate the effectiveness of the proposed approach.
Big Data Analytics for Genomic Medicine
He, Karen Y.; Ge, Dongliang; He, Max M.
2017-01-01
Genomic medicine attempts to build individualized strategies for diagnostic or therapeutic decision-making by utilizing patients’ genomic information. Big Data analytics uncovers hidden patterns, unknown correlations, and other insights through examining various large-scale data sets. While integration and manipulation of diverse genomic data and comprehensive electronic health records (EHRs) on a Big Data infrastructure present challenges, they also provide a feasible opportunity to develop an efficient and effective approach to identify clinically actionable genetic variants for individualized diagnosis and therapy. In this paper, we review the challenges of manipulating large-scale next-generation sequencing (NGS) data and diverse clinical data derived from the EHRs for genomic medicine. We introduce possible solutions for different challenges in manipulating, managing, and analyzing genomic and clinical data to implement genomic medicine. Additionally, we present a practical Big Data toolset for identifying clinically actionable genetic variants using high-throughput NGS data and EHRs. PMID:28212287
Seismic data restoration with a fast L1 norm trust region method
NASA Astrophysics Data System (ADS)
Cao, Jingjie; Wang, Yanfei
2014-08-01
Seismic data restoration is a major strategy for providing a reliable wavefield when field data do not satisfy the Shannon sampling theorem. Recovery by sparsity-promoting inversion often yields sparse solutions of seismic data in a transformed domain; however, most methods for sparsity-promoting inversion are line-search methods, which are efficient but inclined to converge to local solutions. Using a trust region method, which can provide globally convergent solutions, is a good choice for overcoming this shortcoming. A trust region method for sparse inversion has been proposed previously, but its efficiency must be improved to be suitable for large-scale computation. In this paper, a new L1 norm trust region model is proposed for seismic data restoration, and a robust gradient projection method is utilized for solving the sub-problem. Numerical results on synthetic and field data demonstrate that the proposed trust region method achieves excellent computation speed and is a viable alternative for large-scale computation.
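As background, a minimal sparsity-promoting recovery can be written as an iterative soft-thresholding (ISTA) loop. The sketch below uses random stand-in operators and is a simple line-search-style baseline for illustration, not the L1 norm trust region method proposed in the paper.

```python
# Illustrative sketch of sparsity-promoting recovery (ISTA with soft
# thresholding), a simple baseline rather than the paper's trust region method.
# The measurement operator and sparse signal are random stand-ins.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 200))                 # hypothetical sampling operator
x_true = np.zeros(200)
x_true[rng.choice(200, 8, replace=False)] = rng.standard_normal(8)
y = A @ x_true                                     # observed (sub-sampled) data

lam = 0.05
step = 1.0 / np.linalg.norm(A, 2) ** 2             # 1 / Lipschitz constant of the gradient
x = np.zeros(200)
for _ in range(300):
    grad = A.T @ (A @ x - y)
    z = x - step * grad
    x = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)   # soft threshold

print("nonzeros recovered:", np.count_nonzero(np.abs(x) > 1e-3))
```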
Willow bioenergy plantation research in the Northeast
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, E.H.; Abrahamson, L.P.; Kopp, R.F.
1993-12-31
Experiments were established in Central New York in the spring of 1987 to evaluate the potential of Salix for biomass production in bioenergy plantations. Emphasis of the research was on developing and refining establishment, tending and maintenance techniques, with complementary study of breeding, coppice physiology, pests, nutrient use and bioconversion to energy products. Current yields utilizing Salix clones developed in cooperation with the University of Toronto in short-rotation intensive culture bioenergy plantations in the Northeast approximate 8 oven dry tons per acre per year with annual harvesting. Successful clones have been identified and culture techniques refined. The results are now being integrated to establish a 100 acre Salix large-scale bioenergy farm to demonstrate current successful biomass production technology and to provide plantations of sufficient size to test harvesters; adequately assess economics of the systems; and provide large quantities of uniform biomass for pilot-scale conversion facilities.
Petri Net controller synthesis based on decomposed manufacturing models.
Dideban, Abbas; Zeraatkar, Hashem
2018-06-01
Applying supervisory control theory to real systems in modeling tools such as Petri Nets (PN) has become challenging in recent years due to the large number of states in the automata models and to uncontrollable events. Uncontrollable events give rise to forbidden states, which may be removed by employing linear constraints. Although many methods have been proposed to reduce these constraints, enforcing them on a large-scale system remains difficult and complicated. This paper proposes a new method for controller synthesis based on PN modeling. In this approach, the original PN model is broken down into smaller models, which reduces the computational cost significantly. Using this method, it is easy to reduce the constraints and enforce them on a Petri net model. Results obtained with the proposed method on PN models demonstrate effective controller synthesis for large-scale systems. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
Unmasking the masked Universe: the 2M++ catalogue through Bayesian eyes
NASA Astrophysics Data System (ADS)
Lavaux, Guilhem; Jasche, Jens
2016-01-01
This work describes a full Bayesian analysis of the Nearby Universe as traced by galaxies of the 2M++ survey. The analysis is run in two sequential steps. The first step self-consistently derives the luminosity-dependent galaxy biases, the power spectrum of matter fluctuations and matter density fields within a Gaussian statistics approximation. The second step makes a detailed analysis of the three-dimensional large-scale structures, assuming a fixed bias model and a fixed cosmology. This second step allows for the reconstruction of both the final density field and the initial conditions at z = 1000 assuming a fixed bias model. From these, we derive fields that self-consistently extrapolate the observed large-scale structures. We give two examples of these extrapolations and their utility for the detection of structures: the visibility of the Sloan Great Wall, and the detection and characterization of the Local Void using DIVA, a Lagrangian-based technique to classify structures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramamurthy, Byravamurthy
2014-05-05
In this project, we developed scheduling frameworks and algorithms for the dynamic bandwidth demands of large-scale science applications. Apart from theoretical approaches such as Integer Linear Programming, Tabu Search and Genetic Algorithm heuristics, we utilized practical data from the ESnet OSCARS project (from our DOE lab partners) to conduct realistic simulations of our approaches. We disseminated our work through conference paper presentations, journal papers and a book chapter. In this project we addressed the problem of scheduling lightpaths over optical wavelength division multiplexed (WDM) networks and published several conference and journal papers on this topic. We also addressed the problem of joint allocation of computing, storage and networking resources in Grid/Cloud networks and proposed energy-efficient mechanisms for operating optical WDM networks.
Gething, Peter W; Patil, Anand P; Hay, Simon I
2010-04-01
Risk maps estimating the spatial distribution of infectious diseases are required to guide public health policy from local to global scales. The advent of model-based geostatistics (MBG) has allowed these maps to be generated in a formal statistical framework, providing robust metrics of map uncertainty that enhances their utility for decision-makers. In many settings, decision-makers require spatially aggregated measures over large regions such as the mean prevalence within a country or administrative region, or national populations living under different levels of risk. Existing MBG mapping approaches provide suitable metrics of local uncertainty--the fidelity of predictions at each mapped pixel--but have not been adapted for measuring uncertainty over large areas, due largely to a series of fundamental computational constraints. Here the authors present a new efficient approximating algorithm that can generate for the first time the necessary joint simulation of prevalence values across the very large prediction spaces needed for global scale mapping. This new approach is implemented in conjunction with an established model for P. falciparum allowing robust estimates of mean prevalence at any specified level of spatial aggregation. The model is used to provide estimates of national populations at risk under three policy-relevant prevalence thresholds, along with accompanying model-based measures of uncertainty. By overcoming previously unchallenged computational barriers, this study illustrates how MBG approaches, already at the forefront of infectious disease mapping, can be extended to provide large-scale aggregate measures appropriate for decision-makers.
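The aggregation step can be pictured with a small synthetic example: given joint simulations of pixel-level prevalence and pixel populations, each joint draw yields one regional mean and one population-at-risk figure, and the spread across draws gives the uncertainty. The sketch below uses synthetic draws, not output from the P. falciparum model.

```python
# Hedged sketch of the aggregation idea: given joint simulations of pixel-level
# prevalence (draws x pixels) and pixel populations, summarize regional mean
# prevalence and population living above a policy threshold, with uncertainty.
# All numbers below are synthetic.
import numpy as np

rng = np.random.default_rng(1)
n_draws, n_pixels = 500, 1000
prevalence_draws = rng.beta(2, 8, size=(n_draws, n_pixels))   # joint posterior draws
population = rng.integers(100, 5000, size=n_pixels)

# Population-weighted regional mean prevalence, one value per joint draw.
regional_mean = prevalence_draws @ population / population.sum()

# Population living where prevalence exceeds a threshold, per draw.
threshold = 0.4
pop_at_risk = (prevalence_draws > threshold).astype(float) @ population

print(f"regional mean prevalence: {regional_mean.mean():.3f} "
      f"(95% interval {np.percentile(regional_mean, 2.5):.3f}-{np.percentile(regional_mean, 97.5):.3f})")
print(f"population above {threshold:.0%} prevalence: {pop_at_risk.mean():,.0f} on average")
```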
NASA Technical Reports Server (NTRS)
Jackson, Karen E.
1990-01-01
Scale model technology represents one method of investigating the behavior of advanced, weight-efficient composite structures under a variety of loading conditions. It is necessary, however, to understand the limitations involved in testing scale model structures before the technique can be fully utilized. These limitations, or scaling effects, are characterized in the large deflection response and failure of composite beams. Scale model beams were loaded with an eccentric axial compressive load designed to produce large bending deflections and global failure. A dimensional analysis was performed on the composite beam-column loading configuration to determine a model law governing the system response. An experimental program was developed to validate the model law under both static and dynamic loading conditions. Laminate stacking sequences including unidirectional, angle ply, cross ply, and quasi-isotropic were tested to examine a diversity of composite response and failure modes. The model beams were loaded under scaled test conditions until catastrophic failure. A large deflection beam solution was developed to compare with the static experimental results and to analyze beam failure. Also, the finite element code DYCAST (DYnamic Crash Analysis of STructure) was used to model both the static and impulsive beam response. Static test results indicate that the unidirectional and cross ply beam responses scale as predicted by the model law, even under severe deformations. In general, failure modes were consistent between scale models within a laminate family; however, a significant scale effect was observed in strength. The scale effect in strength which was evident in the static tests was also observed in the dynamic tests. Scaling of load and strain time histories between the scale model beams and the prototypes was excellent for the unidirectional beams, but inconsistent results were obtained for the angle ply, cross ply, and quasi-isotropic beams. Results show that valuable information can be obtained from testing on scale model composite structures, especially in the linear elastic response region. However, due to scaling effects in the strength behavior of composite laminates, caution must be used in extrapolating data taken from a scale model test when that test involves failure of the structure.
Studying Cosmic Evolution with 21 cm Intensity Mapping
NASA Astrophysics Data System (ADS)
Anderson, Christopher
This thesis describes early work in the developing field of 21-cm intensity mapping. The 21-cm line is a radio transition due to the hyperfine splitting of the ground state of neutral hydrogen (HI). Intensity mapping utilizes the aggregate redshifted 21-cm emission to map the three-dimensional distribution of HI on large scales. In principle, the 21-cm line can be utilized to map most of the volume of the observable Universe. But the signal is small, and dedicated instruments will be required to reach a high signal-to-noise ratio. Large spectrally smooth astrophysical foregrounds, which dwarf the 21-cm signal, present a significant challenge to the data analysis. I derive the fundamental physics of the 21-cm line and the size of the expected cosmological signal. I also provide an overview of the desired characteristics of a dedicated 21-cm instrument, and I list some instruments that are coming on-line in the next few years. I then describe the data analysis techniques and results for 21-cm intensity maps that were made with two existing radio telescopes, the Green Bank telescope (GBT) and the Parkes telescope. Both observations have detected the 21-cm HI signal by cross-correlating the 21-cm intensity maps with overlapping optical galaxy surveys. The GBT maps have been used to constrain the neutral hydrogen density at a mean redshift (z) of 0.8. The Parkes maps, at a mean redshift of 0.08, probe smaller scales. The Parkes 21-cm intensity maps reveal a lack of small-scale clustering when they are cross-correlated with 2dF optical galaxy maps. This lack of small-scale clustering is partially due to a scale-dependent and galaxy-color-dependent HI-galaxy cross-correlation coefficient. Lastly, I provide an overview of planned future analyses with the Parkes maps, with a proposed multi-beam receiver for the Green Bank telescope, and with simulations of systematic effects on foregrounds.
Learning, climate and the evolution of cultural capacity.
Whitehead, Hal
2007-03-21
Patterns of environmental variation influence the utility, and thus evolution, of different learning strategies. I use stochastic, individual-based evolutionary models to assess the relative advantages of 15 different learning strategies (genetic determination, individual learning, vertical social learning, horizontal/oblique social learning, and contingent combinations of these) when competing in variable environments described by 1/f noise. When environmental variation has little effect on fitness, then genetic determinism persists. When environmental variation is large and equal over all time-scales ("white noise") then individual learning is adaptive. Social learning is advantageous in "red noise" environments when variation over long time-scales is large. Climatic variability increases with time-scale, so that short-lived organisms should be able to rely largely on genetic determination. Thermal climates usually are insufficiently red for social learning to be advantageous for species whose fitness is strongly determined by temperature. In contrast, population trajectories of many species, especially large mammals and aquatic carnivores, are sufficiently red to promote social learning in their predators. The ocean environment is generally redder than that on land. Thus, while individual learning should be adaptive for many longer-lived organisms, social learning will often be found in those dependent on the populations of other species, especially if they are marine. This provides a potential explanation for the evolution of a prevalence of social learning, and culture, in humans and cetaceans.
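The 1/f-noise environments referred to here can be generated by shaping the spectrum of white noise. The sketch below is a generic illustration of that construction (beta = 0 for "white" variation, larger beta for "redder", slowly varying environments), not the paper's evolutionary model.

```python
# Illustrative sketch (an assumption-laden toy, not the paper's model):
# generate 1/f^beta environmental noise with a chosen spectral exponent.
import numpy as np

def colored_noise(n, beta, rng):
    """Return n samples whose power spectrum falls off roughly as 1/f^beta."""
    freqs = np.fft.rfftfreq(n)
    spectrum = rng.standard_normal(len(freqs)) + 1j * rng.standard_normal(len(freqs))
    scale = np.ones_like(freqs)
    scale[1:] = freqs[1:] ** (-beta / 2.0)   # amplitude ~ f^(-beta/2) => power ~ f^(-beta)
    series = np.fft.irfft(spectrum * scale, n)
    return series / series.std()

rng = np.random.default_rng(42)
white = colored_noise(4096, beta=0.0, rng=rng)   # equal variation over all time-scales
red = colored_noise(4096, beta=2.0, rng=rng)     # variation dominated by long time-scales
print("lag-1 autocorrelation, white:", np.corrcoef(white[:-1], white[1:])[0, 1].round(2))
print("lag-1 autocorrelation, red:  ", np.corrcoef(red[:-1], red[1:])[0, 1].round(2))
```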
Developing eThread pipeline using SAGA-pilot abstraction for large-scale structural bioinformatics.
Ragothaman, Anjani; Boddu, Sairam Chowdary; Kim, Nayong; Feinstein, Wei; Brylinski, Michal; Jha, Shantenu; Kim, Joohyun
2014-01-01
While most computational annotation approaches are sequence-based, threading methods are becoming increasingly attractive because the predicted structural information could uncover the underlying function. However, threading tools are generally compute-intensive, and the number of protein sequences from even small genomes such as prokaryotes is large, typically many thousands, prohibiting their application as a genome-wide structural systems biology tool. To leverage its utility, we have developed a pipeline for eThread, a meta-threading protein structure modeling tool, so that it can use computational resources efficiently and effectively. We employ a pilot-based approach that supports seamless data and task-level parallelism and manages large variation in workload and computational requirements. Our scalable pipeline is deployed on Amazon EC2 and can efficiently select resources based upon task requirements. We present a runtime analysis to characterize the computational complexity of eThread and the EC2 infrastructure. Based on the results, we suggest a pathway to an optimized solution with respect to metrics such as time-to-solution or cost-to-solution. Our eThread pipeline can scale to support a large number of sequences and is expected to be a viable solution for genome-scale structural bioinformatics and structure-based annotation, particularly amenable to small genomes such as prokaryotes. The developed pipeline is easily extensible to other types of distributed cyberinfrastructure.
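The task-level parallelism at the heart of such a pipeline can be sketched with a local process pool standing in for the SAGA-pilot and EC2 layers described above; run_threading_tool below is a hypothetical placeholder for launching one meta-threading job on one sequence.

```python
# Minimal sketch of task-level parallelism, using a local process pool in
# place of the SAGA-pilot / EC2 infrastructure described in the abstract.
# run_threading_tool is a hypothetical placeholder, not eThread itself.
from concurrent.futures import ProcessPoolExecutor, as_completed

def run_threading_tool(sequence_id):
    # Placeholder: a real pipeline would invoke eThread (or one of its
    # component threading tools) here and return the resulting model score.
    return sequence_id, hash(sequence_id) % 100

def main():
    sequences = [f"prot_{i:04d}" for i in range(100)]   # hypothetical sequence IDs
    results = {}
    with ProcessPoolExecutor(max_workers=8) as pool:
        futures = {pool.submit(run_threading_tool, s): s for s in sequences}
        for fut in as_completed(futures):
            seq_id, score = fut.result()
            results[seq_id] = score
    print(f"finished {len(results)} threading tasks")

if __name__ == "__main__":
    main()
```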
Large-scale transportation network congestion evolution prediction using deep learning theory.
Ma, Xiaolei; Yu, Haiyang; Wang, Yunpeng; Wang, Yinhai
2015-01-01
Understanding how congestion at one location can cause ripples throughout a large-scale transportation network is vital for transportation researchers and practitioners to pinpoint traffic bottlenecks for congestion mitigation. Traditional studies rely on either mathematical equations or simulation techniques to model traffic congestion dynamics. However, most of these approaches have limitations, largely due to unrealistic assumptions and cumbersome parameter calibration processes. With the development of Intelligent Transportation Systems (ITS) and the Internet of Things (IoT), transportation data are becoming more and more ubiquitous. This has triggered a series of data-driven studies investigating transportation phenomena. Among them, deep learning theory is considered one of the most promising techniques for tackling tremendous amounts of high-dimensional data. This study attempts to extend deep learning theory to large-scale transportation network analysis. A deep Restricted Boltzmann Machine and Recurrent Neural Network architecture is utilized to model and predict traffic congestion evolution based on Global Positioning System (GPS) data from taxis. A numerical study in Ningbo, China is conducted to validate the effectiveness and efficiency of the proposed method. Results show that the prediction accuracy can reach as high as 88% within less than 6 minutes when the model is implemented in a Graphic Processing Unit (GPU)-based parallel computing environment. The predicted congestion evolution patterns can be visualized temporally and spatially through a map-based platform to identify vulnerable links for proactive congestion mitigation.
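Purely as an illustration of sequence-based congestion scoring, the toy below runs a single vanilla RNN cell over a short window of normalized link speeds. The weights are random placeholders standing in for trained parameters, and this is not the deep Restricted Boltzmann Machine and Recurrent Neural Network architecture used in the study.

```python
# Toy sketch (not the paper's deep RBM + RNN): one vanilla RNN cell run over a
# short window of normalized link speeds, producing a congestion score per link.
# Weights are random placeholders rather than trained parameters.
import numpy as np

rng = np.random.default_rng(7)
n_links, window, hidden = 5, 12, 16

speeds = rng.uniform(0.1, 1.0, size=(n_links, window))   # hypothetical GPS-derived speeds

W_in = rng.standard_normal((hidden, 1)) * 0.1
W_h = rng.standard_normal((hidden, hidden)) * 0.1
W_out = rng.standard_normal((1, hidden)) * 0.1

def congestion_score(series):
    """Run the RNN over one link's speed series; higher score = more congested."""
    h = np.zeros((hidden, 1))
    for x in series:
        h = np.tanh(W_in * x + W_h @ h)
    return 1.0 / (1.0 + np.exp(-(W_out @ h).item()))      # sigmoid output

for i, series in enumerate(speeds):
    print(f"link {i}: congestion score {congestion_score(series):.2f}")
```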
Ellinas, Christos; Allan, Neil; Durugbo, Christopher; Johansson, Anders
2015-01-01
Current societal requirements necessitate the effective delivery of complex projects that can do more while using less. Yet, recent large-scale project failures suggest that our ability to successfully deliver them is still in its infancy. Such failures can be seen to arise through various failure mechanisms; this work focuses on one such mechanism. Specifically, it examines the likelihood of a project sustaining a large-scale catastrophe, as triggered by a single task failure and propagated via a cascading process. To do so, an analytical model was developed and tested on an empirical dataset by means of numerical simulation. This paper makes three main contributions. First, it provides a methodology to identify the tasks most capable of impacting a project. In doing so, it is noted that a significant number of tasks induce no cascades, while a handful are capable of triggering surprisingly large ones. Secondly, it illustrates that crude task characteristics cannot aid in identifying them, highlighting the complexity of the underlying process and the utility of this approach. Thirdly, it draws parallels with systems encountered within the natural sciences by noting the emergence of self-organised criticality, commonly found within natural systems. These findings strengthen the need to account for structural intricacies of a project's underlying task precedence structure as they can provide the conditions upon which large-scale catastrophes materialise.
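A stripped-down version of the cascading mechanism can be simulated on a tiny hypothetical precedence network: seed a failure at one task, propagate it downstream, and record the cascade size per seed. The sketch below is a deterministic reachability toy, not the analytical model developed in the paper.

```python
# Toy sketch (not the paper's model): seed a failure at one task of a
# hypothetical precedence network, let it cascade to successors, and measure
# the cascade size triggered by each possible seed.
from collections import deque

# task -> list of dependent (successor) tasks; a small hypothetical project.
successors = {
    "design": ["procure", "tooling"],
    "procure": ["assemble"],
    "tooling": ["assemble"],
    "assemble": ["test"],
    "test": ["deliver"],
    "deliver": [],
}

def cascade_size(seed):
    """Number of tasks failed when `seed` fails and failure propagates downstream."""
    failed, queue = {seed}, deque([seed])
    while queue:
        for nxt in successors[queue.popleft()]:
            if nxt not in failed:
                failed.add(nxt)
                queue.append(nxt)
    return len(failed)

for task in successors:
    print(f"failure of '{task}' cascades to {cascade_size(task)} task(s) in total")
```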
Felo, Michael; Christensen, Brandon; Higgins, John
2013-01-01
The bioreactor volume delineating the selection of primary clarification technology is not always easily defined. Development of a commercial scale process for the manufacture of therapeutic proteins requires scale-up from a few liters to thousands of liters. While the separation techniques used for protein purification are largely conserved across scales, the separation techniques for primary cell culture clarification vary with scale. Process models were developed to compare monoclonal antibody production costs using two cell culture clarification technologies. One process model was created for cell culture clarification by disc stack centrifugation with depth filtration. A second process model was created for clarification by multi-stage depth filtration. Analyses were performed to examine the influence of bioreactor volume, product titer, depth filter capacity, and facility utilization on overall operating costs. At bioreactor volumes <1,000 L, clarification using multi-stage depth filtration offers cost savings compared to clarification using centrifugation. For bioreactor volumes >5,000 L, clarification using centrifugation followed by depth filtration offers significant cost savings. For bioreactor volumes of ∼ 2,000 L, clarification costs are similar between depth filtration and centrifugation. At this scale, factors including facility utilization, available capital, ease of process development, implementation timelines, and process performance characterization play an important role in clarification technology selection. In the case study presented, a multi-product facility selected multi-stage depth filtration for cell culture clarification at the 500 and 2,000 L scales of operation. Facility implementation timelines, process development activities, equipment commissioning and validation, scale-up effects, and process robustness are examined. © 2013 American Institute of Chemical Engineers.
Intelligent Facades for High Performance Green Buildings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dyson, Anna
Progress Towards Net-Zero and Net-Positive-Energy Commercial Buildings and Urban Districts Through Intelligent Building Envelope Strategies. Previous research and development of intelligent facade systems has been limited in its contribution towards national goals for achieving on-site net zero buildings, because this R&D has failed to couple the many qualitative requirements of building envelopes such as the provision of daylighting, access to exterior views, satisfying aesthetic and cultural characteristics, with the quantitative metrics of energy harvesting, storage and redistribution. To achieve energy self-sufficiency from on-site solar resources, building envelopes can and must address this gamut of concerns simultaneously. With this project, we have undertaken a high-performance building integrated combined-heat and power concentrating photovoltaic system with high temperature thermal capture, storage and transport towards multiple applications (BICPV/T). The critical contribution we are offering with the Integrated Concentrating Solar Façade (ICSF) is conceived to improve daylighting quality for improved health of occupants and mitigate solar heat gain while maximally capturing and transferring onsite solar energy. The ICSF accomplishes this multi-functionality by intercepting only the direct-normal component of solar energy (which is responsible for elevated cooling loads) thereby transforming a previously problematic source of energy into a high quality resource that can be applied to building demands such as heating, cooling, dehumidification, domestic hot water, and possible further augmentation of electrical generation through organic Rankine cycles. With the ICSF technology, our team is addressing the global challenge in transitioning commercial and residential building stock towards on-site clean energy self-sufficiency, by fully integrating innovative environmental control systems strategies within an intelligent and responsively dynamic building envelope. The advantage of being able to use the entire solar spectrum for active and passive benefits, along with the potential savings of avoiding transmission losses through direct current (DC) transfer to all building systems directly from the site of solar conversion, gives the system a compounded economic viability within the commercial and institutional building markets. With a team that spans multiple stakeholders across disparate industries, from CPV to A&E partners that are responsible for the design and development of District and Regional Scale Urban Development, this project demonstrates that integrating utility-scale high efficiency CPV installations with urban and suburban environments is both viable and desirable within the marketplace. The historical schism between utility scale CPV and BIPV has been one of differing scale and cultures. There is no technical reason why utility-scale CPV cannot be located within urban embedded district scale sites of energy harvesting. New models for leasing large areas of district scale roofs and facades are emerging, such that the model for utility scale energy harvesting can be reconciled to commercial and public scale building sites and campuses. This consortium is designed to unite utility scale solar harvesting into building applications for smart grid development.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Newman, G.A.; Commer, M.
Three-dimensional (3D) geophysical imaging is now receiving considerable attention for electrical conductivity mapping of potential offshore oil and gas reservoirs. The imaging technology employs controlled source electromagnetic (CSEM) and magnetotelluric (MT) fields and treats geological media exhibiting transverse anisotropy. Moreover, when combined with established seismic methods, direct imaging of reservoir fluids is possible. Because of the size of the 3D conductivity imaging problem, strategies are required exploiting computational parallelism and optimal meshing. The algorithm thus developed has been shown to scale to tens of thousands of processors. In one imaging experiment, 32,768 tasks/processors on the IBM Watson Research Blue Gene/L supercomputer were successfully utilized. Over a 24 hour period we were able to image a large scale field data set that previously required over four months of processing time on distributed clusters based on Intel or AMD processors utilizing 1024 tasks on an InfiniBand fabric. Electrical conductivity imaging using massively parallel computational resources produces results that cannot be obtained otherwise and are consistent with timeframes required for practical exploration problems.
Logistics, Costs, and GHG Impacts of Utility Scale Cofiring with 20% Biomass
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boardman, Richard D.; Cafferty, Kara G.; Nichol, Corrie
This report presents the results of an evaluation of utility-scale biomass cofiring in large pulverized coal power plants. The purpose of this evaluation is to assess the cost and greenhouse gas reduction benefits of substituting relatively high volumes of biomass in coal. Two scenarios for cofiring up to 20% biomass with coal (on a lower heating value basis) are presented: (1) woody biomass in central Alabama where Southern Pine is currently produced for the wood products and paper industries, and (2) purpose-grown switchgrass in the Ohio River Valley. These examples are representative of regions where renewable biomass growth rates are high in correspondence with major U.S. heartland power production. While these scenarios may provide a realistic reference for comparing the relative benefits of using a high volume of biomass for power production, this evaluation is not intended to be an analysis of policies concerning renewable portfolio standards or the optimal use of biomass for energy production in the U.S.
Kenny, Joseph P.; Janssen, Curtis L.; Gordon, Mark S.; ...
2008-01-01
Cutting-edge scientific computing software is complex, increasingly involving the coupling of multiple packages to combine advanced algorithms or simulations at multiple physical scales. Component-based software engineering (CBSE) has been advanced as a technique for managing this complexity, and complex component applications have been created in the quantum chemistry domain, as well as several other simulation areas, using the component model advocated by the Common Component Architecture (CCA) Forum. While programming models do indeed enable sound software engineering practices, the selection of programming model is just one building block in a comprehensive approach to large-scale collaborative development which must also address interface and data standardization, and language and package interoperability. We provide an overview of the development approach utilized within the Quantum Chemistry Science Application Partnership, identifying design challenges, describing the techniques which we have adopted to address these challenges and highlighting the advantages which the CCA approach offers for collaborative development.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-09
... DEPARTMENT OF COMMERCE International Trade Administration [C-570-982] Utility Scale Wind Towers... 202-482-1503, respectively. SUPPLEMENTARY INFORMATION: Background On January 18, 2012, the Department of Commerce (the Department) initiated the countervailing duty investigation of utility scale wind...
Demonstration of Essential Reliability Services by Utility-Scale Solar
Demonstration of Essential Reliability Services by Utility-Scale Solar Photovoltaic Power Plant: Q&A Webinar, Questions & Answers, April 27, 2017. Is photovoltaic (PV) generation required to provide grid supportive
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-15
... DEPARTMENT OF COMMERCE International Trade Administration [C-570-982] Utility Scale Wind Towers...''), the Department is issuing a countervailing duty order on utility scale wind towers (``wind towers..., 2012, the Department published the final determination in the countervailing duty investigation of wind...
NASA Technical Reports Server (NTRS)
Cassell, Alan M.
2013-01-01
The testing of 3- and 6-meter diameter Hypersonic Inflatable Aerodynamic Decelerator (HIAD) test articles was completed in the National Full-Scale Aerodynamics Complex 40 ft x 80 ft Wind Tunnel test section. Both models were stacked tori, constructed as 60 degree half-angle sphere cones. The 3-meter HIAD was tested in two configurations. The first 3-meter configuration utilized an instrumented flexible aerodynamic skin covering the inflatable aeroshell surface, while the second configuration employed a flight-like flexible thermal protection system. The 6-meter HIAD was tested in two structural configurations (with and without an aft-mounted stiffening torus near the shoulder), both utilizing an instrumented aerodynamic skin.
Wind energy - A utility perspective
NASA Astrophysics Data System (ADS)
Fung, K. T.; Scheffler, R. L.; Stolpe, J.
1981-03-01
Broad consideration is given to the siting, demand, capital and operating cost, and wind turbine design factors involved in a utility company's incorporation of wind powered electrical generation into existing grids. With the requirements of the Southern California Edison service region in mind, it is concluded that although the economic and legal climate for major investments in windpower is favorable, the continued development of large wind turbine machines (on the scale of NASA's 2.5 MW Mod-2 design) is imperative in order to reduce manpower and maintenance costs. Stress is also put on the use of demonstration projects for both vertical and horizontal axis devices, in order to build up operational experience and confidence.
NASA Astrophysics Data System (ADS)
Song, Z. N.; Sui, H. G.
2018-04-01
High resolution remote sensing images carry important strategic information, particularly for quickly finding time-sensitive targets such as airplanes, ships, and cars. Often the first problem we face is how to rapidly judge whether a particular target is present anywhere in a large remote sensing image, rather than detecting it within a given image. Finding time-sensitive targets in a huge image is a great challenge: 1) complex backgrounds lead to high miss and false alarm rates when detecting tiny objects in large-scale images; 2) unlike traditional image retrieval, the task is not just to compare the similarity of image blocks, but to quickly find specific targets in a huge image. Taking airplanes as an example, this paper presents an effective method for searching for aircraft targets in large-scale optical remote sensing images. First, an improved visual attention model that combines saliency detection with a line segment detector is used to quickly locate suspected regions in a large and complicated remote sensing image. Then, for each region, a single neural network that predicts bounding boxes and class probabilities directly from full images in one evaluation, without a region proposal stage, is adopted to search for small airplane objects. Unlike sliding window and region proposal-based techniques, the network sees the entire image (region) during training and test time, so it implicitly encodes contextual information about classes as well as their appearance. Experimental results show that the proposed method quickly identifies airplanes in large-scale images.
Penson, Brittany N; Ruchensky, Jared R; Morey, Leslie C; Edens, John F
2016-11-01
A substantial amount of research has examined the developmental trajectory of antisocial behavior and, in particular, the relationship between antisocial behavior and maladaptive personality traits. However, research typically has not controlled for previous behavior (e.g., past violence) when examining the utility of personality measures, such as self-report scales of antisocial and borderline traits, in predicting future behavior (e.g., subsequent violence). Examination of the potential interactive effects of measures of both antisocial and borderline traits also is relatively rare in longitudinal research predicting adverse outcomes. The current study utilizes a large sample of youthful offenders (N = 1,354) from the Pathways to Desistance project to examine the separate effects of the Personality Assessment Inventory Antisocial Features (ANT) and Borderline Features (BOR) scales in predicting future offending behavior as well as trends in other negative outcomes (e.g., substance abuse, violence, employment difficulties) over a 1-year follow-up period. In addition, an ANT × BOR interaction term was created to explore the predictive effects of secondary psychopathy. ANT and BOR both explained unique variance in the prediction of various negative outcomes even after controlling for past indicators of those same behaviors during the preceding year.
Utilizing Wavelet Analysis to assess hydrograph change in northwestern North America
NASA Astrophysics Data System (ADS)
Tang, W.; Carey, S. K.
2017-12-01
Historical streamflow data in the mountainous regions of northwestern North America suggest that changes in flows are driven by warming temperatures, declining snowpack and glacier extent, and large-scale teleconnections. However, few sites exist that have robust long-term records for statistical analysis, and previous research has focused on high- and low-flow indices along with trend analysis using the Mann-Kendall test and similar approaches. Furthermore, there has been less emphasis on ascertaining the drivers of change in the shape of the streamflow hydrograph compared with traditional flow metrics. In this work, we utilize wavelet analysis to evaluate changes in hydrograph characteristics for snowmelt-driven rivers in northwestern North America across a range of scales. Results suggest that wavelets can be used to detect a lengthening and advancement of the freshet with a corresponding decline in peak flows. Furthermore, the gradual transition of flows from nival to pluvial regimes in more southerly catchments is evident in the wavelet spectral power through time. A challenge for this method of change detection is evaluating the statistical significance of changes in wavelet spectra as they relate to hydrograph form; ongoing work seeks to link these patterns to driving weather and climate along with larger scale teleconnections.
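A minimal version of this kind of analysis, assuming the PyWavelets package and a synthetic hydrograph rather than gauge data, is sketched below: compute a continuous wavelet transform and track spectral power in the roughly annual band through time.

```python
# Hedged sketch: continuous wavelet transform of a synthetic snowmelt-driven
# hydrograph, tracking power near the annual scale through time.
# Assumes the PyWavelets package; the flow series is synthetic.
import numpy as np
import pywt

days = np.arange(365 * 20)                           # 20 years of daily flow
freshet = 1.0 + 0.8 * np.sin(2 * np.pi * days / 365.25 - np.pi / 2)
flow = freshet + 0.2 * np.random.default_rng(3).standard_normal(days.size)

scales = np.arange(30, 500, 10)                      # roughly monthly to ~annual scales
coeffs, freqs = pywt.cwt(flow, scales, "morl", sampling_period=1.0)

power = np.abs(coeffs) ** 2
annual_band = (freqs > 1 / 400) & (freqs < 1 / 330)  # frequencies near one cycle per year
annual_power = power[annual_band].mean(axis=0)
print("mean annual-band wavelet power, first vs last 5 years:",
      round(annual_power[: 5 * 365].mean(), 2), round(annual_power[-5 * 365:].mean(), 2))
```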
Status of DSMT research program
NASA Technical Reports Server (NTRS)
Mcgowan, Paul E.; Javeed, Mehzad; Edighoffer, Harold H.
1991-01-01
The status of the Dynamic Scale Model Technology (DSMT) research program is presented. DSMT is developing scale model technology for large space structures as part of the Control Structure Interaction (CSI) program at NASA Langley Research Center (LaRC). Under DSMT a hybrid-scale structural dynamics model of Space Station Freedom was developed. Space Station Freedom was selected as the focus structure for DSMT since the station represents the first opportunity to obtain flight data on a complex, three-dimensional space structure. Included is an overview of DSMT including the development of the space station scale model and the resulting hardware. Scaling technology was developed for this model to achieve a ground test article which existing test facilities can accommodate while employing realistically scaled hardware. The model was designed and fabricated by the Lockheed Missiles and Space Co., and is assembled at LaRC for dynamic testing. Also, results from ground tests and analyses of the various model components are presented along with plans for future subassembly and mated model tests. Finally, utilization of the scale model for enhancing analysis verification of the full-scale space station is also considered.
Experience in using commercial clouds in CMS
NASA Astrophysics Data System (ADS)
Bauerdick, L.; Bockelman, B.; Dykstra, D.; Fuess, S.; Garzoglio, G.; Girone, M.; Gutsche, O.; Holzman, B.; Hufnagel, D.; Kim, H.; Kennedy, R.; Mason, D.; Spentzouris, P.; Timm, S.; Tiradani, A.; Vaandering, E.; CMS Collaboration
2017-10-01
Historically high energy physics computing has been performed on large purpose-built computing systems. In the beginning there were single site computing facilities, which evolved into the Worldwide LHC Computing Grid (WLCG) used today. The vast majority of the WLCG resources are used for LHC computing and the resources are scheduled to be continuously used throughout the year. In the last several years there has been an explosion in capacity and capability of commercial and academic computing clouds. Cloud resources are highly virtualized and intended to be able to be flexibly deployed for a variety of computing tasks. There is a growing interest amongst the cloud providers to demonstrate the capability to perform large scale scientific computing. In this presentation we will discuss results from the CMS experiment using the Fermilab HEPCloud Facility, which utilized both local Fermilab resources and Amazon Web Services (AWS). The goal was to work with AWS through a matching grant to demonstrate a sustained scale approximately equal to half of the worldwide processing resources available to CMS. We will discuss the planning and technical challenges involved in organizing the most IO intensive CMS workflows on a large-scale set of virtualized resource provisioned by the Fermilab HEPCloud. We will describe the data handling and data management challenges. Also, we will discuss the economic issues and cost and operational efficiency comparison to our dedicated resources. At the end we will consider the changes in the working model of HEP computing in a domain with the availability of large scale resources scheduled at peak times.
Large-Scale Flow Structure in Turbulent Nonpremixed Flames under Normal- And Low-Gravity Conditions
NASA Technical Reports Server (NTRS)
Clemens, N. T.; Idicheria, C. A.; Boxx, I. G.
2001-01-01
It is well known that buoyancy has a major influence on the flow structure of turbulent nonpremixed jet flames. Buoyancy acts by inducing baroclinic torques, which generate large-scale vortical structures that can significantly modify the flow field. Furthermore, some suggest that buoyancy can substantially influence the large-scale structure of even nominally momentum-dominated flames, since the low velocity flow outside of the flame will be more susceptible to buoyancy effects. Even subtle buoyancy effects may be important because changes in the large-scale structure affect the local entrainment and fluctuating strain rate, and hence the structure of the flame. Previous studies that have compared the structure of normal- and micro-gravity nonpremixed jet flames note that flames in microgravity are longer and wider than in normal gravity. This trend was observed for jet flames ranging from laminar to turbulent regimes. Furthermore, imaging of the flames has shown possible evidence of helical instabilities and disturbances starting from the base of the flame in microgravity. In contrast, these characteristics were not observed in normal gravity. The objective of the present study is to further advance our knowledge of the effects of weak levels of buoyancy on the structure of transitional and turbulent nonpremixed jet flames. In later studies we will utilize the drop tower facilities at NASA Glenn Research Center (GRC), but the preliminary work described in this paper was conducted using the 1.25-second drop tower located at the University of Texas at Austin. A more detailed description of these experiments can be found in Idicheria et al.
Fast Algorithms for Designing Unimodular Waveform(s) With Good Correlation Properties
NASA Astrophysics Data System (ADS)
Li, Yongzhe; Vorobyov, Sergiy A.
2018-03-01
In this paper, we develop new fast and efficient algorithms for designing single/multiple unimodular waveforms/codes with good auto- and cross-correlation or weighted correlation properties, which are highly desired in radar and communication systems. The waveform design is based on the minimization of the integrated sidelobe level (ISL) and weighted ISL (WISL) of waveforms. As the corresponding optimization problems can quickly grow to large scale with increasing code length and number of waveforms, the main issue becomes the development of fast large-scale optimization techniques. The difficulty is also that the corresponding optimization problems are non-convex, but the required accuracy is high. Therefore, we formulate the ISL and WISL minimization problems as non-convex quartic optimization problems in the frequency domain, and then simplify them into quadratic problems by utilizing the majorization-minimization technique, which is one of the basic techniques for addressing large-scale and/or non-convex optimization problems. While designing our fast algorithms, we identify and exploit inherent algebraic structures in the objective functions to rewrite them into quartic forms, and in the case of WISL minimization, to derive additionally an alternative quartic form which allows applying the quartic-quadratic transformation. Our algorithms are applicable to large-scale unimodular waveform design problems as they are shown to have lower or comparable computational burden (analyzed theoretically) and faster convergence speed (confirmed by comprehensive simulations) than the state-of-the-art algorithms. In addition, the waveforms designed by our algorithms demonstrate better correlation properties compared to their counterparts.
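For reference, the quantity being minimized can be evaluated directly: the sketch below computes the aperiodic autocorrelation of a unimodular sequence with a zero-padded FFT and reports its single-sided ISL and peak sidelobe. The chirp-like code is used only for illustration; this is an evaluation helper, not the majorization-minimization algorithm itself.

```python
# Illustrative sketch (not the paper's MM algorithm): evaluating the integrated
# sidelobe level (ISL) of a unimodular sequence via its aperiodic
# autocorrelation, computed with a zero-padded FFT.
import numpy as np

N = 64
n = np.arange(N)
code = np.exp(1j * np.pi * n * n / N)            # unimodular: |code[n]| = 1 for all n

# Aperiodic autocorrelation via zero-padded FFT (lags 0 .. N-1).
padded = np.fft.fft(code, 2 * N)
acf = np.fft.ifft(padded * np.conj(padded))[:N]

isl = np.sum(np.abs(acf[1:]) ** 2)               # exclude the zero-lag mainlobe
psl = np.max(np.abs(acf[1:]))
print(f"ISL = {isl:.1f}, peak sidelobe = {psl:.2f}, mainlobe = {abs(acf[0]):.1f}")
```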
Sudat, Sylvia Ek; Franco, Anjali; Pressman, Alice R; Rosenfeld, Kenneth; Gornet, Elizabeth; Stewart, Walter
2018-02-01
Home-based care coordination and support programs for people with advanced illness work alongside usual care to promote personal care goals, which usually include a preference for home-based end-of-life care. More research is needed to confirm the efficacy of these programs, especially when disseminated on a large scale. Advanced Illness Management is one such program, implemented within a large open health system in northern California, USA. To evaluate the impact of Advanced Illness Management on end-of-life resource utilization, cost of care, and care quality, as indicators of program success in supporting patient care goals. A retrospective-matched observational study analyzing medical claims in the final 3 months of life. Medicare fee-for-service 2010-2014 decedents in northern California, USA. Final month total expenditures for Advanced Illness Management enrollees (N = 1352) were reduced by US$4824 (US$3379, US$6268) and inpatient payments by US$6127 (US$4874, US$7682). Enrollees also experienced 150 fewer hospitalizations/1000 (101, 198) and 1361 fewer hospital days/1000 (998, 1725). The percentage of hospice enrollees increased by 17.9 percentage points (14.7, 21.0), hospital deaths decreased by 8.2 percentage points (5.5, 10.8), and intensive care unit deaths decreased by 7.1 percentage points (5.2, 8.9). End-of-life chemotherapy use and non-inpatient expenditures in months 2 and 3 prior to death did not differ significantly from the control group. Advanced Illness Management has a positive impact on inpatient utilization, cost of care, hospice enrollment, and site of death. This suggests that home-based support programs for people with advanced illness can be successful on a large scale in supporting personal end-of-life care choices.
Cognitive Rationalizations for Tanning-Bed Use: A Preliminary Exploration
Banerjee, Smita C.; Hay, Jennifer L.; Greene, Kathryn
2016-01-01
Objectives To examine construct and predictive utility of an adapted cognitive rationalization scale for tanning-bed use. Methods Current/former tanning-bed-using undergraduate students (N = 216; 87.6% females; 78.4% white) at a large northeastern university participated in a survey. A cognitive rationalization for tanning-bed use scale was adapted. Standardized self-report measures of past tanning-bed use, advantages of tanning, perceived vulnerability to photoaging, tanning-bed use dependence, and tanning-bed use intention were also administered. Results The cognitive rationalization scale exhibited strong construct and predictive validity. Current tanners and tanning-bed-use-dependent participants endorsed rationalizations more strongly than did former tanners and not-tanning-bed-use-dependent participants, respectively. Conclusions Findings indicate that cognitive rationalizations help explain the discrepancy between inconsistent cognitions. PMID:23985280
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2014-08-01
In this pilot project, the Building America Partnership for Improved Residential Construction and Florida Power and Light are collaborating to retrofit a large number of homes using a phased approach to both simple and deep retrofits. This project will provide the information necessary to significantly reduce energy use through larger community-scale projects in collaboration with utilities, program administrators and other market leader stakeholders.
Utilization of Seismic and Infrasound Signals for Characterizing Mining Explosions
2001-10-01
different types of mining operations exist, ranging from surface coal cast blasting to hard rock fragmentation blasting in porphyry copper mines. The study...both seismic and infrasound signals. The seismic coupling of large-scale cast blasts in Wyoming, copper fragmentation blasts in Arizona and New Mexico...mining explosions from the copper fragmentation blasts in SE Arizona were observed at Los Alamos. Detected events were among the largest of the blasts
Peabody Picture Vocabulary Test: Proxy for Verbal IQ in Genetic Studies of Autism Spectrum Disorder
ERIC Educational Resources Information Center
Krasileva, Kate E.; Sanders, Stephan J.; Bal, Vanessa Hus
2017-01-01
This study assessed the utility of a brief assessment (the Peabody Picture Vocabulary Test-4th Edition; PPVT4) as a proxy for verbal IQ (VIQ) in large-scale studies of autism spectrum disorder (ASD). In a sample of 2,420 probands with ASD, PPVT4:IQ correlations were strong. PPVT4 scores were, on average, 5.46 points higher than VIQ; 79% of children…
FARMS: The Flexible Agricultural Robotics Manipulator
NASA Technical Reports Server (NTRS)
Gill, Paul S.
1991-01-01
A technology utilization project was established with the Marshall Space Flight Center and the University of Georgia to develop an Earth-based, robotic end effector to process live plant (geranium) material, which will improve productivity and efficiency in agricultural systems such as commercial nurseries and greenhouse systems. The aim is to apply this technology to NASA's presence in space, including permanently manned space stations and manned planetary communities with large-scale food production needs.
High Efficiency Thermoelectric Materials and Devices
NASA Technical Reports Server (NTRS)
Kochergin, Vladimir (Inventor)
2013-01-01
Growth of thermoelectric materials in the form of quantum well super-lattices on three-dimensionally structured substrates provides the means to achieve high conversion efficiency of the thermoelectric module combined with inexpensive fabrication and compatibility with large-scale production. Thermoelectric devices utilizing thermoelectric materials in the form of quantum well semiconductor super-lattices grown on three-dimensionally structured substrates provide improved thermoelectric characteristics that can be used for power generation, cooling, and other applications.
NASA Astrophysics Data System (ADS)
Kadum, Hawwa; Ali, Naseem; Cal, Raúl
2016-11-01
Hot-wire anemometry measurements have been performed on a 3 x 3 wind turbine array to study the multifractality of the turbulent kinetic energy dissipation. A multifractal spectrum and Hurst exponents are determined at nine locations downstream of the hub height and the bottom and top tips. Higher multifractality is found at 0.5D and 1D downstream of the bottom tip and hub height. The second-order Hurst exponent and the combination factor show an ability to predict the flow state in terms of its development. Snapshot proper orthogonal decomposition (POD) is used to identify the coherent and incoherent structures and to reconstruct the stochastic velocity using a specific number of POD eigenfunctions. The accumulation of turbulent kinetic energy at the top tip location exhibits fast convergence compared to the bottom tip and hub height locations. The dissipation of the large and small scales is determined using the reconstructed stochastic velocities. Higher multifractality is found in the large-scale dissipation than in the small-scale dissipation, consistent with the behavior of the original signals.
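For readers unfamiliar with snapshot POD, the following minimal sketch (an assumption-laden stand-in, not the study's processing code) shows how an SVD of the fluctuating velocity snapshots yields modal energies, a coherent reconstruction from the leading eigenfunctions, and a residual that can be treated as the stochastic part.

import numpy as np

def snapshot_pod(U, n_modes):
    """Snapshot POD of a velocity data matrix U (n_points x n_snapshots).
    The leading SVD modes capture the energetic coherent structures; the
    remainder is treated here as the stochastic (incoherent) part."""
    U_mean = U.mean(axis=1, keepdims=True)
    fluct = U - U_mean
    phi, s, vt = np.linalg.svd(fluct, full_matrices=False)
    energy = s**2 / np.sum(s**2)                 # modal energy fractions
    coherent = phi[:, :n_modes] @ np.diag(s[:n_modes]) @ vt[:n_modes, :]
    stochastic = fluct - coherent
    return energy, U_mean + coherent, stochastic

# toy usage with synthetic hot-wire-like data (invented numbers)
rng = np.random.default_rng(1)
U = np.sin(np.linspace(0, 10, 200))[:, None] + 0.1 * rng.standard_normal((200, 50))
energy, coh, stoch = snapshot_pod(U, n_modes=3)
print(energy[:3].sum())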
Place, Sean P.; Menge, Bruce A.; Hofmann, Gretchen E.
2011-01-01
The marine intertidal zone is characterized by large variation in temperature, pH, dissolved oxygen, and the supply of nutrients and food on seasonal and daily time scales. These oceanic fluctuations drive ecological processes such as recruitment, competition, and consumer-prey interactions largely via physiological mechanisms. Thus, to understand coastal ecosystem dynamics and responses to climate change, it is crucial to understand these mechanisms. Here we utilize transcriptome analysis of the physiological response of the mussel Mytilus californianus at different spatial scales to gain insight into these mechanisms. We used mussels inhabiting different vertical locations within Strawberry Hill on Cape Perpetua, OR, and Boiler Bay on Cape Foulweather, OR, to study inter- and intra-site variation of gene expression. The results highlight two distinct gene expression signatures related to the cycling of metabolic activity and perturbations to cellular homeostasis. Intermediate spatial scales show a strong influence of oceanographic differences in food and stress environments between sites separated by ~65 km. Together, these new insights into environmental control of gene expression may allow understanding of important physiological drivers within and across populations. PMID:22563136
Pan, Joshua; Meyers, Robin M; Michel, Brittany C; Mashtalir, Nazar; Sizemore, Ann E; Wells, Jonathan N; Cassel, Seth H; Vazquez, Francisca; Weir, Barbara A; Hahn, William C; Marsh, Joseph A; Tsherniak, Aviad; Kadoch, Cigall
2018-05-23
Protein complexes are assemblies of subunits that have co-evolved to execute one or many coordinated functions in the cellular environment. Functional annotation of mammalian protein complexes is critical to understanding biological processes, as well as disease mechanisms. Here, we used genetic co-essentiality derived from genome-scale RNAi- and CRISPR-Cas9-based fitness screens performed across hundreds of human cancer cell lines to assign measures of functional similarity. From these measures, we systematically built and characterized functional similarity networks that recapitulate known structural and functional features of well-studied protein complexes and resolve novel functional modules within complexes lacking structural resolution, such as the mammalian SWI/SNF complex. Finally, by integrating functional networks with large protein-protein interaction networks, we discovered novel protein complexes involving recently evolved genes of unknown function. Taken together, these findings demonstrate the utility of genetic perturbation screens alone, and in combination with large-scale biophysical data, to enhance our understanding of mammalian protein complexes in normal and disease states. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
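A minimal sketch of the co-essentiality idea is given below (a simplification of the paper's similarity measures, not its pipeline): pairwise Pearson correlation of gene fitness profiles across cell lines defines a functional-similarity network; the threshold and toy data are illustrative assumptions.

import numpy as np

def coessentiality_network(fitness, gene_names, threshold=0.4):
    """Build a functional-similarity network from a gene x cell-line
    fitness matrix (e.g., CRISPR screen scores): genes whose fitness
    profiles correlate above `threshold` are connected by an edge."""
    corr = np.corrcoef(fitness)          # genes x genes correlation matrix
    edges = []
    n = len(gene_names)
    for i in range(n):
        for j in range(i + 1, n):
            if corr[i, j] >= threshold:
                edges.append((gene_names[i], gene_names[j], corr[i, j]))
    return edges

# toy usage: 4 hypothetical genes screened across 100 cell lines
rng = np.random.default_rng(2)
base = rng.standard_normal(100)
fitness = np.vstack([base, base + 0.1 * rng.standard_normal(100),
                     rng.standard_normal(100), rng.standard_normal(100)])
print(coessentiality_network(fitness, ["A", "B", "C", "D"]))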
NASA Technical Reports Server (NTRS)
Koster, Randal D.; Walker, Gregory K.; Mahanama, Sarith P.; Reichle, Rolf H.
2013-01-01
Offline simulations over the conterminous United States (CONUS) with a land surface model are used to address two issues relevant to the forecasting of large-scale seasonal streamflow: (i) the extent to which errors in soil moisture initialization degrade streamflow forecasts, and (ii) the extent to which a realistic increase in the spatial resolution of forecasted precipitation would improve streamflow forecasts. The addition of error to a soil moisture initialization field is found to lead to a nearly proportional reduction in streamflow forecast skill. The linearity of the response allows the determination of a lower bound for the increase in streamflow forecast skill achievable through improved soil moisture estimation, e.g., through satellite-based soil moisture measurements. An increase in the resolution of precipitation is found to have an impact on large-scale streamflow forecasts only when evaporation variance is significant relative to the precipitation variance. This condition is met only in the western half of the CONUS domain. Taken together, the two studies demonstrate the utility of a continental-scale land surface modeling system as a tool for addressing the science of hydrological prediction.
Biasing and the search for primordial non-Gaussianity beyond the local type
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gleyzes, Jérôme; De Putter, Roland; Doré, Olivier
Primordial non-Gaussianity encodes valuable information about the physics of inflation, including the spectrum of particles and interactions. Significant improvements in our understanding of non-Gaussianity beyond Planck require information from large-scale structure. The most promising approach to utilize this information comes from the scale-dependent bias of halos. For local non-Gaussianity, the improvements available are well studied, but the potential for non-Gaussianity beyond the local type, including equilateral and quasi-single field inflation, is much less well understood. In this paper, we forecast the capabilities of large-scale structure surveys to detect general non-Gaussianity through galaxy/halo power spectra. We study how non-Gaussianity can be distinguished from a general biasing model and where the information is encoded. For quasi-single field inflation, significant improvements over Planck are possible in some regions of parameter space. We also show that the multi-tracer technique can significantly improve the sensitivity for all non-Gaussianity types, providing up to an order of magnitude improvement for equilateral non-Gaussianity over the single-tracer measurement.
NASA Astrophysics Data System (ADS)
Yulaeva, E.; Fan, Y.; Moosdorf, N.; Richard, S. M.; Bristol, S.; Peters, S. E.; Zaslavsky, I.; Ingebritsen, S.
2015-12-01
The Digital Crust EarthCube building block creates a framework for integrating disparate 3D/4D information from multiple sources into a comprehensive model of the structure and composition of the Earth's upper crust, and demonstrates the utility of this model in several research scenarios. One such scenario is the estimation of various crustal properties related to fluid dynamics (e.g. permeability and porosity) at each node of any arbitrary unstructured 3D grid to support continental-scale numerical models of fluid flow and transport. Starting from Macrostrat, an existing 4D database of 33,903 chronostratigraphic units, and employing GeoDeepDive, a software system for extracting structured information from unstructured documents, we construct 3D gridded fields of sediment/rock porosity, permeability and geochemistry for large sedimentary basins of North America, which will be used to improve our understanding of large-scale fluid flow, chemical weathering rates, and geochemical fluxes into the ocean. In this talk, we discuss the methods, data gaps (particularly in geologically complex terrain), and various physical and geological constraints on interpolation and uncertainty estimation.
Parallel Dynamics Simulation Using a Krylov-Schwarz Linear Solution Scheme
Abhyankar, Shrirang; Constantinescu, Emil M.; Smith, Barry F.; ...
2016-11-07
Fast dynamics simulation of large-scale power systems is a computational challenge because of the need to solve a large set of stiff, nonlinear differential-algebraic equations at every time step. The main bottleneck in dynamic simulations is the solution of a linear system during each nonlinear iteration of Newton's method. In this paper, we present a parallel Krylov-Schwarz linear solution scheme that uses the Krylov subspace-based iterative linear solver GMRES with an overlapping restricted additive Schwarz preconditioner. Performance tests of the proposed Krylov-Schwarz scheme for several large test cases ranging from 2,000 to 20,000 buses, including a real utility network, show good scalability on different computing architectures.
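The flavor of the Krylov-Schwarz approach can be conveyed with a small serial sketch: GMRES preconditioned by a non-overlapping block-Jacobi operator, which is the zero-overlap limit of restricted additive Schwarz. This SciPy example is only illustrative; the paper's implementation is parallel and uses an overlapping preconditioner, and the test matrix here is an invented stand-in for a power-system Jacobian.

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def block_jacobi_preconditioner(A, block_size):
    """Non-overlapping block-Jacobi preconditioner: each diagonal block is
    factored once and applied independently, which is what makes
    Schwarz-type schemes attractive for parallel execution."""
    n = A.shape[0]
    A = A.tocsc()
    solves = []
    for start in range(0, n, block_size):
        stop = min(start + block_size, n)
        solves.append((start, stop, spla.splu(A[start:stop, start:stop].tocsc())))
    def apply(r):
        z = np.empty_like(r)
        for start, stop, lu in solves:
            z[start:stop] = lu.solve(r[start:stop])
        return z
    return spla.LinearOperator(A.shape, matvec=apply, dtype=A.dtype)

# toy usage on a 1-D diffusion-like operator
n = 2000
A = sp.diags([-1.0, 2.1, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)
M = block_jacobi_preconditioner(A, block_size=200)
x, info = spla.gmres(A, b, M=M)
print(info)  # 0 indicates convergence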
Preisler, H.K.; Burgan, R.E.; Eidenshink, J.C.; Klaver, Jacqueline M.; Klaver, R.W.
2009-01-01
The current study presents a statistical model for assessing the skill of fire danger indices and for forecasting the distribution of the expected numbers of large fires over a given region for the upcoming week. The procedure permits development of daily maps that forecast, for the forthcoming week and within federal lands, percentiles of the distributions of (i) number of ignitions; (ii) number of fires above a given size; and (iii) conditional probabilities of fires greater than a specified size, given ignition. As an illustration, we used the methods to study the skill of the Fire Potential Index, an index that incorporates satellite and surface observations to map fire potential at a national scale, in forecasting distributions of large fires. © 2009 IAWF.
High Performance Semantic Factoring of Giga-Scale Semantic Graph Databases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Joslyn, Cliff A.; Adolf, Robert D.; Al-Saffar, Sinan
2010-10-04
As semantic graph database technology grows to address components ranging from extant large triple stores to SPARQL endpoints over SQL-structured relational databases, it will become increasingly important to be able to bring high performance computational resources to bear on their analysis, interpretation, and visualization, especially with respect to their innate semantic structure. Our research group built a novel high performance hybrid system comprising computational capability for semantic graph database processing utilizing the large multithreaded architecture of the Cray XMT platform, conventional clusters, and large data stores. In this paper we describe that architecture and present the results of deploying it for the analysis of the Billion Triple dataset with respect to its semantic factors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moser, M.A.
1996-01-01
Options for successfully using biogas depend on project scale. Almost all biogas from anaerobic digesters must first go through a gas handling system that pressurizes, meters, and filters the biogas. Additional treatment, including hydrogen sulfide-mercaptan scrubbing, gas drying, and carbon dioxide removal may be necessary for specialized uses, but these are complex and expensive processes. Thus, they can be justified only for large-scale projects that require high-quality biogas. Small-scale projects (less than 65 cfm) generally use biogas (as produced) as a boiler fuel or for fueling internal combustion engine-generators to produce electricity. If engines or boilers are selected properly, there should be no need to remove hydrogen sulfide. Small-scale combustion turbines, steam turbines, and fuel cells are not used because of their technical complexity and high capital cost. Biogas cleanup to pipeline or transportation fuel specifications is very costly, and energy economics preclude this level of treatment.
LAMMPS strong scaling performance optimization on Blue Gene/Q
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coffman, Paul; Jiang, Wei; Romero, Nichols A.
2014-11-12
LAMMPS "Large-scale Atomic/Molecular Massively Parallel Simulator" is an open-source molecular dynamics package from Sandia National Laboratories. Significant performance improvements in strong-scaling and time-to-solution for this application on IBM's Blue Gene/Q have been achieved through computational optimizations of the OpenMP versions of the short-range Lennard-Jones term of the CHARMM force field and the long-range Coulombic interaction implemented with the PPPM (particle-particle-particle mesh) algorithm, enhanced by runtime parameter settings controlling thread utilization. Additionally, MPI communication performance improvements were made to the PPPM calculation by re-engineering the parallel 3D FFT to use MPICH collectives instead of point-to-point. Performance testing was done using anmore » 8.4-million atom simulation scaling up to 16 racks on the Mira system at Argonne Leadership Computing Facility (ALCF). Speedups resulting from this effort were in some cases over 2x.« less
Solar energy/utility interface - The technical issues
NASA Astrophysics Data System (ADS)
Tabors, R. D.; White, D. C.
1982-01-01
The technical and economic factors affecting an interface between solar/wind power sources and utilities are examined. Photovoltaic, solar thermal, and wind powered systems are subject to stochastic local climatic variations and as such may require full back-up services from utilities, which are then in a position of having reserve generating capacity, power lines, and equipment that are used only part time. Lower reliability, which has eroded some of the economies of scale formerly associated with large, centralized power plants, together with the slowing growth of electricity usage, is taken to favor the inclusion of modular power sources such as solar-derived electrical generation. Technical issues for maintaining the quality of grid power and for effectively metering purchased and supplied back-up power as part of a homeostatic system of energy control are discussed. It is concluded that economic considerations, rather than technical issues, pose the greatest difficulty in integrating solar technologies into the utility network.
Eco-friendly fly ash utilization: potential for land application
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malik, A.; Thapliyal, A.
2009-07-01
The increase in demand for power in domestic, agricultural, and industrial sectors has increased the pressure on coal combustion and aggravated the problem of fly ash generation/disposal. Consequently, research targeting effective utilization of fly ash has also gained momentum. Fly ash has proved to be an economical substitute for expensive adsorbents as well as a suitable raw material for brick manufacturing, zeolite synthesis, etc. Fly ash is a reservoir of essential minerals but is deficient in nitrogen and phosphorus. By amending fly ash with soil and/or various organic materials (sewage sludge, bioprocess materials) as well as microbial inoculants like mycorrhizae, enhanced plant growth can be realized. Based on the sound results of large-scale studies, fly ash utilization has grown into a prominent discipline supported by various internationally renowned organizations. This paper reviews attempts directed toward various utilizations of fly ash, with an emphasis on land application of fly ash amended with organic/microbial inoculants.
Improving Design Efficiency for Large-Scale Heterogeneous Circuits
NASA Astrophysics Data System (ADS)
Gregerson, Anthony
Despite increases in logic density, many Big Data applications must still be partitioned across multiple computing devices in order to meet their strict performance requirements. Among the most demanding of these applications is high-energy physics (HEP), which uses complex computing systems consisting of thousands of FPGAs and ASICs to process the sensor data created by experiments at particle accelerators such as the Large Hadron Collider (LHC). Designing such computing systems is challenging due to the scale of the systems, the exceptionally high-throughput and low-latency performance constraints that necessitate application-specific hardware implementations, the requirement that algorithms are efficiently partitioned across many devices, and the possible need to update the implemented algorithms during the lifetime of the system. In this work, we describe our research to develop flexible architectures for implementing such large-scale circuits on FPGAs. In particular, this work is motivated by (but not limited in scope to) high-energy physics algorithms for the Compact Muon Solenoid (CMS) experiment at the LHC. To make efficient use of logic resources in multi-FPGA systems, we introduce Multi-Personality Partitioning, a novel form of the graph partitioning problem, and present partitioning algorithms that can significantly improve resource utilization on heterogeneous devices while also reducing inter-chip connections. To reduce the high communication costs of Big Data applications, we also introduce Information-Aware Partitioning, a partitioning method that analyzes the data content of application-specific circuits, characterizes their entropy, and selects circuit partitions that enable efficient compression of data between chips. We employ our information-aware partitioning method to improve the performance of the hardware validation platform for evaluating new algorithms for the CMS experiment. Together, these research efforts help to improve the efficiency and decrease the cost of developing the large-scale, heterogeneous circuits needed to enable large-scale applications in high-energy physics and other important areas.
Utility-preserving transaction data anonymization with low information loss.
Loukides, Grigorios; Gkoulalas-Divanis, Aris
2012-08-01
Transaction data record various information about individuals, including their purchases and diagnoses, and are increasingly published to support large-scale and low-cost studies in domains such as marketing and medicine. However, the dissemination of transaction data may lead to privacy breaches, as it allows an attacker to link an individual's record to their identity. Approaches that anonymize data by eliminating certain values in an individual's record or by replacing them with more general values have been proposed recently, but they often produce data of limited usefulness. This is because these approaches adopt value transformation strategies that do not guarantee data utility in intended applications and objective measures that may lead to excessive data distortion. In this paper, we propose a novel approach for anonymizing data in a way that satisfies data publishers' utility requirements and incurs low information loss. To achieve this, we introduce an accurate information loss measure and an effective anonymization algorithm that explores a large part of the problem space. An extensive experimental study, using click-stream and medical data, demonstrates that our approach permits many times more accurate query answering than the state-of-the-art methods, while it is comparable to them in terms of efficiency.
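To make the anonymization trade-off concrete, here is a toy sketch (not the paper's measure or algorithm): selected transaction items are generalized to parent categories, and information loss is scored as the frequency-weighted fraction of leaf-level items that were coarsened. The item hierarchy, item names, and loss formula are illustrative assumptions.

from collections import Counter

def generalize(transactions, hierarchy, items_to_generalize):
    """Replace selected items with their parent category from `hierarchy`."""
    out = []
    for t in transactions:
        out.append([hierarchy.get(i, i) if i in items_to_generalize else i for i in t])
    return out

def generalization_loss(transactions, generalized_items):
    """Toy information-loss measure: the frequency-weighted fraction of
    item occurrences that were replaced by a broader parent category."""
    counts = Counter(item for t in transactions for item in t)
    total = sum(counts.values())
    lost = sum(counts[item] for item in generalized_items if item in counts)
    return lost / total if total else 0.0

# toy usage on click-stream-like transactions (invented data)
transactions = [["cola", "chips"], ["cola"], ["aspirin", "chips"]]
hierarchy = {"cola": "beverage", "aspirin": "medicine", "chips": "snack"}
anon = generalize(transactions, hierarchy, {"aspirin"})
print(anon, generalization_loss(transactions, {"aspirin"}))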
Utility-preserving transaction data anonymization with low information loss
Loukides, Grigorios; Gkoulalas-Divanis, Aris
2012-01-01
Transaction data record various information about individuals, including their purchases and diagnoses, and are increasingly published to support large-scale and low-cost studies in domains such as marketing and medicine. However, the dissemination of transaction data may lead to privacy breaches, as it allows an attacker to link an individual’s record to their identity. Approaches that anonymize data by eliminating certain values in an individual’s record or by replacing them with more general values have been proposed recently, but they often produce data of limited usefulness. This is because these approaches adopt value transformation strategies that do not guarantee data utility in intended applications and objective measures that may lead to excessive data distortion. In this paper, we propose a novel approach for anonymizing data in a way that satisfies data publishers’ utility requirements and incurs low information loss. To achieve this, we introduce an accurate information loss measure and an effective anonymization algorithm that explores a large part of the problem space. An extensive experimental study, using click-stream and medical data, demonstrates that our approach permits many times more accurate query answering than the state-of-the-art methods, while it is comparable to them in terms of efficiency. PMID:22563145
The RAPID Toolkit: Facilitating Utility-Scale Renewable Energy Development
The RAPID Toolkit, developed by the National Renewable Energy Laboratory, provides information about federal, state, and local permitting and regulations for utility-scale renewable energy and bulk transmission projects.
78 FR 11146 - Utility Scale Wind Towers From the People's Republic of China: Antidumping Duty Order
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-15
... DEPARTMENT OF COMMERCE International Trade Administration [A-570-981] Utility Scale Wind Towers...''), the Department is issuing an antidumping duty order on utility scale wind towers (``wind towers... investigation of wind towers from the PRC.\\1\\ On February 8, 2013, the ITC notified the Department of its...
Barriers to Research Utilization Scale: psychometric properties of the Turkish version.
Temel, Ayla Bayik; Uysal, Aynur; Ardahan, Melek; Ozkahraman, Sukran
2010-02-01
This paper is a report of a study designed to assess the psychometric properties of the Turkish version of the Barriers to Research Utilization Scale. The original Barriers to Research Utilization Scale was developed by Funk et al. in the United States of America. Many researchers in various countries have used this scale to identify barriers to research utilization. A methodological study was carried out at four hospitals. The sample consisted of 300 nurses. Data were collected in 2005 using a socio-demographic form (12 questions) and the Turkish version of the Barriers to Research Utilization Scale. A Likert-type scale composed of four sub-factors and 29 items was used. Means and standard deviations were calculated for interval level data. A P value of <0.05 was considered statistically significant. Language equivalence and content validity were assessed by eight experts. Confirmatory factor analysis revealed that the Turkish version was made up of four subscales. The internal consistency reliability coefficient was 0.92 for the total scale and ranged from 0.73 to 0.80 for the subscales. Item-total correlation coefficients ranged from 0.37 to 0.60. The Turkish version of the scale is similar in structure to the original English language scale.
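For reference, the reliability statistics reported above can be computed as in the following sketch (standard psychometric formulas, not the authors' code): Cronbach's alpha and corrected item-total correlations for a respondents-by-items score matrix; the toy scores are invented.

import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var / total_var)

def corrected_item_total(items):
    """Corrected item-total correlations (each item vs. the sum of the rest)."""
    items = np.asarray(items, dtype=float)
    out = []
    for j in range(items.shape[1]):
        rest = np.delete(items, j, axis=1).sum(axis=1)
        out.append(np.corrcoef(items[:, j], rest)[0, 1])
    return out

# toy usage: 5 respondents x 4 Likert items
scores = [[4, 3, 4, 5], [2, 2, 3, 2], [5, 4, 4, 5], [3, 3, 2, 3], [1, 2, 1, 2]]
print(round(cronbach_alpha(scores), 2), [round(r, 2) for r in corrected_item_total(scores)])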
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Chenguang; Manohar, Aswin K.; Narayanan, S. R.
Iron-based alkaline rechargeable batteries such as iron-air and nickel-iron batteries are particularly attractive for large-scale energy storage because these batteries can be relatively inexpensive, environment-friendly, and safe. Therefore, our study has focused on achieving the essential electrical performance and cycling properties needed for the widespread use of iron-based alkaline batteries in stationary and distributed energy storage applications. We have demonstrated, for the first time, an advanced sintered iron electrode capable of 3500 cycles of repeated charge and discharge at the 1-hour rate and 100% depth of discharge in each cycle, and an average Coulombic efficiency of over 97%. Such a robust and efficient rechargeable iron electrode is also capable of continuous discharge at rates as high as 3C with no noticeable loss in utilization. We have shown that the porosity, pore size, and thickness of the sintered electrode can be selected rationally to optimize specific capacity, rate capability, and robustness. As a result, these advances in the electrical performance and durability of the iron electrode enable iron-based alkaline batteries to be a viable technology solution for meeting the dire need for large-scale electrical energy storage.
Young Kim, Eun; Johnson, Hans J
2013-01-01
A robust multi-modal tool for automated registration, bias correction, and tissue classification has been implemented for large-scale heterogeneous multi-site longitudinal MR data analysis. This work focused on improving an iterative optimization framework between bias correction, registration, and tissue classification, inspired by previous work. The primary contributions are robustness improvements from the incorporation of four elements: (1) use of multi-modal and repeated scans, (2) incorporation of highly deformable registration, (3) an extended set of tissue definitions, and (4) multi-modal-aware intensity-context priors. The benefits of these enhancements were investigated in a series of experiments with both a simulated brain data set (BrainWeb) and highly heterogeneous data from a 32-site imaging study, with quality assessed through expert visual inspection. The implementation of this tool is tailored for, but not limited to, large-scale data processing with great data variation, with a flexible interface. In this paper, we describe enhancements to a joint registration, bias correction, and tissue classification tool that improve the generalizability and robustness for processing multi-modal longitudinal MR scans collected at multiple sites. The tool was evaluated using both simulated and human subject MRI images. With these enhancements, the results showed improved robustness for large-scale heterogeneous MRI processing.
Azad, Ariful; Ouzounis, Christos A; Kyrpides, Nikos C; Buluç, Aydin
2018-01-01
Abstract Biological networks capture structural or functional properties of relevant entities such as molecules, proteins or genes. Characteristic examples are gene expression networks or protein–protein interaction networks, which hold information about functional affinities or structural similarities. Such networks have been expanding in size due to increasing scale and abundance of biological data. While various clustering algorithms have been proposed to find highly connected regions, Markov Clustering (MCL) has been one of the most successful approaches to cluster sequence similarity or expression networks. Despite its popularity, MCL’s scalability to cluster large datasets still remains a bottleneck due to high running times and memory demands. Here, we present High-performance MCL (HipMCL), a parallel implementation of the original MCL algorithm that can run on distributed-memory computers. We show that HipMCL can efficiently utilize 2000 compute nodes and cluster a network of ∼70 million nodes with ∼68 billion edges in ∼2.4 h. By exploiting distributed-memory environments, HipMCL clusters large-scale networks several orders of magnitude faster than MCL and enables clustering of even bigger networks. HipMCL is based on MPI and OpenMP and is freely available under a modified BSD license. PMID:29315405
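The core MCL iteration that HipMCL parallelizes can be illustrated with a small dense sketch (illustrative only; HipMCL itself is a sparse, distributed MPI/OpenMP implementation, and the toy graph below is invented).

import numpy as np

def mcl(adjacency, inflation=2.0, n_iter=50):
    """Minimal single-node Markov Clustering sketch: alternate expansion
    (squaring the column-stochastic transition matrix) and inflation
    (element-wise power followed by re-normalization)."""
    M = adjacency + np.eye(adjacency.shape[0])     # self-loops for stability
    M = M / M.sum(axis=0, keepdims=True)           # column-stochastic
    for _ in range(n_iter):
        M = M @ M                                  # expansion
        M = M ** inflation                         # inflation
        M = M / M.sum(axis=0, keepdims=True)
    # attractors: rows with remaining mass define the clusters
    clusters = [set(np.nonzero(row > 1e-6)[0]) for row in M if row.max() > 1e-6]
    return [c for i, c in enumerate(clusters) if c not in clusters[:i]]

# toy usage: two triangles joined by one weak edge
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1
print(mcl(A))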
NASA Technical Reports Server (NTRS)
Lutwack, R.
1974-01-01
A technical assessment of a program to develop photovoltaic power system technology for large-scale national energy applications was made by analyzing and judging the alternative candidate photovoltaic systems and development tasks. A program plan was constructed based on achieving the 10 year objective of a program to establish the practicability of large-scale terrestrial power installations using photovoltaic conversion arrays costing less than $0.50/peak W. Guidelines for the tasks of a 5 year program were derived from a set of 5 year objectives deduced from the 10 year objective. This report indicates the need for an early emphasis on the development of the single-crystal Si photovoltaic system for commercial utilization; a production goal of 5 x 10^8 peak W/year of $0.50 cells was projected for the year 1985. The development of other photovoltaic conversion systems was assigned to longer range development roles. The status of the technology developments and the applicability of solar arrays in particular power installations, ranging from houses to central power plants, was scheduled to be verified in a series of demonstration projects. The budget recommended for the first 5 year phase of the program is $268.5M.
Juvrud, Joshua; Gredebäck, Gustaf; Åhs, Fredrik; Lerin, Nils; Nyström, Pär; Kastrati, Granit; Rosén, Jörgen
2018-01-01
There is a need for large-scale remote data collection in a controlled environment, and the in-home availability of virtual reality (VR) and the commercial availability of eye tracking for VR present unique and exciting opportunities for researchers. We propose and provide a proof-of-concept assessment of a robust system for large-scale in-home testing using consumer products that combines psychophysiological measures and VR, here referred to as a Virtual Lab. For the first time, this method is validated by correlating autonomic responses, skin conductance response (SCR), and pupillary dilation, in response to a spider, a beetle, and a ball using commercially available VR. Participants demonstrated greater SCR and pupillary responses to the spider, and the effect was dependent on the proximity of the stimuli to the participant, with a stronger response when the spider was close to the virtual self. We replicated these effects across two experiments and in separate physical room contexts to mimic variability in home environment. Together, these findings demonstrate the utility of pupil dilation as a marker of autonomic arousal and the feasibility to assess this in commercially available VR hardware and support a robust Virtual Lab tool for massive remote testing.
Yang, Chenguang; Manohar, Aswin K.; Narayanan, S. R.
2017-01-07
Iron-based alkaline rechargeable batteries such as iron-air and nickel-iron batteries are particularly attractive for large-scale energy storage because these batteries can be relatively inexpensive, environment-friendly, and safe. Therefore, our study has focused on achieving the essential electrical performance and cycling properties needed for the widespread use of iron-based alkaline batteries in stationary and distributed energy storage applications. We have demonstrated, for the first time, an advanced sintered iron electrode capable of 3500 cycles of repeated charge and discharge at the 1-hour rate and 100% depth of discharge in each cycle, and an average Coulombic efficiency of over 97%. Such a robust and efficient rechargeable iron electrode is also capable of continuous discharge at rates as high as 3C with no noticeable loss in utilization. We have shown that the porosity, pore size, and thickness of the sintered electrode can be selected rationally to optimize specific capacity, rate capability, and robustness. As a result, these advances in the electrical performance and durability of the iron electrode enable iron-based alkaline batteries to be a viable technology solution for meeting the dire need for large-scale electrical energy storage.
Juvrud, Joshua; Gredebäck, Gustaf; Åhs, Fredrik; Lerin, Nils; Nyström, Pär; Kastrati, Granit; Rosén, Jörgen
2018-01-01
There is a need for large-scale remote data collection in a controlled environment, and the in-home availability of virtual reality (VR) and the commercial availability of eye tracking for VR present unique and exciting opportunities for researchers. We propose and provide a proof-of-concept assessment of a robust system for large-scale in-home testing using consumer products that combines psychophysiological measures and VR, here referred to as a Virtual Lab. For the first time, this method is validated by correlating autonomic responses, skin conductance response (SCR), and pupillary dilation, in response to a spider, a beetle, and a ball using commercially available VR. Participants demonstrated greater SCR and pupillary responses to the spider, and the effect was dependent on the proximity of the stimuli to the participant, with a stronger response when the spider was close to the virtual self. We replicated these effects across two experiments and in separate physical room contexts to mimic variability in home environment. Together, these findings demonstrate the utility of pupil dilation as a marker of autonomic arousal and the feasibility to assess this in commercially available VR hardware and support a robust Virtual Lab tool for massive remote testing. PMID:29867318
Azad, Ariful; Pavlopoulos, Georgios A.; Ouzounis, Christos A.; ...
2018-01-05
Biological networks capture structural or functional properties of relevant entities such as molecules, proteins or genes. Characteristic examples are gene expression networks or protein–protein interaction networks, which hold information about functional affinities or structural similarities. Such networks have been expanding in size due to increasing scale and abundance of biological data. While various clustering algorithms have been proposed to find highly connected regions, Markov Clustering (MCL) has been one of the most successful approaches to cluster sequence similarity or expression networks. Despite its popularity, MCL's scalability to cluster large datasets still remains a bottleneck due to high running times and memory demands. In this paper, we present High-performance MCL (HipMCL), a parallel implementation of the original MCL algorithm that can run on distributed-memory computers. We show that HipMCL can efficiently utilize 2000 compute nodes and cluster a network of ~70 million nodes with ~68 billion edges in ~2.4 h. By exploiting distributed-memory environments, HipMCL clusters large-scale networks several orders of magnitude faster than MCL and enables clustering of even bigger networks. Finally, HipMCL is based on MPI and OpenMP and is freely available under a modified BSD license.
Quantifying predictability in a model with statistical features of the atmosphere
Kleeman, Richard; Majda, Andrew J.; Timofeyev, Ilya
2002-01-01
The Galerkin truncated inviscid Burgers equation has recently been shown by the authors to be a simple model with many degrees of freedom, with many statistical properties similar to those occurring in dynamical systems relevant to the atmosphere. These properties include long time-correlated, large-scale modes of low frequency variability and short time-correlated “weather modes” at smaller scales. The correlation scaling in the model extends over several decades and may be explained by a simple theory. Here a thorough analysis of the nature of predictability in the idealized system is developed by using a theoretical framework developed by R.K. This analysis is based on a relative entropy functional that has been shown elsewhere by one of the authors to measure the utility of statistical predictions precisely. The analysis is facilitated by the fact that most relevant probability distributions are approximately Gaussian if the initial conditions are assumed to be so. Rather surprisingly this holds for both the equilibrium (climatological) and nonequilibrium (prediction) distributions. We find that in most cases the absolute difference in the first moments of these two distributions (the “signal” component) is the main determinant of predictive utility variations. Contrary to conventional belief in the ensemble prediction area, the dispersion of prediction ensembles is generally of secondary importance in accounting for variations in utility associated with different initial conditions. This conclusion has potentially important implications for practical weather prediction, where traditionally most attention has focused on dispersion and its variability. PMID:12429863
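The signal/dispersion decomposition referred to above has a simple closed form when both distributions are Gaussian; the sketch below (standard Gaussian relative-entropy algebra, not the authors' code) separates the mean-shift ("signal") and covariance ("dispersion") contributions for a toy one-dimensional forecast.

import numpy as np

def gaussian_relative_entropy(mu_p, cov_p, mu_q, cov_q):
    """Relative entropy (KL divergence) of a Gaussian prediction
    distribution p from the Gaussian climatology q, split into a
    'signal' term (shift of the mean) and a 'dispersion' term
    (change in covariance)."""
    mu_p, mu_q = np.atleast_1d(mu_p), np.atleast_1d(mu_q)
    cov_p, cov_q = np.atleast_2d(cov_p), np.atleast_2d(cov_q)
    q_inv = np.linalg.inv(cov_q)
    d = mu_p - mu_q
    signal = 0.5 * d @ q_inv @ d
    dispersion = 0.5 * (np.trace(q_inv @ cov_p) - len(d)
                        + np.log(np.linalg.det(cov_q) / np.linalg.det(cov_p)))
    return signal, dispersion, signal + dispersion

# toy usage: a shifted, slightly sharpened 1-D forecast (invented numbers)
print(gaussian_relative_entropy(1.0, 0.8, 0.0, 1.0))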
Yap, Choon-Kong; Eisenhaber, Birgit; Eisenhaber, Frank; Wong, Wing-Cheong
2016-11-29
While the local-mode HMMER3 is notable for its massive speed improvement, the slower glocal-mode HMMER2 is more exact for domain annotation by enforcing full domain-to-sequence alignments. Since a unit of domain necessarily implies a unit of function, local-mode HMMER3 alone remains insufficient for precise function annotation tasks. In addition, the incomparable E-values for the same domain model by different HMMER builds create difficulty when checking for domain annotation consistency on a large-scale basis. In this work, both the speed of HMMER3 and the glocal-mode alignment of HMMER2 are combined within the xHMMER3x2 framework for tackling the large-scale domain annotation task. Briefly, HMMER3 is utilized for initial domain detection so that HMMER2 can subsequently perform the glocal-mode, sequence-to-full-domain alignments for the detected HMMER3 hits. An E-value calibration procedure is required to ensure that the search space of HMMER2 is sufficiently replicated by HMMER3. We find that the latter is straightforwardly possible for ~80% of the models in the Pfam domain library (release 29). However, in the case of the remaining ~20% of HMMER3 domain models, the respective HMMER2 counterparts are more sensitive. Thus, HMMER3 searches alone are insufficient to ensure sensitivity, and a HMMER2-based search needs to be initiated. When tested on the set of UniProt human sequences, xHMMER3x2 can be configured to be between 7× and 201× faster than HMMER2, but with domain detection sensitivity descending from 99.8 to 95.7% with respect to HMMER2 alone; HMMER3's sensitivity was 95.7%. At the extremes, xHMMER3x2 is either the slow glocal-mode HMMER2 or the fast HMMER3 with glocal-mode. Finally, the mapping of E-values to false-positive rates (FPR) by xHMMER3x2 allows E-values of different model builds to be compared, so that any annotation discrepancies in a large-scale annotation exercise can be flagged for further examination by dissectHMMER. The xHMMER3x2 workflow allows large-scale domain annotation speed to be drastically improved over HMMER2 without compromising domain-detection sensitivity or sequence-to-full-domain alignment completeness. The xHMMER3x2 code and its webserver (for Pfam releases 27, 28 and 29) are freely available at http://xhmmer3x2.bii.a-star.edu.sg/ . Reviewed by Thomas Dandekar, L. Aravind, Oliviero Carugo and Shamil Sunyaev. For the full reviews, please go to the Reviewers' comments section.
Basin scale permeability and thermal evolution of a magmatic hydrothermal system
NASA Astrophysics Data System (ADS)
Taron, J.; Hickman, S. H.; Ingebritsen, S.; Williams, C.
2013-12-01
Large-scale hydrothermal systems are potentially valuable energy resources and are of general scientific interest due to extreme conditions of stress, temperature, and reactive chemistry that can act to modify crustal rheology and composition. With many proposed sites for Enhanced Geothermal Systems (EGS) located on the margins of large-scale hydrothermal systems, understanding the temporal evolution of these systems contributes to site selection, characterization and design of EGS. This understanding is also needed to address the long-term sustainability of EGS once they are created. Many important insights into heat and mass transfer within natural hydrothermal systems can be obtained through hydrothermal modeling assuming that stress and permeability structure do not evolve over time. However, this is not fully representative of natural systems, where the effects of thermo-elastic stress changes, chemical fluid-rock interactions, and rock failure on fluid flow and thermal evolution can be significant. The quantitative importance of an evolving permeability field within the overall behavior of a large-scale hydrothermal system is somewhat untested, and providing such a parametric understanding is one of the goals of this study. We explore the thermal evolution of a sedimentary basin hydrothermal system following the emplacement of a magma body. The Salton Sea geothermal field and its associated magmatic system in southern California is utilized as a general backdrop to define the initial state. Working within the general framework of the open-source scientific computing initiative OpenGeoSys (www.opengeosys.org), we introduce full treatment of thermodynamic properties at the extreme conditions following magma emplacement. This treatment utilizes a combination of standard Galerkin and control-volume finite elements to balance fluid mass, mechanical deformation, and thermal energy with consideration of local thermal non-equilibrium (LTNE) between fluids and solids. Permeability is allowed to evolve under several constitutive models tailored to both porous media and fractures, considering the influence of both mechanical stress and diagenesis. In this first analysis, a relatively simple mechanical model is used; complexity will be added incrementally to represent specific characteristics of the Salton Sea hydrothermal field.
The construction of standard gamble utilities.
van Osch, Sylvie M C; Stiggelbout, Anne M
2008-01-01
Health effects for cost-effectiveness analysis are best measured in life years, with quality of life in each life year expressed in terms of utilities. The standard gamble (SG) has been the gold standard for utility measurement. However, the biases of probability weighting, loss aversion, and scale compatibility have an inconclusive effect on SG utilities. We determined their effect on SG utilities using qualitative data to assess the reference point and the focus of attention. While thinking aloud, 45 healthy respondents provided SG utilities for six rheumatoid arthritis health states. Reference points, goals, and focuses of attention were coded. To assess the effect of scale compatibility, correlations were assessed between focus of attention and mean utility. The certain outcome most frequently served as the reference point, and the SG was perceived as a mixed gamble. Goals were mostly mentioned with respect to this outcome. Scale compatibility led to a significant upward bias in utilities; attention lay relatively more on the low outcome, and this was positively correlated with mean utility. SG utilities should be corrected for loss aversion and probability weighting with the mixed correction formula proposed by prospect theory. Scale compatibility will likely still bias SG utilities, calling for research on a correction. Copyright (c) 2007 John Wiley & Sons, Ltd.
Extraterrestrial resource utilization for economy in space missions
NASA Technical Reports Server (NTRS)
Lewis, J. S.; Ramohalli, K.; Triffet, T.
1990-01-01
The NASA/University of Arizona Space Engineering Research Center is dedicated to research on the discovery, characterization, mapping, beneficiation, extraction, processing, and fabrication of useful products from extraterrestrial material. Schemes for the automated production of low-technology products that are likely to be desired in large quantities in the early stages of any large-scale space activity are identified and developed. This paper summarizes the research program, concentrating on the production of (1) propellants, both cryogenic and storable, (2) volatiles such as water, nitrogen, and carbon dioxide for use in life-support systems, (3) structural metals, and (4) refractories for use in aerobrakes and furnace linings.
GPU Accelerated DG-FDF Large Eddy Simulator
NASA Astrophysics Data System (ADS)
Inkarbekov, Medet; Aitzhan, Aidyn; Sammak, Shervin; Givi, Peyman; Kaltayev, Aidarkhan
2017-11-01
A GPU-accelerated simulator is developed and implemented for large eddy simulation (LES) of turbulent flows. The filtered density function (FDF) is utilized for modeling of the subgrid-scale quantities. The filtered transport equations are solved via a discontinuous Galerkin (DG) method, and the FDF is simulated via a particle-based Lagrangian Monte-Carlo (MC) method. It is demonstrated that the GPU simulations are of the order of 100 times faster than the CPU-based calculations. This brings LES of turbulent flows to a new level, facilitating efficient simulation of more complex problems. The work at Al-Faraby Kazakh National University is sponsored by MoES of RK under Grant 3298/GF-4.
Resource Management for Distributed Parallel Systems
NASA Technical Reports Server (NTRS)
Neuman, B. Clifford; Rao, Santosh
1993-01-01
Multiprocessor systems should exist in the the larger context of distributed systems, allowing multiprocessor resources to be shared by those that need them. Unfortunately, typical multiprocessor resource management techniques do not scale to large networks. The Prospero Resource Manager (PRM) is a scalable resource allocation system that supports the allocation of processing resources in large networks and multiprocessor systems. To manage resources in such distributed parallel systems, PRM employs three types of managers: system managers, job managers, and node managers. There exist multiple independent instances of each type of manager, reducing bottlenecks. The complexity of each manager is further reduced because each is designed to utilize information at an appropriate level of abstraction.
Fabrication of aluminum-carbon composites
NASA Technical Reports Server (NTRS)
Novak, R. C.
1973-01-01
A screening, optimization, and evaluation program is reported of unidirectional carbon-aluminum composites. During the screening phase both large diameter monofilament and small diameter multifilament reinforcements were utilized to determine optimum precursor tape making and consolidation techniques. Difficulty was encountered in impregnating and consolidating the multifiber reinforcements. Large diameter monofilament reinforcement was found easier to fabricate into composites and was selected to carry into the optimization phase in which the hot pressing parameters were refined and the size of the fabricated panels was scaled up. After process optimization the mechanical properties of the carbon-aluminum composites were characterized in tension, stress-rupture and creep, mechanical fatigue, thermal fatigue, thermal aging, thermal expansion, and impact.
NASA Astrophysics Data System (ADS)
Hamada, Y.; O'Connor, B. L.
2012-12-01
Development in arid environments often results in the loss and degradation of the ephemeral streams that provide habitat and critical ecosystem functions such as water delivery, sediment transport, and groundwater recharge. Quantification of these ecosystem functions is challenging because of the episodic nature of runoff events in desert landscapes and the large spatial scale of watersheds that potentially can be impacted by large-scale development. Low-impact development guidelines and regulatory protection of ephemeral streams are often lacking due to the difficulty of accurately mapping and quantifying the critical functions of ephemeral streams at scales larger than individual reaches. Renewable energy development in arid regions has the potential to disturb ephemeral streams at the watershed scale, and it is necessary to develop environmental monitoring applications for ephemeral streams to help inform land management and regulatory actions aimed at protecting and mitigating for impacts related to large-scale land disturbances. This study focuses on developing remote sensing methodologies to identify and monitor impacts on ephemeral streams resulting from the land disturbance associated with utility-scale solar energy development in the desert southwest of the United States. Airborne very high resolution (VHR) multispectral imagery is used to produce stereoscopic, three-dimensional landscape models that can be used to (1) identify and map ephemeral stream channel networks, and (2) support analyses and models of hydrologic and sediment transport processes that pertain to the critical functionality of ephemeral streams. Spectral and statistical analyses are being developed to extract information about ephemeral channel location and extent, micro-topography, riparian vegetation, and soil moisture characteristics. This presentation will demonstrate initial results and provide a framework for future work associated with this project, for developing the field measurements necessary to verify remote sensing landscape models, and for generating hydrologic models and analyses.
Multiscale modeling and general theory of non-equilibrium plasma-assisted ignition and combustion
NASA Astrophysics Data System (ADS)
Yang, Suo; Nagaraja, Sharath; Sun, Wenting; Yang, Vigor
2017-11-01
A self-consistent framework for modeling and simulations of plasma-assisted ignition and combustion is established. In this framework, a 'frozen electric field' modeling approach is applied to take advantage of the quasi-periodic behavior of the electrical characteristics and avoid re-calculating the electric field for each pulse. The correlated dynamic adaptive chemistry (CO-DAC) method is employed to accelerate the calculation of large and stiff chemical mechanisms. The time step is dynamically updated during the simulation through a three-stage multi-time-scale modeling strategy, which utilizes the large separation of time scales in nanosecond pulsed plasma discharges. A general theory of plasma-assisted ignition and combustion is then proposed. Nanosecond pulsed plasma discharges for ignition and combustion can be divided into four stages. Stage I is the discharge pulse, with time scales of O(1-10 ns). In this stage, input energy is coupled into electron impact excitation and dissociation reactions to generate charged/excited species and radicals. Stage II is the afterglow during the gap between two adjacent pulses, with time scales of O(100 ns). In this stage, quenching of excited species dissociates O2 and fuel molecules and provides fast gas heating. Stage III is the remaining gap between pulses, with time scales of O(1-100 µs). The radicals generated during Stages I and II significantly enhance exothermic reactions in this stage. The cumulative effects of multiple pulses are seen in Stage IV, with time scales of O(1-1000 ms), which include preheated gas temperatures and a large pool of radicals and fuel fragments to trigger ignition. For flames, plasma could significantly enhance radical generation and gas heating in the pre-heat zone, thereby enhancing flame establishment.
Panepinto, Julie A; Torres, Sylvia; Bendo, Cristiane B; McCavit, Timothy L; Dinu, Bogdan; Sherman-Bien, Sandra; Bemrich-Stolz, Christy; Varni, James W
2014-01-01
Sickle cell disease (SCD) is an inherited blood disorder characterized by a chronic hemolytic anemia that can contribute to fatigue and global cognitive impairment in patients. The study objective was to report on the feasibility, reliability, and validity of the PedsQL™ Multidimensional Fatigue Scale in SCD for pediatric patient self-report ages 5-18 years and parent proxy-report for ages 2-18 years. This was a cross-sectional multi-site study whereby 240 pediatric patients with SCD and 303 parents completed the 18-item PedsQL™ Multidimensional Fatigue Scale. Participants also completed the PedsQL™ 4.0 Generic Core Scales. The PedsQL™ Multidimensional Fatigue Scale evidenced excellent feasibility, excellent reliability for the Total Scale Scores (patient self-report α = 0.90; parent proxy-report α = 0.95), and acceptable reliability for the three individual scales (patient self-report α = 0.77-0.84; parent proxy-report α = 0.90-0.97). Intercorrelations of the PedsQL™ Multidimensional Fatigue Scale with the PedsQL™ Generic Core Scales were predominantly in the large (≥0.50) range, supporting construct validity. PedsQL™ Multidimensional Fatigue Scale Scores were significantly worse, with large effect sizes (≥0.80), for patients with SCD than for a comparison sample of healthy children, supporting known-groups discriminant validity. Confirmatory factor analysis demonstrated an acceptable to excellent model fit in SCD. The PedsQL™ Multidimensional Fatigue Scale demonstrated acceptable to excellent measurement properties in SCD. The results demonstrate the relative severity of fatigue symptoms in pediatric patients with SCD, indicating the potential clinical utility of multidimensional assessment of fatigue in patients with SCD in clinical research and practice. © 2013 Wiley Periodicals, Inc.
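For reference, the Cronbach's alpha reliability coefficients reported above can be computed from an item-by-respondent score matrix as in this minimal sketch; the score data shown are hypothetical, not from the study.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Toy example: 6 respondents rating an 18-item scale (hypothetical data)
rng = np.random.default_rng(0)
scores = rng.integers(0, 5, size=(6, 18))
print(round(cronbach_alpha(scores), 3))
```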
PedsQL™ Multidimensional Fatigue Scale in Sickle Cell Disease: Feasibility, Reliability and Validity
Panepinto, Julie A.; Torres, Sylvia; Bendo, Cristiane B.; McCavit, Timothy L.; Dinu, Bogdan; Sherman-Bien, Sandra; Bemrich-Stolz, Christy; Varni, James W.
2013-01-01
Background Sickle cell disease (SCD) is an inherited blood disorder characterized by a chronic hemolytic anemia that can contribute to fatigue and global cognitive impairment in patients. The study objective was to report on the feasibility, reliability, and validity of the PedsQL™ Multidimensional Fatigue Scale in SCD for pediatric patient self-report ages 5–18 years and parent proxy-report for ages 2–18 years. Procedure This was a cross-sectional multi-site study whereby 240 pediatric patients with SCD and 303 parents completed the 18-item PedsQL™ Multidimensional Fatigue Scale. Participants also completed the PedsQL™ 4.0 Generic Core Scales. Results The PedsQL™ Multidimensional Fatigue Scale evidenced excellent feasibility, excellent reliability for the Total Scale Scores (patient self-report α = 0.90; parent proxy-report α = 0.95), and acceptable reliability for the three individual scales (patient self-report α = 0.77–0.84; parent proxy-report α = 0.90–0.97). Intercorrelations of the PedsQL™ Multidimensional Fatigue Scale with the PedsQL™ Generic Core Scales were predominantly in the large (≥0.50) range, supporting construct validity. PedsQL™ Multidimensional Fatigue Scale Scores were significantly worse, with large effect sizes (≥0.80), for patients with SCD than for a comparison sample of healthy children, supporting known-groups discriminant validity. Confirmatory factor analysis demonstrated an acceptable to excellent model fit in SCD. Conclusions The PedsQL™ Multidimensional Fatigue Scale demonstrated acceptable to excellent measurement properties in SCD. The results demonstrate the relative severity of fatigue symptoms in pediatric patients with SCD, indicating the potential clinical utility of multidimensional assessment of fatigue in patients with SCD in clinical research and practice. PMID:24038960
NASA Astrophysics Data System (ADS)
Suzuki, Ryosuke; Nishimura, Motoki; Yuan, Lee Chang; Kamahara, Hirotsugu; Atsuta, Yoichi; Daimon, Hiroyuki
2017-10-01
Utilization of sewage sludge through anaerobic digestion has been promoted for decades; however, it is still relatively uncommon, especially in Japan. As an approach to promote this utilization, an integrated system that combines anaerobic digestion with greenhouse cultivation, composting, and seaweed cultivation was proposed. Under this concept, sewage sludge is not only treated by anaerobic digestion to produce green energy, but by-products such as CO2 and heat generated during the process are also used for crop production. In this study, the potential of the integrated system was assessed by estimating a feasible commercial scale and by comparing its energy consumption with the conventional approach to sewage sludge treatment, incineration. The feasible commercial scale was estimated from the carbon flow of the system. Results showed that 25% of the current total electricity demand of the wastewater treatment plant could be covered by the energy produced through anaerobic digestion of sewage sludge. The total energy consumption of the integrated system was estimated to be 14% lower than that of the incineration approach. In addition to the large amount of crops that can be produced, this study aims to showcase the potential of sewage sludge as a biomass resource through the proposed integrated system. The added value of producing crops by utilizing CO2 and heat can serve as a stimulus to the public and lead to greater interest in implementing the utilization of sewage sludge through anaerobic digestion.
Moore, Sara; Wakam, Glenn; Hubbard, Alan E.; Cohen, Mitchell J.
2017-01-01
Introduction Delayed notification and lack of early information hinder timely hospital based activations in large scale multiple casualty events. We hypothesized that Twitter real-time data would produce a unique and reproducible signal within minutes of multiple casualty events and we investigated the timing of the signal compared with other hospital disaster notification mechanisms. Methods Using disaster specific search terms, all relevant tweets from the event to 7 days post-event were analyzed for 5 recent US based multiple casualty events (Boston Bombing [BB], SF Plane Crash [SF], Napa Earthquake [NE], Sandy Hook [SH], and Marysville Shooting [MV]). Quantitative and qualitative analysis of tweet utilization were compared across events. Results Over 3.8 million tweets were analyzed (SH 1.8 m, BB 1.1m, SF 430k, MV 250k, NE 205k). Peak tweets per min ranged from 209–3326. The mean followers per tweeter ranged from 3382–9992 across events. Retweets were tweeted a mean of 82–564 times per event. Tweets occurred very rapidly for all events (<2 mins) and represented 1% of the total event specific tweets in a median of 13 minutes of the first 911 calls. A 200 tweets/min threshold was reached fastest with NE (2 min), BB (7 min), and SF (18 mins). If this threshold was utilized as a signaling mechanism to place local hospitals on standby for possible large scale events, in all case studies, this signal would have preceded patient arrival. Importantly, this threshold for signaling would also have preceded traditional disaster notification mechanisms in SF, NE, and simultaneous with BB and MV. Conclusions Social media data has demonstrated that this mechanism is a powerful, predictable, and potentially important resource for optimizing disaster response. Further investigated is warranted to assess the utility of prospective signally thresholds for hospital based activation. PMID:28982201
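A minimal sketch of the tweets-per-minute threshold signal described above: it bins tweet timestamps into minutes and reports the first minute in which the rate reaches 200 tweets/min. The timestamps are hypothetical; the authors' actual processing pipeline is not described in the abstract.

```python
from collections import Counter
from datetime import datetime

def first_threshold_minute(tweet_times, threshold=200):
    """Return the first minute in which the tweet rate reaches `threshold`
    tweets/min, or None. `tweet_times` is an iterable of datetime objects;
    200 tweets/min mirrors the signaling threshold examined above."""
    per_minute = Counter(t.replace(second=0, microsecond=0) for t in tweet_times)
    crossings = [m for m, n in sorted(per_minute.items()) if n >= threshold]
    return crossings[0] if crossings else None

# Hypothetical timestamps: a burst of 250 tweets within a single minute
burst = [datetime(2014, 8, 24, 3, 22, s % 60) for s in range(250)]
print(first_threshold_minute(burst))
```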
Large Airborne Full Tensor Gradient Data Inversion Based on a Non-Monotone Gradient Method
NASA Astrophysics Data System (ADS)
Sun, Yong; Meng, Zhaohai; Li, Fengting
2018-03-01
With the development of gravity gradiometer instrument technology, full tensor gravity (FTG) data can now be acquired on airborne and marine platforms. Large-scale geophysical data sets can be obtained with these methods, placing them in the "big data" category. A fast and effective inversion method is therefore developed to solve the large-scale FTG data inversion problem. Many algorithms are available to accelerate FTG data inversion, such as the conjugate gradient method; however, the conventional conjugate gradient method takes a long time to complete data processing, so a fast and effective iterative algorithm is needed to improve the utilization of FTG data. The inversion is formulated with regularizing constraints, and a non-monotone gradient-descent method is then introduced to accelerate the convergence rate of the FTG data inversion. Compared with the conventional gradient method, the steepest-descent algorithm, and the conjugate gradient algorithm, the non-monotone iterative gradient-descent algorithm shows clear advantages. Simulated and field FTG data are used to demonstrate the value of this new fast inversion method.
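The abstract does not give the specific update rule, so the sketch below only illustrates the general idea of a non-monotone gradient-descent inversion, using a Barzilai-Borwein step length on a Tikhonov-regularized least-squares problem; the matrix A is a small synthetic stand-in for a (much larger) FTG sensitivity kernel, and none of this should be read as the authors' exact algorithm.

```python
import numpy as np

def nonmonotone_bb_gradient(A, b, lam=1e-2, max_iter=500, tol=1e-8):
    """Sketch of a non-monotone (Barzilai-Borwein) gradient descent for
        minimize 0.5*||A m - b||^2 + 0.5*lam*||m||^2.
    The BB step length is set from the previous iterate/gradient change and
    is not forced to decrease the objective at every step, which is what
    makes the iteration non-monotone."""
    m = np.zeros(A.shape[1])
    g = A.T @ (A @ m - b) + lam * m          # gradient of the objective
    alpha = 1e-2                             # initial step length
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        m_new = m - alpha * g
        g_new = A.T @ (A @ m_new - b) + lam * m_new
        s, y = m_new - m, g_new - g
        alpha = float(s @ s) / max(float(s @ y), 1e-16)   # BB1 step length
        m, g = m_new, g_new
    return m

# Small synthetic example standing in for an FTG inversion
rng = np.random.default_rng(1)
A = rng.normal(size=(200, 50))
m_true = rng.normal(size=50)
b = A @ m_true + 0.01 * rng.normal(size=200)
print(np.linalg.norm(nonmonotone_bb_gradient(A, b) - m_true))
```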
Really Large Scale Computer Graphic Projection Using Lasers and Laser Substitutes
NASA Astrophysics Data System (ADS)
Rother, Paul
1989-07-01
This paper reflects on past laser projects to display vector-scanned computer graphic images onto very large and irregular surfaces. Since the availability of microprocessors and high-powered visible lasers, very large scale computer graphics projection has become a reality. Because they do not require a focusing lens, lasers easily project onto distant and irregular surfaces and have been used for amusement parks, theatrical performances, concert performances, industrial trade shows, and dance clubs. Lasers have been used to project onto mountains, buildings, 360° globes, clouds of smoke, and water. These methods have proven successful in installations at the Epcot Theme Park in Florida, Stone Mountain Park in Georgia, the 1984 Olympics in Los Angeles, hundreds of corporate trade shows, and thousands of musical performances. Using new ColorRay™ technology, the use of costly and fragile lasers is no longer necessary: utilizing fiber optic technology, the functionality of lasers can be duplicated for new and exciting projection possibilities. ColorRay™ technology has enjoyed worldwide recognition in conjunction with Pink Floyd's and George Michael's worldwide tours.
Pratapa, Phanisri P.; Suryanarayana, Phanish; Pask, John E.
2015-12-01
We employ Anderson extrapolation to accelerate the classical Jacobi iterative method for large, sparse linear systems. Specifically, we utilize extrapolation at periodic intervals within the Jacobi iteration to develop the Alternating Anderson–Jacobi (AAJ) method. We verify the accuracy and efficacy of AAJ in a range of test cases, including nonsymmetric systems of equations. We demonstrate that AAJ possesses a favorable scaling with system size that is accompanied by a small prefactor, even in the absence of a preconditioner. In particular, we show that AAJ is able to accelerate the classical Jacobi iteration by over four orders of magnitude, with speed-ups that increase as the system gets larger. Moreover, we find that AAJ significantly outperforms the Generalized Minimal Residual (GMRES) method in the range of problems considered here, with the relative performance again improving with size of the system. As a result, the proposed method represents a simple yet efficient technique that is particularly attractive for large-scale parallel solutions of linear systems of equations.
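A minimal sketch of the AAJ idea as summarized above: weighted Jacobi sweeps on the preconditioned residual D⁻¹(b − Ax), with an Anderson extrapolation step applied at periodic intervals over a short history of iterates and residuals. The parameter choices (relaxation weight, history length, extrapolation period) are illustrative assumptions, not the values used in the paper.

```python
import numpy as np

def alternating_anderson_jacobi(A, b, x0=None, omega=2/3, m=5, p=6,
                                tol=1e-10, max_iter=10000):
    """Weighted Jacobi iteration with Anderson extrapolation every p-th step,
    using the last m iterate/residual differences (a sketch of the AAJ idea)."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.copy()
    Dinv = 1.0 / np.diag(A)
    X_hist, F_hist = [], []                    # iterate and residual histories
    for k in range(max_iter):
        r = b - A @ x
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            return x, k
        f = Dinv * r                           # preconditioned residual D^{-1} r
        X_hist.append(x.copy()); F_hist.append(f.copy())
        if len(X_hist) > m + 1:
            X_hist.pop(0); F_hist.pop(0)
        if (k + 1) % p == 0 and len(X_hist) > 1:
            # Anderson extrapolation over stored iterate/residual differences
            dX = np.column_stack([X_hist[i + 1] - X_hist[i] for i in range(len(X_hist) - 1)])
            dF = np.column_stack([F_hist[i + 1] - F_hist[i] for i in range(len(F_hist) - 1)])
            gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
            x = x + omega * f - (dX + omega * dF) @ gamma
        else:
            x = x + omega * f                  # plain weighted Jacobi sweep
    return x, max_iter

# Usage on a small strictly diagonally dominant test system
rng = np.random.default_rng(0)
n = 100
A = 0.1 * rng.normal(size=(n, n))
A += np.diag(np.abs(A).sum(axis=1) + 1.0)      # enforce diagonal dominance
b = rng.normal(size=n)
x, iters = alternating_anderson_jacobi(A, b)
print(iters, np.linalg.norm(A @ x - b))
```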
Statistical Compression of Wind Speed Data
NASA Astrophysics Data System (ADS)
Tagle, F.; Castruccio, S.; Crippa, P.; Genton, M.
2017-12-01
In this work we introduce a lossy compression approach that utilizes a stochastic wind generator based on a non-Gaussian distribution to reproduce the internal climate variability of daily wind speed as represented by the CESM Large Ensemble over Saudi Arabia. Stochastic wind generators, and stochastic weather generators more generally, are statistical models that aim to match certain statistical properties of the data on which they are trained. They have been used extensively in applications ranging from agricultural models to climate impact studies. In this novel context, the parameters of the fitted model can be interpreted as encoding the information contained in the original uncompressed data. The statistical model is fit to only 3 of the 30 ensemble members, and it adequately captures the variability of the ensemble in terms of the seasonal and interannual variability of daily wind speed. To deal with such a large spatial domain, it is partitioned into nine regions, and the model is fit independently to each. We further discuss a recent refinement of the model, which relaxes this assumption of regional independence by introducing a large-scale component that interacts with the fine-scale regional effects.
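To illustrate the "model parameters as compressed representation" idea, the sketch below fits a Weibull distribution (a common non-Gaussian choice for wind speed, used here only as a stand-in for the study's richer spatio-temporal generator) to a daily wind-speed series and regenerates a statistically similar surrogate from the stored parameters. The series is synthetic and the function names are illustrative.

```python
import numpy as np
from scipy import stats

def compress_wind(series, floc=0.0):
    """'Compress' a daily wind-speed series to the three parameters of a
    fitted Weibull distribution (illustrative non-Gaussian choice only)."""
    c, loc, scale = stats.weibull_min.fit(series, floc=floc)
    return c, loc, scale

def decompress_wind(params, n, seed=0):
    """Regenerate a statistically similar surrogate series from the parameters."""
    c, loc, scale = params
    return stats.weibull_min.rvs(c, loc=loc, scale=scale, size=n,
                                 random_state=seed)

# Synthetic example: 30 years of daily wind speeds reduced to three numbers
rng = np.random.default_rng(2)
speeds = rng.weibull(2.0, size=30 * 365) * 6.0    # ~Weibull(shape 2, scale 6)
params = compress_wind(speeds)
surrogate = decompress_wind(params, n=speeds.size)
print(params, round(speeds.mean(), 2), round(surrogate.mean(), 2))
```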
CORALINA: a universal method for the generation of gRNA libraries for CRISPR-based screening.
Köferle, Anna; Worf, Karolina; Breunig, Christopher; Baumann, Valentin; Herrero, Javier; Wiesbeck, Maximilian; Hutter, Lukas H; Götz, Magdalena; Fuchs, Christiane; Beck, Stephan; Stricker, Stefan H
2016-11-14
The bacterial CRISPR system is fast becoming the most popular genetic and epigenetic engineering tool due to its universal applicability and adaptability. The desire to deploy CRISPR-based methods in a large variety of species and contexts has created an urgent need for the development of easy, time- and cost-effective methods enabling large-scale screening approaches. Here we describe CORALINA (comprehensive gRNA library generation through controlled nuclease activity), a method for the generation of comprehensive gRNA libraries for CRISPR-based screens. CORALINA gRNA libraries can be derived from any source of DNA without the need of complex oligonucleotide synthesis. We show the utility of CORALINA for human and mouse genomic DNA, its reproducibility in covering the most relevant genomic features including regulatory, coding and non-coding sequences and confirm the functionality of CORALINA generated gRNAs. The simplicity and cost-effectiveness make CORALINA suitable for any experimental system. The unprecedented sequence complexities obtainable with CORALINA libraries are a necessary pre-requisite for less biased large scale genomic and epigenomic screens.
Parallel group independent component analysis for massive fMRI data sets.
Chen, Shaojie; Huang, Lei; Qiu, Huitong; Nebel, Mary Beth; Mostofsky, Stewart H; Pekar, James J; Lindquist, Martin A; Eloyan, Ani; Caffo, Brian S
2017-01-01
Independent component analysis (ICA) is widely used in the field of functional neuroimaging to decompose data into spatio-temporal patterns of co-activation. In particular, ICA has found wide usage in the analysis of resting state fMRI (rs-fMRI) data. Recently, a number of large-scale data sets have become publicly available that consist of rs-fMRI scans from thousands of subjects. As a result, efficient ICA algorithms that scale well to the increased number of subjects are required. To address this problem, we propose a two-stage likelihood-based algorithm for performing group ICA, which we denote Parallel Group Independent Component Analysis (PGICA). By utilizing the sequential nature of the algorithm and parallel computing techniques, we are able to efficiently analyze data sets from large numbers of subjects. We illustrate the efficacy of PGICA, which has been implemented in R and is freely available through the Comprehensive R Archive Network, through simulation studies and application to rs-fMRI data from two large multi-subject data sets, consisting of 301 and 779 subjects respectively.
Guldhe, Abhishek; Kumari, Sheena; Ramanna, Luveshan; Ramsundar, Prathana; Singh, Poonam; Rawat, Ismail; Bux, Faizal
2017-12-01
Microalgae are recognized as one of the most powerful biotechnology platforms for many value added products including biofuels, bioactive compounds, animal and aquaculture feed etc. However, large scale production of microalgal biomass poses challenges due to the requirements of large amounts of water and nutrients for cultivation. Using wastewater for microalgal cultivation has emerged as a potential cost effective strategy for large scale microalgal biomass production. This approach also offers an efficient means to remove nutrients and metals from wastewater making wastewater treatment sustainable and energy efficient. Therefore, much research has been conducted in the recent years on utilizing various wastewater streams for microalgae cultivation. This review identifies and discusses the opportunities and challenges of different wastewater streams for microalgal cultivation. Many alternative routes for microalgal cultivation have been proposed to tackle some of the challenges that occur during microalgal cultivation in wastewater such as nutrient deficiency, substrate inhibition, toxicity etc. Scope and challenges of microalgal biomass grown on wastewater for various applications are also discussed along with the biorefinery approach. Copyright © 2017 Elsevier Ltd. All rights reserved.
A prototype automatic phase compensation module
NASA Technical Reports Server (NTRS)
Terry, John D.
1992-01-01
The growing demand for high-gain, accurate satellite communication systems will necessitate the utilization of large reflector systems. One area of concern in reflector-based satellite communication is large-scale surface deformation due to thermal effects. These distortions, when present, can degrade the performance of the reflector system appreciably. This performance degradation is manifested by a decrease in peak gain, an increase in sidelobe level, and pointing errors. It is essential to compensate for these distortion effects and to maintain the required system performance in the operating space environment. For this reason, the development of a technique to offset the degradation effects is highly desirable. Currently, most research is directed at developing better materials for the reflector; these materials have a lower coefficient of linear expansion, thereby reducing the surface errors. Alternatively, one can minimize the distortion effects of these large-scale errors by adaptive phased array compensation. Adaptive phased array techniques have been studied extensively at NASA and elsewhere. Presented in this paper is a prototype automatic phase compensation module designed and built at NASA Lewis Research Center, which is the first stage of development for an adaptive array compensation module.
Plate motions and deformations from geologic and geodetic data
NASA Technical Reports Server (NTRS)
Jordan, Thomas H.
1990-01-01
An analysis of geodetic data in the vicinity of the Crustal Dynamics Program (CDP) site at Vandenberg Air Force Base (VNDN) is presented. The utility of space-geodetic data in the monitoring of transient strains associated with earthquakes in tectonically active areas like California is investigated. Of particular interest is the possibility that space-geodetic methods may be able to provide critical new data on deformations precursory to large seismic events. Although earthquake precursory phenomena are not well understood, the monitoring of small strains in the vicinity of active faults is a promising technique for studying the mechanisms that nucleate large earthquakes and, ultimately, for earthquake prediction. Space-geodetic techniques are now capable of measuring baselines of tens to hundreds of kilometers with a precision of a few parts in 10⁸. Within the next few years, it will be possible to record and analyze large-scale strain variations with this precision continuously in real time. Thus, space-geodetic techniques may become tools for earthquake prediction. In anticipation of this capability, several questions related to the temporal and spatial scales associated with subseismic deformation transients are examined.
Utility photovoltaic group: Status report
NASA Astrophysics Data System (ADS)
Serfass, Jeffrey A.; Hester, Stephen L.; Wills, Bethany N.
1996-01-01
The Utility PhotoVoltaic Group (UPVG) was formed in October of 1992 with a mission to accelerate the use of cost-effective small-scale and emerging grid-connected applications of photovoltaics for the benefit of electric utilities and their customers. The UPVG is now implementing a program to install up to 50 megawatts of photovoltaics in small-scale and grid-connected applications. This program, called TEAM-UP, is a partnership of the U.S. electric utility industry and the U.S. Department of Energy to help develop utility PV markets. TEAM-UP is a utility-directed program to significantly increase utility PV experience by promoting installations of utility PV systems. Two primary program areas are proposed for TEAM-UP: (1) Small-Scale Applications (SSA)—an initiative to aggregate utility purchases of small-scale, grid-independent applications; and (2) Grid-Connected Applications (GCA)—an initiative to identify and competitively award cost-sharing contracts for grid-connected PV systems with high market growth potential, or collective purchase programs involving multiple buyers. This paper describes these programs and outlines the schedule, the procurement status, and the results of the TEAM-UP process.
Precision measurements from very-large scale aerial digital imagery.
Booth, D Terrance; Cox, Samuel E; Berryman, Robert D
2006-01-01
Resource managers need measurements of the length and width of a variety of items, including animals, logs, streams, plant canopies, man-made objects, riparian habitat, vegetation patches, and other features important in resource monitoring and land inspection. These types of measurements can now be easily and accurately obtained from very large scale aerial (VLSA) imagery having spatial resolutions as fine as 1 millimeter per pixel by using the three new software programs described here. VLSA images have small fields of view and are used for intermittent sampling across extensive landscapes. Pixel coverage among images is influenced by small changes in airplane altitude above ground level (AGL) and orientation relative to the ground, as well as by changes in topography. These factors affect the object-to-camera distance used for image-resolution calculations. 'ImageMeasurement' offers a user-friendly interface for accounting for pixel-coverage variation among images by utilizing a database. 'LaserLOG' records and displays airplane altitude AGL measured from a high-frequency laser rangefinder, along with the vertical velocity. 'Merge' sorts through the large amounts of data generated by LaserLOG and matches precise airplane altitudes with camera trigger times for input to the ImageMeasurement database. We discuss application of these tools, including error estimates. We found that measurements from aerial images (collection resolution: 5-26 mm/pixel as projected on the ground) made using ImageMeasurement, LaserLOG, and Merge were accurate to within centimeters, with errors of less than 10%. We recommend these software packages as a means for expanding the utility of aerial image data.
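The object-to-camera distance enters the resolution calculation through the standard pinhole relation, ground sample distance = pixel pitch × range / focal length. The snippet below applies that relation and converts a pixel count to a ground length; the camera parameters are hypothetical and this is not the ImageMeasurement implementation.

```python
def ground_resolution_mm(range_to_target_m, focal_length_mm, pixel_pitch_um):
    """Approximate ground coverage of one pixel (mm/pixel) for a nadir view,
    using GSD = pixel pitch * range / focal length (illustrative only)."""
    pixel_pitch_mm = pixel_pitch_um / 1000.0
    range_mm = range_to_target_m * 1000.0
    return pixel_pitch_mm * range_mm / focal_length_mm

def object_length_cm(n_pixels, gsd_mm_per_pixel):
    """Convert a pixel count measured on the image to a ground length in cm."""
    return n_pixels * gsd_mm_per_pixel / 10.0

# Hypothetical flight: 100 m AGL, 100 mm lens, 7.2 um pixels -> ~7.2 mm/pixel
gsd = ground_resolution_mm(100.0, 100.0, 7.2)
print(round(gsd, 2), "mm/pixel;", object_length_cm(150, gsd), "cm for 150 px")
```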
NASA Astrophysics Data System (ADS)
Cassani, Mary Kay Kuhr
The objective of this study was to evaluate the effect of two pedagogical models used in general education science on non-majors' science teaching self-efficacy. Science teaching self-efficacy can be influenced by inquiry and cooperative learning, through cognitive mechanisms described by Bandura (1997). The Student Centered Activities for Large Enrollment Undergraduate Programs (SCALE-UP) model of inquiry and cooperative learning incorporates cooperative learning and inquiry-guided learning in large enrollment combined lecture-laboratory classes (Oliver-Hoyo & Beichner, 2004). SCALE-UP was adopted by a small but rapidly growing public university in the southeastern United States in three undergraduate, general education science courses for non-science majors in the Fall 2006 and Spring 2007 semesters. Students in these courses were compared with students in three other general education science courses for non-science majors taught with the standard teaching model at the host university. The standard model combines lecture and laboratory in the same course, with smaller enrollments and utilizes cooperative learning. Science teaching self-efficacy was measured using the Science Teaching Efficacy Belief Instrument - B (STEBI-B; Bleicher, 2004). A science teaching self-efficacy score was computed from the Personal Science Teaching Efficacy (PTSE) factor of the instrument. Using non-parametric statistics, no significant difference was found between teaching models, between genders, within models, among instructors, or among courses. The number of previous science courses was significantly correlated with PTSE score. Student responses to open-ended questions indicated that students felt the larger enrollment in the SCALE-UP room reduced individual teacher attention but that the large round SCALE-UP tables promoted group interaction. Students responded positively to cooperative and hands-on activities, and would encourage inclusion of more such activities in all of the courses. The large enrollment SCALE-UP model as implemented at the host university did not increase science teaching self-efficacy of non-science majors, as hypothesized. This was likely due to limited modification of standard cooperative activities according to the inquiry-guided SCALE-UP model. It was also found that larger SCALE-UP enrollments did not decrease science teaching self-efficacy when standard cooperative activities were used in the larger class.
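The abstract reports non-parametric between-model comparisons without naming the specific test; as one illustration only, a Mann-Whitney U comparison of hypothetical PTSE scores for the two teaching models could look like the following.

```python
from scipy import stats

# Hypothetical PTSE scores (STEBI-B personal efficacy subscale totals) for
# students in the two teaching models; purely illustrative data, not from
# the study, and the study's exact non-parametric test is not named above.
scale_up_ptse = [44, 47, 39, 51, 42, 46, 40, 48, 45, 43]
standard_ptse = [45, 41, 50, 38, 46, 44, 47, 42, 49, 40]

stat, p = stats.mannwhitneyu(scale_up_ptse, standard_ptse,
                             alternative="two-sided")
print(f"U = {stat}, p = {p:.3f}")  # with these toy data the groups do not differ
```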
Scale-up of hydrophobin-assisted recombinant protein production in tobacco BY-2 suspension cells.
Reuter, Lauri J; Bailey, Michael J; Joensuu, Jussi J; Ritala, Anneli
2014-05-01
Plant suspension cell cultures are emerging as an alternative to mammalian cells for production of complex recombinant proteins. Plant cell cultures provide low production cost, intrinsic safety, and adherence to current regulations, but low yields and costly purification technology hinder their commercialization. Fungal hydrophobins have been utilized as fusion tags to improve yields and facilitate efficient low-cost purification by surfactant-based aqueous two-phase separation (ATPS) in plant, fungal and insect cells. In this work, we report the utilization of hydrophobin fusion technology in the tobacco bright yellow 2 (BY-2) suspension cell platform and the establishment of pilot-scale propagation and downstream processing, including first-step purification by ATPS. A green fluorescent protein-hydrophobin fusion (GFP-HFBI) induced the formation of protein bodies in tobacco suspension cells, thus encapsulating the fusion protein into discrete compartments. Cultivation of the BY-2 suspension cells was scaled up in standard stirred tank bioreactors up to 600 L production volume, with no apparent change in growth kinetics. Subsequently, ATPS was applied to selectively capture the GFP-HFBI product from crude cell lysate, resulting in threefold concentration, good purity, and up to 60% recovery. The ATPS was scaled up to 20 L volume without loss of efficiency. This study provides the first proof of concept for large-scale hydrophobin-assisted production of recombinant proteins in tobacco BY-2 cell suspensions. © 2013 Society for Experimental Biology, Association of Applied Biologists and John Wiley & Sons Ltd.
Moricoli, Diego; Muller, William A.; Carbonella, Damiano Cosimo; Balducci, Maria Cristina; Dominici, Sabrina; Fiori, Valentina; Watson, Richard; Weber, Evan; Cianfriglia, Maurizio; Scotlandi, Katia; Magnani, Mauro
2015-01-01
Migration of leukocytes into a site of inflammation involves several steps mediated by various families of adhesion molecules. CD99 plays a significant role in transendothelial migration (TEM) of leukocytes. Inhibition of TEM by a specific monoclonal antibody (mAb) can provide a potent therapeutic approach to treating inflammatory conditions. However, the therapeutic use of whole IgG can lead to inappropriate activation of Fc receptor-expressing cells, inducing serious adverse side effects due to cytokine release. In this regard, specific recombinant antibodies in single-chain variable fragment (scFv) format derived from a phage library may offer a solution by affecting TEM function in a safe clinical context. This, however, requires large-scale production of functional scFv antibodies under GMP conditions and, hence, the avoidance of the toxic reagents used for the solubilization and refolding of inclusion bodies that may discourage industrial application of these antibody fragments. To apply the anti-CD99 scFv, named C7A, in a clinical setting, we herein describe efficient, large-scale production of the antibody fragment expressed in E. coli as insoluble protein, avoiding a gel filtration chromatography approach and laborious refolding steps pre- and post-purification. Using differential salt elution, a simple, reproducible, and effective procedure, we are able to separate scFv in monomer format from aggregates. The purified scFv antibody C7A exhibits inhibitory activity comparable to that of an antagonistic conventional mAb, thus providing an excellent agent for blocking CD99 signalling. Thanks to the original purification protocol, which can be extended to other scFvs expressed as inclusion bodies in bacterial systems, the anti-CD99 scFv C7A described herein represents a first step towards the construction of a new antibody therapeutic. PMID:24798881
NASA Astrophysics Data System (ADS)
Ajo Franklin, J. B.; Wagner, A. M.; Lindsey, N.; Dou, S.; Bjella, K.; Daley, T. M.; Freifeld, B. M.; Ulrich, C.; Gelvin, A.; Morales, A.; James, S. R.; Saari, S.; Ekblaw, I.; Wood, T.; Robertson, M.; Martin, E. R.
2016-12-01
In a warming world, permafrost landscapes are being rapidly transformed by thaw, yielding surface subsidence and groundwater flow alteration. The same transformations pose a threat to arctic infrastructure and can induce catastrophic failure of the roads, runways, and pipelines on which human habitation depends. Scalable solutions for monitoring permafrost thaw dynamics are required both to quantitatively understand biogeochemical feedbacks and to protect built infrastructure from damage. Unfortunately, permafrost alteration happens over the time scale of climate change, years to decades, a decided challenge for testing new sensing technologies in a limited context. One solution is to engineer systems capable of rapidly thawing large permafrost units to allow short-duration experiments targeting next-generation sensing approaches. We present preliminary results from a large-scale controlled permafrost thaw experiment designed to evaluate the utility of different geophysical approaches for tracking the cause, precursors, and early phases of thaw subsidence. We focus on the use of distributed fiber optic sensing for this challenge and deployed distributed temperature (DTS), strain (DSS), and acoustic (DAS) sensing systems in a 2D array to detect thaw signatures. A 10 x 15 x 1 m section of subsurface permafrost was heated using an array of 120 downhole heaters (60 W) at an experimental site near Fairbanks, AK. Ambient noise analysis of DAS datasets collected at the plot, coupled to shear wave inversion, was utilized to evaluate changes in shear wave velocity associated with heating and thaw. These measurements were confirmed by seismic surveys collected using a semi-permanent orbital seismic source activated on a daily basis. Fiber optic measurements were complemented by subsurface thermistor and thermocouple arrays, timelapse total station surveys, LIDAR, secondary seismic measurements (geophone and broadband recordings), timelapse ERT, borehole NMR, soil moisture measurements, hydrologic measurements, and multi-angle photogrammetry. This unusually dense combination of measurement techniques provides an excellent opportunity to characterize the geophysical signatures of permafrost thaw in a controlled environment.