Electricity distribution networks: Changing regulatory approaches
NASA Astrophysics Data System (ADS)
Cambini, Carlo
2016-09-01
Increasing the penetration of distributed generation and smart grid technologies requires substantial investments. A study proposes an innovative approach that combines four regulatory tools to provide economic incentives for distribution system operators to facilitate these innovative practices.
Distributed utility technology cost, performance, and environmental characteristics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wan, Y; Adelman, S
1995-06-01
Distributed Utility (DU) is an emerging concept in which modular generation and storage technologies sited near customer loads in distribution systems and specifically targeted demand-side management programs are used to supplement conventional central station generation plants to meet customer energy service needs. Research has shown that implementation of the DU concept could provide substantial benefits to utilities. This report summarizes the cost, performance, and environmental and siting characteristics of existing and emerging modular generation and storage technologies that are applicable under the DU concept. It is intended to be a practical reference guide for utility planners and engineers seeking information on DU technology options. This work was funded by the Office of Utility Technologies of the US Department of Energy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smallwood, D.O.
It is recognized that some dynamic and noise environments are characterized by time histories which are not Gaussian. An example is high intensity acoustic noise. Another example is some transportation vibration. A better simulation of these environments can be generated if a zero mean non-Gaussian time history can be reproduced with a specified auto (or power) spectral density (ASD or PSD) and a specified probability density function (pdf). After the required time history is synthesized, the waveform can be used for simulation purposes. For example, modern waveform reproduction techniques can be used to reproduce the waveform on electrodynamic or electrohydraulic shakers. Or the waveforms can be used in digital simulations. A method is presented for the generation of realizations of zero mean non-Gaussian random time histories with a specified ASD and pdf. First a Gaussian time history with the specified auto (or power) spectral density (ASD) is generated. A monotonic nonlinear function relating the Gaussian waveform to the desired realization is then established based on the Cumulative Distribution Function (CDF) of the desired waveform and the known CDF of a Gaussian waveform. The established function is used to transform the Gaussian waveform to a realization of the desired waveform. Since the transformation preserves the zero-crossings and peaks of the original Gaussian waveform, and does not introduce any substantial discontinuities, the ASD is not substantially changed. Several methods are available to generate a realization of a Gaussian distributed waveform with a known ASD. The method of Smallwood and Paez (1993) is an example. However, the generation of random noise with a specified ASD but with a non-Gaussian distribution is less well known.
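The CDF-mapping step described in this abstract can be sketched in a few lines. The sketch below is illustrative only: the band limits, PSD level and the Laplace target distribution are assumptions for demonstration, not values from the report.

```python
# Minimal sketch of the CDF-mapping idea: synthesize a Gaussian record with a
# prescribed one-sided PSD, then map it through a monotonic function built from
# the Gaussian CDF and the target (here Laplace) inverse CDF.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, fs = 2**14, 1024.0                          # samples and sample rate (assumed)
freqs = np.fft.rfftfreq(n, 1.0 / fs)

# Example target PSD: flat between 20 and 200 Hz (illustrative only)
psd = np.where((freqs >= 20) & (freqs <= 200), 1.0, 0.0)

# 1) Gaussian realization with that PSD: random phases, amplitudes ~ sqrt(PSD),
#    scaled for numpy's irfft normalization of a one-sided spectrum
amp = np.sqrt(psd * n * fs / 2.0)
spec = amp * np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, freqs.size))
x = np.fft.irfft(spec, n)                      # zero-mean, approximately Gaussian
sigma = x.std()

# 2) Monotonic map: Gaussian CDF -> target CDF (heavier-tailed Laplace example)
target = stats.laplace(scale=sigma / np.sqrt(2))   # same variance as x
u = stats.norm.cdf(x, loc=0.0, scale=sigma)        # Gaussian CDF of each sample
y = target.ppf(u)                                  # realization with the target pdf

# 3) The map is monotonic, so zero crossings and peak ordering are preserved and
#    the PSD of y stays close to the prescribed one.
print(stats.kurtosis(x), stats.kurtosis(y))        # y shows clear excess kurtosis
```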
Semi-inclusive polarised lepton-nucleon scattering and the anomalous gluon contribution
NASA Astrophysics Data System (ADS)
Güllenstern, St.; Veltri, M.; Górnicki, P.; Mankiewicz, L.; Schäfer, A.
1993-08-01
We discuss a new observable for semi-inclusive pion production in polarised lepton-nucleon collisions. This observable is sensitive to the polarised and unpolarised strange quark distribution and the anomalous gluon contribution, provided that their fragmentation functions into pions differ substantially from that of light quarks. From Monte Carlo data generated with our PEPSI code we conclude that HERMES might be able to decide whether the polarized strange quark and gluon distributions are large.
ERIC Educational Resources Information Center
Wisniewski, Janusz L.
1986-01-01
Discussion of a new method of index term dictionary compression in an inverted-file-oriented database highlights a technique of word coding, which generates short fixed-length codes obtained from the index terms themselves by analysis of monogram and bigram statistical distributions. Substantial savings in communication channel utilization are…
Methods and apparatuses for information analysis on shared and distributed computing systems
Bohn, Shawn J [Richland, WA; Krishnan, Manoj Kumar [Richland, WA; Cowley, Wendy E [Richland, WA; Nieplocha, Jarek [Richland, WA
2011-02-22
Apparatuses and computer-implemented methods for analyzing, on shared and distributed computing systems, information comprising one or more documents are disclosed according to some aspects. In one embodiment, information analysis can comprise distributing one or more distinct sets of documents among each of a plurality of processes, wherein each process performs operations on a distinct set of documents substantially in parallel with other processes. Operations by each process can further comprise computing term statistics for terms contained in each distinct set of documents, thereby generating a local set of term statistics for each distinct set of documents. Still further, operations by each process can comprise contributing the local sets of term statistics to a global set of term statistics, and participating in generating a major term set from an assigned portion of a global vocabulary.
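A rough illustration of the distribute, compute locally, then merge pattern this abstract describes, using Python's multiprocessing in place of the patented apparatus; the function names and the tiny document sets are invented.

```python
# Hedged sketch: partition documents across processes, compute local term
# statistics in parallel, merge them into a global set, and take a "major"
# term set. This is an illustration of the pattern, not the patented system.
from collections import Counter
from multiprocessing import Pool

def local_term_stats(docs):
    """Term frequencies for one distinct set of documents."""
    counts = Counter()
    for doc in docs:
        counts.update(doc.lower().split())
    return counts

def major_term_set(doc_sets, top_k=5, workers=2):
    with Pool(workers) as pool:
        local_stats = pool.map(local_term_stats, doc_sets)  # local statistics, in parallel
    global_stats = Counter()
    for stats in local_stats:                               # contribute to the global set
        global_stats.update(stats)
    return [term for term, _ in global_stats.most_common(top_k)]

if __name__ == "__main__":
    sets = [["the grid needs storage", "storage and generation"],
            ["distributed generation on the grid", "generation planning"]]
    print(major_term_set(sets))
```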
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-03
..., gas, and hydrogen pipelines and electric transmission and distribution facilities are located outside... Currant Creek would not be mined until Oak Mesa was mined out. Oil and gas resources were another issue that generated substantial public input. Colorado has 8% of all dry natural gas reserves in the U.S...
Super-resolving random-Gaussian apodized photon sieve.
Sabatyan, Arash; Roshaninejad, Parisa
2012-09-10
A novel apodized photon sieve is presented in which a random dense Gaussian distribution is implemented to modulate the pinhole density in each zone. The randomness of the dense Gaussian distribution causes intrazone discontinuities. Also, the dense Gaussian distribution generates a substantial number of pinholes in order to form a large degree of overlap between the holes in a few innermost zones of the photon sieve; thereby, clear zones are formed. The role of the discontinuities in the focusing properties of the photon sieve is examined as well. Analysis shows that secondary maxima have evidently been suppressed, transmission has increased enormously, and the width of the central maximum is approximately unchanged in comparison to the dense Gaussian distribution. Theoretical results have been completely verified by experiment.
Tongonani geothermal power development, Philippines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Minson, A.A.C.; Fry, T.J.; Kivell, J.A.
1985-01-01
This paper describes the features, design and construction of a 112 MWe geothermal power project, representing the first stage development of the substantial geothermal resources of the central Philippine region. The project has been undertaken by the Philippine Government. The National Power Corporation is responsible for generation and distribution facilities and the Philippine National Oil Company Energy Development Corporation is responsible for controlled delivery of steam to the power station.
Temperature Insensitive and Radiation Hard Photonics
2014-03-19
Approved for Public Release; distribution is unlimited. [Report front matter and list of figures omitted; Figure 1: OTDM Pulse Multiplexer for Increasing the Output Repetition Rate.] ...(QDMLL) for use in extreme environments where ionizing radiation is a substantial threat. Mode-locked lasers generate a train of optical pulses that have
Three types of solid state remote power controllers
NASA Technical Reports Server (NTRS)
Baker, D. E.
1975-01-01
Three types of solid state Remote Power Controller (RPC) circuits for 120 Vdc spacecraft distribution systems have been developed and evaluated. Both current-limiting and non-current-limiting modes of overload protection were developed and demonstrated to be feasible. A second generation of circuits was developed which offers comparable performance with substantially less cost and complexity. Electrical efficiency for both generations is 98.5 to 99%. This paper describes various aspects of the circuit design, trade-off studies, and experimental test results. Comparisons of design parameters, component requirements, and engineering model evaluations will emphasize the high efficiency and reliability of the designs.
Optimal Interpolation scheme to generate reference crop evapotranspiration
NASA Astrophysics Data System (ADS)
Tomas-Burguera, Miquel; Beguería, Santiago; Vicente-Serrano, Sergio; Maneta, Marco
2018-05-01
We used an Optimal Interpolation (OI) scheme to generate a reference crop evapotranspiration (ETo) grid, forcing meteorological variables, and their respective error variance in the Iberian Peninsula for the period 1989-2011. To perform the OI we used observational data from the Spanish Meteorological Agency (AEMET) and outputs from a physically-based climate model. To compute ETo we used five OI schemes to generate grids for the five observed climate variables necessary to compute ETo using the FAO-recommended form of the Penman-Monteith equation (FAO-PM). The granularity of the resulting grids is less sensitive to variations in the density and distribution of the observational network than that of grids generated by other interpolation methods. This is because our implementation of the OI method uses a physically-based climate model as prior background information about the spatial distribution of the climatic variables, which is critical for under-observed regions. This provides temporal consistency in the spatial variability of the climatic fields. We also show that increases in the density and improvements in the distribution of the observational network substantially reduce the uncertainty of the climatic and ETo estimates. Finally, a sensitivity analysis of observational uncertainties and network densification suggests the existence of a trade-off between quantity and quality of observations.
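For readers unfamiliar with Optimal Interpolation, a minimal one-dimensional OI analysis step looks roughly like the sketch below; the covariance shape, length scale, error values and data are assumptions for illustration, not the study's settings.

```python
# Minimal OI sketch for one analysis time: a model background field is corrected
# by a few station observations; the posterior error variance falls where
# observations constrain the field.
import numpy as np

ngrid = 50
x = np.linspace(0.0, 100.0, ngrid)              # 1-D "grid" coordinate
background = 20.0 + 0.05 * x                    # prior field from the climate model
obs_pos = np.array([10.0, 45.0, 80.0])
obs_val = np.array([22.5, 21.0, 26.0])          # station observations (invented)

def gauss_cov(d, sigma2=4.0, L=15.0):
    """Gaussian background-error covariance vs. distance (assumed shape)."""
    return sigma2 * np.exp(-0.5 * (d / L) ** 2)

B_HT = gauss_cov(np.abs(x[:, None] - obs_pos[None, :]))        # B H^T  (ngrid x nobs)
HBHT = gauss_cov(np.abs(obs_pos[:, None] - obs_pos[None, :]))  # H B H^T
R = 0.5 * np.eye(len(obs_pos))                                 # observation error covariance

K = B_HT @ np.linalg.inv(HBHT + R)                    # OI gain
innov = obs_val - np.interp(obs_pos, x, background)   # innovations y - H x_b
analysis = background + K @ innov                     # analysis field x_a
analysis_var = gauss_cov(0.0) - np.sum(K * B_HT, axis=1)   # posterior error variance
print(analysis[:5], analysis_var[:5])
```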
Kye, Bongoh; Mare, Robert D.
2014-01-01
This study examines the intergenerational effects of changes in women's education in South Korea. We define intergenerational effects as changes in the distribution of educational attainment in an offspring generation associated with the changes in a parental generation. Departing from the previous approach in research on social mobility that has focused on intergenerational association, we examine the changes in the distribution of educational attainment across generations. Using a simulation method based on Mare and Maralani's recursive population renewal model, we examine how intergenerational transmission, assortative mating, and differential fertility influence intergenerational effects. The results point to the following conclusions. First, we find a positive intergenerational effect: improvement in women's education leads to improvement in daughter's education. Second, we find that the magnitude of intergenerational effects substantially depends on assortative marriage and differential fertility: assortative mating amplifies and differential fertility dampens the intergenerational effects. Third, intergenerational effects become bigger for the less educated and smaller for the better educated over time, which is a consequence of educational expansion. We compare our results with Mare and Maralani's original Indonesian study to illustrate how the model of intergenerational effects works in different socioeconomic circumstances. PMID:23017970
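The kind of bookkeeping behind such population-renewal projections can be illustrated with a toy calculation: an offspring education distribution follows from a parental distribution, a transmission matrix, and differential fertility. All numbers below are invented for illustration and are not the study's estimates.

```python
# Toy one-step population renewal: parental education distribution ->
# offspring education distribution, with differential fertility by education.
import numpy as np

levels = ["low", "medium", "high"]
mothers = np.array([0.5, 0.3, 0.2])          # parental education distribution (assumed)

# transmission[i, j] = Pr(child attains level j | mother at level i)  (assumed values)
transmission = np.array([[0.6, 0.3, 0.1],
                         [0.2, 0.5, 0.3],
                         [0.1, 0.3, 0.6]])
fertility = np.array([2.2, 1.8, 1.4])        # children per mother, by education (assumed)

births = mothers * fertility                 # offspring contributed by each group
offspring = births @ transmission            # unnormalized offspring distribution
offspring /= offspring.sum()

print(dict(zip(levels, np.round(offspring, 3))))
# Raising mothers' education shifts `mothers`, which propagates to `offspring`;
# higher fertility among the less educated dampens the shift, as the abstract notes.
```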
System and method for generating steady state confining current for a toroidal plasma fusion reactor
Fisch, Nathaniel J.
1981-01-01
A system for generating steady state confining current for a toroidal plasma fusion reactor providing steady-state generation of the thermonuclear power. A dense, hot toroidal plasma is initially prepared with a confining magnetic field with toroidal and poloidal components. Continuous wave RF energy is injected into said plasma to establish a spectrum of traveling waves in the plasma, where the traveling waves have momentum components substantially either all parallel, or all anti-parallel to the confining magnetic field. The injected RF energy is phased to couple to said traveling waves with both a phase velocity component and a wave momentum component in the direction of the plasma traveling wave components. The injected RF energy has a predetermined spectrum selected so that said traveling waves couple to plasma electrons having velocities in a predetermined range .DELTA.. The velocities in the range are substantially greater than the thermal electron velocity of the plasma. In addition, the range is sufficiently broad to produce a raised plateau having width .DELTA. in the plasma electron velocity distribution so that the plateau electrons provide steady-state current to generate a poloidal magnetic field component sufficient for confining the plasma. In steady state operation of the fusion reactor, the fusion power density in the plasma exceeds the power dissipated in the plasma.
System and method for generating steady state confining current for a toroidal plasma fusion reactor
Bers, Abraham
1981-01-01
A system for generating steady state confining current for a toroidal plasma fusion reactor providing steady-state generation of the thermonuclear power. A dense, hot toroidal plasma is initially prepared with a confining magnetic field with toroidal and poloidal components. Continuous wave RF energy is injected into said plasma to establish a spectrum of traveling waves in the plasma, where the traveling waves have momentum components substantially either all parallel, or all anti-parallel to the confining magnetic field. The injected RF energy is phased to couple to said traveling waves with both a phase velocity component and a wave momentum component in the direction of the plasma traveling wave components. The injected RF energy has a predetermined spectrum selected so that said traveling waves couple to plasma electrons having velocities in a predetermined range .DELTA.. The velocities in the range are substantially greater than the thermal electron velocity of the plasma. In addition, the range is sufficiently broad to produce a raised plateau having width .DELTA. in the plasma electron velocity distribution so that the plateau electrons provide steady-state current to generate a poloidal magnetic field component sufficient for confining the plasma. In steady state operation of the fusion reactor, the fusion power density in the plasma exceeds the power dissipated in the plasma.
Imam, Neena; Barhen, Jacob
2009-01-01
For real-time acoustic source localization applications, one of the primary challenges is the considerable growth in computational complexity associated with the emergence of ever larger, active or passive, distributed sensor networks. These sensors rely heavily on battery-operated system components to achieve highly functional automation in signal and information processing. In order to keep communication requirements minimal, it is desirable to perform as much processing on the receiver platforms as possible. However, the complexity of the calculations needed to achieve accurate source localization increases dramatically with the size of sensor arrays, resulting in substantial growth of computational requirements that cannot be readily met with standard hardware. One option to meet this challenge builds upon the emergence of digital optical-core devices. The objective of this work was to explore the implementation of key building block algorithms used in underwater source localization on the optical-core digital processing platform recently introduced by Lenslet Inc. This demonstration of considerably faster signal processing capability should be of substantial significance to the design and innovation of future generations of distributed sensor networks.
Private and public consumption across generations in Australia.
Rice, James M; Temple, Jeromey B; McDonald, Peter F
2017-12-01
To investigate intergenerational equity in consumption using the Australian National Transfer Accounts (NTA). Australian NTA estimates of consumption were used to investigate disparities in consumption between people of different ages and generations in Australia between 1981-1982 and 2009-2010. There is a clear patterning of consumption by age, with the distribution by age of consumption funded by the private sector being very different to that of consumption funded by the public sector. Australians have achieved notable equality in total consumption among people between the ages of 20 and 75 years. Substantial disparities exist, however, between different generations, with earlier generations experiencing lower levels of total consumption in real terms at particular ages than later generations. An accurate picture of intergenerational equity in consumption requires consideration of both cohorts and cross sections, as well as consumption funded by both the public and the private sectors. © 2017 AJA Inc.
Identification of nuclear weapons
Mihalczo, J.T.; King, W.T.
1987-04-10
A method and apparatus for non-invasively identifying different types of nuclear weapons is disclosed. A neutron generator is placed against the weapon to generate a stream of neutrons causing fissioning within the weapon. A first detector detects the generation of the neutrons and produces a signal indicative thereof. A second particle detector located on the opposite side of the weapon detects the fission particles and produces signals indicative thereof. The signals are converted into a detected pattern and a computer compares the detected pattern with known patterns of weapons and indicates which known weapon has a substantially similar pattern. Either a time distribution pattern or noise analysis pattern, or both, is used. Gamma-neutron discrimination and a third particle detector for fission particles adjacent the second particle detector are preferably used. The neutrons are generated by either a decay neutron source or a pulsed neutron particle accelerator.
Alsharif, Ala'a; Kruger, Estie; Tennant, Marc
2012-10-01
Over the past twenty-five years, there has been a substantial increase in work-based demands, thought to be due to an intensifying, competitive work environment. However, more recently, the question of work-life balance is increasingly attracting attention. The purpose of this study was to discover the attitudes of the next generation of dentists in Australia to parenting responsibility and work-life balance perceptions. Questionnaires on work-life balance were distributed to all fourth-year students at three dental schools in Australia. A total of 137 (76 percent) surveys were completed and returned. Most respondents indicated that they would take time off to focus on childcare, and just over half thought childcare should be shared by both parents. Thirty-seven percent felt that a child would have a considerable effect on their careers. Differences were seen in responses when compared by gender. The application of sensitivity analysis to workforce calculations based around changing societal work-life expectations can have substantial effects on predicting workforce data a decade into the future. It is not just the demographic change to a more feminized workforce in Australia that can have a substantial effect, but also the change in social expectations of males with regard to parenting.
The use of solution adaptive grids in solving partial differential equations
NASA Technical Reports Server (NTRS)
Anderson, D. A.; Rai, M. M.
1982-01-01
The grid point distribution used in solving a partial differential equation using a numerical method has a substantial influence on the quality of the solution. An adaptive grid which adjusts as the solution changes provides the best results when the number of grid points available for use during the calculation is fixed. Basic concepts used in generating and applying adaptive grids are reviewed in this paper, and examples illustrating applications of these concepts are presented.
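One standard adaptive-grid idea consistent with this review, though not a specific scheme from the paper, is equidistribution of a gradient-based weight over a fixed number of points. A minimal 1-D sketch, with the monitor function and its scaling chosen purely for illustration:

```python
# Redistribute a fixed number of 1-D grid points so that a weight based on the
# solution gradient is equidistributed: points cluster where the solution varies fast.
import numpy as np

n = 41
x = np.linspace(0.0, 1.0, n)                 # initial uniform grid
u = np.tanh(25.0 * (x - 0.5))                # solution with a sharp interior layer

w = 1.0 + 10.0 * np.abs(np.gradient(u, x))   # monitor/weight function (alpha = 10 assumed)
s = np.concatenate(([0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))))
s /= s[-1]                                   # normalized cumulative weight in [0, 1]

# New grid: invert the cumulative weight at equally spaced levels
x_new = np.interp(np.linspace(0.0, 1.0, n), s, x)
print(np.diff(x_new).min(), np.diff(x_new).max())   # spacing shrinks near the layer
```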
Efficient and lightweight current leads
NASA Astrophysics Data System (ADS)
Bromberg, L.; Dietz, A. J.; Michael, P. C.; Gold, C.; Cheadle, M.
2014-01-01
Current leads generate substantial cryogenic heat loads in short-length High Temperature Superconductor (HTS) distribution systems. Thermal conduction, as well as Joule losses (I²R) along the current leads, comprises the largest cryogenic load for short distribution systems. Current leads with two temperature stages have been designed, constructed and tested, with the goal of minimizing the electrical power consumption and providing thermal margin for the cable. We present the design of a two-stage current lead system, operating at 140 K and 55 K. This design is very attractive when implemented with a turbo-Brayton cycle refrigerator (two-stage), with substantial power and weight reduction. A heat exchanger is used at each temperature station, with conduction-cooled stages in-between. Compact, efficient heat exchangers are challenging because of the gaseous coolant. Design, optimization and performance of the heat exchangers used for the current leads will be presented. We have made extensive use of CFD models for optimizing hydraulic and thermal performance of the heat exchangers. The methodology and the results of the optimization process will be discussed. The use of demountable connections between the cable and the terminations allows for ease of assembly, but requires a means of aggressively cooling the region of the joint. We will also discuss the cooling of the joint. We have fabricated a 7 m, 5 kA cable with second generation HTS tapes. The performance of the system will be described.
Intergenerational aspects of government policy under changing demographic and economic conditions.
Boskin, M J
1987-07-01
Changing demographic and economic conditions in the US require that attention be given to some of the intergenerational equity features of government policy. In particular, social insurance programs and public debt leave public liabilities to future generations. Taken in the aggregate, the effects of rapidly rising public debt and especially social insurance programs are transferring substantial amounts of resources from younger working generations to the expanding generation of retirees. The most crucial element in evaluating the desirability of intergenerational wealth distribution in the long run is the rate of economic growth. A society's monetary, fiscal, tax, and regulatory policies can be more or less conducive to the generation of capital formation, technical change, and economic growth. Policies that influence growth and interest rates will combine with the national deficit to determine how rapidly the debt grows or shrinks. Present accounting procedures are insufficient to provide quantitative answers to the question of what is the impact of a given program on the age-specific distributions of resources. It is important to reconsider the desirability and efficiency of intergenerational redistributions of wealth in the US. It is likely that current policies are not in line with the principles of efficiency, equity, target effectiveness, and cost effectiveness.
LA-iMageS: a software for elemental distribution bioimaging using LA-ICP-MS data.
López-Fernández, Hugo; de S Pessôa, Gustavo; Arruda, Marco A Z; Capelo-Martínez, José L; Fdez-Riverola, Florentino; Glez-Peña, Daniel; Reboiro-Jato, Miguel
2016-01-01
The spatial distribution of chemical elements in different types of samples is an important field in several research areas such as biology, paleontology or biomedicine, among others. Elemental distribution imaging by laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) is an effective technique for qualitative and quantitative imaging due to its high spatial resolution and sensitivity. By applying this technique, vast amounts of raw data are generated to obtain high-quality images, essentially making the use of specific LA-ICP-MS imaging software that can process such data absolutely mandatory. Since existing solutions are usually commercial or hard-to-use for average users, this work introduces LA-iMageS, an open-source, free-to-use multiplatform application for fast and automatic generation of high-quality elemental distribution bioimages from LA-ICP-MS data in the PerkinElmer Elan XL format, whose results can be directly exported to external applications for further analysis. A key strength of LA-iMageS is its substantial added value for users, with particular regard to the customization of the elemental distribution bioimages, which allows, among other features, the ability to change color maps, increase image resolution or toggle between 2D and 3D visualizations.
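Stripped of file parsing and calibration, the core image-building step that such software automates amounts to arranging line-scan intensities into a 2-D array and normalizing it. A hedged sketch with synthetic data standing in for parsed LA-ICP-MS output:

```python
# Each ablation line becomes one image row of element intensities; the random
# array below is a stand-in for parsed raw data, not the Elan XL format.
import numpy as np
import matplotlib.pyplot as plt

n_lines, n_points = 40, 120                          # scan geometry (assumed)
rng = np.random.default_rng(1)
raw = rng.poisson(50, size=(n_lines, n_points)).astype(float)
raw[15:25, 40:80] += 400                             # a region enriched in the element

image = raw / raw.max()                              # simple normalization
plt.imshow(image, cmap="viridis", aspect="auto")
plt.colorbar(label="normalized intensity")
plt.title("Elemental distribution (illustrative data)")
plt.savefig("element_map.png", dpi=150)
```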
Phase-Reference-Free Experiment of Measurement-Device-Independent Quantum Key Distribution
NASA Astrophysics Data System (ADS)
Wang, Chao; Song, Xiao-Tian; Yin, Zhen-Qiang; Wang, Shuang; Chen, Wei; Zhang, Chun-Mei; Guo, Guang-Can; Han, Zheng-Fu
2015-10-01
Measurement-device-independent quantum key distribution (MDI QKD) is a substantial step toward practical information-theoretic security for key sharing between remote legitimate users (Alice and Bob). As with other standard device-dependent quantum key distribution protocols, such as BB84, MDI QKD assumes that the reference frames have been shared between Alice and Bob. In practice, a nontrivial alignment procedure is often necessary, which requires system resources and may significantly reduce the secure key generation rate. Here, we propose a phase-coding reference-frame-independent MDI QKD scheme that requires no phase alignment between the interferometers of two distant legitimate parties. As a demonstration, a proof-of-principle experiment using Faraday-Michelson interferometers is presented. The experimental system worked at 1 MHz, and an average secure key rate of 8.309 bps was obtained at a fiber length of 20 km between Alice and Bob. The system can maintain a positive key generation rate without phase compensation under normal conditions. The results exhibit the feasibility of our system for use in mature MDI QKD devices and its value for network scenarios.
Apparatuses and methods for generating electric fields
Scott, Jill R; McJunkin, Timothy R; Tremblay, Paul L
2013-08-06
Apparatuses and methods relating to generating an electric field are disclosed. An electric field generator may include a semiconductive material configured in a physical shape substantially different from a shape of an electric field to be generated thereby. The electric field is generated when a voltage drop exists across the semiconductive material. A method for generating an electric field may include applying a voltage to a shaped semiconductive material to generate a complex, substantially nonlinear electric field. The shape of the complex, substantially nonlinear electric field may be configured for directing charged particles to a desired location. Other apparatuses and methods are disclosed.
Kye, Bongoh; Mare, Robert D
2012-11-01
This study examines the intergenerational effects of changes in women's education in South Korea. We define intergenerational effects as changes in the distribution of educational attainment in an offspring generation associated with the changes in a parental generation. Departing from the previous approach in research on social mobility that has focused on intergenerational association, we examine the changes in the distribution of educational attainment across generations. Using a simulation method based on Mare and Maralani's recursive population renewal model, we examine how intergenerational transmission, assortative mating, and differential fertility influence intergenerational effects. The results point to the following conclusions. First, we find a positive intergenerational effect: improvement in women's education leads to improvement in daughter's education. Second, we find that the magnitude of intergenerational effects substantially depends on assortative marriage and differential fertility: assortative mating amplifies and differential fertility dampens the intergenerational effects. Third, intergenerational effects become bigger for the less educated and smaller for the better educated over time, which is a consequence of educational expansion. We compare our results with Mare and Maralani's original Indonesian study to illustrate how the model of intergenerational effects works in different socioeconomic circumstances. Copyright © 2012 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tweedie, A.; Doris, E.
Establishing interconnection to the grid is a recognized barrier to the deployment of distributed energy generation. This report compares interconnection processes for photovoltaic projects in California and Germany. This report summarizes the steps of the interconnection process for developers and utilities, the average length of time utilities take to process applications, and paperwork required of project developers. Based on a review of the available literature, this report finds that while the interconnection procedures and timelines are similar in California and Germany, differences in the legal and regulatory frameworks are substantial.
Benefit transfer and spatial heterogeneity of preferences for water quality improvements.
Martin-Ortega, J; Brouwer, R; Ojea, E; Berbel, J
2012-09-15
The improvement in the water quality resulting from the implementation of the EU Water Framework Directive is expected to generate substantial non-market benefits. A wide spread estimation of these benefits across Europe will require the application of benefit transfer. We use a spatially explicit valuation design to account for the spatial heterogeneity of preferences to help generate lower transfer errors. A map-based choice experiment is applied in the Guadalquivir River Basin (Spain), accounting simultaneously for the spatial distribution of water quality improvements and beneficiaries. Our results show that accounting for the spatial heterogeneity of preferences generally produces lower transfer errors. Copyright © 2012 Elsevier Ltd. All rights reserved.
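The transfer error conventionally used to judge benefit transfer can be shown with a quick worked example; the willingness-to-pay values below are made up, not the study's estimates.

```python
# Relative transfer error between a transferred value and the value observed
# at the policy site (illustrative numbers only).
wtp_policy_site = 38.0     # EUR/household/year observed at the policy site
wtp_transferred = 45.0     # value transferred from the study site

transfer_error = abs(wtp_transferred - wtp_policy_site) / wtp_policy_site
print(f"transfer error = {transfer_error:.1%}")   # ~18.4%
```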
Wang, Zhiping; Chen, Jinyu; Yu, Benli
2017-02-20
We investigate the two-dimensional (2D) and three-dimensional (3D) atom localization behaviors via spontaneously generated coherence in a microwave-driven four-level atomic system. Owing to the space-dependent atom-field interaction, it is found that the detecting probability and precision of 2D and 3D atom localization behaviors can be significantly improved via adjusting the system parameters, the phase, amplitude, and initial population distribution. Interestingly, the atom can be localized in volumes that are substantially smaller than a cubic optical wavelength. Our scheme opens a promising way to achieve high-precision and high-efficiency atom localization, which provides some potential applications in high-dimensional atom nanolithography.
Development of a biomechanical energy harvester.
Li, Qingguo; Naing, Veronica; Donelan, J Maxwell
2009-06-23
Biomechanical energy harvesting-generating electricity from people during daily activities-is a promising alternative to batteries for powering increasingly sophisticated portable devices. We recently developed a wearable knee-mounted energy harvesting device that generated electricity during human walking. In this methods-focused paper, we explain the physiological principles that guided our design process and present a detailed description of our device design with an emphasis on new analyses. Effectively harvesting energy from walking requires a small lightweight device that efficiently converts intermittent, bi-directional, low speed and high torque mechanical power to electricity, and selectively engages power generation to assist muscles in performing negative mechanical work. To achieve this, our device used a one-way clutch to transmit only knee extension motions, a spur gear transmission to amplify the angular speed, a brushless DC rotary magnetic generator to convert the mechanical power into electrical power, a control system to determine when to open and close the power generation circuit based on measurements of knee angle, and a customized orthopaedic knee brace to distribute the device reaction torque over a large leg surface area. The device selectively engaged power generation towards the end of swing extension, assisting knee flexor muscles by producing substantial flexion torque (6.4 Nm), and efficiently converted the input mechanical power into electricity (54.6%). Consequently, six subjects walking at 1.5 m/s generated 4.8 +/- 0.8 W of electrical power with only a 5.0 +/- 21 W increase in metabolic cost. Biomechanical energy harvesting is capable of generating substantial amounts of electrical power from walking with little additional user effort making future versions of this technology particularly promising for charging portable medical devices.
Development of a biomechanical energy harvester
Li, Qingguo; Naing, Veronica; Donelan, J Maxwell
2009-01-01
Background Biomechanical energy harvesting–generating electricity from people during daily activities–is a promising alternative to batteries for powering increasingly sophisticated portable devices. We recently developed a wearable knee-mounted energy harvesting device that generated electricity during human walking. In this methods-focused paper, we explain the physiological principles that guided our design process and present a detailed description of our device design with an emphasis on new analyses. Methods Effectively harvesting energy from walking requires a small lightweight device that efficiently converts intermittent, bi-directional, low speed and high torque mechanical power to electricity, and selectively engages power generation to assist muscles in performing negative mechanical work. To achieve this, our device used a one-way clutch to transmit only knee extension motions, a spur gear transmission to amplify the angular speed, a brushless DC rotary magnetic generator to convert the mechanical power into electrical power, a control system to determine when to open and close the power generation circuit based on measurements of knee angle, and a customized orthopaedic knee brace to distribute the device reaction torque over a large leg surface area. Results The device selectively engaged power generation towards the end of swing extension, assisting knee flexor muscles by producing substantial flexion torque (6.4 Nm), and efficiently converted the input mechanical power into electricity (54.6%). Consequently, six subjects walking at 1.5 m/s generated 4.8 ± 0.8 W of electrical power with only a 5.0 ± 21 W increase in metabolic cost. Conclusion Biomechanical energy harvesting is capable of generating substantial amounts of electrical power from walking with little additional user effort making future versions of this technology particularly promising for charging portable medical devices. PMID:19549313
A Collaborative Neurodynamic Approach to Multiple-Objective Distributed Optimization.
Yang, Shaofu; Liu, Qingshan; Wang, Jun
2018-04-01
This paper is concerned with multiple-objective distributed optimization. Based on objective weighting and decision space decomposition, a collaborative neurodynamic approach to multiobjective distributed optimization is presented. In the approach, a system of collaborative neural networks is developed to search for Pareto optimal solutions, where each neural network is associated with one objective function and given constraints. Sufficient conditions are derived for ascertaining the convergence to a Pareto optimal solution of the collaborative neurodynamic system. In addition, it is proved that each connected subsystem can generate a Pareto optimal solution when the communication topology is disconnected. Then, a switching-topology-based method is proposed to compute multiple Pareto optimal solutions for discretized approximation of Pareto front. Finally, simulation results are discussed to substantiate the performance of the collaborative neurodynamic approach. A portfolio selection application is also given.
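The objective-weighting idea can be illustrated with a plain weighted-sum scalarization swept over weights, here solved with a generic optimizer rather than the paper's neurodynamic networks; the two convex objectives are invented for illustration.

```python
# Sweep scalarization weights and solve each weighted problem to trace (part of)
# a Pareto front for a convex bi-objective example.
import numpy as np
from scipy.optimize import minimize

f1 = lambda x: (x[0] - 1.0) ** 2 + x[1] ** 2        # objective 1
f2 = lambda x: x[0] ** 2 + (x[1] - 2.0) ** 2        # objective 2

pareto_points = []
for w in np.linspace(0.0, 1.0, 11):                  # objective weights
    res = minimize(lambda x: w * f1(x) + (1 - w) * f2(x), x0=[0.0, 0.0])
    pareto_points.append((f1(res.x), f2(res.x)))

for p in pareto_points[::5]:
    print(tuple(round(v, 3) for v in p))             # trade-off between f1 and f2
```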
NASA's Vision for Potential Energy Reduction from Future Generations of Propulsion Technology
NASA Technical Reports Server (NTRS)
Haller, Bill
2015-01-01
Through a robust partnership with the aviation industry, over the past 50 years NASA programs have helped foster advances in propulsion technology that enabled substantial reductions in fuel consumption for commercial transports. Emerging global trends and continuing environmental concerns are creating challenges that will very likely transform the face of aviation over the next 20-40 years. In recognition of this development, NASA Aeronautics has established a set of Research Thrusts that will help define the future direction of the agency's research and technology efforts. Two of these thrusts, Ultra-Efficient Commercial Vehicles and Transition to Low-Carbon Propulsion, serve as cornerstones for the Advanced Air Transport Technology (AATT) project. The AATT project is exploring and developing high-payoff technologies and concepts that are key to continued improvement in energy efficiency and environmental compatibility for future generations of fixed-wing, subsonic transports. The AATT project is primarily focused on the N+3 timeframe, or 3 generations from current technology levels. As should be expected, many of the propulsion system architectures and technologies envisioned for N+3 vary significantly from today's engines. The use of batteries in a hybrid-electric configuration or deploying multiple fans distributed across the airframe to enable higher bypass ratios are just two examples of potential advances that could enable substantial energy reductions over current propulsion systems.
Yang, Jinjian; Wu, Qijia; Xiao, Rong; Zhao, Jupeng; Chen, Jian; Jiao, Xiaoguo
2018-04-01
Variations in species morphology and life-history traits strongly correlate with geographic and climatic characteristics. Most studies on morphological variations in animals focus on ectotherms distributed on a large geographic scale across latitudinal and/or altitudinal gradients. However, the morphological variations of spiders living in the same habitats across different seasons have not been reported. In this study, we used the wolf spider, Pardosa astrigera, as a model to determine seasonal differences in adult body size, melanism, fecundity, and egg diameter in both the overwintering and the first generation for 2010 and 2016. The results showed that in 2010, both females and males of the overwintering generation were significantly darker than those of the first generation. Moreover, the overwintering females were markedly larger and produced more and bigger eggs than the first-generation females in both 2010 and 2016. Considering that the overwintering P. astrigera experience low temperature and/or desiccation stress, these results suggest that the substantially darker and larger bodies of the overwintering generation are adaptive to adverse conditions.
Microscale air quality impacts of distributed power generation facilities.
Olaguer, Eduardo P; Knipping, Eladio; Shaw, Stephanie; Ravindran, Satish
2016-08-01
The electric system is experiencing rapid growth in the adoption of a mix of distributed renewable and fossil fuel sources, along with increasing amounts of off-grid generation. New operational regimes may have unforeseen consequences for air quality. A three-dimensional microscale chemical transport model (CTM) driven by an urban wind model was used to assess gaseous air pollutant and particulate matter (PM) impacts within ~10 km of fossil-fueled distributed power generation (DG) facilities during the early afternoon of a typical summer day in Houston, TX. Three types of DG scenarios were considered in the presence of motor vehicle emissions and a realistic urban canopy: (1) a 25-MW natural gas turbine operating at steady state in either simple cycle or combined heat and power (CHP) mode; (2) a 25-MW simple cycle gas turbine undergoing a cold startup with either moderate or enhanced formaldehyde emissions; and (3) a data center generating 10 MW of emergency power with either diesel or natural gas-fired backup generators (BUGs) without pollution controls. Simulations of criteria pollutants (NO2, CO, O3, PM) and the toxic pollutant, formaldehyde (HCHO), were conducted assuming a 2-hr operational time period. In all cases, NOx titration dominated ozone production near the source. The turbine scenarios did not result in ambient concentration enhancements significantly exceeding 1 ppbv for gaseous pollutants or over 1 µg/m³ for PM after 2 hr of emission, assuming realistic plume rise. In the case of the datacenter with diesel BUGs, ambient NO2 concentrations were enhanced by 10-50 ppbv within 2 km downwind of the source, while maximum PM impacts in the immediate vicinity of the datacenter were less than 5 µg/m³. Plausible scenarios of distributed fossil generation consistent with the electricity grid's transformation to a more flexible and modernized system suggest that a substantial amount of deployment would be required to significantly affect air quality on a localized scale. In particular, natural gas turbines typically used in distributed generation may have minor effects. Large banks of diesel backup generators such as those used by data centers, on the other hand, may require pollution controls or conversion to natural gas-fired reciprocating internal combustion engines to decrease nitrogen dioxide pollution.
Rapid solution of large-scale systems of equations
NASA Technical Reports Server (NTRS)
Storaasli, Olaf O.
1994-01-01
The analysis and design of complex aerospace structures requires the rapid solution of large systems of linear and nonlinear equations, eigenvalue extraction for buckling, vibration and flutter modes, structural optimization and design sensitivity calculation. Computers with multiple processors and vector capabilities can offer substantial computational advantages over traditional scalar computers for these analyses. These computers fall into two categories: shared memory computers and distributed memory computers. This presentation covers general-purpose, highly efficient algorithms for generation/assembly of element matrices, solution of systems of linear and nonlinear equations, eigenvalue and design sensitivity analysis and optimization. All algorithms are coded in FORTRAN for shared memory computers and many are adapted to distributed memory computers. The capability and numerical performance of these algorithms will be addressed.
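A toy version of the assemble-then-solve workflow mentioned above (not NASA's FORTRAN solver suite): element-by-element assembly of a sparse system followed by an iterative conjugate-gradient solve. On shared- or distributed-memory machines, both steps are what gets parallelized; sizes and matrices below are illustrative.

```python
# Assemble a 1-D chain of two-node "elements" into a sparse stiffness matrix,
# apply a fixed boundary condition, and solve iteratively.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

n_elem = 200                                  # number of 1-D elements (illustrative)
n = n_elem + 1
ke = np.array([[1.0, -1.0], [-1.0, 1.0]])     # element "stiffness" matrix

rows, cols, vals = [], [], []
for e in range(n_elem):                       # generation/assembly of element matrices
    dof = [e, e + 1]
    for a in range(2):
        for b in range(2):
            rows.append(dof[a]); cols.append(dof[b]); vals.append(ke[a, b])
K = sp.coo_matrix((vals, (rows, cols)), shape=(n, n)).tocsr()  # duplicates are summed

f = np.ones(n)
K_red, f_red = K[1:, 1:], f[1:]               # impose u[0] = 0 by trimming row/column
u, info = cg(K_red, f_red)                    # iterative (conjugate-gradient) solve
print(info, u[:3])                            # info == 0 means the solver converged
```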
Effects of the infectious period distribution on predicted transitions in childhood disease dynamics
Krylova, Olga; Earn, David J. D.
2013-01-01
The population dynamics of infectious diseases occasionally undergo rapid qualitative changes, such as transitions from annual to biennial cycles or to irregular dynamics. Previous work, based on the standard seasonally forced ‘susceptible–exposed–infectious–removed’ (SEIR) model has found that transitions in the dynamics of many childhood diseases result from bifurcations induced by slow changes in birth and vaccination rates. However, the standard SEIR formulation assumes that the stage durations (latent and infectious periods) are exponentially distributed, whereas real distributions are narrower and centred around the mean. Much recent work has indicated that realistically distributed stage durations strongly affect the dynamical structure of seasonally forced epidemic models. We investigate whether inferences drawn from previous analyses of transitions in patterns of measles dynamics are robust to the shapes of the stage duration distributions. As an illustrative example, we analyse measles dynamics in New York City from 1928 to 1972. We find that with a fixed mean infectious period in the susceptible–infectious–removed (SIR) model, the dynamical structure and predicted transitions vary substantially as a function of the shape of the infectious period distribution. By contrast, with fixed mean latent and infectious periods in the SEIR model, the shapes of the stage duration distributions have a less dramatic effect on model dynamical structure and predicted transitions. All these results can be understood more easily by considering the distribution of the disease generation time as opposed to the distributions of individual disease stages. Numerical bifurcation analysis reveals that for a given mean generation time the dynamics of the SIR and SEIR models for measles are nearly equivalent and are insensitive to the shapes of the disease stage distributions. PMID:23676892
Krylova, Olga; Earn, David J D
2013-07-06
The population dynamics of infectious diseases occasionally undergo rapid qualitative changes, such as transitions from annual to biennial cycles or to irregular dynamics. Previous work, based on the standard seasonally forced 'susceptible-exposed-infectious-removed' (SEIR) model has found that transitions in the dynamics of many childhood diseases result from bifurcations induced by slow changes in birth and vaccination rates. However, the standard SEIR formulation assumes that the stage durations (latent and infectious periods) are exponentially distributed, whereas real distributions are narrower and centred around the mean. Much recent work has indicated that realistically distributed stage durations strongly affect the dynamical structure of seasonally forced epidemic models. We investigate whether inferences drawn from previous analyses of transitions in patterns of measles dynamics are robust to the shapes of the stage duration distributions. As an illustrative example, we analyse measles dynamics in New York City from 1928 to 1972. We find that with a fixed mean infectious period in the susceptible-infectious-removed (SIR) model, the dynamical structure and predicted transitions vary substantially as a function of the shape of the infectious period distribution. By contrast, with fixed mean latent and infectious periods in the SEIR model, the shapes of the stage duration distributions have a less dramatic effect on model dynamical structure and predicted transitions. All these results can be understood more easily by considering the distribution of the disease generation time as opposed to the distributions of individual disease stages. Numerical bifurcation analysis reveals that for a given mean generation time the dynamics of the SIR and SEIR models for measles are nearly equivalent and are insensitive to the shapes of the disease stage distributions.
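The modelling point about stage-duration shape made in the two records above can be reproduced with the standard "linear chain trick": splitting the infectious stage into k sub-stages yields an Erlang-distributed infectious period with the same mean but a narrower distribution. The sketch below uses invented parameter values, not those fitted to the New York City data.

```python
# SIR with an Erlang(k) infectious period via k chained sub-stages; k = 1
# recovers the exponential case. Same mean infectious period in both runs.
import numpy as np
from scipy.integrate import solve_ivp

def sir_erlang(t, y, beta, gamma, k):
    S, *I, R = y
    I_tot = sum(I)
    dS = -beta * S * I_tot
    dI = [beta * S * I_tot - k * gamma * I[0]]
    dI += [k * gamma * (I[j - 1] - I[j]) for j in range(1, k)]
    dR = k * gamma * I[-1]
    return [dS, *dI, dR]

beta, mean_infectious = 1.2, 5.0              # per day (assumed values)
gamma = 1.0 / mean_infectious
t_eval = np.linspace(0.0, 120.0, 1201)
for k in (1, 4):                              # k=1: exponential; k=4: narrower Erlang
    y0 = [0.999, 0.001] + [0.0] * (k - 1) + [0.0]
    sol = solve_ivp(sir_erlang, (0.0, 120.0), y0, args=(beta, gamma, k), t_eval=t_eval)
    prevalence = sol.y[1:1 + k].sum(axis=0)   # total infectious fraction over time
    print(f"k={k}: peak prevalence {prevalence.max():.3f} "
          f"on day {t_eval[prevalence.argmax()]:.0f}")
```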
Gene network inference by fusing data from diverse distributions
Žitnik, Marinka; Zupan, Blaž
2015-01-01
Motivation: Markov networks are undirected graphical models that are widely used to infer relations between genes from experimental data. Their state-of-the-art inference procedures assume the data arise from a Gaussian distribution. High-throughput omics data, such as that from next generation sequencing, often violates this assumption. Furthermore, when collected data arise from multiple related but otherwise nonidentical distributions, their underlying networks are likely to have common features. New principled statistical approaches are needed that can deal with different data distributions and jointly consider collections of datasets. Results: We present FuseNet, a Markov network formulation that infers networks from a collection of nonidentically distributed datasets. Our approach is computationally efficient and general: given any number of distributions from an exponential family, FuseNet represents model parameters through shared latent factors that define neighborhoods of network nodes. In a simulation study, we demonstrate good predictive performance of FuseNet in comparison to several popular graphical models. We show its effectiveness in an application to breast cancer RNA-sequencing and somatic mutation data, a novel application of graphical models. Fusion of datasets offers substantial gains relative to inference of separate networks for each dataset. Our results demonstrate that network inference methods for non-Gaussian data can help in accurate modeling of the data generated by emergent high-throughput technologies. Availability and implementation: Source code is at https://github.com/marinkaz/fusenet. Contact: blaz.zupan@fri.uni-lj.si Supplementary information: Supplementary information is available at Bioinformatics online. PMID:26072487
Atmospheric properties measurements and data collection from a hot-air balloon
NASA Astrophysics Data System (ADS)
Watson, Steven M.; Olson, N.; Dalley, R. P.; Bone, W. J.; Kroutil, Robert T.; Herr, Kenneth C.; Hall, Jeff L.; Schere, G. J.; Polak, M. L.; Wilkerson, Thomas D.; Bodrero, Dennis M.; Borys, R. O.; Lowenthal, D.
1995-02-01
Tethered and free-flying manned hot air balloons have been demonstrated as platforms for various atmospheric measurements and remote sensing tasks. We have been performing experiments in these areas since the winter of 1993. These platforms are extremely inexpensive to operate, do not cause disturbances such as prop wash and high airspeeds, and have substantial payload lifting and altitude capabilities. The equipment operated and tested on the balloons included FTIR spectrometers, multi-spectral imaging spectrometer, PM10 Beta attenuation monitor, mid- and far-infrared cameras, a radiometer, video recording equipment, ozone meter, condensation nuclei counter, aerodynamic particle sizer with associated computer equipment, a tethersonde and a 2.9 kW portable generator providing power to the equipment. Carbon monoxide and ozone concentration data and particle concentrations and size distributions were collected as functions of altitude in a wintertime inversion layer at Logan, Utah and summertime conditions in Salt Lake City, Utah and surrounding areas. Various FTIR spectrometers have been flown to characterize chemical plumes emitted from a simulated industrial stack. We also flew the balloon into diesel and fog oil smokes generated by U.S. Army and U.S. Air Force turbine generators to obtain particle size distributions.
Bernardo, U; van Nieukerken, E J; Sasso, R; Gebiola, M; Gualtieri, L; Viggiani, G
2015-04-01
The leafminer Coptodisca sp. (Lepidoptera: Heliozelidae), recently recorded for the first time in Europe on Italian black and common walnut trees, is shown to be the North-American Coptodisca lucifluella (Clemens) based on morphological (forewing pattern) and molecular (cytochrome oxidase c subunit I sequence) evidence. The phylogenetic relatedness of three species feeding on Juglandaceae suggests that C. lucifluella has likely shifted, within the same host plant family, from its original North-American hosts Carya spp. to Juglans spp. Over the few years since its detection, it has established in many regions in Italy and has become a widespread and dominant invasive species. The leafminer completes three to four generations per year, with the first adults emerging in April-May and mature larvae of the last generation starting hibernation in September-October. Although a high larval mortality was recorded in field observations (up to 74%), the impact of the pest was substantial with all leaves infested at the end of the last generation in all 3 years tested. The distribution of the leafminer in the canopy was homogeneous. The species is redescribed and illustrated, a lectotype is designated and a new synonymy is established.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldstein, L.; Hedman, B.; Knowles, D.
The U. S. Department of Energy (DOE) Office of Energy Efficiency and Renewable Energy (EERE) is directing substantial programs in the development and encouragement of new energy technologies. Among them are renewable energy and distributed energy resource technologies. As part of its ongoing effort to document the status and potential of these technologies, DOE EERE directed the National Renewable Energy Laboratory to lead an effort to develop and publish Distributed Energy Technology Characterizations (TCs) that would provide both the department and energy community with a consistent and objective set of cost and performance data in prospective electric-power generation applications in the United States. Toward that goal, DOE/EERE - joined by the Electric Power Research Institute (EPRI) - published the Renewable Energy Technology Characterizations in December 1997. As a follow-up, DOE EERE - joined by the Gas Research Institute - is now publishing this document, Gas-Fired Distributed Energy Resource Technology Characterizations.
Grist, Eric P M; Flegg, Jennifer A; Humphreys, Georgina; Mas, Ignacio Suay; Anderson, Tim J C; Ashley, Elizabeth A; Day, Nicholas P J; Dhorda, Mehul; Dondorp, Arjen M; Faiz, M Abul; Gething, Peter W; Hien, Tran T; Hlaing, Tin M; Imwong, Mallika; Kindermans, Jean-Marie; Maude, Richard J; Mayxay, Mayfong; McDew-White, Marina; Menard, Didier; Nair, Shalini; Nosten, Francois; Newton, Paul N; Price, Ric N; Pukrittayakamee, Sasithon; Takala-Harrison, Shannon; Smithuis, Frank; Nguyen, Nhien T; Tun, Kyaw M; White, Nicholas J; Witkowski, Benoit; Woodrow, Charles J; Fairhurst, Rick M; Sibley, Carol Hopkins; Guerin, Philippe J
2016-10-24
Artemisinin-resistant Plasmodium falciparum malaria parasites are now present across much of mainland Southeast Asia, where ongoing surveys are measuring and mapping their spatial distribution. These efforts require substantial resources. Here we propose a generic 'smart surveillance' methodology to identify optimal candidate sites for future sampling and thus map the distribution of artemisinin resistance most efficiently. The approach uses the 'uncertainty' map generated iteratively by a geostatistical model to determine optimal locations for subsequent sampling. The methodology is illustrated using recent data on the prevalence of the K13-propeller polymorphism (a genetic marker of artemisinin resistance) in the Greater Mekong Subregion. This methodology, which has broader application to geostatistical mapping in general, could improve the quality and efficiency of drug resistance mapping and thereby guide practical operations to eliminate malaria in affected areas.
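The uncertainty-guided site selection described above can be sketched with a simple fixed-kernel Gaussian-process (kriging-like) variance map: propose the candidate site where the predictive uncertainty is largest. The kernel, noise level and synthetic survey locations below are illustrative, not the Greater Mekong dataset.

```python
# Pick the next survey site as the grid location with the largest posterior
# variance of a simple Gaussian-process spatial model.
import numpy as np

def rbf(a, b, length=1.5, var=0.05):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return var * np.exp(-0.5 * d2 / length ** 2)

rng = np.random.default_rng(0)
X_obs = rng.uniform(0, 10, size=(25, 2))               # already-surveyed sites (lon, lat)
# Prevalence values would feed the mean map; with a fixed kernel, the
# uncertainty map below depends only on *where* sampling has occurred.

gx, gy = np.meshgrid(np.linspace(0, 10, 30), np.linspace(0, 10, 30))
X_cand = np.column_stack([gx.ravel(), gy.ravel()])     # candidate locations

K = rbf(X_obs, X_obs) + 1e-4 * np.eye(len(X_obs))      # observation noise on the diagonal
K_sc = rbf(X_cand, X_obs)
alpha = np.linalg.solve(K, K_sc.T)                     # K^{-1} k(x_obs, x_cand)
post_var = 0.05 - np.sum(K_sc * alpha.T, axis=1)       # 0.05 = prior variance at a point

best = X_cand[np.argmax(post_var)]                     # most informative next site
print("suggested next survey location:", np.round(best, 2))
```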
Analysis of Nearly One Thousand Mammalian Mirtrons Reveals Novel Features of Dicer Substrates
Shenker, Sol; Mohammed, Jaaved; Lai, Eric C.
2015-01-01
Mirtrons are microRNA (miRNA) substrates that utilize the splicing machinery to bypass the necessity of Drosha cleavage for their biogenesis. Expanding our recent efforts for mammalian mirtron annotation, we use meta-analysis of aggregate datasets to identify ~500 novel mouse and human introns that confidently generate diced small RNA duplexes. These comprise nearly 1000 total loci distributed in four splicing-mediated biogenesis subclasses, with 5'-tailed mirtrons as, by far, the dominant subtype. Thus, mirtrons surprisingly comprise a substantial fraction of endogenous Dicer substrates in mammalian genomes. Although mirtron-derived small RNAs exhibit overall expression correlation with their host mRNAs, we observe a subset with substantial differences that suggest regulated processing or accumulation. We identify characteristic sequence, length, and structural features of mirtron loci that distinguish them from bulk introns, and find that mirtrons preferentially emerge from genes with larger numbers of introns. While mirtrons generate miRNA-class regulatory RNAs, we also find that mirtrons exhibit many features that distinguish them from canonical miRNAs. We observe that conventional mirtron hairpins are substantially longer than Drosha-generated pre-miRNAs, indicating that the characteristic length of canonical pre-miRNAs is not a general feature of Dicer substrate hairpins. In addition, mammalian mirtrons exhibit unique patterns of ordered 5' and 3' heterogeneity, which reveal hidden complexity in miRNA processing pathways. These include broad 3'-uridylation of mirtron hairpins, atypically heterogeneous 5' termini that may result from exonucleolytic processing, and occasionally robust decapitation of the 5' guanine (G) of mirtron-5p species defined by splicing. Altogether, this study reveals that this extensive class of non-canonical miRNA bears a multitude of characteristic properties, many of which raise general mechanistic questions regarding the processing of endogenous hairpin transcripts. PMID:26325366
Inferring Biological Structures from Super-Resolution Single Molecule Images Using Generative Models
Maji, Suvrajit; Bruchez, Marcel P.
2012-01-01
Localization-based super-resolution imaging is presently limited by sampling requirements for dynamic measurements of biological structures. Generating an image requires serial acquisition of individual molecular positions at sufficient density to define a biological structure, increasing the acquisition time. Efficient analysis of biological structures from sparse localization data could substantially improve the dynamic imaging capabilities of these methods. Using a feature extraction technique called the Hough Transform, simple biological structures are identified from both simulated and real localization data. We demonstrate that these generative models can efficiently infer biological structures in the data from far fewer localizations than are required for complete spatial sampling. Analysis at partial data densities revealed efficient recovery of clathrin vesicle size distributions and microtubule orientation angles with as little as 10% of the localization data. This approach significantly increases the temporal resolution for dynamic imaging and provides quantitatively useful biological information. PMID:22629348
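The abstract does not give the implementation, so the following is only a minimal, hand-rolled circular Hough transform applied directly to a sparse set of synthetic localizations; the bin sizes, radii and the simulated vesicle are illustrative assumptions rather than the authors' code.

```python
# Sketch: a bare-bones circular Hough transform applied directly to point
# localizations, in the spirit of fitting simple shapes to sparse data.
import numpy as np

rng = np.random.default_rng(1)
true_r, cx, cy = 50.0, 200.0, 300.0                       # nm, synthetic "vesicle"
theta = rng.uniform(0, 2 * np.pi, 40)                     # only 40 localizations
pts = np.column_stack([cx + true_r * np.cos(theta),
                       cy + true_r * np.sin(theta)]) + rng.normal(0, 5, (40, 2))

step = 5.0                                                # accumulator bin size (nm)
radii = np.arange(20, 81, step)
xbins = np.arange(0, 501, step)
ybins = np.arange(0, 501, step)
acc = np.zeros((len(radii), len(xbins), len(ybins)))

phi = np.linspace(0, 2 * np.pi, 72, endpoint=False)
for x, y in pts:                                          # each point votes for candidate centres
    for ir, r in enumerate(radii):
        xc = np.digitize(x - r * np.cos(phi), xbins) - 1
        yc = np.digitize(y - r * np.sin(phi), ybins) - 1
        ok = (xc >= 0) & (xc < len(xbins)) & (yc >= 0) & (yc < len(ybins))
        np.add.at(acc[ir], (xc[ok], yc[ok]), 1)

ir, ix, iy = np.unravel_index(np.argmax(acc), acc.shape)
print("estimated radius / centre:", radii[ir], xbins[ix], ybins[iy])
```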
Quantitative cardiac SPECT reconstruction with reduced image degradation due to patient anatomy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsui, B.M.W.; Zhao, X.D.; Gregoriou, G.K.
1994-12-01
Patient anatomy has complicated effects on cardiac SPECT images. The authors investigated reconstruction methods which substantially reduced these effects for improved image quality. A 3D mathematical cardiac-torso (MCAT) phantom which models the anatomical structures in the thorax region was used in the study. The phantom was modified to simulate variations in patient anatomy including regions of natural thinning along the myocardium, body size, diaphragmatic shape, gender, and size and shape of breasts for female patients. Distributions of attenuation coefficients and Tl-201 uptake in different organs in a normal patient were also simulated. Emission projection data were generated from the phantoms, including effects of attenuation and detector response. The authors have observed the attenuation-induced artifacts caused by patient anatomy in the conventional FBP reconstructed images. Accurate attenuation compensation using iterative reconstruction algorithms and attenuation maps substantially reduced the image artifacts and improved quantitative accuracy. They conclude that reconstruction methods which accurately compensate for non-uniform attenuation can substantially reduce image degradation caused by variations in patient anatomy in cardiac SPECT.
NASA Astrophysics Data System (ADS)
Vauchy, Romain; Robisson, Anne-Charlotte; Martin, Philippe M.; Belin, Renaud C.; Aufore, Laurence; Scheinost, Andreas C.; Hodaj, Fiqiri
2015-01-01
The impact of the cation distribution homogeneity of the U0.54Pu0.45Am0.01O2-x mixed oxide on the americium oxidation state was studied by coupling X-ray diffraction (XRD), electron probe micro analysis (EPMA) and X-ray absorption spectroscopy (XAS). Oxygen-hypostoichiometric Am-bearing uranium-plutonium mixed oxide pellets were fabricated by two different co-milling-based processes in order to obtain different cation distribution homogeneities. The americium was generated from β− decay of 241Pu. The XRD analysis of the obtained compounds did not reveal any structural difference between the samples. EPMA, however, revealed a high homogeneity in the cation distribution for one sample, and substantial heterogeneity of the U-Pu (and thus Am) distribution for the other. The difference in cation distribution was linked to a difference in Am chemistry as investigated by XAS, with Am being present in a mixed +III/+IV oxidation state in the heterogeneous compound, whereas only Am(IV) was observed in the homogeneous compound. Previously reported discrepancies on Am oxidation states can hence be explained by cation distribution homogeneity effects.
Alloy substantially free of dendrites and method of forming the same
DOE Office of Scientific and Technical Information (OSTI.GOV)
de Figueredo, Anacleto M.; Apelian, Diran; Findon, Matt M.
2009-04-07
Described herein are alloys substantially free of dendrites. A method includes forming an alloy substantially free of dendrites. A superheated alloy is cooled to form a nucleated alloy. The temperature of the nucleated alloy is controlled to prevent the nuclei from melting. The nucleated alloy is mixed to distribute the nuclei throughout the alloy. The nucleated alloy is cooled with nuclei distributed throughout.
Using Bayesian networks to support decision-focused information retrieval
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lehner, P.; Elsaesser, C.; Seligman, L.
This paper describes an approach to controlling the process of pulling data/information from distributed databases in a way that is specific to a person's decision-making context. Our prototype implementation of this approach uses a knowledge-based planner to generate a plan, an automatically constructed Bayesian network to evaluate the plan, specialized processing of the network to derive key information items that would substantially impact the evaluation of the plan (e.g., determine that replanning is needed), and automated construction of Standing Requests for Information (SRIs), which are automated functions that monitor changes and trends in the distributed databases that are relevant to the key information items. The emphasis of this paper is on how Bayesian networks are used.
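As a hedged illustration of the underlying idea (not the authors' planner or network), the sketch below evaluates a toy plan with a tiny hand-rolled Bayesian network and scores each information item by how much fixing its value would move the plan evaluation; the variables, probabilities and the notion of "plan success" are hypothetical.

```python
# Sketch: evaluate a plan with a tiny hand-rolled Bayesian network and ask which
# information item would most change that evaluation.  Variables and CPTs are
# hypothetical stand-ins, not the authors' model.
import itertools

p_weather_bad = 0.3                       # prior on information item W
p_route_blocked = 0.2                     # prior on information item R
# P(plan succeeds | W, R): success is unlikely if the route is blocked.
p_success = {(0, 0): 0.9, (0, 1): 0.3, (1, 0): 0.7, (1, 1): 0.1}

def prob_success(evidence=None):
    """Marginal P(success) by enumeration, optionally fixing W or R."""
    evidence = evidence or {}
    total = 0.0
    for w, r in itertools.product([0, 1], repeat=2):
        if any(evidence.get(k) not in (None, v) for k, v in (("W", w), ("R", r))):
            continue                                    # inconsistent with the evidence
        pw = 1.0 if "W" in evidence else (p_weather_bad if w else 1 - p_weather_bad)
        pr = 1.0 if "R" in evidence else (p_route_blocked if r else 1 - p_route_blocked)
        total += pw * pr * p_success[(w, r)]
    return total

baseline = prob_success()
# "Value" of each item = spread of the plan evaluation across its possible values;
# a large spread marks a key item worth a Standing Request for Information.
for item in ("W", "R"):
    spread = abs(prob_success({item: 1}) - prob_success({item: 0}))
    print(item, "baseline", round(baseline, 3), "spread", round(spread, 3))
```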
Chetty, Raj; Friedman, John N.; Olsen, Tore; Pistaferri, Luigi
2011-01-01
We show that the effects of taxes on labor supply are shaped by interactions between adjustment costs for workers and hours constraints set by firms. We develop a model in which firms post job offers characterized by an hours requirement and workers pay search costs to find jobs. We present evidence supporting three predictions of this model by analyzing bunching at kinks using Danish tax records. First, larger kinks generate larger taxable income elasticities. Second, kinks that apply to a larger group of workers generate larger elasticities. Third, the distribution of job offers is tailored to match workers' aggregate tax preferences in equilibrium. Our results suggest that macro elasticities may be substantially larger than the estimates obtained using standard microeconometric methods. PMID:21836746
Method and system of doppler correction for mobile communications systems
NASA Technical Reports Server (NTRS)
Georghiades, Costas N. (Inventor); Spasojevic, Predrag (Inventor)
1999-01-01
Doppler correction system and method comprising receiving a Doppler effected signal comprising a preamble signal (32). A delayed preamble signal (48) may be generated based on the preamble signal (32). The preamble signal (32) may be multiplied by the delayed preamble signal (48) to generate an in-phase preamble signal (60). The in-phase preamble signal (60) may be filtered to generate a substantially constant in-phase preamble signal (62). A plurality of samples of the substantially constant in-phase preamble signal (62) may be accumulated. A phase-shifted signal (76) may also be generated based on the preamble signal (32). The phase-shifted signal (76) may be multiplied by the delayed preamble signal (48) to generate an out-of-phase preamble signal (80). The out-of-phase preamble signal (80) may be filtered to generate a substantially constant out-of-phase preamble signal (82). A plurality of samples of the substantially constant out-of-phase signal (82) may be accumulated. A sum of the in-phase preamble samples and a sum of the out-of-phase preamble samples may be normalized relative to each other to generate an in-phase Doppler estimator (92) and an out-of-phase Doppler estimator (94).
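A complex-baseband paraphrase of the I/Q branches described above might look like the sketch below: the received preamble is multiplied by a delayed copy, the in-phase and quadrature products are accumulated, and the two sums are normalized against each other to recover the frequency offset. The sample rate, delay and noise level are illustrative, and this is a sketch in the spirit of the description rather than the patented circuit itself.

```python
# Sketch: delay-and-multiply Doppler (frequency-offset) estimation on a known
# preamble, written in complex baseband.
import numpy as np

rng = np.random.default_rng(0)
fs = 48_000.0                 # sample rate (Hz), illustrative
f_doppler = 137.0             # unknown offset to be estimated (Hz)
n = np.arange(2048)
preamble = np.exp(2j * np.pi * f_doppler * n / fs)           # received tone-like preamble
preamble = preamble + 0.05 * (rng.normal(size=n.size) + 1j * rng.normal(size=n.size))

delay = 16                                                    # samples of delay
prod = preamble[delay:] * np.conj(preamble[:-delay])          # preamble x delayed preamble

i_acc = np.sum(prod.real)                                     # accumulated in-phase samples
q_acc = np.sum(prod.imag)                                     # accumulated quadrature samples

# Normalising the two accumulated sums against each other gives the phase advance
# over 'delay' samples, hence the Doppler estimate.
f_hat = np.arctan2(q_acc, i_acc) * fs / (2 * np.pi * delay)
print(f"estimated Doppler shift: {f_hat:.1f} Hz (true {f_doppler} Hz)")
```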
Virtual cathode microwave generator having annular anode slit
Kwan, Thomas J. T.; Snell, Charles M.
1988-01-01
A microwave generator is provided for generating microwaves substantially from virtual cathode oscillation. Electrons are emitted from a cathode and accelerated to an anode which is spaced apart from the cathode. The anode has an annular slit therethrough effective to form the virtual cathode. The anode is at least one range thickness relative to electrons reflecting from the virtual cathode. A magnet is provided to produce an optimum magnetic field having the field strength effective to form an annular beam from the emitted electrons in substantial alignment with the annular anode slit. The magnetic field, however, does permit the reflected electrons to axially diverge from the annular beam. The reflected electrons are absorbed by the anode in returning to the real cathode, such that substantially no reflexing electrons occur. The resulting microwaves are produced with a single dominant mode and are substantially monochromatic relative to conventional virtual cathode microwave generators.
Kwan, T.J.T.; Snell, C.M.
1987-03-31
A microwave generator is provided for generating microwaves substantially from virtual cathode oscillation. Electrons are emitted from a cathode and accelerated to an anode which is spaced apart from the cathode. The anode has an annular slit therethrough effective to form the virtual cathode. The anode is at least one range thickness relative to electrons reflecting from the virtual cathode. A magnet is provided to produce an optimum magnetic field having the field strength effective to form an annular beam from the emitted electrons in substantial alignment with the annular anode slit. The magnetic field, however, does permit the reflected electrons to axially diverge from the annular beam. The reflected electrons are absorbed by the anode in returning to the real cathode, such that substantially no reflexing electrons occur. The resulting microwaves are produced with a single dominant mode and are substantially monochromatic relative to conventional virtual cathode microwave generators. 6 figs.
Hayes, Mark A.; Cryan, Paul M.; Wunder, Michael B.
2015-01-01
Understanding seasonal distribution and movement patterns of animals that migrate long distances is an essential part of monitoring and conserving their populations. Compared to migratory birds and other more conspicuous migrants, we know very little about the movement patterns of many migratory bats. Hoary bats (Lasiurus cinereus), a cryptic, wide-ranging, long-distance migrant, comprise a substantial proportion of the tens to hundreds of thousands of bat fatalities estimated to occur each year at wind turbines in North America. We created seasonally-dynamic species distribution models (SDMs) from 2,753 museum occurrence records collected over five decades in North America to better understand the seasonal geographic distributions of hoary bats. We used 5 SDM approaches: logistic regression, multivariate adaptive regression splines, boosted regression trees, random forest, and maximum entropy and consolidated outputs to generate ensemble maps. These maps represent the first formal hypotheses for sex- and season-specific hoary bat distributions. Our results suggest that North American hoary bats winter in regions with relatively long growing seasons where temperatures are moderated by proximity to oceans, and then move to the continental interior for the summer. SDMs suggested that hoary bats are most broadly distributed in autumn—the season when they are most susceptible to mortality from wind turbines; this season contains the greatest overlap between potentially suitable habitat and wind energy facilities. Comparing wind-turbine fatality data to model outputs could test many predictions, such as ‘risk from turbines is highest in habitats between hoary bat summering and wintering grounds’. Although future field studies are needed to validate the SDMs, this study generated well-justified and testable hypotheses of hoary bat migration patterns and seasonal distribution. PMID:26208098
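A minimal sketch of the ensemble-SDM idea follows, assuming synthetic presence/pseudo-absence data and a few scikit-learn model families standing in for the five approaches used in the study; the averaged predicted probabilities play the role of the ensemble suitability map.

```python
# Sketch: an ensemble species-distribution model built by averaging several
# presence/absence classifiers.  Predictors and occurrence labels are synthetic
# placeholders, not the museum records used in the study.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))            # e.g. temperature, growing season, coast distance...
p_true = 1 / (1 + np.exp(-(1.5 * X[:, 0] - X[:, 1])))
y = rng.binomial(1, p_true)              # presence (1) / pseudo-absence (0)

models = [LogisticRegression(max_iter=1000),
          RandomForestClassifier(n_estimators=200, random_state=0),
          GradientBoostingClassifier(random_state=0)]

grid = rng.normal(size=(1000, 4))        # environmental values at map cells
suitability = np.mean([m.fit(X, y).predict_proba(grid)[:, 1] for m in models], axis=0)
print("ensemble habitat suitability, first 5 cells:", np.round(suitability[:5], 2))
```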
SINEs as driving forces in genome evolution.
Schmitz, J
2012-01-01
SINEs are short interspersed elements derived from cellular RNAs that repetitively retropose via RNA intermediates and integrate more or less randomly back into the genome. SINEs propagate almost entirely vertically within their host cells and, once established in the germline, are passed on from generation to generation. As non-autonomous elements, their reverse transcription (from RNA to cDNA) and genomic integration depend on the activity of the enzymatic machinery of autonomous retrotransposons, such as long interspersed elements (LINEs). SINEs are widely distributed in eukaryotes, but are especially effectively propagated in mammalian species. For example, more than a million Alu-SINE copies populate the human genome (approximately 13% of genomic space), of which only a few master copies are still active. In the organisms where they occur, SINEs are a challenge to genomic integrity, but in the long term they can also serve as beneficial building blocks for evolution, contributing to phenotypic heterogeneity and modifying gene regulatory networks. They substantially expand the genomic space and introduce structural variation to the genome. SINEs have the potential to mutate genes, to alter gene expression, and to generate new parts of genes. A balanced distribution and controlled activity of such properties is crucial to maintaining the organism's dynamic and thriving evolution. Copyright © 2012 S. Karger AG, Basel.
Rincon, Diego F; Hoy, Casey W; Cañas, Luis A
2015-04-01
Most predator-prey models extrapolate functional responses from small-scale experiments assuming spatially uniform within-plant predator-prey interactions. However, some predators focus their search in certain plant regions, and herbivores tend to select leaves to balance their nutrient uptake and exposure to plant defenses. Individual-based models that account for heterogeneous within-plant predator-prey interactions can be used to scale up functional responses, but they would require the generation of explicit prey spatial distributions within plant architecture models. The silverleaf whitefly, Bemisia tabaci biotype B (Gennadius) (Hemiptera: Aleyrodidae), is a significant pest of tomato crops worldwide that exhibits highly aggregated populations at several spatial scales, including within the plant. As part of an analytical framework to understand predator-silverleaf whitefly interactions, the objective of this research was to develop an algorithm to generate explicit spatial counts of silverleaf whitefly nymphs within tomato plants. The algorithm requires the plant size and the number of silverleaf whitefly individuals to distribute as inputs, and includes models that describe infestation probabilities per leaf nodal position and the aggregation pattern of the silverleaf whitefly within tomato plants and leaves. The output is a simulated number of silverleaf whitefly individuals for each leaf and leaflet on one or more plants. Parameter estimation was performed using nymph counts per leaflet censused from 30 artificially infested tomato plants. Validation revealed a substantial agreement between algorithm outputs and independent data that included the distribution of counts of both eggs and nymphs. This algorithm can be used in simulation models that explore the effect of local heterogeneity on whitefly-predator dynamics. © The Authors 2015. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
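A hedged sketch of such a generator is given below: a nodal-position infestation curve sets the expected share of nymphs per leaf, gamma multipliers add negative-binomial-like aggregation between leaves, and each leaf total is then split over its leaflets. The curve, aggregation parameter and leaflet count are illustrative assumptions, not the fitted values from the censused plants.

```python
# Sketch of a within-plant count generator: distribute a given number of nymphs
# over leaf nodal positions with an infestation-probability curve plus an
# aggregation (overdispersion) term.
import numpy as np

rng = np.random.default_rng(2)

def simulate_plant(n_leaves=15, n_nymphs=300, leaflets_per_leaf=7, aggregation=0.5):
    node = np.arange(n_leaves)
    # Infestation assumed to peak on mid/lower leaves (placeholder curve).
    base_weight = np.exp(-0.5 * ((node - 0.7 * n_leaves) / (0.25 * n_leaves)) ** 2)
    # Gamma multipliers create between-leaf aggregation (negative-binomial-like).
    weight = base_weight * rng.gamma(shape=aggregation, scale=1.0 / aggregation, size=n_leaves)
    leaf_counts = rng.multinomial(n_nymphs, weight / weight.sum())
    # Split each leaf total over its leaflets, again with clumping.
    leaflet_counts = [rng.multinomial(c, rng.dirichlet(np.full(leaflets_per_leaf, 0.8)))
                      for c in leaf_counts]
    return leaf_counts, leaflet_counts

leaves, leaflets = simulate_plant()
print("nymphs per leaf (node 0 = oldest):", leaves)
```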
Geist, E.L.; Bilek, S.L.; Arcas, D.; Titov, V.V.
2006-01-01
Source parameters affecting tsunami generation and propagation for the Mw > 9.0 December 26, 2004 and the Mw = 8.6 March 28, 2005 earthquakes are examined to explain the dramatic difference in tsunami observations. We evaluate both scalar measures (seismic moment, maximum slip, potential energy) and finite-source representations (distributed slip and far-field beaming from finite source dimensions) of tsunami generation potential. There exists significant variability in local tsunami runup with respect to the most readily available measure, seismic moment. The local tsunami intensity for the December 2004 earthquake is similar to other tsunamigenic earthquakes of comparable magnitude. In contrast, the March 2005 local tsunami was deficient relative to its earthquake magnitude. Tsunami potential energy calculations more accurately reflect the difference in tsunami severity, although these calculations are dependent on knowledge of the slip distribution and therefore difficult to implement in a real-time system. A significant factor affecting tsunami generation unaccounted for in these scalar measures is the location of regions of seafloor displacement relative to the overlying water depth. The deficiency of the March 2005 tsunami seems to be related to concentration of slip in the down-dip part of the rupture zone and the fact that a substantial portion of the vertical displacement field occurred in shallow water or on land. The comparison of the December 2004 and March 2005 Sumatra earthquakes presented in this study is analogous to previous studies comparing the 1952 and 2003 Tokachi-Oki earthquakes and tsunamis, in terms of the effect slip distribution has on local tsunamis. Results from these studies indicate the difficulty in rapidly assessing local tsunami runup from magnitude and epicentral location information alone.
The effect of noise-induced variance on parameter recovery from reaction times.
Vadillo, Miguel A; Garaizar, Pablo
2016-03-31
Technical noise can compromise the precision and accuracy of the reaction times collected in psychological experiments, especially in the case of Internet-based studies. Although this noise seems to have only a small impact on traditional statistical analyses, its effects on model fit to reaction-time distributions remain unexplored. Across four simulations, we study the impact of technical noise on parameter recovery from data generated from an ex-Gaussian distribution and from a Ratcliff Diffusion Model. Our results suggest that the impact of noise-induced variance tends to be limited to specific parameters and conditions. Although we encourage researchers to adopt all measures to reduce the impact of noise on reaction-time experiments, we conclude that the typical amount of noise-induced variance found in these experiments does not pose substantial problems for statistical analyses based on model fitting.
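A minimal sketch of the ex-Gaussian part of such a simulation, assuming uniform timing jitter as the "technical" noise and scipy's exponnorm fit for parameter recovery; all values are arbitrary illustrations rather than the paper's simulation settings.

```python
# Sketch: add uniform "technical" timing noise to reaction times drawn from an
# ex-Gaussian and check how well the parameters are recovered.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
mu, sigma, tau = 400.0, 40.0, 100.0                  # ms, generating parameters
rt = rng.normal(mu, sigma, 5000) + rng.exponential(tau, 5000)

jitter = rng.uniform(0, 30, 5000)                    # e.g. screen refresh / polling noise
rt_noisy = rt + jitter

for label, data in (("clean", rt), ("noisy", rt_noisy)):
    K, loc, scale = stats.exponnorm.fit(data)        # scipy's exponnorm: K = tau / sigma
    print(f"{label}: mu~{loc:.0f}  sigma~{scale:.0f}  tau~{K * scale:.0f}")
```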
NASA Astrophysics Data System (ADS)
Chatterjee, Sandeep; Bożek, Piotr
2018-05-01
Thermalized matter created in noncentral relativistic heavy-ion collisions is expected to be tilted in the reaction plane with respect to the beam axis. The most notable consequence of this forward-backward symmetry breaking is the observation of rapidity-odd directed flow for charged particles. On the other hand, the production points for heavy quarks are forward-backward symmetric and shifted in the transverse plane with respect to the fireball. The drag on heavy quarks from the asymmetrically distributed thermalized matter generates substantial directed flow for heavy flavor mesons. We predict a very large rapidity-odd directed flow of D mesons in noncentral Au-Au collisions at √(s_NN) = 200 GeV, several times larger than for charged particles. A possible experimental observation of a large directed flow for heavy flavor mesons would represent an almost direct probe of the three-dimensional distribution of matter in heavy-ion collisions.
NASA Astrophysics Data System (ADS)
Arun, S.; Choudhury, Vishal; Balaswamy, V.; Supradeepa, V. R.
2018-02-01
We have demonstrated a 34 W continuous-wave supercontinuum using standard telecom fiber (SMF-28e). The supercontinuum spans a bandwidth of 1000 nm (>1 octave) from 880 nm to 1900 nm, with a substantial power spectral density of >1 mW/nm from 880-1350 nm and 50-100 mW/nm from 1350-1900 nm. A distributed feedback Raman laser architecture was used to pump the supercontinuum, which ensured high-efficiency Raman conversions and helped achieve a very high supercontinuum generation efficiency of 44%. Using this architecture, a Yb laser operating at any wavelength can be used to generate the supercontinuum; this was demonstrated by using two different Yb lasers operating at 1117 nm and 1085 nm to pump the supercontinuum.
29 CFR 4043.27 - Distribution to a substantial owner.
Code of Federal Regulations, 2014 CFR
2014-07-01
... TERMINATIONS REPORTABLE EVENTS AND CERTAIN OTHER NOTIFICATION REQUIREMENTS Post-Event Notice of Reportable Events § 4043.27 Distribution to a substantial owner. (a) Reportable event. A reportable event occurs for... does not exceed the limitation (as of the date the reportable event occurs) under section 415(b)(1)(A...
29 CFR 4043.27 - Distribution to a substantial owner.
Code of Federal Regulations, 2011 CFR
2011-07-01
... TERMINATIONS REPORTABLE EVENTS AND CERTAIN OTHER NOTIFICATION REQUIREMENTS Post-Event Notice of Reportable Events § 4043.27 Distribution to a substantial owner. (a) Reportable event. A reportable event occurs for... does not exceed the limitation (as of the date the reportable event occurs) under section 415(b)(1)(A...
29 CFR 4043.27 - Distribution to a substantial owner.
Code of Federal Regulations, 2013 CFR
2013-07-01
... TERMINATIONS REPORTABLE EVENTS AND CERTAIN OTHER NOTIFICATION REQUIREMENTS Post-Event Notice of Reportable Events § 4043.27 Distribution to a substantial owner. (a) Reportable event. A reportable event occurs for... does not exceed the limitation (as of the date the reportable event occurs) under section 415(b)(1)(A...
29 CFR 4043.27 - Distribution to a substantial owner.
Code of Federal Regulations, 2010 CFR
2010-07-01
... TERMINATIONS REPORTABLE EVENTS AND CERTAIN OTHER NOTIFICATION REQUIREMENTS Post-Event Notice of Reportable Events § 4043.27 Distribution to a substantial owner. (a) Reportable event. A reportable event occurs for... does not exceed the limitation (as of the date the reportable event occurs) under section 415(b)(1)(A...
29 CFR 4043.27 - Distribution to a substantial owner.
Code of Federal Regulations, 2012 CFR
2012-07-01
... TERMINATIONS REPORTABLE EVENTS AND CERTAIN OTHER NOTIFICATION REQUIREMENTS Post-Event Notice of Reportable Events § 4043.27 Distribution to a substantial owner. (a) Reportable event. A reportable event occurs for... does not exceed the limitation (as of the date the reportable event occurs) under section 415(b)(1)(A...
A Mapping of the Electron Localization Function for Earth Materials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gibbs, Gerald V.; Cox, David F.; Ross, Nancy
2005-06-01
The electron localization function, ELF, generated for a number of geometry-optimized earth materials, provides a graphical representation of the spatial localization of the probability electron density distribution as embodied in domains ascribed to localized bond and lone pair electrons. The lone pair domains, displayed by the silica polymorphs quartz, coesite and cristobalite, are typically banana-shaped and oriented perpendicular to the plane of the SiOSi angle at ~0.60 Å from the O atom on the reflex side of the angle. With decreasing angle, the domains increase in magnitude, indicating an increase in the nucleophilic character of the O atom, rendering it more susceptible to potential electrophilic attack. The Laplacian isosurface maps of the experimental and theoretical electron density distribution for coesite substantiate the increase in the size of the domain with decreasing angle. Bond pair domains are displayed along each of the SiO bond vectors as discrete concave hemispherically-shaped domains at ~0.70 Å from the O atom. For more closed-shell ionic bonded interactions, the bond and lone pair domains are often coalesced, resulting in concave hemispherical toroidal-shaped domains with local maxima centered along the bond vectors. As the shared covalent character of the bonded interactions increases, the bond and lone pair domains are better developed as discrete domains. ELF isosurface maps generated for the earth materials tremolite, diopside, talc and dickite display banana-shaped lone pair domains associated with the bridging O atoms of SiOSi angles and concave hemispherical toroidal bond pair domains associated with the nonbridging ones. The lone pair domains in dickite and talc provide a basis for understanding the bonded interactions between the adjacent neutral layers. Maps were also generated for beryl, cordierite, quartz, low albite, forsterite, wadeite, åkermanite, pectolite, periclase, hurlbutite, thortveitite and vanthoffite. Strategies are reviewed for finding potential H docking sites in the silica polymorphs and related materials. As observed in an earlier study, the ELF is capable of generating bond and lone pair domains that are similar in number and arrangement to those provided by Laplacian and deformation electron density distributions. The formation of the bond and lone pair domains in the silica polymorphs and the progressive decrease in the SiO length as the value of the electron density at the bond critical point increases indicates that the SiO bonded interaction has a substantial component of covalent character.
Conceptual design of the 7 megawatt Mod-5B wind turbine generator
NASA Technical Reports Server (NTRS)
Douglas, R. R.
1982-01-01
Similar to MOD-2, the MOD-5B wind turbine generator system is designed for the sole purpose of providing electrical power for distribution by a major utility network. The objectives of the MOD-2 and MOD-5B programs are essentially identical, with one important exception: the cost-of-electricity (COE) target is reduced from 4 cents/kWh on MOD-2 to 3 cents/kWh on MOD-5B, based on mid-1977 dollars and large-quantity production. The MOD-5B concept studies and eventual concept selection confirmed that the program COE targets could not only be achieved but substantially bettered. Starting from the established MOD-2 technology as a base, this achievement resulted from a combination of concept changes, size changes, and design refinements. The result of this effort is a wind turbine system that can compete with conventional power generation over significant geographical areas, increasing commercial market potential by an order of magnitude.
Convergence of pattern generator outputs on a common mechanism of diaphragm motor unit recruitment
Mantilla, Carlos B.; Seven, Yasin B.; Sieck, Gary C.
2014-01-01
Motor units are the final element of neuromotor control. In a manner analogous to the organization of neuromotor control in other skeletal muscles, diaphragm motor units comprise phrenic motoneurons located in the cervical spinal cord that innervate the diaphragm muscle, the main inspiratory muscle in mammals. Diaphragm motor units play a primary role in sustaining ventilation, but are also active in other non-ventilatory behaviors, including coughing, sneezing, vomiting, defecation and parturition. Diaphragm muscle fibers comprise all fiber types. Thus, diaphragm motor units display substantial differences in contractile and fatigue properties, but importantly properties of the motoneuron and muscle fibers within a motor unit are matched. As in other skeletal muscles, diaphragm motor units are recruited in order such that motor units that display greater fatigue resistance are recruited earlier and more often than more fatigable motor units. The properties of the motor unit population are critical determinants of the function of a skeletal muscle across the range of possible motor tasks. Accordingly, fatigue-resistant motor units are sufficient to generate the forces necessary for ventilatory behaviors whereas more fatigable units are only activated during expulsive behaviors important for airway clearance. Neuromotor control of diaphragm motor units may reflect selective inputs from distinct pattern generators distributed according to the motor unit properties necessary to accomplish these different motor tasks. In contrast, widely-distributed inputs to phrenic motoneurons from various pattern generators (e.g., for breathing, coughing or vocalization) would dictate recruitment order based on intrinsic electrophysiological properties. PMID:24746055
Duan, Yuanyuan; Griggs, Jason A
2015-06-01
Further investigations are required to evaluate the mechanical behaviour of newly developed polymer-matrix composite (PMC) blocks for computer-aided design/computer-aided manufacturing (CAD/CAM) applications. The purpose of this study was to investigate the effect of elasticity on the stress distribution in dental crowns made of glass-ceramic and PMC materials using finite element (FE) analysis. Elastic constants of the two materials were determined by ultrasonic pulse velocity using an acoustic thickness gauge. Three-dimensional solid models of a full-coverage dental crown on a first mandibular molar were generated based on X-ray micro-CT scanning images. A variety of load-case and material-property combinations was simulated using FE analysis. The first principal stress distribution in the crown and luting agent was plotted and analyzed. The glass-ceramic crown had stress concentrations on the occlusal surface surrounding the area of loading and the cemented surface underneath the area of loading, while the PMC crown had a stress concentration only on the occlusal surface. The PMC crown had lower maximum stress than the glass-ceramic crown in all load cases, but this difference was not substantial when the loading had a lateral component. Eccentric loading did not substantially increase the maximum stress in the prosthesis. Both materials are resistant to fracture with physiological occlusal load. The PMC crown had lower maximum stress than the glass-ceramic crown, but the effect of a lateral loading component was more pronounced for a PMC crown than for a glass-ceramic crown. Knowledge of the stress distribution in dental crowns with low modulus of elasticity will aid clinicians in planning treatments that include such restorations. Copyright © 2015 Elsevier Ltd. All rights reserved.
Nonlinear Monte Carlo model of superdiffusive shock acceleration with magnetic field amplification
NASA Astrophysics Data System (ADS)
Bykov, Andrei M.; Ellison, Donald C.; Osipov, Sergei M.
2017-03-01
Fast collisionless shocks in cosmic plasmas convert their kinetic energy flow into the hot downstream thermal plasma with a substantial fraction of energy going into a broad spectrum of superthermal charged particles and magnetic fluctuations. The superthermal particles can penetrate into the shock upstream region producing an extended shock precursor. The cold upstream plasma flow is decelerated by the force provided by the superthermal particle pressure gradient. In high Mach number collisionless shocks, efficient particle acceleration is likely coupled with turbulent magnetic field amplification (MFA) generated by the anisotropic distribution of accelerated particles. This anisotropy is determined by fast particle transport, making the problem strongly nonlinear and multiscale. Here, we present a nonlinear Monte Carlo model of collisionless shock structure with superdiffusive propagation of high-energy Fermi accelerated particles coupled to particle acceleration and MFA, which affords a consistent description of strong shocks. A distinctive feature of the Monte Carlo technique is that it includes the full angular anisotropy of the particle distribution at all precursor positions. The model reveals that the superdiffusive transport of energetic particles (i.e., Lévy-walk propagation) generates a strong quadruple anisotropy in the precursor particle distribution. The resultant pressure anisotropy of the high-energy particles produces a nonresonant mirror-type instability that amplifies compressible wave modes with wavelengths longer than the gyroradii of the highest-energy protons produced by the shock.
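As a rough illustration of superdiffusive (Lévy-walk) transport considered in isolation, the sketch below simulates constant-speed flights with heavy-tailed durations and checks that the mean-square displacement grows faster than linearly in time; the exponent, scales and walker counts are arbitrary and unrelated to the shock model itself.

```python
# Sketch: mean-square displacement of a 1D Levy walk (constant-speed flights with
# heavy-tailed durations), illustrating superdiffusive propagation.
import numpy as np

rng = np.random.default_rng(4)
n_walkers, t_max, alpha, speed = 1000, 500.0, 1.5, 1.0    # 1 < alpha < 2 -> superdiffusion

t_grid = np.linspace(0, t_max, 101)
positions = np.zeros((n_walkers, t_grid.size))

for w in range(n_walkers):
    t, x = 0.0, 0.0
    traj_t, traj_x = [0.0], [0.0]
    while t < t_max:
        dur = 1.0 + rng.pareto(alpha)                     # heavy-tailed flight duration
        direction = rng.choice([-1.0, 1.0])
        t += dur
        x += direction * speed * dur
        traj_t.append(t)
        traj_x.append(x)
    positions[w] = np.interp(t_grid, traj_t, traj_x)      # position is linear within a flight

msd = np.mean(positions ** 2, axis=0)
slope = np.polyfit(np.log(t_grid[10:]), np.log(msd[10:]), 1)[0]
print(f"MSD ~ t^{slope:.2f}  (Brownian diffusion would give ~t^1, ballistic motion t^2)")
```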
Computational design and refinement of self-heating lithium ion batteries
NASA Astrophysics Data System (ADS)
Yang, Xiao-Guang; Zhang, Guangsheng; Wang, Chao-Yang
2016-10-01
The recently discovered self-heating lithium ion battery has shown rapid self-heating from subzero temperatures and superior power thereafter, delivering a practical solution to poor battery performance at low temperatures. Here, we describe and validate an electrochemical-thermal coupled model developed specifically for computational design and improvement of the self-heating Li-ion battery (SHLB) where nickel foils are embedded in its structure. Predicting internal cell characteristics, such as current, temperature and Li-concentration distributions, the model is used to discover key design factors affecting the time and energy needed for self-heating and to explore advanced cell designs with the highest self-heating efficiency. It is found that ohmic heat generated in the nickel foil accounts for the majority of internal heat generation, resulting in a large internal temperature gradient from the nickel foil toward the outer cell surface. The large through-plane temperature gradient leads to highly non-uniform current distribution, and more importantly, is found to be the decisive factor affecting the heating time and energy consumption. A multi-sheet cell design is thus proposed and demonstrated to substantially minimize the temperature gradient, achieving 30% more rapid self-heating with 27% less energy consumption than those reported in the literature.
Non-linear Evolution of Velocity Ring Distributions: Generation of Whistler Waves
NASA Astrophysics Data System (ADS)
Mithaiwala, M.; Rudakov, L.; Ganguli, G.
2010-12-01
Although it is typically believed that an ion ring velocity distribution has a stability threshold, we find that such distributions are universally unstable. This can substantially impact the understanding of dynamics in both laboratory and space plasmas. A high ring density neutralizes the stabilizing effect of ion Landau damping in a warm plasma, and the ring is unstable to the generation of waves below the lower hybrid frequency, even for a very high temperature plasma. For ring densities lower than the background plasma density there is a slow instability with growth rate less than the background ion cyclotron frequency, and consequently the background ion response is magnetized. This is in addition to the widely discussed fast instability where the wave growth rate exceeds the background ion cyclotron frequency and hence the background ions are effectively unmagnetized. Thus, even a low density ring is unstable to waves around the lower hybrid frequency range for any ring speed. This implies that effectively there is no velocity threshold for a sufficiently cold ring. The importance of these conclusions for the nonlinear evolution of space plasmas, in particular for solar wind-comet interaction, post-storm magnetospheric conditions, and chemical release experiments in the ionosphere, will be discussed.
NASA Astrophysics Data System (ADS)
Bodin, P.; Olin, S.; Pugh, T. A. M.; Arneth, A.
2014-12-01
Food security can be defined as stable access to food of good nutritional quality. In Sub-Saharan Africa, access to food is strongly linked to local food production and the capacity to generate enough calories to sustain the local population. Therefore it is important in these regions not only to generate sufficiently high yields but also to reduce interannual variability in food production. Traditionally, climate impact simulation studies have focused on factors that underlie maximum productivity, ignoring the variability in yield. By using Modern Portfolio Theory, a method stemming from economics, we here calculate optimum current and future crop selections that maintain current yield while minimizing variance, vs. maintaining variance while maximizing yield. Based on simulated yield using the LPJ-GUESS dynamic vegetation model, the results show that the current cropland distribution for many crops is close to these optimum distributions. Even so, the optimizations displayed substantial potential to increase food production and/or decrease its variance regionally. Our approach can also be seen as a method to create future scenarios for the sown areas of crops in regions where local food production is important for food security.
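A minimal sketch of the mean-variance (Modern Portfolio Theory) step, assuming placeholder yield time series in place of LPJ-GUESS output: minimize the variance of total yield subject to keeping the current expected yield and summing the area shares to one.

```python
# Sketch: Modern Portfolio Theory applied to crop areas -- minimise the variance
# of total yield while holding the current expected yield.  Yield statistics and
# current sown-area shares are invented placeholders.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
yields = rng.normal(loc=[2.0, 1.5, 3.0, 2.5], scale=[0.6, 0.2, 0.9, 0.5], size=(30, 4))
mean, cov = yields.mean(axis=0), np.cov(yields, rowvar=False)

current_share = np.array([0.4, 0.3, 0.2, 0.1])            # current sown-area fractions
target_yield = current_share @ mean

res = minimize(
    lambda w: w @ cov @ w,                                # portfolio (yield) variance
    x0=current_share,
    bounds=[(0, 1)] * 4,
    constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1},
                 {"type": "eq", "fun": lambda w: w @ mean - target_yield}],
)
print("optimised crop shares:", np.round(res.x, 2))
print("variance: current %.3f -> optimised %.3f"
      % (current_share @ cov @ current_share, res.x @ cov @ res.x))
```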
The impact of electric vehicles on the outlook of future energy system
NASA Astrophysics Data System (ADS)
Zhuk, A.; Buzoverov, E.
2018-02-01
Active promotion of electric vehicles (EVs) and of fast EV charging technology may, in the medium term, cause significant peak loads on the energy system, which necessitates strategic decisions on the development of generating capacity, distribution networks with EV charging infrastructure, and priorities in the development of battery electric vehicles and vehicles with electrochemical generators. The paper analyses one of the most significant aspects of the joint development of the electric transport system and the energy system under conditions of substantial growth in EV energy consumption. Per-unit costs of operating and depreciating the EV power unit were assessed, taking into account the cost of electric power supply. The calculations show that the choice of electricity buffering method for fast EV charging depends on the character of the electricity infrastructure in the region where the electric transport operates. Where the electricity network is dense and the number of EVs is large, stationary storage facilities or distributed energy storage in EV batteries - vehicle-to-grid (V2G) technology - may be used for buffering. Where electricity networks have low density and low capacity, the most economical solution could be EVs with traction power units combining an air-aluminum electrochemical generator with a small buffer battery.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davis, W Jr
1981-07-01
This report describes results of a parametric study of quantities of radioactive materials that might be discharged by a tornado-generated depressurization of contaminated process cells within the presently inoperative Nuclear Fuel Services' (NFS) fuel reprocessing facility near West Valley, New York. The study involved the following tasks: determining approximate quantities of radioactive materials in the cells and characterizing particle-size distribution; estimating the degree of mass reentrainment from particle-size distribution and from air speed data presented in Part 1; and estimating the quantities of radioactive material (source term) released from the cells to the atmosphere. The study has shown that improperly sealed manipulator ports in the Process Mechanical Cell (PMC) present the most likely pathway for release of substantial quantities of radioactive material into the atmosphere under tornado accident conditions at the facility.
Sanders, John M; Beshore, Douglas C; Culberson, J Christopher; Fells, James I; Imbriglio, Jason E; Gunaydin, Hakan; Haidle, Andrew M; Labroli, Marc; Mattioni, Brian E; Sciammetta, Nunzio; Shipe, William D; Sheridan, Robert P; Suen, Linda M; Verras, Andreas; Walji, Abbas; Joshi, Elizabeth M; Bueters, Tjerk
2017-08-24
High-throughput screening (HTS) has enabled millions of compounds to be assessed for biological activity, but challenges remain in the prioritization of hit series. While biological, absorption, distribution, metabolism, excretion, and toxicity (ADMET), purity, and structural data are routinely used to select chemical matter for further follow-up, the scarcity of historical ADMET data for screening hits limits our understanding of early hit compounds. Herein, we describe a process that utilizes a battery of in-house quantitative structure-activity relationship (QSAR) models to generate in silico ADMET profiles for hit series to enable more complete characterizations of HTS chemical matter. These profiles allow teams to quickly assess hit series for desirable ADMET properties or suspected liabilities that may require significant optimization. Accordingly, these in silico data can direct ADMET experimentation and profoundly impact the progression of hit series. Several prospective examples are presented to substantiate the value of this approach.
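A heavily hedged sketch of the workflow, with random numbers standing in for molecular descriptors and historical ADMET measurements: one regression model per endpoint is trained and then applied to the descriptor vectors of new screening hits to produce an in silico profile. The endpoint names and model choices below are illustrative only, not the in-house QSAR battery described in the paper.

```python
# Sketch: an in-silico ADMET profile for screening hits from a battery of QSAR
# models.  Descriptors, endpoints and training data are hypothetical stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(6)
n_train, n_desc = 2000, 64
X_train = rng.normal(size=(n_train, n_desc))             # molecular descriptors (placeholder)

endpoints = ["solubility", "CLint", "Papp", "hERG_pIC50"] # illustrative endpoint names
models = {}
for i, name in enumerate(endpoints):                      # one QSAR model per ADMET endpoint
    y = X_train[:, i] * 0.8 + rng.normal(0, 0.3, n_train) # synthetic "measured" values
    models[name] = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y)

hits = rng.normal(size=(5, n_desc))                       # descriptor vectors for HTS hits
profile = {name: np.round(m.predict(hits), 2) for name, m in models.items()}
for name, preds in profile.items():
    print(f"{name:>10}: {preds}")
```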
Quantum key distribution using continuous-variable non-Gaussian states
NASA Astrophysics Data System (ADS)
Borelli, L. F. M.; Aguiar, L. S.; Roversi, J. A.; Vidiella-Barranco, A.
2016-02-01
In this work, we present a quantum key distribution protocol using continuous-variable non-Gaussian states, homodyne detection and post-selection. The employed signal states are the photon added then subtracted coherent states (PASCS) in which one photon is added and subsequently one photon is subtracted from the field. We analyze the performance of our protocol, compared with a coherent state-based protocol, for two different attacks that could be carried out by the eavesdropper (Eve). We calculate the secret key rate transmission in a lossy line for a superior channel (beam-splitter) attack, and we show that we may increase the secret key generation rate by using the non-Gaussian PASCS rather than coherent states. We also consider the simultaneous quadrature measurement (intercept-resend) attack, and we show that the efficiency of Eve's attack is substantially reduced if PASCS are used as signal states.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Placidi, M.; Jung, J. -Y.; Ratti, A.
2014-07-25
This paper describes beam distribution schemes adopting a novel implementation based on low-amplitude vertical deflections combined with horizontal ones generated by Lambertson-type septum magnets. This scheme offers substantial compactness in the longitudinal layouts of the beam lines and increased flexibility for beam delivery of multiple beam lines on a shot-to-shot basis. Fast kickers (FK) or transverse electric field RF Deflectors (RFD) provide the low-amplitude deflections. Initially proposed at the Stanford Linear Accelerator Center (SLAC) as tools for beam diagnostics and more recently adopted for multiline beam pattern schemes, RFDs offer repetition capabilities and a likely better amplitude reproducibility when compared to FKs, which, in turn, involve more modest costs in both construction and operation. Both solutions represent an ideal approach for the design of compact beam distribution systems resulting in space and cost savings while preserving flexibility and beam quality.
The Coalescent Process in Models with Selection
Kaplan, N. L.; Darden, T.; Hudson, R. R.
1988-01-01
Statistical properties of the process describing the genealogical history of a random sample of genes are obtained for a class of population genetics models with selection. For models with selection, in contrast to models without selection, the distribution of this process, the coalescent process, depends on the distribution of the frequencies of alleles in the ancestral generations. If the ancestral frequency process can be approximated by a diffusion, then the mean and the variance of the number of segregating sites due to selectively neutral mutations in random samples can be numerically calculated. The calculations are greatly simplified if the frequencies of the alleles are tightly regulated. If the mutation rates between alleles maintained by balancing selection are low, then the number of selectively neutral segregating sites in a random sample of genes is expected to substantially exceed the number predicted under a neutral model. PMID:3066685
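For reference, the neutral baseline against which such an excess is judged can be written down directly; below is a small sketch of Watterson's formulas for the mean and variance of the number of segregating sites S in a sample of n genes with scaled mutation rate theta (standard coalescent results, not code from the paper).

```python
# Sketch: neutral-model expectations for the number of segregating sites S
# (Watterson 1975): E[S] = a1*theta, Var[S] = a1*theta + a2*theta^2.
import numpy as np

def segregating_sites_moments(n, theta):
    """E[S] and Var[S] for a neutral coalescent sample of size n."""
    i = np.arange(1, n)
    a1 = np.sum(1.0 / i)
    a2 = np.sum(1.0 / i**2)
    return a1 * theta, a1 * theta + a2 * theta**2

for n in (5, 20, 50):
    mean_s, var_s = segregating_sites_moments(n, theta=5.0)
    print(f"n={n:2d}: E[S]={mean_s:5.1f}  SD[S]={var_s**0.5:4.1f}")
```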
Computer simulations of planetary accretion dynamics: Sensitivity to initial conditions
NASA Technical Reports Server (NTRS)
Isaacman, R.; Sagan, C.
1976-01-01
The implications and limitations of program ACRETE were tested. The program is a scheme based on Newtonian physics and accretion with unit sticking efficiency, devised to simulate the origin of the planets. The dependence of the results on a variety of radial and vertical density distribution laws, the ratio of gas to dust in the solar nebula, the total nebular mass, and the orbital eccentricity of the accreting grains was explored. Only for a small subset of conceivable cases are planetary systems closely like our own generated. Many models have tendencies towards one of two preferred configurations: multiple star systems, or planetary systems in which Jovian planets either have substantially smaller masses than in our system or are absent altogether. But for a wide range of cases recognizable planetary systems are generated - ranging from multiple star systems with accompanying planets, to systems with Jovian planets at several hundred AU, to single stars surrounded only by asteroids.
Medina, Jared; Cason, Samuel
2017-09-01
A substantial number of studies have been published over the last decade, claiming that transcranial direct current stimulation (tDCS) can influence performance on cognitive tasks. However, there is some skepticism regarding the efficacy of tDCS, and evidence from meta-analyses is mixed. One major weakness of these meta-analyses is that they only examine outcomes in published studies. Given biases towards publishing positive results in the scientific literature, there may be a substantial "file-drawer" of unpublished negative results in the tDCS literature. Furthermore, multiple researcher degrees of freedom can also inflate published p-values. Recently, Simonsohn, Nelson and Simmons (2014) created a novel meta-analytic tool that examines the distribution of significant p-values in a literature, and compares it to expected distributions with different effect sizes. Using this tool, one can assess whether the selected studies have evidential value. Therefore, we examined a random selection of studies that used tDCS to alter performance on cognitive tasks, and tDCS studies on working memory in a recently published meta-analysis (Mancuso et al., 2016). Using a p-curve analysis, we found no evidence that the tDCS studies had evidential value (33% power or greater), with the estimate of statistical power of these studies being approximately 14% for the cognitive studies, and 5% (what would be expected from randomly generated data) for the working memory studies. It is likely that previous tDCS studies are substantially underpowered, and we provide suggestions for future research to increase the evidential value of future tDCS studies. Copyright © 2017 Elsevier Ltd. All rights reserved.
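A simulation sketch of the p-curve logic, assuming simple two-sample t-tests: when the true effect is null, significant p-values are spread evenly over (0, .05), whereas real effects pile them up near zero, and the observed share of very small p-values is what carries the evidential-value test. Sample sizes and effect sizes below are arbitrary, and this is not the Simonsohn et al. tool itself.

```python
# Sketch of the p-curve logic: simulate two-sample t-tests at a given true effect
# and look at the shape of the significant p-values.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def significant_p_values(effect_size, n_per_group=20, n_studies=20000):
    a = rng.normal(0, 1, (n_studies, n_per_group))
    b = rng.normal(effect_size, 1, (n_studies, n_per_group))
    p = stats.ttest_ind(b, a, axis=1).pvalue
    return p[p < 0.05]

for label, d in (("null effect", 0.0), ("modest effect", 0.5)):
    p_sig = significant_p_values(d)
    share_small = np.mean(p_sig < 0.025)          # fraction of significant p's below .025
    print(f"{label}: {share_small:.2f} of significant p-values fall below .025")
```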
Code of Federal Regulations, 2010 CFR
2010-04-01
... 26 Internal Revenue 2 2010-04-01 2010-04-01 false Treatment of distributions where substantially all contributions are employee contributions (temporary). 1.72(e)-1T Section 1.72(e)-1T Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES (CONTINUED) Items Specifically Included in...
Combined online and offline adaptive radiation therapy: a dosimetric feasibility study.
Yang, Chengliang; Liu, Feng; Ahunbay, Ergun; Chang, Yu-Wen; Lawton, Colleen; Schultz, Christopher; Wang, Dian; Firat, Selim; Erickson, Beth; Li, X Allen
2014-01-01
The purpose of this work is to explore a new adaptive radiation therapy (ART) strategy, combined "online and offline" ART, that can fully account for interfraction variations similar to the existing online ART but with substantially reduced online effort. The concept for the combined ART is to perform online ART only for the fractions with obvious interfraction variations and to deliver the ART plan for that online fraction as well as the subsequent fractions until the next online fraction needs to be adapted. To demonstrate the idea, the daily computed tomographic (CT) data acquired during image guided radiation therapy (IGRT) with an in-room CT (CTVision, Siemens Healthcare, Amarillo, TX) for 6 representative patients (2 prostate cancer, 1 head-and-neck cancer, 1 pancreatic cancer, 1 adrenal carcinoma, and 1 craniopharyngioma patients) were analyzed. Three types of plans were generated based on the following selected daily CTs: (1) IGRT repositioning plan, generated by applying the repositioning shifts to the original plan (representing the current IGRT practice); (2) Re-Opt plan, generated with full-scope optimization; and (3) ART plan, either an online ART plan generated with an online ART tool (RealArt, Prowess Inc, Concord, CA) or an offline ART plan generated with shifts from the online ART plan. Various dose-volume parameters were compared to measure the dosimetric benefits of the ART plans based on daily dose distributions and the cumulative dose maps obtained with deformable image registration. In general, for all the cases studied, the ART (with 3-5 online ART fractions) and Re-Opt plans provide comparable plan quality and offer significantly better target coverage and normal tissue sparing when compared with the repositioning plans. This improvement is statistically significant. The combined online and offline ART is dosimetrically equivalent to the online ART but with substantially reduced online effort, and enables immediate delivery of the adaptive plan when an obvious anatomic change is observed. Copyright © 2014 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.
Non-Maxwellian fast particle effects in gyrokinetic GENE simulations
NASA Astrophysics Data System (ADS)
Di Siena, A.; Görler, T.; Doerk, H.; Bilato, R.; Citrin, J.; Johnson, T.; Schneider, M.; Poli, E.; JET Contributors
2018-04-01
Fast ions have recently been found to significantly impact and partially suppress plasma turbulence in both experimental and numerical studies in a number of scenarios. Understanding the underlying physics and identifying the range of their beneficial effect is an essential task for future fusion reactors, where highly energetic ions are generated through fusion reactions and external heating schemes. However, in many of the gyrokinetic codes fast ions are, for simplicity, treated as equivalent-Maxwellian-distributed particle species, although it is well known that to rigorously model highly non-thermalised particles, a non-Maxwellian background distribution function is needed. To study the impact of this assumption, the gyrokinetic code GENE has recently been extended to support arbitrary background distribution functions, which might be either analytical, e.g., slowing down and bi-Maxwellian, or obtained from numerical fast ion models. A particular JET plasma with strong fast-ion-related turbulence suppression is revisited with these new code capabilities in both linear and nonlinear gyrokinetic simulations. It appears that the fast ion stabilization tends to be less strong but still substantial with more realistic distributions, and this improves the quantitative power balance agreement with experiments.
Comparison of Aero-Propulsive Performance Predictions for Distributed Propulsion Configurations
NASA Technical Reports Server (NTRS)
Borer, Nicholas K.; Derlaga, Joseph M.; Deere, Karen A.; Carter, Melissa B.; Viken, Sally A.; Patterson, Michael D.; Litherland, Brandon L.; Stoll, Alex M.
2017-01-01
NASA's X-57 "Maxwell" flight demonstrator incorporates distributed electric propulsion technologies in a design that will achieve a significant reduction in energy used in cruise flight. A substantial portion of these energy savings come from beneficial aerodynamic-propulsion interaction. Previous research has shown the benefits of particular instantiations of distributed propulsion, such as the use of wingtip-mounted cruise propellers and leading edge high-lift propellers. However, these benefits have not been reduced to a generalized design or analysis approach suitable for large-scale design exploration. This paper discusses the rapid, "design-order" toolchains developed to investigate the large, complex tradespace of candidate geometries for the X-57. Due to the lack of an appropriate, rigorous set of validation data, the results of these tools were compared to three different computational flow solvers for selected wing and propulsion geometries. The comparisons were conducted using a common input geometry, but otherwise different input grids and, when appropriate, different flow assumptions to bound the comparisons. The results of these studies showed that the X-57 distributed propulsion wing should be able to meet the as-designed performance in cruise flight, while also meeting or exceeding targets for high-lift generation in low-speed flight.
Convergence of pattern generator outputs on a common mechanism of diaphragm motor unit recruitment.
Mantilla, Carlos B; Seven, Yasin B; Sieck, Gary C
2014-01-01
Motor units are the final element of neuromotor control. In a manner analogous to the organization of neuromotor control in other skeletal muscles, diaphragm motor units comprise phrenic motoneurons located in the cervical spinal cord that innervate the diaphragm muscle, the main inspiratory muscle in mammals. Diaphragm motor units play a primary role in sustaining ventilation but are also active in other nonventilatory behaviors, including coughing, sneezing, vomiting, defecation, and parturition. Diaphragm muscle fibers comprise all fiber types. Thus, diaphragm motor units display substantial differences in contractile and fatigue properties, but importantly, the properties of the motoneuron and muscle fibers within a motor unit are matched. As in other skeletal muscles, diaphragm motor units are recruited in order such that motor units that display greater fatigue resistance are recruited earlier and more often than more fatigable motor units. The properties of the motor unit population are critical determinants of the function of a skeletal muscle across the range of possible motor tasks. Accordingly, fatigue-resistant motor units are sufficient to generate the forces necessary for ventilatory behaviors, whereas more fatigable units are only activated during expulsive behaviors important for airway clearance. Neuromotor control of diaphragm motor units may reflect selective inputs from distinct pattern generators distributed according to the motor unit properties necessary to accomplish these different motor tasks. In contrast, widely distributed inputs to phrenic motoneurons from various pattern generators (e.g., for breathing, coughing, or vocalization) would dictate recruitment order based on intrinsic electrophysiological properties. © 2014 Elsevier B.V. All rights reserved.
Structure of the Magnetotail Current Sheet
NASA Technical Reports Server (NTRS)
Larson, Douglas J.; Kaufmann, Richard L.
1996-01-01
An orbit tracing technique was used to generate current sheets for three magnetotail models. Groups of ions were followed to calculate the resulting cross-tail current. Several groups then were combined to produce a current sheet. The goal is a model in which the ions and associated electrons carry the electric current distribution needed to generate the magnetic field B in which ion orbits were traced. The region -20 R(E) less than x less than -14 R(E) in geocentric solar magnetospheric coordinates was studied. Emphasis was placed on identifying the categories of ion orbits which contribute most to the cross-tail current and on gaining physical insight into the manner by which the ions carry the observed current distribution. Ions that were trapped near z = 0, ions that magnetically mirrored throughout the current sheet, and ions that mirrored near the Earth all were needed. The current sheet structure was determined primarily by ion magnetization currents. Electrons of the observed energies carried relatively little cross-tail current in these quiet time current sheets. Distribution functions were generated and integrated to evaluate fluid parameters. An earlier model in which B depended only on z produced a consistent current sheet, but it did not provide a realistic representation of the Earth's middle magnetotail. In the present study, B changed substantially in the x and z directions but only weakly in the y direction within our region of interest. Plasmas with three characteristic particle energies were used with each of the magnetic field models. A plasma was found for each model in which the density, average energy, cross-tail current, and bulk flow velocity agreed well with satellite observations.
Mittag, U.; Kriechbaumer, A.; Rittweger, J.
2017-01-01
The authors propose a new 3D interpolation algorithm for the generation of digital geometric 3D-models of bones from existing image stacks obtained by peripheral Quantitative Computed Tomography (pQCT) or Magnetic Resonance Imaging (MRI). The technique is based on the interpolation of radial gray value profiles of the pQCT cross sections. The method has been validated by using an ex-vivo human tibia and by comparing interpolated pQCT images with images from scans taken at the same position. A diversity index of <0.4 (1 meaning maximal diversity) even for the structurally complex region of the epiphysis, along with the good agreement of mineral-density-weighted cross-sectional moment of inertia (CSMI), demonstrate the high quality of our interpolation approach. Thus the authors demonstrate that this interpolation scheme can substantially improve the generation of 3D models from sparse scan sets, not only with respect to the outer shape but also with respect to the internal gray-value derived material property distribution. PMID:28574415
WE-A-17A-12: The Influence of Eye Plaque Design On Dose Distributions and Dose-Volume Histograms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aryal, P; Molloy, JA; Rivard, MJ
Purpose: To investigate the effect of slot design of the model EP917 plaque on dose distributions and dose-volume histograms (DVHs). Methods: The dimensions and orientation of the slots in EP917 plaques were measured. In the MCNP5 radiation simulation geometry, dose distributions on orthogonal planes and DVHs for a tumor and sclera were generated for comparisons. 27 slot designs and 13 plaques were evaluated and compared with the published literature and the Plaque Simulator clinical treatment planning system. Results: The dosimetric effect of the gold backing composition and mass density was < 3%. Slot depth, width, and length changed the central axis (CAX) dose distributions by < 1% per 0.1 mm of design variation. Seed shifts in the slot towards the eye and shifts of the 125I-coated Ag rod within the capsule had the greatest impact on the CAX dose distribution, increasing it by 14%, 9%, 4%, and 2.5% at 1, 2, 5, and 10 mm, respectively, from the inner sclera. Along the CAX, dose from the full plaque geometry using the measured slot design was 3.4% ± 2.3% higher than for the manufacturer-provided geometry. D10 for the simulated tumor, inner sclera, and outer sclera for the measured plaque was also higher, by 9%, 10%, and 20%, respectively. In comparison to the measured plaque design, a theoretical plaque having narrow and deep slots delivered 30%, 37%, and 62% lower D10 doses to the tumor, inner sclera, and outer sclera, respectively. CAX doses at −1, 0, 1, and 2 mm were also lower, by factors of 2.6, 1.4, 1.23, and 1.13, respectively. Conclusion: The study identified substantial sensitivity of the EP917 plaque dose distributions to slot design. However, it did not identify substantial dosimetric variations based on radionuclide choice (125I, 103Pd, or 131Cs). COMS plaques provided lower scleral doses with similar tumor dose coverage.
Michiels, Bart; Heyvaert, Mieke; Onghena, Patrick
2018-04-01
The conditional power (CP) of the randomization test (RT) was investigated in a simulation study in which three different single-case effect size (ES) measures were used as the test statistics: the mean difference (MD), the percentage of nonoverlapping data (PND), and the nonoverlap of all pairs (NAP). Furthermore, we studied the effect of the experimental design on the RT's CP for three different single-case designs with rapid treatment alternation: the completely randomized design (CRD), the randomized block design (RBD), and the restricted randomized alternation design (RRAD). As a third goal, we evaluated the CP of the RT for three types of simulated data: data generated from a standard normal distribution, data generated from a uniform distribution, and data generated from a first-order autoregressive Gaussian process. The results showed that the MD and NAP perform very similarly in terms of CP, whereas the PND performs substantially worse. Furthermore, the RRAD yielded marginally higher power in the RT, followed by the CRD and then the RBD. Finally, the power of the RT was almost unaffected by the type of the simulated data. On the basis of the results of the simulation study, we recommend at least 20 measurement occasions for single-case designs with a randomized treatment order that are to be evaluated with an RT using a 5% significance level. Furthermore, we do not recommend use of the PND, because of its low power in the RT.
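As an illustration of the quantities compared in this study, the sketch below computes the nonoverlap of all pairs (NAP) and runs a Monte Carlo approximation of a randomization test for a design with a randomized treatment order; it is a simplified stand-in for the authors' simulation code, and the condition labels, sample size, and data-generation step are purely illustrative.

```python
import numpy as np

def nap(baseline, treatment):
    """Nonoverlap of All Pairs: share of (baseline, treatment) pairs in which the
    treatment value exceeds the baseline value (ties count as 0.5)."""
    a = np.asarray(baseline, float)[:, None]
    b = np.asarray(treatment, float)[None, :]
    return (np.sum(b > a) + 0.5 * np.sum(b == a)) / (a.size * b.size)

def randomization_test(scores, labels, stat=nap, n_perm=5000, seed=0):
    """One-sided Monte Carlo randomization test: treatment labels are reshuffled
    over measurement occasions, as in a completely randomized design (CRD)."""
    rng = np.random.default_rng(seed)
    scores = np.asarray(scores, float)
    labels = np.asarray(labels)
    observed = stat(scores[labels == "A"], scores[labels == "B"])
    exceed = 0
    for _ in range(n_perm):
        perm = rng.permutation(labels)
        if stat(scores[perm == "A"], scores[perm == "B"]) >= observed:
            exceed += 1
    return observed, (exceed + 1) / (n_perm + 1)

# Illustrative data: 20 measurement occasions, a 1-SD treatment effect under condition B
rng = np.random.default_rng(1)
labels = rng.permutation(["A"] * 10 + ["B"] * 10)
scores = rng.normal(size=20) + (labels == "B")
print(randomization_test(scores, labels))
```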
Krylov, Igor B; Kompanets, Mykhailo O; Novikova, Katerina V; Opeida, Iosip O; Kushch, Olga V; Shelimov, Boris N; Nikishin, Gennady I; Levitsky, Dmitri O; Terent'ev, Alexander O
2016-01-14
Nitroxyl radicals are widely used in chemistry, materials sciences, and biology. Imide-N-oxyl radicals are a subclass of nitroxyl radicals that have proved to be useful catalysts and mediators of selective oxidation and C-H functionalization. An efficient metal-free method was developed for the generation of imide-N-oxyl radicals from N-hydroxyimides at room temperature by reaction with (diacetoxyiodo)benzene. The method allows for the production of high concentrations of free radicals and provides high resolution of their EPR spectra, exhibiting the superhyperfine structure from benzene ring protons distant from the radical center. An analysis of the spectra shows that, regardless of the electronic effects of the substituents in the benzene ring, the superhyperfine coupling constant of the unpaired electron with the distant protons at positions 4 and 5 of the aromatic system is substantially greater than that with the protons at positions 3 and 6 that are closer to the N-oxyl radical center. This is indicative of an unusual character of the spin density distribution of the unpaired electron in substituted phthalimide-N-oxyl radicals. Understanding the nature of the electron density distribution in imide-N-oxyl radicals may be useful for the development of commercial mediators of oxidation based on N-hydroxyimides.
A weighted U-statistic for genetic association analyses of sequencing data.
Wei, Changshuai; Li, Ming; He, Zihuai; Vsevolozhskaya, Olga; Schaid, Daniel J; Lu, Qing
2014-12-01
With advancements in next-generation sequencing technology, a massive amount of sequencing data is generated, which offers a great opportunity to comprehensively investigate the role of rare variants in the genetic etiology of complex diseases. Nevertheless, the high-dimensional sequencing data poses a great challenge for statistical analysis. Association analyses based on traditional statistical methods suffer substantial power loss because of the low frequency of genetic variants and the extremely high dimensionality of the data. We developed a Weighted U Sequencing test, referred to as WU-SEQ, for the high-dimensional association analysis of sequencing data. Based on a nonparametric U-statistic, WU-SEQ makes no assumption about the underlying disease model and phenotype distribution, and can be applied to a variety of phenotypes. Through simulation studies and an empirical study, we showed that WU-SEQ outperformed the commonly used sequence kernel association test (SKAT) when the underlying assumptions were violated (e.g., the phenotype followed a heavy-tailed distribution). Even when the assumptions were satisfied, WU-SEQ still attained comparable performance to SKAT. Finally, we applied WU-SEQ to sequencing data from the Dallas Heart Study (DHS), and detected an association between ANGPTL4 and very-low-density lipoprotein cholesterol. © 2014 WILEY PERIODICALS, INC.
Chodera, John D; Shirts, Michael R
2011-11-21
The widespread popularity of replica exchange and expanded ensemble algorithms for simulating complex molecular systems in chemistry and biophysics has generated much interest in discovering new ways to enhance the phase space mixing of these protocols in order to improve sampling of uncorrelated configurations. Here, we demonstrate how both of these classes of algorithms can be considered as special cases of Gibbs sampling within a Markov chain Monte Carlo framework. Gibbs sampling is a well-studied scheme in the field of statistical inference in which different random variables are alternately updated from conditional distributions. While the update of the conformational degrees of freedom by Metropolis Monte Carlo or molecular dynamics unavoidably generates correlated samples, we show how judicious updating of the thermodynamic state indices--corresponding to thermodynamic parameters such as temperature or alchemical coupling variables--can substantially increase mixing while still sampling from the desired distributions. We show how state update methods in common use can lead to suboptimal mixing, and present some simple, inexpensive alternatives that can increase mixing of the overall Markov chain, reducing simulation times necessary to obtain estimates of the desired precision. These improved schemes are demonstrated for several common applications, including an alchemical expanded ensemble simulation, parallel tempering, and multidimensional replica exchange umbrella sampling.
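The core idea, updating the thermodynamic state index from its full conditional distribution rather than by neighbour swaps, can be sketched as below for a single-replica expanded ensemble; the toy harmonic system, the function names, and the flat log-weights are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def gibbs_update_state(x, reduced_potentials, log_weights, rng):
    """Draw a new thermodynamic state index k from its full conditional
    p(k | x) proportional to exp(g_k - u_k(x)), i.e. independence (global jump)
    sampling.  reduced_potentials(x) returns u_k(x) for all K states."""
    u = reduced_potentials(x)                 # shape (K,)
    logp = log_weights - u
    logp -= logp.max()                        # stabilise the normalisation
    p = np.exp(logp)
    p /= p.sum()
    return rng.choice(len(p), p=p)

# Toy example: one harmonic "configuration" degree of freedom sampled at K temperatures
rng = np.random.default_rng(0)
betas = np.linspace(0.2, 1.0, 5)              # inverse temperatures (hypothetical values)
u_of_x = lambda x: betas * 0.5 * x**2         # reduced potentials u_k(x) = beta_k * U(x)
g = np.zeros_like(betas)                      # log weights (flat, as in plain tempering)

x, k = 0.0, 0
for step in range(1000):
    # (1) update the configuration at fixed state k (here an exact Gaussian draw)
    x = rng.normal(scale=1.0 / np.sqrt(betas[k]))
    # (2) Gibbs update of the state index given the configuration
    k = gibbs_update_state(x, u_of_x, g, rng)
```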
NASA Astrophysics Data System (ADS)
Yepes, Pablo P.; Eley, John G.; Liu, Amy; Mirkovic, Dragan; Randeniya, Sharmalee; Titt, Uwe; Mohan, Radhe
2016-04-01
Monte Carlo (MC) methods are acknowledged as the most accurate technique for calculating dose distributions. However, due to their lengthy calculation times, they are difficult to utilize in the clinic or for large retrospective studies. Track-repeating algorithms, based on MC-generated particle track data in water, accelerate dose calculations substantially while essentially preserving the accuracy of MC. In this study, we present the validation of an efficient dose calculation algorithm for intensity modulated proton therapy, the fast dose calculator (FDC), based on a track-repeating technique. We validated the FDC algorithm for 23 patients, which included 7 brain, 6 head-and-neck, 5 lung, 1 spine, 1 pelvis and 3 prostate cases. For validation, we compared FDC-generated dose distributions with those from a full-fledged Monte Carlo based on GEANT4 (G4). We compared dose-volume histograms and 3D gamma indices, and analyzed a series of dosimetric indices. More than 99% of the voxels in the voxelized phantoms describing the patients have a gamma index smaller than unity for the 2%/2 mm criteria. In addition, the difference relative to the prescribed dose between the dosimetric indices calculated with FDC and G4 is less than 1%. FDC reduces the calculation times from 5 ms per proton to around 5 μs.
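For readers who want to reproduce the comparison metric, a brute-force global gamma-index evaluation of the kind quoted above (2%/2 mm) can be sketched as follows; this is an illustrative O(N^2) implementation on co-registered grids, not the FDC/GEANT4 validation code, and it is far slower than production tools.

```python
import numpy as np

def gamma_index(dose_ref, dose_eval, spacing, dd=0.02, dta=2.0, low_cut=0.1):
    """Brute-force global 3D gamma index (defaults: 2% / 2 mm).
    dose_ref, dose_eval : co-registered dose arrays of identical shape
    spacing             : voxel size (dz, dy, dx) in mm
    dd                  : dose-difference criterion, fraction of max reference dose
    dta                 : distance-to-agreement criterion in mm
    low_cut             : skip reference voxels below this fraction of the max dose
    Returns the gamma array (NaN where the low-dose cut-off applies)."""
    dmax = float(dose_ref.max())
    axes = [np.arange(n) * s for n, s in zip(dose_ref.shape, spacing)]
    grid = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1)   # voxel positions, mm
    pts_eval = grid.reshape(-1, 3)
    d_eval = dose_eval.reshape(-1)
    gamma = np.full(dose_ref.shape, np.nan)
    for idx in np.ndindex(dose_ref.shape):
        if dose_ref[idx] < low_cut * dmax:
            continue
        dist2 = np.sum((pts_eval - grid[idx]) ** 2, axis=1) / dta ** 2
        dose2 = ((d_eval - dose_ref[idx]) / (dd * dmax)) ** 2
        gamma[idx] = np.sqrt(np.min(dist2 + dose2))
    return gamma

# Toy check on a small grid: a 1% uniform dose offset passes the 2%/2 mm test everywhere
rng = np.random.default_rng(0)
ref = rng.random((10, 10, 10))
ev = ref + 0.01 * ref.max()
g = gamma_index(ref, ev, spacing=(2.0, 2.0, 2.0))
print(np.mean(g[~np.isnan(g)] <= 1.0))   # gamma passing rate
```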
XIAO, Xiangming; DONG, Jinwei; QIN, Yuanwei; WANG, Zongming
2016-01-01
Information on paddy rice distribution is essential for food production and methane emission calculation. Phenology-based algorithms have been utilized in the mapping of paddy rice fields by identifying the unique flooding and seedling transplanting phases using multi-temporal moderate resolution (500 m to 1 km) images. In this study, we developed simple algorithms to identify paddy rice at a fine resolution at the regional scale using multi-temporal Landsat imagery. Sixteen Landsat images from 2010–2012 were used to generate the 30 m paddy rice map in the Sanjiang Plain, northeast China—one of the major paddy rice cultivation regions in China. Three vegetation indices, the Normalized Difference Vegetation Index (NDVI), Enhanced Vegetation Index (EVI), and Land Surface Water Index (LSWI), were used to identify rice fields during the flooding/transplanting and ripening phases. The user and producer accuracies of paddy rice on the resulting Landsat-based paddy rice map were 90% and 94%, respectively. The Landsat-based paddy rice map was an improvement over the paddy rice layer of the National Land Cover Dataset, which was generated through visual interpretation and digitization of fine-resolution images. The agricultural census data substantially underreported paddy rice area, raising serious concern about its use for studies on food security. PMID:27695637
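The three indices, and the flooding signal they are combined into, can be computed from surface reflectance bands as sketched below; the band arguments and the +0.05 threshold follow common practice in phenology-based rice mapping and are assumptions here, not the exact thresholds calibrated in this study.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

def evi(nir, red, blue):
    """Enhanced Vegetation Index (standard MODIS/Landsat coefficients)."""
    return 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)

def lswi(nir, swir):
    """Land Surface Water Index."""
    return (nir - swir) / (nir + swir)

def flooding_signal(blue, red, nir, swir, t=0.05):
    """Flooding/transplanting detection: during inundation LSWI approaches or
    exceeds the greenness indices.  The threshold t is an assumed value."""
    l = lswi(nir, swir)
    return (l + t >= ndvi(nir, red)) | (l + t >= evi(nir, red, blue))

# Example with surface reflectance values for a single flooded-paddy-like pixel
print(flooding_signal(blue=0.05, red=0.06, nir=0.15, swir=0.10))
```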
Brown, Kenneth Dewayne [Grain Valley, MO; Dunson, David [Kansas City, MO
2006-08-08
A distributed data transmitter (DTXR) which is an adaptive data communication microwave transmitter having a distributable architecture of modular components, and which incorporates both digital and microwave technology to provide substantial improvements in physical and operational flexibility. The DTXR has application in, for example, remote data acquisition involving the transmission of telemetry data across a wireless link, wherein the DTXR is integrated into and utilizes available space within a system (e.g., a flight vehicle). In a preferred embodiment, the DTXR broadly comprises a plurality of input interfaces; a data modulator; a power amplifier; and a power converter, all of which are modularly separate and distinct so as to be substantially independently physically distributable and positionable throughout the system wherever sufficient space is available.
Brown, Kenneth Dewayne [Grain Valley, MO; Dunson, David [Kansas City, MO
2008-06-03
A distributed data transmitter (DTXR) which is an adaptive data communication microwave transmitter having a distributable architecture of modular components, and which incorporates both digital and microwave technology to provide substantial improvements in physical and operational flexibility. The DTXR has application in, for example, remote data acquisition involving the transmission of telemetry data across a wireless link, wherein the DTXR is integrated into and utilizes available space within a system (e.g., a flight vehicle). In a preferred embodiment, the DTXR broadly comprises a plurality of input interfaces; a data modulator; a power amplifier; and a power converter, all of which are modularly separate and distinct so as to be substantially independently physically distributable and positionable throughout the system wherever sufficient space is available.
Permanent magnet edge-field quadrupole
Tatchyn, R.O.
1997-01-21
Planar permanent magnet edge-field quadrupoles for use in particle accelerating machines and in insertion devices designed to generate spontaneous or coherent radiation from moving charged particles are disclosed. The invention comprises four magnetized rectangular pieces of permanent magnet material with substantially similar dimensions arranged into two planar arrays situated to generate a field with a substantially dominant quadrupole component in regions close to the device axis. 10 figs.
Permanent magnet edge-field quadrupole
Tatchyn, Roman O.
1997-01-01
Planar permanent magnet edge-field quadrupoles for use in particle accelerating machines and in insertion devices designed to generate spontaneous or coherent radiation from moving charged particles are disclosed. The invention comprises four magnetized rectangular pieces of permanent magnet material with substantially similar dimensions arranged into two planar arrays situated to generate a field with a substantially dominant quadrupole component in regions close to the device axis.
Extended core for motor/generator
Shoykhet, Boris A.
2005-05-10
An extended stator core in a motor/generator can be utilized to mitigate losses in end regions of the core and a frame of the motor/generator. To mitigate the losses, the stator core can be extended to a length substantially equivalent to or greater than a length of a magnetically active portion in the rotor. Alternatively, a conventional length stator core can be utilized with a shortened magnetically active portion to mitigate losses in the motor/generator. To mitigate the losses in the core caused by stator winding, the core can be extended to a length substantially equivalent or greater than a length of stator winding.
Extended core for motor/generator
Shoykhet, Boris A.
2006-08-22
An extended stator core in a motor/generator can be utilized to mitigate losses in end regions of the core and a frame of the motor/generator. To mitigate the losses, the stator core can be extended to a length substantially equivalent to or greater than a length of a magnetically active portion in the rotor. Alternatively, a conventional length stator core can be utilized with a shortened magnetically active portion to mitigate losses in the motor/generator. To mitigate the losses in the core caused by stator winding, the core can be extended to a length substantially equivalent or greater than a length of stator winding.
Numerical simulation of helical flow in a cylindrical channel
NASA Astrophysics Data System (ADS)
Vasiliev, A.; Sukhanovskii, A.; Stepanov, R.
2017-06-01
Numerical simulation of the helical flow in a cylindrical channel with a diverter was carried out using the open-source software OpenFOAM Extend 4.0. The velocity, vorticity and helicity density distributions were analyzed. It was shown that the azimuthal contribution to the helicity is negative near the wall and positive in the center, whereas the axial contribution is negative in the center and positive near the wall. Analysis of the helicity of the non-axisymmetric part of the flow showed that it has substantial values near the diverter but then rapidly decreases with y (the axial coordinate), and further downstream it can be neglected. Dependencies of the integrated values of the azimuthal Hϕ and axial Hy contributions to the helicity density on y show a remarkable quantitative similarity. It was found that the integral values of Hϕ and Hy are negative for all y. The magnitudes of Hϕ and Hy decrease after the diverter up to y ≈ 70 mm and after that increase monotonically. The flow behind the diverter is characterized by a substantial amount of helicity and can be used as a helicity generator.
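For reference, the helicity density analysed above is the scalar product of velocity and vorticity; a minimal finite-difference sketch on a uniform Cartesian grid is given below. The azimuthal and axial contributions quoted in the abstract presumably correspond to the u_phi*omega_phi and u_y*omega_y terms of this sum in the channel's coordinates. This is illustrative post-processing, not the authors' OpenFOAM workflow.

```python
import numpy as np

def helicity_density(u, v, w, dx, dy, dz):
    """Helicity density h = u . (curl u) on a uniform Cartesian grid.
    u, v, w : velocity components indexed as [ix, iy, iz]
    dx, dy, dz : grid spacings along the three axes."""
    # np.gradient returns the derivatives along axes 0, 1, 2 in that order
    du_dx, du_dy, du_dz = np.gradient(u, dx, dy, dz)
    dv_dx, dv_dy, dv_dz = np.gradient(v, dx, dy, dz)
    dw_dx, dw_dy, dw_dz = np.gradient(w, dx, dy, dz)
    wx = dw_dy - dv_dz            # vorticity components
    wy = du_dz - dw_dx
    wz = dv_dx - du_dy
    return u * wx + v * wy + w * wz
```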
Analysis on Voltage Profile of Distribution Network with Distributed Generation
NASA Astrophysics Data System (ADS)
Shao, Hua; Shi, Yujie; Yuan, Jianpu; An, Jiakun; Yang, Jianhua
2018-02-01
The penetration of distributed generation affects a distribution network's load flow, voltage profile, reliability, power losses and so on. After analyzing these impacts and the typical structures of grid-connected distributed generation, the backward/forward sweep load flow calculation is modelled for a distribution network that includes distributed generation. The voltage profiles of the distribution network, as affected by the installation location and the capacity of the distributed generation, are thoroughly investigated and simulated. The impacts on the voltage profiles are summarized, and some suggestions on the installation location and the capacity of distributed generation are given accordingly.
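A minimal per-unit sketch of the backward/forward sweep with distributed generation modelled as a negative PQ load is given below for a single radial feeder (a chain of buses); the branch data, loads and DG size are illustrative, and the paper's implementation for general radial networks is not reproduced here.

```python
import numpy as np

def backward_forward_sweep(z_line, s_load, s_dg, v_source=1.0 + 0j,
                           tol=1e-8, max_iter=50):
    """Backward/forward sweep load flow for a single radial feeder (per-unit).
    z_line[i] : series impedance of the branch feeding bus i+1
    s_load[i] : complex load P + jQ at bus i+1
    s_dg[i]   : complex injection of distributed generation at bus i+1
                (modelled as a negative load, i.e. a PQ injection)
    Returns the complex bus voltages [v_source, v_1, ..., v_n]."""
    n = len(z_line)
    v = np.full(n + 1, v_source, dtype=complex)
    for _ in range(max_iter):
        # backward sweep: accumulate branch currents from the feeder end to the source
        i_inj = np.conj((s_load - s_dg) / v[1:])       # bus injection currents
        i_branch = np.cumsum(i_inj[::-1])[::-1]        # branch i feeds buses i+1..n
        # forward sweep: update voltages from the source towards the feeder end
        v_new = v.copy()
        for i in range(n):
            v_new[i + 1] = v_new[i] - z_line[i] * i_branch[i]
        if np.max(np.abs(v_new - v)) < tol:
            return v_new
        v = v_new
    return v

# Illustrative 4-bus feeder: DG at the last bus raises the end-of-feeder voltage
z = np.array([0.02 + 0.04j] * 4)
load = np.array([0.1 + 0.05j] * 4)
print(np.abs(backward_forward_sweep(z, load, s_dg=np.zeros(4))))
print(np.abs(backward_forward_sweep(z, load, s_dg=np.array([0, 0, 0, 0.2 + 0j]))))
```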
NiftyNet: a deep-learning platform for medical imaging.
Gibson, Eli; Li, Wenqi; Sudre, Carole; Fidon, Lucas; Shakir, Dzhoshkun I; Wang, Guotai; Eaton-Rosen, Zach; Gray, Robert; Doel, Tom; Hu, Yipeng; Whyntie, Tom; Nachev, Parashkev; Modat, Marc; Barratt, Dean C; Ourselin, Sébastien; Cardoso, M Jorge; Vercauteren, Tom
2018-05-01
Medical image analysis and computer-assisted intervention problems are increasingly being addressed with deep-learning-based solutions. Established deep-learning platforms are flexible but do not provide specific functionality for medical image analysis, and adapting them for this domain of application requires substantial implementation effort. Consequently, there has been substantial duplication of effort and incompatible infrastructure developed across many research groups. This work presents the open-source NiftyNet platform for deep learning in medical imaging. The ambition of NiftyNet is to accelerate and simplify the development of these solutions, and to provide a common mechanism for disseminating research outputs for the community to use, adapt and build upon. The NiftyNet infrastructure provides a modular deep-learning pipeline for a range of medical imaging applications including segmentation, regression, image generation and representation learning applications. Components of the NiftyNet pipeline including data loading, data augmentation, network architectures, loss functions and evaluation metrics are tailored to, and take advantage of, the idiosyncrasies of medical image analysis and computer-assisted intervention. NiftyNet is built on the TensorFlow framework and supports features such as TensorBoard visualization of 2D and 3D images and computational graphs by default. We present three illustrative medical image analysis applications built using NiftyNet infrastructure: (1) segmentation of multiple abdominal organs from computed tomography; (2) image regression to predict computed tomography attenuation maps from brain magnetic resonance images; and (3) generation of simulated ultrasound images for specified anatomical poses. The NiftyNet infrastructure enables researchers to rapidly develop and distribute deep learning solutions for segmentation, regression, image generation and representation learning applications, or extend the platform to new applications. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
Deriving photometric redshifts using fuzzy archetypes and self-organizing maps - I. Methodology
NASA Astrophysics Data System (ADS)
Speagle, Joshua S.; Eisenstein, Daniel J.
2017-07-01
We propose a method to substantially increase the flexibility and power of template fitting-based photometric redshifts by transforming a large number of galaxy spectral templates into a corresponding collection of 'fuzzy archetypes' using a suitable set of perturbative priors designed to account for empirical variation in dust attenuation and emission-line strengths. To bypass widely separated degeneracies in parameter space (e.g. the redshift-reddening degeneracy), we train self-organizing maps (SOMs) on large 'model catalogues' generated from Monte Carlo sampling of our fuzzy archetypes to cluster the predicted observables in a topologically smooth fashion. Subsequent sampling over the SOM then allows full reconstruction of the relevant probability distribution functions (PDFs). This combined approach enables the multimodal exploration of known variation among galaxy spectral energy distributions with minimal modelling assumptions. We demonstrate the power of this approach to recover full redshift PDFs using discrete Markov chain Monte Carlo sampling methods combined with SOMs constructed from Large Synoptic Survey Telescope ugrizY and Euclid YJH mock photometry.
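The clustering step can be illustrated with a minimal incremental self-organizing map of the kind used to organize model-catalogue observables; this is a generic SOM sketch under assumed (not the paper's) grid size, learning-rate and neighbourhood schedules, and the mock photometry at the end is random placeholder data.

```python
import numpy as np

def train_som(data, grid=(20, 20), n_iter=20000, lr0=0.5, sigma0=None, seed=0):
    """Minimal self-organizing map: data is an (N, D) array, e.g. model magnitudes
    or colours drawn from Monte Carlo samples.  Returns the (gx, gy, D) cell weights."""
    rng = np.random.default_rng(seed)
    gx, gy = grid
    if sigma0 is None:
        sigma0 = max(gx, gy) / 2.0
    weights = rng.normal(size=(gx, gy, data.shape[1]))
    cells = np.stack(np.meshgrid(np.arange(gx), np.arange(gy), indexing="ij"), axis=-1)
    for t in range(n_iter):
        frac = t / n_iter
        lr = lr0 * (1.0 - frac)                     # linearly decaying learning rate
        sigma = sigma0 * (1.0 - frac) + 1.0         # shrinking neighbourhood radius
        x = data[rng.integers(len(data))]
        # best-matching unit for this sample
        bmu = np.unravel_index(np.argmin(((weights - x) ** 2).sum(-1)), (gx, gy))
        # Gaussian neighbourhood update around the BMU
        d2 = ((cells - np.array(bmu)) ** 2).sum(-1)
        h = np.exp(-d2 / (2.0 * sigma ** 2))[..., None]
        weights += lr * h * (x - weights)
    return weights

# Example: cluster 2000 mock 5-band photometry vectors (illustrative random data)
mock = np.random.default_rng(1).normal(size=(2000, 5))
som = train_som(mock, grid=(15, 15), n_iter=5000)
```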
Evaluating multiple determinants of the structure of plant-animal mutualistic networks.
Vázquez, Diego P; Chacoff, Natacha P; Cagnolo, Luciano
2009-08-01
The structure of mutualistic networks is likely to result from the simultaneous influence of neutrality and the constraints imposed by complementarity in species phenotypes, phenologies, spatial distributions, phylogenetic relationships, and sampling artifacts. We develop a conceptual and methodological framework to evaluate the relative contributions of these potential determinants. Applying this approach to the analysis of a plant-pollinator network, we show that information on relative abundance and phenology suffices to predict several aggregate network properties (connectance, nestedness, interaction evenness, and interaction asymmetry). However, such information falls short of predicting the detailed network structure (the frequency of pairwise interactions), leaving a large amount of variation unexplained. Taken together, our results suggest that both relative species abundance and complementarity in spatiotemporal distribution contribute substantially to generating the observed network patterns, but that this information is by no means sufficient to predict the occurrence and frequency of pairwise interactions. Future studies could use our methodological framework to evaluate the generality of our findings in a representative sample of study systems with contrasting ecological conditions.
Nakamura, Yoshihiro; Hasegawa, Osamu
2017-01-01
With the ongoing development and expansion of communication networks and sensors, massive amounts of data are continuously generated in real time from real environments. Predicting the distribution underlying such data in advance is difficult; furthermore, the data include substantial amounts of noise. These factors make it difficult to estimate probability densities. To handle these issues and massive amounts of data, we propose a nonparametric density estimator that rapidly learns data online and has high robustness. Our approach is an extension of both kernel density estimation (KDE) and a self-organizing incremental neural network (SOINN); therefore, we call our approach KDESOINN. An SOINN provides a clustering method that learns the given data as networks of prototype nodes; more specifically, an SOINN can learn the distribution underlying the given data. Using this information, KDESOINN estimates the probability density function. The results of our experiments show that KDESOINN outperforms or achieves performance comparable to the current state-of-the-art approaches in terms of robustness, learning time, and accuracy.
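For reference, a minimal sketch of the classical batch Gaussian KDE that KDESOINN extends to an online, prototype-based setting is given below; KDESOINN itself replaces the raw samples with SOINN prototype nodes and locally adapted kernels, which is not shown here.

```python
import numpy as np

def gaussian_kde(samples, bandwidth=None):
    """Classical fixed-bandwidth Gaussian kernel density estimator in 1-D.
    Returns a callable density estimate; bandwidth defaults to Silverman's rule."""
    x = np.asarray(samples, float)
    n = x.size
    if bandwidth is None:
        bandwidth = 1.06 * x.std(ddof=1) * n ** (-1 / 5)   # Silverman's rule of thumb
    def pdf(grid):
        grid = np.atleast_1d(np.asarray(grid, float))
        z = (grid[:, None] - x[None, :]) / bandwidth
        return np.exp(-0.5 * z ** 2).sum(axis=1) / (n * bandwidth * np.sqrt(2 * np.pi))
    return pdf

# Usage: estimate the density of 500 standard-normal samples at a few points
density = gaussian_kde(np.random.default_rng(0).normal(size=500))
print(density([0.0, 1.0, 2.0]))
```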
On the Computation of Sound by Large-Eddy Simulations
NASA Technical Reports Server (NTRS)
Piomelli, Ugo; Streett, Craig L.; Sarkar, Sutanu
1997-01-01
The effect of the small scales on the source term in Lighthill's acoustic analogy is investigated, with the objective of determining the accuracy of large-eddy simulations when applied to studies of flow-generated sound. The distribution of the turbulent quadrupole is predicted accurately, if models that take into account the trace of the SGS stresses are used. Its spatial distribution is also correct, indicating that the low-wave-number (or frequency) part of the sound spectrum can be predicted well by LES. Filtering, however, removes the small-scale fluctuations that contribute significantly to the higher derivatives in space and time of Lighthill's stress tensor T(sub ij). The rms fluctuations of the filtered derivatives are substantially lower than those of the unfiltered quantities. The small scales, however, are not strongly correlated, and are not expected to contribute significantly to the far-field sound; separate modeling of the subgrid-scale density fluctuations might, however, be required in some configurations.
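For reference, the source term referred to above is the double divergence of Lighthill's stress tensor; in the standard form of the acoustic analogy (not specific to this paper's filtering analysis),

```latex
\frac{\partial^{2}\rho'}{\partial t^{2}} - c_{0}^{2}\,\nabla^{2}\rho'
  = \frac{\partial^{2} T_{ij}}{\partial x_{i}\,\partial x_{j}},
\qquad
T_{ij} = \rho\,u_{i} u_{j} + \left(p' - c_{0}^{2}\,\rho'\right)\delta_{ij} - \tau_{ij},
```

where $\rho'$ and $p'$ are the density and pressure fluctuations, $c_{0}$ the ambient sound speed, and $\tau_{ij}$ the viscous stress; in an LES only the filtered velocity field is available, so the resolved $T_{ij}$ omits the small-scale contributions discussed above.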
A sensor-less LED dimming system based on daylight harvesting with BIPV systems.
Yoo, Seunghwan; Kim, Jonghun; Jang, Cheol-Yong; Jeong, Hakgeun
2014-01-13
Artificial lighting in office buildings typically accounts for 30% of the total energy consumption of the building, providing a substantial opportunity for energy savings. To reduce the energy consumed by indoor lighting, we propose a sensor-less light-emitting diode (LED) dimming system using daylight harvesting. In this study, we used light simulation software to quantify and visualize daylight, and analyzed the correlation between photovoltaic (PV) power generation and indoor illumination in an office with an integrated PV system. In addition, we calculated the distribution of daylight illumination in the office and the dimming ratios for the individual control of LED lights. We were also able to use the electric power generated by the PV system directly. As a result, power consumption for electric lighting was reduced by 40-70% depending on the season and the weather conditions. Thus, the dimming system proposed in this study can be used to control electric lighting to reduce energy use cost-effectively and simply.
Piccinini, Filippo; Balassa, Tamas; Szkalisity, Abel; Molnar, Csaba; Paavolainen, Lassi; Kujala, Kaisa; Buzas, Krisztina; Sarazova, Marie; Pietiainen, Vilja; Kutay, Ulrike; Smith, Kevin; Horvath, Peter
2017-06-28
High-content, imaging-based screens now routinely generate data on a scale that precludes manual verification and interrogation. Software applying machine learning has become an essential tool to automate analysis, but these methods require annotated examples to learn from. Efficiently exploring large datasets to find relevant examples remains a challenging bottleneck. Here, we present Advanced Cell Classifier (ACC), a graphical software package for phenotypic analysis that addresses these difficulties. ACC applies machine-learning and image-analysis methods to high-content data generated by large-scale, cell-based experiments. It features methods to mine microscopic image data, discover new phenotypes, and improve recognition performance. We demonstrate that these features substantially expedite the training process, successfully uncover rare phenotypes, and improve the accuracy of the analysis. ACC is extensively documented, designed to be user-friendly for researchers without machine-learning expertise, and distributed as a free open-source tool at www.cellclassifier.org. Copyright © 2017 Elsevier Inc. All rights reserved.
Influence of item distribution pattern and abundance on efficiency of benthic core sampling
Behney, Adam C.; O'Shaughnessy, Ryan; Eichholz, Michael W.; Stafford, Joshua D.
2014-01-01
Core sampling is a commonly used method to estimate benthic item density, but little information exists about factors influencing the accuracy and time-efficiency of this method. We simulated core sampling in a Geographic Information System framework by generating points (benthic items) and polygons (core samplers) to assess how sample size (number of core samples), core sampler size (cm2), distribution of benthic items, and item density affected the bias and precision of estimates of density, the detection probability of items, and the time-costs. When items were distributed randomly versus clumped, bias decreased and precision increased with increasing sample size and increased slightly with increasing core sampler size. Bias and precision were only affected by benthic item density at very low values (500–1,000 items/m2). Detection probability (the probability of capturing ≥ 1 item in a core sample if it is available for sampling) was substantially greater when items were distributed randomly as opposed to clumped. Taking more small-diameter core samples was always more time-efficient than taking fewer large-diameter samples. We are unable to present a single, optimal sample size, but provide information for researchers and managers to derive optimal sample sizes dependent on their research goals and environmental conditions.
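The simulation logic can be sketched outside a GIS as follows; the plot size, the clumping model and the Poisson item counts are illustrative assumptions rather than the authors' exact design, and only bias and detection probability are estimated here.

```python
import numpy as np

def simulate_core_sampling(density, plot_area=10.0, core_area_cm2=50.0,
                           n_cores=20, clumped=False, n_rep=200, seed=0):
    """Monte Carlo sketch of benthic core sampling in a square plot.
    density       : true item density (items / m^2)
    core_area_cm2 : area of one circular core sampler (cm^2)
    clumped       : if True, items are placed in tight clusters instead of at random
    Returns (mean relative bias of the density estimate, detection probability),
    where detection probability is the chance a single core captures >= 1 item."""
    rng = np.random.default_rng(seed)
    side = np.sqrt(plot_area)
    core_area_m2 = core_area_cm2 / 1e4
    r = np.sqrt(core_area_m2 / np.pi)                 # core radius in m
    biases, detections = [], []
    for _ in range(n_rep):
        n_items = rng.poisson(density * plot_area)
        if clumped:
            centers = rng.uniform(0, side, size=(max(n_items // 50, 1), 2))
            pts = centers[rng.integers(len(centers), size=n_items)]
            pts = (pts + rng.normal(0, 0.25, (n_items, 2))) % side
        else:
            pts = rng.uniform(0, side, size=(n_items, 2))
        cores = rng.uniform(0, side, size=(n_cores, 2))
        counts = np.array([np.sum(np.hypot(*(pts - c).T) <= r) for c in cores])
        biases.append(counts.mean() / core_area_m2 / density - 1.0)
        detections.append(np.mean(counts > 0))
    return float(np.mean(biases)), float(np.mean(detections))

print(simulate_core_sampling(density=1000, clumped=False))
print(simulate_core_sampling(density=1000, clumped=True))
```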
Mostazir, Mohammod; Jeffery, Alison; Voss, Linda; Wilkin, Terence
2017-01-01
Pre-diabetes is a state of beta-cell stress caused by excess demand for insulin. Body mass is an important determinant of insulin demand, and BMI has risen substantially over recent decades. We sought to model changes in the parameters of glucose control against rising BMI over the past 25 years. Using random coefficient mixed models, we established the correlations between HbA1C, fasting glucose, fasting insulin, HOMA2-IR and BMI in contemporary (2015) children (N=307) at ages 5-16 years from the EarlyBird study, and modelled their corresponding values 25 years ago according to the distribution of BMI in the UK Growth Standards (1990). There was little change in HbA1C or fasting glucose over the 25-year period at any age or in either gender. On the other hand, the estimates for fasting insulin and HOMA2-IR were substantially higher in both genders in 2015 compared with 1990. Insofar as it is determined by body mass, there has been a substantial rise in beta-cell demand among children over the past 25 years. The change could be detected by fasting insulin and HOMA2-IR, but not by fasting glucose or HbA1C. Crown Copyright © 2016. Published by Elsevier B.V. All rights reserved.
An update of Quaternary faults of central and eastern Oregon
Weldon, Ray J.; Fletcher, D.K.; Weldon, E.M.; Scharer, K.M.; McCrory, P.A.
2002-01-01
This is the online version of a CD-ROM publication. We have updated the eastern portion of our previous active fault map of Oregon (Pezzopane, Nakata, and Weldon, 1992) as a contribution to the larger USGS effort to produce digital maps of active faults in the Pacific Northwest region. The 1992 fault map has seen wide distribution and has been reproduced in essentially all subsequent compilations of active faults of Oregon. The new map provides a substantial update of known active or suspected active faults east of the Cascades. Improvements in the new map include (1) many newly recognized active faults, (2) a linked ArcInfo map and reference database, (3) more precise locations for previously recognized faults on shaded relief quadrangles generated from USGS 30-m digital elevation models (DEMs), (4) more uniform coverage resulting in more consistent grouping of the ages of active faults, and (5) a new category of 'possibly' active faults that share characteristics with known active faults, but have not been studied adequately to assess their activity. The distribution of active faults has not changed substantially from the original Pezzopane, Nakata and Weldon map. Most faults occur in the south-central Basin and Range tectonic province that is located in the backarc portion of the Cascadia subduction margin. These faults occur in zones consisting of numerous short faults with similar rates, ages, and styles of movement. Many active faults strongly correlate with the most active volcanic centers of Oregon, including Newberry Craters and Crater Lake.
N-mixture models for estimating population size from spatially replicated counts
Royle, J. Andrew
2004-01-01
Spatial replication is a common theme in count surveys of animals. Such surveys often generate sparse count data from which it is difficult to estimate population size while formally accounting for detection probability. In this article, I describe a class of models (N-mixture models) which allow for estimation of population size from such data. The key idea is to view site-specific population sizes, N, as independent random variables distributed according to some mixing distribution (e.g., Poisson). Prior parameters are estimated from the marginal likelihood of the data, having integrated over the prior distribution for N. Carroll and Lombard (1985, Journal of the American Statistical Association 80, 423-426) proposed a class of estimators based on mixing over a prior distribution for detection probability. Their estimator can be applied in limited settings, but is sensitive to prior parameter values that are fixed a priori. Spatial replication provides additional information regarding the parameters of the prior distribution on N that is exploited by the N-mixture models and which leads to reasonable estimates of abundance from sparse data. A simulation study demonstrates superior operating characteristics (bias, confidence interval coverage) of the N-mixture estimator compared to the Carroll and Lombard estimator. Both estimators are applied to point count data on six species of birds, illustrating the sensitivity to choice of prior on p and substantially different estimates of abundance as a consequence.
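A minimal sketch of the basic Poisson-binomial N-mixture likelihood, with the latent site abundances summed out, is given below; the log/logit parameterization, the truncation point n_max and the toy data are implementation choices for illustration, not Royle's code.

```python
import numpy as np
from scipy.stats import poisson, binom
from scipy.optimize import minimize

def nmixture_negloglik(params, counts, n_max=200):
    """Negative log-likelihood of the basic N-mixture model:
    N_i ~ Poisson(lambda), y_it | N_i ~ Binomial(N_i, p),
    with the latent N_i summed out up to n_max.
    counts: (n_sites, n_visits) array of spatially replicated counts."""
    lam = np.exp(params[0])                       # abundance mean, kept positive
    p = 1.0 / (1.0 + np.exp(-params[1]))          # detection probability in (0, 1)
    n = np.arange(n_max + 1)
    prior = poisson.pmf(n, lam)                   # mixing distribution over N
    ll = 0.0
    for y in counts:                              # loop over sites
        lik_given_n = np.prod(binom.pmf(y[:, None], n[None, :], p), axis=0)
        ll += np.log(np.sum(lik_given_n * prior))
    return -ll

# Toy data: 50 sites, 3 visits, true lambda = 5 and p = 0.4
rng = np.random.default_rng(0)
N = rng.poisson(5, size=50)
y = rng.binomial(N[:, None], 0.4, size=(50, 3))
fit = minimize(nmixture_negloglik, x0=[0.0, 0.0], args=(y,), method="Nelder-Mead")
print(np.exp(fit.x[0]), 1 / (1 + np.exp(-fit.x[1])))   # estimates of lambda and p
```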
Gilliom, Robert J.; Helsel, Dennis R.
1986-01-01
A recurring difficulty encountered in investigations of many metals and organic contaminants in ambient waters is that a substantial portion of water sample concentrations are below limits of detection established by analytical laboratories. Several methods were evaluated for estimating distributional parameters for such censored data sets using only uncensored observations. Their reliabilities were evaluated by a Monte Carlo experiment in which small samples were generated from a wide range of parent distributions and censored at varying levels. Eight methods were used to estimate the mean, standard deviation, median, and interquartile range. Criteria were developed, based on the distribution of uncensored observations, for determining the best performing parameter estimation method for any particular data set. The most robust method for minimizing error in censored-sample estimates of the four distributional parameters over all simulation conditions was the log-probability regression method. With this method, censored observations are assumed to follow the zero-to-censoring level portion of a lognormal distribution obtained by a least squares regression between logarithms of uncensored concentration observations and their z scores. When method performance was separately evaluated for each distributional parameter over all simulation conditions, the log-probability regression method still had the smallest errors for the mean and standard deviation, but the lognormal maximum likelihood method had the smallest errors for the median and interquartile range. When data sets were classified prior to parameter estimation into groups reflecting their probable parent distributions, the ranking of estimation methods was similar, but the accuracy of error estimates was markedly improved over those without classification.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gilliom, R.J.; Helsel, D.R.
1986-02-01
A recurring difficulty encountered in investigations of many metals and organic contaminants in ambient waters is that a substantial portion of water sample concentrations are below limits of detection established by analytical laboratories. Several methods were evaluated for estimating distributional parameters for such censored data sets using only uncensored observations. Their reliabilities were evaluated by a Monte Carlo experiment in which small samples were generated from a wide range of parent distributions and censored at varying levels. Eight methods were used to estimate the mean, standard deviation, median, and interquartile range. Criteria were developed, based on the distribution of uncensored observations, for determining the best performing parameter estimation method for any particular data set. The most robust method for minimizing error in censored-sample estimates of the four distributional parameters over all simulation conditions was the log-probability regression method. With this method, censored observations are assumed to follow the zero-to-censoring level portion of a lognormal distribution obtained by a least squares regression between logarithms of uncensored concentration observations and their z scores. When method performance was separately evaluated for each distributional parameter over all simulation conditions, the log-probability regression method still had the smallest errors for the mean and standard deviation, but the lognormal maximum likelihood method had the smallest errors for the median and interquartile range. When data sets were classified prior to parameter estimation into groups reflecting their probable parent distributions, the ranking of estimation methods was similar, but the accuracy of error estimates was markedly improved over those without classification.
Estimation of distributional parameters for censored trace-level water-quality data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gilliom, R.J.; Helsel, D.R.
1984-01-01
A recurring difficulty encountered in investigations of many metals and organic contaminants in ambient waters is that a substantial portion of water-sample concentrations are below limits of detection established by analytical laboratories. Several methods were evaluated for estimating distributional parameters for such censored data sets using only uncensored observations. Their reliabilities were evaluated by a Monte Carlo experiment in which small samples were generated from a wide range of parent distributions and censored at varying levels. Eight methods were used to estimate the mean, standard deviation, median, and interquartile range. Criteria were developed, based on the distribution of uncensored observations, for determining the best-performing parameter estimation method for any particular data set. The most robust method for minimizing error in censored-sample estimates of the four distributional parameters over all simulation conditions was the log-probability regression method. With this method, censored observations are assumed to follow the zero-to-censoring level portion of a lognormal distribution obtained by a least-squares regression between logarithms of uncensored concentration observations and their z scores. When method performance was separately evaluated for each distributional parameter over all simulation conditions, the log-probability regression method still had the smallest errors for the mean and standard deviation, but the lognormal maximum likelihood method had the smallest errors for the median and interquartile range. When data sets were classified prior to parameter estimation into groups reflecting their probable parent distributions, the ranking of estimation methods was similar, but the accuracy of error estimates was markedly improved over those without classification. 6 figs., 6 tabs.
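A minimal sketch of the log-probability regression estimator (regression on order statistics) described above is given below for a single detection limit; the general recipe is to regress the logarithms of the detected concentrations on the normal scores of their plotting positions and impute the censored tail from the fitted line, but the specific plotting-position formula (Blom-type) and the single-limit assumption here are illustration choices, not the exact procedure of the study.

```python
import numpy as np
from scipy.stats import norm

def log_probability_regression(uncensored, n_censored):
    """Log-probability regression (regression on order statistics) for left-censored data.
    uncensored : detected concentrations (all above the detection limit)
    n_censored : number of observations reported as below the detection limit
    Returns (mean, standard deviation) computed after imputing the censored tail."""
    x = np.sort(np.asarray(uncensored, float))
    n = len(x) + n_censored
    # Plotting positions for the full sample; the detected values occupy the upper ranks
    ranks = np.arange(n_censored + 1, n + 1)
    z = norm.ppf((ranks - 0.375) / (n + 0.25))          # Blom-type plotting positions
    slope, intercept = np.polyfit(z, np.log(x), 1)      # log-concentration vs normal score
    # Impute the censored observations from the lower tail of the fitted line
    z_cens = norm.ppf((np.arange(1, n_censored + 1) - 0.375) / (n + 0.25))
    imputed = np.exp(intercept + slope * z_cens)
    full = np.concatenate([imputed, x])
    return full.mean(), full.std(ddof=1)

# Example: lognormal data censored at a detection limit of 1.0
rng = np.random.default_rng(0)
sample = rng.lognormal(mean=0.0, sigma=1.0, size=30)
detected = sample[sample >= 1.0]
print(log_probability_regression(detected, n_censored=int(np.sum(sample < 1.0))))
```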
Reducing the Likelihood of Long Tennis Matches
Barnett, Tristan; Alan, Brown; Pollard, Graham
2006-01-01
Long matches can cause problems for tournaments. For example, the starting times of subsequent matches can be substantially delayed causing inconvenience to players, spectators, officials and television scheduling. They can even be seen as unfair in the tournament setting when the winner of a very long match, who may have negative aftereffects from such a match, plays the winner of an average or shorter length match in the next round. Long matches can also lead to injuries to the participating players. One factor that can lead to long matches is the use of the advantage set as the fifth set, as in the Australian Open, the French Open and Wimbledon. Another factor is long rallies and a greater than average number of points per game. This tends to occur more frequently on the slower surfaces such as at the French Open. The mathematical method of generating functions is used to show that the likelihood of long matches can be substantially reduced by using the tiebreak game in the fifth set, or more effectively by using a new type of game, the 50-40 game, throughout the match. Key Points: (1) the cumulant generating function has nice properties for calculating the parameters of distributions in a tennis match; (2) a final tiebreaker set, as currently used in the US Open, reduces the length of matches; (3) a new 50-40 game reduces the length of matches whilst maintaining comparable probabilities for the better player to win the match. PMID:24357951
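To make the scoring comparison concrete, the sketch below computes the server's probability of winning a conventional game and a 50-40 game from the probability p of winning a single point on serve; the reading of the 50-40 rule used here (the server needs four points, the receiver only three, with no deuce) and the use of a simple recursion in place of the cumulant generating function are our assumptions for illustration.

```python
from functools import lru_cache

def p_game_standard(p):
    """Probability the server wins a standard game (first to 4 points, win by 2)
    when the server wins each point with probability p."""
    q = 1.0 - p
    p_deuce = p * p / (p * p + q * q)                # probability of winning from deuce
    return p**4 * (1 + 4 * q + 10 * q * q) + 20 * p**3 * q**3 * p_deuce

def p_game_5040(p):
    """Probability the server wins a '50-40' game under the assumed rule:
    the server needs 4 points, the receiver only 3, and there is no deuce."""
    @lru_cache(maxsize=None)
    def win(s, r):                                   # server has s points, receiver r
        if s == 4:
            return 1.0
        if r == 3:
            return 0.0
        return p * win(s + 1, r) + (1.0 - p) * win(s, r + 1)
    return win(0, 0)

for p in (0.60, 0.65, 0.70):
    print(p, round(p_game_standard(p), 3), round(p_game_5040(p), 3))
```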
IpexT: Integrated Planning and Execution for Military Satellite Tele-Communications
NASA Technical Reports Server (NTRS)
Plaunt, Christian; Rajan, Kanna
2004-01-01
The next generation of military communications satellites may be designed as a fast packet-switched constellation of spacecraft able to withstand substantial bandwidth capacity fluctuation in the face of dynamic resource utilization and rapid environmental changes including jamming of communication frequencies and unstable weather phenomena. We are in the process of designing an integrated scheduling and execution tool which will aid in the analysis of the design parameters needed for building such a distributed system for nominal and battlefield communications. This paper discusses the design of such a system based on a temporal constraint posting planner/scheduler and a smart executive which can cope with a dynamic environment to make a more optimal utilization of bandwidth than the current circuit switched based approach.
Non-linear photochemical pathways in laser-induced atmospheric aerosol formation
Mongin, Denis; Slowik, Jay G.; Schubert, Elise; Brisset, Jean-Gabriel; Berti, Nicolas; Moret, Michel; Prévôt, André S. H.; Baltensperger, Urs; Kasparian, Jérôme; Wolf, Jean-Pierre
2015-01-01
We measured the chemical composition and the size distribution of aerosols generated by femtosecond-Terawatt laser pulses in the atmosphere using an aerosol mass spectrometer (AMS). We show that nitric acid condenses in the form of ammonium nitrate, and that oxidized volatile organics also contribute to particle growth. These two components account for two thirds and one third, respectively, of the dry laser-condensed mass. They appear in two different modes centred at 380 nm and 150 nm. The number concentration of particles between 25 and 300 nm increases by a factor of 15. Pre-existing water droplets strongly increase the oxidative properties of the laser-activated atmosphere, substantially enhancing the condensation of organics under laser illumination. PMID:26450172
NASA Astrophysics Data System (ADS)
Dolinina, V. I.; Koterov, V. N.; Pyatakhin, Mikhail V.; Urin, B. M.
1989-02-01
Numerical methods were used to investigate theoretically the dynamics of the energy balance of a discharge in a CO-N2 mixture, taking into account the mutual influence of the distributions of the electron energy and of the populations of the molecules over the vibrational levels. It was shown that this influence plays a decisive part in substantially redistributing the pump energy between the vibrational levels of the CO and N2 molecules in favor of the N2 molecules. A stabilizing action of the nitrogen on the thermal regime of the CO laser-active medium was discovered and the range of optimal CO:N2 ratios was determined.
Heterogeneous fuel for hybrid rocket
NASA Technical Reports Server (NTRS)
Stickler, David B. (Inventor)
1996-01-01
Heterogeneous fuel compositions suitable for use in hybrid rocket engines and solid-fuel ramjet engines are disclosed. The compositions include mixtures of a continuous phase, which forms a solid matrix, and a dispersed phase permanently distributed therein. The dispersed phase or the matrix vaporizes (or melts) and disperses into the gas flow much more rapidly than the other, creating depressions, voids and bumps within and on the surface of the remaining bulk material that continuously roughen its surface. This effect substantially enhances heat transfer from the combusting gas flow to the fuel surface, producing a correspondingly high burning rate. The dispersed phase may include solid particles, entrained liquid droplets, or gas-phase voids having dimensions roughly similar to the displacement scale height of the gas-flow boundary layer generated during combustion.
Fultz, B.T.
1980-12-05
Apparatus is provided for detecting radiation such as gamma rays and x-rays generated in backscatter Moessbauer effect spectroscopy and x-ray spectrometry, which has a large window for detecting radiation emanating over a wide solid angle from a specimen and which generates substantially the same output pulse height for monoenergetic radiation that passes through any portion of the detection chamber. The apparatus includes a substantially toroidal chamber with conductive walls forming a cathode, and a wire anode extending in a circle within the chamber, with the anode lying closer to the inner side of the toroid (the side of smallest diameter) than to the outer side. The placement of the anode produces an electric field, in a region close to the anode, which has substantially the same gradient in all directions extending radially from the anode, so that the number of avalanche electrons generated by ionizing radiation is independent of the path of the radiation through the chamber.
Fultz, Brent T.
1983-01-01
Apparatus is provided for detecting radiation such as gamma rays and X-rays generated in backscatter Mossbauer effect spectroscopy and X-ray spectrometry, which has a large "window" for detecting radiation emanating over a wide solid angle from a specimen and which generates substantially the same output pulse height for monoenergetic radiation that passes through any portion of the detection chamber. The apparatus includes a substantially toroidal chamber with conductive walls forming a cathode, and a wire anode extending in a circle within the chamber, with the anode lying closer to the inner side of the toroid (the side of smallest diameter) than to the outer side. The placement of the anode produces an electric field, in a region close to the anode, which has substantially the same gradient in all directions extending radially from the anode, so that the number of avalanche electrons generated by ionizing radiation is independent of the path of the radiation through the chamber.
Waste minimization charges up recycling of spent lead-acid batteries
DOE Office of Scientific and Technical Information (OSTI.GOV)
Queneau, P.B.; Troutman, A.L.
Substantial strides are being made to minimize waste generated from spent lead-acid battery recycling. The Center for Hazardous Materials Research (Pittsburgh) recently investigated the potential for secondary lead smelters to recover lead from battery cases and other materials found at hazardous waste sites. Primary and secondary lead smelters in the U.S. and Canada are processing substantial tonnages of lead wastes while meeting regulatory safeguards. Typical lead wastes include contaminated soil, dross and dust by-products from industrial lead consumers, tetraethyl lead residues, chemical manufacturing by-products, leaded glass, china clay waste, munitions residues and pigments. The secondary lead industry also is developing and installing systems to convert process inputs to products with minimum generation of liquid, solid and gaseous wastes. The industry recently has made substantial accomplishments that minimize waste generation during lead production from its bread and butter feedstock--spent lead-acid batteries.
Haage, P; Adam, G; Karaagac, S; Pfeffer, J; Glowinski, A; Döhmen, S; Günther, R W
2001-04-01
To evaluate a new technique with mechanical administration of aerosolized gadolinium (Gd)-DTPA for MR visualization of lung ventilation. Ten experimental procedures were performed in six domestic pigs. Gd-DTPA was aerosolized by a small-particle generator. The intubated animals were mechanically ventilated with the nebulized contrast agent and studied on a 1.5-T MR imager. Respiratory gated T1-weighted turbo spin-echo images were obtained before, during, and after contrast administration. Pulmonary signal intensity (SI) changes were calculated for corresponding regions of both lungs. Homogeneity of aerosol distribution was graded independently by two radiologists. To achieve a comparable SI increase as attained in previous trials that used manual aerosol ventilation, a ventilation period of 20 minutes (formerly 30 minutes) was sufficient. Mean SI changes of 116% were observed after that duration. Contrast delivery was rated evenly distributed in all cases by the reviewers. The feasibility of applying Gd-DTPA as a contrast agent to demonstrate pulmonary ventilation in large animals has been described before. The results of this refined technique substantiate the potential of Gd-based ventilation MR imaging by improving aerosol distribution and shortening the nebulization duration in the healthy lung.
GALAXY: A new hybrid MOEA for the optimal design of Water Distribution Systems
NASA Astrophysics Data System (ADS)
Wang, Q.; Savić, D. A.; Kapelan, Z.
2017-03-01
A new hybrid optimizer, called the genetically adaptive leaping algorithm for approximation and diversity (GALAXY), is proposed for dealing with the discrete, combinatorial, multiobjective design of Water Distribution Systems (WDSs), which is NP-hard and computationally intensive. The merit of GALAXY is its ability to alleviate, to a great extent, the parameterization issue and the high computational overhead. It follows the generational framework of Multiobjective Evolutionary Algorithms (MOEAs) and includes six search operators and several important strategies. The operators are selected based on their leaping ability in the objective space from the global and local search perspectives, while the strategies steer the optimization and balance exploration and exploitation simultaneously. A highlighted feature of GALAXY is that it eliminates the majority of parameters, making it robust and easy to use. Comparative studies between GALAXY and three representative MOEAs on five benchmark WDS design problems confirm its competitiveness. GALAXY identifies better converged and distributed boundary solutions efficiently and consistently, indicating a much more balanced capability between global and local search. Moreover, its advantages over other MOEAs become more substantial as the complexity of the design problem increases.
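The flavor of such a generational, multi-operator, multiobjective search can be illustrated with a deliberately simplified sketch. The two objectives (a stand-in for pipe cost and a stand-in for hydraulic deficit), the discrete decision options, and the two operators below are hypothetical placeholders, not the published GALAXY operators or the benchmark WDS problems.

```python
import random

# Toy generational multiobjective search over discrete "pipe diameter" choices.
# All problem data and operators are illustrative assumptions.
OPTIONS = list(range(8))            # discrete diameter choices per pipe
N_PIPES, POP, GENS = 20, 40, 50

def evaluate(x):
    cost = sum(x)                               # cheaper with smaller diameters
    deficit = sum((7 - d) ** 2 for d in x)      # worse supply with smaller diameters
    return (cost, deficit)

def dominates(a, b):
    return all(ai <= bi for ai, bi in zip(a, b)) and a != b

def leap(x):        # coarse "leaping" move: re-draw a random tail block of pipes
    i = random.randrange(N_PIPES)
    return x[:i] + [random.choice(OPTIONS) for _ in range(N_PIPES - i)]

def crossover(x, y):                            # uniform crossover between two parents
    return [random.choice(pair) for pair in zip(x, y)]

pop = [[random.choice(OPTIONS) for _ in range(N_PIPES)] for _ in range(POP)]
for _ in range(GENS):
    children = []
    for _ in range(POP):
        if random.random() < 0.5:
            children.append(leap(random.choice(pop)))
        else:
            a, b = random.sample(pop, 2)
            children.append(crossover(a, b))
    union = pop + children
    scored = [(evaluate(x), x) for x in union]
    # keep nondominated solutions first, fill the remainder arbitrarily
    front = [x for f, x in scored if not any(dominates(g, f) for g, _ in scored)]
    rest = [x for _, x in scored if x not in front]
    pop = (front + rest)[:POP]

print(sorted(set(evaluate(x) for x in pop))[:5])
```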
Structural model for fluctuations in financial markets
NASA Astrophysics Data System (ADS)
Anand, Kartik; Khedair, Jonathan; Kühn, Reimer
2018-05-01
In this paper we provide a comprehensive analysis of a structural model for the dynamics of prices of assets traded in a market which takes the form of an interacting generalization of the geometric Brownian motion model. It is formally equivalent to a model describing the stochastic dynamics of a system of analog neurons, which is expected to exhibit glassy properties and thus many metastable states in a large portion of its parameter space. We perform a generating functional analysis, introducing a slow driving of the dynamics to mimic the effect of slowly varying macroeconomic conditions. Distributions of asset returns over various time separations are evaluated analytically and are found to be fat-tailed in a manner broadly in line with empirical observations. Our model also allows us to identify collective, interaction-mediated properties of pricing distributions and it predicts pricing distributions which are significantly broader than their noninteracting counterparts, if interactions between prices in the model contain a ferromagnetic bias. Using simulations, we are able to substantiate one of the main hypotheses underlying the original modeling, viz., that the phenomenon of volatility clustering can be rationalized in terms of an interplay between the dynamics within metastable states and the dynamics of occasional transitions between them.
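The interacting generalization of geometric Brownian motion described above can be sketched with a simple Euler discretization of log-prices. The coupling matrix, its "ferromagnetic" bias, the drift, and the noise scale below are illustrative assumptions, not the parameters fitted or analyzed in the paper.

```python
import numpy as np

# Minimal sketch of interacting geometric-Brownian-motion price dynamics.
# A random coupling matrix J (with tunable bias) feeds a bounded function of
# the log-prices back into the drift, loosely mimicking the analog-neuron
# analogy; all parameter values are made up for illustration.
rng = np.random.default_rng(0)
N, T, dt = 100, 5000, 1e-2
mu, sigma, g, bias = 0.0, 0.2, 0.5, 0.1

J = g * rng.normal(size=(N, N)) / np.sqrt(N) + bias / N   # interaction matrix
np.fill_diagonal(J, 0.0)

x = np.zeros(N)                      # log-prices
returns = np.empty((T, N))
for t in range(T):
    drift = mu + J @ np.tanh(x)      # interaction-mediated drift
    dx = drift * dt + sigma * np.sqrt(dt) * rng.normal(size=N)
    x += dx
    returns[t] = dx

# crude check for fat tails in the pooled return distribution
excess_kurtosis = ((returns - returns.mean()) ** 4).mean() / returns.var() ** 2 - 3
print("excess kurtosis of pooled returns:", float(excess_kurtosis))
```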
The Distributed Geothermal Market Demand Model (dGeo): Documentation
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCabe, Kevin; Mooney, Meghan E; Sigrin, Benjamin O
The National Renewable Energy Laboratory (NREL) developed the Distributed Geothermal Market Demand Model (dGeo) as a tool to explore the potential role of geothermal distributed energy resources (DERs) in meeting thermal energy demands in the United States. The dGeo model simulates the potential for deployment of geothermal DERs in the residential and commercial sectors of the continental United States for two specific technologies: ground-source heat pumps (GHP) and geothermal direct use (DU) for district heating. To quantify the opportunity space for these technologies, dGeo leverages a highly resolved geospatial database and robust bottom-up, agent-based modeling framework. This design is consistent with others in the family of Distributed Generation Market Demand models (dGen; Sigrin et al. 2016), including the Distributed Solar Market Demand (dSolar) and Distributed Wind Market Demand (dWind) models. dGeo is intended to serve as a long-term scenario-modeling tool. It has the capability to simulate the technical potential, economic potential, market potential, and technology deployment of GHP and DU through the year 2050 under a variety of user-defined input scenarios. Through these capabilities, dGeo can provide substantial analytical value to various stakeholders interested in exploring the effects of various techno-economic, macroeconomic, financial, and policy factors related to the opportunity for GHP and DU in the United States. This report documents the dGeo modeling design, methodology, assumptions, and capabilities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palmintier, Bryan; Broderick, Robert; Mather, Barry
2016-05-01
This report analyzes distribution-integration challenges, solutions, and research needs in the context of distributed generation from PV (DGPV) deployment to date and the much higher levels of deployment expected with achievement of the U.S. Department of Energy's SunShot targets. Recent analyses have improved estimates of the DGPV hosting capacities of distribution systems. This report uses these results to statistically estimate a minimum DGPV hosting capacity for the contiguous United States of approximately 170 GW with traditional inverters and without distribution system modifications. This hosting capacity roughly doubles if advanced inverters are used to manage local voltage, and additional minor, low-cost changes could further increase these levels substantially. Key to achieving these deployment levels at minimum cost is siting DGPV based on local hosting capacities, suggesting opportunities for regulatory, incentive, and interconnection innovation. Already, pre-computed hosting capacity is beginning to expedite DGPV interconnection requests and installations in select regions; however, realizing SunShot-scale deployment will require further improvements to DGPV interconnection processes, standards and codes, and compensation mechanisms so they embrace the contributions of DGPV to system-wide operations. SunShot-scale DGPV deployment will also require unprecedented coordination of the distribution and transmission systems. This includes harnessing DGPV's ability to relieve congestion and reduce system losses by generating closer to loads; minimizing system operating costs and reserve deployments through improved DGPV visibility; developing communication and control architectures that incorporate DGPV into system operations; providing frequency response, transient stability, and synthesized inertia with DGPV in the event of large-scale system disturbances; and potentially managing reactive power requirements due to large-scale deployment of advanced inverter functions. Finally, additional local and system-level value could be provided by integrating DGPV with energy storage and 'virtual storage,' which exploits improved management of electric vehicle charging, building energy systems, and other large loads. Together, continued innovation across this rich distribution landscape can enable the very-high deployment levels envisioned by SunShot.
Ahn, Hyo-Sung; Kim, Byeong-Yeon; Lim, Young-Hun; Lee, Byung-Hun; Oh, Kwang-Kyo
2018-03-01
This paper proposes three coordination laws for optimal energy generation and distribution in an energy network composed of a physical flow layer and a cyber communication layer. Physical energy flows through the physical layer, but generation and flow are coordinated by distributed algorithms operating on communication-layer information. First, distributed energy generation and energy distribution laws are proposed in a decoupled manner, without considering the interactive characteristics between energy generation and energy distribution. Second, a joint coordination law is designed that treats energy generation and energy distribution in a coupled manner, taking the interactive characteristics into account. Third, to handle over- or under-generation cases, an energy distribution law for networks with batteries is designed. The coordination laws proposed in this paper are fully distributed in the sense that they are decided optimally using only relative information among neighboring nodes. Through numerical simulations, the validity of the proposed distributed coordination laws is illustrated.
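A distributed coordination rule of this general kind can be sketched with a consensus-style update in which each node nudges its marginal generation cost toward those of its neighbours. The quadratic cost curves, the communication graph, the gains, and the demand value below are assumptions for illustration, not the laws proposed in the paper; the mismatch correction is computed globally here purely for brevity, whereas the paper's laws use only neighbour-relative information.

```python
import numpy as np

# Consensus-style sketch of distributed generation coordination.
# Each node equalizes marginal cost with its neighbours while the total
# supply-demand mismatch is corrected. All numbers are illustrative.
a = np.array([0.10, 0.12, 0.08, 0.15])   # quadratic cost coefficients
b = np.array([2.0, 1.8, 2.2, 1.9])       # linear cost coefficients
demand = 100.0
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # line communication graph

p = np.full(4, demand / 4)               # initial equal split
eps, eta = 0.05, 0.02
for _ in range(2000):
    lam = 2 * a * p + b                  # local marginal costs
    mismatch = demand - p.sum()          # global correction term (simplification)
    for i in range(4):
        consensus = sum(lam[j] - lam[i] for j in neighbors[i])
        p[i] += eps * consensus + eta * mismatch / 4

print("generation:", np.round(p, 2), " total:", round(float(p.sum()), 2))
print("marginal costs:", np.round(2 * a * p + b, 3))
```

At convergence the marginal costs are (approximately) equal and total generation matches demand, which is the hallmark of an economically optimal dispatch.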
Generation of urban road dust from anti-skid and asphalt concrete aggregates.
Tervahattu, Heikki; Kupiainen, Kaarle J; Räisänen, Mika; Mäkelä, Timo; Hillamo, Risto
2006-04-30
Road dust forms an important component of airborne particulate matter in urban areas. In many winter cities the use of anti-skid aggregates and studded tires enhances the generation of mineral particles. The abrasion particles dominate the PM10 during springtime, when the material deposited in snow is resuspended. This paper summarizes the results from three test series performed in a test facility to assess the factors that affect the generation of abrasion components of road dust. Concentrations, mass size distribution and composition of the particles were studied. Over 90% of the particles were aluminosilicates from either anti-skid or asphalt concrete aggregates. Mineral particles were observed mainly in the PM10 fraction, with the fine fraction being 12% and the submicron fraction 6% of PM10 mass. The PM10 concentrations increased as a function of the amount of anti-skid aggregate dispersed. The use of anti-skid aggregate substantially increased the amount of PM10 originating from the asphalt concrete. It was concluded that anti-skid aggregate grains contribute to pavement wear. The particle size distribution of the anti-skid aggregates had a great impact on PM10 emissions, which were additionally enhanced by studded tires and by the modal composition and texture of the anti-skid aggregates. The results emphasize the interaction of tires, anti-skid aggregate, and asphalt concrete pavement in the production of dust emissions. All of these must be taken into account when measures to reduce road dust are considered. Winter maintenance and springtime cleaning must be performed properly, with methods that are efficient in reducing PM10 dust.
Observability of characteristic binary-induced structures in circumbinary disks
NASA Astrophysics Data System (ADS)
Avramenko, R.; Wolf, S.; Illenseer, T. F.
2017-07-01
Context. A substantial fraction of protoplanetary disks form around stellar binaries. The binary system generates a time-dependent non-axisymmetric gravitational potential, inducing strong tidal forces on the circumbinary disk. This leads to a change in basic physical properties of the circumbinary disk, which should in turn result in unique structures that are potentially observable with the current generation of instruments. Aims: The goal of this study is to identify these characteristic structures, constrain the physical conditions that cause them, and evaluate the feasibility of observing them in circumbinary disks. Methods: To achieve this, first we perform 2D hydrodynamic simulations. The resulting density distributions are post-processed with a 3D radiative transfer code to generate re-emission and scattered light maps. Based on these distributions, we study the influence of various parameters, such as the mass of the stellar components, mass of the disk, and binary separation on observable features in circumbinary disks. Results: We find that the Atacama Large Millimeter/submillimeter Array (ALMA) as well as the European Extremely Large Telescope (E-ELT) are capable of tracing asymmetries in the inner region of circumbinary disks, which are affected most by the binary-disk interaction. Observations at submillimetre/millimetre wavelengths allow the detection of the density waves at the inner rim of the disk and inner cavity. With the E-ELT one can partially resolve the innermost parts of the disk in the infrared wavelength range, including the disk's rim, accretion arms, and potentially the expected circumstellar disks around each of the binary components.
Vutukuru, Satish; Carreras-Sospedra, Marc; Brouwer, Jacob; Dabdub, Donald
2011-12-01
Distributed power generation (electricity generation produced by many small stationary power generators distributed throughout an urban air basin) has the potential to supply a significant portion of electricity in future years. As a result, distributed generation may lead to increased pollutant emissions within an urban air basin, which could adversely affect air quality. However, the use of combined heating and power with distributed generation may reduce the energy consumption for space heating and air conditioning, resulting in a net decrease of pollutant and greenhouse gas emissions. This work used a systematic approach based on land-use geographical information system data to determine the spatial and temporal distribution of distributed generation emissions in the San Joaquin Valley Air Basin of California and simulated the potential air quality impacts using state-of-the-art three-dimensional computer models. The evaluation of the potential market penetration of distributed generation focuses on the year 2023. In general, the air quality impacts of distributed generation were found to be small due to the restrictive 2007 California Air Resources Board air emission standards applied to all distributed generation units and due to the use of combined heating and power. Results suggest that if distributed generation units were allowed to emit at the current Best Available Control Technology standards (which are less restrictive than the 2007 California Air Resources Board standards), air quality impacts of distributed generation could compromise compliance with the federal 8-hr average ozone standard in the region.
Temperature limited heater with a conduit substantially electrically isolated from the formation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vinegar, Harold J; Sandberg, Chester Ledlie
2009-07-14
A system for heating a hydrocarbon containing formation is described. A conduit may be located in an opening in the formation. The conduit includes ferromagnetic material. An electrical conductor is positioned inside the conduit, and is electrically coupled to the conduit at or near an end portion of the conduit so that the electrical conductor and the conduit are electrically coupled in series. Electrical current flows in the electrical conductor in a substantially opposite direction to electrical current flow in the conduit during application of electrical current to the system. The flow of electrons is substantially confined to the inside of the conduit by the electromagnetic field generated from electrical current flow in the electrical conductor so that the outside surface of the conduit is at or near substantially zero potential at 25°C. The conduit may generate heat and heat the formation during application of electrical current.
High resolution, high rate X-ray spectrometer
Goulding, Frederick S.; Landis, Donald A.
1987-01-01
A pulse processing system (10) for use in an X-ray spectrometer in which a main channel pulse shaper (12) and a fast channel pulse shaper (13) each produce a substantially symmetrical triangular pulse (f, p) for each event detected by the spectrometer, with the pulse width of the pulses being substantially independent of the magnitude of the detected event and with the pulse width of the fast pulses (p) being substantially shorter than the pulse width of the main channel pulses (f). A pile-up rejector circuit (19) allows output pulses to be generated, with amplitudes linearly related to the magnitude of the detected events, whenever the peak of a main channel pulse (f) is not affected by a preceding or succeeding main channel pulse, while inhibiting output pulses whenever peak magnitudes of main channel pulses are affected by adjacent pulses. The substantially symmetrical triangular main channel pulses (f) are generated by the weighted addition (27-31) of successive RC integrations (24, 25, 26) of an RC differentiated step wave (23). The substantially symmetrical triangular fast channel pulses (p) are generated by the RC integration (43) of a bipolar pulse (o) in which the amplitude of the second half is 1/e that of the first half, with the RC time constant of integration being equal to one-half the width of the bipolar pulse.
NASA Technical Reports Server (NTRS)
Liu, Ketao (Inventor); Uetrecht, David S. (Inventor)
2002-01-01
A method, apparatus, article of manufacture, and memory structure for compensating for instrument-induced spacecraft jitter is disclosed. The apparatus comprises a spacecraft control processor for producing an actuator command signal; a signal generator for producing a cancellation signal having at least one harmonic with a frequency and an amplitude substantially equal to those of a disturbance harmonic interacting with a spacecraft structural resonance, and a phase substantially out of phase with that disturbance harmonic; and at least one spacecraft control actuator, communicatively coupled to the spacecraft control processor and the signal generator, for inducing satellite motion according to the actuator command signal and the cancellation signal. The method comprises the steps of generating such a cancellation signal and providing it to a spacecraft control actuator. The article of manufacture comprises a storage device tangibly embodying the method steps described above.
Growth is required for perception of water availability to pattern root branches in plants.
Robbins, Neil E; Dinneny, José R
2018-01-23
Water availability is a potent regulator of plant development and induces root branching through a process termed hydropatterning. Hydropatterning enables roots to position lateral branches toward regions of high water availability, such as wet soil or agar media, while preventing their emergence where water is less available, such as in air. The mechanism by which roots perceive the spatial distribution of water during hydropatterning is unknown. Using primary roots of Zea mays (maize) we reveal that developmental competence for hydropatterning is limited to the growth zone of the root tip. Past work has shown that growth generates gradients in water potential across an organ when asymmetries exist in the distribution of available water. Using mathematical modeling, we predict that substantial growth-sustained water potential gradients are also generated in the hydropatterning competent zone and that such biophysical cues inform the patterning of lateral roots. Using diverse chemical and environmental treatments we experimentally demonstrate that growth is necessary for normal hydropatterning of lateral roots. Transcriptomic characterization of the local response of tissues to a moist surface or air revealed extensive regulation of signaling and physiological pathways, some of which we show are growth-dependent. Our work supports a "sense-by-growth" mechanism governing hydropatterning, by which water availability cues are rendered interpretable through growth-sustained water movement. Copyright © 2018 the Author(s). Published by PNAS.
A Nonparametric Approach For Representing Interannual Dependence In Monthly Streamflow Sequences
NASA Astrophysics Data System (ADS)
Sharma, A.; Oneill, R.
The estimation of risks associated with water management plans requires generation of synthetic streamflow sequences. The mathematical algorithms used to generate these sequences at monthly time scales are found lacking in two main respects: an inability to preserve dependence attributes, particularly at large (seasonal to interannual) time lags; and a poor representation of observed distributional characteristics, in particular strong asymmetry or multimodality in the probability density function. Proposed here is an alternative that naturally incorporates both observed dependence and distributional attributes in the generated sequences. Use of a nonparametric framework provides an effective means for representing the observed probability distribution, while the use of a 'variable kernel' ensures accurate modeling of streamflow data sets that contain a substantial number of zero flow values. A careful selection of prior flows imparts the appropriate short-term memory, while use of an 'aggregate' flow variable allows representation of interannual dependence. The nonparametric simulation model is applied to monthly flows from the Beaver River near Beaver, Utah, USA, and the Burrendong dam inflows, New South Wales, Australia. Results indicate that while the use of traditional simulation approaches leads to an inaccurate representation of dependence at long (annual and interannual) time scales, the proposed model can simulate both short and long-term dependence. As a result, the proposed model ensures a significantly improved representation of reservoir storage statistics, particularly for systems influenced by long droughts. It is important to note that the proposed method offers a simpler and better alternative to conventional disaggregation models as: (a) a separate annual flow series is not required, (b) stringent assumptions relating annual and monthly flows are not needed, and (c) the method does not require the specification of a "water year", instead ensuring that the sum of any sequence of flows lasting twelve months will result in the type of dependence that is observed in the historical annual flow series.
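The conditional-resampling idea can be sketched with a simple k-nearest-neighbour generator conditioned on the previous month's flow and a trailing 12-month aggregate. The k-NN resampler, the neighbour count, and the synthetic "historical" record below are simplified stand-ins for the paper's variable-kernel estimator and the actual gauged records.

```python
import numpy as np

# Nonparametric monthly streamflow generation sketch: the next month's flow is
# resampled from historical months whose conditioning state (previous flow,
# trailing 12-month aggregate) is closest to the current simulated state.
rng = np.random.default_rng(1)
hist = rng.gamma(2.0, 5.0, size=600)              # made-up 50-year monthly record

def state(series, t):
    return np.array([series[t - 1], series[t - 12:t].sum()])

# library of (state, next flow) pairs from the historical record
lib_states = np.array([state(hist, t) for t in range(12, len(hist))])
lib_next = hist[12:]
scale = lib_states.std(axis=0)                    # normalise the two features

def simulate(n_months, k=10):
    sim = list(hist[:12])                         # seed with one year of history
    for _ in range(n_months):
        s = state(np.array(sim), len(sim)) / scale
        d = np.linalg.norm(lib_states / scale - s, axis=1)
        idx = np.argsort(d)[:k]
        w = 1.0 / np.arange(1, k + 1)             # discrete resampling kernel
        w /= w.sum()
        sim.append(lib_next[rng.choice(idx, p=w)])
    return np.array(sim[12:])

synthetic = simulate(240)
print("simulated lag-12 correlation:",
      round(float(np.corrcoef(synthetic[:-12], synthetic[12:])[0, 1]), 3))
```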
Biasetti, Jacopo; Pustavoitau, Aliaksei; Spazzini, Pier Giorgio
2017-01-01
Mechanical circulatory support devices, such as total artificial hearts and left ventricular assist devices, rely on external energy sources for their continuous operation. Clinically approved power supplies rely on percutaneous cables connecting an external energy source to the implanted device with the associated risk of infections. One alternative, investigated in the 70s and 80s, employs a fully implanted nuclear power source. The heat generated by the nuclear decay can be converted into electricity to power circulatory support devices. Due to the low conversion efficiencies, substantial levels of waste heat are generated and must be dissipated to avoid tissue damage, heat stroke, and death. The present work computationally evaluates the ability of the blood flow in the descending aorta to remove the locally generated waste heat for subsequent full-body distribution and dissipation, with the specific aim of investigating methods for containment of local peak temperatures within physiologically acceptable limits. To this aim, coupled fluid–solid heat transfer computational models of the blood flow in the human aorta and different heat exchanger architectures are developed. Particle tracking is used to evaluate temperature histories of cells passing through the heat exchanger region. The use of the blood flow in the descending aorta as a heat sink proves to be a viable approach for the removal of waste heat loads. With the basic heat exchanger design, blood thermal boundary layer temperatures exceed 50°C, possibly damaging blood cells and proteins. Improved designs of the heat exchanger, with the addition of fins and heat guides, allow for drastically lower blood temperatures, possibly leading to a more biocompatible implant. The ability to maintain blood temperatures at biologically compatible levels will ultimately allow for the body-wise distribution, and subsequent dissipation, of heat loads with minimum effects on the human physiology. PMID:29094038
DOE Office of Scientific and Technical Information (OSTI.GOV)
Darghouth, Naim; Barbose, Galen; Wiser, Ryan
2010-03-30
Net metering has become a widespread policy in the U.S. for supporting distributed photovoltaics (PV) adoption. Though specific design details vary, net metering allows customers with PV to reduce their electric bills by offsetting their consumption with PV generation, independent of the timing of the generation relative to consumption - in effect, compensating the PV generation at retail electricity rates (Rose et al. 2009). While net metering has played an important role in jump-starting the residential PV market in the U.S., challenges to net metering policies have emerged in a number of states and contexts, and alternative compensation methods are under consideration. Moreover, one inherent feature of net metering is that the value of the utility bill savings it provides to customers with PV depends heavily on the structure of the underlying retail electricity rate, as well as on the characteristics of the customer and PV system. Consequently, the value of net metering - and the impact of moving to alternative compensation mechanisms - can vary substantially from one customer to the next. For these reasons, it is important for policymakers and others that seek to support the development of distributed PV to understand both how the bill savings varies under net metering, and how the bill savings under net metering compares to other possible compensation mechanisms. To advance this understanding, we analyze the bill savings from PV for residential customers of California's two largest electric utilities, Pacific Gas and Electric (PG&E) and Southern California Edison (SCE). The analysis is based on hourly load data from a sample of 215 residential customers located in the service territories of the two utilities, matched with simulated hourly PV production for the same time period based on data from the nearest of 73 weather stations in the state.
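The sensitivity of bill savings to the compensation mechanism can be illustrated with a toy calculation. The flat retail rate, the export rate, and the hourly load and PV profiles below are invented for illustration only; the report's analysis uses real PG&E/SCE tariffs and 215 metered customers.

```python
import math

# Toy comparison: annual bill savings under annual net metering at the retail
# rate versus an alternative where hourly PV exports earn a lower export rate.
HOURS = 8760
retail, export_rate = 0.20, 0.05          # $/kWh, illustrative only

def load(h):                              # evening-peaking household load, kW
    return 1.0 + 0.6 * math.sin(2 * math.pi * ((h % 24) - 18) / 24)

def pv(h):                                # midday PV production, roughly 4 kW peak
    x = math.sin(math.pi * ((h % 24) - 6) / 12)
    return 4.0 * max(x, 0.0)

no_pv_bill = sum(load(h) for h in range(HOURS)) * retail

# Net metering: net consumption over the whole year, credited at the retail rate.
net_annual = sum(load(h) - pv(h) for h in range(HOURS))
nem_bill = max(net_annual, 0.0) * retail

# Alternative: hourly netting; exports in any hour earn only the export rate.
alt_bill = 0.0
for h in range(HOURS):
    net = load(h) - pv(h)
    alt_bill += net * retail if net >= 0 else net * export_rate

print(f"savings under annual net metering: ${no_pv_bill - nem_bill:,.0f}")
print(f"savings under hourly export rate : ${no_pv_bill - alt_bill:,.0f}")
```

Even this crude sketch shows why the difference between compensation mechanisms grows with the share of PV generation that is exported rather than consumed on site.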
Second harmonic generation in resonant optical structures
Eichenfield, Matt; Moore, Jeremy; Friedmann, Thomas A.; Olsson, Roy H.; Wiwi, Michael; Padilla, Camille; Douglas, James Kenneth; Hattar, Khalid Mikhiel
2018-01-09
An optical second-harmonic generator (or spontaneous parametric down-converter) includes a microresonator formed of a nonlinear optical medium. The microresonator supports at least two modes that can be phase matched at different frequencies so that light can be converted between them: A first resonant mode having substantially radial polarization and a second resonant mode having substantially vertical polarization. The first and second modes have the same radial order. The thickness of the nonlinear medium is less than one-half the pump wavelength within the medium.
NASA Astrophysics Data System (ADS)
Kim, J.; Lee, J.; Kang, H.
2017-12-01
Phragmites australis is one of the representative vegetation types of coastal wetlands and is distributed in North America, East Asia and European countries. In North America, P. australis has invaded large areas of coastal wetlands, which causes various ecological problems such as increases in methane emission and reduction in biodiversity. In South Korea, P. australis is rapidly expanding in tidal marshes in Suncheon Bay. The expansion of P. australis enhanced methane emission by increasing dissolved organic carbon and soil moisture and by changing the relative abundances of methanogens, methanotrophs, and sulfate-reducing bacteria. Microbial community structure might also be shifted and affect the methane cycle, but accurate observations of microbial community structure have not been fully illustrated yet. Therefore, we tried to monitor the changing microbial community structure due to P. australis expansion by using Next Generation Sequencing (NGS). NGS results showed that the microbial community was substantially changed with the expansion. We also observed seasonal variations and a chronosequence of microbial community structures along the expansion of P. australis, which showed distinctive changing patterns. P. australis expansion substantially affected microbial community structure in the tidal marsh, which may play an important role in the methane cycle in tidal marshes.
Space Weather Model Testing And Validation At The Community Coordinated Modeling Center
NASA Astrophysics Data System (ADS)
Hesse, M.; Kuznetsova, M.; Rastaetter, L.; Falasca, A.; Keller, K.; Reitan, P.
The Community Coordinated Modeling Center (CCMC) is a multi-agency partnership aimed at the creation of next generation space weather models. The goal of the CCMC is to undertake the research and developmental work necessary to substantially increase the present-day modeling capability for space weather purposes, and to provide models for transition to the rapid prototyping centers at the space weather forecast centers. This goal requires close collaborations with and substantial involvement of the research community. The physical regions to be addressed by CCMC-related activities range from the solar atmosphere to the Earth's upper atmosphere. The CCMC is an integral part of NASA's Living With a Star initiative, of the National Space Weather Program Implementation Plan, and of the Department of Defense Space Weather Transition Plan. CCMC includes a facility at NASA Goddard Space Flight Center, as well as distributed computing facilities provided by the Air Force. CCMC also provides, to the research community, access to state-of-the-art space research models. In this paper we will provide updates on CCMC status, on current plans, research and development accomplishments and goals, and on the model testing and validation process undertaken as part of the CCMC mandate.
How Turbulence Can Set the Radial Distribution of Gas Giants Formed by Pebble Accretion
NASA Astrophysics Data System (ADS)
Morley Rosenthal, Mickey; Murray-Clay, Ruth
2018-04-01
I discuss how turbulence impacts the orbital separation at which the cores of gas giants can form via pebble accretion. While pebble accretion is extremely rapid for massive planets, I demonstrate that pebble accretion is inhibited at lower protoplanet masses, an effect which is strongly enhanced in a turbulent disk. Using these considerations I derive a "minimum" mass, past which pebble accretion proceeds on timescales less than the disk lifetime. By considering core formation where early growth to this minimum mass proceeds by gravitational focusing of planetesimals, I demonstrate that the semi-major axes where gas giants can form become more restricted as the strength of the nebular turbulence increases; e.g., formation can only occur at distances < 30 AU for α > 10^-2. I also examine the implications of turbulence for the mass gas giants can reach before opening a substantial gap and halting growth. I find that while weak turbulence allows gas giants to form far out in the disk, the masses of these planets are substantially lower (< 1 Jupiter mass), which would preclude them from having been detected by the current generation of direct imaging surveys.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Best, G.; Weikert, N.B.
1984-05-29
A cutting roller for a mining machine, having a substantially conical closure member arranged to face the workings and a tubular body member which has a larger diameter at the end nearer the working face than at the discharge end. The tubular member carries at least one cutting blade, and the closure member mounts at least one cutting blade; each blade is provided at its edge region with a plurality of bit holders for the attachment of cutter bits. The outer surface of the body member merges into the substantially conical closure member in a smooth, even curve, so that the outside diameter of the body member in the region of the working face is substantially greater than the diameter in the region of the discharge end of the cutting roller. The roller is provided with liquid distribution channels on each cutting blade, which channels are connected to a single liquid distribution ring channel in the region of the substantially conical closure member.
Light collection device for flame emission detectors
Woodruff, Stephen D.; Logan, Ronald G.; Pineault, Richard L.
1990-01-01
A light collection device for use in a flame emission detection system such as an on-line, real-time alkali concentration process stream monitor is disclosed which comprises a sphere coated on its interior with a highly diffuse reflective paint which is positioned over a flame emission source, and one or more fiber optic cables which transfer the light generated at the interior of the sphere to a detecting device. The diffuse scattering of the light emitted by the flame uniformly distributes the light in the sphere, and the collection efficiency of the device is greater than that obtainable in the prior art. The device of the present invention thus provides enhanced sensitivity and reduces the noise associated with flame emission detectors, and can achieve substantial improvements in alkali detection levels.
System and method for identifying, reporting, and evaluating presence of substance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Maurice; Lusby, Michael; Van Hook, Arthur
A system and method for identifying, reporting, and evaluating a presence of a solid, liquid, gas, or other substance of interest, particularly a dangerous, hazardous, or otherwise threatening chemical, biological, or radioactive substance. The system comprises one or more substantially automated, location self-aware remote sensing units; a control unit; and one or more data processing and storage servers. Data is collected by the remote sensing units and transmitted to the control unit; the control unit generates and uploads a report incorporating the data to the servers; and thereafter the report is available for review by a hierarchy of responsive and evaluative authorities via a wide area network. The evaluative authorities include a group of relevant experts who may be widely or even globally distributed.
System and method for identifying, reporting, and evaluating presence of substance
Smith, Maurice [Kansas City, MO; Lusby, Michael [Kansas City, MO; Van Hook, Arthur [Lotawana, MO; Cook, Charles J [Raytown, MO; Wenski, Edward G [Lenexa, KS; Solyom, David [Overland Park, KS
2012-02-14
A system and method for identifying, reporting, and evaluating a presence of a solid, liquid, gas, or other substance of interest, particularly a dangerous, hazardous, or otherwise threatening chemical, biological, or radioactive substance. The system comprises one or more substantially automated, location self-aware remote sensing units; a control unit; and one or more data processing and storage servers. Data is collected by the remote sensing units and transmitted to the control unit; the control unit generates and uploads a report incorporating the data to the servers; and thereafter the report is available for review by a hierarchy of responsive and evaluative authorities via a wide area network. The evaluative authorities include a group of relevant experts who may be widely or even globally distributed.
Dispersed SiC nanoparticles in Ni observed by ultra-small-angle X-ray scattering
Xie, R.; Ilavsky, J.; Huang, H. F.; ...
2016-11-24
In this paper, a metal-ceramic composite, nickel reinforced with SiC nanoparticles, was synthesized and characterized for its potential application in next-generation molten salt nuclear reactors. Synchrotron ultra-small-angle X-ray scattering (USAXS) measurements were conducted on the composite. The size distribution and number density of the SiC nanoparticles in the material were obtained through data modelling. Scanning and transmission electron microscopy characterization were performed to substantiate the results of the USAXS measurements. Tensile tests were performed on the samples to measure the change in their yield strength after doping with the nanoparticles. Finally, the average interparticle distance was calculated from the USAXS results and is related to the increased yield strength of the composite.
Two-sided Topp-Leone Weibull distribution
NASA Astrophysics Data System (ADS)
Podeang, Krittaya; Bodhisuwan, Winai
2017-11-01
In this paper, we introduce a general class of lifetime distributions, called the two-sided Topp-Leone generated family of distributions. A special case of the new family is the two-sided Topp-Leone Weibull distribution, which uses the two-sided Topp-Leone distribution as a generator for the Weibull distribution. The two-sided Topp-Leone Weibull distribution can take several shapes, such as decreasing, unimodal, and bimodal, which makes it more flexible than the Weibull distribution. Its quantile function is presented. Parameter estimation by maximum likelihood is discussed. The proposed distribution is applied to a strength data set, a data set of remission times of bladder cancer patients, and a time-to-failure data set for turbochargers. We compare the proposed distribution to the Topp-Leone generated Weibull distribution. In conclusion, the two-sided Topp-Leone Weibull distribution performs similarly to the Topp-Leone generated Weibull distribution on the first and second data sets, but fits better than the Topp-Leone generated Weibull distribution on the other.
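Distributions built from a generator acting on a baseline CDF can be sampled by inverse transform: if G is the generator CDF on [0,1] and F is the Weibull CDF, the generated variate is F^{-1}(G^{-1}(U)) for uniform U. The sketch below uses the ordinary (one-sided) Topp-Leone generator purely as a stand-in, since the paper's two-sided generator is not reproduced here; the parameter values are likewise illustrative.

```python
import numpy as np

# Inverse-transform sampling for a "generator + baseline" family.
# Generated CDF: H(x) = G(F(x)), so X = F_inv(G_inv(U)) with U ~ Uniform(0,1).
# G below is the one-sided Topp-Leone CDF, used only as an illustrative
# stand-in for the two-sided generator; alpha, k, lam are made up.
rng = np.random.default_rng(0)
alpha, k, lam = 2.0, 1.5, 1.0             # generator and Weibull parameters

def G_inv(u, a):                          # Topp-Leone: G(y) = (y * (2 - y))**a
    return 1.0 - np.sqrt(1.0 - u ** (1.0 / a))

def F_inv(y, shape, scale):               # Weibull quantile function
    return scale * (-np.log1p(-y)) ** (1.0 / shape)

u = rng.uniform(size=100_000)
x = F_inv(G_inv(u, alpha), k, lam)
print("sample mean / std:", round(float(x.mean()), 3), round(float(x.std()), 3))
```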
Distributed Generation of Electricity and its Environmental Impacts
Distributed generation refers to technologies that generate electricity at or near where it will be used. Learn about how distributed energy generation can support the delivery of clean, reliable power to additional customers.
An Agent-Based Dynamic Model for Analysis of Distributed Space Exploration Architectures
NASA Astrophysics Data System (ADS)
Sindiy, Oleg V.; DeLaurentis, Daniel A.; Stein, William B.
2009-07-01
A range of complex challenges, but also potentially unique rewards, underlie the development of exploration architectures that use a distributed, dynamic network of resources across the solar system. From a methodological perspective, the prime challenge is to systematically model the evolution (and quantify comparative performance) of such architectures, under uncertainty, to effectively direct further study of specialized trajectories, spacecraft technologies, concept of operations, and resource allocation. A process model for System-of-Systems Engineering is used to define time-varying performance measures for comparative architecture analysis and identification of distinguishing patterns among interoperating systems. Agent-based modeling serves as the means to create a discrete-time simulation that generates dynamics for the study of architecture evolution. A Solar System Mobility Network proof-of-concept problem is introduced representing a set of longer-term, distributed exploration architectures. Options within this set revolve around deployment of human and robotic exploration and infrastructure assets, their organization, interoperability, and evolution, i.e., a system-of-systems. Agent-based simulations quantify relative payoffs for a fully distributed architecture (which can be significant over the long term), the latency period before they are manifest, and the up-front investment (which can be substantial compared to alternatives). Verification and sensitivity results provide further insight on development paths and indicate that the framework and simulation modeling approach may be useful in architectural design of other space exploration mass, energy, and information exchange settings.
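The discrete-time, agent-based style of simulation described above can be sketched with a minimal skeleton. The asset types, the resource-transfer rule, the link-growth schedule, and the final metric are all illustrative placeholders rather than the Solar System Mobility Network model itself.

```python
# Skeleton of a discrete-time, agent-based system-of-systems simulation:
# exploration "assets" exchange a generic resource over a network whose links
# appear as the architecture evolves. Everything here is a made-up placeholder.

class Asset:
    def __init__(self, name, resource):
        self.name, self.resource, self.links = name, resource, []

    def step(self):
        for other in self.links:                     # push resource downhill
            flow = 0.1 * max(self.resource - other.resource, 0.0)
            self.resource -= flow
            other.resource += flow

assets = [Asset("depot", 100.0)] + [Asset(f"outpost{i}", 0.0) for i in range(4)]
for t in range(50):
    if t % 10 == 0 and len(assets[0].links) < 4:     # architecture evolves: new link
        assets[0].links.append(assets[len(assets[0].links) + 1])
    for a in assets:
        a.step()

print({a.name: round(a.resource, 1) for a in assets})
```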
Stochastic Computations in Cortical Microcircuit Models
Maass, Wolfgang
2013-01-01
Experimental data from neuroscience suggest that a substantial amount of knowledge is stored in the brain in the form of probability distributions over network states and trajectories of network states. We provide a theoretical foundation for this hypothesis by showing that even very detailed models for cortical microcircuits, with data-based diverse nonlinear neurons and synapses, have a stationary distribution of network states and trajectories of network states to which they converge exponentially fast from any initial state. We demonstrate that this convergence holds in spite of the non-reversibility of the stochastic dynamics of cortical microcircuits. We further show that, in the presence of background network oscillations, separate stationary distributions emerge for different phases of the oscillation, in accordance with experimentally reported phase-specific codes. We complement these theoretical results by computer simulations that investigate resulting computation times for typical probabilistic inference tasks on these internally stored distributions, such as marginalization or marginal maximum-a-posteriori estimation. Furthermore, we show that the inherent stochastic dynamics of generic cortical microcircuits enables them to quickly generate approximate solutions to difficult constraint satisfaction problems, where stored knowledge and current inputs jointly constrain possible solutions. This provides a powerful new computing paradigm for networks of spiking neurons, that also throws new light on how networks of neurons in the brain could carry out complex computational tasks such as prediction, imagination, memory recall and problem solving. PMID:24244126
Monte-Carlo Method Application for Precising Meteor Velocity from TV Observations
NASA Astrophysics Data System (ADS)
Kozak, P.
2014-12-01
The Monte-Carlo method (method of statistical trials) as an application for meteor observation processing was developed in the author's Ph.D. thesis in 2005 and first used in his works in 2008. The idea of the method is that if we generate random values of the input data, the equatorial coordinates of the meteor head in a sequence of TV frames, in accordance with their statistical distributions, we can plot the probability density distributions for all of its kinematical parameters and obtain their mean values and dispersions. This also opens the theoretical possibility of refining the most important parameter, the geocentric velocity of the meteor, which has the strongest influence on the precision of the calculated heliocentric orbit elements. In the classical approach the velocity vector was calculated in two stages: first the vector direction is computed as the vector product of the poles of the meteor-trajectory great circles determined from the two observational points; then the absolute value of the velocity is calculated independently from each observational point, and one of the two values is selected, for some reason, as the final result. In the given method we propose to obtain the statistical distribution of the velocity absolute value as the intersection of the two distributions corresponding to the velocity values obtained from the different points. We suppose that such an approach should substantially increase the precision of the meteor velocity calculation and remove subjective inaccuracies.
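The statistical-trials idea can be sketched by perturbing the per-frame positions with their measurement errors, fitting a speed for each trial from each station, and combining the two per-station speed distributions. The one-dimensional geometry, the noise level, the frame rate, and the product-of-histograms combination below are simplifications of the actual two-station astrometric reduction.

```python
import numpy as np

# Monte Carlo (statistical trials) sketch: per-frame positions are perturbed
# by their measurement error, a speed is fit per trial and per station, and
# the two speed distributions are combined by multiplying their densities
# (their "intersection"). All numbers are illustrative simplifications.
rng = np.random.default_rng(0)
n_frames, dt, true_v = 25, 0.04, 30.0          # frames, s, km/s
t = np.arange(n_frames) * dt
sigma = 0.15                                   # per-frame position error, km

def speed_trials(n_trials, offset):
    pos = true_v * t + offset                  # station-specific projected track
    v = np.empty(n_trials)
    for i in range(n_trials):
        noisy = pos + rng.normal(0.0, sigma, n_frames)
        v[i] = np.polyfit(t, noisy, 1)[0]      # fitted slope = speed for this trial
    return v

v1, v2 = speed_trials(20_000, 0.0), speed_trials(20_000, 2.0)
bins = np.linspace(28, 32, 201)
p1, _ = np.histogram(v1, bins, density=True)
p2, _ = np.histogram(v2, bins, density=True)
width = np.diff(bins)[0]
combined = p1 * p2
combined /= combined.sum() * width             # normalised "intersection" density
centers = 0.5 * (bins[1:] + bins[:-1])
print("combined speed estimate:", round(float((centers * combined * width).sum()), 3), "km/s")
```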
Method and apparatus for wind turbine air gap control
Grant, James Jonathan; Bagepalli, Bharat Sampathkumaran; Jansen, Patrick Lee; DiMascio, Paul Stephen; Gadre, Aniruddha Dattatraya; Qu, Ronghai
2007-02-20
Methods and apparatus for assembling a wind turbine generator are provided. The wind turbine generator includes a core and a plurality of stator windings circumferentially spaced about a generator longitudinal axis, a rotor rotatable about the generator longitudinal axis wherein the rotor includes a plurality of magnetic elements coupled to a radially outer periphery of the rotor such that an airgap is defined between the stator windings and the magnetic elements and the plurality of magnetic elements including a radially inner periphery having a first diameter. The wind turbine generator also includes a bearing including a first member in rotatable engagement with a radially inner second member, the first member including a radially outer periphery, a diameter of the radially outer periphery of the first member being substantially equal to the first diameter, the rotor coupled to the stator through the bearing such that a substantially uniform airgap is maintained.
Method and apparatus for anti-islanding protection of distributed generations
Ye, Zhihong; John, Vinod; Wang, Changyong; Garces, Luis Jose; Zhou, Rui; Li, Lei; Walling, Reigh Allen; Premerlani, William James; Sanza, Peter Claudius; Liu, Yan; Dame, Mark Edward
2006-03-21
An apparatus for anti-islanding protection of a distributed generation with respect to a feeder connected to an electrical grid is disclosed. The apparatus includes a sensor adapted to generate a voltage signal representative of an output voltage and/or a current signal representative of an output current at the distributed generation, and a controller responsive to the signals from the sensor. The controller is productive of a control signal directed to the distributed generation to drive an operating characteristic of the distributed generation out of a nominal range in response to the electrical grid being disconnected from the feeder.
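Active anti-islanding schemes of this general kind perturb the inverter output and watch whether the point-of-connection frequency or voltage drifts outside its nominal window once the grid no longer anchors it. The plant model, gain, bias, and limits in the sketch below are illustrative assumptions, not the patented controller.

```python
# Sketch of an active anti-islanding check: the controller injects a small
# positive-feedback perturbation into its frequency command. While the grid is
# connected, the stiff grid pins the measured frequency at nominal; after grid
# loss, the loop drives the frequency out of its nominal window and trips.
F_NOM, F_MIN, F_MAX = 60.0, 59.3, 60.5     # Hz, illustrative limits
GAIN, BIAS = 1.2, 0.01                     # gain > 1: unstable only when islanded

def plant_frequency(f_cmd, grid_connected):
    # Grid-connected: the stiff grid holds frequency at nominal.
    # Islanded: the local frequency simply follows the inverter command.
    return F_NOM if grid_connected else f_cmd

f_meas, tripped = F_NOM, False
for step in range(1000):
    grid_connected = step < 500            # grid disconnects halfway through
    f_cmd = F_NOM + GAIN * (f_meas - F_NOM) + BIAS
    f_meas = plant_frequency(f_cmd, grid_connected)
    if not (F_MIN <= f_meas <= F_MAX):
        tripped = True
        print(f"islanding detected at step {step}, f = {f_meas:.2f} Hz")
        break

print("tripped:", tripped)
```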
A Review of Microgrid Architectures and Control Strategy
NASA Astrophysics Data System (ADS)
Jadav, Krishnarajsinh A.; Karkar, Hitesh M.; Trivedi, I. N.
2017-12-01
In this paper microgrid architectures and various converter control strategies are reviewed. A microgrid is defined as an interconnected network of distributed energy resources, loads and energy storage systems. This emerging concept realizes the potential of distributed generators. An AC microgrid interconnects AC distributed generators, such as wind turbines, and DC distributed generators, such as PV and fuel cells, using inverters. In a DC microgrid, the output of an AC distributed generator must be converted to DC using rectifiers, while DC distributed generators can be connected directly. The hybrid microgrid avoids the multiple reverse conversions (AC-DC-AC and DC-AC-DC) that occur in individual AC or DC microgrids. In a hybrid microgrid, all AC distributed generators are connected to the AC microgrid and all DC distributed generators are connected to the DC microgrid. An interlinking converter is used for power balance between the two microgrids, transferring power from one microgrid to the other if either is overloaded. At the end, a review of interlinking converter control strategies is presented.
Elastin Shapes Small Molecule Distribution in Lymph Node Conduits.
Lin, Yujia; Louie, Dante; Ganguly, Anutosh; Wu, Dequan; Huang, Peng; Liao, Shan
2018-05-01
The spatial and temporal Ag distribution determines the subsequent T cell and B cell activation at the distinct anatomical locations in the lymph node (LN). It is well known that LN conduits facilitate small Ag distribution in the LN, but the mechanism of how Ags travel along LN conduits remains poorly understood. In C57BL/6J mice, using FITC as a fluorescent tracer to study lymph distribution in the LN, we found that FITC preferentially colocalized with LN capsule-associated (LNC) conduits. Images generated using a transmission electron microscope showed that LNC conduits are composed of solid collagen fibers and are wrapped with fibroblastic cells. Superresolution images revealed that high-intensity FITC is typically colocalized with elastin fibers inside the LNC conduits. Whereas tetramethylrhodamine isothiocyanate appears to enter LNC conduits as effectively as FITC, fluorescently-labeled Alexa-555-conjugated OVA labels significantly fewer LNC conduits. Importantly, injection of Alexa-555-conjugated OVA with LPS substantially increases OVA distribution along elastin fibers in LNC conduits, indicating immune stimulation is required for effective OVA traveling along elastin in LN conduits. Finally, elastin fibers preferentially surround lymphatic vessels in the skin and likely guide fluid flow to the lymphatic vessels. Our studies demonstrate that fluid or small molecules are preferentially colocalized with elastin fibers. Although the exact mechanism of how elastin fibers regulate Ag trafficking remains to be explored, our results suggest that elastin can be a potentially new target to direct Ag distribution in the LN during vaccine design. Copyright © 2018 by The American Association of Immunologists, Inc.
77 FR 76380 - Partner's Distributive Share
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-28
...'s Distributive Share AGENCY: Internal Revenue Service (IRS), Treasury. ACTION: Final regulations... partnership's allocations are substantial. However, this commenter also explained that many partnerships are... partnership. This commenter further explained that, provided the partnership's assumptions are reasonable...
NASA Astrophysics Data System (ADS)
Sharma, Ashish; Levko, Dmitry; Raja, Laxminarayan
2016-09-01
We present a computational model of nanosecond streamers generated in helium bubbles immersed in distilled water at atmospheric pressure. The model is based on a self-consistent, multispecies, continuum description of the plasma and takes into account the presence of water vapor in the gas bubble for a more accurate description of the discharge kinetics. We find that the dynamic characteristics of the streamer discharge are completely different at low and high overvoltages. The polarity of the trigger voltage has a substantial effect on the initiation, transition, and evolution stages of the streamers, with the volumetric distribution of species in the streamer channel being much more uniform for negative trigger voltages due to the presence of multiple streamers. We also find that the presence of water vapor significantly influences the distribution of the dominant species in the streamer trail and has a profound effect on the flux of the dominant species to the bubble wall. The research reported in this publication was supported by Competitive Research Funding from King Abdullah University of Science and Technology (KAUST).
Enhancing grain boundary ionic conductivity in mixed ionic–electronic conductors
Lin, Ye; Fang, Shumin; Su, Dong; ...
2015-04-10
Mixed ionic–electronic conductors are widely used in devices for energy conversion and storage. Grain boundaries in these materials have nanoscale spatial dimensions, which can generate substantial resistance to ionic transport due to dopant segregation. Here, we report the concept of targeted phase formation in a Ce0.8Gd0.2O2−δ–CoFe2O4 composite that serves to enhance the grain boundary ionic conductivity. Using transmission electron microscopy and spectroscopy approaches, we probe the grain boundary charge distribution and chemical environments altered by the phase reaction between the two constituents. The formation of an emergent phase successfully avoids segregation of the Gd dopant and depletion of oxygen vacancies at the Ce0.8Gd0.2O2−δ–Ce0.8Gd0.2O2−δ grain boundary. This results in superior grain boundary ionic conductivity as demonstrated by the enhanced oxygen permeation flux. Lastly, this work illustrates the control of mesoscale level transport properties in mixed ionic–electronic conductor composites through processing induced modifications of the grain boundary defect distribution.
An experimental study of the validity of the heat-field concept for sonic-boom alleviation
NASA Technical Reports Server (NTRS)
Swigart, R. J.
1974-01-01
An experimental program was carried out in the NASA-Langley 4 ft x 4 ft supersonic pressure tunnel to investigate the validity of the heat-field concept for sonic boom alleviation. The concept involves heating the flow about a supersonic aircraft in such a manner as to obtain an increase in effective aircraft length and yield an effective aircraft shape that will result in a shock-free pressure signature on the ground. First, a basic body-of-revolution representing an SST configuration with its lift equivalence in volume was tested to provide a baseline pressure signature. Second, a model having a 5/2-power area distribution which, according to theory, should yield a linear pressure rise with no front shock wave was tested. Third, the concept of providing the 5/2-power area distribution by using an off-axis slender fin below the basic body was investigated. Then a substantial portion (approximately 40 percent) of the solid fin was replaced by a heat field generated by passing heated nitrogen through the rear of the fin.
Pepper, Mitzy; Fujita, Matthew K; Moritz, Craig; Keogh, J Scott
2011-04-01
Refugia featured prominently in shaping evolutionary trajectories during repeated cycles of glaciation in the Quaternary, particularly in the Northern Hemisphere. The Southern Hemisphere instead experienced cycles of severe aridification but little is known about the temporal presence and role of refugia for arid-adapted biota. Isolated mountain ranges located in the Australian arid zone likely provided refugia for many species following Mio/Pliocene (<15 Ma) aridification; however, the evolutionary consequences of the recent development of widespread sand deserts is largely unknown. To test alternative hypotheses of ancient vs. recent isolation, we generated a 10 gene data set to assess divergence history among saxicolous geckos in the genus Heteronotia that have distributions confined to major rocky ranges in the arid zone. Phylogenetic analyses show that each rocky range harbours a divergent lineage, and substantial intraspecific diversity is likely due to topographic complexity in these areas. Old divergences (~4 Ma) among lineages pre-date the formation of the geologically young sand deserts (<1 Ma), suggesting that Pliocene climate shifts fractured the distributions of biota long before the spread of the deserts. © 2011 Blackwell Publishing Ltd.
Keefer, D.K.
2000-01-01
The 1989 Loma Prieta, California earthquake (moment magnitude, M=6.9) generated landslides throughout an area of about 15,000 km² in central California. Most of these landslides occurred in an area of about 2000 km² in the mountainous terrain around the epicenter, where they were mapped during field investigations immediately following the earthquake. The distribution of these landslides is investigated statistically, using regression and one-way analysis of variance (ANOVA) techniques to determine how the occurrence of landslides correlates with distance from the earthquake source, slope steepness, and rock type. The landslide concentration (defined as the number of landslide sources per unit area) has a strong inverse correlation with distance from the earthquake source and a strong positive correlation with slope steepness. The landslide concentration differs substantially among the various geologic units in the area. The differences correlate to some degree with differences in lithology and degree of induration, but this correlation is less clear, suggesting a more complex relationship between landslide occurrence and rock properties. © 2000 Elsevier Science B.V. All rights reserved.
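To make the statistical approach concrete, the sketch below fits a log-linear regression of landslide concentration against distance from the earthquake source, the kind of inverse relationship the record describes. The data points are synthetic placeholders, not Keefer's measurements.

```python
# Illustrative sketch of the kind of analysis described: regressing
# landslide concentration (sources per km^2) on distance from the
# earthquake source. The numbers below are synthetic, not the mapped data.
import numpy as np
from scipy import stats

distance_km = np.array([5, 10, 15, 20, 30, 40, 50])              # hypothetical bins
concentration = np.array([12.0, 7.5, 4.1, 2.6, 1.1, 0.5, 0.2])   # landslides per km^2

# A strong inverse correlation is often captured with a log-linear fit:
# log10(concentration) = a + b * distance
slope, intercept, r, p, se = stats.linregress(distance_km,
                                              np.log10(concentration))
print(f"slope = {slope:.3f} per km, r = {r:.2f}, p = {p:.3g}")
```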
Enhancing grain boundary ionic conductivity in mixed ionic–electronic conductors
Lin, Ye; Fang, Shumin; Su, Dong; Brinkman, Kyle S; Chen, Fanglin
2015-01-01
Mixed ionic–electronic conductors are widely used in devices for energy conversion and storage. Grain boundaries in these materials have nanoscale spatial dimensions, which can generate substantial resistance to ionic transport due to dopant segregation. Here, we report the concept of targeted phase formation in a Ce0.8Gd0.2O2−δ–CoFe2O4 composite that serves to enhance the grain boundary ionic conductivity. Using transmission electron microscopy and spectroscopy approaches, we probe the grain boundary charge distribution and chemical environments altered by the phase reaction between the two constituents. The formation of an emergent phase successfully avoids segregation of the Gd dopant and depletion of oxygen vacancies at the Ce0.8Gd0.2O2−δ–Ce0.8Gd0.2O2−δ grain boundary. This results in superior grain boundary ionic conductivity as demonstrated by the enhanced oxygen permeation flux. This work illustrates the control of mesoscale level transport properties in mixed ionic–electronic conductor composites through processing induced modifications of the grain boundary defect distribution. PMID:25857355
Enhancing grain boundary ionic conductivity in mixed ionic-electronic conductors.
Lin, Ye; Fang, Shumin; Su, Dong; Brinkman, Kyle S; Chen, Fanglin
2015-04-10
Mixed ionic-electronic conductors are widely used in devices for energy conversion and storage. Grain boundaries in these materials have nanoscale spatial dimensions, which can generate substantial resistance to ionic transport due to dopant segregation. Here, we report the concept of targeted phase formation in a Ce0.8Gd0.2O2-δ-CoFe2O4 composite that serves to enhance the grain boundary ionic conductivity. Using transmission electron microscopy and spectroscopy approaches, we probe the grain boundary charge distribution and chemical environments altered by the phase reaction between the two constituents. The formation of an emergent phase successfully avoids segregation of the Gd dopant and depletion of oxygen vacancies at the Ce0.8Gd0.2O2-δ-Ce0.8Gd0.2O2-δ grain boundary. This results in superior grain boundary ionic conductivity as demonstrated by the enhanced oxygen permeation flux. This work illustrates the control of mesoscale level transport properties in mixed ionic-electronic conductor composites through processing induced modifications of the grain boundary defect distribution.
The influence of the environment on the propagation of protostellar outflows
NASA Astrophysics Data System (ADS)
Moraghan, Anthony; Smith, Michael D.; Rosen, Alexander
2008-06-01
The properties of bipolar outflows depend on the structure in the environment as well as the nature of the jet. To help distinguish between the two, we investigate here the properties pertaining to the ambient medium. We execute axisymmetric hydrodynamic simulations, injecting continuous atomic jets into molecular media with density gradients (protostellar cores) and density discontinuities (thick swept-up sheets). We determine the distribution of outflowing mass with radial velocity (the mass spectrum) to quantify our approach and to compare to observationally determined values. We uncover a sequence from clump entrainment in the flanks to bow shock sweeping as the density profile steepens. We also find that the dense, highly supersonic outflows remain collimated but can become turbulent after passing through a shell. The mass spectra vary substantially in time, especially at radial speeds exceeding 15 km s⁻¹. The mass spectra also vary according to the conditions: both envelope-type density distributions and the passage through dense sheets generate considerably steeper mass spectra than a uniform medium. The simulations suggest that observed outflows penetrate highly non-uniform media.
Large-Eddy Simulations of Noise Generation in Supersonic Jets at Realistic Engine Temperatures
NASA Astrophysics Data System (ADS)
Liu, Junhui; Corrigan, Andrew; Kailasanath, K.; Taylor, Brian
2015-11-01
Large-eddy simulations (LES) have been carried out to investigate the noise generation in highly heated supersonic jets at temperatures similar to those observed in high-performance jet engine exhausts. The exhaust temperature of high-performance jet engines can range from 1000 K at an intermediate power to above 2000 K at a maximum afterburning power. In low-temperature jets, the effects of the variation of the specific heat ratio as well as the radial temperature profile near the nozzle exit are small and are ignored, but it is not clear whether those effects can also be ignored in highly heated jets. The impact of the variation of the specific heat ratio is assessed by comparing LES results using a variable specific heat ratio with those using a constant specific heat ratio. The impact on both the flow field and the noise distributions is investigated. Because the total temperature near the nozzle wall can be substantially lower than the nozzle total temperature, either due to heat loss through the nozzle wall or due to cooling applied near the wall, this lower wall temperature may affect the temperature in the shear layer, and thus the noise generation. The impact of the radial temperature profile on the jet noise generation is investigated by comparing results of lower nozzle wall temperatures with those of the adiabatic wall condition.
Behrman, Jere R; Schott, Whitney; Mani, Subha; Crookston, Benjamin T; Dearden, Kirk; Duc, Le Thuc; Fernald, Lia C H; Stein, Aryeh D
2017-07-01
Academic and policy literatures on intergenerational transmissions of poverty and inequality suggest that improving schooling attainment and income for parents in poor households will lessen poverty and inequality in their children's generation through increased human capital accumulated by their children. However, magnitudes of such effects are unknown. We use data on children born in the 21st century in four developing countries to simulate how changes in parents' schooling attainment and consumption would affect poverty and inequality in both the parents' and their children's generations. We find that increasing minimum schooling or income substantially reduces poverty and inequality in the parents' generation, but does not carry over to reducing poverty and inequality substantially in the children's generation. Therefore, while reductions in poverty and inequality in the parents' generation are desirable in themselves to improve welfare among current adults, they are not likely to have large impacts in reducing poverty and particularly in reducing inequality in human capital in the next generation.
Code of Federal Regulations, 2010 CFR
2010-04-01
... distribution, if the distribution was motivated in whole or substantial part by a corporate business purpose (within the meaning of § 1.355-2(b)) other than a business purpose to facilitate the acquisition or a..., the distribution was motivated by a business purpose to facilitate the acquisition or a similar...
Distributed Generation to Support Development-Focused Climate Action
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cox, Sadie; Gagnon, Pieter; Stout, Sherry
2016-09-01
This paper explores the role of distributed generation, with a high renewable energy contribution, in supporting low emission climate-resilient development. The paper presents potential impacts on development (via energy access), greenhouse gas emission mitigation, and climate resilience directly associated with distributed generation, as well as specific actions that may enhance or increase the likelihood of climate and development benefits. This paper also seeks to provide practical and timely insights to support distributed generation policymaking and planning within the context of common climate and development goals as the distributed generation landscape rapidly evolves globally. Country-specific distributed generation policy and program examples, as well as analytical tools that can inform efforts internationally, are also highlighted throughout the paper.
Variations in the Life Cycle of Anemone patens L. (Ranunculaceae) in Wild Populations of Canada
Kricsfalusy, Vladimir
2016-01-01
Based on a study of a perennial herb Anemone patens L. (Ranunculaceae) in a variety of natural habitats in Saskatchewan, Canada, eight life stages (seed, seedling, juvenile, immature, vegetative, generative, subsenile, and senile) are distinguished and characterized in detail. The species ontogenetic growth patterns are investigated. A. patens has a long life cycle that may last for several decades which leads to the formation of compact clumps. The distribution and age of clumps vary substantially in different environments with different levels of disturbance. The plant ontogeny includes the regular cycle with reproduction occurring through seeds. There is an optional subsenile vegetative disintegration at the end of the life span. The following variations in the life cycle of A. patens are identified: with slower development in young age, with an accelerated development, with omission of the generative stage, with retrogression to previous life stages in mature age, and with vegetative dormancy. The range of variations in the life cycle of A. patens may play an important role in maintaining population stability in different environmental conditions and management regimes. PMID:27376340
Biosignatures from Earth-like planets around M dwarfs.
Segura, Antígona; Kasting, James F; Meadows, Victoria; Cohen, Martin; Scalo, John; Crisp, David; Butler, Rebecca A H; Tinetti, Giovanna
2005-12-01
Coupled one-dimensional photochemical-climate calculations have been performed for hypothetical Earth-like planets around M dwarfs. Visible/near-infrared and thermal-infrared synthetic spectra of these planets were generated to determine which biosignature gases might be observed by a future, space-based telescope. Our star sample included two observed active M dwarfs (AD Leo and GJ 643) and three quiescent model stars. The spectral distribution of these stars in the ultraviolet generates a different photochemistry on these planets. As a result, the biogenic gases CH4, N2O, and CH3Cl have substantially longer lifetimes and higher mixing ratios than on Earth, making them potentially observable by space-based telescopes. On the active M-star planets, an ozone layer similar to Earth's was developed that resulted in a spectroscopic signature comparable to the terrestrial one. The simultaneous detection of O2 (or O3) and a reduced gas in a planet's atmosphere has been suggested as strong evidence for life. Planets circling M stars may be good locations to search for such evidence.
Madi, Asaf; Poran, Asaf; Shifrut, Eric; Reich-Zeliger, Shlomit; Greenstein, Erez; Zaretsky, Irena; Arnon, Tomer; Laethem, Francois Van; Singer, Alfred; Lu, Jinghua; Sun, Peter D; Cohen, Irun R; Friedman, Nir
2017-01-01
Diversity of T cell receptor (TCR) repertoires, generated by somatic DNA rearrangements, is central to immune system function. However, the level of sequence similarity of TCR repertoires within and between species has not been characterized. Using network analysis of high-throughput TCR sequencing data, we found that abundant CDR3-TCRβ sequences were clustered within networks generated by sequence similarity. We discovered a substantial number of public CDR3-TCRβ segments that were identical in mice and humans. These conserved public sequences were central within TCR sequence-similarity networks. Annotated TCR sequences, previously associated with self-specificities such as autoimmunity and cancer, were linked to network clusters. Mechanistically, CDR3 networks were promoted by MHC-mediated selection, and were reduced following immunization, immune checkpoint blockade or aging. Our findings provide a new view of T cell repertoire organization and physiology, and suggest that the immune system distributes its TCR sequences unevenly, attending to specific foci of reactivity. DOI: http://dx.doi.org/10.7554/eLife.22057.001 PMID:28731407
Birdwell, Justin E.; Lewan, Michael; Bake, Kyle D.; Bolin, Trudy B.; Craddock, Paul R.; Forsythe, Julia C.; Pomerantz, Andrew E.
2018-01-01
Previous studies on the distribution of bulk sulfur species in bitumen before and after artificial thermal maturation using various pyrolysis methods have indicated that the quantities of reactive (sulfide, sulfoxide) and thermally stable (thiophene) sulfur moieties change following consistent trends under increasing thermal stress. These trends show that sulfur distributions change during maturation in ways that are similar to those of carbon, most clearly illustrated by the increase in aromatic sulfur (thiophenic) as a function of thermal maturity. In this study, we have examined the sulfur moiety distributions of retained bitumen from a set of pre- and post-pyrolysis rock samples in an organic sulfur-rich, calcareous oil shale from the Upper Cretaceous Ghareb Formation. Samples collected from outcrop in Jordan were subjected to hydrous pyrolysis (HP). Sulfur speciation in extracted bitumens was examined using K-edge X-ray absorption near-edge structure (XANES) spectroscopy. The most substantial changes in sulfur distribution occurred at temperatures up to the point of maximum bitumen generation (∼300 °C) as determined from comparison of the total organic carbon content for samples before and after extraction. Organic sulfide in bitumen decreased with increasing temperature at relatively low thermal stress (200–300 °C) and was not detected in extracts from rocks subjected to HP at temperatures above around 300 °C. Sulfoxide content increased between 200 and 280 °C, but decreased at higher temperatures. The concentration of thiophenic sulfur increased up to 300 °C, and remained essentially stable under increasing thermal stress (mg-S/g-bitumen basis). The ratio of stable-to-reactive+stable sulfur moieties ([thiophene/(sulfide+sulfoxide+thiophene)], T/SST) followed a sigmoidal trend with HP temperature, increasing slightly up to 240 °C, followed by a substantial increase between 240 and 320 °C, and approaching a constant value (∼0.95) at temperatures above 320 °C. This sulfur moiety ratio appears to provide complementary thermal maturity information to geochemical parameters derived from other analyses of extracted source rocks.
Rothendler, James A; Rose, Adam J; Reisman, Joel I; Berlowitz, Dan R; Kazis, Lewis E
2012-01-01
While developed for managing individuals with atrial fibrillation, risk stratification schemes for stroke, such as CHADS2, may be useful in population-based studies, including those assessing process of care. We investigated how certain decisions in identifying diagnoses from administrative data affect the apparent prevalence of CHADS2-associated diagnoses and distribution of scores. Two sets of ICD-9 codes (more restrictive/ more inclusive) were defined for each CHADS2-associated diagnosis. For stroke/transient ischemic attack (TIA), the more restrictive set was applied to only inpatient data. We varied the number of years (1-3) in searching for relevant codes, and, except for stroke/TIA, the number of instances (1 vs. 2) that diagnoses were required to appear. The impact of choices on apparent disease prevalence varied by type of choice and condition, but was often substantial. Choices resulting in substantial changes in prevalence also tended to be associated with more substantial effects on the distribution of CHADS2 scores. PMID:22937488
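For readers unfamiliar with the score itself, the sketch below computes CHADS2 from administrative diagnosis flags using the standard point weights (congestive heart failure 1, hypertension 1, age ≥75 years 1, diabetes 1, prior stroke/TIA 2). The ICD-9 prefixes shown are placeholders only; they are not the restrictive or inclusive code sets defined in the study.

```python
# Sketch of scoring CHADS2 from administrative diagnosis data. The point
# weights are the standard scheme; the ICD-9 code lists below are
# placeholders and NOT the restrictive/inclusive sets defined in the study.
RESTRICTIVE = {
    "chf": {"428"}, "htn": {"401"}, "dm": {"250"}, "stroke_tia": {"434", "435"},
}

def chads2(age, icd9_prefixes, codes=RESTRICTIVE):
    """icd9_prefixes: set of 3-digit ICD-9 prefixes found for the patient."""
    score = 0
    score += 1 if icd9_prefixes & codes["chf"] else 0          # heart failure
    score += 1 if icd9_prefixes & codes["htn"] else 0          # hypertension
    score += 1 if age >= 75 else 0                             # age >= 75
    score += 1 if icd9_prefixes & codes["dm"] else 0           # diabetes
    score += 2 if icd9_prefixes & codes["stroke_tia"] else 0   # prior stroke/TIA
    return score

print(chads2(78, {"401", "434"}))  # hypertension + age + prior stroke -> 4
```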
Force Balance and Substorm Effects in the Magnetotail
NASA Technical Reports Server (NTRS)
Kaufmann, Richard L.; Larson, Douglas J.; Kontodinas, Ioannis D.; Ball, Bryan M.
1997-01-01
A model of the quiet time middle magnetotail is developed using a consistent orbit tracing technique. The momentum equation is used to calculate geocentric solar magnetospheric components of the particle and electromagnetic forces throughout the current sheet. Ions generate the dominant x and z force components. Electron and ion forces almost cancel in the y direction because the two species drift earthward at comparable speeds. The force viewpoint is applied to a study of some substorm processes. Generation of the rapid flows seen during substorm injection and bursty bulk flow events implies substantial force imbalances. The formation of a substorm diversion loop is one cause of changes in the magnetic field and therefore in the electromagnetic force. It is found that larger forces are produced when the cross-tail current is diverted to the ionosphere than would be produced if the entire tail current system simply decreased. Plasma is accelerated while the forces are unbalanced resulting in field lines within a diversion loop becoming more dipolar. Field lines become more stretched and the plasma sheet becomes thinner outside a diversion loop. Mechanisms that require thin current sheets to produce current disruption then can create additional diversion loops in the newly thinned regions. This process may be important during multiple expansion substorms and in differentiating pseudoexpansions from full substorms. It is found that the tail field model used here can be generated by a variety of particle distribution functions. However, for a given energy distribution the mixture of particle mirror or reflection points is constrained by the consistency requirement. The study of uniqueness also leads to the development of a technique to select guiding center electrons that will produce charge neutrality all along a flux tube containing nonguiding center ions without the imposition of a parallel electric field.
Potential effects of regional pumpage on groundwater age distribution
Zinn, Brendan A.; Konikow, Leonard F.
2007-01-01
Groundwater ages estimated from environmental tracers can help calibrate groundwater flow models. Groundwater age represents a mixture of traveltimes, with the distribution of ages determined by the detailed structure of the flow field, which can be prone to significant transient variability. Effects of pumping on age distribution were assessed using direct age simulation in a hypothetical layered aquifer system. A steady state predevelopment age distribution was computed first. A well field was then introduced, and pumpage caused leakage into the confined aquifer of older water from an overlying confining unit. Large changes in simulated groundwater ages occurred in both the aquifer and the confining unit at high pumping rates, and the effects propagated a substantial distance downgradient from the wells. The range and variance of ages contributing to the well increased substantially during pumping. The results suggest that the groundwater age distribution in developed aquifers may be affected by transient leakage from low‐permeability material, such as confining units, under certain hydrogeologic conditions.
Integrated-Circuit Pseudorandom-Number Generator
NASA Technical Reports Server (NTRS)
Steelman, James E.; Beasley, Jeff; Aragon, Michael; Ramirez, Francisco; Summers, Kenneth L.; Knoebel, Arthur
1992-01-01
The integrated circuit produces 8-bit pseudorandom numbers from a specified probability distribution at a rate of 10 MHz. Using Boolean logic, the circuit implements a pseudorandom-number-generating algorithm. The circuit includes eight 12-bit pseudorandom-number generators whose outputs are uniformly distributed. 8-bit pseudorandom numbers satisfying a specified nonuniform probability distribution are generated by processing the uniformly distributed outputs of the eight 12-bit pseudorandom-number generators through a "pipeline" of D flip-flops, comparators, and memories implementing conditional probabilities on zeros and ones.
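A software analogue of the same idea is shown below: uniformly distributed pseudorandom words are mapped through an inverse-CDF lookup table so that the 8-bit outputs follow a specified nonuniform distribution. This sketches the principle rather than the circuit; the triangular target distribution and the table size are illustrative choices.

```python
# Software analogue: map uniform pseudorandom words through a lookup table
# derived from the target CDF so that 8-bit outputs follow a specified
# (here hypothetical, triangular) distribution.
import numpy as np

rng = np.random.default_rng(1)

# Target pmf over 8-bit values 0..255 (triangular, as an example).
values = np.arange(256)
pmf = np.minimum(values + 1, 256 - values).astype(float)
pmf /= pmf.sum()
cdf = np.cumsum(pmf)

# Build a 4096-entry table indexed by a 12-bit uniform word, mirroring the
# 12-bit uniform generators feeding the 8-bit output stage.
table = np.searchsorted(cdf, (np.arange(4096) + 0.5) / 4096).astype(np.uint8)

uniform12 = rng.integers(0, 4096, size=100_000)   # uniform 12-bit words
samples = table[uniform12]                        # 8-bit, target distribution
print(samples.mean(), samples.std())
```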
May, Eric F; Lim, Vincent W; Metaxas, Peter J; Du, Jianwei; Stanwix, Paul L; Rowland, Darren; Johns, Michael L; Haandrikman, Gert; Crosby, Daniel; Aman, Zachary M
2018-03-13
Gas hydrate formation is a stochastic phenomenon of considerable significance for any risk-based approach to flow assurance in the oil and gas industry. In principle, well-established results from nucleation theory offer the prospect of predictive models for hydrate formation probability in industrial production systems. In practice, however, heuristics are relied on when estimating formation risk for a given flowline subcooling or when quantifying kinetic hydrate inhibitor (KHI) performance. Here, we present statistically significant measurements of formation probability distributions for natural gas hydrate systems under shear, which are quantitatively compared with theoretical predictions. Distributions with over 100 points were generated using low-mass, Peltier-cooled pressure cells, cycled in temperature between 40 and -5 °C at up to 2 K·min⁻¹ and analyzed with robust algorithms that automatically identify hydrate formation and initial growth rates from dynamic pressure data. The application of shear had a significant influence on the measured distributions: at 700 rpm mass-transfer limitations were minimal, as demonstrated by the kinetic growth rates observed. The formation probability distributions measured at this shear rate had mean subcoolings consistent with theoretical predictions and steel-hydrate-water contact angles of 14-26°. However, the experimental distributions were substantially wider than predicted, suggesting that phenomena acting on macroscopic length scales are responsible for much of the observed stochastic formation. Performance tests of a KHI provided new insights into how such chemicals can reduce the risk of hydrate blockage in flowlines. Our data demonstrate that the KHI not only reduces the probability of formation (by both shifting and sharpening the distribution) but also reduces hydrate growth rates by a factor of 2.
van Dijk, J P; Eiglsperger, U; Hellmann, D; Giannakopoulos, N N; McGill, K C; Schindler, H J; Lapatki, B G
2016-09-01
The aim was to study motor unit activity in the medio-lateral extension of the masseter using an adapted scanning EMG technique that allows studying the territories of multiple motor units (MUs) in one scan. We studied the m. masseter of 10 healthy volunteers in whom two scans were performed. A monopolar scanning needle and two pairs of fine-wire electrodes were inserted into the belly of the muscle. The signals of the fine-wire electrodes were decomposed into the contribution of single MUs and used as a trigger for the scanning needle. In this manner multiple MU territory scans were obtained simultaneously. We determined 161 MU territories. The maximum number of territories obtained in one scan was 15. The median territory size was 4.0 mm. Larger and smaller MU territories were found throughout the muscle. The presented technique showed its feasibility in obtaining multiple MU territories in one scan. MUs were active throughout the depth of the muscle. The distribution of electrical and anatomical size of MUs substantiates the heterogeneous distribution of MUs throughout the muscle volume. This distributed activity may be of functional significance for the stabilization of the muscle during force generation. Copyright © 2016 International Federation of Clinical Neurophysiology. All rights reserved.
Making Sense of Conflict in Distributed Teams: A Design Science Approach
ERIC Educational Resources Information Center
Zhang, Guangxuan
2016-01-01
Conflict is a substantial, pervasive activity in team collaboration. It may arise because of differences in goals, differences in ways of working, or interpersonal dissonance. The specific focus for this research is the conflict in distributed teams. As opposed to traditional teams, participants of distributed teams are geographically dispersed…
NASA Astrophysics Data System (ADS)
Flores, Robert Joseph
Distributed generation can provide many benefits over traditional central generation such as increased reliability and efficiency while reducing emissions. Despite these potential benefits, distributed generation is generally not purchased unless it reduces energy costs. Economic dispatch strategies can be designed such that distributed generation technologies reduce overall facility energy costs. In this thesis, a microturbine generator is dispatched using different economic control strategies, reducing the cost of energy to the facility. Several industrial and commercial facilities are simulated using acquired electrical, heating, and cooling load data. Industrial and commercial utility rate structures are modeled after Southern California Edison and Southern California Gas Company tariffs and used to find energy costs for the simulated buildings and corresponding microturbine dispatch. Using these control strategies, building models, and utility rate models, a parametric study examining various generator characteristics is performed. An economic assessment of the distributed generation is then performed for both the microturbine generator and parametric study. Without the ability to export electricity to the grid, the economic value of distributed generation is limited to reducing the individual costs that make up the cost of energy for a building. Any economic dispatch strategy must be built to reduce these individual costs. While the ability of distributed generation to reduce cost depends on factors such as electrical efficiency and operations and maintenance cost, the building energy demand being serviced has a strong effect on cost reduction. Buildings with low load factors can accept distributed generation with higher operating costs (low electrical efficiency and/or high operations and maintenance cost) due to the value of demand reduction. As load factor increases, lower operating cost generators are desired due to a larger portion of the building load being met in an effort to reduce demand. In addition, buildings with large thermal demand have access to the least expensive natural gas, lowering the cost of operating distributed generation. Recovery of exhaust heat from DG reduces cost only if the building's thermal demand coincides with the electrical demand. Capacity limits exist where annual savings from operation of distributed generation decrease if further generation is installed. For low operating cost generators, the approximate limit is the average building load. This limit decreases as operating costs increase. In addition, a high capital cost of distributed generation can be accepted if generator operating costs are low. As generator operating costs increase, capital cost must decrease if a positive economic performance is desired.
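The sketch below illustrates the kind of economic dispatch rule discussed in this record: run the microturbine when its marginal cost of generation beats the utility energy price, or when doing so shaves the monthly peak demand. All rates, the efficiency, and the demand target are hypothetical placeholders, not values from the Southern California Edison or Southern California Gas Company tariffs.

```python
# Minimal sketch of an economic dispatch rule of the kind discussed:
# generate when marginal cost beats the energy price, or to shave the
# monthly peak. Rates and thresholds are hypothetical, not actual tariffs.

def dispatch_kw(load_kw, energy_price, gas_price_per_kwh_fuel,
                efficiency, om_cost_per_kwh, gen_capacity_kw,
                demand_target_kw):
    marginal_cost = gas_price_per_kwh_fuel / efficiency + om_cost_per_kwh
    output = 0.0
    if marginal_cost < energy_price:          # energy-charge arbitrage
        output = min(gen_capacity_kw, load_kw)
    excess = load_kw - demand_target_kw       # demand-charge reduction
    if excess > 0:
        output = max(output, min(gen_capacity_kw, excess))
    return output

# Example hour: 400 kW load, $0.15/kWh energy, $0.03/kWh-fuel gas,
# 28% electrical efficiency, $0.01/kWh O&M, 200 kW unit, 350 kW target.
print(dispatch_kw(400, 0.15, 0.03, 0.28, 0.01, 200, 350))  # -> 200.0 kW
```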
Neufeld, Kenneth W.
1996-01-01
An electromechanical cryocooler is disclosed for substantially reducing vibrations caused by the cooler. The direction of the force of the vibrations is measured and a counterforce sufficient to substantially reduce this vibration is calculated and generated. The counterforce is 180° out of phase with the direction of the force of the vibrations.
Daugirdas, John T; Levin, Nathan W; Kotanko, Peter; Depner, Thomas A; Kuhlmann, Martin K; Chertow, Glenn M; Rocco, Michael V
2008-01-01
A number of denominators for scaling the dose of dialysis have been proposed as alternatives to the urea distribution volume (V). These include resting energy expenditure (REE), mass of high metabolic rate organs (HMRO), visceral mass, and body surface area. Metabolic rate is an unlikely denominator as it varies enormously among humans with different levels of activity and correlates poorly with the glomerular filtration rate. Similarly, scaling based on HMRO may not be optimal, as many organs with high metabolic rates such as spleen, brain, and heart are unlikely to generate unusually large amounts of uremic toxins. Visceral mass, in particular the liver and gut, has potential merit as a denominator for scaling; liver size is related to protein intake and the liver, along with the gut, is known to be responsible for the generation of suspected uremic toxins. Surface area is time-honored as a scaling method for glomerular filtration rate and scales similarly to liver size. How currently recommended dialysis doses might be affected by these alternative rescaling methods was modeled by applying anthropometric equations to a large group of dialysis patients who participated in the HEMO study. The data suggested that rescaling to REE would not be much different from scaling to V. Scaling to HMRO mass would mandate substantially higher dialysis doses for smaller patients of either gender. Rescaling to liver mass would require substantially more dialysis for women compared with men at all levels of body size. Rescaling to body surface area would require more dialysis for smaller patients of either gender and also more dialysis for women of any size. Of these proposed alternative rescaling measures, body surface area may be the best, because it reflects gender-based scaling of liver size and thereby the rate of generation of uremic toxins.
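As a hedged illustration of how the candidate denominators scale differently with body size and sex, the sketch below evaluates two standard anthropometric equations: the Watson formula for total body water (a common proxy for the urea distribution volume V) and the DuBois formula for body surface area. These are well-known equations, but they are not necessarily the exact ones applied to the HEMO cohort in this study.

```python
# Illustration of two common anthropometric denominators: Watson total body
# water (proxy for urea distribution volume V) and DuBois body surface area.
# Standard formulas, not necessarily those used for the HEMO analysis.

def watson_v(sex, age_yr, height_cm, weight_kg):
    if sex == "male":
        return 2.447 - 0.09156 * age_yr + 0.1074 * height_cm + 0.3362 * weight_kg
    return -2.097 + 0.1069 * height_cm + 0.2466 * weight_kg

def dubois_bsa(height_cm, weight_kg):
    return 0.007184 * (weight_kg ** 0.425) * (height_cm ** 0.725)

# A smaller woman vs a larger man: dose scaled to BSA rises relative to V.
for sex, age, h, w in [("female", 60, 155, 55), ("male", 60, 180, 85)]:
    v, bsa = watson_v(sex, age, h, w), dubois_bsa(h, w)
    print(f"{sex}: V = {v:.1f} L, BSA = {bsa:.2f} m^2, BSA/V = {bsa / v:.3f}")
```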
The GOES-R Product Generation Architecture - Post CDR Update
NASA Astrophysics Data System (ADS)
Dittberner, G.; Kalluri, S.; Weiner, A.
2012-12-01
The GOES-R system will substantially improve the accuracy of information available to users by providing data from significantly enhanced instruments, which will generate an increased number and diversity of products with higher resolution, and much shorter relook times. Considerably greater compute and memory resources are necessary to achieve the necessary latency and availability for these products. Over time, new and updated algorithms are expected to be added and old ones removed as science advances and new products are developed. The GOES-R GS architecture is being planned to maintain functionality so that when such changes are implemented, operational product generation will continue without interruption. The primary parts of the PG infrastructure are the Service Based Architecture (SBA) and the Data Fabric (DF). SBA is the middleware that encapsulates and manages science algorithms that generate products. It is divided into three parts, the Executive, which manages and configures the algorithm as a service, the Dispatcher, which provides data to the algorithm, and the Strategy, which determines when the algorithm can execute with the available data. SBA is a distributed architecture, with services connected to each other over a compute grid and is highly scalable. This plug-and-play architecture allows algorithms to be added, removed, or updated without affecting any other services or software currently running and producing data. Algorithms require product data from other algorithms, so a scalable and reliable messaging is necessary. The SBA uses the DF to provide this data communication layer between algorithms. The DF provides an abstract interface over a distributed and persistent multi-layered storage system (e.g., memory based caching above disk-based storage) and an event management system that allows event-driven algorithm services to know when instrument data are available and where they reside. Together, the SBA and the DF provide a flexible, high performance architecture that can meet the needs of product processing now and as they grow in the future.
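A toy rendering of the Executive/Dispatcher/Strategy split described above is sketched below in Python. The class and method names echo the text; the implementation itself is hypothetical and is not the GOES-R ground segment software.

```python
# Toy rendering of the Service Based Architecture split: a Strategy decides
# when an algorithm can run, a Dispatcher feeds it data, and an Executive
# wires the pieces together. Purely illustrative.

class Strategy:
    def __init__(self, required_inputs):
        self.required = set(required_inputs)
    def ready(self, available):
        return self.required <= set(available)

class Dispatcher:
    def __init__(self):
        self.data = {}
    def publish(self, name, value):
        self.data[name] = value
    def gather(self, names):
        return {n: self.data[n] for n in names}

class Executive:
    def __init__(self, algorithm, strategy, dispatcher):
        self.algorithm, self.strategy, self.dispatcher = algorithm, strategy, dispatcher
    def on_new_data(self):
        if self.strategy.ready(self.dispatcher.data):
            inputs = self.dispatcher.gather(self.strategy.required)
            return self.algorithm(inputs)

# Example: a product that needs two calibrated bands before it can run.
disp = Dispatcher()
execu = Executive(lambda d: f"product from {sorted(d)}",
                  Strategy({"band07", "band13"}), disp)
disp.publish("band07", object()); print(execu.on_new_data())  # None (waiting)
disp.publish("band13", object()); print(execu.on_new_data())  # runs the algorithm
```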
The GOES-R Product Generation Architecture
NASA Astrophysics Data System (ADS)
Dittberner, G. J.; Kalluri, S.; Hansen, D.; Weiner, A.; Tarpley, A.; Marley, S.
2011-12-01
The GOES-R system will substantially improve users' ability to succeed in their work by providing data with significantly enhanced instruments, higher resolution, much shorter relook times, and an increased number and diversity of products. The Product Generation architecture is designed to provide the computer and memory resources necessary to achieve the necessary latency and availability for these products. Over time, new and updated algorithms are expected to be added and old ones removed as science advances and new products are developed. The GOES-R GS architecture is being planned to maintain functionality so that when such changes are implemented, operational product generation will continue without interruption. The primary parts of the PG infrastructure are the Service Based Architecture (SBA) and the Data Fabric (DF). SBA is the middleware that encapsulates and manages science algorithms that generate products. It is divided into three parts, the Executive, which manages and configures the algorithm as a service, the Dispatcher, which provides data to the algorithm, and the Strategy, which determines when the algorithm can execute with the available data. SBA is a distributed architecture, with services connected to each other over a compute grid and is highly scalable. This plug-and-play architecture allows algorithms to be added, removed, or updated without affecting any other services or software currently running and producing data. Algorithms require product data from other algorithms, so a scalable and reliable messaging is necessary. The SBA uses the DF to provide this data communication layer between algorithms. The DF provides an abstract interface over a distributed and persistent multi-layered storage system (e.g., memory based caching above disk-based storage) and an event management system that allows event-driven algorithm services to know when instrument data are available and where they reside. Together, the SBA and the DF provide a flexible, high performance architecture that can meet the needs of product processing now and as they grow in the future.
The Always-Connected Generation
ERIC Educational Resources Information Center
Bull, Glen
2010-01-01
The Pew Internet and American Life project characterizes the millennials--the first generation to come of age in the new millennium--as the first "always-connected" generation. Significant aspects of culture are changing as a result. A changing world where all students are connected all the time has substantial educational implications. Despite…
Real-Time Optimization and Control of Next-Generation Distribution Infrastructure
This project develops innovative, real-time optimization and control methods for next-generation distribution infrastructure.
Ultrafine particles and nitrogen oxides generated by gas and electric cooking.
Dennekamp, M; Howarth, S; Dick, C A; Cherrie, J W; Donaldson, K; Seaton, A
2001-08-01
The aim was to measure the concentrations of particles less than 100 nm in diameter and of oxides of nitrogen generated by cooking with gas and electricity, and to comment on possible hazards to health in poorly ventilated kitchens. Experiments with gas and electric rings, grills, and ovens were used to compare different cooking procedures. Nitrogen oxides (NO(x)) were measured by a chemiluminescent ML9841A NO(x) analyser. A TSI 3934 scanning mobility particle sizer was used to measure average number concentration and size distribution of aerosols in the size range 10-500 nm. High concentrations of particles are generated by gas combustion, by frying, and by cooking of fatty foods. Electric rings and grills may also generate particles from their surfaces. In experiments where gas burning was the most important source of particles, most particles were in the size range 15-40 nm. When bacon was fried on the gas or electric rings the particles were of larger diameter, in the size range 50-100 nm. The smaller particles generated during experiments grew in size with time because of coagulation. Substantial concentrations of NO(x) were generated during cooking on gas; four rings for 15 minutes produced 5 minute peaks of about 1000 ppb nitrogen dioxide and about 2000 ppb nitric oxide. Cooking in a poorly ventilated kitchen may give rise to potentially toxic particle number concentrations. Very high concentrations of oxides of nitrogen may also be generated by gas cooking, and with no extraction and poor ventilation, may reach concentrations at which adverse health effects may be expected. Although respiratory effects of exposure to NO(x) might be anticipated, recent epidemiology suggests that cardiac effects cannot be excluded, and further investigation of this is desirable.
Neufeld, K.W.
1996-12-10
An electromechanical cryocooler is disclosed for substantially reducing vibrations caused by the cooler. The direction of the force of the vibrations is measured and a counterforce sufficient to substantially reduce this vibration is calculated and generated. The counterforce is 180° out of phase with the direction of the force of the vibrations. 3 figs.
Di, Yanming; Schafer, Daniel W.; Wilhelm, Larry J.; Fox, Samuel E.; Sullivan, Christopher M.; Curzon, Aron D.; Carrington, James C.; Mockler, Todd C.; Chang, Jeff H.
2011-01-01
GENE-counter is a complete Perl-based computational pipeline for analyzing RNA-Sequencing (RNA-Seq) data for differential gene expression. In addition to its use in studying transcriptomes of eukaryotic model organisms, GENE-counter is applicable for prokaryotes and non-model organisms without an available genome reference sequence. For alignments, GENE-counter is configured for CASHX, Bowtie, and BWA, but an end user can use any Sequence Alignment/Map (SAM)-compliant program of preference. To analyze data for differential gene expression, GENE-counter can be run with any one of three statistics packages that are based on variations of the negative binomial distribution. The default method is a new and simple statistical test we developed based on an over-parameterized version of the negative binomial distribution. GENE-counter also includes three different methods for assessing differentially expressed features for enriched gene ontology (GO) terms. Results are transparent and data are systematically stored in a MySQL relational database to facilitate additional analyses as well as quality assessment. We used next generation sequencing to generate a small-scale RNA-Seq dataset derived from the heavily studied defense response of Arabidopsis thaliana and used GENE-counter to process the data. Collectively, the support from analysis of microarrays as well as the observed and substantial overlap in results from each of the three statistics packages demonstrates that GENE-counter is well suited for handling the unique characteristics of small sample sizes and high variability in gene counts. PMID:21998647
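For orientation, the sketch below shows a generic negative-binomial likelihood-ratio test for a two-group count comparison, in the spirit of the count-based statistics the pipeline wraps. It is not the GENE-counter code or its over-parameterized NB test, and the dispersion is assumed known rather than estimated from the data.

```python
# Illustrative two-group test based on the negative binomial distribution
# (NOT the GENE-counter implementation). Dispersion is assumed known/fixed.
import numpy as np
from scipy import stats
from scipy.optimize import minimize_scalar

def nb_loglik(counts, mean, dispersion):
    # scipy parameterization: n = 1/dispersion, p = n / (n + mean)
    n = 1.0 / dispersion
    p = n / (n + mean)
    return stats.nbinom.logpmf(counts, n, p).sum()

def lr_test(group_a, group_b, dispersion=0.1):
    # Fit the NB mean by maximum likelihood for a given set of counts.
    fit = lambda c: minimize_scalar(lambda m: -nb_loglik(c, m, dispersion),
                                    bounds=(1e-3, 1e6), method="bounded").fun
    ll_alt = -(fit(np.asarray(group_a)) + fit(np.asarray(group_b)))
    ll_null = -fit(np.concatenate([group_a, group_b]))
    lr = 2 * (ll_alt - ll_null)
    return stats.chi2.sf(lr, df=1)

print(lr_test([12, 18, 15], [55, 63, 70]))  # small p-value: likely differential
```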
Reduction of Helicopter Blade-Vortex Interaction Noise by Active Rotor Control Technology
NASA Technical Reports Server (NTRS)
Yu, Yung H.; Gmelin, Bernd; Splettstoesser, Wolf; Brooks, Thomas F.; Philippe, Jean J.; Prieur, Jean
1997-01-01
Helicopter blade-vortex interaction noise is one of the most severe noise sources and is very important both in community annoyance and military detection. Research over the decades has substantially improved basic physical understanding of the mechanisms generating rotor blade-vortex interaction noise and also of controlling techniques, particularly using active rotor control technology. This paper reviews active rotor control techniques currently available for rotor blade vortex interaction noise reduction, including higher harmonic pitch control, individual blade control, and on-blade control technologies. Basic physical mechanisms of each active control technique are reviewed in terms of noise reduction mechanism and controlling aerodynamic or structural parameters of a blade. Active rotor control techniques using smart structures/materials are discussed, including distributed smart actuators to induce local torsional or flapping deformations. Published by Elsevier Science Ltd.
Apparatus for mounting photovoltaic power generating systems on buildings
Russell, Miles C [Lincoln, MA
2009-08-18
Rectangular photovoltaic (PV) modules are mounted on a building roof by mounting stands that are distributed in rows and columns. Each stand comprises a base plate and first and second different height brackets attached to opposite ends of the base plate. Each first and second bracket comprises two module-support members. One end of each module is pivotally attached to and supported by a first module-support member of a first bracket and a second module-support member of another first bracket. At its other end each module rests on but is connected by flexible tethers to module-support members of two different second brackets. The tethers are sized to allow the modules to pivot up away from the module-support members on which they rest to a substantially horizontal position in response to wind uplift forces.
Simonsohn, Uri; Simmons, Joseph P; Nelson, Leif D
2015-12-01
When studies examine true effects, they generate right-skewed p-curves, distributions of statistically significant results with more low (.01s) than high (.04s) p values. What else can cause a right-skewed p-curve? First, we consider the possibility that researchers report only the smallest significant p value (as conjectured by Ulrich & Miller, 2015), concluding that it is a very uncommon problem. We then consider more common problems, including (a) p-curvers selecting the wrong p values, (b) fake data, (c) honest errors, and (d) ambitiously p-hacked (beyond p < .05) results. We evaluate the impact of these common problems on the validity of p-curve analysis, and provide practical solutions that substantially increase its robustness. © 2015 APA, all rights reserved.
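The sketch below makes the right-skew idea concrete with a deliberately simplified check: among significant results, count how many p values fall below .025 and compare against the 50% expected under a uniform (null) distribution. The full p-curve tests aggregate pp-values rather than using this binomial shortcut, so treat this only as an illustration of the concept.

```python
# Minimal sketch of the p-curve idea: among statistically significant
# results (p < .05), right skew means an excess of very small p values.
# The binomial comparison below is a simplification of the full machinery.
from scipy import stats

significant_p = [0.004, 0.011, 0.018, 0.021, 0.032, 0.041, 0.009, 0.002]

low = sum(p < 0.025 for p in significant_p)
# Under a true null, significant p values are uniform on (0, .05),
# so P(p < .025) = 0.5 for each result.
res = stats.binomtest(low, n=len(significant_p), p=0.5, alternative="greater")
print(f"{low}/{len(significant_p)} below .025, right-skew p = {res.pvalue:.3f}")
```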
Stochastic Dynamical Model of a Growing Citation Network Based on a Self-Exciting Point Process
NASA Astrophysics Data System (ADS)
Golosovsky, Michael; Solomon, Sorin
2012-08-01
We put under experimental scrutiny the preferential attachment model that is commonly accepted as a generating mechanism of the scale-free complex networks. To this end we chose a citation network of physics papers and traced the citation history of 40 195 papers published in one year. Contrary to common belief, we find that the citation dynamics of the individual papers follows the superlinear preferential attachment, with the exponent α=1.25-1.3. Moreover, we show that the citation process cannot be described as a memoryless Markov chain since there is a substantial correlation between the present and recent citation rates of a paper. Based on our findings we construct a stochastic growth model of the citation network, perform numerical simulations based on this model and achieve an excellent agreement with the measured citation distributions.
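A minimal simulation of the growth mechanism described here is sketched below: each new citation attaches to paper i with probability proportional to (k_i + k0)^alpha, with alpha around 1.28 as reported. The offset k0, the network size, and the number of citation events are illustrative choices, not the authors' calibrated model.

```python
# Sketch of citation growth with superlinear preferential attachment:
# attachment probability ~ (k + k0)**alpha. Sizes and k0 are illustrative.
import numpy as np

rng = np.random.default_rng(0)
alpha, k0 = 1.28, 1.0
n_papers, n_citations = 2_000, 20_000

k = np.zeros(n_papers)                 # current citation counts
for _ in range(n_citations):
    w = (k + k0) ** alpha
    target = rng.choice(n_papers, p=w / w.sum())
    k[target] += 1

# The resulting citation distribution is heavy-tailed; with superlinear
# attachment the most-cited papers take a disproportionate share.
print(int(k.max()), np.percentile(k, 99))
```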
NEXRAD and the Broadcast Weather Industry: Preparing to Share the Technology.
NASA Astrophysics Data System (ADS)
Robertson, Michele M.; Droegemeier, Kelvin K.
1990-01-01
This paper describes results from a survey designed to establish the current level of radar and computer technology of the television weather industry, and to assess the awareness and attitudes of television weather forecasters toward the Next Generation Weather Radar (NEXRAD) program and its potential impact on the field of broadcast meteorology. The survey was distributed to one affiliate station in each of the 213 national television markets, and a 46% response rate was achieved over a 4-week period. The survey results indicate substantial awareness of and interest in NEXRAD, along with a willingness to learn more about its capabilities and potential for use in the private sector. Survey participants suggested that potential private NEXRAD users work directly with the National Weather Service (NWS) and its affiliates so as to fully utilize the capabilities of the new radar system.
Borowka, S; Greiner, N; Heinrich, G; Jones, S P; Kerner, M; Schlenk, J; Schubert, U; Zirke, T
2016-07-01
We present the calculation of the cross section and invariant mass distribution for Higgs boson pair production in gluon fusion at next-to-leading order (NLO) in QCD. Top-quark masses are fully taken into account throughout the calculation. The virtual two-loop amplitude has been generated using an extension of the program GoSam supplemented with an interface to Reduze for the integral reduction. The occurring integrals have been calculated numerically using the program SecDec. Our results, including the full top-quark mass dependence for the first time, allow us to assess the validity of various approximations proposed in the literature, which we also recalculate. We find substantial deviations between the NLO result and the different approximations, which emphasizes the importance of including the full top-quark mass dependence at NLO.
Space-assisted irrigation management: an operational perspective
NASA Astrophysics Data System (ADS)
Calera Belmonte, Alfonso; Jochum, Anne M.; Cuesta Garcia, Andres
2004-10-01
Irrigation Advisory Services (IAS) are the natural management instruments to achieve a better efficiency in the use of water for irrigation. IAS help farmers to apply water according to the actual crop water requirements and thus, to optimize production and cost-effectiveness. The project DEMETER (DEMonstration of Earth observation TEchnologies in Routine irrigation advisory services) aims at assessing and demonstrating how the performance and cost-effectiveness of IAS is substantially improved by the incorporation of Earth observation (EO) techniques and Information Society Technology (IT) into their day-to-day operations. EO allows for efficiently monitoring crop water requirements of each field in extended areas. The incorporation of IT in the generation and distribution of information makes that information easily available to IAS and to its associated farmers (the end-users) in a personalized way. This paper describes the methodology and selected results.
Progress on Complex Langevin simulations of a finite density matrix model for QCD
NASA Astrophysics Data System (ADS)
Bloch, Jacques; Glesaaen, Jonas; Verbaarschot, Jacobus; Zafeiropoulos, Savvas
2018-03-01
We study the Stephanov model, which is an RMT model for QCD at finite density, using the Complex Langevin algorithm. Naive implementation of the algorithm shows convergence towards the phase quenched or quenched theory rather than to the intended theory with dynamical quarks. A detailed analysis of this issue and a potential resolution of the failure of this algorithm are discussed. We study the effect of gauge cooling on the Dirac eigenvalue distribution and time evolution of the norm for various cooling norms, which were specifically designed to remove the pathologies of the complex Langevin evolution. The cooling is further supplemented with a shifted representation for the random matrices. Unfortunately, none of these modifications generate a substantial improvement on the complex Langevin evolution and the final results still do not agree with the analytical predictions.
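For readers new to the method, the sketch below runs a complex Langevin evolution for a one-variable toy model with complex action S(z) = σz²/2, where the exact result ⟨z²⟩ = 1/σ is known. It illustrates only the basic update rule; it is not the Stephanov random-matrix model and includes no gauge cooling or shifted representation.

```python
# Toy complex Langevin evolution for S(z) = 0.5 * sigma * z**2 with complex
# sigma, using the update z -> z - dS/dz * dt + sqrt(2*dt) * eta. Not the
# Stephanov model; no gauge cooling.
import numpy as np

rng = np.random.default_rng(42)
sigma = 1.0 + 1.0j                      # complex "coupling"
dt, n_steps, n_therm = 1e-3, 100_000, 10_000

z = 0.0 + 0.0j
samples = []
for step in range(n_steps):
    drift = -sigma * z                  # -dS/dz
    z = z + drift * dt + np.sqrt(2 * dt) * rng.normal()
    if step >= n_therm:
        samples.append(z)

z2 = np.mean(np.array(samples) ** 2)
print(f"<z^2> = {z2.real:.3f}{z2.imag:+.3f}i   (exact 1/sigma = {1/sigma})")
```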
PDF4LHC recommendations for LHC Run II
Butterworth, Jon; Carrazza, Stefano; Cooper-Sarkar, Amanda; ...
2016-01-06
We provide an updated recommendation for the usage of sets of parton distribution functions (PDFs) and the assessment of PDF and PDF+αs uncertainties suitable for applications at the LHC Run II. We review developments since the previous PDF4LHC recommendation, and discuss and compare the new generation of PDFs, which include substantial information from experimental data from the Run I of the LHC. We then propose a new prescription for the combination of a suitable subset of the available PDF sets, which is presented in terms of a single combined PDF set. Finally, we discuss tools which allow for the delivery of this combined set in terms of optimized sets of Hessian eigenvectors or Monte Carlo replicas, and their usage, and provide some examples of their application to LHC phenomenology.
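Assuming the combined set is installed locally under LHAPDF 6, a Monte Carlo delivery such as PDF4LHC15_nlo_mc can be evaluated as sketched below; the uncertainty shown is simply the spread over replicas, whereas LHAPDF also provides set-specific uncertainty routines.

```python
# Example of evaluating a combined PDF4LHC Monte Carlo delivery through the
# LHAPDF 6 Python interface, assuming LHAPDF and the PDF4LHC15_nlo_mc set
# are installed. The uncertainty is the simple standard deviation over replicas.
import numpy as np
import lhapdf

members = lhapdf.mkPDFs("PDF4LHC15_nlo_mc")   # central member + MC replicas
x, Q = 0.01, 100.0                            # momentum fraction, scale in GeV
gluon = np.array([pdf.xfxQ(21, x, Q) for pdf in members])  # x*g(x, Q)

print(f"x*g(x={x}, Q={Q} GeV) = {gluon[0]:.3f} "
      f"+/- {gluon[1:].std(ddof=1):.3f} (replica spread)")
```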
Microalga propels along vorticity direction in a shear flow
NASA Astrophysics Data System (ADS)
Chengala, Anwar; Hondzo, Miki; Sheng, Jian
2013-05-01
Using high-speed digital holographic microscopy and microfluidics, we discover that, when encountering fluid flow shear above a threshold, unicellular green alga Dunaliella primolecta migrates unambiguously in the cross-stream direction that is normal to the plane of shear and coincides with the local fluid flow vorticity. The flow shear drives motile microalgae to collectively migrate in a thin two-dimensional horizontal plane and consequently alters the spatial distribution of microalgal cells within a given suspension. This shear-induced algal migration differs substantially from periodic rotational motion of passive ellipsoids, known as Jeffery orbits, as well as gyrotaxis by bottom-heavy swimming microalgae in a shear flow due to the subtle interplay between torques generated by gravity and viscous shear. Our findings could facilitate mechanistic solutions for modeling planktonic thin layers and sustainable cultivation of microalgae for human nutrition and bioenergy feedstock.
Alcohol industry and governmental revenue from young Australians.
Li, Ian W; Si, Jiawei
2016-11-01
Objective The aim of the present study was to estimate the revenues collected by government and industry from alcohol consumption by young Australians in 2010. Methods Statistical analyses were performed on data from the Australian National Drug Strategy Household Survey 2010 and alcohol data collected from an online retailer to calculate the proportion, frequency, quantity and revenues from alcohol consumption by young Australians. Results One-third of adolescents (12-17 years old) and 85% of young adults (18-25 years old) consume alcohol. More than half the adolescents' alcohol consumption is from ready-to-drink spirits. Revenue generated from alcohol consumption by 12-25 year olds is estimated at $4.8 billion in 2010 (2014 Australian dollars): $2.8 billion to industry (sales) and $2.0 billion to government (taxes). Conclusions Alcohol consumption by young Australians is prevalent, and young Australian drinkers consume alcohol in substantial amounts. The industry and taxation revenue from young drinkers is also considerable. It would be in the public interest to divert some of this revenue towards health initiatives to reduce drinking by young people, especially given the high societal costs of alcohol consumption. What is known about the topic? Australian adolescents aged 12-17 years consume substantial amounts of alcohol, and substantial amounts of revenue are generated from alcohol sales to them. What does this paper add? This paper provides recent estimates of alcohol consumption and revenue generated by Australian adolescents, and extends estimates to young adults aged 18-25 years. What are the implications for practitioners? A substantial proportion of Australian young people consume alcohol. The sales and taxation revenue generated from young people's drinking is substantial at A$4.8 billion in 2010 and is higher in real terms than estimates from previous studies. Some of the alcohol taxation revenue could be diverted to health promotion and education for young people, because the costs of alcohol consumption in terms of health outcomes and productivity losses for these age groups are expected to be especially high.
Technology Solutions | Distributed Generation Interconnection Collaborative
Technologies, both hardware and software, can support the wider adoption of distributed generation on the grid. As the penetration of distributed-generation photovoltaics (DGPV) has risen rapidly in recent years, utilities face challenges posed by high penetrations of distributed PV. Other promising technologies include new utility software.
Spatial Distribution of Small Water Body Types in Indiana Ecoregions
Due to their large numbers and biogeochemical activity, small water bodies (SWBs), such as ponds and wetlands, can have substantial cumulative effects on hydrologic and biogeochemical processes. Using updated National Wetland Inventory data, we describe the spatial distribution o...
Bimodal and multimodal plant biomass particle mixtures
Dooley, James H.
2013-07-09
An industrial feedstock of plant biomass particles having fibers aligned in a grain, wherein the particles are individually characterized by a length dimension (L) aligned substantially parallel to the grain, a width dimension (W) normal to L and aligned cross grain, and a height dimension (H) normal to W and L, wherein the L×H dimensions define a pair of substantially parallel side surfaces characterized by substantially intact longitudinally arrayed fibers, the W×H dimensions define a pair of substantially parallel end surfaces characterized by crosscut fibers and end checking between fibers, and the L×W dimensions define a pair of substantially parallel top and bottom surfaces, and wherein the particles in the feedstock are collectively characterized by having a bimodal or multimodal size distribution.
TK3 eBook Software to Author, Distribute, and Use Electronic Course Content for Medical Education
ERIC Educational Resources Information Center
Morton, David A.; Foreman, K. Bo; Goede, Patricia A.; Bezzant, John L.; Albertine, Kurt H.
2007-01-01
The methods for authoring and distributing course content are undergoing substantial changes due to advancement in computer technology. Paper has been the traditional method to author and distribute course content. Paper enables students to personalize content through highlighting and note taking but does not enable the incorporation of multimedia…
Life cycle water use for electricity generation: a review and harmonization of literature estimates
NASA Astrophysics Data System (ADS)
Meldrum, J.; Nettles-Anderson, S.; Heath, G.; Macknick, J.
2013-03-01
This article provides consolidated estimates of water withdrawal and water consumption for the full life cycle of selected electricity generating technologies, which includes component manufacturing, fuel acquisition, processing, and transport, and power plant operation and decommissioning. Estimates were gathered through a broad search of publicly available sources, screened for quality and relevance, and harmonized for methodological differences. Published estimates vary substantially, due in part to differences in production pathways, in defined boundaries, and in performance parameters. Despite limitations to available data, we find that: water used for cooling of thermoelectric power plants dominates the life cycle water use in most cases; the coal, natural gas, and nuclear fuel cycles require substantial water per megawatt-hour in most cases; and, a substantial proportion of life cycle water use per megawatt-hour is required for the manufacturing and construction of concentrating solar, geothermal, photovoltaic, and wind power facilities. On the basis of the best available evidence for the evaluated technologies, total life cycle water use appears lowest for electricity generated by photovoltaics and wind, and highest for thermoelectric generation technologies. This report provides the foundation for conducting water use impact assessments of the power sector while also identifying gaps in data that could guide future research.
Complex earthquake rupture and local tsunamis
Geist, E.L.
2002-01-01
In contrast to far-field tsunami amplitudes that are fairly well predicted by the seismic moment of subduction zone earthquakes, there exists significant variation in the scaling of local tsunami amplitude with respect to seismic moment. From a global catalog of tsunami runup observations this variability is greatest for the most frequently occurring tsunamigenic subduction zone earthquakes in the magnitude range of 7 < Mw < 8.5. Variability in local tsunami runup scaling can be ascribed to tsunami source parameters that are independent of seismic moment: variations in the water depth in the source region, the combination of higher slip and lower shear modulus at shallow depth, and rupture complexity in the form of heterogeneous slip distribution patterns. The focus of this study is on the effect that rupture complexity has on the local tsunami wave field. A wide range of slip distribution patterns is generated using a stochastic, self-affine source model that is consistent with the falloff of far-field seismic displacement spectra at high frequencies. The synthetic slip distributions generated by the stochastic source model are discretized and the vertical displacement fields from point source elastic dislocation expressions are superimposed to compute the coseismic vertical displacement field. For shallow subduction zone earthquakes it is demonstrated that self-affine irregularities of the slip distribution result in significant variations in local tsunami amplitude. The effects of rupture complexity are less pronounced for earthquakes at greater depth or along faults with steep dip angles. For a test region along the Pacific coast of central Mexico, peak nearshore tsunami amplitude is calculated for a large number (N = 100) of synthetic slip distribution patterns, all with identical seismic moment (Mw = 8.1). Analysis of the results indicates that for earthquakes of a fixed location, geometry, and seismic moment, peak nearshore tsunami amplitude can vary by a factor of 3 or more. These results indicate that there is substantially more variation in the local tsunami wave field derived from the inherent complexity of subduction zone earthquakes than predicted by a simple elastic dislocation model. Probabilistic methods that take into account variability in earthquake rupture processes are likely to yield more accurate assessments of tsunami hazards.
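To make the stochastic source construction above more concrete, here is a minimal sketch of a 1-D self-affine slip profile built by assigning a power-law Fourier amplitude spectrum with random phases and rescaling to a target mean slip. The spectral falloff, grid size and mean slip are illustrative assumptions, not values from the study.

```python
import numpy as np

def synthetic_slip(n=256, falloff=2.0, mean_slip=5.0, seed=1):
    """Toy 1-D self-affine slip profile: power-law Fourier amplitudes with random phases."""
    rng = np.random.default_rng(seed)
    k = np.fft.rfftfreq(n)
    amp = np.zeros_like(k)
    amp[1:] = k[1:] ** (-falloff / 2.0)          # amplitude spectrum ~ k^(-falloff/2)
    phase = rng.uniform(0, 2 * np.pi, size=k.size)
    spectrum = amp * np.exp(1j * phase)
    slip = np.fft.irfft(spectrum, n)
    slip -= slip.min()                            # keep slip non-negative
    slip *= mean_slip / slip.mean()               # rescale to a target mean slip (fixed moment)
    return slip

profile = synthetic_slip()
print(profile.mean(), profile.max())
```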
On the stability of pick-up ion ring distributions in the outer heliosheath
DOE Office of Scientific and Technical Information (OSTI.GOV)
Summerlin, Errol J.; Viñas, Adolfo F.; Moore, Thomas E.
The 'secondary energetic neutral atom (ENA)' hypothesis for the ribbon feature observed by the Interstellar Boundary Explorer (IBEX) posits that the neutral component of the solar wind continues beyond the heliopause and charge exchanges with interstellar ions in the Outer Heliosheath (OHS). This creates pick-up ions that gyrate about the draped interstellar magnetic field (ISMF) lines at pitch angles near 90° on the locus where the ISMF lies tangential to the heliopause and perpendicular to the heliocentric radial direction. This location closely coincides with the location of the ribbon feature according to the prevailing inferences of the ISMF orientation and draping. The locally gyrating ions undergo additional charge exchange and escape as free-flying neutral atoms, many of which travel back toward the inner solar system and are imaged by IBEX as a ribbon tracing out the locus described above. For this mechanism to succeed, the pick-up ions must diffuse in pitch angle slowly enough to permit secondary charge exchange before their pitch angle distribution substantially broadens away from 90°. Previous work using linear Vlasov dispersion analysis of parallel propagating waves has suggested that the ring distribution in the OHS is highly unstable, which, if true, would make the secondary ENA hypothesis incapable of rendering the observed ribbon. In this paper, we extend this earlier work to more realistic ring distribution functions. We find that, at the low densities necessary to produce the observed IBEX ribbon via the secondary ENA hypothesis, growth rates are highly sensitive to the temperature of the beam and that even very modest temperatures of the ring beam corresponding to beam widths of <1° are sufficient to damp the self-generated waves associated with the ring beam. Thus, at least from the perspective of linear Vlasov dispersion analysis of parallel propagating waves, there is no reason to expect that the ring distributions necessary to produce the observed IBEX ENA flux via the secondary ENA hypothesis will be unstable to their own self-generated turbulence.
On the Stability of Pick-up Ion Ring Distributions in the Outer Heliosheath
NASA Astrophysics Data System (ADS)
Summerlin, Errol J.; Viñas, Adolfo F.; Moore, Thomas E.; Christian, Eric R.; Cooper, John F.
2014-10-01
The "secondary energetic neutral atom (ENA)" hypothesis for the ribbon feature observed by the Interstellar Boundary Explorer (IBEX) posits that the neutral component of the solar wind continues beyond the heliopause and charge exchanges with interstellar ions in the Outer Heliosheath (OHS). This creates pick-up ions that gyrate about the draped interstellar magnetic field (ISMF) lines at pitch angles near 90° on the locus where the ISMF lies tangential to the heliopause and perpendicular to the heliocentric radial direction. This location closely coincides with the location of the ribbon feature according to the prevailing inferences of the ISMF orientation and draping. The locally gyrating ions undergo additional charge exchange and escape as free-flying neutral atoms, many of which travel back toward the inner solar system and are imaged by IBEX as a ribbon tracing out the locus described above. For this mechanism to succeed, the pick-up ions must diffuse in pitch angle slowly enough to permit secondary charge exchange before their pitch angle distribution substantially broadens away from 90°. Previous work using linear Vlasov dispersion analysis of parallel propagating waves has suggested that the ring distribution in the OHS is highly unstable, which, if true, would make the secondary ENA hypothesis incapable of rendering the observed ribbon. In this paper, we extend this earlier work to more realistic ring distribution functions. We find that, at the low densities necessary to produce the observed IBEX ribbon via the secondary ENA hypothesis, growth rates are highly sensitive to the temperature of the beam and that even very modest temperatures of the ring beam corresponding to beam widths of <1° are sufficient to damp the self-generated waves associated with the ring beam. Thus, at least from the perspective of linear Vlasov dispersion analysis of parallel propagating waves, there is no reason to expect that the ring distributions necessary to produce the observed IBEX ENA flux via the secondary ENA hypothesis will be unstable to their own self-generated turbulence.
NASA Astrophysics Data System (ADS)
Gilmanshin, I. R.; Gilmanshina, S. I.
2017-09-01
The urgency of forming competence in the field of energy saving during the study of engineering and technical disciplines at the university is substantiated. The author's definition of competence in the field of energy saving is given, which makes it possible to consider the necessity of forming it among students (future engineers) as a way to create technologies of a new generation. The essence of this competence is revealed. The system of work, pedagogical conditions and technologies of its formation in the conditions of the federal university is substantiated.
Generation of Referring Expressions: Assessing the Incremental Algorithm
ERIC Educational Resources Information Center
van Deemter, Kees; Gatt, Albert; van der Sluis, Ielka; Power, Richard
2012-01-01
A substantial amount of recent work in natural language generation has focused on the generation of "one-shot" referring expressions whose only aim is to identify a target referent. Dale and Reiter's Incremental Algorithm (IA) is often thought to be the best algorithm for maximizing the similarity to referring expressions produced by people. We…
NASA Astrophysics Data System (ADS)
Goodlet, Brent R.; Mills, Leah; Bales, Ben; Charpagne, Marie-Agathe; Murray, Sean P.; Lenthe, William C.; Petzold, Linda; Pollock, Tresa M.
2018-06-01
Bayesian inference is employed to precisely evaluate single crystal elastic properties of novel γ-γ′ Co- and CoNi-based superalloys from simple and non-destructive resonant ultrasound spectroscopy (RUS) measurements. Nine alloys from three Co-, CoNi-, and Ni-based alloy classes were evaluated in the fully aged condition, with one alloy per class also evaluated in the solution heat-treated condition. Comparisons are made between the elastic properties of the three alloy classes and among the alloys of a single class, with the following trends observed. A monotonic rise in the c44 (shear) elastic constant by a total of 12 pct is observed between the three alloy classes as Co is substituted for Ni. Elastic anisotropy (A) is also increased, with a large majority of the nearly 13 pct increase occurring after Co becomes the dominant constituent. Together the five CoNi alloys, with Co:Ni ratios from 1:1 to 1.5:1, exhibited remarkably similar properties with an average A 1.8 pct greater than the Ni-based alloy CMSX-4. Custom code demonstrating a substantial advance over previously reported methods for RUS inversion is also reported here for the first time. CmdStan-RUS is built upon the open-source probabilistic programming language Stan and formulates the inverse problem using Bayesian methods. Bayesian posterior distributions are efficiently computed with Hamiltonian Monte Carlo (HMC), while initial parameterization is randomly generated from weakly informative prior distributions. Remarkably robust convergence behavior is demonstrated across multiple independent HMC chains in spite of initial parameterization often very far from actual parameter values. Experimental procedures are substantially simplified by allowing any arbitrary misorientation between the specimen and crystal axes, as elastic properties and misorientation are estimated simultaneously.
Recovering Wood and McCarthy's ERP-prototypes by means of ERP-specific procrustes-rotation.
Beauducel, André
2018-02-01
The misallocation of treatment-variance on the wrong component has been discussed in the context of temporal principal component analysis of event-related potentials. There is, until now, no rotation method that can perfectly recover Wood and McCarthy's prototypes without making use of additional information on treatment effects. In order to close this gap, two new methods for component rotation were proposed. After Varimax-prerotation, the first method identifies very small slopes of successive loadings. The corresponding loadings are set to zero in a target matrix for event-related orthogonal partial Procrustes- (EPP-) rotation. The second method generates Gaussian normal distributions around the peaks of the Varimax loadings and performs orthogonal Procrustes rotation towards these Gaussian distributions. Oblique versions of this Gaussian event-related Procrustes- (GEP) rotation and of EPP-rotation are based on Promax rotation. A simulation study revealed that the new orthogonal rotations recover Wood and McCarthy's prototypes and eliminate misallocation of treatment-variance. In an additional simulation study with a more pronounced overlap of the prototypes, GEP Promax rotation reduced the variance misallocation slightly more than EPP Promax rotation. Comparison with existing methods: Varimax and conventional Promax rotations resulted in substantial misallocations of variance in simulation studies when components had temporal overlap. A substantially reduced misallocation of variance occurred with the EPP-, EPP Promax-, GEP-, and GEP Promax-rotations. Misallocation of variance can be minimized by means of the new rotation methods. Making use of information on the temporal order of the loadings may allow for improvements of the rotation of temporal PCA components. Copyright © 2017 Elsevier B.V. All rights reserved.
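A minimal sketch of the Gaussian-target Procrustes idea described above: a target matrix of Gaussian curves centred on each component's Varimax peak is built, and the orthogonal rotation minimising the Frobenius distance to that target is obtained from the classic SVD solution. The width parameter and the random loadings are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def orthogonal_procrustes(A, B):
    """Orthogonal rotation R minimising ||A @ R - B||_F (classic SVD-based solution)."""
    U, _, Vt = np.linalg.svd(A.T @ B)
    return U @ Vt

def gaussian_target(loadings, width=5.0):
    """Target matrix with Gaussian curves centred on each component's peak loading.

    `width` (in time points) is an illustrative choice, not the value from the paper.
    """
    t = np.arange(loadings.shape[0])
    peaks = np.abs(loadings).argmax(axis=0)
    target = np.exp(-0.5 * ((t[:, None] - peaks[None, :]) / width) ** 2)
    return target * np.abs(loadings).max(axis=0)

# usage: rotate (hypothetical) Varimax loadings toward the Gaussian target
varimax_loadings = np.random.default_rng(2).normal(size=(100, 3))  # time points x components
target = gaussian_target(varimax_loadings)
R = orthogonal_procrustes(varimax_loadings, target)
rotated = varimax_loadings @ R
```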
DOE Office of Scientific and Technical Information (OSTI.GOV)
Christie, R.H.; Chung, Haeyong; Rebeck, G.W.
1996-04-01
The very low density lipoprotein receptor (VLDL-r) is a cell-surface molecule specialized for the internalization of multiple diverse ligands, including apolipoprotein E (apoE)-containing lipoprotein particles, via clathrin-coated pits. Its structure is similar to the low-density lipoprotein receptor (LDL-r), although the two have substantially different systemic distributions and regulatory pathways. The present work examines the distribution of VLDL-r in the central nervous system (CNS) and in relation to senile plaques in Alzheimer disease (AD). VLDL-r is present on resting and activated microglia, particularly those associated with senile plaques (SPs). VLDL-r immunoreactivity is also found in cortical neurons. Two exons of VLDL-r mRNA are differentially spliced in the mature receptor mRNA. One set of splice forms gives rise to receptors containing (or lacking) an extracellular O-linked glycosylation domain near the transmembrane portion of the molecule. The other set of splice forms appears to be brain-specific, and is responsible for the presence or absence of one of the cysteine-rich repeat regions in the binding region of the molecule. Ratios of the receptor variants generated from these splice forms do not differ substantially across different cortical areas or in AD. We hypothesize that VLDL-r might contribute to metabolism of apoE and apoE/Aβ complexes in the brain. Further characterization of apoE receptors in Alzheimer brain may help lay the groundwork for understanding the role of apoE in the CNS and in the pathophysiology of AD. 43 refs., 5 figs.
Akbulut-Yuksel, Mevlude; Kugler, Adriana D
2016-12-01
It is well known that a substantial part of income and education is passed on from parents to children, generating substantial persistence in socioeconomic status across generations. In this paper, we examine whether another form of human capital, health, is also largely transmitted from generation to generation. Using data from the NLSY, we first present new evidence on intergenerational transmission of health outcomes in the U.S., including weight, height, the body mass index (BMI), asthma and depression for both natives and immigrants. We show that between 50% and 70% of the mothers' health status persists in both native and immigrant children, and that, on average, immigrants experience higher persistence than natives in BMI. We also find that the longer immigrants remain in the U.S., the less intergenerational persistence there is and the more immigrants look like native children. Unfortunately, the more generations immigrant families remain in the U.S., the more children of immigrants resemble natives' higher BMI. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
Interactions between land use, climate and hydropower in Scotland
NASA Astrophysics Data System (ADS)
Sample, James
2015-04-01
To promote the transition towards a low carbon economy, the Scottish Government has adopted ambitious energy targets, including generating all electricity from renewable sources by 2020. To achieve this, continued investment will be required across a range of sustainable technologies. Hydropower has a long history in Scotland and the present-day operational capacity of ~1.5 GW makes a substantial contribution to the national energy budget. In addition, there remains potential for ~500 MW of further development, mostly in the form of small to medium size run-of-river schemes. Climate change is expected to lead to an intensification of the global hydrological cycle, leading to changes in both the magnitude and seasonality of river flows. There may also be indirect effects, such as changing land use, enhanced evapotranspiration rates and an increased demand for irrigation, all of which could affect the water available for energy generation. Preliminary assessments of hydropower commonly use flow duration curves (FDCs) to estimate the power generation potential at proposed new sites. In this study, we use spatially distributed modelling to generate daily and monthly FDCs on a 1 km by 1 km grid across Scotland, using a variety of future land use and climate change scenarios. Parameter-related uncertainty in the model has been constrained using Bayesian Markov Chain Monte Carlo (MCMC) techniques to derive posterior probability distributions for key model parameters. Our results give an indication of the sensitivity and vulnerability of Scotland's run-of-river hydropower resources to possible changes in climate and land use. The effects are spatially variable and the range of uncertainty is sometimes large, but consistent patterns do emerge. For example, many locations are predicted to experience enhanced seasonality, with significantly lower power generation potential in the summer months and greater potential during the autumn and winter. Some sites may require infrastructural changes in order to continue operating at optimum efficiency. We discuss the implications and limitations of our results, and highlight design and adaptation options for maximising the resilience of hydropower installations under changing future flow patterns.
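As background to the flow-duration-curve approach mentioned above, the sketch below computes an FDC (exceedance probability versus flow) from a daily flow series and reads off Q95, a figure often used in preliminary run-of-river assessments. The synthetic lognormal flow series and plotting-position choice are assumptions for illustration only.

```python
import numpy as np

def flow_duration_curve(daily_flows):
    """Exceedance probability vs. flow from a daily flow series.

    Returns (exceedance %, flows sorted in descending order); the flow exceeded
    95% of the time (Q95) is a common preliminary design figure for run-of-river schemes.
    """
    q = np.sort(np.asarray(daily_flows))[::-1]              # descending flows
    rank = np.arange(1, q.size + 1)
    exceedance = 100.0 * rank / (q.size + 1)                 # Weibull plotting position
    return exceedance, q

# toy 10-year daily series (lognormal flows are an assumption, not modelled Scottish data)
flows = np.random.default_rng(3).lognormal(mean=1.0, sigma=0.8, size=3650)
p, q = flow_duration_curve(flows)
q95 = np.interp(95.0, p, q)
print(f"Q95 ~ {q95:.2f} m^3/s")
```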
Indication of multiscaling in the volatility return intervals of stock markets
NASA Astrophysics Data System (ADS)
Wang, Fengzhong; Yamasaki, Kazuko; Havlin, Shlomo; Stanley, H. Eugene
2008-01-01
The distribution of the return intervals τ between price volatilities above a threshold height q for financial records has been approximated by a scaling behavior. To explore how accurate the scaling is, and therefore understand the underlying nonlinear mechanism, we investigate intraday data sets of the 500 stocks which constitute the Standard & Poor's 500 index. We show that the cumulative distribution of return intervals has systematic deviations from scaling. We support this finding by studying the m-th moment μ_m ≡ ⟨(τ/⟨τ⟩)^m⟩^{1/m}, which shows a certain trend with the mean interval ⟨τ⟩. We generate surrogate records using the Schreiber method, and find that their cumulative distributions almost collapse to a single curve and their moments are almost constant for most ranges of ⟨τ⟩. Those substantial differences suggest that nonlinear correlations in the original volatility sequence account for the deviations from a single scaling law. We also find that the original and surrogate records exhibit slight tendencies for short and long ⟨τ⟩, due to the discreteness and finite-size effects of the records, respectively. To avoid those effects as far as possible when testing the multiscaling behavior, we investigate the moments in the range 10 < ⟨τ⟩ ≤ 100, and find that the exponent α from the power-law fitting μ_m ~ ⟨τ⟩^α has a narrow distribution around a nonzero value which depends on m for the 500 stocks. The distribution of α for the surrogate records is very narrow and centered around α = 0. This suggests that the return interval distribution exhibits multiscaling behavior due to the nonlinear correlations in the original volatility.
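A minimal sketch of the quantities discussed above: return intervals between volatility exceedances of a threshold q, and the scaled moment μ_m = ⟨(τ/⟨τ⟩)^m⟩^{1/m}. The synthetic heavy-tailed series and the 95th-percentile threshold are illustrative assumptions, not the S&P 500 intraday data used in the study.

```python
import numpy as np

def return_intervals(volatility, q):
    """Intervals (in samples) between successive exceedances of threshold q."""
    idx = np.flatnonzero(volatility > q)
    return np.diff(idx)

def scaled_moment(tau, m):
    """m-th moment of the scaled intervals: mu_m = <(tau/<tau>)^m>^(1/m)."""
    x = tau / tau.mean()
    return (x ** m).mean() ** (1.0 / m)

# toy usage on a synthetic heavy-tailed volatility series (real data: intraday |returns|)
rng = np.random.default_rng(4)
vol = np.abs(rng.standard_t(df=3, size=100_000))
tau = return_intervals(vol, q=np.quantile(vol, 0.95))
print(tau.mean(), scaled_moment(tau, m=2))
```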
Spatial Distribution of Small Water Body Types across Indiana Ecoregions
Due to their large numbers and biogeochemical activity, small water bodies (SWB), such as ponds and wetlands, can have substantial cumulative effects on hydrologic, biogeochemical, and biological processes; yet the spatial distributions of various SWB types are often unknown. Usi...
NASA Astrophysics Data System (ADS)
Amran, Tengku Sarah Tengku; Ismail, Mohamad Pauzi; Ahmad, Mohamad Ridzuan; Amin, Mohamad Syafiq Mohd; Sani, Suhairy; Masenwat, Noor Azreen; Ismail, Mohd Azmi; Hamid, Shu-Hazri Abdul
2017-01-01
A water pipe is any pipe or tube designed to transport and deliver water or treated drinking water with appropriate quality, quantity and pressure to consumers. The varieties include large-diameter main pipes, which supply entire towns; smaller branch lines that supply a street or group of buildings; or small-diameter pipes located within individual buildings. This underground distribution system collectively describes the facilities used to supply water from its source to the point of usage. Therefore, a leak in the underground water distribution piping system increases the likelihood of safe water leaving the source or treatment facility becoming contaminated before reaching the consumer. Most importantly, leaks waste water, a precious natural resource. Furthermore, they create substantial damage to the transportation system and structures within urban and suburban environments. This paper presents a study on the possibility of using ground penetrating radar (GPR) with a frequency of 1 GHz to detect pipes and leakages in an underground water distribution piping system. A series of laboratory experiments was designed to investigate the capability and efficiency of GPR in detecting underground pipes (metal and PVC) and water leakages. The data were divided into two parts: 1. detecting/locating the underground water pipe, and 2. detecting leakage from the underground water pipe. Despite its simplicity, the method produced satisfactory results, indicating that GPR is capable and efficient: it is able to detect the underground pipe and the presence of a leak in the underground pipe.
Space Weather Modeling at the Community Coordinated Modeling Center
NASA Technical Reports Server (NTRS)
Hesse M.
2005-01-01
The Community Coordinated Modeling Center (CCMC) is a multi-agency partnership, which aims at the creation of next generation space weather models. The goal of the CCMC is to support the research and developmental work necessary to substantially increase the present-day modeling capability for space weather purposes, and to provide models for transition to the rapid prototyping centers at the space weather forecast centers. This goal requires close collaborations with and substantial involvement of the research community. The physical regions to be addressed by CCMC-related activities range from the solar atmosphere to the Earth's upper atmosphere. The CCMC is an integral part of the National Space Weather Program Implementation Plan, of NASA's Living With a Star (LWS) initiative, and of the Department of Defense Space Weather Transition Plan. CCMC includes a facility at NASA Goddard Space Flight Center, as well as distributed computing facilities provided by the US Air Force. CCMC also provides, to the research community, access to state-of-the-art space research models. In this paper we will provide updates on CCMC status, on current plans, research and development accomplishments and goals, and on the model testing and validation process undertaken as part of the CCMC mandate. Special emphasis will be on solar and heliospheric models currently residing at CCMC, and on plans for validation and verification.
NASA Astrophysics Data System (ADS)
Akbardin, J.; Parikesit, D.; Riyanto, B.; TMulyono, A.
2018-05-01
Zones that produce land fishery commodities and their yields have limited distribution capability because of the availability and condition of infrastructure. High demand for fishery commodities has led to distribution over inefficient distribution distances. Developing the gravity theory with a limit on the movement generation from the production zone can increase the interaction between zones through effective and efficient distribution, with shorter movement distribution distances. A regression analysis method with multiple variables of transportation infrastructure condition, based on service level and quantitative capacity, is used to estimate the 'mass' of movement generation that is formed. The resulting movement distribution (Tid) model has the equation Tid = 27.04 − 0.49 tid, based on a power-model barrier function with calibration value β = 0.0496. Developing the movement generation 'mass' limit at the production zone will shorten the distribution distance effectively. Shorter distribution distances will increase the accessibility between zones, allowing them to interact according to the magnitude of the movement generation 'mass'.
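For orientation, the sketch below implements a generic gravity-type trip distribution with a power deterrence (barrier) function, using the calibration value β = 0.0496 quoted above; the zone masses, attractions and distances are invented for illustration, and this simple proportional form is a stand-in for the study's calibrated regression model, not a reproduction of it.

```python
import numpy as np

def gravity_distribution(origins, destinations, dist, beta=0.0496):
    """Simple gravity model with a power deterrence function: T_id ∝ O_i * D_d * t_id^(-beta)."""
    f = dist ** (-beta)                          # power-model barrier (deterrence) function
    T = np.outer(origins, destinations) * f
    return T * origins.sum() / T.sum()           # rescale so total trips match total production

O = np.array([120.0, 80.0, 50.0])                # movement generation ('mass') per production zone
D = np.array([90.0, 100.0, 60.0])                # attraction per destination zone
t = np.array([[5.0, 12.0, 20.0],
              [12.0, 6.0, 15.0],
              [20.0, 15.0, 4.0]])                # distribution distances between zones
print(gravity_distribution(O, D, t).round(1))
```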
Bartlett, Jessica Dym; Kotake, Chie; Fauth, Rebecca; Easterbrooks, M Ann
2017-01-01
A maternal history of childhood maltreatment is thought to be a potent risk factor for child abuse and neglect, yet the extent of continuity across generations is unclear, with studies reporting vastly different rates of intergenerational transmission. Disparate findings may be due to lack of attention to the nature of maltreatment experiences in each generation. We sought to expand the current literature by examining the role of maltreatment type, perpetrator identity, and substantiation status of reports to child protective services (CPS) on intergenerational maltreatment among adolescent mothers (n=417) and their children. We found that when mothers had at least one report of childhood maltreatment (substantiated or not), the odds that they maltreated their children increased by 72% (OR=2.52), compared to mothers who were not maltreated, but the odds were considerably lower when we limited analysis to substantiated reports. Both a maternal history of substantiated neglect and multiple-type maltreatment (neglect and physical or sexual abuse) were associated with increased risk of child maltreatment, yet the likelihood of children experiencing multiple types of maltreatment, with their mothers identified as the perpetrators, increased by over 300% when mothers had a childhood history of multiple-type maltreatment. Copyright © 2016 Elsevier Ltd. All rights reserved.
A self-contamination model for the formation of globular star clusters
NASA Astrophysics Data System (ADS)
Brown, James Howard
Described here is a model of globular cluster formation which allows the self-contamination of the cluster by an earlier generation of massive stars. It is first shown that such self-contamination naturally produces an Fe/H in the range from -2.5 to -1.0, precisely the same range observed in the metal poor (halo) globular clusters; this also seems to require that the disk clusters started with a substantial initial metallicity. To minimize the problem of creating homogeneous globular clusters, the second (currently observed) generation of stars is assumed to form in the expanding supershell around the first generation stars. Both numerical and analytic models are used to address this problem. The most important result of this investigation is that the late evolution of the supershell matters most, and that this phase of the evolution is dominated by the external medium in which the cloud is embedded. This result, and the requirement that only the most tightly bound systems may become globular clusters, lead to the conclusion that a globular cluster with the mass and binding energy typically observed can be formed at star formation efficiencies as low as 10-20 percent. Furthermore, self-contamination requires that the typical Fe/H of a bound system be about -1.6, independent of the free parameters of the model, allowing the clusters and field stars to form with different metallicity distributions in spite of their forming at the same time. Since the formation of globular clusters in this model is tied to the external pressure, the halo globular cluster masses and distribution can be used as probes of the early galactic structure. In particular, this model requires an increase in the typical globular cluster mass as one moves out from the galactic center; the masses of the halo clusters are examined, and they show considerable evidence for such a gradient. Based on a pressure distribution derived from this data, the effect of the galactic tidal field on the model is also investigated using an N-body simulation.
NASA Astrophysics Data System (ADS)
McLarty, Dustin Fogle
Distributed energy systems are a promising means by which to reduce both emissions and costs. Continuous generators must be responsive and highly efficient to support building dynamics and intermittent on-site renewable power. Fuel cell-gas turbine hybrids (FC/GT) are fuel-flexible generators capable of ultra-high efficiency, ultra-low emissions, and rapid power response. This work undertakes a detailed study of the electrochemistry, chemistry and mechanical dynamics governing the complex interaction between the individual systems in such a highly coupled hybrid arrangement. The mechanisms leading to the compressor stall/surge phenomena are studied for the increased risk posed to particular hybrid configurations. A novel fuel cell modeling method is introduced that captures various spatial resolutions, flow geometries, stack configurations and novel heat transfer pathways. Several promising hybrid configurations are analyzed throughout the work and a sensitivity analysis of seven design parameters is conducted. A simple estimating method is introduced for the combined system efficiency of a fuel cell and a turbine using component performance specifications. Existing solid oxide fuel cell technology is capable of hybrid efficiencies greater than 75% (LHV) operating on natural gas, and existing molten carbonate systems greater than 70% (LHV). A dynamic model is calibrated to accurately capture the physical coupling of a FC/GT demonstrator tested at UC Irvine. The 2900 hour experiment highlighted the sensitivity to small perturbations and a need for additional control development. Further sensitivity studies outlined the responsiveness and limits of different control approaches. The capability for substantial turn-down and load following through speed control and flow bypass with minimal impact on internal fuel cell thermal distribution is particularly promising to meet local demands or provide dispatchable support for renewable power. Advanced control and dispatch heuristics are discussed using a case study of the UCI central plant. Thermal energy storage introduces a time horizon into the dispatch optimization which requires novel solution strategies. Highly efficient and responsive generators are required to meet the increasingly dynamic loads of today's efficient buildings and intermittent local renewable wind and solar power. Fuel cell gas turbine hybrids will play an integral role in the complex and ever-changing solution to local electricity production.
Wu, Abraham J; Bosch, Walter R; Chang, Daniel T; Hong, Theodore S; Jabbour, Salma K; Kleinberg, Lawrence R; Mamon, Harvey J; Thomas, Charles R; Goodman, Karyn A
2015-07-15
Current guidelines for esophageal cancer contouring are derived from traditional 2-dimensional fields based on bony landmarks, and they do not provide sufficient anatomic detail to ensure consistent contouring for more conformal radiation therapy techniques such as intensity modulated radiation therapy (IMRT). Therefore, we convened an expert panel with the specific aim to derive contouring guidelines and generate an atlas for the clinical target volume (CTV) in esophageal or gastroesophageal junction (GEJ) cancer. Eight expert academically based gastrointestinal radiation oncologists participated. Three sample cases were chosen: a GEJ cancer, a distal esophageal cancer, and a mid-upper esophageal cancer. Uniform computed tomographic (CT) simulation datasets and accompanying diagnostic positron emission tomographic/CT images were distributed to each expert, and the expert was instructed to generate gross tumor volume (GTV) and CTV contours for each case. All contours were aggregated and subjected to quantitative analysis to assess the degree of concordance between experts and to generate draft consensus contours. The panel then refined these contours to generate the contouring atlas. The κ statistics indicated substantial agreement between panelists for each of the 3 test cases. A consensus CTV atlas was generated for the 3 test cases, each representing common anatomic presentations of esophageal cancer. The panel agreed on guidelines and principles to facilitate the generalizability of the atlas to individual cases. This expert panel successfully reached agreement on contouring guidelines for esophageal and GEJ IMRT and generated a reference CTV atlas. This atlas will serve as a reference for IMRT contours for clinical practice and prospective trial design. Subsequent patterns of failure analyses of clinical datasets using these guidelines may require modification in the future. Copyright © 2015 Elsevier Inc. All rights reserved.
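As a toy illustration of the concordance analysis mentioned above, the sketch below computes Cohen's kappa between two binary contour masks; the study's actual agreement statistic, software and data are not specified in the abstract, so this is only a generic stand-in (values above roughly 0.6-0.8 are conventionally read as substantial agreement).

```python
import numpy as np

def cohens_kappa(mask_a, mask_b):
    """Cohen's kappa for two binary contour masks (flattened voxel labels)."""
    a = np.asarray(mask_a).ravel().astype(bool)
    b = np.asarray(mask_b).ravel().astype(bool)
    po = np.mean(a == b)                                    # observed agreement
    pe = a.mean() * b.mean() + (1 - a.mean()) * (1 - b.mean())  # chance agreement
    return (po - pe) / (1 - pe)

# hypothetical example: a reference contour and a slightly perturbed observer contour
rng = np.random.default_rng(5)
reference = rng.random((64, 64)) > 0.7
observer = reference ^ (rng.random((64, 64)) > 0.95)
print(round(cohens_kappa(reference, observer), 3))
```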
Expert consensus contouring guidelines for IMRT in esophageal and gastroesophageal junction cancer
Wu, Abraham J.; Bosch, Walter R.; Chang, Daniel T.; Hong, Theodore S.; Jabbour, Salma K.; Kleinberg, Lawrence R.; Mamon, Harvey J.; Thomas, Charles R.; Goodman, Karyn A.
2015-01-01
Purpose/Objective(s) Current guidelines for esophageal cancer contouring are derived from traditional two-dimensional fields based on bony landmarks, and do not provide sufficient anatomical detail to ensure consistent contouring for more conformal radiotherapy techniques such as intensity-modulated radiation therapy (IMRT). Therefore, we convened an expert panel with the specific aim to derive contouring guidelines and generate an atlas for the clinical target volume (CTV) in esophageal or gastroesophageal junction (GEJ) cancer. Methods and Materials Eight expert academically-based gastrointestinal radiation oncologists participated. Three sample cases were chosen: a GEJ cancer, a distal esophageal cancer, and a mid-upper esophageal cancer. Uniform CT simulation datasets and an accompanying diagnostic PET-CT were distributed to each expert, and he/she was instructed to generate gross tumor volume (GTV) and CTV contours for each case. All contours were aggregated and subjected to quantitative analysis to assess the degree of concordance between experts and generate draft consensus contours. The panel then refined these contours to generate the contouring atlas. Results Kappa statistics indicated substantial agreement between panelists for each of the three test cases. A consensus CTV atlas was generated for the three test cases, each representing common anatomic presentations of esophageal cancer. The panel agreed on guidelines and principles to facilitate the generalizability of the atlas to individual cases. Conclusions This expert panel successfully reached agreement on contouring guidelines for esophageal and GEJ IMRT and generated a reference CTV atlas. This atlas will serve as a reference for IMRT contours for clinical practice and prospective trial design. Subsequent patterns of failure analyses of clinical datasets utilizing these guidelines may require modification in the future. PMID:26104943
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Abraham J., E-mail: wua@mskcc.org; Bosch, Walter R.; Chang, Daniel T.
Purpose/Objective(s): Current guidelines for esophageal cancer contouring are derived from traditional 2-dimensional fields based on bony landmarks, and they do not provide sufficient anatomic detail to ensure consistent contouring for more conformal radiation therapy techniques such as intensity modulated radiation therapy (IMRT). Therefore, we convened an expert panel with the specific aim to derive contouring guidelines and generate an atlas for the clinical target volume (CTV) in esophageal or gastroesophageal junction (GEJ) cancer. Methods and Materials: Eight expert academically based gastrointestinal radiation oncologists participated. Three sample cases were chosen: a GEJ cancer, a distal esophageal cancer, and a mid-upper esophageal cancer. Uniform computed tomographic (CT) simulation datasets and accompanying diagnostic positron emission tomographic/CT images were distributed to each expert, and the expert was instructed to generate gross tumor volume (GTV) and CTV contours for each case. All contours were aggregated and subjected to quantitative analysis to assess the degree of concordance between experts and to generate draft consensus contours. The panel then refined these contours to generate the contouring atlas. Results: The κ statistics indicated substantial agreement between panelists for each of the 3 test cases. A consensus CTV atlas was generated for the 3 test cases, each representing common anatomic presentations of esophageal cancer. The panel agreed on guidelines and principles to facilitate the generalizability of the atlas to individual cases. Conclusions: This expert panel successfully reached agreement on contouring guidelines for esophageal and GEJ IMRT and generated a reference CTV atlas. This atlas will serve as a reference for IMRT contours for clinical practice and prospective trial design. Subsequent patterns of failure analyses of clinical datasets using these guidelines may require modification in the future.
Origin of the Colorado River experimental flood in Grand Canyon
Andrews, E.D.; Pizzi, L.A.
2000-01-01
The Colorado River is one of the most highly regulated and extensively utilized rivers in the world. Total reservoir storage is approximately four times the mean annual runoff of ~17 × 10⁹ m³ year⁻¹. Reservoir storage and regulation have decreased annual peak discharges, and hydroelectric power generation has increased daily flow variability. In recent years, the incidental impacts of this development have become apparent, especially along the Colorado River through Grand Canyon National Park downstream from Glen Canyon Dam, and caused widespread concern. Since the completion of Glen Canyon Dam, the number and size of sand bars, which are used by recreational river runners and form the habitat for native fishes, have decreased substantially. Following an extensive hydrological and geomorphic investigation, an experimental flood release from the Glen Canyon Dam was proposed to determine whether sand bars would be rebuilt by a relatively brief period of flow substantially greater than the normal operating regime. This proposed release, however, was constrained by the Law of the River, the body of law developed over 70 years to control and distribute Colorado River water, the needs of hydropower users and those dependent upon hydropower revenues, and the physical constraints of the dam itself. A compromise was reached following often difficult negotiations and an experimental flood to rebuild sand bars was released in 1996. This flood, and the process by which it came about, gives hope to resolving the difficult and pervasive problem of allocation of water resources among competing interests.
Limitations on orchid recruitment: not a simple picture
M.K. McCormick; D.L. Taylor; K Juhaszova; R.K Burnett; D.F. Whigham; J.P. O' Neill
2012-01-01
Mycorrhizal fungi have substantial potential to influence plant distribution, especially in specialized orchids and mycoheterotrophic plants. However, little is known about environmental factors that influence the distribution of mycorrhizal fungi. Previous studies using seed packets have been unable to distinguish whether germination patterns resulted from the...
ERIC Educational Resources Information Center
Maxwell, Scott E.; Cole, David A.; Mitchell, Melissa A.
2011-01-01
Maxwell and Cole (2007) showed that cross-sectional approaches to mediation typically generate substantially biased estimates of longitudinal parameters in the special case of complete mediation. However, their results did not apply to the more typical case of partial mediation. We extend their previous work by showing that substantial bias can…
Empirical Analysis and Refinement of Expert System Knowledge Bases
1988-08-31
refinement. Both a simulated case generation program and a random rule basher were developed to enhance rule refinement experimentation. Substantial... the second fiscal year 88 objective was fully met. [Figure: Rule Refinement System, showing the Simulated Rule Basher, Case Generator, Stored Cases, and Expert System Knowledge Base components.] ...generated until the rule is satisfied. Cases may be randomly generated for a given rule or hypothesis. Rule Basher: Given that one has a correct
INVESTIGATION INTO THE REJUVENATION OF SPENT ELECTROLESS NICKEL BATHS BY ELECTRODIALYSIS
Electroless nickel plating generates substantially more waste than other metal-finishing processes due to the inherently limited bath life and the need for regular bath disposal. Electrodialysis can be used to regenerate electroless nickel baths, but poor membrane permselectivity, l...
Guillerme, Thomas; Cooper, Natalie
2016-05-01
Analyses of living and fossil taxa are crucial for understanding biodiversity through time. The total evidence method allows living and fossil taxa to be combined in phylogenies, using molecular data for living taxa and morphological data for living and fossil taxa. With this method, substantial overlap of coded anatomical characters among living and fossil taxa is vital for accurately inferring topology. However, although molecular data for living species are widely available, scientists generating morphological data mainly focus on fossils. Therefore, there are fewer coded anatomical characters in living taxa, even in well-studied groups such as mammals. We investigated the number of coded anatomical characters available in phylogenetic matrices for living mammals and how these were phylogenetically distributed across orders. Eleven of 28 mammalian orders have less than 25% of species with available characters; this has implications for the accurate placement of fossils, although the issue is less pronounced at higher taxonomic levels. In most orders, species with available characters are randomly distributed across the phylogeny, which may reduce the impact of the problem. We suggest that increased morphological data collection efforts for living taxa are needed to produce accurate total evidence phylogenies. © 2016 The Authors.
Tominaga, Koji; Aherne, Julian; Watmough, Shaun A; Alveteg, Mattias; Cosby, Bernard J; Driscoll, Charles T; Posch, Maximilian; Pourmokhtarian, Afshin
2010-12-01
The performance and prediction uncertainty (owing to parameter and structural uncertainties) of four dynamic watershed acidification models (MAGIC, PnET-BGC, SAFE, and VSD) were assessed by systematically applying them to data from the Hubbard Brook Experimental Forest (HBEF), New Hampshire, where long-term records of precipitation and stream chemistry were available. In order to facilitate systematic evaluation, Monte Carlo simulation was used to randomly generate common model input data sets (n = 10,000) from parameter distributions; input data were subsequently translated among models to retain consistency. The model simulations were objectively calibrated against observed data (streamwater: 1963-2004, soil: 1983). The ensemble of calibrated models was used to assess future response of soil and stream chemistry to reduced sulfur deposition at the HBEF. Although both hindcast (1850-1962) and forecast (2005-2100) predictions were qualitatively similar across the four models, the temporal pattern of key indicators of acidification recovery (stream acid neutralizing capacity and soil base saturation) differed substantially. The range in predictions resulted from differences in model structure and their associated posterior parameter distributions. These differences can be accommodated by employing multiple models (ensemble analysis) but have implications for individual model applications.
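A minimal sketch of the Monte Carlo input-generation step described above: a common ensemble of parameter sets is drawn from per-parameter distributions and shared across models. The parameter names, ranges and uniform draws are placeholders for illustration, not the distributions used for the HBEF application.

```python
import numpy as np

def sample_inputs(priors, n=10_000, seed=6):
    """Draw a common ensemble of model inputs from per-parameter distributions.

    `priors` maps a parameter name to (low, high) bounds for a uniform draw;
    the names and ranges used below are hypothetical placeholders.
    """
    rng = np.random.default_rng(seed)
    return {name: rng.uniform(lo, hi, size=n) for name, (lo, hi) in priors.items()}

ensemble = sample_inputs({
    "soil_base_saturation_pct": (5.0, 25.0),
    "cation_exchange_capacity": (50.0, 150.0),
    "weathering_rate": (10.0, 100.0),
})
print({k: round(float(v.mean()), 2) for k, v in ensemble.items()})
```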
3D Visualization for Phoenix Mars Lander Science Operations
NASA Technical Reports Server (NTRS)
Edwards, Laurence; Keely, Leslie; Lees, David; Stoker, Carol
2012-01-01
Planetary surface exploration missions present considerable operational challenges in the form of substantial communication delays, limited communication windows, and limited communication bandwidth. 3D visualization software was developed and delivered to the 2008 Phoenix Mars Lander (PML) mission. The components of the system include an interactive 3D visualization environment called Mercator, terrain reconstruction software called the Ames Stereo Pipeline, and a server providing distributed access to terrain models. The software was successfully utilized during the mission for science analysis, site understanding, and science operations activity planning. A terrain server was implemented that provided distribution of terrain models from a central repository to clients running the Mercator software. The Ames Stereo Pipeline generates accurate, high-resolution, texture-mapped, 3D terrain models from stereo image pairs. These terrain models can then be visualized within the Mercator environment. The central cross-cutting goal for these tools is to provide an easy-to-use, high-quality, full-featured visualization environment that enhances the mission science team's ability to develop low-risk productive science activity plans.
Universally Unstable Nature of Velocity Ring Distributions
NASA Astrophysics Data System (ADS)
Mithaiwala, Manish
2010-11-01
Although it is typically believed that an ion ring velocity distribution has a stability threshold, we find that such distributions are universally unstable. This can substantially impact the understanding of dynamics in both laboratory and space plasmas. A high ring density neutralizes the stabilizing effect of ion Landau damping in a warm plasma and the ring is unstable to the generation of waves below the lower hybrid frequency, even for a very high temperature plasma. For ring densities lower than the background plasma density there is a slow instability with growth rate less than the background ion cyclotron frequency and consequently the background ion response is magnetized. This is in addition to the widely discussed fast instability where the wave growth rate exceeds the background ion cyclotron frequency and hence the background ions are effectively unmagnetized. Thus, even a low density ring is unstable to waves around the lower hybrid frequency range for any ring speed. This implies that effectively there is no velocity threshold for a sufficiently cold ring. The importance of these conclusions on the nonlinear evolution of space plasmas, in particular to solar wind-comet interaction, post-magnetospheric storm conditions, and chemical release experiments in the ionosphere will be discussed.
Microstructure and performance of rare earth element-strengthened plasma-facing tungsten material
Luo, Laima; Shi, Jing; Lin, Jinshan; Zan, Xiang; Zhu, Xiaoyong; Xu, Qiu; Wu, Yucheng
2016-01-01
Pure W and W-(2%, 5%, 10%) Lu alloys were manufactured via mechanical alloying for 20 h and a spark plasma sintering process at 1,873 K for 2 min. The effects of Lu doping on the microstructure and performance of W were investigated using various techniques. For irradiation performance analysis, thermal desorption spectroscopy (TDS) measurements were performed from room temperature to 1,000 K via infrared irradiation with a heating rate of 1 K/s after implantations of He+ and D+ ions. TDS measurements were conducted to investigate D retention behavior. Microhardness was dramatically enhanced, and the density initially increased and then decreased with Lu content. The D retention performance followed the same trend as the density. Second-phase particles identified as Lu2O3 particles were completely distributed over the W grain boundaries and generated an effective grain refinement. Transgranular and intergranular fracture modes were observed on the fracture surface of the sintered W-Lu samples, indicating some improvement of strength and toughness. The amount and distribution of Lu substantially affected the properties of W. Among the investigated alloy compositions, W-5%Lu exhibited the best overall performance. PMID:27596002
Narayanan, Vignesh; Jagannathan, Sarangapani
2017-09-07
In this paper, a distributed control scheme for an interconnected system composed of uncertain input affine nonlinear subsystems with event-triggered state feedback is presented by using a novel hybrid learning scheme-based approximate dynamic programming with online exploration. First, an approximate solution to the Hamilton-Jacobi-Bellman equation is generated with event-sampled neural network (NN) approximation and subsequently a near-optimal control policy for each subsystem is derived. Artificial NNs are utilized as function approximators to develop a suite of identifiers and learn the dynamics of each subsystem. The NN weight tuning rules for the identifier and event-triggering condition are derived using Lyapunov stability theory. Taking into account the effects of NN approximation of system dynamics and bootstrapping, a novel NN weight update is presented to approximate the optimal value function. Finally, a novel strategy to incorporate exploration in the online control framework, using identifiers, is introduced to reduce the overall cost at the expense of additional computations during the initial online learning phase. System states and the NN weight estimation errors are regulated and locally uniformly ultimately bounded results are achieved. The analytical results are substantiated using simulation studies.
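To illustrate the event-triggered feedback idea in the abstract above, here is a toy sketch in which a linear state-feedback control is recomputed only when the state drifts from its last sampled value by more than a fixed threshold. The system matrices, gain and static trigger are illustrative assumptions; the cited scheme instead uses NN approximators and a Lyapunov-derived triggering condition.

```python
import numpy as np

def simulate_event_triggered(A, B, K, x0, steps=200, threshold=0.05):
    """Toy event-triggered state feedback on a discrete linear system x+ = A x + B u.

    The control input is recomputed only when the state has drifted from the
    last-sampled state by more than `threshold` (a simple static trigger).
    """
    x = np.array(x0, dtype=float)
    x_sampled = x.copy()
    u = -K @ x_sampled
    events = 0
    for _ in range(steps):
        if np.linalg.norm(x - x_sampled) > threshold:   # event-triggering condition
            x_sampled = x.copy()
            u = -K @ x_sampled                          # control updated only at events
            events += 1
        x = A @ x + B @ u
    return x, events

A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
K = np.array([[1.2, 1.1]])
print(simulate_event_triggered(A, B, K, x0=[1.0, 0.0]))
```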
Payne, Liam; Heard, Peter J; Scott, Thomas B
2015-01-01
Pile grade A (PGA) graphite was used as a material for moderating and reflecting neutrons in the UK's first generation Magnox nuclear power reactors. As all but one of these reactors are now shut down there is a need to understand the residual state of the material prior to decommissioning of the cores, in particular the location and concentration of key radio-contaminants such as 14C. The oxidation behaviour of unirradiated PGA graphite was studied, in the temperature range 600-1050°C, in air and nitrogen using thermogravimetric analysis, scanning electron microscopy and X-ray tomography to investigate the possibility of using thermal degradation techniques to examine 14C distribution within irradiated material. The thermal decomposition of PGA graphite was observed to follow the three oxidation regimes historically identified by previous workers with limited, uniform oxidation at temperatures below 600°C and substantial, external oxidation at higher temperatures. This work demonstrates that the different oxidation regimes of PGA graphite could be developed into a methodology to characterise the distribution and concentration of 14C in irradiated graphite by thermal treatment.
Payne, Liam; Heard, Peter J.; Scott, Thomas B.
2015-01-01
Pile grade A (PGA) graphite was used as a material for moderating and reflecting neutrons in the UK’s first generation Magnox nuclear power reactors. As all but one of these reactors are now shut down there is a need to understand the residual state of the material prior to decommissioning of the cores, in particular the location and concentration of key radio-contaminants such as 14C. The oxidation behaviour of unirradiated PGA graphite was studied, in the temperature range 600–1050°C, in air and nitrogen using thermogravimetric analysis, scanning electron microscopy and X-ray tomography to investigate the possibility of using thermal degradation techniques to examine 14C distribution within irradiated material. The thermal decomposition of PGA graphite was observed to follow the three oxidation regimes historically identified by previous workers with limited, uniform oxidation at temperatures below 600°C and substantial, external oxidation at higher temperatures. This work demonstrates that the different oxidation regimes of PGA graphite could be developed into a methodology to characterise the distribution and concentration of 14C in irradiated graphite by thermal treatment. PMID:26575374
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zackrisson, Erik; Binggeli, Christian; Finlator, Kristian
In this study, using four different suites of cosmological simulations, we generate synthetic spectra for galaxies with different Lyman-continuum escape fractions ($$f_{\mathrm{esc}}$$) at redshifts $$z\approx 7$$–9, in the rest-frame wavelength range relevant for the James Webb Space Telescope (JWST) NIRSpec instrument. By investigating the effects of realistic star formation histories and metallicity distributions on the EW(Hβ)–β diagram (previously proposed as a tool for identifying galaxies with very high $$f_{\mathrm{esc}}$$), we find that neither of these effects is likely to jeopardize the identification of galaxies with extreme Lyman-continuum leakage. Based on our models, we expect essentially all $$z\approx 7\mbox{–}9$$ galaxies that exhibit rest-frame $$\mathrm{EW}({\rm{H}}\beta )\lesssim 30$$ Å to have $${f}_{\mathrm{esc}}\gt 0.5$$. Incorrect assumptions concerning the ionizing fluxes of stellar populations or the dust properties of $$z\gt 6$$ galaxies can in principle bias the selection, but substantial model deficiencies of this type should at the same time be evident from offsets in the observed distribution of $$z\gt 6$$ galaxies in the EW(Hβ)–β diagram compared to the simulated distribution. Such offsets would thereby allow JWST/NIRSpec measurements of these observables to serve as input for further model refinement.
Letcher, B.H.; Coombs, J.A.; Nislow, K.H.
2011-01-01
Phenotypic variation in body size can result from within-cohort variation in birth dates, among-individual growth variation and size-selective processes. We explore the relative effects of these processes on the maintenance of wide observed body size variation in stream-dwelling brook trout (Salvelinus fontinalis). Based on the analyses of multiple recaptures of individual fish, it appears that size distributions are largely determined by the maintenance of early size variation. We found no evidence for size-dependent compensatory growth (which would reduce size variation) and found no indication that size-dependent survival substantially influenced body size distributions. Depensatory growth (faster growth by larger individuals) reinforced early size variation, but was relatively strong only during the first sampling interval (age-0, fall). Maternal decisions on the timing and location of spawning could have a major influence on early, and as our results suggest, later (>age-0) size distributions. If this is the case, our estimates of heritability of body size (body length=0.25) will be dominated by processes that generate and maintain early size differences. As a result, evolutionary responses to environmental change that are mediated by body size may be largely expressed via changes in the timing and location of reproduction. Published 2011. This article is a US Government work and is in the public domain in the USA.
Excited-state dissociation dynamics of phenol studied by a new time-resolved technique
NASA Astrophysics Data System (ADS)
Lin, Yen-Cheng; Lee, Chin; Lee, Shih-Huang; Lee, Yin-Yu; Lee, Yuan T.; Tseng, Chien-Ming; Ni, Chi-Kung
2018-02-01
Phenol is an important model molecule for the theoretical and experimental investigation of dissociation on multistate potential energy surfaces. Recent theoretical calculations [X. Xu et al., J. Am. Chem. Soc. 136, 16378 (2014)] suggest that the phenoxyl radical produced in both the X and A states from O-H bond fission in phenol can contribute substantially to the slow component of the photofragment translational energy distribution. However, current experimental techniques struggle to separate the contributions from different dissociation pathways. A new type of time-resolved pump-probe experiment is described that enables the selection of the products generated within a specific time window after molecules are excited by a pump laser pulse and can quantitatively characterize the translational energy distribution and branching ratio of each dissociation pathway. This method modifies conventional photofragment translational spectroscopy by reducing the acceptance angles of the detection region and changing the interaction region of the pump laser beam and the molecular beam along the molecular beam axis. The translational energy distributions and branching ratios of the phenoxyl radicals produced in the X, A, and B states from the photodissociation of phenol at 213 and 193 nm are reported. Unlike other techniques, this method has no interference from undissociated hot molecules. It can ultimately become a standard pump-probe technique for the study of large-molecule photodissociation involving multiple electronic states.
DG Planning with Amalgamation of Operational and Reliability Considerations
NASA Astrophysics Data System (ADS)
Battu, Neelakanteshwar Rao; Abhyankar, A. R.; Senroy, Nilanjan
2016-04-01
Distributed Generation has been playing a vital role in dealing with issues related to distribution systems. This paper presents an approach which provides the policy maker with a set of solutions for DG placement to optimize reliability and real power loss of the system. The optimal location of a Distributed Generator is evaluated based on performance indices derived from the reliability index and the real power loss. The proposed approach is applied to a 15-bus radial distribution system and an 18-bus radial distribution system with conventional and wind distributed generators individually.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barton, A.R. Jr.; Redwine, J.C.
1985-03-01
Major areas of concern to power companies include the leaching of both solid wastes and stored coal, land subsidence and sinkhole development, and seepage away from all types of impoundments. These groundwater considerations can produce substantial increases in the cost of generating electricity. The leaching of fly ash, bottom ash, coal piles, and other materials has recently developed into an area of major environmental concern. Federal, state, and local regulations require various degrees of leachate monitoring. Land subsidence and sinkhole development can adversely affect power-generating facilities and frequently result in substantial property losses. Seepage from impoundments of all sorts (for example, ash ponds or hydroelectric facilities) may result in substantial water losses, lost generation, reduced stability of structures, and in extreme cases, abandonment or failure of dikes and dams. The groundwater manual is organized into three volumes. Volume 1 explains hydrogeologic concepts basic to understanding the occurrence, availability, and importance of underground waters and aquifers. It also contains a glossary of terms on subsurface hydrology and discusses such topics as the hydrologic cycle, groundwater quality in the 12 major US groundwater regions, and groundwater regulation. (ACR)
Rodhouse, Thomas J.; Ormsbee, Patricia C.; Irvine, Kathryn M.; Vierling, Lee A.; Szewczak, Joseph M.; Vierling, Kerri T.
2015-01-01
Landscape keystone structures associated with roosting habitat emerged as regionally important predictors of bat distributions. The challenges of bat monitoring have constrained previous species distribution modelling efforts to temporally static presence-only approaches. Our approach extends to broader spatial and temporal scales than has been possible in the past for bats, representing a substantial increase in capacity for bat conservation.
Space and surface power for the space exploration initiative: Results from project outreach
NASA Technical Reports Server (NTRS)
Shipbaugh, C.; Solomon, K.; Gonzales, D.; Juncosa, M.; Bauer, T.; Salter, R.
1991-01-01
The analysis and evaluations of the Space and Surface Power panel, one of eight panels created by RAND to screen and analyze submissions to the Space Exploration Initiative (SEI) Outreach Program, are documented. In addition to managing and evaluating the responses, or submissions, to this public outreach program, RAND conducted its own analysis and evaluation relevant to SEI mission concepts, systems, and technologies. The Power panel screened and analyzed submissions for which a substantial portion of the concepts involved power generation sources, transmission, distribution, thermal management, and handling of power (including conditioning, conversion, packaging, and enhancements in system components). A background discussion of the areas the Power panel covered and the issues the reviewers considered pertinent to the analysis of power submissions are presented. An overview of each of the highest-ranked submissions is presented, followed by a discussion of these submissions. The results of the analysis are presented.
Stimulation of waste decomposition in an old landfill by air injection.
Wu, Chuanfu; Shimaoka, Takayuki; Nakayama, Hirofumi; Komiya, Teppei; Chai, Xiaoli
2016-12-01
Three pilot-scale lysimeters were operated for 4.5 years to quantify the change in the carbon and nitrogen pool in an old landfill under various air injection conditions. The results indicate that air injection at the bottom layer facilitated homogeneous distribution of oxygen in the waste matrix. Substantial total organic carbon (TOC) decomposition and methane generation reduction were achieved. A considerable amount of nitrogen was removed, suggesting that in situ nitrogen removal via the effective simultaneous nitrification and denitrification mechanism is viable. Moreover, material mass change measurements revealed a slight mass reduction of aged MSW (by approximately 4.0%) after 4.5 years of aeration. Additionally, experiments revealed that intensive aeration during the final stage of the experiment did not further stimulate the degradation of the aged MSW. Therefore, elimination of the labile fraction of aged MSW should be considered the objective of in situ aeration. Copyright © 2016 Elsevier Ltd. All rights reserved.
Effective Team Support: From Modeling to Software Agents
NASA Technical Reports Server (NTRS)
Remington, Roger W. (Technical Monitor); John, Bonnie; Sycara, Katia
2003-01-01
The purpose of this research contract was to perform multidisciplinary research between CMU psychologists, computer scientists and engineers, and NASA researchers to design a next generation collaborative system to support a team of human experts and intelligent agents. To achieve robust performance enhancement of such a system, we had proposed to perform task and cognitive modeling to thoroughly understand the impact technology makes on the organization and on key individual personnel. Guided by cognitively-inspired requirements, we would then develop software agents that support the human team in decision making, information filtering, information distribution and integration to enhance team situational awareness. During the period covered by this final report, we made substantial progress in modeling infrastructure and task infrastructure. Work is continuing under a different contract to complete empirical data collection, cognitive modeling, and the building of software agents to support the team's tasks.
Capitalizing on a current fad to promote poison help: (1-800-222-1222).
Krenzelok, Edward P; Klick, Ross N; Burke, Thomas V; Mrvos, Rita
2007-01-01
The distinctive yellow Lance Armstrong 'Live Strong' silicone wristbands, which support cancer research, have reached iconic status and spawned substantial interest from other organizations seeking to capitalize on the same awareness opportunity. To promote the national toll-free Poison Help telephone number, a regional poison information center (RPIC) developed and introduced a Poison Help wristband. The RPIC worked with a marketing firm to design the Poison Help wristband, conduct a feasibility analysis to determine the financial viability of the project and develop a plan to market and sell the wristbands. The wristbands were a unique color and contained the words Poison Help and the national toll-free telephone number. Approximately 50,000 wristbands were distributed in the first four months. By developing a practical application for a popular item, the RPIC increased poison center awareness and, as a secondary benefit, generated revenue to support other poison prevention education endeavors.
NASA Technical Reports Server (NTRS)
Remington, Roger W. (Technical Monitor); John, Bonnie E.; Sycara, Katia
2005-01-01
The purpose of this research contract was to perform multidisciplinary research between CMU psychologists, computer scientists and NASA researchers to design a next generation collaborative system to support a team of human experts and intelligent agents. To achieve robust performance enhancement of such a system, we had proposed to perform task and cognitive modeling to thoroughly understand the impact technology makes on the organization and on key individual personnel. Guided by cognitively-inspired requirements, we would then develop software agents that support the human team in decision making, information filtering, information distribution and integration to enhance team situational awareness. During the period covered by this final report, we made substantial progress in completing a system for empirical data collection, cognitive modeling, and the building of software agents to support a team's tasks, and in running experiments for the collection of baseline data.
Gooyit, Major; Miranda, Pedro O; Wenthur, Cody J; Ducime, Alex; Janda, Kim D
2017-03-15
Active vaccination examining a single hapten engendered with a series of peptidic linkers has resulted in the production of antimethamphetamine antibodies. Given the limited chemical complexity of methamphetamine, the structure of the linker species embedded within the hapten could have a substantial effect on the ultimate efficacy of the resulting vaccines. Herein, we investigate linker effects by generating a series of methamphetamine haptens that harbor a linker with varying amino acid identity, peptide length, and associated carrier protein. Independent changes in each of these parameters were found to result in alterations in both the quantity and quality of the antibodies induced by vaccination. Although it was found that the consequence of the linker design was also dependent on the identity of the carrier protein, we demonstrate overall that the inclusion of a short, structurally simple, amino acid linker benefits the efficacy of a methamphetamine vaccine in limiting brain penetration of the free drug.
A Comparison of Three Programming Models for Adaptive Applications
NASA Technical Reports Server (NTRS)
Shan, Hong-Zhang; Singh, Jaswinder Pal; Oliker, Leonid; Biswa, Rupak; Kwak, Dochan (Technical Monitor)
2000-01-01
We study the performance and programming effort for two major classes of adaptive applications under three leading parallel programming models. We find that all three models can achieve scalable performance on state-of-the-art multiprocessor machines. The basic parallel algorithms needed for different programming models to deliver their best performance are similar, but the implementations differ greatly, far beyond the fact of using explicit messages versus implicit loads/stores. Compared with MPI and SHMEM, CC-SAS (cache-coherent shared address space) provides substantial ease of programming at the conceptual and program orchestration level, which often leads to performance gains. However, it may also suffer from the poor spatial locality of physically distributed shared data on a large number of processors. Our CC-SAS implementation of the PARMETIS partitioner itself runs faster than in the other two programming models, and generates a more balanced result for our application.
Priority for the worse-off and the social cost of carbon
NASA Astrophysics Data System (ADS)
Adler, Matthew; Anthoff, David; Bosetti, Valentina; Garner, Greg; Keller, Klaus; Treich, Nicolas
2017-06-01
The social cost of carbon (SCC) is a key tool in climate policy. The SCC expresses in monetary terms the social impact of the emission of a ton of CO2 in a given year. The SCC is calculated using a 'social welfare function' (SWF): a method for assessing social welfare. The dominant SWF in climate policy is the discounted-utilitarian SWF. Individuals' well-being numbers (utilities) are summed, and the values for later generations are reduced ('discounted'). This SWF has been criticized for ignoring the distribution of well-being and including an arbitrary time preference. Here, we use a 'prioritarian' SWF, with no time discount, to calculate the SCC. This SWF gives extra weight ('priority') to worse-off individuals. Prioritarianism is a well-developed concept in ethics and welfare economics, but has been rarely used in climate scholarship. We find substantial differences between the discounted-utilitarian and non-discounted prioritarian SCCs.
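The contrast between the two families of SWFs can be written schematically as follows; this is a standard textbook form with $$u_{it}$$ the utility of individual $$i$$ in generation $$t$$, $$\rho$$ the utility discount rate, and $$g$$ a strictly concave priority-weighting function, and is not necessarily the exact specification used in the study:

$$ W_{\mathrm{DU}} = \sum_{t}(1+\rho)^{-t}\sum_{i} u_{it}, \qquad W_{\mathrm{prior}} = \sum_{t}\sum_{i} g\left(u_{it}\right), \quad g' > 0,\; g'' < 0. $$

Under the prioritarian form, transferring a unit of well-being from a better-off to a worse-off individual raises $$W_{\mathrm{prior}}$$, which is what drives the difference in the resulting SCC.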
Gonen, Eran; Grossman, Gershon
2015-09-01
Conventional reciprocating pistons, normally found in thermoacoustic engines, tend to introduce complex impedance characteristics, including acoustic, mechanical, and electrical portions. System behavior and performance usually rely on proper tuning processes and selection of an optimal point of operation, affected substantially by complementary hardware, typically adjusted for the specific application. The present study proposes an alternative perspective on the alternator behavior, by considering the relative motion between gas and piston during the engine mode of operation. Direct analytical derivation of the velocity distribution inside a tight seal gap and the associated impedance is employed to estimate the electro-acoustic conversion efficiency, thus indicating how to improve the system performance. The influence of acoustic phase, gap dimensions, and working conditions is examined, suggesting the need to develop tighter and longer seal gaps, having increased impedance, to allow optimization for use in upcoming sustainable power generation solutions and smart grids.
Chemical fractionation of Cu and Zn in stormwater, roadway dust and stormwater pond sediments
Camponelli, Kimberly M.; Lev, Steven M.; Snodgrass, Joel W.; Landa, Edward R.; Casey, Ryan E.
2010-01-01
This study evaluated the chemical fractionation of Cu and Zn from source to deposition in a stormwater system. Cu and Zn concentrations and chemical fractionation were determined for roadway dust, roadway runoff and pond sediments. Stormwater Cu and Zn concentrations were used to generate cumulative frequency distributions to characterize potential exposure to pond-dwelling organisms. Dissolved stormwater Zn exceeded USEPA acute and chronic water quality criteria in approximately 20% of storm samples and 20% of the storm duration sampled. Dissolved Cu exceeded the previously published chronic criterion in 75% of storm samples and duration and exceeded the acute criterion in 45% of samples and duration. The majority of sediment Cu (92–98%) occurred in the most recalcitrant phase, suggesting low bioavailability; Zn was substantially more available (39–62% recalcitrant). Most sediment concentrations for Cu and Zn exceeded published threshold effect concentrations and Zn often exceeded probable effect concentrations in surface sediments.
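A minimal sketch of how cumulative frequency distributions and exceedance fractions of the kind described above could be computed from the storm samples is given below; the plotting-position choice and any criterion values are illustrative assumptions, not those of the study.

```python
import numpy as np

def empirical_cdf(samples):
    """Cumulative frequency distribution using Weibull plotting positions rank/(n+1)."""
    x = np.sort(np.asarray(samples, dtype=float))
    freq = np.arange(1, x.size + 1) / (x.size + 1.0)
    return x, freq

def fraction_exceeding(samples, criterion):
    """Fraction of storm samples whose concentration exceeds a water-quality criterion."""
    return float(np.mean(np.asarray(samples, dtype=float) > criterion))
```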
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blackman, Harold; Moore, Joseph
2014-06-30
The ultimate goal of the National Geothermal Data System (NGDS) is to support the discovery and generation of geothermal sources of energy. The NGDS was designed and has been implemented to provide online access to important geothermal-related data from a network of data providers in order to: • Increase the efficiency of exploration, development and usage of geothermal energy by providing a basis for financial risk analysis of potential sites • Assist state and federal agencies in making land and resource management assessments • Foster the discovery of new geothermal resources by supporting ongoing and future geothermal-related research • Increase public awareness of geothermal energy It is through the implementation of this distributed data system and its subsequent use that substantial increases to the general access and understanding of geothermal-related data will result. NGDS provides a mechanism for the sharing of data, thereby fostering the discovery of new resources and supporting ongoing geothermal research.
Novel second-stage solar concentrator for parabolic troughs
NASA Astrophysics Data System (ADS)
Collares-Pereira, Manuel; Mendes, Joao F.
1995-08-01
Conventional parabolic troughs can be combined with second-stage concentrators (SSC) to increase temperature and pressure inside the absorber, making possible the direct production of steam, substantially improving the overall system efficiency and leading to a new generation of distributed solar power plants. To attain this objective, research is needed at the optical, thermodynamic, system control, and engineering levels. Regarding the receiver of such a system, different practical solutions for the geometry of the second-stage concentrator (CPC type and others) have been proposed, both recently and in the past. In this work we discuss these solutions and we propose a new one, 100% efficient in energy collection while reaching a total concentration ratio which is almost 65% of the thermodynamic limit. This SSC has an asymmetric elliptical geometry, rendering possible a smooth solution for the reflectors while maintaining a reasonable size for the receiver.
Principles of nanoparticle design for overcoming biological barriers to drug delivery
Blanco, Elvin; Shen, Haifa; Ferrari, Mauro
2016-01-01
Biological barriers to drug transport prevent successful accumulation of nanotherapeutics specifically at diseased sites, limiting efficacious responses in disease processes ranging from cancer to inflammation. Although substantial research efforts have aimed to incorporate multiple functionalities and moieties within the overall nanoparticle design, many of these strategies fail to adequately address these barriers. Obstacles, such as nonspecific distribution and inadequate accumulation of therapeutics, remain formidable challenges to drug developers. A reimagining of conventional nanoparticles is needed to successfully negotiate these impediments to drug delivery. Site-specific delivery of therapeutics will remain a distant reality unless nanocarrier design takes into account the majority, if not all, of the biological barriers that a particle encounters upon intravenous administration. By successively addressing each of these barriers, innovative design features can be rationally incorporated that will create a new generation of nanotherapeutics, realizing a paradigmatic shift in nanoparticle-based drug delivery. PMID:26348965
Progress on Complex Langevin simulations of a finite density matrix model for QCD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bloch, Jacques; Glesaan, Jonas; Verbaarschot, Jacobus
We study the Stephanov model, which is a random matrix theory (RMT) model for QCD at finite density, using the Complex Langevin algorithm. A naive implementation of the algorithm shows convergence towards the phase-quenched or quenched theory rather than to the intended theory with dynamical quarks. A detailed analysis of this issue and a potential resolution of the failure of this algorithm are discussed. We study the effect of gauge cooling on the Dirac eigenvalue distribution and the time evolution of the norm for various cooling norms, which were specifically designed to remove the pathologies of the complex Langevin evolution. The cooling is further supplemented with a shifted representation for the random matrices. Unfortunately, none of these modifications generates a substantial improvement on the complex Langevin evolution and the final results still do not agree with the analytical predictions.
Sweet spots, EROI, and the limits to Bakken production
NASA Astrophysics Data System (ADS)
Waggoner, Egan Greiner
The Bakken Formation has generated attention due to its substantial role in the recent surge in US domestic oil production. However, there may be significant problems in extrapolating past successes because production is not distributed equally, but is concentrated in "sweet spots." These sweet spots are saturated with wells, and some productive fields are declining already. If we are to maintain a consistent or increasing level of production from more marginal areas, an increasing number of wells must be drilled. As the most attractive areas for exploration and production appear already to have been drilled, new fields are likely to be less energetically and economically profitable. I analyze current and future production using the Energy Return on Investment (EROI) metric, a ratio of energy outputs over energy inputs. My results indicate that EROI_STND for the sweet spot Parshall Field is 63:1 and the more energy cost-inclusive EROI_FIN is 9:1.
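Written out, the metric used above is simply the ratio of energy delivered to energy invested; the two subscripted variants appear to differ only in how inclusively the energy costs in the denominator are counted (a narrower 'standard' boundary versus the more cost-inclusive one):

$$ \mathrm{EROI} = \frac{E_{\mathrm{out}}}{E_{\mathrm{in}}}, \qquad \mathrm{EROI}_{\mathrm{STND}} \approx 63{:}1 \;\text{(Parshall Field)}, \qquad \mathrm{EROI}_{\mathrm{FIN}} \approx 9{:}1. $$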
CSP cogeneration of electricity and desalinated water at the Pentakomo field facility
NASA Astrophysics Data System (ADS)
Papanicolas, C. N.; Bonanos, A. M.; Georgiou, M. C.; Guillen, E.; Jarraud, N.; Marakkos, C.; Montenon, A.; Stiliaris, E.; Tsioli, E.; Tzamtzis, G.; Votyakov, E. V.
2016-05-01
The Cyprus Institute's Pentakomo Field Facility (PFF) is a major infrastructure for research, development and testing of technologies relating to concentrated solar power (CSP) and solar seawater desalination. It is located at the south coast of Cyprus near the sea and its environmental conditions are fully monitored. It provides a test facility specializing in the development of CSP systems suitable for island and coastal environments with particular emphasis on small units (<25 MWth) endowed with substantial storage, suitable for use in isolation or distributed in small power grids. The first major experiment to take place at the PFF concerns the development of a pilot/experimental facility for the co-generation of electricity and desalinated seawater from CSP. Specifically, the experimental plant consists of a heliostat-central receiver system for solar harvesting, thermal energy storage in molten salts followed by a Rankine cycle for electricity production and a multiple-effect distillation (MED) unit for desalination.
NASA Technical Reports Server (NTRS)
Pindera, Marek-Jerzy; Salzar, Robert S.; Williams, Todd O.
1993-01-01
The utility of a recently developed analytical micromechanics model for the response of metal matrix composites under thermal loading is illustrated by comparison with the results generated using the finite-element approach. The model is based on the concentric cylinder assemblage consisting of an arbitrary number of elastic or elastoplastic sublayers with isotropic or orthotropic, temperature-dependent properties. The elastoplastic boundary-value problem of an arbitrarily layered concentric cylinder is solved using the local/global stiffness matrix formulation (originally developed for elastic layered media) and Mendelson's iterative technique of successive elastic solutions. These features of the model facilitate efficient investigation of the effects of various microstructural details, such as functionally graded architectures of interfacial layers, on the evolution of residual stresses during cool down. The available closed-form expressions for the field variables can readily be incorporated into an optimization algorithm in order to efficiently identify optimal configurations of graded interfaces for given applications. Comparison of residual stress distributions after cool down generated using finite-element analysis and the present micromechanics model for four composite systems with substantially different temperature-dependent elastic, plastic, and thermal properties illustrates the efficacy of the developed analytical scheme.
Non-contact finger vein acquisition system using NIR laser
NASA Astrophysics Data System (ADS)
Kim, Jiman; Kong, Hyoun-Joong; Park, Sangyun; Noh, SeungWoo; Lee, Seung-Rae; Kim, Taejeong; Kim, Hee Chan
2009-02-01
Authentication using finger vein patterns has substantial advantages over other biometrics. Because human vein patterns are hidden inside the skin and tissue, it is hard to forge the vein structure. However, the conventional system using an NIR LED array has two drawbacks. First, direct contact with the LED array raises sanitary concerns. Second, because of the discreteness of the LEDs, non-uniform illumination exists. We propose a non-contact finger vein acquisition system using an NIR laser and a laser line generator lens. The laser line generator lens produces an evenly distributed line of laser light from the focused laser beam. The line laser is aimed along the finger longitudinally. An NIR camera was used for image acquisition. In total, 200 index finger vein images from 20 candidates were collected. The same finger vein pattern extraction algorithm was used to evaluate both sets of images. Images acquired with the proposed non-contact system do not show any non-uniform illumination, in contrast to the conventional system. The matching results are also comparable to those of the conventional system. We developed a non-contact finger vein acquisition system. It can prevent potential cross contamination of skin diseases. The system can also produce uniformly illuminated images, unlike the conventional system. With the benefit of non-contact operation, the proposed system shows almost equivalent performance compared with the conventional system.
Ecology of American martens in coastal northwestern California
Keith M. Slauson; William J. Zielinski; John P. Hayes
2002-01-01
Throughout their geographic distribution, American martens (Martes americana) are closely associated with late-successional mesic coniferous forests having complex physical structure at or near the ground (Bissonette et al. 1989, Buskirk and Ruggiero 1994). Recently, Zielinski et al. (2000b) documented a substantial decline in the distribution of a recognized...
Observation versus classification in supervised category learning.
Levering, Kimery R; Kurtz, Kenneth J
2015-02-01
The traditional supervised classification paradigm encourages learners to acquire only the knowledge needed to predict category membership (a discriminative approach). An alternative that aligns with important aspects of real-world concept formation is learning with a broader focus to acquire knowledge of the internal structure of each category (a generative approach). Our work addresses the impact of a particular component of the traditional classification task: the guess-and-correct cycle. We compare classification learning to a supervised observational learning task in which learners are shown labeled examples but make no classification response. The goals of this work sit at two levels: (1) testing for differences in the nature of the category representations that arise from two basic learning modes; and (2) evaluating the generative/discriminative continuum as a theoretical tool for understanding learning modes and their outcomes. Specifically, we view the guess-and-correct cycle as consistent with a more discriminative approach and therefore expected it to lead to narrower category knowledge. Across two experiments, the observational mode led to greater sensitivity to distributional properties of features and correlations between features. We conclude that a relatively subtle procedural difference in supervised category learning substantially impacts what learners come to know about the categories. The results demonstrate the value of the generative/discriminative continuum as a tool for advancing the psychology of category learning and also provide a valuable constraint for formal models and associated theories.
Thermal emf generated by laser emission along thin metal films
NASA Astrophysics Data System (ADS)
Konov, V. I.; Nikitin, P. I.; Satiukov, D. G.; Uglov, S. A.
1991-07-01
Substantial pulse thermal emf values (about 1.5 V) have been detected along the substrate during the interaction of laser emission with thin metal films (Ni, Ti, and Bi) sprayed on corrugated substrates. Relationships are established between the irradiation conditions and parameters of the generated electrical signals. Possible mechanisms of thermal emf generation and promising applications are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grishkov, V. E.; Uryupin, S. A., E-mail: uryupin@sci.lebedev.ru
2016-09-15
It is shown that the nonlinear currents generated in plasma by a radiation pulse with a frequency exceeding the electron plasma frequency change substantially due to a reduction in the effective electron–ion collision frequency.
Spatial Burnout in Water Reactors with Nonuniform Startup Distributions of Uranium and Boron
NASA Technical Reports Server (NTRS)
Fox, Thomas A.; Bogart, Donald
1955-01-01
Spatial burnout calculations have been made of two types of water-moderated cylindrical reactor using boron as a burnable poison to increase reactor life. Specific reactors studied were a version of the Submarine Advanced Reactor (SAR) and a supercritical water reactor (SCW). Burnout characteristics such as reactivity excursion, neutron-flux and heat-generation distributions, and uranium and boron distributions have been determined for core lives corresponding to a burnup of approximately 7 kilograms of fully enriched uranium. All reactivity calculations have been based on the actual nonuniform distribution of absorbers existing during intervals of core life. Spatial burnout of uranium and boron and spatial build-up of fission products and equilibrium xenon have been considered. Calculations were performed on the NACA nuclear reactor simulator using two-group diffusion theory. The following reactor burnout characteristics have been demonstrated: 1. A significantly lower excursion in reactivity during core life may be obtained by nonuniform rather than uniform startup distribution of uranium. Results for SCW with uranium distributed to provide constant radial heat generation and a core life corresponding to a uranium burnup of 7 kilograms indicated a maximum excursion in reactivity of 2.5 percent. This compares to a maximum excursion of 4.2 percent obtained for the same core life when uranium was uniformly distributed at startup. Boron was incorporated uniformly in these cores at startup. 2. It is possible to approach constant radial heat generation during the life of a cylindrical core by means of nonuniform radial and axial startup distributions of uranium and boron. Results for SCW with a nonuniform radial distribution of uranium to provide constant radial heat generation at startup and with boron for longevity indicate relatively small departures from the initially constant radial heat generation distribution during core life. Results for SAR with a sinusoidal rather than uniform axial distribution of boron indicate significant improvements in the axial heat generation distribution during the greater part of core life. 3. Uranium investments for cylindrical reactors with nonuniform radial uranium distributions which provide constant radial heat generation per unit core volume are somewhat higher than for reactors with uniform uranium concentration at startup. On the other hand, uranium investments for reactors with axial boron distributions which approach constant axial heat generation are somewhat smaller than for reactors with uniform boron distributions at startup.
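For reference, the two-group diffusion equations underlying such simulator calculations take the standard steady-state form below (generic textbook notation, which may differ from that of the original report); $$\phi_1$$ and $$\phi_2$$ are the fast and thermal fluxes, $$\Sigma_{R,1}$$ the fast-group removal cross section, $$\Sigma_{1\to 2}$$ the down-scattering cross section, and $$k$$ the multiplication factor:

$$ -\nabla\cdot\left(D_1\nabla\phi_1\right) + \Sigma_{R,1}\,\phi_1 = \frac{1}{k}\left(\nu\Sigma_{f,1}\,\phi_1 + \nu\Sigma_{f,2}\,\phi_2\right), \qquad -\nabla\cdot\left(D_2\nabla\phi_2\right) + \Sigma_{a,2}\,\phi_2 = \Sigma_{1\to 2}\,\phi_1. $$

Burnable-poison and burnup effects enter through the time dependence of the cross sections as the uranium, boron, fission-product, and xenon concentrations evolve.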
Effect of mutation mechanisms on variant composition and distribution in Caenorhabditis elegans
Wang, Jiou
2017-01-01
Genetic diversity is maintained by continuing generation and removal of variants. While examining over 800,000 DNA variants in wild isolates of Caenorhabditis elegans, we made a discovery that the proportions of variant types are not constant across the C. elegans genome. The variant proportion is defined as the fraction of a specific variant type (e.g. single nucleotide polymorphism (SNP) or indel) within a broader set of variants (e.g. all variants or all non-SNPs). The proportions of most variant types show a correlation with the recombination rate. These correlations can be explained as a result of a concerted action of two mutation mechanisms, which we named Morgan and Sanger mechanisms. The two proposed mechanisms act according to the distinct components of the recombination rate, specifically the genetic and physical distance. Regression analysis was used to explore the characteristics and contributions of the two mutation mechanisms. According to our model, ~20–40% of all mutations in C. elegans wild populations are derived from programmed meiotic double strand breaks, which precede chromosomal crossovers and thus may be the point of origin for the Morgan mechanism. A substantial part of the known correlation between the recombination rate and variant distribution appears to be caused by the mutations generated by the Morgan mechanism. Mathematically integrating the mutation model with background selection model gives a more complete depiction of how the variant landscape is shaped in C. elegans. Similar analysis should be possible in other species by examining the correlation between the recombination rate and variant landscape within the context of our mutation model. PMID:28135268
Potentiostatic control of ionic liquid surface film formation on ZE41 magnesium alloy.
Efthimiadis, Jim; Neil, Wayne C; Bunter, Andrew; Howlett, Patrick C; Hinton, Bruce R W; MacFarlane, Douglas R; Forsyth, Maria
2010-05-01
The generation of potentially corrosion-resistant films on light metal alloys of magnesium has been investigated. Magnesium alloy ZE41 [Mg-Zn-Rare Earth (RE)-Zr, nominal composition approximately 4 wt % Zn, approximately 1.7 wt % RE (Ce), approximately 0.6 wt % Zr, remaining balance Mg] was exposed under potentiostatic control to the ionic liquid trihexyl(tetradecyl)phosphonium diphenylphosphate, denoted [P(6,6,6,14)][DPP]. During exposure to this IL, a bias potential, shifted from open circuit, was applied to the ZE41 surface. Electrochemical impedance spectroscopy (EIS) and chronoamperometry (CA) were used to monitor the evolution of film formation on the metal surface during exposure. The EIS data indicate that, of the four bias potentials examined, applying a potential of -200 mV versus OCP during the exposure period resulted in surface films of greatest resistance. Both EIS measurements and scanning electron microscopy (SEM) imaging indicate that these surfaces are substantially different to those formed without potential bias. Time of flight-secondary ion mass spectrometry (ToF-SIMS) elemental mapping of the films was utilized to ascertain the distribution of the ionic liquid cationic and anionic species relative to the microstructural surface features of ZE41 and indicated a more uniform distribution compared with the surface following exposure in the absence of a bias potential. Immersion of the treated ZE41 specimens in a chloride-contaminated salt solution clearly indicated that the ionic liquid generated surface films offered significant protection against pitting corrosion, although the intermetallics were still insufficiently protected by the IL and hence favored intergranular corrosion processes.
Real time testing of intelligent relays for synchronous distributed generation islanding detection
NASA Astrophysics Data System (ADS)
Zhuang, Davy
As electric power systems continue to grow to meet ever-increasing energy demand, their security, reliability, and sustainability requirements also become more stringent. The deployment of distributed energy resources (DER), including generation and storage, in conventional passive distribution feeders gives rise to integration problems involving protection and unintentional islanding. For safety reasons, distributed generators need to be taken offline when they become disconnected or isolated from the main feeder, as distributed generator islanding may create hazards to utility and third-party personnel and possibly damage the distribution system infrastructure, including the distributed generators themselves. This thesis compares several key performance indicators of a newly developed intelligent islanding detection relay against islanding detection devices currently used by the industry. The intelligent relay employs multivariable analysis and data mining methods to arrive at decision trees that contain both the protection handles and the settings. A test methodology is developed to assess the performance of these intelligent relays in a real-time simulation environment using a generic model based on a real-life distribution feeder. The methodology demonstrates the applicability and potential advantages of the intelligent relay by running a large number of tests reflecting a multitude of system operating conditions. The testing indicates that the intelligent relay often outperforms frequency, voltage, and rate-of-change-of-frequency relays currently used for islanding detection, while respecting the islanding detection time constraints imposed by standing distributed generator interconnection guidelines.
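The decision-tree relay developed in the thesis is not reproduced here, but a conventional frequency and rate-of-change-of-frequency (ROCOF) check of the kind it is benchmarked against can be sketched as follows; the thresholds and the 50 Hz nominal frequency are illustrative assumptions.

```python
def islanding_trip(freq_samples, dt, rocof_limit=1.0, f_min=47.5, f_max=51.5):
    """Conventional frequency/ROCOF islanding check (illustrative thresholds, 50 Hz system).

    Trips when the measured frequency leaves its permitted band or when the
    rate of change of frequency exceeds the limit (in Hz/s).
    """
    f_now, f_prev = freq_samples[-1], freq_samples[-2]
    rocof = abs(f_now - f_prev) / dt
    return (f_now < f_min) or (f_now > f_max) or (rocof > rocof_limit)
```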
1989-08-01
Random variables from the conditional exponential distribution are generated using the inverse transform method: (1) generate U ~ U(0,1); (2) set the variate from the inverted conditional CDF. Random variables from the conditional Weibull distribution are likewise generated using the inverse transform method, and normally distributed variables are generated using a standard normal transformation together with the inverse transform method. (Appendix B/3: distributions supported by the model.)
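Since only fragments of the appendix survive, the sketch below reconstructs the standard inverse transform recipes the passage refers to rather than the report's exact algorithm; the parameter names (lam for the exponential rate, eta and beta for the Weibull scale and shape, gamma for a location shift) are assumptions.

```python
import math
import random

def exponential_inverse_transform(lam):
    """Inverse transform: X = -ln(U)/lam with U ~ Uniform(0, 1) gives X ~ Exponential(lam)."""
    u = 1.0 - random.random()  # in (0, 1], avoids log(0)
    return -math.log(u) / lam

def weibull_inverse_transform(eta, beta, gamma=0.0):
    """Inverse transform for a shifted Weibull: X = gamma + eta * (-ln U)**(1/beta)."""
    u = 1.0 - random.random()
    return gamma + eta * (-math.log(u)) ** (1.0 / beta)
```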
NASA Astrophysics Data System (ADS)
DeAngelis, Anthony M.
Changes in the characteristics of daily precipitation in response to global warming may have serious impacts on human life and property. An analysis of precipitation in climate models is performed to evaluate how well the models simulate the present climate and how precipitation may change in the future. Models participating in phase 3 and 5 of the Coupled Model Intercomparison Project (CMIP3 and CMIP5) have substantial biases in their simulation of heavy precipitation intensity over parts of North America during the 20th century. Despite these biases, the large-scale atmospheric circulation accompanying heavy precipitation is either simulated realistically or the strength of the circulation is overestimated. The biases are not related to the large-scale flow in a simple way, pointing toward the importance of other model deficiencies, such as coarse horizontal resolution and convective parameterizations, for the accurate simulation of intense precipitation. Although the models may not sufficiently simulate the intensity of precipitation, their realistic portrayal of the large-scale circulation suggests that projections of future precipitation may be reliable. In the CMIP5 ensemble, the distribution of daily precipitation is projected to undergo substantial changes in response to future atmospheric warming. The regional distribution of these changes was investigated, revealing that dry days and days with heavy-extreme precipitation are projected to increase at the expense of light-moderate precipitation over much of the middle and low latitudes. Such projections have serious implications for future impacts from flood and drought events. In other places, changes in the daily precipitation distribution are characterized by a shift toward either wetter or drier conditions in the future, with heavy-extreme precipitation projected to increase in all but the driest subtropical subsidence regions. Further analysis shows that increases in heavy precipitation in midlatitudes are largely explained by thermodynamics, including increases in atmospheric water vapor. However, in low latitudes and northern high latitudes, changes in vertical velocity accompanying heavy precipitation are also important. The strength of the large-scale atmospheric circulation is projected to change in accordance with vertical velocity in many places, though the circulation patterns, and therefore physical mechanisms that generate heavy precipitation, may remain the same.
Electron-phonon relaxation and excited electron distribution in gallium nitride
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhukov, V. P.; Donostia International Physics Center; Tyuterev, V. G., E-mail: valtyut00@mail.ru
2016-08-28
We develop a theory of energy relaxation in semiconductors and insulators highly excited by long-acting external irradiation. We derive the equation for the non-equilibrium distribution function of excited electrons. The solution for this function breaks up into the sum of two contributions. The low-energy contribution is concentrated in a narrow range near the bottom of the conduction band. It has the typical form of a Fermi distribution with an effective temperature and chemical potential. The effective temperature and chemical potential in this low-energy term are determined by the intensity of carrier generation, the speed of electron-phonon relaxation, and the rates of inter-band recombination and electron capture on defects. In addition, there is a substantial high-energy correction. This high-energy "tail" largely covers the conduction band. The shape of the high-energy "tail" strongly depends on the rate of electron-phonon relaxation but does not depend on the rates of recombination and trapping. We apply the theory to the calculation of the non-equilibrium distribution of electrons in irradiated GaN. Probabilities of optical excitations from the valence to the conduction band and electron-phonon coupling probabilities in GaN were calculated by density functional perturbation theory. Our calculation of both parts of the distribution function in gallium nitride shows that when the speed of electron-phonon scattering is comparable with the rate of recombination and trapping, the contribution of the non-Fermi "tail" is comparable with that of the low-energy Fermi-like component. The high-energy contribution can therefore substantially affect charge transport in irradiated and highly doped semiconductors.
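Schematically, the decomposition described above can be written as a Fermi-like low-energy part plus a non-Fermi high-energy tail; the notation here is generic rather than the paper's:

$$ f(\varepsilon) \simeq \frac{1}{\exp\left[\left(\varepsilon-\mu_{\mathrm{eff}}\right)/k_{B}T_{\mathrm{eff}}\right]+1} \; + \; \delta f_{\mathrm{tail}}(\varepsilon), $$

where $$T_{\mathrm{eff}}$$ and $$\mu_{\mathrm{eff}}$$ are set by the balance of carrier generation, electron-phonon relaxation, recombination, and trapping, and $$\delta f_{\mathrm{tail}}$$ depends mainly on the electron-phonon relaxation rate.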
Diagnostic and model dependent uncertainty of simulated Tibetan permafrost area
NASA Astrophysics Data System (ADS)
Wang, W.; Rinke, A.; Moore, J. C.; Cui, X.; Ji, D.; Li, Q.; Zhang, N.; Wang, C.; Zhang, S.; Lawrence, D. M.; McGuire, A. D.; Zhang, W.; Delire, C.; Koven, C.; Saito, K.; MacDougall, A.; Burke, E.; Decharme, B.
2015-03-01
We perform a land surface model intercomparison to investigate how the simulation of permafrost area on the Tibetan Plateau (TP) varies between 6 modern stand-alone land surface models (CLM4.5, CoLM, ISBA, JULES, LPJ-GUESS, UVic). We also examine the variability in simulated permafrost area and distribution introduced by 5 different methods of diagnosing permafrost (from modeled monthly ground temperature, mean annual ground and air temperatures, air and surface frost indexes). There is good agreement (99-135 × 10^4 km^2) between the two diagnostic methods based on air temperature, which are also consistent with the best current observation-based estimate of actual permafrost area (101 × 10^4 km^2). However, the uncertainty (1-128 × 10^4 km^2) using the three methods that require simulation of ground temperature is much greater. Moreover, simulated permafrost distribution on the TP is generally only fair to poor for these three methods (diagnosis of permafrost from monthly ground temperature, mean annual ground temperature, and surface frost index), while permafrost distribution using air-temperature-based methods is generally good. Model evaluation at field sites highlights specific problems in process simulations likely related to soil texture specification and snow cover. Models are particularly poor at simulating permafrost distribution using the definition that soil temperature remains at or below 0°C for 24 consecutive months, which requires reliable simulation of both mean annual ground temperatures and the seasonal cycle, and hence is relatively demanding. Although models can produce better permafrost maps using mean annual ground temperature and surface frost index, analysis of simulated soil temperature profiles reveals substantial biases. The current generation of land surface models needs to reduce biases in simulated soil temperature profiles before reliable contemporary permafrost maps and predictions of changes in permafrost distribution can be made for the Tibetan Plateau.
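One widely used air frost index is the Nelson-Outcalt frost number, F = sqrt(FDD) / (sqrt(FDD) + sqrt(TDD)), with permafrost commonly diagnosed where F exceeds roughly 0.5; whether the intercomparison uses exactly this formulation is an assumption here, and the sketch below only illustrates how such a diagnostic could be computed from monthly mean air temperatures.

```python
import numpy as np

def air_frost_number(monthly_t_celsius, days_per_month=30.4):
    """Frost number from 12 monthly mean air temperatures (illustrative form).

    FDD and TDD are freezing and thawing degree-day totals; values above
    about 0.5 are commonly taken to indicate permafrost.
    """
    t = np.asarray(monthly_t_celsius, dtype=float)
    fdd = np.sum(np.clip(-t, 0.0, None)) * days_per_month  # freezing degree-days
    tdd = np.sum(np.clip(t, 0.0, None)) * days_per_month   # thawing degree-days
    if fdd + tdd == 0.0:
        return 0.0
    return float(np.sqrt(fdd) / (np.sqrt(fdd) + np.sqrt(tdd)))
```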
NASA Astrophysics Data System (ADS)
Lee, H.; Fridlind, A. M.; Ackerman, A. S.; Kollias, P.
2017-12-01
Cloud radar Doppler spectra provide rich information for evaluating the fidelity of particle size distributions from cloud models. The intrinsic simplifications of bulk microphysics schemes generally preclude the generation of plausible Doppler spectra, unlike bin microphysics schemes, which develop particle size distributions more organically at substantial computational expense. However, bin microphysics schemes face the difficulty of numerical diffusion leading to overly rapid large drop formation, particularly while solving the stochastic collection equation (SCE). Because such numerical diffusion can cause an even greater overestimation of radar reflectivity, an accurate method for solving the SCE is essential for bin microphysics schemes to accurately simulate Doppler spectra. While several methods have been proposed to solve the SCE, here we examine those of Berry and Reinhardt (1974, BR74), Jacobson et al. (1994, J94), and Bott (2000, B00). Using a simple box model to simulate drop size distribution evolution during precipitation formation with a realistic kernel, it is shown that each method yields a converged solution as the resolution of the drop size grid increases. However, the BR74 and B00 methods yield nearly identical size distributions in time, whereas the J94 method produces consistently larger drops throughout the simulation. In contrast to an earlier study, the performance of the B00 method is found to be satisfactory; it converges at relatively low resolution and long time steps, and its computational efficiency is the best among the three methods considered here. Finally, a series of idealized stratocumulus large-eddy simulations are performed using the J94 and B00 methods. The reflectivity size distributions and Doppler spectra obtained from the different SCE solution methods are presented and compared with observations.
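The stochastic collection equation solved by the three methods can be written in its standard form for the drop number density n(m, t) with collection kernel K; this generic form is given for orientation and follows textbook notation rather than any of the cited papers:

$$ \frac{\partial n(m,t)}{\partial t} \;=\; \frac{1}{2}\int_{0}^{m} K\!\left(m',\,m-m'\right)\,n(m',t)\,n(m-m',t)\,dm' \;-\; n(m,t)\int_{0}^{\infty} K\!\left(m,m'\right)\,n(m',t)\,dm', $$

where the first term is the gain from coalescence of smaller drops into mass m and the second is the loss of drops of mass m through collection by all other drops.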
Saltré, Frédérik; Duputié, Anne; Gaucherel, Cédric; Chuine, Isabelle
2015-02-01
Recent efforts to incorporate migration processes into species distribution models (SDMs) are allowing assessments of whether species are likely to be able to track their future climate optimum and the possible causes of failing to do so. Here, we projected the range shift of European beech over the 21st century using a process-based SDM coupled to a phenomenological migration model accounting for population dynamics, according to two climate change scenarios and one land use change scenario. Our model predicts that the climatically suitable habitat for European beech will shift north-eastward and upward mainly because (i) higher temperature and precipitation, at the northern range margins, will increase survival and fruit maturation success, while (ii) lower precipitations and higher winter temperature, at the southern range margins, will increase drought mortality and prevent bud dormancy breaking. Beech colonization rate of newly climatically suitable habitats in 2100 is projected to be very low (1-2% of the newly suitable habitats colonised). Unexpectedly, the projected realized contraction rate was higher than the projected potential contraction rate. As a result, the realized distribution of beech is projected to strongly contract by 2100 (by 36-61%) mainly due to a substantial increase in climate variability after 2050, which generates local extinctions, even at the core of the distribution, the frequency of which prevents beech recolonization during more favourable years. Although European beech will be able to persist in some parts of the trailing edge of its distribution, the combined effects of climate and land use changes, limited migration ability, and a slow life-history are likely to increase its threat status in the near future. © 2014 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Li, Jinze; Qu, Zhi; He, Xiaoyang; Jin, Xiaoming; Li, Tie; Wang, Mingkai; Han, Qiu; Gao, Ziji; Jiang, Feng
2018-02-01
Large-scale integration of distributed power can relieve current environmental pressures while, at the same time, increasing the complexity and uncertainty of the overall distribution system. Rational planning of distributed power can effectively improve the system voltage level. To this end, the specific impact of the integration of typical distributed power sources on distribution network power quality was analyzed and, starting from improvements to the learning factors and the inertia weight, an improved particle swarm optimization algorithm (IPSO) was proposed to solve distributed generation planning for the distribution network and to improve the local and global search performance of the algorithm. Results show that the proposed method can effectively reduce the system network loss and improve the economic performance of system operation with distributed generation.
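The abstract does not give the exact modified learning-factor and inertia-weight formulas, so the sketch below shows only a standard PSO velocity and position update with a linearly decreasing inertia weight, the common baseline such IPSO variants start from; all parameter values are illustrative.

```python
import numpy as np

def pso_step(x, v, pbest, gbest, it, max_it,
             c1=2.0, c2=2.0, w_max=0.9, w_min=0.4):
    """One particle swarm update with a linearly decreasing inertia weight."""
    w = w_max - (w_max - w_min) * it / max_it          # inertia weight decays over iterations
    r1 = np.random.rand(*x.shape)
    r2 = np.random.rand(*x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v, v
```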
Horowitz, Arthur J.
2013-01-01
Hurricane Irene and Tropical Storm Lee, both of which made landfall in the U.S. between late August and early September 2011, generated record or near record water discharges in 41 coastal rivers between the North Carolina/South Carolina border and the U.S./Canadian border. Despite the discharge of substantial amounts of suspended sediment from many of these rivers, as well as the probable influx of substantial amounts of eroded material from the surrounding basins, the geochemical effects on the <63-µm fractions of the bed sediments appear relatively limited [<20% of the constituents determined (256 out of 1394)]. Based on surface area measurements, this lack of change occurred despite substantial alterations in both the grain size distribution and the composition of the bed sediments. The sediment-associated constituents which display both concentration increases and decreases include: total sulfur (TS), Hg, Ag, total organic carbon (TOC), total nitrogen (TN), Zn, Se, Co, Cu, Pb, As, Cr, and total carbon (TC). As a group, these constituents tend to be associated either with urbanization/elevated population densities and/or wastewater/solid sludge. The limited number of significant sediment-associated chemical changes that were detected probably resulted from two potential processes: (1) the flushing of in-stream land-use affected sediments that were replaced by baseline material more representative of local geology and/or soils (declining concentrations), and/or (2) the inclusion of more heavily affected material as a result of urban nonpoint-source runoff and/or releases from flooded treatment facilities (increasing concentrations). Published 2013. This article is a U.S. Government work and is in the public domain in the USA.
Using classical population genetics tools with heterochroneous data: time matters!
Depaulis, Frantz; Orlando, Ludovic; Hänni, Catherine
2009-01-01
New polymorphism datasets from heterochroneous data have arisen thanks to recent advances in experimental and microbial molecular evolution, and the sequencing of ancient DNA (aDNA). However, classical tools for population genetics analyses do not take into account heterochrony between subsets, despite potential bias on neutrality and population structure tests. Here, we characterize the extent of such possible biases using serial coalescent simulations. We first use a coalescent framework to generate datasets assuming no or different levels of heterochrony and contrast most classical population genetic statistics. We show that even weak levels of heterochrony (approximately 10% of the average depth of a standard population tree) affect the distribution of polymorphism substantially, leading to overestimation of the polymorphism level theta, to star-like trees, with an excess of rare mutations and a deficit of linkage disequilibrium, which are the hallmark of, e.g., population expansion (possibly after a drastic bottleneck). Substantial departures of the tests are detected in the opposite direction for more heterochroneous and equilibrated datasets, with balanced trees mimicking, in particular, population contraction, balancing selection, and population differentiation. We therefore introduce simple corrections to classical estimators of polymorphism and of the genetic distance between populations, in order to remove heterochrony-driven bias. Finally, we show that these effects do occur on real aDNA datasets, taking advantage of the currently available sequence data for Cave Bears (Ursus spelaeus), for which large mtDNA haplotypes have been reported over a substantial time period (22-130 thousand years ago (KYA)). Considering serial sampling changed the conclusion of several tests, indicating that neglecting heterochrony could provide significant support for a false past history of populations and inappropriate conservation decisions. We therefore argue for systematically considering heterochroneous models when analyzing heterochroneous samples covering a large time scale.
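For concreteness, the classical (isochronous) Watterson estimator whose heterochrony-driven bias the study quantifies is sketched below; the correction terms introduced in the paper are not reproduced here.

```python
def watterson_theta(num_segregating_sites, num_sequences):
    """Classical Watterson estimator theta_W = S / a_n (assumes isochronous sampling)."""
    a_n = sum(1.0 / i for i in range(1, num_sequences))
    return num_segregating_sites / a_n
```

As the abstract notes, heterochroneous sampling inflates the observed polymorphism relative to the isochronous expectation, so this uncorrected estimator overestimates theta when applied naively to serially sampled data.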
Indigenous Health and Socioeconomic Status in India
Subramanian, S. V; Smith, George Davey; Subramanyam, Malavika
2006-01-01
Background Systematic evidence on the patterns of health deprivation among indigenous peoples remains scant in developing countries. We investigate the inequalities in mortality and substance use between indigenous and non-indigenous, and within indigenous, groups in India, with the aim of establishing the relative contribution of socioeconomic status in generating health inequalities. Methods and Findings Cross-sectional population-based data were obtained from the 1998–1999 Indian National Family Health Survey. Mortality, smoking, chewing tobacco use, and alcohol use were four separate binary outcomes in our analysis. Indigenous status in the context of India was operationalized through the Indian government category of scheduled tribes, or Adivasis, which refers to people living in tribal communities characterized by distinctive social, cultural, historical, and geographical circumstances. Indigenous groups experience excess mortality compared to non-indigenous groups, even after adjusting for economic standard of living (odds ratio 1.22; 95% confidence interval 1.13–1.30). They are also more likely to smoke and (especially) drink alcohol, but the prevalence of chewing tobacco is not substantially different between indigenous and non-indigenous groups. There are substantial health variations within indigenous groups, such that indigenous peoples in the bottom quintile of the indigenous-peoples-specific standard of living index have an odds ratio for mortality of 1.61 (95% confidence interval 1.33–1.95) compared to indigenous peoples in the top fifth of the wealth distribution. Smoking, drinking alcohol, and chewing tobacco also show graded associations with socioeconomic status within indigenous groups. Conclusions Socioeconomic status differentials substantially account for the health inequalities between indigenous and non-indigenous groups in India. However, a strong socioeconomic gradient in health is also evident within indigenous populations, reiterating the overall importance of socioeconomic status for reducing population-level health disparities, regardless of indigeneity. PMID:17076556
Analysis and Application of Microgrids
NASA Astrophysics Data System (ADS)
Yue, Lu
New trends of generating electricity locally and utilizing non-conventional or renewable energy sources have attracted increasing interest due to the gradual depletion of conventional fossil fuel energy sources. The new type of power generation is called Distributed Generation (DG), and the energy sources utilized by Distributed Generation are termed Distributed Energy Sources (DERs). With DGs embedded in them, distribution networks evolve from passive into active networks that enable bidirectional power flows. By further incorporating flexible and intelligent controllers and employing future technologies, active distribution networks will evolve into Microgrids. A Microgrid is a small-scale, low-voltage Combined Heat and Power (CHP) supply network designed to supply electrical and heat loads for a small community. To further implement Microgrids, a sophisticated Microgrid Management System must be integrated. However, because a Microgrid integrates multiple DERs and is likely to be deregulated, the ability to perform real-time OPF and economic dispatch over a fast, advanced communication network is necessary. In this thesis, problems such as power system modelling, power flow solution, and power system optimization are first studied. Then, Distributed Generation and Microgrids are reviewed, including a comprehensive review of current distributed generation technologies and Microgrid Management Systems. Finally, a computer-based AC optimization method that minimizes the total transmission loss and generation cost of a Microgrid is proposed, along with a wireless communication scheme based on synchronized Code Division Multiple Access (sCDMA). The algorithm is tested on a 6-bus and a 9-bus power system.
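To make the economic dispatch idea concrete, here is a minimal lossless dispatch sketch using quadratic generator costs and the equal-incremental-cost condition; it is not the thesis's AC optimization method, and the generator cost data, limits, and demand below are assumed for illustration.

# Minimal economic dispatch sketch with quadratic generator costs
# C_i(P) = a_i + b_i*P + c_i*P^2, ignoring network losses and the AC power
# flow constraints used in the thesis. All coefficients are illustrative.
from scipy.optimize import brentq

gens = [  # (b, c, Pmin, Pmax) for each DER, in $/MWh, $/MWh^2, MW, MW
    (20.0, 0.05, 0.0, 5.0),
    (25.0, 0.10, 0.0, 3.0),
    (30.0, 0.02, 0.0, 4.0),
]
demand = 7.0  # MW

def output_at_lambda(lam):
    """Each unit runs where marginal cost b + 2cP equals lambda, clipped to its limits."""
    return [min(max((lam - b) / (2 * c), pmin), pmax) for b, c, pmin, pmax in gens]

def mismatch(lam):
    return sum(output_at_lambda(lam)) - demand

lam = brentq(mismatch, 0.0, 200.0)      # find the lambda that balances supply and demand
dispatch = output_at_lambda(lam)
print(f"lambda = {lam:.2f} $/MWh, dispatch = {[round(p, 2) for p in dispatch]}")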
Competition and Cooperation of Distributed Generation and Power System
NASA Astrophysics Data System (ADS)
Miyake, Masatoshi; Nanahara, Toshiya
Advances in distributed generation technologies, together with the deregulation of the electric power industry, can lead to a massive introduction of distributed generation. Since most distributed generation will be interconnected to the power system, coordination and competition between distributed generators and large-scale power sources will be a vital issue in realizing a more desirable energy system in the future. This paper analyzes competition between electric utilities and cogenerators from the viewpoints of economic and energy efficiency, based on simulation results for an energy system that includes a cogeneration system. First, we examine the best-response correspondences of an electric utility and a cogenerator using a noncooperative game approach and obtain a Nash equilibrium point. Second, we examine, through global optimization, the optimum strategy that attains the highest social surplus and the highest energy efficiency.
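The best-response iteration underlying such a noncooperative analysis can be illustrated with a generic two-player quantity game; this is not the paper's utility/cogenerator model, and the payoff parameters below are arbitrary.

# Illustrative best-response iteration for a two-player quantity game.
# Iterating best responses converges here to the symmetric Nash equilibrium.
def best_response(q_other, a=100.0, b=1.0, c=10.0):
    """Profit q*(a - b*(q + q_other)) - c*q is maximized at (a - c - b*q_other)/(2b)."""
    return max((a - c - b * q_other) / (2 * b), 0.0)

q1, q2 = 0.0, 0.0
for _ in range(100):                       # iterate until the pair stops moving
    q1_new, q2_new = best_response(q2), best_response(q1)
    if abs(q1_new - q1) < 1e-9 and abs(q2_new - q2) < 1e-9:
        break
    q1, q2 = q1_new, q2_new
print(q1, q2)  # converges to (a - c) / (3b) = 30 for each player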
Data Transparency | Distributed Generation Interconnection Collaborative
Data quality and availability are increasingly vital for reducing the costs of distributed generation interconnection and increasing accountability for utility application processing. Related report (NREL, HECO, TSRG): Improving Data Transparency for the Distributed PV Interconnection Process.
Chung, Wei-Chun; Chen, Chien-Chih; Ho, Jan-Ming; Lin, Chung-Yen; Hsu, Wen-Lian; Wang, Yu-Chun; Lee, D. T.; Lai, Feipei; Huang, Chih-Wei; Chang, Yu-Jung
2014-01-01
Background Explosive growth of next-generation sequencing data has resulted in ultra-large-scale data sets and ensuing computational problems. Cloud computing provides an on-demand and scalable environment for large-scale data analysis. Using a MapReduce framework, data and workload can be distributed via a network to computers in the cloud to substantially reduce computational latency. Hadoop/MapReduce has been successfully adopted in bioinformatics for genome assembly, mapping reads to genomes, and finding single nucleotide polymorphisms. Major cloud providers offer Hadoop cloud services to their users. However, it remains technically challenging to deploy a Hadoop cloud for those who prefer to run MapReduce programs in a cluster without built-in Hadoop/MapReduce. Results We present CloudDOE, a platform-independent software package implemented in Java. CloudDOE encapsulates technical details behind a user-friendly graphical interface, thus liberating scientists from having to perform complicated operational procedures. Users are guided through the user interface to deploy a Hadoop cloud within in-house computing environments and to run applications specifically targeted for bioinformatics, including CloudBurst, CloudBrush, and CloudRS. One may also use CloudDOE on top of a public cloud. CloudDOE consists of three wizards, i.e., Deploy, Operate, and Extend wizards. Deploy wizard is designed to aid the system administrator to deploy a Hadoop cloud. It installs Java runtime environment version 1.6 and Hadoop version 0.20.203, and initiates the service automatically. Operate wizard allows the user to run a MapReduce application on the dashboard list. To extend the dashboard list, the administrator may install a new MapReduce application using Extend wizard. Conclusions CloudDOE is a user-friendly tool for deploying a Hadoop cloud. Its smart wizards substantially reduce the complexity and costs of deployment, execution, enhancement, and management. Interested users may collaborate to improve the source code of CloudDOE to further incorporate more MapReduce bioinformatics tools into CloudDOE and support next-generation big data open source tools, e.g., Hadoop BigTop and Spark. Availability: CloudDOE is distributed under Apache License 2.0 and is freely available at http://clouddoe.iis.sinica.edu.tw/. PMID:24897343
Hathaway, R.M.; McNellis, J.M.
1989-01-01
Investigating the occurrence, quantity, quality, distribution, and movement of the Nation's water resources is the principal mission of the U.S. Geological Survey's Water Resources Division. Reports of these investigations are published and available to the public. To accomplish this mission, the Division requires substantial computer technology to process, store, and analyze data from more than 57,000 hydrologic sites. The Division's computer resources are organized through the Distributed Information System Program Office that manages the nationwide network of computers. The contract that provides the major computer components for the Water Resources Division's Distributed Information System expires in 1991. Five work groups were organized to collect the information needed to procure a new generation of computer systems for the U.S. Geological Survey, Water Resources Division. Each group was assigned a major Division activity and asked to describe its functional requirements of computer systems for the next decade. The work groups and major activities are: (1) hydrologic information; (2) hydrologic applications; (3) geographic information systems; (4) reports and electronic publishing; and (5) administrative. The work groups identified 42 functions and described their functional requirements for 1988, 1992, and 1997. A few new functions, such as Decision Support Systems and Executive Information Systems, were identified, but most are the same as performed today. Although the number of functions will remain about the same, steady growth in the size, complexity, and frequency of many functions is predicted for the next decade. No compensating increase in the Division's staff is anticipated during this period. To handle the increased workload and perform these functions, new approaches will be developed that use advanced computer technology. The advanced technology is required in a unified, tightly coupled system that will support all functions simultaneously. The new approaches and expanded use of computers will require substantial increases in the quantity and sophistication of the Division's computer resources. The requirements presented in this report will be used to develop technical specifications that describe the computer resources needed during the 1990's. (USGS)
System and method for islanding detection and prevention in distributed generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhowmik, Shibashis; Mazhari, Iman; Parkhideh, Babak
Various examples are directed to systems and methods for detecting an islanding condition at an inverter configured to couple a distributed generation system to an electrical grid network. A controller may determine a command frequency and a command frequency variation. The controller may determine that the command frequency variation indicates a potential islanding condition and send to the inverter an instruction to disconnect the distributed generation system from the electrical grid network. When the distributed generation system is disconnected from the electrical grid network, the controller may determine whether the grid network is valid.
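A highly simplified sketch of the frequency-variation idea is shown below; the window length, threshold, and decision rule are hypothetical illustrations of the general concept, not the patented method.

# Simplified frequency-variation monitor for potential islanding detection.
# All numeric settings (window, threshold) are assumed values for illustration.
from collections import deque
from statistics import pstdev

class IslandingMonitor:
    def __init__(self, window=20, max_std_hz=0.5):
        self.samples = deque(maxlen=window)  # recent command-frequency samples (Hz)
        self.max_std_hz = max_std_hz         # variation threshold (assumed value)

    def update(self, command_frequency_hz: float) -> bool:
        """Return True if recent variation suggests a potential islanding condition."""
        self.samples.append(command_frequency_hz)
        if len(self.samples) < self.samples.maxlen:
            return False                     # not enough history yet
        return pstdev(self.samples) > self.max_std_hz

monitor = IslandingMonitor()
for f in [60.0, 60.01, 59.99, 60.02] * 5 + [61.5, 58.2, 62.0, 57.5, 61.8]:
    if monitor.update(f):
        print("potential islanding condition: instruct inverter to disconnect")
        break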
NASA Astrophysics Data System (ADS)
Ye, Q.; Robinson, E. S.; Mahfouz, N.; Sullivan, R. C.; Donahue, N. M.
2016-12-01
Secondary organic aerosols (SOA) dominate the mass of fine particles in the atmosphere. Their formation involves both oxidation of volatile organics from various sources that produce products with uncertain volatilities, and diffusion of these products into the condensed phase. Therefore, constraining the volatility distribution and diffusion timescales of the constituents of SOA is important for predicting the size, concentration, and composition of SOA, as well as how these properties evolve in the atmosphere. In this work, we demonstrate how carefully designed laboratory isothermal dilution experiments in smog chambers can shed light on the volatility distribution and any diffusion barriers of common types of SOA over time scales relevant to atmospheric transport and diurnal cycling. We choose SOA made from mono-terpenes (alpha-pinene and limonene) and toluene to represent biogenic and anthropogenic SOA. We examine how moisture content alters the evaporation behavior of SOA by varying relative humidity during SOA generation and during the dilution process. This provides insight into whether diffusion in the condensed phase is rate limiting in reaching gas/particle equilibrium of semi-volatile organic compounds. Our preliminary results show that SOA from alpha-pinene evaporates continuously over several hours of experiments, and there are no substantial discernible differences across a wide range of chamber humidities. SOA from toluene oxidation shows slower evaporation. We fit these experimental data using absorptive partitioning theory and a particle dynamics model to obtain volatility distributions and to predict particle size evolution. Ultimately, this will help us improve the representation of SOA in large-scale chemical transport models.
Harnett, Mark T.; Magee, Jeffrey C.
2015-01-01
The apical tuft is the most remote area of the dendritic tree of neocortical pyramidal neurons. Despite its distal location, the apical dendritic tuft of layer 5 pyramidal neurons receives substantial excitatory synaptic drive and actively processes corticocortical input during behavior. The properties of the voltage-activated ion channels that regulate synaptic integration in tuft dendrites have, however, not been thoroughly investigated. Here, we use electrophysiological and optical approaches to examine the subcellular distribution and function of hyperpolarization-activated cyclic nucleotide-gated nonselective cation (HCN) channels in rat layer 5B pyramidal neurons. Outside-out patch recordings demonstrated that the amplitude and properties of ensemble HCN channel activity were uniform in patches excised from distal apical dendritic trunk and tuft sites. Simultaneous apical dendritic tuft and trunk whole-cell current-clamp recordings revealed that the pharmacological blockade of HCN channels decreased voltage compartmentalization and enhanced the generation and spread of apical dendritic tuft and trunk regenerative activity. Furthermore, multisite two-photon glutamate uncaging demonstrated that HCN channels control the amplitude and duration of synaptically evoked regenerative activity in the distal apical dendritic tuft. In contrast, at proximal apical dendritic trunk and somatic recording sites, the blockade of HCN channels decreased excitability. Dynamic-clamp experiments revealed that these compartment-specific actions of HCN channels were heavily influenced by the local and distributed impact of the high density of HCN channels in the distal apical dendritic arbor. The properties and subcellular distribution pattern of HCN channels are therefore tuned to regulate the interaction between integration compartments in layer 5B pyramidal neurons. PMID:25609619
This report is a generic verification protocol by which EPA’s Environmental Technology Verification program tests newly developed equipment for distributed generation of electric power, usually micro-turbine generators and internal combustion engine generators. The protocol will ...
Content-based histopathology image retrieval using CometCloud.
Qi, Xin; Wang, Daihou; Rodero, Ivan; Diaz-Montes, Javier; Gensure, Rebekah H; Xing, Fuyong; Zhong, Hua; Goodell, Lauri; Parashar, Manish; Foran, David J; Yang, Lin
2014-08-26
The development of digital imaging technology is creating extraordinary levels of accuracy that provide support for improved reliability in different aspects of image analysis, such as content-based image retrieval, image segmentation, and classification. This has dramatically increased the volume and rate at which data are generated. Together these facts make querying and sharing non-trivial and render centralized solutions infeasible. Moreover, in many cases these data are distributed across multiple institutions and must be shared, requiring decentralized solutions. In this context, a new generation of data/information driven applications must be developed to take advantage of the national advanced cyber-infrastructure (ACI), which enables investigators to seamlessly and securely interact with information/data distributed across geographically disparate resources. This paper presents the development and evaluation of a novel content-based image retrieval (CBIR) framework. The methods were tested extensively using both peripheral blood smears and renal glomeruli specimens. The datasets and performance were evaluated by two pathologists to determine the concordance. The CBIR algorithms that were developed can reliably retrieve the candidate image patches exhibiting intensity and morphological characteristics that are most similar to a given query image. The methods described in this paper are able to reliably discriminate among subtle staining differences and spatial pattern distributions. By integrating a newly developed dual-similarity relevance feedback module into the CBIR framework, the CBIR results were improved substantially. By aggregating the computational power of high performance computing (HPC) and cloud resources, we demonstrated that the method can be successfully executed in minutes on the Cloud compared to weeks using standard computers. In this paper, we present a set of newly developed CBIR algorithms and validate them using two different pathology applications, which are regularly evaluated in the practice of pathology. Comparative experimental results demonstrate excellent performance throughout the course of a set of systematic studies. Additionally, we present and evaluate a framework to enable the execution of these algorithms across distributed resources. We show how parallel searching of content-wise similar images in the dataset significantly reduces the overall computational time to ensure the practical utility of the proposed CBIR algorithms.
Interactions Between Land Use, Climate and Hydropower in Scotland
NASA Astrophysics Data System (ADS)
Sample, J.
2014-12-01
To promote the transition towards a low carbon economy, the Scottish Government has adopted ambitious energy targets, including generating all electricity from renewable sources by 2020. To achieve this, continued investment will be required across a range of sustainable technologies. Hydropower has a long history in Scotland and the present-day operational capacity of ~1.5 GW makes a substantial contribution to the national energy budget. In addition, there remains potential for ~500 MW of further development, mostly in the form of small to medium size run-of-river schemes. Climate change is expected to lead to an intensification of the global hydrological cycle, leading to changes in both the magnitude and seasonality of river flows. There may also be indirect effects, such as changing land use, enhanced evapotranspiration rates and an increased demand for irrigation, all of which could affect the water available for energy generation. Preliminary assessments of hydropower commonly use flow duration curves (FDCs) to estimate the power generation potential at proposed new sites. In this study, we use spatially distributed modelling to generate daily and monthly FDCs for a range of Scottish catchments using a variety of future land use and climate change scenarios. These are then used to assess Scotland's future hydropower potential under different flow regimes. The results are spatially variable and include large uncertainties, but some consistent patterns emerge. Many locations are predicted to experience enhanced seasonality, with lower power generation potential in the summer months and greater potential during the autumn and winter. Some sites may require infrastructural changes in order to continue operating at optimum efficiency. We discuss the implications and limitations of our results, and highlight design and adaptation options for maximising the resilience of hydropower installations under changing future flow patterns.
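A flow duration curve of the kind used in such assessments can be built directly from a daily flow series, as in the minimal sketch below; the synthetic flow series and the Weibull plotting position are assumptions for illustration.

# Minimal flow duration curve (FDC): sort flows in descending order and assign
# each an exceedance probability. The synthetic series is illustrative only.
import numpy as np

def flow_duration_curve(daily_flows):
    """Return (exceedance_prob, sorted_flows) for a 1-D array of daily flows."""
    q = np.sort(np.asarray(daily_flows, dtype=float))[::-1]   # highest flow first
    ranks = np.arange(1, len(q) + 1)
    exceedance = ranks / (len(q) + 1.0)                       # Weibull plotting position
    return exceedance, q

rng = np.random.default_rng(0)
flows = rng.lognormal(mean=1.0, sigma=0.8, size=365)          # synthetic daily flows (m^3/s)
p, q = flow_duration_curve(flows)
q95 = np.interp(0.95, p, q)                                   # flow exceeded 95% of the time
print(f"Q95 = {q95:.2f} m^3/s")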
NASA Astrophysics Data System (ADS)
Pomeroy, J. W.; Carey, S. K.; Granger, R. J.; Hedstrom, N. R.; Janowicz, R.; Pietroniro, A.; Quinton, W. L.
2002-12-01
The supply of water to large northern catchments such as the Mackenzie and Yukon Rivers is dominated by snowmelt runoff from first order mountain catchments. In order to understand the timing, peak and duration of the snowmelt freshet at larger scale it is important to appreciate the spatial and temporal variability of snowmelt and runoff processes at the source. For this reason a comprehensive hydrology study of a Yukon River headwaters catchment, Wolf Creek Research Basin, near Whitehorse, has focussed on the spatial variability of snow ablation and snowmelt runoff generation and the consequences for the water balance in a mountain tundra zone. In northern mountain tundra, surface energetics vary with receipt of solar radiation, shrub vegetation cover and initial snow accumulation. Therefore the timing of snowmelt is controlled by aspect, in that south facing slopes become snow-free 4-5 weeks before the north facing. Runoff generation differs widely between the slopes; there is normally no spring runoff generated from the south facing slope as all meltwater evaporates or infiltrates. On the north facing slope, snowmelt provides substantial runoff to hillside macropores which rapidly route water to the stream channel. Macropore distribution is associated with organic terrain and discontinuous permafrost, which in turn result from the summer surface energetics. Therefore the influence of small-scale snow redistribution and energetics as controlled by topography must be accounted for when calculating contributing areas to larger scale catchments, and estimating the effectiveness of snowfall in generating streamflow. This concept is quite distinct from the drainage controlled contributing area that has been found useful in temperate-zone hydrology.
White-tailed deer distribution in response to patch burning on rangeland
M. G. Meek; S. M. Cooper; M. K. Owens; R. M. Cooper; A. L. Wappel
2008-01-01
Management of rangelands has changed substantially over the past few decades; today there is greater emphasis on wildlife management and increased interest in using natural disturbances such as fire to manage rangeland plant and animal communities. To determine the effect of prescribed fires on the distribution of white-tailed deer (Odocoileus virginianus...
Observations and simulations of microplastic marine debris in the ocean surface boundary layer
NASA Astrophysics Data System (ADS)
Kukulka, T.; Brunner, K.; Proskurowski, G. K.; Lavender Law, K. L.
2016-02-01
Motivated by observations of buoyant microplastic marine debris (MPMD) in the ocean surface boundary layer (OSBL), this study applies a large eddy simulation model and a parametric one-dimensional column model to examine vertical distributions of MPMD. MPMD is widely distributed in vast regions of the subtropical gyres and has emerged as a major open ocean pollutant whose distribution is subject to upper ocean turbulence. The models capture wind-driven turbulence, Langmuir turbulence (LT), and enhanced turbulent kinetic energy input due to breaking waves (BW). Model results are only consistent with MPMD observations if LT effects are included. Neither BW nor shear-driven turbulence is capable of deeply submerging MPMD, suggesting that the observed vertical MPMD distributions are a characteristic signature of wave-driven LT. Thus, this study demonstrates that LT substantially increases turbulent transport in the OSBL, resulting in deep submergence of buoyant tracers. The parametric model is applied to eleven years of observations in the North Atlantic and North Pacific subtropical gyres to show that surface measurements substantially underestimate MPMD concentrations by a factor of three to thirteen.
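The surface-measurement bias noted above can be illustrated with a simple column-model assumption: if the vertical concentration profile is exponential with e-folding depth L, a net sampling only the top d metres captures the fraction 1 - exp(-d/L) of the water-column load. The draft and e-folding depths below are illustrative, not values from the study.

# Fraction of the depth-integrated MPMD load captured by a surface net under
# an assumed exponential profile C(z) = C0 * exp(-z / L). Deeper mixing
# (larger L, e.g. stronger wind/wave forcing) increases the undercount.
import math

def fraction_captured(net_depth_m: float, efolding_depth_m: float) -> float:
    """Fraction of the water-column load sampled by a net of draft net_depth_m."""
    return 1.0 - math.exp(-net_depth_m / efolding_depth_m)

d = 0.25                        # typical surface-net draft in metres (assumed)
for L in (0.1, 0.5, 1.0, 2.0):  # assumed e-folding depths in metres
    frac = fraction_captured(d, L)
    print(f"L = {L:4.1f} m: net captures {frac:5.1%}, underestimate factor {1/frac:4.1f}")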
The place of solar power: an economic analysis of concentrated and distributed solar power.
Banoni, Vanessa Arellano; Arnone, Aldo; Fondeur, Maria; Hodge, Annabel; Offner, J Patrick; Phillips, Jordan K
2012-04-23
This paper examines the cost and benefits, both financial and environmental, of two leading forms of solar power generation, grid-tied photovoltaic cells and Dish Stirling Systems, using conventional carbon-based fuel as a benchmark. First we define how these solar technologies will be implemented and why. Then we delineate a model city and its characteristics, which will be used to test the two methods of solar-powered electric distribution. Then we set the constraining assumptions for each technology, which serve as parameters for our calculations. Finally, we calculate the present value of the total cost of conventional energy needed to power our model city and use this as a benchmark when analyzing both solar models' benefits and costs. The preeminent form of distributed electricity generation, grid-tied photovoltaic cells under net-metering, allows individual homeowners a degree of electric self-sufficiency while often turning a profit. However, substantial subsidies are required to make the investment sensible. Meanwhile, large dish Stirling engine installations have a significantly higher potential rate of return, but face a number of pragmatic limitations. This paper concludes that both technologies are a sensible investment for consumers, but given that the dish Stirling consumer receives 6.37 dollars per watt while the home photovoltaic system consumer receives between 0.9 and 1.70 dollars per watt, the former appears to be a superior option. Despite the large investment, this paper deduces that it is far more feasible to get a few strong investors to develop a solar farm of this magnitude than to get 150,000 households to install photovoltaic arrays on their roofs. Potential implications of the solar farm construction include an environmental impact given the amount of land required for this endeavour. However, the positive aspects, which include a large CO2 emission reduction aggregated over the lifespan of the farm, outweigh any minor concerns or potential externalities.
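The present-value benchmarking described above can be sketched as follows; every number here (discount rate, installed cost, annual output, electricity price, lifetime) is an illustrative placeholder rather than a figure from the paper's model city.

# Minimal present-value comparison in the spirit of the paper's benchmarking
# approach. All inputs below are assumed for illustration.
def present_value(annual_cashflow: float, rate: float, years: int) -> float:
    """PV of a constant annual cash flow received at the end of each year."""
    return sum(annual_cashflow / (1 + rate) ** t for t in range(1, years + 1))

rate, years = 0.05, 25
capex_per_watt = {"grid-tied PV": 4.0, "dish Stirling": 3.0}        # $/W installed (assumed)
annual_kwh_per_watt = {"grid-tied PV": 1.6, "dish Stirling": 2.2}   # kWh/year per W (assumed)
electricity_price = 0.12                                            # $/kWh (assumed)

for tech, capex in capex_per_watt.items():
    revenue = annual_kwh_per_watt[tech] * electricity_price         # $/year per installed W
    npv_per_watt = present_value(revenue, rate, years) - capex
    print(f"{tech}: NPV over {years} years = ${npv_per_watt:.2f} per installed watt")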
McIntyre, Di; Ataguba, John E
2012-03-01
South Africa is considering introducing a universal health care system. A key concern for policy-makers and the general public is whether or not this reform is affordable. Modelling the resource and revenue generation requirements of alternative reform options is critical to inform decision-making. This paper considers three reform scenarios: universal coverage funded by increased allocations to health from general tax and additional dedicated taxes; an alternative reform option of extending private health insurance coverage to all formal sector workers and their dependents with the remainder using tax-funded services; and maintaining the status quo. Each scenario was modelled over a 15-year period using a spreadsheet model. Statistical analyses were also undertaken to evaluate the impact of options on the distribution of health care financing burden and benefits from using health services across socio-economic groups. Universal coverage would result in total health care spending levels equivalent to 8.6% of gross domestic product (GDP), which is comparable to current spending levels. It is lower than the status quo option (9.5% of GDP) and far lower than the option of expanding private insurance cover (over 13% of GDP). However, public funding of health services would have to increase substantially. Despite this, universal coverage would result in the most progressive financing system if the additional public funding requirements are generated through a surcharge on taxable income (but not if VAT is increased). The extended private insurance scheme option would be the least progressive and would impose a very high payment burden; total health care payments on average would be 10.7% of household consumption expenditure compared with the universal coverage (6.7%) and status quo (7.5%) options. The least pro-rich distribution of service benefits would be achieved under universal coverage. Universal coverage is affordable and would promote health system equity, but needs careful design to ensure its long-term sustainability.
Distributed Generation Market Demand Model | NREL
The Distributed Generation Market Demand (dGen) model simulates the potential adoption of distributed energy resources (DERs) by residential, commercial, and industrial entities. The dGen model can help develop deployment forecasts for distributed resources.
Electrical power generating system
NASA Technical Reports Server (NTRS)
Nola, F. J. (Inventor)
1983-01-01
A power generating system for adjustably coupling an induction motor, operating as a generator, to an A.C. power line, wherein the motor and power line are connected through a triac, is described. The triac is regulated to normally turn on at a relatively late point in each half cycle of its operation, whereby at less than operating speed, and thus when the induction motor functions as a motor rather than as a generator, power consumption from the line is substantially reduced.
Spatiotemporal and geometric optimization of sensor arrays for detecting analytes in fluids
Lewis, Nathan S.; Freund, Michael S.; Briglin, Shawn M.; Tokumaru, Phil; Martin, Charles R.; Mitchell, David T.
2006-10-17
Sensor arrays and sensor array systems for detecting analytes in fluids. Sensors configured to generate a response upon introduction of a fluid containing one or more analytes can be located on one or more surfaces relative to one or more fluid channels in an array. Fluid channels can take the form of pores or holes in a substrate material. Fluid channels can be formed between one or more substrate plates. Sensors can be fabricated with substantially optimized sensor volumes to generate a response having a substantially maximized signal to noise ratio upon introduction of a fluid containing one or more target analytes. Methods of fabricating and using such sensor arrays and systems are also disclosed.
Spatiotemporal and geometric optimization of sensor arrays for detecting analytes in fluids
Lewis, Nathan S [La Canada, CA; Freund, Michael S [Winnipeg, CA; Briglin, Shawn S [Chittenango, NY; Tokumaru, Phillip [Moorpark, CA; Martin, Charles R [Gainesville, FL; Mitchell, David [Newtown, PA
2009-09-29
Sensor arrays and sensor array systems for detecting analytes in fluids. Sensors configured to generate a response upon introduction of a fluid containing one or more analytes can be located on one or more surfaces relative to one or more fluid channels in an array. Fluid channels can take the form of pores or holes in a substrate material. Fluid channels can be formed between one or more substrate plates. Sensors can be fabricated with substantially optimized sensor volumes to generate a response having a substantially maximized signal to noise ratio upon introduction of a fluid containing one or more target analytes. Methods of fabricating and using such sensor arrays and systems are also disclosed.
Naturally p-Hydroxybenzoylated Lignins in Palms
Fachuang Lu; Steven D. Karlen; Matt Regner; Hoon Kim; Sally A. Ralph; Run-Cang Sun; Ken-ichi Kuroda; Mary Ann Augustin; Raymond Mawson; Henry Sabarez; Tanoj Singh; Gerardo Jimenez-Monteon; Sarani Zakaria; Stefan Hill; Philip J. Harris; Wout Boerjan; Curtis G. Wilkerson; Shawn D. Mansfield; John Ralph
2015-01-01
The industrial production of palm oil concurrently generates a substantial amount of empty fruit bunch (EFB) fibers that could be used as a feedstock in a lignocellulose based biorefinery. Lignin byproducts generated by this process may offer opportunities for the isolation of value-added products, such as p-hydroxybenzoate (pBz),...
Role of plasma electrons in the generation of a gas discharge plasma
NASA Astrophysics Data System (ADS)
Gruzdev, V. A.; Zalesski, V. G.; Rusetski, I. S.
2012-12-01
The role of different ionization mechanisms in Penning-type gas discharges used to generate an emitting plasma in plasma electron sources is considered. It is shown that, under certain conditions, a substantial contribution to the process of gas ionization is provided by plasma electrons.
Design of a power management and distribution system for a thermionic-diode powered spacecraft
NASA Technical Reports Server (NTRS)
Kimnach, Greg L.
1996-01-01
The Electrical Systems Development Branch of the Power Technology Division at the NASA Lewis Research Center in Cleveland, Ohio is designing a Power Management and Distribution (PMAD) System for the Air Force's Integrated Solar Upper Stage (ISUS) Engine Ground Test Demonstration (EGD). The ISUS program uses solar-thermal propulsion to perform orbit transfers from Low Earth Orbit (LEO) to Geosynchronous Orbit (GEO) and from LEO to Molniya. The ISUS uses the same energy conversion receiver to perform the LEO to High Earth Orbit (HEO) transfer and to generate on-orbit electric power for the payloads. On-orbit power generation is accomplished via two solar concentrators heating a dual-cavity graphite core which has Thermionic Diodes (TIDs) encircling each cavity. The graphite core and concentrators together are called the Receiver and Concentrator (RAC). The TID emitters reach peak temperatures of approximately 2200 K, and the TID collectors are run at approximately 1000 K. Because of the high Specific Impulse (Isp) of solar thermal propulsion relative to chemical propulsion, and because a common bus is used for communications, GN&C, power, etc., a substantial increase in payload weight is possible. This potentially allows for a step down in the required launch vehicle size or class relative to a similar payload using conventional chemical propulsion and a separate spacecraft bus. The ISUS power system is to provide 1000 We at 28±6 Vdc to the payload/spacecraft from a maximum TID generation capability of 1070 We at 2200 K. Producing power with this quality, protecting the spacecraft from electrical faults, and accommodating operational constraints of the TIDs are the responsibilities of the PMAD system. The design strategy and system options examined, along with the proposed designs for the Flight and EGD configurations, are discussed herein.
Stakeholder Convening and Working Groups | Solar Research | NREL
Established in 2013 by NREL, the Distributed Generation Interconnection Collaborative (DGIC) provides a forum for the exchange of best practices for distributed generation interconnection.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mather, Barry A; Hodge, Brian S; Cho, Gyu-Jung
Voltage regulation devices have been traditionally installed and utilized to support distribution voltages. Installations of distributed energy resources (DERs) in distribution systems are rapidly increasing, and many of these generation resources have variable and uncertain power output. These generators can significantly change the voltage profile for a feeder; therefore, in the distribution system planning stage of the optimal operation and dispatch of voltage regulation devices, possible high penetrations of DERs should be considered. In this paper, we model the IEEE 34-bus test feeder, including all essential equipment. An optimization method is adopted to determine the optimal siting and operation of the voltage regulation devices in the presence of distributed solar power generation. Finally, we verify the optimal configuration of the entire system through the optimization and simulation results.
Bansal, Ravi; Hao, Xuejun; Liu, Jun; Peterson, Bradley S.
2014-01-01
Many investigators have tried to apply machine learning techniques to magnetic resonance images (MRIs) of the brain in order to diagnose neuropsychiatric disorders. Usually the number of brain imaging measures (such as measures of cortical thickness and measures of local surface morphology) derived from the MRIs (i.e., their dimensionality) has been large (e.g. >10) relative to the number of participants who provide the MRI data (<100). Sparse data in a high dimensional space increases the variability of the classification rules that machine learning algorithms generate, thereby limiting the validity, reproducibility, and generalizability of those classifiers. The accuracy and stability of the classifiers can improve significantly if the multivariate distributions of the imaging measures can be estimated accurately. To accurately estimate the multivariate distributions using sparse data, we propose to estimate first the univariate distributions of imaging data and then combine them using a Copula to generate more accurate estimates of their multivariate distributions. We then sample the estimated Copula distributions to generate dense sets of imaging measures and use those measures to train classifiers. We hypothesize that the dense sets of brain imaging measures will generate classifiers that are stable to variations in brain imaging measures, thereby improving the reproducibility, validity, and generalizability of diagnostic classification algorithms in imaging datasets from clinical populations. In our experiments, we used both computer-generated and real-world brain imaging datasets to assess the accuracy of multivariate Copula distributions in estimating the corresponding multivariate distributions of real-world imaging data. Our experiments showed that diagnostic classifiers generated using imaging measures sampled from the Copula were significantly more accurate and more reproducible than were the classifiers generated using either the real-world imaging measures or their multivariate Gaussian distributions. Thus, our findings demonstrate that estimated multivariate Copula distributions can generate dense sets of brain imaging measures that can in turn be used to train classifiers, and those classifiers are significantly more accurate and more reproducible than are those generated using real-world imaging measures alone. PMID:25093634
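The copula-based densification step can be illustrated with a generic Gaussian-copula recipe: fit the marginals empirically, estimate a correlation matrix from the normal scores, sample from the resulting multivariate normal, and map back through the empirical quantiles. This is a sketch of the general idea, not the authors' exact estimator, and the example data are synthetic.

# Generic Gaussian-copula sampler for generating dense synthetic measures.
import numpy as np
from scipy import stats

def copula_sample(data: np.ndarray, n_samples: int, seed: int = 0) -> np.ndarray:
    """data: (n_subjects, n_measures). Returns (n_samples, n_measures) synthetic draws."""
    rng = np.random.default_rng(seed)
    n, d = data.shape
    # 1. Transform each measure to approximate normal scores via its empirical CDF.
    ranks = stats.rankdata(data, axis=0)
    z = stats.norm.ppf(ranks / (n + 1.0))
    # 2. Estimate the copula correlation matrix from the normal scores.
    corr = np.corrcoef(z, rowvar=False)
    # 3. Sample from the multivariate normal with that correlation.
    z_new = rng.multivariate_normal(np.zeros(d), corr, size=n_samples)
    u_new = stats.norm.cdf(z_new)
    # 4. Map back through the empirical quantiles of each original measure.
    out = np.empty_like(u_new)
    for j in range(d):
        out[:, j] = np.quantile(data[:, j], u_new[:, j])
    return out

# Example: 60 subjects, 3 skewed synthetic measures; draw 5000 synthetic cases.
rng = np.random.default_rng(1)
real = rng.lognormal(mean=0.0, sigma=0.5, size=(60, 3))
synthetic = copula_sample(real, 5000)
print(synthetic.shape)  # (5000, 3)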
Coordinated control of micro-grid based on distributed moving horizon control.
Ma, Miaomiao; Shao, Liyang; Liu, Xiangjie
2018-05-01
This paper proposes a distributed moving horizon coordinated control scheme for the power balance and economic dispatch problems of a micro-grid based on distributed generation. We design a power coordination controller for each subsystem via moving horizon control by minimizing a suitable objective function. The objective function of the distributed moving horizon coordinated controller is chosen based on the principle that the wind power subsystem has priority to generate electricity, photovoltaic generation coordinates with the wind power subsystem, and the battery is activated only when necessary to meet the load demand. The simulation results illustrate that the proposed distributed moving horizon coordinated controller can allocate the output power of the two generation subsystems reasonably under varying environmental conditions, which not only satisfies the load demand but also limits excessive fluctuations of output power to protect the power generation equipment. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
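The dispatch priority described above (wind first, then PV, battery only when needed) can be written as a single-step rule; the full distributed moving-horizon optimization is not reproduced here, and the capacities and load values below are assumed.

# Single-step sketch of the priority rule: wind first, PV covers the
# remainder, battery only if renewables cannot meet the load.
def dispatch(load_kw, wind_avail_kw, pv_avail_kw, battery_max_kw):
    wind = min(wind_avail_kw, load_kw)                  # priority 1: wind
    pv = min(pv_avail_kw, load_kw - wind)               # priority 2: PV covers the rest
    battery = min(battery_max_kw, load_kw - wind - pv)  # priority 3: battery if still short
    unmet = load_kw - wind - pv - battery
    return {"wind": wind, "pv": pv, "battery": battery, "unmet": max(unmet, 0.0)}

# Example horizon of four time steps (kW); in a moving-horizon scheme this
# rule would be re-evaluated at every step with updated forecasts.
for load, wind, pv in [(80, 50, 40), (90, 30, 20), (60, 70, 10), (100, 20, 10)]:
    print(dispatch(load, wind, pv, battery_max_kw=30))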
Scott, W.E.; McGimsey, R.G.
1994-01-01
The 1989-1990 eruption of Redoubt Volcano spawned about 20 areally significant tephra-fall deposits between December 14, 1989 and April 26, 1990. Tephra plumes rose to altitudes of 7 to more than 10 km and were carried mainly northward and eastward by prevailing winds, where they substantially impacted air travel, commerce, and other activities. In comparison to notable eruptions of the recent past, the Redoubt events produced a modest amount of tephra-fall deposits - 6 × 10^7 to 5 × 10^10 kg for individual events and a total volume (dense-rock equivalent) of about 3-5 × 10^7 m^3 of andesite and dacite. Two contrasting tephra types were generated by these events. Pumiceous tephra-fall deposits of December 14 and 15 were followed on December 16 and all later events by fine-grained lithic-crystal tephra deposits, much of which fell as particle aggregates. The change in the character of the tephra-fall deposits reflects their fundamentally different modes of origin. The pumiceous deposits were produced by magmatically driven explosions. The fine-grained lithic-crystal deposits were generated by two processes. Hydrovolcanic vent explosions generated tephra-fall deposits of December 16 and 19. Such explosions continued as a tephra source, but apparently with diminishing importance, during events of January and February. Ash clouds of lithic pyroclastic flows generated by collapse of actively growing lava domes probably contributed to tephra-fall deposits of all events from January 2 to April 26, and were the sole source of tephra fall for at least the last 4 deposits. © 1994.
Compression techniques in tele-radiology
NASA Astrophysics Data System (ADS)
Lu, Tianyu; Xiong, Zixiang; Yun, David Y.
1999-10-01
This paper describes a prototype telemedicine system for remote 3D radiation treatment planning. Due to the voluminous medical image data and the image streams generated at interactive frame rates in this application, the importance of deploying adjustable lossy-to-lossless compression techniques is emphasized in order to achieve acceptable performance over various kinds of communication networks. In particular, the compression of the data substantially reduces the transmission time and therefore allows large-scale radiation distribution simulation and interactive volume visualization using remote supercomputing resources in a timely fashion. The compression algorithms currently used in the software we developed are JPEG and H.263 lossy methods and Lempel-Ziv (LZ77) lossless methods. Both objective and subjective assessments of the effect of lossy compression methods on the volume data are conducted. Favorable results are obtained, showing that substantial compression ratios are achievable within the distortion tolerance. From our experience, we conclude that 30 dB (PSNR) is about the lower bound to achieve acceptable quality when applying lossy compression to anatomy volume data (e.g. CT). For computer-simulated data, a much higher PSNR (up to 100 dB) can be expected. This work not only introduces a novel approach for delivering medical services that will have a significant impact on existing cooperative image-based services, but also provides a platform for physicians to assess the effects of lossy compression techniques on the diagnostic and aesthetic appearance of medical imaging.
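For reference, the PSNR figure quoted above is computed as follows for 8-bit image data; the example image is random synthetic data standing in for a CT slice.

# Peak signal-to-noise ratio for 8-bit data (peak value 255).
import numpy as np

def psnr(original: np.ndarray, compressed: np.ndarray, peak: float = 255.0) -> float:
    mse = np.mean((original.astype(float) - compressed.astype(float)) ** 2)
    if mse == 0:
        return float("inf")                    # identical images (lossless)
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
slice_ct = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)  # stand-in for a CT slice
noisy = np.clip(slice_ct + rng.normal(0, 5, slice_ct.shape), 0, 255).astype(np.uint8)
print(f"PSNR = {psnr(slice_ct, noisy):.1f} dB")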
Estimating Skin Cancer Risk: Evaluating Mobile Computer-Adaptive Testing.
Djaja, Ngadiman; Janda, Monika; Olsen, Catherine M; Whiteman, David C; Chien, Tsair-Wei
2016-01-22
Response burden is a major detriment to questionnaire completion rates. Computer adaptive testing may offer advantages over non-adaptive testing, including reduction of numbers of items required for precise measurement. Our aim was to compare the efficiency of non-adaptive (NAT) and computer adaptive testing (CAT) facilitated by Partial Credit Model (PCM)-derived calibration to estimate skin cancer risk. We used a random sample from a population-based Australian cohort study of skin cancer risk (N=43,794). All 30 items of the skin cancer risk scale were calibrated with the Rasch PCM. A total of 1000 cases generated following a normal distribution (mean [SD] 0 [1]) were simulated using three Rasch models with three fixed-item (dichotomous, rating scale, and partial credit) scenarios, respectively. We calculated the comparative efficiency and precision of CAT and NAT (shortening of questionnaire length and the count difference number ratio less than 5% using independent t tests). We found that use of CAT led to smaller person standard error of the estimated measure than NAT, with substantially higher efficiency but no loss of precision, reducing response burden by 48%, 66%, and 66% for dichotomous, Rating Scale Model, and PCM models, respectively. CAT-based administrations of the skin cancer risk scale could substantially reduce participant burden without compromising measurement precision. A mobile computer adaptive test was developed to help people efficiently assess their skin cancer risk.
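The select/administer/update cycle that produces these savings can be sketched as below. The study calibrated items with the Partial Credit Model; for brevity this sketch uses the simpler dichotomous Rasch model, with made-up item difficulties and an arbitrary stopping rule, to show how an adaptive test can end before all items are given.

# Minimal computer-adaptive testing loop under a dichotomous Rasch model.
# Item difficulties, the simulated respondent, and the stopping rule are all
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
difficulties = np.linspace(-2.0, 2.0, 30)     # assumed item difficulties (logits)
true_theta = 0.8                              # simulated respondent

def prob(theta, b):
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def update_theta(theta, answered, responses, steps=10):
    """Newton-Raphson maximum-likelihood update of theta given responses so far."""
    for _ in range(steps):
        p = prob(theta, difficulties[answered])
        grad = np.sum(responses - p)          # d logL / d theta
        info = np.sum(p * (1 - p))            # Fisher information
        theta += grad / max(info, 1e-6)
        theta = float(np.clip(theta, -4, 4))  # keep the estimate bounded with few items
    return theta

theta, answered, responses = 0.0, [], []
while len(answered) < 30:
    remaining = [i for i in range(30) if i not in answered]
    info = [prob(theta, difficulties[i]) * (1 - prob(theta, difficulties[i])) for i in remaining]
    item = remaining[int(np.argmax(info))]    # most informative item at the current estimate
    answered.append(item)
    responses.append(rng.random() < prob(true_theta, difficulties[item]))
    theta = update_theta(theta, answered, np.array(responses, dtype=float))
    test_info = np.sum([prob(theta, difficulties[i]) * (1 - prob(theta, difficulties[i])) for i in answered])
    if test_info >= 3.5:
        break                                 # stop once test information is adequate (SE ~ 0.53)
print(f"administered {len(answered)} of 30 items, theta estimate = {theta:.2f}")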
The Hidden Magnetic Field of the Young Neutron Star in Kesteven 79
NASA Astrophysics Data System (ADS)
Shabaltas, Natalia; Lai, Dong
2012-04-01
Recent observations of the central compact object in the Kesteven 79 supernova remnant show that this neutron star (NS) has a weak dipole magnetic field (a few × 10^10 G) but an anomalously large (~64%) pulse fraction in its surface X-ray emission. We explore the idea that a substantial sub-surface magnetic field exists in the NS crust, which produces diffuse hot spots on the stellar surface due to anisotropic heat conduction, and gives rise to the observed X-ray pulsation. We develop a general-purpose method, termed "Temperature Template with Full Transport" (TTFT), that computes the synthetic pulse profile of surface X-ray emission from NSs with arbitrary magnetic field and surface temperature distributions, taking into account magnetic atmosphere opacities, beam pattern, vacuum polarization, and gravitational light bending. We show that a crustal toroidal magnetic field of order a few × 10^14 G or higher, varying smoothly across the crust, can produce sufficiently distinct surface hot spots to generate the observed pulse fraction in the Kes 79 NS. This result suggests that substantial sub-surface magnetic fields, much stronger than the "visible" dipole fields, may be buried in the crusts of some young NSs, and such hidden magnetic fields can play an important role in their observational manifestations. The general TTFT tool we have developed can also be used for studying radiation from other magnetic NSs.
Space Weather Modeling at the Community Coordinated Modeling Center
NASA Astrophysics Data System (ADS)
Hesse, M.; Falasca, A.; Johnson, J.; Keller, K.; Kuznetsova, M.; Rastaetter, L.
2003-04-01
The Community Coordinated Modeling Center (CCMC) is a multi-agency partnership aimed at the creation of next generation space weather models. The goal of the CCMC is to support the research and developmental work necessary to substantially increase the present-day modeling capability for space weather purposes, and to provide models for transition to the rapid prototyping centers at the space weather forecast centers. This goal requires close collaborations with and substantial involvement of the research community. The physical regions to be addressed by CCMC-related activities range from the solar atmosphere to the Earth's upper atmosphere. The CCMC is an integral part of NASA's Living With a Star (LWS) initiative, of the National Space Weather Program Implementation Plan, and of the Department of Defense Space Weather Transition Plan. CCMC includes a facility at NASA Goddard Space Flight Center, as well as distributed computing facilities provided by the US Air Force. CCMC also provides, to the research community, access to state-of-the-art space research models. In this paper we will provide updates on CCMC status, on current plans, research and development accomplishments and goals, and on the model testing and validation process undertaken as part of the CCMC mandate. We will demonstrate the capabilities of models resident at CCMC via the analysis of a geomagnetic storm, driven by a shock in the solar wind.
NASA Technical Reports Server (NTRS)
Ruane, Alex C.; Goldberg, Richard; Chryssanthacopoulos, James
2014-01-01
The AgMERRA and AgCFSR climate forcing datasets provide daily, high-resolution, continuous, meteorological series over the 1980-2010 period designed for applications examining the agricultural impacts of climate variability and climate change. These datasets combine daily resolution data from retrospective analyses (the Modern-Era Retrospective Analysis for Research and Applications, MERRA, and the Climate Forecast System Reanalysis, CFSR) with in situ and remotely-sensed observational datasets for temperature, precipitation, and solar radiation, leading to substantial reductions in bias in comparison to a network of 2324 agricultural-region stations from the Hadley Integrated Surface Dataset (HadISD). Results compare favorably against the original reanalyses as well as the leading climate forcing datasets (Princeton, WFD, WFD-EI, and GRASP), and AgMERRA distinguishes itself with substantially improved representation of daily precipitation distributions and extreme events owing to its use of the MERRA-Land dataset. These datasets also peg relative humidity to the maximum temperature time of day, allowing for more accurate representation of the diurnal cycle of near-surface moisture in agricultural models. AgMERRA and AgCFSR enable a number of ongoing investigations in the Agricultural Model Intercomparison and Improvement Project (AgMIP) and related research networks, and may be used to fill gaps in historical observations as well as a basis for the generation of future climate scenarios.
Edge seal for a porous gas distribution plate of a fuel cell
Feigenbaum, Haim; Pudick, Sheldon; Singh, Rajindar
1984-01-01
In an improved seal for a gas distribution plate of a fuel cell, a groove is provided extending along an edge of the plate. A member of resinous material is arranged within the groove and a paste comprising an immobilized acid is arranged surrounding the member and substantially filling the groove. The seal, which is impervious to the gas being distributed, is resistant to deterioration by the electrolyte of the cell.
McGeachy, P; Khan, R
2012-07-01
In early-stage prostate cancer, low dose rate (LDR) prostate brachytherapy is a favorable treatment modality, where small radioactive seeds are permanently implanted throughout the prostate. Treatment centres currently rely on a commercial optimization algorithm, IPSA, to generate seed distributions for treatment plans. However, commercial software does not allow the user access to the source code, thus reducing the flexibility for treatment planning and impeding any implementation of new and, perhaps, improved clinical techniques. An open source genetic algorithm (GA) has been encoded in MATLAB to generate seed distributions for a simplified prostate and urethra model. To assess the quality of the seed distributions created by the GA, both the GA and IPSA were used to generate seed distributions for two clinically relevant scenarios, and the quality of the GA distributions relative to IPSA distributions and clinically accepted standards for seed distributions was investigated. The first clinically relevant scenario involved generating seed distributions for three different prostate volumes (19.2 cc, 32.4 cc, and 54.7 cc). The second scenario involved generating distributions for three separate seed activities (0.397 mCi, 0.455 mCi, and 0.5 mCi). Both the GA and IPSA met the clinically accepted criteria for the two scenarios, where distributions produced by the GA were comparable to IPSA in terms of full coverage of the prostate by the prescribed dose and minimized dose to the urethra, which passes straight through the prostate. Further, the GA offered improved reduction of high-dose regions (i.e., hot spots) within the planned target volume. © 2012 American Association of Physicists in Medicine.
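A bare-bones genetic algorithm of the kind described can be sketched as follows (in Python here, whereas the paper's implementation is in MATLAB); the fitness function is a toy surrogate that rewards coverage of a 2-D target grid and penalizes seeds on a central urethra column and superfluous seeds, not the clinical dose model or the paper's objective.

# Minimal GA skeleton: selection, one-point crossover, bit-flip mutation.
import numpy as np

rng = np.random.default_rng(0)
GRID = 10                                   # candidate seed positions: 10 x 10 grid
URETHRA_COL = GRID // 2                     # toy urethra running through the middle
N_GENES = GRID * GRID

def fitness(chromosome):
    seeds = chromosome.reshape(GRID, GRID)
    coverage = 0
    for i in range(GRID):                   # reward every grid cell within 1 cell of a seed
        for j in range(GRID):
            if seeds[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2].any():
                coverage += 1
    urethra_penalty = 5 * seeds[:, URETHRA_COL].sum()   # discourage seeds on the urethra
    seed_penalty = 0.5 * seeds.sum()                     # discourage superfluous seeds
    return coverage - urethra_penalty - seed_penalty

def evolve(pop_size=60, generations=100, mutation_rate=0.02):
    pop = rng.random((pop_size, N_GENES)) < 0.15          # random initial seed layouts
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]           # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, N_GENES)                            # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child ^= rng.random(N_GENES) < mutation_rate              # bit-flip mutation
            children.append(child)
        pop = np.array(children)
    best = max(pop, key=fitness)
    return best.reshape(GRID, GRID)

layout = evolve()
print(f"{int(layout.sum())} seeds placed, fitness = {fitness(layout.ravel()):.1f}")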
Ultrafine particles and nitrogen oxides generated by gas and electric cooking
Dennekamp, M; Howarth, S; Dick, C; Cherrie, J; Donaldson, K; Seaton, A
2001-01-01
OBJECTIVES—To measure the concentrations of particles less than 100 nm diameter and of oxides of nitrogen generated by cooking with gas and electricity, to comment on possible hazards to health in poorly ventilated kitchens. METHODS—Experiments with gas and electric rings, grills, and ovens were used to compare different cooking procedures. Nitrogen oxides (NOx) were measured by a chemiluminescent ML9841A NOx analyser. A TSI 3934 scanning mobility particle sizer was used to measure average number concentration and size distribution of aerosols in the size range 10-500 nm. RESULTS—High concentrations of particles are generated by gas combustion, by frying, and by cooking of fatty foods. Electric rings and grills may also generate particles from their surfaces. In experiments where gas burning was the most important source of particles, most particles were in the size range 15-40 nm. When bacon was fried on the gas or electric rings the particles were of larger diameter, in the size range 50-100 nm. The smaller particles generated during experiments grew in size with time because of coagulation. Substantial concentrations of NOx were generated during cooking on gas; four rings for 15 minutes produced 5 minute peaks of about 1000 ppb nitrogen dioxide and about 2000 ppb nitric oxide. CONCLUSIONS—Cooking in a poorly ventilated kitchen may give rise to potentially toxic particle number concentrations. Very high concentrations of oxides of nitrogen may also be generated by gas cooking, and with no extraction and poor ventilation, may reach concentrations at which adverse health effects may be expected. Although respiratory effects of exposure to NOx might be anticipated, recent epidemiology suggests that cardiac effects cannot be excluded, and further investigation of this is desirable. Keywords: cooking fuels; nitrogen oxides; ultrafine particles PMID:11452045
Business Models and Regulation | Distributed Generation Interconnection
Utilities and regulators are responding to the growth of distributed generation with new business models and approaches.
Size distributions of micro-bubbles generated by a pressurized dissolution method
NASA Astrophysics Data System (ADS)
Taya, C.; Maeda, Y.; Hosokawa, S.; Tomiyama, A.; Ito, Y.
2012-03-01
The size of micro-bubbles is widely distributed, ranging from one to several hundred micrometers, and depends on the generation method, flow conditions, and elapsed time after bubble generation. Although the size distribution of micro-bubbles should be taken into account to improve the accuracy of numerical simulations of flows with micro-bubbles, its variability makes it difficult to introduce the size distribution into the simulations. On the other hand, several models such as the Rosin-Rammler equation and the Nukiyama-Tanasawa equation have been proposed to represent the size distribution of particles or droplets. The applicability of these models to the size distribution of micro-bubbles has not been examined yet. In this study, we therefore measure the size distribution of micro-bubbles generated by a pressurized dissolution method using phase Doppler anemometry (PDA), and investigate the applicability of the available models to the size distributions of micro-bubbles. The experimental apparatus consists of a pressurized tank in which air is dissolved in liquid under high-pressure conditions, a decompression nozzle in which micro-bubbles are generated due to pressure reduction, a rectangular duct and an upper tank. Experiments are conducted for several liquid volumetric fluxes in the decompression nozzle. Measurements are carried out at the downstream region of the decompression nozzle and in the upper tank. The experimental results indicate that (1) the Nukiyama-Tanasawa equation well represents the size distribution of micro-bubbles generated by the pressurized dissolution method, whereas the Rosin-Rammler equation fails in the representation, (2) the size distribution of micro-bubbles can be evaluated by using the Nukiyama-Tanasawa equation without individual bubble diameters, when the mean bubble diameter and skewness of the bubble distribution are given, and (3) an evaluation method of visibility based on the bubble size distribution and bubble number density is proposed, and the evaluated visibility agrees well with the visibility measured in the upper tank.
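For reference, the two candidate models named above are commonly written in the following forms; the parameter symbols here are generic and not taken from the paper. The Rosin-Rammler model is a cumulative distribution over diameter d with characteristic size d_e and spread exponent n, while the Nukiyama-Tanasawa model is a number density with empirical constants a, b, p, q.

```latex
% Rosin--Rammler (cumulative form):
F(d) \;=\; 1 - \exp\!\left[-\left(\frac{d}{d_{e}}\right)^{n}\right]
% Nukiyama--Tanasawa (number density form):
n(d) \;=\; a\, d^{p} \exp\!\left(-b\, d^{q}\right)
```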
The Microphysical Structure of Extreme Precipitation as Inferred from Ground-Based Raindrop Spectra.
NASA Astrophysics Data System (ADS)
Uijlenhoet, Remko; Smith, James A.; Steiner, Matthias
2003-05-01
The controls on the variability of raindrop size distributions in extreme rainfall and the associated radar reflectivity-rain rate relationships are studied using a scaling-law formalism for the description of raindrop size distributions and their properties. This scaling-law formalism enables a separation of the effects of changes in the scale of the raindrop size distribution from those in its shape. Parameters controlling the scale and shape of the scaled raindrop size distribution may be related to the microphysical processes generating extreme rainfall. A global scaling analysis of raindrop size distributions corresponding to rain rates exceeding 100 mm h⁻¹, collected during the 1950s with the Illinois State Water Survey raindrop camera in Miami, Florida, reveals that extreme rain rates tend to be associated with conditions in which the variability of the raindrop size distribution is strongly number controlled (i.e., characteristic drop sizes are roughly constant). This means that changes in properties of raindrop size distributions in extreme rainfall are largely produced by varying raindrop concentrations. As a result, rainfall integral variables (such as radar reflectivity and rain rate) are roughly proportional to each other, which is consistent with the concept of the so-called equilibrium raindrop size distribution and has profound implications for radar measurement of extreme rainfall. A time series analysis for two contrasting extreme rainfall events supports the hypothesis that the variability of raindrop size distributions for extreme rain rates is strongly number controlled. However, this analysis also reveals that the actual shapes of the (measured and scaled) spectra may differ significantly from storm to storm. This implies that the exponents of power-law radar reflectivity-rain rate relationships may be similar, and close to unity, for different extreme rainfall events, but their prefactors may differ substantially. Consequently, there is no unique radar reflectivity-rain rate relationship for extreme rain rates, but the variability is essentially reduced to one free parameter (i.e., the prefactor). It is suggested that this free parameter may be estimated on the basis of differential reflectivity measurements in extreme rainfall.
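For orientation, the scaling-law formalism referred to above is usually written so that a single reference variable (here the rain rate R) sets the scale of the spectrum while a scaled function g fixes its shape; in the strongly number-controlled limit described in the abstract the characteristic drop size is nearly constant, so integral quantities become nearly proportional to one another and the reflectivity-rain rate exponent approaches unity. The notation below is the commonly used one, not necessarily that of the paper.

```latex
% Scaling-law form of the raindrop size distribution:
N(D, R) \;=\; R^{\alpha}\, g\!\left(D\, R^{-\beta}\right)
% Power-law reflectivity--rain rate relation; for number-controlled spectra
% (\beta \approx 0) the exponent b is close to 1 and only the prefactor a varies:
Z \;=\; a\, R^{b}
```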
The economics (or lack thereof) of aerosol geoengineering
NASA Astrophysics Data System (ADS)
Goes, M.; Keller, K.; Tuana, N.
2009-04-01
Anthropogenic greenhouse gas emissions are changing the Earth's climate and impose substantial risks for current and future generations. What are scientifically sound, economically viable, and ethically defensible strategies to manage these climate risks? Ratified international agreements call for a reduction of greenhouse gas emissions to avoid dangerous anthropogenic interference with the climate system. Recent proposals, however, call for the deployment of a different approach: to geoengineer climate by injecting aerosol precursors into the stratosphere. Published economic studies typically suggest that substituting aerosol geoengineering for abatement of carbon dioxide emissions results in large net monetary benefits. However, these studies neglect the risks of aerosol geoengineering due to (i) the potential for future geoengineering failures and (ii) the negative impacts associated with the aerosol forcing. Here we use a simple integrated assessment model of climate change to analyze potential economic impacts of aerosol geoengineering strategies over a wide range of uncertain parameters such as climate sensitivity, the economic damages due to climate change, and the economic damages due to aerosol geoengineering forcing. The simplicity of the model provides the advantages of parsimony and transparency, but it also imposes severe caveats on the interpretation of the results. For example, the analysis is based on a globally aggregated model and is hence silent on the question of intragenerational distribution of costs and benefits. In addition, the analysis neglects the effects of endogenous learning about the climate system. We show that the risks associated with a future geoengineering failure and negative impacts of aerosol forcings can cause geoengineering strategies to fail an economic cost-benefit test. One key to this finding is that a geoengineering failure would lead to dramatic and abrupt climatic changes. The monetary damages due to this failure can dominate the cost-benefit analysis because the monetary damages of climate change are expected to increase with the rate of change. Substituting aerosol geoengineering for greenhouse gas emission abatement might fail not only an economic cost-benefit test but also an ethical test of distributional justice. Substituting aerosol geoengineering for greenhouse gas emission abatement constitutes a conscious risk transfer to future generations. Intergenerational justice demands distributional justice, namely that present generations may not create benefits for themselves in exchange for burdens on future generations. We use the economic model to quantify this risk transfer to better inform the judgment of whether substituting aerosol geoengineering for carbon dioxide emission abatement fails this ethical test.
Substantiation of the Parameters of the Central Distributor for Mineral Fertilizers
ERIC Educational Resources Information Center
Nukeshev, Sayakhat O.; Eskhozhin, Kairat D.; Tokushev, Masgut H.; Zhazykbayeva, Zhazira M.
2016-01-01
The main problem of pneumatic planter distribution systems with centralized seed metering is the deficient feed-rate consistency of seeds supplied to the coulters. Thus, the purpose of the study is to develop optimal ways of decreasing the irregular distribution of seeds and mineral fertilizers in the coulters. In order to achieve this…
ERIC Educational Resources Information Center
Luschei, Thomas F.; Chudgar, Amita; Rew, W. Joshua
2013-01-01
Background/Context: Although substantial evidence from the United States indicates that more qualified teachers are disproportionately concentrated among academically and economically advantaged children, little cross-national research has examined the distribution of teacher qualifications across schools and students. As a result, we know little…
Resampling and Distribution of the Product Methods for Testing Indirect Effects in Complex Models
ERIC Educational Resources Information Center
Williams, Jason; MacKinnon, David P.
2008-01-01
Recent advances in testing mediation have found that certain resampling methods and tests based on the mathematical distribution of 2 normal random variables substantially outperform the traditional "z" test. However, these studies have primarily focused only on models with a single mediator and 2 component paths. To address this limitation, a…
Distributional Language Learning: Mechanisms and Models of Category Formation
ERIC Educational Resources Information Center
Aslin, Richard N.; Newport, Elissa L.
2014-01-01
In the past 15 years, a substantial body of evidence has confirmed that a powerful distributional learning mechanism is present in infants, children, adults and (at least to some degree) in nonhuman animals as well. The present article briefly reviews this literature and then examines some of the fundamental questions that must be addressed for…
NASA Technical Reports Server (NTRS)
1980-01-01
Twenty-four functional requirements were prepared under six categories and serve to indicate how to integrate dispersed storage generation (DSG) systems with the distribution and other portions of the electric utility system. Results indicate that there are no fundamental technical obstacles to prevent the connection of dispersed storage and generation to the distribution system. However, a communication system of some sophistication is required to integrate the distribution system and the dispersed generation sources for effective control. The large size span of generators, from 10 kW to 30 MW, means that a variety of remote monitoring and control arrangements may be required. Increased effort is required to develop demonstration equipment to perform the DSG monitoring and control functions and to acquire experience with this equipment in the utility distribution environment.
Spatial distribution of CH3 and CH2 radicals in a methane rf discharge
NASA Astrophysics Data System (ADS)
Sugai, H.; Kojima, H.; Ishida, A.; Toyoda, H.
1990-06-01
Spatial distributions of neutral radicals CH3 and CH2 in a capacitively coupled rf glow discharge of methane were measured by threshold ionization mass spectrometry. A strong asymmetry of the density profile was found for the CH2 radical in the high-pressure (˜100 mTorr) discharge. In addition, comprehensive measurements of electron energy distribution, ionic composition, and radical sticking coefficient were made to use as inputs to theoretical modeling of radicals in the methane plasma. The model predictions agree substantially with the measured radical distributions.
MEXICAN-AMERICAN STUDY PROJECT. ADVANCE REPORT 10, MEXICAN AMERICANS IN SOUTHWEST LABOR MARKETS.
ERIC Educational Resources Information Center
FOGEL, WALTER
Mexican Americans are clearly a disadvantaged group in the labor markets of the Southwest. Although substantial gains in income and occupational status take place between the first and second generations of Mexican Americans, little improvement is evidenced after the second generation. As further evidence of disadvantage, it has been found…
Class and University Education: Inter-Generational Patterns in Canada. NALL Working Paper.
ERIC Educational Resources Information Center
Livingstone, D. W.; Stowe, Susan
Young people from lower-class origins continue to face major barriers to university education in Canada. This paper documents both substantial inter-generational class mobility and continuing inequalities in formal educational attainments by class origins. While Canada now has the world's highest educational attainment in its youth cohort and has…
75 FR 74040 - Intent To Prepare an Environmental Impact Statement and To Conduct Scoping Meetings...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-30
... Proposed Project NextEra's proposed Project would consist of up to 100 wind turbine generators with a... roads. NextEra has secured leases with willing landowners for its wind generation turbines and related... substantial natural resources conflicts. NextEra's siting process for the wind turbine strings and associated...
Accurate and Inaccurate Conceptions about Osmosis That Accompanied Meaningful Problem Solving.
ERIC Educational Resources Information Center
Zuckerman, June Trop
This study focused on the knowledge of six outstanding science students who solved an osmosis problem meaningfully. That is, they used appropriate and substantially accurate conceptual knowledge to generate an answer. Three generated a correct answer; three, an incorrect answer. This paper identifies both the accurate and inaccurate conceptions…
Time-Resolved Tandem Faraday Cup Development for High Energy TNSA Particles
NASA Astrophysics Data System (ADS)
Padalino, S.; Simone, A.; Turner, E.; Ginnane, M. K.; Glisic, M.; Kousar, B.; Smith, A.; Sangster, C.; Regan, S.
2015-11-01
MTW and OMEGA EP Lasers at LLE utilize ultra-intense laser light to produce high-energy ion pulses through Target Normal Sheath Acceleration (TNSA). A Time Resolved Tandem Faraday Cup (TRTF) was designed and built to collect and differentiate protons from heavy ions (HI) produced during TNSA. The TRTF includes a replaceable thickness absorber capable of stopping a range of user-selectable HI emitted from TNSA plasma. HI stop within the primary cup, while less massive particles continue through and deposit their remaining charge in the secondary cup, releasing secondary electrons in the process. The time-resolved beam current generated in each cup will be measured on a fast storage scope in multiple channels. A charge-exchange foil at the TRTF entrance modifies the charge state distribution of HI to a known distribution. Using this distribution and the time of flight of the HI, the total HI current can be determined. Initial tests of the TRTF have been made using a proton beam produced by SUNY Geneseo's 1.7 MV Pelletron accelerator. A substantial reduction in secondary electron production, from 70% of the proton beam current at 2MeV down to 0.7%, was achieved by installing a pair of dipole magnet deflectors which successfully returned the electrons to the cups in the TRTF. Ultimately the TRTF will be used to normalize a variety of nuclear physics cross sections and stopping power measurements. Based in part upon work supported by a DOE NNSA Award#DE-NA0001944.
Choice of time-scale in Cox's model analysis of epidemiologic cohort data: a simulation study.
Thiébaut, Anne C M; Bénichou, Jacques
2004-12-30
Cox's regression model is widely used for assessing associations between potential risk factors and disease occurrence in epidemiologic cohort studies. Although age is often a strong determinant of disease risk, authors have frequently used time-on-study instead of age as the time-scale, as for clinical trials. Unless the baseline hazard is an exponential function of age, this approach can yield different estimates of relative hazards than using age as the time-scale, even when age is adjusted for. We performed a simulation study in order to investigate the existence and magnitude of bias for different degrees of association between age and the covariate of interest. Age to disease onset was generated from exponential, Weibull or piecewise Weibull distributions, and both fixed and time-dependent dichotomous covariates were considered. We observed no bias upon using age as the time-scale. Upon using time-on-study, we verified the absence of bias for exponentially distributed age to disease onset. For non-exponential distributions, we found that bias could occur even when the covariate of interest was independent from age. It could be severe in case of substantial association with age, especially with time-dependent covariates. These findings were illustrated on data from a cohort of 84,329 French women followed prospectively for breast cancer occurrence. In view of our results, we strongly recommend not using time-on-study as the time-scale for analysing epidemiologic cohort data. 2004 John Wiley & Sons, Ltd.
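A minimal simulation in the spirit of the study is sketched below: age at onset follows a Weibull law (so the baseline hazard is not exponential in age), the covariate is correlated with age at entry, and the same data are analysed with time-on-study versus age as the Cox time-scale. The code assumes the Python lifelines package (with entry_col support for left truncation); all variable names, parameter values, and sample sizes are illustrative and not taken from the paper.

```python
# Illustrative sketch, not the authors' simulation code.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 8000
entry_age = rng.uniform(40, 60, n)                 # age at cohort entry
x = (entry_age > 50).astype(float)                 # covariate correlated with entry age
age_onset = 40 + 30 * rng.weibull(3.0, n)          # Weibull age at onset; x has no true effect
keep = age_onset > entry_age                       # subjects must be disease-free at entry
entry_age, x, age_onset = entry_age[keep], x[keep], age_onset[keep]
censor_age = entry_age + 10.0                      # 10 years of follow-up
event = (age_onset <= censor_age).astype(int)
exit_age = np.minimum(age_onset, censor_age)
df = pd.DataFrame({"entry": entry_age, "exit": exit_age, "event": event,
                   "tos": exit_age - entry_age, "age": entry_age, "x": x})

# Time-on-study as the time-scale, adjusting for age at entry:
cph_tos = CoxPHFitter().fit(df[["tos", "event", "x", "age"]],
                            duration_col="tos", event_col="event")
# Age as the time-scale, with delayed entry (left truncation):
cph_age = CoxPHFitter().fit(df[["exit", "entry", "event", "x"]],
                            duration_col="exit", event_col="event", entry_col="entry")
print("time-on-study HR for x:", cph_tos.hazard_ratios_["x"])
print("age time-scale HR for x:", cph_age.hazard_ratios_["x"])
```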
Meta-analysis of diagnostic test data: a bivariate Bayesian modeling approach.
Verde, Pablo E
2010-12-30
In recent decades, the amount of published results on clinical diagnostic tests has expanded very rapidly. The counterpart to this development has been the formal evaluation and synthesis of diagnostic results. However, published results present substantial heterogeneity and are so far removed from the classical domain of meta-analysis that they provide a rather severe test of classical statistical methods. Recently, bivariate random effects meta-analytic methods, which model the pairs of sensitivities and specificities, have been presented from the classical point of view. In this work a bivariate Bayesian modeling approach is presented. This approach substantially extends the scope of classical bivariate methods by allowing the structural distribution of the random effects to depend on multiple sources of variability. Meta-analysis is summarized by the predictive posterior distributions for sensitivity and specificity. This new approach also allows substantial model checking, model diagnostics, and model selection. Statistical computations are implemented in public-domain statistical software (WinBUGS and R) and illustrated with real data examples. Copyright © 2010 John Wiley & Sons, Ltd.
The process group approach to reliable distributed computing
NASA Technical Reports Server (NTRS)
Birman, Kenneth P.
1992-01-01
The difficulty of developing reliable distributed software is an impediment to applying distributed computing technology in many settings. Experience with the ISIS system suggests that a structured approach based on virtually synchronous process groups yields systems that are substantially easier to develop, exploit sophisticated forms of cooperative computation, and achieve high reliability. Six years of research on ISIS are reviewed, describing the model, its implementation challenges, and the types of applications to which ISIS has been applied.
EDITORIAL: Environmental justice: a critical issue for all environmental scientists everywhere
NASA Astrophysics Data System (ADS)
Stephens, Carolyn
2007-10-01
It is now commonly understood that much of the worldwide burden of environmental ill health falls disproportionately on poorer peoples [1,2]. There is also substantial evidence that much environmental damage internationally is the result of the actions of richer nations or richer groups within nations—with impacts on poorer nations and poorer groups within nations [1,3,4]. It is becoming clear also that poorer peoples internationally experience multiple environmental harms, and that these may have a cumulative effect. The world is becoming more urbanized, and cities are becoming the locus for many of the local issues of environmental damage and environmental harm [4,5]. But cities are also responsible for substantial international environmental damage: for example, it is increasingly evident that cities are one of the main generators of climate change, and that the actions of people in cities in the rich world are deeply linked to the well-being of the overall ecosystem and of people worldwide. Environmental justice is a concept that links the environmental health science documenting these harms, to debates around rights, justice and equity. It fundamentally deals with the distribution of environmental goods and harms—and looks at who bears those harms and who is responsible for creating those harms, in both a practical sense but also in terms of policy decisions. It is a radical environmental health movement that has evolved from civil society groups, angered at what they perceive as the `unjust' distribution of environmental resources for health and, conversely the `unjust' distribution of environmental harms. The movement now includes a collaboration of non-governmental organizations with environmental scientists, public health professionals, and lawyers, all working on the issue of the distributions of environmental harms and the rights of everyone to a healthy environment. This special issue is both timely and important. Environmental justice is moving conceptually and empirically. It started in the US as a movement of local civil society groups against local environmental injustice and distribution of environmental harms [6]. It is becoming a movement that encompasses international environmental injustices and issues of access to environmental goods—and it discusses environmental justice issues both across countries and also across generations. One such definition was pulled together by academics and NGOs in the UK in 2001: 'that everyone should have the right and be able to live in a healthy environment, with access to enough environmental resources for a healthy life' 'that responsibilities are on this current generation to ensure a healthy environment exists for future generations, and on countries, organisations and individuals in this generation to ensure that development does not create environmental problems or distribute environmental resources in ways which damage other peoples health' [7]. This kind of broad definition of environmental justice has been gaining currency internationally, and language around justice is moving into many topic areas of environmental science—shifting discourse on 'climate change' to 'climate justice', 'water pollution' to 'rights to clean water', 'air pollution' to 'rights to healthy air'. Policy is changing too. 
In Europe the public is gaining more access to information on environmental harms through policy mechanisms such as the Aarhus Convention [8,9] and internationally, civil society groups are becoming aware that there are mechanisms to support them if they challenge environmental pollution. As the public becomes more aware of the issues of environmental justice, and as policy shifts in this direction, environmental scientists have a challenge. We have some of the methodology necessary to measure the distribution of environmental harms and environmental responsibilities. But we also need to develop new methods to deal with the new challenges: for example, how do we measure when an issue of water contamination becomes an issue of environmental injustice? How do we measure the impacts of environmental harm today on future generations? How do we measure the distribution of multiple or cumulative impacts on poorer groups? How do we quantify the responsibility of richer citizens in the world for the environmental harms distributed unequally to the poorer citizens? The papers in this focus issue do not answer all these questions, but we hope that this theme will recur in Environmental Research Letters and that more environmental scientists will begin to frame their analyses around the critical issues of distributions of environmental harms and benefits. References [1] United Nations Environment Programme 2007 Global Environmental Outlook 2007 (Nairobi: United Nations Environment Programme) [2] UNICEF 2005 The State of the World's Children 2005 (Oxford: Oxford University Press) [3] World Resources Institute 2002 Wastes Produced from Industrialised Countries available from www.wri.org [4] Stephens C and Stair P 2007 Charting a new course for urban public health State of the World 2007: Our Urban Future ed L Stark (New York: W W Norton) pp 134 48 [5] Lee K N 2007 An urbanizing world State of the World 2007: Our Urban Future ed L Stark (New York: W W Norton) pp 3 22 [6] United States Environmental Protection Agency 2003 Environmental Justice available from www.epa.gov/compliance/environmentaljustice/ [7] Stephens C, Bullock S and Scott A 2001 Environmental justice: rights and mean to a healthy environment for all Special Briefing Paper Economic and Social Research Council (ESRC) Global Environmental Change Programme (Brighton: ESRC Global Environmental Change Programme, University of Sussex) p 3 available from www.foe.co.uk/resource/reports/environmental_justice.pdf [8] United Nations Economic Commission for Europe Convention on Access to Information 1999 Public Participation in Decision-Making and Access to Justice in Environmental Matters (Geneva: UNECE) [9] United Nations Economic Commission for Europe (UNECE) 2007 Aarhus Clearinghouse for Environmental Democracy available from aarhusclearinghouse.unece.org/ Focus on Environmental Justice And Health Internationally Contents The articles below represent the first accepted contributions and further additions will appear in the near future. Environmental justice in Scotland: policy, pedagogy and praxis Eurig Scandrett Exploring the joint effect of atmospheric pollution and socioeconomic status on selected health outcomes: the PAISARC Project Denis Bard, O Laurent, L Filleul, S Havard, S Deguen, C Segala, G Pedrono, E Riviere, C Schillinger, L Rouil, D Arveiler and D Eilstein Environmental justice and the distributional deficit in policy appraisal in the UK G P Walker
Analysis on composition and inclusions of ballpoint pen tip steel
NASA Astrophysics Data System (ADS)
Yang, Qian-kun; Shen, Ping; Zhang, Dong; Wu, Yan-xin; Fu, Jian-xun
2018-04-01
Ballpoint pen tip steel, a super free-cutting stainless steel, exhibits excellent corrosion resistance and good machining properties. In this study, inductively coupled plasma spectroscopy, metallographic microscopy, and scanning electron microscopy were used to determine the elemental contents in five ballpoint pen tips and their components, morphologies, and inclusion distributions. The results showed that the steels were all S-Pb-Te super free-cutting ferritic stainless steel. The free-cutting phases in the steels were mainly MnS, Pb, and small amounts of PbTe. MnS inclusions were in the form of chain distributions, and the aspect ratio of each size inclusion in the chain was small. The stress concentration effect could substantially reduce the cutting force when the material was machined. Some of the Pb was distributed evenly in the steel matrix as fine particles (1-2 μm), and the rest of the Pb was distributed at the middle or at both ends of the MnS inclusions. The Pb plays a role in lubrication and melting embrittlement, which substantially increases the cutting performance. PbTe was also usually distributed in the middle and at both ends of the MnS inclusions, and Te could convert the sulfides into spindles, thereby improving the cutting performance of the steel.
De Allegri, Manuela; Marschall, Paul; Flessa, Steffen; Tiendrebéogo, Justin; Kouyaté, Bocar; Jahn, Albrecht; Müller, Olaf
2010-01-01
Insecticide-treated nets (ITNs) are effective in substantially reducing malaria transmission. Still, ITN coverage in sub-Saharan Africa (SSA) remains extremely low. Policy makers are concerned with identifying the most suitable delivery mechanism to achieve rapid yet sustainable increases in ITN coverage. Little is known, however, on the comparative costs of alternative ITN distribution strategies. This paper aimed to fill this gap in knowledge by developing such a comparative cost analysis, looking at the cost per ITN distributed for two alternative interventions: subsidized sales supported by social marketing and free distribution to pregnant women through antenatal care (ANC). The study was conducted in rural Burkina Faso, where the two interventions were carried out alongside one another in 2006/07. Cost information was collected prospectively to derive both a financial analysis adopting a provider's perspective and an economic analysis adopting a societal perspective. The average financial cost per ITN distributed was US$8.08 and US$7.21 for sales supported by social marketing and free distribution through ANC, respectively. The average economic cost per ITN distributed was US$4.81 for both interventions. Contrary to common belief, costs did not differ substantially between the two interventions. Due to the district's ability to rely fully on the use of existing resources, financial costs associated with free ITN distribution through ANC were in fact even lower than those associated with the social marketing campaign. This represents an encouraging finding for SSA governments and points to the possibility to invest in programmes to favour free ITN distribution through existing health facilities. Given restricted budgets, however, free distribution programmes are unlikely to be feasible.
Brozena, Alexandra H; Leeds, Jarrett D; Zhang, Yin; Fourkas, John T; Wang, YuHuang
2014-05-27
We demonstrate efficient creation of defect-bound trions through chemical doping of controlled sp(3) defect sites in semiconducting, single-walled carbon nanotubes. These tricarrier quasi-particles luminesce almost as brightly as their parent excitons, indicating a remarkably efficient conversion of excitons into trions. Substantial populations of trions can be generated at low excitation intensities, even months after a sample has been prepared. Photoluminescence spectroscopy reveals a trion binding energy as high as 262 meV, which is substantially larger than any previously reported values. This discovery may have important ramifications not only for studying the basic physics of trions but also for the application of these species in fields such as photonics, electronics, and bioimaging.
Method, memory media and apparatus for detection of grid disconnect
Ye, Zhihong [Clifton Park, NY]; Du, Pengwei [Troy, NY]
2008-09-23
A phase shift procedure for detecting a disconnect of a power grid from a feeder that is connected to a load and a distributed generator. The phase shift procedure compares a current phase shift of the output voltage of the distributed generator with a predetermined threshold and if greater, a command is issued for a disconnect of the distributed generator from the feeder. To extend the range of detection, the phase shift procedure is used when a power mismatch between the distributed generator and the load exceeds a threshold and either or both of an under/over frequency procedure and an under/over voltage procedure is used when any power mismatch does not exceed the threshold.
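A toy rendering of the decision logic described in the abstract is sketched below; the thresholds, units, and measurement fields are hypothetical placeholders rather than values from the patent.

```python
# Illustrative islanding-detection decision logic; all numbers are invented.
from dataclasses import dataclass

@dataclass
class Measurements:
    phase_shift_deg: float    # phase shift of DG output voltage vs. reference
    frequency_hz: float
    voltage_pu: float         # per-unit terminal voltage
    power_mismatch_pu: float  # |P_DG - P_load| / P_rated

def should_trip(m: Measurements,
                mismatch_thresh=0.1, phase_thresh=10.0,
                f_lo=59.3, f_hi=60.5, v_lo=0.88, v_hi=1.10) -> bool:
    """Return True if the distributed generator should disconnect from the feeder."""
    if m.power_mismatch_pu > mismatch_thresh:
        # Per the abstract: apply the phase-shift test when the mismatch exceeds the threshold.
        return abs(m.phase_shift_deg) > phase_thresh
    # Otherwise: fall back on under/over frequency and under/over voltage windows.
    return not (f_lo <= m.frequency_hz <= f_hi) or not (v_lo <= m.voltage_pu <= v_hi)

# Example: large mismatch and a 15-degree shift -> trip
print(should_trip(Measurements(15.0, 60.0, 1.0, 0.3)))
```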
Effect of Rayleigh-scattering distributed feedback on multiwavelength Raman fiber laser generation.
El-Taher, A E; Harper, P; Babin, S A; Churkin, D V; Podivilov, E V; Ania-Castanon, J D; Turitsyn, S K
2011-01-15
We experimentally demonstrate a Raman fiber laser based on multiple point-action fiber Bragg grating reflectors and distributed feedback via Rayleigh scattering in an ~22-km-long optical fiber. Twenty-two lasing lines with spacing of ~100 GHz (close to the International Telecommunication Union grid) in the C band are generated at the watt level. In contrast to the normal cavity with competition between laser lines, the random distributed feedback cavity exhibits highly stable multiwavelength generation with a power-equalized uniform distribution, which is almost independent of power.
Vegetation dynamics at the upper elevational limit of vascular plants in Himalaya.
Dolezal, Jiri; Dvorsky, Miroslav; Kopecky, Martin; Liancourt, Pierre; Hiiesalu, Inga; Macek, Martin; Altman, Jan; Chlumska, Zuzana; Rehakova, Klara; Capkova, Katerina; Borovec, Jakub; Mudrak, Ondrej; Wild, Jan; Schweingruber, Fritz
2016-05-04
A rapid warming in Himalayas is predicted to increase plant upper distributional limits, vegetation cover and abundance of species adapted to warmer climate. We explored these predictions in NW Himalayas, by revisiting uppermost plant populations after ten years (2003-2013), detailed monitoring of vegetation changes in permanent plots (2009-2012), and age analysis of plants growing from 5500 to 6150 m. Plant traits and microclimate variables were recorded to explain observed vegetation changes. The elevation limits of several species shifted up to 6150 m, about 150 vertical meters above the limit of continuous plant distribution. The plant age analysis corroborated the hypothesis of warming-driven uphill migration. However, the impact of warming interacts with increasing precipitation and physical disturbance. The extreme summer snowfall event in 2010 is likely responsible for substantial decrease in plant cover in both alpine and subnival vegetation and compositional shift towards species preferring wetter habitats. Simultaneous increase in summer temperature and precipitation caused rapid snow melt and, coupled with frequent night frosts, generated multiple freeze-thaw cycles detrimental to subnival plants. Our results suggest that plant species responses to ongoing climate change will not be unidirectional upward range shifts but rather multi-dimensional, species-specific and spatially variable.
CRAB3: Establishing a new generation of services for distributed analysis at CMS
NASA Astrophysics Data System (ADS)
Cinquilli, M.; Spiga, D.; Grandi, C.; Hernàndez, J. M.; Konstantinov, P.; Mascheroni, M.; Riahi, H.; Vaandering, E.
2012-12-01
In CMS Computing, the highest priorities for analysis tools are the improvement of the end users' ability to produce and publish reliable samples and analysis results as well as a transition to a sustainable development and operations model. To achieve these goals CMS decided to incorporate analysis processing into the same framework as data and simulation processing. This strategy foresees that all workload tools (Tier0, Tier1, production, analysis) share a common core with long-term maintainability as well as the standardization of the operator interfaces. The re-engineered analysis workload manager, called CRAB3, makes use of newer technologies, such as RESTful web services and NoSQL databases, aiming to increase the scalability and reliability of the system. As opposed to CRAB2, in CRAB3 all work is centrally injected and managed in a global queue. A pool of agents, which can be geographically distributed, consumes work from the central services serving the user tasks. The new architecture of CRAB substantially changes the deployment model and operations activities. In this paper we present the implementation of CRAB3, emphasizing how the new architecture improves the workflow automation and simplifies maintainability. In particular, we will highlight the impact of the new design on daily operations.
Automatic information extraction from unstructured mammography reports using distributed semantics.
Gupta, Anupama; Banerjee, Imon; Rubin, Daniel L
2018-02-01
To date, the methods developed for automated extraction of information from radiology reports are mainly rule-based or dictionary-based, and, therefore, require substantial manual effort to build these systems. Recent efforts to develop automated systems for entity detection have been undertaken, but little work has been done to automatically extract relations and their associated named entities in narrative radiology reports with accuracy comparable to rule-based methods. Our goal is to extract relations in an unsupervised way from radiology reports without specifying prior domain knowledge. We propose a hybrid approach for information extraction that combines dependency-based parse trees with distributed semantics for generating structured information frames about particular findings/abnormalities from free-text mammography reports. The proposed IE system obtains an F1-score of 0.94 in terms of completeness of the content in the information frames, which outperforms a state-of-the-art rule-based system in this domain by a significant margin. The proposed system can be leveraged in a variety of applications, such as decision support and information retrieval, and may also easily scale to other radiology domains, since there is no need to tune the system with hand-crafted information extraction rules. Copyright © 2018 Elsevier Inc. All rights reserved.
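As a toy illustration of dependency-based frame extraction (not the authors' system, which also incorporates distributed semantics), the sketch below gathers modifiers and prepositional attachments of a finding term from a parse tree. It assumes the spaCy library with its small English model installed; the finding vocabulary and example sentence are invented.

```python
# Toy dependency-based extraction of (finding, attributes, location) frames.
import spacy

nlp = spacy.load("en_core_web_sm")
report = "There is an irregular spiculated mass in the upper outer quadrant of the left breast."

FINDINGS = {"mass", "calcification", "asymmetry", "distortion"}  # hypothetical vocabulary

doc = nlp(report)
frames = []
for tok in doc:
    if tok.lemma_.lower() in FINDINGS:
        # adjectival/compound modifiers attached to the finding token
        attrs = [c.text for c in tok.children if c.dep_ in ("amod", "compound")]
        # location: prepositional phrases hanging off the finding token
        locs = [" ".join(w.text for w in p.subtree)
                for p in tok.children if p.dep_ == "prep"]
        frames.append({"finding": tok.text, "attributes": attrs, "location": locs})
print(frames)
```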
Papageorgiou, Spyridon N; Kloukos, Dimitrios; Petridis, Haralampos; Pandis, Nikolaos
2015-01-01
The objective of this study was to assess the risk of bias of randomized controlled trials (RCTs) published in prosthodontic and implant dentistry journals. The last 30 issues of 9 journals in the field of prosthodontic and implant dentistry (Clinical Implant Dentistry and Related Research, Clinical Oral Implants Research, Implant Dentistry, International Journal of Oral & Maxillofacial Implants, International Journal of Periodontics and Restorative Dentistry, International Journal of Prosthodontics, Journal of Dentistry, Journal of Oral Rehabilitation, and Journal of Prosthetic Dentistry) were hand-searched for RCTs. Risk of bias was assessed using the Cochrane Collaboration's risk of bias tool and analyzed descriptively. From the 3,667 articles screened, a total of 147 RCTs were identified and included. The number of published RCTs increased with time. The overall distribution of a high risk of bias assessment varied across the domains of the Cochrane risk of bias tool: 8% for random sequence generation, 18% for allocation concealment, 41% for masking, 47% for blinding of outcome assessment, 7% for incomplete outcome data, 12% for selective reporting, and 41% for other biases. The distribution of high risk of bias for RCTs published in the selected prosthodontic and implant dentistry journals varied among journals and ranged from 8% to 47%, which can be considered as substantial.
Mondal, Ananya; Das, Subhasish; Sah, Rajesh Kumar; Bhattacharyya, Pradip; Bhattacharya, Satya Sundar
2017-12-31
Coal-fired brick kiln factories generate significant amounts of brick kiln bottom ash (BKBA) that contaminate soil and water environments of areas near the dumping sites through leaching of toxic metals (Pb, Cr, Cd, Zn, Mn, and Cu). However, the characteristics and environmental effects of BKBAs are as yet unknown. We collected BKBA samples from 32 strategic locations of two rapidly developing States (West Bengal and Assam) of India. Scanning electron microscope images indicated spherical and granular structures of BKBAs produced in West Bengal (WBKBA) and Assam (ABKBA), respectively, while energy dispersive spectroscopy and analytical assessments confirmed substantial occurrence of total organic C and nutrient elements (N, P, K, Ca, Mg, and S) in both the BKBAs. FTIR analysis revealed greater predominance of organic matter in ABKBAs than WBKBAs. Occurrence of toxic metals (Cd, Cr, Pb, Zn, Mn, and Cu) was higher in ABKBAs than in WBKBAs, while organic and residual fractions of metals were highly predominant in most of the BKBAs. Principal component analysis showed that metal contents and pH were the major distinguishing characteristics of the BKBAs generated in the two different environmental locations. Human health risk associated with BKBAs generated in Assam is of significant concern. Finally, geo-statistical tools enabled prediction of the spatial distribution patterns of toxic metals contributed by the BKBAs in Assam and West Bengal, respectively. Assessment of contamination index, geo-accumulation index, and ecological risk index revealed some BKBAs to be more toxic than others. Copyright © 2017. Published by Elsevier B.V.
NASA Astrophysics Data System (ADS)
Lutermann, Heike; Medger, Katarina; Horak, Ivan G.
2012-02-01
The distribution of parasites is often characterised by substantial aggregation with a small proportion of hosts harbouring the majority of parasites. This pattern can be generated by abiotic and biotic factors that affect hosts and determine host exposure and susceptibility to parasites. Climate factors can change a host's investment in life-history traits (e.g. growth, reproduction) generating temporal patterns of parasite aggregation. Similarly, host age may affect such investment. Furthermore, sex-biased parasitism is common among vertebrates and has been linked to sexual dimorphism in morphology, behaviour and physiology. Studies exploring sex-biased parasitism have been almost exclusively conducted on polygynous species where dimorphic traits are often correlated. We investigated the effects of season and life-history traits on tick loads of the monogamous eastern rock sengi ( Elephantulus myurus). We found larger tick burdens during the non-breeding season possibly as a result of energetic constraints and/or climate effects on the tick. Reproductive investment resulted in increased larval abundance for females but not males and may be linked to sex-specific life-history strategies. The costs of reproduction could also explain the observed age effect with yearling individuals harbouring lower larval burdens than adults. Although adult males had the greatest larval tick loads, host sex appears to play a minor role in generating the observed parasite heterogeneities. Our study suggests that reproductive investment plays a major role for parasite patterns in the study species.
An efficient algorithm for global periodic orbits generation near irregular-shaped asteroids
NASA Astrophysics Data System (ADS)
Shang, Haibin; Wu, Xiaoyu; Ren, Yuan; Shan, Jinjun
2017-07-01
Periodic orbits (POs) play an important role in understanding dynamical behaviors around natural celestial bodies. In this study, an efficient algorithm was presented to generate the global POs around irregular-shaped uniformly rotating asteroids. The algorithm was performed in three steps, namely global search, local refinement, and model continuation. First, a mascon model with a low number of particles and optimized mass distribution was constructed to remodel the exterior gravitational potential of the asteroid. Using this model, a multi-start differential evolution enhanced with a deflection strategy with strong global exploration and bypassing abilities was adopted. This algorithm can be regarded as a search engine to find multiple globally optimal regions in which potential POs were located. This was followed by applying a differential correction to locally refine global search solutions and generate the accurate POs in the mascon model in which an analytical Jacobian matrix was derived to improve convergence. Finally, the concept of numerical model continuation was introduced and used to convert the POs from the mascon model into a high-fidelity polyhedron model by sequentially correcting the initial states. The efficiency of the proposed algorithm was substantiated by computing the global POs around an elongated shoe-shaped asteroid 433 Eros. Various global POs with different topological structures in the configuration space were successfully located. Specifically, the proposed algorithm was generic and could be conveniently extended to explore periodic motions in other gravitational systems.
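A schematic of the multi-start global-search stage is sketched below using SciPy's stock differential evolution in place of the paper's deflection-enhanced variant; the objective function, bounds, and de-duplication radius are placeholders standing in for the periodicity defect evaluated in the mascon gravity field.

```python
# Schematic multi-start search for candidate periodic-orbit regions.
import numpy as np
from scipy.optimize import differential_evolution

def periodicity_defect(z):
    """Placeholder objective: norm of the state mismatch after one period.
    In the real problem this would propagate z = (r0, v0, T) in the mascon field."""
    return np.linalg.norm(np.sin(z) + 0.1 * z)      # stand-in multi-modal function

bounds = [(-2.0, 2.0)] * 7                           # e.g. 6 state components + period
candidates = []
for seed in range(10):                               # multi-start: restart from new seeds
    res = differential_evolution(periodicity_defect, bounds, seed=seed,
                                 maxiter=200, polish=True)
    # keep solutions that are not near an already-found candidate
    if all(np.linalg.norm(res.x - c.x) > 0.5 for c in candidates):
        candidates.append(res)

# Each surviving candidate would then be refined by differential correction and
# continued from the mascon model to the polyhedron model.
for c in candidates:
    print(np.round(c.x, 3), c.fun)
```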
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chaturvedi, Vaibhav; Clarke, Leon E.; Edmonds, James A.
Electrification plays a crucial role in cost-effective greenhouse gas emissions mitigation strategies. Such strategies in turn carry implications for financial capital markets. This paper explores the implications of climate mitigation policy for capital investment demands by the electric power sector on decade to century time scales. We go further to explore the implications of technology performance and the stringency of climate policy for capital investment demands by the power sector. Finally, we discuss the regional distribution of investment demands. We find that stabilizing GHG emissions will require additional investment in the electricity generation sector over and above investments that would be needed in the absence of climate policy, in the range of 16 to 29 trillion US$ (60-110%) depending on the stringency of climate policy during the period 2015 to 2095 under default technology assumptions. This increase reflects the higher capital intensity of power systems that control emissions. Limits on the penetration of nuclear and carbon capture and storage technology could increase costs substantially. Energy efficiency improvements can reduce the investment requirement by 8 to 21 trillion US$ (default technology assumptions), depending on the climate policy scenario, with higher savings being obtained under the most stringent climate policy. The heaviest investments in power generation were observed in the China, India, SE Asia and Africa regions, with the latter three regions dominating in the second half of the 21st century.
Blackwood, Christopher B; Hudleston, Deborah; Zak, Donald R; Buyer, Jeffrey S
2007-08-01
Ecological diversity indices are frequently applied to molecular profiling methods, such as terminal restriction fragment length polymorphism (T-RFLP), in order to compare diversity among microbial communities. We performed simulations to determine whether diversity indices calculated from T-RFLP profiles could reflect the true diversity of the underlying communities despite potential analytical artifacts. These include multiple taxa generating the same terminal restriction fragment (TRF) and rare TRFs being excluded by a relative abundance (fluorescence) threshold. True community diversity was simulated using the lognormal species abundance distribution. Simulated T-RFLP profiles were generated by assigning each species a TRF size based on an empirical or modeled TRF size distribution. With a typical threshold (1%), the only consistently useful relationship was between Smith and Wilson evenness applied to T-RFLP data (TRF-E(var)) and true Shannon diversity (H'), with correlations between 0.71 and 0.81. TRF-H' and true H' were well correlated in the simulations using the lowest number of species, but this correlation declined substantially in simulations using greater numbers of species, to the point where TRF-H' cannot be considered a useful statistic. The relationships between TRF diversity indices and true indices were sensitive to the relative abundance threshold, with greatly improved correlations observed using a 0.1% threshold, which was investigated for comparative purposes but is not possible to consistently achieve with current technology. In general, the use of diversity indices on T-RFLP data provides inaccurate estimates of true diversity in microbial communities (with the possible exception of TRF-E(var)). We suggest that, where significant differences in T-RFLP diversity indices were found in previous work, these should be reinterpreted as a reflection of differences in community composition rather than a true difference in community diversity.
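The simulation logic can be sketched compactly: draw a lognormal community, collapse species into TRF bins, apply a relative-abundance cutoff, and compare diversity indices of the resulting profile with those of the true community. The snippet below is an illustrative Python version with invented bin counts and parameters, not the authors' code; the Shannon index and Smith-Wilson E_var follow their standard definitions.

```python
# Illustrative T-RFLP diversity simulation.
import numpy as np

rng = np.random.default_rng(42)

def shannon(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def evar(x):
    """Smith & Wilson evenness E_var on abundances x > 0."""
    lx = np.log(x[x > 0])
    return 1.0 - (2.0 / np.pi) * np.arctan(np.var(lx))

n_species, n_trf_bins, threshold = 500, 120, 0.01
abund = rng.lognormal(mean=0.0, sigma=1.5, size=n_species)    # true community abundances
true_p = abund / abund.sum()

trf = rng.integers(0, n_trf_bins, size=n_species)             # species -> TRF size bin
profile = np.bincount(trf, weights=abund, minlength=n_trf_bins)
rel = profile / profile.sum()
rel = np.where(rel >= threshold, rel, 0.0)                    # 1% fluorescence cutoff
rel = rel / rel.sum()

print("true H' =", round(shannon(true_p), 2), " TRF H' =", round(shannon(rel), 2))
print("true Evar =", round(evar(abund), 2), " TRF Evar =", round(evar(profile[rel > 0]), 2))
```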
Mohammed, Hlack; Roberts, Daryl L; Copley, Mark; Hammond, Mark; Nichols, Steven C; Mitchell, Jolyon P
2012-09-01
Current pharmacopeial methods for testing dry powder inhalers (DPIs) require that 4.0 L be drawn through the inhaler to quantify the aerodynamic particle size distribution of "inhaled" particles. This volume comfortably exceeds the internal dead volume of the Andersen eight-stage cascade impactor (ACI) and Next Generation pharmaceutical Impactor (NGI) as designated multistage cascade impactors. Two DPIs, the second (DPI-B) having similar resistance to the first (DPI-A), were used to evaluate ACI and NGI performance at 60 L/min following the methodology described in the European and United States Pharmacopeias. At sampling times ≥2 s (equivalent to volumes ≥2.0 L), both impactors provided consistent measures of therapeutically important fine particle mass (FPM) from both DPIs, independent of sample duration. At shorter sample times, FPM decreased substantially with the NGI, indicative of incomplete aerosol bolus transfer through the system whose dead space was 2.025 L. However, the ACI provided consistent measures of both variables across the range of sampled volumes evaluated, even when this volume was less than 50% of its internal dead space of 1.155 L. Such behavior may be indicative of maldistribution of the flow profile from the relatively narrow exit of the induction port to the uppermost stage of the impactor at start-up. An explanation of the anomalous ACI behavior from first principles requires resolution of the rapidly changing unsteady flow and pressure conditions at start-up, and is the subject of ongoing research by the European Pharmaceutical Aerosol Group. Meanwhile, these experimental findings are provided to advocate a prudent approach by retaining the current pharmacopeial methodology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mahfuz, H.; Maniruzzaman, M.; Vaidya, U.
1997-04-01
Monotonic tensile and fatigue response of continuous silicon carbide fiber reinforced silicon nitride (SiCf/Si3N4) composites has been investigated. The monotonic tensile tests have been performed at room and elevated temperatures. Fatigue tests have been conducted at room temperature (RT), at a stress ratio R = 0.1 and a frequency of 5 Hz. It is observed during the monotonic tests that the composites retain only 30% of their room-temperature strength at 1,600 C, suggesting a substantial chemical degradation of the matrix at that temperature. The softening of the matrix at elevated temperature also causes a reduction in tensile modulus, and the total reduction in modulus is around 45%. Fatigue data have been generated at three load levels and the fatigue strength of the composite has been found to be considerably high, about 75% of its ultimate room temperature strength. Extensive statistical analysis has been performed to understand the degree of scatter in the fatigue as well as in the static test data. Weibull shape factors and characteristic values have been determined for each set of tests and their relationship with the response of the composites has been discussed. A statistical fatigue life prediction method developed from the Weibull distribution is also presented. A Maximum Likelihood Estimator with censoring techniques and data pooling schemes has been employed to determine the distribution parameters for the statistical analysis. These parameters have been used to generate the S-N diagram with a desired level of reliability. Details of the statistical analysis and the discussion of the static and fatigue behavior of the composites are presented in this paper.
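The Weibull treatment described above can be illustrated with a minimal two-parameter maximum-likelihood fit; the snippet below uses SciPy on made-up fatigue-life data and omits the censoring and data-pooling schemes used in the paper.

```python
# Minimal two-parameter Weibull MLE on illustrative (invented) fatigue-life data.
import numpy as np
from scipy import stats

lives = np.array([1.2e4, 2.3e4, 3.1e4, 4.8e4, 5.5e4, 7.9e4, 9.6e4])  # cycles to failure

# Fit shape and scale with the location fixed at zero:
shape, loc, scale = stats.weibull_min.fit(lives, floc=0)
print(f"Weibull shape factor m   = {shape:.2f}")
print(f"characteristic life  eta = {scale:.3g} cycles")

# Life with 95% reliability (5th percentile of the fitted distribution):
n95 = stats.weibull_min.ppf(0.05, shape, loc=0, scale=scale)
print(f"life at 95% reliability  = {n95:.3g} cycles")
```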
MODIS-Aqua detects Noctiluca scintillans and hotspots in the central Arabian Sea.
Dwivedi, R; Priyaja, P; Rafeeq, M; Sudhakar, M
2016-01-01
The northern Arabian Sea is considered an ecologically sensitive area, as it experiences a massive upwelling and a long-lasting algal bloom, Noctiluca scintillans (green tide), during summer and spring-winter, respectively. A diatom bloom is also found to be co-located with N. scintillans, and both have an impact on the ecology of the basin. An in-house technique for detecting the species of these blooms from Moderate Resolution Imaging Spectroradiometer (MODIS)-Aqua data was used to generate a time series of images revealing their spatial distribution. A study of the spatial-temporal variability of these blooms using satellite data revealed a cyclic pattern of their spread over a period of 13 years. An average distribution of the blooms for the January-March period revealed a peak in 2015 and a minimum in 2013. Subsequently, a time series of phytoplankton species images was generated for these 2 years to study their inter-annual variability and the associated factors. Species images during the active phase of the bloom (February) in 2015 indicated the development of N. scintillans and diatoms in the central Arabian Sea as well, up to 12° N. This observation was substantiated with relevant oceanic parameters measured from the ship as well as satellite data, and this is a highlight of the paper. While oxygen depletion and the release of ammonia associated with N. scintillans are detrimental to waters on the western side, the effect is relatively less extreme on the eastern side, where it supports the entire food chain. In view of these contrasting eco-sensitive events, it is a matter of concern to identify biologically active persistent areas, hot spots, in order to study their ecology in detail. An ecological index, the persistence of the bloom, was derived from the time series of species images; this is another highlight of our study.
Proton beam generation of whistler waves in the earth's foreshock
NASA Technical Reports Server (NTRS)
Wong, H. K.; Goldstein, M. L.
1987-01-01
It is shown that proton beams, often observed upstream of the earth's bow shock and associated with the generation of low-frequency hydromagnetic fluctuations, are also capable of generating whistler waves. The waves can be excited by an instability driven by two-temperature streaming Maxwellian proton distributions which have T(perpendicular)/T(parallel) much greater than 1. They can also be excited by gyrating proton beam distributions. These distributions generate whistler waves with frequencies ranging from 10 to 100 times the proton cyclotron frequency (in the solar wind reference frame) and provide another mechanism for generating the '1-Hz' waves often seen in the earth's foreshock.
Fuel cell using a hydrogen generation system
Dentinger, Paul M.; Crowell, Jeffrey A. W.
2010-10-19
A system is described for storing and generating hydrogen and, in particular, a system for storing and generating hydrogen for use in an H.sub.2/O.sub.2 fuel cell. The hydrogen storage system uses beta particles from a beta particle emitting material to degrade an organic polymer material to release substantially pure hydrogen. In a preferred embodiment of the invention, beta particles from .sup.63Ni are used to release hydrogen from linear polyethylene.
Hydrogen storage and generation system
Dentinger, Paul M.; Crowell, Jeffrey A. W.
2010-08-24
A system for storing and generating hydrogen and, in particular, for use in an H.sub.2/O.sub.2 fuel cell, is described. The hydrogen storage system uses the beta particles from a beta particle emitting material to degrade an organic polymer material to release substantially pure hydrogen. In a preferred embodiment of the invention, beta particles from .sup.63Ni are used to release hydrogen from linear polyethylene.
a Framework for Distributed Mixed Language Scientific Applications
NASA Astrophysics Data System (ADS)
Quarrie, D. R.
The Object Management Group has defined an architecture (CORBA) for distributed object applications based on an Object Request Broker and Interface Definition Language. This project builds upon this architecture to establish a framework for the creation of mixed language scientific applications. A prototype compiler has been written that generates FORTRAN 90 or Eiffel stubs and skeletons and the required C++ glue code from an input IDL file that specifies object interfaces. This generated code can be used directly for non-distributed mixed language applications or in conjunction with the C++ code generated from a commercial IDL compiler for distributed applications. A feasibility study is presently underway to see whether a fully integrated software development environment for distributed, mixed-language applications can be created by modifying the back-end code generator of a commercial CASE tool to emit IDL.
NASA Astrophysics Data System (ADS)
Werner, C. L.; Wegmuller, U.; Strozzi, T.; Wiesmann, A.
2006-12-01
Principal contributors to the noise in differential SAR interferograms are the temporal phase stability of the surface, geometry relating to baseline and surface slope, and propagation path delay variations due to tropospheric water vapor and the ionosphere. Time series analysis of multiple interferograms generated from a stack of SAR SLC images seeks to determine the deformation history of the surface while reducing errors. Only those scatterers within a resolution element that are stable and coherent for each interferometric pair contribute to the desired deformation signal. Interferograms with baselines exceeding 1/3 of the critical baseline have substantial geometrical decorrelation for distributed targets. Short-baseline pairs with multiple reference scenes can be combined using least-squares estimation to obtain a global deformation solution. Alternatively, point-like persistent scatterers that do not exhibit the geometrical decorrelation associated with large baselines can be identified in the scenes. In this approach interferograms are formed from a stack of SAR complex images using a single reference scene. Stable distributed-scatterer pixels are, however, excluded due to the presence of large baselines. We apply both point-based and short-baseline methodologies and compare results for a stack of fine-beam Radarsat data acquired in 2002-2004 over a rapidly subsiding oil field near Lost Hills, CA. We also investigate the density of point-like scatterers with respect to image resolution. The primary difficulty encountered when applying time series methods is phase unwrapping errors due to spatial and temporal gaps. Phase unwrapping requires sufficient spatial and temporal sampling. Increasing the SAR range bandwidth increases the range resolution as well as the critical interferometric baseline that defines the required satellite orbital tube diameter. Sufficient spatial sampling also permits unwrapping because of the reduced phase-per-pixel gradient. Short time intervals further reduce the differential phase due to deformation when the deformation is continuous. Lower frequency systems (L- vs. C-band) substantially improve the ability to unwrap the phase correctly by directly reducing both the interferometric phase amplitude and temporal decorrelation.
NASA Astrophysics Data System (ADS)
Campbell, Kirby R.; Campagnola, Paul J.
2017-11-01
The collagen architecture in all human ovarian cancers is substantially remodeled, where these alterations are manifested in different fiber widths, fiber patterns, and fibril size and packing. Second harmonic generation (SHG) microscopy has differentiated normal tissues from high-grade serous (HGS) tumors with high accuracy; however, the classification between low-grade serous, endometrioid, and benign tumors was less successful. We postulate this is due to the known higher genetic variation in these tissues relative to HGS tumors, which are genetically similar, and that this results in more heterogeneous collagen remodeling in the respective matrix. Here, we examine fiber widths and SHG emission intensity and directionality locally within images (e.g., 10×10 microns) and show that normal tissues and HGS tumors are more uniform in fiber properties as well as in fibril size and packing than the other tissues. Moreover, these distributions are in good agreement with phase matching considerations relating SHG emission directionality and intensity. The findings show that, in addition to average collagen assembly properties, the intrinsic heterogeneity must also be considered as another aspect of characterization. These local analyses revealed differences not seen in purely intensity-based image analyses and may provide further insight into the disease etiology of the different tumor subtypes.
Recent advances in lossy compression of scientific floating-point data
NASA Astrophysics Data System (ADS)
Lindstrom, P.
2017-12-01
With a continuing exponential trend in supercomputer performance, ever larger data sets are being generated through numerical simulation. Bandwidth and storage capacity are, however, not keeping pace with this increase in data size, causing significant data movement bottlenecks in simulation codes and substantial monetary costs associated with archiving vast volumes of data. Worse yet, ever smaller fractions of data generated can be stored for further analysis, where scientists frequently rely on decimating or averaging large data sets in time and/or space. One way to mitigate these problems is to employ data compression to reduce data volumes. However, lossless compression of floating-point data can achieve only very modest size reductions on the order of 10-50%. We present ZFP and FPZIP, two state-of-the-art lossy compressors for structured floating-point data that routinely achieve one to two orders of magnitude reduction with little to no impact on the accuracy of visualization and quantitative data analysis. We provide examples of the use of such lossy compressors in climate and seismic modeling applications to effectively accelerate I/O and reduce storage requirements. We further discuss how the design decisions behind these and other compressors impact error distributions and other statistical and differential properties, including derived quantities of interest relevant to each science application.
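The trade-off the abstract describes (large size reductions in exchange for a bounded pointwise error) can be illustrated with a toy error-bounded compressor. The sketch below is not the ZFP or FPZIP algorithm; it uses plain uniform quantization and an entropy estimate purely to show how a fixed absolute-error tolerance maps to an approximate compression factor. The field and tolerance are made-up values.

```python
"""Sketch: error-bounded lossy compression on a toy floating-point field.

This is NOT the ZFP or FPZIP algorithm -- just uniform scalar quantization,
used to illustrate the kind of trade-off (compression factor vs. bounded
pointwise error) discussed above. The field and tolerance are invented.
"""
import numpy as np

rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=(256, 256)), axis=1)   # smooth-ish 2-D field
tol = 1e-2                                            # requested absolute error bound

# Quantize to a uniform grid of width 2*tol, so reconstruction error <= tol.
q = np.round(x / (2.0 * tol)).astype(np.int64)
x_rec = q * (2.0 * tol)

# Estimate compressed size from the entropy of the quantized symbols.
_, counts = np.unique(q, return_counts=True)
p = counts / counts.sum()
bits_per_value = -(p * np.log2(p)).sum()

print(f"max abs error    : {np.abs(x - x_rec).max():.3e} (bound {tol:.0e})")
print(f"entropy estimate : {bits_per_value:.2f} bits/value "
      f"(vs. 64 for double precision, ~{64 / bits_per_value:.1f}x reduction)")
```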
Surface elevation change on ice caps in the Qaanaaq region, northwestern Greenland
NASA Astrophysics Data System (ADS)
Saito, Jun; Sugiyama, Shin; Tsutaki, Shun; Sawagaki, Takanobu
2016-09-01
A large number of glaciers and ice caps (GICs) are distributed along the Greenland coast, physically separated from the ice sheet. The total area of these GICs accounts for 5% of Greenland's ice cover. Meltwater input from the GICs to the ocean contributed substantially to sea-level rise over the last century. Here, we report surface elevation changes of six ice caps near Qaanaaq (77°28‧N, 69°13‧W) in northwestern Greenland based on photogrammetric analysis of stereo-pair satellite images. We processed the images with a digital map plotting instrument to generate digital elevation models (DEMs) for 2006 and 2010 with a grid resolution of 500 m. The generated DEMs were compared to measure surface elevation changes between 2006 and 2010. Over the study area of the six ice caps, covering 1215 km2, the mean rate of elevation change was -1.1 ± 0.1 m a-1. This rate is significantly greater than that previously reported for the 2003-2008 period (-0.6 ± 0.1 m a-1) for GICs across all of northwestern Greenland. This increased mass loss is consistent with the rise in summer temperatures in this region at a rate of 0.12 °C a-1 over the 1997-2013 period.
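A minimal sketch of the elevation-change-rate calculation described above, using two hypothetical 500 m DEM grids rather than the study's photogrammetric DEMs:

```python
"""Sketch: mean surface-elevation change rate from two gridded DEMs.

Hypothetical 500 m grids for 2006 and 2010; NaN cells stand for off-ice / no-data
areas. Mirrors only the simple rate calculation, not the photogrammetric workflow.
"""
import numpy as np

dem_2006 = np.random.default_rng(1).normal(800.0, 50.0, size=(100, 100))
dem_2010 = dem_2006 - 4.4 + np.random.default_rng(2).normal(0.0, 0.5, size=(100, 100))
dem_2010[0:10, 0:10] = np.nan            # pretend some cells are off-ice / no data

dt_years = 2010 - 2006
dh_dt = (dem_2010 - dem_2006) / dt_years  # m per year, per grid cell

cell_area_km2 = 0.5 * 0.5                 # 500 m grid spacing
valid = ~np.isnan(dh_dt)
print(f"area sampled : {valid.sum() * cell_area_km2:.0f} km^2")
print(f"mean dh/dt   : {np.nanmean(dh_dt):+.2f} m a^-1")
```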
The continuum of hydroclimate variability in western North America during the last millennium
Ault, Toby R.; Cole, Julia E.; Overpeck, Jonathan T.; Pederson, Gregory T.; St. George, Scott; Otto-Bliesner, Bette; Woodhouse, Connie A.; Deser, Clara
2013-01-01
The distribution of climatic variance across the frequency spectrum has substantial importance for anticipating how climate will evolve in the future. Here we estimate power spectra and power laws (β) from instrumental, proxy, and climate model data to characterize the hydroclimate continuum in western North America (WNA). We test the significance of our estimates of spectral densities and β against the null hypothesis that they reflect solely the effects of local (non-climate) sources of autocorrelation at the monthly timescale. Although tree-ring based hydroclimate reconstructions are generally consistent with this null hypothesis, values of β calculated from long moisture-sensitive chronologies (as opposed to reconstructions), and from other types of hydroclimate proxies, exceed null expectations. We therefore argue that there is more low-frequency variability in hydroclimate than monthly autocorrelation alone can generate. Coupled model results archived as part of the Coupled Model Intercomparison Project phase 5 (CMIP5) are consistent with the null hypothesis and appear unable to generate variance in hydroclimate commensurate with paleoclimate records. Consequently, at decadal to multidecadal timescales there is more variability in instrumental and proxy data than in the models, suggesting that the risk of prolonged droughts under climate change may be underestimated by CMIP5 simulations of the future.
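A minimal sketch of how a spectral power-law exponent β can be estimated from a monthly series, assuming a periodogram plus a log-log least-squares fit; the AR(1) series stands in for the monthly-autocorrelation null hypothesis discussed above and is not the instrumental, proxy, or CMIP5 data used in the study.

```python
"""Sketch: estimating a hydroclimate power-law exponent (beta) from a time series.

Uses a periodogram and a log-log least-squares fit, S(f) ~ f^(-beta).
The AR(1) series below is a stand-in for a monthly moisture index.
"""
import numpy as np
from scipy.signal import periodogram

rng = np.random.default_rng(0)
n, phi = 12 * 500, 0.3                 # 500 "years" of monthly data, AR(1) memory
x = np.zeros(n)
for t in range(1, n):                  # AR(1): the monthly-autocorrelation null
    x[t] = phi * x[t - 1] + rng.normal()

freq, spec = periodogram(x, fs=12.0)   # frequencies in cycles per year
keep = (freq > 0) & (freq < 0.5)       # decadal-to-multidecadal band, f < 0.5 yr^-1
slope, _ = np.polyfit(np.log(freq[keep]), np.log(spec[keep]), 1)
print(f"estimated beta = {-slope:.2f}")  # beta > 0 means redder than white noise
```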
Arthur M. Phillips; Debra J. Kennedy; Barbara G. Phillips; Diedre Weage
2001-01-01
Surveys for Paradine plains cactus (Pediocactus paradinei B. W. Benson) conducted for the Kaibab National Forest, North Kaibab Ranger District in 1992-94 qualitatively showed a fairly substantial population of scattered individuals in the pinyon-juniper woodland, and indicated that there might be a correlation between plant distribution and dripline of trees. This...
Flórez-Arango, José F; Sriram Iyengar, M; Caicedo, Indira T; Escobar, German
2017-01-01
Development and electronic distribution of Clinical Practice Guidelines production is costly and challenging. This poster presents a rapid method to represent existing guidelines in auditable, computer executable multimedia format. We used a technology that enables a small number of clinicians to, in a short period of time, develop a substantial amount of computer executable guidelines without programming.
NASA Astrophysics Data System (ADS)
Chen, Fan; Huang, Shaoxiong; Ding, Jinjin; Ding, Jinjin; Gao, Bo; Xie, Yuguang; Wang, Xiaoming
2018-01-01
This paper proposes a fast reliability assessment method for distribution grids with distributed renewable energy generation. First, the Weibull distribution and the Beta distribution are used to describe the probability distribution characteristics of wind speed and solar irradiance, respectively, and models of the wind farm, solar park, and local load are built for reliability assessment. Then, based on power system production cost simulation, probability discretization, and linearized power flow, an optimal power flow with the objective of minimizing the cost of conventional power generation is solved. A reliability assessment for the distribution grid is thus carried out quickly and accurately. The Loss Of Load Probability (LOLP) and Expected Energy Not Supplied (EENS) are selected as the reliability indices; a simulation of the IEEE RBTS BUS6 system in MATLAB indicates that the fast reliability assessment method calculates the reliability indices much faster than the Monte Carlo method while maintaining accuracy.
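The generation and load models named above (Weibull wind speed, Beta solar irradiance) can be sketched as follows. Note that the paper's point is to discretize these distributions and avoid Monte Carlo sampling; this sketch simply draws hourly samples to show how the inputs feed the LOLP and EENS indices, and every parameter (shape factors, turbine power curve, load level) is an assumption rather than a value from the paper.

```python
"""Sketch: probabilistic generation models behind a distribution-grid adequacy check.

Wind speed ~ Weibull, solar irradiance ~ Beta (scaled), as stated above.
All parameters (shape/scale, turbine ratings, load) are illustrative only.
"""
import numpy as np

rng = np.random.default_rng(42)
n = 8760                                    # hourly samples for one year

# Wind: Weibull(k=2, scale=7.5 m/s) speed -> piecewise power curve of a 2 MW turbine.
v = 7.5 * rng.weibull(2.0, n)
v_ci, v_r, v_co, p_rated = 3.0, 12.0, 25.0, 2.0   # cut-in / rated / cut-out speeds, MW
p_wind = np.where(v < v_ci, 0.0,
          np.where(v < v_r, p_rated * (v - v_ci) / (v_r - v_ci),
           np.where(v < v_co, p_rated, 0.0)))

# Solar: Beta-distributed irradiance fraction -> 1 MW park output.
g = rng.beta(2.5, 2.0, n)
p_solar = 1.0 * g

# Local load and a crude adequacy check (LOLP / EENS over the sampled hours).
load = 2.2 + 0.4 * rng.normal(size=n)
deficit = np.maximum(load - (p_wind + p_solar), 0.0)
print(f"LOLP ~ {np.mean(deficit > 0):.3f},  EENS ~ {deficit.sum():.0f} MWh/yr")
```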
Numerical analysis of mixing enhancement for micro-electroosmotic flow
NASA Astrophysics Data System (ADS)
Tang, G. H.; He, Y. L.; Tao, W. Q.
2010-05-01
Micro-electroosmotic flow is usually slow with negligible inertial effects, and diffusion-based mixing can be problematic. To gain an improved understanding of electroosmotic mixing in microchannels, a numerical study has been carried out for channels patterned with wall blocks and channels patterned with heterogeneous surfaces. The lattice Boltzmann method has been employed to obtain the external electric field, the electric potential distribution in the electrolyte, the flow field, and the species concentration distribution within the same framework. The simulation results show that wall blocks and heterogeneous surfaces can significantly disturb the streamlines through fluid folding and stretching, leading to substantial improvements in mixing. However, the results also show that the introduction of such features can substantially reduce the mass flow rate and thus effectively prolong the available mixing time as the flow passes through the channel. This is a non-negligible factor in the effectiveness of the observed improvements in mixing efficiency. Compared with the heterogeneous surface distribution, the wall block cases achieve more effective enhancement in the same mixing time. In addition, the field synergy theory is extended to analyze the mixing enhancement in electroosmotic flow. The distribution of the local synergy angle in the channel helps to evaluate the effectiveness of the enhancement method.
Slot-Antenna/Permanent-Magnet Device for Generating Plasma
NASA Technical Reports Server (NTRS)
Foster, John E.
2007-01-01
A device that includes a rectangular-waveguide/slot-antenna structure and permanent magnets has been devised as a means of generating a substantially uniform plasma over a relatively large area, using relatively low input power and a low gas flow rate. The device utilizes electron cyclotron resonance (ECR) excited by microwave power to efficiently generate plasma in a manner that is completely electrodeless in the sense that, in principle, there is no electrical contact between the plasma and the antenna. Plasmas generated by devices like this one are suitable for use as sources of ions and/or electrons for diverse material-processing applications (e.g., etching or deposition) and for ion thrusters. The absence of plasma/electrode contact essentially prevents plasma-induced erosion of the antenna, thereby also helping to minimize contamination of the plasma and of objects exposed to the plasma. Consequently, the operational lifetime of the rectangular-waveguide/slot-antenna structure is long and the lifetime of the plasma source is limited by the lifetime of the associated charged-particle-extraction grid (if used) or the lifetime of the microwave power source. The device includes a series of matched radiating slot pairs that are distributed along the length of a plasma-source discharge chamber (see figure). This arrangement enables the production of plasma in a distributed fashion, thereby giving rise to a uniform plasma profile. A uniform plasma profile is necessary for uniformity in any electron- or ion-extraction electrostatic optics. The slotted configuration of the waveguide/antenna structure makes the device scalable to larger areas and higher powers. All that is needed for scaling up is the attachment of additional matched radiating slots along the length of the discharge chamber. If it is desired to make the power per slot remain constant in scaling up, then the input microwave power must be increased accordingly. Unlike in prior ECR microwave plasma-generating devices, there is no need for an insulating window on the antenna. Such windows are sources of contamination and gradually become ineffective as they become coated with erosion products over time. These characteristics relegate prior ECR microwave plasma-generating devices to non-ion-beam, non-deposition plasma applications. In contrast, the lack of need for an insulating window in the present device makes it possible to use the device in both ion-beam (including deposition) and electron-beam applications. The device is designed so that ECR takes place above each slot and the gradient of the magnetic field at each slot is enough to prevent backflow of plasma.
46 CFR 58.16-18 - Installation.
Code of Federal Regulations, 2012 CFR
2012-10-01
... metallic connections to minimize the effect of cylinder movement on the outlet piping. (2) Distribution... substantially secured against vibration by means of soft nonferrous metal clips without sharp edges in contact...
46 CFR 58.16-18 - Installation.
Code of Federal Regulations, 2014 CFR
2014-10-01
... metallic connections to minimize the effect of cylinder movement on the outlet piping. (2) Distribution... substantially secured against vibration by means of soft nonferrous metal clips without sharp edges in contact...
46 CFR 58.16-18 - Installation.
Code of Federal Regulations, 2011 CFR
2011-10-01
... metallic connections to minimize the effect of cylinder movement on the outlet piping. (2) Distribution... substantially secured against vibration by means of soft nonferrous metal clips without sharp edges in contact...
46 CFR 58.16-18 - Installation.
Code of Federal Regulations, 2013 CFR
2013-10-01
... metallic connections to minimize the effect of cylinder movement on the outlet piping. (2) Distribution... substantially secured against vibration by means of soft nonferrous metal clips without sharp edges in contact...
NASA Astrophysics Data System (ADS)
Ushenko, A. G.; Dubolazov, A. V.; Ushenko, V. A.; Ushenko, Yu. A.; Pidkamin, L. Y.; Soltys, I. V.; Zhytaryuk, V. G.; Pavlyukovich, N.
2016-09-01
A model of generalized optical anisotropy of polycrystalline networks of albumin and globulin in human brain liquor has been suggested. The polarization-phase method of spatial and frequency differentiation of linear and circular birefringence coordinate distributions has been analytically substantiated. A set of criteria for the dynamics of necrotic changes in polarization-phase images of liquor polycrystalline films, for determining the time elapsed since death, has been identified and substantiated.
Method for generating small and ultra small apertures, slits, nozzles and orifices
Khounsary, Ali M [Hinsdale, IL
2012-05-22
A method and device for generating one or more small apertures, slits, nozzles, and orifices, preferably having a high aspect ratio. In one embodiment, one or more alternating sacrificial layers and blocking layers are deposited onto a substrate. Each sacrificial layer is made of a material which preferably allows radiation to substantially pass through. Each blocking layer is made of a material which substantially blocks the radiation.
Moon, Hyun Ho; Lee, Jong Joo; Choi, Sang Yule; Cha, Jae Sang; Kang, Jang Mook; Kim, Jong Tae; Shin, Myong Chul
2011-01-01
Recently there have been many studies of power systems focused on "New and Renewable Energy" as part of the "New Growth Engine Industry" promoted by the Korean government. "New and Renewable Energy"-especially wind energy, solar energy, and fuel cells that will replace conventional fossil fuels-is part of the Power-IT Sector, which is the basis of the SmartGrid. A SmartGrid is a highly efficient, intelligent electricity network that allows interactivity (two-way communications) between suppliers and consumers by utilizing information technology in electricity production, transmission, distribution, and consumption. The New and Renewable Energy Program has been driven, through intensive studies by public and private institutions, by the goal of developing and spreading new and renewable energy which, unlike conventional systems, is operated through connections with various kinds of distributed power generation systems. Considerable research on smart grids has been pursued in the United States and Europe. In the United States, a variety of research activities on the smart power grid have been conducted within EPRI's IntelliGrid research program. The European Union (EU), which represents Europe's Smart Grid policy, has focused on an expansion of distributed (decentralized) generation and power trade between countries with improved environmental protection. Thus, there is a current need for studies that assess the economic efficiency of such distributed generation systems. In this paper, based on the cost of distributed power generation capacity, the best obtainable profits were calculated by Monte Carlo simulation. Monte Carlo simulations, which rely on repeated random sampling to compute their results, take into account the cost of electricity production, daily loads, and the cost of sales, and generate a result faster than closed-form mathematical computations. In addition, we suggest an optimal design that considers the distribution losses associated with power distribution systems, with a focus on sensing aspects and distributed power generation.
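To make the Monte Carlo step concrete, the sketch below estimates expected daily profit for a hypothetical distributed generator from randomly sampled sale prices, load factors, and distribution losses. All figures (capacity, production cost, price distribution) are invented for illustration and are not the study's Korean market data.

```python
"""Sketch: Monte Carlo estimate of expected daily profit for a distributed generator.

Random daily sale price, dispatch level, and distribution losses with a fixed
production cost -- every number here is invented for illustration.
"""
import numpy as np

rng = np.random.default_rng(7)
n_trials = 100_000

capacity_mw   = 5.0
gen_cost      = 60.0                                    # $/MWh production cost
price         = rng.normal(95.0, 15.0, n_trials)        # $/MWh daily sale price
load_factor   = rng.beta(4.0, 2.0, n_trials)            # fraction of capacity dispatched
loss_fraction = rng.uniform(0.02, 0.06, n_trials)       # distribution losses

energy_sold = capacity_mw * 24.0 * load_factor * (1.0 - loss_fraction)   # MWh/day
profit = energy_sold * (price - gen_cost)

print(f"expected daily profit : ${profit.mean():,.0f}")
print(f"5th-95th percentile   : ${np.percentile(profit, 5):,.0f} .. ${np.percentile(profit, 95):,.0f}")
```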
Dong, Jinwei; Xiao, Xiangming; Sheldon, Sage; Biradar, Chandrashekhar; Zhang, Geli; Duong, Nguyen Dinh; Hazarika, Manzul; Wikantika, Ketut; Takeuhci, Wataru; Moore, Berrien
2014-01-01
Southeast Asia experienced higher rates of deforestation than other continents in the 1990s and still was a hotspot of forest change in the 2000s. Biodiversity conservation planning and accurate estimation of forest carbon fluxes and pools need more accurate information about forest area, spatial distribution and fragmentation. However, the recent forest maps of Southeast Asia were generated from optical images at spatial resolutions of several hundreds of meters, and they do not capture well the exceptionally complex and dynamic environments in Southeast Asia. The forest area estimates from those maps vary substantially, ranging from 1.73×10^6 km^2 (GlobCover) to 2.69×10^6 km^2 (MCD12Q1) in 2009; and their uncertainty is constrained by frequent cloud cover and coarse spatial resolution. Recently, cloud-free imagery from the Phased Array Type L-band Synthetic Aperture Radar (PALSAR) onboard the Advanced Land Observing Satellite (ALOS) became available. We used the PALSAR 50-m orthorectified mosaic imagery in 2009 to generate a forest cover map of Southeast Asia at 50-m spatial resolution. The validation, using ground-reference data collected from the Geo-Referenced Field Photo Library and high-resolution images in Google Earth, showed that our forest map has a reasonably high accuracy (producer's accuracy 86% and user's accuracy 93%). The PALSAR-based forest area estimates in 2009 are significantly correlated with those from GlobCover and MCD12Q1 at national and subnational scales but differ in some regions at the pixel scale due to different spatial resolutions, forest definitions, and algorithms. The resultant 50-m forest map was used to quantify forest fragmentation and it revealed substantial details of forest fragmentation. This new 50-m map of tropical forests could serve as a baseline map for forest resource inventory, deforestation monitoring, reducing emissions from deforestation and forest degradation (REDD+) implementation, and biodiversity.
Ventolin Diskus and Inspyril Turbuhaler: an in vitro comparison.
Broeders, M E A C; Molema, J; Burnell, P K P; Folgering, H T M
2005-01-01
Dose delivery (total emitted dose, or TED) from dry powder inhalers (DPIs), pulmonary deposition, and the biological effects depend on drug formulation, device, and patient characteristics. The aim of this study was to measure, in vitro, the relationship between parameters of inhalation profiles recorded from patients and the TED and fine particle mass (FPM) of the Diskus and Turbuhaler inhalers. Inhalation profiles (IPs) of 25 patients, a representative sample of a wide range of 1500 IPs generated by 10 stable asthmatics, 3 x 16 (mild/moderate/severe) COPD patients, and 15 hospitalized patients with an exacerbation of asthma or COPD, were selected for each device. These 25 IPs were input IPs for the Electronic Lung (a computer-driven inhalation simulator) to determine the particle size distribution from Ventolin Diskus and Inspyril Turbuhaler. The TED and FPM of Diskus and the FPM of Turbuhaler were affected by the peak inspiratory flow (PIF) and not by the slope of the pressure-time curve, inhaled volume, or inhalation time. This flow dependency was more marked at lower flows (PIF < 40 L/min). Both the TED and FPM of Diskus were significantly higher than those of the Turbuhaler [mean (SD) TED_Diskus (% label claim) 83.5 (13.9) vs. TED_Turbuhaler 72.5 (11.1) (p = 0.004); FPM_Diskus (% label claim) 36.8 (9.8) vs. FPM_Turbuhaler 28.7 (7.7) (p < 0.05)]. The TED and FPM of Diskus and the FPM of Turbuhaler were affected by PIF, the flow dependency being greater at PIF values below 40 L/min. Lower PIFs occurred more often when using the Turbuhaler than the Diskus, since the Turbuhaler has a higher resistance and requires substantially higher pressure in order to generate the same flow as the Diskus. TED, dose consistency, and the FPM were higher for Diskus compared with Turbuhaler. The flow dependency of TED and FPM was substantially influenced by inhalation profiles when not only profiles of the usual outpatient population were included but also the real outliers from exacerbated patients.
Harnett, Mark T; Magee, Jeffrey C; Williams, Stephen R
2015-01-21
The apical tuft is the most remote area of the dendritic tree of neocortical pyramidal neurons. Despite its distal location, the apical dendritic tuft of layer 5 pyramidal neurons receives substantial excitatory synaptic drive and actively processes corticocortical input during behavior. The properties of the voltage-activated ion channels that regulate synaptic integration in tuft dendrites have, however, not been thoroughly investigated. Here, we use electrophysiological and optical approaches to examine the subcellular distribution and function of hyperpolarization-activated cyclic nucleotide-gated nonselective cation (HCN) channels in rat layer 5B pyramidal neurons. Outside-out patch recordings demonstrated that the amplitude and properties of ensemble HCN channel activity were uniform in patches excised from distal apical dendritic trunk and tuft sites. Simultaneous apical dendritic tuft and trunk whole-cell current-clamp recordings revealed that the pharmacological blockade of HCN channels decreased voltage compartmentalization and enhanced the generation and spread of apical dendritic tuft and trunk regenerative activity. Furthermore, multisite two-photon glutamate uncaging demonstrated that HCN channels control the amplitude and duration of synaptically evoked regenerative activity in the distal apical dendritic tuft. In contrast, at proximal apical dendritic trunk and somatic recording sites, the blockade of HCN channels decreased excitability. Dynamic-clamp experiments revealed that these compartment-specific actions of HCN channels were heavily influenced by the local and distributed impact of the high density of HCN channels in the distal apical dendritic arbor. The properties and subcellular distribution pattern of HCN channels are therefore tuned to regulate the interaction between integration compartments in layer 5B pyramidal neurons. Copyright © 2015 the authors 0270-6474/15/351024-14$15.00/0.
Miettinen, Mirella; Torvela, Tiina; Leskinen, Jari T T
2016-10-01
Exposure to stainless steel (SS) welding aerosol that contains the toxic heavy metals chromium (Cr), manganese (Mn), and nickel (Ni) has been associated with numerous adverse health effects. Gas tungsten arc welding (GTAW) is commonly applied to SS and produces a high number concentration of substantially smaller particles compared with the other welding techniques, although the mass emission rate is low. Here, a field study in a workshop with GTAW as the principal welding technique was conducted to determine the physicochemical properties of the airborne particles and to improve the understanding of the hazard that SS welding aerosols pose to welders. Particle number concentration and number size distribution were measured near the breathing zone (50 cm from the arc) and in the middle of the workshop with condensation particle counters and electrical mobility particle sizers, respectively. Particle morphology and chemical composition were studied using scanning and transmission electron microscopy and energy-dispersive X-ray spectroscopy. In the middle of the workshop, the number size distribution was unimodal with a geometric mean diameter (GMD) of 46 nm. Near the breathing zone the number size distribution was multimodal, and the GMDs of the modes were in the range of 10-30 nm. Two different agglomerate types existed near the breathing zone. The first type consisted of iron oxide primary particles with sizes up to 40 nm and variable amounts of Cr, Mn, and Ni replacing iron in the structure. The second type consisted of very small primary particles and contained an increased proportion of Ni relative to (Cr + Mn) compared with the first agglomerate type. Such alterations in the distribution of Ni between different welding aerosol particles have not been reported previously. © The Author 2016. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.
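For reference, the geometric mean diameter (GMD) and geometric standard deviation quoted in such measurements can be computed from a number size distribution as in the short sketch below; the channel diameters and concentrations are hypothetical, not the study's sizer data.

```python
"""Sketch: geometric mean diameter (GMD) and GSD from a number size distribution.

Hypothetical mobility-sizer channel midpoints (nm) and number concentrations.
"""
import numpy as np

d_nm = np.array([10, 15, 22, 32, 46, 68, 100, 150])            # channel midpoints, nm
n_cc = np.array([2e4, 5e4, 9e4, 1.2e5, 1.0e5, 6e4, 2e4, 5e3])  # number conc., cm^-3

log_d = np.log(d_nm)
gmd = np.exp(np.average(log_d, weights=n_cc))
gsd = np.exp(np.sqrt(np.average((log_d - np.log(gmd)) ** 2, weights=n_cc)))
print(f"GMD = {gmd:.0f} nm, GSD = {gsd:.2f}, total N = {n_cc.sum():.2e} cm^-3")
```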
NASA and USGS ASTER Expedited Satellite Data Services for Disaster Situations
NASA Astrophysics Data System (ADS)
Duda, K. A.
2012-12-01
Significant international disasters related to storms, floods, volcanoes, wildfires and numerous other themes reoccur annually, often inflicting widespread human suffering and fatalities with substantial economic consequences. During and immediately after such events it can be difficult to access the affected areas and become aware of the overall impacts, but insight on the spatial extent and effects can be gleaned from above through satellite images. The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) on the Terra spacecraft has offered such views for over a decade. On short notice, ASTER continues to deliver analysts multispectral imagery at 15 m spatial resolution in near real-time to assist participating responders, emergency managers, and government officials in planning for such situations and in developing appropriate responses after they occur. The joint U.S./Japan ASTER Science Team has developed policies and procedures to ensure such ongoing support is accessible when needed. Processing and distribution of data products occurs at the NASA Land Processes Distributed Active Archive Center (LP DAAC) located at the USGS Earth Resources Observation and Science Center in South Dakota. In addition to current imagery, the long-term ASTER mission has generated an extensive collection of nearly 2.5 million global 3,600 km2 scenes since the launch of Terra in late 1999. These are archived and distributed by LP DAAC and affiliates at Japan Space Systems in Tokyo. Advanced processing is performed to create higher level products of use to researchers. These include a global digital elevation model. Such pre-event imagery provides a comparative basis for use in detecting changes associated with disasters and to monitor land use trends to portray areas of increased risk. ASTER imagery acquired via the expedited collection and distribution process illustrates the utility and relevancy of such data in crisis situations.
Spatial relationships of sector-specific fossil fuel CO2 emissions in the United States
NASA Astrophysics Data System (ADS)
Zhou, Yuyu; Gurney, Kevin Robert
2011-09-01
Quantification of the spatial distribution of sector-specific fossil fuel CO2 emissions provides strategic information to public and private decision makers on climate change mitigation options and can provide critical constraints to carbon budget studies being performed at the national to urban scales. This study analyzes the spatial distribution and spatial drivers of total and sectoral fossil fuel CO2 emissions at the state and county levels in the United States. The spatial patterns of absolute versus per capita fossil fuel CO2 emissions differ substantially and these differences are sector-specific. Area-based sources such as those in the residential and commercial sectors are driven by a combination of population and surface temperature with per capita emissions largest in the northern latitudes and continental interior. Emission sources associated with large individual manufacturing or electricity producing facilities are heterogeneously distributed in both absolute and per capita metrics. The relationship between surface temperature and sectoral emissions suggests that the increased electricity consumption due to space cooling requirements under a warmer climate may outweigh the savings generated by lessened space heating. Spatial cluster analysis of fossil fuel CO2 emissions confirms that counties with high (low) CO2 emissions tend to be clustered close to other counties with high (low) CO2 emissions and some of the spatial clustering extends to multistate spatial domains. This is particularly true for the residential and transportation sectors, suggesting that emissions mitigation policy might best be approached from the regional or multistate perspective. Our findings underscore the potential for geographically focused, sector-specific emissions mitigation strategies and the importance of accurate spatial distribution of emitting sources when combined with atmospheric monitoring via aircraft, satellite and in situ measurements.
Interpretation of forest characteristics from computer-generated images.
T.M. Barrett; H.R. Zuuring; T. Christopher
2006-01-01
The need for effective communication in the management and planning of forested landscapes has led to a substantial increase in the use of visual information. Using forest plots from California, Oregon, and Washington, and a survey of 183 natural resource professionals in these states, we examined the use of computer-generated images to convey information about forest...
Social preferences toward energy generation with woody biomass from public forests in Montana, USA
Robert M. Campbell; Tyron J. Venn; Nathaniel M. Anderson
2016-01-01
In Montana, USA, there are substantial opportunities for mechanized thinning treatments on public forests to reduce the likelihood of severe and damaging wildfires and improve forest health. These treatments produce residues that can be used to generate renewable energy and displace fossil fuels. The choice modeling method is employed to examine the marginal...
ERIC Educational Resources Information Center
Miller, Joshua D.; Lynam, Donald R.
2008-01-01
Assessment of the "Diagnostic and Statistical Manual of Mental Disorders" (4th Ed.; "DSM-IV") personality disorders (PDs) using five-factor model (FFM) prototypes and counts has shown substantial promise, with a few exceptions. Miller, Reynolds, and Pilkonis suggested that the expert-generated FFM dependent prototype might be misspecified in…
The Role of Domain Knowledge in Creative Generation
ERIC Educational Resources Information Center
Ward, Thomas B.
2008-01-01
Previous studies have shown that a predominant tendency in creative generation tasks is to base new ideas on well-known, specific instances of previous ideas (e.g., basing ideas for imaginary aliens on dogs, cats or bears). However, a substantial minority of individuals has been shown to adopt more abstract approaches to the task and to develop…
Time-for-Money Exchanges between Older and Younger Generations in Swedish Families
ERIC Educational Resources Information Center
Lennartsson, Carin; Silverstein, Merril; Fritzell, Johan
2010-01-01
Despite the maturation of welfare states, family solidarity continues to be strong and a growing body of research has shown that substantial financial transfers are passed from older to younger generations within the family. At the same time, family solidarity in terms of instrumental and social support is found to be mutual. This study examines…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hodge, Brian S; Mather, Barry A; Cho, Gyu-Jung
Capacitor banks have generally been installed and utilized to support distribution voltage during periods of higher load or on longer, higher-impedance feeders. Installations of distributed energy resources in distribution systems are rapidly increasing, and many of these generation resources have variable and uncertain power output. These generators can significantly change the voltage profile across a feeder, and therefore, when a new capacitor bank is needed, analysis of the optimal capacity and location of the capacitor bank is required. In this paper, we model a particular distribution system including essential equipment. An optimization method is adopted to determine the best capacity and location sets of the newly installed capacitor banks in the presence of distributed solar power generation. Finally, we analyze the optimal capacitor bank configuration through the optimization and simulation results.
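As a rough illustration of this kind of siting and sizing search (not the authors' optimization method or test feeder), the sketch below enumerates candidate buses and capacitor sizes on a small radial feeder and picks the combination minimizing an approximate series-loss objective; the feeder data and PV injections are invented.

```python
"""Sketch: brute-force sizing/placement of one capacitor bank on a radial feeder.

A linearized branch-flow approximation (losses ~ (P^2 + Q^2) * R with V ~ 1 p.u.)
stands in for a full power flow; feeder data, PV injections, and candidate sizes
are all illustrative.
"""
import numpy as np
from itertools import product

n_bus = 6                                    # bus 0 is the substation
r = np.full(n_bus - 1, 0.02)                 # p.u. resistance of segment i -> i+1
p_load = np.array([0.0, 0.2, 0.3, 0.2, 0.25, 0.15])   # p.u. active load per bus
q_load = 0.5 * p_load                                  # p.u. reactive load per bus
p_pv   = np.array([0.0, 0.0, 0.1, 0.0, 0.2, 0.0])      # distributed solar injection

def feeder_losses(q_cap):
    """Approximate series losses given a per-bus capacitor vector q_cap (p.u.)."""
    p_net = p_load - p_pv
    q_net = q_load - q_cap
    loss = 0.0
    for seg in range(n_bus - 1):
        # Power through segment `seg` supplies every bus downstream of it.
        p_flow = p_net[seg + 1:].sum()
        q_flow = q_net[seg + 1:].sum()
        loss += (p_flow ** 2 + q_flow ** 2) * r[seg]
    return loss

candidate_sizes = [0.05, 0.10, 0.15, 0.20]   # p.u. capacitor ratings to try
best = min(
    product(range(1, n_bus), candidate_sizes),
    key=lambda bs: feeder_losses(np.eye(n_bus)[bs[0]] * bs[1]),
)
print(f"best placement: bus {best[0]}, size {best[1]:.2f} p.u., "
      f"losses {feeder_losses(np.eye(n_bus)[best[0]] * best[1]):.4f} p.u. "
      f"(no capacitor: {feeder_losses(np.zeros(n_bus)):.4f})")
```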
Distributed Pedagogical Leadership and Generative Dialogue in Educational Nodes
ERIC Educational Resources Information Center
Jappinen, Aini-Kristiina; Sarja, Anneli
2012-01-01
The article presents practices of distributed pedagogical leadership and generative dialogue as a tool with which management and personnel can better operate in the increasingly turbulent world of education. Distributed pedagogical leadership includes common characteristics of a professional learning community when the educational actors…
Distributed state-space generation of discrete-state stochastic models
NASA Technical Reports Server (NTRS)
Ciardo, Gianfranco; Gluckman, Joshua; Nicol, David
1995-01-01
High-level formalisms such as stochastic Petri nets can be used to model complex systems. Analysis of logical and numerical properties of these models often requires the generation and storage of the entire underlying state space. This imposes practical limitations on the types of systems which can be modeled. Because of the vast amount of memory consumed, we investigate distributed algorithms for the generation of state space graphs. The distributed construction allows us to take advantage of the combined memory readily available on a network of workstations. The key technical problem is to find effective methods for on-the-fly partitioning, so that the state space is evenly distributed among processors. In this paper we report on the implementation of a distributed state-space generator that may be linked to a number of existing system modeling tools. We discuss partitioning strategies in the context of Petri net models, and report on performance observed on a network of workstations, as well as on a distributed-memory multicomputer.
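The on-the-fly partitioning idea can be sketched as follows: each newly generated state is assigned to a worker by a hash of its marking, so the state space is spread across the combined memory of the workers. The toy producer/consumer net, the four-worker setup, and the sequential loop standing in for message exchange are all illustrative assumptions, not the paper's implementation.

```python
"""Sketch: partitioned state-space generation for a toy Petri-net-like model.

States (markings) are assigned to simulated workers by a hash function, the
"on-the-fly partitioning" idea described above, in a single-process stand-in.
"""
from collections import deque

N_WORKERS = 4
CAPACITY = 5                      # bound on each place so the state space is finite

def successors(marking):
    """Enabled transitions of a tiny producer/consumer net: (buffer, consumed)."""
    buf, done = marking
    if buf < CAPACITY:                     # produce a token
        yield (buf + 1, done)
    if buf > 0 and done < CAPACITY:        # consume a token
        yield (buf - 1, done + 1)

def owner(marking):
    """Static hash partition: which worker stores this state."""
    return hash(marking) % N_WORKERS

initial = (0, 0)
frontier = deque([initial])
partitions = {w: set() for w in range(N_WORKERS)}
partitions[owner(initial)].add(initial)

while frontier:                            # sequential BFS standing in for the
    state = frontier.popleft()             # distributed exchange of cross-owned states
    for nxt in successors(state):
        w = owner(nxt)
        if nxt not in partitions[w]:
            partitions[w].add(nxt)
            frontier.append(nxt)

total = sum(len(s) for s in partitions.values())
print(f"total states: {total}")
print("states per worker:", {w: len(s) for w, s in partitions.items()})
```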
Sugavanam, S; Yan, Z; Kamynin, V; Kurkov, A S; Zhang, L; Churkin, D V
2014-02-10
Multiwavelength lasing in the random distributed feedback fiber laser is demonstrated by employing an all fiber Lyot filter. Stable multiwavelength generation is obtained, with each line exhibiting sub-nanometer line-widths. A flat power distribution over multiple lines is obtained, which indicates that the power between lines is redistributed in nonlinear mixing processes. The multiwavelength generation is observed both in first and second Stokes waves.
Sun, P C; Fainman, Y
1990-09-01
An optical processor for real-time generation of the Wigner distribution of complex amplitude functions is introduced. The phase conjugation of the input signal is accomplished by a highly efficient self-pumped phase conjugator based on a 45°-cut barium titanate photorefractive crystal. Experimental results on the real-time generation of Wigner distribution slices for complex amplitude two-dimensional optical functions are presented and discussed.
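For reference, the quantity being generated optically is the Wigner distribution of a complex field; one textbook form of the one-dimensional definition is given below (a general formula, not one specific to this processor). The conjugate field f* in the integrand is why a phase-conjugating element is useful in such a processor.

```latex
% Standard definition of the Wigner distribution of a 1-D complex field f(x)
% (textbook form; the processor above evaluates slices of its 2-D analogue).
W_f(x, u) \;=\; \int_{-\infty}^{\infty}
  f\!\left(x + \tfrac{x'}{2}\right)\, f^{*}\!\left(x - \tfrac{x'}{2}\right)\,
  e^{-\,i\,2\pi u x'} \, dx'
```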
2011-11-21
Modern-day Mars experiences cyclical changes in climate and, consequently, ice distribution. Unlike Earth, the obliquity or tilt of Mars changes substantially on timescales of hundreds of thousands to millions of years.
NASA Technical Reports Server (NTRS)
Lee, A. Y.
1967-01-01
Computer program calculates the steady state fluid distribution, temperature rise, and pressure drop of a coolant, the material temperature distribution of a heat generating solid, and the heat flux distributions at the fluid-solid interfaces. It performs the necessary iterations automatically within the computer, in one machine run.
Mechanisms generating kappa distributions in plasmas
NASA Astrophysics Data System (ADS)
Livadiotis, Georgios
2017-10-01
Kappa distributions have become increasingly widespread across plasma physics. Publication records reveal an exponential growth of papers relevant to kappa distributions. However, the vast majority of publications refer to statistical fits and applications of these distributions in plasmas. To date, there has been no systematic analysis of the origin of kappa distributions, that is, of the mechanisms that can generate kappa distributions in plasmas. The general scheme that characterizes these mechanisms is composed of two parts: (1) the generation of local correlations among particles, and (2) the thermalization, that is, the stabilization of the particle system into stationary states described by kappa distributions or combinations thereof. Several mechanisms are known in the literature, each characterized by a specific relationship between the plasma properties. These relationships serve as conditions that need to be fulfilled for the corresponding mechanisms to apply in the plasma. Using these relationships, we identify three mechanisms that generate kappa distributions in the solar wind plasma: (i) Debye shielding, (ii) magnetic field binding, and (iii) thermal fluctuations, each one prevailing at different scales of the solar wind plasma and magnetic field properties. The work was supported in part by the project NNX17AB74G of NASA's HGI Program.
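For orientation, one common convention for the isotropic kappa velocity distribution is given below (parameterizations of the thermal speed and of kappa itself vary across the literature); it reduces to a Maxwellian in the limit kappa → ∞.

```latex
% One common form of the isotropic kappa velocity distribution (n = density,
% theta = thermal-speed parameter); other conventions shift kappa by 3/2.
f(\mathbf{v}) \;=\; \frac{n}{\left(\pi \kappa \theta^{2}\right)^{3/2}}
  \,\frac{\Gamma(\kappa + 1)}{\Gamma\!\left(\kappa - \tfrac{1}{2}\right)}
  \left(1 + \frac{v^{2}}{\kappa \theta^{2}}\right)^{-(\kappa + 1)}
```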
The distributions of, and relationship between, 3He and nitrate in eddies
NASA Astrophysics Data System (ADS)
Jenkins, W. J.; McGillicuddy, D. J., Jr.; Lott, D. E., III
2008-05-01
We present and discuss the distribution of 3He and its relationship to nutrients in two eddies (cyclone C1 and anticyclone A4) with a view towards examining eddy-related mechanisms whereby nutrients are transported from the upper 200-300 m into the euphotic zone of the Sargasso Sea. The different behavior of these tracers in the euphotic zone results in changes in their distributions and relationships that may provide important clues as to the nature of physical and biological processes involved. The cyclonic eddy (C1) is characterized by substantial 3He excesses within the euphotic zone. The distribution of this excess 3He is strongly suggestive of both past and recent ongoing deep-water injection into the euphotic zone. Crude mass balance calculations suggest that an average of approximately 1.4±0.7 mol m-2 of nitrate has been introduced into the euphotic zone of eddy C1, consistent with the integrated apparent oxygen utilization anomaly in the aphotic zone below. The 3He-NO3 relationship within the eddy deviates substantially from the linear thermocline trend, suggestive of incomplete drawdown of nutrients and/or substantial mixing between euphotic and aphotic zone waters. Anticyclone (A4) displays a simpler 3He-NO3 relationship, but is relatively impoverished in euphotic zone excess 3He. We suggest that because of the relatively strong upwelling and lateral divergence of water the residence time of upwelled 3He is relatively short within the euphotic zone of this eddy. An estimate of the recently upwelled nutrient inventory, based on the excess 3He observed in A4's lower euphotic zone, is stoichiometrically consistent with the oxygen maximum observed in the euphotic zone.
Advanced Multi-Effect Distillation System for Desalination Using Waste Heat fromGas Brayton Cycles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haihua Zhao; Per F. Peterson
2012-10-01
Generation IV high temperature reactor systems use closed gas Brayton cycles to realize high thermal efficiency in the range of 40% to 60%. The waste heat is removed through coolers by water at a substantially greater average temperature than in conventional Rankine steam cycles. This paper introduces an innovative Advanced Multi-Effect Distillation (AMED) design that can enable the production of substantial quantities of low-cost desalinated water using waste heat from closed gas Brayton cycles. A reference AMED design configuration, optimization models, and a simplified economic analysis are presented. By using an AMED distillation system, the waste heat from closed gas Brayton cycles can be fully utilized to desalinate brackish water and seawater without affecting the cycle thermal efficiency. Analysis shows that cogeneration of electricity and desalinated water can increase net revenues for several Brayton cycles while generating large quantities of potable water. Combining the AMED with closed gas Brayton cycles could significantly improve the sustainability and economics of Generation IV high temperature reactors.
Steam generator support system
Moldenhauer, J.E.
1987-08-25
A support system for connection to an outer surface of a J-shaped steam generator for use with a nuclear reactor or other liquid metal cooled power source is disclosed. The J-shaped steam generator is mounted with the bent portion at the bottom. An arrangement of elongated rod members provides both horizontal and vertical support for the steam generator. The rod members are interconnected to the steam generator assembly and a support structure in a manner which provides for thermal distortion of the steam generator without the transfer of bending moments to the support structure and in a like manner substantially minimizes forces being transferred between the support structure and the steam generator as a result of seismic disturbances. 4 figs.
Steam generator support system
Moldenhauer, James E.
1987-01-01
A support system for connection to an outer surface of a J-shaped steam generator for use with a nuclear reactor or other liquid metal cooled power source. The J-shaped steam generator is mounted with the bent portion at the bottom. An arrangement of elongated rod members provides both horizontal and vertical support for the steam generator. The rod members are interconnected to the steam generator assembly and a support structure in a manner which provides for thermal distortion of the steam generator without the transfer of bending moments to the support structure and in a like manner substantially minimizes forces being transferred between the support structure and the steam generator as a result of seismic disturbances.
An improved AVC strategy applied in distributed wind power system
NASA Astrophysics Data System (ADS)
Zhao, Y. N.; Liu, Q. H.; Song, S. Y.; Mao, W.
2016-08-01
The traditional AVC strategy is mainly used in wind farms and concerns only the grid connection point, which makes it unsuitable for distributed wind power systems. Therefore, this paper proposes an improved AVC strategy for distributed wind power systems. The strategy takes all nodes of the distribution network into consideration and chooses the node with the most serious voltage deviation as the control point for calculating the reactive power reference. In addition, the distribution principles can be divided into two cases: when wind generators are connected to the network at a single node, the reactive power reference is distributed according to reactive power capacity; when wind generators are connected at multiple nodes, the reference is distributed according to sensitivity. Simulation results show the correctness and reliability of the strategy. Compared with the traditional control strategy, the strategy described in this paper makes full use of the generators' reactive power output capability according to the voltage condition of the distribution network and effectively improves the voltage level of the distribution network.
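A minimal sketch of the two distribution rules described above (capacity-proportional sharing for a single connection node, sensitivity-proportional sharing for multiple nodes), with invented generator capacities and sensitivities rather than values from the paper:

```python
"""Sketch: splitting a reactive-power reference among distributed wind generators.

Rule 1: proportional to available capacity (single connection node).
Rule 2: proportional to voltage sensitivity (multiple connection nodes).
Generator capacities and sensitivities below are illustrative only.
"""
import numpy as np

q_ref_total = 1.8                                   # Mvar requested at the control point

q_capacity  = np.array([1.0, 0.6, 1.4])             # available Mvar per generator
sensitivity = np.array([0.8, 1.5, 0.5])             # dV_controlpoint / dQ_i (relative)

# Rule 1: single-node connection -> share in proportion to reactive capacity.
q_by_capacity = q_ref_total * q_capacity / q_capacity.sum()

# Rule 2: multi-node connection -> share in proportion to voltage sensitivity,
# then clip to each generator's capability.
q_by_sensitivity = np.minimum(q_ref_total * sensitivity / sensitivity.sum(), q_capacity)

print("capacity-based shares   :", np.round(q_by_capacity, 2))
print("sensitivity-based shares:", np.round(q_by_sensitivity, 2))
```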
Gehlhar, K; Wüller, A; Lieverscheidt, H; Fischer, M R; Schäfer, T
2010-12-01
Problem-based learning (PBL) is often introduced into curricula in the form of short segments. In the literature the value of these PBL islands has been doubted. In order to gain more insight into this curricular approach, we compared student-generated learning issues from a 7-week PBL island introduced into a traditional curriculum (PBL-I) with the gold standard of a PBL-based model curriculum (PBL-B) existing in parallel at the same university (Ruhr-University Bochum, Germany). Both tracks use five identical PBL cases. One thousand seven hundred and three student-generated learning issues from 252 tutorial groups (193 PBL-I and 59 PBL-B groups with six to seven students per group) were analysed in seven different categories. Results showed that overall there were no substantial differences between the two curricula. PBL-B students generated more problem-related and fewer basic science clinical learning issues than PBL-I students, but in both groups learning issues were related to the same number of different subjects. Furthermore, students in the PBL curriculum tended to generate slightly fewer but somewhat better phrased issues. Taken together, we found no substantial evidence, with respect to student-generated learning issues, that students cannot work with the PBL method, even if it is introduced later in the curriculum and lasts only for a short period of time.
Improved OSC Amtec generator design to meet goals of JPL's candidate Europa Orbiter mission
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schock, A.; Noravian, H.; Or, C.
1998-07-01
The preceding paper (Paper IECEC.98.244) described OSC's initial designs of AMTEC (Alkali Metal Thermal-to-Electrical Conversion) power systems, consisting of one or two generators, each with 2, 3, or 4 General Purpose Heat Source (GPHS) modules and with 16 refractory AMTEC cells containing 5 Beta Alumina Solid Electrolyte (BASE) tubes, and presented the effect of heat input and voltage output on the generator's BOM evaporator and clad temperatures and on its EOM system efficiency and power output. Comparison of the computed results with JPL's goals for the Europa Orbiter mission showed that all of the initial 16-cell design options yielded either excessive evaporator and clad temperatures or insufficient EOM power to satisfy the JPL-specified mission goals. The present paper describes modified OSC generator designs with different numbers of AMTEC cells, cell diameters, cell lengths, cell materials, BASE tube lengths, and numbers of tubes per cell. These efforts succeeded in identifying generator designs with only half the number of AMTEC cells which -- for the same assumptions -- can produce EOM power outputs substantially in excess of JPL's goals for NASA's Europa Orbiter mission while operating well below the prescribed BOM limits on evaporator and clad temperature; and revealed that lowering the emissivity of the generator's housing to raise the cells' condenser temperatures can achieve substantial additional performance improvement. Finally, the paper culminates in programmatic recommendations.
Power Amplifier Module with 734-mW Continuous Wave Output Power
NASA Technical Reports Server (NTRS)
Fung, King Man; Samoska, Lorene A.; Kangaslahti, Pekka P.; Lamgrigtsen, Bjorn H.; Goldsmith, Paul F.; Lin, Robert H.; Soria, Mary M.; Cooperrider, Joelle T.; Micovic, Moroslav; Kurdoghlian, Ara
2010-01-01
Research findings are reported from an investigation of new gallium nitride (GaN) monolithic millimeter-wave integrated circuit (MMIC) power amplifiers (PAs) targeting the highest output power and the highest efficiency for class-A operation in W-band (75-110 GHz). W-band PAs are a major component of many frequency-multiplied submillimeter-wave LO signal sources. For spectrometer arrays, substantial W-band power is required, due to the passive, lossy frequency multipliers, to generate higher frequency signals in nonlinear Schottky diode-based LO sources. By advancing PA technology, the LO system performance can be increased, with possible cost reductions compared to current GaAs PAs. High-power, high-efficiency GaN PAs are cross-cutting and can enable more efficient local oscillator distribution systems for new astrophysics and planetary receivers and heterodyne array instruments. They can also enable a new, electronically scannable solid-state array technology for future Earth science radar instruments and communications platforms.
Apparatus and method for mounting photovoltaic power generating systems on buildings
Russell, Miles Clayton [Lincoln, MA
2008-10-14
Rectangular PV modules (6) are mounted on a building roof (4) by mounting stands that are distributed in rows and columns. Each stand comprises a base plate (10) that rests on the building roof (4) and first and second brackets (12, 14) of different height attached to opposite ends of the base plate (10). Each bracket (12, 14) has dual members for supporting two different PV modules (6), and each PV module (6) has a mounting pin (84) adjacent to each of its four corners. Each module (6) is supported by attachment of two of its mounting pins (84) to different first brackets (12), whereby the modules (6) and their supporting stands are able to resist uplift forces resulting from high velocity winds without the base plates (10) being physically attached to the supporting roof structure (4). Preferably the second brackets (14) have a telescoping construction that permits their effective height to vary from less than to substantially the same as that of the first brackets (12).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Srinivasan, Shweta; Kholod, Nazar; Chaturvedi, Vaibhav
This paper provides projections of water withdrawals and consumption for electricity generation in India through 2050. Based on the results from five energy-economic modeling teams, the paper explores the implications of economic growth, power plant cooling policies, and electricity CO2 emissions reductions on water withdrawals and consumption. To isolate modeling differences, the five teams used harmonized assumptions regarding economic and population growth, the distribution of power plants by cooling technologies, and withdrawal and consumption intensities. The results demonstrate the different but potentially complementary implications of cooling technology policies and efforts to reduce CO2 emissions. The application of closed-loop cooling technologies substantially reduces water withdrawals but increases consumption. The water implications of CO2 emissions reductions depend critically on the approach to these reductions. Focusing on wind and solar power reduces consumption and withdrawals; a focus on nuclear power increases both; and a focus on hydroelectric power could increase consumptive losses through evaporation.
Data-adaptive test statistics for microarray data.
Mukherjee, Sach; Roberts, Stephen J; van der Laan, Mark J
2005-09-01
An important task in microarray data analysis is the selection of genes that are differentially expressed between different tissue samples, such as healthy and diseased. However, microarray data contain an enormous number of dimensions (genes) and very few samples (arrays), a mismatch which poses fundamental statistical problems for the selection process that have defied easy resolution. In this paper, we present a novel approach to the selection of differentially expressed genes in which test statistics are learned from data using a simple notion of reproducibility in selection results as the learning criterion. Reproducibility, as we define it, can be computed without any knowledge of the 'ground-truth', but takes advantage of certain properties of microarray data to provide an asymptotically valid guide to expected loss under the true data-generating distribution. We are therefore able to indirectly minimize expected loss, and obtain results substantially more robust than conventional methods. We apply our method to simulated and oligonucleotide array data. An implementation of the method is available by request to the corresponding author.
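As an illustration of the reproducibility idea described above, the sketch below (not the authors' implementation) repeatedly splits the arrays of each condition in half, ranks genes with a candidate statistic on each half, and scores the overlap of the top-k gene lists; a candidate statistic with higher average overlap would be preferred and then applied to the full data set. The function names and the Welch-style statistic are illustrative assumptions.

```python
import numpy as np

def top_k_overlap(stat, group_a, group_b, k=100, n_splits=20, rng=None):
    """Estimate reproducibility of a gene-selection statistic.

    Repeatedly split the arrays of each condition in half, rank genes by
    `stat` on each half, and measure the overlap of the top-k gene sets.
    `stat(x, y)` returns one score per gene (rows = genes, cols = arrays).
    """
    rng = np.random.default_rng(rng)
    overlaps = []
    for _ in range(n_splits):
        ia = rng.permutation(group_a.shape[1])
        ib = rng.permutation(group_b.shape[1])
        half_a, half_b = len(ia) // 2, len(ib) // 2
        s1 = stat(group_a[:, ia[:half_a]], group_b[:, ib[:half_b]])
        s2 = stat(group_a[:, ia[half_a:]], group_b[:, ib[half_b:]])
        top1 = set(np.argsort(-np.abs(s1))[:k])
        top2 = set(np.argsort(-np.abs(s2))[:k])
        overlaps.append(len(top1 & top2) / k)
    return float(np.mean(overlaps))

def t_like(x, y):
    """A simple Welch-style statistic per gene (illustrative candidate)."""
    return (x.mean(1) - y.mean(1)) / np.sqrt(x.var(1) / x.shape[1] + y.var(1) / y.shape[1] + 1e-12)
```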
The impact of a natural disaster on altruistic behaviour and crime.
Lemieux, Frederic
2014-07-01
Institutional altruism, in the form of public-sector intervention and support for victims, and social altruism, generated by mutual aid and solidarity among citizens, constitute a coming together in a crisis. This coming together and mutual support precipitate a decrease in crime rates during such an event. This paper presents an analysis of daily fluctuations in crime during the prolonged ice storms in Quebec, Canada, in January 1998 that provoked an electrical blackout. Of particular interest are the principal crisis-related influences on daily crime patterns. A first series of analyses examines the impact of altruistic public-sector mobilisation on crime. A significant decline in property crime rates was observed when cheques were distributed to crisis victims in financial need in Montérégie, a decline attributable to public intervention (institutional altruism). Moreover, the rate of social altruism (financial donations), which was more substantial in adjoining rather than distant regions, was inversely proportional to crime rates. © 2014 The Author(s). Disasters © Overseas Development Institute, 2014.
Genetic variants linked to education predict longevity
Marioni, Riccardo E.; Ritchie, Stuart J.; Joshi, Peter K.; Hagenaars, Saskia P.; Fischer, Krista; Adams, Mark J.; Hill, W. David; Davies, Gail; Nagy, Reka; Amador, Carmen; Läll, Kristi; Metspalu, Andres; Liewald, David C.; Wilson, James F.; Hayward, Caroline; Esko, Tõnu; Porteous, David J.; Gale, Catharine R.; Deary, Ian J.
2016-01-01
Educational attainment is associated with many health outcomes, including longevity. It is also known to be substantially heritable. Here, we used data from three large genetic epidemiology cohort studies (Generation Scotland, n = ∼17,000; UK Biobank, n = ∼115,000; and the Estonian Biobank, n = ∼6,000) to test whether education-linked genetic variants can predict lifespan length. We did so by using cohort members’ polygenic profile score for education to predict their parents’ longevity. Across the three cohorts, meta-analysis showed that a 1 SD higher polygenic education score was associated with ∼2.7% lower mortality risk for both mothers (total n deaths = 79,702) and ∼2.4% lower risk for fathers (total n deaths = 97,630). On average, the parents of offspring in the upper third of the polygenic score distribution lived 0.55 y longer compared with those of offspring in the lower third. Overall, these results indicate that the genetic contributions to educational attainment are useful in the prediction of human longevity. PMID:27799538
Transport of Space Environment Electrons: A Simplified Rapid-Analysis Computational Procedure
NASA Technical Reports Server (NTRS)
Nealy, John E.; Anderson, Brooke M.; Cucinotta, Francis A.; Wilson, John W.; Katz, Robert; Chang, C. K.
2002-01-01
A computational procedure for describing transport of electrons in condensed media has been formulated for application to effects and exposures from spectral distributions typical of electrons trapped in planetary magnetic fields. The procedure is based on earlier parameterizations established from numerous electron beam experiments. New parameterizations have been derived that logically extend the domain of application to low molecular weight (high hydrogen content) materials and higher energies (approximately 50 MeV). The production and transport of high energy photons (bremsstrahlung) generated in the electron transport processes have also been modeled using tabulated values of photon production cross sections. A primary purpose for developing the procedure has been to provide a means for rapidly performing numerous repetitive calculations essential for electron radiation exposure assessments for complex space structures. Several favorable comparisons have been made with previous calculations for typical space environment spectra, which have indicated that accuracy has not been substantially compromised at the expense of computational speed.
Analysis of electric and thermal behaviour of lithium-ion cells in realistic driving cycles
NASA Astrophysics Data System (ADS)
Tourani, Abbas; White, Peter; Ivey, Paul
2014-12-01
A substantial part of an electric vehicle (EV) powertrain is the battery cell. The cells are usually connected in series, and failure of a single cell can deactivate an entire module in the battery pack. Hence, understanding cell behaviour helps to predict and improve battery performance and leads to the design of a cost-effective thermal management system for the battery pack. A first-principles thermo-electrochemical model is applied to study the cell behaviour. The model is in good agreement with the experimental results and can predict the heat generation and the temperature distribution across the cell for different operating conditions. The effect of operating temperature on cell performance is studied and the operating temperature for the best performance is verified. In addition, EV cells are examined in a realistic driving cycle from the Artemis class. The study findings lead to the proposal of some crucial recommendations for designing cost-effective thermal management systems for the battery pack.
Genetic variants linked to education predict longevity.
Marioni, Riccardo E; Ritchie, Stuart J; Joshi, Peter K; Hagenaars, Saskia P; Okbay, Aysu; Fischer, Krista; Adams, Mark J; Hill, W David; Davies, Gail; Nagy, Reka; Amador, Carmen; Läll, Kristi; Metspalu, Andres; Liewald, David C; Campbell, Archie; Wilson, James F; Hayward, Caroline; Esko, Tõnu; Porteous, David J; Gale, Catharine R; Deary, Ian J
2016-11-22
Educational attainment is associated with many health outcomes, including longevity. It is also known to be substantially heritable. Here, we used data from three large genetic epidemiology cohort studies (Generation Scotland, n = ∼17,000; UK Biobank, n = ∼115,000; and the Estonian Biobank, n = ∼6,000) to test whether education-linked genetic variants can predict lifespan length. We did so by using cohort members' polygenic profile score for education to predict their parents' longevity. Across the three cohorts, meta-analysis showed that a 1 SD higher polygenic education score was associated with ∼2.7% lower mortality risk for both mothers (total n deaths = 79,702) and ∼2.4% lower risk for fathers (total n deaths = 97,630). On average, the parents of offspring in the upper third of the polygenic score distribution lived 0.55 y longer compared with those of offspring in the lower third. Overall, these results indicate that the genetic contributions to educational attainment are useful in the prediction of human longevity.
Forging the link between nuclear reactions and nuclear structure.
Mahzoon, M H; Charity, R J; Dickhoff, W H; Dussan, H; Waldecker, S J
2014-04-25
A comprehensive description of all single-particle properties associated with the nucleus Ca40 is generated by employing a nonlocal dispersive optical potential capable of simultaneously reproducing all relevant data above and below the Fermi energy. The introduction of nonlocality in the absorptive potentials yields elastic differential cross sections equivalent to those of local versions but changes the absorption profile as a function of angular momentum, suggesting important consequences for the analysis of nuclear reactions. Below the Fermi energy, nonlocality is essential to allow for an accurate representation of particle number and the nuclear charge density. Spectral properties implied by (e, e'p) and (p, 2p) reactions are correctly incorporated, including the energy distribution of the roughly 10% of nucleons with high momentum, as experimentally determined by data from Jefferson Lab. These high-momentum nucleons provide a substantial contribution to the energy of the ground state, indicating a residual attractive contribution from higher-body interactions for Ca40 of about 0.64 MeV/A.
Solid state tritium detector for biomedical applications
NASA Astrophysics Data System (ADS)
Gordon, J. S.; Farrell, R.; Daley, K.; Oakes, C. E.
1994-08-01
Radioactive labeling of proteins is a very important technique used in biomedical research to identify, isolate, and investigate the expression and properties of proteins in biological systems. In such procedures, the preferred radiolabel is often tritium. Presently, binding assays involving tritium are carried out using inconvenient and expensive techniques which rely on the use of scintillation fluid counting systems. This traditional method involves both time-consuming laboratory protocols and the generation of substantial quantities of radioactive and chemical waste. We have developed a novel technology to measure the tritium content of biological specimens that does not rely on scintillation fluids. The tritiated samples can be positioned directly under a large area, monolithic array of specially prepared avalanche photodiodes (APDs) which record the tritium activity distribution at each point within the field of view of the array. The 1 mm² sensing elements exhibit an intrinsic tritium beta detection efficiency of 27% with high gain uniformity and very low cross talk.
McGonigle, A. J. S.; James, M. R.; Tamburello, G.; Aiuppa, A.; Delle Donne, D.; Ripepe, M.
2016-01-01
Recent gas flux measurements have shown that Strombolian explosions are often followed by periods of elevated flux, or “gas codas,” with durations of order a minute. Here we present UV camera data from 200 events recorded at Stromboli volcano to constrain the nature of these codas for the first time, providing estimates for combined explosion plus coda SO2 masses of ≈18–225 kg. Numerical simulations of gas slug ascent show that substantial proportions of the initial gas mass can be distributed into a train of “daughter bubbles” released from the base of the slug, which, we suggest, generate the codas on bursting at the surface. This process could also cause transitioning of slugs into cap bubbles, significantly reducing explosivity. This study is the first attempt to combine high temporal resolution gas flux data with numerical simulations of conduit gas flow to investigate volcanic degassing dynamics. PMID:27478285
NASA Astrophysics Data System (ADS)
Inomata, Satoshi; Sato, Kei; Sakamoto, Yosuke; Hirokawa, Jun
2017-12-01
Secondary organic aerosol formation during the ozonolysis of isoprene and ethene in the presence of ammonium nitrate seed particles (surface area concentrations = (0.8–3) × 10⁷ nm² cm⁻³) was investigated using a 1 nm scanning mobility particle sizer. Based on the size distribution of formed particles, particles with a diameter smaller than the minimum diameter of the seed particles (less than ∼6 nm) formed under dry conditions, but the formation of such particles was substantially suppressed during isoprene ozonolysis and was not observed during ethene ozonolysis under humid conditions. We propose that oligomeric hydroperoxides generated by stabilized Criegee intermediates (sCIs), including C1-sCI (CH2OO), contribute to new particle formation while competing to be taken up onto preexisting particles. The OH reaction products of isoprene and ethene appear not to contribute to new particle formation; however, they are taken up onto preexisting particles and contribute to particle growth.
1982-07-01
Waste-heat steam generators: the applicable steam generator design concepts and general design considerations were reviewed and critical problems identified. A once-through forced-circulation steam generator design should be selected because of its stability, reliability, compactness, and light weight. The report consists of three sections and one appendix. In Section I, the applicable steam generator design concepts and general design considerations are reviewed.
NASA Astrophysics Data System (ADS)
Santarelli, M.; Leone, P.; Calì, M.; Orsello, G.
The tubular SOFC generator CHP-100, built by Siemens Power Generation (SPG) Stationary Fuel Cells (SFC), is running at Gas Turbine Technologies (GTT) in Torino (Italy), in the framework of the EOS Project. At nominal load the generator produces around 105 kWe AC of electric power and around 60 kWt of thermal power at 250 °C, used by the custom-tailored HVAC system. Several experimental sessions have been scheduled on the generator; the aim is to characterize the operation through the analysis of global performance indices and detailed monitoring of the operation of the different bundles of the whole stack. All the scheduled tests have been performed by applying the methodology of design of experiments; the main results show the effect of changing the analyzed operating factors on the distribution of voltage and temperature over the stack. Fuel consumption tests give information about the sensitivity of the voltage and temperature distribution along the single bundles. On the other hand, since the generator is an air-cooled system, the results of the tests on the air stoichs have been used to analyze the generator thermal management (temperature distribution and profiles) and its effect on the polarization. The sensitivity of the local voltage to overall fuel consumption modifications can be used as a powerful procedure to deduce the local distribution of fuel utilization (FU) along the single bundles: through a model obtained by differentiating the polarization curve with respect to FU, it is possible to link the distribution of voltage sensitivities to FU to the distribution of the local FU. The FU distribution is shown to be non-uniform, and this affects the local voltages and temperatures, causing a strong warming effect in some rows of the generator. Therefore, a discussion of the effectiveness of the thermal regulation provided by the air stoichs, in order to reduce the non-uniform temperature distribution and the overheating (and thereby improve the voltage behavior along the generator), has been performed. It is demonstrated that the use of a single air plenum is not effective for the thermal regulation of the whole generator, in particular for reducing the temperature gradients linked to the non-uniform fuel distribution.
Application Processing | Distributed Generation Interconnection
The rapid rise of distributed generation (DG) PV interconnection applications has prompted efforts to speed processing, reduce paperwork, and deliver swift customer service; webinars and publications on application processing are available.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alderfer, B.; Eldridge, M.; Starrs, T.
Distributed power is modular electric generation or storage located close to the point of use. Based on interviews of distributed generation project proponents, this report reviews the barriers that distributed generators of electricity are encountering when attempting to interconnect to the electrical grid. Descriptions of 26 of 65 case studies are included in the report. The survey found and the report describes a wide range of technical, business-practice, and regulatory barriers to interconnection. An action plan for reducing the impact of these barriers is also included.
Optimal Output of Distributed Generation Based On Complex Power Increment
NASA Astrophysics Data System (ADS)
Wu, D.; Bao, H.
2017-12-01
In order to meet the growing demand for electricity and improve the cleanliness of power generation, new energy generation, represented by wind power, photovoltaic power, etc., has been widely used. This new energy generation is connected to the distribution network in the form of distributed generation and is consumed by local loads. However, as the scale of distributed generation connected to the network increases, optimizing its power output becomes more and more important and needs further study. Classical optimization methods often use the extended sensitivity method to obtain the relationship between different generators, but ignoring the coupling parameters between nodes makes the results inaccurate; heuristic algorithms also have defects such as slow calculation speed and uncertain outcomes. This article proposes a method called complex power increment; the essence of this method is the analysis of the power grid under steady-state power flow. After analyzing the results, we can obtain the complex scaling function equation between the power supplies; the coefficients of the equation are based on the impedance parameters of the network, so the description of the relation of the variables to the coefficients is more precise. Thus, the method can accurately describe the power increment relationship, and can obtain the power optimization scheme more accurately and quickly than the extended sensitivity method and heuristic methods.
Optical power splitter for splitting high power light
English, Jr., Ronald E.; Christensen, John J.
1995-01-01
An optical power splitter for the distribution of high-power light energy has a plurality of prisms arranged about a central axis to form a central channel. The input faces of the prisms are in a common plane which is substantially perpendicular to the central axis. A beam of light which is substantially coaxial to the central axis is incident on the prisms and at least partially strikes a surface area of each prism input face. The incident beam also partially passes through the central channel.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wagner, Michael; Ma, Zhiwen; Martinek, Janna
An aspect of the present disclosure is a receiver for receiving radiation from a heliostat array that includes at least one external panel configured to form an internal cavity and an open face. The open face is positioned substantially perpendicular to a longitudinal axis and forms an entrance to the internal cavity. The receiver also includes at least one internal panel positioned within the cavity and aligned substantially parallel to the longitudinal axis, and the at least one internal panel includes at least one channel configured to distribute a heat transfer medium.
29 CFR 1910.269 - Electric power generation, transmission, and distribution.
Code of Federal Regulations, 2010 CFR
2010-07-01
29 Labor 5 2010-07-01 2010-07-01 false Electric power generation, transmission, and distribution. 1910.269 Section 1910.269 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR OCCUPATIONAL SAFETY AND HEALTH STANDARDS Special Industries § 1910.269 Electric power generation, transmission, and distribution.
29 CFR 1910.269 - Electric power generation, transmission, and distribution.
Code of Federal Regulations, 2014 CFR
2014-07-01
29 Labor 5 2014-07-01 2014-07-01 false Electric power generation, transmission, and distribution. 1910.269 Section 1910.269 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR OCCUPATIONAL SAFETY AND HEALTH STANDARDS Special Industries § 1910.269 Electric power generation, transmission, and distribution.
29 CFR 1910.269 - Electric power generation, transmission, and distribution.
Code of Federal Regulations, 2013 CFR
2013-07-01
29 Labor 5 2013-07-01 2013-07-01 false Electric power generation, transmission, and distribution. 1910.269 Section 1910.269 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR OCCUPATIONAL SAFETY AND HEALTH STANDARDS Special Industries § 1910.269 Electric power generation, transmission, and distribution.
29 CFR 1910.269 - Electric power generation, transmission, and distribution.
Code of Federal Regulations, 2012 CFR
2012-07-01
29 Labor 5 2012-07-01 2012-07-01 false Electric power generation, transmission, and distribution. 1910.269 Section 1910.269 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR OCCUPATIONAL SAFETY AND HEALTH STANDARDS Special Industries § 1910.269 Electric power generation, transmission, and distribution.
29 CFR 1910.269 - Electric power generation, transmission, and distribution.
Code of Federal Regulations, 2011 CFR
2011-07-01
29 Labor 5 2011-07-01 2011-07-01 false Electric power generation, transmission, and distribution. 1910.269 Section 1910.269 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR OCCUPATIONAL SAFETY AND HEALTH STANDARDS Special Industries § 1910.269 Electric power generation, transmission, and distribution.
Miroshnichenko, Iu V; Umarov, S Z
2012-12-01
One way to increase the effectiveness and safety of patients' medication supply is the use of automated distribution systems. These systems substantially increase the efficiency and safety of patients' medication supply, achieve significant savings of material and financial resources spent on medication assistance, and make possible the systematic improvement of its accessibility and quality.
Fluid delivery manifolds and microfluidic systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Renzi, Ronald F.; Sommer, Gregory J.; Singh, Anup K.
2017-02-28
Embodiments of fluid distribution manifolds, cartridges, and microfluidic systems are described herein. Fluid distribution manifolds may include an insert member and a manifold base and may define a substantially closed channel within the manifold when the insert member is press-fit into the base. Cartridges described herein may allow for simultaneous electrical and fluidic interconnection with an electrical multiplex board and may be held in place using magnetic attraction.
Managing Learning for Performance.
ERIC Educational Resources Information Center
Kuchinke, K. Peter
1995-01-01
Presents findings of organizational learning literature that could substantiate claims of learning organization proponents. Examines four learning processes and their contribution to performance-based learning management: knowledge acquisition, information distribution, information interpretation, and organizational memory. (SK)
Atmospheric Ionizing Radiation and Human Exposure
NASA Technical Reports Server (NTRS)
Wilson, John W.; Mertens, Christopher J.; Goldhagen, Paul; Friedberg, W.; DeAngelis, G.; Clem, J. M.; Copeland, K.; Bidasaria, H. B.
2005-01-01
Atmospheric ionizing radiation is of interest, apart from its main concern of aircraft exposures, because it is a principal source of human exposure to radiations with high linear energy transfer (LET). The ionizing radiations of the lower atmosphere near the Earth's surface tend to be dominated by the terrestrial radioisotopes, especially along the coastal plain and interior lowlands, and have only minor contributions from neutrons (11 percent). The world average is substantially larger, but the high-altitude cities especially have substantial contributions from neutrons (25 to 45 percent). Understanding the world distribution of neutron exposures requires an improved understanding of the latitudinal, longitudinal, altitude, and spectral distribution that depends on local terrain and time. These issues are being investigated in a combined experimental and theoretical program. This paper will give an overview of human exposures and describe the development of improved environmental models.
Atmospheric Ionizing Radiation and Human Exposure
NASA Technical Reports Server (NTRS)
Wilson, J. W.; Goldhagen, P.; Friedberg, W.; DeAngelis, G.; Clem, J. M.; Copeland, K.; Bidasaria, H. B.
2004-01-01
Atmospheric ionizing radiation is of interest, apart from its main concern of aircraft exposures, because it is a principal source of human exposure to radiations with high linear energy transfer (LET). The ionizing radiations of the lower atmosphere near the Earth's surface tend to be dominated by the terrestrial radioisotopes, especially along the coastal plain and interior lowlands, and have only minor contributions from neutrons (11 percent). The world average is substantially larger, but the high-altitude cities especially have substantial contributions from neutrons (25 to 45 percent). Understanding the world distribution of neutron exposures requires an improved understanding of the latitudinal, longitudinal, altitude, and spectral distribution that depends on local terrain and time. These issues are being investigated in a combined experimental and theoretical program. This paper will give an overview of human exposures and describe the development of improved environmental models.
Cooling system for a gas turbine
Wilson, Ian David; Salamah, Samir Armando; Bylina, Noel Jacob
2003-01-01
A plurality of arcuate circumferentially spaced supply and return manifold segments are arranged on the rim of a rotor for respectively receiving and distributing cooling steam through exit ports for distribution to first and second-stage buckets and receiving spent cooling steam from the first and second-stage buckets through inlet ports for transmission to axially extending return passages. Each of the supply and return manifold segments has a retention system for precluding substantial axial, radial and circumferential displacement relative to the rotor. The segments also include guide vanes for minimizing pressure losses in the supply and return of the cooling steam. The segments lie substantially equal distances from the centerline of the rotor and crossover tubes extend through each of the segments for communicating steam between the axially adjacent buckets of the first and second stages, respectively.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gruenbacher, Don
2015-12-31
This project addresses both fundamental and applied research problems related to the challenges defined by the DOE “20% Wind by 2030” report. In particular, this work focuses on increasing the capacity of small or community wind generation capabilities that would be operated in a distributed generation approach. A consortium (KWEC – Kansas Wind Energy Consortium) of researchers from Kansas State University and Wichita State University aims to dramatically increase the penetration of wind energy via distributed wind power generation. We believe distributed generation through wind power will play a critical role in the ability to reach and extend the renewable energy production targets set by the Department of Energy. KWEC aims to find technical and economic solutions to enable widespread implementation of distributed renewable energy resources that would apply to wind.
NASA Technical Reports Server (NTRS)
Kawa, Stephan R.; Baker, David Frank; Schuh, Andrew E.; Abshire, James Brice; Browell, Edward V.; Michalak, Anna M.
2012-01-01
The NASA ASCENDS mission (Active Sensing of CO2 Emissions over Nights, Days, and Seasons) is envisioned as the next generation of dedicated, space-based CO2 observing systems, currently planned for launch in about the year 2022. Recommended by the US National Academy of Sciences Decadal Survey, active (lidar) sensing of CO2 from space has several potentially significant advantages, in comparison to current and planned passive CO2 instruments, that promise to advance CO2 measurement capability and carbon cycle understanding into the next decade. Assessment and testing of possible lidar instrument technologies indicate that such sensors are more than feasible; however, the measurement precision and accuracy requirements remain at unprecedented levels of stringency. It is, therefore, important to quantitatively and consistently evaluate the measurement capabilities and requirements for the prospective active system in the context of advancing our knowledge of carbon flux distributions and their dependence on underlying physical processes. This amounts to establishing minimum requirements for precision, relative accuracy, spatial/temporal coverage and resolution, vertical information content, interferences, and possibly the tradeoffs among these parameters, while at the same time framing a mission that can be implemented within a constrained budget. Here, we present results of observing system simulation studies, commissioned by the ASCENDS Science Requirements Definition Team, for a range of possible mission implementation options that are intended to substantiate science measurement requirements for a laser-based CO2 space instrument.
Selimovic-Hamza, Senija; Boujon, Céline L; Hilbe, Monika; Oevermann, Anna; Seuberlich, Torsten
2017-01-18
Next-generation sequencing (NGS) has opened up the possibility of detecting new viruses in unresolved diseases. Recently, astrovirus brain infections have been identified in neurologically diseased humans and animals by NGS, among them bovine astrovirus (BoAstV) CH13/NeuroS1, which has been found in brain tissues of cattle with non-suppurative encephalitis. Only a few studies are available on neurotropic astroviruses and a causal relationship between BoAstV CH13/NeuroS1 infections and neurological disease has been postulated, but remains unproven. Aiming at making a step forward towards assessing the causality, we collected brain samples of 97 cases of cattle diagnosed with unresolved non-suppurative encephalitis, and analyzed them by in situ hybridization and immunohistochemistry, to determine the frequency and neuropathological distribution of the BoAstV CH13/NeuroS1 and its topographical correlation to the pathology. We detected BoAstV CH13/NeuroS1 RNA or proteins in neurons throughout all parts of the central nervous system (CNS) in 34% of all cases, but none were detected in cattle of the control group. In general, brain lesions had a high correlation with the presence of the virus. These findings show that a substantial proportion of cattle with non-suppurative encephalitis are infected with BoAstV CH13/NeuroS1 and further substantiate the causal relationship between neurological disease and astrovirus infections.
Selimovic-Hamza, Senija; Boujon, Céline L.; Hilbe, Monika; Oevermann, Anna; Seuberlich, Torsten
2017-01-01
Next-generation sequencing (NGS) has opened up the possibility of detecting new viruses in unresolved diseases. Recently, astrovirus brain infections have been identified in neurologically diseased humans and animals by NGS, among them bovine astrovirus (BoAstV) CH13/NeuroS1, which has been found in brain tissues of cattle with non-suppurative encephalitis. Only a few studies are available on neurotropic astroviruses and a causal relationship between BoAstV CH13/NeuroS1 infections and neurological disease has been postulated, but remains unproven. Aiming at making a step forward towards assessing the causality, we collected brain samples of 97 cases of cattle diagnosed with unresolved non-suppurative encephalitis, and analyzed them by in situ hybridization and immunohistochemistry, to determine the frequency and neuropathological distribution of the BoAstV CH13/NeuroS1 and its topographical correlation to the pathology. We detected BoAstV CH13/NeuroS1 RNA or proteins in neurons throughout all parts of the central nervous system (CNS) in 34% of all cases, but none were detected in cattle of the control group. In general, brain lesions had a high correlation with the presence of the virus. These findings show that a substantial proportion of cattle with non-suppurative encephalitis are infected with BoAstV CH13/NeuroS1 and further substantiate the causal relationship between neurological disease and astrovirus infections. PMID:28106800
Fuel to burn : economics of converting forest thinnings to energy using BioMax in southern Oregon
E.M. (Ted) Bilek; Kenneth E. Skog; Jeremy Fried; Glenn Christensen
2005-01-01
Small-scale gasification plants that generate electrical energy from forest health thinnings may have the potential to deliver substantial amounts of electricity to the national grid. We evaluated the economic feasibility of two sizes of BioMax, a generator manufactured by the Community Power Corporation of Littleton, Colorado. At current avoided-cost electricity...
Fuel to burn: economics of converting forest thinnings to energy using BioMax in southern Oregon.
E.M. (Ted) Bilek; Kenneth E. Skog; Jeremy Fried; Glenn Christensen
2005-01-01
Small-scale gasification plants that generate electrical energy from forest health thinnings may have the potential to deliver substantial amounts of electricity to the national grid. We evaluated the economic feasibility of two sizes of BioMax, a generator manufactured by the Community Power Corporation of Littleton, Colorado. At current avoided-cost electricity...
Voltage regulation in distribution networks with distributed generation
NASA Astrophysics Data System (ADS)
Blažič, B.; Uljanić, B.; Papič, I.
2012-11-01
The paper deals with the topic of voltage regulation in distribution networks with relatively high distributed energy resources (DER) penetration. The problem of voltage rise is described and different options for voltage regulation are given. The influence of DER on voltage profile and the effectiveness of the investigated solutions are evaluated by means of simulation in DIgSILENT. The simulated network is an actual distribution network in Slovenia with a relatively high penetration of distributed generation. Recommendations for voltage control in networks with DER penetration are given at the end.
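For orientation, the voltage-rise problem mentioned above is often introduced with the textbook first-order estimate ΔV ≈ (P·R + Q·X)/V for active and reactive power injected through a feeder series impedance R + jX. The sketch below uses this generic approximation; it is not the DIgSILENT simulation approach used in the paper, and the example numbers are illustrative.

```python
def voltage_rise_pu(p_inj_mw, q_inj_mvar, r_ohm, x_ohm, v_kv):
    """First-order estimate of the voltage rise (per unit) at a DG
    connection point behind a feeder with series impedance R + jX.

    Positive q_inj means the DG injects reactive power; absorbing
    reactive power (negative q_inj) reduces the rise.
    """
    v_volts = v_kv * 1e3
    dv_volts = (p_inj_mw * 1e6 * r_ohm + q_inj_mvar * 1e6 * x_ohm) / v_volts
    return dv_volts / v_volts

# Example: 2 MW injected through 0.3 ohm / 0.25 ohm at 20 kV -> about 0.0015 pu rise
print(voltage_rise_pu(2.0, 0.0, 0.3, 0.25, 20.0))
```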
Distributed Coordination of Energy Storage with Distributed Generators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Tao; Wu, Di; Stoorvogel, Antonie A.
2016-07-18
With a growing emphasis on energy efficiency and system flexibility, a great effort has been made recently in developing distributed energy resources (DER), including distributed generators and energy storage systems. This paper first formulates an optimal coordination problem considering constraints at both system and device levels, including the power balance constraint, generator output limits, storage energy and power capacities, and charging/discharging efficiencies. An algorithm is then proposed to dynamically and automatically coordinate DERs in a distributed manner. With the proposed algorithm, the agent at each DER only maintains a local incremental cost and updates it through information exchange with a few neighbors, without relying on any central decision maker. Simulation results are used to illustrate and validate the proposed algorithm.
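A minimal sketch of an incremental-cost consensus update of the kind described (each agent keeps a local incremental cost and averages it with neighbors) is given below. It is not the paper's exact update rule; the quadratic cost form, the feedback gain, and the assumption that every agent sees the total power mismatch are simplifications, and in a fully distributed implementation the mismatch would itself be estimated by consensus.

```python
import numpy as np

def consensus_dispatch(a, b, p_min, p_max, demand, W, iters=2000, eps=0.02):
    """Sketch of incremental-cost consensus for distributed dispatch.

    Unit i has quadratic cost a[i]*P^2 + b[i]*P and a local incremental
    cost lam[i]. Each step, lam is averaged with neighbours through the
    row-stochastic weight matrix W and nudged by the power mismatch.
    """
    n = len(a)
    lam = np.asarray(b, dtype=float).copy()
    for _ in range(iters):
        p = np.clip((lam - b) / (2 * a), p_min, p_max)   # local optimal output for given lam
        mismatch = demand - p.sum()                      # simplification: assumed known to all agents
        lam = W @ lam + eps * mismatch / n
    return p, lam

# Example with three units and a simple communication graph
a = np.array([0.010, 0.012, 0.008])
b = np.array([2.0, 1.8, 2.2])
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])
p, lam = consensus_dispatch(a, b, np.zeros(3), np.full(3, 100.0), demand=150.0, W=W)
print(p, p.sum())
```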
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weston, F.; Harrington, C.; Moskovitz, D.
Distributed resources can provide cost-effective reliability and energy services - in many cases, obviating the need for more expensive investments in wires and central station electricity generating facilities. Given the unique features of distributed resources, the challenge facing policymakers today is how to restructure wholesale markets for electricity and related services so as to reveal the full value that distributed resources can provide to the electric power system (utility grid). This report looks at the functions that distributed resources can perform and examines the barriers to them. It then identifies a series of policy and operational approaches to promoting DR in wholesale markets. This report is one in the State Electricity Regulatory Policy and Distributed Resources series developed under contract to NREL (see Annual Technical Status Report of the Regulatory Assistance Project: September 2000-September 2001, NREL/SR-560-32733). Other titles in this series are: (1) Distributed Resource Distribution Credit Pilot Programs - Revealing the Value to Consumers and Vendors, NREL/SR-560-32499; (2) Distributed Resources and Electric System Reliability, NREL/SR-560-32498; (3) Distribution System Cost Methodologies for Distributed Generation, NREL/SR-560-32500; (4) Distribution System Cost Methodologies for Distributed Generation Appendices, NREL/SR-560-32501
Optimal placement and sizing of wind / solar based DG sources in distribution system
NASA Astrophysics Data System (ADS)
Guan, Wanlin; Guo, Niao; Yu, Chunlai; Chen, Xiaoguang; Yu, Haiyang; Liu, Zhipeng; Cui, Jiapeng
2017-06-01
Proper placement and sizing of Distributed Generation (DG) in a distribution system can obtain maximum potential benefits. This paper proposes a quantum-behaved particle swarm optimization (QPSO) based wind turbine generation unit (WTGU) and photovoltaic (PV) array placement and sizing approach for real power loss reduction and voltage stability improvement of distribution systems. Performance models of wind and solar generation systems are described and classified into PQ, PQ(V), and PI type models for power flow. Because WTGU- and PV-based DGs in a distribution system are geographically restricted, the optimal area and the DG capacity limits of each bus in the candidate area need to be set before optimization; an area optimization method is therefore proposed. The method has been tested on the IEEE 33-bus radial distribution system to demonstrate the performance and effectiveness of the proposed approach.
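For reference, the quantum-behaved PSO update used in QPSO can be sketched as follows for a generic objective. The DG siting and sizing fitness (loss reduction, voltage stability) and the area and capacity limits described above would be encoded in the objective function and bounds; the coefficient schedule below is a common choice, not the authors' exact parameterization.

```python
import numpy as np

def qpso(objective, lb, ub, n_particles=30, iters=200, rng=None):
    """Minimal quantum-behaved PSO (QPSO) sketch.

    Each particle is re-sampled around a stochastic attractor
    p = u*pbest + (1-u)*gbest using the mean-best position ("mbest")
    and a contraction-expansion coefficient beta.
    """
    rng = np.random.default_rng(rng)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    dim = len(lb)
    x = rng.uniform(lb, ub, size=(n_particles, dim))
    pbest = x.copy()
    pbest_val = np.array([objective(xi) for xi in x])
    g = pbest[np.argmin(pbest_val)].copy()
    for t in range(iters):
        beta = 1.0 - 0.5 * t / iters                      # contraction-expansion coefficient
        mbest = pbest.mean(axis=0)
        u = rng.random((n_particles, dim))
        attractor = u * pbest + (1.0 - u) * g
        sign = np.where(rng.random((n_particles, dim)) < 0.5, 1.0, -1.0)
        step = beta * np.abs(mbest - x) * np.log(1.0 / (rng.random((n_particles, dim)) + 1e-16))
        x = np.clip(attractor + sign * step, lb, ub)
        val = np.array([objective(xi) for xi in x])
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, float(pbest_val.min())

# Example: minimize a simple quadratic as a stand-in for a loss/voltage fitness
best_x, best_f = qpso(lambda x: float(np.sum((x - 0.3) ** 2)), lb=[0, 0], ub=[1, 1])
```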
Generation of distributed W-states over long distances
NASA Astrophysics Data System (ADS)
Li, Yi
2017-08-01
Ultra-secure quantum communication between distant locations requires distributed entangled states between nodes. Various methodologies have been proposed to tackle this technological challenge, of which the so-called DLCZ protocol is the most promising and widely adopted scheme. This paper aims to extend this well-known protocol to a multi-node setting where the entangled W-state is generated between nodes over long distances. The generation of multipartite W-states is the foundation of quantum networks, paving the way for quantum communication and distributed quantum computation.
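To make the target state concrete, the short sketch below constructs the n-qubit W-state vector (|10…0⟩ + |01…0⟩ + … + |00…1⟩)/√n. It only illustrates the state itself and says nothing about the DLCZ-style generation protocol discussed in the paper.

```python
import numpy as np

def w_state(n):
    """Return the 2^n-dimensional state vector of the n-qubit W state."""
    psi = np.zeros(2 ** n, dtype=complex)
    for k in range(n):
        psi[1 << (n - 1 - k)] = 1.0   # basis state with a single excitation on qubit k
    return psi / np.sqrt(n)

# For n = 3 the nonzero amplitudes sit on |001>, |010>, |100>
print(np.nonzero(w_state(3))[0])
```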
Effects of independently altering body weight and body mass on the metabolic cost of running.
Teunissen, Lennart P J; Grabowski, Alena; Kram, Rodger
2007-12-01
The metabolic cost of running is substantial, despite the savings from elastic energy storage and return. Previous studies suggest that generating vertical force to support body weight and horizontal forces to brake and propel body mass are the major determinants of the metabolic cost of running. In the present study, we investigated how independently altering body weight and body mass affects the metabolic cost of running. Based on previous studies, we hypothesized that reducing body weight would decrease metabolic rate proportionally, and adding mass and weight would increase metabolic rate proportionally. Further, because previous studies show that adding mass alone does not affect the forces generated on the ground, we hypothesized that adding mass alone would have no substantial effect on metabolic rate. We manipulated the body weight and body mass of 10 recreational human runners and measured their metabolic rates while they ran at 3 m s⁻¹. We reduced weight using a harness system, increased mass and weight using lead worn about the waist, and increased mass alone using a combination of weight support and added load. We found that net metabolic rate decreased in less than direct proportion to reduced body weight, increased in slightly more than direct proportion to added load (added mass and weight), and was not substantially different from normal running with added mass alone. Adding mass alone was not an effective method for determining the metabolic cost attributable to braking/propelling body mass. Runners loaded with mass alone did not generate greater vertical or horizontal impulses and their metabolic costs did not substantially differ from those of normal running. Our results show that generating force to support body weight is the primary determinant of the metabolic cost of running. Extrapolating our reduced weight data to zero weight suggests that supporting body weight comprises at most 74% of the net cost of running. However, 74% is probably an overestimate of the metabolic demand of body weight to support itself because in reduced gravity conditions decrements in horizontal impulse accompanied decrements in vertical impulse.
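The zero-weight extrapolation mentioned at the end can be illustrated with a simple linear fit; the net metabolic rates below are hypothetical placeholders, not the study's measurements, and serve only to show the arithmetic.

```python
import numpy as np

# Illustrative (not measured) net metabolic rates in W/kg at reduced body weight
weight_fraction = np.array([1.00, 0.75, 0.50, 0.25])   # fraction of body weight supported by the runner
net_rate = np.array([10.0, 8.4, 6.7, 5.1])             # hypothetical net metabolic rate

slope, intercept = np.polyfit(weight_fraction, net_rate, 1)
rate_at_zero_weight = intercept                          # extrapolated cost with no weight to support
weight_support_share = 1.0 - rate_at_zero_weight / net_rate[0]
print(f"share of net cost attributable to weight support ~ {weight_support_share:.0%}")
```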
NASA Technical Reports Server (NTRS)
Schaack, Todd K.; Lenzen, Allen J.; Johnson, Donald R.
1991-01-01
This study surveys the large-scale distribution of heating for January 1979 obtained from five sources of information. Through intercomparison of these distributions, with emphasis on satellite-derived information, an investigation is conducted into the global distribution of atmospheric heating and the impact of observations on the diagnostic estimates of heating derived from assimilated datasets. The results indicate a substantial impact of satellite information on diagnostic estimates of heating in regions where there is a scarcity of conventional observations. The addition of satellite data provides information on the atmosphere's temperature and wind structure that is important for estimation of the global distribution of heating and energy exchange.
Online Optimization Method for Operation of Generators in a Micro Grid
NASA Astrophysics Data System (ADS)
Hayashi, Yasuhiro; Miyamoto, Hideki; Matsuki, Junya; Iizuka, Toshio; Azuma, Hitoshi
Recently, many studies and developments concerning distributed generators such as photovoltaic generation systems, wind turbine generation systems, and fuel cells have been carried out against the background of global environmental issues and deregulation of the electricity market, and the technologies of these distributed generators have progressed. In particular, the micro grid, which consists of several distributed generators, loads, and a storage battery, is expected to become one of the new operation systems for distributed generation. However, since precipitous load fluctuations occur in a micro grid because of its smaller capacity compared with a conventional power system, high-accuracy load forecasting and control schemes to balance supply and demand are needed. Namely, it is necessary to improve the precision of micro grid operation by observing load fluctuations and correcting the start-stop schedule and output of generators online. But it is not easy to determine the operation schedule of each generator in a short time, because determining the start-up, shut-down, and output of each generator in a micro grid is a mixed integer programming problem. In this paper, the authors propose an online optimization method for the optimal operation schedule of generators in a micro grid. The proposed method is based on an enumeration method and particle swarm optimization (PSO). In the proposed method, after all unit commitment patterns of each generator that satisfy the minimum up-time and minimum down-time constraints are enumerated, the optimal schedule and output of the generators are determined under the other operational constraints by using PSO. Numerical simulation is carried out for a micro grid model with five generators and a photovoltaic generation system in order to examine the validity of the proposed method.
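The enumeration step described above (listing all unit-commitment patterns that respect minimum up- and down-times before dispatching outputs with PSO) can be sketched as follows for small horizons. Treating the first and last runs of the schedule as continuations of states outside the horizon is a simplification; the paper's exact constraint handling and the subsequent PSO dispatch are not reproduced.

```python
from itertools import product

def feasible_patterns(horizon, min_up, min_down):
    """Enumerate on/off schedules of length `horizon` whose interior runs
    respect minimum up-time and minimum down-time (small horizons only)."""
    def run_lengths_ok(seq):
        runs, prev, length = [], seq[0], 1
        for s in seq[1:]:
            if s == prev:
                length += 1
            else:
                runs.append((prev, length))
                prev, length = s, 1
        runs.append((prev, length))
        # Only interior runs are constrained; boundary runs may continue
        # states that started before (or end after) the horizon.
        for state, run_len in runs[1:-1]:
            if state == 1 and run_len < min_up:
                return False
            if state == 0 and run_len < min_down:
                return False
        return True
    return [seq for seq in product((0, 1), repeat=horizon) if run_lengths_ok(seq)]

# Number of feasible 6-period on/off patterns with 2-period minimum up and down times
print(len(feasible_patterns(6, min_up=2, min_down=2)))
```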
Polarization-multiplexed plasmonic phase generation with distributed nanoslits.
Lee, Seung-Yeol; Kim, Kyuho; Lee, Gun-Yeal; Lee, Byoungho
2015-06-15
Methods for multiplexing surface plasmon polaritons (SPPs) have been attracting much attention due to their potential for plasmonic integrated systems, plasmonic holography, and optical tweezing. Here, using closely-distanced distributed nanoslits, we propose a method for generating polarization-multiplexed SPP phase profiles which can be applied to implement general SPP phase distributions. Two independent types of SPP phase generation mechanisms - polarization-independent and polarization-reversible ones - are combined to generate fully arbitrary phase profiles for each optical handedness. As a simple verification of the proposed scheme, we experimentally demonstrate that the location of the plasmonic focus can be arbitrarily designed and switched by the change of optical handedness.
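The combination of the two mechanisms can be illustrated by the phase decomposition implied above: if the left- and right-handed target profiles are written as phi_L = phi_ind + phi_rev and phi_R = phi_ind - phi_rev, the polarization-independent and polarization-reversible parts follow directly. This decomposition is an assumed reading of the scheme; the mapping from these parts to individual slit positions and orientations is the authors' design step and is not reproduced here, and the SPP wavelength and focal geometry below are illustrative.

```python
import numpy as np

def decompose_phase(phi_left, phi_right):
    """Split target SPP phase profiles for left/right circular polarization into a
    polarization-independent term and a polarization-reversible term, assuming
    phi_L = phi_ind + phi_rev and phi_R = phi_ind - phi_rev."""
    phi_left, phi_right = np.asarray(phi_left), np.asarray(phi_right)
    phi_ind = 0.5 * (phi_left + phi_right)
    phi_rev = 0.5 * (phi_left - phi_right)
    return phi_ind, phi_rev

# Example: focus at x = +5 um for LCP and x = -5 um for RCP along a slit array
x = np.linspace(-10e-6, 10e-6, 201)
k_spp = 2 * np.pi / 600e-9                        # assumed SPP wavelength of 600 nm
phi_L = -k_spp * np.hypot(x - 5e-6, 2e-6)          # converging phase towards (+5 um, 2 um)
phi_R = -k_spp * np.hypot(x + 5e-6, 2e-6)
phi_ind, phi_rev = decompose_phase(phi_L, phi_R)
```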
NASA Astrophysics Data System (ADS)
Darghouth, Naim Richard
Net metering has become a widespread policy mechanism in the U.S. for supporting customer adoption of distributed photovoltaics (PV), allowing customers with PV systems to reduce their electric bills by offsetting their consumption with PV generation, independent of the timing of the generation relative to consumption. Although net metering is one of the principal drivers for the residential PV market in the U.S., the academic literature on this policy has been sparse and this dissertation contributes to this emerging body of literature. This dissertation explores the linkages between the availability of net metering, wholesale electricity market conditions, retail rates, and the residential bill savings from behind-the-meter PV systems. First, I examine the value of the bill savings that customers receive under net metering and alternatives to net metering, and the associated role of retail rate design, based on current rates and a sample of approximately two hundred residential customers of California's two largest electric utilities. I find that the bill savings per kWh of PV electricity generated varies greatly, largely attributable to the increasing block structure of the California utilities' residential retail rates. I also find that net metering provides significantly greater bill savings than alternative compensation mechanisms based on avoided costs. However, retail electricity rates may shift as wholesale electricity market conditions change. I then investigate the effect of a potential change in market conditions -- increased solar PV penetration -- on short-term wholesale prices based on the merit-order effect. This demonstrates the potential price effects of changes in market conditions, but also points to a number of methodological shortcomings of this method, motivating my use of a long-term capacity investment and economic dispatch model to examine the wholesale price effects of various wholesale market scenarios in the subsequent analysis. By developing three types of retail rates (a flat rate, a time-of-use rate, and real-time pricing) from these wholesale price profiles, I examine bill savings from PV generation for the ten wholesale market scenarios under net metering and an alternative to net metering where hourly excess PV generation is compensated at the wholesale price. Most generally, I challenge the common assertion that PV compensation is likely to stay constant (or rise) due to constant (or rising) retail rates, and find that future electricity market scenarios can drive substantial changes in residential retail rates and that these changes, in concert with variations in retail rate structures and PV compensation mechanisms, interact to place substantial uncertainty on the future value of bill savings from residential PV.
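The contrast between net metering and an hourly-netted alternative can be sketched as a bill calculation: under net metering, every PV kWh offsets consumption at the retail rate regardless of timing (up to annual consumption), whereas under the alternative only coincident generation offsets retail purchases and hourly surplus is credited at a wholesale or avoided-cost rate. The flat retail rate below is a simplifying assumption; the dissertation itself uses tiered, time-of-use, and real-time rates.

```python
import numpy as np

def annual_bill(load_kwh, pv_kwh, retail=0.20, wholesale=0.04, net_metering=True):
    """Compare yearly bills with and without net metering for hourly
    load and PV generation profiles (flat retail rate for simplicity)."""
    load_kwh, pv_kwh = np.asarray(load_kwh, float), np.asarray(pv_kwh, float)
    if net_metering:
        # Annual netting: every PV kWh offsets a consumed kWh at the retail
        # rate, up to total annual consumption.
        net_consumption = max(load_kwh.sum() - pv_kwh.sum(), 0.0)
        return retail * net_consumption
    # Hourly netting: only coincident generation offsets retail purchases;
    # hourly surplus is credited at the wholesale/avoided-cost rate.
    imports = np.maximum(load_kwh - pv_kwh, 0.0)
    exports = np.maximum(pv_kwh - load_kwh, 0.0)
    return retail * imports.sum() - wholesale * exports.sum()
```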
Okeniyi, Joshua O; Atayero, Aderemi A; Popoola, Segun I; Okeniyi, Elizabeth T; Alalade, Gbenga M
2018-04-01
This data article presents comparisons of energy generation costs from gas-fired turbine and diesel-powered systems of the distributed generation type of electrical energy in Covenant University, Ota, Nigeria, a smart university campus driven by Information and Communication Technologies (ICT). Cumulative monthly data on the energy generation costs, for consumption in the institution, from the two modes of electric power, which was produced at locations close to the community consuming the energy, were recorded for the period spanning January to December 2017. Energy generation costs for the turbine system arise from gas firing, whereas the cost data for the diesel-powered generator also include maintenance costs for this mode of electrical power generation. The cost data are presented in tables and graphs, and descriptive probability distributions and goodness-of-fit tests of statistical significance are employed as the methods for detailing and comparing the data. The information detailed in these energy generation cost data is useful for furthering research developments and aiding energy stakeholders and decision-makers in the formulation of policies on energy generation modes and economic valuation, in terms of costing and management, for attaining an energy-efficient, smart educational environment.
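The kind of descriptive statistics and goodness-of-fit testing the article describes can be sketched as below. The monthly cost figures shown are placeholders rather than the published Covenant University data, and the choice of the Normal model, the Kolmogorov-Smirnov test, and the Mann-Whitney comparison is an assumption for illustration only.

```python
import numpy as np
from scipy import stats

# Hypothetical monthly generation costs (arbitrary currency units); the published
# data article tabulates the actual 2017 figures for the two systems.
gas_turbine = np.array([41, 39, 44, 46, 43, 40, 42, 45, 47, 44, 41, 43], dtype=float)
diesel      = np.array([55, 58, 61, 57, 63, 60, 59, 62, 64, 61, 58, 60], dtype=float)

for name, cost in (("gas turbine", gas_turbine), ("diesel", diesel)):
    mu, sigma = cost.mean(), cost.std(ddof=1)
    ks = stats.kstest(cost, "norm", args=(mu, sigma))     # goodness of fit to the Normal model
    print(f"{name}: mean={mu:.1f}, sd={sigma:.1f}, KS p-value={ks.pvalue:.2f}")

# Nonparametric comparison of the two monthly cost series
print(stats.mannwhitneyu(gas_turbine, diesel))
```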
R.A. Payn; M.N. Gooseff; B.L. McGlynn; K.E. Bencala; S.M. Wondzell
2012-01-01
Relating watershed structure to streamflow generation is a primary focus of hydrology. However, comparisons of longitudinal variability in stream discharge with adjacent valley structure have been rare, resulting in poor understanding of the distribution of the hydrologic mechanisms that cause variability in streamflow generation along valleys. This study explores...
Soroye, Peter; Ahmed, Najeeba; Kerr, Jeremy T
2018-06-19
Opportunistic citizen science (CS) programs allow volunteers to report species observations from anywhere, at any time, and can assemble large volumes of historic and current data at faster rates than more coordinated programs with standardized data collection. This can quickly provide large amounts of species distributional data, but whether this focus on participation comes at a cost in data quality is not clear. While automated and expert vetting can increase data reliability, there is no guarantee that opportunistic data will do anything more than confirm information from professional surveys. Here, we use eButterfly, an opportunistic CS program, and a comparable dataset of professionally collected observations, to measure the amount of new distributional species information that opportunistic CS generates. We also test how well opportunistic CS can estimate regional species richness for a large group of taxa (>300 butterfly species) across a broad area. We find that eButterfly contributes new distributional information for >80% of species, and that opportunistically submitting observations allowed volunteers to spot species ~35 days earlier than professionals. While eButterfly did a relatively poor job at predicting regional species richness by itself (detecting only about 35-57% of species per region), it significantly contributed to regional species richness when used with the professional dataset (adding ~3 species that had gone undetected in professional surveys per region). Overall, we find that the opportunistic CS model can provide substantial complementary species information when used alongside professional survey data. Our results suggest that data from opportunistic CS programs in conjunction with professional datasets can strongly increase the capacity of researchers to estimate species richness, and provide unique information on species distributions and phenologies that are relevant to the detection of the biological consequences of global change.
Diagnostic and model dependent uncertainty of simulated Tibetan permafrost area
Wang, A.; Moore, J.C.; Cui, Xingquan; Ji, D.; Li, Q.; Zhang, N.; Wang, C.; Zhang, S.; Lawrence, D.M.; McGuire, A.D.; Zhang, W.; Delire, C.; Koven, C.; Saito, K.; MacDougall, A.; Burke, E.; Decharme, B.
2016-01-01
We perform a land-surface model intercomparison to investigate how the simulation of permafrost area on the Tibetan Plateau (TP) varies among six modern stand-alone land-surface models (CLM4.5, CoLM, ISBA, JULES, LPJ-GUESS, UVic). We also examine the variability in simulated permafrost area and distribution introduced by five different methods of diagnosing permafrost (from modeled monthly ground temperature, mean annual ground and air temperatures, air and surface frost indexes). There is good agreement (99 to 135 × 10⁴ km²) between the two diagnostic methods based on air temperature, which are also consistent with the observation-based estimate of actual permafrost area (101 × 10⁴ km²). However, the uncertainty (1 to 128 × 10⁴ km²) using the three methods that require simulation of ground temperature is much greater. Moreover, simulated permafrost distribution on the TP is generally only fair to poor for these three methods (diagnosis of permafrost from monthly and mean annual ground temperature, and from the surface frost index), while permafrost distribution using air-temperature-based methods is generally good. Model evaluation at field sites highlights specific problems in process simulations likely related to soil texture specification, vegetation types and snow cover. Models are particularly poor at simulating permafrost distribution using the definition that soil temperature remains at or below 0 °C for 24 consecutive months, which requires reliable simulation of both mean annual ground temperatures and the seasonal cycle, and hence is relatively demanding. Although models can produce better permafrost maps using mean annual ground temperature and the surface frost index, analysis of simulated soil temperature profiles reveals substantial biases. The current generation of land-surface models needs to reduce biases in simulated soil temperature profiles before reliable contemporary permafrost maps and predictions of changes in future permafrost distribution can be made for the Tibetan Plateau.
Diagnostic and model dependent uncertainty of simulated Tibetan permafrost area
NASA Astrophysics Data System (ADS)
Wang, W.; Rinke, A.; Moore, J. C.; Cui, X.; Ji, D.; Li, Q.; Zhang, N.; Wang, C.; Zhang, S.; Lawrence, D. M.; McGuire, A. D.; Zhang, W.; Delire, C.; Koven, C.; Saito, K.; MacDougall, A.; Burke, E.; Decharme, B.
2016-02-01
We perform a land-surface model intercomparison to investigate how the simulation of permafrost area on the Tibetan Plateau (TP) varies among six modern stand-alone land-surface models (CLM4.5, CoLM, ISBA, JULES, LPJ-GUESS, UVic). We also examine the variability in simulated permafrost area and distribution introduced by five different methods of diagnosing permafrost (from modeled monthly ground temperature, mean annual ground and air temperatures, air and surface frost indexes). There is good agreement (99 to 135 × 10⁴ km²) between the two diagnostic methods based on air temperature, which are also consistent with the observation-based estimate of actual permafrost area (101 × 10⁴ km²). However, the uncertainty (1 to 128 × 10⁴ km²) using the three methods that require simulation of ground temperature is much greater. Moreover, simulated permafrost distribution on the TP is generally only fair to poor for these three methods (diagnosis of permafrost from monthly and mean annual ground temperature, and from the surface frost index), while permafrost distribution using air-temperature-based methods is generally good. Model evaluation at field sites highlights specific problems in process simulations likely related to soil texture specification, vegetation types and snow cover. Models are particularly poor at simulating permafrost distribution using the definition that soil temperature remains at or below 0 °C for 24 consecutive months, which requires reliable simulation of both mean annual ground temperatures and the seasonal cycle, and hence is relatively demanding. Although models can produce better permafrost maps using mean annual ground temperature and the surface frost index, analysis of simulated soil temperature profiles reveals substantial biases. The current generation of land-surface models needs to reduce biases in simulated soil temperature profiles before reliable contemporary permafrost maps and predictions of changes in future permafrost distribution can be made for the Tibetan Plateau.
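One of the air-temperature-based diagnostics referred to above is the air frost index. A common formulation (after Nelson and Outcalt) classifies a location as permafrost when F = √DDF / (√DDF + √DDT) exceeds 0.5, with DDF and DDT the annual freezing and thawing degree-day sums; the sketch below uses this standard form, which may differ in detail from the implementation used in the intercomparison.

```python
import numpy as np

def air_frost_index(daily_mean_temp_c):
    """Frost index F = sqrt(DDF) / (sqrt(DDF) + sqrt(DDT)) from one year of
    daily mean air temperatures (deg C); F > 0.5 is commonly taken to indicate permafrost."""
    t = np.asarray(daily_mean_temp_c, dtype=float)
    ddf = np.sum(np.clip(-t, 0.0, None))   # freezing degree-days
    ddt = np.sum(np.clip(t, 0.0, None))    # thawing degree-days
    return np.sqrt(ddf) / (np.sqrt(ddf) + np.sqrt(ddt) + 1e-12)

# Example with a synthetic annual cycle: mean -4 C, amplitude 12 C
days = np.arange(365)
temps = -4.0 + 12.0 * np.sin(2 * np.pi * days / 365.0)
print(air_frost_index(temps) > 0.5)
```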
Selbig, William R.; Bannerman, Roger T.
2011-01-01
The U.S. Geological Survey, in cooperation with the Wisconsin Department of Natural Resources (WDNR) and in collaboration with the Root River Municipal Stormwater Permit Group, monitored eight urban source areas representing six types of source areas in or near Madison, Wis., in an effort to improve characterization of particle-size distributions in urban stormwater by use of fixed-point sample collection methods. The types of source areas were parking lot, feeder street, collector street, arterial street, rooftop, and mixed use. This information can then be used by environmental managers and engineers when selecting the most appropriate control devices for the removal of solids from urban stormwater. Mixed-use and parking-lot study areas had the lowest median particle sizes (42 and 54 µm, respectively), followed by the collector street study area (70 µm). Both arterial street and institutional roof study areas had similar median particle sizes of approximately 95 µm. Finally, the feeder street study area showed the largest median particle size of nearly 200 µm. Median particle sizes measured as part of this study were somewhat comparable to those reported in previous studies from similar source areas. The majority of particle mass in four out of six source areas was silt and clay particles that are less than 32 µm in size. Distributions of particles ranging from 500 µm were highly variable both within and between source areas. Results of this study suggest substantial variability in data can inhibit the development of a single particle-size distribution that is representative of stormwater runoff generated from a single source area or land use. Continued development of improved sample collection methods, such as the depth-integrated sample arm, may reduce variability in particle-size distributions by mitigating the effect of sediment bias inherent with a fixed-point sampler.
Barnette, Daniel W.
2002-01-01
The present invention provides a method of grid generation that uses the geometry of the problem space and the governing relations to generate a grid. The method can generate a grid with minimized discretization errors, and with minimal user interaction. The method of the present invention comprises assigning grid cell locations so that, when the governing relations are discretized using the grid, at least some of the discretization errors are substantially zero. Conventional grid generation is driven by the problem space geometry; grid generation according to the present invention is driven by problem space geometry and by governing relations. The present invention accordingly can provide two significant benefits: more efficient and accurate modeling since discretization errors are minimized, and reduced cost grid generation since less human interaction is required.
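The abstract does not give the algorithm itself, but a loosely related and much simpler idea, error-equidistribution grid placement, can illustrate how grid locations may be driven by an error estimate rather than by geometry alone. The function name and the use of |f''| as the local error proxy below are assumptions chosen for illustration, not a reconstruction of the patented method.

```python
import numpy as np

def equidistribute_grid(f, a, b, n, iters=20):
    """Place n+1 grid points in [a, b] so that an estimate of the local
    discretization error (proportional to |f''| * h^2 here) is equalized
    across cells; an illustrative stand-in for error-driven grid generation."""
    x = np.linspace(a, b, n + 1)
    for _ in range(iters):
        xm = 0.5 * (x[:-1] + x[1:])                 # cell midpoints
        h = np.diff(x)                              # cell widths
        curvature = np.abs(np.gradient(np.gradient(f(xm), xm), xm)) + 1e-12
        err = curvature * h**2                      # per-cell error estimate
        cum = np.concatenate(([0.0], np.cumsum(err)))
        # Re-place nodes so each cell carries an equal share of the error
        x = np.interp(np.linspace(0.0, cum[-1], n + 1), cum, x)
    return x

# Example: points cluster where the profile curves sharply
grid = equidistribute_grid(lambda x: np.tanh(20 * (x - 0.5)), 0.0, 1.0, 40)
```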
Natural Language Interface for Safety Certification of Safety-Critical Software
NASA Technical Reports Server (NTRS)
Denney, Ewen; Fischer, Bernd
2011-01-01
Model-based design and automated code generation are being used increasingly at NASA. The trend is to move beyond simulation and prototyping to actual flight code, particularly in the guidance, navigation, and control domain. However, there are substantial obstacles to more widespread adoption of code generators in such safety-critical domains. Since code generators are typically not qualified, there is no guarantee that their output is correct, and consequently the generated code still needs to be fully tested and certified. The AutoCert generator plug-in supports the certification of automatically generated code by formally verifying that the generated code is free of different safety violations, by constructing an independently verifiable certificate, and by explaining its analysis in a textual form suitable for code reviews.
NASA Astrophysics Data System (ADS)
Yokomizu, Yasunobu
Dispersed generation systems, such as micro gas turbines and fuel cells, have been installed at some commercial facilities. Smaller dispersed generators, such as solar photovoltaics, have also been installed at individual homes. These trends in the introduction of dispersed generation are expected to continue, so that future power systems will contain an enormous number of dispersed generation systems. The present report discusses such near-future power distribution systems.
Moon, Hyun Ho; Lee, Jong Joo; Choi, Sang Yule; Cha, Jae Sang; Kang, Jang Mook; Kim, Jong Tae; Shin, Myong Chul
2011-01-01
Recently there have been many studies of power systems with a focus on “New and Renewable Energy” as part of the “New Growth Engine Industry” promoted by the Korean government. “New and Renewable Energy”—especially wind energy, solar energy and fuel cells that will replace conventional fossil fuels—is part of the Power-IT Sector, which is the basis of the SmartGrid. A SmartGrid is a highly efficient, intelligent electricity network that allows interactivity (two-way communication) between suppliers and consumers by utilizing information technology in electricity production, transmission, distribution and consumption. The New and Renewable Energy Program has been driven, through intensive studies by public and private institutions, with the goal of developing and spreading new and renewable energy which, unlike conventional systems, is operated through connections with various kinds of distributed power generation systems. Considerable research on smart grids has been pursued in the United States and Europe. In the United States, a variety of research activities on the smart power grid have been conducted within EPRI’s IntelliGrid research program. The European Union (EU), which represents Europe’s Smart Grid policy, has focused on an expansion of distributed generation (decentralized generation) and power trade between countries with improved environmental protection. Thus, there is a current emphasis on the need for studies that assess the economic efficiency of such distributed generation systems. In this paper, based on the cost of distributed power generation capacity, the best obtainable profits were calculated by Monte Carlo simulation. Monte Carlo simulations, which rely on repeated random sampling to compute their results, take into account the cost of electricity production, daily loads and the cost of sales, and generate a result faster than exact analytical computation. In addition, we suggest an optimal design that considers the distribution losses of the power distribution system, with a focus on sensing and distributed power generation. PMID:22164047
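As a rough illustration of the kind of Monte Carlo profit calculation described, the sketch below samples hypothetical production costs, sale prices and daily load factors for a distributed generator. All parameter values and variable names are assumptions for demonstration, not figures from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical parameters (assumptions for illustration only)
capacity_kw = 500.0                                       # installed DG capacity
fuel_cost = rng.normal(0.09, 0.01, size=10_000)           # $/kWh production cost
sale_price = rng.normal(0.13, 0.02, size=10_000)          # $/kWh sale price
daily_load_factor = rng.uniform(0.55, 0.85, size=10_000)  # share of capacity dispatched

energy_kwh = capacity_kw * 24 * daily_load_factor
daily_profit = energy_kwh * (sale_price - fuel_cost)

print(f"expected daily profit: ${daily_profit.mean():,.0f}")
print(f"5th-95th percentile:   ${np.percentile(daily_profit, 5):,.0f} "
      f"to ${np.percentile(daily_profit, 95):,.0f}")
```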
Tengku Hashim, Tengku Juhana; Mohamed, Azah
2017-01-01
The growing interest in distributed generation (DG) in recent years has led to a number of generators being connected to distribution systems. The integration of DGs in a distribution system results in a network known as an active distribution network, owing to the existence of bidirectional power flow in the system. The voltage rise issue is one of the most important technical issues to be addressed when DGs exist in an active distribution network. This paper presents the application of the backtracking search algorithm (BSA), a relatively new optimisation technique, to determine the optimal settings of coordinated voltage control in a distribution system. The coordinated voltage control considers power factor, on-load tap-changer and generation curtailment control to manage the voltage rise issue. A multi-objective function is formulated to minimise total losses and voltage deviation in a distribution system. The proposed BSA is compared with particle swarm optimisation (PSO) so as to evaluate its effectiveness in determining the optimal settings of power factor, tap-changer and percentage of active power generation to be curtailed. The load flow algorithm from MATPOWER is integrated in the MATLAB environment to solve the multi-objective optimisation problem. Both the BSA and PSO optimisation techniques have been tested on a radial 13-bus distribution system and the results show that the BSA performs better than PSO by providing a better fitness value and convergence rate. PMID:28991919
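A minimal sketch of the kind of weighted multi-objective fitness the abstract describes is given below. The weights, the per-unit voltage reference and the function name are assumptions; in a real evaluation the bus voltages and losses would come from a load flow (for example MATPOWER's runpf) run for each candidate setting of power factor, tap position and curtailment.

```python
import numpy as np

def fitness(v_pu, losses_kw, w_loss=0.5, w_volt=0.5, v_ref=1.0):
    """Weighted multi-objective fitness combining total real power losses and
    bus voltage deviation (hypothetical weighting; the paper's exact
    formulation may differ)."""
    voltage_deviation = np.sum(np.abs(np.asarray(v_pu) - v_ref))
    return w_loss * np.sum(losses_kw) + w_volt * voltage_deviation

# A candidate solution (power factor, tap position, % curtailment) would be
# evaluated by running a load flow and passing its voltages and losses here.
example = fitness(v_pu=[1.03, 1.01, 0.99, 1.04], losses_kw=[12.5, 8.3, 4.1])
```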
Southwest Michigan nonmotorized investment plan
DOT National Transportation Integrated Search
2001-09-01
The provisions of federal law and regulation, along with substantial federal Enhancement Program investment, have generated increased activity involving nonmotorized transportation in all corners of MDOT. To effectively respond to the questions now a...
The process group approach to reliable distributed computing
NASA Technical Reports Server (NTRS)
Birman, Kenneth P.
1991-01-01
The difficulty of developing reliable distributed software is an impediment to applying distributed computing technology in many settings. Experience with the ISIS system suggests that a structured approach based on virtually synchronous process groups yields systems which are substantially easier to develop, fault-tolerant, and self-managing. Six years of research on ISIS are reviewed, describing the model, the types of applications to which ISIS was applied, and some of the reasoning that underlies a recent effort to redesign and reimplement ISIS as a much smaller, lightweight system.
Proton Radiotherapy for Pediatric Central Nervous System Germ Cell Tumors: Early Clinical Outcomes
DOE Office of Scientific and Technical Information (OSTI.GOV)
MacDonald, Shannon M., E-mail: smacdonald@partners.or; Trofimov, Alexei; Safai, Sairos
Purpose: To report early clinical outcomes for children with central nervous system (CNS) germ cell tumors treated with protons; to compare dose distributions for intensity-modulated photon radiotherapy (IMRT), three-dimensional conformal proton radiation (3D-CPT), and intensity-modulated proton therapy with pencil beam scanning (IMPT) for whole-ventricular irradiation with and without an involved-field boost. Methods and Materials: All children with CNS germinoma or nongerminomatous germ cell tumor who received treatment at the Massachusetts General Hospital between 1998 and 2007 were included in this study. The IMRT, 3D-CPT, and IMPT plans were generated and compared for a representative case. Results: Twenty-two patients were treated with 3D-CPT. At a median follow-up of 28 months, there were no CNS recurrences; 1 patient had a recurrence outside the CNS. Local control, progression-free survival, and overall survival rates were 100%, 95%, and 100%, respectively. Comparable tumor volume coverage was achieved with IMRT, 3D-CPT, and IMPT. Substantial normal tissue sparing was seen with any form of proton therapy as compared with IMRT. The use of IMPT may yield additional sparing of the brain and temporal lobes. Conclusions: Preliminary disease control with proton therapy compares favorably to the literature. Dosimetric comparisons demonstrate the advantage of proton radiation over IMRT for whole-ventricle radiation. Superior dose distributions were accomplished with fewer beam angles utilizing 3D-CPT and scanned protons. Intensity-modulated proton therapy with pencil beam scanning may improve dose distribution as compared with 3D-CPT for this treatment.
Pressman, Alice R; Avins, Andrew L; Hubbard, Alan; Satariano, William A
2011-07-01
There is a paucity of literature comparing Bayesian analytic techniques with traditional approaches for analyzing clinical trials using real trial data. We compared Bayesian and frequentist group sequential methods using data from two published clinical trials. We chose two widely accepted frequentist rules, O'Brien-Fleming and Lan-DeMets, and conjugate Bayesian priors. Using the nonparametric bootstrap, we estimated a sampling distribution of stopping times for each method. Because current practice dictates the preservation of an experiment-wise false positive rate (Type I error), we approximated these error rates for our Bayesian and frequentist analyses with the posterior probability of detecting an effect in a simulated null sample. Thus for the data-generated distribution represented by these trials, we were able to compare the relative performance of these techniques. No final outcomes differed from those of the original trials. However, the timing of trial termination differed substantially by method and varied by trial. For one trial, group sequential designs of either type dictated early stopping of the study. In the other, stopping times were dependent upon the choice of spending function and prior distribution. Results indicate that trialists ought to consider Bayesian methods in addition to traditional approaches for analysis of clinical trials. Though findings from this small sample did not demonstrate either method to consistently outperform the other, they did suggest the need to replicate these comparisons using data from varied clinical trials in order to determine the conditions under which the different methods would be most efficient. Copyright © 2011 Elsevier Inc. All rights reserved.
Pressman, Alice R.; Avins, Andrew L.; Hubbard, Alan; Satariano, William A.
2014-01-01
Background There is a paucity of literature comparing Bayesian analytic techniques with traditional approaches for analyzing clinical trials using real trial data. Methods We compared Bayesian and frequentist group sequential methods using data from two published clinical trials. We chose two widely accepted frequentist rules, O'Brien–Fleming and Lan–DeMets, and conjugate Bayesian priors. Using the nonparametric bootstrap, we estimated a sampling distribution of stopping times for each method. Because current practice dictates the preservation of an experiment-wise false positive rate (Type I error), we approximated these error rates for our Bayesian and frequentist analyses with the posterior probability of detecting an effect in a simulated null sample. Thus for the data-generated distribution represented by these trials, we were able to compare the relative performance of these techniques. Results No final outcomes differed from those of the original trials. However, the timing of trial termination differed substantially by method and varied by trial. For one trial, group sequential designs of either type dictated early stopping of the study. In the other, stopping times were dependent upon the choice of spending function and prior distribution. Conclusions Results indicate that trialists ought to consider Bayesian methods in addition to traditional approaches for analysis of clinical trials. Though findings from this small sample did not demonstrate either method to consistently outperform the other, they did suggest the need to replicate these comparisons using data from varied clinical trials in order to determine the conditions under which the different methods would be most efficient. PMID:21453792
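To make the bootstrap-of-stopping-times idea concrete, here is a hedged Python sketch for a two-arm trial with four equally spaced looks and approximate O'Brien-Fleming z-boundaries. The outcome model, sample sizes and boundary values are illustrative assumptions, not a reconstruction of the trials analysed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def stopping_time(treat, control, looks, z_bounds):
    """Return the interim look (1-based) at which a group sequential trial
    stops for efficacy, or the final look if it runs to completion."""
    for i, (n, zb) in enumerate(zip(looks, z_bounds), start=1):
        diff = treat[:n].mean() - control[:n].mean()
        se = np.sqrt(treat[:n].var(ddof=1) / n + control[:n].var(ddof=1) / n)
        if abs(diff / se) >= zb:
            return i
    return len(looks)

# Hypothetical trial data: continuous outcome, two arms of 400 subjects each
treat = rng.normal(0.25, 1.0, 400)
control = rng.normal(0.0, 1.0, 400)
looks = [100, 200, 300, 400]
obf = [4.05, 2.86, 2.34, 2.02]   # approximate O'Brien-Fleming boundaries, 4 looks

# Nonparametric bootstrap of the stopping-time distribution
stops = []
for _ in range(2000):
    bt = rng.choice(treat, size=len(treat), replace=True)
    bc = rng.choice(control, size=len(control), replace=True)
    stops.append(stopping_time(bt, bc, looks, obf))
print(np.bincount(stops, minlength=len(looks) + 1)[1:] / len(stops))
```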
Bagheri, Fatemeh; Safarian, Shahrokh; Eslaminejad, Mohamadreza Baghaban; Sheibani, Nader
2015-12-01
There are a number of reports demonstrating a relationship between alterations in DFF40 expression and the development of some cancers. Here, increased DFF40 expression in T-47D cells in the presence of doxorubicin was envisaged for therapeutic use. The T-47D cells were transfected with a eukaryotic expression vector encoding the DFF40 cDNA. Following incubation with doxorubicin, propidium iodide (PI) staining was used for cell cycle distribution analysis. The rates of apoptosis were determined by annexin V/PI staining. Apoptosis was also evaluated using DNA laddering analysis. The viability of DFF40-transfected cells incubated with doxorubicin was significantly decreased compared with control cells. However, there were no substantial changes in the cell cycle distribution of pIRES2-DFF40 cells incubated with doxorubicin compared to control cells. The expression of DFF40 without doxorubicin incubation also had no significant effect on the cell cycle distribution. There was no DNA laddering in cells transfected with the empty pIRES2 vector when incubated with doxorubicin. In contrast, DNA laddering was observed in DFF40-transfected cells in the presence of doxorubicin after 48 h. Also, the expression of DFF40 and DFF45 was increased in DFF40-transfected cells in the presence of doxorubicin, enhancing cell death. Collectively, our results indicate that co-treatment of DFF40-transfected cells with doxorubicin can enhance the killing of these tumor cells via apoptosis. Thus, modulation of DFF40 levels may be a beneficial strategy for the treatment of chemo-resistant cancers.
Integrating a reservoir regulation scheme into a spatially distributed hydrological model
Zhao, Gang; Gao, Huili; Naz, Bibi S; ...
2016-10-14
During the past several decades, numerous reservoirs have been built across the world for a variety of purposes such as flood control, irrigation, municipal/industrial water supplies, and hydropower generation. Consequently, the timing and magnitude of natural streamflows have been altered significantly by reservoir operations. In addition, the hydrological cycle can be modified by land-use/land-cover and climate changes. To understand the fine-scale feedback between hydrological processes and water management decisions, a distributed hydrological model embedded with a reservoir component is desired. In this study, a multi-purpose reservoir module with predefined complex operational rules was integrated into the Distributed Hydrology Soil Vegetation Model (DHSVM). Conditional operating rules, which are designed to reduce flood risk and enhance water supply reliability, were adopted in this module. The performance of the integrated model was tested over the upper Brazos River Basin in Texas, where two U.S. Army Corps of Engineers reservoirs, Lake Whitney and Aquilla Lake, are located. The integrated DHSVM was calibrated and validated using observed reservoir inflow, outflow, and storage data. The error statistics were summarized for both reservoirs on a daily, weekly, and monthly basis. Using the weekly reservoir storage for Lake Whitney as an example, the coefficient of determination (R2) and the Nash-Sutcliffe Efficiency (NSE) were 0.85 and 0.75, respectively. These results suggest that this reservoir module holds promise for use in sub-monthly hydrological simulations. Furthermore, with the new reservoir component, the DHSVM provides a platform to support adaptive water resources management under the impacts of evolving anthropogenic activities and substantial environmental changes.
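The DHSVM reservoir module's actual rule curves are not given in the abstract; the following is a minimal sketch of a conditional operating rule of the general flood-control/water-supply type it describes, with pool names, the hedging factor and the units chosen purely for illustration.

```python
def reservoir_release(storage, inflow, flood_pool, conservation_pool, demand):
    """A simplified conditional operating rule (an illustrative assumption,
    not the DHSVM module's actual rule set): evacuate storage that encroaches
    on the flood-control pool, meet demand while in the conservation pool,
    and hedge releases when storage drops below it. All quantities are
    volumes per time step."""
    if storage > flood_pool:
        # Flood-control zone: release the surplus plus the incoming flow
        return inflow + (storage - flood_pool)
    elif storage > conservation_pool:
        # Conservation zone: meet water-supply demand if storage allows
        return min(demand, storage - conservation_pool + inflow)
    else:
        # Drought zone: hedge releases to preserve remaining storage
        return 0.5 * demand
```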
Probabilistic treatment of the uncertainty from the finite size of weighted Monte Carlo data
NASA Astrophysics Data System (ADS)
Glüsenkamp, Thorsten
2018-06-01
Parameter estimation in HEP experiments often involves Monte Carlo simulation to model the experimental response function. Typical applications are forward-folding likelihood analyses with re-weighting, or time-consuming minimization schemes with a new simulation set for each parameter value. Problematically, the finite size of such Monte Carlo samples carries intrinsic uncertainty that can lead to a substantial bias in parameter estimation if it is neglected and the sample size is small. We introduce a probabilistic treatment of this problem by replacing the usual likelihood functions with novel generalized probability distributions that incorporate the finite statistics via suitable marginalization. These new PDFs are analytic, and can be used to replace the Poisson, multinomial, and sample-based unbinned likelihoods, which covers many use cases in high-energy physics. In the limit of infinite statistics, they reduce to the respective standard probability distributions. In the general case of arbitrary Monte Carlo weights, the expressions involve the fourth Lauricella function FD, for which we find a new finite-sum representation in a certain parameter setting. The result also represents an exact form for Carlson's Dirichlet average Rn with n > 0, and thereby an efficient way to calculate the probability generating function of the Dirichlet-multinomial distribution, the extended divided difference of a monomial, or arbitrary moments of univariate B-splines. We demonstrate the bias reduction of our approach with a typical toy Monte Carlo problem, estimating the normalization of a peak in a falling energy spectrum, and compare the results with previously published methods from the literature.
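For orientation, the simplest version of such a marginalization, a single Monte Carlo sample of k_MC unweighted events scaled by a common weight w, can be written in closed form; the notation below is an assumption chosen for illustration, and the paper's general weighted result instead involves the Lauricella function FD. With a flat prior on the true Monte Carlo expectation μ,

\[
P(k \mid k_{\mathrm{MC}}, w)
  = \int_0^\infty \operatorname{Pois}(k \mid w\mu)\,
    \frac{\mu^{k_{\mathrm{MC}}} e^{-\mu}}{k_{\mathrm{MC}}!}\, d\mu
  = \binom{k + k_{\mathrm{MC}}}{k}\,
    \frac{w^{k}}{(1+w)^{\,k + k_{\mathrm{MC}} + 1}},
\]

a negative-binomial-type distribution that reduces to the ordinary Poisson likelihood with mean λ = w·k_MC when the Monte Carlo sample becomes infinitely large (k_MC → ∞ and w → 0 at fixed λ).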
Model-based Bayesian inference for ROC data analysis
NASA Astrophysics Data System (ADS)
Lei, Tianhu; Bae, K. Ty
2013-03-01
This paper presents a study of model-based Bayesian inference applied to Receiver Operating Characteristic (ROC) data. The model is a simple version of the general non-linear regression model. Unlike the Dorfman model, it uses a probit link function with a zero-one covariate to express the binormal distributions in a single formula. The model also includes a scale parameter. Bayesian inference is implemented by the Markov Chain Monte Carlo (MCMC) method, carried out with Bayesian inference Using Gibbs Sampling (BUGS). In contrast to classical statistical theory, the Bayesian approach considers model parameters as random variables characterized by prior distributions. With a substantial number of simulated samples generated by the sampling algorithm, the posterior distributions of the parameters, as well as the parameters themselves, can be accurately estimated. MCMC-based BUGS adopts the Adaptive Rejection Sampling (ARS) protocol, which requires that the probability density function (pdf) from which samples are drawn be log-concave with respect to the targeted parameters. Our study corrects a common misconception and proves that the pdf of this regression model is log-concave with respect to its scale parameter. Therefore, ARS's requirement is satisfied and a Gaussian prior, which is conjugate and possesses many analytic and computational advantages, is assigned to the scale parameter. A cohort of 20 simulated data sets and 20 simulations from each data set are used in our study. Output analysis and convergence diagnostics for the MCMC method are assessed with the CODA package. Models and methods using a continuous Gaussian prior and a discrete categorical prior are compared. Intensive simulations and performance measures are given to illustrate our practice in the framework of model-based Bayesian inference using the MCMC method.
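For context, one common binormal parameterization that a probit link with a zero-one class covariate can encode is sketched below; the covariate coding and the β symbols are assumptions for illustration, and the paper's exact parameterization and scale-parameter treatment may differ.

\[
\Phi^{-1}\!\big(F(t \mid d)\big) \;=\; \beta_0 + \beta_1 t \;+\; d\,(\beta_2 + \beta_3 t),
\qquad d \in \{0,1\},
\]

so that d = 0 recovers a normal CDF for the noise class and d = 1 one for the signal class. The implied ROC curve is then the familiar binormal form TPF = Φ(a + b·Φ⁻¹(FPF)) with a = (μ₁ − μ₀)/σ₁ and b = σ₀/σ₁.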
The earnings of informal carers: wage differentials and opportunity costs.
Heitmueller, Axel; Inglis, Kirsty
2007-07-01
A substantial proportion of working age individuals in Britain are looking after sick, disabled or elderly people, often combining their work and caring responsibilities. Previous research has shown that informal care is linked with substantial opportunity costs for the individual due to forgone wages as a result of non-labour market participation. In this paper we show that informal carers exhibit further disadvantages even when participating. Using the British Household Panel Study (BHPS) we decompose wage differentials and show that carers can expect lower returns for a given set of characteristics, with this wage penalty varying along the pay distribution and by gender. Furthermore, opportunity costs from forgone wages and wage penalties are estimated and found to be substantial.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stout, Sherry; Hotchkiss, Eliza
Distributed generation can play a critical role in supporting climate adaptation goals. This infographic style poster will showcase the role of distributed generation in achieving a wide range of technical and policy goals and social services associated with climate adaptation.
Underwater seismic source. [for petroleum exploration
NASA Technical Reports Server (NTRS)
Yang, L. C. (Inventor)
1979-01-01
Apparatus for generating a substantially oscillation-free seismic signal for use in underwater petroleum exploration is described, including a bag with walls that are flexible but substantially inelastic, and a pressurized gas supply for rapidly expanding the bag to its fully expanded condition. The inelasticity of the bag permits the application of high-pressure gas to rapidly expand it to full size, without requiring a venting mechanism to decrease the pressure as the bag approaches a predetermined size in order to avoid breaking the bag.
NASA Astrophysics Data System (ADS)
Tsuji, Takao; Hara, Ryoichi; Oyama, Tsutomu; Yasuda, Keiichiro
A super distributed energy system is a future energy system in which a large part of the demand is fed by a huge number of distributed generators. At some times, nodes in the super distributed energy system behave as loads, while at other times they behave as generators; the character of each node depends on the customers' decisions. In such a situation, it is very difficult to regulate the voltage profile over the system because of the complexity of the power flows. This paper proposes a novel control method for distributed generators that achieves autonomous, decentralized voltage profile regulation by using multi-agent technology. The proposed multi-agent system employs two types of agent: control agents and mobile agents. Control agents generate or consume reactive power to regulate the voltage profile of neighboring nodes, and mobile agents transmit the information necessary for VQ-control among the control agents. The proposed control method is tested through numerical simulations.
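To illustrate the flavor of a control-agent update in such a scheme, here is a minimal sketch of a proportional reactive-power rule driven by neighboring node voltages. The rule, the gain, the limits and the function name are hypothetical assumptions, not the paper's actual algorithm; the mobile agents would be what delivers the neighbor voltages to each control agent.

```python
def control_agent_step(v_neighbors, q_now, v_ref=1.0, gain=2.0, q_max=0.5):
    """One reactive-power update for a control agent (hypothetical proportional
    rule): inject more Q when neighboring voltages sag below the reference,
    absorb Q when they rise above it, within the unit's reactive limits."""
    error = v_ref - sum(v_neighbors) / len(v_neighbors)
    q_new = q_now + gain * error
    return max(-q_max, min(q_max, q_new))

# Per-unit neighbor voltages gathered (in the paper, via mobile agents)
q = control_agent_step(v_neighbors=[0.97, 0.98, 1.00], q_now=0.0)
```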
Anti-islanding Protection of Distributed Generation Using Rate of Change of Impedance
NASA Astrophysics Data System (ADS)
Shah, Pragnesh; Bhalja, Bhavesh
2013-08-01
Distributed Generation (DG), which is interconnected with the distribution system, has an inevitable effect on that system. Integrating DG with the utility network demands an anti-islanding scheme to protect the system. Failure to trip islanded generators can lead to problems such as threats to personnel safety, out-of-phase reclosing, and degradation of power quality. In this article, a new method for anti-islanding protection based on impedance monitoring of the distribution network in the presence of DG is presented. The impedance measured between two phases is used to derive the rate of change of impedance (dz/dt), and its peak values are used for the final trip decision. Test data are generated using the PSCAD/EMTDC software package and the performance of the proposed method is evaluated in MATLAB. The simulation results show the effectiveness of the proposed scheme, as it is capable of detecting islanding conditions accurately. Subsequently, it is also observed that the proposed scheme does not maloperate during other disturbances such as short circuits and switching events.
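A minimal sketch of the dz/dt quantity and the peak-threshold trip logic is given below; the use of phase-to-phase voltage and current magnitudes, the sampling interval and the threshold value are assumptions for illustration, and the paper's detection thresholds would have to be tuned to the protected network.

```python
import numpy as np

def rate_of_change_of_impedance(v_ab, i_ab, dt):
    """Estimate |dZ/dt| from phase-to-phase voltage and current magnitudes
    sampled every dt seconds (a simplified illustration of the dz/dt signal)."""
    z = np.asarray(v_ab) / np.asarray(i_ab)      # apparent impedance magnitude
    return np.abs(np.diff(z)) / dt

def islanding_detected(dzdt, threshold):
    """Trip when the peak rate of change of impedance exceeds the threshold
    (threshold value is an assumption; it must be set for the network)."""
    return dzdt.max() > threshold
```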
NASA Astrophysics Data System (ADS)
Li, Zaoyang; Qi, Xiaofang; Liu, Lijun; Zhou, Genshu
2018-02-01
The alternating current (AC) in the resistance heater used to generate heating power induces a magnetic field in the silicon melt during directional solidification (DS) of silicon ingots. We numerically study the influence of this heater-generated magnetic field on the silicon melt flow and temperature distribution in an industrial DS process. 3D simulations are carried out to calculate the Lorentz force distribution as well as the melt flow and heat transfer in the entire DS furnace. The pattern and intensity of the silicon melt flow as well as the temperature distribution are compared for cases with and without the Lorentz force. The results show that the Lorentz force induced by the heater-generated magnetic field is mainly distributed near the top and side surfaces of the silicon melt. The melt flow and temperature distribution, especially those in the upper part of the silicon region, can be influenced significantly by the magnetic field.
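For reference, the body force entering such simulations is the Lorentz force density acting on the melt; a standard way to write it, together with its time average over one AC cycle in phasor notation (how the averaging is applied in this particular study is an assumption here), is

\[
\mathbf{f}_L = \mathbf{J} \times \mathbf{B},
\qquad
\langle \mathbf{f}_L \rangle = \tfrac{1}{2}\,\operatorname{Re}\!\left(\hat{\mathbf{J}} \times \hat{\mathbf{B}}^{*}\right),
\]

where J is the eddy-current density induced in the silicon melt and B is the heater-generated magnetic field.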
Microelectromechanical power generator and vibration sensor
Roesler, Alexander W. [Tijeras, NM]; Christenson, Todd R. [Albuquerque, NM]
2006-11-28
A microelectromechanical (MEM) apparatus is disclosed which can be used to generate electrical power in response to an external source of vibrations, or to sense the vibrations and generate an electrical output voltage in response thereto. The MEM apparatus utilizes a meandering electrical pickup located near a shuttle which holds a plurality of permanent magnets. Upon movement of the shuttle in response to vibrations coupled thereto, the permanent magnets move in a direction substantially parallel to the meandering electrical pickup, and this generates a voltage across the meandering electrical pickup. The MEM apparatus can be fabricated by LIGA or micromachining.
NASA Astrophysics Data System (ADS)
Schneider, Markus P. A.
This dissertation contributes to two areas in economics: the understanding of the distribution of earned income and the Bayesian analysis of distributional data. Recently, physicists claimed that the distribution of earned income is exponential (see Yakovenko, 2009). The first chapter explores the perspective that the economy is a statistical mechanical system, and the implications for labor market outcomes are considered critically. The robustness of the empirical results that led to the physicists' claims, the significance of the exponential distribution in statistical mechanics, and the case for a conservation law in economics are discussed. The conclusion reached is that the physicists' conception of the economy is too narrow even within their chosen framework, but that their overall approach is insightful. The dual labor market theory of segmented labor markets is invoked to understand why the observed distribution may be a mixture of distributional components, corresponding to different generating mechanisms described in Reich et al. (1973). The application of informational entropy in chapter II connects this work to Bayesian analysis and maximum entropy econometrics. The analysis follows E. T. Jaynes's treatment of Wolf's dice data, but is applied to the distribution of earned income based on CPS data. The results are calibrated to account for rounded survey responses using a simple simulation, and respond to the graphical analyses by the physicists. The results indicate that neither the income distribution of all respondents nor that of the subpopulation used by the physicists appears to be exponential. The empirics do support the claim that a mixture with exponential and log-normal distributional components fits the data. In the final chapter, a log-linear model is used to fit the exponential to the earned income distribution. Separating the CPS data by gender and marital status reveals that the exponential is only an appropriate model for a limited number of subpopulations, namely the never-married and women. The estimated parameter for never-married men's incomes is significantly different from the parameter estimated for never-married women, implying that either the combined distribution is not exponential or that the individual distributions are not exponential. However, it substantiates the existence of a persistent gender income gap among the never-married. References: Reich, M., D. M. Gordon, and R. C. Edwards (1973). A Theory of Labor Market Segmentation. Quarterly Journal of Economics 63, 359-365. Yakovenko, V. M. (2009). Econophysics, Statistical Mechanics Approach to. In R. A. Meyers (Ed.), Encyclopedia of Complexity and System Science. Springer.
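One simple way to probe the exponential hypothesis described above is a log-linear check on the empirical survival function, since an exponential income distribution implies log S(x) is linear in x. The sketch below applies this to a synthetic exponential/log-normal mixture standing in for the CPS microdata, which cannot be reproduced here; the mixture proportions and parameters are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical earned-income sample (assumed mixture, stand-in for CPS data)
income = np.concatenate([rng.exponential(30_000, 7_000),
                         rng.lognormal(mean=11.0, sigma=0.6, size=3_000)])

# Exponential check: S(x) = exp(-x / T), so log S(x) should be linear in x
x = np.sort(income)
log_surv = np.log(1.0 - np.arange(1, len(x) + 1) / (len(x) + 1.0))
slope, intercept = np.polyfit(x, log_surv, 1)
print(f"implied 'temperature' T = {-1.0 / slope:,.0f} dollars")
```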
Method of Manufacturing a Light Emitting, Photovoltaic or Other Electronic Apparatus and System
NASA Technical Reports Server (NTRS)
Blanchard, Richard A. (Inventor); Fuller, Kirk A. (Inventor); Ray, William Johnstone (Inventor); Shotton, Neil O. (Inventor); Frazier, Donald Odell (Inventor); Lowenthal, Mark D. (Inventor); Lewandowski, Mark Allan (Inventor)
2013-01-01
The present invention provides a method of manufacturing an electronic apparatus, such as a lighting device having light emitting diodes (LEDs) or a power generating device having photovoltaic diodes. The exemplary method includes forming at least one first conductor coupled to a base; coupling a plurality of substantially spherical substrate particles to the at least one first conductor; converting the substrate particles into a plurality of substantially spherical diodes; forming at least one second conductor coupled to the substantially spherical diodes; and depositing or attaching a plurality of substantially spherical lenses suspended in a first polymer. The lenses and the suspending polymer have different indices of refraction. In some embodiments, the lenses and diodes have a ratio of mean diameters or lengths between about 10:1 and 2:1. In various embodiments, the forming, coupling and converting steps are performed by or through a printing process.
Kim, Hwi; Min, Sung-Wook; Lee, Byoungho; Poon, Ting-Chung
2008-07-01
We propose a novel optical sectioning method for optical scanning holography, which is performed in phase space by using Wigner distribution functions together with the fractional Fourier transform. The principle of phase-space optical sectioning for one-dimensional signals, such as slit objects, and two-dimensional signals, such as rectangular objects, is first discussed. Computer simulation results are then presented to substantiate the proposed idea.
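The phase-space relation that underpins such an approach is the standard rotation property of the fractional Fourier transform acting on the Wigner distribution function; this is a textbook identity rather than a result specific to the paper, and the frequency convention below is the one commonly used in optics.

\[
W_f(x,\nu) = \int f\!\left(x + \tfrac{\xi}{2}\right) f^{*}\!\left(x - \tfrac{\xi}{2}\right) e^{-i 2\pi \nu \xi}\, d\xi ,
\qquad
W_{\mathcal{F}_{\alpha} f}(x,\nu) = W_f\!\left(x\cos\alpha - \nu\sin\alpha,\; x\sin\alpha + \nu\cos\alpha\right),
\]

so a fractional transform of order α rotates the signal's Wigner distribution by α in the (x, ν) phase plane, which is the behaviour exploited when sectioning is performed in phase space with Wigner distribution functions and the fractional Fourier transform.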