Real-Time Optimization and Control of Next-Generation Distribution Infrastructure
Grid Modernization | NREL
This project develops innovative, real-time optimization and control methods for next-generation distribution infrastructure.
Sun, P C; Fainman, Y
1990-09-01
An optical processor for real-time generation of the Wigner distribution of complex amplitude functions is introduced. The phase conjugation of the input signal is accomplished by a highly efficient self-pumped phase conjugator based on a 45 degrees -cut barium titanate photorefractive crystal. Experimental results on the real-time generation of Wigner distribution slices for complex amplitude two-dimensional optical functions are presented and discussed.
Two-sided Topp-Leone Weibull distribution
NASA Astrophysics Data System (ADS)
Podeang, Krittaya; Bodhisuwan, Winai
2017-11-01
In this paper, we introduce a general class of lifetime distributions, called the two-sided Topp-Leone generated family of distributions. A special case of the new family is the two-sided Topp-Leone Weibull distribution, which uses the two-sided Topp-Leone distribution as a generator for the Weibull distribution. The two-sided Topp-Leone Weibull distribution takes several shapes, including decreasing, unimodal, and bimodal, which make it more flexible than the Weibull distribution. Its quantile function is presented, and parameter estimation by maximum likelihood is discussed. The proposed distribution is applied to three data sets: a strength data set, remission times of bladder cancer patients, and times to failure of turbochargers. We compare the proposed distribution to the Topp-Leone generated Weibull distribution. In conclusion, the two-sided Topp-Leone Weibull distribution performs similarly to the Topp-Leone generated Weibull distribution on the first and second data sets, but fits the third data set better.
Real time testing of intelligent relays for synchronous distributed generation islanding detection
NASA Astrophysics Data System (ADS)
Zhuang, Davy
As electric power systems continue to grow to meet ever-increasing energy demand, their security, reliability, and sustainability requirements also become more stringent. The deployment of distributed energy resources (DER), including generation and storage, in conventional passive distribution feeders gives rise to integration problems involving protection and unintentional islanding. Distributed generators must be disconnected for safety reasons when the feeder they serve is isolated from the main grid, as distributed generator islanding may create hazards to utility and third-party personnel and possibly damage the distribution system infrastructure, including the distributed generators themselves. This thesis compares several key performance indicators of a newly developed intelligent islanding detection relay against islanding detection devices currently used by industry. The intelligent relay employs multivariable analysis and data mining methods to arrive at decision trees that contain both the protection handles and their settings. A test methodology is developed to assess the performance of these intelligent relays in a real-time simulation environment using a generic model based on a real-life distribution feeder. The methodology demonstrates the applicability and potential advantages of the intelligent relay by running a large number of tests reflecting a multitude of system operating conditions. The testing indicates that the intelligent relay often outperforms the frequency, voltage, and rate-of-change-of-frequency relays currently used for islanding detection, while respecting the islanding detection time constraints imposed by standing distributed generator interconnection guidelines.
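As a rough illustration of one conventional device the thesis benchmarks against, here is a toy rate-of-change-of-frequency (ROCOF) relay sketch; the threshold, hold count, and frequency traces are invented for the example and are not the settings or models studied in the thesis.

```python
def rocof_trip(freqs, dt, limit=1.0, hold=5):
    """Trip when |df/dt| exceeds `limit` (Hz/s) for `hold` consecutive samples."""
    run = 0
    for i in range(1, len(freqs)):
        rocof = abs(freqs[i] - freqs[i - 1]) / dt
        run = run + 1 if rocof > limit else 0
        if run >= hold:
            return True
    return False

# grid-connected case: frequency hovers near 60 Hz, so the relay stays quiet
steady = [60.0 + 0.001 * (i % 3) for i in range(100)]
# islanded case: a generation/load mismatch drags frequency down at ~2 Hz/s
island = [60.0 - 2.0 * i * 0.02 for i in range(100)]
print(rocof_trip(steady, 0.02), rocof_trip(island, 0.02))  # False True
```

A decision-tree relay of the kind the thesis describes would replace this single threshold with learned splits over many such measured quantities.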
NASA Astrophysics Data System (ADS)
Godsey, S. E.; Kirchner, J. W.
2008-12-01
The mean residence time - the average time that it takes rainfall to reach the stream - is a basic parameter used to characterize catchment processes. Heterogeneities in these processes lead to a distribution of travel times around the mean residence time. By examining this travel time distribution, we can better predict catchment response to contamination events. A catchment system with shorter residence times or narrower distributions will respond quickly to contamination events, whereas systems with longer residence times or longer-tailed distributions will respond more slowly to those same contamination events. The travel time distribution of a catchment is typically inferred from time series of passive tracers (e.g., water isotopes or chloride) in precipitation and streamflow. Variations in the tracer concentration in streamflow are usually damped compared to those in precipitation, because precipitation inputs from different storms (with different tracer signatures) are mixed within the catchment. Mathematically, this mixing process is represented by the convolution of the travel time distribution and the precipitation tracer inputs to generate the stream tracer outputs. Because convolution in the time domain is equivalent to multiplication in the frequency domain, it is relatively straightforward to estimate the parameters of the travel time distribution in either domain. In the time domain, the parameters describing the travel time distribution are typically estimated by maximizing the goodness of fit between the modeled and measured tracer outputs. In the frequency domain, the travel time distribution parameters can be estimated by fitting a power-law curve to the ratio of precipitation spectral power to stream spectral power. Differences between the methods of parameter estimation in the time and frequency domain mean that these two methods may respond differently to variations in data quality, record length and sampling frequency. 
Here we evaluate how well these two methods of travel time parameter estimation respond to different sources of uncertainty and compare the methods to one another. We do this by generating synthetic tracer input time series of different lengths, and convolve these with specified travel-time distributions to generate synthetic output time series. We then sample both the input and output time series at various sampling intervals and corrupt the time series with realistic error structures. Using these 'corrupted' time series, we infer the apparent travel time distribution, and compare it to the known distribution that was used to generate the synthetic data in the first place. This analysis allows us to quantify how different record lengths, sampling intervals, and error structures in the tracer measurements affect the apparent mean residence time and the apparent shape of the travel time distribution.
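The convolution and time-domain fitting described above can be sketched with synthetic data. Everything below is invented for illustration: an exponential travel-time distribution, a white-noise tracer input, and a grid search standing in for the goodness-of-fit maximization.

```python
import math
import random

random.seed(42)

def exp_ttd(tau, n=120):
    # discretized exponential travel-time distribution, normalized to sum to 1
    w = [math.exp(-t / tau) for t in range(n)]
    s = sum(w)
    return [x / s for x in w]

def convolve(inp, kernel):
    # causal convolution: stream tracer = travel-time distribution * input tracer
    return [sum(kernel[j] * inp[i - j] for j in range(min(i + 1, len(kernel))))
            for i in range(len(inp))]

# synthetic precipitation tracer input and the stream output it generates
inp = [10.0 + random.gauss(0.0, 1.0) for _ in range(600)]
true_tau = 20.0
out = convolve(inp, exp_ttd(true_tau))

# time-domain estimation: choose tau maximizing goodness of fit (minimum SSE),
# skipping the spin-up period at the start of the record
def sse(tau):
    pred = convolve(inp, exp_ttd(tau))
    return sum((p - o) ** 2 for p, o in zip(pred[120:], out[120:]))

best_tau = min(range(5, 41), key=lambda t: sse(float(t)))
print(best_tau)  # recovers the true mean residence time, 20
```

Corrupting `out` with measurement error and shortening or subsampling the record, as the study does, would degrade how reliably `best_tau` recovers the true value.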
Distributed Generation to Support Development-Focused Climate Action
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cox, Sadie; Gagnon, Pieter; Stout, Sherry
2016-09-01
This paper explores the role of distributed generation, with a high renewable energy contribution, in supporting low emission climate-resilient development. The paper presents potential impacts on development (via energy access), greenhouse gas emission mitigation, and climate resilience directly associated with distributed generation, as well as specific actions that may enhance or increase the likelihood of climate and development benefits. This paper also seeks to provide practical and timely insights to support distributed generation policymaking and planning within the context of common climate and development goals as the distributed generation landscape rapidly evolves globally. Country-specific distributed generation policy and program examples, as well as analytical tools that can inform efforts internationally, are also highlighted throughout the paper.
Online Optimization Method for Operation of Generators in a Micro Grid
NASA Astrophysics Data System (ADS)
Hayashi, Yasuhiro; Miyamoto, Hideki; Matsuki, Junya; Iizuka, Toshio; Azuma, Hitoshi
Recently, many studies and developments concerning distributed generators such as photovoltaic generation systems, wind turbine generation systems, and fuel cells have been carried out against the background of global environmental issues and deregulation of the electricity market, and the techniques behind these distributed generators have progressed. In particular, the micro grid, which consists of several distributed generators, loads, and a storage battery, is expected to become one of the new operational frameworks for distributed generation. However, since precipitous load fluctuations occur in a micro grid because of its small capacity compared with a conventional power system, high-accuracy load forecasting and control schemes to balance supply and demand are needed. Namely, it is necessary to improve the precision of micro grid operation by observing load fluctuations and correcting the start-stop schedule and output of generators online. But it is not easy to determine the operation schedule of each generator in a short time, because determining the start-up, shut-down, and output of each generator in a micro grid is a mixed integer programming problem. In this paper, the authors propose an online optimization method for the optimal operation schedule of generators in a micro grid. The proposed method is based on an enumeration method and particle swarm optimization (PSO). After picking up all unit commitment patterns of each generator that satisfy the minimum up time and minimum down time constraints by enumeration, the optimal schedule and output of the generators are determined under the remaining operational constraints by PSO. A numerical simulation is carried out for a micro grid model with five generators and a photovoltaic generation system in order to examine the validity of the proposed method.
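A compressed sketch of the two-stage idea (enumerate commitment patterns, then PSO for the continuous outputs) is shown below with three invented units and a single time step; the paper's minimum up/down-time handling and five-generator model are not reproduced.

```python
import itertools
import random

random.seed(0)

# hypothetical units: (min MW, max MW, quadratic cost a, linear cost b)
GENS = [(10, 50, 0.04, 2.0), (20, 80, 0.03, 1.5), (15, 60, 0.05, 1.0)]
DEMAND = 120.0

def dispatch_cost(pattern):
    """Best dispatch cost for one on/off pattern, found by a small PSO."""
    idx = [i for i, on in enumerate(pattern) if on]
    if not idx:
        return float("inf")
    def cost(x):
        fuel = sum(GENS[i][2] * x[k] ** 2 + GENS[i][3] * x[k]
                   for k, i in enumerate(idx))
        return fuel + 1000.0 * abs(sum(x) - DEMAND)  # demand-balance penalty
    pos = [[random.uniform(GENS[i][0], GENS[i][1]) for i in idx] for _ in range(30)]
    vel = [[0.0] * len(idx) for _ in range(30)]
    pbest = [p[:] for p in pos]
    for _ in range(200):
        gbest = min(pbest, key=cost)
        for s in range(30):
            for d, i in enumerate(idx):
                r1, r2 = random.random(), random.random()
                vel[s][d] = (0.7 * vel[s][d] + 1.5 * r1 * (pbest[s][d] - pos[s][d])
                             + 1.5 * r2 * (gbest[d] - pos[s][d]))
                pos[s][d] = min(max(pos[s][d] + vel[s][d], GENS[i][0]), GENS[i][1])
            if cost(pos[s]) < cost(pbest[s]):
                pbest[s] = pos[s][:]
    return cost(min(pbest, key=cost))

# stage 1: enumerate commitment patterns (up/down-time constraints omitted here);
# stage 2: PSO prices each pattern, and the cheapest feasible one wins
best_pattern = min(itertools.product([0, 1], repeat=len(GENS)), key=dispatch_cost)
print(best_pattern)
```

The penalty term is a common way to fold the supply-demand balance into the PSO objective; the paper's full formulation handles the remaining operational constraints explicitly.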
NASA Astrophysics Data System (ADS)
Tsuji, Takao; Hara, Ryoichi; Oyama, Tsutomu; Yasuda, Keiichiro
A super distributed energy system is a future energy system in which a large part of the demand is fed by a huge number of distributed generators. At some times a node in such a system behaves as a load, while at other times it behaves as a generator; the characteristic of each node depends on the customers' decisions. In this situation it is very difficult to regulate the voltage profile over the system because of the complexity of the power flows. This paper proposes a novel control method for distributed generators that achieves autonomous, decentralized voltage profile regulation using multi-agent technology. The proposed multi-agent system employs two types of agents: control agents, which generate or consume reactive power to regulate the voltage profile of neighboring nodes, and mobile agents, which transmit the information necessary for VQ control among the control agents. The proposed control method is tested through numerical simulations.
Cooperative Optimal Coordination for Distributed Energy Resources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Tao; Wu, Di; Ren, Wei
In this paper, we consider the optimal coordination problem for distributed energy resources (DERs) including distributed generators and energy storage devices. We propose an algorithm based on the push-sum and gradient method to optimally coordinate storage devices and distributed generators in a distributed manner. In the proposed algorithm, each DER only maintains a set of variables and updates them through information exchange with a few neighbors over a time-varying directed communication network. We show that the proposed distributed algorithm solves the optimal DER coordination problem if the time-varying directed communication network is uniformly jointly strongly connected, which is a mild condition on the connectivity of communication topologies. The proposed distributed algorithm is illustrated and validated by numerical simulations.
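The push-sum building block on which such algorithms rest can be illustrated in isolation: each node repeatedly splits a value/weight pair among its out-neighbors on a directed graph, and the ratio at every node converges to the network average. The gradient step and any DER modelling are omitted; the graph and values below are invented.

```python
# nodes' initial local values (e.g., local measurements each DER holds)
vals = [4.0, 8.0, 1.0, 7.0]
n = len(vals)

# directed ring with self-loops: node i sends equal shares to itself and (i+1) % n,
# so the update matrix is column-stochastic, as push-sum requires
x = vals[:]          # value component
y = [1.0] * n        # weight component
for _ in range(100):
    nx = [0.0] * n
    ny = [0.0] * n
    for i in range(n):
        for j in (i, (i + 1) % n):
            nx[j] += x[i] / 2
            ny[j] += y[i] / 2
    x, y = nx, ny

ratios = [xi / yi for xi, yi in zip(x, y)]
print(ratios)  # every ratio converges to the average, 5.0
```

In the paper's setting the exchanged quantities are gradient-corrected, and the network is time-varying rather than a fixed ring, but the value/weight ratio mechanism is the same.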
Transition in the waiting-time distribution of price-change events in a global socioeconomic system
NASA Astrophysics Data System (ADS)
Zhao, Guannan; McDonald, Mark; Fenn, Dan; Williams, Stacy; Johnson, Nicholas; Johnson, Neil F.
2013-12-01
The goal of developing a firmer theoretical understanding of inhomogeneous temporal processes, in particular the waiting times in some collective dynamical system, is attracting significant interest among physicists. Quantifying the deviations between the waiting-time distribution and the distribution generated by a random process may help unravel the feedback mechanisms that drive the underlying dynamics. We analyze the waiting-time distributions of high-frequency foreign exchange data for the best executable bid-ask prices across all major currencies. We find that the lognormal distribution yields a good overall fit for the waiting-time distribution between currency rate changes if both short and long waiting times are included. If we restrict our study to long waiting times, each currency pair's distribution is consistent with a power-law tail with exponent near 3.5. However, for short waiting times, the overall distribution resembles one generated by an archetypal complex systems model in which boundedly rational agents compete for limited resources. Our findings suggest that a gradual transition arises in trading behavior between a fast regime in which traders act in a boundedly rational way and a slower one in which traders' decisions are driven by generic feedback mechanisms across multiple timescales and hence produce similar power-law tails irrespective of currency type.
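The tail-exponent analysis can be illustrated with a Hill estimator applied to synthetic waiting times; the sample is drawn from an exact power law with density exponent 3.5, so this only demonstrates the estimation technique, not the paper's data.

```python
import math
import random

random.seed(1)

# synthetic waiting times with survival function P(T > t) = t^-(alpha - 1),
# i.e. density exponent alpha = 3.5, drawn by inverse-transform sampling
alpha = 3.5
waits = [(1.0 - random.random()) ** (-1.0 / (alpha - 1.0)) for _ in range(50000)]

# Hill estimator of the tail index from the k largest waiting times
waits.sort(reverse=True)
k = 2000
hill = k / sum(math.log(waits[i] / waits[k]) for i in range(k))

# hill estimates the survival exponent (alpha - 1); add 1 for the density exponent
print(hill + 1)  # close to 3.5
```

On real data the delicate step is choosing `k`, i.e. deciding where the "long waiting time" regime begins; too small a cutoff mixes in the lognormal bulk the abstract describes.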
Testing the shape of distributions of weather data
NASA Astrophysics Data System (ADS)
Baccon, Ana L. P.; Lunardi, José T.
2016-08-01
The characterization of the statistical distributions of observed weather data is of crucial importance both for the construction and for the validation of weather models, such as weather generators (WGs). An important class of WGs (e.g., Richardson-type generators) reduces the time series of each variable to a time series of its residual elements, and the residuals are often assumed to be normally distributed. In this work we propose an approach to investigate whether the shape assumed for the distribution of residuals is consistent with the observed data of a given site. Specifically, the procedure tests whether the same distribution shape for the residual noise is maintained over time. The proposed approach is an adaptation to climate time series of a procedure first introduced to test the shapes of distributions of growth rates of business firms aggregated in large panels of short time series. We illustrate the procedure by applying it to the residual time series of maximum temperature at a given location, and investigate the empirical consistency of two assumptions, namely (i) the most common assumption that the distribution of the residuals is Gaussian, and (ii) that the residual noise has a time-invariant shape which coincides with the empirical distribution of all the residual noise of the whole time series pooled together.
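One simple instance of such a shape check, not the authors' specific procedure, is a Kolmogorov-Smirnov comparison of standardized residuals against a Gaussian; the residual samples and the critical value below are illustrative only (with estimated mean and variance the exact critical value would come from the Lilliefors test, which is stricter).

```python
import math
import random

random.seed(7)

def ks_normal(res):
    """One-sample KS statistic of standardized residuals vs. a standard normal."""
    mu = sum(res) / len(res)
    sd = (sum((r - mu) ** 2 for r in res) / (len(res) - 1)) ** 0.5
    z = sorted((r - mu) / sd for r in res)
    cdf = lambda v: 0.5 * (1.0 + math.erf(v / math.sqrt(2.0)))
    n = len(z)
    return max(max(abs((i + 1) / n - cdf(v)), abs(cdf(v) - i / n))
               for i, v in enumerate(z))

gauss_res = [random.gauss(0.0, 1.0) for _ in range(500)]      # Gaussian residuals
skewed_res = [random.expovariate(1.0) for _ in range(500)]     # clearly non-Gaussian
crit = 1.36 / math.sqrt(500)   # approximate 5% critical value, illustrative

print(ks_normal(gauss_res) < crit)   # True: Gaussian shape not rejected
print(ks_normal(skewed_res) < crit)  # False: exponential residuals rejected
```

The paper's procedure goes further by testing whether the (possibly non-Gaussian) shape stays the same across time windows, rather than testing a single fixed shape.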
NASA Astrophysics Data System (ADS)
Li, Jinze; Qu, Zhi; He, Xiaoyang; Jin, Xiaoming; Li, Tie; Wang, Mingkai; Han, Qiu; Gao, Ziji; Jiang, Feng
2018-02-01
Large-scale access of distributed power can relieve current environmental pressure while increasing the complexity and uncertainty of the overall distribution system. Rational planning of distributed power can effectively improve the system voltage level. To this end, the specific impact of the access of typical distributed power on distribution network power quality was analyzed, and an improved particle swarm optimization algorithm (IPSO), with modified learning factors and inertia weight, was proposed to solve distributed generation planning for the distribution network and to improve the local and global search performance of the algorithm. Results show that the proposed method can substantially reduce system network loss and improve the economic performance of system operation with distributed generation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cormier, Dallas; Edra, Sherwin; Espinoza, Michael
This project will enable utilities to develop long-term strategic plans that integrate high levels of renewable energy generation, and to better plan power system operations under high renewable penetration. The program developed forecast data streams for decision support and effective integration of centralized and distributed solar power generation in utility operations. This toolset focused on real time simulation of distributed power generation within utility grids with the emphasis on potential applications in day ahead (market) and real time (reliability) utility operations. The project team developed and demonstrated methodologies for quantifying the impact of distributed solar generation on core utility operations, identified protocols for internal data communication requirements, and worked with utility personnel to adapt the new distributed generation (DG) forecasts seamlessly within existing Load and Generation procedures through a sophisticated DMS. This project supported the objectives of the SunShot Initiative and SUNRISE by enabling core utility operations to enhance their simulation capability to analyze and prepare for the impacts of high penetrations of solar on the power grid. The impact of high penetration solar PV on utility operations is not only limited to control centers, but across many core operations. Benefits of an enhanced DMS using state-of-the-art solar forecast data were demonstrated within this project and have had an immediate direct operational cost savings for Energy Marketing for Day Ahead generation commitments, Real Time Operations, Load Forecasting (at an aggregate system level for Day Ahead), Demand Response, Long term Planning (asset management), Distribution Operations, and core ancillary services as required for balancing and reliability. This provided power system operators with the necessary tools and processes to operate the grid in a reliable manner under high renewable penetration.
Control and performance of the AGS and AGS Booster Main Magnet Power Supplies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reece, R.K.; Casella, R.; Culwick, B.
1993-06-01
Techniques for precision control of the main magnet power supplies for the AGS and AGS Booster synchrotrons will be discussed. Both synchrotrons are designed to operate in a Pulse-to-Pulse Modulation (PPM) environment with a Supercycle Generator defining and distributing global timing events for the AGS Facility. Details of modelling, real-time feedback and feedforward systems, generation and distribution of real-time field data, operational parameters, and an overview of performance for both machines are included.
Proton beam generation of whistler waves in the earth's foreshock
NASA Technical Reports Server (NTRS)
Wong, H. K.; Goldstein, M. L.
1987-01-01
It is shown that proton beams, often observed upstream of the earth's bow shock and associated with the generation of low-frequency hydromagnetic fluctuations, are also capable of generating whistler waves. The waves can be excited by an instability driven by two-temperature streaming Maxwellian proton distributions which have T (perpendicular)/T(parallel) much greater than 1. It can also be excited by gyrating proton beam distributions. These distributions generate whistler waves with frequencies ranging from 10 to 100 times the proton cyclotron frequency (in the solar wind reference frame) and provide another mechanism for generating the '1-Hz' waves often seen in the earth's foreshock.
Fast Grid Frequency Support from Distributed Inverter-Based Resources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoke, Anderson F
This presentation summarizes power hardware-in-the-loop testing performed to evaluate the ability of distributed inverter-coupled generation to support grid frequency on the fastest time scales. The research found that distributed PV inverters and other DERs can effectively support the grid on sub-second time scales.
Distributed operating system for NASA ground stations
NASA Technical Reports Server (NTRS)
Doyle, John F.
1987-01-01
NASA ground stations are characterized by ever changing support requirements, so application software is developed and modified on a continuing basis. A distributed operating system was designed to optimize the generation and maintenance of those applications. Unusual features include automatic program generation from detailed design graphs, on-line software modification in the testing phase, and the incorporation of a relational database within a real-time, distributed system.
Klinkenberg, Don; Nishiura, Hiroshi
2011-09-07
The generation time of an infectious disease is the time between infection of a primary case and infection of a secondary case by the primary case. Its distribution plays a key role in understanding the dynamics of infectious diseases in populations, e.g. in estimating the basic reproduction number. Moreover, the generation time and incubation period distributions together characterize the effectiveness of control by isolation and quarantine. In modelling studies, a relation between the two is often not made specific, but a correlation is biologically plausible. However, it is difficult to establish such correlation, because of the unobservable nature of infection events. We have quantified a joint distribution of generation time and incubation period by a novel estimation method for household data with two susceptible individuals, consisting of time intervals between disease onsets of two measles cases. We used two such datasets, and a separate incubation period dataset. Results indicate that the mean incubation period and the generation time of measles are positively correlated, and that both lie in the range of 11-12 days, suggesting that infectiousness of measles cases increases significantly around the time of symptom onset. The correlation between times from infection to secondary transmission and to symptom onset could critically affect the predicted effectiveness of isolation and quarantine. Copyright © 2011 Elsevier Ltd. All rights reserved.
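The role the abstract assigns to the generation-time distribution in estimating the basic reproduction number can be sketched with the Euler-Lotka relation, R0 = 1 / Σ_t e^(-r t) w(t). The discretized generation-time distribution (mean 11.5 days, matching the abstract's range; sd 2.5 days) and the epidemic growth rate below are assumptions for illustration, not the paper's estimates.

```python
import math

# assumed generation-time distribution: discretized normal shape,
# mean 11.5 days (per the abstract's 11-12 day range), sd 2.5 days
days = list(range(1, 31))
raw = [math.exp(-0.5 * ((d - 11.5) / 2.5) ** 2) for d in days]
total = sum(raw)
w = [x / total for x in raw]

# Euler-Lotka: given observed exponential growth rate r (per day, assumed),
# R0 = 1 / sum_t exp(-r t) w(t)
r = 0.05
R0 = 1.0 / sum(math.exp(-r * t) * wt for t, wt in zip(days, w))
print(round(R0, 2))
```

A wider or narrower generation-time distribution changes R0 for the same growth rate, which is why pinning down this distribution, and its correlation with the incubation period, matters for control predictions.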
Heo, Jino; Kang, Min-Sung; Hong, Chang-Ho; Yang, Hyung-Jin; Choi, Seong-Gon; Hong, Jong-Phil
2017-08-31
We design schemes to generate and distribute hybrid entanglement and hyperentanglement correlated with degrees of freedom (polarization and time-bin) via weak cross-Kerr nonlinearities (XKNLs) and linear optical devices (including time-bin encoders). In our scheme, the multi-photon gates (which consist of XKNLs, quantum bus [qubus] beams, and photon-number-resolving [PNR] measurement) with time-bin encoders can generate hyperentanglement or hybrid entanglement. And we can also purify the entangled state (polarization) of two photons using only linear optical devices and time-bin encoders under a noisy (bit-flip) channel. Subsequently, through local operations (using a multi-photon gate via XKNLs) and classical communications, it is possible to generate a four-qubit hybrid entangled state (polarization and time-bin). Finally, we discuss how the multi-photon gate using XKNLs, qubus beams, and PNR measurement can be reliably performed under the decoherence effect.
Computer-Generated Dot Maps as an Epidemiologic Tool: Investigating an Outbreak of Toxoplasmosis
Werker, Denise H.; King, Arlene S.; Marion, Stephen A.; Bell, Alison; Issac-Renton, Judith L.; Irwin, G. Stewart; Bowie, William R.
1999-01-01
We used computer-generated dot maps to examine the spatial distribution of 94 Toxoplasma gondii infections associated with an outbreak in British Columbia, Canada. The incidence among patients served by one water distribution system was 3.52 times that of patients served by other sources. Acute T. gondii infection among 3,812 pregnant women was associated with the incriminated distribution system. PMID:10603218
TURBULENCE-GENERATED PROTON-SCALE STRUCTURES IN THE TERRESTRIAL MAGNETOSHEATH
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vörös, Zoltán; Narita, Yasuhito; Yordanova, Emiliya
2016-03-01
Recent results of numerical magnetohydrodynamic simulations suggest that in collisionless space plasmas, turbulence can spontaneously generate thin current sheets. These coherent structures can partially explain the intermittency and the non-homogeneous distribution of localized plasma heating in turbulence. In this Letter, Cluster multi-point observations are used to investigate the distribution of magnetic field discontinuities and the associated small-scale current sheets in the terrestrial magnetosheath downstream of a quasi-parallel bow shock. It is shown experimentally, for the first time, that the strongest turbulence-generated current sheets occupy the long tails of probability distribution functions associated with extremal values of magnetic field partial derivatives. During the analyzed one-hour time interval, about a hundred strong discontinuities, possibly proton-scale current sheets, were observed.
Maintaining Balance: The Increasing Role of Energy Storage for Renewable Integration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stenclik, Derek; Denholm, Paul; Chalamala, Babu
For nearly a century, global power systems have focused on three key functions: generating, transmitting, and distributing electricity as a real-time commodity. Physics requires that electricity generation always be in real-time balance with load-despite variability in load on time scales ranging from subsecond disturbances to multiyear trends. With the increasing role of variable generation from wind and solar, the retirement of fossil-fuel-based generation, and a changing consumer demand profile, grid operators are using new methods to maintain this balance.
Free-Space Quantum Key Distribution using Polarization Entangled Photons
NASA Astrophysics Data System (ADS)
Kurtsiefer, Christian
2007-06-01
We report on a complete experimental implementation of a quantum key distribution protocol through a free-space link using polarization-entangled photon pairs from a compact parametric down-conversion source [1]. Based on a BB84-equivalent protocol, we generated, without interruption over 10 hours, a secret key across a free-space optical link 1.5 km long, at a rate of up to 950 bits per second after error correction and privacy amplification. Our system is based on two time-stamp units and relies on no specific hardware channel for coincidence identification besides an IP link. For this, initial clock synchronization with an accuracy better than 2 ns is achieved, based on a conventional NTP protocol and a tiered cross-correlation of time tags on both sides. Time tags are used to servo a local clock, allowing a streamed measurement on correctly identified photon pairs. Contrary to the majority of quantum key distribution systems, this approach does not require a trusted large-bandwidth random number generator, but integrates that into the physical key generation process. We discuss our current progress in implementing key distribution via an atmospheric link during daylight conditions, and possible attack scenarios on a physical timing-information side channel to an entanglement-based key distribution system. [1] I. Marcikic, A. Lamas-Linares, C. Kurtsiefer, Appl. Phys. Lett. 89, 101122 (2006).
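The cross-correlation of time tags can be caricatured as follows: trial clock offsets are scanned and the offset maximizing the coincidence count is taken as the clock difference. All numbers (offset, jitter, count rates, scan range) are invented, and the real system scans far larger ranges hierarchically rather than with this single coarse sweep.

```python
import random

random.seed(3)

# Alice's pair-detection time tags (ns); Bob records the same events shifted by
# an unknown clock offset plus +/-1 ns jitter, mixed with background counts
alice = [random.randrange(0, 1_000_000) for _ in range(500)]
true_offset = 4242
bob = set(t + true_offset + random.randint(-1, 1) for t in alice)
bob |= set(random.randrange(0, 1_000_000) for _ in range(200))

def coincidences(d):
    # count Alice tags that have a Bob tag within +/-1 ns of t + d
    return sum(any((t + d + e) in bob for e in (-1, 0, 1)) for t in alice)

# coarse scan of trial offsets; the cross-correlation peak reveals the offset
best = max(range(0, 10_000, 2), key=coincidences)
print(best)  # 4242
```

Once the offset is known, the local clock can be servoed so that genuine photon pairs stay inside a narrow coincidence window while accidental background coincidences are suppressed.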
Radar signal analysis of ballistic missile with micro-motion based on time-frequency distribution
NASA Astrophysics Data System (ADS)
Wang, Jianming; Liu, Lihua; Yu, Hua
2015-12-01
The micro-motion of ballistic missile targets induces micro-Doppler modulation on the radar return signal, which is a unique feature for warhead discrimination during flight. In order to extract the micro-Doppler feature of ballistic missile targets, time-frequency analysis is employed to process the micro-Doppler-modulated, time-varying radar signal. Images of the time-frequency distribution (TFD) reveal the micro-Doppler modulation characteristic very well. However, many time-frequency analysis methods exist for generating such images, including the short-time Fourier transform (STFT), the Wigner distribution (WD), and Cohen-class distributions. Against the background of ballistic missile defence, this paper aims to identify an effective time-frequency analysis method for discriminating ballistic missile warheads from decoys.
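A minimal STFT sketch shows how a time-frequency ridge can reveal sinusoidal micro-Doppler modulation; the carrier, modulation depth, and precession rate below are invented, and a brute-force DFT stands in for a proper windowed FFT.

```python
import cmath
import math

fs = 1000.0   # sampling rate, Hz
n = 2000

# simulated return: 100 Hz carrier with sinusoidal micro-Doppler (+/-50 Hz at 1 Hz),
# mimicking modulation from a slowly precessing scatterer
phase = 0.0
sig = []
for i in range(n):
    f_inst = 100.0 + 50.0 * math.sin(2.0 * math.pi * 1.0 * i / fs)
    phase += 2.0 * math.pi * f_inst / fs
    sig.append(cmath.exp(1j * phase))

def stft_peak(start, win=128):
    """Frequency of the dominant DFT bin of one analysis window (brute-force DFT)."""
    seg = sig[start:start + win]
    mags = [abs(sum(s * cmath.exp(-2j * math.pi * k * m / win)
                    for m, s in enumerate(seg))) for k in range(win)]
    return mags.index(max(mags)) * fs / win

# the peak frequency per window traces the micro-Doppler modulation over time
ridge = [stft_peak(t) for t in range(0, n - 128, 250)]
print(ridge)
```

The window length fixes the usual STFT trade-off: longer windows sharpen frequency resolution but smear fast micro-Doppler excursions, which is one reason the paper compares the STFT against WD and Cohen-class alternatives.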
An Infrastructure for UML-Based Code Generation Tools
NASA Astrophysics Data System (ADS)
Wehrmeister, Marco A.; Freitas, Edison P.; Pereira, Carlos E.
The use of Model-Driven Engineering (MDE) techniques in the domain of distributed embedded real-time systems is gaining importance as a means to cope with the increasing design complexity of such systems. This paper discusses an infrastructure created to build GenERTiCA, a flexible tool that supports an MDE approach, which uses aspect-oriented concepts to handle non-functional requirements from the embedded and real-time systems domain. GenERTiCA generates source code from UML models, and also performs weaving of aspects, which have been specified within the UML model. Additionally, this paper discusses the Distributed Embedded Real-Time Compact Specification (DERCS), a PIM created to support UML-based code generation tools. Some heuristics to transform UML models into DERCS, which have been implemented in GenERTiCA, are also discussed.
A new generation of real-time DOS technology for mission-oriented system integration and operation
NASA Technical Reports Server (NTRS)
Jensen, E. Douglas
1988-01-01
Information is given on system integration and operation (SIO) requirements and a new generation of technical approaches for SIO. Real-time, distribution, survivability, and adaptability requirements and technical approaches are covered. An Alpha operating system program management overview is outlined.
Real-time high speed generator system emulation with hardware-in-the-loop application
NASA Astrophysics Data System (ADS)
Stroupe, Nicholas
The emerging emphasis and benefits of distributed generation on smaller scale networks has prompted much attention and focus to research in this field. Much of the research that has grown in distributed generation has also stimulated the development of simulation software and techniques. Testing and verification of these distributed power networks is a complex task and real hardware testing is often desired. This is where simulation methods such as hardware-in-the-loop become important in which an actual hardware unit can be interfaced with a software simulated environment to verify proper functionality. In this thesis, a simulation technique is taken one step further by utilizing a hardware-in-the-loop technique to emulate the output voltage of a generator system interfaced to a scaled hardware distributed power system for testing. The purpose of this thesis is to demonstrate a new method of testing a virtually simulated generation system supplying a scaled distributed power system in hardware. This task is performed by using the Non-Linear Loads Test Bed developed by the Energy Conversion and Integration Thrust at the Center for Advanced Power Systems. This test bed consists of a series of real hardware developed converters consistent with the Navy's All-Electric-Ship proposed power system to perform various tests on controls and stability under the expected non-linear load environment of the Navy weaponry. This test bed can also explore other distributed power system research topics and serves as a flexible hardware unit for a variety of tests. In this thesis, the test bed will be utilized to perform and validate this newly developed method of generator system emulation. In this thesis, the dynamics of a high speed permanent magnet generator directly coupled with a micro turbine are virtually simulated on an FPGA in real-time. 
The calculated output stator voltage then serves as a reference for a controllable three-phase inverter at the input of the test bed, which emulates and reproduces these voltages on real hardware. The output of the inverter is connected to the rest of the test bed, which can take on a variety of distributed system topologies for many testing scenarios. The idea is that the distributed power system under test in hardware can integrate real generator system dynamics without physically involving an actual generator system. The benefits of successful generator system emulation are vast and enable much more detailed system studies without the drawbacks of needing physical generator units; among these advantages are safety, reduced costs, and the ability to scale while still preserving the appropriate system dynamics. This thesis introduces the ideas behind generator emulation, explains the process and necessary steps to achieve this objective, and demonstrates real results with verification of numerical values in real time. The final goal is to show that this new idea is in fact achievable and can prove to be a highly useful tool in the simulation and verification of distributed power systems.
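The core of the emulation loop described above is a real-time generator model whose computed stator voltages become references for the inverter. A minimal sketch, assuming a simple balanced back-EMF model for a permanent magnet generator; the model, parameter names, and values are illustrative, not the thesis's FPGA implementation:

```python
import math

def stator_voltage_refs(omega, lam_m, theta):
    """Per-phase back-EMF references e_k = omega * lam_m * sin(theta - k*120 deg).

    omega: electrical speed (rad/s), lam_m: PM flux linkage (Wb),
    theta: electrical angle (rad).  All names/values are illustrative.
    """
    amp = omega * lam_m
    return [amp * math.sin(theta - k * 2.0 * math.pi / 3.0) for k in range(3)]

# March the "virtual generator" forward in time, emitting at each step the
# three-phase references a controllable inverter would reproduce in hardware.
dt = 1e-4                              # real-time step (s), assumed
omega = 2.0 * math.pi * 400.0          # high-speed machine, 400 Hz electrical
lam_m = 0.05                           # flux linkage (Wb), assumed
theta, refs = 0.0, []
for _ in range(1000):
    refs.append(stator_voltage_refs(omega, lam_m, theta))
    theta += omega * dt
```

In the actual test bed the loop body would also advance the turbine/generator dynamics before emitting the references; here only the reference-generation step is shown.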
Generational Differences in Knowledge Markets
2010-03-01
and Generation X generations. Following Generation X, Generation Y, or the Millennial Generation, includes those born between 1979 and 1994. The...positions but their numbers are small—approximately half the Baby Boomer population—and they’ll be leading Generation Y which is three times their size...boom” resulted in the 98.8 million-strong Generation Y (Sincavage, 2004). The resulting unevenness of the population distribution by age in the
Generalized Success-Breeds-Success Principle Leading to Time-Dependent Informetric Distributions.
ERIC Educational Resources Information Center
Egghe, Leo; Rousseau, Ronald
1995-01-01
Reformulates the success-breeds-success (SBS) principle in informetrics in order to generate a general theory of source-item relationships. Topics include a time-dependent probability, a new model for the expected probability that is compared with the SBS principle with exact combinatorial calculations, classical frequency distributions, and…
Analysis and Application of Microgrids
NASA Astrophysics Data System (ADS)
Yue, Lu
New trends of generating electricity locally and utilizing non-conventional or renewable energy sources have attracted increasing interest due to the gradual depletion of conventional fossil fuel energy sources. This new type of power generation is called Distributed Generation (DG), and the energy sources it utilizes are termed Distributed Energy Resources (DERs). With DGs embedded in them, distribution networks evolve from passive networks to active networks enabling bidirectional power flows. Further incorporating flexible and intelligent controllers and employing future technologies, an active distribution network becomes a Microgrid. A Microgrid is a small-scale, low-voltage Combined Heat and Power (CHP) supply network designed to supply electrical and heat loads for a small community. To implement Microgrids, a sophisticated Microgrid Management System must be integrated. However, because a Microgrid has multiple DERs integrated and is likely to be deregulated, the ability to perform real-time OPF and economic dispatch over a fast, advanced communication network is necessary. In this thesis, first, problems such as power system modelling, power flow solution, and power system optimization are studied. Then, Distributed Generation and Microgrids are reviewed, including a comprehensive review of current distributed generation technologies and Microgrid Management Systems. Finally, a computer-based AC optimization method which minimizes the total transmission loss and generation cost of a Microgrid is proposed, along with a wireless communication scheme based on synchronized Code Division Multiple Access (sCDMA). The algorithm is tested with a 6-bus power system and a 9-bus power system.
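The power-flow-solving step mentioned in the abstract can be illustrated with a minimal Gauss-Seidel iteration on a two-bus system (slack bus feeding one PQ load bus). The impedance and load values are assumed for illustration, not taken from the thesis:

```python
# Minimal Gauss-Seidel AC power flow: a slack bus at 1.0 p.u. feeds a PQ
# load bus over one line.  All numeric values are illustrative.
z_line = complex(0.01, 0.05)          # line impedance (p.u.)
y = 1.0 / z_line                      # line admittance
s_load = complex(0.5, 0.2)            # PQ load, P + jQ (p.u.)

v1 = complex(1.0, 0.0)                # slack bus voltage
v2 = complex(1.0, 0.0)                # flat-start guess at the load bus
for _ in range(50):                   # Gauss-Seidel iterations
    i_inj = (-s_load / v2).conjugate()   # current injected at bus 2
    v2 = v1 + z_line * i_inj             # KVL update along the line
loss = (abs((v1 - v2) * y) ** 2) * z_line.real   # I^2 * R line loss (p.u.)
```

A loss-minimizing OPF of the kind the thesis proposes would wrap a solver like this inside an optimization over generator setpoints; here only the inner power-flow solution is sketched.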
Absolute nuclear material assay using count distribution (LAMBDA) space
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prasad, Mano K.; Snyderman, Neal J.; Rowland, Mark S.
A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.
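The idea of spreading sampled fission-chain events in time and histogramming gated counts can be sketched as a toy Monte Carlo. This is only an illustration of the general mechanism; the chain-size distribution, rates, and efficiency below are invented placeholders, not the patented LAMBDA model:

```python
import random

random.seed(1)

def simulate_counts(n_chains=20000, p_fission=0.3, rate=1.0,
                    det_eff=0.2, gate=1.0):
    """Toy sketch: sample fission-chain sizes, spread the resulting
    neutron detections in time, and histogram counts per gate width.
    All parameters are illustrative placeholders."""
    events = []
    t = 0.0
    for _ in range(n_chains):
        t += random.expovariate(rate)          # Poisson chain start times
        size = 1
        while random.random() < p_fission:     # geometric chain length
            size += 1
        for _ in range(size):
            if random.random() < det_eff:      # detector efficiency
                events.append(t + random.expovariate(5.0))  # die-away spread
    # count events falling in consecutive gates of width `gate`
    counts = {}
    for e in events:
        g = int(e // gate)
        counts[g] = counts.get(g, 0) + 1
    dist = {}                                  # count value -> number of gates
    for c in counts.values():
        dist[c] = dist.get(c, 0) + 1
    return dist

dist = simulate_counts()
```

The assay method compares measured count distributions like `dist` against model predictions; empty gates are omitted here for brevity.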
Absolute nuclear material assay using count distribution (LAMBDA) space
Prasad, Manoj K [Pleasanton, CA; Snyderman, Neal J [Berkeley, CA; Rowland, Mark S [Alamo, CA
2012-06-05
A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.
NASA Astrophysics Data System (ADS)
Mohammed, Touseef Ahmed Faisal
Since 2000, renewable electricity installations in the United States (excluding hydropower) have more than tripled. Renewable electricity has grown at a compounded annual average of nearly 14% per year from 2000-2010. Wind, Concentrated Solar Power (CSP), and solar photovoltaics (PV) are the fastest growing renewable energy sectors. In 2010 in the U.S., solar PV grew over 71% and CSP grew by 18% from the previous year. Globally, renewable electricity installations have more than quadrupled from 2000-2010, and solar PV generation grew by a factor of more than 28 between 2000 and 2010. The amount of CSP and solar PV installed on the distribution grid is increasing. These PV installations send electrical current from the load centers back toward the generating stations, but the transmission and distribution grids have been designed for uni-directional flow of electrical energy from generating stations to load centers. This causes voltage imbalances and stresses on the switchgear of the electrical circuitry. With the continuous rise in PV installations, analysis of voltage profiles and penetration levels remains an active area of research. Standard distributed photovoltaic (PV) generators represented in simulation studies do not reflect the exact location and variability properties, such as the distance from interconnection points to substations, voltage regulators, solar irradiance, and other environmental factors. Quasi-static simulations assist in hour- and day-ahead peak-load planning, as they provide a time-sequence analysis that helps in generation allocation. Simulation models can be daily, hourly, or yearly depending on the duty cycle and dynamics of the system. High penetration of PV into the power grid dynamically changes the voltage profile and power flow in the distribution circuits due to the inherent variability of PV. A number of modeling and simulation tools are available for the study of such high-penetration PV scenarios.
This thesis specifically utilizes OpenDSS, an open-source Distribution System Simulator developed by the Electric Power Research Institute, to simulate the grid voltage profile with a large-scale PV system under quasi-static time series, considering variations of PV output over seconds and minutes as well as the average daily load variations. A 13-bus IEEE distribution feeder model with distributed residential- and commercial-scale PV at different buses is used for simulation studies. Time-series simulations are discussed for various modes of operation, considering dynamic PV penetration at different time periods in a day. In addition, this thesis demonstrates simulations taking into account the presence of moving clouds for solar forecasting studies.
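The quasi-static time-series approach is: at each timestep, solve a static network snapshot for the instantaneous net load. A minimal pure-Python sketch of that loop, using a first-order voltage-drop formula in place of OpenDSS; the feeder data and daily profiles are assumed for illustration:

```python
import math

# Quasi-static time-series sketch: one static voltage solution per step.
# A real study would drive OpenDSS feeder models instead of this formula.
R, X = 0.02, 0.06            # aggregate feeder impedance (p.u.), assumed
v_source = 1.0               # substation voltage (p.u.)

def bus_voltage(p_net, q_net):
    # First-order voltage-drop approximation for a radial feeder.
    return v_source - (R * p_net + X * q_net) / v_source

voltages = []
for minute in range(0, 24 * 60, 15):          # 15-minute quasi-static steps
    hour = minute / 60.0
    load_p = 0.6 + 0.3 * math.sin(math.pi * (hour - 6.0) / 12.0)    # daily load
    pv_p = max(0.0, 0.8 * math.sin(math.pi * (hour - 6.0) / 12.0))  # PV shape
    voltages.append(bus_voltage(load_p - pv_p, 0.2))
```

With these assumed profiles the midday PV output offsets the load peak, so the midday voltage sits above the midnight voltage, which is the kind of profile shift the thesis studies.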
Nonstationary Dynamics Data Analysis with Wavelet-SVD Filtering
NASA Technical Reports Server (NTRS)
Brenner, Marty; Groutage, Dale; Bessette, Denis (Technical Monitor)
2001-01-01
Nonstationary time-frequency analysis is used for identification and classification of aeroelastic and aeroservoelastic dynamics. Time-frequency multiscale wavelet processing generates discrete energy density distributions. The distributions are processed using the singular value decomposition (SVD). Discrete density functions derived from the SVD generate moments that detect the principal features in the data. The SVD standard basis vectors are applied and then compared with a transformed-SVD, or TSVD, which reduces the number of features into more compact energy density concentrations. Finally, from the feature extraction, wavelet-based modal parameter estimation is applied.
Narrow-band generation in random distributed feedback fiber laser.
Sugavanam, Srikanth; Tarasov, Nikita; Shu, Xuewen; Churkin, Dmitry V
2013-07-15
Narrow-band emission with a spectral width down to ~0.05 nm is achieved in a random distributed feedback fiber laser employing a narrow-band fiber Bragg grating or a fiber Fabry-Perot interferometer filter. The observed line-width is ~10 times narrower than that of other random distributed feedback fiber lasers demonstrated to date. The random DFB laser with the Fabry-Perot interferometer filter simultaneously provides multi-wavelength and narrow-band (within each line) generation, with the possibility of further wavelength tuning.
Multifractal analysis of visibility graph-based Ito-related connectivity time series.
Czechowski, Zbigniew; Lovallo, Michele; Telesca, Luciano
2016-02-01
In this study, we investigate multifractal properties of connectivity time series resulting from the visibility graph applied to normally distributed time series generated by the Ito equations with multiplicative power-law noise. We show that multifractality of the connectivity time series (i.e., the series of the numbers of links outgoing from each node) increases with the exponent of the power-law noise. The multifractality of the connectivity time series could be due to the width of the connectivity degree distribution, which can be related to the exit time of the associated Ito time series. Furthermore, the connectivity time series are characterized by persistence, although the original Ito time series are random; this is due to the visibility graph procedure which, by connecting the values of the time series, generates persistence but destroys most of the nonlinear correlations. Moreover, the visibility graph is sensitive in detecting wide "depressions" in input time series.
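The natural visibility graph criterion used above is simple to state: samples i and j are connected if every sample between them lies strictly below the straight line joining them, and the connectivity time series is the resulting degree sequence. A minimal sketch (O(n^3), fine for short series; the Gaussian input stands in for the Ito-generated series):

```python
import random

def visibility_degrees(series):
    """Natural visibility graph: nodes i and j see each other if every
    sample between them lies below the line connecting them.  Returns
    the connectivity time series (the degree of each node)."""
    n = len(series)
    deg = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            visible = all(
                series[k] < series[j] + (series[i] - series[j]) * (j - k) / (j - i)
                for k in range(i + 1, j)
            )
            if visible:
                deg[i] += 1
                deg[j] += 1
    return deg

random.seed(0)
ts = [random.gauss(0.0, 1.0) for _ in range(200)]   # stand-in input series
connectivity = visibility_degrees(ts)
```

Multifractal analysis would then be applied to `connectivity` rather than to `ts` itself.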
NASA Astrophysics Data System (ADS)
Pedretti, Daniele
2017-04-01
Power-law (PL) distributions are widely adopted to define the late-time scaling of solute breakthrough curves (BTCs) during transport experiments in highly heterogeneous media. However, from a statistical perspective, distinguishing between a PL distribution and another tailed distribution is difficult, particularly when a qualitative assessment based on visual analysis of double-logarithmic plots is used. This presentation discusses the results from a recent analysis in which a suite of statistical tools was applied to rigorously evaluate the scaling of BTCs from experiments that generate tailed distributions typically described as PL at late time. To this end, a set of BTCs from numerical simulations in highly heterogeneous media was generated using a transition probability approach (T-PROGS) coupled to a finite-difference numerical solver of the flow equation (MODFLOW) and a random walk particle tracking approach for Lagrangian transport (RW3D). The T-PROGS fields assumed randomly distributed hydraulic heterogeneities with long correlation scales, creating solute channeling and anomalous transport. For simplicity, transport was simulated as purely advective. This combination of tools generates strongly non-symmetric BTCs that visually resemble PL distributions at late time when plotted on double-log scales. Unlike other combinations of modeling parameters and boundary conditions (e.g., matrix diffusion in fractures), at late time no direct link exists between the mathematical functions describing the scaling of these curves and the physical parameters controlling transport. The results suggest that the statistical tests fail to describe the majority of curves as PL distributed. Moreover, they suggest that PL and lognormal distributions have the same likelihood of representing parametrically the shape of the tails.
It is noticeable that forcing a model to reproduce the tails as PL functions results in a distribution of PL slopes between 1.2 and 4, which are the typical values observed during field experiments. We conclude that care must be taken when defining a BTC late-time distribution as a power-law function. Even though the estimated scaling factors fall in traditional ranges, the actual distribution controlling the scaling of concentration may differ from a power law, with direct consequences, for instance, for the selection of effective parameters in upscaled modeling solutions.
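The core statistical point, that a power law and a lognormal can fit the same tail about equally well, can be sketched by comparing maximum-likelihood fits of both families on tail samples. This is a simplified illustration (untruncated lognormal fit, no goodness-of-fit test), not the rigorous procedure used in the study:

```python
import math, random

def loglik_powerlaw(tail, xmin):
    """MLE power-law fit above xmin: alpha = 1 + n / sum(ln(x/xmin))."""
    n = len(tail)
    s = sum(math.log(x / xmin) for x in tail)
    alpha = 1.0 + n / s
    return alpha, n * math.log((alpha - 1.0) / xmin) - alpha * s

def loglik_lognormal(tail):
    """MLE lognormal fit (ignoring truncation, for simplicity)."""
    logs = [math.log(x) for x in tail]
    n = len(logs)
    mu = sum(logs) / n
    var = sum((v - mu) ** 2 for v in logs) / n
    return sum(-math.log(x * math.sqrt(var) * math.sqrt(2.0 * math.pi))
               - (math.log(x) - mu) ** 2 / (2.0 * var) for x in tail)

random.seed(2)
# synthetic "BTC tail": lognormal samples that visually resemble a power law
samples = [random.lognormvariate(0.0, 1.0) for _ in range(5000)]
xmin = 1.0
tail = [x for x in samples if x >= xmin]
alpha, ll_pl = loglik_powerlaw(tail, xmin)
ll_ln = loglik_lognormal(tail)
```

The fitted `alpha` lands in the "typical" range even though the data are lognormal, which is exactly the ambiguity the abstract warns about; a full analysis would add likelihood-ratio and goodness-of-fit tests.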
Spline methods for approximating quantile functions and generating random samples
NASA Technical Reports Server (NTRS)
Schiess, J. R.; Matthews, C. G.
1985-01-01
Two cubic spline formulations are presented for representing the quantile function (inverse cumulative distribution function) of a random sample of data. Both B-spline and rational spline approximations are compared with analytic representations of the quantile function. It is also shown how these representations can be used to generate random samples for use in simulation studies. Comparisons are made on samples generated from known distributions and on a sample of experimental data. The spline representations are more accurate for multimodal and skewed samples and require much less time to generate samples than the analytic representation.
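The sampling idea is inverse-transform sampling through an approximated quantile function: feed uniform variates through it to reproduce the sample's distribution. A minimal sketch using piecewise-linear interpolation as a stand-in for the paper's B-spline and rational-spline fits:

```python
import bisect, random

random.seed(3)
data = sorted(random.expovariate(1.0) for _ in range(2000))   # example sample
n = len(data)
probs = [(i + 0.5) / n for i in range(n)]                     # plotting positions

def quantile(u):
    """Piecewise-linear interpolation of the empirical quantile function
    (a simple stand-in for the cubic-spline representations)."""
    if u <= probs[0]:
        return data[0]
    if u >= probs[-1]:
        return data[-1]
    j = bisect.bisect_left(probs, u)
    t = (u - probs[j - 1]) / (probs[j] - probs[j - 1])
    return data[j - 1] + t * (data[j] - data[j - 1])

# inverse-transform sampling: uniforms through the quantile function
resample = [quantile(random.random()) for _ in range(1000)]
```

Since `quantile` is just table lookup plus interpolation, generation is fast regardless of how expensive the underlying analytic inverse CDF would be, which is the speed advantage the paper reports.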
Miyatake, Aya; Nishio, Teiji; Ogino, Takashi
2011-10-01
The purpose of this study is to develop a new calculation algorithm that satisfies the requirements for both accuracy and calculation time in a simulation of imaging of the proton-irradiated volume in a patient body in clinical proton therapy. The activity pencil beam algorithm (APB algorithm), a new technique applying the pencil beam algorithm generally used for proton dose calculations to the calculation of activity distributions, was developed to compute the activity distributions formed by positron emitter nuclei generated from target nuclear fragment reactions. In the APB algorithm, activity distributions are calculated using an activity pencil beam kernel, which is constructed from measured activity distributions in the depth direction and calculations in the lateral direction. (12)C, (16)O, and (40)Ca nuclei were determined to be the major target nuclei constituting a human body that are relevant for the calculation of activity distributions. In this study, "virtual positron emitter nuclei" were defined as the integral yield of the various positron emitter nuclei generated from each target nucleus by target nuclear fragment reactions with the irradiated proton beam. Compounds containing plenty of the target nuclei, namely polyethylene, water (including some gelatin), and calcium oxide, were irradiated with a proton beam. Depth activity distributions of virtual positron emitter nuclei generated in each compound from target nuclear fragment reactions were measured using a beam ON-LINE PET system mounted on a rotating gantry port (BOLPs-RGp). The measured activity distributions depend on depth or, in other words, energy. The irradiated proton beam energies were 138, 179, and 223 MeV, and the measurement time was about 5 h, until the measured activity reached the background level.
Furthermore, the activity pencil beam data were constructed using the activity pencil beam kernel, which was composed of the measured depth data and lateral data including multiple Coulomb scattering approximated by a Gaussian function, and were used for calculating activity distributions. The measured depth activity distributions for every target nucleus and proton beam energy were obtained using BOLPs-RGp. The form of each depth activity distribution was verified, and the data were constructed in consideration of the time-dependent change of that form; the time dependence of an activity distribution form could be represented by two half-lives. The Gaussian form of the lateral distribution of the activity pencil beam kernel was determined by the effect of multiple Coulomb scattering. Thus, activity pencil beam data involving time dependence could be obtained in this study. The simulation of imaging of the proton-irradiated volume in a patient body using target nuclear fragment reactions was feasible with the developed APB algorithm taking time dependence into account. Using the APB algorithm, it is suggested that a simulation system for activity distributions with levels of both accuracy and calculation time appropriate for clinical use can be constructed.
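The kernel structure described above, a measured depth profile multiplied by a depth-dependent lateral Gaussian, summed over pencil beams, can be sketched as follows. The profile shape, widths, and beam positions are invented placeholders, not the study's measured data:

```python
import math

def depth_profile(z, z_range=15.0):
    """Assumed depth-activity curve: roughly flat, falling off near the
    end of the proton range (stand-in for the measured BOLPs-RGp data)."""
    return 1.0 / (1.0 + math.exp((z - z_range) / 0.5))

def activity(x, z, beam_positions, sigma0=0.3, growth=0.05):
    """Superposition of activity pencil-beam kernels: depth profile times
    a lateral Gaussian whose width grows with depth (Coulomb scattering)."""
    sigma = sigma0 + growth * z
    total = 0.0
    for xb in beam_positions:
        lateral = math.exp(-((x - xb) ** 2) / (2.0 * sigma ** 2)) / (
            sigma * math.sqrt(2.0 * math.pi))
        total += depth_profile(z) * lateral
    return total

beams = [-0.5, 0.0, 0.5]              # pencil-beam lateral positions (cm)
a_mid = activity(0.0, 5.0, beams)     # inside the irradiated volume
a_deep = activity(0.0, 25.0, beams)   # well beyond the proton range
```

Time dependence (the two half-lives mentioned above) would enter as a multiplicative decay factor on `depth_profile`; it is omitted here for brevity.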
Implementation of a Wireless Time Distribution Testbed Protected with Quantum Key Distribution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bonior, Jason D; Evans, Philip G; Sheets, Gregory S
2017-01-01
Secure time transfer is critical for many time-sensitive applications. The Global Positioning System (GPS), which is often used for this purpose, has been shown to be susceptible to spoofing attacks. Quantum Key Distribution (QKD) offers a way to securely generate encryption keys at two locations. Through careful use of this information it is possible to create a system that is more resistant to spoofing attacks. In this paper we describe our work to create a testbed which utilizes QKD and traditional RF links. This testbed will be used for the development of more secure and spoofing-resistant time distribution protocols.
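One simple way the shared QKD keys can harden time transfer against spoofing is to authenticate each broadcast timestamp with a keyed MAC that only the two sites can compute. A minimal sketch with Python's standard `hmac` module; the paper does not specify this construction, and the key below is a placeholder for key material delivered by the QKD hardware:

```python
import hashlib, hmac, struct

# Placeholder for a symmetric key the two sites would share via QKD.
qkd_key = bytes(range(32))

def tag_timestamp(key, t_ns):
    """Pack a 64-bit nanosecond timestamp and compute its HMAC-SHA256 tag."""
    msg = struct.pack(">Q", t_ns)
    return msg, hmac.new(key, msg, hashlib.sha256).digest()

def verify_timestamp(key, msg, tag):
    """Recompute the tag and compare in constant time."""
    expected = hmac.new(key, msg, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

msg, tag = tag_timestamp(qkd_key, 1_500_000_000_000_000_000)
ok = verify_timestamp(qkd_key, msg, tag)                      # genuine message
spoofed = verify_timestamp(qkd_key, struct.pack(">Q", 42), tag)  # forged time
```

A spoofer without the QKD-derived key cannot produce a valid tag for an altered timestamp, which is the resistance property the testbed is built to explore.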
Category Induction via Distributional Analysis: Evidence from a Serial Reaction Time Task
ERIC Educational Resources Information Center
Hunt, Ruskin H.; Aslin, Richard N.
2010-01-01
Category formation lies at the heart of a number of higher-order behaviors, including language. We assessed the ability of human adults to learn, from distributional information alone, categories embedded in a sequence of input stimuli using a serial reaction time task. Artificial grammars generated corpora of input strings containing a…
MAGI: many-component galaxy initializer
NASA Astrophysics Data System (ADS)
Miki, Yohei; Umemura, Masayuki
2018-04-01
Providing initial conditions is an essential procedure for numerical simulations of galaxies. The initial conditions for idealized individual galaxies in N-body simulations should resemble observed galaxies and be dynamically stable for time-scales much longer than their characteristic dynamical times. However, generating a galaxy model ab initio as a system in dynamical equilibrium is a difficult task, since a galaxy contains several components, including a bulge, disc, and halo. Moreover, it is desirable that the initial-condition generator be fast and easy to use. We have now developed an initial-condition generator for galactic N-body simulations that satisfies these requirements. The developed generator adopts a distribution-function-based method, and it supports various kinds of density models, including custom-tabulated inputs and the presence of more than one disc. We tested the dynamical stability of systems generated by our code, representing early- and late-type galaxies, with N = 2,097,152 and 8,388,608 particles, respectively, and we found that the model galaxies maintain their initial distributions for at least 1 Gyr. The execution times required to generate the two models were 8.5 and 221.7 seconds, respectively, which is negligible compared to typical execution times for N-body simulations. The code is provided as open-source software and is publicly and freely available at https://bitbucket.org/ymiki/magi.
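The distribution-function-based approach can be illustrated on the simplest possible component: drawing particle positions for a Plummer sphere by inverting its cumulative mass profile. This is a standard textbook model used here for illustration; MAGI itself supports many more profiles and also assigns equilibrium velocities:

```python
import math, random

random.seed(4)

def plummer_radius(a=1.0):
    """Draw a radius by inverting the Plummer cumulative mass profile
    M(<r)/M_total = r^3 / (r^2 + a^2)^(3/2)."""
    u = min(max(random.random(), 1e-12), 1.0 - 1e-12)  # avoid endpoints
    return a / math.sqrt(u ** (-2.0 / 3.0) - 1.0)

def plummer_position(a=1.0):
    """Place the sampled radius isotropically on the sphere."""
    r = plummer_radius(a)
    cos_t = random.uniform(-1.0, 1.0)
    sin_t = math.sqrt(1.0 - cos_t ** 2)
    phi = random.uniform(0.0, 2.0 * math.pi)
    return (r * sin_t * math.cos(phi), r * sin_t * math.sin(phi), r * cos_t)

particles = [plummer_position() for _ in range(10000)]
```

For a Plummer sphere with scale length a = 1, the half-mass radius is a / sqrt(2^(2/3) - 1), roughly 1.30, so the median particle radius of a large sample should sit near that value.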
A Distributed Algorithm for Economic Dispatch Over Time-Varying Directed Networks With Delays
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Tao; Lu, Jie; Wu, Di
In power system operation, economic dispatch problem (EDP) is designed to minimize the total generation cost while meeting the demand and satisfying generator capacity limits. This paper proposes an algorithm based on the gradient-push method to solve the EDP in a distributed manner over communication networks potentially with time-varying topologies and communication delays. It has been shown that the proposed method is guaranteed to solve the EDP if the time-varying directed communication network is uniformly jointly strongly connected. Moreover, the proposed algorithm is also able to handle arbitrarily large but bounded time delays on communication links. Numerical simulations are used to illustrate and validate the proposed algorithm.
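The EDP itself, before any distributed machinery, has a simple structure: with quadratic costs, all units not at a capacity limit share one incremental cost lambda at the optimum. A centralized sketch that finds lambda by bisection (the paper's contribution is reaching this same solution distributively via gradient-push); the generator data are illustrative:

```python
# Economic dispatch: minimize sum of a_i*P_i^2 + b_i*P_i subject to
# sum(P_i) = demand and P_i within capacity limits.  Illustrative data.
units = [  # (a, b, Pmin, Pmax)
    (0.10, 2.0, 0.0, 80.0),
    (0.05, 3.0, 0.0, 120.0),
    (0.20, 1.0, 0.0, 50.0),
]
demand = 150.0

def output_at(lam):
    """Each unit's optimal output at incremental cost lam, clipped to limits."""
    return [min(pmax, max(pmin, (lam - b) / (2.0 * a)))
            for a, b, pmin, pmax in units]

lo, hi = 0.0, 100.0
for _ in range(60):                   # bisection on the incremental cost
    lam = 0.5 * (lo + hi)
    if sum(output_at(lam)) < demand:
        lo = lam
    else:
        hi = lam
dispatch = output_at(0.5 * (lo + hi))
```

For these numbers the balancing incremental cost is lambda = 11, giving the dispatch [45, 80, 25]; a distributed algorithm would have each generator reach this lambda through neighbor-to-neighbor communication instead of central bisection.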
Real-time data flow and product generating for GNSS
NASA Technical Reports Server (NTRS)
Muellerschoen, Ronald J.; Caissy, Mark
2004-01-01
The last IGS workshop, with the theme 'Towards Real-Time', resulted in the design of a prototype for real-time data sharing within the IGS. A prototype real-time network is being established that will serve as a test bed for real-time activities within the IGS. We review the development of the prototype and discuss some of the existing methods and related products of real-time GNSS systems. Recommendations are made concerning real-time data distribution and product generation.
Digitally controlled chirped pulse laser for sub-terahertz-range fiber structure interrogation.
Chen, Zhen; Hefferman, Gerald; Wei, Tao
2017-03-01
This Letter reports a sweep velocity-locked laser pulse generator controlled using a digital phase-locked loop (DPLL) circuit. This design is used for the interrogation of sub-terahertz-range fiber structures for sensing applications that require real-time data collection with millimeter-level spatial resolution. A distributed feedback laser was employed to generate chirped laser pulses via injection current modulation. A DPLL circuit was developed to lock the optical frequency sweep velocity. A high-quality linearly chirped laser pulse with a frequency excursion of 117.69 GHz at an optical communication band was demonstrated. The system was further adopted to interrogate a continuously distributed sub-terahertz-range fiber structure (sub-THz-fs) for sensing applications. A strain test was conducted in which the sub-THz-fs showed a linear response to longitudinal strain change with predicted sensitivity. Additionally, temperature testing was conducted in which a heat source was used to generate a temperature distribution along the fiber structure to demonstrate its distributed sensing capability. A Gaussian temperature profile was measured using the described system and tracked in real time, as the heat source was moved.
NASA Astrophysics Data System (ADS)
Pourmousavi Kani, Seyyed Ali
Future power systems (known as the smart grid) will experience a high penetration level of variable distributed energy resources to bring abundant, affordable, clean, efficient, and reliable electric power to all consumers. However, the grid might suffer from the uncertain and variable nature of these generation resources in terms of reliability, and especially in providing required balancing reserves. In the current power system structure, balancing reserves (provided by spinning and non-spinning power generation units) are usually supplied by conventional fossil-fueled power plants. However, such power plants are not the favorite option for the smart grid because of their low efficiency, high emissions, and expensive capital investments in transmission and distribution facilities, to name a few. Providing regulation services in the presence of variable distributed energy resources would be even more difficult for islanded microgrids. The impact and effectiveness of demand response (DR) are still not clear at the distribution and transmission levels; in other words, there is no solid research reported in the literature evaluating the impact of DR on power system dynamic performance. In order to address these issues, a real-time demand response approach along with real-time power management (specifically for microgrids) is proposed in this research. The real-time demand response solution is utilized at the transmission level (through a load-frequency control model) and at the distribution level (in both the islanded and grid-tied modes) to provide effective and fast regulation services for the stable operation of the power system. Then, multiple real-time power management algorithms for grid-tied and islanded microgrids are proposed to operate microgrids economically and effectively.
Extensive dynamic modeling of generation, storage, and load, as well as different controller designs, are considered and developed throughout this research to provide appropriate models and a simulation environment for evaluating the effectiveness of the proposed methodologies. Simulation results revealed the effectiveness of the proposed methods in providing balancing reserves and in the economic and stable operation of microgrids. The proposed tools and approaches can significantly enhance the application of microgrids and demand response in the smart grid era. They will also help to increase the penetration level of variable distributed generation resources in the smart grid.
Distributions-per-level: a means of testing level detectors and models of patch-clamp data.
Schröder, I; Huth, T; Suitchmezian, V; Jarosik, J; Schnell, S; Hansen, U P
2004-01-01
Level or jump detectors generate the reconstructed time series from a noisy record of patch-clamp current. The reconstructed time series is used to create dwell-time histograms for the kinetic analysis of the Markov model of the investigated ion channel. It is shown here that some additional lines in the software of such a detector can provide a powerful new means of patch-clamp analysis. For each current level that can be recognized by the detector, an array is declared. The new software assigns every data point of the original time series to the array that belongs to the actual state of the detector. From the data sets in these arrays distributions-per-level are generated. Simulated and experimental time series analyzed by Hinkley detectors are used to demonstrate the benefits of these distributions-per-level. First, they can serve as a test of the reliability of jump and level detectors. Second, they can reveal beta distributions as resulting from fast gating that would usually be hidden in the overall amplitude histogram. Probably the most valuable feature is that the malfunctions of the Hinkley detectors turn out to depend on the Markov model of the ion channel. Thus, the errors revealed by the distributions-per-level can be used to distinguish between different putative Markov models of the measured time series.
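The distributions-per-level idea is straightforward to sketch: run a level detector over the raw record and append each data point to the array belonging to the detector's current state, then inspect the per-level statistics. A minimal simulation with a two-state channel and a naive threshold detector (a stand-in for the Hinkley detectors in the paper; all rates and noise levels are assumed):

```python
import random

random.seed(5)

# Simulate a noisy two-level (closed/open) single-channel current record.
levels = [0.0, 1.0]                     # closed / open current (a.u.)
true_state, record = 0, []
for _ in range(20000):
    if random.random() < 0.05:          # per-sample state-flip probability
        true_state = 1 - true_state
    record.append(levels[true_state] + random.gauss(0.0, 0.2))

# Naive threshold detector; one array is declared per recognizable level
# and every raw data point is assigned to the detector's current state.
threshold = 0.5
per_level = {0: [], 1: []}
for x in record:
    state = 0 if x < threshold else 1
    per_level[state].append(x)

# Per-level statistics; histograms of per_level[s] are the
# distributions-per-level used to diagnose the detector and gating model.
means = {s: sum(v) / len(v) for s, v in per_level.items()}
```

With fast gating, the distributions-per-level would deviate from Gaussians (e.g., show beta-distribution shapes), revealing structure that the overall amplitude histogram hides.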
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dall-Anese, Emiliano; Zhao, Changhong; Guggilam, Swaroop
Power networks have to withstand a variety of disturbances that affect system frequency, and the problem is compounded with the increasing integration of intermittent renewable generation. Following a large-signal generation or load disturbance, system frequency is arrested leveraging primary frequency control provided by governor action in synchronous generators. In this work, we propose a framework for distributed energy resources (DERs) deployed in distribution networks to provide (supplemental) primary frequency response. Particularly, we demonstrate how power-frequency droop slopes for individual DERs can be designed so that the distribution feeder presents a guaranteed frequency-regulation characteristic at the feeder head. Furthermore, the droop slopes are engineered such that injections of individual DERs conform to a well-defined fairness objective that does not penalize them for their location on the distribution feeder. Time-domain simulations for an illustrative network composed of a combined transmission network and distribution network with frequency-responsive DERs are provided to validate the approach.
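The slope-design idea can be sketched in a few lines: choose individual DER droop slopes that sum to the target feeder-head regulation characteristic, with shares set by a fairness rule. Here the shares are simply proportional to ratings, a simplification of the paper's design, and all numbers are illustrative:

```python
# Design DER power-frequency droop slopes so the whole feeder presents a
# target regulation characteristic at its head.  Shares proportional to
# ratings are one simple fairness rule; the paper's design is richer.
target_slope = 10.0                       # feeder-head MW per Hz (assumed)
ratings = [2.0, 5.0, 3.0]                 # DER ratings (MW), illustrative

total = sum(ratings)
slopes = [target_slope * r / total for r in ratings]

def der_injections(df):
    """Supplemental active-power response of each DER to a frequency
    deviation df (Hz); underfrequency (df < 0) raises injections."""
    return [-m * df for m in slopes]

response = der_injections(-0.1)           # a 100 mHz underfrequency event
```

By construction the individual slopes sum to the target, so the aggregate feeder response to the 0.1 Hz dip is exactly 1 MW regardless of where each DER sits on the feeder.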
Absolute nuclear material assay
Prasad, Manoj K [Pleasanton, CA; Snyderman, Neal J [Berkeley, CA; Rowland, Mark S [Alamo, CA
2012-05-15
A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.
Absolute nuclear material assay
Prasad, Manoj K [Pleasanton, CA; Snyderman, Neal J [Berkeley, CA; Rowland, Mark S [Alamo, CA
2010-07-13
A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.
Estimating probable flaw distributions in PWR steam generator tubes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gorman, J.A.; Turner, A.P.L.
1997-02-01
This paper describes methods for estimating the number and size distributions of flaws of various types in PWR steam generator tubes. These estimates are needed when calculating the probable primary to secondary leakage through steam generator tubes under postulated accidents such as severe core accidents and steam line breaks. The paper describes methods for two types of predictions: (1) the numbers of tubes with detectable flaws of various types as a function of time, and (2) the distributions in size of these flaws. Results are provided for hypothetical severely affected, moderately affected and lightly affected units. Discussion is provided regarding uncertainties and assumptions in the data and analyses.
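Estimates of this kind are naturally exercised by Monte Carlo sampling: draw a number of flawed tubes from a count distribution, then a size for each flaw from a size distribution. A minimal sketch with a Poisson count and Weibull-distributed through-wall depths; the distributional choices and every parameter are illustrative placeholders, not the paper's fitted values:

```python
import math, random

random.seed(6)

def sample_poisson(lam):
    """Knuth's method; adequate for the modest rates used here."""
    l, k, p = math.exp(-lam), 0, 1.0
    while p > l:
        k += 1
        p *= random.random()
    return k - 1

n_flawed = sample_poisson(25.0)           # flawed tubes at this inspection
# flaw depths as through-wall fraction, Weibull(scale=0.35, shape=2.0)
depths = [random.weibullvariate(0.35, 2.0) for _ in range(n_flawed)]
# fraction exceeding a hypothetical 40% through-wall repair limit
frac_deep = sum(d > 0.40 for d in depths) / max(1, n_flawed)
```

Repeating this draw many times yields the probable leakage statistics the paper's accident analyses require.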
Fokker-Planck simulation of runaway electron generation in disruptions with the hot-tail effect
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nuga, H., E-mail: nuga@p-grp.nucleng.kyoto-u.ac.jp; Fukuyama, A.; Yagi, M.
2016-06-15
To study runaway electron generation in disruptions, we have extended the three-dimensional (two-dimensional in momentum space; one-dimensional in the radial direction) Fokker-Planck code, which describes the evolution of the relativistic momentum distribution function of electrons and the induced toroidal electric field in a self-consistent manner. A particular focus is placed on the hot-tail effect in two-dimensional momentum space. The effect appears if the drop of the background plasma temperature is sufficiently rapid compared with the electron-electron slowing down time for a few times of the pre-quench thermal velocity. It contributes to not only the enhancement of the primary runaway electron generation but also the broadening of the runaway electron distribution in the pitch angle direction. If the thermal energy loss during the major disruption is assumed to be isotropic, there are hot-tail electrons that have sufficiently large perpendicular momentum, and the runaway electron distribution becomes broader in the pitch angle direction. In addition, the pitch angle scattering also yields the broadening. Since the electric field is reduced due to the burst of runaway electron generation, the time required for accelerating electrons to the runaway region becomes longer. The longer acceleration period makes the pitch-angle scattering more effective.
Differential memory in the earth's magnetotail
NASA Technical Reports Server (NTRS)
Burkhart, G. R.; Chen, J.
1991-01-01
The process of 'differential memory' in the earth's magnetotail is studied in the framework of the modified Harris magnetotail geometry. It is verified that differential memory can generate non-Maxwellian features in the modified Harris field model. The time scales and the potentially observable distribution functions associated with the process of differential memory are investigated, and it is shown that non-Maxwellian distributions can evolve as a test particle response to distribution function boundary conditions in a Harris field magnetotail model. The non-Maxwellian features which arise from distribution function mapping have definite time scales associated with them, which are generally shorter than the earthward convection time scale but longer than the typical Alfven crossing time.
Design and implementation of the NaI(Tl)/CsI(Na) detectors output signal generator
NASA Astrophysics Data System (ADS)
Zhou, Xu; Liu, Cong-Zhan; Zhao, Jian-Ling; Zhang, Fei; Zhang, Yi-Fei; Li, Zheng-Wei; Zhang, Shuo; Li, Xu-Fang; Lu, Xue-Feng; Xu, Zhen-Ling; Lu, Fang-Jun
2014-02-01
We designed and implemented a signal generator that can simulate the output of the pre-amplifiers of the NaI(Tl)/CsI(Na) detectors onboard the Hard X-ray Modulation Telescope (HXMT). Implemented on an FPGA (Field Programmable Gate Array) in VHDL, with a random component added, the generator produces double-exponential random pulse signals. The statistical distribution of the signal amplitude is programmable, and the time intervals between adjacent signals statistically follow a negative exponential distribution.
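The two statistical requirements in this abstract — a programmable amplitude distribution and exponentially distributed inter-arrival times — can be sketched in software. The pulse shape, time constants and sampling rate below are illustrative assumptions, not the HXMT hardware parameters:

```python
import math
import random

def double_exp_pulse_train(n_pulses, rate_hz, tau_rise, tau_fall, fs,
                           amp_sampler=lambda: 1.0, seed=0):
    """Simulate a pre-amplifier-like pulse train: double-exponential pulses
    whose inter-arrival times follow a negative exponential distribution.
    The amplitude distribution is programmable via amp_sampler."""
    rng = random.Random(seed)
    arrivals, t = [], 0.0
    for _ in range(n_pulses):
        t += rng.expovariate(rate_hz)        # exponential waiting times
        arrivals.append(t)
    n_samples = int((arrivals[-1] + 10.0 * tau_fall) * fs)
    signal = [0.0] * n_samples
    for t0 in arrivals:
        a = amp_sampler()                    # programmable amplitude
        k0 = int(t0 * fs)
        for k in range(k0, n_samples):
            dt = (k - k0) / fs
            if dt > 10.0 * tau_fall:         # pulse has decayed; stop adding
                break
            signal[k] += a * (math.exp(-dt / tau_fall) - math.exp(-dt / tau_rise))
    return arrivals, signal
```

Passing, for example, `amp_sampler=lambda: random.gauss(1.0, 0.1)` would change the amplitude statistics without touching the timing model.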
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smallwood, D.O.
It is recognized that some dynamic and noise environments are characterized by time histories which are not Gaussian. An example is high intensity acoustic noise. Another example is some transportation vibration. A better simulation of these environments can be generated if a zero mean non-Gaussian time history can be reproduced with a specified auto (or power) spectral density (ASD or PSD) and a specified probability density function (pdf). After the required time history is synthesized, the waveform can be used for simulation purposes. For example, modern waveform reproduction techniques can be used to reproduce the waveform on electrodynamic or electrohydraulic shakers. Or the waveforms can be used in digital simulations. A method is presented for the generation of realizations of zero mean non-Gaussian random time histories with a specified ASD and pdf. First a Gaussian time history with the specified auto (or power) spectral density (ASD) is generated. A monotonic nonlinear function relating the Gaussian waveform to the desired realization is then established based on the Cumulative Distribution Function (CDF) of the desired waveform and the known CDF of a Gaussian waveform. The established function is used to transform the Gaussian waveform to a realization of the desired waveform. Since the transformation preserves the zero-crossings and peaks of the original Gaussian waveform, and does not introduce any substantial discontinuities, the ASD is not substantially changed. Several methods are available to generate a realization of a Gaussian distributed waveform with a known ASD. The method of Smallwood and Paez (1993) is an example. However, the generation of random noise with a specified ASD but with a non-Gaussian distribution is less well known.
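The two-step procedure described above (a Gaussian realization with the target ASD, then a monotonic CDF-to-CDF transform) can be sketched as follows. The spectral scaling convention and the example uniform target are assumptions for illustration only:

```python
import numpy as np
from math import erf

def non_gaussian_series(psd, fs, target_icdf, seed=0):
    """Synthesize a zero-mean Gaussian series with a given one-sided PSD,
    then push it through the zero-memory nonlinearity
    y = F_target^{-1}(Phi(x)), as in the CDF-transform method."""
    rng = np.random.default_rng(seed)
    n = 2 * (len(psd) - 1)                   # even-length time series
    df = fs / n
    # Random-phase synthesis: bin amplitudes from the PSD, uniform phases
    # (assumed scaling convention).
    mags = n * np.sqrt(np.asarray(psd, dtype=float) * df / 2.0)
    phases = rng.uniform(0.0, 2.0 * np.pi, len(psd))
    spec = mags * np.exp(1j * phases)
    spec[0] = 0.0                            # enforce zero mean
    x = np.fft.irfft(spec, n)
    sigma = x.std()
    # Gaussian CDF of the standardized series, then the target inverse CDF.
    u = np.array([0.5 * (1.0 + erf(v / (sigma * np.sqrt(2.0)))) for v in x])
    return target_icdf(u)
```

With `target_icdf = lambda u: u - 0.5` the output is approximately uniform on [-0.5, 0.5] while keeping the zero-crossings of the underlying Gaussian series, which is why the ASD survives the transform largely unchanged.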
ERIC Educational Resources Information Center
Doerann-George, Judith
The Integrated Moving Average (IMA) model of time series, and the analysis of intervention effects based on it, assume random shocks which are normally distributed. To determine the robustness of the analysis to violations of this assumption, empirical sampling methods were employed. Samples were generated from three populations; normal,…
Resilient Distribution System by Microgrids Formation After Natural Disasters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Chen; Wang, Jianhui; Qiu, Feng
2016-03-01
Microgrids with distributed generation provide a resilient solution in the case of major faults in a distribution system due to natural disasters. This paper proposes a novel distribution system operational approach by forming multiple microgrids energized by distributed generation from the radial distribution system in real-time operations, to restore critical loads from the power outage. Specifically, a mixed-integer linear program (MILP) is formulated to maximize the critical loads to be picked up while satisfying the self-adequacy and operation constraints for the microgrids formation problem, by controlling the ON/OFF status of the remotely controlled switch devices and distributed generation. A distributed multi-agent coordination scheme is designed via local communications for the global information discovery as inputs of the optimization, which is suitable for autonomous communication requirements after the disastrous event. The formed microgrids can be further utilized for power quality control and can be connected to a larger microgrid before the restoration of the main grids is complete. Numerical results based on modified IEEE distribution test systems validate the effectiveness of our proposed scheme.
Controlled decoherence in a quantum Lévy kicked rotator
NASA Astrophysics Data System (ADS)
Schomerus, Henning; Lutz, Eric
2008-06-01
We develop a theory describing the dynamics of quantum kicked rotators (modeling cold atoms in a pulsed optical field) which are subjected to combined amplitude and timing noise generated by a renewal process (acting as an engineered reservoir). For waiting-time distributions of variable exponent (Lévy noise), we demonstrate the existence of a regime of nonexponential loss of phase coherence. In this regime, the momentum dynamics is subdiffusive, which also manifests itself in a non-Gaussian limiting distribution and a fractional power-law decay of the inverse participation ratio. The purity initially decays with a stretched exponential which is followed by two regimes of power-law decay with different exponents. The averaged logarithm of the fidelity probes the sprinkling distribution of the renewal process. These analytical results are confirmed by numerical computations on quantum kicked rotators subjected to noise events generated by a Yule-Simon distribution.
Wigner-Ville distribution and Gabor transform in Doppler ultrasound signal processing.
Ghofrani, S; Ayatollahi, A; Shamsollahi, M B
2003-01-01
Time-frequency distributions have been used extensively for nonstationary signal analysis; they describe how the frequency content of a signal changes in time. The Wigner-Ville distribution (WVD) is the best known. The drawback of the WVD is cross-term artifacts. An alternative to the WVD is the Gabor transform (GT), a signal decomposition method, which displays the time-frequency energy of a signal on a joint t-f plane without generating considerable cross-terms. In this paper the WVD and GT of ultrasound echo signals are computed analytically.
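A direct discrete implementation of the WVD shows where the cross-terms come from: the distribution is the DFT over lag of the instantaneous autocorrelation, which is quadratic in the signal, so any two components also produce oscillatory interference. This is a generic textbook sketch, not the paper's analytic computation:

```python
import numpy as np

def wigner_ville(x):
    """Discrete Wigner-Ville distribution (sketch): for each time index t,
    take the DFT over the lag m of x[t+m] * conj(x[t-m])."""
    n = len(x)
    W = np.zeros((n, n))
    for t in range(n):
        mmax = min(t, n - 1 - t)           # largest lag that stays in range
        kernel = np.zeros(n, dtype=complex)
        for m in range(-mmax, mmax + 1):   # negative lags wrap around
            kernel[m % n] = x[t + m] * np.conj(x[t - m])
        W[t] = np.fft.fft(kernel).real
    return W  # column k corresponds to frequency k * fs / (2 * n)
```

Note the factor of two in the frequency axis: the lag kernel of a tone at f0 oscillates at 2*f0, a standard property of the discrete WVD.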
Comparing Different Fault Identification Algorithms in Distributed Power System
NASA Astrophysics Data System (ADS)
Alkaabi, Salim
A power system is a large, complex system that delivers electrical power from the generation units to the consumers. As the demand for electrical power increased, distributed power generation was introduced to the power system. Faults may occur in the power system at any time and in different locations. These faults cause severe damage to the system, as they may lead to a full failure of the power system. Using distributed generation in the power system makes it even harder to identify the location of faults in the system. The main objective of this work is to test different fault location identification algorithms on a power system with different amounts of power injected by distributed generators. As faults may lead the system to full failure, this is an important area for research. In this thesis, different fault location identification algorithms have been tested and compared while different amounts of power are injected from distributed generators. The algorithms were tested on the IEEE 34 node test feeder using MATLAB, and the results were compared to find when these algorithms might fail and to assess the reliability of these methods.
Ogawa, Kuniyasu; Sasaki, Tatsuyoshi; Yoneda, Shigeki; Tsujinaka, Kumiko; Asai, Ritsuko
2018-05-17
In order to increase the current density generated in a PEFC (polymer electrolyte fuel cell), a method for measuring the spatial distribution of both the current and the water content of the MEA (membrane electrode assembly) is necessary. Based on the frequency shifts of NMR (nuclear magnetic resonance) signals acquired from the water contained in the MEA using 49 NMR coils in a 7 × 7 arrangement inserted in the PEFC, a method for measuring the two-dimensional spatial distribution of electric current generated in a unit cell with a power generation area of 140 mm × 160 mm was devised. We also developed an inverse analysis method to determine the two-dimensional electric current distribution that can be applied to actual PEFC connections. Two analytical techniques, namely coarse graining of segments and stepwise search, were used to shorten the calculation time required for inverse analysis of the electric current map. Using this method and techniques, spatial distributions of electric current and water content in the MEA were obtained when the PEFC generated electric power at 100 A. Copyright © 2018 Elsevier Inc. All rights reserved.
Size distributions of micro-bubbles generated by a pressurized dissolution method
NASA Astrophysics Data System (ADS)
Taya, C.; Maeda, Y.; Hosokawa, S.; Tomiyama, A.; Ito, Y.
2012-03-01
The size of micro-bubbles is widely distributed in the range of one to several hundred micrometers and depends on the generation method, flow conditions and elapsed time after bubble generation. Although a size distribution of micro-bubbles should be taken into account to improve accuracy in numerical simulations of flows with micro-bubbles, the variety of size distributions makes it difficult to introduce them into simulations. On the other hand, several models such as the Rosin-Rammler equation and the Nukiyama-Tanasawa equation have been proposed to represent the size distribution of particles or droplets. The applicability of these models to the size distribution of micro-bubbles has not been examined yet. In this study, we therefore measure the size distribution of micro-bubbles generated by a pressurized dissolution method by using a phase Doppler anemometer (PDA), and investigate the applicability of the available models to the size distributions of micro-bubbles. The experimental apparatus consists of a pressurized tank in which air is dissolved in liquid under high-pressure conditions, a decompression nozzle in which micro-bubbles are generated due to pressure reduction, a rectangular duct and an upper tank. Experiments are conducted for several liquid volumetric fluxes in the decompression nozzle. Measurements are carried out at the downstream region of the decompression nozzle and in the upper tank.
The experimental results indicate that (1) the Nukiyama-Tanasawa equation well represents the size distribution of micro-bubbles generated by the pressurized dissolution method, whereas the Rosin-Rammler equation fails in the representation, (2) the bubble size distribution of micro-bubbles can be evaluated by using the Nukiyama-Tanasawa equation without individual bubble diameters, when mean bubble diameter and skewness of the bubble distribution are given, and (3) an evaluation method of visibility based on the bubble size distribution and bubble number density is proposed, and the evaluated visibility agrees well with the visibility measured in the upper tank.
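The Nukiyama-Tanasawa form is commonly written as f(D) ∝ D^p exp(−b D^q); the sketch below normalizes it numerically so it can be compared against a measured size histogram. Parameter conventions vary in the literature, so the exponents and the integration range here are placeholder assumptions:

```python
import math

def nukiyama_tanasawa_pdf(ds, p, q, b, d_max=500.0, n_grid=20000):
    """Evaluate the Nukiyama-Tanasawa form f(D) ∝ D**p * exp(-b * D**q),
    normalized by midpoint quadrature on (0, d_max), at each diameter in ds.
    Illustrative sketch; d_max must cover the support of the distribution."""
    grid = [d_max * (i + 0.5) / n_grid for i in range(n_grid)]
    h = d_max / n_grid
    norm = sum((x ** p) * math.exp(-b * x ** q) for x in grid) * h
    return [(d ** p) * math.exp(-b * d ** q) / norm for d in ds]
```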
NASA Technical Reports Server (NTRS)
Huba, J. D.; Chen, J.; Anderson, R. R.
1992-01-01
Attention is given to a mechanism to generate a broad spectrum of electrostatic turbulence in the quiet time central plasma sheet (CPS) plasma. It is shown theoretically that multiple-ring ion distributions can generate short-wavelength (less than about 1), electrostatic turbulence with frequencies less than about kVj, where Vj is the velocity of the jth ring. On the basis of a set of parameters from measurements made in the CPS, it is found that electrostatic turbulence can be generated with wavenumbers in the range of 0.02 to 1.0, with real frequencies in the range of 0 to 10, and with linear growth rates greater than 0.01 over a broad range of angles relative to the magnetic field (5-90 deg). These theoretical results are compared with wave data from ISEE 1 using an ion distribution function exhibiting multiple-ring structures observed at the same time. The theoretical results in the linear regime are found to be consistent with the wave data.
Waste prevention in liquid detergent distribution: a comparison based on life cycle assessment.
Nessi, Simone; Rigamonti, Lucia; Grosso, Mario
2014-11-15
The distribution of liquid detergents through self-dispensing systems has been adopted in some Italian retail stores over the last few years. By enabling the consumer to refill the same container several times, it is proposed as a less waste-generating and more environmentally friendly alternative to the traditional distribution with single-use plastic containers. For this reason, its implementation is encouraged by the national waste prevention programme recently adopted in Italy. In order to assess such claims, a life cycle assessment was carried out to evaluate whether detergent distribution through self-dispensing systems actually achieves the expected reduction in waste generation and environmental impacts. The focus was on the distribution within the large-scale retail trade and on the categories of laundry detergents, fabric softeners and hand dishwashing detergents. For each of them, a set of baseline single-use scenarios were compared with two alternative waste prevention scenarios, where the detergent is distributed through self-dispensing systems. In addition to waste generation, the Cumulative Energy Demand and thirteen midpoint-level potential impact indicators were calculated for the comparison. Results showed that a reduction in waste generation of up to 98% can be achieved, depending on the category of detergent, on the baseline scenario of comparison and on the number of times the refillable container is used. A progressive reduction in the energy demand and in most of the potential impacts was also observed, starting from a minimum number of uses of the refillable container. Copyright © 2014 Elsevier B.V. All rights reserved.
Generating heavy particles with energy and momentum conservation
NASA Astrophysics Data System (ADS)
Mereš, Michal; Melo, Ivan; Tomášik, Boris; Balek, Vladimír; Černý, Vladimír
2011-12-01
We propose a novel algorithm, called REGGAE, for the generation of momenta of a given sample of particle masses, evenly distributed in Lorentz-invariant phase space and obeying energy and momentum conservation. In comparison to other existing algorithms, REGGAE is designed for use in multiparticle production in hadronic and nuclear collisions where many hadrons are produced and a large part of the available energy is stored in the form of their masses. The algorithm uses a loop simulating multiple collisions which lead to production of configurations with reasonably large weights. Program summary: Program title: REGGAE (REscattering-after-Genbod GenerAtor of Events) Catalogue identifier: AEJR_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEJR_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 1523 No. of bytes in distributed program, including test data, etc.: 9608 Distribution format: tar.gz Programming language: C++ Computer: PC Pentium 4, though no particular tuning for this machine was performed. Operating system: Originally designed on Linux PC with g++, but it has been compiled and run successfully on OS X with g++ and MS Windows with Microsoft Visual C++ 2008 Express Edition as well. RAM: This depends on the number of particles which are generated. For 10 particles like in the attached example it requires about 120 kB. Classification: 11.2 Nature of problem: The task is to generate momenta of a sample of particles with given masses which obey energy and momentum conservation. Generated samples should be evenly distributed in the available Lorentz-invariant phase space. Solution method: In general, the algorithm works in two steps. First, all momenta are generated with the GENBOD algorithm.
There, particle production is modeled as a sequence of two-body decays of heavy resonances. After all momenta are generated this way, they are reshuffled. Each particle undergoes a collision with some other partner such that in the pair center of mass system the new directions of momenta are distributed isotropically. After each particle collides only a few times, the momenta are distributed evenly across the whole available phase space. Starting with GENBOD is not essential for the procedure but it improves the performance. Running time: This depends on the number of particles and number of events one wants to generate. On a LINUX PC with 2 GHz processor, generation of 1000 events with 10 particles each takes about 3 s.
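The reshuffling step can be sketched directly: pick a pair, boost to the pair's centre of mass, redraw the back-to-back momenta isotropically with unchanged magnitude, and boost back; this conserves total energy and momentum exactly and preserves each particle's mass. A minimal illustration of the idea (not the REGGAE code itself):

```python
import math

def boost(p4, beta):
    """Lorentz boost of a four-vector p4 = [E, px, py, pz] into a frame
    moving with velocity beta (3-vector, in units of c)."""
    b2 = sum(b * b for b in beta)
    if b2 < 1e-15:
        return list(p4)
    gamma = 1.0 / math.sqrt(1.0 - b2)
    bp = sum(b * p for b, p in zip(beta, p4[1:]))
    coef = (gamma - 1.0) * bp / b2 - gamma * p4[0]
    return [gamma * (p4[0] - bp)] + [p4[i + 1] + coef * beta[i] for i in range(3)]

def reshuffle_pair(p4a, p4b, rng):
    """One rescattering step: in the pair's centre of mass the momenta are
    equal and opposite, so an isotropic new direction with the same
    magnitude leaves energies (and hence masses) unchanged."""
    tot = [a + b for a, b in zip(p4a, p4b)]
    beta = [tot[i + 1] / tot[0] for i in range(3)]     # CM velocity
    a_cm, b_cm = boost(p4a, beta), boost(p4b, beta)
    pmag = math.sqrt(sum(c * c for c in a_cm[1:]))
    cth = rng.uniform(-1.0, 1.0)                       # isotropic direction
    phi = rng.uniform(0.0, 2.0 * math.pi)
    sth = math.sqrt(1.0 - cth * cth)
    n = [sth * math.cos(phi), sth * math.sin(phi), cth]
    a_new = [a_cm[0]] + [pmag * c for c in n]
    b_new = [b_cm[0]] + [-pmag * c for c in n]
    back = [-b for b in beta]                          # boost back to the lab
    return boost(a_new, back), boost(b_new, back)
```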
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sepehri, Aliasghar; Loeffler, Troy D.; Chen, Bin, E-mail: binchen@lsu.edu
2014-08-21
A new method has been developed to generate bending angle trials to improve the acceptance rate and the speed of configurational-bias Monte Carlo. Whereas traditionally the trial geometries are generated from a uniform distribution, in this method we attempt to use the exact probability density function so that each geometry generated is likely to be accepted. In actual practice, due to the complexity of this probability density function, a numerical representation of this distribution function would be required. This numerical table can be generated a priori from the distribution function. This method has been tested on a united-atom model of alkanes including propane, 2-methylpropane, and 2,2-dimethylpropane, which are good representatives of both linear and branched molecules. It has been shown from these test cases that reasonable approximations can be made, especially for the highly branched molecules, to reduce drastically the dimensionality and correspondingly the amount of tabulated data that needs to be stored. Despite these approximations, the dependencies between the various geometrical variables can still be well considered, as evident from a nearly perfect acceptance rate achieved. For all cases, the bending angles were shown to be sampled correctly by this method, with an acceptance rate of at least 96% for 2,2-dimethylpropane and more than 99% for propane. Since only one trial is required to be generated for each bending angle (instead of thousands of trials required by the conventional algorithm), this method can dramatically reduce the simulation time. The profiling results of our Monte Carlo simulation code show that trial generation, which used to be the most time consuming process, is no longer the time dominating component of the simulation.
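The idea of tabulating the bending-angle distribution a priori and inverting it can be sketched for a single harmonic bending potential. The potential form and parameters below are assumptions; a production code would use bisection for the CDF lookup and handle the coupled geometrical variables treated in the paper:

```python
import math

def build_angle_table(beta_u, k_bend, theta0, n=2000):
    """Tabulate the CDF of p(theta) ∝ sin(theta) * exp(-beta*U(theta)) for an
    assumed harmonic bending potential U = 0.5*k*(theta - theta0)**2."""
    thetas = [math.pi * (i + 0.5) / n for i in range(n)]
    w = [math.sin(t) * math.exp(-beta_u * 0.5 * k_bend * (t - theta0) ** 2)
         for t in thetas]
    total = sum(w)
    cdf, acc = [], 0.0
    for wi in w:
        acc += wi / total
        cdf.append(acc)
    return thetas, cdf

def sample_angle(thetas, cdf, rng):
    """Draw a trial angle by inverting the tabulated CDF; each trial already
    lands in a high-probability region, so acceptance is near unity."""
    u = rng.random()
    for t, c in zip(thetas, cdf):   # linear scan for clarity
        if c >= u:
            return t
    return thetas[-1]
```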
Overview of Wholesale Electricity Markets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Milligan, Michael; Bloom, Aaron P; Cochran, Jaquelin M
This chapter provides a comprehensive review of four key electricity markets: energy markets (day-ahead and real-time markets); ancillary service markets; financial transmission rights markets; capacity markets. It also discusses how the outcomes of each of these markets may be impacted by the introduction of high penetrations of variable generation. Furthermore, the chapter examines considerations needed to ensure that wholesale market designs are inclusive of emerging technologies, such as demand response, distributed generation, and distributed storage.
Maintaining Balance: The Increasing Role of Energy Storage for Renewable Integration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stenclik, Derek; Denholm, Paul; Chalamala, Babu
2017-10-17
For nearly a century, global power systems have focused on three key functions: to generate, transmit, and distribute electricity as a real-time commodity. Physics requires that electricity generation always be in real-time balance with load, despite variability in load on timescales ranging from sub-second disturbances to multi-year trends. With the increasing role of variable generation from wind and solar, retirements of fossil fuel-based generation, and a changing consumer demand profile, grid operators are using new methods to maintain this balance.
Multifractal analysis of time series generated by discrete Ito equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Telesca, Luciano; Czechowski, Zbigniew; Lovallo, Michele
2015-06-15
In this study, we show that discrete Ito equations with short-tail Gaussian marginal distribution function generate multifractal time series. The multifractality is due to the nonlinear correlations, which are hidden in Markov processes and are generated by the interrelation between the drift and the multiplicative stochastic forces in the Ito equation. A link between the range of the generalized Hurst exponents and the mean of the squares of all averaged net forces is suggested.
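A discrete Ito equation of the kind studied here iterates a drift plus a multiplicative stochastic force on Gaussian shocks. The specific drift and diffusion used below are illustrative placeholders, not the paper's:

```python
import math
import random

def discrete_ito_series(n, drift, diffusion, x0=0.0, dt=1.0, seed=1):
    """Iterate a discrete Ito equation
        x_{k+1} = x_k + a(x_k)*dt + b(x_k)*sqrt(dt)*xi_k
    with i.i.d. standard-normal shocks xi_k. The interplay of the drift a
    and the multiplicative force b is what induces the hidden nonlinear
    correlations discussed in the abstract."""
    rng = random.Random(seed)
    x, out = x0, []
    for _ in range(n):
        xi = rng.gauss(0.0, 1.0)
        x = x + drift(x) * dt + diffusion(x) * math.sqrt(dt) * xi
        out.append(x)
    return out
```

A mean-reverting drift with state-dependent noise, e.g. `drift=lambda x: -0.5*x` and `diffusion=lambda x: 0.3*math.sqrt(1+x*x)`, yields a stationary but non-trivially correlated series suitable for multifractal analysis.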
Earthquakes induced by fluid injection and explosion
Healy, J.H.; Hamilton, R.M.; Raleigh, C.B.
1970-01-01
Earthquakes generated by fluid injection near Denver, Colorado, are compared with earthquakes triggered by nuclear explosion at the Nevada Test Site. Spatial distributions of the earthquakes in both cases are compatible with the hypothesis that variation of fluid pressure in preexisting fractures controls the time distribution of the seismic events in an "aftershock" sequence. We suggest that the fluid pressure changes may also control the distribution in time and space of natural aftershock sequences and of earthquakes that have been reported near large reservoirs. © 1970.
NASA Astrophysics Data System (ADS)
Liland, Kristian Hovde; Snipen, Lars
When a series of Bernoulli trials occur within a fixed time frame or limited space, it is often interesting to assess if the successful outcomes have occurred completely at random, or if they tend to group together. One example, in genetics, is detecting grouping of genes within a genome. Approximations of the distribution of successes are possible, but they become inaccurate for small sample sizes. In this article, we describe the exact distribution of time between random, non-overlapping successes in discrete time of fixed length. A complete description of the probability mass function, the cumulative distribution function, mean, variance and recurrence relation is included. We propose an associated test for the over-representation of short distances and illustrate the methodology through relevant examples. The theory is implemented in an R package including probability mass, cumulative distribution, quantile function, random number generator, simulation functions, and functions for testing.
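For small cases, the exact distribution of the gap between consecutive successes can be cross-checked by brute-force enumeration over all placements of k successes in n slots (the article derives closed-form expressions; this sketch is only a sanity check, not the R package's method):

```python
from itertools import combinations
from collections import Counter

def adjacent_gap_pmf(n, k):
    """Exact pmf of the gap between consecutive successes when k success
    positions are placed uniformly at random among n discrete slots,
    obtained by enumerating all C(n, k) placements."""
    counts, total = Counter(), 0
    for pos in combinations(range(n), k):
        for a, b in zip(pos, pos[1:]):   # each adjacent pair of successes
            counts[b - a] += 1
            total += 1
    return {d: c / total for d, c in sorted(counts.items())}
```

For n = 6, k = 2 there are 15 placements and 6 − d pairs at gap d, so short gaps are the most probable — the over-representation the proposed test targets.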
A study of the kinetic energy generation with general circulation models
NASA Technical Reports Server (NTRS)
Chen, T.-C.; Lee, Y.-H.
1983-01-01
The history data of winter simulation by the GLAS climate model and the NCAR community climate model are used to examine the generation of atmospheric kinetic energy. The contrast between the geographic distributions of the generation of kinetic energy and divergence of kinetic energy flux shows that kinetic energy is generated in the upstream side of jets, transported to the downstream side and destroyed there. The contributions from the time-mean and transient modes to the counterbalance between generation of kinetic energy and divergence of kinetic energy flux are also investigated. It is observed that the kinetic energy generated by the time-mean mode is essentially redistributed by the time-mean flow, while that generated by the transient flow is mainly responsible for the maintenance of the kinetic energy of the entire atmospheric flow.
An extension of the directed search domain algorithm to bilevel optimization
NASA Astrophysics Data System (ADS)
Wang, Kaiqiang; Utyuzhnikov, Sergey V.
2017-08-01
A method is developed for generating a well-distributed Pareto set for the upper level in bilevel multiobjective optimization. The approach is based on the Directed Search Domain (DSD) algorithm, which is a classical approach for generation of a quasi-evenly distributed Pareto set in multiobjective optimization. The approach contains a double-layer optimizer designed in a specific way under the framework of the DSD method. The double-layer optimizer is based on bilevel single-objective optimization and aims to find a unique optimal Pareto solution rather than generate the whole Pareto frontier on the lower level in order to improve the optimization efficiency. The proposed bilevel DSD approach is verified on several test cases, and a relevant comparison against another classical approach is made. It is shown that the approach can generate a quasi-evenly distributed Pareto set for the upper level with relatively low time consumption.
Primary Frequency Response with Aggregated DERs: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guggilam, Swaroop S.; Dhople, Sairaj V.; Zhao, Changhong
2017-03-03
Power networks have to withstand a variety of disturbances that affect system frequency, and the problem is compounded with the increasing integration of intermittent renewable generation. Following a large-signal generation or load disturbance, system frequency is arrested leveraging primary frequency control provided by governor action in synchronous generators. In this work, we propose a framework for distributed energy resources (DERs) deployed in distribution networks to provide (supplemental) primary frequency response. Particularly, we demonstrate how power-frequency droop slopes for individual DERs can be designed so that the distribution feeder presents a guaranteed frequency-regulation characteristic at the feeder head. Furthermore, the droop slopes are engineered such that injections of individual DERs conform to a well-defined fairness objective that does not penalize them for their location on the distribution feeder. Time-domain simulations for an illustrative network composed of a combined transmission network and distribution network with frequency-responsive DERs are provided to validate the approach.
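The feeder-head aggregation idea can be illustrated with the simplest possible allocation rule: split a target feeder droop slope among DERs in proportion to capacity. The paper's design additionally enforces a location-aware fairness objective through the network model, which this sketch omits:

```python
def feeder_droop_slopes(capacities_kw, feeder_slope_kw_per_hz):
    """Split a target feeder-head power-frequency droop slope among DERs
    in proportion to their capacities (simplified fairness rule)."""
    total = sum(capacities_kw)
    return [feeder_slope_kw_per_hz * c / total for c in capacities_kw]

def feeder_response(slopes, delta_f_hz):
    """Aggregate injection change at the feeder head for a given frequency
    deviation: each DER contributes slope * delta_f."""
    return sum(m * delta_f_hz for m in slopes)
```

By construction the individual slopes sum to the feeder target, so the feeder presents the guaranteed regulation characteristic regardless of how many DERs share it.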
NASA Astrophysics Data System (ADS)
Weber, Juliane; Zachow, Christopher; Witthaut, Dirk
2018-03-01
Wind power generation exhibits a strong temporal variability, which is crucial for system integration in highly renewable power systems. Different methods exist to simulate wind power generation but they often cannot represent the crucial temporal fluctuations properly. We apply the concept of additive binary Markov chains to model a wind generation time series consisting of two states: periods of high and low wind generation. The only input parameter for this model is the empirical autocorrelation function. The two-state model is readily extended to stochastically reproduce the actual generation per period. To evaluate the additive binary Markov chain method, we introduce a coarse model of the electric power system to derive backup and storage needs. We find that the temporal correlations of wind power generation, the backup need as a function of the storage capacity, and the resting time distribution of high and low wind events for different shares of wind generation can be reconstructed.
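A minimal additive binary Markov chain for a two-state (high/low wind) series can be sketched as follows. In the paper the memory kernel is derived from the empirical autocorrelation function; here it is a hand-picked assumption:

```python
import random

def additive_binary_chain(n, p_mean, memory, seed=2):
    """Additive binary Markov chain (sketch): the conditional probability of
    a high-wind step is the mean rate plus a weighted sum of past deviations,
        P(x_k = 1 | past) = p_mean + sum_r memory[r] * (x_{k-1-r} - p_mean),
    so the additive form preserves the mean rate p_mean."""
    rng = random.Random(seed)
    x = []
    for k in range(n):
        p = p_mean + sum(w * (x[k - r - 1] - p_mean)
                         for r, w in enumerate(memory) if k - r - 1 >= 0)
        p = min(max(p, 0.0), 1.0)   # clip to a valid probability
        x.append(1 if rng.random() < p else 0)
    return x
```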
Using the Quantile Mapping to improve a weather generator
NASA Astrophysics Data System (ADS)
Chen, Y.; Themessl, M.; Gobiet, A.
2012-04-01
We developed a weather generator (WG) using statistical and stochastic methods, among them quantile mapping (QM), Monte Carlo sampling, auto-regression and empirical orthogonal functions (EOFs). One of the important steps in the WG is the use of QM, through which all variables, whatever their original distributions, are transformed into normally distributed variables. The WG can therefore work on normally distributed variables, which greatly facilitates the treatment of random numbers in the WG. Monte Carlo sampling and auto-regression are used to generate the realization; EOFs are employed to preserve spatial relationships and the relationships between different meteorological variables. We have established a complete model named WGQM (weather generator and quantile mapping), which can be applied flexibly to generate daily or hourly time series. For example, with 30-year daily (hourly) data and 100-year monthly (daily) data as input, 100-year daily (hourly) data can be produced reasonably well. Evaluation experiments with WGQM have been carried out in the area of Austria and the evaluation results will be presented.
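The QM step — mapping each variable to a standard normal through its empirical CDF — can be sketched with the standard library alone. The plotting position and the tie handling below are simplifying assumptions, not the WGQM implementation:

```python
from statistics import NormalDist

def to_normal_scores(values):
    """Map a sample of any marginal distribution to approximately
    standard-normal scores via its empirical CDF (the QM step).
    Ties are ignored for brevity; values are assumed distinct."""
    n = len(values)
    ranks = {v: i for i, v in enumerate(sorted(values))}
    nd = NormalDist()
    # Hazen plotting position (i + 0.5)/n keeps probabilities strictly in (0, 1).
    return [nd.inv_cdf((ranks[v] + 0.5) / n) for v in values]
```

Because the map is monotone, the normal-space series can be modeled with auto-regression and then back-transformed through the inverse empirical CDF without disturbing rank order.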
Şafak, K.; Xin, M.; Callahan, P. T.; Peng, M. Y.; Kärtner, F. X.
2015-01-01
We report recent progress made in a complete fiber-optic, high-precision, long-term stable timing distribution system for synchronization of next generation X-ray free-electron lasers. Timing jitter characterization of the master laser shows less than 170-as RMS integrated jitter for frequencies above 10 kHz, limited by the detection noise floor. Timing stabilization of a 3.5-km polarization-maintaining fiber link is successfully achieved with an RMS drift of 3.3 fs over 200 h of operation using all fiber-coupled elements. This all fiber-optic implementation will greatly reduce the complexity of optical alignment in timing distribution systems and improve the overall mechanical and timing stability of the system. PMID:26798814
Real-Time Multiprocessor Programming Language (RTMPL) user's manual
NASA Technical Reports Server (NTRS)
Arpasi, D. J.
1985-01-01
A real-time multiprocessor programming language (RTMPL) has been developed to provide for high-order programming of real-time simulations on systems of distributed computers. RTMPL is a structured, engineering-oriented language. The RTMPL utility supports a variety of multiprocessor configurations and types by generating assembly language programs according to user-specified targeting information. Many programming functions are assumed by the utility (e.g., data transfer and scaling) to reduce the programming chore. This manual describes RTMPL from a user's viewpoint. Source generation, applications, utility operation, and utility output are detailed. An example simulation is generated to illustrate many RTMPL features.
A distributed data base management system. [for Deep Space Network
NASA Technical Reports Server (NTRS)
Bryan, A. I.
1975-01-01
Major system design features of a distributed data management system for the NASA Deep Space Network (DSN) designed for continuous two-way deep space communications are described. The reasons for which the distributed data base utilizing third-generation minicomputers is selected as the optimum approach for the DSN are threefold: (1) with a distributed master data base, valid data is available in real-time to support DSN management activities at each location; (2) data base integrity is the responsibility of local management; and (3) the data acquisition/distribution and processing power of a third-generation computer enables the computer to function successfully as a data handler or as an on-line process controller. The concept of the distributed data base is discussed along with the software, data base integrity, and hardware used. The data analysis/update constraint is examined.
Time-resolved measurements of supersonic fuel sprays using synchrotron X-rays.
Powell, C F; Yue, Y; Poola, R; Wang, J
2000-11-01
A time-resolved radiographic technique has been developed for probing the fuel distribution close to the nozzle of a high-pressure single-hole diesel injector. The measurement was made using X-ray absorption of monochromatic synchrotron-generated radiation, allowing quantitative determination of the fuel distribution in this optically impenetrable region with a time resolution of better than 1 micros. These quantitative measurements constitute the most detailed near-nozzle study of a fuel spray to date.
Brand, Samuel P C; Rock, Kat S; Keeling, Matt J
2016-04-01
Epidemiological modelling has a vital role to play in policy planning and prediction for the control of vectors, and hence the subsequent control of vector-borne diseases. To decide between competing policies requires models that can generate accurate predictions, which in turn requires accurate knowledge of vector natural histories. Here we highlight the importance of the distribution of times between life-history events, using short-lived midge species as an example. In particular we focus on the distribution of the extrinsic incubation period (EIP), which determines the time between infection and becoming infectious, and the distribution of the length of the gonotrophic cycle, which determines the time between successful bites. We show how different assumptions for these periods can radically change the basic reproductive ratio (R0) of an infection and additionally the impact of vector control on the infection. These findings highlight the need for detailed entomological data, based on laboratory experiments and field data, to correctly construct the next generation of policy-informing models.
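The sensitivity of R0 to the EIP distribution can be illustrated with a back-of-envelope calculation: the probability that a vector survives the EIP, a multiplicative factor in R0, differs markedly between a fixed and an exponentially distributed EIP of the same mean. The mortality rate and mean EIP below are hypothetical values, not taken from the paper.

```python
import math

mu = 0.1          # daily vector mortality rate (hypothetical)
mean_eip = 10.0   # mean extrinsic incubation period in days (hypothetical)

# Fixed EIP of exactly `mean_eip` days: survival of a deterministic delay.
p_fixed = math.exp(-mu * mean_eip)

# Exponentially distributed EIP with the same mean: integrating
# exp(-mu*t) over the exponential density gives 1 / (1 + mu * mean_eip).
p_exp = 1.0 / (1.0 + mu * mean_eip)

print(p_fixed, p_exp)  # ~0.368 vs 0.5: same mean EIP, different R0 factor
```

The exponential assumption inflates the survival factor because a sizeable fraction of vectors complete a short incubation, which is exactly the kind of distributional effect the abstract warns about.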
A fragmentation model of earthquake-like behavior in internet access activity
NASA Astrophysics Data System (ADS)
Paguirigan, Antonino A.; Angco, Marc Jordan G.; Bantang, Johnrob Y.
We present a fragmentation model that generates almost any inverse power-law size distribution, including dual-scaled versions, consistent with the underlying dynamics of systems with earthquake-like behavior. We apply the model to explain the dual-scaled power-law statistics observed in an Internet access dataset that covers more than 32 million requests. The non-Poissonian statistics of the requested data sizes m and the amount of time τ needed for complete processing are consistent with the Gutenberg-Richter law. Inter-event times δt between subsequent requests are also shown to exhibit power-law distributions consistent with the generalized Omori law. The dataset is thus similar to earthquake data, except that two power-law regimes are observed. Using the proposed model, we are able to identify the underlying dynamics responsible for generating the observed dual power-law distributions. The model is general enough to apply to any physical or human dynamics limited by finite resources such as space, energy, time, or opportunity.
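Inverse power-law inter-event times of the kind reported here can be generated by inverse-transform sampling and checked with a maximum-likelihood exponent estimate. This is a generic sketch, not the authors' fragmentation model, and the parameter values are arbitrary.

```python
import numpy as np

def power_law_sample(alpha, xmin, size, rng):
    """Inverse-transform sampling from p(x) ~ x**(-alpha), x >= xmin.
    The CDF is F(x) = 1 - (x/xmin)**(-(alpha-1)), so inverting gives
    x = xmin * (1 - u)**(-1/(alpha-1)) for uniform u."""
    u = rng.random(size)
    return xmin * (1.0 - u) ** (-1.0 / (alpha - 1.0))

rng = np.random.default_rng(1)
dt = power_law_sample(alpha=2.5, xmin=1.0, size=100_000, rng=rng)

# Hill-type maximum-likelihood estimate of the exponent:
# alpha_hat = 1 + n / sum(ln(x_i / xmin))
alpha_hat = 1.0 + len(dt) / np.log(dt / 1.0).sum()
print(alpha_hat)  # close to the true value 2.5
```

In practice, a dual-scaled distribution like the one in the abstract would require fitting the two regimes separately, each with its own xmin and exponent.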
NASA Astrophysics Data System (ADS)
Fatichi, S.; Ivanov, V. Y.; Caporali, E.
2013-04-01
This study extends a stochastic downscaling methodology to the generation of an ensemble of hourly time series of meteorological variables that express possible future climate conditions at a point scale. The stochastic downscaling uses general circulation model (GCM) realizations and an hourly weather generator, the Advanced WEather GENerator (AWE-GEN). Marginal distributions of factors of change are computed for several climate statistics using a Bayesian methodology that can weight GCM realizations based on each model's relative performance with respect to the historical climate and the degree of disagreement in projecting future conditions. A Monte Carlo technique is used to sample the factors of change from their respective marginal distributions. As a comparison with traditional approaches, factors of change are also estimated by averaging GCM realizations. With either approach, the derived factors of change are applied to the climate statistics inferred from historical observations to re-evaluate the parameters of the weather generator. The re-parameterized generator yields hourly time series of meteorological variables that can be considered representative of future climate conditions. In this study, the time series are generated in an ensemble mode to fully reflect the uncertainty of GCM projections, climate stochasticity, and the uncertainties of the downscaling procedure itself. Applications of the methodology in reproducing future climate conditions for the periods 2000-2009, 2046-2065, and 2081-2100, using 1962-1992 as the historical baseline, are discussed for the location of Firenze (Italy). The inferences of the methodology for the period 2000-2009 are tested against observations to assess the reliability of the stochastic downscaling procedure in reproducing statistics of meteorological variables at different time scales.
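The factor-of-change idea can be sketched in a few lines: a change statistic derived from GCM control and future runs is applied to the observed statistic before re-parameterizing the generator. The series below are synthetic stand-ins, and an additive factor is shown (precipitation would typically use a multiplicative one).

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical monthly-mean temperatures (deg C), not real GCM output
gcm_control = rng.normal(14.0, 1.0, 360)   # GCM, historical period
gcm_future  = rng.normal(16.5, 1.2, 360)   # GCM, future period
observed    = rng.normal(15.0, 0.9, 360)   # station record

# Additive factor of change for the mean, applied to the observed
# statistic; the shifted statistic would then re-parameterize the WG.
delta_mean = gcm_future.mean() - gcm_control.mean()
future_mean_stat = observed.mean() + delta_mean
print(future_mean_stat)
```

In the ensemble mode described above, `delta_mean` would instead be drawn repeatedly from a Bayesian-weighted marginal distribution across GCMs, yielding one re-parameterization per ensemble member.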
Towards Integrating Distributed Energy Resources and Storage Devices in Smart Grid.
Xu, Guobin; Yu, Wei; Griffith, David; Golmie, Nada; Moulema, Paul
2017-02-01
The Internet of Things (IoT) provides a generic infrastructure for different applications to integrate information and communication techniques with physical components to achieve automatic data collection, transmission, exchange, and computation. The smart grid, one typical application supported by the IoT and understood as a re-engineering and modernization of the traditional power grid, aims to provide reliable, secure, and efficient energy transmission and distribution to consumers. How to effectively integrate distributed (renewable) energy resources and storage devices to satisfy the energy service requirements of users, while minimizing power generation and transmission costs, remains a highly pressing challenge in the smart grid. To address this challenge and assess the effectiveness of integrating distributed energy resources and storage devices, in this paper we develop a theoretical framework to model and analyze three types of power grid systems: the power grid with only bulk energy generators, the power grid with distributed energy resources, and the power grid with both distributed energy resources and storage devices. Based on the metrics of cumulative power cost and service reliability to users, we formally model and analyze the impact of integrating distributed energy resources and storage devices in the power grid. We also use the concept of network calculus, traditionally used for traffic engineering in computer networks, to derive bounds on both power supply and user demand that achieve a high service reliability to users. Through an extensive performance evaluation, our data show that integrating distributed energy resources in conjunction with energy storage devices can reduce generation costs, smooth the curve of bulk power generation over time, reduce bulk power generation and power distribution losses, and provide sustainable service reliability to users in the power grid.
Level-crossing statistics of the horizontal wind speed in the planetary surface boundary layer
NASA Astrophysics Data System (ADS)
Edwards, Paul J.; Hurst, Robert B.
2001-09-01
The probability density of the times for which the horizontal wind remains above or below a given threshold speed is of interest in the fields of renewable energy generation and pollutant dispersal. However, there appear to be no analytic or conceptual models that account for the observed power-law form of the distribution of these episode lengths over more than three decades, from a few tens of seconds to a day or more. We reanalyze high-resolution wind data and demonstrate the fractal character of the point process generated by the wind speed level crossings. We simulate the fluctuating wind speed by a Markov process which approximates the characteristics of the real (non-Markovian) wind and successfully generates a power-law distribution of episode lengths. However, fundamental questions concerning the physical basis for this behavior, and the connection between the properties of a continuous-time stochastic process and the fractal statistics of the point process generated by its level crossings, remain unanswered.
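The episode lengths analyzed above are simply the gaps between successive level crossings. Below is a minimal sketch of extracting them from a sampled wind-speed series; the data are illustrative, not the authors' high-resolution records.

```python
import numpy as np

def episode_lengths(speed, threshold):
    """Lengths (in samples) of maximal runs during which the speed
    stays entirely above or entirely below the threshold, delimited
    by the level-crossing points."""
    above = speed > threshold
    # sample indices where the series crosses the threshold level
    change = np.flatnonzero(np.diff(above.astype(int)) != 0) + 1
    edges = np.concatenate(([0], change, [len(speed)]))
    return np.diff(edges)

wind = np.array([3.1, 3.4, 5.2, 6.0, 4.1, 2.9, 3.0, 7.3])  # m/s, toy data
print(episode_lengths(wind, threshold=4.0))  # [2 3 2 1]
```

On real data one would multiply these sample counts by the sampling interval and histogram them in logarithmic bins to expose the power-law tail.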
Quantum dense key distribution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Degiovanni, I.P.; Ruo Berchera, I.; Castelletto, S.
2004-03-01
This paper proposes a protocol for quantum dense key distribution. The protocol combines the benefits of quantum dense coding and quantum key distribution and is able to generate shared secret keys four times more efficiently than the Bennett-Brassard 1984 (BB84) protocol. We prove the security of this scheme against individual eavesdropping attacks, and we present preliminary experimental results showing its feasibility.
NASA Astrophysics Data System (ADS)
Keene, W. C.; Long, M. S.; Duplessis, P.; Kieber, D. J.; Maben, J. R.; Frossard, A. A.; Kinsey, J. D.; Beaupre, S. R.; Lu, X.; Chang, R.; Zhu, Y.; Bisgrove, J.
2017-12-01
During a September-October 2016 cruise of the R/V Endeavor in the western North Atlantic Ocean, primary marine aerosol (PMA) was produced in a high-capacity generator during day and night via detrainment of bubbles from biologically productive and oligotrophic seawater. The turbulent mixing of clean air and seawater in a Venturi nozzle produced bubble plumes with tunable size distributions. Physicochemical characteristics of size-resolved PMA and seawater were measured. PMA number production efficiencies per unit air detrained (PEnum) increased with increasing detrainment rate. For given conditions, PEnum values summed over size distributions were roughly ten times greater than those for frits, whereas normalized size distributions were similar. Results show that bubble size distributions significantly modulated number production fluxes but not the relative shapes of the corresponding size distributions. In contrast, mass production efficiencies (PEmass) decreased with increasing air detrainment and were similar to those for frits, consistent with the hypothesis that bubble rafts on the seawater surface modulate emissions of the larger jet droplets that dominate PMA mass production. Production efficiencies of organic matter were about three times greater than those for frits, whereas organic enrichment factors integrated over size distributions were similar.
A Comparison of Forecast Error Generators for Modeling Wind and Load Uncertainty
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Ning; Diao, Ruisheng; Hafen, Ryan P.
2013-12-18
This paper presents four algorithms to generate random forecast error time series: a truncated-normal distribution model, a state-space-based Markov model, a seasonal autoregressive moving average (ARMA) model, and a stochastic-optimization-based model. The error time series are used to create real-time (RT), hour-ahead (HA), and day-ahead (DA) wind and load forecast time series that statistically match historically observed forecasting data sets, for use in variable generation integration studies. A comparison is made using historical DA load forecasts and actual load values to generate new sets of DA forecasts with similar statistical forecast error characteristics. This paper discusses and compares the capabilities of each algorithm to preserve the characteristics of the historical forecast data sets.
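Of the four generators compared, the truncated-normal model is the simplest to sketch. The toy below draws i.i.d. errors by rejection sampling and deliberately ignores the temporal correlation a realistic forecast-error series would need; the parameter values are arbitrary, not from the paper.

```python
import numpy as np

def truncated_normal_errors(n, sigma, bound, rng):
    """Forecast-error series: N(0, sigma^2) draws rejected outside
    +/-bound. A simplified stand-in for the truncated-normal generator
    named above (no autocorrelation structure is modeled here)."""
    out = np.empty(n)
    filled = 0
    while filled < n:
        draws = rng.normal(0.0, sigma, n - filled)
        keep = draws[np.abs(draws) <= bound]   # reject out-of-range draws
        out[filled:filled + keep.size] = keep
        filled += keep.size
    return out

rng = np.random.default_rng(3)
# errors as a fraction of forecast value: 5% std dev, capped at 10%
err = truncated_normal_errors(10_000, sigma=0.05, bound=0.10, rng=rng)
print(err.min(), err.max())  # both within [-0.10, 0.10]
```

A synthetic DA forecast series is then just the actuals plus these errors; the Markov and ARMA generators in the paper exist precisely to add the hour-to-hour error persistence this sketch omits.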
NASA Astrophysics Data System (ADS)
Diamanti, Eleni; Takesue, Hiroki; Langrock, Carsten; Fejer, M. M.; Yamamoto, Yoshihisa
2006-12-01
We present a quantum key distribution experiment in which keys that were secure against all individual eavesdropping attacks allowed by quantum mechanics were distributed over 100 km of optical fiber. We implemented the differential phase shift quantum key distribution protocol and used low timing jitter 1.55 µm single-photon detectors based on frequency up-conversion in periodically poled lithium niobate waveguides and silicon avalanche photodiodes. Based on the security analysis of the protocol against general individual attacks, we generated secure keys at a practical rate of 166 bit/s over 100 km of fiber. The use of the low jitter detectors also increased the sifted key generation rate to 2 Mbit/s over 10 km of fiber.
Time-independent models of asset returns revisited
NASA Astrophysics Data System (ADS)
Gillemot, L.; Töyli, J.; Kertesz, J.; Kaski, K.
2000-07-01
In this study we investigate several well-known time-independent models of asset returns: the simple normal distribution, Student t-distribution, Lévy, truncated Lévy, general stable distribution, mixed diffusion jump, and compound normal distribution. For this we use Standard and Poor's 500 index data from the New York Stock Exchange, Helsinki Stock Exchange index data describing a small volatile market, and artificial data. The results indicate that all models except the simple normal distribution are at least reasonable descriptions of the data. Furthermore, using differences instead of logarithmic returns tends to make the data look more Lévy-distributed than it actually is. This phenomenon is especially evident in the artificial data, which was generated by an inflated random walk process.
NASA Astrophysics Data System (ADS)
Li, Cheng
Wind farms, photovoltaic arrays, fuel cells, and micro-turbines are all considered to be Distributed Generation (DG). DG is defined as the generation of power that is dispersed throughout a utility's service territory and either connected to the utility's distribution system or isolated in a small grid. This thesis addresses modeling and economic issues pertaining to optimal reactive power planning for a distribution system with wind power generation (WPG) units. Wind farms are inclined to cause reverse power flows and voltage variations due to the random-like outputs of wind turbines. To deal with this kind of problem caused by widespread use of wind power generation, this thesis investigates voltage and reactive power controls in such a distribution system. Static capacitors (SCs) and transformer taps are introduced into the system and treated as controllers. To obtain optimum voltage and realize reactive power control, the research proposes a proper coordination among controllers such as the on-load tap changer (OLTC) and feeder-switched capacitors. Moreover, to capture its uncertainty, wind power generation is modeled with a Markov model, which makes it possible to calculate the probabilities of all scenarios. Discretized wind-speed values are used for transitions between successive time states and for within-state wind speeds. The thesis describes the method for generating the wind speed time series from the transition probability matrix. A genetic algorithm is then used to determine the optimal locations and sizes of the SCs and the transformer tap settings, so as to minimize cost or power loss and, more importantly, improve voltage profiles. The applicability of the proposed method is verified through simulation on a 9-bus system and a 30-bus system. Finally, the simulation results indicate that as long as the available capacitors can sufficiently compensate the reactive power demand, DG operation no longer imposes a significant effect on voltage fluctuations in the distribution system, and that the proposed approach is efficient, simple, and straightforward.
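Generating a wind-speed series from a transition probability matrix, as the Markov model above does, can be sketched as follows. The three-state matrix and representative speeds are hypothetical values, not taken from the thesis.

```python
import numpy as np

def markov_wind_series(P, states, steps, rng, start=0):
    """Generate a wind-speed time series from a state transition
    probability matrix P (each row sums to 1). `states` holds one
    representative speed per Markov state."""
    idx = start
    series = []
    for _ in range(steps):
        series.append(states[idx])
        idx = rng.choice(len(states), p=P[idx])  # draw the next state
    return np.array(series)

# Hypothetical 3-state (low/medium/high wind) transition matrix
P = np.array([[0.70, 0.25, 0.05],
              [0.20, 0.60, 0.20],
              [0.05, 0.35, 0.60]])
states = np.array([3.0, 8.0, 14.0])   # representative speeds, m/s
rng = np.random.default_rng(4)
speeds = markov_wind_series(P, states, steps=1000, rng=rng)
print(speeds[:10])
```

Each generated scenario's probability is the product of the transition probabilities along its path, which is what allows the thesis to weight scenarios in the reactive power optimization.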
Analysis of overdispersed count data by mixtures of Poisson variables and Poisson processes.
Hougaard, P; Lee, M L; Whitmore, G A
1997-12-01
Count data often show overdispersion compared to the Poisson distribution. Overdispersion is typically modeled by a random effect for the mean, based on the gamma distribution, leading to the negative binomial distribution for the count. This paper considers a larger family of mixture distributions, including the inverse Gaussian mixture distribution. It is demonstrated that it gives a significantly better fit for a data set on the frequency of epileptic seizures. The same approach can be used to generate counting processes from Poisson processes, where the rate or the time is random. A random rate corresponds to variation between patients, whereas a random time corresponds to variation within patients.
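The gamma-mixed Poisson (negative binomial) construction described above is easy to demonstrate numerically: giving each count its own gamma-distributed random rate inflates the variance-to-mean ratio above the Poisson value of 1. The parameters here are illustrative, not from the seizure data set.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200_000
mean = 4.0

# Plain Poisson: variance equals the mean (ratio ~ 1).
y_pois = rng.poisson(mean, n)

# Gamma-mixed Poisson (negative binomial): a gamma random rate per
# subject models between-patient variation and adds overdispersion.
shape = 2.0                                  # gamma shape parameter
rates = rng.gamma(shape, mean / shape, n)    # E[rate] = mean
y_nb = rng.poisson(rates)

ratio_pois = y_pois.var() / y_pois.mean()
ratio_nb = y_nb.var() / y_nb.mean()
print(ratio_pois, ratio_nb)  # ~1 vs ~1 + mean/shape = 3
```

Swapping the gamma for an inverse Gaussian mixing distribution, as the paper advocates, changes the tail behavior of the counts while keeping this same random-rate construction.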
Generation of flat wideband chaos with suppressed time delay signature by using optical time lens.
Jiang, Ning; Wang, Chao; Xue, Chenpeng; Li, Guilan; Lin, Shuqing; Qiu, Kun
2017-06-26
We propose a flat wideband chaos generation scheme with an excellent time delay signature suppression effect, in which the chaotic output of a conventional external-cavity semiconductor laser is injected into an optical time lens module composed of a phase modulator and two dispersive units. The numerical results demonstrate that, by properly setting the parameters of the phase modulator's driving signal and the accumulated dispersion of the dispersive units, the relaxation oscillation in the chaos can be eliminated, wideband chaos with an efficient bandwidth up to several tens of GHz can be achieved, and the RF spectrum of the generated chaotic signal is nearly as flat as a uniform distribution. Moreover, the periodicity of the chaos induced by the external cavity modes is simultaneously destroyed by the optical time lens module, so the time delay signature can be completely suppressed.
The perturbed Sparre Andersen model with a threshold dividend strategy
NASA Astrophysics Data System (ADS)
Gao, Heli; Yin, Chuancun
2008-10-01
In this paper, we consider a Sparre Andersen model perturbed by diffusion with generalized Erlang(n)-distributed inter-claim times and a threshold dividend strategy. Integro-differential equations with certain boundary conditions for the moment-generating function and the mth moment of the present value of all dividends until ruin are derived. We also derive integro-differential equations with boundary conditions for the Gerber-Shiu functions. The special case where the inter-claim times are Erlang(2) distributed and the claim size distribution is exponential is considered in some detail.
Numerically exact full counting statistics of the nonequilibrium Anderson impurity model
NASA Astrophysics Data System (ADS)
Ridley, Michael; Singh, Viveka N.; Gull, Emanuel; Cohen, Guy
2018-03-01
The time-dependent full counting statistics of charge transport through an interacting quantum junction is evaluated from its generating function, controllably computed with the inchworm Monte Carlo method. Exact noninteracting results are reproduced; then, we continue to explore the effect of electron-electron interactions on the time-dependent charge cumulants, first-passage time distributions, and n -electron transfer distributions. We observe a crossover in the noise from Coulomb blockade to Kondo-dominated physics as the temperature is decreased. In addition, we uncover long-tailed spin distributions in the Kondo regime and analyze queuing behavior caused by correlations between single-electron transfer events.
NASA Astrophysics Data System (ADS)
Jiang, Shi-Mei; Cai, Shi-Min; Zhou, Tao; Zhou, Pei-Ling
2008-06-01
The two-phase behaviour in financial markets is in fact a bifurcation phenomenon: the conditional probability changes from a unimodal to a bimodal distribution. We investigate the bifurcation phenomenon in the Hang Seng index. We observe that the bifurcation phenomenon in financial indices is not universal, but emerges only under certain conditions. For the Hang Seng index and randomly generated time series, the phenomenon emerges only when the power-law exponent of the absolute-increment distribution is between 1 and 2 with an appropriate period. Simulations on a randomly generated time series suggest that the bifurcation phenomenon is itself governed by the statistics of the absolute increments, and thus may not reflect essential financial behaviours. However, even under the same distribution of absolute increments, the range where the bifurcation phenomenon occurs differs greatly between the real market and artificial data, which may reflect certain market information.
Spatial distribution on high-order-harmonic generation of an H2+ molecule in intense laser fields
NASA Astrophysics Data System (ADS)
Zhang, Jun; Ge, Xin-Lei; Wang, Tian; Xu, Tong-Tong; Guo, Jing; Liu, Xue-Shen
2015-07-01
High-order-harmonic generation (HHG) from the H2+ molecule in a 3-fs, 800-nm few-cycle Gaussian laser pulse combined with a static field is investigated by solving the one-dimensional electronic and one-dimensional nuclear time-dependent Schrödinger equation within the non-Born-Oppenheimer approximation. The spatial distribution of the HHG is demonstrated, and the results reveal the recombination of the electron with each of the two nuclei. The spatial distribution of the HHG spectra shows that there is little probability of recombination of the electron with the nuclei around the origin z = 0 a.u. and the equilibrium internuclear positions z = ±1.3 a.u. This characteristic is insensitive to the laser parameters and is attributed solely to the molecular structure. Furthermore, we investigate the time-dependent electron-nuclear wave packet and the ionization probability to further explain the underlying physical mechanism.
Ishihara, Koji; Morimoto, Jun
2018-03-01
Humans use multiple muscles to generate joint movements such as an elbow motion. With multiple lightweight and compliant actuators, joint movements can also be generated efficiently; similarly, robots can use multiple actuators to efficiently generate a one-degree-of-freedom movement. For such a movement, the desired joint torque must be properly distributed to each actuator. One approach to this torque distribution problem is optimal control. However, solving the optimal control problem at each control time step has not been considered practical due to its large computational burden. In this paper, we propose a computationally efficient method to derive an optimal control strategy for a hybrid actuation system composed of multiple actuators, where each actuator has different dynamical properties. We investigated a singularly perturbed system of the hybrid actuator model that subdivides the original large-scale control problem into smaller subproblems, so that the optimal control outputs for each actuator can be derived at each control time step, and applied the proposed method to our pneumatic-electric hybrid actuator system. Our method derives a torque distribution strategy for the hybrid actuator while coping with the difficulty of solving optimal control problems in real time. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.
A Monte Carlo study of fluorescence generation probability in a two-layered tissue model
NASA Astrophysics Data System (ADS)
Milej, Daniel; Gerega, Anna; Wabnitz, Heidrun; Liebert, Adam
2014-03-01
It was recently reported that the time-resolved measurement of diffuse reflectance and/or fluorescence during injection of an optical contrast agent may constitute a basis for a technique to assess cerebral perfusion. In this paper, we present results of Monte Carlo simulations of the propagation of excitation photons and tracking of fluorescence photons in a two-layered tissue model mimicking intra- and extracerebral tissue compartments. Spatial 3D distributions of the probability that the photons were converted from excitation to emission wavelength in a defined voxel of the medium (generation probability) during their travel between source and detector were obtained for different optical properties in intra- and extracerebral tissue compartments. It was noted that the spatial distribution of the generation probability depends on the distribution of the fluorophore in the medium and is influenced by the absorption of the medium and of the fluorophore at excitation and emission wavelengths. Simulations were also carried out for realistic time courses of the dye concentration in both layers. The results of the study show that the knowledge of the absorption properties of the medium at excitation and emission wavelengths is essential for the interpretation of the time-resolved fluorescence signals measured on the surface of the head.
A distributed pipeline for DIDSON data processing
Li, Liling; Danner, Tyler; Eickholt, Jesse; McCann, Erin L.; Pangle, Kevin; Johnson, Nicholas
2018-01-01
Technological advances in the field of ecology allow data on ecological systems to be collected at high resolution, both temporally and spatially. Devices such as Dual-frequency Identification Sonar (DIDSON) can be deployed in aquatic environments for extended periods and easily generate several terabytes of underwater surveillance data which may need to be processed multiple times. Due to the large amount of data generated and need for flexibility in processing, a distributed pipeline was constructed for DIDSON data making use of the Hadoop ecosystem. The pipeline is capable of ingesting raw DIDSON data, transforming the acoustic data to images, filtering the images, detecting and extracting motion, and generating feature data for machine learning and classification. All of the tasks in the pipeline can be run in parallel and the framework allows for custom processing. Applications of the pipeline include monitoring migration times, determining the presence of a particular species, estimating population size and other fishery management tasks.
Energy management and control of active distribution systems
NASA Astrophysics Data System (ADS)
Shariatzadeh, Farshid
Advancements in the communication, control, computation and information technologies have driven the transition to the next generation active power distribution systems. Novel control techniques and management strategies are required to achieve the efficient, economic and reliable grid. The focus of this work is energy management and control of active distribution systems (ADS) with integrated renewable energy sources (RESs) and demand response (DR). Here, ADS mean automated distribution system with remotely operated controllers and distributed energy resources (DERs). DER as active part of the next generation future distribution system includes: distributed generations (DGs), RESs, energy storage system (ESS), plug-in hybrid electric vehicles (PHEV) and DR. Integration of DR and RESs into ADS is critical to realize the vision of sustainability. The objective of this dissertation is the development of management architecture to control and operate ADS in the presence of DR and RES. One of the most challenging issues for operating ADS is the inherent uncertainty of DR and RES as well as conflicting objective of DER and electric utilities. ADS can consist of different layers such as system layer and building layer and coordination between these layers is essential. In order to address these challenges, multi-layer energy management and control architecture is proposed with robust algorithms in this work. First layer of proposed multi-layer architecture have been implemented at the system layer. Developed AC optimal power flow (AC-OPF) generates fair price for all DR and non-DR loads which is used as a control signal for second layer. Second layer controls DR load at buildings using a developed look-ahead robust controller. Load aggregator collects information from all buildings and send aggregated load to the system optimizer. Due to the different time scale at these two management layers, time coordination scheme is developed. 
Robust and deterministic controllers are developed to maximize the local use of energy from rooftop photovoltaic (PV) generation and to minimize heating, ventilation, and air conditioning (HVAC) consumption while maintaining the indoor temperature within the comfort zone. The performance of the developed multi-layer architecture is analyzed using test case studies, and the results show the robustness of the developed controller in the presence of uncertainty.
Terahertz radiation from accelerating charge carriers in graphene under ultrafast photoexcitation
NASA Astrophysics Data System (ADS)
Rustagi, Avinash; Stanton, C. J.
2016-11-01
We study the generation of terahertz (THz) radiation from the acceleration of ultrafast photoexcited charge carriers in graphene in the presence of a dc electric field. Our model is based on calculating the transient current density from the time-dependent distribution function which is determined using the Boltzmann transport equation (BTE) within a relaxation time approximation. We include the time-dependent generation of carriers by the pump pulse by solving for the carrier generation rate using the optical Bloch equations in the rotating wave approximation (RWA). The linearly polarized pump pulse generates an anisotropic distribution of photoexcited carriers in the kx-ky plane. The collision integral in the Boltzmann equation includes a term that leads to the thermalization of carriers via carrier-carrier scattering to an effective temperature above the lattice temperature, as well as a cooling term, which leads to energy relaxation via inelastic carrier-phonon scattering. The radiated signal is proportional to the time derivative of the transient current density. In spite of the fact that the magnitude of the velocity is the same for all the carriers in graphene, there is still emitted radiation from the photoexcited charge carriers with frequency components in the THz range due to a change in the direction of velocity of the photoexcited carriers in the external electric field as well as cooling of the photoexcited carriers on a subpicosecond time scale.
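The transport picture in this abstract, relaxation-time dynamics driving a transient current whose time derivative gives the radiated field, can be sketched numerically. This is a deliberately simplified one-dimensional toy model, not the paper's full Boltzmann calculation in the kx-ky plane; the pulse width, relaxation time, and drift velocity are assumed values chosen only for illustration.

```python
import numpy as np

# Toy 1D sketch: drift velocity v(t) relaxes toward a field-driven steady
# state in the relaxation-time approximation, carriers are injected by a
# Gaussian pump pulse, and the radiated THz field is proportional to dJ/dt.
dt = 1e-15                       # 1 fs time step
t = np.arange(0.0, 2e-12, dt)    # 2 ps window
tau = 100e-15                    # assumed momentum relaxation time
pump = np.exp(-((t - 0.3e-12) / 50e-15) ** 2)   # carrier generation rate
n = np.cumsum(pump) * dt         # photoexcited carrier density (arb. units)
v_drift = 1e5                    # assumed field-driven steady-state velocity

v = np.zeros_like(t)
for i in range(1, len(t)):
    # relaxation-time approximation: dv/dt = (v_drift - v) / tau
    v[i] = v[i - 1] + dt * (v_drift - v[i - 1]) / tau

J = n * v                        # transient current density (arb. units)
E_thz = np.gradient(J, dt)       # radiated field ~ dJ/dt
```

The pulse-limited rise of n and the sub-picosecond relaxation of v together give E_thz its THz-scale frequency content.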
Applying deep bidirectional LSTM and mixture density network for basketball trajectory prediction
NASA Astrophysics Data System (ADS)
Zhao, Yu; Yang, Rennong; Chevalier, Guillaume; Shah, Rajiv C.; Romijnders, Rob
2018-04-01
Data analytics helps basketball teams to create tactics. However, manual data collection and analytics are costly and ineffective. Therefore, we applied a deep bidirectional long short-term memory (BLSTM) and mixture density network (MDN) approach. This model is not only capable of predicting a basketball trajectory based on real data, but it also can generate new trajectory samples. It is an excellent application to help coaches and players decide when and where to shoot. Its structure is particularly suitable for dealing with time series problems. BLSTM receives forward and backward information at the same time, while stacking multiple BLSTMs further increases the learning ability of the model. Combined with BLSTMs, MDN is used to generate a multi-modal distribution of outputs. Thus, the proposed model can, in principle, represent arbitrary conditional probability distributions of output variables. We tested our model with two experiments on three-pointer datasets from NBA SportVu data. In the hit-or-miss classification experiment, the proposed model outperformed other models in terms of the convergence speed and accuracy. In the trajectory generation experiment, eight model-generated trajectories at a given time closely matched real trajectories.
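The mixture density output described above can be illustrated with a minimal numpy sketch. The function names and parameter values here are hypothetical; in the real model, pi, mu, and sigma would be produced by the BLSTM layers rather than fixed by hand. The point is only how a Gaussian mixture scores and samples a multi-modal target.

```python
import numpy as np

# Minimal sketch of a mixture-density output head (illustrative, not the
# paper's architecture): score a target under a Gaussian mixture and
# sample new values from it.
def mdn_log_likelihood(pi, mu, sigma, y):
    # pi: (K,) mixture weights, mu: (K,) means, sigma: (K,) scales
    comp = np.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return np.log(np.sum(pi * comp))

def mdn_sample(pi, mu, sigma, rng):
    k = rng.choice(len(pi), p=pi)        # pick a mixture component
    return rng.normal(mu[k], sigma[k])   # sample from that component

rng = np.random.default_rng(0)
pi = np.array([0.7, 0.3])
mu = np.array([0.0, 3.0])
sigma = np.array([0.5, 0.5])
ll = mdn_log_likelihood(pi, mu, sigma, 0.1)   # score an observed value
s = mdn_sample(pi, mu, sigma, rng)            # generate a new sample
```

Because the output is a full mixture rather than a single point estimate, the model can represent the two distinct outcomes (hit vs. miss trajectories) that a unimodal regressor would average away.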
Data center thermal management
Hamann, Hendrik F.; Li, Hongfei
2016-02-09
Historical high-spatial-resolution temperature data and dynamic temperature sensor measurement data may be used to predict temperature. A first formulation may be derived based on the historical high-spatial-resolution temperature data for determining a temperature at any point in 3-dimensional space. The dynamic temperature sensor measurement data may be calibrated based on the historical high-spatial-resolution temperature data at a corresponding historical time. Sensor temperature data at a plurality of sensor locations may be predicted for a future time based on the calibrated dynamic temperature sensor measurement data. A three-dimensional temperature spatial distribution associated with the future time may be generated based on the forecasted sensor temperature data and the first formulation. The three-dimensional temperature spatial distribution associated with the future time may be projected to a two-dimensional temperature distribution, and temperature in the future time for a selected space location may be forecasted dynamically based on said two-dimensional temperature distribution.
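The calibration step described here, aligning dynamic sensor readings with historical high-resolution data, can be sketched as a simple least-squares correction. All numbers are invented for illustration, and the patent's actual formulation is more elaborate than a linear fit.

```python
import numpy as np

# Sketch: fit a linear correction mapping raw sensor readings to the
# historical high-resolution values at the same locations/times, then
# apply it to a new reading before forecasting.
historical = np.array([21.0, 23.5, 26.0, 28.2])   # high-res reference, deg C
raw = np.array([20.1, 22.4, 25.2, 27.1])          # sensor readings, same points
a, b = np.polyfit(raw, historical, 1)             # least-squares calibration
calibrated = a * np.array([24.0]) + b             # calibrate a new reading
```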
DOT National Transportation Integrated Search
2014-05-01
Advanced Traveler Information Systems (ATIS) have been proposed as a mechanism to generate and distribute real-time travel information to drivers for the purpose of improving the travel experience, represented by experienced travel time, and enhancing ...
Spatial distribution of Cherenkov light from cascade showers in water
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khomyakov, V. A., E-mail: VAKhomyakov@mephi.ru; Bogdanov, A. G.; Kindin, V. V.
2016-12-15
The spatial distribution of the Cherenkov light generated by cascade showers is analyzed using the NEVOD Cherenkov water detector. The dependence of the Cherenkov light intensity on the depth of shower development at various distances from the shower axis is investigated for the first time. The experimental data are compared with the Cherenkov light distributions predicted by various models for the scattering of cascade particles.
DGIC Interconnection Insights | Distributed Generation Interconnection
... time and resources from utilities, customers, and local permitting authorities. ... interconnection processes can benefit all parties by reducing the financial and time commitments involved. ... susceptible to time-consuming setbacks, for example, if an application is submitted with incomplete information
A Distributed Middleware Architecture for Attack-Resilient Communications in Smart Grids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hodge, Brian S; Wu, Yifu; Wei, Jin
Distributed Energy Resources (DERs) are being increasingly accepted as an excellent complement to traditional energy sources in smart grids. As most of these generators are geographically dispersed, dedicated communications investments for every generator are capital-cost prohibitive. Real-time distributed communications middleware, which supervises, organizes, and schedules tremendous amounts of data traffic in smart grids with high penetrations of DERs, allows for the use of existing network infrastructure. In this paper, we propose a distributed attack-resilient middleware architecture that detects and mitigates congestion attacks effectively by exploiting Quality of Experience (QoE) measures to complement the conventional Quality of Service (QoS) information. The simulation results illustrate the efficiency of our proposed communications middleware architecture.
A Distributed Middleware Architecture for Attack-Resilient Communications in Smart Grids: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Yifu; Wei, Jin; Hodge, Bri-Mathias
Distributed energy resources (DERs) are being increasingly accepted as an excellent complement to traditional energy sources in smart grids. Because most of these generators are geographically dispersed, dedicated communications investments for every generator are capital-cost prohibitive. Real-time distributed communications middleware - which supervises, organizes, and schedules tremendous amounts of data traffic in smart grids with high penetrations of DERs - allows for the use of existing network infrastructure. In this paper, we propose a distributed attack-resilient middleware architecture that effectively detects and mitigates congestion attacks by exploiting quality of experience measures to complement the conventional quality of service information. The simulation results illustrate the efficiency of our proposed communications middleware architecture.
Intelligent and robust optimization frameworks for smart grids
NASA Astrophysics Data System (ADS)
Dhansri, Naren Reddy
A smart grid implies a cyberspace, real-time, distributed power control system to optimally deliver electricity based on varying consumer characteristics. Although smart grids solve many contemporary problems, they give rise to new control and optimization problems with the growing role of renewable energy sources such as wind or solar energy. Given the highly dynamic nature of distributed power generation and the varying consumer demand and cost requirements, the total power output of the grid should be controlled such that the load demand is met while giving a higher priority to renewable energy sources. Hence, the power generated from renewable energy sources should be maximized while the generation from non-renewable energy sources is minimized. This research develops a demand-based automatic generation control and optimization framework for real-time smart grid operations by integrating conventional and renewable energy sources under varying consumer demand and cost requirements. Focusing on the renewable energy sources, the intelligent and robust control frameworks optimize power generation by tracking the consumer demand in a closed-loop control framework, yielding superior economic and ecological benefits; they circumvent nonlinear model complexities and handle uncertainties for superior real-time operation. The proposed intelligent system framework optimizes smart grid power generation for maximum economic and ecological benefit under an uncertain renewable wind energy source. The numerical results demonstrate that the proposed framework is a viable approach to integrating various energy sources for real-time smart grid implementations. The robust optimization results demonstrate the effectiveness of the robust controllers under bounded power plant model uncertainties and exogenous wind input excitation while maximizing economic and ecological performance objectives.
Therefore, the proposed framework offers a new worst-case deterministic optimization algorithm for smart grid automatic generation control.
Power-law Exponent in Multiplicative Langevin Equation with Temporally Correlated Noise
NASA Astrophysics Data System (ADS)
Morita, Satoru
2018-05-01
Power-law distributions are ubiquitous in nature. Random multiplicative processes are a basic model for the generation of power-law distributions. For discrete-time systems, the power-law exponent is known to decrease as the autocorrelation time of the multiplier increases. However, for continuous-time systems, it is not yet clear how the temporal correlation affects the power-law behavior. Herein, we analytically investigated a multiplicative Langevin equation with colored noise. We show that the power-law exponent depends on the details of the multiplicative noise, in contrast to the case of discrete-time systems.
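The discrete-time side of this story can be sketched with a standard Kesten-type random multiplicative process. This is an illustrative toy using i.i.d. rather than temporally correlated multipliers, and all parameters are assumptions: when E[log a] < 0 but the multiplier can exceed 1, the stationary distribution develops a power-law tail whose exponent can be estimated, e.g., with a Hill estimator.

```python
import numpy as np

# x_{t+1} = a_t * x_t + 1 with i.i.d. lognormal multipliers a_t.
# For lognormal(mu=-0.1, sigma=0.5) the tail exponent alpha solves
# E[a^alpha] = 1, i.e. alpha = 2 * 0.1 / 0.5**2 = 0.8.
rng = np.random.default_rng(42)
n_steps, n_burn = 200_000, 1_000
a = np.exp(rng.normal(-0.1, 0.5, n_steps))   # E[log a] = -0.1 < 0
x = 1.0
samples = np.empty(n_steps - n_burn)
for t in range(n_steps):
    x = a[t] * x + 1.0
    if t >= n_burn:
        samples[t - n_burn] = x

# Hill estimator of the tail exponent from the top 1% of samples
tail = np.sort(samples)[-len(samples) // 100:]
alpha_hat = 1.0 / np.mean(np.log(tail / tail[0]))
```

In the continuous-time, correlated-noise setting analyzed in the paper, the exponent additionally depends on the details of the noise, which this i.i.d. sketch cannot capture.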
Time series with tailored nonlinearities
NASA Astrophysics Data System (ADS)
Räth, C.; Laut, I.
2015-10-01
It is demonstrated how to generate time series with tailored nonlinearities by inducing well-defined constraints on the Fourier phases. Correlations between the phase information of adjacent phases and (static and dynamic) measures of nonlinearities are established and their origin is explained. By applying a set of simple constraints on the phases of an originally linear and uncorrelated Gaussian time series, the observed scaling behavior of the intensity distribution of empirical time series can be reproduced. The power law character of the intensity distributions being typical for, e.g., turbulence and financial data can thus be explained in terms of phase correlations.
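The core operation, constraining Fourier phases while keeping amplitudes fixed, can be sketched with an FFT round trip. The specific phase constraint below (a random walk across adjacent phases) is only an illustrative stand-in for the well-defined constraints studied in the paper.

```python
import numpy as np

# Start from white Gaussian noise, keep the Fourier amplitudes, and replace
# the phases. Random phases give a linear surrogate; correlating adjacent
# phases injects a well-defined nonlinearity without changing the spectrum.
rng = np.random.default_rng(1)
x = rng.standard_normal(4096)
X = np.fft.rfft(x)
amps = np.abs(X)

phi_random = rng.uniform(-np.pi, np.pi, len(amps))       # uncorrelated phases
phi_corr = np.cumsum(rng.uniform(-0.3, 0.3, len(amps)))  # correlated adjacent phases

def rebuild(amps, phases):
    Y = amps * np.exp(1j * phases)
    Y[0], Y[-1] = amps[0], amps[-1]    # DC and Nyquist bins must be real
    return np.fft.irfft(Y, n=4096)

linear = rebuild(amps, phi_random)
nonlinear = rebuild(amps, phi_corr)
```

Both series share the same power spectrum by construction, so any difference between them is carried entirely by the phase information, which is exactly the handle the paper exploits.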
Distributed Load Shedding over Directed Communication Networks with Time Delays
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Tao; Wu, Di
When generation is insufficient to support all loads under emergencies, effective and efficient load shedding needs to be deployed in order to maintain the supply-demand balance. This paper presents a distributed load shedding algorithm that makes efficient decisions based on discovered global information. In the global information discovery process, each load communicates only with its neighboring loads via directed communication links, possibly with arbitrarily large but bounded time-varying communication delays. We propose a novel distributed information discovery algorithm based on ratio consensus. Simulation results are used to validate the proposed method.
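Ratio consensus, the building block named in the abstract, can be sketched on a small directed graph. The weights and load values below are made up, and the paper's contribution, tolerating arbitrarily large but bounded time-varying delays, is not reproduced; this shows only the basic delay-free iteration.

```python
import numpy as np

# Each node runs two linear iterations with column-stochastic weights;
# the ratio y/z converges to the network-wide average at every node,
# letting each load discover the global mean load.
P = np.array([[0.5, 0.0, 0.5],    # column-stochastic weight matrix of a
              [0.5, 0.5, 0.0],    # strongly connected directed graph
              [0.0, 0.5, 0.5]])
loads = np.array([30.0, 50.0, 20.0])
y = loads.copy()
z = np.ones(3)
for _ in range(100):
    y = P @ y
    z = P @ z
avg_load = y / z    # every node learns the mean load, 100/3
```

Column stochasticity preserves the sums of y and z, which is why the ratio recovers the true average even though the directed weights alone would not.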
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakafuji, Dora; Gouveia, Lauren
This project supports development of the next-generation integrated energy management system (EMS) infrastructure, able to incorporate advanced visualization of behind-the-meter distributed resource information and probabilistic renewable energy generation forecasts to inform real-time operational decisions. The project involves end users and active feedback from a Utility Advisory Team (UAT) to help inform how information can be used to enhance operational functions (e.g., unit commitment, load forecasting, Automatic Generation Control (AGC) reserve monitoring, ramp alerts) within two major EMS platforms. Objectives include: engaging utility operations personnel to develop user input on displays, set expectations, test, and review; developing ease-of-use and timeliness metrics for measuring enhancements; developing prototype integrated capabilities within two operational EMS environments; demonstrating an integrated decision analysis platform with real-time wind and solar forecasting information and timely distributed resource information; seamlessly integrating new 4-dimensional information into operations without increasing workload and complexity; developing sufficient analytics to inform and confidently transform and adopt new operating practices and procedures; disseminating project lessons learned through industry-sponsored workshops and conferences; and building on collaborative utility-vendor partnerships and industry capabilities.
Managing distribution changes in time series prediction
NASA Astrophysics Data System (ADS)
Matias, J. M.; Gonzalez-Manteiga, W.; Taboada, J.; Ordonez, C.
2006-07-01
When a problem is modeled statistically, a single distribution model is usually postulated that is assumed to be valid for the entire space. Nonetheless, this practice may be somewhat unrealistic in certain application areas, in which the conditions of the process that generates the data may change; as far as we are aware, however, no techniques have been developed to tackle this problem. This article proposes a technique for modeling and predicting this change in time series with a view to improving estimates and predictions. The technique is applied, among other models, to the recently proposed hypernormal distribution. When tested on real data from a range of stock market indices, the technique produces better results than when a single distribution model is assumed to be valid for the entire period of time studied. Moreover, when a global model is postulated, it is highly recommended to select the hypernormal distribution parameter in the same likelihood maximization process.
Fumeaux, Christophe; Lin, Hungyen; Serita, Kazunori; Withayachumnankul, Withawat; Kaufmann, Thomas; Tonouchi, Masayoshi; Abbott, Derek
2012-07-30
The process of terahertz generation through optical rectification in a nonlinear crystal is modeled using discretized equivalent current sources. The equivalent terahertz sources are distributed in the active volume and computed based on a separately modeled near-infrared pump beam. This approach can be used to define an appropriate excitation for full-wave electromagnetic numerical simulations of the generated terahertz radiation. This enables predictive modeling of the near-field interactions of the terahertz beam with micro-structured samples, e.g. in a near-field time-resolved microscopy system. The distributed source model is described in detail, and an implementation in a particular full-wave simulation tool is presented. The numerical results are then validated through a series of measurements on square apertures. The general principle can be applied to other nonlinear processes with possible implementation in any full-wave numerical electromagnetic solver.
Design and Realization of Online Monitoring System of Distributed New Energy and Renewable Energy
NASA Astrophysics Data System (ADS)
Tang, Yanfen; Zhou, Tao; Li, Mengwen; Zheng, Guotai; Li, Hao
2018-01-01
To address the difficulty of centralized monitoring and management of current distributed new energy and renewable energy generation projects, caused by their great variety, differing communication protocols, and large differences in scale, this paper designs an online monitoring system for new energy and renewable energy characterized by distributed deployment, tailorable functions, extendible applications, and fault self-healing performance. The system is designed based on the international general standard for the grid information data model, formulates a unified data acquisition and transmission standard for different types of new energy and renewable energy generation projects, and can realize unified data acquisition and real-time monitoring of new energy and renewable energy generation projects within its jurisdiction, such as solar energy, wind power, and biomass energy. The system has been deployed in Beijing; at present, 576 projects are connected to it. Good results have been achieved, and the stability and reliability of the system have been validated.
Analysis on flood generation processes by means of a continuous simulation model
NASA Astrophysics Data System (ADS)
Fiorentino, M.; Gioia, A.; Iacobellis, V.; Manfreda, S.
2006-03-01
In the present research, we exploited continuous hydrological simulation to investigate the key variables responsible for flood peak formation. For this purpose, a distributed hydrological model (DREAM) is used in cascade with a rainfall generator (IRP, Iterated Random Pulse) to simulate a large number of extreme events, providing insight into the main controls of flood generation mechanisms. The investigated variables are those used in theoretically derived probability distributions of floods based on the concept of partial contributing area (e.g., Iacobellis and Fiorentino, 2000). The continuous simulation model is used to investigate the hydrological losses occurring during extreme events, the variability of the source area contributing to the flood peak, and its lag time. Results suggest interesting simplifications for the theoretical probability distribution of floods according to the different climatic and geomorphologic environments. The study is applied to two basins located in Southern Italy with different climatic characteristics.
Robust low-frequency spread-spectrum navigation system
Smith, Stephen F [Loudon, TN; Moore, James A [Powell, TN
2012-01-03
Methods and apparatus are described for a navigation system. A process includes providing a plurality of transmitters distributed throughout a desired coverage area; locking the plurality of transmitters to a common timing reference; transmitting a signal from each of the plurality of transmitters. An apparatus includes a plurality of transmitters distributed throughout a desired coverage area; wherein each of the plurality of transmitters comprises a packet generator; and wherein the plurality of transmitters are locked to a common timing reference.
Execution time supports for adaptive scientific algorithms on distributed memory machines
NASA Technical Reports Server (NTRS)
Berryman, Harry; Saltz, Joel; Scroggs, Jeffrey
1990-01-01
Optimizations are considered that are required for efficient execution of code segments that consist of loops over distributed data structures. The PARTI (Parallel Automated Runtime Toolkit at ICASE) execution time primitives are designed to carry out these optimizations and can be used to implement a wide range of scientific algorithms on distributed memory machines. These primitives allow the user to control array mappings in a way that gives an appearance of shared memory. Computations can be based on a global index set. Primitives are used to carry out gather and scatter operations on distributed arrays. Communication patterns are derived at runtime, and the appropriate send and receive messages are automatically generated.
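The gather/scatter pattern behind such runtime primitives can be sketched in a few lines. This is a single-process simulation with an assumed cyclic mapping, purely for illustration, not PARTI's actual API.

```python
import numpy as np

# A distributed array is described by a translation from global index to
# (owner process, local index); a gather collects the values a loop will
# read, and a scatter pushes updates back to the owning process.
n_procs = 2
owner = lambda g: g % n_procs          # assumed cyclic distribution
local = lambda g: g // n_procs
data = [np.array([0., 2., 4., 6.]),    # proc 0 owns globals 0, 2, 4, 6
        np.array([1., 3., 5., 7.])]    # proc 1 owns globals 1, 3, 5, 7

def gather(global_ids):
    # in a real runtime, the communication pattern derived here at runtime
    # would become send/receive messages between processes
    return np.array([data[owner(g)][local(g)] for g in global_ids])

def scatter_add(global_ids, values):
    for g, v in zip(global_ids, values):
        data[owner(g)][local(g)] += v

vals = gather([0, 3, 5])    # fetch values a loop iteration will read
```

The global index set gives the "appearance of shared memory" the abstract mentions: the loop body refers only to global indices, and the primitives handle ownership and messaging.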
A two-step method for developing a control rod program for boiling water reactors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taner, M.S.; Levine, S.H.; Hsiao, M.Y.
1992-01-01
This paper reports on a two-step method established for the generation of a long-term control rod program for boiling water reactors (BWRs). The new method assumes a time-variant target power distribution during core depletion. In the new method, BWR control rod programming is divided into two steps. In step 1, a sequence of optimal, exposure-dependent Haling power distribution profiles is generated, utilizing the spectral shift concept. In step 2, a set of exposure-dependent control rod patterns is developed using the Haling profiles generated in step 1 as a target. The new method is implemented in a computer program named OCTOPUS. The optimization procedure of OCTOPUS is based on the method of approximation programming, in which the SIMULATE-E code is used to determine the nucleonics characteristics of the reactor core state. In a test, the new method achieved a gain in cycle length over a time-invariant target Haling power distribution case because of a moderate application of spectral shift. No thermal limits of the core were violated. The gain in cycle length could be increased further by broadening the extent of the spectral shift.
24 CFR 570.489 - Program administrative requirements.
Code of Federal Regulations, 2013 CFR
2013-04-01
... funds reallocated by HUD to the State which are distributed during the time the final Statement for the..., less the costs incidental to the generation of the income; (iv) Gross income from the use or rental of... activity that was constructed or improved with CDBG funds, less the costs incidental to the generation of...
24 CFR 570.489 - Program administrative requirements.
Code of Federal Regulations, 2014 CFR
2014-04-01
... funds reallocated by HUD to the State which are distributed during the time the final Statement for the..., less the costs incidental to the generation of the income; (iv) Gross income from the use or rental of... activity that was constructed or improved with CDBG funds, less the costs incidental to the generation of...
Li, Zhen-shan; Fu, Hui-zhen; Qu, Xiao-yan
2011-09-15
Reliable and accurate determination of the quantities and composition of wastes is required for the planning of municipal solid waste (MSW) management systems. A model, based on the interrelationships of expenditure on consumer goods, time distribution, daily activities, resident groups, and waste generation, was developed and employed to estimate MSW generation by different activities and resident groups in Beijing. The principle is that MSW is produced by the consumption of consumer goods by residents in their daily activities: 'Maintenance' (meeting the basic needs of food, housing, and personal care), 'Subsistence' (providing the financial requirements), and 'Leisure' (social and recreational pursuits). Three series of important parameters - waste generation per unit of consumer expenditure, consumer expenditure distribution to activities in unit time, and time assignment to activities by different resident groups - were determined using statistical analysis, a sampling survey, and the Analytic Hierarchy Process, respectively. Data for analysis were obtained from the Beijing Statistical Yearbook (2004-2008) and a questionnaire survey. The results reveal that 'Maintenance' activity produced the most MSW, distantly followed by 'Leisure' and 'Subsistence' activities. In 2008, in descending order of MSW generation, the resident groups were the floating population, non-civil servants, retired people, civil servants, college students (including both undergraduates and graduates), primary and secondary students, and preschoolers. The new estimation model, which was successful in fitting waste generation by different activities and resident groups over the investigated years, is amenable to MSW prediction. Copyright © 2011 Elsevier B.V. All rights reserved.
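The accounting identity underlying the model, waste per unit of expenditure times expenditure per unit of time times time assigned, can be sketched with hypothetical numbers (none of these values come from the paper):

```python
import numpy as np

# Toy sketch: waste per group = sum over activities of
# (expenditure rate x hours) x waste generated per unit expenditure.
w_per_exp = np.array([0.8, 0.2, 0.5])        # kg waste per unit spend, per activity
exp_per_hour = np.array([[2.0, 1.0, 3.0],    # spend rate: group x activity
                         [1.5, 0.5, 2.0]])
hours = np.array([[10.0, 8.0, 6.0],          # time assignment: group x activity
                  [12.0, 6.0, 6.0]])
waste = (exp_per_hour * hours) @ w_per_exp   # kg of MSW per resident group
total = waste.sum()
```

Summing the per-group vector over the population of each resident group would give the city-wide estimate the paper reports.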
Generation of wideband chaos with suppressed time-delay signature by delayed self-interference.
Wang, Anbang; Yang, Yibiao; Wang, Bingjie; Zhang, Beibei; Li, Lei; Wang, Yuncai
2013-04-08
We demonstrate experimentally and numerically a method using the incoherent delayed self-interference (DSI) of chaotic light from a semiconductor laser with optical feedback to generate a wideband chaotic signal. The results show that the DSI can eliminate the domination of the laser relaxation oscillation present in the chaotic laser light and thereby flatten and widen the power spectrum. Furthermore, the DSI suppresses the time-delay signature induced by external-cavity modes and improves the symmetry of the probability distribution by more than one order of magnitude. We also show experimentally that this DSI signal is beneficial for random number generation.
The application of connectionism to query planning/scheduling in intelligent user interfaces
NASA Technical Reports Server (NTRS)
Short, Nicholas, Jr.; Shastri, Lokendra
1990-01-01
In the mid-nineties, the Earth Observing System (EOS) will generate an estimated 10 terabytes of data per day. This enormous amount of data will require the use of sophisticated technologies from real-time distributed Artificial Intelligence (AI) and data management. Without addressing the overall problems of distributed AI, efficient models were developed for query planning and/or scheduling in intelligent user interfaces that reside in a network environment. Before intelligent query planning can be done, a model for real-time AI planning and/or scheduling must be developed. As Connectionist Models (CMs) have shown promise in improving run times, a connectionist approach to AI planning and/or scheduling is proposed. The solution involves merging a CM rule-based system with a general spreading-activation model for the generation and selection of plans. The system was implemented in the Rochester Connectionist Simulator and runs on a Sun 3/260.
NASA Astrophysics Data System (ADS)
Veronesi, F.; Grassi, S.
2016-09-01
Wind resource assessment is a key aspect of wind farm planning, since it allows estimation of the long-term electricity production. Moreover, high-resolution wind speed time series are helpful for estimating the temporal changes of the electricity generation and indispensable for designing stand-alone systems, which are affected by the mismatch of supply and demand. In this work, we present a new generalized statistical methodology to generate the spatial distribution of wind speed time series, using Switzerland as a case study. This research is based upon a machine learning model and demonstrates that statistical wind resource assessment can successfully be used for estimating wind speed time series. In fact, this method is able to obtain reliable wind speed estimates and propagate all the sources of uncertainty (from the measurements to the mapping process) in an efficient way, i.e., minimizing computational time and load. This allows not only accurate estimation but also the creation of precise confidence intervals to map the stochasticity of the wind resource for a particular site. The validation shows that machine learning can minimize the bias of the hourly wind speed estimates. Moreover, for each mapped location this method delivers not only the mean wind speed but also its confidence interval, which are crucial data for planners.
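The idea of delivering a confidence interval alongside each estimate can be illustrated generically with a bootstrap over synthetic hourly wind speeds. This stands in for, and is much simpler than, the paper's machine learning pipeline; the gamma distribution and its parameters are invented for the example.

```python
import numpy as np

# Bootstrap a 95% confidence interval for the mean of synthetic
# hourly wind speeds (gamma-distributed, an assumed toy dataset).
rng = np.random.default_rng(3)
obs = rng.gamma(shape=2.0, scale=2.5, size=500)    # synthetic hourly wind speeds
means = [rng.choice(obs, size=len(obs)).mean() for _ in range(1000)]
lo, hi = np.percentile(means, [2.5, 97.5])         # 95% confidence interval
```

Reporting (mean, lo, hi) per mapped location is the kind of output the abstract describes as crucial for planners.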
Phan, Thanh G; Beare, Richard; Chen, Jian; Clissold, Benjamin; Ly, John; Singhal, Shaloo; Ma, Henry; Srikanth, Velandai
2017-05-01
There is great interest in how endovascular clot retrieval hubs provide services to a population. We applied a computational method to objectively generate service boundaries for such endovascular clot retrieval hubs, defined by traveling time to hub. Stroke incidence data were merged with the population census to estimate numbers of strokes in metropolitan Melbourne, Australia. Traveling times from randomly generated addresses to 4 endovascular clot retrieval-capable hubs (Royal Melbourne Hospital [RMH], Monash Medical Center [MMC], Alfred Hospital [ALF], and Austin Hospital [AUS]) were estimated using the Google Maps application programming interface. Boundary maps were generated based on traveling time at various times of day for combinations of hubs. In a 2-hub model, catchment was best distributed when RMH was paired with MMC (model 1a, RMH 1765 km² and MMC 1164 km²) or with AUS (model 1c, RMH 1244 km² and AUS 1685 km²), with no statistical difference between models (P=0.20). Catchment was poorly distributed when RMH was paired with ALF (model 1b, RMH 2252 km² and ALF 676 km²), significantly different from both models 1a and 1c (both P<0.05). Model 1a had the greatest proportion of patients arriving within the ideal time of 30 minutes, followed by model 1c (P<0.001). In a 3-hub model, the combination of RMH, MMC, and AUS was superior to that of RMH, MMC, and ALF in catchment distribution and travel time. The method was also successfully applied to the city of Adelaide, demonstrating wider applicability. We provide proof of concept for a novel computational method to objectively designate service boundaries for endovascular clot retrieval hubs. © 2017 American Heart Association, Inc.
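The catchment assignment can be sketched as follows, with straight-line travel time standing in for the Google Maps queries used in the study; the hub coordinates, driving speed, and address distribution are all invented for illustration.

```python
import numpy as np

# Assign each random address to the hub with the shortest (here,
# straight-line) travel time, then summarize catchment shares and the
# fraction of addresses within the 30-minute ideal.
rng = np.random.default_rng(7)
hubs = {"RMH": (0.0, 0.0), "MMC": (20.0, -15.0)}   # hypothetical positions, km
addresses = rng.uniform(-30, 30, size=(10_000, 2)) # random addresses, km

speed = 0.8  # km per minute, assumed average driving speed
names = list(hubs)
coords = np.array([hubs[n] for n in names])
minutes = np.linalg.norm(addresses[:, None, :] - coords[None, :, :], axis=2) / speed
nearest = minutes.argmin(axis=1)                   # catchment assignment
share = {n: np.mean(nearest == i) for i, n in enumerate(names)}
within_30 = np.mean(minutes.min(axis=1) <= 30.0)   # fraction within 30 minutes
```

Replacing the straight-line times with queried drive times at different times of day yields the time-dependent boundary maps the paper describes.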
Granton, Patrick V; Verhaegen, Frank
2013-05-21
Precision image-guided small animal radiotherapy is rapidly advancing through the use of dedicated micro-irradiation devices. However, precise modeling of these devices in model-based dose-calculation algorithms such as Monte Carlo (MC) simulations continues to present challenges due to a combination of very small beams, low mechanical tolerances on beam collimation and positioning, and long calculation times. The specific intent of this investigation is to introduce and demonstrate the viability of a fast analytical source model (AM) for use either in investigating improvements in collimator design or in faster dose calculations. MC models using BEAMnrc were developed for circular and square fields from 1 to 25 mm in diameter (or side) that incorporated the intensity distribution of the focal spot, modeled after an experimental pinhole image. These MC models were used to generate phase space files (PSF_MC) at the exit of the collimators. An AM was developed that included the intensity distribution of the focal spot, a pre-calculated x-ray spectrum, and the collimator-specific entrance and exit apertures. The AM was used to generate photon fluence intensity distributions (Φ_AM) and PSF_AM containing photons radiating at angles according to the focal spot intensity distribution. MC dose calculations using DOSXYZnrc in a water and a mouse phantom, differing only by the source used (PSF_MC versus PSF_AM), were found to agree within 7% and 4% for the smallest 1 and 2 mm collimators, respectively, and within 1% for all other field sizes based on depth dose profiles. PSF generation times were approximately 1200 times faster for the smallest beam and 19 times faster for the largest beam. The influence of the focal spot intensity distribution on output and on beam shape was quantified and found to play a significant role in calculated dose distributions. Beam profiles in both small and large collimators were found to be sensitive to collimator alignment shifts of 1 mm with respect to the central axis.
Power law versus exponential state transition dynamics: application to sleep-wake architecture.
Chu-Shore, Jesse; Westover, M Brandon; Bianchi, Matt T
2010-12-02
Despite the common experience that interrupted sleep has a negative impact on waking function, the features of human sleep-wake architecture that best distinguish sleep continuity versus fragmentation remain elusive. In this regard, there is growing interest in characterizing sleep architecture using models of the temporal dynamics of sleep-wake stage transitions. In humans and other mammals, the state transitions defining sleep and wake bout durations have been described with exponential and power law models, respectively. However, sleep-wake stage distributions are often complex, and distinguishing between exponential and power law processes is not always straightforward. Although mono-exponential distributions are distinct from power law distributions, multi-exponential distributions may in fact resemble power laws by appearing linear on a log-log plot. To characterize the parameters that may allow these distributions to mimic one another, we systematically fitted multi-exponential-generated distributions with a power law model, and power law-generated distributions with multi-exponential models. We used the Kolmogorov-Smirnov method to investigate goodness of fit for the "incorrect" model over a range of parameters. The "zone of mimicry" of parameters that increased the risk of mistakenly accepting power law fitting resembled empiric time constants obtained in human sleep and wake bout distributions. Recognizing this uncertainty in model distinction impacts interpretation of transition dynamics (self-organizing versus probabilistic), and the generation of predictive models for clinical classification of normal and pathological sleep architecture.
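The mimicry the abstract describes can be reproduced numerically: draw bout durations from a multi-exponential mixture, fit a power law by maximum likelihood, and measure the Kolmogorov-Smirnov distance of the "incorrect" model. The sketch below is a minimal illustration with assumed, not empirical, time constants.

```python
import numpy as np

rng = np.random.default_rng(0)

# Bout durations from a two-component exponential mixture
# (time constants are illustrative, not fitted sleep data).
n = 20000
comp = rng.random(n) < 0.5
x = np.where(comp, rng.exponential(1.0, n), rng.exponential(20.0, n))
xmin = 0.5
x = x[x >= xmin]                 # power-law fits require a lower cutoff

# Continuous power-law MLE (Clauset-style): alpha = 1 + n / sum(ln(x/xmin))
alpha = 1.0 + x.size / np.log(x / xmin).sum()

# KS distance between the empirical CDF and the fitted power-law CDF
xs = np.sort(x)
emp = np.arange(1, xs.size + 1) / xs.size
fit = 1.0 - (xs / xmin) ** (1.0 - alpha)
ks = np.abs(emp - fit).max()
print(alpha, ks)
```

Scanning the mixture weights and time constants while recording where the KS distance stays small traces out the "zone of mimicry" in which a multi-exponential process is mistaken for a power law.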
NASA Astrophysics Data System (ADS)
Kinoshita, Shunichi; Eder, Wolfgang; Wöger, Julia; Hohenegger, Johann; Briguglio, Antonino
2017-04-01
Investigations on Palaeonummulites venosus using the natural laboratory approach for determining chamber building rate, test diameter increase rate, reproduction time and longevity are based on the decomposition of monthly obtained frequency distributions of chamber number and test diameter into normally distributed components. The shift of the component parameters 'mean' and 'standard deviation' during the investigation period of 15 months was used to calculate Michaelis-Menten functions applied to estimate the averaged chamber building rate and diameter increase rate under natural conditions. The individual dates of birth were estimated using the inverse averaged chamber building rate and the inverse diameter increase rate fitted by the individual chamber number or the individual test diameter at the sampling date. Distributions of frequencies and densities (i.e., frequency divided by sediment weight) based on chamber building rate and diameter increase rate both indicated continuous reproduction throughout the year with two peaks, the stronger in May/June determined as the beginning of the summer generation (generation 1) and the weaker in November determined as the beginning of the winter generation (generation 2). This reproduction scheme explains the existence of small and large specimens in the same sample. Longevity, calculated as the maximum difference in days between the individual's birth date and the sampling date, seems to be about one year, as obtained by both estimations based on the chamber building rate and the diameter increase rate.
NASA Astrophysics Data System (ADS)
Kinoshita, Shunichi; Eder, Wolfgang; Wöger, Julia; Hohenegger, Johann; Briguglio, Antonino
2017-12-01
We investigated the symbiont-bearing benthic foraminifer Palaeonummulites venosus to determine the chamber building rate (CBR), test diameter increase rate (DIR), reproduction time and longevity using the `natural laboratory' approach. This is based on the decomposition of monthly obtained frequency distributions of chamber number and test diameter into normally distributed components. Test measurements were taken using MicroCT. The shift of the mean and standard deviation of component parameters during the 15-month investigation period was used to calculate Michaelis-Menten functions applied to estimate the averaged CBR and DIR under natural conditions. The individual dates of birth were estimated using the inverse averaged CBR and the inverse DIR fitted by the individual chamber number or the individual test diameter at the sampling date. Distributions of frequencies and densities (i.e., frequency divided by sediment weight) based on both CBR and DIR revealed continuous reproduction throughout the year with two peaks, a stronger one in June determined as the onset of the summer generation (generation 1) and a weaker one in November determined as the onset of the winter generation (generation 2). This reproduction scheme explains the presence of small and large specimens in the same sample. Longevity, calculated as the maximum difference in days between the individual's birth date and the sampling date, is approximately 1.5 yr, an estimation obtained by using both CBR and DIR.
Relaxation of ferroelectric states in 2D distributions of quantum dots: EELS simulation
NASA Astrophysics Data System (ADS)
Cortés, C. M.; Meza-Montes, L.; Moctezuma, R. E.; Carrillo, J. L.
2016-06-01
The relaxation time of collective electronic states in a 2D distribution of quantum dots is investigated theoretically by simulating EELS experiments. From the numerical calculation of the probability of energy loss of an electron beam traveling parallel to the distribution, it is possible to estimate the damping time of ferroelectric-like states. We generate this collective response of the distribution by introducing a mean field interaction among the quantum dots, and the model is then extended to incorporate effects of long-range correlations through a Bragg-Williams approximation. The behavior of the dielectric function, the energy loss function, and the relaxation time of ferroelectric-like states is then investigated as a function of the temperature of the distribution and the damping constant of the electronic states in the single quantum dots. The robustness of the trends in our results indicates that this scheme of analysis can guide experimentalists in developing tailored quantum dot distributions for specific applications.
Anticipatory control of xenon in a pressurized water reactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Impink, A.J. Jr.
1987-02-10
A method is described for automatically dampening xenon-135 spatial transients in the core of a pressurized water reactor having control rods which regulate reactor power level, comprising the steps of: measuring the neutron flux in the reactor core at a plurality of axially spaced locations on a real-time, on-line basis; repetitively generating from the neutron flux measurements, on a point-by-point basis, signals representative of the current axial distribution of xenon-135 and signals representative of the current rate of change of the axial distribution of xenon-135; generating from the xenon-135 distribution signals and the rate-of-change signals, control signals for reducing the xenon transients; and positioning the control rods as a function of the control signals to dampen the xenon-135 spatial transients.
Burst wait time simulation of CALIBAN reactor at delayed super-critical state
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humbert, P.; Authier, N.; Richard, B.
2012-07-01
In the past, the super-prompt-critical wait time probability distribution was measured on the CALIBAN fast burst reactor [4]. Afterwards, these experiments were simulated with very good agreement by solving the non-extinction probability equation [5]. Recently, the burst wait time probability distribution has been measured at CEA-Valduc on CALIBAN at different delayed super-critical states [6]. However, in the delayed super-critical case the non-extinction probability does not give access to the wait time distribution. In this case it is necessary to compute the time-dependent evolution of the full neutron count number probability distribution. In this paper we present the point model deterministic method used to calculate the probability distribution of the wait time before a prescribed count level, taking into account prompt neutrons and delayed neutron precursors. This method is based on the solution of the time-dependent adjoint Kolmogorov master equations for the number of detections, using the generating function methodology [8,9,10] and inverse discrete Fourier transforms. The obtained results are then compared to the measurements and to Monte Carlo calculations based on the algorithm presented in [7]. (authors)
Priority queues with bursty arrivals of incoming tasks
NASA Astrophysics Data System (ADS)
Masuda, N.; Kim, J. S.; Kahng, B.
2009-03-01
Recently increased accessibility of large-scale digital records enables one to monitor human activities such as the interevent time distributions between two consecutive visits to a web portal by a single user, two consecutive emails sent out by a user, two consecutive library loans made by a single individual, etc. Interestingly, those distributions exhibit a universal behavior, D(τ) ~ τ^(-δ), where τ is the interevent time, and δ ≈ 1 or 3/2. The universal behaviors have been modeled via the waiting-time distribution of a task in a queue operating based on priority; the waiting time follows a power-law distribution P_w(τ) ~ τ^(-α) with either α = 1 or 3/2 depending on the details of the queuing dynamics. In these models, the number of incoming tasks in a unit time interval has been assumed to follow a Poisson-type distribution. For an email system, however, the number of emails delivered to a mail box in a unit time, as we measured, follows a power-law distribution with general exponent γ. For this case, we obtain analytically the exponent α, which is not necessarily 1 or 3/2 and takes nonuniversal values depending on γ. We develop the generating function formalism to obtain the exponent α, which is distinct from the continuous time approximation used in the previous studies.
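The priority-queue picture behind the power-law waiting times can be illustrated with the classic fixed-length model (the α = 1 class mentioned above). Note this is a sketch of the baseline model with one fresh task per step, not the paper's bursty power-law arrivals, and all parameter values are assumptions.

```python
import numpy as np

# Fixed-length priority queue: at each step, execute the highest-priority of
# the two slots with probability p (otherwise a random slot); the executed
# slot is refilled with a fresh task of uniform random priority.
rng = np.random.default_rng(1)
L, p, steps = 2, 0.99999, 100000
prio = list(rng.random(L))
age = [0] * L                      # steps each current task has waited
waits = []
for _ in range(steps):
    if rng.random() < p:
        i = max(range(L), key=lambda k: prio[k])
    else:
        i = int(rng.integers(L))
    waits.append(age[i] + 1)       # waiting time of the executed task
    prio[i] = rng.random()         # fresh task arrives in the freed slot
    age[i] = 0
    for k in range(L):
        if k != i:
            age[k] += 1
waits = np.array(waits)
# Most tasks execute immediately (wait 1), while low-priority tasks
# languish, producing a heavy tail of long waiting times.
```

Replacing the one-per-step arrivals with bursts whose sizes are drawn from a power law of exponent γ is what shifts the waiting-time exponent α away from the universal values, as the abstract's generating-function analysis shows.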
An Incentive-based Online Optimization Framework for Distribution Grids
Zhou, Xinyang; Dall'Anese, Emiliano; Chen, Lijun; ...
2017-10-09
This article formulates a time-varying social-welfare maximization problem for distribution grids with distributed energy resources (DERs) and develops online distributed algorithms to identify (and track) its solutions. In the considered setting, the network operator and DER-owners pursue given operational and economic objectives, while concurrently ensuring that voltages are within prescribed limits. The proposed algorithm affords an online implementation to enable tracking of the solutions in the presence of time-varying operational conditions and changing optimization objectives. It involves a strategy where the network operator collects voltage measurements throughout the feeder to build incentive signals for the DER-owners in real time; DERs then adjust the generated/consumed powers in order to avoid the violation of the voltage constraints while maximizing given objectives. Stability of the proposed schemes is analytically established and numerically corroborated.
Compiling global name-space programs for distributed execution
NASA Technical Reports Server (NTRS)
Koelbel, Charles; Mehrotra, Piyush
1990-01-01
Distributed memory machines do not provide hardware support for a global address space. Thus programmers are forced to partition the data across the memories of the architecture and use explicit message passing to communicate data between processors. The compiler support required to allow programmers to express their algorithms using a global name-space is examined. A general method is presented for the analysis of a high-level source program and its translation to a set of independently executing tasks communicating via messages. If the compiler has enough information, this translation can be carried out at compile-time. Otherwise, run-time code is generated to implement the required data movement. The analysis required in both situations is described, and the performance of the generated code on the Intel iPSC/2 is presented.
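The compile-time translation hinges on an ownership mapping from each global array index to a processor and a local index, from which the compiler derives which references require messages. A minimal sketch of such a mapping for a block distribution follows; the function names and layout are illustrative assumptions, not the paper's notation.

```python
# Owner-computes mapping for a block-distributed global array
# (illustrative sketch of the kind of code a compiler might generate).
def block_size(n, p):
    """Elements per processor for an n-element array on p processors."""
    return (n + p - 1) // p

def owner(i, n, p):
    """Processor that owns global index i."""
    return i // block_size(n, p)

def to_local(i, n, p):
    """(processor, local index) pair for global index i."""
    b = block_size(n, p)
    return i // b, i % b

# A stencil reference a[i-1] computed on the owner of a[i] needs a message
# exactly when owner(i-1) != owner(i), i.e., at block boundaries.
boundary_msgs = sum(owner(i - 1, 100, 4) != owner(i, 100, 4) for i in range(1, 100))
print(boundary_msgs)  # → 3
```

When the distribution is only known at run time, the same ownership test is emitted as run-time code instead, which is the compile-time/run-time split the abstract describes.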
NASA Technical Reports Server (NTRS)
Woods, J. M. (Inventor)
1973-01-01
An electrical power distribution system is described for use in providing different dc voltage levels. A circuit is supplied with dc voltage levels and commutates pulses for timed intervals onto a pair of distribution wires. The circuit is driven by a command generator which places pulses on the wires in a timed sequence. The pair of wires extends to voltage strippers connected to the various loads. The voltage strippers each respond to the pulsed dc levels on the pair of wires and form different output voltages communicated to each load.
Analysis on Voltage Profile of Distribution Network with Distributed Generation
NASA Astrophysics Data System (ADS)
Shao, Hua; Shi, Yujie; Yuan, Jianpu; An, Jiakun; Yang, Jianhua
2018-02-01
Penetration of distributed generation has several impacts on a distribution network, affecting load flow, voltage profile, reliability, power loss, and so on. After these impacts and the typical structures of grid-connected distributed generation are analyzed, the back/forward sweep method for load flow calculation of the distribution network is modelled, including distributed generation. The voltage profiles of the distribution network as affected by the installation location and the capacity of distributed generation are thoroughly investigated and simulated. The impacts on the voltage profiles are summarized, and corresponding suggestions for the installation location and capacity of distributed generation are given.
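The back/forward sweep method the abstract models can be sketched on a toy radial feeder: a backward sweep accumulates branch currents from the feeder end, a forward sweep updates bus voltages from the slack bus, and distributed generation enters as a negative load. All network data below are illustrative assumptions.

```python
import numpy as np

# Minimal backward/forward sweep on a 4-bus radial feeder (per-unit values,
# illustrative only). Bus 0 is the slack bus; line k connects bus k to k+1.
z = np.array([0.02 + 0.04j] * 3)                          # line impedances
load = np.array([0, 0.3 + 0.1j, 0.3 + 0.1j, 0.3 + 0.1j])  # complex bus loads

def sweep(dg_bus=None, dg_p=0.0, iters=30):
    s = load.astype(complex).copy()
    if dg_bus is not None:
        s[dg_bus] -= dg_p                                 # DG as negative load
    v = np.ones(4, dtype=complex)
    for _ in range(iters):
        i_bus = np.conj(s / v)                            # bus current injections
        # backward sweep: line k carries the injections of buses k+1..3
        i_line = np.cumsum(i_bus[::-1])[::-1][1:]
        for k in range(3):                                # forward sweep
            v[k + 1] = v[k] - z[k] * i_line[k]
    return np.abs(v)

v_base = sweep()
v_dg = sweep(dg_bus=3, dg_p=0.5)   # 0.5 p.u. of DG at the feeder end
# DG at the end of the feeder raises the voltage profile along the line.
```

Moving `dg_bus` and varying `dg_p` reproduces, in miniature, the location/capacity study the abstract describes.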
Induced Ellipticity for Inspiraling Binary Systems
NASA Astrophysics Data System (ADS)
Randall, Lisa; Xianyu, Zhong-Zhi
2018-01-01
Although gravitational waves tend to erase eccentricity of an inspiraling binary system, ellipticity can be generated in the presence of surrounding matter. We present a semianalytical method for understanding the eccentricity distribution of binary black holes (BHs) in the presence of a supermassive BH in a galactic center. Given a matter distribution, we show how to determine the resultant eccentricity analytically in the presence of both tidal forces and evaporation up to one cutoff and one matter-distribution-independent function, paving the way for understanding the environment of detected inspiraling BHs. We furthermore generalize Kozai–Lidov dynamics to situations where perturbation theory breaks down for short time intervals, allowing more general angular momentum exchange, such that eccentricity is generated even when all bodies orbit in the same plane.
Tidal influence through LOD variations on the temporal distribution of earthquake occurrences
NASA Astrophysics Data System (ADS)
Varga, P.; Gambis, D.; Bizouard, Ch.; Bus, Z.; Kiszely, M.
2006-10-01
Stresses generated by the body tides are very small at the depth of crustal earthquakes (~10^2 N/m²). The maximum value of the lunisolar stress within the depth range of earthquakes is 10^3 N/m² (at a depth of about 600 km). Surface loads due to oceanic tides in coastal areas are ~10^4 N/m². These influences are, however, too small to affect the outbreak time of seismic events. The authors show that the effect on the time distribution of seismic activity of the ΔLOD variations generated by zonal tides, in the case of the Mf, Mm, Ssa and Sa tidal constituents, can be much more effective in triggering earthquakes. According to this approach, the tides do not trigger seismic events directly but act through the length-of-day variations they generate. That is why a correlation between the lunisolar effect and seismic activity exists for the zonal tides but not for the tesseral and sectorial tides.
NASA Astrophysics Data System (ADS)
Kato, Takeyoshi; Minagata, Atsushi; Suzuoki, Yasuo
This paper discusses the influence of mass installation of home co-generation systems (H-CGS) using polymer electrolyte fuel cells (PEFC) on the voltage profile of a power distribution system in a residential area. The influence of H-CGS is compared with that of photovoltaic power generation systems (PV systems). The operation pattern of H-CGS is assumed based on the electricity and hot-water demand observed in 10 households over a year. The main results are as follows. With clustered H-CGS, the voltage of each bus is higher by about 1-3% compared with the conventional system without any distributed generators. Because H-CGS tends to increase its output during the early evening, it helps recover the voltage drop during that period, resulting in smaller voltage variation of the distribution system throughout the day. Because of the small rated power output of about 1 kW, the influence of clustered H-CGS on the voltage profile is smaller than that of clustered PV systems. The highest voltage during the daytime is not as high as in a distribution system with clustered PV systems, even if reverse power flow from H-CGS is allowed.
Chen, Shaoqiang; Sato, Aya; Ito, Takashi; Yoshita, Masahiro; Akiyama, Hidefumi; Yokoyama, Hiroyuki
2012-10-22
This paper reports generation of sub-5-ps Fourier-transform limited optical pulses from a 1.55-µm gain-switched single-mode distributed-feedback laser diode via nanosecond electric excitation and a simple spectral-filtering technique. Typical damped oscillations of the whole lasing spectrum were observed in the time-resolved waveform. Through a spectral-filtering technique, the initial relaxation oscillation pulse and the following components in the output pulse can be well separated, and the initial short pulse can be selectively extracted by filtering out the short-wavelength components in the spectrum. Short pulses generated by this simple method are expected to have wide potential applications comparable to mode-locking lasers.
Global Swath and Gridded Data Tiling
NASA Technical Reports Server (NTRS)
Thompson, Charles K.
2012-01-01
This software generates cylindrically projected "tiles" of swath-based or gridded satellite data for the purpose of dynamically generating high-resolution global images covering various time periods, scaling ranges, and colors. It reconstructs a global image given a set of tiles covering a particular time range, scaling values, and a color table. The program is configurable in terms of tile size, spatial resolution, format of input data, location of input data (local or distributed), number of processes run in parallel, and data conditioning.
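The core of such a tiler is mapping each observation's latitude/longitude to a tile index in a cylindrical (plate carrée) grid. A minimal sketch follows; the tile size and function names are assumptions, not details of the NASA software.

```python
import math

def tile_index(lat, lon, tile_deg=10.0):
    """Map a lat/lon (degrees) to (row, col) in a plate-carree tiling.

    Row 0 starts at the north pole, column 0 at longitude -180.
    Points on the south/east edges are clamped into the last tile.
    """
    col = int((lon + 180.0) // tile_deg)
    row = int((90.0 - lat) // tile_deg)
    nrows = int(180.0 / tile_deg)
    ncols = int(360.0 / tile_deg)
    return min(row, nrows - 1), min(col, ncols - 1)

print(tile_index(0.0, 0.0))        # → (9, 18)
print(tile_index(89.9, -179.9))    # → (0, 0)
```

Binning swath samples by this index lets each tile be generated (and later recombined into a global image) independently, which is what makes the parallel, distributed configuration described above natural.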
Engineering High Assurance Distributed Cyber Physical Systems
2015-01-15
decisions: number of interacting agents and co-dependent decisions made in real-time without causing interference. To engineer a high assurance DART...environment specification, architecture definition, domain-specific languages, design patterns, code-generation, analysis, test-generation, and simulation...include synchronization between the models and source code, debugging at the model level, expression of the design intent, and quality of service
Performance of finite order distribution-generated universal portfolios
NASA Astrophysics Data System (ADS)
Pang, Sook Theng; Liew, How Hui; Chang, Yun Fah
2017-04-01
A Constant Rebalanced Portfolio (CRP) is an investment strategy which reinvests by redistributing wealth equally among a set of stocks. The empirical performance of the distribution-generated universal portfolio strategies is analysed experimentally for 10 higher-volume stocks from different categories in the Kuala Lumpur Stock Exchange. The time interval of study is from January 2000 to December 2015, which includes the credit crisis from September 2008 to March 2009. The performance of the finite-order universal portfolio strategies is shown to be better than the Constant Rebalanced Portfolio for some selected parameters of the proposed universal portfolios.
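The CRP benchmark itself is easy to state: terminal wealth is the product over trading periods of the portfolio-weighted price relatives. A minimal sketch with synthetic data (the price relatives below are illustrative, not Kuala Lumpur Stock Exchange data):

```python
import numpy as np

def crp_wealth(price_relatives, b):
    """Terminal wealth of a constant rebalanced portfolio with weights b.

    price_relatives: (periods x assets) array of close/open price ratios.
    Rebalancing to b each period gives wealth = prod_t (x_t . b).
    """
    b = np.asarray(b, dtype=float)
    return float(np.prod(price_relatives @ b))

# Two synthetic assets over 4 periods: cash and a volatile stock.
x = np.array([[1.0, 2.0],
              [1.0, 0.5],
              [1.0, 2.0],
              [1.0, 0.5]])
print(crp_wealth(x, [0.5, 0.5]))   # → 1.265625, i.e. (1.5 * 0.75)^2
```

Note that either asset alone ends with wealth 1.0, while the rebalanced 50/50 mix grows; a distribution-generated universal portfolio replaces the single fixed `b` with a weighted average over many CRPs.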
A high-fidelity weather time series generator using the Markov Chain process on a piecewise level
NASA Astrophysics Data System (ADS)
Hersvik, K.; Endrerud, O.-E. V.
2017-12-01
A method is developed for generating a set of unique weather time-series based on an existing weather series. The method allows statistically valid weather variations to take place within repeated simulations of offshore operations. The numerous generated time series need to share the same statistical qualities as the original time series. Statistical qualities here refer mainly to the distribution of weather windows available for work, including durations and frequencies of such weather windows, and seasonal characteristics. The method is based on the Markov chain process. The core new development lies in how the Markov Process is used, specifically by joining small pieces of random length time series together rather than joining individual weather states, each from a single time step, which is a common solution found in the literature. This new Markov model shows favorable characteristics with respect to the requirements set forth and all aspects of the validation performed.
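The piecewise idea can be sketched as follows: pieces of random length are drawn from the historical record, and each new piece is chosen to start at a point whose weather level is close to where the previous piece ended, so that the joins remain statistically plausible. The series, piece lengths, and matching tolerance below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
# Stand-in "historical" hourly wind-speed series (illustrative, always > 0)
hist = np.abs(np.cumsum(rng.normal(0, 0.5, 2000))) + 5.0

def synth(hist, length, min_piece=24, max_piece=72, tol=0.5):
    """Join random-length pieces of the historical series, matching levels at joins."""
    out = list(hist[:rng.integers(min_piece, max_piece)])
    while len(out) < length:
        level = out[-1]
        # candidate start points whose value is close to the current level
        cand = np.flatnonzero(np.abs(hist[:-max_piece] - level) < tol)
        start = rng.choice(cand) if cand.size else rng.integers(hist.size - max_piece)
        out.extend(hist[start:start + rng.integers(min_piece, max_piece)])
    return np.array(out[:length])

series = synth(hist, 8760)   # one synthetic year of hourly values
```

In the full method the transition between pieces is governed by a fitted Markov chain over weather states rather than a simple level-matching tolerance, which is what preserves weather-window durations and seasonality.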
Garrido, Jesús A.; Luque, Niceto R.; D'Angelo, Egidio; Ros, Eduardo
2013-01-01
Adaptable gain regulation is at the core of the forward controller operation performed by the cerebro-cerebellar loops and it allows the intensity of motor acts to be finely tuned in a predictive manner. In order to learn and store information about body-object dynamics and to generate an internal model of movement, the cerebellum is thought to employ long-term synaptic plasticity. LTD at the PF-PC synapse has classically been assumed to subserve this function (Marr, 1969). However, this plasticity alone cannot account for the broad dynamic ranges and time scales of cerebellar adaptation. We therefore tested the role of plasticity distributed over multiple synaptic sites (Hansel et al., 2001; Gao et al., 2012) by generating an analog cerebellar model embedded into a control loop connected to a robotic simulator. The robot used a three-joint arm and performed repetitive fast manipulations with different masses along an 8-shape trajectory. In accordance with biological evidence, the cerebellum model was endowed with both LTD and LTP at the PF-PC, MF-DCN and PC-DCN synapses. This resulted in a network scheme whose effectiveness was extended considerably compared to one including just PF-PC synaptic plasticity. Indeed, the system including distributed plasticity reliably self-adapted to manipulate different masses and to learn the arm-object dynamics over a time course that included fast learning and consolidation, along the lines of what has been observed in behavioral tests. In particular, PF-PC plasticity operated as a time correlator between the actual input state and the system error, while MF-DCN and PC-DCN plasticity played a key role in generating the gain controller. This model suggests that distributed synaptic plasticity allows generation of the complex learning properties of the cerebellum. The incorporation of further plasticity mechanisms and of spiking signal processing will allow this concept to be extended in a more realistic computational scenario. 
PMID:24130518
Investigation of advancing front method for generating unstructured grid
NASA Technical Reports Server (NTRS)
Thomas, A. M.; Tiwari, S. N.
1992-01-01
The advancing front technique is used to generate unstructured grids about simple aerodynamic geometries. Unstructured grids are generated using the VGRID2D and VGRID3D software. Specific problems considered are a NACA 0012 airfoil, a bi-plane consisting of two NACA 0012 airfoils, a four-element airfoil in its landing configuration, and an ONERA M6 wing. Inviscid time-dependent solutions are computed on these geometries using USM3D and the results are compared with standard test results obtained by other investigators. A grid convergence study is conducted for the NACA 0012 airfoil and compared with a structured grid. A structured grid is generated using the GRIDGEN software and inviscid solutions are computed using the CFL3D flow solver. The results obtained on the unstructured grid for the NACA 0012 airfoil showed an asymmetric distribution of flow quantities, and a fine grid distribution was required to remove this asymmetry. On the other hand, the structured grid predicted a very symmetric distribution; however, when the total number of points required to obtain the same results was compared, the structured grid required more grid points.
NASA Astrophysics Data System (ADS)
Julie, Hongki; Pasaribu, Udjianna S.; Pancoro, Adi
2015-12-01
This paper applies a Markov chain to genome segments shared identical by descent (IBD) by two individuals in a full-sib model. The full-sib model is a continuous-time Markov chain with three states. In the full-sib model, we derive the cumulative distribution function of the number of subsegments having 2 IBD haplotypes within a chromosome segment of length t Morgan, and the cumulative distribution function of the number of subsegments having at least 1 IBD haplotype within a chromosome segment of length t Morgan. These cumulative distribution functions are developed via the moment generating function.
NASA Astrophysics Data System (ADS)
Kawasaki, Shoji; Shimoda, Kazuki; Tanaka, Motohiro; Taoka, Hisao; Matsuki, Junya; Hayashi, Yasuhiro
Recently, the amount of distributed generation (DG), such as photovoltaic systems and wind power generator systems, installed in distribution systems has been increasing because of their reduced impact on the environment. However, harmonic problems in the distribution system are a concern, given the increasing number of DGs connected through inverters and the spread of power electronics equipment. In this paper, the authors propose a method to restrain the voltage total harmonic distortion (THD) in a whole distribution network by active filter (AF) operation of plural power conditioner systems (PCS). Moreover, the authors propose a determination method for the optimal gain of AF operation so as to minimize the maximum value of voltage THD in the distribution network by real-time feedback control with measured data from the information technology (IT) switches. In order to verify the validity of the proposed method, numerical calculations are carried out using an analytical model of a distribution network interconnected with DGs with PCS.
Comparison of algorithms to generate event times conditional on time-dependent covariates.
Sylvestre, Marie-Pierre; Abrahamowicz, Michal
2008-06-30
The Cox proportional hazards model with time-dependent covariates (TDC) is now a part of the standard statistical analysis toolbox in medical research. As new methods involving more complex modeling of time-dependent variables are developed, simulations could often be used to systematically assess the performance of these models. Yet, generating event times conditional on TDC requires well-designed and efficient algorithms. We compare two classes of such algorithms: permutational algorithms (PAs) and algorithms based on a binomial model. We also propose a modification of the PA to incorporate a rejection sampler. We performed a simulation study to assess the accuracy, stability, and speed of these algorithms in several scenarios. Both classes of algorithms generated data sets that, once analyzed, provided virtually unbiased estimates with comparable variances. In terms of computational efficiency, the PA with the rejection sampler reduced the time necessary to generate data by more than 50 per cent relative to alternative methods. The PAs also allowed more flexibility in the specification of the marginal distributions of event times and required less calibration.
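The binomial-model class of generators compared above can be sketched directly: discretize time and draw a Bernoulli event in each interval with a hazard that depends on the current covariate value. The hazard form and parameter values below are illustrative assumptions, not the paper's simulation settings.

```python
import numpy as np

# Sketch of a binomial-model event-time generator: an event occurs in
# interval t with probability lam0 * exp(beta * z[i, t]), capped at 1.
rng = np.random.default_rng(7)

def gen_event_times(z, beta, lam0):
    n, tmax = z.shape
    times = np.full(n, tmax + 1)          # tmax + 1 marks censoring at tmax
    for i in range(n):
        for t in range(tmax):
            hazard = min(1.0, lam0 * np.exp(beta * z[i, t]))
            if rng.random() < hazard:
                times[i] = t + 1
                break
    return times

n, tmax = 500, 100
z_hi = np.ones((n, tmax))                 # time-dependent covariate always "on"
z_lo = np.zeros((n, tmax))                # covariate always "off"
t_hi = gen_event_times(z_hi, beta=1.0, lam0=0.02)
t_lo = gen_event_times(z_lo, beta=1.0, lam0=0.02)
# Higher covariate values raise the hazard, so events occur earlier on average.
```

A permutational algorithm instead generates the marginal event times first and then matches them to covariate histories by weighted permutation, which is where the extra flexibility in marginal distributions noted above comes from.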
Reconfigurable generation and measurement of mutually unbiased bases for time-bin qudits
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lukens, Joseph M.; Islam, Nurul T.; Lim, Charles Ci Wen
Here, we propose a method for implementing mutually unbiased generation and measurement of time-bin qudits using a cascade of electro-optic phase modulator–coded fiber Bragg grating pairs. Our approach requires only a single spatial mode and can switch rapidly between basis choices. We obtain explicit solutions for dimensions d = 2, 3, and 4 that realize all d + 1 possible mutually unbiased bases and analyze the performance of our approach in quantum key distribution. Given its practicality and compatibility with current technology, our approach provides a promising springboard for scalable processing of high-dimensional time-bin states.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aponte, C.I.
F and H Tank Farms generate supernate- and sludge-contaminated low-level waste. The waste is collected, characterized, and packaged for disposal. Before the waste can be disposed of, however, it must be properly characterized. Since the radionuclide distribution in typical supernate is well known, its characterization is relatively straightforward and requires minimal effort. Non-routine waste, including potentially sludge-contaminated waste, requires much more effort to characterize effectively: the radionuclide distribution must be determined, and in some cases the waste can be contaminated by various sludge transfers with unique radionuclide distributions. In these cases, the characterization can require an extensive effort. Even after an extensive characterization effort, the container must still be prepared for shipping. Therefore, a significant amount of time may elapse from the time the waste is generated until the time of disposal, during which a tornado or high-wind scenario could occur. The purpose of this report is to determine the effect of a tornado on potentially sludge-contaminated waste, or transuranic (TRU) waste, in B-25s [large storage containers], to evaluate the potential impact on F and H Tank Farms, and to help establish a B-25 control program for tornado events.
van Maanen, Leendert; de Jong, Ritske; van Rijn, Hedderik
2014-01-01
When multiple strategies can be used to solve a type of problem, the observed response time distributions are often mixtures of multiple underlying base distributions, each representing one of these strategies. For the case of two possible strategies, the observed response time distributions obey the fixed-point property: there exists one response time that has the same probability of being observed irrespective of the actual mixture proportion of each strategy. In this paper we discuss how to compute this fixed point and how to statistically assess the probability that the observed response times are indeed generated by two competing strategies. Accompanying this paper is a free R package that can be used to compute and test the presence or absence of the fixed-point property in response time data, allowing for easy-to-use tests of strategic behavior. PMID:25170893
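The fixed point itself is simple to compute: it is the crossing point of the two base densities, since at any t* with f(t*) = g(t*) every mixture p·f + (1−p)·g takes the same value. A minimal sketch with two hypothetical Gaussian "strategy" distributions (the paper's R package, not this sketch, is the reference implementation):

```python
import math

def npdf(x, mu, sigma):
    """Normal probability density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Two hypothetical strategy RT distributions (times in seconds).
f = lambda t: npdf(t, 0.5, 0.1)   # fast strategy
g = lambda t: npdf(t, 0.8, 0.2)   # slow strategy

# The fixed point t* solves f(t*) = g(t*); locate it by bisection
# on f - g between the two means, where the sign changes.
lo, hi = 0.5, 0.8
for _ in range(80):
    mid = (lo + hi) / 2
    if (f(lo) - g(lo)) * (f(mid) - g(mid)) > 0:
        lo = mid
    else:
        hi = mid
t_star = (lo + hi) / 2

# Every mixture p*f + (1-p)*g has the same density at t*.
densities = [p * f(t_star) + (1 - p) * g(t_star)
             for p in (0.0, 0.25, 0.5, 0.75, 1.0)]
```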
A dynamic re-partitioning strategy based on the distribution of key in Spark
NASA Astrophysics Data System (ADS)
Zhang, Tianyu; Lian, Xin
2018-05-01
Spark is a memory-based distributed data processing framework capable of handling massive data sets, and it has become a focus of Big Data research. The performance of the Spark shuffle, however, depends on the distribution of the data: Spark's naive hash partition function cannot guarantee load balancing when the data are skewed, and job completion time is governed by the node with the most data to process. To handle this problem, dynamic sampling is used. During task execution, a histogram counts the key frequency distribution on each node, from which the global key frequency distribution is generated. After analyzing the distribution of keys, a balanced data partition is achieved. Results show that the dynamic re-partitioning function outperforms the default hash partition, Fine Partition, and the Balanced-Schedule strategy; it reduces task execution time and improves the efficiency of the whole cluster.
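The core idea, sampling key frequencies and then assigning keys to partitions so loads balance, can be sketched outside Spark. The greedy longest-processing-time assignment below is an illustrative stand-in for the paper's strategy, not Spark's actual partitioner API:

```python
from collections import Counter
import heapq

def balanced_partitioner(keys, n_partitions):
    """Build a key -> partition map from a sampled key histogram:
    assign the heaviest keys first, each to the currently least-loaded
    partition (a longest-processing-time heuristic)."""
    freq = Counter(keys)
    loads = [(0, p) for p in range(n_partitions)]  # (record count, partition id)
    heapq.heapify(loads)
    assignment = {}
    for key, count in freq.most_common():
        load, p = heapq.heappop(loads)   # least-loaded partition
        assignment[key] = p
        heapq.heappush(loads, (load + count, p))
    return assignment

# Skewed key distribution: "a" dominates, as in a hot-key shuffle.
keys = ["a"] * 90 + ["b"] * 5 + ["c"] * 3 + ["d"] * 2
assign = balanced_partitioner(keys, 2)
# The hot key gets a partition to itself; the light keys share the other.
```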
A Critique of the DoD Materiel Distribution Study,
1979-03-01
...are generated on order cycle times by their components: communication times, depot order processing times, depot capacity delay times, and transit... exceeded, the order was placed in one of three priority queues. The order processing time was determined by priority group by depot. A 20-point probability... time was defined to be the sum of communication, depot order processing, depot capacity delay, and transit times. As has been argued, the first three of...
Real-time modeling and simulation of distribution feeder and distributed resources
NASA Astrophysics Data System (ADS)
Singh, Pawan
The analysis of electrical systems dates back to the days when analog network analyzers were used. With the advent of digital computers, many programs were written for power-flow and short-circuit analysis to improve the electrical system. Real-time computer simulations can answer many what-if scenarios in an existing or proposed power system. In this thesis, the standard IEEE 13-node distribution feeder is developed and validated on the real-time platform OPAL-RT, and the concepts and challenges of real-time simulation are studied and addressed. Distributed energy resources, including commonly used distributed generation and storage devices such as a diesel engine, a solar photovoltaic array, and a battery storage system, are modeled and simulated on the real-time platform. A microgrid encompasses a portion of an electric power distribution system located downstream of the distribution substation. Normally, the microgrid operates in parallel with the grid; however, scheduled or forced isolation can take place, and in such conditions the microgrid must be able to operate stably and autonomously. Both operating modes, grid-connected and islanded, are studied in the last chapter. Finally, a simple microgrid controller for energy management and protection is developed, modeled, and simulated on the real-time platform.
Bergues Pupo, Ana E; Reyes, Juan Bory; Bergues Cabrales, Luis E; Bergues Cabrales, Jesús M
2011-09-24
Electrotherapy is a relatively well-established and efficient method of tumor treatment. In this paper we focus on analytical and numerical calculations of the potential and electric field distributions inside a tumor tissue in a two-dimensional model (2D model) generated by electrode arrays shaped as different conic sections (ellipse, parabola, and hyperbola). Analytical calculations of the potential and electric field distributions based on 2D models for different electrode arrays are performed by solving the Laplace equation, while the numerical solution is obtained by the finite element method in two dimensions. Both analytical and numerical solutions reveal significant differences between the electric field distributions generated by electrode arrays shaped as circles and as different conic sections (elliptic, parabolic, and hyperbolic). Electrode arrays with circular, elliptical, and hyperbolic shapes have the advantage of concentrating the electric field lines in the tumor. The mathematical approach presented in this study provides a useful tool for the design of electrode arrays with different conic-section shapes by means of the unifying principle. At the same time, we verify the good correspondence between the analytical and numerical solutions for the potential and electric field distributions generated by the electrode arrays with different conic sections.
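The numerical half of this approach can be illustrated with a minimal finite-difference Laplace solver, a simpler stand-in for the finite element method, with a hypothetical pair of parallel-plate electrodes rather than the conic-section arrays:

```python
# Solve Laplace's equation on a 2-D grid by Gauss-Seidel relaxation.
# Electrodes: left column at +1 V, right column at -1 V; the top and
# bottom rows are left at 0 V (grounded boundary).  Geometry and
# voltages are illustrative only.
n = 40
V = [[0.0] * n for _ in range(n)]
fixed = [[False] * n for _ in range(n)]
for i in range(n):
    V[i][0], fixed[i][0] = 1.0, True            # left electrode, +1 V
    V[i][n - 1], fixed[i][n - 1] = -1.0, True   # right electrode, -1 V

for _ in range(2000):  # relax until the interior is harmonic
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            if not fixed[i][j]:
                V[i][j] = 0.25 * (V[i + 1][j] + V[i - 1][j]
                                  + V[i][j + 1] + V[i][j - 1])

# Electric field component E_x = -dV/dx at the domain centre
# (central difference, grid spacing taken as 1).
Ex = (V[20][19] - V[20][21]) / 2.0
```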
Barlow, Aaron M.; Slepkov, Aaron D.; Ridsdale, Andrew; McGinn, Patrick J.; Stolow, Albert
2014-01-01
We consider multi-modal four-wave mixing microscopies to be ideal tools for the in vivo study of carotenoid distributions within the important biofuel microalga Haematococcus pluvialis. We show that hyperspectral coherent anti-Stokes Raman scattering (CARS) microscopy generates non-invasive, quantitative real-time concentration maps of intracellular carotenoid distributions in live algae. PMID:25360358
Ground States of Random Spanning Trees on a D-Wave 2X
NASA Astrophysics Data System (ADS)
Hall, J. S.; Hobl, L.; Novotny, M. A.; Michielsen, Kristel
The performances of two D-Wave 2 machines (476 and 496 qubits) and of a 1097-qubit D-Wave 2X were investigated. Each chip has a Chimera interaction graph G. Problem input consists of values for the fields h_j and for the two-qubit interactions J_ij of an Ising spin-glass problem formulated on G. Output is returned in terms of a spin configuration {s_j}, with s_j = ±1. We generated random spanning trees (RSTs) uniformly distributed over all spanning trees of G. On the 476-qubit D-Wave 2, RSTs were generated on the full chip with J_ij = -1 and h_j = 0 and solved one thousand times. The distribution of solution energies and the average magnetization of each qubit were determined. On both the 476- and 1097-qubit machines, four identical spanning trees were generated on each quadrant of the chip. The statistical independence of these regions was investigated. In another study, on the D-Wave 2X, one hundred RSTs with random J_ij ∈ {-1, 1} and h_j = 0 were generated on the full chip. Each RST problem was solved one hundred times, and the number of times the ground-state energy was found was recorded. This procedure was repeated for square subgraphs with dimensions ranging from 7×7 to 11×11. Supported in part by NSF Grants DGE-0947419 and DMR-1206233. D-Wave time provided by D-Wave Systems and by the USRA Quantum Artificial Intelligence Laboratory Research Opportunity.
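Sampling spanning trees uniformly, as the study requires, is commonly done with Wilson's loop-erased random walk algorithm. The sketch below runs it on a small grid graph as a toy stand-in for the Chimera graph; the authors' actual generation method is not specified in the abstract:

```python
import random

def wilson_rst(nodes, neighbors, rng):
    """Wilson's loop-erased random walk algorithm: returns a spanning
    tree drawn uniformly from all spanning trees of the graph, as a
    child -> parent map rooted at nodes[0]."""
    root = nodes[0]
    in_tree = {root}
    parent = {}
    for start in nodes[1:]:
        if start in in_tree:
            continue
        succ = {}
        u = start
        while u not in in_tree:                  # walk until the tree is hit
            succ[u] = rng.choice(neighbors[u])   # overwriting erases loops
            u = succ[u]
        u = start
        while u not in in_tree:                  # retrace the loop-erased path
            parent[u] = succ[u]
            in_tree.add(u)
            u = succ[u]
    return parent

# 4x4 grid graph as a toy stand-in for the Chimera graph.
n = 4
nodes = [(i, j) for i in range(n) for j in range(n)]
neighbors = {(i, j): [(i + di, j + dj)
                      for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                      if 0 <= i + di < n and 0 <= j + dj < n]
             for i, j in nodes}
tree = wilson_rst(nodes, neighbors, random.Random(7))
```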
Investigation of the delay time distribution of high power microwave surface flashover
NASA Astrophysics Data System (ADS)
Foster, J.; Krompholz, H.; Neuber, A.
2011-01-01
Characterizing and modeling the statistics associated with the initiation of gas breakdown has proven difficult owing to a variety of rather unexplored phenomena. Experimental conditions for high power microwave window breakdown at pressures from roughly 100 to several hundred torr are complex: there are few to no naturally occurring free electrons in the breakdown region. The initial electron generation rate, from an external source, for example, is time dependent, as is the charge carrier amplification in the increasing radio frequency (RF) field amplitude, whose 50 ns rise time can be on the same order as the breakdown delay time. The probability of reaching a critical electron density within a given time period is composed of the statistical waiting time for the appearance of initiating electrons in the high-field region and the build-up of an avalanche with an inherent statistical distribution of the electron number. High power microwave breakdown and its delay time are of critical importance, since breakdown limits transmission through necessary windows, especially for high-power, high-altitude, low-pressure applications. The delay time distribution of pulsed high power microwave surface flashover has been examined for nitrogen and argon as test gases at pressures ranging from 60 to 400 torr, with and without external UV illumination, and a model has been developed for predicting the discharge delay time under these conditions. The results indicate that field-induced electron generation, other than standard field emission, plays a dominant role, which might hold for other gas discharge types as well.
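The composition of the delay time out of a statistical waiting time and an avalanche build-up can be caricatured in a small Monte Carlo: a seed electron appears via an inhomogeneous Poisson process whose rate ramps up with the RF field, followed by a formative time. All rates and times below are illustrative, not fitted to the experiment:

```python
import random

def delay_time(rng, rate_max=0.05, rise=50.0, formative=20.0):
    """One Monte Carlo draw of the breakdown delay (all times in ns).
    Seed-electron appearance: inhomogeneous Poisson process whose rate
    ramps linearly from 0 to rate_max over the RF rise time, sampled by
    thinning.  A fixed formative (avalanche build-up) time is added."""
    t = 0.0
    while True:
        t += rng.expovariate(rate_max)          # candidate at full rate
        if rng.random() < min(t / rise, 1.0):   # accept w.p. rate(t)/rate_max
            return t + formative

rng = random.Random(3)
samples = [delay_time(rng) for _ in range(2000)]
mean_delay = sum(samples) / len(samples)
```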
Recent progress in distributed optical fiber Raman photon sensors at China Jiliang University
NASA Astrophysics Data System (ADS)
Zhang, Zaixuan; Wang, Jianfeng; Li, Yi; Gong, Huaping; Yu, Xiangdong; Liu, Honglin; Jin, Yongxing; Kang, Juan; Li, Chenxia; Zhang, Wensheng; Zhang, Wenping; Niu, Xiaohui; Sun, Zhongzhou; Zhao, Chunliu; Dong, Xinyong; Jin, Shangzhong
2012-06-01
A brief review of recent progress in the research, production, and applications of full distributed fiber Raman photon sensors at China Jiliang University (CJLU) is presented. In order to improve the measurement distance, the accuracy, the spatial resolution, the ability of multi-parameter measurement, and the intelligence of full distributed fiber sensor systems, a new-generation fiber sensor technology based on the optical fiber nonlinear scattering fusion principle is proposed. A series of new-generation full distributed fiber sensors are investigated and designed, which consist of new-generation ultra-long-distance full distributed fiber Raman and Rayleigh scattering photon sensors integrated with a fiber Raman amplifier, auto-correction full distributed fiber Raman photon temperature sensors based on Raman correlation dual sources, full distributed fiber Raman photon temperature sensors based on a pulse-coded source, full distributed fiber Raman photon temperature sensors using a fiber Raman wavelength shifter, a new type of Brillouin optical time-domain analyzer (BOTDA) integrated with a fiber Raman amplifier replacing the fiber Brillouin amplifier, full distributed fiber Raman and Brillouin photon sensors integrated with a fiber Raman amplifier, and full distributed fiber Brillouin photon sensors integrated with a fiber Brillouin frequency shifter. The Internet of Things is regarded as a candidate for the next technological revolution and is driving very large markets; sensor networks are important components of the Internet of Things. The full distributed optical fiber sensor network (Rayleigh, Raman, and Brillouin scattering) is a 3S (smart materials, smart structure, and smart skill) system, which makes it easy to construct smart fiber sensor networks.
The distributed optical fiber sensor can be embedded in the power grids, railways, bridges, tunnels, roads, constructions, water supply systems, dams, oil and gas pipelines and other facilities, and can be integrated with wireless networks.
NASA Astrophysics Data System (ADS)
Li, Zishen; Wang, Ningbo; Li, Min; Zhou, Kai; Yuan, Yunbin; Yuan, Hong
2017-04-01
The Earth's ionosphere is the part of the atmosphere stretching from an altitude of about 50 km to more than 1000 km. When a Global Navigation Satellite System (GNSS) signal emitted from a satellite travels through the ionosphere before reaching a receiver on or near the Earth's surface, the signal is significantly delayed, and this delay has been considered one of the major errors in GNSS measurement. A real-time global ionospheric map calculated from real-time data obtained by global stations is an essential means of mitigating the ionospheric delay for real-time positioning. Generating an accurate global ionospheric map generally depends on a dense distribution of global stations; however, the number of global stations that can produce real-time data is very limited at present, which makes generating a highly accurate global ionospheric map very difficult when using only the current stations with real-time data. In view of this, a new approach is proposed for calculating the real-time global ionospheric map based only on the current stations with real-time data. The approach builds on the post-processed and one-day-predicted global ionospheric maps from our research group. Its performance is tested with the current global stations providing real-time data, and the results are compared with the IGS-released final global ionospheric map products.
Generating and using truly random quantum states in Mathematica
NASA Astrophysics Data System (ADS)
Miszczak, Jarosław Adam
2012-01-01
The problem of generating random quantum states is of great interest from the quantum information theory point of view. In this paper we present a package for the Mathematica computing system harnessing a specific piece of hardware, namely the Quantis quantum random number generator (QRNG), for investigating statistical properties of quantum states. The described package implements a number of functions for generating random states, which use the Quantis QRNG as a source of randomness. It also provides procedures which can be used in simulations not related directly to quantum information processing. Program summary: Program title: TRQS. Catalogue identifier: AEKA_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKA_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 7924. No. of bytes in distributed program, including test data, etc.: 88 651. Distribution format: tar.gz. Programming language: Mathematica, C. Computer: requires a Quantis quantum random number generator (QRNG, http://www.idquantique.com/true-random-number-generator/products-overview.html) and a computer supporting a recent version of Mathematica. Operating system: any platform supporting Mathematica; tested with GNU/Linux (32 and 64 bit). RAM: case dependent. Classification: 4.15. Nature of problem: generation of random density matrices. Solution method: use of a physical quantum random number generator. Running time: generating 100 random numbers takes about 1 second; generating 1000 random density matrices takes more than a minute.
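A standard software recipe for the package's core task, generating a random density matrix, is the Ginibre-ensemble construction ρ = GG†/Tr(GG†). The sketch below uses Python's pseudorandom generator in place of the Quantis hardware QRNG, so it illustrates the construction rather than the package's true-randomness guarantee:

```python
import random

def random_density_matrix(d, rng):
    """Ginibre-ensemble random density matrix: rho = G G† / Tr(G G†),
    where G has i.i.d. complex Gaussian entries.  rho is Hermitian,
    positive semidefinite, and has unit trace by construction."""
    G = [[complex(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(d)]
         for _ in range(d)]
    # M = G G†  (so M[i][j] = sum_k G[i][k] * conj(G[j][k]))
    M = [[sum(G[i][k] * G[j][k].conjugate() for k in range(d))
          for j in range(d)] for i in range(d)]
    tr = sum(M[i][i].real for i in range(d))
    return [[M[i][j] / tr for j in range(d)] for i in range(d)]

rho = random_density_matrix(3, random.Random(0))
```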
Spatiotemporal modelling and mapping of the bubonic plague epidemic in India.
Yu, Hwa-Lung; Christakos, George
2006-03-17
This work studies the spatiotemporal evolution of bubonic plague in India during 1896-1906 using stochastic concepts and geographical information science techniques. In the past, most investigations focused on selected cities to conduct different kinds of studies, such as the ecology of rats. No detailed maps existed incorporating the space-time dependence structure and uncertainty sources of the epidemic system and providing a composite space-time picture of the disease propagation characteristics. Informative spatiotemporal maps were generated that represented mortality rates and geographical spread of the disease, and epidemic indicator plots were derived that offered meaningful characterizations of the spatiotemporal disease distribution. The bubonic plague in India exhibited strong seasonal and geographical features. During its entire duration, the plague continued to invade new geographical areas, while it followed a re-emergence pattern at many localities; its rate changed significantly during each year and the mortality distribution exhibited space-time heterogeneous patterns; prevalence usually occurred in the autumn and spring, whereas the plague stopped moving towards new locations during the summers. Modern stochastic modelling and geographical information science provide powerful means to study the spatiotemporal distribution of the bubonic plague epidemic under conditions of uncertainty and multi-sourced databases; to account for various forms of interdisciplinary knowledge; and to generate informative space-time maps of mortality rates and propagation patterns. To the best of our knowledge, such plague maps and plots have become available for the first time, providing novel perspectives concerning the distribution and space-time propagation of the deadly epidemic. Furthermore, systematic maps and indicator plots make possible the comparison of the spatiotemporal propagation patterns of different diseases.
Susong, D.; Marks, D.; Garen, D.
1999-01-01
Topographically distributed energy- and water-balance models can accurately simulate both the development and melting of a seasonal snowcover in mountain basins. To do this they require time-series climate surfaces of air temperature, humidity, wind speed, precipitation, and solar and thermal radiation. If data are available, these parameters can be adequately estimated at time steps of one to three hours. Unfortunately, climate monitoring in mountain basins is very limited, and the full range of elevations and exposures that affect climate conditions, snow deposition, and melt is seldom sampled. Detailed time-series climate surfaces have been successfully developed using limited data and relatively simple methods. We present a synopsis of the tools and methods used to combine limited data with simple corrections for the topographic controls to generate high temporal resolution time-series images of these climate parameters. Methods used include simulations, elevational gradients, and detrended kriging. The generated climate surfaces are evaluated at points and spatially to determine if they are reasonable approximations of actual conditions. Recommendations are made for the addition of critical parameters and measurement sites into routine monitoring systems in mountain basins.
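The elevational-gradient plus detrended-interpolation idea can be sketched simply: remove an elevation trend from station values, interpolate the residuals, and restore the trend at each grid cell's elevation. Inverse-distance weighting stands in for kriging here, and the lapse rate and station values are hypothetical:

```python
def distribute_temperature(stations, grid, lapse=-0.0065):
    """Detrended interpolation sketch: subtract an elevational trend
    (fixed lapse rate, degC per metre), interpolate the residuals by
    inverse-distance weighting (a stand-in for kriging), then restore
    the trend at each grid cell's elevation."""
    resid = [(x, y, t - lapse * z) for x, y, z, t in stations]
    out = []
    for x, y, z in grid:
        num = den = 0.0
        for sx, sy, r in resid:
            d2 = (x - sx) ** 2 + (y - sy) ** 2
            if d2 == 0:            # grid cell coincides with a station
                num, den = r, 1.0
                break
            w = 1.0 / d2
            num += w * r
            den += w
        out.append(num / den + lapse * z)
    return out

# Hypothetical stations: (x_km, y_km, elevation_m, temperature_C).
stations = [(0, 0, 1000, 5.0), (10, 0, 2000, -1.0), (0, 10, 1500, 2.0)]
# Higher grid cells come out colder, as the lapse-rate trend dictates.
temps = distribute_temperature(stations, [(5, 5, 1800), (1, 1, 1200)])
```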
Improved Results for Route Planning in Stochastic Transportation Networks
NASA Technical Reports Server (NTRS)
Boyan, Justin; Mitzenmacher, Michael
2000-01-01
In the bus network problem, the goal is to generate a plan for getting from point X to point Y within a city using buses in the smallest expected time. Because bus arrival times are not determined by a fixed schedule but instead may be random, the problem requires more than standard shortest-path techniques. In recent work, Datar and Ranade provide algorithms for the case where bus arrivals are assumed to be independent and exponentially distributed. We offer solutions to two important generalizations of the problem, answering open questions posed by Datar and Ranade. First, we provide a polynomial-time algorithm for a much wider class of arrival distributions, namely those with increasing failure rate. This class includes not only exponential distributions but also uniform, normal, and gamma distributions. Second, in the case where bus arrival times are independent geometric discrete random variables, we provide an algorithm for transportation networks of buses and trains, where trains run according to a fixed schedule.
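At a single stop with exponentially distributed arrivals, the expected-time calculation that such plans build on reduces to choosing which subset of buses is worth boarding. The brute-force sketch below illustrates the Datar-Ranade setting only; it is not the paper's polynomial-time algorithm, and the rates and onward times are hypothetical:

```python
from itertools import combinations

def best_expected_time(buses):
    """Each bus is (rate, onward): its exponential arrival rate and the
    expected remaining travel time after boarding it.  For a chosen
    subset S, the first arrival comes after expected wait 1/sum(rates),
    and is bus i with probability rate_i / sum(rates).  Minimize over
    all non-empty subsets."""
    best = float("inf")
    for r in range(1, len(buses) + 1):
        for S in combinations(buses, r):
            total_rate = sum(lam for lam, _ in S)
            t = (1 / total_rate
                 + sum(lam * onward for lam, onward in S) / total_rate)
            best = min(best, t)
    return best

# Hypothetical stop: a rare-but-fast express vs. a frequent-but-slow local.
# Waiting exclusively for the express (10 + 10 = 20) beats the local (27)
# and beats boarding whichever comes first (~24.17).
t = best_expected_time([(0.1, 10.0), (0.5, 25.0)])
```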
Wave generation by contaminant ions near a large spacecraft
NASA Technical Reports Server (NTRS)
Singh, N.
1993-01-01
Measurements from space shuttle flights have revealed that a large spacecraft in low Earth orbit is accompanied by an extensive gas cloud made up primarily of water. Charge exchange between water molecules and ionospheric O+ ions produces a water ion beam traversing downstream of the spacecraft. In this report we present results from a study of the generation of plasma waves by the interaction of the water ion beams with the ionospheric plasma. Since the velocity distribution function is key to understanding the wave generation process, we have performed a test-particle simulation to determine the nature of the H2O+ ion velocity distribution function. The simulations show that on time scales shorter than the ion cyclotron period τ_c, the distribution function can be described by a beam; on time scales larger than τ_c, a ring distribution forms. A brief description of the linear instabilities driven by an ion beam streaming across a magnetic field in a plasma is presented. We have identified two types of instabilities occurring in low- and high-frequency bands; the low-frequency instability occurs over the band from zero to about the lower hybrid frequency for a sufficiently low beam density. As the beam density increases, the linear instability occurs at decreasing frequencies below the lower hybrid frequency. The high-frequency instability occurs near the electron cyclotron frequency and its harmonics.
NASA Technical Reports Server (NTRS)
Isenberg, P. A.
1995-01-01
Intense MHD waves generated by the isotropization of interstellar pickup protons were predicted by Lee and Ip (1987) to appear in the solar wind whenever pickup proton fluxes were high enough. However, in reality these waves have proved surprisingly difficult to identify, even in the presence of observed pickup protons. We investigate the wave excitation by isotropization from an initially broad pitch-angle distribution instead of the narrow ring-beam assumed by Lee and Ip. The pitch angle of a newly ionized proton is given by θ_0, the angle between the magnetic field (averaged over a pickup proton gyroradius) and the solar wind flow at the time of ionization. Then, a broadened distribution results from spatial transport of pickup protons prior to isotropization from regions upstream along the field containing different values of θ_0. The value of θ_0 will vary as a result of the ambient long-wavelength fluctuations in the solar wind. Thus, the range of initial pitch angles is directly related to the amplitude of these fluctuations within a length scale determined by the isotropization time. We show that a broad initial pitch-angle distribution can significantly modify the intensity and shape of the pickup-proton-generated wave spectrum, and we derive a criterion for the presence of observable pickup-proton-generated waves given the intensity of the ambient long-wavelength fluctuations.
Effects of the infectious period distribution on predicted transitions in childhood disease dynamics
Krylova, Olga; Earn, David J. D.
2013-01-01
The population dynamics of infectious diseases occasionally undergo rapid qualitative changes, such as transitions from annual to biennial cycles or to irregular dynamics. Previous work, based on the standard seasonally forced ‘susceptible–exposed–infectious–removed’ (SEIR) model has found that transitions in the dynamics of many childhood diseases result from bifurcations induced by slow changes in birth and vaccination rates. However, the standard SEIR formulation assumes that the stage durations (latent and infectious periods) are exponentially distributed, whereas real distributions are narrower and centred around the mean. Much recent work has indicated that realistically distributed stage durations strongly affect the dynamical structure of seasonally forced epidemic models. We investigate whether inferences drawn from previous analyses of transitions in patterns of measles dynamics are robust to the shapes of the stage duration distributions. As an illustrative example, we analyse measles dynamics in New York City from 1928 to 1972. We find that with a fixed mean infectious period in the susceptible–infectious–removed (SIR) model, the dynamical structure and predicted transitions vary substantially as a function of the shape of the infectious period distribution. By contrast, with fixed mean latent and infectious periods in the SEIR model, the shapes of the stage duration distributions have a less dramatic effect on model dynamical structure and predicted transitions. All these results can be understood more easily by considering the distribution of the disease generation time as opposed to the distributions of individual disease stages. Numerical bifurcation analysis reveals that for a given mean generation time the dynamics of the SIR and SEIR models for measles are nearly equivalent and are insensitive to the shapes of the disease stage distributions. PMID:23676892
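The role of the stage-duration distribution can be illustrated with the "linear chain trick": an Erlang(n) infectious period is equivalent to passing through n exponential substages with the same total mean. The minimal SIR sketch below uses illustrative (not measles-fitted) parameters and simple forward-Euler integration:

```python
def sir_erlang(n_stages, mean_inf=5.0, beta=0.5, days=200, dt=0.01):
    """SIR model with the infectious period split into n_stages
    exponential substages (linear chain trick), so the total infectious
    period is Erlang(n_stages) with fixed mean mean_inf days.
    n_stages = 1 recovers the standard exponentially distributed SIR."""
    S, R = 0.99, 0.0
    I = [0.01 / n_stages] * n_stages
    rate = n_stages / mean_inf     # per-substage exit rate
    for _ in range(int(days / dt)):
        infectious = sum(I)
        new_inf = beta * S * infectious
        flows = [rate * x for x in I]
        S -= new_inf * dt
        I[0] += (new_inf - flows[0]) * dt
        for k in range(1, n_stages):
            I[k] += (flows[k - 1] - flows[k]) * dt
        R += flows[-1] * dt        # recovery from the last substage
    return S, sum(I), R

exp_run = sir_erlang(1)    # exponential infectious period (standard SIR)
erl_run = sir_erlang(10)   # narrow Erlang(10) period, same mean
```

Both runs share the same mean infectious period (and hence the same R0 = beta × mean), so their final sizes agree, while the transient dynamics differ, which is the distinction the paper's generation-time argument explains.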
Aerodynamic Efficiency Analysis on Modified Drag Generator of Tanker-Ship Using Symmetrical Airfoil
NASA Astrophysics Data System (ADS)
Moranova, Starida; Rahmat Hadiyatul A., S. T.; Indra Permana S., S. T.
2018-04-01
Reducing the time a tanker ship spends at sea would help address problems that occur in oil and gas distribution, such as late deliveries and oil spills. Aerodynamic redesign of the parts considered to be drag generators is presumed to be one of the solutions, in line with the demand for increased speed. This paper suggests two more aerodynamic designs of a part of the tanker that is considered a drag generator, and reports the drag generated by the baseline design and by the suggested aerodynamic designs. The new designs are made by adding a NACA airfoil section to the cross section of the drag generator. The scenario assumes a tanker speed of 39 km/h and neglects the hydrodynamic effects on the tanker by cutting the hull at the waterline, which separates the air drag from the water drag. The drag produced by each design is calculated by a Computational Fluid Dynamics method.
The spatial distribution and time evolution of impact-generated magnetic fields
NASA Technical Reports Server (NTRS)
Crawford, D. A.; Schultz, P. H.
1991-01-01
Laboratory hypervelocity impacts into easily vaporized targets revealed the production of magnetic fields. As quantified by pressure measurements, high-frame-rate photography, and electrostatic probes, these impacts tend to produce large quantities of slightly ionized vapor, referred to as impact-generated plasma. Nonaligned electron density and temperature gradients within this plasma may lead to production of the observed magnetic fields. Past experiments were limited to measuring a single component of the impact-generated magnetic fields at only a few locations about the developing impact crater and consequently gave little information about the field production mechanism. To understand this mechanism, the techniques were extended to map the three components of the magnetic field in both space and time. By conducting many otherwise identical experiments with arrayed magnetic detectors, a preliminary 3-D picture was produced of impact-generated magnetic fields as they develop through time.
Lee, Jasper; Zhang, Jianguo; Park, Ryan; Dagliyan, Grant; Liu, Brent; Huang, H K
2012-07-01
A Molecular Imaging Data Grid (MIDG) was developed to address current informatics challenges in the archiving, sharing, search, and distribution of preclinical imaging studies between animal imaging facilities and investigator sites. This manuscript presents a 2nd generation MIDG, replacing the Globus Toolkit with a new system architecture that implements the IHE XDS-i integration profile. Implementation and evaluation were conducted using a 3-site interdisciplinary test-bed at the University of Southern California. The 2nd generation MIDG design architecture replaces the initial design's Globus Toolkit with dedicated web services and XML-based messaging for dedicated management and delivery of multi-modality DICOM imaging datasets. The Cross-enterprise Document Sharing for Imaging (XDS-i) integration profile from the field of enterprise radiology informatics was adopted into the MIDG design because streamlined image registration, management, and distribution dataflow are needed in preclinical imaging informatics systems just as in enterprise PACS applications. Implementation of the MIDG is demonstrated at the University of Southern California Molecular Imaging Center (MIC) and two other sites with specified hardware, software, and network bandwidth. Evaluation of the MIDG involves data upload, download, and fault-tolerance testing scenarios using multi-modality animal imaging datasets collected at the USC Molecular Imaging Center. The upload, download, and fault-tolerance tests of the MIDG were performed multiple times using 12 collected animal study datasets. Upload and download times demonstrated reproducibility and improved real-world performance. Fault-tolerance tests showed that automated failover between Grid Node Servers has minimal impact on normal download times.
Building upon the 1st generation concepts and experiences, the 2nd generation MIDG system improves accessibility of disparate animal-model molecular imaging datasets to users outside a molecular imaging facility's LAN using a new architecture, dataflow, and dedicated DICOM-based management web services. The productivity and efficiency of preclinical research for translational sciences investigators have been further streamlined for multi-center study data registration, management, and distribution.
Novel mechanism of network protection against the new generation of cyber attacks
NASA Astrophysics Data System (ADS)
Milovanov, Alexander; Bukshpun, Leonid; Pradhan, Ranjit
2012-06-01
A new intelligent mechanism is presented to protect networks against the new generation of cyber attacks. This mechanism integrates TCP/UDP/IP protocol stack protection and attacker/intruder deception to eliminate existing TCP/UDP/IP protocol stack vulnerabilities. It can detect currently undetectable, highly distributed, low-frequency attacks such as distributed denial-of-service (DDoS) attacks, coordinated attacks, botnets, and stealth network reconnaissance. The mechanism also allows the attacker/intruder to be isolated from the network and the attack to be redirected to a simulated network acting as a decoy. As a result, network security personnel gain sufficient time to defend the network and collect the attack information. The presented approach can be incorporated into wireless or wired networks that require protection against known attacks and the new generation of cyber attacks.
Statistical properties of Charney-Hasegawa-Mima zonal flows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Johan, E-mail: anderson.johan@gmail.com; Botha, G. J. J.
2015-05-15
A theoretical interpretation of numerically generated probability density functions (PDFs) of intermittent plasma transport events in unforced zonal flows is provided within the Charney-Hasegawa-Mima (CHM) model. The governing equation is solved numerically with various prescribed density gradients that are designed to produce different configurations of parallel and anti-parallel streams. Long-lasting vortices form whose flow is governed by the zonal streams. It is found that the numerically generated PDFs can be matched with analytical predictions of PDFs based on the instanton method by removing the autocorrelations from the time series. In many instances, the statistics generated by the CHM dynamics relax to Gaussian distributions for both the electrostatic and vorticity perturbations, whereas in areas with strong nonlinear interactions it is found that the PDFs are exponentially distributed.
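The key preprocessing step mentioned above, removing autocorrelations before comparing numerical PDFs with analytical predictions, can be approximated by subsampling the series at its estimated correlation length. A simplified stand-in for the paper's procedure:

```python
import numpy as np

def decorrelate(x, threshold=0.05):
    """Subsample a time series at the first lag where its sample
    autocorrelation falls below `threshold`, so the surviving points are
    approximately independent and their histogram can be compared against
    an analytical PDF. Returns the centered subsampled series and the lag."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    var = np.dot(x, x) / len(x)
    tau = 1
    while tau < len(x) - 1:
        acf = np.dot(x[:-tau], x[tau:]) / ((len(x) - tau) * var)
        if acf < threshold:
            break
        tau += 1
    return x[::tau], tau
```

Applied to a strongly correlated series such as an AR(1) process, the subsampled points show almost no residual lag-1 correlation, which is what makes a fair PDF comparison possible.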
On the Use of the Beta Distribution in Probabilistic Resource Assessments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Olea, Ricardo A., E-mail: olea@usgs.gov
2011-12-15
The triangular distribution is a popular choice when it comes to modeling bounded continuous random variables. Its wide acceptance derives mostly from its simple analytic properties and the ease with which modelers can specify its three parameters through the extremes and the mode. On the negative side, hardly any real process follows a triangular distribution, which from the outset puts at a disadvantage any model employing triangular distributions. At a time when numerical techniques such as the Monte Carlo method are displacing analytic approaches in stochastic resource assessments, easy specification remains the most attractive characteristic of the triangular distribution. The beta distribution is another continuous distribution defined within a finite interval offering wider flexibility in style of variation, thus allowing consideration of models in which the random variables closely follow the observed or expected styles of variation. Despite its more complex definition, generation of values following a beta distribution is as straightforward as generating values following a triangular distribution, leaving the selection of parameters as the main impediment to practically considering beta distributions. This contribution intends to promote the acceptance of the beta distribution by explaining its properties and offering several suggestions to facilitate the specification of its two shape parameters. In general, given the same distributional parameters, use of the beta distribution in stochastic modeling may yield significantly different results, yet better estimates, than the triangular distribution.
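One common way to make the beta specification as easy as the triangular one is to let the modeler give the mode and a concentration kappa = alpha + beta. This mapping is one convention among several, offered here as an illustration rather than the paper's recommended scheme:

```python
import numpy as np

def beta_params_from_mode(mode, concentration):
    """Map a mode on [0, 1] and a concentration kappa = alpha + beta to
    beta-distribution shape parameters, using the identity
    mode = (alpha - 1) / (alpha + beta - 2) for alpha, beta > 1."""
    alpha = 1.0 + mode * (concentration - 2.0)
    beta = 1.0 + (1.0 - mode) * (concentration - 2.0)
    return alpha, beta

def sample_bounded(low, high, mode, concentration, n, rng):
    """Draw n values on [low, high] whose rescaled distribution is the
    beta specified above; as easy to generate as a triangular draw."""
    m = (mode - low) / (high - low)     # mode rescaled to [0, 1]
    a, b = beta_params_from_mode(m, concentration)
    return low + (high - low) * rng.beta(a, b, n)
```

Larger concentrations narrow the density around the mode, giving the modeler the flexibility in "style of variation" that the triangular distribution lacks.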
Ahn, Hyo-Sung; Kim, Byeong-Yeon; Lim, Young-Hun; Lee, Byung-Hun; Oh, Kwang-Kyo
2018-03-01
This paper proposes three coordination laws for optimal energy generation and distribution in an energy network composed of a physical flow layer and a cyber communication layer. Energy flows through the physical layer, but its generation and flow are coordinated by distributed algorithms on the basis of communication information. First, distributed energy generation and energy distribution laws are proposed in a decoupled manner, without considering the interaction between energy generation and energy distribution. Second, a joint coordination law is designed that treats energy generation and energy distribution in a coupled manner, taking account of their interaction. Third, to handle over- or under-generation, an energy distribution law for networks with batteries is designed. The coordination laws proposed in this paper are fully distributed in the sense that decisions are made optimally using only relative information among neighboring nodes. Numerical simulations illustrate the validity of the proposed distributed coordination laws.
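A toy version of such a coordination law is incremental-cost consensus for quadratic generation costs cost_i(p) = a_i p^2 + b_i p: nodes agree on a common marginal price through neighbor averaging. For brevity this sketch treats the network-wide power imbalance as a known scalar, whereas the paper's laws use only neighbor-relative information; all names and parameters here are illustrative:

```python
import numpy as np

def consensus_dispatch(a, b, demand, A, iters=3000, eps=0.01):
    """Each node i keeps a marginal-price estimate lam[i]. At every step
    the estimates are averaged over graph neighbors (adjacency matrix A)
    and nudged in proportion to the power imbalance. At the fixed point
    all lam[i] agree and total generation meets demand, which is the
    optimality condition for quadratic costs."""
    n = len(a)
    L = np.diag(A.sum(axis=1)) - A      # graph Laplacian
    W = np.eye(n) - 0.25 * L            # consensus weight matrix
    lam = np.zeros(n)
    p = np.zeros(n)
    for _ in range(iters):
        p = (lam - b) / (2 * a)         # locally optimal output at price lam
        lam = W @ lam + eps * (demand - p.sum())
    return p, lam
```

For a three-node path graph with a = (0.5, 1.0, 0.75), b = (1, 2, 1.5) and demand 10, the iteration settles at a common price of 6 and outputs (5, 2, 3), matching the centralized optimum.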
WASTE HANDLING BUILDING ELECTRICAL SYSTEM DESCRIPTION DOCUMENT
DOE Office of Scientific and Technical Information (OSTI.GOV)
S.C. Khamamkar
2000-06-23
The Waste Handling Building Electrical System performs the function of receiving, distributing, transforming, monitoring, and controlling AC and DC power to all waste handling building electrical loads. The system distributes normal electrical power to support all loads that are within the Waste Handling Building (WHB). The system also generates and distributes emergency power to support designated emergency loads within the WHB within specified time limits. The system provides the capability to transfer between normal and emergency power. The system provides emergency power via independent and physically separated distribution feeds from the normal supply. The designated emergency electrical equipment will be designed to operate during and after design basis events (DBEs). The system also provides lighting, grounding, and lightning protection for the Waste Handling Building. The system is located in the Waste Handling Building System. The system consists of a diesel generator, power distribution cables, transformers, switch gear, motor controllers, power panel boards, lighting panel boards, lighting equipment, lightning protection equipment, control cabling, and grounding system. Emergency power is generated with a diesel generator located in a QL-2 structure and connected to the QL-2 bus. The Waste Handling Building Electrical System distributes and controls primary power to acceptable industry standards, and with a dependability compatible with waste handling building reliability objectives for non-safety electrical loads. It also generates and distributes emergency power to the designated emergency loads. The Waste Handling Building Electrical System receives power from the Site Electrical Power System. The primary material handling power interfaces include the Carrier/Cask Handling System, Canister Transfer System, Assembly Transfer System, Waste Package Remediation System, and Disposal Container Handling Systems.
The system interfaces with the MGR Operations Monitoring and Control System for supervisory monitoring and control signals. The system interfaces with all facility support loads such as heating, ventilation, and air conditioning, office, fire protection, monitoring and control, safeguards and security, and communications subsystems.
Multi-Time Step Service Restoration for Advanced Distribution Systems and Microgrids
Chen, Bo; Chen, Chen; Wang, Jianhui; ...
2017-07-07
Modern power systems are facing increased risk of disasters that can cause extended outages. The presence of remote control switches (RCSs), distributed generators (DGs), and energy storage systems (ESS) provides both challenges and opportunities for developing post-fault service restoration methodologies. Inter-temporal constraints of DGs, ESS, and loads under cold load pickup (CLPU) conditions impose extra complexity on problem formulation and solution. In this paper, a multi-time step service restoration methodology is proposed to optimally generate a sequence of control actions for controllable switches, ESSs, and dispatchable DGs to assist the system operator with decision making. The restoration sequence is determined to minimize the unserved customers by energizing the system step by step without violating operational constraints at each time step. The proposed methodology is formulated as a mixed-integer linear programming (MILP) model and can adapt to various operation conditions. Furthermore, the proposed method is validated through several case studies that are performed on modified IEEE 13-node and IEEE 123-node test feeders.
Communication library for run-time visualization of distributed, asynchronous data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rowlan, J.; Wightman, B.T.
1994-04-01
In this paper we present a method for collecting and visualizing data generated by a parallel computational simulation during run time. Data distributed across multiple processes is sent across parallel communication lines to a remote workstation, which sorts and queues the data for visualization. We have implemented our method in a set of tools called PORTAL (for Parallel aRchitecture data-TrAnsfer Library). The tools comprise generic routines for sending data from a parallel program (callable from either C or FORTRAN), a semi-parallel communication scheme currently built upon Unix Sockets, and a real-time connection to the scientific visualization program AVS. Our method is most valuable when used to examine large datasets that can be efficiently generated and do not need to be stored on disk. The PORTAL source libraries, detailed documentation, and a working example can be obtained by anonymous ftp from info.mcs.anl.gov from the file portal.tar.Z from the directory pub/portal.
New activity pattern in human interactive dynamics
NASA Astrophysics Data System (ADS)
Formentin, Marco; Lovison, Alberto; Maritan, Amos; Zanzotto, Giovanni
2015-09-01
We investigate the response function of human agents as demonstrated by written correspondence, uncovering a new pattern for how the reactive dynamics of individuals is distributed across the set of each agent's contacts. In long-term empirical data on email, we find that the sets of response times, considered separately for the messages to each correspondent of a given writer, generate a family of heavy-tailed distributions which have largely the same features for all agents, and whose characteristic times grow exponentially with the rank of each correspondent. We furthermore show that this new behavioral pattern emerges robustly by considering weighted moving averages of the priority-conditioned response-time probabilities generated by a basic prioritization model. Our findings clarify how the range of priorities in the inputs from one's environment underpins and shapes the dynamics of agents embedded in a net of reactive relations. These newly revealed activity patterns might be universal, being present in other general interactive environments, and constrain future models of communication and interaction networks, affecting their architecture and evolution.
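The reported pattern, heavy-tailed per-correspondent response times whose characteristic times grow exponentially with correspondent rank, can be mimicked with a toy generator. The log-normal choice and all parameter values are assumptions for illustration, not the paper's empirical fit:

```python
import numpy as np

def response_times(n_ranks=10, n_msgs=500, tau0=1.0, growth=1.5, seed=0):
    """For the correspondent of rank r, draw heavy-tailed response times
    (log-normal here) with characteristic time tau0 * growth**r, so the
    time scale grows exponentially with rank while the distribution shape
    stays the same for every correspondent."""
    rng = np.random.default_rng(seed)
    return {r: tau0 * growth ** r * rng.lognormal(0.0, 1.0, n_msgs)
            for r in range(n_ranks)}
```

Plotting the per-rank histograms on logarithmic axes would show the same shape shifted by a constant factor per rank, the signature described in the abstract.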
NASA Astrophysics Data System (ADS)
Bidari, Pooya Sobhe; Alirezaie, Javad; Tavakkoli, Jahan
2017-03-01
This paper presents a method for modeling and simulation of shear wave generation from a nonlinear Acoustic Radiation Force Impulse (ARFI) that is considered as a distributed force applied at the focal region of a HIFU transducer radiating in the nonlinear regime. The shear wave propagation is simulated by solving Navier's equation with the distributed nonlinear ARFI as the source of the shear wave. Then, the Wigner-Ville Distribution (WVD), a time-frequency analysis method, is used to detect the shear wave at different local points in the region of interest. The WVD yields an estimate of the shear wave's time of arrival, its mean frequency, and the local attenuation, which can be utilized to estimate the medium's shear modulus and shear viscosity using the Voigt model.
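A discrete Wigner-Ville distribution of the kind used for such time-frequency detection can be sketched as follows; this is an unsmoothed version for an already-analytic signal, omitting the windowing and analytic-signal conversion used in practice:

```python
import numpy as np

def wigner_ville(x):
    """Discrete Wigner-Ville distribution of an analytic signal x:
    W[n, k] is the FFT over the lag m of x[n+m] * conj(x[n-m]), clipped
    to lags available at each time index n. Because the lag product of a
    tone at frequency f oscillates at 2*f, a tone appears at bin
    k = 2*f*N on the FFT frequency axis."""
    N = len(x)
    W = np.zeros((N, N))
    for n in range(N):
        mmax = min(n, N - 1 - n)
        r = np.zeros(N, dtype=complex)
        for m in range(-mmax, mmax + 1):
            r[m % N] = x[n + m] * np.conj(x[n - m])
        W[n] = np.real(np.fft.fft(r))   # r is conjugate-symmetric in m
    return W
```

For a complex exponential at 0.125 cycles/sample and N = 64, the mid-signal slice peaks at bin 16, i.e. at twice the signal frequency as expected for the WVD convention above.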
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dall-Anese, Emiliano; Bernstein, Andrey; Simonetto, Andrea
This paper develops an online optimization method to maximize operational objectives of distribution-level distributed energy resources (DERs), while adjusting the aggregate power generated (or consumed) in response to services requested by grid operators. The design of the online algorithm is based on a projected-gradient method, suitably modified to accommodate appropriate measurements from the distribution network and the DERs. By virtue of this approach, the resultant algorithm can cope with inaccuracies in the representation of the AC power flows, it avoids pervasive metering to gather the state of noncontrollable resources, and it naturally lends itself to a distributed implementation. Optimality claims are established in terms of tracking of the solution of a well-posed time-varying convex optimization problem.
Intelligent Control of Micro Grid: A Big Data-Based Control Center
NASA Astrophysics Data System (ADS)
Liu, Lu; Wang, Yanping; Liu, Li; Wang, Zhiseng
2018-01-01
In this paper, a structure of a micro grid system with a big data-based control center is introduced. Energy data from distributed generation, storage, and load are analyzed through the control center; from the results, new trends are predicted and applied as feedback to optimize the control. Therefore, each step in the micro grid can be adjusted and organized in a form of comprehensive management. A framework of real-time data collection, data processing, and data analysis is proposed by employing big data technology. Consequently, integrated distributed generation and an optimized energy storage and transmission process can be implemented in the micro grid system.
Wynden, Rob; Anderson, Nick; Casale, Marco; Lakshminarayanan, Prakash; Anderson, Kent; Prosser, Justin; Errecart, Larry; Livshits, Alice; Thimman, Tim; Weiner, Mark
2011-01-01
Within the CTSA (Clinical Translational Sciences Awards) program, academic medical centers are tasked with the storage of clinical formulary data within an Integrated Data Repository (IDR) and the subsequent exposure of that data over grid computing environments for hypothesis generation and cohort selection. Formulary data collected over long periods of time across multiple institutions requires normalization of terms before those data sets can be aggregated and compared. This paper sets forth a solution to the challenge of generating derived aggregated normalized views from large, distributed data sets of clinical formulary data intended for re-use within clinical translational research.
Copilot: Monitoring Embedded Systems
NASA Technical Reports Server (NTRS)
Pike, Lee; Wegmann, Nis; Niller, Sebastian; Goodloe, Alwyn
2012-01-01
Runtime verification (RV) is a natural fit for ultra-critical systems, where correctness is imperative. In ultra-critical systems, even if the software is fault-free, because of the inherent unreliability of commodity hardware and the adversity of operational environments, processing units (and their hosted software) are replicated, and fault-tolerant algorithms are used to compare the outputs. We investigate both software monitoring in distributed fault-tolerant systems, as well as implementing fault-tolerance mechanisms using RV techniques. We describe the Copilot language and compiler, specifically designed for generating monitors for distributed, hard real-time systems. We also describe two case studies in which we generated Copilot monitors for avionics systems.
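The replicate-and-compare mechanism described above can be illustrated with a toy majority-vote monitor. This is plain Python for illustration, not the Copilot stream language:

```python
def majority_monitor(outputs, tol=1e-6):
    """Fault-tolerant comparison of replicated outputs: return the value
    agreed on by a strict majority of replicas (agreement meaning within
    tolerance tol) together with the indices of disagreeing replicas, or
    (None, all indices) when no strict majority exists."""
    n = len(outputs)
    agree = [sum(1 for y in outputs if abs(x - y) <= tol) for x in outputs]
    best = max(range(n), key=lambda i: agree[i])
    if 2 * agree[best] <= n:
        return None, list(range(n))       # no strict majority: all suspect
    value = outputs[best]
    faulty = [i for i, x in enumerate(outputs) if abs(x - value) > tol]
    return value, faulty
```

With triple modular redundancy, a single faulty replica is outvoted and flagged; with all three replicas disagreeing, the monitor reports that no trustworthy value exists.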
Vutukuru, Satish; Carreras-Sospedra, Marc; Brouwer, Jacob; Dabdub, Donald
2011-12-01
Distributed power generation-electricity generation that is produced by many small stationary power generators distributed throughout an urban air basin-has the potential to supply a significant portion of electricity in future years. As a result, distributed generation may lead to increased pollutant emissions within an urban air basin, which could adversely affect air quality. However, the use of combined heating and power with distributed generation may reduce the energy consumption for space heating and air conditioning, resulting in a net decrease of pollutant and greenhouse gas emissions. This work used a systematic approach based on land-use geographical information system data to determine the spatial and temporal distribution of distributed generation emissions in the San Joaquin Valley Air Basin of California and simulated the potential air quality impacts using state-of-the-art three-dimensional computer models. The evaluation of the potential market penetration of distributed generation focuses on the year 2023. In general, the air quality impacts of distributed generation were found to be small due to the restrictive 2007 California Air Resources Board air emission standards applied to all distributed generation units and due to the use of combined heating and power. Results suggest that if distributed generation units were allowed to emit at the current Best Available Control Technology standards (which are less restrictive than the 2007 California Air Resources Board standards), air quality impacts of distributed generation could compromise compliance with the federal 8-hr average ozone standard in the region.
NASA Astrophysics Data System (ADS)
Engdahl, N.
2017-12-01
Backward in time (BIT) simulations of passive tracers are often used for capture zone analysis, source area identification, and generation of travel time and age distributions. The BIT approach has the potential to become an immensely powerful tool for direct inverse modeling, but the necessary relationships between the processes modeled in the forward and backward models have yet to be formally established. This study explores the time reversibility of passive and reactive transport models in a variety of 2D heterogeneous domains using particle-based random walk methods for the transport and nonlinear reaction steps. Distributed forward models are used to generate synthetic observations that form the initial conditions for the backward in time models, and we consider both linear-flood and point injections. The results for passive travel time distributions show that forward and backward models are not exactly equivalent but that the linear-flood BIT models are reasonable approximations. Point-based BIT models fall within the travel time range of the forward models, though their distributions can be distinctive in some cases. The BIT approximation is not as robust when nonlinear reactive transport is considered, and we find that this reaction system is only exactly reversible under uniform flow conditions. We use a series of simplified, longitudinally symmetric, but heterogeneous, domains to illustrate the causes of these discrepancies between the two model types. Many of the discrepancies arise because diffusion is a "self-adjoint" operator, which causes mass to spread in both the forward and backward models. This allows particles to enter low velocity regions in both models, which has opposite effects in the forward and reverse models. It may be possible to circumvent some of these limitations using an anti-diffusion model to undo mixing when time is reversed, but this is beyond the capabilities of the existing Lagrangian methods.
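The forward/backward comparison can be reproduced in miniature with a 1-D particle random walk under uniform flow, the one case the study found exactly reversible. Function names and parameter values here are illustrative, not from the study:

```python
import numpy as np

def travel_times(v, D, L, n=20000, dt=0.01, seed=0, reverse=False):
    """Random-walk travel times of n particles across a 1-D domain [0, L]
    with uniform velocity v and dispersion coefficient D. With
    reverse=True the backward-in-time walk is run instead: particles
    start at the outlet L and the velocity is negated. Under uniform
    flow the two travel-time distributions should agree."""
    rng = np.random.default_rng(seed if not reverse else seed + 1)
    x = np.zeros(n) if not reverse else np.full(n, float(L))
    vel = v if not reverse else -v
    t = np.zeros(n)
    active = np.ones(n, dtype=bool)
    while active.any():
        step = vel * dt + np.sqrt(2 * D * dt) * rng.standard_normal(active.sum())
        x[active] += step
        t[active] += dt
        active &= (x < L) if not reverse else (x > 0)
    return t
```

For drift-dominated transport both distributions are approximately inverse Gaussian with mean crossing time L/v, so the forward and backward means coincide up to sampling and time-step error.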
Time-frequency representation of a highly nonstationary signal via the modified Wigner distribution
NASA Technical Reports Server (NTRS)
Zoladz, T. F.; Jones, J. H.; Jong, J.
1992-01-01
A new signal analysis technique called the modified Wigner distribution (MWD) is presented. The new signal processing tool has been very successful in determining time-frequency representations of highly non-stationary multicomponent signals in both simulations and trials involving actual Space Shuttle Main Engine (SSME) high frequency data. The MWD departs from the classic Wigner distribution (WD) in that it effectively eliminates the cross coupling among positive frequency components in a multiple component signal. This attribute of the MWD, which prevents the generation of 'phantom' spectral peaks, will undoubtedly increase the utility of the WD for real-world signal analysis applications, which more often than not involve multicomponent signals.
Hontinfinde, Régis; Coulibaly, Saliya; Megret, Patrice; Taki, Majid; Wuilpart, Marc
2017-05-01
Supercontinuum generation (SCG) in optical fibers arises from the spectral broadening of an intense light, which results from the interplay of both linear and nonlinear optical effects. In this Letter, a nondestructive optical time domain reflectometry method is proposed for the first time, to the best of our knowledge, to measure the spatial (longitudinal) evolution of the supercontinuum induced along an optical fiber. The method was experimentally tested on highly nonlinear fibers. The experimental results are in good agreement with the optical spectra measured at the fiber outputs.
Planning applications in image analysis
NASA Technical Reports Server (NTRS)
Boddy, Mark; White, Jim; Goldman, Robert; Short, Nick, Jr.
1994-01-01
We describe two interim results from an ongoing effort to automate the acquisition, analysis, archiving, and distribution of satellite earth science data. Both results are applications of Artificial Intelligence planning research to the automatic generation of processing steps for image analysis tasks. First, we have constructed a linear conditional planner (CPed), used to generate conditional processing plans. Second, we have extended an existing hierarchical planning system to make use of durations, resources, and deadlines, thus supporting the automatic generation of processing steps in time and resource-constrained environments.
NASA Astrophysics Data System (ADS)
Krishnanathan, Kirubhakaran; Anderson, Sean R.; Billings, Stephen A.; Kadirkamanathan, Visakan
2016-11-01
In this paper, we derive a system identification framework for continuous-time nonlinear systems, for the first time using a simulation-focused computational Bayesian approach. Simulation approaches to nonlinear system identification have been shown to outperform regression methods under certain conditions, such as non-persistently exciting inputs and fast-sampling. We use the approximate Bayesian computation (ABC) algorithm to perform simulation-based inference of model parameters. The framework has the following main advantages: (1) parameter distributions are intrinsically generated, giving the user a clear description of uncertainty, (2) the simulation approach avoids the difficult problem of estimating signal derivatives as is common with other continuous-time methods, and (3) as noted above, the simulation approach improves identification under conditions of non-persistently exciting inputs and fast-sampling. Term selection is performed by judging parameter significance using parameter distributions that are intrinsically generated as part of the ABC procedure. The results from a numerical example demonstrate that the method performs well in noisy scenarios, especially in comparison to competing techniques that rely on signal derivative estimation.
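A minimal rejection-ABC sketch conveys the core mechanism the framework builds on: draw from the prior, simulate, keep the closest matches. The paper's simulation-based identification procedure is considerably richer (term selection, continuous-time models without derivative estimation), and the names here are illustrative:

```python
import numpy as np

def abc_rejection(observed, simulate, prior_sample, n_draws=20000,
                  quantile=0.01, seed=0):
    """Rejection ABC: draw parameters from the prior, simulate a summary
    statistic for each draw, and keep the draws whose statistic falls
    closest to the observed one. The retained draws form an intrinsic
    sample from the approximate posterior, giving the user a direct
    description of parameter uncertainty."""
    rng = np.random.default_rng(seed)
    thetas = np.array([prior_sample(rng) for _ in range(n_draws)])
    dists = np.array([abs(simulate(th, rng) - observed) for th in thetas])
    keep = dists <= np.quantile(dists, quantile)
    return thetas[keep]
```

As a usage example, inferring the mean of a Gaussian from an observed sample mean of 2.0 under a uniform prior concentrates the accepted draws near 2.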
A Computational Model for Predicting Gas Breakdown
NASA Astrophysics Data System (ADS)
Gill, Zachary
2017-10-01
Pulsed-inductive discharges are a common method of producing a plasma. They provide a mechanism for quickly and efficiently generating a large volume of plasma for rapid use and are seen in applications including propulsion, fusion power, and high-power lasers. However, some common designs see a delayed response time due to the plasma forming when the magnitude of the magnetic field in the thruster is at a minimum. New designs are difficult to evaluate due to the amount of time needed to construct a new geometry and the high monetary cost of changing the power generation circuit. To more quickly evaluate new designs and better understand the shortcomings of existing designs, a computational model is developed. This model uses a modified single-electron model as the basis for a Mathematica code to determine how the energy distribution in a system changes with respect to time and location. By analyzing this energy distribution, the approximate time and location of initial plasma breakdown can be predicted. The results from this code are then compared to existing data to show its validity and shortcomings. Missouri S&T APLab.
Di Matteo, Livio
2005-01-01
This paper examines the determinants of real per capita health expenditures in order to assess the impact of age distribution, income and time using American state-level data for the period 1980-1998 and Canadian province-level data for the period 1975-2000. Ageing population distributions and income explain a relatively small portion of health expenditures when the impact of time effects, which is a partial proxy for technological change, is controlled for. However, the impact of age is of more concern given that cost increases are concentrated in the last few years of life and there may be cohort effects as the "Baby-Boom" generation ages. There is an urgent need to better understand the exact mechanisms driving health expenditure increases given that time accounts for approximately two-thirds of health expenditure increases and that its effect is non-linear.
Synchrony detection and amplification by silicon neurons with STDP synapses.
Bofill-i-petit, Adria; Murray, Alan F
2004-09-01
Spike-timing dependent synaptic plasticity (STDP) is a form of plasticity driven by precise spike-timing differences between presynaptic and postsynaptic spikes. Thus, the learning rules underlying STDP are suitable for learning neuronal temporal phenomena such as spike-timing synchrony. It is well known that weight-independent STDP creates unstable learning processes resulting in balanced bimodal weight distributions. In this paper, we present a neuromorphic analog very large scale integration (VLSI) circuit that contains a feedforward network of silicon neurons with STDP synapses. The learning rule implemented can be tuned to have a moderate level of weight dependence. This helps stabilise the learning process while still generating bimodal weight distributions. From on-chip learning experiments we show that the chip can detect and amplify hierarchical spike-timing synchrony structures embedded in noisy spike trains. The weight distributions of the network emerging from learning are bimodal.
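The tunable weight dependence mentioned above can be sketched with a standard pair-based STDP update rule, where an exponent mu interpolates between additive (weight-independent, mu=0) and multiplicative (mu=1) updates. This is a generic software model, not the VLSI circuit's actual rule; all constants are illustrative.

```python
import math

def stdp_update(w, dt, a_plus=0.05, a_minus=0.055, tau=20.0, mu=0.5):
    # Pair-based STDP with tunable weight dependence mu.
    # dt = t_post - t_pre in ms; the weight w is kept in [0, 1].
    if dt > 0:   # pre before post -> potentiation, scaled by distance to the upper bound
        w += a_plus * (1.0 - w) ** mu * math.exp(-dt / tau)
    else:        # post before pre -> depression, scaled by the current weight
        w -= a_minus * w ** mu * math.exp(dt / tau)
    return min(1.0, max(0.0, w))

# Repeated causal pairings (pre 5 ms before post) drive the weight upward
w = 0.5
for _ in range(100):
    w = stdp_update(w, dt=5.0)
high = w

# Repeated anti-causal pairings drive it back down
for _ in range(100):
    w = stdp_update(w, dt=-5.0)
low = w
```

With moderate mu the updates shrink near the bounds, which is the stabilising effect the abstract describes.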
Using ordinal partition transition networks to analyze ECG data
NASA Astrophysics Data System (ADS)
Kulp, Christopher W.; Chobot, Jeremy M.; Freitas, Helena R.; Sprechini, Gene D.
2016-07-01
Electrocardiogram (ECG) data from patients with a variety of heart conditions are studied using ordinal pattern partition networks. The ordinal pattern partition networks are formed from the ECG time series by symbolizing the data into ordinal patterns. The ordinal patterns form the nodes of the network, and edges are defined through the time ordering of the ordinal patterns in the symbolized time series. A network measure, called the mean degree, is computed from each time-series-generated network. In addition, the entropy and the number of non-occurring ordinal patterns (NFP) are computed for each series. The distributions of mean degrees, entropies, and NFPs for each heart condition studied are compared. A statistically significant difference between healthy patients and several groups of unhealthy patients with varying heart conditions is found for the distributions of the mean degrees, unlike for any of the distributions of the entropies or NFPs.
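The construction of an ordinal partition transition network and its mean degree can be sketched as follows, assuming order-3 patterns and a directed, unweighted network; the exact network definition used in the paper may differ in details.

```python
import random

def ordinal_pattern(window):
    # Rank-order the samples to obtain the window's ordinal pattern
    return tuple(sorted(range(len(window)), key=lambda i: window[i]))

def mean_degree(series, order=3):
    # Symbolize the series into ordinal patterns, then build the transition
    # network: nodes are patterns, directed edges follow the time ordering.
    patterns = [ordinal_pattern(series[i:i + order])
                for i in range(len(series) - order + 1)]
    edges = set(zip(patterns, patterns[1:]))   # distinct directed transitions
    nodes = set(patterns)
    degree = {n: 0 for n in nodes}
    for a, b in edges:                         # count edges touching each node
        degree[a] += 1
        degree[b] += 1
    return sum(degree.values()) / len(nodes)

# A monotone series collapses to one pattern with a single self-loop
flat = mean_degree(list(range(10)))

# A noisy series explores more patterns and transitions, raising the mean degree
random.seed(1)
noisy = mean_degree([random.random() for _ in range(500)])
```

The contrast between the two values illustrates why the mean degree can discriminate between dynamical regimes of a time series.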
Natural Time and Nowcasting Earthquakes: Are Large Global Earthquakes Temporally Clustered?
NASA Astrophysics Data System (ADS)
Luginbuhl, Molly; Rundle, John B.; Turcotte, Donald L.
2018-02-01
The objective of this paper is to analyze the temporal clustering of large global earthquakes with respect to natural time, or interevent count, as opposed to regular clock time. To do this, we use two techniques: (1) nowcasting, a new method of statistically classifying seismicity and seismic risk, and (2) time series analysis of interevent counts. We chose the sequences of M_λ ≥ 7.0 and M_λ ≥ 8.0 earthquakes from the global centroid moment tensor (CMT) catalog from 2004 to 2016 for analysis. A significant number of these earthquakes will be aftershocks of the largest events, but no satisfactory method of declustering the aftershocks in clock time is available. A major advantage of using natural time is that it eliminates the need for declustering aftershocks. The event count we utilize is the number of small earthquakes that occur between large earthquakes. The small earthquake magnitude is chosen to be as small as possible, such that the catalog is still complete based on the Gutenberg-Richter statistics. For the CMT catalog, starting in 2004, we found the completeness magnitude to be M_σ ≥ 5.1. For the nowcasting method, the cumulative probability distribution of these interevent counts is obtained. We quantify the distribution using the exponent, β, of the best-fitting Weibull distribution; β = 1 for a random (exponential) distribution. We considered 197 earthquakes with M_λ ≥ 7.0 and found β = 0.83 ± 0.08. We considered 15 earthquakes with M_λ ≥ 8.0, but this number was considered too small to generate a meaningful distribution. For comparison, we generated synthetic catalogs of earthquakes that occur randomly with the Gutenberg-Richter frequency-magnitude statistics. We considered a synthetic catalog of 1.97 × 10^5 M_λ ≥ 7.0 earthquakes and found β = 0.99 ± 0.01. The random catalog converted to natural time was also random.
We then generated 1.5 × 10^4 synthetic catalogs with 197 M_λ ≥ 7.0 earthquakes in each catalog and found the statistical range of β values. The observed value of β = 0.83 for the CMT catalog corresponds to a p value of 0.004, leading us to conclude that the interevent natural times in the CMT catalog are not random. For the time series analysis, we calculated the autocorrelation function for the sequence of natural time intervals between large global earthquakes and again compared with data from 1.5 × 10^4 synthetic catalogs of random data. In this case, the spread of autocorrelation values was much larger, so we concluded that this approach is insensitive to deviations from random behavior.
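The Weibull shape exponent β used above can be estimated, for example, by least-squares regression on the linearized Weibull CDF; the sketch below (not necessarily the authors' fitting procedure) recovers β ≈ 1 for exponentially distributed interevent counts, the random reference case described in the abstract.

```python
import math, random

def weibull_beta(samples):
    # Estimate the Weibull shape exponent beta by least-squares on the
    # linearized CDF: log(-log(1 - F)) = beta*log(x) - beta*log(eta).
    xs = sorted(s for s in samples if s > 0)
    n = len(xs)
    # Median-rank plotting positions keep the transform finite
    pts = [(math.log(x), math.log(-math.log(1.0 - (i + 0.5) / n)))
           for i, x in enumerate(xs)]
    mx = sum(p[0] for p in pts) / n
    my = sum(p[1] for p in pts) / n
    num = sum((px - mx) * (py - my) for px, py in pts)
    den = sum((px - mx) ** 2 for px, py in pts)
    return num / den   # regression slope = shape estimate

# Interevent counts from a Poisson process are exponential, i.e. beta ~ 1
random.seed(2)
counts = [random.expovariate(1.0) for _ in range(2000)]
beta = weibull_beta(counts)
```

A clustered catalog would push the fitted β below 1, which is the signature the paper tests for.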
Energy management and cooperation in microgrids
NASA Astrophysics Data System (ADS)
Rahbar, Katayoun
Microgrids are key components of future smart power grids, which integrate distributed renewable energy generators to efficiently serve the load demand locally. However, the random and intermittent characteristics of renewable energy generation may hinder the reliable operation of microgrids. This thesis is thus devoted to investigating new strategies for microgrids to optimally manage their energy consumption, energy storage system (ESS) and cooperation in real time to achieve reliable and cost-effective operation. This thesis starts with a single microgrid system. The optimal energy scheduling and ESS management policy is derived to minimize the energy cost of the microgrid resulting from drawing conventional energy from the main grid under both the off-line and online setups, where the renewable energy generation/load demand are assumed to be non-causally known and causally known at the microgrid, respectively. The proposed online algorithm is designed based on the optimal off-line solution and works under arbitrary (even unknown) realizations of future renewable energy generation/load demand. Therefore, it is more practically applicable than solutions based on conventional techniques such as dynamic programming and stochastic programming, which require prior knowledge of renewable energy generation and load demand realizations/distributions. Next, for a group of microgrids that cooperate in energy management, we study efficient methods for sharing energy among them in both fully and partially cooperative scenarios, where the microgrids share common interests and are self-interested, respectively. For the fully cooperative energy management, the off-line optimization problem is first formulated and optimally solved, where a distributed algorithm is proposed to minimize the total (sum) energy cost of the microgrids.
Inspired by the results obtained from the off-line optimization, efficient online algorithms are proposed for real-time energy management; these are of low complexity and work under arbitrary realizations of renewable energy generation/load demand. On the other hand, for self-interested microgrids, the partially cooperative energy management problem is formulated and a distributed algorithm is proposed to optimize the energy cooperation such that the energy costs of individual microgrids are simultaneously reduced relative to the case without energy cooperation, while only limited information is shared among the microgrids and the central controller.
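As a contrast to the optimal policies derived in the thesis, a naive online heuristic for a single microgrid with an ESS can be sketched as follows: store renewable surplus up to capacity, discharge on deficit, and import the remainder from the main grid. Profiles, capacity and price are hypothetical.

```python
def online_storage_policy(renewable, demand, capacity, soc0=0.0, price=1.0):
    # Greedy online heuristic (not the thesis's optimal policy): charge the
    # ESS with surplus renewable energy, discharge it on deficit, and buy
    # the rest from the main grid at 'price' per unit of energy.
    soc, cost = soc0, 0.0
    for gen, load in zip(renewable, demand):
        surplus = gen - load
        if surplus >= 0:
            soc = min(capacity, soc + surplus)      # charge, spill any excess
        else:
            deficit = -surplus
            used = min(soc, deficit)                # discharge what we can
            soc -= used
            cost += price * (deficit - used)        # grid import for the rest
    return soc, cost

# Hypothetical generation/demand profiles over six periods
renewable = [5.0, 6.0, 2.0, 1.0, 0.0, 4.0]
demand    = [3.0, 3.0, 3.0, 3.0, 3.0, 3.0]
soc, grid_cost = online_storage_policy(renewable, demand, capacity=4.0)
```

Such a myopic rule needs no forecasts at all, but unlike the thesis's algorithms it offers no performance guarantee relative to the off-line optimum.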
Universal portfolios generated by weakly stationary processes
NASA Astrophysics Data System (ADS)
Tan, Choon Peng; Pang, Sook Theng
2014-12-01
Recently, a universal portfolio generated by a set of independent Brownian motions, in which a finite number of past stock prices are weighted by the moments of the multivariate normal distribution, was introduced and studied. The multivariate normal moments, as polynomials in time, consequently lead to a constant rebalanced portfolio depending on the drift coefficients of the Brownian motions. For a weakly stationary process, a different type of universal portfolio is proposed in which the weights on the stock prices depend only on the time differences of the stock prices. An empirical study is conducted on the returns achieved by the universal portfolios generated by the Ornstein-Uhlenbeck process on selected stock-price data sets. Promising results are demonstrated for increasing the wealth of the investor by using the weakly-stationary-process-generated universal portfolios.
Yan, Aidong; Huang, Sheng; Li, Shuo; Chen, Rongzhang; Ohodnicki, Paul; Buric, Michael; Lee, Shiwoo; Li, Ming-Jun; Chen, Kevin P
2017-08-24
This paper reports a technique to enhance the magnitude and high-temperature stability of Rayleigh back-scattering signals in silica fibers for distributed sensing applications. With femtosecond laser radiation, more than 40-dB enhancement of the Rayleigh backscattering signal was generated in silica fibers using 300-nJ laser pulses at a 250 kHz repetition rate. The laser-induced Rayleigh scattering defects were found to be stable from room temperature to 800 °C in hydrogen gas. The Rayleigh scatter at high temperatures was correlated to the formation and modification of nanogratings in the fiber core. Using optical fibers with enhanced Rayleigh backscattering profiles as distributed temperature sensors, we demonstrated real-time monitoring of solid oxide fuel cell (SOFC) operations with 5-mm spatial resolution at 800 °C. Information gathered by these fiber sensor tools can be used to verify simulation results or operated in a process-control system to improve the operational efficiency and longevity of SOFC-based energy generation systems.
A Distributed Fuzzy Associative Classifier for Big Data.
Segatori, Armando; Bechini, Alessio; Ducange, Pietro; Marcelloni, Francesco
2017-09-19
Fuzzy associative classification has not been widely analyzed in the literature, although associative classifiers (ACs) have proved to be very effective in different real-world application domains. The main reason is that learning fuzzy ACs is a very heavy task, especially when dealing with large datasets. To overcome this drawback, in this paper, we propose an efficient distributed fuzzy associative classification approach based on the MapReduce paradigm. The approach exploits a novel distributed discretizer based on fuzzy entropy for efficiently generating fuzzy partitions of the attributes. Then, a set of candidate fuzzy association rules is generated by employing a distributed fuzzy extension of the well-known FP-Growth algorithm. Finally, this set is pruned using three purposely adapted types of pruning. We implemented our approach on the popular Hadoop framework, which allows distributing the storage and processing of very large data sets across computer clusters built from commodity hardware. We have performed an extensive experimentation and a detailed analysis of the results using six very large datasets with up to 11,000,000 instances. We have also experimented with different types of reasoning methods. Focusing on accuracy, model complexity, computation time, and scalability, we compare the results achieved by our approach with those obtained by two distributed nonfuzzy ACs recently proposed in the literature. We highlight that, although the accuracies are comparable, the complexity, evaluated in terms of the number of rules, of the classifiers generated by the distributed fuzzy approach is lower than that of the nonfuzzy classifiers.
Integrated, Automated Distributed Generation Technologies Demonstration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jensen, Kevin
2014-09-01
The purpose of the NETL Project was to develop a diverse combination of distributed renewable generation technologies and controls and demonstrate how the renewable generation could help manage substation peak demand at the ATK Promontory plant site. The Promontory plant site is located in the northwestern Utah desert approximately 25 miles west of Brigham City, Utah. The plant encompasses 20,000 acres and has over 500 buildings. The ATK Promontory plant primarily manufactures solid propellant rocket motors for both commercial and government launch systems. The original project objectives focused on distributed generation: a 100 kW (kilowatt) wind turbine, a 100 kW new-technology waste heat generation unit, a 500 kW energy storage system, and an intelligent system-wide automation system to monitor and control the renewable energy devices and release the stored energy during the peak demand time. The original goal was to reduce peak demand from the electrical utility company, Rocky Mountain Power (RMP), by 3.4%. For a period of time we also sought to integrate our energy storage requirements with a flywheel storage system (500 kW) proposed for the Promontory/RMP Substation. Ultimately the flywheel storage system could not meet our project timetable, so the storage requirement was switched to a battery storage system (300 kW). A secondary objective was to design/install a bi-directional customer/utility gateway application for real-time visibility and communications between RMP and ATK. This objective was not achieved because of technical issues with RMP, the ATK Information Technology Department’s stringent requirements based on being a rocket motor manufacturing facility, and budget constraints. Of the original objectives, the following were achieved: • Installation of a 100 kW wind turbine. • Installation of a 300 kW battery storage system.
• Integrated control system installed to offset electrical demand by releasing stored energy from renewable sources during peak hours of the day. The control system also monitors the wind turbine and battery storage system health and power output, and issues critical alarms. Of the original objectives, the following were not achieved: • 100 kW new-technology waste heat generation unit. • Bi-directional customer/utility gateway for real-time visibility and communications between RMP and ATK. • 3.4% reduction in peak demand. A 1.7% reduction in peak demand was realized instead.
Generation of Stationary Non-Gaussian Time Histories with a Specified Cross-spectral Density
Smallwood, David O.
1997-01-01
The paper reviews several methods for the generation of stationary realizations of sampled time histories with non-Gaussian distributions and introduces a new method which can be used to control the cross-spectral density matrix and the probability density functions (pdfs) of the multiple input problem. Two methods are first discussed for the specialized case of matching the auto (power) spectrum, the skewness, and the kurtosis: generalized shot noise and polynomial functions. It is then shown that the skewness and kurtosis can also be controlled by the phase of a complex frequency domain description of the random process. The general case of matching a target probability density function using a zero memory nonlinear (ZMNL) function is then covered. Next, methods for generating vectors of random variables with a specified covariance matrix for a class of spherically invariant random vectors (SIRV) are discussed. Finally, the general case of matching the cross-spectral density matrix of a vector of inputs with non-Gaussian marginal distributions is presented.
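The ZMNL step mentioned above admits a compact sketch: generate a Gaussian series, then map each sample through the standard normal CDF followed by the inverse CDF of the target distribution, which fixes the marginal pdf (the paper's spectral-control machinery is left aside here). The exponential target below is an illustrative choice, not one from the paper.

```python
import math, random

def zmnl_transform(gaussian_series, target_inverse_cdf):
    # Zero-memory nonlinear (ZMNL) map: push each Gaussian sample through
    # Phi (the standard normal CDF), then through the target inverse CDF.
    def phi(z):
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return [target_inverse_cdf(phi(z)) for z in gaussian_series]

random.seed(3)
gauss = [random.gauss(0.0, 1.0) for _ in range(5000)]

# Illustrative target: exponential marginal with unit mean
exp_inv_cdf = lambda u: -math.log(1.0 - u)
shaped = zmnl_transform(gauss, exp_inv_cdf)

mean = sum(shaped) / len(shaped)
skew_positive = sum((s - mean) ** 3 for s in shaped) > 0.0
```

Because the map is applied sample by sample, it changes the marginal distribution while distorting, rather than preserving, the original spectrum; controlling both jointly is what the reviewed methods address.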
Hansen, A K; Christensen, M; Noordegraaf, D; Heist, P; Papastathopoulos, E; Loyo-Maldonado, V; Jensen, O B; Skovgaard, P M W
2016-11-10
Watt-level yellow-emitting lasers are interesting for medical applications, owing to the strong absorption of hemoglobin at these wavelengths, and for the efficient detection of certain fluorophores. In this paper, we demonstrate a compact and robust diode-based laser system in the yellow spectral range. The system generates 1.9 W of single-frequency light at 562.4 nm by cascaded single-pass frequency doubling of the 1124.8 nm emission from a distributed Bragg reflector (DBR) tapered laser diode. The absence of a free-space cavity makes the system stable over a base-plate temperature range of 30 K. At the same time, the use of a laser diode enables modulation of the pump wavelength by controlling the drive current. This is utilized to achieve a power modulation depth above 90% for the second-harmonic light, with a rise time below 40 μs.
An optimization model for energy generation and distribution in a dynamic facility
NASA Technical Reports Server (NTRS)
Lansing, F. L.
1981-01-01
An analytical model using linear programming is described for the optimum generation and distribution of energy demands among competing energy resources under different economic criteria. The model, which will be used as a general engineering tool in the analysis of the Deep Space Network ground facility, considers several decisions essential to better design and operation. The decisions sought for the particular energy application include: the optimum time to build an assembly of elements, the inclusion of a storage medium of some type, and the size or capacity of the elements that will minimize the total life-cycle cost over a given number of years. The model, which is structured in multiple time divisions, employs the decomposition principle for large matrices, the branch-and-bound method of mixed-integer programming, and the revised simplex technique for efficient and economical computer use.
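When only capacity constraints bind, the cost-minimizing allocation of a demand across competing resources reduces to merit-order loading, which a short sketch can illustrate. The full model's mixed-integer, multi-period structure is not reproduced here; resource names, capacities and costs are hypothetical.

```python
def merit_order_dispatch(demand, resources):
    # resources: list of (name, capacity_kW, cost_per_kWh).
    # With only capacity constraints, the cost-minimizing LP solution loads
    # the cheapest resources first (merit order).
    plan, remaining = {}, demand
    for name, cap, cost in sorted(resources, key=lambda r: r[2]):
        take = min(cap, remaining)
        plan[name] = take
        remaining -= take
        if remaining <= 0:
            break
    if remaining > 0:
        raise ValueError("demand exceeds total capacity")
    return plan

# Hypothetical resources for illustration
resources = [("solar", 40.0, 0.0), ("storage", 30.0, 0.05), ("grid", 200.0, 0.12)]
plan = merit_order_dispatch(100.0, resources)
total_cost = sum(plan.get(n, 0.0) * c for n, cap, c in resources)
```

The decomposition, branch-and-bound and simplex machinery cited in the abstract handles what this sketch cannot: integer build decisions, storage dynamics and coupling across time divisions.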
Entangled quantum key distribution over two free-space optical links.
Erven, C; Couteau, C; Laflamme, R; Weihs, G
2008-10-13
We report on the first real-time implementation of a quantum key distribution (QKD) system using entangled photon pairs that are sent over two free-space optical telescope links. The entangled photon pairs are produced with a type-II spontaneous parametric down-conversion source placed in a central, potentially untrusted, location. The two free-space links cover distances of 435 m and 1,325 m, respectively, producing a total separation of 1,575 m. The system relies on passive polarization analysis units, GPS timing receivers for synchronization, and custom-written software to perform the complete QKD protocol, including error correction and privacy amplification. Over 6.5 hours during the night, we observed an average raw key generation rate of 565 bits/s, an average quantum bit error rate (QBER) of 4.92%, and an average secure key generation rate of 85 bits/s.
NASA Astrophysics Data System (ADS)
Finke, U.; Blakeslee, R. J.; Mach, D. M.
2017-12-01
The next generation of European geostationary weather observing satellites (MTG) will operate an optical lightning location instrument (LI) which will be very similar to the Global Lightning Mapper (GLM) on board GOES-R. For the development and verification of the product processing algorithms, realistic test data are necessary. This paper presents a method of test data generation on the basis of optical lightning data from the LIS instrument and cloud image data from the Seviri radiometer. The basis is the lightning data gathered during the 15-year LIS operation time, particularly the empirical distribution functions of the optical pulse size, duration and radiance, as well as the inter-correlation of lightning in space and time. These allow for a realistically structured simulation of lightning test data. Due to its low orbit, the instantaneous field of view of the LIS is limited and moving with time. For the generation of test data which cover the geostationary visible disk, the LIS data have to be extended. This is realized by 1. simulating random lightning pulses according to the established distribution functions of the lightning parameters and 2. using the cloud radiometer data of the Seviri instrument on board the geostationary Meteosat second generation (MSG). In particular, the cloud top height product (CTH) identifies convective storm clouds in which the simulation places random lightning pulses. The LIS instrument was recently deployed on the International Space Station (ISS). The ISS orbit reaches higher latitudes, particularly over Europe. The ISS-LIS data are analyzed for single observation days. Additionally, the statistical distributions of parameters such as radiance, footprint size, and the space-time correlation of the groups are compared against the long-time statistics from TRMM-LIS. Optical lightning detection efficiency from space is affected by the solar radiation reflected from the clouds.
This effect changes with the day and night areas across the field of view. For a realistic simulation of this cloud background radiance, the Seviri visual channel VIS08 data is used. In addition to the test data study, this paper gives a comparison of the MTG-LI to the GLM and discusses differences in instrument design, product definition and generation, and the merging of data from both geostationary instruments.
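The first simulation step, drawing random pulses that follow the empirical distribution functions of the LIS parameters, can be sketched with a simple inverse-CDF sampler; the "observed" sample below is synthetic stand-in data, not actual LIS measurements.

```python
import random

def empirical_inverse_cdf(samples):
    # Build an inverse CDF from observed values (e.g. LIS pulse radiances),
    # so that synthetic pulses reproduce the empirical distribution.
    xs = sorted(samples)
    def inv(u):
        # Map u in [0, 1) to the corresponding order statistic
        idx = min(int(u * len(xs)), len(xs) - 1)
        return xs[idx]
    return inv

random.seed(4)
# Stand-in for an LIS-derived radiance sample (hypothetical values)
observed = [random.lognormvariate(0.0, 0.5) for _ in range(1000)]
draw = empirical_inverse_cdf(observed)

# Synthetic pulse radiances drawn from the empirical distribution
synthetic = [draw(random.random()) for _ in range(1000)]
ok = min(synthetic) >= min(observed) and max(synthetic) <= max(observed)
```

The same sampler, applied per parameter (size, duration, radiance), would populate the convective cloud regions identified by the CTH product with statistically realistic pulses.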
Study of temperature distributions in wafer exposure process
NASA Astrophysics Data System (ADS)
Lin, Zone-Ching; Wu, Wen-Jang
During the exposure process of photolithography, the wafer absorbs the exposure energy, which results in rising temperature and thermal expansion. This phenomenon was often neglected due to its limited effect in previous process generations. However, in the new generation of processes, it may very likely become a factor to be considered. In this paper, a finite element model for analyzing the transient behavior of the wafer temperature distribution during exposure was established under the assumption that the wafer is clamped by a vacuum chuck without warpage. The model is capable of simulating the distribution of the wafer temperature under different exposure conditions. The analysis begins with the simulation of the transient behavior in a single exposure region, and then examines how the exposure energy, the interval between exposure locations, and the interval between exposure times affect the wafer temperature distribution under continuous exposure. The simulation results indicate that widening the interval between exposure locations improves the wafer temperature distribution more than extending the interval of exposure time between neighboring image fields. Moreover, as long as the distance between the field centers of two neighboring exposure regions exceeds a straight-line distance of three image-field widths, the interacting thermal effect during wafer exposure can be ignored. The analysis flow proposed in this paper can serve as a supporting reference tool for engineers in planning exposure paths.
2011-01-01
Background: Electrotherapy is a relatively well established and efficient method of tumor treatment. In this paper we focus on analytical and numerical calculations of the potential and electric field distributions inside a tumor tissue in a two-dimensional model (2D-model) generated by means of electrode arrays with the shapes of different conic sections (ellipse, parabola and hyperbola). Methods: Analytical calculations of the potential and electric field distributions based on 2D-models for different electrode arrays are performed by solving the Laplace equation, while the numerical solution is obtained by means of the finite element method in two dimensions. Results: Both analytical and numerical solutions reveal significant differences between the electric field distributions generated by electrode arrays with circular shapes and with the shapes of different conic sections (elliptic, parabolic and hyperbolic). Electrode arrays with circular, elliptical and hyperbolic shapes have the advantage of concentrating the electric field lines in the tumor. Conclusion: The mathematical approach presented in this study provides a useful tool for the design of electrode arrays with different conic-section shapes by means of the unifying principle. At the same time, we verify the good correspondence between the analytical and numerical solutions for the potential and electric field distributions generated by electrode arrays with different conic sections. PMID:21943385
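The numerical side of such calculations can be illustrated with a minimal finite-difference relaxation of the Laplace equation (the paper uses the finite element method; this simpler Jacobi scheme only conveys the idea). Electrode positions and potentials are illustrative.

```python
def solve_laplace(grid, fixed, iterations=500):
    # Jacobi relaxation for the 2-D Laplace equation on a rectangular grid.
    # 'fixed' marks electrode nodes whose potential is held constant;
    # the outer boundary stays at its initial (grounded) value.
    rows, cols = len(grid), len(grid[0])
    for _ in range(iterations):
        new = [row[:] for row in grid]
        for i in range(1, rows - 1):
            for j in range(1, cols - 1):
                if (i, j) not in fixed:
                    new[i][j] = 0.25 * (grid[i - 1][j] + grid[i + 1][j]
                                        + grid[i][j - 1] + grid[i][j + 1])
        grid = new
    return grid

# Toy electrode pair: +1 V and -1 V nodes inside a grounded 20x20 box
n = 20
grid = [[0.0] * n for _ in range(n)]
fixed = {(10, 5): 1.0, (10, 14): -1.0}
for (i, j), v in fixed.items():
    grid[i][j] = v
phi = solve_laplace(grid, set(fixed))
```

The electric field then follows as the negative gradient of the relaxed potential; shaped electrode arrays correspond to larger sets of fixed nodes tracing the conic section.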
NASA Astrophysics Data System (ADS)
Moneta, Diana; Mora, Paolo; Viganò, Giacomo; Alimonti, Gianluca
2014-12-01
The diffusion of Distributed Generation (DG) based on Renewable Energy Sources (RES) requires new strategies to ensure reliable and economic operation of the distribution networks and to support the diffusion of DG itself. An advanced algorithm (DISCoVER - DIStribution Company VoltagE Regulator) is being developed to optimize the operation of active networks by means of advanced voltage control based on several regulations. Starting from forecasted load and generation, real on-field measurements, technical constraints and costs for each resource, the algorithm generates for each time period a set of commands for controllable resources that guarantees achievement of the technical goals while minimizing the overall cost. Before integrating the controller into the telecontrol system of the real networks, and in order to validate the proper behaviour of the algorithm and to identify possible critical conditions, a complete simulation phase has started. The first step concerns the definition of a wide range of "case studies": combinations of network topology, technical constraints and targets, load and generation profiles, and resource "costs" that define a valid context in which to test the algorithm, with particular focus on battery and RES management. First results achieved from the simulation activity on test networks (based on real MV grids) and actual battery characteristics are given, together with prospective performance in real applications.
Noniterative three-dimensional grid generation using parabolic partial differential equations
NASA Technical Reports Server (NTRS)
Edwards, T. A.
1985-01-01
A new algorithm for generating three-dimensional grids has been developed and implemented which numerically solves a parabolic partial differential equation (PDE). The solution procedure marches outward in two coordinate directions, and requires inversion of a scalar tridiagonal system in the third. Source terms have been introduced to control the spacing and angle of grid lines near the grid boundaries, and to control the outer boundary point distribution. The method has been found to generate grids about 100 times faster than comparable grids generated via solution of elliptic PDEs, and produces smooth grids for finite-difference flow calculations.
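The scalar tridiagonal inversion at the heart of the marching scheme is typically done with the Thomas algorithm, sketched below on a generic diagonally dominant system (the grid-generation source terms and marching loop are omitted).

```python
def thomas_solve(a, b, c, d):
    # Thomas algorithm for a tridiagonal system: a is the sub-diagonal
    # (a[0] unused), b the main diagonal, c the super-diagonal (c[-1] = 0),
    # d the right-hand side. This scalar solve is the per-step cost of a
    # parabolic marching scheme.
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Check on a known diagonally dominant system: -x_{i-1} + 4 x_i - x_{i+1} = 2
n = 5
a = [0.0] + [-1.0] * (n - 1)
b = [4.0] * n
c = [-1.0] * (n - 1) + [0.0]
d = [2.0] * n
x = thomas_solve(a, b, c, d)
residual = max(abs(a[i] * (x[i - 1] if i else 0.0) + b[i] * x[i]
                   + c[i] * (x[i + 1] if i < n - 1 else 0.0) - d[i])
               for i in range(n))
```

The O(n) cost of this solve, versus the iterative relaxation an elliptic system requires at every point, is the source of the roughly hundredfold speedup quoted in the abstract.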
Lv, Hui; Yu, Yonglin; Shu, Tan; Huang, Dexiu; Jiang, Shan; Barry, Liam P
2010-03-29
Photonic ultra-wideband (UWB) pulses are generated by direct current modulation of a semiconductor optical amplifier (SOA) section of an SOA-integrated sampled grating distributed Bragg reflector (SGDBR) laser. Modulation responses of the SOA section of the laser are first simulated with a microwave equivalent circuit model. Simulated results show a resonance behavior indicating the possibility to generate UWB signals with complex shapes in the time domain. The UWB pulse generation is then experimentally demonstrated for different selected wavelength channels with an SOA-integrated SGDBR laser.
NASA Astrophysics Data System (ADS)
Jiang, Huaiguang
With the evolution of energy and power systems, the emerging Smart Grid (SG) is mainly characterized by distributed renewable energy generation, demand-response control and huge amounts of heterogeneous data sources. Widely distributed synchrophasor sensors, such as phasor measurement units (PMUs) and fault disturbance recorders (FDRs), can record multi-modal signals for power system situational awareness and renewable energy integration. An effective and economical approach is proposed for wide-area security assessment. This approach is based on wavelet analysis for detecting and locating short-term and long-term faults in the SG, using voltage signals collected by distributed synchrophasor sensors. A data-driven approach for fault detection, identification and location is proposed and studied. This approach is based on matching pursuit decomposition (MPD) using a Gaussian atom dictionary, a hidden Markov model (HMM) of real-time frequency and voltage variation features, and fault contour maps generated by machine learning algorithms in SG systems. In addition, considering economic issues, the placement optimization of distributed synchrophasor sensors is studied to reduce the number of sensors without affecting the accuracy and effectiveness of the proposed approach. Furthermore, because natural hazards are a critical issue for power system security, this approach is studied under different types of faults caused by natural hazards. A fast steady-state approach is proposed for the voltage security of power systems with a wind power plant connected. The impedance matrix can be calculated from the voltage and current information collected by the PMUs. Based on the impedance matrix, the locations in the SG that have the greatest impact on the voltage at the wind power plant's point of interconnection can be identified.
Furthermore, because this dynamic voltage security assessment method relies on time-domain simulations of faults at different locations, the proposed approach is feasible, convenient and effective. Conventionally, wind energy is highly location-dependent. Many desirable wind resources are located in rural areas without direct access to the transmission grid. By connecting MW-scale wind turbines or wind farms to the distribution system of the SG, the cost of building long transmission facilities can be avoided and the wind power supplied to consumers can be greatly increased. After the effective wide-area monitoring (WAM) approach is built, an event-driven control strategy is proposed for renewable energy integration. This approach is based on a support vector machine (SVM) predictor and multiple-input multiple-output (MIMO) model predictive control (MPC) on linear time-invariant (LTI) and linear time-variant (LTV) systems. The voltage condition of the distribution system is predicted by the SVM classifier using synchrophasor measurement data. The controllers equipped with wind turbine generators are triggered by the prediction results. Controllers at both the transmission and distribution levels are designed based on this approach. Considering economic issues in the power system, a statistical scheduling approach to economic dispatch and energy reserves is proposed. The proposed approach focuses on minimizing the overall power operating cost while taking into account renewable energy uncertainty and power system security. The hybrid power system scheduling is formulated as a convex programming problem to minimize the power operating cost, taking into consideration renewable energy generation, power generation-consumption balance and power system security. A genetic algorithm based approach is used to solve the minimization of the power operating cost.
In addition, as technology develops, renewable energy sources such as wind turbine generators and PV panels can be expected to become pervasive in distribution systems. The distribution system is an unbalanced system containing single-phase, two-phase and three-phase loads and distribution lines. This complex configuration makes power flow calculation challenging. A topology-analysis-based iterative approach is used to solve this problem: a self-adaptive topology recognition method analyzes the distribution system, and the backward/forward sweep algorithm generates the power flow results. Finally, for the numerical simulations, the IEEE 14-bus, 30-bus, 39-bus and 118-bus systems are studied for fault detection, identification and location. Both transmission-level and distribution-level models are employed with the proposed control strategy for voltage stability of renewable energy integration. The simulation results demonstrate the effectiveness of the proposed methods. The IEEE 24-bus reliability test system (IEEE-RTS), which is commonly used for evaluating the price stability and reliability of power systems, is used as the test bench for verifying and evaluating the system performance of the proposed scheduling approach.
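The backward/forward sweep mentioned above can be sketched for a single-phase radial feeder, a simplification of the unbalanced multi-phase case; the bus numbering and constant-power load model here are illustrative assumptions:

```python
def backward_forward_sweep(v_source, line_z, loads, iters=30):
    """Backward/forward sweep power flow on a single-phase radial feeder.
    line_z[i]: series impedance of the segment feeding bus i+1;
    loads[i]: complex power drawn at bus i+1. Returns bus voltages
    (bus 0 is the source)."""
    n = len(loads)
    v = [v_source] * (n + 1)
    for _ in range(iters):
        # Backward sweep: accumulate load currents toward the source.
        inj = [(loads[i] / v[i + 1]).conjugate() for i in range(n)]
        branch, running = [0j] * n, 0j
        for i in range(n - 1, -1, -1):
            running += inj[i]
            branch[i] = running
        # Forward sweep: update voltages outward from the source.
        for i in range(n):
            v[i + 1] = v[i] - line_z[i] * branch[i]
    return v

# Two-segment feeder in per-unit, with identical loads at both buses.
v = backward_forward_sweep(1.0 + 0j, [0.01 + 0.02j] * 2, [0.3 + 0.1j] * 2)
```

Voltage magnitude drops monotonically along the feeder under load, as expected for a radial network; the multi-phase version sweeps each phase with mutual coupling terms added.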
NASA Astrophysics Data System (ADS)
Krylov, Alexander A.; Sazonkin, Stanislav G.; Lazarev, Vladimir A.; Dvoretskiy, Dmitriy A.; Leonov, Stanislav O.; Pnev, Alexey B.; Karasik, Valeriy E.; Grebenyukov, Vyacheslav V.; Pozharov, Anatoly S.; Obraztsova, Elena D.; Dianov, Evgeny M.
2015-06-01
We report, for the first time to the best of our knowledge, on ultra-short pulse (USP) generation in a dispersion-managed erbium-doped all-fiber ring laser hybridly mode-locked with boron-nitride-doped single-walled carbon nanotubes acting together with nonlinear polarization evolution in a ring cavity with a distributed polarizer. Stable 92.6 fs dechirped pulses were obtained via precise polarization state adjustment at a central wavelength of 1560 nm with 11.2 mW average output power, corresponding to a maximum peak power of 2.9 kW. We have also observed the laser switching from the USP generation regime to a chirped-pulse regime with a corresponding pulse width of 7.1 ps at the same intracavity dispersion.
NASA Astrophysics Data System (ADS)
Fairchild, A. J.; Chirayath, V. A.; Gladen, R. W.; Chrysler, M. D.; Koymen, A. R.; Weiss, A. H.
2017-01-01
In this paper, we present results of numerical modelling of the University of Texas at Arlington's time-of-flight positron annihilation induced Auger electron spectrometer (UTA TOF-PAES) using the SIMION® 8.1 Ion and Electron Optics Simulator. The time-of-flight (TOF) spectrometer measures the energy of electrons emitted from the surface of a sample as a result of the interaction of low energy positrons with the sample surface. We have used SIMION® 8.1 to calculate the time-of-flight spectra of electrons leaving the sample surface with energies and angles dispersed according to distribution functions chosen to model the positron-induced electron emission process, and have thus obtained an estimate of the true electron energy distribution. The simulated TOF distribution was convolved with a Gaussian timing resolution function and compared to the experimental distribution. The broadening observed in the simulated TOF spectra was found to be consistent with that observed in the experimental secondary electron spectra of Cu generated by positrons incident with energies from 1.5 eV to 901 eV, when a timing resolution of 2.3 ns was assumed.
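The convolution step described above (a simulated spectrum smeared by a Gaussian timing-resolution function) can be sketched as a plain discrete convolution; the bin width and sigma below are illustrative, not the instrument's values:

```python
import math

def gaussian_kernel(sigma_bins, half_width):
    """Normalized Gaussian response kernel sampled at integer bin offsets."""
    w = [math.exp(-0.5 * (k / sigma_bins) ** 2)
         for k in range(-half_width, half_width + 1)]
    s = sum(w)
    return [x / s for x in w]

def convolve(spectrum, kernel):
    """Discrete convolution of a binned TOF spectrum with a response kernel."""
    h = len(kernel) // 2
    out = []
    for i in range(len(spectrum)):
        acc = 0.0
        for j, w in enumerate(kernel):
            k = i + j - h
            if 0 <= k < len(spectrum):
                acc += spectrum[k] * w
        out.append(acc)
    return out

# A delta-like peak is broadened while its total intensity is preserved.
spec = [0.0] * 20 + [100.0] + [0.0] * 20
smeared = convolve(spec, gaussian_kernel(2.3, 10))
```

Because the kernel is normalized, total counts are conserved away from the spectrum edges, which is the property one checks before comparing simulated and measured spectra.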
NASA Astrophysics Data System (ADS)
Alyami, Saeed
Installation of photovoltaic (PV) units can pose great challenges to existing electrical systems. Issues such as voltage rise, protection coordination, islanding detection, harmonics, and increased or changed short-circuit levels need to be carefully addressed before we can see wide adoption of this environmentally friendly technology. Voltage rise or overvoltage issues are of particular importance to address for deploying more PV systems in distribution networks. This dissertation proposes a comprehensive solution to deal with voltage violations in distribution networks, from controlling PV power outputs and the electricity consumption of smart appliances in real time to optimal placement of PVs at the planning stage. The dissertation is composed of three parts: the literature review, the work that has already been done, and the future research tasks. An overview of renewable energy generation and its challenges is given in Chapter 1. The overall literature survey, motivation and scope of the study are also outlined in that chapter. Detailed literature reviews are given in the remaining chapters. The overvoltage and undervoltage phenomena in typical distribution networks with integration of PVs are further explained in Chapter 2. Possible approaches for voltage quality control are also discussed in this chapter, followed by a discussion of the importance of load management for PHEVs and appliances and its benefits to electric utilities and end users. A new real power capping method is presented in Chapter 3 to prevent overvoltage by adaptively setting the power caps for PV inverters in real time. The proposed method can maintain voltage profiles below a pre-set upper limit while maximizing PV generation and fairly distributing the real power curtailments among all the PV systems in the network.
As a result, each of the PV systems in the network has an equal opportunity to generate electricity and shares the responsibility of voltage regulation. The method does not require global information and can be implemented either under a centralized supervisory control scheme or in a distributed way via consensus control. Chapter 4 investigates autonomous operation schedules for three types of intelligent appliances (or residential controllable loads) that receive no external signals, for cost saving and for assisting the management of photovoltaic generation systems installed in the same distribution network. The three types of controllable loads studied in the chapter are electric water heaters, refrigerator deicing loads, and dishwashers. Chapter 5 investigates a method to mitigate overvoltage issues at the planning stage. A probabilistic method is presented in the chapter to evaluate the overvoltage risk in a distribution network with different PV capacity sizes under different load levels. The Kolmogorov-Smirnov (K-S) test is used to identify the most appropriate probability distributions for solar irradiance in different months. To increase accuracy, an iterative process is used to obtain the maximum allowable injection of active power from PVs. Conclusions and a discussion of future work are given in Chapter 6.
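A minimal sketch of "fair" curtailment, read here as equal-fraction capping (the chapter's adaptive real-time method is more involved; function and parameter names are illustrative):

```python
def fair_power_caps(available_kw, feeder_limit_kw):
    """Cap each PV inverter at the same fraction of its available power so
    that a feeder-level generation limit is met and the curtailment burden
    is shared equally among all PV systems."""
    total = sum(available_kw)
    if total <= feeder_limit_kw:
        return list(available_kw)          # no curtailment needed
    frac = feeder_limit_kw / total
    return [p * frac for p in available_kw]

# 10 kW available in total, but the feeder can only absorb 8 kW:
caps = fair_power_caps([5.0, 3.0, 2.0], 8.0)  # every inverter keeps 80%
```

Equal-fraction sharing is one plausible fairness criterion; the dissertation's method additionally adapts the caps online from measured voltages.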
Bayesian analysis of energy and count rate data for detection of low count rate radioactive sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klumpp, John
We propose a radiation detection system which generates its own discrete sampling distribution based on past measurements of background. The advantage of this approach is that it can take into account variations in background with respect to time, location, energy spectra, detector-specific characteristics (i.e., different efficiencies at different count rates and energies), etc. This would therefore be a 'machine learning' approach, in which the algorithm updates and improves its characterization of background over time. The system would have a 'learning mode,' in which it measures and analyzes background count rates, and a 'detection mode,' in which it compares measurements from an unknown source against its unique background distribution. By characterizing and accounting for variations in the background, general purpose radiation detectors can be improved with little or no increase in cost. The statistical and computational techniques to perform this kind of analysis have already been developed. The necessary signal analysis can be accomplished using existing Bayesian algorithms which account for multiple channels, multiple detectors, and multiple time intervals. Furthermore, Bayesian machine-learning techniques have already been developed which, with trivial modifications, can generate appropriate decision thresholds based on the comparison of new measurements against a nonparametric sampling distribution.
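A minimal sketch of such a self-learned, nonparametric background model; the percentile-threshold rule and false-alarm level are illustrative assumptions, not the authors' algorithm:

```python
import bisect

class BackgroundModel:
    """Learns an empirical (discrete) distribution of background counts and
    flags measurements above a chosen upper percentile of it: the 'learning
    mode' / 'detection mode' split described in the abstract."""

    def __init__(self):
        self.counts = []          # kept sorted for percentile lookups

    def learn(self, count):
        bisect.insort(self.counts, count)

    def threshold(self, false_alarm=0.001):
        """Learned count level with roughly `false_alarm` mass above it."""
        idx = min(len(self.counts) - 1,
                  int((1.0 - false_alarm) * len(self.counts)))
        return self.counts[idx]

    def is_anomalous(self, count, false_alarm=0.001):
        return count > self.threshold(false_alarm)

bg = BackgroundModel()
for c in range(100):              # stand-in for measured background counts
    bg.learn(c)
```

Because the threshold comes from the measured distribution itself, it automatically tracks site-, time-, and detector-specific background variations as more data are learned.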
Simulation of an ensemble of future climate time series with an hourly weather generator
NASA Astrophysics Data System (ADS)
Caporali, E.; Fatichi, S.; Ivanov, V. Y.; Kim, J.
2010-12-01
There is evidence that climate change is occurring in many regions of the world. The necessity of climate change predictions at the local scale and fine temporal resolution is thus warranted for hydrological, ecological, geomorphological, and agricultural applications that can provide thematic insights into the corresponding impacts. Numerous downscaling techniques have been proposed to bridge the gap between the spatial scales adopted in General Circulation Models (GCM) and regional analyses. Nevertheless, the time and spatial resolutions obtained as well as the type of meteorological variables may not be sufficient for detailed studies of climate change effects at the local scales. In this context, this study presents a stochastic downscaling technique that makes use of an hourly weather generator to simulate time series of predicted future climate. Using a Bayesian approach, the downscaling procedure derives distributions of factors of change for several climate statistics from a multi-model ensemble of GCMs. Factors of change are sampled from their distributions using a Monte Carlo technique to entirely account for the probabilistic information obtained with the Bayesian multi-model ensemble. Factors of change are subsequently applied to the statistics derived from observations to re-evaluate the parameters of the weather generator. The weather generator can reproduce a wide set of climate variables and statistics over a range of temporal scales, from extremes, to the low-frequency inter-annual variability. The final result of such a procedure is the generation of an ensemble of hourly time series of meteorological variables that can be considered as representative of future climate, as inferred from GCMs. The generated ensemble of scenarios also accounts for the uncertainty derived from multiple GCMs used in downscaling. 
Applications of the procedure in reproducing present and future climates are presented for different locations world-wide: Tucson (AZ), Detroit (MI), and Firenze (Italy). The stochastic downscaling is carried out with eight GCMs from the CMIP3 multi-model dataset (IPCC 4AR, A1B scenario).
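The factor-of-change step can be sketched as follows; the empirical factor samples stand in for the Bayesian multi-model posterior, and all names are illustrative:

```python
import random

def future_stat_ensemble(observed_stat, factor_samples, n_draws=1000, seed=0):
    """Monte Carlo application of factors of change: each draw multiplies the
    observed climate statistic by a factor sampled from its distribution,
    yielding an ensemble of plausible future values of that statistic."""
    rng = random.Random(seed)
    return [observed_stat * rng.choice(factor_samples) for _ in range(n_draws)]

# e.g. an observed mean temperature of 24.0 °C and GCM-derived factors near 1.1
ensemble = future_stat_ensemble(24.0, [1.05, 1.10, 1.15])
```

In the actual procedure the resulting statistics re-parameterize the weather generator, so the ensemble spread carries the multi-GCM uncertainty into the hourly series.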
NASA Astrophysics Data System (ADS)
Orans, Ren
1990-10-01
Existing procedures used to develop marginal costs for electric utilities were not designed for applications in an increasingly competitive market for electric power. The utility's value of receiving power, or the costs of selling power, however, depend on the exact location of the buyer or seller, the magnitude of the power and the period of time over which the power is used. Yet no electric utility in the United States has disaggregate marginal costs that reflect differences in costs due to the time, size or location of the load associated with their power or energy transactions. The existing marginal costing methods used by electric utilities were developed in response to the Public Utilities Regulatory Policy Act (PURPA) in 1978. The "ratemaking standards" (Title 1) established by PURPA were primarily concerned with the appropriate segmentation of total revenues to various classes-of-service, designing time-of-use rating periods, and the promotion of efficient long-term resource planning. By design, the methods were very simple and inexpensive to implement. Now, more than a decade later, the costing issues facing electric utilities are becoming increasingly complex, and the benefits of developing more specific marginal costs will outweigh the costs of developing this information in many cases. This research develops a framework for estimating total marginal costs that vary by the size, timing, and the location of changes in loads within an electric distribution system. To complement the existing work at the Electric Power Research Institute (EPRI) and Pacific Gas and Electric Company (PG&E) on estimating disaggregate generation and transmission capacity costs, this dissertation focuses on the estimation of distribution capacity costs.
While the costing procedure is suitable for the estimation of total (generation, transmission and distribution) marginal costs, the empirical work focuses on the geographic disaggregation of marginal costs related to electric utility distribution investment. The study makes use of data from an actual distribution planning area, located within PG&E's service territory, to demonstrate the important characteristics of this new costing approach. The most significant result of this empirical work is that geographic differences in the cost of capacity in distribution systems can be as much as four times larger than the current system average utility estimates. Furthermore, lumpy capital investment patterns can lead to significant cost differences over time.
Lee, David; Lee, Joshua; Lee, Imshik
2015-01-01
The locomotor behavior of small fish was characterized under a cell phone-generated radio frequency electromagnetic field (RF EMF). The trajectories of 10 pairs of guppies (Poecilia reticulata) and 15 pairs of zebrafish (Danio rerio) in a fish tank were recorded and tracked in the presence of a cell phone-generated RF EMF. The measures were based on spatial and temporal distributions. A time-series trajectory was utilized to emphasize the dynamic nature of locomotor behavior. Fish movement was recorded in real time. Their spatial, velocity, turning angle and sinuosity distributions were analyzed in terms of F(v,x), P[n(x,t)], P(v), F(θ) and F(s), respectively. In addition, potential temperature elevation caused by a cellular phone was also examined. We demonstrated that cellular phone-induced temperature elevation was not relevant, and that our measurements reflected RF EMF-induced effects on the locomotor behavior of Poecilia reticulata and Danio rerio. Fish locomotion was observed under normal conditions, in the visual presence of a cell phone, after feeding, and under starvation. Fish locomotor behavior was random both under normal conditions and in the presence of a cell phone with its signal off. However, there were significant changes in the locomotion of the fish after feeding under the RF EMF. The locomotion of the fed fish was affected in terms of changes in population and velocity distributions in the presence of the RF EMF emitted by the cell phone. There was, however, no significant difference in angular distribution.
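Descriptors such as sinuosity and turning angle can be computed directly from the tracked (x, y) samples; a minimal sketch, assuming a uniform sampling interval:

```python
import math

def path_metrics(points):
    """Total path length, net displacement, and sinuosity
    (path length / net displacement) of a tracked 2-D trajectory."""
    path = sum(math.dist(points[i], points[i + 1])
               for i in range(len(points) - 1))
    net = math.dist(points[0], points[-1])
    return path, net, (path / net if net > 0 else float("inf"))

def turning_angles(points):
    """Signed turning angle (radians, in (-pi, pi]) at each interior point,
    i.e. the change in heading between consecutive trajectory steps."""
    headings = [math.atan2(q[1] - p[1], q[0] - p[0])
                for p, q in zip(points, points[1:])]
    return [(b - a + math.pi) % (2 * math.pi) - math.pi
            for a, b in zip(headings, headings[1:])]
```

Histograms of these per-step quantities are exactly the kinds of empirical distributions, F(s) and F(θ), compared across the experimental conditions.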
Chen, Hua; Chen, Kun
2013-01-01
The distributions of coalescence times and ancestral lineage numbers play an essential role in coalescent modeling and ancestral inference. Both exact distributions of coalescence times and ancestral lineage numbers are expressed as the sum of alternating series, and the terms in the series become numerically intractable for large samples. More computationally attractive are their asymptotic distributions, which were derived in Griffiths (1984) for populations with constant size. In this article, we derive the asymptotic distributions of coalescence times and ancestral lineage numbers for populations with temporally varying size. For a sample of size n, denote by Tm the mth coalescent time, when m + 1 lineages coalesce into m lineages, and An(t) the number of ancestral lineages at time t back from the current generation. Similar to the results in Griffiths (1984), the number of ancestral lineages, An(t), and the coalescence times, Tm, are asymptotically normal, with the mean and variance of these distributions depending on the population size function, N(t). At the very early stage of the coalescent, when t → 0, the number of coalesced lineages n − An(t) follows a Poisson distribution, and as m → n, n(n−1)Tm/2N(0) follows a gamma distribution. We demonstrate the accuracy of the asymptotic approximations by comparing to both exact distributions and coalescent simulations. Several applications of the theoretical results are also shown: deriving statistics related to the properties of gene genealogies, such as the time to the most recent common ancestor (TMRCA) and the total branch length (TBL) of the genealogy, and deriving the allele frequency spectrum for large genealogies. With the advent of genomic-level sequencing data for large samples, the asymptotic distributions are expected to have wide applications in theoretical and methodological development for population genetic inference.
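In the constant-size case these results reduce to Kingman's coalescent, which is straightforward to simulate; a sketch verifying the classical expectation E[T_MRCA] = 2(1 − 1/n), with time in units of 2N generations:

```python
import random

def sample_tmrca(n, rng):
    """Kingman coalescent, constant population size: while k lineages
    remain, the waiting time to the next coalescence is exponential with
    rate k(k-1)/2 (time measured in units of 2N generations)."""
    t = 0.0
    for k in range(n, 1, -1):
        t += rng.expovariate(k * (k - 1) / 2.0)
    return t

rng = random.Random(42)
mean_tmrca = sum(sample_tmrca(20, rng) for _ in range(20000)) / 20000
# theory: E[TMRCA] = 2 * (1 - 1/20) = 1.9
```

The varying-size generalization in the article rescales these exponential waiting times through the population size function N(t), which is what drives the means and variances of the asymptotic normal distributions.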
A Comparison of Forecast Error Generators for Modeling Wind and Load Uncertainty
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Ning; Diao, Ruisheng; Hafen, Ryan P.
2013-07-25
This paper presents four algorithms to generate random forecast error time series, and compares their performance. The error time series are used to create real-time (RT), hour-ahead (HA), and day-ahead (DA) wind and load forecast time series that statistically match historically observed forecasting data sets used in power grid operation, in order to study the net load balancing need in variable generation integration studies. The four algorithms are truncated-normal distribution models, state-space based Markov models, seasonal autoregressive moving average (ARMA) models, and a stochastic-optimization based approach. The comparison is made by using historical DA load forecasts and actual load values to generate new sets of DA forecasts with similar statistical forecast error characteristics (i.e., mean, standard deviation, autocorrelation, and cross-correlation). The results show that all methods generate satisfactory results. One method may preserve one or two required statistical characteristics better than the other methods, but may not preserve other statistical characteristics as well as the other methods do. Because the wind and load forecast error generators are used in wind integration studies to produce wind and load forecast time series for stochastic planning processes, it is sometimes critical to use multiple methods to generate the error time series to obtain a statistically robust result. Therefore, this paper discusses and compares the capabilities of each algorithm to preserve the characteristics of the historical forecast data sets.
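An AR(1) recursion is the simplest member of the ARMA family compared here; a sketch that targets a chosen standard deviation and lag-1 autocorrelation (parameter names are illustrative, and real seasonal ARMA models carry more terms):

```python
import random

def ar1_error_series(n, target_std, rho, seed=1):
    """Synthetic forecast-error series with marginal standard deviation
    `target_std` and lag-1 autocorrelation `rho`, via the AR(1) recursion
    e[t] = rho*e[t-1] + w[t]; the innovation standard deviation is scaled
    by sqrt(1 - rho^2) so the marginal variance stays at target_std^2."""
    rng = random.Random(seed)
    innov_std = target_std * (1.0 - rho * rho) ** 0.5
    e = [rng.gauss(0.0, target_std)]
    for _ in range(n - 1):
        e.append(rho * e[-1] + rng.gauss(0.0, innov_std))
    return e

errors = ar1_error_series(8760, target_std=0.03, rho=0.8)  # one year, hourly
```

Adding such a series to actual load or wind values yields synthetic DA/HA forecasts whose error statistics match the targets, which is the role the generators play in the integration studies.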
Metal Nanoshells for Plasmonically Enhanced Solar to Fuel Photocatalytic Conversion
2016-05-18
but are still under development. Scheme 2: Strategy for the Synthesis of Tin Oxide-Coated Gold-Silver Nanoshells. Distribution Statement A: approved for public release, distribution unlimited. First thrust: gold-silver nanoshells... an interlayer of ~17 nm generated a rate of hydrogen production 2.6 times higher than that of unmodified ZIS. Second thrust: tin oxide-coated gold-silver nanoshells...
Understanding Local Structure Globally in Earth Science Remote Sensing Data Sets
NASA Technical Reports Server (NTRS)
Braverman, Amy; Fetzer, Eric
2007-01-01
Empirical probability distributions derived from the data are the signatures of the physical processes generating the data. Distributions defined on different space-time windows can be compared, and differences or changes can be attributed to physical processes. This presentation discusses ways of reducing remote sensing data that preserve information, focusing on rate-distortion theory and the entropy-constrained vector quantization algorithm.
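A hedged sketch of the underlying idea: plain Lloyd (k-means style) quantization of 1-D data. The entropy-constrained variant used in the presentation adds a code-length penalty to the assignment step, which is omitted here:

```python
def lloyd_quantizer(data, init_codebook, iters=25):
    """Lloyd's algorithm for a 1-D quantizer: alternately assign each sample
    to its nearest code value, then move each code value to the mean of its
    assigned cell, reducing mean squared distortion at every step."""
    code = list(init_codebook)
    for _ in range(iters):
        cells = [[] for _ in code]
        for x in data:
            j = min(range(len(code)), key=lambda i: abs(x - code[i]))
            cells[j].append(x)
        code = [sum(c) / len(c) if c else code[i]
                for i, c in enumerate(cells)]
    return sorted(code)

# Two well-separated clusters collapse onto their means.
codes = lloyd_quantizer([0.0, 0.5, 1.0, 9.0, 9.5, 10.0], [2.0, 8.0])
```

The resulting codebook is a compact summary of the empirical distribution, which is what makes comparisons across space-time windows cheap.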
Peculiarities of gamma-quanta distribution at 20 TeV energy
NASA Technical Reports Server (NTRS)
Ermakov, P. M.; Loktionov, A. A.; Lukin, Y. T.; Sadykov, T. K.
1985-01-01
The angular distribution of protons from the fragmentation region is analyzed. The gamma-quanta families are generated in a dense target by cosmic ray particles at 20 TeV energy. Families were found that had dense groups (spikes) of gamma-quanta in which the rapidity density is 3 times higher than the average value determined for all registered families. The experimental data are compared with the results of artificial family simulation.
Beatty, William; Jay, Chadwick V.; Fischbach, Anthony S.
2016-01-01
State-space models offer researchers an objective approach to modeling complex animal location data sets, and state-space model behavior classifications are often assumed to have a link to animal behavior. In this study, we evaluated the behavioral classification accuracy of a Bayesian state-space model in Pacific walruses using Argos satellite tags with sensors to detect animal behavior in real time. We fit a two-state discrete-time continuous-space Bayesian state-space model to data from 306 Pacific walruses tagged in the Chukchi Sea. We matched predicted locations and behaviors from the state-space model (resident, transient behavior) to true animal behavior (foraging, swimming, hauled out) and evaluated classification accuracy with kappa statistics (κ) and root mean square error (RMSE). In addition, we compared biased random bridge utilization distributions generated with resident behavior locations to true foraging behavior locations to evaluate differences in space use patterns. Results indicated that the two-state model fairly classified true animal behavior (0.06 ≤ κ ≤ 0.26, 0.49 ≤ RMSE ≤ 0.59). Kernel overlap metrics indicated utilization distributions generated with resident behavior locations were generally smaller than utilization distributions generated with true foraging behavior locations. Consequently, we encourage researchers to carefully examine parameters and priors associated with behaviors in state-space models, and reconcile these parameters with the study species and its expected behaviors.
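The kappa statistic used for this evaluation corrects raw agreement for chance agreement; a sketch computing it from a confusion matrix (rows = true behavior, columns = predicted state; the example counts are illustrative):

```python
def cohens_kappa(confusion):
    """Cohen's kappa: (observed agreement - chance agreement) / (1 - chance),
    computed from a square confusion matrix of counts. 1.0 is perfect
    agreement; 0.0 is what chance alone would produce."""
    n = float(sum(sum(row) for row in confusion))
    p_obs = sum(confusion[i][i] for i in range(len(confusion))) / n
    p_chance = sum(sum(confusion[i]) * sum(row[i] for row in confusion)
                   for i in range(len(confusion))) / (n * n)
    return (p_obs - p_chance) / (1.0 - p_chance)

# e.g. resident/transient states vs. true foraging/not-foraging labels
kappa = cohens_kappa([[40, 10], [10, 40]])
```

Values in the 0.06-0.26 range reported above sit near the chance-agreement floor, which is why the authors urge caution in equating model states with behaviors.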
Estimation of Parameters from Discrete Random Nonstationary Time Series
NASA Astrophysics Data System (ADS)
Takayasu, H.; Nakamura, T.
For the analysis of nonstationary stochastic time series we introduce a formulation to estimate the underlying time-dependent parameters. This method is designed for random events with small counts that lie outside the applicability range of the normal distribution. The method is demonstrated on numerical data generated by a known system, and applied to time series of traffic accidents, the batting average of a baseball player, and sales volumes of home electronics.
2012-04-01
time, crystal frequency, temperature, and headspace oxygen concentration. Approved for public release; distribution unlimited. C-4. Fuels: In...at ambient pressure. At this point the heater, which is set at 140 °C, is turned on and computer data acquisition is begun. The run time, crystal frequency
Provably secure and high-rate quantum key distribution with time-bin qudits
Islam, Nurul T.; Lim, Charles Ci Wen; Cahall, Clinton; Kim, Jungsang; Gauthier, Daniel J.
2017-11-24
The security of conventional cryptography systems is threatened in the forthcoming era of quantum computers. Quantum key distribution (QKD) features fundamentally proven security and offers a promising option for a quantum-proof cryptography solution. Although prototype QKD systems over optical fiber have been demonstrated over the years, the key generation rates remain several orders of magnitude lower than current classical communication systems. In an effort toward a commercially viable QKD system with improved key generation rates, we developed a discrete-variable QKD system based on time-bin quantum photonic states that can generate provably secure cryptographic keys at megabit-per-second rates over metropolitan distances. We use high-dimensional quantum states that transmit more than one secret bit per received photon, alleviating detector saturation effects in the superconducting nanowire single-photon detectors used in our system, which feature very high detection efficiency (of more than 70%) and low timing jitter (of less than 40 ps). Our system is constructed using commercial off-the-shelf components, and the adopted protocol can be readily extended to free-space quantum channels. The security analysis adopted to distill the keys ensures that the demonstrated protocol is robust against coherent attacks, finite-size effects, and a broad class of experimental imperfections identified in our system.
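For orientation, a textbook asymptotic secret fraction for a d-dimensional BB84-style protocol is log2(d) minus the error leakage in both bases; this is a simplification for illustration only, not the paper's finite-key analysis against coherent attacks:

```python
import math

def qudit_entropy(e, d):
    """d-dimensional generalization of the binary entropy of error rate e
    (errors assumed uniform over the d-1 wrong outcomes)."""
    if e <= 0.0:
        return 0.0
    return -e * math.log2(e / (d - 1)) - (1 - e) * math.log2(1 - e)

def asymptotic_key_rate(e, d):
    """Idealized secret bits per sifted photon: log2(d) - 2*H_d(e)."""
    return math.log2(d) - 2.0 * qudit_entropy(e, d)

# Error-free time-bin qudits with d = 4 carry 2 secret bits per photon,
# which is the "more than one secret bit per received photon" advantage.
```

The higher zero-error capacity of qudits is what lets the system stay below detector saturation while keeping the key rate high.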
Modern Methods for fast generation of digital holograms
NASA Astrophysics Data System (ADS)
Tsang, P. W. M.; Liu, J. P.; Cheung, K. W. K.; Poon, T.-C.
2010-06-01
With the advancement of computers, digital holography (DH) has become an area of interest that has gained much popularity. Research findings derived from this technology enable holograms representing three-dimensional (3-D) scenes to be acquired with optical means, or generated with numerical computation. In both cases, the holograms are in the form of numerical data that can be recorded, transmitted, and processed with digital techniques. On top of that, the availability of high-capacity digital storage and wide-band communication technologies also points to the emergence of real-time video holographic systems, enabling animated 3-D contents to be encoded as holographic data and distributed via existing media. At present, development in DH has reached a reasonable degree of maturity, but at the same time the heavy computation involved imposes difficulty in practical applications. In this paper, a summary of a number of recent successful accomplishments in overcoming this problem is presented. Subsequently, we propose an economical framework that is suitable for real-time generation and transmission of holographic video signals over existing distribution media. The proposed framework includes an aspect of extending the depth range of the object scene, which is important for the display of large-scale objects.
Time trends in recurrence of juvenile nasopharyngeal angiofibroma: Experience of the past 4 decades.
Mishra, Anupam; Mishra, Subhash Chandra
2016-01-01
An analysis of the time distribution of juvenile nasopharyngeal angiofibroma (JNA) recurrences from the last 4 decades is presented. Sixty recurrences were analyzed by actuarial survival methods. SPSS software was used to generate Kaplan-Meier (KM) curves, and time distributions were compared with the log-rank, Breslow, and Tarone-Ware tests. The overall recurrence rate was 17.59%. The majority underwent open transpalatal approach(es) without embolization. The probability of detecting a recurrence was 95% in the first 24 months, and the comparison of KM curves across 4 different time periods was not significant. This is the first and largest series to address the time distribution. The required follow-up period is 2 years. Our recurrence rate is just half that of the largest series reported so far, suggesting the superiority of transpalatal techniques. The similarity of the curves suggests that recent technical advances are unlikely to influence recurrence, which, per our hypothesis, more likely reflects tumor biology per se.
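The Kaplan-Meier product-limit estimator behind the KM curves above can be sketched in a few lines of numpy. This is a minimal illustrative implementation, not the SPSS routine used in the paper; the toy data are invented.

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate.

    times  : follow-up time for each patient
    events : 1 if recurrence observed, 0 if censored
    Returns (distinct event times, survival probabilities).
    """
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)

    uniq = np.unique(times[events == 1])      # distinct event times
    surv, s = [], 1.0
    for t in uniq:
        at_risk = np.sum(times >= t)          # still under follow-up at t
        d = np.sum((times == t) & (events == 1))
        s *= 1.0 - d / at_risk                # product-limit update
        surv.append(s)
    return uniq, np.array(surv)

# toy example: 6 patients, two censored (times in months, invented)
t_ev, s_hat = kaplan_meier([3, 5, 5, 8, 12, 16], [1, 1, 0, 1, 0, 1])
```

Two estimated curves would then be compared with a log-rank-style test, as the paper does across its four time periods.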
On the Use of the Beta Distribution in Probabilistic Resource Assessments
Olea, R.A.
2011-01-01
The triangular distribution is a popular choice when it comes to modeling bounded continuous random variables. Its wide acceptance derives mostly from its simple analytic properties and the ease with which modelers can specify its three parameters through the extremes and the mode. On the negative side, hardly any real process follows a triangular distribution, which from the outset puts at a disadvantage any model employing triangular distributions. At a time when numerical techniques such as the Monte Carlo method are displacing analytic approaches in stochastic resource assessments, easy specification remains the most attractive characteristic of the triangular distribution. The beta distribution is another continuous distribution defined within a finite interval offering wider flexibility in style of variation, thus allowing consideration of models in which the random variables closely follow the observed or expected styles of variation. Despite its more complex definition, generation of values following a beta distribution is as straightforward as generating values following a triangular distribution, leaving the selection of parameters as the main impediment to practically considering beta distributions. This contribution intends to promote the acceptance of the beta distribution by explaining its properties and offering several suggestions to facilitate the specification of its two shape parameters. In general, given the same distributional parameters, use of the beta distribution in stochastic modeling may yield significantly different results, yet better estimates, than the triangular distribution.
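As a sketch of the point that beta sampling is as easy as triangular sampling: the snippet below draws both from the same elicited minimum, mode, and maximum. The min/mode/max-to-shape mapping shown is the PERT heuristic, one common convention and an assumption here, not a rule prescribed by the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
lo, mode, hi = 2.0, 5.0, 12.0            # expert-elicited extremes and mode

# Triangular: parameters are exactly the elicited values.
tri = rng.triangular(lo, mode, hi, size=100_000)

# Beta on [lo, hi]: shape parameters via the PERT heuristic
# (one common way to map min/mode/max to alpha and beta).
alpha = 1.0 + 4.0 * (mode - lo) / (hi - lo)
beta = 1.0 + 4.0 * (hi - mode) / (hi - lo)
bet = lo + (hi - lo) * rng.beta(alpha, beta, size=100_000)
```

Both samples respect the bounds; the beta version trades the triangular kink at the mode for a smooth density whose shape can be tuned further through alpha and beta.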
Distributed Generation of Electricity and its Environmental Impacts
Distributed generation refers to technologies that generate electricity at or near where it will be used. Learn about how distributed energy generation can support the delivery of clean, reliable power to additional customers.
Statistical characteristics of surrogate data based on geophysical measurements
NASA Astrophysics Data System (ADS)
Venema, V.; Bachner, S.; Rust, H. W.; Simmer, C.
2006-09-01
In this study, the statistical properties of a range of measurements are compared with those of their surrogate time series. Seven different records are studied, amongst others, historical time series of mean daily temperature, daily rain sums and runoff from two rivers, and cloud measurements. Seven different algorithms are used to generate the surrogate time series. The best-known method is the iterative amplitude adjusted Fourier transform (IAAFT) algorithm, which is able to reproduce the measured distribution as well as the power spectrum. Using this setup, the measurements and their surrogates are compared with respect to their power spectrum, increment distribution, structure functions, annual percentiles and return values. It is found that the surrogates that reproduce the power spectrum and the distribution of the measurements are able to closely match the increment distributions and the structure functions of the measurements, but this often does not hold for surrogates that only mimic the power spectrum of the measurement. However, even the best performing surrogates do not have asymmetric increment distributions, i.e., they cannot reproduce nonlinear dynamical processes that are asymmetric in time. Furthermore, we have found deviations of the structure functions on small scales.
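A minimal IAAFT implementation can be written with numpy alone: alternately impose the target amplitude spectrum and the target value distribution. This is a bare-bones sketch of the standard algorithm (function names and the fixed iteration count are choices made here, not taken from the paper).

```python
import numpy as np

def iaaft(x, n_iter=100, seed=0):
    """Iterative amplitude adjusted Fourier transform surrogate.

    Returns a series with exactly the sample distribution of x and
    approximately its amplitude spectrum.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    target_amp = np.abs(np.fft.rfft(x))       # spectrum to match
    sorted_x = np.sort(x)                     # distribution to match
    s = rng.permutation(x)                    # random start
    for _ in range(n_iter):
        # 1) impose the target amplitude spectrum, keep current phases
        phases = np.angle(np.fft.rfft(s))
        s = np.fft.irfft(target_amp * np.exp(1j * phases), n=len(x))
        # 2) impose the target distribution by rank ordering
        ranks = np.argsort(np.argsort(s))
        s = sorted_x[ranks]
    return s

# toy "measurement": a random walk standing in for a geophysical record
x = np.cumsum(np.random.default_rng(1).normal(size=1024))
sur = iaaft(x)
```

Because the rank-ordering step runs last, the surrogate's sorted values equal the measurement's exactly, while the spectrum is matched only approximately, which is the trade-off the study examines.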
On the multiple depots vehicle routing problem with heterogeneous fleet capacity and velocity
NASA Astrophysics Data System (ADS)
Hanum, F.; Hartono, A. P.; Bakhtiar, T.
2018-03-01
This manuscript concerns the optimization problem arising in route determination for product distribution. The problem is formulated as a multiple-depot, time-windowed vehicle routing problem with heterogeneous fleet capacity and velocity. The model includes a number of constraints, such as route continuity, multiple-depot availability, and serving time, in addition to generic constraints. To deal with the unique feature of heterogeneous velocity, we generate a number of velocity profiles along the road segments, which are then converted into traveling-time tables. An illustrative example of rice distribution among villages by a bureau of logistics is provided. An exact approach is used to determine the optimal solution in terms of vehicle routes and service start times.
NASA Astrophysics Data System (ADS)
Simonsen, I.; Jensen, M. H.; Johansen, A.
2002-06-01
In stochastic finance, one traditionally considers the return as a competitive measure of an asset, i.e., the profit generated by that asset after some fixed time span Δt, say one week or one year. This measures how well (or how badly) the asset performs over that given period of time. It has been established that the distribution of returns exhibits "fat tails", indicating that large returns occur more frequently than expected from standard Gaussian stochastic processes [1-3]. Instead of estimating this "fat tail" distribution of returns, we propose here an alternative approach, outlined by the following question: what is the smallest time interval needed for an asset to cross a fixed return level of, say, 10%? For a particular asset, we refer to this time as the investment horizon and the corresponding distribution as the investment horizon distribution. This latter distribution complements that of returns and provides new and possibly crucial information for portfolio design and risk management, as well as for pricing of more exotic options. By considering historical financial data, exemplified by the Dow Jones Industrial Average, we obtain a novel set of probability distributions for the investment horizons, which can be used to estimate the optimal investment horizon for a stock or a futures contract.
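The investment-horizon question is a first-passage-time computation, which is easy to sketch. The example below uses a simulated geometric random walk as a stand-in for an index such as the DJIA (an assumption; the paper uses the historical record), and collects, for every starting day, the first time the log-return exceeds the target level.

```python
import numpy as np

def investment_horizons(prices, level=0.10):
    """For each starting day, the number of days until the log-return
    first reaches +level; starts that never cross are skipped."""
    logp = np.log(prices)
    horizons = []
    for i in range(len(logp) - 1):
        crossed = np.nonzero(logp[i + 1:] - logp[i] >= level)[0]
        if crossed.size:
            horizons.append(crossed[0] + 1)   # horizon in days
    return np.array(horizons)

rng = np.random.default_rng(7)
# toy price series with small daily drift, standing in for an index
prices = 100.0 * np.exp(np.cumsum(rng.normal(5e-4, 0.01, size=5000)))
h = investment_horizons(prices, level=0.10)
```

The histogram of `h` is the investment horizon distribution; its most probable value plays the role of the "optimal investment horizon" discussed in the abstract.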
Distributed Observer Network (DON), Version 3.0, User's Guide
NASA Technical Reports Server (NTRS)
Mazzone, Rebecca A.; Conroy, Michael P.
2015-01-01
The Distributed Observer Network (DON) is a data presentation tool developed by the National Aeronautics and Space Administration (NASA) to distribute and publish simulation results. Leveraging the display capabilities inherent in modern gaming technology, DON places users in a fully navigable 3-D environment containing graphical models and allows the users to observe how those models evolve and interact over time in a given scenario. Each scenario is driven with data that has been generated by authoritative NASA simulation tools and exported in accordance with a published data interface specification. This decoupling of the data from the source tool enables DON to faithfully display a simulator's results and ensure that every simulation stakeholder will view the exact same information every time.
Method and apparatus for anti-islanding protection of distributed generations
Ye, Zhihong; John, Vinod; Wang, Changyong; Garces, Luis Jose; Zhou, Rui; Li, Lei; Walling, Reigh Allen; Premerlani, William James; Sanza, Peter Claudius; Liu, Yan; Dame, Mark Edward
2006-03-21
An apparatus for anti-islanding protection of a distributed generation with respect to a feeder connected to an electrical grid is disclosed. The apparatus includes a sensor adapted to generate a voltage signal representative of an output voltage and/or a current signal representative of an output current at the distributed generation, and a controller responsive to the signals from the sensor. The controller is productive of a control signal directed to the distributed generation to drive an operating characteristic of the distributed generation out of a nominal range in response to the electrical grid being disconnected from the feeder.
Spatio-temporal assessment of food safety risks in Canadian food distribution systems using GIS.
Hashemi Beni, Leila; Villeneuve, Sébastien; LeBlanc, Denyse I; Côté, Kevin; Fazil, Aamir; Otten, Ainsley; McKellar, Robin; Delaquis, Pascal
2012-09-01
While geographic information systems (GIS) are widely applied in public health, there have been comparatively few examples of applications that extend to the assessment of risks in food distribution systems. GIS can provide decision makers with strong computing platforms for spatial data management, integration, analysis, querying, and visualization. The present report addresses spatial analyses in a complex food distribution system and defines influence areas as travel-time zones generated through road network analysis on a national scale rather than on a community scale. In addition, a dynamic risk index is defined to translate a contamination event into a public health risk as time progresses. More specifically, in this research, GIS is used to map the Canadian produce distribution system, analyze consumer accessibility to contaminated product, and estimate the level of risk associated with a contamination event over time, as illustrated in a scenario.
A distributed system for fast alignment of next-generation sequencing data.
Srimani, Jaydeep K; Wu, Po-Yen; Phan, John H; Wang, May D
2010-12-01
We developed a scalable distributed computing system using the Berkeley Open Interface for Network Computing (BOINC) to align next-generation sequencing (NGS) data quickly and accurately. NGS technology is emerging as a promising platform for gene expression analysis due to its high sensitivity compared to traditional genomic microarray technology. However, despite the benefits, NGS datasets can be prohibitively large, requiring significant computing resources to obtain sequence alignment results. Moreover, as the data and alignment algorithms become more prevalent, it will become necessary to examine the effect of the multitude of alignment parameters on various NGS systems. We validate the distributed software system by (1) computing simple timing results to show the speed-up gained by using multiple computers, (2) optimizing alignment parameters using simulated NGS data, and (3) computing NGS expression levels for a single biological sample using optimal parameters and comparing these expression levels to that of a microarray sample. Results indicate that the distributed alignment system achieves approximately a linear speed-up and correctly distributes sequence data to and gathers alignment results from multiple compute clients.
NASA Astrophysics Data System (ADS)
Baitimirova, M.; Osite, A.; Katkevics, J.; Viksna, A.
2012-08-01
The burning of candles generates fine particulate matter that degrades indoor air quality and may therefore harm human health. In this study, solid aerosol particles from the burning of candles of different compositions and from kerosene combustion were collected in a closed laboratory system. The present work describes particulate matter collection for structure analysis and the relationship between the source and the size distribution of the particulate matter. The formation mechanism of the particulate matter and its tendency to agglomerate are also described. Particles obtained from kerosene combustion have a normal size distribution, whereas particles generated from the burning of stearin candles have a distribution shifted towards finer particle sizes. If stearin is added to a paraffin candle, the particle size distribution likewise shifts towards finer particles. A tendency to form agglomerates within a short time is observed for particles obtained from kerosene combustion, while particles obtained from burning candles of different compositions show no such tendency. Particles from candle and kerosene combustion are Aitken and accumulation mode particles.
NASA Astrophysics Data System (ADS)
Li, Jing Xia; Xu, Hang; Liu, Li; Su, Peng Cheng; Zhang, Jian Guo
2015-05-01
We report a chaotic optical time-domain reflectometry for fiber fault location, where a chaotic probe signal is generated by driving a distributed feedback laser diode with an improved Colpitts chaotic oscillator. The results show that an unterminated fiber end, a loose connector, and a mismatched connector can be precisely located. A measurement range of approximately 91 km and a range-independent resolution of 6 cm are achieved. This implementation method is easy to integrate and cost effective, giving it great potential for commercial applications.
Partial wetting gas-liquid segmented flow microreactor.
Kazemi Oskooei, S Ali; Sinton, David
2010-07-07
A microfluidic reactor strategy for reducing plug-to-plug transport in gas-liquid segmented flow microfluidic reactors is presented. The segmented flow is generated in a wetting portion of the chip that transitions downstream to a partially wetting reaction channel that serves to disconnect the liquid plugs. The resulting residence time distributions show little dependence on channel length, and over 60% narrowing in residence time distribution as compared to an otherwise similar reactor. This partial wetting strategy mitigates a central limitation (plug-to-plug dispersion) while preserving the many attractive features of gas-liquid segmented flow reactors.
Time-frequency analysis of SEMG--with special consideration to the interelectrode spacing.
Alemu, M; Kumar, Dinesh Kant; Bradley, Alan
2003-12-01
The surface electromyogram (SEMG) is a complex, nonstationary signal. The spectrum of the SEMG is dependent on the force of contraction being generated and other factors like muscle fatigue and interelectrode distance (IED). The spectrum of the signal is time variant. This paper reports the experimental research conducted to study the influence of force of muscle contraction and IED on the SEMG signal using time-frequency (T-F) analysis. Two T-F techniques have been used: Wigner-Ville distribution (WVD) and Choi-Williams distribution (CWD). The experiment was conducted with the help of ten healthy volunteers (five males and five females) who performed isometric elbow flexions of the active right arm at 20%, 50%, and 80% of their maximal voluntary contraction. The SEMG signal was recorded using surface electrodes placed at a distance of 18 and 36 mm over biceps brachii muscle. The results indicate that the two distributions were spread out across the frequency range at smaller IED. Further, regardless of the spacing, both distributions displayed increased spectral compression with time at higher contraction level.
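A discrete Wigner-Ville distribution of the kind applied to the SEMG above can be computed directly from its defining bilinear kernel. The sketch below (illustrative names; a single time slice rather than a full T-F image, and a pure test tone rather than SEMG data) makes the well-known factor-of-two frequency convention explicit.

```python
import numpy as np

def wvd_slice(x, n0, n_lags):
    """Discrete Wigner-Ville distribution of analytic signal x at time n0.

    Uses the kernel r[tau] = x[n0+tau] * conj(x[n0-tau]); because the
    effective lag is 2*tau, the frequency axis must be divided by 2.
    Returns (frequencies in cycles/sample, real-valued WVD slice).
    """
    taus = np.arange(n_lags)
    lags = np.where(taus < n_lags // 2, taus, taus - n_lags)  # FFT ordering
    r = x[n0 + lags] * np.conj(x[n0 - lags])
    W = np.fft.fft(r).real
    freqs = np.fft.fftfreq(n_lags) / 2.0      # undo the lag doubling
    return freqs, W

# analytic test tone at 0.0625 cycles/sample
n = np.arange(1024)
x = np.exp(2j * np.pi * 0.0625 * n)
freqs, W = wvd_slice(x, n0=512, n_lags=128)
f_peak = freqs[np.argmax(W)]
```

For a real signal such as SEMG one would first form the analytic signal (e.g. by zeroing negative frequencies in the FFT) to suppress cross-terms between positive and negative frequencies; the Choi-Williams distribution adds an exponential kernel to suppress the remaining cross-terms.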
A Review of Microgrid Architectures and Control Strategy
NASA Astrophysics Data System (ADS)
Jadav, Krishnarajsinh A.; Karkar, Hitesh M.; Trivedi, I. N.
2017-12-01
In this paper, microgrid architectures and various converter control strategies are reviewed. A microgrid is defined as an interconnected network of distributed energy resources, loads, and energy storage systems. This emerging concept realizes the potential of distributed generators. An AC microgrid interconnects AC distributed generators, such as wind turbines, and connects DC distributed generators, such as PV and fuel cells, through inverters. In a DC microgrid, the output of an AC distributed generator must be converted to DC using rectifiers, while DC distributed generators can be connected directly. The hybrid microgrid avoids the multiple reverse conversions (AC-DC-AC and DC-AC-DC) that occur in individual AC or DC microgrids: all AC distributed generators are connected to the AC microgrid and all DC distributed generators to the DC microgrid. An interlinking converter is used for power balance between the two microgrids, transferring power from one microgrid to the other if either is overloaded. At the end, a review of interlinking converter control strategies is presented.
Macroscopic Spatial Complexity of the Game of Life Cellular Automaton: A Simple Data Analysis
NASA Astrophysics Data System (ADS)
Hernández-Montoya, A. R.; Coronel-Brizio, H. F.; Rodríguez-Achach, M. E.
In this chapter we present a simple data analysis of an ensemble of 20 time series, generated by averaging the spatial positions of the living cells for each state of the Game of Life cellular automaton (GoL). We show that the complexity properties of GoL are also present at the macroscopic level described by these time series, and that the following emergent properties, typical of data extracted from complex systems such as financial or economic ones, come out: variations of the generated time series follow an asymptotic power-law distribution; large fluctuations tend to be followed by large fluctuations and small fluctuations by small ones; and linear correlations decay fast, whereas the correlations of the absolute variations exhibit long-range memory. Finally, a detrended fluctuation analysis (DFA) of the generated time series indicates that the GoL spatial macro-states described by the time series are neither completely ordered nor random, in a measurable and very interesting way.
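The DFA procedure mentioned above can be sketched compactly: integrate the series into a profile, detrend it in windows of varying size, and read off the scaling exponent from the log-log slope of the fluctuation function. This is a minimal first-order DFA with illustrative scale choices, not the authors' exact analysis; for white noise the exponent is near 0.5, for a random walk near 1.5, and intermediate values signal the "neither ordered nor random" regime.

```python
import numpy as np

def dfa_exponent(x, scales=None):
    """Detrended fluctuation analysis: slope of log F(n) vs log n.

    The cumulative sum of (x - mean) is the profile; each window of
    length n is linearly detrended before the RMS fluctuation F(n).
    """
    x = np.asarray(x, dtype=float)
    profile = np.cumsum(x - x.mean())
    if scales is None:
        scales = np.unique(np.logspace(4, 9, 10, base=2).astype(int))
    F = []
    for n in scales:
        m = len(profile) // n
        segs = profile[:m * n].reshape(m, n)
        t = np.arange(n)
        coef = np.polyfit(t, segs.T, 1)          # one linear fit per window
        trend = np.outer(coef[0], t) + coef[1][:, None]
        F.append(np.sqrt(np.mean((segs - trend) ** 2)))
    return np.polyfit(np.log(scales), np.log(F), 1)[0]

alpha_wn = dfa_exponent(np.random.default_rng(3).normal(size=2**14))
```

`np.polyfit` accepts a 2-D `y`, so all windows of a given scale are fitted in one vectorized call.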
Generating macroscopic chaos in a network of globally coupled phase oscillators
So, Paul; Barreto, Ernest
2011-01-01
We consider an infinite network of globally coupled phase oscillators in which the natural frequencies of the oscillators are drawn from a symmetric bimodal distribution. We demonstrate that macroscopic chaos can occur in this system when the coupling strength varies periodically in time. We identify period-doubling cascades to chaos, attractor crises, and horseshoe dynamics for the macroscopic mean field. Based on recent work that clarified the bifurcation structure of the static bimodal Kuramoto system, we qualitatively describe the mechanism for the generation of such complicated behavior in the time varying case. PMID:21974662
Bazzo, João Paulo; Pipa, Daniel Rodrigues; da Silva, Erlon Vagner; Martelli, Cicero; Cardozo da Silva, Jean Carlos
2016-09-07
This paper presents an image reconstruction method to monitor the temperature distribution of electric generator stators. The main objective is to identify insulation failures that may arise as hotspots in the structure. The method is based on temperature readings from fiber optic distributed sensors (DTS) and a sparse reconstruction algorithm. Thermal images of the structure are formed by appropriately combining atoms of a dictionary of hotspots, constructed by finite element simulation with a multi-physical model. Owing to the difficulty of reproducing insulation faults in a real stator structure, experimental tests were performed using a prototype similar to the real structure. The results demonstrate the ability of the proposed method to reconstruct images of hotspots with dimensions down to 15 cm, representing a resolution gain of up to six times compared to the DTS spatial resolution. In addition, satisfactory results were also obtained in detecting hotspots of only 5 cm. The application of the proposed algorithm for thermal imaging of generator stators can contribute to the identification of insulation faults at early stages, thereby avoiding catastrophic damage to the structure.
Application of Time-Frequency Representations To Non-Stationary Radar Cross Section
2009-03-01
The three-dimensional plot produced by a TFR allows one to determine which spectral components of a signal vary with time [25... a range bin (of width cT/2) from the stepped frequency waveform. 2. Cancel the clutter (stationary components) by zeroing out points associated with ...generating an infinite number of bilinear time-frequency distributions based on a generalized equation and a changeable
Thermodynamic method for generating random stress distributions on an earthquake fault
Barall, Michael; Harris, Ruth A.
2012-01-01
This report presents a new method for generating random stress distributions on an earthquake fault, suitable for use as initial conditions in a dynamic rupture simulation. The method employs concepts from thermodynamics and statistical mechanics. A pattern of fault slip is considered to be analogous to a micro-state of a thermodynamic system. The energy of the micro-state is taken to be the elastic energy stored in the surrounding medium. Then, the Boltzmann distribution gives the probability of a given pattern of fault slip and stress. We show how to decompose the system into independent degrees of freedom, which makes it computationally feasible to select a random state. However, due to the equipartition theorem, straightforward application of the Boltzmann distribution leads to a divergence which predicts infinite stress. To avoid equipartition, we show that the finite strength of the fault acts to restrict the possible states of the system. By analyzing a set of earthquake scaling relations, we derive a new formula for the expected power spectral density of the stress distribution, which allows us to construct a computer algorithm free of infinities. We then present a new technique for controlling the extent of the rupture by generating a random stress distribution thousands of times larger than the fault surface, and selecting a portion which, by chance, has a positive stress perturbation of the desired size. Finally, we present a new two-stage nucleation method that combines a small zone of forced rupture with a larger zone of reduced fracture energy.
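The core numerical step, generating a random field with a prescribed power spectral density, can be illustrated in one dimension: assign random phases to power-law amplitudes and inverse transform. This is a generic spectral-synthesis sketch under an assumed spectral slope, not the paper's 2-D fault-surface construction or its derived spectrum.

```python
import numpy as np

def synth_powerlaw(n, slope, seed=0):
    """Random series whose power spectral density falls as f**(-slope).

    Built by assigning uniform random phases to amplitudes
    |f|**(-slope/2) and inverse transforming; a 1-D stand-in for a
    random stress map on a fault plane.
    """
    rng = np.random.default_rng(seed)
    freqs = np.fft.rfftfreq(n)
    amp = np.zeros_like(freqs)
    amp[1:] = freqs[1:] ** (-slope / 2.0)     # zero out the mean (f = 0)
    phases = rng.uniform(0.0, 2.0 * np.pi, size=freqs.size)
    x = np.fft.irfft(amp * np.exp(1j * phases), n=n)
    return x / x.std()                        # normalize to unit variance

stress = synth_powerlaw(4096, slope=2.0)
```

The paper's technique of generating a field much larger than the fault and selecting a favorable portion amounts to calling such a generator on an oversized grid and windowing the result.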
Time Evolving Fission Chain Theory and Fast Neutron and Gamma-Ray Counting Distributions
Kim, K. S.; Nakae, L. F.; Prasad, M. K.; ...
2015-11-01
Here, we solve a simple theoretical model of time-evolving fission chains due to Feynman that generalizes and asymptotically approaches the point model theory. The point model theory has been used to analyze thermal neutron counting data. This extension of the theory underlies fast counting data for both neutrons and gamma rays from metal systems. Fast neutron and gamma-ray counting is now possible using liquid scintillator arrays with nanosecond time resolution. For individual fission chains, the differential equations describing three correlated probability distributions are solved: the time-dependent internal neutron population, the accumulation of fissions in time, and the accumulation of leaked neutrons in time. Explicit analytic formulas are given for correlated moments of the time-evolving chain populations. The equations for random-time-gate fast neutron and gamma-ray counting distributions, due to randomly initiated chains, are presented. Correlated moment equations are given for both random-time-gate and triggered-time-gate counting. Explicit formulas are given for all correlated moments up to triple order, for all combinations of correlated fast neutrons and gamma rays. The nonlinear differential equations for the probabilities of time-dependent fission chain populations have a remarkably simple Monte Carlo realization. A Monte Carlo code was developed for this theory and is shown to statistically realize the solutions to the fission chain theory probability distributions. Combined with random initiation of chains and detection of external quanta, the Monte Carlo code generates time-tagged data for neutron and gamma-ray counting, and from these data the counting distributions.
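The "remarkably simple Monte Carlo realization" of a fission chain is a branching process: each neutron either leaks or induces a fission that emits a random number of new neutrons. The toy sketch below (with invented parameters; the paper's time-dependent version also tracks event times) realizes the fission and leakage counts per chain.

```python
import numpy as np

def fission_chain(p_fission, nu_pmf, rng, max_events=10_000):
    """Follow one chain started by a single neutron.

    Each neutron either leaks (prob 1 - p_fission) or induces a fission
    (prob p_fission) emitting nu new neutrons, nu drawn from nu_pmf.
    Returns (number of fissions, number of leaked neutrons).
    """
    nus = np.arange(len(nu_pmf))
    neutrons, fissions, leaks = 1, 0, 0
    while neutrons > 0 and fissions < max_events:
        neutrons -= 1
        if rng.random() < p_fission:
            fissions += 1
            neutrons += rng.choice(nus, p=nu_pmf)
        else:
            leaks += 1
    return fissions, leaks

rng = np.random.default_rng(11)
# illustrative multiplicity pmf: mean nu ~ 2.54; with p_fission = 0.3
# the multiplication factor is k ~ 0.76 (subcritical), so chains die out
nu_pmf = np.array([0.03, 0.16, 0.32, 0.28, 0.15, 0.06])
results = [fission_chain(0.3, nu_pmf, rng) for _ in range(2000)]
fissions = np.array([r[0] for r in results])
leaks = np.array([r[1] for r in results])
```

For a subcritical chain the expected fissions per chain is p/(1 - k), about 1.26 for these assumed numbers; the spread of `fissions` and `leaks` across chains is what drives the correlated counting moments.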
Molecular characterization and comparison of shale oils generated by different pyrolysis methods
Birdwell, Justin E.; Jin, Jang Mi; Kim, Sunghwan
2012-01-01
Shale oils generated using different laboratory pyrolysis methods have been studied using standard oil characterization methods as well as Fourier transform ion cyclotron resonance mass spectrometry (FT-ICR MS) with electrospray ionization (ESI) and atmospheric photoionization (APPI) to assess differences in molecular composition. The pyrolysis oils were generated from samples of the Mahogany zone oil shale of the Eocene Green River Formation collected from outcrops in the Piceance Basin, Colorado, using three pyrolysis systems under conditions relevant to surface and in situ retorting approaches. Significant variations were observed in the shale oils, particularly the degree of conjugation of the constituent molecules and the distribution of nitrogen-containing compound classes. Comparison of FT-ICR MS results to other oil characteristics, such as specific gravity; saturate, aromatic, resin, asphaltene (SARA) distribution; and carbon number distribution determined by gas chromatography, indicated correspondence between higher average double bond equivalence (DBE) values and increasing asphaltene content. The results show that, based on the shale oil DBE distributions, highly conjugated species are enriched in samples produced under low pressure, high temperature conditions, and under high pressure, moderate temperature conditions in the presence of water. We also report, for the first time in any petroleum-like substance, the presence of N4 class compounds based on FT-ICR MS data. Using double bond equivalence and carbon number distributions, structures for the N4 class and other nitrogen-containing compounds are proposed.
Adaptive Metropolis Sampling with Product Distributions
NASA Technical Reports Server (NTRS)
Wolpert, David H.; Lee, Chiu Fan
2005-01-01
The Metropolis-Hastings (MH) algorithm is a way to sample a provided target distribution π(x). It works by repeatedly sampling a separate proposal distribution T(x, x′) to generate a random walk {x(t)}. We consider a modification of the MH algorithm in which T is dynamically updated during the walk. The update at time t uses {x(t′), t′ < t} to estimate the product distribution that has the least Kullback-Leibler distance to π. That estimate is the information-theoretically optimal mean-field approximation to π. We demonstrate through computer experiments that our algorithm produces samples that are superior to those of the conventional MH algorithm.
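For reference, the conventional (non-adaptive) MH baseline that the paper improves on looks like this with a fixed Gaussian random-walk proposal. A minimal sketch; the adaptive product-distribution update itself is not reproduced here.

```python
import numpy as np

def metropolis_hastings(log_target, proposal_step, x0, n_steps, seed=0):
    """Random-walk Metropolis sampler with a fixed Gaussian proposal.

    log_target(x) is log pi(x) up to an additive constant.
    """
    rng = np.random.default_rng(seed)
    x = x0
    samples = np.empty(n_steps)
    for t in range(n_steps):
        x_new = x + rng.normal(0.0, proposal_step)
        # accept with probability min(1, pi(x_new) / pi(x))
        if np.log(rng.random()) < log_target(x_new) - log_target(x):
            x = x_new
        samples[t] = x
    return samples

# target: standard normal, pi(x) proportional to exp(-x^2 / 2)
s = metropolis_hastings(lambda x: -0.5 * x * x, proposal_step=1.0,
                        x0=0.0, n_steps=50_000)
```

Because the symmetric proposal cancels in the acceptance ratio, only the target density appears; the adaptive scheme would instead replace T by the running mean-field estimate of π.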
The rates and time-delay distribution of multiply imaged supernovae behind lensing clusters
NASA Astrophysics Data System (ADS)
Li, Xue; Hjorth, Jens; Richard, Johan
2012-11-01
Time delays of gravitationally lensed sources can be used to constrain the mass model of a deflector and determine cosmological parameters. We here present an analysis of the time-delay distribution of multiply imaged sources behind 17 strong-lensing galaxy clusters with well-calibrated mass models. We find that for time delays less than 1000 days, at z = 3.0, their logarithmic probability distribution functions are well represented by P(log Δt) = 5.3 × 10^-4 Δt^β̃ / M250^(2β̃), with β̃ = 0.77, where M250 is the projected cluster mass inside 250 kpc (in units of 10^14 M⊙) and β̃ is the power-law slope of the distribution. The resulting probability distribution function enables us to estimate the time-delay distribution in a lensing cluster of known mass. For a cluster with M250 = 2 × 10^14 M⊙, the fraction of time delays less than 1000 days is approximately 3%. Taking Abell 1689 as an example, its dark halo and brightest galaxies, with central velocity dispersions σ ≥ 500 km s^-1, mainly produce large time delays, while galaxy-scale mass clumps are responsible for generating smaller time delays. We estimate the probability of observing multiple images of a supernova in the known images of Abell 1689. A two-component model for estimating the supernova rate is applied in this work. For a magnitude threshold of mAB = 26.5, the yearly rate of Type Ia (core-collapse) supernovae with time delays less than 1000 days is 0.004 ± 0.002 (0.029 ± 0.001). If the magnitude threshold is lowered to mAB ~ 27.0, the rate of core-collapse supernovae suitable for time-delay observation is 0.044 ± 0.015 per year.
Software Comparison for Renewable Energy Deployment in a Distribution Network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao, David Wenzhong; Muljadi, Eduard; Tian, Tian
The main objective of this report is to evaluate different software options for performing robust distributed generation (DG) power system modeling. The features and capabilities of four simulation tools, OpenDSS, GridLAB-D, CYMDIST, and PowerWorld Simulator, are compared to analyze their effectiveness in analyzing distribution networks with DG. OpenDSS and GridLAB-D, two open-source packages, have the capability to simulate networks with fluctuating data values; they allow running a simulation at each time instant by iterating only the main script file. CYMDIST, a commercial package, allows time-series simulation to study variations in network controls. PowerWorld Simulator, another commercial tool, has a batch-mode simulation function through the 'Time Step Simulation' tool, which obtains solutions for a list of specified time points. PowerWorld Simulator is intended for analysis of transmission-level systems, while the other three are designed for distribution systems. CYMDIST and PowerWorld Simulator feature easy-to-use graphical user interfaces (GUIs). OpenDSS and GridLAB-D, on the other hand, are command-line programs, which increases the time necessary to become familiar with them.
Integrated-Circuit Pseudorandom-Number Generator
NASA Technical Reports Server (NTRS)
Steelman, James E.; Beasley, Jeff; Aragon, Michael; Ramirez, Francisco; Summers, Kenneth L.; Knoebel, Arthur
1992-01-01
The integrated circuit produces 8-bit pseudorandom numbers from a specified probability distribution at a rate of 10 MHz. Using Boolean logic, the circuit implements a pseudorandom-number-generating algorithm. The circuit includes eight 12-bit pseudorandom-number generators whose outputs are uniformly distributed. 8-bit pseudorandom numbers satisfying the specified nonuniform probability distribution are generated by processing the uniformly distributed outputs of the eight 12-bit pseudorandom-number generators through a "pipeline" of D flip-flops, comparators, and memories implementing conditional probabilities on zeros and ones.
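A software analogue of the uniform-to-nonuniform step can be sketched as an inverse-CDF lookup table driven by 12-bit uniform values. This is a simplified stand-in for the chip's bit-serial conditional-probability pipeline, not its actual logic, and the example pmf is invented.

```python
import numpy as np

def make_sampler(pmf, bits=12, seed=0):
    """Return a function emitting 8-bit values that follow pmf,
    driven by uniform `bits`-bit pseudorandom integers
    (inverse-CDF lookup; a simplified software stand-in for the
    hardware pipeline of comparators and memories)."""
    pmf = np.asarray(pmf, dtype=float)
    assert pmf.size == 256 and pmf.min() >= 0
    pmf = pmf / pmf.sum()
    # lookup table: map each 12-bit uniform value to an 8-bit output
    cdf = np.cumsum(pmf)
    table = np.searchsorted(cdf, (np.arange(2**bits) + 0.5) / 2**bits)
    rng = np.random.default_rng(seed)

    def sample(n):
        return table[rng.integers(0, 2**bits, size=n)]
    return sample

# example: triangular-shaped pmf over 0..255
pmf = np.minimum(np.arange(256), 255 - np.arange(256)) + 1.0
draw = make_sampler(pmf)
vals = draw(100_000)
```

Changing `pmf` changes the output distribution without touching the uniform generators, mirroring how the chip's memories encode the target distribution separately from its 12-bit generators.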
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elizondo, Marcelo A.; Samaan, Nader A.; Makarov, Yuri V.
Voltage and reactive power control is generally performed following usual load patterns, based on off-line studies for daily and seasonal operations. This practice is currently challenged by the inclusion of distributed renewable generation, such as solar. There has been a focus on resolving this problem at the distribution level; however, the transmission and sub-transmission levels have received less attention. This paper provides a literature review of proposed methods and solution approaches to coordinate and optimize voltage control and reactive power management, with an emphasis on applications at the transmission and sub-transmission levels. The conclusion drawn from the survey is that additional research is needed in the areas of optimizing switched shunt actions and coordinating all available resources to deal with uncertain patterns from increasing distributed renewable generation in the operational time frame. These topics are not deeply explored in the literature.
Cho, Ming-Yuan; Hoang, Thi Thom
2017-01-01
Fast and accurate fault classification is essential to power system operations. In this paper, in order to classify electrical faults in radial distribution systems, a particle swarm optimization (PSO)-based support vector machine (SVM) classifier is proposed. The proposed classifier is able to select appropriate input features and optimize SVM parameters to increase classification accuracy. Further, a time-domain reflectometry (TDR) method with a pseudorandom binary sequence (PRBS) stimulus has been used to generate a dataset for classification. The proposed technique has been tested on a typical radial distribution network to identify ten different types of faults, considering 12 given input features generated using Simulink and the MATLAB toolbox. The success rate of the SVM classifier is over 97%, which demonstrates the effectiveness and high efficiency of the developed method.
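The abstract does not give the PSO encoding, so the sketch below shows only the generic PSO loop; the objective here is a stand-in quadratic, where in practice it would be the cross-validated error of an SVM over (C, gamma):

```python
import random

def pso_minimize(objective, bounds, n_particles=20, iters=60,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer over box-constrained parameters."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # inertia + cognitive pull + social pull
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

Swapping the stand-in objective for a k-fold SVM accuracy score gives the hyperparameter-tuning loop the paper describes.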
Continuous high speed coherent one-way quantum key distribution.
Stucki, Damien; Barreiro, Claudio; Fasel, Sylvain; Gautier, Jean-Daniel; Gay, Olivier; Gisin, Nicolas; Thew, Rob; Thoma, Yann; Trinkler, Patrick; Vannel, Fabien; Zbinden, Hugo
2009-08-03
Quantum key distribution (QKD) is the first commercial quantum technology operating at the level of single quanta and is a leading light for quantum-enabled photonic technologies. However, controlling these quantum optical systems in real-world environments presents significant challenges. For the first time, we have brought together three key concepts for future QKD systems: a simple high-speed protocol; high-performance detection; and integration, both at the component level and for standard fibre network connectivity. The QKD system is capable of continuous and autonomous operation, generating secret keys in real time. Laboratory and field tests were performed and comparisons made with robust InGaAs avalanche photodiodes and superconducting detectors. We report the first real-world implementation of a fully functional QKD system over a 43 dB-loss (150 km) transmission line in the Swisscom fibre optic network, where we obtained average real-time distribution rates of 2.5 bps over 3 hours.
Motion of kinesin in a viscoelastic medium
NASA Astrophysics Data System (ADS)
Knoops, Gert; Vanderzande, Carlo
2018-05-01
Kinesin is a molecular motor that transports cargo along microtubules. The results of many in vitro experiments on kinesin-1 are described by kinetic models in which one transition corresponds to the forward motion and subsequent binding of the tethered motor head. We argue that in a viscoelastic medium like the cytosol of a cell this step is not Markov and has to be described by a nonexponential waiting time distribution. We introduce a semi-Markov kinetic model for kinesin that takes this effect into account. We calculate, for arbitrary waiting time distributions, the moment generating function of the number of steps made, and determine from this the average velocity and the diffusion constant of the motor. We illustrate our results for the case of a waiting time distribution that is Weibull. We find that for realistic parameter values, viscoelasticity decreases the velocity and the diffusion constant, but increases the randomness (or Fano factor).
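The moment-generating-function results can be checked numerically: simulate a renewal walker whose waiting times are Weibull, count steps over a window, and estimate velocity, diffusion constant, and Fano factor. The sketch below uses illustrative parameter values, not the paper's fitted ones:

```python
import random

def count_steps(T, shape, scale, rng):
    """Number of forward steps completed by time T when successive
    waiting times are i.i.d. Weibull(shape, scale)."""
    t, n = 0.0, 0
    while True:
        t += rng.weibullvariate(scale, shape)  # alpha=scale, beta=shape
        if t > T:
            return n
        n += 1

def motor_statistics(T=100.0, shape=0.5, scale=1.0, step_nm=8.0,
                     trials=2000, seed=0):
    """Monte Carlo estimates of velocity, diffusion constant, and
    randomness (Fano factor) from the step-count distribution."""
    rng = random.Random(seed)
    counts = [count_steps(T, shape, scale, rng) for _ in range(trials)]
    mean = sum(counts) / trials
    var = sum((c - mean)**2 for c in counts) / trials
    velocity = step_nm * mean / T             # nm per unit time
    diffusion = step_nm**2 * var / (2.0 * T)  # nm^2 per unit time
    fano = var / mean                         # randomness parameter
    return velocity, diffusion, fano
```

With shape = 1 the waiting times are exponential and the Fano factor is near 1 (Markov stepping); shape < 1 gives the broader, non-exponential waiting times the paper associates with a viscoelastic medium, lowering the velocity and raising the randomness.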
Implications of Atmospheric Test Fallout Data for Nuclear Winter.
NASA Astrophysics Data System (ADS)
Baker, George Harold, III
1987-09-01
Atmospheric test fallout data have been used to determine admissible dust particle size distributions for nuclear winter studies. The research was originally motivated by extreme differences noted in the magnitude and longevity of dust effects predicted by particle size distributions routinely used in fallout predictions versus those used for nuclear winter studies. Three different sets of historical data have been analyzed: (1) stratospheric burden of Strontium-90 and Tungsten-185, 1954-1967 (92 contributing events); (2) continental U.S. Strontium-90 fallout through 1958 (75 contributing events); (3) local fallout from selected Nevada tests (16 events). The contribution of dust to possible long-term climate effects following a nuclear exchange depends strongly on the particle size distribution. The distribution affects both the atmospheric residence time and optical depth. One-dimensional models of stratospheric/tropospheric fallout removal were developed and used to identify optimum particle distributions. Results indicate that particle distributions which properly predict bulk stratospheric activity transfer tend to be somewhat smaller than number size distributions used in initial nuclear winter studies. In addition, both 90Sr and 185W fallout behavior is better predicted by the lognormal distribution function than by the prevalent power-law hybrid function. It is shown that the power-law behavior of particle samples may well be an aberration of gravitational cloud stratification. Results support the possible existence of two independent particle size distributions in clouds generated by surface or near-surface bursts: one distribution governs late-time stratospheric fallout, the other governs early-time fallout. A bimodal lognormal distribution is proposed to describe the cloud particle population.
The distribution predicts higher initial sunlight attenuation and lower late-time attenuation than the power-law hybrid function used in initial nuclear winter studies.
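A bimodal lognormal cloud population of the kind proposed can be sketched as a two-component mixture; the mode parameters below are placeholders for illustration, not fitted values from the test data:

```python
import random, math

def sample_bimodal_lognormal(n, mu1, sigma1, mu2, sigma2, weight1, seed=0):
    """Draw particle diameters (micrometres) from a two-mode lognormal
    mixture: a fine mode governing late-time stratospheric fallout and a
    coarse mode governing early-time local fallout."""
    rng = random.Random(seed)
    diameters = []
    for _ in range(n):
        if rng.random() < weight1:
            diameters.append(rng.lognormvariate(mu1, sigma1))  # fine mode
        else:
            diameters.append(rng.lognormvariate(mu2, sigma2))  # coarse mode
    return diameters
```

Residence time and optical depth would then be computed separately for each mode, which is what lets the mixture predict high early attenuation (coarse mode) and low late attenuation (after the coarse mode has settled out).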
Runoff generation in karst catchments: multifractal analysis
NASA Astrophysics Data System (ADS)
Majone, Bruno; Bellin, Alberto; Borsato, Andrea
2004-07-01
Time series of hydrological and geochemical signals at two karst springs, located in the Dolomiti del Brenta region, near Trento, Italy, are used to infer how karst catchments work internally to generate runoff. The data analyzed include precipitation, spring flow and electric conductivity of the spring water. All the signals show the signature of multifractality but with different intermittency and non-stationarity. In particular, precipitation and spring flow are shown to have nearly the same degree of non-stationarity and intermittency, while electric conductivity, which mimics the travel time distribution of water in the karst system, is less intermittent and smoother than both spring flow and precipitation. We found that spring flow can be obtained from precipitation through fractional convolution with a power law transfer function. An important result of our study is that the probability distribution of travel times is inconsistent with the advection dispersion equation, while it supports the anomalous transport model. This result is in line with what was observed by Painter et al. [Geophys. Res. Lett. 29 (2002) 21.1] for transport in fractured rocks.
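The convolution step can be illustrated with a discrete power-law transfer function; the kernel length and exponent below are assumptions for demonstration, not the fitted values:

```python
def power_law_kernel(length, alpha):
    """Discrete power-law transfer function h[k] ~ (k+1)^(-alpha),
    normalised to unit mass so total flow balances total rainfall."""
    h = [(k + 1.0)**(-alpha) for k in range(length)]
    s = sum(h)
    return [x / s for x in h]

def convolve(precip, kernel):
    """Spring flow as the convolution of precipitation with the kernel."""
    n = len(precip)
    flow = [0.0] * n
    for t in range(n):
        for k in range(min(t + 1, len(kernel))):
            flow[t] += precip[t - k] * kernel[k]
    return flow
```

An impulse of rainfall then produces a power-law recession at the spring, the slowly decaying tail that distinguishes anomalous transport from the exponential-like response of an advection-dispersion model.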
Electrically-Generated Spin Polarization in Non-Magnetic Semiconductors
2016-03-31
Time-resolved Faraday rotation data due to electron spin polarization from previous pump pulses was characterized, and an analytic solution for this phase... Electron spin polarization was shown to produce nuclear hyperpolarization through dynamic nuclear polarization. (Distribution approved for public release.) Figure 3 caption: "Total magnetic field measured using time-resolved Faraday rotation with the electrically..."
Response time distributions in rapid chess: a large-scale decision making experiment.
Sigman, Mariano; Etchemendy, Pablo; Slezak, Diego Fernández; Cecchi, Guillermo A
2010-01-01
Rapid chess provides an unparalleled laboratory to understand decision making in a natural environment. In a chess game, players choose consecutively around 40 moves in a finite time budget. The goodness of each choice can be determined quantitatively since current chess algorithms estimate precisely the value of a position. Web-based chess produces vast amounts of data, millions of decisions per day, incommensurable with traditional psychological experiments. We generated a database of response times (RTs) and position value in rapid chess games. We measured robust emergent statistical observables: (1) RT distributions are long-tailed and show qualitatively distinct forms at different stages of the game, (2) RTs of successive moves are highly correlated both for intra- and inter-player moves. These findings have theoretical implications since they deny two basic assumptions of sequential decision-making algorithms: RTs are not stationary and cannot be generated by a state function. Our results also have practical implications. First, we characterized the capacity of blunders and score fluctuations to predict a player's strength, which remains an open problem in chess software. Second, we show that the winning likelihood can be reliably estimated from a weighted combination of remaining times and position evaluation.
Application of spatial time domain reflectometry measurements in heterogeneous, rocky substrates
NASA Astrophysics Data System (ADS)
Gonzales, C.; Scheuermann, A.; Arnold, S.; Baumgartl, T.
2016-10-01
Measurement of soil moisture across depths using sensors is currently limited to point measurements or remote sensing technologies. Point measurements have limitations on spatial resolution, while the latter, although covering large areas, may not represent real-time hydrologic processes, especially near the surface. The objective of the study was to determine the efficacy of elongated soil moisture probes—spatial time domain reflectometry (STDR)—and to describe transient soil moisture dynamics of unconsolidated mine waste rock materials. The probes were calibrated under controlled conditions in the glasshouse. Transient soil moisture content was measured using the gravimetric method and STDR. Volumetric soil moisture content derived from weighing was compared with values generated from a numerical model simulating the drying process. A calibration function was generated and applied to STDR field data sets. The use of elongated probes effectively assists in the real-time determination of the spatial distribution of soil moisture. It also allows hydrologic processes to be uncovered in the unsaturated zone, especially for water balance calculations that are commonly based on point measurements. The elongated soil moisture probes can potentially describe transient substrate processes and delineate heterogeneity in terms of the pore size distribution in a seasonally wet but otherwise arid environment.
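The calibration step, relating probe output to gravimetrically determined water content, can be sketched as an ordinary least-squares fit. The paper's actual calibration function may well be nonlinear, so the linear form here is only an assumption:

```python
def fit_calibration(readings, vwc):
    """Least-squares line vwc = a*reading + b relating probe output to
    volumetric water content measured gravimetrically."""
    n = len(readings)
    mx = sum(readings) / n
    my = sum(vwc) / n
    sxx = sum((x - mx)**2 for x in readings)
    sxy = sum((x - mx) * (y - my) for x, y in zip(readings, vwc))
    a = sxy / sxx          # slope
    b = my - a * mx        # intercept
    return a, b

def apply_calibration(a, b, reading):
    """Convert a field STDR reading to volumetric water content."""
    return a * reading + b
```

Once fitted in the glasshouse, the same (a, b) pair is applied to the field data sets, which is exactly the workflow the abstract describes.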
NASA Astrophysics Data System (ADS)
Morris, Joseph W.; Lowry, Mac; Boren, Brett; Towers, James B.; Trimble, Darian E.; Bunfield, Dennis H.
2011-06-01
The US Army Aviation and Missile Research, Development and Engineering Center (AMRDEC) and the Redstone Test Center (RTC) have formed the Scene Generation Development Center (SGDC) to support the Department of Defense (DoD) open source EO/IR Scene Generation initiative for real-time hardware-in-the-loop and all-digital simulation. Various branches of the DoD have invested significant resources in the development of advanced scene and target signature generation codes. The SGDC goal is to maintain unlimited government rights and controlled access to government open source scene generation and signature codes. In addition, the SGDC provides development support to a multi-service community of test and evaluation (T&E) users, developers, and integrators in a collaborative environment. The SGDC has leveraged the DoD Defense Information Systems Agency (DISA) ProjectForge (https://Project.Forge.mil), which provides a collaborative development and distribution environment for the DoD community. The SGDC will develop and maintain several codes for tactical and strategic simulation, such as the Joint Signature Image Generator (JSIG), the Multi-spectral Advanced Volumetric Real-time Imaging Compositor (MAVRIC), and Office of the Secretary of Defense (OSD) Test and Evaluation Science and Technology (T&E/S&T) thermal modeling and atmospherics packages, such as EOView, CHARM, and STAR. Other utility packages included are the ContinuumCore for real-time messaging and data management and IGStudio for run-time visualization and scenario generation.
Tight real-time synchronization of a microwave clock to an optical clock across a turbulent air path
Bergeron, Hugo; Sinclair, Laura C.; Swann, William C.; Nelson, Craig W.; Deschênes, Jean-Daniel; Baumann, Esther; Giorgetta, Fabrizio R.; Coddington, Ian; Newbury, Nathan R.
2018-01-01
The ability to distribute the precise time and frequency from an optical clock to remote platforms could enable future precise navigation and sensing systems. Here we demonstrate tight, real-time synchronization of a remote microwave clock to a master optical clock over a turbulent 4-km open air path via optical two-way time-frequency transfer. Once synchronized, the 10-GHz frequency signals generated at each site agree to 10−14 at one second and below 10−17 at 1000 seconds. In addition, the two clock times are synchronized to ±13 fs over an 8-hour period. The ability to phase-synchronize 10-GHz signals across platforms supports future distributed coherent sensing, while the ability to time-synchronize multiple microwave-based clocks to a high-performance master optical clock supports future precision navigation/timing systems. PMID:29607352
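The core of two-way time-frequency transfer is that a reciprocal path cancels to first order in a two-way exchange of timestamps. A minimal sketch of that timestamp algebra (illustrative, not the authors' implementation):

```python
def two_way_offset(t1, t2, t3, t4):
    """Clock offset (B minus A) and one-way path delay from a two-way
    exchange over a reciprocal path: A sends at t1 (clock A), B receives
    at t2 (clock B), B replies at t3 (clock B), A receives at t4 (clock A).
    The path delay drops out of the offset, so turbulence-induced delay
    fluctuations common to both directions do not corrupt the sync."""
    offset = ((t2 - t1) - (t4 - t3)) / 2.0
    delay = ((t2 - t1) + (t4 - t3)) / 2.0
    return offset, delay
```

In a synchronization loop, the recovered offset would steer the remote microwave clock toward the master optical clock on every exchange.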
Bolea, Mario; Mora, José; Ortega, Beatriz; Capmany, José
2012-03-12
A novel all-optical technique based on the incoherent processing of optical signals using high-order dispersive elements is analyzed for microwave arbitrary pulse generation. We show an approach which allows a full reconfigurability of a pulse in terms of chirp, envelope and central frequency by the proper control of the second-order dispersion and the incoherent optical source power distribution, achieving large values of time-bandwidth product.
NASA Astrophysics Data System (ADS)
Flores, Robert Joseph
Distributed generation can provide many benefits over traditional central generation such as increased reliability and efficiency while reducing emissions. Despite these potential benefits, distributed generation is generally not purchased unless it reduces energy costs. Economic dispatch strategies can be designed such that distributed generation technologies reduce overall facility energy costs. In this thesis, a microturbine generator is dispatched using different economic control strategies, reducing the cost of energy to the facility. Several industrial and commercial facilities are simulated using acquired electrical, heating, and cooling load data. Industrial and commercial utility rate structures are modeled after Southern California Edison and Southern California Gas Company tariffs and used to find energy costs for the simulated buildings and corresponding microturbine dispatch. Using these control strategies, building models, and utility rate models, a parametric study examining various generator characteristics is performed. An economic assessment of the distributed generation is then performed for both the microturbine generator and parametric study. Without the ability to export electricity to the grid, the economic value of distributed generation is limited to reducing the individual costs that make up the cost of energy for a building. Any economic dispatch strategy must be built to reduce these individual costs. While the ability of distributed generation to reduce cost depends on factors such as electrical efficiency and operations and maintenance cost, the building energy demand being serviced has a strong effect on cost reduction. Buildings with low load factors can accept distributed generation with higher operating costs (low electrical efficiency and/or high operations and maintenance cost) due to the value of demand reduction.
As load factor increases, lower operating cost generators are desired due to a larger portion of the building load being met in an effort to reduce demand. In addition, buildings with large thermal demand have access to the least expensive natural gas, lowering the cost of operating distributed generation. Recovery of exhaust heat from DG reduces cost only if the building's thermal demand coincides with the electrical demand. Capacity limits exist where annual savings from operation of distributed generation decrease if further generation is installed. For low operating cost generators, the approximate limit is the average building load. This limit decreases as operating costs increase. In addition, a high capital cost of distributed generation can be accepted if generator operating costs are low. As generator operating costs increase, capital cost must decrease if a positive economic performance is desired.
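The demand-charge logic described above can be sketched as a threshold-clipping dispatch; the prices and the flat rate structure below are placeholders, not the SCE/SoCalGas tariffs used in the thesis:

```python
def dispatch_peak_shave(load_kw, capacity_kw, threshold_kw):
    """Run the generator only to clip load above a demand threshold,
    the dispatch that targets the demand charge."""
    gen = [min(capacity_kw, max(0.0, L - threshold_kw)) for L in load_kw]
    net = [L - g for L, g in zip(load_kw, gen)]
    return gen, net

def energy_cost(net_kw, gen_kw, energy_price, gen_cost, demand_charge):
    """Total bill: energy charges, a demand charge on the peak net load,
    and generator operating cost (fuel plus O&M folded into gen_cost)."""
    return (sum(net_kw) * energy_price
            + max(net_kw) * demand_charge
            + sum(gen_kw) * gen_cost)
```

Even a generator whose operating cost exceeds the energy price can pay for itself here, because a few hours of peak shaving cut the demand charge, which is the low-load-factor effect the text describes.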
NASA Astrophysics Data System (ADS)
Fu, Junjie; Wang, Jin-zhi
2017-09-01
In this paper, we study finite-time consensus problems with globally bounded convergence time, also known as fixed-time consensus problems, for multi-agent systems subject to directed communication graphs. Two new distributed control strategies are proposed such that leaderless and leader-follower consensus are achieved with convergence time independent of the initial conditions of the agents. Fixed-time formation generation and formation tracking problems are also solved as generalizations. Simulation examples are provided to demonstrate the performance of the new controllers.
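A common form of fixed-time consensus controller (not necessarily the paper's exact control law) combines a sub-linear and a super-linear signed power of each agent's local disagreement; a minimal Euler simulation on a directed ring:

```python
def sig(x, p):
    """Signed power: sign(x) * |x|**p."""
    return (abs(x)**p) if x >= 0 else -(abs(x)**p)

def fixed_time_consensus(x0, neighbors, a=2.0, b=2.0, p=0.5, q=1.5,
                         dt=1e-3, T=5.0):
    """Euler simulation of u_i = -a*sig(e_i)^p - b*sig(e_i)^q, where e_i
    is agent i's disagreement with its in-neighbors on a directed graph.
    The q>1 term dominates far from consensus, the p<1 term near it,
    which is what bounds the convergence time uniformly in x0."""
    x = list(x0)
    for _ in range(int(T / dt)):
        u = []
        for i, ns in enumerate(neighbors):
            e = sum(x[i] - x[j] for j in ns)
            u.append(-a * sig(e, p) - b * sig(e, q))
        x = [xi + dt * ui for xi, ui in zip(x, u)]
    return x
```

On a directed ring (each agent listens to its predecessor) the states contract to agreement within the simulated horizon regardless of how large the initial spread is.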
DISCRN: A Distributed Storytelling Framework for Intelligence Analysis.
Shukla, Manu; Dos Santos, Raimundo; Chen, Feng; Lu, Chang-Tien
2017-09-01
Storytelling connects entities (people, organizations) using their observed relationships to establish meaningful storylines. This can be extended to spatiotemporal storytelling that incorporates locations, time, and graph computations to enhance coherence and meaning. But when performed sequentially these computations become a bottleneck because the massive number of entities makes space and time complexity untenable. This article presents DISCRN, or distributed spatiotemporal ConceptSearch-based storytelling, a distributed framework for performing spatiotemporal storytelling. The framework extracts entities from microblogs and event data, and links these entities using a novel ConceptSearch to derive storylines in a distributed fashion utilizing the key-value pair paradigm. Performing these operations at scale allows deeper and broader analysis of storylines. The novel parallelization techniques speed up the generation and filtering of storylines on massive datasets. Experiments with microblog posts such as Twitter data and Global Database of Events, Language, and Tone events show the efficiency of the techniques in DISCRN.
Time Analyzer for Time Synchronization and Monitor of the Deep Space Network
NASA Technical Reports Server (NTRS)
Cole, Steven; Gonzalez, Jorge, Jr.; Calhoun, Malcolm; Tjoelker, Robert
2003-01-01
A software package has been developed to measure, monitor, and archive the performance of timing signals distributed in the NASA Deep Space Network. Timing signals are generated from a central master clock and distributed to over 100 users at distances up to 30 kilometers. The time offset due to internal distribution delays and time jitter with respect to the central master clock are critical for successful spacecraft navigation, radio science, and very long baseline interferometry (VLBI) applications. The instrument controller and operator interface software is written in LabView and runs on the Linux operating system. The software controls a commercial multiplexer to switch 120 separate timing signals to measure offset and jitter with a time-interval counter referenced to the master clock. The offset of each channel is displayed in histogram form, and "out of specification" alarms are sent to a central complex monitor and control system. At any time, the measurement cycle of 120 signals can be interrupted for diagnostic tests on an individual channel. The instrument also routinely monitors and archives the long-term stability of all frequency standards or any other 1-pps source compared against the master clock. All data is stored and made available for
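The monitoring logic, per-channel offset histograms plus out-of-spec alarms, can be sketched as follows; the 100 ns specification and bin width are placeholders, not the DSN requirements:

```python
def check_channels(offsets_ns, spec_ns=100.0):
    """Flag channels whose measured offset from the master clock
    exceeds the specification (placeholder value)."""
    return [ch for ch, off in offsets_ns.items() if abs(off) > spec_ns]

def histogram(values, bin_width):
    """Bin offset measurements for the per-channel histogram display."""
    bins = {}
    for v in values:
        b = int(v // bin_width)
        bins[b] = bins.get(b, 0) + 1
    return bins
```

In the real instrument these two steps run inside the multiplexer scan loop: measure a channel with the time-interval counter, update its histogram, and raise an alarm to the complex monitor if the offset is out of specification.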
Sensory-evoked perturbations of locomotor activity by sparse sensory input: a computational study
Brownstone, Robert M.
2015-01-01
Sensory inputs from muscle, cutaneous, and joint afferents project to the spinal cord, where they are able to affect ongoing locomotor activity. Activation of sensory input can initiate or prolong bouts of locomotor activity depending on the identity of the sensory afferent activated and the timing of the activation within the locomotor cycle. However, the mechanisms by which afferent activity modifies locomotor rhythm and the distribution of sensory afferents to the spinal locomotor networks have not been determined. Considering the many sources of sensory inputs to the spinal cord, determining this distribution would provide insights into how sensory inputs are integrated to adjust ongoing locomotor activity. We asked whether a sparsely distributed set of sensory inputs could modify ongoing locomotor activity. To address this question, several computational models of locomotor central pattern generators (CPGs) that were mechanistically diverse and generated locomotor-like rhythmic activity were developed. We show that sensory inputs restricted to a small subset of the network neurons can perturb locomotor activity in the same manner as seen experimentally. Furthermore, we show that an architecture with sparse sensory input improves the capacity to gate sensory information by selectively modulating sensory channels. These data demonstrate that sensory input to rhythm-generating networks need not be extensively distributed. PMID:25673740
Serial Spike Time Correlations Affect Probability Distribution of Joint Spike Events.
Shahi, Mina; van Vreeswijk, Carl; Pipa, Gordon
2016-01-01
Detecting the existence of temporally coordinated spiking activity, and its role in information processing in the cortex, has remained a major challenge for neuroscience research. Different methods and approaches have been suggested to test whether the observed synchronized events are significantly different from those expected by chance. To analyze the simultaneous spike trains for precise spike correlation, these methods typically model the spike trains as a Poisson process implying that the generation of each spike is independent of all the other spikes. However, studies have shown that neural spike trains exhibit dependence among spike sequences, such as the absolute and relative refractory periods which govern the spike probability of the oncoming action potential based on the time of the last spike, or the bursting behavior, which is characterized by short epochs of rapid action potentials, followed by longer episodes of silence. Here we investigate non-renewal processes with the inter-spike interval distribution model that incorporates spike-history dependence of individual neurons. For that, we use the Monte Carlo method to estimate the full shape of the coincidence count distribution and to generate false positives for coincidence detection. The results show that compared to the distributions based on homogeneous Poisson processes, and also non-Poisson processes, the width of the distribution of joint spike events changes. Non-renewal processes can lead to either heavy-tailed or narrow coincidence distributions. We conclude that small differences in the exact autostructure of the point process can cause large differences in the width of a coincidence distribution. Therefore, manipulations of the autostructure for the estimation of significance of joint spike events seem to be inadequate.
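The Monte Carlo approach can be illustrated with gamma-renewal spike trains, whose shape parameter interpolates between Poisson firing (shape = 1) and more regular firing; the rates and coincidence window below are illustrative, not the paper's settings:

```python
import random

def renewal_train(T, rate, shape, rng):
    """Spike times of a gamma-renewal process with the given mean rate.
    shape=1 recovers the homogeneous Poisson process."""
    scale = 1.0 / (rate * shape)        # mean ISI = 1/rate
    t, spikes = 0.0, []
    while True:
        t += rng.gammavariate(shape, scale)
        if t > T:
            return spikes
        spikes.append(t)

def coincidences(train_a, train_b, window):
    """Count spikes in train_a with a partner in train_b within window."""
    count, j = 0, 0
    for ta in train_a:
        while j < len(train_b) and train_b[j] < ta - window:
            j += 1
        if j < len(train_b) and abs(train_b[j] - ta) <= window:
            count += 1
    return count

def coincidence_distribution(n_trials, T=100.0, rate=10.0, shape=1.0,
                             window=0.005, seed=0):
    """Monte Carlo sample of the coincidence count between two
    independent surrogate trains, one value per trial."""
    rng = random.Random(seed)
    return [coincidences(renewal_train(T, rate, shape, rng),
                         renewal_train(T, rate, shape, rng), window)
            for _ in range(n_trials)]
```

Repeating the sampling with different shape parameters shows the effect the paper reports: the autostructure of the surrogate trains changes the width of the coincidence count distribution, and with it the significance assigned to joint spike events.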
SLAM, a Mathematica interface for SUSY spectrum generators
NASA Astrophysics Data System (ADS)
Marquard, Peter; Zerf, Nikolai
2014-03-01
We present and publish a Mathematica package, which can be used to automatically obtain any numerical MSSM input parameter from SUSY spectrum generators which follow the SLHA standard, like SPheno, SOFTSUSY, SuSeFLAV or Suspect. The package enables a very comfortable way of numerical evaluations within the MSSM using Mathematica. It implements easy-to-use predefined high-scale and low-scale scenarios like mSUGRA or mhmax and, if needed, enables the user to directly specify the input required by the spectrum generators. In addition it supports automatic saving and loading of SUSY spectra to and from a SQL database, avoiding the rerun of a spectrum generator for a known spectrum.
Catalogue identifier: AERX_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AERX_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 4387
No. of bytes in distributed program, including test data, etc.: 37748
Distribution format: tar.gz
Programming language: Mathematica
Computer: Any computer where Mathematica version 6 or higher is running, providing bash and sed
Operating system: Linux
Classification: 11.1
External routines: A SUSY spectrum generator such as SPheno, SOFTSUSY, SuSeFLAV or SUSPECT
Nature of problem: Interfacing published spectrum generators for automated creation, saving and loading of SUSY particle spectra.
Solution method: SLAM automatically writes/reads SLHA spectrum generator input/output and is able to save/load generated data in/from a database.
Restrictions: No general restrictions; specific restrictions are given in the manuscript.
Running time: A single spectrum calculation takes much less than one second on a modern PC.
On geological interpretations of crystal size distributions: Constant vs. proportionate growth
Eberl, D.D.; Kile, D.E.; Drits, V.A.
2002-01-01
Geological interpretations of crystal size distributions (CSDs) depend on understanding the crystal growth laws that generated the distributions. Most descriptions of crystal growth, including a population-balance modeling equation that is widely used in petrology, assume that crystal growth rates at any particular time are identical for all crystals, and, therefore, independent of crystal size. This type of growth under constant conditions can be modeled by adding a constant length to the diameter of each crystal for each time step. This growth equation is unlikely to be correct for most mineral systems because it neither generates nor maintains the shapes of lognormal CSDs, which are among the most common types of CSDs observed in rocks. In an alternative approach, size-dependent (proportionate) growth is modeled approximately by multiplying the size of each crystal by a factor, an operation that maintains CSD shape and variance, and which is in accord with calcite growth experiments. The latter growth law can be obtained during supply controlled growth using a modified version of the Law of Proportionate Effect (LPE), an equation that simulates the reaction path followed by a CSD shape as mean size increases.
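The contrast between the two growth laws is easy to reproduce numerically: adding a constant length shrinks the relative spread (coefficient of variation) of a CSD, while multiplying by a random factor, in the spirit of the Law of Proportionate Effect, preserves or increases it. A minimal sketch with arbitrary growth-rate values:

```python
import random, math

def grow(sizes, steps, mode, rng, rate=0.05):
    """Advance a crystal population one of two ways: 'constant' adds the
    same length to every crystal per step; 'proportionate' multiplies
    each size by an independent random factor (LPE-style growth)."""
    out = list(sizes)
    for _ in range(steps):
        if mode == "constant":
            out = [s + rate for s in out]
        else:
            out = [s * (1.0 + rate * rng.random()) for s in out]
    return out

def cv(xs):
    """Coefficient of variation: relative spread of the size distribution."""
    m = sum(xs) / len(xs)
    var = sum((x - m)**2 for x in xs) / len(xs)
    return math.sqrt(var) / m
```

Starting from a lognormal population, constant growth collapses the CV toward zero as mean size increases, whereas proportionate growth keeps the lognormal shape, which is why lognormal CSDs in rocks argue against size-independent growth.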
Toward Ultraintense Compact RBS Pump for Recombination 3.4 nm Laser via OFI
NASA Astrophysics Data System (ADS)
Suckewer, S.; Ren, J.; Li, S.; Lou, Y.; Morozov, A.; Turnbull, D.; Avitzour, Y.
In our presentation we overview the progress made in developing a new ultrashort and ultraintense laser system based on a Raman backscattering (RBS) amplifier/compressor between the 10th XRL Conference in Berlin and the present 11th XRL Conference in Belfast. One of the main objectives of the RBS laser system development is to use it for pumping a recombination X-ray laser on the transition to the ground state of CVI ions at 3.4 nm. Using an elaborate computer code, the processes of optical field ionization, electron energy distribution, and recombination were calculated. It was shown that in the very early stage of recombination, when the electron energy distribution is strongly non-Maxwellian, high gain can be generated on the transition from the first excited level n=2 to the ground level m=1. By adding a large amount of hydrogen gas to the initial gas containing carbon atoms (e.g. methane, CH4), the calculated gain reached values up to 150-200 cm-2. Taking into account this very encouraging result, we proceeded to arrange the experimental setup. We will present the observation of plasma channels and measurements of the electron density distribution required for the generation of gain at 3.4 nm.
Flexible and fast: linguistic shortcut affects both shallow and deep conceptual processing.
Connell, Louise; Lynott, Dermot
2013-06-01
Previous research has shown that people use linguistic distributional information during conceptual processing, and that it is especially useful for shallow tasks and rapid responding. Using two conceptual combination tasks, we showed that this linguistic shortcut extends to the processing of novel stimuli, is used in both successful and unsuccessful conceptual processing, and is evident in both shallow and deep conceptual tasks. Specifically, as predicted by the ECCo theory of conceptual combination, people use the linguistic shortcut as a "quick-and-dirty" guide to whether the concepts are likely to combine into a coherent conceptual representation, in both shallow sensibility judgment and deep interpretation generation tasks. Linguistic distributional frequency predicts both the likelihood and the time course of rejecting a novel word compound as nonsensical or uninterpretable. However, it predicts the time course of successful processing only in shallow sensibility judgment, because the deeper conceptual process of interpretation generation does not allow the linguistic shortcut to suffice. Furthermore, the effects of linguistic distributional frequency are independent of any effects of conventional word frequency. We discuss the utility of the linguistic shortcut as a cognitive triage mechanism that can optimize processing in a limited-resource conceptual system.
NASA Astrophysics Data System (ADS)
Keramitsoglou, Iphigenia; Kiranoudis, Chris T.; Sismanidis, Panagiotis
2016-08-01
The Urban Heat Island (UHI) is an adverse environmental effect of urbanization that increases the energy demand of cities, impacts human health, and intensifies and prolongs heatwave events. To facilitate the study of UHIs, the Institute for Astronomy, Astrophysics, Space Applications and Remote Sensing of the National Observatory of Athens (IAASARS/NOA) has developed an operational real-time system that exploits remote sensing image data from Meteosat Second Generation - Spinning Enhanced Visible and Infrared Imager (MSG-SEVIRI) and generates high spatiotemporal land surface temperature (LST) and 2 m air temperature (TA) time series. These datasets form the basis for the generation of higher value products and services related to energy demand and heat-related health issues. These products are the heatwave hazard (HZ); the HUMIDEX (i.e. an index that describes the temperature felt by an individual exposed to heat and humidity); and the cooling degrees (CD; i.e. a measure that reflects the energy needed to cool a building). The spatiotemporal characteristics of HZ, HUMIDEX and CD are unique (1 km/5 min) and enable the appraisal of the spatially distributed heat-related health risk and energy demand of cities. In this paper, the real-time generation of the high spatiotemporal HZ, HUMIDEX and CD products is discussed. In addition, a case study corresponding to Athens' September 2015 heatwave is presented so as to demonstrate their capabilities. The overall aim of the system is to provide high quality data to several different end users, such as health responders and energy suppliers. The urban thermal monitoring web service is available at http://snf-652558.vm.okeanos.grnet.gr/treasure/portal/info.html.
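For concreteness, the two simpler products can be sketched in a few lines. This is a hedged illustration only: it assumes the standard Environment Canada HUMIDEX formulation and a hypothetical 18 °C cooling-degree base, since the paper's exact formulas are not reproduced here.

```python
import math

def humidex(t_air_c, t_dew_c):
    """HUMIDEX from air temperature and dewpoint (deg C), using the standard
    Environment Canada formulation (an assumption; the paper's exact form is not given)."""
    # water vapour pressure in hPa derived from the dewpoint
    e = 6.11 * math.exp(5417.7530 * (1.0 / 273.16 - 1.0 / (273.15 + t_dew_c)))
    return t_air_c + 0.5555 * (e - 10.0)

def cooling_degrees(t_air_c, base_c=18.0):
    """Degrees above a base temperature; the 18 deg C base is illustrative."""
    return max(t_air_c - base_c, 0.0)

h = humidex(30.0, 20.0)        # a hot, humid afternoon
assert 37.0 < h < 38.5         # felt temperature well above the air temperature
assert humidex(30.0, 25.0) > h # more humidity feels hotter
assert cooling_degrees(30.0) == 12.0
```

Applied per pixel to the 1 km/5 min LST and TA grids, functions of this kind yield the spatially distributed products the system serves.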
A Blueprint for Demonstrating Quantum Supremacy with Superconducting Qubits
NASA Technical Reports Server (NTRS)
Kechedzhi, Kostyantyn
2018-01-01
Long coherence times and high-fidelity control recently achieved in scalable superconducting circuits have paved the way for a growing number of experimental studies of many-qubit quantum coherent phenomena in these devices. Although full implementation of quantum error correction and fault-tolerant quantum computation remains a challenge, near-term pre-error-correction devices could allow new fundamental experiments despite the inevitable accumulation of errors. One such open question, foundational for quantum computing, is achieving so-called quantum supremacy: an experimental demonstration of a computational task that takes polynomial time on a quantum computer, whereas the best classical algorithm would require exponential time and/or resources. It is possible to formulate such a task for a quantum computer consisting of fewer than 100 qubits. The computational task we consider is to provide approximate samples from a non-trivial quantum distribution. This is a generalization to superconducting circuits of the ideas behind the boson sampling protocol for quantum optics introduced by Arkhipov and Aaronson. In this presentation we discuss a proof-of-principle demonstration of such a sampling task on a 9-qubit chain of superconducting gmon qubits developed by Google. We discuss a theoretical analysis of the driven evolution of the device, resulting in output approximating samples from a uniform distribution in the Hilbert space, a quantum chaotic state. We analyze quantum chaotic characteristics of the output of the circuit and the time required to generate a sufficiently complex quantum distribution. We demonstrate that classical simulation of the sampling output requires exponential resources by connecting the task of calculating the output amplitudes to the sign problem of the quantum Monte Carlo method.
We also discuss the detailed theoretical modeling required to achieve high fidelity control and calibration of the multi-qubit unitary evolution in the device. We use a novel cross-entropy statistical metric as a figure of merit to verify the output and calibrate the device controls. Finally, we demonstrate the statistics of the wave function amplitudes generated on the 9-gmon chain and verify the quantum chaotic nature of the generated quantum distribution. This verifies the implementation of the quantum supremacy protocol.
Pacific Northwest GridWise™ Testbed Demonstration Projects; Part I. Olympic Peninsula Project
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hammerstrom, Donald J.; Ambrosio, Ron; Carlon, Teresa A.
2008-01-09
This report describes the implementation and results of a field demonstration wherein residential electric water heaters and thermostats, commercial building space conditioning, municipal water pump loads, and several distributed generators were coordinated to manage constrained feeder electrical distribution through the two-way communication of load status and electric price signals. The field demonstration took place in Washington and Oregon and was paid for by the U.S. Department of Energy and several northwest utilities. Price is found to be an effective control signal for managing transmission or distribution congestion. Real-time signals at 5-minute intervals are shown to shift controlled load in time. The behaviors of customers and their responses under fixed, time-of-use, and real-time price contracts are compared. Peak loads are effectively reduced on the experimental feeder. A novel application of portfolio theory is applied to the selection of an optimal mix of customer contract types.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smallwood, D.O.
In a previous paper Smallwood and Paez (1991) showed how to generate realizations of partially coherent stationary normal time histories with a specified cross-spectral density matrix. This procedure is generalized for the case of multiple inputs with a specified cross-spectral density function and a specified marginal probability density function (pdf) for each of the inputs. The specified pdfs are not required to be Gaussian. A zero memory nonlinear (ZMNL) function is developed for each input to transform a Gaussian or normal time history into a time history with a specified non-Gaussian distribution. The transformation functions have the property that a transformed time history will have nearly the same auto spectral density as the original time history. A vector of Gaussian time histories is then generated with the specified cross-spectral density matrix. These waveforms are then transformed into the required time history realizations using the ZMNL function.
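The ZMNL idea can be sketched as a CDF-matching transform: push each Gaussian sample through the normal CDF, then through the inverse CDF of the desired marginal. A minimal sketch under that interpretation (the unit-mean exponential target is an arbitrary illustrative choice, not one from the paper):

```python
import math
import numpy as np

def zmnl(gaussian, target_ppf):
    """Zero-memory nonlinear transform: standard-normal samples -> uniform
    (via the normal CDF) -> target marginal (via its inverse CDF)."""
    u = 0.5 * (1.0 + np.vectorize(math.erf)(gaussian / math.sqrt(2.0)))
    return target_ppf(u)

rng = np.random.default_rng(1)
x = rng.standard_normal(100_000)

# Illustrative non-Gaussian target: exponential marginal with unit mean.
exp_ppf = lambda u: -np.log1p(-u)
y = zmnl(x, exp_ppf)

assert y.min() >= 0.0              # exponential support is non-negative
assert abs(y.mean() - 1.0) < 0.02  # unit mean recovered
```

Because the transform is applied sample by sample (zero memory), it changes the marginal distribution while leaving the time ordering, and hence approximately the autospectrum, intact.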
Buffered coscheduling for parallel programming and enhanced fault tolerance
Petrini, Fabrizio [Los Alamos, NM; Feng, Wu-chun [Los Alamos, NM
2006-01-31
A computer-implemented method schedules processor jobs on a network of parallel machine processors or distributed system processors. Control information communications generated by each process performed by each processor during a defined time interval are accumulated in buffers, where adjacent time intervals are separated by strobe intervals for a global exchange of control information. A global exchange of the control information communications at the end of each defined time interval is performed during an intervening strobe interval so that each processor is informed by all of the other processors of the number of incoming jobs to be received by each processor in a subsequent time interval. The buffered coscheduling method of this invention also enhances the fault tolerance of a network of parallel machine processors or distributed system processors.
Rapid prototyping of reflectors for vehicle lighting using laser activated remote phosphor
NASA Astrophysics Data System (ADS)
Lachmayer, Roland; Kloppenburg, Gerolf; Wolf, Alexander
2015-03-01
Bright white light sources are of significant importance for automotive front lighting systems. Today's upper class vehicles mainly use HID or LED as light source. As a further step in this development, laser-diode-based systems offer high luminance and efficiency and allow the realization of new styling concepts and new dynamic lighting functions. These white laser diode systems can either be realized by mixing different spectral sources or by combining diodes with specific phosphors. Based on the approach of generating light using a laser and remote phosphor, lighting modules are manufactured. Four blue laser diodes (450 nm) are used to activate a phosphor coating and thus to achieve white light. A segmented paraboloid reflector generates the desired light distribution for an additional car headlamp. We use high-speed milling and selective laser melting to build the reflector system for this lighting module. We compare the spectral reflection grade of these materials. Furthermore, the generated modules are analyzed regarding their efficiency and light distribution. The use of Rapid Prototyping technologies allows an early validation of the chosen concept and is supposed to reduce cost and time in the product development process significantly. Therefore, we discuss the costs and times of the applied manufacturing technologies.
Nessler, Bernhard; Pfeiffer, Michael; Buesing, Lars; Maass, Wolfgang
2013-01-01
The principles by which networks of neurons compute, and how spike-timing dependent plasticity (STDP) of synaptic weights generates and maintains their computational function, are unknown. Preceding work has shown that soft winner-take-all (WTA) circuits, where pyramidal neurons inhibit each other via interneurons, are a common motif of cortical microcircuits. We show through theoretical analysis and computer simulations that Bayesian computation is induced in these network motifs through STDP in combination with activity-dependent changes in the excitability of neurons. The fundamental components of this emergent Bayesian computation are priors that result from adaptation of neuronal excitability and implicit generative models for hidden causes that are created in the synaptic weights through STDP. In fact, a surprising result is that STDP is able to approximate a powerful principle for fitting such implicit generative models to high-dimensional spike inputs: Expectation Maximization. Our results suggest that the experimentally observed spontaneous activity and trial-to-trial variability of cortical neurons are essential features of their information processing capability, since their functional role is to represent probability distributions rather than static neural codes. Furthermore it suggests networks of Bayesian computation modules as a new model for distributed information processing in the cortex. PMID:23633941
Theoretical Framework for Integrating Distributed Energy Resources into Distribution Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lian, Jianming; Wu, Di; Kalsi, Karanjit
This paper focuses on developing a novel theoretical framework for effective coordination and control of a large number of distributed energy resources in distribution systems in order to more reliably manage the future U.S. electric power grid under the high penetration of renewable generation. The proposed framework provides a systematic view of the overall structure of the future distribution systems along with the underlying information flow, functional organization, and operational procedures. It is characterized by the features of being open, flexible and interoperable with the potential to support dynamic system configuration. Under the proposed framework, the energy consumption of various DERs is coordinated and controlled in a hierarchical way by using market-based approaches. The real-time voltage control is simultaneously considered to complement the real power control in order to keep nodal voltages stable within acceptable ranges during real time. In addition, computational challenges associated with the proposed framework are also discussed with recommended practices.
Reconstruction of the modified discrete Langevin equation from persistent time series
DOE Office of Scientific and Technical Information (OSTI.GOV)
Czechowski, Zbigniew
A discrete Langevin-type equation, which can describe persistent processes, was introduced. A procedure for reconstructing the equation from time series was proposed and tested on synthetic data, with short- and long-tail distributions, generated by different Langevin equations. Corrections due to finite sampling rates were derived. For an exemplary meteorological time series, an appropriate Langevin equation, which constitutes a stochastic macroscopic model of the phenomenon, was reconstructed.
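The reconstruction step can be illustrated with the standard conditional-moment estimator: the drift at a state x is the mean increment given X=x, divided by the time step. A rough sketch on synthetic Ornstein-Uhlenbeck data (the process, parameters, and binning are illustrative choices, not the paper's method in detail):

```python
import numpy as np

rng = np.random.default_rng(2)
dt, n = 0.01, 200_000
theta, sigma = 1.0, 0.5

# Synthetic data from a known Langevin equation (Ornstein-Uhlenbeck process).
x = np.empty(n)
x[0] = 0.0
noise = rng.standard_normal(n - 1)
for i in range(n - 1):
    x[i + 1] = x[i] - theta * x[i] * dt + sigma * np.sqrt(dt) * noise[i]

# Reconstruct the drift from binned conditional means of the increments.
dx = np.diff(x)
edges = np.linspace(-1.0, 1.0, 21)
idx = np.digitize(x[:-1], edges)
centers, drift = [], []
for b in range(1, len(edges)):
    mask = idx == b
    if mask.sum() > 500:  # skip sparsely populated tail bins
        centers.append(x[:-1][mask].mean())
        drift.append(dx[mask].mean() / dt)

# The true drift is a(x) = -theta * x; a linear fit should recover -theta.
slope = np.polyfit(centers, drift, 1)[0]
assert abs(slope + theta) < 0.2
```

The finite-sampling corrections mentioned in the abstract address the small bias this naive estimator incurs when dt is not vanishingly small.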
Characterization of gigahertz (GHz) bandwidth photomultipliers
NASA Technical Reports Server (NTRS)
Abshire, J. B.; Rowe, H. E.
1977-01-01
The average impulse response, root-mean-square time jitter as a function of signal level, single-photoelectron distribution, and multiphotoelectron dark-count distribution have been measured for two static crossed-field and five electrostatic photomultipliers. The optical signal source for the first three of these tests was a 30 picosecond mode-locked laser pulse at 0.53 micron. The static crossed-field detectors had 2-photoelectron resolution, less than 200 ps rise times, and rms time jitters of 30 ps at the single photoelectron level. The electrostatic photomultipliers had rise times from 1 to 2.5 nanoseconds, and rms time jitters from 160 to 650 ps at the same signal level. The two static crossed-field photomultipliers had ion-feedback-generated dark pulses to the 50-photoelectron level, whereas one electrostatic photomultiplier had dark pulses to the 30-photoelectron level.
Wellman, G S; Hammond, R L; Talmage, R
2001-10-01
A secondary data-reporting system used to scan the archives of a hospital's automated storage and distribution cabinets (ASDCs) for indications of controlled-substance diversion is described. ASDCs, which allow access to multiple doses of the same medication at one time, use drug count verification to ensure complete audits and disposition tracking. Because an ASDC may interpret inappropriate removal of a medication as a normal transaction, users of ASDCs should have a comprehensive plan for detecting and investigating controlled-substance diversion. Monitoring for and detecting diversion can be difficult and time-consuming, given the limited report-generating features of many ASDCs. Managers at an 800-bed hospital used report-writing software to address these problems. This application interfaces with the hospital's computer system and generates customized reports. The monthly activity recapitulation report lists each user of the ASDCs and gives a summary of all the controlled-substance transactions for those users for the time period specified. The monthly summary report provides the backbone of the surveillance system and identifies situations that require further audit and review. This report provides a summary of each user's activity for a specific medication for the time period specified. The detailed summary report allows for efficient review of specific transactions before there is a decision to conduct a chart review. This report identifies all ASDC controlled-substance transactions associated with a user. A computerized report-generating system identifies instances of inappropriate removal of controlled substances from a hospital's ASDCs.
Optimal Regulation of Virtual Power Plants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dall Anese, Emiliano; Guggilam, Swaroop S.; Simonetto, Andrea
This paper develops a real-time algorithmic framework for aggregations of distributed energy resources (DERs) in distribution networks to provide regulation services in response to transmission-level requests. Leveraging online primal-dual-type methods for time-varying optimization problems and suitable linearizations of the nonlinear AC power-flow equations, we believe this work establishes the system-theoretic foundation to realize the vision of distribution-level virtual power plants. The optimization framework controls the output powers of dispatchable DERs such that, in aggregate, they respond to automatic-generation-control and/or regulation-services commands. This is achieved while concurrently regulating voltages within the feeder and maximizing customers' and utility's performance objectives. Convergence and tracking capabilities are analytically established under suitable modeling assumptions. Simulations are provided to validate the proposed approach.
Tetzlaff, D; Birkel, C; Dick, J; Geris, J; Soulsby, C
2014-01-01
We examined the storage dynamics and isotopic composition of soil water over 12 months in three hydropedological units in order to understand runoff generation in a montane catchment. The units form classic catena sequences from freely draining podzols on steep upper hillslopes through peaty gleys in shallower lower slopes to deeper peats in the riparian zone. The peaty gleys and peats remained saturated throughout the year, while the podzols showed distinct wetting and drying cycles. In this region, most precipitation events are <10 mm in magnitude, and storm runoff is mainly generated from the peats and peaty gleys, with runoff coefficients (RCs) typically <10%. In larger events the podzolic soils become strongly connected to the saturated areas, and RCs can exceed 40%. Isotopic variations in precipitation are significantly damped in the organic-rich soil surface horizons due to mixing with larger volumes of stored water. This damping is accentuated in the deeper soil profile and groundwater. Consequently, the isotopic composition of stream water is also damped, but the dynamics strongly reflect those of the near-surface waters in the riparian peats. “Pre-event” water typically accounts for >80% of flow, even in large events, reflecting the displacement of water from the riparian soils that has been stored in the catchment for >2 years. These riparian areas are the key zone where different source waters mix. Our study is novel in showing that they act as “isostats,” not only regulating the isotopic composition of stream water, but also integrating the transit time distribution for the catchment. Key Points: Hillslope connectivity is controlled by small storage changes in soil units. Different catchment source waters mix in large riparian wetland storage. Isotopes show riparian wetlands set the catchment transit time distribution. PMID:25506098
Tetzlaff, D; Birkel, C; Dick, J; Geris, J; Soulsby, C
2014-02-01
We examined the storage dynamics and isotopic composition of soil water over 12 months in three hydropedological units in order to understand runoff generation in a montane catchment. The units form classic catena sequences from freely draining podzols on steep upper hillslopes through peaty gleys in shallower lower slopes to deeper peats in the riparian zone. The peaty gleys and peats remained saturated throughout the year, while the podzols showed distinct wetting and drying cycles. In this region, most precipitation events are <10 mm in magnitude, and storm runoff is mainly generated from the peats and peaty gleys, with runoff coefficients (RCs) typically <10%. In larger events the podzolic soils become strongly connected to the saturated areas, and RCs can exceed 40%. Isotopic variations in precipitation are significantly damped in the organic-rich soil surface horizons due to mixing with larger volumes of stored water. This damping is accentuated in the deeper soil profile and groundwater. Consequently, the isotopic composition of stream water is also damped, but the dynamics strongly reflect those of the near-surface waters in the riparian peats. "Pre-event" water typically accounts for >80% of flow, even in large events, reflecting the displacement of water from the riparian soils that has been stored in the catchment for >2 years. These riparian areas are the key zone where different source waters mix. Our study is novel in showing that they act as "isostats," not only regulating the isotopic composition of stream water, but also integrating the transit time distribution for the catchment. Hillslope connectivity is controlled by small storage changes in soil units. Different catchment source waters mix in large riparian wetland storage. Isotopes show riparian wetlands set the catchment transit time distribution.
Managing risks of market price uncertainty for a microgrid operation
NASA Astrophysics Data System (ADS)
Raghavan, Sriram
After deregulation of electricity in the United States, the day-ahead and real-time markets allow load serving entities and generation companies to bid and purchase/sell energy under the supervision of the independent system operator (ISO). The electricity market prices are inherently uncertain and can be highly volatile. The main objective of this thesis is to hedge against the risk from the uncertainty of the market prices when purchasing/selling energy from/to the market. The energy manager can also schedule distributed generators (DGs) and storage of the microgrid to meet the demand, in addition to energy transactions from the market. The risk measure used in this work is the variance of the uncertain market purchase/sale cost/revenue, assuming the price follows a Gaussian distribution. Using Markowitz optimization, the risk is minimized to find the optimal mix of purchases from the markets. The problem is formulated as a mixed integer quadratic program. The microgrid at Illinois Institute of Technology (IIT) in Chicago, IL was used as a case study. The result of this work reveals the tradeoff faced by the microgrid energy manager between minimizing the risk and minimizing the mean of the total operating cost (TOC) of the microgrid. With this information, the microgrid energy manager can make decisions in the day-ahead and real-time markets according to their risk aversion preference. The assumption of market prices following a Gaussian distribution is also verified to be reasonable for the purpose of hedging against their risks. This is done by comparing the result of the proposed formulation with that obtained from the sample market prices randomly generated using the distribution of actual historic market price data.
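The variance-minimizing split between two correlated markets has a simple closed form in the two-asset Markowitz case, which conveys the core of the tradeoff. A sketch under assumed (hypothetical) day-ahead and real-time price volatilities, not the IIT case-study numbers:

```python
def min_variance_split(sigma_da, sigma_rt, rho):
    """Fraction of energy bought day-ahead that minimizes the variance of the
    total purchase cost (two markets, weights summing to one)."""
    cov = rho * sigma_da * sigma_rt
    return (sigma_rt**2 - cov) / (sigma_da**2 + sigma_rt**2 - 2.0 * cov)

def portfolio_var(w, sigma_da, sigma_rt, rho):
    """Variance of the total cost for day-ahead weight w."""
    cov = rho * sigma_da * sigma_rt
    return w**2 * sigma_da**2 + (1 - w)**2 * sigma_rt**2 + 2 * w * (1 - w) * cov

# Hypothetical volatilities: day-ahead prices calmer than real-time prices.
w = min_variance_split(5.0, 20.0, 0.2)
assert 0.5 < w < 1.0  # buy mostly in the less volatile market
assert portfolio_var(w, 5.0, 20.0, 0.2) <= portfolio_var(1.0, 5.0, 20.0, 0.2)
assert portfolio_var(w, 5.0, 20.0, 0.2) <= portfolio_var(0.0, 5.0, 20.0, 0.2)
```

The thesis's full formulation adds DG and storage scheduling as integer decisions, which is what turns this quadratic program into a mixed-integer quadratic program.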
Distributed Modelling of Stormflow Generation: Assessing the Effect of Ground Cover
NASA Astrophysics Data System (ADS)
Jarihani, B.; Sidle, R. C.; Roth, C. H.; Bartley, R.; Wilkinson, S. N.
2017-12-01
Understanding the effects of grazing management and land cover changes on surface hydrology is important for water resources and land management. A distributed hydrological modelling platform, wflow (developed as part of Deltares's OpenStreams project), is used to assess the effect of land management practices on runoff generation processes. The model was applied to Weany Creek, a small catchment (13.6 km2) of the Burdekin Basin, North Australia, which is being studied to understand sources of sediment and nutrients to the Great Barrier Reef. Satellite and drone-based ground cover data, high resolution topography from LiDAR, soil properties, and distributed rainfall data were used to parameterise the model. Wflow was used to predict total runoff, peak runoff, time of rise, and lag time for several events of varying magnitudes and antecedent moisture conditions. A nested approach was employed to calibrate the model by using recorded flow hydrographs at three scales: (1) a hillslope sub-catchment; (2) a gullied sub-catchment; and (3) the 13.6 km2 catchment outlet. Model performance was evaluated by comparing observed and predicted stormflow hydrograph attributes using the Nash-Sutcliffe efficiency metric. By using a nested approach, spatiotemporal patterns of overland flow occurrence across the catchment can also be evaluated. The results show that a process-based distributed model can be calibrated to simulate spatial and temporal patterns of runoff generation processes, to help identify dominant processes which may be addressed by land management to improve rainfall retention. The model will be used to assess the effects of ground cover changes due to management practices in grazed lands on storm runoff.
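The evaluation metric is standard: Nash-Sutcliffe efficiency compares the model's squared error against the variance of the observations, so 1 is a perfect fit and 0 means the model does no better than the observed mean. A minimal sketch (the toy hydrograph values are illustrative):

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations.
    NSE = 1 is a perfect fit; NSE <= 0 is no better than the observed mean."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    sse = np.sum((observed - simulated) ** 2)
    var = np.sum((observed - observed.mean()) ** 2)
    return 1.0 - sse / var

obs = [1.0, 3.0, 5.0, 4.0, 2.0]                  # toy stormflow hydrograph
assert nash_sutcliffe(obs, obs) == 1.0           # perfect simulation
assert nash_sutcliffe(obs, [3.0] * 5) == 0.0     # predicting the mean scores 0
```

In the nested calibration, a score like this is computed at each of the three gauging scales so that the hillslope, gully, and catchment responses are all constrained.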
Fast coincidence counting with active inspection systems
NASA Astrophysics Data System (ADS)
Mullens, J. A.; Neal, J. S.; Hausladen, P. A.; Pozzi, S. A.; Mihalczo, J. T.
2005-12-01
This paper describes measurements of 2nd- and 3rd-order time coincidence distributions with a GHz processor that synchronously samples 5 or 10 channels of data from radiation detectors near fissile material. On-line, time coincidence distributions are measured between detectors or between detectors and an external stimulating source. Detector-to-detector correlations are useful for passive measurements as well. The processor also measures the number of times n pulses occur in a selectable time window and compares this multiplet distribution to a Poisson distribution as a method of determining the occurrence of fission. The detectors respond to radiation emitted in the fission process, induced internally by inherent sources or by external sources such as LINACs and DT generators (either pulsed or steady state with alpha detectors). Data can be acquired from prompt emission during the source pulse, prompt emissions immediately after the source pulse, or delayed emissions between source pulses. These types of time coincidence measurements, which occur on the time scale of the fission chain multiplication processes for weapons-grade U and Pu, are useful for determining the presence of these fissile materials and quantifying the amount, and are thus relevant to counterterrorism and nuclear material control and accountability. This paper presents the results for a variety of measurements.
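The multiplet test can be sketched as follows: count pulses in fixed time windows and compare the count distribution's variance-to-mean ratio to the Poisson value of 1; correlated fission multiplets push the ratio above 1, while a purely random source stays at 1. A minimal sketch with a non-fissile (Poisson) source (rates and window sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

def multiplet_counts(pulse_times, window, t_max):
    """Count how many pulses fall in each fixed time window."""
    edges = np.arange(0.0, t_max + window, window)
    counts, _ = np.histogram(pulse_times, bins=edges)
    return counts

# Purely random (non-fission) source: Poisson arrivals, variance == mean.
t_max, rate, window = 1000.0, 5.0, 1.0
n = rng.poisson(rate * t_max)
times = rng.uniform(0.0, t_max, n)
counts = multiplet_counts(times, window, t_max)

# Feynman-Y style check: variance/mean ~ 1 for Poisson arrivals; fission
# chains would produce excess correlated multiplets and a ratio above 1.
ratio = counts.var() / counts.mean()
assert abs(ratio - 1.0) < 0.2
```

A significant excess of high multiplets over this Poisson baseline is the signature the processor uses to flag multiplying fissile material.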
Reuter, Katja; Ukpolo, Francis; Ward, Edward; Wilson, Melissa L; Angyan, Praveen
2016-06-29
Scarce information about clinical research, in particular clinical trials, is among the top reasons why potential participants do not take part in clinical studies. Without volunteers, on the other hand, clinical research and the development of novel approaches to preventing, diagnosing, and treating disease are impossible. Promising digital options such as social media have the potential to work alongside traditional methods to boost the promotion of clinical research. However, investigators and research institutions are challenged to leverage these innovations while saving time and resources. To develop and test the efficiency of a Web-based tool that automates the generation and distribution of user-friendly social media messages about clinical trials. Trial Promoter is developed in Ruby on Rails, HTML, cascading style sheet (CSS), and JavaScript. In order to test the tool and the correctness of the generated messages, clinical trials (n=46) were randomized into social media messages and distributed via the microblogging social media platform Twitter and the social network Facebook. The percent correct was calculated to determine the probability with which Trial Promoter generates accurate messages. During a 10-week testing phase, Trial Promoter automatically generated and published 525 user-friendly social media messages on Twitter and Facebook. On average, Trial Promoter correctly used the message templates and substituted the message parameters (text, URLs, and disease hashtags) 97.7% of the time (1563/1600). Trial Promoter may serve as a promising tool to render clinical trial promotion more efficient while requiring limited resources. It supports the distribution of any research or other types of content. The Trial Promoter code and installation instructions are freely available online.
Ukpolo, Francis; Ward, Edward; Wilson, Melissa L
2016-01-01
Background Scarce information about clinical research, in particular clinical trials, is among the top reasons why potential participants do not take part in clinical studies. Without volunteers, on the other hand, clinical research and the development of novel approaches to preventing, diagnosing, and treating disease are impossible. Promising digital options such as social media have the potential to work alongside traditional methods to boost the promotion of clinical research. However, investigators and research institutions are challenged to leverage these innovations while saving time and resources. Objective To develop and test the efficiency of a Web-based tool that automates the generation and distribution of user-friendly social media messages about clinical trials. Methods Trial Promoter is developed in Ruby on Rails, HTML, cascading style sheet (CSS), and JavaScript. In order to test the tool and the correctness of the generated messages, clinical trials (n=46) were randomized into social media messages and distributed via the microblogging social media platform Twitter and the social network Facebook. The percent correct was calculated to determine the probability with which Trial Promoter generates accurate messages. Results During a 10-week testing phase, Trial Promoter automatically generated and published 525 user-friendly social media messages on Twitter and Facebook. On average, Trial Promoter correctly used the message templates and substituted the message parameters (text, URLs, and disease hashtags) 97.7% of the time (1563/1600). Conclusions Trial Promoter may serve as a promising tool to render clinical trial promotion more efficient while requiring limited resources. It supports the distribution of any research or other types of content. The Trial Promoter code and installation instructions are freely available online. PMID:27357424
The Impact of Utility Tariff Evolution on Behind-the-Meter PV Adoption
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cole, Wesley J; Gagnon, Pieter J; Frew, Bethany A
This analysis uses a new method to link the NREL Regional Energy Deployment System (ReEDS) capacity expansion model with the NREL distributed generation market demand model (dGen) to explore the impact that the evolution of retail electricity tariffs can have on the adoption of distributed photovoltaics (DPV). The evolution most notably takes the form of decreased mid-day electricity costs, as low-cost PV reduces the marginal cost of electricity during those hours and the changes are subsequently communicated to electricity consumers through tariffs. We find that even under the low PV prices of the new SunShot targets, the financial performance of DPV under evolved tariffs still motivates behind-the-meter adoption, despite significant reduction in the costs of electricity during afternoon periods driven by deployment of cheap utility-scale PV. The amount of DPV in 2050 in these low-cost futures ranged from 206 GW to 263 GW, a 13-fold and 16-fold increase over 2016 adoption levels, respectively. From a utility planner's perspective, the representation of tariff evolution has noteworthy impacts on forecasted DPV adoption in scenarios with widespread time-of-use tariffs. Scenarios that projected adoption under a portfolio of time-of-use tariffs, but did not represent the evolution of those tariffs, predicted up to 36 percent more DPV in 2050, compared to scenarios that did represent that evolution. Lastly, we find that a reduction in DPV deployment resulting from evolved tariffs had a negligible impact on the total generation from PV, both utility-scale and distributed, in the scenarios we examined. Any reduction in DPV generation was replaced with utility-scale PV generation, to arrive at the quantity that makes up the least-cost portfolio.
Entanglement between a Photonic Time-Bin Qubit and a Collective Atomic Spin Excitation.
Farrera, Pau; Heinze, Georg; de Riedmatten, Hugues
2018-03-09
Entanglement between light and matter combines the advantage of long distance transmission of photonic qubits with the storage and processing capabilities of atomic qubits. To distribute photonic states efficiently over long distances, several schemes to encode qubits have been investigated, with time-bin encoding being particularly promising due to its robustness against decoherence in optical fibers. Here, we demonstrate the generation of entanglement between a photonic time-bin qubit and a single collective atomic spin excitation (spin wave) in a cold atomic ensemble, followed by the mapping of the atomic qubit onto another photonic qubit. A magnetic field that induces a periodic dephasing and rephasing of the atomic excitation ensures the temporal distinguishability of the two time bins and plays a central role in the entanglement generation. To analyze the generated quantum state, we use largely imbalanced Mach-Zehnder interferometers to perform projective measurements in different qubit bases and verify the entanglement by violating a Clauser-Horne-Shimony-Holt Bell inequality.
Entanglement between a Photonic Time-Bin Qubit and a Collective Atomic Spin Excitation
NASA Astrophysics Data System (ADS)
Farrera, Pau; Heinze, Georg; de Riedmatten, Hugues
2018-03-01
Programmable quantum random number generator without postprocessing.
Nguyen, Lac; Rehain, Patrick; Sua, Yong Meng; Huang, Yu-Ping
2018-02-15
We demonstrate a viable source of unbiased quantum random numbers whose statistical properties can be arbitrarily programmed without the need for any postprocessing such as randomness distillation or distribution transformation. It is based on measuring the arrival time of single photons in shaped temporal modes that are tailored with an electro-optical modulator. We show that quantum random numbers can be created directly in customized probability distributions and pass all randomness tests of the NIST and Dieharder test suites without any randomness extraction. The min-entropies of such generated random numbers are measured close to the theoretical limits, indicating their near-ideal statistics and ultrahigh purity. Easy to implement and arbitrarily programmable, this technique can find versatile uses in a multitude of data analysis areas.
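A classical software analogue of programming the output distribution (an assumed illustration, not the authors' optical technique) is inverse-transform sampling, which maps uniform raw randomness directly onto a target distribution without postprocessing of the samples themselves:

```python
import math
import random

# Classical analogue of a programmable-distribution generator: inverse
# transform sampling maps uniform [0, 1) variates onto a target law,
# here an exponential with rate lam. Parameter values are illustrative.
def exponential_sample(u, lam=2.0):
    """Map a uniform variate u onto Exp(lam) via the inverse CDF."""
    return -math.log(1.0 - u) / lam

rng = random.Random(0)
samples = [exponential_sample(rng.random()) for _ in range(100_000)]
mean = sum(samples) / len(samples)
print(round(mean, 2))  # close to 1 / lam = 0.5
```

The photonic scheme achieves the same effect physically, by shaping the temporal mode in which photon arrival times are measured, so no software transformation or randomness extraction is needed afterward.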
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hosokai, Tomonao; Zhidkov, Alexei; Yamazaki, Atsushi
2010-03-22
Hundred-mega-electron-volt electron beams with quasi-monoenergetic distribution and a transverse geometrical emittance as small as ≈0.02 π mm mrad are generated by low-power (7 TW, 45 fs) laser pulses tightly focused in helium gas jets in an external static magnetic field, B ≈ 1 T. Generation of monoenergetic beams strongly correlates with the appearance of a straight plasma channel, at least 2 mm in length, in a short time before the main laser pulse, and with the energy of copropagating picosecond pedestal pulses (PPP). For a moderate-energy PPP, the multiple or staged electron self-injection in the channel gives several narrow peaks in the electron energy distribution.
Informing Mexico's Distributed Generation Policy with System Advisor Model (SAM) Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aznar, Alexandra Y; Zinaman, Owen R; McCall, James D
The Government of Mexico recognizes the potential for clean distributed generation (DG) to meaningfully contribute to Mexico's clean energy and emissions reduction goals. However, important questions remain about how to fairly value DG and foster inclusive and equitable market growth that is beneficial to investors, electricity ratepayers, electricity distributors, and society. The U.S. National Renewable Energy Laboratory (NREL) has partnered with power sector institutions and stakeholders in Mexico to provide timely analytical support and expertise to help inform policymaking processes on clean DG. This document describes two technical assistance interventions that used the System Advisor Model (SAM) to inform Mexico's DG policymaking processes with a focus on rooftop solar regulation and policy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hawas, M.F.; Takezaki, H.
1995-08-01
The distribution of hydrocarbons in the Lower Cretaceous Thamama Group and Upper Jurassic Arab Formation in Abu Dhabi is influenced by the development of the intervening Hith anhydrites. Geochemical analysis of Thamama and Arab hydrocarbons indicates that they were generated from a common source rock: the Upper Jurassic Diyab Formation. Studies carried out on the Miocene sabkha anhydrites in the coastal flat west of Abu Dhabi supported a model for vertical migration through the Hith anhydrites under certain conditions. The established model implies that the Diyab oil and gas migrated essentially vertically and individually, meaning that the oil migrated prior to the gas, and that their distribution is controlled by the differential sealing potential of the anhydrites at each migration phase: a Hith anhydrite bed more than 30 feet (ft) thick was a perfect seal for hydrocarbon migration into the Arab reservoirs. In this case, oils could not break through to the overlying Thamama Group. But where the anhydrite bed thickness dropped below 30 ft, oil could migrate through to the overlying Thamama reservoirs during the oil generation phase in Turonian time. At a later stage, with additional depth of burial and progressive diagenesis, anhydrite beds as thin as 8 ft became effective seals. These controlled the distribution of the gas during the gas generation phase in Eocene time.
Tempo: A Toolkit for the Timed Input/Output Automata Formalism
2008-01-30
... generation of distributed code from specifications. [Categories: F.4.3 Formal Languages; D.3 Programming Languages.] Many distributed systems involve a combination of timed and untimed behavior. A transition such as chek(i) is enabled when process i's program counter is set to the corresponding value, and modelers may require the simulator to check the assertions after every single step; specifications declare state variables and transitions (e.g., output foo(n: Int); x: Int := 10). The Tempo simulator addresses this issue by putting the modeler in charge of resolving the nondeterminism.
Secure Distributed Time for Secure Distributed Protocols
1994-09-01
... minimal generating set of X = ∪_{A∈Y} V(A). Implications: suppose (M, M′) is an acyclic and independent parallel pair, and consider a timeslice containing it. An attacker could compromise the system if the attacker is willing to pay tremendous amounts of money (for a detailed analysis of the cost, see [Wein91]). What do we do? For example, suppose auditor Alice is asking for a snapshot to verify that the electronic currency in circulation sums correctly. If counterfeiter Bad ...
Intertime jump statistics of state-dependent Poisson processes.
Daly, Edoardo; Porporato, Amilcare
2007-01-01
A method to obtain the probability distribution of the interarrival times of jump occurrences in systems driven by state-dependent Poisson noise is proposed. Such a method uses the survivor function obtained by a modified version of the master equation associated to the stochastic process under analysis. A model for the timing of human activities shows the capability of state-dependent Poisson noise to generate power-law distributions. The application of the method to a model for neuron dynamics and to a hydrological model accounting for land-atmosphere interaction elucidates the origin of characteristic recurrence intervals and possible persistence in state-dependent Poisson models.
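The survivor-function route to interarrival times mentioned above can be illustrated with a toy state-dependent rate. The linear rate law and its Rayleigh interarrivals below are an assumed example for illustration, not the paper's neuron or hydrological model:

```python
import math
import random

# Assumed toy example: a state-dependent Poisson process whose jump rate
# grows linearly with the time tau since the last jump, lam(tau) = a*tau.
# The survivor function of the interarrival time is then
# S(tau) = exp(-a * tau**2 / 2), i.e. a Rayleigh law, which can be
# sampled directly by inverting S.
def interarrival(u, a=1.0):
    """Invert S(tau) = exp(-a tau^2 / 2) at survival probability u."""
    return math.sqrt(-2.0 * math.log(u) / a)

rng = random.Random(42)
taus = [interarrival(1.0 - rng.random()) for _ in range(100_000)]
mean_tau = sum(taus) / len(taus)
print(round(mean_tau, 2))  # Rayleigh mean is sqrt(pi / (2 a)) ~ 1.25
```

Other choices of the state-dependent rate give other interarrival laws; rates that decay with the state can stretch the survivor function into the heavy-tailed, power-law-like behavior the abstract describes.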
NASA Astrophysics Data System (ADS)
Hayata, Tomoya; Hidaka, Yoshimasa; Noumi, Toshifumi; Hongo, Masaru
2015-09-01
We derive relativistic hydrodynamics from quantum field theories by assuming that the density operator is given by a local Gibbs distribution at initial time. We decompose the energy-momentum tensor and particle current into nondissipative and dissipative parts, and analyze their time evolution in detail. Performing the path-integral formulation of the local Gibbs distribution, we microscopically derive the generating functional for the nondissipative hydrodynamics. We also construct a basis to study dissipative corrections. In particular, we derive the first-order dissipative hydrodynamic equations without a choice of frame such as the Landau-Lifshitz or Eckart frame.
Characterization of autoregressive processes using entropic quantifiers
NASA Astrophysics Data System (ADS)
Traversaro, Francisco; Redelico, Francisco O.
2018-01-01
The aim of this contribution is to introduce a novel information plane, the causal-amplitude informational plane. As previous works seem to indicate, the Bandt and Pompe methodology for estimating entropy does not allow one to distinguish between probability distributions, which could be fundamental for simulation or probability analysis purposes. Once a time series is identified as stochastic by the causal complexity-entropy informational plane, the novel causal-amplitude plane gives a deeper understanding of the time series, quantifying both the autocorrelation strength and the probability distribution of the data extracted from the generating processes. Two examples are presented, one from a climate change model and the other from financial markets.
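The Bandt and Pompe methodology referenced above reduces a series to ordinal patterns and takes the entropy of their frequencies. A minimal sketch follows; the window length m = 3 and the toy series are assumed choices, not the paper's settings:

```python
import math
import random
from collections import Counter

# Minimal Bandt-Pompe permutation entropy: scan the series with
# overlapping windows of length m, reduce each window to its ordinal
# pattern (the argsort permutation), and return the normalized Shannon
# entropy of the pattern frequencies.
def permutation_entropy(series, m=3):
    """Normalized ordinal-pattern entropy: 0 = fully ordered, 1 = random."""
    patterns = Counter(
        tuple(sorted(range(m), key=lambda j: series[i + j]))
        for i in range(len(series) - m + 1)
    )
    n = sum(patterns.values())
    h = sum((c / n) * math.log(n / c) for c in patterns.values())
    return h / math.log(math.factorial(m))

rng = random.Random(0)
# A monotone series has a single ordinal pattern, hence entropy 0.
print(permutation_entropy(list(range(100))))
# An i.i.d.-like noisy series approaches entropy 1.
print(round(permutation_entropy([rng.random() for _ in range(10_000)]), 2))
```

Because the ordinal patterns discard amplitudes, series with very different marginal distributions can share the same permutation entropy, which is exactly the limitation the causal-amplitude plane is meant to address.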
Technology Solutions | Distributed Generation Interconnection Collaborative
Technologies, both hardware and software, can support the wider adoption of distributed generation on the grid. As the penetration of distributed-generation photovoltaics (DGPV) has risen rapidly in recent years, utilities face the technical challenges posed by high penetrations of distributed PV. Other promising technologies include new utility software.
Pore Pressure and Stress Distributions Around a Hydraulic Fracture in Heterogeneous Rock
NASA Astrophysics Data System (ADS)
Gao, Qian; Ghassemi, Ahmad
2017-12-01
One of the most significant characteristics of unconventional petroleum-bearing formations is their heterogeneity, which affects the stress distribution, hydraulic fracture propagation, and fluid flow. This study focuses on the stress and pore pressure redistributions during hydraulic stimulation in a heterogeneous poroelastic rock. Lognormal random distributions of Young's modulus and permeability are generated to simulate the heterogeneous distributions of material properties. A 3D fully coupled poroelastic model based on the finite element method is presented utilizing a displacement-pressure formulation. In order to verify the model, numerical results are compared with analytical solutions, showing excellent agreement. The effects of heterogeneities on stress and pore pressure distributions around a penny-shaped fracture in poroelastic rock are then analyzed. Results indicate that the stress and pore pressure distributions are more complex in a heterogeneous reservoir than in a homogeneous one. The spatial extent of stress reorientation during hydraulic stimulations is a function of time and is continuously changing due to the diffusion of pore pressure in the heterogeneous system. In contrast to the stress distributions in homogeneous media, irregular distributions of stresses and pore pressure are observed. Due to the change of material properties, shear stresses and nonuniform deformations are generated. The induced shear stresses in heterogeneous rock cause the initial horizontal principal stresses to rotate out of horizontal planes.
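A lognormal random property field of the kind described above can be drawn per element as follows. The median modulus (30 GPa), coefficient of variation (0.3), and element count are assumed illustration values, not parameters from the paper:

```python
import math
import random

# Assumed illustration of lognormal material heterogeneity: per-element
# Young's moduli drawn from a lognormal law parametrized by its median
# and coefficient of variation (CoV). A permeability field would be
# drawn the same way with different parameters.
def lognormal_field(n_elements, median=30.0e9, cov=0.3, seed=7):
    """Draw n_elements lognormal values with given median (Pa) and CoV."""
    sigma = math.sqrt(math.log(1.0 + cov ** 2))  # std of underlying normal
    mu = math.log(median)                        # median = exp(mu)
    rng = random.Random(seed)
    return [rng.lognormvariate(mu, sigma) for _ in range(n_elements)]

field = lognormal_field(50_000)
mean_E = sum(field) / len(field)
# Lognormal mean = median * exp(sigma^2 / 2) = median * sqrt(1 + cov^2)
print(round(mean_E / 1e9, 1))  # GPa, close to 30 * sqrt(1.09) ~ 31.3
```

In a finite element setting, each drawn value would be assigned to one element, giving the spatially variable stiffness and permeability that produce the irregular stress and pore pressure fields described above.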
Fowler, Mike S; Ruokolainen, Lasse
2013-01-01
The colour of environmental variability influences the size of population fluctuations when filtered through density dependent dynamics, driving extinction risk through dynamical resonance. Slow fluctuations (low frequencies) dominate in red environments, rapid fluctuations (high frequencies) in blue environments and white environments are purely random (no frequencies dominate). Two methods are commonly employed to generate the coloured spatial and/or temporal stochastic (environmental) series used in combination with population (dynamical feedback) models: autoregressive [AR(1)] and sinusoidal (1/f) models. We show that changing environmental colour from white to red with 1/f models, and from white to red or blue with AR(1) models, generates coloured environmental series that are not normally distributed at finite time-scales, potentially confounding comparison with normally distributed white noise models. Increasing variability of sample Skewness and Kurtosis and decreasing mean Kurtosis of these series alter the frequency distribution shape of the realised values of the coloured stochastic processes. These changes in distribution shape alter patterns in the probability of single and series of extreme conditions. We show that the reduced extinction risk for undercompensating (slow growing) populations in red environments previously predicted with traditional 1/f methods is an artefact of changes in the distribution shapes of the environmental series. This is demonstrated by comparison with coloured series controlled to be normally distributed using spectral mimicry. Changes in the distribution shape that arise using traditional methods lead to underestimation of extinction risk in normally distributed, red 1/f environments. AR(1) methods also underestimate extinction risks in traditionally generated red environments. This work synthesises previous results and provides further insight into the processes driving extinction risk in model populations. 
We must let the characteristics of known natural environmental covariates (e.g., colour and distribution shape) guide us in our choice of how to best model the impact of coloured environmental variation on population dynamics.
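The AR(1) recipe for coloured environmental series discussed above can be sketched as follows; kappa (the lag-1 autocorrelation) and the variance-preserving scaling of the innovations are standard choices, and kappa = 0.7 is an assumed value:

```python
import math
import random

# AR(1) coloured-noise generator: x[t] = kappa * x[t-1] + e[t].
# kappa > 0 gives red (positively autocorrelated) series, kappa < 0 blue,
# kappa = 0 white. Scaling the Gaussian innovations by sqrt(1 - kappa^2)
# keeps the stationary variance at 1 across colours.
def ar1_series(n, kappa, seed=0):
    rng = random.Random(seed)
    scale = math.sqrt(1.0 - kappa ** 2)
    x, out = 0.0, []
    for _ in range(n):
        x = kappa * x + scale * rng.gauss(0.0, 1.0)
        out.append(x)
    return out

red = ar1_series(200_000, kappa=0.7)
var = sum(v * v for v in red) / len(red)
acf1 = sum(a * b for a, b in zip(red, red[1:])) / ((len(red) - 1) * var)
print(round(var, 2), round(acf1, 2))  # variance ~ 1, lag-1 autocorr ~ 0.7
```

Note that a Gaussian-innovation AR(1) series is itself normally distributed at stationarity; the distribution-shape artefacts discussed above arise from how coloured series are generated and sampled at finite lengths, which is why spectral mimicry is used as a normally distributed control.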
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, K. S.; Nakae, L. F.; Prasad, M. K.
Here, we solve a simple theoretical model of time-evolving fission chains due to Feynman that generalizes and asymptotically approaches the point model theory. The point model theory has been used to analyze thermal neutron counting data. This extension of the theory underlies fast counting data for both neutrons and gamma rays from metal systems. Fast neutron and gamma-ray counting is now possible using liquid scintillator arrays with nanosecond time resolution. For individual fission chains, the differential equations describing three correlated probability distributions are solved: the time-dependent internal neutron population, accumulation of fissions in time, and accumulation of leaked neutrons in time. Explicit analytic formulas are given for correlated moments of the time-evolving chain populations. The equations for random-time-gate fast neutron and gamma-ray counting distributions, due to randomly initiated chains, are presented. Correlated moment equations are given for both random-time-gate and triggered-time-gate counting. Explicit formulas are given for all correlated moments up to triple order, for all combinations of correlated fast neutrons and gamma rays. The nonlinear differential equations for probabilities of time-dependent fission chain populations have a remarkably simple Monte Carlo realization. A Monte Carlo code was developed for this theory and is shown to statistically realize the solutions to the fission chain theory probability distributions. Combined with random initiation of chains and detection of external quanta, the Monte Carlo code generates time-tagged data for neutron and gamma-ray counting, and from these data the counting distributions.
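The "remarkably simple Monte Carlo realization" of fission chains can be illustrated with a toy Galton-Watson branching sketch. The fission probability p, the fixed multiplicity nu, and all parameter values below are assumptions for illustration, far simpler than the time-dependent model above:

```python
import random

# Toy Galton-Watson realization of a fission chain (assumed parameters):
# each neutron in the chain either induces a fission with probability p,
# producing nu new neutrons, or leaks out of the system.
def chain(p=0.3, nu=2, rng=random):
    """Follow one chain to extinction; return (fissions, leaked)."""
    neutrons, fissions, leaked = 1, 0, 0
    while neutrons:
        neutrons -= 1
        if rng.random() < p:
            fissions += 1
            neutrons += nu
        else:
            leaked += 1
    return fissions, leaked

rng = random.Random(3)
results = [chain(rng=rng) for _ in range(50_000)]
mean_leaked = sum(l for _, l in results) / len(results)
# Subcritical case (p * nu < 1): the mean chain population is
# 1 / (1 - p * nu) = 2.5, of which a fraction (1 - p) leaks -> 1.75.
print(round(mean_leaked, 2))
```

The full theory additionally tracks when each event occurs, which is what allows time-tagged counting data and random- or triggered-time-gate distributions to be generated.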
Nanopore Kinetic Proofreading of DNA Sequences
NASA Astrophysics Data System (ADS)
Ling, Xinsheng Sean
The concept of DNA sequencing using the time dependence of the nanopore ionic current was proposed in 1996 by Kasianowicz, Brandin, Branton, and Deamer (KBBD). The KBBD concept has generated a tremendous amount of interest in the recent decade. In this talk, I will review the current understanding of DNA "translocation" dynamics and how it can be described by Schrodinger's 1915 paper on the first-passage-time distribution function. Schrodinger's distribution function can be used to give a rigorous criterion for achieving nanopore DNA sequencing, which turns out to be identical to that of the gel electrophoresis used by Sanger in the first-generation Sanger method. A nanopore DNA sequencing technology also requires discrimination of bases with high accuracies. I will describe a solid-state nanopore sandwich structure that can function as a proofreading device capable of discriminating between correct and incorrect hybridization probes with an accuracy rivaling that of high-fidelity DNA polymerases. The latest results from Nanjing will be presented. This work is supported by the China 1000-Talent Program at Southeast University, Nanjing, China.
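The first-passage-time density from Schrodinger's 1915 treatment of drift-diffusion to an absorbing boundary takes the inverse Gaussian form. The symbols here (barrier distance L, drift velocity v, diffusion coefficient D) are generic notation assumed for this sketch, not taken from the talk:

```latex
f(t) \;=\; \frac{L}{\sqrt{4\pi D t^{3}}}\,
      \exp\!\left[-\frac{(L - v t)^{2}}{4 D t}\right], \qquad t > 0,
```

with mean passage time $L/v$. The distribution narrows as drift dominates diffusion ($Lv/D \gg 1$), which is the regime in which translocation times resolve length differences, mirroring the separation criterion of gel electrophoresis.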
Joint Real-Time Energy and Demand-Response Management using a Hybrid Coalitional-Noncooperative Game
DOE Office of Scientific and Technical Information (OSTI.GOV)
He, Fulin; Gu, Yi; Hao, Jun
In order to model the interactions among utility companies, building demands, and renewable energy generators (REGs), a hybrid coalitional-noncooperative game framework has been proposed. We formulate a dynamic non-cooperative game to study the energy dispatch within multiple utility companies, while we take a coalitional perspective on REGs and building demands through a hedonic coalition formation game approach. In this case, building demands request different power supply from REGs; the building demands can then be organized into an ultimate coalition structure through a distributed hedonic shift algorithm. At the same time, utility companies can also obtain a stable power generation profile. In addition, the interactive process among the utility companies and the building demands that cannot be supplied by REGs is implemented by distributed game-theoretic algorithms. Numerical results illustrate that the proposed hybrid coalitional-noncooperative game scheme reduces the cost of both building demands and utility companies compared with the initial scenario.
Lopa, Silvia; Piraino, Francesco; Kemp, Raymond J; Di Caro, Clelia; Lovati, Arianna B; Di Giancamillo, Alessia; Moroni, Lorenzo; Peretti, Giuseppe M; Rasponi, Marco; Moretti, Matteo
2015-07-01
Three-dimensional (3D) culture models are widely used in basic and translational research. In this study, to generate and culture multiple 3D cell spheroids, we exploited laser ablation and replica molding for the fabrication of polydimethylsiloxane (PDMS) multi-well chips, which were validated using articular chondrocytes (ACs). Multi-well AC spheroids were comparable or superior to standard spheroids, as revealed by glycosaminoglycan and type-II collagen deposition. Moreover, the use of our multi-well chips significantly reduced the operation time for cell seeding and medium refresh. Exploiting a similar approach, we used clinical-grade fibrin to generate implantable multi-well constructs allowing for the precise distribution of multiple cell types. Multi-well fibrin constructs were seeded with ACs generating high cell density regions, as shown by histology and cell fluorescent staining. Multi-well constructs were compared to standard constructs with homogeneously distributed ACs. After 7 days in vitro, expression of SOX9, ACAN, COL2A1, and COMP was increased in both constructs, with multi-well constructs expressing significantly higher levels of chondrogenic genes than standard constructs. After 5 weeks in vivo, we found that despite a dramatic size reduction, the cell distribution pattern was maintained and glycosaminoglycan content per wet weight was significantly increased with respect to pre-implantation samples. In conclusion, multi-well chips for the generation and culture of multiple cell spheroids can be fabricated by low-cost rapid prototyping techniques. Furthermore, these techniques can be used to generate implantable constructs with defined architecture and controlled cell distribution, allowing for in vitro and in vivo investigation of cell interactions in a 3D environment. © 2015 Wiley Periodicals, Inc.
Waiting time distribution in public health care: empirics and theory.
Dimakou, Sofia; Dimakou, Ourania; Basso, Henrique S
2015-12-01
Excessive waiting times for elective surgery have been a long-standing concern in many national healthcare systems in the OECD. How do the hospital admission patterns that generate waiting lists affect different patients? What are the hospital characteristics that determine waiting times? By developing a model of healthcare provision and analysing empirically the entire waiting time distribution, we attempt to shed some light on these issues. We first build a theoretical model that describes the optimal waiting time distribution for capacity-constrained hospitals. Secondly, employing duration analysis, we obtain empirical representations of that distribution across hospitals in the UK from 1997 to 2005. We observe important differences in the 'scale' and the 'shape' of admission rates. Scale refers to how quickly patients are treated, and shape represents trade-offs across duration-treatment profiles. By fitting the theoretical to the empirical distributions, we estimate the main structural parameters of the model and are able to closely identify the main drivers of these empirical differences. We find that the level of resources allocated to elective surgery (budget and physical capacity), which determines how constrained the hospital is, explains differences in scale. Changes in the benefit and cost structures of healthcare provision, which relate, respectively, to the desire to prioritise patients by duration and the reduction in costs due to delayed treatment, determine the shape, affecting short- and long-duration patients differently. JEL Classification: I11; I18; H51.
Modeling, Simulation, and Analysis of a Decoy State Enabled Quantum Key Distribution System
2015-03-26
... through the fiber, we assume Alice and Bob have correct basis alignment and timing control for reference frame correction and precise photon detection. The modeled optical components include a laser, polarization modulator, electronic variable optical attenuator, fixed optical attenuator, fiber channel, and beamsplitter. Pulses generated by the laser in the CPG propagate through multiple optical components, each with a unique propagation delay, before reaching the OPM.
Nanoscale Magnetism in Next Generation Magnetic Nanoparticles
2018-03-17
... as dextran-coated SPIONs were studied. From the measured T1 and T2 relaxation times, a new method called Quantitative Ultra-Short Time-to-Echo ... produced angiograms with high clarity and definition, and enabled quantitative MRI in biological samples. At UCL, the work included (i) fabricating multi-element ... Compared to flat biosensor devices, 3D engineered biosensors achieve more intimate and conformal interfaces with cells.
Femtosecond timing distribution and control for next generation accelerators and light sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Li -Jin
Femtosecond Timing Distribution At LCLS Free-electron lasers (FELs) have the capability of producing high photon flux from the IR to the hard x-ray wavelength range and of emitting femtosecond and eventually even attosecond pulses. This makes them an ideal tool for fundamental as well as applied research. Timing precision at the Stanford Linear Coherent Light Source (LCLS) between the x-ray FEL (XFEL) and ultrafast optical lasers is currently no better than 100 fs RMS. Ideally this precision should be much better and could be limited only by the x-ray pulse duration, which can be as short as a few femtoseconds. An increasing variety of science problems involving electron and nuclear dynamics in chemical and material systems will become accessible as the timing improves to a few femtoseconds. Advanced methods of electron beam conditioning or pulse injection could allow the FEL to achieve pulse durations less than one femtosecond. The objective of the work described in this proposal is to set up an optical timing distribution system based on mode-locked erbium-doped fiber lasers at the LCLS facility to improve the timing precision in the facility and allow time stamping with a 10 fs precision. The primary commercial applications for optical timing distribution systems are seen in the worldwide accelerator facilities and next-generation light sources community. It is reasonable to expect that at least three major XFELs will be built in the next decade. In addition there will be up to 10 smaller machines, such as FERMI in Italy and Maxlab in Sweden, plus the market for upgrading already existing facilities like Jefferson Lab. The total market is estimated to be on the order of 100 million US dollars. The company owns the exclusive rights to the IP covering the technology enabling sub-10 fs synchronization systems.
Testing this technology, which has set records in a lab environment, at LCLS, hence in a real-world scenario, is an important cornerstone of bringing the technology to market.
Yamaguchi, Takashi; Hinata, Takashi
2007-09-03
The time-average energy density of the optical near-field generated around a metallic sphere is computed using the finite-difference time-domain method. To check the accuracy, the numerical results are compared with the rigorous solutions from Mie theory. The Lorentz-Drude model, which is coupled with Maxwell's equations via the equations of motion of an electron, is applied to simulate the dispersion relation of metallic materials. The distributions of the optical near-field generated around a metallic hemisphere and a metallic spheroid are also computed, and strong optical near-fields are obtained at their rims.
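The Lorentz-Drude permittivity that such FDTD treatments typically implement can be written in a standard form; the background permittivity $\varepsilon_{\infty}$, plasma frequency $\omega_{p}$, oscillator strengths $f_{j}$, resonance frequencies $\omega_{j}$, and damping rates $\Gamma_{j}$ are generic parameters (written here for the $e^{-i\omega t}$ time convention), not values from the paper:

```latex
\varepsilon(\omega) \;=\; \varepsilon_{\infty}
  \;-\; \frac{f_{0}\,\omega_{p}^{2}}{\omega^{2} + i\,\Gamma_{0}\,\omega}
  \;+\; \sum_{j=1}^{M} \frac{f_{j}\,\omega_{p}^{2}}
        {\omega_{j}^{2} - \omega^{2} - i\,\Gamma_{j}\,\omega}
```

The first (Drude) term captures the free-electron response of the metal, while the Lorentz oscillator sum models bound-electron interband transitions; in the time domain this dispersion enters the FDTD update through auxiliary equations of motion for the polarization currents, as the abstract describes.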
Measurement of positron annihilation lifetimes for positron burst by multi-detector array
NASA Astrophysics Data System (ADS)
Wang, B. Y.; Kuang, P.; Liu, F. Y.; Han, Z. J.; Cao, X. Z.; Zhang, P.
2018-03-01
It is currently impossible to exploit the timing information in a gamma-ray pulse generated within nanoseconds when a high-intensity positron burst annihilation event occurs in a target using conventional single-detector methods. A state-of-the-art solution to the problem is proposed in this paper. In this approach, a multi-detector array composed of many independent detection cells mounted spherically around the target is designed to detect the time distribution of the annihilated gamma rays generated following, in particular, a positron burst emitting huge amounts of positrons in a short pulse duration, even less than a few nano- or picoseconds.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, Yung-Sung; Kenoyer, Judson L.; Guilmette, Raymond A.
2009-03-01
The Capstone Depleted Uranium (DU) Aerosol Study, which generated and characterized aerosols containing depleted uranium from perforation of armored vehicles with large-caliber DU penetrators, incorporated a sampling protocol to evaluate particle size distributions. Aerosol particle size distribution is an important parameter that influences aerosol transport and deposition processes as well as the dosimetry of the inhaled particles. These aerosols were collected on cascade impactor substrates using a pre-established time sequence following the firing event to analyze the uranium concentration and particle size of the aerosols as a function of time. The impactor substrates were analyzed using beta spectrometry, and the derived uranium content of each served as input to the evaluation of particle size distributions. Activity median aerodynamic diameters (AMADs) of the particle size distributions were evaluated using unimodal and bimodal models. The particle size data from the impactor measurements were quite variable. Most size distributions measured in the test based on activity were bimodal, with a small-particle mode in the range of 0.2 to 1.2 µm and a large-particle mode between 2 and 15 µm. In general, the evolution of particle size over time showed an overall decrease of average particle size from AMADs of 5 to 10 µm shortly after perforation to around 1 µm at the end of the 2-hr sampling period. The AMADs generally decreased over time because of settling. Additionally, the median diameter of the larger size mode decreased with time. These results were used to estimate the dosimetry of inhaled DU particles.
NASA Astrophysics Data System (ADS)
Prada, Jose Fernando
Keeping a contingency reserve in power systems is necessary to preserve the security of real-time operations. This work studies two different approaches to the optimal allocation of energy and reserves in the day-ahead generation scheduling process. Part I presents a stochastic security-constrained unit commitment model to co-optimize energy and the locational reserves required to respond to a set of uncertain generation contingencies, using a novel state-based formulation. The model is applied in an offer-based electricity market to allocate contingency reserves throughout the power grid, in order to comply with the N-1 security criterion under transmission congestion. The objective is to minimize expected dispatch and reserve costs, together with post contingency corrective redispatch costs, modeling the probability of generation failure and associated post contingency states. The characteristics of the scheduling problem are exploited to formulate a computationally efficient method, consistent with established operational practices. We simulated the distribution of locational contingency reserves on the IEEE RTS96 system and compared the results with the conventional deterministic method. We found that assigning locational spinning reserves can guarantee an N-1 secure dispatch accounting for transmission congestion at a reasonable extra cost. The simulations also showed little value of allocating downward reserves but sizable operating savings from co-optimizing locational nonspinning reserves. Overall, the results indicate the computational tractability of the proposed method. Part II presents a distributed generation scheduling model to optimally allocate energy and spinning reserves among competing generators in a day-ahead market. The model is based on the coordination between individual generators and a market entity. 
The proposed method uses forecasting, augmented pricing and locational signals to induce efficient commitment of generators based on firm posted prices. It is price-based but does not rely on multiple iterations, minimizes information exchange and simplifies the market clearing process. Simulations of the distributed method performed on a six-bus test system showed that, using an appropriate set of prices, it is possible to emulate the results of a conventional centralized solution, without need of providing make-whole payments to generators. Likewise, they showed that the distributed method can accommodate transactions with different products and complex security constraints.
Microscale air quality impacts of distributed power generation facilities.
Olaguer, Eduardo P; Knipping, Eladio; Shaw, Stephanie; Ravindran, Satish
2016-08-01
The electric system is experiencing rapid growth in the adoption of a mix of distributed renewable and fossil fuel sources, along with increasing amounts of off-grid generation. New operational regimes may have unforeseen consequences for air quality. A three-dimensional microscale chemical transport model (CTM) driven by an urban wind model was used to assess gaseous air pollutant and particulate matter (PM) impacts within ~10 km of fossil-fueled distributed power generation (DG) facilities during the early afternoon of a typical summer day in Houston, TX. Three types of DG scenarios were considered in the presence of motor vehicle emissions and a realistic urban canopy: (1) a 25-MW natural gas turbine operating at steady state in either simple cycle or combined heating and power (CHP) mode; (2) a 25-MW simple cycle gas turbine undergoing a cold startup with either moderate or enhanced formaldehyde emissions; and (3) a data center generating 10 MW of emergency power with either diesel or natural gas-fired backup generators (BUGs) without pollution controls. Simulations of criteria pollutants (NO2, CO, O3, PM) and the toxic pollutant formaldehyde (HCHO) were conducted assuming a 2-hr operational time period. In all cases, NOx titration dominated ozone production near the source. The turbine scenarios did not result in ambient concentration enhancements significantly exceeding 1 ppbv for gaseous pollutants or over 1 µg/m³ for PM after 2 hr of emission, assuming realistic plume rise. In the case of the data center with diesel BUGs, ambient NO2 concentrations were enhanced by 10-50 ppbv within 2 km downwind of the source, while maximum PM impacts in the immediate vicinity of the data center were less than 5 µg/m³.
Plausible scenarios of distributed fossil generation consistent with the electricity grid's transformation to a more flexible and modernized system suggest that a substantial amount of deployment would be required to significantly affect air quality on a localized scale. In particular, natural gas turbines typically used in distributed generation may have minor effects. Large banks of diesel backup generators such as those used by data centers, on the other hand, may require pollution controls or conversion to natural gas-fired reciprocating internal combustion engines to decrease nitrogen dioxide pollution.
Incorporating Nonstationarity into IDF Curves across CONUS from Station Records and Implications
NASA Astrophysics Data System (ADS)
Wang, K.; Lettenmaier, D. P.
2017-12-01
Intensity-duration-frequency (IDF) curves are widely used for the engineering design of storm-affected structures. Current practice is that IDF curves are based on observed precipitation extremes fit to a stationary probability distribution (e.g., the extreme value family). However, there is increasing evidence of nonstationarity in station records. We apply the Mann-Kendall trend test to over 1000 stations across the CONUS at a 0.05 significance level, and find that about 30% of stations tested have significant nonstationarity for at least one duration (1-, 2-, 3-, 6-, 12-, 24-, and 48-hours). We fit the station records to a GEV distribution with time-varying location and scale parameters using a Bayesian methodology and compare the fit of stationary versus nonstationary GEV distributions to observed precipitation extremes. Within our fitted nonstationary GEV distributions, we compare distributions with a time-varying location parameter versus distributions with both time-varying location and scale parameters. For distributions with two time-varying parameters, we pay particular attention to instances where location and scale trends have opposing directions. Finally, we use the mathematical framework based on the work of Koutsoyiannis to generate IDF curves from the fitted GEV distributions and discuss the implications that using time-varying parameters may have on simple scaling relationships. We apply the above methods to evaluate how frequency statistics based on a stationary assumption compare to those that incorporate nonstationarity for both short- and long-term projects. Overall, we find that neglecting nonstationarity can lead to under- or over-estimates (depending on the trend for the given duration and region) of important statistics such as the design storm.
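The core of the approach described above, fitting a GEV whose location parameter varies linearly with time, can be sketched by direct likelihood maximization (the abstract's Bayesian machinery is omitted). Everything below is synthetic and illustrative; note that `scipy.stats.genextreme` parameterizes shape as c = -ξ relative to the common GEV convention.

```python
# Sketch: fit a nonstationary GEV (linear trend in location) to synthetic
# annual-maximum precipitation by maximizing the log-likelihood directly.
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(0)
years = np.arange(60)
# Synthetic annual maxima with an upward trend in the location parameter.
true_loc = 20.0 + 0.15 * years
data = stats.genextreme.rvs(c=-0.1, loc=true_loc, scale=5.0, random_state=rng)

def nll(params):
    """Negative log-likelihood of a GEV with location mu0 + mu1 * t."""
    mu0, mu1, log_scale, shape = params
    loc = mu0 + mu1 * years
    return -np.sum(stats.genextreme.logpdf(data, c=shape, loc=loc,
                                           scale=np.exp(log_scale)))

res = optimize.minimize(nll, x0=[np.mean(data), 0.0, np.log(np.std(data)), 0.0],
                        method="Nelder-Mead")
mu0, mu1, log_scale, shape = res.x   # mu1 is the fitted trend in location
```

A likelihood-ratio test between this fit and one with `mu1` fixed at zero is one simple way to decide whether the nonstationary model is warranted for a given station.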
Lin, Kuen-Feng; Chiang, Chien-Hung; Wu, Chun-Guey
2014-01-01
The refractive index and extinction coefficient of a triiodide perovskite absorber (TPA) were obtained by fitting the transmittance spectra of TPA/PEDOT:PSS/ITO/glass using the transfer matrix method. Cu nanoplasmonic structures were designed to enhance the exciton generation in the TPA and to simultaneously reduce the film thickness of the TPA. Excitons were effectively generated at the interface between TPA and Cu nanoparticles, as observed through the 3D finite-difference time-domain method. The exciton distribution is advantageous for the exciton dissociation and carrier transport. PMID:25295290
Mesoscale mapping of available solar energy at the earth's surface by use of satellites
NASA Technical Reports Server (NTRS)
Hiser, H. W.; Senn, H. V.
1980-01-01
A method is presented for use of cloud images in the visual spectrum from the SMS/GOES geostationary satellites to determine the hourly distribution of sunshine on the mesoscale. Cloud coverage and density as a function of time of day and season are evaluated through the use of digital data processing techniques. Seasonal geographic distributions of cloud cover/sunshine are converted to joules of solar radiation received at the earth's surface through relationships developed from long-term measurements of these two parameters at six widely distributed stations. The technique can be used to generate maps showing the geographic distribution of total solar radiation on the mesoscale which is received at the earth's surface.
Cascaded clocks measurement and simulation findings
NASA Technical Reports Server (NTRS)
Chislow, Don; Zampetti, George
1994-01-01
This paper will examine aspects related to network synchronization distribution and the cascading of timing elements. Methods of timing distribution have become a much-debated topic in standards forums and among network service providers (both domestically and internationally). Essentially, these concerns focus on the need to migrate existing network synchronization plans (and capabilities) to those required for the next generation of transport technologies (namely, the Synchronous Digital Hierarchy (SDH), Synchronous Optical Network (SONET), and Asynchronous Transfer Mode (ATM)). The particular choices for synchronization distribution network architectures are now being evaluated and are demonstrating that they can indeed have a profound effect on the overall service performance levels that will be delivered to the customer. The salient aspects of these concerns reduce to the following: (1) identifying that the devil is in the details of the timing element specifications and the distribution of timing information (i.e., small design choices can have a large performance impact); (2) developing a standardized method of performance verification that will yield unambiguous results; and (3) presentation of those results. Specifically, this will be done for two general cases: an ideal input, and a noisy input to a cascaded chain of slave clocks.
Random local temporal structure of category fluency responses.
Meyer, David J; Messer, Jason; Singh, Tanya; Thomas, Peter J; Woyczynski, Wojbor A; Kaye, Jeffrey; Lerner, Alan J
2012-04-01
The Category Fluency Test (CFT) provides a sensitive measurement of cognitive capabilities in humans related to retrieval from semantic memory. In particular, it is widely used to assess the progress of cognitive impairment in patients with dementia. Previous research shows that, to a first approximation, the intensity of tested individuals' responses within a standard 60-s test period decays exponentially with time, with faster decay rates for more cognitively impaired patients. This decay rate can then be viewed as a global (macro) diagnostic parameter of each test. In the present paper we focus on the statistical properties of the properly de-trended time intervals between consecutive responses (inter-call times) in the Category Fluency Test. In a sense, those properties reflect the local (micro) structure of the response generation process. We find that a good approximation for the distribution of the de-trended inter-call times is provided by the Weibull distribution, a probability distribution that appears naturally in this context as the distribution of a minimum of independent random quantities and is a standard tool in industrial reliability theory. This insight leads us to a new interpretation of the concept of "navigating a semantic space" via patient responses.
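The Weibull-fitting step described above can be sketched in a few lines with scipy; the "inter-call times" below are synthetic stand-ins drawn from a Weibull, not CFT data, and the goodness-of-fit check is a plain Kolmogorov-Smirnov test.

```python
# Sketch: fit a two-parameter Weibull to (de-trended) inter-response times
# and check the fit, in the spirit of the analysis described above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic stand-in for de-trended inter-call times (arbitrary units).
intervals = stats.weibull_min.rvs(c=1.4, scale=2.0, size=500, random_state=rng)

# Fix the location at 0 so only shape and scale are estimated.
shape, loc, scale = stats.weibull_min.fit(intervals, floc=0)

# Kolmogorov-Smirnov check of the fitted distribution against the sample.
ks = stats.kstest(intervals, "weibull_min", args=(shape, loc, scale))
```

A shape parameter near 1 would indicate roughly exponential (memoryless) gaps, while shape > 1 indicates a hazard of responding that rises with the time since the last response.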
NASA Astrophysics Data System (ADS)
Iino, Shota; Ito, Riho; Doi, Kento; Imaizumi, Tomoyuki; Hikosaka, Shuhei
2017-10-01
In developing countries, urban areas are expanding rapidly, so short-term monitoring of urban changes is important. Constant observation and the creation of urban distribution maps of high accuracy and without noise pollution are the key issues for short-term monitoring. SAR satellites, which can observe day or night regardless of weather conditions, are highly suitable for this type of study. The current study highlights a methodology for generating high-accuracy urban distribution maps from SAR satellite imagery based on a Convolutional Neural Network (CNN), which has shown outstanding results for image classification. Several improvements in SAR polarization combinations and dataset construction were made to increase the accuracy. As additional data, a Digital Surface Model (DSM), which is useful for classifying land cover, was added to improve the accuracy. From the obtained results, a high-accuracy urban distribution map satisfying the quality required for short-term monitoring was generated. For the evaluation, urban changes were extracted by taking the difference of urban distribution maps. The change analysis with a time series of imagery revealed the locations of short-term urban change areas. Comparisons with optical satellites were performed to validate the results. Finally, analysis of urban changes combining X-band, L-band, and C-band SAR satellites was attempted to increase the opportunity of acquiring satellite imagery. Further analysis will be conducted as future work of the present study.
ANN based Real-Time Estimation of Power Generation of Different PV Module Types
NASA Astrophysics Data System (ADS)
Syafaruddin; Karatepe, Engin; Hiyama, Takashi
Distributed generation is expected to become more important in future generation systems. Utilities need to find solutions that help manage resources more efficiently. Effective smart grid solutions have used real-time data to help refine and pinpoint inefficiencies for maintaining secure and reliable operating conditions. This paper proposes the application of an Artificial Neural Network (ANN) for the real-time estimation of the maximum power generation of PV modules of different technologies. An intelligent technique is required in this case because the relationship between the maximum power of PV modules and the open-circuit voltage and temperature is nonlinear and cannot be easily expressed by an analytical expression for each technology. The proposed ANN method uses input signals of open-circuit voltage and cell temperature, instead of irradiance and ambient temperature, to determine the estimated maximum power generation of PV modules. It is important for the utility to be able to perform this estimation for optimal operating points and for diagnostic purposes, where it may serve as an early indicator of a need for maintenance and support optimal energy management. The proposed method is verified for accuracy through a developed real-time simulator on a daily profile of irradiance and cell temperature changes.
Basire, Marie; Borgis, Daniel; Vuilleumier, Rodolphe
2013-08-14
Langevin dynamics coupled to a quantum thermal bath (QTB) allows for the inclusion of vibrational quantum effects in molecular dynamics simulations at virtually no additional computer cost. We investigate here the ability of the QTB method to reproduce the quantum Wigner distribution of a variety of model potentials, designed to assess the performance and limits of the method. We further compute the infrared spectrum of a multidimensional model of proton transfer in the gas phase and in solution, using classical trajectories sampled initially from the Wigner distribution. It is shown that for this type of system, involving large anharmonicities and strong nonlinear coupling to the environment, the quantum thermal bath is able to sample the Wigner distribution satisfactorily and to account for both zero-point energy and tunneling effects. It leads to quantum time correlation functions having the correct short-time behavior and the correct associated spectral frequencies, but that are slightly overdamped. This is attributed to the classical propagation approximation rather than to the generation of the quantized initial conditions themselves.
of his time to fire a single round. The solution of the simple duel in the case where each protagonist's time-to-kill is distributed as a gamma-variate... general simple duel. An expansion of the moment-generating function of the marksman's time-to-kill in powers of his kill probability is next derived and... found to provide a good approximation to the solution of the simple duel; various properties of the expansion are also considered. A stochastic battle
Assessment of source probabilities for potential tsunamis affecting the U.S. Atlantic coast
Geist, E.L.; Parsons, T.
2009-01-01
Estimating the likelihood of tsunamis occurring along the U.S. Atlantic coast critically depends on knowledge of tsunami source probability. We review available information on both earthquake and landslide probabilities from potential sources that could generate local and transoceanic tsunamis. Estimating source probability includes defining both size and recurrence distributions for earthquakes and landslides. For the former distribution, source sizes are often distributed according to a truncated or tapered power-law relationship. For the latter distribution, sources are often assumed to occur in time according to a Poisson process, simplifying the way tsunami probabilities from individual sources can be aggregated. For the U.S. Atlantic coast, earthquake tsunami sources primarily occur at transoceanic distances along plate boundary faults. Probabilities for these sources are constrained from previous statistical studies of global seismicity for similar plate boundary types. In contrast, there is presently little information constraining landslide probabilities that may generate local tsunamis. Though there is significant uncertainty in tsunami source probabilities for the Atlantic, results from this study yield a comparative analysis of tsunami source recurrence rates that can form the basis for future probabilistic analyses.
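The Poisson assumption mentioned above is precisely what makes aggregation simple: for independent Poisson sources the rates add, so the probability of at least one tsunami-generating event over a horizon T is 1 - exp(-T·Σλᵢ). A minimal sketch with invented (not actual Atlantic) source rates:

```python
import math

def aggregate_poisson_prob(rates_per_year, horizon_years):
    """P(at least one event in the horizon) for independent Poisson sources.

    For independent Poisson processes the rates add, so
    P = 1 - exp(-T * sum(lambda_i)).
    """
    total_rate = sum(rates_per_year)
    return 1.0 - math.exp(-horizon_years * total_rate)

# Illustrative source recurrence rates, in events per year (made up).
rates = [1 / 500, 1 / 1000, 1 / 2500]
p50 = aggregate_poisson_prob(rates, 50)   # 50-year exposure window
```

This additivity is why the abstract can treat size distributions (power laws) and occurrence (Poisson) separately: the size model sets each λᵢ above a threshold, and the Poisson model combines them.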
The coalescent of a sample from a binary branching process.
Lambert, Amaury
2018-04-25
At time 0, start a time-continuous binary branching process, where particles give birth to a single particle independently (at a possibly time-dependent rate) and die independently (at a possibly time-dependent and age-dependent rate). A particular case is the classical birth-death process. Stop this process at time T>0. It is known that the tree spanned by the N tips alive at time T of the tree thus obtained (called a reduced tree or coalescent tree) is a coalescent point process (CPP), which basically means that the depths of interior nodes are independent and identically distributed (iid). Now select each of the N tips independently with probability y (Bernoulli sample). It is known that the tree generated by the selected tips, which we will call the Bernoulli sampled CPP, is again a CPP. Now instead, select exactly k tips uniformly at random among the N tips (a k-sample). We show that the tree generated by the selected tips is a mixture of Bernoulli sampled CPPs with the same parent CPP, over some explicit distribution of the sampling probability y. An immediate consequence is that the genealogy of a k-sample can be obtained by the realization of k random variables, first the random sampling probability Y and then the k-1 node depths which are iid conditional on Y=y. Copyright © 2018. Published by Elsevier Inc.
On a framework for generating PoD curves assisted by numerical simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Subair, S. Mohamed, E-mail: prajagopal@iitm.ac.in; Agrawal, Shweta, E-mail: prajagopal@iitm.ac.in; Balasubramaniam, Krishnan, E-mail: prajagopal@iitm.ac.in
2015-03-31
The Probability of Detection (PoD) curve method has emerged as an important tool for the assessment of the performance of NDE techniques, a topic of particular interest to the nuclear industry where inspection qualification is very important. The conventional experimental means of generating PoD curves though, can be expensive, requiring large data sets (covering defects and test conditions), and equipment and operator time. Several methods of achieving faster estimates for PoD curves using physics-based modelling have been developed to address this problem. Numerical modelling techniques are also attractive, especially given the ever-increasing computational power available to scientists today. Here we develop procedures for obtaining PoD curves, assisted by numerical simulation and based on Bayesian statistics. Numerical simulations are performed using Finite Element analysis for factors that are assumed to be independent, random and normally distributed. PoD curves so generated are compared with experiments on austenitic stainless steel (SS) plates with artificially created notches. We examine issues affecting the PoD curve generation process including codes, standards, distribution of defect parameters and the choice of the noise threshold. We also study the assumption of normal distribution for signal response parameters and consider strategies for dealing with data that may be more complex or sparse to justify this. These topics are addressed and illustrated through the example case of generation of PoD curves for pulse-echo ultrasonic inspection of vertical surface-breaking cracks in SS plates.
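For contrast with the Bayesian, simulation-assisted procedure described above, the classical signal-response ("â versus a") PoD construction can be sketched in a few lines. All defect sizes, responses, and the noise threshold below are invented for illustration; this is the standard regression approach, not the method of the abstract.

```python
# Minimal sketch of a signal-response PoD curve: regress ln(response) on
# ln(defect size), then PoD(a) = P(response exceeds the noise threshold).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
a = np.linspace(0.5, 5.0, 80)                                   # defect size, mm
log_ahat = 0.8 * np.log(a) + 0.2 + rng.normal(0, 0.3, a.size)   # ln(response)

# Linear regression of ln(response) on ln(size).
slope, intercept, *_ = stats.linregress(np.log(a), log_ahat)
resid_sd = np.std(log_ahat - (intercept + slope * np.log(a)), ddof=2)

def pod(size, log_threshold=0.4):
    """PoD(a) = P(ln response > ln threshold) under the fitted model."""
    mean = intercept + slope * np.log(size)
    return stats.norm.sf(log_threshold, loc=mean, scale=resid_sd)
```

The normality assumption on the residuals is exactly the "normal distribution for signal response parameters" assumption the abstract proposes to scrutinize when data are complex or sparse.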
On a framework for generating PoD curves assisted by numerical simulations
NASA Astrophysics Data System (ADS)
Subair, S. Mohamed; Agrawal, Shweta; Balasubramaniam, Krishnan; Rajagopal, Prabhu; Kumar, Anish; Rao, Purnachandra B.; Tamanna, Jayakumar
2015-03-01
NASA Technical Reports Server (NTRS)
Englander, Jacob; Englander, Arnold
2014-01-01
Trajectory optimization methods using monotonic basin hopping (MBH) have become well developed during the past decade. An essential component of MBH is a controlled random search through the multi-dimensional space of possible solutions. Historically, the randomness has been generated by drawing random variables (RVs) from a uniform probability distribution. Here, we investigate generating the randomness by drawing the RVs from Cauchy and Pareto distributions, chosen because of their characteristic long tails. We demonstrate that using Cauchy distributions (as first suggested by Englander) significantly improves MBH performance, and that Pareto distributions provide even greater improvements. Improved performance is defined in terms of efficiency and robustness, where efficiency is finding better solutions in less time, and robustness is efficiency that is undiminished (a) by the boundary conditions and internal constraints of the optimization problem being solved, and (b) by variations in the parameters of the probability distribution. Robustness is important for achieving performance improvements that are not problem specific. In this work we show that the performance improvements are the result of how these long-tailed distributions enable MBH to search the solution space faster and more thoroughly. In developing this explanation, we use the concepts of sub-diffusive, normally-diffusive, and super-diffusive random walks (RWs) originally developed in the field of statistical physics.
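A minimal sketch of monotonic basin hopping with a long-tailed (Cauchy) perturbation step, in the spirit of the comparison above. The objective function, step size, and hop count are arbitrary choices for illustration, not the trajectory problems studied in the abstract; swapping the Cauchy draw for a Pareto draw is a one-line change.

```python
# Monotonic basin hopping (MBH) sketch: perturb the incumbent with a
# long-tailed Cauchy step, locally minimize, and accept only improvements.
import numpy as np
from scipy import optimize

rng = np.random.default_rng(3)

def rastrigin(x):
    # Highly multimodal test function; global minimum 0 at the origin.
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def mbh(n_hops=200, dim=2, step=0.5):
    x_best = rng.uniform(-5, 5, dim)
    f_best = rastrigin(x_best)
    for _ in range(n_hops):
        # Cauchy perturbation: mostly small steps, occasional large jumps
        # (a Pareto draw would give one-sided long tails instead).
        x_trial = x_best + step * rng.standard_cauchy(dim)
        res = optimize.minimize(rastrigin, x_trial, method="Nelder-Mead")
        if res.fun < f_best:           # monotonic: accept only improvements
            x_best, f_best = res.x, res.fun
    return x_best, f_best

x_best, f_best = mbh()
```

The occasional large Cauchy jumps are what produce the super-diffusive search behavior the abstract credits for the improved coverage of the solution space.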
Principal Investigator Microgravity Services Role in ISS Acceleration Data Distribution
NASA Technical Reports Server (NTRS)
McPherson, Kevin
1999-01-01
Measurement of the microgravity acceleration environment on the International Space Station will be accomplished by two accelerometer systems. The Microgravity Acceleration Measurement System will record the quasi-steady microgravity environment, including the influences of aerodynamic drag, vehicle rotation, and venting effects. Measurement of the vibratory/transient regime comprised of vehicle, crew, and equipment disturbances will be accomplished by the Space Acceleration Measurement System-II. Due to the dynamic nature of the microgravity environment and its potential to influence sensitive experiments, Principal Investigators require distribution of microgravity acceleration in a timely and straightforward fashion. In addition to this timely distribution of the data, long term access to International Space Station microgravity environment acceleration data is required. The NASA Glenn Research Center's Principal Investigator Microgravity Services project will provide the means for real-time and post experiment distribution of microgravity acceleration data to microgravity science Principal Investigators. Real-time distribution of microgravity environment acceleration data will be accomplished via the World Wide Web. Data packets from the Microgravity Acceleration Measurement System and the Space Acceleration Measurement System-II will be routed from onboard the International Space Station to the NASA Glenn Research Center's Telescience Support Center. Principal Investigator Microgravity Services' ground support equipment located at the Telescience Support Center will be capable of generating a standard suite of acceleration data displays, including various time domain and frequency domain options. These data displays will be updated in real-time and will periodically update images available via the Principal Investigator Microgravity Services web page.
Correlation in photon pairs generated using four-wave mixing in a cold atomic ensemble
NASA Astrophysics Data System (ADS)
Ferdinand, Andrew Richard; Manjavacas, Alejandro; Becerra, Francisco Elohim
2017-04-01
Spontaneous four-wave mixing (FWM) in atomic ensembles can be used to generate narrowband entangled photon pairs at or near atomic resonances. While extensive research has been done to investigate the quantum correlations in the time and polarization of such photon pairs, the study and control of the high-dimensional quantum correlations contained in their spatial degrees of freedom have not been fully explored. In our work we experimentally investigate the generation of correlated light from FWM in a cold ensemble of cesium atoms as a function of the frequencies of the pump fields in the FWM process. In addition, we theoretically study the spatial correlations of the photon pairs generated in the FWM process, specifically the joint distribution of their orbital angular momentum (OAM). We investigate the width of the distribution of the OAM modes, known as the spiral bandwidth, and the purity of OAM correlations as a function of the properties of the pump fields, collected photons, and the atomic ensemble. These studies will guide experiments involving high-dimensional entanglement of photons generated from this FWM process and OAM-based quantum communication with atomic ensembles. This work is supported by AFOSR Grant FA9550-14-1-0300.
NASA Astrophysics Data System (ADS)
Akbardin, J.; Parikesit, D.; Riyanto, B.; TMulyono, A.
2018-05-01
Zones producing inland fishery commodities have limited distribution capability because of the availability and condition of infrastructure. High demand for fishery commodities has caused distribution to grow over inefficient distribution distances. Developing gravity theory with a constraint on movement generation from the production zone can increase inter-zone interaction effectively and efficiently, with shorter movement distribution distances. A regression analysis with multiple variables describing transportation infrastructure condition, based on service level and quantitative capacity, is used to estimate the 'mass' of movement generation that is formed. The resulting movement distribution model has the equation Tid = 27.04 - 0.49 tid, based on a power-model barrier (deterrence) function with calibration value β = 0.0496. Developing the movement generation 'mass' boundary at the production zone will shorten the distribution distance effectively. Shorter distribution distances will increase inter-zone accessibility, allowing zones to interact according to the magnitude of the movement generation 'mass'.
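A doubly-constrained gravity model with the power deterrence (barrier) function f(d) = d^(-β) can be sketched as follows. Only the calibration value β = 0.0496 is taken from the abstract; the zone totals ("masses") and distance matrix are invented for illustration.

```python
# Gravity trip-distribution sketch with a power deterrence function and
# Furness (iterative proportional fitting) balancing of zone totals.
import numpy as np

beta = 0.0496                         # calibration value quoted in the abstract
O = np.array([100.0, 200.0])          # zone productions (movement 'mass', made up)
D = np.array([150.0, 150.0])          # zone attractions (made up)
dist = np.array([[5.0, 20.0],
                 [15.0, 8.0]])        # inter-zone distances (made up)

f = dist ** (-beta)                   # power-model barrier (deterrence) function

# Balancing factors so that row sums match O and column sums match D.
A = np.ones_like(O)
B = np.ones_like(D)
for _ in range(100):
    A = 1.0 / (f * (B * D)).sum(axis=1)
    B = 1.0 / (f.T * (A * O)).sum(axis=1)

# Trip matrix: T[i, d] = A_i * O_i * B_d * D_d * f(dist_id)
T = (A * O)[:, None] * (B * D)[None, :] * f
```

With β this small the deterrence is weak, so trips spread almost uniformly; constraining the generation 'mass' (the O vector) is then the main lever on distribution distance, as the abstract argues.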
NASA Astrophysics Data System (ADS)
Antonenkov, D. V.; Solovev, D. B.
2017-10-01
The article covers aspects of forecasting and consideration of the wholesale market environment in generating the power demand forecast. Major mining companies operating in today's power market have to provide a reliable energy demand request for a certain time period ahead, thus ensuring sufficient reduction of the financial losses associated with deviations of the actual power demand from the expected figures. Normally, under the power supply agreement, the consumer is bound to provide a per-month and per-hour request annually. This means that the consumer has to generate one-month-ahead short-term and medium-term hourly forecasts. The authors discovered that the empirical distributions of the power demand of "Yakutugol", Holding Joint Stock Company, belong to the sustainable rank parameter H-distribution type used for generating forecasts based on extrapolation of the distribution parameters. For this reason, they justify the need to apply mathematical rank analysis in short-term forecasting of the contracted power demand of the "Neryungri" coal strip mine, a component of the technocenosis-type system of the mining company "Yakutugol", Holding JSC.
On the temperature control in self-controlling hyperthermia therapy
NASA Astrophysics Data System (ADS)
Ebrahimi, Mahyar
2016-10-01
In self-controlling hyperthermia therapy, once the desired temperature is reached, heat generation ceases and overheating is prevented. In order to design a system that generates sufficient heat without thermal ablation of the surrounding healthy tissue, a good understanding of the temperature distribution and its change with time is imperative. This study extends our understanding of heat generation and transfer, temperature distribution, and the temperature rise pattern in the tumor and surrounding tissue during self-controlling magnetic hyperthermia. A model consisting of two concentric spheres representing the tumor and its surrounding tissue is considered, and the temperature change pattern and temperature distribution in the tumor and surrounding tissue are studied. After describing the model and its governing equations and constants precisely, a typical numerical solution of the model is presented. It is then shown how different parameters, such as the Curie temperature of the nanoparticles, the magnetic field amplitude, and the nanoparticle concentration, can affect the temperature change pattern during self-controlling magnetic hyperthermia. The model system discussed here can be useful for gaining insight into self-controlling magnetic hyperthermia as applied to cancer treatment in real scenarios, and for determining treatment strategy.
NASA Astrophysics Data System (ADS)
Larrañeta, M.; Moreno-Tejera, S.; Lillo-Bravo, I.; Silva-Pérez, M. A.
2018-02-01
Many of the available solar radiation databases only provide global horizontal irradiance (GHI), while there is a growing need for extensive databases of direct normal irradiance (DNI), mainly for the development of concentrated solar power and concentrated photovoltaic technologies. In the present work, we propose a methodology for the generation of synthetic DNI hourly data from hourly average GHI values by dividing the irradiance into a deterministic and a stochastic component, intending to emulate the dynamics of solar radiation. The deterministic component is modeled through a simple classical model. The stochastic component is fitted to measured data in order to maintain the consistency of the synthetic data with the state of the sky, generating statistically significant DNI data with a cumulative frequency distribution very similar to that of the measured data. The adaptation and application of the model to the location of Seville shows significant improvements in terms of frequency distribution over the classical models. The proposed methodology, applied to other locations with different climatological characteristics, yields better results than the classical models in terms of frequency distribution, reaching a reduction of 50% in the Finkelstein-Schafer (FS) and Kolmogorov-Smirnov test integral (KSI) statistics.
Kye, Bongoh; Mare, Robert D.
2014-01-01
This study examines the intergenerational effects of changes in women's education in South Korea. We define intergenerational effects as changes in the distribution of educational attainment in an offspring generation associated with the changes in a parental generation. Departing from the previous approach in research on social mobility that has focused on intergenerational association, we examine the changes in the distribution of educational attainment across generations. Using a simulation method based on Mare and Maralani's recursive population renewal model, we examine how intergenerational transmission, assortative mating, and differential fertility influence intergenerational effects. The results point to the following conclusions. First, we find a positive intergenerational effect: improvement in women's education leads to improvement in daughter's education. Second, we find that the magnitude of intergenerational effects substantially depends on assortative marriage and differential fertility: assortative mating amplifies and differential fertility dampens the intergenerational effects. Third, intergenerational effects become bigger for the less educated and smaller for the better educated over time, which is a consequence of educational expansion. We compare our results with Mare and Maralani's original Indonesian study to illustrate how the model of intergenerational effects works in different socioeconomic circumstances. PMID:23017970
Renewable Energy Power Generation Estimation Using Consensus Algorithm
NASA Astrophysics Data System (ADS)
Ahmad, Jehanzeb; Najm-ul-Islam, M.; Ahmed, Salman
2017-08-01
At the small-consumer level, photovoltaic (PV) panel based grid-tied systems are the most common form of Distributed Energy Resources (DER). Unlike wind, which is suitable only for selected locations, PV panels can generate electricity almost anywhere. Pakistan is currently one of the most energy-deficient countries in the world. In order to mitigate this shortage, the Government has recently announced a net-metering policy for residential consumers. After widespread adoption of DERs, one of the issues faced by load management centers will be accurately estimating the amount of electricity being injected into the grid at any given time through these DERs. This becomes a critical issue once DER penetration increases beyond a certain limit. Grid stability and the management of harmonics become important considerations when electricity is injected at the distribution level through solid-state controllers instead of rotating machinery. This paper presents a solution using graph-theoretic methods for estimating the total electricity being injected into the grid over a wide geographical area. An agent-based consensus approach for distributed computation is used to provide an estimate under varying generation conditions.
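The agent-based consensus estimate described above can be sketched with a standard average-consensus iteration (a generic technique; the paper's actual graph topology and weights are not reproduced here). Each agent repeatedly averages with its neighbours using Metropolis weights, after which any agent can scale its state by the network size to recover the total injected power. All numbers are hypothetical:

```python
import numpy as np

# Ring network of 6 "prosumer" agents; each knows only its own PV injection (kW).
# Metropolis weights make the iteration x <- W x converge to the network average.
n = 6
power = np.array([3.2, 0.0, 5.1, 1.4, 2.7, 4.0])   # hypothetical local injections
W = np.zeros((n, n))
for i in range(n):
    for j in ((i - 1) % n, (i + 1) % n):           # ring neighbours, degree 2
        W[i, j] = 1.0 / 3.0                        # 1 / (1 + max(deg_i, deg_j))
    W[i, i] = 1.0 - W[i].sum()                     # each row sums to 1

x = power.copy()
for _ in range(200):                               # local exchanges only
    x = W @ x

total_estimate = n * x[0]                          # any agent recovers the total
print(total_estimate)                              # converges to power.sum()
```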
Bazzo, João Paulo; Pipa, Daniel Rodrigues; da Silva, Erlon Vagner; Martelli, Cicero; Cardozo da Silva, Jean Carlos
2016-01-01
This paper presents an image reconstruction method to monitor the temperature distribution of electric generator stators. The main objective is to identify insulation failures that may arise as hotspots in the structure. The method is based on temperature readings from fiber optic distributed temperature sensors (DTS) and a sparse reconstruction algorithm. Thermal images of the structure are formed by appropriately combining atoms of a dictionary of hotspots, which was constructed by finite element simulation with a multi-physical model. Due to the difficulty of reproducing insulation faults in a real stator structure, experimental tests were performed using a prototype similar to the real structure. The results demonstrate the ability of the proposed method to reconstruct images of hotspots with dimensions down to 15 cm, representing a resolution gain of up to six times when compared to the DTS spatial resolution. In addition, satisfactory results were also obtained in detecting hotspots of only 5 cm. The application of the proposed algorithm for thermal imaging of generator stators can contribute to the identification of insulation faults in early stages, thereby avoiding catastrophic damage to the structure. PMID:27618040
Distributed vibration fiber sensing system based on Polarization Diversity Receiver
NASA Astrophysics Data System (ADS)
Zhang, Junan; Jiang, Peng; Hu, Zhengliang; Hu, Yongming
2016-10-01
In this paper, we propose a distributed vibration fiber sensing system based on a Polarization Diversity Receiver (PDR). We use an Acousto-Optic Modulator (AOM) to generate pulsed light and an unbalanced Mach-Zehnder interferometer to generate two pulses with a certain time delay within the same period. As the pulses propagate along the fiber, the backward Rayleigh scattering light from them interferes. Vibration on the fiber changes the length and refractive index of the fiber, which in turn changes the phase of the interference signal. Hence, one arm of the Mach-Zehnder interferometer is modulated by a sinusoidal phase-generated carrier (PGC) signal, and a PGC demodulation algorithm is used to acquire the phase information from the backward Rayleigh scattering light. In order to overcome the influence of polarization-induced fading and enhance the signal-to-noise ratio (SNR), we place a PDR before the photodetector. The PDR separates the interference light into two beams with orthogonal states of polarization, so that one channel always carries the better interference signal. Experiments are presented to verify the effectiveness of the proposed distributed vibration fiber sensing system.
GENIE(++): A Multi-Block Structured Grid System
NASA Technical Reports Server (NTRS)
Williams, Tonya; Nadenthiran, Naren; Thornburg, Hugh; Soni, Bharat K.
1996-01-01
The computer code GENIE++ is a continuously evolving grid system containing a multitude of proven geometry/grid techniques. The generation process in GENIE++ is based on an earlier version. The process uses several techniques, either separately or in combination, to quickly and economically generate sculptured geometry descriptions and grids for arbitrary geometries. The computational mesh is formed by using an appropriate algebraic method. Grid clustering is accomplished with either exponential or hyperbolic tangent routines which allow the user to specify a desired point distribution. Grid smoothing can be accomplished by using an elliptic solver with proper forcing functions. B-spline and Non-Uniform Rational B-spline (NURBS) algorithms are used for surface definition and redistribution. The built-in sculptured geometry definition with desired distribution of points, automatic Bezier curve/surface generation for interior boundaries/surfaces, and surface redistribution are based on NURBS. Weighted Lagrange/Hermite transfinite interpolation methods, interactive geometry/grid manipulation modules, and on-line graphical visualization of the generation process are salient features of this system which result in significant time savings for a given geometry/grid application.
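As an illustration of tanh-based grid clustering (a common stretching function in grid generation; the exact routine in GENIE++ is not reproduced here), a one-sided hyperbolic-tangent distribution packs points toward one boundary, with a parameter beta controlling the clustering strength:

```python
import numpy as np

def tanh_cluster(n, beta):
    """One-sided hyperbolic-tangent stretching on [0, 1]: points cluster near
    x = 0, with larger beta packing them more tightly (an illustrative form;
    the exact routine used in GENIE++ is not reproduced here)."""
    s = np.linspace(0.0, 1.0, n)
    return 1.0 + np.tanh(beta * (s - 1.0)) / np.tanh(beta)

x = tanh_cluster(11, 3.0)
print(x[:3])   # first spacings are far smaller than the uniform 0.1
```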
Analysis and generation of groundwater concentration time series
NASA Astrophysics Data System (ADS)
Crăciun, Maria; Vamoş, Călin; Suciu, Nicolae
2018-01-01
Concentration time series are provided by simulated concentrations of a nonreactive solute transported in groundwater, integrated over the transverse direction of a two-dimensional computational domain and recorded at the plume center of mass. The analysis of a statistical ensemble of time series reveals subtle features that are not captured by the first two moments which characterize the approximate Gaussian distribution of the two-dimensional concentration fields. The concentration time series exhibit a complex preasymptotic behavior driven by a nonstationary trend and correlated fluctuations with time-variable amplitude. Time series with almost the same statistics are generated by successively adding to a time-dependent trend a sum of linear regression terms, accounting for correlations between fluctuations around the trend and their increments in time, and terms of an amplitude modulated autoregressive noise of order one with time-varying parameter. The algorithm generalizes mixing models used in probability density function approaches. The well-known interaction by exchange with the mean mixing model is a special case consisting of a linear regression with constant coefficients.
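A minimal sketch of the kind of generator described, assuming illustrative choices for the trend, the time-varying AR(1) coefficient, and the amplitude modulation (the paper's fitted regression terms are omitted):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
t = np.arange(n)

trend = 1.0 - np.exp(-t / 100.0)                 # illustrative nonstationary trend
phi = 0.9 * np.exp(-t / 400.0)                   # time-varying AR(1) coefficient
amp = 0.1 * (1.0 + 0.5 * np.sin(2 * np.pi * t / 250.0))  # modulated noise amplitude

noise = np.zeros(n)
for j in range(1, n):                            # amplitude-modulated AR(1) noise
    noise[j] = phi[j] * noise[j - 1] + amp[j] * rng.standard_normal()

series = trend + noise                           # synthetic "concentration" record
print(series[:3])
```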
NASA Astrophysics Data System (ADS)
Sobieszuk, Paweł; Zamojska-Jaroszewicz, Anna; Makowski, Łukasz
2017-12-01
The influence of the organic loading rate (also known as active anodic chamber volume) on bioelectricity generation in a continuous, two-chamber microbial fuel cell for the treatment of synthetic wastewater, with glucose as the only carbon source, was examined. Ten sets of experiments with different combinations of hydraulic retention times (0.24-1.14 d) and influent chemical oxygen demand concentrations were performed to verify the impact of organic loading rate on the voltage generation capacity of a simple dual-chamber microbial fuel cell working in continuous mode. We found that there is an optimal hydraulic retention time value at which the maximum voltage is generated: 0.41 d. However, there were no similar effects, in terms of voltage generation, when a constant hydraulic retention time with different influent chemical oxygen demand of wastewater was used. The obtained maximal voltage value (600 mV) has also been compared to literature data. Computational fluid dynamics (CFD) was used to calculate the fluid flow and the exit age distribution of fluid elements in the reactor to explain the obtained experimental results and identify the crucial parameters for the design of bioreactors on an industrial scale.
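The exit-age distribution mentioned above is easiest to illustrate for an ideal continuously stirred tank, where E(t) = (1/τ)exp(-t/τ) and the mean exit age recovers the hydraulic retention time; the study's CFD computes this distribution for the real reactor geometry instead. A quick numerical check, using the reported optimal HRT of 0.41 d:

```python
import numpy as np

hrt = 0.41                           # optimal hydraulic retention time reported (d)
t = np.linspace(0.0, 10 * hrt, 2001)
E = np.exp(-t / hrt) / hrt           # ideal-CSTR exit-age distribution E(t)

dt = t[1] - t[0]                     # trapezoidal quadrature by hand
area = float(np.sum((E[1:] + E[:-1]) * 0.5 * dt))
mean_age = float(np.sum((t[1:] * E[1:] + t[:-1] * E[:-1]) * 0.5 * dt))
print(area, mean_age)                # ~1 and ~0.41, as expected
```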
The effect of noise-induced variance on parameter recovery from reaction times.
Vadillo, Miguel A; Garaizar, Pablo
2016-03-31
Technical noise can compromise the precision and accuracy of the reaction times collected in psychological experiments, especially in the case of Internet-based studies. Although this noise seems to have only a small impact on traditional statistical analyses, its effects on model fit to reaction-time distributions remains unexplored. Across four simulations we study the impact of technical noise on parameter recovery from data generated from an ex-Gaussian distribution and from a Ratcliff Diffusion Model. Our results suggest that the impact of noise-induced variance tends to be limited to specific parameters and conditions. Although we encourage researchers to adopt all measures to reduce the impact of noise on reaction-time experiments, we conclude that the typical amount of noise-induced variance found in these experiments does not pose substantial problems for statistical analyses based on model fitting.
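An ex-Gaussian reaction time is simply a normal plus an independent exponential component, so parameter recovery can be sketched with moment-based estimators (a simple stand-in for the model fitting studied in the paper; the parameter values and the uniform "technical noise" are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)
mu, sigma, tau = 400.0, 40.0, 100.0       # plausible ex-Gaussian parameters (ms)
n = 20000

rt = rng.normal(mu, sigma, n) + rng.exponential(tau, n)  # clean ex-Gaussian RTs
noisy = rt + rng.uniform(0.0, 17.0, n)    # additive "technical" noise, e.g. one frame

def exgauss_moments(x):
    """Moment estimators: mean = mu + tau, var = sigma^2 + tau^2,
    skewness = 2 tau^3 / (sigma^2 + tau^2)^(3/2)."""
    skew = float(np.mean(((x - x.mean()) / x.std()) ** 3))
    tau_hat = x.std() * (skew / 2.0) ** (1.0 / 3.0)
    sigma_hat = np.sqrt(max(x.var() - tau_hat**2, 0.0))
    return x.mean() - tau_hat, sigma_hat, tau_hat

print(exgauss_moments(rt))     # close to (400, 40, 100)
print(exgauss_moments(noisy))  # shifted only slightly by the added noise
```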
Irradiance and spectral distribution control system for controlled environment chambers
NASA Technical Reports Server (NTRS)
Krones, M. J.; Sager, J. C.; Johnson, A. T.; Knott, W. M. (Principal Investigator)
1987-01-01
This paper describes a closed-loop control system for controlling the irradiance and spectral quality generated by fluorescent lamps in a controlled environment chamber. The 400 to 800 nm irradiance and the ratio of the red waveband (600 to 700 nm) to the far-red waveband (700 to 800 nm) were independently controlled and varied as functions of time. A suggested application is to investigate the possibility of synergistic effects of changing irradiance levels and changing spectral distributions on photoperiodism and photomorphogenesis.
Window of visibility - A psychophysical theory of fidelity in time-sampled visual motion displays
NASA Technical Reports Server (NTRS)
Watson, A. B.; Ahumada, A. J., Jr.; Farrell, J. E.
1986-01-01
A film of an object in motion presents on the screen a sequence of static views, while the human observer sees the object moving smoothly across the screen. Questions related to the perceptual identity of continuous and stroboscopic displays are examined. Time-sampled moving images are considered along with the contrast distribution of continuous motion, the contrast distribution of stroboscopic motion, the frequency spectrum of continuous motion, the frequency spectrum of stroboscopic motion, the approximation of the limits of human visual sensitivity to spatial and temporal frequencies by a window of visibility, the critical sampling frequency, the contrast distribution of staircase motion and the frequency spectrum of this motion, and the spatial dependence of the critical sampling frequency. Attention is given to apparent motion, models of motion, image recording, and computer-generated imagery.
NASA Astrophysics Data System (ADS)
Akushevich, I.; Filoti, O. F.; Ilyichev, A.; Shumeiko, N.
2012-07-01
The structure and algorithms of the Monte Carlo generator ELRADGEN 2.0 designed to simulate radiative events in polarized ep-scattering are presented. The full set of analytical expressions for the QED radiative corrections is presented and discussed in detail. Algorithmic improvements implemented to provide faster simulation of hard real photon events are described. Numerical tests show high quality of generation of photonic variables and radiatively corrected cross section. The comparison of the elastic radiative tail simulated within the kinematical conditions of the BLAST experiment at MIT BATES shows a good agreement with experimental data. Catalogue identifier: AELO_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AELO_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 1299 No. of bytes in distributed program, including test data, etc.: 11 348 Distribution format: tar.gz Programming language: FORTRAN 77 Computer: All Operating system: Any RAM: 1 MB Classification: 11.2, 11.4 Nature of problem: Simulation of radiative events in polarized ep-scattering. Solution method: Monte Carlo simulation according to the distributions of the real photon kinematic variables that are calculated by the covariant method of QED radiative correction estimation. The approach provides rather fast and accurate generation. Running time: The simulation of 10^8 radiative events for itest:=1 takes up to 52 seconds on a Pentium(R) Dual-Core 2.00 GHz processor.
Propellant Analysis and Distillation Unit Design
NASA Technical Reports Server (NTRS)
Barragan, Michelle H.; Spangler, Cindy; Barrera, Louis K.
2007-01-01
The NASA White Sands Test Facility (WSTF) routinely operates hypergolic propulsion systems. Some of the onsite activities include performing long duration studies on the operational life of these systems. A few of them have been in use for over twenty years. During this span of time contamination has built up in the propellant and some of the distribution infrastructure. This study investigated the nature of this contamination, the pathology of its generation, and developed a process for removal of the contamination that was cost efficient with minimal waste generation.
Validation of a Monte Carlo Simulation of Binary Time Series.
1981-09-18
the probability distribution corresponding to the population from which the n sample vectors are generated. Simple unbiased estimators were chosen for... is generated from the sample of such vectors produced by several independent replications of the Monte Carlo simulation. Then the validity of the
A novel clinical multimodal multiphoton tomograph for AF, SHG, CARS imaging, and FLIM
NASA Astrophysics Data System (ADS)
Weinigel, Martin; Breunig, Hans Georg; König, Karsten
2014-02-01
We report on a flexible nonlinear medical tomograph with multiple miniaturized detectors for simultaneous acquisition of two-photon autofluorescence (AF), second harmonic generation (SHG), and coherent anti-Stokes Raman scattering (CARS) images. The simultaneous visualization of the distribution of the endogenous fluorophores NAD(P)H, melanin, and elastin, of SHG-active collagen, and of non-fluorescent lipids within human skin in vivo is possible. Furthermore, fluorescence lifetime images (FLIM) can be generated using time-correlated single photon counting.
NASA Astrophysics Data System (ADS)
Chang, Yung-Chia; Li, Vincent C.; Chiang, Chia-Ju
2014-04-01
Make-to-order or direct-order business models that require close interaction between production and distribution activities have been adopted by many enterprises in order to be competitive in demanding markets. This article considers an integrated production and distribution scheduling problem in which jobs are first processed by one of the unrelated parallel machines and then distributed to corresponding customers by capacitated vehicles without intermediate inventory. The objective is to find a joint production and distribution schedule so that the weighted sum of total weighted job delivery time and the total distribution cost is minimized. This article presents a mathematical model for describing the problem and designs an algorithm using ant colony optimization. Computational experiments illustrate that the algorithm developed is capable of generating near-optimal solutions. The computational results also demonstrate the value of integrating production and distribution in the model for the studied problem.
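A toy version of the objective helps fix ideas: jobs are assigned to unrelated machines, each finished job is delivered to its customer, and the cost is a weighted sum of total weighted delivery time and distribution cost. The tiny instance below (not from the paper, with a deliberately simplified delivery rule of one job per trip and an always-available vehicle) is small enough to solve by enumeration, which is what the ant colony heuristic approximates at realistic sizes:

```python
# Hypothetical tiny instance (not from the paper): 3 jobs, 2 unrelated machines.
proc = {(0, 0): 2, (0, 1): 4,      # proc[(job, machine)] = processing time
        (1, 0): 3, (1, 1): 2,
        (2, 0): 4, (2, 1): 3}
weight = [1.0, 2.0, 1.0]           # job weights
travel = [5, 3, 4]                 # travel time/cost to each job's customer
w1, w2 = 1.0, 0.5                  # weights of the two objective terms

def evaluate(assign):
    """assign[j] = machine of job j; jobs on a machine run in index order and
    each job is delivered as soon as it finishes processing."""
    clock = [0, 0]
    total_wdt = total_cost = 0.0
    for j, m in enumerate(assign):
        clock[m] += proc[(j, m)]                         # completion on machine m
        total_wdt += weight[j] * (clock[m] + travel[j])  # weighted delivery time
        total_cost += travel[j]
    return w1 * total_wdt + w2 * total_cost

best = min(([a, b, c] for a in (0, 1) for b in (0, 1) for c in (0, 1)), key=evaluate)
print(best, evaluate(best))        # exhaustive search; ACO approximates this at scale
```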
Zeroth-order phase-contrast technique.
Pizolato, José Carlos; Cirino, Giuseppe Antonio; Gonçalves, Cristhiane; Neto, Luiz Gonçalves
2007-11-01
What we believe to be a new phase-contrast technique is proposed to recover intensity distributions from phase distributions modulated by spatial light modulators (SLMs) and binary diffractive optical elements (DOEs). The phase distribution is directly transformed into intensity distributions using a 4f optical correlator and an iris centered in the frequency plane as a spatial filter. No phase-changing plates or phase dielectric dots are used as a filter. This method allows the use of twisted nematic liquid-crystal televisions (LCTVs) operating in the real-time phase-mostly regime between 0 and π to generate high-intensity multiple beams for optical trap applications. It is also possible to use these LCTVs as input SLMs for optical correlators to obtain high-intensity Fourier transform distributions of input amplitude objects.
Time-sliced perturbation theory for large scale structure I: general formalism
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blas, Diego; Garny, Mathias; Sibiryakov, Sergey
2016-07-01
We present a new analytic approach to describe large scale structure formation in the mildly non-linear regime. The central object of the method is the time-dependent probability distribution function generating correlators of the cosmological observables at a given moment of time. Expanding the distribution function around the Gaussian weight we formulate a perturbative technique to calculate non-linear corrections to cosmological correlators, similar to the diagrammatic expansion in a three-dimensional Euclidean quantum field theory, with time playing the role of an external parameter. For the physically relevant case of cold dark matter in an Einstein-de Sitter universe, the time evolution of the distribution function can be found exactly and is encapsulated by a time-dependent coupling constant controlling the perturbative expansion. We show that all building blocks of the expansion are free from spurious infrared enhanced contributions that plague the standard cosmological perturbation theory. This paves the way towards the systematic resummation of infrared effects in large scale structure formation. We also argue that the approach proposed here provides a natural framework to account for the influence of short-scale dynamics on larger scales along the lines of effective field theory.
Simulations of eddy kinetic energy transport in barotropic turbulence
NASA Astrophysics Data System (ADS)
Grooms, Ian
2017-11-01
Eddy energy transport in rotating two-dimensional turbulence is investigated using numerical simulation. Stochastic forcing is used to generate an inhomogeneous field of turbulence and the time-mean energy profile is diagnosed. An advective-diffusive model for the transport is fit to the simulation data by requiring the model to accurately predict the observed time-mean energy distribution. Isotropic harmonic diffusion of energy is found to be an accurate model in the case of uniform, solid-body background rotation (the f plane), with a diffusivity that scales reasonably well with a mixing-length law κ ∝ Vℓ, where V and ℓ are characteristic eddy velocity and length scales. Passive tracer dynamics are added and it is found that the energy diffusivity is 75% of the tracer diffusivity. The addition of a differential background rotation with constant vorticity gradient β leads to significant changes to the energy transport. The eddies generate and interact with a mean flow that advects the eddy energy. Mean advection plus anisotropic diffusion (with reduced diffusivity in the direction of the background vorticity gradient) is moderately accurate for flows with scale separation between the eddies and the mean flow, but anisotropic diffusion becomes a much less accurate model of the transport when scale separation breaks down. Finally, it is observed that the time-mean eddy energy does not look like the actual eddy energy distribution at any instant of time. In the future, stochastic models of the eddy energy transport may prove more useful than models of the mean transport for predicting realistic eddy energy distributions.
NASA Astrophysics Data System (ADS)
Li, Jun-Wei; Cao, Jun-Wei
2010-04-01
One challenge in large-scale scientific data analysis is to monitor data in real-time in a distributed environment. For the LIGO (Laser Interferometer Gravitational-wave Observatory) project, a dedicated suite of data monitoring tools (DMT) has been developed, yielding good extensibility to new data types and high flexibility in a distributed environment. Several services are provided, including visualization of data information in various forms and file output of monitoring results. In this work, a DMT monitor, OmegaMon, is developed for tracking statistics of gravitational-wave (GW) burst triggers that are generated from a specific GW burst data analysis pipeline, the Omega Pipeline. Such results can provide diagnostic information as a reference for trigger post-processing and interferometer maintenance.
A generative model for scientific concept hierarchies.
Datta, Srayan; Adar, Eytan
2018-01-01
In many scientific disciplines, each new 'product' of research (method, finding, artifact, etc.) is often built upon previous findings-leading to extension and branching of scientific concepts over time. We aim to understand the evolution of scientific concepts by placing them in phylogenetic hierarchies where scientific keyphrases from a large, longitudinal academic corpora are used as a proxy of scientific concepts. These hierarchies exhibit various important properties, including power-law degree distribution, power-law component size distribution, existence of a giant component and less probability of extending an older concept. We present a generative model based on preferential attachment to simulate the graphical and temporal properties of these hierarchies which helps us understand the underlying process behind scientific concept evolution and may be useful in simulating and predicting scientific evolution.
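Plain preferential attachment (the Barabási-Albert mechanism, a generic stand-in; the paper's model adds further terms specific to concept hierarchies) already reproduces the power-law degree distribution mentioned above. A compact way to implement degree-proportional selection is to keep a flat list in which each node appears once per incident edge:

```python
import numpy as np

rng = np.random.default_rng(3)

def preferential_attachment(n_nodes):
    """Each new node links to one existing node chosen with probability
    proportional to its degree; `targets` lists every node once per incident
    edge, so a uniform draw from it is a degree-proportional draw."""
    targets = [0, 1]
    edges = [(0, 1)]
    for new in range(2, n_nodes):
        t = targets[rng.integers(len(targets))]
        edges.append((new, t))
        targets.extend([new, t])
    return edges

edges = preferential_attachment(5000)
deg = np.bincount(np.array(edges).ravel())
print(deg.max(), float((deg == 1).mean()))  # a few hubs; most nodes are leaves
```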
Quantitative patterns in drone wars
NASA Astrophysics Data System (ADS)
Garcia-Bernardo, Javier; Dodds, Peter Sheridan; Johnson, Neil F.
2016-02-01
Attacks by drones (i.e., unmanned combat air vehicles) continue to generate heated political and ethical debates. Here we examine the quantitative nature of drone attacks, focusing on how their intensity and frequency compare with that of other forms of human conflict. Instead of the power-law distribution found recently for insurgent and terrorist attacks, the severity of attacks is more akin to lognormal and exponential distributions, suggesting that the dynamics underlying drone attacks lie beyond these other forms of human conflict. We find that the pattern in the timing of attacks is consistent with one side having almost complete control, an important if expected result. We show that these novel features can be reproduced and understood using a generative mathematical model in which resource allocation to the dominant side is regulated through a feedback loop.
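The lognormal-versus-power-law comparison can be sketched by fitting both by maximum likelihood above a lower cutoff and comparing log-likelihoods (synthetic lognormal severities stand in for the actual attack data; the lognormal fit here ignores the cutoff truncation for brevity):

```python
import numpy as np

rng = np.random.default_rng(5)
severity = np.exp(rng.normal(1.0, 0.8, 1000))  # synthetic lognormal event severities
x = severity[severity >= 1.0]                  # analyze the tail above xmin = 1

# Maximum-likelihood fits above the cutoff
alpha = 1.0 + len(x) / np.sum(np.log(x))       # continuous power-law (Hill) estimator
mu, s = np.log(x).mean(), np.log(x).std()      # lognormal fit (truncation ignored)

ll_pl = len(x) * np.log(alpha - 1.0) - alpha * np.sum(np.log(x))
ll_ln = np.sum(-np.log(x * s * np.sqrt(2 * np.pi))
               - (np.log(x) - mu) ** 2 / (2 * s ** 2))
print(ll_ln > ll_pl)  # lognormal wins on lognormal data
```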
Long-distance entanglement-based quantum key distribution experiment using practical detectors.
Takesue, Hiroki; Harada, Ken-Ichi; Tamaki, Kiyoshi; Fukuda, Hiroshi; Tsuchizawa, Tai; Watanabe, Toshifumi; Yamada, Koji; Itabashi, Sei-Ichi
2010-08-02
We report an entanglement-based quantum key distribution experiment that we performed over 100 km of optical fiber using a practical source and detectors. We used a silicon-based photon-pair source that generated high-purity time-bin entangled photons, and high-speed single photon detectors based on InGaAs/InP avalanche photodiodes with the sinusoidal gating technique. To calculate the secure key rate, we employed a security proof that validated the use of practical detectors. As a result, we confirmed the successful generation of sifted keys over 100 km of optical fiber with a key rate of 4.8 bit/s and an error rate of 9.1%, with which we can distill secure keys with a key rate of 0.15 bit/s.
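For intuition about how a 9.1% error rate shrinks the 4.8 bit/s sifted rate, the textbook asymptotic BB84-style bound r = 1 - 2h2(e) gives roughly 0.12 secure bits per sifted bit. Note that this simple formula is not the security proof used in the paper, whose accounting for practical detectors is more conservative and yields the reported 0.15 bit/s:

```python
import math

def h2(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

sifted_rate = 4.8      # bit/s over 100 km (reported)
qber = 0.091           # reported error rate

secret_fraction = max(0.0, 1.0 - 2.0 * h2(qber))  # asymptotic BB84-style bound
print(secret_fraction, sifted_rate * secret_fraction)  # ~0.12, ~0.58 bit/s
```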
Classification framework for partially observed dynamical systems
NASA Astrophysics Data System (ADS)
Shen, Yuan; Tino, Peter; Tsaneva-Atanasova, Krasimira
2017-04-01
We present a general framework for classifying partially observed dynamical systems based on the idea of learning in the model space. In contrast to the existing approaches using point estimates of model parameters to represent individual data items, we employ posterior distributions over model parameters, thus taking into account in a principled manner the uncertainty due to both the generative (observational and/or dynamic noise) and observation (sampling in time) processes. We evaluate the framework on two test beds: a biological pathway model and a stochastic double-well system. Crucially, we show that the classification performance is not impaired when the model structure used for inferring posterior distributions is much more simple than the observation-generating model structure, provided the reduced-complexity inferential model structure captures the essential characteristics needed for the given classification task.
Shin, Seungwoo; Kim, Doyeon; Kim, Kyoohyun; Park, YongKeun
2018-06-15
We present a multimodal approach for measuring the three-dimensional (3D) refractive index (RI) and fluorescence distributions of live cells by combining optical diffraction tomography (ODT) and 3D structured illumination microscopy (SIM). A digital micromirror device is utilized to generate structured illumination patterns for both ODT and SIM, which enables fast and stable measurements. To verify its feasibility and applicability, the proposed method is used to measure the 3D RI distribution and 3D fluorescence image of various samples, including a cluster of fluorescent beads, and the time-lapse 3D RI dynamics of fluorescent beads inside a HeLa cell, from which the trajectory of the beads in the HeLa cell is analyzed using spatiotemporal correlations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crooks, Gavin; Sivak, David
Many interesting divergence measures between conjugate ensembles of nonequilibrium trajectories can be experimentally determined from the work distribution of the process. Herein, we review the statistical and physical significance of several of these measures, in particular the relative entropy (dissipation), Jeffreys divergence (hysteresis), Jensen-Shannon divergence (time-asymmetry), Chernoff divergence (work cumulant generating function), and Renyi divergence.
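Several of these divergences are straightforward to estimate from histogrammed work values. A sketch for the Jensen-Shannon divergence (the time-asymmetry measure above), using synthetic Gaussian forward and reverse work distributions rather than experimental data:

```python
import numpy as np

def kl(p, q):
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def jensen_shannon(p, q):
    m = 0.5 * (p + q)               # mixture: positive wherever p or q is
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Synthetic forward and (negated) reverse work samples, in units of kT
rng = np.random.default_rng(2)
bins = np.linspace(-10.0, 10.0, 81)
wf = rng.normal(2.0, 1.5, 50000)
wr = rng.normal(-2.0, 1.5, 50000)
p, _ = np.histogram(wf, bins)
q, _ = np.histogram(wr, bins)
p = p / p.sum()
q = q / q.sum()
print(jensen_shannon(p, q))         # time-asymmetry; bounded above by ln 2
```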
Making sense of snapshot data: ergodic principle for clonal cell populations
2017-01-01
Population growth is often ignored when quantifying gene expression levels across clonal cell populations. We develop a framework for obtaining the molecule number distributions in an exponentially growing cell population taking into account its age structure. In the presence of generation time variability, the average acquired across a population snapshot does not obey the average of a dividing cell over time, apparently contradicting ergodicity between single cells and the population. Instead, we show that the variation observed across snapshots with known cell age is captured by cell histories, a single-cell measure obtained from tracking an arbitrary cell of the population back to the ancestor from which it originated. The correspondence between cells of known age in a population with their histories represents an ergodic principle that provides a new interpretation of population snapshot data. We illustrate the principle using analytical solutions of stochastic gene expression models in cell populations with arbitrary generation time distributions. We further elucidate that the principle breaks down for biochemical reactions that are under selection, such as the expression of genes conveying antibiotic resistance, which gives rise to an experimental criterion with which to probe selection on gene expression fluctuations. PMID:29187636
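For binary fission, the population growth rate λ implied by a generation-time distribution f(τ) solves the Euler-Lotka relation 2∫e^(-λτ)f(τ)dτ = 1, and for gamma-distributed generation times this has the closed form λ = (2^(1/k) - 1)/θ. The sketch below (an illustrative calculation, not the paper's models) checks the closed form against a Monte Carlo root-find and shows λ exceeding ln 2 over the mean generation time, one way the snapshot/lineage distinction arises:

```python
import numpy as np

k, theta = 4.0, 0.25            # gamma generation times, mean k*theta = 1.0
rng = np.random.default_rng(9)
tau = rng.gamma(k, theta, 200000)

def euler_lotka_lhs(lam):
    return 2.0 * float(np.mean(np.exp(-lam * tau)))   # 2 E[exp(-lam * tau)]

lo, hi = 0.0, 10.0              # bisection: the left-hand side decreases in lam
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if euler_lotka_lhs(mid) > 1.0 else (lo, mid)

lam_closed = (2.0 ** (1.0 / k) - 1.0) / theta   # exact for gamma-distributed tau
print(lo, lam_closed)           # agree; both exceed ln 2 / (mean generation time)
```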
Geographic Visualization of Power-Grid Dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sukumar, Sreenivas R.
2015-06-18
The visualization enables the simulation analyst to see changes in the frequency through time and space. With this technology, the analyst has a bird's-eye view of the frequency at loads and generators as the simulated power system responds to the loss of a generator, spikes in load, and other contingencies. The significance of a contingency to the operation of an electrical power system depends critically on how the resulting transients evolve in time and space. Consequently, these dynamic events can only be understood when seen in their proper geographic context. This understanding is indispensable to engineers working on the next generation of distributed sensing and control systems for the smart grid. By making possible a natural and intuitive presentation of dynamic behavior, our new visualization technology is a situational-awareness tool for power-system engineers.
Introducing causality violation for improved DPOAE component unmixing
NASA Astrophysics Data System (ADS)
Moleti, Arturo; Sisto, Renata; Shera, Christopher A.
2018-05-01
The DPOAE response consists of the linear superposition of two components: a nonlinear distortion component generated in the overlap region, and a reflection component generated by roughness in the DP resonant region. Due to approximate scaling symmetry, the DPOAE distortion component has approximately constant phase. As the reflection component may be considered an SFOAE generated by the forward DP traveling wave, it has a rapidly rotating phase relative to that of its source, which equals the phase of the DPOAE distortion component. This different phase behavior permits effective separation of the DPOAE components (unmixing) using time-domain or time-frequency domain filtering. Departures from scaling symmetry imply fluctuations around zero delay of the distortion component, which may seriously jeopardize the accuracy of these filtering techniques. The differential phase-gradient delay of the reflection component obeys causality requirements, i.e., the delay is positive only, and the fine-structure oscillations of amplitude and phase are correlated with each other, as happens for TEOAEs and SFOAEs relative to their stimulus phase. If one performs the inverse Fourier (or wavelet) transform of a modified DPOAE complex spectrum in which a constant phase function is substituted for the measured one, the time (or time-frequency) distribution shows a peak at exactly zero delay, plus long-latency specularly symmetric components whose modified (positive and negative) delays are relative to that of the distortion component in the original response. Component separation applied to this symmetrized distribution becomes insensitive to systematic errors associated with violation of the scaling symmetry in specific frequency ranges.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Xin-Ping, E-mail: xuxp@mail.ihep.ac.cn; Ide, Yusuke
In the literature, there are numerous studies of one-dimensional discrete-time quantum walks (DTQWs) using a moving shift operator. However, there is no exact solution for the limiting probability distributions of DTQWs on cycles using a general coin or swapping shift operator. In this paper, we derive exact solutions for the limiting probability distribution of quantum walks using a general coin and swapping shift operator on cycles for the first time. Based on the exact solutions, we show how to generate symmetric quantum walks and determine the condition under which a symmetric quantum walk appears. Our results suggest that a symmetric quantum walk can be achieved by choosing various coin and initial state parameters. By defining a quantity to measure the variation of symmetry, the deviation and mixing time of symmetric quantum walks are also investigated.
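A DTQW of the kind studied here is easy to simulate directly. The sketch below is illustrative only; the one-parameter coin and the swapping-shift convention (the coin state flips on each move) are assumptions, not the paper's exact operators:

```python
import numpy as np

def dtqw_cycle(N=8, steps=100, theta=np.pi / 4, psi0=None):
    """Discrete-time quantum walk on an N-cycle with a one-parameter coin
    and a swapping shift operator (coin state flips on each move).
    Returns the position probability distribution after `steps` steps."""
    # amp[x, c]: amplitude at position x with coin c in {0: right, 1: left}
    amp = np.zeros((N, 2), dtype=complex)
    if psi0 is None:
        amp[0, 0] = amp[0, 1] = 1 / np.sqrt(2)   # illustrative initial state
    else:
        amp = psi0.astype(complex)
    coin = np.array([[np.cos(theta),  np.sin(theta)],
                     [np.sin(theta), -np.cos(theta)]])
    for _ in range(steps):
        amp = amp @ coin.T                           # apply coin at every site
        new = np.zeros_like(amp)
        new[(np.arange(N) + 1) % N, 1] = amp[:, 0]   # right-mover, coin swaps to 1
        new[(np.arange(N) - 1) % N, 0] = amp[:, 1]   # left-mover, coin swaps to 0
        amp = new
    return (np.abs(amp) ** 2).sum(axis=1)

p = dtqw_cycle()
```

Both the coin (orthogonal) and the swapping shift (a permutation) are unitary, so the returned distribution always sums to one.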
Modeling of First-Passage Processes in Financial Markets
NASA Astrophysics Data System (ADS)
Inoue, Jun-Ichi; Hino, Hikaru; Sazuka, Naoya; Scalas, Enrico
2010-03-01
In this talk, we attempt a microscopic modeling of the first-passage process (or first-exit process) of the BUND future by means of a minority game with market history. We find that the first-passage process of the minority game with an appropriate history length reproduces the properties of the BTP future (middle- and long-term Italian government bonds with fixed interest rates); namely, both first-passage time distributions exhibit a crossover at some specific time scale, as is the case for the Mittag-Leffler function. We also provide a macroscopic (or phenomenological) model of the first-passage process of the BTP future and show analytically that the first-passage time distribution of the simplest mixture of normal compound Poisson processes does not have such a crossover.
Sadiq, Rehan; Rodriguez, Manuel J
2005-04-01
Interpreting water quality data routinely generated for control and monitoring purposes in water distribution systems is a complicated task for utility managers. In fact, data for diverse water quality indicators (physico-chemical and microbiological) are generated at different times and at different locations in the distribution system. To simplify and improve the understanding and interpretation of water quality, methodologies for aggregation and fusion of data must be developed. In this paper, the Dempster-Shafer theory, also called the theory of evidence, is introduced as a potential methodology for interpreting water quality data. The conceptual basis of this methodology and the process for its implementation are presented through two applications. The first application deals with the fusion of spatial water quality data, while the second deals with the development of a water quality index based on key monitored indicators. Based on the obtained results, the authors discuss the potential contribution of the theory of evidence as a decision-making tool for water quality management.
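Dempster's rule of combination, the core of the theory of evidence, is compact enough to sketch. The sensor masses below are hypothetical, chosen only to illustrate fusing two water-quality judgments:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two basic mass assignments.
    Masses are dicts mapping frozenset focal elements to masses summing to 1."""
    combined, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y          # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    k = 1.0 - conflict                 # normalization factor
    return {s: v / k for s, v in combined.items()}

# two hypothetical sensors judging water quality over the frame {good, poor}
G, P = frozenset({"good"}), frozenset({"poor"})
GP = G | P                             # ignorance: "either"
m1 = {G: 0.6, GP: 0.4}
m2 = {G: 0.7, P: 0.1, GP: 0.2}
fused = dempster_combine(m1, m2)       # belief concentrates on "good"
```

Note how the fused mass on "good" exceeds either source's individual mass: agreement reinforces belief, while the small conflicting mass is renormalized away.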
A 12 GHz wavelength spacing multi-wavelength laser source for wireless communication systems
NASA Astrophysics Data System (ADS)
Peng, P. C.; Shiu, R. K.; Bitew, M. A.; Chang, T. L.; Lai, C. H.; Junior, J. I.
2017-08-01
This paper presents a multi-wavelength laser source with 12 GHz wavelength spacing based on a single distributed feedback laser. A light wave generated from the distributed feedback laser is fed into a frequency shifter loop consisting of a 50:50 coupler, a dual-parallel Mach-Zehnder modulator, an optical amplifier, an optical filter, and a polarization controller. The frequency of the input wavelength is shifted and then re-injected into the frequency shifter loop. By re-injecting the shifted wavelengths multiple times, we have generated 84 optical carriers with 12 GHz wavelength spacing and stable output power. For each channel, two wavelengths are modulated with wireless data using a phase modulator and transmitted through a 25 km single-mode fiber. In contrast to previously developed schemes, the proposed laser source does not suffer from the DC bias drift problem. Moreover, it is a good candidate for radio-over-fiber systems to support multiple users using a single distributed feedback laser.
NASA Technical Reports Server (NTRS)
Chung, Ming-Ying; Ciardo, Gianfranco; Siminiceanu, Radu I.
2007-01-01
The Saturation algorithm for symbolic state-space generation has been a recent breakthrough in the exhaustive verification of complex systems, in particular globally-asynchronous/locally-synchronous systems. The algorithm uses a very compact Multiway Decision Diagram (MDD) encoding for states and the fastest symbolic exploration algorithm to date. The distributed version of Saturation uses the overall memory available on a network of workstations (NOW) to efficiently spread the memory load during the highly irregular exploration. A crucial factor in limiting memory consumption during symbolic state-space generation is the ability to perform garbage collection to free the memory occupied by dead nodes. However, garbage collection over a NOW requires nontrivial communication overhead. In addition, operation cache policies become critical when analyzing large-scale systems using the symbolic approach. In this technical report, we develop a garbage collection scheme and several operation cache policies to help solve extremely complex systems. Experiments show that our schemes improve the performance of the original distributed implementation, SmArTNow, in terms of time and memory efficiency.
Non-Markovian Effects in Turbulent Diffusion in Magnetized Plasmas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zagorodny, Anatoly; Weiland, Jan
2009-10-08
The derivation of the kinetic equations for inhomogeneous plasma in an external magnetic field is presented. The Fokker-Planck-type equations with the non-Markovian kinetic coefficients are proposed. In the time-local limit (small correlation times with respect to the distribution function relaxation time) the relations obtained recover the results known from the appropriate quasilinear theory and the Dupree-Weinstock theory of plasma turbulence. The equations proposed are used to describe zonal flow generation and to estimate the diffusion coefficient for saturated turbulence.
Use of Life-Test Data Analysis Methodology for Analyzing Undesirable Habitual Behavior
1979-07-06
of times to failure for the hardware subjected to life test. In the present context it has been found that detoxification provides a time that one may...the fact that resulting data generated by a single individual tend to belong to a classical distribution of times to failure. Detoxification is used...drug and its immediate toxic effects from his or her body and brain. Detoxification in this sense may result from inpatient or outpatient treatment at a
[Temporal and spatial distribution of red tide in Yangtze River Estuary and adjacent waters].
Liu, Lu-San; Li, Zi-Cheng; Zhou, Juan; Zheng, Bing-Hui; Tang, Jing-Liang
2011-09-01
Records of red tide events in the Yangtze River Estuary and adjacent waters from 1972 to 2009 were collected. Based on geographic information system (GIS) analysis of the temporal and spatial distribution of red tide, a distribution map was generated. The results show: (1) There are three red-tide-prone areas: outside the Yangtze River estuary and east of Sheshan; Huaniaoshan-Shengshan-Gouqi; and Zhoushan and east of Zhujiajian. Red tides occurred 174 times in total, of which 25 events covered an area larger than 1,000 km². After 2000, the frequency of red tides increased significantly. (2) Red tides occurred most frequently in May (51% of total occurrences) and June (20% of total occurrences). (3) Among all red tide plankton, the dominant species were Prorocentrum donghaiense, Skeletonema costatum, Prorocentrum dentatum, and Noctiluca scintillans, which caused 38, 35, 15, and 10 red tides, respectively.
Rapid Source Characterization of the 2011 Mw 9.0 off the Pacific coast of Tohoku Earthquake
Hayes, Gavin P.
2011-01-01
On March 11th, 2011, a moment magnitude 9.0 earthquake struck off the coast of northeast Honshu, Japan, generating what may well turn out to be the most costly natural disaster ever. In the hours following the event, the U.S. Geological Survey National Earthquake Information Center led a rapid response to characterize the earthquake in terms of its location, size, faulting source, shaking and slip distributions, and population exposure, in order to place the disaster in a framework necessary for timely humanitarian response. As part of this effort, fast finite-fault inversions using globally distributed body- and surface-wave data were used to estimate the slip distribution of the earthquake rupture. Models generated within 7 hours of the earthquake origin time indicated that the event ruptured a fault up to 300 km long, roughly centered on the earthquake hypocenter, and involved peak slips of 20 m or more. Updates since this preliminary solution improve the details of this inversion solution and thus our understanding of the rupture process. However, significant observations such as the up-dip nature of rupture propagation and the along-strike length of faulting did not significantly change, demonstrating the usefulness of rapid source characterization for understanding the first order characteristics of major earthquakes.
Distributed interactive communication in simulated space-dwelling groups.
Brady, Joseph V; Hienz, Robert D; Hursh, Steven R; Ragusa, Leonard C; Rouse, Charles O; Gasior, Eric D
2004-03-01
This report describes the development and preliminary application of an experimental test bed for modeling human behavior in the context of a computer-generated environment to analyze the effects of variations in communication modalities, incentives, and stressful conditions. In addition to detailing the methodological development of a simulated task environment that provides for electronic monitoring and recording of individual and group behavior, the initial substantive findings from an experimental analysis of distributed interactive communication in simulated space-dwelling groups are described. Crews of three members each (male and female) participated in simulated "planetary missions" based upon a synthetic scenario task that required identification, collection, and analysis of geologic specimens with a range of grade values. The results of these preliminary studies showed clearly that cooperative and productive interactions were maintained between individually isolated and distributed individuals communicating and problem-solving effectively in a computer-generated "planetary" environment over extended time intervals without benefit of one another's physical presence. Studies on communication channel constraints confirmed the functional interchangeability between available modalities, with the highest degree of interchangeability occurring between Audio and Text modes of communication. The effects of task-related incentives were determined by the conditions under which they were available, with Positive Incentives effectively attenuating decrements in performance under stressful time pressure. © 2003 Elsevier Ltd. All rights reserved.
Jędrak, Jakub; Ochab-Marcinek, Anna
2016-09-01
We study a stochastic model of gene expression, in which protein production has a form of random bursts whose size distribution is arbitrary, whereas protein decay is a first-order reaction. We find exact analytical expressions for the time evolution of the cumulant-generating function for the most general case when both the burst size probability distribution and the model parameters depend on time in an arbitrary (e.g., oscillatory) manner, and for arbitrary initial conditions. We show that in the case of periodic external activation and constant protein degradation rate, the response of the gene is analogous to the resistor-capacitor low-pass filter, where slow oscillations of the external driving have a greater effect on gene expression than the fast ones. We also demonstrate that the nth cumulant of the protein number distribution depends on the nth moment of the burst size distribution. We use these results to show that different measures of noise (coefficient of variation, Fano factor, fractional change of variance) may vary in time in a different manner. Therefore, any biological hypothesis of evolutionary optimization based on the nonmonotonic dependence of a chosen measure of noise on time must justify why it assumes that biological evolution quantifies noise in that particular way. Finally, we show that not only for exponentially distributed burst sizes but also for a wider class of burst size distributions (e.g., Dirac delta and gamma) the control of gene expression level by burst frequency modulation gives rise to proportional scaling of variance of the protein number distribution to its mean, whereas the control by amplitude modulation implies proportionality of protein number variance to the mean squared.
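The closing claim, variance proportional to the mean under burst-frequency modulation, can be checked with a quick Monte Carlo of a simplified continuous bursty-expression model. This is an illustrative sketch, not the paper's exact model; levels are sampled just after each burst, which shifts the mean but not the variance-to-mean scaling:

```python
import numpy as np

rng = np.random.default_rng(1)

def stationary_samples(a, b, n_samples=20000, burn=50.0):
    """Bursts arrive as a Poisson process with frequency `a`, sizes are
    exponential with mean `b`, and the protein level decays with unit
    first-order rate between bursts.  Returns post-burst level samples."""
    x, t, out = 0.0, 0.0, []
    while len(out) < n_samples:
        dt = rng.exponential(1.0 / a)    # waiting time to the next burst
        x *= np.exp(-dt)                 # deterministic first-order decay
        x += rng.exponential(b)          # random burst size
        t += dt
        if t > burn:                     # discard the initial transient
            out.append(x)
    return np.array(out)

# frequency modulation: double the burst rate at fixed burst size;
# the mean grows while the variance-to-mean ratio stays near b
s1 = stationary_samples(a=2.0, b=5.0)
s2 = stationary_samples(a=4.0, b=5.0)
```

For this continuous model the variance-to-mean ratio of both sample sets stays pinned near the mean burst size b, whereas raising b at fixed a (amplitude modulation) would make the variance grow with the square of the mean, matching the abstract's dichotomy.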
An improvement of the measurement of time series irreversibility with visibility graph approach
NASA Astrophysics Data System (ADS)
Wu, Zhenyu; Shang, Pengjian; Xiong, Hui
2018-07-01
We propose a method to improve the measurement of real-valued time series irreversibility which combines two tools: the directed horizontal visibility graph and the Kullback-Leibler divergence. The degree of time irreversibility is estimated by the Kullback-Leibler divergence between the in- and out-degree distributions of the associated visibility graph. In our work, we reframe the in- and out-degree distributions by encoding them with different embedded dimensions, as used in calculating permutation entropy (PE). With this improved method, we can not only estimate time series irreversibility efficiently, but also detect it across multiple dimensions. We verify the validity of our method and then estimate the amount of time irreversibility of series generated by chaotic maps as well as global stock markets over the period 2005-2015. The results show that the amount of time irreversibility peaks at embedded dimension d = 3, both in our experiments and in the financial markets.
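The two tools named above can be combined in a few lines. The sketch below is an illustrative reading of the basic method (without the embedding refinement), using the usual finite-size restriction of the KL sum to the common degree support, and compares an i.i.d. noise series with the fully chaotic logistic map:

```python
import numpy as np
from collections import Counter

def hvg_degrees(x):
    """In/out degree sequences of the directed horizontal visibility graph:
    i -> j (i < j) iff every intermediate value lies strictly below min(x_i, x_j)."""
    n = len(x)
    k_in, k_out = np.zeros(n, int), np.zeros(n, int)
    for i in range(n - 1):
        top = -np.inf                     # running max of intermediate values
        for j in range(i + 1, n):
            if top < x[i] and top < x[j]:
                k_out[i] += 1
                k_in[j] += 1
            top = max(top, x[j])
            if x[j] >= x[i]:
                break                     # all later points are blocked
    return k_in, k_out

def kl_irreversibility(x):
    """KL divergence between out- and in-degree distributions of the HVG,
    summed over the common support (a standard finite-size regularization)."""
    k_in, k_out = hvg_degrees(x)
    pin, pout = Counter(k_in), Counter(k_out)
    n = len(x)
    kl = 0.0
    for k, c in pout.items():
        if pin.get(k, 0) > 0:
            kl += (c / n) * np.log((c / n) / (pin[k] / n))
    return kl

rng = np.random.default_rng(0)
reversible = rng.standard_normal(3000)    # i.i.d. noise: statistically reversible
logistic = np.empty(3000)
logistic[0] = 0.4
for t in range(2999):
    logistic[t + 1] = 4 * logistic[t] * (1 - logistic[t])  # chaotic, irreversible

k_noise = kl_irreversibility(reversible)
k_chaos = kl_irreversibility(logistic)
```

The dissipative logistic map yields a clearly larger divergence than the noise series, which drifts toward zero as the series length grows.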
Generation of radially-polarized terahertz pulses for coupling into coaxial waveguides
Navarro-Cía, Miguel; Wu, Jiang; Liu, Huiyun; Mitrofanov, Oleg
2016-01-01
Coaxial waveguides exhibit no dispersion and therefore can serve as an ideal channel for transmission of broadband THz pulses. Implementation of THz coaxial waveguide systems, however, requires THz beams with a radially-polarized field distribution. We demonstrate the launching of THz pulses into coaxial waveguides using the effect of THz pulse generation at semiconductor surfaces. We find that the radial transient photo-currents produced upon optical excitation of the surface at normal incidence radiate a THz pulse with a field distribution matching the mode of the coaxial waveguide. In this simple scheme, the optical excitation beam diameter controls the spatial profile of the generated radially-polarized THz pulse and allows us to achieve efficient coupling into the TEM waveguide mode in a hollow coaxial THz waveguide. The TEM quasi-single-mode THz waveguide excitation and non-dispersive propagation of a short THz pulse are verified experimentally by time-resolved near-field mapping of the THz field at the waveguide output. PMID:27941845
NASA Technical Reports Server (NTRS)
Truong, L. V.
1994-01-01
Computer graphics are often applied for better understanding and interpretation of data under observation. These graphics become more complicated when animation is required during "run-time", as found in many typical modern artificial intelligence and expert systems. Living Color Frame Maker is a solution to many of these real-time graphics problems. Living Color Frame Maker (LCFM) is a graphics generation and management tool for IBM or IBM compatible personal computers. To eliminate graphics programming, the graphic designer can use LCFM to generate computer graphics frames. The graphical frames are then saved as text files, in a readable and disclosed format, which can be easily accessed and manipulated by user programs for a wide range of "real-time" visual information applications. For example, LCFM can be implemented in a frame-based expert system for visual aids in management of systems. For monitoring, diagnosis, and/or controlling purposes, circuit or systems diagrams can be brought to "life" by using designated video colors and intensities to symbolize the status of hardware components (via real-time feedback from sensors). Thus status of the system itself can be displayed. The Living Color Frame Maker is user friendly with graphical interfaces, and provides on-line help instructions. All options are executed using mouse commands and are displayed on a single menu for fast and easy operation. LCFM is written in C++ using the Borland C++ 2.0 compiler for IBM PC series computers and compatible computers running MS-DOS. The program requires a mouse and an EGA/VGA display. A minimum of 77K of RAM is also required for execution. The documentation is provided in electronic form on the distribution medium in WordPerfect format. A sample MS-DOS executable is provided on the distribution medium. The standard distribution medium for this program is one 5.25 inch 360K MS-DOS format diskette. 
The contents of the diskette are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. The Living Color Frame Maker tool was developed in 1992.
Zhang, Zheshen; Mower, Jacob; Englund, Dirk; Wong, Franco N C; Shapiro, Jeffrey H
2014-03-28
High-dimensional quantum key distribution (HDQKD) offers the possibility of high secure-key rate with high photon-information efficiency. We consider HDQKD based on the time-energy entanglement produced by spontaneous parametric down-conversion and show that it is secure against collective attacks. Its security rests upon visibility data, obtained from Franson and conjugate-Franson interferometers, that probe photon-pair frequency correlations and arrival-time correlations. From these measurements, an upper bound can be established on the eavesdropper's Holevo information by translating the Gaussian-state security analysis for continuous-variable quantum key distribution so that it applies to our protocol. We show that visibility data from just the Franson interferometer provides a weaker, but nonetheless useful, secure-key rate lower bound. To handle multiple-pair emissions, we incorporate the decoy-state approach into our protocol. Our results show that over a 200-km transmission distance in optical fiber, time-energy entanglement HDQKD could permit a 700-bit/sec secure-key rate and a photon information efficiency of 2 secure-key bits per photon coincidence in the key-generation phase using receivers with a 15% system efficiency.
NASA Technical Reports Server (NTRS)
Saganti, P. B.; Zapp, E. N.; Wilson, J. W.; Cucinotta, F. A.
2001-01-01
The US Lab module of the International Space Station (ISS) is a primary working area where the crewmembers are expected to spend the majority of their time. Because of the directionality of the radiation fields caused by the Earth's shadow, the trapped-radiation pitch angle distribution, and inherent variations in ISS shielding, a model is needed to account for these local variations in the radiation distribution. We present calculated radiation dose (rem/yr) values for over 3,000 different points in the working area of the Lab module and estimated radiation dose values for over 25,000 different points in the human body for a given ambient radiation environment. These estimated radiation dose values are presented in a three-dimensional animated interactive visualization format. Such interactive animated visualization of the radiation distribution can be generated in near real-time to track changes in the radiation environment during the orbit precession of the ISS.
NASA Astrophysics Data System (ADS)
Matcharashvili, Teimuraz; Chelidze, Tamaz; Zhukova, Natalia; Mepharidze, Ekaterine; Sborshchikov, Alexander
2010-05-01
Many scientific works on the dynamics of earthquake generation are devoted to qualitative and quantitative reproduction of the behavior of seismic faults. A number of theoretical, numerical, and physical models have already been designed for this purpose. The main assumption of these works is that a correct model must be capable of reproducing a power-law relation for event sizes with magnitudes greater than or equal to some threshold value, similar to the Gutenberg-Richter (GR) law for the size distribution of earthquakes. To model the behavior of seismic faults under laboratory conditions, spring-block experimental systems are often used. They generate stick-slip movement, the intermittent behavior occurring when two solids in contact slide relative to each other while driven at a constant velocity. The wide interest in such spring-block models stems from the fact that stick-slip is recognized as a basic process underlying earthquake generation along pre-existing faults. It is worth mentioning that in stick-slip experiments it is possible to reproduce a power law in the slip-event size distribution with b values close or equal to those found for natural seismicity. The stick-slip process observed in these experimental models is accompanied by transient elastic waves generated during the rapid release of stress energy in the spring-block system. Oscillations of stress energy can be detected as characteristic acoustic emission (AE). The AE accompanying stick slip is the subject of intense investigation, but many aspects of this process are still unclear. In the present research we aimed to investigate the dynamics of stick-slip AE in order to determine whether its distributional properties obey a power law. Experiments were carried out on a spring-block system consisting of fixed and sliding plates of roughly finished basalt samples. The sliding block was driven at a constant velocity. Experiments were carried out for five different stiffnesses of the pulling spring.
Thus five different regimes of stick-slip movement were maintained. The AE accompanying the elementary slip events of the stick-slip process was recorded on a PC sound card. The AE sensor was lead zirconate-titanate with a natural frequency of 100 kHz. To ensure standard conditions in each experiment, the sliding surfaces were sanded with sandpaper and cleaned of dust. AE data analysis consisted of signal conditioning, filtering, and correct separation of wave trains. The onset time of AE was determined at a minimum of the Akaike Information Criterion. Afterwards, time series of AE characteristics were compiled, such as recurrence times between consecutive AE bursts as well as time intervals between their maximums, durations of AE bursts, energy and power of AE, and the maximum modulus of AE wave-train amplitudes. Cumulative probability distributions for all these data sets were constructed and tested for a GR-type power-law relation. It was found that the characteristics of the AE of the stick-slip process depend strongly on the movement regime. The number of registered AE events increased substantially for stiffer springs, while recurrence times and emitted AE energy decreased. A power-law relation was not observed for all AE characteristics, nor for all considered regimes of movement. A power-law relation close to that observed for real seismicity was found for the power of AE time series at stiffer springs. It is interesting that the recurrence times between maximums of consecutive AE bursts and the durations of AE bursts reveal b values in the range 0.6-1.65. These results indicate that the experimental conditions of the stick-slip process, including movement regimes, should be selected with care to ensure similarity between the distributional characteristics of the model and of natural seismicity.
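The b value discussed above is conventionally estimated with Aki's maximum-likelihood formula; a quick check on a synthetic Gutenberg-Richter catalogue (all parameters illustrative):

```python
import numpy as np

def aki_b_value(mags, m_c):
    """Maximum-likelihood (Aki) estimate of the Gutenberg-Richter b value
    for magnitudes at or above the completeness threshold m_c:
    b = log10(e) / (mean(M) - m_c)."""
    m = np.asarray(mags, dtype=float)
    m = m[m >= m_c]
    return np.log10(np.e) / (m.mean() - m_c)

# synthetic GR catalogue: P(M >= m) = 10**(-b (m - m_c)), i.e. magnitudes
# above the threshold are exponential with scale log10(e) / b
rng = np.random.default_rng(42)
b_true, m_c = 1.0, 2.0
mags = m_c + rng.exponential(np.log10(np.e) / b_true, size=5000)
b_hat = aki_b_value(mags, m_c)
```

With 5000 events the estimate recovers the true b value to within a few percent; the same estimator applies directly to the AE burst statistics described above.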
DG Planning with Amalgamation of Operational and Reliability Considerations
NASA Astrophysics Data System (ADS)
Battu, Neelakanteshwar Rao; Abhyankar, A. R.; Senroy, Nilanjan
2016-04-01
Distributed generation has been playing a vital role in dealing with issues related to distribution systems. This paper presents an approach that provides the policy maker with a set of solutions for DG placement to optimize reliability and the real power loss of the system. The optimal location of a distributed generator is evaluated based on performance indices derived for a reliability index and real power loss. The proposed approach is applied to a 15-bus radial distribution system and an 18-bus radial distribution system, with conventional and wind distributed generators considered individually.
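A toy version of loss-based DG siting can be sketched on a single radial feeder. This is a deliberately simplified loss proxy with nominal voltages; the bus data and DG size are illustrative and not taken from the paper's 15- or 18-bus test systems:

```python
import numpy as np

def feeder_losses(loads, r, dg_bus, dg_output):
    """I^2 R-style loss proxy for a single radial feeder.
    loads[i] is the power drawn at bus i+1 and r[i] the resistance of the
    segment feeding bus i+1.  Voltages are taken as nominal (1 p.u.), so
    the loss in a segment is proportional to (power through it) squared."""
    net = loads.copy().astype(float)
    net[dg_bus - 1] -= dg_output             # DG injects at its own bus
    # power through segment i = everything served downstream of it
    flows = np.cumsum(net[::-1])[::-1]
    return float(np.sum(r * flows ** 2))

loads = np.array([1.0, 0.8, 1.2, 0.6, 0.9])    # MW per bus (illustrative)
r = np.array([0.02, 0.03, 0.03, 0.04, 0.04])   # p.u. segment resistances
# exhaustive search over candidate buses for a 1.5 MW unit
best = min(range(1, 6), key=lambda b: feeder_losses(loads, r, b, 1.5))
```

Even this crude proxy reproduces the familiar result that siting the DG well down the feeder, close to the load it offsets, cuts losses sharply relative to the no-DG case; the paper's approach layers reliability indices on top of such a loss index.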
Baker, John [Walnut Creek, CA; Archer, Daniel E [Knoxville, TN; Luke, Stanley John [Pleasanton, CA; Decman, Daniel J [Livermore, CA; White, Gregory K [Livermore, CA
2009-06-23
A tailpulse signal generating/simulating apparatus, system, and method designed to produce electronic pulses which simulate tailpulses produced by a gamma radiation detector, including the pileup effect caused by the characteristic exponential decay of the detector pulses, and the random Poisson-distributed pulse timing for radioactive materials. A digital signal processor (DSP) is programmed and configured to produce digital values corresponding to pseudo-randomly selected pulse amplitudes and pseudo-randomly selected Poisson timing intervals of the tailpulses. Pulse amplitude values are exponentially decayed while outputting the digital value to a digital-to-analog converter (DAC). And pulse amplitudes of new pulses are added to decaying pulses to simulate the pileup effect for enhanced realism in the simulation.
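The pulse-train construction described here (exponential tails, random amplitudes, Poisson timing, additive pileup) can be sketched as follows; all rates and constants are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_tailpulses(rate, tau, duration, fs, amp_range=(0.2, 1.0)):
    """Sketch of a detector tail-pulse train: arrival times form a Poisson
    process (exponential gaps), amplitudes are pseudo-random, and each
    pulse decays exponentially with time constant `tau`.  Overlapping
    pulses simply add, reproducing the pile-up effect."""
    n = int(duration * fs)
    t = np.arange(n) / fs
    signal = np.zeros(n)
    arrival = rng.exponential(1.0 / rate)      # first Poisson arrival
    while arrival < duration:
        amp = rng.uniform(*amp_range)          # pseudo-random pulse height
        mask = t >= arrival
        signal[mask] += amp * np.exp(-(t[mask] - arrival) / tau)
        arrival += rng.exponential(1.0 / rate) # next Poisson gap
    return t, signal

t, s = simulate_tailpulses(rate=5e3, tau=50e-6, duration=10e-3, fs=1e6)
```

With rate x tau = 0.25 the train shows occasional pile-up, where a new pulse rides on the undecayed tail of its predecessor, which is exactly the effect the hardware simulator is built to reproduce before the waveform is handed to a DAC.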
True random bit generators based on current time series of contact glow discharge electrolysis
NASA Astrophysics Data System (ADS)
Rojas, Andrea Espinel; Allagui, Anis; Elwakil, Ahmed S.; Alawadhi, Hussain
2018-05-01
Random bit generators (RBGs) in today's digital information and communication systems employ high-rate physical entropy sources such as electronic, photonic, or thermal time-series signals. However, the proper functioning of such physical systems is bound by specific constraints that make them in some cases weak and susceptible to external attacks. In this study, we show that the electrical current time series of contact glow discharge electrolysis, a dc voltage-powered micro-plasma in liquids, can be used for generating random bit sequences over a wide range of high dc voltages. The current signal is quantized into a binary stream by first applying a simple moving-average function, which centers the distribution around zero, and then applying logical operations that enable the binarized data to pass all tests in the industry-standard randomness test suite of the National Institute of Standards and Technology. Furthermore, the robustness of this RBG against power supply attacks has been examined and verified.
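The quantization pipeline can be approximated in a few lines. The XOR whitening step below is a common stand-in for the unspecified logical operations of the paper, so this is a sketch of the idea rather than the authors' exact procedure, exercised on a mock drifting signal instead of a real discharge current:

```python
import numpy as np

rng = np.random.default_rng(3)

def binarize(signal, window=32):
    """Center a raw current trace with a simple moving average, threshold
    the residual at zero, then XOR adjacent bits to whiten the stream
    (a generic debiasing step standing in for the paper's logical ops)."""
    kernel = np.ones(window) / window
    baseline = np.convolve(signal, kernel, mode="same")  # moving average
    bits = (signal - baseline > 0).astype(np.uint8)      # centered threshold
    return bits[:-1] ^ bits[1:]                          # XOR whitening

# mock signal: noisy current with a slow drift the moving average removes
raw = rng.normal(5.0, 1.0, 100000) + np.linspace(0, 2, 100000)
bits = binarize(raw)
bias = abs(bits.mean() - 0.5)
```

The moving-average subtraction removes the slow drift so the threshold splits the samples evenly; a real implementation would still need to pass the full NIST statistical test suite, as the paper's generator does.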
Wakie, Tewodros; Evangelista, Paul H.; Jarnevich, Catherine S.; Laituri, Melinda
2014-01-01
We used correlative models with species occurrence points, Moderate Resolution Imaging Spectroradiometer (MODIS) vegetation indices, and topo-climatic predictors to map the current distribution and potential habitat of invasive Prosopis juliflora in Afar, Ethiopia. Time series of MODIS Enhanced Vegetation Index (EVI) and Normalized Difference Vegetation Index (NDVI) at 250 m spatial resolution were selected as remote-sensing predictors for mapping distributions, while WorldClim bioclimatic products and topographic variables generated from the Shuttle Radar Topography Mission (SRTM) product were used to predict potential infestations. We ran Maxent models using non-correlated variables and the 143 species occurrence points. Maxent-generated probability surfaces were converted into binary maps using the 10-percentile logistic threshold values. Model performance was evaluated using the area under the receiver-operating characteristic (ROC) curve (AUC). Our results indicate that the extent of P. juliflora invasion is approximately 3,605 km² in the Afar region (AUC = 0.94), while the potential habitat for future infestations is 5,024 km² (AUC = 0.95). Our analyses demonstrate that time series of MODIS vegetation indices and species occurrence points can be used with Maxent modeling software to map the current distribution of P. juliflora, while topo-climatic variables are good predictors of potential habitat in Ethiopia. Our results can quantify current and future infestations, and inform management and policy decisions for containing P. juliflora. Our methods can also be replicated for managing invasive species in other East African countries.
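The rank-based AUC and the 10-percentile threshold used for the binary maps are simple to compute; the suitability scores below are mock data standing in for Maxent output:

```python
import numpy as np

def roc_auc(scores_pres, scores_bg):
    """Rank-based AUC: the probability that a random presence point scores
    higher than a random background point (no special tie handling)."""
    pooled = np.concatenate([scores_pres, scores_bg])
    ranks = pooled.argsort().argsort() + 1.0     # 1-based ordinal ranks
    n_p, n_b = len(scores_pres), len(scores_bg)
    r_pos = ranks[:n_p].sum()
    return (r_pos - n_p * (n_p + 1) / 2) / (n_p * n_b)

rng = np.random.default_rng(5)
presence = rng.normal(0.7, 0.15, 143).clip(0, 1)    # mock suitability scores
background = rng.normal(0.4, 0.2, 1000).clip(0, 1)
auc = roc_auc(presence, background)

# 10-percentile presence threshold: cells scoring above it are mapped suitable
threshold = np.percentile(presence, 10)
suitable = background > threshold
```

The threshold choice deliberately tolerates the lowest-scoring 10% of presence points as omissions, which is why it is a popular conservative rule for converting continuous Maxent surfaces into binary range maps.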
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andersson, P., E-mail: peter.andersson@physics.uu.se; Andersson-Sunden, E.; Sjöstrand, H.
2014-08-01
In nuclear boiling water reactor cores, the distribution of water and steam (void) is essential for both safety and efficiency reasons. In order to enhance predictive capabilities, void distribution assessment is performed in two-phase test-loops under reactor-relevant conditions. This article proposes the novel technique of fast-neutron tomography using a portable deuterium-tritium neutron generator to determine the time-averaged void distribution in these loops. Fast neutrons have the advantage of high transmission through the metallic structures and pipes typically concealing a thermal-hydraulic test loop, while still being fairly sensitive to the water/void content. However, commercially available fast-neutron generators also have the disadvantagemore » of a relatively low yield and fast-neutron detection also suffers from relatively low detection efficiency. Fortunately, some loops are axially symmetric, a property which can be exploited to reduce the amount of data needed for tomographic measurement, thus limiting the interrogation time needed. In this article, three axially symmetric test objects depicting a thermal-hydraulic test loop have been examined; steel pipes with outer diameter 24 mm, thickness 1.5 mm, and with three different distributions of the plastic material POM inside the pipes. Data recorded with the FANTOM fast-neutron tomography instrument have been used to perform tomographic reconstructions to assess their radial material distribution. Here, a dedicated tomographic algorithm that exploits the symmetry of these objects has been applied, which is described in the paper. Results are demonstrated in 20 rixel (radial pixel) reconstructions of the interior constitution and 2D visualization of the pipe interior is demonstrated. The local POM attenuation coefficients in the rixels were measured with errors (RMS) of 0.025, 0.020, and 0.022 cm{sup −1}, solid POM attenuation coefficient. 
The accuracy and precision are high enough to provide a useful indication of the flow mode, and a visualization of the radial material distribution can be obtained. A benefit of this system is its potential to be mounted at any axial height of a two-phase test section without requiring pre-fabricated entrances or windows. This could mean a significant increase in the flexibility of void distribution assessment at many existing two-phase test loops.
Temporal and Spatial Analysis of Monogenetic Volcanic Fields
NASA Astrophysics Data System (ADS)
Kiyosugi, Koji
Achieving an understanding of the nature of monogenetic volcanic fields depends on identifying the spatial and temporal patterns of volcanism in these fields, and their relationships to structures mapped in the shallow crust and inferred in the deep crust and mantle through interpretation of geochemical, radiometric and geophysical data. We investigate the spatial and temporal distributions of volcanism in the Abu Monogenetic Volcano Group, Southwest Japan. The E-W elongated volcano distribution, identified using a nonparametric kernel method, is consistent with the spatial extent of P-wave velocity anomalies in the lower crust and upper mantle, supporting the idea that the spatial density map of volcanic vents reflects the geometry of a mantle diapir. The estimated basalt supply to the lower crust is constant; this observation, together with the spatial distribution of volcanic vents, suggests stable magma productivity and an essentially constant two-dimensional size of the source mantle diapir. We mapped conduits, dike segments, and sills in the San Rafael sub-volcanic field, Utah, where the shallowest part of a Pliocene magmatic system is exceptionally well exposed. The distribution of conduits matches the major features of the dike distribution, including the development of clusters and the distribution of outliers. The comparison of the San Rafael conduit distribution with the distributions of volcanoes in several recently active volcanic fields supports the use of statistical models, such as nonparametric kernel methods, in probabilistic hazard assessment for distributed volcanism. We also developed a new recurrence-rate calculation method that uses a Monte Carlo procedure to better reflect and understand the impact of uncertainties in radiometric age determinations on the uncertainty of recurrence-rate estimates for volcanic activity in the Abu, Yucca Mountain Region, and Izu-Tobu volcanic fields.
Results suggest that the recurrence rates of volcanic fields can change by more than one order of magnitude on time scales of several hundred thousand to several million years. This suggests that magma generation rate beneath volcanic fields may change over these time scales. Also, recurrence rate varies more than one order of magnitude between these volcanic fields, consistent with the idea that distributed volcanism may be influenced by both the rate of magma generation and the potential for dike interaction during ascent.
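The nonparametric kernel method mentioned above can be sketched as a simple 2-D Gaussian kernel density estimate over vent locations; the coordinates, grid and bandwidth below are illustrative and not taken from the study.

```python
import math

def kernel_density(vents, grid_pts, bandwidth=1.0):
    """2-D Gaussian kernel density estimate of vent spatial intensity.

    vents: list of (x, y) vent coordinates (e.g., km); bandwidth: smoothing
    length. Returns the density evaluated at each point in grid_pts."""
    norm = 1.0 / (2.0 * math.pi * bandwidth ** 2 * len(vents))
    dens = []
    for gx, gy in grid_pts:
        # Sum a Gaussian bump centered on every vent
        s = sum(math.exp(-((gx - x) ** 2 + (gy - y) ** 2) / (2.0 * bandwidth ** 2))
                for x, y in vents)
        dens.append(norm * s)
    return dens
```

A vent cluster then shows up as a high-density region, which is the kind of map compared against the P-wave anomaly extent.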
Quantum cryptography and applications in the optical fiber network
NASA Astrophysics Data System (ADS)
Luo, Yuhui
2005-09-01
Quantum cryptography, as part of quantum information and communications, can provide absolute security for information transmission because it is established on the fundamental laws of quantum theory, such as the uncertainty principle, the no-cloning theorem and quantum entanglement. In this thesis research, a novel scheme to implement quantum key distribution based on multiphoton entanglement with a new protocol is proposed. Its advantages are that a larger information capacity and a longer transmission distance can be obtained, and that the detection of multiple photons is easier than that of a single photon. The security and attacks pertaining to such a system are also studied. Next, quantum key distribution over wavelength division multiplexed (WDM) optical fiber networks is realized. Quantum key distribution in networks is a long-standing problem for practical applications. Here we combine quantum cryptography and WDM to solve this problem, because WDM technology is universally deployed in current and next-generation fiber networks. The ultimate target is to deploy quantum key distribution over commercial networks. The problems arising from the networks are also studied in this part. Then quantum key distribution in multi-access networks using wavelength routing technology is investigated. For the first time, quantum cryptography for multiple individually targeted users has been successfully implemented, in sharp contrast to schemes using an indiscriminating broadcasting structure. This overcomes the shortcoming that every user in the network can acquire the quantum key signals intended to be exchanged between only two users. Furthermore, a more efficient scheme of quantum key distribution is adopted, resulting in a higher key rate. Lastly, a quantum random number generator based on quantum optics has been experimentally demonstrated.
This device is a key component for quantum key distribution, as it creates truly random numbers, an essential requirement for the protocol. The new generator is composed of a single optical fiber coupler with fiber pigtails and can easily be used in optical fiber communications.
NASA Astrophysics Data System (ADS)
Wang, Jianzong; Chen, Yanjun; Hua, Rui; Wang, Peng; Fu, Jia
2012-02-01
Photovoltaics is a method of generating electrical power by converting solar radiation into direct-current electricity using semiconductors that exhibit the photovoltaic effect. Photovoltaic power generation employs solar panels composed of a number of solar cells containing a photovoltaic material. Due to the growing demand for renewable energy sources, the manufacturing of solar cells and photovoltaic arrays has advanced considerably in recent years. Solar photovoltaics are growing rapidly, albeit from a small base, to a total global capacity of 40,000 MW at the end of 2010, and more than 100 countries use solar photovoltaics. Driven by advances in technology and increases in manufacturing scale and sophistication, the cost of photovoltaics has declined steadily since the first solar cells were manufactured. Net metering and financial incentives, such as preferential feed-in tariffs for solar-generated electricity, have supported solar photovoltaic installations in many countries. However, the power generated by solar photovoltaics is affected dramatically by the weather and other natural factors. Predicting photovoltaic energy accurately is important for intelligent power dispatch, in order to reduce energy dissipation and maintain the security of the power grid. In this paper, we propose a big data system, the Solar Photovoltaic Power Forecasting System (SPPFS), to calculate and predict the power according to real-time conditions. In this system, we utilize a distributed mixed database to speed up the collection, storage and analysis of meteorological data. To improve the accuracy of power prediction, a neural network algorithm has been incorporated into SPPFS. Through extensive experiments, we show that the framework can provide high forecast accuracy, with an error rate of less than 15%, and achieve low computing latency by deploying the mixed distributed database architecture for solar-generated electricity.
NASA Astrophysics Data System (ADS)
Shuai, Yanhua; Douglas, Peter M. J.; Zhang, Shuichang; Stolper, Daniel A.; Ellis, Geoffrey S.; Lawson, Michael; Lewan, Michael D.; Formolo, Michael; Mi, Jingkui; He, Kun; Hu, Guoyi; Eiler, John M.
2018-02-01
Multiply isotopically substituted molecules ('clumped' isotopologues) can be used as geothermometers because their proportions at isotopic equilibrium relative to a random distribution of isotopes amongst all isotopologues are functions of temperature. This has allowed measurements of clumped-isotope abundances to be used to constrain formation temperatures of several natural materials. However, kinetic processes during generation, modification, or transport of natural materials can also affect their clumped-isotope compositions. Herein, we show that methane generated experimentally by closed-system hydrous pyrolysis of shale or nonhydrous pyrolysis of coal yields clumped-isotope compositions consistent with an equilibrium distribution of isotopologues under some experimental conditions (temperature-time conditions corresponding to 'low,' 'mature,' and 'over-mature' stages of catagenesis), but can have non-equilibrium (i.e., kinetically controlled) distributions under other experimental conditions ('high' to 'over-mature' stages), particularly for pyrolysis of coal. Non-equilibrium compositions, when present, lead the measured proportions of clumped species to be lower than expected for equilibrium at the experimental temperature, and in some cases to be lower than a random distribution of isotopes (i.e., negative Δ18 values). We propose that the consistency with equilibrium for methane formed by relatively low temperature pyrolysis reflects local reversibility of isotope exchange reactions involving a reactant or transition state species during demethylation of one or more components of kerogen. Non-equilibrium clumped-isotope compositions occur under conditions where 'secondary' cracking of retained oil in shale or wet gas hydrocarbons (C2-5, especially ethane) in coal is prominent. 
We suggest these non-equilibrium isotopic compositions result from the expression of kinetic isotope effects during the irreversible generation of methane from an alkyl precursor. Other interpretations are also explored. These findings provide new insights into the chemistry of thermogenic methane generation, and may explain the elevated apparent temperatures recorded by the methane clumped-isotope thermometer in some natural gases. However, it remains unknown whether the laboratory experiments capture the processes that occur at the longer times and lower temperatures of natural gas formation.
Snow fracture: From micro-cracking to global failure
NASA Astrophysics Data System (ADS)
Capelli, Achille; Reiweger, Ingrid; Schweizer, Jürg
2017-04-01
Slab avalanches are caused by a crack forming and propagating in a weak layer within the snow cover, which eventually causes the detachment of the overlying cohesive slab. The gradual damage process leading to the nucleation of the initial failure is still not entirely understood. We therefore studied the damage process preceding snow failure by analyzing the acoustic emissions (AE) generated by bond failure or micro-cracking; the AE allow the ongoing progressive failure to be studied non-destructively. We performed fully load-controlled failure experiments on snow samples containing a weak layer and recorded the generated AE. The size and frequency of the AE increased before failure, revealing an acceleration of the damage process. The AE energy was power-law distributed, and the exponent (b-value) decreased approaching failure. The waiting time followed an exponential distribution with an exponential coefficient λ that increased before failure. The decrease of the b-value and the increase of λ correspond to a change in the event statistics, indicating a transition from homogeneously distributed, uncorrelated damage producing mostly small AE to localized damage producing larger, correlated events that lead to brittle failure. We observed brittle failure in the fast experiments and more ductile behavior in the slow experiments; this rate dependence was reflected in the AE signature as well. In the slow experiments the b-value and λ were almost constant and the energy-rate increase was moderate, indicating that the damage process was in a stable state, with the damage and healing processes balanced. On a shorter time scale, however, the AE parameters varied, indicating that the damage process was not steady but consisted of a sum of small bursts.
We assume that the bursts may have been generated by cascades of correlated micro-cracks caused by localization of stresses at a small scale. The healing process may then have prevented the self-organization of this small scale damage and, therefore, the total failure of the sample.
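The b-value analysis described above amounts to fitting a power-law exponent to the AE energy distribution. A standard continuous maximum-likelihood estimator for such an exponent (a generic sketch, not necessarily the estimator the authors used) is:

```python
import math

def power_law_exponent(energies, e_min):
    """Maximum-likelihood estimate of the exponent b for a sample drawn from
    a continuous power law p(E) ~ E^(-b) for E >= e_min.

    Uses the standard MLE: b_hat = 1 + n / sum(ln(E_i / e_min))."""
    tail = [e for e in energies if e >= e_min]
    return 1.0 + len(tail) / sum(math.log(e / e_min) for e in tail)
```

Tracking this estimate in a sliding window over the AE catalog is one simple way to observe the decrease of the b-value toward failure.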
Space-Time Dependent Transport, Activation, and Dose Rates for Radioactivated Fluids.
NASA Astrophysics Data System (ADS)
Gavazza, Sergio
Two methods are developed to calculate the space- and time-dependent mass transport of radionuclides, their production and decay, and the associated dose rates generated by radioactivated fluids flowing through pipes. The work couples space- and time-dependent phenomena that are treated as only space- or time-dependent in the open literature. The transport and activation methodology (TAM) is used to numerically calculate the space- and time-dependent transport and activation of radionuclides in fluids flowing through pipes exposed to radiation fields, and the volumetric radioactive sources created by radionuclide motion. The computer program Radionuclide Activation and Transport in Pipe (RNATPA1) performs the numerical calculations required in TAM. The gamma-ray dose methodology (GAM) is used to numerically calculate space- and time-dependent gamma-ray dose-equivalent rates from the volumetric radioactive sources determined by TAM. The computer program Gamma Ray Dose Equivalent Rate (GRDOSER) performs the numerical calculations required in GAM. The scope of conditions considered by TAM and GAM herein includes (a) laminar flow in straight pipe, (b) recirculating flow schemes, (c) time-independent fluid velocity distributions, (d) space-dependent monoenergetic neutron flux distributions, (e) space- and time-dependent activation of a single parent nuclide and transport and decay of a single daughter radionuclide, and (f) assessment of the space- and time-dependent gamma-ray dose rates outside the pipe generated by the space- and time-dependent source-term distributions inside it. The methodologies, however, can easily be extended to include all the situations of interest for solving the phenomena addressed in this dissertation. A comparison is made between results obtained by the described calculational procedures and analytical expressions.
The physics of the problems addressed by the new technique, and its increased accuracy versus non-space- and time-dependent methods, are presented. The value of the methods is also discussed. It has been demonstrated that TAM and GAM can be used to enhance the understanding of the space- and time-dependent mass transport of radionuclides, their production and decay, and the associated dose rates for radioactivated fluids flowing through pipes.
The impacts of precipitation amount simulation on hydrological modeling in Nordic watersheds
NASA Astrophysics Data System (ADS)
Li, Zhi; Brissette, Francois; Chen, Jie
2013-04-01
Stochastic modeling of daily precipitation is very important for hydrological modeling, especially when no observed data are available. Precipitation is usually modeled with a two-component model: occurrence generation and amount simulation. For occurrence simulation, the most common method is the first-order two-state Markov chain, owing to its simplicity and good performance. However, various probability distributions have been proposed to simulate precipitation amounts, and spatiotemporal differences exist in the applicability of different distribution models. Assessing the applicability of different distribution models is therefore necessary in order to provide more accurate precipitation information. Six precipitation probability distributions (exponential, Gamma, Weibull, skewed normal, mixed exponential, and hybrid exponential/Pareto) are evaluated directly and indirectly on their ability to reproduce the observed time series of precipitation amounts. Data from 24 weather stations and two watersheds (Chute-du-Diable and Yamaska) in the province of Quebec (Canada) are used for this assessment. Various indices and statistics, such as the mean, variance, frequency distribution and extreme values, are used to quantify performance in simulating precipitation and discharge. Performance in reproducing key statistics of the precipitation time series is well correlated with the number of parameters of the distribution function, and the three-parameter models outperform the others, with the mixed exponential distribution being the best at simulating daily precipitation. The advantage of using more complex precipitation distributions is not as clear-cut when the simulated time series are used to drive a hydrological model. While the advantage of functions with more parameters is then not nearly as obvious, the mixed exponential distribution nonetheless appears to be the best candidate for hydrological modeling.
The implications of choosing a distribution function with respect to hydrological modeling and climate change impact studies are also discussed.
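As a sketch of how the best-performing model above, the two-component mixed exponential, can be fitted to wet-day precipitation amounts, the following EM routine estimates the mixture weight and the two component means; the initialization and iteration count are illustrative choices, not the study's settings.

```python
import math
from statistics import mean

def fit_mixed_exponential(x, iters=200):
    """EM fit of a two-component mixed exponential density
    f(x) = w/m1 * exp(-x/m1) + (1-w)/m2 * exp(-x/m2) to positive amounts x."""
    m1, m2, w = 0.5 * mean(x), 1.5 * mean(x), 0.5
    for _ in range(iters):
        # E-step: responsibility of component 1 for each observation
        resp = []
        for xi in x:
            p1 = w / m1 * math.exp(-xi / m1)
            p2 = (1.0 - w) / m2 * math.exp(-xi / m2)
            resp.append(p1 / (p1 + p2))
        # M-step: update weight and component means
        w = sum(resp) / len(x)
        m1 = sum(r * xi for r, xi in zip(resp, x)) / sum(resp)
        m2 = sum((1 - r) * xi for r, xi in zip(resp, x)) / sum(1 - r for r in resp)
    return w, m1, m2
```

By construction, each M-step matches the mixture mean to the sample mean, while the extra parameters let the fit capture the heavy right tail that a single exponential misses.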
On the mixing time of geographical threshold graphs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bradonjic, Milan
In this paper, we study the mixing time of random graphs generated by the geographical threshold graph (GTG) model, a generalization of random geometric graphs (RGG). In a GTG, nodes are distributed in a Euclidean space, and edges are assigned according to a threshold function involving the distance between nodes as well as randomly chosen node weights. The motivation for analyzing this model is that many real networks (e.g., wireless networks, the Internet, etc.) need to be studied by using a 'richer' stochastic model (which in this case includes both a distance between nodes and weights on the nodes). We specifically study the mixing times of random walks on 2-dimensional GTGs near the connectivity threshold. We provide a set of criteria on the distribution of vertex weights that guarantees that the mixing time is Θ(n log n).
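A minimal sampler for a GTG of the kind described above might look as follows, assuming uniform positions in the unit square, exponential weights, and the common connection rule (w_u + w_v)/d(u,v)^alpha >= theta; the exact threshold function and weight distribution analyzed in the paper may differ.

```python
import math
import random

def geographical_threshold_graph(n, theta, alpha=2.0, seed=0):
    """Sample a geographical threshold graph.

    Nodes get uniform positions in the unit square and exponential weights;
    u and v are joined iff (w_u + w_v) / d(u, v)**alpha >= theta."""
    rng = random.Random(seed)
    pos = [(rng.random(), rng.random()) for _ in range(n)]
    w = [rng.expovariate(1.0) for _ in range(n)]
    edges = set()
    for u in range(n):
        for v in range(u + 1, n):
            d = math.dist(pos[u], pos[v])
            if d > 0 and (w[u] + w[v]) / d ** alpha >= theta:
                edges.add((u, v))
    return pos, w, edges
```

Raising theta sparsifies the graph, which is how one would empirically probe behavior near the connectivity threshold.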
A novel weight determination method for time series data aggregation
NASA Astrophysics Data System (ADS)
Xu, Paiheng; Zhang, Rong; Deng, Yong
2017-09-01
Aggregation in time series is of great importance in time series smoothing, prediction and other analysis processes, which makes it crucial to determine the weights in time series correctly and reasonably. In this paper, a novel method to obtain the weights in time series is proposed, in which we adopt the induced ordered weighted aggregation (IOWA) operator and the visibility graph averaging (VGA) operator and linearly combine the weights separately generated by the two operators. The IOWA operator is introduced to the weight determination of time series so that the time decay factor is taken into consideration. The VGA operator generates weights based on the degree distribution of the visibility graph constructed from the corresponding time series, which reflects the relative importance of vertices in the time series. The proposed method is applied to two practical datasets to illustrate its merits. The aggregation of the Construction Cost Index (CCI) demonstrates the ability of the proposed method to smooth time series, while the aggregation of the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX) illustrates how the proposed method maintains the variation tendency of the original data.
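The visibility-graph step can be illustrated with the natural visibility criterion, in which two samples are linked if every intermediate sample lies strictly below the straight line joining them; degree-proportional weights then follow directly. This is a generic sketch, not the authors' implementation.

```python
def visibility_graph(series):
    """Natural visibility graph of a time series: samples a and b see each
    other if every intermediate sample lies strictly below the line
    joining (a, series[a]) and (b, series[b])."""
    n = len(series)
    edges = set()
    for a in range(n):
        for b in range(a + 1, n):
            visible = all(
                series[c] < series[b] + (series[a] - series[b]) * (b - c) / (b - a)
                for c in range(a + 1, b))
            if visible:
                edges.add((a, b))
    return edges

def degree_weights(series):
    """Aggregation weights proportional to node degree in the visibility graph."""
    deg = [0] * len(series)
    for a, b in visibility_graph(series):
        deg[a] += 1
        deg[b] += 1
    total = sum(deg)
    return [d / total for d in deg]
```

High-degree vertices, typically local peaks visible from many other samples, receive larger weights, matching the "relative importance" idea in the abstract.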
Spatial Burnout in Water Reactors with Nonuniform Startup Distributions of Uranium and Boron
NASA Technical Reports Server (NTRS)
Fox, Thomas A.; Bogart, Donald
1955-01-01
Spatial burnout calculations have been made of two types of water moderated cylindrical reactor using boron as a burnable poison to increase reactor life. Specific reactors studied were a version of the Submarine Advanced Reactor (SAR) and a supercritical water reactor (SCW). Burnout characteristics such as reactivity excursion, neutron-flux and heat-generation distributions, and uranium and boron distributions have been determined for core lives corresponding to a burnup of approximately 7 kilograms of fully enriched uranium. All reactivity calculations have been based on the actual nonuniform distribution of absorbers existing during intervals of core life. Spatial burnout of uranium and boron and spatial build-up of fission products and equilibrium xenon have been considered. Calculations were performed on the NACA nuclear reactor simulator using two-group diffusion theory. The following reactor burnout characteristics have been demonstrated: 1. A significantly lower excursion in reactivity during core life may be obtained by nonuniform rather than uniform startup distribution of uranium. Results for SCW with uranium distributed to provide constant radial heat generation and a core life corresponding to a uranium burnup of 7 kilograms indicated a maximum excursion in reactivity of 2.5 percent. This compared to a maximum excursion of 4.2 percent obtained for the same core life when uranium was uniformly distributed at startup. Boron was incorporated uniformly in these cores at startup. 2. It is possible to approach constant radial heat generation during the life of a cylindrical core by means of startup nonuniform radial and axial distributions of uranium and boron. Results for SCW with nonuniform radial distribution of uranium to provide constant radial heat generation at startup and with boron for longevity indicate relatively small departures from the initially constant radial heat generation distribution during core life.
Results for SAR with a sinusoidal distribution rather than uniform axial distributions of boron indicate significant improvements in axial heat generation distribution during the greater part of core life. 3. Uranium investments for cylindrical reactors with nonuniform radial uranium distributions which provide constant radial heat generation per unit core volume are somewhat higher than for reactors with uniform uranium concentration at startup. On the other hand, uranium investments for reactors with axial boron distributions which approach constant axial heat generation are somewhat smaller than for reactors with uniform boron distributions at startup.
Shen, Huan; Chen, Jianjun; Hua, Linqiang; Zhang, Bing
2014-06-26
The photodissociation dynamics of allyl chloride at 200 and 266 nm has been studied by femtosecond time-resolved mass spectrometry coupled with photoelectron imaging. The molecule was prepared in different excited states by selective pumping with a 400 or 266 nm pulse. The dissociation products were then probed by multiphoton ionization with an 800 nm pulse. After absorption of two photons at 400 nm, several dissociation channels were directly observed in the mass spectrum. The two important channels, C-Cl fission and HCl elimination, were found to decay with multiexponential functions. For C-Cl fission, two time constants, 48 ± 1 fs and 85 ± 40 ps, were observed. The first was due to the fast predissociation process on the repulsive nσ*/πσ* state. The second could be ascribed to dissociation on the vibrationally excited ground state, which is generated after internal conversion from the initially prepared ππ* state. HCl elimination, a typical example of a molecular elimination reaction, was found to proceed with two time constants, 600 ± 135 fs and 14 ± 2 ps. We assigned the first to dissociation on the excited state and the second to internal conversion from the ππ* state to the ground state followed by dissociation on the ground state. When we excited the molecule with 266 nm light, the transient signals decayed exponentially with a time constant of ∼48 fs, which coincides with the time scale of direct C-halogen dissociation. Photoelectron images, which provided translational and angular distributions of the generated electrons, were also recorded. Detailed analysis of the kinetic energy distributions strongly suggested that C3H4(+) and C3H5(+) were generated from ionization of the neutral radicals. The present study reveals the dissociation dynamics of allyl chloride in a time-resolved way.
Understanding the source of multifractality in financial markets
NASA Astrophysics Data System (ADS)
Barunik, Jozef; Aste, Tomaso; Di Matteo, T.; Liu, Ruipeng
2012-09-01
In this paper, we use the generalized Hurst exponent approach to study the multi-scaling behavior of different financial time series. We show that this approach is robust and powerful in detecting different types of multi-scaling. We observe a puzzling phenomenon: an apparent increase in multifractality is measured in time series generated from shuffled returns, in which all time correlations are destroyed while the return distributions are conserved. This effect is robust, and it is reproduced in several real financial datasets, including stock market indices, exchange rates and interest rates. To understand the origin of this effect we investigate different simulated time series by means of the Markov switching multifractal model, autoregressive fractionally integrated moving average processes with stable innovations, fractional Brownian motion and Lévy flights. Overall, we conclude that the multifractality observed in financial time series is mainly a consequence of the characteristic fat-tailed distribution of the returns, and that time correlations act to decrease the measured multifractality.
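The generalized Hurst exponent approach estimates H(q) from the scaling of q-th order moments of increments, K_q(tau) = <|x(t+tau) - x(t)|^q> ~ tau^(q H(q)). A minimal estimator along these lines (the tau range and the simple log-log regression are illustrative choices) is:

```python
import math

def generalized_hurst(x, q=2.0, taus=(1, 2, 3, 4, 5)):
    """Estimate the generalized Hurst exponent H(q) of a series x by
    regressing log K_q(tau) on log tau and dividing the slope by q."""
    logk, logt = [], []
    for tau in taus:
        inc = [abs(x[t + tau] - x[t]) ** q for t in range(len(x) - tau)]
        logk.append(math.log(sum(inc) / len(inc)))
        logt.append(math.log(tau))
    n = len(taus)
    mt, mk = sum(logt) / n, sum(logk) / n
    slope = (sum((a - mt) * (b - mk) for a, b in zip(logt, logk))
             / sum((a - mt) ** 2 for a in logt))
    return slope / q
```

A dependence of H(q) on q signals multi-scaling; for an ordinary random walk H(q) stays near 0.5 for all q.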
Measures of dependence for multivariate Lévy distributions
NASA Astrophysics Data System (ADS)
Boland, J.; Hurd, T. R.; Pivato, M.; Seco, L.
2001-02-01
Recent statistical analysis of a number of financial databases is summarized. Increasing agreement is found that logarithmic equity returns show a certain type of asymptotic behavior of the largest events, namely that the probability density functions have power law tails with an exponent α≈3.0. This behavior does not vary much over different stock exchanges or over time, despite large variations in trading environments. The present paper proposes a class of multivariate distributions which generalizes the observed qualities of univariate time series. A new consequence of the proposed class is the "spectral measure" which completely characterizes the multivariate dependences of the extreme tails of the distribution. This measure on the unit sphere in M-dimensions, in principle completely general, can be determined empirically by looking at extreme events. If it can be observed and determined, it will prove to be of importance for scenario generation in portfolio risk management.
A modified Monte Carlo model for the ionospheric heating rates
NASA Technical Reports Server (NTRS)
Mayr, H. G.; Fontheim, E. G.; Robertson, S. C.
1972-01-01
A Monte Carlo method is adopted as a basis for the derivation of the photoelectron heat input into the ionospheric plasma. This approach is modified in an attempt to minimize the computation time. The heat input distributions are computed for arbitrarily small source elements that are spaced at distances apart corresponding to the photoelectron dissipation range. By means of a nonlinear interpolation procedure their individual heating rate distributions are utilized to produce synthetic ones that fill the gaps between the Monte Carlo generated distributions. By varying these gaps and the corresponding number of Monte Carlo runs the accuracy of the results is tested to verify the validity of this procedure. It is concluded that this model can reduce the computation time by more than a factor of three, thus improving the feasibility of including Monte Carlo calculations in self-consistent ionosphere models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Di; Lian, Jianming; Sun, Yannan
Demand response represents a significant but largely untapped resource that can greatly enhance the flexibility and reliability of power systems. In this paper, a hierarchical control framework is proposed to facilitate the integrated coordination between distributed energy resources and demand response. The proposed framework consists of coordination and device layers. In the coordination layer, various resource aggregations are optimally coordinated in a distributed manner to achieve the system-level objectives. In the device layer, individual resources are controlled in real time to follow the optimal power generation or consumption dispatched from the coordination layer. For the purpose of practical applications, a method is presented to determine the utility functions of controllable loads by taking into account the real-time load dynamics and the preferences of individual customers. The effectiveness of the proposed framework is validated by detailed simulation studies.
Effect of reaction-step-size noise on the switching dynamics of stochastic populations
NASA Astrophysics Data System (ADS)
Be'er, Shay; Heller-Algazi, Metar; Assaf, Michael
2016-05-01
In genetic circuits, when the messenger RNA lifetime is short compared to the cell cycle, proteins are produced in geometrically distributed bursts, which greatly affects the cellular switching dynamics between different metastable phenotypic states. Motivated by this scenario, we study a general problem of switching or escape in stochastic populations, where influx of particles occurs in groups or bursts, sampled from an arbitrary distribution. The fact that the step size of the influx reaction is a priori unknown and, in general, may fluctuate in time with a given correlation time and statistics, introduces an additional nondemographic reaction-step-size noise into the system. Employing the probability-generating function technique in conjunction with Hamiltonian formulation, we are able to map the problem in the leading order onto solving a stationary Hamilton-Jacobi equation. We show that compared to the "usual case" of single-step influx, bursty influx exponentially decreases the population's mean escape time from its long-lived metastable state. In particular, close to bifurcation we find a simple analytical expression for the mean escape time which solely depends on the mean and variance of the burst-size distribution. Our results are demonstrated on several realistic distributions and compare well with numerical Monte Carlo simulations.
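The effect of bursty influx can be illustrated with a small Gillespie-type simulation comparing single-step and geometric-burst production at equal mean production. The rates below are illustrative, and the escape-time calculation itself is beyond this sketch, but the enhanced fluctuations that drive exponentially faster escape already show up in the stationary variance.

```python
import random

def gillespie_bursty(burst, k_prod, k_dec=1.0, t_end=2000.0, seed=0):
    """Gillespie simulation of a population with production in geometric
    bursts (mean size `burst`, support 1, 2, ...) at rate k_prod and
    per-capita decay k_dec. Returns the time-averaged mean and variance
    of the population size."""
    rng = random.Random(seed)
    n, t = 0, 0.0
    s1 = s2 = tot = 0.0
    while t < t_end:
        a_prod, a_dec = k_prod, k_dec * n
        a = a_prod + a_dec
        dt = rng.expovariate(a)
        s1 += n * dt
        s2 += n * n * dt
        tot += dt
        t += dt
        if rng.random() < a_prod / a:
            # draw a geometric burst size with mean `burst`
            b = 1
            while rng.random() < 1.0 - 1.0 / burst:
                b += 1
            n += b
        else:
            n -= 1
    m = s1 / tot
    return m, s2 / tot - m * m
```

With burst size 1 the stationary state is Poissonian (variance equals mean); with mean burst size 5 at one-fifth the production rate, the mean is unchanged but the variance grows severalfold, widening the distribution toward the escape threshold.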
Sporadic frame dropping impact on quality perception
NASA Astrophysics Data System (ADS)
Pastrana-Vidal, Ricardo R.; Gicquel, Jean Charles; Colomes, Catherine; Cherifi, Hocine
2004-06-01
Over the past few years there has been an increasing interest in real-time video services over packet networks. When considering quality, it is essential to quantify user perception of the received sequence. Severe motion discontinuities are one of the most common degradations in video streaming. The end-user perceives a jerky motion when the discontinuities are uniformly distributed over time, and an instantaneous fluidity break is perceived when the motion loss is isolated or irregularly distributed. Bit-rate adaptation techniques, transmission errors in the packet networks, or the restitution strategy could be the origin of this perceived jerkiness. In this paper we present a psychovisual experiment performed to quantify the effect of sporadically dropped pictures on the overall perceived quality. First, the perceptual detection thresholds of generated temporal discontinuities were measured. Then, the quality function was estimated in relation to a single frame dropping of different durations. Finally, a set of tests was performed to quantify the effect of several impairments distributed over time. We have found that the detection thresholds are content, duration and motion dependent. The assessment results show how quality is impaired by a single burst of dropped frames in a 10 s sequence. The effect of several bursts of discarded frames, irregularly distributed over time, is also discussed.
NASA Astrophysics Data System (ADS)
Qin, Y.; Rana, A.; Moradkhani, H.
2014-12-01
Multi-model downscaled-scenario products allow us to better assess the uncertainty of the changes/variations of precipitation and temperature in the current and future periods. Joint probability distribution functions (PDFs) of the two climatic variables might help better understand their interdependence and thus, in turn, help in assessing the future with confidence. Using the joint distribution of temperature and precipitation is also of significant importance in hydrological applications and climate change studies. In the present study, we have used a multi-model, statistically downscaled-scenario ensemble of precipitation and temperature variables from two different statistically downscaled climate datasets. The datasets used are 10 Global Climate Model (GCM) downscaled products from the CMIP5 daily dataset, namely those from the Bias Correction and Spatial Downscaling (BCSD) technique generated at Portland State University and from the Multivariate Adaptive Constructed Analogs (MACA) technique generated at the University of Idaho, leading to 2 ensemble time series from 20 GCM products. Thereafter, the ensemble PDFs of both precipitation and temperature are evaluated for summer, winter, and yearly periods for all 10 sub-basins across the Columbia River Basin (CRB). Eventually, a copula is applied to establish the joint distribution of the two variables, enabling users to model their joint behavior with any level of correlation and dependency. Moreover, the probabilistic distribution helps remove the limitations on the marginal distributions of the variables in question. The joint distribution is then used to estimate the change trends of joint precipitation and temperature in the current and future periods, along with estimation of the probabilities of the given change. Results indicate varied change trends of the joint distribution at summer, winter, and yearly time scales in all 10 sub-basins.
Probabilities of changes, as estimated by the joint precipitation and temperature, will provide useful information/insights for hydrological and climate change predictions.
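The copula step can be illustrated with a Gaussian copula joining assumed gamma (precipitation) and normal (temperature) marginals. Neither the marginals nor the correlation value below come from the study; they are placeholders for fitted quantities.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
rho = -0.6  # hypothetical precipitation-temperature dependence
# Correlated standard normals via Cholesky factor of the copula correlation.
L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))
z = L @ rng.standard_normal((2, 5000))
u = stats.norm.cdf(z)  # Gaussian-copula samples on (0,1)^2
# Push copula samples through assumed marginal quantile functions.
precip = stats.gamma.ppf(u[0], a=2.0, scale=3.0)  # mm/day (assumed)
temp = stats.norm.ppf(u[1], loc=20.0, scale=4.0)  # deg C (assumed)
```

The resulting sample preserves the chosen negative dependence while each variable follows its own marginal law, which is the property exploited in the abstract.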
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Hongyi; Sivapalan, Murugesu
Hortonian overland flow, Dunne overland flow and subsurface stormflow are the three dominant mechanisms contributing to both the volume and timing of streamflow. A previous study quantified the climatic and landscape controls on the relative dominance of the volumes of the different runoff components. In this paper we explore the impacts of climate, soil and topography on the timing of these runoff components in small catchments within the framework of the Connected Instantaneous Response Functions (CIRF). The CIRF here is viewed as a probability density function of travel times of water droplets associated with a given runoff generation mechanism (from the locations where they are generated to the catchment outlet). CIRF is a refinement of the traditional catchment IRF in that it explicitly accounts for variable contributing areas: only those partial areas of runoff generation which are hydrologically connected to the outlet are regarded as contributing areas. The CIRFs are derived for each runoff mechanism through numerical simulations with a spatially distributed hydrological model which accounts for spatially distributed runoff generation and routing, involving all three mechanisms, under multiple combinations of climate, soil and topographic properties. The advective and dispersive aspects of the catchment’s runoff routing response are captured through the use of, respectively, the mean travel times and dimensionless forms of the CIRFs (i.e., scaled by their respective mean travel times). It was found that the CIRFs, upon non-dimensionalization, collapsed to common characteristic shapes, which could be explained in terms of the relative contributions of hillslope and channel network flows, and especially of the size of the runoff contributing areas. The contributing areas are themselves governed by the competition between drainage and recharge to the water table, and could be explained by a dimensionless drainage index which quantifies this competition.
On the other hand, the mean residence times were vastly different in each case, governed by the relative lengths of the flow pathways and the flow velocities (and their variability). The study also revealed simple indicators based on landscape properties that can explain their magnitudes in different catchments.
Modeling transient heat transfer in nuclear waste repositories.
Yang, Shaw-Yang; Yeh, Hund-Der
2009-09-30
Heat from high-level nuclear waste is generated and released from the canister at final disposal sites. The waste heat may affect the engineering properties of waste canisters, buffers, and backfill material in the emplacement tunnel and the host rock. This study addresses the problem of the heat generated from the waste canister and analyzes the heat distribution between the buffer and the host rock, which is treated as a radial two-layer heat flux problem. A conceptual model is first constructed for the heat conduction in a nuclear waste repository, and then mathematical equations are formulated for modeling heat flow distribution at repository sites. The Laplace transforms are employed to develop a solution for the temperature distributions in the buffer and the host rock in the Laplace domain, which is numerically inverted to the time-domain solution using the modified Crump method. The transient temperature distributions for both the single- and multi-borehole cases are simulated in hypothetical geological repositories of nuclear waste. The results show that the temperature distributions in the thermal field are significantly affected by the decay heat of the waste canister, the thermal properties of the buffer and the host rock, the disposal spacing, and the thickness of the host rock at a nuclear waste repository.
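The numerical Laplace inversion step can be illustrated with the Gaver-Stehfest algorithm, a simpler alternative to the modified Crump method named in the abstract. This is a sketch, not the paper's code; it is checked on a transform with a known inverse.

```python
import math

def stehfest(F, t, N=12):
    """Gaver-Stehfest numerical inversion of a Laplace transform F(s).

    Evaluates F only at real points s = k*ln(2)/t; N must be even.
    """
    ln2 = math.log(2.0)
    total = 0.0
    for k in range(1, N + 1):
        Vk = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            Vk += (j ** (N // 2) * math.factorial(2 * j)
                   / (math.factorial(N // 2 - j) * math.factorial(j)
                      * math.factorial(j - 1) * math.factorial(k - j)
                      * math.factorial(2 * j - k)))
        Vk *= (-1) ** (k + N // 2)
        total += Vk * F(k * ln2 / t)
    return total * ln2 / t

# Sanity check on a known pair: 1/(s+1) <-> exp(-t)
approx = stehfest(lambda s: 1.0 / (s + 1.0), t=1.0)
```

For smooth time-domain solutions such as diffusive temperature profiles, this scheme is typically accurate to several digits in double precision.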
Time-dependent resilience assessment and improvement of urban infrastructure systems
NASA Astrophysics Data System (ADS)
Ouyang, Min; Dueñas-Osorio, Leonardo
2012-09-01
This paper introduces an approach to assess and improve the time-dependent resilience of urban infrastructure systems, where resilience is defined as the systems' ability to resist various possible hazards, absorb the initial damage from hazards, and recover to normal operation one or multiple times during a time period T. For different values of T and its position relative to current time, there are three forms of resilience: previous resilience, current potential resilience, and future potential resilience. This paper mainly discusses the third form that takes into account the systems' future evolving processes. Taking the power transmission grid in Harris County, Texas, USA as an example, the time-dependent features of resilience and the effectiveness of some resilience-inspired strategies, including enhancement of situational awareness, management of consumer demand, and integration of distributed generators, are all simulated and discussed. Results show a nonlinear nature of resilience as a function of T, which may exhibit a transition from an increasing function to a decreasing function at either a threshold of post-blackout improvement rate, a threshold of load profile with consumer demand management, or a threshold number of integrated distributed generators. These results are further confirmed by studying a typical benchmark system such as the IEEE RTS-96. Such common trends indicate that some resilience strategies may enhance infrastructure system resilience in the short term, but if not managed well, they may compromise practical utility system resilience in the long run.
Sukal-Moulton, Theresa; Krosschell, Kristin J.; Gaebler-Spira, Deborah J.; Dewald, Julius P.A.
2014-01-01
Background Extensive neuromotor development occurs early in human life, but the time that a brain injury occurs during development has not been rigorously studied when quantifying motor impairments. Objective This study investigated the impact of timing of brain injury on magnitude and distribution of weakness in the paretic arm of individuals with childhood-onset hemiparesis. Methods Twenty-four individuals with hemiparesis were divided into time periods of injury before birth (PRE-natal, n=8), around the time of birth (PERI-natal, n=8) or after 6 months of age (POST-natal, n=8). They, along with 8 typically developing peers, participated in maximal isometric shoulder, elbow, wrist, and finger torque generation tasks using a multiple degree-of-freedom load cell to quantify torques in 10 directions. A mixed model ANOVA was used to determine the effect of group and task on a calculated relative weakness ratio between arms. Results There was a significant effect of both time of injury group (p<0.001) and joint torque direction (p<0.001) on the relative weakness of the paretic arm. Distal joints were more affected compared to proximal joints, especially in the POST-natal group. Conclusions The distribution of weakness provides evidence for the relative preservation of ipsilateral corticospinal motor pathways to the paretic limb in those individuals injured earlier, while those who sustained later injury may rely more on indirect ipsilateral cortico-bulbospinal projections during the generation of torques with the paretic arm. PMID:24009182
1989-08-01
Random variables for the conditional exponential distribution are generated using the inverse transform method: (1) generate U ~ U(0,1); (2) set s = -λ ln ... exp(-[(x+s-γ)/η]^β + [(x-γ)/η]^β). c. Random variables from the conditional Weibull distribution are generated using the inverse transform method: (1) ... using a standard normal transformation and the inverse transform method. APPENDIX: DISTRIBUTIONS SUPPORTED BY THE MODEL. (1) Generate Y = P(X ≤ ...
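The inverse transform recipes above can be sketched as runnable code. The symbols λ (exponential rate) and β, η, γ (Weibull shape, scale, location) follow the reconstructed formulas; the parameter values used are illustrative.

```python
import math
import random

random.seed(1)

def cond_exponential(lam):
    """Residual life of an exponential(rate lam): memoryless, so the
    conditional draw is an ordinary exponential via inverse transform."""
    u = random.random()
    return -math.log(u) / lam

def cond_weibull(x, beta, eta, gamma=0.0):
    """Residual life s of a Weibull(shape beta, scale eta, location gamma)
    given survival to age x, inverting the conditional survival function
    S(s|x) = exp(-[((x+s-gamma)/eta)^beta - ((x-gamma)/eta)^beta])."""
    u = random.random()
    a = ((x - gamma) / eta) ** beta
    return gamma - x + eta * (a - math.log(u)) ** (1.0 / beta)

# Sample mean of the exponential draws should approach 1/lam = 2.
samples = [cond_exponential(0.5) for _ in range(100_000)]
```

Because -ln U > 0, the Weibull residual life is always strictly positive, as a residual-life quantity must be.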
Fault latency in the memory - An experimental study on VAX 11/780
NASA Technical Reports Server (NTRS)
Chillarege, Ram; Iyer, Ravishankar K.
1986-01-01
Fault latency is the time between the physical occurrence of a fault and its corruption of data, causing an error. The measure of this time is difficult to obtain because the time of occurrence of a fault and the exact moment of generation of an error are not known. This paper describes an experiment to accurately study the fault latency in the memory subsystem. The experiment employs real memory data from a VAX 11/780 at the University of Illinois. Fault latency distributions are generated for s-a-0 and s-a-1 permanent fault models. Results show that the mean fault latency of a s-a-0 fault is nearly 5 times that of the s-a-1 fault. Large variations in fault latency are found for different regions in memory. An analysis of a variance model to quantify the relative influence of various workload measures on the evaluated latency is also given.
NASA Astrophysics Data System (ADS)
Candela, A.; Brigandì, G.; Aronica, G. T.
2014-07-01
In this paper, a procedure is presented to derive synthetic flood design hydrographs (SFDHs) by coupling a bivariate representation of rainfall forcing (rainfall duration and intensity) via copulas, which describe and model the correlation between two variables independently of the marginal laws involved, with a distributed rainfall-runoff model. Rainfall-runoff (R-R) modelling for estimating the hydrological response at the outlet of a catchment was performed by using a conceptual, fully distributed procedure based on the Soil Conservation Service Curve Number method as an excess rainfall model and on a distributed unit hydrograph with climatic dependencies for the flow routing. Travel time computation, based on the distributed unit hydrograph definition, was performed by implementing a procedure based on flow paths, determined from a digital elevation model (DEM) and roughness parameters obtained from distributed geographical information. In order to estimate the primary return period of the SFDHs, which provides the probability of occurrence of a flood hydrograph, peaks and flow volumes obtained through R-R modelling were treated statistically using copulas. Finally, the shapes of the hydrographs were generated on the basis of historically significant flood events, via cluster analysis. An application of the procedure described above has been carried out, and results are presented for the case study of the Imera catchment in Sicily, Italy.
NASA Technical Reports Server (NTRS)
Englander, Jacob A.; Englander, Arnold C.
2014-01-01
Trajectory optimization methods using monotonic basin hopping (MBH) have become well developed during the past decade [1, 2, 3, 4, 5, 6]. An essential component of MBH is a controlled random search through the multi-dimensional space of possible solutions. Historically, the randomness has been generated by drawing random variables (RVs) from a uniform probability distribution. Here, we investigate generating the randomness by drawing the RVs from Cauchy and Pareto distributions, chosen because of their characteristic long tails. We demonstrate that using Cauchy distributions (as first suggested by J. Englander [3, 6]) significantly improves MBH performance, and that Pareto distributions provide even greater improvements. Improved performance is defined in terms of efficiency and robustness. Efficiency is finding better solutions in less time. Robustness is efficiency that is undiminished by (a) the boundary conditions and internal constraints of the optimization problem being solved, and (b) variations in the parameters of the probability distribution. Robustness is important for achieving performance improvements that are not problem specific. In this work we show that the performance improvements are the result of how these long-tailed distributions enable MBH to search the solution space faster and more thoroughly. In developing this explanation, we use the concepts of sub-diffusive, normally-diffusive, and super-diffusive random walks (RWs) originally developed in the field of statistical physics.
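The MBH scheme with long-tailed perturbations can be sketched as follows. The test function, local solver, hop count, and step scale are illustrative choices, not those of the cited work.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)

def rastrigin(x):
    # Highly multimodal test function; global minimum 0 at the origin.
    return 10.0 * x.size + np.sum(x * x - 10.0 * np.cos(2.0 * np.pi * x))

def mbh(f, x0, hops=150, scale=0.3):
    """Monotonic basin hopping: perturb the incumbent with long-tailed
    Cauchy steps, locally minimize, and accept only improvements."""
    res = minimize(f, x0, method="Nelder-Mead")
    best_x, best_f = res.x, res.fun
    for _ in range(hops):
        trial = best_x + scale * rng.standard_cauchy(best_x.size)
        res = minimize(f, trial, method="Nelder-Mead")
        if res.fun < best_f:  # monotonic acceptance rule
            best_x, best_f = res.x, res.fun
    return best_x, best_f

x_best, f_best = mbh(rastrigin, np.array([3.2, -2.7]))
```

The occasional very long Cauchy step lets the search escape distant basins, which is the super-diffusive behavior the abstract invokes; a uniform perturbation of the same median size would stay far more local.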
A distributed Petri Net controller for a dual arm testbed
NASA Technical Reports Server (NTRS)
Bjanes, Atle
1991-01-01
This thesis describes the design and functionality of a Distributed Petri Net Controller (DPNC). The controller runs under X Windows to provide a graphical interface. The DPNC allows users to distribute a Petri Net across several host computers linked together via a TCP/IP interface. A sub-net executes on each host, interacting with the other sub-nets by passing a token vector from host to host. One host has a command window which monitors and controls the distributed controller. The input to the DPNC is a net definition file generated by Great SPN. Thus, a net may be designed, analyzed and verified using this package before implementation. The net is distributed to the hosts by tagging transitions that are host-critical with the appropriate host number. The controller will then distribute the remaining places and transitions to the hosts by generating the local nets, the local marking vectors and the global marking vector. Each transition can have one or more preconditions which must be fulfilled before the transition can fire, as well as one or more post-processes to be executed after the transition fires. These implement the actual input/output to the environment (machines, signals, etc.). The DPNC may also be used to simulate a Great SPN net since stochastic and deterministic firing rates are implemented in the controller for timed transitions.
A theoretically consistent stochastic cascade for temporal disaggregation of intermittent rainfall
NASA Astrophysics Data System (ADS)
Lombardo, F.; Volpi, E.; Koutsoyiannis, D.; Serinaldi, F.
2017-06-01
Generating fine-scale time series of intermittent rainfall that are fully consistent with any given coarse-scale totals is a key and open issue in many hydrological problems. We propose a stationary disaggregation method that simulates rainfall time series with given dependence structure, wet/dry probability, and marginal distribution at a target finer (lower-level) time scale, preserving full consistency with variables at a parent coarser (higher-level) time scale. We account for the intermittent character of rainfall at fine time scales by merging a discrete stochastic representation of intermittency and a continuous one of rainfall depths. This approach yields a unique and parsimonious mathematical framework providing general analytical formulations of mean, variance, and autocorrelation function (ACF) for a mixed-type stochastic process in terms of mean, variance, and ACFs of both continuous and discrete components, respectively. To achieve the full consistency between variables at finer and coarser time scales in terms of marginal distribution and coarse-scale totals, the generated lower-level series are adjusted according to a procedure that does not affect the stochastic structure implied by the original model. To assess model performance, we study rainfall process as intermittent with both independent and dependent occurrences, where dependence is quantified by the probability that two consecutive time intervals are dry. In either case, we provide analytical formulations of main statistics of our mixed-type disaggregation model and show their clear accordance with Monte Carlo simulations. An application to rainfall time series from real world is shown as a proof of concept.
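The adjusting step can be illustrated with the classic proportional adjusting procedure, which rescales the generated fine-scale values to match the coarse total while preserving zeros (intermittency) and the ratios among wet intervals. The paper's own procedure is more elaborate, and all parameters here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def disaggregate(coarse_total, k=6, p_wet=0.4, shape=0.7, scale=2.0):
    """Split one coarse-scale rainfall total into k intermittent fine-scale
    values: Bernoulli wet/dry occurrences times gamma-distributed depths,
    then a proportional adjusting step enforces the coarse-scale total."""
    while True:
        wet = rng.random(k) < p_wet
        depths = np.where(wet, rng.gamma(shape, scale, k), 0.0)
        if depths.sum() > 0:  # guard against an all-dry draw
            break
    return depths * (coarse_total / depths.sum())

fine = disaggregate(25.0)
```

By construction the adjusted series sums exactly to the parent total, so consistency with the coarser time scale holds for every realization.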
During the course of our research, we expanded and evolved our initial concept to achieve design targets of minimized cost of electricity. Several biofuel pathways were examined, and each had drawbacks in terms of cost (2-3 times market rates for energy), land area required (5 to...
On two-liquid AC electroosmotic system for thin films.
Navarkar, Abhishek; Amiroudine, Sakir; Demekhin, Evgeny A
2016-03-01
Lab-on-chip devices employ electroosmotic flow (EOF) for transportation and mixing of liquids. However, when a steady (DC) electric field is applied to the liquids, there are undesirable effects such as degradation of the sample, electrolysis, bubble formation, etc., due to the large magnitude of the electric potential required to generate the flow. These effects can be averted by using a time-periodic or AC electric field. Transport and mixing of nonconductive liquids remain a problem even with this technique. In the present study, a two-liquid system bounded by two rigid plates, which act as substrates, is considered. The potential distribution is derived by assuming a Boltzmann charge distribution and using the Debye-Hückel linearization. Analytical solution of this time-periodic system shows some effects of the viscosity ratio and permittivity ratio on the velocity profile. Interfacial electrostatics is also found to play a significant role in deciding velocity gradients at the interface. A high frequency of the applied electric field is observed to generate an approximately static velocity profile away from the electric double layer (EDL). © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Power conditioning equipment for a thermoelectric outer planet spacecraft, volume 1, book 1
NASA Technical Reports Server (NTRS)
Andrews, R. E. (Editor)
1972-01-01
Equipment was designed to receive power from a radioisotope thermoelectric generator source and to condition, distribute, and control this power for the spacecraft loads. The TOPS mission, aimed at a representative tour of the outer planets, would operate for an estimated 12-year period. The unique design characteristics required for the power conditioning equipment result from the long mission time and the need for autonomous on-board operations due to large communication distances and the associated time delays of ground-initiated actions. The salient features of the selected power subsystem configuration are: (1) the PCE regulates the power from the radioisotope thermoelectric generator power source at 30 Vdc by means of a quad-redundant shunt regulator; (2) 30 Vdc power is used by certain loads, but is more generally inverted and distributed as square-wave ac power; (3) a protected bus is used to assure that power is always available to the control computer subsystem to permit corrective action to be initiated in response to fault conditions; and (4) various levels of redundancy are employed to provide high subsystem reliability.
Clewe, Oskar; Karlsson, Mats O; Simonsson, Ulrika S H
2015-12-01
Bronchoalveolar lavage (BAL) is a pulmonary sampling technique for characterization of drug concentrations in epithelial lining fluid and alveolar cells. Two hypothetical drugs with different pulmonary distribution rates (fast and slow) were considered. An optimized BAL sampling design was generated assuming no previous information regarding the pulmonary distribution (rate and extent) and with a maximum of two samples per subject. Simulations were performed to evaluate the impact of the number of samples per subject (1 or 2) and the sample size on the relative bias and relative root mean square error of the parameter estimates (rate and extent of pulmonary distribution). The optimized BAL sampling design depends on a characterized plasma concentration-time profile, a population plasma pharmacokinetic model, and the limit of quantification (LOQ) of the BAL method, and involves only two BAL sample time points, one early and one late. The early sample should be taken at the earliest time point at which concentrations in the BAL fluid are ≥ LOQ. The second sample should be taken at a time point in the declining part of the plasma curve, where the plasma concentration is equivalent to the plasma concentration in the early sample. Using a previously described general pulmonary distribution model linked to a plasma population pharmacokinetic model, simulated data using the final BAL sampling design enabled characterization of both the rate and extent of pulmonary distribution. The optimized BAL sampling design enables characterization of both the rate and extent of the pulmonary distribution for both fast and slowly equilibrating drugs.
Noise effects in bacterial motor switch
NASA Astrophysics Data System (ADS)
Tu, Yuhai
2006-03-01
The clockwise (CW) or counterclockwise (CCW) spinning of bacterial flagellar motors is controlled by the concentration of a phosphorylated protein, CheY-P. In this talk, we represent the stochastic switching behavior of a bacterial flagellar motor by a dynamical two-state (CW and CCW) model, with the energy levels of the two states fluctuating in time according to the variation of the CheY-P concentration in the cell. We show that with a generic normal distribution and a modest amplitude for CheY-P concentration fluctuations, the dynamical two-state model is capable of generating a power-law distribution (as opposed to an exponential Poisson-like distribution) for the durations of the CCW states, in agreement with recent experimental observations of Korobkova et al. (Nature 428, 574 (2004)). In addition, we show that the power spectrum for the flagellar motor switching time series is not determined solely by the power-law duration distribution, but also by the temporal correlation between the duration times of different CCW intervals. We point out the intrinsic connection between anomalously large fluctuations of the motor output and the overall high gain of the bacterial chemotaxis system. Suggestions for experimental verification of the dynamical two-state model will also be discussed.
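A minimal discrete-time simulation of such a dynamical two-state model is sketched below, with the energy gap driven by an Ornstein-Uhlenbeck process as a stand-in for CheY-P concentration fluctuations. The rates, time step, and noise parameters are assumptions, not values from the talk.

```python
import numpy as np

rng = np.random.default_rng(5)

def ccw_durations(t_total=1000.0, dt=0.01, k0=1.0, tau=10.0, sigma=2.0):
    """Two-state (CW/CCW) switch whose energy gap dG(t) fluctuates as an
    Ornstein-Uhlenbeck process; the switching rates k0*exp(-dG/2) and
    k0*exp(+dG/2) satisfy detailed balance for the instantaneous gap."""
    dg, state, t_enter = 0.0, 1, 0.0  # state 1 = CCW
    durations = []
    t = 0.0
    while t < t_total:
        # OU update for the fluctuating energy gap
        dg += -dg * dt / tau + sigma * np.sqrt(2.0 * dt / tau) * rng.standard_normal()
        rate = k0 * np.exp(-dg / 2.0) if state == 1 else k0 * np.exp(dg / 2.0)
        if rng.random() < rate * dt:  # switching probability in this step
            if state == 1:
                durations.append(t - t_enter)
            else:
                t_enter = t
            state = 1 - state
        t += dt
    return np.array(durations)

d = ccw_durations()
```

With slow, sizable gap fluctuations, the collected CCW durations develop a tail much heavier than an exponential with the same mean, the qualitative effect described in the abstract.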
Real-Time Load-Side Control of Electric Power Systems
NASA Astrophysics Data System (ADS)
Zhao, Changhong
Two trends are emerging from modern electric power systems: the growth of renewable (e.g., solar and wind) generation, and the integration of information technologies and advanced power electronics. The former introduces large, rapid, and random fluctuations in power supply, demand, frequency, and voltage, which become a major challenge for real-time operation of power systems. The latter creates a tremendous number of controllable intelligent endpoints such as smart buildings and appliances, electric vehicles, energy storage devices, and power electronic devices that can sense, compute, communicate, and actuate. Most of these endpoints are distributed on the load side of power systems, in contrast to traditional control resources such as centralized bulk generators. This thesis focuses on controlling power systems in real time, using these load side resources. Specifically, it studies two problems. (1) Distributed load-side frequency control: We establish a mathematical framework to design distributed frequency control algorithms for flexible electric loads. In this framework, we formulate a category of optimization problems, called optimal load control (OLC), to incorporate the goals of frequency control, such as balancing power supply and demand, restoring frequency to its nominal value, restoring inter-area power flows, etc., in a way that minimizes total disutility for the loads to participate in frequency control by deviating from their nominal power usage. By exploiting distributed algorithms to solve OLC and analyzing convergence of these algorithms, we design distributed load-side controllers and prove stability of closed-loop power systems governed by these controllers. This general framework is adapted and applied to different types of power systems described by different models, or to achieve different levels of control goals under different operation scenarios. 
We first consider a dynamically coherent power system which can be equivalently modeled with a single synchronous machine. We then extend our framework to a multi-machine power network, where we consider primary and secondary frequency controls, linear and nonlinear power flow models, and the interactions between generator dynamics and load control. (2) Two-timescale voltage control: The voltage of a power distribution system must be maintained closely around its nominal value in real time, even in the presence of highly volatile power supply or demand. For this purpose, we jointly control two types of reactive power sources: a capacitor operating at a slow timescale, and a power electronic device, such as a smart inverter or a D-STATCOM, operating at a fast timescale. Their control actions are solved from optimal power flow problems at two timescales. Specifically, the slow-timescale problem is a chance-constrained optimization, which minimizes power loss and regulates the voltage at the current time instant while limiting the probability of future voltage violations due to stochastic changes in power supply or demand. This control framework forms the basis of an optimal sizing problem, which determines the installation capacities of the control devices by minimizing the sum of power loss and capital cost. We develop computationally efficient heuristics to solve the optimal sizing problem and implement real-time control. Numerical experiments show that the proposed sizing and control schemes significantly improve the reliability of voltage control with a moderate increase in cost.
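The OLC idea in part (1) can be reduced to a toy dual-decomposition sketch, where a frequency-like dual signal coordinates loads with quadratic disutilities. All parameter values are illustrative, and the thesis's formulation includes network dynamics and constraints omitted here.

```python
import numpy as np

# Quadratic disutility c_i(d) = (d - d_nom_i)^2 / (2*alpha_i); the loads
# jointly track a supply target P while minimizing total disutility.
d_nom = np.array([1.0, 2.0, 1.5])  # nominal consumptions (assumed)
alpha = np.array([1.0, 0.5, 2.0])  # flexibility of each load (assumed)
P = 5.5                            # available generation

nu = 0.0  # dual variable, playing the role of a frequency-like signal
for _ in range(200):
    d = d_nom + alpha * nu         # each load's local best response
    nu += 0.2 * (P - d.sum())      # dual gradient step on the mismatch

# Closed-form optimum for comparison: sum(d) = P at nu_star.
nu_star = (P - d_nom.sum()) / alpha.sum()
```

Each load needs only the broadcast signal nu and its own parameters, which is the decentralization property that makes load-side frequency control scalable.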
NASA Astrophysics Data System (ADS)
Ichikawa, Yasushi; Oshima, Nobuyuki; Tabuchi, Yuichiro; Ikezoe, Keigo
2014-12-01
Further cost reduction is a critical issue for the commercialization of fuel-cell electric vehicles (FCEVs) based on polymer electrolyte fuel cells (PEFCs). The cost of the fuel-cell system is driven by the multiple parts required to maximize stack performance and maintain durability and robustness. The fuel-cell system of the FCEV must be simplified while maintaining functionality. The dead-ended anode is considered as a means of simplification in this study. Generally, if hydrogen is supplied under constant pressure during dead-ended operation, stable power generation is impossible because of the accumulation of liquid water produced by power generation and of nitrogen leaking from the cathode through the membrane. Herein, pressure oscillation is applied to address this issue. Empirical and CFD data are employed to elucidate the mechanism of stable power generation using the pressure-swing supply. Simultaneous, time-continuous measurements of the current distribution and gas concentration distribution are also conducted. The results demonstrate that the nitrogen concentration in the anode channel under constant-pressure operation differs from that under pressure-swing supply conditions. The transient two-dimensional CFD results indicate that oscillatory flow generated by the pressure-swing supply periodically sweeps nitrogen out of the active area, resulting in stable power generation.
Quasirandom geometric networks from low-discrepancy sequences
NASA Astrophysics Data System (ADS)
Estrada, Ernesto
2017-08-01
We define quasirandom geometric networks using low-discrepancy sequences, such as Halton, Sobol, and Niederreiter. The networks are built in d dimensions by considering the d-tuples of digits generated by these sequences as the coordinates of the vertices of the networks in a d-dimensional unit hypercube I^d. Then, two vertices are connected by an edge if they are at a distance smaller than a connection radius. We investigate computationally 11 network-theoretic properties of two-dimensional quasirandom networks and compare them with analogous random geometric networks. We also study their degree distributions and their spectral density distributions. We conclude from this intensive computational study that, in terms of the uniformity of the distribution of the vertices in the unit square, the quasirandom networks look more random than the random geometric networks. We include an analysis of potential strategies for generating higher-dimensional quasirandom networks, where it is known that some of the low-discrepancy sequences are highly correlated. In this respect, we conclude that up to dimension 20, the use of scrambling, skipping, and leaping strategies generates quasirandom networks with the desired uniformity properties. Finally, we consider a diffusive process taking place on the nodes and edges of the quasirandom and random geometric graphs. We show that the diffusion time is shorter in the quasirandom graphs as a consequence of their larger structural homogeneity. In the random geometric graphs, the diffusion produces clusters of concentration that make the process slower. Such clusters are a direct consequence of the heterogeneous, irregular distribution of the nodes in the unit square on which the generation of random geometric graphs is based.
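The construction described above can be sketched in a few lines: generate 2-D Halton points (bases 2 and 3) via the van der Corput radical inverse, then connect every pair of vertices closer than a connection radius. The helper names and the radius value are illustrative, not from the paper.

```python
import numpy as np
from itertools import combinations

def radical_inverse(n, base):
    """Van der Corput radical inverse of the integer n in the given base."""
    inv, denom = 0.0, 1.0
    while n > 0:
        n, digit = divmod(n, base)
        denom *= base
        inv += digit / denom
    return inv

def halton_points(num, bases=(2, 3)):
    """num points of the 2-D Halton sequence inside the unit square."""
    return np.array([[radical_inverse(i, b) for b in bases]
                     for i in range(1, num + 1)])

def quasirandom_geometric_edges(points, radius):
    """Connect every pair of vertices closer than the connection radius."""
    return [(i, j) for i, j in combinations(range(len(points)), 2)
            if np.linalg.norm(points[i] - points[j]) < radius]
```

For production use, `scipy.stats.qmc.Halton` offers scrambled variants, which matters in the higher-dimensional regime the authors discuss.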
Particle propagation effects on wave growth in a solar flux tube
NASA Astrophysics Data System (ADS)
White, S. M.; Melrose, D. B.; Dulk, G. A.
1986-09-01
The evolution of a distribution of electrons is followed after they are injected impulsively at the top of a coronal magnetic loop, with the objective of studying the plasma instabilities which result. At early times the downgoing electrons have beamlike distributions and amplify electrostatic waves via the Cerenkov resonance; the anomalous Doppler resonance is found to be less important. Slightly later, while the electrons are still predominantly downgoing, they are unstable to cyclotron maser generation of z-mode waves with ω_p ≪ Ω, or to second-harmonic x-mode waves. The energetics of these instabilities, including saturation effects and heating of the ambient plasma, are discussed. It is suggested that coalescence of two z-mode waves generated by cyclotron maser emission of the downgoing electrons may produce the observed microwave spike bursts.
NASA Astrophysics Data System (ADS)
Izotov, A. I.; Fominykh, A. A.; Nikulin, S. V.; Prokoshev, D. K.; Legoti, A. B.; Timina, N. V.
2018-01-01
A method is proposed for reducing irregular current distribution, and the associated wear, in multi-brush sliding current-transfer systems by installing lubricating molybdenum disulphide brushes on the slip rings to form a nano-sized lubricating film on the slip-ring surface. The authors give the results of industrial tests estimating the effectiveness of lubricating brushes on the slip rings of a TBB-320-2UZ turbine generator. The results showed that the lubricating brushes reduce (a) the wear of 6110 OM+M and EG2AF+M brushes by factors of 1.2 and 2.1 respectively, (b) the irregularity of current distribution among parallel-operating brushes, owing to stabilization of the contact arc, and (c) the temperature of the brush-contact device, owing to reduced brush friction.
Dust generation in powders: Effect of particle size distribution
NASA Astrophysics Data System (ADS)
Chakravarty, Somik; Le Bihan, Olivier; Fischer, Marc; Morgeneyer, Martin
2017-06-01
This study explores the relationship between the bulk and grain-scale properties of powders and dust generation. A vortex shaker dustiness tester was used to evaluate 8 calcium carbonate test powders with median particle sizes ranging from 2 μm to 136 μm. Respirable aerosols released from the powder samples were characterised by their particle number and mass concentrations. All the powder samples released respirable dust fractions, at rates that decreased with time. The variation of powder dustiness as a function of the particle size distribution was analysed, and the powders were classified into three groups based on the fraction of particles within the respirable range. The trends we observe might be due to the interplay of several mechanisms, such as de-agglomeration and attrition, and their relative importance.
Multi-site Stochastic Simulation of Daily Streamflow with Markov Chain and KNN Algorithm
NASA Astrophysics Data System (ADS)
Mathai, J.; Mujumdar, P.
2017-12-01
A key focus of this study is to develop a method, physically consistent with the hydrologic processes, that can capture the short-term characteristics of the daily hydrograph as well as the correlation of streamflow in the temporal and spatial domains. In complex water resource systems, flow fluctuations at small time intervals require that discretisation be done at small time scales, such as daily scales. Simultaneous generation of synthetic flows at different sites in the same basin is also required. We propose a method to equip water managers with a streamflow generator within a stochastic streamflow simulation framework. The motivation for the proposed method is to generate sequences that extend beyond the variability represented in the historical record of streamflow time series. The method has two steps. In step 1, daily flow is generated independently at each station by a two-state Markov chain, with rising-limb increments randomly sampled from a Gamma distribution and the falling limb modelled as an exponential recession; in step 2, the streamflow generated in step 1 is input to a nonparametric K-nearest neighbor (KNN) time series bootstrap resampler. The KNN model, being data driven, does not require assumptions on the dependence structure of the time series. A major limitation of KNN-based streamflow generators is that they do not produce new values, but merely reshuffle the historical data to generate realistic streamflow sequences. Daily flow generated using the Markov chain approach, however, is capable of generating a rich variety of streamflow sequences. Furthermore, the rising and falling limbs of the daily hydrograph represent different physical processes, and hence need to be modelled individually. Our method thus combines the strengths of the two approaches. We show the utility of the method and its improvement over the traditional KNN by simulating daily streamflow sequences at 7 locations in the Godavari River basin in India.
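Step 1 above, the two-state Markov chain with Gamma rising-limb increments and exponential recession, can be sketched as follows. All parameter values are illustrative assumptions, and step 2 (the KNN bootstrap resampler that would consume these sequences) is omitted.

```python
import numpy as np

def generate_daily_flow(n_days, p_rise=0.3, p_fall=0.7, shape=2.0,
                        scale=5.0, recession=0.85, q0=10.0, seed=0):
    """Step-1 sketch: a two-state (rising/falling) Markov chain.
    Rising-limb increments are Gamma draws; falling limbs decay
    geometrically with a fixed recession constant."""
    rng = np.random.default_rng(seed)
    q = np.empty(n_days)
    q[0], rising = q0, False
    for t in range(1, n_days):
        # from falling, switch to rising with prob p_rise;
        # from rising, switch to falling with prob p_fall
        rising = rng.random() < (p_rise if not rising else 1 - p_fall)
        if rising:
            q[t] = q[t - 1] + rng.gamma(shape, scale)  # rising limb
        else:
            q[t] = q[t - 1] * recession                # exponential recession
    return q
```

Because the increments are drawn from a continuous distribution, the generated sequences contain values absent from any historical record, which is exactly the property the KNN resampler alone lacks.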
A statistical study of EMIC waves observed by Cluster: 1. Wave properties
NASA Astrophysics Data System (ADS)
Allen, R. C.; Zhang, J.-C.; Kistler, L. M.; Spence, H. E.; Lin, R.-L.; Klecker, B.; Dunlop, M. W.; André, M.; Jordanova, V. K.
2015-07-01
Electromagnetic ion cyclotron (EMIC) waves are an important mechanism for particle energization and losses inside the magnetosphere. In order to better understand the effects of these waves on particle dynamics, detailed information about the occurrence rate, wave power, ellipticity, wave normal angle, energy propagation angle distributions, and local plasma parameters is required. Previous statistical studies have used in situ observations to investigate the distribution of these parameters in the magnetic local time versus L-shell (MLT-L) frame within a limited magnetic latitude (MLAT) range. In this study, we present a statistical analysis of EMIC wave properties using 10 years (2001-2010) of data from Cluster, totaling 25,431 min of wave activity. Due to the polar orbit of Cluster, we are able to investigate EMIC waves at all MLATs and MLTs. This allows us to further investigate the MLAT dependence of various wave properties inside different MLT sectors and further explore the effects of Shabansky orbits on EMIC wave generation and propagation. The statistical analysis is presented in two papers. This paper focuses on the wave occurrence distribution as well as the distribution of wave properties. The companion paper focuses on local plasma parameters during wave observations as well as wave generation proxies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuruganti, Phani Teja
The smart grid is a combined process of revitalizing traditional power grid applications and introducing new applications to improve the efficiency of power generation, transmission, and distribution. This can be achieved by leveraging advanced communication and networking technologies. The selection of the appropriate communication technology for different smart grid applications has therefore been widely debated in the recent past. After comparing different possible technologies, a recent research study concluded that 3G cellular technology is the right choice for distribution-side smart grid applications such as smart metering, advanced distribution automation, and demand response management systems. In this paper, we argue that current 3G/4G cellular technologies are not an appropriate choice for smart grid distribution applications, and propose a Hybrid Spread Spectrum (HSS) based Advanced Metering Infrastructure (AMI) as an alternative to 3G/4G technologies. We present a preliminary PHY and MAC layer design of an HSS-based AMI network and evaluate their performance using MATLAB and NS2 simulations. We also propose a time-hierarchical scheme that can significantly reduce the volume of random access traffic generated during blackouts and the delay in power outage reporting.
Methodology for Generating Conflict Scenarios by Time Shifting Recorded Traffic Data
NASA Technical Reports Server (NTRS)
Paglione, Mike; Oaks, Robert; Bilimoria, Karl D.
2003-01-01
A methodology is presented for generating conflict scenarios that can be used as test cases to estimate the operational performance of a conflict probe. Recorded air traffic data is time shifted to create traffic scenarios featuring conflicts with characteristic properties similar to those encountered in typical air traffic operations. First, a reference set of conflicts is obtained from trajectories that are computed using birth points and nominal flight plans extracted from recorded traffic data. Distributions are obtained for several primary properties (e.g., encounter angle) that are most likely to affect the performance of a conflict probe. A genetic algorithm is then utilized to determine the values of time shifts for the recorded track data so that the primary properties of conflicts generated by the time shifted data match those of the reference set. This methodology is successfully demonstrated using recorded traffic data for the Memphis Air Route Traffic Control Center; a key result is that the required time shifts are less than 5 min for 99% of the tracks. It is also observed that close matching of the primary properties used in this study additionally provides a good match for some other secondary properties.
NASA Astrophysics Data System (ADS)
Bi, Chuan-Xing; Geng, Lin; Zhang, Xiao-Zheng
2016-05-01
In a sound field with multiple non-stationary sources, the measured pressure is the sum of the pressures generated by all sources, and thus cannot be used directly for studying the vibration and sound radiation characteristics of any one source alone. This paper proposes a separation model based on the interpolated time-domain equivalent source method (ITDESM) to separate the pressure field belonging to each source from the non-stationary multi-source sound field. In the proposed method, ITDESM is first extended to establish the relationship between the mixed time-dependent pressure and all the equivalent sources distributed on every source with known location and geometry information, and all the equivalent source strengths at each time step are solved by an iterative process; then, the equivalent source strengths of one source of interest are used to calculate the pressure field generated by that source alone. Numerical simulation of two baffled circular pistons demonstrates that the proposed method can effectively separate the non-stationary pressure generated by each source alone in both the time and space domains. An experiment with two speakers in a semi-anechoic chamber further demonstrates the effectiveness of the proposed method.
Escovar, Jesús; Bello, Felio J; Morales, Alberto; Moncada, Ligia; Cárdenas, Estrella
2004-10-01
Lutzomyia spinicrassa is a vector of Leishmania braziliensis in Colombia. This sand fly has a broad geographical distribution in Colombia and Venezuela, and is found mainly in coffee plantations. Baseline biological growth data for L. spinicrassa were obtained under experimental laboratory conditions. The development time from egg to adult ranged from 59 to 121 days, with an average of 12.74 weeks. Based on cohorts of 100 females, a horizontal life table was constructed. The following predictive parameters were obtained: net rate of reproduction (8.4 females per cohort female), generation time (12.74 weeks), intrinsic rate of population increase (0.17), and finite rate of population increase (1.18). The reproductive value for each age class of the cohort females was calculated. Vertical life tables were constructed, and mortality was described for the generation obtained from the field cohort. In addition, for two successive generations, additive variance and heritability for fecundity were estimated.
Hardware system of X-wave generator with simple driving pulses
NASA Astrophysics Data System (ADS)
Li, Xu; Li, Yaqin; Xiao, Feng; Ding, Mingyue; Yuchi, Ming
2013-03-01
Limited diffraction beams such as the X-wave have the property of a large depth of field. Thus, they have the potential to generate ultra-high frame rate ultrasound images. In practice, however, the real-time generation of an X-wave ultrasonic field requires a complex and high-cost system, especially for the precise voltage-time distribution needed to excite each distinct array element. In order to simplify the hardware realization of the X-wave, building on previous work, the X-wave excitation signals were decomposed and expressed as the superposition of a group of simple driving pulses, such as rectangular and triangular waves. The hardware system for the X-wave generator was also designed. The generator consists of a computer for communication with the circuit, a universal serial bus (USB) based micro-controller unit (MCU) for data transmission, a field programmable gate array (FPGA) based Direct Digital Synthesizer (DDS), a 12-bit digital-to-analog (D/A) converter, and a two-stage amplifier. Hardware simulation results show that the designed system can generate waveforms at different radii approximating the theoretical X-wave excitations, with a maximum error of 0.49% arising from quantization of the amplitude data.
Competition and Cooperation of Distributed Generation and Power System
NASA Astrophysics Data System (ADS)
Miyake, Masatoshi; Nanahara, Toshiya
Advances in distributed generation technologies, together with the deregulation of the electric power industry, can lead to a massive introduction of distributed generation. Since most distributed generation will be interconnected with a power system, coordination and competition between distributed generators and large-scale power sources will be a vital issue in realizing a more desirable energy system in the future. This paper analyzes competition between electric utilities and cogenerators from the viewpoints of economic efficiency and energy efficiency, based on simulation results for an energy system including a cogeneration system. First, we examine the best-response correspondence of an electric utility and a cogenerator with a noncooperative game approach and obtain a Nash equilibrium point. Secondly, we examine the optimum strategy that attains the highest social surplus and the highest energy efficiency through global optimization.
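The best-response iteration used to locate a Nash equilibrium can be illustrated with a standard two-player Cournot game. This toy model (linear inverse demand, constant marginal costs) is only a stand-in for the paper's utility-cogenerator model; all names and numbers are hypothetical.

```python
def cournot_nash(a=100.0, b=1.0, c1=10.0, c2=20.0, iters=100):
    """Best-response iteration for a two-player Cournot game:
    inverse demand p = a - b*(q1 + q2), constant marginal costs c1, c2.
    Each round, one player best-responds to the other's last output;
    the iteration is a contraction and converges to the Nash point."""
    q1 = q2 = 0.0
    for _ in range(iters):
        q1 = max(0.0, (a - c1 - b * q2) / (2 * b))  # player 1's best response
        q2 = max(0.0, (a - c2 - b * q1) / (2 * b))  # player 2's best response
    return q1, q2
```

The fixed point matches the closed-form equilibrium q_i* = (a - 2c_i + c_j) / (3b), which provides an easy correctness check for the iteration.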
Velásquez, Yelitza; Martínez-Sánchez, Ana Isabel; Thomas, Arianna; Rojo, Santos
2017-01-01
A checklist of the 39 species of blow flies (Calliphoridae and Mesembrinellidae) so far known to occur in Venezuela is provided, based on a thorough literature review and the examination of ca. 500 specimens deposited in the main entomological collections of the country. Data from the literature and museum collections were used to generate distribution maps for 37 species. Three species are recorded from Venezuela for the first time: Chrysomya putoria (Wiedemann, 1830), Mesembrinella spicata Aldrich, 1925 and Mesembrinella umbrosa Aldrich, 1922. PMID:28228670
Making the decoy-state measurement-device-independent quantum key distribution practically useful
NASA Astrophysics Data System (ADS)
Zhou, Yi-Heng; Yu, Zong-Wen; Wang, Xiang-Bin
2016-04-01
A relatively low key rate seems to be the major barrier to the practical use of decoy-state measurement-device-independent quantum key distribution (MDI-QKD). We present a four-intensity protocol for decoy-state MDI-QKD that substantially raises the key rate, especially when the total data size is not large. Calculations also show that our method makes secure private communication possible with fresh keys generated from MDI-QKD with a delay time of only a few seconds.
Kye, Bongoh; Mare, Robert D
2012-11-01
This study examines the intergenerational effects of changes in women's education in South Korea. We define intergenerational effects as changes in the distribution of educational attainment in an offspring generation associated with changes in the parental generation. Departing from the previous approach in research on social mobility, which has focused on intergenerational association, we examine the changes in the distribution of educational attainment across generations. Using a simulation method based on Mare and Maralani's recursive population renewal model, we examine how intergenerational transmission, assortative mating, and differential fertility influence intergenerational effects. The results point to the following conclusions. First, we find a positive intergenerational effect: improvement in women's education leads to improvement in daughters' education. Second, we find that the magnitude of intergenerational effects substantially depends on assortative mating and differential fertility: assortative mating amplifies and differential fertility dampens the intergenerational effects. Third, intergenerational effects become bigger for the less educated and smaller for the better educated over time, which is a consequence of educational expansion. We compare our results with Mare and Maralani's original Indonesian study to illustrate how the model of intergenerational effects works in different socioeconomic circumstances.
Distributed processing method for arbitrary view generation in camera sensor network
NASA Astrophysics Data System (ADS)
Tehrani, Mehrdad P.; Fujii, Toshiaki; Tanimoto, Masayuki
2003-05-01
A camera sensor network is a network in which each sensor node can capture video signals, process them, and communicate with other nodes. The processing task in this network is to generate an arbitrary view, which can be requested by a central node or a user. To avoid unnecessary communication between nodes in the camera sensor network and to speed up processing, we distribute the processing tasks among the nodes. In this method, each sensor node processes part of the interpolation algorithm to generate the interpolated image, using only local communication between nodes. The processing task in the camera sensor network is ray-space interpolation, an object-independent method based on MSE minimization using adaptive filtering. Two methods are proposed for distributing the processing tasks and sharing image data locally: Fully Image Shared Decentralized Processing (FIS-DP) and Partially Image Shared Decentralized Processing (PIS-DP). Comparison of the proposed methods with Centralized Processing (CP) shows that FIS-DP has the highest processing speed, followed by PIS-DP, with CP the lowest. The communication rates of CP and PIS-DP are almost the same, and both are lower than that of FIS-DP. PIS-DP is therefore recommended for its better overall performance than CP and FIS-DP.
Dong, Qi; Elliott, Michael R; Raghunathan, Trivellore E
2014-06-01
Outside of the survey sampling literature, samples are often assumed to be generated by a simple random sampling process that produces independent and identically distributed (IID) samples. Many statistical methods are developed largely in this IID world. Application of these methods to data from complex sample surveys without making allowance for the survey design features can lead to erroneous inferences. Hence, much time and effort have been devoted to developing statistical methods that analyze complex survey data and account for the sample design. This issue is particularly important when generating synthetic populations using finite population Bayesian inference, as is often done in missing data or disclosure risk settings, or when combining data from multiple surveys. By extending previous work in the finite population Bayesian bootstrap literature, we propose a method to generate synthetic populations from a posterior predictive distribution in a fashion that inverts the complex sampling design features and generates simple random samples from a superpopulation point of view, adjusting the complex data so that they can be analyzed as simple random samples. We consider a simulation study with a stratified, clustered, unequal-probability-of-selection sample design, and use the proposed nonparametric method to generate synthetic populations for the 2006 National Health Interview Survey (NHIS) and the Medical Expenditure Panel Survey (MEPS), which are stratified, clustered, unequal-probability-of-selection sample designs.
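A heavily simplified IID version of the synthetic-population idea is Rubin's Bayesian bootstrap: draw Dirichlet(1,...,1) probabilities over the observed units, then resample a full population from them. The proposed method additionally inverts stratification, clustering, and unequal selection probabilities, which this sketch deliberately omits; the function name and arguments are illustrative.

```python
import numpy as np

def bayesian_bootstrap_population(sample, pop_size, seed=0):
    """Simplified IID sketch of synthetic-population generation via
    Rubin's Bayesian bootstrap. Each synthetic population is a draw
    from the posterior predictive distribution under a noninformative
    Dirichlet prior on the category probabilities of observed units."""
    rng = np.random.default_rng(seed)
    sample = np.asarray(sample)
    # posterior draw of the selection probabilities over observed units
    probs = rng.dirichlet(np.ones(len(sample)))
    # resample an entire synthetic population from those probabilities
    idx = rng.choice(len(sample), size=pop_size, p=probs)
    return sample[idx]
```

Repeating the call with different seeds yields multiple synthetic populations, mimicking the multiple-implicates workflow used in disclosure-risk and missing-data settings.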
PVUSA: The value of photovoltaics in the distribution system. The Kerman Grid-Support Project
NASA Astrophysics Data System (ADS)
Wenger, Howard J.; Hoff, Thomas E.
1995-05-01
As part of the Photovoltaics for Utility Scale Applications (PVUSA) Project, Pacific Gas and Electric Company (PG&E) built the Kerman 500-kW photovoltaic power plant. Located near the end of a distribution feeder in a rural section of Fresno County, the plant was built not so much to demonstrate PV technology as to evaluate its interaction with the local distribution grid and quantify available nontraditional grid-support benefits (those other than energy and capacity). As demand for new generation began to languish in the 1980s, and the siting and permitting of power plants and transmission lines became more involved, utilities began considering smaller, distributed power sources. Potential benefits include shorter construction lead time, less capital outlay, and better utilization of existing assets. The results of a 1990/1991 PG&E study of the benefits of a PV system to the distribution grid prompted the PVUSA Project to construct a plant at Kerman. Completed in 1993, the plant is believed to be the first specifically built to evaluate the multiple benefits to the grid of a strategically sited plant. Each of nine discrete benefits was evaluated in detail, first by establishing the technical impact and then by translating the results into present economic value. The benefits span the entire system, from the distribution feeder to the generation fleet. This work breaks new ground in the evaluation of distributed resources, and suggests that resource planning practices be expanded to account for these nontraditional benefits.
An ultra-sparse code underlies the generation of neural sequences in a songbird
NASA Astrophysics Data System (ADS)
Hahnloser, Richard H. R.; Kozhevnikov, Alexay A.; Fee, Michale S.
2002-09-01
Sequences of motor activity are encoded in many vertebrate brains by complex spatio-temporal patterns of neural activity; however, the neural circuit mechanisms underlying the generation of these pre-motor patterns are poorly understood. In songbirds, one prominent site of pre-motor activity is the forebrain robust nucleus of the archistriatum (RA), which generates stereotyped sequences of spike bursts during song and recapitulates these sequences during sleep. We show that the stereotyped sequences in RA are driven from nucleus HVC (high vocal centre), the principal pre-motor input to RA. Recordings of identified HVC neurons in sleeping and singing birds show that individual HVC neurons projecting onto RA neurons produce bursts sparsely, at a single, precise time during the RA sequence. These HVC neurons burst sequentially with respect to one another. We suggest that at each time in the RA sequence, the ensemble of active RA neurons is driven by a subpopulation of RA-projecting HVC neurons that is active only at that time. As a population, these HVC neurons may form an explicit representation of time in the sequence. Such a sparse representation, a temporal analogue of the `grandmother cell' concept for object recognition, eliminates the problem of temporal interference during sequence generation and learning attributed to more distributed representations.
NASA Astrophysics Data System (ADS)
Ickert, R. B.; Mundil, R.
2012-12-01
Dateable minerals (especially zircon U-Pb) that crystallized at high temperatures but have been redeposited pose both unique opportunities and challenges for geochronology. Although they have the potential to provide useful information on the depositional age of their host rocks, their relationship to the host is not always well constrained. For example, primary volcanic deposits will often have a lag time (the time between eruption and deposition) that is smaller than can be resolved using radiometric techniques, so the ages of eruption and deposition will be coincident within uncertainty. Alternatively, ordinary clastic sedimentary rocks will usually have a long and variable lag time, even for the youngest minerals. Intermediate cases, for example moderately reworked volcanogenic material, will have a short but unknown lag time. A compounding problem with U-Pb zircon is that the residence time of crystals in their host magma chamber (the time between crystallization and eruption) can be long and variable, even within the products of a single eruption. In cases where the lag and/or residence time is suspected to be large relative to the precision of the date, a common objective is to determine the minimum age of a sample of dates, in order to constrain the maximum age of deposition of the host rock. However, both the extraction of that age and the assignment of a meaningful uncertainty are not straightforward. A number of ad hoc techniques have been employed in the literature, which may be appropriate for particular data sets or specific problems, but may yield biased or misleading results. Ludwig (2012) has developed an objective, statistically justified method for determining the distribution of the minimum age, but it has not been widely adopted. Here we extend this algorithm with a bootstrap (which can show the effect - if any - of the sampling distribution itself).
This method has a number of desirable characteristics: It can incorporate all data points while being resistant to outliers, it utilizes the measurement uncertainties, and it does not require the assumption that any given cluster of data represents a single geological event. In brief, the technique generates a synthetic distribution from the input data by resampling with replacement (a bootstrap). Each resample is a random selection from a Gaussian distribution defined by the mean and uncertainty of the data point. For this distribution, the minimum value is calculated. This procedure is repeated many times (>1000) and a distribution of minimum values is generated, from which a confidence interval can be constructed. We demonstrate the application of this technique using natural and synthetic datasets, show the advantages and limitations, and relate it to other methods. We emphasize that this estimate remains strictly a minimum age - as with any other estimate that does not explicitly incorporate lag or residence time, it will not reflect a depositional age if the lag/residence time is larger than the uncertainty of the estimate. We recommend that this or similar techniques be considered by geochronologists. Ludwig, K.R., 2012. Isoplot 3.75, A geochronological toolkit for Microsoft Excel; Berkeley Geochronology Center Special Publication no. 5
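The bootstrap procedure described above translates almost directly into code: resample the dates with replacement, perturb each draw by its Gaussian uncertainty, record the minimum, and repeat to build a confidence interval. The function below is a sketch of that recipe, not Ludwig's Isoplot implementation; parameter names and the confidence level are illustrative.

```python
import numpy as np

def bootstrap_minimum_age(ages, sigmas, n_boot=5000, ci=95, seed=0):
    """Bootstrap distribution of the minimum age: each replicate
    resamples the dates with replacement, draws each date from a
    Gaussian defined by its mean and 1-sigma uncertainty, and
    records the minimum of the replicate."""
    rng = np.random.default_rng(seed)
    ages = np.asarray(ages, float)
    sigmas = np.asarray(sigmas, float)
    n = len(ages)
    mins = np.empty(n_boot)
    for k in range(n_boot):
        idx = rng.integers(0, n, size=n)            # resample with replacement
        draws = rng.normal(ages[idx], sigmas[idx])  # propagate uncertainties
        mins[k] = draws.min()
    half = (100 - ci) / 2
    lo, hi = np.percentile(mins, [half, 100 - half])
    return mins.mean(), (lo, hi)
```

As the text stresses, the result is strictly a minimum age: the interval says nothing about lag or residence time, and outliers influence only the replicates in which they are drawn.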
System and method for islanding detection and prevention in distributed generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhowmik, Shibashis; Mazhari, Iman; Parkhideh, Babak
Various examples are directed to systems and methods for detecting an islanding condition at an inverter configured to couple a distributed generation system to an electrical grid network. A controller may determine a command frequency and a command frequency variation. The controller may determine that the command frequency variation indicates a potential islanding condition and send to the inverter an instruction to disconnect the distributed generation system from the electrical grid network. When the distributed generation system is disconnected from the electrical grid network, the controller may determine whether the grid network is valid.
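A minimal sketch of the command-frequency-variation test described above; the window length, threshold, and class names are illustrative assumptions, not values from the patent:

```python
from collections import deque

class IslandingDetector:
    """Flag a potential islanding condition when the variation of the
    inverter command frequency over a sliding window exceeds a
    threshold (window size and threshold are illustrative)."""

    def __init__(self, window=10, max_variation_hz=0.5):
        self.history = deque(maxlen=window)
        self.max_variation_hz = max_variation_hz

    def update(self, command_frequency_hz):
        # True means: instruct the inverter to disconnect
        self.history.append(command_frequency_hz)
        variation = max(self.history) - min(self.history)
        return variation > self.max_variation_hz

det = IslandingDetector()
grid_tied = [det.update(f) for f in [60.00, 60.01, 59.99, 60.02]]
islanded = det.update(61.0)  # sudden frequency drift after grid loss
```

While grid-tied, the command frequency stays pinned near the grid frequency and the variation is small; after grid loss the frequency drifts, the variation exceeds the threshold, and the detector trips.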
Microtubule behavior in the growth cones of living neurons during axon elongation
1991-01-01
To understand how microtubules are generated in the growth cone, we have imaged fluorescently tagged microtubules in living frog embryonic neurons. The neurons were labeled by injecting rhodamine-labeled tubulin into the fertilized egg and explanting the neurons from the neural tube. Microtubules extend deep into the growth cone periphery and adopt three characteristic distributions: (a) dispersed and splayed throughout much of the growth cone; (b) looped and apparently contorted by compression; and (c) bundled into tight arrays. These distributions interconvert on a time scale of several minutes, and these interconversions are correlated with the behavior of the growth cone. We observed microtubule growth and shrinkage in growth cones, but were unable to determine their contribution to net assembly. However, translocation of polymer from the axon appears to be a major mechanism of generating new polymer in the growth cone, while bundling of microtubules in the growth cone appears to be the critical step in generating new axon. Neurons that were about to turn spontaneously generated microtubules in the future direction of growth, suggesting that orientation of microtubules might be an important early step in neuronal pathfinding. PMID:1918145
A Distribution-class Locational Marginal Price (DLMP) Index for Enhanced Distribution Systems
NASA Astrophysics Data System (ADS)
Akinbode, Oluwaseyi Wemimo
The smart grid initiative is the impetus behind changes that are expected to culminate in an enhanced distribution system with the communication and control infrastructure to support advanced distribution system applications and resources such as distributed generation, energy storage systems, and price-responsive loads. This research proposes a distribution-class analog of the transmission LMP (DLMP) as an enabler of the advanced applications of the enhanced distribution system. The DLMP is envisioned as a control signal that can incentivize distribution system resources to behave optimally in a manner that benefits economic efficiency and system reliability, and that can optimally couple the transmission and distribution systems. The DLMP is calculated from a two-stage optimization problem: a transmission system OPF and a distribution system OPF. An iterative framework that ensures accurate representation of the distribution system's price-sensitive resources for the transmission system problem, and vice versa, is developed, and its convergence behavior is discussed. As part of the DLMP calculation framework, a DCOPF formulation that endogenously captures the effect of real power losses is discussed. The formulation uses piecewise linear functions to approximate losses. This thesis shows, with theoretical proofs, that the loss approximation technique breaks down when non-positive DLMPs/LMPs occur, and discusses a mixed integer linear programming formulation that corrects the breakdown. The DLMP is numerically illustrated in traditional and enhanced distribution systems, and its superiority to contemporary pricing mechanisms is demonstrated using price-responsive loads. Results show that the impact of the inaccuracy of contemporary pricing schemes becomes significant as flexible resources increase. At high elasticity, aggregate load consumption deviated from the optimal consumption by up to about 45 percent when using a flat or time-of-use rate.
Individual load consumption deviated by up to 25 percent when using a real-time price. The superiority of the DLMP is more pronounced when important distribution network conditions are not reflected by contemporary prices. The individual load consumption incentivized by the real-time price deviated by up to 90 percent from the optimal consumption in a congested distribution network. While the DLMP internalizes congestion management, the consumption incentivized by the real-time price caused overloads.
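The piecewise linear loss approximation used in loss-aware DCOPF formulations can be illustrated as follows. This is a sketch of the general technique (quadratic losses r·f² approximated by segments whose slopes are taken at segment midpoints), not the thesis's exact formulation, and the line parameters are hypothetical:

```python
def piecewise_loss(flow, r, segments=5, f_max=1.0):
    """Piecewise-linear approximation of quadratic line losses r*f^2.

    flow: per-unit line flow; r: per-unit line resistance.
    Segment count and flow limit are illustrative choices.
    """
    width = f_max / segments
    loss = 0.0
    remaining = abs(flow)
    for k in range(segments):
        # slope of segment k = d(r*f^2)/df at the segment midpoint
        slope = r * (2 * k + 1) * width
        step = min(remaining, width)
        loss += slope * step
        remaining -= step
        if remaining <= 0:
            break
    return loss

exact = 0.02 * 0.63 ** 2              # r * f^2
approx = piecewise_loss(0.63, r=0.02)  # within a few percent of exact
```

Because the segment slopes increase monotonically, an LP solver fills the cheap (low-slope) segments first, so the approximation is loaded in the correct order whenever prices are positive; the breakdown discussed above occurs precisely when that incentive disappears.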
This report is a generic verification protocol by which EPA’s Environmental Technology Verification program tests newly developed equipment for distributed generation of electric power, usually micro-turbine generators and internal combustion engine generators. The protocol will ...
NASA Astrophysics Data System (ADS)
Manigrasso, M.; Stabile, L.; Avino, P.; Buonanno, G.
2013-03-01
Aerosol generation events due to combustion processes are characterized by high particle emissions in the nucleation mode range. Such particles have very short atmospheric lifetimes, leading to rapid decay in time and space away from the emission point. Therefore, the deposited fraction of inhaled particles (the dose) also changes: close to the emission source, high short-term peak exposures occur. The related exposure estimates should therefore rely on measurements of aerosol number-size distributions able to track rapid aerosol dynamics. In order to study the influence of time resolution on such estimates, simultaneous measurements were carried out with Scanning Mobility Particle Sizer (SMPS) and Fast Mobility Particle Sizer (FMPS) spectrometers during particle generation events in both indoor (cooking activities) and outdoor (airstrip and urban street canyon) microenvironments. Aerosol size distributions in the range 16-520 nm were measured by SMPS and FMPS at frequencies of 0.007 s⁻¹ and 1 s⁻¹, respectively. Based on the two datasets, respiratory dosimetry estimates were made using the deposition model of the International Commission on Radiological Protection (ICRP). During cooking activities, SMPS measurements give an approximate representation of aerosol temporal evolution, so the related instant doses can be approximated to a fair degree. In the two outdoor microenvironments considered, aerosol size distributions change rapidly: the FMPS is able to follow this evolution, whereas the SMPS is not. The high short-term peak concentrations, and the consequent respiratory doses, revealed by the FMPS data are largely missed by the SMPS, which cannot track the fast aerosol changes. Because the health relevance of such short peak exposures has not been thoroughly investigated in the scientific literature, the present paper provides highly time- and size-resolved dosimetry estimates to help address this question.
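A dose estimate of this kind reduces to summing, over size bins, the product of number concentration and deposited fraction, scaled by the inhaled air volume. The sketch below uses illustrative bin concentrations and deposition fractions, not the full ICRP lung model:

```python
def deposited_dose(number_conc, deposition_frac,
                   ventilation_m3_per_min, dt_min):
    """Number of particles deposited in the respiratory tract over
    dt_min minutes.

    number_conc: particle number concentration per size bin (cm^-3)
    deposition_frac: deposited fraction per size bin (illustrative
        values, not the full ICRP model)
    ventilation_m3_per_min: minute ventilation (m^3/min)
    """
    cm3_per_m3 = 1e6
    per_cm3 = sum(n * df for n, df in zip(number_conc, deposition_frac))
    return per_cm3 * ventilation_m3_per_min * cm3_per_m3 * dt_min

# three illustrative size bins (e.g. ~20, ~100, ~400 nm)
conc = [5.0e4, 2.0e4, 5.0e3]   # particles/cm^3
dep  = [0.5, 0.3, 0.2]         # assumed deposition fractions
dose = deposited_dose(conc, dep, ventilation_m3_per_min=0.015, dt_min=1.0)
```

Evaluating this sum once per instrument scan is what makes time resolution matter: a 1 s⁻¹ instrument resolves the concentration spike inside a single scan of a slower instrument, and the short-lived peak dose with it.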
A Nonparametric Approach For Representing Interannual Dependence In Monthly Streamflow Sequences
NASA Astrophysics Data System (ADS)
Sharma, A.; Oneill, R.
The estimation of risks associated with water management plans requires generation of synthetic streamflow sequences. The mathematical algorithms used to generate these sequences at monthly time scales are found lacking in two main respects: an inability to preserve dependence attributes, particularly at large (seasonal to interannual) time lags; and a poor representation of observed distributional characteristics, in particular strong asymmetry or multimodality in the probability density function. Proposed here is an alternative that naturally incorporates both observed dependence and distributional attributes in the generated sequences. Use of a nonparametric framework provides an effective means for representing the observed probability distribution, while the use of a 'variable kernel' ensures accurate modeling of streamflow data sets that contain a substantial number of zero flow values. A careful selection of prior flows imparts the appropriate short-term memory, while use of an 'aggregate' flow variable allows representation of interannual dependence. The nonparametric simulation model is applied to monthly flows from the Beaver River near Beaver, Utah, USA, and the Burrendong dam inflows, New South Wales, Australia. Results indicate that while the use of traditional simulation approaches leads to an inaccurate representation of dependence at long (annual and interannual) time scales, the proposed model can simulate both short- and long-term dependence. As a result, the proposed model ensures a significantly improved representation of reservoir storage statistics, particularly for systems influenced by long droughts.
It is important to note that the proposed method offers a simpler and better alternative to conventional disaggregation models as: (a) a separate annual flow series is not required, (b) stringent assumptions relating annual and monthly flows are not needed, and (c) the method does not require the specification of a "water year", instead ensuring that the sum of any sequence of flows lasting twelve months will result in the type of dependence that is observed in the historical annual flow series.
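The conditional resampling idea behind such models can be illustrated with a simplified nearest-neighbour analogue that conditions only on the previous month's flow; the paper's variable-kernel formulation and aggregate-flow conditioning are not reproduced here:

```python
import random

def knn_generate(flows, n_steps, k=3, seed=1):
    """Generate a synthetic flow sequence by conditional
    nearest-neighbour resampling: find the k historical flows closest
    to the current state and sample the successor of one of them.
    k and seed are illustrative choices."""
    rng = random.Random(seed)
    current = flows[0]
    out = [current]
    for _ in range(n_steps - 1):
        # neighbours of the current state among historical predecessors
        idx = sorted(range(len(flows) - 1),
                     key=lambda i: abs(flows[i] - current))[:k]
        # weight closer neighbours more heavily (a simple 1/rank kernel)
        weights = [1.0 / (r + 1) for r in range(len(idx))]
        choice = rng.choices(idx, weights=weights)[0]
        current = flows[choice + 1]   # successor of the chosen neighbour
        out.append(current)
    return out

history = [12.0, 15.5, 30.2, 22.1, 9.8, 11.4, 28.7, 25.0, 10.5, 13.2]
synthetic = knn_generate(history, n_steps=24)
```

Because the generator only resamples observed transitions, it reproduces the empirical distribution and short-term dependence without assuming Gaussian marginals; a kernel (as in the paper) additionally smooths between observed values.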
GOES-R GS Product Generation Infrastructure Operations
NASA Astrophysics Data System (ADS)
Blanton, M.; Gundy, J.
2012-12-01
GOES-R GS Product Generation Infrastructure Operations: The GOES-R Ground System (GS) will produce a much larger set of products, with higher data density, than previous GOES systems. This requires considerably greater compute and memory resources to achieve the necessary latency and availability for these products. Over time, new algorithms may be added and existing ones removed or updated, but the GOES-R GS cannot go down during this time. To meet these processing needs, the Harris Corporation will implement a Product Generation (PG) infrastructure that is scalable, extensible, modular, and reliable. The core of the PG infrastructure is the Service Based Architecture (SBA), which includes the Distributed Data Fabric (DDF). The SBA is the middleware that encapsulates and manages the science algorithms that generate products. It is divided into three parts: the Executive, which manages and configures an algorithm as a service; the Dispatcher, which provides data to the algorithm; and the Strategy, which determines when the algorithm can execute with the available data. The SBA is a distributed architecture, with services connected to each other over a compute grid, and is highly scalable. This plug-and-play architecture allows algorithms to be added, removed, or updated without affecting any other services or software currently running and producing data. Because algorithms require product data from other algorithms, scalable and reliable messaging is necessary. The SBA uses the DDF to provide this data communication layer between algorithms. The DDF provides an abstract interface over a distributed, persistent, multi-layered storage system (memory-based caching above disk-based storage) and an event system that allows algorithm services to know when data is available and to get the data they need to begin processing when they need it.
Together, the SBA and the DDF provide a flexible, high performance architecture that can meet the needs of product processing now and as they grow in the future.
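The Executive/Dispatcher/Strategy decomposition described above can be sketched as follows; the class and product names are illustrative, not the Harris implementation:

```python
class Strategy:
    """Decide when an algorithm can run: here, when all required
    input products are present (an illustrative readiness rule)."""
    def __init__(self, required):
        self.required = set(required)
    def ready(self, available):
        return self.required <= set(available)

class Dispatcher:
    """Collect input products for a service from the data fabric."""
    def __init__(self):
        self.data = {}
    def receive(self, name, value):
        self.data[name] = value

class Executive:
    """Run the wrapped algorithm once the strategy says it is ready."""
    def __init__(self, algorithm, strategy, dispatcher):
        self.algorithm = algorithm
        self.strategy = strategy
        self.dispatcher = dispatcher
    def step(self):
        if self.strategy.ready(self.dispatcher.data):
            return self.algorithm(self.dispatcher.data)
        return None  # inputs not yet available

# toy "product" algorithm: combine two upstream products
service = Executive(lambda d: d["radiance"] + d["calibration"],
                    Strategy(["radiance", "calibration"]),
                    Dispatcher())
before = service.step()                      # inputs missing -> None
service.dispatcher.receive("radiance", 10)
service.dispatcher.receive("calibration", 2)
after = service.step()
```

Separating readiness (Strategy) from data delivery (Dispatcher) and lifecycle (Executive) is what makes the plug-and-play behavior possible: swapping an algorithm touches only its own service triple.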
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trull, J.; Wang, B.; Parra, A.
2015-06-01
Pulse compression in a dispersive strontium barium niobate crystal with a random size and distribution of anti-parallel oriented nonlinear domains is observed via transverse second harmonic generation. The dependence of the transverse width of the second harmonic trace on position along the propagation direction allows determination of the initial chirp and duration of pulses in the femtosecond regime. This technique permits real-time analysis of the pulse evolution and facilitates fast in-situ correction of pulse chirp acquired during propagation through an optical system.
Simulative research on generating UWB signals by all-optical BPF
NASA Astrophysics Data System (ADS)
Yang, Chunyong; Hou, Rui; Chen, Shaoping
2007-11-01
Simulation is used to investigate the generation and distribution of ultra-wideband (UWB) signals over fiber transmission. Numerical results for the system's frequency response show band-pass filter characteristics; the shorter the wavelength, the wider the bandwidth at lower frequencies. Transmission performance simulation for a 12.5 Gb/s pseudo-random sequence also shows that the Gaussian pulse signal, after transport in fiber, resembles the FCC UWB mask in the time domain and the FCC spectrum specification in the frequency domain.
Remote distribution of a mode-locked pulse train with sub 40-as jitter
NASA Astrophysics Data System (ADS)
Chen, Yi-Fei; Jiang, Jie; Jones, David J.
2006-12-01
Remote transfer of an ultralow-jitter microwave frequency reference signal is demonstrated using the pulse trains generated by a mode-locked fiber laser. The timing jitter in a ~30-m fiber link is reduced to 38 attoseconds (as), integrated over a bandwidth from 1 Hz to 10 MHz, via active stabilization, which represents a significant improvement over previously reported jitter performance. Our approach uses all-optical generation of the synchronization error signal and an accompanying out-of-loop optical detection technique to verify the jitter performance.
Review of Strategies and Technologies for Demand-Side Management on Isolated Mini-Grids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harper, Meg
This review provides an overview of strategies and currently available technologies used for demand-side management (DSM) on mini-grids throughout the world. For the purposes of this review, mini-grids are defined as village-scale electricity distribution systems powered by small local generation sources and not connected to a main grid. Mini-grids range in size from less than 1 kW to several hundred kW of installed generation capacity and may utilize different generation technologies, such as micro-hydro, biomass gasification, solar, wind, diesel generators, or a hybrid combination of any of these. This review will primarily refer to AC mini-grids, though much of the discussion could apply to DC grids as well. Many mini-grids include energy storage, though some rely solely on real-time generation.
Distributed Generation Market Demand Model | NREL
The Distributed Generation Market Demand (dGen) model simulates the potential adoption of distributed energy resources (DERs) for residential, commercial, and industrial entities. The dGen model can help develop deployment forecasts for distributed resources, including sensitivity to
NASA Astrophysics Data System (ADS)
Prada, D. A.; Sanabria, M. P.; Torres, A. F.; Álvarez, M. A.; Gómez, J.
2018-04-01
The study of persistence in the time series of seismic events in two of the most important networks, Hindu Kush in Afghanistan and Los Santos, Santander in Colombia, generates great interest due to their high telluric activity. The data were taken from the global seismological network. The presence of a Gaussian distribution was tested using the Jarque-Bera test; because the distribution of the series was asymmetric and not mesokurtic, the Hurst coefficient was calculated using the rescaled range method. From this, the fractal dimension associated with these time series was found, making it possible to determine the persistence, antipersistence, and volatility of these phenomena.
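The rescaled range (R/S) calculation referred to above can be sketched as follows. This is a minimal illustration on synthetic white noise, for which the theoretical Hurst exponent is 0.5, though small-sample R/S estimates are known to be biased upward:

```python
import math
import random

def hurst_rs(series, min_chunk=8):
    """Estimate the Hurst exponent by rescaled-range (R/S) analysis:
    compute the mean R/S over non-overlapping windows of several
    sizes, then fit the slope of log(R/S) against log(n)."""
    n = len(series)
    log_sizes, log_rs = [], []
    size = min_chunk
    while size <= n // 2:
        rs_sum, count = 0.0, 0
        for start in range(0, n - size + 1, size):
            chunk = series[start:start + size]
            mean = sum(chunk) / size
            cum, mx, mn, dev = 0.0, 0.0, 0.0, 0.0
            for x in chunk:
                cum += x - mean          # cumulative deviation
                mx, mn = max(mx, cum), min(mn, cum)
                dev += (x - mean) ** 2
            s = math.sqrt(dev / size)    # chunk standard deviation
            if s > 0:
                rs_sum += (mx - mn) / s  # rescaled range R/S
                count += 1
        if count:
            log_sizes.append(math.log(size))
            log_rs.append(math.log(rs_sum / count))
        size *= 2
    # least-squares slope of log(R/S) vs log(n) is the Hurst estimate
    k = len(log_sizes)
    sx, sy = sum(log_sizes), sum(log_rs)
    sxx = sum(x * x for x in log_sizes)
    sxy = sum(x * y for x, y in zip(log_sizes, log_rs))
    return (k * sxy - sx * sy) / (k * sxx - sx * sx)

rng = random.Random(0)
noise = [rng.gauss(0, 1) for _ in range(1024)]
h = hurst_rs(noise)  # theory predicts ~0.5 for uncorrelated noise
```

An estimate above 0.5 indicates persistence, below 0.5 antipersistence, and the fractal dimension of the series follows as D = 2 - H.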
Feasibility of satellite quantum key distribution
NASA Astrophysics Data System (ADS)
Bonato, C.; Tomaello, A.; Da Deppo, V.; Naletto, G.; Villoresi, P.
2009-04-01
In this paper, we present a novel analysis of the feasibility of quantum key distribution between a LEO satellite and a ground station. First of all, we study signal propagation through a turbulent atmosphere for uplinks and downlinks, discussing the contribution of beam spreading and beam wandering. Then we introduce a model for the background noise of the channel during night-time and day-time, calculating the signal-to-noise ratio for different configurations. We also discuss the expected error-rate due to imperfect polarization compensation in the channel. Finally, we calculate the expected key generation rate of a secure key for different configurations (uplink, downlink) and for different protocols (BB84 with and without decoy states, entanglement-based Ekert91 protocol).