Distilled Water Distribution Systems. Laboratory Design Notes.
ERIC Educational Resources Information Center
Sell, J.C.
Factors concerning water distribution systems, including an evaluation of materials and a recommendation of those best suited for service in typical facilities, are discussed. Several installations are examined to bring out typical features of selected applications. The following system types are included--(1) industrial…
NASA Astrophysics Data System (ADS)
Bin, Che; Ruoying, Yu; Dongsheng, Dang; Xiangyan, Wang
2017-05-01
Distributed Generation (DG) integrated into the network causes harmonic pollution, which can damage electrical devices and disturb the normal operation of the power system. Moreover, because wind and solar irradiation are random, the output of DG is random as well, making the harmonics generated by the DG uncertain. Probabilistic methods are therefore needed to analyse the impact of DG integration. In this work we studied the probabilistic distribution of harmonic voltage and the harmonic distortion in a distribution network after integration of a distributed photovoltaic (DPV) system under different weather conditions, mainly sunny, cloudy, rainy and snowy days. The probabilistic distribution function of the DPV output power under each typical weather condition was obtained by maximum likelihood parameter estimation. The Monte-Carlo simulation method was then adopted to calculate the probabilistic distribution of harmonic voltage content at different harmonic orders, as well as the total harmonic distortion (THD), under each typical weather condition. The case study was based on the IEEE 33-bus system, and the resulting harmonic voltage content distributions and THD values under the typical weather conditions were compared.
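The Monte-Carlo step described above can be sketched as follows; the Beta parameters per weather type, the rated output, and the per-order harmonic ratios are all invented placeholders, not the paper's fitted values:

```python
import math
import random

# Hypothetical sketch: DPV output sampled from a Beta distribution per
# weather type; harmonic content assumed proportional to output.
# All parameter values below are invented placeholders.
random.seed(0)

WEATHER_BETA = {"sunny": (5.0, 2.0), "cloudy": (2.0, 2.0),
                "rainy": (1.5, 4.0), "snowy": (1.2, 5.0)}
P_RATED = 1.0                                # rated DPV output, pu
H_RATIO = {5: 0.04, 7: 0.03, 11: 0.015}      # assumed content per harmonic order

def thd_sample(weather):
    a, b = WEATHER_BETA[weather]
    p = P_RATED * random.betavariate(a, b)   # one random DPV output draw
    harmonics = [r * p for r in H_RATIO.values()]
    return math.sqrt(sum(h * h for h in harmonics))  # THD over the orders

def thd_percentile(weather, q=0.95, n=20000):
    samples = sorted(thd_sample(weather) for _ in range(n))
    return samples[int(q * n)]

for w in WEATHER_BETA:
    print(w, round(thd_percentile(w), 4))
```

Sorting the samples and reading off a quantile gives the empirical distribution directly, which is the usual payoff of the Monte-Carlo approach when the harmonic model is not analytically tractable.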
Equalizing Matching Grants and the Allocative and Distributive Objectives of Public School Financing
ERIC Educational Resources Information Center
Gatti, James F.; Tashman, Leonard J.
1976-01-01
Argues that typical Equalizing Matching Grant (EMG) systems for distributing state school aid cannot be expected to achieve the allocative and distributive goals of school finance. Derives a generalized EMG system and specific school aid formula that satisfy the allocative and distributive criteria. Available from: NTA-TIA, 21 East State Street,…
NASA Astrophysics Data System (ADS)
Xiong, Zhi; Zhu, J. G.; Xue, B.; Ye, Sh. H.; Xiong, Y.
2013-10-01
As a novel network coordinate measurement system based on multi-directional positioning, the workspace Measurement and Positioning System (wMPS) offers good parallelism, a wide measurement range and high measurement accuracy, which makes it a research hotspot and an important development direction in the field of large-scale measurement. Since station deployment has a significant impact on measurement range and accuracy, and also constrains the cost of use, methods for optimizing station deployment were researched in this paper. First, a positioning error model was established. Then, focusing on a small network consisting of three stations, typical deployments and their error distribution characteristics were studied. Finally, a simulated fuselage was measured at an industrial site using the typical deployments and the results were compared with a laser tracker. The comparison shows that, under existing prototype conditions, the I_3 deployment, in which the three stations lie on a straight line, has an average error of 0.30 mm and a maximum error of 0.50 mm over a range of 12 m, while the C_3 deployment, in which the three stations are uniformly distributed along the half-circumference of a circle, has an average error of 0.17 mm and a maximum error of 0.28 mm. The C_3 deployment thus controls precision better than the I_3 type. This research provides effective theoretical support for global optimization of measurement networks in future work.
A Distributed Computing Network for Real-Time Systems
1980-11-03
NUSC Technical Document 5932, 3 November 1980. A Distributed Computing Network for Real-Time Systems. Gordon E. Morrison, Combat Control… megabit, 10 megabit, and 20 megabit networks. These values are well within the state-of-the-art and are typical for real-time systems similar to
Ontology-Based Peer Exchange Network (OPEN)
ERIC Educational Resources Information Center
Dong, Hui
2010-01-01
In current Peer-to-Peer networks, distributed and semantic-free indexing is widely used by systems adopting "Distributed Hash Table" ("DHT") mechanisms. Although such systems typically resolve a user query quickly and deterministically, they support only a very narrow search scheme, namely the exact hash key match. Furthermore, DHT systems put…
An Autonomous Distributed Fault-Tolerant Local Positioning System
NASA Technical Reports Server (NTRS)
Malekpour, Mahyar R.
2017-01-01
We describe a fault-tolerant, GPS-independent (Global Positioning System) distributed autonomous positioning system for static/mobile objects and present solutions for providing highly-accurate geo-location data for those objects in dynamic environments. The reliability and accuracy of a positioning system fundamentally depend on two factors: its timeliness in broadcasting signals and the knowledge of its geometry, i.e., the locations of and distances between the beacons. Existing distributed positioning systems either synchronize to a common external source like GPS or establish their own time synchrony using a master-slave scheme, designating a particular beacon as the master to which the other beacons synchronize, resulting in a single point of failure. Another drawback of existing positioning systems is that they do not address various fault manifestations, in particular communication link failures, which, as in wireless networks, increasingly dominate over process failures and are typically transient and mobile, in the sense that they affect different messages to/from different processes over time.
NASA Astrophysics Data System (ADS)
Li, Jinze; Qu, Zhi; He, Xiaoyang; Jin, Xiaoming; Li, Tie; Wang, Mingkai; Han, Qiu; Gao, Ziji; Jiang, Feng
2018-02-01
Large-scale integration of distributed generation can relieve current environmental pressure while increasing the complexity and uncertainty of the overall distribution system. Rational planning of distributed generation can effectively improve the system voltage level. To this end, the specific impact of typical distributed generation on distribution network power quality was analyzed, and an improved particle swarm optimization algorithm (IPSO) with modified learning factors and inertia weight was proposed to solve distributed generation planning for the distribution network, improving the local and global search performance of the algorithm. Results show that the proposed method can reduce the system network loss and improve the economics of system operation with distributed generation.
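A minimal sketch of an improved PSO of the kind the abstract describes, with a linearly decreasing inertia weight and time-varying learning factors; the objective here is a stand-in sphere function, not an actual load-flow network-loss evaluation:

```python
import random

# Illustrative IPSO sketch (not the paper's exact variant): the inertia
# weight shrinks linearly and the learning factors shift from cognitive
# to social over the iterations, one common way to trade global search
# early for local search late.
random.seed(1)

def objective(x):                       # placeholder for network loss
    return sum(xi * xi for xi in x)

def ipso(dim=3, n_particles=20, iters=200, lo=-5.0, hi=5.0):
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=objective)[:]
    for t in range(iters):
        w = 0.9 - 0.5 * t / iters       # linearly decreasing inertia weight
        c1 = 2.5 - 2.0 * t / iters      # cognitive learning factor shrinks
        c2 = 0.5 + 2.0 * t / iters      # social learning factor grows
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            if objective(pos[i]) < objective(pbest[i]):
                pbest[i] = pos[i][:]
                if objective(pbest[i]) < objective(gbest):
                    gbest = pbest[i][:]
    return gbest, objective(gbest)

best, loss = ipso()
print(loss)
```

In a real planning study the objective would run a load-flow over candidate DG sites and sizes; the swarm mechanics stay the same.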
DOT National Transportation Integrated Search
1994-09-01
This report presents a theoretical analysis predicting the temperature distribution, thermal deflections, and thermal stresses that may occur in typical steel Maglev guideways under the proposed Orlando FL thermal environment. Transient, finite eleme...
Maintaining a Distributed File System by Collection and Analysis of Metrics
NASA Technical Reports Server (NTRS)
Bromberg, Daniel
1997-01-01
AFS (originally, the Andrew File System) is a widely-deployed distributed file system product used by companies, universities, and laboratories world-wide. However, it is not trivial to operate: running an AFS cell is a formidable task. It requires a team of dedicated and experienced system administrators who must manage a user base numbering in the thousands, rather than the smaller range of 10 to 500 faced by the typical system administrator.
Evolution of user analysis on the grid in ATLAS
NASA Astrophysics Data System (ADS)
Dewhurst, A.; Legger, F.; ATLAS Collaboration
2017-10-01
More than one thousand physicists analyse data collected by the ATLAS experiment at the Large Hadron Collider (LHC) at CERN through 150 computing facilities around the world. Efficient distributed analysis requires optimal resource usage and the interplay of several factors: robust grid and software infrastructures, and system capability to adapt to different workloads. The continuous automatic validation of grid sites and the user support provided by a dedicated team of expert shifters have been proven to provide a solid distributed analysis system for ATLAS users. Typical user workflows on the grid, and their associated metrics, are discussed. Measurements of user job performance and typical requirements are also shown.
Design of Distributed Engine Control Systems with Uncertain Delay.
Liu, Xiaofeng; Li, Yanxi; Sun, Xu
2016-01-01
Future gas turbine engine control systems will be based on a distributed architecture in which the sensors and actuators are connected to the controllers via a communication network. The performance of distributed engine control (DEC) depends on the network performance. This study introduces a distributed control system architecture based on a networked cascade control system (NCCS). Typical turboshaft engine distributed controllers are designed based on the NCCS framework with H∞ output feedback under network-induced time delays and uncertain disturbances. Sufficient conditions for robust stability are derived via Lyapunov stability theory and a linear matrix inequality approach. Both numerical and hardware-in-the-loop simulations illustrate the effectiveness of the presented method. PMID:27669005
Intelligent decision support algorithm for distribution system restoration.
Singh, Reetu; Mehfuz, Shabana; Kumar, Parmod
2016-01-01
The distribution system is the revenue stream of an electric utility, so it must be restored as quickly as possible whenever a feeder or the complete system trips out due to a fault or any other cause. Further, uncertainty in the loads results in variations of the distribution network's parameters. Thus, an intelligent algorithm incorporating hybrid fuzzy-grey relational analysis, which can account for these uncertainties and compare sequences, is discussed for analysing and restoring the distribution system. Simulation studies rank the restoration plans for a typical distribution system to show the utility of the method. The algorithm also meets smart grid requirements by providing an automated restoration plan for partial or full blackouts of the network.
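The grey relational core of such a ranking step might look like the following sketch; the plans, criteria values, and equal weights are invented, and the fuzzification stage of the hybrid method is omitted:

```python
# Minimal grey relational analysis sketch for ranking restoration plans.
# Each plan is scored against an ideal reference sequence; the higher the
# grade, the closer the plan is to the ideal across all criteria.

def grey_relational_grades(plans, ideal, rho=0.5):
    # deviation sequences against the ideal (reference) plan
    deltas = [[abs(v - i) for v, i in zip(plan, ideal)] for plan in plans]
    dmin = min(min(row) for row in deltas)
    dmax = max(max(row) for row in deltas)
    grades = []
    for row in deltas:
        # grey relational coefficient per criterion, rho is the resolution
        coeffs = [(dmin + rho * dmax) / (d + rho * dmax) for d in row]
        grades.append(sum(coeffs) / len(coeffs))   # equal criterion weights
    return grades

# invented normalized criteria: restored load, low switching count, low losses
plans = [[0.9, 0.8, 0.7], [0.7, 0.9, 0.9], [0.6, 0.5, 0.6]]
ideal = [1.0, 1.0, 1.0]
grades = grey_relational_grades(plans, ideal)
ranking = sorted(range(len(plans)), key=lambda i: -grades[i])
print(grades, ranking)
```

The resolution coefficient rho = 0.5 is the conventional default; smaller values sharpen the discrimination between close plans.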
Grid-connected distributed solar power systems
NASA Astrophysics Data System (ADS)
Moyle, R.; Chernoff, H.; Schweizer, T.
This paper discusses some important, though often ignored, technical and economic issues of distributed solar power systems. Protecting the utility system and nonsolar customers requires suitable interface equipment. Purchase criteria must mirror reality: most analyses use life-cycle costing with low discount rates, while most buyers use short payback periods. Distributing, installing, and marketing small, distributed solar systems is more costly than most analyses estimate. Results show that certain local conditions and uncommon purchase considerations can combine to make small, distributed solar power attractive, but lower interconnect costs (per kW), lower marketing and product distribution costs, and more favorable purchase criteria make large, centralized solar energy more attractive. Specifically, the value of dispersed solar systems to investors and utilities can be higher than $2000/kW. However, typical residential owners place a value of well under $1000 on the installed system.
Resource Management for Distributed Parallel Systems
NASA Technical Reports Server (NTRS)
Neuman, B. Clifford; Rao, Santosh
1993-01-01
Multiprocessor systems should exist in the larger context of distributed systems, allowing multiprocessor resources to be shared by those that need them. Unfortunately, typical multiprocessor resource management techniques do not scale to large networks. The Prospero Resource Manager (PRM) is a scalable resource allocation system that supports the allocation of processing resources in large networks and multiprocessor systems. To manage resources in such distributed parallel systems, PRM employs three types of managers: system managers, job managers, and node managers. There exist multiple independent instances of each type of manager, reducing bottlenecks. The complexity of each manager is further reduced because each is designed to utilize information at an appropriate level of abstraction.
Technology survey of electrical power generation and distribution for MIUS application
NASA Technical Reports Server (NTRS)
Gill, W. L.; Redding, T. E.
1975-01-01
Candidate electrical generation power systems for the modular integrated utility systems (MIUS) program are described. Literature surveys were conducted to cover both conventional and exotic generators. Heat-recovery equipment associated with conventional power systems and supporting equipment are also discussed. Typical ranges of operating conditions and generating efficiencies are described. Power distribution is discussed briefly. Those systems that appear to be applicable to MIUS have been indicated, and the criteria for equipment selection are discussed.
Electric Power Distribution System Model Simplification Using Segment Substitution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reiman, Andrew P.; McDermott, Thomas E.; Akcakaya, Murat
2017-09-20
Quasi-static time-series (QSTS) simulation is used to simulate the behavior of distribution systems over long periods of time (typically hours to years). The technique involves repeatedly solving the load-flow problem for a distribution system model and is useful for distributed energy resource (DER) planning. When a QSTS simulation has a small time step and a long duration, the computational burden of the simulation can be a barrier to integration into utility workflows. One way to relieve the computational burden is to simplify the system model. The segment substitution method of simplifying distribution system models introduced in this paper offers model bus reduction of up to 98% with a simplification error as low as 0.2% (0.002 pu voltage). In contrast to existing methods of distribution system model simplification, which rely on topological inspection and linearization, the segment substitution method uses black-box segment data and an assumed simplified topology.
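The intuition behind substituting a chain of segments with one equivalent segment can be illustrated with a toy radial circuit; the impedances below are invented, and the actual method fits black-box segment data rather than summing known impedances:

```python
# Toy illustration of segment substitution on a radial chain: a series of
# line segments feeding a single end load is replaced by one equivalent
# segment that preserves the end-point voltage. Values are synthetic.

def voltage_at_end(v_source, z_segments, i_load):
    v = v_source
    for z in z_segments:
        v -= z * i_load        # same current flows through every series segment
    return v

z_detail = [0.01 + 0.02j, 0.015 + 0.025j, 0.02 + 0.03j]   # ohms per segment
z_equiv = [sum(z_detail)]                                  # one substituted segment

v_detail = voltage_at_end(240.0, z_detail, 30.0)
v_simple = voltage_at_end(240.0, z_equiv, 30.0)
print(abs(v_detail - v_simple))
```

For a single end load the substitution is exact; the interesting part of the published method is preserving accuracy when intermediate loads and unknown segment data are involved, which this sketch does not attempt.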
"Smoke": Characterization Of Smoke Particulate For Spacecraft Fire Detection
NASA Technical Reports Server (NTRS)
Urban, David L.; Mulholland, George W.; Yang, Jiann; Cleary, Thomas G.; Yuan, Zeng-Guang
2003-01-01
The "Smoke" experiment is a flight definition investigation that seeks to increase our understanding of spacecraft fire detection through measurements of particulate size distributions of preignition smokes from typical spacecraft materials. Owing to the catastrophic risk posed by even a very small fire in a spacecraft, the design goal for spacecraft fire detection is to detect the fire as quickly as possible, preferably in the preignition phase before a real flaming fire has developed. Consequently the target smoke for detection is typically not soot (typical of established hydrocarbon fires) but instead, pyrolysis products, and recondensed polymer particles. At the same time, false alarms are extremely costly as the crew and the ground team must respond quickly to every alarm. The U.S. Space Shuttle (STS: Space Transportation System) and the International Space Station (ISS) both use smoke detection as the primary means of fire detection. These two systems were designed in the absence of any data concerning low-gravity smoke particle (and background dust) size distributions. The STS system uses an ionization detector coupled with a sampling pump and the ISS system is a forward light scattering detector operating in the near IR. These two systems have significantly different sensitivities with the ionization detector being most sensitive (on a mass concentration basis) to smaller particulate and the light scattering detector being most sensitive to particulate that is larger than 1 micron. Since any smoke detection system has inherent size sensitivity characteristics, proper design of future smoke detection systems will require an understanding of the background and alarm particle size distributions that can be expected in a space environment.
Gonzalez, Elias; Kish, Laszlo B; Balog, Robert S; Enjeti, Prasad
2013-01-01
We introduce a protocol with a reconfigurable filter system to create non-overlapping single loops in the smart power grid for the realization of the Kirchhoff-Law-Johnson-(like)-Noise secure key distribution system. The protocol is valid for one-dimensional radial networks (chain-like power lines), which are typical of the electricity distribution network between the utility and the customer. The speed of the protocol (the number of steps needed) versus grid size is analyzed. When properly generalized, such a system has the potential to achieve unconditionally secure key distribution over a smart power grid of arbitrary geometrical dimensions. PMID:23936164
A distributed database view of network tracking systems
NASA Astrophysics Data System (ADS)
Yosinski, Jason; Paffenroth, Randy
2008-04-01
In distributed tracking systems, multiple non-collocated trackers cooperate to fuse local sensor data into a global track picture. Generating this global track picture at a central location is fairly straightforward, but the single point of failure and excessive bandwidth requirements introduced by centralized processing motivate the development of decentralized methods. In many decentralized tracking systems, trackers communicate with their peers via a lossy, bandwidth-limited network in which dropped, delayed, and out of order packets are typical. Oftentimes the decentralized tracking problem is viewed as a local tracking problem with a networking twist; we believe this view can underestimate the network complexities to be overcome. Indeed, a subsequent 'oversight' layer is often introduced to detect and handle track inconsistencies arising from a lack of robustness to network conditions. We instead pose the decentralized tracking problem as a distributed database problem, enabling us to draw inspiration from the vast extant literature on distributed databases. Using the two-phase commit algorithm, a well known technique for resolving transactions across a lossy network, we describe several ways in which one may build a distributed multiple hypothesis tracking system from the ground up to be robust to typical network intricacies. We pay particular attention to the dissimilar challenges presented by network track initiation vs. maintenance and suggest a hybrid system that balances speed and robustness by utilizing two-phase commit for only track initiation transactions. Finally, we present simulation results contrasting the performance of such a system with that of more traditional decentralized tracking implementations.
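A toy sketch of two-phase commit applied to track-initiation transactions, in the spirit described above; the Tracker class, the message-drop model, and the vote handling are invented simplifications (a real system would add timeouts, retries, and durable logs):

```python
import random

# Toy two-phase commit for track initiation: the coordinator collects
# prepare votes from every peer tracker, and commits only on a unanimous
# yes. A dropped message is conservatively treated as a 'no' vote.
random.seed(42)

class Tracker:
    def __init__(self, name):
        self.name = name
        self.tracks = set()
    def prepare(self, track_id):
        # vote yes only if the new track does not conflict with local state
        return track_id not in self.tracks
    def commit(self, track_id):
        self.tracks.add(track_id)

def init_track(peers, track_id, drop_prob=0.2):
    votes = []
    for peer in peers:                       # phase 1: solicit votes
        delivered = random.random() > drop_prob
        votes.append(delivered and peer.prepare(track_id))
    if all(votes):
        for peer in peers:                   # phase 2: commit everywhere
            peer.commit(track_id)
        return True
    return False                             # abort: nobody commits

peers = [Tracker(f"T{i}") for i in range(3)]
ok = init_track(peers, track_id=101, drop_prob=0.0)
print(ok, [sorted(p.tracks) for p in peers])
```

The all-or-nothing property is what keeps the distributed track picture consistent: after any single transaction, either every tracker holds the new track or none does, matching the abstract's rationale for reserving two-phase commit for initiation rather than maintenance.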
Data regarding grazing utilization in the western United States are typically compiled within administrative boundaries(e.g. allotment,pasture). For large areas, an assumption of uniform distribution is seldom valid. Previous studies show that vegetation type, degree of slope, an...
Evaluation of two typical distributed energy systems
NASA Astrophysics Data System (ADS)
Han, Miaomiao; Tan, Xiu
2018-03-01
For two natural-gas distributed energy systems, one driven by a gas internal combustion engine and one by a gas turbine, this paper applies the first and second laws of thermodynamics to evaluate the distributed energy system in terms of both the "quantity" and the "quality" of energy use. The calculations show that the internal-combustion-engine-driven energy station achieves a higher primary energy utilization rate but a lower exergy efficiency, while the gas-turbine-driven energy station achieves a higher exergy efficiency but a relatively low primary energy utilization rate. When configuring such a system, the appropriate natural-gas distributed energy technology and unit configuration should be determined according to the actual load profile of the project and practical factors such as its location, background and environmental requirements. Based on the "quality" measure, an energy efficiency index for waste heat utilization is proposed.
Turbulence Modulation and Dense-Spray Structure
1988-08-01
(Excerpted list of figures: Particle size distribution (dp = 0.5 mm); Sketch of the pressurized test apparatus; Sketch of the double-pulse holocamera system; Sketch of the hologram reconstruction system; Typical hologram reconstruction in the dense…)
The case for distributed irrigation as a development priority in sub-Saharan Africa.
Burney, Jennifer A; Naylor, Rosamond L; Postel, Sandra L
2013-07-30
Distributed irrigation systems are those in which the water access (via pump or human power), distribution (via furrow, watering can, sprinkler, drip lines, etc.), and use all occur at or near the same location. Distributed systems are typically privately owned and managed by individuals or groups, in contrast to centralized irrigation systems, which tend to be publicly operated and involve large water extractions and distribution over significant distances for use by scores of farmers. Here we draw on a growing body of evidence on smallholder farmers, distributed irrigation systems, and land and water resource availability across sub-Saharan Africa (SSA) to show how investments in distributed smallholder irrigation technologies might be used to (i) use the water sources of SSA more productively, (ii) improve nutritional outcomes and rural development throughout SSA, and (iii) narrow the income disparities that permit widespread hunger to persist despite aggregate economic advancement.
A Cooperative V2V Alert System to Mitigate Vehicular Traffic ShockWaves
DOT National Transportation Integrated Search
2017-03-01
Vehicle traffic on highway systems is typically not uniformly distributed. In our work, we introduce a protocol that exploits this phenomenon by considering the formation of shock waves and opportunities in adjacent lanes. The objective of this pro...
[Groundwater organic pollution source identification technology system research and application].
Wang, Xiao-Hong; Wei, Jia-Hua; Cheng, Zhi-Neng; Liu, Pei-Bin; Ji, Yi-Qun; Zhang, Gan
2013-02-01
Groundwater organic pollution is found at a large number of sites, and once present it spreads widely and is hard to identify and control. The key to controlling and remediating groundwater pollution is controlling the pollution sources and reducing the danger to groundwater. Taking typical contaminated sites as examples, this paper carries out source identification studies, establishes a groundwater organic pollution source identification system, and applies the system to the identification of typical contaminated sites. First, the geological and hydrogeological conditions of the contaminated sites were characterized, and carbon tetrachloride was determined to be the characteristic pollutant from a large volume of groundwater analysis and test data. A solute transport model of the contaminated sites was then built and combined with compound-specific isotope techniques. Finally, using the groundwater solute transport model and compound-specific isotope technology, the distribution and status of the organic pollution sources at the typical site were determined, potential sources of pollution were investigated, and soil was sampled for analysis. The results show that the two identified historical pollution sources and the pollutant concentration distribution are reliable, providing a basis for the treatment of groundwater pollution.
Digital Libraries: The Next Generation in File System Technology.
ERIC Educational Resources Information Center
Bowman, Mic; Camargo, Bill
1998-01-01
Examines file sharing within corporations that use wide-area, distributed file systems. Applications and user interactions strongly suggest that the addition of services typically associated with digital libraries (content-based file location, strongly typed objects, representation of complex relationships between documents, and extrinsic…
Distribution system model calibration with big data from AMI and PV inverters
Peppanen, Jouni; Reno, Matthew J.; Broderick, Robert J.; ...
2016-03-03
Efficient management and coordination of distributed energy resources with advanced automation schemes requires accurate distribution system modeling and monitoring. Big data from smart meters and photovoltaic (PV) micro-inverters can be leveraged to calibrate existing utility models. This paper presents computationally efficient distribution system parameter estimation algorithms to improve the accuracy of existing utility feeder radial secondary circuit model parameters. The method is demonstrated using a real utility feeder model with advanced metering infrastructure (AMI) and PV micro-inverters, along with alternative parameter estimation approaches that can be used to improve secondary circuit models when limited measurement data is available. Lastly, the parameter estimation accuracy is demonstrated for both a three-phase test circuit with typical secondary circuit topologies and single-phase secondary circuits in a real mixed-phase test system.
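A stripped-down parameter-estimation sketch in the same spirit: recovering a service-drop resistance from synthetic AMI-style voltage/current samples by least squares. Real secondary-circuit models need complex impedance and phase information; every value here is synthetic:

```python
import random

# Simplified parameter estimation: fit a secondary-circuit series
# resistance from noisy hourly (current, voltage-drop) meter pairs.
random.seed(7)

R_TRUE = 0.05                        # ohms, assumed service-drop resistance
samples = []
for _ in range(500):
    i_load = random.uniform(5.0, 60.0)                   # amps at the meter
    v_drop = R_TRUE * i_load + random.gauss(0.0, 0.05)   # noisy voltage drop
    samples.append((i_load, v_drop))

# least-squares slope through the origin: R = sum(i*v) / sum(i*i)
num = sum(i * v for i, v in samples)
den = sum(i * i for i, _ in samples)
r_est = num / den
print(round(r_est, 4))
```

With hundreds of interval readings the noise averages out quickly, which is why AMI big data makes this kind of calibration practical at feeder scale.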
NASA Technical Reports Server (NTRS)
Koda, M.; Seinfeld, J. H.
1982-01-01
The reconstruction of a concentration distribution from spatially averaged and noise-corrupted data is a central problem in processing atmospheric remote sensing data. Distributed parameter observer theory is used to develop reconstructibility conditions for distributed parameter systems having measurements typical of those in remote sensing. The relation of the reconstructibility condition to the stability of the distributed parameter observer is demonstrated. The theory is applied to a variety of remote sensing situations, and it is found that those in which concentrations are measured as a function of altitude satisfy the conditions of distributed state reconstructibility.
Projected dryland cropping system shifts in the Pacific Northwest in response to climate change
USDA-ARS?s Scientific Manuscript database
Agriculture in the dryland region of the Inland Pacific Northwest (IPNW, including northern Idaho, eastern Washington and northern Oregon) is typically characterized based on annual rainfall and associated distribution of cropping systems that have evolved in response to biophysical and socio-econom...
Modeling occupancy distribution in large spaces with multi-feature classification algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Wei; Chen, Jiayu; Hong, Tianzhen
2018-04-07
Occupancy information enables robust and flexible control of heating, ventilation, and air-conditioning (HVAC) systems in buildings. In large spaces, multiple HVAC terminals are typically installed to provide cooperative services for different thermal zones, and occupancy information determines the cooperation among terminals. However, a person count at room level does not adequately optimize HVAC system operation, because the movement of occupants within the room creates uneven load distribution. Without accurate knowledge of the occupants’ spatial distribution, the uneven distribution of occupants often results in under-cooling/heating or over-cooling/heating in some thermal zones. Therefore, the lack of high-resolution occupancy distribution is often perceived as a bottleneck for future improvements to HVAC operation efficiency. To fill this gap, this study proposes a multi-feature k-Nearest-Neighbors (k-NN) classification algorithm to extract occupancy distribution through reliable, low-cost Bluetooth Low Energy (BLE) networks. An on-site experiment was conducted in a typical office of an institutional building to demonstrate the proposed methods, and the outcomes of three case studies were examined to validate detection accuracy. City Block Distance (CBD) was used to measure the distance between the detected occupancy distribution and the ground truth. The results show that accuracy is over 71.4% at CBD = 1 and reaches up to 92.9% at CBD = 2.
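A minimal sketch of the classify-then-score idea described above: a k-NN vote over BLE RSSI fingerprints assigns each detected occupant to a zone, and City Block Distance (CBD) compares the resulting per-zone counts with ground truth. All fingerprints, zone labels, and parameter choices here are toy assumptions, not the paper's data or implementation:

```python
import numpy as np
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Majority vote among the k nearest training samples (Euclidean distance)."""
    d = np.linalg.norm(train_X - x, axis=1)
    nearest = train_y[np.argsort(d)[:k]]
    return Counter(nearest).most_common(1)[0][0]

def city_block_distance(counts_a, counts_b):
    """CBD (L1 distance) between two per-zone occupant count vectors."""
    return int(np.abs(np.asarray(counts_a) - np.asarray(counts_b)).sum())

# Toy RSSI fingerprints (dBm) from 3 beacons, labeled by zone.
train_X = np.array([[-40, -70, -80], [-42, -68, -79],   # zone 0
                    [-75, -45, -60], [-78, -43, -62]])  # zone 1
train_y = np.array([0, 0, 1, 1])

# Two new observations, one occupant in each zone.
obs = [np.array([-41, -69, -81]), np.array([-76, -44, -61])]
pred_zones = [knn_predict(train_X, train_y, x) for x in obs]
pred_counts = [pred_zones.count(0), pred_zones.count(1)]
print(pred_counts, city_block_distance(pred_counts, [1, 1]))
```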
Challenges facing the distribution of an artificial-intelligence-based system for nursing.
Evans, S
1985-04-01
The marketing and successful distribution of artificial-intelligence-based decision-support systems for nursing face special barriers and challenges. Issues that must be confronted arise particularly from the present culture of the nursing profession as well as the typical organizational structures in which nurses predominantly work. Generalizations in the literature based on the limited experience of physician-oriented artificial intelligence applications (predominantly in diagnosis and pharmacologic treatment) must be modified for applicability to other health professions.
The concept of temperature in space plasmas
NASA Astrophysics Data System (ADS)
Livadiotis, G.
2017-12-01
Independently of the initial distribution function, once the system is thermalized, its particles are stabilized into a specific distribution function parametrized by a temperature. Classical particle systems in thermal equilibrium have their phase-space distribution stabilized into a Maxwell-Boltzmann function. In contrast, space plasmas are particle systems frequently described by stationary states out of thermal equilibrium, namely, their distribution is stabilized into a function that is typically described by kappa distributions. The temperature is well-defined for systems at thermal equilibrium or stationary states described by kappa distributions. This is based on the equivalence of the two fundamental definitions of temperature, that is (i) the kinetic definition of Maxwell (1866) and (ii) the thermodynamic definition of Clausius (1862). This equivalence holds either for Maxwellians or kappa distributions, leading also to the equipartition theorem. The temperature and kappa index (together with density) are globally independent parameters characterizing the kappa distribution. While there is no equation of state or any universal relation connecting these parameters, various local relations may exist along the streamlines of space plasmas. Observations revealed several types of such local relations among plasma thermal parameters.
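For concreteness, one common form of the isotropic 3-D kappa velocity distribution can be written down and checked numerically against its Maxwellian limit. The normalization convention below is an assumption (conventions vary across the literature):

```python
import math
import numpy as np

def kappa_pdf(v, theta, kappa):
    """f(v) ~ (1 + v^2/(kappa*theta^2))^-(kappa+1), normalized over 3-D velocity space.
    Uses lgamma to avoid overflow of Gamma(kappa+1) at large kappa."""
    norm = math.exp(math.lgamma(kappa + 1.0) - math.lgamma(kappa - 0.5))
    norm /= (math.pi * kappa) ** 1.5 * theta ** 3
    return norm * (1.0 + v ** 2 / (kappa * theta ** 2)) ** (-(kappa + 1.0))

def maxwell_pdf(v, theta):
    """Maxwell-Boltzmann limit of the kappa distribution (kappa -> infinity)."""
    return np.exp(-(v / theta) ** 2) / (math.pi ** 1.5 * theta ** 3)

# For very large kappa the two distributions should agree closely.
v = np.linspace(0.0, 5.0, 201)
gap = np.max(np.abs(kappa_pdf(v, 1.0, 1000.0) - maxwell_pdf(v, 1.0)))
print(gap)
```

Small kappa indices give the heavy power-law tails characteristic of space plasmas; as kappa grows the distribution collapses onto the Maxwellian, matching the thermal-equilibrium case described above.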
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
2014-10-01
In this project, Building America team IBACOS performed field testing in a new-construction unoccupied test house in Pittsburgh, Pennsylvania, to evaluate heating, ventilating, and air conditioning (HVAC) distribution systems during heating, cooling, and midseason conditions. Four air-based HVAC distribution systems were assessed: a typical-airflow ducted system to the bedrooms, a low-airflow ducted system to the bedrooms, a system with transfer fans to the bedrooms, and a system with no ductwork to the bedrooms. The relative ability of each system was considered with respect to the relevant Air Conditioning Contractors of America and ASHRAE standards for house temperature uniformity and stability, respectively.
NASA Technical Reports Server (NTRS)
Yates, Amy M.; Torres-Pomales, Wilfredo; Malekpour, Mahyar R.; Gonzalez, Oscar R.; Gray, W. Steven
2010-01-01
Safety-critical distributed flight control systems require robustness in the presence of faults. In general, these systems consist of a number of input/output (I/O) and computation nodes interacting through a fault-tolerant data communication system. The communication system transfers sensor data and control commands and can handle most faults under typical operating conditions. However, the performance of the closed-loop system can be adversely affected as a result of operating in harsh environments. In particular, High-Intensity Radiated Field (HIRF) environments have the potential to cause random fault manifestations in individual avionic components and to generate simultaneous system-wide communication faults that overwhelm existing fault management mechanisms. This paper presents the design of an experiment conducted at the NASA Langley Research Center's HIRF Laboratory to statistically characterize the faults that a HIRF environment can trigger on a single node of a distributed flight control system.
Middleware for big data processing: test results
NASA Astrophysics Data System (ADS)
Gankevich, I.; Gaiduchok, V.; Korkhov, V.; Degtyarev, A.; Bogdanov, A.
2017-12-01
Dealing with large volumes of data is resource-consuming work which is more and more often delegated not only to a single computer but to a whole distributed computing system at once. As the number of computers in a distributed system increases, the amount of effort put into effective management of the system grows. When the system reaches some critical size, much effort should be put into improving its fault tolerance. It is difficult to estimate when a particular distributed system needs such facilities for a given workload, so instead they should be implemented in a middleware which works efficiently with a distributed system of any size. It is also difficult to estimate whether a volume of data is large or not, so the middleware should also work with data of any volume. In other words, the purpose of the middleware is to provide facilities that adapt a distributed computing system to a given workload. In this paper we introduce such a middleware. Tests show that this middleware is well suited for typical HPC and big data workloads, and its performance is comparable with well-known alternatives.
Watershed and Economic Data InterOperability (WEDO) System
Hydrologic modeling is essential for environmental, economic, and human health decision-making. However, sharing of modeling studies is limited within the watershed modeling community. Distribution of hydrologic modeling research typically involves publishing summarized data in p...
Improved Coast Guard Communications Using Commercial Satellites and WWW Technology
DOT National Transportation Integrated Search
1997-06-18
Information collection and distribution are essential components of most Coast Guard missions. However, information needs have typically outpaced the ability of the installed communications systems to meet those needs. This mismatch leads to reduced ...
Watershed and Economic Data InterOperability (WEDO) System (presentation)
Hydrologic modeling is essential for environmental, economic, and human health decision- making. However, sharing of modeling studies is limited within the watershed modeling community. Distribution of hydrologic modeling research typically involves publishing summarized data in ...
NASA Astrophysics Data System (ADS)
Shen, Qian; Bai, Yanfeng; Shi, Xiaohui; Nan, Suqin; Qu, Lijie; Li, Hengxing; Fu, Xiquan
2017-07-01
The difference in imaging quality between different ghost imaging schemes is studied by using the coherent-mode representation of partially coherent fields. It is shown that the difference mainly relies on changes in the distribution of the decomposition coefficients of the imaged object when the light source is fixed. For a newly designed imaging scheme, one only needs to obtain the distribution of the decomposition coefficients and compare it with that of an existing imaging system in order to predict imaging quality. By choosing several typical ghost imaging systems, we theoretically and experimentally verify our results.
The 18/30 GHz fixed communications system service demand assessment. Volume 2: Main text
NASA Technical Reports Server (NTRS)
Gabriszeski, T.; Reiner, P.; Rogers, J.; Terbo, W.
1979-01-01
The total demand for communications services and satellite transmission services at the 4/6 GHz, 12/14 GHz, and 18/30 GHz frequencies is assessed. The services are voice, video, and data services. Traffic demand, by service, is distributed by geographical region, population density, and distance between serving points. Further distribution of traffic is made among four major end-user groups: business, government, institutions, and private individuals. A traffic demand analysis is performed on a typical metropolitan city to examine service distribution trends. The projected costs of C- and Ku-band satellite systems are compared on an individual-service basis to projected terrestrial rates. Separation of traffic between transmission systems, including 18/30 GHz systems, is based on cost, user, and technical considerations.
NASA Technical Reports Server (NTRS)
Jenkins, George
1986-01-01
Prelaunch, launch, mission, and landing distribution of RF and hardline uplink/downlink information between Space Shuttle Orbiter/cargo elements, tracking antennas, and control centers at JSC, KSC, MSFC, GSFC, ESMC/RCC, and Sunnyvale are presented as functional block diagrams. Typical mismatch problems encountered during spacecraft-to-project control center telemetry transmissions are listed along with new items for future support enhancement.
Attenuation of Typical Sex Differences in 800 Adults with Autism vs. 3,900 Controls
Baron-Cohen, Simon; Cassidy, Sarah; Auyeung, Bonnie; Allison, Carrie; Achoukhi, Maryam; Robertson, Sarah; Pohl, Alexa; Lai, Meng-Chuan
2014-01-01
Sex differences have been reported in autistic traits and systemizing (male advantage), and empathizing (female advantage) among typically developing individuals. In individuals with autism, these cognitive-behavioural profiles correspond to predictions from the “extreme male brain” (EMB) theory of autism (extreme scores on autistic traits and systemizing, below average on empathizing). Sex differences within autism, however, have been under-investigated. Here we show in 811 adults (454 females) with autism and 3,906 age-matched typical control adults (2,562 females) who completed the Empathy Quotient (EQ), the Systemizing Quotient-Revised (SQ-R), and the Autism Spectrum Quotient (AQ), that typical females on average scored higher on the EQ, typical males scored higher on the SQ-R and AQ, and both males and females with autism showed a shift toward the extreme of the “male profile” on these measures and in the distribution of “brain types” (the discrepancy between standardized EQ and SQ-R scores). Further, normative sex differences are attenuated but not abolished in adults with autism. The findings provide strong support for the EMB theory of autism, and highlight differences between males and females with autism. PMID:25029203
DOE Office of Scientific and Technical Information (OSTI.GOV)
Olalla, Carlos; Maksimovic, Dragan; Deline, Chris
2017-04-26
Here, this paper quantifies the impact of distributed power electronics in photovoltaic (PV) systems in terms of end-of-life energy-capture performance and reliability. The analysis is based on simulations of PV installations over system lifetime at various degradation rates. It is shown how module-level or submodule-level power converters can mitigate variations in cell degradation over time, effectively increasing the system lifespan by 5-10 years compared with the nominal 25-year lifetime. An important aspect typically overlooked when characterizing such improvements is the reliability of distributed power electronics, as power converter failures may not only diminish energy yield improvements but also adversely affect the overall system operation. Failure models are developed, and power electronics reliability is taken into account in this work, in order to provide a more comprehensive view of the opportunities and limitations offered by distributed power electronics in PV systems. Lastly, it is shown how a differential power-processing approach achieves the best mismatch mitigation performance and the least susceptibility to converter faults.
Control of a solar-energy-supplied electrical-power system without intermediate circuitry
NASA Astrophysics Data System (ADS)
Leistner, K.
A computer control system is developed for electric-power systems comprising solar cells and small numbers of users with individual centrally controlled converters (and storage facilities when needed). Typical system structures are reviewed; the advantages of systems without an intermediate network are outlined; the demands on a control system in such a network (optimizing generator working point and power distribution) are defined; and a flexible modular prototype system is described in detail. A charging station for lead batteries used in electric automobiles is analyzed as an example. The power requirements of the control system (30 W for generator control and 50 W for communications and distribution control) are found to limit its use to larger networks.
High-Penetration PV Integration Handbook for Distribution Engineers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seguin, Rich; Woyak, Jeremy; Costyk, David
2016-01-01
This handbook has been developed as part of a five-year research project which began in 2010. The National Renewable Energy Laboratory (NREL), Southern California Edison (SCE), Quanta Technology, Satcon Technology Corporation, Electrical Distribution Design (EDD), and Clean Power Research (CPR) teamed together to analyze the impacts of high-penetration levels of photovoltaic (PV) systems interconnected onto the SCE distribution system. This project was designed specifically to leverage the experience that SCE and the project team would gain during the significant installation of 500 MW of commercial-scale PV systems (1-5 MW typically) starting in 2010 and completing in 2015 within SCE’s service territory, through a program approved by the California Public Utility Commission (CPUC).
Origins and properties of kappa distributions in space plasmas
NASA Astrophysics Data System (ADS)
Livadiotis, George
2016-07-01
Classical particle systems reside at thermal equilibrium with their velocity distribution function stabilized into a Maxwell distribution. On the contrary, collisionless and correlated particle systems, such as space and astrophysical plasmas, are characterized by non-Maxwellian behavior, typically described by the so-called kappa distributions. Empirical kappa distributions have become increasingly widespread across space and plasma physics. However, a breakthrough in the field came with the connection of kappa distributions to the solid statistical framework of Tsallis non-extensive statistical mechanics. Understanding the statistical origin of kappa distributions was the cornerstone of further theoretical developments and applications, some of which will be presented in this talk: (i) the physical meaning of thermal parameters, e.g., temperature and kappa index; (ii) the multi-particle description of kappa distributions; (iii) the phase-space kappa distribution of a Hamiltonian with non-zero potential; (iv) the Sackur-Tetrode entropy for kappa distributions; and (v) the new quantization constant, h* ≈ 10^(-22) J·s.
Communication and control in an integrated manufacturing system
NASA Technical Reports Server (NTRS)
Shin, Kang G.; Throne, Robert D.; Muthuswamy, Yogesh K.
1987-01-01
Typically, components in a manufacturing system are all centrally controlled. Due to possible communication bottlenecking, unreliability, and inflexibility caused by using a centralized controller, a new concept of system integration called an Integrated Multi-Robot System (IMRS) was developed. The IMRS can be viewed as a distributed real time system. Some of the current research issues being examined to extend the framework of the IMRS to meet its performance goals are presented. These issues include the use of communication coprocessors to enhance performance, the distribution of tasks and the methods of providing fault tolerance in the IMRS. An application example of real time collision detection, as it relates to the IMRS concept, is also presented and discussed.
NASA Astrophysics Data System (ADS)
Kempa, Wojciech M.
2017-12-01
A finite-capacity queueing system with server breakdowns is investigated, in which successive exponentially distributed failure-free times are followed by repair periods. After being served, a customer may either rejoin the queue (feedback) with probability q or definitively leave the system with probability 1 - q. A system of integral equations for the transient queue-size distribution, conditioned on the initial level of buffer saturation, is built. The solution of the corresponding system, written for Laplace transforms, is found using a linear algebraic approach. The considered queueing system can successfully model production lines with machine failures, in which the parameter q represents the typical fraction of items requiring corrections. Moreover, this queueing model can be applied to the analysis of real TCP/IP performance, where q stands for the fraction of packets requiring retransmission.
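The feedback mechanism alone is easy to sketch in isolation: each service completion is followed by a rejoin with probability q, so the number of service passes per customer is geometric with mean 1/(1 - q). A minimal simulation, under our own simplifying assumptions (single server, breakdowns and finite capacity omitted):

```python
import random

def mean_passes(q, customers=200_000, seed=1):
    """Average number of service passes per customer under feedback probability q."""
    rng = random.Random(seed)
    total = 0
    for _ in range(customers):
        passes = 1
        while rng.random() < q:   # item needs another correction pass
            passes += 1
        total += passes
    return total / customers

q = 0.25
est = mean_passes(q)
print(round(est, 3))   # analytic mean is 1/(1 - q) = 4/3
```

In the production-line reading of the model, this factor is exactly how much the effective service demand is inflated by items sent back for correction.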
A Framework for the Evaluation of the Cost and Benefits of Microgrids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morris, Greg Young; Abbey, Chad; Joos, Geza
2011-07-15
A Microgrid is recognized as an innovative technology to help integrate renewables into distribution systems and to provide additional benefits to a variety of stakeholders, such as offsetting infrastructure investments and improving the reliability of the local system. However, these systems require additional investments for control infrastructure, and as such, additional costs and the anticipated benefits need to be quantified in order to determine whether the investment is economically feasible. This paper proposes a methodology for systematizing and representing benefits and their interrelationships based on the UML Use Case paradigm, which allows complex systems to be represented in a concise, elegant format. This methodology is demonstrated by determining the economic feasibility of a Microgrid and Distributed Generation installed on a typical Canadian rural distribution system model as a case study. The study attempts to minimize the cost of energy served to the community, considering the fixed costs associated with Microgrids and Distributed Generation, and suggests benefits to a variety of stakeholders.
DEVELOPMENT OF A DATA EVALUATION/DECISION SUPPORT SYSTEM FOR REMEDIATION OF SUBSURFACE CONTAMINATION
Subsurface contamination frequently originates from spatially distributed sources of multi-component nonaqueous phase liquids (NAPLs). Such chemicals are typically persistent sources of ground-water contamination that are difficult to characterize. This work addresses the feasi...
Log-normal distribution from a process that is not multiplicative but is additive.
Mouri, Hideaki
2013-10-01
The central limit theorem ensures that a sum of random variables tends to a Gaussian distribution as their total number tends to infinity. However, for a class of positive random variables, we find that the sum tends faster to a log-normal distribution. Although the sum tends eventually to a Gaussian distribution, the distribution of the sum is always close to a log-normal distribution rather than to any Gaussian distribution if the summands are numerous enough. This is in contrast to the current consensus that any log-normal distribution is due to a product of random variables, i.e., a multiplicative process, or equivalently to nonlinearity of the system. In fact, the log-normal distribution is also observable for a sum, i.e., an additive process that is typical of linear systems. We show conditions for such a sum, an analytical example, and an application to random scalar fields such as those of turbulence.
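A quick numerical illustration of the claim, using our own choice of summands (log-normal, one instance of the positive random variables the paper considers): the sum of many positive skewed variables remains visibly skewed, while its logarithm is nearly symmetric, i.e., the sum is closer to log-normal than to Gaussian at this sample size:

```python
import numpy as np

rng = np.random.default_rng(42)

def skewness(x):
    """Sample skewness: third standardized central moment."""
    x = np.asarray(x, dtype=float)
    m, s = x.mean(), x.std()
    return float(((x - m) ** 3).mean() / s ** 3)

# Sum 20 positive, right-skewed summands per sample (additive process).
n_summands, n_samples = 20, 50_000
summands = rng.lognormal(mean=0.0, sigma=1.0, size=(n_samples, n_summands))
sums = summands.sum(axis=1)

skew_sum = skewness(sums)          # clearly positive: sum is still skewed
skew_log = skewness(np.log(sums))  # much closer to 0: log of sum ~ symmetric
print(round(skew_sum, 2), round(skew_log, 2))
```

Increasing the number of summands eventually pushes the sum toward Gaussian, as the central limit theorem requires, but for moderate counts the log-normal description fits better, which is the paper's point.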
Thermostatic Radiator Valve Evaluation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dentz, Jordan; Ansanelli, Eric
2015-01-01
A large stock of multifamily buildings in the Northeast and Midwest are heated by steam distribution systems. Losses from these systems are typically high and a significant number of apartments are overheated much of the time. Thermostatically controlled radiator valves (TRVs) are one potential strategy to combat this problem, but have not been widely accepted by the residential retrofit market.
Using a Ternary Diagram to Display a System's Evolving Energy Distribution
ERIC Educational Resources Information Center
Brazzle, Bob; Tapp, Anne
2016-01-01
A ternary diagram is a graphical representation used for systems with three components. Ternary diagrams are familiar to mineralogists (who typically use them to categorize varieties of solid-solution minerals such as feldspar) but are not yet widely used in the physics community. Last year the lead author began using ternary diagrams in his introductory…
A new approach to modelling schistosomiasis transmission based on stratified worm burden.
Gurarie, D; King, C H; Wang, X
2010-11-01
Multiple factors affect schistosomiasis transmission in distributed meta-population systems including age, behaviour, and environment. The traditional approach to modelling macroparasite transmission often exploits the 'mean worm burden' (MWB) formulation for human hosts. However, typical worm distribution in humans is overdispersed, and classic models either ignore this characteristic or make ad hoc assumptions about its pattern (e.g., by assuming a negative binomial distribution). Such oversimplifications can give wrong predictions for the impact of control interventions. We propose a new modelling approach to macro-parasite transmission by stratifying human populations according to worm burden, and replacing MWB dynamics with that of 'population strata'. We developed proper calibration procedures for such multi-component systems, based on typical epidemiological and demographic field data, and implemented them using Wolfram Mathematica. Model programming and calibration proved to be straightforward. Our calibrated system provided good agreement with the individual level field data from the Msambweni region of eastern Kenya. The Stratified Worm Burden (SWB) approach offers many advantages, in that it accounts naturally for overdispersion and accommodates other important factors and measures of human infection and demographics. Future work will apply this model and methodology to evaluate innovative control intervention strategies, including expanded drug treatment programmes proposed by the World Health Organization and its partners.
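The overdispersion motivating the stratified approach is easy to demonstrate: negative binomial worm counts (the classic ad hoc choice mentioned above) have variance far exceeding the mean, so a model tracking only the mean worm burden discards distributional information that burden strata retain. All parameters below are illustrative, not calibrated values from the study:

```python
import numpy as np

rng = np.random.default_rng(7)
mean_burden, k = 5.0, 0.3            # k < 1 => strongly overdispersed counts
p = k / (k + mean_burden)            # numpy's (n, p) parameterization
counts = rng.negative_binomial(k, p, size=100_000)

# Variance-to-mean ratio: 1 for Poisson, >> 1 here.
vmr = counts.var() / counts.mean()

# Partition hosts into 6 burden strata (0-4, 5-9, ..., 25+ worms),
# the kind of stratification an SWB-style model tracks explicitly.
strata = np.bincount(np.clip(counts // 5, 0, 5), minlength=6)
print(round(float(counts.mean()), 1), round(float(vmr), 1))
```

Most hosts fall into the lowest stratum while a small heavily infected minority carries most worms, which is exactly the pattern a mean-only model cannot represent.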
NASA Technical Reports Server (NTRS)
1982-01-01
Farmers are increasingly turning to aerial applications of pesticides, fertilizers and other materials. Sometimes uneven distribution of the chemicals is caused by worn nozzles, improper alignment of spray nozzles or system leaks. If this happens, the job must be redone, with added expense to both the pilot and the customer. Traditional pattern analysis techniques take days or weeks. Utilizing NASA's wind tunnel and computer validation technology, Dr. Roth of Oklahoma State University (OSU) developed a system for providing answers within minutes. Called the Rapid Distribution Pattern Evaluation System, the OSU system consists of a 100-foot measurement frame tied in to computerized analysis and readout equipment. The system is mobile, delivered by trailer to airfields in agricultural areas where OSU conducts educational "fly-ins." A fly-in typically draws 50 to 100 aerial applicators, researchers, chemical suppliers and regulatory officials. An applicator can have his spray pattern checked, and a computerized readout, available in 5 to 12 minutes, provides information for correcting shortcomings in the distribution pattern.
Software for integrated manufacturing systems, part 2
NASA Technical Reports Server (NTRS)
Volz, R. A.; Naylor, A. W.
1987-01-01
Part 1 presented an overview of the unified approach to manufacturing software. The specific characteristics of the approach that allow it to realize the goals of reduced cost, increased reliability and increased flexibility are considered. Why the blending of a components view, distributed languages, generics and formal models is important, why each individual part of this approach is essential, and why each component will typically have each of these parts are examined. An example of a specification for a real material handling system is presented using the approach and compared with the standard interface specification given by the manufacturer. Use of the component in a distributed manufacturing system is then compared with use of the traditional specification with a more traditional approach to designing the system. An overview is also provided of the underlying mechanisms used for implementing distributed manufacturing systems using the unified software/hardware component approach.
Parsai, E Ishmael; Zhang, Zhengdong; Feldmeier, John J
2009-01-01
Commercially available brachytherapy treatment-planning systems today usually neglect the attenuation effect of the stainless steel (SS) tube when a Fletcher-Suit-Delclos (FSD) applicator is used in the treatment of cervical and endometrial cancers. This could lead to potential inaccuracies in computed dwell times and dose distributions. A more accurate analysis quantifying the level of attenuation for the high-dose-rate (HDR) iridium-192 ((192)Ir) source is presented through Monte Carlo simulation verified by measurement. In this investigation a general Monte Carlo N-Particle (MCNP) transport code was used to construct a typical FSD geometry through simulation and to compare the doses delivered to point A in the Manchester System with and without the SS tubing. A quantitative assessment of inaccuracies in delivered vs. computed dose is presented. In addition, this investigation was expanded to examine the attenuation-corrected radial and anisotropy dose functions in a form parallel to the updated AAPM Task Group No. 43 Report (AAPM TG-43) formalism. This delineates quantitatively the inaccuracies in dose distributions in three-dimensional space. The changes in dose deposition and distribution caused by the increased attenuation coefficient resulting from the presence of SS are quantified using MCNP Monte Carlo simulations in coupled photon/electron transport. The source geometry was that of the VariSource wire model VS2000. The FSD was that of the Varian medical system. In this model, the bending angles of the tandem and colpostats are 15 degrees and 120 degrees, respectively. We assigned 10 dwell positions to the tandem and 4 dwell positions to the right and left colpostats (ovoids) to represent a typical treatment case. The typical dose delivered to point A was determined according to the Manchester dosimetry system. Based on our computations, the reduction of dose to point A was shown to be at least 3%. This effect of SS-FSD systems on patient dose is therefore of concern.
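The ~3% reduction at point A is consistent in magnitude with a simple narrow-beam attenuation estimate through a thin SS wall. The mass attenuation coefficient, density, and wall thickness below are assumed round numbers for illustration, not values from the paper:

```python
import math

# Back-of-envelope check: exponential attenuation of Ir-192 photons
# (mean energy ~0.38 MeV) through a thin stainless-steel applicator wall.
MU_RHO_SS = 0.098   # cm^2/g, assumed mass attenuation coefficient near 0.38 MeV
RHO_SS = 8.0        # g/cm^3, assumed stainless-steel density
wall_cm = 0.04      # assumed 0.4 mm wall thickness

transmission = math.exp(-MU_RHO_SS * RHO_SS * wall_cm)
reduction_pct = (1.0 - transmission) * 100.0
print(round(reduction_pct, 1))
```

This ignores scatter build-up and oblique path lengths, which is why full MCNP transport (as in the study) is needed for clinically usable corrections.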
Implications of biofilm-associated waterborne Cryptosporidium oocysts for the water industry.
Angles, Mark L; Chandy, Joseph P; Cox, Peter T; Fisher, Ian H; Warnecke, Malcolm R
2007-08-01
Waterborne Cryptosporidium has been responsible for drinking water-associated disease outbreaks in a number of developed countries. As a result of the resistance of Cryptosporidium to chlorine, which is typically applied as a final barrier to protect the quality of distributed drinking water, current management practices are focused on source-water management and water treatment as ways of preventing Cryptosporidium from entering drinking-water supplies. In the event that treatment barriers fail, surprisingly little is known of the fate of oocysts once they enter a distribution system. To assess properly the risks of waterborne Cryptosporidium, a more thorough understanding of the fate of oocysts in water distribution systems, with emphasis on Cryptosporidium-biofilm interactions, is required.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palmintier, Bryan; Broderick, Robert; Mather, Barry
2016-05-01
Wide use of advanced inverters could double the electricity-distribution system’s hosting capacity for distributed PV at low costs, from about 170 GW to 350 GW (see Palmintier et al. 2016). At the distribution system level, increased variable generation due to high penetrations of distributed PV (typically rooftop and smaller ground-mounted systems) could challenge the management of distribution voltage, potentially increase wear and tear on electromechanical utility equipment, and complicate the configuration of circuit-breakers and other protection systems, all of which could increase costs, limit further PV deployment, or both. However, improved analysis of distribution system hosting capacity, the amount of distributed PV that can be interconnected without changing the existing infrastructure or prematurely wearing out equipment, has overturned previous rule-of-thumb assumptions such as the idea that distributed PV penetrations higher than 15% require detailed impact studies. For example, new analysis suggests that the hosting capacity for distributed PV could rise from approximately 170 GW using traditional inverters to about 350 GW with the use of advanced inverters for voltage management, and it could be even higher using accessible and low-cost strategies such as careful siting of PV systems within a distribution feeder and additional minor changes in distribution operations. Also critical to facilitating distributed PV deployment is the improvement of interconnection processes, associated standards and codes, and compensation mechanisms so they embrace PV’s contributions to system-wide operations. Ultimately SunShot-level PV deployment will require unprecedented coordination of the historically separate distribution and transmission systems along with incorporation of energy storage and “virtual storage,” which exploits improved management of electric vehicle charging, building energy systems, and other large loads.
Additional analysis and innovation are needed.
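As a rough illustration of why inverter reactive-power absorption raises hosting capacity, the sketch below applies the common first-order voltage-rise estimate dV ~ (P*R + Q*X)/V^2 at a single interconnection point. It is a hypothetical screen, not the detailed feeder analysis behind the 170/350 GW figures; the feeder impedance, voltage, limit, and Q/P ratio are all assumed values.

```python
def voltage_rise_pu(p_kw, q_kvar, r_ohm, x_ohm, v_kv):
    """First-order voltage rise (per unit) at the point of interconnection:
    dV ~ (P*R + Q*X) / V^2."""
    v_volts = v_kv * 1e3
    return (p_kw * 1e3 * r_ohm + q_kvar * 1e3 * x_ohm) / (v_volts ** 2)

def hosting_capacity_kw(r_ohm, x_ohm, v_kv, limit_pu=0.05, q_per_p=0.0, step=10.0):
    """Largest PV size (kW) keeping the estimated rise under limit_pu.
    q_per_p < 0 models an advanced inverter absorbing reactive power."""
    p = 0.0
    while p < 1e6 and voltage_rise_pu(p + step, (p + step) * q_per_p,
                                      r_ohm, x_ohm, v_kv) <= limit_pu:
        p += step
    return p

# Assumed feeder parameters: 0.5 + j0.5 ohm to the substation, 12.47 kV
base = hosting_capacity_kw(0.5, 0.5, 12.47)
advanced = hosting_capacity_kw(0.5, 0.5, 12.47, q_per_p=-0.3)
```

With vars absorbed (`q_per_p = -0.3`), the same voltage limit is reached at a larger PV size, mirroring the article's point that advanced inverters raise hosting capacity.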
Establishment of key grid-connected performance index system for integrated PV-ES system
NASA Astrophysics Data System (ADS)
Li, Q.; Yuan, X. D.; Qi, Q.; Liu, H. M.
2016-08-01
In order to further promote integrated optimized operation of distributed new energy, energy storage, and active load, this paper studies an integrated photovoltaic-energy storage (PV-ES) system connected to the distribution network, and analyzes typical structures and configuration selection for the integrated PV-ES generation system. By combining practical grid-connected characteristic requirements with the technology standard specifications of photovoltaic generation systems, this paper takes full account of the energy storage system and then proposes several new grid-connected performance indexes such as paralleled current sharing characteristic, parallel response consistency, adjusting characteristic, virtual moment of inertia characteristic, on-grid/off-grid switch characteristic, and so on. A comprehensive and feasible grid-connected performance index system is then established to support grid-connected performance testing of the integrated PV-ES system.
Ground Truthing the 'Conventional Wisdom' of Lead Corrosion Control Using Mineralogical Analysis
For drinking water distribution systems (DWDS) with lead-bearing plumbing materials, some form of corrosion control is typically necessary, with the goal of mitigating lead release by forming adherent, stable corrosion scales composed of low-solubility mineral phases. Conventional...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Flinn, D.G.; Hall, S.; Morris, J.
This volume describes the background research, the application of the proposed loss evaluation techniques, and the results. The research identified present loss calculation methods as appropriate, provided care was taken to represent the various system elements in sufficient detail. The literature search of past methods and typical data revealed that extreme caution should be taken in using typical values (load factor, etc.) to ensure that all factors are referred to the same time base (daily, weekly, etc.). The performance of the method (and computer program) proposed in this project was determined by comparison of results with a rigorous evaluation of losses on the Salt River Project system. This rigorous evaluation used statistical modeling of the entire system as well as explicit enumeration of all substation and distribution transformers. Further tests were conducted at Public Service Electric and Gas of New Jersey to check the appropriateness of the methods in a northern environment. Finally, sensitivity tests indicated the data elements whose inaccuracy would most affect the determination of losses using the method developed in this project.
NASA Astrophysics Data System (ADS)
Edo, Takahiro; Asai, T.; Tanaka, F.; Yamada, S.; Hosozawa, A.; Gota, H.; Roche, T.; Allfrey, I.; Matsumoto, T.
2017-10-01
A magnetized coaxial plasma gun (MCPG) is a device used to generate a compact toroid (CT), which has a spheromak-like configuration. A typical MCPG consists of a set of axisymmetric cylindrical electrodes, a bias coil, and gas-puff valves. In order to expand the CT operating range, the distributions of the bias magnetic field and neutral gas have been investigated. We have developed a new means of generating stuffing flux. By inserting an iron core into the bias coil, the magnetic field increases dramatically; even a small current of a few amps produces a sufficient bias field. Simulation results also suggest that the radial distribution of the bias field is easily controlled. The ejected CT and the target FRC are cooled by the excess neutral gas that typical MCPGs require to initiate a breakdown; therefore, we have adopted a miniature gun as a new pre-ionization (PI) system. By introducing this PI system, the breakdown occurs at lower neutral gas density, so the amount of excess neutral gas can be reduced.
Estimation of discontinuous coefficients in parabolic systems: Applications to reservoir simulation
NASA Technical Reports Server (NTRS)
Lamm, P. D.
1984-01-01
Spline-based techniques for estimating spatially varying parameters that appear in parabolic distributed systems (typical of those found in reservoir simulation problems) are presented. The problem of determining discontinuous coefficients, estimating both the functional shape and the points of discontinuity of such parameters, is discussed. Convergence results and a summary of the numerical performance of the resulting algorithms are given.
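A toy version of this estimation problem can be sketched as follows (this is not the paper's spline algorithm): generate observations from a known piecewise-constant diffusion coefficient in a 1D parabolic equation, then recover the levels and the discontinuity point by minimizing output misfit over a small candidate grid. All numerical values are assumed.

```python
import numpy as np

def solve_heat(a_iface, u0, dx, dt, steps):
    """Explicit finite-difference solve of u_t = (a(x) u_x)_x with u = 0
    held at both boundaries; a_iface holds a(x) at the cell interfaces."""
    u = u0.copy()
    for _ in range(steps):
        flux = a_iface * np.diff(u) / dx        # a * u_x at interfaces
        u[1:-1] += dt * np.diff(flux) / dx      # divergence of the flux
    return u

n = 41
x = np.linspace(0.0, 1.0, n)
xm = 0.5 * (x[:-1] + x[1:])                      # interface midpoints
dx, dt, steps = x[1] - x[0], 2e-4, 500
u0 = np.sin(np.pi * x)

def coeff(a_left, a_right, brk):
    """Piecewise-constant coefficient with a jump at brk."""
    return np.where(xm < brk, a_left, a_right)

# Synthetic observations from an assumed "true" discontinuous coefficient
u_obs = solve_heat(coeff(1.0, 0.2, 0.5), u0, dx, dt, steps)

# Brute-force search over a small candidate grid (a crude stand-in for
# the paper's spline-based optimization)
best = min(
    ((brk, al, ar)
     for brk in (0.3, 0.4, 0.5, 0.6, 0.7)
     for al in (0.6, 1.0, 1.4)
     for ar in (0.1, 0.2, 0.4)),
    key=lambda p: float(np.sum((solve_heat(coeff(p[1], p[2], p[0]),
                                           u0, dx, dt, steps) - u_obs) ** 2)),
)
```

Because the true parameters lie on the candidate grid, the misfit is exactly zero at the correct triple, so the search recovers both the break location and the coefficient levels.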
Study of data I/O performance on distributed disk system in mask data preparation
NASA Astrophysics Data System (ADS)
Ohara, Shuichiro; Odaira, Hiroyuki; Chikanaga, Tomoyuki; Hamaji, Masakazu; Yoshioka, Yasuharu
2010-09-01
Data volume is getting larger every day in Mask Data Preparation (MDP). In the meantime, faster data handling is always required. An MDP flow typically introduces a Distributed Processing (DP) system to meet this demand, because using hundreds of CPUs is a reasonable solution. However, even if the number of CPUs is increased, the throughput may saturate because hard disk I/O and network speeds can be bottlenecks. MDP therefore needs to invest heavily not only in hundreds of CPUs but also in the storage and network devices that make the throughput faster. NCS introduces a new distributed processing system called "NDE", a distributed disk system that improves throughput without a large investment because it is designed to use multiple conventional hard drives appropriately over the network. In this paper, NCS studies I/O performance with the OASIS® data format on NDE, which contributes to realizing high throughput.
An integrated eVoucher mechanism for flexible loads in real-time retail electricity market
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Tao; Pourbabak, Hajir; Liang, Zheming
This study proposes an innovative economic and engineering coupled framework to encourage typical flexible loads or load aggregators, such as parking lots with high penetration of electric vehicles, to participate directly in the real-time retail electricity market based on an integrated eVoucher program. The integrated eVoucher program entails demand side management, either in the positive or negative direction, following a popular customer-centric design principle. It provides the extra economic benefit to end-users and reduces the risk associated with the wholesale electricity market for electric distribution companies (EDCs), meanwhile improving the potential resilience of the distribution networks with consideration for frequency deviations. When implemented, the eVoucher program allows typical flexible loads, such as electric vehicle parking lots, to adjust their demand and consumption behavior according to financial incentives from an EDC. A distribution system operator (DSO) works as a third party to hasten negotiations between such parking lots and EDCs, as well as the price clearing process. Eventually, both electricity retailers and power system operators will benefit from the active participation of the flexible loads and energy customers.
2017-01-26
1976-08-01
Fragments from a 1976 terrain-analysis report: figure captions include bare soil and grass areas (Vicksburg, Mississippi), a schematic of a typical thermal IR scanner system, and sensor spatial coverage; the body text groups terrain characteristics into the following categories: (a) soils, (b) vegetation, (c) topography, and (d) bedrock, noting that knowledge of these characteristics and their spatial distribution as a function of time is necessary.
A Distributed Approach to System-Level Prognostics
NASA Technical Reports Server (NTRS)
Daigle, Matthew J.; Bregon, Anibal; Roychoudhury, Indranil
2012-01-01
Prognostics, which deals with predicting the remaining useful life of components, subsystems, and systems, is a key technology for systems health management that leads to improved safety and reliability with reduced costs. The prognostics problem is often approached from a component-centric view. However, in most cases, it is not specifically component lifetimes that are important, but, rather, the lifetimes of the systems in which these components reside. The system-level prognostics problem can be quite difficult due to the increased scale and scope of the prognostics problem and the relative lack of scalability and efficiency of typical prognostics approaches. In order to address these issues, we develop a distributed solution to the system-level prognostics problem, based on the concept of structural model decomposition. The system model is decomposed into independent submodels. Independent local prognostics subproblems are then formed based on these local submodels, resulting in a scalable, efficient, and flexible distributed approach to the system-level prognostics problem. We provide a formulation of the system-level prognostics problem and demonstrate the approach on a four-wheeled rover simulation testbed. The results show that the system-level prognostics problem can be accurately and efficiently solved in a distributed fashion.
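The decomposition idea can be sketched in a few lines: each submodel runs its own local prognoser, and the system-level remaining life is the minimum over the local predictions. The submodel names, degradation rates, and thresholds below are hypothetical, and linear degradation stands in for the rover's actual damage models.

```python
def local_eol(health, rate, threshold, dt=1.0):
    """Local prognoser stand-in: propagate a linear damage model until the
    health index crosses its failure threshold; return the predicted time."""
    t = 0.0
    while health > threshold:
        health -= rate * dt
        t += dt
    return t

# Hypothetical rover submodels obtained by structural model decomposition:
# name -> (current health, degradation rate per hour, failure threshold)
submodels = {
    "battery": (1.0, 0.004, 0.2),
    "left_motor": (0.9, 0.002, 0.3),
    "right_motor": (0.95, 0.005, 0.3),
}

local_predictions = {name: local_eol(*p) for name, p in submodels.items()}
# The system fails when its first subsystem fails
system_eol = min(local_predictions.values())
```

Each prediction uses only its own submodel's state, which is what makes the approach distributable across subproblems.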
A Vision for Co-optimized T&D System Interaction with Renewables and Demand Response
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Lindsay; Zéphyr, Luckny; Cardell, Judith B.
The evolution of the power system to the reliable, efficient and sustainable system of the future will involve development of both demand- and supply-side technology and operations. The use of demand response to counterbalance the intermittency of renewable generation brings the consumer into the spotlight. Though individual consumers are interconnected at the low-voltage distribution system, these resources are typically modeled as variables at the transmission network level. In this paper, a vision for co-optimized interaction of distribution systems, or microgrids, with the high-voltage transmission system is described. In this framework, microgrids encompass consumers, distributed renewables and storage. The energy management system of the microgrid can also sell (buy) excess (necessary) energy from the transmission system. Preliminary work explores price mechanisms to manage the microgrid and its interactions with the transmission system. Wholesale market operations are addressed through the development of scalable stochastic optimization methods that provide the ability to co-optimize interactions between the transmission and distribution systems. Modeling challenges of the co-optimization are addressed via solution methods for large-scale stochastic optimization, including decomposition and stochastic dual dynamic programming.
Performance Monitoring of Residential Hot Water Distribution Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liao, Anna; Lanzisera, Steven; Lutz, Jim
Current water distribution systems are designed such that users need to run the water for some time to achieve the desired temperature, wasting energy and water in the process. We developed a wireless sensor network for large-scale, long time-series monitoring of residential water end use. Our system consists of flow meters connected to wireless motes transmitting data to a central manager mote, which in turn posts data to our server via the internet. This project also demonstrates a reliable and flexible data collection system that could be configured for various other forms of end use metering in buildings. The purpose of this study was to determine water and energy use and waste in hot water distribution systems in California residences. We installed meters at every end use point and the water heater in 20 homes and collected 1 s flow and temperature data over an 8 month period. For typical shower and dishwasher events, approximately half the energy is wasted. This relatively low efficiency highlights the importance of further examining the energy and water waste in hot water distribution systems.
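Waste in such flow and temperature traces can be estimated by flagging delivered water below a useful-temperature threshold. The sketch below runs on a synthetic one-draw trace; the flow profile, warm-up curve, and the 40 C threshold are all assumptions for illustration, not the study's data.

```python
# Synthetic 1 Hz trace for one hypothetical shower draw: flow in L/s and
# delivery temperature in C warming from 20 C toward 45 C (assumed shape).
flow = [0.10] * 60 + [0.12] * 240
temp = [20.0 + min(1.0, t / 45.0) * 25.0 for t in range(300)]

RHO_CP = 4186.0    # J/(kg K); 1 L of water ~ 1 kg
T_COLD = 20.0      # C, assumed inlet temperature
T_USEFUL = 40.0    # C, assumed threshold below which hot water is "wasted"

total_j = sum(f * RHO_CP * (T - T_COLD) for f, T in zip(flow, temp))
wasted_j = sum(f * RHO_CP * (T - T_COLD)
               for f, T in zip(flow, temp) if T < T_USEFUL)
waste_fraction = wasted_j / total_j
```

On real monitoring data the same accumulation would be applied draw by draw, with the waste fraction depending strongly on the warm-up time of each fixture.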
Self-organized criticality in asymmetric exclusion model with noise for freeway traffic
NASA Astrophysics Data System (ADS)
Nagatani, Takashi
1995-02-01
The one-dimensional asymmetric simple-exclusion model with open boundaries for parallel update is extended to take into account temporary stopping of particles. The model represents the traffic flow on a highway with temporary deceleration of cars. Introducing temporary stopping into the asymmetric simple-exclusion model drives the system asymptotically into a steady state exhibiting a self-organized criticality. In the self-organized critical state, start-stop waves (or traffic jams) appear with various sizes (or lifetimes). The typical interval ⟨s⟩ between consecutive jams scales as ⟨s⟩ ≃ L^ν with ν = 0.51 ± 0.05, where L is the system size. It is shown that the cumulative jam-interval distribution N_s(L) satisfies the finite-size scaling form N_s(L) ≃ L^{-ν} f(s/L^ν). Also, the typical lifetime
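The exponent ν in such a study is typically extracted as the slope of log⟨s⟩ versus log L. The sketch below does exactly that on synthetic data standing in for simulation output; the prefactor and the system sizes are assumed.

```python
import math

# Hypothetical mean jam intervals <s> for several system sizes L,
# generated here as <s> = 1.8 * L**0.51 (a stand-in for the output
# of the stochastic traffic simulation).
sizes = [64, 128, 256, 512, 1024]
mean_intervals = [1.8 * L ** 0.51 for L in sizes]

# Least-squares slope of log<s> vs log L estimates the exponent nu
logL = [math.log(L) for L in sizes]
logS = [math.log(s) for s in mean_intervals]
n = len(sizes)
mx = sum(logL) / n
my = sum(logS) / n
nu = sum((xv - mx) * (yv - my) for xv, yv in zip(logL, logS)) \
     / sum((xv - mx) ** 2 for xv in logL)
```

On real simulation data the fitted slope would carry statistical error, which is where the quoted ± 0.05 uncertainty comes from.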
Extreme statistics and index distribution in the classical 1d Coulomb gas
NASA Astrophysics Data System (ADS)
Dhar, Abhishek; Kundu, Anupam; Majumdar, Satya N.; Sabhapandit, Sanjib; Schehr, Grégory
2018-07-01
We consider a 1D gas of N charged particles confined by an external harmonic potential and interacting via the 1D Coulomb potential. For this system we show that in equilibrium the charges settle, on average, uniformly and symmetrically on a finite region centred around the origin. We study the statistics of the position of the rightmost particle and show that the limiting distribution describing its typical fluctuations is different from the Tracy–Widom distribution found in the 1D log-gas. We also compute the large deviation functions which characterise the atypical fluctuations of this position far away from its mean value. In addition, we study the gap between the two rightmost particles as well as the index N_+, i.e. the number of particles on the positive semi-axis. We compute the limiting distributions associated with the typical fluctuations of these observables as well as the corresponding large deviation functions. We provide numerical support for our analytical predictions. Part of these results were announced in a recent letter, Dhar et al (2017 Phys. Rev. Lett. 119 060601).
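The equilibrium just described can be explored numerically with a plain Metropolis sampler. The sketch below uses the energy E = Σ x_i²/2 − c Σ_{i<j} |x_i − x_j| (the −|x| pair term being the repulsive 1D Coulomb interaction) with an assumed coupling scaling c = 1/N that keeps the support of order one; the temperature, step size, and chain length are illustrative choices, not the paper's analytical machinery.

```python
import math
import random

def energy(xs, c):
    """1D Coulomb gas in a harmonic trap:
    E = sum_i x_i^2/2 - c * sum_{i<j} |x_i - x_j|."""
    e = sum(x * x / 2.0 for x in xs)
    for i in range(len(xs)):
        for j in range(i + 1, len(xs)):
            e -= c * abs(xs[i] - xs[j])
    return e

def mean_rightmost(n=20, steps=15000, beta=4.0, delta=0.3, seed=1):
    """Metropolis sampling of the gas; returns the average position of
    the rightmost particle over the second half of the chain."""
    random.seed(seed)
    c = 1.0 / n                    # assumed scaling keeping the support O(1)
    xs = [random.uniform(-1.0, 1.0) for _ in range(n)]
    e = energy(xs, c)
    acc = []
    for step in range(steps):
        i = random.randrange(n)
        old = xs[i]
        xs[i] = old + random.uniform(-delta, delta)
        e_new = energy(xs, c)
        if random.random() < math.exp(min(0.0, -beta * (e_new - e))):
            e = e_new
        else:
            xs[i] = old
        if step >= steps // 2:
            acc.append(max(xs))
    return sum(acc) / len(acc)

avg_xmax = mean_rightmost()
```

At zero temperature the rightmost charge would sit at c(N−1), so with this scaling the sampled mean should land near the edge of an order-one support.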
District heating with geothermally heated culinary water supply systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pitts, D.R.; Schmitt, R.C.
1979-09-01
An initial feasibility study of using existing culinary water supply systems to provide hot water for space heating and air conditioning to a typical residential community is reported. The Phase I study has centered on methods of using low-to-moderate temperature water for heating purposes, including institutional barriers, identification and description of a suitable residential community water system, evaluation of thermal losses in both the main distribution system and the street mains within the residential district, estimation of the size and cost of the pumping station main heat exchanger, sizing of individual residential heat exchangers, determination of pumping and power requirements due to increased flow through the residential area mains, and pumping and power requirements from the street mains through a typical residence. All results of the engineering study of Phase I are encouraging.
The R-Shell approach - Using scheduling agents in complex distributed real-time systems
NASA Technical Reports Server (NTRS)
Natarajan, Swaminathan; Zhao, Wei; Goforth, Andre
1993-01-01
Large, complex real-time systems such as space and avionics systems are extremely demanding in their scheduling requirements. Current OS design approaches are quite limited in the capabilities they provide for task scheduling. Typically, they simply implement a particular uniprocessor scheduling strategy and do not provide any special support for network scheduling, overload handling, fault tolerance, distributed processing, etc. Our design of the R-Shell real-time environment facilitates the implementation of a variety of sophisticated but efficient scheduling strategies, including incorporation of all these capabilities. This is accomplished by the use of scheduling agents which reside in the application run-time environment and are responsible for coordinating the scheduling of the application.
Effects of lint cleaning on lint trash particle size distribution
USDA-ARS?s Scientific Manuscript database
Cotton quality trash measurements used today typically yield a single value for trash parameters for a lint sample (i.e. High Volume Instrument – percent area; Advanced Fiber Information System – total count, trash size, dust count, trash count, and visible foreign matter). A Cotton Trash Identifica...
Locking Nut with Stress-Distributing Insert
NASA Technical Reports Server (NTRS)
Daniels, Christopher C.
2010-01-01
Reusable holders have been devised for evaluating high-temperature, plasma-resistant re-entry materials, especially fabrics. Typical material samples tested support thermal-protection-system damage repair requiring evaluation prior to re-entry into terrestrial atmosphere. These tests allow evaluation of each material to withstand the most severe predicted re-entry conditions.
ERIC Educational Resources Information Center
Haq, Shaji S.; Kodak, Tiffany
2015-01-01
This study evaluated the effects of massed and distributed practice on the acquisition of tacts and textual behavior in typically developing children. We compared the effects of massed practice (i.e., consolidating all practice opportunities during the week into a single session) and distributed practice (i.e., distributing all practice…
Distributed cooperative control of AC microgrids
NASA Astrophysics Data System (ADS)
Bidram, Ali
In this dissertation, the comprehensive secondary control of electric power microgrids is of concern. Microgrid technical challenges are mainly addressed through the hierarchical control structure, including primary, secondary, and tertiary control levels. The primary control level is locally implemented at each distributed generator (DG), while the secondary and tertiary control levels are conventionally implemented through a centralized control structure. The centralized structure requires a central controller, which raises reliability concerns by posing a single point of failure. In this dissertation, a distributed control structure using the distributed cooperative control of multi-agent systems is exploited to increase the secondary control reliability. The secondary control objectives are the microgrid voltage and frequency and the DGs' active and reactive powers. Fully distributed control protocols are implemented through distributed communication networks. In the distributed control structure, each DG only requires its own information and the information of its neighbors on the communication network. The distributed structure obviates the requirements for a central controller and a complex communication network which, in turn, improves the system reliability. Since the DG dynamics are nonlinear and non-identical, input-output feedback linearization is used to transform the nonlinear dynamics of DGs into linear dynamics. The proposed control frameworks cover the control of microgrids containing inverter-based DGs. Typical microgrid test systems are used to verify the effectiveness of the proposed control protocols.
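The flavor of distributed cooperative secondary control can be conveyed by a minimal consensus sketch: each DG updates its frequency estimate from its neighbors only, with one pinned (leader) DG observing the reference. The topology, gains, and initial values below are assumed, and simple single-integrator dynamics stand in for the feedback-linearized DG models.

```python
# Hypothetical 4-DG microgrid on a ring communication graph; DG 0 is
# pinned to the reference. Gains and initial frequencies are assumed.
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
omega = [59.7, 59.8, 60.2, 60.1]   # per-DG frequency estimates (Hz)
omega_ref = 60.0                   # nominal frequency
g, k, dt = 1.0, 1.0, 0.05          # pinning gain, coupling gain, step

for _ in range(1000):
    nxt = []
    for i, w in enumerate(omega):
        u = sum(k * (omega[j] - w) for j in neighbors[i])   # neighbor terms
        if i == 0:
            u += g * (omega_ref - w)                        # leader pinning
        nxt.append(w + dt * u)
    omega = nxt

max_err = max(abs(w - omega_ref) for w in omega)
```

No node ever uses global information, yet all frequencies are driven to the reference, which is the reliability argument for replacing the central secondary controller.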
Zhang, Dan; Wang, Yinghui; Yu, Kefu; Li, Pingyang; Zhang, Ruijie; Xu, Yiyin
2014-11-01
The Lijiang River is a typical karst river of southwestern China. Karst-aquifer systems are more vulnerable to contamination than other types of aquifers. The occurrence and distribution of organochlorine pesticides (OCPs) in surface sediments from the Lijiang River were investigated to evaluate their potential ecological risks. Their total concentrations in sediments ranged from 0.80 to 18.73 ng/g dry weight (dw) (mean 6.83 ng/g dw). The residue levels of OCPs varied in the order HCB > HCHs > DDTs. Compositional analyses of the OCPs showed that the HCHs and DDTs were mainly from historical usage. The ecological risk assessment suggested that HCHs and DDTs in Lijiang River sediments may pose adverse ecological risks, particularly at sites near agricultural areas.
1999-02-24
technology. Y2K related failures in business systems will generally cause an enterprise to lose partial or complete control of critical...generation systems may include steam turbines, diesel engines, or hydraulic turbines connected to alternators that generate...control centers used to manage subtransmission and distribution systems. These systems are typically operated using a subset of an energy
NASA Technical Reports Server (NTRS)
Aretskin-Hariton, Eliot D.; Zinnecker, Alicia Mae; Culley, Dennis E.
2014-01-01
Distributed Engine Control (DEC) is an enabling technology that has the potential to advance the state-of-the-art in gas turbine engine control. To analyze the capabilities that DEC offers, a Hardware-In-the-Loop (HIL) test bed is being developed at NASA Glenn Research Center. This test bed will support a systems-level analysis of control capabilities in closed-loop engine simulations. The structure of the HIL emulates a virtual test cell by implementing the operator functions, control system, and engine on three separate computers. This implementation increases the flexibility and extensibility of the HIL. Here, a method is discussed for implementing these interfaces by connecting the three platforms over a dedicated Local Area Network (LAN). This approach is verified using the Commercial Modular Aero-Propulsion System Simulation 40k (C-MAPSS40k), which is typically implemented on one computer. There are marginal differences between the results of the typical and the three-computer implementations. Additional analysis of the LAN network, including characterization of network load, packet drop, and latency, is presented. The three-computer setup supports the incorporation of complex control models and proprietary engine models into the HIL framework.
Distributed intelligence for supervisory control
NASA Technical Reports Server (NTRS)
Wolfe, W. J.; Raney, S. D.
1987-01-01
Supervisory control systems must deal with various types of intelligence distributed throughout the layers of control. Typical layers are real-time servo control, off-line planning and reasoning subsystems and finally, the human operator. Design methodologies must account for the fact that the majority of the intelligence will reside with the human operator. Hierarchical decompositions and feedback loops as conceptual building blocks that provide a common ground for man-machine interaction are discussed. Examples of types of parallelism and parallel implementation on several classes of computer architecture are also discussed.
NASA Technical Reports Server (NTRS)
1973-01-01
This user's manual describes the FORTRAN IV computer program developed to compute the total vertical load, normal concentrated pressure loads, and the center of pressure of typical SRB water impact slapdown pressure distributions specified in the baseline configuration. The program prepares the concentrated pressure load information in punched card format suitable for input to the STAGS computer program. In addition, the program prepares for STAGS input the inertia reacting loads to the slapdown pressure distributions.
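The core computation described (total vertical load and center of pressure from a sampled pressure distribution) amounts to numerical integration. Below is a small sketch in Python rather than FORTRAN IV, applied to a hypothetical triangular distribution; it is not the manual's program, only an illustration of the quantities it computes.

```python
def total_load_and_cp(xs, ps):
    """Total load per unit width and center of pressure for pressures ps
    sampled at stations xs, via the trapezoid rule (exact for piecewise-
    linear pressure)."""
    load = moment = 0.0
    for x0, x1, p0, p1 in zip(xs, xs[1:], ps, ps[1:]):
        dx = x1 - x0
        load += 0.5 * (p0 + p1) * dx
        # first moment of the linear segment about x = 0
        moment += dx * (p0 * (2.0 * x0 + x1) + p1 * (x0 + 2.0 * x1)) / 6.0
    return load, moment / load

# Hypothetical triangular distribution p = x on [0, 1]:
# exact answers are load = 1/2 and center of pressure = 2/3.
load, cp = total_load_and_cp([0.0, 0.5, 1.0], [0.0, 0.5, 1.0])
```

Concentrated loads for a structural solver such as STAGS would then be obtained by assigning each segment's load to its stations.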
NASA Astrophysics Data System (ADS)
Du, Jian; Sheng, Wanxing; Lin, Tao; Lv, Guangxian
2018-05-01
Nowadays, the smart distribution network has made tremendous progress, and business visualization has become ever more significant and indispensable. Based on a summary of traditional visualization technologies and the demands of the smart distribution network, a panoramic visualization application is proposed in this paper. The overall architecture, integrated architecture and service architecture of the panoramic visualization application are first presented. Then, the architecture design and main functions of the panoramic visualization system are elaborated in depth. In addition, the key technologies related to the application are discussed briefly. Finally, two typical visualization scenarios in the smart distribution network, risk warning and fault self-healing, demonstrate that the panoramic visualization application is valuable for the operation and maintenance of the distribution network.
Koskinen, R; Ali-Vehmas, T; Kämpfer, P; Laurikkala, M; Tsitko, I; Kostyal, E; Atroshi, F; Salkinoja-Salonen, M
2000-10-01
Sphingomonas species were commonly isolated from biofilms in drinking water distribution systems in Finland (three water meters) and Sweden (five water taps in different buildings). The Sphingomonas isolates (n = 38) were characterized by chemotaxonomic, physiological and phylogenetic methods. Fifteen isolates were designated to species Sphingomonas aromaticivorans, seven isolates to S. subterranea, two isolates to S. xenophaga and one isolate to S. stygia. Thirteen isolates represented one or more new species of Sphingomonas. Thirty-three isolates out of 38 grew at 5 degrees C on trypticase soy broth agar (TSBA) and may therefore proliferate in the Nordic drinking water pipeline where the temperature typically ranges from 2 to 12 degrees C. Thirty-three isolates out of 38 grew at 37 degrees C on TSBA and 15 isolates also grew on blood agar at 37 degrees C. Considering the potentially pathogenic features of sphingomonas, their presence in drinking water distribution systems may not be desirable.
Cho, Ming-Yuan; Hoang, Thi Thom
2017-01-01
Fast and accurate fault classification is essential to power system operations. In this paper, in order to classify electrical faults in radial distribution systems, a particle swarm optimization (PSO) based support vector machine (SVM) classifier has been proposed. The proposed PSO based SVM classifier is able to select appropriate input features and optimize SVM parameters to increase classification accuracy. Further, a time-domain reflectometry (TDR) method with a pseudorandom binary sequence (PRBS) stimulus has been used to generate a dataset for purposes of classification. The proposed technique has been tested on a typical radial distribution network to identify ten different types of faults considering 12 given input features generated by using Simulink software and MATLAB Toolbox. The success rate of the SVM classifier is over 97%, which demonstrates the effectiveness and high efficiency of the developed method.
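The PSO-over-SVM-hyperparameters loop can be sketched as below. To keep the example self-contained and dependency-free, a smooth surrogate error surface stands in for the actual cross-validated SVM error, and the swarm coefficients are conventional assumed values, not those of the paper.

```python
import random

def surrogate_cv_error(log_c, log_gamma):
    """Stand-in for the SVM cross-validation error as a function of
    log10(C) and log10(gamma): a smooth bowl with its minimum at (2, -1)."""
    return (log_c - 2.0) ** 2 + (log_gamma + 1.0) ** 2

def pso(n_particles=20, iters=80, bounds=((-3.0, 5.0), (-5.0, 3.0)), seed=0):
    """Plain global-best PSO minimizing surrogate_cv_error."""
    random.seed(seed)
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [surrogate_cv_error(*p) for p in pos]
    gi = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[gi][:], pbest_val[gi]
    w, c1, c2 = 0.7, 1.5, 1.5           # conventional PSO coefficients
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = surrogate_cv_error(*pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

(best_log_c, best_log_gamma), best_err = pso()
```

In the paper's setting, `surrogate_cv_error` would be replaced by the classifier's cross-validated error on the TDR-generated fault dataset, with feature selection encoded into the particle as well.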
Distributed Knowledge Base Systems for Diagnosis and Information Retrieval.
1988-04-08
cycle. For example, patients with cerebral palsy, a disease affecting motor control, typically have several muscles that function improperly in...different phases of the gait cycle. The malfunctions in the case of cerebral palsy are improper contractions of the muscles -- both in terms of the...generally resulted in the neural network level not being a serious contender for AI theory formation and system construction until a new generation
Manual Fire Suppression Methods on Typical Machinery Space Spray Fires
1990-07-31
Manuscript approved April 25, 1990. Aqueous Film Forming Foam (AFFF) has been incorporated in machinery space fire protection systems to...A series of tests was conducted to evaluate the effectiveness of Aqueous Film Forming Foam (AFFF)...machinery space fire protection systems to control running fuel and fuel spray fires (PKP side of TAFES), and bilge fires (aqueous film forming foam
Thermo-mechanical Design Methodology for ITER Cryodistribution cold boxes
NASA Astrophysics Data System (ADS)
Shukla, Vinit; Patel, Pratik; Das, Jotirmoy; Vaghela, Hitensinh; Bhattacharya, Ritendra; Shah, Nitin; Choukekar, Ketan; Chang, Hyun-Sik; Sarkar, Biswanath
2017-04-01
The ITER cryo-distribution (CD) system is in charge of distributing cryogen at the required mass flow rate, pressure and temperature levels to its users, namely the superconducting (SC) magnets and cryopumps (CPs). The CD system can also use the magnet structures as a thermal buffer in order to operate the cryo-plant at steady-state conditions as much as possible. A typical CD cold box is equipped mainly with a liquid helium (LHe) bath, heat exchangers (HXs), cryogenic valves, a filter, heaters, a cold circulator, a cold compressor and process piping. The various load combinations likely to occur during the life cycle of the CD cold boxes are imposed on a representative model, and the impacts on the system are analyzed. This study shows that a break of insulation vacuum during nominal operation (NO), together with a seismic event (Seismic Level-2), is the most stringent load combination, with a maximum stress of 224 MPa. However, the NO+SMHV (Séismes Maximaux Historiquement Vraisemblables = Maximum Historically Probable Earthquakes) load combination has the least safety margin and will form the basis of the design of the CD system and its sub-components. This paper presents and compares the results of the different load combinations likely to occur on a typical CD cold box.
Connecting HL Tau to the observed exoplanet sample
NASA Astrophysics Data System (ADS)
Simbulan, Christopher; Tamayo, Daniel; Petrovich, Cristobal; Rein, Hanno; Murray, Norman
2017-08-01
The Atacama Large Millimeter/submilimeter Array (ALMA) recently revealed a set of nearly concentric gaps in the protoplanetary disc surrounding the young star HL Tauri (HL Tau). If these are carved by forming gas giants, this provides the first set of orbital initial conditions for planets as they emerge from their birth discs. Using N-body integrations, we have followed the evolution of the system for 5 Gyr to explore the possible outcomes. We find that HL Tau initial conditions scaled down to the size of typically observed exoplanet orbits naturally produce several populations in the observed exoplanet sample. First, for a plausible range of planetary masses, we can match the observed eccentricity distribution of dynamically excited radial velocity giant planets with eccentricities >0.2. Secondly, we roughly obtain the observed rate of hot Jupiters around FGK stars. Finally, we obtain a large efficiency of planetary ejections of ≈2 per HL Tau-like system, but the small fraction of stars observed to host giant planets makes it hard to match the rate of free-floating planets inferred from microlensing observations. In view of upcoming Gaia results, we also provide predictions for the expected mutual inclination distribution, which is significantly broader than the absolute inclination distributions typically considered by previous studies.
Optimal distribution of integration time for intensity measurements in Stokes polarimetry.
Li, Xiaobo; Liu, Tiegen; Huang, Bingjing; Song, Zhanjie; Hu, Haofeng
2015-10-19
We consider the typical Stokes polarimetry system, which performs four intensity measurements to estimate a Stokes vector. We show that if the total integration time of the intensity measurements is fixed, the variance of the Stokes vector estimator depends on how the integration time is distributed among the four measurements. Therefore, by optimizing the distribution of integration time, the variance of the Stokes vector estimator can be decreased. In this paper, we obtain the closed-form solution for the optimal distribution of integration time by employing the Lagrange multiplier method. According to the theoretical analysis and a real-world experiment, the total variance of the Stokes vector estimator can be decreased by about 40% in the case discussed in this paper. The proposed method effectively decreases the measurement variance and thus statistically improves the measurement accuracy of the polarimetric system.
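The Lagrange-multiplier result has a simple closed form when the estimator variance decomposes as a sum of per-measurement terms a_i/t_i (an assumed noise model, not the paper's exact expressions): minimizing Σ a_i/t_i subject to Σ t_i = T gives t_i ∝ √a_i. A sketch:

```python
import math

def optimal_times(a, T):
    """Allocate total integration time T across measurements whose variance
    contributions scale as a_i / t_i; Lagrange multipliers give t_i ∝ sqrt(a_i)."""
    s = sum(math.sqrt(x) for x in a)
    return [T * math.sqrt(x) / s for x in a]

def total_variance(a, t):
    return sum(x / ti for x, ti in zip(a, t))

a = [4.0, 1.0, 1.0, 1.0]   # hypothetical per-measurement noise coefficients
T = 1.0
t_opt = optimal_times(a, T)               # [0.4, 0.2, 0.2, 0.2]
v_opt = total_variance(a, t_opt)          # (sum of sqrt(a_i))**2 / T = 25
v_uni = total_variance(a, [T / 4] * 4)    # uniform split gives 28
```

The ~11% gain in this toy case grows as the per-measurement coefficients become more unequal, which is the mechanism behind the paper's reported 40% reduction.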
NASA Astrophysics Data System (ADS)
Aktas, Metin; Maral, Hakan; Akgun, Toygar
2018-02-01
Extinction ratio (ER) is an inherent limiting factor with a direct effect on the detection performance of phase-OTDR based distributed acoustic sensing systems. In this work we present a model-based analysis of Rayleigh scattering to simulate the effects of extinction ratio on the received signal under varying signal acquisition scenarios and system parameters. These scenarios are constructed to represent typically observed cases such as multiple vibration sources cluttered around the target vibration source to be detected, continuous-wave light sources with center frequency drift, varying fiber optic cable lengths and varying ADC bit resolutions. Results show that an insufficient ER can raise the optical noise floor and effectively hide the effects of elaborate system improvement efforts.
NREL/SCE High Penetration PV Integration Project: FY13 Annual Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mather, B. A.; Shah, S.; Norris, B. L.
2014-06-01
In 2010, the National Renewable Energy Laboratory (NREL), Southern California Edison (SCE), Quanta Technology, Satcon Technology Corporation, Electrical Distribution Design (EDD), and Clean Power Research (CPR) teamed to analyze the impacts of high penetration levels of photovoltaic (PV) systems interconnected onto the SCE distribution system. This project was designed specifically to benefit from the experience that SCE and the project team would gain during the installation of 500 megawatts (MW) of utility-scale PV systems (with 1-5 MW typical ratings) starting in 2010 and completing in 2015 within SCE's service territory through a program approved by the California Public Utility Commission (CPUC). This report provides the findings of the research completed under the project to date.
A Review of Distributed Control Techniques for Power Quality Improvement in Micro-grids
NASA Astrophysics Data System (ADS)
Zeeshan, Hafiz Muhammad Ali; Nisar, Fatima; Hassan, Ahmad
2017-05-01
A micro-grid is typically envisioned as a small-scale local power supply network based on distributed energy resources (DERs) that can operate in parallel with the grid as well as in a standalone manner. The distributed generator of a micro-grid system is usually a converter-inverter topology acting as a non-linear load and injecting harmonics into the distribution feeder. Hence, the negative effects on power quality of distributed generation sources and components are clearly witnessed. In this paper, a review of distributed control approaches for power quality improvement is presented, encompassing harmonic compensation, loss mitigation and optimum power sharing in a multi-source, multi-load distributed power network. The decentralized subsystems for harmonic compensation and active-reactive power sharing accuracy are analysed in detail. Results have been validated to be consistent with IEEE standards.
Community, time-series epidemiology typically uses either 24-hour integrated particulate matter (PM) concentrations averaged across several monitors in a city or data obtained at a central monitoring site to relate PM concentrations to human health effects. If 24-hour integrated...
Efficient On-Demand Operations in Large-Scale Infrastructures
ERIC Educational Resources Information Center
Ko, Steven Y.
2009-01-01
In large-scale distributed infrastructures such as clouds, Grids, peer-to-peer systems, and wide-area testbeds, users and administrators typically desire to perform "on-demand operations" that deal with the most up-to-date state of the infrastructure. However, the scale and dynamism present in the operating environment make it challenging to…
On-Line Water Quality Parameters as Indicators of Distribution System Contamination
At a time when the safety and security of services we have typically taken for granted are under question, a real-time or near real-time method of monitoring changes in water quality parameters could provide a critical line of defense in protecting public health. This study was u...
ERIC Educational Resources Information Center
Clarkson, W. W.; And Others
This module introduces the physical, biological, and chemical constituents of wastewaters and sludges which are of concern in land treatment systems. The characteristics of typical municipal wastewater are tabulated for strong, medium, and weak sewages. Some of the factors affecting pollutant concentrations are listed. Flow, distribution and…
Effect of sodium hypochlorite on typical biofilms formed in drinking water distribution systems.
Lin, Huirong; Zhu, Xuan; Wang, Yuxin; Yu, Xin
2017-04-01
Human health and biological safety problems resulting from biofilm pollution in urban drinking water pipe networks have attracted wide concern. Despite the residual chlorine maintained in drinking water distribution systems, some bacteria are recalcitrant human pathogens capable of forming biofilms on pipe walls and causing health risks. Typical drinking water bacterial biofilms and their responses to different concentrations of chlorination were monitored. The results showed that all four bacteria formed single-species biofilms susceptible to sodium hypochlorite. After 30 min of disinfection, biomass and cultivability decreased with increasing disinfectant concentration but then increased at high disinfectant doses. PMA-qPCR results indicated little cellular damage. Flow cytometry analysis showed that with increasing doses of disinfectant, the number of clusters increased and the size of clusters decreased. Under high disinfectant treatment, EPS was depleted by the disinfectant, and about 0.5-1 mg/L of residual chlorine appeared appropriate for drinking water treatment. This research provides insight into the protection EPS affords biofilms. The resistance of biofilms to high levels of chlorine has implications for the delivery of drinking water.
Generation and dose distribution measurement of flash x-ray in KALI-5000 system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Menon, Rakhee; Roy, Amitava; Mitra, S.
2008-10-15
Flash x-ray generation studies have been carried out in the KALI-5000 pulsed power system. An intense relativistic electron beam was bombarded on a tantalum target at the anode to produce flash x-rays via bremsstrahlung conversion. The typical electron beam parameters were 360 kV, 18 kA, and 100 ns, with a current density of a few hundred A/cm². The x-ray dose was measured with calcium sulfate:dysprosium (CaSO4:Dy) thermoluminescent dosimeters, and the axial dose distribution was characterized. It was observed that the on-axis dose falls off with distance as ~1/x^n, where n varies from 1.8 to 1.85. A maximum on-axis dose of 46 mrad was measured at 1 m from the source. A plastic scintillator with an optical fiber coupled to a photomultiplier tube was developed to measure the x-ray pulse width. The typical x-ray pulse width varied from 50 to 80 ns.
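The reported ~1/x^n falloff can be recovered from dose-distance pairs by a least-squares fit in log-log space. The readings below are invented for illustration (only the 46 mrad at 1 m figure comes from the abstract):

```python
import math

def fit_power_law(xs, ds):
    """Least-squares fit of d = d0 * x**(-n) in log-log coordinates."""
    lx = [math.log(x) for x in xs]
    ld = [math.log(d) for d in ds]
    k = len(xs)
    mx, md = sum(lx) / k, sum(ld) / k
    slope = (sum((a - mx) * (b - md) for a, b in zip(lx, ld))
             / sum((a - mx) ** 2 for a in lx))
    d0 = math.exp(md - slope * mx)   # dose at x = 1
    return d0, -slope                # exponent n

# Hypothetical dose readings (mrad) at increasing distance (m), roughly ~ 1/x^1.8
xs = [0.5, 1.0, 2.0, 4.0]
ds = [160.0, 46.0, 13.2, 3.8]
d0, n = fit_power_law(xs, ds)        # n comes out near 1.8, d0 near 46
```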
Sea-ice floe-size distribution in the context of spontaneous scaling emergence in stochastic systems
NASA Astrophysics Data System (ADS)
Herman, Agnieszka
2010-06-01
Sea-ice floe-size distribution (FSD) in ice-pack covered seas influences many aspects of ocean-atmosphere interactions. However, data concerning FSD in the polar oceans are still sparse, and the processes shaping the observed FSD properties are poorly understood. Typically, power-law FSDs are assumed, although no feasible explanation has been provided either for this or for other properties of the observed distributions. Consequently, no model exists capable of predicting FSD parameters in any particular situation. Here I show that the observed FSDs can be well represented by a truncated Pareto distribution P(x) = x^(-1-α) exp[(1-α)/x], which is an emergent property of a certain group of multiplicative stochastic systems described by the generalized Lotka-Volterra (GLV) equation. Building upon this recognition, the possibility of developing a simple agent-based GLV-type sea-ice model is considered. Contrary to simple power-law FSDs, GLV gives consistent estimates of the total floe perimeter, as well as a floe-area distribution in agreement with observations.
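The quoted density can be checked numerically: for α > 1 the factor exp[(1-α)/x] suppresses small floes (truncating the pure power law), and the mode lands at x = (α-1)/(α+1). A sketch with an illustrative α = 1.5 and an assumed size range:

```python
import math

def fsd_pdf(x, alpha):
    """Unnormalized floe-size density P(x) = x**(-1-alpha) * exp((1-alpha)/x)."""
    return x ** (-1.0 - alpha) * math.exp((1.0 - alpha) / x)

alpha = 1.5                              # illustrative exponent, not a fitted value
x_min, x_max, n = 0.01, 100.0, 20000     # assumed size range and grid resolution
h = (x_max - x_min) / n
xs = [x_min + i * h for i in range(n + 1)]
ps = [fsd_pdf(x, alpha) for x in xs]
z = h * (sum(ps) - 0.5 * (ps[0] + ps[-1]))            # trapezoidal normalization
x_mode = max(zip(xs, ps), key=lambda t: t[1])[0]      # near (alpha-1)/(alpha+1) = 0.2
```

The numerical mode agreeing with the analytic (α-1)/(α+1) is a quick sanity check that the exponential factor, not the power law, controls the small-floe end.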
NASA Technical Reports Server (NTRS)
Allard, R.; Mack, B.; Bayoumi, M. M.
1989-01-01
Most robot systems lack a suitable hardware and software environment for the efficient research of new control and sensing schemes. Typically, engineers and researchers need to be experts in control, sensing, programming, communication and robotics in order to implement, integrate and test new ideas in a robot system. To reduce this burden, the Robot Controller Test Station (RCTS) has been developed. It uses a modular hardware and software architecture allowing easy physical and functional reconfiguration of a robot. This is accomplished by emphasizing four major design goals: flexibility, portability, ease of use, and ease of modification. An enhanced distributed-processing version of RCTS is described. It features an expanded and more flexible communication system design. Distributed processing makes more local computing power available while retaining the low cost of microprocessors. A large number of possible communication, control and sensing schemes can therefore be easily introduced and tested using the same basic software structure.
An approximate methods approach to probabilistic structural analysis
NASA Technical Reports Server (NTRS)
Mcclung, R. C.; Millwater, H. R.; Wu, Y.-T.; Thacker, B. H.; Burnside, O. H.
1989-01-01
A probabilistic structural analysis method (PSAM) is described which makes an approximate calculation of the structural response of a system, including the associated probabilistic distributions, with minimal computation time and cost, based on a simplified representation of the geometry, loads, and material. The method employs the fast probability integration (FPI) algorithm of Wu and Wirsching. Typical solution strategies are illustrated by formulations for a representative critical component chosen from the Space Shuttle Main Engine (SSME) as part of a major NASA-sponsored program on PSAM. Typical results are presented to demonstrate the role of the methodology in engineering design and analysis.
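A brute-force Monte Carlo estimate illustrates what FPI approximates far more cheaply. The limit state and input distributions below are toy assumptions, not the SSME component data:

```python
import random

random.seed(1)

def stress(load, area):
    """Toy limit-state response: axial stress in a bar."""
    return load / area

# Assumed input distributions, for illustration only
N = 200_000
failures = sum(
    1 for _ in range(N)
    if stress(random.gauss(100.0, 10.0),    # random load
              random.gauss(2.0, 0.05))      # random cross-section area
       > random.gauss(65.0, 5.0)            # random material strength
)
p_fail = failures / N                       # probability that stress exceeds strength
```

FPI reaches a comparable estimate from the same input distributions with orders of magnitude fewer response evaluations, which is the point of the approximate-methods approach.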
NASA Astrophysics Data System (ADS)
Chao, Woodrew; Ho, Bruce K. T.; Chao, John T.; Sadri, Reza M.; Huang, Lu J.; Taira, Ricky K.
1995-05-01
Our tele-medicine/PACS archive system is based on a three-tier distributed hierarchical architecture, including magnetic disk farms, an optical jukebox, and tape jukebox sub-systems. The hierarchical storage management (HSM) architecture, built around a low-cost, high-performance platform [personal computers (PCs) and Microsoft Windows NT], presents a very scalable and distributed solution ideal for meeting the needs of client/server environments such as tele-medicine, tele-radiology, and PACS. These image-based systems typically require storage capacities mirroring those of film-based technology (multi-terabyte, with 10+ years of storage) and patient data retrieval times at near on-line performance, as demanded by radiologists. With the scalable architecture, storage requirements can be easily configured to meet the needs of a small clinic (multi-gigabyte) or a major hospital (multi-terabyte). The patient data retrieval performance requirement was achieved by employing system intelligence to manage migration and caching of archived data. Relevant information from HIS/RIS triggers prefetching of data whenever possible, based on simple rules. System intelligence embedded in the migration manager allows the clustering of patient data onto a single tape during data migration from optical to tape media. Clustering of patient data on the same tape eliminates multiple tape loads and the associated seek time during patient data retrieval. Optimal tape performance can then be achieved by utilizing the tape drive's high-performance data-streaming capabilities, thereby reducing the data retrieval delays typically associated with streaming tape devices.
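The tape-clustering idea reduces to a small piece of bookkeeping: remember which tape holds a patient's earlier studies and append there while capacity allows, so retrieval loads a single cartridge. This sketch is an assumption about the mechanism, not the actual archive implementation:

```python
class TapeArchive:
    """Sketch of patient-clustered migration: one patient's studies share a tape."""

    def __init__(self, capacity_gb):
        self.capacity = capacity_gb
        self.tapes = []                 # per-tape used capacity in GB
        self.patient_tape = {}          # patient id -> preferred tape index

    def migrate(self, patient_id, study_gb):
        t = self.patient_tape.get(patient_id)
        if t is None or self.tapes[t] + study_gb > self.capacity:
            # open a new tape (a real HSM would first try other partly filled tapes)
            self.tapes.append(0.0)
            t = len(self.tapes) - 1
            self.patient_tape[patient_id] = t
        self.tapes[t] += study_gb
        return t

arch = TapeArchive(capacity_gb=100.0)
t1 = arch.migrate("patient-A", 10.0)    # tape 0
t2 = arch.migrate("patient-B", 10.0)    # tape 1
t3 = arch.migrate("patient-A", 5.0)     # back on tape 0, next to the first study
```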
Ölçer, İbrahim; Öncü, Ahmet
2017-06-05
Distributed vibration sensing based on phase-sensitive optical time domain reflectometry (ϕ-OTDR) is being widely used in several applications. However, one of the main challenges in coherent detection-based ϕ-OTDR systems is the fading noise, which impacts the detection performance. In addition, typical signal averaging and differentiating techniques are not suitable for detecting high frequency events. This paper presents a new approach for reducing the effect of fading noise in fiber optic distributed acoustic vibration sensing systems without any impact on the frequency response of the detection system. The method is based on temporal adaptive processing of ϕ-OTDR signals. The fundamental theory underlying the algorithm, which is based on signal-to-noise ratio (SNR) maximization, is presented, and the efficacy of our algorithm is demonstrated with laboratory experiments and field tests. With the proposed digital processing technique, the results show that more than 10 dB of SNR values can be achieved without any reduction in the system bandwidth and without using additional optical amplifier stages in the hardware. We believe that our proposed adaptive processing approach can be effectively used to develop fiber optic-based distributed acoustic vibration sensing systems.
Quantitative Analysis Method of Output Loss due to Restriction for Grid-connected PV Systems
NASA Astrophysics Data System (ADS)
Ueda, Yuzuru; Oozeki, Takashi; Kurokawa, Kosuke; Itou, Takamitsu; Kitamura, Kiyoyuki; Miyamoto, Yusuke; Yokota, Masaharu; Sugihara, Hiroyuki
The voltage of a power distribution line rises due to reverse power flow from grid-connected PV systems. In the case of high-density grid connection, the voltage rise is larger than for a stand-alone grid-connected system. To prevent overvoltage on the distribution line, a PV system's output is restricted when the line voltage approaches the upper limit of the control range. Because of this interaction, the output loss is larger in the high-density case. This research developed a quantitative analysis method for PV system output and losses to clarify the behavior of grid-connected PV systems. All measured data are classified into loss factors using 1-minute averages of 1-second data instead of the typical 1-hour averages. The operating point on the I-V curve is estimated from module temperature, array output voltage, array output current and solar irradiance to quantify the loss due to output restriction. As a result, the loss due to output restriction is successfully quantified and the behavior of output restriction is clarified.
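The restriction-loss term can be sketched as per-minute accounting: whenever the line voltage sits at the control limit and the measured output falls below the estimated potential output, the shortfall is booked as restriction loss. The 107 V limit and the sample values are illustrative assumptions, not the paper's data:

```python
V_LIMIT = 107.0   # assumed upper limit of the distribution-voltage control range

def quantify_restriction_loss(samples):
    """samples: (potential_kW, measured_kW, line_voltage_V) per 1-minute interval.
    Output restriction is assumed active when the voltage sits at the limit."""
    loss_kwh = 0.0
    for potential, measured, volts in samples:
        if volts >= V_LIMIT and measured < potential:
            loss_kwh += (potential - measured) * (1.0 / 60.0)  # 1-minute interval
    return loss_kwh

data = [
    (3.0, 3.0, 104.5),    # unrestricted operation
    (3.2, 2.0, 107.2),    # output held back at the voltage limit
    (3.1, 1.5, 107.5),
]
loss = quantify_restriction_loss(data)   # (1.2 + 1.6) / 60 kWh
```

Using 1-minute rather than 1-hour averages matters here: restriction events lasting a few minutes would be smeared into the hourly mean and misattributed.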
A Vision for Co-optimized T&D System Interaction with Renewables and Demand Response
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, C. Lindsay; Zéphyr, Luckny; Liu, Jialin
The evolution of the power system to the reliable, efficient and sustainable system of the future will involve development of both demand- and supply-side technology and operations. The use of demand response to counterbalance the intermittency of renewable generation brings the consumer into the spotlight. Though individual consumers are interconnected at the low-voltage distribution system, these resources are typically modeled as variables at the transmission network level. In this paper, a vision for co-optimized interaction of distribution systems, or microgrids, with the high-voltage transmission system is described. In this framework, microgrids encompass consumers, distributed renewables and storage. The energy management system of the microgrid can also sell (buy) excess (necessary) energy from the transmission system. Preliminary work explores price mechanisms to manage the microgrid and its interactions with the transmission system. Wholesale market operations are addressed through the development of scalable stochastic optimization methods that provide the ability to co-optimize interactions between the transmission and distribution systems. Modeling challenges of the co-optimization are addressed via solution methods for large-scale stochastic optimization, including decomposition and stochastic dual dynamic programming.
NASA Astrophysics Data System (ADS)
Drótos, Gábor; Bódai, Tamás; Tél, Tamás
2016-08-01
In nonautonomous dynamical systems, like in climate dynamics, an ensemble of trajectories initiated in the remote past defines a unique probability distribution, the natural measure of a snapshot attractor, for any instant of time, but this distribution typically changes in time. In cases with an aperiodic driving, temporal averages taken along a single trajectory would differ from the corresponding ensemble averages even in the infinite-time limit: ergodicity does not hold. It is worth considering this difference, which we call the nonergodic mismatch, by taking time windows of finite length for temporal averaging. We point out that the probability distribution of the nonergodic mismatch is qualitatively different in ergodic and nonergodic cases: its average is zero and typically nonzero, respectively. A main conclusion is that the difference of the average from zero, which we call the bias, is a useful measure of nonergodicity, for any window length. In contrast, the standard deviation of the nonergodic mismatch, which characterizes the spread between different realizations, exhibits a power-law decrease with increasing window length in both ergodic and nonergodic cases, and this implies that temporal and ensemble averages differ in dynamical systems with finite window lengths. It is the average modulus of the nonergodic mismatch, which we call the ergodicity deficit, that represents the expected deviation from fulfilling the equality of temporal and ensemble averages. As an important finding, we demonstrate that the ergodicity deficit cannot be reduced arbitrarily in nonergodic systems. We illustrate via a conceptual climate model that the nonergodic framework may be useful in Earth system dynamics, within which we propose the measure of nonergodicity, i.e., the bias, as an order-parameter-like quantifier of climate change.
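The bias measure can be illustrated on a toy nonautonomous system, a linearly drifting AR(1) process standing in for the paper's conceptual climate model (all parameters are illustrative assumptions): windowed time averages along single trajectories lag the instantaneous ensemble average, so their mean difference, the bias, is nonzero.

```python
import random

random.seed(2)

def simulate(n_traj=1000, n_steps=300, window=100):
    """Toy nonautonomous system: AR(1) with a linearly drifting forcing."""
    trajs = []
    for _ in range(n_traj):
        x, path = 0.0, []
        for n in range(n_steps):
            x = 0.5 * x + 0.01 * n + random.gauss(0.0, 0.1)
            path.append(x)
        trajs.append(path)
    # ensemble average at the final instant
    ens_final = sum(tr[-1] for tr in trajs) / n_traj
    # nonergodic mismatch: windowed time average minus ensemble average
    mismatches = [sum(tr[-window:]) / window - ens_final for tr in trajs]
    bias = sum(mismatches) / n_traj                    # nonzero here: nonergodic
    deficit = sum(abs(m) for m in mismatches) / n_traj # ergodicity deficit
    return bias, deficit

bias, deficit = simulate()
```

Replacing the drift term 0.01 * n with a constant makes the process asymptotically stationary, and the bias collapses toward zero, the ergodic case described above.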
NASA Astrophysics Data System (ADS)
Kovalevsky, Louis; Langley, Robin S.; Caro, Stephane
2016-05-01
Due to the high cost of experimental EMI measurements, significant attention has been focused on numerical simulation. Classical methods such as the Method of Moments or the Finite-Difference Time-Domain method are not well suited for this type of problem, as they require a fine discretisation of space and fail to take uncertainties into account. In this paper, the authors show that Statistical Energy Analysis (SEA) is well suited for this type of application. SEA is a statistical approach employed to solve high-frequency problems of electromagnetically reverberant cavities at a reduced computational cost. The key aspects of this approach are (i) to consider an ensemble of systems that share the same gross parameters, and (ii) to avoid solving Maxwell's equations inside the cavity by using the power balance principle. The output is an estimate of the field magnitude distribution in each cavity. The method is applied to a typical aircraft structure.
NASA Astrophysics Data System (ADS)
Cai, Huai-yu; Dong, Xiao-tong; Zhu, Meng; Huang, Zhan-hua
2018-01-01
Wavefront coding as an athermalization technique can effectively ensure stable imaging of an optical system over a large temperature range, with the additional advantages of compact structure and low cost. Simulating properties such as the PSF and MTF of a wavefront-coded athermal system under several typical temperature gradient distributions helps characterize its behavior in non-ideal temperature environments and supports the system design targets. In this paper, we utilize the interoperability of data between SolidWorks and ZEMAX to simplify the traditional process of structural/thermal/optical integrated analysis. We design and build the optical model and the corresponding mechanical model of an infrared imaging wavefront-coded athermal system. Axial and radial temperature gradients of different magnitudes are applied to the whole system in SolidWorks, yielding the changes in curvature, refractive index and lens spacing. We then import the deformed model into ZEMAX for ray tracing and obtain the changes in the PSF and MTF of the optical system. Finally, we discuss and evaluate the consistency of the PSF (MTF) of the wavefront-coded athermal system and the restorability of its images, which provides a basis and reference for the optimal design of wavefront-coded athermal systems. The results show that the adaptability of a single-material infrared wavefront-coded athermal system to an axial temperature gradient can reach an upper limit of temperature fluctuation of 60°C, much higher than for a radial temperature gradient.
Two coupled, driven Ising spin systems working as an engine.
Basu, Debarshi; Nandi, Joydip; Jayannavar, A M; Marathe, Rahul
2017-05-01
Miniaturized heat engines constitute a fascinating field of current research. Many theoretical and experimental studies are being conducted that involve colloidal particles in harmonic traps as well as bacterial baths acting like thermal baths. These systems are micron-sized and are subjected to large thermal fluctuations. Hence, for these systems average thermodynamic quantities, such as work done, heat exchanged, and efficiency, lose meaning unless supported by their full probability distributions. Earlier studies on microengines are concerned with applying Carnot or Stirling engine protocols to miniaturized systems, where the system undergoes the typical two isothermal and two adiabatic steps. Unlike these models, we study a prototype system of two classical Ising spins driven by time-dependent, phase-shifted external magnetic fields. These spins are simultaneously in contact with two heat reservoirs at different temperatures for the full duration of the driving protocol. Performance of the model as an engine or a refrigerator depends on a single parameter, namely the phase between the two external drivings. We study this system in terms of fluctuations in efficiency and coefficient of performance (COP). We obtain the full distributions of these quantities numerically and study the tails of these distributions. We also study the reliability of the engine. We find that fluctuations dominate the mean values of efficiency and COP, and that their probability distributions are broad with power-law tails.
Dolan, John R; Gimenez, Audrey; Cornet-Barthaux, Veronique; de Verneil, Alain
2016-11-01
Transient 'hot spots' of phytoplankton productivity occur in the generally oligotrophic Southern Pacific Ocean, and we hypothesized that the population structure of tintinnid ciliates, planktonic grazers, would differ from that of a typical oligotrophic site. Samples were collected over a 1-wk period at each of two sites between Fiji and Tahiti: one of elevated chlorophyll a concentrations and primary productivity with an abundance of the N-fixing cyanobacteria Trichodesmium, and a distant oligotrophic site. Tintinnid abundance differed between the sites by a factor of 2. A single species (Favella sp.), absent from the oligotrophic site, highly dominated the 'hot spot' site. However, total species richness was identical (71 spp.), as was short-term temporal variability (2-4 d). At both sites, species abundance distributions most closely fit a log-series or log-normal distribution, and the abundance distributions of ecological types, forms of distinct lorica oral diameter, were the typical geometric. Morphological diversity was only slightly lower at the high productivity site. We found that communities of these plankton grazers in 'hot spots' of phytoplankton productivity in oligotrophic systems, although harboring different species, differ little from surrounding oligotrophic areas in community structure. © 2016 The Author(s) Journal of Eukaryotic Microbiology © 2016 International Society of Protistologists.
We are not the 99 percent: quantifying asphericity in the distribution of Local Group satellites
NASA Astrophysics Data System (ADS)
Forero-Romero, Jaime E.; Arias, Verónica
2018-05-01
We use simulations to build an analytic probability distribution for the asphericity in the satellite distribution around Local Group (LG) type galaxies in the Lambda Cold Dark Matter (LCDM) paradigm. We use this distribution to estimate the atypicality of the satellite distributions in the LG even when the underlying simulations do not have enough systems fully resembling the LG in terms of its typical masses, separation and kinematics. We demonstrate the method using three different simulations (Illustris-1, Illustris-1-Dark and ELVIS) and a number of satellites ranging from 11 to 15. Detailed results differ greatly among the simulations suggesting a strong influence of the typical DM halo mass, the number of satellites and the simulated baryonic effects. However, there are three common trends. First, at most 2% of the pairs are expected to have satellite distributions with the same asphericity as the LG; second, at most 80% of the pairs have a halo with a satellite distribution as aspherical as in M31; and third, at most 4% of the pairs have a halo with satellite distribution as planar as in the MW. These quantitative results place the LG at the level of a 3σ outlier in the LCDM paradigm. We suggest that understanding the reasons for this atypicality requires quantifying the asphericity probability distribution as a function of halo mass and large scale environment. The approach presented here can facilitate that kind of study and other comparisons between different numerical setups and choices to study satellites around LG pairs in simulations.
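One standard way to quantify the asphericity of a satellite distribution (a plausible reading of the statistic used above, though the paper's exact definition may differ) is the minor-to-major axis ratio c/a from the eigenvalues of the satellites' second-moment (inertia) tensor about the host. A small c/a indicates a planar, MW-like configuration. The mock systems below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

def axis_ratios(pos):
    """Minor-to-major (c/a) and intermediate-to-major (b/a) axis ratios
    from the eigenvalues of the unweighted second-moment tensor of
    satellite positions relative to their host (at the origin)."""
    I = pos.T @ pos / len(pos)               # 3x3 second-moment tensor
    evals = np.linalg.eigvalsh(I)            # ascending: c^2 <= b^2 <= a^2
    c, b, a = np.sqrt(evals)
    return c / a, b / a

# Isotropic mock system of 11 satellites: c/a of order unity.
iso = rng.normal(size=(11, 3))
# "Planar" mock system: the same positions squashed along z.
plane = iso * np.array([1.0, 1.0, 0.1])

print("isotropic c/a:", round(axis_ratios(iso)[0], 2))
print("planar    c/a:", round(axis_ratios(plane)[0], 2))
```

Repeating this over many simulated host haloes yields the probability distribution of c/a against which an observed system (MW or M31) can be ranked, which is the spirit of the comparison in the abstract.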
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poerschke, A.; Stecher, D.
2014-06-01
Field testing was performed in a new construction unoccupied test house in Pittsburgh, Pennsylvania. Four air-based heating, ventilation, and air conditioning distribution systems--a typical airflow ducted system to the bedrooms, a low airflow ducted system to the bedrooms, a system with transfer fans to the bedrooms, and a system with no ductwork to the bedrooms--were evaluated during heating, cooling, and midseason conditions. The relative ability of each system was assessed with respect to relevant Air Conditioning Contractors of America and ASHRAE standards for house temperature uniformity and stability, respectively.
A Collaboration Network Model Of Cytokine-Protein Network
NASA Astrophysics Data System (ADS)
Zou, Sheng-Rong; Zhou, Ta; Peng, Yu-Jing; Guo, Zhong-Wei; Gu, Chang-Gui; He, Da-Ren
2008-03-01
Complex networks provide a new view for the investigation of immune systems. We collect data through the STRING database and present a network description based on a collaboration network model. The cytokine-protein network we consider is constituted by two kinds of nodes: immune cytokine types, which can be regarded as collaboration acts, and protein types, which can be regarded as collaboration actors. From the act degree distribution, which is well described by typical SPL (shifted power law) functions [1], we find that HRAS, TNFRSF13C, S100A8, S100A1, MAPK8, S100A7, LIF, CCL4, and CXCL13 are highly collaborative with other proteins. This reveals that these mediators are important in the cytokine-protein network for regulating immune activity. A dyad in the collaboration network can be defined as two proteins that appear together in one cytokine collaboration relationship. The dyad act degree distribution can also be well described by typical SPL functions. [1] Assortativity and act degree distribution of some collaboration networks, Hui Chang, Bei-Bei Su, Yue-Ping Zhou, Daren He, Physica A 383 (2007) 687-702.
Evolving Scale-Free Networks by Poisson Process: Modeling and Degree Distribution.
Feng, Minyu; Qu, Hong; Yi, Zhang; Xie, Xiurui; Kurths, Jurgen
2016-05-01
Since the great mathematician Leonhard Euler initiated the study of graph theory, networks have been one of the most significant research subjects across many disciplines. In recent years, the discovery of the small-world and scale-free properties of complex networks in statistical physics made network science intriguing again for many researchers. One of the challenges of network science is to propose rational models for complex networks. In this paper, in order to reveal the influence of the vertex generating mechanism of complex networks, we propose three novel models based on the homogeneous Poisson, nonhomogeneous Poisson, and birth-death processes, respectively, which can be regarded as typical scale-free networks and utilized to simulate practical networks. The degree distribution and exponent are analyzed and explained mathematically by different approaches. In simulations, we display the modeling process, the degree distribution of empirical data obtained by statistical methods, and the reliability of the proposed networks; the results show that our models exhibit the features of typical complex networks. Finally, some future challenges for complex systems are discussed.
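A minimal sketch of the simplest of the three mechanisms, under our own assumptions rather than the paper's exact formulation: vertices arrive as events of a homogeneous Poisson process, and each new vertex attaches to an existing vertex chosen with probability proportional to degree (preferential attachment), which is the standard route to a scale-free degree distribution.

```python
import random
from collections import Counter

random.seed(1)

def grow(rate=1.0, t_max=3000.0):
    """Grow a network whose vertices arrive via a homogeneous Poisson
    process of the given rate, each attaching preferentially by degree.
    Returns the list of vertex degrees."""
    degrees = [1, 1]      # start from a single edge
    stubs = [0, 1]        # flat list of edge endpoints: uniform sampling
                          # from it is proportional to degree
    t = 0.0
    while True:
        t += random.expovariate(rate)    # Poisson inter-arrival time
        if t > t_max:
            break
        target = random.choice(stubs)    # preferential attachment
        new = len(degrees)
        degrees.append(1)
        degrees[target] += 1
        stubs += [target, new]
    return degrees

deg = grow()
hist = Counter(deg)
# Heavy tail: a few hubs, many degree-1 leaves.
print("vertices:", len(deg), "max degree:", max(deg),
      "share of degree-1:", round(hist[1] / len(deg), 2))
```

For this tree-growth variant the expected share of degree-1 vertices is 2/3 and the maximum degree grows much faster than in any homogeneous random graph, the signature of a scale-free tail.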
NASA Astrophysics Data System (ADS)
Wu, Huijuan; Qian, Ya; Zhang, Wei; Tang, Chenghao
2017-12-01
The high sensitivity of a distributed optical-fiber vibration sensing (DOVS) system based on phase-sensitive optical time-domain reflectometry (Φ-OTDR) also brings high nuisance alarm rates (NARs) in real applications. In this paper, the feature extraction methods of wavelet decomposition (WD) and wavelet packet decomposition (WPD) are comparatively studied for three typical field-testing signals, and an artificial neural network (ANN) is built for event identification. The comparison shows that WPD performs slightly better than WD for DOVS signal analysis and identification in oil pipeline safety monitoring. The identification rate can be improved to 94.4%, and the nuisance alarm rate can be reduced to as low as 5.6% for the identification network with wavelet packet energy distribution features.
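The energy-feature step can be sketched as follows. This is a self-contained toy version, not the paper's pipeline: it uses the Haar basis (the paper's wavelet choice is not stated here) and skips the ANN, showing only how a signal is turned into a wavelet-packet energy feature vector.

```python
import numpy as np

def haar_step(x):
    """One Haar analysis step: (approximation, detail), each half length."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def wpd_energy_features(signal, depth=3):
    """Relative energy of each terminal wavelet-packet node (Haar basis).
    Returns 2**depth values summing to 1: the feature vector that would
    be fed to a classifier such as the paper's ANN."""
    nodes = [np.asarray(signal, dtype=float)]
    for _ in range(depth):
        nodes = [half for node in nodes for half in haar_step(node)]
    energies = np.array([np.sum(n ** 2) for n in nodes])
    return energies / energies.sum()

# A low-frequency and a high-frequency test signal concentrate their
# energy in different packet nodes, which is what makes these features
# discriminative for event identification.
t = np.arange(1024)
low = np.sin(2 * np.pi * t / 256)
high = np.sin(2 * np.pi * t / 4)
print("low-freq features :", np.round(wpd_energy_features(low), 2))
print("high-freq features:", np.round(wpd_energy_features(high), 2))
```

Because each Haar step is orthonormal, total energy is preserved, so the normalized vector is a proper distribution of signal energy over frequency sub-bands.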
A Distributed Prognostic Health Management Architecture
NASA Technical Reports Server (NTRS)
Bhaskar, Saha; Saha, Sankalita; Goebel, Kai
2009-01-01
This paper introduces a generic distributed prognostic health management (PHM) architecture with specific application to the electrical power systems domain. Current state-of-the-art PHM systems are mostly centralized in nature, where all the processing is reliant on a single processor. This can lead to loss of functionality in case of a crash of the central processor or monitor. Furthermore, with increases in the volume of sensor data and the complexity of algorithms, traditional centralized systems become unsuitable for successful deployment, and efficient distributed architectures are required. A distributed architecture, though, is not effective unless there is an algorithmic framework to take advantage of its unique abilities. The health management paradigm envisaged here incorporates a heterogeneous set of system components monitored by a varied suite of sensors, and a particle filtering (PF) framework that has the power and flexibility to adapt to different diagnostic and prognostic needs. Both the diagnostic and prognostic tasks are formulated as particle filtering problems in order to explicitly represent and manage uncertainties; however, the complexity of the prognostic routine typically exceeds the computational power of a single computational element (CE). Individual CEs run diagnostic routines until the system variable being monitored crosses a nominal threshold, at which point the CE coordinates with other networked CEs to run the prognostic routine in a distributed fashion. Implementation results from a network of distributed embedded devices monitoring a prototypical aircraft electrical power system are presented, where the CEs are Sun Microsystems Small Programmable Object Technology (SPOT) devices.
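The diagnostic step described above can be sketched with a minimal particle filter. This is an illustrative toy, not the NASA implementation: it tracks a slowly drifting "health" variable from noisy measurements and flags when the estimate crosses a nominal threshold, the cue for handing off to the more expensive distributed prognostic routine. All numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(7)

N = 1000                                   # particles
particles = rng.normal(0.0, 0.05, N)       # initial belief about the state
weights = np.full(N, 1.0 / N)
drift, proc_sig, meas_sig, threshold = 0.02, 0.01, 0.1, 0.5

true_state, alarm_step = 0.0, None
for step in range(50):
    true_state += drift
    z = true_state + rng.normal(0, meas_sig)          # noisy sensor reading
    particles += drift + rng.normal(0, proc_sig, N)   # propagate particles
    weights *= np.exp(-0.5 * ((z - particles) / meas_sig) ** 2)
    weights /= weights.sum()                          # measurement update
    # systematic resampling keeps the particle set from degenerating
    idx = np.searchsorted(np.cumsum(weights),
                          (np.arange(N) + rng.random()) / N)
    idx = np.minimum(idx, N - 1)                      # guard fp edge case
    particles, weights = particles[idx], np.full(N, 1.0 / N)
    if alarm_step is None and particles.mean() > threshold:
        alarm_step = step                             # hand off to prognosis
print("threshold crossed at step:", alarm_step)
```

The same filter, re-weighted with a degradation model instead of a sensor model, is the basic building block that the paper distributes across networked CEs.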
Wu, Hao
2018-05-01
In structural equation modelling (SEM), a robust adjustment to the test statistic or to its reference distribution is needed when its null distribution deviates from a χ² distribution, which usually arises when data do not follow a multivariate normal distribution. Unfortunately, existing studies on this issue typically focus on only a few methods and neglect the majority of alternative methods in statistics. Existing simulation studies typically consider only non-normal distributions of data that either satisfy asymptotic robustness or lead to an asymptotic scaled χ² distribution. In this work we conduct a comprehensive study that involves both typical methods in SEM and less well-known methods from the statistics literature. We also propose the use of several novel non-normal data distributions that are qualitatively different from the non-normal distributions widely used in existing studies. We found that several under-studied methods give the best performance under specific conditions, but the Satorra-Bentler method remains the most viable method for most situations. © 2017 The British Psychological Society.
Dynamical Aspects of Quasifission Process in Heavy-Ion Reactions
NASA Astrophysics Data System (ADS)
Knyazheva, G. N.; Itkis, I. M.; Kozulin, E. M.
2015-06-01
The study of mass-energy distributions of binary fragments obtained in the reactions of 36S, 48Ca, 58Fe, and 64Ni ions with 232Th, 238U, 244Pu, and 248Cm targets at energies below and above the Coulomb barrier is presented. For all the reactions, the main component of the distributions corresponds to asymmetric mass division, typical of the asymmetric quasifission process. To describe the quasifission mass distribution, a simple method is proposed, based on the driving potential of the system and a time-dependent mass drift. This procedure allows the quasifission (QF) time scale to be estimated from the measured mass distributions. It has been found that the QF time decreases exponentially as the reaction Coulomb factor Z1Z2 increases.
Residue distribution and biomass recovery following biomass harvest of plantation pine
Johnny Grace III; John Klepac; S. Taylor; Dana Mitchell
2016-01-01
Forest biomass is anticipated to play a significant role in addressing an alternative energy supply. However, the efficiencies of current state-of-the-art recovery systems operating in forest biomass harvests are still relatively unknown. Forest biomass harvest stands typically have higher stand densities and smaller diameter trees than conventional stands which may...
77 FR 17479 - Star Pipe Products, Ltd.; Analysis of Proposed Consent Order To Aid Public Comment
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-26
... largest sellers of DIPF in the United States are Star, McWane, Inc. (``McWane''), and Sigma Corporation (``Sigma''). DIPF are used in municipal water distribution systems to change pipe diameter or pipeline... projects. The end users of DIPF are typically municipal and regional water authorities. DIPF prices are...
Rms-flux relation and fast optical variability simulations of the nova-like system MV Lyr
NASA Astrophysics Data System (ADS)
Dobrotka, A.; Mineshige, S.; Ness, J.-U.
2015-03-01
The stochastic variability (flickering) of the nova-like system (subclass of cataclysmic variable) MV Lyr yields a complicated power density spectrum with four break frequencies. Scaringi et al. analysed high-cadence Kepler data of MV Lyr, taken almost continuously over 600 d, giving the unique opportunity to study multicomponent power density spectra (PDS) over a wide frequency range. We modelled this variability with our statistical model based on disc angular momentum transport via discrete turbulent bodies with an exponential distribution of the dimension scale. Two different models were used, a full disc (developed from the white dwarf to the outer radius of ˜1010 cm) and a radially thin disc (a ring at a distance of ˜1010 cm from the white dwarf) that imitates an outer disc rim. We succeed in explaining the two lowest observed break frequencies assuming typical values for a disc radius of 0.5 and 0.9 times the primary Roche lobe and an α parameter of 0.1-0.4. The highest observed break frequency was also modelled, but with a rather small accretion disc with a radius of 0.3 times the primary Roche lobe and a high α value of 0.9, consistent with previous findings by Scaringi. Furthermore, the simulated light curves exhibit the typical linear rms-flux relation and the typical log-normal flux distribution. Since the turbulent process generates fluctuations in mass accretion that propagate through the disc, this supports the general understanding that the typical rms-flux relation is mainly generated by these fluctuations. In general, a higher rms is generated by a larger number of superposed flares, which is compatible with a higher mass accretion rate expressed by a larger flux.
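A compact numerical illustration of the rms-flux phenomenology invoked above (a generic propagating-fluctuations toy in the spirit of Uttley & McHardy, not the authors' turbulent-body model): when fluctuations combine multiplicatively, the flux is the exponential of a red-noise process, hence log-normally distributed, and segments of the light curve with higher mean flux show proportionally higher absolute rms.

```python
import numpy as np

rng = np.random.default_rng(3)

# Red (AR(1)) noise in the logarithm of the flux; exponentiating it models
# the multiplicative coupling of propagating accretion fluctuations.
n = 2 ** 16
eps = rng.normal(0.0, 0.05, n)
g = np.zeros(n)
redness = 0.995
for i in range(1, n):
    g[i] = redness * g[i - 1] + eps[i]
flux = np.exp(g)                        # multiplicative => log-normal flux

# Split the light curve into segments and compare each segment's mean
# flux with its absolute rms: a linear rms-flux relation shows up as a
# strong positive correlation.
seg = flux.reshape(256, 256)
means, rmss = seg.mean(axis=1), seg.std(axis=1)
corr = np.corrcoef(means, rmss)[0, 1]
print(f"correlation between segment mean flux and rms: {corr:.2f}")
```

An additive (non-propagating) model with the same power spectrum would show no such correlation, which is why the observed linear rms-flux relation is taken as evidence for propagating multiplicative fluctuations.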
A technique for designing active control systems for astronomical telescope mirrors
NASA Technical Reports Server (NTRS)
Howell, W. E.; Creedon, J. F.
1973-01-01
The problem of designing a control system to achieve and maintain the required surface accuracy of the primary mirror of a large space telescope was considered. Control over the mirror surface is obtained through the application of a corrective force distribution by actuators located on the rear surface of the mirror. The design procedure is an extension of a modal control technique developed for distributed parameter plants with known eigenfunctions to include plants whose eigenfunctions must be approximated by numerical techniques. Instructions are given for constructing the mathematical model of the system, and a design procedure is developed for use with typical numerical data in selecting the number and location of the actuators. Examples of actuator patterns and their effect on various errors are given.
Toward a process-level view of distributed healthcare tasks: Medication management as a case study.
Werner, Nicole E; Malkana, Seema; Gurses, Ayse P; Leff, Bruce; Arbaje, Alicia I
2017-11-01
We aim to highlight the importance of using a process-level view in analyzing distributed healthcare tasks through a case study analysis of medication management (MM). MM during older adults' hospital-to-skilled-home-healthcare (SHHC) transitions is a healthcare process with tasks distributed across people, organizations, and time. MM has typically been studied at the task level, but a process-level view is needed to fully understand and improve MM during transitions. A process-level view allows for a broader investigation of how tasks are distributed throughout the work system through an investigation of interactions and the resultant emergent properties. We studied MM during older adults' hospital-to-SHHC transitions through interviews and observations with 60 older adults, their 33 caregivers, and 79 SHHC providers at 5 sites associated with 3 SHHC agencies. Study findings identified key cross-system characteristics not observable at the task level: (1) identification of emergent properties (e.g., role ambiguity, loosely coupled teams performing MM) and associated barriers; and (2) examination of barrier propagation across system boundaries. Findings highlight the importance of a process-level view of healthcare delivery occurring across system boundaries. Copyright © 2017 Elsevier Ltd. All rights reserved.
Testing typicality in multiverse cosmology
NASA Astrophysics Data System (ADS)
Azhar, Feraz
2015-05-01
In extracting predictions from theories that describe a multiverse, we face the difficulty that we must assess probability distributions over possible observations prescribed not just by an underlying theory, but by a theory together with a conditionalization scheme that allows for (anthropic) selection effects. This means we usually need to compare distributions that are consistent with a broad range of possible observations with actual experimental data. One controversial means of making this comparison is by invoking the "principle of mediocrity": that is, the principle that we are typical of the reference class implicit in the conjunction of the theory and the conditionalization scheme. In this paper, we quantitatively assess the principle of mediocrity in a range of cosmological settings, employing "xerographic distributions" to impose a variety of assumptions regarding typicality. We find that for a fixed theory, the assumption that we are typical gives rise to higher likelihoods for our observations. If, however, one allows both the underlying theory and the assumption of typicality to vary, then the assumption of typicality does not always provide the highest likelihoods. Interpreted from a Bayesian perspective, these results support the claim that when one has the freedom to consider different combinations of theories and xerographic distributions (or different "frameworks"), one should favor the framework that has the highest posterior probability; and then from this framework one can infer, in particular, how typical we are. In this way, the invocation of the principle of mediocrity is more questionable than has been recently claimed.
Detecting distant homologies on protozoans metabolic pathways using scientific workflows.
da Cruz, Sérgio Manuel Serra; Batista, Vanessa; Silva, Edno; Tosta, Frederico; Vilela, Clarissa; Cuadrat, Rafael; Tschoeke, Diogo; Dávila, Alberto M R; Campos, Maria Luiza Machado; Mattoso, Marta
2010-01-01
Bioinformatics experiments are typically composed of programs in pipelines manipulating an enormous quantity of data. An interesting approach for managing those experiments is through workflow management systems (WfMS). In this work we discuss WfMS features to support genome homology workflows and present some relevant issues for typical genomic experiments. Our evaluation used the Kepler WfMS to manage a real genomic pipeline, named OrthoSearch, originally defined as a Perl script. We show a case study detecting distant homologies in trypanosomatid metabolic pathways. Our results reinforce the benefits of WfMS over script languages and point out challenges to WfMS in distributed environments.
Trophallaxis-inspired model for distributed transport between randomly interacting agents
NASA Astrophysics Data System (ADS)
Gräwer, Johannes; Ronellenfitsch, Henrik; Mazza, Marco G.; Katifori, Eleni
2017-08-01
Trophallaxis, the regurgitation and mouth-to-mouth transfer of liquid food between members of eusocial insect societies, is an important process that allows the fast and efficient dissemination of food in the colony. Trophallactic systems are typically treated as a network of agent interactions. This approach, though valuable, does not easily lend itself to analytic predictions. In this work we consider a simple trophallactic system of randomly interacting agents with finite carrying capacity, and calculate analytically and via a series of simulations the global food intake rate for the whole colony, as well as observables describing how uniformly the food is distributed within the nest. Our model and predictions provide a useful benchmark to assess to what level the observed food uptake rates and efficiency in food distribution are due to stochastic effects or to specific trophallactic strategies by the ant colony. Our work also serves as a stepping stone to describing the collective properties of more complex trophallactic systems, such as those including division of labor between foragers and workers.
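A toy version of the random-interaction transport idea (our simplification, not the authors' exact formulation): a single fed agent meets random partners, and each meeting levels the pair's food loads subject to a finite carrying capacity. Repeated random encounters drive the colony toward a uniform food distribution, the purely stochastic baseline against which real trophallactic strategies can be compared.

```python
import random
import statistics

random.seed(11)

N, capacity = 100, 1.0
food = [0.0] * N
food[0] = capacity            # the returning forager is full

for _ in range(20_000):
    i, j = random.sample(range(N), 2)          # random pairwise encounter
    donor, recv = (i, j) if food[i] >= food[j] else (j, i)
    # level the pair's loads, but never exceed the receiver's capacity
    transfer = min((food[donor] - food[recv]) / 2, capacity - food[recv])
    food[donor] -= transfer
    food[recv] += transfer

uniformity = statistics.pstdev(food)           # 0 means perfectly uniform
print(f"total food: {sum(food):.3f}  std across agents: {uniformity:.5f}")
```

Food is conserved by construction, and with about 200 encounters per agent the standard deviation across agents collapses from ~0.1 to essentially zero, i.e., near-perfect uniformity from random interactions alone.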
Improved multilayer insulation applications. [spacecraft thermal control
NASA Technical Reports Server (NTRS)
Mikk, G.
1982-01-01
Multilayer insulation blankets used for the attenuation of radiant heat transfer in spacecraft are addressed. Typically, blanket effectiveness is degraded by heat leaks in the joints between adjacent blankets and by heat leaks caused by the blanket fastener system. An approach to blanket design based upon modular sub-blankets with distributed seams and upon an associated fastener system that practically eliminates the through-the-blanket conductive path is described. Test results confirming the approach are discussed. The specific case of the thermal control system for the optical assembly of the Space Telescope is examined.
Design and Verification of a Distributed Communication Protocol
NASA Technical Reports Server (NTRS)
Munoz, Cesar A.; Goodloe, Alwyn E.
2009-01-01
The safety of remotely operated vehicles depends on the correctness of the distributed protocol that facilitates the communication between the vehicle and the operator. A failure in this communication can result in catastrophic loss of the vehicle. To complicate matters, the communication system may be required to satisfy several, possibly conflicting, requirements. The design of protocols is typically an informal process based on successive iterations of a prototype implementation. Yet distributed protocols are notoriously difficult to get correct using such informal techniques. We present a formal specification of the design of a distributed protocol intended for use in a remotely operated vehicle, which is built from the composition of several simpler protocols. We demonstrate proof strategies that allow us to prove properties of each component protocol individually while ensuring that the property is preserved in the composition forming the entire system. Given that designs are likely to evolve as additional requirements emerge, we show how we have automated most of the repetitive proof steps to enable verification of rapidly changing designs.
NASA Astrophysics Data System (ADS)
Lisnyak, M.; Pipa, A. V.; Gorchakov, S.; Iseni, S.; Franke, St.; Khapour, A.; Methling, R.; Weltmann, K.-D.
2015-09-01
Spectroscopic investigations of free-burning vacuum arcs in diffuse mode with CuCr electrodes are presented. The experimental conditions of the investigated arc correspond to the typical system for vacuum circuit breakers. Spectra of six species, Cu I, Cu II, Cu III, Cr I, Cr II, and Cr III, have been analyzed in the wavelength range 350-810 nm. The axial intensity distributions were found to be strongly dependent on the ionization stage of the radiating species. Emission distributions of Cr II and Cu II can be distinguished, as well as the distributions of Cr III and Cu III. Information on the axial distribution was used to identify the spectra and to resolve overlapping spectral lines. The overview spectra and some spectral windows recorded with high resolution are presented. Analysis of axial distributions of emitted light originating from different ionization states is presented and discussed.
The Wigner distribution and 2D classical maps
NASA Astrophysics Data System (ADS)
Sakhr, Jamal
2017-07-01
The Wigner spacing distribution has a long and illustrious history in nuclear physics and in the quantum mechanics of classically chaotic systems. In this paper, a novel connection between the Wigner distribution and 2D classical mechanics is introduced. Based on a well-known correspondence between the Wigner distribution and the 2D Poisson point process, the hypothesis that typical pseudo-trajectories of a 2D ergodic map have a Wignerian nearest-neighbor spacing distribution (NNSD) is put forward and numerically tested. The standard Euclidean metric is used to compute the interpoint spacings. In all test cases, the hypothesis is upheld, and the range of validity of the hypothesis appears to be robust in the sense that it is not affected by the presence or absence of: (i) mixing; (ii) time-reversal symmetry; and/or (iii) dissipation.
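The hypothesis can be checked numerically along the following lines (our own test, using the Arnold cat map as a standard ergodic 2D map; the paper's specific maps may differ). Normalized to unit mean spacing, a 2D Poisson point process has nearest-neighbor spacings following the Wigner surmise P(s) = (π/2) s exp(−π s²/4), whose variance is 4/π − 1 ≈ 0.273, so the empirical spacing variance is a convenient one-number diagnostic.

```python
import numpy as np

def cat_map_trajectory(n, x0=0.1234, y0=0.5678):
    """Pseudo-trajectory of the Arnold cat map on the unit square."""
    pts = np.empty((n, 2))
    x, y = x0, y0
    for i in range(n):
        x, y = (x + y) % 1.0, (x + 2 * y) % 1.0
        pts[i] = x, y
    return pts

pts = cat_map_trajectory(2000)

# Brute-force nearest-neighbor distances in the standard Euclidean metric.
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
np.fill_diagonal(d, np.inf)
s = d.min(axis=1)
s /= s.mean()                       # normalize mean spacing to 1

print(f"empirical spacing variance: {s.var():.3f} "
      f"(Wigner surmise: {4 / np.pi - 1:.3f})")
```

Agreement of the empirical variance (and, more stringently, the full histogram of s) with the Wigner value is the kind of evidence the abstract reports for ergodic maps, with and without time-reversal symmetry or dissipation.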
Thermal residual stress evaluation based on phase-shift lateral shearing interferometry
NASA Astrophysics Data System (ADS)
Dai, Xiangjun; Yun, Hai; Shao, Xinxing; Wang, Yanxia; Zhang, Donghuan; Yang, Fujun; He, Xiaoyuan
2018-06-01
A phase-shift lateral shearing interferometry system is proposed to evaluate the thermal residual stress distribution in a transparent specimen. The phase-shift interferograms were generated by moving a parallel plane plate. By analyzing the fringes deflected by deformation and refractive-index change, the stress distribution can be obtained. To verify the validity of the proposed method, an experiment was designed to determine the thermal residual stresses of a transparent PMMA plate subjected to the flame of a lighter. The sum of the in-plane stress distribution was demonstrated. The experimental data were compared with values measured by the digital gradient sensing method. Comparison of the results reveals the effectiveness and feasibility of the proposed method.
Exact Extremal Statistics in the Classical 1D Coulomb Gas
NASA Astrophysics Data System (ADS)
Dhar, Abhishek; Kundu, Anupam; Majumdar, Satya N.; Sabhapandit, Sanjib; Schehr, Grégory
2017-08-01
We consider a one-dimensional classical Coulomb gas of N like charges in a harmonic potential—also known as the one-dimensional one-component plasma. We compute, analytically, the probability distribution of the position xmax of the rightmost charge in the limit of large N. We show that the typical fluctuations of xmax around its mean are described by a nontrivial scaling function, with asymmetric tails. This distribution is different from the Tracy-Widom distribution of xmax for Dyson's log gas. We also compute the large deviation functions of xmax explicitly and show that the system exhibits a third-order phase transition, as in the log gas. Our theoretical predictions are verified numerically.
Lu, Ning; Ge, Shemin
1996-01-01
By including the constant flow of heat and fluid in the horizontal direction, we develop an analytical solution for the vertical temperature distribution within the semiconfining layer of a typical aquifer system. The solution is an extension of the previous one-dimensional theory by Bredehoeft and Papadopulos [1965]. It provides a quantitative tool for analyzing the uncertainty of the horizontal heat and fluid flow. The analytical results demonstrate that horizontal flow of heat and fluid, if at values much smaller than those of the vertical, has a negligible effect on the vertical temperature distribution but becomes significant when it is comparable to the vertical.
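The baseline one-dimensional Bredehoeft-Papadopulos (1965) solution that the paper extends can be sketched as follows. This is the classic purely vertical profile, not the authors' extended solution with horizontal flow; the layer geometry, boundary temperatures, and Peclet values below are illustrative assumptions.

```python
import math

def temperature(z, L, T0, TL, beta):
    """Steady vertical temperature T(z) through a confining layer of
    thickness L with boundary temperatures T0 (top, z=0) and TL (bottom,
    z=L), under simultaneous vertical conduction and advection.  beta is
    the thermal Peclet number; in this sign convention beta > 0
    corresponds to downward fluid flow."""
    if beta == 0:                      # pure conduction: linear profile
        return T0 + (TL - T0) * z / L
    return T0 + (TL - T0) * math.expm1(beta * z / L) / math.expm1(beta)

L, T0, TL = 100.0, 10.0, 15.0          # m, degC, degC (illustrative)
for beta in (-2.0, 0.0, 2.0):
    mid = temperature(L / 2, L, T0, TL, beta)
    print(f"beta={beta:+.0f}: T at mid-depth = {mid:.2f} degC")
```

Fitting the measured profile's curvature to this family of curves yields the Peclet number, and hence the vertical fluid flux; the paper's contribution is to quantify when neglected horizontal fluxes bias that inference.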
Celesti, Antonio; Fazio, Maria; Romano, Agata; Bramanti, Alessia; Bramanti, Placido; Villari, Massimo
2018-05-01
The Open Archival Information System (OAIS) is a reference model for organizing people and resources in a system, and it is already adopted in care centers and medical systems to efficiently manage clinical data, medical personnel, and patients. Archival storage systems are typically implemented using traditional relational database systems, but relation-oriented technology strongly limits the efficiency of managing huge amounts of patients' clinical data, especially in emerging cloud-based systems, which are distributed. In this paper, we present an OAIS healthcare architecture for managing a huge amount of HL7 clinical documents in a scalable way. Specifically, it is based on a NoSQL column-oriented Database Management System deployed in the cloud, thus benefiting from big tables and wide rows available over a virtual distributed infrastructure. We developed a prototype of the proposed architecture at the IRCCS, and we evaluated its efficiency in a real case study.
Lang, T; Harth, A; Matyschok, J; Binhammer, T; Schultze, M; Morgner, U
2013-01-14
A 2 + 1 dimensional nonlinear pulse propagation model is presented, illustrating the weighting of different effects for the parametric amplification of ultra-broadband spectra in different regimes of energy scaling. Typical features in the distribution of intensity and phase of state-of-the-art OPA-systems can be understood by cascaded spatial and temporal effects.
Distributed File System Utilities to Manage Large Datasets, Version 0.5
DOE Office of Scientific and Technical Information (OSTI.GOV)
2014-05-21
FileUtils provides a suite of tools to manage large datasets typically created by large parallel MPI applications. They are written in C and use standard POSIX I/O calls. The current suite consists of tools to copy, compare, remove, and list. The tools provide dramatic speedup over existing Linux tools, which often run as a single process.
Vodovatov, A V; Balonov, M I; Golikov, V Yu; Shatsky, I G; Chipiga, L A; Bernhardsson, C
2017-04-01
In 2009-2014, dose surveys aimed at collecting adult patient data and the parameters of the most common radiographic examinations were performed in six Russian regions. Typical patient doses were estimated for the selected examinations in terms of both entrance surface dose and effective dose. The 75th percentiles of the typical patient effective dose distributions were proposed as preliminary regional diagnostic reference levels (DRLs) for radiography. Differences between the 75th percentiles of regional typical patient dose distributions did not exceed 30-50% for examinations with standardized clinical protocols (skull, chest, and thoracic spine) and a factor of 1.5 for other examinations. Two different approaches for establishing national DRLs were evaluated: as the 75th percentile of a pooled regional sample of typical patient doses (pooled method), and as the median of the 75th percentiles of the regional typical patient dose distributions (median method). Differences between the pooled and median methods for effective dose did not exceed 20%. It was proposed to establish Russian national DRLs in effective dose using the pooled method. In addition, local authorities were granted the opportunity to establish regional DRLs if local radiological practice and typical patient dose distributions differ significantly. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
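The two aggregation methods compared in the study can be sketched directly. The regional dose samples below are synthetic (invented values, arbitrary units): the "pooled" method takes the 75th percentile of all typical doses pooled across regions, while the "median" method takes the median of the per-region 75th percentiles.

```python
import random
import statistics

random.seed(5)

# Six synthetic regional surveys of "typical patient dose" values
# (log-normally distributed, as dose data often roughly are).
regions = [
    [random.lognormvariate(0.0, 0.4) for _ in range(80)]
    for _ in range(6)
]

def p75(sample):
    # statistics.quantiles with n=4 returns the quartiles [Q1, Q2, Q3];
    # index 2 is the 75th percentile.
    return statistics.quantiles(sample, n=4)[2]

pooled_drl = p75([dose for region in regions for dose in region])
median_drl = statistics.median(p75(region) for region in regions)
print(f"pooled DRL: {pooled_drl:.2f}  median DRL: {median_drl:.2f}")
```

When regional dose distributions are similar, the two estimators agree closely, consistent with the study's finding that pooled and median methods differed by no more than about 20%; they diverge only when one region's practice is an outlier.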
Neuroinflammation is increased in the parietal cortex of atypical Alzheimer's disease.
Boon, Baayla D C; Hoozemans, Jeroen J M; Lopuhaä, Boaz; Eigenhuis, Kristel N; Scheltens, Philip; Kamphorst, Wouter; Rozemuller, Annemieke J M; Bouwman, Femke H
2018-05-29
While most patients with Alzheimer's disease (AD) present with memory complaints, 30% of patients with early disease onset present with non-amnestic symptoms. This atypical presentation is thought to be caused by a different spreading of neurofibrillary tangles (NFT) than originally proposed by Braak and Braak. Recent studies suggest a prominent role for neuroinflammation in the spreading of tau pathology. We aimed to explore whether an atypical spreading of pathology in AD is associated with an atypical distribution of neuroinflammation. Typical and atypical AD cases were selected based on both NFT distribution and amnestic or non-amnestic clinical presentation. Immunohistochemistry was performed on the temporal pole and superior parietal lobe of 10 typical and 9 atypical AD cases. The presence of amyloid-beta (N-terminal; IC16), pTau (AT8), reactive astrocytes (GFAP), microglia (Iba1, CD68, and HLA-DP/DQ/DR), and complement factors (C1q, C3d, C4b, and C5b-9) was quantified by image analysis. Differences in lobar distribution patterns of immunoreactivity were statistically assessed using a linear mixed model. We found a temporal dominant distribution for amyloid-beta, GFAP, and Iba1 in both typical and atypical AD. Distribution of pTau, CD68, HLA-DP/DQ/DR, C3d, and C4b differed between AD variants. Typical AD cases showed a temporal dominant distribution of these markers, whereas atypical AD cases showed a parietal dominant distribution. Interestingly, when quantifying the number of amyloid-beta plaques instead of the stained surface area, atypical AD cases differed in distribution pattern from typical AD cases. Remarkably, plaque morphology and the localization of neuroinflammation within the plaques were different between the two phenotypes. Our data show a different localization of neuroinflammatory markers and amyloid-beta plaques between AD phenotypes.
In addition, these markers reflect the atypical distribution of tau pathology in atypical AD, suggesting that neuroinflammation might be a crucial link between amyloid-beta deposits, tau pathology, and clinical symptoms.
Active Tailoring of Lift Distribution to Enhance Cruise Performance
NASA Technical Reports Server (NTRS)
Flamm, Jeffrey D. (Technical Monitor); Pfeiffer, Neal J.; Christians, Joel G.
2005-01-01
During Phase I of this project, Raytheon Aircraft Company (RAC) has analytically and experimentally evaluated key components of a system that could be implemented for active tailoring of wing lift distribution using low-drag, trailing-edge modifications. Simple systems such as those studied by RAC could be used to enhance the cruise performance of a business jet configuration over a range of typical flight conditions. The trailing-edge modifications focus on simple, deployable mechanisms comprised of extendable small flap panels over portions of the span that could be used to subtly but positively optimize the lift and drag characteristics. The report includes results from low speed wind tunnel testing of the trailing-edge devices, descriptions of potential mechanisms for automation, and an assessment of the technology.
NASA Astrophysics Data System (ADS)
Wang, Qian; Lu, Guangqi; Li, Xiaoyu; Zhang, Yichi; Yun, Zejian; Bian, Di
2018-01-01
To take full advantage of an energy storage system (ESS), both the service life of the distributed energy storage system (DESS) and the load should be considered when establishing the optimization model. To reduce the complexity of DESS load shifting in the solution procedure, a loss coefficient and an equal-capacity-ratio distribution principle were adopted in this paper. First, the model was established considering constraints on the cycles, depth, and power of ESS charge-discharge, as well as the typical daily load curves. Then, a dynamic programming method was used to solve the model in real time, in which the power difference Δs, the real-time revised energy storage capacity Sk, and the permitted error on charge-discharge depth were introduced to optimize the solution process. The simulation results show that charge-discharge of the energy storage system was not executed when load shifting would not reduce the load variance, and that the service life of the ESS was thereby extended.
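The load-shifting objective described above, flattening the daily curve subject to power and capacity limits, can be illustrated with a simple greedy sketch. The paper itself uses a dynamic-programming formulation; all numbers below are hypothetical.

```python
import statistics

# Greedy peak-shaving: discharge the store at peaks, recharge in valleys,
# subject to power and capacity limits. All values are illustrative.
load = [30, 28, 25, 27, 40, 55, 60, 58, 45, 35, 32, 30]  # daily load curve, kW
mean = sum(load) / len(load)
p_max, cap, soc = 10.0, 40.0, 20.0  # power limit (kW), capacity (kWh), initial state

shifted = []
for l in load:
    if l > mean and soc > 0:                  # discharge at peaks
        p = min(p_max, l - mean, soc)
        soc -= p
        shifted.append(l - p)
    elif l < mean and soc < cap:              # charge in valleys
        p = min(p_max, mean - l, cap - soc)
        soc += p
        shifted.append(l + p)
    else:
        shifted.append(l)

# Load shifting should reduce the variance of the net load curve
print(statistics.pvariance(shifted) < statistics.pvariance(load))  # → True
```

A dynamic-programming solver would additionally weigh cycle count and depth of discharge against the variance reduction, which is what ties the objective to the service life of the ESS.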
2014-08-01
1 Common hydrogeomorphic units that form in stream systems in response to spatially and temporally varying hydrologic and geomorphic processes... geomorphic , and vegetative indica- tors for use in OHWM delineations in arid streams and categorized their typical landscape positions with respect...the presence of a bed and banks. Hydrogeomorphic units are distinct macro- scale geomorphic features formed within stream systems in response to
2010-06-01
essential to fostering the loyalty , dedication and pride that enables the diverse student population within your department to be the very best systems...that I have enjoyed in my short time with you. Without you in my life, to share my success, I could not have ever achieved the level of satisfaction ...used. A typical wall mounted light switch is a single pole single throw switch. A common industrial motor start switch is a three pole single throw
NASA Astrophysics Data System (ADS)
Kelleher, Christa; McGlynn, Brian; Wagener, Thorsten
2017-07-01
Distributed catchment models are widely used tools for predicting hydrologic behavior. While distributed models require many parameters to describe a system, they are expected to simulate behavior that is more consistent with observed processes. However, obtaining a single set of acceptable parameters can be problematic, as parameter equifinality often results in several behavioral
sets that fit observations (typically streamflow). In this study, we investigate the extent to which equifinality impacts a typical distributed modeling application. We outline a hierarchical approach to reduce the number of behavioral sets based on regional, observation-driven, and expert-knowledge-based constraints. For our application, we explore how each of these constraint classes reduced the number of behavioral
parameter sets and altered distributions of spatiotemporal simulations, simulating a well-studied headwater catchment, Stringer Creek, Montana, using the distributed hydrology-soil-vegetation model (DHSVM). As a demonstrative exercise, we investigated model performance across 10 000 parameter sets. Constraints on regional signatures, the hydrograph, and two internal measurements of snow water equivalent time series reduced the number of behavioral parameter sets but still left a small number with similar goodness of fit. This subset was ultimately further reduced by incorporating pattern expectations of groundwater table depth across the catchment. Our results suggest that utilizing a hierarchical approach based on regional datasets, observations, and expert knowledge to identify behavioral parameter sets can reduce equifinality and bolster more careful application and simulation of spatiotemporal processes via distributed modeling at the catchment scale.
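The hierarchical winnowing described above can be sketched as successive boolean filters over a parameter ensemble. All metric names and thresholds below are hypothetical stand-ins, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ensemble: one simulated diagnostic per parameter set
# (regional signature, hydrograph fit, snow error, water-table pattern check).
n_sets = 10_000
metrics = {
    "runoff_ratio": rng.uniform(0.1, 0.9, n_sets),
    "nse": rng.uniform(-1.0, 1.0, n_sets),          # hydrograph goodness of fit
    "swe_err": rng.uniform(0.0, 2.0, n_sets),       # snow water equivalent error
    "wt_pattern_ok": rng.random(n_sets) < 0.2,      # expert-expected GW pattern
}

behavioral = np.ones(n_sets, dtype=bool)
# 1) regional, signature-based constraint
behavioral &= (metrics["runoff_ratio"] > 0.3) & (metrics["runoff_ratio"] < 0.6)
# 2) observation-driven constraints (streamflow, then internal SWE series)
behavioral &= metrics["nse"] > 0.5
behavioral &= metrics["swe_err"] < 0.5
# 3) expert-knowledge constraint (groundwater-table depth pattern)
behavioral &= metrics["wt_pattern_ok"]

print(behavioral.sum())  # behavioral sets remaining after all constraint classes
```

Each constraint class removes a tranche of the ensemble, mirroring how the study narrows thousands of parameter sets down to a small behavioral subset.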
A system-approach to the elastohydrodynamic lubrication point-contact problem
NASA Technical Reports Server (NTRS)
Lim, Sang Gyu; Brewe, David E.
1991-01-01
The classical EHL (elastohydrodynamic lubrication) point contact problem is solved using a new system-approach, similar to that introduced by Houpert and Hamrock for the line-contact problem. Introducing a body-fitted coordinate system, the troublesome free-boundary is transformed to a fixed domain. The Newton-Raphson method can then be used to determine the pressure distribution and the cavitation boundary subject to the Reynolds boundary condition. This method provides an efficient and rigorous way of solving the EHL point contact problem with the aid of a supercomputer and a promising method to deal with the transient EHL point contact problem. A typical pressure distribution and film thickness profile are presented and the minimum film thicknesses are compared with the solution of Hamrock and Dowson. The details of the cavitation boundaries for various operating parameters are discussed.
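The core of the system approach is a Newton-Raphson iteration on a coupled nonlinear system. As a toy illustration of that solver idea only, not the EHL Reynolds-equation system itself, consider a 2x2 nonlinear system:

```python
import numpy as np

def newton(f, jac, x0, tol=1e-10, max_iter=50):
    """Newton-Raphson for a system F(x) = 0; jac(x) returns the Jacobian."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        dx = np.linalg.solve(jac(x), -f(x))  # solve J dx = -F
        x += dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Toy system: x^2 + y^2 = 4 and x*y = 1 (purely illustrative)
f = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0] * v[1] - 1.0])
jac = lambda v: np.array([[2 * v[0], 2 * v[1]], [v[1], v[0]]])
root = newton(f, jac, [2.0, 0.3])
print(np.allclose(f(root), 0.0, atol=1e-8))  # → True
```

In the EHL problem the unknown vector holds the discretized pressure distribution plus the cavitation-boundary location, and the Jacobian couples the Reynolds equation with the elastic film-thickness relation, which is what makes the simultaneous (system) solution robust.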
NASA Astrophysics Data System (ADS)
Hu, Zhongjun; Guo, Ke; Jin, Shulan; Pan, Huahua
2018-01-01
The influence of climatic change on species distribution is currently of great interest in biogeography. Six typical Kobresia species, which are high-quality forage for local husbandry, were selected from the alpine grassland of the Tibetan Plateau (TP), and their distribution changes were modeled across four periods using the MaxEnt model and GIS technology. The modeling results show that the distribution of these six Kobresia species in the TP was strongly affected by two factors: the annual precipitation and the precipitation in the wettest and driest quarters of the year. The most suitable habitats of K. pygmeae were located in the area around Qinghai Lake, the Hengduan-Himalayan mountain area, and the hinterland of the TP. The most suitable habitats of K. humilis were mainly located in the area around Qinghai Lake and the hinterland of the TP during the Last Interglacial period, and gradually merged into a larger area. K. robusta and K. tibetica were located in the area around Qinghai Lake and the hinterland of the TP but never merged into a single area. K. capillifolia occupied the area around Qinghai Lake and extended to the southwest of its original distribution area, whereas K. macrantha, with the smallest distribution area of the six, was mainly distributed along the Himalayan mountain chain. According to the changes in suitable habitat area over the four periods, the six Kobresia species can be divided into four types of retreat/expansion styles. These styles result from the long-term adaptation of the different species to local climate changes across the TP and show the complexity of the relationships between species and climate. The results provide a useful reference for the protection of species diversity and the sustainable development of local husbandry in the TP.
Distributed Health Monitoring System for Reusable Liquid Rocket Engines
NASA Technical Reports Server (NTRS)
Lin, C. F.; Figueroa, F.; Politopoulos, T.; Oonk, S.
2009-01-01
The ability to correctly detect and identify any possible failure in the systems, subsystems, or sensors within a reusable liquid rocket engine is a major goal at NASA John C. Stennis Space Center (SSC). A health management (HM) system is required to provide an on-ground operation crew with an integrated awareness of the condition of every element of interest by determining anomalies, examining their causes, and making predictive statements. However, the complexity associated with relevant systems, and the large amount of data typically necessary for proper interpretation and analysis, presents difficulties in implementing complete failure detection, identification, and prognostics (FDI&P). As such, this paper presents a Distributed Health Monitoring System for Reusable Liquid Rocket Engines as a solution to these problems through the use of highly intelligent algorithms for real-time FDI&P, and efficient and embedded processing at multiple levels. The end result is the ability to successfully incorporate a comprehensive HM platform despite the complexity of the systems under consideration.
Westjohn, David B.; Weaver, T.L.
1996-01-01
Electrical-resistivity logs and water-quality data were used to delineate the fresh water/saline-water interface in a 22,000-square-mile area of the central Michigan Basin, where Mississippian and younger geologic units form a regional system of aquifers and confining units.Pleistocene glacial deposits in the central Lower Peninsula of Michigan contain freshwater, except in a 1,600-square-mile area within the Saginaw Lowlands, where these deposits typically contain saline water. Pennsylvanian and Mississippian sandstones are freshwater bearing where they subcrop below permeable Pleistocene glacial deposits. Down regional dip from subcrop areas, salinity of ground water progressively increases in Early Pennsylvanian and Mississippian sandstones, and these units contain brine in the central part of the basin. Freshwater is present in Late Pennsylvanian sandstones in the northern and southern parts of the aquifer system. Typically, saline water is present in Pennsylvanian sandstones in the eastern and western parts of the aquifer system.Relief on the freshwater/saline-water interface is about 500 feet. Altitudes of the interface are low (300 to 400 feet above sea level) along a north-south-trending corridor through the approximate center of the area mapped. In isolated areas in the northern and western parts of the aquifer system, the altitude of the base of freshwater is less than 400 feet, but altitude is typically more than 400 feet. 
In the southern and northern parts of the aquifer system where Pennsylvanian rocks are thin or absent, altitudes of the base of freshwater range from 700 to 800 feet and from 500 to 700 feet above sea level, respectively.Geologic controls on distribution of freshwater in the regional aquifer system are (1) direct hydraulic connection of sandstone aquifers and freshwater-bearing, permeable glacial deposits, (2) impedance of upward discharge of saline water from sandstones by lodgement tills, (3) impedance of recharge of freshwater to bedrock (or discharge of saline water from bedrock) by Jurassic red beds, and (4) vertical barriers to ground-water flow within and between sandstone units.
Rigorous Proof of the Boltzmann-Gibbs Distribution of Money on Connected Graphs
NASA Astrophysics Data System (ADS)
Lanchier, Nicolas
2017-04-01
Models in econophysics, i.e., the emerging field of statistical physics that applies the main concepts of traditional physics to economics, typically consist of large systems of economic agents who are characterized by the amount of money they have. In the simplest model, at each time step, one agent gives one dollar to another agent, with both agents being chosen independently and uniformly at random from the system. Numerical simulations of this model suggest that, at least when the number of agents and the average amount of money per agent are large, the distribution of money converges to an exponential distribution reminiscent of the Boltzmann-Gibbs distribution of energy in physics. The main objective of this paper is to give a rigorous proof of this result and show that the convergence to the exponential distribution holds more generally when the economic agents are located on the vertices of a connected graph and interact locally with their neighbors rather than globally with all the other agents. We also study a closely related model where, at each time step, agents buy with a probability proportional to the amount of money they have, and prove that in this case the limiting distribution of money is Poissonian.
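The simplest model described above is easy to simulate directly. The sketch below uses hypothetical parameter values and skips a transfer when the chosen giver has no money, one common convention for this model; it checks that the empirical distribution drifts toward the exponential shape, for which the fraction of agents below the mean approaches 1 - 1/e ≈ 0.63.

```python
import random

random.seed(1)
N, M, steps = 500, 5_000, 200_000  # agents, total dollars, exchange events

money = [M // N] * N  # every agent starts with the average amount
for _ in range(steps):
    i, j = random.randrange(N), random.randrange(N)
    if i != j and money[i] > 0:    # giver must have a dollar to give
        money[i] -= 1
        money[j] += 1

# Fraction of agents holding less than the mean amount M/N
frac_below_mean = sum(m < M // N for m in money) / N
print(sum(money) == M, frac_below_mean)
```

Money is conserved exactly, and after enough exchanges the initially uniform distribution relaxes toward the exponential (Boltzmann-Gibbs) form; restricting the pair choice to graph neighbors is the local-interaction variant treated in the paper.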
Topology determines force distributions in one-dimensional random spring networks.
Heidemann, Knut M; Sageman-Furnas, Andrew O; Sharma, Abhinav; Rehfeldt, Florian; Schmidt, Christoph F; Wardetzky, Max
2018-02-01
Networks of elastic fibers are ubiquitous in biological systems and often provide mechanical stability to cells and tissues. Fiber-reinforced materials are also common in technology. An important characteristic of such materials is their resistance to failure under load. Rupture occurs when fibers break under excessive force and when that failure propagates. Therefore, it is crucial to understand force distributions. Force distributions within such networks are typically highly inhomogeneous and are not well understood. Here we construct a simple one-dimensional model system with periodic boundary conditions by randomly placing linear springs on a circle. We consider ensembles of such networks that consist of N nodes and have an average degree of connectivity z but vary in topology. Using a graph-theoretical approach that accounts for the full topology of each network in the ensemble, we show that, surprisingly, the force distributions can be fully characterized in terms of the parameters (N,z). Despite the universal properties of such (N,z) ensembles, our analysis further reveals that a classical mean-field approach fails to capture force distributions correctly. We demonstrate that network topology is a crucial determinant of force distributions in elastic spring networks.
ADMS State of the Industry and Gap Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agalgaonkar, Yashodhan P.; Marinovici, Maria C.; Vadari, Subramanian V.
2016-03-31
An advanced distribution management system (ADMS) is a platform for optimized distribution system operational management. This platform comprises distribution management system (DMS) applications, supervisory control and data acquisition (SCADA), an outage management system (OMS), and a distributed energy resource management system (DERMS). One of the primary objectives of this work is to study and analyze several ADMS component and auxiliary systems. All the important component and auxiliary systems (SCADA, GIS, DMS, AMR/AMI, OMS, and DERMS) are discussed in this report. Their current-generation technologies are analyzed, and their integration with (or evolution toward) an ADMS technology is discussed. An ADMS state-of-the-art and gap analysis is also presented. Two technical gaps are observed. The integration challenge between the component operational systems is the single largest challenge for ADMS design and deployment. Another significant challenge concerns essential ADMS applications, for instance fault location, isolation, and service restoration (FLISR) and volt-var optimization (VVO); there are relatively few ADMS application developers because the ADMS software platform is not open source. A third critical gap, while not technical in nature compared with the two above, is still important to consider: the data models currently residing in utility GIS systems are incomplete, inaccurate, or both. These data are essential for planning and operations because they are typically one of the primary sources from which power system models are created. To achieve the full potential of an ADMS, the ability to execute accurate power flow solutions is an important prerequisite. These critical gaps are hindering wider utility adoption of ADMS technology.
The development of an open architecture platform can eliminate many of these barriers and also aid seamless integration of distribution utility legacy systems with an ADMS.
NASA Technical Reports Server (NTRS)
Chernov, A. A.
2004-01-01
Crystallites, droplets and amorphous precipitates growing from supersaturated solution are surrounded by zones that are depleted with respect to the molecules they are built of. If two such particles of colloidal size are separated by a distance comparable to their diameters, then the depletion within the gap between the particles is deeper than that at their outer portions. This causes a depletion attraction between the particles to appear. It may cause particle coagulation and decay of the originally homogeneous particle distribution into a system of clouds within which the particle number density is higher, separated by regions of lower number density. A stability criterion, Q = 4 pi R^3 c / 3 >> 1, was found analytically, along with a typical particle-density-distribution wavevector q = (Q/I)^(1/2) (a/R)^(1/4). Here, R and a are the particle and molecular radii, respectively, c is the average molecular number density in solution, and I is the squared diffusion length covered by a molecule during a typical time characterizing the decay of molecular concentration in solution due to consumption of the molecules by the growing particles.
Cometary splitting - a source for the Jupiter family?
NASA Astrophysics Data System (ADS)
Pittich, E. M.; Rickman, H.
1994-01-01
The quest for the origin of the Jupiter family of comets includes investigating the possibility that a large fraction of this population originates from past splitting events. In particular, one suggested scenario, albeit less attractive on physical grounds, maintains that a giant comet breakup is a major source of short-period comets. By simulating such events and integrating the motions of the fictitious fragments in an accurate solar system model for the typical lifetime of Jupiter family comets, it is possible to check whether the outcome may or may not be compatible with the observed orbital distribution. In this paper we present such integrations for a few typical progenitor orbits and analyze the ensuing thermalization process with particular attention to the Tisserand parameters. It is found that the sets of fragments lose their memory of a common origin very rapidly so that, in general terms, it is difficult to use the random appearance of the observed orbital distribution as evidence against the giant comet splitting hypothesis.
NASA Astrophysics Data System (ADS)
Kataoka, R.; Miyoshi, Y.; Shigematsu, K.; Hampton, D.; Mori, Y.; Kubo, T.; Yamashita, A.; Tanaka, M.; Takahei, T.; Nakai, T.; Miyahara, H.; Shiokawa, K.
2013-09-01
A new stereoscopic measurement technique is developed to obtain an all-sky altitude map of aurora using two ground-based digital single-lens reflex (DSLR) cameras. Two identical full-color all-sky cameras were set up with an 8 km separation across the Chatanika area in Alaska (Poker Flat Research Range and Aurora Borealis Lodge) to find the localized emission height from the maximum correlation of the apparent patterns in localized pixels, applying a geographic coordinate transform. It is found that a typical ray structure of discrete aurora shows a broad altitude distribution above 100 km, while a typical patchy structure of pulsating aurora shows a narrow altitude distribution below 100 km. Because of the portability and low cost of DSLR camera systems, the new technique may open a unique opportunity not only for scientists but also for night-sky photographers to contribute to auroral science, potentially forming a dense observation network.
Distributed Group Design Process: Lessons Learned.
ERIC Educational Resources Information Center
Eseryel, Deniz; Ganesan, Radha
A typical Web-based training development team consists of a project manager, an instructional designer, a subject-matter expert, a graphic artist, and a Web programmer. The typical scenario involves team members working together in the same setting during the entire design and development process. What happens when the team is distributed, that is…
Topology Counts: Force Distributions in Circular Spring Networks.
Heidemann, Knut M; Sageman-Furnas, Andrew O; Sharma, Abhinav; Rehfeldt, Florian; Schmidt, Christoph F; Wardetzky, Max
2018-02-09
Filamentous polymer networks govern the mechanical properties of many biological materials. Force distributions within these networks are typically highly inhomogeneous, and, although the importance of force distributions for structural properties is well recognized, they are far from being understood quantitatively. Using a combination of probabilistic and graph-theoretical techniques, we derive force distributions in a model system consisting of ensembles of random linear spring networks on a circle. We show that characteristic quantities, such as the mean and variance of the force supported by individual springs, can be derived explicitly in terms of only two parameters: (i) average connectivity and (ii) number of nodes. Our analysis shows that a classical mean-field approach fails to capture these characteristic quantities correctly. In contrast, we demonstrate that network topology is a crucial determinant of force distributions in an elastic spring network. Our results for 1D linear spring networks readily generalize to arbitrary dimensions.
Distributed Knowledge-Based Systems
1989-03-15
For example, patients with cerebral palsy , a disease affecting motor control, typically have several muscles that function improperly in different...phases of the gait cycle. The malfunctions in the case of cerebral palsy are improper contractions of the muscles - both in terms of the magnitude and...problem, if true, has serious implications for how knowledge acquisition should be done. Because some knowledge representation must be the target of
An apparent case of long-distance breeding dispersal by a Mexican spotted owl in New Mexico
Joseph L. Ganey; Jeffrey S. Jenness
2013-01-01
The Mexican spotted owl (Strix occidentalis lucida) is widely but patchily distributed throughout the southwestern United States and the Republic of Mexico (Gutiérrez and others 1995, Ward and others 1995). This owl typically occurs in either rocky canyonlands or forested mountain and canyon systems containing mixed-conifer or pine-oak (Pinus spp. - Quercus spp.)...
A Clonal Lineage of Fusarium oxysporum Circulates in the Tap Water of Different French Hospitals
Sautour, Marc; Gautheron, Nadine; Laurent, Julie; Aho, Serge; Bonnin, Alain; Sixt, Nathalie; Hartemann, Philippe; Dalle, Frédéric; Steinberg, Christian
2016-01-01
ABSTRACT Fusarium oxysporum is typically a soilborne fungus but can also be found in aquatic environments. In hospitals, water distribution systems may be reservoirs for the fungi responsible for nosocomial infections. F. oxysporum was previously detected in the water distribution systems of five French hospitals. Sixty-eight isolates from water representative of all hospital units that were previously sampled and characterized by translation elongation factor 1α sequence typing were subjected to microsatellite analysis and full-length ribosomal intergenic spacer (IGS) sequence typing. All but three isolates shared common microsatellite loci and a common two-locus sequence type (ST). This ST has an international geographical distribution in both the water networks of hospitals and among clinical isolates. The ST dominant in water was not detected among 300 isolates of F. oxysporum that originated from surrounding soils. Further characterization of 15 isolates by vegetative compatibility testing allowed us to conclude that a clonal lineage of F. oxysporum circulates in the tap water of the different hospitals. IMPORTANCE We demonstrated that a clonal lineage of Fusarium oxysporum inhabits the water distribution systems of several French hospitals. This clonal lineage, which appears to be particularly adapted to water networks, represents a potential risk for human infection and raises questions about its worldwide distribution. PMID:27663024
Multifrequency Retrieval of Cloud Ice Particle Size Distributions
2005-01-01
distribution (Testud et al., 2001) to represent the PSD. The normalized gamma distribution has several advantages over a typical gamma PSD. A typical gamma...variation correlated with variation in ýL (Testud et al., 2001). This variation on N, with P, requires a priori restrictions on the variance in R in...Geoscience & Rem. Sensing, 40, 541-549. Testud, J., S. Oury, R. A. Black, P. Amayenc, and X. Dou, 2001: The Concept of "Normalized" Distribution to Describe
Transmission and Distribution Efficiency Improvement Research and Development Survey.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brooks, C.L.; Westinghouse Electric Corporation. Advanced Systems Technology.
The purpose of this study was to identify and quantify those technologies for improving transmission and distribution (T and D) system efficiency that could provide the greatest benefits for utility customers in the Pacific Northwest. Improving the efficiency of transmission and distribution systems offers a potential source of conservation within the utility sector. An extensive review of this field resulted in a list of 49 state-of-the-art technologies and 39 future technologies. Of these, 15 from the former list and 7 from the latter were chosen as the most promising and then submitted to an evaluative test - a modeled sample system for Benton County PUD, a utility with characteristics typical of a BPA customer system. Reducing end-use voltage on secondary distribution systems to decrease the energy consumption of electrical users when possible, called ''Conservation Voltage Reduction,'' was found to be the most cost effective state-of-the-art technology. Volt-ampere reactive (var) optimization is a similarly cost effective alternative. The most significant reduction in losses on the transmission and distribution system would be achieved through the replacement of standard transformers with high efficiency transformers, such as amorphous steel transformers. Of the future technologies assessed, the ''Distribution Static VAR Generator'' appears to have the greatest potential for technological breakthroughs and, therefore in time, commercialization. ''Improved Dielectric Materials,'' with a relatively low cost and high potential for efficiency improvement, warrant R and D consideration. ''Extruded Three-Conductor Cable'' and ''Six- and Twelve-Phase Transmission'' programs provide only limited gains in efficiency and applicability and are therefore the least cost effective.
Adaptive, Distributed Control of Constrained Multi-Agent Systems
NASA Technical Reports Server (NTRS)
Bieniawski, Stefan; Wolpert, David H.
2004-01-01
Product Distribution (PD) theory was recently developed as a broad framework for analyzing and optimizing distributed systems. Here we demonstrate its use for adaptive distributed control of Multi-Agent Systems (MASs), i.e., for distributed stochastic optimization using MASs. First we review one motivation of PD theory, as the information-theoretic extension of conventional full-rationality game theory to the case of bounded rational agents. In this extension the equilibrium of the game is the optimizer of a Lagrangian of the probability distribution on the joint state of the agents. When the game in question is a team game with constraints, that equilibrium optimizes the expected value of the team game utility, subject to those constraints. One common way to find that equilibrium is to have each agent run a Reinforcement Learning (RL) algorithm. PD theory reveals this to be a particular type of search algorithm for minimizing the Lagrangian. Typically that algorithm is quite inefficient. A more principled alternative is to use a variant of Newton's method to minimize the Lagrangian. Here we compare this alternative to RL-based search in three sets of computer experiments. These are the N-Queens problem and bin-packing problem from the optimization literature, and the Bar problem from the distributed RL literature. Our results confirm that the PD-theory-based approach outperforms the RL-based scheme in all three domains.
NASA Astrophysics Data System (ADS)
Yuan, Jindou; Xu, Jinliang; Wang, Yaodong
2018-03-01
Energy saving and emission reduction have become targets for modern society due to the potential energy crisis and the threat of climate change. A distributed hybrid renewable energy system (HRES) consisting of photovoltaic (PV) arrays, a wood-syngas combined heat and power (CHP) generator and back-up batteries is designed to power a typical semi-detached rural house in China, with the aims of meeting the energy demand of the house and reducing greenhouse gas emissions from the use of fossil fuels. Based on the annual load information of the house and the local meteorological data, including solar radiation, air temperature, etc., a system model is set up using HOMER software and is used to simulate all practical configurations to carry out technical and economic evaluations. The performance of the whole HRES system and of each component under different configurations is evaluated, and the optimized configuration of the system is found.
Design and Benchmarking of a Network-In-the-Loop Simulation for Use in a Hardware-In-the-Loop System
NASA Technical Reports Server (NTRS)
Aretskin-Hariton, Eliot; Thomas, George; Culley, Dennis; Kratz, Jonathan
2017-01-01
Distributed engine control (DEC) systems alter aircraft engine design constraints because of fundamental differences in the input and output communication between DEC and centralized control architectures. The change in the way communication is implemented may create new optimum engine-aircraft configurations. This paper continues the exploration of digital network communication by demonstrating a Network-In-the-Loop simulation at the NASA Glenn Research Center. This simulation incorporates a real-time network protocol, the Engine Area Distributed Interconnect Network Lite (EADIN Lite), with the Commercial Modular Aero-Propulsion System Simulation 40k (C-MAPSS40k) software. The objective of this study is to assess the impact of the digital control network on the control system. Performance is evaluated relative to a truth model for large transient maneuvers and a typical flight profile for commercial aircraft. Results show that a decrease in network bandwidth from 250 Kbps (sampling all sensors every time step) to 40 Kbps resulted in very small differences in control system performance.
Stochastic scheduling on a repairable manufacturing system
NASA Astrophysics Data System (ADS)
Li, Wei; Cao, Jinhua
1995-08-01
In this paper, we consider some stochastic scheduling problems with a set of stochastic jobs on a manufacturing system with a single machine that is subject to multiple breakdowns and repairs. When the machine fails while processing a job, the job processing must restart some time later, once the machine is repaired. For this typical manufacturing system, we find the optimal policies that minimize the following objective functions: (1) the weighted sum of the completion times; (2) the weighted number of late jobs having constant due dates; (3) the weighted number of late jobs having exponentially distributed random due dates. These results generalize some previous ones.
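For objective (1), the classical single-machine rule orders jobs by decreasing ratio of weight to expected processing time (WSEPT). A minimal sketch, under the simplifying assumption that breakdowns inflate every job's expected duration by the same factor (so the ordering is unchanged); job data are illustrative:

```python
def wsept_order(jobs):
    """Sequence jobs by decreasing w / E[P] (the WSEPT rule).

    jobs: list of (name, weight, expected_processing_time).
    This classical rule minimizes the expected weighted sum of
    completion times on a single machine; under breakdown models
    where repairs inflate every job's expected duration by the same
    factor, the ordering is unchanged (illustrative sketch).
    """
    return sorted(jobs, key=lambda j: j[1] / j[2], reverse=True)

jobs = [("A", 1.0, 4.0), ("B", 3.0, 3.0), ("C", 2.0, 5.0)]
order = [name for name, _, _ in wsept_order(jobs)]
# ratios: A = 0.25, B = 1.0, C = 0.4, so B precedes C precedes A
```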
Experimental Research on Boundary Shear Stress in Typical Meandering Channel
NASA Astrophysics Data System (ADS)
Chen, Kai-hua; Xia, Yun-feng; Zhang, Shi-zhao; Wen, Yun-cheng; Xu, Hua
2018-06-01
A novel instrument, the Micro-Electro-Mechanical System (MEMS) flexible hot-film shear stress sensor, was used to study the boundary shear stress distribution in a generalized natural meandering open channel; the mean sidewall shear stress distribution along the meandering channel and the lateral boundary shear stress distribution in typical cross-sections of the meandering channel were analysed. Based on the measurement of the boundary shear stress, a semi-empirical, semi-theoretical approach for computing the boundary shear stress was derived, including the effects of the secondary flow, the sidewall roughness factor, the eddy viscosity and the additional Reynolds stress; more importantly, for the first time, it combined the effects of the cross-section central angle and the Reynolds number into the expressions. A comparison between previous research and this study shows that the semi-empirical, semi-theoretical algorithm can predict the boundary shear stress distribution precisely. Finally, a single-factor analysis was conducted on the relationship between the average sidewall shear stress on the convex and concave banks and the flow rate, water depth, slope ratio, or cross-section central angle of the open channel bend. The functional relationship with each of the above factors was established, and the distance from the location of the extreme sidewall shear stress to the bottom of the open channel was then deduced based on statistical theory.
NASA Astrophysics Data System (ADS)
Huang, Jian; Wei, Kai; Jin, Kai; Li, Min; Zhang, YuDong
2018-06-01
The sodium laser guide star (LGS) plays a key role in modern astronomical Adaptive Optics Systems (AOSs). The spot size and photon return of the sodium LGS depend strongly on the laser power density distribution at the sodium layer and thus affect the performance of the AOS. The power density distribution is degraded by turbulence in the uplink path, launch system aberrations, the beam quality of the laser, and so forth. Even without any aberrations, the TEM00 Gaussian type is still not the optimal power density distribution to obtain the best balance between the measurement error and temporal error. To optimize and control the LGS power density distribution at the sodium layer toward an expected distribution type, a method that combines pre-correction and beam-shaping is proposed. A typical result shows that under strong turbulence (Fried parameter r0 of 5 cm) and for a quasi-continuous-wave sodium laser (power P of 15 W), in the best case, our method can effectively optimize the distribution from the Gaussian type to the "top-hat" type and enhance the photon return flux of the sodium LGS; at the same time, the total error of the AOS is decreased by 36% with our technique for a high-power laser and poor seeing.
NASA Astrophysics Data System (ADS)
Golvano-Escobal, Irati; Gonzalez-Rosillo, Juan Carlos; Domingo, Neus; Illa, Xavi; López-Barberá, José Francisco; Fornell, Jordina; Solsona, Pau; Aballe, Lucia; Foerster, Michael; Suriñach, Santiago; Baró, Maria Dolors; Puig, Teresa; Pané, Salvador; Nogués, Josep; Pellicer, Eva; Sort, Jordi
2016-07-01
Spatio-temporal patterns are ubiquitous in different areas of materials science and biological systems. However, typically the motifs in these types of systems present a random distribution with many possible different structures. Herein, we demonstrate that controlled spatio-temporal patterns, with reproducible spiral-like shapes, can be obtained by electrodeposition of Co-In alloys inside a confined circular geometry (i.e., in disks that are commensurate with the typical size of the spatio-temporal features). These patterns are mainly of compositional nature, i.e., with virtually no topographic features. Interestingly, the local changes in composition lead to a periodic modulation of the physical (electric, magnetic and mechanical) properties. Namely, the Co-rich areas show higher saturation magnetization and electrical conductivity and are mechanically harder than the In-rich ones. Thus, this work reveals that confined electrodeposition of this binary system constitutes an effective procedure to attain template-free magnetic, electric and mechanical surface patterning with specific and reproducible shapes.
A 20fs synchronization system for lasers and cavities in accelerators and FELs
NASA Astrophysics Data System (ADS)
Wilcox, R. B.; Byrd, J. M.; Doolittle, L. R.; Huang, G.; Staples, J. W.
2010-02-01
A fiber-optic RF distribution system has been developed for synchronizing lasers and RF plants in short-pulse FELs. Typical requirements are 50-100 fs rms over time periods from 1 ms to several hours. Our system amplitude-modulates a CW laser signal, senses the fiber length using an interferometer, and feed-forward corrects the RF phase digitally at the receiver. We demonstrate less than 15 fs rms error over 12 hours between two independent channels with a fiber path length difference of 200 m, transmitting S-band RF. The system is constructed from standard telecommunications components and uses regular telecom fiber.
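The feed-forward idea can be sketched numerically: the interferometer reports a fiber length change, and the receiver rotates the RF phase by the negative of the induced error. The fiber propagation speed and the function are illustrative assumptions, not the NIF/LBNL implementation:

```python
import math

C_FIBER = 2.0e8  # approx. signal speed in optical fiber, m/s (assumption)

def phase_correction(delta_length_m, rf_freq_hz):
    """Feed-forward phase shift (radians) canceling the RF phase
    error caused by a measured fiber length change.

    The interferometer senses delta_length_m; the receiver then
    digitally rotates the RF phase by the negative of the induced
    error (illustrative sketch of the scheme).
    """
    delay_s = delta_length_m / C_FIBER
    return -2.0 * math.pi * rf_freq_hz * delay_s

# a 1 mm length drift on S-band (3 GHz) RF
corr = phase_correction(1e-3, 3e9)
```

A 1 mm drift at 3 GHz corresponds to roughly 0.09 rad, i.e. about 5 ps-scale errors per meter-scale drift, which is why femtosecond stability requires continuous sensing.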
Fog-computing concept usage as means to enhance information and control system reliability
NASA Astrophysics Data System (ADS)
Melnik, E. V.; Klimenko, A. B.; Ivanov, D. Ya
2018-05-01
This paper focuses on the reliability of information and control systems (ICS). The authors propose using elements of the fog-computing concept to enhance the reliability function. The key idea of fog-computing is to shift computations to the fog layer of the network and thus decrease the workload of the communication environment and the data processing components. In an ICS, the workload can likewise be distributed among sensors, actuators and network infrastructure facilities near the sources of data. The authors simulated typical workload distribution situations for the "traditional" ICS architecture and for one using elements of the fog-computing concept. The paper contains some models, selected simulation results and conclusions about the prospects of fog-computing as a means to enhance ICS reliability.
Neuronal Vacuolization in Feline Panleukopenia Virus Infection.
Pfankuche, Vanessa M; Jo, Wendy K; van der Vries, Erhard; Jungwirth, Nicole; Lorenzen, Stephan; Osterhaus, Albert D M E; Baumgärtner, Wolfgang; Puff, Christina
2018-03-01
Feline panleukopenia virus (FPV) infections are typically associated with anorexia, vomiting, diarrhea, neutropenia, and lymphopenia. In cases of late prenatal or early neonatal infections, cerebellar hypoplasia is reported in kittens. In addition, single cases of encephalitis are described. FPV replication was recently identified in neurons, although it is mainly found in cells with high mitotic activity. A female cat, 2 months old, was submitted to necropsy after it died with neurologic deficits. Besides typical FPV intestinal tract changes, multifocal, randomly distributed intracytoplasmic vacuoles within neurons of the thoracic spinal cord were found histologically. Next-generation sequencing identified FPV-specific sequences within the central nervous system. FPV antigen was detected within central nervous system cells, including the vacuolated neurons, via immunohistochemistry. In situ hybridization confirmed the presence of FPV DNA within the vacuolated neurons. Thus, FPV should be considered a cause for neuronal vacuolization in cats presenting with ataxia.
Derived virtual devices: a secure distributed file system mechanism
NASA Technical Reports Server (NTRS)
VanMeter, Rodney; Hotz, Steve; Finn, Gregory
1996-01-01
This paper presents the design of derived virtual devices (DVDs). DVDs are the mechanism used by the Netstation Project to provide secure shared access to network-attached peripherals distributed in an untrusted network environment. DVDs improve input/output efficiency by allowing user processes to perform I/O operations directly on devices without intermediate transfer through the controlling operating system kernel. The security enforced at the device through the DVD mechanism includes resource boundary checking, user authentication, and restricted operations, e.g., read-only access. To illustrate the application of DVDs, we present the interactions between a network-attached disk and a file system designed to exploit the DVD abstraction. We further discuss third-party transfer as a mechanism intended to provide efficient data transfer in a typical network-attached peripheral (NAP) environment. We show how DVDs facilitate third-party transfer and provide the security required in a more open network environment.
Fully Quantum Fluctuation Theorems
NASA Astrophysics Data System (ADS)
Åberg, Johan
2018-02-01
Systems that are driven out of thermal equilibrium typically dissipate random quantities of energy on microscopic scales. The Crooks fluctuation theorem relates the distribution of these random work costs to the corresponding distribution for the reverse process. By an analysis that explicitly incorporates the energy reservoir that donates the energy and the control system that implements the dynamics, we obtain a quantum generalization of Crooks' theorem that not only includes the energy changes in the reservoir but also the full description of its evolution, including coherences. Moreover, this approach opens up the possibility of generalizing the concept of fluctuation relations. Here, we introduce "conditional" fluctuation relations that are applicable to nonequilibrium systems, as well as approximate fluctuation relations that allow for the analysis of autonomous evolution generated by global time-independent Hamiltonians. We furthermore extend these notions to Markovian master equations, implicitly modeling the influence of the heat bath.
NASA Technical Reports Server (NTRS)
Ott, L.; Putman, B.; Collatz, J.; Gregg, W.
2012-01-01
Column CO2 observations from current and future remote sensing missions represent a major advancement in our understanding of the carbon cycle and are expected to help constrain source and sink distributions. However, data assimilation and inversion methods are challenged by the difference in scale between models and observations. OCO-2 footprints represent an area of several square kilometers, while NASA's future ASCENDS lidar mission is likely to have an even smaller footprint. In contrast, the resolution of models used in global inversions is typically hundreds of kilometers, and grid cells often cover areas that include combinations of land, ocean and coastal zones and significant variations in topography, land cover, and population density. To improve understanding of the scales of atmospheric CO2 variability and the representativeness of satellite observations, we will present results from a global, 10-km simulation of meteorology and atmospheric CO2 distributions performed using NASA's GEOS-5 general circulation model. This resolution, typical of mesoscale atmospheric models, represents an order-of-magnitude increase over typical global simulations of atmospheric composition, allowing new insight into small-scale CO2 variations across a wide range of surface flux and meteorological conditions. The simulation includes high-resolution flux datasets provided by NASA's Carbon Monitoring System Flux Pilot Project at half-degree resolution that have been down-scaled to 10 km using remote sensing datasets. Probability distribution functions are calculated over larger areas more typical of global models (100-400 km) to characterize subgrid-scale variability in these models. Particular emphasis is placed on coastal regions and regions containing megacities and fires, to evaluate the ability of coarse-resolution models to represent these small-scale features.
Additionally, model output is sampled using averaging kernels characteristic of the OCO-2 and ASCENDS measurement concepts to create realistic pseudo-datasets. Pseudo-data are averaged over coarse model grid cell areas to better understand the ability of measurements to characterize CO2 distributions and spatial gradients on both short (daily to weekly) and long (monthly to seasonal) time scales.
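The subgrid-variability analysis described above amounts to summarizing a high-resolution field within each coarse model cell. A toy sketch, using min/mean/max as a crude stand-in for the full probability distribution functions; the grid values are invented for illustration:

```python
def subgrid_stats(field, block):
    """Summarize subgrid variability of a high-resolution field
    within coarse model cells.

    field: 2D list (rows x cols) of, e.g., column CO2 values;
    block: coarse cell size in fine-grid points. Returns, per
    coarse cell, (mean, min, max) as a crude stand-in for a full
    probability distribution function (illustrative sketch).
    """
    rows, cols = len(field), len(field[0])
    out = []
    for r0 in range(0, rows, block):
        row_stats = []
        for c0 in range(0, cols, block):
            vals = [field[r][c]
                    for r in range(r0, min(r0 + block, rows))
                    for c in range(c0, min(c0 + block, cols))]
            row_stats.append((sum(vals) / len(vals), min(vals), max(vals)))
        out.append(row_stats)
    return out

# 4x4 fine grid summarized into 2x2 coarse cells
field = [[400, 401, 410, 411],
         [402, 403, 412, 413],
         [390, 391, 400, 401],
         [392, 393, 402, 403]]
stats = subgrid_stats(field, 2)
```

The min-max spread within a coarse cell is a first indication of how unrepresentative a single small satellite footprint can be of the cell mean.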
The neutron imaging diagnostic at NIF (invited).
Merrill, F E; Bower, D; Buckles, R; Clark, D D; Danly, C R; Drury, O B; Dzenitis, J M; Fatherley, V E; Fittinghoff, D N; Gallegos, R; Grim, G P; Guler, N; Loomis, E N; Lutz, S; Malone, R M; Martinson, D D; Mares, D; Morley, D J; Morgan, G L; Oertel, J A; Tregillis, I L; Volegov, P L; Weiss, P B; Wilde, C H; Wilson, D C
2012-10-01
A neutron imaging diagnostic has recently been commissioned at the National Ignition Facility (NIF). This new system is an important diagnostic tool for inertial fusion studies at the NIF, measuring the size and shape of the burning DT plasma during the ignition stage of Inertial Confinement Fusion (ICF) implosions. The imaging technique utilizes a pinhole neutron aperture placed between the neutron source and a neutron detector. The detection system measures the two-dimensional distribution of neutrons passing through the pinhole. This diagnostic has been designed to collect two images at two times. The long flight path for this diagnostic, 28 m, results in a chromatic separation of the neutrons, allowing the independently timed images to measure the source distribution at two neutron energies: typically the first image measures the distribution of the 14 MeV neutrons and the second that of the 6-12 MeV neutrons. The combination of these two images has provided data on the size and shape of the burning plasma within the compressed capsule, as well as a measure of the quantity and spatial distribution of the cold fuel surrounding this core.
Du, Tingsong; Hu, Yang; Ke, Xianting
2015-01-01
An improved quantum artificial fish swarm algorithm (IQAFSA) for solving distributed network programming considering distributed generation is proposed in this work. The IQAFSA is based on quantum computing, which offers exponential acceleration for heuristic algorithms; it uses quantum bits to encode the artificial fish and uses the quantum revolving gate together with the preying, following and variation behaviors of the quantum artificial fish to update the population in the search for the optimal value. We then apply the proposed algorithm, along with the quantum artificial fish swarm algorithm (QAFSA), the basic artificial fish swarm algorithm (BAFSA), and the global edition artificial fish swarm algorithm (GAFSA), to simulation experiments on some typical test functions. The simulation results demonstrate that the proposed algorithm can escape from local extrema effectively and has higher convergence speed and better accuracy. Finally, applying IQAFSA to distributed network problems, the simulation results for a 33-bus radial distribution network system show that IQAFSA achieves the minimum power loss in comparison with BAFSA, GAFSA, and QAFSA.
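The preying behavior at the core of artificial-fish-swarm search can be sketched in a few lines; this toy version omits the quantum encoding and the following/swarming behaviors of the paper, and all parameter values are illustrative assumptions:

```python
import random

def afsa_minimize(f, bounds, n_fish=20, visual=0.5, tries=5, steps=200, seed=1):
    """Minimal artificial-fish-swarm sketch (preying behavior only).

    Each fish repeatedly samples a random point within its 'visual'
    range and moves there if the objective improves. Only the preying
    step of AFSA is shown; the quantum encoding, following and
    swarming behaviors of the paper are omitted (illustration).
    """
    rng = random.Random(seed)
    lo, hi = bounds
    fish = [rng.uniform(lo, hi) for _ in range(n_fish)]
    for _ in range(steps):
        for i, x in enumerate(fish):
            for _ in range(tries):  # preying: look around, keep improvements
                cand = min(hi, max(lo, x + rng.uniform(-visual, visual)))
                if f(cand) < f(x):
                    fish[i] = cand
                    break
    return min(fish, key=f)

# sphere test function: minimum at x = 0
best = afsa_minimize(lambda x: x * x, (-5.0, 5.0))
```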
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lisnyak, M.; Pipa, A. V.; Gorchakov, S.
2015-09-28
Spectroscopic investigations of free-burning vacuum arcs in diffuse mode with CuCr electrodes are presented. The experimental conditions of the investigated arc correspond to a typical system for vacuum circuit breakers. Spectra of six species, Cu I, Cu II, Cu III, Cr I, Cr II, and Cr III, have been analyzed in the wavelength range 350–810 nm. The axial intensity distributions were found to depend strongly on the ionization stage of the radiating species. Emission distributions of Cr II and Cu II can be distinguished, as can the distributions of Cr III and Cu III. Information on the axial distribution was used to identify the spectra and to resolve overlapping spectral lines. The overview spectra and some spectral windows recorded with high resolution are presented. An analysis of the axial distributions of emitted light originating from different ionization states is presented and discussed.
NASA Astrophysics Data System (ADS)
Han, Xue; Sandels, Claes; Zhu, Kun; Nordström, Lars
2013-08-01
There has been a large body of statements claiming that the large-scale deployment of Distributed Energy Resources (DERs) could eventually reshape future distribution grid operation in numerous ways. It is therefore necessary to introduce a framework to measure to what extent power system operation will be changed by various parameters of DERs. This article proposes a modelling framework for an overview analysis of the correlation between DER deployment and distribution network operation. Furthermore, to validate the framework, the authors describe reference models of different categories of DERs with their unique characteristics, comprising distributed generation, active demand and electric vehicles. Subsequently, a quantitative analysis was made on the basis of the current and envisioned DER deployment scenarios proposed for Sweden. Simulations are performed in two typical distribution network models for four seasons. The simulation results show that, in general, DER deployment brings possibilities to reduce power losses and voltage drops by compensating power from local generation and optimizing local load profiles.
NASA Astrophysics Data System (ADS)
Peleg, Nadav; Blumensaat, Frank; Molnar, Peter; Fatichi, Simone; Burlando, Paolo
2017-03-01
The performance of urban drainage systems is typically examined using hydrological and hydrodynamic models where rainfall input is uniformly distributed, i.e., derived from a single or very few rain gauges. When models are fed with a single uniformly distributed rainfall realization, the response of the urban drainage system to the rainfall variability remains unexplored. The goal of this study was to understand how climate variability and spatial rainfall variability, jointly or individually considered, affect the response of a calibrated hydrodynamic urban drainage model. A stochastic spatially distributed rainfall generator (STREAP - Space-Time Realizations of Areal Precipitation) was used to simulate many realizations of rainfall for a 30-year period, accounting for both climate variability and spatial rainfall variability. The generated rainfall ensemble was used as input into a calibrated hydrodynamic model (EPA SWMM - the US EPA's Storm Water Management Model) to simulate surface runoff and channel flow in a small urban catchment in the city of Lucerne, Switzerland. The variability of peak flows in response to rainfall of different return periods was evaluated at three different locations in the urban drainage network and partitioned among its sources. The main contribution to the total flow variability was found to originate from the natural climate variability (on average over 74 %). In addition, the relative contribution of the spatial rainfall variability to the total flow variability was found to increase with longer return periods. This suggests that while the use of spatially distributed rainfall data can supply valuable information for sewer network design (typically based on rainfall with return periods from 5 to 15 years), there is a more pronounced relevance when conducting flood risk assessments for larger return periods. 
The results show the importance of using multiple distributed rainfall realizations in urban hydrology studies to capture the total flow variability in the response of the urban drainage systems to heavy rainfall events.
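The attribution of flow variability to climate versus spatial rainfall variability follows the law of total variance over an ensemble indexed by both sources. A hedged sketch of such a decomposition, not the paper's exact procedure; the peak-flow numbers are invented:

```python
def partition_variance(peaks):
    """Partition peak-flow variance by the law of total variance.

    peaks[i][j]: peak flow for climate realization i and spatial
    rainfall realization j. Returns (climate_var, spatial_var):
    the variance of per-climate means (between-climate component)
    and the mean of per-climate variances (within-climate, i.e.
    spatial, component); their sum is the total variance when the
    groups are equal-sized (illustrative sketch).
    """
    def mean(xs):
        return sum(xs) / len(xs)

    def var(xs):
        m = mean(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    climate_means = [mean(row) for row in peaks]
    climate_var = var(climate_means)                 # between climates
    spatial_var = mean([var(row) for row in peaks])  # within each climate
    return climate_var, spatial_var

peaks = [[10.0, 12.0], [20.0, 22.0], [30.0, 28.0]]
cv, sv = partition_variance(peaks)
# cv dominates here, mirroring the paper's finding that climate
# variability contributes most of the total flow variability
```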
Wu, Huiyun; Sheng, Shen; Huang, Zhisong; Zhao, Siqing; Wang, Hua; Sun, Zhenhai; Xu, Xiegu
2013-02-25
As a new and attractive application of vortex beams, the power coupling of an annular vortex beam propagating through a two-Cassegrain-telescope optical system in turbulent atmosphere has been investigated. A typical model of an annular vortex beam propagating through a two-Cassegrain-telescope optical system is established; the general analytical expression for vortex beams with limited apertures and the analytical formulas for the average intensity distribution at the receiver plane are derived. Under the H-V 5/7 turbulence model, the average intensity distribution at the receiver plane and the power coupling efficiency of the optical system are numerically calculated, and the influences of the optical topological charge, the laser wavelength, the propagation path and the receiver apertures on the power coupling efficiency are analyzed. These studies reveal that the average intensity distribution at the receiver plane presents a central dark hollow profile, which is suitable for power coupling by the Cassegrain telescope receiver. In an optical system with optimized parameters, the power coupling efficiency remains high as the propagation distance increases. Under atmospheric turbulent conditions, the vortex beam shows great advantages in power coupling through the two-Cassegrain-telescope optical system compared with a beam without vortex.
Ontology for Life-Cycle Modeling of Water Distribution Systems: Model View Definition
2013-06-01
Research efforts by the Engineer Research and Development Center, Construction Engineering Research Laboratory (ERDC-CERL) to develop a life-cycle building model have resulted in the definition of a "core" building information model that contains... developed experimental BIM models using commercial off-the-shelf (COTS) software. Those models represent three types of typical low-rise Army...
Improved optics for an ultracentrifuge
NASA Technical Reports Server (NTRS)
Miller, C. G.; Stephens, J. B.
1980-01-01
The ultracentrifuge is an important tool in the study of polymers, biomolecules, and cell structures. In a typical ultracentrifuge, the rotor supports a pair of optically matched vials: one contains the sample mixed in a solvent, and the other is a reference containing only the solvent. A double-slit optical system, transverse to the rotor, creates an interference pattern on a photographic plate each time the vials pass through the optics. The medium in the sample vial displaces the interference maxima such that the shift gives a measurement of the density distribution along the length of the sample.
Segmentation of the Knee for Analysis of Osteoarthritis
NASA Astrophysics Data System (ADS)
Zerfass, Peter; Museyko, Oleg; Bousson, Valérie; Laredo, Jean-Denis; Kalender, Willi A.; Engelke, Klaus
Osteoarthritis changes the load distribution within joints and also changes bone density and structure. Within the typical timelines of clinical studies these changes can be very small. Therefore, a precise definition of evaluation regions that are highly robust and show little to no inter- and intra-operator variance is essential for high-quality quantitative analysis. To achieve this goal we have developed a system for the definition of such regions with minimal user input.
2016-11-28
infrastructure typically include energy, water, wastewater, electricity, natural gas, liquid fuel distribution systems, communication lines (e.g. ...with state off-road regulations would further reduce air quality and greenhouse gas emissions. Cultural Resources. The waste footprint as well as ...maintenance of the prescriptive final cover and erosion control, landfill gas monitoring and well maintenance, groundwater monitoring and well maintenance
The Case for Distributed Engine Control in Turbo-Shaft Engine Systems
NASA Technical Reports Server (NTRS)
Culley, Dennis E.; Paluszewski, Paul J.; Storey, William; Smith, Bert J.
2009-01-01
The turbo-shaft engine is an important propulsion system used to power vehicles on land, at sea, and in the air. As the power plant for many high-performance helicopters, the characteristics of the engine and its control are critical to proper vehicle operation as well as being the main determinant of overall vehicle performance. When applied to vertical flight, important distinctions exist in the turbo-shaft engine control system due to the high degree of dynamic coupling between the engine and airframe and the effect on vehicle handling characteristics. In this study, the impact of engine control system architecture is explored relative to engine performance, weight, reliability, safety, and overall cost. The impact of architecture on these metrics is investigated as the control system is modified from a legacy centralized structure to a more distributed configuration. A composite strawman system, typical of turbo-shaft engines in the 1000 to 2000 hp class, is described and used for comparison. The overall benefits of these changes to the control system architecture are assessed. The availability of supporting technologies to achieve this evolution is also discussed.
NASA Astrophysics Data System (ADS)
Liu, Ke; Wang, Chang; Liu, Guo-liang; Ding, Ning; Sun, Qi-song; Tian, Zhi-hong
2017-04-01
To investigate the formation of one kind of typical inter-dendritic crack around the triple-point region in continuous casting (CC) slabs during the operation of soft reduction, a fully coupled 3D thermo-mechanical finite element model was developed, and plant trials were carried out on a domestic continuous casting machine. Three possible types of soft reduction amount distribution (SRAD) in the soft reduction region were analyzed. The relationship between the typical inter-dendritic cracks and the soft reduction conditions is presented and demonstrated in production practice. Considering the critical strain of internal crack formation, a critical tolerance for the soft reduction amount distribution and related casting parameters has been proposed for a better contribution of soft reduction to the internal quality of slabs. The typical inter-dendritic crack around the triple-point region was eliminated effectively through the application of the proposed suggestions for the continuous casting of X70 pipeline steel in industrial practice.
Weighted Distances in Scale-Free Configuration Models
NASA Astrophysics Data System (ADS)
Adriaans, Erwin; Komjáthy, Júlia
2018-01-01
In this paper we study first-passage percolation in the configuration model with empirical degree distribution that follows a power law with exponent τ ∈ (2,3). We assign independent and identically distributed (i.i.d.) weights to the edges of the graph. We investigate the weighted distance (the length of the shortest weighted path) between two uniformly chosen vertices, called the typical distance. When the underlying age-dependent branching process approximating the local neighborhoods of vertices produces infinitely many individuals in finite time, called an explosive branching process, Baroni, van der Hofstad and the second author showed in Baroni et al. (J Appl Probab 54(1):146-164, 2017) that typical distances converge in distribution to a bounded random variable. The order of magnitude of typical distances remained open for the τ ∈ (2,3) case when the underlying branching process is not explosive. We close this gap by determining the first order of magnitude of typical distances in this regime for arbitrary, not necessarily continuous edge-weight distributions that produce a non-explosive age-dependent branching process with infinite-mean power-law offspring distributions. This sequence tends to infinity with the number of vertices and, by choosing an appropriate weight distribution, can be tuned to be any growing function that is O(log log n), where n is the number of vertices in the graph. We show that the result remains valid for the erased configuration model as well, where we delete loops and any second and further edges between two vertices.
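A small simulation conveys the setup: sample power-law degrees with exponent τ ∈ (2,3), match stubs to form the (non-erased) configuration model, attach i.i.d. Exp(1) edge weights, and run Dijkstra between two uniform vertices. All parameter values are illustrative:

```python
import heapq
import random

def typical_weighted_distance(n, tau=2.5, seed=0):
    """Weighted distance between two uniform vertices of a
    configuration model with power-law degrees (exponent tau in
    (2,3)) and i.i.d. Exp(1) edge weights. Loops and multi-edges
    are kept, i.e. the non-erased model (illustrative sketch).
    """
    rng = random.Random(seed)
    # inverse-transform sampling gives P(D >= k) ~ k^(1 - tau)
    degs = [max(1, int(rng.random() ** (-1.0 / (tau - 1)))) for _ in range(n)]
    if sum(degs) % 2:
        degs[0] += 1  # total stub count must be even
    stubs = [v for v, d in enumerate(degs) for _ in range(d)]
    rng.shuffle(stubs)  # uniform stub matching
    adj = {v: [] for v in range(n)}
    for a, b in zip(stubs[::2], stubs[1::2]):
        w = rng.expovariate(1.0)  # i.i.d. edge weight
        adj[a].append((b, w))
        adj[b].append((a, w))
    # Dijkstra between two uniformly chosen vertices
    u, v = rng.sample(range(n), 2)
    dist = {u: 0.0}
    heap = [(0.0, u)]
    while heap:
        d, x = heapq.heappop(heap)
        if d > dist.get(x, float("inf")):
            continue
        for y, w in adj[x]:
            if d + w < dist.get(y, float("inf")):
                dist[y] = d + w
                heapq.heappush(heap, (d + w, y))
    return dist.get(v)  # None if u and v are disconnected

d = typical_weighted_distance(500)
```

Averaging such distances over many seeds and growing n would exhibit the slow (at most O(log log n)) growth discussed above; a single run only illustrates the construction.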
Development of a Bio-nanobattery for Distributed Power Storage Systems
NASA Technical Reports Server (NTRS)
King, Glen C.; Choi, Sang H.; Chu, Sang-Hyon; Kim, Jae-Woo; Park, Yeonjoon; Lillehei, Peter; Watt, Gerald D.; Davis, Robert; Harb, John N.
2004-01-01
Currently available power storage systems, such as those used to supply power to microelectronic devices, typically consist of a single centralized canister and a series of wires to supply electrical power where it is needed in a circuit. As the sizes of electrical circuits and components become smaller, there is a need for a distributed power system to reduce Joule heating and wiring and to allow autonomous operation of the various functions performed by the circuit. Our research is being conducted to develop a bio-nanobattery using ferritins reconstituted with both an iron core (Fe-ferritin) and a cobalt core (Co-ferritin). Both Co-ferritin and Fe-ferritin were synthesized and characterized as candidates for the bio-nanobattery. The reducing capability was determined, as well as the half-cell electrical potentials, indicating an electrical output of nearly 0.5 V for the battery cell. Ferritins having other metallic cores are also being investigated in order to increase the overall electrical output. Two-dimensional ferritin arrays were also produced on various substrates, demonstrating the necessary building blocks for the bio-nanobattery. The bio-nanobattery will play a key role in moving to a distributed power storage system for electronic applications.
Dynamics of Polydisperse Foam-like Emulsion
NASA Astrophysics Data System (ADS)
Hicock, Harry; Feitosa, Klebert
2011-10-01
Foam is a complex fluid whose relaxation properties are associated with the continuous diffusion of gas from small to large bubbles driven by differences in Laplace pressures. We study the dynamics of bubble rearrangements by tracking droplets of a clear, buoyantly neutral emulsion that coarsens like a foam. The droplets are imaged in three dimensions using confocal microscopy. Analysis of the images allows us to measure their positions and radii, and track their evolution in time. We find that the droplet size distribution fits a Weibull distribution characteristic of foam systems. Additionally, we observe that droplets undergo continuous evolution interspersed with occasional large rearrangements, on par with the local relaxation behavior typical of foams.
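The Weibull fit mentioned above can be illustrated numerically: the sketch below draws synthetic droplet radii from a Weibull distribution and compares the empirical size distribution against the analytic CDF. The shape and scale parameters are hypothetical, since the abstract does not report fitted values.

```python
import math
import random

random.seed(0)

# Hypothetical Weibull parameters for a coarsening droplet-size distribution
# (illustrative values; the abstract reports only that a Weibull form fits).
shape, scale = 2.0, 10.0          # shape k and scale lambda (e.g. microns)

# draw synthetic droplet radii; random.weibullvariate takes (scale, shape)
radii = [random.weibullvariate(scale, shape) for _ in range(20000)]

def weibull_cdf(x, lam, k):
    """Analytic Weibull CDF, for comparison with the empirical size distribution."""
    return 1.0 - math.exp(-((x / lam) ** k))

def empirical_cdf(x):
    """Fraction of sampled radii at or below x."""
    return sum(r <= x for r in radii) / len(radii)
```

With 20,000 samples the empirical CDF should agree with the analytic one to within a couple of percent at any radius, which is the kind of check one would run against measured droplet radii.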
How structurally stable are global socioeconomic systems?
Saavedra, Serguei; Rohr, Rudolf P.; Gilarranz, Luis J.; Bascompte, Jordi
2014-01-01
The stability analysis of socioeconomic systems has been centred on answering whether small perturbations when a system is in a given quantitative state will push the system permanently to a different quantitative state. However, typically the quantitative state of socioeconomic systems is subject to constant change. Therefore, a key stability question that has been under-investigated is how strongly the conditions of a system itself can change before the system moves to a qualitatively different behaviour, i.e. how structurally stable the system is. Here, we introduce a framework to investigate the structural stability of socioeconomic systems formed by a network of interactions among agents competing for resources. We measure the structural stability of the system as the range of conditions in the distribution and availability of resources compatible with the qualitative behaviour in which all the constituent agents can be self-sustained across time. To illustrate our framework, we study an empirical representation of the global socioeconomic system formed by countries sharing and competing for multinational companies used as a proxy for resources. We demonstrate that the structural stability of the system is inversely associated with the level of competition and the level of heterogeneity in the distribution of resources. Importantly, we show that the qualitative behaviour of the observed global socioeconomic system is highly sensitive to changes in the distribution of resources. We believe that this work provides a methodological basis to develop sustainable strategies for socioeconomic systems subject to constantly changing conditions. PMID:25165600
An overview of distributed microgrid state estimation and control for smart grids.
Rana, Md Masud; Li, Li
2015-02-12
Given the significant concerns regarding carbon emissions from fossil fuels, global warming and the energy crisis, renewable distributed energy resources (DERs) are going to be integrated in the smart grid. This grid can spread the intelligence of the energy distribution and control system from the central unit to long-distance remote areas, thus enabling accurate state estimation (SE) and wide-area real-time monitoring of these intermittent energy sources. In contrast to the traditional methods of SE, this paper proposes a novel accuracy-dependent Kalman filter (KF) based microgrid SE for the smart grid that uses typical communication systems. This article then proposes a discrete-time linear quadratic regulator to control the state deviations of the microgrid incorporating multiple DERs. Integrating these two approaches with application to the smart grid forms a novel contribution to the green energy and control research communities. Finally, the simulation results show that the proposed KF-based microgrid SE and control algorithm provides accurate SE and control compared with the existing method.
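The paper's accuracy-dependent KF is not specified in the abstract; as a minimal sketch of the underlying predict/update recursion it relies on, the following filters a scalar state from noisy measurements, assuming an illustrative first-order linear model (all coefficients are made up for the demo).

```python
import random

random.seed(2)

# Illustrative scalar system: x_{k+1} = a*x_k + w_k, measurement z_k = x_k + v_k
a, q, r = 0.95, 0.01, 0.25       # dynamics, process-noise var, measurement-noise var

def kalman_filter(zs, x0=0.0, p0=1.0):
    """Standard predict/update Kalman recursion for the scalar model above."""
    x, p, estimates = x0, p0, []
    for z in zs:
        # predict
        x, p = a * x, a * a * p + q
        # update
        k = p / (p + r)          # Kalman gain
        x = x + k * (z - x)
        p = (1 - k) * p
        estimates.append(x)
    return estimates

# simulate a trajectory and noisy measurements
truth, x = [], 1.0
for _ in range(200):
    x = a * x + random.gauss(0.0, q ** 0.5)
    truth.append(x)
zs = [t + random.gauss(0.0, r ** 0.5) for t in truth]
est = kalman_filter(zs)

# the filtered estimate should track the truth more closely than raw measurements
mse_est = sum((e - t) ** 2 for e, t in zip(est, truth)) / len(truth)
mse_raw = sum((z - t) ** 2 for z, t in zip(zs, truth)) / len(truth)
```

A microgrid SE would use the matrix form of the same recursion over bus voltages and DER states; the scalar case keeps the gain and covariance arithmetic visible.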
Implementing Access to Data Distributed on Many Processors
NASA Technical Reports Server (NTRS)
James, Mark
2006-01-01
A reference architecture is defined for an object-oriented implementation of domains, arrays, and distributions written in the programming language Chapel. This technology primarily addresses domains that contain arrays that have regular index sets, with the low-level implementation details being beyond the scope of this discussion. What is defined is a complete set of object-oriented operators that allows one to perform data distributions for domain arrays involving regular arithmetic index sets. What is unique is that these operators allow arbitrary regions of the arrays to be fragmented and distributed across multiple processors with a single point of access, giving the programmer the illusion that all the elements are collocated on a single processor. Today's massively parallel High Productivity Computing Systems (HPCS) are characterized by a modular structure, with a large number of processing and memory units connected by a high-speed network. Locality of access as well as load balancing are primary concerns in these systems, which are typically used for high-performance scientific computation. Data distributions address these issues by providing a range of methods for spreading large data sets across the components of a system. Over the past two decades, many languages, systems, tools, and libraries have been developed for the support of distributions. Since the performance of data parallel applications is directly influenced by the distribution strategy, users often resort to low-level programming models that allow fine-tuning of the distribution aspects affecting performance, but, at the same time, are tedious and error-prone. This technology presents a reusable design of a data-distribution framework for data parallel high-performance applications. Distributions are a means to express locality in systems composed of large numbers of processor and memory components connected by a network.
Since distributions have a great effect on the performance of applications, it is important that the distribution strategy is flexible, so its behavior can change depending on the needs of the application. At the same time, high productivity concerns require that the user be shielded from error-prone, tedious details such as communication and synchronization.
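The single-point-of-access idea can be illustrated with a toy block distribution: a global index space is fragmented across "locales", but callers index the distributed array as if it were local. This is a Python sketch of the concept only, not the Chapel operators themselves; the class and method names are invented for the example.

```python
class BlockDistribution:
    """Map a global index range [0, n) onto p locales in contiguous blocks.

    A toy analogue of a data-distribution operator: the caller indexes the
    global array; the distribution resolves which locale owns the element.
    """

    def __init__(self, n, p):
        self.n, self.p = n, p
        self.block = -(-n // p)                  # ceil(n / p) elements per locale
        # each locale stores only its own fragment of the global array
        self.fragments = [[0] * (min(n, (i + 1) * self.block) - i * self.block)
                          for i in range(p)]

    def locale_of(self, i):
        """Which locale owns global index i."""
        return i // self.block

    def __getitem__(self, i):
        return self.fragments[self.locale_of(i)][i % self.block]

    def __setitem__(self, i, value):
        self.fragments[self.locale_of(i)][i % self.block] = value

d = BlockDistribution(n=10, p=3)    # fragments of size 4, 4, 2
for i in range(10):
    d[i] = i * i                    # global indexing; ownership is resolved internally
```

Swapping in a cyclic or block-cyclic `locale_of` changes the layout without touching caller code, which is the flexibility the framework argues for.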
Experimental Study of Structure/Behavior Relationship for a Metallized Explosive
NASA Astrophysics Data System (ADS)
Bukovsky, Eric; Reeves, Robert; Gash, Alexander; Glumac, Nick
2017-06-01
Metal powders are commonly added to explosive formulations to modify the blast behavior. Although detonation velocity is typically reduced compared to the neat explosive, the metal provides other benefits. Aluminum is a common additive to increase the overall energy output and high-density metals can be useful for enhancing momentum transfer to a target. Typically, metal powder is homogeneously distributed throughout the material; in this study, controlled distributions of metal powder in explosive formulations were investigated. The powder structures were printed using powder bed printing and the porous structures were filled with explosives to create bulk explosive composites. In all cases, the overall ratio between metal and explosive was maintained, but the powder distribution was varied. Samples utilizing uniform distributions to represent typical materials, discrete pockets of metal powder, and controlled, graded powder distributions were created. Detonation experiments were performed to evaluate the influence of metal powder design on the output pressure/time and the overall impulse. This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
Transfer of a wave packet in double-well potential
NASA Astrophysics Data System (ADS)
Yang, Hai-Feng; Hu, Yao-Hua; Tan, Yong-Gang
2018-04-01
Energy potentials with double-well structures are typical in atomic and molecular systems. A manipulation scheme using half-cycle pulses (HCPs) is proposed to transfer a Gaussian wave packet between the two wells. On the basis of quantum mechanical simulations, the time evolution and the energy distribution of the wave packet are evaluated. The effects of the time parameters, amplitude, and number of HCPs on the spatial and energy distributions of the final state and on the transfer efficiency are investigated. After a carefully tailored HCP sequence is applied to the initial wave packet localized in one well, the final state is a wave packet localized in the other well and populated at the lower energy levels with a narrower distribution. The present scheme could be used to control molecular reactions and to prepare atoms with large dipole moments.
Dong, Zhicheng; Bao, Zhengyu; Wu, Guoai; Fu, Yangrong; Yang, Yi
2010-11-01
The content and spatial distribution of lead in the aquatic systems in two Chinese tropical cities in Hainan province (Haikou and Sanya) show an unequal distribution of lead between the urban and the suburban areas. The lead content is significantly higher (72.3 mg/kg) in the urban area than in the suburbs (15.0 mg/kg) in Haikou, but nearly equal in Sanya (41.6 and 43.9 mg/kg). The frequency distribution histograms suggest that the lead in Haikou and in Sanya derives from different natural and/or anthropogenic sources. The isotopic compositions indicate that urban sediment lead in Haikou originates mainly from anthropogenic sources (automobile exhaust, atmospheric deposition, etc.) which contribute much more than the natural sources, while natural lead (basalt and sea sands) is still dominant in the suburban areas in Haikou. In Sanya, the primary source is natural (soils and sea sands).
Study of power management technology for orbital multi-100KWe applications. Volume 2: Study results
NASA Technical Reports Server (NTRS)
Mildice, J. W.
1980-01-01
The preliminary requirements and technology advances required for cost-effective space power management systems for multi-100-kilowatt applications were identified. System requirements were defined by establishing a baseline space platform in the 250 kWe range and examining typical user loads and interfaces. The most critical design parameters identified for detailed analysis include: increased distribution voltages and space plasma losses; the choice between ac and dc distribution systems; shuttle servicing effects on reliability; life cycle costs; and frequency impacts on the power management system and payload systems for ac transmission. The first choice for a power management system for this kind of application and size range is a hybrid ac/dc combination with the following major features: modular design and construction, sized for minimum weight and life cycle cost; high-voltage transmission (100 Vac RMS); medium-voltage array (≥ 440 Vdc); resonant inversion; transformer rotary joint; high-frequency power transmission (≥ 20 kHz); energy storage on the array side of the rotary joint; full redundancy; and 10-year life with minimal replacement and repair.
The Synchrotron Spectrum of Fast Cooling Electrons Revisited.
Granot; Piran; Sari
2000-05-10
We discuss the spectrum arising from synchrotron emission by fast cooling (FC) electrons, when fresh electrons are continually accelerated by a strong blast wave into a power-law distribution of energies. The FC spectrum has so far been described by four power-law segments divided by three break frequencies ν_sa
A Clonal Lineage of Fusarium oxysporum Circulates in the Tap Water of Different French Hospitals.
Edel-Hermann, Véronique; Sautour, Marc; Gautheron, Nadine; Laurent, Julie; Aho, Serge; Bonnin, Alain; Sixt, Nathalie; Hartemann, Philippe; Dalle, Frédéric; Steinberg, Christian
2016-11-01
Fusarium oxysporum is typically a soilborne fungus but can also be found in aquatic environments. In hospitals, water distribution systems may be reservoirs for the fungi responsible for nosocomial infections. F. oxysporum was previously detected in the water distribution systems of five French hospitals. Sixty-eight isolates from water representative of all hospital units that were previously sampled and characterized by translation elongation factor 1α sequence typing were subjected to microsatellite analysis and full-length ribosomal intergenic spacer (IGS) sequence typing. All but three isolates shared common microsatellite loci and a common two-locus sequence type (ST). This ST has an international geographical distribution in both the water networks of hospitals and among clinical isolates. The ST dominant in water was not detected among 300 isolates of F. oxysporum that originated from surrounding soils. Further characterization of 15 isolates by vegetative compatibility testing allowed us to conclude that a clonal lineage of F. oxysporum circulates in the tap water of the different hospitals. We demonstrated that a clonal lineage of Fusarium oxysporum inhabits the water distribution systems of several French hospitals. This clonal lineage, which appears to be particularly adapted to water networks, represents a potential risk for human infection and raises questions about its worldwide distribution. Copyright © 2016, American Society for Microbiology. All Rights Reserved.
NASA Astrophysics Data System (ADS)
Gerber, S.; Holsman, J. P.
1981-02-01
A design analysis is presented of a passive solar, energy-efficient system for a typical three-level, three-bedroom, two-story, garage-under townhouse. The design incorporates the best, most performance-proven and cost-effective products, materials, processes, technologies, and subsystems available today. Seven distinct categories recognized for analysis are identified as: the exterior environment; the interior environment; conservation of energy; natural energy utilization; auxiliary energy utilization; control and distribution systems; and occupant adaptation. Preliminary design features, fenestration systems, the plenum supply system, the thermal storage party fire walls, direct gain storage, the radiant comfort system, and direct passive cooling systems are briefly described.
Dynamical Formation of Low-mass Merging Black Hole Binaries like GW151226
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chatterjee, Sourav; Rodriguez, Carl L.; Kalogera, Vicky
2017-02-20
Using numerical models for star clusters spanning a wide range in ages and metallicities (Z) we study the masses of binary black holes (BBHs) produced dynamically and merging in the local universe (z ≲ 0.2). After taking into account cosmological constraints on star formation rate and metallicity evolution, which realistically relate merger delay times obtained from models with merger redshifts, we show here for the first time that while old, metal-poor globular clusters can naturally produce merging BBHs with heavier components, as observed in GW150914, lower-mass BBHs like GW151226 are easily formed dynamically in younger, higher-metallicity clusters. More specifically, we show that the mass of GW151226 is well within 1σ of the mass distribution obtained from our models for clusters with Z/Z⊙ ≳ 0.5. Indeed, dynamical formation of a system like GW151226 likely requires a cluster that is younger and has a higher metallicity than typical Galactic globular clusters. The LVT151012 system, if real, could have been created in any cluster with Z/Z⊙ ≲ 0.25. On the other hand, GW150914 is more massive (beyond 1σ) than typical BBHs from even the lowest-metallicity (Z/Z⊙ = 0.005) clusters we consider, but is within 2σ of the intrinsic mass distribution from our cluster models with Z/Z⊙ ≲ 0.05; of course, detection biases also push the observed distributions toward higher masses.
Recurrence time statistics for finite size intervals
NASA Astrophysics Data System (ADS)
Altmann, Eduardo G.; da Silva, Elton C.; Caldas, Iberê L.
2004-12-01
We investigate the statistics of recurrences to finite size intervals for chaotic dynamical systems. We find that the typical distribution presents an exponential decay for almost all recurrence times except for a few short times affected by a kind of memory effect. We interpret this effect as being related to the unstable periodic orbits inside the interval. Although it is restricted to a few short times, it changes the whole distribution of recurrences. We show that for systems with strong mixing properties the exponential decay converges to Poissonian statistics when the width of the interval goes to zero. However, we note that special attention to the size of the interval is required in order to guarantee that the short-time memory effect is negligible when one is interested in numerically or experimentally calculated Poincaré recurrence time statistics.
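A minimal numerical illustration of these statistics: the sketch below collects recurrence times of the fully chaotic logistic map x → 4x(1−x) to a finite interval. By Kac's lemma, the mean recurrence time should be close to the reciprocal of the interval's invariant measure. The map, interval, and initial condition are illustrative choices, not taken from the paper.

```python
# Recurrence times to a finite interval for the chaotic logistic map x -> 4x(1-x).

def recurrence_times(x0, interval, n_steps):
    """Iterate the map and record the gaps between successive visits to the interval."""
    lo, hi = interval
    x, last_entry, times = x0, None, []
    for t in range(n_steps):
        x = 4.0 * x * (1.0 - x)
        if lo <= x <= hi:
            if last_entry is not None:
                times.append(t - last_entry)   # time since the previous visit
            last_entry = t
    return times

times = recurrence_times(x0=0.123456, interval=(0.30, 0.35), n_steps=200000)
mean_rt = sum(times) / len(times)
```

For this interval the invariant density 1/(π√(x(1−x))) gives a measure of roughly 0.034, so the mean recurrence time should land near 30 iterations; a histogram of `times` would show the exponential tail (and the short-time deviations) discussed in the abstract.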
NASA Astrophysics Data System (ADS)
Vallée, Geoffroy; Naughton, Thomas; Ong, Hong; Tikotekar, Anand; Engelmann, Christian; Bland, Wesley; Aderholdt, Ferrol; Scott, Stephen L.
Distributed and parallel systems are typically managed with “static” settings: the operating system (OS) and the runtime environment (RTE) are specified at a given time and cannot be changed to fit an application’s needs. This means that every time application developers want to use their application on a new execution platform, the application has to be ported to this new environment, which may be expensive in terms of application modifications and developer time. However, the science resides in the applications and not in the OS or the RTE. Therefore, it should be beneficial to adapt the OS and the RTE to the application instead of adapting the applications to the OS and the RTE.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dana L. Kelly
Typical engineering systems in applications with high failure consequences such as nuclear reactor plants often employ redundancy and diversity of equipment in an effort to lower the probability of failure and therefore risk. However, it has long been recognized that dependencies exist in these redundant and diverse systems. Some dependencies, such as common sources of electrical power, are typically captured in the logic structure of the risk model. Others, usually referred to as intercomponent dependencies, are treated implicitly by introducing one or more statistical parameters into the model. Such common-cause failure models have limitations in a simulation environment. In addition, substantial subjectivity is associated with parameter estimation for these models. This paper describes an approach in which system performance is simulated by drawing samples from the joint distributions of dependent variables. The approach relies on the notion of a copula distribution, a notion which has been employed by the actuarial community for ten years or more, but which has seen only limited application in technological risk assessment. The paper also illustrates how equipment failure data can be used in a Bayesian framework to estimate the parameter values in the copula model. This approach avoids much of the subjectivity required to estimate parameters in traditional common-cause failure models. Simulation examples are presented for failures in time. The open-source software package R is used to perform the simulations. The open-source software package WinBUGS is used to perform the Bayesian inference via Markov chain Monte Carlo sampling.
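The paper uses R and WinBUGS; as a language-neutral sketch of the core sampling idea, the following draws dependent failure-time pairs from a Gaussian copula with exponential marginals. All parameter values (correlation, failure rates, sample size) are illustrative, not from the paper.

```python
import math
import random

random.seed(3)

def std_normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def correlated_failure_times(rho, rate_a, rate_b, n):
    """Draw n pairs of exponential failure times joined by a Gaussian copula."""
    pairs = []
    for _ in range(n):
        # correlated standard normals
        z1 = random.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * random.gauss(0.0, 1.0)
        # push through the normal CDF to get correlated uniforms (the copula)
        u1, u2 = std_normal_cdf(z1), std_normal_cdf(z2)
        # invert the exponential CDF to impose the marginal failure-time distributions
        t1 = -math.log(1.0 - u1) / rate_a
        t2 = -math.log(1.0 - u2) / rate_b
        pairs.append((t1, t2))
    return pairs

# e.g. two redundant components with dependent times-to-failure (hours)
pairs = correlated_failure_times(rho=0.8, rate_a=1e-3, rate_b=1e-3, n=5000)
```

The separation of marginals (component failure-time models) from the dependence structure (the copula) is exactly what lets dependent redundant components be simulated without a joint parametric model.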
NASA Astrophysics Data System (ADS)
Guenther, A. B.; Duhl, T.
2011-12-01
Increasing computational resources have enabled a steady improvement in the spatial resolution used for earth system models. Land surface models and landcover distributions have kept ahead by providing higher spatial resolution than typically used in these models. Satellite observations have played a major role in providing high resolution landcover distributions over large regions or the entire earth surface, but ground observations are needed to calibrate these data and provide accurate inputs for models. As our ability to resolve individual landscape components improves, it is important to consider what scale is sufficient for providing inputs to earth system models. The required spatial scale is dependent on the processes being represented and the scientific questions being addressed. This presentation will describe the development of a contiguous U.S. landcover database using high resolution imagery (1 to 1000 meters) and surface observations of species composition and other landcover characteristics. The database includes plant functional types and species composition and is suitable for driving land surface models (CLM and MEGAN) that predict land surface exchange of carbon, water, energy and biogenic reactive gases (e.g., isoprene, sesquiterpenes, and NO). We investigate the sensitivity of model results to landcover distributions with spatial scales ranging over six orders of magnitude (1 meter to 1000000 meters). The implications for predictions of regional climate and air quality will be discussed along with recommendations for regional and global earth system modeling.
Quantitative Mapping of the Spatial Distribution of Nanoparticles in Endo-Lysosomes by Local pH.
Wang, Jing; MacEwan, Sarah R; Chilkoti, Ashutosh
2017-02-08
Understanding the intracellular distribution and trafficking of nanoparticle drug carriers is necessary to elucidate their mechanisms of drug delivery and is helpful in the rational design of novel nanoparticle drug delivery systems. The traditional immunofluorescence method to study intracellular distribution of nanoparticles using organelle-specific antibodies is laborious and subject to artifacts. As an alternative, we developed a new method that exploits ratiometric fluorescence imaging of a pH-sensitive Lysosensor dye to visualize and quantify the spatial distribution of nanoparticles in the endosomes and lysosomes of live cells. Using this method, we compared the endolysosomal distribution of cell-penetrating peptide (CPP)-functionalized micelles to unfunctionalized micelles and found that CPP-functionalized micelles exhibited faster endosome-to-lysosome trafficking than unfunctionalized micelles. Ratiometric fluorescence imaging of pH-sensitive Lysosensor dye allows rapid quantitative mapping of nanoparticle distribution in endolysosomes in live cells while minimizing artifacts caused by extensive sample manipulation typical of alternative approaches. This new method can thus serve as an alternative to traditional immunofluorescence approaches to study the intracellular distribution and trafficking of nanoparticles within endosomes and lysosomes.
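The ratiometric principle can be sketched simply: the pH estimate at each pixel depends only on the ratio of the two emission channels, which cancels out dye concentration and illumination differences. The linear calibration below is a hypothetical stand-in for a real (typically sigmoidal) Lysosensor calibration curve, and all numbers are invented for illustration.

```python
# Hypothetical calibration: ratio R = I_ch1 / I_ch2 assumed linear in pH over ~4-6.
# Real Lysosensor calibrations are sigmoidal; a linear fit is a simplification.

def ratio_to_ph(i_ch1, i_ch2, slope=-2.5, intercept=13.0):
    """Map a two-channel intensity pair to pH via pH = slope * R + intercept."""
    r = i_ch1 / i_ch2
    return slope * r + intercept

# a toy 2x2 two-channel image (arbitrary intensity units)
ch1 = [[120.0, 300.0], [150.0, 90.0]]
ch2 = [[ 40.0, 100.0], [ 60.0, 30.0]]

# per-pixel pH map from the channel ratio
ph_map = [[ratio_to_ph(a, b) for a, b in zip(r1, r2)]
          for r1, r2 in zip(ch1, ch2)]
```

Note that the first, second, and fourth pixels have different absolute intensities but the same ratio, so they map to the same pH, which is the artifact-suppression property the method relies on.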
SimWorx: An ADA Distributed Simulation Application Framework Supporting HLA and DIS
1996-12-01
The authors emphasize that most real systems have elements of several architectural styles; these are called heterogeneous architectures. In order for frameworks to be used, understood, and maintained, Adair emphasizes that they must be clearly documented.
Technical Performance Measures and Distributed-Simulation Training Systems
2000-01-01
and Salas (1995) indicate that "free play" training exercises ... performance change from training period to training period, whereas the alternative to "free play"—a structured exercise—was expensive to build and ... semiautomated-force operators used their "free play" prerogative in the second run. Specifically, units typically train against a less able opposing force
NASA Astrophysics Data System (ADS)
Soelistijanto, B.; Muliadi, V.
2018-03-01
Diffie-Hellman (DH) provides an efficient key exchange system by reducing the number of cryptographic keys distributed in the network. In this method, a node broadcasts a single public key to all nodes in the network, and in turn each peer uses this key to establish a shared secret key which then can be utilized to encrypt and decrypt traffic between the peer and the given node. In this paper, we evaluate the key transfer delay and cost performance of DH in opportunistic mobile networks, a specific scenario of MANETs where complete end-to-end paths rarely exist between sources and destinations; consequently, the end-to-end delays in these networks are much greater than typical MANETs. Simulation results, driven by a random node movement model and real human mobility traces, showed that DH outperforms a typical key distribution scheme based on the RSA algorithm in terms of key transfer delay, measured by average key convergence time; however, DH performs as well as the benchmark in terms of key transfer cost, evaluated by total key (copies) forwards.
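The key-establishment step described above can be sketched in a few lines. This is the textbook Diffie-Hellman exchange, with a Mersenne prime modulus chosen only for readability; real deployments use standardized groups (e.g. the RFC 3526 MODP groups) and authenticated exchanges.

```python
import secrets

# Public parameters shared by all nodes (illustrative, NOT secure group choices)
p = 2**127 - 1        # a known (Mersenne) prime modulus
g = 5                 # public base

def keypair():
    """Pick a private exponent and derive the public key broadcast to the network."""
    private = secrets.randbelow(p - 2) + 1
    public = pow(g, private, p)
    return private, public

a_priv, a_pub = keypair()   # the node that broadcasts its single public key
b_priv, b_pub = keypair()   # one peer in the network

# each side combines its own private key with the other's public key
shared_a = pow(b_pub, a_priv, p)   # peer A's view of the shared secret
shared_b = pow(a_pub, b_priv, p)   # peer B's view of the shared secret
```

Both sides arrive at g^(a·b) mod p without the private exponents ever crossing the network, which is why one broadcast public key per node suffices; in the opportunistic-network setting of the paper, the cost being measured is the delay in delivering that one broadcast.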
Cardiological database management system as a mediator to clinical decision support.
Pappas, C; Mavromatis, A; Maglaveras, N; Tsikotis, A; Pangalos, G; Ambrosiadou, V
1996-03-01
An object-oriented medical database management system is presented for a typical cardiologic center, facilitating epidemiological trials. Object-oriented analysis and design were used for the system design, offering advantages for the integrity and extendibility of medical information systems. The system was developed using object-oriented design and programming methodology, the C++ language and the Borland Paradox Relational Data Base Management System on an MS-Windows NT environment. Particular attention was paid to system compatibility, portability, the ease of use, and the suitable design of the patient record so as to support the decisions of medical personnel in cardiovascular centers. The system was designed to accept complex, heterogeneous, distributed data in various formats and from different kinds of examinations such as Holter, Doppler and electrocardiography.
Heilpern, Tal; Manjare, Manoj; Govorov, Alexander O; Wiederrecht, Gary P; Gray, Stephen K; Harutyunyan, Hayk
2018-05-10
Developing a fundamental understanding of ultrafast non-thermal processes in metallic nanosystems will lead to applications in photodetection, photochemistry and photonic circuitry. Typically, non-thermal and thermal carrier populations in plasmonic systems are inferred either by making assumptions about the functional form of the initial energy distribution or by using indirect sensors such as localized plasmon frequency shifts. Here we directly determine non-thermal and thermal distributions and dynamics in thin films by applying a double inversion procedure to optical pump-probe data that relates the reflectivity changes around the Fermi energy to the changes in the dielectric function and in the single-electron energy band occupancies. When applied to normal incidence measurements our method uncovers the ultrafast excitation of a non-Fermi-Dirac distribution and its subsequent thermalization dynamics. Furthermore, when applied to the Kretschmann configuration, we show that the excitation of propagating plasmons leads to a broader energy distribution of electrons due to the enhanced Landau damping.
Woods, Gwen C; Trenholm, Rebecca A; Hale, Bruce; Campbell, Zeke; Dickenson, Eric R V
2015-07-01
Nitrosamines are considered to pose greater health risks than currently regulated DBPs and are subsequently listed as a priority pollutant by the EPA, with potential for future regulation. Denver Water, as part of the EPA's Unregulated Contaminant Monitoring Rule 2 (UCMR2) monitoring campaign, found detectable levels of N-nitrosodimethylamine (NDMA) at all sites of maximum residency within the distribution system. To better understand the occurrence of nitrosamines and nitrosamine precursors, Denver Water undertook a comprehensive year-long monitoring campaign. Samples were taken every two weeks to monitor for NDMA in the distribution system, and quarterly sampling events further examined 9 nitrosamines and nitrosamine precursors throughout the treatment and distribution systems. NDMA levels within the distribution system were typically low (<1.3 to 7.2 ng/L), with a remote distribution site (frequently >200 h of residency) experiencing the highest concentrations found. Eight other nitrosamines (N-nitrosomethylethylamine, N-nitrosodiethylamine, N-nitroso-di-n-propylamine, N-nitroso-di-n-butylamine, N-nitroso-di-phenylamine, N-nitrosopyrrolidine, N-nitrosopiperidine, N-nitrosomorpholine) were also monitored, but none of these 8, nor precursors of these 8 [as estimated with formation potential (FP) tests], were detected anywhere in raw, partially-treated or distribution samples. Throughout the year, there was evidence that seasonality may impact NDMA formation, such that lower temperatures (~5-10°C) produced greater NDMA than during warmer months. The year of sampling further provided evidence that water quality and weather events may impact NDMA precursor loads. Precursor loading estimates demonstrated that NDMA precursors increased during treatment (potentially from cationic polymer coagulant aids). The precursor analysis also provided evidence that precursors may have increased further within the distribution system itself.
This comprehensive study of a large-scale drinking water system provides insight into the variability of NDMA occurrence in a chloraminated system, which may be impacted by seasonality, water quality changes and/or the varied origins of NDMA precursors within a given system. Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Godsey, S. E.; Kirchner, J. W.
2008-12-01
The mean residence time - the average time that it takes rainfall to reach the stream - is a basic parameter used to characterize catchment processes. Heterogeneities in these processes lead to a distribution of travel times around the mean residence time. By examining this travel time distribution, we can better predict catchment response to contamination events. A catchment system with shorter residence times or narrower distributions will respond quickly to contamination events, whereas systems with longer residence times or longer-tailed distributions will respond more slowly to those same contamination events. The travel time distribution of a catchment is typically inferred from time series of passive tracers (e.g., water isotopes or chloride) in precipitation and streamflow. Variations in the tracer concentration in streamflow are usually damped compared to those in precipitation, because precipitation inputs from different storms (with different tracer signatures) are mixed within the catchment. Mathematically, this mixing process is represented by the convolution of the travel time distribution and the precipitation tracer inputs to generate the stream tracer outputs. Because convolution in the time domain is equivalent to multiplication in the frequency domain, it is relatively straightforward to estimate the parameters of the travel time distribution in either domain. In the time domain, the parameters describing the travel time distribution are typically estimated by maximizing the goodness of fit between the modeled and measured tracer outputs. In the frequency domain, the travel time distribution parameters can be estimated by fitting a power-law curve to the ratio of precipitation spectral power to stream spectral power. Differences between the methods of parameter estimation in the time and frequency domain mean that these two methods may respond differently to variations in data quality, record length and sampling frequency. 
Here we evaluate how well these two methods of travel time parameter estimation respond to different sources of uncertainty, and compare the methods to one another. We do this by generating synthetic tracer input time series of different lengths and convolving them with specified travel-time distributions to generate synthetic output time series. We then sample both the input and output time series at various sampling intervals and corrupt the time series with realistic error structures. Using these 'corrupted' time series, we infer the apparent travel time distribution and compare it to the known distribution that was used to generate the synthetic data in the first place. This analysis allows us to quantify how different record lengths, sampling intervals, and error structures in the tracer measurements affect the apparent mean residence time and the apparent shape of the travel time distribution.
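The time-domain estimation procedure described above can be sketched with a toy experiment: convolve a synthetic tracer input with a known travel-time distribution, corrupt the output, and recover the mean residence time by a grid-search fit. The exponential distribution shape, record length, noise level, and grid are illustrative assumptions, not the authors' actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000                       # daily time steps
tau_true = 60.0                # true mean residence time (days)

# Synthetic precipitation tracer input (uncorrelated storm signatures)
c_in = rng.normal(0.0, 1.0, n)

t = np.arange(n)

def convolve_out(tau):
    """Stream tracer output: convolution of the input with an
    exponential travel-time distribution of mean tau."""
    h = np.exp(-t / tau) / tau
    return np.convolve(c_in, h)[:n]

# 'Measured' output, corrupted with noise
c_out = convolve_out(tau_true) + rng.normal(0.0, 0.02, n)

# Time-domain estimation: grid search maximizing goodness of fit
taus = np.arange(10.0, 150.0, 1.0)
errs = [np.sum((convolve_out(tau) - c_out) ** 2) for tau in taus]
tau_hat = float(taus[int(np.argmin(errs))])
```

Note that the output series is strongly damped relative to the input, exactly the mixing effect the abstract describes.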
Advanced active health monitoring system of liquid rocket engines
NASA Astrophysics Data System (ADS)
Qing, Xinlin P.; Wu, Zhanjun; Beard, Shawn; Chang, Fu-Kuo
2008-11-01
An advanced SMART TAPE system has been developed for real-time in-situ monitoring and long-term tracking of the structural integrity of pressure vessels in liquid rocket engines. The practical implementation of the structural health monitoring (SHM) system, including the distributed sensor network, portable diagnostic hardware, and dedicated data analysis software, is addressed based on the harsh operating environment. Extensive tests were conducted on a simulated large booster LOX-H2 engine propellant duct to evaluate the survivability and functionality of the system under the operating conditions of typical liquid rocket engines, such as cryogenic temperatures and vibration loads. The test results demonstrated that the developed SHM system could survive the combined cryogenic temperature and vibration environments and effectively detect cracks as small as 2 mm.
System performance predictions for Space Station Freedom's electric power system
NASA Technical Reports Server (NTRS)
Kerslake, Thomas W.; Hojnicki, Jeffrey S.; Green, Robert D.; Follo, Jeffrey C.
1993-01-01
The Space Station Freedom Electric Power System (EPS) capability to effectively deliver power to housekeeping and user loads continues to strongly influence Freedom's design and planned approaches for assembly and operations. The EPS design consists of silicon photovoltaic (PV) arrays, nickel-hydrogen batteries, and direct-current power management and distribution hardware and cabling. To properly characterize the inherent EPS design capability, detailed system performance analyses must be performed for early assembly stages as well as for the fully assembled station up to 15 years after beginning of life. Such analyses were repeatedly performed using the FORTRAN code SPACE (Station Power Analysis for Capability Evaluation), developed at the NASA Lewis Research Center over a 10-year period. SPACE combines orbital mechanics routines, station orientation/pointing routines, PV array and battery performance models, and a distribution system load-flow analysis to predict EPS performance. Time-dependent performance degradation, low-Earth-orbit environmental interactions, and EPS architecture build-up are incorporated in SPACE. Results from two typical SPACE analytical cases are presented: (1) an electric-load-driven case and (2) a maximum EPS capability case.
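As a rough illustration of the orbital bookkeeping a tool like SPACE must combine with its power models, the sketch below estimates eclipse time for a circular low-Earth orbit using a cylindrical-shadow approximation and sizes the per-orbit battery draw. The altitude and load figures are assumed for illustration only and are not Freedom's actual design numbers.

```python
import math

R_E = 6378.0               # Earth radius, km
MU = 398600.4              # Earth's gravitational parameter, km^3/s^2

def orbit_period_min(alt_km):
    """Circular-orbit period from Kepler's third law, in minutes."""
    r = R_E + alt_km
    return 2.0 * math.pi * math.sqrt(r ** 3 / MU) / 60.0

def eclipse_fraction(alt_km):
    """Fraction of the orbit spent in Earth's (cylindrical) shadow for a
    circular orbit with the sun in the orbit plane -- the worst case."""
    r = R_E + alt_km
    return math.asin(R_E / r) / math.pi

alt_km = 350.0             # assumed low-Earth-orbit altitude
load_kw = 18.75            # assumed eclipse-side load, not Freedom's real figure

period = orbit_period_min(alt_km)
t_eclipse = eclipse_fraction(alt_km) * period       # minutes per orbit in shadow
battery_kwh = load_kw * t_eclipse / 60.0            # energy batteries must supply
```

A full tool like SPACE layers pointing, array degradation, and load-flow analysis on top of this basic geometry.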
Detecting Abnormal Machine Characteristics in Cloud Infrastructures
NASA Technical Reports Server (NTRS)
Bhaduri, Kanishka; Das, Kamalika; Matthews, Bryan L.
2011-01-01
In the cloud computing environment, resources are accessed as services rather than as products. Monitoring this system for performance is crucial because of the typical pay-per-use packages bought by users for their jobs. With the huge number of machines currently in the cloud, it is often extremely difficult for system administrators to keep track of all machines using distributed monitoring programs such as Ganglia, which lack system health assessment and summarization capabilities. To overcome this problem, we propose a technique for automated anomaly detection using machine performance data in the cloud. Our algorithm is entirely distributed and runs locally on each computing machine in the cloud, ranking the machines in order of their anomalous behavior for given jobs. There is no need to centralize any of the performance data for the analysis, and at the end of the analysis our algorithm generates error reports, thereby allowing system administrators to take corrective actions. Experiments performed on real data sets collected for different jobs validate the fact that our algorithm has a low overhead for tracking anomalous machines in a cloud infrastructure.
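A minimal sketch of the idea of fully local scoring: each machine computes a scalar anomaly score from its own performance trace, and only those scalars are exchanged for ranking. The z-score metric, window size, and node names are hypothetical stand-ins, not the paper's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-machine performance traces (e.g., CPU load samples);
# "node3" is injected with a late step change to act as the anomaly.
fleet = {f"node{i}": rng.normal(0.5, 0.05, 500) for i in range(8)}
fleet["node3"][400:] += 0.4

def local_score(trace, window=50):
    """Runs locally on each machine: deviation of the most recent window
    from the machine's own earlier baseline, as a z-score."""
    hist, recent = trace[:-window], trace[-window:]
    return abs(recent.mean() - hist.mean()) / (hist.std() + 1e-12)

# Only the scalar scores leave each machine; ranking needs no raw data.
scores = {name: local_score(tr) for name, tr in fleet.items()}
ranking = sorted(scores, key=scores.get, reverse=True)
```

The raw traces never leave their machines, which is the property the abstract emphasizes.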
A wireless data acquisition system for acoustic emission testing
NASA Astrophysics Data System (ADS)
Zimmerman, A. T.; Lynch, J. P.
2013-01-01
As structural health monitoring (SHM) systems have seen increased demand due to lower costs and greater capabilities, wireless technologies have emerged that enable the dense distribution of transducers and the distributed processing of sensor data. In parallel, ultrasonic techniques such as acoustic emission (AE) testing have become increasingly popular in the non-destructive evaluation of materials and structures. These techniques, which involve the analysis of frequency content between 1 kHz and 1 MHz, have proven effective in detecting the onset of cracking and other early-stage failure in active structures such as airplanes in flight. However, these techniques typically involve the use of expensive and bulky monitoring equipment capable of accurately sensing AE signals at sampling rates greater than 1 million samples per second. In this paper, a wireless data acquisition system is presented that is capable of collecting, storing, and processing AE data at rates of up to 20 MHz. Processed results can then be wirelessly transmitted in real-time, creating a system that enables the use of ultrasonic techniques in large-scale SHM systems.
Penna, Vessoni Thereza Christina; Martins, Silva Alzira Maria; Mazzola, Priscila Gava
2002-01-01
Background A typical purification system that provides purified water meeting ionic and organic chemical standards must be protected from microbial proliferation to minimize cross-contamination for use in cleaning and preparations in pharmaceutical industries and in health environments. Methodology Samples of water, taken directly from the public distribution water tank and at twelve different stages of a typical purification system, were analyzed for the identification of isolated bacteria. Two miniature kits were used: (i) an identification system (api 20 NE, Bio-Mérieux) for non-enteric and non-fermenting gram-negative rods; and (ii) an identification system (BBL Crystal, Becton Dickinson) for enteric and non-fermenting gram-negative rods. The efficiency of the chemical sanitizers used in the stages of the system against the bacteria isolated and identified in the sampled water was evaluated by the minimum inhibitory concentration (MIC) method. Results The 78 isolated colonies were identified as belonging to the following bacterial genera: Pseudomonas, Flavobacterium and Acinetobacter. According to the miniature kits used in the identification, there was a prevalence of P. aeruginosa 32.05%, P. picketti (Ralstonia picketti) 23.08%, P. vesiculares 12.82%, P. diminuta 11.54%, F. aureum 6.42%, P. fluorescens 5.13%, A. lwoffi 2.56%, P. putida 2.56%, P. alcaligenes 1.28%, P. paucimobilis 1.28%, and F. multivorum 1.28%. Conclusions We found that further work was required for the identification of gram-negative non-fermenting bacteria isolated from drinking water and water purification systems, since the genus Pseudomonas comprises opportunistic pathogens which disperse and adhere easily to surfaces, forming a biofilm which interferes with cleaning and disinfection procedures in hospital and industrial environments. PMID:12182763
Measure Guideline. Steam System Balancing and Tuning for Multifamily Residential Buildings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choi, Jayne; Ludwig, Peter; Brand, Larry
2013-04-01
This guideline provides building owners, professionals involved in multifamily audits, and contractors with insights for improving the balance and tuning of steam systems. It provides readers an overview of one-pipe steam heating systems, guidelines for evaluating steam systems, typical costs and savings, and guidelines for ensuring quality installations. It also directs readers to additional resources for details not included here. Measures for balancing a distribution system that are covered include replacing main line vents and upgrading radiator vents. Also included is a discussion on upgrading boiler controls and the importance of tuning the settings on new or existing boiler controls. The guideline focuses on one-pipe steam systems, though many of the assessment methods can be generalized to two-pipe steam systems.
A Test Generation Framework for Distributed Fault-Tolerant Algorithms
NASA Technical Reports Server (NTRS)
Goodloe, Alwyn; Bushnell, David; Miner, Paul; Pasareanu, Corina S.
2009-01-01
Heavyweight formal methods such as theorem proving have been successfully applied to the analysis of safety critical fault-tolerant systems. Typically, the models and proofs performed during such analysis do not inform the testing process of actual implementations. We propose a framework for generating test vectors from specifications written in the Prototype Verification System (PVS). The methodology uses a translator to produce a Java prototype from a PVS specification. Symbolic (Java) PathFinder is then employed to generate a collection of test cases. A small example is employed to illustrate how the framework can be used in practice.
Rio: a dynamic self-healing services architecture using Jini networking technology
NASA Astrophysics Data System (ADS)
Clarke, James B.
2002-06-01
Current mainstream distributed Java architectures offer great capabilities embracing conventional enterprise architecture patterns and designs. These traditional systems provide robust transaction-oriented environments that are in large part focused on data and host processors. Typically, these implementations require that an entire application be deployed on every machine that will be used as a compute resource. In order for this to happen, the application is usually taken down, installed, and started with all systems in sync and knowing about each other. Static environments such as these are extremely difficult to set up, deploy, and administer.
Dynamic-scanning-electron-microscope study of friction and wear
NASA Technical Reports Server (NTRS)
Brainard, W. A.; Buckley, D. H.
1974-01-01
A friction and wear apparatus was built into a real time scanning electron microscope (SEM). The apparatus and SEM comprise a system which provides the capability of performing dynamic friction and wear experiments in situ. When the system is used in conjunction with dispersive X-ray analysis, a wide range of information on the wearing process can be obtained. The type of wear and variation with speed, load, and time can be investigated. The source, size, and distribution of wear particles can be determined and metallic transferal observed. Some typical results obtained with aluminum, copper, and iron specimens are given.
Solar Power Satellite (SPS) fiber optic link assessment
NASA Technical Reports Server (NTRS)
1980-01-01
A feasibility demonstration of a 980 MHz fiber optic link for the Solar Power Satellite (SPS) phase reference distribution system was accomplished. A dual fiber-optic link suitable for a phase distribution frequency of 980 MHz was built and tested. The major link components include single mode injection laser diodes, avalanche photodiodes, and multimode high bandwidth fibers. Signal throughput was demonstrated to be stable and of high quality in all cases. For a typical SPS link length of 200 meters, the transmitted phase at 980 MHz varies approximately 2.5 degrees for every deg C of fiber temperature change. This rate is acceptable because of the link length compensation feature of the phase control design.
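The reported sensitivity supports a quick back-of-envelope check on what a temperature excursion does to the distributed phase reference. The 10 degC excursion below is an assumed illustration, not a figure from the study.

```python
# Back-of-envelope for the reported phase sensitivity of the 980 MHz link
DEG_PER_DEG_C = 2.5      # phase shift (deg) per deg C over the 200 m run (from the study)
F_HZ = 980e6             # phase-reference distribution frequency

delta_t_c = 10.0         # assumed fiber temperature excursion, deg C
phase_shift_deg = DEG_PER_DEG_C * delta_t_c     # 25 degrees of accumulated phase shift

# Equivalent time delay per deg C: 2.5 deg of 980 MHz carrier phase
delay_ps_per_c = DEG_PER_DEG_C / 360.0 / F_HZ * 1e12
```

A drift of this size is what the link-length compensation feature of the phase control design is there to absorb.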
Facility Monitoring: A Qualitative Theory for Sensor Fusion
NASA Technical Reports Server (NTRS)
Figueroa, Fernando
2001-01-01
Data fusion and sensor management approaches have largely been implemented with centralized and hierarchical architectures. Numerical and statistical methods are the most common data fusion methods found in these systems. Given the proliferation and low cost of processing power, there is now an emphasis on designing distributed and decentralized systems. These systems use analytical/quantitative techniques or qualitative reasoning methods for data fusion. Based on other work by the author, a sensor may be treated as a highly autonomous (decentralized) unit. Each highly autonomous sensor (HAS) is capable of extracting qualitative behaviours from its data. For example, it detects spikes, disturbances, noise levels, off-limit excursions, step changes, drift, and other typical measured trends. In this context, this paper describes a distributed sensor fusion paradigm and theory where each sensor in the system is a HAS. Hence, given the rich qualitative information from each HAS, a paradigm and formal definitions are given so that sensors and processes can reason and make decisions at the qualitative level. This approach to sensor fusion makes possible the implementation of intuitive (effective) methods to monitor, diagnose, and compensate processes/systems and their sensors. This paradigm facilitates a balanced distribution of intelligence (code and/or hardware) to the sensor level, the process/system level, and a higher controller level. The primary application of interest is in intelligent health management of rocket engine test stands.
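A toy sketch of the qualitative extraction a HAS might perform, here limited to two of the listed behaviours, spikes and step changes, using a robust (MAD-based) threshold on first differences. The signal, threshold, and classification rule are illustrative assumptions, not the paper's formalism.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical sensor trace: baseline noise plus one spike and one step change
x = rng.normal(0.0, 0.1, 300)
x[100] += 3.0          # spike
x[200:] += 1.5         # step change

def qualitative_features(x, z=5.0):
    """Toy extraction of two qualitative behaviours a highly autonomous
    sensor might report: spikes and step changes."""
    d = np.diff(x)
    # Robust scale estimate (MAD) so the outliers don't inflate the threshold
    s = 1.4826 * np.median(np.abs(d - np.median(d)))
    events, i = [], 0
    while i < len(d):
        if abs(d[i]) > z * s:
            # A spike's jump is undone by the next sample; a step's is not
            if i + 1 < len(d) and abs(d[i] + d[i + 1]) < z * s:
                events.append(("spike", i + 1))
                i += 2          # skip the spike's return edge
            else:
                events.append(("step", i + 1))
                i += 1
        else:
            i += 1
    return events

events = qualitative_features(x)
```

Only these qualitative event labels, not the raw samples, would need to cross the sensor boundary for higher-level reasoning.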
Measurements of charge distributions of the fragments in the low energy fission reaction
NASA Astrophysics Data System (ADS)
Wang, Taofeng; Han, Hongyin; Meng, Qinghua; Wang, Liming; Zhu, Liping; Xia, Haihong
2013-01-01
The measurement of charge distributions of fragments in the spontaneous fission of 252Cf has been performed using a unique detector setup consisting of a typical grid ionization chamber and a ΔE-E particle telescope, in which a thin grid ionization chamber served as the ΔE section and the E section was an Au-Si surface barrier detector. The typical physical quantities of fragments, such as mass number and kinetic energies, as well as the energy deposition in the gas ΔE detector and the E detector, were derived from the coincident measurement data. The charge distributions of the light fragments for fixed mass number A2* and total kinetic energy (TKE) were obtained by least-squares fits of the response functions of the ΔE detector with multi-Gaussian functions representing the different elements. The results of the charge distributions for some typical fragments are shown in this article, indicating that this detection setup has a charge resolution capability of Z:ΔZ > 40:1. The experimental method developed in this work for determining the charge distributions of fragments is expected to be employed in the neutron-induced fissions of 232Th and 238U and other low energy fission reactions.
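Once the element centroids and a common width are fixed, a multi-Gaussian fit of the ΔE response is linear in the element yields, so ordinary least squares suffices. The centroids, width, and yields below are synthetic stand-ins for illustration, not the measured 252Cf data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed DeltaE centroids (channels) and common width for three
# neighbouring elements -- synthetic stand-ins, not the 252Cf data
centers = np.array([120.0, 135.0, 150.0])
sigma = 4.0
true_yields = np.array([300.0, 500.0, 200.0])

ch = np.arange(80.0, 190.0)

def gauss(mu):
    return np.exp(-0.5 * ((ch - mu) / sigma) ** 2)

# Synthetic DeltaE response: multi-Gaussian sum plus noise
spectrum = sum(y * gauss(mu) for y, mu in zip(true_yields, centers))
spectrum = spectrum + rng.normal(0.0, 2.0, ch.size)

# With centroids and width fixed, the fit is linear in the yields,
# so ordinary least squares recovers them directly
A = np.column_stack([gauss(mu) for mu in centers])
yields, *_ = np.linalg.lstsq(A, spectrum, rcond=None)
fractions = yields / yields.sum()       # relative charge distribution
```

In the real analysis the centroids and widths themselves come from the detector response calibration, which makes the fit nonlinear; this sketch shows only the final linear step.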
Prediction and typicality in multiverse cosmology
NASA Astrophysics Data System (ADS)
Azhar, Feraz
2014-02-01
In the absence of a fundamental theory that precisely predicts values for observable parameters, anthropic reasoning attempts to constrain probability distributions over those parameters in order to facilitate the extraction of testable predictions. The utility of this approach has been vigorously debated of late, particularly in light of theories that claim we live in a multiverse, where parameters may take differing values in regions lying outside our observable horizon. Within this cosmological framework, we investigate the efficacy of top-down anthropic reasoning based on the weak anthropic principle. We argue contrary to recent claims that it is not clear one can either dispense with notions of typicality altogether or presume typicality, in comparing resulting probability distributions with observations. We show in a concrete, top-down setting related to dark matter, that assumptions about typicality can dramatically affect predictions, thereby providing a guide to how errors in reasoning regarding typicality translate to errors in the assessment of predictive power. We conjecture that this dependence on typicality is an integral feature of anthropic reasoning in broader cosmological contexts, and argue in favour of the explicit inclusion of measures of typicality in schemes invoking anthropic reasoning, with a view to extracting predictions from multiverse scenarios.
Methane Leaks from Natural Gas Systems Follow Extreme Distributions.
Brandt, Adam R; Heath, Garvin A; Cooley, Daniel
2016-11-15
Future energy systems may rely on natural gas as a low-cost fuel to support variable renewable power. However, leaking natural gas causes climate damage because methane (CH4) has a high global warming potential. In this study, we use extreme-value theory to explore the distribution of natural gas leak sizes. By analyzing ∼15 000 measurements from 18 prior studies, we show that all available natural gas leakage data sets are statistically heavy-tailed, and that gas leaks are more extremely distributed than other natural and social phenomena. A unifying result is that the largest 5% of leaks typically contribute over 50% of the total leakage volume. While prior studies used log-normal model distributions, we show that log-normal functions poorly represent tail behavior. Our results suggest that published uncertainty ranges of CH4 emissions are too narrow, and that larger sample sizes are required in future studies to achieve targeted confidence intervals. Additionally, we find that cross-study aggregation of data sets to increase sample size is not recommended due to apparent deviation between sampled populations. Understanding the nature of leak distributions can improve emission estimates, better illustrate their uncertainty, allow prioritization of source categories, and improve sampling design. Also, these data can be used for more effective design of leak detection technologies.
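The headline statistic, the largest 5% of leaks carrying over half the volume, can be reproduced with a synthetic heavy-tailed sample, and contrasted with a lognormal alternative whose tail carries much less. The Pareto and lognormal shape parameters below are illustrative choices, not values fitted to the 18 studies.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 15_000

def top_share(x, frac=0.05):
    """Fraction of the total volume contributed by the largest `frac` of leaks."""
    k = int(len(x) * frac)
    return np.sort(x)[-k:].sum() / x.sum()

# Heavy-tailed (power-law) leak sizes vs. a lognormal alternative;
# shape parameters are illustrative, not fitted to the measurement data
pareto_leaks = 1.0 + rng.pareto(1.2, n)
lognorm_leaks = rng.lognormal(0.0, 1.0, n)

share_pareto = top_share(pareto_leaks)
share_lognorm = top_share(lognorm_leaks)
```

The gap between the two shares is the practical content of the paper's warning that lognormal fits understate the tail.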
Boundary control for a flexible manipulator based on infinite dimensional disturbance observer
NASA Astrophysics Data System (ADS)
Jiang, Tingting; Liu, Jinkun; He, Wei
2015-07-01
This paper focuses on disturbance observer and boundary control design for a flexible manipulator in the presence of both boundary disturbance and spatially distributed disturbance. Taking the infinite-dimensionality of the flexural dynamics into account, this study proposes a partial differential equation (PDE) model. Since the spatially distributed disturbance is infinite dimensional, it cannot be compensated by a typical disturbance observer, which is designed by a finite dimensional approach. To estimate the spatially distributed disturbance, we propose a novel infinite dimensional disturbance observer (IDDO). Applying the IDDO as a feedforward compensator, a boundary control scheme is designed to regulate the joint position and eliminate the elastic vibration simultaneously. Theoretical analysis validates the stability of both the proposed disturbance observer and the boundary controller. The performance of the closed-loop system is demonstrated by numerical simulations.
Statistical characterization of discrete conservative systems: The web map
NASA Astrophysics Data System (ADS)
Ruiz, Guiomar; Tirnakli, Ugur; Borges, Ernesto P.; Tsallis, Constantino
2017-10-01
We numerically study the two-dimensional, area-preserving web map. When the map is governed by ergodic behavior, it is, as expected, correctly described by Boltzmann-Gibbs statistics, based on the additive entropic functional $S_{BG}[p(x)] = -k \int dx\, p(x) \ln p(x)$. In contrast, possible ergodicity breakdown and transitory sticky dynamical behavior drag the map into the realm of generalized $q$-statistics, based on the nonadditive entropic functional $S_q[p(x)] = k \left(1 - \int dx\, [p(x)]^q\right)/(q-1)$ ($q \in \mathbb{R}$; $S_1 = S_{BG}$). We statistically describe the system (probability distribution of the sum of successive iterates, sensitivity to the initial condition, and entropy production per unit time) for typical values of the parameter that controls the ergodicity of the map. For small (large) values of the external parameter $K$, we observe $q$-Gaussian distributions with $q = 1.935\ldots$ (Gaussian distributions), as for the standard map. In contrast, for intermediate values of $K$, we observe a different scenario, due to the fractal structure of the trajectories embedded in the chaotic sea. Long-standing non-Gaussian distributions are characterized in terms of the kurtosis and the box-counting dimension of the chaotic sea.
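A commonly used form of the web map (assumed here, since the abstract does not spell out the iteration) makes two of the ingredients easy to check numerically: area preservation via the Jacobian determinant, and the distribution of sums of successive iterates over many initial conditions.

```python
import numpy as np

K = 0.1                     # external parameter controlling ergodicity

def web_map(u, v):
    """One common form of the two-dimensional web map (an assumption here):
    u' = v,  v' = -u - K*sin(v)."""
    return v, -u - K * np.sin(v)

# Area preservation: Jacobian [[0, 1], [-1, -K cos v]] has determinant +1
u, v = 0.3, 0.2
eps = 1e-6
u1, v1 = web_map(u, v)
ju = ((web_map(u + eps, v)[0] - u1) / eps, (web_map(u, v + eps)[0] - u1) / eps)
jv = ((web_map(u + eps, v)[1] - v1) / eps, (web_map(u, v + eps)[1] - v1) / eps)
det = ju[0] * jv[1] - ju[1] * jv[0]

# Sums of successive iterates over many initial conditions: the quantity
# whose distribution is Gaussian or q-Gaussian depending on K
rng = np.random.default_rng(5)
sums = []
for _ in range(300):
    x, y = rng.uniform(-np.pi, np.pi, 2)
    s = 0.0
    for _ in range(500):
        x, y = web_map(x, y)
        x = (x + np.pi) % (2 * np.pi) - np.pi   # keep iterates on the torus
        y = (y + np.pi) % (2 * np.pi) - np.pi
        s += y
    sums.append(s)
sums = np.array(sums)
```

Characterizing the shape of `sums` (e.g., via its kurtosis) as K varies is the kind of diagnostic the study applies at far larger iterate and ensemble counts.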
Hysteresis, phase transitions, and dangerous transients in electrical power distribution systems
NASA Astrophysics Data System (ADS)
Duclut, Charlie; Backhaus, Scott; Chertkov, Michael
2013-06-01
The majority of dynamical studies in power systems focus on the high-voltage transmission grids where models consider large generators interacting with crude aggregations of individual small loads. However, new phenomena have been observed indicating that the spatial distribution of collective, nonlinear contribution of these small loads in the low-voltage distribution grid is crucial to the outcome of these dynamical transients. To elucidate the phenomenon, we study the dynamics of voltage and power flows in a spatially extended distribution feeder (circuit) connecting many asynchronous induction motors and discover that this relatively simple 1+1 (space+time) dimensional system exhibits a plethora of nontrivial spatiotemporal effects, some of which may be dangerous for power system stability. Long-range motor-motor interactions mediated by circuit voltage and electrical power flows result in coexistence and segregation of spatially extended phases defined by individual motor states, a “normal” state where the motors’ mechanical (rotation) frequency is slightly smaller than the nominal frequency of the basic ac flows and a “stalled” state where the mechanical frequency is small. Transitions between the two states can be initiated by a perturbation of the voltage or base frequency at the head of the distribution feeder. Such behavior is typical of first-order phase transitions in physics, and this 1+1 dimensional model shows many other properties of a first-order phase transition with the spatial distribution of the motors’ mechanical frequency playing the role of the order parameter. In particular, we observe (a) propagation of the phase-transition front at constant speed (in very long feeders) and (b) hysteresis in transitions between the normal and stalled (or partially stalled) phases.
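The normal/stalled bistability and its hysteresis can be sketched with a single-motor toy model: a Kloss-form torque-slip curve with electrical torque scaling as the square of voltage, swept quasi-statically in voltage. All parameters are illustrative, and the lumped model below ignores the spatial feeder structure that is the paper's actual subject; it only shows why ramping the voltage down and back up revisits the two states at different voltages.

```python
import numpy as np

# Toy single-motor model: Kloss-form torque-slip curve, torque ~ V^2.
# Parameters are illustrative only (per-unit quantities).
S_C = 0.15        # critical (breakdown) slip
T_LOAD = 0.3      # constant mechanical load torque
J = 0.05          # inertia

def t_elec(v, s):
    return v * v * 2.0 * S_C * s / (s * s + S_C * S_C)

def settle(v, s, dt=1e-3, steps=4000):
    """Integrate the slip dynamics to quasi-steady state at fixed voltage."""
    for _ in range(steps):
        s += dt * (T_LOAD - t_elec(v, s)) / J
        s = min(max(s, 1e-4), 1.0)        # slip confined to (0, 1]
    return s

s = settle(1.1, 0.05)                     # start on the 'normal' branch
v_stall = v_recover = None
for v in np.arange(1.1, 0.49, -0.01):     # quasi-static ramp down
    s = settle(v, s)
    if v_stall is None and s > 0.95:      # motor has stalled
        v_stall = float(v)
for v in np.arange(0.5, 1.31, 0.01):      # quasi-static ramp back up
    s = settle(v, s)
    if v_recover is None and s < 0.5:     # motor has reaccelerated
        v_recover = float(v)
```

The recovery voltage exceeds the stall voltage by a wide margin, which is the single-motor seed of the first-order-transition hysteresis the paper reports for the full feeder.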
Fahr's Syndrome and Secondary Hypoparathyroidism.
Dos Santos, Vitorino Modesta; Da Mata, Ana Medeiros De Farias; Ribeiro, Kelle Regina Alves; Calvo, Isadora Cartaxo De Sousa
2016-01-01
A typical case of Fahr's syndrome is described in a 76-year-old Brazilian female who underwent a total thyroidectomy three decades ago. Six years before the current admission, she presented with generalized tonic-clonic seizures. Associated disorders involved extrapyramidal and cognitive disturbances, night terrors, and mood changes. With suspicion of hypocalcemia due to secondary hypoparathyroidism, laboratory determinations confirmed the diagnosis. Furthermore, imaging studies of the central nervous system detected multiple calcifications with the characteristic distribution of Fahr's syndrome. Clinical management was successful.
2017-02-17
time for the tomography and diffraction sweeps was approximately 42 min. In a typical quasi-static in-situ experiment, loading is halted and the...data is used to extract individual grain-average stress tensors in a large aggregate of Ti-7Al grains (≈500) over a time series of prescribed states...
Wide-Band, High-Quantum-Efficiency Photodetector
NASA Technical Reports Server (NTRS)
Jackson, Deborah; Wilson, Daniel; Stern, Jeffrey
2007-01-01
A design has been proposed for a photodetector that would exhibit a high quantum efficiency (as much as 90 percent) over a wide wavelength band, which would typically be centered at a wavelength of 1.55 μm. This and similar photodetectors would afford a capability for detecting single photons - a capability that is needed for research in quantum optics as well as for the practical development of secure optical communication systems for distribution of quantum cryptographic keys. The proposed photodetector would be of the hot-electron, phonon-cooled, thin-film superconductor type. The superconducting film in this device would be a meandering strip of niobium nitride. In the proposed photodetector, the quantum efficiency would be increased through incorporation of opti-
NASA Technical Reports Server (NTRS)
Patterson, Michael D.; Derlaga, Joseph M.; Borer, Nicholas K.
2016-01-01
Although the primary function of propellers is typically to produce thrust, aircraft equipped with distributed electric propulsion (DEP) may utilize propellers whose main purpose is to act as a form of high-lift device. These "high-lift propellers" can be placed upstream of the wing such that, when the higher-velocity flow in the propellers' slipstreams interacts with the wing, the lift is increased. This technique is a main design feature of a new NASA advanced design project called Scalable Convergent Electric Propulsion Technology Operations Research (SCEPTOR). The goal of the SCEPTOR project is to design, build, and fly a DEP aircraft to demonstrate that such an aircraft can be much more efficient than conventional designs. This paper provides details of the high-lift propeller system configuration selection for the SCEPTOR flight demonstrator. The methods used in the high-lift propeller system conceptual design and the tradeoffs considered in selecting the number of propellers are discussed.
NASA Astrophysics Data System (ADS)
Nurge, Mark A.
2007-05-01
An electrical capacitance volume tomography system has been created for use with a new image reconstruction algorithm capable of imaging high contrast dielectric distributions. The electrode geometry consists of two 4 × 4 parallel planes of copper conductors connected through custom built switch electronics to a commercially available capacitance to digital converter. Typical electrical capacitance tomography (ECT) systems rely solely on mutual capacitance readings to reconstruct images of dielectric distributions. This paper presents a method of reconstructing images of high contrast dielectric materials using only the self-capacitance measurements. By constraining the unknown dielectric material to one of two values, the inverse problem is no longer ill-determined. Resolution becomes limited only by the accuracy and resolution of the measurement circuitry. Images were reconstructed using this method with both synthetic and real data acquired using an aluminium structure inserted at different positions within the sensing region. Comparisons with standard two-dimensional ECT systems highlight the capabilities and limitations of the electronics and reconstruction algorithm.
Electrical capacitance volume tomography of high contrast dielectrics using a cuboid geometry
NASA Astrophysics Data System (ADS)
Nurge, Mark A.
An Electrical Capacitance Volume Tomography system has been created for use with a new image reconstruction algorithm capable of imaging high contrast dielectric distributions. The electrode geometry consists of two 4 x 4 parallel planes of copper conductors connected through custom built switch electronics to a commercially available capacitance to digital converter. Typical electrical capacitance tomography (ECT) systems rely solely on mutual capacitance readings to reconstruct images of dielectric distributions. This dissertation presents a method of reconstructing images of high contrast dielectric materials using only the self capacitance measurements. By constraining the unknown dielectric material to one of two values, the inverse problem is no longer ill-determined. Resolution becomes limited only by the accuracy and resolution of the measurement circuitry. Images were reconstructed using this method with both synthetic and real data acquired using an aluminum structure inserted at different positions within the sensing region. Comparisons with standard two dimensional ECT systems highlight the capabilities and limitations of the electronics and reconstruction algorithm.
Coordinated single-phase control scheme for voltage unbalance reduction in low voltage network.
Pullaguram, Deepak; Mishra, Sukumar; Senroy, Nilanjan
2017-08-13
Low voltage (LV) distribution systems are typically unbalanced in nature due to unbalanced loading and unsymmetrical line configuration. This situation is further aggravated by single-phase power injections. A coordinated control scheme is proposed for single-phase sources, to reduce voltage unbalance. A consensus-based coordination is achieved using a multi-agent system, where each agent estimates the averaged global voltage and current magnitudes of individual phases in the LV network. These estimated values are used to modify the reference power of individual single-phase sources, to ensure system-wide balanced voltages and proper power sharing among sources connected to the same phase. Further, the high X/R ratio of the filter, used in the inverter of the single-phase source, enables control of reactive power, to minimize voltage unbalance locally. The proposed scheme is validated by simulating an LV distribution network with multiple single-phase sources subjected to various perturbations. This article is part of the themed issue 'Energy management: flexibility, risk and optimization'. © 2017 The Author(s).
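The consensus-based averaging that the abstract's agents perform can be sketched minimally. The ring topology, step size, and voltage readings below are illustrative assumptions, not details taken from the paper:

```python
# Minimal sketch of distributed average consensus: each agent repeatedly
# nudges its estimate toward its neighbors' estimates, and all estimates
# converge to the network-wide mean of the local voltage readings.

def consensus_average(values, neighbors, steps=200, eps=0.3):
    x = list(values)
    for _ in range(steps):
        x_new = x[:]
        for i in range(len(x)):
            # move toward the average disagreement with the neighbors
            nbrs = neighbors[i]
            x_new[i] = x[i] + eps * sum(x[j] - x[i] for j in nbrs) / len(nbrs)
        x = x_new
    return x

# four agents on a ring, each starting from its own local voltage reading
volts = [230.0, 228.0, 232.0, 226.0]
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
estimates = consensus_average(volts, ring)  # every entry → 229.0
```

Because the update is symmetric, the sum (and hence the mean) of the estimates is preserved at every step, which is what guarantees convergence to the true average rather than some other consensus value.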
Power-law decay exponents: A dynamical criterion for predicting thermalization
NASA Astrophysics Data System (ADS)
Távora, Marco; Torres-Herrera, E. J.; Santos, Lea F.
2017-01-01
From the analysis of the relaxation process of isolated lattice many-body quantum systems quenched far from equilibrium, we deduce a criterion for predicting when they are certain to thermalize. It is based on the algebraic behavior ∝t-γ of the survival probability at long times. We show that the value of the power-law exponent γ depends on the shape and filling of the weighted energy distribution of the initial state. Two scenarios are explored in detail: γ ≥2 and γ <1 . Exponents γ ≥2 imply that the energy distribution of the initial state is ergodically filled and the eigenstates are uncorrelated, so thermalization is guaranteed to happen. In this case, the power-law behavior is caused by bounds in the energy spectrum. Decays with γ <1 emerge when the energy eigenstates are correlated and signal lack of ergodicity. They are typical of systems undergoing localization due to strong onsite disorder and are found also in clean integrable systems.
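As a toy illustration of the abstract's diagnostic, the exponent γ of a survival probability SP(t) ∝ t^(-γ) can be recovered by a straight-line fit in log-log coordinates. The synthetic γ = 2 data below (the ergodic, guaranteed-thermalization case) is invented for the sketch:

```python
# Estimate the power-law decay exponent gamma from survival-probability
# samples SP(t) ~ t**-gamma via least-squares regression in log-log space.
import math

def fit_power_law_exponent(times, sp):
    """Slope of log SP vs. log t equals -gamma for SP(t) = C * t**-gamma."""
    xs = [math.log(t) for t in times]
    ys = [math.log(p) for p in sp]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return -slope

times = [10.0 * 1.5 ** k for k in range(20)]   # logarithmically spaced t
sp = [0.3 * t ** -2.0 for t in times]          # synthetic gamma = 2 decay
gamma = fit_power_law_exponent(times, sp)      # → 2.0
```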
Adaptation, Growth, and Resilience in Biological Distribution Networks
NASA Astrophysics Data System (ADS)
Ronellenfitsch, Henrik; Katifori, Eleni
Highly optimized complex transport networks serve crucial functions in many man-made and natural systems such as power grids and plant or animal vasculature. Often, the relevant optimization functional is nonconvex and characterized by many local extrema. In general, finding the global, or nearly global optimum is difficult. In biological systems, it is believed that such an optimal state is slowly achieved through natural selection. However, general coarse grained models for flow networks with local positive feedback rules for the vessel conductivity typically get trapped in low efficiency, local minima. We show how the growth of the underlying tissue, coupled to the dynamical equations for network development, can drive the system to a dramatically improved optimal state. This general model provides a surprisingly simple explanation for the appearance of highly optimized transport networks in biology such as plant and animal vasculature. In addition, we show how the incorporation of spatially collective fluctuating sources yields a minimal model of realistic reticulation in distribution networks and thus resilience against damage.
Congenital heart defects in Williams syndrome.
Yuan, Shi-Min
2017-01-01
Yuan SM. Congenital heart defects in Williams syndrome. Turk J Pediatr 2017; 59: 225-232. Williams syndrome (WS), also known as Williams-Beuren syndrome, is a rare genetic disorder involving multiple systems including the circulatory system. However, the etiologies of the associated congenital heart defects in WS patients have not been sufficiently elucidated and represent therapeutic challenges. The typical congenital heart defects in WS were supravalvar aortic stenosis, pulmonary stenosis (both valvular and peripheral), aortic coarctation and mitral valvar prolapse. The atypical cardiovascular anomalies include tetralogy of Fallot, atrial septal defects, aortic and mitral valvular insufficiencies, bicuspid aortic valves, ventricular septal defects, total anomalous pulmonary venous return, double chambered right ventricle, Ebstein anomaly and arterial anomalies. Deletion of the elastin gene on chromosome 7q11.23 leads to deficiency or abnormal deposition of elastin during cardiovascular development, thereby leading to widespread cardiovascular abnormalities in WS. In this article, the distribution, treatment and surgical outcomes of typical and atypical cardiac defects in WS are discussed.
NASA Astrophysics Data System (ADS)
Andresen, Juan Carlos; Katzgraber, Helmut G.; Schechter, Moshe
2017-12-01
Random fields disorder Ising ferromagnets by aligning single spins in the direction of the random field in three space dimensions, or by flipping large ferromagnetic domains at dimensions two and below. While the former requires random fields of typical magnitude similar to the interaction strength, the latter Imry-Ma mechanism only requires infinitesimal random fields. Recently, it has been shown that for dilute anisotropic dipolar systems a third mechanism exists, where the ferromagnetic phase is disordered by finite-size glassy domains at a random field of finite magnitude that is considerably smaller than the typical interaction strength. Using large-scale Monte Carlo simulations and zero-temperature numerical approaches, we show that this mechanism applies to disordered ferromagnets with competing short-range ferromagnetic and antiferromagnetic interactions, suggesting its generality in ferromagnetic systems with competing interactions and an underlying spin-glass phase. A finite-size-scaling analysis of the magnetization distribution suggests that the transition might be first order.
Schrems, W A; Laemmer, R; Hoesl, L M; Horn, F K; Mardin, C Y; Kruse, F E; Tornow, R P
2011-10-01
To investigate the influence of atypical retardation pattern (ARP) on the distribution of peripapillary retinal nerve fibre layer (RNFL) thickness measured with scanning laser polarimetry in healthy individuals and to compare these results with RNFL thickness from spectral domain optical coherence tomography (OCT) in the same subjects. 120 healthy subjects were investigated in this study. All volunteers received detailed ophthalmological examination, GDx variable corneal compensation (VCC) and Spectralis-OCT. The subjects were divided into four subgroups according to their typical scan score (TSS): very typical with TSS=100, typical with 99 ≥ TSS ≥ 91, less typical with 90 ≥ TSS ≥ 81 and atypical with TSS ≤ 80. Deviations from very typical normal values were calculated for 32 sectors for each group. There was a systematic variation of the RNFL thickness deviation around the optic nerve head in the atypical group for the GDxVCC results. The highest percentage deviation of about 96% appeared temporal with decreasing deviation towards the superior and inferior sectors, and nasal sectors exhibited a deviation of 30%. Percentage deviations from very typical RNFL values decreased with increasing TSS. No systematic variation could be found if the RNFL thickness deviation between different TSS-groups was compared with the OCT results. The ARP has a major impact on the peripapillary RNFL distribution assessed by GDx VCC; thus, the TSS should be included in the standard printout.
Emmetropisation and the aetiology of refractive errors
Flitcroft, D I
2014-01-01
The distribution of human refractive errors displays features that are not commonly seen in other biological variables. Compared with the more typical Gaussian distribution, adult refraction within a population typically has a negative skew and increased kurtosis (ie is leptokurtotic). This distribution arises from two apparently conflicting tendencies, first, the existence of a mechanism to control eye growth during infancy so as to bring refraction towards emmetropia/low hyperopia (ie emmetropisation) and second, the tendency of many human populations to develop myopia during later childhood and into adulthood. The distribution of refraction therefore changes significantly with age. Analysis of the processes involved in shaping refractive development allows for the creation of a life course model of refractive development. Monte Carlo simulations based on such a model can recreate the variation of refractive distributions seen from birth to adulthood and the impact of increasing myopia prevalence on refractive error distributions in Asia. PMID:24406411
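A minimal Monte Carlo version of the two-process idea can reproduce the distribution shape the abstract describes: a tight emmetropized peak plus a minority drifting into myopia yields negative skew and positive excess kurtosis. All mixture parameters below are invented for illustration, not fitted to any population:

```python
# Toy Monte Carlo: mixture of an emmetropized majority and a myopic
# minority produces a negatively skewed, leptokurtotic refraction
# distribution (skew < 0, excess kurtosis > 0).
import random

def moments(xs):
    n = len(xs)
    m = sum(xs) / n
    sd = (sum((x - m) ** 2 for x in xs) / n) ** 0.5
    skew = sum(((x - m) / sd) ** 3 for x in xs) / n
    kurt = sum(((x - m) / sd) ** 4 for x in xs) / n - 3.0  # excess kurtosis
    return skew, kurt

random.seed(1)
population = []
for _ in range(50000):
    if random.random() < 0.7:                     # emmetropized majority
        population.append(random.gauss(0.5, 0.75))
    else:                                         # myopic minority, long negative tail
        population.append(random.gauss(-3.0, 2.0))
skew, kurt = moments(population)                  # skew < 0, kurt > 0
```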
NASA Astrophysics Data System (ADS)
Monfort, Samuel S.; Sibley, Ciara M.; Coyne, Joseph T.
2016-05-01
Future unmanned vehicle operations will see more responsibilities distributed among fewer pilots. Current systems typically involve a small team of operators maintaining control over a single aerial platform, but this arrangement results in a suboptimal configuration of operator resources to system demands. Rather than devoting the full-time attention of several operators to a single UAV, the goal should be to distribute the attention of several operators across several UAVs as needed. Under a distributed-responsibility system, operator task load would be continuously monitored, with new tasks assigned based on system needs and operator capabilities. The current paper sought to identify a set of metrics that could be used to assess workload unobtrusively and in near real-time to inform a dynamic tasking algorithm. To this end, we put 20 participants through a variable-difficulty multiple UAV management simulation. We identified a subset of candidate metrics from a larger pool of pupillary and behavioral measures. We then used these metrics as features in a machine learning algorithm to predict workload condition every 60 seconds. This procedure produced an overall classification accuracy of 78%. An automated tasker sensitive to fluctuations in operator workload could be used to efficiently delegate tasks for teams of UAV operators.
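The classification stage of such a pipeline can be sketched with a nearest-centroid rule standing in for the paper's (unspecified) machine learning algorithm; the feature names and values below are invented:

```python
# Hypothetical sketch: classify each 60-second window as low/high
# workload from a small feature vector, using a nearest-centroid rule.

def train_centroids(windows):
    """windows: list of (features, label); returns per-label mean vectors."""
    sums, counts = {}, {}
    for feats, label in windows:
        acc = sums.setdefault(label, [0.0] * len(feats))
        for i, f in enumerate(feats):
            acc[i] += f
        counts[label] = counts.get(label, 0) + 1
    return {lab: [s / counts[lab] for s in acc] for lab, acc in sums.items()}

def classify(feats, centroids):
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lab: dist2(feats, centroids[lab]))

# features: (mean pupil diameter in mm, task response time in s)
train = [((3.1, 0.9), "low"), ((3.0, 1.0), "low"),
         ((4.2, 2.1), "high"), ((4.4, 1.9), "high")]
cents = train_centroids(train)
label = classify((4.3, 2.0), cents)   # → "high"
```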
Balancing Hydronic Systems in Multifamily Buildings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruch, Russell; Ludwig, Peter; Maurer, Tessa
2014-07-01
In multifamily hydronic systems, temperature imbalance may be caused by undersized piping, improperly adjusted balancing valves, inefficient water temperature and flow levels, and owner/occupant interaction with the boilers, distribution, and controls. The imbalance leads to tenant discomfort, higher energy use intensity, and inefficient building operation. This research, conducted by Building America team Partnership for Advanced Residential Retrofit, explores cost-effective distribution upgrades and balancing measures in multifamily hydronic systems, providing a resource to contractors, auditors, and building owners on best practices to improve tenant comfort and lower operating costs. The team surveyed existing knowledge on cost-effective retrofits for optimizing distribution in typical multifamily hydronic systems, with the aim of identifying common situations and solutions, and then conducted case studies on two Chicago area buildings with known balancing issues in order to quantify the extent of temperature imbalance. At one of these buildings a booster pump was installed on a loop to an underheated wing of the building. This study found that unit temperature in a multifamily hydronic building can vary as much as 61°F, particularly if windows are opened or tenants use intermittent supplemental heating sources like oven ranges. Average temperature spread at the building as a result of this retrofit decreased from 22.1°F to 15.5°F.
Direct Data Distribution From Low-Earth Orbit
NASA Technical Reports Server (NTRS)
Budinger, James M.; Fujikawa, Gene; Kunath, Richard R.; Nguyen, Nam T.; Romanofsky, Robert R.; Spence, Rodney L.
1997-01-01
NASA Lewis Research Center (LeRC) is developing the space and ground segment technologies necessary to demonstrate a direct data distribution (D3) system for use in space-to-ground communication links from spacecraft in low-Earth orbit (LEO) to strategically located tracking ground terminals. The key space segment technologies include a K-band (19 GHz) MMIC-based transmit phased array antenna, and a multichannel bandwidth- and power-efficient digital encoder/modulator with an aggregate data rate of 622 Mb/s. Along with small (1.8 meter), low-cost tracking terminals on the ground, the D3 system enables affordable distribution of data to the end user or archive facility through interoperability with commercial terrestrial telecommunications networks. The D3 system is applicable to both government and commercial science and communications spacecraft in LEO. The features and benefits of the D3 system concept are described. Starting with typical orbital characteristics, a set of baseline requirements for representative applications is developed, including requirements for onboard storage and tracking terminals, and sample link budgets are presented. Characteristics of the transmit array antenna and digital encoder/modulator are described. The architecture and components of the tracking terminal are described, including technologies for the next generation terminal. Candidate flights of opportunity for risk mitigation and space demonstration of the D3 features are identified.
Data-Driven Residential Load Modeling and Validation in GridLAB-D
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gotseff, Peter; Lundstrom, Blake
Accurately characterizing the impacts of high penetrations of distributed energy resources (DER) on the electric distribution system has driven modeling methods from traditional static snap shots, often representing a critical point in time (e.g., summer peak load), to quasi-static time series (QSTS) simulations capturing all the effects of variable DER, associated controls and hence, impacts on the distribution system over a given time period. Unfortunately, the high time resolution DER source and load data required for model inputs is often scarce or non-existent. This paper presents work performed within the GridLAB-D model environment to synthesize, calibrate, and validate 1-second residential load models based on measured transformer loads and physics-based models suitable for QSTS electric distribution system modeling. The modeling and validation approach taken was to create a typical GridLAB-D model home that, when replicated to represent multiple diverse houses on a single transformer, creates a statistically similar load to a measured load for a given weather input. The model homes are constructed to represent the range of actual homes on an instrumented transformer: square footage, thermal integrity, heating and cooling system definition as well as realistic occupancy schedules. House model calibration and validation was performed using the distribution transformer load data and corresponding weather. The modeled loads were found to be similar to the measured loads for four evaluation metrics: 1) daily average energy, 2) daily average and standard deviation of power, 3) power spectral density, and 4) load shape.
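Two of the four validation metrics named above are simple to compute directly. The hourly load samples below are invented for illustration; they stand in for a measured transformer load and its GridLAB-D counterpart:

```python
# Sketch of two validation metrics for measured vs. modeled loads:
# (1) daily energy and (2) daily mean and standard deviation of power.

def daily_energy_kwh(power_kw, dt_hours=1.0):
    """Energy is the time integral of power; here a simple Riemann sum."""
    return sum(p * dt_hours for p in power_kw)

def mean_std(power_kw):
    n = len(power_kw)
    m = sum(power_kw) / n
    sd = (sum((p - m) ** 2 for p in power_kw) / n) ** 0.5
    return m, sd

measured = [2.0, 1.5, 1.2, 3.8, 5.1, 4.0, 2.5, 2.2]  # hourly kW, measured
modeled  = [2.1, 1.4, 1.3, 3.6, 5.0, 4.2, 2.4, 2.1]  # hourly kW, modeled
e_meas, e_mod = daily_energy_kwh(measured), daily_energy_kwh(modeled)
m_meas, sd_meas = mean_std(measured)
```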
Latron, Mathilde; Arnaud, Jean-François; Ferla, Héloïse; Godé, Cécile; Duputié, Anne
2018-06-01
Identifying spatial patterns of genetic differentiation across a species range is critical to set up conservation and restoration decision-making. This is especially timely, since global change triggers shifts in species' geographic distribution and in the geographical variation of mating system and patterns of genetic differentiation, with varying consequences at the trailing and leading edges of a species' distribution. Using 454 pyrosequencing, we developed nuclear microsatellite loci for two plant species showing a strictly coastal geographical distribution and contrasting range dynamics: the expanding rock samphire (Crithmum maritimum, 21 loci) and the highly endangered and receding dune pansy (Viola tricolor subsp. curtisii, 12 loci). Population genetic structure was then assessed by genotyping more than 100 individuals from four populations of each of the two target species. Rock samphire displayed high levels of genetic differentiation (F_ST = 0.38), and a genetic structure typical of a mostly selfing species (F_IS ranging from 0.16 to 0.58). Populations of dune pansy showed a less pronounced level of population structuring (F_ST = 0.25) and a genotypic structure more suggestive of a mixed-mating system when excluding two loci with heterozygote excess. These results demonstrate that the genetic markers developed here are useful to assess the mating system of populations of these two species. They will be tools of choice to investigate phylogeographical patterns and variation in mating system over the geographical distribution ranges for two coastal plant species that are subject to dynamic evolution due to rapid contemporary global change.
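The fixation index reported above has a compact definition for a single biallelic locus: F_ST = (H_T - H_S)/H_T, where H_S is the mean expected heterozygosity within populations and H_T that of the pooled allele frequency. The allele frequencies below are invented, and equal population sizes are assumed for the sketch:

```python
# Worked illustration of Wright's F_ST for one biallelic locus.

def fst_biallelic(allele_freqs):
    """allele_freqs: frequency of one allele in each population
    (equal population sizes assumed in this sketch)."""
    n = len(allele_freqs)
    h_s = sum(2 * p * (1 - p) for p in allele_freqs) / n  # within-pop H
    p_bar = sum(allele_freqs) / n                         # pooled frequency
    h_t = 2 * p_bar * (1 - p_bar)                         # total H
    return (h_t - h_s) / h_t

# four populations with strongly diverged allele frequencies
fst = fst_biallelic([0.9, 0.8, 0.2, 0.1])  # → 0.5
```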
A self-contamination model for the formation of globular star clusters
NASA Astrophysics Data System (ADS)
Brown, James Howard
Described here is a model of globular cluster formation which allows the self contamination of the cluster by an earlier generation of massive stars. It is first shown that such self-contamination naturally produces an Fe/H in the range from -2.5 to -1.0, precisely the same range observed in the metal-poor (halo) globular clusters; this also seems to require that the disk clusters started with a substantial initial metallicity. To minimize the problem of creating homogeneous globular clusters, the second (currently observed) generation of stars is assumed to form in the expanding supershell around the first generation stars. Both numerical and analytic models are used to address this problem. The most important result of this investigation was that the late evolution of the supershell is the most important, and that this phase of the evolution is dominated by the external medium in which the cloud is embedded. This result and the requirement that only the most tightly bound systems may become globular clusters lead to the conclusion that a globular cluster with the mass and binding energy typically observed can be formed at star formation efficiencies as low as 10-20 percent. Furthermore, self contamination requires that the typical Fe/H of a bound system be about -1.6, independent of the free parameters of the model, allowing the clusters and field stars to form with different metallicity distributions in spite of their forming at the same time. Since the formation of globular clusters in this model is tied to the external pressure, the halo globular cluster masses and distribution can be used as probes of the early galactic structure. In particular, this model requires an increase in the typical globular cluster mass as one moves out from the galactic center; the masses of the halo clusters are examined, and they show considerable evidence for such a gradient.
Based on a pressure distribution derived from this data, the effect of the galactic tidal field on the model is also investigated using an N-body simulation.
PC Software graphics tool for conceptual design of space/planetary electrical power systems
NASA Technical Reports Server (NTRS)
Truong, Long V.
1995-01-01
This paper describes the Decision Support System (DSS), a personal computer software graphics tool for designing conceptual space and/or planetary electrical power systems. By using the DSS, users can obtain desirable system design and operating parameters, such as system weight, electrical distribution efficiency, and bus power. With this tool, a large-scale specific power system was designed in a matter of days. It is an excellent tool to help designers make tradeoffs between system components, hardware architectures, and operation parameters in the early stages of the design cycle. The DSS is a user-friendly, menu-driven tool with online help and a custom graphical user interface. An example design and results are illustrated for a typical space power system with multiple types of power sources, frequencies, energy storage systems, and loads.
Image based method for aberration measurement of lithographic tools
NASA Astrophysics Data System (ADS)
Xu, Shuang; Tao, Bo; Guo, Yongxing; Li, Gongfa
2018-01-01
Information of lens aberration of lithographic tools is important as it directly affects the intensity distribution in the image plane. Zernike polynomials are commonly used for a mathematical description of lens aberrations. Due to the advantage of lower cost and easier implementation of tools, image based measurement techniques have been widely used. Lithographic tools are typically partially coherent systems that can be described by a bilinear model, which entails time-consuming calculations and does not yield a simple and intuitive relationship between lens aberrations and the resulting images. Previous methods for retrieving lens aberrations in such partially coherent systems involve through-focus image measurements and time-consuming iterative algorithms. In this work, we propose a method for aberration measurement in lithographic tools, which only requires measuring two images of intensity distribution. Two linear formulations are derived in matrix forms that directly relate the measured images to the unknown Zernike coefficients. Consequently, an efficient non-iterative solution is obtained.
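To make the Zernike description concrete, the sketch below evaluates a wavefront built from a few low-order terms. The Noll indexing convention and the coefficient values are assumptions for illustration; the paper does not specify either:

```python
# A few low-order Zernike polynomials (Noll indexing assumed) on the
# unit pupil, and a wavefront assembled as a coefficient-weighted sum.
import math

def zernike(j, rho, theta):
    """First few Noll-indexed Zernike terms, 0 <= rho <= 1."""
    if j == 1:
        return 1.0                                    # piston
    if j == 2:
        return 2.0 * rho * math.cos(theta)            # tip
    if j == 3:
        return 2.0 * rho * math.sin(theta)            # tilt
    if j == 4:
        return math.sqrt(3.0) * (2.0 * rho ** 2 - 1)  # defocus
    raise ValueError("term not implemented in this sketch")

def wavefront(rho, theta, coeffs):
    """Aberrated wavefront W = sum_j c_j * Z_j(rho, theta)."""
    return sum(c * zernike(j, rho, theta) for j, c in coeffs.items())

# hypothetical aberration: 0.02 waves tilt + 0.05 waves defocus
w = wavefront(1.0, 0.0, {3: 0.02, 4: 0.05})   # rim of pupil, theta = 0
```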
Sun, Jianxin; Moore, Lee; Xue, Wei; Kim, James; Zborowski, Maciej; Chalmers, Jeffrey J
2018-05-01
Magnetic separation of cells has been, and continues to be, widely used in a variety of applications, ranging from healthcare diagnostics to detection of food contamination. Typically, these technologies require cells labeled with antibody magnetic particle conjugate and a high magnetic energy gradient created in the flow containing the labeled cells (i.e., a column packed with magnetically inducible material), or dense packing of magnetic particles next to the flow cell. Such designs, while creating high magnetic energy gradients, are not amenable to easy, highly detailed, mathematic characterization. Our laboratories have been characterizing and developing analysis and separation technology that can be used on intrinsically magnetic cells or spores which are typically orders of magnitude weaker than typically immunomagnetically labeled cells. One such separation system is magnetic deposition microscopy (MDM) which not only separates cells, but deposits them in specific locations on slides for further microscopic analysis. In this study, the MDM system has been further characterized, using finite element and computational fluid mechanics software, and separation performance predicted, using a model which combines: 1) the distribution of the intrinsic magnetophoretic mobility of the cells (spores); 2) the fluid flow within the separation device; and 3) accurate maps of the values of the magnetic field (max 2.27 T), and magnetic energy gradient (max of 4.41 T²/mm) within the system. Guided by this model, experimental studies indicated that greater than 95% of the intrinsically magnetic Bacillus spores can be separated with the MDM system. Further, this model allows analysis of cell trajectories which can assist in the design of higher throughput systems. © 2018 Wiley Periodicals, Inc.
System approach to distributed sensor management
NASA Astrophysics Data System (ADS)
Mayott, Gregory; Miller, Gordon; Harrell, John; Hepp, Jared; Self, Mid
2010-04-01
Since 2003, the US Army's RDECOM CERDEC Night Vision Electronic Sensor Directorate (NVESD) has been developing a distributed Sensor Management System (SMS) that utilizes a framework which demonstrates application layer, net-centric sensor management. The core principles of the design support distributed and dynamic discovery of sensing devices and processes through a multi-layered implementation. This results in a sensor management layer that acts as a System with defined interfaces for which the characteristics, parameters, and behaviors can be described. Within the framework, the definition of a protocol is required to establish the rules for how distributed sensors should operate. The protocol defines the behaviors, capabilities, and message structures needed to operate within the functional design boundaries. The protocol definition addresses the requirements for a device (sensors or processes) to dynamically join or leave a sensor network, dynamically describe device control and data capabilities, and allow dynamic addressing of publish and subscribe functionality. The message structure is a multi-tiered definition that identifies standard, extended, and payload representations that are specifically designed to accommodate the need for standard representations of common functions, while supporting the need for feature-based functions that are typically vendor specific. The dynamic qualities of the protocol enable a User GUI application the flexibility of mapping widget-level controls to each device based on reported capabilities in real-time. The SMS approach is designed to accommodate scalability and flexibility within a defined architecture. The distributed sensor management framework and its application to a tactical sensor network will be described in this paper.
Systems study for an Integrated Digital-Electric Aircraft (IDEA)
NASA Technical Reports Server (NTRS)
Tagge, G. E.; Irish, L. A.; Bailey, A. R.
1985-01-01
The results of the Integrated Digital/Electric Aircraft (IDEA) Study are presented. Airplanes with advanced systems were defined and evaluated as a means of identifying potential high payoff research tasks. A baseline airplane was defined for comparison, typical of a 1990's airplane with advanced active controls, propulsion, aerodynamics, and structures technology. Trade studies led to definition of an IDEA airplane, with extensive digital systems and electric secondary power distribution. This airplane showed an improvement of 3% in fuel use and 1.8% in DOC relative to the baseline configuration. An alternate configuration, an advanced technology turboprop, was also evaluated, with greater improvement supported by digital electric systems. Recommended research programs were defined for high risk, high payoff areas appropriate for implementation under NASA leadership.
Space reflector technology and its system implications
NASA Technical Reports Server (NTRS)
Billman, K. W.; Gilbreath, W. P.; Bowen, S. W.
1979-01-01
The technical feasibility of providing nearly continuous solar energy to a world-distributed set of conversion sites by means of a system of orbiting, large-area, low-areal-density reflecting structures is examined. Requisite mirror area to provide a chosen, year-averaged site intensity is shown. A modeled reflector structure, with suitable planarity and ability to meet operational torques and loads, is discussed. Typical spatial and temporal insolation profiles are presented. These determine the sizing of components and the output electric power from a baselined photovoltaic conversion system. Technical and economic challenges which, if met, would allow the system to provide a large fraction of future world energy needs at costs competitive to circa-1995 fossil and nuclear sources are discussed.
NASA Astrophysics Data System (ADS)
Zhang, Xianjun
The combined heat and power (CHP)-based distributed generation (DG) or distributed energy resources (DERs) are mature options available in the present energy market, considered to be an effective solution to promote energy efficiency. In the urban environment, the electricity, water and natural gas distribution networks are becoming increasingly interconnected with the growing penetration of the CHP-based DG. Subsequently, this emerging interdependence leads to new topics meriting serious consideration: how much of the CHP-based DG can be accommodated and where to locate these DERs, and given preexisting constraints, how to quantify the mutual impacts on operation performances between these urban energy distribution networks and the CHP-based DG. The early research work was conducted to investigate the feasibility and design methods for one residential microgrid system based on existing electricity, water and gas infrastructures of a residential community, mainly focusing on the economic planning. However, this proposed design method cannot determine the optimal DG sizing and siting for a larger test bed with the given information of energy infrastructures. In this context, a more systematic as well as generalized approach should be developed to solve these problems. In the later study, the model architecture that integrates urban electricity, water and gas distribution networks, and the CHP-based DG system was developed. The proposed approach addressed the challenge of identifying the optimal sizing and siting of the CHP-based DG on these urban energy networks and the mutual impacts on operation performances were also quantified. For this study, the overall objective is to maximize the electrical output and recovered thermal output of the CHP-based DG units. The electricity, gas, and water system models were developed individually and coupled by the developed CHP-based DG system model.
The resultant integrated system model is used to constrain the DG's electrical output and recovered thermal output, which are affected by multiple factors and thus analyzed in different case studies. The results indicate that the designed typical gas system is capable of supplying sufficient natural gas for the DG normal operation, while the present water system cannot support the complete recovery of the exhaust heat from the DG units.
The computation of a (1 − α)100% upper confidence limit (UCL) of the population mean depends upon the data distribution. Typically, environmental data are positively skewed, and a default lognormal distribution (EPA, 1992) is often used to model such data distributions. The H-stati...
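The UCL computation the abstract alludes to can be illustrated with a simple percentile-bootstrap sketch; this is not the H-statistic method the abstract refers to, and the lognormal sample parameters below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_ucl(data, alpha=0.05, n_boot=10_000, rng=rng):
    """One-sided (1 - alpha)*100% upper confidence limit for the
    population mean via the percentile bootstrap."""
    data = np.asarray(data, dtype=float)
    # Resample the data with replacement and compute each resample's mean.
    idx = rng.integers(0, len(data), size=(n_boot, len(data)))
    boot_means = data[idx].mean(axis=1)
    # The UCL is the (1 - alpha) quantile of the bootstrap mean distribution.
    return np.quantile(boot_means, 1.0 - alpha)

# Positively skewed (lognormal) sample, as is typical of environmental data.
sample = rng.lognormal(mean=1.0, sigma=1.0, size=50)
ucl95 = bootstrap_ucl(sample)
```

For heavily skewed data the percentile bootstrap is itself conservative-to-anticonservative depending on sample size, which is why parametric methods such as Land's H-statistic are often preferred.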
Disruption of giant comets in the solar system and around other stars
NASA Technical Reports Server (NTRS)
Whitmire, D. P.; Matese, J. J.
1988-01-01
In a standard cometary mass distribution dN/dM ∝ M^(−a), a = 1.5 to 2.0, most of the mass resides in the largest comets. The maximum mass M_max for which this distribution holds is uncertain, but there are theoretical and observational indications that M_max is at least approximately 10^23 g. Chiron, although formally classified as an asteroid, is most likely a giant comet in this mass range. Its present orbit is unstable and it is expected to evolve into a more typical short-period comet orbit on a timescale of approximately 10^6 to 10^7 yr. The breakup of a Chiron-like comet of mass approximately 10^23 g could in principle produce approximately 10^5 Halley-size comets, or a distribution with an even larger number. If a giant comet were in a typical short-period comet orbit, such a breakup could result in a relatively brief comet shower (duration less than approximately 10^6 yr) with some associated terrestrial impacts. However, the most significant climatic effects may in general be due not to the impacts themselves but to the greatly enhanced zodiacal dust cloud in the inner Solar System (although this is probably not the case for the unique K-T impact). Researchers used a least chi-square program with error analysis to confirm that the 2 to 5 micrometer excess spectrum of Giclas 29-38 can be adequately fitted with either a disk of small inefficient (or efficient) grains or a single-temperature black body. Further monitoring of this star may allow discrimination between these two models.
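The claim that most of the mass resides in the largest comets follows directly from the mass integral of the power law: for 1 < a < 2, ∫ M (dN/dM) dM ∝ M^(2−a)/(2−a) is dominated by the upper cutoff. A small sketch (the lower cutoff of 10^12 g is an assumption for illustration, not from the abstract):

```python
def mass_fraction_above(m_lo, m_min, m_max, a):
    """Fraction of total mass carried by comets with M > m_lo, for a
    number distribution dN/dM ∝ M**(-a) with 1 < a < 2.  The mass
    integral M**(2-a)/(2-a) is evaluated between the cutoffs; the
    (2-a) factor cancels in the ratio."""
    total = m_max**(2 - a) - m_min**(2 - a)
    above = m_max**(2 - a) - m_lo**(2 - a)
    return above / total

# With a = 1.5 and M_max = 1e23 g, the top two decades of comet mass
# carry roughly 90% of the total mass of the population.
f = mass_fraction_above(m_lo=1e21, m_min=1e12, m_max=1e23, a=1.5)
```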
NASA Technical Reports Server (NTRS)
Nickle, F. R.; Freeman, Arthur B.
1939-01-01
The safety of remotely operated vehicles depends on the correctness of the distributed protocol that facilitates the communication between the vehicle and the operator. A failure in this communication can result in catastrophic loss of the vehicle. To complicate matters, the communication system may be required to satisfy several, possibly conflicting, requirements. The design of protocols is typically an informal process based on successive iterations of a prototype implementation. Yet distributed protocols are notoriously difficult to get correct using such informal techniques. We present a formal specification of the design of a distributed protocol intended for use in a remotely operated vehicle, which is built from the composition of several simpler protocols. We demonstrate proof strategies that allow us to prove properties of each component protocol individually while ensuring that the property is preserved in the composition forming the entire system. Given that designs are likely to evolve as additional requirements emerge, we show how we have automated most of the repetitive proof steps to enable verification of rapidly changing designs.
An Overview of Distributed Microgrid State Estimation and Control for Smart Grids
Rana, Md Masud; Li, Li
2015-01-01
Given the significant concerns regarding carbon emission from fossil fuels, global warming and the energy crisis, renewable distributed energy resources (DERs) are going to be integrated in the smart grid. This grid can spread the intelligence of the energy distribution and control system from the central unit to long-distance remote areas, thus enabling accurate state estimation (SE) and wide-area real-time monitoring of these intermittent energy sources. In contrast to the traditional methods of SE, this paper proposes a novel accuracy-dependent Kalman filter (KF) based microgrid SE for the smart grid that uses typical communication systems. Then this article proposes a discrete-time linear quadratic regulation to control the state deviations of the microgrid incorporating multiple DERs. Therefore, integrating these two approaches with application to the smart grid forms a novel contribution to the green energy and control research communities. Finally, the simulation results show that the proposed KF based microgrid SE and control algorithm provides accurate SE and control compared with the existing method. PMID:25686316
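The paper's accuracy-dependent KF is not reproduced here, but a generic discrete-time Kalman filter on a toy two-state linear system sketches the state-estimation step; all matrices below are invented placeholders, not the paper's microgrid model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear system x_{k+1} = A x_k + w_k, measurement z_k = H x_k + v_k.
A = np.array([[0.95, 0.05], [0.0, 0.9]])
H = np.eye(2)
Q = 0.01 * np.eye(2)   # process noise covariance
R = 0.25 * np.eye(2)   # measurement noise covariance

def kalman_filter(zs, A, H, Q, R):
    n = A.shape[0]
    x = np.zeros(n)
    P = np.eye(n)
    estimates = []
    for z in zs:
        # Predict step: propagate state and covariance through the model.
        x = A @ x
        P = A @ P @ A.T + Q
        # Update step: blend prediction with the noisy measurement.
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(n) - K @ H) @ P
        estimates.append(x.copy())
    return np.array(estimates)

# Simulate true states and noisy measurements, then filter.
x_true = np.zeros(2)
truth, meas = [], []
for _ in range(200):
    x_true = A @ x_true + rng.multivariate_normal(np.zeros(2), Q)
    truth.append(x_true.copy())
    meas.append(H @ x_true + rng.multivariate_normal(np.zeros(2), R))
est = kalman_filter(meas, A, H, Q, R)
```

The filtered trajectory should track the truth more closely than the raw measurements do, which is the property the paper's SE stage relies on.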
29 CFR 794.133 - “Bulk” distribution.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 29 Labor 3 2014-07-01 2014-07-01 false “Bulk” distribution. 794.133 Section 794.133 Labor Regulations Relating to Labor (Continued) WAGE AND HOUR DIVISION, DEPARTMENT OF LABOR STATEMENTS OF GENERAL... § 794.133 “Bulk” distribution. “Bulk” distribution of petroleum products typically connotes those...
29 CFR 794.133 - “Bulk” distribution.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 29 Labor 3 2010-07-01 2010-07-01 false “Bulk” distribution. 794.133 Section 794.133 Labor Regulations Relating to Labor (Continued) WAGE AND HOUR DIVISION, DEPARTMENT OF LABOR STATEMENTS OF GENERAL... § 794.133 “Bulk” distribution. “Bulk” distribution of petroleum products typically connotes those...
Intermittent Water Supply: Prevalence, Practice, and Microbial Water Quality.
Kumpel, Emily; Nelson, Kara L
2016-01-19
Intermittent water supplies (IWS), in which water is provided through pipes for only limited durations, serve at least 300 million people around the world. However, providing water intermittently can compromise water quality in the distribution system. In IWS systems, the pipes do not supply water for periods of time, supply periods are shortened, and pipes experience regular flow restarting and draining. These unique behaviors affect distribution system water quality in ways that are different than during normal operations in continuous water supplies (CWS). A better understanding of the influence of IWS on mechanisms causing contamination can help lead to incremental steps that protect water quality and minimize health risks. This review examines the status and nature of IWS practices throughout the world, the evidence of the effect of IWS on water quality, and how the typical contexts in which IWS systems often exist-low-income countries with under-resourced utilities and inadequate sanitation infrastructure-can exacerbate mechanisms causing contamination. We then highlight knowledge gaps for further research to improve our understanding of water quality in IWS.
Peculiar spectral statistics of ensembles of trees and star-like graphs
NASA Astrophysics Data System (ADS)
Kovaleva, V.; Maximov, Yu; Nechaev, S.; Valba, O.
2017-07-01
In this paper we investigate the eigenvalue statistics of exponentially weighted ensembles of full binary trees and p-branching star graphs. We show that spectral densities of the corresponding adjacency matrices demonstrate a peculiar ultrametric structure inherent to sparse systems. In particular, the tails of the distribution for binary trees share the ‘Lifshitz singularity’ emerging in one-dimensional localization, while the spectral statistics of p-branching star-like graphs is less universal, being strongly dependent on p. The hierarchical structure of the spectra of adjacency matrices is interpreted as sets of resonance frequencies that emerge in ensembles of fully branched tree-like systems, known as dendrimers. However, the relaxational spectrum is not determined by the cluster topology, but rather has a number-theoretic origin, reflecting the peculiarities of the rare-event statistics typical of one-dimensional systems with quenched structural disorder. The similarity of the spectral densities of an individual dendrimer and of an ensemble of linear chains with an exponential distribution of lengths demonstrates that dendrimers could serve as simple disorder-less toy models of one-dimensional systems with quenched disorder.
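A minimal sketch of the kind of computation involved: the eigenvalue spectrum of the adjacency matrix of a single full binary tree. The exponential ensemble weighting and the star graphs of the paper are omitted; this only shows how the spectral density is obtained from exact diagonalization:

```python
import numpy as np

def full_binary_tree_adjacency(depth):
    """Adjacency matrix of a full binary tree in heap indexing:
    node i has children 2*i + 1 and 2*i + 2."""
    n = 2 ** (depth + 1) - 1
    A = np.zeros((n, n))
    for i in range(n):
        for c in (2 * i + 1, 2 * i + 2):
            if c < n:
                A[i, c] = A[c, i] = 1.0
    return A

A = full_binary_tree_adjacency(depth=9)          # 1023 nodes
eigs = np.linalg.eigvalsh(A)                     # exact spectrum

# Spectral density estimate via a normalized histogram of eigenvalues;
# the many degenerate peaks reflect the hierarchical (ultrametric) structure.
density, edges = np.histogram(eigs, bins=101, range=(-3, 3), density=True)
```

Because the tree is bipartite the spectrum is symmetric about zero, and the spectral radius is bounded by that of the infinite binary tree, 2√2 ≈ 2.828.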
User-Perceived Reliability of M-for-N (M: N) Shared Protection Systems
NASA Astrophysics Data System (ADS)
Ozaki, Hirokazu; Kara, Atsushi; Cheng, Zixue
In this paper we investigate the reliability of general shared protection systems, i.e., M-for-N (M:N), that can typically be applied to various telecommunication network devices. We focus on the reliability that is perceived by an end user of one of the N units. We assume that any failed unit is instantly replaced by one of the M units (if available). We describe the effectiveness of such a protection system in a quantitative manner. The mathematical analysis gives the closed-form solution of the availability, the recursive computing algorithm of the MTTFF (Mean Time to First Failure) and the MTTF (Mean Time to Failure) perceived by an arbitrary end user. We also show that, under a certain condition, the probability distribution of TTFF (Time to First Failure) can be approximated by a simple exponential distribution. The analysis provides useful information for the design of not only telecommunication network devices but also other general shared protection systems that are subject to service level agreements (SLAs) involving user-perceived reliability measures.
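The paper's closed-form availability is not reproduced here, but a birth-death Markov sketch of an M-for-N spare pool illustrates the quantity of interest; the one-repair-crew-per-failed-unit policy and the parameter values are assumptions for illustration:

```python
import numpy as np

def spare_exhaustion_probability(N, M, lam, mu):
    """Steady-state probability that more than M units are failed in an
    M-for-N shared protection pool, modeled as a birth-death chain.
    State k = number of failed units; failures occur at rate
    (in-service units) * lam, repairs at rate k * mu (one repair crew
    per failed unit, an assumption).  Solved via detailed balance."""
    states = N + M + 1
    pi = np.zeros(states)
    pi[0] = 1.0
    for k in range(states - 1):
        in_service = N if k <= M else N + M - k
        birth = in_service * lam       # failure rate out of state k
        death = (k + 1) * mu           # repair rate out of state k+1
        pi[k + 1] = pi[k] * birth / death
    pi /= pi.sum()
    # Spares are exhausted (some user is unprotected) when k > M.
    return pi[M + 1:].sum()

p = spare_exhaustion_probability(N=10, M=2, lam=0.001, mu=0.1)
```

With ten working units, two spares, and a 100:1 repair-to-failure rate ratio, the exhaustion probability comes out on the order of 10^-4, which is the kind of figure an SLA analysis would compare against a target.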
NASA Astrophysics Data System (ADS)
Suárez, F.; Aravena, J. E.; Hausner, M. B.; Childress, A. E.; Tyler, S. W.
2011-01-01
In shallow thermohaline-driven lakes it is important to measure temperature on fine spatial and temporal scales to detect stratification or different hydrodynamic regimes. Raman spectra distributed temperature sensing (DTS) is an approach available to provide high spatial and temporal temperature resolution. A vertical high-resolution DTS system was constructed to overcome the problems of typical methods used in the past, i.e., without disturbing the water column, and with resistance to corrosive environments. This system monitors the temperature profile every 1.1 cm vertically, with time averaging as short as 10 s. Temperature resolution as low as 0.035 °C is obtained when the data are collected at 5-min intervals. The vertical high-resolution DTS system is used to monitor the thermal behavior of a salt-gradient solar pond, which is an engineered shallow thermohaline system that allows collection and storage of solar energy over a long period of time. This paper describes a method to quantitatively assess accuracy, precision and other limitations of DTS systems to fully utilize the capacity of this technology. It also presents, for the first time, a method to manually calibrate temperatures along the optical fiber.
Side-emitting illuminators using LED sources
NASA Astrophysics Data System (ADS)
Zhao, Feng; Van Derlofske, John F.
2003-11-01
This study investigates illuminators composed of light emitting diode (LED) array sources and side-emitting light guides to provide efficient general illumination. Specifically, new geometries are explored to increase the efficiency of current systems while maintaining the desired light distribution. LED technology is already successfully applied in many illumination applications, such as traffic signals and liquid crystal display (LCD) backlighting. It provides energy-efficient, small-package, long-life, and color-adjustable illumination. However, the use of LEDs in general illumination is still in its early stages. Current side-emitting systems typically use a light guide with light sources at one end, an end-cap surface at the other end, and light-releasing sidewalls. This geometry introduces efficiency loss that can be as high as 40%. The illuminators analyzed in this study use LED array sources along the longitude of a light guide to increase the system efficiency. These new geometries also provide the freedom of elongating the system without sacrificing system efficiency. In addition, alternative geometries can be used to create white light with monochromatic LED sources. As concluded by this study, side-emitting illuminators using LED sources offer the possibility of an efficient, distribution-controllable linear lighting system.
Development of Extended Period Pressure-Dependent Demand Water Distribution Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Judi, David R.; Mcpherson, Timothy N.
2015-03-20
Los Alamos National Laboratory (LANL) has used modeling and simulation of water distribution systems for N-1 contingency analyses to assess criticality of water system assets. Critical components considered in these analyses include pumps, tanks, and supply sources, in addition to critical pipes or aqueducts. A contingency represents the complete removal of the asset from system operation. For each contingency, an extended period simulation (EPS) is run using EPANET. An EPS simulates water system behavior over a time period, typically at least 24 hours. It assesses the ability of a system to respond and recover from asset disruption through distributed storage in tanks throughout the system. Contingencies of concern are identified as those in which some portion of the water system has unmet delivery requirements. A delivery requirement is defined as an aggregation of water demands within a service area, similar to an electric power demand. The metric used to identify areas of unmet delivery requirement in these studies is a pressure threshold of 15 pounds per square inch (psi). This pressure threshold is used because it is below the required pressure for fire protection. Any location in the model with pressure that drops below this threshold at any time during an EPS is considered to have unmet service requirements and is used to determine cascading consequences. The outage area for a contingency is the aggregation of all service areas with a pressure below the threshold at any time during the EPS.
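The outage-identification step described above reduces to a threshold scan over node pressure time series produced by the EPS. A sketch with fabricated pressure data (node names and values are invented; a real analysis would read these from EPANET output):

```python
# Flag any node whose pressure drops below 15 psi at any time step.
PRESSURE_THRESHOLD_PSI = 15.0

pressures = {  # hypothetical 4-hour hourly series, psi
    "node_A": [62.0, 58.5, 55.1, 57.3],
    "node_B": [31.0, 22.4, 14.2, 18.9],   # dips below threshold at hour 3
    "node_C": [45.7, 44.9, 46.1, 45.0],
}

def unmet_service_nodes(pressures, threshold=PRESSURE_THRESHOLD_PSI):
    """Nodes with unmet service requirements: pressure below the
    threshold at ANY time during the extended period simulation."""
    return sorted(node for node, series in pressures.items()
                  if min(series) < threshold)

outage = unmet_service_nodes(pressures)
```

The outage area for the contingency is then the aggregation of service areas attached to the flagged nodes.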
Solar excitation of CdS/Cu2S photovoltaic cells
NASA Technical Reports Server (NTRS)
Boer, K. W.
1976-01-01
Solar radiation of five typical clear weather days and under a variety of conditions is used to determine the spectral distribution of the photonflux at different planes of a CdS/Cu2S solar cell. The fractions of reflected and absorbed flux are determined at each of the relevant interfaces and active volume elements of the solar cell. The density of absorbed photons is given in respect to spectral and spatial distribution. The variance of the obtained distribution, with changes in insolation and absorption spectra of the active solar cell layers, is indicated. A catalog of typical examples is given in the appendix.
Coordinating UAV information for executing national security-oriented collaboration
NASA Astrophysics Data System (ADS)
Isenor, Anthony W.; Allard, Yannick; Lapinski, Anna-Liesa S.; Demers, Hugues; Radulescu, Dan
2014-10-01
Unmanned Aerial Vehicles (UAVs) are being used by numerous nations for defence-related missions. In some cases, the UAV is considered a cost-effective means to acquire data such as imagery over a location or object. Considering Canada's geographic expanse, UAVs are also being suggested as a potential platform for use in surveillance of remote areas, such as northern Canada. However, such activities are typically associated with security as opposed to defence. The use of a defence platform for security activities introduces the issue of information exchange between the defence and security communities and their software applications. This paper explores the flow of information from the system used by the UAVs employed by the Royal Canadian Navy. Multiple computers are setup, each with the information system used by the UAVs, including appropriate communication between the systems. Simulated data that may be expected from a typical maritime UAV mission is then fed into the information system. The information structures common to the Canadian security community are then used to store and transfer the simulated data. The resulting data flow from the defence-oriented UAV system to the security-oriented information structure is then displayed using an open source geospatial application. Use of the information structures and applications relevant to the security community avoids the distribution restrictions often associated with defence-specific applications.
Indonesia’s Electricity Demand Dynamic Modelling
NASA Astrophysics Data System (ADS)
Sulistio, J.; Wirabhuana, A.; Wiratama, M. G.
2017-06-01
Electricity systems modelling is one of the emerging areas in global energy policy studies. The System Dynamics approach and computer simulation have become common methods used in energy systems planning and evaluation under many conditions. On the other hand, Indonesia is experiencing several major issues in its electricity system, such as fossil fuel domination, demand-supply imbalances, distribution inefficiency, and bio-devastation. This paper aims to explain the development of System Dynamics modelling approaches and computer simulation techniques for representing and predicting electricity demand in Indonesia. In addition, this paper also describes the typical characteristics and relationships of the commercial business sector, industrial sector, and family/domestic sector as electricity subsystems in Indonesia. Moreover, it presents direct structure, behavioural, and statistical tests as model validation approaches, and ends with conclusions.
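A System Dynamics demand model of the kind described is, at its core, a set of stock-and-flow equations integrated numerically. A minimal logistic-growth sketch (the structure and all parameter values are assumptions for illustration, not the paper's model):

```python
def simulate_demand(d0=100.0, growth=0.06, capacity=400.0, years=30, dt=1.0):
    """Euler integration of the stock equation
    dD/dt = growth * D * (1 - D / capacity):
    demand D grows with sector activity and saturates at a carrying
    capacity, a common System Dynamics building block."""
    d = d0
    path = [d]
    for _ in range(int(years / dt)):
        d += dt * growth * d * (1.0 - d / capacity)
        path.append(d)
    return path

path = simulate_demand()
```

A full model would couple several such stocks (commercial, industrial, domestic sectors) through shared feedback loops; validation would then compare the simulated behaviour against historical demand series.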
Commissioning of intensity modulated neutron radiotherapy (IMNRT).
Burmeister, Jay; Spink, Robyn; Liang, Liang; Bossenberger, Todd; Halford, Robert; Brandon, John; Delauter, Jonathan; Snyder, Michael
2013-02-01
Intensity modulated neutron radiotherapy (IMNRT) has been developed using in-house treatment planning and delivery systems at the Karmanos Cancer Center/Wayne State University Fast Neutron Therapy facility. The process of commissioning IMNRT for clinical use is presented here. Results of commissioning tests are provided, including validation measurements using representative patient plans as well as those from the TG-119 test suite. IMNRT plans were created using the Varian Eclipse optimization algorithm and an in-house planning system for calculation of neutron dose distributions. Tissue equivalent ionization chambers and an ionization chamber array were used for point dose and planar dose distribution comparisons with calculated values. Validation plans were delivered to water and virtual water phantoms using TG-119 measurement points and evaluation techniques. Photon and neutron doses were evaluated both inside and outside the target volume for a typical IMNRT plan to determine effects of intensity modulation on the photon dose component. Monitor unit linearity and effects of beam current and gantry angle on output were investigated, and an independent validation of neutron dosimetry was obtained. While IMNRT plan quality is superior to conventional fast neutron therapy plans for clinical sites such as prostate and head and neck, it is inferior to photon IMRT for most TG-119 planning goals, particularly for complex cases. This results significantly from current limitations on the number of segments. Measured and calculated doses for 11 representative plans (six prostate/five head and neck) agreed to within -0.8 ± 1.4% and 5.0 ± 6.0% within and outside the target, respectively. Nearly all (22/24) ion chamber point measurements in the two phantom arrangements were within the respective confidence intervals for the quantity [(measured - planned)/prescription dose] derived in TG-119.
Mean differences for all measurements were 0.5% (max = 7.0%) and 1.4% (max = 4.1%) in water and virtual water, respectively. The mean gamma pass rate for all cases was 92.8% (min = 88.6%). These pass rates are lower than typically achieved with photon IMRT, warranting development of a planar dosimetry system designed specifically for IMNRT and/or the improvement of neutron beam modeling in the penumbral region. The fractional photon dose component did not change significantly in a typical IMNRT plan versus a conventional fast neutron therapy plan, and IMNRT delivery is not expected to significantly alter the RBE. All other commissioning results were considered satisfactory for clinical implementation of IMNRT, including the external neutron dose validation, which agreed with the predicted neutron dose to within 1%. IMNRT has been successfully commissioned for clinical use. While current plan quality is inferior to photon IMRT, it is superior to conventional fast neutron therapy. Ion chamber validation results for IMNRT commissioning are also comparable to those typically achieved with photon IMRT. Gamma pass rates for planar dose distributions are lower than typically observed for photon IMRT but may be improved with improved planar dosimetry equipment and beam modeling techniques. In the meantime, patient-specific quality assurance measurements should rely more heavily on point dose measurements with tissue equivalent ionization chambers. No significant technical impediments are anticipated in the clinical implementation of IMNRT as described here.
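The gamma pass rates quoted above come from a gamma analysis comparing measured and planned dose distributions. A simplified 1-D global-gamma sketch with 3%/3 mm criteria (the dose profile is synthetic; clinical analyses are 2-D/3-D with interpolation and dose thresholds):

```python
import numpy as np

def gamma_pass_rate(measured, planned, positions, dose_tol=0.03, dta_mm=3.0):
    """1-D global gamma analysis sketch: for each measured point, gamma is
    the minimum over all planned points of the combined dose-difference /
    distance-to-agreement metric; doses are normalized to the planned
    maximum (global normalization).  Returns the fraction of points
    with gamma <= 1."""
    norm = planned.max()
    gammas = []
    for i, r in enumerate(positions):
        dd = (measured[i] - planned) / (dose_tol * norm)
        dr = (r - positions) / dta_mm
        gammas.append(np.sqrt(dd ** 2 + dr ** 2).min())
    gammas = np.array(gammas)
    return (gammas <= 1.0).mean()

x = np.linspace(0, 60, 121)                   # positions, mm
planned = np.exp(-((x - 30) / 12.0) ** 2)     # synthetic Gaussian dose profile
measured = planned * 1.01                     # 1% uniform scaling error
rate = gamma_pass_rate(measured, planned, x)
```

A 1% uniform error passes everywhere under a 3% dose tolerance; real IMNRT penumbra mismatches of the kind the abstract describes would drive the rate down toward the reported ~93%.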
Autonomous formation flying based on GPS — PRISMA flight results
NASA Astrophysics Data System (ADS)
D'Amico, Simone; Ardaens, Jean-Sebastien; De Florio, Sergio
2013-01-01
This paper presents flight results from the early harvest of the Spaceborne Autonomous Formation Flying Experiment (SAFE) conducted in the frame of the Swedish PRISMA technology demonstration mission. SAFE represents one of the first demonstrations in low Earth orbit of an advanced guidance, navigation and control system for dual-spacecraft formations. Innovative techniques based on differential GPS-based navigation and relative orbital elements control are validated and tuned in orbit to fulfill the typical requirements of future distributed scientific instruments for remote sensing.
Initial Transient in Zn-doped InSb Grown in Microgravity
NASA Technical Reports Server (NTRS)
Ostrogorsky, A G.; Marin, C.; Volz, M.; Duffar, T.
2009-01-01
Three Zn-doped InSb crystals were directionally solidified under microgravity conditions at the International Space Station (ISS) Alpha. The distribution of the Zn was measured using SIMS. A short diffusion-controlled transient, typical of systems with k greater than 1, was demonstrated. A static pressure of approximately 4000 N/m2 was imposed on the melt to prevent bubble formation and dewetting. Still, partial dewetting occurred in one experiment and apparently disturbed the diffusive transport of Zn in the melt.
An evaluation of descent strategies for TNAV-equipped aircraft in an advanced metering environment
NASA Technical Reports Server (NTRS)
Izumi, K. H.; Schwab, R. W.; Groce, J. L.; Coote, M. A.
1986-01-01
Investigated were the effects on system throughput and fleet fuel usage of arrival aircraft utilizing three 4D RNAV descent strategies (cost optimal, clean-idle Mach/CAS and constant descent angle Mach/CAS), both individually and in combination, in an advanced air traffic control metering environment. Results are presented for all mixtures of arrival traffic consisting of three Boeing commercial jet types and for all combinations of the three descent strategies for a typical en route metering airport arrival distribution.
Newton-like methods for Navier-Stokes solution
NASA Astrophysics Data System (ADS)
Qin, N.; Xu, X.; Richards, B. E.
1992-12-01
The paper reports on Newton-like methods, SFDN-alpha-GMRES and SQN-alpha-GMRES, that have been devised and proven to be powerful schemes for large nonlinear problems typical of viscous compressible Navier-Stokes solutions. They can be applied using a partially converged solution from a conventional explicit or approximate implicit method. Developments have included the efficient parallelization of the schemes on a distributed memory parallel computer. The methods are illustrated using a RISC workstation and a transputer parallel system, respectively, to solve a hypersonic vortical flow.
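Newton-Krylov methods of this family are available off the shelf today. A sketch applying SciPy's `newton_krylov` (with GMRES as the inner solver) to a toy nonlinear boundary value problem standing in for the Navier-Stokes residual; the problem and its parameters are invented for illustration:

```python
import numpy as np
from scipy.optimize import newton_krylov

def residual(u):
    """Residual of a toy 1-D nonlinear boundary value problem
    u'' = u**2 - 1 on a uniform grid with u(0) = u(1) = 0,
    discretized with second-order central differences."""
    n = len(u)
    h = 1.0 / (n + 1)
    upad = np.concatenate(([0.0], u, [0.0]))          # Dirichlet boundaries
    lap = (upad[:-2] - 2 * upad[1:-1] + upad[2:]) / h ** 2
    return lap - (u ** 2 - 1.0)

# A crude initial guess stands in for the "partially converged solution"
# the paper starts from; Jacobian-vector products are formed by finite
# differences internally, and each Newton step is solved with GMRES.
u0 = np.zeros(50)
sol = newton_krylov(residual, u0, method='gmres', f_tol=1e-10)
```

The attraction for CFD is the same as in the paper: no explicit Jacobian is ever assembled, so the scheme scales to large distributed-memory problems.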
The Effects of Soldier Gear Encumbrance on Restraints in a Frontal Crash Environment
2015-08-31
Slide excerpts: soldiers' gear poses a challenge in restraint system design that is not typical in the automotive world; the weight of the gear encumbrance may have a... Distribution Statement A: approved for public release. Test methodology: a modified rigid steel seat similar to the type used for ECE R16 compliance testing, with a non-deformable seat structure, shoulder restraints anchored to steel non-deformable D-rings, and a fifth-point restraint exiting through the seat.
Analysis of propellant feedline dynamics
NASA Technical Reports Server (NTRS)
Holster, J. L.; Astleford, W. J.; Gerlach, C. R.
1973-01-01
An analytical model and corresponding computer program for studying disturbances of liquid propellants in typical engine feedline systems were developed. The model includes the effects of steady turbulent mean flow, the influence of distributed compliances, the effects of local compliances, and various factors causing structural-hydraulic coupling. The computer program was set up such that the amplitude and phase of the terminal pressure/input excitation is calculated over any desired frequency range for an arbitrary assembly of various feedline components. A user's manual is included.
2010-06-01
A typical wall-mounted light switch is a single-pole, single-throw switch. A common industrial motor-start switch is a three-pole, single-throw switch.
Coordinating complex decision support activities across distributed applications
NASA Technical Reports Server (NTRS)
Adler, Richard M.
1994-01-01
Knowledge-based technologies have been applied successfully to automate planning and scheduling in many problem domains. Automation of decision support can be increased further by integrating task-specific applications with supporting database systems, and by coordinating interactions between such tools to facilitate collaborative activities. Unfortunately, the technical obstacles that must be overcome to achieve this vision of transparent, cooperative problem-solving are daunting. Intelligent decision support tools are typically developed for standalone use, rely on incompatible, task-specific representational models and application programming interfaces (APIs), and run on heterogeneous computing platforms. Getting such applications to interact freely calls for platform-independent capabilities for distributed communication, as well as tools for mapping information across disparate representations. Symbiotics is developing a layered set of software tools (called NetWorks!) for integrating and coordinating heterogeneous distributed applications. The top layer of tools consists of an extensible set of generic, programmable coordination services. Developers access these services via high-level APIs to implement the desired interactions between distributed applications.
NASA Technical Reports Server (NTRS)
Scudder, J. D.
1978-01-01
A detailed first principle kinetic theory for electrons which is neither a classical fluid treatment nor an exospheric calculation is presented. This theory illustrates the global and local properties of the solar wind expansion that shape the observed features of the electron distribution function, such as its bifurcation, its skewness and the differential temperatures of the thermal and suprathermal subpopulations. Coulomb collisions are substantial mediators of the interplanetary electron velocity distribution function and they place a zone for a bifurcation of the electron distribution function deep in the corona. The local cause and effect precept which permeates the physics of denser media is modified for electrons in the solar wind. The local form of transport laws and equations of state which apply to collision dominated plasmas are replaced with global relations that explicitly depend on the relative position of the observer to the boundaries of the system.
Random versus maximum entropy models of neural population activity
NASA Astrophysics Data System (ADS)
Ferrari, Ulisse; Obuchi, Tomoyuki; Mora, Thierry
2017-04-01
The principle of maximum entropy provides a useful method for inferring statistical mechanics models from observations in correlated systems, and is widely used in a variety of fields where accurate data are available. While the assumptions underlying maximum entropy are intuitive and appealing, its adequacy for describing complex empirical data has been little studied in comparison to alternative approaches. Here, data from the collective spiking activity of retinal neurons is reanalyzed. The accuracy of the maximum entropy distribution constrained by mean firing rates and pairwise correlations is compared to a random ensemble of distributions constrained by the same observables. For most of the tested networks, maximum entropy approximates the true distribution better than the typical or mean distribution from that ensemble. This advantage improves with population size, with groups as small as eight being almost always better described by maximum entropy. Failure of maximum entropy to outperform random models is found to be associated with strong correlations in the population.
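Fitting the pairwise maximum entropy (Ising-type) model can be sketched for a handful of neurons by exhaustive enumeration and gradient ascent on the log-likelihood. The target statistics below are invented for illustration, and this brute-force approach is only practical for small populations:

```python
import itertools
import numpy as np

def maxent_pairwise(means, corrs, lr=0.1, steps=20_000):
    """Fit P(s) ∝ exp(sum_i h_i s_i + sum_{i<j} J_ij s_i s_j) over binary
    states s in {0,1}^n to given firing rates and pairwise correlations
    <s_i s_j>, by gradient ascent: the gradient of the log-likelihood is
    (data moment - model moment) for each parameter."""
    n = len(means)
    states = np.array(list(itertools.product([0, 1], repeat=n)), dtype=float)
    h = np.zeros(n)
    J = np.zeros((n, n))          # only the upper triangle is used
    for _ in range(steps):
        logp = states @ h + np.einsum('ki,ij,kj->k', states, J, states)
        p = np.exp(logp - logp.max())
        p /= p.sum()
        model_means = p @ states
        model_corrs = np.einsum('k,ki,kj->ij', p, states, states)
        h += lr * (means - model_means)
        J += lr * np.triu(corrs - model_corrs, k=1)
    return h, J, p, states

# Target statistics from a hypothetical 3-neuron recording.
means = np.array([0.2, 0.5, 0.35])
corrs = np.array([[0.2, 0.15, 0.05],
                  [0.15, 0.5, 0.2],
                  [0.05, 0.2, 0.35]])
h, J, p, states = maxent_pairwise(means, corrs)
```

The comparison in the abstract replaces this uniquely determined maximum entropy distribution with a random ensemble of distributions matching the same moments, and asks which better approximates the true one.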
NASA Astrophysics Data System (ADS)
Cha, Moon Hoe
2007-02-01
The NearFar program is a package for carrying out an interactive nearside-farside decomposition of the heavy-ion elastic scattering amplitude. The program is implemented in Java to perform numerical operations on the nearside and farside angular distributions. It contains a graphical display interface for the numerical results. A test run has been applied to elastic O16+Si28 scattering at E=1503 MeV. Program summary: Title of program: NearFar. Catalogue identifier: ADYP_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADYP_v1_0. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Licensing provisions: none. Computers: designed for any machine capable of running Java; developed on a PC (Pentium 4). Operating systems under which the program has been tested: Microsoft Windows XP (Home Edition). Programming language used: Java. Number of bits in a word: 64. Memory required to execute with typical data: case dependent. Number of lines in distributed program, including test data, etc.: 3484. Number of bytes in distributed program, including test data, etc.: 142 051. Distribution format: tar.gz. Other software required: a Java runtime interpreter, or the Java Development Kit, version 5.0. Nature of physical problem: interactive nearside-farside decomposition of the heavy-ion elastic scattering amplitude. Method of solution: the user must supply an external data file or PPSM parameters from which theoretical values of the quantities to be decomposed are calculated. Typical running time: problem dependent; in a test run, about 35 s on a 2.40 GHz Intel Pentium 4 machine.
Garcia, Cristina Reyes; Manzi, Fatuma; Tediosi, Fabrizio; Hoffman, Stephen L.; James, Eric R.
2013-01-01
Typically, vaccines distributed through the Expanded Program on Immunization (EPI) use a 2–8 °C cold chain with 4–5 stops. The PfSPZ Vaccine comprises whole live-attenuated cryopreserved sporozoites stored in liquid nitrogen (LN2) vapor phase (LNVP) below −140 °C and would be distributed through a LNVP cold chain. The purpose of this study was to model LNVP cold chain distribution for the cryopreserved PfSPZ Vaccine in Tanzania, estimate the costs and compare these costs to those that would be incurred in distributing a ‘conventional’ malaria vaccine through the EPI. Capital and recurrent costs for storage, transportation, labor, energy usage and facilities were determined for the birth cohort in Tanzania over five years. Costs were calculated using WHO/UNESCO calculators. These were applied to a 2–8 °C distribution model with national, regional, district, and health facility levels, and for the cryopreserved vaccine using a ‘modified hub-and-spoke’ (MH-S) LNVP distribution system comprising a central national store, peripheral health facilities and an intermediate district-level transhipment stop. Estimated costs per fully immunized child (FIC) were $ 6.11 for the LNVP-distributed cryopreserved vaccine where the LN2 is generated, and $ 6.04 with purchased LN2 (assuming US $ 1.00/L). The FIC costs for distributing a conventional vaccine using the four level 2–8 °C cold chain were $ 6.10, and with a tariff distribution system as occurs in Tanzania the FIC cost was $ 5.53. The models, therefore, predicted little difference in 5-year distribution costs between the PfSPZ Vaccine distributed through a MH-S LNVP cold chain and a conventional vaccine distributed through the more traditional EPI system. A LNVP cold chain provides additional benefits through the use of durable dry shippers because no refrigerators, freezers or refrigerated trucks are required. 
Thus strain at the cold chain periphery, vaccine wastage from cold chain failures and the environmental impact of distribution would all be reduced. PMID:23146676
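The cost comparison above reduces to simple per-child arithmetic: annualized capital plus recurrent costs, divided by the number of fully immunized children. A minimal sketch of that structure in Python; the function name and all numbers below are illustrative placeholders, not the study's Tanzanian inputs.

```python
# Hypothetical cost-per-FIC calculation; inputs are placeholders, not study data.
def cost_per_fic(capital, recurrent_per_year, years, fully_immunized_children):
    total = capital + recurrent_per_year * years
    return total / fully_immunized_children

# Example: $1M capital plus $1M/year recurrent over 5 years, for 1M children.
example = cost_per_fic(1.0e6, 1.0e6, 5, 1.0e6)  # 6.0 dollars per FIC
```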
NASA Astrophysics Data System (ADS)
Mali, P.; Mukhopadhyay, A.; Manna, S. K.; Haldar, P. K.; Singh, G.
2017-03-01
Horizontal visibility graphs (HVGs) and the sandbox (SB) algorithm, usually applied for multifractal characterization of complex networks constructed from time-series measurements, are used here to characterize the fluctuations in pseudorapidity densities of singly charged particles produced in high-energy nucleus-nucleus collisions. Besides the degree distribution associated with event-wise pseudorapidity distributions, the common set of observables typical of any multifractality measurement is studied in 16O-Ag/Br and 32S-Ag/Br interactions, each at an incident laboratory energy of 200 GeV/nucleon. For a better understanding, we systematically compare the experiment with a Monte Carlo model simulation based on the Ultra-relativistic Quantum Molecular Dynamics (UrQMD) model. Our results suggest that the HVG-SB technique is an efficient tool for characterizing multifractality in multiparticle emission data and, in some cases, is even superior to other methods more commonly used in this regard.
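The HVG mapping named above has a compact definition: two points of the time series are linked when every sample between them lies strictly below both endpoints. A brute-force sketch under that standard criterion (function names are illustrative; the actual analysis pairs the resulting degree sequence with the sandbox algorithm to estimate multifractal exponents):

```python
# Brute-force horizontal visibility graph: points i < j are linked when every
# intermediate sample lies strictly below both endpoints (standard criterion).
def hvg_edges(series):
    n = len(series)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if all(series[k] < min(series[i], series[j]) for k in range(i + 1, j))]

def hvg_degrees(series):
    # Node degrees, from which the degree distribution is estimated.
    deg = [0] * len(series)
    for i, j in hvg_edges(series):
        deg[i] += 1
        deg[j] += 1
    return deg

edges = hvg_edges([3, 1, 2, 4])  # 5 links for this toy series
```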
NASA Technical Reports Server (NTRS)
Cebeci, T.; Kaups, K.; Ramsey, J. A.
1977-01-01
The method described utilizes a nonorthogonal coordinate system for boundary-layer calculations. It includes a geometry program that represents the wing analytically, and a velocity program that computes the external velocity components from a given experimental pressure distribution when the external velocity distribution is not computed theoretically. The boundary-layer method is general, however, and can also be used with a theoretically computed external velocity distribution. Several test cases were computed by this method, and the results were checked against other numerical calculations and against experiments when available. A typical computation time (CPU) on an IBM 370/165 computer for one surface of a wing, with roughly 30 spanwise stations, 25 streamwise stations, and 30 points across the boundary layer, is less than 30 seconds for an incompressible flow and slightly more for a compressible flow.
NASA Astrophysics Data System (ADS)
Gros, J.-B.; Kuhl, U.; Legrand, O.; Mortessagne, F.
2016-03-01
The effective Hamiltonian formalism is extended to vectorial electromagnetic waves in order to describe statistical properties of the field in reverberation chambers. The latter are commonly used in electromagnetic compatibility tests. As a first step, the distribution of wave intensities in chaotic systems with varying opening in the weak coupling limit for scalar quantum waves is derived by means of random matrix theory. In this limit the only parameters are the modal overlap and the number of open channels. Using the extended effective Hamiltonian, we describe the intensity statistics of the vectorial electromagnetic eigenmodes of lossy reverberation chambers. Finally, the typical quantity of interest in such chambers, namely, the distribution of the electromagnetic response, is discussed. By determining the distribution of the phase rigidity, describing the coupling to the environment, using random matrix numerical data, we find good agreement between the theoretical prediction and numerical calculations of the response.
NASA Technical Reports Server (NTRS)
Maxwell, Theresa G.; McNair, Ann R. (Technical Monitor)
2002-01-01
The planning processes for the International Space Station (ISS) Program are quite complex. Detailed mission planning for ISS on-orbit operations is a distributed function. Pieces of the on-orbit plan are developed by multiple planning organizations, located around the world, based on their respective expertise and responsibilities. The "pieces" are then integrated to yield the final detailed plan that will be executed onboard the ISS. Previous space programs have not distributed the planning and scheduling functions to this extent. Major ISS planning organizations are currently located in the United States (at both the NASA Johnson Space Center (JSC) and NASA Marshall Space Flight Center (MSFC)), in Russia, in Europe, and in Japan. Software systems have been developed by each of these planning organizations to support their assigned planning and scheduling functions. Although there is some cooperative development and sharing of key software components, each planning system has been tailored to meet the unique requirements and operational environment of the facility in which it operates. However, all the systems must operate in a coordinated fashion in order to effectively and efficiently produce a single integrated plan of ISS operations, in accordance with the established planning processes. This paper addresses lessons learned during the development of these multiple distributed planning systems, from the perspective of the developer of one of the software systems. The lessons focus on the coordination required to allow the multiple systems to operate together, rather than on the problems associated with the development of any particular system. 
Included in the paper is a discussion of typical problems faced during the development and coordination process, such as incompatible development schedules, difficulties in defining system interfaces, technical coordination and funding for shared tools, continually evolving planning concepts/requirements, programmatic and budget issues, and external influences. Techniques that mitigated some of these problems are also addressed, along with recommendations for any future programs involving the development of multiple planning and scheduling systems. Many of these lessons learned are not unique to the area of planning and scheduling systems, so they may be applied to other distributed ground systems that must operate in concert to successfully support space mission operations.
Analysis of digital communication signals and extraction of parameters
NASA Astrophysics Data System (ADS)
Al-Jowder, Anwar
1994-12-01
The signal classification performance of four types of electronic support measure (ESM) communications detection systems is compared from the standpoint of the unintended receiver (interceptor). Typical digital communication signals considered include binary phase shift keying (BPSK), quadrature phase shift keying (QPSK), frequency shift keying (FSK), and on-off keying (OOK). The analysis emphasizes the use of available signal processing software. Detection methods compared include broadband energy detection, FFT-based narrowband energy detection, and two correlation methods which employ the fast Fourier transform (FFT). The correlation methods utilize modified time-frequency distributions, where one of these is based on the Wigner-Ville distribution (WVD). Gaussian white noise is added to the signal to simulate various signal-to-noise ratios (SNRs).
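Of the detection methods compared above, the two energy detectors admit a short sketch (assumed textbook forms, not the thesis code): broadband detection integrates |x[n]|^2 over the whole record, while FFT-based narrowband detection distributes the same energy over frequency bins, where a tone stands out against the noise floor. A naive DFT stands in for the FFT here for brevity.

```python
import cmath
import math
import random

def dft(x):
    # Naive discrete Fourier transform (an FFT computes the same values).
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def broadband_energy(x):
    return sum(v * v for v in x)

def narrowband_energies(x):
    # By Parseval's theorem these bin energies sum to the broadband energy.
    return [abs(c) ** 2 / len(x) for c in dft(x)]

random.seed(1)
n = 64
# A tone at bin 8 in white Gaussian noise, as in the simulated SNR studies.
x = [math.cos(2 * math.pi * 8 * t / n) + random.gauss(0, 0.1) for t in range(n)]
bins = narrowband_energies(x)
peak = bins.index(max(bins))  # the tone appears at bin 8 or its mirror, 56
```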
Statistical analysis of field data for aircraft warranties
NASA Astrophysics Data System (ADS)
Lakey, Mary J.
Air Force and Navy maintenance data collection systems were researched to determine their scientific applicability to the warranty process. New and unique algorithms were developed to extract failure distributions, which were then used to characterize how selected families of equipment typically fail. Families of similar equipment were identified in terms of function, technology, and failure patterns. Statistical analyses and applications such as goodness-of-fit tests, maximum likelihood estimation, and derivation of confidence intervals for the probability density function parameters were applied to characterize the distributions and their failure patterns. Statistical and reliability theory relevant to equipment design and operational failures was also a determining factor in characterizing the failure patterns of the equipment families. Inferences about the families with relevance to warranty needs were then made.
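The distribution-fitting step described above can be illustrated with the simplest failure model; a sketch assuming an exponential family (the study fitted richer distributions to Air Force and Navy field data; all names and numbers here are illustrative):

```python
import math
import random

# Exponential failure model as the simplest stand-in for the fitting step:
# the MLE of the rate is lambda_hat = n / sum(t); an approximate 95% CI
# follows from the asymptotic variance lambda^2 / n.
def fit_exponential(failure_times):
    n = len(failure_times)
    lam = n / sum(failure_times)          # maximum likelihood estimate
    half = 1.96 * lam / math.sqrt(n)      # normal-approximation half-width
    return lam, (lam - half, lam + half)

random.seed(0)
times = [random.expovariate(0.5) for _ in range(2000)]  # synthetic field data
lam, (lo, hi) = fit_exponential(times)    # lam should be close to 0.5
```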
Decoy-state quantum key distribution with biased basis choice
Wei, Zhengchao; Wang, Weilong; Zhang, Zhen; Gao, Ming; Ma, Zhi; Ma, Xiongfeng
2013-01-01
We propose a quantum key distribution scheme that combines a biased basis choice with the decoy-state method. In this scheme, Alice sends all signal states in the Z basis and decoy states in the X and Z bases with certain probabilities, and Bob measures received pulses with an optimal basis choice. This scheme simplifies the system and reduces random-number consumption. From simulation results that take statistical fluctuations into account, we find that in a typical experimental setup the proposed scheme can increase the key rate by at least 45% compared to the standard decoy-state scheme. In the postprocessing, we also apply a rigorous method to upper bound the phase error rate of the single-photon components of the signal states. PMID:23948999
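Part of the gain from a biased basis choice is elementary sifting arithmetic: the two sides' bases match with probability p_z^2 + (1 - p_z)^2, versus 1/2 in unbiased BB84. A sketch (p_z = 0.9 is an illustrative bias, not the paper's optimized parameter):

```python
# Basis-match (sifting) probability: unbiased BB84 keeps half the pulses;
# a biased choice p_z keeps p_z^2 + (1 - p_z)^2 of them, approaching 1
# as p_z -> 1.
def sift_rate(p_z):
    return p_z ** 2 + (1 - p_z) ** 2

standard = sift_rate(0.5)   # 0.5: half the pulses survive sifting
biased = sift_rate(0.9)     # 0.82
gain = biased / standard    # 1.64x more sifted bits per pulse sent
```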
Domain generality vs. modality specificity: The paradox of statistical learning
Frost, Ram; Armstrong, Blair C.; Siegelman, Noam; Christiansen, Morten H.
2015-01-01
Statistical learning is typically considered to be a domain-general mechanism by which cognitive systems discover the underlying distributional properties of the input. Recent studies examining whether there are commonalities in the learning of distributional information across different domains or modalities consistently reveal, however, modality and stimulus specificity. An important question is, therefore, how and why a hypothesized domain-general learning mechanism systematically produces such effects. We offer a theoretical framework according to which statistical learning is not a unitary mechanism, but a set of domain-general computational principles, that operate in different modalities and therefore are subject to the specific constraints characteristic of their respective brain regions. This framework offers testable predictions and we discuss its computational and neurobiological plausibility. PMID:25631249
Linking brain, mind and behavior.
Makeig, Scott; Gramann, Klaus; Jung, Tzyy-Ping; Sejnowski, Terrence J; Poizner, Howard
2009-08-01
Cortical brain areas and dynamics evolved to organize motor behavior in our three-dimensional environment also support more general human cognitive processes. Yet traditional brain imaging paradigms typically allow and record only minimal participant behavior, then reduce the recorded data to single map features of averaged responses. To more fully investigate the complex links between distributed brain dynamics and motivated natural behavior, we propose the development of wearable mobile brain/body imaging (MoBI) systems that continuously capture the wearer's high-density electrical brain and muscle signals, three-dimensional body movements, audiovisual scene and point of regard, plus new data-driven analysis methods to model their interrelationships. The new imaging modality should allow new insights into how spatially distributed brain dynamics support natural human cognition and agency.
A Methodology for Quantifying Certain Design Requirements During the Design Phase
NASA Technical Reports Server (NTRS)
Adams, Timothy; Rhodes, Russel
2005-01-01
A methodology for developing and balancing quantitative design requirements for safety, reliability, and maintainability has been proposed. Conceived as the basis of a more rational approach to the design of spacecraft, the methodology would also be applicable to the design of automobiles, washing machines, television receivers, or almost any other commercial product. Heretofore, it has been common practice to start by determining the requirements for reliability of elements of a spacecraft or other system to ensure a given design life for the system. Next, safety requirements are determined by assessing the total reliability of the system and adding redundant components and subsystems necessary to attain safety goals. As thus described, common practice leaves the maintainability burden to chance; therefore, there is no control of recurring costs or of the responsiveness of the system. The means that have been used in assessing maintainability have been oriented toward determining the logistical sparing of components so that the components are available when needed. The process established for developing and balancing quantitative requirements for safety (S), reliability (R), and maintainability (M) derives and integrates NASA's top-level safety requirements and the controls needed to obtain program key objectives for safety and recurring cost (see figure). Being quantitative, the process conveniently uses common mathematical models. Even though the process is shown as being worked from the top down, it can also be worked from the bottom up. This process uses three math models: (1) the binomial distribution (greater-than-or-equal-to case), (2) reliability for a series system, and (3) the Poisson distribution (less-than-or-equal-to case). The zero-fail case of the binomial distribution approximates the commonly known exponential or "constant failure rate" distribution. Either model can be used.
The binomial distribution was selected for modeling flexibility because it conveniently addresses both the zero-fail and failure cases. The failure case is typically used for unmanned spacecraft as with missiles.
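The three math models named above have standard closed forms; a minimal sketch of each (textbook distributions only, not NASA's actual tool):

```python
import math

def binomial_at_least(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p); the greater-than-or-equal-to case."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def series_reliability(rs):
    """A series system works only if every component works: R = product(r_i)."""
    out = 1.0
    for r in rs:
        out *= r
    return out

def poisson_at_most(k, mean):
    """P(X <= k) for X ~ Poisson(mean); the less-than-or-equal-to case."""
    return sum(math.exp(-mean) * mean**i / math.factorial(i) for i in range(k + 1))

# Zero-fail binomial case: P(X = 0) = (1 - p)^n, which matches the constant-
# failure-rate (exponential) form e^(-lambda*t) when p = lambda*dt over n steps.
zero_fail = 1.0 - binomial_at_least(10, 1, 0.01)  # = 0.99**10
```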
Summary of Tactile User Interfaces Techniques and Systems
NASA Technical Reports Server (NTRS)
Spirkovska, Lilly
2005-01-01
Mental workload can be defined as the ratio of demand to allocated resources. Multiple-resource theory stresses the importance of distributing tasks and information across various human sensory channels to reduce mental workload. One sensory channel that has been of interest since the late 1800s is touch. Unlike the more typical displays that target vision or hearing, tactile displays present information to the user's sense of touch. We present a summary of different methods for tactile display, historic and more recent systems that incorporate tactile display for information presentation, advantages and disadvantages of targeting the tactile channel, and future directions in tactile display research.
Localization in momentum space of ultracold atoms in incommensurate lattices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larcher, M.; Dalfovo, F.; Modugno, M.
2011-01-15
We characterize the disorder-induced localization in momentum space for ultracold atoms in one-dimensional incommensurate lattices, according to the dual Aubry-Andre model. For low disorder the system is localized in momentum space, and the momentum distribution exhibits time-periodic oscillations of the relative intensity of its components. The behavior of these oscillations is explained by means of a simple three-mode approximation. We predict their frequency and visibility by using typical parameters of feasible experiments. Above the transition the system diffuses in momentum space, and the oscillations vanish when averaged over different realizations, offering a clear signature of the transition.
Attributes of the Federal Energy Management Program's Federal Site Building Characteristics Database
DOE Office of Scientific and Technical Information (OSTI.GOV)
Loper, Susan A.; Sandusky, William F.
2010-12-31
Typically, the Federal building stock is referred to as a group of about one-half million buildings throughout the United States. Additional information beyond this level is generally limited to distribution of that total by agency and maybe distribution of the total by state. However, additional characterization of the Federal building stock is required as the Federal sector seeks ways to implement efficiency projects to reduce energy and water use intensity as mandated by legislation and Executive Order. Using a Federal facility database that was assembled for use in a geographic information system tool, additional characterization of the Federal building stock is provided, including information regarding the geographical distribution of sites, building counts and percentage of total by agency, distribution of sites and building totals by agency, distribution of building count and floor space by Federal building type classification by agency, and rank ordering of sites, buildings, and floor space by state. A case study is provided regarding how the building stock has changed for the Department of Energy from 2000 through 2008.
Probabilistic distance-based quantizer design for distributed estimation
NASA Astrophysics Data System (ADS)
Kim, Yoon Hak
2016-12-01
We consider an iterative design of independently operating local quantizers at nodes that must cooperate, without interaction, to achieve application objectives for distributed estimation systems. As a new cost function we suggest a probabilistic distance between the posterior distribution and its quantized version, expressed as the Kullback-Leibler (KL) divergence. We first show that minimizing the KL divergence in the cyclic generalized Lloyd design framework is equivalent to maximizing the logarithm of the quantized posterior distribution on average, which can be further reduced computationally in our iterative design. We propose an iterative design algorithm that seeks to maximize a simplified version of the quantized posterior distribution, and argue that our algorithm converges to a global optimum due to the convexity of the cost function and generates the most informative quantized measurements. We also provide an independent encoding technique that enables minimization of the cost function and can be efficiently simplified for practical use in power-constrained nodes. We finally demonstrate, through extensive experiments, a clear advantage in estimation performance compared with typical designs and with novel design techniques previously published.
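The cyclic generalized Lloyd framework referred to above alternates a nearest-codepoint partition with a centroid update. A scalar sketch using MSE distortion for concreteness (the paper's contribution is replacing this distortion with the KL-divergence cost; function and variable names are illustrative):

```python
# One-dimensional generalized Lloyd iteration with MSE distortion.
def lloyd(samples, levels, iters=50):
    samples = sorted(samples)
    lo, hi = samples[0], samples[-1]
    # Spread initial codepoints evenly across the data range.
    code = [lo + (hi - lo) * (i + 0.5) / levels for i in range(levels)]
    for _ in range(iters):
        # Partition step: assign each sample to its nearest codepoint.
        cells = [[] for _ in range(levels)]
        for x in samples:
            j = min(range(levels), key=lambda i: (x - code[i]) ** 2)
            cells[j].append(x)
        # Update step: move each codepoint to its cell's centroid.
        code = [sum(c) / len(c) if c else code[i] for i, c in enumerate(cells)]
    return code

# Two well-separated clusters converge to their means.
code = lloyd([0.0, 0.1, 0.2, 10.0, 10.1, 10.2], levels=2)
```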
Underestimating extreme events in power-law behavior due to machine-dependent cutoffs
NASA Astrophysics Data System (ADS)
Radicchi, Filippo
2014-11-01
Power-law distributions are typical macroscopic features occurring in almost all complex systems observable in nature. As a result, researchers in quantitative analyses must often generate random synthetic variates obeying power-law distributions. The task is usually performed through standard methods that map uniform random variates into the desired probability space. Whereas all these algorithms are theoretically solid, in this paper we show that they are subject to severe machine-dependent limitations. As a result, two dramatic consequences arise: (i) the sampling in the tail of the distribution is not random but deterministic; (ii) the moments of the sample distribution, which are theoretically expected to diverge as functions of the sample sizes, converge instead to finite values. We provide quantitative indications for the range of distribution parameters that can be safely handled by standard libraries used in computational analyses. Whereas our findings indicate possible reinterpretations of numerical results obtained through flawed sampling methodologies, they also pave the way for the search for a concrete solution to this central issue shared by all quantitative sciences dealing with complexity.
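The machine-dependent cutoff described above is easy to exhibit with the standard inverse-transform sampler: since a double-precision uniform cannot get closer to 1 than about 2^-53, the largest variate the generator can ever emit is finite and deterministic. A sketch (assuming p(x) proportional to x^(-alpha) for x >= xmin):

```python
import random

# Inverse-transform sampling of p(x) ~ x^(-alpha), x >= xmin:
#   x = xmin * (1 - u)^(-1/(alpha - 1)),  u uniform in [0, 1).
def power_law_sample(alpha, xmin=1.0, u=None):
    if u is None:
        u = random.random()
    return xmin * (1.0 - u) ** (-1.0 / (alpha - 1.0))

# The largest double below 1.0 is 1 - 2^-53, so the tail is truncated at a
# deterministic, machine-dependent cutoff rather than extending to infinity.
u_max = 1.0 - 2.0 ** -53
cutoff = power_law_sample(2.5, u=u_max)  # ~4.3e10 for alpha = 2.5, xmin = 1
```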
Triplets of galaxies: Their dynamics, evolution, and the origin of chaos in them
NASA Technical Reports Server (NTRS)
Chernin, Arthur D.; Ivanov, Alexei V.
1990-01-01
Recently, Karachentsev's group at the Special Astrophysical Observatory (SAO, the 6-meter telescope observatory) published a list of 84 triple systems of galaxies with their distances, radial (line-of-sight) velocities, and angular sizes (Karachentseva et al., 1988). This provides new ground for studies of the dark matter problem, filling the gap between the large cosmic scales (White, 1987; Dekel and Rees, 1987; Einasto et al., 1977) and the scale of individual galaxies (Erickson et al., 1987). The data on the typical velocity dispersions and linear dimensions of the triplets indicate that they contain considerable amounts of dark matter (see also the earlier work of Karachentseva et al., 1979). Numerical simulations show that the statistical characteristics of the Karachentsev triplets can be imitated by model ensembles of triple systems with dark matter masses M_d = (1-3) x 10^12 M_sun, almost ten times greater than the typical mass of stellar galaxies estimated by the standard mass-to-luminosity ratio (Kiseleva and Chernin, 1988). Here, the authors report that important information can be drawn from the data on the visible configurations of these systems. The statistics of configurations provide independent evidence for dark matter in the triplets; moreover, they enable one to argue that the dark matter seems to be distributed over the whole volume of the typical triplet, forming its common corona, rather than concentrated within the individual coronae (or haloes) of the member galaxies.
Meiofauna hotspot in the Atacama Trench, eastern South Pacific Ocean
NASA Astrophysics Data System (ADS)
Danovaro, R.; Gambi, C.; Della Croce, N.
2002-05-01
Meiofaunal assemblages were investigated (in terms of abundance, biomass, individual size and community structure) at bathyal and hadal depths (from 1050 to 7800 m) in the Atacama Trench in the upwelling sector of the eastern South Pacific Ocean, in relation to the distribution and availability of potential food sources (phytopigments, biochemical compounds and bacterial biomass) in this highly productive region. Meiofaunal density and biomass in the Atacama Trench were one to two orders of magnitude higher than values reported in other "oligotrophic" hadal systems. The Atacama Trench presented very high concentrations of nutritionally rich organic matter at 7800-m depth and displayed characteristics typical of eutrophic systems. Surprisingly, despite a decrease in chlorophyll-a and organic matter concentrations of about 50% from bathyal to hadal depths, meiofaunal abundance in hadal sediments was 10-fold higher than at bathyal depths. As indicated by the higher protein to carbohydrate ratio observed in trench sediments, the extraordinarily high meiofaunal density reported in the Atacama Trench was more dependent upon organic matter quality than on its quantity. The trophic richness of the system was reflected by a shift of the size structure of the benthic organisms. In contrast with typical trends of deep-sea systems, the ratio of bacterial to meiofaunal biomass decreased with increasing depth and, in the Atacama Trench, meiofaunal biomass largely dominated total benthic biomass. Nematodes at 7800-m depth accounted for more than 80% of total density and about 50% of total meiofaunal biomass. In hadal sediments a clear meiofaunal dwarfism was observed: the individual body size of nematodes and other taxa was reduced by 30-40% compared to individuals collected at bathyal depths. The peculiarity of this trophic-rich system allows rejection of previous hypotheses, which explained deep-sea dwarfism by the extremely oligotrophic conditions typical of deep-sea regions.
Nature of multiple-nucleus cluster galaxies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Merritt, D.
1984-05-01
In models for the evolution of galaxy clusters which include dynamical friction with the dark binding matter, the distribution of galaxies becomes more concentrated to the cluster center with time. In a cluster like Coma, this evolution could increase by a factor of approximately 3 the probability of finding a galaxy very close to the cluster center, without decreasing the typical velocity of such a galaxy significantly below the cluster mean. Such an enhancement is roughly what is needed to explain the large number of first-ranked cluster galaxies which are observed to have extra "nuclei"; it is also consistent with the high velocities typically measured for these "nuclei." Unlike the cannibalism model, this model predicts that the majority of multiple-nucleus systems are transient phenomena, and not galaxies in the process of merging.
Precision Pointing in Space Using Arrays of Shape Memory Based Linear Actuators
NASA Astrophysics Data System (ADS)
Sonawane, Nikhil
Space systems such as communication satellites, earth observation satellites and telescopes require accurate pointing to observe fixed targets over prolonged time. These systems typically use reaction wheels to slew the spacecraft and gimballing systems containing motors to achieve precise pointing. Motor-based actuators have limited life as they contain moving parts that require lubrication in space. Alternate methods have utilized piezoelectric actuators. This paper presents shape memory alloy (SMA) actuators for control of a deployable antenna placed on a satellite. The SMAs are operated as a series of distributed linear actuators. These distributed linear actuators are not prone to single-point failures, and although each individual actuator is imprecise due to hysteresis and temperature variation, the system as a whole achieves reliable results. The SMAs can be programmed to perform a series of periodic motions and operate as a mechanical guidance system that is not prone to damage from radiation or space weather. Efforts are focused on developing a system that can achieve 1 degree pointing accuracy at first, with an ultimate goal of achieving a few arc seconds accuracy. A bench-top model of the actuator system has been developed, and work is underway to test the system under vacuum. A demonstration flight of the technology is planned aboard a CubeSat.
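The idea that an array of individually imprecise actuators can deliver reliable net motion can be illustrated with a toy simulation (not from the paper; the Gaussian noise model and the 5 µm imprecision figure are assumptions for illustration). Each actuator's stroke error averages out across the array, so the net error shrinks roughly as 1/sqrt(N):

```python
import random
import statistics

def actuator_stroke(command_um, imprecision_um=5.0):
    """One SMA linear actuator: delivers the commanded stroke plus a
    zero-mean Gaussian error standing in for hysteresis and thermal drift."""
    return command_um + random.gauss(0.0, imprecision_um)

def array_stroke(command_um, n_actuators):
    """Effective stroke of a distributed array: the individual errors
    average out, so the net error shrinks roughly as 1/sqrt(N)."""
    return statistics.mean(actuator_stroke(command_um)
                           for _ in range(n_actuators))

random.seed(0)
for n in (1, 16, 64):
    errors = [abs(array_stroke(100.0, n) - 100.0) for _ in range(2000)]
    print(f"N={n:3d}  mean |error| = {statistics.mean(errors):.2f} um")
```

The same averaging argument is why the abstract can claim system-level reliability from imprecise parts; the actual SMA error distribution is of course not exactly Gaussian.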
Jain, A.; Deguet, A.; Iordachita, I.; Chintalapani, G.; Vikal, S.; Blevins, J.; Le, Y.; Armour, E.; Burdette, C.; Song, D.; Fichtinger, G.
2015-01-01
Purpose Brachytherapy (radioactive seed insertion) has emerged as one of the most effective treatment options for patients with prostate cancer, with the added benefit of a convenient outpatient procedure. The main limitation in contemporary brachytherapy is faulty seed placement, predominantly due to the presence of intra-operative edema (tissue expansion). Though currently not available, the capability to intra-operatively monitor the seed distribution can make a significant improvement in cancer control. We present such a system here. Methods Intra-operative measurement of edema in prostate brachytherapy requires localization of inserted radioactive seeds relative to the prostate. Seeds were reconstructed using a typical non-isocentric C-arm, and exported to a commercial brachytherapy treatment planning system. Technical obstacles for 3D reconstruction on a non-isocentric C-arm include pose-dependent C-arm calibration; distortion correction; pose estimation of C-arm images; seed reconstruction; and C-arm to TRUS registration. Results In precision-machined hard phantoms with 40–100 seeds and soft tissue phantoms with 45–87 seeds, we correctly reconstructed the seed implant shape with an average 3D precision of 0.35 mm and 0.24 mm, respectively. In a DoD Phase-1 clinical trial on six patients with 48–82 planned seeds, we achieved intra-operative monitoring of seed distribution and dosimetry, correcting for dose inhomogeneities by inserting an average of over four additional seeds in the six enrolled patients (minimum 1; maximum 9). Additionally, in each patient, the system automatically detected intra-operative seed migration induced due to edema (mean 3.84 mm, STD 2.13 mm, Max 16.19 mm). Conclusions The proposed system is the first of its kind that makes intra-operative detection of edema (and subsequent re-optimization) possible on any typical non-isocentric C-arm, at negligible additional cost to the existing clinical installation. It achieves a significantly more homogeneous seed distribution and has the potential to effect a paradigm shift in clinical practice. Large scale studies and commercialization are currently underway. PMID:21168357
Devices development and techniques research for space life sciences
NASA Astrophysics Data System (ADS)
Zhang, A.; Liu, B.; Zheng, C.
The development process and current status of devices and techniques for space life science in China, and the main research results in this field achieved by the Shanghai Institute of Technical Physics (SITP), CAS, are reviewed concisely in this paper. Based on an analysis of the requirements of devices and techniques for supporting space life science experiments and research, a design concept is put forward: developing different intelligent modules with professional functions and standard interfaces that are easy to integrate into a system. The realization of an experiment system with intelligent distributed control based on a field bus is discussed in three hierarchies. Typical sensing or control function cells with a degree of autonomy in control, data management and communication are designed and developed; these are called intelligent agents. A digital hardware network system consisting of the distributed agents as intelligent nodes is constructed with normative, open field-bus technology. Multitask, real-time control application software is developed in an embedded RTOS environment and implanted into the system hardware, so that a space life science experiment platform characterized by multitask, multi-course, professional and rapid integration can be constructed.
Northward dispersal of sea kraits (Laticauda semifasciata) beyond their typical range
Park, Jaejin; Kim, Il-Hun; Fong, Jonathan J.; Koo, Kyo-Soung; Choi, Woo-Jin; Tsai, Tein-Shun
2017-01-01
Marine reptiles are declining globally, and recent climate change may be a contributing factor. The study of sea snakes collected beyond their typical distribution range provides valuable insight on how climate change affects marine reptile populations. Recently, we collected 12 Laticauda semifasciata (11 females, 1 male) from the waters around southern South Korea—an area located outside its typical distribution range (Japan, China including Taiwan, Philippines and Indonesia). We investigated the genetic origin of Korean specimens by analyzing mitochondrial cytochrome b gene (Cytb) sequences. Six individuals shared haplotypes with a group found in Taiwan-southern Ryukyu Islands, while the remaining six individuals shared haplotypes with a group encompassing the entire Ryukyu Archipelago. These results suggest L. semifasciata moved into Korean waters from the Taiwan-Ryukyu region via the Taiwan Warm Current and/or the Kuroshio Current, with extended survival facilitated by ocean warming. We highlight several contributing factors that increase the chances that L. semifasciata establishes new northern populations beyond the original distribution range. PMID:28644894
NASA Astrophysics Data System (ADS)
Tick, G. R.; Wei, S.; Sun, H.; Zhang, Y.
2016-12-01
Pore-scale heterogeneity, NAPL distribution, and sorption/desorption processes can significantly affect aqueous phase elution and mass flux in porous media systems. The application of a scale-independent fractional derivative model (tFADE) was used to simulate elution curves for a series of columns (5 cm, 7 cm, 15 cm, 25 cm, and 80 cm) homogeneously packed with 20/30-mesh sand and distributed with uniform saturations (7-24%) of NAPL phase trichloroethene (TCE). An additional set of columns (7 cm and 25 cm) was packed with a heterogeneous distribution of quartz sand upon which TCE was emplaced by imbibing the immiscible liquid, under stable displacement conditions, to simulate a spill-type process. The tFADE model was better able to represent experimental elution behavior for systems that exhibited extensive long-term concentration tailing, while requiring far fewer parameters than typical multi-rate mass transfer (MRMT) models. However, the tFADE model was not able to effectively simulate the entire elution curve for systems with short concentration tailing periods, since it assumes a power-law distribution for the TCE dissolution rate. Such limitations may be addressed using the tempered fractional derivative model, which can capture the single-rate mass transfer process and therefore the short elution concentration tailing behavior. Numerical solution of the tempered fractional-derivative model in bounded domains, however, remains a challenge and requires further study. Nevertheless, the tFADE model shows excellent promise for understanding impacts on concentration elution behavior in systems where physical heterogeneity, non-uniform NAPL distribution, and pronounced sorption-desorption effects dominate or are present.
NASA Technical Reports Server (NTRS)
Packard, Michael H.
2002-01-01
Probabilistic Structural Analysis (PSA) is now commonly used for predicting the distribution of time/cycles to failure of turbine blades and other engine components. These distributions are typically based on fatigue/fracture and creep failure modes of these components. Additionally, reliability analysis is used for taking test data related to particular failure modes and calculating failure rate distributions of electronic and electromechanical components. How can these individual failure time distributions of structural, electronic and electromechanical component failure modes be effectively combined into a top level model for overall system evaluation of component upgrades, changes in maintenance intervals, or line replaceable unit (LRU) redesign? This paper shows an example of how various probabilistic failure predictions for turbine engine components can be evaluated and combined to show their effect on overall engine performance. A generic model of a turbofan engine was modeled using various Probabilistic Risk Assessment (PRA) tools (Quantitative Risk Assessment Software (QRAS) etc.). Hypothetical PSA results for a number of structural components along with mitigation factors that would restrict the failure mode from propagating to a Loss of Mission (LOM) failure were used in the models. The output of this program includes an overall failure distribution for LOM of the system. The rank and contribution to the overall Mission Success (MS) is also given for each failure mode and each subsystem. This application methodology demonstrates the effectiveness of PRA for assessing the performance of large turbine engines. Additionally, the effects of system changes and upgrades, the application of different maintenance intervals, inclusion of new sensor detection of faults and other upgrades were evaluated in determining overall turbine engine reliability.
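The core step described above, combining individual failure-time distributions of several failure modes (with mitigation factors) into a top-level Loss-of-Mission distribution, can be sketched with a Monte-Carlo simulation. The failure modes, their Weibull/exponential parameters, and the propagation probabilities below are entirely hypothetical, not values from the paper:

```python
import random

random.seed(1)

# Hypothetical failure modes: (name, sampler for time-to-failure in hours,
# probability the mode, if it occurs, propagates to a Loss of Mission).
failure_modes = [
    ("turbine blade fatigue",  lambda: random.weibullvariate(9000, 3.0), 0.4),
    ("blade creep",            lambda: random.weibullvariate(12000, 2.5), 0.3),
    ("controller electronics", lambda: random.expovariate(1 / 20000), 0.9),
]

def system_time_to_lom():
    """Earliest failure time among the modes that actually propagate to a
    Loss of Mission; modes mitigated before LOM are ignored."""
    times = [draw() for name, draw, p_prop in failure_modes
             if random.random() < p_prop]
    return min(times) if times else float("inf")

samples = [system_time_to_lom() for _ in range(50000)]
p_lom = sum(t < 5000 for t in samples) / len(samples)
print(f"P(LOM within 5000 h) = {p_lom:.3f}")
```

Ranking the modes by how often they are the earliest propagating failure would give the per-mode contribution to Mission Success that the abstract mentions.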
Application of Bayesian Classification to Content-Based Data Management
NASA Technical Reports Server (NTRS)
Lynnes, Christopher; Berrick, S.; Gopalan, A.; Hua, X.; Shen, S.; Smith, P.; Yang, K-Y.; Wheeler, K.; Curry, C.
2004-01-01
The high volume of Earth Observing System data has proven to be challenging to manage for data centers and users alike. At the Goddard Earth Sciences Distributed Active Archive Center (GES DAAC), about 1 TB of new data are archived each day. Distribution to users is also about 1 TB/day. A substantial portion of this distribution is MODIS calibrated radiance data, which has a wide variety of uses. However, much of the data is not useful for a particular user's needs: for example, ocean color users typically need oceanic pixels that are free of cloud and sun-glint. The GES DAAC is using a simple Bayesian classification scheme to rapidly classify each pixel in the scene in order to support several experimental content-based data services for near-real-time MODIS calibrated radiance products (from Direct Readout stations). Content-based subsetting would allow distribution of, say, only clear pixels to the user if desired. Content-based subscriptions would distribute data to users only when they fit the user's usability criteria in their area of interest within the scene. Content-based cache management would retain more useful data on disk for easy online access. The classification may even be exploited in an automated quality assessment of the geolocation product. Though initially to be demonstrated at the GES DAAC, these techniques have applicability in other resource-limited environments, such as spaceborne data systems.
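A minimal sketch of the kind of per-pixel Bayesian classification described above, here a Gaussian naive Bayes over two radiance bands separating clear ocean from cloud. The class statistics, priors, and band values are invented for illustration, not the GES DAAC's actual training data:

```python
import math

# Hypothetical per-class (mean, std) statistics for two radiance bands,
# as might be estimated from labelled training pixels.
CLASS_STATS = {
    "clear_ocean": [(0.05, 0.02), (0.03, 0.015)],
    "cloud":       [(0.60, 0.15), (0.55, 0.14)],
}
PRIORS = {"clear_ocean": 0.7, "cloud": 0.3}

def log_gauss(x, mu, sigma):
    """Log density of a 1-D Gaussian."""
    return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))

def classify(pixel):
    """Naive Bayes: assume bands are conditionally independent and pick
    the class maximising log prior + sum of per-band log likelihoods."""
    def score(cls):
        return math.log(PRIORS[cls]) + sum(
            log_gauss(x, mu, s) for x, (mu, s) in zip(pixel, CLASS_STATS[cls]))
    return max(CLASS_STATS, key=score)

print(classify((0.04, 0.03)))  # dark pixel: clear_ocean
print(classify((0.65, 0.58)))  # bright pixel: cloud
```

The appeal for near-real-time subsetting is that this scoring is a handful of arithmetic operations per pixel, cheap enough to run on every scene as it arrives.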
Co-delivery of chemotherapeutics and proteins for synergistic therapy.
He, Chaoliang; Tang, Zhaohui; Tian, Huayu; Chen, Xuesi
2016-03-01
Combination therapy with chemotherapeutics and protein therapeutics, typically cytokines and antibodies, has become a crucial approach to synergistic cancer treatment. However, conventional approaches based on simultaneous administration of free chemotherapeutic drugs and proteins limit further optimization of the synergistic effects, due to the distinct in vivo pharmacokinetics and distribution of small drugs and proteins, insufficient tumor selectivity and tumor accumulation, unpredictable drug/protein ratios at tumor sites, short half-lives, and serious systemic adverse effects. Consequently, to obtain optimal synergistic anti-tumor efficacy, considerable efforts have been devoted to developing co-delivery systems that co-incorporate chemotherapeutics and proteins into a single carrier system and subsequently release the dual or multiple payloads at desired target sites in a more controllable manner. The co-delivery systems result in markedly enhanced blood stability and in vivo half-lives of the small drugs and proteins, elevated tumor accumulation, as well as the capability of delivering the multiple agents to the same target sites with rational drug/protein ratios, which may facilitate maximizing the synergistic effects and therefore lead to optimal antitumor efficacy. This review emphasizes the recent advances in co-delivery systems for chemotherapeutics and proteins, typically cytokines and antibodies, for systemic or localized synergistic cancer treatment. Moreover, the proposed mechanisms responsible for the synergy of chemotherapeutic drugs and proteins are discussed. Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Heinkenschloss, Matthias
2005-01-01
We study a class of time-domain decomposition-based methods for the numerical solution of large-scale linear quadratic optimal control problems. Our methods are based on a multiple shooting reformulation of the linear quadratic optimal control problem as a discrete-time optimal control (DTOC) problem. The optimality conditions for this DTOC problem lead to a linear block tridiagonal system. The diagonal blocks are invertible and are related to the original linear quadratic optimal control problem restricted to smaller time-subintervals. This motivates the application of block Gauss-Seidel (GS)-type methods for the solution of the block tridiagonal systems. Numerical experiments show that the spectral radii of the block GS iteration matrices are larger than one for typical applications, but that the eigenvalues of the iteration matrices decay to zero fast. Hence, while the GS method is not expected to converge for typical applications, it can be effective as a preconditioner for Krylov-subspace methods. This is confirmed by our numerical tests. A byproduct of this research is the insight that certain instantaneous control techniques can be viewed as the application of one step of the forward block GS method applied to the DTOC optimality system.
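The forward block Gauss-Seidel sweep on a block tridiagonal system can be sketched as follows. The toy system below is diagonally dominant so the sweep converges outright; in the paper's setting the same sweep would instead be applied once per Krylov iteration as a preconditioner. Sizes and matrix entries are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
nb, m = 6, 4  # number of diagonal blocks, block size

# Assemble a block tridiagonal system whose invertible diagonal blocks
# mimic the structure of the DTOC optimality conditions.
D = [4 * np.eye(m) + 0.1 * rng.standard_normal((m, m)) for _ in range(nb)]
L = [0.2 * rng.standard_normal((m, m)) for _ in range(nb - 1)]  # sub-diagonal
U = [0.2 * rng.standard_normal((m, m)) for _ in range(nb - 1)]  # super-diagonal
b = [rng.standard_normal(m) for _ in range(nb)]

def block_gs_sweep(x):
    """One forward block Gauss-Seidel sweep: each diagonal block is solved
    against the most recent values of its neighbouring blocks."""
    x = [xi.copy() for xi in x]
    for i in range(nb):
        r = b[i].copy()
        if i > 0:
            r -= L[i - 1] @ x[i - 1]
        if i < nb - 1:
            r -= U[i] @ x[i + 1]
        x[i] = np.linalg.solve(D[i], r)
    return x

x = [np.zeros(m) for _ in range(nb)]
for _ in range(100):
    x = block_gs_sweep(x)
```

Each block solve corresponds to an optimal control problem on one time-subinterval, which is what makes the method attractive for parallel time-domain decomposition.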
Wood crib fire free burning test in ISO room
NASA Astrophysics Data System (ADS)
Qiang, Xu; Griffin, Greg; Bradbury, Glenn; Dowling, Vince
2006-04-01
In research on the application potential of water mist fire suppression systems for fire fighting in train luggage carriages, a series of experiments was conducted in an ISO room on wood crib fires with and without water mist actuation. The results of the free burn tests without water mist suppression are used as a reference in evaluating the efficiency of the water mist suppression system. As part of the free burn tests, several tests were done under the hood of the ISO room to calibrate the size of the crib fire; these tests can also be used in analyzing the wall effect in room fire hazard. In these free burning experiments, wood cribs of four sizes were tested under the hood. The temperature of the crib fire, heat flux around the fire, and gas concentration in the hood of the ISO room were measured in the experiments, and two thermal imaging systems were used to obtain the temperature distribution and the typical shape of the free burning flames. From the experiments, the radiation intensity at specific positions around the fire, the effective heat of combustion, mass loss, oxygen consumption rate, typical structure of the flame and self-extinguishment time were obtained for each crib size.
Spatial averaging for small molecule diffusion in condensed phase environments
NASA Astrophysics Data System (ADS)
Plattner, Nuria; Doll, J. D.; Meuwly, Markus
2010-07-01
Spatial averaging is a new approach for sampling rare-event problems. The approach modifies the importance function which improves the sampling efficiency while keeping a defined relation to the original statistical distribution. In this work, spatial averaging is applied to multidimensional systems for typical problems arising in physical chemistry. They include (I) a CO molecule diffusing on an amorphous ice surface, (II) a hydrogen molecule probing favorable positions in amorphous ice, and (III) CO migration in myoglobin. The systems encompass a wide range of energy barriers and for all of them spatial averaging is found to outperform conventional Metropolis Monte Carlo. It is also found that optimal simulation parameters are surprisingly similar for the different systems studied, in particular, the radius of the point cloud over which the potential energy function is averaged. For H2 diffusing in amorphous ice it is found that facile migration is possible which is in agreement with previous suggestions from experiment. The free energy barriers involved are typically lower than 1 kcal/mol. Spatial averaging simulations for CO in myoglobin are able to locate all currently characterized metastable states. Overall, it is found that spatial averaging considerably improves the sampling of configurational space.
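The essence of spatial averaging, replacing the potential in the Metropolis acceptance test by its mean over a point cloud around the current position, can be sketched on a 1-D double well (my own toy stand-in for the rugged landscapes above; cloud radius, inverse temperature, and step size are illustrative assumptions, not the optimized parameters of the paper):

```python
import math
import random

def V(x):
    """A one-dimensional double well standing in for a rugged potential."""
    return (x**2 - 1.0) ** 2

def V_avg(x, radius=0.4, n_cloud=10):
    """Spatially averaged potential: the mean of V over a random point
    cloud of given radius around x. Averaging smooths the barrier."""
    return sum(V(x + random.uniform(-radius, radius))
               for _ in range(n_cloud)) / n_cloud

def metropolis(pot, steps=20000, beta=8.0, step=0.2, seed=3):
    """Metropolis sampling of exp(-beta * pot); returns the number of
    transitions between the two wells as a crude mobility measure."""
    rng = random.Random(seed)
    x, side, crossings = -1.0, -1.0, 0
    for _ in range(steps):
        y = x + rng.uniform(-step, step)
        dv = pot(y) - pot(x)
        if dv <= 0 or rng.random() < math.exp(-beta * dv):
            x = y
        if x * side < 0:  # entered the opposite well
            crossings, side = crossings + 1, -side
    return crossings

random.seed(0)
print("plain potential crossings:   ", metropolis(V))
print("averaged potential crossings:", metropolis(V_avg))
```

Because the averaged surface has lower barriers and raised well bottoms, the averaged run typically records more well-to-well crossings, which is the sampling-efficiency gain the abstract reports, while the defined relation back to the original distribution is what keeps the method exact in principle.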
Elegent—An elastic event generator
NASA Astrophysics Data System (ADS)
Kašpar, J.
2014-03-01
Although elastic scattering of nucleons may look like a simple process, it presents a long-lasting challenge for theory. Due to the missing hard energy scale, perturbative QCD cannot be applied. Instead, many phenomenological/theoretical models have emerged. In this paper we present a unified implementation of some of the most prominent models in a C++ library, moreover extended to account for effects of the electromagnetic interaction. The library is complemented with a number of utilities, for instance programs to sample many distributions of interest in four-momentum transfer squared, t, impact parameter, b, and collision energy √s. These distributions at ISR, Spp̄S, RHIC, Tevatron and LHC energies are available for download from the project web site, both as ROOT files and as PDF figures providing comparisons among the models. The package also includes a tool for Monte-Carlo generation of elastic scattering events, which can easily be embedded in any other program framework.
Catalogue identifier: AERT_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AERT_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU General Public License, version 3
No. of lines in distributed program, including test data, etc.: 10551
No. of bytes in distributed program, including test data, etc.: 126316
Distribution format: tar.gz
Programming language: C++
Computer: Any in principle; tested on x86-64 architecture
Operating system: Any in principle; tested on GNU/Linux
RAM: Strongly depends on the task, but typically below 20 MB
Classification: 11.6
External routines: ROOT, HepMC
Nature of problem: Monte-Carlo simulation of elastic nucleon-nucleon collisions
Solution method: Implementation of some of the most prominent phenomenological/theoretical models, providing a cumulative distribution function that is used for random event generation.
Running time: Strongly depends on the task, but typically below 1 h.
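The solution method named above, random event generation from a cumulative distribution function, is inverse-transform sampling. A minimal sketch for a purely exponential differential cross-section dσ/dt ∝ exp(-B|t|) (a common elastic-scattering parameterization used here only for illustration; the slope value B is an assumption, and Elegent's actual model CDFs are far richer):

```python
import math
import random

B = 20.0  # hypothetical forward slope, dsigma/dt ~ exp(-B * |t|)

def cdf(t_abs):
    """Cumulative distribution of |t| for the exponential cross-section,
    normalised to 1."""
    return 1.0 - math.exp(-B * t_abs)

def sample_t(u):
    """Inverse-transform sampling: map a uniform u in [0, 1) through the
    inverse CDF so |t| follows the desired distribution."""
    return -math.log(1.0 - u) / B

random.seed(2)
draws = [sample_t(random.random()) for _ in range(100000)]
mean_t = sum(draws) / len(draws)
print(f"mean |t| = {mean_t:.4f}  (analytic value 1/B = {1 / B:.4f})")
```

For model CDFs without a closed-form inverse, the same idea applies with a numerically tabulated and inverted CDF.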
Size distribution and coating thickness of black carbon from the Canadian oil sands operations
NASA Astrophysics Data System (ADS)
Cheng, Yuan; Li, Shao-Meng; Gordon, Mark; Liu, Peter
2018-02-01
Black carbon (BC) plays an important role in the Earth's climate system. However, parameterizations of BC size and mixing state have not been well addressed in aerosol-climate models, introducing substantial uncertainties into the estimation of radiative forcing by BC. In this study, we focused on BC emissions from the oil sands (OS) surface mining activities in northern Alberta, based on an aircraft campaign conducted over the Athabasca OS region in 2013. A total of 14 flights were made over the OS source area, in which the aircraft was typically flown in a four- or five-sided polygon pattern along flight tracks encircling an OS facility. Another 3 flights were performed downwind of the OS source area, each of which involved at least three intercepting locations where the well-mixed OS plume was measured along flight tracks perpendicular to the wind direction. Comparable size distributions were observed for refractory black carbon (rBC) over and downwind of the OS facilities, with rBC mass median diameters (MMDs) between ~135 and 145 nm that were characteristic of fresh urban emissions. This MMD range corresponded to rBC number median diameters (NMDs) of ~60-70 nm, approximately 100% higher than the NMD settings in some aerosol-climate models. The typical in- and out-of-plume segments of a flight, which had different rBC concentrations and photochemical ages, showed consistent rBC size distributions in terms of MMD, NMD and the corresponding distribution widths. Moreover, rBC size distributions remained unchanged at different downwind distances from the source area, suggesting that atmospheric aging would not necessarily change rBC size distribution. However, aging indeed influenced rBC mixing state. Coating thickness for rBC cores in the diameter range of 130-160 nm was nearly doubled (from ~20 to 40 nm) within 3 h when the OS plume was transported over a distance of 90 km from the source area.
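The correspondence between the reported MMD and NMD ranges follows from the Hatch-Choate relation for a lognormal size distribution, MMD = NMD · exp(3 ln²σg). A short check (the geometric standard deviation σg = 1.65 is my own assumption chosen as typical for fresh combustion rBC, not a value from the paper):

```python
import math

def nmd_from_mmd(mmd_nm, sigma_g):
    """Hatch-Choate relation for a lognormal size distribution:
    MMD = NMD * exp(3 * ln(sigma_g)**2), solved here for NMD."""
    return mmd_nm / math.exp(3 * math.log(sigma_g) ** 2)

sigma_g = 1.65  # assumed geometric standard deviation
for mmd in (135.0, 145.0):
    print(f"MMD {mmd:.0f} nm -> NMD {nmd_from_mmd(mmd, sigma_g):.0f} nm")
```

With this σg the 135-145 nm MMD range maps to roughly 64-68 nm NMD, consistent with the ~60-70 nm range quoted in the abstract.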
Light distribution modulated diffuse reflectance spectroscopy.
Huang, Pin-Yuan; Chien, Chun-Yu; Sheu, Chia-Rong; Chen, Yu-Wen; Tseng, Sheng-Hao
2016-06-01
Typically, a diffuse reflectance spectroscopy (DRS) system employing a continuous wave light source would need to acquire diffuse reflectances measured at multiple source-detector separations for determining the absorption and reduced scattering coefficients of turbid samples. This results in a multi-fiber probe structure and an indefinite probing depth. Here we present a novel DRS method that can utilize a few diffuse reflectances measured at one source-detector separation for recovering the optical properties of samples. The core innovation is a liquid crystal (LC) cell whose scattering property can be modulated by the bias voltage. By placing the LC cell between the light source and the sample, the spatial distribution of light in the sample can be varied as the scattering property of the LC cell is modulated by the bias voltage, and this induces intensity variation of the collected diffuse reflectance. From a series of Monte Carlo simulations and phantom measurements, we found that this new light distribution modulated DRS (LDM DRS) system was capable of accurately recovering the absorption and scattering coefficients of turbid samples, and that its probing depth varied by less than 3% over the full bias voltage variation range. Our results suggest that this LDM DRS platform could be developed into various low-cost, efficient, and compact systems for in-vivo superficial tissue investigation.
Evaluation of uptake and distribution of gold nanoparticles in solid tumors
NASA Astrophysics Data System (ADS)
England, Christopheri G.; Gobin, André M.; Frieboes, Hermann B.
2015-11-01
Although nanotherapeutics offer a targeted and potentially less toxic alternative to systemic chemotherapy in cancer treatment, nanotherapeutic transport is typically hindered by abnormal characteristics of tumor tissue. Once nanoparticles targeted to tumor cells arrive in the circulation of tumor vasculature, they must extravasate from irregular vessels and diffuse through the tissue to ideally reach all malignant cells in cytotoxic concentrations. The enhanced permeability and retention effect can be leveraged to promote extravasation of appropriately sized particles from tumor vasculature; however, therapeutic success remains elusive partly due to inadequate intra-tumoral transport promoting heterogeneous nanoparticle uptake and distribution. Irregular tumor vasculature not only hinders particle transport but also sustains hypoxic tissue regions with quiescent cells, which may be unaffected by cycle-dependent chemotherapeutics released from nanoparticles and thus regrow tumor tissue following nanotherapy. Furthermore, a large proportion of systemically injected nanoparticles may become sequestered by the reticulo-endothelial system, resulting in overall diminished efficacy. We review recent work evaluating the uptake and distribution of gold nanoparticles in pre-clinical tumor models, with the goal of helping improve nanotherapy outcomes. We also examine the potential role of novel layered gold nanoparticles designed to address some of these critical issues, assessing their uptake and transport in cancerous tissue.
Li, Yu-Hong; Hu, Hong-You; Liu, Jing-Chun; Wu, Gui-Lan
2010-03-01
The distribution, mobility and potential risks of Cu, Zn, and Pb in four typical plant-sediment co-systems of the Quanzhou Bay estuary wetland in southeast China were investigated using a sequential extraction procedure. The results show that the sediments were moderately or heavily contaminated with Zn and Pb in all four plant communities, and the plant-sediment systems could act as a sink for the heavy metals. In all investigated sediments, only a small proportion of the measured heavy metals was distributed in the exchangeable and carbonate fractions, while the reducible fraction contained the highest amount of Zn and Pb of the total readily bioavailable fractions, and the oxidizable fraction exhibited a higher retention capacity for Zn and Cu, but lower for Pb. Alternanthera philoxeroides had the best ability to accumulate heavy metals among the four species. Phragmites communis was quite tolerant to Zn and Pb and had a good capability to transfer Zn and Pb. Aegiceras corniculatum seems to be effective in resisting heavy metal pollution, and therefore cannot serve as an indicator of contamination. There is an urgent need for local enterprises to adopt cleaner production technologies to reduce pollutant emissions and achieve resource-efficient, environmentally friendly development.
Shannon information entropy in heavy-ion collisions
NASA Astrophysics Data System (ADS)
Ma, Chun-Wang; Ma, Yu-Gang
2018-03-01
The general idea of information entropy provided by C.E. Shannon "hangs over everything we do" and can be applied to a great variety of problems once the connection between a distribution and the quantities of interest is found. The Shannon information entropy essentially quantifies the information carried by a quantity through its specific distribution, and information-entropy-based methods have been developed extensively in many scientific areas, including physics. The dynamical nature of the heavy-ion collision (HIC) process makes the study of nuclear matter and its evolution difficult and complex; here Shannon information entropy can provide new methods and observables to understand the physical phenomena both theoretically and experimentally. To better understand the processes of HICs, the main characteristics of typical models, including quantum molecular dynamics models, thermodynamic models, and statistical models, are briefly introduced. Typical applications of Shannon information theory in HICs are collected; these cover the chaotic behavior in the branching process of hadron collisions, the liquid-gas phase transition in HICs, and the isobaric difference scaling phenomenon for intermediate mass fragments produced in HICs of neutron-rich systems. Even though the present applications in heavy-ion collision physics are still relatively simple, they already shed light on key open questions. It is suggested to further develop information entropy methods within nuclear reaction models, as well as to develop new analysis methods to study the properties of nuclear matter in HICs, especially the evolution of the dynamical system.
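The quantity at the center of the review is the Shannon entropy of a discrete distribution, H = -Σ p_i log p_i. A minimal sketch (the example distributions are illustrative, not fragment distributions from HIC data):

```python
import math

def shannon_entropy(probabilities, base=2):
    """Shannon information entropy H = -sum(p * log p) of a discrete
    distribution; terms with p = 0 contribute nothing (0 * log 0 := 0)."""
    return -sum(p * math.log(p, base) for p in probabilities if p > 0)

# A uniform distribution carries maximal entropy; a peaked one carries less.
print(shannon_entropy([0.25] * 4))            # 2.0 bits
print(shannon_entropy([0.7, 0.1, 0.1, 0.1]))  # fewer bits
```

In HIC applications the p_i would be, for example, normalized fragment yields, and the entropy then serves as the observable tracking phenomena such as the liquid-gas phase transition.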
Adsorbed radioactivity and radiographic imaging of surfaces of stainless steel and titanium
NASA Astrophysics Data System (ADS)
Jung, Haijo
1997-11-01
Type 304 stainless steel, used for typical surface materials of spent fuel shipping casks, and titanium were exposed in the spent fuel storage pool of a typical PWR power plant. Adsorption characteristics, effectiveness of decontamination by water cleaning and by electrocleaning, and swipe effectiveness on the metal surfaces were studied. A variety of environmental conditions were manipulated to simulate the potential 'weeping' phenomenon that often occurs with spent fuel shipping casks during transit. In a previous study, few heterogeneous effects of adsorbed contamination onto metal surfaces were observed. Radiographic images of cask surfaces made in this study, however, clearly showed heterogeneous activity distributions. Acquired radiographic images were digitized, further analyzed with an image analysis computer package, and compared to calibrated images obtained using standard sources. The measurements of activity distribution using the radiographic image method were consistent with those obtained using an HPGe detector. This radiographic image method was used to study the effects of electrocleaning for total and specified areas. The Modulation Transfer Function (MTF) of a film-screen system in contact with a radioactive metal surface was studied with neutron-activated gold foils and showed broader resolution properties than general diagnostic x-ray film-screen systems. The microstructures of normal areas and hot spots differed significantly, and one hot spot appearing as a dot on the film image consisted of several small hot spots (about 10 μm in diameter). These hot spots were observed as structural defects of the metal surfaces.
Photovoltaic module mounting system
Miros, Robert H. J.; Mittan, Margaret Birmingham; Seery, Martin N; Holland, Rodney H
2012-09-18
A solar array mounting system having unique installation, load distribution, and grounding features, and which is adaptable for mounting solar panels having no external frame. The solar array mounting system includes flexible, pedestal-style feet and structural links connected in a grid formation on the mounting surface. The photovoltaic modules are secured in place via the use of attachment clamps that grip the edge of the typically glass substrate. The panel mounting clamps are then held in place by tilt brackets and/or mid-link brackets that provide fixation for the clamps and align the solar panels at a tilt to the horizontal mounting surface. The tilt brackets are held in place atop the flexible feet and connected link members thus creating a complete mounting structure.
Photovoltaic module mounting system
Miros, Robert H. J. [Fairfax, CA; Mittan, Margaret Birmingham [Oakland, CA; Seery, Martin N [San Rafael, CA; Holland, Rodney H [Novato, CA
2012-04-17
A solar array mounting system having unique installation, load distribution, and grounding features, and which is adaptable for mounting solar panels having no external frame. The solar array mounting system includes flexible, pedestal-style feet and structural links connected in a grid formation on the mounting surface. The photovoltaic modules are secured in place via the use of attachment clamps that grip the edge of the typically glass substrate. The panel mounting clamps are then held in place by tilt brackets and/or mid-link brackets that provide fixation for the clamps and align the solar panels at a tilt to the horizontal mounting surface. The tilt brackets are held in place atop the flexible feet and connected link members thus creating a complete mounting structure.
The concept of the mechanically active guideway as a novel approach to maglev
NASA Technical Reports Server (NTRS)
Horwath, T. G.
1992-01-01
A maglev system that is suitable for operation in the United States will have to meet unique requirements which determine the major systems characteristics. Maglev configurations presently developed in Germany and Japan are based on conventional maglev concepts and as such do not meet all of the requirements. A novel maglev guideway concept is introduced as a solution. This concept, the mechanically active guideway, is articulated in three degrees of freedom and assumes system functions which normally reside in the maglev vehicle. The mechanically active guideway contains spatially distributed actuators which are energized under computer control at the time of vehicle passage to achieve bank angle adjustment and ride quality control. A typical realization of the concept is outlined.
Balancing Hydronic Systems in Multifamily Buildings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruch, R.; Ludwig, P.; Maurer, T.
2014-07-01
In multifamily hydronic systems, temperature imbalance may be caused by undersized piping, improperly adjusted balancing valves, inefficient water temperature and flow levels, and owner/occupant interaction with the boilers, distribution, and controls. The effects of imbalance include tenant discomfort, higher energy use intensity, and inefficient building operation. This paper explores cost-effective distribution upgrades and balancing measures in multifamily hydronic systems, providing a resource to contractors, auditors, and building owners on best practices to improve tenant comfort and lower operating costs. The research was conducted by the Partnership for Advanced Residential Retrofit (PARR) in conjunction with Elevate Energy. The team surveyed existing knowledge on cost-effective retrofits for optimizing distribution in typical multifamily hydronic systems, with the aim of identifying common situations and solutions, and then conducted case studies on two Chicago-area buildings with known balancing issues in order to quantify the extent of temperature imbalance. At one of these buildings a booster pump was installed on a loop to an underheated wing of the building. This study found that unit temperature in a multifamily hydronic building can vary by as much as 61 degrees F, particularly if windows are opened or tenants use intermittent supplemental heating sources like oven ranges. The average temperature spread at the building as a result of this retrofit decreased from 22.1 degrees F to 15.5 degrees F.
Multi-client quantum key distribution using wavelength division multiplexing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grice, Warren P; Bennink, Ryan S; Earl, Dennis Duncan
Quantum Key Distribution (QKD) exploits the rules of quantum mechanics to generate and securely distribute a random sequence of bits to two spatially separated clients. Typically a QKD system can support only a single pair of clients at a time, so a separate quantum link is required for every pair of users. We overcome this limitation with the design and characterization of a multi-client entangled-photon QKD system with the capacity for up to 100 clients simultaneously. The time-bin entangled QKD system includes a broadband down-conversion source with two unique features that enable the multi-user capability. First, the photons are emitted across a very large portion of the telecom spectrum. Second, and more importantly, the photons are strongly correlated in their energy degree of freedom. Using standard wavelength division multiplexing (WDM) hardware, the photons can be routed to different parties on a quantum communication network, while the strong spectral correlations ensure that each client is linked only to the client receiving the conjugate wavelength. In this way, a single down-conversion source can support dozens of channels simultaneously--and to the extent that the WDM hardware can send different spectral channels to different clients, the system can support multiple client pairings. We describe the design and characterization of the down-conversion source, as well as the client stations, which must be tunable across the emission spectrum.
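The conjugate-wavelength routing can be sketched in a few lines. This is an assumption-laden illustration, not the paper's implementation: it assumes a symmetric WDM channel grid centered on half the pump frequency, so that energy conservation pairs a channel with its mirror image about the spectrum center.

```python
def conjugate_channel(ch, n_channels=100):
    """Energy conservation in down-conversion means signal and idler
    frequencies sum to the pump frequency, so channel ch on a symmetric
    WDM grid pairs with its mirror image about the spectrum center.
    (Grid layout is an illustrative assumption, not from the paper.)"""
    return n_channels - 1 - ch

# Each of 50 clients on the low half of the grid is linked only to the
# client holding the conjugate channel on the high half.
pairings = {ch: conjugate_channel(ch) for ch in range(50)}
```

The pairing is an involution: applying it twice returns the original channel, which is what guarantees each client is linked to exactly one partner.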
A study of subsurface wastewater infiltration systems for distributed rural sewage treatment.
Qin, Wei; Dou, Junfeng; Ding, Aizhong; Xie, En; Zheng, Lei
2014-08-01
Three types of subsurface wastewater infiltration systems (SWIS) were developed to study the efficiency of organic pollutant removal from distributed rural sewage under various conditions. Of the three different layered substrate systems, the one with the greatest amount of decomposed cow dung (5%) and soil (DCDS) showed the highest removal efficiency with respect to total nitrogen (TN), while the others showed no significant difference. The TN removal efficiency increased with an increasing filling height of DCDS. Compared with the TN removal efficiency of 25% in the system without DCDS, the removal efficiency of the systems in which DCDS filled half and one fourth of the height was increased by 72% and 31%, respectively. Based on seasonal variations in the discharge of a typical rural family, the SWIS were run at three different hydraulic loads of 6.5, 13, and 20 cm/d. The results illustrated that SWIS could perform well at any of the given hydraulic loads. Trials using different inlet configurations showed that the effluent concentrations of the contaminants in the system operating in a multiple-inlet mode were much lower than in the system operated under single-inlet conditions. The effluent concentration of a pilot-scale plant achieved the level III criteria specified by the Surface Water Quality Standard at the initial stage.
Displacements of Metallic Thermal Protection System Panels During Reentry
NASA Technical Reports Server (NTRS)
Daryabeigi, Kamran; Blosser, Max L.; Wurster, Kathryn E.
2006-01-01
Bowing of metallic thermal protection systems for reentry of a previously proposed single-stage-to-orbit reusable launch vehicle was studied. The outer layer of current metallic thermal protection system concepts typically consists of a honeycomb panel made of a high-temperature nickel alloy. During portions of reentry when the thermal protection system is exposed to rapidly varying heating rates, a significant temperature gradient develops across the honeycomb panel thickness, causing the panel to bow. The deformations of the honeycomb panel increase the roughness of the outer mold line of the vehicle, which could cause premature boundary layer transition and significantly higher downstream heating rates. The aerothermal loads and parameters for three locations on the centerline of the windward side of this vehicle were calculated using an engineering code. The transient temperature distributions through a metallic thermal protection system were obtained using 1-D finite volume thermal analysis, and the resulting displacements of the thermal protection system were calculated. The maximum deflection of the thermal protection system throughout the reentry trajectory was 6.4 mm. The maximum ratio of deflection to boundary layer thickness was 0.032. Based on previously developed distributed roughness correlations, it was concluded that these deflections will not trip the hypersonic boundary layer.
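The 1-D finite volume thermal analysis mentioned above can be sketched as an explicit time-marching scheme. This is a generic minimal sketch, not the paper's code: boundary handling (applied heat flux on the front face, adiabatic back face) and all parameter values are illustrative assumptions.

```python
def step_temperatures(T, alpha, dx, dt, q_surface, k):
    """One explicit finite-volume step of 1-D transient conduction.
    The front cell receives an applied heat flux q_surface (W/m^2);
    the back face is adiabatic. Stable for alpha*dt/dx**2 <= 0.5.
    (Boundary treatment is an illustrative assumption.)"""
    r = alpha * dt / dx ** 2
    new = T[:]
    # heated front cell: flux input plus conduction exchange with neighbor
    new[0] = T[0] + r * (T[1] - T[0]) + alpha * dt * q_surface / (k * dx)
    for i in range(1, len(T) - 1):
        new[i] = T[i] + r * (T[i - 1] - 2 * T[i] + T[i + 1])
    # adiabatic back cell: exchange with interior neighbor only
    new[-1] = T[-1] + r * (T[-2] - T[-1])
    return new

# Illustrative values (not from the paper): a 10 mm panel split into 5 cells
T = [300.0] * 5
for _ in range(100):
    T = step_temperatures(T, alpha=1e-5, dx=0.002, dt=0.1,
                          q_surface=5e4, k=20.0)
```

After a rapid heating pulse the front cells run hotter than the back, which is exactly the through-thickness gradient that drives the panel bowing studied in the paper.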
Hybrid modeling and empirical analysis of automobile supply chain network
NASA Astrophysics Data System (ADS)
Sun, Jun-yan; Tang, Jian-ming; Fu, Wei-ping; Wu, Bing-ying
2017-05-01
Based on the connection mechanism of nodes which automatically select upstream and downstream agents, a simulation model for dynamic evolutionary process of consumer-driven automobile supply chain is established by integrating ABM and discrete modeling in the GIS-based map. Firstly, the rationality is proved by analyzing the consistency of sales and changes in various agent parameters between the simulation model and a real automobile supply chain. Second, through complex network theory, hierarchical structures of the model and relationships of networks at different levels are analyzed to calculate various characteristic parameters such as mean distance, mean clustering coefficients, and degree distributions. By doing so, it verifies that the model is a typical scale-free network and small-world network. Finally, the motion law of this model is analyzed from the perspective of complex self-adaptive systems. The chaotic state of the simulation system is verified, which suggests that this system has typical nonlinear characteristics. This model not only macroscopically illustrates the dynamic evolution of complex networks of automobile supply chain but also microcosmically reflects the business process of each agent. Moreover, the model construction and simulation of the system by means of combining CAS theory and complex networks supplies a novel method for supply chain analysis, as well as theory bases and experience for supply chain analysis of auto companies.
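The characteristic network parameters named above (degree distributions, mean clustering coefficients) can be computed directly from an edge list. A minimal stdlib sketch, with a toy graph standing in for the simulated supply chain network:

```python
from collections import defaultdict

def degree_distribution(edges):
    """Histogram {degree: node count} of an undirected graph."""
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    hist = defaultdict(int)
    for d in deg.values():
        hist[d] += 1
    return dict(hist)

def mean_clustering(edges):
    """Mean local clustering coefficient: the fraction of each node's
    neighbor pairs that are themselves connected, averaged over nodes."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    coeffs = []
    for nbrs in adj.values():
        k = len(nbrs)
        if k < 2:
            coeffs.append(0.0)
            continue
        links = sum(1 for a in nbrs for b in nbrs if a < b and b in adj[a])
        coeffs.append(2 * links / (k * (k - 1)))
    return sum(coeffs) / len(coeffs)
```

A scale-free diagnosis would then follow from checking whether the degree histogram decays roughly as a power law; a small-world one from a high mean clustering coefficient combined with a short mean distance.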
Localization of phonons in mass-disordered alloys: A typical medium dynamical cluster approach
Jarrell, Mark; Moreno, Juana; Raja Mondal, Wasim; ...
2017-07-20
The effect of disorder on lattice vibrational modes has been a topic of interest for several decades. In this article, we employ a Green's function based approach, namely, the dynamical cluster approximation (DCA), to investigate phonons in mass-disordered systems. Detailed benchmarks with previous exact calculations are used to validate the method in a wide parameter space. An extension of the method, namely, the typical medium DCA (TMDCA), is used to study Anderson localization of phonons in three dimensions. We show that, for binary isotopic disorder, lighter impurities induce localized modes beyond the bandwidth of the host system, while heavier impurities lead to a partial localization of the low-frequency acoustic modes. For a uniform (box) distribution of masses, the physical spectrum is shown to develop long tails comprising mostly localized modes. The mobility edge separating extended and localized modes, obtained through the TMDCA, agrees well with results from the transfer matrix method. A reentrance behavior of the mobility edge with increasing disorder is found that is similar to, but somewhat more pronounced than, the behavior in disordered electronic systems. Our work establishes a computational approach, which recovers the thermodynamic limit, is versatile and computationally inexpensive, to investigate lattice vibrations in disordered lattice systems.
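The "typical medium" idea behind the TMDCA rests on a simple statistical observation: the arithmetic mean of the local density of states (LDOS) stays finite under localization, while the geometric (typical) mean collapses to zero. A minimal numeric sketch with hypothetical LDOS values, not output of the actual TMDCA:

```python
import math

def arithmetic_mean(rho):
    """Average LDOS: dominated by rare sites with large weight."""
    return sum(rho) / len(rho)

def typical_mean(rho, eps=1e-300):
    """Geometric mean of the LDOS: vanishes when the distribution
    develops weight at (near) zero, signalling localized modes."""
    return math.exp(sum(math.log(max(r, eps)) for r in rho) / len(rho))

# Hypothetical LDOS samples at one frequency (illustrative only):
extended = [0.9, 1.1, 1.0, 1.05, 0.95]     # every site sees finite LDOS
localized = [5.0, 1e-8, 1e-8, 1e-8, 1e-8]  # weight piles on one site
```

For the extended-like sample the two means nearly coincide; for the localized-like sample the typical mean is orders of magnitude below the arithmetic one, which is precisely the order parameter the TMDCA tracks to locate the mobility edge.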
Spectra of conditionalization and typicality in the multiverse
NASA Astrophysics Data System (ADS)
Azhar, Feraz
2016-02-01
An approach to testing theories describing a multiverse that has gained interest of late involves comparing theory-generated probability distributions over observables with their experimentally measured values. It is likely that such distributions, were we indeed able to calculate them unambiguously, would assign low probabilities to any such experimental measurements. An alternative to thereby rejecting these theories is to conditionalize the distributions involved by restricting attention to domains of the multiverse in which we might arise. In order to elicit a crisp prediction, however, one needs to make a further assumption about how typical we are of the chosen domains. In this paper, we investigate interactions between the spectra of available assumptions regarding both conditionalization and typicality, and draw out the effects of these interactions in a concrete setting; namely, on predictions of the total number of species that contribute significantly to dark matter. In particular, for each conditionalization scheme studied, we analyze how correlations between densities of different dark matter species affect the prediction, and explicate the effects of assumptions regarding typicality. We find that the effects of correlations can depend on the conditionalization scheme, and that in each case atypicality can significantly change the prediction. In doing so, we demonstrate the existence of overlaps in the predictions of different "frameworks" consisting of conjunctions of theory, conditionalization scheme, and typicality assumption. This conclusion highlights the acute challenges involved in using such tests to identify a preferred framework that aims to describe our observational situation in a multiverse.
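The mechanics of conditionalization and typicality can be made concrete with a toy discrete distribution. Everything here is illustrative: the distribution over the number of dark-matter species is invented, not derived from any theory in the paper.

```python
def conditionalize(dist, domain):
    """Restrict a distribution over an observable to a chosen domain
    of the multiverse and renormalize (a 'conditionalization scheme')."""
    sub = {x: p for x, p in dist.items() if x in domain}
    z = sum(sub.values())
    return {x: p / z for x, p in sub.items()}

def prediction_at_typicality(dist, q):
    """Value at cumulative probability q. q = 0.5 encodes the assumption
    that we are typical; q near 0 or 1 encodes atypicality."""
    cum = 0.0
    for x in sorted(dist):
        cum += dist[x]
        if cum >= q:
            return x
    return max(dist)

# Hypothetical distribution over the number of dark-matter species:
dist = {1: 0.05, 2: 0.15, 3: 0.30, 4: 0.30, 5: 0.20}
```

Varying the domain passed to `conditionalize` and the quantile `q` sweeps out exactly the "spectra" of framework choices the paper studies, and different (scheme, typicality) pairs can land on the same prediction, which is the overlap phenomenon described above.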
Statistical Features of Complex Systems ---Toward Establishing Sociological Physics---
NASA Astrophysics Data System (ADS)
Kobayashi, Naoki; Kuninaka, Hiroto; Wakita, Jun-ichi; Matsushita, Mitsugu
2011-07-01
Complex systems have recently attracted much attention, both in natural sciences and in sociological sciences. Members constituting a complex system evolve through nonlinear interactions among each other. This means that in a complex system the multiplicative experience or, so to speak, the history of each member produces its present characteristics. If attention is paid to any statistical property in any complex system, the lognormal distribution is the most natural and appropriate among the standard or "normal" statistics to overview the whole system. In fact, the lognormality emerges rather conspicuously when we examine, as familiar and typical examples of statistical aspects in complex systems, the nursing-care period for the aged, populations of prefectures and municipalities, and our body height and weight. Many other examples are found in nature and society. On the basis of these observations, we discuss the possibility of sociological physics.
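The claim that "multiplicative experience" produces lognormality follows from the central limit theorem applied to logarithms: a product of many i.i.d. growth factors has a log that is a sum of i.i.d. terms, hence approximately normal. A minimal simulation sketch (growth-factor range and sample sizes are arbitrary choices for illustration):

```python
import math
import random

random.seed(1)

def multiplicative_growth(n_steps=200):
    """A member's size after many random multiplicative shocks.
    log(size) is a sum of i.i.d. terms -> normal by the CLT,
    so size itself is approximately lognormal."""
    x = 1.0
    for _ in range(n_steps):
        x *= random.uniform(0.9, 1.1)  # illustrative growth-factor range
    return x

sizes = [multiplicative_growth() for _ in range(2000)]
logs = [math.log(s) for s in sizes]
log_mean = sum(logs) / len(logs)
log_var = sum((v - log_mean) ** 2 for v in logs) / len(logs)
```

A histogram of `logs` is symmetric and bell-shaped, while `sizes` itself is right-skewed with its arithmetic mean above its geometric mean, the signature of the lognormal distributions the paper observes in care periods, populations, and body measurements.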
First-Order Hyperbolic System Method for Time-Dependent Advection-Diffusion Problems
NASA Technical Reports Server (NTRS)
Mazaheri, Alireza; Nishikawa, Hiroaki
2014-01-01
A time-dependent extension of the first-order hyperbolic system method for advection-diffusion problems is introduced. Diffusive/viscous terms are written and discretized as a hyperbolic system, which recovers the original equation in the steady state. The resulting scheme offers advantages over traditional schemes: a dramatic simplification in the discretization, high-order accuracy in the solution gradients, and orders-of-magnitude convergence acceleration. The hyperbolic advection-diffusion system is discretized by the second-order upwind residual-distribution scheme in a unified manner, and the system of implicit-residual-equations is solved by Newton's method over every physical time step. The numerical results are presented for linear and nonlinear advection-diffusion problems, demonstrating solutions and gradients produced to the same order of accuracy, with rapid convergence over each physical time step, typically less than five Newton iterations.
A Geospatial Comparison of Distributed Solar Heat and Power in Europe and the US
Norwood, Zack; Nyholm, Emil; Otanicar, Todd; Johnsson, Filip
2014-01-01
The global trends for the rapid growth of distributed solar heat and power in the last decade will likely continue as the levelized cost of production for these technologies continues to decline. To be able to compare the economic potential of solar technologies one must first quantify the types and amount of solar resource that each technology can utilize; second, estimate the technological performance potential based on that resource; and third, compare the costs of each technology across regions. In this analysis, we have performed the first two steps in this process. We use physical and empirically validated models of a total of 8 representative solar system types: non-tracking photovoltaics, 2d-tracking photovoltaics, high concentration photovoltaics, flat-plate thermal, evacuated tube thermal, concentrating trough thermal, concentrating solar combined heat and power, and hybrid concentrating photovoltaic/thermal. These models are integrated into a simulation that uses typical meteorological year weather data to create a yearly time series of heat and electricity production for each system over 12,846 locations in Europe and 1,020 locations in the United States. Through this simulation, systems composed of various permutations of collector-types and technologies can be compared geospatially and temporally in terms of their typical production in each location. For example, we see that silicon solar cells show a significant advantage in yearly electricity production over thin-film cells in the colder climatic regions, but that advantage is lessened in regions that have high average irradiance. In general, the results lead to the conclusion that comparing solar technologies across technology classes simply on cost per peak watt, as is usually done, misses these often significant regional differences in annual performance. These results have implications for both solar power development and energy systems modeling of future pathways of the electricity system. 
PMID:25474632
Condition monitoring of distributed systems using two-stage Bayesian inference data fusion
NASA Astrophysics Data System (ADS)
Jaramillo, Víctor H.; Ottewill, James R.; Dudek, Rafał; Lepiarczyk, Dariusz; Pawlik, Paweł
2017-03-01
In industrial practice, condition monitoring is typically applied to critical machinery. A particular piece of machinery may have its own condition monitoring system that allows the health condition of said piece of equipment to be assessed independently of any connected assets. However, industrial machines are typically complex sets of components that continuously interact with one another. In some cases, dynamics resulting from the inception and development of a fault can propagate between individual components. For example, a fault in one component may lead to an increased vibration level in both the faulty component, as well as in connected healthy components. In such cases, a condition monitoring system focusing on a specific element in a connected set of components may either incorrectly indicate a fault, or conversely, a fault might be missed or masked due to the interaction of a piece of equipment with neighboring machines. In such cases, a more holistic condition monitoring approach that can not only account for such interactions, but utilize them to provide a more complete and definitive diagnostic picture of the health of the machinery is highly desirable. In this paper, a Two-Stage Bayesian Inference approach allowing data from separate condition monitoring systems to be combined is presented. Data from distributed condition monitoring systems are combined in two stages, the first data fusion occurring at a local, or component, level, and the second fusion combining data at a global level. Data obtained from an experimental rig consisting of an electric motor, two gearboxes, and a load, operating under a range of different fault conditions is used to illustrate the efficacy of the method at pinpointing the root cause of a problem. 
The obtained results suggest that the approach is adept at refining the diagnostic information obtained from each of the different machine components monitored, therefore improving the reliability of the health assessment of each individual element, as well as the entire piece of machinery.
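The core of the two-stage fusion is repeated application of Bayes' rule over a discrete health state. A minimal sketch, assuming conditionally independent evidence sources; the state names and likelihood numbers are hypothetical, not the paper's experimental values:

```python
def bayes_fuse(prior, likelihoods):
    """Combine independent evidence sources for a discrete health state
    via Bayes' rule: posterior proportional to prior * product of
    p(evidence_i | state) over all sources."""
    states = list(prior)
    post = dict(prior)
    for lik in likelihoods:
        post = {s: post[s] * lik[s] for s in states}
    z = sum(post.values())
    return {s: p / z for s, p in post.items()}

# Stage 1 (local): fuse two sensors monitoring one gearbox.
prior = {"healthy": 0.9, "faulty": 0.1}
vib = {"healthy": 0.2, "faulty": 0.8}    # hypothetical likelihoods of the
temp = {"healthy": 0.4, "faulty": 0.7}   # observed vibration/temperature
local = bayes_fuse(prior, [vib, temp])

# Stage 2 (global) would fuse the local posteriors from each component
# in the same way to diagnose the machine train as a whole.
```

The two-stage structure means a vibration rise on a healthy gearbox can be explained away at the global level by strong fault evidence on a neighboring component, which is how the method pinpoints the root cause rather than flagging every coupled component.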
NASA Astrophysics Data System (ADS)
Huang, T.; Alarcon, C.; Quach, N. T.
2014-12-01
Capture, curation, and analysis are the typical activities performed at any given Earth science data center. Modern data management systems must be adaptable to heterogeneous science data formats, scalable to meet a mission's quality-of-service requirements, and able to manage the life cycle of any given science data product. Designing a scalable data management system doesn't happen overnight. It takes countless hours of refining, refactoring, retesting, and re-architecting. The Horizon data management and workflow framework, developed at the Jet Propulsion Laboratory, is a portable, scalable, and reusable framework for developing high-performance data management and product generation workflow systems to automate data capture, data curation, and data analysis activities. The NASA Physical Oceanography Distributed Active Archive Center (PO.DAAC)'s Data Management and Archive System (DMAS) is its core data infrastructure, handling the capture and distribution of hundreds of thousands of satellite observations each day around the clock. DMAS is an application of the Horizon framework. The NASA Global Imagery Browse Services (GIBS) is NASA's Earth Observing System Data and Information System (EOSDIS) solution for making high-resolution global imagery available to the science communities. The Imagery Exchange (TIE), an application of the Horizon framework, is a core subsystem of GIBS responsible for automating data capture and imagery generation to support EOSDIS's 12 distributed active archive centers and 17 Science Investigator-led Processing Systems (SIPS). This presentation discusses our ongoing effort in refining, refactoring, retesting, and re-architecting the Horizon framework to enable data-intensive science and its applications.
NASA Astrophysics Data System (ADS)
Zhang, Chuan-Biao; Ming, Li; Xin, Zhou
2015-12-01
Ensemble simulations, which use multiple short independent trajectories from dispersive initial conformations rather than the single long trajectory of traditional simulations, are expected to sample complex systems such as biomolecules much more efficiently. Re-weighted ensemble dynamics (RED) is designed to combine these short trajectories to reconstruct the global equilibrium distribution. In RED, a number of conformational functions, named basis functions, are applied to relate these trajectories to each other; a detailed-balance-based linear equation is then built, whose solution provides the weights of the trajectories in the equilibrium distribution. Thus, the sufficient and efficient selection of basis functions is critical to the practical application of RED. Here, we review and present a few possible ways to construct basis functions for applying RED to complex molecular systems. In particular, for systems with little a priori knowledge, we can use the root mean squared deviation (RMSD) among conformations to split the whole conformational space into a set of cells, and then use the RMSD-based cell functions as basis functions. We demonstrate the application of RED in typical systems, including a two-dimensional toy model, the lattice Potts model, and a short peptide system. The results indicate that RED with these constructions of basis functions not only samples complex systems more efficiently but also provides a general way to understand the metastable structure of the conformational space. Project supported by the National Natural Science Foundation of China (Grant No. 11175250).
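The RMSD-based cell construction can be sketched as nearest-center assignment: each conformation is mapped to the closest of a set of reference centers, and the indicator function of each cell then serves as a basis function. A minimal sketch with 2-D toy "conformations" (reference centers and coordinates are invented for illustration):

```python
import math

def rmsd(a, b):
    """Root mean squared deviation between two coordinate vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

def assign_cells(conformations, centers):
    """RMSD-based cells: each conformation goes to the nearest reference
    center; the indicator of each cell is one basis function for RED."""
    return [min(range(len(centers)), key=lambda i: rmsd(c, centers[i]))
            for c in conformations]

# Toy 2-D conformations and two hypothetical reference centers:
centers = [(0.0, 0.0), (10.0, 10.0)]
confs = [(1.0, 0.0), (9.0, 10.0), (0.5, 0.5)]
cells = assign_cells(confs, centers)
```

With trajectories binned into cells this way, the detailed-balance linear equation relates the cell populations seen by different trajectories, and its solution yields each trajectory's equilibrium weight.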
Improving ATLAS grid site reliability with functional tests using HammerCloud
NASA Astrophysics Data System (ADS)
Elmsheuser, Johannes; Legger, Federica; Medrano Llamas, Ramon; Sciacca, Gianfranco; van der Ster, Dan
2012-12-01
With the exponential growth of LHC (Large Hadron Collider) data in 2011, and more coming in 2012, distributed computing has become the established way to analyse collider data. The ATLAS grid infrastructure includes almost 100 sites worldwide, ranging from large national computing centers to smaller university clusters. These facilities are used for data reconstruction and simulation, which are centrally managed by the ATLAS production system, and for distributed user analysis. To ensure the smooth operation of such a complex system, regular tests of all sites are necessary to validate the site capability of successfully executing user and production jobs. We report on the development, optimization and results of an automated functional testing suite using the HammerCloud framework. Functional tests are short lightweight applications covering typical user analysis and production schemes, which are periodically submitted to all ATLAS grid sites. Results from those tests are collected and used to evaluate site performance. Sites that fail or are unable to run the tests are automatically excluded from the PanDA brokerage system, thereby preventing user and production jobs from being sent to problematic sites.
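The exclusion logic amounts to a success-rate threshold over recent test outcomes. A minimal sketch of the policy, where the site names, the 80% threshold, and the treat-no-results-as-excluded rule are illustrative assumptions, not HammerCloud's actual configuration:

```python
def evaluate_sites(results, min_success_rate=0.8):
    """Return sites to exclude from brokerage: those whose recent
    functional-test success rate falls below the threshold, or that
    have produced no test results at all. (Threshold is illustrative.)"""
    excluded = set()
    for site, outcomes in results.items():  # outcomes: 1 = pass, 0 = fail
        if not outcomes or sum(outcomes) / len(outcomes) < min_success_rate:
            excluded.add(site)
    return excluded

# Hypothetical recent test outcomes per site:
results = {
    "SITE-A": [1, 1, 1, 1, 1],   # healthy -> stays in brokerage
    "SITE-B": [1, 0, 0, 1, 0],   # 40% success -> excluded
    "SITE-C": [],                # no results yet -> excluded
}
```

Re-inclusion is the same check in reverse: once a site's rolling success rate climbs back above the threshold, it drops out of the excluded set on the next evaluation.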
NASA Technical Reports Server (NTRS)
2001-01-01
REI Systems, Inc. developed a software solution that uses the Internet to eliminate the paperwork typically required to document and manage complex business processes. The data management solution, called Electronic Handbooks (EHBs), is presently used for the entire SBIR program processes at NASA. The EHB-based system is ideal for programs and projects whose users are geographically distributed and are involved in complex management processes and procedures. EHBs provide flexible access control and increased communications while maintaining security for systems of all sizes. Through Internet Protocol-based access, user authentication and user-based access restrictions, role-based access control, and encryption/decryption, EHBs provide the level of security required for confidential data transfer. EHBs contain electronic forms and menus, which can be used in real time to execute the described processes. EHBs use standard word processors that generate ASCII HTML code to set up electronic forms that are viewed within a web browser. EHBs require no end-user software distribution, significantly reducing operating costs. Each interactive handbook simulates a hard-copy version containing chapters with descriptions of participants' roles in the online process.
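Role-based access control of the kind listed above reduces to a mapping from roles to permission sets plus a membership check. A generic sketch; the role and permission names are invented for illustration and are not EHB's actual vocabulary:

```python
# Hypothetical roles and permissions (illustrative, not EHB's own):
ROLE_PERMISSIONS = {
    "proposer": {"view_own", "edit_own"},
    "reviewer": {"view_assigned", "score"},
    "program_manager": {"view_all", "edit_own", "score", "award"},
}

def can(role, action):
    """Role-based access check: a user may perform an action only if it
    is among the permissions granted to their role. Unknown roles get
    no permissions (deny by default)."""
    return action in ROLE_PERMISSIONS.get(role, set())
```

Deny-by-default for unknown roles is the conservative choice for confidential data: a misconfigured account loses access rather than gaining it.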
Medusa: A Scalable MR Console Using USB
Stang, Pascal P.; Conolly, Steven M.; Santos, Juan M.; Pauly, John M.; Scott, Greig C.
2012-01-01
MRI pulse sequence consoles typically employ closed proprietary hardware, software, and interfaces, making difficult any adaptation for innovative experimental technology. Yet MRI systems research is trending to higher channel count receivers, transmitters, gradient/shims, and unique interfaces for interventional applications. Customized console designs are now feasible for researchers with modern electronic components, but high data rates, synchronization, scalability, and cost present important challenges. Implementing large multi-channel MR systems with efficiency and flexibility requires a scalable modular architecture. With Medusa, we propose an open system architecture using the Universal Serial Bus (USB) for scalability, combined with distributed processing and buffering to address the high data rates and strict synchronization required by multi-channel MRI. Medusa uses a modular design concept based on digital synthesizer, receiver, and gradient blocks, in conjunction with fast programmable logic for sampling and synchronization. Medusa is a form of synthetic instrument, being reconfigurable for a variety of medical/scientific instrumentation needs. The Medusa distributed architecture, scalability, and data bandwidth limits are presented, and its flexibility is demonstrated in a variety of novel MRI applications. PMID:21954200
Re-Organizing Earth Observation Data Storage to Support Temporal Analysis of Big Data
NASA Technical Reports Server (NTRS)
Lynnes, Christopher
2017-01-01
The Earth Observing System Data and Information System archives many datasets that are critical to understanding long-term variations in Earth science properties. Thus, some of these are large, multi-decadal datasets. Yet the challenge in long time series analysis comes less from the sheer volume than the data organization, which is typically one (or a small number of) time steps per file. The overhead of opening and inventorying complex, API-driven data formats such as Hierarchical Data Format introduces a small latency at each time step, which nonetheless adds up for datasets with O(10^6) single-timestep files. Several approaches to reorganizing the data can mitigate this overhead by an order of magnitude: pre-aggregating data along the time axis (time-chunking); storing the data in a highly distributed file system; or storing data in distributed columnar databases. Storing a second copy of the data incurs extra costs, so some selection criteria must be employed, which would be driven by expected or actual usage by the end user community, balanced against the extra cost.
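The latency argument above can be made concrete with a toy cost model; all numbers are illustrative assumptions, not measured EOSDIS figures:

```python
# Back-of-the-envelope model of the per-file open/inventory overhead
# described above, comparing one-timestep-per-file storage against a
# time-chunked reorganization. Latencies are assumed, not measured.

def scan_cost(n_steps, steps_per_file, open_latency_s, read_s_per_step):
    """Total wall time to read a full time series at one grid point."""
    n_files = n_steps // steps_per_file
    return n_files * open_latency_s + n_steps * read_s_per_step

n_steps = 1_000_000      # O(10^6) time steps
open_latency = 0.05      # seconds to open and inventory one HDF file
read_per_step = 0.001    # seconds of actual I/O per time step

one_step_per_file = scan_cost(n_steps, 1, open_latency, read_per_step)
chunked = scan_cost(n_steps, 1000, open_latency, read_per_step)

print(f"{one_step_per_file / 3600:.1f} h vs {chunked / 3600:.1f} h")
```

With these assumed numbers the open/inventory overhead dominates the unchunked layout, and pre-aggregating 1000 steps per file removes well over an order of magnitude of wall time.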
Stress Analysis of Bolted, Segmented Cylindrical Shells Exhibiting Flange Mating-Surface Waviness
NASA Technical Reports Server (NTRS)
Knight, Norman F., Jr.; Phillips, Dawn R.; Raju, Ivatury S.
2009-01-01
Bolted, segmented cylindrical shells are a common structural component in many engineering systems, especially aerospace launch vehicles. Segmented shells are often needed due to limitations of manufacturing capabilities or transportation issues related to very long, large-diameter cylindrical shells. These cylindrical shells typically have a flange or ring welded to opposite ends so that shell segments can be mated together and bolted to form a larger structural system. As the diameter of these shells increases, maintaining strict fabrication tolerances for the flanges to be flat and parallel on a welded structure is an extreme challenge. Local fit-up stresses develop in the structure due to flange mating-surface mismatch (flange waviness), and these local stresses need to be considered when predicting a critical initial flaw size. Flange waviness is one contributor to the fit-up stress state. The present paper describes the modeling and analysis effort to simulate fit-up stresses due to flange waviness in a typical bolted, segmented cylindrical shell. Results from parametric studies are presented for various flange mating-surface waviness distributions and amplitudes.
Implementation of remote sensing data for flood forecasting
NASA Astrophysics Data System (ADS)
Grimaldi, S.; Li, Y.; Pauwels, V. R. N.; Walker, J. P.; Wright, A. J.
2016-12-01
Flooding is one of the most frequent and destructive natural disasters. A timely, accurate and reliable flood forecast can provide vital information for flood preparedness, warning delivery, and emergency response. An operational flood forecasting system typically consists of a hydrologic model, which simulates runoff generation and concentration, and a hydraulic model, which models riverine flood wave routing and floodplain inundation. However, these two types of models suffer from various sources of uncertainty, e.g., forcing data, initial conditions, model structure and parameters. To reduce those uncertainties, current forecasting systems are typically calibrated and/or updated using streamflow measurements, and such applications are limited to well-gauged areas. The recent increasing availability of spatially distributed Remote Sensing (RS) data offers new opportunities for flood event investigation and forecasting. Based on an Australian case study, this presentation will discuss the use of 1) RS soil moisture data to constrain a hydrologic model, and 2) RS-derived flood extent and levels to constrain a hydraulic model. The hydrologic model is a semi-distributed system coupling the two-soil-layer rainfall-runoff model GRKAL with a linear Muskingum routing model. Model calibration was performed using either 1) streamflow data only or 2) both streamflow and RS soil moisture data. The model was then further constrained through the integration of real-time soil moisture data. The hydraulic model is based on LISFLOOD-FP, which solves the 2D inertial approximation of the Shallow Water Equations. Streamflow data and RS-derived flood extent and levels were used to apply a multi-objective calibration protocol. The effectiveness with which each data source or combination of data sources constrained the parameter space was quantified and discussed.
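As an illustration of the routing component, a minimal linear Muskingum step can be sketched as follows; the K (storage constant), X (weighting factor), and inflow values are hypothetical, not the calibrated values from this study:

```python
# Minimal linear Muskingum channel routing: O2 = c0*I2 + c1*I1 + c2*O1,
# with coefficients derived from storage constant K, weighting X, and
# time step dt. Parameter values below are illustrative assumptions.

def muskingum_route(inflow, K, X, dt, initial_outflow):
    denom = 2 * K * (1 - X) + dt
    c0 = (dt - 2 * K * X) / denom
    c1 = (dt + 2 * K * X) / denom
    c2 = (2 * K * (1 - X) - dt) / denom  # c0 + c1 + c2 == 1
    out = [initial_outflow]
    for i_prev, i_now in zip(inflow, inflow[1:]):
        out.append(c0 * i_now + c1 * i_prev + c2 * out[-1])
    return out

hydrograph = [10, 30, 70, 50, 30, 20, 15, 10]  # m^3/s, hypothetical event
routed = muskingum_route(hydrograph, K=2.0, X=0.2, dt=1.0, initial_outflow=10)
print([round(q, 1) for q in routed])
```

The routed hydrograph is attenuated (lower peak) and delayed relative to the inflow, which is the qualitative behavior a linear routing element contributes to the forecasting chain.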
Monitoring data transfer latency in CMS computing operations
Bonacorsi, Daniele; Diotalevi, Tommaso; Magini, Nicolo; ...
2015-12-23
During the first LHC run, the CMS experiment collected tens of Petabytes of collision and simulated data, which need to be distributed among dozens of computing centres with low latency in order to make efficient use of the resources. While the desired level of throughput has been successfully achieved, it is still common to observe transfer workflows that cannot reach full completion in a timely manner due to a small fraction of stuck files which require operator intervention. For this reason, in 2012 the CMS transfer management system, PhEDEx, was instrumented with a monitoring system to measure file transfer latencies, and to predict the completion time for the transfer of a data set. The operators can detect abnormal patterns in transfer latencies while the transfer is still in progress, and monitor the long-term performance of the transfer infrastructure to plan the data placement strategy. Based on the data collected for one year with the latency monitoring system, we present a study on the different factors that contribute to transfer completion time. As case studies, we analyze several typical CMS transfer workflows, such as distribution of collision event data from CERN or upload of simulated event data from the Tier-2 centres to the archival Tier-1 centres. For each workflow, we present the typical patterns of transfer latencies that have been identified with the latency monitor. We identify the areas in PhEDEx where a development effort can reduce the latency, and we show how we are able to detect stuck transfers which need operator intervention. Lastly, we propose a set of metrics to alert about stuck subscriptions and prompt for manual intervention, with the aim of improving transfer completion times.
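The stuck-file alerting idea can be sketched as a simple latency check: flag in-flight files whose age far exceeds the typical completion time of finished transfers. The data layout and threshold factor are assumptions for illustration, not PhEDEx internals:

```python
import statistics

# Illustrative stuck-transfer detector: compare the age of in-flight files
# against the median completion time of finished files in the same batch.

def find_stuck(transfers, now, factor=5.0):
    """Return names of in-flight files older than factor * median latency."""
    done = [t["end"] - t["start"] for t in transfers if t.get("end")]
    typical = statistics.median(done)
    return [t["name"] for t in transfers
            if not t.get("end") and now - t["start"] > factor * typical]

transfers = [
    {"name": "fileA", "start": 0, "end": 600},
    {"name": "fileB", "start": 0, "end": 900},
    {"name": "fileC", "start": 0, "end": 700},
    {"name": "fileD", "start": 8000},  # in flight, but still young
    {"name": "fileE", "start": 0},     # in flight for the whole window
]
print(find_stuck(transfers, now=10_000))
```

Because the threshold adapts to the measured median, the same rule works for fast intra-region links and slow transatlantic ones without per-link tuning.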
High-Order Residual-Distribution Hyperbolic Advection-Diffusion Schemes: 3rd-, 4th-, and 6th-Order
NASA Technical Reports Server (NTRS)
Mazaheri, Alireza R.; Nishikawa, Hiroaki
2014-01-01
In this paper, spatially high-order Residual-Distribution (RD) schemes using the first-order hyperbolic system method are proposed for general time-dependent advection-diffusion problems. The corresponding second-order time-dependent hyperbolic advection-diffusion scheme was first introduced in [NASA/TM-2014-218175, 2014], where rapid convergence over each physical time step, typically within five Newton iterations, was demonstrated. In that method, the time-dependent hyperbolic advection-diffusion system (linear and nonlinear) was discretized by the second-order upwind RD scheme in a unified manner, and the system of implicit residual equations was solved efficiently by Newton's method over every physical time step. In this paper, two techniques for the source term discretization are proposed: 1) reformulation of the source terms with their divergence forms, and 2) correction to the trapezoidal rule for the source term discretization. Third-, fourth-, and sixth-order RD schemes are then proposed with the above techniques that, relative to the second-order RD scheme, only cost the evaluation of either the first derivative or both the first and second derivatives of the source terms. A special fourth-order RD scheme is also proposed that is even less computationally expensive than the third-order RD schemes. The second-order Jacobian formulation is used for all the proposed high-order schemes. Numerical results are then presented for both steady and time-dependent linear and nonlinear advection-diffusion problems. It is shown that these newly developed high-order RD schemes are remarkably efficient and capable of producing solutions and gradients to the design order of accuracy, with rapid convergence over each physical time step, typically in fewer than ten Newton iterations.
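For orientation, the first-order hyperbolic system method recasts scalar advection-diffusion by introducing a gradient variable $p$ and a relaxation time $T_r$; the following is a sketch in the notation commonly used for this method (assumed here, not copied from the paper):

```latex
\frac{\partial u}{\partial t} + a\,\frac{\partial u}{\partial x}
  = \nu\,\frac{\partial p}{\partial x} + s, \qquad
\frac{\partial p}{\partial t}
  = \frac{1}{T_r}\left(\frac{\partial u}{\partial x} - p\right),
```

so that $p$ relaxes toward $u_x$ and the original equation $u_t + a u_x = \nu u_{xx} + s$ is recovered in the relaxed limit, while the system being solved remains first order in space, which is what lets the RD discretization deliver gradients to the same order of accuracy as the solution.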
Cost-Efficient and Multi-Functional Secure Aggregation in Large Scale Distributed Application
Zhang, Ping; Li, Wenjun; Sun, Hua
2016-01-01
Secure aggregation is an essential component of modern distributed applications and data mining platforms. Aggregated statistical results are typically adopted in constructing a data cube for data analysis at multiple abstraction levels in data warehouse platforms. Generating different types of statistical results efficiently at the same time (referred to as multi-functional support) is a fundamental requirement in practice. However, most existing schemes support a very limited number of statistics. Securely obtaining typical statistical results simultaneously in a distributed system, without recovering the original data, is still an open problem. In this paper, we present SEDAR, a SEcure Data Aggregation scheme under the Range segmentation model. The range segmentation model is proposed to reduce the communication cost by capturing the data characteristics, with a different aggregation strategy for each range. For raw data in the dominant range, SEDAR encodes values into well-defined vectors that provide value preservation and order preservation, and thus provides the basis for multi-functional aggregation. A homomorphic encryption scheme is used to achieve data privacy. We also present two enhanced versions: a Random-based SEDAR (REDAR) and a Compression-based SEDAR (CEDAR). Both can significantly reduce communication cost, at the price of lower security and lower accuracy, respectively. Experimental evaluations, based on six different scenes of real data, show that all of them achieve excellent performance on cost and accuracy. PMID:27551747
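The vector-encoding idea behind multi-functional aggregation can be illustrated in the clear (the encryption layer is omitted): each node one-hot encodes its reading over a fixed range, and the element-wise sum of all vectors — which is exactly what an additively homomorphic scheme would compute under encryption — yields a histogram from which several statistics follow at once. The range and readings below are hypothetical, and this is not SEDAR's actual encoding:

```python
# Toy multi-functional aggregation: position in the vector preserves both
# value and order, so the aggregated histogram supports count, sum, mean,
# min, and max simultaneously without revealing individual readings.

RANGE = 10  # readings are integers in [0, RANGE)

def encode(value):
    vec = [0] * RANGE
    vec[value] = 1  # order- and value-preserving position
    return vec

def aggregate(vectors):
    return [sum(col) for col in zip(*vectors)]  # element-wise sum

def stats(hist):
    count = sum(hist)
    total = sum(v * n for v, n in enumerate(hist))
    vmin = next(v for v, n in enumerate(hist) if n)
    vmax = max(v for v, n in enumerate(hist) if n)
    return {"count": count, "sum": total, "mean": total / count,
            "min": vmin, "max": vmax}

readings = [3, 7, 7, 2, 9]  # one reading per node
hist = aggregate([encode(r) for r in readings])
print(stats(hist))
```

The communication trade-off is also visible here: each message is RANGE elements long, which is the cost the range segmentation model aims to reduce.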
NASA Astrophysics Data System (ADS)
Suárez, F.; Aravena, J. E.; Hausner, M. B.; Childress, A. E.; Tyler, S. W.
2011-03-01
In shallow thermohaline-driven lakes it is important to measure temperature on fine spatial and temporal scales to detect stratification or different hydrodynamic regimes. Raman spectra distributed temperature sensing (DTS) is an approach that provides high spatial and temporal temperature resolution. A vertical high-resolution DTS system was constructed to overcome the problems of typical methods used in the past, i.e., without disturbing the water column, and with resistance to corrosive environments. This paper describes a method to quantitatively assess the accuracy, precision and other limitations of DTS systems in order to fully utilize the capacity of this technology, with a focus on vertical high resolution for measuring temperatures in shallow thermohaline environments. It also presents a new method to manually calibrate temperatures along the optical fiber, achieving significantly improved resolution. The vertical high-resolution DTS system is used to monitor the thermal behavior of a salt-gradient solar pond, an engineered shallow thermohaline system that allows collection and storage of solar energy over long periods of time. The system monitors the temperature profile every 1.1 cm vertically, with time averaging as short as 10 s. Temperature resolution as low as 0.035 °C is obtained when the data are collected at 5-min intervals.
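A two-reference-bath calibration of a Raman DTS trace can be sketched as follows, assuming the standard single-ended model T(z) = γ / (ln r(z) + C − Δα·z) with r the Stokes/anti-Stokes power ratio. The instrument constant γ and all measured ratios below are made up, and this is not the manual calibration method of the paper:

```python
import math

# Hedged sketch: solve for the offset C and differential attenuation
# delta_alpha from two reference baths of known temperature, then map the
# whole fiber's Stokes/anti-Stokes ratio to temperature.

GAMMA = 480.0  # K, sensitivity constant (assumed known for the instrument)

def calibrate(z1, T1, r1, z2, T2, r2):
    """Solve C - delta_alpha*z_i = GAMMA/T_i - ln(r_i) at both baths."""
    y1 = GAMMA / T1 - math.log(r1)
    y2 = GAMMA / T2 - math.log(r2)
    dalpha = (y1 - y2) / (z2 - z1)
    C = y1 + dalpha * z1
    return C, dalpha

def temperature(z, r, C, dalpha):
    return GAMMA / (math.log(r) + C - dalpha * z)

# Cold bath at 5 m (5 C), warm bath at 95 m (45 C); ratios are invented.
C, dalpha = calibrate(z1=5.0, T1=278.15, r1=1.52, z2=95.0, T2=318.15, r2=1.38)
print(round(temperature(50.0, 1.45, C, dalpha), 2))  # mid-fiber temperature, K
```

By construction the calibrated model reproduces both bath temperatures exactly, and intermediate fiber sections interpolate physically between them.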
Ship Detection in SAR Image Based on the Alpha-stable Distribution
Wang, Changcheng; Liao, Mingsheng; Li, Xiaofeng
2008-01-01
This paper describes an improved Constant False Alarm Rate (CFAR) ship detection algorithm in spaceborne synthetic aperture radar (SAR) image based on Alpha-stable distribution model. Typically, the CFAR algorithm uses the Gaussian distribution model to describe statistical characteristics of a SAR image background clutter. However, the Gaussian distribution is only valid for multilook SAR images when several radar looks are averaged. As sea clutter in SAR images shows spiky or heavy-tailed characteristics, the Gaussian distribution often fails to describe background sea clutter. In this study, we replace the Gaussian distribution with the Alpha-stable distribution, which is widely used in impulsive or spiky signal processing, to describe the background sea clutter in SAR images. In our proposed algorithm, an initial step for detecting possible ship targets is employed. Then, similar to the typical two-parameter CFAR algorithm, a local process is applied to the pixel identified as possible target. A RADARSAT-1 image is used to validate this Alpha-stable distribution based algorithm. Meanwhile, known ship location data during the time of RADARSAT-1 SAR image acquisition is used to validate ship detection results. Validation results show improvements of the new CFAR algorithm based on the Alpha-stable distribution over the CFAR algorithm based on the Gaussian distribution. PMID:27873794
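The local CFAR pass described above can be sketched in one dimension. Window sizes, the threshold multiplier, and the toy clutter strip are illustrative assumptions; note also that this sketch uses a mean/std (two-parameter, Gaussian-style) threshold, whereas the paper's contribution is precisely to replace that background model with an Alpha-stable one:

```python
import statistics

# 1-D cell-averaging CFAR sketch: a pixel is declared a target if it
# exceeds mean + t*std of a local background ring, excluding guard cells
# around the cell under test.

def cfar_1d(pixels, guard=2, background=8, t=3.0):
    hits = []
    half = guard + background
    for i in range(half, len(pixels) - half):
        ring = (pixels[i - half:i - guard]          # left background cells
                + pixels[i + guard + 1:i + half + 1])  # right background cells
        mu = statistics.fmean(ring)
        sigma = statistics.stdev(ring)
        if pixels[i] > mu + t * sigma:
            hits.append(i)
    return hits

clutter = [10, 12, 11, 9, 13, 10, 11, 12, 10, 11,
           95,                                   # bright ship-like pixel
           12, 10, 11, 13, 9, 10, 12, 11, 10, 12]
print(cfar_1d(clutter))
```

Against spiky, heavy-tailed sea clutter, the sample mean and standard deviation are poor background estimates, which is why the paper substitutes the Alpha-stable model for the threshold computation.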
Belgrade, M J
1999-11-01
Neuropathic pain can seem enigmatic at first because it can last indefinitely and often a cause is not evident. However, heightened awareness of its typical characteristics makes identification fairly easy: the presence of certain accompanying conditions (e.g., diabetes, HIV or herpes zoster infection, multiple sclerosis); pain described as shooting, stabbing, lancinating, burning, or searing; pain that is worse at night; pain following an anatomic nerve distribution; pain in a numb or insensate site; and the presence of allodynia. Neuropathic pain responds poorly to standard pain therapies and usually requires specialized medications (e.g., anticonvulsants, tricyclic antidepressants, opioid analgesics) for optimal control. Successful pain control is enhanced with use of a systematic approach consisting of disease modification, local or regional measures, and systemic therapy.
Vector-Boson Fusion Higgs Production at Three Loops in QCD.
Dreyer, Frédéric A; Karlberg, Alexander
2016-08-12
We calculate the next-to-next-to-next-to-leading-order (N^{3}LO) QCD corrections to inclusive vector-boson fusion Higgs production at proton colliders, in the limit in which there is no color exchange between the hadronic systems associated with the two colliding protons. We also provide differential cross sections for the Higgs transverse momentum and rapidity distributions. We find that the corrections are at the 1‰-2‰ level, well within the scale uncertainty of the next-to-next-to-leading-order calculation. The associated scale uncertainty of the N^{3}LO calculation is typically found to be below the 2‰ level. We also consider theoretical uncertainties due to missing higher order parton distribution functions, and provide an estimate of their importance.
Anisotropic Shear Dispersion Parameterization for Mesoscale Eddy Transport
NASA Astrophysics Data System (ADS)
Reckinger, S. J.; Fox-Kemper, B.
2016-02-01
The effects of mesoscale eddies are universally treated isotropically in general circulation models. However, the processes that the parameterization approximates, such as shear dispersion, typically have strongly anisotropic characteristics. The Gent-McWilliams/Redi mesoscale eddy parameterization is extended for anisotropy and tested using 1-degree Community Earth System Model (CESM) simulations. The sensitivity of the model to anisotropy includes a reduction of temperature and salinity biases, a deepening of the southern ocean mixed-layer depth, and improved ventilation of biogeochemical tracers, particularly in oxygen minimum zones. The parameterization is further extended to include the effects of unresolved shear dispersion, which sets the strength and direction of anisotropy. The shear dispersion parameterization is similar to drifter observations in spatial distribution of diffusivity and high-resolution model diagnosis in the distribution of eddy flux orientation.
Solution of the Fokker-Planck equation with mixing of angular harmonics by beam-beam charge exchange
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mikkelsen, D.R.
1989-09-01
A method for solving the linear Fokker-Planck equation with anisotropic beam-beam charge exchange loss is presented. The 2-D equation is transformed to a system of coupled 1-D equations which are solved iteratively as independent equations. Although isotropic approximations to the beam-beam losses lead to inaccurate fast ion distributions, typically only a few angular harmonics are needed to include accurately the effect of the beam-beam charge exchange loss on the usual integrals of the fast ion distribution. Consequently, the algorithm converges very rapidly and is much more efficient than a 2-D finite difference method. A convenient recursion formula for the coupling coefficients is given and generalization of the method is discussed. 13 refs., 2 figs.
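The reduction from a 2-D to coupled 1-D equations can be sketched as an expansion of the fast-ion distribution in Legendre harmonics of the pitch variable $\xi = v_\parallel/v$ (notation assumed here, not taken from the report):

```latex
f(v,\xi,t) \;=\; \sum_{l=0}^{L} f_l(v,t)\, P_l(\xi)
```

Projecting the Fokker-Planck equation onto each $P_l$ yields one 1-D equation per harmonic $f_l(v,t)$; the anisotropic beam-beam charge-exchange loss term is what couples different $l$, and the iteration treats that coupling as a known source from the previous sweep, which is why convergence is rapid when only a few harmonics matter.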
Adaptive distributed outlier detection for WSNs.
De Paola, Alessandra; Gaglio, Salvatore; Lo Re, Giuseppe; Milazzo, Fabrizio; Ortolani, Marco
2015-05-01
The paradigm of pervasive computing is gaining more and more attention nowadays, thanks to the possibility of obtaining precise and continuous monitoring. Ease of deployment and adaptivity are typically implemented by adopting autonomous and cooperative sensory devices; however, for such systems to be of any practical use, reliability and fault tolerance must be guaranteed, for instance by detecting corrupted readings amidst the huge amount of gathered sensory data. This paper proposes an adaptive distributed Bayesian approach for detecting outliers in data collected by a wireless sensor network; our algorithm aims at jointly optimizing classification accuracy, time complexity and communication complexity, while also respecting externally imposed constraints on these conflicting goals. The experimental evaluation showed that our approach improves the considered metrics for latency and energy consumption, with limited impact on classification accuracy.
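A minimal Bayesian outlier classifier in the spirit of this approach can be sketched as follows: a node compares its reading with those of its neighbors and scores the deviation under a "normal" (narrow) versus "faulty" (wide) Gaussian model. All priors and variances are illustrative assumptions, not the paper's calibrated parameters, and the real algorithm additionally adapts these online and distributes the computation:

```python
import math

def gauss(d, sigma):
    """Gaussian likelihood of a deviation d with standard deviation sigma."""
    return math.exp(-d * d / (2 * sigma * sigma)) / (sigma * math.sqrt(2 * math.pi))

def p_outlier(reading, neighbors, prior=0.05, s_ok=1.0, s_bad=10.0):
    """Posterior probability that a reading is corrupted, given how far it
    deviates from the mean of spatially correlated neighbor readings."""
    d = reading - sum(neighbors) / len(neighbors)
    num = prior * gauss(d, s_bad)
    return num / (num + (1 - prior) * gauss(d, s_ok))

print(round(p_outlier(25.1, [24.8, 25.3, 24.9]), 3))  # consistent reading
print(round(p_outlier(40.0, [24.8, 25.3, 24.9]), 3))  # gross deviation
```

Only the scalar posterior (or the local sufficient statistics) needs to be exchanged, which is how such schemes trade classification accuracy against communication cost.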
Diagnosing entropy production and dissipation in fully kinetic plasmas
NASA Astrophysics Data System (ADS)
Juno, J.; TenBarge, J. M.; Hakim, A.; Dorland, W.
2017-12-01
Many plasma systems, from the core of a tokamak to the outer heliosphere, are weakly collisional and thus most accurately described by kinetic theory. The typical approach to solving the kinetic equation has been the particle-in-cell algorithm, which, while a powerful tool, introduces counting noise into the particle distribution function. The counting noise is particularly problematic when attempting to study grand challenge problems such as entropy production from phenomena like shocks and turbulence. In this poster, we present studies of entropy production and dissipation processes present in simple turbulence and shock calculations using the continuum Vlasov-Maxwell solver in the Gkeyll framework. Particular emphasis is placed on a novel diagnostic, the field-particle correlation, which is especially efficient at separating the secular energy transfer into its constituent components, for example, cyclotron damping, Landau damping, or transit-time damping, when applied to a noise-free distribution function. Using reduced systems such as completely transverse electromagnetic shocks, we also explore the signatures of perpendicular, non-resonant, energization mechanisms.
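For reference, the field-particle correlation diagnostic mentioned above correlates velocity-space gradients of the distribution with the electric field; a sketch of its parallel form, in the notation of Klein and Howes (the poster's exact definition may differ):

```latex
C_{E_\parallel}(v_\parallel, t; \tau)
  \;=\; \frac{1}{\tau}\int_{t-\tau}^{t}
  -\,q\,\frac{v_\parallel^{2}}{2}\,
  \frac{\partial f(v_\parallel, t')}{\partial v_\parallel}\,
  E_\parallel(t')\, \mathrm{d}t'
```

Averaging over the window $\tau$ removes the oscillatory part of the energy transfer, and the remaining secular signature in $v_\parallel$ distinguishes resonant mechanisms such as Landau or transit-time damping; this is why a noise-free continuum distribution function is so valuable for the diagnostic.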
Directional radiation pattern in structural-acoustic coupled system
NASA Astrophysics Data System (ADS)
Seo, Hee-Seon; Kim, Yang-Hann
2005-07-01
In this paper we demonstrate the possibility of designing a radiator using structural-acoustic interaction by predicting the pressure distribution and radiation pattern of a structural-acoustic coupled system composed of a wall and two spaces. When a wall separates two spaces, its role in transporting the acoustic characteristics between them is important. The spaces can be categorized as a bounded finite space and an unbounded infinite space. The wall considered in this study comprises two plates and an opening, and it separates a highly reverberant space from an unbounded space without any reflection. This rather hypothetical configuration is selected to study the general coupling problem between finite and infinite acoustic domains. We developed an equation that predicts the energy distribution and energy flow in the two spaces separated by the wall, and computational examples are presented. Three typical radiation patterns are demonstrated: steered, focused, and omnidirectional. A designed radiation pattern obtained with an optimal design algorithm is also presented.
Direct and full-scale experimental verifications towards ground-satellite quantum key distribution
NASA Astrophysics Data System (ADS)
Wang, Jian-Yu; Yang, Bin; Liao, Sheng-Kai; Zhang, Liang; Shen, Qi; Hu, Xiao-Fang; Wu, Jin-Cai; Yang, Shi-Ji; Jiang, Hao; Tang, Yan-Lin; Zhong, Bo; Liang, Hao; Liu, Wei-Yue; Hu, Yi-Hua; Huang, Yong-Mei; Qi, Bo; Ren, Ji-Gang; Pan, Ge-Sheng; Yin, Juan; Jia, Jian-Jun; Chen, Yu-Ao; Chen, Kai; Peng, Cheng-Zhi; Pan, Jian-Wei
2013-05-01
Quantum key distribution (QKD) provides the only intrinsically unconditionally secure method for communication based on the principles of quantum mechanics. Compared with fibre-based demonstrations, free-space links could provide the most appealing solution for communication over much larger distances. Despite significant efforts, all realizations to date rely on stationary sites. Experimental verifications are therefore extremely crucial for applications to a typical low Earth orbit satellite. To achieve direct and full-scale verifications of our set-up, we have carried out three independent experiments with a decoy-state QKD system, covering all the relevant conditions. The system is operated on a moving platform (using a turntable), on a floating platform (using a hot-air balloon), and with a high-loss channel to demonstrate performance under conditions of rapid motion, attitude change, vibration, random movement of satellites, and a high-loss regime. The experiments address wide ranges of all leading parameters relevant to low Earth orbit satellites. Our results pave the way towards ground-satellite QKD and a global quantum communication network.
Localization of a bacterial cytoplasmic receptor is dynamic and changes with cell-cell contacts
Mauriello, Emilia M. F.; Astling, David P.; Sliusarenko, Oleksii; Zusman, David R.
2009-01-01
Directional motility in the gliding bacterium Myxococcus xanthus requires controlled cell reversals mediated by the Frz chemosensory system. FrzCD, a cytoplasmic chemoreceptor, does not form membrane-bound polar clusters typical for most bacteria, but rather cytoplasmic clusters that appear helically arranged and span the cell length. The distribution of FrzCD in living cells was found to be dynamic: FrzCD was localized in clusters that continuously changed their size, number, and position. The number of FrzCD clusters was correlated with cellular reversal frequency: fewer clusters were observed in hypo-reversing mutants and additional clusters were observed in hyper-reversing mutants. When moving cells made side-to-side contacts, FrzCD clusters in adjacent cells showed transient alignments. These events were frequently followed by one of the interacting cells reversing. These observations suggest that FrzCD detects signals from a cell contact-sensitive signaling system and then re-localizes as it directs reversals to distributed motility engines. PMID:19273862
Method for Assessing the Integrated Risk of Soil Pollution in Industrial and Mining Gathering Areas
Guan, Yang; Shao, Chaofeng; Gu, Qingbao; Ju, Meiting; Zhang, Qian
2015-01-01
Industrial and mining activities are recognized as major sources of soil pollution. This study proposes an index system for evaluating the inherent risk level of polluting factories and introduces an integrated risk assessment method based on human health risk. As a case study, the health risk, polluting factories and integrated risks were analyzed in a typical industrial and mining gathering area in China, namely, Binhai New Area. The spatial distribution of the risk level was determined using a Geographic Information System. The results confirmed the following: (1) Human health risk in the study area is moderate to extreme, with heavy metals posing the greatest threat; (2) Polluting factories pose a moderate to extreme inherent risk in the study area. Such factories are concentrated in industrial and urban areas, but are irregularly distributed and also occupy agricultural land, showing a lack of proper planning and management; (3) The integrated risks of soil are moderate to high in the study area. PMID:26580644
NASA Astrophysics Data System (ADS)
Madhikar, Pratik Ravindra
The most important and crucial design feature in designing an Aircraft Electric Power Distribution System (EPDS) is reliability. In an EPDS, power is distributed from top-level generators to bottom-level loads through various sensors, actuators and rectifiers, with the help of AC and DC buses and control switches. As consumer demands are ever growing and safety is of utmost importance, loads increase and, as a result, so does the power-management burden. Therefore, the design of an EPDS should be optimized for maximum efficiency. This thesis discusses an integrated tool, based on a Need Based Design method and Fault Tree Analysis (FTA), to achieve an optimum EPDS design that provides maximum reliability in terms of continuous connectivity, power management and minimum cost. If an EPDS is formulated as an optimization problem, it can be solved under connectivity, cost and power constraints using a linear solver to obtain the desired output of maximum reliability at minimum cost. Furthermore, the thesis also discusses the viability and implementation of the resulting topology on typical large-aircraft specifications.
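Casting the design as a constrained optimization can be illustrated with a toy problem: assign each load to one of two generator buses so that no generator capacity is exceeded and total feeder cost is minimized. All capacities, loads, and costs below are hypothetical and far simpler than a real aircraft EPDS, and brute-force enumeration stands in for the linear solver:

```python
from itertools import product

GEN_CAPACITY = {"GEN1": 60.0, "GEN2": 50.0}  # kW per generator bus (assumed)
LOADS = {"avionics": 20.0, "galley": 15.0, "lighting": 10.0, "pumps": 25.0}
FEEDER_COST = {                               # wiring cost per option (assumed)
    ("avionics", "GEN1"): 1, ("avionics", "GEN2"): 4,
    ("galley", "GEN1"): 3,   ("galley", "GEN2"): 1,
    ("lighting", "GEN1"): 2, ("lighting", "GEN2"): 2,
    ("pumps", "GEN1"): 5,    ("pumps", "GEN2"): 2,
}

def best_assignment():
    names = list(LOADS)
    best, best_cost = None, float("inf")
    for buses in product(GEN_CAPACITY, repeat=len(names)):
        drawn = {g: 0.0 for g in GEN_CAPACITY}
        for load, bus in zip(names, buses):
            drawn[bus] += LOADS[load]
        if any(drawn[g] > GEN_CAPACITY[g] for g in drawn):
            continue  # infeasible: a generator would be overloaded
        cost = sum(FEEDER_COST[(l, b)] for l, b in zip(names, buses))
        if cost < best_cost:
            best, best_cost = dict(zip(names, buses)), cost
    return best, best_cost

print(best_assignment())
```

In a real formulation the same structure appears as binary assignment variables with linear capacity and connectivity constraints, which is exactly the form a linear (integer) solver consumes; reliability enters through the FTA-derived failure terms in the objective.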
On the temperature control in self-controlling hyperthermia therapy
NASA Astrophysics Data System (ADS)
Ebrahimi, Mahyar
2016-10-01
In self-controlling hyperthermia therapy, heat generation ceases once the desired temperature is reached, so overheating is prevented. To design a system that generates sufficient heat without thermal ablation of the surrounding healthy tissue, a good understanding of the temperature distribution and its evolution in time is imperative. This study extends our understanding of heat generation and transfer, temperature distribution, and the temperature-rise pattern in the tumor and surrounding tissue during self-controlling magnetic hyperthermia. A model consisting of two concentric spheres, representing the tumor and its surrounding tissue, is considered, and the temperature-change pattern and temperature distribution in both regions are studied. After the model, its governing equations, and its constants are described precisely, a typical numerical solution of the model is presented. It is then shown how parameters such as the Curie temperature of the nanoparticles, the magnetic field amplitude, and the nanoparticle concentration affect the temperature-change pattern during self-controlling magnetic hyperthermia. The model discussed here can provide insight into self-controlling magnetic hyperthermia as applied to cancer treatment in realistic scenarios and can inform treatment-strategy determination.
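As a complement to the abstract above, here is a minimal steady-state sketch of the two-concentric-sphere geometry: a uniformly heated spherical tumor inside infinite tissue, pure conduction only. Perfusion and the Curie-temperature self-regulation are neglected, and all numbers are illustrative rather than the paper's.

```python
import numpy as np

# Steady conduction in a uniformly heated sphere (the "tumor") of radius R
# embedded in infinite tissue.  Analytic solution, continuous in temperature
# and heat flux at r = R.  All parameter values are illustrative.
R = 0.005              # tumor radius, m
q = 1.0e5              # volumetric heat generation, W/m^3
k_t, k_o = 0.55, 0.50  # thermal conductivity of tumor / tissue, W/(m K)
T_inf = 37.0           # far-field body temperature, deg C

def temperature(r):
    """Temperature profile: parabolic inside the tumor, 1/r decay outside."""
    r = np.asarray(r, dtype=float)
    inside = T_inf + q*R**2/(3*k_o) + q*(R**2 - r**2)/(6*k_t)
    outside = T_inf + q*R**3/(3*k_o*np.maximum(r, 1e-12))
    return np.where(r <= R, inside, outside)

r = np.linspace(0, 4*R, 9)
print(np.round(temperature(r), 2))  # hottest at the tumor centre
```

The hottest point is the tumor centre; the surrounding-tissue temperature falls off as 1/r back to the body temperature, which is the qualitative pattern a self-controlling system must keep below the ablation threshold.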
The Phobos neutral and ionized torus
NASA Astrophysics Data System (ADS)
Poppe, A. R.; Curry, S. M.; Fatemi, S.
2016-05-01
Charged particle sputtering, micrometeoroid impact vaporization, and photon-stimulated desorption are fundamental processes operating at airless surfaces throughout the solar system. At larger bodies, such as Earth's Moon and several of the outer planet moons, these processes generate tenuous surface-bound exospheres that have been observed by a variety of methods. Phobos and Deimos, in contrast, are too gravitationally weak to keep ejected neutrals bound and, thus, are suspected to generate neutral tori in orbit around Mars. While these tori have not yet been detected, the distribution and density of both the neutral and ionized components are of fundamental interest. We combine a neutral Monte Carlo model and a hybrid plasma model to investigate both the neutral and ionized components of the Phobos torus. We show that the spatial distribution of the neutral torus is highly dependent on each individual species (due to ionization rates that span nearly 4 orders of magnitude) and on the location of Phobos with respect to Mars. Additionally, we present the flux distribution of torus pickup ions throughout the Martian system and estimate typical pickup ion fluxes. We find that the predicted pickup ion fluxes are too low to perturb the ambient plasma, consistent with previous null detections by spacecraft around Mars.
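The species dependence noted above (ionization rates spanning nearly four orders of magnitude) can be illustrated with a back-of-the-envelope survival estimate. The lifetimes below are rough order-of-magnitude placeholders, not the paper's values, and the exponential-loss model is a simplification of the Monte Carlo treatment.

```python
import math

# Illustrative sketch (not the paper's Monte Carlo model): fraction of
# ejected neutrals of each species surviving photo-ionization for one
# Phobos orbital period, assuming simple exponential loss.
P_orbit = 7.65 * 3600.0   # Phobos orbital period, s
# assumed ionization lifetimes spanning ~4 decades, s
lifetimes = {"O": 1.0e7, "H2O": 1.0e5, "Na": 1.0e4, "K": 1.0e3}

for species, tau in lifetimes.items():
    frac = math.exp(-P_orbit / tau)
    print(f"{species:>3}: surviving fraction after one orbit = {frac:.3g}")
```

Long-lived species can populate an extended torus, while short-lived species are ionized well before completing an orbit, which is why the spatial distribution is so strongly species dependent.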
An integrated tool for loop calculations: AITALC
NASA Astrophysics Data System (ADS)
Lorca, Alejandro; Riemann, Tord
2006-01-01
AITALC, a new tool for automating loop calculations in high energy physics, is described. The package automatically creates Fortran code for two-fermion scattering processes, starting from the generation and analysis of the Feynman graphs. We describe the modules of the tool and the communication between them, and illustrate its use with three examples.
Program summary
Title of the program: AITALC version 1.2.1 (9 August 2005)
Catalogue identifier: ADWO
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADWO
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Computer: PC i386
Operating system: GNU/Linux, tested on distributions SuSE 8.2 to 9.3, Red Hat 7.2, Debian 3.0, Ubuntu 5.04; also on Solaris
Programming languages used: GNU Make, DIANA, FORM, FORTRAN77
Additional programs/libraries used: DIANA 2.35 (QGRAF 2.0), FORM 3.1, LoopTools 2.1 (FF)
Memory required to execute with typical data: up to about 10 MB
No. of processors used: 1
No. of lines in distributed program, including test data, etc.: 40 926
No. of bytes in distributed program, including test data, etc.: 371 424
Distribution format: tar gzip file
High-speed storage required: from 1.5 to 30 MB, depending on modules present and unfolding of examples
Nature of the physical problem: calculation of differential cross sections for ee annihilation in one-loop approximation.
Method of solution: generation and perturbative analysis of Feynman diagrams, with later evaluation of matrix elements and form factors.
Restriction on the complexity of the problem: the limit of application is, for the moment, 2→2 particle reactions in the electroweak standard model.
Typical running time: a few minutes, depending strongly on the complexity of the process and the FORTRAN compiler.
NASA Astrophysics Data System (ADS)
Howell, Robert R.; Radebaugh, Jani; M. C Lopes, Rosaly; Kerber, Laura; Solomonidou, Anezina; Watkins, Bryn
2017-10-01
Using remote sensing of planetary volcanism on objects such as Io to determine eruption conditions is challenging because the emitting region is typically not resolved and because exposed lava cools so quickly. A model of the cooling rate and eruption mechanism is typically used to predict the amount of surface area at different temperatures; that areal distribution is then convolved with a Planck blackbody emission curve, and the predicted spectrum is compared with observation. Often, the broad nature of the Planck curve makes interpretation non-unique. However, different eruption mechanisms (for example, cooling fire-fountain droplets vs. cooling flows) have very different area-vs.-temperature distributions, which can often be characterized by simple power laws. Furthermore, magmas of different compositions have significantly different upper-limit cutoff temperatures. To test these models, in August 2016 and May 2017 we obtained spatially resolved observations of spreading Kilauea pahoehoe flows and fire fountains using a three-wavelength near-infrared prototype camera system. We measured the area-vs.-temperature distribution for the flows and find that, over a relatively broad temperature range, the distribution does follow a power law matching the theoretical predictions. As one approaches the solidus temperature, the observed area drops below the simple model predictions by an amount that seems to vary inversely with the vigor of the spreading rate. At these highest temperatures the simple models are probably inadequate; it appears necessary to model the visco-elastic stretching of the very thin crust that covers even the most recently formed surfaces. This deviation between observations and the simple models may be particularly important when using such remote sensing observations to determine magma eruption temperatures.
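The forward model described above can be sketched directly: assume the surface area per unit temperature follows a power law between the solidus and an upper cutoff, convolve it with the Planck function, and compare band radiances. The exponent and temperature cutoffs below are illustrative, not fitted values from the observations.

```python
import numpy as np

# Sketch of the unresolved-hot-spot forward model: area distribution
# dA/dT ~ T**-n convolved with Planck blackbody emission, then compared
# between two near-IR bands.  Exponent and cutoffs are assumptions.
h, c, kB = 6.626e-34, 2.998e8, 1.381e-23

def planck(lam, T):
    """Spectral radiance B_lambda(T), W m^-3 sr^-1."""
    return (2*h*c**2 / lam**5) / np.expm1(h*c / (lam*kB*T))

def band_radiance(lam, n=4.0, T_lo=1000.0, T_hi=1420.0):
    """Area-weighted radiance for dA/dT ~ T**-n (arbitrary area units)."""
    T = np.linspace(T_lo, T_hi, 400)
    dT = T[1] - T[0]
    w = T**(-n)                     # power-law areal distribution
    return float(np.sum(w * planck(lam, T)) * dT)

# the ratio of two near-IR bands constrains the distribution and cutoff
r = band_radiance(1.6e-6) / band_radiance(2.2e-6)
print(f"1.6/2.2 micron band ratio: {r:.3f}")
```

Raising the upper cutoff temperature (a hotter magma composition) boosts the short-wavelength band more than the long one, which is the lever that lets multi-band photometry discriminate eruption styles.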
Web-Based Learning Support System
NASA Astrophysics Data System (ADS)
Fan, Lisa
Web-based learning support systems offer many benefits over traditional learning environments and have become very popular. The Web is a powerful environment for distributing information and delivering knowledge to an increasingly wide and diverse audience. Typical Web-based learning environments, such as WebCT and Blackboard, include course content delivery tools, quiz modules, grade reporting systems, assignment submission components, etc. They are powerful integrated learning management systems (LMS) that support a number of activities performed by teachers and students during the learning process [1]. However, students who study a course on the Internet tend to be more heterogeneously distributed than those found in a traditional classroom situation. To achieve optimal efficiency in the learning process, an individual learner needs his or her own personalized assistance. For a Web-based open and dynamic learning environment, personalized support for learners becomes even more important. This chapter demonstrates how to realize personalized learning support in dynamic and heterogeneous learning environments by utilizing adaptive Web technologies. It focuses on course personalization, in terms of contents and teaching materials, according to each student's needs and capabilities. An example of using rough set theory to analyze student personal information, in order to assist students with effective learning and to predict student performance, is presented.
Decadal water quality variations at three typical basins of Mekong, Murray and Yukon
NASA Astrophysics Data System (ADS)
Khan, Afed U.; Jiang, Jiping; Wang, Peng
2018-02-01
Knowledge of the decadal distribution of water quality parameters is essential for surface water management. A decadal distribution analysis was conducted to assess variations in water quality parameters at three typical watersheds: the Murray, Mekong and Yukon. Rightward distribution shifts were observed for phosphorus and nitrogen parameters at the Mekong watershed monitoring sites, while leftward shifts were noted at the Murray and Yukon monitoring sites. Nutrient pollution thus increases over time at the Mekong watershed monitoring stations and decreases at those of the Murray and Yukon. The results imply that a watershed located in a densely populated developing area carries a higher risk of water quality deterioration than one in a thinly populated developed area. The present study suggests best management practices at the watershed scale to mitigate water pollution.
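A distribution-shift analysis of this kind can be illustrated with a two-sample test on synthetic decadal data. The study's own statistical procedure is not specified here, so the choice of test (Kolmogorov-Smirnov) and all concentrations below are assumptions for illustration only.

```python
import numpy as np
from scipy.stats import ks_2samp

# Synthetic total-phosphorus concentrations (mg/L) for two decades at a
# hypothetical monitoring station; the later decade is right-shifted.
rng = np.random.default_rng(0)
decade_1 = rng.lognormal(mean=-2.5, sigma=0.4, size=120)  # earlier decade
decade_2 = rng.lognormal(mean=-2.2, sigma=0.4, size=120)  # later decade

stat, p = ks_2samp(decade_1, decade_2)
shift = "right" if np.median(decade_2) > np.median(decade_1) else "left"
print(f"KS statistic = {stat:.3f}, p = {p:.2e}, {shift} shift")
```

A significant p-value with a higher later-decade median corresponds to the "right shift" (worsening nutrient pollution) reported for the Mekong sites; the mirror case corresponds to the Murray and Yukon improvements.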
Application of the FADS system on the Re-entry Module
NASA Astrophysics Data System (ADS)
Zhen, Huang
2016-07-01
The aerodynamic model for a Flush Air Data Sensing (FADS) system is built from the surface pressure distribution obtained through pressure orifices laid at specific positions on the surface; flight parameters such as angle of attack, angle of sideslip, Mach number, free-stream static pressure and dynamic pressure are then inferred from the aerodynamic model. FADS has been used on several flight tests of aircraft and re-entry vehicles, such as the X-15, Space Shuttle, F-14, X-33 and X-43A. This paper discusses the application of FADS on a blunt-body re-entry module to obtain high-precision aerodynamic parameters. First, the basic theory and operating principle of FADS are presented. Then, applications of FADS on typical aircraft and re-entry vehicles are described. Third, the application mode on a blunt-body re-entry module is discussed in detail, including aerodynamic simulation, pressure distribution, trajectory reconstruction and the hardware to be used, such as the FADS itself, the inertial navigation system (INS), the data acquisition system and the data storage system. Finally, a blunt-module re-entry flight test from low Earth orbit (LEO) is planned to obtain aerodynamic parameters and refine the aerodynamic model with the FADS data. The results show that FADS systems can be applied widely to blunt-body re-entry modules.
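The inference step can be sketched with the point-pressure formulation common in the FADS literature, p_i = qc(cos^2(theta_i) + eps*sin^2(theta_i)) + p_inf, where theta_i is the flow-incidence angle at port i. The port layout, calibration factor and flight condition below are hypothetical, and the real estimation runs on noisy flight data rather than this idealized round trip.

```python
import numpy as np
from scipy.optimize import least_squares

# FADS point-pressure model with ports in the vertical symmetry plane,
# so theta_i = delta_i - alpha.  Port angles and numbers are hypothetical.
delta = np.deg2rad([-40.0, -20.0, 0.0, 20.0, 40.0])  # port cone angles
eps = 0.1                                            # calibration factor (assumed)

def model(params):
    alpha, qc, p_inf = params
    th = delta - alpha
    return qc*(np.cos(th)**2 + eps*np.sin(th)**2) + p_inf

# synthetic "measured" pressures: alpha = 5 deg, qc = 12 kPa, p_inf = 26 kPa
truth = np.array([np.deg2rad(5.0), 12e3, 26e3])
p_meas = model(truth)

# recover (alpha, qc, p_inf) from the five port pressures
fit = least_squares(lambda x: model(x) - p_meas, x0=[0.0, 10e3, 20e3])
alpha_hat = np.rad2deg(fit.x[0])
print(f"estimated angle of attack = {alpha_hat:.2f} deg")
```

With five ports and three unknowns the least-squares problem is over-determined, which is what gives FADS its robustness to a failed orifice.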
AlJarullah, Asma; El-Masri, Samir
2013-08-01
The goal of a national electronic health records integration system is to aggregate electronic health records concerning a particular patient at different healthcare providers' systems to provide a complete medical history of the patient. It holds the promise to address the two most crucial challenges to the healthcare systems: improving healthcare quality and controlling costs. Typical approaches for the national integration of electronic health records are a centralized architecture and a distributed architecture. This paper proposes a new approach for the national integration of electronic health records, the semi-centralized approach, an intermediate solution between the centralized architecture and the distributed architecture that has the benefits of both approaches. The semi-centralized approach is provided with a clearly defined architecture. The main data elements needed by the system are defined and the main system modules that are necessary to achieve an effective and efficient functionality of the system are designed. Best practices and essential requirements are central to the evolution of the proposed architecture. The proposed architecture will provide the basis for designing the simplest and the most effective systems to integrate electronic health records on a nation-wide basis that maintain integrity and consistency across locations, time and systems, and that meet the challenges of interoperability, security, privacy, maintainability, mobility, availability, scalability, and load balancing.
Supervised learning of probability distributions by neural networks
NASA Technical Reports Server (NTRS)
Baum, Eric B.; Wilczek, Frank
1988-01-01
Supervised learning algorithms for feedforward neural networks are investigated analytically. The back-propagation algorithm described by Werbos (1974), Parker (1985), and Rumelhart et al. (1986) is generalized by redefining the values of the input and output neurons as probabilities. The synaptic weights are then varied to follow gradients in the logarithm of likelihood rather than in the error. This modification is shown to provide a more rigorous theoretical basis for the algorithm and to permit more accurate predictions. A typical application involving a medical-diagnosis expert system is discussed.
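The core idea above, ascending the gradient of the log-likelihood with probabilistic outputs instead of descending squared error, reduces to the familiar cross-entropy gradient for a single logistic unit. A minimal numpy illustration on toy data (not the authors' network or data):

```python
import numpy as np

# Toy data: binary labels drawn from a logistic model.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
w_true = np.array([1.5, -2.0, 0.5])
y = (1/(1 + np.exp(-X @ w_true)) > rng.uniform(size=200)).astype(float)

# Train a single logistic "output neuron" by ascending the mean
# log-likelihood gradient, X^T (y - p) / N, rather than squared error.
w = np.zeros(3)
for _ in range(500):
    p = 1/(1 + np.exp(-X @ w))       # output interpreted as a probability
    grad = X.T @ (y - p) / len(y)    # gradient of the log-likelihood
    w += 0.5 * grad                  # ascend, i.e. maximize likelihood

p = 1/(1 + np.exp(-X @ w))
nll = -np.mean(y*np.log(p) + (1-y)*np.log(1-p))
print("weights:", np.round(w, 2), " NLL:", round(nll, 3))
```

The gradient X^T(y - p) is exactly what back-propagation produces when the error measure is the negative log-likelihood, which is the redefinition the paper advocates.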
Distributed Computer Systems for the Republic of Turkish Navy.
1985-12-01
and the entire medium spectrum is consumed by the signal. Baseband LANs are typically accessed via a carrier-sense multiple-access with collision detection (CSMA/CD) scheme. [A passage on inter-process communication buffers and message packets is garbled beyond recovery in this copy.] control. a. Routing. Routing is the decision process which determines the path a message follows from its source to its destination. Some routing
An interactive environment for the analysis of large Earth observation and model data sets
NASA Technical Reports Server (NTRS)
Bowman, Kenneth P.; Walsh, John E.; Wilhelmson, Robert B.
1993-01-01
We propose to develop an interactive environment for the analysis of large Earth science observation and model data sets. We will use a standard scientific data storage format and a large capacity (greater than 20 GB) optical disk system for data management; develop libraries for coordinate transformation and regridding of data sets; modify the NCSA X Image and X DataSlice software for typical Earth observation data sets by including map transformations and missing data handling; develop analysis tools for common mathematical and statistical operations; integrate the components described above into a system for the analysis and comparison of observations and model results; and distribute software and documentation to the scientific community.
An interactive environment for the analysis of large Earth observation and model data sets
NASA Technical Reports Server (NTRS)
Bowman, Kenneth P.; Walsh, John E.; Wilhelmson, Robert B.
1992-01-01
We propose to develop an interactive environment for the analysis of large Earth science observation and model data sets. We will use a standard scientific data storage format and a large capacity (greater than 20 GB) optical disk system for data management; develop libraries for coordinate transformation and regridding of data sets; modify the NCSA X Image and X Data Slice software for typical Earth observation data sets by including map transformations and missing data handling; develop analysis tools for common mathematical and statistical operations; integrate the components described above into a system for the analysis and comparison of observations and model results; and distribute software and documentation to the scientific community.
Lagrangian turbulence: Structures and mixing in admissible model flows
NASA Astrophysics Data System (ADS)
Ottino, Julio M.
1991-12-01
The goal of our research was to bridge the gap between modern ideas from dynamical systems and chaos and more traditional approaches to turbulence. To reach this objective we conducted theoretical and computational work on two systems: (1) a perturbed Kelvin cat's-eyes flow, and (2) prototype solutions of the Navier-Stokes equations near solid walls. The main results obtained are two-fold: we have been able to produce flows capable of producing complex distributions of vorticity, and we have been able to construct flowfields, based on solutions of the Navier-Stokes equations, which are capable of displaying both Eulerian and Lagrangian turbulence. These results exemplify typical mechanisms of mixing enhancement in transitional flows.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Young, Michael D.; Dowell, Jessica L.; Rhode, Katherine L., E-mail: youngmd@indiana.edu, E-mail: jlwind@astro.indiana.edu, E-mail: rhode@astro.indiana.edu
We present results from a study of the globular cluster (GC) systems of four spiral and S0 galaxies imaged as part of an ongoing wide-field survey of the GC systems of giant galaxies. The target galaxies-the SB0 galaxy NGC 1023, the SBb galaxy NGC 1055, and an isolated pair comprised of the Sbc galaxy NGC 7339 and the S0 galaxy NGC 7332-were observed in BVR filters with the WIYN 3.5 m telescope and Minimosaic camera. For two of the galaxies, we combined the WIYN imaging with previously published data from the Hubble Space Telescope and the Keck Observatory to help characterize the GC distribution in the central few kiloparsecs. We determine the radial distribution (surface density of GCs versus projected radius) of each galaxy's GC system and use it to calculate the total number of GCs (N_GC). We find N_GC = 490 ± 30, 210 ± 40, 175 ± 15, and 75 ± 10 for NGC 1023, NGC 1055, NGC 7332, and NGC 7339, respectively. We also calculate the GC specific frequency (N_GC normalized by host galaxy luminosity or mass) and find values typical of those of the other spiral and E/S0 galaxies in the survey. The two lenticular galaxies have sufficient numbers of GC candidates for us to perform statistical tests for bimodality in the GC color distributions. We find evidence at a high confidence level (>95%) for two populations in the B - R distribution of the GC system of NGC 1023. We find weaker evidence for bimodality (>81% confidence) in the GC color distribution of NGC 7332. Finally, we identify eight GC candidates that may be associated with the Magellanic dwarf galaxy NGC 1023A, which is a satellite of NGC 1023.
Haq, Shaji S; Kodak, Tiffany
2015-01-01
This study evaluated the effects of massed and distributed practice on the acquisition of tacts and textual behavior in typically developing children. We compared the effects of massed practice (i.e., consolidating all practice opportunities during the week into a single session) and distributed practice (i.e., distributing all practice opportunities across 4 sessions during the week) on the acquisition of textual behavior in English, tacting pictures of common nouns in Spanish, and textual behavior in Spanish using an adapted alternating treatments design embedded within a multiple probe design. We also examined correct responding during probes that (a) excluded prompts and reinforcement and (b) occurred 48 hr after training each week. The results indicated that distributed practice was the more efficient training format. Maintenance data collected up to 4 weeks after training also indicated that the participants consistently displayed higher levels of correct responding to targets that had been trained in distributed format. We discuss implications for practice and potential areas for future research. © Society for the Experimental Analysis of Behavior.
SPANG: a SPARQL client supporting generation and reuse of queries for distributed RDF databases.
Chiba, Hirokazu; Uchiyama, Ikuo
2017-02-08
Toward improved interoperability of distributed biological databases, an increasing number of datasets have been published in the standardized Resource Description Framework (RDF). Although the powerful SPARQL Protocol and RDF Query Language (SPARQL) provides a basis for exploiting RDF databases, writing SPARQL code is burdensome for users, including bioinformaticians. Thus, an easy-to-use interface is necessary. We developed SPANG, a SPARQL client that has unique features for querying RDF datasets. SPANG dynamically generates typical SPARQL queries according to specified arguments. It can also call SPARQL template libraries constructed in a local system or published on the Web. Further, it enables combinatorial execution of multiple queries, each with a distinct target database. These features facilitate easy and effective access to RDF datasets and integrative analysis of distributed data. SPANG helps users to exploit RDF datasets by generation and reuse of SPARQL queries through a simple interface. This client will enhance integrative exploitation of biological RDF datasets distributed across the Web. This software package is freely available at http://purl.org/net/spang.
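The query-generation idea can be illustrated with a toy generator. Note that this is not SPANG's actual syntax or API (see the project page for the real client); it is only a sketch of producing a typical SPARQL query from shorthand arguments.

```python
# Toy illustration of generating a typical SPARQL query from a few
# arguments, in the spirit of (but NOT identical to) SPANG.
def build_query(subject_class, predicate, limit=10):
    """Return a SELECT query for instances of a class and one property."""
    return (
        "SELECT ?s ?o WHERE {\n"
        f"  ?s a <{subject_class}> ;\n"
        f"     <{predicate}> ?o .\n"
        f"}} LIMIT {limit}"
    )

q = build_query("http://purl.obolibrary.org/obo/GO_0008150",
                "http://www.w3.org/2000/01/rdf-schema#label")
print(q)
```

Template-based generation like this is what lets non-experts query RDF endpoints without hand-writing SPARQL; SPANG extends the idea with shareable template libraries and multi-endpoint execution.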
The stress distribution in pin-loaded orthotropic plates
NASA Technical Reports Server (NTRS)
Klang, E. C.; Hyer, M. W.
1985-01-01
The performance of mechanically fastened composite joints was studied. Specifically, a single-bolt connector was modeled as a pin-loaded, infinite plate. The model used two-dimensional, complex-variable elasticity techniques combined with a boundary collocation procedure to produce solutions for the problem. Through iteration, the boundary conditions were satisfied and the stresses in the plate were calculated. Several graphite-epoxy laminates were studied. In addition, parameters such as the pin modulus, coefficient of friction, and pin-plate clearance were varied. Conclusions drawn from this study indicate: (1) the material properties (i.e., laminate configuration) of the plate alter the stress state and, for highly orthotropic materials, the contact stress deviates greatly from the cosinusoidal distribution often assumed; (2) friction plays a major role in the distribution of stresses in the plate; (3) reversing the load direction also greatly affects the stress distribution in the plate; (4) clearance (or interference) fits change the contact angle and thus the location of the peak hoop stress; and (5) a rigid pin appears to be a good assumption for typical material systems.
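The "cosinusoidal distribution often assumed" in conclusion (1) can be checked numerically: a radial bearing stress sigma_r = sigma0*cos(theta) over the loaded half of the hole balances the pin load P only if sigma0 = 4P/(pi*D*t). The load, diameter and thickness below are illustrative, not the study's test cases.

```python
import numpy as np

# Equilibrium check of the classical cosine contact-stress assumption
# for a pin-loaded hole: sigma_r(theta) = sigma0*cos(theta) on the
# loaded half, sigma0 = 4P/(pi*D*t).  Numbers are illustrative.
P = 5000.0   # pin load, N
D = 0.006    # hole diameter, m
t = 0.004    # plate thickness, m

sigma0 = 4*P / (np.pi * D * t)                 # peak bearing stress, Pa
theta = np.linspace(-np.pi/2, np.pi/2, 2001)
sigma_r = sigma0 * np.cos(theta)

# resolve the radial stress along the load direction and integrate
P_check = np.sum(sigma_r * np.cos(theta)) * (theta[1]-theta[0]) * (D/2) * t
print(f"peak bearing stress = {sigma0/1e6:.1f} MPa, "
      f"recovered load = {P_check:.1f} N")
```

The study's point is that for highly orthotropic laminates the true contact stress departs strongly from this convenient closed form, so the cosine profile should be treated as a baseline, not a result.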
Design optimization of large-size format edge-lit light guide units
NASA Astrophysics Data System (ADS)
Hastanin, J.; Lenaerts, C.; Fleury-Frenette, K.
2016-04-01
In this paper, we present an original method of dot pattern generation dedicated to design optimization of large-size format light guide plates (LGPs), such as those in photo-bioreactors, where the number of dots greatly exceeds the maximum allowable number of optical objects supported by most common ray-tracing software. In the proposed method, to simplify the computational problem, the original optical system is replaced by an equivalent one. Accordingly, the original dot pattern is split into multiple small sections, inside which the dot size variation is less than the typical printing resolution of ink dots. These sections are then replaced by equivalent cells with a continuous diffusing film. After that, we adjust the two-dimensional TIS (Total Integrated Scatter) distribution over the grid of equivalent cells using an iterative optimization procedure. Finally, the obtained optimal TIS distribution is converted into a dot size distribution by applying an appropriate conversion rule. An original semi-empirical equation dedicated to rectangular large-size LGPs is proposed for the initial guess of the TIS distribution. It significantly reduces the total time needed for dot pattern optimization.
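The iterative TIS adjustment can be illustrated with a toy one-dimensional guide in which each equivalent cell scatters out a fraction of the remaining flux. Real LGP design is two-dimensional and ray-traced, so everything here (cell count, target extraction, update rule) is a simplified assumption.

```python
import numpy as np

# Toy 1-D edge-lit guide: light enters one end, each of n equivalent
# cells scatters out a fraction (its TIS) of the flux reaching it.
# Iteratively rescale each cell's TIS until extraction is uniform.
n = 20
tis = np.full(n, 0.05)                 # initial scatter fraction per cell

def extraction(tis):
    """Light scattered out of each cell as the guided flux decays."""
    flux, out = 1.0, np.zeros(len(tis))
    for i, s in enumerate(tis):
        out[i] = flux * s
        flux *= (1.0 - s)
    return out

target_total = 0.6                     # extract 60% of the injected light
c = target_total / n                   # equal share per cell
for _ in range(200):
    tis = np.clip(tis * c / extraction(tis), 1e-6, 0.95)

out = extraction(tis)
print("per-cell extraction spread (min/max):", round(out.min()/out.max(), 4))
```

Because downstream cells see less flux, the converged TIS grows toward the far end of the guide, which is exactly why printed dot patterns on real LGPs become denser away from the LED edge.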
Inthavong, Kiao; Fung, Man Chiu; Yang, William; Tu, Jiyuan
2015-02-01
To evaluate the deposition efficiency of spray droplets in a nasal cavity produced from a spray device, it is important to determine droplet size distribution, velocity, and its dispersion during atomization. Due to the limiting geometric dimensions of the nasal cavity airway, the spray plume cannot develop to its full size inside the nasal vestibule to penetrate the nasal valve region for effective drug deposition. Particle/droplet image analysis was used to determine local mean droplet sizes at eight regions within the spray plume under different actuation pressures that represent typical hand operation from pediatric to adult patients. The results showed that higher actuation pressure produces smaller droplets in the atomization. Stronger actuation pressure typical of adult users produces a longer period of the fully atomized spray stage, despite a shorter overall spray duration. This produces finer droplets when compared with the data obtained by weaker actuation pressure, typical of pediatric users. The experimental technique presented is able to capture a more complete representation of the droplet size distribution and the atomization process during an actuation. The measured droplet size distribution produced can be related to the empirically defined deposition efficiency curve of the nasal cavity, allowing a prediction of the likely deposition.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chu, T.Y.; Bentz, J.; Simpson, R.
1997-02-01
The objective of the Lower Head Failure (LHF) Experiment Program is to experimentally investigate and characterize the failure of the reactor vessel lower head due to thermal and pressure loads under severe accident conditions. The experiment is performed using 1/5-scale models of a typical PWR pressure vessel. Experiments are performed for various internal pressures and imposed heat flux distributions, with and without instrumentation guide tube penetrations. The experimental program is complemented by a modest modeling program based on the application of vessel creep rupture codes developed in the TMI Vessel Investigation Project. The first three experiments under the LHF program investigated the creep rupture of simulated reactor pressure vessels without penetrations. The heat flux distributions for the three experiments were uniform (LHF-1), center-peaked (LHF-2), and side-peaked (LHF-3), respectively. For all the experiments, appreciable vessel deformation was observed to initiate at vessel wall temperatures above 900 K, and the vessel typically failed at approximately 1000 K. The size of the failure was always observed to be smaller than the heated region. For experiments with non-uniform heat flux distributions, failure typically occurs in the region of peak temperature. A brief discussion of the effect of penetration is also presented.
The WorkQueue project - a task queue for the CMS workload management system
NASA Astrophysics Data System (ADS)
Ryu, S.; Wakefield, S.
2012-12-01
We present the development and first experience of a new component (termed WorkQueue) in the CMS workload management system. This component provides a link between a global request system (Request Manager) and agents (WMAgents) which process requests at compute and storage resources (known as sites). These requests typically consist of creation or processing of a data sample (possibly terabytes in size). Unlike the standard concept of a task queue, the WorkQueue does not contain fully resolved work units (known typically as jobs in HEP). This would require the WorkQueue to run computationally heavy algorithms that are better suited to run in the WMAgents. Instead the request specifies an algorithm that the WorkQueue uses to split the request into reasonable size chunks (known as elements). An advantage of performing lazy evaluation of an element is that expanding datasets can be accommodated by having job details resolved as late as possible. The WorkQueue architecture consists of a global WorkQueue which obtains requests from the request system, expands them and forms an element ordering based on the request priority. Each WMAgent contains a local WorkQueue which buffers work close to the agent, this overcomes temporary unavailability of the global WorkQueue and reduces latency for an agent to begin processing. Elements are pulled from the global WorkQueue to the local WorkQueue and into the WMAgent based on the estimate of the amount of work within the element and the resources available to the agent. WorkQueue is based on CouchDB, a document oriented NoSQL database. The WorkQueue uses the features of CouchDB (map/reduce views and bi-directional replication between distributed instances) to provide a scalable distributed system for managing large queues of work. The project described here represents an improvement over the old approach to workload management in CMS which involved individual operators feeding requests into agents. 
This new approach allows for a system where individual WMAgents are transient and can be added or removed from the system as needed.
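The lazy-splitting behaviour described above can be sketched with a toy priority queue. This is not the actual CMS WorkQueue code (which is CouchDB-backed and far richer); class, field and request names below are invented for illustration.

```python
import heapq

# Toy WorkQueue sketch: requests carry only a chunking rule; elements
# are materialised lazily when an agent pulls work, sized by the
# resources the agent reports.  Not the real CMS implementation.
class WorkQueue:
    def __init__(self):
        self._heap = []            # (negative priority, seq, request)
        self._seq = 0

    def add_request(self, name, total_jobs, chunk, priority):
        heapq.heappush(self._heap, (-priority, self._seq,
                                    {"name": name, "left": total_jobs,
                                     "chunk": chunk}))
        self._seq += 1

    def pull(self, free_slots):
        """Lazily split the highest-priority request into one element."""
        if not self._heap or free_slots <= 0:
            return None
        prio, seq, req = heapq.heappop(self._heap)
        size = min(req["chunk"], req["left"], free_slots)
        req["left"] -= size
        if req["left"] > 0:        # unfinished request stays queued
            heapq.heappush(self._heap, (prio, seq, req))
        return {"request": req["name"], "jobs": size}

wq = WorkQueue()
wq.add_request("MC-gen", total_jobs=250, chunk=100, priority=1)
wq.add_request("reprocess", total_jobs=80, chunk=50, priority=5)
print(wq.pull(free_slots=120))   # higher-priority request is split first
```

Because elements are sized at pull time, an expanding request (more jobs appended later) needs no re-planning: the next pull simply sees the larger remainder, which is the "lazy evaluation" advantage the abstract highlights.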
DOE Office of Scientific and Technical Information (OSTI.GOV)
Auslander, David; Culler, David; Wright, Paul
The goal of the 2.5-year Distributed Intelligent Automated Demand Response (DIADR) project was to reduce peak electricity load of Sutardja Dai Hall at UC Berkeley by 30% while maintaining a healthy, comfortable, and productive environment for the occupants. We sought to bring together both central and distributed control to provide “deep” demand response at the appliance level of the building as well as typical lighting and HVAC applications. This project brought together Siemens Corporate Research and Siemens Building Technology (the building has a Siemens Apogee Building Automation System (BAS)), Lawrence Berkeley National Laboratory (leveraging their Open Automated Demand Response (OpenADR), Auto-Demand Response, and building modeling expertise), and UC Berkeley (related demand response research including distributed wireless control, and grid-to-building gateway development). Sutardja Dai Hall houses the Center for Information Technology Research in the Interest of Society (CITRIS), which fosters collaboration among industry and faculty and students of four UC campuses (Berkeley, Davis, Merced, and Santa Cruz). The 141,000 square foot building, occupied in 2009, includes typical office spaces and a nanofabrication laboratory. Heating is provided by a district heating system (steam from campus as a byproduct of the campus cogeneration plant); cooling is provided by one of two chillers: a more typical electric centrifugal compressor chiller designed for the cool months (Nov-March) and a steam absorption chiller for use in the warm months (April-October). Lighting in the open office areas is provided by direct-indirect luminaires with Building Management System-based scheduling for open areas, and occupancy sensors for private office areas. For the purposes of this project, we focused on the office portion of the building. Annual energy consumption is approximately 8053 MWh; the office portion is estimated as 1924 MWh.
The maximum peak load during the study period was 1175 kW. Several new tools facilitated this work, such as the Smart Energy Box, the distributed load controller or Energy Information Gateway, the web-based DR controller (dubbed the Central Load-Shed Coordinator or CLSC), and the Demand Response Capacity Assessment & Operation Assistance Tool (DRCAOT). In addition, an innovative data aggregator called sMAP (simple Measurement and Actuation Profile) allowed data from different sources to be collected in a compact form and facilitated detailed analysis of the building systems' operation. A smart phone application (RAP, or Rapid Audit Protocol) facilitated an inventory of the building's plug loads. Carbon dioxide sensors located in conference rooms and classrooms allowed demand-controlled ventilation. The extensive submetering and nimble access to this data provided great insight into the details of the building operation as well as quick diagnostics and analyses of tests. For example, students discovered a short-cycling chiller, a stuck damper, and a leaking cooling coil in the first field tests. For our final field tests, we were able to see how each zone was affected by the DR strategies (e.g., the offices on the 7th floor grew very warm quickly) and fine-tune the strategies accordingly.
A search for model parsimony in a real time flood forecasting system
NASA Astrophysics Data System (ADS)
Grossi, G.; Balistrocchi, M.
2009-04-01
For the hydrological simulation of flood events, a physically based distributed approach is the most appealing one, especially in areas where the spatial variability of the soil hydraulic properties and of the meteorological forcing cannot be neglected, such as mountainous regions. On the other hand, in real-time flood forecasting systems, less detailed models requiring fewer parameters may be more convenient, reducing both the computational costs and the calibration uncertainty. In this case a precise quantification of the entire hydrograph pattern is not necessary: the expected output of a real-time flood forecasting system is just an estimate of the peak discharge, the time to peak and, in some cases, the flood volume. In this perspective a parsimonious model has to be found in order to increase the efficiency of the system. A suitable case study was identified in the northern Apennines: the Taro river is a right tributary of the Po river and drains about 2000 km² of mountains, hills and floodplain, equally distributed. The hydrometeorological monitoring of this medium-sized watershed is managed by ARPA Emilia Romagna through a dense network of up-to-date gauges (about 30 rain gauges and 10 hydrometers). Detailed maps of the surface elevation, land use and soil texture characteristics are also available. Five flood events were recorded by the new monitoring network in the years 2003-2007: during these events the peak discharge exceeded 1000 m³/s, quite a high value compared to the mean discharge rate of about 30 m³/s. The rainfall spatial patterns of these storms were analyzed in previous works by means of geostatistical tools and a typical semivariogram was defined, with the aim of establishing a typical storm structure leading to flood events in the Taro river.
The available information was implemented into a distributed flood event model with a spatial resolution of 90 m; the hydrologic detail was then reduced by progressively assuming a uniform rainfall field and constant soil properties. A semi-distributed model, obtained by subdividing the catchment into three sub-catchments, and a lumped model were also applied to simulate the selected flood events. Errors were quantified in terms of the peak discharge ratio, the flood volume and the time to peak by comparing the simulated hydrographs to the observed ones.
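The three error measures named above can be written down directly. A minimal sketch with illustrative hydrographs sampled on a uniform time grid (none of these numbers or names come from the paper):

```python
def hydrograph_errors(t, q_sim, q_obs):
    """t: times (h); q_sim, q_obs: discharge series (m3/s) on t.
    Returns the peak-discharge ratio, the flood-volume ratio, and the
    time-to-peak difference (simulated minus observed)."""
    peak_ratio = max(q_sim) / max(q_obs)
    volume_ratio = sum(q_sim) / sum(q_obs)   # uniform time step cancels
    t_peak_err = t[q_sim.index(max(q_sim))] - t[q_obs.index(max(q_obs))]
    return peak_ratio, volume_ratio, t_peak_err

t = [0, 1, 2, 3, 4]                          # hours
q_obs = [100, 400, 1000, 600, 300]           # observed hydrograph
q_sim = [120, 500, 900, 700, 250]            # e.g. lumped-model output
print(hydrograph_errors(t, q_sim, q_obs))
```

A parsimony study would compute these three numbers for each model (distributed, semi-distributed, lumped) and each of the five events, then judge how much accuracy is lost as detail is removed.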
On the deterministic and stochastic use of hydrologic models
Farmer, William H.; Vogel, Richard M.
2016-01-01
Environmental simulation models, such as precipitation-runoff watershed models, are increasingly used in a deterministic manner for environmental and water resources design, planning, and management. In operational hydrology, simulated responses are now routinely used to plan, design, and manage a very wide class of water resource systems. However, all such models are calibrated to existing data sets and retain some residual error. This residual, typically unknown in practice, is often ignored, implicitly trusting simulated responses as if they are deterministic quantities. In general, ignoring the residuals will result in simulated responses with distributional properties that do not mimic those of the observed responses. This discrepancy has major implications for the operational use of environmental simulation models as is shown here. Both a simple linear model and a distributed-parameter precipitation-runoff model are used to document the expected bias in the distributional properties of simulated responses when the residuals are ignored. The systematic reintroduction of residuals into simulated responses in a manner that produces stochastic output is shown to improve the distributional properties of the simulated responses. Every effort should be made to understand the distributional behavior of simulation residuals and to use environmental simulation models in a stochastic manner.
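The "systematic reintroduction of residuals" described above can be sketched as a residual bootstrap: calibration residuals are resampled and added back onto the deterministic output to form a stochastic ensemble whose distribution better mimics the observations. All data and names here are illustrative, not the paper's models:

```python
import random

def stochastic_ensemble(simulated, residuals, n_members=100, seed=0):
    """Bootstrap calibration residuals onto a deterministic simulation."""
    rng = random.Random(seed)
    return [[q + rng.choice(residuals) for q in simulated]
            for _ in range(n_members)]

obs = [10.0, 14.0, 9.0, 12.0]            # observed responses
sim = [11.0, 12.5, 10.0, 11.5]           # deterministic model output
resid = [o - s for o, s in zip(obs, sim)]  # calibration residuals
ensemble = stochastic_ensemble(sim, resid)
print(len(ensemble), len(ensemble[0]))   # → 100 4
```

Simple resampling assumes independent, identically distributed residuals; serially correlated or heteroscedastic residuals would need a more careful error model.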
Electric Field Simulation of Surge Capacitors with Typical Defects
NASA Astrophysics Data System (ADS)
Zhang, Chenmeng; Mao, Yuxiang; Xie, Shijun; Zhang, Yu
2018-03-01
The electric field of power capacitors with different typical defects under DC and impulse-oscillation working conditions is studied in this paper. According to the type and location of the defects, and considering the influence of space charge, two-dimensional models of surge capacitors with different typical defects are simulated in ANSYS. The distribution of the electric field inside the capacitor is analyzed, and the concentration of the electric field and its influence on insulation performance are obtained. The results show that the type of defect, the location of the defect and the space charge all affect the electric field distribution inside the capacitor to varying degrees. In particular, the electric field distortion in local areas such as sharp corners and burrs is relatively large, which increases the probability of partial discharge inside the surge capacitor.
The origin of polygonal troughs on the northern plains of Mars
NASA Astrophysics Data System (ADS)
Pechmann, J. C.
1980-05-01
The morphology, distribution, geologic environment and relative age of large-scale polygonal trough systems on Mars are examined. The troughs are steep-walled, flat-floored, sinuous depressions typically 200-800 m wide, 20-120 m deep and spaced 5-10 km apart. The mechanics of formation of tension cracks is reviewed to identify the factors controlling the scale of tension crack systems; special emphasis is placed on thermal cracking in permafrost. It is shown that because of the extremely large scale of the Martian fracture systems, they could not have formed by thermal cracking in permafrost, desiccation cracking in sediments or contraction cracking in cooling lava. On the basis of photogeologic evidence and analog studies, it is proposed that polygonal troughs on the northern plains of Mars are grabens.
Dataflow computing approach in high-speed digital simulation
NASA Technical Reports Server (NTRS)
Ercegovac, M. D.; Karplus, W. J.
1984-01-01
New computational tools and methodologies for the digital simulation of continuous systems were explored. Programmability and cost-effective performance in multiprocessor organizations for real-time simulation were investigated. The approach is based on functional-style languages and dataflow computing principles, which allow for the natural representation of parallelism in algorithms and provide a suitable basis for the design of cost-effective, high-performance distributed systems. The objectives of this research are to: (1) perform a comparative evaluation of several existing dataflow languages and develop an experimental dataflow language suitable for real-time simulation using multiprocessor systems; (2) investigate the main issues that arise in the architecture and organization of dataflow multiprocessors for real-time simulation; and (3) develop and apply performance evaluation models in typical applications.
Telesca, Luciano; Lovallo, Michele; Ramirez-Rojas, Alejandro; Flores-Marquez, Leticia
2014-01-01
By using the method of the visibility graph (VG) the synthetic seismicity generated by a simple stick-slip system with asperities is analysed. The stick-slip system mimics the interaction between tectonic plates, whose asperities are given by sandpapers of different granularity degrees. The VG properties of the seismic sequences have been put in relationship with the typical seismological parameter, the b-value of the Gutenberg-Richter law. Between the b-value of the synthetic seismicity and the slope of the least square line fitting the k-M plot (relationship between the magnitude M of each synthetic event and its connectivity degree k) a close linear relationship is found, also verified by real seismicity.
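The visibility criterion behind the k-M plot can be stated compactly: two events are connected if every intermediate event lies strictly below the straight line joining them. A minimal natural-VG degree computation on an illustrative magnitude series (not data from the study; names are mine):

```python
def visibility_degrees(y):
    """Node degree k of each sample in a natural visibility graph."""
    n = len(y)
    deg = [0] * n
    for a in range(n):
        for b in range(a + 1, n):
            # (a, b) are connected iff every c between them lies below
            # the line from (a, y[a]) to (b, y[b])
            visible = all(
                y[c] < y[b] + (y[a] - y[b]) * (b - c) / (b - a)
                for c in range(a + 1, b)
            )
            if visible:
                deg[a] += 1
                deg[b] += 1
    return deg

mags = [2.1, 3.4, 2.0, 4.2, 2.5]   # synthetic magnitude series
print(visibility_degrees(mags))    # → [1, 3, 2, 3, 1]
```

On a real catalogue, the slope of the least-squares line through the (k, M) pairs is the quantity the authors relate to the Gutenberg-Richter b-value.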
A survey of manufacturers of solar thermal energy systems
NASA Technical Reports Server (NTRS)
Levine, N.; Slonski, M. L.
1982-01-01
Sixty-seven firms that had received funding for the development of solar thermal energy systems (STES) were surveyed. The effect of the solar thermal technology systems program in accelerating STES development was assessed. The 54 firms still developing STES were grouped into a production typology comparing the three major technologies with three basic functions. It was discovered that large and small firms were developing primarily central receiver systems, but also typically worked on more than one technology. Most medium-sized firms worked only on distributed systems. Federal support of STES was perceived as necessary to allow producers to take otherwise unacceptable risks. Approximately half of the respondents would drop out of STES if support were terminated, including a disproportionate number of medium-sized firms. A differentiated view of the technology, taking into account differing firm sizes and the various stages of technology development, was suggested for policy and planning purposes.
NASA Astrophysics Data System (ADS)
Hilfinger, Andreas; Chen, Mark; Paulsson, Johan
2012-12-01
Studies of stochastic biological dynamics typically compare observed fluctuations to theoretically predicted variances, sometimes after separating the intrinsic randomness of the system from the enslaving influence of changing environments. But variances have been shown to discriminate surprisingly poorly between alternative mechanisms, while for other system properties no approaches exist that rigorously disentangle environmental influences from intrinsic effects. Here, we apply the theory of generalized random walks in random environments to derive exact rules for decomposing time series and higher statistics, rather than just variances. We show for which properties and for which classes of systems intrinsic fluctuations can be analyzed without accounting for extrinsic stochasticity and vice versa. We derive two independent experimental methods to measure the separate noise contributions and show how to use the additional information in temporal correlations to detect multiplicative effects in dynamical systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2014-09-01
Forced-air distribution systems (duct systems) typically are installed out of sight for aesthetic reasons, most often in unconditioned areas such as attics or crawlspaces. Any leakage of air to or from the duct system in unconditioned space not only loses energy but also impacts home and equipment durability and indoor air quality. An obvious solution is to bring the duct system into the interior of the house, either by sealing the area where the ducts are installed (attic or crawlspace) or by building an interior cavity or chase above the ceiling plane (raised ceiling or fur-up chase) or below the ceiling plane (dropped ceiling or fur-down chase) for the duct system. In this project, the Building America Partnership for Improved Residential Construction team partnered with Tommy Williams Homes to implement an inexpensive, quick, and effective method of building a fur-down chase.
Feasibility of solid oxide fuel cell dynamic hydrogen coproduction to meet building demand
NASA Astrophysics Data System (ADS)
Shaffer, Brendan; Brouwer, Jacob
2014-02-01
A dynamic internal reforming-solid oxide fuel cell system model is developed and used to simulate the coproduction of electricity and hydrogen while meeting the measured dynamic load of a typical southern California commercial building. The simulated direct internal reforming-solid oxide fuel cell (DIR-SOFC) system is controlled to become an electrical load-following device that closely follows the measured building load data (3-s resolution). The feasibility of the DIR-SOFC system to meet the dynamic building demand while co-producing hydrogen is demonstrated. The resulting thermal responses of the system to the electrical load dynamics, as well as those associated with the filling of a hydrogen collection tank, are investigated. The DIR-SOFC system model also allows for resolution of the fuel cell species and temperature distributions during these dynamics, since thermal gradients are a concern for DIR-SOFCs.
Implementing bioinformatic workflows within the bioextract server
USDA-ARS?s Scientific Manuscript database
Computational workflows in bioinformatics are becoming increasingly important in the achievement of scientific advances. These workflows typically require the integrated use of multiple, distributed data sources and analytic tools. The BioExtract Server (http://bioextract.org) is a distributed servi...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barnes, P.R.; Van Dyke, J.W.; McConnell, B.W.
It is estimated that electric utilities use about 40 million distribution transformers in supplying electricity to customers in the United States. Although utility distribution transformers collectively have a high average efficiency, they account for approximately 61 billion kWh of the 229 billion kWh of energy lost annually in the delivery of electricity. Distribution transformers are being replaced over time by new, more efficient, lower-loss units during routine utility maintenance of power distribution systems. Maintenance is typically not performed on units in service; however, units removed from service with appreciable remaining life are often refurbished and returned to stock. Distribution transformers may be removed from service for many reasons, including failure, over- or underloading, or line upgrades such as voltage changes or rerouting. When a distribution transformer is removed from service, a decision must be made whether to dispose of it and purchase a lower-loss replacement or to refurbish it and return it to stock for future use. This report contains findings and recommendations on replacing utility distribution transformers during routine maintenance, a study required by section 124(c) of the Energy Policy Act of 1992. The objectives of the study are to evaluate the practicability, cost-effectiveness, and potential energy savings of replacing or upgrading existing transformers during routine utility maintenance and to develop recommendations on ways to achieve the potential energy savings.
Spatial Light Modulators as Optical Crossbar Switches
NASA Technical Reports Server (NTRS)
Juday, Richard
2003-01-01
A proposed method of implementing cross connections in an optical communication network is based on the use of a spatial light modulator (SLM) to form controlled diffraction patterns that connect inputs (light sources) and outputs (light sinks). Sources would typically include optical fibers and/or light-emitting diodes; sinks would typically include optical fibers and/or photodetectors. The sources and/or sinks could be distributed in two dimensions; that is, on planes. Alternatively or in addition, sources and/or sinks could be distributed in three dimensions -- for example, on curved surfaces or in more complex (including random) three-dimensional patterns.
Semantic Indexing of Medical Learning Objects: Medical Students' Usage of a Semantic Network
Gießler, Paul; Ohnesorge-Radtke, Ursula; Spreckelsen, Cord
2015-01-01
Background The Semantically Annotated Media (SAM) project aims to provide a flexible platform for searching, browsing, and indexing medical learning objects (MLOs) based on a semantic network derived from established classification systems. Primarily, SAM supports the Aachen emedia skills lab, but SAM is ready for indexing distributed content, and the Simple Knowledge Organizing System standard provides a means for easily upgrading or even exchanging SAM’s semantic network. There is a lack of research addressing the usability of MLO indexes or search portals like SAM and the user behavior with such platforms. Objective The purpose of this study was to assess the usability of SAM by investigating characteristic user behavior of medical students accessing MLOs via SAM. Methods In this study, we chose a mixed-methods approach. Lean usability testing was combined with usability inspection by having the participants complete four typical usage scenarios before filling out a questionnaire. The questionnaire was based on the IsoMetrics usability inventory. Direct user interaction with SAM (mouse clicks and pages accessed) was logged. Results The study analyzed the typical usage patterns and habits of students using a semantic network for accessing MLOs. Four scenarios capturing characteristics of typical tasks to be solved by using SAM yielded high ratings of usability items and showed good results concerning the consistency of indexing by different users. Long-tail phenomena emerged, as is typical for a collaborative Web 2.0 platform. Suitable but nonetheless rarely used keywords were assigned to MLOs by some users. Conclusions It is possible to develop a Web-based tool with high usability and acceptance for indexing and retrieval of MLOs. SAM can be applied to indexing multicentered repositories of MLOs collaboratively. PMID:27731860
Semantic Indexing of Medical Learning Objects: Medical Students' Usage of a Semantic Network.
Tix, Nadine; Gießler, Paul; Ohnesorge-Radtke, Ursula; Spreckelsen, Cord
2015-11-11
The Semantically Annotated Media (SAM) project aims to provide a flexible platform for searching, browsing, and indexing medical learning objects (MLOs) based on a semantic network derived from established classification systems. Primarily, SAM supports the Aachen emedia skills lab, but SAM is ready for indexing distributed content, and the Simple Knowledge Organizing System standard provides a means for easily upgrading or even exchanging SAM's semantic network. There is a lack of research addressing the usability of MLO indexes or search portals like SAM and the user behavior with such platforms. The purpose of this study was to assess the usability of SAM by investigating characteristic user behavior of medical students accessing MLOs via SAM. In this study, we chose a mixed-methods approach. Lean usability testing was combined with usability inspection by having the participants complete four typical usage scenarios before filling out a questionnaire. The questionnaire was based on the IsoMetrics usability inventory. Direct user interaction with SAM (mouse clicks and pages accessed) was logged. The study analyzed the typical usage patterns and habits of students using a semantic network for accessing MLOs. Four scenarios capturing characteristics of typical tasks to be solved by using SAM yielded high ratings of usability items and showed good results concerning the consistency of indexing by different users. Long-tail phenomena emerged, as is typical for a collaborative Web 2.0 platform. Suitable but nonetheless rarely used keywords were assigned to MLOs by some users. It is possible to develop a Web-based tool with high usability and acceptance for indexing and retrieval of MLOs. SAM can be applied to indexing multicentered repositories of MLOs collaboratively.
The physical oceanography of upwelling systems and the development of harmful algal blooms
Pitcher, G.C.; Figueiras, F.G.; Hickey, B.M.; Moita, M.T.
2011-01-01
The upwelling systems of the eastern boundaries of the world’s oceans are susceptible to harmful algal blooms (HABs) because they are highly productive, nutrient-rich environments prone to high-biomass blooms. This review identifies the aspects of the physical environment important in the development of HABs in upwelling systems through description and comparison of bloom events in the Benguela, California and Iberia systems. HAB development is dictated by the influence of wind stress on the surface boundary layer through a combination of its influence on surface mixed-layer characteristics and shelf circulation patterns. The timing of HABs is controlled by wind-stress fluctuations and buoyancy inputs at the seasonal, event and interannual scales. Within this temporal framework, various mesoscale features that interrupt typical upwelling circulation patterns determine the spatial distribution of HABs. The inner shelf in particular provides a mosaic of shifting habitats, some of which favour HABs. Changes in coastline configuration and orientation, and in bottom topography, are important in determining the distribution of HABs through their influence on water stratification and retention. A spectrum of coastline configurations, including headlands, capes, peninsulas, Rías, bays and estuaries, representing systems of increasing isolation from the open coast and consequently increasing retention times, is assessed in terms of vulnerability to HABs. PMID:22053120
Two-layer wireless distributed sensor/control network based on RF
NASA Astrophysics Data System (ADS)
Feng, Li; Lin, Yuchi; Zhou, Jingjing; Dong, Guimei; Xia, Guisuo
2006-11-01
A project for an embedded Wireless Distributed Sensor/Control Network (WDSCN) based on RF is presented after analyzing the disadvantages of traditional measurement and control systems. Because of their high cost and complexity, wireless techniques such as Bluetooth and WiFi cannot meet the needs of a WDSCN. The two-layer WDSCN is designed on an RF technique that operates in the ISM free frequency channel with low power and high transmission speed. The network is also low-cost, portable and movable, integrating technologies from computer networking, sensors, microprocessors and wireless communications. A two-layer network topology is selected for the system, and a simple but efficient self-organizing network protocol is designed to support periodic data collection, event-driven operation and store-and-forward transfer. Furthermore, an adaptive frequency-hopping technique is adopted to improve resistance to jamming. The problems of power reduction and data synchronization in the wireless system are solved efficiently. On this basis, a measurement and control network is set up to control typical instruments and sensors, such as temperature sensors and signal converters, collect data, and monitor surrounding environmental parameters. The system works well across different rooms. Experimental results show that the system provides an efficient solution for a WDSCN over wireless links, with high efficiency, low power, high stability, flexibility and wide working range.
A TCP/IP framework for ethernet-based measurement, control and experiment data distribution
NASA Astrophysics Data System (ADS)
Ocaya, R. O.; Minny, J.
2010-11-01
A complete, modular but scalable TCP/IP-based scientific instrument control and data distribution system has been designed and realized. The system features an IEEE 802.3-compliant 10 Mbps Medium Access Controller (MAC) and Physical Layer Device suitable for full-duplex monitoring and control of various physically widespread measurement transducers in the presence of a local network infrastructure. The cumbersomeness of exchanging and synchronizing data between the various transducer units using physical storage media led to the choice of TCP/IP as a logical alternative. The system and methods developed are scalable for broader usage over the Internet. The system comprises hardware based on the PIC18F2620 microcontroller and ENC28J60 Ethernet controller, together with software components written in C, Java/JavaScript and Visual Basic .NET for event-level monitoring and browser user interfaces, respectively. The system exchanges data with the host network through IPv4 packets requested and received on an HTTP page. It also responds to ICMP echo, UDP and ARP requests through a user-selectable integrated DHCP and static IPv4 address allocation scheme. The round-trip time, throughput and polling frequency are estimated and reported. A typical application to temperature monitoring and logging is also presented.
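The round-trip-time estimate mentioned above amounts to timing a request/response poll over TCP. A minimal sketch in which a loopback echo server stands in for the instrument node; the payload, port handling, and names are illustrative, not the paper's protocol:

```python
import socket
import threading
import time

def echo_once(srv):
    """Accept one connection and echo the poll back (stand-in node)."""
    conn, _ = srv.accept()
    with conn:
        conn.sendall(conn.recv(64))

def poll_rtt(host, port, payload=b"GET /temp"):
    """Send one poll and time the round trip."""
    t0 = time.perf_counter()
    with socket.create_connection((host, port)) as s:
        s.sendall(payload)
        reply = s.recv(64)
    return time.perf_counter() - t0, reply

srv = socket.socket()
srv.bind(("127.0.0.1", 0))              # ephemeral loopback port
srv.listen(1)
threading.Thread(target=echo_once, args=(srv,), daemon=True).start()
rtt, reply = poll_rtt("127.0.0.1", srv.getsockname()[1])
print(reply, f"{rtt * 1e3:.2f} ms")
```

Against a real microcontroller node the same timing loop would be run repeatedly and averaged, since single-shot RTTs are noisy.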
Sims, Lee B; Huss, Maya K; Frieboes, Hermann B; Steinbach-Rankins, Jill M
2017-10-05
Advanced-stage cancer treatments are often invasive and painful, typically comprising surgery, chemotherapy, and/or radiation treatment. Low transport efficiency during systemic chemotherapy may require high chemotherapeutic doses to effectively target cancerous tissue, resulting in systemic toxicity. Nanotherapeutic platforms have been proposed as an alternative to more safely and effectively deliver therapeutic agents directly to tumor sites. However, cellular internalization and tumor penetration are often diametrically opposed, with limited access to tumor regions distal from vasculature due to irregular tissue morphologies. To address these transport challenges, nanoparticles (NPs) are often surface-modified with ligands to enhance transport and longevity after localized or systemic administration. Here, we evaluate stealth polyethylene glycol (PEG), cell-penetrating peptide (MPG), and CPP-stealth (MPG/PEG) poly(lactic-co-glycolic acid) (PLGA) NP co-treatment strategies in 3D cell culture representing hypo-vascularized tissue. Smaller, more regularly shaped avascular tissue was generated using the hanging drop (HD) method, while more irregularly shaped masses were formed with the liquid overlay (LO) technique. To compare NP distribution differences within the same type of tissue as a function of cancer type, we selected HeLa, cervical epithelial adenocarcinoma cells; CaSki, cervical epidermoid carcinoma cells; and SiHa, grade II cervical squamous cell carcinoma cells. In HD tumors, enhanced distribution relative to unmodified NPs was measured for MPG and PEG NPs in HeLa, and for all modified NPs in SiHa spheroids. In LO tumors, the greatest distribution was observed for MPG and MPG/PEG NPs in HeLa, and for PEG and MPG/PEG NPs in SiHa spheroids. Pre-clinical evaluation of modified PLGA NP distribution into hypo-vascularized tumor tissue may benefit from considering tissue morphology in addition to cancer type.
NASA Technical Reports Server (NTRS)
Uthe, Edward E.; Nielsen, Norman B.; Livingston, John M.
1992-01-01
The 1990 Clean Air Act Amendments mandated attainment of the ozone standard established by the U.S. Environmental Protection Agency. Improved photochemical models validated by experimental data are needed to develop strategies for reducing near surface ozone concentrations downwind of urban and industrial centers. For more than 10 years, lidar has been used on large aircraft to provide unique information on ozone distributions in the atmosphere. However, compact airborne lidar systems are needed for operation on small aircraft of the type typically used on regional air quality investigations to collect data with which to develop and validate air quality models. Data presented in this paper will consist of a comparison between airborne differential absorption lidar (DIAL) and airborne in-situ ozone measurements. Also discussed are future plans to improve the airborne ultraviolet-DIAL for ozone and other gas observations and addition of a Fourier Transform Infrared (FTIR) emission spectrometer to investigate the effects of other gas species on vertical ozone distribution.
NASA Technical Reports Server (NTRS)
Morris, Robert A.
1990-01-01
The emphasis is on defining a set of communicating processes for intelligent spacecraft secondary power distribution and control. The computer hardware and software implementation platform for this work is that of the ADEPTS project at the Johnson Space Center (JSC). The electrical power system design used as the basis for this research is that of Space Station Freedom, although the functionality of the processes defined here generalizes to any permanent manned space power control application. First, the Space Station Electrical Power Subsystem (EPS) hardware to be monitored is described, followed by a set of scenarios describing typical monitoring and control activity. Then, the parallel distributed problem-solving approach to knowledge engineering is introduced. There follows a two-step presentation of the intelligent software design for secondary power control. The first step decomposes the problem of monitoring and control into three primary functions. Each of the primary functions is described in detail. Suggestions for refinements and embellishments in the design specifications are given.
Modelling radionuclide transport in fractured media with a dynamic update of Kd values
Trinchero, Paolo; Painter, Scott L.; Ebrahimi, Hedieh; ...
2015-10-13
Radionuclide transport in fractured crystalline rocks is a process of interest in evaluating the long-term safety of potential disposal systems for radioactive wastes. Given their numerical efficiency and the absence of numerical dispersion, Lagrangian methods (e.g. particle tracking algorithms) are appealing approaches that are often used in safety assessment (SA) analyses. In these approaches, many complex geochemical retention processes are typically lumped into a single parameter: the distribution coefficient (Kd). Usually, the distribution coefficient is assumed to be constant over the time frame of interest. However, this assumption could be critical under long-term geochemical changes, as it is demonstrated that the distribution coefficient depends on the background chemical conditions (e.g. pH, Eh, and major chemistry). In this study, we provide a computational framework that combines the efficiency of Lagrangian methods with a sound and explicit description of the geochemical changes of the site and their influence on the radionuclide retention properties.
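One common way a distribution coefficient enters a Lagrangian step is through a linear-equilibrium retardation factor, R = 1 + (rho_b / theta) * Kd, which scales each segment's advective travel time. A sketch of the dynamic-update idea, with Kd re-evaluated as the particle clock advances; all parameter values and names are illustrative, not the paper's site data:

```python
def retardation(kd, rho_b=2650.0, theta=0.1):
    """Linear-equilibrium retardation.
    rho_b: bulk density (kg/m3); theta: porosity; kd: m3/kg."""
    return 1.0 + (rho_b / theta) * kd

def travel_time(segments, kd_of_t):
    """segments: advective travel times (yr) along a particle path;
    kd_of_t: Kd as a function of elapsed time, re-evaluated at the
    start of each segment (the 'dynamic update')."""
    t = 0.0
    for tau in segments:
        t += tau * retardation(kd_of_t(t))
    return t

# Non-sorbing tracer: Kd = 0, so travel time equals the advective time.
print(travel_time([1.0, 2.0], lambda t: 0.0))  # → 3.0
```

With a constant Kd this reduces to the usual fixed-Kd safety-assessment treatment; the time-dependent callable is what lets the retardation track evolving pH, Eh, and major-ion chemistry.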
Controls on the distribution of alkylphenols and BTEX in oilfield waters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dale, J.D.; Aplin, A.C.; Larter, S.R.
1996-10-01
Controls on the abundance of alkylphenols and BTEX in oilfield waters are poorly understood, but are important because these species are the main dissolved pollutants in produced waters and may also be used as indicators of both the proximity and migration range of petroleum. Using (a) measurements of alkylphenols and BTEX in oilfield waters and associated petroleums, and (b) oil/water partition coefficients under subsurface conditions, we conclude that: (1) The distribution of alkylphenols and BTEX in formation waters is controlled by partition equilibrium with petroleum. Phenol and benzene typically account for 50% of total phenols and total BTEX, respectively. (2) The concentrations of alkylphenols and BTEX in produced waters equilibrated with oil in reservoirs or in separator systems vary predictably as a function of pressure, temperature and salinity. This suggests that oil/water partitioning is the primary control on the distribution of alkylphenols and BTEX in oilfield waters, and that other processes, such as hydrolysis at the oil-water contact, are secondary.
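The partition-equilibrium control described above is a one-line relation: at equilibrium the water-phase concentration follows from the oil-phase concentration and an oil/water partition coefficient, C_w = C_o / K_ow (with K_ow itself varying with pressure, temperature, and salinity). The coefficients and concentrations below are illustrative, not the paper's measured values:

```python
def water_concentration(c_oil, k_ow):
    """Equilibrium water-phase concentration C_w = C_o / K_ow.
    c_oil: mg/L in the oil phase; k_ow: oil/water partition coeff."""
    return c_oil / k_ow

# Hypothetical oil-phase concentrations (mg/L) and partition coefficients.
species = {"phenol": (150.0, 1.2), "benzene": (1800.0, 120.0)}
produced_water = {name: round(water_concentration(c, k), 2)
                  for name, (c, k) in species.items()}
print(produced_water)  # → {'phenol': 125.0, 'benzene': 15.0}
```

The qualitative point survives the made-up numbers: species with low K_ow (phenols) partition strongly into the water even when their oil-phase concentration is modest.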
Effects of spatial and temporal resolution on simulated feedbacks from polygonal tundra.
NASA Astrophysics Data System (ADS)
Coon, E.; Atchley, A. L.; Painter, S. L.; Karra, S.; Moulton, J. D.; Wilson, C. J.; Liljedahl, A.
2014-12-01
Earth system land models typically resolve permafrost regions at spatial resolutions grossly larger than the scales of topographic variation. This observation leads to two critical questions: How much error is introduced by this lack of resolution, and what is the effect of this approximation on other coupled components of the Earth system, notably the energy balance and carbon cycle? Here we use the Arctic Terrestrial Simulator (ATS) to run micro-topography resolving simulations of polygonal ground, driven by meteorological data from Barrow, AK, to address these questions. ATS couples surface and subsurface processes, including thermal hydrology, surface energy balance, and a snow model. Comparisons are made between one-dimensional "column model" simulations (similar to, for instance, CLM or other land models typically used in Earth System models) and higher-dimensional simulations which resolve micro-topography, allowing for distributed surface runoff, horizontal flow in the subsurface, and uneven snow distribution. Additionally, we drive models with meteorological data averaged over different time scales from daily to weekly moving windows. In each case, we compare fluxes important to the surface energy balance including albedo, latent and sensible heat fluxes, and land-to-atmosphere long-wave radiation. Results indicate that spatial topography variation and temporal variability are important in several ways. Snow distribution greatly affects the surface energy balance, fundamentally changing the partitioning of incoming solar radiation between the subsurface and the atmosphere. This has significant effects on soil moisture and temperature, with implications for vegetation and decomposition. Resolving temporal variability is especially important in spring, when early warm days can alter the onset of snowmelt by days to weeks. We show that high-resolution simulations are valuable in evaluating current land models, especially in areas of polygonal ground. 
This work was supported by LANL Laboratory Directed Research and Development Project LDRD201200068DR and by the Next-Generation Ecosystem Experiments (NGEE Arctic) project. NGEE-Arctic is supported by the Office of Biological and Environmental Research in the DOE Office of Science. LA-UR-14-26227.
Microscale air quality impacts of distributed power generation facilities.
Olaguer, Eduardo P; Knipping, Eladio; Shaw, Stephanie; Ravindran, Satish
2016-08-01
The electric system is experiencing rapid growth in the adoption of a mix of distributed renewable and fossil fuel sources, along with increasing amounts of off-grid generation. New operational regimes may have unforeseen consequences for air quality. A three-dimensional microscale chemical transport model (CTM) driven by an urban wind model was used to assess gaseous air pollutant and particulate matter (PM) impacts within ~10 km of fossil-fueled distributed power generation (DG) facilities during the early afternoon of a typical summer day in Houston, TX. Three types of DG scenarios were considered in the presence of motor vehicle emissions and a realistic urban canopy: (1) a 25-MW natural gas turbine operating at steady state in either simple cycle or combined heating and power (CHP) mode; (2) a 25-MW simple cycle gas turbine undergoing a cold startup with either moderate or enhanced formaldehyde emissions; and (3) a data center generating 10 MW of emergency power with either diesel or natural gas-fired backup generators (BUGs) without pollution controls. Simulations of criteria pollutants (NO2, CO, O3, PM) and the toxic pollutant formaldehyde (HCHO) were conducted assuming a 2-hr operational time period. In all cases, NOx titration dominated ozone production near the source. The turbine scenarios did not result in ambient concentration enhancements significantly exceeding 1 ppbv for gaseous pollutants or 1 µg/m³ for PM after 2 hr of emission, assuming realistic plume rise. In the case of the data center with diesel BUGs, ambient NO2 concentrations were enhanced by 10-50 ppbv within 2 km downwind of the source, while maximum PM impacts in the immediate vicinity of the data center were less than 5 µg/m³.
Plausible scenarios of distributed fossil generation consistent with the electricity grid's transformation to a more flexible and modernized system suggest that a substantial amount of deployment would be required to significantly affect air quality on a localized scale. In particular, natural gas turbines typically used in distributed generation may have minor effects. Large banks of diesel backup generators such as those used by data centers, on the other hand, may require pollution controls or conversion to natural gas-fired reciprocating internal combustion engines to decrease nitrogen dioxide pollution.
Rainwater harvesting in the South American Dry Chaco
NASA Astrophysics Data System (ADS)
Magliano, P. N.; Baldi, G.; Murray, F.; Aurand, S.; Paez, R. A.; Jobbagy, E. G.
2014-12-01
A vast fraction of the South American Dry Chaco ecoregion still relies on rainwater harvesting (RWH) to support not only livestock production, but domestic and industrial uses as well. As a result, water capture and storage infrastructure is widely disseminated throughout the region. In this work we characterized the most typical RWH systems in two contrastingly developed sub-regions of the Dry Chaco, ranging from extensive ranching to intensive beef and dairy production (central Argentina and western Paraguay, respectively). In each sub-region, we quantified RWH density, spatial distribution, and associations with landscape features; in addition, we illustrated how the daily dynamics of the water stock in a typical RWH system help to assess its capture and storage efficiency. We found that randomly distributed, low-tech RWH systems prevail in central Argentina, while clustered, high-tech systems prevail in western Paraguay. Their density was highly contrasting between sub-regions (0.098 vs. 0.94 units/km² in central Argentina and western Paraguay, respectively), being exponentially associated with the cleared land fraction and proximity to villages. The daily monitoring of water level suggested a positive but complex response of water capture to precipitation. The elongated catchment area, created by roads and trails, could have partially decoupled local precipitation and the water yield of the impoundment, favouring the capture of remote precipitation events and generating highly variable water yield under large local precipitation events. Once stored, the rates of water level decline suggested that infiltration exceeded evaporation as a water output pathway (59 vs. 41%, respectively, of total losses).
Across both study areas, RWH accounts for less than 1% of the annual precipitation, playing a minor role in the regional water balance; however, at a local level, these systems can affect several hydrological fluxes, including the onset of groundwater recharge and the mitigation of extreme runoff events along roads and trails.
Zhou, Xinyan; Zhang, Kejia; Zhang, Tuqiao; Li, Cong; Mao, Xinwei
2017-05-01
It is important for water utilities to provide esthetically acceptable drinking water to the public, because consumers initially judge the quality of tap water by its color, taste, and odor (T&O). Microorganisms in drinking water contribute largely to T&O production, and drinking water distribution systems (DWDS) are known to harbor biofilms and microorganisms in bulk water, even in the presence of a disinfectant. These microbes include T&O-causing bacteria, fungi, and algae, which may adversely affect the organoleptic quality of distributed water. An understanding of the types of these microbes and the mechanisms by which they produce T&O compounds is therefore needed to prevent T&O formation during drinking water distribution. Additionally, new disinfection strategies and DWDS operation methods are needed for better control of T&O problems in drinking water. This review covers: (1) the microbial species which can produce T&O compounds in DWDS; (2) typical T&O compounds in DWDS and their formation mechanisms by microorganisms; (3) several common factors in DWDS which can influence the growth and T&O generation of microbes; and (4) several strategies to control biofilm and T&O compound formation in DWDS. The review closes with recommendations based on these conclusions.
Automatic Tools for Enhancing the Collaborative Experience in Large Projects
NASA Astrophysics Data System (ADS)
Bourilkov, D.; Rodriquez, J. L.
2014-06-01
With the explosion of big data in many fields, the efficient management of knowledge about all aspects of the data analysis gains in importance. A key feature of collaboration in large scale projects is keeping a log of what is being done and how - for private use, reuse, and for sharing selected parts with collaborators and peers, often distributed geographically on an increasingly global scale. It is even better if the log is created automatically, on the fly, while the scientist or software developer works in a habitual way, without the need for extra effort. This saves time and enables a team to do more with the same resources. The CODESH (COllaborative DEvelopment SHell) and CAVES (Collaborative Analysis Versioning Environment System) projects address this problem in a novel way. They build on the concepts of virtual states and transitions to enhance the collaborative experience by providing automatic persistent virtual logbooks. CAVES is designed for sessions of distributed data analysis using the popular ROOT framework, while CODESH generalizes the approach for any type of work on the command line in typical UNIX shells like bash or tcsh. Repositories of sessions can be configured dynamically to record and make available the knowledge accumulated in the course of a scientific or software endeavor. Access can be controlled to define logbooks of private sessions or sessions shared within or between collaborating groups. A typical use case is building working scalable systems for analysis of Petascale volumes of data as encountered in the LHC experiments. Our approach is general enough to find applications in many fields.
Contamination of New Jersey beach sand with magnetite spherules from industrial air pollution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hassinan, W.T.; Puffer, J.H.
1992-01-01
Spherical particles composed of magnetite, typically 120 μm to 2,450 μm in diameter, are accumulating in the beach sands of New Jersey. Most magnetite spherule surfaces are highly polished but some are corroded or abraded. Their interiors are typically vesicular. Magnetite spherules from 213 New Jersey beach sand samples collected during May 1991 are chemically and morphologically the same as those filtered from industrial smokestacks and the air supplies of Newark, New Jersey, and Philadelphia. The average concentration of spherules in New Jersey beach sand is 35 per kg throughout the northern 43 km of beach south of Newark (from Sandy Hook to Belmar Beach). They are rare to absent in the central 86 km stretch of beach but average 34 per kg of sand throughout the southern 91 km of beach east of Philadelphia (from Ventnor City to Villas Beach). The distribution of magnetite spherules in New Jersey beach sand is consistent with a transport pathway model that involves: (1) prevailing wind dispersal from industrial sources, (2) erosion of spherules that have settled out of the air into the surface drainage system that flows toward the New Jersey coast, and (3) longshore transport of spherule-contaminated sand away from inlets identified as locations where most of the spherules enter the beach system. The spherules, therefore, are useful tracers indicating how industrial airborne fallout is transported to and along shorelines. The distribution pattern is consistent with generally northward longshore currents north of the Manasquan inlet and generally southward longshore currents south of the Absecon inlet.
NASA Astrophysics Data System (ADS)
Fu, W. J.; Jiang, P. K.; Zhou, G. M.; Zhao, K. L.
2013-12-01
The spatial variation of forest litter carbon (FLC) density in typical subtropical forests of southeast China was investigated using Moran's I, geostatistics, and a geographical information system (GIS). A total of 839 forest litter samples were collected based on a 12 km (South-North) × 6 km (East-West) grid system in Zhejiang Province. Forest litter carbon density values were highly variable, ranging from 10.2 kg ha⁻¹ to 8841.3 kg ha⁻¹, with an average of 1786.7 kg ha⁻¹. The aboveground biomass had the strongest positive correlation with FLC density, followed by forest age and elevation. Global Moran's I revealed that FLC density had significant positive spatial autocorrelation, and clear spatial patterns were observed using local Moran's I. A spherical model was chosen to fit the experimental semivariogram. The moderate nugget-to-sill ratio (0.536) revealed that both natural and anthropogenic factors played a key role in the spatial heterogeneity of FLC density. High FLC density values were mainly distributed in the northwestern and western parts of Zhejiang Province, reflecting the long-term forest conservation policies adopted in these areas, whereas the Hang-Jia-Hu (HJH) Plain, Jin-Qu (JQ) basin, and coastal areas had low FLC density due to low forest coverage and intensive management of economic forests. These spatial patterns in the distribution map were in line with the spatial-cluster map derived from local Moran's I. Therefore, Moran's I, combined with geostatistics and GIS, can be used to study spatial patterns of environmental variables related to forest ecosystems.
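The spherical semivariogram model mentioned above has a standard closed form. A minimal sketch follows, using the reported nugget-to-sill ratio of 0.536 only as an illustrative input (the sill is normalized to 1 and the range value is hypothetical):

```python
# Spherical semivariogram model gamma(h):
#   nugget + (sill - nugget) * (1.5 h/a - 0.5 (h/a)^3)  for h < a (range),
#   sill                                                 for h >= a.

def spherical_variogram(h, nugget, sill, rng):
    """Modeled semivariance at lag distance h."""
    if h >= rng:
        return sill
    x = h / rng
    return nugget + (sill - nugget) * (1.5 * x - 0.5 * x ** 3)
```

The nugget-to-sill ratio (here 0.536/1.0) quantifies how much of the total variance is short-range or unexplained, which is why a moderate value is read as a mix of natural and anthropogenic controls.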
Multiscale implementation of infinite-swap replica exchange molecular dynamics.
Yu, Tang-Qing; Lu, Jianfeng; Abrams, Cameron F; Vanden-Eijnden, Eric
2016-10-18
Replica exchange molecular dynamics (REMD) is a popular method to accelerate conformational sampling of complex molecular systems. The idea is to run several replicas of the system in parallel at different temperatures that are swapped periodically. These swaps are typically attempted every few MD steps and accepted or rejected according to a Metropolis-Hastings criterion. This guarantees that the joint distribution of the composite system of replicas is the normalized sum of the symmetrized product of the canonical distributions of these replicas at the different temperatures. Here we propose a different implementation of REMD in which (i) the swaps obey a continuous-time Markov jump process implemented via Gillespie's stochastic simulation algorithm (SSA), which also samples exactly the aforementioned joint distribution and has the advantage of being rejection free, and (ii) this REMD-SSA is combined with the heterogeneous multiscale method to accelerate the rate of the swaps and reach the so-called infinite-swap limit that is known to optimize sampling efficiency. The method is easy to implement and can be trivially parallelized. Here we illustrate its accuracy and efficiency on the examples of alanine dipeptide in vacuum and C-terminal β-hairpin of protein G in explicit solvent. In this latter example, our results indicate that the landscape of the protein is a triple funnel with two folded structures and one misfolded structure that are stabilized by H-bonds.
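For contrast with the rejection-free SSA scheme proposed in this work, the conventional Metropolis-Hastings swap criterion used by standard REMD can be sketched in a few lines (a generic textbook form, not code from the paper):

```python
import math

# Conventional Metropolis-Hastings acceptance for swapping two replicas
# held at inverse temperatures beta_i and beta_j with instantaneous
# potential energies E_i and E_j.

def swap_acceptance(beta_i, beta_j, energy_i, energy_j):
    """min(1, exp[(beta_i - beta_j) * (E_i - E_j)])."""
    delta = (beta_i - beta_j) * (energy_i - energy_j)
    return min(1.0, math.exp(delta))
```

Swaps attempted with this rule are sometimes rejected; the continuous-time Markov jump process implemented via Gillespie's SSA samples the same joint distribution with no rejections, which is what enables approaching the infinite-swap limit.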
McGann, Mary; Erikson, Li H.; Wan, Elmira; Powell, Charles; Maddocks, Rosalie F.; Barnard, P.L.; Jaffee, B.E.; Schoellhamer, D.H.
2013-01-01
Although conventional sediment parameters (mean grain size, sorting, and skewness) and provenance have typically been used to infer sediment transport pathways, most freshwater, brackish, and marine environments are also characterized by abundant sediment constituents of biological, and possibly anthropogenic and volcanic, origin that can provide additional insight into local sedimentary processes. The biota will be spatially distributed according to its response to environmental parameters such as water temperature, salinity, dissolved oxygen, organic carbon content, grain size, and intensity of currents and tidal flow, whereas the presence of anthropogenic and volcanic constituents will reflect proximity to source areas and whether they are fluvially or aerially transported. Because each of these constituents has a unique environmental signature, it is a more precise proxy for its source area than the conventional sedimentary process indicators. This San Francisco Bay Coastal System study demonstrates that by applying a multi-proxy approach, the primary sites of sediment transport can be identified. Many of these sites are far from where the constituents originated, showing that sediment transport is widespread in the region. Although not often used, identifying and interpreting the distribution of naturally occurring and allochthonous biologic, anthropogenic, and volcanic sediment constituents is a powerful tool to aid in the investigation of sediment transport pathways in other coastal systems.
Canine spontaneous glioma: A translational model system for convection-enhanced delivery
Dickinson, Peter J.; LeCouteur, Richard A.; Higgins, Robert J.; Bringas, John R.; Larson, Richard F.; Yamashita, Yoji; Krauze, Michal T.; Forsayeth, John; Noble, Charles O.; Drummond, Daryl C.; Kirpotin, Dmitri B.; Park, John W.; Berger, Mitchel S.; Bankiewicz, Krystof S.
2010-01-01
Canine spontaneous intracranial tumors bear striking similarities to their human tumor counterparts and have the potential to provide a large animal model system for more realistic validation of novel therapies typically developed in small rodent models. We used spontaneously occurring canine gliomas to investigate the use of convection-enhanced delivery (CED) of liposomal nanoparticles, containing topoisomerase inhibitor CPT-11. To facilitate visualization of intratumoral infusions by real-time magnetic resonance imaging (MRI), we included identically formulated liposomes loaded with Gadoteridol. Real-time MRI defined distribution of infusate within both tumor and normal brain tissues. The most important limiting factor for volume of distribution within tumor tissue was the leakage of infusate into ventricular or subarachnoid spaces. Decreased tumor volume, tumor necrosis, and modulation of tumor phenotype correlated with volume of distribution of infusate (Vd), infusion location, and leakage as determined by real-time MRI and histopathology. This study demonstrates the potential for canine spontaneous gliomas as a model system for the validation and development of novel therapeutic strategies for human brain tumors. Data obtained from infusions monitored in real time in a large, spontaneous tumor may provide information, allowing more accurate prediction and optimization of infusion parameters. Variability in Vd between tumors strongly suggests that real-time imaging should be an essential component of CED therapeutic trials to allow minimization of inappropriate infusions and accurate assessment of clinical outcomes. PMID:20488958
Palmer, Antony L; Lee, Chris; Ratcliffe, Ailsa J; Bradley, David; Nisbet, Andrew
2013-10-07
A novel phantom is presented for 'full system' dosimetric audit comparing planned and delivered dose distributions in HDR gynaecological brachytherapy, using clinical treatment applicators. The brachytherapy applicator dosimetry test object consists of a near full-scatter water tank with applicator and film supports constructed of Solid Water, accommodating any typical cervix applicator. Film dosimeters are precisely held in four orthogonal planes bisecting the intrauterine tube, sampling dose distributions in the high-risk clinical target volume, points A and B, bladder, rectum and sigmoid. The applicator position is fixed prior to CT scanning and through treatment planning and irradiation. The CT data are acquired with the applicator in a near-clinical orientation so that applicator reconstruction is included in the system test. Gamma analysis is used to compare the RTDose grid exported from the treatment planning system with the measured multi-channel film dose maps. Results from two pilot audits are presented, using Ir-192 and Co-60 HDR sources, with a mean gamma passing rate of 98.6% using criteria of 3% local normalization and 3 mm distance to agreement (DTA). The mean DTA between prescribed dose and measured film dose at point A was 1.2 mm. The phantom was funded by IPEM and will be used for a UK national brachytherapy dosimetry audit.
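The gamma comparison used in such audits combines a dose-difference tolerance with a distance-to-agreement tolerance. A simplified one-dimensional sketch of the gamma index follows (illustrative only; clinical implementations search full 2D/3D dose grids around each reference point):

```python
import math

# Simplified 1-D gamma index: for one reference point, take the minimum
# over evaluated dose points of the combined dose-difference /
# distance-to-agreement metric. The reference point passes if gamma <= 1.

def gamma_index(ref_pos, ref_dose, eval_points, dose_tol, dta_tol):
    """eval_points is a list of (position_mm, dose) pairs."""
    return min(
        math.sqrt(((pos - ref_pos) / dta_tol) ** 2
                  + ((dose - ref_dose) / dose_tol) ** 2)
        for pos, dose in eval_points)
```

With 3%/3 mm criteria, a point that agrees in dose but is displaced by exactly 3 mm, or agrees in position but differs by exactly 3% of dose, sits right at the passing threshold of gamma = 1.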
NASA Astrophysics Data System (ADS)
Redfern, Andrew; Koplow, Michael; Wright, Paul
2007-01-01
Most residential heating, ventilating, and air-conditioning (HVAC) systems use a single zone for conditioning air throughout the entire house. While inexpensive, these systems lead to wide temperature distributions and inefficient cooling due to the difference in thermal loads in different rooms. The end result is additional cost to the end user because the house is over-conditioned. To reduce the total amount of energy used in a home and to increase occupant comfort, there is a need for a better control system using multiple temperature zones. Typical multi-zone systems are costly and require extensive infrastructure. Recent advances in wireless sensor networks (WSNs) have enabled a low-cost, drop-in wireless vent register control system. The register control system is managed by a master controller unit, which collects sensor data from a distributed wireless sensor network. Each sensor node samples local conditions (occupancy, light, humidity, and temperature) and reports the data back to the master control unit. The master control unit compiles the incoming data and then actuates the vent registers to control the airflow throughout the house. The control system also uses a smart thermostat with a movable set point to let users define their comfort levels. The new system can reduce the run time of the HVAC system, thus decreasing the amount of energy used and increasing the comfort of the home's occupants.
NASA Astrophysics Data System (ADS)
Zakhozhay, Olga V.
2017-04-01
We study the possibility of detecting signatures of brown dwarf companions in a circumstellar disc based on spectral energy distributions. We present the results of spectral energy distribution simulations for a system with a 0.8 M⊙ central object and a companion with a mass of 30 MJ (Jupiter masses) embedded in a typical protoplanetary disc. We use a solution of the one-dimensional radiative transfer equation to calculate the protoplanetary disc flux density and assume that the companion moves along a circular orbit and clears a gap. The width of the gap is assumed to be the diameter of the brown dwarf's Hill sphere. Our modelling shows that the presence of such a gap can introduce an additional minimum in the spectral energy distribution profile of a protoplanetary disc at λ = 10-100 μm. We found that it is possible to detect signatures of the companion when it is located within 10 AU, even when it is as small as 3 MJ. The spectral energy distribution of a protostellar disc with a massive fragment (of relatively cold temperature, 400 K) might have a similar double-peaked profile to the spectral energy distribution of a more evolved disc that contains a gap.
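The assumed gap width, the diameter of the companion's Hill sphere, follows from the standard Hill-radius formula r_H = a (m / 3M)^(1/3). A minimal sketch with the abstract's masses (the solar-to-Jupiter mass ratio below is an approximate constant):

```python
# Hill-sphere gap width for the system in the abstract: a 30 Jupiter-mass
# companion orbiting a 0.8 solar-mass star at 10 AU.

M_SUN_IN_MJUP = 1047.6  # approximate Jupiter masses per solar mass

def hill_radius(a_au, m_companion, m_central):
    """r_H = a * (m / (3 M))**(1/3), both masses in the same units."""
    return a_au * (m_companion / (3.0 * m_central)) ** (1.0 / 3.0)

# Gap width is taken as the Hill-sphere diameter
gap_width_au = 2.0 * hill_radius(10.0, 30.0, 0.8 * M_SUN_IN_MJUP)
```

For these values the gap is roughly 4.6 AU wide, large enough to carve a detectable deficit of warm dust emission in the 10-100 μm range.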
Forest Growth and Yield Models Viewed From a Different Perspective
Jeffery C. Goelz
2002-01-01
Typically, when different forms of growth and yield models are considered, they are grouped into convenient discrete classes. As a heuristic device, I chose to use a contrasting perspective, that all growth and yield models are diameter distribution models that merely differ in regard to which diameter distribution is employed and how the distribution is projected to...
Sebastian Martinuzzi; Lee A. Vierling; William A. Gould; Kerri T. Vierling; Andrew T. Hudak
2009-01-01
Remote sensing provides critical information for broad scale assessments of wildlife habitat distribution and conservation. However, such efforts have been typically unable to incorporate information about vegetation structure, a variable important for explaining the distribution of many wildlife species. We evaluated the consequences of incorporating remotely sensed...
60-Hz electric and magnetic fields generated by a distribution network.
Héroux, P
1987-01-01
From a mobile unit, 60-Hz electric and magnetic fields generated by Hydro-Québec's distribution network were measured. Nine runs, representative of various human environments, were investigated. Typical values were 32 V/m and 0.16 microT. The electrical distribution networks investigated were major contributors to the electric and magnetic environments.
Optical-CT 3D Dosimetry Using Fresnel Lenses with Minimal Refractive-Index Matching Fluid
Bache, Steven; Malcolm, Javian; Adamovics, John; Oldham, Mark
2016-01-01
Telecentric optical computed tomography (optical-CT) is a state-of-the-art method for visualizing and quantifying 3-dimensional dose distributions in radiochromic dosimeters. In this work a prototype telecentric system (DFOS, the Duke Fresnel Optical-CT Scanner) is evaluated which incorporates two substantial design changes: the use of Fresnel lenses (reducing lens costs from $10-30K to $1-3K) and the use of a 'solid tank' (which reduces noise, and the volume of refractively matched fluid from 1 L to 10 cc). The efficacy of DFOS was evaluated by direct comparison against commissioned scanners in our lab. Measured dose distributions from all systems were compared against the predicted dose distributions from a commissioned treatment planning system (TPS). Three treatment plans were investigated including a simple four-field box treatment, a multiple small field delivery, and a complex IMRT treatment. Dosimeters were imaged within 2 h post irradiation, using consistent scanning techniques (360 projections acquired at 1 degree intervals, reconstruction at 2 mm). DFOS efficacy was evaluated through inspection of dose line-profiles, and 2D and 3D dose and gamma maps. DFOS/TPS gamma pass rates with 3%/3mm dose difference/distance-to-agreement criteria ranged from 89.3% to 92.2%, compared to from 95.6% to 99.0% obtained with the commissioned system. The 3D gamma pass rate between the commissioned system and DFOS was 98.2%. The typical noise rates in DFOS reconstructions were up to 3%, compared to under 2% for the commissioned system. In conclusion, while the introduction of a solid tank proved advantageous with regards to cost and convenience, further work is required to improve the image quality and dose reconstruction accuracy of the new DFOS optical-CT system. PMID:27019460
Landscape distribution of desert cattle: effects of diet and vegetation type
USDA-ARS?s Scientific Manuscript database
Livestock with heritage genetics may increase chances for simultaneously achieving conservation and agricultural production goals in the American Southwest. Past research shows that compared with conventional Angus x Hereford crossbreds (AH), heritage Raramuri Criollo (RC) typically distribute thems...
A THREE-DIMENSIONAL MODEL ASSESSMENT OF THE GLOBAL DISTRIBUTION OF HEXACHLOROBENZENE
The distributions of persistent organic pollutants (POPs) in the global environment have typically been studied with box/fugacity models that use simplified treatments of atmospheric transport processes [1]. Such models are incapable of simulating the complex three-dimensional mechanis...
Larson, Felicity V; Lai, Meng-Chuan; Wagner, Adam P; Baron-Cohen, Simon; Holland, Anthony J
2015-01-01
Males and females in the general population differ, on average, in their drive for empathizing (higher in females) and systemizing (higher in males). People with autism spectrum disorder (ASD) show a drive for systemizing over empathizing, irrespective of sex, which led to the conceptualisation of ASD as an 'extreme of the typical male brain'. The opposite cognitive profile, an 'extreme of the typical female brain', has been proposed to be linked to conditions such as psychosis and mania/hypomania. We compared an empathizing-over-systemizing bias (for short 'empathizing bias') in individuals with ASD, who had experienced psychotic illness (N = 64) and who had not (N = 71). There were overall differences in the distribution of cognitive style. Adults with ASD who had experienced psychosis were more likely to show an empathizing bias than adults with ASD who had no history of psychosis. This was modulated by IQ, and the group-difference was driven mainly by individuals with above-average IQ. In women with ASD and psychosis, the link between mania/hypomania and an empathizing bias was greater than in men with ASD. The bias for empathizing over systemizing may be linked to the presence of psychosis in people with ASD. Further research is needed in a variety of clinical populations, to understand the role an empathizing bias may play in the development and manifestation of mental illness.
Interaction Support for Information Finding and Comparative Analysis in Online Video
ERIC Educational Resources Information Center
Xia, Jinyue
2017-01-01
Current online video interaction is typically designed with a focus on straightforward distribution and passive consumption of individual videos. This "click play, sit back and watch" context is typical of videos for entertainment. However, there are many task scenarios that require active engagement and analysis of video content as a…
NASA Astrophysics Data System (ADS)
Flores, Robert Joseph
Growing concerns over greenhouse gas and pollutant emissions have increased the pressure to shift energy conversion paradigms from current forms to more sustainable methods, such as through the use of distributed energy resources (DER) at industrial and commercial buildings. This dissertation is concerned with the optimal design and dispatch of a DER system installed at an industrial or commercial building. An optimization model that accurately captures typical utility costs and the physical constraints of a combined cooling, heating, and power (CCHP) system is designed to size and operate a DER system at a building. The optimization model is then used with cooperative game theory to evaluate the financial performance of a CCHP investment. The CCHP model is then modified to include energy storage, solar powered generators, alternative fuel sources, carbon emission limits, and building interactions with public and fleet PEVs. Then, a separate plugin electric vehicle (PEV) refueling model is developed to determine the cost to operate a public Level 3 fast charging station. The CCHP design and dispatch results show the size of the building load and consistency of the thermal loads are critical to positive financial performance. While using the CCHP system to produce cooling can provide savings, heat production drives positive financial performance. When designing the DER system to reduce carbon emissions, the use of renewable fuels can allow for a gas turbine system with heat recovery to reduce carbon emissions for a large university by 67%. Further reductions require large photovoltaic installations coupled with energy storage or the ability to export electricity back to the grid if costs are to remain relatively low. When considering Level 3 fast charging equipment, demand charges at low PEV travel levels are sufficiently high to discourage adoption. 
Integration of the equipment can reduce demand charge costs only if the building maximum demand does not coincide with PEV refueling. Electric vehicle refueling does not typically affect DER design at low PEV travel levels, but can do so as travel increases. At high PEV travel levels, the stochastic nature of PEV refueling disappears, and the optimization problem may become deterministic.
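The design-and-dispatch idea above can be sketched as a small linear program: choose, hour by hour, how much electricity to buy from the grid versus generate on-site with CHP. This is a minimal sketch only; the loads, tariffs, CHP capacity, and heat credit below are invented for illustration, and the dissertation's demand charges, storage, emissions limits, and PEV interactions are omitted.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical hourly electric load (kW) and tariffs; all numbers illustrative.
load = np.array([400.0, 650.0, 900.0, 700.0])    # four representative hours
price_grid = np.array([0.08, 0.12, 0.20, 0.12])  # $/kWh utility energy charge
cost_chp = 0.10     # $/kWh fuel cost of running the CHP prime mover
heat_credit = 0.03  # $/kWh of avoided boiler fuel when CHP heat is recovered
chp_cap = 500.0     # kW electrical capacity of the CHP unit

T = len(load)
# Decision vector x = [g_0..g_{T-1}, c_0..c_{T-1}]: grid purchase and CHP output.
cost = np.concatenate([price_grid, np.full(T, cost_chp - heat_credit)])
# Energy balance each hour: g_t + c_t = load_t
A_eq = np.hstack([np.eye(T), np.eye(T)])
b_eq = load
bounds = [(0, None)] * T + [(0, chp_cap)] * T

res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
g, c = res.x[:T], res.x[T:]
```

With these illustrative numbers the heat credit makes CHP cheaper than grid power in every hour, so the solver runs the unit at capacity during the peak and tops up from the grid, mirroring the finding that heat recovery drives positive financial performance.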
NASA Astrophysics Data System (ADS)
Behzadi, Naghi; Ahansaz, Bahram
2018-04-01
We propose a mechanism for quantum state transfer (QST) over a binary tree spin network on the basis of incomplete collapsing measurements. To this aim, we initially perform a weak measurement (WM) on the central qubit of the binary tree network, where the state of concern has been prepared on that qubit. After the time evolution of the whole system, a quantum measurement reversal (QMR) is performed on a chosen target qubit. By taking the optimal value for the strength of the QMR, it is shown that the QST quality from the sending qubit to any typical target qubit on the binary tree is considerably improved as a function of the WM strength. We also show how high-quality entanglement distribution over the binary tree network can be achieved with this approach.
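The WM/QMR idea can be illustrated on a single qubit, replacing the network's actual time evolution with an amplitude-damping channel as a stand-in for the noisy transfer. This is a sketch under that assumption, not the paper's spin-network calculation; the state, damping strength, and WM strength are illustrative.

```python
import numpy as np

def kraus_apply(rho, ks):
    """Apply a channel given by a list of Kraus operators."""
    return sum(K @ rho @ K.conj().T for K in ks)

# Input qubit state |psi> = a|0> + b|1> (illustrative choice).
a, b = 1 / np.sqrt(2), 1 / np.sqrt(2)
psi = np.array([a, b])
rho0 = np.outer(psi, psi.conj())

gamma = 0.5  # amplitude-damping strength standing in for the noisy evolution
p = 0.8      # weak-measurement (partial collapse) strength
q = 1 - (1 - p) * (1 - gamma)  # reversal strength optimal for this channel

damp = [np.array([[1, 0], [0, np.sqrt(1 - gamma)]]),
        np.array([[0, np.sqrt(gamma)], [0, 0]])]

# Unprotected transfer: damping only.
rho_plain = kraus_apply(rho0, damp)
f_plain = np.real(psi.conj() @ rho_plain @ psi)

# Protected transfer: WM (null outcome), damping, then QMR; renormalize
# because the scheme is probabilistic (it succeeds only on these outcomes).
wm = np.diag([1.0, np.sqrt(1 - p)])
qmr = np.diag([np.sqrt(1 - q), 1.0])
rho = wm @ rho0 @ wm.conj().T
rho = kraus_apply(rho, damp)
rho = qmr @ rho @ qmr.conj().T
rho_prot = rho / np.trace(rho)
f_prot = np.real(psi.conj() @ rho_prot @ psi)
```

The protected fidelity exceeds the unprotected one, at the cost of a reduced success probability, which is the characteristic trade-off of incomplete-collapse protection schemes.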
Westerwalbesloh, Christoph; Grünberger, Alexander; Stute, Birgit; Weber, Sophie; Wiechert, Wolfgang; Kohlheyer, Dietrich; von Lieres, Eric
2015-11-07
A microfluidic device for microbial single-cell cultivation of bacteria was modeled and simulated using COMSOL Multiphysics. The liquid velocity field and the mass transfer within the supply channels and cultivation chambers were calculated to gain insight into the distribution of supplied nutrients and metabolic products secreted by the cultivated bacteria. The goal was to identify potential substrate limitations or product accumulations within the cultivation device. The metabolic uptake and production rates, colony size, and growth medium composition were varied, covering a wide range of operating conditions. Simulations with glucose as substrate did not show limitations within the typically used concentration range, but for alternative substrates limitations could not be ruled out. This lays the foundation for further studies and the optimization of existing picoliter bioreactor systems.
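A quick order-of-magnitude check of the same question, before any full CFD run, compares the colony's consumption rate to diffusive supply with a Damköhler-like number, Da = R·L²/(D·c): Da ≪ 1 suggests the chamber stays well supplied. Every parameter value below is illustrative, not taken from the study's COMSOL model.

```python
# Order-of-magnitude substrate-limitation check for a picoliter chamber.
# All parameter values are illustrative only.
D_glucose = 6.7e-10  # m^2/s, glucose diffusivity in water
L = 50e-6            # m, characteristic chamber length
c_glucose = 20.0     # mol/m^3 (~20 mM feed concentration)
R = 0.05             # mol/(m^3 s), volumetric uptake by a dense colony

Da = R * L**2 / (D_glucose * c_glucose)          # well below 1: no limitation

# A dilute alternative substrate (hypothetical 0.2 mol/m^3 feed) pushes the
# same colony toward Da ~ 1, where limitation can no longer be ruled out.
c_alt = 0.2
Da_alt = R * L**2 / (D_glucose * c_alt)
```

This mirrors the abstract's qualitative conclusion: glucose at typical concentrations is comfortably supplied, while dilute alternative substrates sit near the limitation regime and warrant the full simulation.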
Strategies to improve learning of all students in a class
NASA Astrophysics Data System (ADS)
Suraishkumar, G. K.
2018-05-01
The statistical distribution of the student learning abilities in a typical undergraduate engineering class poses a significant challenge to simultaneously improve the learning of all the students in the class. With traditional instruction styles, the students with significantly high learning abilities are not satisfied due to a feeling of unfulfilled potential, and the students with significantly low learning abilities feel lost. To address the challenge in an undergraduate core/required course on 'transport phenomena in biological systems', a combination of learning strategies such as active learning including co-operative group learning, challenge exercises, and others were employed in a pro-advising context. The short-term and long-term impacts were evaluated through student course performances and input, respectively. The results show that it is possible to effectively address the challenge posed by the distribution of student learning abilities in a class.
Calomiris, J J; Armstrong, J L; Seidler, R J
1984-06-01
Bacterial isolates from the drinking water system of an Oregon coastal community were examined to assess the association of metal tolerance with multiple antibiotic resistance. Positive correlations between tolerance to high levels of Cu2+, Pb2+, and Zn2+ and multiple antibiotic resistance were noted among bacteria from distribution waters but not among bacteria from raw waters. Tolerances to higher levels of Al3+ and Sn2+ were demonstrated more often by raw water isolates which were not typically multiple antibiotic resistant. A similar incidence of tolerance to Cd2+ was demonstrated by isolates of both water types and was not associated with multiple antibiotic resistance. These results suggest that simultaneous selection phenomena occurred in distribution water for bacteria which exhibited unique patterns of tolerance to Cu2+, Pb2+, and Zn2+ and antibiotic resistance.
Marinelli, A; Dunning, M; Weathersby, S; Hemsing, E; Xiang, D; Andonian, G; O'Shea, F; Miao, Jianwei; Hast, C; Rosenzweig, J B
2013-03-01
With the advent of coherent x rays provided by the x-ray free-electron laser (FEL), strong interest has been kindled in sophisticated diffraction imaging techniques. In this Letter, we exploit such techniques for the diagnosis of the density distribution of the intense electron beams typically utilized in an x-ray FEL itself. We have implemented this method by analyzing the far-field coherent transition radiation emitted by an inverse-FEL microbunched electron beam. This analysis utilizes an oversampling phase retrieval method on the transition radiation angular spectrum to reconstruct the transverse spatial distribution of the electron beam. This application of diffraction imaging represents a significant advance in electron beam physics, having critical applications to the diagnosis of high-brightness beams, as well as the collective microbunching instabilities afflicting these systems.
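The oversampling phase retrieval step can be sketched in one dimension with the classic error-reduction iteration: alternately impose the measured Fourier magnitudes and the real-space constraints (known support, nonnegativity). This is a generic sketch of the technique, not the authors' reconstruction code; the signal, grid size, and support are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth nonnegative "beam profile" on a padded (oversampled) grid.
n, support = 64, 16
x_true = np.zeros(n)
x_true[:support] = rng.random(support)
mag = np.abs(np.fft.fft(x_true))  # "measured" far-field magnitudes

# Error-reduction iterations: impose the Fourier magnitude, then the
# support and nonnegativity constraints in real space.
x = rng.random(n)
errs = []
for _ in range(200):
    X = np.fft.fft(x)
    errs.append(np.linalg.norm(np.abs(X) - mag))  # Fourier-domain error
    X = mag * np.exp(1j * np.angle(X))            # keep phase, fix magnitude
    x = np.real(np.fft.ifft(X))
    x[support:] = 0.0                             # support constraint
    x = np.clip(x, 0.0, None)                     # nonnegativity
```

The Fourier-domain error is nonincreasing for error reduction, which is why the recorded error falls from its random-start value; practical reconstructions typically switch to hybrid input-output variants to escape stagnation.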
Political opinion formation: Initial opinion distribution and individual heterogeneity of tolerance
NASA Astrophysics Data System (ADS)
Jin, Cheng; Li, Yifu; Jin, Xiaogang
2017-02-01
Opinion dynamics on networks has received considerable attention for its profound prospects in social behaviours and self-organized systems. However, political opinion formation, as one typical and significant case, remains under-discussed. Previous agent-based simulations propose various models based on different mechanisms, such as the coevolution between network topology and status transition. Nonetheless, even under the same network topology and with the same simple mechanism, the opinions that form can still be uncertain. In this work, we propose two features for political opinion formation: the initial distribution of opinions and the individual heterogeneity of tolerance to opinion change. These two features are embedded in the network construction phase of a classical model. By comparing multiple simple-party systems, along with a detailed analysis of the two-party system, we capture the critical phenomena of fragmentation, polarization and consensus, both in the persistent stable stage and during the process. We further introduce the average ratio of nearest neighbours to characterize the stage of opinion formation. The results show that the initial distribution of opinions leads to different evolution results on similar random networks. In addition, the existence of stubborn nodes plays a special role: only extremely stubborn nodes can change the final opinion distribution, while in other cases they only delay the time to reach stability. If stubborn nodes are small in number, their effects are confined within a small range. This theoretical work goes deeper into an existing model; it is an early exploration of qualitative and quantitative simulation of party competition.
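The two proposed features can be illustrated with a bounded-confidence (Deffuant-style) sketch: opinions start from a chosen initial distribution, each agent has its own tolerance, and a small stubborn fraction never updates. This is a generic illustration of the mechanism, not the paper's coevolving-network model; all parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
n, steps, mu = 200, 20000, 0.3

opinions = rng.random(n)               # initial opinion distribution on [0, 1]
tolerance = rng.uniform(0.05, 0.4, n)  # heterogeneous confidence bounds
stubborn = rng.random(n) < 0.05        # a small fraction never updates

init_spread = opinions.std()
for _ in range(steps):
    i, j = rng.integers(n, size=2)     # random interacting pair
    if abs(opinions[i] - opinions[j]) < min(tolerance[i], tolerance[j]):
        d = opinions[j] - opinions[i]
        if not stubborn[i]:
            opinions[i] += mu * d      # move toward each other by fraction mu
        if not stubborn[j]:
            opinions[j] -= mu * d
final_spread = opinions.std()
```

Interactions only occur within both agents' tolerances, so heterogeneous tolerances and the initial distribution jointly determine whether the population fragments, polarizes, or reaches consensus; stubborn agents mostly slow convergence, consistent with the abstract's observation.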
Selective flotation of phosphate minerals with hydroxamate collectors
Miller, Jan D.; Wang, Xuming; Li, Minhua
2002-01-01
A method is disclosed for separating phosphate minerals from a mineral mixture, particularly from high-dolomite-containing phosphate ores. The method involves conditioning the mineral mixture by contacting it in an aqueous environment with a collector in an amount sufficient to promote flotation of phosphate minerals. The collector is a hydroxamate compound of the formula ##STR1## wherein R is generally hydrophobic and chosen such that the collector has solubility or dispersion properties so that it can be distributed in the mineral mixture, typically an alkyl, aryl, or alkylaryl group having 6 to 18 carbon atoms. M is a cation, typically hydrogen, an alkali metal or an alkaline earth metal. Preferably, the collector also comprises an alcohol of the formula R'--OH, wherein R' is generally hydrophobic and chosen such that the collector has solubility or dispersion properties so that it can be distributed in the mineral mixture, typically an alkyl, aryl, or alkylaryl group having 6 to 18 carbon atoms.
Accelerating numerical solution of stochastic differential equations with CUDA
NASA Astrophysics Data System (ADS)
Januszewski, M.; Kostur, M.
2010-01-01
Numerical integration of stochastic differential equations is commonly used in many branches of science. In this paper we present how to accelerate this kind of numerical calculation with popular NVIDIA Graphics Processing Units using the CUDA programming environment. We address general aspects of numerical programming on stream processors and illustrate them with two examples: the noisy phase dynamics in a Josephson junction and the noisy Kuramoto model. In the presented cases the measured speedup can be as high as 675× compared to a typical CPU, which corresponds to several billion integration steps per second. This means that calculations which took weeks can now be completed in less than one hour. This brings stochastic simulation to a completely new level, opening for research a whole new range of problems which can now be solved interactively. Program summary: Program title: SDE. Catalogue identifier: AEFG_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFG_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: GNU GPL v3. No. of lines in distributed program, including test data, etc.: 978. No. of bytes in distributed program, including test data, etc.: 5905. Distribution format: tar.gz. Programming language: CUDA C. Computer: any system with a CUDA-compatible GPU. Operating system: Linux. RAM: 64 MB of GPU memory. Classification: 4.3. External routines: the program requires the NVIDIA CUDA Toolkit Version 2.0 or newer and the GNU Scientific Library v1.0 or newer; optionally, gnuplot is recommended for quick visualization of the results. Nature of problem: direct numerical integration of stochastic differential equations is a computationally intensive problem, due to the necessity of calculating multiple independent realizations of the system. We exploit the inherent parallelism of this problem and perform the calculations on GPUs using the CUDA programming environment.
The GPU's ability to execute hundreds of threads simultaneously makes it possible to speed up the computation by over two orders of magnitude, compared to a typical modern CPU. Solution method: The stochastic Runge-Kutta method of the second order is applied to integrate the equation of motion. Ensemble-averaged quantities of interest are obtained through averaging over multiple independent realizations of the system. Unusual features: The numerical solution of the stochastic differential equations in question is performed on a GPU using the CUDA environment. Running time: < 1 minute
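The ensemble structure the paper parallelizes maps one independent realization to each GPU thread. The same structure can be sketched on a CPU with a vectorized NumPy loop; for brevity this sketch uses the simpler Euler-Maruyama scheme on an Ornstein-Uhlenbeck process (whose stationary variance σ²/2θ is known exactly) rather than the paper's second-order stochastic Runge-Kutta method.

```python
import numpy as np

# Ensemble Euler-Maruyama integration of an Ornstein-Uhlenbeck process,
# dx = -theta * x dt + sigma dW; stationary variance is sigma**2 / (2*theta).
rng = np.random.default_rng(42)
theta, sigma = 1.0, 1.0
dt, n_steps, n_paths = 0.01, 1000, 20000   # T = 10, well past relaxation

x = np.zeros(n_paths)                       # one entry per realization
for _ in range(n_steps):
    dw = rng.normal(0.0, np.sqrt(dt), n_paths)
    x += -theta * x * dt + sigma * dw

var_est = x.var()                           # ensemble-averaged observable
var_theory = sigma**2 / (2 * theta)
```

Each array element evolves independently, which is exactly the parallelism a CUDA kernel exploits with one thread per realization; ensemble averages are then reductions over the array.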
The solar spectrum at typical clear weather days
NASA Technical Reports Server (NTRS)
Boer, K. W.
1976-01-01
The solar spectrum in the range 300 nm < λ < 1500 nm is given for five typical clear weather days. These days are selected to represent typical seasonal conditions with respect to airmass, water vapor, ozone, and turbidity. Present data are reviewed, and specific conditions are selected. The spectral distribution of the irradiance is given for the direct component, the scattered skylight, the total flux on a horizontal surface, and the flux on an inclined surface normal to the direct beam.
PARTICLE SIZE DISTRIBUTIONS FOR AN OFFICE AEROSOL
The article discusses an evaluation of the effect of percent outdoor air supplied and occupation level on the particle size distributions and mass concentrations for a typical office building. (NOTE: As attention has become focused on indoor air pollution control, it has become i...
Limayem, Alya; Martin, Elizabeth M
2014-01-01
Antibiotics are frequently used in agricultural systems to promote livestock health and to control bacterial contaminants. Given the upsurge of resistant fecal indicator bacteria (FIB) in surface waters, a novel statistical method, namely microbial risk assessment (MRA), was performed to evaluate the probability of infection by resistant FIB in populations exposed to recreational waters. Diarrheagenic Escherichia coli, except E. coli O157:H7, were selected for their prevalence in aquatic ecosystems. A comparative study between a typical E. coli pathway and a case scenario aggravated by antibiotic use was performed via Crystal Ball® software in an effort to analyze a set of available inputs provided by US institutions, including E. coli concentrations in the US Great Lakes, using random sampling and probability distributions. Results from forecasting a possible worst-case dose-response scenario indicated an approximate 50% chance for 20% of the exposed human populations to be infected by recreational water in the U.S. In a typical scenario, however, there is a 50% chance of infection for only 1% of the exposed human populations. The uncertain variable, E. coli concentration, accounted for approximately 92.1% of the sensitivity of the dose-response model in a typical scenario, making it the major contributing factor. Resistant FIB in recreational waters, exacerbated by a low dose of antibiotic pollutants, would increase the adverse health effects in exposed human populations tenfold.
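The Monte Carlo dose-response chain described above can be sketched directly: sample an exposure concentration and ingestion volume, form a dose, and push it through a beta-Poisson dose-response model. Every distribution and parameter below (lognormal concentration, alpha, N50) is an illustrative stand-in, not the study's fitted Crystal Ball inputs.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

# Exposure: E. coli concentration (CFU/100 mL) drawn from a lognormal
# distribution; ingestion volume per swimming event in mL. Illustrative only.
conc = rng.lognormal(mean=3.0, sigma=1.0, size=n)  # CFU per 100 mL
volume_ml = rng.uniform(10, 50, size=n)            # ingested per event
dose = conc * volume_ml / 100.0                    # CFU ingested

# Beta-Poisson dose-response (a common form for pathogenic E. coli);
# alpha and n50 are hypothetical placeholder values.
alpha, n50 = 0.2, 2.0e6
p_inf = 1.0 - (1.0 + (dose / n50) * (2**(1 / alpha) - 1))**(-alpha)

mean_risk = p_inf.mean()
frac_over_1pct = (p_inf > 0.01).mean()
```

Percentiles of `p_inf` then answer questions of the form "what fraction of the exposed population faces at least risk X", which is how the abstract's 50%-chance statements are read off a forecast distribution.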
Mars Global Surveyor TES Results: Observations of Water Ice Clouds
NASA Technical Reports Server (NTRS)
Pearl, John C.; Smith, M. D.; Conrath, B. J.; Bandfield, J. L.; Christensen, P. R.
1999-01-01
On July 31, 1999, Mars Global Surveyor completed its first martian year in orbit. During this time, the Thermal Emission Spectrometer (TES) experiment gathered extensive data on water ice clouds. We report here on three types of martian clouds. 1) Martian southern summer has long been characterized as the season when the most severe dust storms occur. It is now apparent that northern spring/summer is characterized as a time of substantial low latitude ice clouds [1]. TES observations beginning in the northern summer (Lsubs=107) show a well developed cloud belt between 10S and 30N latitude; 12 micron opacities were typically 0.15. This system decreased dramatically after Lsubs= 130. Thereafter, remnants were most persistent over the Tharsis ridge. 2) Clouds associated with major orographic features follow a different pattern [2]. Clouds of this type were present prior to the regional Noachis dust storm of 1997. They disappeared with the onset of the storm, but reappeared rather quickly following its decay. Typical infrared opacities were near 0.5. 3) Extensive, very thin clouds are also widespread [3]. Found at high altitudes (above 35 km), their opacities are typically a few hundredths. At times, such as in northern spring, these clouds are limited in their northern extent only by the southern edge of the polar vortex. We describe the distribution, infrared optical properties, and seasonal trends of these systems during the first martian year of TES operations.
Analysis on Voltage Profile of Distribution Network with Distributed Generation
NASA Astrophysics Data System (ADS)
Shao, Hua; Shi, Yujie; Yuan, Jianpu; An, Jiakun; Yang, Jianhua
2018-02-01
Penetration of distributed generation has impacts on a distribution network's load flow, voltage profile, reliability, power loss, and so on. After analyzing these impacts and the typical structures of grid-connected distributed generation, the backward/forward sweep method for load flow calculation of the distribution network is modelled to include distributed generation. The voltage profiles of the distribution network, as affected by the installation location and capacity of distributed generation, are thoroughly investigated and simulated. The impacts on the voltage profiles are summarized, and corresponding suggestions on the installation location and capacity of distributed generation are given.
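The backward/forward sweep method is easy to sketch on a tiny radial feeder: the backward sweep accumulates branch currents from the load ends toward the slack bus, the forward sweep updates voltages from the slack bus outward, and the two repeat until convergence. DG is modelled here, as is common, as a negative PQ load. The four-bus feeder and per-unit values are invented for illustration.

```python
import numpy as np

# Radial feeder: slack bus 0 -> bus 1 -> bus 2 -> bus 3 (per-unit values).
z = np.array([0.02 + 0.04j, 0.02 + 0.04j, 0.02 + 0.04j])  # branch impedances

def solve(loads, tol=1e-10, iters=100):
    """Backward/forward sweep; loads are complex PQ demands at buses 1..3."""
    v = np.ones(4, dtype=complex)
    for _ in range(iters):
        i_inj = np.conj(loads / v[1:])        # bus load currents
        j = np.cumsum(i_inj[::-1])[::-1]      # backward sweep: branch currents
        v_new = v.copy()
        for k in range(3):                    # forward sweep: voltage drops
            v_new[k + 1] = v_new[k] - z[k] * j[k]
        if np.max(np.abs(v_new - v)) < tol:
            return v_new
        v = v_new
    return v

base = np.array([0.05 + 0.02j] * 3)
v_base = solve(base)

# DG at the feeder end (bus 3) modelled as a negative PQ load (unity pf).
with_dg = base.copy()
with_dg[2] -= 0.08
v_dg = solve(with_dg)
```

Placing the DG at the feeder end reverses part of the branch flow and lifts the end-of-feeder voltage, which is the qualitative effect on the voltage profile that the paper investigates against DG location and capacity.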
Sensor Network Middleware for Cyber-Physical Systems: Opportunities and Challenges
NASA Astrophysics Data System (ADS)
Singh, G.
2015-12-01
Wireless Sensor Network middleware typically provides abstractions for common tasks such as atomicity, synchronization and communication, with the intention of isolating the developers of distributed applications from lower-level details of the underlying platforms. Developing middleware to meet the performance constraints of applications is an important challenge. Although one would like to develop generic middleware services which can be used in a variety of different applications, efficiency considerations often force developers to design middleware and algorithms customized to specific operational contexts. This presentation will discuss techniques to design middleware that is customizable to suit the performance needs of specific applications. We also discuss the challenges posed in designing middleware for pervasive sensor networks and cyber-physical systems, with specific focus on environmental monitoring.
Performance evaluation of a 2-mode PV grid connected system in Thailand -- Case study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jivacate, C.; Mongconvorawan, S.; Sinratanapukdee, E.
A PV grid-connected system with a small battery bank has been set up in a rural district in North Thailand in order to demonstrate a 2-mode operation concept. The objective is to gain experience with the PV grid-connected concept without battery storage. However, due to the evening peak demand and a rather weak distribution grid, which is typical in rural areas, a small battery bank is still required to enable maximum energy transfer to the grid for the time being, before moving fully to the no-battery mode. The analyzed data seem to indicate possible performance improvement by re-arranging the number of PV modules and batteries in the string.
Modelling Parameters Characterizing Selected Water Supply Systems in Lower Silesia Province
NASA Astrophysics Data System (ADS)
Nowogoński, Ireneusz; Ogiołda, Ewa
2017-12-01
The work presents issues of modelling water supply systems in the context of basic parameters characterizing their operation. In addition to typical parameters, such as water pressure and flow rate, the age of the water is an important parameter for assessing the quality of the distributed medium. The analysis was based on two facilities, including one with a diverse spectrum of consumers, including residential housing and industry. The simulations carried out indicate the possibility of water quality degradation as a result of excessively long periods of storage in the water supply network. Also important is the influence of the irregularity of water use, especially in the case of supplying various kinds of consumers (in the analysed case - mining companies).
Removing Grit During Wastewater Treatment: CFD Analysis of HDVS Performance.
Meroney, Robert N; Sheker, Robert E
2016-05-01
Computational Fluid Dynamics (CFD) was used to simulate the grit and sand separation effectiveness of a typical hydrodynamic vortex separator (HDVS) system. The analysis examined the influence of flow rate, fluid viscosity, total suspended solids (TSS), and particle size and distribution on separator efficiency. It was found that separator efficiency for a wide range of these independent variables could be consolidated into a few curves based on the ratio of particle fall velocity to separator inflow velocity, Ws/Vin. Based on CFD analysis it was also determined that systems of different sizes, with length scale ratios ranging from 1 to 10, performed similarly when Ws/Vin and TSS were held constant. The CFD results have also been compared to a limited range of experimental data.
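The collapsing variable Ws/Vin is easy to compute for grit-sized particles: in the laminar regime, the fall velocity Ws follows Stokes' law, Ws = g·d²·(ρp − ρf)/(18·μ). The densities, viscosity, and inflow velocity below are typical textbook values for sand in water, chosen for illustration rather than taken from the study.

```python
# Stokes-law fall velocity for grit particles and the Ws/Vin ratio that the
# CFD study found collapses the separator efficiency curves.
g = 9.81        # m/s^2
rho_p = 2650.0  # kg/m^3, sand/grit density
rho_f = 998.0   # kg/m^3, water
mu = 1.0e-3     # Pa*s, water viscosity

def fall_velocity(d):
    """Stokes settling velocity for particle diameter d (m), laminar regime."""
    return g * d**2 * (rho_p - rho_f) / (18.0 * mu)

v_in = 0.5  # m/s, separator inflow velocity (illustrative)
ratios = {d: fall_velocity(d) / v_in for d in (50e-6, 100e-6, 200e-6)}
```

Because Ws scales with d², doubling the particle diameter quadruples Ws/Vin, which is why efficiency curves expressed in this ratio absorb most of the particle-size dependence.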
Energy audit data for a resort island in the South China Sea.
Basir Khan, M Reyasudin; Jidin, Razali; Pasupuleti, Jagadeesh
2016-03-01
The data consist of actual generation-side auditing, including the distribution of loads, seasonal load profiles, and types of loads, as well as an analysis of local development planning for a resort island in the South China Sea. The data have been used to propose an optimal combination of hybrid renewable energy systems able to mitigate the diesel fuel dependency of the island. The resort island selected is Tioman, as it represents the typical energy requirements of many resort islands in the South China Sea. The data presented are related to the research article "Optimal Combination of Solar, Wind, Micro-Hydro and Diesel Systems based on Actual Seasonal Load Profiles for a Resort Island in the South China Sea" [1].
Non-thermal transitions in a model inspired by moral decisions
NASA Astrophysics Data System (ADS)
Alamino, Roberto C.
2016-08-01
This work introduces a model in which agents of a network act upon one another according to three different kinds of moral decisions. These decisions are based on an increasing level of sophistication in the empathy capacity of the agent, a hierarchy which we name Piaget’s ladder. The decision strategy of the agents is non-rational, in the sense that the strategies are arbitrarily fixed, and the model presents quenched disorder given by the distribution of its defining parameters. An analytical solution for this model is obtained in the large-system limit, as well as a leading-order correction for finite-size systems, which shows that typical realisations of the model develop a phase structure with both continuous and discontinuous non-thermal transitions.
Slow quench dynamics of a one-dimensional Bose gas confined to an optical lattice.
Bernier, Jean-Sébastien; Roux, Guillaume; Kollath, Corinna
2011-05-20
We analyze the effect of a linear time variation of the interaction strength on a trapped one-dimensional Bose gas confined to an optical lattice. The evolution of different observables, such as the experimentally accessible on-site particle distribution, is studied as a function of the ramp time using time-dependent numerical techniques. We find that the dynamics of a trapped system typically displays two regimes: for long ramp times, the dynamics is governed by density redistribution, while at short ramp times, local dynamics dominates as the evolution is identical to that of a homogeneous system. In the homogeneous limit, we also discuss the nontrivial scaling of the energy absorbed with the ramp time.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yasuno, Satoshi, E-mail: yasuno@spring8.or.jp; Koganezawa, Tomoyuki; Watanabe, Takeshi
Hard X-ray photoelectron spectroscopy (HAXPES) is a powerful tool for investigating the chemical and electronic states of bulk materials and buried interfaces in a non-destructive manner, due to the large probing depth of this technique. At BL46XU of SPring-8, there are two HAXPES systems equipped with different electron spectrometers, which can be utilized as appropriate for various industrial research purposes. In this article, these systems are outlined, and two typical examples of HAXPES studies performed with them are presented, which focus on the silicidation at the Ni/SiC interface and the energy distribution of interface states at SiO2/a-InGaZnO.
Optimization of tomographic reconstruction workflows on geographically distributed resources
Bicer, Tekin; Gursoy, Doga; Kettimuthu, Rajkumar; ...
2016-01-01
New technological advancements in synchrotron light sources enable data acquisitions at unprecedented levels. This emergent trend affects not only the size of the generated data but also the need for larger computational resources. Although beamline scientists and users have access to local computational resources, these are typically limited and can result in extended execution times. Applications that are based on iterative processing as in tomographic reconstruction methods require high-performance compute clusters for timely analysis of data. Here, time-sensitive analysis and processing of Advanced Photon Source data on geographically distributed resources are focused on. Two main challenges are considered: (i) modeling of the performance of tomographic reconstruction workflows and (ii) transparent execution of these workflows on distributed resources. For the former, three main stages are considered: (i) data transfer between storage and computational resources, (ii) wait/queue time of reconstruction jobs at compute resources, and (iii) computation of reconstruction tasks. These performance models allow evaluation and estimation of the execution time of any given iterative tomographic reconstruction workflow that runs on geographically distributed resources. For the latter challenge, a workflow management system is built, which can automate the execution of workflows and minimize the user interaction with the underlying infrastructure. The system utilizes Globus to perform secure and efficient data transfer operations. The proposed models and the workflow management system are evaluated by using three high-performance computing and two storage resources, all of which are geographically distributed. Workflows were created with different computational requirements using two compute-intensive tomographic reconstruction algorithms.
Experimental evaluation shows that the proposed models and system can be used for selecting the optimum resources, which in turn can provide up to 3.13× speedup (on experimented resources). Furthermore, the error rates of the models range between 2.1 and 23.3% (considering workflow execution times), where the accuracy of the model estimations increases with higher computational demands in reconstruction tasks.
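The three-stage execution-time model described above (transfer + queue + compute) can be sketched as a simple cost function evaluated per candidate site, with the cheapest site selected. The site parameters, data size, and workload below are hypothetical; the paper's actual models are fitted to measured Advanced Photon Source workflows.

```python
# Sketch of the three-stage execution-time model: transfer + queue + compute,
# evaluated per candidate resource so the fastest one can be selected.
def workflow_time(data_gb, work_tflop, site):
    t_transfer = data_gb * 8 / site["bandwidth_gbps"]  # seconds on the wire
    t_queue = site["expected_queue_s"]                 # scheduler wait estimate
    t_compute = work_tflop * 1e12 / (site["flops_per_node"] * site["nodes"])
    return t_transfer + t_queue + t_compute

sites = {  # hypothetical resource descriptions
    "cluster_a": {"bandwidth_gbps": 10, "expected_queue_s": 600,
                  "flops_per_node": 5e12, "nodes": 32},
    "cluster_b": {"bandwidth_gbps": 1, "expected_queue_s": 60,
                  "flops_per_node": 2e12, "nodes": 8},
}
est = {name: workflow_time(500, 2000, s) for name, s in sites.items()}
best = min(est, key=est.get)
```

With these numbers the large cluster wins despite its longer queue, because transfer and compute dominate; as the model error figures above suggest, estimates of this kind become more reliable as the compute stage grows relative to the noisier transfer and queue stages.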
NASA Astrophysics Data System (ADS)
Fukami, Christine S.; Sullivan, Amy P.; Ryan Fulgham, S.; Murschell, Trey; Borch, Thomas; Smith, James N.; Farmer, Delphine K.
2016-07-01
Particle-into-Liquid Samplers (PILS) have become a standard aerosol collection technique, and are widely used in both ground and aircraft measurements in conjunction with off-line ion chromatography (IC) measurements. Accurate and precise background samples are essential to account for gas-phase components not efficiently removed and any interference in the instrument lines, collection vials or off-line analysis procedures. For aircraft sampling with PILS, backgrounds are typically taken with in-line filters to remove particles prior to sample collection once or twice per flight, with more numerous backgrounds taken on the ground. Here, we use data collected during the Front Range Air Pollution and Photochemistry Éxperiment (FRAPPÉ) to demonstrate not only that multiple background filter samples are essential to attain a representative background, but also that the chemical background signals do not follow the Gaussian statistics typically assumed. Instead, the background signals for all chemical components analyzed from 137 background samples (taken from ∼78 total sampling hours over 18 flights) follow a log-normal distribution, meaning that the typical approaches of averaging background samples and/or assuming a Gaussian distribution cause an over-estimation of the background - and thus an underestimation of sample concentrations. Our approach of deriving backgrounds from the peak of the log-normal distribution results in detection limits of 0.25, 0.32, 3.9, 0.17, 0.75 and 0.57 μg m-3 for sub-micron aerosol nitrate (NO3-), nitrite (NO2-), ammonium (NH4+), sulfate (SO42-), potassium (K+) and calcium (Ca2+), respectively. The difference in backgrounds calculated assuming a Gaussian distribution versus a log-normal distribution was most extreme for NH4+, resulting in a background that was 1.58× that determined from fitting a log-normal distribution.
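Why the Gaussian assumption overestimates a log-normal background is easy to see numerically: the arithmetic mean of log-normal data sits above the distribution's peak (mode), exp(μ − σ²). The synthetic background values below are illustrative, not FRAPPÉ data; only the sample count (137) mirrors the study.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic background signals following a log-normal distribution, as
# observed for the PILS filter blanks; mu and sigma are illustrative.
mu, s = 0.0, 0.8
bg = rng.lognormal(mu, s, size=137)

# Conventional background: Gaussian assumption (arithmetic mean).
bg_gaussian = bg.mean()

# Log-normal approach: fit in log space and take the distribution's peak,
# exp(mu - sigma**2), as the representative background.
mu_hat, s_hat = np.log(bg).mean(), np.log(bg).std()
bg_mode = np.exp(mu_hat - s_hat**2)
```

Because every value subtracted as background propagates directly into the reported concentration, the gap between `bg_gaussian` and `bg_mode` is exactly the over-subtraction the abstract quantifies (largest for NH4+).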
Schipper, Aafke M; Posthuma, Leo; de Zwart, Dick; Huijbregts, Mark A J
2014-12-16
Quantitative relationships between species richness and single environmental factors, also called species sensitivity distributions (SSDs), are helpful for understanding and predicting biodiversity patterns, identifying environmental management options and setting environmental quality standards. However, species richness typically depends on a variety of environmental factors, implying that it is not straightforward to quantify SSDs from field monitoring data. Here, we present a novel and flexible approach to solve this, based on the method of stacked species distribution modeling. First, a species distribution model (SDM) is established for each species, describing its probability of occurrence in relation to multiple environmental factors. Next, the predictions of the SDMs are stacked along the gradient of each environmental factor with the remaining environmental factors at fixed levels. By varying those fixed levels, our approach can be used to investigate how field-based SSDs for a given environmental factor change in relation to changing confounding influences, including for example optimal, typical, or extreme environmental conditions. This makes the approach an asset in evaluating potential management measures to reach good ecological status.
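A minimal sketch of the stacking idea, using hypothetical logistic SDMs with made-up coefficients rather than fitted models:

```python
import numpy as np

def sdm(x1, x2, coefs):
    """Hypothetical logistic species distribution model: probability of
    occurrence given two environmental factors."""
    b0, b1, b2 = coefs
    z = b0 + b1 * x1 + b2 * x2
    return 1.0 / (1.0 + np.exp(-z))

# Per-species coefficients (intercept, factor-1 slope, factor-2 slope);
# purely illustrative, not fitted to real monitoring data.
species_coefs = [(0.5, -1.2, 0.3), (1.0, -0.4, -0.6), (-0.2, -2.0, 0.8)]

def ssd(gradient, x2_fixed):
    """Stack SDM predictions along the factor-1 gradient with the
    confounding factor held fixed: expected species richness."""
    return sum(sdm(gradient, x2_fixed, c) for c in species_coefs)

grad = np.linspace(-2, 2, 50)
richness_typical = ssd(grad, x2_fixed=0.0)   # "typical" conditions
richness_extreme = ssd(grad, x2_fixed=-2.0)  # "extreme" conditions
# Expected richness declines along the stressor gradient; varying
# x2_fixed shows how the field-based SSD shifts with confounders.
```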
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Chase Qishi; Zhu, Michelle Mengxia
The advent of large-scale collaborative scientific applications has demonstrated the potential for broad scientific communities to pool globally distributed resources to produce unprecedented data acquisition, movement, and analysis. System resources including supercomputers, data repositories, computing facilities, network infrastructures, storage systems, and display devices have been increasingly deployed at national laboratories and academic institutes. These resources are typically shared by large communities of users over the Internet or dedicated networks and hence exhibit an inherent dynamic nature in their availability, accessibility, capacity, and stability. Scientific applications using either experimental facilities or computation-based simulations with various physical, chemical, climatic, and biological models feature diverse scientific workflows, as simple as linear pipelines or as complex as directed acyclic graphs, which must be executed and supported over wide-area networks with massively distributed resources. Application users oftentimes need to manually configure their computing tasks over networks in an ad hoc manner, significantly limiting the productivity of scientists and constraining the utilization of resources. The success of these large-scale distributed applications requires a highly adaptive and massively scalable workflow platform that provides automated and optimized computing and networking services. This project aims to design and develop a generic Scientific Workflow Automation and Management Platform (SWAMP), which contains a web-based user interface specially tailored for a target application, a set of user libraries, and several easy-to-use computing and networking toolkits for application scientists to conveniently assemble, execute, monitor, and control complex computing workflows in heterogeneous high-performance network environments.
SWAMP will enable the automation and management of the entire process of scientific workflows with the convenience of a few mouse clicks while hiding the implementation and technical details from end users. In particular, we will consider two types of applications with distinct performance requirements: data-centric and service-centric applications. For data-centric applications, the main workflow task involves large-volume data generation, cataloging, storage, and movement, typically from supercomputers or experimental facilities to a team of geographically distributed users; for service-centric applications, the main focus of the workflow is on data archiving, preprocessing, filtering, synthesis, visualization, and other application-specific analysis. We will conduct a comprehensive comparison of existing workflow systems and choose the best-suited one with open-source code, a flexible system structure, and a large user base as the starting point for our development. Based on the chosen system, we will develop and integrate new components including a black-box design of computing modules, performance monitoring and prediction, and workflow optimization and reconfiguration, which are missing from existing workflow systems. A modular design separating specification, execution, and monitoring aspects will be adopted to establish a common generic infrastructure suited for a wide spectrum of science applications. We will further design and develop efficient workflow mapping and scheduling algorithms to optimize workflow performance in terms of minimum end-to-end delay, maximum frame rate, and highest reliability. We will develop and demonstrate the SWAMP system in a local environment, the grid network, and the 100 Gbps Advanced Network Initiative (ANI) testbed. The demonstration will target scientific applications in climate modeling and high-energy physics, and the functions to be demonstrated include workflow deployment, execution, steering, and reconfiguration.
Throughout the project period, we will work closely with the science communities in the fields of climate modeling and high energy physics, including the Spallation Neutron Source (SNS) and Large Hadron Collider (LHC) projects, to mature the system for production use.
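Executing such a workflow requires resolving task dependencies first; a minimal sketch (not SWAMP's actual implementation) orders a DAG of tasks with Kahn's algorithm:

```python
from collections import deque

def topological_order(tasks, deps):
    """Return an execution order for a workflow DAG.
    tasks: iterable of task names; deps: dict task -> prerequisite tasks.
    Kahn's algorithm; raises ValueError on a cyclic graph."""
    indeg = {t: len(deps.get(t, ())) for t in tasks}
    children = {t: [] for t in indeg}
    for t, prereqs in deps.items():
        for p in prereqs:
            children[p].append(t)
    ready = deque(t for t, d in indeg.items() if d == 0)
    order = []
    while ready:
        t = ready.popleft()
        order.append(t)
        for c in children[t]:
            indeg[c] -= 1
            if indeg[c] == 0:
                ready.append(c)
    if len(order) != len(indeg):
        raise ValueError("workflow graph contains a cycle")
    return order

# Hypothetical climate-modeling pipeline: a fan-out and a fan-in stage.
deps = {"preprocess": set(), "simulate": {"preprocess"},
        "filter": {"preprocess"}, "visualize": {"simulate", "filter"}}
print(topological_order(deps.keys(), deps))
```

Real workflow engines layer resource mapping and scheduling on top of this ordering, but dependency resolution is the common core.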
Phase synchronization in the forced Lorenz system
NASA Astrophysics Data System (ADS)
Park, Eun-Hyoung; Zaks, Michael A.; Kurths, Jürgen
1999-12-01
We demonstrate that the dynamics of phase synchronization in a chaotic system under weak periodic forcing depends crucially on the distribution of intrinsic characteristic times of this system. Under the external periodic action, the frequency of every unstable periodic orbit is locked to the frequency of the force. In systems which in the autonomous case display nearly isochronous chaotic rotations, the locking ratio is the same for all periodic orbits; since a typical chaotic orbit wanders between the periodic ones, its phase follows the phase of the force. For the Lorenz attractor, with its unbounded times of return onto a Poincaré surface, such a state of perfect phase synchronization is inaccessible. Analysis with the help of unstable periodic orbits shows that this state is replaced by another one, which we call ``imperfect phase synchronization,'' in which we observe alternating temporal segments corresponding to different rational values of frequency locking.
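For readers who want to experiment, a small sketch integrates the Lorenz equations with a weak periodic drive added to the x equation (classic chaotic parameters; the forcing amplitude and frequency are illustrative, not the paper's values):

```python
import numpy as np

def lorenz_forced(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0,
                  eps=1.0, omega=8.3):
    """Lorenz equations with a weak periodic drive on the x equation."""
    x, y, z = s
    return np.array([sigma * (y - x) + eps * np.cos(omega * t),
                     x * (rho - z) - y,
                     x * y - beta * z])

def rk4(f, s, t, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(t, s)
    k2 = f(t + dt / 2, s + dt / 2 * k1)
    k3 = f(t + dt / 2, s + dt / 2 * k2)
    k4 = f(t + dt, s + dt * k3)
    return s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

dt, steps = 0.005, 20000
s = np.array([1.0, 1.0, 1.0])
traj = np.empty((steps, 3))
for i in range(steps):
    traj[i] = s
    s = rk4(lorenz_forced, s, i * dt, dt)
# The weakly forced trajectory stays on a bounded, Lorenz-like attractor;
# phase-locking analysis would then compare the rotation phase of
# (x, y) segments against the phase of the drive.
```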
Tien, Kai-Wen; Kulvatunyou, Boonserm; Jung, Kiwook; Prabhu, Vittaldas
2017-01-01
As cloud computing is increasingly adopted, the trend is to offer software functions as modular services and compose them into larger, more meaningful ones. The trend is attractive for analytical problems in the manufacturing system design and performance improvement domain because 1) finding a global optimum for the system is a complex problem; and 2) sub-problems are typically compartmentalized by the organizational structure. However, solving sub-problems with independent services can result in a sub-optimal solution at the system level. This paper investigates a technique called Analytical Target Cascading (ATC) to coordinate the optimization of loosely coupled sub-problems, each of which may be formulated by a different department and solved by modular analytical services. The results demonstrate that ATC is a promising method in that it offers system-level optimal solutions that can scale up by exploiting distributed and modular execution while allowing easier management of the problem formulation.
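A toy sketch of the ATC coordination loop, with made-up quadratic department objectives, a single shared linking variable, and an increasing consistency weight (not the paper's formulation):

```python
# Two department-level subproblems share one linking variable; the
# system level cascades a target t, and each subproblem trades its
# local objective against matching t via a quadratic penalty.

def subproblem(a, t, w):
    """min_x (x - a)**2 + w * (x - t)**2, solved in closed form."""
    return (a + w * t) / (1.0 + w)

a1, a2 = 2.0, 6.0   # hypothetical local optima of the two departments
t = 0.0             # initial system-level target
for w in (1.0, 10.0, 100.0, 1000.0):   # tighten consistency gradually
    for _ in range(200):               # coordinate until t settles
        x1 = subproblem(a1, t, w)      # department 1 responds
        x2 = subproblem(a2, t, w)      # department 2 responds
        t = 0.5 * (x1 + x2)            # system level updates the target

# The target converges to the coupled optimum t = 4, which neither
# department would find alone, and the responses x1, x2 pull into
# consistency as the penalty weight grows.
print(round(t, 6))
```

The closed-form subproblems stand in for modular analytical services; in practice each would be a separate optimization call.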
Litwin, Mieczysław; Feber, Janusz; Niemirska, Anna; Michałkiewicz, Jacek
2016-02-01
There is an increasing amount of data indicating that primary hypertension (PH) is not only a hemodynamic phenomenon but also a complex syndrome involving abnormal fat tissue distribution, over-activity of the sympathetic nervous system (SNS), metabolic abnormalities, and activation of the immune system. In children, PH usually presents with a typical phenotype of disturbed body composition, accelerated biological maturity, and subtle immunological and metabolic abnormalities. This stage of the disease is potentially reversible. However, long-lasting over-activity of the SNS and immuno-metabolic alterations usually lead to an irreversible stage of cardiovascular disease. We describe an intermediate phenotype of children with PH, showing that PH is associated with accelerated development, i.e., early premature aging of the immune, metabolic, and vascular systems. The associations and determinants of hypertensive organ damage, the principles of treatment, and the possibility of rejuvenation of the cardiovascular system are discussed.
Manipulating Scrip Systems: Sybils and Collusion
NASA Astrophysics Data System (ADS)
Kash, Ian A.; Friedman, Eric J.; Halpern, Joseph Y.
Game-theoretic analyses of distributed and peer-to-peer systems typically use the Nash equilibrium solution concept, but this explicitly excludes the possibility of strategic behavior involving more than one agent. We examine the effects of two types of strategic behavior involving more than one agent, sybils and collusion, in the context of scrip systems where agents provide each other with service in exchange for scrip. Sybils make an agent more likely to be chosen to provide service, which generally makes it harder for agents without sybils to earn money and decreases social welfare. Surprisingly, in certain circumstances it is possible for sybils to make all agents better off. While collusion is generally bad, in the context of scrip systems it actually tends to make all agents better off, not merely those who collude. These results also provide insight into the effects of allowing agents to advertise and loan money.
NASA Astrophysics Data System (ADS)
Liu, Chao; Liu, Qiangsheng; Cen, Zhaofeng; Li, Xiaotong
2010-11-01
Some optical design software, for example ZEMAX, can analyze the polarization state only of completely polarized light. Based on the principles of geometrical optics, novel descriptions of light with different polarization states are provided in this paper. Differential calculus is used to preserve the polarization state and amplitudes of sampling rays during ray tracing. Polarization state changes are analyzed for several typical circumstances, such as Brewster-angle incidence and total internal reflection. Natural light and partially polarized light are discussed as an important aspect. Furthermore, a computing method including composition and decomposition of sampling rays at each surface is set up to analyze the energy transmission of rays through optical systems. With these analysis methods, not only the polarization state changes of the incident rays but also the energy distributions can be obtained. Once the energy distributions are obtained, the surface with the greatest energy loss in the optical system can be found, and the energy value and polarization state of light reaching the image surface are also available. These analysis methods are very helpful for designing or analyzing optical systems, such as analyzing the energy of stray light in high-power optical systems and studying the influence of optical surfaces on rays' polarization state in polarization imaging systems.
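The Brewster-incidence case mentioned above can be checked numerically. This sketch computes Fresnel amplitude reflection coefficients for an assumed air-to-glass interface (illustrative indices, not from the paper):

```python
import numpy as np

def fresnel_coeffs(n1, n2, theta_i):
    """Fresnel amplitude reflection coefficients for s- and p-polarized
    light at a dielectric interface (angles in radians)."""
    theta_t = np.arcsin(n1 * np.sin(theta_i) / n2)  # Snell's law
    rs = (n1 * np.cos(theta_i) - n2 * np.cos(theta_t)) / \
         (n1 * np.cos(theta_i) + n2 * np.cos(theta_t))
    rp = (n2 * np.cos(theta_i) - n1 * np.cos(theta_t)) / \
         (n2 * np.cos(theta_i) + n1 * np.cos(theta_t))
    return rs, rp

n1, n2 = 1.0, 1.5                # air -> glass (illustrative)
theta_b = np.arctan(n2 / n1)     # Brewster angle
rs, rp = fresnel_coeffs(n1, n2, theta_b)
# At Brewster incidence the p-component is fully transmitted (rp = 0),
# so the reflected ray is completely s-polarized: exactly the kind of
# per-surface polarization change a ray tracer must track.
print(abs(rp) < 1e-9, rs ** 2)   # rs**2 = reflected s-energy fraction
```

The energy bookkeeping in the abstract generalizes this per-surface split of s- and p-components across every surface in the system.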
Simão, Ana; Densham, Paul J; Haklay, Mordechai Muki
2009-05-01
Spatial planning typically involves multiple stakeholders. To any specific planning problem, stakeholders often bring different levels of knowledge about the components of the problem and make assumptions, reflecting their individual experiences, that yield conflicting views about desirable planning outcomes. Consequently, stakeholders need to learn about the likely outcomes that result from their stated preferences; this learning can be supported through enhanced access to information, increased public participation in spatial decision-making and support for distributed collaboration amongst planners, stakeholders and the public. This paper presents a conceptual system framework for web-based GIS that supports public participation in collaborative planning. The framework combines an information area, a Multi-Criteria Spatial Decision Support System (MC-SDSS) and an argumentation map to support distributed and asynchronous collaboration in spatial planning. After analysing the novel aspects of this framework, the paper describes its implementation, as a proof of concept, in a system for Web-based Participatory Wind Energy Planning (WePWEP). Details are provided on the specific implementation of each of WePWEP's four tiers, including technical and structural aspects. Throughout the paper, particular emphasis is placed on the need to support user learning throughout the planning process.
A Comparison of Filter-based Approaches for Model-based Prognostics
NASA Technical Reports Server (NTRS)
Daigle, Matthew John; Saha, Bhaskar; Goebel, Kai
2012-01-01
Model-based prognostics approaches use domain knowledge about a system and its failure modes through the use of physics-based models. Model-based prognosis is generally divided into two sequential problems: a joint state-parameter estimation problem, in which, using the model, the health of a system or component is determined based on the observations; and a prediction problem, in which, using the model, the state-parameter distribution is simulated forward in time to compute end of life and remaining useful life. The first problem is typically solved through the use of a state observer, or filter. The choice of filter depends on the assumptions that may be made about the system, and on the desired algorithm performance. In this paper, we review three separate filters for the solution to the first problem: the Daum filter, an exact nonlinear filter; the unscented Kalman filter, which approximates nonlinearities through the use of a deterministic sampling method known as the unscented transform; and the particle filter, which approximates the state distribution using a finite set of discrete, weighted samples, called particles. Using a centrifugal pump as a case study, we conduct a number of simulation-based experiments investigating the performance of the different algorithms as applied to prognostics.
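A minimal bootstrap particle filter on a toy scalar model (not the centrifugal pump model) illustrates the sampling-based estimation the paper compares against the Daum and unscented Kalman filters:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: a scalar random-walk state observed in Gaussian noise,
# standing in for the joint state-parameter estimation step.
T, N = 100, 500                  # time steps, particles
q, r = 0.1, 0.5                  # process / measurement noise std
truth = np.cumsum(rng.normal(0, q, T))   # hidden state trajectory
obs = truth + rng.normal(0, r, T)        # noisy observations

particles = rng.normal(0, 1, N)
estimates = np.empty(T)
for t in range(T):
    particles += rng.normal(0, q, N)     # propagate through the model
    w = np.exp(-0.5 * ((obs[t] - particles) / r) ** 2)  # likelihoods
    w /= w.sum()
    estimates[t] = w @ particles         # weighted state estimate
    idx = rng.choice(N, size=N, p=w)     # multinomial resampling
    particles = particles[idx]

rmse = np.sqrt(np.mean((estimates - truth) ** 2))
# The filtered estimate tracks the state well below the raw
# measurement noise level.
```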
How do binary separations depend on cloud initial conditions?
NASA Astrophysics Data System (ADS)
Sterzik, M. F.; Durisen, R. H.; Zinnecker, H.
2003-11-01
We explore the consequences of a star formation scenario in which the isothermal collapse of a rotating, star-forming core is followed by prompt fragmentation into a cluster containing a small number (N <~ 10) of protostars and/or substellar objects. The subsequent evolution of the cluster is assumed to be dominated by dynamical interactions among cluster members, and this establishes the final properties of the binary and multiple systems. The characteristic scale of the fragmenting core is determined by the cloud initial conditions (such as temperature, angular momentum and mass), and we are able to relate the separation distributions of the final binary population to the properties of the star-forming core. Because the fragmentation scale immediately after the isothermal collapse is typically a factor of 3-10 too large, we conjecture that fragmentation into small clusters followed by dynamical evolution is required to account for the observed binary separation distributions. Differences in the environmental properties of the cores are expected to imprint differences on the characteristic dimensions of the binary systems they form. Recent observations of hierarchical systems, differences in binary characteristics among star forming regions and systematic variations in binary properties with primary mass can be interpreted in the context of this scenario.
High-spatial-resolution nanoparticle x-ray fluorescence tomography
NASA Astrophysics Data System (ADS)
Larsson, Jakob C.; Vågberg, William; Vogt, Carmen; Lundström, Ulf; Larsson, Daniel H.; Hertz, Hans M.
2016-03-01
X-ray fluorescence tomography (XFCT) has potential for high-resolution 3D molecular x-ray bio-imaging. In this technique the fluorescence signal from targeted nanoparticles (NPs) is measured, providing information about the spatial distribution and concentration of the NPs inside the object. However, present laboratory XFCT systems typically have limited spatial resolution (>1 mm) and suffer from long scan times and high radiation dose even at high NP concentrations, mainly due to low efficiency and poor signal-to-noise ratio. We have developed a laboratory XFCT system with high spatial resolution (sub-100 μm) at low NP concentrations, with vastly decreased scan times and dose, opening up possibilities for in-vivo small-animal imaging research. The system consists of a high-brightness liquid-metal-jet microfocus x-ray source, x-ray focusing optics and an energy-resolving photon-counting detector. By using the source's characteristic 24 keV line emission together with carefully matched molybdenum nanoparticles, the Compton background is greatly reduced, increasing the SNR. Each measurement provides information about the spatial distribution and concentration of the Mo nanoparticles. A filtered back-projection method is used to produce the final XFCT image.
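The filtered back-projection step can be sketched for a toy parallel-beam geometry (synthetic phantom and ramp filter, not the authors' reconstruction code):

```python
import numpy as np

# Forward-project a small phantom, ramp-filter each projection in
# Fourier space, then back-project: minimal parallel-beam FBP.
N = 64
x = np.arange(N) - N // 2
X, Y = np.meshgrid(x, x)
phantom = np.exp(-((X - 10) ** 2 + Y ** 2) / 8.0)   # off-center blob

angles = np.linspace(0, np.pi, 90, endpoint=False)
t_bins = np.arange(N + 1) - N // 2 - 0.5            # detector bin edges

recon = np.zeros_like(phantom)
freqs = np.fft.fftfreq(N)
for theta in angles:
    t = X * np.cos(theta) + Y * np.sin(theta)       # detector coordinate
    proj, _ = np.histogram(t.ravel(), bins=t_bins,
                           weights=phantom.ravel())  # line integrals
    filt = np.real(np.fft.ifft(np.fft.fft(proj) * np.abs(freqs)))
    recon += np.interp(t, x, filt)                  # back-project

iy, ix = np.unravel_index(np.argmax(recon), recon.shape)
# The reconstruction peaks near the blob's true location (10, 0).
print(x[ix], x[iy])
```

In XFCT the "projections" are fluorescence line integrals rather than attenuation sinograms, but the reconstruction arithmetic is the same.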
Watson, Jean-Paul; Murray, Regan; Hart, William E.
2009-11-13
We report that the sensor placement problem in contamination warning system design for municipal water distribution networks involves maximizing the protection level afforded by limited numbers of sensors, typically quantified as the expected impact of a contamination event; the issue of how to mitigate against high-consequence events is either handled implicitly or ignored entirely. Consequently, expected-case sensor placements run the risk of failing to protect against high-consequence 9/11-style attacks. In contrast, robust sensor placements address this concern by focusing strictly on high-consequence events and placing sensors to minimize the impact of these events. We introduce several robust variations of the sensor placement problem, distinguished by how they quantify the potential damage due to high-consequence events. We explore the nature of robust versus expected-case sensor placements on three real-world large-scale distribution networks. We find that robust sensor placements can yield large reductions in the number and magnitude of high-consequence events, with only modest increases in expected impact. Finally, the ability to trade off between robust and expected-case impacts is a key unexplored dimension in contamination warning system design.
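A toy brute-force contrast of expected-case versus robust (minimax) placement on a synthetic impact matrix (made-up data, not a real distribution network):

```python
import itertools

import numpy as np

rng = np.random.default_rng(2)

# impact[e, s]: damage of contamination event e if sensor s detects it;
# with several sensors placed, whichever detects the event earliest
# (i.e. with the lowest impact) determines the realized damage.
n_events, n_sensors, k = 40, 8, 2
impact = rng.uniform(1.0, 10.0, size=(n_events, n_sensors))

def placement_impacts(sensors):
    """Per-event realized impact under a placement."""
    return impact[:, list(sensors)].min(axis=1)

best_exp, best_rob = None, None
for combo in itertools.combinations(range(n_sensors), k):
    imps = placement_impacts(combo)
    if best_exp is None or imps.mean() < placement_impacts(best_exp).mean():
        best_exp = combo                 # minimize expected impact
    if best_rob is None or imps.max() < placement_impacts(best_rob).max():
        best_rob = combo                 # minimize worst-case impact

# The robust placement trades a modest rise in expected impact for a
# lower worst-case (high-consequence) impact.
print(placement_impacts(best_rob).max() <= placement_impacts(best_exp).max())
```

Real formulations replace the brute-force loop with mixed-integer programming, but the expected-vs-minimax objective contrast is the same.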
Assessing the ability of plants to respond to climatic change through distribution shifts
Mark W. Schwartz
1996-01-01
Predictions of future global warming suggest northward shifts of up to 800 km in the equilibrium distributions of plant species. Historical data estimating the maximum rate of tree distribution shifts (migration) suggest that most species will not keep pace with future rates of human-induced climatic change. Previous plant migrations have occurred at rates typically...