Park, Subok; Gallas, Bradon D; Badano, Aldo; Petrick, Nicholas A; Myers, Kyle J
2007-04-01
A previous study [J. Opt. Soc. Am. A 22, 3 (2005)] has shown that human efficiency for detecting a Gaussian signal at a known location in non-Gaussian distributed lumpy backgrounds is approximately 4%. This human efficiency is much less than the reported 40% efficiency that has been documented for Gaussian-distributed lumpy backgrounds [J. Opt. Soc. Am. A 16, 694 (1999) and J. Opt. Soc. Am. A 18, 473 (2001)]. We conducted a psychophysical study with a number of changes, specifically in display-device calibration and data scaling, from the design of the aforementioned study. Human efficiency relative to the ideal observer was found again to be approximately 5%. Our variance analysis indicates that neither scaling nor display made a statistically significant difference in human performance for the task. We conclude that the non-Gaussian distributed lumpy background is a major factor in our low human-efficiency results.
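The efficiency figures quoted above follow the standard definition in this literature: the squared ratio of the human observer's detectability index d' to the ideal observer's. A one-line sketch (the d' values are illustrative, not data from the study):

```python
def observer_efficiency(d_human, d_ideal):
    """Statistical efficiency of a human observer relative to the ideal
    observer: the squared ratio of their detectability indices (d')."""
    return (d_human / d_ideal) ** 2

# illustrative values only: a human d' of 0.4 against an ideal d' of 2.0
# gives (0.4 / 2.0)**2 ≈ 0.04, i.e. the ~4% regime reported above
print(observer_efficiency(0.4, 2.0))
```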
Zhong, Cheng; Liu, Lei; Zhao, Jing
2014-01-01
An efficient location-based query algorithm that protects user privacy in distributed networks is presented. The algorithm uses user location indexes and multiple parallel threads to quickly search for and select candidate anonymity sets that contain more users, with more uniformly distributed locations, which accelerates the temporal-spatial anonymization operations; it also lets users configure custom privacy-preserving location query requests. Simulation results show that the proposed algorithm can serve location queries for more users simultaneously, improve the performance of the anonymity server, and satisfy users' anonymous location requests. PMID:24790579
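The candidate anonymity sets described above are built from nearby users; the basic building block can be sketched as a k-anonymity spatial cloak that replaces the querying user's exact position with a bounding box covering at least k users (a simplified stand-in for the paper's indexed, multithreaded search; all names are ours):

```python
def cloak(user_xy, others, k):
    """Return a bounding box (min_x, min_y, max_x, max_y) covering the
    querying user and the k-1 nearest other users: a minimal
    k-anonymity spatial cloak."""
    ux, uy = user_xy
    nearest = sorted(others, key=lambda p: (p[0] - ux) ** 2 + (p[1] - uy) ** 2)[:k - 1]
    pts = [user_xy] + nearest
    xs = [p[0] for p in pts]
    ys = [p[1] for p in pts]
    return (min(xs), min(ys), max(xs), max(ys))

# the box sent to the server covers the user plus its 2 nearest neighbours
print(cloak((0, 0), [(1, 0), (5, 5), (0, 2)], k=3))  # → (0, 0, 1, 2)
```

The location server then answers the query for the whole box, so the user is indistinguishable from the other users inside it.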
Shannon, Gary William; Buker, Carol Marie
2010-01-01
Teledermatology provides a partial solution to the problem of accessibility to dermatology services in underserved areas, yet methodologies to determine the locations and geographic dimensions of these areas and the locational efficiency of remote teledermatology sites have been found wanting. This article illustrates an innovative Geographic Information Systems approach using dermatologists' addresses, U.S. Census population data, and the Topologically Integrated Geographic Encoding and Referencing System. Travel-time-based service areas were calculated and mapped for each dermatologist in the state of Kentucky and for possible locations of several remote teledermatology sites. Populations within the current and possible remote service areas were determined. These populations and associated maps permit assessment of the locational efficiency of the current distribution of dermatologists, location of underserved areas, and the potential contribution of proposed hypothetical teledermatology sites. This approach is a valuable and practical tool for evaluating access to current distributions of dermatologists as well as planning for and implementing teledermatology.
Cluster analysis for determining distribution center location
NASA Astrophysics Data System (ADS)
Lestari Widaningrum, Dyah; Andika, Aditya; Murphiyanto, Richard Dimas Julian
2017-12-01
Determination of distribution facility locations is critical to surviving today's highly competitive business environment. Companies can operate multiple distribution centers to mitigate supply chain risk, which raises the questions of how many facilities to provide and where. This study examines a fast-food restaurant brand located in Greater Jakarta, one of the top five fast-food restaurant chains by retail sales. The study proceeded in three stages: compiling spatial data, cluster analysis, and network analysis. The cluster analysis results inform the location of an additional distribution center, and the network analysis results show a more efficient distribution process with shorter travel distances.
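The cluster-analysis stage can be illustrated with a plain k-means pass over outlet coordinates, with the fitted cluster centres serving as candidate distribution-centre sites (the coordinates and initial centres below are made up):

```python
def kmeans(points, centers, iters=20):
    """Plain k-means: assign each outlet to its nearest candidate
    distribution centre, then move each centre to the mean of its
    assigned outlets; repeat until stable."""
    for _ in range(iters):
        groups = [[] for _ in centers]
        for p in points:
            i = min(range(len(centers)),
                    key=lambda c: (p[0] - centers[c][0]) ** 2
                                + (p[1] - centers[c][1]) ** 2)
            groups[i].append(p)
        centers = [
            (sum(p[0] for p in g) / len(g), sum(p[1] for p in g) / len(g))
            if g else c
            for g, c in zip(groups, centers)
        ]
    return centers

# two obvious clusters of outlets; the fitted centres are natural DC sites
outlets = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
print(kmeans(outlets, centers=[(0, 0), (10, 10)]))
```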
NASA Astrophysics Data System (ADS)
Scheingraber, Christoph; Käser, Martin; Allmann, Alexander
2017-04-01
Probabilistic seismic risk analysis (PSRA) is a well-established method for modelling loss from earthquake events. In the insurance industry, it is widely employed for probabilistic modelling of loss to a distributed portfolio. In this context, precise exposure locations are often unknown, which results in considerable loss uncertainty. The treatment of exposure uncertainty has already been identified as an area where PSRA would benefit from increased research attention, but epistemic location uncertainty has so far received little focused study. We propose a new framework for efficient treatment of location uncertainty. To demonstrate its usefulness, a large number of synthetic portfolios resembling real-world portfolios are systematically analyzed. We investigate the effect of portfolio characteristics such as value distribution, portfolio size, or proportion of risk items with unknown coordinates on loss variability. Several sampling criteria to increase the computational efficiency of the framework are proposed and put into the wider context of well-established Monte Carlo variance reduction techniques, and the performance of each criterion is analyzed.
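The location-uncertainty treatment can be sketched as plain Monte Carlo: each risk item with unknown coordinates is assigned a random zone, and hence a random damage ratio, per trial (the zone ratios and values below are hypothetical, and the paper's variance-reduction sampling criteria are not reproduced):

```python
import random
import statistics

def loss_uncertainty(known, unknown_values, zone_ratios, trials=2000, seed=7):
    """Monte-Carlo treatment of unknown exposure locations: items with
    known coordinates contribute a fixed loss; each item with unknown
    coordinates draws a random zone (damage ratio) per trial.
    Returns (mean, stdev) of total portfolio loss over all trials."""
    rng = random.Random(seed)
    base = sum(v * r for v, r in known)           # deterministic part
    losses = [
        base + sum(v * rng.choice(zone_ratios) for v in unknown_values)
        for _ in range(trials)
    ]
    return statistics.mean(losses), statistics.stdev(losses)

# one known item (loss 100 * 0.1 = 10) plus two items of value 50 whose
# zone, and hence damage ratio 0.0 or 0.2, is unknown: mean loss ≈ 20
m, s = loss_uncertainty(known=[(100.0, 0.1)], unknown_values=[50.0, 50.0],
                        zone_ratios=[0.0, 0.2])
print(m, s)
```

The spread `s` is the loss variability attributable purely to the unknown locations, which is exactly the quantity the framework above studies.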
Evaluation of two typical distributed energy systems
NASA Astrophysics Data System (ADS)
Han, Miaomiao; Tan, Xiu
2018-03-01
This paper evaluates two natural-gas distributed energy systems, one driven by a gas (internal combustion) engine and one by a gas turbine, using the first and second laws of thermodynamics to assess each system in terms of both the "quantity" and the "quality" of energy. The calculations show that the engine-driven distributed energy station achieves a higher primary energy utilization rate but a lower exergy efficiency, whereas the gas-turbine-driven station achieves a higher exergy efficiency but a lower primary energy utilization rate. When configuring such a system, the appropriate technology and unit-configuration plan should be determined from the project's actual load profile together with practical factors such as its location, setting, and environmental requirements. Based on the "quality" measure, an energy efficiency index for waste heat utilization is also proposed.
Collaborative Indoor Access Point Localization Using Autonomous Mobile Robot Swarm.
Awad, Fahed; Naserllah, Muhammad; Omar, Ammar; Abu-Hantash, Alaa; Al-Taj, Abrar
2018-01-31
Localization of access points has become an important research problem due to the wide range of applications it addresses such as dismantling critical security threats caused by rogue access points or optimizing wireless coverage of access points within a service area. Existing proposed solutions have mostly relied on theoretical hypotheses or computer simulation to demonstrate the efficiency of their methods. The techniques that rely on estimating the distance using samples of the received signal strength usually assume prior knowledge of the signal propagation characteristics of the indoor environment at hand and tend to take a relatively large number of uniformly distributed random samples. This paper presents an efficient and practical collaborative approach to detect the location of an access point in an indoor environment without any prior knowledge of the environment. The proposed approach comprises a swarm of wirelessly connected mobile robots that collaboratively and autonomously collect a relatively small number of non-uniformly distributed random samples of the access point's received signal strength. These samples are used to efficiently and accurately estimate the location of the access point. The experimental testing verified that the proposed approach can identify the location of the access point in an accurate and efficient manner. PMID:29385042
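As a contrast to the paper's swarm-based method, a common naive baseline for RSS-only access-point localization is the signal-strength-weighted centroid of the sample positions; a sketch (the sample data are made up, and this is not the authors' algorithm):

```python
def weighted_centroid(samples):
    """RSS-weighted centroid baseline for access-point localization:
    each sample is (x, y, rss_dbm); dBm values are converted to linear
    power so stronger readings pull the estimate harder."""
    weighted = [(x, y, 10 ** (rss / 10.0)) for x, y, rss in samples]
    wsum = sum(w for _, _, w in weighted)
    return (sum(x * w for x, _, w in weighted) / wsum,
            sum(y * w for _, y, w in weighted) / wsum)

# symmetric, equal-strength readings around (5, 5) centre the estimate
est = weighted_centroid([(0, 5, -60), (10, 5, -60), (5, 0, -60), (5, 10, -60)])
print(est)  # ≈ (5.0, 5.0) by symmetry
```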
Geostatistical models are appropriate for spatially distributed data measured at irregularly spaced locations. We propose an efficient Markov chain Monte Carlo (MCMC) algorithm for fitting Bayesian geostatistical models with substantial numbers of unknown parameters to sizable...
Source polarization effects in an optical fiber fluorosensor
NASA Technical Reports Server (NTRS)
Egalon, Claudio O.; Rogowski, Robert S.
1992-01-01
The exact field solution of a step-index profile fiber was used to determine the injection efficiency of a thin-film distribution of polarized sources located in the cladding of an optical fiber. Previous results for random source orientation were confirmed. The power efficiency, P(eff), of a polarized distribution of sources was found to behave similarly to that of a fiber with randomly oriented sources; however, for sources polarized in either the x or y direction, P(eff) was higher.
Seaman, Shaun R; Hughes, Rachael A
2018-06-01
Estimating the parameters of a regression model of interest is complicated by missing data on the variables in that model. Multiple imputation is commonly used to handle these missing data. Joint model multiple imputation and full-conditional specification multiple imputation are known to yield imputed data with the same asymptotic distribution when the conditional models of full-conditional specification are compatible with that joint model. We show that this asymptotic equivalence of imputation distributions does not imply that joint model multiple imputation and full-conditional specification multiple imputation will also yield asymptotically equally efficient inference about the parameters of the model of interest, nor that they will be equally robust to misspecification of the joint model. When the conditional models used by full-conditional specification multiple imputation are linear, logistic and multinomial regressions, these are compatible with a restricted general location joint model. We show that multiple imputation using the restricted general location joint model can be substantially more asymptotically efficient than full-conditional specification multiple imputation, but this typically requires very strong associations between variables. When associations are weaker, the efficiency gain is small. Moreover, when there is substantial missingness in the outcome variable, full-conditional specification multiple imputation is shown to be potentially much more robust than joint model multiple imputation to misspecification of the restricted general location model.
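A single conditional-model step of the kind full-conditional specification iterates can be sketched as stochastic regression imputation for one continuous variable (toy data; real FCS cycles such steps across all incomplete variables and pools over multiple imputed datasets):

```python
import random

def impute_stochastic(xs, ys, seed=0):
    """One conditional-model step of the kind FCS iterates: fit y ~ x on
    complete cases by least squares, then fill each missing y (None) with
    the fitted value plus Gaussian noise at the residual scale."""
    rng = random.Random(seed)
    obs = [(x, y) for x, y in zip(xs, ys) if y is not None]
    n = len(obs)
    mx = sum(x for x, _ in obs) / n
    my = sum(y for _, y in obs) / n
    b = (sum((x - mx) * (y - my) for x, y in obs)
         / sum((x - mx) ** 2 for x, _ in obs))
    a = my - b * mx
    resid_sd = (sum((y - (a + b * x)) ** 2 for x, y in obs)
                / max(n - 2, 1)) ** 0.5
    return [y if y is not None else a + b * x + rng.gauss(0, resid_sd)
            for x, y in zip(xs, ys)]

# complete cases lie exactly on y = 2x + 1, so the missing value at x = 3
# is imputed as 7 (zero residual scale here, so no noise is added)
print(impute_stochastic([0, 1, 2, 3], [1, 3, 5, None]))
```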
Lin, Yuan; Zhang, Zhongzhi
2013-03-07
The trapping process in polymer systems constitutes a fundamental mechanism for various other dynamical processes taking place in these systems. In this paper, we study the trapping problem in two representative polymer networks, Cayley trees and Vicsek fractals, which separately model dendrimers and regular hyperbranched polymers. Our goal is to explore the impact of trap location on the efficiency of trapping in these two important polymer systems, with the efficiency being measured by the average trapping time (ATT), that is, the average of the source-to-trap mean first-passage time over every starting point in the whole network. For Cayley trees, we derive an exact analytic formula for the ATT to an arbitrary trap node, based on which we further obtain the explicit expression of the ATT for the case that the trap is uniformly distributed. For Vicsek fractals, we provide the closed-form solution for the ATT to a peripheral node farthest from the central node, as well as numerical solutions for the case when the trap is placed on other nodes. Moreover, we derive the exact formula for the ATT corresponding to the trapping problem when the trap has a uniform distribution over all nodes. Our results show that the influence of trap location on the trapping efficiency is completely different for the two polymer networks. In Cayley trees, the leading scaling of the ATT increases with the shortest distance between the trap and the central node, implying that the trap's position has an essential impact on the trapping efficiency; in Vicsek fractals, the effect of trap location is negligible, since the dominant behavior of the ATT is identical, irrespective of where the trap is placed. We also show that for all the trapping problems studied, the trapping process is more efficient in Cayley trees than in Vicsek fractals. We demonstrate that all differences related to trapping in the two polymer systems are rooted in their underlying topological structures.
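The mean first-passage times underlying the ATT can be computed exactly on any small network by solving the linear system T_i = 1 + (1/deg(i)) Σ_{j∼i} T_j with T_trap = 0; averaging the solution over starting points gives the ATT for that trap. A dense-solver sketch, practical only for small graphs (not the paper's analytic derivation):

```python
def mean_first_passage(adj, trap):
    """Exact mean first-passage times to a trap on an unweighted graph,
    solving T_i = 1 + mean of T_j over neighbours j, with T_trap = 0."""
    nodes = [v for v in adj if v != trap]
    idx = {v: i for i, v in enumerate(nodes)}
    n = len(nodes)
    # linear system A T = b with A = I - P restricted to non-trap nodes
    A = [[0.0] * n for _ in range(n)]
    b = [1.0] * n
    for v in nodes:
        A[idx[v]][idx[v]] = 1.0
        for w in adj[v]:
            if w != trap:
                A[idx[v]][idx[w]] -= 1.0 / len(adj[v])
    # plain Gaussian elimination with partial pivoting
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        b[c], b[p] = b[p], b[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            b[r] -= f * b[c]
            for k in range(c, n):
                A[r][k] -= f * A[c][k]
    T = [0.0] * n
    for r in range(n - 1, -1, -1):
        T[r] = (b[r] - sum(A[r][k] * T[k] for k in range(r + 1, n))) / A[r][r]
    return {v: T[idx[v]] for v in nodes}

# path 0-1-2 with the trap at node 0: the classic result T_1 = 3, T_2 = 4
print(mean_first_passage({0: [1], 1: [0, 2], 2: [1]}, trap=0))
```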
Reward-based spatial crowdsourcing with differential privacy preservation
NASA Astrophysics Data System (ADS)
Xiong, Ping; Zhang, Lefeng; Zhu, Tianqing
2017-11-01
In recent years, the popularity of mobile devices has transformed spatial crowdsourcing (SC) into a novel mode for performing complicated projects. Workers can perform tasks at specified locations in return for rewards offered by employers. Existing methods ensure the efficiency of their systems by submitting the workers' exact locations to a centralised server for task assignment, which can lead to privacy violations. Thus, implementing crowsourcing applications while preserving the privacy of workers' location is a key issue that needs to be tackled. We propose a reward-based SC method that achieves acceptable utility as measured by task assignment success rates, while efficiently preserving privacy. A differential privacy model ensures rigorous privacy guarantee, and Laplace noise is introduced to protect workers' exact locations. We then present a reward allocation mechanism that adjusts each piece of the reward for a task using the distribution of the workers' locations. Through experimental results, we demonstrate that this optimised-reward method is efficient for SC applications.
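The Laplace mechanism mentioned above can be sketched directly. Adding noise independently to each coordinate is a simple illustrative variant (the geo-indistinguishability literature uses a planar Laplace distribution instead), and all names, parameters, and the sensitivity value below are our assumptions:

```python
import math
import random

def laplace_noise(scale, rng):
    """Draw from a zero-mean Laplace distribution by inverse transform."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def perturb_location(x, y, epsilon, sensitivity=1.0, seed=None):
    """Report a worker's location with Laplace noise of scale
    sensitivity/epsilon added independently to each coordinate: a
    per-coordinate Laplace mechanism (illustrative variant)."""
    rng = random.Random(seed)
    b = sensitivity / epsilon
    return x + laplace_noise(b, rng), y + laplace_noise(b, rng)

# larger epsilon ⇒ less noise ⇒ better task-assignment utility, weaker privacy
print(perturb_location(3.0, 4.0, epsilon=0.5, seed=42))
```

The server then assigns tasks, and adjusts rewards, from the distribution of perturbed worker locations rather than exact ones, as the abstract describes.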
Modeling the South American range of the cerulean warbler
S. Barker; S. Benítez; J. Baldy; D. Cisneros Heredia; G. Colorado Zuluaga; F. Cuesta; I. Davidson; D. Díaz; A. Ganzenmueller; S. García; M. K. Girvan; E. Guevara; P. Hamel; A. B. Hennessey; O. L. Hernández; S. Herzog; D. Mehlman; M. I. Moreno; E. Ozdenerol; P. Ramoni-Perazzi; M. Romero; D. Romo; P. Salaman; T. Santander; C. Tovar; M. Welton; T. Will; C. Pedraza; G. Galindo
2006-01-01
Successful conservation of rare species requires detailed knowledge of the species' distribution. Modeling spatial distribution is an efficient means of locating potential habitats. Cerulean Warbler (Dendroica cerulea, Parulidae) was listed as a Vulnerable Species by the International Union for the Conservation of Nature and Natural Resources in...
DISPAQ: Distributed Profitable-Area Query from Big Taxi Trip Data.
Putri, Fadhilah Kurnia; Song, Giltae; Kwon, Joonho; Rao, Praveen
2017-09-25
One of the crucial problems for taxi drivers is to efficiently locate passengers in order to increase profits. The rapid advancement and ubiquitous penetration of Internet of Things (IoT) technology into transportation industries enables us to provide taxi drivers with locations that have more potential passengers (more profitable areas) by analyzing and querying taxi trip data. In this paper, we propose a query processing system, called Distributed Profitable-Area Query (DISPAQ), which efficiently identifies profitable areas by exploiting the Apache Software Foundation's Spark framework and a MongoDB database. DISPAQ first maintains a profitable-area query index (PQ-index) by extracting area summaries and route summaries from raw taxi trip data. It then identifies candidate profitable areas by searching the PQ-index during query processing. Then, it exploits a Z-Skyline algorithm, which is an extension of skyline processing with a Z-order space filling curve, to quickly refine the candidate profitable areas. To improve the performance of distributed query processing, we also propose local Z-Skyline optimization, which reduces the number of dominant tests by distributing killer profitable areas to each cluster node. Through extensive evaluation with real datasets, we demonstrate that our DISPAQ system provides a scalable and efficient solution for processing profitable-area queries from huge amounts of big taxi trip data. PMID:28946679
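The Z-order space-filling curve underlying the Z-Skyline step maps 2-D grid cells to a 1-D key by bit interleaving, linearising areas while roughly preserving locality; a minimal sketch (cell coordinates are illustrative):

```python
def z_order(x, y, bits=16):
    """Interleave the bits of non-negative grid coordinates x and y into
    a Z-order (Morton) key: x bits at even positions, y bits at odd."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i)        # x bit -> even position
        key |= ((y >> i) & 1) << (2 * i + 1)    # y bit -> odd position
    return key

# the four cells of a 2x2 grid are visited in the familiar Z pattern
print([z_order(x, y) for y in range(2) for x in range(2)])  # → [0, 1, 2, 3]
```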
Multiobjective assessment of distributed energy storage location in electricity networks
NASA Astrophysics Data System (ADS)
Ribeiro Gonçalves, José António; Neves, Luís Pires; Martins, António Gomes
2017-07-01
This paper presents a multiobjective optimisation methodology, based on genetic algorithms, that informs a decision maker of the economic and technical impacts of possible management schemes for storage units, in order to choose the best locations for distributed storage devices. The methodology was applied to a case study, a known distribution network model in which the installation of distributed lithium-ion battery storage units was tested. The results show that the charging/discharging profile of the batteries significantly influences the choice of their best location, and that these choices matter for the different network management objectives, for example reducing network energy losses or minimising voltage deviations. Results also show that an energy-only service is difficult to make cost-effective with the tested systems, owing both to capital cost and to conversion efficiency.
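The multiobjective comparison described above ultimately reduces to identifying non-dominated candidates; a minimal Pareto-front filter (the objective values are made up, and the paper's genetic algorithm is not reproduced here):

```python
def pareto_front(points):
    """Keep the non-dominated candidates when all objectives are to be
    minimised (e.g. network energy losses and voltage deviation)."""
    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    return [p for p in points if not any(dominates(q, p) for q in points)]

# candidate storage placements scored as (losses, voltage deviation):
# (3, 3) is dominated by (2, 2) and drops out of the trade-off front
print(pareto_front([(1.0, 4.0), (2.0, 2.0), (3.0, 3.0), (4.0, 1.0)]))
```

The decision maker then picks among the remaining trade-off solutions according to which objective matters more.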
Modeling the South American range of the cerulean warbler
S. Barker; S. Benítez; J. Baldy; D. Cisneros Heredia; G. Colorado Zuluaga; F. Cuesta; I. Davidson; D. Díaz; A. Ganzenmueller; S. García; M. K. Girvan; E. Guevara; P. Hamel; A. B. Hennessey; O. L. Hernández; S. Herzog; D. Mehlman; M. I. Moreno; E. Ozdenerol; P. Ramoni-Perazzi; M. Romero; D. Romo; P. Salaman; T. Santander; C. Tovar; M. Welton; T. Will; C. Galindo Pedraza
2007-01-01
Successful conservation of rare species requires detailed knowledge of the species' distribution. Modeling spatial distribution is an efficient means of locating potential habitats. Cerulean Warbler (Dendroica cerulea, Parulidae) was listed as a Vulnerable Species by the International Union for the Conservation of Nature and Natural Resources in 2004...
Quantum partial search for uneven distribution of multiple target items
NASA Astrophysics Data System (ADS)
Zhang, Kun; Korepin, Vladimir
2018-06-01
The quantum partial search algorithm is an approximate search: it aims to find a target block (a block that contains target items) rather than a target item itself, and it runs a little faster than full Grover search. In this paper, we consider quantum partial search for multiple target items unevenly distributed over a database (target blocks contain different numbers of target items). The algorithm we describe can locate one of the target blocks. Efficiency is measured by the number of queries to the oracle, and we optimize the algorithm to improve it. Using a perturbation method, we find that the algorithm runs fastest when the target items are evenly distributed over the database.
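Partial search is built from standard Grover iterations applied globally and within blocks. The core iteration can be followed classically because the state stays symmetric over target and non-target items; a sketch of that amplitude bookkeeping for the uniform case (the uneven-distribution analysis above modifies the iteration counts):

```python
import math

def grover_success(n_items, targets, iterations):
    """Track the common amplitude of target vs non-target items through
    Grover iterations (oracle sign-flip, then inversion about the mean),
    starting from the uniform superposition; return success probability."""
    amp_t = amp_o = 1 / math.sqrt(n_items)   # target / other amplitudes
    for _ in range(iterations):
        amp_t = -amp_t                        # oracle flips target sign
        mean = (targets * amp_t + (n_items - targets) * amp_o) / n_items
        amp_t, amp_o = 2 * mean - amp_t, 2 * mean - amp_o
    return targets * amp_t ** 2               # total success probability

# N = 16, one target: ~(pi/4)*sqrt(16) ≈ 3 queries nearly saturate success
best = round(math.pi / 4 * math.sqrt(16))
print(grover_success(16, 1, best))            # ≈ 0.96
```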
The Metadata Cloud: The Last Piece of a Distributed Data System Model
NASA Astrophysics Data System (ADS)
King, T. A.; Cecconi, B.; Hughes, J. S.; Walker, R. J.; Roberts, D.; Thieman, J. R.; Joy, S. P.; Mafi, J. N.; Gangloff, M.
2012-12-01
Distributed data systems have existed ever since systems were networked together. Over the years, the model for distributed data systems has evolved from basic file transfer to client-server, to multi-tiered, to grid, and finally to cloud-based systems. Initially, metadata was tightly coupled to the data, either embedded in the same file as the data or co-located in commonly named files. As the sources of data multiplied, data volumes increased, and services specialized to improve efficiency, a cloud system model emerged. In a cloud system, computing and storage are provided as services, with accessibility emphasized over physical location; computation and data clouds are common implementations. Effectively using these data and computation capabilities requires metadata. When metadata is stored separately from the data, a metadata cloud is formed. With a metadata cloud, information and knowledge about data resources can migrate efficiently from system to system, enabling services and allowing the data to remain efficiently stored until used. This is especially important with "Big Data", where movement of the data is limited by bandwidth. We examine how the metadata cloud completes a general distributed data system model, how standards play a role, and how this relates to existing types of cloud computing. We also look at the major science data systems in existence and compare each to the generalized cloud system model.
Forwarding Pointers for Efficient Location Management in Distributed Mobile Environments
1994-09-01
Signalling traffic on the SS7 signalling system (capacity of 56 Kbps) is expected to be 4-11 times greater for cellular networks than for ISDN, and 3-4 times ... load. Thus location update will become a major bottleneck at the switches (such as SS7), and mechanisms to control the cost of location update are ... ACM, pp. 19-28, Oct. 1994. [3] Kathleen S. Meier-Hellstern, et al., "The Use of SS7 and GSM to support high density personal communications"
Fiber optic reference frequency distribution to remote beam waveguide antennas
NASA Technical Reports Server (NTRS)
Calhoun, Malcolm; Kuhnle, Paul; Law, Julius
1995-01-01
In the NASA/JPL Deep Space Network (DSN), radio science experiments (probing outer planet atmospheres, rings, gravitational waves, etc.) and very long baseline interferometry (VLBI) require ultra-stable, low phase noise reference frequency signals at the user locations. Typical locations for radio science/VLBI exciters and down-converters are the cone areas of the 34 m high efficiency antennas or the 70 m antennas, located several hundred meters from the reference frequency standards. Over the past three years, fiber optic distribution links have replaced coaxial cable distribution for reference frequencies to these antenna sites. Optical fibers are the preferred medium for distribution because of their low attenuation, immunity to EMI/IWI, and temperature stability. A new network of Beam Waveguide (BWG) antennas presently under construction in the DSN requires hydrogen maser stability at tens of kilometers distance from the frequency standards central location. The topic of this paper is the design and implementation of an optical fiber distribution link which provides ultra-stable reference frequencies to users at a remote BWG antenna. The temperature profile from the earth's surface to a depth of six feet over a time period of six months was used to optimize the placement of the fiber optic cables. In-situ evaluation of the fiber optic link performance indicates Allan deviation on the order of 1 x 10^-15 at 1000 and 10,000 seconds averaging time; thus, the link stability degradation due to environmental conditions still preserves hydrogen maser stability at the user locations. This paper reports on the implementation of optical fibers and electro-optic devices for distributing very stable, low phase noise reference signals to remote BWG antenna locations. Allan deviation and phase noise test results for a 16 km fiber optic distribution link are presented in the paper.
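The Allan deviation used above to quantify link stability can be computed from fractional-frequency samples by averaging them into blocks of m and taking the square root of half the mean squared difference of successive block averages; a minimal non-overlapping sketch:

```python
import math

def allan_deviation(freq, m):
    """Non-overlapping Allan deviation of fractional-frequency samples
    `freq` at averaging factor m (averaging time tau = m * tau0)."""
    # average the samples into consecutive blocks of length m
    bins = [sum(freq[i:i + m]) / m for i in range(0, len(freq) - m + 1, m)]
    diffs = [(b2 - b1) ** 2 for b1, b2 in zip(bins, bins[1:])]
    return math.sqrt(sum(diffs) / (2 * len(diffs)))

# a constant frequency offset (a pure phase ramp) averages out entirely
print(allan_deviation([1e-13] * 100, m=10))  # → 0.0
```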
A Computationally-Efficient Inverse Approach to Probabilistic Strain-Based Damage Diagnosis
NASA Technical Reports Server (NTRS)
Warner, James E.; Hochhalter, Jacob D.; Leser, William P.; Leser, Patrick E.; Newman, John A.
2016-01-01
This work presents a computationally-efficient inverse approach to probabilistic damage diagnosis. Given strain data at a limited number of measurement locations, Bayesian inference and Markov Chain Monte Carlo (MCMC) sampling are used to estimate probability distributions of the unknown location, size, and orientation of damage. Substantial computational speedup is obtained by replacing a three-dimensional finite element (FE) model with an efficient surrogate model. The approach is experimentally validated on cracked test specimens where full field strains are determined using digital image correlation (DIC). Access to full field DIC data allows for testing of different hypothetical sensor arrangements, facilitating the study of strain-based diagnosis effectiveness as the distance between damage and measurement locations increases. The ability of the framework to effectively perform both probabilistic damage localization and characterization in cracked plates is demonstrated and the impact of measurement location on uncertainty in the predictions is shown. Furthermore, the analysis time to produce these predictions is orders of magnitude less than a baseline Bayesian approach with the FE method by utilizing surrogate modeling and effective numerical sampling approaches.
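The Bayesian machinery described above can be sketched with a one-dimensional toy problem: a random-walk Metropolis sampler infers a damage location from strain measurements, with a cheap closed-form stand-in for the FE surrogate (the strain model, noise level, and all names below are illustrative assumptions, not the paper's models):

```python
import math
import random

def surrogate_strain(loc, sensors):
    """Cheap stand-in for the FE surrogate: strain decays with distance
    from the damage location (illustrative, not a real strain model)."""
    return [1.0 / (1.0 + abs(loc - s)) for s in sensors]

def metropolis_damage(measured, sensors, sigma, steps=5000, seed=3):
    """Random-walk Metropolis over the unknown damage location: accept a
    proposal with log-probability (misfit_old - misfit_new) / (2 sigma^2)."""
    rng = random.Random(seed)

    def misfit(l):
        return sum((m - p) ** 2
                   for m, p in zip(measured, surrogate_strain(l, sensors)))

    loc, cur, samples = 0.0, misfit(0.0), []
    for _ in range(steps):
        prop = loc + rng.gauss(0, 0.5)
        new = misfit(prop)
        if math.log(rng.random() + 1e-300) < (cur - new) / (2 * sigma ** 2):
            loc, cur = prop, new
        samples.append(loc)
    return samples

sensors = [0.0, 1.0, 2.0, 3.0]
measured = surrogate_strain(2.0, sensors)   # noise-free data at location 2
post = metropolis_damage(measured, sensors, sigma=0.05)
# the posterior samples concentrate near the true damage location, 2.0
```

Replacing `surrogate_strain` with a trained surrogate of a 3-D FE model, and the scalar location with (location, size, orientation), recovers the structure of the approach above.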
Multipositional silica-coated silver nanoparticles for high-performance polymer solar cells.
Choi, Hyosung; Lee, Jung-Pil; Ko, Seo-Jin; Jung, Jae-Woo; Park, Hyungmin; Yoo, Seungmin; Park, Okji; Jeong, Jong-Ryul; Park, Soojin; Kim, Jin Young
2013-05-08
We demonstrate high-performance polymer solar cells using the plasmonic effect of multipositional silica-coated silver nanoparticles. The location of the nanoparticles is critical for increasing light absorption and scattering via enhanced electric field distribution. The device incorporating nanoparticles between the hole transport layer and the active layer achieves a power conversion efficiency of 8.92% with an external quantum efficiency of 81.5%. These device efficiencies are the highest values reported to date for plasmonic polymer solar cells using metal nanoparticles.
Lopes, António Luís; Botelho, Luís Miguel
2013-01-01
In this paper, we describe a distributed coordination system that allows agents to seamlessly cooperate in problem solving by partially contributing to a problem solution and delegating the subproblems for which they do not have the required skills or knowledge to appropriate agents. The coordination mechanism relies on a dynamically built semantic overlay network that allows the agents to efficiently locate, even in very large unstructured networks, the necessary skills for a specific problem. Each agent performs partial contributions to the problem solution using a new distributed goal-directed version of the Graphplan algorithm. This new goal-directed version of the original Graphplan algorithm provides an efficient solution to the problem of "distraction", which most forward-chaining algorithms suffer from. We also discuss a set of heuristics to be used in the backward-search process of the planning algorithm in order to distribute this process amongst idle agents in an attempt to find a solution in less time. The evaluation results show that our approach is effective in building a scalable and efficient agent society capable of solving complex distributable problems. PMID:23704885
Air velocity distribution in a commercial broiler house
USDA-ARS?s Scientific Manuscript database
Increasing air velocity during tunnel ventilation in commercial broiler production facilities improves production efficiency, and many housing design specifications require a minimum air velocity. Air velocities are typically assessed with a hand-held velocity meter at random locations, rather than ...
Determination of a Limited Scope Network's Lightning Detection Efficiency
NASA Technical Reports Server (NTRS)
Rompala, John T.; Blakeslee, R.
2008-01-01
This paper outlines a modeling technique to map lightning detection efficiency variations over a region surveyed by a sparse array of ground-based detectors. A reliable flash peak current distribution (PCD) for the region serves as the technique's base. This distribution is recast as an event probability distribution function. The technique then uses the PCD together with information on site signal detection thresholds, the type of solution algorithm used, and range attenuation to formulate the probability that a flash at a specified location will yield a solution. Applying this technique to the full region produces detection efficiency contour maps specific to the parameters employed. These contours facilitate a comparative analysis of each parameter's effect on the network's detection efficiency. In an alternate application, this modeling technique gives an estimate of the number, strength, and distribution of events going undetected. This approach leads to a variety of event density contour maps; this application is also illustrated. The technique's base PCD can be empirical or analytical. A process for formulating an empirical PCD specific to the region and network being studied is presented. A new method for producing an analytical representation of the empirical PCD is also introduced.
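The flavor of the technique can be shown with a toy Monte Carlo sketch (not the authors' model): peak currents are drawn from an assumed lognormal PCD, range attenuation is idealized as 1/r, and a flash counts as detected when at least two sites receive a signal above threshold. All numbers (median current, thresholds, site layout) are hypothetical.

```python
import math
import random

def detection_efficiency(flash_xy, sites, threshold, n_trials=20000, seed=1):
    """Monte Carlo estimate of the probability that a flash at flash_xy
    yields a solution. Peak currents (kA) come from an assumed lognormal
    PCD; the signal at a site falls off as 1/r (idealized range
    attenuation); 'detected' requires >= 2 sites above threshold."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        peak = rng.lognormvariate(math.log(15.0), 0.7)  # hypothetical PCD, median ~15 kA
        n_detecting = sum(
            1 for sx, sy in sites
            if peak / max(math.hypot(sx - flash_xy[0], sy - flash_xy[1]), 1.0) >= threshold
        )
        if n_detecting >= 2:
            hits += 1
    return hits / n_trials

sites = [(0, 0), (100, 0), (0, 100), (100, 100)]               # detector positions, km
near = detection_efficiency((50, 50), sites, threshold=0.05)   # inside the array
far = detection_efficiency((400, 400), sites, threshold=0.05)  # well outside it
```

Mapping `detection_efficiency` over a grid of flash positions would give the kind of detection-efficiency contour map the abstract describes.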
Optimization of pressure gauge locations for water distribution systems using entropy theory.
Yoo, Do Guen; Chang, Dong Eil; Jun, Hwandon; Kim, Joong Hoon
2012-12-01
It is essential to select the optimal pressure gauge location for effective management and maintenance of water distribution systems. This study proposes an objective and quantified standard for selecting the optimal pressure gauge location by defining the pressure change at other nodes as a result of demand change at a specific node using entropy theory. Two cases are considered in terms of demand change: that in which demand at all nodes shows peak load by using a peak factor and that comprising the demand change of the normal distribution whose average is the base demand. The actual pressure change pattern is determined by using the emitter function of EPANET to reflect the pressure that changes practically at each node. The optimal pressure gauge location is determined by prioritizing the node that processes the largest amount of information it gives to (giving entropy) and receives from (receiving entropy) the whole system according to the entropy standard. The suggested model is applied to one virtual and one real pipe network, and the optimal pressure gauge location combination is calculated by implementing the sensitivity analysis based on the study results. These analysis results support the following two conclusions. Firstly, the installation priority of the pressure gauge in water distribution networks can be determined with a more objective standard through the entropy theory. Secondly, the model can be used as an efficient decision-making guide for gauge installation in water distribution systems.
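The entropy-based prioritization can be sketched in a few lines. The 3x3 sensitivity matrix below is made up for illustration (the paper derives actual pressure changes with EPANET's emitter function); giving and receiving entropies are Shannon entropies of each node's normalized column and row influences.

```python
import math

def shannon_entropy(values):
    """Shannon entropy (bits) of a set of non-negative influence values."""
    total = sum(values)
    probs = [v / total for v in values if v > 0]
    return -sum(p * math.log2(p) for p in probs)

# Hypothetical sensitivity matrix: dP[i][j] is the magnitude of the
# pressure change observed at node i when demand changes at node j.
dP = [
    [0.9, 0.3, 0.1],
    [0.3, 0.8, 0.4],
    [0.1, 0.4, 0.7],
]
n = len(dP)
giving = [shannon_entropy([dP[i][j] for i in range(n)]) for j in range(n)]  # info node j sends out
receiving = [shannon_entropy(dP[i]) for i in range(n)]                      # info node i takes in
priority = sorted(range(n), key=lambda k: giving[k] + receiving[k], reverse=True)
```

The top-ranked node (here the middle one, whose influence is spread most evenly across the system) would receive the first pressure gauge.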
Evaluating Domestic Hot Water Distribution System Options With Validated Analysis Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weitzel, E.; Hoeschele, M.
2014-09-01
A developing body of work is forming that collects data on domestic hot water consumption, water use behaviors, and energy efficiency of various distribution systems. A full distribution system developed in TRNSYS has been validated using field monitoring data and then exercised in a number of climates to understand climate impact on performance. This study builds upon previous analysis modelling work to evaluate differing distribution systems and the sensitivities of water heating energy and water use efficiency to variations of climate, load, distribution type, insulation and compact plumbing practices. Overall, 124 different TRNSYS models were simulated. Of the configurations evaluated, distribution losses account for 13-29% of the total water heating energy use and water use efficiency ranges from 11-22%. The base case, an uninsulated trunk and branch system, sees the most improvement in energy consumption by insulating and locating the water heater central to all fixtures. Demand recirculation systems are not projected to provide significant energy savings and in some cases increase energy consumption. Water use is most efficient with demand recirculation systems, followed by the insulated trunk and branch system with a central water heater. Compact plumbing practices and insulation have the most impact on energy consumption (2-6% for insulation and 3-4% per 10 gallons of enclosed volume reduced). The results of this work are useful in informing future development of water heating best practices guides as well as more accurate (and simulation-time efficient) distribution models for annual whole house simulation programs.
Twiddlenet: Metadata Tagging and Data Dissemination in Mobile Device Networks
2007-09-01
hosting a distributed data dissemination application. Stated simply, there are a multitude of handheld devices on the market that can communicate in...content (UGC) across a network of distributed devices. This sharing is accomplished through the use of descriptive metadata tags that are assigned to a...file once it has been shared. These metadata files are uploaded to a centralized portal and arranged for efficient UGC location and searching.
Where-Fi: a dynamic energy-efficient multimedia distribution framework for MANETs
NASA Astrophysics Data System (ADS)
Mohapatra, Shivajit; Carbunar, Bogdan; Pearce, Michael; Chaudhri, Rohit; Vasudevan, Venu
2008-01-01
Next generation mobile ad-hoc applications will revolve around users' need for sharing content/presence information with co-located devices. However, keeping such information fresh requires frequent meta-data exchanges, which could result in significant energy overheads. To address this issue, we propose distributed algorithms for energy efficient dissemination of presence and content usage information between nodes in mobile ad-hoc networks. First, we introduce a content dissemination protocol (called CPMP) for effectively distributing frequent small meta-data updates between co-located devices using multicast. We then develop two distributed algorithms that use the CPMP protocol to achieve "phase locked" wake up cycles for all the participating nodes in the network. The first algorithm is designed for fully-connected networks and then extended in the second to handle hidden terminals. The "phase locked" schedules are then exploited to adaptively transition the network interface to a deep sleep state for energy savings. We have implemented a prototype system (called "Where-Fi") on several Motorola Linux-based cell phone models. Our experimental results show that for all network topologies our algorithms were able to achieve "phase locking" between nodes even in the presence of hidden terminals. Moreover, we achieved battery lifetime extensions of as much as 28% for fully connected networks and about 20% for partially connected networks.
NASA Astrophysics Data System (ADS)
Liu, Lijuan; Zhang, Guiyang; Kong, Xiaobo; Liu, Yonggang; Xuan, Li
2018-01-01
A high-conversion-efficiency distributed feedback (DFB) laser based on a dye-doped holographic polymer dispersed liquid crystal (HPDLC) transmission grating structure is reported. Alignment polyimide (PI) films were used to control the orientation of the phase-separated liquid crystals (LCs) and thereby increase the refractive index difference between the LC and the polymer, providing better optical feedback. Lasing at 645.8 nm, near the maximum of the amplified spontaneous emission (ASE) spectrum, was obtained with a threshold as low as 0.97 μJ/pulse and a conversion efficiency as high as 1.6%. The laser performance under an electric field was also investigated and illustrated. This simple-configuration, one-step-fabrication organic dye laser shows the potential to realize ultra-low-cost plastic lasers.
An Efficient Numerical Approach for Nonlinear Fokker-Planck equations
NASA Astrophysics Data System (ADS)
Otten, Dustin; Vedula, Prakash
2009-03-01
Fokker-Planck equations that are nonlinear in their probability densities, which occur in many nonequilibrium systems relevant to mean-field interaction models, plasmas, and classical fermions and bosons, can be challenging to solve numerically. To address some of the underlying challenges in obtaining numerical solutions, we propose a quadrature-based moment method for efficient and accurate determination of transient (and stationary) solutions of nonlinear Fokker-Planck equations. In this approach the distribution function is represented as a collection of Dirac delta functions with corresponding quadrature weights and locations, which are in turn determined from constraints based on the evolution of generalized moments. Properties of the distribution function can be obtained by solving transport equations for the quadrature weights and locations. We will apply this computational approach to a wide range of problems, including the Desai-Zwanzig model (for nonlinear muscular contraction) and multivariate nonlinear Fokker-Planck equations describing classical fermions and bosons, and will also demonstrate good agreement with results obtained from Monte Carlo and other standard numerical methods.
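As a minimal illustration of representing the distribution by moving weighted delta functions, consider the linear Ornstein-Uhlenbeck Fokker-Planck equation dp/dt = γ ∂x(x p) + D ∂x²p, whose first two moment equations close exactly; a two-point quadrature {weight 1/2 at μ−σ, weight 1/2 at μ+σ} then reproduces the evolved mean and variance. This is only a linear toy case; the nonlinear, constraint-driven machinery of the paper is not reproduced here.

```python
import math

def ou_quadrature(mu0, var0, gamma, D, dt, steps):
    """Evolve the closed moment equations of the OU Fokker-Planck,
        d(mu)/dt = -gamma*mu,   d(var)/dt = -2*gamma*var + 2*D,
    by forward Euler, and return a two-point delta quadrature
    [(1/2, mu - sigma), (1/2, mu + sigma)] matching those moments."""
    mu, var = mu0, var0
    for _ in range(steps):
        mu += -gamma * mu * dt
        var += (-2.0 * gamma * var + 2.0 * D) * dt
    s = math.sqrt(var)
    return [(0.5, mu - s), (0.5, mu + s)]

# Long-time limit: mean -> 0, variance -> D/gamma = 0.5 (stationary solution)
nodes = ou_quadrature(mu0=2.0, var0=0.1, gamma=1.0, D=0.5, dt=1e-3, steps=20000)
```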
Stochastic Multi-Commodity Facility Location Based on a New Scenario Generation Technique
NASA Astrophysics Data System (ADS)
Mahootchi, M.; Fattahi, M.; Khakbazan, E.
2011-11-01
This paper extends two models for the stochastic multi-commodity facility location problem. The problem is formulated as a two-stage stochastic program. As the main contribution of this study, a new algorithm is applied to efficiently generate scenarios for uncertain, correlated customer demands. This algorithm uses Latin Hypercube Sampling (LHS) and a scenario reduction approach. The relation between customer satisfaction level and cost is considered in model I. A risk measure based on Conditional Value-at-Risk (CVaR) is embedded in optimization model II. Here, the structure of the network contains three facility layers: plants, distribution centers, and retailers. The first-stage decisions are the number, locations, and capacities of distribution centers. In the second stage, the decisions are the production quantities and the volumes of transportation between plants and customers.
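The scenario-generation step can be sketched as follows (stdlib only, hypothetical parameters): Latin Hypercube Sampling stratifies each marginal, and a Cholesky factor imposes the demand correlation. The paper's scenario-reduction step is omitted here.

```python
import random
from statistics import NormalDist

def lhs_normal(n, dim, rng):
    """Latin Hypercube Sample: one uniform draw per stratum of each
    marginal, mapped through the standard normal inverse CDF, with
    each column independently shuffled."""
    cols = []
    for _ in range(dim):
        strata = [(k + rng.random()) / n for k in range(n)]
        rng.shuffle(strata)
        cols.append([NormalDist().inv_cdf(u) for u in strata])
    return list(zip(*cols))

def correlate(samples, rho):
    """Impose correlation rho on 2-D standard-normal samples (Cholesky)."""
    a = (1.0 - rho * rho) ** 0.5
    return [(z1, rho * z1 + a * z2) for z1, z2 in samples]

rng = random.Random(42)
scenarios = correlate(lhs_normal(200, 2, rng), rho=0.8)  # 200 correlated demand scenarios
```

The stratification keeps the sample moments of each marginal close to their targets with far fewer scenarios than plain Monte Carlo would need.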
Trapping time statistics and efficiency of transport of optical excitations in dendrimers
NASA Astrophysics Data System (ADS)
Heijs, Dirk-Jan; Malyshev, Victor A.; Knoester, Jasper
2004-09-01
We theoretically study the trapping time distribution and the efficiency of the excitation energy transport in dendritic systems. Trapping of excitations, created at the periphery of the dendrimer, on a trap located at its core, is used as a probe of the efficiency of the energy transport across the dendrimer. The transport process is treated as incoherent hopping of excitations between nearest-neighbor dendrimer units and is described using a rate equation. We account for radiative and nonradiative decay of the excitations while diffusing across the dendrimer. We derive exact expressions for the Laplace transform of the trapping time distribution and the efficiency of trapping, and analyze those for various realizations of the energy bias, number of dendrimer generations, and relative rates for decay and hopping. We show that the essential parameter that governs the trapping efficiency is the product of the on-site excitation decay rate and the trapping time (mean first passage time) in the absence of decay.
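A toy version of the trapping calculation: collapse the dendrimer onto a chain of generations, let the excitation hop incoherently at rate `hop` and decay at rate `decay`, and solve the first-step equations for the probability of reaching the trap before decaying (the trapping efficiency). The chain reduction and all parameters are illustrative; the paper treats the full dendrimer with biased rates and the Laplace-transformed trapping-time distribution.

```python
def trapping_efficiency(n_sites, hop, decay, sweeps=5000):
    """Probability that an excitation created at the periphery (site n-1)
    is captured by the trap at the core (site 0) before it decays.
    First-step analysis gives a tridiagonal linear system,
        q_i = hop*(q_{i-1} + q_{i+1}) / (2*hop + decay)  (interior sites),
    with q_0 = 1 at the trap and a reflecting outer boundary,
    solved here by Gauss-Seidel iteration."""
    q = [0.0] * n_sites
    q[0] = 1.0  # the trap captures with certainty
    for _ in range(sweeps):
        for i in range(1, n_sites - 1):
            q[i] = hop * (q[i - 1] + q[i + 1]) / (2 * hop + decay)
        q[-1] = hop * q[-2] / (hop + decay)
    return q[-1]

e_slow_decay = trapping_efficiency(6, 1.0, 0.01)  # decay much slower than hopping
e_fast_decay = trapping_efficiency(6, 1.0, 0.5)   # decay competes with hopping
```

Consistent with the abstract's conclusion, the efficiency is governed by how the decay rate compares with the time needed to diffuse to the trap.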
The Ultimate Flow Controlled Wind Turbine Blade Airfoil
NASA Astrophysics Data System (ADS)
Seifert, Avraham; Dolgopyat, Danny; Friedland, Ori; Shig, Lior
2015-11-01
Active flow control is being studied as an enabling technology to enhance and maintain high efficiency of wind turbine blades, even with contaminated surfaces, unsteady winds, and off-design operating conditions. The study is focused on a 25% thick airfoil (DU91-W2-250) suitable for the mid-blade-radius location. Initially a clean airfoil was fabricated and tested, as well as compared to XFoil predictions. From these experiments, the evolution of the separation location was identified. Five locations for installing active flow control actuators are available on this airfoil, which uses both piezo-fluidic ("synthetic jet") actuators and Suction and Oscillatory Blowing (SaOB) actuators. We then evaluate the overall energy efficiency of both actuation concepts and their efficacy in controlling boundary layer separation. Since actuation is most efficient at low amplitudes when placed close to the separation location, distributed actuation is used. Following the completion of the baseline studies, the work has focused on airfoil instrumentation and extensive wind tunnel testing over a Reynolds number range of 0.2 to 1.5 million. Sample results will be presented and an outline for continued study will be discussed.
NASA Technical Reports Server (NTRS)
Jacobs, P. F.
1985-01-01
An investigation was conducted in the Langley 8-Foot Transonic Pressure Tunnel to determine the effect of aileron deflections on the aerodynamic characteristics of a subsonic energy efficient transport (EET) model. The semispan model had an aspect-ratio-10 supercritical wing and was configured with a conventionally located set of ailerons (i.e., a high-speed aileron located inboard and a low-speed aileron located outboard). Data for the model were taken over a Mach number range from 0.30 to 0.90 and an angle of attack range from approximately -2 deg to 10 deg. The Reynolds number was 2.5 million per foot for Mach number = 0.30 and 4 million per foot for the other Mach numbers. Model force and moment data, aileron effectiveness parameters, aileron hinge moment data, chordwise pressure distributions, and spanwise load data are presented.
ERIC Educational Resources Information Center
Glomm, G.; Harris, D.; Lo, T.F.
2005-01-01
Charter schools represent one part of the larger movement toward parental choice in education, which is intended to improve school efficiency and innovation. We hypothesize that the number of charter schools entering in a local education market depends on how closely the distribution of education programs in public and private schools matches the…
Axial Structure of High-Vacuum Planar Magnetron Discharge Space
NASA Astrophysics Data System (ADS)
Miura, Tsutomu
1999-09-01
The spatial structure of a high-vacuum planar magnetron discharge is theoretically investigated, taking into account the electron confinement. The boundary x_es of the electron confinement region depends on B_A, with E_a/B_A as the parameter (B_A: the magnetic flux density at the anode; E_a: the average electric field strength). The location at which the frequency of ionization events takes its maximum is expressed as C_nN * x_iep (C_nN: a factor related to the electron density distribution; x_iep: the distance from the cathode of the location at which the ionization is most efficient). With increasing E_a and B_A at a fixed E_a/B_A, the density of the confined energetic electrons increases. With increasing E_a, the region where ionization is efficient shifts to the cathode side, giving a high efficiency of the magnet. The boundary x_es as determined by the probe method agreed with the theoretical prediction.
Study on Dissemination Patterns in Location-Aware Gossiping Networks
NASA Astrophysics Data System (ADS)
Kami, Nobuharu; Baba, Teruyuki; Yoshikawa, Takashi; Morikawa, Hiroyuki
We study the properties of information dissemination over location-aware gossiping networks leveraging location-based real-time communication applications. Gossiping is a promising method for quickly disseminating messages in a large-scale system, but in its application to information dissemination for location-aware applications, it is important to consider the network topology and patterns of spatial dissemination over the network in order to achieve effective delivery of messages to potentially interested users. To this end, we propose a continuous-space network model extended from Kleinberg's small-world model applicable to actual location-based applications. Analytical and simulation-based study shows that the proposed network achieves high dissemination efficiency resulting from geographically neutral dissemination patterns as well as selective dissemination to proximate users. We have designed a highly scalable location management method capable of promptly updating the network topology in response to node movement and have implemented a distributed simulator to perform dynamic target pursuit experiments as one example of applications that are the most sensitive to message forwarding delay. The experimental results show that the proposed network surpasses other types of networks in pursuit efficiency and achieves the desirable dissemination patterns.
NASA Astrophysics Data System (ADS)
Prada, Jose Fernando
Keeping a contingency reserve in power systems is necessary to preserve the security of real-time operations. This work studies two different approaches to the optimal allocation of energy and reserves in the day-ahead generation scheduling process. Part I presents a stochastic security-constrained unit commitment model to co-optimize energy and the locational reserves required to respond to a set of uncertain generation contingencies, using a novel state-based formulation. The model is applied in an offer-based electricity market to allocate contingency reserves throughout the power grid, in order to comply with the N-1 security criterion under transmission congestion. The objective is to minimize expected dispatch and reserve costs, together with post contingency corrective redispatch costs, modeling the probability of generation failure and associated post contingency states. The characteristics of the scheduling problem are exploited to formulate a computationally efficient method, consistent with established operational practices. We simulated the distribution of locational contingency reserves on the IEEE RTS96 system and compared the results with the conventional deterministic method. We found that assigning locational spinning reserves can guarantee an N-1 secure dispatch accounting for transmission congestion at a reasonable extra cost. The simulations also showed little value of allocating downward reserves but sizable operating savings from co-optimizing locational nonspinning reserves. Overall, the results indicate the computational tractability of the proposed method. Part II presents a distributed generation scheduling model to optimally allocate energy and spinning reserves among competing generators in a day-ahead market. The model is based on the coordination between individual generators and a market entity. 
The proposed method uses forecasting, augmented pricing and locational signals to induce efficient commitment of generators based on firm posted prices. It is price-based but does not rely on multiple iterations, minimizes information exchange and simplifies the market clearing process. Simulations of the distributed method performed on a six-bus test system showed that, using an appropriate set of prices, it is possible to emulate the results of a conventional centralized solution, without need of providing make-whole payments to generators. Likewise, they showed that the distributed method can accommodate transactions with different products and complex security constraints.
DOE Office of Scientific and Technical Information (OSTI.GOV)
BENDER, SUSAN FAE ANN; RODACY, PHILIP J.; BARNETT, JAMES L.
The ultimate goal of many environmental measurements is to determine the risk posed to humans or ecosystems by various contaminants. Conventional environmental monitoring typically requires extensive sampling grids covering several media including air, water, soil and vegetation. A far more efficient, innovative and inexpensive tactic has been found using honeybees as sampling mechanisms. Members from a single bee colony forage over large areas (~2 x 10^6 m^2), making tens of thousands of trips per day, and return to a fixed location where sampling can be conveniently conducted. The bees are in direct contact with the air, water, soil and vegetation, where they encounter and collect any contaminants that are present in gaseous, liquid and particulate form. The monitoring of honeybees when they return to the hive provides a rapid method to assess chemical distributions and impacts (1). The primary goal of this technology is to evaluate the efficiency of the transport mechanism (honeybees) to the hive using preconcentrators to collect samples. Once the extent and nature of the contaminant exposure has been characterized, resources can be distributed and environmental monitoring designs efficiently directed to the most appropriate locations. Methyl salicylate, a chemical agent surrogate, was used as the target compound in this study.
A neural network approach to burst detection.
Mounce, S R; Day, A J; Wood, A S; Khan, A; Widdop, P D; Machell, J
2002-01-01
This paper describes how hydraulic and water quality data from a distribution network may be used to provide a more efficient leakage management capability for the water industry. The research presented concerns the application of artificial neural networks to the issue of detection and location of leakage in treated water distribution systems. An architecture for an Artificial Neural Network (ANN) based system is outlined. The neural network uses time series data produced by sensors to directly construct an empirical model for prediction and classification of leaks. Results are presented using data from an experimental site in Yorkshire Water's Keighley distribution system.
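A minimal stand-in for such a classifier (a single logistic unit on summary features of synthetic flow windows; the paper's network architecture, sensor data, and features are different) can be sketched as follows. A burst is simulated as a sustained flow offset with extra variability.

```python
import math
import random

def features(series):
    """(mean, standard deviation) summary of one flow/pressure window."""
    m = sum(series) / len(series)
    return [m, (sum((x - m) ** 2 for x in series) / len(series)) ** 0.5]

def standardize(rows):
    """Z-score each feature dimension so gradient descent behaves well."""
    cols = list(zip(*rows))
    ms = [sum(c) / len(c) for c in cols]
    ss = [(sum((v - m) ** 2 for v in c) / len(c)) ** 0.5 for c, m in zip(cols, ms)]
    return [[(v - m) / s for v, m, s in zip(r, ms, ss)] for r in rows]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-max(-30.0, min(30.0, z))))

def train(data, labels, lr=0.5, epochs=500):
    """Single logistic unit trained by stochastic gradient descent."""
    w, b = [0.0] * len(data[0]), 0.0
    for _ in range(epochs):
        for x, y in zip(data, labels):
            g = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) - y
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

rng = random.Random(0)
windows = [[10 + rng.gauss(0, 0.3) for _ in range(48)] for _ in range(40)] \
        + [[12 + rng.gauss(0, 0.9) for _ in range(48)] for _ in range(40)]  # normal, then burst
labels = [0] * 40 + [1] * 40
data = standardize([features(s) for s in windows])
w, b = train(data, labels)
```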
Stellarator Coil Design and Plasma Sensitivity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Long-Poe Ku and Allen H. Boozer
2010-11-03
The rich information contained in the plasma response to external magnetic perturbations can be used to help design stellarator coils more effectively. We demonstrate the feasibility by first developing a simple, direct method to study perturbations in stellarators that do not break stellarator symmetry and periodicity. The method applies a small perturbation to the plasma boundary and evaluates the resulting perturbed free-boundary equilibrium to build up a sensitivity matrix for the important physics attributes of the underlying configuration. Using this sensitivity information, design methods for better stellarator coils are then developed. The procedure and a proof-of-principle application are given that (1) determine the spatial distributions of external normal magnetic field at the location of the unperturbed plasma boundary to which the plasma properties are most sensitive, (2) determine the distributions of external normal magnetic field that can be produced most efficiently by distant coils, and (3) choose the ratios of the magnitudes of the efficiently produced magnetic distributions so the sensitive plasma properties can be controlled. Using these methods, sets of modular coils are found for the National Compact Stellarator Experiment (NCSX) that are either smoother or can be located much farther from the plasma boundary than those of the present design.
Modular, Reconfigurable, High-Energy Systems Stepping Stones
NASA Technical Reports Server (NTRS)
Howell, Joe T.; Carrington, Connie K.; Mankins, John C.
2005-01-01
Modular, Reconfigurable, High-Energy Systems are Stepping Stones to provide capabilities for energy-rich infrastructure strategically located in space to support a variety of exploration scenarios. Abundant renewable energy at lunar or L1 locations could support propellant production and storage in refueling scenarios that enable affordable exploration. Renewable energy platforms in geosynchronous Earth orbits can collect and transmit power to satellites, or to Earth-surface locations. Energy-rich space technologies also enable the use of electric-powered propulsion systems that could efficiently deliver cargo and exploration facilities to remote locations. A first step to an energy-rich space infrastructure is a 100-kWe class solar-powered platform in Earth orbit. The platform would utilize advanced technologies in solar power collection and generation, power management and distribution, thermal management, and electric propulsion. It would also provide a power-rich free-flying platform to demonstrate in space a portfolio of technology flight experiments. This paper presents a preliminary design concept for a 100-kWe solar-powered satellite with the capability to flight-demonstrate a variety of payload experiments and to utilize electric propulsion. State-of-the-art solar concentrators, highly efficient multi-junction solar cells, integrated thermal management on the arrays, and innovative deployable structure design and packaging make the 100-kW satellite feasible for launch on one existing launch vehicle. Higher voltage arrays and power management and distribution (PMAD) systems reduce or eliminate the need for massive power converters, and could enable direct-drive of high-voltage solar electric thrusters.
Locating inefficient links in a large-scale transportation network
NASA Astrophysics Data System (ADS)
Sun, Li; Liu, Like; Xu, Zhongzhi; Jie, Yang; Wei, Dong; Wang, Pu
2015-02-01
Based on data from a geographical information system (GIS) and daily commuting origin-destination (OD) matrices, we estimated the distribution of traffic flow in the San Francisco road network and studied Braess's paradox in a large-scale transportation network with realistic travel demand. We measured the variation of total travel time ΔT when a road segment is closed, and found that |ΔT| follows a power-law distribution both for ΔT < 0 and for ΔT > 0. This implies that most roads have a negligible effect on the efficiency of the road network, while the failure of a few crucial links would result in severe travel delays, and closure of a few inefficient links would counter-intuitively reduce travel costs considerably. Generating three theoretical networks, we discovered that the heterogeneously distributed travel demand may be the origin of the observed power-law distributions of |ΔT|. Finally, a genetic algorithm was used to pinpoint inefficient link clusters in the road network. We found that closing specific road clusters would further improve the transportation efficiency.
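The link-closure experiment can be reproduced on the textbook Braess network (not the San Francisco data) with a simple Method of Successive Averages user-equilibrium assignment: with the zero-cost shortcut open, all 4000 travelers pile onto it and total travel time rises; closing it lowers the total, i.e. ΔT < 0.

```python
FIXED = {"CB": 45.0, "AD": 45.0, "CD": 0.0}  # free-flow links (CD is the shortcut)
CONGESTIBLE = ("AC", "DB")                    # travel time = flow / 100

def link_flows(paths, flow):
    link = {}
    for p, f in zip(paths, flow):
        for e in p:
            link[e] = link.get(e, 0.0) + f
    return link

def link_time(e, f):
    return f / 100.0 if e in CONGESTIBLE else FIXED[e]

def total_travel_time(demand, use_shortcut, iters=2000):
    """User equilibrium via the Method of Successive Averages on the
    classic Braess network, then total (flow x time) over all links."""
    paths = [("AC", "CB"), ("AD", "DB")]
    if use_shortcut:
        paths.append(("AC", "CD", "DB"))
    flow = [demand / len(paths)] * len(paths)
    for k in range(1, iters + 1):
        link = link_flows(paths, flow)
        path_cost = [sum(link_time(e, link[e]) for e in p) for p in paths]
        best = path_cost.index(min(path_cost))  # all-or-nothing toward cheapest path
        flow = [f + ((demand if i == best else 0.0) - f) / k for i, f in enumerate(flow)]
    link = link_flows(paths, flow)
    return sum(f * link_time(e, f) for e, f in link.items())

with_shortcut = total_travel_time(4000, True)      # equilibrium cost 80 per traveler
without_shortcut = total_travel_time(4000, False)  # closing C->D: cost 65 per traveler
```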
Grist, Eric P M; Flegg, Jennifer A; Humphreys, Georgina; Mas, Ignacio Suay; Anderson, Tim J C; Ashley, Elizabeth A; Day, Nicholas P J; Dhorda, Mehul; Dondorp, Arjen M; Faiz, M Abul; Gething, Peter W; Hien, Tran T; Hlaing, Tin M; Imwong, Mallika; Kindermans, Jean-Marie; Maude, Richard J; Mayxay, Mayfong; McDew-White, Marina; Menard, Didier; Nair, Shalini; Nosten, Francois; Newton, Paul N; Price, Ric N; Pukrittayakamee, Sasithon; Takala-Harrison, Shannon; Smithuis, Frank; Nguyen, Nhien T; Tun, Kyaw M; White, Nicholas J; Witkowski, Benoit; Woodrow, Charles J; Fairhurst, Rick M; Sibley, Carol Hopkins; Guerin, Philippe J
2016-10-24
Artemisinin-resistant Plasmodium falciparum malaria parasites are now present across much of mainland Southeast Asia, where ongoing surveys are measuring and mapping their spatial distribution. These efforts require substantial resources. Here we propose a generic 'smart surveillance' methodology to identify optimal candidate sites for future sampling and thus map the distribution of artemisinin resistance most efficiently. The approach uses the 'uncertainty' map generated iteratively by a geostatistical model to determine optimal locations for subsequent sampling. The methodology is illustrated using recent data on the prevalence of the K13-propeller polymorphism (a genetic marker of artemisinin resistance) in the Greater Mekong Subregion. This methodology, which has broader application to geostatistical mapping in general, could improve the quality and efficiency of drug resistance mapping and thereby guide practical operations to eliminate malaria in affected areas.
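The site-selection step can be sketched with a crude surrogate for the geostatistical model's uncertainty map: a squared-exponential kernel makes predictive uncertainty low near already-sampled sites, and each greedy step picks the candidate where uncertainty is highest. The kernel, length scale, and coordinates are all hypothetical stand-ins for the model in the paper.

```python
import math

def next_site(candidates, sampled, length_scale=100.0):
    """Greedy 'smart surveillance' step: among candidate sites, pick the
    one where a squared-exponential surrogate for predictive uncertainty
    is largest, i.e. farthest (in kernel terms) from everything sampled."""
    def uncertainty(p):
        if not sampled:
            return 1.0
        return 1.0 - max(
            math.exp(-((p[0] - s[0]) ** 2 + (p[1] - s[1]) ** 2) / length_scale ** 2)
            for s in sampled
        )
    return max(candidates, key=uncertainty)

candidates = [(0, 0), (50, 0), (200, 0), (400, 0)]  # possible survey sites (km)
sampled = [(0, 0)]                                  # one site already surveyed
order = []
for _ in range(3):
    site = next_site(candidates, sampled)
    order.append(site)
    sampled.append(site)
```

As expected, the procedure surveys the most remote sites first and only then fills in locations near existing data.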
Yoo, Jihyung; Prikhodko, Vitaly; Parks, James E; Perfetto, Anthony; Geckler, Sam; Partridge, William P
2015-09-01
Exhaust gas recirculation (EGR) in internal combustion engines is an effective method of reducing NOx emissions while improving efficiency. However, insufficient mixing between fresh air and exhaust gas can lead to cycle-to-cycle and cylinder-to-cylinder non-uniformity in the charge-gas mixture of a multi-cylinder engine, which can in turn reduce engine performance and efficiency. A sensor packaged into a compact probe was designed, built and applied to measure spatiotemporal EGR distributions in the intake manifold of an operating engine. The probe promotes the development of more efficient and higher-performance engines by resolving high-speed in situ CO2 concentration at various locations in the intake manifold. The study employed mid-infrared light sources tuned to an absorption band of CO2 near 4.3 μm, an industry-standard species for determining EGR fraction. The calibrated probe was used to map spatial EGR distributions in an intake manifold with high accuracy and to monitor cycle-resolved, cylinder-specific EGR fluctuations at a rate of up to 1 kHz.
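The CO2-based EGR determination such a probe relies on can be sketched in two steps: a Beer-Lambert inversion of the measured transmission to get CO2 concentration, then the standard CO2-balance definition of EGR fraction. The numbers below are illustrative, not the instrument's calibration.

```python
import math

def co2_from_transmission(i_ratio, sigma_l):
    """Beer-Lambert inversion: recover CO2 concentration (in units set by
    sigma_l) from the transmitted/incident intensity ratio, where
    sigma_l is the product of absorption cross-section and path length."""
    return -math.log(i_ratio) / sigma_l

def egr_fraction(co2_intake, co2_ambient, co2_exhaust):
    """Standard CO2-based EGR definition: fraction of the intake charge
    that is recirculated exhaust."""
    return (co2_intake - co2_ambient) / (co2_exhaust - co2_ambient)

# Hypothetical percent-CO2 readings at the three measurement points:
frac = egr_fraction(co2_intake=2.5, co2_ambient=0.04, co2_exhaust=10.0)
```

Resolving `co2_intake` per runner at kHz rates is what turns this formula into the cylinder-specific EGR maps described above.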
Vehicle-based Methane Mapping Helps Find Natural Gas Leaks and Prioritize Leak Repairs
NASA Astrophysics Data System (ADS)
von Fischer, J. C.; Weller, Z.; Roscioli, J. R.; Lamb, B. K.; Ferrara, T.
2017-12-01
Recently, mobile methane sensing platforms have been developed to detect and locate natural gas (NG) leaks in urban distribution systems and to estimate their size. Although this technology has already been used in targeted deployment for prioritization of NG pipeline infrastructure repair and replacement, one open question regarding this technology is how effective the resulting data are for prioritizing infrastructure repair and replacement. To answer this question we explore the accuracy and precision of the natural gas leak location and emission estimates provided by methane sensors placed on Google Street View (GSV) vehicles. We find that the vast majority (75%) of methane emitting sources detected by these mobile platforms are NG leaks and that the location estimates are effective at identifying the general location of leaks. We also show that the emission rate estimates from mobile detection platforms are able to effectively rank NG leaks for prioritizing leak repair. Our findings establish that mobile sensing platforms are an efficient and effective tool for improving the safety and reducing the environmental impacts of low-pressure NG distribution systems by reducing atmospheric methane emissions.
NASA Technical Reports Server (NTRS)
Tedder, Sarah A.; Hicks, Yolanda R.; Tacina, Kathleen M.; Anderson, Robert C.
2014-01-01
Lean direct injection (LDI) is a combustion concept to reduce oxides of nitrogen (NOx) for next generation aircraft gas turbine engines. These newer engines have cycles that increase fuel efficiency through increased operating pressures, which increase combustor inlet temperatures. NOx formation rates increase with higher temperatures; the LDI strategy avoids high temperature by staying fuel lean and away from stoichiometric burning. Thus, LDI relies on rapid and uniform fuel/air mixing. To understand this mixing process, a series of fundamental experiments are underway in the Combustion and Dynamics Facility at NASA Glenn Research Center. This first set of experiments examines cold flow (non-combusting) mixing using air and water. Using laser diagnostics, the effects of air swirler angle and injector tip location on the spray distribution, recirculation zone, and droplet size distribution are examined. Of the three swirler angles examined, 60 deg is determined to have the most even spray distribution. The injector tip location primarily shifts the flow without changing the structure, unless the flow includes a recirculation zone. When a recirculation zone is present, minimum axial velocity decreases as the injector tip moves downstream towards the venturi exit; also the droplets become more uniform in size and angular distribution.
NASA Technical Reports Server (NTRS)
Tedder, Sarah A.; Hicks, Yolanda R.; Tacina, Kathleen M.; Anderson, Robert C.
2015-01-01
Lean direct injection (LDI) is a combustion concept to reduce oxides of nitrogen (NOx) for next-generation aircraft gas turbine engines. These newer engines have cycles that increase fuel efficiency through increased operating pressures, which increase combustor inlet temperatures. NOx formation rates increase with higher temperatures; the LDI strategy avoids high temperatures by staying fuel lean and away from stoichiometric burning. Thus, LDI relies on rapid and uniform fuel/air mixing. To understand this mixing process, a series of fundamental experiments is underway in the Combustion and Dynamics Facility at NASA Glenn Research Center. This first set of experiments examines cold-flow (non-combusting) mixing using air and water. Using laser diagnostics, the effects of air swirler angle and injector tip location on the spray distribution, recirculation zone, and droplet size distribution are examined. Of the three swirler angles examined, 60 degrees is determined to have the most even spray distribution. The injector tip location primarily shifts the flow without changing its structure, unless the flow includes a recirculation zone. When a recirculation zone is present, the minimum axial velocity decreases as the injector tip moves downstream toward the venturi exit; the droplets also become more uniform in size and angular distribution.
NASA Astrophysics Data System (ADS)
Amran, Tengku Sarah Tengku; Ismail, Mohamad Pauzi; Ahmad, Mohamad Ridzuan; Amin, Mohamad Syafiq Mohd; Sani, Suhairy; Masenwat, Noor Azreen; Ismail, Mohd Azmi; Hamid, Shu-Hazri Abdul
2017-01-01
A water pipe is any pipe or tube designed to transport and deliver water or treated drinking water of appropriate quality, quantity, and pressure to consumers. Varieties include large-diameter mains that supply entire towns, smaller branch lines that supply a street or group of buildings, and small-diameter pipes located within individual buildings. This underground distribution system collectively describes the facilities used to supply water from its source to the point of usage. A leak in the underground water distribution piping system therefore increases the likelihood that safe water leaving the source or treatment facility becomes contaminated before reaching the consumer. Most importantly, leaks waste water, a precious natural resource. Furthermore, they cause substantial damage to transportation systems and structures within urban and suburban environments. This paper presents a study on the possibility of using ground penetrating radar (GPR) with a frequency of 1 GHz to detect pipes and leaks in an underground water distribution piping system. A series of laboratory experiments was designed to investigate the capability and efficiency of GPR in detecting underground pipes (metal and PVC) and water leaks. The data were divided into two parts: 1. detecting/locating an underground water pipe, 2. detecting a leak in an underground water pipe. Despite its simplicity, the attained data generated satisfactory results, indicating that GPR is capable and efficient: it is able to detect both the underground pipe and the presence of a leak in that pipe.
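For context, the depth of a buried reflector such as a pipe is usually recovered from the radar echo's two-way travel time via the standard relation v = c / sqrt(eps_r). This is a hedged illustration, not the processing used in the study; the travel time and soil permittivity below are assumed, illustrative values.

```python
C_M_PER_NS = 0.3  # speed of light in vacuum, metres per nanosecond

def gpr_depth(two_way_time_ns, eps_r):
    # wave speed in the ground: v = c / sqrt(relative permittivity)
    v = C_M_PER_NS / eps_r ** 0.5
    # the echo travels down to the pipe and back, hence the factor of 2
    return v * two_way_time_ns / 2.0

# a 20 ns echo in moist soil with relative permittivity ~9 (assumed value)
depth_m = gpr_depth(20.0, 9.0)
```

With these assumed values the reflector sits at about 1 m depth; in practice the permittivity must be calibrated for the actual soil.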
Using data tagging to improve the performance of Kanerva's sparse distributed memory
NASA Technical Reports Server (NTRS)
Rogers, David
1988-01-01
The standard formulation of Kanerva's sparse distributed memory (SDM) involves the selection of a large number of data storage locations, followed by averaging the data contained in those locations to reconstruct the stored data. A variant of this model is discussed, in which the predominant pattern is the focus of reconstruction. First, one architecture is proposed which returns the predominant pattern rather than the average pattern. However, this model will require too much storage for most uses. Next, a hybrid model is proposed, called tagged SDM, which approximates the results of the predominant pattern machine, but is nearly as efficient as Kanerva's original formulation. Finally, some experimental results are shown which confirm that significant improvements in the recall capability of SDM can be achieved using the tagged architecture.
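The standard averaging-based SDM read/write cycle described above can be sketched as follows. This is a minimal illustration of Kanerva's original formulation (counter vectors, bipolar writes, summed-and-thresholded reads), not Rogers' tagged architecture; the dimensions `N`, `M`, and activation radius `R` are arbitrary choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, R = 256, 2000, 112   # address bits, hard locations, activation radius

addresses = rng.integers(0, 2, size=(M, N))  # fixed random hard locations
counters = np.zeros((M, N), dtype=int)       # one counter vector per location

def select(addr):
    # activate every hard location within Hamming distance R of the address
    return np.sum(addresses != addr, axis=1) <= R

def write(addr, data):
    # bipolar update: +1 for each 1 bit, -1 for each 0 bit,
    # applied to all selected locations
    counters[select(addr)] += 2 * data - 1

def read(addr):
    # sum counters over the selected locations, then threshold --
    # this is the averaging reconstruction that the tagged variant refines
    return (counters[select(addr)].sum(axis=0) > 0).astype(int)

pattern = rng.integers(0, 2, size=N)
write(pattern, pattern)     # autoassociative write
recalled = read(pattern)    # recover the stored pattern
```

The averaging step is what blurs together multiple stored patterns; the tagged model replaces it with a reconstruction biased toward the predominant pattern.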
Strategy Guideline: Compact Air Distribution Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burdick, A.
2013-06-01
This Strategy Guideline discusses the benefits and challenges of using a compact air distribution system to handle the reduced loads and reduced air volume needed to condition the space within an energy efficient home. Traditional systems sized by 'rule of thumb' (i.e., 1 ton of cooling per 400 ft2 of floor space) that 'wash' the exterior walls with conditioned air from floor registers cannot provide appropriate air mixing and moisture removal in low-load homes. A compact air distribution system locates the HVAC equipment centrally with shorter ducts run to interior walls, and ceiling supply outlets throw the air toward the exterior walls along the ceiling plane; alternatively, high sidewall supply outlets throw the air toward the exterior walls. Potential drawbacks include resistance from installing contractors or code officials who are unfamiliar with compact air distribution systems, as well as a lack of availability of low-cost high sidewall or ceiling supply outlets to meet the low air volumes with good throw characteristics. The decision criteria for a compact air distribution system must be determined early in the whole-house design process, considering both supply and return air design. However, careful installation of a compact air distribution system can result in lower material costs from smaller equipment, shorter duct runs, and fewer outlets; increased installation efficiencies, including ease of fitting the system into conditioned space; lower loads on a better balanced HVAC system; and overall improved energy efficiency of the home.
NASA Technical Reports Server (NTRS)
Vijgen, P. M. H. W.; Hardin, J. D.; Yip, L. P.
1992-01-01
Accurate prediction of surface-pressure distributions, merging boundary-layers, and separated-flow regions over multi-element high-lift airfoils is required to design advanced high-lift systems for efficient subsonic transport aircraft. The availability of detailed measurements of pressure distributions and both averaged and time-dependent boundary-layer flow parameters at flight Reynolds numbers is critical to evaluate computational methods and to model the turbulence structure for closure of the flow equations. Several detailed wind-tunnel measurements at subscale Reynolds numbers were conducted to obtain detailed flow information including the Reynolds-stress component. As part of a subsonic-transport high-lift research program, flight experiments are conducted using the NASA-Langley B737-100 research aircraft to obtain detailed flow characteristics for support of computational and wind-tunnel efforts. Planned flight measurements include pressure distributions at several spanwise locations, boundary-layer transition and separation locations, surface skin friction, as well as boundary-layer profiles and Reynolds stresses in adverse pressure-gradient flow.
Learning from Massive Distributed Data Sets (Invited)
NASA Astrophysics Data System (ADS)
Kang, E. L.; Braverman, A. J.
2013-12-01
Technologies for remote sensing and ever-expanding computer experiments in climate science are generating massive data sets. Meanwhile, it has become common in all areas of large-scale science to have these 'big data' distributed over multiple physical locations, and moving large amounts of data can be impractical. In this talk, we will discuss efficient ways to summarize and learn from distributed data. We formulate a graphical model to mimic the main characteristics of a distributed-data network, including the size of the data sets and the speed of moving data. With this nominal model, we investigate the trade-off between prediction accuracy and the cost of data movement, both theoretically and through simulation experiments. We will also discuss new implementations of spatial and spatio-temporal statistical methods optimized for distributed data.
Michael A. Blazier; D. Andrew Scott
2006-01-01
Improvements in nitrogen (N) uptake efficiency and plantation growth require refined silvicultural systems that consider soil type, stand development, ecology, and their interactions. On four unthinned, mid-rotation loblolly pine plantations in Louisiana located on a gradient of soil drainage classes, soil, plant, and microbial N dynamics were measured in response to...
Korean Domestic Third Party Logistics Providers: Reach for a Global Market
2010-03-01
receiving resources from overseas, parts production, assembling finished goods, sales, and customer service become more important. This is...businesses. Production can be located in an optimal area while efficient logistics systems allow world-wide distribution. Global logistics is activities...logistics is managing and utilizing production flow from resources to finished goods by gathering scattered production and sales footholds, and
A fully distributed implementation of mean annual streamflow regional regression equations
Verdin, K.L.; Worstell, B.
2008-01-01
Estimates of mean annual streamflow are needed for a variety of hydrologic assessments. Away from gage locations, regional regression equations that are a function of upstream area, precipitation, and temperature are commonly used. Geographic information systems technology has facilitated their use for projects, but traditional approaches using the polygon overlay operator have been too inefficient for national-scale applications. As an alternative, the Elevation Derivatives for National Applications (EDNA) database was used as a framework for a fully distributed implementation of mean annual streamflow regional regression equations. The raster "flow accumulation" operator was used to efficiently achieve spatially continuous parameterization of the equations for every 30 m grid cell of the conterminous United States (U.S.). Results were confirmed by comparison with measured flows at stations of the Hydro-Climatic Data Network, and their value for applications was demonstrated in the development of a national geospatial hydropower assessment. Interactive tools at the EDNA website make possible the fast and efficient query of mean annual streamflow for any location in the conterminous U.S., providing a valuable complement to other national initiatives (StreamStats and the National Hydrography Dataset Plus).
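The flow-accumulation idea can be sketched on a toy drainage network: each cell's accumulated value is its own value plus the accumulated values of every cell draining into it, which yields upstream area and basin-averaged precipitation for every cell in one pass. The five-cell network, cell values, and regression coefficients below are invented for illustration and are not the EDNA data or regional equations.

```python
from collections import defaultdict

# Toy drainage network: each cell drains to exactly one downstream cell.
downstream = {'A': 'C', 'B': 'C', 'C': 'E', 'D': 'E', 'E': None}  # E = outlet
cell_area_km2 = {c: 0.9 for c in downstream}            # arbitrary cell size
precip_mm = {'A': 1200, 'B': 1100, 'C': 1000, 'D': 900, 'E': 800}

upstream = defaultdict(list)                            # invert the drainage graph
for cell, down in downstream.items():
    if down is not None:
        upstream[down].append(cell)

def accumulate(values, cell):
    # flow-accumulation operator: a cell's total is its own value
    # plus the accumulated totals of every upstream cell
    return values[cell] + sum(accumulate(values, up) for up in upstream[cell])

area = {c: accumulate(cell_area_km2, c) for c in downstream}
weighted_p = {c: precip_mm[c] * cell_area_km2[c] for c in downstream}
mean_p = {c: accumulate(weighted_p, c) / area[c] for c in downstream}  # basin mean

def mean_annual_flow(cell, a=0.001, b=1.0, c=1.0):
    # illustrative power-law regression Q = a * A^b * P^c;
    # the coefficients are placeholders, not the EDNA equations
    return a * area[cell] ** b * mean_p[cell] ** c

q_outlet = mean_annual_flow('E')
```

Running the same recursion over a raster's D8 flow directions gives the spatially continuous parameterization the abstract describes.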
DMS Advanced Applications for Accommodating High Penetrations of DERs and Microgrids: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pratt, Annabelle; Veda, Santosh; Maitra, Arindam
Efficient and effective management of the electrical distribution system requires an integrated system approach for Distribution Management Systems (DMS), Distributed Energy Resources (DERs), Distributed Energy Resources Management System (DERMS), and microgrids to work in harmony. This paper highlights some of the outcomes from a U.S. Department of Energy (DOE), Office of Electricity (OE) project, including 1) the architecture of these integrated systems, and 2) expanded functions of two example DMS applications, Volt-VAR optimization (VVO) and Fault Location, Isolation and Service Restoration (FLISR), to accommodate DER. For these two example applications, the relevant DER Group Functions necessary to support communication between DMS and Microgrid Controller (MC) in grid-tied mode are identified.
DMS Advanced Applications for Accommodating High Penetrations of DERs and Microgrids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pratt, Annabelle; Veda, Santosh; Maitra, Arindam
Efficient and effective management of the electric distribution system requires an integrated approach to allow various systems to work in harmony, including distribution management systems (DMS), distributed energy resources (DERs), distributed energy resources management systems, and microgrids. This study highlights some outcomes from a recent project sponsored by the US Department of Energy, Office of Electricity Delivery and Energy Reliability, including information about (i) the architecture of these integrated systems and (ii) expanded functions of two example DMS applications to accommodate DERs: volt-var optimisation and fault location, isolation, and service restoration. In addition, the relevant DER group functions necessary to support communications between the DMS and a microgrid controller in grid-tied mode are identified.
Gu, S. H.; Dormion, J.; Hugot, J.-P.; Yanagihara, R.
2014-01-01
Recent discovery of genetically distinct hantaviruses in shrews and moles (order Soricomorpha, families Soricidae and Talpidae) has challenged the conventional view that rodents serve as the principal reservoir hosts. Nova virus (NVAV), previously identified in archival liver tissue of a single European mole (Talpa europaea) from Hungary, represents one of the most highly divergent hantaviruses identified to date. To ascertain the spatial distribution and genetic diversity of NVAV, we employed RT-PCR to analyse lungs from 94 moles captured at two locations in France between October 2012 and March 2013. NVAV was detected in more than 60% of moles at each location, suggesting efficient enzootic virus transmission and confirming that this mole species serves as the reservoir host. Although the pathogenic potential of NVAV is unknown, the widespread geographical distribution of the European mole might pose a hantavirus exposure risk for humans. PMID:24044372
Edge effect modeling of small tool polishing in planetary movement
NASA Astrophysics Data System (ADS)
Li, Qi-xin; Ma, Zhen; Jiang, Bo; Yao, Yong-sheng
2018-03-01
As one of the most challenging problems in Computer Controlled Optical Surfacing (CCOS), the edge effect greatly affects polishing accuracy and efficiency. CCOS relies on a stable tool influence function (TIF); however, at the edge of the mirror surface, with the grinding head partially off the mirror, the contact area and pressure distribution change, resulting in a non-linear change in the TIF and leading to tilting or sagging at the edge of the mirror. In order to reduce these adverse effects and improve polishing accuracy and efficiency, we used finite element simulation to analyze the pressure distribution at the mirror edge and combined it with an improved traditional method to establish a new model. The new method fully considers the non-uniformity of the pressure distribution. After modeling the TIFs at different locations, the description and prediction of the edge effects are realized, which is of positive significance for the control and suppression of edge effects.
Intracellular localization of adeno-associated viral proteins expressed in insect cells.
Gallo-Ramírez, Lilí E; Ramírez, Octavio T; Palomares, Laura A
2011-01-01
Production of vectors derived from adeno-associated virus (AAVv) in insect cells represents a feasible option for large-scale applications. However, transducing particle yields obtained in this system are low compared with total capsid yields, suggesting the presence of genome encapsidation bottlenecks. Three components are required for AAVv production: viral capsid proteins (VP), the recombinant AAV genome, and Rep proteins for AAV genome replication and encapsidation. Little is known about the interaction between the three components in insect cells, which have intracellular conditions different from those in mammalian cells. In this work, the localization of AAV proteins in insect cells was assessed for the first time with the purpose of finding potential limiting factors. Unassembled VP were located either in the cytoplasm or in the nucleus. Their transport into the nucleus was dependent on protein concentration. Empty capsids were located in defined subnuclear compartments. Rep proteins expressed individually were efficiently translocated into the nucleus. Their intranuclear distribution was not uniform and differed from the VP distribution. While Rep52 distribution and expression levels were not affected by AAV genomes or VP, Rep78 distribution and stability changed during coexpression. Expression of all AAV components modified capsid intranuclear distribution, and assembled VP were found in vesicles located at the nuclear periphery. Such vesicles were related to baculovirus infection, highlighting its role in AAVv production in insect cells. The results obtained in this work suggest that the intracellular distribution of AAV proteins allows their interaction and does not limit vector production in insect cells. Copyright © 2011 American Institute of Chemical Engineers (AIChE).
NASA Technical Reports Server (NTRS)
Trimble, Jay
2017-01-01
For NASA's Resource Prospector (RP) Lunar Rover Mission, we are moving away from a control center concept, to a fully distributed operation utilizing control nodes, with decision support from anywhere via mobile devices. This operations concept will utilize distributed information systems, notifications, mobile data access, and optimized mobile data display for off-console decision support. We see this concept of operations as a step in the evolution of mission operations from a central control center concept to a mission operations anywhere concept. The RP example is part of a trend, in which mission expertise for design, development and operations is distributed across countries and across the globe. Future spacecraft operations will be most cost efficient and flexible by following this distributed expertise, enabling operations from anywhere. For the RP mission we arrived at the decision to utilize a fully distributed operations team, where everyone operates from their home institution, based on evaluating the following factors: the requirement for physical proximity for near-real time command and control decisions; the cost of distributed control nodes vs. a centralized control center; the impact on training and mission preparation of flying the team to a central location. Physical proximity for operational decisions is seldom required, though certain categories of decisions, such as launch abort, or close coordination for mission or safety-critical near-real-time command and control decisions may benefit from co-location. The cost of facilities and operational infrastructure has not been found to be a driving factor for location in our studies. Mission training and preparation benefit from having all operators train and operate from home institutions.
Shaw, Jennifer L. A.; Weyrich, Laura S.; Sawade, Emma; Drikas, Mary; Cooper, Alan J.
2015-01-01
Drinking water assessments use a variety of microbial, physical, and chemical indicators to evaluate water treatment efficiency and product water quality. However, these indicators do not allow the complex biological communities, which can adversely impact the performance of drinking water distribution systems (DWDSs), to be characterized. Entire bacterial communities can be studied quickly and inexpensively using targeted metagenomic amplicon sequencing. Here, amplicon sequencing of the 16S rRNA gene region was performed alongside traditional water quality measures to assess the health, quality, and efficiency of two distinct, full-scale DWDSs: (i) a linear DWDS supplied with unfiltered water subjected to basic disinfection before distribution and (ii) a complex, branching DWDS treated by a four-stage water treatment plant (WTP) prior to disinfection and distribution. In both DWDSs bacterial communities differed significantly after disinfection, demonstrating the effectiveness of both treatment regimes. However, bacterial repopulation occurred further along in the DWDSs, and some end-user samples were more similar to the source water than to the postdisinfection water. Three sample locations appeared to be nitrified, displaying elevated nitrate levels and decreased ammonia levels, and nitrifying bacterial species, such as Nitrospira, were detected. Burkholderiales were abundant in samples containing large amounts of monochloramine, indicating resistance to disinfection. Genera known to contain pathogenic and fecal-associated species were also identified in several locations. From this study, we conclude that metagenomic amplicon sequencing is an informative method to support current compliance-based methods and can be used to reveal bacterial community interactions with the chemical and physical properties of DWDSs. PMID:26162884
Encapsulating urban traffic rhythms into road networks.
Wang, Junjie; Wei, Dong; He, Kun; Gong, Hang; Wang, Pu
2014-02-20
Using road GIS (geographical information systems) data and travel demand data for two U.S. urban areas, the dynamical driver sources of each road segment were located. A method to target road clusters closely related to urban traffic congestion was then developed to improve road network efficiency. The targeted road clusters show different spatial distributions at different times of a day, indicating that our method can encapsulate dynamical travel demand information into the road networks. As a proof of concept, when we lowered the speed limit or increased the capacity of road segments in the targeted road clusters, we found that both the number of congested roads and extra travel time were effectively reduced. In addition, the proposed modeling framework provided new insights on the optimization of transport efficiency in any infrastructure network with a specific supply and demand distribution.
Encapsulating Urban Traffic Rhythms into Road Networks
Wang, Junjie; Wei, Dong; He, Kun; Gong, Hang; Wang, Pu
2014-01-01
Using road GIS (geographical information systems) data and travel demand data for two U.S. urban areas, the dynamical driver sources of each road segment were located. A method to target road clusters closely related to urban traffic congestion was then developed to improve road network efficiency. The targeted road clusters show different spatial distributions at different times of a day, indicating that our method can encapsulate dynamical travel demand information into the road networks. As a proof of concept, when we lowered the speed limit or increased the capacity of road segments in the targeted road clusters, we found that both the number of congested roads and extra travel time were effectively reduced. In addition, the proposed modeling framework provided new insights on the optimization of transport efficiency in any infrastructure network with a specific supply and demand distribution. PMID:24553203
Free vibration of fully functionally graded carbon nanotube reinforced graphite/epoxy laminates
NASA Astrophysics Data System (ADS)
Kuo, Shih-Yao
2018-03-01
This study provides the first-known vibration analysis of fully functionally graded carbon nanotube reinforced hybrid composite (FFG-CNTRHC) laminates. CNTs are non-uniformly distributed to reinforce the graphite/epoxy laminates. Some CNT distribution functions in the plane and thickness directions are proposed to more efficiently increase the stiffening effect. The rule of mixtures is modified by considering the non-homogeneous material properties of FFG-CNTRHC laminates. The formulation of the location-dependent stiffness matrix and mass matrix is derived. The effects of CNT volume fraction and distribution on the natural frequencies of FFG-CNTRHC laminates are discussed. The results reveal that the FFG layout may significantly increase the natural frequencies of FFG-CNTRHC laminates.
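For reference, the extended rule of mixtures commonly used for CNT-reinforced layers takes the following form; this is the standard baseline rather than the paper's modified version, with CNT efficiency parameters and volume fractions written in the usual notation:

```latex
% Standard (extended) rule of mixtures for a CNT-reinforced layer;
% \eta_1, \eta_2 are CNT efficiency parameters and V_{cnt} + V_m = 1:
E_{11} = \eta_1 V_{cnt}\, E_{11}^{cnt} + V_m E^{m}, \qquad
\frac{\eta_2}{E_{22}} = \frac{V_{cnt}}{E_{22}^{cnt}} + \frac{V_m}{E^{m}}
```

In the FFG layout described above, the volume fraction V_{cnt} additionally varies with the in-plane and thickness coordinates, which is what makes the resulting stiffness matrix location dependent.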
Glass Lewis, Marquisette; Ekúndayò, Olúgbémiga T
2017-05-16
Hysterectomy, the mainstay treatment for symptomatic uterine fibroids since 1895, has decreased over the years, but it is still the number one choice for many women. Since 1995, uterine artery embolization (UAE) has been proven by many researchers to be an effective treatment for uterine fibroids while allowing women to keep their uteri. The preponderance of data collection and research has focused on care quality in terms of efficiency and effectiveness, with little on location and viability related to care utilization, accessibility, and physical availability. The purpose of this study was to determine and compare the cost of UAE and classical abdominal hysterectomy with regard to race/ethnicity, region, and location. Data from the National Hospital Discharge Survey for 2004 through 2008 were accessed and analyzed for uterine artery embolization and hysterectomy. Frequency analyses were performed to determine the distribution of variables by race/ethnicity, location, region, insurance coverage, cost, and procedure. Based on frequency distributions of cost and length of stay, outliers were trimmed and categorized. Crosstabs were used to determine cost distributions by region, place/location, procedure, race, and primary payer. For abdominal hysterectomy, 9.8% of the sample were performed in rural locations across the country. However, for UAE, only seven procedures were performed nationally in the same period. Therefore, all inferential analyses and associations for UAE were assumed for urban locations only. The pattern differed from region to region regarding the volume of care (numbers of cases by location) and care cost. Comparing hysterectomy and UAE, the patterns indicate generally higher costs for UAE, with a mean cost difference of $4223.52. Of the hysterectomies performed for fibroids on Black women in the rural setting, 92.08% were in the south.
Overall, data analyzed in this examination indicated a significant disparity between rural and urban residence in both data collection and number of procedures conducted. Further research should determine the background to cost and care location differentials between races and between rural and urban settings. Further, factors driving racial differences in the proportions of hysterectomies in the rural south should be identified to eliminate disparities. Data are needed on the prevalence of uterine fibroids in rural settings.
Operational Reconnaissance for the Anti-Access /Area Denial environment
2015-04-01
locations, the Air Force Distributed Common Ground System (DCGS) collects, processes, analyzes, and disseminates over 1.3 million megabits of...DCGS; satellite data link between the aircraft and ground-based receiver; and fiber-optic connection between the receiver, RPA crew, and DCGS. This...analysts and end users. DCGS Integration The Air Force global ISR enterprise is not configured to efficiently receive, exploit, or disseminate fighter
Modular High-Energy Systems for Solar Power Satellites
NASA Technical Reports Server (NTRS)
Howell, Joe T.; Carrington, Connie K.; Marzwell, Neville I.; Mankins, John C.
2006-01-01
Modular high-energy systems are stepping stones that provide capabilities for an energy-rich infrastructure located in space to support a variety of exploration scenarios, as well as a supplemental source of energy to ground grid systems during peak demand. Abundant renewable energy at lunar or other locations could support propellant production and storage in refueling scenarios that enable affordable exploration. Renewable energy platforms in geosynchronous Earth orbit can collect and transmit power to satellites, or to Earth-surface locations. Energy-rich space technologies also enable the use of electric-powered propulsion systems that could efficiently deliver cargo and exploration facilities to remote locations. A first step to an energy-rich space infrastructure is a 100-kWe-class solar-powered platform in Earth orbit. The platform would utilize advanced technologies in solar power collection and generation, power management and distribution, thermal management, electric propulsion, wireless avionics, autonomous in-space rendezvous and docking, servicing, and robotic assembly. It would also provide an energy-rich free-flying platform to demonstrate in space a portfolio of technology flight experiments. This paper summarizes a preliminary design concept for a 100-kWe solar-powered satellite system to demonstrate in flight a variety of advanced technologies, each as a separate payload. These technologies include, but are not limited to, state-of-the-art solar concentrators, highly efficient multi-junction solar cells, integrated thermal management on the arrays, and innovative deployable structure design and packaging to make the 100-kW satellite feasible to launch on one existing launch vehicle. Higher-voltage arrays and power distribution systems (PDS) reduce or eliminate the need for massive power converters, and could enable direct drive of high-voltage solar electric thrusters.
Joint Source Location and Focal Mechanism Inversion: efficiency, accuracy and applications
NASA Astrophysics Data System (ADS)
Liang, C.; Yu, Y.
2017-12-01
The analysis of induced seismicity has become a common practice to evaluate the results of hydraulic fracturing treatment. Liang et al. (2016) proposed a joint Source Scanning Algorithm (jSSA) to obtain microseismic event locations and focal mechanisms simultaneously. The jSSA is superior to the traditional SSA in many respects, but its computational cost is too high for real-time monitoring. In this study, we have developed several scanning schemes to reduce computation time. A multi-stage scanning scheme is shown to improve efficiency significantly while retaining accuracy. A series of tests was carried out using both real field data and synthetic data to evaluate the accuracy of the method and its dependence on noise level, source depth, focal mechanism, and other factors. The surface-based arrays provide better constraints on horizontal location errors (<20 m) and angular errors of P axes (within 10 degrees, for S/N > 0.5). For sources with varying rakes, dips, strikes, and depths, the errors are mostly controlled by the partition of positive and negative polarities among the quadrants: more evenly partitioned polarities yield better results in both locations and focal mechanisms. Nevertheless, even with poorly resolved focal mechanisms for some events, the optimized jSSA method can still improve location accuracy significantly. Based on the much more densely distributed events and focal mechanisms, a gridded stress inversion is conducted to obtain an evenly distributed stress field. The full potential of the jSSA has yet to be explored in different directions, especially in earthquake seismology as seismic arrays become increasingly dense.
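The multi-stage idea (scan a coarse grid, then refine only around the most promising cell) can be sketched generically. The objective below is a synthetic stand-in for the jSSA stacking/brightness function, and the grid sizes, stage count, and peak location are all arbitrary illustration values, not parameters from the study.

```python
import numpy as np

def objective(x, y):
    # synthetic stand-in for a stacking/brightness function,
    # peaked at a notional source location (3.2, -1.7)
    return np.exp(-((x - 3.2) ** 2 + (y + 1.7) ** 2))

def multistage_scan(f, lo, hi, n=11, stages=4):
    """Coarse-to-fine grid scan: evaluate f on an n x n grid, then
    shrink the window to one coarse cell around the best grid point
    and rescan, repeating for the given number of stages."""
    (x0, y0), (x1, y1) = lo, hi
    for _ in range(stages):
        xs = np.linspace(x0, x1, n)
        ys = np.linspace(y0, y1, n)
        vals = np.array([[f(x, y) for y in ys] for x in xs])
        i, j = np.unravel_index(np.argmax(vals), vals.shape)
        dx, dy = (x1 - x0) / (n - 1), (y1 - y0) / (n - 1)
        # new window: one coarse cell on either side of the best point
        x0, x1 = xs[i] - dx, xs[i] + dx
        y0, y1 = ys[j] - dy, ys[j] + dy
    return xs[i], ys[j]

best = multistage_scan(objective, (-10.0, -10.0), (10.0, 10.0))
```

Four 11 x 11 stages cost 484 evaluations versus roughly 1.6 million for a single fine grid of the same final resolution, which is the kind of saving that makes real-time scanning plausible.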
Label-free optical imaging of membrane patches for atomic force microscopy
Churnside, Allison B.; King, Gavin M.; Perkins, Thomas T.
2010-01-01
In atomic force microscopy (AFM), finding sparsely distributed regions of interest can be difficult and time-consuming. Typically, the tip is scanned until the desired object is located. This process can mechanically or chemically degrade the tip, as well as damage fragile biological samples. Protein assemblies can be detected using the back-scattered light from a focused laser beam. We previously used back-scattered light from a pair of laser foci to stabilize an AFM. In the present work, we integrate these techniques to optically image patches of purple membranes prior to AFM investigation. These rapidly acquired optical images were aligned to the subsequent AFM images to ~40 nm, since the tip position was aligned to the optical axis of the imaging laser. Thus, this label-free imaging efficiently locates sparsely distributed protein assemblies for subsequent AFM study while simultaneously minimizing degradation of the tip and the sample. PMID:21164738
Jiang, Xuejun; Guo, Xu; Zhang, Ning; Wang, Bo
2018-01-01
This article presents and investigates the performance of a series of robust multivariate nonparametric tests for detecting a location shift between two multivariate samples in randomized controlled trials. The tests are built upon robust estimators of distribution location (medians, Hodges-Lehmann estimators, and an extended U statistic), in both unscaled and scaled versions. The nonparametric tests are robust to outliers and do not assume that the two samples are drawn from multivariate normal distributions. Bootstrap and permutation approaches are introduced for determining the p-values of the proposed test statistics. Simulation studies are conducted and numerical results are reported to examine the performance of the proposed statistical tests. The numerical results demonstrate that the robust multivariate nonparametric tests constructed from the Hodges-Lehmann estimators are more efficient than those based on medians and the extended U statistic. The permutation approach provides more stringent control of Type I error and is generally more powerful than the bootstrap procedure. The proposed robust nonparametric tests are applied to detect a multivariate distributional difference between the intervention and control groups in the Thai Healthy Choices study and to examine the effect of a four-session motivational interviewing-based intervention developed in the study to reduce risk behaviors among youth living with HIV. PMID:29672555
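A permutation test for a multivariate location shift can be sketched as follows. The statistic here is the Euclidean norm of the difference of component-wise medians, one of the simpler robust estimators the article considers; the Hodges-Lehmann and scaled variants would slot into the same scheme. Sample sizes, the shift, and the permutation count are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def perm_test(x, y, n_perm=1000, rng=rng):
    # test statistic: norm of the difference of component-wise medians
    # (a simple robust location estimator)
    stat = lambda a, b: np.linalg.norm(np.median(a, axis=0) - np.median(b, axis=0))
    observed = stat(x, y)
    pooled = np.vstack([x, y])
    n = len(x)
    exceed = 0
    for _ in range(n_perm):
        # under the null of no shift, group labels are exchangeable
        idx = rng.permutation(len(pooled))
        exceed += stat(pooled[idx[:n]], pooled[idx[n:]]) >= observed
    return (exceed + 1) / (n_perm + 1)  # add-one corrected p-value

# two bivariate samples; y is shifted by (1, 1)
x = rng.normal(size=(60, 2))
y = rng.normal(size=(60, 2)) + 1.0
p_shift = perm_test(x, y)                        # should be small
p_null = perm_test(x, rng.normal(size=(60, 2)))  # no true shift
```

Because the reference distribution is built by relabeling the pooled data, the test needs no normality assumption and the median-based statistic is insensitive to outliers.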
Model of a thin film optical fiber fluorosensor
NASA Technical Reports Server (NTRS)
Egalon, Claudio O.; Rogowski, Robert S.
1991-01-01
The efficiency of core-light injection from sources in the cladding of an optical fiber is modeled analytically by means of the exact field solution of a step-profile fiber. The analysis is based on the techniques by Marcuse (1988) in which the sources are treated as infinitesimal electric currents with random phase and orientation that excite radiation fields and bound modes. Expressions are developed based on an infinite cladding approximation which yield the power efficiency for a fiber coated with fluorescent sources in the core/cladding interface. Marcuse's results are confirmed for the case of a weakly guiding cylindrical fiber with fluorescent sources uniformly distributed in the cladding, and the power efficiency is shown to be practically constant for variable wavelengths and core radii. The most efficient fibers have the thin film located at the core/cladding boundary, and fibers with larger differences in the indices of refraction are shown to be the most efficient.
Optimal pitching axis location of flapping wings for efficient hovering flight.
Wang, Q; Goosen, J F L; van Keulen, F
2017-09-01
Flapping wings can pitch passively about their pitching axes due to their flexibility, inertia, and aerodynamic loads. A shift in the pitching axis location can dynamically alter the aerodynamic loads, which in turn changes the passive pitching motion and the flight efficiency. Therefore, it is of great interest to investigate the optimal pitching axis for flapping wings to maximize the power efficiency during hovering flight. In this study, flapping wings are modeled as rigid plates with non-uniform mass distribution. The wing flexibility is represented by a linearly torsional spring at the wing root. A predictive quasi-steady aerodynamic model is used to evaluate the lift generated by such wings. Two extreme power consumption scenarios are modeled for hovering flight, i.e. the power consumed by a drive system with and without the capacity of kinetic energy recovery. For wings with different shapes, the optimal pitching axis location is found such that the cycle-averaged power consumption during hovering flight is minimized. Optimization results show that the optimal pitching axis is located between the leading edge and the mid-chord line, which shows close resemblance to insect wings. An optimal pitching axis can save up to 33% of power during hovering flight when compared to traditional wings used by most flapping-wing micro air vehicles (FWMAVs). Traditional wings typically use the straight leading edge as the pitching axis. With the optimized pitching axis, flapping wings show higher pitching amplitudes and start the pitching reversals in advance of the sweeping reversals. These phenomena lead to higher lift-to-drag ratios and, thus, explain the lower power consumption. In addition, the optimized pitching axis gives the drive system greater potential to recycle energy during the deceleration phases compared to its counterparts. This observation underlines the particular importance of the wing pitching axis location for energy-efficient FWMAVs when using kinetic energy recovery drive systems.
Ferrante, Oscar; Patacca, Alessia; Di Caro, Valeria; Della Libera, Chiara; Santandrea, Elisa; Chelazzi, Leonardo
2018-05-01
The cognitive system has the capacity to learn and make use of environmental regularities - known as statistical learning (SL) - including for the implicit guidance of attention. For instance, it is known that attentional selection is biased according to the spatial probability of targets; similarly, changes in distractor filtering can be triggered by the unequal spatial distribution of distractors. Open questions remain regarding the cognitive/neuronal mechanisms underlying SL of target selection and distractor filtering. Crucially, it is unclear whether the two processes rely on shared neuronal machinery, with unavoidable cross-talk, or whether they are fully independent, an issue that we directly addressed here. In a series of visual search experiments, participants had to discriminate a target stimulus, while ignoring a task-irrelevant salient distractor (when present). We systematically manipulated spatial probabilities of either one or the other stimulus, or both. We then measured performance to evaluate the direct effects of the applied contingent probability distribution (e.g., effects on target selection of the spatial imbalance in target occurrence across locations) as well as its indirect or "transfer" effects (e.g., effects of the same spatial imbalance on distractor filtering across locations). By this approach, we confirmed that SL of both target and distractor location implicitly biases attention. Most importantly, we described substantial indirect effects, with the unequal spatial probability of the target affecting filtering efficiency and, vice versa, the unequal spatial probability of the distractor affecting target selection efficiency across locations. The observed cross-talk demonstrates that SL of target selection and distractor filtering are instantiated via (at least partly) shared neuronal machinery, as further corroborated by strong correlations between direct and indirect effects at the level of individual participants. Our findings are compatible with the notion that both kinds of SL adjust the priority of specific locations within attentional priority maps of space. Copyright © 2017 Elsevier Ltd. All rights reserved.
Performance Evaluation and Control of Distributed Computer Communication Networks.
1985-09-01
Participants: Zukerman, S. Katz, P. Rodriguez, R. Pazos, S. Resheff, Z. Tsai, Z. Zhang, L. Jong, and V. Minh, along with several visiting researchers. Representative publications include: (1) R. A. Pazos-Rangel, "Bandwidth Allocation and Routing in ISDN's," IEEE Communications Magazine, February 1984, in which a communications network design problem - joint location and routing for integrated networks - is formulated and efficient methods for its solution are presented; (2) R. A. Pazos-Rangel, "Evaluation
The solid angle (geometry factor) for a spherical surface source and an arbitrary detector aperture
Favorite, Jeffrey A.
2016-01-13
It is proven that the solid angle (or geometry factor, also called the geometrical efficiency) for a spherically symmetric outward-directed surface source with an arbitrary radius and polar angle distribution and an arbitrary detector aperture is equal to the solid angle for an isotropic point source located at the center of the spherical surface source and the same detector aperture.
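The stated equivalence can be checked numerically. The Monte Carlo sketch below (my construction, not the paper's analytical proof) compares an outward Lambertian spherical surface source, as one example of a spherically symmetric polar angle distribution, against an isotropic point source at its center, for the same circular disk aperture.

```python
import numpy as np

def aperture_fraction_point(D, a, n, rng):
    # Isotropic point source at the sphere's center; circular aperture of
    # radius a lying in the plane z = D.
    d = rng.normal(size=(n, 3))
    d /= np.linalg.norm(d, axis=1, keepdims=True)
    fwd = d[:, 2] > 0
    t = D / d[fwd, 2]
    hits = (t * d[fwd, 0]) ** 2 + (t * d[fwd, 1]) ** 2 <= a ** 2
    return hits.sum() / n

def aperture_fraction_sphere(R, D, a, n, rng):
    # Outward Lambertian (cosine-weighted) emission from a sphere of radius R.
    nrm = rng.normal(size=(n, 3))
    nrm /= np.linalg.norm(nrm, axis=1, keepdims=True)     # surface normals
    helper = np.where(np.abs(nrm[:, [0]]) < 0.9, [1.0, 0.0, 0.0], [0.0, 1.0, 0.0])
    t1 = np.cross(nrm, helper)
    t1 /= np.linalg.norm(t1, axis=1, keepdims=True)       # tangent frame
    t2 = np.cross(nrm, t1)
    u1, phi = rng.random(n), 2.0 * np.pi * rng.random(n)
    ct, st = np.sqrt(u1), np.sqrt(1.0 - u1)               # cosine-weighted polar angle
    d = (ct[:, None] * nrm + (st * np.cos(phi))[:, None] * t1
         + (st * np.sin(phi))[:, None] * t2)
    pos = R * nrm
    fwd = d[:, 2] > 1e-12
    t = (D - pos[fwd, 2]) / d[fwd, 2]                     # reach the aperture plane
    x = pos[fwd, 0] + t * d[fwd, 0]
    y = pos[fwd, 1] + t * d[fwd, 1]
    return ((x ** 2 + y ** 2) <= a ** 2).sum() / n
```

For a disk of radius a at distance D, the point-source fraction is (1 - D/sqrt(D^2 + a^2))/2, and per the theorem the surface-source fraction should match it.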
Reconciling biodiversity and carbon conservation.
Thomas, Chris D; Anderson, Barbara J; Moilanen, Atte; Eigenbrod, Felix; Heinemeyer, Andreas; Quaife, Tristan; Roy, David B; Gillings, Simon; Armsworth, Paul R; Gaston, Kevin J
2013-05-01
Climate change is leading to the development of land-based mitigation and adaptation strategies that are likely to have substantial impacts on global biodiversity. Of these, approaches to maintain carbon within existing natural ecosystems could have particularly large benefits for biodiversity. However, the geographical distributions of terrestrial carbon stocks and biodiversity differ. Using conservation planning analyses for the New World and Britain, we conclude, as previous studies have, that a carbon-only strategy would not be effective at conserving biodiversity. Nonetheless, we find that a combined carbon-biodiversity strategy could simultaneously protect 90% of carbon stocks (relative to a carbon-only conservation strategy) and > 90% of the biodiversity (relative to a biodiversity-only strategy) in both regions. This combined approach encapsulates the principle of complementarity, whereby locations that contain different sets of species are prioritised, and hence disproportionately safeguard localised species that are not protected effectively by carbon-only strategies. It is efficient because localised species are concentrated into small parts of the terrestrial land surface, whereas carbon is somewhat more evenly distributed; and carbon stocks protected in one location are equivalent to those protected elsewhere. Efficient compromises can only be achieved when biodiversity and carbon are incorporated together within a spatial planning process. © 2012 John Wiley & Sons Ltd/CNRS.
Relay discovery and selection for large-scale P2P streaming
Zhang, Chengwei; Wang, Angela Yunxian; Hei, Xiaojun
2017-01-01
In peer-to-peer networks, application relays have been commonly used to provide various networking services. The service performance often improves significantly if a relay is selected appropriately based on its network location. In this paper, we studied the location-aware relay discovery and selection problem for large-scale P2P streaming networks. In these large-scale and dynamic overlays, it incurs significant communication and computation cost to discover a sufficiently large relay candidate set and further to select one relay with good performance. The network location can be measured directly or indirectly with the tradeoffs between timeliness, overhead and accuracy. Based on a measurement study and the associated error analysis, we demonstrate that indirect measurements, such as King and Internet Coordinate Systems (ICS), can only achieve a coarse estimation of peers’ network location and those methods based on pure indirect measurements cannot lead to a good relay selection. We also demonstrate that there exists significant error amplification of the commonly used “best-out-of-K” selection methodology using three RTT data sets publicly available. We propose a two-phase approach to achieve efficient relay discovery and accurate relay selection. Indirect measurements are used to narrow down a small number of high-quality relay candidates and the final relay selection is refined based on direct probing. This two-phase approach enjoys an efficient implementation using the Distributed-Hash-Table (DHT). When the DHT is constructed, the node keys carry the location information and they are generated scalably using indirect measurements, such as the ICS coordinates. The relay discovery is achieved efficiently utilizing the DHT-based search. We evaluated various aspects of this DHT-based approach, including the DHT indexing procedure, key generation under peer churn and message costs. PMID:28410384
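A toy sketch of the two-phase idea described here (illustrative, not the paper's implementation): coordinate-based estimates shortlist K candidate relays cheaply, and direct probes then pick the final relay. The detour-cost model and the `probe_rtt` callback are assumptions.

```python
import numpy as np

def select_relay(peer_coords, src, dst, probe_rtt, k=5):
    # Phase 1 (indirect): rank candidate relays by the detour cost
    # src -> relay -> dst estimated from ICS-style network coordinates.
    est = (np.linalg.norm(peer_coords - src, axis=1)
           + np.linalg.norm(peer_coords - dst, axis=1))
    shortlist = np.argsort(est)[:k]
    # Phase 2 (direct): probe only the k short-listed relays and keep the
    # one with the smallest measured RTT.
    return shortlist[int(np.argmin([probe_rtt(i) for i in shortlist]))]
```

Probing only the shortlist is what keeps measurement overhead low while avoiding the "best-out-of-K" error amplification of a purely indirect selection.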
Shaw, Jennifer L A; Monis, Paul; Weyrich, Laura S; Sawade, Emma; Drikas, Mary; Cooper, Alan J
2015-09-01
Drinking water assessments use a variety of microbial, physical, and chemical indicators to evaluate water treatment efficiency and product water quality. However, these indicators do not allow the complex biological communities, which can adversely impact the performance of drinking water distribution systems (DWDSs), to be characterized. Entire bacterial communities can be studied quickly and inexpensively using targeted metagenomic amplicon sequencing. Here, amplicon sequencing of the 16S rRNA gene region was performed alongside traditional water quality measures to assess the health, quality, and efficiency of two distinct, full-scale DWDSs: (i) a linear DWDS supplied with unfiltered water subjected to basic disinfection before distribution and (ii) a complex, branching DWDS treated by a four-stage water treatment plant (WTP) prior to disinfection and distribution. In both DWDSs bacterial communities differed significantly after disinfection, demonstrating the effectiveness of both treatment regimes. However, bacterial repopulation occurred further along in the DWDSs, and some end-user samples were more similar to the source water than to the postdisinfection water. Three sample locations appeared to be nitrified, displaying elevated nitrate levels and decreased ammonia levels, and nitrifying bacterial species, such as Nitrospira, were detected. Burkholderiales were abundant in samples containing large amounts of monochloramine, indicating resistance to disinfection. Genera known to contain pathogenic and fecal-associated species were also identified in several locations. From this study, we conclude that metagenomic amplicon sequencing is an informative method to support current compliance-based methods and can be used to reveal bacterial community interactions with the chemical and physical properties of DWDSs. Copyright © 2015, American Society for Microbiology. All Rights Reserved.
Growing and navigating the small world Web by local content
Menczer, Filippo
2002-01-01
Can we model the scale-free distribution of Web hypertext degree under realistic assumptions about the behavior of page authors? Can a Web crawler efficiently locate an unknown relevant page? These questions are receiving much attention due to their potential impact for understanding the structure of the Web and for building better search engines. Here I investigate the connection between the linkage and content topology of Web pages. The relationship between a text-induced distance metric and a link-based neighborhood probability distribution displays a phase transition between a region where linkage is not determined by content and one where linkage decays according to a power law. This relationship is used to propose a Web growth model that is shown to accurately predict the distribution of Web page degree, based on textual content and assuming only local knowledge of degree for existing pages. A qualitatively similar phase transition is found between linkage and semantic distance, with an exponential decay tail. Both relationships suggest that efficient paths can be discovered by decentralized Web navigation algorithms based on textual and/or categorical cues. PMID:12381792
Dawson, P.; Whilldin, D.; Chouet, B.
2004-01-01
Radial Semblance is applied to broadband seismic network data to provide source locations of Very-Long-Period (VLP) seismic energy in near real time. With an efficient algorithm and adequate network coverage, accurate source locations of VLP energy are derived to quickly locate the shallow magmatic conduit system at Kilauea Volcano, Hawaii. During a restart in magma flow following a brief pause in the current eruption, the shallow magmatic conduit is pressurized, resulting in elastic radiation from various parts of the conduit system. A steeply dipping distribution of VLP hypocenters outlines a region extending from sea level to about 550 m elevation below and just east of the Halemaumau Pit Crater. The distinct hypocenters suggest the shallow plumbing system beneath Halemaumau consists of a complex plexus of sills and dikes. An unconstrained location for a section of the conduit is also observed beneath the region between Kilauea Caldera and Kilauea Iki Crater.
Infrared fiber optic sensor for measurements of nonuniform temperature distributions
NASA Astrophysics Data System (ADS)
Belotserkovsky, Edward; Drizlikh, S.; Zur, Albert; Bar-Or, O.; Katzir, Abraham
1992-04-01
Infrared (IR) fiber optic radiometry of thermal surfaces offers several advantages over refractive optics radiometry. It does not need a direct line of sight to the measured thermal surface and combines high capability of monitoring small areas with high efficiency. These advantages of IR fibers are important in the control of nonuniform temperature distributions, in which the temperature of closely situated points differs considerably and a high spatial resolution is necessary. The theoretical and experimental transforming functions of the sensor during scanning of an area with a nonuniform temperature distribution were obtained, and their dependence on the spatial location of the fiber and the type of temperature distribution was analyzed. Parameters such as accuracy and precision were determined. The results suggest that IR fiber radiometric thermometry may be useful in medical applications such as laser surgery, hyperthermia, and hypothermia.
Moving target tracking through distributed clustering in directional sensor networks.
Enayet, Asma; Razzaque, Md Abdur; Hassan, Mohammad Mehedi; Almogren, Ahmad; Alamri, Atif
2014-12-18
The problem of moving target tracking in directional sensor networks (DSNs) introduces new research challenges, including optimal selection of sensing and communication sectors of the directional sensor nodes, determination of the precise location of the target and an energy-efficient data collection mechanism. Existing solutions allow individual sensor nodes to detect the target's location through collaboration among neighboring nodes, where most of the sensors are activated and communicate with the sink. Therefore, they incur much overhead, loss of energy and reduced target tracking accuracy. In this paper, we have proposed a clustering algorithm, where distributed cluster heads coordinate their member nodes in optimizing the active sensing and communication directions of the nodes, precisely determining the target location by aggregating reported sensing data from multiple nodes and transferring the resultant location information to the sink. Thus, the proposed target tracking mechanism minimizes the sensing redundancy and maximizes the number of sleeping nodes in the network. We have also investigated the dynamic approach of activating sleeping nodes on-demand so that the moving target tracking accuracy can be enhanced while maximizing the network lifetime. We have carried out our extensive simulations in ns-3, and the results show that the proposed mechanism achieves higher performance compared to the state-of-the-art works.
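As a minimal illustration of the cluster-head aggregation step, here is a generic weighted-centroid fusion of member reports; this is an assumed fusion rule for illustration, not the paper's localization algorithm.

```python
import numpy as np

def fuse_target_location(sensor_pos, strengths):
    # A cluster head combines member reports: sensors with stronger readings
    # (presumably nearer the target) get proportionally more weight.
    w = np.asarray(strengths, dtype=float)
    pos = np.asarray(sensor_pos, dtype=float)
    return (pos * w[:, None]).sum(axis=0) / w.sum()
```

Aggregating at the cluster head means only the fused estimate, not every raw reading, travels toward the sink, which is the energy saving the abstract describes.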
NASA Astrophysics Data System (ADS)
Humphries, Nicolas E.
2015-09-01
The comprehensive review of Lévy patterns observed in the moves and pauses of a vast array of organisms by Reynolds [1] makes clear a need to attempt to unify phenomena to understand how organism movement may have evolved. However, I would contend that the research on Lévy 'movement patterns' we detect in time series of animal movements has to a large extent been misunderstood. The statistical techniques, such as Maximum Likelihood Estimation, used to detect these patterns look only at the statistical distribution of move step-lengths and not at the actual pattern, or structure, of the movement path. The path structure is lost altogether when move step-lengths are sorted prior to analysis. Likewise, the simulated movement paths, with step-lengths drawn from a truncated power law distribution in order to test characteristics of the path, such as foraging efficiency, in no way match the actual paths, or trajectories, of real animals. These statistical distributions are, therefore, null models of searching or foraging activity. What has proved surprising about these step-length distributions is the extent to which they improve the efficiency of random searches over simple Brownian motion. It has been shown unequivocally that a power law distribution of move step lengths is more efficient, in terms of prey items located per unit distance travelled, than any other distribution of move step-lengths so far tested (up to 3 times better than Brownian), and over a range of prey field densities spanning more than 4 orders of magnitude [2].
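The truncated power-law null model referred to here is easy to state concretely. The sketch below samples step lengths l ~ l^(-mu) on [lmin, lmax] by inverse-CDF and runs a crude destructive search that detects targets only at step endpoints (a simplification: published analyses check detection along the whole path); all parameter values are illustrative.

```python
import numpy as np

def levy_steps(n, mu=2.0, lmin=1.0, lmax=1000.0, rng=None):
    # Inverse-CDF sampling of a truncated power law p(l) ~ l^(-mu),
    # lmin <= l <= lmax (assumes mu != 1).
    rng = np.random.default_rng() if rng is None else rng
    a, b = lmin ** (1.0 - mu), lmax ** (1.0 - mu)
    return (a + rng.random(n) * (b - a)) ** (1.0 / (1.0 - mu))

def search_efficiency(steps, targets, radius, rng):
    # Crude 2D destructive search: a target within `radius` of a step's
    # endpoint counts as found and is removed.
    pos, found, dist = np.zeros(2), 0, 0.0
    remaining = np.asarray(targets, float).copy()
    for l in steps:
        theta = rng.random() * 2.0 * np.pi
        pos = pos + l * np.array([np.cos(theta), np.sin(theta)])
        dist += l
        if len(remaining):
            hit = np.linalg.norm(remaining - pos, axis=1) <= radius
            found += int(hit.sum())
            remaining = remaining[~hit]
    return found / dist   # prey items located per unit distance travelled
```

Note that this samples only the statistical distribution of step lengths, which is exactly the author's point: it is a null model of searching, not a reconstruction of any real animal's path.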
Efficient packing of patterns in sparse distributed memory by selective weighting of input bits
NASA Technical Reports Server (NTRS)
Kanerva, Pentti
1991-01-01
When a set of patterns is stored in a distributed memory, any given storage location participates in the storage of many patterns. From the perspective of any one stored pattern, the other patterns act as noise, and such noise limits the memory's storage capacity. The more similar the retrieval cues for two patterns are, the more the patterns interfere with each other in memory, and the harder it is to separate them on retrieval. A method is described of weighting the retrieval cues to reduce such interference and thus to improve the separability of patterns that have similar cues.
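To make the setting concrete, here is a toy counter-based sparse distributed memory with a weighted Hamming activation rule in the spirit of the abstract; the dimensions, activation radius, and the particular weighting scheme are my assumptions, not Kanerva's exact formulation.

```python
import numpy as np

class SDM:
    """Toy Kanerva-style sparse distributed memory with weighted cues."""

    def __init__(self, n_locations=2000, dim=256, radius=112, seed=0):
        rng = np.random.default_rng(seed)
        self.addr = rng.integers(0, 2, (n_locations, dim))      # hard locations
        self.ctr = np.zeros((n_locations, dim), dtype=np.int64)  # bit counters
        self.radius, self.dim = radius, dim

    def _active(self, cue, w=None):
        # Weighted Hamming distance: heavily weighted cue bits must match more
        # exactly, which helps separate patterns whose cues are similar.
        w = np.ones(self.dim) if w is None else np.asarray(w, float)
        dist = (self.addr != cue) @ w
        return dist <= self.radius / self.dim * w.sum()

    def write(self, addr, data, w=None):
        act = self._active(addr, w)
        self.ctr[act] += 2 * np.asarray(data) - 1   # +1 per 1-bit, -1 per 0-bit

    def read(self, cue, w=None):
        act = self._active(cue, w)
        return (self.ctr[act].sum(axis=0) > 0).astype(int)
```

With uniform weights this reduces to the standard SDM; non-uniform weights implement the selective emphasis of informative cue bits that the abstract describes.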
NASA Astrophysics Data System (ADS)
Arya, L. D.; Koshti, Atul
2018-05-01
This paper investigates Distributed Generation (DG) capacity optimization at locations selected on the basis of an incremental voltage sensitivity criterion for sub-transmission networks. The Modified Shuffled Frog Leaping Algorithm (MSFLA) is used to optimize the DG capacity. An induction generator model of DG (wind-based generating units) has been considered, with the standard IEEE 30-bus test system used for the study. The obtained results are also validated against the shuffled frog leaping algorithm and a modified version of bare-bones particle swarm optimization (BBExp). MSFLA was found to be more efficient than the other two algorithms for the real power loss minimization problem.
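For orientation, a bare-bones shuffled frog leaping loop looks like the following; this is a generic SFLA sketch minimizing an arbitrary test function, not the authors' modified variant or their DG power-flow objective.

```python
import numpy as np

def sfla(f, dim=2, frogs=30, memeplexes=5, iters=60, lo=-5.0, hi=5.0, seed=0):
    # Generic shuffled frog leaping: sort the population, partition it into
    # memeplexes, improve each memeplex's worst frog, reshuffle, repeat.
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lo, hi, (frogs, dim))
    for _ in range(iters):
        pop = pop[np.argsort([f(x) for x in pop])]     # best first
        xg = pop[0].copy()                              # global best frog
        for m in range(memeplexes):
            idx = np.arange(m, frogs, memeplexes)       # interleaved "shuffle"
            mem_fit = [f(pop[i]) for i in idx]
            wi = idx[int(np.argmax(mem_fit))]           # memeplex worst
            xb = pop[idx[int(np.argmin(mem_fit))]].copy()  # memeplex best
            for target in (xb, xg):                     # leap locally, then globally
                cand = np.clip(pop[wi] + rng.random() * (target - pop[wi]), lo, hi)
                if f(cand) < f(pop[wi]):
                    pop[wi] = cand
                    break
            else:
                pop[wi] = rng.uniform(lo, hi, dim)      # censored: random new frog
    best = min(pop, key=f)
    return best, f(best)
```

In a DG-sizing application, `f` would wrap a power-flow evaluation returning real power loss for a candidate DG capacity vector, subject to the voltage sensitivity constraints.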
Discharge data assimilation in a distributed hydrologic model for flood forecasting purposes
NASA Astrophysics Data System (ADS)
Ercolani, G.; Castelli, F.
2017-12-01
Flood early warning systems benefit from accurate river flow forecasts, and data assimilation may improve their reliability. However, the actual enhancement that can be obtained in the operational practice should be investigated in detail and quantified. In this work we assess the benefits that the simultaneous assimilation of discharge observations at multiple locations can bring to flow forecasting through a distributed hydrologic model. The distributed model, MOBIDIC, is part of the operational flood forecasting chain of Tuscany Region in Central Italy. The assimilation system adopts a mixed variational-Monte Carlo approach to update efficiently initial river flow, soil moisture, and a parameter related to runoff production. The evaluation of the system is based on numerous hindcast experiments of real events. The events are characterized by significant rainfall that resulted in both high and relatively low flow in the river network. The area of study is the main basin of Tuscany Region, i.e. Arno river basin, which extends over about 8300 km2 and whose mean annual precipitation is around 800 mm. Arno's mainstream, with its nearly 240 km length, passes through major Tuscan cities, as Florence and Pisa, that are vulnerable to floods (e.g. flood of November 1966). The assimilation tests follow the usage of the model in the forecasting chain, employing the operational resolution in both space and time (500 m and 15 minutes respectively) and releasing new flow forecasts every 6 hours. The assimilation strategy is evaluated in respect to open loop simulations, i.e. runs that do not exploit discharge observations through data assimilation. We compare hydrographs in their entirety, as well as classical performance indexes, as error on peak flow and Nash-Sutcliffe efficiency. The dependence of performances on lead time and location is assessed. Results indicate that the operational forecasting chain can benefit from the developed assimilation system, although with a significant variability due to the specific characteristics of any single event, and with downstream locations more sensitive to observations than upstream sites.
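One of the scores mentioned, the Nash-Sutcliffe efficiency, compares forecast errors against the variance of the observed flows; a minimal implementation:

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    # NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2).
    # 1 is a perfect forecast; 0 means no better than the observed mean.
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - ((obs - sim) ** 2).sum() / ((obs - obs.mean()) ** 2).sum()
```

Computed per forecast location and lead time, this gives exactly the kind of comparison against open-loop runs described above.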
NASA Astrophysics Data System (ADS)
Gao, Tao; Li, Xin; Guo, Bingli; Yin, Shan; Li, Wenzhe; Huang, Shanguo
2017-07-01
Multipath provisioning is a survivable and resource-efficient solution against increasing link failures caused by natural or man-made disasters in elastic optical datacenter networks (EODNs). Nevertheless, the conventional multipath provisioning scheme is designed only for connecting a specific node pair. Moreover, the number of node-disjoint paths between any two nodes is limited by the network connectivity, which is fixed for a given topology. Recently, the concept of content connectivity in EODNs has been proposed, which guarantees that a user can be served by any datacenter hosting the required content, regardless of where it is located. From this new perspective, we propose a survivable multipath provisioning with content connectivity (MPCC) scheme, which is expected to improve spectrum efficiency and overall system survivability. We formulate the MPCC scheme as an Integer Linear Program (ILP) for the static traffic scenario and propose a heuristic approach for the dynamic traffic scenario. Furthermore, to adapt MPCC to variations in network state under dynamic traffic, we propose a dynamic content placement (DCP) strategy within the MPCC scheme that detects changes in the distribution of user requests and adjusts content locations dynamically. Simulation results indicate that the MPCC scheme reduces spectrum consumption by over 20% compared with the conventional multipath provisioning scheme in the static traffic scenario. In the dynamic traffic scenario, the MPCC scheme reduces spectrum consumption by over 20% and blocking probability by over 50% relative to the conventional scheme; meanwhile, benefiting from the DCP strategy, the MPCC scheme adapts well to changes in the distribution of user requests.
NASA Astrophysics Data System (ADS)
Gao, J. L.
2002-04-01
In this article, we present a system-level characterization of the energy consumption for sensor network application scenarios. We compute a power efficiency metric -- average watt-per-meter -- for each radio transmission and extend this local metric to find the global energy consumption. This analysis shows how overall energy consumption varies with transceiver characteristics, node density, data traffic distribution, and base-station location.
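To make the metric concrete, here is a sketch using the common first-order radio model; the model and its constants are assumptions for illustration, not necessarily the article's. A watt-per-meter view makes clear why multi-hop relaying pays off once the distance-dependent amplifier term dominates the fixed electronics cost.

```python
def tx_energy_per_bit(d, e_elec=50e-9, e_amp=100e-12, alpha=2):
    # First-order radio model (assumed): a fixed electronics cost plus a
    # path-loss amplifier term growing as d^alpha (alpha = 2 for free space).
    return e_elec + e_amp * d ** alpha

def energy_per_bit_meter(d):
    # "Watt-per-meter"-style local metric for a single hop of length d.
    return tx_energy_per_bit(d) / d
```

Under these constants the crossover sits near d = sqrt(2 * e_elec / e_amp), about 32 m: beyond it, two hops through a midpoint relay cost less energy per bit than one direct transmission; below it, the fixed per-hop electronics cost makes the direct link cheaper.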
Glass Lewis, Marquisette; Ekúndayò, Olúgbémiga T.
2017-01-01
Hysterectomy, the mainstay treatment for symptomatic uterine fibroids since 1895, has declined over the years, but it is still the first choice for many women. Since 1995, uterine artery embolization (UAE) has been shown by many researchers to be an effective treatment for uterine fibroids while allowing women to keep their uteri. The preponderance of data collection and research has focused on care quality in terms of efficiency and effectiveness, with little on location and viability related to care utilization, accessibility, and physical availability. The purpose of this study was to determine and compare the cost of UAE and classical abdominal hysterectomy with regard to race/ethnicity, region, and location. Data from the National Hospital Discharge Survey for 2004 through 2008 were accessed and analyzed for uterine artery embolization and hysterectomy. Frequency analyses were performed to determine the distribution of variables by race/ethnicity, location, region, insurance coverage, cost, and procedure. Based on frequency distributions of cost and length of stay, outliers were trimmed and categorized. Crosstabs were used to determine cost distributions by region, place/location, procedure, race, and primary payer. For abdominal hysterectomy, 9.8% of the sample were performed in rural locations across the country. However, for UAE, only seven procedures were performed nationally in rural locations in the same period; therefore, all inferential analyses and associations for UAE were assumed for urban locations only. The pattern differed from region to region regarding the volume of care (numbers of cases by location) and care cost. Comparing hysterectomy and UAE, the patterns indicate generally higher costs for UAE, with a mean cost difference of $4223.52. Of the hysterectomies performed for fibroids on Black women in the rural setting, 92.08% were in the South. Overall, the data analyzed in this examination indicated a significant disparity between rural and urban residence in both data collection and number of procedures conducted. Further research should determine the background to cost and care-location differentials between races and between rural and urban settings. Further, factors driving racial differences in the proportions of hysterectomies in the rural South should be identified in order to eliminate disparities. Data are needed on the prevalence of uterine fibroids in rural settings. PMID:29099026
A Simple and Automatic Method for Locating Surgical Guide Hole
NASA Astrophysics Data System (ADS)
Li, Xun; Chen, Ming; Tang, Kai
2017-12-01
Restoration-driven surgical guides are widely used in implant surgery. This study aims to provide a simple and valid method for automatically locating the surgical guide hole, which can reduce the reliance on operator experience and improve the design efficiency and quality of surgical guides. Little prior literature exists on this topic, and this paper proposes a novel and simple method to solve the problem. A local coordinate system for each objective tooth is geometrically constructed in a CAD system. This coordinate system represents the dental anatomical features well, and the center axis of the objective tooth (coincident with the corresponding guide-hole axis) can be quickly evaluated in this coordinate system, completing the location of the guide hole. The proposed method has been verified by comparison with two benchmarks: manual operation by one skilled doctor with over 15 years of experience (used in most hospitals) and the automatic approach of the popular commercial package Simplant (used in a few hospitals). Both the benchmarks and the proposed method are analyzed in terms of their stress distribution when chewing and biting. The stress distribution is visually shown and plotted as a graph. The results show that the proposed method yields a much better stress distribution than the manual operation and a slightly better one than Simplant, which will significantly reduce the risk of cervical margin collapse and extend the wear life of the restoration.
Alhassan, Robert Kaba; Nketiah-Amponsah, Edward; Akazili, James; Spieker, Nicole; Arhinful, Daniel Kojo; Rinke de Wit, Tobias F
2015-01-01
Despite improvements in a number of health outcome indicators partly due to the National Health Insurance Scheme (NHIS), Ghana is unlikely to attain all its health-related millennium development goals before the end of 2015. Inefficient use of available limited resources has been cited as a contributory factor for this predicament. This study sought to explore efficiency levels of NHIS-accredited private and public health facilities; ascertain factors that account for differences in efficiency and determine the association between quality care and efficiency levels. The study is a cross-sectional survey of NHIS-accredited primary health facilities (n = 64) in two regions in southern Ghana. Data Envelopment Analysis was used to estimate technical efficiency of sampled health facilities while Tobit regression was employed to predict factors associated with efficiency levels. Spearman correlation test was performed to determine the association between quality care and efficiency. Overall, 20 out of the 64 health facilities (31 %) were optimally efficient relative to their peers. Out of the 20 efficient facilities, 10 (50 %) were Public/government owned facilities; 8 (40 %) were Private-for-profit facilities and 2 (10 %) were Private-not-for-profit/Mission facilities. Mission (Coef. = 52.1; p = 0.000) and Public (Coef. = 42.9; p = 0.002) facilities located in the Western region (predominantly rural) had higher odds of attaining the 100 % technical efficiency benchmark than those located in the Greater Accra region (largely urban). No significant association was found between technical efficiency scores of health facilities and many technical quality care proxies, except in overall quality score per the NHIS accreditation data (Coef. = -0.3158; p < 0.05) and SafeCare Essentials quality score on environmental safety for staff and patients (Coef. = -0.2764; p < 0.05) where the association was negative. 
The findings suggest some level of wastage of health resources in many healthcare facilities, especially those located in urban areas. The Ministry of Health and relevant stakeholders should undertake more effective need analysis to inform resource allocation, distribution and capacity building to promote efficient utilization of limited resources without compromising quality care standards.
ENERGY EFFICIENCY UPGRADES FOR SANITATION FACILITIES IN SELAWIK, AK FINAL REPORT
DOE Office of Scientific and Technical Information (OSTI.GOV)
POLLIS, REBECCA
2014-10-17
The Native Village of Selawik is a federally recognized Alaskan tribe, located at the mouth of the Selawik River, about 90 miles east of Kotzebue in northwest Alaska. Due to the community’s rural location and cold climate, it is common for electric rates to be four times higher than the cost urban residents pay. These high energy costs were the driving factor for Selawik pursuing funding from the Department of Energy in order to achieve significant energy cost savings. The main objective of the project was to improve the overall energy efficiency of the water treatment/distribution and sewer collection systems in Selawik by implementing the retrofit measures identified in a previously conducted utility energy audit. One purpose for the proposed improvements was to enable the community to realize significant savings associated with the cost of energy. Another purpose of the upgrades was to repair the vacuum sewer system on the west side of Selawik to prevent future freeze-up problems during winter months.
Individual analyses of Lévy walk in semi-free ranging Tonkean macaques (Macaca tonkeana).
Sueur, Cédric; Briard, Léa; Petit, Odile
2011-01-01
Animals adapt their movement patterns to their environment in order to maximize their efficiency when searching for food. The Lévy walk and the Brownian walk are two types of random movement found in different species. Studies have shown that these random movements can switch from a Brownian to a Lévy walk according to the size distribution of food patches. However, no study to date has analysed how characteristics such as sex, age, dominance or body mass affect the movement patterns of an individual. In this study we used the maximum likelihood method to examine the nature of the distributions of step lengths and waiting times, and assessed how these distributions are influenced by the age and sex of group members in a semi-free-ranging group of ten Tonkean macaques. Individuals differed greatly in their activity budgets and movement patterns. We found an effect of the age and sex of individuals on the power-law distributions of their step lengths and waiting times. Males and old individuals displayed a higher proportion of long trajectories than females and young ones. As regards waiting times, females and old individuals displayed higher rates of long stationary periods than males and young individuals. These movement patterns resembling random walks can probably be explained by the animals moving from one location to other known locations. The power-law distribution of step lengths might be due to a power-law distribution of food patches in the enclosure, while the power-law distribution of waiting times might be due to the power-law distribution of patch sizes.
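The maximum-likelihood step for a continuous power-law tail can be sketched with the standard Hill-form estimator, mu_hat = 1 + n / Σ ln(x_i / x_min). The synthetic step lengths and the chosen exponent below are illustrative, not the macaque data:

```python
import math
import random

def powerlaw_mle_exponent(steps, x_min):
    """MLE of the exponent mu of a continuous power law p(x) ~ x^(-mu)
    for x >= x_min: mu_hat = 1 + n / sum(ln(x_i / x_min))."""
    tail = [x for x in steps if x >= x_min]
    n = len(tail)
    return 1.0 + n / sum(math.log(x / x_min) for x in tail)

# Draw synthetic steps from a power law with mu = 2 by inverse-transform
# sampling: x = x_min * (1 - u)^(-1/(mu - 1)) for u uniform on [0, 1).
random.seed(0)
mu_true, x_min = 2.0, 1.0
steps = [x_min * (1.0 - random.random()) ** (-1.0 / (mu_true - 1.0))
         for _ in range(20000)]
print(round(powerlaw_mle_exponent(steps, x_min), 2))  # close to 2.0
```

Fitting this estimator per individual, and comparing the exponents across age and sex classes, is the kind of analysis the study performs.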
Remote Sensing Assessment of Lunar Resources: We Know Where to Go to Find What We Need
NASA Technical Reports Server (NTRS)
Gillis, J. J.; Taylor, G. J.; Lucey, P. G.
2004-01-01
The utilization of space resources is necessary not only to foster the growth of human activities in space, but is essential to the President's vision of a "sustained and affordable human and robotic program to explore the solar system and beyond." The distribution of resources will shape planning for permanent settlements by affecting decisions about where to locate a settlement. Mapping the location of such resources, however, is not the limiting factor in selecting a site for a lunar base. It is indecision about which resources to use that leaves the location uncertain. A wealth of remotely sensed data exists that can be used to identify targets for future detailed exploration. Thus, the future of space resource utilization predominantly rests upon developing a strategy for resource exploration and efficient methods of extraction.
U.S. Geological Survey science for the Wyoming Landscape Conservation Initiative—2014 annual report
Bowen, Zachary H.; Aldridge, Cameron L.; Anderson, Patrick J.; Assal, Timothy J.; Bartos, Timothy T.; Biewick, Laura R; Boughton, Gregory K.; Chalfoun, Anna D.; Chong, Geneva W.; Dematatis, Marie K.; Eddy-Miller, Cheryl A.; Garman, Steven L.; Germaine, Stephen S.; Homer, Collin G.; Huber, Christopher; Kauffman, Matthew J.; Latysh, Natalie; Manier, Daniel; Melcher, Cynthia P.; Miller, Alexander; Miller, Kirk A.; Olexa, Edward M.; Schell, Spencer; Walters, Annika W.; Wilson, Anna B.; Wyckoff, Teal B.
2015-01-01
Finally, capabilities of the WLCI Web site and the USGS ScienceBase infrastructure were maintained and upgraded to help ensure access to and efficient use of all the WLCI data, products, assessment tools, and outreach materials that have been developed. Of particular note is the completion of three Web applications developed for mapping (1) the 1900−2008 progression of oil and gas development; (2) the predicted distributions of Wyoming’s Species of Greatest Conservation Need; and (3) the locations of coal and wind energy production, sage-grouse distribution and core management areas, and alternative routes for transmission lines within the WLCI region. Collectively, these applications provide WLCI planners and managers with powerful tools for better understanding the distributions of wildlife species and potential alternatives for energy development.
Optimal actuator location within a morphing wing scissor mechanism configuration
NASA Astrophysics Data System (ADS)
Joo, James J.; Sanders, Brian; Johnson, Terrence; Frecker, Mary I.
2006-03-01
In this paper, the optimal location of a distributed network of actuators within a scissor wing mechanism is investigated. The analysis begins by developing a mechanical understanding of a single-cell representation of the mechanism. This cell contains four linkages connected by pin joints, a single actuator, two springs to represent the bidirectional behavior of a flexible skin, and an external load. Equilibrium equations are developed using static analysis and the principle of virtual work. An objective function is developed to maximize the efficiency of the unit cell model, defined as useful work over input work. Two constraints are imposed on this problem. The first is placed on the force transferred from the external source to the actuator, which should be less than the blocked actuator force. The other requires the ratio of output displacement to input displacement, i.e., the geometrical advantage (GA) of the cell, to be larger than a prescribed value. Sequential quadratic programming is used to solve the optimization problem. This process provides a systematic approach to identifying an optimum actuator location and avoids selecting the location by trial and error. Preliminary results show that optimum actuator locations can be selected from feasible regions according to the requirements of the problem, such as a higher GA, a higher efficiency, or a smaller force transferred from the external load. Results include analysis of single- and multiple-cell wing structures and some experimental comparisons.
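To make the constrained-placement idea concrete, here is a toy stand-in for the paper's formulation: a single rigid lever with an assumed quadratic actuator-loss model, a blocked-force constraint, and a minimum geometric advantage. All numbers and the loss model are invented for illustration, and a coarse grid search stands in for sequential quadratic programming:

```python
# Toy constrained search for an actuator position `a` along a lever of
# length L_link pinned at one end. Assumptions (not the authors' model):
# moment balance against an external tip load plus a skin spring, and an
# internal actuator loss proportional to force squared.
L_link = 1.0      # lever length (m)
F_ext = 10.0      # external load at the tip (N)
k_skin = 50.0     # flexible-skin spring stiffness (N/m)
theta = 0.05      # actuation rotation (rad)
F_block = 400.0   # blocked actuator force constraint (N)
GA_min = 2.0      # required geometric advantage: tip travel / actuator travel
c_loss = 1e-5     # assumed actuator loss coefficient

def actuator_force(a):
    # Moment balance: F_a * a = F_ext * L + k * (L * theta) * L
    return (F_ext * L_link + k_skin * L_link * theta * L_link) / a

def efficiency(a):
    useful = F_ext * L_link * theta         # work done against the load
    loss = c_loss * actuator_force(a) ** 2  # assumed internal loss
    return useful / (useful + loss)

best = None
for i in range(1, 500):
    a = i * L_link / 500.0                  # candidate actuator position
    ga = L_link / a
    if actuator_force(a) <= F_block and ga >= GA_min:
        if best is None or efficiency(a) > efficiency(best):
            best = a
print(round(best, 3), round(efficiency(best), 3))
```

The constraints carve out the feasible interval (blocked force bounds `a` from below, the GA requirement from above), and the search picks the most efficient feasible position, mirroring the structure of the paper's optimization.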
Bergman, Juraj; Mitrikeski, Petar T.
2015-01-01
Sporulation efficiency in the yeast Saccharomyces cerevisiae is a well-established model for studying quantitative traits. A variety of genes and nucleotides causing different sporulation efficiencies in laboratory, as well as in wild strains, has already been extensively characterised (mainly by reciprocal hemizygosity analysis and nucleotide exchange methods). We applied a different strategy in order to analyze the variation in sporulation efficiency of laboratory yeast strains. Coupling classical quantitative genetic analysis with simulations of phenotypic distributions (a method we call phenotype modelling) enabled us to obtain a detailed picture of the quantitative trait loci (QTLs) relationships underlying the phenotypic variation of this trait. Using this approach, we were able to uncover a dominant epistatic inheritance of loci governing the phenotype. Moreover, a molecular analysis of known causative quantitative trait genes and nucleotides allowed for the detection of novel alleles, potentially responsible for the observed phenotypic variation. Based on the molecular data, we hypothesise that the observed dominant epistatic relationship could be caused by the interaction of multiple quantitative trait nucleotides distributed across a 60-kb QTL region located on chromosome XIV and the RME1 locus on chromosome VII. Furthermore, we propose a model of molecular pathways which possibly underlie the phenotypic variation of this trait. PMID:27904371
Corgnet, Brice; Espín, Antonio M.; Hernán-González, Roberto
2017-01-01
Groups make decisions on both the production and the distribution of resources. These decisions typically involve a tension between increasing the total level of group resources (i.e. social efficiency) and distributing these resources among group members (i.e. individuals' relative shares). This is the case because the redistribution process may destroy part of the resources, thus resulting in socially inefficient allocations. Here we apply a dual-process approach to understand the cognitive underpinnings of this fundamental tension. We conducted a set of experiments to examine the extent to which different allocation decisions respond to intuition or deliberation. In a newly developed approach, we assess intuition and deliberation at both the trait level (using the Cognitive Reflection Test, henceforth CRT) and the state level (through the experimental manipulation of response times). To test for robustness, experiments were conducted in two countries: the USA and India. Despite absolute-level differences across countries, in both locations we show that: (i) time pressure and low CRT scores are associated with individuals' concerns for their relative shares and (ii) time delay and high CRT scores are associated with individuals' concerns for social efficiency. These findings demonstrate that deliberation favours social efficiency by overriding individuals' intuitive tendency to focus on relative shares. PMID:28386421
NASA Astrophysics Data System (ADS)
Barbarossa, S.; Farina, A.
A novel scheme for detecting moving targets with synthetic aperture radar (SAR) is presented. The proposed approach is based on the use of the Wigner-Ville distribution (WVD) for simultaneously detecting moving targets and estimating their kinematic motion parameters. The estimation plays a key role in focusing the target and correctly locating it with respect to the stationary background. The method has a number of advantages: (i) the detection is efficiently performed on the samples in the time-frequency domain provided by the WVD, without resorting to a bank of filters, each matched to possible values of the unknown target motion parameters; (ii) the estimation of the target motion parameters can be done in the same time-frequency domain by locating the line where the maximum energy of the WVD is concentrated. The approach is validated by both analytical and simulation means. In addition, the estimation of the target kinematic parameters and the corresponding image focusing are also demonstrated.
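The core idea, that a moving target traces a line of concentrated energy in the time-frequency plane whose slope encodes its motion parameter, can be sketched with a tiny pure-Python pseudo-WVD of an analytic linear chirp. The signal model, chirp rate, and grid sizes are illustrative stand-ins, not the paper's radar system:

```python
import cmath
import math

# Analytic linear chirp: instantaneous frequency f0 + rate*n. In the SAR
# setting the slope ("rate") of the WVD energy line maps to a kinematic
# parameter of the moving target. Values below are assumed for the toy.
N = 64
f0, rate = 0.05, 0.002  # cycles/sample
x = [cmath.exp(2j * math.pi * (f0 * n + 0.5 * rate * n * n)) for n in range(N)]

def wvd_slice(n, nbins=64):
    """Magnitudes of the discrete WVD W(n, f) on the grid f = k/(2*nbins):
    correlate x[n+m] with conj(x[n-m]) over the largest symmetric lag window."""
    half = min(n, N - 1 - n)
    mags = []
    for k in range(nbins):
        f = k / (2.0 * nbins)
        acc = sum(x[n + m] * x[n - m].conjugate()
                  * cmath.exp(-4j * math.pi * f * m)
                  for m in range(-half, half + 1))
        mags.append(abs(acc))
    return mags

# The per-slice spectral peak tracks the chirp's instantaneous frequency,
# so fitting a line through these peaks recovers the chirp rate.
estimates = []
for n in (16, 32, 48):
    mags = wvd_slice(n)
    k_hat = max(range(len(mags)), key=lambda k: mags[k])
    estimates.append(k_hat / 128.0)  # back to cycles/sample (nbins = 64)
print([round(f, 3) for f in estimates])
```

For a linear chirp the WVD concentrates almost ideally on the instantaneous-frequency line, which is why detection and parameter estimation can both be done on the same time-frequency samples.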
DISCRN: A Distributed Storytelling Framework for Intelligence Analysis.
Shukla, Manu; Dos Santos, Raimundo; Chen, Feng; Lu, Chang-Tien
2017-09-01
Storytelling connects entities (people, organizations) using their observed relationships to establish meaningful storylines. This can be extended to spatiotemporal storytelling that incorporates locations, time, and graph computations to enhance coherence and meaning. But when performed sequentially, these computations become a bottleneck because the massive number of entities makes the space and time complexity untenable. This article presents DISCRN, or distributed spatiotemporal ConceptSearch-based storytelling, a distributed framework for performing spatiotemporal storytelling. The framework extracts entities from microblogs and event data, and links these entities using a novel ConceptSearch to derive storylines in a distributed fashion utilizing the key-value pair paradigm. Performing these operations at scale allows deeper and broader analysis of storylines. The novel parallelization techniques speed up the generation and filtering of storylines on massive datasets. Experiments with microblog posts such as Twitter data and Global Database of Events, Language, and Tone events show the efficiency of the techniques in DISCRN.
Information architecture for a planetary 'exploration web'
NASA Technical Reports Server (NTRS)
Lamarra, N.; McVittie, T.
2002-01-01
'Web services' is a common way of deploying distributed applications whose software components and data sources may be in different locations, formats, languages, etc. Although such collaboration is not utilized significantly in planetary exploration, we believe there is significant benefit in developing an architecture in which missions could leverage each other's capabilities. We believe that an incremental deployment of such an architecture could contribute significantly to the evolution of increasingly capable, efficient, and even autonomous remote exploration.
A Prototype Decision Support System for the Location of Military Water Points.
1980-06-01
create an environment which is conducive to an efficient man/machine decision-making system. This could be accomplished by designing the operating...Figure 12. Flowchart of Program COMPUTE 50 Procedure. This Decision Support System was designed to be interactive. That is, it requests data from the user...Pg. 82-114, 1974. 24. Geoffrion, A.M. and G.W. Graves, "Multicommodity Distribution System Design by Benders Decomposition", Management Science, Vol. 20, Pg
NASA Astrophysics Data System (ADS)
Mahto, Navin Kumar; Choubey, Gautam; Suneetha, Lakka; Pandey, K. M.
2016-11-01
The two-equation standard k-ɛ turbulence model and the two-dimensional compressible Reynolds-Averaged Navier-Stokes (RANS) equations have been used to computationally simulate a double-cavity scramjet combustor. All the simulations are performed using the ANSYS 14 FLUENT code. The present numerical simulation for the double cavity has been validated by comparing its results with the available experimental data from the literature. The results are in good agreement with the schlieren image and the experimentally obtained pressure distribution curve, although the numerically obtained pressure distribution curve is under-predicted at 5 locations. Further, the effects of the cavity length-to-depth ratio and the Mach number on the combustion characteristics have been investigated. The present results show that there is an optimal cavity length-to-depth ratio for which the performance of the combustor significantly improves and efficient combustion takes place within the combustor region. Also, the location of the incident oblique shock shifts downstream of the H2 inlet as the Mach number increases. However, beyond a critical Mach number range of 2-2.5, further increases in Mach number result in lower combustion efficiency, which may degrade the performance of the combustor.
Spörlein, Christoph; Schlueter, Elmar
2018-01-01
Here we examine a conceptualization of immigrant assimilation that is based on the more general notion that distributional differences erode across generations. We explore this idea by reinvestigating the efficiency-equality trade-off hypothesis, which posits that stratified education systems educate students more efficiently at the cost of increasing inequality in overall levels of competence. In the context of ethnic inequality in math achievement, this study explores the extent to which an education system's characteristics are associated with ethnic inequality in terms of both the group means and group variances in achievement. Based on data from the 2012 PISA and mixed-effect location scale models, our analyses revealed two effects: on average, minority students had lower math scores than majority students, and minority students' scores were more concentrated at the lower end of the distribution. However, the ethnic inequality in the distribution of scores declined across generations. We did not find compelling evidence that stratified education systems increase mean differences in competency between minority and majority students. However, our analyses revealed that in countries with early educational tracking, minority students' math scores tended to cluster at the lower end of the distribution, regardless of compositional and school differences between majority and minority students.
Harada, Ryuhei; Nakamura, Tomotake; Shigeta, Yasuteru
2016-03-30
As an extension of the Outlier FLOODing (OFLOOD) method [Harada et al., J. Comput. Chem. 2015, 36, 763], the sparsity of outliers defined by a hierarchical clustering algorithm, FlexDice, was considered to achieve an efficient conformational search as sparsity-weighted "OFLOOD." In OFLOOD, FlexDice detects areas of sparse distribution as outliers. The outliers are regarded as candidates that have high potential to promote conformational transitions and are employed as initial structures for conformational resampling by restarting molecular dynamics simulations. When detecting outliers, FlexDice assigns each outlier a rank in the hierarchy, which relates to its sparsity in the distribution. In this study, we define outliers of low (first), medium (second), and high (third) rank. For instance, the first-ranked outliers are located in the conformational space away from the clusters (highly sparse distribution), whereas the third-ranked outliers lie near the clusters (a moderately sparse distribution). To achieve an efficient conformational search, resampling is performed from the outliers of a given rank. As demonstrations, this method was applied to several model systems: alanine dipeptide, Met-enkephalin, Trp-cage, T4 lysozyme, and glutamine binding protein. In each demonstration, the present method successfully reproduced transitions among metastable states. In particular, the first-ranked OFLOOD greatly accelerated the exploration of conformational space by expanding its edges. In contrast, the third-ranked OFLOOD intensively reproduced local transitions among neighboring metastable states. For quantitative evaluation of the sampled snapshots, free energy calculations were performed with a combination of umbrella samplings, providing rigorous landscapes of the biomolecules. © 2015 Wiley Periodicals, Inc.
Time-varying value of electric energy efficiency
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mims, Natalie A.; Eckman, Tom; Goldman, Charles
Electric energy efficiency resources save energy and may reduce peak demand. Historically, quantification of energy efficiency benefits has largely focused on the economic value of energy savings during the first year and lifetime of the installed measures. Due in part to the lack of publicly available research on end-use load shapes (i.e., the hourly or seasonal timing of electricity savings) and energy savings shapes, consideration of the impact of energy efficiency on peak demand reduction (i.e., capacity savings) has been more limited. End-use load research and the hourly valuation of efficiency savings are used for a variety of electricity planning functions, including load forecasting, demand-side management and evaluation, capacity and demand response planning, long-term resource planning, renewable energy integration, assessing potential grid modernization investments, establishing rates and pricing, and customer service. This study reviews existing literature on the time-varying value of energy efficiency savings, provides examples in four geographically diverse locations of how consideration of the time-varying value of efficiency savings impacts the calculation of power system benefits, and identifies future research needs to enhance the consideration of the time-varying value of energy efficiency in cost-effectiveness screening analysis. Findings from this study include:
- The time-varying value of individual energy efficiency measures varies across the locations studied because of the physical and operational characteristics of the individual utility system (e.g., summer or winter peaking, load factor, reserve margin) as well as the time periods during which savings from measures occur.
- Across the four locations studied, some of the largest capacity benefits from energy efficiency are derived from the deferral of transmission and distribution system infrastructure upgrades.
However, the deferred cost of such upgrades also exhibited the greatest range in value of all the components of avoided costs across the locations studied.
- Of the five energy efficiency measures studied, those targeting residential air conditioning in summer-peaking electric systems have the most significant added value when the total time-varying value is considered.
- The increased use of rooftop solar systems, storage, and demand response, and the addition of electric vehicles and other major new electricity-consuming end uses are anticipated to significantly alter the load shape of many utility systems in the future. Data used to estimate the impact of energy efficiency measures on electric system peak demands will need to be updated periodically to accurately reflect the value of savings as system load shapes change.
- Publicly available components of electric system costs avoided through energy efficiency are not uniform across states and utilities. Inclusion or exclusion of these components and differences in their value affect estimates of the time-varying value of energy efficiency.
- Publicly available data on end-use load and energy savings shapes are limited, are concentrated regionally, and should be expanded.
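The difference between flat and time-varying valuation that drives these findings can be shown in a few lines. The 4-hour toy system, prices, and savings shapes below are invented numbers, not data from the study:

```python
# Two measures save the same 1 MWh total, but with different hourly shapes.
# Valuing the savings hour by hour against avoided costs, rather than at a
# flat average, changes the answer substantially.
avoided_cost = [30.0, 45.0, 120.0, 50.0]   # $/MWh by hour (toy system)
ac_savings   = [0.0, 0.1, 0.8, 0.1]        # MWh saved: peak-coincident shape
flat_savings = [0.25, 0.25, 0.25, 0.25]    # MWh saved: flat shape

def value(savings):
    """Avoided-cost value of an hourly savings shape, in dollars."""
    return sum(s * c for s, c in zip(savings, avoided_cost))

print(value(ac_savings), value(flat_savings))
```

Concentrating the same total savings in the high-cost hour nearly doubles its avoided-cost value here, which is the mechanism behind air-conditioning measures scoring well in summer-peaking systems.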
Distributed Wireless Power Transfer With Energy Feedback
NASA Astrophysics Data System (ADS)
Lee, Seunghyun; Zhang, Rui
2017-04-01
Energy beamforming (EB) is a key technique for achieving efficient radio-frequency (RF) transmission enabled wireless energy transfer (WET). By optimally designing the waveforms from multiple energy transmitters (ETs) over the wireless channels, they can be constructively combined at the energy receiver (ER) to achieve an EB gain that scales with the number of ETs. However, the optimal design of EB waveforms requires accurate channel state information (CSI) at the ETs, which is challenging to obtain practically, especially in a distributed system with ETs at separate locations. In this paper, we study practical and efficient channel training methods to achieve optimal EB in a distributed WET system. We propose two protocols with and without centralized coordination, respectively, where distributed ETs either sequentially or in parallel adapt their transmit phases based on a low-complexity energy feedback from the ER. The energy feedback only depends on the received power level at the ER, where each feedback indicates one particular transmit phase that results in the maximum harvested power over a set of previously used phases. Simulation results show that the two proposed training protocols converge very fast in practical WET systems even with a large number of distributed ETs, while the protocol with sequential ET phase adaptation is also analytically shown to converge to the optimal EB design with perfect CSI by increasing the training time. Numerical results are also provided to evaluate the performance of the proposed distributed EB and training designs as compared to other benchmark schemes.
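The sequential phase-adaptation protocol can be sketched as a coordinate-ascent simulation in which each ET tries a small phase codebook and keeps the phase the ER reports as best. The channel model, codebook size, and number of sweeps are illustrative assumptions, not the paper's exact protocol:

```python
import cmath
import math
import random

# Toy distributed WET system: M single-antenna ETs, unit-gain channels with
# random phases (assumed model). The ER feeds back only which candidate
# phase gave the highest received power.
random.seed(1)
M = 8
h = [cmath.exp(2j * math.pi * random.random()) for _ in range(M)]
phases = [0.0] * M                          # current transmit phases
codebook = [2 * math.pi * q / 16 for q in range(16)]  # 16-level phase set

def received_power(ph):
    """Power harvested at the ER for a given set of transmit phases."""
    return abs(sum(cmath.exp(1j * p) * hk for p, hk in zip(ph, h))) ** 2

for _ in range(3):                          # a few sequential sweeps
    for i in range(M):
        # ET i tries every codebook phase; the ER's feedback identifies
        # the one maximizing received power, and ET i keeps it.
        best = max(codebook,
                   key=lambda c: received_power(phases[:i] + [c] + phases[i + 1:]))
        phases[i] = best

# Fraction of the ideal coherent-EB power (M^2 with unit-gain channels)
print(round(received_power(phases) / M ** 2, 3))
```

Because each update can only increase the received power, the sweep converges quickly toward coherent combining, limited only by the phase-quantization granularity, which mirrors the convergence behavior reported above.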
Heat transfer of phase-change materials in two-dimensional cylindrical coordinates
NASA Technical Reports Server (NTRS)
Labdon, M. B.; Guceri, S. I.
1981-01-01
A two-dimensional phase-change problem is solved numerically in cylindrical coordinates (r and z) by utilizing two Taylor series expansions for the temperature distributions in the neighborhood of the interface location. These two expansions form two polynomials in the r and z directions. For regions sufficiently far from the interface, the temperature field equations are solved numerically in the usual way and the results are coupled with the polynomials. The main advantages of this efficient approach include the ability to accept arbitrarily time-dependent boundary conditions of all types and arbitrarily specified initial temperature distributions. A modified approach using a single Taylor series expansion in two variables is also suggested.
Multi-Agent Patrolling under Uncertainty and Threats.
Chen, Shaofei; Wu, Feng; Shen, Lincheng; Chen, Jing; Ramchurn, Sarvapali D
2015-01-01
We investigate a multi-agent patrolling problem where information is distributed alongside threats in environments with uncertainties. Specifically, the information and threat at each location are independently modelled as multi-state Markov chains, whose states are not observed until the location is visited by an agent. While agents obtain information at a location, they may also suffer damage from the threat at that location. Therefore, the goal of the agents is to gather as much information as possible while mitigating the damage incurred. To address this challenge, we formulate the single-agent patrolling problem as a Partially Observable Markov Decision Process (POMDP) and propose a computationally efficient algorithm to solve this model. Building upon this, to compute patrols for multiple agents, the single-agent algorithm is extended for each agent with the aim of maximising its marginal contribution to the team. We empirically evaluate our algorithm on problems of multi-agent patrolling and show that it outperforms a baseline algorithm by up to 44% for 10 agents and by 21% for 15 agents in large domains.
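The location model described above, independent Markov chains for information and threat that are observed only on a visit, can be sketched with two-state chains and a fixed revisit period. The transition probabilities, rewards, and the simple periodic policy are illustrative stand-ins for the paper's POMDP policy:

```python
import random

# One patrol location: "information present" and "threat active" each evolve
# as independent two-state Markov chains (assumed transition matrices).
# A visit yields the info reward, resets the collected information, and
# incurs damage if the threat happens to be active.
P_info   = [[0.9, 0.1], [0.3, 0.7]]    # row = current state, col = next state
P_threat = [[0.95, 0.05], [0.5, 0.5]]
REWARD, DAMAGE = 1.0, 2.0

def step(state, P):
    """Advance a two-state chain one step."""
    return 1 if random.random() < P[state][1] else 0

def simulate_visit_every(k, horizon=100000):
    """Average net payoff per visit under a periodic visit-every-k policy."""
    random.seed(0)
    info = threat = 0
    total, visits = 0.0, 0
    for t in range(horizon):
        info, threat = step(info, P_info), step(threat, P_threat)
        if t % k == 0:
            total += REWARD * info - DAMAGE * threat
            info = 0                    # information is collected on a visit
            visits += 1
    return total / visits

print(round(simulate_visit_every(5), 2))
```

Sweeping `k` in this toy exposes the core trade-off the POMDP policy optimizes: visiting often collects little accumulated information while still risking threat damage, and visiting rarely forfeits information.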
Smith, Joshua B; Laatsch, Lauren J; Beasley, James C
2017-08-31
Scavenging plays an important role in shaping communities through inter- and intra-specific interactions. Although vertebrate scavenger efficiency and species composition are likely influenced by the spatial complexity of environments, heterogeneity in carrion distribution has largely been disregarded in scavenging studies. We tested this influence by experimentally placing juvenile bird carcasses on the ground and in nests in trees to simulate scenarios of nestling bird carrion availability. We used cameras to record scavengers removing carcasses and the elapsed time to removal. Carrion placed on the ground was scavenged by a greater diversity of vertebrates and at more than twice the rate of arboreal carcasses, suggesting arboreal carrion may represent an important resource to invertebrate scavengers, particularly in landscapes with efficient vertebrate scavenging communities. Nonetheless, six vertebrate species scavenged arboreal carcasses. Rat snakes (Elaphe obsoleta), which exclusively scavenged from trees, and turkey vultures (Cathartes aura) were the primary scavengers of arboreal carrion, suggesting such resources are potentially an important pathway of nutrient acquisition for some volant and scansorial vertebrates. Our results highlight the intricacy of carrion-derived food web linkages, and how consideration of spatial complexity in carcass distribution (i.e., arboreal) may reveal important pathways of nutrient acquisition by invertebrate and vertebrate scavenging guilds.
Bringing the CMS distributed computing system into scalable operations
NASA Astrophysics Data System (ADS)
Belforte, S.; Fanfani, A.; Fisk, I.; Flix, J.; Hernández, J. M.; Kress, T.; Letts, J.; Magini, N.; Miccio, V.; Sciabà, A.
2010-04-01
Establishing efficient and scalable operations of the CMS distributed computing system critically relies on the proper integration, commissioning and scale testing of the data and workload management tools, the various computing workflows and the underlying computing infrastructure, located at more than 50 computing centres worldwide and interconnected by the Worldwide LHC Computing Grid. Computing challenges periodically undertaken by CMS in the past years with increasing scale and complexity have revealed the need for a sustained effort on computing integration and commissioning activities. The Processing and Data Access (PADA) Task Force was established at the beginning of 2008 within the CMS Computing Program with the mandate of validating the infrastructure for organized processing and user analysis including the sites and the workload and data management tools, validating the distributed production system by performing functionality, reliability and scale tests, helping sites to commission, configure and optimize the networking and storage through scale testing data transfers and data processing, and improving the efficiency of accessing data across the CMS computing system from global transfers to local access. This contribution reports on the tools and procedures developed by CMS for computing commissioning and scale testing as well as the improvements accomplished towards efficient, reliable and scalable computing operations. The activities include the development and operation of load generators for job submission and data transfers with the aim of stressing the experiment and Grid data management and workload management systems, site commissioning procedures and tools to monitor and improve site availability and reliability, as well as activities targeted to the commissioning of the distributed production, user analysis and monitoring systems.
Shallow aquifer storage and recovery (SASR): Initial findings from the Willamette Basin, Oregon
NASA Astrophysics Data System (ADS)
Neumann, P.; Haggerty, R.
2012-12-01
A novel mode of shallow aquifer management could increase the volumetric potential and distribution of groundwater storage. We refer to this mode as shallow aquifer storage and recovery (SASR) and gauge its potential as a freshwater storage tool. In this mode, water is stored in hydraulically connected aquifers with minimal impact on surface water resources. Basin-scale numerical modeling provides a linkage between storage efficiency and hydrogeological parameters, which in turn guides rulemaking for how and where water can be stored. Increased understanding of regional groundwater-surface water interactions is vital to effective SASR implementation. In this study we (1) use a calibrated model of the central Willamette Basin (CWB), Oregon to quantify SASR storage efficiency at 30 locations; (2) estimate SASR volumetric storage potential throughout the CWB based on these results and pertinent hydrogeological parameters; and (3) introduce a methodology for management of SASR by such parameters. Of the three shallow sedimentary aquifers in the CWB, we find the moderately conductive, semi-confined middle sedimentary unit (MSU) to be most efficient for SASR. We estimate that users overlying 80% of the area in this aquifer could store injected water with greater than 80% efficiency, and find efficiencies of up to 95%. As a function of local production well yields, we estimate a maximum annual volumetric storage potential of 30 million m³ using SASR in the MSU. This volume constitutes roughly 9% of the current estimated summer pumpage in the Willamette Basin at large. The dimensionless quantity lag # (calculated using modeled specific capacity, distance to the nearest in-layer stream boundary, and injection duration) exhibits relatively high correlation with SASR storage efficiency at potential locations in the CWB. This correlation suggests that basic field measurements could guide SASR as an efficient shallow aquifer storage tool.
Lunar seismicity and tectonics
NASA Technical Reports Server (NTRS)
Lammlein, D. R.
1977-01-01
Results are presented for an analysis of all moonquake data obtained by the Apollo seismic stations during the period from November 1969 to May 1974 and a preliminary analysis of critical data obtained in the interval from May 1974 to May 1975. More accurate locations are found for previously located moonquakes, and additional sources are located. Consideration is given to the sources of natural seismic signals, lunar seismic activity, moonquake periodicities, tidal periodicities in moonquake activity, hypocentral locations and occurrence characteristics of deep and shallow moonquakes, lunar tidal control over moonquakes, lunar tectonism, the locations of moonquake belts, and the dynamics of the lunar interior. It is concluded that: (1) moonquakes are distributed in several major belts of global extent that coincide with regions of the youngest and most intense volcanic and tectonic activity; (2) lunar tides control both the small quakes occurring at great depth and the larger quakes occurring near the surface; (3) the moon has a much thicker lithosphere than earth; (4) a single tectonic mechanism may account for all lunar seismic activity; and (5) lunar tidal stresses are an efficient triggering mechanism for moonquakes.
NASA Astrophysics Data System (ADS)
Rosenwaks, Salman; Barmashenko, Boris D.; Bruins, Esther; Furman, Dov; Rybalkin, Victor; Katz, Arje
2002-05-01
Spatial distributions of the gain and temperature across the flow were studied for transonic and supersonic iodine injection schemes in a slit-nozzle supersonic chemical oxygen-iodine laser as a function of the iodine and secondary nitrogen flow rates, jet penetration parameter, and gas pumping rate. The mixing efficiency for supersonic injection of iodine is found to be much larger than for transonic injection, the maximum gain being approximately 0.65 percent/cm for both injection schemes. Measurements of the gain distribution as a function of the iodine molar flow rate nI2 were carried out. For transonic injection, the optimal value of nI2 at the flow centerline is smaller than that at off-axis locations. The temperature is distributed homogeneously across the flow, increasing only in the narrow boundary layers near the walls. Opening a leak downstream of the cavity in order to decrease the Mach number results in a decrease of the gain and an increase of the temperature; the mixing efficiency in this case is much larger than with the leak closed.
NASA Astrophysics Data System (ADS)
Orans, Ren
1990-10-01
Existing procedures used to develop marginal costs for electric utilities were not designed for applications in an increasingly competitive market for electric power. The utility's value of receiving power, or the cost of selling power, however, depends on the exact location of the buyer or seller, the magnitude of the power, and the period of time over which the power is used. Yet no electric utility in the United States has disaggregate marginal costs that reflect differences in costs due to the time, size, or location of the load associated with their power or energy transactions. The existing marginal costing methods used by electric utilities were developed in response to the Public Utilities Regulatory Policy Act (PURPA) in 1978. The "ratemaking standards" (Title 1) established by PURPA were primarily concerned with the appropriate segmentation of total revenues to various classes of service, designing time-of-use rating periods, and the promotion of efficient long-term resource planning. By design, the methods were very simple and inexpensive to implement. Now, more than a decade later, the costing issues facing electric utilities are becoming increasingly complex, and in many cases the benefits of developing more specific marginal costs will outweigh the costs of developing this information. This research develops a framework for estimating total marginal costs that vary by the size, timing, and location of changes in loads within an electric distribution system. To complement the existing work at the Electric Power Research Institute (EPRI) and Pacific Gas and Electric Company (PG&E) on estimating disaggregate generation and transmission capacity costs, this dissertation focuses on the estimation of distribution capacity costs.
While the costing procedure is suitable for the estimation of total (generation, transmission, and distribution) marginal costs, the empirical work focuses on the geographic disaggregation of marginal costs related to electric utility distribution investment. The study makes use of data from an actual distribution planning area, located within PG&E's service territory, to demonstrate the important characteristics of this new costing approach. The most significant result of this empirical work is that geographic differences in the cost of capacity in distribution systems can be as much as four times larger than current system-average utility estimates. Furthermore, lumpy capital investment patterns can lead to significant cost differences over time.
Achieving better cooling of turbine blades using numerical simulation methods
NASA Astrophysics Data System (ADS)
Inozemtsev, A. A.; Tikhonov, A. S.; Sendyurev, C. I.; Samokhvalov, N. Yu.
2013-02-01
A new design of the first-stage nozzle vane for the turbine of a prospective gas-turbine engine is considered. The blade's thermal state is numerically simulated in a conjugate formulation using the ANSYS CFX 13.0 software package. Critical locations in the blade design are determined from the distribution of heat fluxes, and measures aimed at achieving more efficient cooling are analyzed. A substantially lower (by 50-100°C) maximum metal temperature was achieved as a result of this work.
NASA Astrophysics Data System (ADS)
Silva, Augusto F. d.; Costa, Carlos; Abrantes, Pedro; Gama, Vasco; Den Boer, Ad
1998-07-01
This paper describes an integrated system designed to provide efficient means for DICOM-compliant cardiac imaging archival, transmission, and visualization, based on a communications backbone matching recent enabling telematic technologies such as Asynchronous Transfer Mode (ATM) and switched Local Area Networks (LANs). Within a distributed client-server framework, the system was conceived on a modality-based bottom-up approach, aiming at ultrafast access to short-term archives and seamless retrieval of cardiac video sequences throughout review stations located at the outpatient referral rooms, intensive and intermediate care units, and operating theaters.
Image-Based Modeling Reveals Dynamic Redistribution of DNA Damage into Nuclear Sub-Domains
Costes, Sylvain V; Ponomarev, Artem; Chen, James L; Nguyen, David; Cucinotta, Francis A; Barcellos-Hoff, Mary Helen
2007-01-01
Several proteins involved in the response to DNA double strand breaks (DSB) form microscopically visible nuclear domains, or foci, after exposure to ionizing radiation. Radiation-induced foci (RIF) are believed to be located where DNA damage occurs. To test this assumption, we analyzed the spatial distribution of 53BP1, phosphorylated ATM, and γH2AX RIF in cells irradiated with high and low linear energy transfer (LET) radiation. Since energy is randomly deposited along high-LET particle paths, RIF along these paths should also be randomly distributed. The probability to induce DSB can be derived from DNA fragment data measured experimentally by pulsed-field gel electrophoresis. We used this probability in Monte Carlo simulations to predict DSB locations in synthetic nuclei geometrically described by a complete set of human chromosomes, taking into account microscope optics from real experiments. As expected, simulations produced DNA-weighted random (Poisson) distributions. In contrast, the distributions of RIF obtained as early as 5 min after exposure to high LET (1 GeV/amu Fe) were non-random. This deviation from the expected DNA-weighted random pattern can be further characterized by “relative DNA image measurements.” This novel imaging approach shows that RIF were located preferentially at the interface between high and low DNA density regions, and were more frequent than predicted in regions with lower DNA density. The same preferential nuclear location was also measured for RIF induced by 1 Gy of low-LET radiation. This deviation from random behavior was evident only 5 min after irradiation for phosphorylated ATM RIF, while γH2AX and 53BP1 RIF showed pronounced deviations up to 30 min after exposure. These data suggest that DNA damage–induced foci are restricted to certain regions of the nucleus of human epithelial cells. It is possible that DNA lesions are collected in these nuclear sub-domains for more efficient repair. PMID:17676951
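The DNA-weighted random (Poisson) null model mentioned above can be illustrated with a minimal sketch (a hypothetical 1-D density profile, not the authors' chromosome-level simulation):

```python
import random

# Minimal sketch of a DNA-density-weighted random null: candidate DSB
# positions are drawn with probability proportional to the local DNA density,
# so denser chromatin accumulates proportionally more simulated damage sites.

def weighted_dsb_sample(density, n_breaks, seed=42):
    """Draw n_breaks voxel indices with probability proportional to density[i]."""
    rng = random.Random(seed)
    positions = list(range(len(density)))
    return rng.choices(positions, weights=density, k=n_breaks)

# Hypothetical profile: one high-density and one low-density region.
density = [5.0] * 50 + [1.0] * 50
hits = weighted_dsb_sample(density, 6000)
high = sum(1 for h in hits if h < 50)
# With a 5:1 density ratio over equal volumes, roughly 5/6 of the simulated
# breaks land in the dense half; an observed RIF pattern deviating from this
# expectation is what the paper calls non-random.
```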
Mayhew, Terry M; Mühlfeld, Christian; Vanhecke, Dimitri; Ochs, Matthias
2009-04-01
Detecting, localising and counting ultrasmall particles and nanoparticles in sub- and supra-cellular compartments are of considerable current interest in basic and applied research in biomedicine, bioscience and environmental science. For particles with sufficient contrast (e.g. colloidal gold, ferritin, heavy metal-based nanoparticles), visualization requires the high resolutions achievable by transmission electron microscopy (TEM). Moreover, if particles can be counted, their spatial distributions can be subjected to statistical evaluation. Whatever the level of structural organisation, particle distributions can be compared between different compartments within a given structure (cell, tissue and organ) or between different sets of structures (in, say, control and experimental groups). Here, a portfolio of stereology-based methods for drawing such comparisons is presented. We recognise two main scenarios: (1) section surface localisation, in which particles, exemplified by antibody-conjugated colloidal gold particles or quantum dots, are distributed at the section surface during post-embedding immunolabelling, and (2) section volume localisation (or full section penetration), in which particles are contained within the cell or tissue prior to TEM fixation and embedding procedures. Whatever the study aim or hypothesis, the methods for quantifying particles rely on the same basic principles: (i) unbiased selection of specimens by multistage random sampling, (ii) unbiased estimation of particle number and compartment size using stereological test probes (points, lines, areas and volumes), and (iii) statistical testing of an appropriate null hypothesis. To compare different groups of cells or organs, a simple and efficient approach is to compare the observed distributions of raw particle counts by a combined contingency table and chi-squared analysis. 
Compartmental chi-squared values making substantial contributions to total chi-squared values help identify where the main differences between distributions reside. Distributions between compartments in, say, a given cell type, can be compared using a relative labelling index (RLI) or relative deposition index (RDI) combined with a chi-squared analysis to test whether or not particles preferentially locate in certain compartments. This approach is ideally suited to analysing particles located in volume-occupying compartments (organelles or tissue spaces) or surface-occupying compartments (membranes) and expected distributions can be generated by the stereological devices of point, intersection and particle counting. Labelling efficiencies (number of gold particles per antigen molecule) in immunocytochemical studies can be determined if suitable calibration methods (e.g. biochemical assays of golds per membrane surface or per cell) are available. In addition to relative quantification for between-group and between-compartment comparisons, stereological methods also permit absolute quantification, e.g. total volumes, surfaces and numbers of structures per cell. Here, the utility, limitations and recent applications of these methods are reviewed.
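The combined contingency-table and chi-squared comparison, with per-compartment contributions and a relative labelling index (RLI), can be sketched as follows (particle counts and compartment fractions are hypothetical):

```python
# Illustrative sketch of the stereological comparison described above:
# observed gold-particle counts per compartment are tested against counts
# expected from compartment size (e.g. stereological point counts).
# Per-compartment chi-squared terms show where the difference resides,
# and RLI > 1 flags preferential labelling of a compartment.

def chi_squared_labelling(observed, expected_fraction):
    total = sum(observed)
    expected = [f * total for f in expected_fraction]
    terms = [(o - e) ** 2 / e for o, e in zip(observed, expected)]
    rli = [o / e for o, e in zip(observed, expected)]
    return sum(terms), terms, rli

# Hypothetical example: compartment sizes predict 50/30/20% of particles,
# but 70 of 100 gold particles fall in the first compartment.
chi2, terms, rli = chi_squared_labelling([70, 20, 10], [0.5, 0.3, 0.2])
# terms[0] = (70 - 50)^2 / 50 = 8.0 dominates the total, and rli[0] = 1.4
# indicates preferential labelling of the first compartment.
```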
Smart intimation and location of faults in distribution system
NASA Astrophysics Data System (ADS)
Hari Krishna, K.; Srinivasa Rao, B.
2018-04-01
Locating faults in the distribution system is one of the most complicated problems faced today. Identifying a fault's location and severity within a short time is required to provide continuous power supply, but fault identification and conveying that information to the operator are the biggest challenges in the distribution network. This paper proposes a fault-location method for the distribution system based on an Arduino Nano and a GSM module with a flame sensor. The main idea is to locate a fault in the distribution transformer by sensing the arc coming from the fuse element. Well-operated transmission and distribution systems play a key role in uninterrupted power supply; whenever a fault occurs, the time taken to locate and eliminate it must be reduced. The proposed design was achieved with a flame sensor and GSM module. Under faulty conditions, the system automatically sends an alert message to the distribution-system operator reporting the abnormal conditions near the transformer, the site code, and its exact location for possible power restoration.
Gibbon travel paths are goal oriented.
Asensio, Norberto; Brockelman, Warren Y; Malaivijitnond, Suchinda; Reichard, Ulrich H
2011-05-01
Remembering locations of food resources is critical for animal survival. Gibbons are territorial primates which regularly travel through small and stable home ranges in search of preferred, limited and patchily distributed resources (primarily ripe fruit). They are predicted to profit from an ability to memorize the spatial characteristics of their home range and may increase their foraging efficiency by using a 'cognitive map' either with Euclidean or with topological properties. We collected ranging and feeding data from 11 gibbon groups (Hylobates lar) to test their navigation skills and to better understand gibbons' 'spatial intelligence'. We calculated the locations at which significant travel direction changes occurred using the change-point direction test and found that these locations primarily coincided with preferred fruit sources. Within the limits of biologically realistic visibility distances observed, gibbon travel paths were more efficient in detecting known preferred food sources than a heuristic travel model based on straight travel paths in random directions. Because consecutive travel change-points were far from the gibbons' sight, planned movement between preferred food sources was the most parsimonious explanation for the observed travel patterns. Gibbon travel appears to connect preferred food sources as expected under the assumption of a good mental representation of the most relevant sources in a large-scale space.
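A change-point direction test in the spirit of the one applied above can be sketched as a simple turn-angle threshold (the study's actual statistic is more involved; the 45° threshold here is an assumption for illustration):

```python
import math

# Simplified sketch of a direction change-point test: walk along a travel
# path and flag points where the heading changes by more than a threshold.
# In the study, such change-points coincided with preferred fruit sources.

def heading(p, q):
    return math.atan2(q[1] - p[1], q[0] - p[0])

def direction_change_points(path, min_turn_deg=45.0):
    """Return indices of path points where travel direction turns sharply."""
    changes = []
    for i in range(1, len(path) - 1):
        turn = heading(path[i], path[i + 1]) - heading(path[i - 1], path[i])
        turn = (turn + math.pi) % (2 * math.pi) - math.pi  # wrap to [-pi, pi)
        if abs(math.degrees(turn)) >= min_turn_deg:
            changes.append(i)
    return changes

# A path that heads east, then turns 90 degrees north at point index 2.
path = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]
# direction_change_points(path) → [2]
```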
Aerts, Sam; Deschrijver, Dirk; Joseph, Wout; Verloock, Leen; Goeminne, Francis; Martens, Luc; Dhaene, Tom
2013-05-01
Human exposure to background radiofrequency electromagnetic fields (RF-EMF) has been increasing with the introduction of new technologies. There is a definite need for the quantification of RF-EMF exposure, but a robust exposure assessment is not yet possible, mainly due to the lack of a fast and efficient measurement procedure. In this article, a new procedure is proposed for accurately mapping the exposure to base station radiation in an outdoor environment based on surrogate modeling and sequential design, an entirely new approach in the domain of dosimetry for human RF exposure. We tested our procedure in an urban area of about 0.04 km² for Global System for Mobile Communications (GSM) technology at 900 MHz (GSM900) using a personal exposimeter. Fifty measurement locations were sufficient to obtain a coarse street exposure map, locating regions of high and low exposure; 70 measurement locations were sufficient to characterize the electric field distribution in the area and build an accurate predictive interpolation model. Hence, accurate GSM900 downlink outdoor exposure maps (for use in, e.g., governmental risk communication and epidemiological studies) are developed by combining the proven efficiency of sequential design with the speed of exposimeter measurements and their ease of handling. Copyright © 2013 Wiley Periodicals, Inc.
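The sequential-design idea can be sketched with a much simpler space-filling heuristic plus an inverse-distance-weighted surrogate (both are stand-ins for the study's surrogate-model-driven selection criterion):

```python
import math

# Minimal sketch: each new measurement location is the candidate farthest
# from all locations measured so far (a space-filling heuristic), and an
# inverse-distance-weighted (IDW) surrogate interpolates the field between
# measured points.

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def next_location(candidates, measured):
    """Pick the candidate point farthest from every measured point."""
    return max(candidates, key=lambda c: min(dist(c, m) for m in measured))

def idw_predict(x, samples, power=2.0):
    """Inverse-distance-weighted estimate of the field at x."""
    num = den = 0.0
    for loc, value in samples:
        d = dist(x, loc)
        if d == 0:
            return value
        w = d ** -power
        num += w * value
        den += w
    return num / den

grid = [(i, j) for i in range(5) for j in range(5)]  # candidate street grid
measured = [(0, 0)]
for _ in range(3):            # sequentially add three measurement locations
    measured.append(next_location(grid, measured))
```

On this toy grid the selected points trace out the corners, the farthest-point pattern a space-filling design produces before refining the interior.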
Lafontaine, Sean J V; Sawada, M; Kristjansson, Elizabeth
2017-02-16
With the expansion and growth of research on neighbourhood characteristics, there is an increased need for direct observational field audits. Herein, we introduce a novel direct observational audit method and systematic social observation instrument (SSOI) for efficiently assessing neighbourhood aesthetics over large urban areas. Our audit method uses spatial random sampling stratified by residential zoning and incorporates both mobile geographic information systems technology and virtual environments. The reliability of our method was tested in two ways: first, in 15 Ottawa neighbourhoods, we compared results at audited locations over two subsequent years; and second, we audited every residential block (167 blocks) in one neighbourhood and compared the distribution of SSOI aesthetics index scores with results from the randomly audited locations. Finally, we present interrater reliability and consistency results on all observed items. The observed neighbourhood average aesthetics index score estimated from four or five stratified random audit locations is sufficient to characterize the average neighbourhood aesthetics. The SSOI was internally consistent and demonstrated good to excellent interrater reliability. At the neighbourhood level, aesthetics is positively related to SES and physical activity and negatively correlated with BMI. The proposed approach to direct neighbourhood auditing performs sufficiently and has the advantage of financial and temporal efficiency when auditing a large city.
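Spatial random sampling stratified by residential zoning can be sketched as proportional allocation across strata (zone names and block counts here are hypothetical):

```python
import random

# Sketch of stratified spatial random sampling: audit locations are drawn
# independently within each zoning stratum, in proportion to the stratum's
# share of residential blocks.

def stratified_sample(blocks_by_zone, n_total, seed=1):
    rng = random.Random(seed)
    total = sum(len(b) for b in blocks_by_zone.values())
    sample = []
    for zone, blocks in blocks_by_zone.items():
        k = round(n_total * len(blocks) / total)   # proportional allocation
        sample.extend((zone, b) for b in rng.sample(blocks, k))
    return sample

# Hypothetical zoning strata with 60, 30, and 10 residential blocks.
zones = {"low_density": list(range(60)),
         "medium_density": list(range(30)),
         "high_density": list(range(10))}
audit = stratified_sample(zones, n_total=10)
# Proportional allocation yields 6 low-, 3 medium-, and 1 high-density block.
```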
Duct retrofit strategy to complement a modulating furnace.
DOE Office of Scientific and Technical Information (OSTI.GOV)
ANDREWS,J.W.
2002-10-02
Some recent work (Walker 2001, Andrews 2002) has indicated that installing a modulating furnace in a conventional duct system may, in many cases, result in a significant degradation in thermal distribution efficiency. The fundamental mechanism was pointed out nearly two decades ago (Andrews and Krajewski 1985). The problem occurs in duct systems that are less-than-perfectly insulated (e.g., R-4 duct wrap) and are located outside the conditioned space. It stems from the fact that when the airflow rate is reduced, as it will be when the modulating furnace reduces its heat output rate, the supply air will have a longer residence time in the ducts and will therefore lose a greater percentage of its heat by conduction than it did at the higher airflow rate. The impact of duct leakage, on the other hand, is not expected to change very much under furnace modulation. The pressures in the duct system will be reduced when the airflow rate is reduced, thus reducing the leakage per unit time. This is balanced by the fact that the operating time will increase in order to meet the same heating load as with the conventional furnace operating at higher output and airflow rates. The balance would be exact if the exponent in the pressure vs. airflow equation were the same as that in the pressure vs. duct leakage equation. Since the pressure-airflow exponent is usually ≈0.5 and the pressure-leakage exponent is usually ≈0.6, the leakage loss as a fraction of the load should be slightly lower for the modulating furnace. The difference, however, is expected to be small, determined as it is by a function with an exponent equal to the difference between the above two exponents, or ≈0.1. The negative impact of increased thermal conduction losses from the duct system may be partially offset by improved efficiency of the modulating furnace itself.
Also, the modulating furnace will cycle on and off less often than a single-capacity model, and this may add a small amount (probably in the range 1%-3%) to the thermal distribution efficiency. Nevertheless, the effect of furnace modulation on thermal distribution efficiency, both as calculated and as measured in the laboratory, is quite significant. Although exact quantification of the impact will depend on factors such as climate and the location of the ducts within the structure, impacts in the 15%-25% range are to be expected for ducts located outside the conditioned space, as most residential duct systems are. This is too large a handicap to ignore.
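The exponent argument above is easy to check numerically (illustrative flow ratio; exponents as stated in the text):

```python
# Back-of-envelope check of the leakage argument: airflow Q ∝ P^0.5, so duct
# pressure P ∝ Q^2; leakage rate ∝ P^0.6; and run time scales as 1/Q for the
# same heat load. Seasonal leakage therefore scales as P^(0.6 - 0.5) = P^0.1.

flow_ratio = 0.5                      # modulating furnace halves the airflow
pressure_ratio = flow_ratio ** 2      # P ∝ Q^2  (from Q ∝ P^0.5)
leak_rate_ratio = pressure_ratio ** 0.6
runtime_ratio = 1.0 / flow_ratio      # same load delivered at half the rate
seasonal_leakage_ratio = leak_rate_ratio * runtime_ratio
# seasonal_leakage_ratio = 0.25 ** 0.1 ≈ 0.87: leakage loss falls only about
# 13%, the "slightly lower" fractional leakage the text predicts.
```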
Achieve Location Privacy-Preserving Range Query in Vehicular Sensing
Lu, Rongxing; Ma, Maode; Bao, Haiyong
2017-01-01
Modern vehicles are equipped with a plethora of on-board sensors and large on-board storage, which enables them to gather and store various local-relevant data. However, the wide application of vehicular sensing has its own challenges, among which location-privacy preservation and data query accuracy are two critical problems. In this paper, we propose a novel range query scheme, which helps the data requester to accurately retrieve the sensed data from the distributive on-board storage in vehicular ad hoc networks (VANETs) with location privacy preservation. The proposed scheme exploits structured scalars to denote the locations of data requesters and vehicles, and achieves the privacy-preserving location matching with the homomorphic Paillier cryptosystem technique. Detailed security analysis shows that the proposed range query scheme can successfully preserve the location privacy of the involved data requesters and vehicles, and protect the confidentiality of the sensed data. In addition, performance evaluations are conducted to show the efficiency of the proposed scheme, in terms of computation delay and communication overhead. Specifically, the computation delay and communication overhead are not dependent on the length of the scalar, and they are only proportional to the number of vehicles. PMID:28786943
Achieve Location Privacy-Preserving Range Query in Vehicular Sensing.
Kong, Qinglei; Lu, Rongxing; Ma, Maode; Bao, Haiyong
2017-08-08
Modern vehicles are equipped with a plethora of on-board sensors and large on-board storage, which enables them to gather and store various local-relevant data. However, the wide application of vehicular sensing has its own challenges, among which location-privacy preservation and data query accuracy are two critical problems. In this paper, we propose a novel range query scheme, which helps the data requester to accurately retrieve the sensed data from the distributive on-board storage in vehicular ad hoc networks (VANETs) with location privacy preservation. The proposed scheme exploits structured scalars to denote the locations of data requesters and vehicles, and achieves the privacy-preserving location matching with the homomorphic Paillier cryptosystem technique. Detailed security analysis shows that the proposed range query scheme can successfully preserve the location privacy of the involved data requesters and vehicles, and protect the confidentiality of the sensed data. In addition, performance evaluations are conducted to show the efficiency of the proposed scheme, in terms of computation delay and communication overhead. Specifically, the computation delay and communication overhead are not dependent on the length of the scalar, and they are only proportional to the number of vehicles.
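The homomorphic property the scheme relies on can be demonstrated with a toy Paillier implementation (demo-sized primes and g = n + 1; real deployments use keys of 1024 bits or more, and the paper's full range-query protocol layers location matching on top of this primitive):

```python
from math import gcd

# Toy Paillier cryptosystem (insecure demo parameters) showing the property
# the range-query scheme exploits: multiplying two ciphertexts yields an
# encryption of the sum of the plaintexts, so encrypted location scalars can
# be combined without revealing either one.

def lcm(a, b):
    return a * b // gcd(a, b)

p, q = 47, 59                  # demo primes only
n = p * q
n2 = n * n
lam = lcm(p - 1, q - 1)
mu = pow(lam, -1, n)           # valid because we take g = n + 1

def encrypt(m, r):
    """c = (1 + n)^m * r^n mod n^2, with r coprime to n."""
    return (pow(1 + n, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n) * mu % n

c1 = encrypt(30, r=17)         # e.g. a requester's location scalar
c2 = encrypt(12, r=23)         # e.g. a vehicle's location scalar
# Homomorphic addition: the product of ciphertexts decrypts to the sum.
assert decrypt((c1 * c2) % n2) == 30 + 12
```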
NASA Astrophysics Data System (ADS)
Linden, H. R.; Singer, S. F.
2001-12-01
It is generally agreed that hydrogen is an ideal energy source, both for transportation and for the generation of electric power. Through the use of fuel cells, hydrogen becomes a high-efficiency carbon-free power source for electromotive transport; with the help of regenerative braking, cars should be able to reach triple the current mileage. Many have visualized a distributed electric supply network with decentralized generation based on fuel cells. Fuel cells can provide high generation efficiencies by overcoming the fundamental thermodynamic limitation imposed by the Carnot cycle. Further, by using the heat energy of the high-temperature fuel cell in co-generation, one can achieve total thermal efficiencies approaching 100 percent, as compared to present-day average power-plant efficiencies of around 35 percent. In addition to reducing CO2 emissions, distributed generation based on fuel cells also eliminates the tremendous release of waste heat into the environment, the need for cooling water, and related limitations on siting. Manufacture of hydrogen remains a key problem, but there are many technical solutions that come into play whenever the cost equations permit. One can visualize both central and local hydrogen production. Initially, reforming of abundant natural gas into mixtures of 80% H2 and 20% CO2 provides a relatively low-emission source of hydrogen. Conventional fossil-fuel plants and nuclear plants can become hydrogen factories using both high-temperature topping cycles and electrolysis of water. Hydro-electric plants can manufacture hydrogen by electrolysis. Later, photovoltaic and wind farms could be set up at favorable locations around the world as hydrogen factories. If perfected, photovoltaic hydrogen production through catalysis would use solar photons most efficiently.
For both wind and PV, hydrogen production solves some crucial problems: intermittency of wind and of solar radiation, storage of energy, and use of locations that are not desirable for other economic uses. A hydrogen-based energy future is inevitable as low-cost sources of petroleum and natural gas become depleted with time. However, such fundamental changes in energy systems will take time to accomplish. Coal may survive for a longer time but may not be able to compete as the century draws to a close.
Chromosome Model reveals Dynamic Redistribution of DNA Damage into Nuclear Sub-domains
NASA Technical Reports Server (NTRS)
Costes, Sylvain V.; Ponomarev, Artem; Chen, James L.; Cucinotta, Francis A.; Barcellos-Hoff, Helen
2007-01-01
Several proteins involved in the response to DNA double strand breaks (DSB) form microscopically visible nuclear domains, or foci, after exposure to ionizing radiation. Radiation-induced foci (RIF) are believed to be located where DNA damage is induced. To test this assumption, we analyzed the spatial distribution of 53BP1, phosphorylated ATM and gammaH2AX RIF in cells irradiated with high linear energy transfer (LET) radiation. Since energy is randomly deposited along high-LET particle paths, RIF along these paths should also be randomly distributed. The probability of inducing a DSB can be derived from DNA fragment data measured experimentally by pulsed-field gel electrophoresis. We used this probability in Monte Carlo simulations to predict DSB locations in synthetic nuclei geometrically described by a complete set of human chromosomes, taking into account microscope optics from real experiments. As expected, simulations produced DNA-weighted random (Poisson) distributions. In contrast, the distributions of RIF obtained as early as 5 min after exposure to high LET (1 GeV/amu Fe) were non-random. This deviation from the expected DNA-weighted random pattern can be further characterized by relative DNA image measurements. This novel imaging approach shows that RIF were located preferentially at the interface between high and low DNA density regions, and were more frequent in regions with lower-density DNA than predicted. This deviation from random behavior was more pronounced within the first 5 min following irradiation for phosphorylated ATM RIF, while gammaH2AX and 53BP1 RIF showed very pronounced deviation up to 30 min after exposure. These data suggest the existence of repair centers in mammalian epithelial cells. These centers would be nuclear sub-domains where DNA lesions would be collected for more efficient repair.
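The "DNA-weighted random" null model in the abstract above can be sketched in a few lines. This is not the authors' simulation: the density profile and break probability below are invented, and the point is only that DSBs are placed along a track with probability proportional to local DNA density.

```python
# Minimal sketch of a DNA-weighted Monte Carlo placement of DSBs along one
# high-LET particle track. All numbers are hypothetical.
import random

random.seed(42)

def simulate_dsb_positions(dna_density, p_break_per_unit_density):
    """dna_density: relative DNA density for each segment along the track.
    A DSB occurs in segment i with probability proportional to its density,
    yielding a DNA-weighted random (Poisson-like) pattern."""
    return [i for i, d in enumerate(dna_density)
            if random.random() < p_break_per_unit_density * d]

# Hypothetical density profile along one track through the nucleus.
track_density = [0.2, 1.0, 1.5, 0.8, 0.1, 1.2, 0.9, 0.3]
breaks = simulate_dsb_positions(track_density, 0.4)
print("segments with a simulated DSB:", breaks)
```

Comparing many such simulated patterns against measured RIF positions is what lets the paper conclude the observed foci are not DNA-weighted random.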
Estimating regional centile curves from mixed data sources and countries.
van Buuren, Stef; Hayes, Daniel J; Stasinopoulos, D Mikis; Rigby, Robert A; ter Kuile, Feiko O; Terlouw, Dianne J
2009-10-15
Regional or national growth distributions can provide vital information on the health status of populations. In most resource-poor countries, however, the required anthropometric data from purpose-designed growth surveys are not readily available. We propose a practical method for estimating regional (multi-country) age-conditional weight distributions based on existing survey data from different countries. We developed a two-step method by which one is able to model data with widely different age ranges and sample sizes. The method produces references both at the country level and at the regional (multi-country) level. The first step models country-specific centile curves by Box-Cox t and Box-Cox power exponential distributions implemented in a generalized additive model for location, scale and shape through a common model. Individual countries may vary in location and spread. The second step defines the regional reference from a finite mixture of the country distributions, weighted by population size. To demonstrate the method we fitted the weight-for-age distribution of 12 countries in South East Asia and the Western Pacific, based on 273,270 observations. We modeled both the raw body weight and the corresponding Z score, and obtained a good fit between the final models and the original data for both solutions. We briefly discuss an application of the generated regional references to obtain appropriate, region-specific, age-based dosing regimens of drugs used in the tropics. The method is an affordable and efficient strategy to estimate regional growth distributions where the standard costly alternatives are not an option. Copyright (c) 2009 John Wiley & Sons, Ltd.
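The second step (a population-weighted finite mixture of country distributions) can be sketched directly. The code below is illustrative only: countries are reduced to normal weight-for-age distributions with invented parameters, whereas the paper fits Box-Cox t and Box-Cox power exponential distributions.

```python
# Sketch of a regional centile from a population-weighted mixture of
# country-level distributions (here simple normals; parameters invented).
import math

def norm_cdf(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def regional_centile(p, countries):
    """countries: list of (population, mu, sigma). Invert the mixture CDF
    by bisection to find the weight value at centile p."""
    total = sum(pop for pop, _, _ in countries)
    def mix_cdf(x):
        return sum(pop / total * norm_cdf(x, mu, sd) for pop, mu, sd in countries)
    lo, hi = -100.0, 200.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mix_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Three hypothetical countries: (population in millions, mean kg, sd kg).
countries = [(90, 14.0, 1.8), (25, 15.2, 2.0), (5, 13.1, 1.5)]
print("regional median weight:", round(regional_centile(0.5, countries), 2))
```

The larger country dominates the mixture, so the regional median lands near its mean; the same inversion gives any other centile of the regional reference.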
Exact posterior computation in non-conjugate Gaussian location-scale parameters models
NASA Astrophysics Data System (ADS)
Andrade, J. A. A.; Rathie, P. N.
2017-12-01
In Bayesian analysis, the class of conjugate models allows exact posterior distributions to be obtained; however, this class is quite restrictive in the sense that it involves only a few distributions. In fact, most practical applications involve non-conjugate models, so approximate methods, such as MCMC algorithms, are required. Although these methods can deal with quite complex structures, some practical problems can make their application very time-demanding: for example, when heavy-tailed distributions are used, convergence may be difficult and the Metropolis-Hastings algorithm can become very slow, in addition to the extra work inevitably required in choosing efficient candidate-generator distributions. In this work, we draw attention to special functions as tools for Bayesian computation, and we propose an alternative method for obtaining the posterior distribution in Gaussian non-conjugate models in exact form. We use complex integration methods based on the H-function to obtain the posterior distribution and some of its posterior quantities in an explicitly computable form. Two examples are provided to illustrate the theory.
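For contrast with the H-function approach described above, the same kind of non-conjugate posterior can be evaluated by brute-force numerical integration. The sketch below is not the paper's method: it assumes a Gaussian likelihood with a heavy-tailed standard Cauchy prior on the location and integrates on a grid.

```python
# Non-conjugate posterior (Gaussian likelihood, Cauchy location prior)
# computed by direct grid quadrature; data and prior are illustrative.
import math

def posterior_mean(data, sigma, grid_lo=-20.0, grid_hi=20.0, n=4001):
    step = (grid_hi - grid_lo) / (n - 1)
    thetas, weights = [], []
    for i in range(n):
        t = grid_lo + i * step
        loglik = sum(-0.5 * ((x - t) / sigma) ** 2 for x in data)
        prior = 1.0 / (math.pi * (1.0 + t * t))   # standard Cauchy density
        thetas.append(t)
        weights.append(math.exp(loglik) * prior)  # unnormalized posterior
    z = sum(weights) * step                       # normalizing constant
    return sum(t * w for t, w in zip(thetas, weights)) * step / z

data = [1.8, 2.4, 2.1, 1.9]
print("posterior mean:", round(posterior_mean(data, sigma=1.0), 3))
```

The heavy-tailed prior shrinks the posterior mean only slightly below the sample mean; the paper's contribution is to obtain such quantities in closed form rather than by quadrature or MCMC.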
Caveats for correlative species distribution modeling
Jarnevich, Catherine S.; Stohlgren, Thomas J.; Kumar, Sunil; Morisette, Jeffrey T.; Holcombe, Tracy R.
2015-01-01
Correlative species distribution models are becoming commonplace in the scientific literature and public outreach products, displaying locations, abundance, or suitable environmental conditions for harmful invasive species, threatened and endangered species, or species of special concern. Accurate species distribution models are useful for efficient and adaptive management and conservation, research, and ecological forecasting. Yet, these models are often presented without fully examining or explaining the caveats for their proper use and interpretation and are often implemented without understanding the limitations and assumptions of the model being used. We describe common pitfalls, assumptions, and caveats of correlative species distribution models to help novice users and end users better interpret these models. Four primary caveats corresponding to different phases of the modeling process, each with supporting documentation and examples, include: (1) all sampling data are incomplete and potentially biased; (2) predictor variables must capture distribution constraints; (3) no single model works best for all species, in all areas, at all spatial scales, and over time; and (4) the results of species distribution models should be treated like a hypothesis to be tested and validated with additional sampling and modeling in an iterative process.
Chen, Fengxiang; Zhang, Yong; Gfroerer, T. H.; ...
2015-06-02
Traditionally, spatially-resolved photoluminescence (PL) has been performed using a point-by-point scan mode with both excitation and detection occurring at the same spatial location. But with the availability of high quality detector arrays like CCDs, an imaging mode has become popular for performing spatially-resolved PL. By illuminating the entire area of interest and collecting the data simultaneously from all spatial locations, the measurement efficiency can be greatly improved. However, this new approach has proceeded under the implicit assumption of comparable spatial resolution. We show here that when carrier diffusion is present, the spatial resolution can actually differ substantially between the two modes, with the less efficient scan mode being far superior. We apply both techniques to investigate defects in a GaAs epilayer, where isolated singlet and doublet dislocations can be identified. A superposition principle is developed for solving the diffusion equation to extract the intrinsic carrier diffusion length, which can be applied to a system with arbitrarily distributed defects. The understanding derived from this work is significant for a broad range of problems in physics and beyond (for instance biology), whenever the dynamics of generation, diffusion, and annihilation of species can be probed with either measurement mode.
Numerical Study of a 10 K Two Stage Pulse Tube Cryocooler with Precooling Inside the Pulse Tube
NASA Astrophysics Data System (ADS)
Xiaomin, Pang; Xiaotao, Wang; Wei, Dai; Jianyin, Hu; Ercang, Luo
2017-02-01
High-efficiency cryocoolers working below 10 K have many applications, such as cryopumps, superconductor cooling and cryogenic electronics. This paper presents a thermally coupled two-stage pulse tube cryocooler system and its numerical analysis. The simulation results indicate that the temperature distribution in the pulse tube has a significant impact on the system performance. Therefore, a precooling heat exchanger is placed inside the second-stage pulse tube for a deeper investigation of its influence on the system performance. The influences of operating parameters such as the precooling temperature and the location of the precooling heat exchanger are discussed. A comparison of energy losses clearly shows the advantages of this configuration, which leads to an improvement in efficiency. Finally, the cryocooler is predicted to be able to reach a relative Carnot efficiency of 10.7% at 10 K.
Heat and mass transfer of liquid nitrogen in coal porous media
NASA Astrophysics Data System (ADS)
Lang, Lu; Chengyun, Xin; Xinyu, Liu
2018-04-01
Liquid nitrogen serves as an important medium in fire extinguishing and prevention, due to its efficiency in oxygen exclusion and heat removal. Such a technique is especially crucial for the coal industry in China. We built a tunnel model with a temperature monitoring system (with 36 thermocouples installed) to experimentally study heat and mass transfer of liquid nitrogen in non-homogeneous coal porous media (CPM), with the aim of optimizing the parameters of liquid nitrogen injection in engineering applications. Results indicate that the injection location and amount of liquid nitrogen, together with air leakage, significantly affect the temperature distribution in CPM and the thermal non-equilibrium between the surface and interior of coal particles. The injection position of liquid nitrogen determines the locations of the lowest CPM temperature and of the liquid nitrogen residual. In the deeper coal bed, coal particles take a longer time to reach thermal equilibrium between their surface and interior. Air leakage accelerates the temperature increase at the bottom of the coal bed, which is a major reason for fire prevention inefficiency. Measurement fluctuations of CPM temperature may be caused by incomplete contact of coal particles with liquid nitrogen flowing in the coal bed. Moreover, the secondary temperature drop (STD) occurs and grows as more liquid nitrogen is injected, and the STD phenomenon is explained through temperature distributions at different locations.
NASA Astrophysics Data System (ADS)
Gautam, Amit Kr.; Gautam, Ajay Kr.; Patel, R. B.
2010-11-01
In order to provide load balancing in clustered sensor deployment, the upstream clusters (near the BS) are kept smaller in size compared to downstream ones (away from the BS). Moreover, geographic awareness is also desirable in order to further enhance energy efficiency. But this must be cost effective, since most current location-awareness strategies are either cost- and weight-inefficient (GPS) or complex, inaccurate and unreliable in operation. This paper presents the design and implementation of a Geographic LOad BALanced (GLOBAL) Clustering Protocol for Wireless Sensor Networks. A mathematical formulation is provided for determining the number of sensor nodes in each cluster. This enables uniform energy consumption after the multi-hop data transmission towards the BS. Either the sensors can be deployed manually, or the clusters can be formed so that the sensors are efficiently distributed as per the formulation. The latter strategy is elaborated in this contribution. Methods to provide static clustering and custom cluster sizes with location awareness are also provided in the given work. Finally, low-mobility node applications can also implement the proposed work.
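The abstract does not reproduce the paper's formulation, but the load-balancing intuition (smaller clusters near the BS because their heads relay more traffic) can be sketched with a hypothetical model: cluster-head load is taken as members served plus clusters relayed, and member counts are chosen to equalize that load.

```python
# Hypothetical cluster-size calculation in the spirit of the abstract,
# not the paper's actual formulation.

def cluster_sizes(total_nodes, k_clusters):
    """In a linear chain of clusters, the cluster next to the BS relays
    traffic of k-1 other clusters, the next one k-2, and so on. Solving
    n_i + relay_i = c with sum(n_i) = total_nodes equalizes head load,
    giving smaller clusters near the BS."""
    relay = [k_clusters - 1 - i for i in range(k_clusters)]  # k-1, ..., 0
    c = (total_nodes + sum(relay)) / k_clusters
    return [round(c - r) for r in relay]

sizes = cluster_sizes(total_nodes=102, k_clusters=4)
print("nodes per cluster, BS-side first:", sizes)
```

With 102 nodes in 4 clusters this yields monotonically growing cluster sizes away from the BS, matching the qualitative design goal stated in the abstract.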
Yigzaw, Kassaye Yitbarek; Michalas, Antonis; Bellika, Johan Gustav
2017-01-03
Techniques have been developed to compute statistics on distributed datasets without revealing private information except the statistical results. However, duplicate records in a distributed dataset may lead to incorrect statistical results. Therefore, to increase the accuracy of the statistical analysis of a distributed dataset, secure deduplication is an important preprocessing step. We designed a secure protocol for the deduplication of horizontally partitioned datasets with deterministic record linkage algorithms. We provided a formal security analysis of the protocol in the presence of semi-honest adversaries. The protocol was implemented and deployed across three microbiology laboratories located in Norway, and we ran experiments on the datasets in which the number of records for each laboratory varied. Experiments were also performed on simulated microbiology datasets and data custodians connected through a local area network. The security analysis demonstrated that the protocol protects the privacy of individuals and data custodians under a semi-honest adversarial model. More precisely, the protocol remains secure with the collusion of up to N - 2 corrupt data custodians. The total runtime for the protocol scales linearly with the addition of data custodians and records. One million simulated records distributed across 20 data custodians were deduplicated within 45 s. The experimental results showed that the protocol is more efficient and scalable than previous protocols for the same problem. The proposed deduplication protocol is efficient and scalable for practical uses while protecting the privacy of patients and data custodians.
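The preprocessing goal described above (deterministic deduplication of horizontally partitioned records) can be illustrated without the cryptography. The sketch below is a plain, non-secure stand-in: fields, records and the normalization rule are invented, and everything is computed in the clear, whereas the paper's protocol performs the linkage securely among data custodians.

```python
# Non-secure illustration of deterministic record linkage for deduplication
# across data custodians. Field names and records are hypothetical.
import hashlib

def record_key(record):
    """Deterministic linkage key: normalized fields, then a digest."""
    normalized = "|".join(str(record[f]).strip().lower()
                          for f in ("name", "dob", "sample_id"))
    return hashlib.sha256(normalized.encode()).hexdigest()

def deduplicate(partitions):
    """partitions: list of record lists, one per data custodian."""
    seen, unique = set(), []
    for part in partitions:
        for rec in part:
            k = record_key(rec)
            if k not in seen:
                seen.add(k)
                unique.append(rec)
    return unique

lab_a = [{"name": "Ann Berg", "dob": "1980-02-01", "sample_id": "S1"}]
lab_b = [{"name": "ann berg ", "dob": "1980-02-01", "sample_id": "S1"},
         {"name": "Ola Dahl", "dob": "1975-07-12", "sample_id": "S2"}]
print("unique records:", len(deduplicate([lab_a, lab_b])))
```

The two spellings of the first record normalize to the same key and are counted once; the secure protocol achieves the same deduplication without any custodian revealing its records.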
Gaussian Process Interpolation for Uncertainty Estimation in Image Registration
Wachinger, Christian; Golland, Polina; Reuter, Martin; Wells, William
2014-01-01
Intensity-based image registration requires resampling images on a common grid to evaluate the similarity function. The uncertainty of interpolation varies across the image, depending on the location of resampled points relative to the base grid. We propose to perform Bayesian inference with Gaussian processes, where the covariance matrix of the Gaussian process posterior distribution estimates the uncertainty in interpolation. The Gaussian process replaces a single image with a distribution over images that we integrate into a generative model for registration. Marginalization over resampled images leads to a new similarity measure that includes the uncertainty of the interpolation. We demonstrate that our approach increases the registration accuracy and propose an efficient approximation scheme that enables seamless integration with existing registration methods. PMID:25333127
On the use of the energy probability distribution zeros in the study of phase transitions
NASA Astrophysics Data System (ADS)
Mól, L. A. S.; Rodrigues, R. G. M.; Stancioli, R. A.; Rocha, J. C. S.; Costa, B. V.
2018-04-01
This contribution is devoted to covering some technical aspects related to the use of the recently proposed energy probability distribution zeros in the study of phase transitions. This method is based on partial knowledge of the partition function zeros and has been shown to be extremely efficient at precisely locating phase transition temperatures. It is based on an iterative method in such a way that the transition temperature can be approached at will. The iterative method is detailed, and some convergence issues that have been observed in its application to the 2D Ising model and to an artificial spin ice model are shown, together with ways to circumvent them.
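One iteration of the energy-probability-distribution zeros idea can be sketched as follows. This is a schematic stand-in, not the authors' code: the energy histogram is invented, and the update simply finds the complex zero of the histogram polynomial closest to x = 1 and converts it back to an inverse-temperature shift via x = exp(-dbeta * dE).

```python
# One schematic EPD-zeros iteration with an invented energy histogram.
import numpy as np

def epd_zero_closest_to_one(hist):
    """Zeros of P(x) = sum_j h_j x^j; numpy.roots expects the highest-order
    coefficient first, hence the reversal."""
    roots = np.roots(hist[::-1])
    return roots[np.argmin(np.abs(roots - 1.0))]

def beta_update(beta0, hist, delta_e):
    x = epd_zero_closest_to_one(hist)
    dbeta = -np.log(x).real / delta_e   # real part of the inversion of
    return beta0 + dbeta                # x = exp(-dbeta * delta_e)

hist = np.array([1.0, 4.0, 9.0, 12.0, 9.0, 4.0, 1.0])  # toy energy histogram
beta_next = beta_update(beta0=0.4, hist=hist, delta_e=1.0)
print("updated inverse-temperature estimate:", round(float(beta_next), 4))
```

In the actual method this update is repeated, resampling the histogram at each new temperature, until the dominant zero pins down the transition; the convergence issues the paper discusses arise in exactly this loop.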
Cost-Benefit Analysis of Computer Resources for Machine Learning
Champion, Richard A.
2007-01-01
Machine learning describes pattern-recognition algorithms - in this case, probabilistic neural networks (PNNs). These can be computationally intensive, in part because of the nonlinear optimizer, a numerical process that calibrates the PNN by minimizing a sum of squared errors. This report suggests efficiencies that are expressed as cost and benefit. The cost is computer time needed to calibrate the PNN, and the benefit is goodness-of-fit, how well the PNN learns the pattern in the data. There may be a point of diminishing returns where a further expenditure of computer resources does not produce additional benefits. Sampling is suggested as a cost-reduction strategy. One consideration is how many points to select for calibration and another is the geometric distribution of the points. The data points may be nonuniformly distributed across space, so that sampling at some locations provides additional benefit while sampling at other locations does not. A stratified sampling strategy can be designed to select more points in regions where they reduce the calibration error and fewer points in regions where they do not. Goodness-of-fit tests ensure that the sampling does not introduce bias. This approach is illustrated by statistical experiments for computing correlations between measures of roadless area and population density for the San Francisco Bay Area. The alternative to training efficiencies is to rely on high-performance computer systems. These may require specialized programming and algorithms that are optimized for parallel performance.
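The stratified-sampling strategy in the report above can be sketched concretely: allocate a fixed calibration budget across strata in proportion to each stratum's share of the current calibration error. The strata names, point sets and error values below are invented for illustration.

```python
# Error-proportional stratified sampling for calibration points.
# Strata and error figures are hypothetical.
import random

random.seed(7)

def stratified_sample(strata, budget):
    """strata: dict name -> (points, calibration_error). Allocate the sample
    budget proportionally to each stratum's share of total error."""
    total_err = sum(err for _, err in strata.values())
    sample = {}
    for name, (points, err) in strata.items():
        n = max(1, round(budget * err / total_err))
        sample[name] = random.sample(points, min(n, len(points)))
    return sample

strata = {
    "urban":    (list(range(100)), 8.0),   # high error: sample densely
    "suburban": (list(range(100)), 3.0),
    "rural":    (list(range(100)), 1.0),   # low error: sample sparsely
}
picked = stratified_sample(strata, budget=24)
print({k: len(v) for k, v in picked.items()})
```

Of the 24-point budget, the high-error stratum receives 16 points and the low-error stratum only 2, which is the cost-reduction idea: spend computer time where it buys goodness-of-fit.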
NASA Astrophysics Data System (ADS)
Janebo, Maria H.; Houghton, Bruce F.; Thordarson, Thorvaldur; Bonadonna, Costanza; Carey, Rebecca J.
2018-05-01
The size distribution of the population of particles injected into the atmosphere during a volcanic explosive eruption, i.e., the total grain-size distribution (TGSD), can provide important insights into fragmentation efficiency and is a fundamental source parameter for models of tephra dispersal and sedimentation. Recent volcanic crises (e.g., Eyjafjallajökull 2010, Iceland, and Cordón Caulle 2011, Chile) and the ensuing economic losses highlighted the need for better constraint of eruption source parameters to be used in real-time forecasting of ash dispersal (e.g., mass eruption rate, plume height, particle features), with a special focus on the scarcity of published TGSDs in the scientific literature. Here we present TGSD data associated with Hekla volcano, which has been very active in the last few thousand years and is located on critical aviation routes. In particular, we have reconstructed the TGSD of the initial subplinian-Plinian phases of four historical eruptions, covering a range of magma composition (andesite to rhyolite), eruption intensity (VEI 4 to 5), and erupted volume (0.2 to 1 km3). All four eruptions have bimodal TGSDs with a mass fraction of fine ash (<63 μm; m63) from 0.11 to 0.25. The two Plinian dacitic-rhyolitic Hekla deposits have higher abundances of fine ash, and hence larger m63 values, than their andesitic subplinian equivalents, probably a function of more intense and efficient primary fragmentation. Due to differences in plume height, this contrast is not seen in samples from individual sites, especially in the near field, where lapilli have a wider spatial coverage in the Plinian deposits. The distribution of pyroclast sizes in Plinian versus subplinian falls reflects competing influences of more efficient fragmentation (e.g., producing larger amounts of fine ash) versus more efficient particle transport related to higher and more vigorous plumes, displacing relatively coarse lapilli farther down the dispersal axis.
Serving by local consensus in the public service location game.
Sun, Yi-Fan; Zhou, Hai-Jun
2016-09-02
We discuss the issue of distributed and cooperative decision-making in a network game of public service location. Each node of the network can decide to host a certain public service, incurring a construction cost and serving all the neighboring nodes and itself. A pure consumer node has to pay a tax, and the collected tax is evenly distributed to all the hosting nodes to remedy their construction costs. If all nodes make individual best-response decisions, the system gets trapped in an inefficient situation of high tax level. Here we introduce a decentralized local-consensus selection mechanism which requires nodes to recommend their neighbors of highest local impact as candidate servers, and a node may become a server only if all its non-server neighbors give their assent. We demonstrate that although this mechanism involves only information exchange among neighboring nodes, it leads to socially efficient solutions with a tax level approaching the lowest possible value. Our results may help in understanding and improving collective problem-solving in various networked social and robotic systems.
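The inefficiency of naive individual decisions, the starting point of the abstract above, can be seen in a toy version of the game. This is not the paper's model or payoff structure: it only simulates a greedy "self-serve if uncovered" rule on a small invented graph, where a node is covered if it or a neighbor hosts the service.

```python
# Toy greedy self-serving dynamics on a small graph (illustrative only).

def best_response_servers(adj):
    """adj: node -> set of neighbors. Visit nodes in order; a node becomes
    a server only if neither it nor any neighbor already serves."""
    servers = set()
    for node in sorted(adj):
        covered = node in servers or bool(adj[node] & servers)
        if not covered:
            servers.add(node)   # uncovered node self-serves
    return servers

# A 6-node path graph 0-1-2-3-4-5.
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {4}}
print("servers under greedy self-serving:", sorted(best_response_servers(adj)))
```

On this path the greedy rule installs three servers, although two (nodes 1 and 4) would cover every node, so consumers pay more tax than necessary. Mechanisms like the paper's local-consensus selection aim to steer the network toward the smaller, socially efficient server set.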
Searching for storm water inflows in foul sewers using fibre-optic distributed temperature sensing.
Schilperoort, Rémy; Hoppe, Holger; de Haan, Cornelis; Langeveld, Jeroen
2013-01-01
A major drawback of separate sewer systems is the occurrence of illicit connections: unintended sewer cross-connections that connect foul water outlets from residential or industrial premises to the storm water system and/or storm water outlets to the foul sewer system. The amount of unwanted storm water in foul sewer systems can be significant, resulting in a number of detrimental effects on the performance of the wastewater system. Efficient removal of storm water inflows into foul sewers requires knowledge of the exact locations of the inflows. This paper presents the use of distributed temperature sensing (DTS) monitoring data to localize illicit storm water inflows into foul sewer systems. Data results from two monitoring campaigns in foul sewer systems in the Netherlands and Germany are presented. For both areas a number of storm water inflow locations can be derived from the data. Storm water inflow can only be detected as long as the temperature of this inflow differs from the in-sewer temperatures prior to the event. Also, the in-sewer propagation of storm and wastewater can be monitored, enabling a detailed view on advection.
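The localization principle described above, that storm water inflow is detectable only while its temperature differs from the pre-event in-sewer temperature, suggests a simple anomaly test along the fibre. The sketch below is illustrative, with invented readings: it flags positions whose temperature during a rain event drops sharply below the dry-weather baseline.

```python
# Locate cold anomalies along a DTS fibre (readings are hypothetical).

def find_inflow_locations(baseline, during_event, threshold=2.0):
    """baseline, during_event: temperature (deg C) per fibre position.
    Returns indices where the event reading drops by more than threshold."""
    return [i for i, (b, e) in enumerate(zip(baseline, during_event))
            if b - e > threshold]

baseline     = [15.1, 15.0, 15.2, 15.1, 15.0, 14.9]
during_event = [15.0, 14.9, 11.8, 15.0, 12.3, 14.8]  # cold storm water at 2, 4
print("suspected inflow positions:", find_inflow_locations(baseline, during_event))
```

Real DTS traces have metre-scale spatial and minute-scale temporal resolution, so the same comparison over a full space-time grid also reveals the in-sewer propagation of the storm water front.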
Sequential Monte Carlo Instant Radiosity.
Hedman, Peter; Karras, Tero; Lehtinen, Jaakko
2017-05-01
Instant Radiosity and its derivatives are interactive methods for efficiently estimating global (indirect) illumination. They represent the last indirect bounce of illumination before the camera as the composite radiance field emitted by a set of virtual point light sources (VPLs). In complex scenes, current algorithms suffer from a difficult combination of two issues: it remains a challenge to distribute VPLs in a manner that simultaneously gives a high-quality indirect illumination solution for each frame, and to do so in a temporally coherent manner. We address both issues by building, and maintaining over time, an adaptive and temporally coherent distribution of VPLs in locations where they bring indirect light to the image. We introduce a novel heuristic sampling method that strives to only move as few of the VPLs between frames as possible. The result is, to the best of our knowledge, the first interactive global illumination algorithm that works in complex, highly-occluded scenes, suffers little from temporal flickering, supports moving cameras and light sources, and is output-sensitive in the sense that it places VPLs in locations that matter most to the final result.
An integrated solution for remote data access
NASA Astrophysics Data System (ADS)
Sapunenko, Vladimir; D'Urso, Domenico; dell'Agnello, Luca; Vagnoni, Vincenzo; Duranti, Matteo
2015-12-01
Data management constitutes one of the major challenges that a geographically-distributed e-Infrastructure has to face, especially when remote data access is involved. We discuss an integrated solution which enables transparent and efficient access to on-line and near-line data through high latency networks. The solution is based on the joint use of the General Parallel File System (GPFS) and of the Tivoli Storage Manager (TSM). Both products, developed by IBM, are well known and extensively used in the HEP computing community. Owing to a new feature introduced in GPFS 3.5, the so-called Active File Management (AFM), the definition of a single, geographically-distributed namespace, characterised by automated data flow management between different locations, becomes possible. As a practical example, we present the implementation of AFM-based remote data access between two data centres located in Bologna and Rome, demonstrating the validity of the solution for the use case of the AMS experiment, an astro-particle experiment supported by the INFN CNAF data centre with large disk space requirements (more than 1.5 PB).
A Computational Model for Predicting Gas Breakdown
NASA Astrophysics Data System (ADS)
Gill, Zachary
2017-10-01
Pulsed-inductive discharges are a common method of producing a plasma. They provide a mechanism for quickly and efficiently generating a large volume of plasma for rapid use and are seen in applications including propulsion, fusion power, and high-power lasers. However, some common designs see a delayed response time due to the plasma forming when the magnitude of the magnetic field in the thruster is at a minimum. New designs are difficult to evaluate due to the amount of time needed to construct a new geometry and the high monetary cost of changing the power generation circuit. To more quickly evaluate new designs and better understand the shortcomings of existing designs, a computational model is developed. This model uses a modified single-electron model as the basis for a Mathematica code to determine how the energy distribution in a system changes with regards to time and location. By analyzing this energy distribution, the approximate time and location of initial plasma breakdown can be predicted. The results from this code are then compared to existing data to show its validity and shortcomings. Missouri S&T APLab.
Design of shared unit-dose drug distribution network using multi-level particle swarm optimization.
Chen, Linjie; Monteiro, Thibaud; Wang, Tao; Marcon, Eric
2018-03-01
Unit-dose drug distribution systems provide optimal choices in terms of medication security and efficiency for organizing the drug-use process in large hospitals. As small hospitals have to share such automatic systems for economic reasons, the structure of their logistic organization becomes a very sensitive issue. In the research reported here, we develop a generalized multi-level optimization method - multi-level particle swarm optimization (MLPSO) - to design a shared unit-dose drug distribution network. Structurally, the problem studied can be considered as a type of capacitated location-routing problem (CLRP) with new constraints related to specific production planning. This kind of problem implies that a multi-level optimization should be performed in order to minimize logistic operating costs. Our results show that with the proposed algorithm, a more suitable modeling framework, as well as computational time savings and better optimization performance are obtained than that reported in the literature on this subject.
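The particle-swarm machinery underlying the paper's MLPSO can be illustrated at a single level. The sketch below is not the paper's multi-level algorithm or its CLRP cost model: it runs a plain PSO that places one depot so as to minimize total distance to a set of invented hospital coordinates.

```python
# Single-level PSO locating a depot among hypothetical hospitals.
import random

random.seed(1)

hospitals = [(0.0, 0.0), (4.0, 0.0), (4.0, 3.0), (0.0, 3.0)]

def cost(x, y):
    return sum(((x - hx) ** 2 + (y - hy) ** 2) ** 0.5 for hx, hy in hospitals)

def pso(n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5):
    pos = [(random.uniform(-5, 10), random.uniform(-5, 10))
           for _ in range(n_particles)]
    vel = [(0.0, 0.0)] * n_particles
    best = list(pos)                                  # per-particle bests
    g = min(best, key=lambda p: cost(*p))             # global best
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = random.random(), random.random()
            vx = w * vel[i][0] + c1 * r1 * (best[i][0] - pos[i][0]) \
                 + c2 * r2 * (g[0] - pos[i][0])
            vy = w * vel[i][1] + c1 * r1 * (best[i][1] - pos[i][1]) \
                 + c2 * r2 * (g[1] - pos[i][1])
            vel[i] = (vx, vy)
            pos[i] = (pos[i][0] + vx, pos[i][1] + vy)
            if cost(*pos[i]) < cost(*best[i]):
                best[i] = pos[i]
        g = min(best, key=lambda p: cost(*p))
    return g

gx, gy = pso()
print(f"best depot location: ({gx:.2f}, {gy:.2f})")
```

The swarm converges near the centre of the hospital rectangle. The paper nests such swarms across levels so that location, routing and production-planning decisions are optimized jointly.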
Leaf gas films, underwater photosynthesis and plant species distributions in a flood gradient.
Winkel, Anders; Visser, Eric J W; Colmer, Timothy D; Brodersen, Klaus P; Voesenek, Laurentius A C J; Sand-Jensen, Kaj; Pedersen, Ole
2016-07-01
Traits for survival during flooding of terrestrial plants include stimulation or inhibition of shoot elongation, aerenchyma formation and efficient gas exchange. Leaf gas films form on superhydrophobic cuticles during submergence and enhance underwater gas exchange. The main hypothesis tested was that the presence of leaf gas films influences the distribution of plant species along a natural flood gradient. We conducted laboratory experiments and field observations on species distributed along a natural flood gradient. We measured presence or absence of leaf gas films and specific leaf area of 95 species. We also measured gas film retention time during submergence and underwater net photosynthesis and dark respiration of 25 target species. The presence of a leaf gas film was inversely correlated to flood frequency and duration and reached a maximum value of 80% of the species in the rarely flooded locations. This relationship was primarily driven by grasses that all, independently of their field location along the flood gradient, possess gas films when submerged. Although the present study and earlier experiments have shown that leaf gas films enhance gas exchange of submerged plants, the ability of species to form leaf gas films did not show the hypothesized relationship with species composition along the flood gradient. © 2016 John Wiley & Sons Ltd.
Korsgaard, Inge Riis; Lund, Mogens Sandø; Sorensen, Daniel; Gianola, Daniel; Madsen, Per; Jensen, Just
2003-01-01
A fully Bayesian analysis using Gibbs sampling and data augmentation in a multivariate model of Gaussian, right censored, and grouped Gaussian traits is described. The grouped Gaussian traits are either ordered categorical traits (with more than two categories) or binary traits, where the grouping is determined via thresholds on the underlying Gaussian scale, the liability scale. Allowances are made for unequal models, unknown covariance matrices and missing data. Having outlined the theory, strategies for implementation are reviewed. These include joint sampling of location parameters; efficient sampling from the fully conditional posterior distribution of augmented data, a multivariate truncated normal distribution; and sampling from the conditional inverse Wishart distribution, the fully conditional posterior distribution of the residual covariance matrix. Finally, a simulated dataset was analysed to illustrate the methodology. This paper concentrates on a model where residuals associated with liabilities of the binary traits are assumed to be independent. A Bayesian analysis using Gibbs sampling is outlined for the model where this assumption is relaxed. PMID:12633531
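One ingredient listed in the abstract above, sampling augmented liabilities from a truncated normal, can be sketched in the univariate case. This is not the paper's multivariate sampler: it assumes a unit-variance liability, uses inverse-CDF sampling via bisection, and the threshold and mean are invented.

```python
# Univariate truncated-normal draw for the liability of a binary trait
# observed as 1 (liability above threshold 0). Illustrative parameters.
import math, random

random.seed(3)

def phi_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def phi_inv(p, lo=-10.0, hi=10.0):
    for _ in range(80):                  # bisection on the standard normal CDF
        mid = 0.5 * (lo + hi)
        if phi_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def sample_truncated_normal(mean, lower, upper):
    """Draw liability ~ N(mean, 1) restricted to (lower, upper) by mapping a
    uniform draw through the truncated CDF."""
    a, b = phi_cdf(lower - mean), phi_cdf(upper - mean)
    u = a + (b - a) * random.random()
    return mean + phi_inv(u)

draws = [sample_truncated_normal(0.2, 0.0, float("inf")) for _ in range(1000)]
print("min draw:", round(min(draws), 3), " mean:", round(sum(draws) / 1000, 3))
```

All draws respect the threshold, and their mean sits above the untruncated mean of 0.2, as expected for a lower-truncated normal; in the Gibbs sampler this draw is repeated for every binary observation at each iteration.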
Gamma-ray momentum reconstruction from Compton electron trajectories by filtered back-projection
Haefner, A.; Gunter, D.; Plimley, B.; ...
2014-11-03
Gamma-ray imaging utilizing Compton scattering has traditionally relied on measuring coincident gamma-ray interactions to map directional information of the source distribution. This coincidence requirement makes it an inherently inefficient process. We present an approach to gamma-ray reconstruction from Compton scattering that requires only a single electron tracking detector, thus removing the coincidence requirement. From the Compton scattered electron momentum distribution, our algorithm analytically computes the incident photon's correlated direction and energy distributions. Because this method maps the source energy and location, it is useful in applications where prior information about the source distribution is unknown. We demonstrate this method with electron tracks measured in a scientific Si charge coupled device. While this method was demonstrated with electron tracks in a Si-based detector, it is applicable to any detector that can measure electron direction and energy, or equivalently the electron momentum. For example, it can increase the sensitivity to obtain energy and direction in gas-based systems that suffer from limited efficiency.
Optimal planning and design of a renewable energy based supply system for microgrids
Hafez, Omar; Bhattacharya, Kankar
2012-03-03
This paper presents a technique for optimal planning and design of hybrid renewable energy systems for microgrid applications. The Distributed Energy Resources Customer Adoption Model (DER-CAM) is used to determine the optimal size and type of distributed energy resources (DERs) and their operating schedules for a sample utility distribution system. Using the DER-CAM results, an evaluation is performed to assess the electrical performance of the distribution circuit if the DERs selected by the DER-CAM optimization analyses are incorporated. Results of analyses regarding the economic benefits of utilizing the optimal locations identified for the selected DER within the system are also presented. The actual Brookhaven National Laboratory (BNL) campus electrical network is used as an example to show the effectiveness of this approach. The results show that these technical and economic analyses of hybrid renewable energy systems are essential for the efficient utilization of renewable energy resources for microgrid applications.
Dynamic shared state maintenance in distributed virtual environments
NASA Astrophysics Data System (ADS)
Hamza-Lup, Felix George
Advances in computer networks and rendering systems facilitate the creation of distributed collaborative environments in which the distribution of information at remote locations allows efficient communication. Particularly challenging are distributed interactive Virtual Environments (VE) that allow knowledge sharing through 3D information. The purpose of this work is to address the problem of latency in distributed interactive VE and to develop a conceptual model for consistency maintenance in these environments based on the participant interaction model. An area that needs to be explored is the relationship between the dynamic shared state and the interaction with the virtual entities present in the shared scene. Mixed Reality (MR) and VR environments must bring the human participant interaction into the loop through a wide range of electronic motion sensors, and haptic devices. Part of the work presented here defines a novel criterion for categorization of distributed interactive VE and introduces, as well as analyzes, an adaptive synchronization algorithm for consistency maintenance in such environments. As part of the work, a distributed interactive Augmented Reality (AR) testbed and the algorithm implementation details are presented. Currently the testbed is part of several research efforts at the Optical Diagnostics and Applications Laboratory including 3D visualization applications using custom built head-mounted displays (HMDs) with optical motion tracking and a medical training prototype for endotracheal intubation and medical prognostics. An objective method using quaternion calculus is applied for the algorithm assessment. In spite of significant network latency, results show that the dynamic shared state can be maintained consistent at multiple remotely located sites. 
In further consideration of the latency problems and in the light of the current trends in interactive distributed VE applications, we propose a hybrid distributed system architecture for sensor-based distributed VE that has the potential to improve the system real-time behavior and scalability. (Abstract shortened by UMI.)
NASA Astrophysics Data System (ADS)
Endrawati, Titin; Siregar, M. Tirtana
2018-03-01
PT Mentari Trans Nusantara is a company engaged in the distribution of goods from the manufacturer of the product to the distributor branch of the customer, so product distribution must be controlled directly from the PT Mentari Trans Nusantara center for a faster delivery process. Problems often occur at the expedition company in charge of sending the goods, even though it has quite an extensive network. The company has weak control over logistics management. Meanwhile, the logistics distribution management control policy affects the company's performance in distributing products to customer distributor branches and managing product inventory in the distribution center. PT Mentari Trans Nusantara is an expedition company engaged in goods delivery, including in Jakarta. Logistics management performance is very important because it is related to the supply of goods from the central activities to the branches based on customer demand. Supply chain management performance clearly depends on the location of both the distribution center and the branches, the smoothness of transportation in the distribution, and the availability of the product in the distribution center to meet demand and avoid lost sales. This study concluded that the company could be more efficient and effective in minimizing the risk of losses by improving its logistics management.
Laser anemometer measurements in a transonic axial-flow fan rotor
NASA Technical Reports Server (NTRS)
Strazisar, Anthony J.; Wood, Jerry R.; Hathaway, Michael D.; Suder, Kenneth L.
1989-01-01
Laser anemometer surveys were made of the 3-D flow field in NASA rotor 67, a low aspect ratio transonic axial-flow fan rotor. The test rotor has a tip relative Mach number of 1.38. The flowfield was surveyed at design speed at near peak efficiency and near stall operating conditions. Data is presented in the form of relative Mach number and relative flow angle distributions on surfaces of revolution at nine spanwise locations evenly spaced from hub to tip. At each spanwise location, data was acquired upstream, within, and downstream of the rotor. Aerodynamic performance measurements and detailed rotor blade and annulus geometry are also presented so that the experimental results can be used as a test case for 3-D turbomachinery flow analysis codes.
Habitat models to assist plant protection efforts in Shenandoah National Park, Virginia, USA
Van Manen, F.T.; Young, J.A.; Thatcher, C.A.; Cass, W.B.; Ulrey, C.
2005-01-01
During 2002, the National Park Service initiated a demonstration project to develop science-based law enforcement strategies for the protection of at-risk natural resources, including American ginseng (Panax quinquefolius L.), bloodroot (Sanguinaria canadensis L.), and black cohosh (Cimicifuga racemosa (L.) Nutt. [syn. Actaea racemosa L.]). Harvest pressure on these species is increasing because of the growing herbal remedy market. We developed habitat models for Shenandoah National Park and the northern portion of the Blue Ridge Parkway to determine the distribution of favorable habitats of these three plant species and to demonstrate the use of that information to support plant protection activities. We compiled locations for the three plant species to delineate favorable habitats with a geographic information system (GIS). We mapped potential habitat quality for each species by calculating a multivariate statistic, Mahalanobis distance, based on GIS layers that characterized the topography, land cover, and geology of the plant locations (10-m resolution). We tested model performance with an independent dataset of plant locations, which indicated a significant relationship between Mahalanobis distance values and species occurrence. We also generated null models by examining the distribution of the Mahalanobis distance values had plants been distributed randomly. For all species, the habitat models performed markedly better than their respective null models. We used our models to direct field searches to the most favorable habitats, resulting in a sizeable number of new plant locations (82 ginseng, 73 bloodroot, and 139 black cohosh locations). The odds of finding new plant locations based on the habitat models were 4.5 (black cohosh) to 12.3 (American ginseng) times greater than random searches; thus, the habitat models can be used to improve the efficiency of plant protection efforts (e.g., marking of plants, law enforcement activities).
The field searches also indicated that the level of occupancy of the most favorable habitats ranged from 49.4% for ginseng to 84.8% for black cohosh. Given the potential threats to these species from illegal harvesting, that information may serve as an important benchmark for future habitat and population assessments.
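The Mahalanobis-distance habitat statistic used above scores a candidate cell by its distance from the multivariate mean of known plant locations, weighted by the inverse covariance of their habitat features. A minimal two-feature sketch (feature names and values are illustrative; real models used 10-m GIS layers of topography, land cover, and geology):

```python
def mahalanobis_sq(x, mean, cov_inv):
    """Squared Mahalanobis distance d^T * Cinv * d for 2-D features;
    smaller values indicate habitat more similar to known locations."""
    d = [xi - mi for xi, mi in zip(x, mean)]
    return (d[0]*(cov_inv[0][0]*d[0] + cov_inv[0][1]*d[1])
          + d[1]*(cov_inv[1][0]*d[0] + cov_inv[1][1]*d[1]))

def fit(points):
    """Mean and inverse sample covariance of known plant locations'
    feature vectors (e.g. elevation, slope)."""
    n = len(points)
    mean = [sum(p[i] for p in points)/n for i in (0, 1)]
    c = [[0.0, 0.0], [0.0, 0.0]]
    for p in points:
        dx, dy = p[0]-mean[0], p[1]-mean[1]
        c[0][0] += dx*dx; c[0][1] += dx*dy
        c[1][0] += dy*dx; c[1][1] += dy*dy
    c = [[v/(n-1) for v in row] for row in c]
    det = c[0][0]*c[1][1] - c[0][1]*c[1][0]
    cinv = [[ c[1][1]/det, -c[0][1]/det],
            [-c[1][0]/det,  c[0][0]/det]]
    return mean, cinv

pts = [(1, 2), (2, 3), (3, 2), (2, 1), (2.5, 2.5)]   # illustrative features
mean, cinv = fit(pts)
```

Mapping this statistic over every grid cell, then thresholding, yields the favorable-habitat surface that directed the field searches.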
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Joshua B.; Laatsch, Lauren J.; Beasley, James C.
2017-08-31
Scavenging plays an important role in shaping communities through inter- and intra-specific interactions. Although vertebrate scavenger efficiency and species composition is likely influenced by the spatial complexity of environments, heterogeneity in carrion distribution has largely been disregarded in scavenging studies. We tested this hypothesis by experimentally placing juvenile bird carcasses on the ground and in nests in trees to simulate scenarios of nestling bird carrion availability. We used cameras to record scavengers removing carcasses and elapsed time to removal. Carrion placed on the ground was scavenged by a greater diversity of vertebrates and at > 2 times the rate of arboreal carcasses, suggesting arboreal carrion may represent an important resource to invertebrate scavengers, particularly in landscapes with efficient vertebrate scavenging communities. Nonetheless, six vertebrate species scavenged arboreal carcasses. Rat snakes (Elaphe obsoleta), which exclusively scavenged from trees, and turkey vultures (Cathartes aura) were the primary scavengers of arboreal carrion, suggesting such resources are potentially an important pathway of nutrient acquisition for some volant and scansorial vertebrates. Our results highlight the intricacy of carrion-derived food web linkages, and how consideration of spatial complexity in carcass distribution (i.e., arboreal) may reveal important pathways of nutrient acquisition by invertebrate and vertebrate scavenging guilds.
Location of Biomarkers and Reagents within Agarose Beads of a Programmable Bio-nano-chip
Jokerst, Jesse V.; Chou, Jie; Camp, James P.; Wong, Jorge; Lennart, Alexis; Pollard, Amanda A.; Floriano, Pierre N.; Christodoulides, Nicolaos; Simmons, Glennon W.; Zhou, Yanjie; Ali, Mehnaaz F.
2012-01-01
The slow development of cost-effective medical microdevices with strong analytical performance characteristics is due to a lack of selective and efficient analyte capture and signaling. The recently developed programmable bio-nano-chip (PBNC) is a flexible detection device with analytical behavior rivaling established macroscopic methods. The PBNC system employs ≈300 μm-diameter bead sensors composed of agarose “nanonets” that populate a microelectromechanical support structure with integrated microfluidic elements. The beads are an efficient and selective protein-capture medium suitable for the analysis of complex fluid samples. Microscopy and computational studies probe the 3D interior of the beads. The relative contributions that the capture and detection of moieties, analyte size, and bead porosity make to signal distribution and intensity are reported. Agarose pore sizes ranging from 45 to 620 nm are examined and those near 140 nm provide optimal transport characteristics for rapid (<15 min) tests. The system exhibits efficient (99.5%) detection of bead-bound analyte along with low (≈2%) nonspecific immobilization of the detection probe for carcinoembryonic antigen assay. Furthermore, the role analyte dimensions play in signal distribution is explored, and enhanced methods for assay building that consider the unique features of biomarker size are offered. PMID:21290601
Voters' Fickleness:. a Mathematical Model
NASA Astrophysics Data System (ADS)
Boccara, Nino
This paper presents a spatial agent-based model in order to study the evolution of voters' choice during the campaign of a two-candidate election. Each agent, represented by a point inside a two-dimensional square, is under the influence of its neighboring agents, located at a Euclidean distance less than or equal to d, and under the equal influence of both candidates seeking to win its support. Moreover, each agent located at time t at a given point moves at the next timestep to a randomly selected neighboring location distributed normally around its position at time t. Besides their location in space, agents are characterized by their level of awareness, a real a ∈ [0, 1], and their opinion ω ∈ {-1, 0, +1}, where -1 and +1 represent the respective intentions to cast a ballot in favor of one of the two candidates while 0 indicates either disinterest or refusal to vote. The essential purpose of the paper is qualitative; its aim is to show that voters' fickleness is strongly correlated to the level of voters' awareness and the efficiency of candidates' propaganda.
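One plausible reading of the update rule above can be sketched as follows; the specific influence rule (adopt the local majority opinion with probability decreasing in awareness) is an illustrative assumption, not the paper's exact dynamics:

```python
import random

def step(agents, d, sigma, rng):
    """One synchronous update of agents (x, y, awareness a, opinion w).
    Each agent counts neighbour opinions within Euclidean distance d,
    may adopt the local majority (low-awareness agents are more easily
    swayed), then random-walks to a normally distributed new position
    clipped to the unit square."""
    new = []
    for i, (x, y, a, w) in enumerate(agents):
        votes = {-1: 0, 0: 0, 1: 0}
        for j, (xj, yj, aj, wj) in enumerate(agents):
            if i != j and (x - xj)**2 + (y - yj)**2 <= d*d:
                votes[wj] += 1
        best = max(votes, key=votes.get)
        w_new = best if votes[best] > 0 and rng.random() > a else w
        new.append((min(1.0, max(0.0, x + rng.gauss(0, sigma))),
                    min(1.0, max(0.0, y + rng.gauss(0, sigma))),
                    a, w_new))
    return new

rng = random.Random(0)
agents = [(0.2, 0.2, 0.5, 1), (0.25, 0.25, 0.9, -1), (0.8, 0.8, 0.1, 0)]
agents = step(agents, 0.2, 0.05, rng)
```

Iterating this step and tracking how often each agent's opinion flips gives a direct measure of the fickleness the paper correlates with awareness.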
Collaborative Localization Algorithms for Wireless Sensor Networks with Reduced Localization Error
Sahoo, Prasan Kumar; Hwang, I-Shyan
2011-01-01
Localization is an important research issue in Wireless Sensor Networks (WSNs). Though the Global Positioning System (GPS) can be used to locate the position of the sensors, it is unfortunately limited to outdoor applications and is costly and power consuming. In order to find the location of sensor nodes without the help of GPS, collaboration among nodes is essential so that localization can be accomplished efficiently. In this paper, novel localization algorithms are proposed to find the possible location information of the normal nodes in a collaborative manner for an outdoor environment with the help of a few beacon and anchor nodes. In our localization scheme, at most three beacon nodes need to collaborate to find the accurate location information of any normal node. Besides, analytical methods are designed to calculate and reduce the localization error using a probability distribution function. Performance evaluation of our algorithm shows that there is a tradeoff between the deployed number of beacon nodes and the localization error, and that the average localization time of the network increases with the number of normal nodes deployed over a region. PMID:22163738
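The core geometric step when three beacons collaborate is trilateration: subtracting the three range-circle equations pairwise yields a 2x2 linear system for the unknown position. A minimal sketch (the function name is illustrative; the paper's scheme adds error analysis on top of this step):

```python
def trilaterate(beacons, dists):
    """Solve for (x, y) from three beacon positions (xi, yi) and
    measured ranges ri by subtracting circle equations to obtain a
    2x2 linear system, solved by Cramer's rule."""
    (x1, y1), (x2, y2), (x3, y3) = beacons
    r1, r2, r3 = dists
    a11, a12 = 2*(x2 - x1), 2*(y2 - y1)
    a21, a22 = 2*(x3 - x1), 2*(y3 - y1)
    b1 = r1*r1 - r2*r2 + x2*x2 - x1*x1 + y2*y2 - y1*y1
    b2 = r1*r1 - r3*r3 + x3*x3 - x1*x1 + y3*y3 - y1*y1
    det = a11*a22 - a12*a21          # zero when beacons are collinear
    return ((b1*a22 - b2*a12)/det, (a11*b2 - a21*b1)/det)
```

With noisy ranges the same system is solved in a least-squares sense, and the residual feeds the localization-error analysis.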
Experimental purification of two-atom entanglement.
Reichle, R; Leibfried, D; Knill, E; Britton, J; Blakestad, R B; Jost, J D; Langer, C; Ozeri, R; Seidelin, S; Wineland, D J
2006-10-19
Entanglement is a necessary resource for quantum applications--entanglement established between quantum systems at different locations enables private communication and quantum teleportation, and facilitates quantum information processing. Distributed entanglement is established by preparing an entangled pair of quantum particles in one location, and transporting one member of the pair to another location. However, decoherence during transport reduces the quality (fidelity) of the entanglement. A protocol to achieve entanglement 'purification' has been proposed to improve the fidelity after transport. This protocol uses separate quantum operations at each location and classical communication to distil high-fidelity entangled pairs from lower-fidelity pairs. Proof-of-principle experiments distilling entangled photon pairs have been carried out. However, these experiments obtained distilled pairs with a low probability of success and required destruction of the entangled pairs, rendering them unavailable for further processing. Here we report efficient and non-destructive entanglement purification with atomic quantum bits. Two noisy entangled pairs were created and distilled into one higher-fidelity pair available for further use. Success probabilities were above 35 per cent. The many applications of entanglement purification make it one of the most important techniques in quantum information processing.
Drive-by large-region acoustic noise-source mapping via sparse beamforming tomography.
Tuna, Cagdas; Zhao, Shengkui; Nguyen, Thi Ngoc Tho; Jones, Douglas L
2016-10-01
Environmental noise is a risk factor for human physical and mental health, demanding an efficient large-scale noise-monitoring scheme. The current technology, however, involves extensive sound pressure level (SPL) measurements at a dense grid of locations, making it impractical on a city-wide scale. This paper presents an alternative approach using a microphone array mounted on a moving vehicle to generate two-dimensional acoustic tomographic maps that yield the locations and SPLs of the noise-sources sparsely distributed in the neighborhood traveled by the vehicle. The far-field frequency-domain delay-and-sum beamforming output power values computed at multiple locations as the vehicle drives by are used as tomographic measurements. The proposed method is tested with acoustic data collected by driving an electric vehicle with a rooftop-mounted microphone array along a straight road next to a large open field, on which various pre-recorded noise-sources were produced by a loudspeaker at different locations. The accuracy of the tomographic imaging results demonstrates the promise of this approach for rapid, low-cost environmental noise-monitoring.
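The tomographic measurements above are narrowband delay-and-sum beamformer powers: the microphone phasors are phase-aligned toward a candidate location and summed. A minimal far-field-style sketch using exact point-source delays (names and the ideal-phasor generator are illustrative):

```python
import cmath
import math

C = 343.0  # speed of sound in air, m/s

def das_power(mics, phasors, freq, candidate):
    """Frequency-domain delay-and-sum output power: advance each
    microphone phasor by the propagation phase from the candidate
    location, sum coherently, and return |sum|^2."""
    total = 0j
    for (mx, my), x in zip(mics, phasors):
        dist = math.hypot(candidate[0] - mx, candidate[1] - my)
        total += x * cmath.exp(2j * math.pi * freq * dist / C)
    return abs(total)**2

def phasors_from_source(mics, src, freq):
    """Ideal unit-amplitude phasors a point source at src would
    produce at each microphone (propagation delay only)."""
    return [cmath.exp(-2j * math.pi * freq *
                      math.hypot(src[0] - mx, src[1] - my) / C)
            for mx, my in mics]

mics = [(0.0, 0.0), (0.5, 0.0), (1.0, 0.0)]   # rooftop array geometry
```

Evaluating this power on a grid of candidate locations at each vehicle position produces the per-snapshot images that the sparse tomography then fuses into a single noise-source map.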
Xu, Chunyun; Cheng, Haobo; Feng, Yunpeng; Jing, Xiaoli
2016-09-01
A type of laser semiactive angle measurement system is designed for target detecting and tracking. Only one detector is used to detect target location from four distributed aperture optical systems through a 4×1 imaging fiber bundle. A telecentric optical system in image space is designed to increase the efficiency of imaging fiber bundles. According to the working principle of a four-quadrant (4Q) detector, fiber diamond alignment is adopted between an optical system and a 4Q detector. The structure of the laser semiactive angle measurement system is, we believe, novel. Tolerance analysis is carried out to determine tolerance limits of manufacture and installation errors of the optical system. The performance of the proposed method is identified by computer simulations and experiments. It is demonstrated that the linear region of the system is ±12°, with measurement error of better than 0.2°. In general, this new system can be used with large field of view and high accuracy, providing an efficient, stable, and fast method for angle measurement in practical situations.
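A four-quadrant detector recovers the spot offset from the four quadrant signals by normalized difference-over-sum ratios. A minimal sketch (the quadrant layout labeled below is an assumption, not taken from the paper):

```python
def quad_position(a, b, c, d):
    """Normalized spot offsets from four quadrant signals.
    Assumed layout: A upper-right, B upper-left, C lower-left,
    D lower-right.  Valid (approximately linear) only while the
    spot overlaps all four quadrants."""
    s = a + b + c + d
    x = ((a + d) - (b + c)) / s   # right minus left
    y = ((a + b) - (c + d)) / s   # top minus bottom
    return x, y
```

The measured (x, y) offsets are then mapped to angles through the optical system's calibration, which is where the reported linear region of ±12° and 0.2° accuracy come in.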
DOE Office of Scientific and Technical Information (OSTI.GOV)
Golchert, B.; Shell, J.; Jones, S.
2006-09-06
The objective of this project is to apply the Argonne National Laboratory's Glass Furnace Model (GFM) to the Longhorn oxy-fuel furnace to improve energy efficiency and to investigate the transport of gases released from the batch/melt into the exhaust. The model will make preliminary estimates of the local concentrations of water, carbon dioxide, elemental oxygen, and other subspecies in the entire combustion space as well as the concentration of these species in the furnace exhaust gas. This information, along with the computed temperature distribution in the combustion space, may give indications on possible locations of crown corrosion. An investigation into the optimization of the furnace will be performed by varying several key parameters such as the burner firing pattern, exhaust number/size, and the boost usage (amount and distribution). Results from these parametric studies will be analyzed to determine more efficient methods of operating the furnace that reduce crown corrosion. Finally, computed results from the GFM will be qualitatively correlated to measured values, thus augmenting the validation of the GFM.
Evaluation of on-line pulse control for vibration suppression in flexible spacecraft
NASA Technical Reports Server (NTRS)
Masri, Sami F.
1987-01-01
A numerical simulation was performed, by means of a large-scale finite element code capable of handling large deformations and/or nonlinear behavior, to investigate the suitability of the nonlinear pulse-control algorithm to suppress the vibrations induced in the Spacecraft Control Laboratory Experiment (SCOLE) components under realistic maneuvers. Among the topics investigated were the effects of various control parameters on the efficiency and robustness of the vibration control algorithm. Advanced nonlinear control techniques were applied to an idealized model of some of the SCOLE components to develop an efficient algorithm to determine the optimal locations of point actuators, considering the hardware on the SCOLE project as distributed in nature. The control was obtained from a quadratic optimization criterion, given in terms of the state variables of the distributed system. An experimental investigation was performed on a model flexible structure resembling the essential features of the SCOLE components, and electrodynamic and electrohydraulic actuators were used to investigate the applicability of the control algorithm with such devices in addition to mass-ejection pulse generators using compressed air.
Moraes, Celso; Myung, Sunghee; Lee, Sangkeum; Har, Dongsoo
2017-01-10
Provision of energy to wireless sensor networks is crucial for their sustainable operation. Sensor nodes are typically equipped with batteries as their operating energy sources. However, when the sensor nodes are sited in almost inaccessible locations, replacing their batteries incurs high maintenance cost. Under such conditions, wireless charging of sensor nodes by a mobile charger with an antenna can be an efficient solution. When charging distributed sensor nodes, a directional antenna, rather than an omnidirectional antenna, is more energy-efficient because of smaller proportion of off-target radiation. In addition, for densely distributed sensor nodes, it can be more effective for some undercharged sensor nodes to harvest energy from neighboring overcharged sensor nodes than from the remote mobile charger, because this reduces the pathloss of charging signal due to smaller distances. In this paper, we propose a hybrid charging scheme that combines charging by a mobile charger with a directional antenna, and energy trading, e.g., transferring and harvesting, between neighboring sensor nodes. The proposed scheme is compared with other charging scheme. Simulations demonstrate that the hybrid charging scheme with a directional antenna achieves a significant reduction in the total charging time required for all sensor nodes to reach a target energy level.
Energy Aware Clustering Algorithms for Wireless Sensor Networks
NASA Astrophysics Data System (ADS)
Rakhshan, Noushin; Rafsanjani, Marjan Kuchaki; Liu, Chenglian
2011-09-01
The sensor nodes deployed in wireless sensor networks (WSNs) are extremely power constrained, so maximizing the lifetime of the entire networks is mainly considered in the design. In wireless sensor networks, hierarchical network structures have the advantage of providing scalable and energy efficient solutions. In this paper, we investigate different clustering algorithms for WSNs and also compare these clustering algorithms based on metrics such as clustering distribution, cluster's load balancing, Cluster Head's (CH) selection strategy, CH's role rotation, node mobility, clusters overlapping, intra-cluster communications, reliability, security and location awareness.
Reid, Jeffrey C.
1989-01-01
Computer processing and high resolution graphics display of geochemical data were used to quickly, accurately, and efficiently obtain important decision-making information for tin (cassiterite) exploration, Seward Peninsula, Alaska (USA). Primary geochemical dispersion patterns were determined for tin-bearing intrusive granite phases of Late Cretaceous age with exploration bedrock lithogeochemistry at the Kougarok tin prospect. Expensive diamond drilling footage was required to reach exploration objectives. Recognition of element distribution and dispersion patterns was useful in subsurface interpretation and correlation, and to aid location of other holes.
The Hydrologic Instrumentation Facility of the U.S. Geological Survey
Wagner, C.R.; Jeffers, Sharon
1984-01-01
The U.S. Geological Survey Water Resources Division has improved support to the agency's field offices by the consolidation of all instrumentation support services in a single facility. This facility, known as the Hydrologic Instrumentation Facility (HIF), is located at the National Space Technology Laboratory, Mississippi, about 50 miles east of New Orleans, Louisiana. The HIF is responsible for design and development, testing, evaluation, procurement, warehousing, distribution and repair of a variety of specialized hydrologic instrumentation. The centralization has resulted in more efficient and effective support of the Survey's hydrologic programs. (USGS)
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-26
... Status; IKEA Distribution Services (Distribution of Home Furnishings and Accessories); Baltimore, MD... subzone at the warehouse and distribution facility of IKEA Distribution Services, located in Perryville... and distribution at the facility of IKEA Distribution Services, located in Perryville, Maryland...
SAGE: The Self-Adaptive Grid Code. 3
NASA Technical Reports Server (NTRS)
Davies, Carol B.; Venkatapathy, Ethiraj
1999-01-01
The multi-dimensional self-adaptive grid code, SAGE, is an important tool in the field of computational fluid dynamics (CFD). It provides an efficient method to improve the accuracy of flow solutions while simultaneously reducing computer processing time. Briefly, SAGE enhances an initial computational grid by redistributing the mesh points into more appropriate locations. The movement of these points is driven by an equal-error-distribution algorithm that utilizes the relationship between high flow gradients and excessive solution errors. The method also provides a balance between clustering points in the high gradient regions and maintaining the smoothness and continuity of the adapted grid. The latest version, Version 3, includes the ability to change the boundaries of a given grid to more efficiently enclose flow structures and provides alternative redistribution algorithms.
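The equal-error-distribution idea can be sketched in one dimension: integrate a weight function (e.g. the solution gradient magnitude) along the grid, then place new points at equal increments of that cumulative weight, so high-gradient regions receive more points. This is an illustrative reduction of the scheme, not SAGE's actual implementation:

```python
def redistribute(x, w, n_new):
    """Place n_new points so each interval carries an equal share of
    the cumulative weight w (e.g. |solution gradient|) sampled at the
    original points x.  Uses trapezoidal accumulation and linear
    interpolation of the inverse cumulative function."""
    cum = [0.0]
    for i in range(1, len(x)):
        cum.append(cum[-1] + 0.5*(w[i] + w[i-1])*(x[i] - x[i-1]))
    total = cum[-1]
    new_pts, j = [], 0
    for k in range(n_new):
        target = total * k / (n_new - 1)
        while j < len(cum) - 2 and cum[j+1] < target:
            j += 1
        span = cum[j+1] - cum[j]
        t = 0.0 if span == 0 else (target - cum[j]) / span
        new_pts.append(x[j] + t*(x[j+1] - x[j]))
    return new_pts
```

A weight peaked near the middle of the domain pulls most of the redistributed points into that region; the production code additionally blends in smoothness constraints so adjacent spacings do not change too abruptly.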
Fast SS-ILM: A Computationally Efficient Algorithm to Discover Socially Important Locations
NASA Astrophysics Data System (ADS)
Dokuz, A. S.; Celik, M.
2017-11-01
Socially important locations are places which are frequently visited by social media users in their social media lifetime. Discovering socially important locations provides valuable information about user behaviour on social media networking sites. However, discovering socially important locations is challenging due to data volume and dimensions, spatial and temporal calculations, location sparseness in social media datasets, and the inefficiency of current algorithms. In the literature, several studies have been conducted to discover important locations; however, the proposed approaches do not work in a computationally efficient manner. In this study, we propose the Fast SS-ILM algorithm by modifying the algorithm of SS-ILM to mine socially important locations efficiently. Experimental results show that the proposed Fast SS-ILM algorithm decreases the execution time of the socially important location discovery process by up to 20%.
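The basic mining task can be sketched as a threshold filter over check-in data: a location qualifies when enough distinct users visit it often enough. The thresholds and function names below are illustrative assumptions; the actual SS-ILM criteria are more elaborate (spatial and temporal support):

```python
from collections import defaultdict

def important_locations(visits, min_users, min_visits):
    """visits: iterable of (user, location) check-ins.  A location is
    deemed 'socially important' when at least min_users distinct users
    visited it and it accumulated at least min_visits total check-ins."""
    users = defaultdict(set)
    count = defaultdict(int)
    for u, loc in visits:
        users[loc].add(u)
        count[loc] += 1
    return sorted(loc for loc in count
                  if len(users[loc]) >= min_users
                  and count[loc] >= min_visits)
```

Fast SS-ILM's speedup comes from pruning and indexing so that most candidate locations are rejected before the full support computation, rather than from changing this underlying criterion.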
Modelling disease outbreaks in realistic urban social networks
NASA Astrophysics Data System (ADS)
Eubank, Stephen; Guclu, Hasan; Anil Kumar, V. S.; Marathe, Madhav V.; Srinivasan, Aravind; Toroczkai, Zoltán; Wang, Nan
2004-05-01
Most mathematical models for the spread of disease use differential equations based on uniform mixing assumptions or ad hoc models for the contact process. Here we explore the use of dynamic bipartite graphs to model the physical contact patterns that result from movements of individuals between specific locations. The graphs are generated by large-scale individual-based urban traffic simulations built on actual census, land-use and population-mobility data. We find that the contact network among people is a strongly connected small-world-like graph with a well-defined scale for the degree distribution. However, the locations graph is scale-free, which allows highly efficient outbreak detection by placing sensors in the hubs of the locations network. Within this large-scale simulation framework, we then analyse the relative merits of several proposed mitigation strategies for smallpox spread. Our results suggest that outbreaks can be contained by a strategy of targeted vaccination combined with early detection without resorting to mass vaccination of a population.
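The sensor-placement consequence of the scale-free locations graph can be sketched directly: rank locations by degree (distinct visitors) in the bipartite people-locations graph and instrument the top hubs. A minimal sketch with illustrative data:

```python
from collections import Counter

def hub_sensors(contacts, k):
    """contacts: (person, location) edges of the bipartite visit graph.
    Return the k highest-degree locations (most distinct visitors) --
    the hubs where sensors detect an outbreak most efficiently."""
    deg = Counter()
    seen = set()
    for person, loc in contacts:
        if (person, loc) not in seen:     # count each visitor once
            seen.add((person, loc))
            deg[loc] += 1
    return [loc for loc, _ in deg.most_common(k)]

contacts = [("p1", "mall"), ("p2", "mall"), ("p3", "mall"),
            ("p1", "home1"), ("p2", "home2")]
```

Because the degree distribution of the locations graph is heavy-tailed, a small k covers a disproportionate share of person-location contacts, which is what makes hub surveillance efficient.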
NASA Astrophysics Data System (ADS)
Delorit, J. D.; Block, P. J.
2017-12-01
Where strong water rights law and corresponding markets exist as a coupled econo-legal mechanism, water rights holders are permitted to trade allocations to promote economic water resource use efficiency. In locations where hydrologic uncertainty drives the assignment of annual per-water right allocation values by water resource managers, collaborative water resource decision making by water rights holders, specifically those involved in agricultural production, can result in both resource and economic Pareto efficiency. Such is the case in semi-arid North Chile, where interactions between representative farmer groups, treated as competitive bilateral monopolies, and modeled at water market-scale, can provide both price and water right allocation distribution signals for unregulated, temporary water right leasing markets. For the range of feasible per-water right allocation values, a coupled agricultural-economic model is developed to describe the equilibrium distribution of water, the corresponding market price of water rights and the net surplus generated by collaboration between competing agricultural uses. Further, this research describes a per-water right inflection point for allocations where economic efficiency is not possible, and where price negotiation among competing agricultural uses is required. An investigation of the effects of water right supply and demand inequality at the market-scale is completed to characterize optimal market performance under existing water rights law. The broader insights of this research suggest that water rights holders engaged in agriculture can achieve economic benefits from forming crop-type cooperatives and by accurately assessing the economic value of allocation.
Radiation Source Mapping with Bayesian Inverse Methods
Hykes, Joshua M.; Azmy, Yousry Y.
2017-03-22
In this work, we present a method to map the spectral and spatial distributions of radioactive sources using a limited number of detectors. Locating and identifying radioactive materials is important for border monitoring, for accounting for special nuclear material in processing facilities, and in cleanup operations following a radioactive material spill. Most methods to analyze these types of problems make restrictive assumptions about the distribution of the source. In contrast, the source mapping method presented here allows an arbitrary three-dimensional distribution in space and a gamma peak distribution in energy. To apply the method, the problem is cast as an inverse problem where the system's geometry and material composition are known and fixed, while the radiation source distribution is sought. A probabilistic Bayesian approach is used to solve the resulting inverse problem since the system of equations is ill-posed. The posterior is maximized with a Newton optimization method. The probabilistic approach also provides estimates of the confidence in the final source map prediction. A set of adjoint, discrete ordinates flux solutions, obtained in this work with the Denovo code, is required to efficiently compute detector responses from a candidate source distribution. These adjoint fluxes form the linear mapping from the state space to the response space. The test of the method's success is simultaneously locating a set of 137Cs and 60Co gamma sources in a room. This test problem is solved using experimental measurements that we collected for this purpose. Because of the weak sources available for use in the experiment, some of the expected photopeaks were not distinguishable from the Compton continuum. However, by supplanting 14 flawed measurements (out of a total of 69) with synthetic responses computed by MCNP, the proof-of-principle source mapping was successful. The locations of the sources were predicted within 25 cm for two of the sources and 90 cm for the third, in a room with a floor plan of roughly 4 m x 4 m. Finally, the predicted source intensities were within a factor of ten of their true values.
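The core of the reconstruction can be illustrated on a toy linear inverse problem (all sizes and values are hypothetical, and a random matrix stands in for the Denovo adjoint-flux mapping): with a Gaussian prior the posterior is quadratic, so the MAP source map has a closed form, reached by a single Newton step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical adjoint-flux response matrix: detector responses are a
# linear map A of the flattened source distribution s, d = A s + noise.
n_src, n_det = 6, 4
A = rng.uniform(0.1, 1.0, (n_det, n_src))
s_true = np.array([0.0, 2.0, 0.0, 0.0, 1.0, 0.0])
d = A @ s_true + rng.normal(0, 0.01, n_det)

# The system is ill-posed (fewer detectors than source voxels); a
# Gaussian prior regularizes it, and the quadratic posterior gives
# the MAP estimate in closed form.
lam = 1e-2  # prior precision (assumed)
s_map = np.linalg.solve(A.T @ A + lam * np.eye(n_src), A.T @ d)
```

The posterior covariance (A.T @ A + lam*I)^-1 would supply the confidence estimates the abstract mentions.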
Web Application to Monitor Logistics Distribution of Disaster Relief Using the CodeIgniter Framework
NASA Astrophysics Data System (ADS)
Jamil, Mohamad; Ridwan Lessy, Mohamad
2018-03-01
Disaster management is the responsibility of central and local governments. Its guiding principles include speed and precision, prioritization, coordination and cohesion, and efficiency and effectiveness. The assistance most needed by affected communities is logistical: everyday necessities such as food, instant noodles, fast food, blankets, and mattresses. Logistical support is essential for disaster management, especially during a disaster, and must arrive at the right time, location, target, quality, and quantity, matching actual needs. The purpose of this study is to build a web application to monitor the logistics distribution of disaster relief using the CodeIgniter framework. Through this application, aid deliveries from and to the disaster site can be easily monitored and controlled.
Decision Model for Planning and Scheduling of Seafood Product Considering Traceability
NASA Astrophysics Data System (ADS)
Agustin; Mawengkang, Herman; Mathelinea, Devy
2018-01-01
Due to global challenges, it is necessary for an industrial company to integrate production scheduling and distribution planning in order to be more efficient and gain economic advantages. This paper presents production planning and scheduling for a seafood manufacturing company in Aceh Province, Indonesia, which simultaneously produces multiple kinds of seafood products. The perishable nature of fish highly restricts its storage duration and delivery conditions. Traceability is a tracking requirement to check whether product quality is satisfied. The production and distribution planning problem aims to meet customer demand subject to traceability of the seafood product and other restrictions. The problem is modeled as a mixed-integer linear program and solved using a neighborhood search approach.
Percolation in three-dimensional fracture networks for arbitrary size and shape distributions
NASA Astrophysics Data System (ADS)
Thovert, J.-F.; Mourzenko, V. V.; Adler, P. M.
2017-04-01
The percolation threshold of fracture networks is investigated by extensive direct numerical simulations. The fractures are randomly located and oriented in three-dimensional space. A very wide range of regular, irregular, and random fracture shapes is considered, in monodisperse or polydisperse networks containing fractures with different shapes and/or sizes. The results are rationalized in terms of a dimensionless density. A simple model involving a new shape factor is proposed, which accounts very efficiently for the influence of the fracture shape. It applies with very good accuracy in monodisperse or moderately polydisperse networks, and provides a good first estimation in other situations. A polydispersity index is shown to control the need for a correction, and the corrective term is modelled for the investigated size distributions.
NASA Technical Reports Server (NTRS)
Green, Robert D.; Agui, Juan H.; Vijayakumar, R.
2017-01-01
The air revitalization system aboard the International Space Station (ISS) provides the vital function of maintaining a clean cabin environment for the crew and the hardware. This is a serious challenge in pressurized space compartments, since no outside-air ventilation is possible and a larger particulate load is imposed on the filtration system because sedimentation is absent in the microgravity environment of Low Earth Orbit (LEO). The ISS Environmental Control and Life Support (ECLS) system architecture in the U.S. Segment uses a distributed particulate filtration approach consisting of traditional High-Efficiency Particulate Adsorption (HEPA) media filters deployed at multiple locations in each U.S. Segment module; these filters are referred to as Bacterial Filter Elements, or BFEs. These filters have a maintenance replacement interval of 2-5 years, depending on their location in the ISS. In this work, we present particulate removal efficiency, pressure drop, and leak test results for a sample set of 8 BFEs returned from the ISS after filter replacement. The results can potentially be utilized by the ISS Program to ascertain whether the present replacement interval can be maintained or extended to balance the on-ground filter inventory against the extension of the ISS lifetime beyond 2024. These results can also provide meaningful guidance for particulate filter designs under consideration for future deep space exploration missions.
Estimation Methods for Non-Homogeneous Regression - Minimum CRPS vs Maximum Likelihood
NASA Astrophysics Data System (ADS)
Gebetsberger, Manuel; Messner, Jakob W.; Mayr, Georg J.; Zeileis, Achim
2017-04-01
Non-homogeneous regression models are widely used to statistically post-process numerical weather prediction models. Such regression models correct for errors in mean and variance and are capable of forecasting a full probability distribution. To estimate the corresponding regression coefficients, CRPS minimization has been performed in many meteorological post-processing studies over the last decade. In contrast to maximum likelihood estimation, CRPS minimization is claimed to yield more calibrated forecasts. Theoretically, both scoring rules used as an optimization score should be able to locate a similar and unknown optimum. Discrepancies might result from a wrong distributional assumption about the observed quantity. To address this theoretical concept, this study compares maximum likelihood and minimum CRPS estimation for different distributional assumptions. First, a synthetic case study shows that, for an appropriate distributional assumption, both estimation methods yield similar regression coefficients. The log-likelihood estimator is slightly more efficient. A real-world case study for surface temperature forecasts at different sites in Europe confirms these results but shows that surface temperature does not always follow the classical assumption of a Gaussian distribution. KEYWORDS: ensemble post-processing, maximum likelihood estimation, CRPS minimization, probabilistic temperature forecasting, distributional regression models
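The two scores being compared can be evaluated directly for a Gaussian predictive distribution, whose CRPS has a well-known closed form (the observations below are made up; this only illustrates that both are proper scores, not the paper's estimation study).

```python
import math

def gauss_crps(mu, sigma, y):
    """Closed-form CRPS of a Gaussian forecast N(mu, sigma^2) for outcome y."""
    z = (y - mu) / sigma
    pdf = math.exp(-z * z / 2) / math.sqrt(2 * math.pi)
    cdf = 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return sigma * (z * (2 * cdf - 1) + 2 * pdf - 1 / math.sqrt(math.pi))

def gauss_nll(mu, sigma, y):
    """Negative log-likelihood of the same Gaussian forecast."""
    z = (y - mu) / sigma
    return 0.5 * z * z + math.log(sigma) + 0.5 * math.log(2 * math.pi)

# Both scores are proper: a well-centred forecast beats a biased one
# on average for either criterion.
obs = [-0.3, 0.1, 0.5, -1.2, 0.8]
crps_good = sum(gauss_crps(0, 1, y) for y in obs) / len(obs)
crps_bad = sum(gauss_crps(2, 1, y) for y in obs) / len(obs)
nll_good = sum(gauss_nll(0, 1, y) for y in obs) / len(obs)
nll_bad = sum(gauss_nll(2, 1, y) for y in obs) / len(obs)
```

Minimizing either mean score over (mu, sigma) with a numerical optimizer gives the two estimators the study compares.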
Protection of Location Privacy Based on Distributed Collaborative Recommendations
Wang, Peng; Yang, Jing; Zhang, Jian-Pei
2016-01-01
In the existing centralized location services system structure, the server is easily attacked and becomes a communication bottleneck, which can lead to the disclosure of users' locations. To address this, we present a new distributed collaborative recommendation strategy built on a distributed system. In this strategy, each node maintains a profile of its own location information. When a request for location services arises, the user can obtain the corresponding location services according to recommendations drawn from the neighbouring users' location information profiles. If no suitable recommended location service results are obtained, the user sends a service request to the server using a k-anonymous data set constructed around the centroid position of the neighbours. We designed a new model of distributed collaborative recommendation location service based on the users' location information profiles, and used generalization and encryption to protect the privacy of the user's location information. Finally, we used a real location data set for theoretical and experimental analysis. The results show that the proposed strategy reduces the frequency of access to the location server, provides better location services, and better protects the user's location privacy. PMID:27649308
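The k-anonymity fallback can be sketched as follows (a minimal sketch with made-up coordinates; the paper's full protocol additionally applies generalization and encryption): the client cloaks its true position inside a set of k user locations and queries with their centroid.

```python
# Minimal sketch: hide the requesting user's position by querying with
# the centroid of itself and its k-1 nearest neighbouring users.
def anonymized_query_point(own, neighbours, k=4):
    """Pick the k-1 nearest neighbours and return the centroid of the k points."""
    nearest = sorted(
        neighbours,
        key=lambda p: (p[0] - own[0]) ** 2 + (p[1] - own[1]) ** 2,
    )[:k - 1]
    cloak = [own] + nearest
    cx = sum(p[0] for p in cloak) / len(cloak)
    cy = sum(p[1] for p in cloak) / len(cloak)
    return (cx, cy)

own = (0.0, 0.0)
neighbours = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0), (5.0, 5.0)]
query = anonymized_query_point(own, neighbours, k=4)  # (0.0, 0.25)
```

The server then answers for the centroid, so it never learns which of the k users issued the request or their exact positions.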
Leroy, Emmanuelle C; Samaran, Flore; Bonnel, Julien; Royer, Jean-Yves
2016-01-01
Passive acoustic monitoring is an efficient way to gain insight into the ecology of large whales. This approach allows long-term and species-specific monitoring over large areas. In this study, we examined six years (2010 to 2015) of continuous acoustic recordings at up to seven different locations in the Central and Southern Indian Basin to assess the peak periods of presence, seasonality and migration movements of Antarctic blue whales (Balaenoptera musculus intermedia). An automated method is used to detect the Antarctic blue whale stereotyped call, known as the Z-call. Detection results are analyzed in terms of distribution, seasonal presence and diel pattern of emission at each site. Z-calls are detected year-round at each site, except for one located in the equatorial Indian Ocean, and display a highly seasonal distribution. This seasonality is stable across years for every site, but varies between sites. Z-calls are mainly detected during autumn and spring at the subantarctic locations, suggesting that these sites are on the Antarctic blue whale migration routes, and mostly during winter at the subtropical sites. In addition to these seasonal trends, there is a significant diel pattern in Z-call emission, with more Z-calls in daytime than at night. This diel pattern may be related to blue whale feeding ecology. PMID:27828976
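The detection step can be illustrated with a generic matched filter on synthetic data (a frequency sweep stands in for the Z-call template, and all numbers are made up; the paper's actual detector may differ): cross-correlating the recording with a known call template peaks at the call onset.

```python
import numpy as np

rng = np.random.default_rng(1)

fs = 100.0
t = np.arange(0, 1, 1 / fs)
# A windowed frequency sweep standing in for the stereotyped call.
template = np.sin(2 * np.pi * (10 * t + 15 * t ** 2)) * np.hanning(t.size)

# Ten seconds of noise with one call hidden at a known onset sample.
signal = rng.normal(0, 0.1, 1000)
true_onset = 400
signal[true_onset:true_onset + template.size] += template

# Matched filter: the cross-correlation peaks where the call starts.
corr = np.correlate(signal, template, mode="valid")
detected = int(np.argmax(corr))
```

Thresholding the correlation peak against the local noise level turns this into an automated year-round detector.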
Temporal scaling and spatial statistical analyses of groundwater level fluctuations
NASA Astrophysics Data System (ADS)
Sun, H.; Yuan, L., Sr.; Zhang, Y.
2017-12-01
Natural dynamics such as groundwater level fluctuations can exhibit multifractionality and/or multifractality due likely to multi-scale aquifer heterogeneity and controlling factors, whose statistics requires efficient quantification methods. This study explores multifractionality and non-Gaussian properties in groundwater dynamics expressed by time series of daily level fluctuation at three wells located in the lower Mississippi valley, after removing the seasonal cycle in the temporal scaling and spatial statistical analysis. First, using the time-scale multifractional analysis, a systematic statistical method is developed to analyze groundwater level fluctuations quantified by the time-scale local Hurst exponent (TS-LHE). Results show that the TS-LHE does not remain constant, implying the fractal-scaling behavior changing with time and location. Hence, we can distinguish the potentially location-dependent scaling feature, which may characterize the hydrology dynamic system. Second, spatial statistical analysis shows that the increment of groundwater level fluctuations exhibits a heavy tailed, non-Gaussian distribution, which can be better quantified by a Lévy stable distribution. Monte Carlo simulations of the fluctuation process also show that the linear fractional stable motion model can well depict the transient dynamics (i.e., fractal non-Gaussian property) of groundwater level, while fractional Brownian motion is inadequate to describe natural processes with anomalous dynamics. Analysis of temporal scaling and spatial statistics therefore may provide useful information and quantification to understand further the nature of complex dynamics in hydrology.
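A simplified, global version of the scaling analysis can be sketched as follows (the paper estimates a time-scale *local* Hurst exponent; here a single global H is fitted to a synthetic Brownian record, for which H = 0.5 is the known answer): the standard deviation of increments of a self-affine series grows as lag**H.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for a (deseasonalized) groundwater level record:
# Brownian motion, whose Hurst exponent is 0.5 by construction.
series = np.cumsum(rng.normal(size=20000))

# std of increments scales as lag**H; fit H on log-log axes.
lags = np.array([1, 2, 4, 8, 16, 32, 64])
sds = np.array([np.std(series[lag:] - series[:-lag]) for lag in lags])
H, _ = np.polyfit(np.log(lags), np.log(sds), 1)
```

Repeating the fit in sliding windows gives a time-dependent exponent, the idea behind the TS-LHE diagnostic.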
De Diego, N; Rodríguez, J L; Dodd, I C; Pérez-Alfocea, F; Moncaleán, P; Lacuesta, M
2013-05-01
Anatomical, physiological and phytohormonal changes involved in drought tolerance were examined in different Pinus radiata D. Don breeds subjected to soil drying and rewatering. Breeds with the smallest stomatal chamber size had the lowest transpiration rate and the highest intrinsic water-use efficiency. Xylem cell size was positively correlated with leaf hydraulic conductance and needle indole-3-acetic acid (IAA) concentrations, whereas transpiration rate was negatively correlated with needle abscisic acid (ABA) levels. Since these two phytohormones seem important in regulating the P. radiata drought response, they were simultaneously immunolocalized in roots and needles of the most tolerant breed (P. radiata var. radiata × var. cedrosensis) during two sequential drought cycles and after rewatering. During drought, IAA was unequally distributed towards the pointed area of the needle cross-section and mainly located in mesophyll and vascular tissue cells of needles, possibly inducing needle epinasty, whereas ABA was principally located in guard cells, presumably to elicit stomatal closure. In the roots, at the end of the first drought cycle, strong IAA accumulation was observed in the cortex while ABA levels decreased, probably due to translocation to the leaves. Rewatering modified the distribution of both IAA and ABA in the needles, causing an accumulation principally in vascular tissue, with residual concentrations in mesophyll, likely favouring the acclimatization of the plants to further drought cycles. In contrast, in the roots IAA and ABA were located in the exodermis, a natural barrier that regulates phytohormone translocation to other plant tissues and hormone losses to the soil solution after rewatering. These results confirm that immunolocalization is an efficient tool for understanding the translocation of IAA and ABA in plants subjected to different water stress situations, and for clarifying their role in regulating physiological responses such as stomatal closure and epinasty in needles and root development.
Radiation characteristics of Leaky Surface Plasmon polaritons of graphene
NASA Astrophysics Data System (ADS)
Mohadesi, V.; Asgari, A.; Siahpoush, V.
2018-07-01
Highly efficient coupling of graphene surface plasmons to far-field radiation is possible with certain techniques and can be used in radiating applications. Besides the coupling efficiency, the angular distribution of the radiated power is an important parameter for radiating-device performance. In this paper we investigate the gain of the far-field radiation associated with coupling graphene surface plasmons via a high-permittivity medium located close to the graphene. Our results show that highly directive radiation and high coupling efficiency can be obtained with this technique, and that the gain and directivity of the radiation can be modified by graphene characteristics such as chemical potential and graphene quality. Raising the chemical potential of graphene increases the gain of the radiation by amplifying its directivity. Furthermore, high values of relaxation time lead to highly directive and strong coupling, which raises the maximum gain at the efficient coupling angle. The tunable gain and directivity of this structure can be important for designing reconfigurable THz radiating devices.
An efficiency improvement in warehouse operation using simulation analysis
NASA Astrophysics Data System (ADS)
Samattapapong, N.
2017-11-01
In general, industry requires an efficient system for warehouse operation. Many important factors must be considered when designing an efficient warehouse system; the most important is an effective warehouse operation system that can help transfer raw material, reduce costs and support transportation. Given these factors, we studied the work system and warehouse distribution. We started by collecting the important storage data, such as information on products, on size and location, on data collection and on production, and used all this information to build a simulation model in Flexsim® simulation software. The simulation analysis found that the conveyor belt was a bottleneck in the warehouse operation. Therefore, several scenarios to resolve that problem were generated and tested through simulation analysis. The results showed that the average queuing time was reduced from 89.8% to 48.7% and the product-transport capability increased from 10.2% to 50.9%. Thus, this method is effective for increasing efficiency in the warehouse operation.
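The bottleneck effect can be reproduced with a toy queue model (made-up arrival and capacity numbers; this is a sketch of the idea, not the Flexsim model): when the conveyor's throughput barely matches the arrival rate, a queue builds up, and raising capacity removes it.

```python
import random

random.seed(3)

def simulate(capacity, steps=10000):
    """Average number of items queued at the conveyor per tick."""
    queue, waiting = 0, 0
    for _ in range(steps):
        queue += random.choice([0, 1, 2])   # arrivals this tick (mean 1.0)
        queue -= min(queue, capacity)       # conveyor moves `capacity` items
        waiting += queue
    return waiting / steps

slow = simulate(capacity=1)   # conveyor as the bottleneck
fast = simulate(capacity=2)   # after doubling throughput
```

Comparing `slow` and `fast` is the simulation-analysis step the abstract describes: test a scenario, measure the queue, and keep the improvement that removes the bottleneck.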
Optical design of transmitter lens for asymmetric distributed free space optical networks
NASA Astrophysics Data System (ADS)
Wojtanowski, Jacek; Traczyk, Maciej
2018-05-01
We present a method of transmitter lens design dedicated to shaping the light distribution on a curved and asymmetric target. In this context, the target is understood as a surface determined by hypothetical optical detector locations. In the proposed method, ribbon-like surfaces of arbitrary shape are considered. The designed lens has the task of transforming a collimated and generally non-uniform input beam into a desired irradiance distribution on such irregular targets. The desired irradiance is associated with the space-dependent efficiency of power flow between the source and receivers distributed on the target surface. This unconventional nonimaging task differs from most illumination or beam-shaping objectives, where constant or prescribed irradiance has to be produced on a flat target screen. The optical challenge discussed here comes from applications where a single transmitter cooperates with a multitude of receivers located at various positions in space and oriented in various directions. The proposed approach is not limited to optical networks, but can be applied in a variety of other applications where a nonconventional irradiance distribution has to be engineered. The described method of lens design is based on geometrical optics, radiometry and the ray-mapping philosophy. Rays are processed as a vector field, each of them carrying a certain amount of power. Given the target surface shape and the orientation of the receiver distribution, the ray-surface crossing map is calculated. It corresponds to the output ray vector field, which is referred back to the calculated spatial distribution of input rays on the designed optical surface. Applying Snell's law in vector form yields the local surface normal vector and allows the lens profile to be calculated. In the paper, we also present a case study dealing with an exemplary optical network. The designed freeform lens is implemented in commercially available optical design software and the three-dimensional spatial distribution of irradiance is examined, showing perfect agreement with expectations.
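The vector form of Snell's law at the heart of the ray mapping can be written down directly (the formula is standard; the refractive indices and 45-degree geometry below are just illustrative):

```python
import numpy as np

def refract(d, n, n1, n2):
    """Vector Snell's law: refract unit direction d at a surface with
    unit normal n pointing against the incident ray."""
    d, n = np.asarray(d, float), np.asarray(n, float)
    eta = n1 / n2
    cos_i = -np.dot(n, d)
    k = 1 - eta ** 2 * (1 - cos_i ** 2)
    if k < 0:
        return None  # total internal reflection
    return eta * d + (eta * cos_i - np.sqrt(k)) * n

# 45-degree incidence from air (n1 = 1.0) into glass (n2 = 1.5).
d = np.array([np.sin(np.radians(45)), -np.cos(np.radians(45)), 0.0])
t_out = refract(d, np.array([0.0, 1.0, 0.0]), 1.0, 1.5)
sin_out = np.hypot(t_out[0], t_out[2])  # sine of the refraction angle
```

In the design loop, this is inverted: given the wanted output ray from the ray map, the local normal (and hence the lens profile) is solved from the same relation.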
Goals and strategies in the global control design of the OAJ Robotic Observatory
NASA Astrophysics Data System (ADS)
Yanes-Díaz, A.; Rueda-Teruel, S.; Antón, J. L.; Rueda-Teruel, F.; Moles, M.; Cenarro, A. J.; Marín-Franch, A.; Ederoclite, A.; Gruel, N.; Varela, J.; Cristóbal-Hornillos, D.; Chueca, S.; Díaz-Martín, M. C.; Guillén, L.; Luis-Simoes, R.; Maícas, N.; Lamadrid, J. L.; López-Sainz, A.; Hernández-Fuertes, J.; Valdivielso, L.; Mendes de Oliveira, C.; Penteado, P.; Schoenell, W.; Kanaan, A.
2012-09-01
There are many ways to solve the challenging problem of making a high performance robotic observatory from scratch. The Observatorio Astrofísico de Javalambre (OAJ) is a new astronomical facility located in the Sierra de Javalambre (Teruel, Spain) whose primary role will be to conduct all-sky astronomical surveys. The OAJ control system has been designed from a global point of view including astronomical subsystems as well as infrastructures and other facilities. Three main factors have been considered in the design of a global control system for the robotic OAJ: quality, reliability and efficiency. We propose CIA (Control Integrated Architecture) design and OEE (Overall Equipment Effectiveness) as a key performance indicator in order to improve operation processes, minimizing resources and obtaining high cost reduction whilst maintaining quality requirements. The OAJ subsystems considered for the control integrated architecture are the following: two wide-field telescopes and their instrumentation, active optics subsystems, facilities for sky quality monitoring (seeing, extinction, sky background, sky brightness, cloud distribution, meteorological station), domes and several infrastructure facilities such as water supply, glycol water, water treatment plant, air conditioning, compressed air, LN2 plant, illumination, surveillance, access control, fire suppression, electrical generators, electrical distribution, electrical consumption, communication network, Uninterruptible Power Supply and two main control rooms, one at the OAJ and the other remotely located in Teruel, 40km from the observatory, connected through a microwave radio-link. This paper presents the OAJ strategy in control design to achieve maximum quality efficiency for the observatory processes and operations, giving practical examples of our approach.
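OEE itself is simply the product of availability, performance, and quality factors; a minimal sketch with hypothetical observatory numbers (the real indicator would be fed by the control system's own logs):

```python
def oee(availability, performance, quality):
    """Overall Equipment Effectiveness: the product of the three factors."""
    return availability * performance * quality

# Hypothetical observatory night: 90% of usable time spent observing,
# 95% of the nominal survey speed, 98% of images passing quality control.
score = oee(0.90, 0.95, 0.98)
```

Because OEE is a product, the overall score is always at or below the weakest factor, which is what makes it useful for spotting the limiting subsystem.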
Network-based production quality control
NASA Astrophysics Data System (ADS)
Kwon, Yongjin; Tseng, Bill; Chiou, Richard
2007-09-01
This study investigates the feasibility of remote quality control using a host of advanced automation equipment with Internet accessibility. The recent emphasis on product quality and waste reduction stems from a dynamic, globalized and customer-driven market, which brings opportunities and threats to companies depending on their response speed and production strategies. Current trends in industry also include the wide spread of distributed manufacturing systems, where design, production, and management facilities are geographically dispersed. This situation mandates not only accessibility to remotely located production equipment for monitoring and control, but also efficient means of responding to a changing environment to counter process variations and diverse customer demands. To compete in such an environment, companies are striving to achieve 100% sensor-based, automated inspection for zero-defect manufacturing. In this study, the Internet-based quality control scheme is referred to as "E-Quality for Manufacturing," or "EQM" for short. By definition, EQM refers to a holistic approach to designing and embedding efficient quality control functions in the context of network-integrated manufacturing systems. Such a system lets designers located far from the production facility monitor, control and adjust the quality inspection processes as the production design evolves.
iParking: An Intelligent Indoor Location-Based Smartphone Parking Service
Liu, Jingbin; Chen, Ruizhi; Chen, Yuwei; Pei, Ling; Chen, Liang
2012-01-01
Indoor positioning technologies have been widely studied and a number of solutions have been proposed, yet substantial applications and services are still fairly primitive. Taking advantage of the emerging concept of the connected car, the popularity of smartphones and the mobile Internet, and precise indoor locations, this study presents the development of a novel intelligent parking service called iParking. With the iParking service, multiple parties such as users, parking facilities and service providers are connected through the Internet in a distributed architecture. The client software is a light-weight application running on a smartphone, and it works essentially on the basis of a precise indoor positioning solution, which fuses Wireless Local Area Network (WLAN) signals and the measurements of the built-in sensors of the smartphone. The positioning accuracy, availability and reliability of the proposed positioning solution are adequate for facilitating the novel parking service. An iParking prototype has been developed and demonstrated in a real parking environment at a shopping mall. The demonstration showed how the iParking service could improve the parking experience and increase the efficiency of parking facilities. iParking is novel in offering a cost- and energy-efficient solution. PMID:23202179
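The WLAN part of such a positioning fix is often realized by fingerprinting; a minimal k-nearest-neighbour sketch with made-up RSSI values follows (the actual iParking solution, which also fuses built-in smartphone sensors, is more involved):

```python
# Radio map: surveyed position -> RSSI observed from three access points.
radio_map = {
    (0.0, 0.0): [-40, -70, -80],
    (5.0, 0.0): [-70, -40, -75],
    (0.0, 5.0): [-75, -72, -42],
}

def locate(rssi, k=2):
    """Average the positions of the k closest fingerprints in signal space."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    best = sorted(radio_map, key=lambda p: dist(radio_map[p], rssi))[:k]
    return (sum(p[0] for p in best) / k, sum(p[1] for p in best) / k)

pos = locate([-45, -65, -78])  # a scan taken near the first two spots
```

Dead-reckoning from accelerometer and gyroscope measurements would then smooth these discrete WLAN fixes between scans.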
Online location of a break in water distribution systems
NASA Astrophysics Data System (ADS)
Liang, Jianwen; Xiao, Di; Zhao, Xinhua; Zhang, Hongwei
2003-08-01
Breaks often occur in urban water distribution systems in severely cold weather, or due to pipe corrosion, ground deformation, etc., and they cannot easily be located, especially immediately after the event. This paper develops a methodology to locate a break in a water distribution system by monitoring water pressure online at selected nodes in the system. For online monitoring, supervisory control and data acquisition (SCADA) technology is well suited. A neural network-based inverse analysis method is constructed to locate the break from the variation of water pressure. The neural network is trained using analytically simulated data from the water distribution system, and validated using a set of data never used in the training. The methodology is found to provide a quick, effective, and practical way to locate a break in a water distribution system.
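The inverse analysis above can be sketched with a toy stand-in: a nearest-signature lookup over simulated pressure-deviation patterns plays the role of the trained neural network. Node names and pressure values below are invented for illustration, not taken from the study.

```python
# Toy sketch of locating a pipe break from nodal pressure readings.
# The paper trains a neural network on simulated data; here a
# nearest-signature classifier stands in for that inverse mapping.

def train_signatures(simulated_runs):
    """simulated_runs: {break_node: [pressure deviations at monitored nodes]}"""
    return dict(simulated_runs)

def locate_break(signatures, observed):
    """Return the break node whose simulated signature is closest
    (Euclidean distance) to the observed pressure deviations."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(signatures, key=lambda node: dist(signatures[node], observed))

# Deviations (m of head) at three SCADA-monitored nodes for breaks
# simulated at hypothetical nodes A, B and C.
sigs = train_signatures({
    "A": [-4.0, -1.5, -0.5],
    "B": [-1.0, -3.5, -1.0],
    "C": [-0.5, -1.0, -4.2],
})
print(locate_break(sigs, [-0.8, -3.2, -1.1]))  # closest to B's signature
```

A real system would replace the lookup with a network trained on many hydraulic simulations, but the input/output structure is the same.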
Online monitoring of seismic damage in water distribution systems
NASA Astrophysics Data System (ADS)
Liang, Jianwen; Xiao, Di; Zhao, Xinhua; Zhang, Hongwei
2004-07-01
Water distribution systems can be damaged by earthquakes, and the seismic damage cannot easily be located, especially immediately after the event. Earthquake experience shows that accurate and quick location of seismic damage is critical to the emergency response of water distribution systems. This paper develops a methodology to locate seismic damage (multiple breaks) in a water distribution system by monitoring water pressure online at a limited number of positions in the system. For online monitoring, supervisory control and data acquisition (SCADA) technology is well suited. A neural network-based inverse analysis method is constructed to locate the seismic damage from the variation of water pressure. The neural network is trained using analytically simulated data from the water distribution system, and validated using a set of data never used in the training. The methodology is found to provide an effective and practical way to locate seismic damage in a water distribution system accurately and quickly.
Yi, Meng; Chen, Qingkui; Xiong, Neal N
2016-11-03
This paper considers the distributed access and control problem of the massive data access center for wireless sensor networks in the Internet of Things, an extension of wireless sensor networks and an element of its topology structure. In the context of massive service access requests arriving at a virtual data center, this paper designs a massive sensing data access and control mechanism to improve the access efficiency of service requests and make full use of the available resources at the data access center for the Internet of Things. Firstly, this paper proposes a synergistically distributed buffer access model, which separates resource information from location information. Secondly, the paper divides the service access requests into multiple virtual groups based on their characteristics and locations using an optimized self-organizing feature map neural network. Furthermore, this paper designs an optimal scheduling algorithm for group migration based on a combination of the artificial bee colony algorithm and chaos searching theory. Finally, the experimental results demonstrate that this mechanism outperforms existing schemes in enhancing the accessibility of service requests, reducing network delay, and achieving higher load-balancing capacity and resource utilization.
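The virtual-grouping step can be illustrated with a much simplified stand-in for the optimized self-organizing feature map: each request is assigned to its nearest group prototype by location. The coordinates and prototypes below are invented.

```python
# Sketch: partitioning service requests into virtual groups by request
# location, a nearest-prototype simplification of the paper's
# self-organizing feature map step.

def assign_groups(requests, prototypes):
    """Map each request (x, y) to the index of its nearest prototype."""
    def d2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return [min(range(len(prototypes)), key=lambda k: d2(r, prototypes[k]))
            for r in requests]

groups = assign_groups([(1, 1), (9, 8), (0, 2)], [(0, 0), (10, 10)])  # → [0, 1, 0]
```

A trained SOFM would also adapt the prototypes to the request distribution; only the assignment step is shown here.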
Energy efficient lighting and communications
NASA Astrophysics Data System (ADS)
Zhou, Z.; Kavehrad, M.; Deng, P.
2012-01-01
As light-emitting diodes (LEDs) increasingly displace incandescent lighting over the next few years, general applications of Visible Light Communication (VLC) technology are expected to include wireless Internet access, vehicle-to-vehicle communications, broadcast from LED signage, and machine-to-machine communications. An objective of this paper is to reveal the influence of system parameters on the power distribution and communication quality in a general multi-source VLC system. It is demonstrated that the sources' half-power angles (HPA), the receivers' fields of view (FOV), the source layout and the power distribution among sources are significant impact factors. Based on our findings, we developed a method to adaptively change the working status of each LED according to users' locations. The program minimizes the total power emitted while simultaneously ensuring sufficient light intensity and communication quality for each user. The paper also compares the performance of Orthogonal Frequency-Division Multiplexing (OFDM) and On-Off Keying (OOK) signals in indoor optical wireless communications. The simulation is carried out for different locations where different impulse response distortions are experienced. OFDM appears a better choice than the prevalent OOK for indoor VLC due to its high resistance to multipath effects and delay spread. However, the peak-to-average power limitations of the method must be investigated for lighting LEDs.
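A minimal sketch of the adaptive-control idea under a line-of-sight Lambertian channel model: greedily switch LEDs off as long as every user still receives a minimum intensity. The greedy pass, geometry, power level and threshold below are assumptions for illustration; the paper's actual optimization procedure is not reproduced here.

```python
import math

def gain(led, user, h=3.0, m=1):
    """Lambertian line-of-sight channel gain from an LED at ceiling
    position `led` to a floor user at `user` (both (x, y) in metres);
    h is the ceiling height, m the Lambertian order. With emission and
    incidence angles equal, gain = (m+1)/(2*pi*d^2) * cos^(m+1)(phi)."""
    r = math.dist(led, user)
    d = math.hypot(r, h)
    cos_phi = h / d
    return (m + 1) / (2 * math.pi * d * d) * cos_phi ** (m + 1)

def minimal_led_set(leds, users, power, min_intensity):
    """Greedily switch off LEDs while every user keeps min_intensity."""
    active = set(range(len(leds)))
    def ok(subset):
        return all(sum(power * gain(leds[i], u) for i in subset) >= min_intensity
                   for u in users)
    for i in sorted(active):
        trial = active - {i}
        if trial and ok(trial):
            active = trial
    return active

# Four ceiling LEDs, two users clustered near one corner (invented layout).
leds = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0), (4.0, 4.0)]
users = [(0.5, 0.5), (1.0, 1.0)]
print(sorted(minimal_led_set(leds, users, power=1.0, min_intensity=0.02)))  # → [0]
```

With both users near the first LED, the three remote LEDs can be turned off, which is the kind of location-aware power saving the paper describes.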
Path Diversity Improved Opportunistic Routing for Underwater Sensor Networks
Wang, Haiyan; He, Ke
2018-01-01
Packets carried along a pre-defined route in underwater sensor networks are very vulnerable: node mobility or intermittent channel availability easily leads to unreachable routes. Opportunistic routing has proven to be a promising paradigm for designing routing protocols for underwater sensor networks. It takes advantage of the broadcast nature of the wireless medium to combat packet losses and selects potential paths on the fly. Finding an appropriate forwarding candidate set is a key issue in opportunistic routing. Many existing solutions ignore the impact of the candidates' location distribution on packet forwarding. In this paper, a path-diversity-improved candidate selection strategy is applied in opportunistic routing to improve packet forwarding efficiency. It not only maximizes the packet forwarding advancement but also takes the candidates' location distribution into account. Based on this strategy, we propose two effective routing protocols: position improved candidates selection (PICS) and position random candidates selection (PRCS). PICS employs two-hop neighbor information to make routing decisions; PRCS uses only one-hop neighbor information. Simulation results show that both PICS and PRCS can significantly improve network performance compared with previous solutions in terms of packet delivery ratio, average energy consumption and end-to-end delay. PMID:29690621
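The candidate-selection idea can be sketched as follows: rank neighbors by advancement toward the sink, then keep only candidates that are mutually well separated, so the set is spatially diverse. This is a hedged simplification with invented coordinates; the actual PICS/PRCS metrics are more elaborate.

```python
import math

def select_candidates(node, sink, neighbors, k=2, min_sep=1.0):
    """Greedy sketch: rank neighbors by advancement toward the sink,
    then keep up to k whose mutual separation is at least min_sep,
    so the forwarding candidate set is spatially diverse."""
    def advancement(p):
        return math.dist(node, sink) - math.dist(p, sink)
    ranked = sorted((p for p in neighbors if advancement(p) > 0),
                    key=advancement, reverse=True)
    chosen = []
    for p in ranked:
        if all(math.dist(p, q) >= min_sep for q in chosen):
            chosen.append(p)
        if len(chosen) == k:
            break
    return chosen

picks = select_candidates((0, 0), (10, 0),
                          [(2, 0), (2, 0.2), (1, 3), (-1, 0)])  # → [(2, 0), (1, 3)]
```

Note how (2, 0.2) is skipped despite its high advancement: it is almost collocated with the best candidate, so it adds little path diversity.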
The study of disaster situation awareness based on volunteered geographic information
NASA Astrophysics Data System (ADS)
Zhao, Qiansheng; Chen, Zi; Li, Shengming; Luo, Nianxue
2015-12-01
With the development of Web 2.0, social media such as microblogs, blogs and social networks supply a wealth of location-tagged information (volunteered geographic information, VGI). Many recent cases have shown that when a disaster happens, netizens gather online very quickly and share disaster information, producing a large amount of volunteered geographic information about the disaster situation that is valuable for disaster response if used efficiently and properly. This project takes typhoon disasters as a case study. In this paper, we study the relations between Weibo messages and the real typhoon situation and propose an analysis framework for mining the relations between the distribution of Weibo messages and physical space. We found that the number of Weibo messages, keyword frequencies and the spatio-temporal distribution of the messages are strongly related to the spread of the disaster in the real world, and these results can improve disaster situation awareness in the future. The study provides a bottom-up method for typhoon disaster situation awareness based on VGI that can quickly locate disaster spots and track their evolution, which is very important for disaster response and recovery.
Simulating pad-electrodes with high-definition arrays in transcranial electric stimulation
NASA Astrophysics Data System (ADS)
Kempe, René; Huang, Yu; Parra, Lucas C.
2014-04-01
Objective. Research studies on transcranial electric stimulation, including direct current, often use a computational model to provide guidance on the placing of sponge-electrode pads. However, the expertise and computational resources needed for finite element modeling (FEM) make modeling impractical in a clinical setting. Our objective is to make the exploration of different electrode configurations accessible to practitioners. We provide an efficient tool to estimate current distributions for arbitrary pad configurations while obviating the need for complex simulation software. Approach. To efficiently estimate current distributions for arbitrary pad configurations we propose to simulate pads with an array of high-definition (HD) electrodes and use an efficient linear superposition to then quickly evaluate different electrode configurations. Main results. Numerical results on ten different pad configurations on a normal individual show that electric field intensity simulated with the sampled array deviates from the solutions with pads by only 5% and the locations of peak magnitude fields have a 94% overlap when using a dense array of 336 electrodes. Significance. Computationally intensive FEM modeling of the HD array needs to be performed only once, perhaps on a set of standard heads that can be made available to multiple users. The present results confirm that by using these models one can now quickly and accurately explore and select pad-electrode montages to match a particular clinical need.
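The superposition trick rests on the linearity of the forward problem: once a single FEM run yields the field of each HD electrode (a lead-field matrix), the field of any pad montage is just a weighted sum of columns, with weights set by how the pad current splits over the electrodes it covers. A minimal sketch with invented numbers (a real lead field comes from the one-time FEM solve):

```python
# Sketch of the linear-superposition idea behind simulating pads with
# an HD electrode array. All field values below are invented.
lead_field = {                      # electrode -> field at 3 sample points
    "e1": [0.10, 0.02, 0.01],
    "e2": [0.08, 0.05, 0.02],
    "e3": [0.01, 0.09, 0.06],
}

def pad_field(weights):
    """Superpose per-electrode solutions: `weights` maps each electrode
    to the share of the pad current it carries."""
    n = len(next(iter(lead_field.values())))
    out = [0.0] * n
    for elec, w in weights.items():
        for i, v in enumerate(lead_field[elec]):
            out[i] += w * v
    return out

# A pad covering e1 and e2, splitting its current equally:
field = pad_field({"e1": 0.5, "e2": 0.5})
```

Evaluating a new montage is then a cheap matrix-vector product rather than a fresh FEM simulation, which is what makes interactive montage exploration feasible.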
High performance organic distributed Bragg reflector lasers fabricated by dot matrix holography.
Wan, Wenqiang; Huang, Wenbin; Pu, Donglin; Qiao, Wen; Ye, Yan; Wei, Guojun; Fang, Zongbao; Zhou, Xiaohong; Chen, Linsen
2015-12-14
We report distributed Bragg reflector (DBR) polymer lasers fabricated using dot matrix holography. Pairs of distributed Bragg reflector mirrors with variable mirror separations are fabricated, and a novel energy-transfer blend consisting of a blue-emitting conjugated polymer and a red-emitting one is spin-coated onto the patterned substrate to complete the device. Under optical pumping, the device emits single-mode lasing around 622 nm with a bandwidth of 0.41 nm. The working threshold is as low as 13.5 μJ/cm² (~1.68 kW/cm²) and the measured slope efficiency reaches 5.2%. The distributed feedback (DFB) cavity and the DBR cavity resonate at the same lasing wavelength, while the DFB laser shows a much higher threshold. We further show that flexible DBR lasers can be conveniently fabricated through the UV-imprinting technique by using the patterned silica substrate as the mold. Dot matrix holography represents a versatile approach to control the number, size, location and orientation of DBR mirrors, thus providing great flexibility in designing DBR lasers.
Study of Solid State Drives performance in PROOF distributed analysis system
NASA Astrophysics Data System (ADS)
Panitkin, S. Y.; Ernst, M.; Petkus, R.; Rind, O.; Wenaus, T.
2010-04-01
Solid-state drives (SSDs) are a promising storage technology for High Energy Physics parallel analysis farms. Their combination of low random access time and relatively high read speed is very well suited to situations where multiple jobs concurrently access data located on the same drive. They also have lower energy consumption and higher vibration tolerance than hard disk drives (HDDs), which makes them an attractive choice in many applications ranging from personal laptops to large analysis farms. The Parallel ROOT Facility (PROOF) is a distributed analysis system which makes it possible to exploit the inherent event-level parallelism of high energy physics data. PROOF is especially efficient together with distributed local storage systems like Xrootd, when data are distributed over computing nodes. In such an architecture the local disk subsystem I/O performance becomes a critical factor, especially when computing nodes use multi-core CPUs. We discuss our experience with SSDs in the PROOF environment. We compare the performance of HDDs with SSDs in I/O-intensive analysis scenarios. In particular we discuss PROOF system performance scaling with the number of simultaneously running analysis jobs.
The Role of Experience in Location Estimation: Target Distributions Shift Location Memory Biases
ERIC Educational Resources Information Center
Lipinski, John; Simmering, Vanessa R.; Johnson, Jeffrey S.; Spencer, John P.
2010-01-01
Research based on the Category Adjustment model concluded that the spatial distribution of target locations does not influence location estimation responses [Huttenlocher, J., Hedges, L., Corrigan, B., & Crawford, L. E. (2004). Spatial categories and the estimation of location. "Cognition, 93", 75-97]. This conflicts with earlier results showing…
Hung, Le Xuan; Canh, Ngo Trong; Lee, Sungyoung; Lee, Young-Koo; Lee, Heejo
2008-01-01
For many sensor network applications such as military or homeland security, it is essential for users (sinks) to access the sensor network while they are moving. Sink mobility brings new challenges to secure routing in large-scale sensor networks. Previous studies on sink mobility have mainly focused on the efficiency and effectiveness of data dissemination without security consideration. Studies and experience have also shown that considering security at design time is the best way to provide security for sensor network routing. This paper presents an energy-efficient secure routing and key management scheme for mobile sinks in sensor networks, called SCODEplus. It is a significant extension of our previous study in five aspects: (1) the key management scheme and routing protocol are considered during design time to increase security and efficiency; (2) the network topology is organized in a hexagonal plane, which is more efficient than the previous square-grid topology; (3) the key management scheme can eliminate the impacts of node compromise attacks on links between non-compromised nodes; (4) sensor node deployment is based on a Gaussian distribution, which is more realistic than a uniform distribution; (5) no GPS or the like is required to provide sensor node location information. Our security analysis demonstrates that the proposed scheme can defend against common attacks in sensor networks including node compromise attacks, replay attacks, selective forwarding attacks, sinkhole and wormhole attacks, Sybil attacks, and HELLO flood attacks. Both mathematical and simulation-based performance evaluations show that SCODEplus significantly reduces communication overhead, energy consumption and packet delivery latency, while it always delivers more than 97 percent of packets successfully. PMID:27873956
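The Gaussian deployment model in point (4) can be sketched directly: nodes scatter around a drop point with normal spread rather than uniformly over the field. The field size, drop point and spread below are illustrative values, not the paper's parameters.

```python
import random

def deploy_gaussian(n, center, sigma, seed=0):
    """Scatter n sensor nodes with a 2-D Gaussian spread around a drop
    point: the deployment model argued to be more realistic than uniform
    placement, since released nodes cluster near the drop point."""
    rng = random.Random(seed)
    cx, cy = center
    return [(rng.gauss(cx, sigma), rng.gauss(cy, sigma)) for _ in range(n)]

# 1000 nodes dropped at the centre of a 100 m x 100 m field:
nodes = deploy_gaussian(1000, center=(50.0, 50.0), sigma=10.0, seed=1)
```

Under this model, node density (and hence connectivity and key-sharing probability) is highest near the drop point and thins toward the field edges, which is what the scheme's analysis has to account for.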
10 CFR 431.197 - Manufacturer's determination of efficiency for distribution transformers.
Code of Federal Regulations, 2011 CFR
2011-01-01
... distribution transformers. 431.197 Section 431.197 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY EFFICIENCY PROGRAM FOR CERTAIN COMMERCIAL AND INDUSTRIAL EQUIPMENT Distribution Transformers Compliance and Enforcement § 431.197 Manufacturer's determination of efficiency for distribution transformers. When a...
Nicholson, P; Addison, C; Cross, A M; Kennard, J; Preston, V G; Rixon, F J
1994-05-01
The intracellular distributions of three herpes simplex virus type 1 (HSV-1) capsid proteins, VP23, VP5 and VP22a, were examined using vaccinia virus and plasmid expression systems. During infection of cells with HSV-1 wild-type virus, all three proteins were predominantly located in the nucleus, which is the site of capsid assembly. However, when expressed in the absence of any other HSV-1 proteins, although VP22a was found exclusively in the nucleus as expected, VP5 and VP23 were distributed throughout the cell. Thus nuclear localization is not an intrinsic property of these proteins but must be mediated by one or more HSV-1-induced proteins. Co-expression experiments demonstrated that VP5 was efficiently transported to the nucleus in the presence of VP22a, but the distribution of VP23 was unaffected by the presence of either or both of the other two proteins.
NASA Astrophysics Data System (ADS)
Messerotti, Mauro; Otruba, Wolfgang; Hanslmeier, Arnold
2000-06-01
The Kanzelhoehe Solar Observatory is an observing facility located in Carinthia (Austria) and operated by the Institute of Geophysics, Astrophysics and Meteorology of the Karl-Franzens University Graz. A set of instruments for solar surveillance at different wavelength bands is continuously operated in automatic mode and is presently being upgraded to supply near-real-time solar activity indices for space weather applications. In this frame, we tested a low-end software/hardware architecture running on the PC platform in a non-homogeneous, remotely distributed environment that allows efficient or moderately efficient application sharing at the Intranet and Extranet (i.e., Wide Area Network) levels, respectively. Due to the geographical distribution of the participating teams (Trieste, Italy; Kanzelhoehe and Graz, Austria), we have been using these features for collaborative remote software development and testing, data analysis and calibration, and observing run emulation from multiple sites as well. In this work, we describe the architecture used and its performance based on a series of application sharing tests we carried out to ascertain its effectiveness in real collaborative remote work, observations and data exchange. The system proved to be reliable at the Intranet level for most distributed tasks, limited to less demanding ones at the Extranet level, but quite effective in remote instrument control when real-time response is not needed.
Systems and methods for optically measuring properties of hydrocarbon fuel gases
Adler-Golden, S.; Bernstein, L.S.; Bien, F.; Gersh, M.E.; Goldstein, N.
1998-10-13
A system and method for optical interrogation and measurement of a hydrocarbon fuel gas includes a light source generating light at near-visible wavelengths. A cell containing the gas is optically coupled to the light source, and the light is partially transmitted by the sample. A spectrometer disperses the transmitted light and captures an image thereof. The image is captured by a low-cost silicon-based two-dimensional CCD array. The captured spectral image is processed by electronics for determining the energy or BTU content and composition of the gas. The innovative optical approach provides a relatively inexpensive, durable, maintenance-free sensor and method which is reliable in the field and relatively simple to calibrate. In view of the above, accurate monitoring is possible at a plurality of locations along the distribution chain, leading to more efficient distribution. 14 figs.
All optical mode controllable Er-doped random fiber laser with distributed Bragg gratings.
Zhang, W L; Ma, R; Tang, C H; Rao, Y J; Zeng, X P; Yang, Z J; Wang, Z N; Gong, Y; Wang, Y S
2015-07-01
An all-optical method to control the lasing modes of Er-doped random fiber lasers (RFLs) is proposed and demonstrated. In the RFL, an Er-doped fiber (EDF) inscribed with randomly separated fiber Bragg gratings (FBGs) is used as the gain medium and randomly distributed reflectors, as well as the controllable element. By combining random feedback from the FBG array and Fresnel feedback from a cleaved fiber end, multi-mode coherent random lasing is obtained with a threshold of 14 mW and a power efficiency of 14.4%. Moreover, a laterally injected control light is used to induce local gain perturbation, providing additional gain for certain random resonance modes. As a result, active mode selection of the RFL is realized by changing the location of the laser cavity exposed to the control light.
Distributed acoustic sensing technique and its field trial in SAGD well
NASA Astrophysics Data System (ADS)
Han, Li; He, Xiangge; Pan, Yong; Liu, Fei; Yi, Duo; Hu, Chengjun; Zhang, Min; Gu, Lijuan
2017-10-01
Steam-assisted gravity drainage (SAGD) is a very promising approach for the development of heavy oil, extra-heavy oil and tight oil reservoirs. Proper monitoring of SAGD operations is essential to avoid operational issues and improve efficiency. Among the monitoring techniques, micro-seismic monitoring and the related interpretation methods can give useful information about steam chamber development and have been extensively studied. Distributed acoustic sensing (DAS) based on Rayleigh backscattering is a newly developed technique that can measure the acoustic signal at all points along the sensing fiber. In this paper, we demonstrate a DAS system based on a dual-pulse heterodyne demodulation technique and report a field trial in a SAGD well located in the Xinjiang Oilfield, China. The field trial results validated the performance of the DAS system and indicated its applicability to steam-chamber monitoring and hydraulic monitoring.
Spatial distribution of calcium-gated chloride channels in olfactory cilia.
French, Donald A; Badamdorj, Dorjsuren; Kleene, Steven J
2010-12-30
In vertebrate olfactory receptor neurons, sensory cilia transduce odor stimuli into changes in neuronal membrane potential. The voltage changes are primarily caused by the sequential openings of two types of channel: a cyclic-nucleotide-gated (CNG) cationic channel and a calcium-gated chloride channel. In frog, the cilia are 25 to 200 µm in length, so the spatial distributions of the channels may be an important determinant of odor sensitivity. To determine the spatial distribution of the chloride channels, we recorded from single cilia as calcium was allowed to diffuse down the length of the cilium and activate the channels. A computational model of this experiment allowed an estimate of the spatial distribution of the chloride channels. On average, the channels were concentrated in a narrow band centered at a distance of 29% of the ciliary length, measured from the base of the cilium. This matches the location of the CNG channels determined previously. This non-uniform distribution of transduction proteins is consistent with similar findings in other cilia. On average, the two types of olfactory transduction channel are concentrated in the same region of the cilium. This may contribute to the efficient detection of weak stimuli.
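The experiment's forward model, calcium diffusing down the cilium from its base, can be sketched with a simple explicit finite-difference scheme. Geometry, diffusivity and time step below are illustrative, not the paper's values (stability requires D*dt/dx² ≤ 0.5).

```python
# Minimal 1-D diffusion sketch (explicit Euler) of calcium entering a
# cilium at its base; in the study, the resulting chloride current as
# calcium reaches successive positions constrains the channel locations.
def diffuse(conc, D, dx, dt, steps, source=1.0):
    """Evolve a concentration profile along the cilium: calcium is
    clamped at the base (index 0) and the distal tip is sealed."""
    c = list(conc)
    for _ in range(steps):
        c[0] = source                      # clamped calcium at the base
        new = c[:]
        for i in range(1, len(c) - 1):
            new[i] = c[i] + D * dt / dx**2 * (c[i+1] - 2*c[i] + c[i-1])
        new[-1] = new[-2]                  # zero-flux (sealed) tip
        c = new
    return c

# 20 compartments, initially calcium-free:
profile = diffuse([0.0] * 20, D=1.0, dx=1.0, dt=0.4, steps=100)
```

Pairing such a profile with a candidate channel-density distribution predicts a current time course; fitting that prediction to the recorded currents is how the band near 29% of the ciliary length is estimated.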
Measurements of trap dynamics of cold OH molecules using resonance-enhanced multiphoton ionization
NASA Astrophysics Data System (ADS)
Gray, John M.; Bossert, Jason A.; Shyur, Yomay; Lewandowski, H. J.
2017-08-01
Trapping cold, chemically important molecules with electromagnetic fields is a useful technique to study small molecules and their interactions. Traps provide long interaction times, which are needed to precisely examine these low-density molecular samples. However, the trapping fields lead to nonuniform molecular density distributions in these systems. Therefore, it is important to be able to experimentally characterize the spatial density distribution in the trap. Ionizing molecules at different locations in the trap using resonance-enhanced multiphoton ionization (REMPI) and detecting the resulting ions can be used to probe the density distribution even at the low density present in these experiments because of the extremely high efficiency of detection. Until recently, one of the most chemically important molecules, OH, did not have a convenient REMPI scheme identified. Here, we use a newly developed 1 +1' REMPI scheme to detect trapped cold OH molecules. We use this capability to measure the trap dynamics of the central density of the cloud and the density distribution. These types of measurements can be used to optimize loading of molecules into traps, as well as to help characterize the energy distribution, which is critical knowledge for interpreting molecular collision experiments.
The Modulation of Crustal Magmatic Systems by Tectonic Forcing
NASA Astrophysics Data System (ADS)
Karakas, O.; Dufek, J.
2010-12-01
The amount, location and residence time of melt in the crust significantly impact crustal structure and influence the composition, frequency, and volume of eruptive products. In this study, we develop a two-dimensional model that simulates the response of the crust to prolonged mantle-derived intrusions in arc environments. The domain includes the entire crustal section and upper mantle and focuses on the evolving thermal structure due to intrusions and external tectonic forcing. Magmatic intrusion into the crust can be accommodated by extension or thickening of the crust, or some combination of both mechanisms. Additionally, external tectonic forcing can generate thicker crustal sections, while tectonic extension can significantly thin the crust. We monitor the thermal response, melt fraction and surface heat flux for different tectonic conditions and melt fluxes from the mantle. The amount of crustal melt versus fractionated primary mantle melt present in the crustal column helps determine crustal structure and growth through time. We express the amount of crustal melting in terms of an efficiency: the melting efficiency is the ratio of the melted volume of crustal material to the volume of melt expected from a strict enthalpy balance, as explained by Dufek and Bergantz (2005). Melting efficiencies are less than 1 in real systems because heat diffuses to sections of the crust that never melt. In general, thick crust and crust experiencing extended compressional regimes result in increased melting efficiency, while thin crust and crust with high extension rates have lower efficiency. In most settings, maximum efficiencies are less than 0.05-0.10. We also observe that with a geophysically estimated flux, the mantle-derived magma bodies build up isolated magma pods distributed in the crust. One aspect of this work is to monitor the location and size of these magma chambers in the crustal column. We further investigate the rheological, stress, and pre-existing-structure controls on the longevity of the individual magmatic systems.
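The efficiency definition above admits a direct numerical sketch: the enthalpy-balance melt volume is the available heat divided by the energy needed per unit volume (sensible heat to the solidus plus latent heat), and the efficiency is the realized melt volume over that bound. The property values below are generic illustrations, not the study's inputs.

```python
def enthalpy_balance_melt_volume(q_joules, rho=2700.0, cp=1000.0,
                                 dT=400.0, latent=3.2e5):
    """Crustal volume (m^3) that q_joules could melt if every joule went
    into heating rock to the solidus (cp*dT, J/kg) plus the latent heat
    of fusion; density rho in kg/m^3. Values are illustrative."""
    return q_joules / (rho * (cp * dT + latent))

def melting_efficiency(melted_volume, q_joules, **props):
    """Dufek & Bergantz (2005)-style efficiency: realized melt volume
    over the strict enthalpy-balance estimate. It is below 1 in real
    systems because heat also diffuses into crust that never melts."""
    return melted_volume / enthalpy_balance_melt_volume(q_joules, **props)

q = 1.944e18                      # joules delivered by intrusions (illustrative)
eff = melting_efficiency(5.0e7, q)   # 5e7 m^3 actually melted → 0.05
```

With these numbers the enthalpy balance permits 1e9 m³ of melt, so melting 5e7 m³ gives an efficiency of 0.05, at the top of the 0.05-0.10 range quoted above.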
NASA Astrophysics Data System (ADS)
Pengfei, ZHANG; Ling, ZHANG; Zhenwei, WU; Zong, XU; Wei, GAO; Liang, WANG; Qingquan, YANG; Jichan, XU; Jianbin, LIU; Hao, QU; Yong, LIU; Juan, HUANG; Chengrui, WU; Yumei, HOU; Zhao, JIN; J, D. ELDER; Houyang, GUO
2018-04-01
Modeling with OEDGE was carried out to assess the initial and long-term plasma contamination efficiency of Ar puffing from different divertor locations, i.e. the inner divertor, the outer divertor and the dome, in the EAST superconducting tokamak for typical ohmic plasma conditions. It was found that the initial Ar contamination efficiency is dependent on the local plasma conditions at the different gas puff locations. However, it quickly approaches a similar steady state value for Ar recycling efficiency >0.9. OEDGE modeling shows that the final equilibrium Ar contamination efficiency is significantly lower for the more closed lower divertor than that for the upper divertor.
Kim, Steve M; Ganguli, Surya; Frank, Loren M
2012-08-22
Hippocampal place cells convey spatial information through a combination of spatially selective firing and theta phase precession. The way in which this information influences regions like the subiculum that receive input from the hippocampus remains unclear. The subiculum receives direct inputs from area CA1 of the hippocampus and sends divergent output projections to many other parts of the brain, so we examined the firing patterns of rat subicular neurons. We found a substantial transformation in the subicular code for space from sparse to dense firing rate representations along a proximal-distal anatomical gradient: neurons in the proximal subiculum are more similar to canonical, sparsely firing hippocampal place cells, whereas neurons in the distal subiculum have higher firing rates and more distributed spatial firing patterns. Using information theory, we found that the more distributed spatial representation in the subiculum carries, on average, more information about spatial location and context than the sparse spatial representation in CA1. Remarkably, despite the disparate firing rate properties of subicular neurons, we found that neurons at all proximal-distal locations exhibit robust theta phase precession, with similar spiking oscillation frequencies as neurons in area CA1. Our findings suggest that the subiculum is specialized to compress sparse hippocampal spatial codes into highly informative distributed codes suitable for efficient communication to other brain regions. Moreover, despite this substantial compression, the subiculum maintains finer scale temporal properties that may allow it to participate in oscillatory phase coding and spike timing-dependent plasticity in coordination with other regions of the hippocampal circuit.
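A common single-cell measure of spatial information is the Skaggs-style information rate; the sketch below uses it to contrast a sparse, place-cell-like rate map with a distributed one. Note this per-cell estimator favors the sparse cell — the abstract's claim that distributed subicular codes carry more information is a population-level result obtained with the authors' own estimator, which is not reproduced here.

```python
import numpy as np

def spatial_information_rate(occupancy_prob, rate_map):
    """Skaggs-style spatial information rate (bits/s):
    I = sum_i p_i * r_i * log2(r_i / rbar), with rbar = sum_i p_i * r_i.
    A generic estimator; the paper's exact information measure may differ."""
    p = np.asarray(occupancy_prob, float)
    r = np.asarray(rate_map, float)
    rbar = float(np.sum(p * r))
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(r > 0.0, p * r * np.log2(r / rbar), 0.0)
    return float(np.sum(terms))

p = np.full(10, 0.1)                   # uniform occupancy over 10 spatial bins
sparse = np.array([20.0] + [0.0] * 9)  # place-cell-like: one active bin
dense = np.full(10, 2.0)               # distributed firing, same mean rate
i_sparse = spatial_information_rate(p, sparse)
i_dense = spatial_information_rate(p, dense)
```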
Parallel- and series-fed microstrip array with high efficiency and low cross polarization
NASA Technical Reports Server (NTRS)
Huang, John (Inventor)
1995-01-01
A microstrip array antenna for a vertically polarized fan beam (approximately 2 deg x 50 deg) for C-band SAR applications, with a physical area of 1.7 m by 0.17 m, comprises two rows of patch elements and employs a parallel feed to the left- and right-half sections of the rows. Each section is divided into two segments that are fed in parallel, with the elements in each segment fed in series through matched transmission lines for high efficiency. The inboard section has half the number of patch elements of the outboard section, and the outboard sections, which have a tapered distribution, use identical transmission line sections terminated with half-wavelength-long open-circuit stubs so that the remaining energy is reflected and radiated in phase. The elements of the two inboard segments of the left- and right-half sections are provided with tapered transmission lines from element to element for uniform power distribution over the central third of the entire array antenna. The two rows of array elements are excited at opposite patch feed locations with opposite (180 deg difference) phases for reduced cross-polarization.
In situ measurement on TSV-Cu deformation with hotplate system based on sheet resistance
NASA Astrophysics Data System (ADS)
Sun, Yunna; Wang, Bo; Wang, Huiying; Wu, Kaifeng; Yang, Shengyong; Wang, Yan; Ding, Guifu
2017-12-01
The in situ measurement of TSV deformation at different temperatures is meaningful for learning more about the thermal deformation behavior of 3D TSVs in microelectronic devices. An efficient, compact hotplate based on sheet resistance is designed to offer more heat, produce a uniform temperature distribution, relieve thermal stress and heat concentration issues, and reduce the required space; it was optimized by the finite element method (FEM). The fabricated hotplate is small enough (2.5 cm × 2.0 cm × 0.5 cm) to be located in the limited space available during measurement. A thermal infrared imager was employed as the temperature sensor for monitoring the temperature distribution of the TSV sample, and 3D profilometry was adopted to survey the TSV profiles. The in situ 2D top-surface profiles and 3D displacement profiles of the TSV sample at different temperatures were measured by the 3D profilometer, yielding the in situ average relative deformation and effective plastic deformation of the TSV sample. With this optical measurement method, the TSV sample can be tested repeatedly.
NASA Astrophysics Data System (ADS)
Smith, David R.; Gowda, Vinay R.; Yurduseven, Okan; Larouche, Stéphane; Lipworth, Guy; Urzhumov, Yaroslav; Reynolds, Matthew S.
2017-01-01
Wireless power transfer (WPT) has been an active topic of research, with a number of WPT schemes implemented in the near-field (coupling) and far-field (radiation) regimes. Here, we consider a beamed WPT scheme based on a dynamically reconfigurable source aperture transferring power to receiving devices within the Fresnel region. In this context, the dynamic aperture resembles a reconfigurable lens capable of focusing power to a well-defined spot, whose dimension can be related to a point spread function. The necessary amplitude and phase distribution of the field imposed over the aperture can be determined in a holographic sense, by interfering a hypothetical point source located at the receiver location with a plane wave at the aperture location. While conventional technologies, such as phased arrays, can achieve the required control over phase and amplitude, they typically do so at a high cost; alternatively, metasurface apertures can achieve dynamic focusing with potentially lower cost. We present an initial tradeoff analysis of the Fresnel region WPT concept assuming a metasurface aperture, relating the key parameters such as spot size, aperture size, wavelength, and focal distance, as well as reviewing system considerations such as the availability of sources and power transfer efficiency. We find that approximate design formulas derived from the Gaussian optics approximation provide useful estimates of system performance, including transfer efficiency and coverage volume. The accuracy of these formulas is confirmed through numerical studies.
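The relation between spot size, aperture size, wavelength and focal distance mentioned above follows a standard diffraction-style scaling, sketched below. The d ~ λF/D form and the 5.8 GHz operating frequency are generic assumptions for illustration, not necessarily the paper's exact design formula or parameters.

```python
# Generic Fresnel-region focusing estimate: focal spot diameter scales
# as wavelength * focal distance / aperture size. An assumed scaling
# law for illustration, not the paper's derived Gaussian-optics formula.

def focal_spot_diameter(wavelength_m, focal_distance_m, aperture_m):
    """Diffraction-style estimate d ~ lambda * F / D for an aperture
    focusing to a spot within its Fresnel region."""
    return wavelength_m * focal_distance_m / aperture_m

# assumed example: 5.8 GHz source (wavelength ~5.2 cm),
# 1 m aperture, receiver at a 3 m focal distance
lam = 3e8 / 5.8e9
spot = focal_spot_diameter(lam, 3.0, 1.0)
```

The estimate shows the qualitative trade-off the abstract discusses: larger apertures or shorter focal distances shrink the spot, concentrating the transferred power.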
Molecular clumps in the W51 giant molecular cloud
NASA Astrophysics Data System (ADS)
Parsons, H.; Thompson, M. A.; Clark, J. S.; Chrysostomou, A.
2012-08-01
In this paper, we present a catalogue of dense molecular clumps located within the W51 giant molecular cloud (GMC). This work is based on Heterodyne Array Receiver Programme 13CO J = 3-2 observations of the W51 GMC and uses the automated CLUMPFIND algorithm to decompose the region into a total of 1575 clumps of which 1130 are associated with the W51 GMC. We clearly see the distinct structures of the W51 complex and the high-velocity stream previously reported. We find the clumps have characteristic diameters of 1.4 pc, excitation temperatures of 12 K, densities of 5.6 × 10^21 cm^-2, surface densities of 0.02 g cm^-2 and masses of 90 M⊙. We find a total mass of dense clumps within the GMC of 1.5 × 10^5 M⊙, with only 1 per cent of the clumps detected by number and 4 per cent by mass found to be supercritical. We find a clump-forming efficiency of 14 ± 1 per cent for the W51 GMC and a supercritical clump-forming efficiency of 0.5 +2.3/-0.5 per cent. Looking at the clump mass distribution, we find it is described by a single power law with a slope of α = 2.4 +0.2/-0.1 above ~100 M⊙. By comparing locations of supercritical clumps and young clusters, we see that any future star formation is likely to be located away from the currently active W51A region.
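A power-law slope like the α = 2.4 quoted above can be estimated from a mass sample with the standard continuous maximum-likelihood estimator, α = 1 + n / Σ ln(m_i/m_min). This is a generic estimator sketched on synthetic data; the paper's actual fitting procedure may differ.

```python
import math
import random

def power_law_alpha_mle(masses, m_min):
    """Maximum-likelihood slope for dN/dM ∝ M^-alpha above m_min
    (continuous-data estimator; not necessarily the paper's fit method)."""
    tail = [m for m in masses if m >= m_min]
    return 1.0 + len(tail) / sum(math.log(m / m_min) for m in tail)

# Draw synthetic clump masses from dN/dM ∝ M^-2.4 above 100 Msun
# using inverse-transform sampling: m = m_min * u^(-1/(alpha-1)).
random.seed(0)
alpha_true = 2.4
samples = [100.0 * (1.0 - random.random()) ** (-1.0 / (alpha_true - 1.0))
           for _ in range(20000)]
alpha_hat = power_law_alpha_mle(samples, 100.0)
```

With 20 000 synthetic clumps the estimator recovers the input slope to within about a percent.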
Transformer Efficiency Assessment - Okinawa, Japan
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomas L. Baldwin; Robert J. Turk; Kurt S. Myers
2012-05-01
The US Army Engineering & Support Center, Huntsville (USAESCH), and the US Marine Corps Base (MCB), Okinawa, Japan retained Idaho National Laboratory (INL) to conduct a Transformer Efficiency Assessment of “key” transformers located at multiple military bases in Okinawa, Japan. The purpose of this assessment is to support the Marine Corps Base, Okinawa in evaluating medium voltage distribution transformers for potential efficiency upgrades. The original scope of work included the MCB providing actual transformer nameplate data, manufacturer’s factory test sheets, electrical system data (kWh), demand data (kWd), power factor data, and electricity cost data. Unfortunately, the MCB’s actual data is not available, making it necessary to de-scope the original assessment. Note: similar nameplate data, photos of similar transformer nameplates, and basic electrical details from one-line drawings (provided by MCB) are not a replacement for actual load-loss test data. It is recommended that load measurements be performed on the high and low sides of transformers to better quantify actual load losses, demand data, and power factor data. We also recommend that actual data, when available, be inserted by MCB Okinawa where assumptions have been made and the LCC analysis then updated. This report covers a generalized assessment of modern U.S. transformers in three efficiency categories: Low-Level efficiency, Medium-Level efficiency, and High-Level efficiency.
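The load-loss data the report asks for feed a standard efficiency calculation: core (no-load) loss is roughly constant while winding (load) loss scales with the square of per-unit loading. The sketch below uses illustrative nameplate-style numbers, not values from the assessment.

```python
# Distribution transformer efficiency at a given loading, from
# no-load (core) and full-load (winding) loss data. The 500 kVA
# rating and loss figures below are assumed for illustration.

def transformer_efficiency(load_kva, pf, no_load_loss_w, full_load_loss_w,
                           rated_kva):
    """Efficiency = output / (output + losses), with winding loss
    scaled by the square of the per-unit load."""
    pu = load_kva / rated_kva
    losses_w = no_load_loss_w + full_load_loss_w * pu ** 2
    output_w = load_kva * 1000.0 * pf
    return output_w / (output_w + losses_w)

# e.g. a 500 kVA unit at 60% load, 0.9 power factor,
# 700 W core loss and 5000 W full-load winding loss
eff = transformer_efficiency(300.0, 0.9, 700.0, 5000.0, 500.0)
```

This is why measured load profiles matter for an upgrade decision: the loss mix, and hence the efficiency gain of a low-loss unit, shifts with typical loading.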
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bird, Lori; Davidson, Carolyn; McLaren, Joyce
With rapid growth in energy efficiency and distributed generation, electric utilities are anticipating stagnant or decreasing electricity sales, particularly in the residential sector. Utilities are increasingly considering alternative rate structures that are designed to recover fixed costs from residential solar photovoltaic (PV) customers with low net electricity consumption. Proposed structures have included fixed charge increases, minimum bills, and, increasingly, demand rates - for net-metered customers and all customers. This study examines the electricity bill implications of various residential rate alternatives for multiple locations within the United States. For the locations analyzed, the results suggest that residential PV customers offset, on average, between 60% and 99% of their annual load. However, roughly 65% of a typical customer's electricity demand is non-coincident with PV generation, so the typical PV customer is generally highly reliant on the grid for pooling services.
Choudhury, Sharmistha Dutta; Badugu, Ramachandram; Ray, Krishanu; Lakowicz, Joseph R
2015-02-12
Metal-dielectric-metal (MDM) structures provide directional emission close to the surface normal, which offers opportunities for new design formats in fluorescence based applications. The directional emission arises due to near-field coupling of fluorophores with the optical modes present in the MDM substrate. Reflectivity simulations and dispersion diagrams provide a basic understanding of the mode profiles and the factors that affect the coupling efficiency and the spatial distribution of the coupled emission. This work reveals that the composition of the metal layers, the location of the dye in the MDM substrate and the dielectric thickness are important parameters that can be chosen to tune the color of the emission wavelength, the angle of observation, the angular divergence of the emission and the polarization of the emitted light. These features are valuable for displays and optical signage.
Multi-scale clustering of functional data with application to hydraulic gradients in wetlands
Greenwood, Mark C.; Sojda, Richard S.; Sharp, Julia L.; Peck, Rory G.; Rosenberry, Donald O.
2011-01-01
A new set of methods is developed to perform cluster analysis of functions, motivated by a data set consisting of hydraulic gradients at several locations distributed across a wetland complex. The methods build on previous work on clustering of functions, such as Tarpey and Kinateder (2003) and Hitchcock et al. (2007), but explore functions generated from an additive model decomposition (Wood, 2006) of the original time series. Our decomposition targets two aspects of the series, using an adaptive smoother for the trend and a circular spline for the diurnal variation in the series. Different measures for comparing locations are discussed, including a method for efficiently clustering time series of different lengths using a functional data approach. The complicated nature of these wetlands is highlighted by the shifting group memberships depending on which scale of variation and year of the study are considered.
Nodal failure index approach to groundwater remediation design
Lee, J.; Reeves, H.W.; Dowding, C.H.
2008-01-01
Computer simulations often are used to design and to optimize groundwater remediation systems. We present a new computationally efficient approach that calculates the reliability of remedial design at every location in a model domain with a single simulation. The estimated reliability and other model information are used to select a best remedial option for given site conditions, conceptual model, and available data. To evaluate design performance, we introduce the nodal failure index (NFI) to determine the number of nodal locations at which the probability of success is below the design requirement. The strength of the NFI approach is that selected areas of interest can be specified for analysis and the best remedial design determined for this target region. An example application of the NFI approach using a hypothetical model shows how the spatial distribution of reliability can be used for a decision support system in groundwater remediation design. © 2008 ASCE.
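The counting step behind the NFI is simple once a per-node probability of success is available; the sketch below shows only that tally on a toy reliability field (the reliability estimation itself, which the paper obtains from the groundwater model, is not reproduced).

```python
import numpy as np

def nodal_failure_index(success_prob, requirement):
    """Count of nodal locations whose estimated probability of success
    falls below the design requirement - the NFI tally described above."""
    p = np.asarray(success_prob, float)
    return int(np.sum(p < requirement))

# toy reliability field on a 4-node target region, 90% requirement
probs = np.array([0.95, 0.88, 0.99, 0.91])
nfi = nodal_failure_index(probs, 0.90)
```

Restricting `probs` to the nodes of a target region implements the paper's idea of evaluating designs only where performance matters.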
Quantitative measurement of pass-by noise radiated by vehicles running at high speeds
NASA Astrophysics Data System (ADS)
Yang, Diange; Wang, Ziteng; Li, Bing; Luo, Yugong; Lian, Xiaomin
2011-03-01
It has been a challenge to accurately locate and quantify the pass-by noise radiated by running vehicles. In our current work, a system composed of a microphone array is developed for this purpose. An acoustic-holography method for moving sound sources is designed to handle the Doppler effect effectively in the time domain. The effective sound pressure distribution is reconstructed on the surface of a running vehicle. The method achieves high calculation efficiency and is able to quantitatively measure the sound pressure at the sound source and identify the location of the main sound source. The method is also validated by simulation experiments and by measurement tests with known moving speakers. Finally, the engine noise, tire noise, exhaust noise and wind noise of a vehicle running at different speeds are successfully identified by this method.
An efficient algorithm for global periodic orbits generation near irregular-shaped asteroids
NASA Astrophysics Data System (ADS)
Shang, Haibin; Wu, Xiaoyu; Ren, Yuan; Shan, Jinjun
2017-07-01
Periodic orbits (POs) play an important role in understanding dynamical behaviors around natural celestial bodies. In this study, an efficient algorithm was presented to generate the global POs around irregular-shaped uniformly rotating asteroids. The algorithm was performed in three steps, namely global search, local refinement, and model continuation. First, a mascon model with a low number of particles and optimized mass distribution was constructed to remodel the exterior gravitational potential of the asteroid. Using this model, a multi-start differential evolution enhanced with a deflection strategy with strong global exploration and bypassing abilities was adopted. This algorithm can be regarded as a search engine to find multiple globally optimal regions in which potential POs were located. This was followed by applying a differential correction to locally refine global search solutions and generate the accurate POs in the mascon model in which an analytical Jacobian matrix was derived to improve convergence. Finally, the concept of numerical model continuation was introduced and used to convert the POs from the mascon model into a high-fidelity polyhedron model by sequentially correcting the initial states. The efficiency of the proposed algorithm was substantiated by computing the global POs around an elongated shoe-shaped asteroid 433 Eros. Various global POs with different topological structures in the configuration space were successfully located. Specifically, the proposed algorithm was generic and could be conveniently extended to explore periodic motions in other gravitational systems.
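The mascon remodeling in the first step replaces the asteroid with point masses, so the exterior potential is just a sum of point-mass terms. The particle count, masses and positions below are illustrative only, not the optimized distribution the paper constructs.

```python
import math

def mascon_potential(point, masses, positions, G=6.674e-11):
    """Exterior gravitational potential of a mascon model:
    U(r) = -G * sum_i m_i / |r - r_i|. Masses in kg, positions in m."""
    U = 0.0
    for m, p in zip(masses, positions):
        U -= G * m / math.dist(point, p)
    return U

# two-particle toy "asteroid": equal masses 1 km apart on the x-axis,
# field point 1 km above the midpoint
U = mascon_potential((0.0, 1000.0, 0.0),
                     [1e12, 1e12],
                     [(-500.0, 0.0, 0.0), (500.0, 0.0, 0.0)])
```

The appeal of the mascon form for the global search stage is exactly this cheapness: the potential and its gradient are simple sums, unlike the polyhedron model used later for refinement.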
Bulashevska, Alla; Eils, Roland
2006-06-14
The subcellular location of a protein is closely related to its function. It would be worthwhile to develop a method to predict the subcellular location of a given protein when only its amino acid sequence is known. Although many efforts have been made to predict subcellular location from sequence information only, further research is needed to improve the accuracy of prediction. A novel method called HensBC is introduced to predict protein subcellular location. HensBC is a recursive algorithm which constructs a hierarchical ensemble of classifiers. The classifiers used are Bayesian classifiers based on Markov chain models. We tested our method on six different datasets, among them a Gram-negative bacteria dataset, data for discriminating outer membrane proteins, and an apoptosis proteins dataset. We observed that our method can predict subcellular location with high accuracy. Another advantage of the proposed method is that it can improve the prediction accuracy for classes with few sequences in training and is therefore useful for datasets with an imbalanced distribution of classes. This study introduces an algorithm which uses only the primary sequence of a protein to predict its subcellular location. The proposed recursive scheme represents an interesting methodology for learning and combining classifiers. The method is computationally efficient and, as empirical results indicate, competitive with previously reported approaches in terms of prediction accuracy. The code for the software is available upon request.
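A single base classifier of the kind described — a Markov chain model per class, used Bayesianly by picking the class with the highest sequence likelihood — can be sketched as below. The four-letter alphabet and the `train_markov`/`classify` helpers are illustrative stand-ins; HensBC's hierarchical ensemble and exact smoothing are not reproduced.

```python
import math
from collections import Counter

def train_markov(sequences, alphabet):
    """First-order Markov chain: transition log-probabilities
    with add-one (Laplace) smoothing."""
    counts = Counter()
    for seq in sequences:
        counts.update(zip(seq, seq[1:]))
    logp = {}
    for a in alphabet:
        total = sum(counts[(a, b)] for b in alphabet) + len(alphabet)
        for b in alphabet:
            logp[(a, b)] = math.log((counts[(a, b)] + 1) / total)
    return logp

def classify(seq, models):
    """Assign the class whose chain gives the highest log-likelihood
    (uniform class priors assumed for simplicity)."""
    def loglik(logp):
        return sum(logp[pair] for pair in zip(seq, seq[1:]))
    return max(models, key=lambda c: loglik(models[c]))

alphabet = "ACDE"  # toy alphabet standing in for the 20 amino acids
models = {
    "membrane": train_markov(["AAAACCAAAA", "AACCAAAACA"], alphabet),
    "cytoplasm": train_markov(["DEDEDEDEDE", "EDDEEDEDDE"], alphabet),
}
pred = classify("AACAACCAAA", models)
```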
NASA Astrophysics Data System (ADS)
Lamdjaya, T.; Jobiliong, E.
2017-01-01
PT Anugrah Citra Boga is a food processing company that produces meatballs as its main product. The distribution system for the products must be considered, because it needs to be more efficient in order to reduce shipment cost. The purpose of this research is to optimize distribution time by simulating the distribution channels with the capacitated vehicle routing problem method. First, the distribution route is observed in order to calculate the average speed, time capacity and shipping costs. The model is then built using the AIMMS software; the inputs required to simulate the model are customer locations, distances, and process times. Finally, the total distribution cost obtained by the simulation is compared with the historical data. We conclude that the company can reduce its shipping cost by around 4.1%, or Rp 529,800 per month. By using this model, the utilization rate becomes more optimal: the value for the first vehicle is currently 104.6% and after the simulation it becomes 88.6%, while the utilization rate of the second vehicle increases from 59.8% to 74.1%. The simulation model is able to produce the optimal shipping route under time restrictions, vehicle capacity, and the number of vehicles available.
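A minimal way to see the capacitated-routing idea is a nearest-neighbour construction heuristic: extend each route to the closest unvisited customer that still fits the vehicle, then return to the depot. This is a simple stand-in for illustration, not the exact optimization the study ran in AIMMS; the instance data are invented.

```python
def greedy_routes(depot, customers, demand, capacity, dist):
    """Capacity-constrained nearest-neighbour heuristic for the CVRP:
    repeatedly extend a route to the closest unvisited customer that
    still fits, starting a new route when none fits."""
    unvisited = set(customers)
    routes = []
    while unvisited:
        route, load, here = [], 0.0, depot
        while True:
            feasible = [c for c in unvisited if load + demand[c] <= capacity]
            if not feasible:
                break
            nxt = min(feasible, key=lambda c: dist(here, c))
            route.append(nxt)
            load += demand[nxt]
            unvisited.discard(nxt)
            here = nxt
        routes.append(route)
    return routes

# toy instance on a line: depot at 0, three customers, capacity 8
coords = {"d": 0, "a": 1, "b": 2, "c": 8}
demand = {"a": 4, "b": 4, "c": 4}
routes = greedy_routes("d", ["a", "b", "c"], demand, capacity=8,
                       dist=lambda u, v: abs(coords[u] - coords[v]))
```

Here the vehicle serves the two nearby customers on one trip and the far customer on a second, which is the kind of capacity/route trade-off the study's model resolves exactly.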
A wavelet-based statistical analysis of FMRI data: I. motivation and data distribution modeling.
Dinov, Ivo D; Boscardin, John W; Mega, Michael S; Sowell, Elizabeth L; Toga, Arthur W
2005-01-01
We propose a new method for statistical analysis of functional magnetic resonance imaging (fMRI) data. The discrete wavelet transformation is employed as a tool for efficient and robust signal representation. We use structural magnetic resonance imaging (MRI) and fMRI to empirically estimate the distribution of the wavelet coefficients of the data both across individuals and spatial locations. An anatomical subvolume probabilistic atlas is used to tessellate the structural and functional signals into smaller regions each of which is processed separately. A frequency-adaptive wavelet shrinkage scheme is employed to obtain essentially optimal estimations of the signals in the wavelet space. The empirical distributions of the signals on all the regions are computed in a compressed wavelet space. These are modeled by heavy-tail distributions because their histograms exhibit slower tail decay than the Gaussian. We discovered that the Cauchy, Bessel K Forms, and Pareto distributions provide the most accurate asymptotic models for the distribution of the wavelet coefficients of the data. Finally, we propose a new model for statistical analysis of functional MRI data using this atlas-based wavelet space representation. In the second part of our investigation, we will apply this technique to analyze a large fMRI dataset involving repeated presentation of sensory-motor response stimuli in young, elderly, and demented subjects.
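Wavelet shrinkage as used above can be illustrated on a 1-D toy signal with a single-level Haar transform and soft thresholding. This is a minimal stand-in: the paper applies frequency-adaptive shrinkage to fMRI volumes, and the threshold here is a fixed assumed value.

```python
import numpy as np

def haar_step(x):
    """One level of the orthonormal Haar wavelet transform."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation (smooth part)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail coefficients
    return a, d

def inverse_haar_step(a, d):
    x = np.empty(a.size * 2)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def soft_threshold(c, t):
    """Wavelet shrinkage: pull coefficients toward zero by t."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

rng = np.random.default_rng(1)
signal = np.repeat([0.0, 4.0], 8)            # toy piecewise-constant signal
noisy = signal + 0.5 * rng.standard_normal(16)
a, d = haar_step(noisy)
denoised = inverse_haar_step(a, soft_threshold(d, 0.5))
err_noisy = float(np.mean((noisy - signal) ** 2))
err_denoised = float(np.mean((denoised - signal) ** 2))
```

Because the transform is orthonormal and the toy signal's true detail coefficients are zero, shrinking the details can only reduce the reconstruction error here.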
Models for disaster relief shelter location and supply routing.
DOT National Transportation Integrated Search
2013-01-01
This project focuses on the development of a natural disaster response planning model that determines where to locate points of distribution for relief supplies after a disaster occurs. Advance planning (selecting locations for points of distribution...
Jeon, Hyeonjae; Park, Kwangjin; Hwang, Dae-Joon; Choo, Hyunseung
2009-01-01
Sensor nodes transmit sensed information to the sink through wireless sensor networks (WSNs). They have limited power, computational capacity and memory. Portable wireless devices are increasing in popularity, so mechanisms that allow information to be efficiently obtained through mobile WSNs are of significant interest. However, a mobile sink introduces many challenges to data dissemination in large WSNs. For example, it is important to efficiently identify the locations of mobile sinks and disseminate information from multi-source nodes to multiple mobile sinks. In particular, a stationary dissemination path may no longer be effective in mobile sink applications, due to sink mobility. In this paper, we propose a Sink-oriented Dynamic Location Service (SDLS) approach to handle sink mobility. In SDLS, we propose an Eight-Direction Anchor (EDA) system that acts as a location service server. EDA prevents intensive energy consumption at the border sensor nodes and thus provides energy balancing across all the sensor nodes. We then propose a Location-based Shortest Relay (LSR) scheme that efficiently forwards (or relays) data from a source node to a sink along a minimal-delay path. Our results demonstrate that SDLS not only provides an efficient and scalable location service, but also reduces the average data communication overhead in scenarios with multiple moving sinks and sources.
Stochastic rainfall synthesis for urban applications using different regionalization methods
NASA Astrophysics Data System (ADS)
Callau Poduje, A. C.; Leimbach, S.; Haberlandt, U.
2017-12-01
The proper design and efficient operation of urban drainage systems require long, continuous rainfall series at a high temporal resolution. Unfortunately, such time series are usually available at only a few locations, and it is therefore useful to develop a stochastic precipitation model to generate rainfall at locations without observations. The model presented is based on an alternating renewal process and involves an external and an internal structure. The members of these structures are described by probability distributions which are site specific. Different regionalization methods based on site descriptors are presented, which are used for estimating the distributions at locations without observations. Regional frequency analysis, multiple linear regression and a vine-copula method are applied for this purpose. An area located in the north-west of Germany, comprising a total of 81 stations with 5 min rainfall records, is used to compare the different methods. The site descriptors include information available for the whole region: position, topography and hydrometeorological characteristics estimated from long-term observations. The methods are compared directly by cross validation of different rainfall statistics. Given that the model is stochastic, the evaluation is performed on ensembles of many long synthetic time series which are compared with observed ones. The performance is also indirectly evaluated by setting up a fictional urban hydrological system to test the capability of the different methods regarding flooding and overflow characteristics. The results show a good representation of the seasonal variability and good performance in reproducing the sample statistics of the rainfall characteristics. The copula-based method proves to be the most robust of the three. Advantages and disadvantages of the different methods are presented and discussed.
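The alternating-renewal idea can be sketched as drawing dry-spell and wet-spell durations in turn, with a rainfall depth attached to each wet spell. Exponential distributions and the mean values below are assumptions for illustration; the model in the paper uses site-specific distributions fitted per station.

```python
import random

def synthetic_rainfall(n_events, seed=0):
    """Alternating renewal process sketch: dry and wet spells alternate,
    each drawn from its own distribution (exponential here for
    simplicity), with a depth attached to every wet spell."""
    rng = random.Random(seed)
    series = []  # (dry_hours, wet_hours, depth_mm) per event
    for _ in range(n_events):
        dry = rng.expovariate(1 / 30.0)         # assumed mean 30 h dry
        wet = rng.expovariate(1 / 4.0)          # assumed mean 4 h wet
        depth = wet * rng.expovariate(1 / 2.0)  # assumed mean 2 mm/h
        series.append((dry, wet, depth))
    return series

events = synthetic_rainfall(1000)
mean_wet = sum(e[1] for e in events) / len(events)
```

Regionalization then amounts to predicting the distribution parameters (here, the three means) at ungauged sites from the site descriptors.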
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-29
... EERE-2010-BT-STD-0048] RIN 1904-AC04 Energy Efficiency Standards for Distribution Transformers; Notice...-type distribution transformers. The purpose of the subcommittee will be to discuss and, if possible, reach consensus on a proposed rule for the energy efficiency of distribution transformers, as authorized...
NASA Astrophysics Data System (ADS)
Eltahir, E. A. B.; IM, E. S.
2014-12-01
This study investigates the impact of potential large-scale (about 400,000 km2) and medium-scale (about 60,000 km2) irrigation on the climate of West Africa using the MIT Regional Climate Model. A new irrigation module is implemented to assess the impact of location and scheduling of irrigation on rainfall distribution over West Africa. A control simulation (without irrigation) and various sensitivity experiments (with irrigation) are performed and compared to discern the effects of irrigation location, size and scheduling. In general, the irrigation-induced surface cooling due to anomalously wet soil tends to suppress moist convection and rainfall, which in turn induces local subsidence and low level anti-cyclonic circulation. These local effects are dominated by a consistent reduction of local rainfall over the irrigated land, irrespective of its location. However, the remote response of rainfall distribution to irrigation exhibits a significant sensitivity to the latitudinal position of irrigation. The low-level northeasterly flow associated with anti-cyclonic circulation centered over the irrigation area can enhance the extent of low level convergence through interaction with the prevailing monsoon flow, leading to significant increase in rainfall. Despite much reduced forcing of irrigation water, the medium-scale irrigation seems to draw the same response as large-scale irrigation, which supports the robustness of the response to irrigation in our modeling system. Both large-scale and medium-scale irrigation experiments show that an optimal irrigation location and scheduling exists that would lead to a more efficient use of irrigation water. The approach of using a regional climate model to investigate the impact of location and size of irrigation schemes may be the first step in incorporating land-atmosphere interactions in the design of location and size of irrigation projects. 
However, this theoretical approach is still in early stages of development, and further research is needed before any practical application in water resources planning. Acknowledgements: This research was supported by the National Research Foundation Singapore through the Singapore MIT Alliance for Research and Technology's Center for Environmental Sensing and Modeling interdisciplinary research program.
Realistic simulations of a cyclotron spiral inflector within a particle-in-cell framework
NASA Astrophysics Data System (ADS)
Winklehner, Daniel; Adelmann, Andreas; Gsell, Achim; Kaman, Tulin; Campo, Daniela
2017-12-01
We present an upgrade to the particle-in-cell ion beam simulation code opal that enables us to run highly realistic simulations of the spiral inflector system of a compact cyclotron. This upgrade includes a new geometry class and field solver that can handle the complicated boundary conditions posed by the electrode system in the central region of the cyclotron, both in terms of particle termination and in the calculation of self-fields. Results are benchmarked against the analytical solution of a coasting beam. As a practical example, the spiral inflector and the first revolution in a 1 MeV/amu test cyclotron, located at Best Cyclotron Systems, Inc., are modeled and compared to the simulation results. We find that opal can now handle arbitrary boundary geometries with relative ease. Simulated injection efficiencies and beam shape compare well with measured efficiencies and a preliminary measurement of the beam distribution after injection.
Is a matrix exponential specification suitable for the modeling of spatial correlation structures?
Strauß, Magdalena E.; Mezzetti, Maura; Leorato, Samantha
2018-01-01
This paper investigates the adequacy of the matrix exponential spatial specifications (MESS) as an alternative to the widely used spatial autoregressive models (SAR). To provide as complete a picture as possible, we extend the analysis to all the main spatial models governed by matrix exponentials comparing them with their spatial autoregressive counterparts. We propose a new implementation of Bayesian parameter estimation for the MESS model with vague prior distributions, which is shown to be precise and computationally efficient. Our implementations also account for spatially lagged regressors. We further allow for location-specific heterogeneity, which we model by including spatial splines. We conclude by comparing the performances of the different model specifications in applications to a real data set and by running simulations. Both the applications and the simulations suggest that the spatial splines are a flexible and efficient way to account for spatial heterogeneities governed by unknown mechanisms. PMID:29492375
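The MESS idea replaces the SAR filter (I - ρW) with a matrix exponential, e^{αW} y = Xβ + ε, so the spatial transformation is always invertible (the inverse is e^{-αW}). The sketch below demonstrates this on an invented 4-node weight matrix with a noise-free response; the truncated-series exponential stands in for a library routine, and none of the numbers come from the paper.

```python
import numpy as np

def mat_exp(A, terms=40):
    """Truncated Taylor series e^A = sum_k A^k / k!; adequate for the
    small, well-scaled matrices used here."""
    out = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

# illustrative 4-node chain contiguity matrix, row-standardized
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], float)
W = W / W.sum(axis=1, keepdims=True)

alpha, beta = -0.5, 2.0
X = np.arange(1.0, 5.0).reshape(-1, 1)
y = mat_exp(-alpha * W) @ (X * beta)   # noise-free MESS response

# estimation: apply e^{alpha W} to undo the spatial filter, then regress
y_star = mat_exp(alpha * W) @ y
beta_hat = np.linalg.lstsq(X, y_star, rcond=None)[0].item()
```

Because e^{αW} e^{-αW} = I exactly, the regression recovers β without the determinant terms a SAR likelihood would need, which is one computational attraction of the MESS form.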
Distributed Turboelectric Propulsion for Hybrid Wing Body Aircraft
NASA Technical Reports Server (NTRS)
Kim, Hyun Dae; Brown, Gerald V.; Felder, James L.
2008-01-01
Meeting future goals for aircraft and air traffic system performance will require new airframes with more highly integrated propulsion. Previous studies have evaluated hybrid wing body (HWB) configurations with various numbers of engines and with increasing degrees of propulsion-airframe integration. A recently published configuration with 12 small engines partially embedded in a HWB aircraft, reviewed herein, serves as the airframe baseline for the new concept aircraft that is the subject of this paper. To achieve high cruise efficiency, a high lift-to-drag ratio HWB was adopted as the baseline airframe along with boundary layer ingestion inlets and distributed thrust nozzles to fill in the wakes generated by the vehicle. The distributed powered-lift propulsion concept for the baseline vehicle used a simple, high-lift-capable internally blown flap or jet flap system with a number of small high bypass ratio turbofan engines in the airframe. In that concept, the engine flow path from the inlet to the nozzle is direct and does not involve complicated internal ducts through the airframe to redistribute the engine flow. In addition, partially embedded engines, distributed along the upper surface of the HWB airframe, provide noise reduction through airframe shielding and promote jet flow mixing with the ambient airflow. To improve performance and to reduce noise and environmental impact even further, a drastic change in the propulsion system is proposed in this paper. The new concept adopts the previous baseline cruise-efficient short take-off and landing (CESTOL) airframe but employs a number of superconducting motors to drive the distributed fans rather than using many small conventional engines. The power to drive these electric fans is generated by two remotely located gas-turbine-driven superconducting generators. 
This arrangement allows many small partially embedded fans while retaining the superior efficiency of large core engines, which are physically separated but connected through electric power lines to the fans. This paper presents a brief description of the earlier CESTOL vehicle concept and the newly proposed electrically driven fan concept vehicle, using the previous CESTOL vehicle as a baseline.
Frequency distribution of lithium in leaves of Lycium andersonii
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romney, E.M.; Wallace, A.; Kinnear, J.
1977-01-01
Lycium andersonii A. Gray is an accumulator of Li. Assays were made of 200 samples of it collected from six different locations within the Northern Mojave Desert. Mean concentrations of Li varied from location to location; they tended not to follow a log_e normal distribution, and followed a normal distribution only poorly. There was some negative skewness in the log_e distribution that did exist. The results imply that the variation in accumulation of Li depends upon the native supply of Li. Possibly the Li supply and the ability of L. andersonii plants to accumulate it are both log_e normally distributed. The mean leaf concentration of Li across all locations was 29 μg/g, but the maximum was 166 μg/g.
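The log_e normality question above can be checked with a simple skewness comparison: if concentrations are log-normally distributed, the raw data are strongly right-skewed while the log-transformed data are nearly symmetric. A sketch with simulated (hypothetical) leaf-Li values, not the study's data:

```python
import math
import random

random.seed(42)

# Simulated leaf-Li concentrations (ug/g): log-normal with median ~25 ug/g,
# mimicking the accumulation pattern described above (values hypothetical).
sample = [random.lognormvariate(math.log(25), 0.6) for _ in range(200)]

def skewness(xs):
    """Sample skewness: third standardized moment."""
    n = len(xs)
    m = sum(xs) / n
    s = math.sqrt(sum((x - m) ** 2 for x in xs) / n)
    return sum(((x - m) / s) ** 3 for x in xs) / n

skew_raw = skewness(sample)                          # strongly right-skewed
skew_log = skewness([math.log(x) for x in sample])   # near zero if log-normal
```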
Sciarretta, Andrea; Tabilio, Maria Rosaria; Lampazzi, Elena; Ceccaroli, Claudio; Colacci, Marco; Trematerra, Pasquale
2018-01-01
The Mediterranean fruit fly (medfly), Ceratitis capitata (Wiedemann), is a key pest of fruit crops in many tropical, subtropical and mild temperate areas worldwide. The economic importance of this fruit fly is increasing due to its invasion of new geographical areas. Efficient control and eradication efforts require adequate information regarding C. capitata adults in relation to environmental and physiological cues. This would allow effective characterisation of the spatio-temporal dynamics of C. capitata populations at both the orchard level and the area-wide landscape scale. The aim of this study was to analyse population patterns of adult medflies caught using two trapping systems in a peach orchard located in central Italy. They were differentiated by adult sex (males or females) and mating status of females (unmated or mated females) to determine the spatio-temporal dynamics and evaluate the effect of cultivar and chemical treatments on trap catches. Female mating status was assessed by spermathecal dissection and a blind test was carried out to evaluate the reliability of the technique. Geostatistical methods, variogram and kriging, were used to produce distributional maps. Results showed a strong correlation between the distribution of males and unmated females, whereas males versus mated females and unmated females versus mated females showed a lower correlation. Both cultivar and chemical treatments had significant effects on trap catches, showing associations with sex and female mating status. Medfly adults showed aggregated distributions in the experimental field, but hot spot locations varied. The spatial pattern of unmated females reflected that of males, whereas mated females were largely distributed around ripening or ripe fruit. The results give relevant insights into pest management.
Mated females may be distributed differently to unmated females and the identification of male hot spots through monitoring would allow localisation of virgin female populations. Based on our results, a more precise IPM strategy, coupled with effective sanitation practices, could represent a more effective approach to medfly control.
NASA Astrophysics Data System (ADS)
Rajaona, Harizo; Septier, François; Armand, Patrick; Delignon, Yves; Olry, Christophe; Albergel, Armand; Moussafir, Jacques
2015-12-01
In the eventuality of an accidental or intentional atmospheric release, the reconstruction of the source term using measurements from a set of sensors is an important and challenging inverse problem. A rapid and accurate estimation of the source allows faster and more efficient action for first-response teams, in addition to providing better damage assessment. This paper presents a Bayesian probabilistic approach to estimate the location and the temporal emission profile of a pointwise source. The release rate is evaluated analytically by using a Gaussian assumption on its prior distribution, and is enhanced with a positivity constraint to improve the estimation. The source location is obtained by means of an advanced iterative Monte-Carlo technique called Adaptive Multiple Importance Sampling (AMIS), which uses a recycling process at each iteration to accelerate its convergence. The proposed methodology is tested using synthetic and real concentration data in the framework of the Fusion Field Trials 2007 (FFT-07) experiment. The quality of the obtained results is comparable to those coming from the Markov Chain Monte Carlo (MCMC) algorithm, a popular Bayesian method used for source estimation. Moreover, the adaptive processing of the AMIS provides a better sampling efficiency by reusing all the generated samples.
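The AMIS recycling idea can be sketched on a toy 1-D "source location" posterior: at every iteration all past samples are reweighted under the mixture of every proposal used so far, and the proposal is re-adapted to the weighted moments. Everything here (the Gaussian target, counts, seeds) is illustrative, not the paper's dispersion model:

```python
import math
import random

random.seed(0)

# Toy 1-D "source location" posterior: Gaussian centered at the true source.
TRUE_X = 3.0
def log_target(x):
    return -0.5 * ((x - TRUE_X) / 0.7) ** 2

def normal_pdf(x, mu, sig):
    return math.exp(-0.5 * ((x - mu) / sig) ** 2) / (sig * math.sqrt(2 * math.pi))

mus, sigs = [0.0], [3.0]      # proposal history, adapted each iteration
xs = []
for it in range(5):
    mu, sig = mus[-1], sigs[-1]
    xs += [random.gauss(mu, sig) for _ in range(400)]
    # AMIS-style recycling: reweight ALL past samples under the mixture
    # of every proposal used so far (deterministic mixture weights).
    mix = lambda x: sum(normal_pdf(x, m, s) for m, s in zip(mus, sigs)) / len(mus)
    ws = [math.exp(log_target(x)) / mix(x) for x in xs]
    tot = sum(ws)
    mean = sum(w * x for w, x in zip(ws, xs)) / tot
    var = sum(w * (x - mean) ** 2 for w, x in zip(ws, xs)) / tot
    mus.append(mean)
    sigs.append(max(math.sqrt(var), 0.1))  # guard against proposal collapse

est_x = mus[-1]   # converges toward TRUE_X as samples accumulate
```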
Distributed fiber optic moisture intrusion sensing system
Weiss, Jonathan D.
2003-06-24
Method and system for monitoring and identifying moisture intrusion in soil such as is contained in landfills housing radioactive and/or hazardous waste. The invention utilizes the principle that moist or wet soil has a higher thermal conductance than dry soil. The invention employs optical time delay reflectometry in connection with a distributed temperature sensing system together with heating means in order to identify discrete areas within a volume of soil wherein temperature is lower. According to the invention an optical element and, optionally, a heating element may be included in a cable or other similar structure and arranged in a serpentine fashion within a volume of soil to achieve efficient temperature detection across a large area or three dimensional volume of soil. Remediation, moisture countermeasures, or other responsive action may then be coordinated based on the assumption that cooler regions within a soil volume may signal moisture intrusion where those regions are located.
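The detection principle above, that moist soil conducts heat away faster, so after heating the fiber segments over wet soil read cooler, reduces to flagging anomalously cool segments along the distributed sensor. A toy sketch with illustrative temperatures and threshold:

```python
# Per-segment temperatures (deg C) along the fiber after a heating pulse;
# values are illustrative.  Segments over moist soil stay cooler because
# the wet soil's higher thermal conductance carries the heat away.
temps = [41.2, 41.0, 40.8, 36.5, 36.9, 41.1, 40.9, 37.2, 41.3]

def flag_moist_segments(readings, drop=2.0):
    """Flag segment indices more than `drop` degrees below the median
    reading (a simple stand-in for the system's anomaly criterion)."""
    ordered = sorted(readings)
    median = ordered[len(ordered) // 2]
    return [i for i, t in enumerate(readings) if median - t > drop]

suspect = flag_moist_segments(temps)   # candidate moisture-intrusion locations
```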
Multiple jet study data correlations. [data correlation for jet mixing flow of air jets
NASA Technical Reports Server (NTRS)
Walker, R. E.; Eberhardt, R. G.
1975-01-01
Correlations are presented which allow determination of penetration and mixing of multiple cold air jets injected normal to a ducted subsonic heated primary air stream. Correlations were obtained over jet-to-primary stream momentum flux ratios of 6 to 60 for locations from 1 to 30 jet diameters downstream of the injection plane. The range of geometric and operating variables makes the correlations relevant to gas turbine combustors. Correlations were obtained for the mixing efficiency between jets and primary stream using an energy exchange parameter. Also jet centerplane velocity and temperature trajectories were correlated and centerplane dimensionless temperature distributions defined. An assumption of a Gaussian vertical temperature distribution at all stations is shown to result in a reasonable temperature field model. Data are presented which allow comparison of predicted and measured values over the range of conditions specified above.
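The Gaussian vertical temperature assumption mentioned above can be written as a one-line profile model: the dimensionless temperature excess is 1 at the jet centerplane trajectory and falls to 1/2 one half-width away. Parameter names here are illustrative, not the report's notation:

```python
import math

def theta_profile(y, y_c, half_width):
    """Dimensionless temperature excess assuming a Gaussian vertical
    profile centered on the jet centerplane trajectory y_c.
    half_width is the distance at which theta falls to 1/2."""
    sigma = half_width / math.sqrt(2 * math.log(2))
    return math.exp(-0.5 * ((y - y_c) / sigma) ** 2)

peak = theta_profile(0.3, 0.3, 0.1)   # on the centerplane trajectory
half = theta_profile(0.4, 0.3, 0.1)   # one half-width off the centerline
```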
Scalable parallel distance field construction for large-scale applications
Yu, Hongfeng; Xie, Jinrong; Ma, Kwan-Liu; ...
2015-10-01
Computing distance fields is fundamental to many scientific and engineering applications. Distance fields can be used to direct analysis and reduce data. In this paper, we present a highly scalable method for computing 3D distance fields on massively parallel distributed-memory machines. A new distributed spatial data structure, named parallel distance tree, is introduced to manage the level sets of data and facilitate surface tracking over time, resulting in significantly reduced computation and communication costs for calculating the distance to the surface of interest from any spatial locations. Our method supports several data types and distance metrics from real-world applications. We demonstrate its efficiency and scalability on state-of-the-art supercomputers using both large-scale volume datasets and surface models. We also demonstrate in-situ distance field computation on dynamic turbulent flame surfaces for a petascale combustion simulation. In conclusion, our work greatly extends the usability of distance fields for demanding applications.
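As a serial baseline for the computation the paper parallelizes, a distance field can be built with a breadth-first sweep outward from the surface cells. This sketch uses a 4-neighbor grid metric; the paper's parallel distance tree distributes the same computation across memory partitions:

```python
from collections import deque

def distance_field(grid):
    """Breadth-first (4-neighbor) distance field to the set of surface
    cells marked 1.  Serial baseline only; not the paper's parallel
    distance tree."""
    h, w = len(grid), len(grid[0])
    INF = float("inf")
    dist = [[INF] * w for _ in range(h)]
    q = deque()
    # Seed the frontier with all surface cells at distance 0.
    for i in range(h):
        for j in range(w):
            if grid[i][j] == 1:
                dist[i][j] = 0
                q.append((i, j))
    # Propagate distances outward, one ring per BFS layer.
    while q:
        i, j = q.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w and dist[ni][nj] == INF:
                dist[ni][nj] = dist[i][j] + 1
                q.append((ni, nj))
    return dist

d = distance_field([[0, 0, 0],
                    [0, 1, 0],
                    [0, 0, 0]])
```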
Anisotropic Solar Wind Sputtering of the Lunar Surface Induced by Crustal Magnetic Anomalies
NASA Technical Reports Server (NTRS)
Poppe, A. R.; Sarantos, M.; Halekas, J. S.; Delory, G. T.; Saito, Y.; Nishino, M.
2014-01-01
The lunar exosphere is generated by several processes each of which generates neutral distributions with different spatial and temporal variability. Solar wind sputtering of the lunar surface is a major process for many regolith-derived species and typically generates neutral distributions with a cosine dependence on solar zenith angle. Complicating this picture are remanent crustal magnetic anomalies on the lunar surface, which decelerate and partially reflect the solar wind before it strikes the surface. We use Kaguya maps of solar wind reflection efficiencies, Lunar Prospector maps of crustal field strengths, and published neutral sputtering yields to calculate anisotropic solar wind sputtering maps. We feed these maps to a Monte Carlo neutral exospheric model to explore three-dimensional exospheric anisotropies and find that significant anisotropies should be present in the neutral exosphere depending on selenographic location and solar wind conditions. Better understanding of solar wind/crustal anomaly interactions could potentially improve our results.
The galaxy NGC 1566 - Distribution and kinematics of the ionized gas
NASA Astrophysics Data System (ADS)
Comte, G.; Duquennoy, A.
1982-10-01
H-alpha narrowband observations are the basis of a study of ionized hydrogen in the large spiral galaxy NGC 1566 which has yielded a catalog of 418 H II regions covering the main body of the galaxy, supplemented by 59 positions and estimated H-alpha luminosities for regions located in the pseudo-outer ring where no H-alpha plate is available. A discussion of luminosity function, diameter distribution and spiral structure notes evidence for a double two-armed spiral pattern. The plane of the galaxy appears warped, and the efficiency of the two different spiral patterns in star formation is different. A preliminary radial velocity field is determined from three interferograms in H-alpha light, and is found to be acceptably fitted by a simple bulge-plus-disk dynamical model in which the apparent disk mass-to-light ratio sharply increases from center to edge.
Scalable Parallel Distance Field Construction for Large-Scale Applications.
Yu, Hongfeng; Xie, Jinrong; Ma, Kwan-Liu; Kolla, Hemanth; Chen, Jacqueline H
2015-10-01
Computing distance fields is fundamental to many scientific and engineering applications. Distance fields can be used to direct analysis and reduce data. In this paper, we present a highly scalable method for computing 3D distance fields on massively parallel distributed-memory machines. A new distributed spatial data structure, named parallel distance tree, is introduced to manage the level sets of data and facilitate surface tracking over time, resulting in significantly reduced computation and communication costs for calculating the distance to the surface of interest from any spatial locations. Our method supports several data types and distance metrics from real-world applications. We demonstrate its efficiency and scalability on state-of-the-art supercomputers using both large-scale volume datasets and surface models. We also demonstrate in-situ distance field computation on dynamic turbulent flame surfaces for a petascale combustion simulation. Our work greatly extends the usability of distance fields for demanding applications.
Efficiency in bus stop location and design.
DOT National Transportation Integrated Search
1980-01-01
The research reported here identified those elements associated with the location and design of bus stops that affect the efficiency of transit and traffic operations, and developed guidelines to assist transportation engineers and planners in techni...
Sturman, Andrew; Titov, Mikhail; Zawar-Reza, Peyman
2011-01-15
Installation of temporary or long term monitoring sites is expensive, so it is important to rationally identify potential locations that will achieve the requirements of regional air quality management strategies. A simple, but effective, numerical approach to selecting ambient particulate matter (PM) monitoring site locations has therefore been developed using the MM5-CAMx4 air pollution dispersion modelling system. A new method, 'site efficiency,' was developed to assess the ability of any monitoring site to provide peak ambient air pollution concentrations that are representative of the urban area. 'Site efficiency' varies from 0 to 100%, with the latter representing the most representative site location for monitoring peak PM concentrations. Four heavy pollution episodes in Christchurch (New Zealand) during winter 2005, representing 4 different aerosol dispersion patterns, were used to develop and test this site assessment technique. Evaluation of the efficiency of monitoring sites was undertaken for night and morning aerosol peaks for 4 different PM spatial patterns. The results demonstrate that the existing long term monitoring site at Coles Place is quite well located, with a site efficiency value of 57.8%. A temporary ambient PM monitoring site (operating during winter 2006) showed a lower ability to capture night and morning peak aerosol concentrations. Evaluation of multiple site locations used during an extensive field campaign in Christchurch (New Zealand) in 2000 indicated that the maximum efficiency achieved by any site in the city would be 60-65%, while the efficiency of a virtual background site is calculated to be about 7%. This method of assessing the appropriateness of any potential monitoring site can be used to optimize monitoring site locations for any air pollution measurement programme. Copyright © 2010 Elsevier B.V. All rights reserved.
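The published 'site efficiency' formula is not reproduced in the abstract; the sketch below uses one plausible reading, the candidate site's modelled concentration as a fraction of the domain peak, averaged over episodes and expressed in percent. The episode fields and site names are hypothetical:

```python
# Hypothetical modelled peak-PM fields for 4 episodes: concentration by
# candidate site.  NOT the study's data or its exact metric.
episodes = [
    {"A": 80.0, "B": 95.0, "C": 40.0},
    {"A": 60.0, "B": 70.0, "C": 30.0},
    {"A": 90.0, "B": 100.0, "C": 55.0},
    {"A": 50.0, "B": 65.0, "C": 20.0},
]

def site_efficiency(site, fields):
    """Fraction of the domain peak captured at `site`, averaged over
    episodes, in percent (0-100)."""
    ratios = [f[site] / max(f.values()) for f in fields]
    return 100.0 * sum(ratios) / len(ratios)

eff_B = site_efficiency("B", episodes)   # site at the peak every episode
eff_C = site_efficiency("C", episodes)   # background-like site scores low
```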
78 FR 44247 - Semiannual Regulatory Agenda
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-23
... (energy efficiency standards) Distribution Transformers (energy efficiency standards) Residential... Sequence No. Title Identifier No. 130 Energy Efficiency 1904-AC04 Standards for Distribution Transformers... Transformers Legal Authority: 42 U.S.C. 6317(a); 42 U.S.C. 6313(a)(6)(C) Abstract: The current distribution...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-03
... Status; Skechers USA, LLC (Distribution of Footwear); Moreno Valley, California Pursuant to its authority... distribution facility of Skechers USA, LLC, located in Moreno Valley, California, (FTZ Docket 5- 2008, filed 2... activity related to footwear warehousing and distribution at the facility of Skechers USA, LLC, located in...
Power break off in a bulb turbine: wall pressure sensor investigation
NASA Astrophysics Data System (ADS)
Duquesne, P.; Maciel, Y.; Aeschlimann, V.; Ciocan, G. D.; Deschênes, C.
2014-03-01
A measurement campaign using unsteady wall pressure sensors on a bulb turbine draft tube was performed over the power and efficiency break off range of a N11 curve. This study is part of the BulbT project, undertaken by the Consortium on hydraulic machines and the LAMH (Hydraulic Machine Laboratory of Laval University). The chosen operating points include the best efficiency point for a high runner blade angle and a high N11. Three other points, with the same N11, have been selected in the break off zone of the efficiency curve. Flow conditions have been set using the guide vanes while the runner blade angle remained constant. The pressure sensors were developed from small piezoresistive chips with high frequency response. The calibration gave an instrumental error lower than 0.3% of the measurement range. The unsteady wall pressure was measured simultaneously at 13 locations inside the first part of the draft tube, which is conical, and at 16 locations in the circular to rectangular transition part just downstream. It was also measured at 11 locations along a streamwise line path at the bottom left part of the draft tube, where flow separation occurs, covering the whole streamwise extent of the draft tube. For seven radial-azimuthal planes, four sensors were distributed azimuthally. As confirmed by tuft visualizations, the break off phenomenon is correlated to the presence of flow separation inside the diffuser at the wall. The break off is linked to the appearance of a large recirculation in the draft tube. The efficiency drop increases with the size of the separated region. Analysis of the draft tube pressure coefficients confirms that the break off is related to diffuser losses. The streamwise evolution of the mean pressure coefficient is analyzed for the different operating conditions. An azimuthal dissymmetry of the mean pressure produced by the separation is detected. 
The pressure signals have been analyzed and used to track the separation zone depending on the operating conditions. Spectral analysis of these signals reveals a low frequency unsteadiness generated by the flow separation.
An Efficient Implementation For Real Time Applications Of The Wigner-Ville Distribution
NASA Astrophysics Data System (ADS)
Boashash, Boualem; Black, Peter; Whitehouse, Harper J.
1986-03-01
The Wigner-Ville Distribution (WVD) is a valuable tool for time-frequency signal analysis. In order to implement the WVD in real time an efficient algorithm and architecture have been developed which may be implemented with commercial components. This algorithm successively computes the analytic signal corresponding to the input signal, forms a weighted kernel function and analyses the kernel via a Discrete Fourier Transform (DFT). To evaluate the analytic signal required by the algorithm it is shown that the time domain definition implemented as a finite impulse response (FIR) filter is practical and more efficient than the frequency domain definition of the analytic signal. The windowed resolution of the WVD in the frequency domain is shown to be similar to the resolution of a windowed Fourier Transform. A real time signal processor has been designed for evaluation of the WVD analysis system. The system is easily paralleled and can be configured to meet a variety of frequency and time resolutions. The arithmetic unit is based on a pair of high speed VLSI floating-point multiplier and adder chips. Dual operand buses and an independent result bus maximize data transfer rates. The system is horizontally microprogrammed and utilizes a full instruction pipeline. Each microinstruction specifies two operand addresses, a result location, the type of arithmetic and the memory configuration. Input and output are via shared memory blocks with front-end processors to handle data transfers during the non-access periods of the analyzer.
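The three-step pipeline described above (analytic signal, lagged kernel, DFT over the lag variable) can be sketched offline in a few lines. This uses scipy's FFT-based `hilbert` rather than the paper's FIR time-domain filter, and no weighting window is applied; note the discrete WVD of a tone at f0 peaks at the bin corresponding to 2·f0 because the kernel oscillates at twice the signal frequency:

```python
import numpy as np
from scipy.signal import hilbert

def wvd(x):
    """Discrete Wigner-Ville distribution of a real signal: analytic
    signal, lagged kernel z[n+tau]*conj(z[n-tau]), then a DFT over the
    lag variable at each time instant.  Unweighted, O(N^2) sketch."""
    z = hilbert(x)                    # step 1: analytic signal
    N = len(z)
    W = np.zeros((N, N))
    for n in range(N):
        tau_max = min(n, N - 1 - n)   # lags that stay inside the record
        kern = np.zeros(N, dtype=complex)
        for tau in range(-tau_max, tau_max + 1):
            kern[tau % N] = z[n + tau] * np.conj(z[n - tau])  # step 2: kernel
        W[:, n] = np.real(np.fft.fft(kern))                   # step 3: DFT
    return W

# A pure tone concentrates energy at bin 2*f0 at every time instant.
fs, f0, N = 128, 16, 128
t = np.arange(N) / fs
W = wvd(np.cos(2 * np.pi * f0 * t))
```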
NASA Astrophysics Data System (ADS)
Cherkashin, N.; Daghbouj, N.; Seine, G.; Claverie, A.
2018-04-01
Sequential He+ + H+ ion implantation, being more effective than the sole implantation of H+ or He+, is used by many to transfer thin layers of silicon onto different substrates. However, due to the poor understanding of the basic mechanisms involved in such a process, the implantation parameters to be used for the efficient delamination of a superficial layer are still subject to debate. In this work, by using various experimental techniques, we have studied the influence of the He and H relative depth-distributions imposed by the ion energies onto the result of the sequential implantation and annealing of the same fluence of He and H ions. Analyzing the characteristics of the blister populations observed after annealing and deducing the composition of the gas they contain from FEM simulations, we show that the trapping efficiency of He atoms in platelets and blisters during annealing depends on the behavior of the vacancies generated by the two implants within the H-rich region before and after annealing. Maximum efficiency of the sequential ion implantation is obtained when the H-rich region is able to trap all implanted He ions, while the vacancies it generated are not available to favor the formation of V-rich complexes after implantation then He-filled nano-bubbles after annealing. A technological option is to implant He+ ions first at such an energy that the damage it generates is located on the deeper side of the H profile.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walker, A.
2013-06-01
Frito Lay North America (FLNA) requires technical assistance for the evaluation and implementation of renewable energy and energy efficiency projects in production facilities and distribution centers across North America. Services provided by NREL do not compete with those available in the private sector, but rather provide FLNA with expertise to create opportunities for the private sector renewable/efficiency industries and to inform FLNA decision making regarding cost-effective projects. Services include: identifying the most cost-effective project locations based on renewable energy resource data, utility data, incentives and other parameters affecting projects; assistance with feasibility studies; procurement specifications; design reviews; and other services to support FLNA in improving resource efficiency at facilities. This Cooperative Research and Development Agreement (CRADA) establishes the terms and conditions under which FLNA may access capabilities unique to the laboratory and required by FLNA. Each subsequent task issued under this umbrella agreement would include a scope-of-work, budget, schedule, and provisions for intellectual property specific to that task.
SASS: A symmetry adapted stochastic search algorithm exploiting site symmetry
NASA Astrophysics Data System (ADS)
Wheeler, Steven E.; Schleyer, Paul v. R.; Schaefer, Henry F.
2007-03-01
A simple symmetry adapted search algorithm (SASS) exploiting point group symmetry increases the efficiency of systematic explorations of complex quantum mechanical potential energy surfaces. In contrast to previously described stochastic approaches, which do not employ symmetry, candidate structures are generated within simple point groups, such as C2, Cs, and C2v. This facilitates efficient sampling of the (3N-6)-dimensional configuration space and increases the speed and effectiveness of quantum chemical geometry optimizations. Pople's concept of framework groups [J. Am. Chem. Soc. 102, 4615 (1980)] is used to partition the configuration space into structures spanning all possible distributions of sets of symmetry equivalent atoms. This provides an efficient means of computing all structures of a given symmetry with minimum redundancy. This approach also is advantageous for generating initial structures for global optimizations via genetic algorithm and other stochastic global search techniques. Application of the SASS method is illustrated by locating 14 low-lying stationary points on the cc-pwCVDZ ROCCSD(T) potential energy surface of Li5H2. The global minimum structure is identified, along with many unique, nonintuitive, energetically favorable isomers.
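The symmetry-constrained sampling idea can be illustrated for the simplest case, Cs: a candidate geometry consists of atoms that either come in mirror-image pairs across the symmetry plane or lie in the plane itself, so only half the off-plane coordinates are free. A toy generator (parameters and box size hypothetical, not the SASS implementation):

```python
import random

random.seed(7)

def random_cs_structure(n_pairs, n_in_plane, box=3.0):
    """Generate one stochastic candidate geometry constrained to Cs
    symmetry (mirror plane z = 0): atoms come either in reflected pairs
    or lie in the plane.  A toy version of symmetry-adapted sampling."""
    atoms = []
    for _ in range(n_pairs):
        x, y, z = (random.uniform(-box, box) for _ in range(3))
        z = abs(z) + 0.1              # keep off-plane so the pair is genuine
        atoms.append((x, y, z))
        atoms.append((x, y, -z))      # mirror-image partner
    for _ in range(n_in_plane):
        atoms.append((random.uniform(-box, box), random.uniform(-box, box), 0.0))
    return atoms

geom = random_cs_structure(n_pairs=3, n_in_plane=1)   # 7-atom Cs candidate
```

Generating candidates this way guarantees every sampled structure already has the requested point group, so no post-hoc symmetrization step is needed before geometry optimization.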
Kinnunen, J; Pohjonen, H
2001-07-01
A 3-year PACS project was started in 1997 and completed in 1999 with filmless radiology and surgery. An efficient network for transferring images provides the infrastructure for integration of different distributed imaging systems and enables efficient handling of all patient-related information on one display station. Because of the need for high-speed communications and the massive amount of image data transferred in radiology, ATM (25, 155 Mbit/s) was chosen to be the main technology used. Both hardware and software redundancy of the system have been carefully planned. The size of the Dicom image library utilizing MO discs is currently 1.2 TB with 300 GB RAID capacity. For the increasing amount of teleradiologic consultations, a special Dicom gateway is planned. It allows a centralized and resilient handling and routing of received images around the hospital. Hospital-wide PACS has already improved the speed and quality of patient care by providing instant access to diagnostic information at multiple locations simultaneously. The benefits of PACS are considered from the viewpoint of the entire hospital: PACS offers a method for efficiently transporting patient-related images and reports to the referring physicians.
Flow analysis of airborne particles in a hospital operating room
NASA Astrophysics Data System (ADS)
Faeghi, Shiva; Lennerts, Kunibert
2016-06-01
Preventing airborne infections during a surgery has always been an important issue in delivering effective and high quality medical care to the patient. One of the important sources of infection is particles that are distributed through airborne routes. Factors influencing infection rates caused by airborne particles include, among others, efficient ventilation and the arrangement of surgical facilities inside the operating room. The paper studies the ventilation airflow pattern in an operating room in a hospital located in Tehran, Iran, and seeks to find efficient configurations with respect to the ventilation system and layout of facilities. This study uses computational fluid dynamics (CFD) and investigates the effects of different inflow velocities for inlets, two pressurization scenarios (equal and excess pressure) and two arrangements of surgical facilities in the room while the door is completely open. The results show that the system does not perform adequately when the door is open in the operating room under the current conditions, and excess pressure adjustments should be employed to achieve efficient results. The findings of this research can be discussed in the context of design and control of the ventilation facilities of operating rooms.
Design and implementation of streaming media server cluster based on FFMpeg.
Zhao, Hong; Zhou, Chun-long; Jin, Bao-zhao
2015-01-01
Poor performance and network congestion are commonly observed in the streaming media single server system. This paper proposes a scheme to construct a streaming media server cluster system based on FFMpeg. In this scheme, different users are distributed to different servers according to their locations and the balance among servers is maintained by the dynamic load-balancing algorithm based on active feedback. Furthermore, a service redirection algorithm is proposed to improve the transmission efficiency of streaming media data. The experiment results show that the server cluster system has significantly alleviated the network congestion and improved the performance in comparison with the single server system.
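The dispatch policy described above, route by user location first, then balance by server feedback, can be sketched in a few lines. Server names, regions, and load figures are hypothetical; the real system's feedback-driven algorithm is more elaborate than a simple least-loaded pick:

```python
# Hypothetical server table: location tag plus a load figure reported by
# each server's active feedback (0 = idle, 1 = saturated).
servers = [
    {"name": "cn-east-1", "region": "east", "load": 0.20},
    {"name": "cn-east-2", "region": "east", "load": 0.85},
    {"name": "cn-west-1", "region": "west", "load": 0.40},
]

def pick_server(user_region, table):
    """Dispatch sketch: prefer servers in the user's region, then choose
    the least-loaded candidate; fall back to the whole cluster if no
    regional server exists."""
    local = [s for s in table if s["region"] == user_region] or table
    return min(local, key=lambda s: s["load"])

chosen = pick_server("east", servers)
```

A service-redirection step, as in the paper, would then hand the client the chosen server's address so subsequent streaming bypasses the dispatcher.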
Design and Implementation of Streaming Media Server Cluster Based on FFMpeg
Zhao, Hong; Zhou, Chun-long; Jin, Bao-zhao
2015-01-01
Poor performance and network congestion are commonly observed in the streaming media single server system. This paper proposes a scheme to construct a streaming media server cluster system based on FFMpeg. In this scheme, different users are distributed to different servers according to their locations and the balance among servers is maintained by the dynamic load-balancing algorithm based on active feedback. Furthermore, a service redirection algorithm is proposed to improve the transmission efficiency of streaming media data. The experiment results show that the server cluster system has significantly alleviated the network congestion and improved the performance in comparison with the single server system. PMID:25734187
The Accuracy of GBM GRB Localizations
NASA Astrophysics Data System (ADS)
Briggs, Michael Stephen; Connaughton, V.; Meegan, C.; Hurley, K.
2010-03-01
We report a study of the accuracy of GBM GRB localizations, analyzing three types of localizations: those produced automatically by the GBM Flight Software on board, those produced automatically with ground software in near real time, and those produced with human guidance. The two types of automatic locations are distributed in near real time via GCN Notices; the human-guided locations are distributed on timescales of many minutes or hours using GCN Circulars. This work uses a Bayesian analysis that models the distribution of the GBM total location error by comparing GBM locations to more accurate locations obtained with other instruments. Reference locations are obtained from Swift, Super-AGILE, the LAT, and the IPN. We model the GBM total location errors as having systematic errors in addition to the statistical errors and use the Bayesian analysis to constrain the systematic errors.
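The statistical-plus-systematic error model can be sketched with a toy grid posterior: if the total 1-sigma error is sqrt(stat² + sys²) and the 2-D angular offset from the reference position then follows a Rayleigh distribution, the observed offsets constrain the systematic term. Offsets, statistical errors, grid, and the flat prior below are all illustrative, not the study's data or priors:

```python
import math

# Hypothetical sample: observed GBM-minus-reference angular offsets (deg)
# and per-burst statistical 1-sigma errors (deg).
offsets = [3.1, 4.0, 2.2, 5.5, 3.6]
stat = [1.0, 1.5, 0.8, 2.0, 1.2]

def log_like(sys_err):
    """2-D Gaussian offset model: with total sigma^2 = stat^2 + sys^2,
    the offset r is Rayleigh: p(r) = (r / s^2) exp(-r^2 / (2 s^2))."""
    ll = 0.0
    for r, s0 in zip(offsets, stat):
        s2 = s0 ** 2 + sys_err ** 2
        ll += math.log(r / s2) - r * r / (2 * s2)
    return ll

# Grid posterior over the systematic error with a flat prior.
grid = [0.1 * k for k in range(1, 101)]          # 0.1 .. 10 deg
post = [math.exp(log_like(s)) for s in grid]
tot = sum(post)
sys_mean = sum(s * p for s, p in zip(grid, post)) / tot   # posterior mean (deg)
```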
Bayesian multiple-source localization in an uncertain ocean environment.
Dosso, Stan E; Wilmut, Michael J
2011-06-01
This paper considers simultaneous localization of multiple acoustic sources when properties of the ocean environment (water column and seabed) are poorly known. A Bayesian formulation is developed in which the environmental parameters, noise statistics, and locations and complex strengths (amplitudes and phases) of multiple sources are considered to be unknown random variables constrained by acoustic data and prior information. Two approaches are considered for estimating source parameters. Focalization maximizes the posterior probability density (PPD) over all parameters using adaptive hybrid optimization. Marginalization integrates the PPD using efficient Markov-chain Monte Carlo methods to produce joint marginal probability distributions for source ranges and depths, from which source locations are obtained. This approach also provides quantitative uncertainty analysis for all parameters, which can aid in understanding of the inverse problem and may be of practical interest (e.g., source-strength probability distributions). In both approaches, closed-form maximum-likelihood expressions for source strengths and noise variance at each frequency allow these parameters to be sampled implicitly, substantially reducing the dimensionality and difficulty of the inversion. Examples are presented of both approaches applied to single- and multi-frequency localization of multiple sources in an uncertain shallow-water environment, and a Monte Carlo performance evaluation study is carried out. © 2011 Acoustical Society of America
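The implicit-sampling trick mentioned above, closed-form maximum-likelihood source strengths removing those parameters from the sampled space, can be shown on a toy linear model: for any candidate location parameter the complex strength that best fits the data is a projection, leaving only a concentrated mismatch over location. The 8-sensor replica, noise level, and seed are hypothetical stand-ins for a real acoustic propagation model:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy linear model: data = strength * replica(theta) + noise.  theta is a
# 1-D stand-in for (range, depth); the replica is a hypothetical 8-sensor
# phase pattern, not a real ocean-acoustic field.
def replica(theta):
    return np.exp(1j * theta * np.arange(8))

true_theta, true_a = 4.2, 2.0 + 1.0j
d = (true_a * replica(true_theta)
     + 0.05 * (rng.standard_normal(8) + 1j * rng.standard_normal(8)))

def concentrated_mismatch(theta):
    """Residual power after maximizing over the complex source strength
    in closed form (a_hat = g^H d / g^H g), so the strength never enters
    the sampled parameter space."""
    g = replica(theta)
    a_hat = np.vdot(g, d) / np.vdot(g, g)    # closed-form ML strength
    r = d - a_hat * g
    return float(np.real(np.vdot(r, r)))

thetas = np.linspace(0.0, 10.0, 201)
mismatch = [concentrated_mismatch(t) for t in thetas]
best_theta = thetas[int(np.argmin(mismatch))]   # recovers true_theta
```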
NASA Astrophysics Data System (ADS)
Chang, Xijiang; Kunii, Kazuki; Liang, Rongqing; Nagatsu, Masaaki
2013-04-01
A large-area planar surface-wave plasma (SWP) source driven by a 915 MHz ultrahigh frequency (UHF) wave was developed. To avoid using large, thick dielectric plates as vacuum windows, we propose a cavity launcher consisting of a cylindrical cavity with several small quartz discs at the bottom. Three types of launchers with quartz discs located at different positions were tested to compare their plasma production efficiencies and spatial distributions of electron density. With the optimum launcher, large-area plasma discharges with a radial uniformity within ±10% were obtained in a radius of about 25-30 cm in Ar gas at 8 Pa for incident power in the range 0.5-2.5 kW. The maximum electron density and temperature were approximately (0.95-1.1) × 1011 cm-3 and 1.9-2.0 eV, respectively, as measured by a Langmuir probe located 24 cm below the bottom of the cavity launcher. Using an Ar/NH3 SWP with the optimum launcher, we demonstrated large-area amino-group surface modification of polyurethane sheets. Experimental results indicated that a uniform amino-group modification was achieved over a radius of approximately 40 cm, which is slightly larger than the radial uniformity of the electron density distribution.
NASA Astrophysics Data System (ADS)
Devaraj, Arun; Vijayakumar, Murugesan; Bao, Jie; Guo, Mond F.; Derewinski, Miroslaw A.; Xu, Zhijie; Gray, Michel J.; Prodinger, Sebastian; Ramasamy, Karthikeyan K.
2016-11-01
The formation of carbonaceous deposits (coke) in zeolite pores during catalysis leads to temporary deactivation of catalyst, necessitating regeneration steps, affecting throughput, and resulting in partial permanent loss of catalytic efficiency. Yet, even to date, the coke molecule distribution is quite challenging to study with high spatial resolution from surface to bulk of the catalyst particles at a single particle level. To address this challenge we investigated the coke molecules in HZSM-5 catalyst after ethanol conversion treatment by a combination of C K-edge X-ray absorption spectroscopy (XAS), 13C Cross polarization-magic angle spinning nuclear magnetic resonance (CP-MAS NMR) spectroscopy, and atom probe tomography (APT). XAS and NMR highlighted the aromatic character of coke molecules. APT permitted the imaging of the spatial distribution of hydrocarbon molecules located within the pores of spent HZSM-5 catalyst from surface to bulk at a single particle level. 27Al NMR results and APT results indicated association of coke molecules with Al enriched regions within the spent HZSM-5 catalyst particles. The experimental results were additionally validated by a level-set-based APT field evaporation model. These results provide a new approach to investigate catalytic deactivation due to hydrocarbon coking or poisoning of zeolites at an unprecedented spatial resolution.
A dynamic model of saliva secretion
Palk, Laurence; Sneyd, James; Shuttleworth, Trevor J.; Yule, David I.; Crampin, Edmund J.
2010-01-01
We construct a mathematical model of the parotid acinar cell with the aim of investigating how the distribution of K+ and Cl− channels affects saliva production. Secretion of fluid is initiated by Ca2+ signals acting on the Ca2+-dependent K+ and Cl− channels. The opening of these channels facilitates the movement of Cl− ions into the lumen, which water follows by osmosis. We use recent results on both the release of Ca2+ from internal stores via the inositol (1,4,5)-trisphosphate receptor (IP3R) and IP3 dynamics to create a physiologically realistic Ca2+ model which is able to recreate important experimentally observed behaviours seen in parotid acinar cells. We formulate an equivalent electrical circuit diagram for the movement of ions responsible for water flow, which enables us to calculate and include distinct apical and basal membrane potentials in the model. We show that maximum saliva production occurs when a small amount of K+ conductance is located at the apical membrane, with the majority in the basal membrane. The maximum fluid output is found to coincide with a minimum in the apical membrane potential. The traditional model, whereby all Cl− channels are located in the apical membrane, is shown to be the most efficient Cl− channel distribution. PMID:20600135
Satellite-Relayed Intercontinental Quantum Network.
Liao, Sheng-Kai; Cai, Wen-Qi; Handsteiner, Johannes; Liu, Bo; Yin, Juan; Zhang, Liang; Rauch, Dominik; Fink, Matthias; Ren, Ji-Gang; Liu, Wei-Yue; Li, Yang; Shen, Qi; Cao, Yuan; Li, Feng-Zhi; Wang, Jian-Feng; Huang, Yong-Mei; Deng, Lei; Xi, Tao; Ma, Lu; Hu, Tai; Li, Li; Liu, Nai-Le; Koidl, Franz; Wang, Peiyuan; Chen, Yu-Ao; Wang, Xiang-Bin; Steindorfer, Michael; Kirchner, Georg; Lu, Chao-Yang; Shu, Rong; Ursin, Rupert; Scheidl, Thomas; Peng, Cheng-Zhi; Wang, Jian-Yu; Zeilinger, Anton; Pan, Jian-Wei
2018-01-19
We perform decoy-state quantum key distribution between a low-Earth-orbit satellite and multiple ground stations located in Xinglong, Nanshan, and Graz, which establish satellite-to-ground secure keys with ∼kHz rate per passage of the satellite Micius over a ground station. The satellite thus establishes a secure key between itself and, say, Xinglong, and another key between itself and, say, Graz. Then, upon request from the ground command, Micius acts as a trusted relay. It performs bitwise exclusive or operations between the two keys and relays the result to one of the ground stations. That way, a secret key is created between China and Europe at locations separated by 7600 km on Earth. These keys are then used for intercontinental quantum-secured communication. This was, on the one hand, the transmission of images in a one-time pad configuration from China to Austria as well as from Austria to China. Also, a video conference was performed between the Austrian Academy of Sciences and the Chinese Academy of Sciences, which also included a 280 km optical ground connection between Xinglong and Beijing. Our work clearly confirms the Micius satellite as a robust platform for quantum key distribution with different ground stations on Earth, and points towards an efficient solution for an ultralong-distance global quantum network.
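The trusted-relay step described above reduces to bitwise XOR arithmetic on the two satellite-to-ground keys: the relayed value reveals nothing by itself, yet lets one ground station recover the other's key. The sketch below shows only that arithmetic; the 32-byte key length and use of Python's `secrets` module are illustrative choices, not part of the Micius protocol stack.

```python
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Bitwise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

# Stand-ins for the keys the satellite establishes with each ground station
key_xinglong = secrets.token_bytes(32)
key_graz = secrets.token_bytes(32)

# The satellite relays the XOR of the two keys to one ground station
relayed = xor_bytes(key_xinglong, key_graz)

# Graz recovers Xinglong's key from the relayed value and its own key,
# yielding a shared secret between the two ground stations
recovered = xor_bytes(relayed, key_graz)
assert recovered == key_xinglong
```

Because the relayed value is the XOR of two independent uniformly random keys, an eavesdropper on the downlink learns nothing about either key alone; the satellite itself, however, must be trusted, hence "trusted relay."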
Yi, Meng; Chen, Qingkui; Xiong, Neal N.
2016-01-01
This paper considers the distributed access and control problem of a massive wireless sensor networks' data access center for the Internet of Things, which is an extension of wireless sensor networks and an element of its topology structure. In the context of the arrival of massive service access requests at a virtual data center, this paper designs a massive sensing data access and control mechanism to improve the access efficiency of service requests and make full use of the available resources at the data access center for the Internet of Things. Firstly, this paper proposes a synergistically distributed buffer access model, which separates the information of resource and location. Secondly, the paper divides the service access requests into multiple virtual groups based on their characteristics and locations using an optimized self-organizing feature map neural network. Furthermore, this paper designs an optimal scheduling algorithm of group migration based on the combination scheme between the artificial bee colony algorithm and chaos searching theory. Finally, the experimental results demonstrate that this mechanism outperforms the existing schemes in terms of enhancing the accessibility of service requests effectively and reducing network delay, and has higher load balancing capacity and a higher resource utility rate. PMID:27827878
NASA Technical Reports Server (NTRS)
Simpson, James J.; Harkins, Daniel N.
1993-01-01
Historically, locating and browsing satellite data has been a cumbersome and expensive process. This has impeded the efficient and effective use of satellite data in the geosciences. SSABLE is a new interactive tool for the archive, browse, order, and distribution of satellite data based upon X Window, high-bandwidth networks, and digital image rendering techniques. SSABLE automatically constructs relational database queries to archived image datasets based on time, date, geographical location, and other selection criteria. SSABLE also provides a visual representation of the selected archived data for viewing on the user's X terminal. SSABLE is a near real-time system; for example, data are added to SSABLE's database within 10 min after capture. SSABLE is network and machine independent; it will run identically on any machine which satisfies the following three requirements: 1) it has a bitmapped display (monochrome or greater); 2) it is running the X Window system; and 3) it is on a network directly reachable by the SSABLE system. SSABLE has been evaluated at over 100 international sites. Network response time in the United States and Canada varies between 4 and 7 s for browse image updates; reported transmission times to Europe and Australia typically are 20-25 s.
Network hydraulics inclusion in water quality event detection using multiple sensor stations data.
Oliker, Nurit; Ostfeld, Avi
2015-09-01
Event detection is one of the most challenging current topics in water distribution systems analysis: how can regular on-line hydraulic (e.g., pressure, flow) and water quality (e.g., pH, residual chlorine, turbidity) measurements at different network locations be efficiently utilized to detect water quality contamination events? This study describes an integrated event detection model which combines data from multiple sensor stations with network hydraulics. To date, event detection modelling has largely been limited to a single sensor station location and dataset. Single sensor station models are detached from network hydraulics insights and as a result might be significantly exposed to false positive alarms. This work is aimed at overcoming this limitation by integrating local and spatial hydraulic data understanding into an event detection model. The spatial analysis complements the local event detection effort by discovering events with lower signatures through exploring the sensors' mutual hydraulic influences. The unique contribution of this study is in incorporating hydraulic simulation information into the overall event detection process of spatially distributed sensors. The methodology is demonstrated on two example applications using base runs and sensitivity analyses. Results show a clear advantage of the suggested model over single-sensor event detection schemes. Copyright © 2015 Elsevier Ltd. All rights reserved.
Peer-to-peer architecture for multi-departmental distributed PACS
NASA Astrophysics Data System (ADS)
Rosset, Antoine; Heuberger, Joris; Pysher, Lance; Ratib, Osman
2006-03-01
We have elected to explore peer-to-peer technology as an alternative to centralized PACS architecture in response to the increasing requirements for wide access to images inside and outside a radiology department. The goal is to allow users across the enterprise to access any study at any time without the need for prefetching or routing of images from a central archive. Images can be accessed between different workstations and local storage nodes. We implemented "Bonjour," a remote file access technology developed by Apple that allows applications to share data and files remotely with optimized data access and data transfer. Our open-source image display platform, OsiriX, was adapted to share local DICOM images by making each workstation's local SQL database directly accessible from any other OsiriX workstation over the network. A server version of the OsiriX Core Data database likewise provides access to distributed archive servers. The infrastructure implemented allows fast and efficient access to any image, anywhere, at any time, independently of the actual physical location of the data. It also benefits from the performance of distributed low-cost, high-capacity storage servers, whose caching of PACS data was found to be 10 to 20 times faster than accessing the same data from the central PACS archive. It is particularly suitable for large hospitals and academic environments where clinical conferences, interdisciplinary discussions and successive sessions of image processing are often part of complex workflows for patient management and decision making.
An analytic linear accelerator source model for GPU-based Monte Carlo dose calculations.
Tian, Zhen; Li, Yongbao; Folkerts, Michael; Shi, Feng; Jiang, Steve B; Jia, Xun
2015-10-21
Recently, there has been a lot of research interest in developing fast Monte Carlo (MC) dose calculation methods on graphics processing unit (GPU) platforms. A good linear accelerator (linac) source model is critical for both accuracy and efficiency. In principle, an analytical source model is preferable to a phase-space file-based model for GPU-based MC dose engines, in that data loading and CPU-GPU data transfer can be avoided. In this paper, we presented an analytical field-independent source model specifically developed for GPU-based MC dose calculations, together with a GPU-friendly sampling scheme. A key concept called phase-space ring (PSR) was proposed. Each PSR contained a group of particles that were of the same type, close in energy, and resided in a narrow ring on the phase-space plane located just above the upper jaws. The model parameterized the probability densities of particle location, direction, and energy for each primary photon PSR, scattered photon PSR, and electron PSR. Models of one 2D Gaussian distribution or multiple Gaussian components were employed to represent the particle direction distributions of these PSRs. A method was developed to analyze a reference phase-space file and derive the corresponding model parameters. To use our model efficiently in MC dose calculations on GPU, we proposed a GPU-friendly sampling strategy, which ensured that the particles sampled and transported simultaneously are of the same type and close in energy, alleviating GPU thread divergence. To test the accuracy of our model, dose distributions of a set of open fields in a water phantom were calculated using our source model and compared to those calculated using the reference phase-space files. For the high dose gradient regions, the average distance-to-agreement (DTA) was within 1 mm and the maximum DTA within 2 mm.
For relatively low dose gradient regions, the root-mean-square (RMS) dose difference was within 1.1% and the maximum dose difference within 1.7%. The maximum relative difference of output factors was within 0.5%. Over 98.5% passing rate was achieved in 3D gamma-index tests with 2%/2 mm criteria in both an IMRT prostate patient case and a head-and-neck case. These results demonstrated the efficacy of our model in terms of accurately representing a reference phase-space file. We have also tested the efficiency gain of our source model over our previously developed phase-space-let file source model. The overall efficiency of dose calculation was found to be improved by ~1.3-2.2 times in water and patient cases using our analytical model.
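The per-ring direction model described above (a single 2D Gaussian or a mixture of Gaussian components) can be sketched as a simple CPU-side sampler. The component weights, means, and widths below are invented placeholders, not fitted parameters from any reference phase-space file.

```python
import random

# Hypothetical two-component 2D Gaussian mixture for one phase-space ring's
# particle direction distribution (direction cosines u, v); weights sum to 1
components = [
    # (weight, mean_u, mean_v, sigma)
    (0.7, 0.0, 0.0, 0.02),   # narrow core
    (0.3, 0.0, 0.0, 0.08),   # broader tail
]

def sample_direction(rng):
    """Draw (u, v) from the mixture: pick a component by weight, then sample."""
    w, acc = rng.random(), 0.0
    for weight, mu_u, mu_v, sigma in components:
        acc += weight
        if w <= acc:
            return rng.gauss(mu_u, sigma), rng.gauss(mu_v, sigma)
    # Guard against floating-point rounding: fall back to the last component
    _, mu_u, mu_v, sigma = components[-1]
    return rng.gauss(mu_u, sigma), rng.gauss(mu_v, sigma)

rng = random.Random(0)
dirs = [sample_direction(rng) for _ in range(10000)]
```

On a GPU, the point of the paper's PSR grouping is that threads in a warp all execute this same sampling path for particles of the same type and similar energy, avoiding the divergence a mixed batch would cause.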
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-15
... Distribution Transformers AGENCY: Department of Energy, Office of Energy Efficiency and Renewable Energy... Rulemaking Working Group for Low-Voltage Dry-Type Distribution Transformers (hereafter ``LV Group''). The LV... proposed rule for regulating the energy efficiency of distribution transformers, as authorized by the...
Dumas, Pascal; Jimenez, Haizea; Peignon, Christophe; Wantiez, Laurent; Adjeroud, Mehdi
2013-01-01
No-take marine reserves are one of the oldest and most versatile tools used across the Pacific for the conservation of reef resources, in particular for invertebrates traditionally targeted by local fishers. Assessing their actual efficiency is still a challenge in complex ecosystems such as coral reefs, where reserve effects are likely to be obscured by high levels of environmental variability. The goal of this study was to investigate the potential interference of small-scale habitat structure with the efficiency of reserves. The spatial distribution of widely harvested macroinvertebrates was surveyed in a large set of protected vs. unprotected stations from eleven reefs located in New Caledonia. Abundance, density and individual size data were collected along random, small-scale (20×1 m) transects. Fine habitat typology was derived with a quantitative photographic method using 17 local habitat variables. Marine reserves substantially augmented the local density, size structure and biomass of the target species. Density of Trochus niloticus and Tridacna maxima doubled globally inside the reserve network; average size was greater by 10 to 20% for T. niloticus. We demonstrated that the apparent success of protection could be obscured by marked variations in population structure occurring over short distances, resulting from small-scale heterogeneity in the reef habitat. The efficiency of reserves appeared to be modulated by the availability of suitable habitats at the decimetric scale ("microhabitats") for the considered sessile/low-mobility macroinvertebrate species. Incorporating microhabitat distribution could significantly enhance the efficiency of habitat surrogacy, a valuable approach in the case of conservation targets focusing on endangered or emblematic macroinvertebrates or relatively sedentary fish species. PMID:23554965
TASK ALLOCATION IN GEO-DISTRIBUTED CYBER-PHYSICAL SYSTEMS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aggarwal, Rachit; Smidts, Carol
This paper studies the task allocation algorithm for a distributed test facility (DTF), which aims to assemble geo-distributed cyber (software) and physical (hardware-in-the-loop) components into a prototype cyber-physical system (CPS). This allows low-cost testing on an early conceptual prototype (ECP) of the ultimate CPS (UCPS) to be developed. The DTF provides an instrumentation interface for carrying out reliability experiments remotely, such as fault propagation analysis and in-situ testing of hardware and software components in a simulated environment. Unfortunately, the geo-distribution introduces an overhead that is not inherent to the UCPS, i.e., a significant communication time delay that threatens the stability of the ECP and is not an appropriate representation of the behavior of the UCPS. This can be mitigated by implementing a task allocation algorithm to find a suitable configuration and assign the software components to appropriate computational locations dynamically. This would allow the ECP to operate more efficiently, with less probability of becoming unstable due to the delays introduced by geo-distribution. The task allocation algorithm proposed in this work uses a Monte Carlo approach along with dynamic programming to identify the optimal network configuration that keeps the time delays to a minimum.
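The Monte Carlo part of such a task allocation search can be sketched as random sampling of assignments scored by total inter-task communication delay. The site names, delay matrix, and task graph below are invented, and the paper's dynamic-programming refinement is omitted; this is only the random-search skeleton.

```python
import random

# Hypothetical one-way communication delays (ms) between candidate host sites;
# a small intra-site delay models local loopback
delay = {("A", "B"): 40, ("A", "C"): 90, ("B", "C"): 25,
         ("A", "A"): 1, ("B", "B"): 1, ("C", "C"): 1}
delay.update({(b, a): d for (a, b), d in list(delay.items())})

tasks = ["controller", "sensor_model", "actuator_model"]
# Which task pairs exchange messages (invented topology)
links = [("controller", "sensor_model"), ("controller", "actuator_model")]
sites = ["A", "B", "C"]

def total_delay(assign):
    """Sum of link delays under a task-to-site assignment."""
    return sum(delay[(assign[t1], assign[t2])] for t1, t2 in links)

def monte_carlo_allocate(n_samples=500, seed=0):
    """Randomly sample assignments and keep the lowest-delay one found."""
    random.seed(seed)
    best, best_cost = None, float("inf")
    for _ in range(n_samples):
        assign = {t: random.choice(sites) for t in tasks}
        cost = total_delay(assign)
        if cost < best_cost:
            best, best_cost = assign, cost
    return best, best_cost

best, cost = monte_carlo_allocate()
```

With this toy delay matrix the optimum is to co-locate all three tasks at one site, which the sampler finds easily; in a realistic DTF the search space and delay model are far larger, which is where the dynamic-programming structure pays off.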
NASA Technical Reports Server (NTRS)
Goldhirsh, J.
1982-01-01
The first absolute rain fade distribution method described establishes absolute fade statistics at a given site by means of a sampled radar data base. The second method extrapolates absolute fade statistics from one location to another, given simultaneously measured fade and rain rate statistics at the former. Both methods employ similar conditional fade statistic concepts and long term rain rate distributions. Probability deviations in the 2-19% range, with an 11% average, were obtained upon comparison of measured and predicted levels at given attenuations. The extrapolation of fade distributions to other locations at 28 GHz showed very good agreement with measured data at three sites located in the continental temperate region.
NASA Astrophysics Data System (ADS)
Shao, Yuxiang; Chen, Qing; Wei, Zhenhua
Logistics distribution center location evaluation is a dynamic, fuzzy, open and complicated nonlinear system, which makes it difficult to evaluate distribution center locations with traditional analysis methods. This paper proposes a distribution center location evaluation system which uses a fuzzy neural network combined with a genetic algorithm. In this model, the neural network is adopted to construct the fuzzy system. By using the genetic algorithm, the parameters of the neural network are optimized and trained so as to improve the fuzzy system's abilities of self-study and self-adaptation. Finally, the sampled data are trained and tested in Matlab. The simulation results indicate that the proposed identification model has very small errors.
Improved method for retinotopy constrained source estimation of visual evoked responses
Hagler, Donald J.; Dale, Anders M.
2011-01-01
Retinotopy constrained source estimation (RCSE) is a method for non-invasively measuring the time courses of activation in early visual areas using magnetoencephalography (MEG) or electroencephalography (EEG). Unlike conventional equivalent current dipole or distributed source models, the use of multiple, retinotopically-mapped stimulus locations to simultaneously constrain the solutions allows for the estimation of independent waveforms for visual areas V1, V2, and V3, despite their close proximity to each other. We describe modifications that improve the reliability and efficiency of this method. First, we find that increasing the number and size of visual stimuli results in source estimates that are less susceptible to noise. Second, to create a more accurate forward solution, we have explicitly modeled the cortical point spread of individual visual stimuli. Dipoles are represented as extended patches on the cortical surface, which take into account the estimated receptive field size at each location in V1, V2, and V3 as well as the contributions from contralateral, ipsilateral, dorsal, and ventral portions of the visual areas. Third, we implemented a map fitting procedure to deform a template to match individual subject retinotopic maps derived from functional magnetic resonance imaging (fMRI). This improves the efficiency of the overall method by allowing automated dipole selection, and it makes the results less sensitive to physiological noise in fMRI retinotopy data. Finally, the iteratively reweighted least squares (IRLS) method was used to reduce the contribution from stimulus locations with high residual error for robust estimation of visual evoked responses. PMID:22102418
An Intelligent Archive Testbed Incorporating Data Mining
NASA Technical Reports Server (NTRS)
Ramapriyan, H.; Isaac, D.; Yang, W.; Bonnlander, B.; Danks, D.
2009-01-01
Many significant advances have occurred during the last two decades in remote sensing instrumentation, computation, storage, and communication technology. A series of Earth observing satellites have been launched by U.S. and international agencies and have been operating and collecting global data on a regular basis. These advances have created a data-rich environment for scientific research and applications. NASA's Earth Observing System (EOS) Data and Information System (EOSDIS) has been operational since August 1994 with support for pre-EOS data. Currently, EOSDIS supports all the EOS missions including Terra (1999), Aqua (2002), ICESat (2002) and Aura (2004). EOSDIS has been effectively capturing, processing and archiving several terabytes of standard data products each day. It has also been distributing these data products at a rate of several terabytes per day to a diverse and globally distributed user community (Ramapriyan et al. 2009). There are other NASA-sponsored data system activities including measurement-based systems such as the Ocean Data Processing System and the Precipitation Processing System, and several projects under the Research, Education and Applications Solutions Network (REASoN), Making Earth Science Data Records for Use in Research Environments (MEaSUREs), and the Advancing Collaborative Connections for Earth-Sun System Science (ACCESS) programs. Together, these activities provide a rich set of resources constituting a value chain for users to obtain data at various levels ranging from raw radiances to interdisciplinary model outputs. The result has been a significant leap in our understanding of the Earth systems that all humans depend on for their enjoyment, livelihood, and survival. The trend in the community today is towards many distributed sets of providers of data and services.
Despite this, visions for the future include users being able to locate, fuse and utilize data with location transparency and a high degree of interoperability, and being able to convert data to information and usable knowledge in an efficient, convenient manner, aided significantly by automation (Ramapriyan et al. 2004; NASA 2005). We can look upon the distributed provider environment, with capabilities to convert data to information and to knowledge, as an Intelligent Archive in the Context of a Knowledge Building System (IA-KBS). Some of the key capabilities of an IA-KBS are: Virtual Product Generation, Significant Event Detection, Automated Data Quality Assessment, Large-Scale Data Mining, Dynamic Feedback Loop, and Data Discovery and Efficient Requesting (Ramapriyan et al. 2004).
Efficiency analysis of diffusion on T-fractals in the sense of random walks.
Peng, Junhao; Xu, Guoai
2014-04-07
Efficiently controlling the diffusion process is crucial in the study of diffusion problems in complex systems. In the sense of random walks with a single trap, mean trapping time (MTT) and mean diffusing time (MDT) are good measures of trapping efficiency and diffusion efficiency, respectively. Both vary with the location of the node. In this paper, we analyze the effects of a node's location on the trapping efficiency and diffusion efficiency of T-fractals as measured by MTT and MDT. First, we provide methods to calculate the MTT for any target node and the MDT for any source node of T-fractals. The methods can also be used to calculate the mean first-passage time between any pair of nodes. Then, using the MTT and the MDT as the measures of trapping efficiency and diffusion efficiency, respectively, we compare the trapping efficiency and diffusion efficiency among all nodes of the T-fractal and find the best (or worst) trapping sites and the best (or worst) diffusing sites. Our results show that the hub node of the T-fractal is the best trapping site, but it is also the worst diffusing site; and that the three boundary nodes are the worst trapping sites, but they are also the best diffusing sites. Comparing the maxima of MTT and MDT with their minima, we find that the maximum MTT is almost 6 times the minimum MTT, whereas the maximum MDT is almost equal to the minimum MDT. Thus, the location of the target node has a large effect on trapping efficiency, but the location of the source node has almost no effect on diffusion efficiency. We also simulate random walks on T-fractals, whose results are consistent with the derived results.
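The MTT comparison can be reproduced in miniature by simulating random walks on a small tree with a hub and three short arms, a stand-in for a low-order T-fractal rather than the exact T-fractal construction. The expected outcome mirrors the abstract's finding: the hub is a much better trapping site than a boundary leaf.

```python
import random

# Small tree: hub 0 joined to three two-node arms (1-2, 3-4, 5-6)
edges = [(0, 1), (1, 2), (0, 3), (3, 4), (0, 5), (5, 6)]
adj = {}
for u, v in edges:
    adj.setdefault(u, []).append(v)
    adj.setdefault(v, []).append(u)

def mean_trapping_time(trap, n_walks=20000, seed=0):
    """Average number of steps for a walker started uniformly at random
    over the non-trap nodes to first reach `trap`."""
    random.seed(seed)
    nodes = [n for n in adj if n != trap]
    total = 0
    for _ in range(n_walks):
        node, steps = random.choice(nodes), 0
        while node != trap:
            node = random.choice(adj[node])  # unbiased nearest-neighbor step
            steps += 1
        total += steps
    return total / n_walks

mtt_hub = mean_trapping_time(0)    # trap at the hub
mtt_leaf = mean_trapping_time(2)   # trap at a boundary leaf
```

For this toy tree the exact values are 3.5 for the hub trap and 125/6 ≈ 20.8 for the leaf trap (solvable by first-step analysis), so the simulation illustrates why trapping efficiency depends strongly on the trap's location.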
Geographic location, network patterns and population distribution of rural settlements in Greece
NASA Astrophysics Data System (ADS)
Asimakopoulos, Avraam; Mogios, Emmanuel; Xenikos, Dimitrios G.
2016-10-01
Our work addresses the problem of how social networks are embedded in space by studying the spread of human population over complex geomorphological terrain. We focus on villages and small cities of up to a few thousand inhabitants located in mountainous areas in Greece. This terrain presents a familiar tree-like structure of valleys and land plateaus. Cities are found more often at lower altitudes and exhibit a preference for southern orientation. Furthermore, the population generally avoids flat land plateaus and river beds, preferring locations slightly uphill, away from the plateau edge. Despite the location diversity regarding geomorphological parameters, we find certain quantitative norms when we examine location and population distributions relative to the (man-made) transportation network. In particular, settlements at radial distance ℓ away from road network junctions have the same mean altitude, practically independent of ℓ ranging from a few meters to 10 km. Similarly, the distribution of the settlement population at any given ℓ is the same for all ℓ. Finally, the cumulative distribution of the number of rural cities n(ℓ) is fitted to the Weibull distribution, suggesting that human decisions for creating settlements could be paralleled to mechanisms typically attributed to this particular statistical distribution.
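A Weibull fit of the kind used for n(ℓ) can be sketched with the standard linearization of the Weibull CDF, F(ℓ) = 1 − exp(−(ℓ/λ)^k), which becomes a straight line in ln(−ln(1−F)) versus ln ℓ. The distances and empirical CDF values below are invented for illustration, not the paper's data.

```python
import math

# Hypothetical empirical CDF of settlements within distance l (km) of junctions
l_vals = [0.5, 1.0, 2.0, 4.0, 8.0]
cdf = [0.10, 0.26, 0.52, 0.80, 0.96]   # illustrative values only

# Weibull CDF F(l) = 1 - exp(-(l/lam)**k) linearizes as
#   ln(-ln(1 - F)) = k*ln(l) - k*ln(lam)
# so an ordinary least-squares line gives the shape k and scale lam
xs = [math.log(l) for l in l_vals]
ys = [math.log(-math.log(1.0 - f)) for f in cdf]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
k = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
    sum((x - mx) ** 2 for x in xs)
lam = math.exp(mx - my / k)
```

For these toy values the fit gives a shape parameter near 1.2 and a scale near 2.8 km; with real settlement data one would also check goodness of fit before drawing the mechanistic parallels the abstract mentions.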
Gerber, Daniel L.; Vossos, Vagelis; Feng, Wei; ...
2017-06-12
Direct current (DC) power distribution has recently gained traction in buildings research due to the proliferation of on-site electricity generation and battery storage, and an increasing prevalence of internal DC loads. The research discussed in this paper uses Modelica-based simulation to compare the efficiency of DC building power distribution with an equivalent alternating current (AC) distribution. The buildings are all modeled with solar generation, battery storage, and loads that are representative of the most efficient building technology. A variety of parametric simulations determine how and when DC distribution proves advantageous. These simulations also validate previous studies that use simpler approaches and arithmetic efficiency models. This work shows that using DC distribution can be considerably more efficient: a medium-sized office building using DC distribution has an expected baseline of 12% savings, but may also save up to 18%. In these results, the baseline simulation parameters are for a zero net energy (ZNE) building that can island as a microgrid. DC is most advantageous in buildings with large solar capacity, large battery capacity, and high-voltage distribution.
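The arithmetic efficiency models that the paper validates against can be sketched as a product of conversion-stage efficiencies along each path. The stage values below are illustrative assumptions, not the paper's numbers.

```python
def path_efficiency(stage_efficiencies):
    """Fraction of source energy delivered through a chain of converters."""
    eta = 1.0
    for e in stage_efficiencies:
        eta *= e
    return eta

# Assumed illustrative stages: AC path = PV inverter, then a load-side
# rectifier; DC path = a single DC/DC conversion stage.
eta_ac = path_efficiency([0.96, 0.94])
eta_dc = path_efficiency([0.98])

# Relative source-energy savings of DC for the same delivered load.
savings = 1.0 - eta_ac / eta_dc
```

With these assumed values the DC path saves roughly 8% of source energy, the same order as the 12-18% reported above (the paper's model also includes battery and distribution losses).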
Efficient transformer study: Analysis of manufacture and utility data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burkes, Klaehn; Cordaro, Joe; McIntosh, John
Distribution transformers convert power from the distribution-system voltage to the voltage used by end customers, including residences, businesses, distributed generation, campus systems, and manufacturing facilities. Amorphous metal distribution transformers (AMDT) have lower core losses, but they are also more expensive and heavier than conventional silicon-steel distribution transformers. This, together with the difficulty of measuring the benefit of improved energy efficiency and low awareness of the technology, has hindered the adoption of AMDT. This report presents the cost savings from installing AMDT and the amount of energy saved through the improved efficiency.
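The energy-savings arithmetic for an amorphous-core unit reduces to the no-load (core) loss difference accumulated over a year of continuous energization. The watt figures below are assumptions for illustration, not values from the report.

```python
def annual_core_loss_savings_kwh(no_load_loss_conv_w, no_load_loss_amdt_w,
                                 hours_per_year=8760):
    """Distribution transformers stay energized around the clock, so the
    no-load (core) loss accrues every hour of the year; amorphous cores
    reduce this loss. Returns the annual savings in kWh."""
    return (no_load_loss_conv_w - no_load_loss_amdt_w) * hours_per_year / 1000.0

# Assumed illustrative losses: 60 W conventional core vs 20 W amorphous core.
savings_kwh = annual_core_loss_savings_kwh(60.0, 20.0)
```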
NASA Astrophysics Data System (ADS)
Kopka, Piotr; Wawrzynczak, Anna; Borysiewicz, Mieczyslaw
2016-11-01
In this paper the Bayesian methodology known as Approximate Bayesian Computation (ABC) is applied to the problem of atmospheric contamination source identification. The algorithm's input data are the concentrations of the released substance arriving on-line from a distributed sensor network. This paper presents the Sequential ABC algorithm in detail and tests its efficiency in estimating the probabilistic distributions of the atmospheric release parameters of a mobile contamination source. The developed algorithms are tested using data from the Over-Land Atmospheric Diffusion (OLAD) field tracer experiment. The paper demonstrates estimation of seven parameters characterizing the contamination source: the source's starting position (x,y), its direction of motion (d), its velocity (v), the release rate (q), the start time of the release (ts) and its duration (td). Newly arriving concentrations dynamically update the probability distributions of the search parameters. The atmospheric dispersion Second-order Closure Integrated PUFF (SCIPUFF) model is used as the forward model to predict the concentrations at the sensor locations.
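The core ABC idea (the paper uses a sequential variant driven by the SCIPUFF forward model) can be sketched with plain rejection sampling. The one-parameter toy forward model, prior, and tolerance below are all assumptions for illustration.

```python
import random

def abc_rejection(observed, forward, prior_draw, eps, n_draws, rng):
    """Keep parameter draws whose simulated sensor readings fall within
    eps of the observed ones (rejection ABC)."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_draw(rng)
        sim = forward(theta)
        if max(abs(s - o) for s, o in zip(sim, observed)) < eps:
            accepted.append(theta)
    return accepted

# Toy forward model: sensor i sees concentration q / (1 + i) for release
# rate q (an assumption standing in for the dispersion model).
def forward(q):
    return [q / (1 + i) for i in range(5)]

rng = random.Random(0)
observed = forward(2.0)  # synthetic "measurements" with true q = 2.0
posterior = abc_rejection(observed, forward,
                          lambda r: r.uniform(0.0, 10.0), 0.1, 20000, rng)
q_hat = sum(posterior) / len(posterior)
```

The accepted draws approximate the posterior of the release rate, and their mean recovers the true value used to generate the observations.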
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hudson, W.G.
Scapteriscus vicinus is the most important pest of turf and pasture grasses in Florida. This study develops a method of correlating sample results with true population density and provides the first quantitative information on spatial distribution and movement patterns of mole crickets. Three basic techniques for sampling mole crickets were compared: soil flushes, soil corer, and pitfall trapping. No statistical difference was found between the soil corer and soil flushing. Soil flushing was shown to be more sensitive to changes in population density than pitfall trapping. No technique was effective for sampling adults. Regression analysis provided a means of adjusting for the effects of soil moisture and showed soil temperature to be unimportant in predicting the efficiency of flush sampling. Cesium-137 was used to label females for subsequent location underground. Comparison of the mean distance to the nearest neighbor with the distance predicted by a random distribution model showed that the observed distance in the spring was significantly greater than hypothesized (Student's t-test, p < 0.05). Fall adult nearest-neighbor distance did not differ from that predicted by the random distribution hypothesis.
NASA Technical Reports Server (NTRS)
Bednarcyk, Brett A.; Aboudi, Jacob; Yarrington, Phillip W.
2007-01-01
The simplified shear solution method is presented for approximating the through-thickness shear stress distribution within a composite laminate based on laminated beam theory. The method does not consider the solution of a particular boundary value problem, rather it requires only knowledge of the global shear loading, geometry, and material properties of the laminate or panel. It is thus analogous to lamination theory in that ply level stresses can be efficiently determined from global load resultants (as determined, for instance, by finite element analysis) at a given location in a structure and used to evaluate the margin of safety on a ply by ply basis. The simplified shear solution stress distribution is zero at free surfaces, continuous at ply boundaries, and integrates to the applied shear load. Comparisons to existing theories are made for a variety of laminates, and design examples are provided illustrating the use of the method for determining through-thickness shear stress margins in several types of composite panels and in the context of a finite element structural analysis.
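For a single homogeneous ply, the through-thickness distribution described above reduces to the classical parabola: zero at the free surfaces, maximum 1.5·V/A at the mid-plane, and integrating to the applied shear V. A sketch of this single-ply special case (general laminates need the full per-ply bookkeeping of the method):

```python
def shear_stress(z, V, h, b):
    """Transverse shear stress at height z (measured from the mid-plane)
    in a homogeneous rectangular section of height h and width b carrying
    shear force V: parabolic, zero at z = +/- h/2, peak 1.5*V/A at z = 0."""
    A = b * h
    return 1.5 * (V / A) * (1.0 - (2.0 * z / h) ** 2)
```

Integrating shear_stress over the thickness (times the width b) recovers the applied shear load, mirroring the integral property stated in the abstract.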
Interoperating Cloud-based Virtual Farms
NASA Astrophysics Data System (ADS)
Bagnasco, S.; Colamaria, F.; Colella, D.; Casula, E.; Elia, D.; Franco, A.; Lusso, S.; Luparello, G.; Masera, M.; Miniello, G.; Mura, D.; Piano, S.; Vallero, S.; Venaruzzo, M.; Vino, G.
2015-12-01
The present work aims at optimizing the use of computing resources available at the grid Italian Tier-2 sites of the ALICE experiment at CERN LHC by making them accessible to interactive distributed analysis, thanks to modern solutions based on cloud computing. The scalability and elasticity of the computing resources via dynamic ("on-demand") provisioning is essentially limited by the size of the computing site, reaching the theoretical optimum only in the asymptotic case of infinite resources. The main challenge of the project is to overcome this limitation by federating different sites through a distributed cloud facility. Storage capacities of the participating sites are seen as a single federated storage area, avoiding the need to mirror data across them: high data-access efficiency is guaranteed by location-aware analysis software and storage interfaces, transparently from the end-user perspective. Moreover, the interactive analysis on the federated cloud reduces the execution time with respect to grid batch jobs. Tests of the investigated solutions for both cloud computing and distributed storage on a wide-area network will be presented.
Nonlinear Reduced-Order Analysis with Time-Varying Spatial Loading Distributions
NASA Technical Reports Server (NTRS)
Przekop, Adam
2008-01-01
Oscillating shocks acting in combination with high-intensity acoustic loadings present a challenge to the design of resilient hypersonic flight vehicle structures. This paper addresses some features of this loading condition and certain aspects of a nonlinear reduced-order analysis with emphasis on system identification leading to formation of a robust modal basis. The nonlinear dynamic response of a composite structure subject to the simultaneous action of locally strong oscillating pressure gradients and high-intensity acoustic loadings is considered. The reduced-order analysis used in this work has been previously demonstrated to be both computationally efficient and accurate for time-invariant spatial loading distributions, provided that an appropriate modal basis is used. The challenge of the present study is to identify a suitable basis for loadings with time-varying spatial distributions. Using a proper orthogonal decomposition and modal expansion, it is shown that such a basis can be developed. The basis is made more robust by incrementally expanding it to account for changes in the location, frequency and span of the oscillating pressure gradient.
Jiang, Mengzhen; Chen, Haiying; Chen, Qinghui
2013-11-01
With the purpose of providing a scientific basis for environmental planning for non-point source pollution prevention and control, and of improving pollution-regulation efficiency, this paper established a Grid Landscape Contrast Index based on the Location-weighted Landscape Contrast Index, according to the "source-sink" theory. The spatial distribution of non-point source pollution in the Jiulongjiang Estuary was then derived from high-resolution remote sensing images. The results showed that the "source" area for nitrogen and phosphorus in the Jiulongjiang Estuary was 534.42 km(2) in 2008, and the "sink" area was 172.06 km(2). The "source" of non-point source pollution was distributed mainly over Xiamen island, most of Haicang, the east of Jiaomei and the river banks of Gangwei and Shima; the "sink" was distributed over the southwest of Xiamen island and the west of Shima. Generally speaking, the intensity of "source" areas weakens with increasing distance from the sea boundary, while that of "sink" areas strengthens. Copyright © 2013 Elsevier Ltd. All rights reserved.
Counterfactual quantum key distribution with high efficiency
NASA Astrophysics Data System (ADS)
Sun, Ying; Wen, Qiao-Yan
2010-11-01
In a counterfactual quantum key distribution scheme, a secret key can be generated merely by transmitting the split vacuum pulses of single particles. We improve the efficiency of the first quantum key distribution scheme based on the counterfactual phenomenon. This scheme not only achieves the same security level as the original one but also has higher efficiency. We also analyze how to achieve the optimal efficiency under various conditions.
Janssen, Mathieu F; Bonsel, Gouke J; Luo, Nan
2018-06-01
This study describes the first empirical head-to-head comparison of EQ-5D-3L (3L) and EQ-5D-5L (5L) value sets for multiple countries. A large multinational dataset, including 3L and 5L data for eight patient groups and a student cohort, was used to compare 3L versus 5L value sets for Canada, China, England/UK (5L/3L, respectively), Japan, The Netherlands, South Korea and Spain. We used distributional analyses and two methods exploring discriminatory power: relative efficiency as assessed by the F statistic, and an area under the curve for the receiver-operating characteristics approach. Differences in outcomes were explored by separating descriptive system effects from valuation effects, and by exploring distributional location effects. In terms of distributional evenness, efficiency of scale use and the face validity of the resulting distributions, 5L was superior, leading to an increase in sensitivity and precision in health status measurement. When compared with 5L, 3L systematically overestimated health problems and consequently underestimated utilities. This led to bias, i.e. over- or underestimations of discriminatory power. We conclude that 5L provides more precise measurement at individual and group levels, both in terms of descriptive system data and utilities. The increased sensitivity and precision of 5L is likely to be generalisable to longitudinal studies, such as in intervention designs. Hence, we recommend the use of the 5L across applications, including economic evaluation, clinical and public health studies. The evaluative framework proved to be useful in assessing preference-based instruments and might be useful for future work in the development of descriptive systems or health classifications.
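The area-under-the-curve comparison used in the study is equivalent to the Mann-Whitney statistic: the probability that a randomly chosen case from one group scores above a randomly chosen case from the other, with ties counting half. A minimal sketch:

```python
def roc_auc(scores_pos, scores_neg):
    """AUC for the receiver-operating characteristic via the
    Mann-Whitney U statistic: fraction of (positive, negative) pairs
    where the positive scores higher, with ties counting 0.5."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))
```

An AUC of 1.0 means perfect separation of the two groups, 0.5 means chance-level discrimination.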
Contrast statistics for foveated visual systems: fixation selection by minimizing contrast entropy
NASA Astrophysics Data System (ADS)
Raj, Raghu; Geisler, Wilson S.; Frazor, Robert A.; Bovik, Alan C.
2005-10-01
The human visual system combines a wide field of view with a high-resolution fovea and uses eye, head, and body movements to direct the fovea to potentially relevant locations in the visual scene. This strategy is sensible for a visual system with limited neural resources. However, for this strategy to be effective, the visual system needs sophisticated central mechanisms that efficiently exploit the varying spatial resolution of the retina. To gain insight into some of the design requirements of these central mechanisms, we have analyzed the effects of variable spatial resolution on local contrast in 300 calibrated natural images. Specifically, for each retinal eccentricity (which produces a certain effective level of blur), and for each value of local contrast observed at that eccentricity, we measured the probability distribution of the local contrast in the unblurred image. These conditional probability distributions can be regarded as posterior probability distributions for the "true" unblurred contrast, given an observed contrast at a given eccentricity. We find that these conditional probability distributions are adequately described by a few simple formulas. To explore how these statistics might be exploited by central perceptual mechanisms, we consider the task of selecting successive fixation points, where the goal on each fixation is to maximize total contrast information gained about the image (i.e., minimize total contrast uncertainty). We derive an entropy minimization algorithm and find that it performs optimally at reducing total contrast uncertainty and that it also works well at reducing the mean squared error between the original image and the image reconstructed from the multiple fixations. Our results show that measurements of local contrast alone could efficiently drive the scan paths of the eye when the goal is to gain as much information about the spatial structure of a scene as possible.
Privacy-Preserving Location-Based Service Scheme for Mobile Sensing Data.
Xie, Qingqing; Wang, Liangmin
2016-11-25
With the wide use of mobile sensing application, more and more location-embedded data are collected and stored in mobile clouds, such as iCloud, Samsung cloud, etc. Using these data, the cloud service provider (CSP) can provide location-based service (LBS) for users. However, the mobile cloud is untrustworthy. The privacy concerns force the sensitive locations to be stored on the mobile cloud in an encrypted form. However, this brings a great challenge to utilize these data to provide efficient LBS. To solve this problem, we propose a privacy-preserving LBS scheme for mobile sensing data, based on the RSA (for Rivest, Shamir and Adleman) algorithm and ciphertext policy attribute-based encryption (CP-ABE) scheme. The mobile cloud can perform location distance computing and comparison efficiently for authorized users, without location privacy leakage. In the end, theoretical security analysis and experimental evaluation demonstrate that our scheme is secure against the chosen plaintext attack (CPA) and efficient enough for practical applications in terms of user side computation overhead.
Geostatistical Sampling Methods for Efficient Uncertainty Analysis in Flow and Transport Problems
NASA Astrophysics Data System (ADS)
Liodakis, Stylianos; Kyriakidis, Phaedon; Gaganis, Petros
2015-04-01
In hydrogeological applications involving flow and transport in heterogeneous porous media, the spatial distribution of hydraulic conductivity is often parameterized in terms of a lognormal random field based on a histogram and variogram model inferred from data and/or synthesized from relevant knowledge. Realizations of simulated conductivity fields are then generated using geostatistical simulation involving simple random (SR) sampling and are subsequently used as inputs to physically-based simulators of flow and transport in a Monte Carlo framework for evaluating the uncertainty in the spatial distribution of solute concentration due to the uncertainty in the spatial distribution of hydraulic conductivity [1]. Realistic uncertainty analysis, however, calls for a large number of simulated concentration fields; hence, it can become expensive in terms of both time and computer resources. A more efficient alternative to SR sampling is Latin hypercube (LH) sampling, a special case of stratified random sampling, which yields a more representative distribution of simulated attribute values with fewer realizations [2]. Here, the term "representative" implies realizations spanning efficiently the range of possible conductivity values corresponding to the lognormal random field. In this work we investigate the efficiency of alternative methods to classical LH sampling within the context of simulation of flow and transport in a heterogeneous porous medium. More precisely, we consider the stratified likelihood (SL) sampling method of [3], in which attribute realizations are generated using the polar simulation method by exploring the geometrical properties of the multivariate Gaussian distribution function.
In addition, we propose a more efficient version of the above method, here termed minimum energy (ME) sampling, whereby a set of N representative conductivity realizations at M locations is constructed by: (i) generating a representative set of N points distributed on the surface of a M-dimensional, unit radius hyper-sphere, (ii) relocating the N points on a representative set of N hyper-spheres of different radii, and (iii) transforming the coordinates of those points to lie on N different hyper-ellipsoids spanning the multivariate Gaussian distribution. The above method is applied in a dimensionality reduction context by defining flow-controlling points over which representative sampling of hydraulic conductivity is performed, thus also accounting for the sensitivity of the flow and transport model to the input hydraulic conductivity field. The performance of the various stratified sampling methods, LH, SL, and ME, is compared to that of SR sampling in terms of reproduction of ensemble statistics of hydraulic conductivity and solute concentration for different sample sizes N (numbers of realizations). The results indicate that ME sampling constitutes an equally if not more efficient simulation method than LH and SL sampling, as it can reproduce to a similar extent statistics of the conductivity and concentration fields, yet with smaller sampling variability than SR sampling. References [1] Gutjahr A.L. and Bras R.L. Spatial variability in subsurface flow and transport: A review. Reliability Engineering & System Safety, 42, 293-316, (1993). [2] Helton J.C. and Davis F.J. Latin hypercube sampling and the propagation of uncertainty in analyses of complex systems. Reliability Engineering & System Safety, 81, 23-69, (2003). [3] Switzer P. Multiple simulation of spatial fields. 
In: Heuvelink G, Lemmens M (eds) Proceedings of the 4th International Symposium on Spatial Accuracy Assessment in Natural Resources and Environmental Sciences, Coronet Books Inc., pp 629-635 (2000).
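Classical LH sampling of the lognormal marginal can be sketched in a few lines: one uniform draw per probability stratum, shuffled, then pushed through the inverse CDF. (The SL and ME schemes discussed above additionally stratify the multivariate distribution; this sketch covers only the univariate LH baseline.)

```python
import math
import random
from statistics import NormalDist

def latin_hypercube_uniform(n, rng):
    """One uniform draw per stratum [i/n, (i+1)/n), in shuffled order."""
    u = [(i + rng.random()) / n for i in range(n)]
    rng.shuffle(u)
    return u

def lognormal_lh_sample(n, mu, sigma, rng):
    """LH sample of a lognormal marginal: stratified uniforms pushed
    through the inverse normal CDF, then exponentiated."""
    nd = NormalDist(mu, sigma)
    return [math.exp(nd.inv_cdf(u)) for u in latin_hypercube_uniform(n, rng)]

rng = random.Random(42)
sample = lognormal_lh_sample(1000, 0.0, 1.0, rng)
```

Because every probability stratum contributes exactly one draw, the sample spans the conductivity range far more evenly than simple random sampling of the same size.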
Bhanot, Gyan V [Princeton, NJ; Chen, Dong [Croton-On-Hudson, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Steinmacher-Burow, Burkhard D [Mount Kisco, NY; Vranas, Pavlos M [Bedford Hills, NY
2012-01-10
The present invention is directed to a method, system and program storage device for efficiently implementing a multidimensional Fast Fourier Transform (FFT) of a multidimensional array comprising a plurality of elements initially distributed in a multi-node computer system comprising a plurality of nodes in communication over a network, comprising: distributing the plurality of elements of the array in a first dimension across the plurality of nodes of the computer system over the network to facilitate a first one-dimensional FFT; performing the first one-dimensional FFT on the elements of the array distributed at each node in the first dimension; re-distributing the one-dimensional FFT-transformed elements at each node in a second dimension via "all-to-all" distribution in random order across other nodes of the computer system over the network; and performing a second one-dimensional FFT on elements of the array re-distributed at each node in the second dimension, wherein the random order facilitates efficient utilization of the network thereby efficiently implementing the multidimensional FFT. The "all-to-all" re-distribution of array elements is further efficiently implemented in applications other than the multidimensional FFT on the distributed-memory parallel supercomputer.
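The decomposition the patent exploits (one-dimensional transforms along one dimension, a data redistribution, then one-dimensional transforms along the next) can be checked serially, with a transpose standing in for the all-to-all step. The naive DFT below is for clarity, not speed.

```python
import cmath

def dft(x):
    """Naive 1-D discrete Fourier transform of a sequence x."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n)
                for k in range(n))
            for j in range(n)]

def dft2d(a):
    """2-D DFT as 1-D DFTs along the rows, a transpose (the serial
    analogue of the all-to-all redistribution), then 1-D DFTs along
    the new rows."""
    rows = [dft(row) for row in a]
    cols = [dft(list(col)) for col in zip(*rows)]
    return [list(r) for r in zip(*cols)]
```

Comparing against the direct double-sum definition confirms that the row-transform / redistribute / column-transform pipeline computes the full multidimensional transform.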
NASA Astrophysics Data System (ADS)
Kuras, P. K.; Weiler, M.; Alila, Y.; Spittlehouse, D.; Winkler, R.
2006-12-01
Hydrologic models have been increasingly used in forest hydrology to overcome the limitations of paired watershed experiments, where vegetative recovery and natural variability obscure the inferences and conclusions that can be drawn from such studies. Models, however, are also plagued by uncertainty stemming from a limited understanding of hydrological processes in forested catchments, and parameter equifinality is a common concern. This has created the necessity to improve our understanding of how hydrological systems work, through the development of hydrological measures, analyses and models that address the question: are we getting the right answers for the right reasons? Hence, physically-based, spatially-distributed hydrologic models should be validated with high-quality experimental data describing multiple concurrent internal catchment processes under a range of hydrologic regimes. The distributed hydrology soil vegetation model (DHSVM) frequently used in forest management applications is an example of a process-based model used to address the aforementioned circumstances, and this study takes a novel approach by collectively examining the ability of a pre-calibrated model application to realistically simulate outlet flows along with the spatial-temporal variation of internal catchment processes including: continuous groundwater dynamics at 9 locations, stream and road network flow at 67 locations for six individual days throughout the freshet, and pre-melt season snow distribution. Model efficiency was improved over prior evaluations due to continuous efforts in improving the quality of meteorological data in the watershed. Road and stream network flows were very well simulated for a range of hydrological conditions, and the spatial distribution of the pre-melt season snowpack was in general agreement with observed values.
The model was effective in simulating the spatial variability of subsurface flow generation, except at locations where strong stream-groundwater interactions existed, as the model is not capable of simulating such processes and subsurface flows always drain to the stream network. The model has proven overall to be quite capable in realistically simulating internal catchment processes in the watershed, which creates more confidence in future model applications exploring the effects of various forest management scenarios on the watershed's hydrological processes.
Baum, Rex L.; Godt, Jonathan W.; Savage, William Z.
2010-01-01
Shallow rainfall-induced landslides commonly occur under conditions of transient infiltration into initially unsaturated soils. In an effort to predict the timing and location of such landslides, we developed a model of the infiltration process using a two-layer system that consists of an unsaturated zone above a saturated zone and implemented this model in a geographic information system (GIS) framework. The model links analytical solutions for transient, unsaturated, vertical infiltration above the water table to pressure-diffusion solutions for pressure changes below the water table. The solutions are coupled through a transient water table that rises as water accumulates at the base of the unsaturated zone. This scheme, though limited to simplified soil-water characteristics and moist initial conditions, greatly improves computational efficiency over numerical models in spatially distributed modeling applications. Pore pressures computed by these coupled models are subsequently used in one-dimensional slope-stability computations to estimate the timing and locations of slope failures. Applied over a digital landscape near Seattle, Washington, for an hourly rainfall history known to trigger shallow landslides, the model computes a factor of safety for each grid cell at any time during a rainstorm. The unsaturated layer attenuates and delays the rainfall-induced pore-pressure response of the model at depth, consistent with observations at an instrumented hillside near Edmonds, Washington. This attenuation results in realistic estimates of timing for the onset of slope instability (7 h earlier than observed landslides, on average). By considering the spatial distribution of physical properties, the model predicts the primary source areas of landslides.
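The one-dimensional slope-stability step reduces, per grid cell, to the infinite-slope factor of safety, with pore pressure from the infiltration model entering through the pressure head. A sketch with assumed soil parameters (the values below are illustrative, not the study's):

```python
import math

def factor_of_safety(c, phi_deg, gamma, z, beta_deg, psi, gamma_w=9.81):
    """Infinite-slope factor of safety at depth z (m): cohesion c (kPa),
    friction angle phi (deg), soil unit weight gamma (kN/m^3), slope
    angle beta (deg), pressure head psi (m) supplied by the infiltration
    model; gamma_w is the unit weight of water (kN/m^3)."""
    beta, phi = math.radians(beta_deg), math.radians(phi_deg)
    u = gamma_w * psi                                   # pore pressure (kPa)
    driving = gamma * z * math.sin(beta) * math.cos(beta)
    normal = gamma * z * math.cos(beta) ** 2            # total normal stress
    return (c + (normal - u) * math.tan(phi)) / driving

fs_dry = factor_of_safety(4.0, 33.0, 19.0, 2.0, 35.0, psi=0.0)
fs_wet = factor_of_safety(4.0, 33.0, 19.0, 2.0, 35.0, psi=1.5)
```

Raising the pressure head lowers the factor of safety toward 1, the onset of instability, which is how the transient pore-pressure response maps to landslide timing.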
Identifying and mitigating errors in satellite telemetry of polar bears
Arthur, Stephen M.; Garner, Gerald W.; Olson, Tamara L.
1998-01-01
Satellite radiotelemetry is a useful method of tracking movements of animals that travel long distances or inhabit remote areas. However, the logistical constraints that encourage the use of satellite telemetry also inhibit efforts to assess accuracy of the resulting data. To investigate effectiveness of methods that might be used to improve the reliability of these data, we compared 3 sets of criteria designed to select the most plausible locations of polar bears (Ursus maritimus) that were tracked using satellite radiotelemetry in the Bering, Chukchi, East Siberian, Laptev, and Kara seas during 1988-93. We also evaluated several indices of location accuracy. Our results suggested that, although indices could provide information useful in evaluating location accuracy, no index or set of criteria was sufficient to identify all the implausible locations. Thus, it was necessary to examine the data and make subjective decisions about which locations to accept or reject. However, by using a formal set of selection criteria, we simplified the task of evaluating locations and ensured that decisions were made consistently. This approach also enabled us to evaluate biases that may be introduced by the criteria used to identify location errors. For our study, the best set of selection criteria comprised: (1) rejecting locations for which the distance to the nearest other point from the same day was >50 km; (2) determining the highest accuracy code (NLOC) for a particular day and rejecting locations from that day with lesser values; and (3) from the remaining locations for each day, selecting the location closest to the location chosen for the previous transmission period. Although our selection criteria seemed unlikely to bias studies of habitat use or geographic distribution, basing selection decisions on distances between points might bias studies of movement rates or distances. 
It is unlikely that any set of criteria will be best for all situations; to make efficient use of data and minimize bias, these rules must be tailored to specific study objectives.
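The three selection criteria could be applied per transmission day roughly as follows. This is a minimal sketch, not the authors' implementation: the `Fix` record, its field names, and the Argos-style accuracy code `nloc` are illustrative assumptions.

```python
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class Fix:
    day: int          # observation day
    lat: float
    lon: float
    nloc: int         # location-accuracy code (higher = better)

def haversine_km(a: Fix, b: Fix) -> float:
    """Great-circle distance between two fixes in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (a.lat, a.lon, b.lat, b.lon))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(h))

def select_daily_fix(day_fixes, previous, max_gap_km=50.0):
    """Apply the three criteria to one day's fixes; return the chosen fix or None."""
    # (1) reject fixes >50 km from the nearest other fix of the same day
    if len(day_fixes) > 1:
        day_fixes = [f for f in day_fixes
                     if min(haversine_km(f, g) for g in day_fixes if g is not f) <= max_gap_km]
    if not day_fixes:
        return None
    # (2) keep only fixes carrying the day's highest accuracy code
    best = max(f.nloc for f in day_fixes)
    day_fixes = [f for f in day_fixes if f.nloc == best]
    # (3) of the remainder, choose the fix closest to the previously accepted one
    if previous is None:
        return day_fixes[0]
    return min(day_fixes, key=lambda f: haversine_km(f, previous))
```

As the abstract cautions, a rule set like this still needs to be tuned to the study objectives, since distance-based rejection can bias movement-rate estimates.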
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-12
... Intent to Negotiate Proposed Rule on Energy Efficiency Standards for Distribution Transformers AGENCY... transformers. The purpose of the subcommittee will be to discuss and, if possible, reach consensus on a proposed rule for the energy efficiency of distribution transformers, as authorized by the Energy Policy...
Deep generative learning of location-invariant visual word recognition.
Di Bono, Maria Grazia; Zorzi, Marco
2013-01-01
It is widely believed that orthographic processing implies an approximate, flexible coding of letter position, as shown by relative-position and transposition priming effects in visual word recognition. These findings have inspired alternative proposals about the representation of letter position, ranging from noisy coding across the ordinal positions to relative position coding based on open bigrams. This debate can be cast within the broader problem of learning location-invariant representations of written words, that is, a coding scheme abstracting the identity and position of letters (and combinations of letters) from their eye-centered (i.e., retinal) locations. We asked whether location-invariance would emerge from deep unsupervised learning on letter strings and what type of intermediate coding would emerge in the resulting hierarchical generative model. We trained a deep network with three hidden layers on an artificial dataset of letter strings presented at five possible retinal locations. Though word-level information (i.e., word identity) was never provided to the network during training, linear decoding from the activity of the deepest hidden layer yielded near-perfect accuracy in location-invariant word recognition. Conversely, decoding from lower layers yielded a large number of transposition errors. Analyses of emergent internal representations showed that word selectivity and location invariance increased as a function of layer depth. Word-tuning and location-invariance were found at the level of single neurons, but there was no evidence for bigram coding. Finally, the distributed internal representation of words at the deepest layer showed higher similarity to the representation elicited by the two exterior letters than by other combinations of two contiguous letters, in agreement with the hypothesis that word edges have special status. 
These results reveal that the efficient coding of written words-which was the model's learning objective-is largely based on letter-level information.
Analysis on Voltage Profile of Distribution Network with Distributed Generation
NASA Astrophysics Data System (ADS)
Shao, Hua; Shi, Yujie; Yuan, Jianpu; An, Jiakun; Yang, Jianhua
2018-02-01
Penetration of distributed generation has some impacts on a distribution network in load flow, voltage profile, reliability, power loss and so on. After analyzing these impacts and the typical structures of grid-connected distributed generation, the back/forward sweep method for load flow calculation of the distribution network is modelled to include distributed generation. The voltage profiles of the distribution network as affected by the installation location and capacity of distributed generation are thoroughly investigated and simulated. The impacts on the voltage profiles are summarized, and corresponding suggestions on the installation location and capacity of distributed generation are given.
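The back/forward sweep idea can be illustrated for the simplest case, a single radial feeder laid out as a chain of buses, with a distributed generator entered as a negative load. The bus numbering, chain topology, and constant-power DG model are simplifying assumptions of this sketch, not the paper's model.

```python
import numpy as np

def backward_forward_sweep(v_source, z_line, p_load, q_load, tol=1e-8, max_iter=100):
    """
    Backward/forward sweep load flow on a radial chain feeder.
    Bus 0 is the source; z_line[k] is the impedance of the line into bus k+1.
    Loads are per-bus P, Q; a distributed generator is a negative load.
    Returns per-bus complex voltages.
    """
    n = len(p_load)
    v = np.full(n, v_source, dtype=complex)
    s = np.asarray(p_load, dtype=float) + 1j * np.asarray(q_load, dtype=float)
    for _ in range(max_iter):
        # backward sweep: accumulate branch currents from the feeder end
        i_bus = np.conj(s / v)                      # bus load/injection currents
        i_branch = np.cumsum(i_bus[::-1])[::-1]     # current in the line feeding each bus
        # forward sweep: update voltages outward from the source
        v_new = np.empty_like(v)
        v_new[0] = v_source
        for k in range(1, n):
            v_new[k] = v_new[k - 1] - z_line[k - 1] * i_branch[k]
        if np.max(np.abs(v_new - v)) < tol:
            return v_new
        v = v_new
    return v
```

With a load at the end bus the voltage sags below the source value; flipping the sign of the load (a DG injection) raises it, which is the voltage-profile effect the abstract studies.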
NASA Astrophysics Data System (ADS)
Davies, D.; Murphy, K. J.; Michael, K.
2013-12-01
NASA's Land Atmosphere Near real-time Capability for EOS (Earth Observing System) (LANCE) provides data and imagery from Terra, Aqua and Aura satellites in less than 3 hours from satellite observation, to meet the needs of the near real-time (NRT) applications community. This article describes the architecture of LANCE and outlines the modifications made to achieve the 3-hour latency requirement with a view to informing future NRT satellite distribution capabilities. It also describes how latency is determined. LANCE is a distributed system that builds on the existing EOS Data and Information System (EOSDIS) capabilities. To achieve the NRT latency requirement, many components of the EOS satellite operations, ground and science processing systems have been made more efficient without compromising the quality of science data processing. The EOS Data and Operations System (EDOS) processes the NRT stream with higher priority than the science data stream in order to minimize latency. In addition to expediting transfer times, the key difference between the NRT Level 0 products and those for standard science processing is the data used to determine the precise location and tilt of the satellite. Standard products use definitive geo-location (attitude and ephemeris) data provided daily, whereas NRT products use predicted geo-location provided by the instrument Global Positioning System (GPS) or approximation of navigational data (depending on platform). Level 0 data are processed into higher-level products at designated Science Investigator-led Processing Systems (SIPS). The processes used by LANCE have been streamlined and adapted to work with datasets as soon as they are downlinked from satellites or transmitted from ground stations. Level 2 products that require ancillary data have modified production rules to relax the requirements for ancillary data, thereby reducing processing times. 
Looking to the future, experience gained from LANCE can provide valuable lessons on satellite and ground system architectures and on how the delivery of NRT products from other NASA missions might be achieved.
The role of experience in location estimation: Target distributions shift location memory biases.
Lipinski, John; Simmering, Vanessa R; Johnson, Jeffrey S; Spencer, John P
2010-04-01
Research based on the Category Adjustment model concluded that the spatial distribution of target locations does not influence location estimation responses [Huttenlocher, J., Hedges, L., Corrigan, B., & Crawford, L. E. (2004). Spatial categories and the estimation of location. Cognition, 93, 75-97]. This conflicts with earlier results showing that location estimation is biased relative to the spatial distribution of targets [Spencer, J. P., & Hund, A. M. (2002). Prototypes and particulars: Geometric and experience-dependent spatial categories. Journal of Experimental Psychology: General, 131, 16-37]. Here, we resolve this controversy by using a task based on Huttenlocher et al. (Experiment 4) with minor modifications to enhance our ability to detect experience-dependent effects. Results after the first block of trials replicate the pattern reported in Huttenlocher et al. After additional experience, however, participants showed biases that significantly shifted according to the target distributions. These results are consistent with the Dynamic Field Theory, an alternative theory of spatial cognition that integrates long-term memory traces across trials relative to the perceived structure of the task space. Copyright 2009 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Mineck, Raymond E.; Vijgen, Paul M. H. W.
1993-01-01
Three planar, untwisted wings with the same elliptical chord distribution but with different curvatures of the quarter-chord line were tested in the Langley 8-Foot Transonic Pressure Tunnel (8-ft TPT) and the Langley 7- by 10-Foot High-Speed Tunnel (7 x 10 HST). A fourth wing with a rectangular planform and the same projected area and span was also tested. Force and moment measurements from the 8-ft TPT tests are presented for Mach numbers from 0.3 to 0.5 and angles of attack from -4 degrees to 7 degrees. Sketches of the oil-flow patterns on the upper surfaces of the wings and some force and moment measurements from the 7 x 10 HST tests are presented at a Mach number of 0.5. Increasing the curvature of the quarter-chord line makes the angle of zero lift more negative but has little effect on the drag coefficient at zero lift. The changes in lift-curve slope and in the Oswald efficiency factor with the change in curvature of the quarter-chord line (wingtip location) indicate that the elliptical wing with the unswept quarter-chord line has the lowest lifting efficiency and the elliptical wing with the unswept trailing edge has the highest lifting efficiency; the crescent-shaped planform wing has an efficiency in between.
Sampling probability distributions of lesions in mammograms
NASA Astrophysics Data System (ADS)
Looney, P.; Warren, L. M.; Dance, D. R.; Young, K. C.
2015-03-01
One approach to image perception studies in mammography using virtual clinical trials involves the insertion of simulated lesions into normal mammograms. To facilitate this, a method has been developed that allows for sampling of lesion positions across the cranio-caudal and medio-lateral radiographic projections in accordance with measured distributions of real lesion locations. 6825 mammograms from our mammography image database were segmented to find the breast outline. The outlines were averaged and smoothed to produce an average outline for each laterality and radiographic projection. Lesions in 3304 mammograms with malignant findings were mapped on to a standardised breast image corresponding to the average breast outline using piecewise affine transforms. A four dimensional probability distribution function was found from the lesion locations in the cranio-caudal and medio-lateral radiographic projections for calcification and noncalcification lesions. Lesion locations sampled from this probability distribution function were mapped on to individual mammograms using a piecewise affine transform which transforms the average outline to the outline of the breast in the mammogram. The four dimensional probability distribution function was validated by comparing it to the two dimensional distributions found by considering each radiographic projection and laterality independently. The correlation of the location of the lesions sampled from the four dimensional probability distribution function across radiographic projections was shown to match the correlation of the locations of the original mapped lesion locations. The current system has been implemented as a web-service on a server using the Python Django framework. The server performs the sampling, performs the mapping and returns the results in a javascript object notation format.
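The core sampling step, drawing correlated cranio-caudal/medio-lateral locations from a discrete four-dimensional probability distribution function, can be sketched with NumPy. The Gaussian "lesion positions", the grid of 16 bins, and the unit-square standardised coordinates here are illustrative placeholders, not the paper's measured distributions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical paired lesion positions on a standardised breast outline:
# (x, y) in the cranio-caudal view and (x, y) in the medio-lateral view.
cc = rng.normal([0.5, 0.4], 0.1, size=(3304, 2))
ml = rng.normal([0.45, 0.55], 0.1, size=(3304, 2))

# Build the 4D probability distribution function on a coarse grid.
bins = 16
sample4d = np.clip(np.hstack([cc, ml]), 0, 1 - 1e-9)
hist, edges = np.histogramdd(sample4d, bins=bins, range=[(0, 1)] * 4)
pdf = hist / hist.sum()

# Draw correlated CC/ML locations by sampling a flat bin index and unravelling it.
flat_idx = rng.choice(pdf.size, size=10, p=pdf.ravel())
idx = np.unravel_index(flat_idx, pdf.shape)
# Jitter uniformly within each bin so samples are continuous, not gridded.
samples = np.stack([edges[d][idx[d]] + rng.uniform(0, 1 / bins, 10)
                    for d in range(4)], axis=1)
```

Sampling the joint 4D histogram, rather than two independent 2D ones, is what preserves the cross-projection correlation that the paper validates.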
Optimizing an ELF/VLF Phased Array at HAARP
NASA Astrophysics Data System (ADS)
Fujimaru, S.; Moore, R. C.
2013-12-01
The goal of this study is to maximize the amplitude of 1-5 kHz ELF/VLF waves generated by ionospheric HF heating and measured at a ground-based ELF/VLF receiver. The optimization makes use of experimental observations performed during ELF/VLF wave generation experiments at the High-frequency Active Auroral Research Program (HAARP) Observatory in Gakona, Alaska. During these experiments, the amplitude, phase, and propagation delay of the ELF/VLF waves were carefully measured. The HF beam was aimed at 15 degrees zenith angle in 8 different azimuthal directions, equally spaced in a circle, while broadcasting a 3.25 MHz (X-mode) signal that was amplitude modulated (square wave) with a linear frequency-time chirp between 1 and 5 kHz. The experimental observations are used to provide reference amplitudes, phases, and propagation delays for ELF/VLF waves generated at these specific locations. The presented optimization accounts for the trade-off between duty cycle, heated area, and the distributed nature of the source region in order to construct a "most efficient" phased array. The amplitudes and phases generated by modulated heating at each location are combined in post-processing to find an optimal combination of duty cycle, heating location, and heating order.
NASA Astrophysics Data System (ADS)
Sun, Kaioqiong; Udupa, Jayaram K.; Odhner, Dewey; Tong, Yubing; Torigian, Drew A.
2014-03-01
This paper proposes a thoracic anatomy segmentation method based on hierarchical recognition and delineation guided by a built fuzzy model. Labeled binary samples for each organ are registered and aligned into a 3D fuzzy set representing the fuzzy shape model for the organ. The gray intensity distributions of the corresponding regions of the organ in the original image are recorded in the model. The hierarchical relation and mean location relation between different organs are also captured in the model. Following the hierarchical structure and location relation, the fuzzy shape model of different organs is registered to the given target image to achieve object recognition. A fuzzy connected delineation method is then used to obtain the final segmentation result of organs with seed points provided by recognition. The hierarchical structure and location relation integrated in the model provide the initial parameters for registration and make the recognition efficient and robust. The 3D fuzzy model combined with hierarchical affine registration ensures that accurate recognition can be obtained for both non-sparse and sparse organs. The results on real images are presented and shown to be better than a recently reported fuzzy model-based anatomy recognition strategy.
Cui, Yong; Wang, Qiusheng; Yuan, Haiwen; Song, Xiao; Hu, Xuemin; Zhao, Luxing
2015-01-01
In wireless sensor networks (WSNs) used for electric field measurement under High-Voltage Direct Current (HVDC) transmission lines, the electric field distribution must be obtained with multiple sensors, and the location information of each sensor is essential to the correct analysis of measurement results. Compared with the existing approach, which gathers location information by manually labelling sensors during deployment, automatic localization can reduce the workload and improve measurement efficiency. A novel and practical range-free localization algorithm for one-dimensional linear-topology wireless networks in the electric field measurement system is presented. The algorithm utilizes unknown nodes' neighbor lists, based on Received Signal Strength Indicator (RSSI) values, to determine the relative locations of nodes. The algorithm also handles exceptional cases in the output permutation, which effectively improves localization accuracy. The performance of this algorithm under real circumstances has been evaluated through several experiments with different numbers of nodes and different node deployments in the China State Grid HVDC test base. Results show that the proposed algorithm achieves an accuracy of over 96% under different conditions. PMID:25658390
Cui, Yong; Wang, Qiusheng; Yuan, Haiwen; Song, Xiao; Hu, Xuemin; Zhao, Luxing
2015-02-04
In wireless sensor networks (WSNs) used for electric field measurement under High-Voltage Direct Current (HVDC) transmission lines, the electric field distribution must be obtained with multiple sensors, and the location information of each sensor is essential to the correct analysis of measurement results. Compared with the existing approach, which gathers location information by manually labelling sensors during deployment, automatic localization can reduce the workload and improve measurement efficiency. A novel and practical range-free localization algorithm for one-dimensional linear-topology wireless networks in the electric field measurement system is presented. The algorithm utilizes unknown nodes' neighbor lists, based on Received Signal Strength Indicator (RSSI) values, to determine the relative locations of nodes. The algorithm also handles exceptional cases in the output permutation, which effectively improves localization accuracy. The performance of this algorithm under real circumstances has been evaluated through several experiments with different numbers of nodes and different node deployments in the China State Grid HVDC test base. Results show that the proposed algorithm achieves an accuracy of over 96% under different conditions.
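The RSSI-ordering idea can be illustrated with a much-simplified sketch. It assumes RSSI decreases monotonically with distance and that every node hears every other node; under those assumptions an end node is the one that hears the rest most weakly on average, and sorting the remaining nodes by their RSSI at that end node recovers the chain. The function name and the monotonicity assumption are ours; the published algorithm works from neighbor lists and additionally handles exceptional output permutations.

```python
def order_linear_nodes(rssi):
    """
    Infer the relative order of nodes on a one-dimensional line from pairwise
    RSSI values. rssi[a][b] is the signal strength node a hears from node b
    (higher means closer). Returns node indices in chain order (up to reversal).
    """
    n = len(rssi)
    # An end node hears the rest of the chain most weakly on average.
    end = min(range(n), key=lambda a: sum(rssi[a][b] for b in range(n) if b != a))
    # From an end node, sorting the others by descending RSSI yields the chain.
    rest = sorted((b for b in range(n) if b != end),
                  key=lambda b: rssi[end][b], reverse=True)
    return [end] + rest
```

Note the recovered order is only defined up to reversal, which is all a relative-location scheme needs.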
NASA Astrophysics Data System (ADS)
Lu, Siqi; Wang, Xiaorong; Wu, Junyong
2018-01-01
The paper presents a method, based on a data-driven K-means clustering algorithm, to generate planning scenarios for the location and size planning of distributed photovoltaic (PV) units in the network. Taking the power losses of the network, the installation and maintenance costs of distributed PV, the profit of distributed PV, and the voltage offset as objectives, and the locations and sizes of distributed PV as decision variables, the Pareto optimal front is obtained through a self-adaptive genetic algorithm (GA), and solutions are ranked by the technique for order preference by similarity to an ideal solution (TOPSIS). Finally, the planning schemes at the top of the ranking list are selected according to different planning emphases after detailed analysis. The proposed method is applied to a 10-kV distribution network in Gansu Province, China, and the results are discussed.
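TOPSIS itself is a standard ranking step and can be sketched compactly. The criteria layout (e.g. losses to minimize, profit to maximize) and the weights below are placeholders, not the paper's values.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """
    Rank alternatives with TOPSIS. matrix is (schemes x criteria), weights
    sum to 1, and benefit[j] is True when larger values of criterion j are
    better (e.g. PV profit) and False when smaller is better (e.g. losses).
    Returns closeness scores; higher means nearer the ideal solution.
    """
    m = np.asarray(matrix, dtype=float)
    w = np.asarray(weights, dtype=float)
    benefit = np.asarray(benefit, dtype=bool)
    # vector-normalise each criterion column, then apply the weights
    v = w * m / np.linalg.norm(m, axis=0)
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)   # distance to ideal solution
    d_neg = np.linalg.norm(v - anti, axis=1)    # distance to anti-ideal
    return d_neg / (d_pos + d_neg)
```

A scheme that dominates another on every criterion always receives the higher closeness score, which is what makes TOPSIS usable for ordering a Pareto front.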
The effect of food bolus location on jaw movement smoothness and masticatory efficiency.
Molenaar, W N B; Gezelle Meerburg, P J; Luraschi, J; Whittle, T; Schimmel, M; Lobbezoo, F; Peck, C C; Murray, G M; Minami, I
2012-09-01
Masticatory efficiency in individuals with extensive tooth loss has been widely discussed. However, little is known about jaw movement smoothness during chewing and the effect of differences in food bolus location on movement smoothness and masticatory efficiency. The aim of this study was to determine whether experimental differences in food bolus location (anterior versus posterior) had an effect on masticatory efficiency and jaw movement smoothness. Jaw movement smoothness was evaluated by measuring jerk-cost (calculated from acceleration) with an accelerometer attached to the skin of the mentum of 10 asymptomatic subjects; acceleration was recorded during chewing on two-colour chewing gum, which was used to assess masticatory efficiency. Chewing was performed under two conditions: posterior chewing (chewing on molars and premolars only) and anterior chewing (chewing on canine and first premolar teeth only). Jerk-cost and masticatory efficiency (calculated as the ratio of unmixed azure colour to the total area of gum, the unmixed fraction) were compared between anterior and posterior chewing with the Wilcoxon signed rank test (two-tailed). Subjects chewed significantly less efficiently during anterior chewing than during posterior chewing (P = 0·0051). There was no significant difference in jerk-cost between anterior and posterior conditions in the opening phase (P = 0·25) or closing phase (P = 0·42). This is the first characterisation of the effect of food bolus location on jaw movement smoothness recorded simultaneously with masticatory efficiency. The data suggest that anterior chewing decreases masticatory efficiency but does not influence jerk-cost. © 2012 Blackwell Publishing Ltd.
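Jerk-cost is conventionally the time integral of squared jerk, with jerk obtained by differentiating the acceleration signal. A generic sketch from a sampled accelerometer trace might look like the following; the study's exact windowing and normalisation are not specified here.

```python
import numpy as np

def jerk_cost(acceleration, fs):
    """
    Jerk-cost of a movement phase: the time integral of squared jerk.
    acceleration is a sampled accelerometer trace (m/s^2), fs the sampling
    rate in Hz. Jerk is the numerical time derivative of acceleration.
    """
    a = np.asarray(acceleration, dtype=float)
    dt = 1.0 / fs
    jerk = np.gradient(a, dt)                   # da/dt, in m/s^3
    j2 = jerk ** 2
    # trapezoidal integration of squared jerk over the phase
    return dt * (j2.sum() - 0.5 * (j2[0] + j2[-1]))
```

A perfectly smooth (constant-acceleration) movement has zero jerk-cost; any acceleration fluctuation raises it, which is why it serves as a smoothness measure.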
Geographical distributions of lake trout strains stocked in Lake Ontario
Elrod, Joseph H.; O'Gorman, Robert; Schneider, Clifford P.; Schaner, Ted
1996-01-01
Geographical distributions of lake trout (Salvelinus namaycush) stocked at seven locations in U.S. waters and at four locations in Canadian waters of Lake Ontario were determined from fish caught with gill nets in September in 17 areas of U.S. waters and at 10 fixed locations in Canadian waters in 1986-95. For fish of a given strain stocked at a given location, geographical distributions were not different for immature males and immature females or for mature males and mature females. The proportion of total catch at the three locations nearest the stocking location was higher for mature fish than for immature fish in all 24 available comparisons (sexes combined) and was greater for fish stocked as yearlings than for those stocked as fingerlings in all eight comparisons. Mature fish were relatively widely dispersed from stocking locations indicating that their tendency to return to stocking locations for spawning was weak, and there was no appreciable difference in this tendency among strains. Mature lake trout were uniformly distributed among sampling locations, and the strain composition at stocking locations generally reflected the stocking history 5 to 6 years earlier. Few lake trout moved across Lake Ontario between the north and south shores or between the eastern outlet basin and the main lake basin. Limited dispersal from stocking sites supports the concept of stocking different genetic strains in various parts of the lake with the attributes of each strain selected to match environmental conditions in the portion of the lake where it is stocked.
Kontodimopoulos, Nick; Moschovakis, Giorgos; Aletras, Vassilis H; Niakas, Dimitris
2007-11-17
The purpose of this study was to compare technical and scale efficiency of primary care centers from the two largest Greek providers, the National Health System (NHS) and the Social Security Foundation (IKA), and to determine if, and how, efficiency is affected by various exogenous factors such as catchment population and location. The sample comprised 194 units (103 NHS and 91 IKA). Efficiency was measured with Data Envelopment Analysis (DEA) using three inputs (medical staff, nursing/paramedical staff, administrative/other staff) and two outputs, which were the aggregated numbers of scheduled/emergency patient visits and imaging/laboratory diagnostic tests. Facilities were categorized as small, medium and large (<15,000, 15,000-30,000 and >30,000 respectively) to reflect catchment population, and as urban/semi-urban or remote/island to reflect location. In a second-stage analysis, technical and scale efficiency scores were regressed against facility type (NHS or IKA), size and location using multivariate Tobit regression. Regarding technical efficiency, IKA performed better than the NHS (84.9% vs. 70.1%, Mann-Whitney P < 0.001), smaller units better than medium-sized and larger ones (84.2% vs. 72.4% vs. 74.3%, Kruskal-Wallis P < 0.01) and remote/island units better than urban centers (81.1% vs. 75.7%, Mann-Whitney P = 0.103). As for scale efficiency, IKA again outperformed the NHS (89.7% vs. 85.9%, Mann-Whitney P = 0.080), but results were reversed with respect to facility size and location. Specifically, larger units performed better (96.3% vs. 90.9% vs. 75.9%, Kruskal-Wallis P < 0.001), and urban units showed higher scale efficiency than remote ones (91.9% vs. 75.3%, Mann-Whitney P < 0.001). Interestingly, 75% of facilities appeared to be functioning under increasing returns to scale. Within-group comparisons revealed significant efficiency differences between the two primary care providers. 
Tobit regression models showed that facility type, size and location were significant explanatory variables of technical and scale efficiency. Variations appeared to exist in the productive performance of the NHS and IKA as the two main primary care providers in Greece. These variations reflect differences in primary care organization, economical incentives, financial constraints, sociodemographic and local peculiarities. In all technical efficiency comparisons, IKA facilities appeared to outperform NHS ones irrespective of facility size or location. In respect to scale efficiency, the results were to some extent inconclusive and observed differences were mostly insignificant, although again IKA appeared to perform better.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, Zhiwen; Eichman, Joshua D; Kurtz, Jennifer M
This paper presents the feasibility and economics of using fuel cell backup power systems in telecommunication cell towers to provide grid services (e.g., ancillary services, demand response). The fuel cells are able to provide power for the cell tower during emergency conditions. This study evaluates the strategic integration of clean, efficient, and reliable fuel cell systems with the grid for improved economic benefits. The backup systems have potential as enhanced capability through information exchanges with the power grid to add value as grid services that depend on location and time. The economic analysis has been focused on the potential revenue for distributed telecommunications fuel cell backup units to provide value-added power supply. This paper shows case studies on current fuel cell backup power locations and regional grid service programs. The grid service benefits and system configurations for different operation modes provide opportunities for expanding backup fuel cell applications responsive to grid needs.
Ergosterol is mainly located in the cytoplasmic leaflet of the yeast plasma membrane.
Solanko, Lukasz M; Sullivan, David P; Sere, Yves Y; Szomek, Maria; Lunding, Anita; Solanko, Katarzyna A; Pizovic, Azra; Stanchev, Lyubomir D; Pomorski, Thomas Günther; Menon, Anant K; Wüstner, Daniel
2018-03-01
Transbilayer lipid asymmetry is a fundamental characteristic of the eukaryotic cell plasma membrane (PM). While PM phospholipid asymmetry is well documented, the transbilayer distribution of PM sterols such as mammalian cholesterol and yeast ergosterol is not reliably known. We now report that sterols are asymmetrically distributed across the yeast PM, with the majority (~80%) located in the cytoplasmic leaflet. By exploiting the sterol-auxotrophic hem1Δ yeast strain we obtained cells in which endogenous ergosterol was quantitatively replaced with dehydroergosterol (DHE), a closely related fluorescent sterol that functionally and accurately substitutes for ergosterol in vivo. Using fluorescence spectrophotometry and microscopy we found that <20% of DHE fluorescence was quenched when the DHE-containing cells were exposed to membrane-impermeant collisional quenchers (spin-labeled phosphatidylcholine and trinitrobenzene sulfonic acid). Efficient quenching was seen only after the cells were disrupted by glass-bead lysis or repeated freeze-thaw to allow quenchers access to the cell interior. The extent of quenching was unaffected by treatments that deplete cellular ATP levels, collapse the PM electrochemical gradient or affect the actin cytoskeleton. However, alterations in PM phospholipid asymmetry in cells lacking phospholipid flippases resulted in a more symmetric transbilayer distribution of sterol. Similarly, an increase in the quenchable pool of DHE was observed when PM sphingolipid levels were reduced by treating cells with myriocin. We deduce that sterols comprise up to ~45% of all inner leaflet lipids in the PM, a result that necessitates revision of current models of the architecture of the PM lipid bilayer. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Revolutionary Aeropropulsion Concept for Sustainable Aviation: Turboelectric Distributed Propulsion
NASA Technical Reports Server (NTRS)
Kim, Hyun Dae; Felder, James L.; Tong, Michael T.; Armstrong, Michael
2013-01-01
In response to growing aviation demands and concerns about the environment and energy usage, a team at NASA proposed and examined a revolutionary aeropropulsion concept, a turboelectric distributed propulsion system, which employs multiple electric motor-driven propulsors that are distributed on a large transport vehicle. The power to drive these electric propulsors is generated by separately located gas-turbine-driven electric generators on the airframe. This arrangement enables the use of many small distributed propulsors, allowing a very high effective bypass ratio, while retaining the superior efficiency of large core engines, which are physically separated but connected to the propulsors through electric power lines. Because of the physical separation of propulsors from power generating devices, a new class of vehicles with unprecedented performance employing such a revolutionary propulsion system is possible in vehicle design. One such vehicle currently being investigated by NASA is called the "N3-X", which uses a hybrid-wing-body for an airframe and superconducting generators, motors, and transmission lines for its propulsion system. On the N3-X these new degrees of design freedom are used (1) to place two large turboshaft engines driving generators in freestream conditions to minimize total pressure losses and (2) to embed a broad continuous array of 14 motor-driven fans on the upper surface of the aircraft near the trailing edge of the hybrid-wing-body airframe to maximize propulsive efficiency by ingesting thick airframe boundary layer flow. Through a system analysis in engine cycle and weight estimation, it was determined that the N3-X would be able to achieve a reduction of 70% or 72% (depending on the cooling system) in energy usage relative to the reference aircraft, a Boeing 777-200LR. 
Since the high-power electric system is used in its propulsion system, a study of the electric power distribution system was performed to identify critical dynamic and safety issues. This paper presents some of the features and issues associated with the turboelectric distributed propulsion system and summarizes the recent study results, including the high electric power distribution, in the analysis of the N3-X vehicle.
Yun, Min Ju; Sim, Yeon Hyang; Cha, Seung I; Seo, Seon Hee; Lee, Dong Y
2017-11-08
Dye-sensitized solar cells (DSSCs) have been considered promising alternatives to silicon-based solar cells owing to characteristics such as high efficiency under weak illumination and a power output that is insensitive to incident angle. Much research has therefore aimed at improving the energy conversion efficiency of DSSCs; nevertheless, their efficiency remains trapped at around 10%. In this study, a micro-scale hexagonally patterned photoanode is proposed to modify the distribution of photons. With the patterned electrode, the apparent efficiency increased from 7.1% to 7.8% with respect to the active area, and an efficiency of 12.7% was obtained based on the photoanode area. Enhanced electron diffusion and modification of the photon distribution through the electrode morphology are the major factors improving the performance of the patterned electrode. Finite element method analyses of photon distributions were also conducted to estimate the morphological effects on photon distribution and current density. These results suggest that the patterned electrode is one way to overcome the stagnant efficiency and an optimized electrode geometry for modifying the photon distribution. The inter-patterning process in the photoanode has been minimized.
Garcia, Tanya P; Ma, Yanyuan
2017-10-01
We develop consistent and efficient estimation of parameters in general regression models with mismeasured covariates. We assume the model error and covariate distributions are unspecified, and the measurement error distribution is a general parametric distribution with unknown variance-covariance. We construct root-n consistent, asymptotically normal and locally efficient estimators using the semiparametric efficient score. We do not estimate any unknown distribution or model error heteroskedasticity. Instead, we form the estimator under possibly incorrect working distribution models for the model error, error-prone covariate, or both. Empirical results demonstrate robustness to different incorrect working models in homoscedastic and heteroskedastic models with error-prone covariates.
An LES-PBE-PDF approach for modeling particle formation in turbulent reacting flows
NASA Astrophysics Data System (ADS)
Sewerin, Fabian; Rigopoulos, Stelios
2017-10-01
Many chemical and environmental processes involve the formation of a polydispersed particulate phase in a turbulent carrier flow. Frequently, the immersed particles are characterized by an intrinsic property such as the particle size, and the distribution of this property across a sample population is taken as an indicator for the quality of the particulate product or its environmental impact. In the present article, we propose a comprehensive model and an efficient numerical solution scheme for predicting the evolution of the property distribution associated with a polydispersed particulate phase forming in a turbulent reacting flow. Here, the particulate phase is described in terms of the particle number density whose evolution in both physical and particle property space is governed by the population balance equation (PBE). Based on the concept of large eddy simulation (LES), we augment the existing LES-transported probability density function (PDF) approach for fluid phase scalars by the particle number density and obtain a modeled evolution equation for the filtered PDF associated with the instantaneous fluid composition and particle property distribution. This LES-PBE-PDF approach allows us to predict the LES-filtered fluid composition and particle property distribution at each spatial location and point in time without any restriction on the chemical or particle formation kinetics. In view of a numerical solution, we apply the method of Eulerian stochastic fields, invoking an explicit adaptive grid technique in order to discretize the stochastic field equation for the number density in particle property space. In this way, sharp moving features of the particle property distribution can be accurately resolved at a significantly reduced computational cost. As a test case, we consider the condensation of an aerosol in a developed turbulent mixing layer. 
Our investigation not only demonstrates the predictive capabilities of the LES-PBE-PDF model but also indicates the computational efficiency of the numerical solution scheme.
Tuning Fractures With Dynamic Data
NASA Astrophysics Data System (ADS)
Yao, Mengbi; Chang, Haibin; Li, Xiang; Zhang, Dongxiao
2018-02-01
Flow in fractured porous media is crucial for production of oil/gas reservoirs and exploitation of geothermal energy. Flow behaviors in such media are mainly dictated by the distribution of fractures. Measuring and inferring the distribution of fractures is subject to large uncertainty, which, in turn, leads to great uncertainty in the prediction of flow behaviors. Inverse modeling with dynamic data may help to constrain fracture distributions, thus reducing the uncertainty of flow prediction. However, inverse modeling for flow in fractured reservoirs is challenging, owing to the discrete and non-Gaussian distribution of fractures, as well as strong nonlinearity in the relationship between flow responses and model parameters. In this work, building upon a series of recent advances, an inverse modeling approach is proposed to efficiently update the flow model to match the dynamic data while retaining geological realism in the distribution of fractures. In the approach, the Hough-transform method is employed to parameterize non-Gaussian fracture fields with continuous parameter fields, thus rendering desirable properties required by many inverse modeling methods. In addition, a recently developed forward simulation method, the embedded discrete fracture method (EDFM), is utilized to model the fractures. The EDFM maintains computational efficiency while preserving the ability to capture the geometrical details of fractures, because the matrix is discretized as a structured grid while the fractures, handled as planes, are inserted into the matrix grid. The combination of the Hough representation of fractures with the EDFM makes it possible to tune the fractures (through updating their existence, location, orientation, length, and other properties) without requiring either unstructured grids or regridding during updating.
Such a treatment is amenable to numerous inverse modeling approaches, such as the iterative inverse modeling method employed in this study, which is capable of dealing with strongly nonlinear problems. A series of numerical case studies with increasing complexity are set up to examine the performance of the proposed approach.
1998-12-01
As the most abundant protein in the circulatory system, albumin contributes about 80% of colloid osmotic blood pressure. Albumin is also chiefly responsible for the maintenance of blood pH. It is found in every tissue and bodily secretion, with the extracellular pool comprising 60% of total albumin. Perhaps the most outstanding property of albumin is its ability to bind reversibly to an incredible variety of ligands. It is widely accepted in the pharmaceutical industry that the overall distribution, metabolism, and efficacy of many drugs are adversely affected by their unusually high affinity for this abundant protein. An understanding of the chemistry of the various classes of pharmaceutical interactions with albumin can suggest new approaches to drug therapy and design. Principal Investigator: Dan Carter/New Century Pharmaceuticals
Efficient numerical simulation of an electrothermal de-icer pad
NASA Technical Reports Server (NTRS)
Roelke, R. J.; Keith, T. G., Jr.; De Witt, K. J.; Wright, W. B.
1987-01-01
In this paper, a new approach to calculating the transient thermal behavior of an iced electrothermal de-icer pad was developed. The method of splines was used to obtain the temperature distribution within the layered pad. Splines were used in order to create a tridiagonal system of equations that could be solved directly by Gauss elimination. The Stefan problem was solved using the enthalpy method along with a recent implicit technique. Only one to three iterations were needed to locate the melt front during any time step. Computational times were shown to be greatly reduced relative to those of an existing one-dimensional procedure without any reduction in accuracy; the current technique was more than 10 times faster.
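The direct solve of a tridiagonal system that the spline discretization enables is the classical Thomas algorithm (O(n) Gaussian elimination). A minimal sketch follows; the spline equations that produce the system are omitted, and the small heat-conduction-style system below is purely illustrative.

```python
def thomas_solve(a, b, c, d):
    """Thomas algorithm: direct O(n) Gaussian elimination for a
    tridiagonal system. a = sub-diagonal (a[0] unused), b = diagonal,
    c = super-diagonal (c[-1] unused), d = right-hand side."""
    n = len(b)
    cp = [0.0] * n  # modified super-diagonal after forward sweep
    dp = [0.0] * n  # modified right-hand side after forward sweep
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    # back substitution
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Toy system [[2,-1,0],[-1,2,-1],[0,-1,2]] x = [1,0,1], solution x = [1,1,1]
x = thomas_solve([0.0, -1.0, -1.0], [2.0, 2.0, 2.0],
                 [-1.0, -1.0, 0.0], [1.0, 0.0, 1.0])
```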
Epistemic uncertainty in the location and magnitude of earthquakes in Italy from Macroseismic data
Bakun, W.H.; Gomez, Capera A.; Stucchi, M.
2011-01-01
Three independent techniques (Bakun and Wentworth, 1997; Boxer from Gasperini et al., 1999; and Macroseismic Estimation of Earthquake Parameters [MEEP; see Data and Resources section, deliverable D3] from R.M.W. Musson and M.J. Jimenez) have been proposed for estimating an earthquake location and magnitude from intensity data alone. The locations and magnitudes obtained for a given set of intensity data are almost always different, and no one technique is consistently best at matching instrumental locations and magnitudes of recent well-recorded earthquakes in Italy. Rather than attempting to select one of the three solutions as best, we use all three techniques to estimate the location and the magnitude and the epistemic uncertainties among them. The estimates are calculated using bootstrap resampled data sets with Monte Carlo sampling of a decision tree. The decision-tree branch weights are based on goodness-of-fit measures of location and magnitude for recent earthquakes. The location estimates are based on the spatial distribution of locations calculated from the bootstrap resampled data. The preferred source location is the locus of the maximum bootstrap location spatial density. The location uncertainty is obtained from contours of the bootstrap spatial density: 68% of the bootstrap locations are within the 68% confidence region, and so on. For large earthquakes, our preferred location is not associated with the epicenter but with a location on the extended rupture surface. For small earthquakes, the epicenters are generally consistent with the location uncertainties inferred from the intensity data if an epicenter inaccuracy of 2-3 km is allowed. The preferred magnitude is the median of the distribution of bootstrap magnitudes. As with location uncertainties, the uncertainties in magnitude are obtained from the distribution of bootstrap magnitudes: the bounds of the 68% uncertainty range enclose 68% of the bootstrap magnitudes, and so on. 
The instrumental magnitudes for large and small earthquakes are generally consistent with the confidence intervals inferred from the distribution of bootstrap resampled magnitudes.
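The bootstrap-percentile logic described above (median of bootstrap magnitudes as the preferred value, 16th/84th percentiles as the 68% range) can be sketched as follows. The per-site magnitude values are hypothetical, and the real procedure resamples intensity data and weights three estimation techniques through a Monte Carlo decision tree, which is omitted here.

```python
import random
import statistics

def bootstrap_magnitude(site_mags, n_boot=2000, seed=7):
    """Bootstrap-resample per-site magnitude estimates and summarize the
    resulting distribution: the median is the preferred magnitude, and
    the 16th/84th percentiles bound an approximate 68% uncertainty range."""
    rng = random.Random(seed)
    boot = []
    for _ in range(n_boot):
        sample = [rng.choice(site_mags) for _ in site_mags]
        boot.append(statistics.mean(sample))
    boot.sort()
    median = boot[len(boot) // 2]
    lo = boot[int(0.16 * len(boot))]
    hi = boot[int(0.84 * len(boot))]
    return median, (lo, hi)

# Hypothetical per-site magnitude estimates derived from intensity data
mags = [5.8, 6.1, 5.9, 6.0, 6.2, 5.7, 6.0, 5.9, 6.1, 5.8]
m, (lo, hi) = bootstrap_magnitude(mags)
```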
Performance assessment of the Gash Delta Spate Irrigation System, Sudan
NASA Astrophysics Data System (ADS)
Ghebreamlak, Araya Z.; Tanakamaru, Haruya; Tada, Akio; Adam, Bashir M. Ahmed; Elamin, Khalid A. E.
2018-02-01
The Gash Delta Spate Irrigation System (GDSIS), located in eastern Sudan with a net command area of 100 000 ha (an area currently equipped with irrigation structures), was established in 1924. The land is irrigated every 3 years (3-year rotation) or every 2 years (2-year rotation) so that about 33 000 or 50 000 ha respectively can be cultivated annually. This study deals with assessing the performance of the 3- and 2-year rotation systems using Monte Carlo simulation. Reliability, which is a measure of how frequently the irrigation water supply satisfies the demand, and vulnerability, which is a measure of the magnitude of failure, were selected as the performance criteria. Combinations of five levels of intake ratio and five levels of irrigation efficiency for the irrigation water supply of each rotation system were analysed. Historical annual flow data of the Gash River for 107 years were fitted to several frequency distributions. The Weibull distribution was the best fit on the basis of the Akaike information criterion and was used for simulating the ensembles of annual river flow. The reliabilities and vulnerabilities of both rotation systems were evaluated at typical values of intake ratio and irrigation efficiency. The results show that (i) the 3-year rotation is more reliable in water supply than the 2-year rotation, (ii) the vulnerability of the 3-year rotation is lower than that of the 2-year rotation and (iii) therefore the 3-year rotation is preferable in the GDSIS. The sensitivities of reliability and vulnerability to changes in intake ratio and irrigation efficiency were also examined.
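A minimal sketch of the Monte Carlo assessment described above, under one common formalization: reliability is the fraction of simulated years in which supply meets demand, and vulnerability is the mean relative deficit in failure years. The Weibull shape/scale and the demand, intake ratio and efficiency values below are hypothetical, not the paper's fitted parameters.

```python
import random

def simulate_performance(shape, scale, demand, intake_ratio, efficiency,
                         n_years=10000, seed=1):
    """Monte Carlo sketch: draw annual river flows from a Weibull
    distribution, convert them to usable irrigation supply, and compute
    reliability (fraction of years supply >= demand) and vulnerability
    (mean relative deficit over the failure years)."""
    rng = random.Random(seed)
    failures = []
    met = 0
    for _ in range(n_years):
        # random.weibullvariate(alpha, beta): alpha = scale, beta = shape
        flow = rng.weibullvariate(scale, shape)
        supply = flow * intake_ratio * efficiency
        if supply >= demand:
            met += 1
        else:
            failures.append((demand - supply) / demand)
    reliability = met / n_years
    vulnerability = sum(failures) / len(failures) if failures else 0.0
    return reliability, vulnerability

# Hypothetical values (volumes in Mm3): annual flow ~ Weibull(shape=1.8, scale=600)
r3, v3 = simulate_performance(1.8, 600.0, demand=250.0,
                              intake_ratio=0.6, efficiency=0.5)
```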
Fluid-structure interaction simulations of the Fontan procedure using variable wall properties.
Long, C C; Hsu, M-C; Bazilevs, Y; Feinstein, J A; Marsden, A L
2012-05-01
Children born with single ventricle heart defects typically undergo a staged surgical procedure culminating in a total cavopulmonary connection (TCPC) or Fontan surgery. The goal of this work was to perform physiologic, patient-specific hemodynamic simulations of two post-operative TCPC patients by using fluid-structure interaction (FSI) simulations. Data from two patients are presented, and post-op anatomy is reconstructed from MRI data. Respiration rate, heart rate, and venous pressures are obtained from catheterization data, and inflow rates are obtained from phase contrast MRI data and are used together with a respiratory model. Lumped parameter (Windkessel) boundary conditions are used at the outlets. We perform FSI simulations by using an arbitrary Lagrangian-Eulerian finite element framework to account for motion of the blood vessel walls in the TCPC. This study is the first to introduce variable elastic properties for the different areas of the TCPC, including a Gore-Tex conduit. Quantities such as wall shear stresses and pressures at critical locations are extracted from the simulation and are compared with pressure tracings from clinical data as well as with rigid wall simulations. Hepatic flow distribution and energy efficiency are also calculated and compared for all cases. There is little effect of FSI on pressure tracings, hepatic flow distribution, and time-averaged energy efficiency. However, the effect of FSI on wall shear stress, instantaneous energy efficiency, and wall motion is significant and should be considered in future work, particularly for accurate prediction of thrombus formation. Copyright © 2012 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Srivastava, Abhay; Tian, Ye; Qie, Xiushu; Wang, Dongfang; Sun, Zhuling; Yuan, Shanfeng; Wang, Yu; Chen, Zhixiong; Xu, Wenjing; Zhang, Hongbo; Jiang, Rubin; Su, Debin
2017-11-01
The performance of the Beijing Lightning Network (BLNET), operated in the Beijing-Tianjin-Hebei urban cluster area, has been evaluated in terms of detection efficiency and relative location accuracy. A self-reference method, in which fast-antenna waveforms were manually examined, was used to determine the detection efficiency of BLNET. Based on the fast-antenna verification, the average detection efficiency of BLNET is 97.4% for intracloud (IC) flashes, 73.9% for cloud-to-ground (CG) flashes and 93.2% for total flashes. The results suggest that CG detection by a dense regional network is highly precise when a thunderstorm passes over the network; however, performance varies from day to day when thunderstorms are outside the network. Further, the CG stroke data from three different lightning location networks across Beijing are compared. The relative detection efficiencies of the World Wide Lightning Location Network (WWLLN) and the Chinese Meteorology Administration Lightning Detection Network (CMA-LDN, also known as ADTD) are approximately 12.4% (16.8%) and 36.5% (49.4%), respectively, compared with the fast antenna (BLNET). The BLNET locations lie in the middle, while the average WWLLN and CMA-LDN locations are biased to the southeast and northwest, respectively. Finally, the IC pulses and CG return-stroke pulses have been compared with S-band Doppler radar observations. This type of study is useful for gauging the approximate performance of lightning location networks in a region and for improving them in the absence of ground truth. For two lightning flashes that struck a tower within BLNET coverage, the horizontal location errors were 52.9 m and 250 m, respectively.
Alternative Fuels Data Center: Natural Gas Fueling Station Locations
Performance analysis of rain attenuation on earth-to-satellite microwave links design in Libya
NASA Astrophysics Data System (ADS)
Rafiqul Islam, Md; Hussein Budalal, Asma Ali; Habaebi, Mohamed H.; Badron, Khairayu; Fadzil Ismail, Ahmad
2017-11-01
The performance of earth-to-satellite microwave links operating in the Ku, Ka, and V bands is degraded by the environment and strongly attenuated by rain. Rain attenuation is the most significant consideration and challenge in designing reliable earth-to-satellite microwave links in these frequency bands. Hence, it is essential for satellite link designers to account for the rain fade margin accurately before system implementation. Rain rate is the main measured parameter for predicting rain attenuation. Rainfall statistics measured and recorded in Libya over a period of 30 years were collected from 5 locations. The prediction methods require rain intensity with a one-minute integration time; therefore, the collected data were analyzed and processed to obtain one-minute rain-rate cumulative distributions for Libya. The model proposed by the ITU-R is used to predict and investigate rain fade based on the converted 1-minute rain-rate data. Rain fade predicted at two locations is used for performance analysis in terms of link spectral efficiency and throughput. For the V-band downlink, 99.99% availability is possible at all stations in the southern part of Libya at a spectral efficiency of 0.29 bps/Hz and a throughput of 20.74 Mbps when a 72 MHz transponder bandwidth is used, which is not feasible in the northern part. The results of this paper will be a useful resource for designing highly reliable earth-to-satellite communication links in Libya.
NASA Astrophysics Data System (ADS)
Fazayeli, Saeed; Eydi, Alireza; Kamalabadi, Isa Nakhai
2017-07-01
Nowadays, organizations have to compete with different competitors at regional, national and international levels, so they have to improve their competitive capabilities to survive. Undertaking activities on a global scale requires a proper distribution system that can take advantage of different transportation modes. Accordingly, the present paper addresses a location-routing problem on a multimodal transportation network. The introduced problem pursues four objectives simultaneously, which form the main contribution of the paper: determining multimodal routes between the supplier and distribution centers, locating mode-changing facilities, locating distribution centers, and determining product delivery tours from the distribution centers to retailers. An integer linear programming model is presented for the problem, and a genetic algorithm with a new chromosome structure is proposed to solve it. The proposed chromosome structure consists of two parts corresponding to the multimodal transportation and location-routing parts of the model. Based on published data in the literature, two numerical cases of different sizes were generated and solved. Different cost scenarios were also designed to better analyze model and algorithm performance. Results show that the algorithm can effectively solve large-size problems within a reasonable time, whereas GAMS software failed to reach an optimal solution even in much longer times.
Sampling efficiency of the Moore egg collector
Worthington, Thomas A.; Brewer, Shannon K.; Grabowski, Timothy B.; Mueller, Julia
2013-01-01
Quantitative studies focusing on the collection of semibuoyant fish eggs, which are associated with a pelagic broadcast-spawning reproductive strategy, are often conducted to evaluate reproductive success. Many of the fishes in this reproductive guild have suffered significant reductions in range and abundance. However, the efficiency of the sampling gear used to evaluate reproduction is often unknown and renders interpretation of the data from these studies difficult. Our objective was to assess the efficiency of a modified Moore egg collector (MEC) using field and laboratory trials. Gear efficiency was assessed by releasing a known quantity of gellan beads with a specific gravity similar to that of eggs from representatives of this reproductive guild (e.g., the Arkansas River Shiner Notropis girardi) into an outdoor flume and recording recaptures. We also used field trials to determine how discharge and release location influenced gear efficiency given current methodological approaches. The flume trials indicated that gear efficiency ranged between 0.0% and 9.5% (n = 57) in a simple 1.83-m-wide channel and was positively related to discharge. Efficiency in the field trials was lower, ranging between 0.0% and 3.6%, and was negatively related to bead release distance from the MEC and discharge. The flume trials indicated that the gellan beads were not distributed uniformly across the channel, although aggregation was reduced at higher discharges. This clustering of passively drifting particles should be considered when selecting placement sites for an MEC; further, the use of multiple devices may be warranted in channels with multiple areas of concentrated flow.
USDA-ARS?s Scientific Manuscript database
The need to understand the effects of enhanced efficiency fertilizers (EEF) on nitrous oxide emissions and agronomic performance was the motivation underpinning this multi-location study across North America. Research locations participating in this study included Ames, IA; Auburn, ...
Efficient packet transportation on complex networks with nonuniform node capacity distribution
NASA Astrophysics Data System (ADS)
He, Xuan; Niu, Kai; He, Zhiqiang; Lin, Jiaru; Jiang, Zhong-Yuan
2015-03-01
Given that node delivery capacity may not be uniformly distributed in many realistic networks, we present a node delivery capacity distribution in which each node's capacity is composed of a uniform fraction and a degree-related proportion. Based on this node delivery capacity distribution, we construct a novel routing mechanism, called the efficient weighted routing (EWR) strategy, to enhance network traffic capacity and transportation efficiency. Compared with the shortest-path routing and efficient routing strategies, EWR achieves the highest traffic capacity. Extensive simulations of average path length, network diameter, maximum efficient betweenness, average efficient betweenness, average travel time and average traffic load indicate that EWR is a very effective routing method. The idea behind this routing mechanism offers useful insight for network science research, and the work has prospective practical use in real complex systems such as the Internet.
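The degree-weighted routing idea can be illustrated with a small Dijkstra search in which traversing edge (i, j) costs (k_i * k_j)^θ, steering paths away from high-degree hubs whose delivery capacity would otherwise saturate. The exact EWR cost function in the paper may differ; the toy network and θ below are illustrative only.

```python
import heapq

def degree_weighted_path(adj, src, dst, theta=1.0):
    """Dijkstra search where edge (u, v) costs (deg(u) * deg(v)) ** theta,
    in the spirit of degree-weighted 'efficient routing' strategies:
    larger theta penalizes hub-to-hub hops more strongly."""
    deg = {u: len(vs) for u, vs in adj.items()}
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v in adj[u]:
            nd = d + (deg[u] * deg[v]) ** theta
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    # reconstruct the path from dst back to src
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

# Toy network: node 0 is a hub (degree 6); the weighted route from 1 to 4
# prefers the periphery 1-2-3-4 over the shorter hub route 1-0-4
adj = {0: [1, 2, 3, 4, 5, 6], 1: [0, 2], 2: [0, 1, 3],
       3: [0, 2, 4], 4: [0, 3], 5: [0], 6: [0]}
route = degree_weighted_path(adj, 1, 4)
```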
Improved field free line magnetic particle imaging using saddle coils.
Erbe, Marlitt; Sattel, Timo F; Buzug, Thorsten M
2013-12-01
Magnetic particle imaging (MPI) is a novel tracer-based imaging method detecting the distribution of superparamagnetic iron oxide (SPIO) nanoparticles in vivo in three dimensions and in real time. Conventionally, MPI uses the signal emitted by SPIO tracer material located at a field free point (FFP). To increase the sensitivity of MPI, however, an alternative encoding scheme collecting the particle signal along a field free line (FFL) was proposed. To provide the magnetic fields needed for line imaging in MPI, a scanner setup that is very efficient with respect to electrical power consumption is needed. At the same time, the scanner needs to provide high magnetic field homogeneity along the FFL as well as parallel to its alignment, to prevent artifacts when the efficient Radon-based reconstruction methods enabled by a line encoding scheme are used. This work presents a dynamic FFL scanner setup for MPI that outperforms all previously presented setups in electrical power consumption as well as magnetic field quality.
Electrostatic channeling in P. falciparum DHFR-TS: Brownian dynamics and Smoluchowski modeling.
Metzger, Vincent T; Eun, Changsun; Kekenes-Huskey, Peter M; Huber, Gary; McCammon, J Andrew
2014-11-18
We perform Brownian dynamics simulations and Smoluchowski continuum modeling of the bifunctional Plasmodium falciparum dihydrofolate reductase-thymidylate synthase (P. falciparum DHFR-TS) with the objective of understanding the electrostatic channeling of dihydrofolate generated at the TS active site to the DHFR active site. The results of Brownian dynamics simulations and Smoluchowski continuum modeling suggest that compared to Leishmania major DHFR-TS, P. falciparum DHFR-TS has a lower but significant electrostatic-mediated channeling efficiency (∼15-25%) at physiological pH (7.0) and ionic strength (150 mM). We also find that removing the electric charges from key basic residues located between the DHFR and TS active sites significantly reduces the channeling efficiency of P. falciparum DHFR-TS. Although several protozoan DHFR-TS enzymes are known to have similar tertiary and quaternary structure, subtle differences in structure, active-site geometry, and charge distribution appear to influence both electrostatic-mediated and proximity-based substrate channeling.
Wind Tunnel Seeding Systems for Laser Velocimeters
NASA Technical Reports Server (NTRS)
Hunter, W. W., Jr. (Compiler); Nichols, C. E., Jr. (Compiler)
1985-01-01
The principal motivating factor for convening the Workshop on the Development and Application of Wind Tunnel Seeding Systems for Laser Velocimeters is the necessity to achieve efficient operation and, most importantly, to ensure accurate measurements with velocimeter techniques. The ultimate accuracy of particle scattering based laser velocimeter measurements of wind tunnel flow fields depends on the ability of the scattering particle to faithfully track the local flow field in which it is embedded. A complex relationship exists between the particle motion and the local flow field. This relationship is dependent on particle size, size distribution, shape, and density. To quantify the accuracy of the velocimeter measurements of the flow field, the researcher has to know the scattering particle characteristics. In order to obtain optimum velocimeter measurements, the researcher is striving to achieve control of the particle characteristics and to verify those characteristics at the measurement point. Additionally, the researcher is attempting to achieve maximum measurement efficiency through control of particle concentration and location in the flow field.
NASA Astrophysics Data System (ADS)
Jeziorska, Justyna; Niedzielski, Tomasz
2018-03-01
River basins located in the Central Sudetes (SW Poland) demonstrate a high vulnerability to flooding. Four mountainous basins and the corresponding outlets have been chosen for modeling the streamflow dynamics using TOPMODEL, a physically based semi-distributed topohydrological model. The model has been calibrated using the Monte Carlo approach, with discharge, rainfall, and evapotranspiration data used to estimate the parameters. The overall performance of the model was judged by interpreting the efficiency measures. TOPMODEL was able to reproduce the main pattern of the hydrograph with acceptable accuracy for two of the investigated catchments but failed to simulate the hydrological response in the remaining two. The best-performing data set achieved a Nash-Sutcliffe efficiency of 0.78 and was chosen for a detailed analysis aimed at estimating the time span of input data for which TOPMODEL performs best. The best fit was attained for the half-year time span. The model was validated and found to exhibit good skill.
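The calibration loop described above (random parameter draws scored by Nash-Sutcliffe efficiency, keeping the best) can be sketched as follows. A toy one-parameter runoff-coefficient "model" stands in for TOPMODEL, and all data values are hypothetical.

```python
import random

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / total variance of the
    observations. NSE = 1 is a perfect fit; NSE <= 0 means the model is
    no better than simply predicting the observed mean."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / ss_tot

def monte_carlo_calibrate(run_model, obs, sample_params, n=1000, seed=3):
    """Monte Carlo calibration sketch: draw random parameter sets,
    score each simulated hydrograph with NSE, and keep the best."""
    rng = random.Random(seed)
    best = (float("-inf"), None)
    for _ in range(n):
        params = sample_params(rng)
        nse = nash_sutcliffe(obs, run_model(params))
        if nse > best[0]:
            best = (nse, params)
    return best

# Hypothetical rainfall signal; 'observed' discharge is exactly 0.5 * rain,
# so calibration should recover a runoff coefficient near 0.5
rain = [0.0, 2.0, 5.0, 3.0, 1.0, 0.0, 4.0, 2.0]
obs = [0.0, 1.0, 2.5, 1.5, 0.5, 0.0, 2.0, 1.0]
best_nse, best_p = monte_carlo_calibrate(
    lambda p: [p * r for r in rain], obs,
    lambda rng: rng.uniform(0.0, 1.0))
```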
Development of evaluation technique of GMAW welding quality based on statistical analysis
NASA Astrophysics Data System (ADS)
Feng, Shengqiang; Terasaki, Hidenri; Komizo, Yuichi; Hu, Shengsun; Chen, Donggao; Ma, Zhihua
2014-11-01
Nondestructive techniques for appraising gas metal arc welding (GMAW) faults play a very important role in on-line quality control and prediction for the GMAW process. Existing on-line approaches have several disadvantages, such as high cost, low efficiency, complexity and strong sensitivity to the environment. An efficient technique for evaluating welding faults based on the Mahalanobis distance (MD) and the normal distribution is presented. In addition, a new piece of equipment, designated the weld quality tester (WQT), is developed based on the proposed evaluation technique. MD is superior to other multidimensional distances, such as the Euclidean distance, because the covariance matrix used for calculating MD takes into account correlations in the data and scaling. The values of MD obtained from the welding current and arc voltage are assumed to follow a normal distribution with mean µ and standard deviation σ. In the proposed evaluation technique used by the WQT, values of MD in the range from zero to µ+3σ are regarded as "good". Two experiments, involving changing the flow of shielding gas and smearing paint on the surface of the substrate, were conducted to verify the sensitivity of the proposed evaluation technique and the feasibility of the WQT. The experimental results demonstrate the usefulness of the WQT for evaluating welding quality. The proposed technique can be applied to implement on-line welding quality control and prediction, which is of great importance for designing novel equipment for weld quality detection.
Areas of high conservation value at risk by plant invaders in Georgia under climate change.
Slodowicz, Daniel; Descombes, Patrice; Kikodze, David; Broennimann, Olivier; Müller-Schärer, Heinz
2018-05-01
Invasive alien plants (IAP) are a threat to biodiversity worldwide. Understanding and anticipating invasions allows for more efficient management. In this regard, predicting potential invasion risks by IAPs is essential to support conservation planning in areas of high conservation value (AHCV), such as sites exhibiting exceptional botanical richness or assemblages of rare, threatened and/or endemic plant species. Here, we identified AHCV in Georgia, a country showing high plant richness, and assessed the susceptibility of these areas to colonization by IAPs under present and future climatic conditions. We used actual protected areas and areas of high plant endemism (identified using occurrences of 114 Georgian endemic plant species) as proxies for AHCV. Then, we assessed the present and future potential distribution of 27 IAPs using species distribution models under four climate change scenarios and stacked the single-species potential distributions into a consensus map representing IAP richness. We evaluated present and future invasion risks in AHCV using IAP richness as a metric of susceptibility. We show that the actual protected areas cover only 9.4% of the areas of high plant endemism in Georgia. IAPs are presently located at lower elevations around the large urban centers and in western Georgia. We predict a shift of IAPs toward eastern Georgia and higher altitudes and an increased susceptibility of AHCV to IAPs under future climate change. Our study provides a good baseline for decision makers and stakeholders on where and how resources should be invested in the most efficient way to protect Georgia's high plant richness from IAPs.
Efficient Use of Distributed Systems for Scientific Applications
NASA Technical Reports Server (NTRS)
Taylor, Valerie; Chen, Jian; Canfield, Thomas; Richard, Jacques
2000-01-01
Distributed computing has been regarded as the future of high performance computing. Nationwide high speed networks such as vBNS are becoming widely available to interconnect high-speed computers, virtual environments, scientific instruments and large data sets. One of the major issues to be addressed with distributed systems is the development of computational tools that facilitate the efficient execution of parallel applications on such systems. These tools must exploit the heterogeneous resources (networks and compute nodes) in distributed systems. This paper presents a tool, called PART, which addresses this issue for mesh partitioning. PART takes advantage of the following heterogeneous system features: (1) processor speed; (2) number of processors; (3) local network performance; and (4) wide area network performance. Further, different finite element applications under consideration may have different computational complexities, different communication patterns, and different element types, which also must be taken into consideration when partitioning. PART uses parallel simulated annealing to partition the domain, taking into consideration network and processor heterogeneity. The results of using PART for an explicit finite element application executing on two IBM SPs (located at Argonne National Laboratory and the San Diego Supercomputer Center) indicate an increase in efficiency by up to 36% as compared to METIS, a widely used mesh partitioning tool. The input to METIS was modified to take into consideration heterogeneous processor performance; METIS does not take into consideration heterogeneous networks. The execution times for these applications were reduced by up to 30% as compared to METIS. These results are given in Figure 1 for four irregular meshes with the number of elements ranging from 11,451 (Barth4 mesh) to 30,269 (Barth5 mesh).
Future work with PART entails using the tool with an integrated application requiring distributed systems. In particular, this application, illustrated in the document, involves an integration of finite element and fluid dynamic simulations to address the cooling of turbine blades in a gas turbine engine design. It is not uncommon to encounter high-temperature, film-cooled turbine airfoils with millions of degrees of freedom, a consequence of the geometric complexity of the airfoil components, which requires fine-grained meshing for accuracy. Additional information is contained in the original.
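The core idea behind PART, simulated annealing over a partition with heterogeneous processor speeds, can be sketched compactly. This toy cost only balances compute time (load divided by speed); the real tool also weighs communication patterns and local/wide-area network performance, which are omitted here, and all parameter values are illustrative.

```python
import math
import random

def sa_partition(n_elems, speeds, n_iter=5000, seed=5):
    """Simulated-annealing sketch of heterogeneity-aware partitioning:
    assign equal-cost mesh elements to processors so that load/speed is
    balanced, i.e. faster processors receive proportionally more
    elements. The cost is the spread of per-processor execution times."""
    rng = random.Random(seed)
    n_proc = len(speeds)
    assign = [rng.randrange(n_proc) for _ in range(n_elems)]

    def imbalance(a):
        loads = [0] * n_proc
        for p in a:
            loads[p] += 1
        times = [loads[q] / speeds[q] for q in range(n_proc)]
        return max(times) - min(times)

    cost, temp = imbalance(assign), 1.0
    for _ in range(n_iter):
        i = rng.randrange(n_elems)
        old = assign[i]
        assign[i] = rng.randrange(n_proc)
        new_cost = imbalance(assign)
        # Metropolis rule: always accept downhill, occasionally uphill
        if new_cost > cost and rng.random() > math.exp((cost - new_cost) / temp):
            assign[i] = old  # reject the uphill move
        else:
            cost = new_cost
        temp *= 0.999  # geometric cooling schedule
    return assign, cost

# Hypothetical workload: 300 equal-cost elements, processor speeds 1:2:3
assign, cost = sa_partition(300, [1.0, 2.0, 3.0])
```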
Circular, confined distribution for charged particle beams
Garnett, Robert W.; Dobelbower, M. Christian
1995-01-01
A charged particle beam line is formed with magnetic optics that transform a charged particle beam from a generally rectangular configuration to a circular cross-section having a uniform particle distribution at a predetermined location. First magnetic optics form the charged particle beam to a generally uniform particle distribution over a square planar area at a known first location. Second magnetic optics receive the charged particle beam with the generally square configuration and impart a phase-space distribution effective to fold the corner portions of the beam toward its core region. The output beam forms a circular configuration having a generally uniform spatial particle distribution over a target area at a predetermined second location.
Circular, confined distribution for charged particle beams
Garnett, R.W.; Dobelbower, M.C.
1995-11-21
A charged particle beam line is formed with magnetic optics that transform a charged particle beam from a generally rectangular configuration to a circular cross-section having a uniform particle distribution at a predetermined location. First magnetic optics form the charged particle beam to a generally uniform particle distribution over a square planar area at a known first location. Second magnetic optics receive the charged particle beam with the generally square configuration and impart a phase-space distribution effective to fold the corner portions of the beam toward its core region. The output beam forms a circular configuration having a generally uniform spatial particle distribution over a target area at a predetermined second location. 26 figs.
Rosenberry, Donald O.; Briggs, Martin A.; Delin, Geoffrey N.; Hare, Danielle K.
2016-01-01
Quantifying flow of groundwater through streambeds often is difficult due to the complexity of aquifer-scale heterogeneity combined with local-scale hyporheic exchange. We used fiber-optic distributed temperature sensing (FO-DTS), seepage meters, and vertical temperature profiling to locate, quantify, and monitor areas of focused groundwater discharge in a geomorphically simple sand-bed stream. This combined approach allowed us to rapidly focus efforts at locations where prodigious amounts of groundwater discharged to the Quashnet River on Cape Cod, Massachusetts, northeastern USA. FO-DTS detected numerous anomalously cold reaches one to several m long that persisted over two summers. Seepage meters positioned upstream, within, and downstream of 7 anomalously cold reaches indicated that rapid groundwater discharge occurred precisely where the bed was cold; median upward seepage was nearly 5 times faster than seepage measured in streambed areas not identified as cold. Vertical temperature profilers deployed next to 8 seepage meters provided diurnal-signal-based seepage estimates that compared remarkably well with seepage-meter values. Regression slope and R² values both were near 1 for seepage ranging from 0.05 to 3.0 m d⁻¹. Temperature-based seepage model accuracy was improved with thermal diffusivity determined locally from diurnal signals. Similar calculations provided values for streambed sediment scour and deposition at subdaily resolution. Seepage was strongly heterogeneous even along a sand-bed river that flows over a relatively uniform sand and fine-gravel aquifer. FO-DTS was an efficient method for detecting areas of rapid groundwater discharge, even in a strongly gaining river, that can then be quantified over time with inexpensive streambed thermal methods.
Using location tracking data to assess efficiency in established clinical workflows.
Meyer, Mark; Fairbrother, Pamela; Egan, Marie; Chueh, Henry; Sandberg, Warren S
2006-01-01
Location tracking systems are becoming more prevalent in clinical settings, yet applications are still not common. We have designed a system to aid in the assessment of clinical workflow efficiency. Location data are captured from active RFID tags and processed into usable form. These data are stored and presented visually, with trending capability over time. The system allows quick assessment of the impact of process changes on workflow, and isolates areas for improvement.
Transfer-Efficient Face Routing Using the Planar Graphs of Neighbors in High Density WSNs
Kim, Sang-Ha
2017-01-01
Face routing has been adopted in wireless sensor networks (WSNs) where topological changes occur frequently or maintaining full network information is difficult. For message forwarding, a planar graph is used to prevent routing loops. However, because planarization removes long edges, the resulting planar graph is composed of short edges, and messages are forwarded along multiple intermediate nodes even when they could be forwarded directly. To solve this, face routing using information on all nodes within 2-hop range was adopted to forward messages directly to the farthest node within radio range. However, as node density increases, network performance plunges because message transfer nodes must receive and process ever more node information. To deal with this problem, we propose a new face routing scheme using the planar graphs of neighboring nodes to improve transfer efficiency. It forwards a message directly to the farthest neighbor and reduces loads and processing time by distributing network graph construction and planarization to the neighbors. It also decreases the amount of location information to be transmitted by sending information on the planar graph nodes rather than on all neighboring nodes. Simulation results show that it significantly improves transfer efficiency. PMID:29053623
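Planarization in geographic routing is commonly done with local rules such as the Gabriel graph test, which each node can evaluate using only its neighbors' positions. The sketch below illustrates that generic rule; it is not necessarily the exact planarization used in the proposed scheme:

```python
def gabriel_graph(points):
    """Gabriel graph planarization (sketch): keep edge (u, v) iff no
    third node lies inside the circle whose diameter is segment u-v.
    This test is local, so it suits distributed geographic routing."""
    n = len(points)
    edges = set()
    for u in range(n):
        for v in range(u + 1, n):
            (ux, uy), (vx, vy) = points[u], points[v]
            mx, my = (ux + vx) / 2.0, (uy + vy) / 2.0      # circle center
            r2 = ((ux - vx) ** 2 + (uy - vy) ** 2) / 4.0   # radius squared
            if all((px - mx) ** 2 + (py - my) ** 2 >= r2
                   for w, (px, py) in enumerate(points) if w not in (u, v)):
                edges.add((u, v))
    return edges
```

Note how the long edge between two far-apart nodes is dropped whenever a witness node sits between them, which is exactly the short-edge effect the paper sets out to compensate for.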
Potential Arbitrage Revenue of Energy Storage Systems in PJM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salles, Mauricio; Huang, Junling; Aziz, Michael
The volatility of electricity prices is attracting interest in the opportunity of providing net revenue by energy arbitrage. We analyzed the potential revenue of a generic Energy Storage System (ESS) in 7395 different locations within the electricity markets of Pennsylvania-New Jersey-Maryland interconnection (PJM), the largest U.S. regional transmission organization, using hourly locational marginal prices over the seven-year period 2008–2014. Assuming a price-taking ESS with perfect foresight in the real-time market, we optimized the charge-discharge profile to determine the maximum potential revenue for a 1 MW system as a function of energy/power ratio, or rated discharge duration, from 1 to 14 h, including a limited analysis of sensitivity to round-trip efficiency. We determined minimum potential revenue with a similar analysis of the day-ahead market. We presented the distribution over the set of nodes and years of price, price volatility, and maximum potential arbitrage revenue. From these results, we determined the break even overnight installed cost of an ESS below which arbitrage would be profitable, its dependence on rated discharge duration, its distribution over grid nodes, and its variation over the years. We showed that dispatch into real-time markets based on day-ahead market settlement prices is a simple, feasible method that raises the lower bound on the achievable arbitrage revenue.
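The perfect-foresight charge-discharge optimization described above can be sketched as a small dynamic program over the state of charge. The prices, power rating, and round-trip efficiency below are illustrative, and the hourly all-or-nothing dispatch is a simplification of the paper's optimization:

```python
def arbitrage_revenue(prices, hours, power=1.0, eff=0.9):
    """Maximum arbitrage revenue for a price-taking storage system with
    perfect foresight (sketch). State = stored energy in units of one
    hour of rated power; each hour the system may charge one unit,
    discharge one unit, or idle. Round-trip efficiency is applied on
    discharge."""
    capacity = hours            # energy/power ratio in hours
    NEG = float("-inf")
    # value[s] = best revenue achievable so far, ending at charge state s
    value = [0.0] + [NEG] * capacity
    for p in prices:
        new = [NEG] * (capacity + 1)
        for s in range(capacity + 1):
            if value[s] == NEG:
                continue
            new[s] = max(new[s], value[s])                         # idle
            if s < capacity:                                       # charge
                new[s + 1] = max(new[s + 1], value[s] - p * power)
            if s > 0:                                              # discharge
                new[s - 1] = max(new[s - 1], value[s] + p * power * eff)
        value = new
    return max(v for v in value if v != NEG)
```

For a two-hour price series [10, 50] $/MWh with a 1 h duration and unit efficiency, the optimum is the obvious buy-low/sell-high cycle; lowering the efficiency shrinks the captured spread proportionally.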
Shear-lag effect and its effect on the design of high-rise buildings
NASA Astrophysics Data System (ADS)
Thanh Dat, Bui; Traykov, Alexander; Traykova, Marina
2018-03-01
For super high-rise buildings, the analysis and selection of suitable structural solutions are very important. The structure has not only to carry the gravity loads (self-weight, live load, etc.), but also to resist lateral loads (wind and earthquake loads). As buildings become taller, the demand on different structural systems dramatically increases. The article considers the division of the structural systems of tall buildings into two main categories: interior structures, for which the major part of the lateral load resisting system is located within the interior of the building, and exterior structures, for which the major part of the lateral load resisting system is located at the building perimeter. The basic types of each of the main structural categories are described. In particular, the framed tube structures, which belong to the second main category of exterior structures, appear to be very efficient. That type of structural system allows tall buildings to resist lateral loads. However, tube systems are affected by the shear-lag effect: a nonlinear distribution of stresses across the sides of the section, commonly found in box girders under lateral loads. Based on a numerical example, some general conclusions for the influence of the shear-lag effect on frequencies, periods, and the distribution and variation of the magnitude of the internal forces in the structure are presented.
NASA Astrophysics Data System (ADS)
Forouzanfar, F.; Tavakkoli-Moghaddam, R.; Bashiri, M.; Baboli, A.; Hadji Molana, S. M.
2017-11-01
This paper studies a location-routing-inventory problem in a multi-period closed-loop supply chain with multiple suppliers, producers, distribution centers, customers, collection centers, recovery, and recycling centers. In this supply chain, centers are multi-level; a price-increase factor is considered for operational costs at centers; inventory and shortage (including lost sales and backlog) are allowed at production centers; and the arrival times of each plant's vehicles at its dedicated distribution centers, as well as their departures, are considered, such that the sum of system costs and the sum of the maximum time at each level are minimized. The aforementioned problem is formulated as a bi-objective nonlinear integer programming model. Due to the NP-hard nature of the problem, two meta-heuristics, namely, non-dominated sorting genetic algorithm (NSGA-II) and multi-objective particle swarm optimization (MOPSO), are used for large instances. In addition, a Taguchi method is used to set the parameters of these algorithms to enhance their performance. To evaluate the efficiency of the proposed algorithms, the results for small-sized problems are compared with the results of the ɛ-constraint method. Finally, four measuring metrics, namely, the number of Pareto solutions, mean ideal distance, spacing metric, and quality metric, are used to compare NSGA-II and MOPSO.
Dominant Middle East oil reserves critically important to world supply
DOE Office of Scientific and Technical Information (OSTI.GOV)
Riva, J.P. Jr.
1991-09-23
This paper reports that the location, production, and transportation of the 60 million bbl of oil consumed in the world each day is of vital importance to relations between nations, as well as to their economic wellbeing. Oil has frequently been a decisive factor in the determination of foreign policy. The war in the Persian Gulf, while a dramatic example of the critical importance of oil, is just the latest in a long line of oil-influenced diplomatic/military incidents, which may be expected to continue. Assuming that the world's remaining oil were evenly distributed and demand did not grow, if exploration and development proceeded as efficiently as they have in the U.S., world oil production could be sustained at around current levels to about the middle of the next century. It then would begin a long decline in response to a depleting resource base. However, the world's remaining oil is very unevenly distributed. It is located primarily in the Eastern Hemisphere, mostly in the Persian Gulf, and much is controlled by the Organization of Petroleum Exporting Countries. Scientific resource assessments indicate that about half of the world's remaining conventionally recoverable crude oil resource occurs in the Persian Gulf area. In terms of proved reserves (known recoverable oil), the Persian Gulf portion increases to almost two-thirds.
Potential Arbitrage Revenue of Energy Storage Systems in PJM
Salles, Mauricio; Huang, Junling; Aziz, Michael; ...
2017-07-27
The volatility of electricity prices is attracting interest in the opportunity of providing net revenue by energy arbitrage. We analyzed the potential revenue of a generic Energy Storage System (ESS) in 7395 different locations within the electricity markets of Pennsylvania-New Jersey-Maryland interconnection (PJM), the largest U.S. regional transmission organization, using hourly locational marginal prices over the seven-year period 2008–2014. Assuming a price-taking ESS with perfect foresight in the real-time market, we optimized the charge-discharge profile to determine the maximum potential revenue for a 1 MW system as a function of energy/power ratio, or rated discharge duration, from 1 to 14 h, including a limited analysis of sensitivity to round-trip efficiency. We determined minimum potential revenue with a similar analysis of the day-ahead market. We presented the distribution over the set of nodes and years of price, price volatility, and maximum potential arbitrage revenue. From these results, we determined the break even overnight installed cost of an ESS below which arbitrage would be profitable, its dependence on rated discharge duration, its distribution over grid nodes, and its variation over the years. We showed that dispatch into real-time markets based on day-ahead market settlement prices is a simple, feasible method that raises the lower bound on the achievable arbitrage revenue.
Evolution of the cerebellum as a neuronal machine for Bayesian state estimation
NASA Astrophysics Data System (ADS)
Paulin, M. G.
2005-09-01
The cerebellum evolved in association with the electric sense and vestibular sense of the earliest vertebrates. Accurate information provided by these sensory systems would have been essential for precise control of orienting behavior in predation. A simple model shows that individual spikes in electrosensory primary afferent neurons can be interpreted as measurements of prey location. Using this result, I construct a computational neural model in which the spatial distribution of spikes in a secondary electrosensory map forms a Monte Carlo approximation to the Bayesian posterior distribution of prey locations given the sense data. The neural circuit that emerges naturally to perform this task resembles the cerebellar-like hindbrain electrosensory filtering circuitry of sharks and other electrosensory vertebrates. The optimal filtering mechanism can be extended to handle dynamical targets observed from a dynamical platform; that is, to construct an optimal dynamical state estimator using spiking neurons. This may provide a generic model of cerebellar computation. Vertebrate motion-sensing neurons have specific fractional-order dynamical characteristics that allow Bayesian state estimators to be implemented elegantly and efficiently, using simple operations with asynchronous pulses, i.e. spikes. The computational neural models described in this paper represent a novel kind of particle filter, using spikes as particles. The models are specific and make testable predictions about computational mechanisms in cerebellar circuitry, while providing a plausible explanation of cerebellar contributions to aspects of motor control, perception and cognition.
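The "spikes as particles" idea can be illustrated with a generic bootstrap particle filter in which each spike is treated as a noisy measurement of target position. The one-dimensional motion and sensor models below are hypothetical stand-ins, not the paper's electrosensory model:

```python
import math
import random

def particle_filter(spikes_per_step, n_particles=400,
                    motion_sd=0.2, sensor_sd=0.5, seed=1):
    """Bootstrap particle filter (sketch): the particle cloud is a
    Monte Carlo approximation of the Bayesian posterior over target
    position, updated by each batch of spike 'measurements'."""
    rng = random.Random(seed)
    particles = [rng.uniform(-5.0, 5.0) for _ in range(n_particles)]
    estimates = []
    for spikes in spikes_per_step:
        # predict: diffuse particles with the random-walk motion model
        particles = [x + rng.gauss(0.0, motion_sd) for x in particles]
        # update: weight each particle by the likelihood of the spikes
        weights = []
        for x in particles:
            logw = sum(-(s - x) ** 2 / (2 * sensor_sd ** 2) for s in spikes)
            weights.append(math.exp(logw))
        total = sum(weights) or 1.0
        weights = [w / total for w in weights]
        # posterior-mean estimate of target position
        estimates.append(sum(w * x for w, x in zip(weights, particles)))
        # resample (multinomial, for brevity)
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return estimates
```

With spikes repeatedly arriving near a fixed position, the cloud contracts onto that position; in the paper's setting the same role is played by the spatial distribution of spikes in the secondary electrosensory map.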
Design and Performance of the NASA SCEPTOR Distributed Electric Propulsion Flight Demonstrator
NASA Technical Reports Server (NTRS)
Borer, Nicholas K.; Patterson, Michael D.; Viken, Jeffrey K.; Moore, Mark D.; Clarke, Sean; Redifer, Matthew E.; Christie, Robert J.; Stoll, Alex M.; Dubois, Arthur; Bevirt, JoeBen;
2016-01-01
Distributed Electric Propulsion (DEP) technology uses multiple propulsors driven by electric motors distributed about the airframe to yield beneficial aerodynamic-propulsion interaction. The NASA SCEPTOR flight demonstration project will retrofit an existing internal combustion engine-powered light aircraft with two types of DEP: small "high-lift" propellers distributed along the leading edge of the wing which accelerate the flow over the wing at low speeds, and larger cruise propellers co-located with each wingtip for primary propulsive power. The updated high-lift system enables a 2.5x reduction in wing area as compared to the original aircraft, reducing drag at cruise and shifting the velocity for maximum lift-to-drag ratio to a higher speed, while maintaining low-speed performance. The wingtip-mounted cruise propellers interact with the wingtip vortex, enabling a further efficiency increase that can reduce propulsive power by 10%. A tradespace exploration approach is developed that enables rapid identification of salient trades, and subsequent creation of SCEPTOR demonstrator geometries. These candidates were scrutinized by subject matter experts to identify design preferences that were not modeled during configuration exploration. This exploration and design approach is used to create an aircraft that consumes an estimated 4.8x less energy at the selected cruise point when compared to the original aircraft.
The global distribution of tetrapods reveals a need for targeted reptile conservation.
Roll, Uri; Feldman, Anat; Novosolov, Maria; Allison, Allen; Bauer, Aaron M; Bernard, Rodolphe; Böhm, Monika; Castro-Herrera, Fernando; Chirio, Laurent; Collen, Ben; Colli, Guarino R; Dabool, Lital; Das, Indraneil; Doan, Tiffany M; Grismer, Lee L; Hoogmoed, Marinus; Itescu, Yuval; Kraus, Fred; LeBreton, Matthew; Lewin, Amir; Martins, Marcio; Maza, Erez; Meirte, Danny; Nagy, Zoltán T; de C Nogueira, Cristiano; Pauwels, Olivier S G; Pincheira-Donoso, Daniel; Powney, Gary D; Sindaco, Roberto; Tallowin, Oliver J S; Torres-Carvajal, Omar; Trape, Jean-François; Vidan, Enav; Uetz, Peter; Wagner, Philipp; Wang, Yuezhao; Orme, C David L; Grenyer, Richard; Meiri, Shai
2017-11-01
The distributions of amphibians, birds and mammals have underpinned global and local conservation priorities, and have been fundamental to our understanding of the determinants of global biodiversity. In contrast, the global distributions of reptiles, representing a third of terrestrial vertebrate diversity, have been unavailable. This prevented the incorporation of reptiles into conservation planning and biased our understanding of the underlying processes governing global vertebrate biodiversity. Here, we present and analyse the global distribution of 10,064 reptile species (99% of extant terrestrial species). We show that richness patterns of the other three tetrapod classes are good spatial surrogates for species richness of all reptiles combined and of snakes, but characterize diversity patterns of lizards and turtles poorly. Hotspots of total and endemic lizard richness overlap very little with those of other taxa. Moreover, existing protected areas, sites of biodiversity significance and global conservation schemes represent birds and mammals better than reptiles. We show that additional conservation actions are needed to effectively protect reptiles, particularly lizards and turtles. Adding reptile knowledge to a global complementarity conservation priority scheme identifies many locations that consequently become important. Notably, investing resources in some of the world's arid, grassland and savannah habitats might be necessary to represent all terrestrial vertebrates efficiently.
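Complementarity-based priority schemes of the kind mentioned above are often implemented greedily: repeatedly select the site that adds the most species not yet represented. A minimal sketch, with invented site/species sets:

```python
def greedy_complementarity(site_species, n_sites):
    """Greedy complementarity prioritization (sketch): at each step pick
    the site whose species set adds the most unrepresented species."""
    covered = set()
    chosen = []
    remaining = dict(site_species)   # site name -> set of species ids
    for _ in range(n_sites):
        best = max(remaining, key=lambda s: len(remaining[s] - covered),
                   default=None)
        if best is None or not (remaining[best] - covered):
            break                    # nothing new can be added
        chosen.append(best)
        covered |= remaining.pop(best)
    return chosen, covered
```

This is how adding reptile distributions can promote previously unselected locations: a site rich in otherwise-unrepresented lizards suddenly offers high complementarity even if its bird and mammal faunas are already covered elsewhere.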
Probabilistic Damage Characterization Using the Computationally-Efficient Bayesian Approach
NASA Technical Reports Server (NTRS)
Warner, James E.; Hochhalter, Jacob D.
2016-01-01
This work presents a computationally-efficient approach for damage determination that quantifies uncertainty in the provided diagnosis. Given strain sensor data that are polluted with measurement errors, Bayesian inference is used to estimate the location, size, and orientation of damage. This approach uses Bayes' Theorem to combine any prior knowledge an analyst may have about the nature of the damage with information provided implicitly by the strain sensor data to form a posterior probability distribution over possible damage states. The unknown damage parameters are then estimated based on samples drawn numerically from this distribution using a Markov Chain Monte Carlo (MCMC) sampling algorithm. Several modifications are made to the traditional Bayesian inference approach to provide significant computational speedup. First, an efficient surrogate model is constructed using sparse grid interpolation to replace a costly finite element model that must otherwise be evaluated for each sample drawn with MCMC. Next, the standard Bayesian posterior distribution is modified using a weighted likelihood formulation, which is shown to improve the convergence of the sampling process. Finally, a robust MCMC algorithm, Delayed Rejection Adaptive Metropolis (DRAM), is adopted to sample the probability distribution more efficiently. Numerical examples demonstrate that the proposed framework effectively provides damage estimates with uncertainty quantification and can yield orders of magnitude speedup over standard Bayesian approaches.
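The posterior-sampling step can be illustrated with a basic Metropolis sampler. The one-parameter forward model, prior bounds, and noise level below are hypothetical stand-ins for the paper's finite element (or surrogate) model and its DRAM sampler:

```python
import math
import random

# Hypothetical forward model: strain amplitude at sensor position xs
# decays with distance from the damage location theta. This is an
# illustrative surrogate, not the paper's finite element model.
SENSORS = [0.0, 1.0, 2.0, 3.0]

def forward(theta):
    return [1.0 / (1.0 + (xs - theta) ** 2) for xs in SENSORS]

def log_posterior(theta, data, noise_sd=0.05):
    if not (0.0 <= theta <= 3.0):          # uniform prior on [0, 3]
        return float("-inf")
    pred = forward(theta)
    # Gaussian measurement-error likelihood (up to a constant)
    return sum(-(d - p) ** 2 / (2 * noise_sd ** 2)
               for d, p in zip(data, pred))

def metropolis(data, n_samples=4000, step=0.2, seed=5):
    """Random-walk Metropolis sampler over the damage parameter."""
    rng = random.Random(seed)
    theta = 1.5
    lp = log_posterior(theta, data)
    samples = []
    for _ in range(n_samples):
        prop = theta + rng.gauss(0.0, step)
        lp_prop = log_posterior(prop, data)
        # accept improvements always, otherwise with probability exp(dlp)
        if lp_prop >= lp or rng.random() < math.exp(lp_prop - lp):
            theta, lp = prop, lp_prop
        samples.append(theta)
    return samples
```

The chain's histogram approximates the posterior over the damage location; DRAM adds delayed rejection and covariance adaptation on top of exactly this accept/reject core.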
Distributed spatial information integration based on web service
NASA Astrophysics Data System (ADS)
Tong, Hengjian; Zhang, Yun; Shao, Zhenfeng
2008-10-01
Spatial information systems and spatial information in different geographic locations usually belong to different organizations. They are distributed and often heterogeneous and independent from each other. This leads to the fact that many isolated spatial information islands are formed, reducing the efficiency of information utilization. In order to address this issue, we present a method for effective spatial information integration based on web service. The method applies asynchronous invocation of web service and dynamic invocation of web service to implement distributed, parallel execution of web map services. All isolated information islands are connected by the dispatcher of web service and its registration database to form a uniform collaborative system. According to the web service registration database, the dispatcher of web services can dynamically invoke each web map service through an asynchronous delegating mechanism. All of the web map services can be executed at the same time. When each web map service is done, an image will be returned to the dispatcher. After all of the web services are done, all images are transparently overlaid together in the dispatcher. Thus, users can browse and analyze the integrated spatial information. Experiments demonstrate that the utilization rate of spatial information resources is significantly raised through the proposed method of distributed spatial information integration.
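The dispatcher's parallel invocation pattern can be sketched with a thread pool that submits all registered map services at once and composites the results only after every call has returned. The service stubs below are hypothetical placeholders for real web map service calls discovered from a registration database:

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for remote web map services; a real dispatcher
# would issue SOAP/REST requests to endpoints from its registry.
def make_service(name, delay):
    def render(bbox):
        time.sleep(delay)                # simulated network + render time
        return f"{name}-layer{bbox}"     # placeholder for an image tile
    return render

def dispatch(services, bbox):
    """Invoke all registered map services concurrently (asynchronous
    delegation) and collect the returned layers for overlay."""
    with ThreadPoolExecutor(max_workers=len(services)) as pool:
        futures = [pool.submit(svc, bbox) for svc in services]
        return [f.result() for f in futures]   # wait for every service

services = [make_service("roads", 0.05), make_service("rivers", 0.05),
            make_service("parcels", 0.05)]
layers = dispatch(services, (0, 0, 10, 10))
# In a GIS client the returned images would now be transparently
# overlaid into one composite map.
```

Because the three simulated services run concurrently, total wall time approaches the slowest single call rather than the sum of all calls, which is the efficiency gain the paper attributes to asynchronous invocation.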
Distributed spatial information integration based on web service
NASA Astrophysics Data System (ADS)
Tong, Hengjian; Zhang, Yun; Shao, Zhenfeng
2009-10-01
Spatial information systems and spatial information in different geographic locations usually belong to different organizations. They are distributed and often heterogeneous and independent from each other. This leads to the fact that many isolated spatial information islands are formed, reducing the efficiency of information utilization. In order to address this issue, we present a method for effective spatial information integration based on web service. The method applies asynchronous invocation of web service and dynamic invocation of web service to implement distributed, parallel execution of web map services. All isolated information islands are connected by the dispatcher of web service and its registration database to form a uniform collaborative system. According to the web service registration database, the dispatcher of web services can dynamically invoke each web map service through an asynchronous delegating mechanism. All of the web map services can be executed at the same time. When each web map service is done, an image will be returned to the dispatcher. After all of the web services are done, all images are transparently overlaid together in the dispatcher. Thus, users can browse and analyze the integrated spatial information. Experiments demonstrate that the utilization rate of spatial information resources is significantly raised through the proposed method of distributed spatial information integration.
Tabilio, Maria Rosaria; Lampazzi, Elena; Ceccaroli, Claudio; Colacci, Marco; Trematerra, Pasquale
2018-01-01
The Mediterranean fruit fly (medfly), Ceratitis capitata (Wiedemann), is a key pest of fruit crops in many tropical, subtropical and mild temperate areas worldwide. The economic importance of this fruit fly is increasing due to its invasion of new geographical areas. Efficient control and eradication efforts require adequate information regarding C. capitata adults in relation to environmental and physiological cues. This would allow effective characterisation of the population spatio-temporal dynamic of the C. capitata population at both the orchard level and the area-wide landscape. The aim of this study was to analyse population patterns of adult medflies caught using two trapping systems in a peach orchard located in central Italy. They were differentiated by adult sex (males or females) and mating status of females (unmated or mated females) to determine the spatio-temporal dynamic and evaluate the effect of cultivar and chemical treatments on trap catches. Female mating status was assessed by spermathecal dissection and a blind test was carried out to evaluate the reliability of the technique. Geostatistical methods, variogram and kriging, were used to produce distributional maps. Results showed a strong correlation between the distribution of males and unmated females, whereas males versus mated females and unmated females versus mated females showed a lower correlation. Both cultivar and chemical treatments had significant effects on trap catches, showing associations with sex and female mating status. Medfly adults showed aggregated distributions in the experimental field, but hot spots locations varied. The spatial pattern of unmated females reflected that of males, whereas mated females were largely distributed around ripening or ripe fruit. The results give relevant insights into pest management. 
Mated females may be distributed differently from unmated females, and the identification of male hot spots through monitoring would allow localisation of virgin female populations. Based on our results, a more precise IPM strategy, coupled with effective sanitation practices, could represent a more effective approach to medfly control. PMID:29617420
DeVore, Matthew S; Gull, Stephen F; Johnson, Carey K
2012-04-05
We describe a method for analysis of single-molecule Förster resonance energy transfer (FRET) burst measurements using classic maximum entropy. Classic maximum entropy determines the Bayesian inference for the joint probability describing the total fluorescence photons and the apparent FRET efficiency. The method was tested with simulated data and then with DNA labeled with fluorescent dyes. The most probable joint distribution can be marginalized to obtain both the overall distribution of fluorescence photons and the apparent FRET efficiency distribution. This method proves to be ideal for determining the distance distribution of FRET-labeled biomolecules, and it successfully predicts the shape of the recovered distributions.
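Marginalizing the most probable joint distribution, as described above, is a simple sum over one axis; the sketch below also includes the usual proximity-ratio definition of apparent FRET efficiency. The discrete joint distribution used here is illustrative:

```python
def marginals(joint):
    """Marginalize a discrete joint distribution p(n_photons, E_bin)
    (sketch): sum over one axis to obtain each 1-D distribution."""
    rows, cols = len(joint), len(joint[0])
    p_photons = [sum(joint[i][j] for j in range(cols)) for i in range(rows)]
    p_fret = [sum(joint[i][j] for i in range(rows)) for j in range(cols)]
    return p_photons, p_fret

def apparent_fret(n_acceptor, n_donor):
    """Apparent (proximity-ratio) FRET efficiency for one burst."""
    return n_acceptor / (n_acceptor + n_donor)
```

The first marginal is the overall distribution of fluorescence photons per burst; the second is the apparent FRET efficiency distribution from which distance distributions are inferred.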
Cooling of Electric Motors Used for Propulsion on SCEPTOR
NASA Technical Reports Server (NTRS)
Christie, Robert J.; Dubois, Arthur; Derlaga, Joseph M.
2017-01-01
NASA is developing a suite of hybrid-electric propulsion technologies for aircraft. These technologies have the benefit of lower emissions, diminished noise, increased efficiency, and reduced fuel burn. These will provide lower operating costs for aircraft operators. Replacing internal combustion engines with distributed electric propulsion is a keystone of this technology suite, but presents many new problems to aircraft system designers. One of the problems is how to cool these electric motors without adding significant aerodynamic drag, cooling system weight or fan power. This paper discusses the options evaluated for cooling the motors on SCEPTOR (Scalable Convergent Electric Propulsion Technology and Operations Research): a project that will demonstrate Distributed Electric Propulsion technology in flight. Options for external and internal cooling, inlet and exhaust locations, ducting and adjustable cowling, and axial and centrifugal fans were evaluated. The final design was based on a trade between effectiveness, simplicity, robustness, mass and performance over a range of ground and flight operation environments.
NASA Astrophysics Data System (ADS)
Gendelis, S.; Jakovičs, A.
2010-01-01
Numerical mathematical modelling of the indoor thermal conditions and of the energy losses for separate rooms is an important part of the analysis of the heat-exchange balance and energy efficiency in buildings. The measurements of heat transfer coefficients for bounding structures, the air-tightness tests and thermographic diagnostics done for a building allow the influence of those factors to be predicted more correctly in developed numerical models. The temperature distribution and airflows in a typical room (along with the heat losses) were calculated for different heater locations and solar radiation (modelled as a heat source) through the window, as well as various pressure differences between the openings in opposite walls. The airflow velocities and indoor temperature, including its gradient, were also analysed as parameters of thermal comfort conditions. The results obtained show that all of the listed factors have an important influence on the formation of thermal comfort conditions and on the heat balance in a room.
Microstrip Yagi array antenna for mobile satellite vehicle application
NASA Technical Reports Server (NTRS)
Huang, John; Densmore, Arthur C.
1991-01-01
A novel antenna structure formed by combining the Yagi-Uda array concept and the microstrip radiator technique is discussed. This antenna, called the microstrip Yagi array, has been developed for the mobile satellite (MSAT) system as a low-profile, low-cost, and mechanically steered medium-gain land-vehicle antenna. With the antenna's active patches (driven elements) and parasitic patches (reflector and director elements) located on the same horizontal plane, the main beam of the array can be tilted, by the effect of mutual coupling, in the elevation direction, providing optimal coverage for users in the continental United States. Because the parasitic patches are not connected to any of the lossy RF power-distributing circuitry, the antenna is an efficient radiating system. With the complete monopulse beamforming and power-distributing circuits etched on a single thin stripline board underneath the microstrip Yagi array, the overall L-band antenna system has achieved a very low profile for mounting on a vehicle's rooftop, as well as a low manufacturing cost. Experimental results demonstrate the performance of this antenna.
Effect of particle size distribution on the separation efficiency in liquid chromatography.
Horváth, Krisztián; Lukács, Diána; Sepsey, Annamária; Felinger, Attila
2014-09-26
In this work, the influence of the width of the particle size distribution (PSD) on chromatographic efficiency is studied. The PSD is described by a lognormal distribution. A theoretical framework is developed in order to calculate heights equivalent to a theoretical plate for different PSDs. Our calculations demonstrate and verify that wide particle size distributions have a significant effect on the separation efficiency of molecules. The differences between fully porous and core-shell phases regarding the influence of PSD width are presented and discussed. The efficiencies of bimodal phases were also calculated. The results showed that these packings do not have any advantage over unimodal phases. Copyright © 2014 Elsevier B.V. All rights reserved.
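The qualitative effect of PSD width on efficiency can be illustrated by averaging a plate-height expression over sampled particle diameters. The van Deemter-type form and its coefficients below are hypothetical, chosen only to show that, at a fixed mean diameter, a wider lognormal PSD raises the average plate height (the particle-size-dependent terms are convex in diameter):

```python
import math
import random

def lognormal_psd(mean_dp, rel_sd, n, seed=7):
    """Sample particle diameters from a lognormal PSD with a given mean
    and relative standard deviation (sketch)."""
    sigma2 = math.log(1.0 + rel_sd ** 2)
    mu = math.log(mean_dp) - sigma2 / 2.0   # so E[dp] = mean_dp exactly
    rng = random.Random(seed)
    return [math.exp(rng.gauss(mu, math.sqrt(sigma2))) for _ in range(n)]

def mean_plate_height(diams, u=2.0, A=1.0, B=2.0, C=0.05):
    """Illustrative van Deemter-type dependence
    H(dp) = A*dp + B/u + C*dp^2*u, averaged over the sampled PSD.
    Coefficients are hypothetical; only the trend matters."""
    return sum(A * d + B / u + C * d * d * u for d in diams) / len(diams)
```

Comparing a narrow and a wide PSD with the same mean diameter, the wide distribution yields a larger average plate height, i.e. lower efficiency, consistent with the paper's conclusion.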
NASA Technical Reports Server (NTRS)
Green, Robert D.; Agui, Juan H.; Berger, Gordon M.; Vijayakumar, R.; Perry, Jay L.
2016-01-01
The atmosphere revitalization equipment aboard the International Space Station (ISS) and future deep space exploration vehicles provides the vital functions of maintaining a habitable environment for the crew as well as protecting the hardware from fouling by suspended particulate matter. Providing these functions is challenging in pressurized spacecraft cabins because no outside air ventilation is possible and a larger particulate load is imposed on the filtration system due to the lack of sedimentation in reduced gravity. The ISS Environmental Control and Life Support (ECLS) system architecture in the U.S. Segment uses a distributed particulate filtration approach consisting of traditional High-Efficiency Particulate Adsorption (HEPA) filters deployed at multiple locations in each module. These filters are referred to as Bacteria Filter Elements (BFEs). As more experience has been gained with ISS operations, the BFE service life, which was initially one year, has been extended to two to five years, depending on the location in the U.S. Segment. In previous work we developed a test facility and test protocol for leak testing the ISS BFEs. For this work, we present results of leak testing a sample set of returned BFEs with a service life of 2.5 years, along with particulate removal efficiency and pressure drop measurements. The results can potentially be utilized by the ISS Program to ascertain whether the present replacement interval can be maintained or extended to balance the on-ground filter inventory against the extension of the lifetime of ISS to 2024. These results can also provide meaningful guidance for particulate filter designs under consideration for future deep space exploration missions.
78 FR 1570 - Semiannual Regulatory Agenda
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-08
... Transformers (energy efficiency standards) Residential clothes washers (energy efficiency standards... Distribution Transformers (Reg Plan Seq No. 32). 263 Test Procedures for 1904-AC76 Residential Refrigerators... Efficiency Standards for Distribution Transformers Regulatory Plan: This entry is Seq. No. 32 in part II of...
NASA Astrophysics Data System (ADS)
Allen, D. J.; Pickering, K. E.; Ring, A.; Holzworth, R. H.
2013-12-01
Lightning is the dominant source of the nitrogen oxides (NOx) involved in the production of ozone in the middle and upper troposphere in the tropics and in summer in the midlatitudes. It is therefore imperative that the lightning NOx (LNOx) source strength per flash be better constrained, which requires accurate information on the location and timing of lightning flashes. In the past fifteen years, satellite-based lightning monitoring by the Optical Transient Detector (OTD) and Lightning Imaging Sensor (LIS) has greatly increased our understanding of the global distribution of lightning as a function of season and time of day. However, detailed information at higher temporal resolutions is only available for limited regions where ground-based networks such as the United States National Lightning Detection Network (NLDN) exist. In 2004, the ground-based World Wide Lightning Location Network (WWLLN) was formed with the goal of providing continuous flash rate information over the entire globe. It detects very low frequency (VLF) radio waves emitted by lightning with a detection efficiency (DE) that varies with stroke energy, time of day, surface type, and network coverage. This study evaluated the DE of WWLLN strokes relative to climatological OTD/LIS flashes using data from 2007 to 2012, a period during which the mean number of working sensors increased from 28 to 53. The analysis revealed that the mean global DE increased from 5% in 2007 to 13% in 2012. Regional variations were substantial, with mean 2012 DEs of 5-10% over much of Argentina, Africa, and Asia and 15-30% over much of the Atlantic, Pacific, and Indian Oceans, the United States, and the Maritime Continent. Detection-efficiency-adjusted WWLLN flash rates were then compared to NLDN-based flash rates. Spatial correlations for individual summer months ranged from 0.66 to 0.93. Temporal correlations are currently being examined for regions of the U.S. and will also be shown.
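The relative-DE calculation described above amounts to a gridded ratio of WWLLN stroke counts to the OTD/LIS flash climatology, which then yields DE-adjusted flash rates. The sketch below uses synthetic arrays (all values are placeholders, not WWLLN or OTD/LIS data) to show the two steps:

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic gridded annual flash counts (lat x lon cells); in practice these
# would be gridded WWLLN stroke counts and the OTD/LIS climatological flashes
otd_lis = rng.uniform(10.0, 100.0, size=(18, 36))   # "true" climatological flashes
de_true = rng.uniform(0.05, 0.30, size=(18, 36))    # regional detection efficiency
wwlln = otd_lis * de_true                           # strokes actually detected

# per-cell DE estimate: detected strokes over climatological flashes
de_est = np.where(otd_lis > 0, wwlln / otd_lis, np.nan)

# DE-adjusted flash rates recover the climatological counts
adjusted = wwlln / de_est

# network-wide mean DE, weighted by flash activity
global_de = wwlln.sum() / otd_lis.sum()
```

With noise-free synthetic counts the adjustment is exact; with real data the per-cell ratio is a climatological estimate subject to sampling noise.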
Goedhart, Joachim; van Unen, Jakobus; Adjobo-Hermans, Merel J W; Gadella, Theodorus W J
2013-01-01
The p63RhoGEF and GEFT proteins are encoded by the same gene and are both members of the Dbl family of guanine nucleotide exchange factors. These proteins can be activated by the heterotrimeric G-protein subunit Gαq. We show that p63RhoGEF is located at the plasma membrane, whereas GEFT is confined to the cytoplasm. Live-cell imaging studies yielded quantitative information on diffusion coefficients, association rates and encounter times of GEFT and p63RhoGEF. Calcium signaling was examined as a measure of signal transmission, revealing more efficient signaling through the membrane-associated p63RhoGEF. A rapamycin-dependent recruitment system was used to dynamically alter the subcellular location and concentration of GEFT, showing efficient signaling through GEFT only upon membrane recruitment. Together, our results show efficient signal transmission through membrane-located effectors, and highlight a role for increased concentration rather than increased encounter times due to membrane localization in the Gαq-mediated pathways to p63RhoGEF and PLCβ.
NASA Astrophysics Data System (ADS)
DeVore, Matthew S.; Gull, Stephen F.; Johnson, Carey K.
2013-08-01
We analyzed single molecule FRET burst measurements using Bayesian nested sampling. The MultiNest algorithm produces accurate FRET efficiency distributions from single-molecule data. FRET efficiency distributions recovered by MultiNest and classic maximum entropy are compared for simulated data and for calmodulin labeled at residues 44 and 117. MultiNest compares favorably with maximum entropy analysis for simulated data, judged by the Bayesian evidence. FRET efficiency distributions recovered for calmodulin labeled with two different FRET dye pairs depended on the dye pair and changed upon Ca2+ binding. We also looked at the FRET efficiency distributions of calmodulin bound to the calcium/calmodulin dependent protein kinase II (CaMKII) binding domain. For both dye pairs, the FRET efficiency distribution collapsed to a single peak in the case of calmodulin bound to the CaMKII peptide. These measurements strongly suggest that consideration of dye-protein interactions is crucial in forming an accurate picture of protein conformations from FRET data.
NASA Astrophysics Data System (ADS)
Trudel, Mélanie; Leconte, Robert; Paniconi, Claudio
2014-06-01
Data assimilation techniques not only enhance model simulations and forecasts, they also provide a diagnostic of both the model and the observations used in the assimilation process. In this research, an ensemble Kalman filter was used to assimilate streamflow observations at a basin outlet and at interior locations, as well as soil moisture at two different depths (15 and 45 cm). The simulation model is the distributed, physically based hydrological model CATHY (CATchment HYdrology) and the study site is the Des Anglais watershed, a 690 km² river basin located in southern Quebec, Canada. Use of Latin hypercube sampling instead of a conventional Monte Carlo method to generate the ensemble reduced the size of the ensemble, and therefore the calculation time. Different post-assimilation diagnostics, based on innovations (observation minus background), analysis residuals (observation minus analysis), and analysis increments (analysis minus background), were used to evaluate assimilation optimality. An important issue in data assimilation is the estimation of error covariance matrices; these diagnostics were also used in a calibration exercise to determine the standard deviations of model parameters, forcing data, and observations that led to optimal assimilations. The analysis of innovations showed a lag between the model forecast and the observation during rainfall events. Assimilation of streamflow observations corrected this discrepancy. Assimilation of outlet streamflow observations improved the Nash-Sutcliffe efficiencies (NSE) between the model forecast (one day) and the observation at both the outlet and interior point locations, owing to the structure of the state vector used. However, assimilation of streamflow observations systematically increased the simulated soil moisture values.
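The analysis step at the heart of such an assimilation can be illustrated with a generic stochastic ensemble Kalman filter update with perturbed observations. This is a hedged, minimal sketch (a toy two-variable state observed in one component), not the CATHY-specific state vector or configuration used in the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

def enkf_update(X, y, H, R):
    """Stochastic EnKF analysis step.
    X: (n_state, n_ens) forecast ensemble; y: (n_obs,) observation vector;
    H: (n_obs, n_state) observation operator; R: (n_obs, n_obs) obs error cov."""
    n = X.shape[1]
    Xm = X.mean(axis=1, keepdims=True)
    A = X - Xm                                     # ensemble anomalies
    P = A @ A.T / (n - 1)                          # sample forecast covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    # perturbed observations, one per ensemble member
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n).T
    return X + K @ (Y - H @ X)                     # analysis ensemble

# toy example: 2-variable state, only the first component is observed
X = rng.normal([[1.0], [0.0]], 1.0, size=(2, 200))  # forecast ensemble
H = np.array([[1.0, 0.0]])
R = np.array([[0.01]])
Xa = enkf_update(X, np.array([2.0]), H, R)
```

The innovation term `Y - H @ X` is the same "observation minus background" quantity used in the paper's post-assimilation diagnostics.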
NASA Astrophysics Data System (ADS)
Li, Siwei; Li, Jun; Liu, Zhuochu; Wang, Min; Yue, Liang
2017-05-01
After household distributed photovoltaic systems are connected to the grid, high penetration levels commonly arise. The usual responses, rapidly disconnecting the distributed generation from the main network and absorbing the surplus with energy storage devices, are no longer adequate for grid health from the perspectives of economy and rationality. This paper applies three levels of integration between the household microgrid and household energy-efficiency management (device with device, device with system, and system with system) to design a construction program and an operation strategy for a household microgrid that incorporates energy-efficiency management. The approach achieves efficient integration of household energy-efficiency management with the household microgrid and effectively addresses the problems caused by high penetration of household distributed generation.
Theater Level Distribution Statement
1992-06-19
The United States Military’s theater level distribution management system is analyzed for adequacy and efficiency through a review of the current...considered. In order to maximize efficiency within the distribution management system it is important that more specific doctrine governing theater level
Robustness of location estimators under t-distributions: a literature review
NASA Astrophysics Data System (ADS)
Sumarni, C.; Sadik, K.; Notodiputro, K. A.; Sartono, B.
2017-03-01
The assumption of normality is commonly used in the estimation of parameters in statistical modelling, but this assumption is very sensitive to outliers. The t-distribution is more robust than the normal distribution since t-distributions have heavier tails. The robustness measures of location estimators under t-distributions are reviewed and discussed in this paper. For the purpose of illustration we use onion yield data, which include outliers, as a case study and show that the t model produces a better fit than the normal model.
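The robustness argument can be illustrated with a small Monte Carlo experiment (a generic sketch, not the paper's review or its onion data): under heavy-tailed t-distributed samples, a robust location estimator such as the sample median is far more stable across replicates than the sample mean, whose variance is infinite for degrees of freedom below 2.

```python
import numpy as np

rng = np.random.default_rng(3)
true_loc, df, n, reps = 5.0, 1.5, 100, 2000   # df < 2: mean has infinite variance

means, medians = [], []
for _ in range(reps):
    x = true_loc + rng.standard_t(df, n)   # heavy-tailed sample around true_loc
    means.append(x.mean())
    medians.append(np.median(x))

# spread of each estimator over replicates: smaller is more robust
spread_mean = np.std(means)
spread_median = np.std(medians)
```

The median's spread stays small because its influence function is bounded, while a single extreme draw can drag the mean arbitrarily far from the true location.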
Efficient Information Access for Location-Based Services in Mobile Environments
ERIC Educational Resources Information Center
Lee, Chi Keung
2009-01-01
The demand for pervasive access of location-related information (e.g., local traffic, restaurant locations, navigation maps, weather conditions, pollution index, etc.) fosters a tremendous application base of "Location Based Services (LBSs)". Without loss of generality, we model location-related information as "spatial objects" and the accesses…
Place field assembly distribution encodes preferred locations
Mamad, Omar; Stumpp, Lars; McNamara, Harold M.; Ramakrishnan, Charu; Deisseroth, Karl; Reilly, Richard B.
2017-01-01
The hippocampus is the main locus of episodic memory formation and the neurons there encode the spatial map of the environment. Hippocampal place cells represent location, but their role in the learning of preferential location remains unclear. The hippocampus may encode locations independently from the stimuli and events that are associated with these locations. We have discovered a unique population code for the experience-dependent value of the context. The degree of reward-driven navigation preference highly correlates with the spatial distribution of the place fields recorded in the CA1 region of the hippocampus. We show place field clustering towards rewarded locations. Optogenetic manipulation of the ventral tegmental area demonstrates that the experience-dependent place field assembly distribution is directed by tegmental dopaminergic activity. The ability of the place cells to remap parallels the acquisition of reward context. Our findings present key evidence that the hippocampal neurons are not merely mapping the static environment but also store the concurrent context reward value, enabling episodic memory for past experience to support future adaptive behavior. PMID:28898248
Hari, Pradip; Ko, Kevin; Koukoumidis, Emmanouil; Kremer, Ulrich; Martonosi, Margaret; Ottoni, Desiree; Peh, Li-Shiuan; Zhang, Pei
2008-10-28
Increasingly, spatial awareness plays a central role in many distributed and mobile computing applications. Spatially aware applications rely on information about the geographical position of compute devices and their supported services in order to support novel functionality. While many spatial application drivers already exist in mobile and distributed computing, very little systems research has explored how best to program these applications, to express their spatial and temporal constraints, and to allow efficient implementations on highly dynamic real-world platforms. This paper proposes the SARANA system architecture, which includes language and run-time system support for spatially aware and resource-aware applications. SARANA allows users to express spatial regions of interest, as well as trade-offs between quality of result (QoR), latency and cost. The goal is to produce applications that use resources efficiently and that can be run on diverse resource-constrained platforms ranging from laptops to personal digital assistants and to smart phones. SARANA's run-time system manages QoR and cost trade-offs dynamically by tracking resource availability and locations, brokering usage/pricing agreements and migrating programs to nodes accordingly. A resource cost model permeates the SARANA system layers, permitting users to express their resource needs and QoR expectations in units that make sense to them. Although we are still early in the system development, initial versions have been demonstrated on a nine-node system prototype.
Im, Seokjin; Choi, JinTak
2014-06-17
In the pervasive computing environment using smart devices equipped with various sensors, a wireless data broadcasting system for spatial data items is a natural way to efficiently provide a location-dependent information service, regardless of the number of clients. A non-flat wireless broadcast system can support the clients in quickly accessing their preferred data items by disseminating the preferred data items more frequently than regular data on the wireless channel. To efficiently support the processing of spatial window queries in a non-flat wireless data broadcasting system, we propose a distributed air index based on a maximum boundary rectangle (MaxBR) over grid-cells (abbreviated DAIM), which uses MaxBRs for filtering out hot data items on the wireless channel. Unlike the existing index, which repeats regular data items in close proximity to hot items at the same frequency as hot data items in a broadcast cycle, DAIM makes it possible to repeat only hot data items in a cycle and reduces the length of the broadcast cycle. Consequently, DAIM helps the clients access the desired items quickly, improves the access time, and reduces energy consumption. In addition, a MaxBR helps the clients decide whether they have to access regular data items or not. Simulation studies show the proposed DAIM outperforms existing schemes with respect to access time and energy consumption.
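One plausible reading of the MaxBR decision step can be sketched geometrically. This is an illustrative simplification of DAIM, not the paper's index structure: the client compares its window query against the MaxBR of the hot items, and only if the window falls entirely inside the MaxBR can it answer from hot items alone and skip tuning in for regular data.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rect:
    x1: float; y1: float; x2: float; y2: float  # lower-left and upper-right corners

    def contains(self, other: "Rect") -> bool:
        return (self.x1 <= other.x1 and self.y1 <= other.y1
                and self.x2 >= other.x2 and self.y2 >= other.y2)

    def intersects(self, other: "Rect") -> bool:
        return not (self.x2 < other.x1 or other.x2 < self.x1
                    or self.y2 < other.y1 or other.y2 < self.y1)

def must_read_regular(query: Rect, maxbr: Rect) -> bool:
    # If the query window lies entirely inside the MaxBR of the hot items,
    # the client can (under this simplified reading) answer from hot items
    # alone; otherwise it must also wait for regular data items.
    return not maxbr.contains(query)

maxbr = Rect(0, 0, 10, 10)   # hypothetical MaxBR of the hot items
```

Skipping the wait for regular items is exactly what shortens the effective broadcast cycle seen by the client, improving access time and energy use.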
Quasi 3D modelling of water flow in the sandy soil
NASA Astrophysics Data System (ADS)
Rezaei, Meisam; Seuntjens, Piet; Joris, Ingeborg; Boënne, Wesley; De Pue, Jan; Cornelis, Wim
2016-04-01
Monitoring and modeling tools may improve irrigation strategies in precision agriculture. Spatial interpolation is required for analyzing the effects of soil hydraulic parameters, soil layer thickness and groundwater level on irrigation management using hydrological models at field scale. We used a non-invasive soil sensor, a crop growth model (LINGRA-N) and a soil hydrological model (Hydrus-1D) to predict soil-water content fluctuations and crop yield in a heterogeneous sandy grassland soil under supplementary irrigation. In the first step, the sensitivity of the soil hydrological model to hydraulic parameters, water stress, crop yield and lower boundary conditions was assessed after integrating the models for one soil column. Free drainage and incremental constant head conditions were implemented in a lower boundary sensitivity analysis. In the second step, to predict Ks over the whole field, the spatial distribution of Ks and its relationship with co-located soil ECa measured by a DUALEM-21S sensor were investigated. Measured groundwater levels and soil layer thickness were interpolated using ordinary point kriging (OK) to a 0.5 m by 0.5 m grid with the aid of digital elevation maps. In the third step, a quasi-3D modelling approach was conducted using the interpolated data as input hydraulic parameters, geometric information and boundary conditions in the integrated model. In addition, three different irrigation scenarios, namely current, no irrigation and optimized irrigation, were carried out to find the most efficient irrigation regime. In this approach, detailed field-scale maps of soil water stress, water storage and crop yield were produced at each specific time interval to evaluate the best and most efficient distribution of water using standard gun sprinkler irrigation. The results show that the effect of the position of the groundwater level was dominant in soil-water content prediction and associated water stress.
A time-dependent sensitivity analysis of the hydraulic parameters showed that changes in soil water content are mainly affected by the soil saturated hydraulic conductivity Ks in a two-layered soil. Results demonstrated the large spatial variability of Ks (CV = 86.21%). A significant negative correlation was found between ln Ks and ECa (r = 0.83; P ≤ 0.01). This site-specific relation between ln Ks and ECa was used to predict Ks for the whole field after validation against an independent dataset of measured Ks. Results showed that this approach can accurately determine the field-scale irrigation requirements, taking into account variations in boundary conditions and spatial variations of model parameters across the field. We found that uniform distribution of water using standard gun sprinkler irrigation is not an efficient approach: at locations with shallow groundwater, the amount of water applied will be excessive compared to the crop requirements, while at locations with a deeper groundwater table, the crop irrigation requirements will not be met during crop water stress. Numerical results showed that optimal irrigation scheduling using the aforementioned water stress calculations can save up to ~25% of irrigation water compared to the current irrigation regime. This resulted in a yield increase of ~7%, as simulated by the crop growth model.
DeVore, Matthew S.; Gull, Stephen F.; Johnson, Carey K.
2012-01-01
We describe a method for analysis of single-molecule Förster resonance energy transfer (FRET) burst measurements using classic maximum entropy. Classic maximum entropy determines the Bayesian inference for the joint probability describing the total fluorescence photons and the apparent FRET efficiency. The method was tested with simulated data and then with DNA labeled with fluorescent dyes. The most probable joint distribution can be marginalized to obtain both the overall distribution of fluorescence photons and the apparent FRET efficiency distribution. This method proves to be ideal for determining the distance distribution of FRET-labeled biomolecules, and it successfully predicts the shape of the recovered distributions. PMID:22338694
Complex network description of the ionosphere
NASA Astrophysics Data System (ADS)
Lu, Shikun; Zhang, Hao; Li, Xihai; Li, Yihong; Niu, Chao; Yang, Xiaoyun; Liu, Daizhi
2018-03-01
Complex networks have emerged as an essential approach of geoscience to generate novel insights into the nature of geophysical systems. To investigate the dynamic processes in the ionosphere, a directed complex network is constructed, based on a probabilistic graph of the vertical total electron content (VTEC) from 2012. The results of the power-law hypothesis test show that neither the out-degree nor the in-degree distribution of the ionospheric network is scale-free. Thus, the distribution of the interactions in the ionosphere is homogeneous, and no geospatial position plays an eminently important role in the propagation of the dynamic ionospheric processes. The spatial analysis of the ionospheric network shows that the interconnections principally exist between adjacent geographical locations, indicating that the propagation of the dynamic processes primarily depends on the geospatial distance in the ionosphere. Moreover, the joint distribution of the edge distances with respect to the longitude and latitude directions shows that the dynamic processes travel further along the longitude than along the latitude in the ionosphere. The analysis of small-world-ness indicates that the ionospheric network possesses the small-world property, which can make the ionosphere stable and efficient in the propagation of dynamic processes.
NASA Astrophysics Data System (ADS)
Kopka, P.; Wawrzynczak, A.; Borysiewicz, M.
2015-09-01
In many areas of application, a central problem is the solution of an inverse problem, in particular the estimation of unknown model parameters so that the underlying dynamics of a physical system are modelled precisely. In this situation, Bayesian inference is a powerful tool for combining observed data with prior knowledge to obtain the probability distribution of the parameters sought. We have applied the modern methodology named Sequential Approximate Bayesian Computation (S-ABC) to the problem of tracing an atmospheric contaminant source. ABC is a technique commonly used in the Bayesian analysis of complex models and dynamic systems; sequential methods can significantly increase its efficiency. In the presented algorithm, the input data are the on-line arriving concentrations of the released substance registered by a distributed sensor network from the OVER-LAND ATMOSPHERIC DISPERSION (OLAD) experiment. The algorithm outputs are the probability distributions of the contamination source parameters, i.e., its location, release rate, speed and direction of movement, start time, and duration. The stochastic approach presented in this paper is completely general and can be used in other fields where the model parameters best fitted to the observed data must be found.
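The accept/reject logic underlying ABC can be sketched as follows. This is plain rejection ABC with an invented toy distance-decay release model; the paper's S-ABC adds sequential refinement, and the priors, sensor layout, and forward model here are all placeholders (not OLAD data or a real dispersion model):

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy "dispersion model" (placeholder, not OLAD physics): the concentration
# registered at a sensor decays with squared distance from the source.
def simulate(src, q, sensors):
    d2 = ((sensors - src) ** 2).sum(axis=-1)
    return q / (1.0 + d2)

sensors = rng.uniform(0.0, 10.0, size=(20, 2))      # distributed sensor network
true_src, true_q = np.array([4.0, 6.0]), 5.0        # hidden source location, rate
observed = simulate(true_src, true_q, sensors)

# Rejection ABC: draw parameters from the priors, run the forward model,
# and keep draws whose simulated concentrations land within eps of the data.
n_draws, eps = 150_000, 0.8
src_draws = rng.uniform(0.0, 10.0, size=(n_draws, 2))
q_draws = rng.uniform(1.0, 10.0, size=n_draws)
sims = q_draws[:, None] / (1.0 + ((sensors[None, :, :] - src_draws[:, None, :]) ** 2).sum(-1))
keep = np.linalg.norm(sims - observed, axis=1) < eps
posterior = np.column_stack([src_draws, q_draws])[keep]   # columns: x, y, q
```

The accepted draws approximate the posterior over the source parameters; sequential schemes reuse them to propose from progressively tighter distributions instead of sampling the prior blindly.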
Innate colour preferences of the Australian native stingless bee Tetragonula carbonaria Sm.
Dyer, Adrian G; Boyd-Gerny, Skye; Shrestha, Mani; Lunau, Klaus; Garcia, Jair E; Koethe, Sebastian; Wong, Bob B M
2016-10-01
Innate preferences promote the capacity of pollinators to find flowers. Honeybees and bumblebees have strong preferences for 'blue' stimuli, and flowers of this colour typically present higher nectar rewards. Interestingly, flowers from multiple different locations around the world independently have the same distribution in bee colour space. Currently, however, there is a paucity of data on the innate colour preferences of stingless bees that are often implicated as being key pollinators in many parts of the world. In Australia, the endemic stingless bee Tetragonula carbonaria is widely distributed and known to be an efficient pollinator of both native plants and agricultural crops. In controlled laboratory conditions, we tested the innate colour responses of naïve bees using standard broadband reflectance stimuli representative of common flower colours. Colorimetric analyses considering hymenopteran vision and a hexagon colour space revealed a difference between test colonies, and a significant effect of green contrast and an interaction effect of green contrast with spectral purity on bee choices. We also observed colour preferences for stimuli from the blue and blue-green categorical regions of colour space. Our results are discussed in relation to the similar distribution of flower colours observed from bee pollination around the world.
Thermal feature extraction of servers in a datacenter using thermal image registration
NASA Astrophysics Data System (ADS)
Liu, Hang; Ran, Jian; Xie, Ting; Gao, Shan
2017-09-01
Thermal cameras provide fine-grained thermal information that enhances monitoring and enables automatic thermal management in large datacenters. Recent approaches employing mobile robots or thermal camera networks can already identify the physical locations of hot spots. Other distribution information used to optimize datacenter management can also be obtained automatically using pattern recognition technology. However, most of the features extracted from thermal images, such as shape and gradient, may be affected by changes in the position and direction of the thermal camera. This paper presents a method for extracting the thermal features of a hot spot or a server in a container datacenter. First, thermal and visual images are registered based on textural characteristics extracted from images acquired in datacenters. Then, the thermal distribution of each server is standardized. The features of a hot spot or server extracted from the standard distribution can reduce the impact of camera position and direction. The results of experiments show that image registration is efficient for aligning the corresponding visual and thermal images in the datacenter, and the standardization procedure reduces the impacts of camera position and direction on hot spot or server features.
Okeniyi, Joshua O; Atayero, Aderemi A; Popoola, Segun I; Okeniyi, Elizabeth T; Alalade, Gbenga M
2018-04-01
This data article presents comparisons of energy generation costs from gas-fired turbine and diesel-powered systems of the distributed-generation type of electrical energy in Covenant University, Ota, Nigeria, a smart university campus driven by Information and Communication Technologies (ICT). Cumulative monthly data on the energy generation costs, for consumption in the institution, from the two modes of electric power, which was produced at locations close to the community consuming the energy, were recorded for the period spanning January to December 2017. Energy generation costs for the turbine system arise from gas firing, whereas the generation cost data for the diesel-powered generator also include maintenance costs for that mode of electrical power generation. These energy generation cost data, presented in tables and graphs, are detailed and compared using descriptive probability distributions and goodness-of-fit tests of statistical significance. The information in these data is useful for furthering research developments and for aiding energy stakeholders and decision-makers in formulating policies on energy generation modes and economic valuation, in terms of costing and management, for attaining an energy-efficient/smart educational environment.
Incremental Parallelization of Non-Data-Parallel Programs Using the Charon Message-Passing Library
NASA Technical Reports Server (NTRS)
VanderWijngaart, Rob F.
2000-01-01
Message passing is among the most popular techniques for parallelizing scientific programs on distributed-memory architectures. The reasons for its success are wide availability (MPI), efficiency, and full tuning control provided to the programmer. A major drawback, however, is that incremental parallelization, as offered by compiler directives, is not generally possible, because all data structures have to be changed throughout the program simultaneously. Charon remedies this situation through mappings between distributed and non-distributed data. It allows breaking up the parallelization into small steps, guaranteeing correctness at every stage. Several tools are available to help convert legacy codes into high-performance message-passing programs. They usually target data-parallel applications, whose loops carrying most of the work can be distributed among all processors without much dependency analysis. Others do a full dependency analysis and then convert the code virtually automatically. Even more toolkits are available that aid construction from scratch of message passing programs. None, however, allows piecemeal translation of codes with complex data dependencies (i.e. non-data-parallel programs) into message passing codes. The Charon library (available in both C and Fortran) provides incremental parallelization capabilities by linking legacy code arrays with distributed arrays. During the conversion process, non-distributed and distributed arrays exist side by side, and simple mapping functions allow the programmer to switch between the two in any location in the program. Charon also provides wrapper functions that leave the structure of the legacy code intact, but that allow execution on truly distributed data. 
Finally, the library provides a rich set of communication functions that support virtually all patterns of remote data demands in realistic structured grid scientific programs, including transposition, nearest-neighbor communication, pipelining, gather/scatter, and redistribution. At the end of the conversion process most intermediate Charon function calls will have been removed, the non-distributed arrays will have been deleted, and virtually the only remaining Charon function calls are the high-level, highly optimized communications. Distribution of the data is under complete control of the programmer, although a wide range of useful distributions is easily available through predefined functions. A crucial aspect of the library is that it does not allocate space for distributed arrays, but accepts programmer-specified memory. This has two major consequences. First, codes parallelized using Charon do not suffer from encapsulation; user data is always directly accessible. This provides high efficiency, and also retains the possibility of using message passing directly for highly irregular communications. Second, non-distributed arrays can be interpreted as (trivial) distributions in the Charon sense, which allows them to be mapped to truly distributed arrays, and vice versa. This is the mechanism that enables incremental parallelization. In this paper we provide a brief introduction to the library and then focus on the actual steps in the parallelization process, using some representative examples from, among others, the NAS Parallel Benchmarks. We show how a complicated two-dimensional pipeline, the prototypical non-data-parallel algorithm, can be constructed with ease. To demonstrate the flexibility of the library, we give examples of the stepwise, efficient parallel implementation of nonlocal boundary conditions common in aircraft simulations, as well as the construction of the sequence of grids required for multigrid.
Roh, Chul-Young; Moon, M Jae; Jung, Kwangho
2013-11-01
This study examined the impact of ownership, size, location, and network on the relative technical efficiency of community hospitals in Tennessee for the 2002-2006 period, by applying data envelopment analysis (DEA) to measure technical efficiency (decomposed into scale efficiency and pure technical efficiency). Data envelopment analysis results indicate that medium-size hospitals (126-250 beds) are more efficient than their counterparts. Interestingly, public hospitals are significantly more efficient than private and nonprofit hospitals in Tennessee, and rural hospitals are more efficient than urban hospitals. This is the first study to investigate whether hospital networks with other health care providers affect hospital efficiency. Results indicate that community hospitals with networks are more efficient than non-network hospitals. From a management and policy perspective, this study suggests that public policies should induce hospitals to downsize or upsize to an optimal size, and that private and nonprofit hospitals should change their organizational objectives from profit-driven to quality-driven.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Xibing; Dong, Longjun, E-mail: csudlj@163.com; Australian Centre for Geomechanics, The University of Western Australia, Crawley, 6009
This paper presents an efficient closed-form solution (ECS) for acoustic emission (AE) source location in three-dimensional structures using time difference of arrival (TDOA) measurements from N receivers, N ≥ 6. The nonlinear TDOA location equations are simplified to linear equations. The unique analytical solution for AE sources in an unknown-velocity system is obtained by solving the linear equations. The proposed ECS method successfully solves the problems of location errors resulting from measured deviations of velocity, as well as the existence and multiplicity of solutions induced by the calculation of square roots in existing closed-form methods.
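The ECS equations themselves are not given in the abstract, but the standard linearization such methods build on can be sketched. Below is a hedged NumPy illustration (synthetic receiver geometry and event; not the paper's exact formulas) of why N ≥ 6 receivers suffice when the velocity is unknown: squaring and differencing the range equations leaves five linear unknowns, the three source coordinates plus v = c·d1 and u = c².

```python
import numpy as np

# Sketch of the standard TDOA linearization for an unknown-velocity medium
# (not the paper's exact ECS formulas). Unknowns: source s (3 coordinates),
# v = c * d1, and u = c^2, where d1 is the source distance to receiver 1.
# Each non-reference receiver i contributes one linear equation:
#   -2 (r_i - r_1) . s - 2 tau_i v - tau_i^2 u = |r_1|^2 - |r_i|^2
# Five unknowns, so N - 1 >= 5, i.e. N >= 6 receivers, as in the paper.

R = np.array([[0., 0., 0.], [10, 0, 0], [0, 10, 0], [0, 0, 10],
              [10, 10, 10], [5, 8, 2], [7, 3, 9]])       # 7 receivers
s_true, c_true = np.array([3., 4., 5.]), 5.0             # synthetic AE event

t = np.linalg.norm(R - s_true, axis=1) / c_true          # arrival times
tau = t[1:] - t[0]                                       # TDOAs vs receiver 1

A = np.column_stack([-2 * (R[1:] - R[0]), -2 * tau, -tau**2])
b = np.sum(R[0]**2) - np.sum(R[1:]**2, axis=1)
x, *_ = np.linalg.lstsq(A, b, rcond=None)                # least squares

s_est, c_est = x[:3], np.sqrt(x[4])
assert np.allclose(s_est, s_true, atol=1e-6)             # source recovered
assert abs(c_est - c_true) < 1e-6                        # velocity recovered
```

With exact data the least-squares solve recovers the source and the wave speed simultaneously, which is the appeal of velocity-free formulations.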
Hasan, Md. Zobaer; Kamil, Anton Abdulbasah; Mustafa, Adli; Baten, Md. Azizul
2012-01-01
The stock market is considered essential for economic growth and expected to contribute to improved productivity. An efficient pricing mechanism of the stock market can be a driving force for channeling savings into profitable investments and thus facilitating optimal allocation of capital. This study investigated the technical efficiency of selected groups of companies of the Bangladesh stock market, namely the Dhaka Stock Exchange (DSE), using the stochastic frontier production function approach. For this, the authors considered the Cobb-Douglas stochastic frontier in which the technical inefficiency effects are defined by a model with two distributional assumptions. Truncated normal and half-normal distributions were used in the model, and both time-variant and time-invariant inefficiency effects were estimated. The results reveal that technical efficiency decreased gradually over the reference period and that the truncated normal distribution is preferable to the half-normal distribution for technical inefficiency effects. The value of technical efficiency was high for the investment group and low for the bank group, as compared with other groups in the DSE market, for both distributions in the time-varying environment, whereas it was high for the investment group but low for the ceramic group, as compared with other groups in the DSE market, for both distributions in the time-invariant situation. PMID:22629352
Mitigating agricultural impacts on groundwater using distributed managed aquifer recharge ponds
NASA Astrophysics Data System (ADS)
Schmidt, C. M.; Russo, T. A.; Fisher, A. T.; Racz, A. J.; Wheat, C. G.; Los Huertos, M.; Lockwood, B. S.
2010-12-01
Groundwater is likely to become increasingly important for irrigated agriculture due to anticipated changes to the hydrologic cycle associated with climate change. Protecting the quantity and quality of subsurface water supplies will require flexible management strategies that can enhance groundwater recharge. We present results from a study of managed aquifer recharge (MAR) in central coastal California, and propose the use of distributed, small-scale (1-5 ha) MAR systems to improve the quantity and quality of recharge in agricultural basins. Our field site is located in a basin where the primary use of groundwater is irrigation for agriculture, and groundwater resources are increasingly threatened by seawater intrusion and nutrient contamination from fertilizer application. The MAR system we are monitoring is supplied by stormwater and irrigation runoff of variable quality, which is diverted from a wetland during periods of high flow. This MAR system delivers approximately 1x10^6 m^3 of recharge annually to the underlying aquifer, a portion of which is recovered and distributed to growers during the dry season. Our sampling and measurements (at high spatial and temporal resolution) show that a significant percentage of the nitrogen load added during MAR operation is eliminated from recharge during shallow infiltration (~30% to 60%, ~40 kg NO3-N/d). Isotopic analyses of the residual nitrate indicate that a significant fraction of the nitrate load reduction is attributable to denitrification. When normalized to infiltration pond area, this system achieves a mean load reduction of 7 kg NO3-N/d/ha, which compares favorably with the nitrogen load reduction efficiency achieved by treatment wetlands receiving agricultural runoff. Much of the reduction in nitrogen load occurs during periods of rapid infiltration (0.2 to 2.0 m/day), as demonstrated with point measurements of infiltration rate collocated with fluid samples.
These results suggest that developing a network of small-scale MAR ponds could be a useful strategy for improving groundwater conditions in this basin. Although the efficiency of small recharge ponds can be high, numerous projects would be needed to impact the overall water balance of a basin such as ours. We are applying a GIS-based approach to assess how small-scale MAR systems could be distributed to achieve significant benefit. This analysis involves determining where topography, soil type, land ownership, groundwater conditions, and cropping practices are the most favorable for locating recharge systems. Results of this work should be applicable to other basins facing similar challenges, ultimately helping to improve the sustainability of groundwater supplies.
Cheng, Tao; Sun, Guifeng; Huo, Jingyu; He, Xiaoji; Wang, Yining; Ren, Yan-Fang
2012-11-01
To study patient satisfaction and masticatory efficiency of single implant-retained mandibular overdentures using the stud and magnetic attachments in a randomized clinical trial with a crossover design. Patients received a single implant placed in the midline of the mandible and either a stud (Locator) or a magnetic (Magfit) attachment, assigned at random. Patient satisfaction, including patient comfort, speech, chewing ability and retention, and masticatory efficiency measured by chewing peanuts, were assessed before and 3 months after attachment insertion. Patient satisfaction and masticatory efficiency were evaluated again 3 months after insertion of the alternate attachment bodies. The outcomes were compared before and after insertion of the attachments and between the two types of attachments using Wilcoxon signed rank tests. Patient overall satisfaction, comfort, speech, chewing ability, and retention improved significantly after insertion of both types of attachment bodies (p<0.05). Masticatory efficiencies also increased in both the Locator and the Magfit groups (p<0.05). There were no statistically significant differences in patient overall satisfaction, comfort, speech, and retention between the two types of attachments (p>0.05). The Locator attachments performed better in perceived chewing ability than the Magfit (p<0.05), but there was no statistically significant difference in masticatory efficiency between the two attachment types (p>0.05). Clinical outcomes were significantly improved in single implant-retained mandibular overdentures using either the Locator or the Magfit magnetic attachments. There was no difference in masticatory efficiency between the two attachment types. Copyright © 2012 Elsevier Ltd. All rights reserved.
Efficient Saccade Planning Requires Time and Clear Choices
Ghahghaei, Saiedeh; Verghese, Preeti
2015-01-01
We use eye movements constantly to gather information. Saccades are efficient when they maximize the information required for the task; however, there is controversy regarding the efficiency of eye movement planning. For example, saccades are efficient when searching for a single target (Nature, 434 (2005) 387–91), but are inefficient when searching for an unknown number of targets in noise, particularly under time pressure (Vision Research 74 (2012), 61–71). In this study, we used a multiple-target search paradigm and explored whether altering the noise level or increasing saccadic latency improved efficiency. Experiments used stimuli with two levels of discriminability such that saccades to the less discriminable stimuli provided more information. When these two noise levels corresponded to low and moderate visibility, most observers did not preferentially select informative locations, but looked at uncertain and probable target locations equally often. We then examined whether eye movements could be made more efficient by increasing the discriminability of the two stimulus levels and by delaying the first saccade so that there was more time for decision processes to influence the saccade choices. Some observers did indeed increase the proportion of their saccades to informative locations under these conditions. Others, however, made as many saccades as they could during the limited time and were unselective about the saccade goal. A clear trend that emerges across all experiments is that conditions with a greater proportion of efficient saccades are associated with a longer latency to initiate saccades, suggesting that the choice of informative locations requires deliberate planning. PMID:26037735
Seen areas and the distribution of fires about a lookout
Romain M. Mees
1978-01-01
From the location of a fire lookout and the sites of past fires within a given radius about a lookout, an estimate of the fire distribution with respect to distance from the lookout can be obtained. The estimated distribution can include all fires located within a given number of feet below the last maximum line of sight from the lookout. Seen areas for the same...
R. Bruce Anderson; R. Bruce Anderson
1991-01-01
To assess the impact of grocery pallet production on future hardwood resources, better information is needed on the current use of reusable pallets by the grocery and related products industry. A spatial model of pallet use in the grocery distribution system that identifies the locational aspects of grocery pallet production and distribution, determines how these...
Dynamic water allocation policies improve the global efficiency of storage systems
NASA Astrophysics Data System (ADS)
Niayifar, Amin; Perona, Paolo
2017-06-01
Water impoundment by dams strongly affects a river's natural flow regime, its attributes, and the related ecosystem biodiversity. Fostering the sustainability of water uses (e.g., hydropower systems) thus implies searching for innovative operational policies able to generate Dynamic Environmental Flows (DEF) that mimic natural flow variability. The objective of this study is to propose a Direct Policy Search (DPS) framework based on defining dynamic flow release rules to improve the global efficiency of storage systems. The water allocation policies proposed for dammed systems extend the flow redistribution rules previously developed for small hydropower plants by Razurel et al. (2016). The mathematical form of the Fermi-Dirac statistical distribution, applied to the lake equation for the water stored in the dam, is used to formulate non-proportional redistribution rules that partition the flow between energy production and environmental use. While energy production is computed from technical data, the riverine ecological benefits associated with DEF are computed by integrating the Weighted Usable Area (WUA) for fishes with Richter's hydrological indicators. Multiobjective evolutionary algorithms (MOEAs) are then applied to build the ecological-versus-economic efficiency plot and locate its Pareto frontier. This study benchmarks two MOEAs (NSGA-II and Borg MOEA) and compares their efficiency in terms of the quality of the Pareto frontier and computational cost. A detailed analysis of dam characteristics is performed to examine their impact on the global system efficiency and on the choice of the best redistribution rule. Finally, it is found that non-proportional flow releases can statistically improve the global efficiency, particularly the ecological efficiency, of the hydropower system when compared to constant minimal flows.
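The abstract does not give the release rule's parameters, but its shape can be illustrated. The sketch below (hypothetical parameter values; in the paper the rules are optimized by MOEAs, not hand-set) shows a Fermi-Dirac-shaped, non-proportional partition of inflow between production and environmental release:

```python
import math

# Schematic non-proportional redistribution rule with a Fermi-Dirac shape
# (hypothetical parameters; the paper's rules are fitted, not this exact form).
# phi(q): fraction of the inflow q diverted to hydropower production; the
# remainder q * (1 - phi(q)) is released as a dynamic environmental flow.

def production_fraction(q, q_half=20.0, width=5.0, cap=0.8):
    """Fermi-Dirac-shaped diversion: ~0 at low flow, approaching cap at high flow."""
    return cap / (1.0 + math.exp(-(q - q_half) / width))

def split_inflow(q):
    phi = production_fraction(q)
    return q * phi, q * (1.0 - phi)     # (production, environmental)

inflows = [2.0, 10.0, 20.0, 40.0, 80.0]          # m^3/s, synthetic
splits = [split_inflow(q) for q in inflows]

# Low flows pass almost untouched (environmental protection)...
assert production_fraction(2.0) < 0.05
# ...while diversion saturates below the cap at high flows.
assert 0.75 < production_fraction(80.0) < 0.8
# Mass balance holds for every inflow.
assert all(abs(p + e - q) < 1e-12 for (p, e), q in zip(splits, inflows))
```

Because the diverted fraction collapses at low flows and saturates at high flows, the environmental release inherits the natural variability of the inflow, which is the point of a non-proportional rule.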
Process for predicting structural performance of mechanical systems
Gardner, David R.; Hendrickson, Bruce A.; Plimpton, Steven J.; Attaway, Stephen W.; Heinstein, Martin W.; Vaughan, Courtenay T.
1998-01-01
A process for predicting the structural performance of a mechanical system represents the mechanical system by a plurality of surface elements. The surface elements are grouped according to their location in the volume occupied by the mechanical system so that contacts between surface elements can be efficiently located. The process is well suited for efficient practice on multiprocessor computers.
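The grouping step described above is a standard spatial-binning idea, sketched here as a generic illustration (not the patented process itself; the cell size and helper names are assumptions):

```python
from collections import defaultdict

# Sketch of location-based grouping for contact search: bucket surface
# elements into grid cells so that contact candidates are sought only among
# elements in the same or adjacent cells, instead of testing all O(n^2) pairs.

def build_grid(points, cell):
    grid = defaultdict(list)
    for idx, (x, y, z) in enumerate(points):
        grid[(int(x // cell), int(y // cell), int(z // cell))].append(idx)
    return grid

def contact_candidates(points, cell):
    """Index pairs of elements in the same or adjacent cells (a superset of contacts)."""
    grid = build_grid(points, cell)
    pairs = set()
    for (cx, cy, cz), members in grid.items():
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    for j in grid.get((cx + dx, cy + dy, cz + dz), ()):
                        for i in members:
                            if i < j:
                                pairs.add((i, j))
    return pairs

pts = [(0.1, 0.1, 0.1), (0.2, 0.1, 0.1), (5.0, 5.0, 5.0)]
cands = contact_candidates(pts, cell=1.0)
assert (0, 1) in cands          # nearby elements are candidate contacts
assert (0, 2) not in cands      # distant elements are never tested
```

On a multiprocessor, assigning grid cells to processors keeps each contact search local, which is what makes the grouping attractive for parallel execution.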
76 FR 70376 - Efficiency and Renewables Advisory Committee; Notice of Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-14
...-Voltage Dry-Type Distribution Transformers. The Liquid Immersed and Medium-Voltage Dry- Type Group (MV... of distribution transformers, as authorized by the Energy Policy Conservation Act (EPCA) of 1975, as... negotiated rulemaking process to develop proposed energy efficiency standards for distribution transformers...
Estimated home ranges can misrepresent habitat relationships on patchy landscapes
Mitchell, M.S.; Powell, R.A.
2008-01-01
Home ranges of animals are generally structured by the selective use of resource-bearing patches that comprise habitat. Based on this concept, home ranges of animals estimated from location data are commonly used to infer habitat relationships. Because home ranges estimated from animal locations are largely continuous in space, the resource-bearing patches selected by an animal from a fragmented distribution of patches would be difficult to discern; unselected patches included in the home range estimate would bias an understanding of important habitat relationships. To evaluate potential for this bias, we generated simulated home ranges based on optimal selection of resource-bearing patches across a series of simulated resource distributions that varied in the spatial continuity of resources. For simulated home ranges where selected patches were spatially disjunct, we included interstitial, unselected cells most likely to be traveled by an animal moving among selected patches. We compared characteristics of the simulated home ranges with and without interstitial patches to evaluate how insights derived from field estimates can differ from actual characteristics of home ranges, depending on patchiness of landscapes. Our results showed that contiguous home range estimates could lead to misleading insights on the quality, size, resource content, and efficiency of home ranges, proportional to the spatial discontinuity of resource-bearing patches. We conclude the potential bias of including unselected, largely irrelevant patches in the field estimates of home ranges of animals can be high, particularly for home range estimators that assume uniform use of space within home range boundaries. Thus, inferences about the habitat relationships that ultimately define an animal's home range can be misleading where animals occupy landscapes with patchily distributed resources.
Spatial distribution of aerosol hygroscopicity and its effect on PM2.5 retrieval in East China
NASA Astrophysics Data System (ADS)
He, Qianshan; Zhou, Guangqiang; Geng, Fuhai; Gao, Wei; Yu, Wei
2016-03-01
The hygroscopic properties of aerosol particles have a strong impact on climate as well as on visibility in polluted areas. Understanding the scattering enhancement due to water uptake is of great importance in linking dry aerosol measurements with relevant ambient measurements, especially for satellite retrievals. In this study, an observation-based algorithm combining meteorological data with particulate matter (PM) measurements was introduced to estimate the spatial distribution of indicators describing the integrated humidity effect in East China, and the main factors affecting hygroscopicity were explored. Investigation of one year of data indicates that the larger mass extinction efficiency (αext) values (> 9.0 m^2/g) were located in central and northern Jiangsu Province, which might be caused by particulate organic material (POM) and sulfate aerosol from industry and human activities. The high level of POM in Jiangsu Province might also be responsible for the lower growth coefficient (γ) values in this region. For the inland junction of Jiangsu and Anhui Provinces, a considerably higher hygroscopic growth region in East China might be attributed to more hygroscopic particles mainly composed of inorganic salts (e.g., sulfates and nitrates) from several large-scale industrial districts distributed in this region. Validation shows good agreement of calculated PM2.5 mass concentrations with in situ measurements at most stations, with correlation coefficients over 0.85, even though several stations performed worse owing to station location or seasonal variation of aerosol properties in this region. This algorithm can be used for more accurate surface-level PM2.5 retrieval from satellite-based aerosol optical depth (AOD) in combination with a vertical correction for the aerosol profile.
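The quantities named above (αext and the growth coefficient γ) enter a commonly used retrieval chain, sketched here with hypothetical parameter values (the study fits αext and γ per location; its exact algorithm is not reproduced in the abstract):

```python
# Sketch of a humidity-corrected PM2.5 retrieval of the kind the abstract
# describes (hypothetical values; the study derives alpha_ext and gamma from
# observations). f(RH) = (1 - RH)^(-gamma) is a common hygroscopic growth
# parameterization; dry extinction = AOD / (H * f(RH)); PM2.5 = dry ext / alpha_ext.

def growth_factor(rh, gamma=0.5):
    """Scattering enhancement due to water uptake at relative humidity rh (0..1)."""
    assert 0.0 <= rh < 1.0
    return (1.0 - rh) ** (-gamma)

def pm25_from_aod(aod, rh, mixing_height_m, alpha_ext_m2_per_g=9.0):
    """Estimate surface PM2.5 (ug/m^3) from column AOD."""
    ext_dry_per_m = aod / (mixing_height_m * growth_factor(rh))  # dry extinction, 1/m
    return ext_dry_per_m / alpha_ext_m2_per_g * 1e6              # g/m^3 -> ug/m^3

# Growth is monotonic in RH: more water uptake, more scattering.
assert growth_factor(0.8) > growth_factor(0.4) > growth_factor(0.0) == 1.0
# For the same AOD, higher humidity implies a smaller inferred dry PM2.5.
assert pm25_from_aod(0.5, 0.8, 1000.0) < pm25_from_aod(0.5, 0.4, 1000.0)
```

The point of mapping αext and γ spatially is precisely that both knobs in this chain vary with aerosol composition from place to place.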
2012-01-01
The theory of speciation is dominated by adaptationist thinking, with less attention to mechanisms that do not affect species adaptation. Degeneracy – the imperfect specificity of interactions between diverse elements of biological systems and their environments – is key to the adaptability of populations. A mathematical model was explored in which population and resource were distributed one-dimensionally according to trait value. Resource consumption was degenerate – neither strictly location-specific nor location-independent. As a result, the competition for resources among the elements of the population was non-local. Two modeling approaches, a modified differential-integral Verhulstian equation and a cellular automata model, showed similar results: narrower degeneracy led to divergent dynamics with suppression of intermediate forms, whereas broader degeneracy led to suppression of diversifying forms, resulting in population stasis with increasing phenotypic homogeneity. Such behaviors did not increase overall adaptation because they continued after the model populations achieved maximal resource consumption rates, suggesting that degeneracy-driven distributed competition for resources rather than selective pressure toward more efficient resource exploitation was the driving force. The solutions were stable in the presence of limited environmental stochastic variability or heritable phenotypic variability. A conclusion was made that both dynamic diversification and static homogeneity of populations may be outcomes of the same process – distributed competition for resource not affecting the overall adaptation – with the difference between them defined by the spread of trait degeneracy in a given environment. Thus, biological degeneracy is a driving force of both speciation and stasis in biology, which, by themselves, are not necessarily adaptive in nature. PMID:23268831
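The nonlocal-competition model described above can be sketched in discretized form (hypothetical parameters; the paper's differential-integral Verhulst equation is continuous):

```python
# Minimal sketch of the nonlocal (degenerate-consumption) Verhulst model
# described above, discretized over trait space (hypothetical parameters).
# Each trait bin i competes with neighbors weighted by a kernel:
#   dn_i/dt = r * n_i * (1 - sum_k K_k * n_{i+k} / C)

def step(n, kernel, r=0.5, cap=1.0, dt=0.1):
    m, half = len(n), len(kernel) // 2
    out = []
    for i in range(m):
        # Nonlocal competition: resource consumption shared with neighbors.
        comp = sum(kernel[k] * n[(i + k - half) % m] for k in range(len(kernel)))
        out.append(n[i] + dt * r * n[i] * (1 - comp / cap))
    return out

# Sanity check: purely local competition (a delta kernel) reduces to
# ordinary logistic growth, so every bin approaches the carrying capacity.
n = [0.1] * 20
for _ in range(2000):
    n = step(n, kernel=[1.0])
assert all(abs(x - 1.0) < 1e-6 for x in n)
```

Widening the kernel spreads competition across trait space; in the paper, it is the spread of degeneracy that separates diversifying dynamics from homogeneous stasis.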
Impacts on the Voltage Profile of DC Distribution Network with DG Access
NASA Astrophysics Data System (ADS)
Tu, J. J.; Yin, Z. D.
2017-07-01
With the development of power electronics, more and more distributed generation (DG) sources are being connected to the grid, spurring research interest in direct current (DC) distribution networks. Because DG location and capacity have great impacts on the voltage profile, we use the IEEE 9-bus and IEEE 33-bus test systems as examples, with DGs connected in centralized and decentralized modes, to compare voltage profiles in alternating current (AC) and DC distribution networks. Introducing the voltage change ratio as an evaluation index yields general results on the voltage profile of a DC distribution network with DG access. Simulation shows that, given reasonable location and capacity, the DC distribution network is more suitable for DG access.
Efficient High Performance Collective Communication for Distributed Memory Environments
ERIC Educational Resources Information Center
Ali, Qasim
2009-01-01
Collective communication allows efficient communication and synchronization among a collection of processes, unlike point-to-point communication that only involves a pair of communicating processes. Achieving high performance for both kernels and full-scale applications running on a distributed memory system requires an efficient implementation of…
If it's not there, where is it? Locating illusory conjunctions.
Hazeltine, R E; Prinzmetal, W; Elliott, W
1997-02-01
There is evidence that complex objects are decomposed by the visual system into features, such as shape and color. Consistent with this theory is the phenomenon of illusory conjunctions, which occur when features are incorrectly combined to form an illusory object. We analyzed the perceived location of illusory conjunctions to study the roles of color and shape in the location of visual objects. In Experiments 1 and 2, participants located illusory conjunctions about halfway between the veridical locations of the component features. Experiment 3 showed that the distribution of perceived locations was not the mixture of two distributions centered at the 2 feature locations. Experiment 4 replicated these results with an identification task rather than a detection task. We concluded that the locations of illusory conjunctions were not arbitrary but were determined by both constituent shape and color.
Herschel Detects a Massive Dust Reservoir in Supernova 1987A
NASA Technical Reports Server (NTRS)
Matsuura, M.; Dwek, E.; Meixner, M.; Otsuka, M.; Babler, B.; Barlow, M. J.; Roman-Duval, J.; Engelbracht, C.; Sandstrom K.; Lakicevic, M.;
2011-01-01
We report far-infrared and submillimeter observations of Supernova 1987A, the star that exploded on February 23, 1987 in the Large Magellanic Cloud, a galaxy located 160,000 light years away. The observations reveal the presence of a population of cold dust grains radiating with a temperature of approximately 17-23 K at a rate of about 220 solar luminosities. The intensity and spectral energy distribution of the emission suggest a dust mass of approximately 0.4-0.7 solar masses. The radiation must originate from the SN ejecta and requires the efficient precipitation of all refractory material into dust. Our observations imply that supernovae can produce the large dust masses detected in young galaxies at very high redshifts.
Exergy analysis on industrial boiler energy conservation and emission evaluation applications
NASA Astrophysics Data System (ADS)
Li, Henan
2017-06-01
Industrial boilers are among the most energy-consuming equipment in China; their annual energy use accounts for about one-third of national energy consumption. Industrial boilers currently in service suffer from several severe problems, such as small capacity, low efficiency, high energy consumption, and severe environmental pollution. The widespread, prolonged episodes of severe haze that China has experienced in recent years are closely related to the intensive, low-level emissions of coal-fired industrial boilers [1]. Improving the energy efficiency and reducing the emissions of industrial boilers is therefore of great significance for improving China's energy-use efficiency and environmental protection. While heat-balance theory is widely used in boiler design, exergy analysis is based on the first and second laws of thermodynamics: by studying the effect of the cycle on energy conversion and utilization and analyzing its influencing factors, it reveals the location, distribution, and magnitude of exergy losses, identifies the weak links, and provides a method for tapping the energy-saving potential of the boiler system. In this work, exergy analysis is used to analyze and evaluate the efficiency and pollutant emission characteristics of a layer-combustion (stoker-fired) boiler, so as to assess the boiler's energy-saving potential more objectively and accurately, identify the weak links in its energy consumption, and improve equipment performance and the environmental friendliness of industrial boilers.
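The core quantity of exergy analysis can be stated in one line: heat Q delivered at absolute temperature T, with ambient temperature T0, carries exergy Q(1 - T0/T). The sketch below (illustrative temperatures only; a real boiler audit uses flue-gas composition and enthalpy data) shows how the same energy flow loses exergy across a temperature gap, which is exactly the kind of loss the method locates:

```python
# Illustrative exergy accounting for heat transfer in a boiler (hypothetical
# temperatures; real audits use flue-gas composition and enthalpy data).
# The exergy of heat Q delivered at absolute temperature T, with ambient T0,
# is Ex = Q * (1 - T0/T): the same energy carries less exergy at lower T.

T0 = 298.15           # ambient temperature, K

def exergy_of_heat(q_kw, temp_k):
    return q_kw * (1.0 - T0 / temp_k)

# Heat released by combustion at ~1800 K, absorbed by steam at ~600 K:
q = 1000.0                                  # kW transferred
ex_in = exergy_of_heat(q, 1800.0)           # exergy supplied by the flame
ex_out = exergy_of_heat(q, 600.0)           # exergy received by the steam
destroyed = ex_in - ex_out                  # lost across the temperature gap

# Energy is conserved, but a large share of the exergy is destroyed in the
# transfer -- this is the kind of weak link exergy analysis locates, which
# an energy (heat) balance alone cannot see.
assert ex_in > ex_out > 0
assert 0.30 < destroyed / ex_in < 0.45
```

An energy balance would report this heat-transfer step as nearly lossless; the exergy balance shows where the thermodynamic potential actually disappears.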
Filter Efficiency and Pressure Testing of Returned ISS Bacterial Filter Elements (BFEs)
NASA Technical Reports Server (NTRS)
Green, Robert D.; Agui, Juan H.; Berger, Gordon M.; Vijayakumar, R.; Perry, Jay L.
2017-01-01
The air quality control equipment aboard the International Space Station (ISS) and future deep space exploration vehicles provide the vital function of maintaining a clean cabin environment for the crew and the hardware. This becomes a serious challenge in pressurized space compartments since no outside air ventilation is possible, and a larger particulate load is imposed on the filtration system due to lack of sedimentation. The ISS Environmental Control and Life Support (ECLS) system architecture in the U.S. Segment uses a distributed particulate filtration approach consisting of traditional High-Efficiency Particulate Air (HEPA) filters deployed at multiple locations in each U.S. Segment module; these filters are referred to as Bacterial Filter Elements, or BFEs. In our previous work, we presented results of efficiency and pressure drop measurements for a sample set of two returned BFEs with a service life of 2.5 years. In this follow-on work, we present similar efficiency, pressure drop, and leak test results for a larger sample set of six returned BFEs. The results of this work can aid the ISS Program in managing BFE logistics inventory through the station's planned lifetime as well as provide insight for managing filter element logistics for future exploration missions. These results can also provide meaningful guidance for particulate filter designs under consideration for future deep space exploration missions.
An analytical method to compute comet cloud formation efficiency and its application
NASA Astrophysics Data System (ADS)
Brasser, Ramon; Duncan, Martin J.
2008-01-01
A quick analytical method is presented for calculating comet cloud formation efficiency in the case of a single planet or a multiple-planet system, for planets that are not too eccentric (e_p ≲ 0.3). A method to calculate the fraction of comets that stay under the control of each planet is also presented, as well as a way to determine the efficiency in different star cluster environments. The location of the planet(s) in mass versus semi-major axis space required to form a comet cloud is constrained based on the conditions developed by Tremaine (1993), together with estimates of the likelihood of passing comets between planets; and, in the case of a single, eccentric planet, the additional constraint that it is, by itself, able to accelerate material to relative encounter velocity U ~ 0.4 within the age of the stellar system without sweeping up the majority of the material beforehand. For a single planet, it turns out that the efficiency is mainly a function of the planetary mass, the semi-major axis of the planet, and the density of the stellar environment. The theory has been applied to some extrasolar systems and compared to numerical simulations for both these systems and the Solar System, as well as to a diffusion scheme based on the energy kick distribution of Everhart (Astron J 73:1039-1052, 1968). The analytic results are in good agreement with the simulations.
Advances in Significance Testing for Cluster Detection
NASA Astrophysics Data System (ADS)
Coleman, Deidra Andrea
Over the past two decades, much attention has been given to data-driven projects such as the Human Genome Project and the development of syndromic surveillance systems. A major component of these types of projects is analyzing the abundance of data. Detecting clusters within the data can be beneficial, as it can lead to the identification of specific sequences of DNA nucleotides that are related to important biological functions, or of the locations of epidemics such as disease outbreaks or bioterrorism attacks. Cluster detection techniques require efficient and accurate hypothesis testing procedures. In this dissertation, we improve upon the hypothesis testing procedures for cluster detection by enhancing distributional theory and providing an alternative method for spatial cluster detection using syndromic surveillance data. In Chapter 2, we provide an efficient method to compute the exact distribution of the number and coverage of h-clumps of a collection of words. This method involves defining a Markov chain using a minimal deterministic automaton to reduce the number of states needed for computation. We allow words of the collection to contain other words of the collection, making the method more general. We use our method to compute the distributions of the number and coverage of h-clumps in the Chi motif of H. influenzae. In Chapter 3, we provide an efficient algorithm to compute the exact distribution of multiple-window discrete scan statistics for higher-order, multi-state Markovian sequences. This algorithm involves defining a Markov chain to efficiently keep track of the probabilities needed to compute p-values of the statistic. We use our algorithm to identify cases where the available approximation does not perform well. We also use our algorithm to detect unusual clusters of made free-throw shots by National Basketball Association players during the 2009-2010 regular season.
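The Markov-chain embedding used in Chapters 2 and 3 can be illustrated on a much simpler case than the dissertation treats: the exact distribution of the discrete scan statistic for i.i.d. Bernoulli trials, with the chain's state being the last w-1 outcomes (a sketch only, not the higher-order multi-state algorithm):

```python
from itertools import product

# A much-simplified instance of the Markov-chain idea: exact computation of
# P(S_w < k), where S_w is the scan statistic (maximum number of successes
# in any length-w window) of n i.i.d. Bernoulli(p) trials. The chain state
# is the last w-1 outcomes; paths that ever complete a window with >= k
# successes are dropped, so the surviving probability mass is P(S_w < k).

def prob_scan_below(n, w, k, p):
    # Build the distribution of the first w-1 outcomes (no complete window yet).
    states = {(): 1.0}
    for _ in range(w - 1):
        nxt = {}
        for s, pr in states.items():
            for bit, q in ((0, 1 - p), (1, p)):
                nxt[s + (bit,)] = nxt.get(s + (bit,), 0.0) + pr * q
        states = nxt
    # Extend one outcome at a time; each extension completes one window.
    for _ in range(n - w + 1):
        nxt = {}
        for s, pr in states.items():
            for bit, q in ((0, 1 - p), (1, p)):
                if sum(s) + bit < k:            # window stays below k hits
                    t = (s + (bit,))[1:]        # slide to the next state
                    nxt[t] = nxt.get(t, 0.0) + pr * q
        states = nxt
    return sum(states.values())

# Brute-force check over all 2^n sequences for a small case.
n, w, k, p = 12, 4, 3, 0.3
brute = sum(
    (p ** sum(seq)) * ((1 - p) ** (n - sum(seq)))
    for seq in product((0, 1), repeat=n)
    if max(sum(seq[i:i + w]) for i in range(n - w + 1)) < k
)
assert abs(prob_scan_below(n, w, k, p) - brute) < 1e-9
```

The chain needs at most 2^(w-1) states regardless of n, which is the same state-reduction motivation behind the minimal-automaton construction in Chapter 2.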
In Chapter 4, we give a procedure to detect outbreaks using syndromic surveillance data while controlling the Bayesian False Discovery Rate (BFDR). The procedure entails choosing an appropriate Bayesian model that captures the spatial dependency inherent in epidemiological data and considers all days of interest, selecting a test statistic based on a chosen measure that provides the magnitude of the maximal spatial cluster for each day, and identifying a cutoff value that controls the BFDR for rejecting the collective null hypothesis of no outbreak over a collection of days for a specified region. We use our procedure to analyze botulism-like syndrome data collected by the North Carolina Disease Event Tracking and Epidemiologic Collection Tool (NC DETECT).
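The automaton-as-Markov-chain idea of Chapter 2 can be illustrated with a minimal sketch (ours, not the dissertation's algorithm): build a KMP-style deterministic automaton for a single word, treat its states as a Markov chain under i.i.d. uniform letters, and propagate the state distribution to obtain the probability of at least one occurrence. The word, alphabet, and uniform-letter assumption are illustrative choices.

```python
def build_automaton(word, alphabet):
    """KMP-style DFA: state k means 'the last k letters read match word[:k]'."""
    m = len(word)
    delta = [{} for _ in range(m + 1)]
    for k in range(m + 1):
        for c in alphabet:
            if k < m and c == word[k]:
                delta[k][c] = k + 1
            else:
                # Fall back to the longest prefix of `word` that is a suffix of word[:k] + c.
                s = word[:k] + c
                j = min(m, len(s))
                while j > 0 and s[-j:] != word[:j]:
                    j -= 1
                delta[k][c] = j
    for c in alphabet:          # make the accepting state absorbing:
        delta[m][c] = m         # we only ask "at least one occurrence?"
    return delta

def occurrence_probability(word, n, alphabet="ACGT"):
    """P(word occurs at least once in an i.i.d. uniform sequence of length n)."""
    delta = build_automaton(word, alphabet)
    p = 1.0 / len(alphabet)
    dist = [0.0] * (len(word) + 1)
    dist[0] = 1.0               # start with nothing matched
    for _ in range(n):          # push the state distribution forward one letter
        new = [0.0] * len(dist)
        for state, mass in enumerate(dist):
            if mass:
                for c in alphabet:
                    new[delta[state][c]] += mass * p
        dist = new
    return dist[-1]
```

The dissertation's method additionally tracks clump counts and coverage and handles words that contain other words of the collection; this sketch shows only the state-space reduction that makes such exact computations tractable.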
Light Extraction From Solution-Based Processable Electrophosphorescent Organic Light-Emitting Diodes
NASA Astrophysics Data System (ADS)
Krummacher, Benjamin C.; Mathai, Mathew; So, Franky; Choulis, Stelios; Choong, Vi-En
2007-06-01
Molecular-dye-dispersed, solution-processable blue-emitting organic light-emitting devices have been fabricated, and the resulting devices exhibit efficiencies as high as 25 cd/A. With down-conversion phosphors, white-emitting devices have been demonstrated with a peak efficiency of 38 cd/A and a luminous efficiency of 25 lm/W. The high efficiencies are the product of proper tuning of carrier transport, optimization of the location of the carrier recombination zone and, hence, the microcavity effect, efficient down-conversion from blue to white light, and scattering/isotropic re-emission by the phosphor particles. An optical model has been developed to investigate all these effects. In contrast to the common misunderstanding that light out-coupling efficiency is about 22% and independent of device architecture, our device data and optical modeling results clearly demonstrate that the light out-coupling efficiency depends strongly on the exact location of the recombination zone. Estimating device internal quantum efficiencies from external quantum efficiencies without considering the device architecture could lead to erroneous conclusions.
The Land-Use Efficiency of Big Solar
NASA Astrophysics Data System (ADS)
Hernandez, R. R.; Hoffacker, M.; Field, C. B.
2013-12-01
As utility-scale solar energy (USSE) systems increase in size and number globally, there is growing interest in understanding environmental interactions between solar energy development and land-use decisions. Maximizing the efficient use of land for USSE is one of the major challenges in realizing the full potential of solar energy; however, the land-use efficiency (LUE; W m-2) of USSE remains unknown. We quantified the nominal LUE of 183 USSE installations (> 20 megawatts; planned, under construction, and operating) using California as a case study. In California, we found that USSE installations are concentrated in the Central Valley and the desert interior of southern California and have a LUE of 35.01 W m-2. The installations comprise approximately 86,000 hectares (ha), and more land is allocated for photovoltaic schemes (72,294 ha) than for concentrating solar power (13,604 ha). Photovoltaic installations are greater in abundance (93%) than concentrating solar power, but technology type and nameplate capacity have no impact on LUE. More USSE installations are on private land (80%), and these have a significantly greater LUE (35.83 W m-2) than installations on public land (25.42 W m-2). We show how LUE can be improved and how co-benefit opportunities can be integrated with USSE enterprises to maximize their economic, energetic, and environmental returns on investment. Figure caption: (Left) The distribution of utility-scale solar energy installations in California (constructed and in progress) by technology type (concentrating solar power and photovoltaic), with county lines shown. (Right) The same installations by location: public or privately owned land. Larger-capacity installations (megawatts) have relatively greater point size.
Monte Carlo evaluation of magnetically focused proton beams for radiosurgery
NASA Astrophysics Data System (ADS)
McAuley, Grant A.; Heczko, Sarah L.; Nguyen, Theodore T.; Slater, James M.; Slater, Jerry D.; Wroe, Andrew J.
2018-03-01
The purpose of this project is to investigate the advantages in dose distribution and delivery of proton beams focused by a triplet of quadrupole magnets in the context of potential radiosurgery treatments. Monte Carlo simulations were performed using various configurations of three quadrupole magnets located immediately upstream of a water phantom. Magnet parameters were selected to match what can be commercially manufactured as assemblies of rare-earth permanent magnetic materials. Focused unmodulated proton beams with a range of ~10 cm in water were target matched with passive collimated beams (the current beam delivery method for proton radiosurgery), and properties of transverse dose, depth dose and volumetric dose distributions were compared. Magnetically focused beams delivered beam spots of low eccentricity to Bragg peak depth with full widths at the 90% reference dose contour from ~2.5 to 5 mm. When the initial diameters of the focused beams were larger than those of the matching unfocused beams (10 of 11 cases), the focused beams showed 16%–83% larger peak-to-entrance dose ratios and 1.3- to 3.4-fold increases in dose delivery efficiency. Peak-to-entrance and efficiency benefits tended to increase with larger magnet gradients and larger initial diameter focused beams. Finally, it was observed that focusing tended to shift dose in the water phantom volume from the 80%–20% dose range to below 20% of reference dose, compared to unfocused beams. We conclude that focusing proton beams immediately upstream from tissue entry using permanent magnet assemblies can produce beams with larger peak-to-entrance dose ratios and increased dose delivery efficiencies. Such beams could potentially be used in the clinic to irradiate small-field radiosurgical targets with fewer beams, lower entrance dose and shorter treatment times.
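The paper's headline metric, the peak-to-entrance dose ratio, is simple to compute from a sampled depth-dose curve. The sketch below uses made-up, loosely Bragg-like numbers, not the study's data:

```python
def peak_to_entrance(depth_dose):
    """Ratio of the Bragg-peak (maximum) dose to the entrance (surface) dose."""
    return max(depth_dose) / depth_dose[0]

# Hypothetical, loosely Bragg-like depth-dose samples (arbitrary units),
# rising to a peak and falling off sharply beyond it:
unfocused = [1.0, 1.05, 1.1, 1.2, 1.5, 2.6, 0.2]
focused   = [1.0, 1.10, 1.3, 1.7, 2.4, 4.3, 0.3]

# Relative improvement of the focused beam over the unfocused one.
gain = peak_to_entrance(focused) / peak_to_entrance(unfocused)
```

A larger peak-to-entrance ratio means more dose at the target depth per unit of dose deposited in healthy tissue at the surface, which is the clinical benefit the abstract emphasizes.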
The geography of hotspots of rarity-weighted richness of birds and their coverage by Natura 2000
de Albuquerque, Fábio Suzart; Gregory, Andrew
2017-01-01
A major challenge for biogeographers and conservation planners is to identify where best to locate or distribute high-priority areas for conservation and to explore whether these areas are well represented by conservation actions such as protected areas (PAs). We aimed to identify high-priority areas for conservation, expressed as hotspots of rarity-weighted richness (HRR), sites that efficiently represent species, for birds across EU countries, and to explore whether HRRs are well represented by the Natura 2000 network. Natura 2000 is an evolving network of PAs that seeks to conserve biodiversity through the persistence of the most patrimonial species and habitats across Europe. This network includes Sites of Community Importance (SCI) and Special Areas of Conservation (SAC), the latter regulating the designation of Special Protected Areas (SPA). Distribution maps for 416 bird species and complementarity-based approaches were used to map geographical patterns of rarity-weighted richness (RWR) and HRR for birds. We used the species accumulation index to evaluate whether RWR was an efficient surrogate for identifying HRRs for birds. The results of our analysis support the proposition that prioritizing sites in order of RWR is a reliable way to identify sites that efficiently represent birds. HRRs were concentrated in the Mediterranean Basin and the alpine and boreal biogeographical regions of northern Europe. The cells with high RWR values did not correspond to cells where Natura 2000 was present. We suggest that patterns of RWR could become a focus for conservation biogeography. Our analysis demonstrates that identifying HRRs is a robust approach for prioritizing management actions, and reveals the need for more conservation actions, especially on HRRs. PMID:28379991
The geography of hotspots of rarity-weighted richness of birds and their coverage by Natura 2000.
Albuquerque, Fábio Suzart de; Gregory, Andrew
2017-01-01
A major challenge for biogeographers and conservation planners is to identify where best to locate or distribute high-priority areas for conservation and to explore whether these areas are well represented by conservation actions such as protected areas (PAs). We aimed to identify high-priority areas for conservation, expressed as hotspots of rarity-weighted richness (HRR), sites that efficiently represent species, for birds across EU countries, and to explore whether HRRs are well represented by the Natura 2000 network. Natura 2000 is an evolving network of PAs that seeks to conserve biodiversity through the persistence of the most patrimonial species and habitats across Europe. This network includes Sites of Community Importance (SCI) and Special Areas of Conservation (SAC), the latter regulating the designation of Special Protected Areas (SPA). Distribution maps for 416 bird species and complementarity-based approaches were used to map geographical patterns of rarity-weighted richness (RWR) and HRR for birds. We used the species accumulation index to evaluate whether RWR was an efficient surrogate for identifying HRRs for birds. The results of our analysis support the proposition that prioritizing sites in order of RWR is a reliable way to identify sites that efficiently represent birds. HRRs were concentrated in the Mediterranean Basin and the alpine and boreal biogeographical regions of northern Europe. The cells with high RWR values did not correspond to cells where Natura 2000 was present. We suggest that patterns of RWR could become a focus for conservation biogeography. Our analysis demonstrates that identifying HRRs is a robust approach for prioritizing management actions, and reveals the need for more conservation actions, especially on HRRs.
Wolf, Tabea; Zimprich, Daniel
2016-10-01
The reminiscence bump phenomenon has frequently been reported for the recall of autobiographical memories. The present study complements previous research by examining individual differences in the distribution of word-cued autobiographical memories. More importantly, we introduce predictor variables that might account for individual differences in the mean (location) and the standard deviation (scale) of individual memory distributions. All variables were derived from different theoretical accounts of the reminiscence bump phenomenon. We used a mixed location-scale logit-normal model to analyse the 4602 autobiographical memories reported by 118 older participants. Results show reliable individual differences in location and scale. After controlling for age and gender, individual proportions of first-time experiences and of positive memories, as well as ratings on Openness to new Experiences and Self-Concept Clarity, accounted for 29% of individual differences in the location and 42% of individual differences in the scale of autobiographical memory distributions. The results dovetail with a life-story account of the reminiscence bump, which integrates central components of previous accounts.
A facility location model for municipal solid waste management system under uncertain environment.
Yadav, Vinay; Bhurjee, A K; Karmakar, Subhankar; Dikshit, A K
2017-12-15
In a municipal solid waste management system, decision makers have to develop an insight into the processes involved, namely waste generation, collection, transportation, processing, and disposal. Many parameters in this system (e.g., waste generation rate, functioning costs of facilities, transportation cost, and revenues) are associated with uncertainties. Often, these uncertainties must be modeled under data scarcity, with only information on extreme variations available, which makes it difficult to generate the probability distribution functions or membership functions required by stochastic or fuzzy mathematical programming, respectively. Moreover, if uncertainties are ignored, problems such as insufficient capacities of waste management facilities or improper utilization of available funds may arise. To tackle the uncertainties of these parameters in a more efficient manner, an algorithm based on interval analysis has been developed. This algorithm is applied to find optimal solutions for a facility location model, which is formulated to select the economically best locations of transfer stations in a hypothetical urban center. Transfer stations are an integral part of contemporary municipal solid waste management systems, and economic siting of transfer stations ensures the financial sustainability of the system. The model is written in the mathematical programming language AMPL with KNITRO as the solver. The developed model selects the five economically best locations out of ten potential locations, with an optimum overall cost of approximately [394,836, 757,440] Rs./day ([5906, 11,331] USD/day). Further, the requirement for uncertainty modeling is explained based on the results of a sensitivity analysis. Copyright © 2017 Elsevier B.V. All rights reserved.
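The interval-analysis idea can be sketched as follows: carry each uncertain cost as a closed interval, add and scale intervals instead of point values, and rank candidate sites by a chosen interval order (the midpoint here). All station names and figures below are hypothetical, not the paper's case-study data:

```python
class Interval:
    """Closed interval [lo, hi] with the arithmetic needed for cost totals."""
    def __init__(self, lo, hi):
        assert lo <= hi
        self.lo, self.hi = lo, hi
    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def scale(self, k):  # k >= 0, e.g. tonnes of waste routed per day
        return Interval(k * self.lo, k * self.hi)
    def midpoint(self):
        return 0.5 * (self.lo + self.hi)

# Hypothetical per-station daily costs (Rs./day): a fixed operating cost and a
# per-tonne haulage cost, each known only to within an interval.
stations = {
    "TS-1": (Interval(40000, 70000), Interval(900, 1400)),
    "TS-2": (Interval(55000, 80000), Interval(700, 1000)),
}
waste_tonnes = 50  # assumed daily waste routed through each candidate station

def total_cost(fixed, per_tonne, tonnes):
    return fixed + per_tonne.scale(tonnes)

costs = {name: total_cost(f, v, waste_tonnes) for name, (f, v) in stations.items()}
# Rank candidate sites by interval midpoint, one common ordering for intervals.
best = min(costs, key=lambda name: costs[name].midpoint())
```

The paper's algorithm handles a full location model with many interval parameters inside a nonlinear program; this sketch only shows how interval arithmetic keeps the extreme-variation information intact through the cost computation.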
International Review of Standards and Labeling Programs for Distribution Transformers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Letschert, Virginie; Scholand, Michael; Carreño, Ana María
Transmission and distribution (T&D) losses in electricity networks represent 8.5% of final energy consumption in the world. In Latin America, T&D losses range between 6% and 20% of final energy consumption, and represent 7% in Chile. Because approximately one-third of T&D losses take place in distribution transformers alone, there is significant potential to save energy and reduce costs and carbon emissions through policy intervention to increase distribution transformer efficiency. A large number of economies around the world have recognized the significant impact of addressing distribution losses and have implemented policies to support market transformation towards more efficient distribution transformers. As a result, there is considerable international experience to be shared and leveraged to inform countries interested in reducing distribution losses through policy intervention. The report builds upon past international studies of standards and labeling (S&L) programs for distribution transformers to present the current energy efficiency programs for distribution transformers around the world.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-13
... Medium- and Low-Voltage Dry-Type Distribution Transformers AGENCY: Department of Energy, Office of Energy... Dry-Type and the second addressing Low-Voltage Dry-Type Distribution Transformers. The Liquid Immersed... proposed rule for regulating the energy efficiency of distribution transformers, as authorized by the...
Discrete Wavelet Transform for Fault Locations in Underground Distribution System
NASA Astrophysics Data System (ADS)
Apisit, C.; Ngaopitakkul, A.
2010-10-01
In this paper, a technique for detecting faults in underground distribution systems is presented. The Discrete Wavelet Transform (DWT), based on traveling-wave theory, is employed to detect the high-frequency components and to identify fault locations in the underground distribution system. The first peak time obtained from the faulty bus is used to calculate the distance of the fault from the sending end. The validity of the proposed technique is tested with various fault inception angles, fault locations, and faulty phases. The results show that the proposed technique performs satisfactorily and will be very useful in the development of power system protection schemes.
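A minimal illustration of the approach (not the authors' implementation): a one-level Haar DWT exposes the high-frequency burst produced by the fault-generated traveling wave, and the arrival time of the first detail-coefficient peak is converted into a distance. The sampling rate, wave velocity, threshold rule, and the simplified distance formula d = v*t are our assumptions; the paper's technique additionally uses the faulty-bus identification step.

```python
def haar_dwt(signal):
    """One-level Haar DWT: returns (approximation, detail) coefficients."""
    approx, detail = [], []
    for i in range(0, len(signal) - 1, 2):
        a, b = signal[i], signal[i + 1]
        approx.append((a + b) / 2 ** 0.5)
        detail.append((a - b) / 2 ** 0.5)
    return approx, detail

def first_peak_distance(signal, fs, wave_velocity):
    """Locate the first high-frequency burst in the detail band and convert its
    arrival time into a fault distance, d = v * t (single observation point)."""
    _, detail = haar_dwt(signal)
    # Simple threshold: 5x the mean absolute detail magnitude (assumed rule).
    threshold = 5 * (sum(abs(d) for d in detail) / len(detail) + 1e-12)
    for k, d in enumerate(detail):
        if abs(d) > threshold:
            t = (2 * k) / fs          # each detail coefficient spans two samples
            return wave_velocity * t
    return None

# Synthetic waveform: a flat signal with one injected step transient (made-up data).
fs = 1_000_000.0                      # 1 MHz sampling
sig = [0.0] * 200
sig[12] = 1.0                         # transient arriving at sample 12 (t = 12 us)
d = first_peak_distance(sig, fs, wave_velocity=1.5e8)  # ~150 m/us in cable
```

With the assumed 1.5e8 m/s propagation speed, a burst at 12 µs maps to a fault 1800 m from the measurement point; real traveling-wave schemes refine this with reflections and multi-ended measurements.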
Process for predicting structural performance of mechanical systems
Gardner, D.R.; Hendrickson, B.A.; Plimpton, S.J.; Attaway, S.W.; Heinstein, M.W.; Vaughan, C.T.
1998-05-19
A process for predicting the structural performance of a mechanical system represents the mechanical system by a plurality of surface elements. The surface elements are grouped according to their location in the volume occupied by the mechanical system so that contacts between surface elements can be efficiently located. The process is well suited for efficient practice on multiprocessor computers. 12 figs.
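Grouping surface elements by their location so that contacts can be found efficiently is, in spirit, a spatial-hashing step. A minimal sketch (points standing in for surface elements, cell size assumed, not the patented process itself) might look like:

```python
from collections import defaultdict

def bucket_elements(elements, cell):
    """Group elements (3-D points here, for brevity) into a spatial hash so
    that contact candidates are only sought within neighbouring cells."""
    grid = defaultdict(list)
    for idx, (x, y, z) in enumerate(elements):
        grid[(int(x // cell), int(y // cell), int(z // cell))].append(idx)
    return grid

def contact_candidates(elements, cell):
    """Pairs of elements close enough (same or adjacent cell) to test for contact."""
    grid = bucket_elements(elements, cell)
    pairs = set()
    for (cx, cy, cz), members in grid.items():
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    for j in grid.get((cx + dx, cy + dy, cz + dz), []):
                        for i in members:
                            if i < j:
                                pairs.add((i, j))
    return pairs

# Two nearby points and one distant point: only the nearby pair is a candidate.
pts = [(0.1, 0.1, 0.1), (0.2, 0.1, 0.1), (5.0, 5.0, 5.0)]
cands = contact_candidates(pts, cell=1.0)
```

The payoff is that contact search cost scales with the number of locally clustered elements rather than with all element pairs, which is also what makes the work easy to distribute across processors.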
Chiu, Singa Wang; Huang, Chao-Chih; Chiang, Kuo-Wei; Wu, Mei-Fang
2015-01-01
Transnational companies, operating in extremely competitive global markets, always seek to lower their operating costs, such as inventory holding costs in their intra-supply chain system. This paper incorporates a cost-reducing product distribution policy into an intra-supply chain system with multiple sales locations and quality assurance studied by [Chiu et al., Expert Syst Appl, 40:2669-2676, (2013)]. Under the proposed cost-reducing distribution policy, an added initial delivery of end items is distributed to multiple sales locations to meet their demand during the production unit's uptime and rework time. After rework, when the remaining production lot passes quality assurance, n fixed-quantity installments of finished items are transported to the sales locations at fixed time intervals. Mathematical modeling and optimization techniques are used to derive closed-form optimal operating policies for the proposed system. Furthermore, the study demonstrates significant savings in stock holding costs for both the production unit and the sales locations. The alternative of outsourcing the product delivery task to an external distributor is analyzed to assist managerial decision making on potential outsourcing issues and to facilitate further reductions in operating costs.
A Novel Method for Constructing a WIFI Positioning System with Efficient Manpower
Du, Yuanfeng; Yang, Dongkai; Xiu, Chundi
2015-01-01
With the rapid development of WIFI technology, WIFI-based indoor positioning technology has been widely studied for location-based services. To solve the problems related to the signal strength database adopted in the widely used fingerprint positioning technology, we first introduce a new system framework in this paper, which includes a modified AP firmware and some cheap self-made WIFI sensor anchors. The periodically scanned reports regarding the neighboring APs and sensor anchors are sent to the positioning server and serve as the calibration points. Besides calculating correlations between the target points and the neighboring calibration points, the regression algorithm takes full advantage of an important but easily overlooked feature: the signal attenuation model varies across regions. Thus, a novel method called RSSI Geography Weighted Regression (RGWR) is proposed to solve the fingerprint database construction problem. The average error of all the calibration points' self-localization results helps to decide whether the database is up to date or has to be updated automatically. The effects of anchors on system performance are further studied, leading to the conclusion that anchors should be deployed at locations that capture the features of the RSSI distributions. The proposed system is convenient for establishing a practical positioning system, and extensive experiments have been performed to validate that the proposed method is robust and manpower efficient. PMID:25868078
A novel method for constructing a WIFI positioning system with efficient manpower.
Du, Yuanfeng; Yang, Dongkai; Xiu, Chundi
2015-04-10
With the rapid development of WIFI technology, WIFI-based indoor positioning technology has been widely studied for location-based services. To solve the problems related to the signal strength database adopted in the widely used fingerprint positioning technology, we first introduce a new system framework in this paper, which includes a modified AP firmware and some cheap self-made WIFI sensor anchors. The periodically scanned reports regarding the neighboring APs and sensor anchors are sent to the positioning server and serve as the calibration points. Besides calculating correlations between the target points and the neighboring calibration points, the regression algorithm takes full advantage of an important but easily overlooked feature: the signal attenuation model varies across regions. Thus, a novel method called RSSI Geography Weighted Regression (RGWR) is proposed to solve the fingerprint database construction problem. The average error of all the calibration points' self-localization results helps to decide whether the database is up to date or has to be updated automatically. The effects of anchors on system performance are further studied, leading to the conclusion that anchors should be deployed at locations that capture the features of the RSSI distributions. The proposed system is convenient for establishing a practical positioning system, and extensive experiments have been performed to validate that the proposed method is robust and manpower efficient.
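The region-dependent attenuation idea behind RGWR can be sketched (our simplification, not the authors' algorithm) as a geographically weighted least-squares fit of the log-distance path-loss model RSSI = A - 10*n*log10(d), where a Gaussian spatial kernel gives calibration points near the target location more weight, so the fitted exponent n adapts to the local region:

```python
import math

def fit_weighted_pathloss(samples, target, bandwidth):
    """Weighted least-squares fit of RSSI = A - 10*n*log10(d), weighting each
    calibration sample by its geographic proximity to the target location.
    samples: list of ((x, y), distance_to_ap, rssi)."""
    sw = sx = sy = sxx = sxy = 0.0
    for (x, y), d, rssi in samples:
        g2 = (x - target[0]) ** 2 + (y - target[1]) ** 2
        w = math.exp(-g2 / (2 * bandwidth ** 2))   # Gaussian spatial kernel
        u = -10.0 * math.log10(d)                  # regressor for exponent n
        sw += w; sx += w * u; sy += w * rssi
        sxx += w * u * u; sxy += w * u * rssi
    n = (sw * sxy - sx * sy) / (sw * sxx - sx * sx)
    A = (sy - n * sx) / sw
    return A, n

# Synthetic calibration data generated from A = -30 dBm, n = 2 (assumed values).
calib = [((0.0, 0.0), 1.0), ((1.0, 0.0), 2.0), ((0.0, 1.0), 4.0)]
samples = [((x, y), d, -30.0 - 20.0 * math.log10(d)) for (x, y), d in calib]
A, n = fit_weighted_pathloss(samples, target=(0.0, 0.0), bandwidth=5.0)
```

On this noise-free synthetic data the fit recovers the generating parameters exactly; in practice the local fit is what lets the database-construction step interpolate RSSI at uncalibrated points within each region.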
NASA Astrophysics Data System (ADS)
Iltis, A.; Snoussi, H.; Magalhaes, L. Rodrigues de; Hmissi, M. Z.; Zafiarifety, C. Tata; Tadonkeng, G. Zeufack; Morel, C.
2018-01-01
During nuclear decommissioning or waste management operations, a camera that could image the contamination field and identify and quantify the contaminants would represent great progress. Compton cameras have been proposed, but their limited efficiency for high-energy gamma rays and their cost have severely limited their application. Our objective is to promote a Compton camera for the energy range 200 keV - 2 MeV that uses fast scintillating crystals and a new concept for locating scintillation events: Temporal Imaging. Temporal Imaging uses monolithic plates of fast scintillators and measures the distribution of photon arrival times in order to locate each gamma ray with high precision in space (X, Y, Z), time (T), and energy (E). This provides a native estimation of the depth of interaction (Z) of every detected gamma ray. It also allows a correction for the propagation time of scintillation photons inside the crystal, resulting in excellent time resolution. The high temporal resolution of the system makes it possible to veto background quite efficiently by using a narrow time coincidence (< 300 ps). It is also possible to reconstruct the direction of propagation of the photons inside the detector using timing constraints. The sensitivity of our system is better than 1 nSv/h in a 60 s acquisition with a 22Na source. The project TEMPORAL is funded by ANDRA/PAI under grant No. RTSCNADAA160019.
GISpark: A Geospatial Distributed Computing Platform for Spatiotemporal Big Data
NASA Astrophysics Data System (ADS)
Wang, S.; Zhong, E.; Wang, E.; Zhong, Y.; Cai, W.; Li, S.; Gao, S.
2016-12-01
Geospatial data are growing exponentially because of the proliferation of cost-effective and ubiquitous positioning technologies such as global remote-sensing satellites and location-based devices. Analyzing large amounts of geospatial data can provide great value for both industrial and scientific applications. The data- and compute-intensive characteristics inherent in geospatial big data increasingly pose great challenges to technologies for storing, computing on, and analyzing data. Such challenges require a scalable and efficient architecture that can store, query, analyze, and visualize large-scale spatiotemporal data. Therefore, we developed GISpark, a geospatial distributed computing platform for processing large-scale vector, raster, and stream data. GISpark is built on the latest virtualized computing infrastructures and a distributed computing architecture. OpenStack and Docker are used to build the multi-user cloud computing infrastructure hosting GISpark. Virtual storage systems such as HDFS, Ceph, and MongoDB are combined and adopted for spatiotemporal data storage management. A Spark-based algorithm framework is developed for efficient parallel computing. Within this framework, SuperMap GIScript and various open-source GIS libraries can be integrated into GISpark. GISpark can also be integrated with scientific computing environments (e.g., Anaconda), interactive computing web applications (e.g., Jupyter notebook), and machine learning tools (e.g., TensorFlow/Orange). The associated geospatial facilities of GISpark, in conjunction with the scientific computing environment, exploratory spatial data analysis tools, and temporal data management and analysis systems, make up a powerful geospatial computing tool. GISpark not only provides spatiotemporal big data processing capacity in the geospatial field, but also provides a spatiotemporal computational model and advanced geospatial visualization tools applicable to other domains with spatial properties.
We tested the performance of the platform based on taxi trajectory analysis. Results suggested that GISpark achieves excellent run time performance in spatiotemporal big data applications.
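The taxi-trajectory benchmark can be mimicked in miniature: snap each GPS fix to a grid cell and count fixes per cell, the same map/reduce shape a Spark job would use at scale. The coordinates and cell size below are illustrative, not the paper's dataset:

```python
from collections import Counter

def cell_of(lon, lat, cell_deg):
    """Snap a GPS fix to the index of its nearest grid cell (simple binning)."""
    return (round(lon / cell_deg), round(lat / cell_deg))

def trajectory_heatmap(points, cell_deg=0.01):
    """Count taxi GPS fixes per grid cell: the 'map' step emits a cell key per
    fix and the 'reduce' step sums counts, mirroring a Spark aggregation."""
    return Counter(cell_of(lon, lat, cell_deg) for lon, lat in points)

# Three hypothetical fixes near Beijing; the first two fall in the same cell.
fixes = [(116.400, 39.900), (116.401, 39.901), (116.450, 39.950)]
heat = trajectory_heatmap(fixes)
```

In GISpark the same grouping would run as a distributed key-value aggregation over partitioned trajectory data, which is why run time scales with cluster size rather than with a single machine's memory.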
NASA Astrophysics Data System (ADS)
Raposo, Henrique; Mughal, Shahid; Ashworth, Richard
2018-04-01
Acoustic receptivity to Tollmien-Schlichting waves in the presence of surface roughness is investigated for a flat plate boundary layer using the time-harmonic incompressible linearized Navier-Stokes equations. It is shown to be an accurate and efficient means of predicting receptivity amplitudes and, therefore, to be more suitable for parametric investigations than other approaches with direct-numerical-simulation-like accuracy. Comparison with the literature provides strong evidence of the correctness of the approach, including the ability to quantify non-parallel flow effects. These effects are found to be small for the efficiency function over a wide range of frequencies and local Reynolds numbers. In the presence of a two-dimensional wavy-wall, non-parallel flow effects are quite significant, producing both wavenumber detuning and an increase in maximum amplitude. However, a smaller influence is observed when considering an oblique Tollmien-Schlichting wave. This is explained by considering the non-parallel effects on receptivity and on linear growth which may, under certain conditions, cancel each other out. Ultimately, we undertake a Monte Carlo type uncertainty quantification analysis with two-dimensional distributed random roughness. Its power spectral density (PSD) is assumed to follow a power law with an associated uncertainty following a probabilistic Gaussian distribution. The effects of the acoustic frequency over the mean amplitude of the generated two-dimensional Tollmien-Schlichting waves are studied. A strong dependence on the mean PSD shape is observed and discussed according to the basic resonance mechanisms leading to receptivity. The growth of Tollmien-Schlichting waves is predicted with non-linear parabolized stability equations computations to assess the effects of stochasticity in transition location.
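The Monte Carlo uncertainty-quantification step can be sketched as follows, with the amplitude proxy, power-law parameters, and Gaussian spread all stand-in assumptions rather than the paper's model: draw the PSD exponent from a Gaussian, evaluate the roughness PSD at a resonant wavenumber, and accumulate statistics of the resulting amplitude proxy.

```python
import random
import statistics

def psd(k, c, alpha):
    """Power-law roughness PSD, S(k) = c * k**(-alpha) (model assumption)."""
    return c * k ** (-alpha)

def mc_mean_amplitude(k_res, trials=2000, alpha_mean=2.0, alpha_sd=0.2, c=1.0, seed=1):
    """Monte Carlo mean/spread of a TS-wave amplitude proxy. The amplitude is
    taken as proportional to sqrt(S(k_res)) at the resonant wavenumber, a
    stand-in for the full linearized Navier-Stokes receptivity calculation."""
    rng = random.Random(seed)
    amps = [psd(k_res, c, rng.gauss(alpha_mean, alpha_sd)) ** 0.5
            for _ in range(trials)]
    return statistics.mean(amps), statistics.stdev(amps)

mean_amp, sd_amp = mc_mean_amplitude(k_res=3.0)
```

The strong dependence of the mean amplitude on the assumed PSD shape is exactly the sensitivity the abstract reports; here it shows up as the spread of the proxy amplitude under the uncertain exponent.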
Federated data storage system prototype for LHC experiments and data intensive science
NASA Astrophysics Data System (ADS)
Kiryanov, A.; Klimentov, A.; Krasnopevtsev, D.; Ryabinkin, E.; Zarochentsev, A.
2017-10-01
Rapid increase of data volume from the experiments running at the Large Hadron Collider (LHC) prompted physics computing community to evaluate new data handling and processing solutions. Russian grid sites and universities’ clusters scattered over a large area aim at the task of uniting their resources for future productive work, at the same time giving an opportunity to support large physics collaborations. In our project we address the fundamental problem of designing a computing architecture to integrate distributed storage resources for LHC experiments and other data-intensive science applications and to provide access to data from heterogeneous computing facilities. Studies include development and implementation of federated data storage prototype for Worldwide LHC Computing Grid (WLCG) centres of different levels and University clusters within one National Cloud. The prototype is based on computing resources located in Moscow, Dubna, Saint Petersburg, Gatchina and Geneva. This project intends to implement a federated distributed storage for all kind of operations such as read/write/transfer and access via WAN from Grid centres, university clusters, supercomputers, academic and commercial clouds. The efficiency and performance of the system are demonstrated using synthetic and experiment-specific tests including real data processing and analysis workflows from ATLAS and ALICE experiments, as well as compute-intensive bioinformatics applications (PALEOMIX) running on supercomputers. We present topology and architecture of the designed system, report performance and statistics for different access patterns and show how federated data storage can be used efficiently by physicists and biologists. We also describe how sharing data on a widely distributed storage system can lead to a new computing model and reformations of computing style, for instance how bioinformatics program running on supercomputers can read/write data from the federated storage.
Application of Theodorsen's Theory to Propeller Design
NASA Technical Reports Server (NTRS)
Crigler, John L
1948-01-01
A theoretical analysis is presented for obtaining, by use of Theodorsen's propeller theory, the load distribution along a propeller radius to give the optimum propeller efficiency for any design condition. The efficiencies realized by designing for the optimum load distribution are given in graphs, and the optimum efficiency for any design condition may be read directly from the graph without any laborious calculations. Examples are included to illustrate the method of obtaining the optimum load distributions for both single-rotating and dual-rotating propellers.
Application of Theodorsen's theory to propeller design
NASA Technical Reports Server (NTRS)
Crigler, John L
1949-01-01
A theoretical analysis is presented for obtaining, by use of Theodorsen's propeller theory, the load distribution along a propeller radius to give the optimum propeller efficiency for any design condition. The efficiencies realized by designing for the optimum load distribution are given in graphs, and the optimum efficiency for any design condition may be read directly from the graph without any laborious calculations. Examples are included to illustrate the method of obtaining the optimum load distributions for both single-rotating and dual-rotating propellers.
Leveraging socially networked mobile ICT platforms for the last-mile delivery problem.
Suh, Kyo; Smith, Timothy; Linhoff, Michelle
2012-09-04
Increasing numbers of people are managing their social networks on mobile information and communication technology (ICT) platforms. This study materializes these social relationships by leveraging spatial and networked information for sharing excess capacity to reduce the environmental impacts associated with "last-mile" package delivery systems from online purchases, particularly in low population density settings. Alternative package pickup location systems (PLS), such as a kiosk on a public transit platform or in a grocery store, have been suggested as effective strategies for reducing package travel miles and greenhouse gas emissions, compared to current door-to-door delivery models (CDS). However, our results suggest that a pickup location delivery system operating in a suburban setting may actually increase travel miles and emissions. Only once a social network is employed to assist in package pickup (SPLS) are significant reductions in the last-mile delivery distance and carbon emissions observed across both urban and suburban settings. Implications for logistics management's decades-long focus on improving efficiencies of dedicated distribution systems through specialization, as well as for public policy targeting carbon emissions of the transport sector are discussed.
Integrated renewable energy networks
NASA Astrophysics Data System (ADS)
Mansouri Kouhestani, F.; Byrne, J. M.; Hazendonk, P.; Brown, M. B.; Spencer, L.
2015-12-01
This multidisciplinary research is focused on studying implementation of diverse renewable energy networks. Our modern economy now depends heavily on large-scale, energy-intensive technologies. A transition to low carbon, renewable sources of energy is needed. We will develop a procedure for designing and analyzing renewable energy systems based on the magnitude, distribution, temporal characteristics, reliability and costs of the various renewable resources (including biomass waste streams) in combination with various measures to control the magnitude and timing of energy demand. The southern Canadian prairies are an ideal location for developing renewable energy networks. The region is blessed with steady, westerly winds and bright sunshine for more hours annually than Houston Texas. Extensive irrigation agriculture provides huge waste streams that can be processed biologically and chemically to create a range of biofuels. The first stage involves mapping existing energy and waste flows on a neighbourhood, municipal, and regional level. Optimal sites and combinations of sites for solar and wind electrical generation, such as ridges, rooftops and valley walls, will be identified. Geomatics based site and grid analyses will identify best locations for energy production based on efficient production and connectivity to regional grids.
Jiang, Ya-Jun; Che, Mei-Xia; Yuan, Jin-Qiao; Xie, Yuan-Yuan; Yan, Xian-Zhong; Hu, Hong-Yu
2011-01-01
Huntington disease (HD) is an autosomal inherited disorder that causes the deterioration of brain cells. The polyglutamine (polyQ) expansion of huntingtin (Htt) is implicated in the pathogenesis of HD via interaction with an RNA splicing factor, Htt yeast two-hybrid protein A/formin-binding protein 11 (HYPA/FBP11). Besides the pathogenic polyQ expansion, Htt also contains a proline-rich region (PRR) located immediately C-terminal to the polyQ tract. However, how the polyQ expansion influences the PRR-mediated protein interaction and how this abnormal interaction leads to the biological consequence remain elusive. Our NMR structural analysis indicates that the PRR motif of Htt cooperatively interacts with the tandem WW domains of HYPA through a domain-chaperoning effect of WW1 on WW2. The polyQ-expanded Htt sequesters HYPA to the cytosolic location and then significantly reduces the efficiency of pre-mRNA splicing. We propose that the toxic gain-of-function of the polyQ-expanded Htt that causes dysfunction of cellular RNA processing contributes to the pathogenesis of HD. PMID:21566141
Jiang, Ya-Jun; Che, Mei-Xia; Yuan, Jin-Qiao; Xie, Yuan-Yuan; Yan, Xian-Zhong; Hu, Hong-Yu
2011-07-15
Huntington disease (HD) is an autosomal inherited disorder that causes the deterioration of brain cells. The polyglutamine (polyQ) expansion of huntingtin (Htt) is implicated in the pathogenesis of HD via interaction with an RNA splicing factor, Htt yeast two-hybrid protein A/formin-binding protein 11 (HYPA/FBP11). Besides the pathogenic polyQ expansion, Htt also contains a proline-rich region (PRR) located immediately C-terminal to the polyQ tract. However, how the polyQ expansion influences the PRR-mediated protein interaction and how this abnormal interaction leads to the biological consequence remain elusive. Our NMR structural analysis indicates that the PRR motif of Htt cooperatively interacts with the tandem WW domains of HYPA through a domain-chaperoning effect of WW1 on WW2. The polyQ-expanded Htt sequesters HYPA to the cytosolic location and then significantly reduces the efficiency of pre-mRNA splicing. We propose that the toxic gain-of-function of the polyQ-expanded Htt that causes dysfunction of cellular RNA processing contributes to the pathogenesis of HD.
Progress in Validation of Wind-US for Ramjet/Scramjet Combustion
NASA Technical Reports Server (NTRS)
Engblom, William A.; Frate, Franco C.; Nelson, Chris C.
2005-01-01
Validation of the Wind-US flow solver against two sets of experimental data involving high-speed combustion is attempted. First, the well-known Burrows-Kurkov supersonic hydrogen-air combustion test case is simulated, and the sensitivity of ignition location and combustion performance to key parameters is explored. Second, a numerical model is developed for simulation of an X-43B candidate, full-scale, JP-7-fueled, internal flowpath operating in ramjet mode. Numerical results using an ethylene-air chemical kinetics model are directly compared against previously existing pressure-distribution data along the entire flowpath, obtained in direct-connect testing conducted at NASA Langley Research Center. Comparisons to derived quantities such as burn efficiency and thermal throat location are also made. Reasonable to excellent agreement with experimental data is demonstrated for key parameters in both simulation efforts. Additional Wind-US features needed to improve simulation efforts are described herein, including maintaining stagnation conditions at inflow boundaries for multi-species flow. An open issue regarding the sensitivity of isolator unstart to key model parameters is briefly discussed.
Learning by Association in Plants.
Gagliano, Monica; Vyazovskiy, Vladyslav V; Borbély, Alexander A; Grimonprez, Mavra; Depczynski, Martial
2016-12-02
In complex and ever-changing environments, resources such as food are often scarce and unevenly distributed in space and time. Therefore, utilizing external cues to locate and remember high-quality sources allows more efficient foraging, thus increasing chances for survival. Associations between environmental cues and food are readily formed because of the tangible benefits they confer. While examples of the key role they play in shaping foraging behaviours are widespread in the animal world, the possibility that plants are also able to acquire learned associations to guide their foraging behaviour has never been demonstrated. Here we show that this type of learning occurs in the garden pea, Pisum sativum. By using a Y-maze task, we show that the position of a neutral cue, predicting the location of a light source, affected the direction of plant growth. This learned behaviour prevailed over innate phototropism. Notably, learning was successful only when it occurred during the subjective day, suggesting that behavioural performance is regulated by metabolic demands. Our results show that associative learning is an essential component of plant behaviour. We conclude that associative learning represents a universal adaptive mechanism shared by both animals and plants.
Continuum topology optimization considering uncertainties in load locations based on the cloud model
NASA Astrophysics Data System (ADS)
Liu, Jie; Wen, Guilin
2018-06-01
Few researchers have paid attention to designing structures in consideration of uncertainties in loading locations, which may significantly influence structural performance. In this work, cloud models are employed to depict the uncertainties in the loading locations. A robust algorithm is developed in the context of minimizing the expectation of the structural compliance while conforming to a material volume constraint. To guarantee optimal solutions, sufficient cloud drops are used, which in turn leads to low efficiency. An innovative strategy is then implemented to greatly improve the computational efficiency. A modified soft-kill bi-directional evolutionary structural optimization method with derived sensitivity numbers is used to obtain the robust configurations. Several numerical examples are presented to demonstrate the effectiveness and efficiency of the proposed algorithm.
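The cloud drops referred to in this abstract come from the standard forward normal cloud generator, which characterizes an uncertain quantity by an expectation Ex, entropy En, and hyper-entropy He. A minimal sketch of that generator is below; the specific parameter values (a nominal load position of 0.5 on a normalized edge) are illustrative assumptions, not values from the paper:

```python
import numpy as np

def normal_cloud_drops(ex, en, he, n, seed=0):
    """Forward normal cloud generator: each drop x_i ~ N(ex, en_i^2),
    where the per-drop entropy en_i is itself drawn as N(en, he^2)."""
    rng = np.random.default_rng(seed)
    en_i = rng.normal(en, he, n)           # second-order randomness from hyper-entropy
    return rng.normal(ex, np.abs(en_i))    # cloud drops around the nominal load location

# e.g. nominal load position 0.5, entropy 0.05, hyper-entropy 0.005
drops = normal_cloud_drops(0.5, 0.05, 0.005, 10000)
```

Each drop defines one load case, and the expected compliance is a drop-weighted average over all cases, which is why many drops are expensive and why the paper's efficiency strategy matters.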
Comparing Different Fault Identification Algorithms in Distributed Power System
NASA Astrophysics Data System (ADS)
Alkaabi, Salim
A power system is a huge, complex system that delivers electrical power from the generation units to the consumers. As the demand for electrical power increases, distributed power generation was introduced to the power system. Faults may occur in the power system at any time and in different locations. These faults can cause severe damage, as they might lead to full failure of the power system. Using distributed generation in the power system makes it even harder to identify the location of faults in the system, which makes this an important area for research. The main objective of this work is to test different fault location identification algorithms on a power system with varying amounts of power injected by distributed generators. In this thesis, different fault location identification algorithms have been tested and compared while different amounts of power are injected from distributed generators. The algorithms were tested on the IEEE 34 node test feeder using MATLAB, and the results were compared to find when these algorithms might fail and how reliable these methods are.
Excitation efficiency of an optical fiber core source
NASA Technical Reports Server (NTRS)
Egalon, Claudio O.; Rogowski, Robert S.; Tai, Alan C.
1992-01-01
The exact field solution of a step-index profile fiber is used to determine the excitation efficiency of a distribution of sources in the core of an optical fiber. Previous results for a thin-film cladding source distribution are used for comparison with the core-source counterpart. The behavior of power efficiency with the fiber parameters is examined and found to be similar to the behavior exhibited by cladding sources. It is also found that a core-source fiber is two orders of magnitude more efficient than a fiber with a bulk distribution of cladding sources. This result agrees qualitatively with previous ones obtained experimentally.
Benchmarking of vertically-integrated CO2 flow simulations at the Sleipner Field, North Sea
NASA Astrophysics Data System (ADS)
Cowton, L. R.; Neufeld, J. A.; White, N. J.; Bickle, M. J.; Williams, G. A.; White, J. C.; Chadwick, R. A.
2018-06-01
Numerical modeling plays an essential role in both identifying and assessing sub-surface reservoirs that might be suitable for future carbon capture and storage projects. Accuracy of flow simulations is tested by benchmarking against historic observations from on-going CO2 injection sites. At the Sleipner project located in the North Sea, a suite of time-lapse seismic reflection surveys enables the three-dimensional distribution of CO2 at the top of the reservoir to be determined as a function of time. Previous attempts have used Darcy flow simulators to model CO2 migration throughout this layer, given the volume of injection with time and the location of the injection point. Due primarily to computational limitations preventing adequate exploration of model parameter space, these simulations usually fail to match the observed distribution of CO2 as a function of space and time. To circumvent these limitations, we develop a vertically-integrated fluid flow simulator that is based upon the theory of topographically controlled, porous gravity currents. This computationally efficient scheme can be used to invert for the spatial distribution of reservoir permeability required to minimize differences between the observed and calculated CO2 distributions. When a uniform reservoir permeability is assumed, inverse modeling is unable to adequately match the migration of CO2 at the top of the reservoir. If, however, the width and permeability of a mapped channel deposit are allowed to independently vary, a satisfactory match between the observed and calculated CO2 distributions is obtained. Finally, the ability of this algorithm to forecast the flow of CO2 at the top of the reservoir is assessed. By dividing the complete set of seismic reflection surveys into training and validation subsets, we find that the spatial pattern of permeability required to match the training subset can successfully predict CO2 migration for the validation subset. 
This ability suggests that it might be feasible to forecast migration patterns into the future with a degree of confidence. Nevertheless, our analysis highlights the difficulty in estimating reservoir parameters away from the region swept by CO2 without additional observational constraints.
NASA Technical Reports Server (NTRS)
Maxwell, Theresa G.; McNair, Ann R. (Technical Monitor)
2002-01-01
The planning processes for the International Space Station (ISS) Program are quite complex. Detailed mission planning for ISS on-orbit operations is a distributed function. Pieces of the on-orbit plan are developed by multiple planning organizations, located around the world, based on their respective expertise and responsibilities. The "pieces" are then integrated to yield the final detailed plan that will be executed onboard the ISS. Previous space programs have not distributed the planning and scheduling functions to this extent. Major ISS planning organizations are currently located in the United States (at both the NASA Johnson Space Center (JSC) and NASA Marshall Space Flight Center (MSFC)), in Russia, in Europe, and in Japan. Software systems have been developed by each of these planning organizations to support their assigned planning and scheduling functions. Although there is some cooperative development and sharing of key software components, each planning system has been tailored to meet the unique requirements and operational environment of the facility in which it operates. However, all the systems must operate in a coordinated fashion in order to effectively and efficiently produce a single integrated plan of ISS operations, in accordance with the established planning processes. This paper addresses lessons learned during the development of these multiple distributed planning systems, from the perspective of the developer of one of the software systems. The lessons focus on the coordination required to allow the multiple systems to operate together, rather than on the problems associated with the development of any particular system. 
Included in the paper is a discussion of typical problems faced during the development and coordination process, such as incompatible development schedules, difficulties in defining system interfaces, technical coordination and funding for shared tools, continually evolving planning concepts/requirements, programmatic and budget issues, and external influences. Techniques that mitigated some of these problems will also be addressed, along with recommendations for any future programs involving the development of multiple planning and scheduling systems. Many of these lessons learned are not unique to the area of planning and scheduling systems, so may be applied to other distributed ground systems that must operate in concert to successfully support space mission operations.
NASA Technical Reports Server (NTRS)
Maxwell, Theresa G.
2002-01-01
The planning processes for the International Space Station (ISS) Program are quite complex. Detailed mission planning for ISS on-orbit operations is a distributed function. Pieces of the on-orbit plan are developed by multiple planning organizations, located around the world, based on their respective expertise and responsibilities. The pieces are then integrated to yield the final detailed plan that will be executed onboard the ISS. Previous space programs have not distributed the planning and scheduling functions to this extent. Major ISS planning organizations are currently located in the United States (at both the NASA Johnson Space Center (JSC) and NASA Marshall Space Flight Center (MSFC)), in Russia, in Europe, and in Japan. Software systems have been developed by each of these planning organizations to support their assigned planning and scheduling functions. Although there is some cooperative development and sharing of key software components, each planning system has been tailored to meet the unique requirements and operational environment of the facility in which it operates. However, all the systems must operate in a coordinated fashion in order to effectively and efficiently produce a single integrated plan of ISS operations, in accordance with the established planning processes. This paper addresses lessons learned during the development of these multiple distributed planning systems, from the perspective of the developer of one of the software systems. The lessons focus on the coordination required to allow the multiple systems to operate together, rather than on the problems associated with the development of any particular system. 
Included in the paper is a discussion of typical problems faced during the development and coordination process, such as incompatible development schedules, difficulties in defining system interfaces, technical coordination and funding for shared tools, continually evolving planning concepts/requirements, programmatic and budget issues, and external influences. Techniques that mitigated some of these problems will also be addressed, along with recommendations for any future programs involving the development of multiple planning and scheduling systems. Many of these lessons learned are not unique to the area of planning and scheduling systems, so may be applied to other distributed ground systems that must operate in concert to successfully support space mission operations.
Locational Marginal Pricing in the Campus Power System at the Power Distribution Level
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hao, Jun; Gu, Yi; Zhang, Yingchen
2016-11-14
In the development of the smart grid at the distribution level, the realization of real-time nodal pricing is one of the key challenges. The research work in this paper implements and studies the methodology of locational marginal pricing at the distribution level based on a real-world distribution power system. The pricing mechanism utilizes optimal power flow to calculate the corresponding distributional nodal prices. Both Direct Current Optimal Power Flow and Alternating Current Optimal Power Flow are utilized to calculate and analyze the nodal prices. The University of Denver campus power grid is used as the power distribution system test bed to demonstrate the pricing methodology.
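As a toy illustration of how locational marginal prices fall out of an optimal power flow, the sketch below solves a two-bus DC-OPF with scipy and reads the nodal prices off the duals of the bus balance constraints. The network data are invented for illustration; the campus feeder studied in the paper is of course larger:

```python
from scipy.optimize import linprog

# Two buses: cheap generator at bus 1, expensive generator at bus 2,
# 100 MW load at bus 2, line 1->2 limited to 60 MW.
# Decision variables: [g1, g2, flow].
c = [10.0, 30.0, 0.0]                      # $/MWh offers; flow itself costs nothing
A_eq = [[1, 0, -1],                        # bus 1 balance: g1 - flow = 0
        [0, 1,  1]]                        # bus 2 balance: g2 + flow = 100
b_eq = [0.0, 100.0]
bounds = [(0, 150), (0, 150), (-60, 60)]   # generator limits and thermal line limit
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")

dispatch = res.x                           # congested case: g1 = 60, g2 = 40, flow = 60
lmp = res.eqlin.marginals                  # duals of the bus balances = nodal prices
```

With the line congested, the two LMPs separate (10 vs 30 $/MWh in magnitude). Sign conventions for LP duals vary by solver, so in practice one validates them against a perturbed right-hand side.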
Zhang, Yifei; Kang, Jian
2017-11-01
The building of biomass combined heat and power (CHP) plants is an effective means of developing biomass energy because they can satisfy demands for winter heating and electricity consumption. The purpose of this study was to analyse the effect of the distribution density of a biomass CHP plant network on heat utilisation efficiency in a village-town system. The distribution density is determined based on the heat transmission threshold, and the heat utilisation efficiency is determined based on the heat demand distribution, heat output efficiency, and heat transmission loss. The objective of this study was to ascertain the optimal value for the heat transmission threshold using a multi-scheme comparison based on an analysis of these factors. To this end, a model of a biomass CHP plant network was built using geographic information system tools to simulate and generate three planning schemes with different heat transmission thresholds (6, 8, and 10 km) according to the heat demand distribution. The heat utilisation efficiencies of these planning schemes were then compared by calculating the gross power, heat output efficiency, and heat transmission loss of the biomass CHP plant for each scenario. This multi-scheme comparison yielded the following results: when the heat transmission threshold was low, the distribution density of the biomass CHP plant network was high and the biomass CHP plants tended to be relatively small. In contrast, when the heat transmission threshold was high, the distribution density of the network was low and the biomass CHP plants tended to be relatively large. When the heat transmission threshold was 8 km, the distribution density of the biomass CHP plant network was optimised for efficient heat utilisation. 
To promote the development of renewable energy sources, a planning scheme for a biomass CHP plant network that maximises heat utilisation efficiency can be obtained using the optimal heat transmission threshold and the nonlinearity coefficient for local roads.
Automatic analysis of the 2015 Gorkha earthquake aftershock sequence.
NASA Astrophysics Data System (ADS)
Baillard, C.; Lyon-Caen, H.; Bollinger, L.; Rietbrock, A.; Letort, J.; Adhikari, L. B.
2016-12-01
The Mw 7.8 Gorkha earthquake, which partially ruptured the Main Himalayan Thrust north of Kathmandu on the 25th April 2015, was the largest and most catastrophic earthquake striking Nepal since the great M8.4 1934 earthquake. This mainshock was followed by multiple aftershocks, among them two notable events that occurred on the 12th May with magnitudes of 7.3 Mw and 6.3 Mw. Due to these recent events, it became essential for the authorities and for the scientific community to better evaluate the seismic risk in the region through a detailed analysis of the earthquake catalog, in particular the spatio-temporal distribution of the Gorkha aftershock sequence. Here we complement this first study with a microseismic study using seismic data from the eastern part of the Nepalese Seismological Center network combined with one broadband station in Everest. Our primary goal is to deliver an accurate catalog of the aftershock sequence. Due to the exceptional number of events detected, we performed an automatic picking/locating procedure which can be split into 4 steps: 1) coarse picking of the onsets using a classical STA/LTA picker, 2) phase association of picked onsets to detect and declare seismic events, 3) Kurtosis-based pick refinement around theoretical arrival times to increase picking and location accuracy, and 4) local magnitude calculation based on the amplitude of waveforms. This procedure is time efficient (about 1 sec/event), considerably reduces the location uncertainties (2 to 5 km errors) and increases the number of events detected compared to manual processing. Indeed, the automatic detection rate is 10 times higher than the manual detection rate. By comparing to the USGS catalog we were able to derive a new attenuation law to compute local magnitudes in the region.
A detailed analysis of the seismicity shows a clear migration toward the east of the region and a sudden decrease of seismicity 100 km east of Kathmandu, which may reveal the presence of a tectonic feature acting as a seismic barrier. Comparison of the aftershock distribution with respect to the coseismic slip distribution will be discussed.
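The first step of the pipeline above, a classical STA/LTA onset picker, compares short-term and long-term averages of signal energy and triggers when their ratio exceeds a threshold. A minimal sketch follows; the window lengths and trigger threshold are illustrative choices, not the values used by the authors:

```python
import numpy as np

def sta_lta(x, nsta, nlta):
    """Ratio of short-term to long-term trailing averages of signal energy.
    Defined once both windows fit; the first nlta-1 samples are left at zero."""
    e = np.asarray(x, dtype=float) ** 2
    c = np.concatenate(([0.0], np.cumsum(e)))
    sta = (c[nsta:] - c[:-nsta]) / nsta
    lta = (c[nlta:] - c[:-nlta]) / nlta
    ratio = np.zeros(len(e))
    ratio[nlta - 1:] = sta[nlta - nsta:] / np.maximum(lta, 1e-12)
    return ratio

# Synthetic trace: background noise, then a 10x stronger arrival at sample 1000.
rng = np.random.default_rng(1)
trace = np.concatenate([rng.normal(0, 1, 1000), rng.normal(0, 10, 500)])
ratio = sta_lta(trace, nsta=20, nlta=200)
onset = int(np.argmax(ratio > 4.0))    # index of the first sample above the trigger
```

On this synthetic trace the trigger fires a few samples after the true onset; real pickers (e.g. the recursive STA/LTA variants) add detriggering and coincidence logic on top of this core ratio.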
Li, Rui; Zhang, Qing; Li, Junbai; Shi, Hualin
2016-01-01
An experimental system was designed to measure in vivo termination efficiency (TE) of the Rho-independent terminator and position–function relations were quantified for the terminator tR2 in Escherichia coli. The terminator function was almost completely repressed when tR2 was located several base pairs downstream from the gene, and TE gradually increased to maximum values with the increasing distance between the gene and terminator. This TE–distance relation reflected a stochastic coupling of the ribosome and RNA polymerase (RNAP). Terminators located in the first 100 bp of the coding region can function efficiently. However, functional repression was observed when the terminator was located in the latter part of the coding region, and the degree of repression was determined by transcriptional and translational dynamics. These results may help to elucidate mechanisms of Rho-independent termination and reveal genomic locations of terminators and functions of the sequence that precedes terminators. These observations may have important applications in synthetic biology. PMID:26602687
Locating dayside magnetopause reconnection with exhaust ion distributions
NASA Astrophysics Data System (ADS)
Broll, J. M.; Fuselier, S. A.; Trattner, K. J.
2017-05-01
Magnetic reconnection at Earth's dayside magnetopause is essential to magnetospheric dynamics. Determining where reconnection takes place is important to understanding the processes involved, and many questions about reconnection location remain unanswered. We present a method for locating the magnetic reconnection X line at Earth's dayside magnetopause under southward interplanetary magnetic field conditions using only ion velocity distribution measurements. Particle-in-cell simulations based on Cluster magnetopause crossings produce ion velocity distributions that we propagate through a model magnetosphere, allowing us to calculate the field-aligned distance between an exhaust observation and its associated reconnection line. We demonstrate this procedure for two events and compare our results with those of the Maximum Magnetic Shear Model; we find good agreement with its results and show that when our method is applicable, it produces more precise locations than the Maximum Shear Model.
Namboodiri, Vijay Mohan K; Levy, Joshua M; Mihalas, Stefan; Sims, David W; Hussain Shuler, Marshall G
2016-08-02
Understanding the exploration patterns of foragers in the wild provides fundamental insight into animal behavior. Recent experimental evidence has demonstrated that path lengths (distances between consecutive turns) taken by foragers are well fitted by a power law distribution. Numerous theoretical contributions have posited that "Lévy random walks"-which can produce power law path length distributions-are optimal for memoryless agents searching a sparse reward landscape. It is unclear, however, whether such a strategy is efficient for cognitively complex agents, from wild animals to humans. Here, we developed a model to explain the emergence of apparent power law path length distributions in animals that can learn about their environments. In our model, the agent's goal during search is to build an internal model of the distribution of rewards in space that takes into account the cost of time to reach distant locations (i.e., temporally discounting rewards). For an agent with such a goal, we find that an optimal model of exploration in fact produces hyperbolic path lengths, which are well approximated by power laws. We then provide support for our model by showing that humans in a laboratory spatial exploration task search space systematically and modify their search patterns under a cost of time. In addition, we find that path length distributions in a large dataset obtained from free-ranging marine vertebrates are well described by our hyperbolic model. Thus, we provide a general theoretical framework for understanding spatial exploration patterns of cognitively complex foragers.
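To illustrate why the hyperbolic path lengths described above can pass for a power law, the sketch below draws samples from a heavy-tailed hyperbolic-family (Lomax/Pareto II) distribution via inverse-CDF sampling and recovers its tail exponent with a Hill estimator. The parameters are arbitrary and this is not the authors' foraging model, only a demonstration of the tail behaviour:

```python
import numpy as np

def lomax_samples(alpha, lam, n, seed=0):
    """Inverse-CDF sampling of a Lomax (Pareto II) law:
    P(X > x) = (1 + x/lam)**(-alpha), whose tail decays like x**(-alpha)."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=n)
    return lam * ((1.0 - u) ** (-1.0 / alpha) - 1.0)

def hill_tail_index(x, k):
    """Hill estimator of the tail exponent from the k largest samples."""
    xs = np.sort(x)[::-1]
    return k / np.sum(np.log(xs[:k] / xs[k]))

paths = lomax_samples(alpha=2.0, lam=1.0, n=100_000)
alpha_hat = hill_tail_index(paths, k=500)   # close to the true exponent of 2
```

The recovered exponent sits near 2 despite the distribution being hyperbolic rather than a pure power law, which is exactly the ambiguity the abstract points to when fitting empirical path lengths.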
Chybalski, Filip
The existing literature on the efficiency of pension systems usually addresses the choice between different theoretical models, or concerns one or a few empirical pension systems. In this paper a quite different approach to the measurement of pension system efficiency is proposed. It is dedicated mainly to cross-country studies of empirical pension systems; however, it may also be employed in the analysis of a given pension system on the basis of time series. I identify four dimensions of pension system efficiency, referring to: GDP-distribution, adequacy of pensions, influence on the labour market, and administrative costs. Consequently, I propose four sets of static and one set of dynamic efficiency indicators. In the empirical part of the paper, I use Spearman's rank correlation coefficient and cluster analysis to verify the proposed method on statistical data covering 28 European countries in the years 2007-2011. I prove that the method works and enables some comparisons as well as clustering of the analyzed pension systems. The study also delivers some interesting empirical findings. The main goal of pension systems seems to be poverty alleviation, since the efficiency of ensuring protection against poverty, as well as the efficiency of reducing poverty, is very resistant to the efficiency of GDP-distribution. The opposite holds for the efficiency of consumption smoothing: it is generally sensitive to the efficiency of GDP-distribution, and its dynamics are sensitive to the dynamics of GDP-distribution efficiency. The results of the study indicate the Norwegian and the Icelandic pension systems to be the most efficient in the analyzed group.
NASA Astrophysics Data System (ADS)
Zubelzu, Sergio; Rodriguez-Sinobas, Leonor; Sobrino, Fernando; Sánchez, Raúl
2017-04-01
Irrigation programming determines when and how much water to apply to fulfil plant water requirements, depending on the plant's phenological stage and location and on soil water content. Thus, the amount of water, the irrigation time and the irrigation frequency are variables that must be estimated. Likewise, irrigation programming has been based on approaches such as the determination of plant evapotranspiration or the maintenance of soil water status within a given interval of water content or soil matric potential. Most of these approaches rely on the measurements of soil water sensors (or tensiometers) located at specific points within the study area, which lack spatial information on the monitored variable. The information provided at such few points might not be adequate to characterize the soil water distribution in irrigation systems with poor water application uniformity, and thus it could lead to wrong decisions in irrigation scheduling. Nevertheless, this can be overcome if active-heating-pulse distributed fiber optic temperature measurement (AHFO) is used. This technique estimates the temperature variation along a fiber optic cable, which is then correlated with the soil water content. The method applies a known amount of heat to the soil and monitors the temperature evolution, which mainly depends on the soil moisture content. Thus, it allows estimations of soil water content every 12.5 cm along a fiber optic cable as long as 1500 m (with 2% accuracy), every second. This study presents the results obtained in a green area located at the ETSI Agronómica, Agroalimentaria y Biosistemas in Madrid. The area is irrigated by a sprinkler irrigation system which applies water with low uniformity, and 147 m of fiber optic cable has been deployed at 15 cm depth. The Distributed Temperature Sensing unit was a SILIXA ULTIMA SR (Silixa Ltd, UK) with spatial and temporal resolutions of 0.29 m and 1 s, respectively.
In this study, heat pulses of 7 W/m for 2 min were applied uniformly along the fiber optic cable, and the thermal response on an adjacent cable was monitored prior to, during and after the irrigation event. Data were logged every 0.3 m and every 5 s; then the heating and drying phase integral (called Tcum) was determined following the approach of Sayde et al. (2010). Thus, the infiltration and redistribution of soil water content were fully characterized. The results are promising, since the spatial variability of soil water is known and can be correlated with the water distribution in the irrigation unit to make better irrigation scheduling decisions in the green area, improving water/nutrient/energy efficiency. References: Létourneau, G., Caron, J., Anderson, L., & Cormier, J. (2015). Matric potential-based irrigation management of field-grown strawberry: Effects on yield and water use efficiency. Agricultural Water Management, 161, 102-113. Liang, X., Liakos, V., Wendroth, O., & Vellidis, G. (2016). Scheduling irrigation using an approach based on the van Genuchten model. Agricultural Water Management, 176, 170-179. Sayde, C., Gregory, C., Gil-Rodriguez, M., Tufillaro, N., Tyler, S., van de Giesen, N., English, M., Cuenca, R., & Selker, J. S. (2010). Feasibility of soil moisture monitoring with heated fiber optics. Water Resources Research, 46(6). DOI: 10.1029/2009WR007846. Stirzaker, R. J., Maeko, T. C., Annandale, J. G., Steyn, J. M., Adhanom, G. T., & Mpuisang, T. (2017). Scheduling irrigation from wetting front depth. Agricultural Water Management, 179, 306-313.
Demographic Mapping via Computer Graphics.
ERIC Educational Resources Information Center
Banghart, Frank W.; And Others
A computerized system, developed at Florida State University, is designed to locate students and resources on a geographic network. Using addresses of resources and students as input, the system quickly and accurately locates the addresses on a grid and creates a map showing their distribution. This geographical distribution serves as an…
NASA Astrophysics Data System (ADS)
Cheng, Irene; Zhang, Leiming; Mao, Huiting
2015-08-01
Relative contributions to mercury wet deposition from gaseous oxidized mercury (%GOM) and from fine and coarse particle-bound mercury (%FPBM and %CPBM) were estimated using monitored FPBM air concentrations and mercury wet deposition at nine North American locations. Scavenging ratios of particulate inorganic ions (K+; and Ca2+, Mg2+ and Na+) were used as surrogates for those of FPBM and CPBM, respectively. FPBM and CPBM were estimated to contribute 8-36% and 5-27% of total wet deposition, respectively, depending on the location; the remaining 39-87% was attributed to GOM. The averages of %GOM, %FPBM and %CPBM across all locations were 65%, 17%, and 18%, respectively. The relative distributions of %GOM, %FPBM, and %CPBM were influenced by Hg(II) gas-particle partitioning, urban site characteristics, and precipitation type. At the regional scale, %GOM dominated over %FPBM and %CPBM. However, the sum of FPBM and CPBM contributed nearly half of the total Hg wet deposition in urban areas, which was greater than at other site categories and is attributed to higher FPBM air concentrations. At four locations, %FPBM exceeded %GOM during winter but not summer, suggesting efficient snow scavenging of aerosols. The results from this study are useful for improving mercury transport models, since most such models do not estimate CPBM yet frequently use monitored mercury wet deposition data for model evaluation.
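The apportionment bookkeeping can be sketched in a few lines. All numbers below are hypothetical stand-ins (not the study's ratios or concentrations), chosen only to show how scavenging ratios and air concentrations translate into percent contributions, with %GOM taken as the remainder of the measured wet deposition:

```python
# Hypothetical values: flux of each particulate form ~ scavenging ratio x
# air concentration x precipitation depth; %GOM is the remainder.
sr_fine, sr_coarse = 0.5e6, 2.0e6      # scavenging ratios (dimensionless)
c_fpbm, c_cpbm = 5.0e-12, 1.0e-12      # air concentrations (g per m^3)
precip = 1.0                           # precipitation depth (m)
total_wet_dep = 1.2e-5                 # measured Hg wet deposition (g per m^2)

dep_fpbm = sr_fine * c_fpbm * precip
dep_cpbm = sr_coarse * c_cpbm * precip
dep_gom = total_wet_dep - dep_fpbm - dep_cpbm

pct = {k: 100.0 * v / total_wet_dep
       for k, v in [("GOM", dep_gom), ("FPBM", dep_fpbm), ("CPBM", dep_cpbm)]}
```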
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reiter, H.L.; Cook, C.
Regulators need to take a hard look at stranded cost policies that make it difficult for municipalities to replace incumbent distributors, and also reconsider whether distributors should be allowed to roll expansion costs into systemwide rates. This article focuses on the importance of efficient electric distribution in the post-restructuring era and how regulators can promote that efficiency by (1) protecting and encouraging franchise competition, (2) employing regulatory yardsticks, and (3) designing rate structures that send proper price signals about the relative costs of expanding distribution plant and substituting distributed generation, conservation services, or other alternatives.
NASA Astrophysics Data System (ADS)
Tang, Hong; Lin, Jian-Zhong
2013-01-01
An improved anomalous diffraction approximation (ADA) method is first presented for calculating the extinction efficiency of spheroids. In this approach, the extinction efficiency of spheroid particles can be calculated with good accuracy and high efficiency over a wide size range by combining the Latimer method with ADA theory, and the method yields a more general expression for the extinction efficiency of spheroid particles with various complex refractive indices and aspect ratios. Meanwhile, the visible spectral extinction for varied spheroid particle size distributions and complex refractive indices is surveyed. Furthermore, a selection principle for the spectral extinction data is developed based on principal component analysis (PCA) of the first-derivative spectral extinction. By calculating the contribution rate of the first-derivative spectral extinction, spectra with more significant features can be selected as input data, while those with fewer features are removed from the inversion data. In addition, we propose an improved Tikhonov iteration method to retrieve the spheroid particle size distributions in the independent mode. Simulation experiments indicate that the spheroid particle size distributions obtained with the proposed method coincide well with the given distributions, and this inversion method provides a simple, reliable and efficient way to retrieve spheroid particle size distributions from spectral extinction data.
Bias and Efficiency in Structural Equation Modeling: Maximum Likelihood versus Robust Methods
ERIC Educational Resources Information Center
Zhong, Xiaoling; Yuan, Ke-Hai
2011-01-01
In the structural equation modeling literature, the normal-distribution-based maximum likelihood (ML) method is most widely used, partly because the resulting estimator is claimed to be asymptotically unbiased and most efficient. However, this may not hold when data deviate from normal distribution. Outlying cases or nonnormally distributed data,…
Effect of an upstream bulge configuration on film cooling with and without mist injection.
Wang, Jin; Li, Qianqian; Sundén, Bengt; Ma, Ting; Cui, Pei
2017-12-01
To meet the economic requirements of power output, the inlet temperature of modern gas turbines is raised above the melting point of the blade material. Therefore, highly efficient cooling technology is needed to protect the blades from the hot mainstream. In this study, film cooling was investigated in a simplified channel. A bulge located upstream of the film hole was numerically investigated by analyzing the film cooling effectiveness distribution downstream of the wall. The flow distribution in the plate channel is presented first. Compared with a case without a bulge, cases with bulge heights of 0.1d, 0.3d and 0.5d were examined at blowing ratios of 0.5 and 1.0. Cases with 1% mist injection were also included in order to obtain better cooling performance. Results show that a bulge located upstream of the film hole makes the cooling film more uniform and enhances lateral cooling effectiveness. Unlike the other cases, the configuration with a 0.3d-height bulge shows a good balance between improving the downstream and lateral cooling effectiveness. Compared with the case without mist at M = 0.5, the 0.3d-height bulge with 1% mist injection increases lateral average effectiveness by 559% at x/d = 55. In addition, the thermal stress concentration can be reduced by increasing the height of the bulge configuration. Copyright © 2017 Elsevier Ltd. All rights reserved.
Chen, Dongmei; Zhu, Shouping; Cao, Xu; Zhao, Fengjun; Liang, Jimin
2015-01-01
X-ray luminescence computed tomography (XLCT) has become a promising imaging technology for biological applications based on phosphor nanoparticles. There are mainly three kinds of XLCT imaging systems: pencil beam, narrow beam and cone beam XLCT. Narrow beam XLCT can be regarded as a balance between the pencil beam and cone beam modes in terms of imaging efficiency and image quality. The collimated X-ray beams are assumed to be parallel in traditional narrow beam XLCT. However, we observe that in our prototype narrow beam XLCT the cone beam X-rays are collimated into beams with fan-shaped broadening rather than parallel ones. Hence we incorporate the distribution of the X-ray beams into the physical model and collect optical data from only two perpendicular directions to further shorten the scanning time. We also propose a depth-related adaptive regularized split Bregman (DARSB) reconstruction method. Simulation experiments show that the proposed physical model and method achieve better results in location error, Dice coefficient, mean square error and intensity error than the traditional split Bregman method, validating the feasibility of the method. The phantom experiment achieves a location error of less than 1.1 mm and confirms that incorporating fan-shaped X-ray beams in our model gives better results than assuming parallel X-rays. PMID:26203388
Requirements for high-efficiency solar cells
NASA Technical Reports Server (NTRS)
Sah, C. T.
1986-01-01
Minimum recombination and low injection level are essential for high efficiency. Twenty percent AM1 efficiency requires a dark recombination current density of 2 x 10^-13 A/cm^2 and a recombination center density of less than 10^10 /cm^3. Recombination mechanisms at thirteen locations in a conventional single crystalline silicon cell design are reviewed. Three additional recombination locations are described at grain boundaries in polycrystalline cells. Material perfection and fabrication process optimization requirements for high efficiency are outlined. Innovative device designs to reduce recombination in the bulk and interfaces of single crystalline cells and in the grain boundary of polycrystalline cells are reviewed.
Degerman, Alexander; Rinne, Teemu; Särkkä, Anna-Kaisa; Salmi, Juha; Alho, Kimmo
2008-06-01
Event-related brain potentials (ERPs) and magnetic fields (ERFs) were used to compare brain activity associated with selective attention to sound location or pitch in humans. Sixteen healthy adults participated in the ERP experiment, and 11 adults in the ERF experiment. In different conditions, the participants focused their attention on a designated sound location or pitch, or pictures presented on a screen, in order to detect target sounds or pictures among the attended stimuli. In the Attend Location condition, the location of sounds varied randomly (left or right), while their pitch (high or low) was kept constant. In the Attend Pitch condition, sounds of varying pitch (high or low) were presented at a constant location (left or right). Consistent with previous ERP results, selective attention to either sound feature produced a negative difference (Nd) between ERPs to attended and unattended sounds. In addition, ERPs showed a more posterior scalp distribution for the location-related Nd than for the pitch-related Nd, suggesting partially different generators for these Nds. The ERF source analyses found no source distribution differences between the pitch-related Ndm (the magnetic counterpart of the Nd) and location-related Ndm in the superior temporal cortex (STC), where the main sources of the Ndm effects are thought to be located. Thus, the ERP scalp distribution differences between the location-related and pitch-related Nd effects may have been caused by activity of areas outside the STC, perhaps in the inferior parietal regions.
Accelerated molecular dynamics: A promising and efficient simulation method for biomolecules
NASA Astrophysics Data System (ADS)
Hamelberg, Donald; Mongan, John; McCammon, J. Andrew
2004-06-01
Many interesting dynamic properties of biological molecules cannot be simulated directly using molecular dynamics because of nanosecond time scale limitations. These systems are trapped in potential energy minima with high free energy barriers for large numbers of computational steps. The dynamic evolution of many molecular systems occurs through a series of rare events as the system moves from one potential energy basin to another. Therefore, we have proposed a robust bias potential function that can be used in an efficient accelerated molecular dynamics approach to simulate the transition of high energy barriers without any advance knowledge of the location of either the potential energy wells or saddle points. In this method, the potential energy landscape is altered by adding a bias potential to the true potential such that the escape rates from potential wells are enhanced, which accelerates and extends the time scale in molecular dynamics simulations. Our definition of the bias potential echoes the underlying shape of the potential energy landscape on the modified surface, thus allowing for the potential energy minima to be well defined, and hence properly sampled during the simulation. We have shown that our approach, which can be extended to biomolecules, samples the conformational space more efficiently than normal molecular dynamics simulations, and converges to the correct canonical distribution.
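The bias described above has the published functional form dV(r) = (E - V(r))^2 / (alpha + E - V(r)), applied wherever V(r) < E. A small sketch on a 1-D double well (the well shape, threshold E, and alpha are illustrative choices, not values from any simulation in the paper) shows how basin floors are raised while the region V >= E is untouched:

```python
import numpy as np

def amd_potential(v, e, alpha):
    """Modified aMD potential: V* = V + (E - V)^2 / (alpha + E - V) for V < E,
    and V* = V wherever V >= E (saddle regions are left unchanged)."""
    v = np.asarray(v, dtype=float)
    out = v.copy()
    mask = v < e
    diff = e - v[mask]
    out[mask] += diff**2 / (alpha + diff)
    return out

# Illustrative 1-D double well: minima at x = +/-1 (V = 0), barrier V = 1 at x = 0
x = np.linspace(-1.5, 1.5, 601)
v = (x**2 - 1.0) ** 2
v_star = amd_potential(v, e=0.8, alpha=0.2)

well = np.argmin(np.abs(x - 1.0))
barrier = np.argmin(np.abs(x))
# The effective barrier v_star[barrier] - v_star[well] is lower than the
# original v[barrier] - v[well], so escapes from the wells are accelerated,
# while the modified surface still echoes the shape of the true minima.
```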
An Analytical Method To Compute Comet Cloud Formation Efficiency And Its Application
NASA Astrophysics Data System (ADS)
Brasser, Ramon; Duncan, M. J.
2007-07-01
A quick analytical method is presented for calculating comet cloud formation efficiency in the case of a single planet or multiple-planet system for planets that are not too eccentric (e_p < 0.2). A method to calculate the fraction of comets that stay under the control of each planet is also presented. The location of the planet(s) in mass-semi-major axis space to form a comet cloud is constrained based on the conditions developed by Tremaine (1993) together with estimates of the likelihood of passing comets between planets; and, in the case of a single, eccentric planet, the additional constraint that it is, by itself, able to accelerate material to lower values of Tisserand parameter within the age of the stellar system without sweeping up the majority of the material beforehand. For a single planet, it turns out the efficiency is mainly a function of planetary mass and semi-major axis of the planet and density of the stellar environment. The theory has been applied to some extrasolar systems and compared to numerical simulations for both these systems and the Solar system, as well as a diffusion scheme based on the energy kick distribution of Everhart (1968). Results agree well with analytical predictions.
NASA Astrophysics Data System (ADS)
Dinzi, R.; Hamonangan, TS; Fahmi, F.
2018-02-01
In the current distribution system, a large-capacity distribution transformer supplies loads at remote locations. The 220/380 V network is nowadays less common than the 20 kV network. This results in losses due to a non-optimally placed distribution transformer that neglects the load location, a poor consumer voltage profile, and large power losses along the feeder. This paper discusses how a high voltage distribution system (HVDS) can serve distribution networks better than the currently used low voltage distribution system (LVDS). The proposed reconfiguration replaces a single large-capacity distribution transformer with several smaller-capacity distribution transformers installed as close as possible to the loads. The use of high voltage distribution yields better voltage profiles and lower power losses. On the non-technical side, the annual savings and payback period of a high voltage distribution system are also advantages.
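The loss advantage of carrying power at 20 kV rather than 380 V follows from I^2 R scaling: for the same delivered power, current falls as 1/V, so conductor losses fall as 1/V^2. A hedged arithmetic sketch (the feeder resistance, load, and power factor are hypothetical, not values from this paper):

```python
import math

def line_loss(p, v_line, pf, r):
    """Three-phase I^2 R conductor loss for delivered power p (W) at
    line-to-line voltage v_line (V), power factor pf, per-phase resistance r."""
    i = p / (math.sqrt(3) * v_line * pf)   # line current (A)
    return 3 * i**2 * r                    # total loss in the three phases (W)

p_load = 250e3   # delivered power (W) -- illustrative
r_line = 0.5     # per-phase conductor resistance (ohm) -- illustrative
pf = 0.9

loss_lvds = line_loss(p_load, 380.0, pf, r_line)
loss_hvds = line_loss(p_load, 20e3, pf, r_line)
ratio = loss_lvds / loss_hvds   # equals (20000/380)^2, about 2770x
```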
Candidate locations for SPS rectifying antennas
NASA Technical Reports Server (NTRS)
Eberhardt, A. W.
1977-01-01
The feasibility of placing 120 Satellite Power System (SPS) rectifying antenna (rectenna) sites across the U.S. was studied. An initial attempt is made to put two land sites in each state using several land site selection criteria. When only 69 land sites are located, it is decided to put the remaining sites in the sea and sea site selection criteria are identified. An estimated projection of electrical demand distribution for the year 2000 is then used to determine the distribution of these sites along the Pacific, Atlantic, and Gulf Coasts. A methodology for distributing rectenna sites across the country and for fine-tuning exact locations is developed, and recommendations on rectenna design and operations are made.
Benefit-cost estimation for alternative drinking water maximum contaminant levels
NASA Astrophysics Data System (ADS)
Gurian, Patrick L.; Small, Mitchell J.; Lockwood, John R.; Schervish, Mark J.
2001-08-01
A simulation model for estimating compliance behavior and resulting costs at U.S. Community Water Suppliers is developed and applied to the evaluation of a more stringent maximum contaminant level (MCL) for arsenic. Probability distributions of source water arsenic concentrations are simulated using a statistical model conditioned on system location (state) and source water type (surface water or groundwater). This model is fit to two recent national surveys of source waters, then applied with the model explanatory variables for the population of U.S. Community Water Suppliers. Existing treatment types and arsenic removal efficiencies are also simulated. Utilities with finished water arsenic concentrations above the proposed MCL are assumed to select the least cost option compatible with their existing treatment from among 21 available compliance strategies and processes for meeting the standard. Estimated costs and arsenic exposure reductions at individual suppliers are aggregated to estimate the national compliance cost, arsenic exposure reduction, and resulting bladder cancer risk reduction. Uncertainties in the estimates are characterized based on uncertainties in the occurrence model parameters, existing treatment types, treatment removal efficiencies, costs, and the bladder cancer dose-response function for arsenic.
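The compliance logic described above can be caricatured in a few lines. The distribution parameters and option costs below are hypothetical placeholders (the actual model conditions arsenic occurrence on state and source-water type and chooses among 21 compliance strategies), but the structure — sample concentrations, find exceedances, pick the least-cost feasible option, aggregate — is the same:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical population of systems with lognormal source-water arsenic
n_systems = 10_000
arsenic = rng.lognormal(mean=np.log(4.0), sigma=1.0, size=n_systems)  # ug/L
mcl = 10.0                                                            # ug/L

# (removal fraction, annualized cost in $) -- made-up compliance options
options = [(0.50, 20_000.0), (0.80, 45_000.0), (0.95, 90_000.0)]

total_cost = 0.0
exceed = arsenic > mcl
for conc in arsenic[exceed]:
    feasible = [cost for removal, cost in options
                if conc * (1.0 - removal) <= mcl]
    # least-cost option that brings the system under the MCL (fall back to
    # the highest-removal option for rare very high concentrations)
    total_cost += min(feasible) if feasible else options[-1][1]

frac_exceeding = float(exceed.mean())
```

Repeating the whole calculation over draws of the occurrence-model parameters is what produces the uncertainty characterization described in the abstract.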
Ye, Jinzuo; Chi, Chongwei; Xue, Zhenwen; Wu, Ping; An, Yu; Xu, Han; Zhang, Shuang; Tian, Jie
2014-02-01
Fluorescence molecular tomography (FMT), as a promising imaging modality, can three-dimensionally locate specific tumor positions in small animals. However, effective and robust reconstruction of the fluorescent probe distribution in animals remains challenging. In this paper, we present a novel method based on sparsity adaptive subspace pursuit (SASP) for FMT reconstruction. Several innovative strategies, including subspace projection, a bottom-up sparsity adaptive approach, and a backtracking technique, are associated with the SASP method, which guarantees accuracy, efficiency, and robustness in FMT reconstruction. Three numerical experiments based on a mouse-mimicking heterogeneous phantom were performed to validate the feasibility of the SASP method. The results show that the proposed method can achieve satisfactory source localization with a bias of less than 1 mm; it is much faster than mainstream reconstruction methods; and it is robust even under quite ill-posed conditions. Furthermore, we have applied this method to an in vivo mouse model, and the results demonstrate the feasibility of practical FMT application with the SASP method.
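The SASP internals are not given in the abstract. As a reference point, a minimal classical subspace pursuit (fixed sparsity k, with the expand/prune backtracking step) can be sketched as follows; this is the base algorithm that a sparsity-adaptive variant builds on, not the authors' method itself, and the test problem is synthetic:

```python
import numpy as np

def subspace_pursuit(A, y, k, n_iter=15):
    """Classical subspace pursuit for k-sparse recovery from y = A x."""
    n = A.shape[1]
    support = np.argsort(np.abs(A.T @ y))[-k:]           # initial support
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ coef
    for _ in range(n_iter):
        # expand: union with the k columns most correlated with the residual
        extra = np.argsort(np.abs(A.T @ residual))[-k:]
        cand = np.union1d(support, extra)
        coef, *_ = np.linalg.lstsq(A[:, cand], y, rcond=None)
        # prune back to the k largest coefficients (backtracking step)
        support = cand[np.argsort(np.abs(coef))[-k:]]
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(n)
    x[support] = coef
    return x

rng = np.random.default_rng(1)
m, n, k = 100, 200, 4
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.uniform(1.0, 2.0, k)
y = A @ x_true
x_hat = subspace_pursuit(A, y, k)
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

The sparsity-adaptive idea is to grow k bottom-up instead of fixing it in advance, which matters in FMT where the number of fluorescent sources is unknown.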
Advanced microscopy of star-shaped gold nanoparticles and their adsorption-uptake by macrophages
Plascencia-Villa, Germán; Bahena, Daniel; Rodríguez, Annette R.; Ponce, Arturo; José-Yacamán, Miguel
2013-01-01
Metallic nanoparticles have diverse applications in biomedicine, as diagnostics, image contrast agents, nanosensors and drug delivery systems. Anisotropic metallic nanoparticles have potential applications in cell imaging and combined therapy and diagnostics (theranostics), but controlled synthesis and growth of these anisotropic or branched nanostructures has been challenging and usually requires high concentrations of surfactants. Star-shaped gold nanoparticles were synthesized in high yield through a seed-mediated route using HEPES as a precise shape-directing capping agent. Characterization with advanced electron microscopy techniques, including atomic-resolution TEM, provided a detailed picture of the nanostructure and atomic arrangement. Spectroscopy showed that the particles have a narrow size distribution, monodispersity and high colloidal stability, with absorbance extending into the NIR region and high efficiency for SERS applications. The gold nanostars proved biocompatible and were efficiently adsorbed and internalized by macrophages, as revealed by advanced FE-SEM and backscattered electron imaging of complete unstained, uncoated cells. Additionally, low-voltage STEM and X-ray microanalysis revealed the ultrastructural location of the nanoparticles and confirmed their stability after endocytosis with high spatial resolution. PMID:23443314
Optimizing Data Management in Grid Environments
NASA Astrophysics Data System (ADS)
Zissimos, Antonis; Doka, Katerina; Chazapis, Antony; Tsoumakos, Dimitrios; Koziris, Nectarios
Grids currently serve as platforms for numerous scientific as well as business applications that generate and access vast amounts of data. In this paper, we address the need for efficient, scalable and robust data management in Grid environments. We propose a fully decentralized and adaptive mechanism comprising two components: a Distributed Replica Location Service (DRLS) and a data transfer mechanism called GridTorrent. Both adopt Peer-to-Peer techniques in order to overcome performance bottlenecks and single points of failure. On one hand, DRLS ensures resilience by relying on a Byzantine-tolerant protocol and is able to handle massive concurrent requests even during node churn. On the other hand, GridTorrent allows for maximum bandwidth utilization through collaborative sharing among the various data providers and consumers. The proposed integrated architecture is completely backwards-compatible with already deployed Grids. To demonstrate these points, experiments have been conducted in LAN as well as WAN environments under various workloads. The evaluation shows that our scheme vastly outperforms the conventional mechanisms in both efficiency (up to 10 times faster) and robustness in case of failures and flash crowd instances.
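The abstract does not spell out the DRLS internals beyond its Peer-to-Peer design; one common building block for decentralized location services of this kind is a consistent-hash ring mapping logical file names to responsible peers. The sketch below is a generic illustration of that idea (the peer names and logical file name are made up), not the actual DRLS protocol:

```python
import hashlib
from bisect import bisect_right

def peer_hash(key: str) -> int:
    return int(hashlib.sha1(key.encode()).hexdigest(), 16)

class HashRing:
    """Minimal consistent-hash ring: each key is owned by the first peer
    clockwise from the key's hash position."""
    def __init__(self, peers):
        self.ring = sorted((peer_hash(p), p) for p in peers)
        self.keys = [h for h, _ in self.ring]

    def lookup(self, key: str) -> str:
        i = bisect_right(self.keys, peer_hash(key)) % len(self.ring)
        return self.ring[i][1]

ring = HashRing(["peer-a", "peer-b", "peer-c", "peer-d"])
owner = ring.lookup("lfn://example/run42/data.root")   # hypothetical LFN
```

Adding or removing a peer relocates only the keys between adjacent ring positions, which is what makes such structures attractive under node churn.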
Petroleum-resource appraisal and discovery rate forecasting in partially explored regions
Drew, Lawrence J.; Schuenemeyer, J.H.; Root, David H.; Attanasi, E.D.
1980-01-01
PART A: A model of the discovery process can be used to predict the size distribution of future petroleum discoveries in partially explored basins. The parameters of the model are estimated directly from the historical drilling record, rather than being determined by assumptions or analogies. The model is based on the concept of the area of influence of a drill hole, which states that the area of a basin exhausted by a drill hole varies with the size and shape of targets in the basin and with the density of previously drilled wells. It also uses the concept of discovery efficiency, which measures the rate of discovery within several classes of deposit size. The model was tested using 25 years of historical exploration data (1949-74) from the Denver basin. From the trend in the discovery rate (the number of discoveries per unit area exhausted), the discovery efficiencies in each class of deposit size were estimated. Using pre-1956 discovery and drilling data, the model accurately predicted the size distribution of discoveries for the 1956-74 period. PART B: A stochastic model of the discovery process has been developed to predict, using past drilling and discovery data, the distribution of future petroleum deposits in partially explored basins, and the basic mathematical properties of the model have been established. The model has two exogenous parameters, the efficiency of exploration and the effective basin size. The first parameter is the ratio of the probability that an actual exploratory well will make a discovery to the probability that a randomly sited well will make a discovery. The second parameter, the effective basin size, is the area of that part of the basin in which drillers are willing to site wells. Methods for estimating these parameters from locations of past wells and from the sizes and locations of past discoveries were derived, and the properties of estimators of the parameters were studied by simulation. 
PART C: This study examines the temporal properties and determinants of petroleum exploration for firms operating in the Denver basin. Expectations associated with the favorability of a specific area are modeled by using distributed lag proxy variables (of previous discoveries) and predictions from a discovery process model. In the second part of the study, a discovery process model is linked with a behavioral well-drilling model in order to predict the supply of new reserves. Results of the study indicate that the positive effects of new discoveries on drilling increase for several periods and then diminish to zero within 2? years after the deposit discovery date. Tests of alternative specifications of the argument of the distributed lag function using alternative minimum size classes of deposits produced little change in the model's explanatory power. This result suggests that, once an exploration play is underway, favorable operator expectations are sustained by the quantity of oil found per time period rather than by the discovery of specific size deposits. When predictions of the value of undiscovered deposits (generated from a discovery process model) were substituted for the expectations variable in models used to explain exploration effort, operator behavior was found to be consistent with these predictions. This result suggests that operators, on the average, were efficiently using information contained in the discovery history of the basin in carrying out their exploration plans. Comparison of the two approaches to modeling unobservable operator expectations indicates that the two models produced very similar results. The integration of the behavioral well-drilling model and discovery process model to predict the additions to reserves per unit time was successful only when the quarterly predictions were aggregated to annual values. 
The accuracy of the aggregated predictions was also found to be reasonably robust to errors in predictions from the behavioral well-drilling equation.
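The area-of-influence and basin-exhaustion ideas in Parts A and B can be mimicked with a toy simulation: circular targets in a unit basin, batches of random wildcat wells, and a discovery whenever a well lands within a target's radius. All sizes and counts are hypothetical; the declining per-batch discovery increments are the signal from which the model's parameters are estimated:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical basin: circular targets in a unit square; a random well
# "discovers" any target whose center lies within the target radius.
n_targets, radius = 200, 0.02
targets = rng.random((n_targets, 2))
found = np.zeros(n_targets, dtype=bool)

cumulative = []
for _ in range(10):                     # 10 batches of 500 random wells
    wells = rng.random((500, 2))
    for w in wells:
        found |= np.sum((targets - w) ** 2, axis=1) <= radius**2
    cumulative.append(int(found.sum()))

# Discoveries per batch decline as the basin is progressively exhausted.
increments = np.diff([0] + cumulative)
```

An exploration efficiency above 1, in the model's sense, corresponds to real drillers siting wells better than the uniform-random wells simulated here.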
Hybrid Monte Carlo/deterministic methods for radiation shielding problems
NASA Astrophysics Data System (ADS)
Becker, Troy L.
For the past few decades, the most common type of deep-penetration (shielding) problem simulated using Monte Carlo methods has been the source-detector problem, in which a response is calculated at a single location in space. Traditionally, the nonanalog Monte Carlo methods used to solve these problems have required significant user input to generate and sufficiently optimize the biasing parameters necessary to obtain a statistically reliable solution. It has been demonstrated that this laborious task can be replaced by automated processes that rely on a deterministic adjoint solution to set the biasing parameters---the so-called hybrid methods. The increase in computational power over recent years has also led to interest in obtaining the solution in a region of space much larger than a point detector. In this thesis, we propose two methods for solving problems ranging from source-detector problems to more global calculations---weight windows and the Transform approach. These techniques employ some of the same biasing elements that have been used previously; however, the fundamental difference is that here the biasing techniques are used as elements of a comprehensive tool set to distribute Monte Carlo particles in a user-specified way. The weight window achieves the user-specified Monte Carlo particle distribution by imposing a particular weight window on the system, without altering the particle physics. The Transform approach introduces a transform into the neutron transport equation, which results in a complete modification of the particle physics to produce the user-specified Monte Carlo distribution. These methods are tested in a three-dimensional multigroup Monte Carlo code. For a basic shielding problem and a more realistic one, these methods adequately solved source-detector problems and more global calculations.
Furthermore, they confirmed that theoretical Monte Carlo particle distributions correspond to the simulated ones, implying that these methods can be used to achieve user-specified Monte Carlo distributions. Overall, the Transform approach performed more efficiently than the weight window methods, but it performed much more efficiently for source-detector problems than for global problems.
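The weight window itself is a standard variance-reduction device: particles above the window are split, particles below it are rouletted. The sketch below shows that bookkeeping with illustrative window bounds (the thesis's contribution lies in how the window values are chosen to produce a user-specified particle distribution, which is not reproduced here):

```python
import random

def apply_weight_window(weight, w_low, w_high, rng=random):
    """Weight-window check for one Monte Carlo particle: split above the
    window, Russian-roulette below it, pass through inside it.
    Returns the list of surviving particle weights (possibly empty)."""
    if weight > w_high:
        n = min(int(weight / w_high) + 1, 10)   # split into n fragments
        return [weight / n] * n
    if weight < w_low:
        w_survival = 0.5 * (w_low + w_high)     # post-roulette weight
        if rng.random() < weight / w_survival:  # survive with prob w/ws
            return [w_survival]
        return []                               # particle killed
    return [weight]

# Both branches conserve expected weight, which keeps the estimator unbiased:
rng = random.Random(0)
trials = [sum(apply_weight_window(0.1, 0.5, 2.0, rng)) for _ in range(100_000)]
mean_surviving = sum(trials) / len(trials)      # close to the 0.1 input weight
```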
Sawakuchi, Gabriel O; Yukihara, Eduardo G
2012-01-21
The objective of this work is to test analytical models to calculate the luminescence efficiency of Al(2)O(3):C optically stimulated luminescence detectors (OSLDs) exposed to heavy charged particles with energies relevant to space dosimetry and particle therapy. We used the track structure model to obtain an analytical expression for the relative luminescence efficiency based on the average radial dose distribution produced by the heavy charged particle. We compared the relative luminescence efficiency calculated using seven different radial dose distribution models, including a modified model introduced in this work, with experimental data. The results obtained using the modified radial dose distribution function agreed within 20% with experimental Al(2)O(3):C OSLD relative luminescence efficiencies for particles with atomic numbers ranging from 1 to 54 and linear energy transfer in water from 0.2 up to 1368 keV µm(-1). In spite of the significant improvement over other radial dose distribution models, understanding of the underlying physical processes associated with these radial dose distribution models remains elusive and may represent a limitation of the track structure model.
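The track-structure calculation has the generic shape of a dose-weighted average of the detector's local response over the particle's radial dose profile. The sketch below uses a simple 1/r^2 radial dose and a saturating-exponential local response, both purely illustrative stand-ins for the seven models compared in the paper:

```python
import numpy as np

def trap(y, x):
    """Trapezoidal integral of y over the nonuniform grid x."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# All parameters hypothetical: radial dose ~ 1/r^2 around the track, and a
# local response per unit dose that saturates at high dose.
r = np.logspace(-3, 1, 4000)            # radial distance from track (um)
dose = 1.0e3 / r**2                     # radial dose (Gy)
d0 = 1.0e3                              # saturation dose (Gy)
x_sat = dose / d0
f_rel = (1.0 - np.exp(-x_sat)) / x_sat  # -> 1 at low dose, << 1 when saturated

# Relative efficiency: dose-weighted average of the local response,
# with the 2*pi*r geometric weight of the cylindrical track.
eta = trap(f_rel * dose * 2 * np.pi * r, r) / trap(dose * 2 * np.pi * r, r)

# Sanity check: scaling the local dose far below saturation should push
# the efficiency toward the low-LET reference value of 1.
dose_lo = dose * 1e-6
x_lo = dose_lo / d0
f_lo = (1.0 - np.exp(-x_lo)) / x_lo
eta_lo = trap(f_lo * dose_lo * 2 * np.pi * r, r) / trap(dose_lo * 2 * np.pi * r, r)
```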
Efficiency measurement and the operationalization of hospital production.
Magnussen, J
1996-04-01
To discuss the usefulness of efficiency measures as instruments of monitoring and resource allocation by analyzing their invariance to changes in the operationalization of hospital production. Norwegian hospitals over the three-year period 1989-1991. Efficiency is measured using Data Envelopment Analysis (DEA). The distribution of efficiency and the ranking of hospitals is compared across models using various distribution-free tests. Input and output data are collected by the Norwegian Central Bureau of Statistics. The distribution of efficiency is found to be unaffected by changes in the specification of hospital output. Both the ranking of hospitals and the scale properties of the technology, however, are found to depend on the choice of output specification. Extreme care should be taken before resource allocation is based on DEA-type efficiency measures alone. Both the identification of efficient and inefficient hospitals and the cardinal measure of inefficiency will depend on the specification of output. Since the scale properties of the technology also vary with the specification of output, the search for an optimal hospital size may be futile.
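DEA itself reduces to one small linear program per hospital. The sketch below solves the input-oriented CCR model with SciPy; the hospital data are made-up numbers, and the paper's models use richer input/output specifications than this two-input, one-output toy:

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_efficiency(X, Y, j0):
    """Input-oriented CCR DEA score of unit j0 (1.0 = efficient).
    X: (units, inputs), Y: (units, outputs). Variables: [theta, lambda_1..n].
    Minimize theta s.t. X^T lam <= theta * x0, Y^T lam >= y0, lam >= 0."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]                       # objective: theta
    A_in = np.hstack([-X[j0][:, None], X.T])          # input constraints
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])       # output constraints
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[j0]],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

# Made-up data: 4 hospitals, inputs = (beds, staff), output = treated cases
X = np.array([[20.0, 150.0], [40.0, 400.0], [30.0, 220.0], [25.0, 300.0]])
Y = np.array([[1000.0], [1800.0], [1500.0], [1100.0]])
scores = [dea_ccr_efficiency(X, Y, j) for j in range(len(X))]
```

In the CCR model at least one unit always scores 1; it is the scores and ranking of the remaining units that the paper shows to be sensitive to the output specification.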
A Note on the Assumption of Identical Distributions for Nonparametric Tests of Location
ERIC Educational Resources Information Center
Nordstokke, David W.; Colp, S. Mitchell
2018-01-01
Often, when testing for shift in location, researchers will utilize nonparametric statistical tests in place of their parametric counterparts when there is evidence or belief that the assumptions of the parametric test are not met (i.e., normally distributed dependent variables). An underlying and often unattended to assumption of nonparametric…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-26
... Status; Brightpoint North America L.P. (Cell Phone Kitting and Distribution) Indianapolis, IN Pursuant to... the cell phone kitting and distribution facilities of Brightpoint North America L.P., located in... cell phones at the facilities of Brightpoint North America L.P., located in Plainfield, Indiana...
NASA Astrophysics Data System (ADS)
Langousis, Andreas; Kaleris, Vassilios; Xeygeni, Vagia; Magkou, Foteini
2017-04-01
Assessing the availability of groundwater reserves at a regional level requires accurate and robust hydraulic head estimation at multiple locations in an aquifer. To that end, one needs groundwater observation networks that provide sufficient information to estimate the hydraulic head at unobserved locations. The density of such networks is largely influenced by the spatial distribution of hydraulic conductivity in the aquifer, and it is usually determined by trial and error, by solving the groundwater flow for a properly selected set of alternative but physically plausible geologic structures. In this work, we use: a) dimensional analysis, and b) a pulse-based stochastic model for simulating synthetic aquifer structures, to calculate the distribution of the absolute error in hydraulic head estimation as a function of the standardized distance from the nearest measuring locations. The resulting distributions are shown to encompass all possible small-scale structural dependencies, exhibiting characteristics (bounds, multi-modal features, etc.) that can be explained using simple geometric arguments. The obtained results are promising, pointing toward the establishment of design criteria based on large-scale geologic maps.
Observer efficiency in free-localization tasks with correlated noise.
Abbey, Craig K; Eckstein, Miguel P
2014-01-01
The efficiency of visual tasks involving localization has traditionally been evaluated using forced-choice experiments that capitalize on independence across locations to simplify the performance of the ideal observer. However, developments in ideal observer analysis have shown how an ideal observer can be defined for free-localization tasks, where a target can appear anywhere in a defined search region and subjects respond by localizing the target. Since these tasks are representative of many real-world search tasks, it is of interest to evaluate the efficiency of observer performance in them. The central question of this work is whether humans are able to effectively use the information in a free-localization task relative to a similar task where target location is fixed. We use a yes-no detection task at a cued location as the reference for this comparison. Each of the tasks is evaluated using a Gaussian target profile embedded in four different Gaussian noise backgrounds having power-law noise power spectra with exponents ranging from 0 to 3. The free-localization task had a square 6.7° search region. We report on two follow-up studies investigating efficiency in a detect-and-localize task, and the effect of processing the white-noise backgrounds. In the fixed-location detection task, we find average observer efficiency ranges from 35 to 59% for the different noise backgrounds. Observer efficiency improves dramatically in the tasks involving localization, ranging from 63 to 82% in the forced-localization tasks and from 78 to 92% in the detect-and-localize tasks. Performance in white noise, the lowest-efficiency condition, was improved by filtering the backgrounds to give them a power-law exponent of 2. Classification images, used to examine spatial frequency weights for the tasks, show better tuning to ideal weights in the free-localization tasks. The high absolute levels of efficiency suggest that observers are well-adapted to free-localization tasks. PMID:24817854
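The efficiency figures quoted in this literature are conventionally the squared ratio of human to ideal detectability, η = (d′_human / d′_ideal)², with d′ = √2 · Φ⁻¹(PC) for a two-alternative forced-choice task. A minimal sketch with made-up percent-correct values (not the study's data):

```python
from statistics import NormalDist

def dprime_from_pc_2afc(pc: float) -> float:
    """Detectability index d' from proportion correct in a 2AFC task."""
    return 2 ** 0.5 * NormalDist().inv_cdf(pc)

def efficiency(d_human: float, d_ideal: float) -> float:
    """Observer efficiency: squared ratio of human to ideal d'."""
    return (d_human / d_ideal) ** 2

# Hypothetical numbers for illustration: a human at 85% correct
# versus an ideal observer at 95% correct on the same task.
d_h = dprime_from_pc_2afc(0.85)
d_i = dprime_from_pc_2afc(0.95)
print(f"human d' = {d_h:.2f}, ideal d' = {d_i:.2f}, "
      f"efficiency = {efficiency(d_h, d_i):.0%}")
```

With these invented inputs the efficiency comes out near 40%, in the range the fixed-location literature reports; the definition itself is standard, only the numbers here are hypothetical.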
Enabling affordable and efficiently deployed location based smart home systems.
Kelly, Damian; McLoone, Sean; Dishongh, Terry
2009-01-01
With the obvious eldercare capabilities of smart environments, it is a question of "when", rather than "if", these technologies will be routinely integrated into the design of future houses. In the meantime, health monitoring applications must be integrated into already-complete home environments. However, there is significant effort involved in installing the hardware necessary to monitor the movements of an elder throughout an environment. Our work seeks to address the high infrastructure requirements of traditional location-based smart home systems by developing an extremely low-infrastructure localisation technique. A study of the most efficient method of obtaining calibration data for an environment is conducted, and different mobile devices are compared on their localisation-accuracy versus cost trade-off. It is believed that these developments will contribute towards more efficiently deployed location-based smart home systems.
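The abstract does not name its localisation algorithm; a common low-infrastructure approach that fits its description of per-environment calibration data is signal-strength fingerprinting. The sketch below is a generic nearest-neighbour fingerprint matcher with invented room profiles and beacon readings, not the authors' method.

```python
import math

# Hypothetical calibration (fingerprint) database: room label ->
# mean received-signal strengths (dBm) from three fixed beacons,
# gathered during a one-off walk-through of the home.
fingerprints = {
    "kitchen":  (-45.0, -70.0, -82.0),
    "bedroom":  (-78.0, -50.0, -66.0),
    "bathroom": (-80.0, -62.0, -48.0),
}

def locate(reading):
    """Nearest-neighbour match of a live RSSI reading against the
    calibration database (Euclidean distance in dBm space)."""
    return min(fingerprints,
               key=lambda room: math.dist(reading, fingerprints[room]))

print(locate((-47.0, -72.0, -80.0)))  # closest to the kitchen profile
```

Room-level resolution like this is often enough for movement monitoring, which is why fingerprinting needs so little installed hardware compared with ranging-based systems.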
Thermal Characterization of a Simulated Fission Engine via Distributed Fiber Bragg Gratings
NASA Astrophysics Data System (ADS)
Duncan, Roger G.; Fielder, Robert S.; Seeley, Ryan J.; Kozikowski, Carrie L.; Raum, Matthew T.
2005-02-01
We report the use of distributed fiber Bragg gratings to monitor thermal conditions within a simulated nuclear reactor core located at the Early Flight Fission Test Facility of the NASA Marshall Space Flight Center. Distributed fiber-optic temperature measurements promise to add significant capability and advance the state of the art in high-temperature sensing. For the work reported herein, seven probes were constructed with ten sensors each, for a total of 70 sensor locations throughout the core. These discrete temperature sensors were monitored over a nine-hour period while the test article was heated to over 700 °C and cooled to ambient through two operational cycles. The available sensor density permits a significantly improved understanding of thermal effects within the simulated reactor. Fiber-optic sensor performance is shown to compare very favorably with co-located thermocouples where such co-location was feasible.
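For context on how a Bragg grating reads temperature: the reflected wavelength shifts approximately linearly with temperature, Δλ/λ₀ ≈ K_T·ΔT. The sketch below uses a typical textbook coefficient for bare silica fiber (~6.7×10⁻⁶ /°C, i.e. roughly 13 pm/°C near 1550 nm), not a calibration from this experiment, and ignores the nonlinearity that becomes significant near 700 °C.

```python
# Combined thermo-optic + thermal-expansion sensitivity for silica
# fiber, 1/degC. This is a generic textbook value, not the paper's
# calibration.
K_T = 6.7e-6

def delta_T(lambda_0_nm: float, lambda_nm: float) -> float:
    """Temperature change (degC) implied by a Bragg wavelength shift,
    under a simple linear model delta_lambda / lambda_0 = K_T * dT."""
    return (lambda_nm - lambda_0_nm) / (lambda_0_nm * K_T)

# A grating written at 1550 nm that drifts to about 1557.3 nm would
# indicate roughly a 700 degC rise under this linear model.
print(f"{delta_T(1550.0, 1557.3):.0f} degC")
```

In practice, interrogators resolve many such gratings along one fiber by their distinct center wavelengths, which is what makes the 70-point distributed measurement in the experiment possible with only seven probes.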