Continuous Variable Quantum Key Distribution Using Polarized Coherent States
NASA Astrophysics Data System (ADS)
Vidiella-Barranco, A.; Borelli, L. F. M.
We discuss a continuous-variable method of quantum key distribution employing strongly polarized coherent states of light. The key encoding is performed using the Stokes parameters rather than the field quadratures. Their quantum counterparts, the Stokes operators Ŝi (i=1,2,3), constitute a set of non-commuting operators, so the precision of simultaneous measurements of any pair of them is limited by an uncertainty-like relation. Alice transmits a suitably modulated two-mode coherent state, and Bob randomly measures one of the Stokes parameters of the incoming beam. After reconciliation and privacy amplification, a secret common key can be distilled. We also consider a non-ideal situation in which coherent states with thermal noise, instead of pure coherent states, are used for encoding.
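As an aside, the mean Stokes parameters of a two-mode coherent state referenced in this abstract can be sketched numerically. This is a minimal illustration of the classical (mean-value) relations, not the protocol itself; the amplitudes below are hypothetical.

```python
def stokes_parameters(ax: complex, ay: complex):
    """Mean Stokes parameters of a two-mode coherent state with
    horizontal/vertical amplitudes ax, ay (mean values only)."""
    s0 = abs(ax) ** 2 + abs(ay) ** 2          # total intensity
    s1 = abs(ax) ** 2 - abs(ay) ** 2          # horizontal vs. vertical
    s2 = 2 * (ax.conjugate() * ay).real       # diagonal polarization
    s3 = 2 * (ax.conjugate() * ay).imag       # circular polarization
    return s0, s1, s2, s3
```

For example, equal-amplitude modes with a 90° relative phase give purely circular polarization (only S3 nonzero).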
Variable speed generator technology options for wind turbine generators
NASA Technical Reports Server (NTRS)
Lipo, T. A.
1995-01-01
The electrical system options for variable-speed operation of a wind turbine generator are treated in this paper. The key operating characteristics of each system are discussed, and the major advantages and disadvantages of each are identified.
Continuous-variable measurement-device-independent quantum key distribution with photon subtraction
NASA Astrophysics Data System (ADS)
Ma, Hong-Xin; Huang, Peng; Bai, Dong-Yun; Wang, Shi-Yu; Bao, Wan-Su; Zeng, Gui-Hua
2018-04-01
It has been found that non-Gaussian operations can increase and distill the entanglement of Gaussian entangled states. We show the successful use of a non-Gaussian operation, in particular photon subtraction, in the continuous-variable measurement-device-independent quantum key distribution (CV-MDI-QKD) protocol. The proposed method can be implemented with existing technologies. Security analysis shows that photon subtraction can remarkably increase the maximal transmission distance of the CV-MDI-QKD protocol, precisely making up for the shortcoming of the original protocol, and that one-photon subtraction performs best. Moreover, the proposed protocol provides a feasible route to experimental implementation of CV-MDI-QKD.
Integrating Variable Renewable Energy into the Grid: Key Issues, Greening the Grid (Spanish Version)
DOE Office of Scientific and Technical Information (OSTI.GOV)
This is the Spanish version of 'Greening the Grid - Integrating Variable Renewable Energy into the Grid: Key Issues'. To foster sustainable, low-emission development, many countries are establishing ambitious renewable energy targets for their electricity supply. Because solar and wind tend to be more variable and uncertain than conventional sources, meeting these targets will involve changes to power system planning and operations. Grid integration is the practice of developing efficient ways to deliver variable renewable energy (VRE) to the grid. Good integration methods maximize the cost-effectiveness of incorporating VRE into the power system while maintaining or increasing system stability and reliability. When considering grid integration, policy makers, regulators, and system operators consider a variety of issues, which can be organized into four broad topics: New Renewable Energy Generation, New Transmission, Increased System Flexibility, and Planning for a High RE Future.
Zhang, Zheshen; Voss, Paul L
2009-07-06
We propose a continuous-variable quantum key distribution protocol that makes use of discretely signaled coherent light and reverse error reconciliation. We present a rigorous security proof against collective attacks under realistic conditions: lossy, noisy quantum channels, imperfect detector efficiency, and detector electronic noise. With the use of post-selection, this protocol is promising for convenient, high-speed operation at link distances up to 50 km.
Two-key concurrent responding: response-reinforcement dependencies and blackouts
Herbert, Emily W.
1970-01-01
Two-key concurrent responding was maintained for three pigeons by a single variable-interval 1-minute schedule of reinforcement in conjunction with a random number generator that assigned feeder operations between keys with equal probability. The duration of blackouts was varied between keys when each response initiated a blackout, and grain arranged by the variable-interval schedule was automatically presented after a blackout (Exp. I). In Exp. II every key peck, except those that produced grain, initiated a blackout, and grain was dependent upon a response following a blackout. For each pigeon in Exp. I and for one pigeon in Exp. II, the relative frequency of responding on a key approximated, i.e., matched, the relative reciprocal of the duration of the blackout interval on that key. In a third experiment, blackouts scheduled on a variable-interval schedule were of equal duration on the two keys. For one key, grain automatically followed each blackout; for the other key, grain was dependent upon a response and never followed a blackout. The relative frequency of responding on the former key, i.e., the delay key, better approximated the negative exponential function obtained by Chung (1965) than the matching function predicted by Chung and Herrnstein (1967). PMID:16811458
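The matching relation reported in Exp. I above can be sketched in a few lines: relative response frequency on a key tracks the relative reciprocal of that key's blackout duration. This is a minimal illustration of the stated relation; the durations used are hypothetical, not data from the study.

```python
def predicted_relative_rate(blackout_left: float, blackout_right: float) -> float:
    """Matching prediction (Exp. I): relative responding on the left key
    equals the relative reciprocal of its blackout duration."""
    recip_l = 1.0 / blackout_left
    recip_r = 1.0 / blackout_right
    return recip_l / (recip_l + recip_r)
```

For example, with blackouts of 2 s (left) and 6 s (right), matching predicts 75% of responses on the left key.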
Quantum hacking of two-way continuous-variable quantum key distribution using Trojan-horse attack
NASA Astrophysics Data System (ADS)
Ma, Hong-Xin; Bao, Wan-Su; Li, Hong-Wei; Chou, Chun
2016-08-01
We present a Trojan-horse attack on a practical two-way continuous-variable quantum key distribution system. Our attack exploits an imperfection of the practical system: the modulator's redundant modulation pulse width leaves a loophole through which an eavesdropper can insert a Trojan-horse pulse. Exploiting the unique characteristic of two-way continuous-variable quantum key distribution that Alice only modulates the received mode without performing any measurement, this attack allows the eavesdropper to render all of the final keys shared between the legitimate parties insecure without being detected. After analyzing the feasibility of the attack, corresponding countermeasures are put forward. Project supported by the National Basic Research Program of China (Grant No. 2013CB338002) and the National Natural Science Foundation of China (Grant Nos. 11304397 and 61505261).
Field test of classical symmetric encryption with continuous variables quantum key distribution.
Jouguet, Paul; Kunz-Jacques, Sébastien; Debuisschert, Thierry; Fossier, Simon; Diamanti, Eleni; Alléaume, Romain; Tualle-Brouri, Rosa; Grangier, Philippe; Leverrier, Anthony; Pache, Philippe; Painchault, Philippe
2012-06-18
We report on the design and performance of a point-to-point classical symmetric encryption link with fast key renewal provided by a Continuous Variable Quantum Key Distribution (CVQKD) system. Our system was operational and able to encrypt point-to-point communications for more than six months, from the end of July 2010 until the beginning of February 2011. This field test was the first demonstration of the reliability of a CVQKD system over a long period of time in a server-room environment. This strengthens the potential of CVQKD for information technology security infrastructure deployments.
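The architecture described above pairs a QKD layer (supplying fresh session keys) with a classical symmetric cipher that is frequently rekeyed. A minimal stdlib-only sketch of the rekeying idea is below; the field test used a standard cipher (not this construction), so the SHA-256 counter-mode keystream here is purely illustrative.

```python
import hashlib
from itertools import count

def keystream(key: bytes, n: int) -> bytes:
    """Expand a short session key into n pseudorandom bytes
    (SHA-256 in counter mode; an illustrative stand-in for a real cipher)."""
    out = bytearray()
    for ctr in count():
        out.extend(hashlib.sha256(key + ctr.to_bytes(8, "big")).digest())
        if len(out) >= n:
            return bytes(out[:n])

def encrypt(key: bytes, msg: bytes) -> bytes:
    """XOR the message with the keystream; applying it twice decrypts.
    Fast key renewal = calling this with a fresh QKD-derived key per frame."""
    return bytes(m ^ k for m, k in zip(msg, keystream(key, len(msg))))
```

In a deployment, each `key` would be replaced at a high rate from the CVQKD key buffer rather than reused across frames.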
Chung, Ji-Woo; Kim, Kyung-Min; Yoon, Tae-Ung; Kim, Seung-Ik; Jung, Tae-Sung; Han, Sang-Sup; Bae, Youn-Sang
2017-12-22
A novel power partial-discard (PPD) strategy was developed as a variant of the partial-discard (PD) operation to further improve the separation performance of the simulated moving bed (SMB) process. The PPD operation varies the flow rates of the discard streams by introducing a new variable, the discard amount (DA), in addition to the previously reported variable, the discard length (DL), whereas conventional PD uses fixed discard flow rates. The PPD operations showed significantly improved purities despite some loss of recovery. Remarkably, the PPD operation delivered higher purity for a given recovery, or higher recovery for a given purity, than the PD operation. The two variables, DA and DL, played a key role in achieving the desired purity and recovery. PPD operations will be useful for attaining high-purity products with reasonable recoveries. Copyright © 2017 Elsevier B.V. All rights reserved.
Pelissari, Daniele Maria; Rocha, Marli Souza; Bartholomay, Patricia; Sanchez, Mauro Niskier; Duarte, Elisabeth Carmen; Arakaki-Sanchez, Denise; Dantas, Cíntia Oliveira; Jacobs, Marina Gasino; Andrade, Kleydson Bonfim; Codenotti, Stefano Barbosa; Andrade, Elaine Silva Nascimento; Araújo, Wildo Navegantes de; Costa, Fernanda Dockhorn; Ramalho, Walter Massa; Diaz-Quijano, Fredi Alexander
2018-06-06
To identify scenarios based on socioeconomic, epidemiological and operational healthcare factors associated with tuberculosis incidence in Brazil. Ecological study. The study was based on new patients with tuberculosis and epidemiological/operational variables of the disease from the Brazilian National Information System for Notifiable Diseases and the Mortality Information System. We also analysed socioeconomic and demographic variables. The units of analysis were the Brazilian municipalities, which numbered 5570 in 2015; 5 were excluded due to the absence of socioeconomic information. Tuberculosis incidence rate in 2015. As independent variables, we evaluated socioeconomic indicators (2010) and epidemiological and operational healthcare indicators of tuberculosis (2014 or 2015), using negative binomial regression. Municipalities were clustered by the k-means method considering the variables identified in the multiple regression models. We identified two clusters according to the socioeconomic variables associated with the tuberculosis incidence rate (unemployment rate and household crowding): a higher socioeconomic scenario (n=3482 municipalities) with a mean tuberculosis incidence rate of 16.3/100 000 population and a lower socioeconomic scenario (2083 municipalities) with a mean tuberculosis incidence rate of 22.1/100 000 population. In a second stage of clustering, we defined four subgroups in each of the socioeconomic scenarios using epidemiological and operational variables such as tuberculosis mortality rate, AIDS case detection rate and the proportion of vulnerable population among patients with tuberculosis. Some of the subscenarios identified were characterised by fragility in their information systems, while others were characterised by the concentration of tuberculosis cases in key populations. Clustering municipalities in scenarios allowed us to classify them according to the socioeconomic, epidemiological and operational variables associated with tuberculosis risk.
This classification can support targeted evidence-based decisions such as monitoring data quality for improving the information system or establishing integrative social protective policies for key populations. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
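The k-means clustering step used above to group municipalities can be sketched with a minimal pure-Python implementation (two features, two clusters). The toy data points below are illustrative, not study values; a real analysis would standardize indicators first.

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Minimal k-means on tuples of features; returns (centers, clusters)."""
    rng = random.Random(seed)
    centers = [list(p) for p in rng.sample(points, k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda j: sum((a - b) ** 2 for a, b in zip(p, centers[j])))
            clusters[j].append(p)
        for j, cl in enumerate(clusters):
            if cl:  # keep the old center if a cluster empties out
                centers[j] = [sum(dim) / len(cl) for dim in zip(*cl)]
    return centers, clusters
```

With two well-separated groups of municipalities in (unemployment, crowding) space, the algorithm recovers the two scenarios.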
Kleis, Sebastian; Rueckmann, Max; Schaeffer, Christian G
2017-04-15
In this Letter, we propose a novel implementation of continuous-variable quantum key distribution that operates with a real local oscillator placed at the receiver site. In addition, pulsing of the continuous-wave laser sources is not required, leading to an exceptionally practical and secure setup. It is suitable for arbitrary schemes based on modulated coherent states and heterodyne detection. The results shown include transmission experiments, as well as an excess noise analysis applying a discrete 8-state phase modulation. Achievable key rates under collective attacks are estimated. The results demonstrate the high potential of the approach to achieve high secret key rates at relatively low effort and cost.
ERIC Educational Resources Information Center
Ong, Caroline C.; Dodds, Agnes; Nestel, Debra
2016-01-01
Surgeons require advanced psychomotor skills, critical decision-making and teamwork skills. Much of surgical skills training involve progressive trainee participation in supervised operations where case variability, operating team interaction and environment affect learning, while surgical teachers face the key challenge of ensuring patient…
Continuous variable quantum cryptography: beating the 3 dB loss limit.
Silberhorn, Ch; Ralph, T C; Lütkenhaus, N; Leuchs, G
2002-10-14
We demonstrate that secure quantum key distribution systems based on continuous-variable implementations can operate beyond the apparent 3 dB loss limit that is implied by the beam-splitting attack. The loss limit was established for standard minimum-uncertainty states such as coherent states. We show that, by an appropriate postselection mechanism, we can enter a region where Eve's knowledge of Alice's key falls behind the information shared between Alice and Bob, even in the presence of substantial losses.
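The "3 dB" in this abstract is simply the loss at which channel transmission drops to about one half, i.e. the point where a beam-splitting eavesdropper would hold as much of the signal as Bob. A one-line conversion makes the threshold concrete (illustrative arithmetic only, not the security analysis):

```python
import math

def transmission_from_loss_db(loss_db: float) -> float:
    """Convert channel loss in dB to power transmission T = 10^(-loss/10)."""
    return 10 ** (-loss_db / 10)

def loss_db_from_transmission(t: float) -> float:
    """Inverse conversion, for checking where T = 0.5 falls."""
    return -10 * math.log10(t)
```

At exactly 3 dB the transmission is just above 0.5 (3 dB is the conventional rounding of 10·log10(2) ≈ 3.0103 dB); postselection is what lets the protocol above tolerate losses beyond this point.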
Matching: Its Acquisition and Generalization
ERIC Educational Resources Information Center
Crowley, Michael A.; Donahoe, John W.
2004-01-01
Choice typically is studied by exposing organisms to concurrent variable-interval schedules in which not only responses controlled by stimuli on the key are acquired but also switching responses and likely other operants as well. In the present research, discriminated key-pecking responses in pigeons were first acquired using a multiple schedule…
PHYSICAL AND OPTICAL PROPERTIES OF STEAM-EXPLODED LASER-PRINTED PAPER
Laser-printed paper was pulped by the steam-explosion process. A full-factorial experimental design was applied to determine the effects of key operating variables on the properties of steam-exploded pulp. The variables were addition level for pulping chemicals (NaOH and/or Na2SO...
High performance frame synchronization for continuous variable quantum key distribution systems.
Lin, Dakai; Huang, Peng; Huang, Duan; Wang, Chao; Peng, Jinye; Zeng, Guihua
2015-08-24
Considering a practical continuous-variable quantum key distribution (CVQKD) system, synchronization is of significant importance, as it is hardly possible to extract secret keys from unsynchronized strings. In this paper, we propose a high-performance frame synchronization method for CVQKD systems that is capable of operating under low signal-to-noise ratios (SNRs) and is compatible with the random phase shift induced by the quantum channel. A practical low-complexity implementation of this method is presented and its performance analysed. By adjusting the length of the synchronization frame, the method works well over a large range of SNR values, paving the way for longer-distance CVQKD.
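The core idea of phase-tolerant frame synchronization can be sketched as a cross-correlation whose peak is taken in magnitude, so a global phase rotation imposed by the channel does not move it. This is a minimal sketch of the general technique, not the specific low-complexity method of the paper; the frame and offset below are made up.

```python
def sync_offset(received, sync_frame):
    """Locate a known sync frame in a received complex sequence by the peak
    of |cross-correlation|; a global phase rotation exp(i*theta) applied by
    the channel scales every correlation by the same unit factor, so the
    peak location is unchanged."""
    n, m = len(received), len(sync_frame)
    best_mag, best_off = -1.0, 0
    for off in range(n - m + 1):
        c = sum(received[off + i] * sync_frame[i].conjugate() for i in range(m))
        if abs(c) > best_mag:
            best_mag, best_off = abs(c), off
    return best_off
```

In practice the frame length would be tuned to the SNR, as the abstract notes; longer frames average down noise at the cost of overhead.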
Discriminated Timeout Avoidance in Pigeons: The Roles of Added Stimuli
ERIC Educational Resources Information Center
DeFulio, Anthony; Hackenberg, Timothy D.
2007-01-01
Two experiments examined pigeons' postponement of a signaled extinction period, or timeout (TO), from an ongoing schedule of response-dependent food delivery. A concurrent-operant procedure was used in which responses on one (food) key produced food according to a variable-interval schedule and responses on a second (postponement) key delayed the…
Natural language processing to ascertain two key variables from operative reports in ophthalmology.
Liu, Liyan; Shorstein, Neal H; Amsden, Laura B; Herrinton, Lisa J
2017-04-01
Antibiotic prophylaxis is critical to ophthalmology and other surgical specialties. We performed natural language processing (NLP) of 743 838 operative notes recorded for 315 246 surgeries to ascertain two variables needed to study the comparative effectiveness of antibiotic prophylaxis in cataract surgery. The first key variable was an exposure variable, intracameral antibiotic injection. The second was an intraoperative complication, posterior capsular rupture (PCR), which functioned as a potential confounder. To help other researchers use NLP in their settings, we describe our NLP protocol and lessons learned. For each of the two variables, we used SAS Text Miner and other SAS text-processing modules with a training set of 10 000 (1.3%) operative notes to develop a lexicon. The lexica identified misspellings, abbreviations, and negations, and linked words into concepts (e.g. "antibiotic" linked with "injection"). We confirmed the NLP tools by iteratively obtaining random samples of 2000 (0.3%) notes, with replacement. The NLP tools identified approximately 60 000 intracameral antibiotic injections and 3500 cases of PCR. The positive and negative predictive values for intracameral antibiotic injection exceeded 99%. For the intraoperative complication, they exceeded 94%. NLP was a valid and feasible method for obtaining critical variables needed for a research study of surgical safety. These NLP tools were intended for use in the study sample. Use with external datasets or future datasets in our own setting would require further testing. Copyright © 2017 John Wiley & Sons, Ltd.
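The lexicon approach above (concept linking plus negation handling) can be caricatured with a toy regex matcher. This is a deliberately simplified stand-in for SAS Text Miner, with made-up patterns; real clinical NLP needs a curated lexicon and much broader negation logic.

```python
import re

# Flag "antibiotic ... injection" with up to three intervening words,
# mirroring the concept linking described in the abstract (toy version).
CONCEPT = re.compile(r"\bantibiotic\w*\b(?:\W+\w+){0,3}?\W+inject\w*", re.I)

# Crude negation cue within ~40 characters before the concept mention.
NEGATION = re.compile(r"\b(no|not|without|denies|negative for)\b[^.]{0,40}$", re.I)

def mentions_injection(note: str) -> bool:
    """Return True if the note asserts an antibiotic injection that is
    not preceded by a nearby negation cue."""
    for m in CONCEPT.finditer(note):
        if not NEGATION.search(note[:m.start()]):
            return True
    return False
```

Validating such rules against iteratively drawn random samples of notes, as the authors did, is what turns a sketch like this into a usable tool.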
Sensitivity study of Space Station Freedom operations cost and selected user resources
NASA Technical Reports Server (NTRS)
Accola, Anne; Fincannon, H. J.; Williams, Gregory J.; Meier, R. Timothy
1990-01-01
The results of sensitivity studies performed to estimate probable ranges for four key Space Station parameters using the Space Station Freedom's Model for Estimating Space Station Operations Cost (MESSOC) are discussed. The variables examined are grouped into five main categories: logistics, crew, design, space transportation system, and training. The modification of these variables implies programmatic decisions in areas such as orbital replacement unit (ORU) design, investment in repair capabilities, and crew operations policies. The model utilizes a wide range of algorithms and an extensive trial logistics data base to represent Space Station operations. The trial logistics data base consists largely of a collection of the ORUs that comprise the mature station, and their characteristics based on current engineering understanding of the Space Station. A nondimensional approach is used to examine the relative importance of variables on parameters.
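The nondimensional approach mentioned above can be sketched as an elasticity: the percent change in an output per percent change in an input, which makes variables with different units comparable. This is a generic finite-difference illustration, not MESSOC's internal method; the cubic test function is hypothetical.

```python
def elasticity(f, x: float, rel_step: float = 0.01) -> float:
    """Nondimensional sensitivity d(ln f)/d(ln x) = (x / f(x)) * df/dx,
    with df/dx estimated by a central difference."""
    h = x * rel_step
    dfdx = (f(x + h) - f(x - h)) / (2 * h)
    return dfdx * (x / f(x))
```

An elasticity near 0 marks a variable the output barely responds to; values well above 1 flag the parameters whose probable ranges matter most in a sensitivity study.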
Planning for Production of Freshwater Fish Fry in a Variable Climate in Northern Thailand.
Uppanunchai, Anuwat; Apirumanekul, Chusit; Lebel, Louis
2015-10-01
Provision of adequate numbers of quality fish fry is often a key constraint on aquaculture development. The management of climate-related risks in hatchery and nursery operations has not received much attention, but is likely to be a key element of successful adaptation to climate change in the aquaculture sector. This study explored the sensitivity and vulnerability of freshwater fish fry production in 15 government hatcheries across Northern Thailand to climate variability and evaluated the robustness of the proposed adaptation measures. Hatcheries have to consider several factors when planning production, including farmer demand; the production capacity of the hatchery; the availability of water resources; local climate and other site factors; and individual species' requirements. Nile tilapia is the most commonly cultured species of freshwater fish. Most fry production is done in the wet season, as cold spells and drought conditions disrupt hatchery production and reduce fish-farm demand in the dry season; in the wet season, some hatcheries are impacted by floods. Using a set of scenarios to capture major uncertainties and variability in climate, this study suggests several strategies that should help make hatchery operations more resilient to climate change, in particular: improving hatchery operations and management to deal better with risks under current climate variability; improving monitoring and information systems so that emerging climate-related risks are known sooner and understood better; and research and development on alternative species, breeding programs, water management, and other features of hatchery operations.
Improvement of two-way continuous-variable quantum key distribution with virtual photon subtraction
NASA Astrophysics Data System (ADS)
Zhao, Yijia; Zhang, Yichen; Li, Zhengyu; Yu, Song; Guo, Hong
2017-08-01
We propose a method to improve the performance of the two-way continuous-variable quantum key distribution protocol by virtual photon subtraction. The virtual photon subtraction, implemented via non-Gaussian post-selection, not only enhances the entanglement of the two-mode squeezed vacuum state but also simplifies the physical operation and improves efficiency. In the two-way protocol, virtual photon subtraction can be applied to the two sources independently. Numerical simulations show that the best performance of the modified two-way protocol is obtained with photon subtraction used only by Alice. The transmission distance and tolerable excess noise are improved by using virtual photon subtraction with appropriate parameters. Moreover, the tolerable excess noise remains high as the distance increases, so the robustness of the two-way continuous-variable quantum key distribution system is significantly improved, especially at long transmission distances.
A Significant Role for Renewables in a Low-Carbon Energy Economy?
NASA Astrophysics Data System (ADS)
Newmark, R. L.
2015-12-01
Renewables currently make up a small (but growing) fraction of total U.S. electricity generation. In some regions, renewable growth has resulted in instantaneous penetration levels of wind and solar in excess of 60% of demand. With decreasing costs, abundant resource potential and low carbon emissions and water requirements, wind and solar are increasingly becoming attractive new generation options. However, factors such as resource variability and geographic distribution of prime resources raise questions regarding the extent to which our power system can rely on variable generation resources. Here, we describe scenario analyses designed to tackle engineering and economic challenges associated with variable generation, along with insights derived from research results. These analyses demonstrate the operability of high renewable systems and quantify some of the engineering challenges (and solutions) associated with maintaining reliability. Key questions addressed include the operational and economic impacts of increasing levels of variable generation on the U.S. power system. Since reliability and economic efficiency are measured across a variety of time frames, and with a variety of metrics, a suite of tools addressing different system impacts are used to understand how new resources affect incumbent resources and operational practices. We summarize a range of modeled scenarios, focusing on ones with 80% RE in the United States and >30% variable wind and solar in the East and the West. We also summarize the environmental impacts and benefits estimated for these and similar scenarios. Results provide key insights to inform the technical, operational and regulatory evolution of the U.S. power system. This work is extended internationally through the 21st Century Power Partnership's collaborations on power system transformation, with active collaboration in Canada, Mexico, India, China and South Africa, among others.
Understanding traffic variations by vehicle classifications
DOT National Transportation Integrated Search
1998-08-01
To provide a better understanding of how short-duration truck volume counts can be used to accurately estimate the key variables needed for design, planning, and operational analyses, the Long-Term Pavement Performance (LTPP) program recently complet...
Hydropower Modeling Challenges
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stoll, Brady; Andrade, Juan; Cohen, Stuart
Hydropower facilities are important assets for the electric power sector and represent a key source of flexibility for electric grids with large amounts of variable generation. As variable renewable generation sources expand, understanding the capabilities and limitations of the flexibility from hydropower resources is important for grid planning. Appropriately modeling these resources, however, is difficult because of the wide variety of constraints these plants face that other generators do not. These constraints can be broadly categorized as environmental, operational, and regulatory. This report highlights several key issues involving incorporating these constraints when modeling hydropower operations in terms of production cost and capacity expansion. Many of these challenges involve a lack of data to adequately represent the constraints or issues of model complexity and run time. We present several potential methods for improving the accuracy of hydropower representation in these models to allow for a better understanding of hydropower's capabilities.
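One of the simplest operational constraints mentioned above, a limited energy budget behind a capacity-limited plant, can be sketched as a greedy peak-shaving dispatch. This is a toy illustration of the modeling problem, not any production-cost model's algorithm; the load profile and limits are hypothetical.

```python
def dispatch_hydro(net_load, energy_budget, max_power):
    """Greedy peak-shaving dispatch: place a limited hydro energy budget
    (MWh) into the highest net-load hours, respecting the MW capacity limit.
    Ignores ramping, reservoir, and environmental constraints."""
    out = [0.0] * len(net_load)
    for h in sorted(range(len(net_load)), key=lambda i: -net_load[i]):
        use = min(max_power, energy_budget)
        out[h] = use
        energy_budget -= use
        if energy_budget <= 0:
            break
    return out
```

Even this caricature shows why hydropower is hard to model well: adding minimum-flow or ramping rules quickly turns a one-pass greedy rule into a full optimization problem.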
Medical Support for Aircraft Disaster Search and Recovery Operations at Sea: the RSN Experience.
Teo, Kok Ann Colin; Chong, Tse Feng Gabriel; Liow, Min Han Lincoln; Tang, Kong Choong
2016-06-01
The maritime environment presents a unique set of challenges to search and recovery (SAR) operations. There is a paucity of information available to guide the provision of medical support for SAR operations for aircraft disasters at sea. The Republic of Singapore Navy (RSN) took part in two such SAR operations in 2014, which showcased the value of a military organization in these operations. Key considerations in medical support for similar operations include the resultant casualty profile and challenges specific to the maritime environment, such as large distances of the area of operations from land, variable sea states, and space limitations. Medical support planning can be approached using the well-established disaster management life cycle phases of preparedness, mitigation, response, and recovery, all of which are described in detail. This includes the key areas of dedicated training and exercises, force protection, availability of air assets and chamber support, psychological care, and the forensic handling of human remains. Relevant lessons learned by the RSN from the Air Asia QZ8501 search operation are also included in the description of these key areas. Teo KAC , Chong TFG , Liow MHL , Tang KC . Medical support for aircraft disaster search and recovery operations at sea: the RSN experience. Prehosp Disaster Med. 2016; 31(3):294-299.
Predictive factors of difficulty in lower third molar extraction: A prospective cohort study
Alvira-González, Joaquín; Valmaseda-Castellón, Eduard; Quesada-Gómez, Carmen; Gay-Escoda, Cosme
2017-01-01
Background Several publications have measured the difficulty of third molar removal and tried to establish the main risk factors; however, several important preoperative and intraoperative variables are overlooked. Material and Methods A prospective cohort study comprising a total of 130 consecutive lower third molar extractions was performed. The outcome variables used to measure the difficulty of the extraction were operation time and a 100 mm visual analogue scale completed by the surgeon at the end of the surgical procedure. The predictors were divided into 4 groups (demographic, anatomic, radiographic and operative variables). A descriptive, bivariate and multivariate analysis of the data was performed. Results Patients’ weight, the presence of bulbous roots, the need to perform crown and root sectioning of the lower third molar, and Pell and Gregory 123 classification significantly influenced both outcome variables (p < 0.05). Conclusions Certain anatomical, radiological and operative variables appear to be important factors in the assessment of surgical difficulty in the extraction of lower third molars. Key words: Third molar, surgical extraction, surgical difficulty. PMID:27918736
DWPF Melter Off-Gas Flammability Assessment for Sludge Batch 9
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choi, A. S.
2016-07-11
The slurry feed to the Defense Waste Processing Facility (DWPF) melter contains several organic carbon species that decompose in the cold cap and produce flammable gases that could accumulate in the off-gas system and create a potential flammability hazard. To mitigate such a hazard, DWPF has implemented a strategy to impose the Technical Safety Requirement (TSR) limits on all key operating variables affecting off-gas flammability and to operate the melter within those limits using both hardwired/software interlocks and administrative controls. The operating variables that are currently being controlled include: (1) total organic carbon (TOC), (2) air purges for combustion and dilution, (3) melter vapor space temperature, and (4) feed rate. The safety basis limits for these operating variables are determined using two computer models, the 4-stage cold cap and Melter Off-Gas (MOG) dynamics models, under the baseline upset scenario - a surge in off-gas flow due to the inherent cold cap instabilities in the slurry-fed melter.
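A standard screening calculation behind flammability controls like those above is the Le Chatelier mixing rule for the lower flammability limit (LFL) of a fuel-gas mixture. This is a generic textbook estimate, not DWPF's safety-basis models (which, per the abstract, are the cold cap and MOG dynamics codes); the gas compositions below are hypothetical.

```python
def lfl_mixture(fractions, lfls):
    """Le Chatelier estimate of the lower flammability limit (vol %) of a
    fuel mixture: LFL_mix = 1 / sum(y_i / LFL_i), where y_i are the
    fuel-basis mole fractions and LFL_i the pure-component limits."""
    assert abs(sum(fractions) - 1.0) < 1e-9, "fuel fractions must sum to 1"
    return 1.0 / sum(y / l for y, l in zip(fractions, lfls))
```

Dilution-air purges are then sized so that the flammable-gas concentration stays below a prescribed fraction of this mixture LFL during the bounding off-gas surge.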
2016-12-02
Networking with QUantum operationally-Secure Technology for Maritime Deployment (CONQUEST); Contract Period of Performance: 2 September 2016 - 1 September …
Raytheon BBN Technologies; Kathryn Carson, Program Manager, Quantum Information Processing
Continuous variable (CV) quantum key distribution (QKD) …; Quantum Computing, University of Waterloo, Waterloo ON, N2L 3G1, Canada (dated December 1, 2016).
Past and Potential Theory for Special Warfare Operational Art: People’s War and Contentious Politics
2015-03-04
bourgeois revolution, revolution from above, and peasant revolution. These events produced three corresponding revolutionary outcomes… the key structural variables that determined these paths and outcomes within a country were the strength of the bourgeoisie
Integrating Variable Renewable Energy - Russia
DOE Office of Scientific and Technical Information (OSTI.GOV)
To foster sustainable, low-emission development, many countries are establishing ambitious renewable energy targets for their electricity supply. Because solar and wind tend to be more variable and uncertain than conventional sources, meeting these targets will involve changes to power system planning and operations. Grid integration is the practice of developing efficient ways to deliver variable renewable energy (VRE) to the grid. Good integration methods maximize the cost-effectiveness of incorporating VRE into the power system while maintaining or increasing system stability and reliability. When considering grid integration, policy makers, regulators, and system operators consider a variety of issues, which can be organized into four broad topics: New Renewable Energy Generation, New Transmission, Increased System Flexibility, Planning for a High RE Future. This is a Russian-language translation of Integrating Variable Renewable Energy into the Grid: Key Issues, Greening the Grid, originally published in English in May 2015.
Using wavelets to decompose the time frequency effects of monetary policy
NASA Astrophysics Data System (ADS)
Aguiar-Conraria, Luís; Azevedo, Nuno; Soares, Maria Joana
2008-05-01
Central banks have different objectives in the short and long run. Governments operate simultaneously at different timescales. Many economic processes are the result of the actions of several agents, who have different term objectives. Therefore, a macroeconomic time series is a combination of components operating on different frequencies. Several questions about economic time series are connected to the understanding of the behavior of key variables at different frequencies over time, but this type of information is difficult to uncover using pure time-domain or pure frequency-domain methods. To our knowledge, for the first time in an economic setup, we use cross-wavelet tools to show that the relation between monetary policy variables and macroeconomic variables has changed and evolved with time. These changes are not homogeneous across the different frequencies.
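The core idea, that a series' dominant frequency content can shift over time and that wavelets localize this in both time and frequency, can be illustrated with a minimal continuous-wavelet sketch. The test signal, sampling rate, and Morlet parameters below are assumptions for illustration; the authors' actual estimator uses cross-wavelet and coherence tools between two series.

```python
import numpy as np

# Minimal time-frequency sketch in the spirit of wavelet analysis: convolve a
# signal with complex Morlet wavelets at two centre frequencies and see where
# each one dominates. Parameters are illustrative, not the paper's estimator.

fs = 10.0
t = np.arange(0, 200, 1 / fs)
# dominant frequency switches halfway through: 0.5 Hz, then 1.5 Hz
x = np.where(t < 100, np.sin(2 * np.pi * 0.5 * t), np.sin(2 * np.pi * 1.5 * t))

def morlet_power(x, fs, freq, width=6.0):
    """|CWT|^2 at one centre frequency via direct convolution."""
    sigma_t = width / (2 * np.pi * freq)
    tw = np.arange(-4 * sigma_t, 4 * sigma_t, 1 / fs)
    wavelet = np.exp(2j * np.pi * freq * tw) * np.exp(-tw**2 / (2 * sigma_t**2))
    wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))   # unit-energy wavelet
    return np.abs(np.convolve(x, wavelet, mode="same")) ** 2

p_low = morlet_power(x, fs, 0.5)
p_high = morlet_power(x, fs, 1.5)
half = len(t) // 2
```

A pure Fourier spectrum of `x` would show both frequencies but hide the fact that they occur in different epochs, which is exactly the information the wavelet decomposition preserves.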
NASA Astrophysics Data System (ADS)
Guo, Ying; Liao, Qin; Wang, Yijun; Huang, Duan; Huang, Peng; Zeng, Guihua
2017-03-01
A suitable photon-subtraction operation can be exploited to improve the maximal transmission distance of continuous-variable quantum key distribution (CVQKD) in point-to-point quantum communication. Unfortunately, the photon-subtraction operation is difficult to apply directly to practical quantum networks, where the entangled source is located at a third party, which may be controlled by a malicious eavesdropper, instead of at one of the trusted parties controlled by Alice or Bob. In this paper, we show that a solution can come from using a non-Gaussian operation, in particular the photon-subtraction operation, which provides a method to enhance the performance of entanglement-based (EB) CVQKD. Photon subtraction not only lengthens the maximal transmission distance by increasing the signal-to-noise ratio but also can be easily implemented with existing technologies. Security analysis shows that CVQKD with an entangled source in the middle (ESIM) can, by applying photon subtraction, substantially increase the secure transmission distance in both direct and reverse reconciliation of the EB-CVQKD scheme, even if the entangled source originates from an untrusted party. Moreover, it can defend against the inner-source attack, a specific attack mounted by an untrusted entangled source in the ESIM framework.
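A quick numerical check of the textbook fact underlying photon-subtraction intuition (this is standard quantum optics, not the paper's protocol): heralded single-photon subtraction maps a photon-number distribution p(n) to p'(n) proportional to (n+1)p(n+1). For a thermal state this doubles the mean photon number, while a coherent state, being an eigenstate of the annihilation operator, keeps its Poissonian statistics unchanged. The Fock-space truncation is an implementation choice.

```python
import numpy as np
from scipy.special import gammaln

N = 300                      # Fock-space truncation (tail is negligible here)
n = np.arange(N)

def subtract_photon(p):
    """Distribution after heralded subtraction: p'(n) proportional to (n+1)p(n+1)."""
    q = np.append((n[:-1] + 1) * p[1:], 0.0)
    return q / q.sum()

def mean_n(p):
    return float((n * p).sum())

# thermal state with mean photon number nbar
nbar = 2.0
p_th = (nbar ** n) / ((1 + nbar) ** (n + 1.0))
p_th /= p_th.sum()

# coherent state with |alpha|^2 = mu: Poisson statistics, computed in log space
mu = 2.0
p_coh = np.exp(n * np.log(mu) - gammaln(n + 1) - mu)
p_coh /= p_coh.sum()

m_th = mean_n(subtract_photon(p_th))    # ~ 4.0, i.e. 2 * nbar
m_coh = mean_n(subtract_photon(p_coh))  # ~ 2.0, unchanged
```

The doubling of the thermal mean is one way to see why subtraction "concentrates" energy and, in the entangled-state setting, can distill stronger correlations.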
Ma, Jian; Bai, Bing; Wang, Liu-Jun; Tong, Cun-Zhu; Jin, Ge; Zhang, Jun; Pan, Jian-Wei
2016-09-20
InGaAs/InP single-photon avalanche diodes (SPADs) are widely used in practical applications requiring near-infrared photon counting such as quantum key distribution (QKD). Photon detection efficiency and dark count rate are the intrinsic parameters of InGaAs/InP SPADs, due to the fact that their performances cannot be improved using different quenching electronics given the same operation conditions. After modeling these parameters and developing a simulation platform for InGaAs/InP SPADs, we investigate the semiconductor structure design and optimization. The parameters of photon detection efficiency and dark count rate highly depend on the variables of absorption layer thickness, multiplication layer thickness, excess bias voltage, and temperature. By evaluating the decoy-state QKD performance, the variables for SPAD design and operation can be globally optimized. Such optimization from the perspective of specific applications can provide an effective approach to design high-performance InGaAs/InP SPADs.
NASA Technical Reports Server (NTRS)
Welch, Gerard E.
2010-01-01
The main rotors of the NASA Large Civil Tilt-Rotor notional vehicle operate over a wide speed range (100% at take-off to 54% at cruise). The variable-speed power turbine, when coupled to a fixed-gear-ratio transmission, offers one approach to accomplish this speed variation. The key aero-challenges of the variable-speed power turbine are related to high work factors at cruise, where the power turbine operates at 54% of take-off speed; wide incidence variations into the vane, blade, and exit-guide-vane rows associated with the power-turbine speed change; and the impact of low aft-stage Reynolds numbers (transitional flow) at 28 kft cruise. Meanline and 2-D Reynolds-Averaged Navier-Stokes analyses are used to characterize the variable-speed power-turbine aerodynamic challenges and to outline a conceptual design approach that accounts for multi-point operation. The identified technical challenges associated with the aerodynamics of high work factor, incidence-tolerant blading, and low Reynolds numbers pose research needs outlined in the paper.
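The "high work factor at cruise" challenge follows directly from the stage-loading definition. Work factor (stage loading) is psi = dh / U^2; if the required specific work dh is roughly unchanged while shaft speed, and hence blade speed U, drops to 54% of take-off, the loading grows as 1/0.54^2. The normalization below is an illustrative assumption, not data from the paper.

```python
# Back-of-envelope for the stage-loading penalty of a fixed-gear
# variable-speed power turbine at reduced shaft speed.

takeoff_speed_frac = 1.00
cruise_speed_frac = 0.54          # 54% of take-off speed, per the abstract

psi_takeoff = 1.0                 # normalized stage loading at take-off (assumed)
# same specific work, blade speed scaled down: psi ~ 1/U^2
psi_cruise = psi_takeoff / cruise_speed_frac**2

print(f"stage-loading multiplier at cruise: {psi_cruise:.2f}x")  # ~3.43x
```

A roughly 3.4x rise in loading at constant work is why incidence-tolerant, high-work blading is the central design driver.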
An Investigation of Turbulent Heat Exchange in the Subtropics
2014-09-30
meteorological sensors aboard the research vessel the R/V Revelle during the DYNAMO field program. In situ meteorology and high-rate flux sensors operated...continuously while in the sampling period for DYNAMO Leg 3. This included all sensors operating during Leg 2 with the addition of a closed-path LI...stress; wave data; surface and near surface sea temperatures, salinity and currents; and other key variables specifically requested by DYNAMO /LASP PIs
Scarani, Valerio; Renner, Renato
2008-05-23
We derive a bound for the security of quantum key distribution with finite resources under one-way postprocessing, based on a definition of security that is composable and has an operational meaning. While our proof relies on the assumption of collective attacks, unconditional security follows immediately for standard protocols such as Bennett-Brassard 1984 and six-states protocol. For single-qubit implementations of such protocols, we find that the secret key rate becomes positive when at least N approximately 10(5) signals are exchanged and processed. For any other discrete-variable protocol, unconditional security can be obtained using the exponential de Finetti theorem, but the additional overhead leads to very pessimistic estimates.
Earth Observatory Satellite system definition study. Report no. 3: Design/cost tradeoff studies
NASA Technical Reports Server (NTRS)
1974-01-01
The key issues in the Earth Observatory Satellite (EOS) program which are subject to configuration study and tradeoff are identified. The issue of a combined operational and research and development program is considered. It is stated that cost and spacecraft weight are the key design variables and design options are proposed in terms of these parameters. A cost analysis of the EOS program is provided. Diagrams of the satellite configuration and subsystem components are included.
Using near infrared spectroscopy and heart rate variability to detect mental overload.
Durantin, G; Gagnon, J-F; Tremblay, S; Dehais, F
2014-02-01
Mental workload is a key factor influencing the occurrence of human error, especially during piloting and remotely operated vehicle (ROV) operations, where safety depends on the ability of pilots to act appropriately. In particular, excessively high or low mental workload can lead operators to neglect critical information. The objective of the present study is to investigate the potential of functional near infrared spectroscopy (fNIRS), a non-invasive method of measuring prefrontal cortex activity, in combination with measurements of heart rate variability (HRV), to predict mental workload during a simulated piloting task, with particular regard to task engagement and disengagement. Twelve volunteers performed a computer-based piloting task in which they were asked to follow a dynamic target with their aircraft, a task designed to replicate key cognitive demands associated with real-life ROV operating tasks. In order to cover a wide range of mental workload levels, task difficulty was manipulated in terms of processing load and difficulty of control, two critical sources of workload associated with piloting and remotely operating a vehicle. Results show that both fNIRS and HRV are sensitive to different levels of mental workload; notably, lower prefrontal activation as well as a lower LF/HF ratio at the highest level of difficulty suggest that these measures are suitable for mental overload detection. Moreover, these latter measurements point toward the existence of a quadratic model of mental workload.
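The LF/HF ratio used here is a spectral measure of heart-rate variability: power in the low-frequency band (0.04-0.15 Hz) divided by power in the high-frequency band (0.15-0.40 Hz). A real pipeline would detect R-peaks and resample the RR-interval series; the sketch below instead synthesizes an evenly sampled HRV-like signal, so all signal parameters are assumptions and only the band-power machinery is meaningful.

```python
import numpy as np
from scipy.signal import welch

fs = 4.0  # Hz, a typical resampling rate for RR-interval series
t = np.arange(0, 300, 1 / fs)
# strong low-frequency (0.1 Hz) oscillation, weak high-frequency (0.3 Hz) one
hrv = 1.0 * np.sin(2 * np.pi * 0.1 * t) + 0.2 * np.sin(2 * np.pi * 0.3 * t)

# Welch power spectral density of the HRV signal
f, pxx = welch(hrv, fs=fs, nperseg=256)
df = f[1] - f[0]

def band_power(f, pxx, lo, hi):
    mask = (f >= lo) & (f < hi)
    return pxx[mask].sum() * df

lf = band_power(f, pxx, 0.04, 0.15)   # low-frequency band
hf = band_power(f, pxx, 0.15, 0.40)   # high-frequency band
ratio = lf / hf                       # the LF/HF workload marker
```

In the study's framing, a drop in this ratio under the hardest condition was one signature of overload.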
Maintaining Balance: The Increasing Role of Energy Storage for Renewable Integration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stenclik, Derek; Denholm, Paul; Chalamala, Babu
For nearly a century, global power systems have focused on three key functions: generating, transmitting, and distributing electricity as a real-time commodity. Physics requires that electricity generation always be in real-time balance with load, despite variability in load on time scales ranging from subsecond disturbances to multiyear trends. With the increasing role of variable generation from wind and solar, the retirement of fossil-fuel-based generation, and a changing consumer demand profile, grid operators are using new methods to maintain this balance.
Barbagallo, Simone; Corradi, Luca; de Ville de Goyet, Jean; Iannucci, Marina; Porro, Ivan; Rosso, Nicola; Tanfani, Elena; Testi, Angela
2015-05-17
The Operating Room (OR) is a key resource of all major hospitals, but it also accounts for up to 40% of resource costs. Improving cost effectiveness while maintaining quality of care is a universal objective. These goals imply optimizing the planning and scheduling of the activities involved, which is highly challenging due to the inherently variable and unpredictable nature of surgery. Business Process Modeling Notation (BPMN 2.0) was used to represent the "OR process" (defined as the sequence of all elementary steps from "patient ready for surgery" to "patient operated upon") as a general pathway ("path"). The path was standardized as much as possible while keeping all of the key elements needed to address the other steps of planning and the wide, inherent variability arising from patient specificity. The path was used to schedule OR activity, room-by-room and day-by-day, feeding the process from a waiting-list database and using a mathematical optimization model with the objective of producing an optimized plan. The OR process was defined with special attention paid to flows, timing, and resource involvement. Standardization addressed the dynamics of each operation and defined an expected operating time for each type of operation. The optimization model was implemented and tested on real clinical data. Comparison with the real data shows that the optimization model allows about 30% more patients to be scheduled than in actual practice, and better exploits OR efficiency, increasing the average operating-room utilization rate by up to 20%. Optimizing OR activity planning is essential for managing the hospital's waiting list. Optimal planning is facilitated by defining the operation as a standard pathway in which all variables are taken into account. By allowing precise scheduling, it feeds the planning process and, further upstream, the management of the waiting list in an interactive, bi-directional dynamic process.
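The data flow described above, a waiting list of standardized expected operating times packed into OR-session capacity, can be sketched in a few lines. The real model is an integer program; the greedy, knapsack-style heuristic and all durations below are illustrative assumptions only.

```python
# Toy OR-day scheduler: pack expected operating times (minutes) from a
# waiting list into one session's capacity. All numbers are assumed.

waiting_list = [
    ("p1", 120), ("p2", 45), ("p3", 90), ("p4", 60),
    ("p5", 30), ("p6", 150), ("p7", 75),
]  # (patient id, standardized expected operating time)

or_day_capacity = 8 * 60 - 60  # 8 h session minus an assumed 60 min of turnover

def schedule(waiting_list, capacity):
    """Shortest-case-first greedy: maximizes the NUMBER of patients served."""
    scheduled, used = [], 0
    for pid, dur in sorted(waiting_list, key=lambda x: x[1]):
        if used + dur <= capacity:
            scheduled.append(pid)
            used += dur
    return scheduled, used

plan, minutes = schedule(waiting_list, or_day_capacity)
utilization = minutes / or_day_capacity
```

Shortest-first maximizes throughput but ignores clinical priority and uncertainty in durations, which is precisely what the paper's full optimization model adds.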
Multi-rate DPSK optical transceivers for free-space applications
NASA Astrophysics Data System (ADS)
Caplan, D. O.; Carney, J. J.; Fitzgerald, J. J.; Gaschits, I.; Kaminsky, R.; Lund, G.; Hamilton, S. A.; Magliocco, R. J.; Murphy, R. J.; Rao, H. G.; Spellmeyer, N. W.; Wang, J. P.
2014-03-01
We describe a flexible high-sensitivity laser communication transceiver design that can significantly benefit the performance and cost of NASA's satellite-based Laser Communications Relay Demonstration. Optical communications using differential phase shift keying, widely deployed for use in long-haul fiber-optic networks, is well known for its superior sensitivity and link performance over on-off keying, while maintaining a relatively straightforward design. However, unlike fiber-optic links, free-space applications often require operation over a wide dynamic range of power due to variations in link distance and channel conditions, which can include rapid kHz-class fading when operating through the turbulent atmosphere. Here we discuss the implementation of a robust, near-quantum-limited multi-rate DPSK transceiver, with co-located transmitter and receiver subsystems that can operate efficiently over the highly variable free-space channel. Key performance features will be presented on the master oscillator power amplifier (MOPA) based TX, including a wavelength-stabilized master laser, high-extinction-ratio burst-mode modulator, and 0.5 W single-polarization power amplifier, as well as the low-noise optically preamplified DPSK receiver and built-in test capabilities.
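DPSK's defining property is that information rides on the phase change between successive symbols, so the receiver needs only a one-symbol delay interferometer rather than an absolute phase reference. A noiseless complex-baseband sketch (an illustration, not the transceiver's actual signal processing):

```python
import numpy as np

def dpsk_encode(bits):
    """Differential encoding: bit 1 flips the phase by pi, bit 0 keeps it."""
    phase = 0.0
    symbols = []
    for b in bits:
        phase += np.pi * b
        symbols.append(np.exp(1j * phase))
    return np.array(symbols)

def dpsk_decode(symbols):
    """Delay-line demodulation: interfere each symbol with the previous one.
    A negative real part of s[k] * conj(s[k-1]) means the phase flipped."""
    prev = np.concatenate(([1.0 + 0j], symbols[:-1]))
    return (np.real(symbols * np.conj(prev)) < 0).astype(int)

bits = np.array([1, 0, 1, 1, 0, 0, 1])
decoded = dpsk_decode(dpsk_encode(bits))
```

Because only phase differences matter, slow phase drift of the free-space channel cancels between adjacent symbols, which is part of why DPSK tolerates the variable link well.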
Continuous operation of four-state continuous-variable quantum key distribution system
NASA Astrophysics Data System (ADS)
Matsubara, Takuto; Ono, Motoharu; Oguri, Yusuke; Ichikawa, Tsubasa; Hirano, Takuya; Kasai, Kenta; Matsumoto, Ryutaroh; Tsurumaru, Toyohiro
2016-10-01
We report on the development of a continuous-variable quantum key distribution (CV-QKD) system based on discrete quadrature amplitude modulation (QAM) and homodyne detection of coherent states of light. We use a pulsed light source with a wavelength of 1550 nm and a repetition rate of 10 MHz. The CV-QKD system can continuously generate a secret key that is secure against the entangling-cloner attack. The key generation rate is 50 kbps when the quantum channel is a 10 km optical fiber. The CV-QKD system we have developed utilizes the four-state, post-selection protocol [T. Hirano, et al., Phys. Rev. A 68, 042331 (2003)]: Alice randomly sends one of four states {|±α⟩, |±iα⟩}, and Bob randomly performs x- or p-quadrature measurement by homodyne detection. A commercially available balanced receiver is used to realize shot-noise-limited pulsed homodyne detection. GPU cards are used to accelerate the software-based post-processing. We use a non-binary LDPC code for error correction (reverse reconciliation) and Toeplitz matrix multiplication for privacy amplification.
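The four-state, post-selection idea can be sketched as a Monte-Carlo simulation. The conventions below are assumptions for illustration (matched-basis homodyne outcomes modelled as Gaussians with mean ±2α and unit shot-noise variance; an arbitrary post-selection threshold), and no channel loss, excess noise, or eavesdropper is modelled, so this is not a security analysis.

```python
import numpy as np

rng = np.random.default_rng(7)
n_pulses = 20000
alpha = 1.5
threshold = 1.0          # post-selection: discard low-confidence outcomes

bits = rng.integers(0, 2, n_pulses)       # Alice's bit: 0 -> '+', 1 -> '-'
axes = rng.integers(0, 2, n_pulses)       # 0 -> amplitude on x, 1 -> on p
bob_basis = rng.integers(0, 2, n_pulses)  # Bob's random x/p homodyne choice

# matched basis sees mean +/-2*alpha; mismatched basis sees mean 0
mean = np.where(axes == bob_basis, (1 - 2 * bits) * 2 * alpha, 0.0)
outcomes = mean + rng.normal(0.0, 1.0, n_pulses)   # unit shot-noise variance

# sifting: keep matched-basis results whose magnitude clears the threshold
keep = (axes == bob_basis) & (np.abs(outcomes) > threshold)
bob_bits = (outcomes[keep] < 0).astype(int)
qber = float(np.mean(bob_bits != bits[keep]))
```

Post-selection discards the ambiguous small-magnitude outcomes, trading raw-key length for a lower bit error rate before reconciliation.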
DOT National Transportation Integrated Search
2001-01-14
The FAA's new generation Runway Visual Range (RVR) system was first placed into service in 1994 at several key airports in the United States. During the last three years, the Volpe National Transportation Systems Center has monitored RVR data...
2008-01-01
Cambell 2007). Also, the government force is usually better equipped and trained than the insurgents. The key advantage of the insurgents is their...22-27. O’Hanlon, M. E., J. H. Cambell . 2007. Iraq index, tracking variables of reconstruction & security in post- Saddam Iraq, The Brookings Institute
The Promise of Virtual Teams: Identifying Key Factors in Effectiveness and Failure
ERIC Educational Resources Information Center
Horwitz, Frank M.; Bravington, Desmond; Silvis, Ulrik
2006-01-01
Purpose: The aim of the investigation is to identify enabling and disenabling factors in the development and operation of virtual teams; to evaluate the importance of factors such as team development, cross-cultural variables, leadership, communication and social cohesion as contributors to virtual team effectiveness. Design/methodology/approach:…
Four-State Continuous-Variable Quantum Key Distribution with Photon Subtraction
NASA Astrophysics Data System (ADS)
Li, Fei; Wang, Yijun; Liao, Qin; Guo, Ying
2018-06-01
Four-state continuous-variable quantum key distribution (CVQKD) is a discretely modulated CVQKD protocol that generates four nonorthogonal coherent states and exploits the sign of the measured quadrature of each state to encode information, rather than using the quadrature x̂ or p̂ value itself. It has been proven that four-state CVQKD is more suitable than Gaussian-modulated CVQKD in terms of transmission distance. In this paper, we propose an improved four-state CVQKD using a non-Gaussian operation, photon subtraction. A suitable photon-subtraction operation can be exploited to improve the maximal transmission of CVQKD in point-to-point quantum communication, since it provides a method to enhance the performance of entanglement-based (EB) CVQKD. Photon subtraction not only lengthens the maximal transmission distance by increasing the signal-to-noise ratio but also can be easily implemented with existing technologies. Security analysis shows that the proposed scheme can lengthen the maximum transmission distance. Furthermore, by taking the finite-size effect into account we obtain a tighter bound on the secure distance, which is more practical than that obtained in the asymptotic limit.
NASA Astrophysics Data System (ADS)
Liao, Qin; Guo, Ying; Huang, Duan; Huang, Peng; Zeng, Guihua
2018-02-01
We propose a long-distance continuous-variable quantum key distribution (CVQKD) with a four-state protocol using non-Gaussian state-discrimination detection. A photon-subtraction operation deployed at the transmitter splits off the signal required for the non-Gaussian operation that lengthens the maximum transmission distance of the CVQKD. At the receiver, an improved state-discrimination detector, which can be viewed as an optimized quantum measurement that discriminates nonorthogonal coherent states beyond the standard quantum limit, is applied to co-determine the measurement result together with a conventional coherent detector. By exploiting the multiplexing technique, the resulting signals can be transmitted simultaneously through an untrusted quantum channel and subsequently sent to the state-discrimination detector and coherent detector, respectively. Security analysis shows that the proposed scheme can lengthen the maximum transmission distance up to hundreds of kilometers. Furthermore, by taking the finite-size effect and composable security into account we obtain the tightest bound on the secure distance, which is more practical than that obtained in the asymptotic limit.
Operations Optimization of Nuclear Hybrid Energy Systems
Chen, Jun; Garcia, Humberto E.; Kim, Jong Suk; ...
2016-08-01
We propose nuclear hybrid energy systems (NHES) as an effective element for incorporating high penetrations of clean energy. Our paper focuses on the operations optimization of two specific NHES configurations to address the variability arising from various markets and renewable generation. Both analytical and numerical approaches are used to obtain the optimization solutions. Key economic figures of merit are evaluated under optimized and constant operations to demonstrate the benefit of the optimization, which also suggests the economic viability of the considered NHES under the proposed operations optimizer. Finally, a sensitivity analysis on commodity price is conducted for a better understanding of the considered NHES.
Extended analysis of the Trojan-horse attack in quantum key distribution
NASA Astrophysics Data System (ADS)
Vinay, Scott E.; Kok, Pieter
2018-04-01
The discrete-variable quantum key distribution protocols based on the 1984 protocol of Bennett and Brassard (BB84) are known to be secure against an eavesdropper, Eve, intercepting the flying qubits and performing any quantum operation on them. However, these protocols may still be vulnerable to side-channel attacks. We investigate the Trojan-horse side-channel attack where Eve sends her own state into Alice's apparatus and measures the reflected state to estimate the key. We prove that the separable coherent state is optimal for Eve among the class of multimode Gaussian attack states, even in the presence of thermal noise. We then provide a bound on the secret key rate in the case where Eve may use any separable state.
Wittmann, Christoffer; Andersen, Ulrik L; Takeoka, Masahiro; Sych, Denis; Leuchs, Gerd
2010-03-12
We experimentally demonstrate a new measurement scheme for the discrimination of two coherent states. The measurement scheme is based on a displacement operation followed by a photon-number-resolving detector, and we show that it outperforms the standard homodyne detector which we, in addition, prove to be optimal within all Gaussian operations including conditional dynamics. We also show that the non-Gaussian detector is superior to the homodyne detector in a continuous variable quantum key distribution scheme.
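The gap between the receivers can be seen from standard textbook error probabilities for discriminating |α⟩ from |−α⟩ (these closed forms are well-known results, not taken from the experiment; the convention used is |⟨α|−α⟩|² = exp(−4α²), ideal detectors assumed).

```python
import math

def p_err_homodyne(a):
    """Homodyne: Gaussian quadrature outcomes, optimal threshold at zero."""
    return 0.5 * math.erfc(math.sqrt(2.0) * a)

def p_err_kennedy(a):
    """Displacement receiver: shift |-a> to vacuum; an error occurs when the
    displaced |2a> state yields no click, with probability exp(-4 a^2)."""
    return 0.5 * math.exp(-4.0 * a * a)

def p_err_helstrom(a):
    """Quantum-optimal (Helstrom) bound for two pure states, equal priors."""
    return 0.5 * (1.0 - math.sqrt(1.0 - math.exp(-4.0 * a * a)))

# At alpha = 1 the displacement receiver beats homodyne; at very small
# amplitudes (e.g. alpha = 0.2) homodyne is actually the better of the two.
```

The crossover at small amplitude is why practical schemes (like the one above) add photon-number resolution or conditional dynamics to stay below the homodyne curve over the whole range.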
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frew, Bethany A; Cole, Wesley J; Sun, Yinong
Capacity expansion models (CEMs) are widely used to evaluate the least-cost portfolio of electricity generators, transmission, and storage needed to reliably serve demand over the evolution of many years or decades. Various CEM formulations are used to evaluate systems ranging in scale from states or utility service territories to national or multi-national systems. CEMs can be computationally complex, and to achieve acceptable solve times, key parameters are often estimated using simplified methods. In this paper, we focus on two of these key parameters associated with the integration of variable generation (VG) resources: capacity value and curtailment. We first discuss common modeling simplifications used in CEMs to estimate capacity value and curtailment, many of which are based on a representative subset of hours that can miss important tail events or which require assumptions about the load and resource distributions that may not match actual distributions. We then present an alternate approach that captures key elements of chronological operation over all hours of the year without the computationally intensive economic dispatch optimization typically employed within more detailed operational models. The updated methodology characterizes (1) the contribution of VG to system capacity during high load and net-load hours, (2) the curtailment level of VG, and (3) the potential reductions in curtailment enabled through deployment of storage and more flexible operation of select thermal generators. We apply this alternate methodology to an existing CEM, the Regional Energy Deployment System (ReEDS). Results demonstrate that this alternate approach provides more accurate estimates of capacity value and curtailment by explicitly capturing system interactions across all hours of the year.
This approach could be applied more broadly to CEMs at many different scales where hourly resource and load data are available, greatly improving the representation of challenges associated with integration of variable generation resources.
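The two parameters in question can be computed directly from chronological hourly data. The sketch below does this on synthetic series: curtailment as VG output that cannot be absorbed above a must-run floor, and capacity value as VG's average delivered output during the top net-load hours. All series, the must-run level, and the top-100-hours convention are illustrative assumptions, not the ReEDS formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
hours = 8760
# synthetic hourly load (MW) with a diurnal cycle, and uniform-random VG output
load = 70 + 20 * np.sin(2 * np.pi * np.arange(hours) / 24) + rng.normal(0, 3, hours)
vg = 40 * rng.random(hours)        # variable generation available, MW
must_run = 30.0                    # assumed inflexible thermal floor, MW

# hourly curtailment: VG in excess of what load minus must-run can absorb
curtailed = np.maximum(vg - (load - must_run), 0.0)
curtail_rate = curtailed.sum() / vg.sum()

# capacity value: mean DELIVERED VG output in the top-100 net-load hours
delivered = vg - curtailed
net_load = load - delivered
top = np.argsort(net_load)[-100:]
capacity_value_mw = delivered[top].mean()
```

Because every hour is evaluated, tail events (low-load, high-VG hours that drive curtailment; high-net-load hours that drive capacity value) are captured rather than approximated from a representative subset.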
Determining the effect of key climate drivers on global hydropower production
NASA Astrophysics Data System (ADS)
Galelli, S.; Ng, J. Y.; Lee, D.; Block, P. J.
2017-12-01
Accounting for about 17% of total global electrical power production, hydropower is arguably the world's main renewable energy source and a key asset to meet Paris climate agreements. A key component of hydropower production is water availability, which depends on both precipitation and multiple drivers of climate variability acting at different spatial and temporal scales. To understand how these drivers impact global hydropower production, we study the relation between four patterns of ocean-atmosphere climate variability (i.e., El Niño Southern Oscillation, Pacific Decadal Oscillation, North Atlantic Oscillation, and Atlantic Multidecadal Oscillation) and monthly time series of electrical power production for over 1,500 hydropower reservoirs, obtained via simulation with a high-fidelity dam model forced with 20th-century climate conditions. Notably, significant relationships between electrical power production and climate variability are found in many climate-sensitive regions globally, including North and South America, East Asia, West Africa, and Europe. Coupled interactions from multiple, simultaneous climate drivers are also evaluated. Finally, we highlight the importance of using these climate drivers as an additional source of information within reservoir operating rules where skillful predictability of inflow exists.
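The basic statistical question, whether a slowly oscillating climate index is linearly related to a monthly production series, reduces to a correlation test. Both series below are synthetic, with an assumed positive coupling, so only the machinery (not the numbers) is meaningful.

```python
import numpy as np

rng = np.random.default_rng(42)
months = 12 * 30   # a 30-year monthly record

# ENSO-like index: slow oscillation (assumed ~4-year period) plus noise
index = np.sin(2 * np.pi * np.arange(months) / 48) + 0.3 * rng.normal(size=months)

# hydropower production responding to the index with noise (assumed coupling)
production = 100 + 15 * index + 5 * rng.normal(size=months)

# Pearson correlation and its t-statistic for a quick significance check
r = np.corrcoef(index, production)[0, 1]
t_stat = r * np.sqrt((months - 2) / (1 - r**2))
```

In practice the study's relationships are weaker and lag-dependent, and serial correlation in monthly data inflates the naive t-statistic, so effective-sample-size corrections would be needed on real records.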
Key performance indicators for electric mining shovels and oil sands diggability
NASA Astrophysics Data System (ADS)
Patnayak, Sibabrata
A shovel performance monitoring study was undertaken in two oil sands mines operated by Syncrude Canada Ltd. using performance data obtained from P&H 4100 TS and BOSS electric mining shovels. One year of shovel performance data along with geological, geotechnical, and climatic data were analyzed. The approach adopted was to use current and voltage data collected from hoist and crowd motors and to calculate the energy and/or power associated with digging. Analysis of performance data along with digital video records of operating shovels indicated that hoist and crowd motor voltages and currents can be used to identify the beginning and the end of individual dig cycles. A dig cycle identification algorithm was developed. Performance indicators such as dig cycle time, hoist motor energy and power, and crowd motor energy and power were determined. The shovel performance indicators provide important insight into how geology, equipment and operators affect the digging efficiency. The hoist motor power is a useful key performance indicator for assessing diggability. Hoist motor energy consumption per tonne of material excavated and the number of dig cycles required for loading a truck can be useful key performance indicators for assessing operator performance and productivity. Analysis of performance data along with operator team schedules showed that the performance of a shovel can be significantly influenced by the operator's digging technique while digging uniform material. Up to 25% variability in hoist motor power consumption and 50% variability in productivity was noted between different operators. Shovel type and dipper teeth configuration can also influence the power draw on electrical motors during digging. There is no common agreement on the influence of bitumen content on oil sands diggability. By comparing the hoist motor power consumption, it was found that the rich ore was more difficult to dig than the lean ore.
Similarly, estuarine ore was more difficult to dig than marine ore. Winter weather was expected to have a significant influence on oil sands diggability but was found to have only a minor and localized influence that depends upon the ore type, temperature conditions and the duration of bench exposure.
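The dig-cycle identification idea can be illustrated simply: hoist-motor power rises during a dig and falls between digs, so cycle starts can be counted as upward crossings of a power threshold. The synthetic power trace, threshold, and cycle timing below are assumptions; the study's actual algorithm also used crowd-motor voltages and currents.

```python
import numpy as np

t = np.arange(0, 300, 0.5)                    # 5 minutes sampled at 2 Hz
power = np.full_like(t, 50.0)                 # assumed baseline hoist power, kW
for start in (20, 110, 200):                  # three dig cycles of ~40 s each
    mask = (t >= start) & (t < start + 40)
    power[mask] = 400.0                       # assumed in-dig power level, kW
power += np.random.default_rng(1).normal(0, 10, t.size)  # sensor noise

threshold = 200.0                             # assumed detection threshold, kW
above = power > threshold
# a dig cycle starts at each upward threshold crossing
starts = np.flatnonzero(~above[:-1] & above[1:]) + 1
n_cycles = len(starts)
mean_cycle_seconds = above.sum() * 0.5 / max(n_cycles, 1)
```

From the detected cycles, per-cycle energy (integrated power) and cycles-per-truck then follow directly, which are exactly the operator- and diggability-oriented indicators the study derived.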
Practical somewhat-secure quantum somewhat-homomorphic encryption with coherent states
NASA Astrophysics Data System (ADS)
Tan, Si-Hui; Ouyang, Yingkai; Rohde, Peter P.
2018-04-01
We present a scheme for implementing homomorphic encryption on coherent states encoded using phase-shift keys. The encryption operations require only rotations in phase space, which commute with computations in the code space performed via passive linear optics, and with generalized nonlinear phase operations that are polynomials of the photon-number operator in the code space. This encoding scheme can thus be applied to any computation with coherent-state inputs, and the computation proceeds via a combination of passive linear optics and generalized nonlinear phase operations. An example of such a computation is matrix multiplication, whereby a vector representing coherent-state amplitudes is multiplied by a matrix representing a linear optics network, yielding a new vector of coherent-state amplitudes. By finding an orthogonal partitioning of the support of our encoded states, we quantify the security of our scheme via the indistinguishability of the encrypted code words. While we focus on coherent-state encodings, we expect that this phase-key encoding technique could apply to any continuous-variable computation scheme where the phase-shift operator commutes with the computation.
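The commutation property the scheme relies on can be demonstrated on classical complex amplitudes: a global phase rotation (the "phase key") commutes with any passive linear-optics network, so computing on encrypted amplitudes and decrypting afterwards recovers the plaintext result. The particular network below (a 50:50 beam splitter) and the key value are arbitrary illustrative choices; this sketch of course says nothing about the quantum security analysis.

```python
import numpy as np

theta = 0.73                                  # secret phase key (arbitrary)
encrypt = lambda v: np.exp(1j * theta) * v    # phase-space rotation
decrypt = lambda v: np.exp(-1j * theta) * v   # inverse rotation

# a 50:50 beam splitter as the passive linear-optics "computation"
U = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

alphas = np.array([1.0 + 0.5j, -0.3 + 2.0j])  # coherent-state amplitudes
direct = U @ alphas                           # compute on plaintext
via_cipher = decrypt(U @ encrypt(alphas))     # compute on ciphertext, decrypt
```

Because U is linear, U(e^{iθ}v) = e^{iθ}(Uv), so the server never needs θ; the same algebra is what lets matrix multiplication on coherent-state amplitudes proceed under encryption.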
NASA Astrophysics Data System (ADS)
Guo, Ying; Xie, Cailang; Liao, Qin; Zhao, Wei; Zeng, Guihua; Huang, Duan
2017-08-01
The survival of Gaussian quantum states in a turbulent atmospheric channel is of crucial importance in free-space continuous-variable (CV) quantum key distribution (QKD), in which the transmission coefficient will fluctuate in time, thus resulting in non-Gaussian quantum states. Different from quantum hacking of the imperfections of practical devices, here we propose a different type of attack by exploiting the security loopholes that occur in a real lossy channel. Under a turbulent atmospheric environment, the Gaussian states are inevitably afflicted by decoherence, which would cause a degradation of the transmitted entanglement. Therefore, an eavesdropper can perform an intercept-resend attack by applying an entanglement-distillation operation on the transmitted non-Gaussian mixed states, which allows the eavesdropper to bias the estimation of the parameters and renders the final keys shared between the legitimate parties insecure. Our proposal highlights the practical CV QKD vulnerabilities with free-space quantum channels, including the satellite-to-earth links, ground-to-ground links, and a link from moving objects to ground stations.
NASA Astrophysics Data System (ADS)
Ruiz-Cárcel, C.; Jaramillo, V. H.; Mba, D.; Ottewill, J. R.; Cao, Y.
2016-01-01
The detection and diagnosis of faults in industrial processes is a very active field of research due to the reduction in maintenance costs achieved by the implementation of process monitoring algorithms such as Principal Component Analysis, Partial Least Squares, or, more recently, Canonical Variate Analysis (CVA). Typically, the condition of rotating machinery is monitored separately using vibration analysis or other specific techniques. Conventional vibration-based condition monitoring techniques are based on the tracking of key features observed in the measured signal, and typically require steady-state loading conditions to ensure consistency between measurements. In this paper, a technique based on merging process and vibration data is proposed with the objective of improving the detection of mechanical faults in industrial systems working under variable operating conditions. The capabilities of CVA for detection and diagnosis of faults were tested using experimental data acquired from a compressor test rig where different process faults were introduced. Results suggest that the combination of process and vibration data can effectively improve the detectability of mechanical faults in systems working under variable operating conditions.
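The merged-data monitoring idea can be sketched with a simpler member of the same family: stack process and vibration features into one matrix, fit a PCA model on healthy data, and flag samples whose Hotelling T² exceeds a data-driven limit. CVA adds time-lagged (dynamic) structure on top of this; the data, the number of retained components, and the injected fault below are all synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 500, 6                                  # samples x merged features
# correlated "healthy" data standing in for process + vibration features
healthy = rng.normal(size=(n, k)) @ rng.normal(size=(k, k)) * 0.5

mu, sd = healthy.mean(0), healthy.std(0)
z = (healthy - mu) / sd                        # standardize
_, s, vt = np.linalg.svd(z, full_matrices=False)
r = 3                                          # retained principal components
scores_var = (s[:r] ** 2) / (n - 1)            # variance of each retained score

def t2(x):
    """Hotelling T^2 of one sample in the retained PC subspace."""
    scores = ((x - mu) / sd) @ vt[:r].T
    return float(np.sum(scores**2 / scores_var))

# empirical 99th-percentile control limit from the healthy data
limit = np.quantile([t2(x) for x in healthy], 0.99)

# inject a large shift along the first principal direction and test detection
fault = healthy[0] + 20 * sd * vt[0]
```

A full CVA monitor would build the model from past/future Hankel matrices of the merged signals, gaining sensitivity to dynamic faults that static PCA misses.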
Outsourcing decision factors in publicly owned electric utilities
NASA Astrophysics Data System (ADS)
Gonzales, James Edward
Purpose. The outsourcing of services in publicly owned electric utilities has generated some controversy. The purpose of this study was to explore this controversy by investigating the relationships between eight key independent variables and a dependent variable, "manager perceptions of overall value of outsourced services." The intent was to provide data so that utilities could make better decisions regarding outsourcing efforts. Theoretical framework. Decision theory was used as the framework for analyzing variables and alternatives used to support the outsourcing decision-making process. By reviewing these eight variables and the projected outputs and outcomes, a more predictive and potentially successful outsourcing effort can be realized. Methodology. A survey was distributed to a sample of 323 publicly owned electric utilities randomly selected from a population of 2,020 in the United States. The data were analyzed using statistical techniques including the chi-square test, lambda, and Spearman's rank correlation coefficient, together with hypothesis tests on the rank correlations, to test for relationships among the variables. Findings. Relationships among the eight key variables and perceptions of the overall value of outsourced services were generally weak. The notable exception was with the driving force (reason) for outsourcing decisions, where the relationship was strongly positive. Conclusions and recommendations. The data in support of the research questions suggest that seven of the eight key variables may be weakly predictive of perceptions of the overall value of outsourced services. However, the primary driving force for outsourcing was strongly predictive. The data also suggest that many of the sampled utilities did not formally address these variables and alternatives, and therefore may not be achieving maximal results. Further studies utilizing customer perceptions rather than those of outsourcing service managers are recommended.
In addition, it is recommended that a smaller sample population be analyzed after identifying one or more champions to ensure cooperation and legitimacy of data. Finally, this study supports the position that a manager's ability to identify and understand the relationships between these eight key variables and desired outcomes and outputs may contribute to more successful outsourcing operations.
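The survey statistics named above (the chi-square test and Spearman's rank correlation) can be sketched with SciPy on hypothetical data; the categories, sample size, and response scale below are invented for illustration and do not come from the study.

```python
import numpy as np
from scipy.stats import chi2_contingency, spearmanr

# Hypothetical survey data: outsourcing driving force (3 categories) vs.
# perceived overall value of outsourced services (ordinal, 1-5)
rng = np.random.default_rng(0)
driving_force = rng.integers(0, 3, size=120)
perceived_value = np.clip(driving_force + rng.integers(1, 4, size=120), 1, 5)

# Chi-square test of independence on the cross-tabulation
table = np.zeros((3, 5), dtype=int)
for d, v in zip(driving_force, perceived_value):
    table[d, v - 1] += 1
table = table[:, table.sum(axis=0) > 0]   # drop empty response columns
chi2, p_chi, dof, _ = chi2_contingency(table)

# Spearman rank correlation between the ordinal predictor and perceived value
rho, p_rho = spearmanr(driving_force, perceived_value)
print(f"chi2={chi2:.2f} (p={p_chi:.3g}), Spearman rho={rho:.2f} (p={p_rho:.3g})")
```

A small p-value in either test would indicate a relationship between the predictor and perceived value, which is the kind of association the study probed for each of its eight variables.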
Key variables analysis of a novel continuous biodrying process for drying mixed sludge.
Navaee-Ardeh, Shahram; Bertrand, François; Stuart, Paul R
2010-05-01
A novel continuous biodrying process has been developed whose goal is to increase the dry solids content of the sludge to economic levels, rendering it suitable for safe and economic combustion in a biomass boiler. The sludge drying rates are enhanced by the metabolic bioheat produced in the matrix of mixed sludge. The goal of this study was to systematically analyze the continuous biodrying reactor. By performing a variable analysis, it was found that the outlet relative humidity profile was the key variable in the biodrying reactor. The influence of different outlet relative humidity profiles was then evaluated using a biodrying efficiency index. It was found that by maintaining the air outlet relative humidity profile at 85/85/96/96% in the four compartments of the reactor, the highest biodrying efficiency index can be achieved, while an economic dry solids level (>45% w/w) is guaranteed. Crown Copyright 2009. Published by Elsevier Ltd. All rights reserved.
Zador, Zsolt; Sperrin, Matthew; King, Andrew T
2016-01-01
Traumatic brain injury remains a global health problem. Understanding the relative importance of outcome predictors helps optimize our treatment strategies by informing assessment protocols, clinical decisions and trial designs. In this study we establish an importance ranking for outcome predictors based on receiver operating indices to identify key predictors of outcome and create simple predictive models. We then explore the associations between key outcome predictors using Bayesian networks to gain further insight into predictor importance. We analyzed the corticosteroid randomization after significant head injury (CRASH) trial database of 10008 patients and included patients for whom demographics, injury characteristics, computed tomography (CT) findings and Glasgow Coma Scale (GCS) were recorded (a total of 13 predictors that would be available to clinicians within a few hours of injury; 6945 patients). Predictions of clinical outcome (death or severe disability at 6 months) were performed using logistic regression models with 5-fold cross-validation. Predictive performance was measured using the standardized partial area (pAUC) under the receiver operating curve (ROC), and we used the DeLong test for comparisons. Variable importance ranking was based on pAUC targeted at specificity (pAUCSP) and sensitivity (pAUCSE) intervals of 90-100%. Probabilistic associations were depicted using Bayesian networks. Complete AUC analysis showed very good predictive power (AUC = 0.8237, 95% CI: 0.8138-0.8336) for the complete model. Specificity-focused importance ranking highlighted age, pupillary and motor responses, obliteration of basal cisterns/3rd ventricle, and midline shift. Interestingly, when targeting model sensitivity, the highest-ranking variables were age, severe extracranial injury, verbal response, hematoma on CT and motor response.
Simplified models, which included only these key predictors, had similar performance (pAUCSP = 0.6523, 95% CI: 0.6402-0.6641 and pAUCSE = 0.6332, 95% CI: 0.62-0.6477) compared to the complete models (pAUCSP = 0.6664, 95% CI: 0.6543-0.679, pAUCSE = 0.6436, 95% CI: 0.6289-0.6585; DeLong p values 0.1165 and 0.3448, respectively). Bayesian networks showed that the predictors that did not feature in the simplified models were associated with those that did. We demonstrate that importance-based variable selection allows simplified predictive models to be created while maintaining prediction accuracy. Variable selection targeting specificity confirmed key components of clinical assessment in TBI, whereas sensitivity-based ranking suggested extracranial injury as one of the important predictors. These results help refine our approach to head injury assessment, decision-making and outcome prediction targeted at model sensitivity and specificity. Bayesian networks proved to be a comprehensive tool for depicting probabilistic associations for key predictors, giving insight into why the simplified model maintained accuracy.
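The cross-validated pAUC evaluation described above can be sketched with scikit-learn on synthetic data; the feature matrix and all numbers are stand-ins, not the CRASH data. `roc_auc_score(..., max_fpr=0.1)` gives the standardized partial AUC over the 90-100% specificity range (FPR ≤ 0.1); restricting to the high-sensitivity range can be done by swapping class labels and negating the scores, which reflects the ROC curve.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

# Synthetic stand-in for the predictor matrix (age, GCS components, CT findings, ...)
X, y = make_classification(n_samples=2000, n_features=13, n_informative=6,
                           random_state=1)

# Out-of-fold probabilities from 5-fold cross-validated logistic regression
proba = cross_val_predict(LogisticRegression(max_iter=1000), X, y,
                          cv=5, method="predict_proba")[:, 1]

auc = roc_auc_score(y, proba)
# Standardized pAUC over the 90-100% specificity range (FPR <= 0.1)
pauc_sp = roc_auc_score(y, proba, max_fpr=0.1)
# pAUC over the 90-100% sensitivity range: swap labels and negate scores
pauc_se = roc_auc_score(1 - y, -proba, max_fpr=0.1)
print(f"AUC={auc:.3f}, pAUC(spec)={pauc_sp:.3f}, pAUC(sens)={pauc_se:.3f}")
```

Ranking variables by the drop in pAUC when each is removed (or permuted) would then reproduce the kind of specificity- and sensitivity-targeted importance ranking the study reports.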
USDA-ARS?s Scientific Manuscript database
Snow-covered area (SCA) is a key variable in the Snowmelt-Runoff Model (SRM). Landsat Thematic Mapper (TM) or Operational Land Imager (OLI) provide remotely sensed data at an appropriate spatial resolution for mapping SCA in small headwater basins, but the temporal resolution of the data is low and ...
ERIC Educational Resources Information Center
Jagannathan, Christine
2017-01-01
Purpose: The purpose of this study was to explore how 2 important stakeholder groups of Southern California business education, regional faculty and employers of accounting graduates, defined and assessed critical thinking skills. Methods: A literature review identified 2 key variables--conceptualization and operational assessment of critical…
Maintaining Balance: The Increasing Role of Energy Storage for Renewable Integration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stenclik, Derek; Denholm, Paul; Chalamala, Babu
2017-10-17
For nearly a century, global power systems have focused on three key functions: to generate, transmit, and distribute electricity as a real-time commodity. Physics requires that electricity generation always be in real-time balance with load, despite variability in load on timescales ranging from sub-second disturbances to multi-year trends. With the increasing role of variable generation from wind and solar, retirements of fossil fuel-based generation, and a changing consumer demand profile, grid operators are using new methods to maintain this balance.
Overview of Variable-Speed Power-Turbine Research
NASA Technical Reports Server (NTRS)
Welch, Gerard E.
2011-01-01
The vertical take-off and landing (VTOL) and high-speed cruise capability of the NASA Large Civil Tilt-Rotor (LCTR) notional vehicle is envisaged to enable increased throughput in the national airspace. A key challenge of the LCTR is the requirement to vary the main rotor speeds from 100% at take-off to near 50% at cruise as required to minimize mission fuel burn. The variable-speed power-turbine (VSPT), driving a fixed gear-ratio transmission, provides one approach for effecting this wide speed variation. The key aerodynamic and rotordynamic challenges of the VSPT were described in the FAP Conference presentation. The challenges include maintaining high turbine efficiency at high work factor, wide (60 deg.) incidence variation in all blade rows due to the speed variation, and operation at low Reynolds numbers (with transitional flow). The PT shaft of the VSPT must be designed for safe operation in the wide speed range required, and therefore poses challenges associated with rotordynamics. The technical challenges drive research activities underway at NASA. An overview of the NASA SRW VSPT research activities was provided. These activities included conceptual and preliminary aero and mechanical (rotordynamics) design of the VSPT for the LCTR application, experimental and computational research supporting the development of incidence tolerant blading, and steps toward component-level testing of a variable-speed power-turbine of relevance to the LCTR application.
Kuo, Yi-Ming; Wu, Jiunn-Tzong
2016-12-01
This study was conducted to identify the key factors related to the spatiotemporal variations in phytoplankton abundance in a subtropical reservoir from 2006 to 2010 and to assist in developing strategies for water quality management. Dynamic factor analysis (DFA), a dimension-reduction technique, was used to identify interactions between explanatory variables (i.e., environmental variables) and the abundance (biovolume) of predominant phytoplankton classes. The optimal DFA model significantly described the dynamic changes in abundances of predominant phytoplankton groups (including dinoflagellates, diatoms, and green algae) at five monitoring sites. Water temperature, electrical conductivity, water level, nutrients (total phosphorus, NO3-N, and NH3-N), macro-zooplankton, and zooplankton were the key factors affecting the dynamics of the aforementioned phytoplankton. Therefore, transformations of nutrients and reactions between water quality variables, together with the aforementioned processes as altered by hydrological conditions, may also control the abundance dynamics of phytoplankton, which may represent common trends in the DFA model. The meandering shape of Shihmen Reservoir and its surrounding rivers caused a complex interplay between hydrological conditions and abiotic and biotic variables, resulting in phytoplankton abundance that could not be estimated using certain variables. Additional monitoring of water quality and hydrological variables in the surrounding rivers should be carried out a few days before and after reservoir operations and heavy storms, which would assist in developing site-specific preventive strategies to control phytoplankton abundance.
Particle Engineering in Pharmaceutical Solids Processing: Surface Energy Considerations
Williams, Daryl R.
2015-01-01
During the past 10 years particle engineering in the pharmaceutical industry has become a topic of increasing importance. Engineers and pharmacists need to understand and control a range of key unit manufacturing operations, such as milling, granulation, crystallisation, powder mixing and the production of dry powder inhaled drugs, which can be very challenging. It has now become very clear that in many of these particle processing operations, the surface energy of the starting, intermediate or final products is a key factor in understanding the processing operation and/or the final product performance. This review will consider the surface energy and surface energy heterogeneity of crystalline solids, methods for the measurement of surface energy, effects of milling on powder surface energy, adhesion and cohesion in powder mixtures, crystal habits and surface energy, surface energy and powder granulation processes, performance of DPI systems and finally crystallisation conditions and surface energy. This review will conclude that the importance of surface energy as a significant factor in understanding the performance of many particulate pharmaceutical products and processes has now been clearly established. It is nevertheless still work in progress, both in terms of developing methods and of establishing the limits for when surface energy is the key variable of relevance. PMID:25876912
Message Variability and Heterogeneity: A Core Challenge for Communication Research
Slater, Michael D.; Peter, Jochen; Valkenberg, Patti
2015-01-01
Messages are central to human social experience, and pose key conceptual and methodological challenges in the study of communication. In response to these challenges, we outline a systematic approach to conceptualizing, operationalizing, and analyzing messages. At the conceptual level, we distinguish between two core aspects of messages: message variability (the defined and operationalized features of messages) and message heterogeneity (the undefined and unmeasured features of messages), and suggest preferred approaches to defining message variables. At the operational level, we identify message sampling, selection, and research design strategies responsive to issues of message variability and heterogeneity in experimental and survey research. At the analytical level, we highlight effective techniques to deal with message variability and heterogeneity. We conclude with seven recommendations to increase rigor in the study of communication through appropriately addressing the challenges presented by messages. PMID:26681816
Method for adding nodes to a quantum key distribution system
Grice, Warren P
2015-02-24
An improved quantum key distribution (QKD) system and method are provided. The system and method introduce new clients at intermediate points along a quantum channel, where any two clients can establish a secret key without the need for a secret meeting between the clients. The new clients perform operations on photons as they pass through nodes in the quantum channel, and participate in a non-secret protocol that is amended to include the new clients. The system and method significantly increase the number of clients that can be supported by a conventional QKD system, with only a modest increase in cost. The system and method are compatible with a variety of QKD schemes, including polarization, time-bin, continuous variable and entanglement QKD.
NASA Technical Reports Server (NTRS)
Sullivan, T. J.; Parker, D. E.
1979-01-01
A design technology study was performed to identify a high speed, multistage, variable geometry fan configuration capable of achieving wide flow modulation with near optimum efficiency at the important operating condition. A parametric screening study of the front and rear block fans was conducted in which the influence of major fan design features on weight and efficiency was determined. Key design parameters were varied systematically to determine the fan configuration most suited for a double bypass, variable cycle engine. Two and three stage fans were considered for the front block. A single stage, core driven fan was studied for the rear block. Variable geometry concepts were evaluated to provide near optimum off design performance. A detailed aerodynamic design and a preliminary mechanical design were carried out for the selected fan configuration. Performance predictions were made for the front and rear block fans.
NASA Astrophysics Data System (ADS)
Ireland, Gareth; North, Matthew R.; Petropoulos, George P.; Srivastava, Prashant K.; Hodges, Crona
2015-04-01
Acquiring accurate information on the spatio-temporal variability of soil moisture content (SM) and evapotranspiration (ET) is of key importance to extend our understanding of the Earth system's physical processes, and is also required in a wide range of multi-disciplinary research studies and applications. The utility and applicability of Earth Observation (EO) technology provides an economically feasible solution to derive continuous spatio-temporal estimates of key parameters characterising land surface interactions, including ET as well as SM. Such information is of key value to practitioners, decision makers and scientists alike. The PREMIER-EO project, recently funded by High Performance Computing Wales (HPCW), is a research initiative directed towards the development of a better understanding of EO technology's present ability to derive operational estimations of surface fluxes and SM. Moreover, the project aims at addressing knowledge gaps related to the operational estimation of such parameters, and thus contributes to ongoing global efforts to enhance the accuracy of those products. In this presentation we introduce the PREMIER-EO project, providing a detailed overview of the research aims and objectives for the 1-year duration of the project's implementation. Subsequently, we present the initial results of the work carried out herein, in particular those related to a comprehensive and robust evaluation of the accuracy of existing operational ET and SM products from different ecosystems globally. The research outcomes of this project, once completed, will provide an important contribution towards addressing the knowledge gaps related to the operational estimation of ET and SM. The project's results will also support ongoing global efforts towards the operational development of related products using technologically advanced EO instruments launched recently or planned for launch in the next 1-2 years.
Key Words: PREMIER-EO, HPC Wales, Soil Moisture, Evapotranspiration, Earth Observation
NASA Technical Reports Server (NTRS)
Xue, Yan; Balmaseda, Magdalena A.; Boyer, Tim; Ferry, Nicolas; Good, Simon; Ishikawa, Ichiro; Rienecker, Michele; Rosati, Tony; Yin, Yonghong; Kumar, Arun
2012-01-01
Upper ocean heat content (HC) is one of the key indicators of climate variability on many time-scales extending from seasonal to interannual to long-term climate trends. For example, HC in the tropical Pacific provides information on thermocline anomalies that is critical for the long-lead forecast skill of ENSO. Since HC variability is also associated with SST variability, a better understanding and monitoring of HC variability can help us understand and forecast SST variability associated with ENSO and other modes such as Indian Ocean Dipole (IOD), Pacific Decadal Oscillation (PDO), Tropical Atlantic Variability (TAV) and Atlantic Multidecadal Oscillation (AMO). An accurate ocean initialization of HC anomalies in coupled climate models could also contribute to skill in decadal climate prediction. Errors, and/or uncertainties, in the estimation of HC variability can be affected by many factors including uncertainties in surface forcings, ocean model biases, and deficiencies in data assimilation schemes. Changes in observing systems can also leave an imprint on the estimated variability. The availability of multiple operational ocean analyses (ORA) that are routinely produced by operational and research centers around the world provides an opportunity to assess uncertainties in HC analyses, to help identify gaps in observing systems as they impact the quality of ORAs and therefore climate model forecasts. A comparison of ORAs also gives an opportunity to identify deficiencies in data assimilation schemes, and can be used as a basis for development of real-time multi-model ensemble HC monitoring products. The OceanObs09 Conference called for an intercomparison of ORAs and use of ORAs for global ocean monitoring. As a follow-up, we intercompared HC variations from ten ORAs -- two objective analyses based on in-situ data only and eight model analyses based on ocean data assimilation systems.
The mean, annual cycle, interannual variability and long-term trend of HC have been analyzed
Banning PRF programmer's manual [considering MOS integrated circuits]
NASA Technical Reports Server (NTRS)
Kuelthau, R. L.
1970-01-01
This manual describes a modification of the Banning placement routing folding program. The modifications to this program have been made to implement it on a Sigma 5 computer. Flow charts at various levels are included, beginning with high-level functional diagrams and working down to the level of detail deemed necessary to understand the operation of the various sections of the program. Along with the flow chart of each subroutine is a narrative description of its functional operation and definitions of its arrays and key variables, as well as a section to assist the programmer in dimensioning the program's arrays.
Penloglou, Giannis; Chatzidoukas, Christos; Kiparissides, Costas
2012-01-01
The microbial production of polyhydroxybutyrate (PHB) is a complex process in which the final quantity and quality of the PHB depend on a large number of process operating variables. Consequently, the design and optimal dynamic operation of a microbial process for the efficient production of PHB with tailor-made molecular properties is an extremely interesting problem. The present study investigates how key process operating variables (i.e., nutritional and aeration conditions) affect the biomass production rate and the PHB accumulation in the cells and its associated molecular weight distribution. A combined metabolic/polymerization/macroscopic modelling approach, relating the process performance and product quality with the process variables, was developed and validated using an extensive series of experiments and measurements. The model predicts the dynamic evolution of the biomass growth, the polymer accumulation, the consumption of carbon and nitrogen sources and the average molecular weights of the PHB in a bioreactor, under batch and fed-batch operating conditions. The proposed integrated model was used for the model-based optimization of the production of PHB with tailor-made molecular properties in Azohydromonas lata bacteria. The process optimization led to a high intracellular PHB accumulation (up to 95% g of PHB per g of DCW) and the production of different grades (i.e., different molecular weight distributions) of PHB. Copyright © 2011 Elsevier Inc. All rights reserved.
Disentangling Global Warming, Multidecadal Variability, and El Niño in Pacific Temperatures
NASA Astrophysics Data System (ADS)
Wills, Robert C.; Schneider, Tapio; Wallace, John M.; Battisti, David S.; Hartmann, Dennis L.
2018-03-01
A key challenge in climate science is to separate observed temperature changes into components due to internal variability and responses to external forcing. Extended integrations of forced and unforced climate models are often used for this purpose. Here we demonstrate a novel method to separate modes of internal variability from global warming based on differences in time scale and spatial pattern, without relying on climate models. We identify uncorrelated components of Pacific sea surface temperature variability due to global warming, the Pacific Decadal Oscillation (PDO), and the El Niño-Southern Oscillation (ENSO). Our results give statistical representations of PDO and ENSO that are consistent with their being separate processes, operating on different time scales, but are otherwise consistent with canonical definitions. We isolate the multidecadal variability of the PDO and find that it is confined to midlatitudes; tropical sea surface temperatures and their teleconnections mix in higher-frequency variability. This implies that midlatitude PDO anomalies are more persistent than previously thought.
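The idea of separating modes by time scale can be loosely illustrated with a generic low-pass-filter-plus-EOF sketch (this is a simplified stand-in, not the authors' actual method; the window length and mode count are arbitrary assumptions):

```python
import numpy as np

def lowpass_eofs(field, window=120, n_modes=2):
    """Leading EOFs of a low-pass-filtered space-time anomaly field.

    field : (time, space) anomalies.  A centered moving average retains
    variability slower than roughly `window` steps before the PCA step,
    so the leading patterns emphasize multidecadal-type variability.
    """
    kernel = np.ones(window) / window
    slow = np.apply_along_axis(
        lambda x: np.convolve(x, kernel, mode="same"), 0, field)
    slow = slow - slow.mean(axis=0)
    # EOFs are the right singular vectors of the (time x space) matrix
    u, s, vt = np.linalg.svd(slow, full_matrices=False)
    patterns = vt[:n_modes]              # spatial patterns
    pcs = u[:, :n_modes] * s[:n_modes]   # principal-component time series
    return patterns, pcs
```

In the same spirit as the paper's approach, the point of filtering before the decomposition is that slowly varying modes (trend, PDO-like variability) are characterized by both a time scale and a spatial pattern, rather than by a pattern alone.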
Emergency strategy optimization for the environmental control system in manned spacecraft
NASA Astrophysics Data System (ADS)
Li, Guoxiang; Pang, Liping; Liu, Meng; Fang, Yufeng; Zhang, Helin
2018-02-01
It is very important for the environmental control system (ECS) of a manned spacecraft to be able to reconfigure its operation strategy in emergency conditions. In this article, a multi-objective optimization is established to design the optimal emergency strategy for an ECS in an insufficient power supply condition. The maximum ECS lifetime and the minimum power consumption are chosen as the optimization objectives. Some adjustable key variables are chosen as the optimization variables, which finally represent the reconfigured emergency strategy. The non-dominated sorting genetic algorithm-II is adopted to solve this multi-objective optimization problem. Optimization processes are conducted at four different carbon dioxide partial pressure control levels. The study results show that the Pareto-optimal frontiers obtained from this multi-objective optimization can represent the relationship between the lifetime and the power consumption of the ECS. Hence, the preferred emergency operation strategy can be recommended for situations when there is suddenly insufficient power.
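The core of NSGA-II's selection step, non-dominated sorting, can be sketched on a toy lifetime-versus-power trade-off; the settings, coefficients, and objective functions below are purely illustrative assumptions, not the article's ECS model.

```python
import numpy as np

def pareto_front(objectives):
    """Indices of non-dominated rows, minimizing every objective column."""
    n = objectives.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        # Points that are <= in all objectives and < in at least one dominate i
        dominates_i = np.all(objectives <= objectives[i], axis=1) & \
                      np.any(objectives < objectives[i], axis=1)
        keep[i] = not dominates_i.any()
    return np.flatnonzero(keep)

# Toy trade-off: higher duty settings extend lifetime but cost power
# (coefficients are invented for illustration only)
rng = np.random.default_rng(2)
settings = rng.uniform(0.2, 1.0, size=(500, 2))       # (fan duty, scrubber duty)
power = 120 * settings[:, 0] + 80 * settings[:, 1]    # W, to be minimized
lifetime = 40 * settings[:, 0] + 90 * settings[:, 1]  # h, to be maximized
objs = np.column_stack([power, -lifetime])            # minimize both columns
front = pareto_front(objs)
print(f"{front.size} non-dominated strategies out of {settings.shape[0]}")
```

The surviving points trace a Pareto frontier analogous to the article's lifetime/power curves, from which a preferred emergency strategy can be picked according to mission priorities.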
NASA Astrophysics Data System (ADS)
Besse, Nicolas; Coulette, David
2016-08-01
Achieving plasmas with good stability and confinement properties is a key research goal for magnetic fusion devices. The underlying equations are the Vlasov-Poisson and Vlasov-Maxwell (VPM) equations in three space variables, three velocity variables, and one time variable. Even in those somewhat academic cases where global equilibrium solutions are known, studying their stability requires the analysis of the spectral properties of the linearized operator, a daunting task. We have identified a model for which not only equilibrium solutions can be constructed, but many of their stability properties are amenable to rigorous analysis. It uses a class of solutions to the VPM equations (or to their gyrokinetic approximations) known as waterbag solutions which, in particular, are piecewise constant in phase-space. It also uses not only the gyrokinetic approximation of fast cyclotronic motion around magnetic field lines, but also an asymptotic approximation regarding the magnetic-field-induced anisotropy: the spatial variation along the field lines is taken to be much slower than across them. Together, these assumptions result in a drastic reduction in the dimensionality of the linearized problem, which becomes a set of two nested one-dimensional problems: an integral equation in the poloidal variable, followed by a one-dimensional complex Schrödinger equation in the radial variable. We show here that the operator associated with the poloidal variable is meromorphic in the eigenparameter, the pulsation frequency. We also prove that, for all but a countable set of real pulsation frequencies, the operator is compact and thus behaves mostly as a finite-dimensional one. The numerical algorithms based on such ideas have been implemented in a companion paper [D. Coulette and N. Besse, "Numerical resolution of the global eigenvalue problem for gyrokinetic-waterbag model in toroidal geometry" (submitted)] and were found to be surprisingly close to those for the original gyrokinetic-Vlasov equations. The purpose of the present paper is to make these new ideas accessible to two readerships: applied mathematicians and plasma physicists.
Design Considerations for a New Terminal Area Arrival Scheduler
NASA Technical Reports Server (NTRS)
Thipphavong, Jane; Mulfinger, Daniel
2010-01-01
Design of a terminal area arrival scheduler depends on the interrelationship between throughput, delay and controller intervention. The main contribution of this paper is an analysis of the above interdependence for several stochastic behaviors of expected system performance distributions in the aircraft's time of arrival at the meter fix and runway. Results of this analysis serve to guide the scheduler design choices for key control variables. Two types of variables are analyzed: separation buffers and terminal delay margins. The choice for these decision variables was tested using sensitivity analysis. Analysis suggests that it is best to set the separation buffer at the meter fix to its minimum and adjust the runway buffer to attain the desired system performance. Delay margin was found to have the least effect. These results help characterize the variables most influential in the scheduling operations of terminal area arrivals.
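The throughput-versus-intervention trade-off controlled by a separation buffer can be illustrated with a small Monte Carlo sketch; the separation standard, jitter level, and buffer values are hypothetical, not the paper's.

```python
import numpy as np

# Aircraft are scheduled `min_sep + buffer` apart at a fix; actual arrival
# times jitter with normal error, and a "violation" (requiring controller
# intervention) occurs when two consecutive arrivals end up closer than the
# minimum separation.
rng = np.random.default_rng(3)
min_sep, sigma, n_aircraft = 90.0, 20.0, 10_000   # seconds (illustrative)

for buffer in (0.0, 10.0, 30.0):
    scheduled = np.arange(n_aircraft) * (min_sep + buffer)
    actual = np.sort(scheduled + rng.normal(0.0, sigma, n_aircraft))
    gaps = np.diff(actual)
    violations = np.mean(gaps < min_sep)
    throughput = 3600.0 / (min_sep + buffer)      # aircraft per hour
    print(f"buffer={buffer:5.1f}s  throughput={throughput:5.1f}/h  "
          f"violation rate={violations:.2%}")
```

Larger buffers cut the violation rate but reduce throughput, which is exactly the kind of trade-off a sensitivity analysis over the buffer variables would quantify.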
Wide range operation of advanced low NOx aircraft gas turbine combustors
NASA Technical Reports Server (NTRS)
Roberts, P. B.; Fiorito, R. J.; Butze, H. F.
1978-01-01
The paper summarizes the results of an experimental test rig program designed to define and demonstrate techniques that would allow the jet-induced circulation and vortex air blast combustors to operate stably with acceptable emissions at simulated engine idle without compromise to the low NOx emissions under the high-altitude supersonic cruise condition. The discussion focuses on the test results of the key combustor modifications for both the simulated engine idle and cruise conditions. Several range-augmentation techniques are demonstrated that allow the lean-reaction premixed aircraft gas turbine combustor to operate with low NOx emissions at engine cruise and acceptable CO and UHC levels at engine idle. These techniques involve several combinations, including variable geometry and fuel switching designs.
The 200-kilowatt wind turbine project
NASA Technical Reports Server (NTRS)
1978-01-01
The three 200-kilowatt wind turbines described compose the first of three separate systems. Proposed wind turbines of the two other systems, although similar in design, are larger in both physical size and rated power generation. The overall objective of the project is to obtain early operation and performance data while gaining initial experience in the operation of large, horizontal-axis wind turbines in typical utility environments. Several of the key issues addressed include the following: (1) the impact of the variable power output (due to varying wind speeds) on the utility grid; (2) compatibility with utility requirements (voltage and frequency control of generated power); (3) demonstration of unattended, fail-safe operation; (4) reliability of the wind turbine system; (5) required maintenance; and (6) initial public reaction and acceptance.
Robertson, Erin L; Liber, Karsten
2007-11-01
The main objectives of this in situ study were to evaluate the usefulness of an in situ bioassay to determine if downstream water bodies at the Key Lake and Rabbit Lake uranium operations (Saskatchewan, Canada) were toxic to Hyalella azteca and, if toxicity was observed, to differentiate between the contribution of surface water and sediment contamination to in situ toxicity. These objectives were achieved by performing 4-d in situ bioassays with laboratory-reared H. azteca confined in specially designed, paired, surface water and sediment exposure chambers. Results from the in situ bioassays revealed significant mortality, relative to the respective reference site, at the exposure sites at both Key Lake (p = 0.001) and Rabbit Lake (p = 0.001). No statistical differences were found between survival in surface water and sediment exposure chambers at either Key Lake (p = 0.232) or Rabbit Lake (p = 0.072). This suggests that surface water (the common feature of both types of exposure chambers) was the primary cause of in situ mortality of H. azteca at both operations, although this relationship was stronger at Key Lake. At Key Lake, the primary cause of aquatic toxicity to H. azteca did not appear to be correlated with the variables measured in this study, but most likely with a pulse of organic mill-process chemicals released during the time of the in situ study, a transient event caused by a problem with the mill's solvent extraction process. The suspected cause of in situ toxicity to H. azteca at Rabbit Lake was high levels of uranium in surface water, sediment, and pore water.
Breaking the trade-off between efficiency and service.
Frei, Frances X
2006-11-01
For manufacturers, customers are the open wallets at the end of the supply chain. But for most service businesses, they are key inputs to the production process. Customers introduce tremendous variability to that process, but they also complain about any lack of consistency and don't care about the company's profit agenda. Managing customer-introduced variability, the author argues, is a central challenge for service companies. The first step is to diagnose which type of variability is causing mischief: Customers may arrive at different times, request different kinds of service, possess different capabilities, make varying degrees of effort, and have different personal preferences. Should companies accommodate variability or reduce it? Accommodation often involves asking employees to compensate for the variations among customers--a potentially costly solution. Reduction often means offering a limited menu of options, which may drive customers away. Some companies have learned to deal with customer-introduced variability without damaging either their operating environments or customers' service experiences. Starbucks, for example, handles capability variability among its customers by teaching them the correct ordering protocol. Dell deals with arrival and request variability in its high-end server business by outsourcing customer service while staying in close touch with customers to discuss their needs and assess their experiences with third-party providers. The effective management of variability often requires a company to influence customers' behavior. Managers attempting that kind of intervention can follow a three-step process: diagnosing the behavioral problem, designing an operating role for customers that creates new value for both parties, and testing and refining approaches for influencing behavior.
Small Interactive Image Processing System (SMIPS) system description
NASA Technical Reports Server (NTRS)
Moik, J. G.
1973-01-01
The Small Interactive Image Processing System (SMIPS) operates under control of the IBM-OS/MVT operating system and uses an IBM-2250 model 1 display unit as an interactive graphic device. The input language, in the form of character strings or attentions from keys and light pen, is interpreted and causes processing of built-in image processing functions as well as execution of a variable number of application programs kept on a private disk file. A description of design considerations is given, and the characteristics, structure, and logic flow of SMIPS are summarized. Data management and graphic programming techniques used for the interactive manipulation and display of digital pictures are also discussed.
ENSO detection and use to inform the operation of large scale water systems
NASA Astrophysics Data System (ADS)
Pham, Vuong; Giuliani, Matteo; Castelletti, Andrea
2016-04-01
El Nino Southern Oscillation (ENSO) is a large-scale, coupled ocean-atmosphere phenomenon occurring in the tropical Pacific Ocean, and is considered one of the most significant factors causing hydro-climatic anomalies throughout the world. Water systems operations could benefit from a better understanding of this global phenomenon, which has the potential for enhancing the accuracy and lead-time of long-range streamflow predictions. In turn, these are key to designing interannual water transfers in large-scale water systems to counter the increasingly frequent extremes induced by a changing climate. Although the ENSO teleconnection is well defined in some locations such as the Western USA and Australia, there is no consensus on how it can be detected and used in other river basins, particularly in Europe, Africa, and Asia. In this work, we contribute a general framework relying on Input Variable Selection techniques for detecting the ENSO teleconnection and using this information for improving water reservoir operations. The core of our procedure is the Iterative Input variable Selection (IIS) algorithm, which is employed to find the most relevant determinants of streamflow variability for deriving predictive models based on the selected inputs, as well as to find the most valuable information for conditioning operating decisions. Our framework is applied to the multipurpose operations of the Hoa Binh reservoir in the Red River basin (Vietnam), taking into account hydropower production, water supply for irrigation, and flood mitigation during the monsoon season. Numerical results show that our framework is able to quantify the relationship between the ENSO fluctuations and the Red River basin hydrology. Moreover, we demonstrate that such ENSO teleconnection represents valuable information for improving the operations of the Hoa Binh reservoir.
Corcoran, Jennifer M.; Knight, Joseph F.; Gallant, Alisa L.
2013-01-01
Wetland mapping at the landscape scale using remotely sensed data requires both affordable data and an efficient, accurate classification method. Random forest classification offers several advantages over traditional land cover classification techniques, including a bootstrapping technique to generate robust estimations of outliers in the training data, as well as the capability of measuring classification confidence. Though the random forest classifier can generate complex decision trees with a multitude of input data and still not run a high risk of overfitting, there is a great need to reduce computational and operational costs by including only key input data sets without sacrificing a significant level of accuracy. Our main questions for this study site in Northern Minnesota were: (1) how do the classification accuracy and confidence of mapping wetlands compare using different remote sensing platforms and sets of input data; (2) what are the key input variables for accurate differentiation of upland, water, and wetlands, including wetland type; and (3) which datasets and seasonal imagery yield the best accuracy for wetland classification. Our results show the key input variables include terrain (elevation and curvature) and soils descriptors (hydric), along with an assortment of remotely sensed data collected in the spring (satellite visible, near infrared, and thermal bands; satellite normalized vegetation index and Tasseled Cap greenness and wetness; and horizontal-horizontal (HH) and horizontal-vertical (HV) polarization using L-band satellite radar). We undertook this exploratory analysis to inform decisions by natural resource managers charged with monitoring wetland ecosystems and to aid in designing a system for consistent operational mapping of wetlands across landscapes similar to those found in Northern Minnesota.
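A minimal sketch of the random-forest workflow the abstract describes, using scikit-learn. The feature list (elevation, curvature, hydric soil flag, spring NDVI, HH/HV radar backscatter) follows the key variables named above, but the data here are synthetic placeholders, not the study's imagery:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 500
# Synthetic stand-ins for the key input variables named in the abstract
X = np.column_stack([
    rng.normal(300, 20, n),   # elevation (m)
    rng.normal(0, 1, n),      # terrain curvature
    rng.integers(0, 2, n),    # hydric soil indicator
    rng.uniform(0, 1, n),     # spring NDVI
    rng.normal(-12, 3, n),    # HH backscatter (dB)
    rng.normal(-18, 3, n),    # HV backscatter (dB)
])
# Synthetic labels: 0 = upland, 1 = water, 2 = wetland
y = rng.integers(0, 3, n)

# oob_score=True uses the bootstrap (out-of-bag) samples to estimate
# accuracy without a hold-out set, as the abstract's bootstrapping note suggests
clf = RandomForestClassifier(n_estimators=200, oob_score=True, random_state=0)
clf.fit(X, y)

# Per-class vote fractions serve as the per-pixel classification confidence
confidence = clf.predict_proba(X[:5]).max(axis=1)
# Feature importances indicate which input variables drive the classification
print(clf.oob_score_, clf.feature_importances_.round(3))
```

With real imagery, ranking `feature_importances_` is one way to decide which datasets can be dropped without sacrificing accuracy.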
Gaussian private quantum channel with squeezed coherent states
Jeong, Kabgyun; Kim, Jaewan; Lee, Su-Yong
2015-01-01
While the objective of conventional quantum key distribution (QKD) is to secretly generate and share classical bits concealed in the form of maximally mixed quantum states, that of a private quantum channel (PQC) is to secretly transmit individual quantum states concealed in the form of maximally mixed states using a shared one-time pad; the scheme is called a Gaussian private quantum channel (GPQC) when it operates in the continuous-variable regime. We propose a GPQC enhanced with squeezed coherent states (GPQCwSC), which is a generalization of GPQC with coherent states only (GPQCo) [Phys. Rev. A 72, 042313 (2005)]. We show that GPQCwSC beats the GPQCo for the upper bound on accessible information. As a subsidiary example, it is shown that squeezed states have an advantage over coherent states against a beam-splitting attack in continuous-variable QKD. It is also shown that a squeezing operation can be approximated as a superposition of two different displacement operations in the small squeezing regime. PMID:26364893
Simultaneous classical communication and quantum key distribution using continuous variables*
NASA Astrophysics Data System (ADS)
Qi, Bing
2016-10-01
Presently, classical optical communication systems employing strong laser pulses and quantum key distribution (QKD) systems working at single-photon levels are very different communication modalities. Dedicated devices are commonly required to implement QKD. In this paper, we propose a scheme which allows classical communication and QKD to be implemented simultaneously using the same communication infrastructure. More specifically, we propose a coherent communication scheme where both the bits for classical communication and the Gaussian distributed random numbers for QKD are encoded on the same weak coherent pulse and decoded by the same coherent receiver. Simulation results based on practical system parameters show that both deterministic classical communication with a bit error rate of 10^-9 and secure key distribution could be achieved over tens of kilometers of single-mode fibers. It is conceivable that in the future coherent optical communication network, QKD will be operated in the background of classical communication at a minimal cost.
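A toy numerical sketch of the dual encoding idea described above: each pulse quadrature carries a classical bit as a large displacement sign plus a small Gaussian modulation for key distribution. The amplitude and noise figures are invented for illustration and ignore channel loss and quantum noise modeling:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
bits = rng.integers(0, 2, n)        # classical data bits
gauss = rng.normal(0.0, 1.0, n)     # Gaussian randomness reserved for QKD
A = 20.0                            # classical displacement amplitude (assumed)

# Transmitted in-phase quadrature: bit sets the sign of a large displacement,
# the Gaussian key modulation rides on top of it
quadrature = A * (2 * bits - 1) + gauss

# Receiver: the sign recovers the bit; subtracting the known displacement
# recovers the Gaussian value used for key generation
decoded_bits = (quadrature > 0).astype(int)
recovered_gauss = quadrature - A * (2 * decoded_bits - 1)
print((decoded_bits == bits).mean())  # bit error rate is negligible when A >> noise
```

The large separation between displacement amplitude and modulation variance is what lets one receiver serve both roles, mirroring the paper's premise.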
Bio-inspired online variable recruitment control of fluidic artificial muscles
NASA Astrophysics Data System (ADS)
Jenkins, Tyler E.; Chapman, Edward M.; Bryant, Matthew
2016-12-01
This paper details the creation of a hybrid variable recruitment control scheme for fluidic artificial muscle (FAM) actuators with an emphasis on maximizing system efficiency and switching control performance. Variable recruitment is the process of altering a system’s active number of actuators, allowing operation in distinct force regimes. Previously, FAM variable recruitment was only quantified with offline, manual valve switching; this study addresses the creation and characterization of novel, on-line FAM switching control algorithms. The bio-inspired algorithms are implemented in conjunction with a PID and model-based controller, and applied to a simulated plant model. Variable recruitment transition effects and chatter rejection are explored via a sensitivity analysis, allowing a system designer to weigh tradeoffs in actuator modeling, algorithm choice, and necessary hardware. Variable recruitment is further developed through simulation of a robotic arm tracking a variety of spline position inputs, requiring several levels of actuator recruitment. Switching controller performance is quantified and compared with baseline systems lacking variable recruitment. The work extends current variable recruitment knowledge by creating novel online variable recruitment control schemes, and exploring how online actuator recruitment affects system efficiency and control performance. Key topics associated with implementing a variable recruitment scheme, including the effects of modeling inaccuracies, hardware considerations, and switching transition concerns are also addressed.
Some practical aspects of lossless and nearly-lossless compression of AVHRR imagery
NASA Technical Reports Server (NTRS)
Hogan, David B.; Miller, Chris X.; Christensen, Than Lee; Moorti, Raj
1994-01-01
Compression of Advanced Very High Resolution Radiometer (AVHRR) imagery operating in a lossless or nearly-lossless mode is evaluated. Several practical issues are analyzed, including: variability of compression over time and among channels, rate-smoothing buffer size, multi-spectral preprocessing of data, day/night handling, and impact on key operational data applications. This analysis is based on a DPCM algorithm employing the Universal Noiseless Coder, which is a candidate for inclusion in many future remote sensing systems. It is shown that compression rates of about 2:1 (daytime) can be achieved with modest buffer sizes (less than or equal to 2.5 Mbytes) and a relatively simple multi-spectral preprocessing step.
Quantification of Gaussian quantum steering.
Kogias, Ioannis; Lee, Antony R; Ragy, Sammy; Adesso, Gerardo
2015-02-13
Einstein-Podolsky-Rosen steering incarnates a useful nonclassical correlation which sits between entanglement and Bell nonlocality. While a number of qualitative steering criteria exist, very little has been achieved when it comes to quantifying steerability. We introduce a computable measure of steering for arbitrary bipartite Gaussian states of continuous variable systems. For two-mode Gaussian states, the measure reduces to a form of coherent information, which is proven never to exceed entanglement, and to reduce to it on pure states. We provide an operational connection between our measure and the key rate in one-sided device-independent quantum key distribution. We further prove that Peres' conjecture holds in its stronger form within the fully Gaussian regime: namely, steering bound entangled Gaussian states by Gaussian measurements is impossible.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Besse, Nicolas, E-mail: Nicolas.Besse@oca.eu; Institut Jean Lamour, UMR CNRS/UL 7198, Université de Lorraine, BP 70239 54506 Vandoeuvre-lès-Nancy Cedex; Coulette, David, E-mail: David.Coulette@ipcms.unistra.fr
2016-08-15
Achieving plasmas with good stability and confinement properties is a key research goal for magnetic fusion devices. The underlying equations are the Vlasov–Poisson and Vlasov–Maxwell (VPM) equations in three space variables, three velocity variables, and one time variable. Even in those somewhat academic cases where global equilibrium solutions are known, studying their stability requires the analysis of the spectral properties of the linearized operator, a daunting task. We have identified a model for which not only can equilibrium solutions be constructed, but many of their stability properties are amenable to rigorous analysis. It uses a class of solutions to the VPM equations (or to their gyrokinetic approximations) known as waterbag solutions which, in particular, are piecewise constant in phase-space. It also uses not only the gyrokinetic approximation of fast cyclotronic motion around magnetic field lines, but also an asymptotic approximation regarding the magnetic-field-induced anisotropy: the spatial variation along the field lines is taken to be much slower than across them. Together, these assumptions result in a drastic reduction in the dimensionality of the linearized problem, which becomes a set of two nested one-dimensional problems: an integral equation in the poloidal variable, followed by a one-dimensional complex Schrödinger equation in the radial variable. We show here that the operator associated with the poloidal variable is meromorphic in the eigenparameter, the pulsation frequency. We also prove that, for all but a countable set of real pulsation frequencies, the operator is compact and thus behaves mostly as a finite-dimensional one. The numerical algorithms based on such ideas have been implemented in a companion paper [D. Coulette and N. Besse, "Numerical resolution of the global eigenvalue problem for gyrokinetic-waterbag model in toroidal geometry" (submitted)] and were found to be surprisingly close to those for the original gyrokinetic-Vlasov equations. The purpose of the present paper is to make these new ideas accessible to two readerships: applied mathematicians and plasma physicists.
Hannah, Iain; Montefiori, Erica; Modenese, Luca; Prinold, Joe; Viceconti, Marco; Mazzà, Claudia
2017-01-01
Subject-specific musculoskeletal modelling is especially useful in the study of juvenile and pathological subjects. However, such methodologies typically require a human operator to identify key landmarks from medical imaging data and are thus affected by unavoidable variability in the parameters defined and subsequent model predictions. The aim of this study was to thus quantify the inter- and intra-operator repeatability of a subject-specific modelling methodology developed for the analysis of subjects with juvenile idiopathic arthritis. Three operators each created subject-specific musculoskeletal foot and ankle models via palpation of bony landmarks, adjustment of geometrical muscle points and definition of joint coordinate systems. These models were then fused to a generic Arnold lower limb model for each of three modelled patients. The repeatability of each modelling operation was found to be comparable to those previously reported for the modelling of healthy, adult subjects. However, the inter-operator repeatability of muscle point definition was significantly greater than intra-operator repeatability (p < 0.05) and predicted ankle joint contact forces ranged by up to 24% and 10% of the peak force for the inter- and intra-operator analyses, respectively. Similarly, the maximum inter- and intra-operator variations in muscle force output were 64% and 23% of peak force, respectively. Our results suggest that subject-specific modelling is operator dependent at the foot and ankle, with the definition of muscle geometry the most significant source of output uncertainty. The development of automated procedures to prevent the misplacement of crucial muscle points should therefore be considered a particular priority for those developing subject-specific models. PMID:28427313
2013-06-01
Gunasekaran and Kobu (2007) also presented six observations as they relate to these key performance indicators (KPI), as follows: 1. Internal business process (50% of the KPI) and customers (50% of the KPI) play a significant role in SC environments. This implies that internal business process PMs have significant impact on the operational performance. 2. The most widely used PM is financial performance (38% of the KPI). This
Self-imposed timeouts under increasing response requirements.
NASA Technical Reports Server (NTRS)
Dardano, J. F.
1973-01-01
Three male White Carneaux pigeons were used in the investigation. None of the results obtained contradicts the interpretation of self-imposed timeouts as an escape response reinforced by the removal of unfavorable reinforcement conditions, although some details of the performances reflect either weak control or the operation of other controlling variables. Timeout key responding can be considered as one of several classes of behavior having a low probability of occurrence, all of which compete with the behavior maintained by a positive reinforcement schedule.
NASA Astrophysics Data System (ADS)
Spuler, Scott; Repasky, Kevin; Hayman, Matt; Nehrir, Amin
2018-04-01
The National Center for Atmospheric Research (NCAR) and Montana State University (MSU) are developing a test network of five micro-pulse differential absorption lidars to continuously measure high-vertical-resolution water vapor in the lower atmosphere. The instruments are accurate yet low-cost, operate unattended, and are eye-safe: all key features needed to enable the larger network required to characterize the atmospheric moisture variability that influences important processes related to weather and climate.
Managerial accounting applications in radiology.
Lexa, Frank James; Mehta, Tushar; Seidmann, Abraham
2005-03-01
We review the core issues in managerial accounting for radiologists. We introduce the topic and then explore its application to diagnostic imaging. We define key terms such as fixed cost, variable cost, marginal cost, and marginal revenue and discuss their role in understanding the operational and financial implications for a radiology facility by using a cost-volume-profit model. Our work places particular emphasis on the role of managerial accounting in understanding service costs, as well as how it assists executive decision making.
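The cost-volume-profit relationship the abstract refers to can be sketched in a few lines. All figures below are illustrative assumptions for a hypothetical imaging service, not data from the article:

```python
def cvp_profit(volume, price, variable_cost, fixed_cost):
    """Cost-volume-profit model: profit = volume * contribution margin - fixed cost."""
    return volume * (price - variable_cost) - fixed_cost

def breakeven_volume(price, variable_cost, fixed_cost):
    """Volume at which total contribution margin exactly covers fixed costs."""
    return fixed_cost / (price - variable_cost)

# Hypothetical MRI service: $400 reimbursement per scan, $50 variable cost
# per scan (contrast, supplies), $700,000 annual fixed cost (lease, salaries)
print(breakeven_volume(400, 50, 700_000))   # scans/year needed to break even
print(cvp_profit(2500, 400, 50, 700_000))   # profit at a volume above break-even
```

The marginal-cost/marginal-revenue framing follows directly: each scan beyond break-even adds the full contribution margin (price minus variable cost) to profit, since fixed costs are already covered.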
Statistics 101 for Radiologists.
Anvari, Arash; Halpern, Elkan F; Samir, Anthony E
2015-10-01
Diagnostic tests have wide clinical applications, including screening, diagnosis, measuring treatment effect, and determining prognosis. Interpreting diagnostic test results requires an understanding of key statistical concepts used to evaluate test efficacy. This review explains descriptive statistics and discusses probability, including mutually exclusive and independent events and conditional probability. In the inferential statistics section, a statistical perspective on study design is provided, together with an explanation of how to select appropriate statistical tests. Key concepts in recruiting study samples are discussed, including representativeness and random sampling. Variable types are defined, including predictor, outcome, and covariate variables, and the relationship of these variables to one another. In the hypothesis testing section, we explain how to determine if observed differences between groups are likely to be due to chance. We explain type I and II errors, statistical significance, and study power, followed by an explanation of effect sizes and how confidence intervals can be used to generalize observed effect sizes to the larger population. Statistical tests are explained in four categories: t tests and analysis of variance, proportion analysis tests, nonparametric tests, and regression techniques. We discuss sensitivity, specificity, accuracy, receiver operating characteristic analysis, and likelihood ratios. Measures of reliability and agreement, including κ statistics, intraclass correlation coefficients, and Bland-Altman graphs and analysis, are introduced. © RSNA, 2015.
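The diagnostic-test metrics defined above follow directly from a 2x2 confusion matrix. A short sketch with made-up counts (not data from the review):

```python
# Hypothetical test results against a gold standard
tp, fn, fp, tn = 90, 10, 20, 80  # true pos, false neg, false pos, true neg

sensitivity = tp / (tp + fn)               # true positive rate
specificity = tn / (tn + fp)               # true negative rate
accuracy = (tp + tn) / (tp + fn + fp + tn)
lr_pos = sensitivity / (1 - specificity)   # positive likelihood ratio
lr_neg = (1 - sensitivity) / specificity   # negative likelihood ratio

print(sensitivity, specificity, accuracy, lr_pos, lr_neg)
```

Sweeping a decision threshold and plotting sensitivity against (1 - specificity) at each cut-off yields the receiver operating characteristic curve the abstract mentions.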
A dynamic plug flow reactor model for a vanadium redox flow battery cell
NASA Astrophysics Data System (ADS)
Li, Yifeng; Skyllas-Kazacos, Maria; Bao, Jie
2016-04-01
A dynamic plug flow reactor model for a single cell VRB system is developed based on material balance, and the Nernst equation is employed to calculate cell voltage with consideration of activation and concentration overpotentials. Simulation studies were conducted under various conditions to investigate the effects of several key operation variables including electrolyte flow rate, upper SOC limit and input current magnitude on the cell charging performance. The results show that all three variables have a great impact on performance, particularly on the possibility of gassing during charging at high SOCs or inadequate flow rates. Simulations were also carried out to study the effects of electrolyte imbalance during long term charging and discharging cycling. The results show the minimum electrolyte flow rate needed for operation within a particular SOC range in order to avoid gassing side reactions during charging. The model also allows scheduling of partial electrolyte remixing operations to restore capacity and also avoid possible gassing side reactions during charging. Simulation results also suggest the proper placement for cell voltage monitoring and highlight potential problems associated with setting the upper charging cut-off limit based on the inlet SOC calculated from the open-circuit cell voltage measurement.
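The Nernst-equation step of such a model can be illustrated with the common simplified form in which both half-cell concentration ratios reduce to SOC/(1-SOC). This is a sketch under that assumption, with an assumed formal potential, not the paper's full plug flow model:

```python
import math

R, T, F = 8.314, 298.15, 96485.0   # gas constant, temperature (K), Faraday constant
E0 = 1.4                           # formal cell potential (V), assumed value

def ocv(soc):
    """Open-circuit cell voltage of a VRB as a function of state of charge.

    Uses the simplified two-electron Nernst form where both half-cell
    concentration ratios equal soc / (1 - soc); valid for 0 < soc < 1.
    """
    return E0 + (2 * R * T / F) * math.log(soc / (1 - soc))

# The OCV rises steeply as soc -> 1, which is why the upper SOC limit
# and adequate electrolyte flow are key variables for avoiding gassing
print(ocv(0.5), ocv(0.95))
```

Plotting `ocv` over a charge cycle shows why inferring the inlet SOC from the open-circuit voltage can be misleading near the charging cut-off, as the abstract cautions.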
Variable-Speed Power-Turbine Research at Glenn Research Center
NASA Technical Reports Server (NTRS)
Welch, Gerard E.; McVetta, Ashlie B.; Stevens, Mark A.; Howard, Samuel A.; Giel, Paul W.; Ameri, Ali, A.; To, Waiming; Skoch, Gary J.; Thurman, Douglas R.
2012-01-01
The main rotors of the NASA Large Civil Tilt-Rotor (LCTR) notional vehicle operate over a wide speed range, from 100 percent at takeoff to 54 percent at cruise. The variable-speed power turbine (VSPT) offers one approach by which to effect this speed variation. VSPT aerodynamics challenges include high work factors at cruise, wide (40 to 60 degree) incidence-angle variations in blade and vane rows over the speed range, and operation at low Reynolds numbers. Rotordynamics challenges include potential responsiveness to shaft modes within the 50 percent VSPT speed range. A research effort underway at NASA Glenn Research Center, intended to address these key aerodynamic and rotordynamic challenges, is described. Conceptual design and 3-D multistage RANS and URANS analyses, conducted internally and under contract, provide expected VSPT sizing, stage-count, performance and operability information, and maps for system studies. Initial steps toward experimental testing of incidence-tolerant blading in a transonic linear cascade are described, and progress toward development/improvement of a simulation capability for multistage turbines with low Reynolds number transitional flow is summarized. Preliminary rotordynamics analyses indicate that viable engine concepts exist with a 50 percent VSPT shaft-speed range. Assessments of potential paths toward VSPT component-level testing are summarized.
Linear dynamical modes as new variables for data-driven ENSO forecast
NASA Astrophysics Data System (ADS)
Gavrilov, Andrey; Seleznev, Aleksei; Mukhin, Dmitry; Loskutov, Evgeny; Feigin, Alexander; Kurths, Juergen
2018-05-01
A new data-driven model for analysis and prediction of spatially distributed time series is proposed. The model is based on a linear dynamical mode (LDM) decomposition of the observed data which is derived from a recently developed nonlinear dimensionality reduction approach. The key point of this approach is its ability to take into account simple dynamical properties of the observed system by means of revealing the system's dominant time scales. The LDMs are used as new variables for empirical construction of a nonlinear stochastic evolution operator. The method is applied to the sea surface temperature anomaly field in the tropical belt where the El Nino Southern Oscillation (ENSO) is the main mode of variability. The advantage of LDMs versus traditionally used empirical orthogonal function decomposition is demonstrated for this data. Specifically, it is shown that the new model has a competitive ENSO forecast skill in comparison with the other existing ENSO models.
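The abstract compares LDMs against the traditionally used empirical orthogonal function (EOF) decomposition. As context, here is a minimal EOF baseline computed via SVD on a synthetic space-time anomaly field; the LDM method itself is more involved and is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(size=(240, 50))      # synthetic field: 240 months x 50 grid points
anom = data - data.mean(axis=0)        # remove the time-mean (climatology)

# EOFs are the right singular vectors of the anomaly matrix;
# projecting onto them gives the principal-component time series
u, s, vt = np.linalg.svd(anom, full_matrices=False)
eofs = vt[:3]                          # leading spatial patterns
pcs = anom @ eofs.T                    # reduced variables that could feed an
                                       # empirical stochastic evolution operator
explained = s[:3] ** 2 / (s ** 2).sum()
print(explained)                       # variance fraction captured by each mode
```

Both EOFs and LDMs serve the same role in the pipeline: reducing the spatially distributed field to a few time series on which the nonlinear stochastic evolution operator is constructed; the claimed advantage of LDMs is that they also account for the system's dominant time scales.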
Remote creation of hybrid entanglement between particle-like and wave-like optical qubits
NASA Astrophysics Data System (ADS)
Morin, Olivier; Huang, Kun; Liu, Jianli; Le Jeannic, Hanna; Fabre, Claude; Laurat, Julien
2014-07-01
The wave-particle duality of light has led to two different encodings for optical quantum information processing. Several approaches have emerged based either on particle-like discrete-variable states (that is, finite-dimensional quantum systems) or on wave-like continuous-variable states (that is, infinite-dimensional systems). Here, we demonstrate the generation of entanglement between optical qubits of these different types, located at distant places and connected by a lossy channel. Such hybrid entanglement, which is a key resource for a variety of recently proposed schemes, including quantum cryptography and computing, enables information to be converted from one Hilbert space to the other via teleportation and therefore the connection of remote quantum processors based upon different encodings. Beyond its fundamental significance for the exploration of entanglement and its possible instantiations, our optical circuit holds promise for implementations of heterogeneous networks, where discrete- and continuous-variable operations and techniques can be efficiently combined.
NASA Astrophysics Data System (ADS)
Whitehead, James Joshua
The analysis documented herein provides an integrated approach for the conduct of optimization under uncertainty (OUU) using Monte Carlo Simulation (MCS) techniques coupled with response surface-based methods for characterization of mixture-dependent variables. This novel methodology provides an innovative means of conducting optimization studies under uncertainty in propulsion system design. Analytic inputs are based upon empirical regression rate information obtained from design of experiments (DOE) mixture studies utilizing a mixed oxidizer hybrid rocket concept. Hybrid fuel regression rate was selected as the target response variable for optimization under uncertainty, with maximization of regression rate chosen as the driving objective. Characteristic operational conditions and propellant mixture compositions from experimental efforts conducted during previous foundational work were combined with elemental uncertainty estimates as input variables. Response surfaces for mixture-dependent variables and their associated uncertainty levels were developed using quadratic response equations incorporating single and two-factor interactions. These analysis inputs, response surface equations and associated uncertainty contributions were applied to a probabilistic MCS to develop dispersed regression rates as a function of operational and mixture input conditions within design space. Illustrative case scenarios were developed and assessed using this analytic approach including fully and partially constrained operational condition sets over all of design mixture space. In addition, optimization sets were performed across an operationally representative region in operational space and across all investigated mixture combinations. These scenarios were selected as representative examples relevant to propulsion system optimization, particularly for hybrid and solid rocket platforms. 
Ternary diagrams, including contour and surface plots, were developed and utilized to aid in visualization. The concept of Expanded-Durov diagrams was also adopted and adapted to this study to aid in visualization of uncertainty bounds. Regions of maximum regression rate and associated uncertainties were determined for each set of case scenarios. Application of response surface methodology coupled with probabilistic-based MCS allowed for flexible and comprehensive interrogation of mixture and operating design space during optimization cases. Analyses were also conducted to assess sensitivity of uncertainty to variations in key elemental uncertainty estimates. The methodology developed during this research provides an innovative optimization tool for future propulsion design efforts.
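The core loop described above (a quadratic response surface with two-factor interactions, sampled by Monte Carlo over dispersed inputs) can be sketched briefly. The surface coefficients, input meanings, and uncertainty levels below are invented placeholders, not the study's fitted values:

```python
import numpy as np

rng = np.random.default_rng(42)

def regression_rate(x1, x2):
    """Quadratic response surface: main effects, squared terms, one interaction."""
    return 2.0 + 0.8*x1 + 0.5*x2 - 0.3*x1**2 - 0.2*x2**2 + 0.15*x1*x2

# Monte Carlo dispersion of two operational/mixture inputs around nominal
# values, with assumed elemental uncertainty (standard deviations)
n = 10_000
x1 = rng.normal(0.5, 0.05, n)   # e.g. an oxidizer mixture fraction (assumed)
x2 = rng.normal(0.3, 0.03, n)   # e.g. a normalized operating condition (assumed)
rates = regression_rate(x1, x2)

# Dispersed regression-rate statistics: the mean locates the response,
# the spread quantifies the propagated uncertainty
print(rates.mean(), rates.std())
```

Repeating this sampling over a grid of nominal mixture compositions and taking the maximizing point reproduces, in miniature, the optimization-under-uncertainty search the dissertation describes.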
McKerr, Caoimhe; Lo, Yi-Chun; Edeghere, Obaghe; Bracebridge, Sam
2015-03-01
In Taiwan, around 1,500 cases of dengue fever are reported annually and incidence has been increasing over time. A national web-based Notifiable Diseases Surveillance System (NDSS) has been in operation since 1997 to monitor incidence and trends and support case and outbreak management. We present the findings of an evaluation of the NDSS to ascertain the extent to which dengue fever surveillance objectives are being achieved. We extracted the NDSS data on all laboratory-confirmed dengue fever cases reported from 1 January 2010 to 31 December 2012 to assess and describe key system attributes based on the Centers for Disease Control and Prevention surveillance evaluation guidelines. The system's structure and processes were delineated and operational staff interviewed using a semi-structured questionnaire. Crude and age-adjusted incidence rates were calculated and key demographic variables were summarised to describe reporting activity. Data completeness and validity were described across several variables. Of 5,072 laboratory-confirmed dengue fever cases reported during 2010-2012, 4,740 (93%) were reported during July to December. The system was judged to be simple due to its minimal reporting steps. Data collected on key variables were correctly formatted and usable in > 90% of cases, demonstrating good data completeness and validity. The information collected was considered relevant by users with high acceptability. Adherence to guidelines for 24-hour reporting was 99%. Of 720 cases (14%) recorded as travel-related, 111 (15%) had an onset >14 days after return, highlighting the potential for misclassification. Information on hospitalization was missing for 22% of cases. The calculated predictive value positive (PVP) was 43%. The NDSS for dengue fever surveillance is a robust, well maintained and acceptable system that supports the collection of complete and valid data needed to achieve the surveillance objectives.
The simplicity of the system engenders compliance leading to timely and accurate reporting. Completeness of hospitalization information could be further improved to allow assessment of severity of illness. To minimize misclassification, an algorithm to accurately classify travel cases should be established.
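The two quantitative attributes evaluated above, predictive value positive and field completeness, reduce to simple ratios. A minimal sketch (the denominator of reported suspected cases below is back-calculated for illustration from the stated 43% PVP and is not a figure from the evaluation):

```python
def predictive_value_positive(confirmed: int, reported: int) -> float:
    """PVP: share of reported suspected cases that are laboratory-confirmed."""
    if reported == 0:
        raise ValueError("no reported cases")
    return confirmed / reported

def completeness(records: list[dict], field: str) -> float:
    """Fraction of records with a usable (non-missing) value for `field`."""
    usable = sum(1 for r in records if r.get(field) not in (None, ""))
    return usable / len(records)

# Illustrative: 5,072 confirmed cases against a hypothetical denominator
# chosen so that PVP ~ 0.43, matching the abstract's reported value.
pvp = predictive_value_positive(confirmed=5072, reported=11796)
```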
Summary of the key features of seven biomathematical models of human fatigue and performance.
Mallis, Melissa M; Mejdal, Sig; Nguyen, Tammy T; Dinges, David F
2004-03-01
Biomathematical models that quantify the effects of circadian and sleep/wake processes on the regulation of alertness and performance have been developed in an effort to predict the magnitude and timing of fatigue-related responses in a variety of contexts (e.g., transmeridian travel, sustained operations, shift work). This paper summarizes key features of seven biomathematical models reviewed as part of the Fatigue and Performance Modeling Workshop held in Seattle, WA, on June 13-14, 2002. The Workshop was jointly sponsored by the National Aeronautics and Space Administration, U.S. Department of Defense, U.S. Army Medical Research and Materiel Command, Office of Naval Research, Air Force Office of Scientific Research, and U.S. Department of Transportation. An invitation was sent to developers of seven biomathematical models that were commonly cited in scientific literature and/or supported by government funding. On acceptance of the invitation to attend the Workshop, developers were asked to complete a survey of the goals, capabilities, inputs, and outputs of their biomathematical models of alertness and performance. Data from the completed surveys were summarized and juxtaposed to provide a framework for comparing features of the seven models. Survey responses revealed that models varied greatly relative to their reported goals and capabilities. While all modelers reported that circadian factors were key components of their capabilities, they differed markedly with regard to the roles of sleep and work times as input factors for prediction: four of the seven models had work time as their sole input variable(s), while the other three models relied on various aspects of sleep timing for model input. Models also differed relative to outputs: five sought to predict results from laboratory experiments, field, and operational data, while two models were developed without regard to predicting laboratory experimental results. 
All modelers provided published papers describing their models, with three of the models being proprietary. Although all models appear to have been fundamentally influenced by the two-process model of sleep regulation by Borbély, there is considerable diversity among them in the number and type of input and output variables, and their stated goals and capabilities.
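The two-process model of sleep regulation that the surveyed models build on can be sketched schematically: a homeostatic pressure S that saturates during wake and decays during sleep, and a circadian process C that oscillates with a 24 h period. Time constants and the alertness combination below are illustrative, not any particular model's parameters:

```python
import math

# Schematic two-process model: homeostatic sleep pressure S and circadian
# drive C. Time constants (hours) are illustrative only.
TAU_RISE, TAU_FALL = 18.2, 4.2

def homeostatic(s0: float, hours: float, awake: bool) -> float:
    if awake:  # saturating exponential rise toward 1.0 while awake
        return 1.0 - (1.0 - s0) * math.exp(-hours / TAU_RISE)
    return s0 * math.exp(-hours / TAU_FALL)  # exponential recovery in sleep

def circadian(clock_hours: float, amplitude: float = 0.2) -> float:
    # Sinusoidal drive peaking in the early evening in this sketch.
    return amplitude * math.sin(2 * math.pi * (clock_hours - 12.0) / 24.0)

def alertness(s: float, clock_hours: float) -> float:
    return circadian(clock_hours) - s  # higher = more alert (schematic)

s_wake = homeostatic(0.3, 16, awake=True)      # pressure after 16 h awake
s_sleep = homeostatic(s_wake, 8, awake=False)  # after a subsequent 8 h sleep
```

The seven surveyed models differ mainly in which of these inputs (work time versus sleep timing) they accept and in how the two processes are combined into a performance prediction.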
NASA Astrophysics Data System (ADS)
Yang, Can; Ma, Cheng; Hu, Linxi; He, Guangqiang
2018-06-01
We present a hierarchical modulation coherent communication protocol, which simultaneously achieves classical optical communication and continuous-variable quantum key distribution. Our hierarchical modulation scheme consists of a quadrature phase-shift keying modulation for classical communication and a four-state discrete modulation for continuous-variable quantum key distribution. The simulation results based on practical parameters show that it is feasible to transmit both quantum information and classical information on a single carrier. We obtained a secure key rate of 10^{-3} to 10^{-1} bits/pulse within 40 kilometers, while the maximum bit error rate for the classical information is about 10^{-7}. Because the continuous-variable quantum key distribution protocol is compatible with standard telecommunication technology, we believe our hierarchical modulation scheme can be used to upgrade digital communication systems and extend their functionality in the future.
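The hierarchical constellation described above can be sketched as a large-amplitude QPSK symbol (the classical bits) with a small four-state displacement superimposed on it (the CV-QKD symbol). Amplitudes and the Gray mapping below are illustrative assumptions, not the paper's parameters:

```python
import cmath

# Illustrative amplitudes: a strong QPSK ring for low classical BER, plus a
# weak four-state quantum modulation riding on each symbol.
A_CLASSICAL = 10.0
A_QKD = 0.5

def qpsk(bits: tuple) -> complex:
    """Gray-coded QPSK point for a classical bit pair."""
    phase = {(0, 0): 1, (0, 1): 3, (1, 1): 5, (1, 0): 7}[bits]
    return A_CLASSICAL * cmath.exp(1j * phase * cmath.pi / 4)

def four_state(symbol: int) -> complex:
    """Small four-state displacement carrying the CV-QKD symbol (0..3)."""
    return A_QKD * cmath.exp(1j * (2 * symbol + 1) * cmath.pi / 4)

def modulate(bits, key_symbol):
    """One carrier, two payloads: classical QPSK plus quantum displacement."""
    return qpsk(bits) + four_state(key_symbol)

tx = modulate((0, 1), 2)
```

The receiver first decides the QPSK quadrant (classical bits), then measures the small residual displacement around that decision point as raw key material.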
Continuous variable quantum key distribution with modulated entangled states.
Madsen, Lars S; Usenko, Vladyslav C; Lassen, Mikael; Filip, Radim; Andersen, Ulrik L
2012-01-01
Quantum key distribution enables two remote parties to grow a shared key, which they can use for unconditionally secure communication over a certain distance. The maximal distance depends on the loss and the excess noise of the connecting quantum channel. Several quantum key distribution schemes based on coherent states and continuous variable measurements are resilient to high loss in the channel, but are strongly affected by small amounts of channel excess noise. Here we propose and experimentally address a continuous variable quantum key distribution protocol that uses modulated fragile entangled states of light to greatly enhance the robustness to channel noise. We experimentally demonstrate that the resulting quantum key distribution protocol can tolerate more noise than the benchmark set by the ideal continuous variable coherent state protocol. Our scheme represents a very promising avenue for extending the distance for which secure communication is possible.
Evaluation of vertical profiles to design continuous descent approach procedure
NASA Astrophysics Data System (ADS)
Pradeep, Priyank
The current research focuses on the predictability, variability and operational feasibility aspects of Continuous Descent Approach (CDA), which is among the key concepts of the Next Generation Air Transportation System (NextGen). The idle-thrust CDA is a fuel-economical, noise and emission abatement procedure, but requires increased separation to accommodate variability and uncertainties in the vertical and speed profiles of arriving aircraft. Although considerable research has been devoted to estimating the potential benefits of CDA, few studies have attempted to explain its predictability, variability and operational feasibility. The analytical equations derived in this research using flight dynamics and the Base of Aircraft Data (BADA) Total Energy Model (TEM) give insight into the dependency of the vertical profile of CDA on various factors such as wind speed and gradient, weight, aircraft type and configuration, thrust settings, atmospheric factors (deviation from ISA (DISA), pressure and density of the air) and descent speed profile. Application of the derived equations to idle-thrust CDA gives insight into the sensitivity of its vertical profile to multiple factors. This suggests that a fixed geometric flight path angle (FPA) CDA has a higher degree of predictability and less variability, at the cost of non-idle and low-thrust engine settings. However, with an optimized design this impact can be minimized overall. The CDA simulations were performed using the Future ATM Concept Evaluation Tool (FACET) based on radar-track and aircraft type data (BADA) of real air traffic to some of the busiest airports in the USA (ATL, SFO and the New York Metroplex (JFK, EWR and LGA)).
The statistical analysis of the vertical profiles of CDA shows that (1) the mean geometric FPAs derived from the various simulated vertical profiles are consistently shallower than the 3° glideslope angle, and (2) there is a high level of variability in the vertical profiles of idle-thrust CDA even in the absence of uncertainties in external factors. Analysis from an operational feasibility perspective suggests that two key features of the performance-based Flight Management System (FMS), i.e., required time of arrival (RTA) and geometric descent path, would help reduce the unpredictability associated with the arrival time and vertical profile of aircraft guided by the FMS coupled with auto-pilot (AP) and auto-throttle (AT). The statistical analysis of the vertical profiles of CDA also suggests that for procedure design, window-type ('AT or above' and 'AT or below') altitude and FPA constraints are more realistic and useful than the obsolete 'AT'-type altitude constraint.
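The BADA TEM balance cited above, (T − D)·V = m·g·(dh/dt) + m·V·(dV/dt), can be rearranged to give the descent rate and hence the geometric FPA for a given thrust/drag state. A minimal sketch with illustrative numbers (the 60 t aircraft, speeds and force values below are invented for the example, not FACET or BADA data):

```python
import math

G = 9.80665  # gravitational acceleration [m/s^2]

def descent_rate(thrust_n, drag_n, mass_kg, v_tas, dv_dt):
    """dh/dt solved from the BADA Total Energy Model:
    (T - D)*V = m*g*(dh/dt) + m*V*(dV/dt)."""
    return ((thrust_n - drag_n) * v_tas - mass_kg * v_tas * dv_dt) / (mass_kg * G)

def flight_path_angle_deg(thrust_n, drag_n, mass_kg, v_tas, dv_dt=0.0):
    """Geometric FPA from the kinematic relation sin(gamma) = (dh/dt)/V."""
    return math.degrees(math.asin(
        descent_rate(thrust_n, drag_n, mass_kg, v_tas, dv_dt) / v_tas))

# Illustrative: near-idle thrust, constant TAS, 60 t aircraft at 120 m/s.
fpa = flight_path_angle_deg(thrust_n=5000.0, drag_n=35797.0,
                            mass_kg=60000.0, v_tas=120.0)
```

The same rearrangement shows why an idle-thrust profile is sensitive to wind, weight and configuration: every term on the right-hand side varies between flights, whereas a fixed-FPA procedure pins gamma and lets thrust absorb the variability.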
McCoy, Ryan J; O'Brien, Fergal J
2012-12-01
Tissue engineering approaches to developing functional substitutes are often highly complex, multivariate systems where many aspects of the biomaterials, bio-regulatory factors or cell sources may be controlled in an effort to enhance tissue formation. Furthermore, success is based on multiple performance criteria reflecting both the quantity and quality of the tissue produced. Managing the trade-offs between different performance criteria is a challenge. A "windows of operation" tool that graphically represents feasible operating spaces to achieve user-defined levels of performance has previously been described by researchers in the bio-processing industry. This paper demonstrates the value of "windows of operation" to the tissue engineering field using a perfusion-scaffold bioreactor system as a case study. In our laboratory, perfusion bioreactor systems are utilized in the context of bone tissue engineering to enhance the osteogenic differentiation of cell-seeded scaffolds. A key challenge of such perfusion bioreactor systems is to maximize the induction of osteogenesis but minimize cell detachment from the scaffold. Two key operating variables that influence these performance criteria are the mean scaffold pore size and flow-rate. Using cyclooxygenase-2 and osteopontin gene expression levels as surrogate indicators of osteogenesis, we employed the "windows of operation" methodology to rapidly identify feasible operating ranges for the mean scaffold pore size and flow-rate that achieved user-defined levels of performance for cell detachment and differentiation. Incorporation of such tools into the tissue engineer's armory will hopefully yield a greater understanding of the highly complex systems used and help aid decision making in future translation of products from the bench top to the market place.
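A "windows of operation" construction amounts to scanning the operating variables on a grid and keeping the points where every user-defined performance criterion is met simultaneously. A toy sketch (the two performance functions below are invented placeholders, not the paper's regression models):

```python
# Toy performance models over the two operating variables named above:
# mean scaffold pore size [um] and perfusion flow rate [mL/min].
def osteogenic_response(pore_um: float, flow_ml_min: float) -> float:
    return 0.01 * pore_um + 2.0 * flow_ml_min            # invented (toy)

def detachment_fraction(pore_um: float, flow_ml_min: float) -> float:
    return max(0.0, 0.5 * flow_ml_min - 0.001 * pore_um)  # invented (toy)

def window_of_operation(min_response=2.0, max_detach=0.2):
    """Grid-scan both variables; keep points meeting BOTH criteria."""
    window = []
    for pore in range(100, 401, 50):          # 100..400 um
        for flow10 in range(1, 21):           # 0.1..2.0 mL/min
            flow = flow10 / 10
            if (osteogenic_response(pore, flow) >= min_response
                    and detachment_fraction(pore, flow) <= max_detach):
                window.append((pore, flow))
    return window

feasible = window_of_operation()
```

Plotting `feasible` as a shaded region in the pore-size/flow-rate plane reproduces the graphical "window" that makes the trade-off between differentiation and detachment visible at a glance.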
Defining process design space for monoclonal antibody cell culture.
Abu-Absi, Susan Fugett; Yang, LiYing; Thompson, Patrick; Jiang, Canping; Kandula, Sunitha; Schilling, Bernhard; Shukla, Abhinav A
2010-08-15
The concept of design space has been taking root as a foundation of in-process control strategies for biopharmaceutical manufacturing processes. During mapping of the process design space, the multidimensional combination of operational variables is studied to quantify the impact on process performance in terms of productivity and product quality. An efficient methodology to map the design space for a monoclonal antibody cell culture process is described. A failure modes and effects analysis (FMEA) was used as the basis for the process characterization exercise. This was followed by an integrated study of the inoculum stage of the process which includes progressive shake flask and seed bioreactor steps. The operating conditions for the seed bioreactor were studied in an integrated fashion with the production bioreactor using a two stage design of experiments (DOE) methodology to enable optimization of operating conditions. A two level Resolution IV design was followed by a central composite design (CCD). These experiments enabled identification of the edge of failure and classification of the operational parameters as non-key, key or critical. In addition, the models generated from the data provide further insight into balancing productivity of the cell culture process with product quality considerations. Finally, process and product-related impurity clearance was evaluated by studies linking the upstream process with downstream purification. Production bioreactor parameters that directly influence antibody charge variants and glycosylation in CHO systems were identified.
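The second stage of the DOE methodology described above is a central composite design. In coded units a CCD is just the 2^k factorial corners, 2k axial (star) points at ±alpha, and replicated center points; a minimal sketch (illustrative, not the study's actual factor set):

```python
from itertools import product

def central_composite_design(k: int, alpha=None, n_center: int = 1):
    """Coded-unit CCD: 2^k factorial corners, 2k axial points at +/-alpha,
    plus n_center replicated center points. Default alpha = (2^k)^(1/4)
    gives a rotatable design."""
    if alpha is None:
        alpha = (2 ** k) ** 0.25
    corners = [list(p) for p in product((-1.0, 1.0), repeat=k)]
    axial = []
    for i in range(k):
        for a in (-alpha, alpha):
            pt = [0.0] * k
            pt[i] = a
            axial.append(pt)
    centers = [[0.0] * k for _ in range(n_center)]
    return corners + axial + centers

# Two bioreactor factors (e.g. pH and temperature, in coded units),
# three center replicates:
design = central_composite_design(k=2, n_center=3)
```

The axial points are what let a CCD estimate the pure quadratic terms that a two-level Resolution IV screening design cannot, which is why the two designs are run in sequence.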
Uncertainty quantification for accident management using ACE surrogates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Varuttamaseni, A.; Lee, J. C.; Youngblood, R. W.
The alternating conditional expectation (ACE) regression method is used to generate RELAP5 surrogates which are then used to determine the distribution of the peak clad temperature (PCT) during the loss of feedwater accident coupled with a subsequent initiation of the feed and bleed (F and B) operation in the Zion-1 nuclear power plant. The construction of the surrogates assumes conditional independence relations among key reactor parameters. The choice of parameters to model is based on the macroscopic balance statements governing the behavior of the reactor. The peak clad temperature is calculated based on the independent variables that are known to be important in determining the success of the F and B operation. The relationship between these independent variables and the plant parameters such as coolant pressure and temperature is represented by surrogates that are constructed based on 45 RELAP5 cases. The time-dependent PCT for different values of F and B parameters is calculated by sampling the independent variables from their probability distributions and propagating the information through two layers of surrogates. The results of our analysis show that the ACE surrogates are able to satisfactorily reproduce the behavior of the plant parameters even though a quasi-static assumption is primarily used in their construction. The PCT is found to be lower in cases where the F and B operation is initiated, compared to the case without F and B, regardless of the F and B parameters used. (authors)
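The two-layer propagation described above (sampled independent variables, a first surrogate predicting plant state, a second surrogate predicting PCT) can be sketched as follows. Both surrogates here are invented linear stand-ins, not ACE regressions fitted to RELAP5 runs, and all parameter ranges are illustrative:

```python
import random

# Layer 1: F&B operating parameters -> plant state (invented linear model).
def plant_state_surrogate(fb_delay_s, bleed_area):
    pressure = 15.0 - 2.0 * bleed_area + 0.001 * fb_delay_s   # MPa (toy)
    temperature = 560.0 + 0.05 * fb_delay_s                   # K   (toy)
    return pressure, temperature

# Layer 2: plant state -> peak clad temperature (invented linear model).
def pct_surrogate(pressure, temperature):
    return temperature + 40.0 * (pressure - 7.0)              # K   (toy)

def pct_distribution(n=5000, seed=1):
    """Sample the independent variables and propagate through both layers."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        delay = rng.uniform(0.0, 600.0)   # F&B initiation delay [s]
        area = rng.uniform(0.5, 1.0)      # relative bleed-valve area
        samples.append(pct_surrogate(*plant_state_surrogate(delay, area)))
    return samples

pcts = pct_distribution()
```

The resulting sample of PCT values is the dispersed distribution from which exceedance probabilities for accident-management decisions can be read off.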
Partitioned key-value store with atomic memory operations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bent, John M.; Faibish, Sorin; Grider, Gary
A partitioned key-value store is provided that supports atomic memory operations. A server performs a memory operation in a partitioned key-value store by receiving a request from an application for at least one atomic memory operation, the atomic memory operation comprising a memory address identifier; and, in response to the atomic memory operation, performing one or more of (i) reading a client-side memory location identified by the memory address identifier and storing one or more key-value pairs from the client-side memory location in a local key-value store of the server; and (ii) obtaining one or more key-value pairs from the local key-value store of the server and writing the obtained one or more key-value pairs into the client-side memory location identified by the memory address identifier. The server can perform functions obtained from a client-side memory location and return a result to the client using one or more of the atomic memory operations.
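The two operations (i) and (ii) above can be modeled in miniature: the server either pulls key-value pairs out of an addressed client-side memory region into its local partition, or pushes selected local pairs into that region. This toy ignores networking and real atomicity guarantees; the class and method names are invented for illustration:

```python
# Toy single-process model of the two atomic operations: client "memory"
# is a dict mapping address -> {key: value} regions.
class PartitionServer:
    def __init__(self):
        self.local_store: dict = {}

    def atomic_pull(self, client_memory: dict, address: int) -> None:
        """(i) Read the client-side location and absorb its pairs locally."""
        self.local_store.update(client_memory[address])

    def atomic_push(self, client_memory: dict, address: int, keys) -> None:
        """(ii) Write selected local pairs into the client-side location."""
        client_memory[address] = {k: self.local_store[k] for k in keys}

server = PartitionServer()
client_mem = {0x100: {"a": 1, "b": 2}}
server.atomic_pull(client_mem, 0x100)      # pairs flow client -> server
server.atomic_push(client_mem, 0x200, ["a"])  # pairs flow server -> client
```

In the real system each of these would be a single one-sided atomic operation against client memory, which is what lets the server also fetch and execute client-supplied functions without extra round trips.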
Trends in onroad transportation energy and emissions.
Frey, H Christopher
2018-06-01
Globally, 1.3 billion on-road vehicles consume 79 quadrillion BTU of energy, mostly gasoline and diesel fuels, emit 5.7 gigatonnes of CO2, and emit other pollutants to which approximately 200,000 annual premature deaths are attributed. Improved vehicle energy efficiency and emission controls have helped offset growth in vehicle activity. New technologies are diffusing into the vehicle fleet in response to fuel efficiency and emission standards. Empirical assessment of vehicle emissions is challenging because of myriad fuels and technologies, intervehicle variability, multiple emission processes, variability in operating conditions, and varying capabilities of measurement methods. Fuel economy and emissions regulations have been effective in reducing total emissions of key pollutants. Real-world fuel use and emissions are consistent with official values in the United States but not in Europe or countries that adopt European standards. Portable emission measurement systems, which uncovered a recent emissions cheating scandal, have a key role in regulatory programs to ensure conformity between "real driving emissions" and emission standards. The global vehicle fleet will experience tremendous growth, especially in Asia. Although existing data and modeling tools are useful, they are often based on convenience samples, small sample sizes, large variability, and unquantified uncertainty. Vehicles emit precursors to several important secondary pollutants, including ozone and secondary organic aerosols, which requires a multipollutant emissions and air quality management strategy. Gasoline and diesel are likely to persist as key energy sources to mid-century. Adoption of electric vehicles is not a panacea with regard to greenhouse gas emissions unless coupled with policies to change the power generation mix. Depending on how they are actually implemented and used, autonomous vehicles could lead to very large reductions or increases in energy consumption.
Numerous other trends are addressed with regard to technology, emissions controls, vehicle operations, emission measurements, impacts on exposure, and impacts on public health. Without specific policies to the contrary, fossil fuels are likely to continue to be the major source of on-road vehicle energy consumption. Fuel economy and emission standards are generally effective in achieving reductions per unit of vehicle activity. However, the number of vehicles and miles traveled will increase. Total energy use and emissions depend on factors such as fuels, technologies, land use, demographics, economics, road design, vehicle operation, societal values, and others that affect demand for transportation, mode choice, energy use, and emissions. Thus, there are many opportunities to influence future trends in vehicle energy use and emissions.
Miniature Variable Pressure Scanning Electron Microscope for In-Situ Imaging and Chemical Analysis
NASA Technical Reports Server (NTRS)
Gaskin, Jessica A.; Jerman, Gregory; Gregory, Don; Sampson, Allen R.
2012-01-01
NASA Marshall Space Flight Center (MSFC) is leading an effort to develop a Miniaturized Variable Pressure Scanning Electron Microscope (MVP-SEM) for in-situ imaging and chemical analysis of uncoated samples. This instrument development will be geared towards operation on Mars and builds on a previous MSFC design of a mini-SEM for the moon (funded through the NASA Planetary Instrument Definition and Development Program). Because Mars has a dramatically different environment than the moon, modifications to the MSFC lunar mini-SEM are necessary. Mainly, the higher atmospheric pressure calls for the use of an electron gun that can operate at High Vacuum, rather than Ultra-High Vacuum. The presence of a CO2-rich atmosphere also allows for the incorporation of a variable pressure system that enables the in-situ analysis of nonconductive geological specimens. Preliminary testing of Mars meteorites in a commercial Environmental SEM (Trademark) (FEI) confirms the usefulness of low-current/low-accelerating-voltage imaging and highlights the advantages of using the Mars atmosphere for environmental imaging. The unique capabilities of the MVP-SEM make it an ideal tool for pursuing key scientific goals of NASA's Flagship Mission Max-C; to perform in-situ science and collect and cache samples in preparation for sample return from Mars.
Ausserhofer, Dietmar; Rakic, Severin; Novo, Ahmed; Dropic, Emira; Fisekovic, Eldin; Sredic, Ana; Van Malderen, Greet
2016-06-01
We explored how selected 'positive deviant' healthcare facilities in Bosnia and Herzegovina approach the continuous development, adaptation, implementation, monitoring and evaluation of nursing-related standard operating procedures. Standardized nursing care is internationally recognized as a critical element of safe, high-quality health care; yet very little research has examined one of its key instruments: nursing-related standard operating procedures. Despite variability in Bosnia and Herzegovina's healthcare and nursing care quality, we assumed that some healthcare facilities would have developed effective strategies to elevate nursing quality and safety through the use of standard operating procedures. Guided by the 'positive deviance' approach, we used a multiple-case study design to examine a criterion sample of four facilities (two primary healthcare centres and two hospitals), collecting data via focus groups and individual interviews. In each studied facility, certification/accreditation processes were crucial to the initiation of continuous development, adaptation, implementation, monitoring and evaluation of nursing-related standard operating procedures. In one hospital and one primary healthcare centre, nurses working in advanced roles (i.e. quality coordinators) were responsible for developing and implementing nursing-related standard operating procedures. Across the four studied institutions, we identified a consistent approach to processes related to standard operating procedures. The certification/accreditation process is enabling necessary changes in institutions' organizational cultures, empowering nurses to take on advanced roles in improving the safety and quality of nursing care. Standardizing nursing procedures is key to improving the safety and quality of nursing care.
Nursing and health policies are needed in Bosnia and Herzegovina to establish a functioning institutional framework, including regulatory bodies, educational systems for developing nurses' capacities, and the inclusion of nursing-related standard operating procedures in certification/accreditation standards.
NASA Technical Reports Server (NTRS)
Rayner, J. T.; Chuter, T. C.; Mclean, I. S.; Radostitz, J. V.; Nolt, I. G.
1988-01-01
A technique for establishing a stable intermediate temperature stage in liquid He/liquid N2 double vessel cryostats is described. The tertiary cold stage, which can be tuned to any temperature between 10 and 60 K, is ideal for cooling IR sensors for use in astronomy and physics applications. The device is called a variable-conductance gas switch. It is essentially a small chamber, located between the cold stage and liquid helium cold-face, whose thermal conductance may be controlled by varying the pressure of helium gas within the chamber. A key feature of this device is the large range of temperature control achieved with a very small (less than 10 mW) heat input from the cryogenic temperature control switch.
Public transportation in the 1980's: responding to pressures of fiscal austerity. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meyer, M.D.; Hemily, P.B.
A telephone survey of 30 transit general managers was used to determine the general response of transit agencies to fiscal pressures. A more detailed case study of the Greater Bridgeport Transit District provided greater detail on the response process in one agency, especially focussing on the identification and implementation of feasible options. The concept of a transit agency's operational environment was used to identify the key analysis variables that guided the survey and case study. In general, these variables could be classified into two major categories: (1) those relating to the degree of decisionmaking independence of the agency, and (2) those relating to the degree to which an agency is capable of responding to financial pressures.
Gehring, Tobias; Händchen, Vitus; Duhme, Jörg; Furrer, Fabian; Franz, Torsten; Pacher, Christoph; Werner, Reinhard F; Schnabel, Roman
2015-10-30
Secret communication over public channels is one of the central pillars of a modern information society. Using quantum key distribution this is achieved without relying on the hardness of mathematical problems, which might be compromised by improved algorithms or by future quantum computers. State-of-the-art quantum key distribution requires composable security against coherent attacks for a finite number of distributed quantum states as well as robustness against implementation side channels. Here we present an implementation of continuous-variable quantum key distribution satisfying these requirements. Our implementation is based on the distribution of continuous-variable Einstein-Podolsky-Rosen entangled light. It is one-sided device independent, which means the security of the generated key is independent of any memory-free attacks on the remote detector. Since continuous-variable encoding is compatible with conventional optical communication technology, our work is a step towards practical implementations of quantum key distribution with state-of-the-art security based solely on telecom components.
A System Trade Study of Remote Infrared Imaging for Space Shuttle Reentry
NASA Technical Reports Server (NTRS)
Schwartz, Richard J.; Ross, Martin N.; Baize, Rosemary; Horvath, Thomas J.; Berry, Scott A.; Krasa, Paul W.
2008-01-01
A trade study reviewing the primary operational parameters concerning the deployment of imaging assets in support of the Hypersonic Thermodynamic Infrared Measurements (HYTHIRM) project was undertaken. The objective was to determine key variables and constraints for obtaining thermal images of the Space Shuttle orbiter during reentry. The trade study investigated the performance characteristics and operating environment of optical instrumentation that may be deployed during a HYTHIRM data collection mission, and specified contributions to the Point Spread Function. It also investigated the constraints that have to be considered in order to optimize deployment through the use of mission planning tools. These tools simulate the radiance modeling of the vehicle as well as the expected spatial resolution based on the Orbiter trajectory and placement of land based or airborne optical sensors for given Mach numbers. Lastly, this report focused on the tools and methodology that have to be in place for real-time mission planning in order to handle the myriad of variables such as trajectory ground track, weather, and instrumentation availability that may only be known in the hours prior to landing.
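One back-of-envelope check behind such sensor-placement trades is the diffraction-limited spatial resolution: an aperture of diameter D at wavelength lambda resolves roughly theta ≈ 1.22·lambda/D, which projected to the slant range gives the smallest resolvable feature on the vehicle. The aperture, wavelength and range values below are illustrative, not HYTHIRM asset parameters:

```python
def diffraction_limit_rad(wavelength_m: float, aperture_m: float) -> float:
    """Rayleigh-criterion angular resolution of a circular aperture."""
    return 1.22 * wavelength_m / aperture_m

def ground_resolution_m(wavelength_m, aperture_m, slant_range_m):
    """Smallest resolvable feature at the target, small-angle approximation."""
    return diffraction_limit_rad(wavelength_m, aperture_m) * slant_range_m

# Illustrative: mid-wave IR (4 um), 0.4 m aperture, 30 km slant range.
res = ground_resolution_m(4e-6, 0.4, 30e3)
```

In practice the delivered resolution is worse than this limit: atmospheric seeing, tracking jitter and detector sampling all widen the Point Spread Function, which is why the trade study tabulates each contribution separately.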
Modeling of human exposure to in-vehicle PM2.5 from environmental tobacco smoke
Cao, Ye; Frey, H. Christopher
2012-01-01
Environmental tobacco smoke (ETS) is estimated to be a significant contributor to in-vehicle human exposure to fine particulate matter of 2.5 µm or smaller (PM2.5). A critical assessment was conducted of a mass balance model for estimating PM2.5 concentration with smoking in a motor vehicle. Recommendations for the range of inputs to the mass-balance model are given based on literature review. Sensitivity analysis was used to determine which inputs should be prioritized for data collection. Air exchange rate (ACH) and the deposition rate have wider relative ranges of variation than other inputs, representing inter-individual variability in operations, and inter-vehicle variability in performance, respectively. Cigarette smoking and emission rates, and vehicle interior volume, are also key inputs. The in-vehicle ETS mass balance model was incorporated into the Stochastic Human Exposure and Dose Simulation for Particulate Matter (SHEDS-PM) model to quantify the potential magnitude and variability of in-vehicle exposures to ETS. The in-vehicle exposure also takes into account near-road incremental PM2.5 concentration from on-road emissions. Results of the probabilistic study indicate that ETS is a key contributor to the in-vehicle average and high-end exposure. Factors that mitigate in-vehicle ambient PM2.5 exposure lead to higher in-vehicle ETS exposure, and vice versa.
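A well-mixed single-compartment mass balance of the kind assessed above takes the form dC/dt = E/V − (ACH + k_dep)·C, with emission rate E, cabin volume V, air exchange rate ACH and deposition rate k_dep. A minimal sketch (parameter values are illustrative, not the paper's recommended input ranges):

```python
# In-cabin PM2.5 mass balance: dC/dt = E/V - (ACH + k_dep) * C.
def simulate_cabin_pm25(E_ug_hr=60000.0,  # emission rate while smoking (toy)
                        V_m3=3.0,         # cabin interior volume
                        ach_hr=20.0,      # air exchange rate
                        k_dep_hr=1.0,     # particle deposition rate
                        t_end_hr=0.5, dt_hr=1e-4, c0=0.0):
    """Forward-Euler integration of the well-mixed mass balance."""
    c, t = c0, 0.0
    while t < t_end_hr:
        dc = E_ug_hr / V_m3 - (ach_hr + k_dep_hr) * c
        c += dc * dt_hr
        t += dt_hr
    return c  # ug/m^3

c_final = simulate_cabin_pm25()
steady_state = 60000.0 / (3.0 * 21.0)  # analytic limit E / (V*(ACH+k_dep))
```

The structure of the equation makes the sensitivity result above intuitive: ACH appears only in the loss term, so anything that reduces air exchange (closed windows, recirculation) raises the in-cabin ETS concentration even as it blocks near-road ambient PM2.5.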
Entangled-coherent-state quantum key distribution with entanglement witnessing
NASA Astrophysics Data System (ADS)
Simon, David S.; Jaeger, Gregg; Sergienko, Alexander V.
2014-01-01
An entanglement-witness approach to quantum coherent-state key distribution and a system for its practical implementation are described. In this approach, eavesdropping can be detected by a change in sign of either of two witness functions: an entanglement witness S or an eavesdropping witness W. The effects of loss and eavesdropping on system operation are evaluated as a function of distance. Although the eavesdropping witness W does not directly witness entanglement for the system, its behavior remains related to that of the true entanglement witness S. Furthermore, W is easier to implement experimentally than S. W crosses the axis at a finite distance, in a manner reminiscent of entanglement sudden death. The distance at which this occurs changes measurably when an eavesdropper is present. The distance dependence of the two witnesses due to amplitude reduction and due to increased variance resulting from both ordinary propagation losses and possible eavesdropping activity is provided. Finally, the information content and secure key rate of a continuous variable protocol using this witness approach are given.
Simultaneous classical communication and quantum key distribution using continuous variables
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qi, Bing
Currently, classical optical communication systems employing strong laser pulses and quantum key distribution (QKD) systems working at single-photon levels are very different communication modalities. Dedicated devices are commonly required to implement QKD. In this paper, we propose a scheme which allows classical communication and QKD to be implemented simultaneously using the same communication infrastructure. More specifically, we propose a coherent communication scheme where both the bits for classical communication and the Gaussian distributed random numbers for QKD are encoded on the same weak coherent pulse and decoded by the same coherent receiver. Simulation results based on practical system parameters show that both deterministic classical communication with a bit error rate of 10^-9 and secure key distribution could be achieved over tens of kilometers of single-mode fibers. It is conceivable that in the future coherent optical communication network, QKD will be operated in the background of classical communication at a minimal cost.
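A toy model of the dual encoding described in this abstract: a large binary displacement of one quadrature carries the classical bit, and a small Gaussian displacement riding on it carries the CV-QKD symbol. The amplitudes and function names are hypothetical; the real scheme operates at weak coherent-pulse levels with shot-noise-limited homodyne detection.

```python
import random

random.seed(0)

def encode(bit, amp=20.0, sigma_key=0.5):
    """Superpose a binary displacement (classical bit) and a small
    Gaussian displacement (CV-QKD symbol) on one quadrature."""
    key_symbol = random.gauss(0.0, sigma_key)
    quadrature = (amp if bit else -amp) + key_symbol
    return quadrature, key_symbol

def decode(quadrature, amp=20.0):
    """One coherent receiver recovers both payloads from the same pulse:
    the sign gives the bit, the residual gives the key symbol."""
    bit = quadrature > 0.0
    key_symbol = quadrature - (amp if bit else -amp)
    return bit, key_symbol
```

Because the binary displacement dwarfs the Gaussian modulation, the bit decision is essentially error-free while the key symbol survives as the residual after subtracting the decided displacement.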
Mitchell, D A; von Meien, O F
2000-04-20
Zymotis bioreactors for solid-state fermentation (SSF) are packed-bed bioreactors with internal cooling plates. This design has the potential to overcome the problem of heat removal, which is one of the main challenges in SSF. In ordinary packed-bed bioreactors, which lack internal plates, large axial temperature gradients arise, leading to poor microbial growth at the end of the bed near the air outlet. The Zymotis design is suitable for SSF processes in which the substrate bed must be maintained static, but little is known about how to design and operate Zymotis bioreactors. We use a two-dimensional heat transfer model, describing the growth of Aspergillus niger on a starchy substrate, to provide guidelines for the optimum design and operation of Zymotis bioreactors. As for ordinary packed beds, the superficial velocity of the process air is a key variable. However, the Zymotis design introduces other important variables, namely, the spacing between the internal cooling plates and the temperature of the cooling water. High productivities can be achieved at large scale, but only if small spacings between the cooling plates are used, and if the cooling water temperature is varied during the fermentation in response to bed temperatures. Copyright 2000 John Wiley & Sons, Inc.
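Why small plate spacings matter can be illustrated with a one-dimensional steady-state conduction sketch between two cooling plates with uniform heat generation q. This is a deliberate simplification of the paper's two-dimensional model, and all values below are hypothetical.

```python
def peak_bed_temp(q_w_m3, spacing_m, k_w_mk, coolant_temp_c):
    """Midplane temperature for 1-D conduction with uniform generation:
    T_max = T_coolant + q * L^2 / (8 * k).
    The quadratic dependence on plate spacing L means halving the spacing
    quarters the temperature rise above the cooling water."""
    return coolant_temp_c + q_w_m3 * spacing_m ** 2 / (8.0 * k_w_mk)
```

The quadratic spacing term is the design lever the abstract highlights: lowering the coolant temperature shifts the whole profile down, while narrowing the plate spacing flattens it.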
Källhammer, Jan-Erik; Smith, Kip
2012-08-01
We investigated five contextual variables that we hypothesized would influence driver acceptance of alerts to pedestrians issued by a night vision active safety system to inform the specification of the system's alerting strategies. Driver acceptance of automotive active safety systems is a key factor to promote their use and implies a need to assess factors influencing driver acceptance. In a field operational test, 10 drivers drove instrumented vehicles equipped with a preproduction night vision system with pedestrian detection software. In a follow-up experiment, the 10 drivers and 25 additional volunteers without experience with the system watched 57 clips with pedestrian encounters gathered during the field operational test. They rated the acceptance of an alert to each pedestrian encounter. Levels of rating concordance were significant between drivers who experienced the encounters and participants who did not. Two contextual variables, pedestrian location and motion, were found to influence ratings. Alerts were more accepted when pedestrians were close to or moving toward the vehicle's path. The study demonstrates the utility of using subjective driver acceptance ratings to inform the design of active safety systems and to leverage expensive field operational test data within the confines of the laboratory. The design of alerting strategies for active safety systems needs to heed the driver's contextual sensitivity to issued alerts.
Long-distance continuous-variable quantum key distribution by controlling excess noise
NASA Astrophysics Data System (ADS)
Huang, Duan; Huang, Peng; Lin, Dakai; Zeng, Guihua
2016-01-01
Quantum cryptography founded on the laws of physics could revolutionize the way in which communication information is protected. Significant progress in long-distance quantum key distribution based on discrete variables has made secure quantum communication available in real-world conditions. However, the alternative approach implemented with continuous variables has not yet reached secure distances beyond 100 km. Here, we overcome the previous range limitation by controlling system excess noise and report such a long-distance continuous-variable quantum key distribution experiment. Our result paves the way to large-scale secure quantum communication with continuous variables and serves as a stepping stone in the quest for a quantum network.
EPA announced the availability of the final report, Uncertainty and Variability in Physiologically-Based Pharmacokinetic (PBPK) Models: Key Issues and Case Studies. This report summarizes some of the recent progress in characterizing uncertainty and variability in physi...
Bayesian network representing system dynamics in risk analysis of nuclear systems
NASA Astrophysics Data System (ADS)
Varuttamaseni, Athi
2011-12-01
A dynamic Bayesian network (DBN) model is used in conjunction with the alternating conditional expectation (ACE) regression method to analyze the risk associated with the loss of feedwater accident coupled with a subsequent initiation of the feed and bleed operation in the Zion-1 nuclear power plant. The use of the DBN allows the joint probability distribution to be factorized, enabling the analysis to be done on many simpler network structures rather than on one complicated structure. The construction of the DBN model assumes conditional independence relations among certain key reactor parameters. The choice of parameters to model is based on considerations of the macroscopic balance statements governing the behavior of the reactor under a quasi-static assumption. The DBN is used to relate the peak clad temperature to a set of independent variables that are known to be important in determining the success of the feed and bleed operation. A simple linear relationship is then used to relate the clad temperature to the core damage probability. To obtain a quantitative relationship among different nodes in the DBN, surrogates of the RELAP5 reactor transient analysis code are used. These surrogates are generated by applying the ACE algorithm to output data obtained from about 50 RELAP5 cases covering a wide range of the selected independent variables. These surrogates allow important safety parameters such as the fuel clad temperature to be expressed as a function of key reactor parameters such as the coolant temperature and pressure together with important independent variables such as the scram delay time. The time-dependent core damage probability is calculated by sampling the independent variables from their probability distributions and propagating the information up through the Bayesian network to give the clad temperature.
With the knowledge of the clad temperature and the assumption that the core damage probability has a one-to-one relationship to it, we have calculated the core damage probability as a function of transient time. The use of the DBN model in combination with ACE allows risk analysis to be performed with much less effort than if the analysis were done using the standard techniques.
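The sampling-and-propagation step described above can be sketched as a plain Monte Carlo loop. The surrogate coefficients and the linear damage ramp below are placeholders standing in for the ACE-fitted RELAP5 surrogates, which are not reproduced here; the input distributions are likewise invented.

```python
import random

random.seed(1)

def clad_temp(scram_delay_s, coolant_temp_k):
    """Placeholder linear surrogate standing in for the ACE fit of
    RELAP5 output (hypothetical coefficients)."""
    return 600.0 + 8.0 * scram_delay_s + 0.5 * (coolant_temp_k - 550.0)

def damage_prob(t_clad_k, t_onset=1000.0, t_fail=1477.0):
    """Simple linear ramp between damage-onset and failure temperature,
    mirroring the one-to-one clad-temperature/damage assumption."""
    x = (t_clad_k - t_onset) / (t_fail - t_onset)
    return min(max(x, 0.0), 1.0)

# Sample the independent variables and propagate up through the network
samples = [damage_prob(clad_temp(random.uniform(0.0, 60.0),
                                 random.gauss(560.0, 10.0)))
           for _ in range(10_000)]
p_core_damage = sum(samples) / len(samples)
```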
NASA Astrophysics Data System (ADS)
Cossarini, Gianpiero; D'Ortenzio, Fabrizio; Mariotti, Laura; Mignot, Alexandre; Salon, Stefano
2017-04-01
The Mediterranean Sea is a very promising site to develop and test the assimilation of Bio-Argo data since 1) the Bio-Argo network is one of the densest of the global ocean, and 2) a consolidated data assimilation framework of biogeochemical variables (3DVAR-BIO, presently based on assimilation of satellite-estimated surface chlorophyll data) already exists within the CMEMS biogeochemical model system for the Mediterranean Sea. The MASSIMILI project, granted by the CMEMS Service Evolution initiative, aims to develop the assimilation of Bio-Argo float data into the CMEMS biogeochemical model system of the Mediterranean Sea, by means of an upgrade of the 3DVAR-BIO scheme. Specific developments of the 3DVAR-BIO scheme focus on estimating new operators of the variational decomposition of the background error covariance matrix and on the implementation of the new observation operator specifically for the Bio-Argo float vertical profile data. In particular, a new horizontal covariance operator for chlorophyll, nitrate and oxygen is based on 3D fields of horizontal correlation radius calculated from a long-term reanalysis simulation. A new vertical covariance operator is built on monthly and spatially varying EOF decomposition to account for the spatiotemporal variability of the vertical structure of the three variables' error covariance. Further, the observation error covariance is a key factor for an effective assimilation of the Bio-Argo data into the model dynamics. The sensitivities of assimilation to the different factors are estimated. First results of the implementation of the new 3DVAR-BIO scheme show the impact of Bio-Argo data on the 3D fields of chlorophyll, nitrate and oxygen. Tuning the length scale factors of horizontal covariance, analysing the sensitivity of the observation error covariance, introducing non-diagonal biogeochemical covariance operator and non-diagonal multi-platform operator (i.e. 
Bio-Argo and satellite) are crucial future steps for the success of the MASSIMILI project. In our contribution, we will discuss the recent and promising advances this strategic project has made in the past year and its potential for the whole operational biogeochemical modelling community.
Zhang, Hang; Xu, Qingyan; Liu, Baicheng
2014-01-01
The rapid development of numerical modeling techniques has led to more accurate results in modeling metal solidification processes. In this study, the cellular automaton-finite difference (CA-FD) method was used to simulate the directional solidification (DS) process of single crystal (SX) superalloy blade samples. Experiments were carried out to validate the simulation results. Meanwhile, an intelligent model based on fuzzy control theory was built to optimize the complicated DS process. Several key parameters, such as mushy zone width and temperature difference at the cast-mold interface, were recognized as the input variables. The input variables were processed by the multivariable fuzzy rule to obtain the output adjustment of the withdrawal rate (v), a key technological parameter. The multivariable fuzzy rule was built based on structural features of the casting (such as the relationship between section area and the delay time of the temperature response to changes in v) and on the professional experience of the operator. Then, the fuzzy control model coupled with the CA-FD method could be used to optimize v in real time during the manufacturing process. The optimized process was proven to be more flexible and adaptive for a steady and stray-grain-free DS process. PMID:28788535
Civil air transport: A fresh look at power-by-wire and fly-by-light
NASA Technical Reports Server (NTRS)
Sundberg, Gale R.
1991-01-01
Power-by-wire (PBW) is a key element under subsonic transport flight systems technology with potential savings of over 10 percent in operating empty weight and in fuel consumption compared to today's transport aircraft. The PBW technology substitutes electrical actuation in place of centralized hydraulics, uses internal starter-motor/generators and eliminates the need for variable engine bleed air to supply cabin comfort. The application of advanced fiber optics to the electrical power system controls, to built-in-test (BIT) equipment, and to fly-by-light (FBL) flight controls provides additional benefits in lightning and high energy radio frequency (HERF) immunity over existing mechanical or even fly-by-wire controls. The program plan is reviewed and a snapshot is given of the key technologies and their benefits to all future aircraft, both civil and military.
NASA Astrophysics Data System (ADS)
Heine, Frank; Saucke, Karen; Troendle, Daniel; Motzigemba, Matthias; Bischl, Hermann; Elser, Dominique; Marquardt, Christoph; Henninger, Hennes; Meyer, Rolf; Richter, Ines; Sodnik, Zoran
2017-02-01
Optical ground stations can be an alternative to radio frequency based transmit (forward) and receive (return) systems for data relay services and other applications, including direct-to-Earth optical communications from low-Earth-orbit spacecraft, deep space receivers, space-based quantum key distribution systems and Tbps-capacity feeder links to geostationary spacecraft. The Tesat Transportable Adaptive Optical Ground Station has been operational since September 2015 at the European Space Agency site in Tenerife, Spain. This paper reports the results of the 2016 experimental campaigns, including the characterization of the optical channel from Tenerife for an optimized coding scheme, the performance of the T-AOGS under different atmospheric conditions, and the first successful measurements of the suitability of the Alphasat LCT optical downlink performance for future continuous-variable quantum key distribution systems.
The art of spacecraft design: A multidisciplinary challenge
NASA Technical Reports Server (NTRS)
Abdi, F.; Ide, H.; Levine, M.; Austel, L.
1989-01-01
Actual design turn-around time has become shorter due to the use of optimization techniques which have been introduced into the design process. It seems that what, how and when to use these optimization techniques may be the key factor for future aircraft engineering operations. Another important aspect of this technique is that complex physical phenomena can be modeled by a simple mathematical equation. The new powerful multilevel methodology reduces time-consuming analysis significantly while maintaining the coupling effects. This simultaneous analysis method stems from the implicit function theorem and system sensitivity derivatives of input variables. Use of the Taylor's series expansion and finite differencing technique for sensitivity derivatives in each discipline makes this approach unique for screening dominant variables from nondominant variables. In this study, the current Computational Fluid Dynamics (CFD) aerodynamic and sensitivity derivative/optimization techniques are applied for a simple cone-type forebody of a high-speed vehicle configuration to understand basic aerodynamic/structure interaction in a hypersonic flight condition.
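The screening step described above, separating dominant from nondominant variables by finite-difference sensitivity derivatives, can be sketched in a few lines. The objective function and tolerances are illustrative, not the paper's CFD model.

```python
def sensitivities(f, x0, h=1e-6):
    """Forward-difference estimate of df/dx_i at x0 for each input
    variable; the magnitudes can be used to screen dominant from
    nondominant variables before a full multilevel optimization."""
    base = f(x0)
    grads = []
    for i in range(len(x0)):
        x = list(x0)
        x[i] += h           # perturb one variable at a time
        grads.append((f(x) - base) / h)
    return grads

# Example objective with one dominant and one weak variable
grads = sensitivities(lambda x: 10.0 * x[0] ** 2 + 0.01 * x[1], [1.0, 1.0])
```

Ranking by |df/dx_i| then lets the designer carry only the dominant variables into the coupled multilevel analysis.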
NO2 and HCHO variability in Mexico City from MAX-DOAS measurements
NASA Astrophysics Data System (ADS)
Grutter, M.; Friedrich, M. M.; Rivera, C. I.; Arellano, E. J.; Stremme, W.
2015-12-01
Atmospheric studies in large cities are of great relevance since pollution affects air quality and human health. A network of Multi Axis Differential Optical Absorption Spectrometers (MAX-DOAS) has been established at strategic sites within the Mexico City metropolitan area. Four instruments are now in operation with the aim to study the variability and spatial distribution of key pollutants, providing results of O4, NO2 and HCHO slant column densities (SCD). A numerical code has been written to retrieve gas profiles of NO2 and HCHO using radiative transfer simulations. We present the first results of the variability of these trace gases, which will bring new insight into the current knowledge of transport patterns, emissions, and the frequency and origin of extraordinary events. Results of the vertical column density (VCD) variability of NO2 and HCHO in Mexico City are presented. These studies are useful to validate current and future satellite observations such as OMI, TROPOMI and TEMPO.
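For orientation, the simplest SCD-to-VCD conversion divides by a geometric air-mass factor. The retrievals described above use full radiative-transfer modelling instead, so this is only a hedged illustration of the relationship between the two column quantities.

```python
import math

def vcd_geometric(scd, elevation_deg):
    """Convert a slant column density to a vertical column density with a
    geometric air-mass factor AMF = 1/sin(elevation); valid only for an
    optically thin, horizontally homogeneous layer at moderate angles."""
    amf = 1.0 / math.sin(math.radians(elevation_deg))
    return scd / amf
```

At zenith (90° elevation) the slant and vertical columns coincide; at lower elevations the light path through the layer lengthens and the AMF grows.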
2013-06-01
Using a Functional Simulation of Crisis Management to Test the C2 Agility Model Parameters on Key Performance Variables
18th ICCRTS
... command in crisis management. C2 Agility Model: Agility can be conceptualized at a number of different levels; for instance at the team
A financial planning model for estimating hospital debt capacity.
Hopkins, D S; Heath, D; Levin, P J
1982-01-01
A computer-based financial planning model was formulated to measure the impact of a major capital improvement project on the fiscal health of Stanford University Hospital. The model had to be responsive to many variables and easy to use, so as to allow for the testing of numerous alternatives. Special efforts were made to identify the key variables that needed to be represented in the model and to include all known links between capital investment, debt, and hospital operating expenses. Growth in the number of patient days of care was singled out as a major source of uncertainty that would have profound effects on the hospital's finances. Therefore this variable was subjected to special scrutiny in terms of efforts to gauge expected demographic trends and market forces. In addition, alternative base runs of the model were made under three distinct patient-demand assumptions. Use of the model enabled planners at the Stanford University Hospital (a) to determine that a proposed modernization plan was financially feasible under a reasonable (that is, not unduly optimistic) set of assumptions and (b) to examine the major sources of risk. Other than patient demand, these sources were found to be gross revenues per patient, operating costs, and future limitations on government reimbursement programs. When the likely financial consequences of these risks were estimated, both separately and in combination, it was determined that even if two or more assumptions took a somewhat more negative turn than was expected, the hospital would be able to offset adverse consequences by a relatively minor reduction in operating costs. PMID:7111658
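The core of such a planning model is a year-by-year projection run under alternative patient-demand assumptions. The sketch below captures only that skeleton; every figure (patient days, per-day revenue and cost, debt service) is an invented placeholder, not Stanford's data.

```python
def operating_margin(years, patient_days0, growth, rev_per_day,
                     cost_per_day, annual_debt_service):
    """Project the margin remaining after debt service for each year,
    with patient days compounding at the assumed demand growth rate."""
    margins, days = [], patient_days0
    for _ in range(years):
        margins.append(days * (rev_per_day - cost_per_day) - annual_debt_service)
        days *= 1.0 + growth
    return margins

# Two of three patient-demand scenarios, as in the base runs described above
pessimistic = operating_margin(5, 100_000, -0.01, 900.0, 860.0, 3_500_000)
expected = operating_margin(5, 100_000, 0.01, 900.0, 860.0, 3_500_000)
```

Comparing scenario runs shows how sensitive debt capacity is to the patient-day growth assumption, which is why that variable received special scrutiny.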
Leverrier, Anthony; Grangier, Philippe
2009-05-08
We present a continuous-variable quantum key distribution protocol combining a discrete modulation and reverse reconciliation. This protocol is proven unconditionally secure and allows the distribution of secret keys over long distances, thanks to a reverse reconciliation scheme efficient at very low signal-to-noise ratio.
Operator agency in process intervention: tampering versus application of tacit knowledge
NASA Astrophysics Data System (ADS)
Van Gestel, P.; Pons, D. J.; Pulakanam, V.
2015-09-01
Statistical process control (SPC) theory takes a negative view of adjustment of process settings, which is termed tampering. In contrast, quality and lean programmes actively encourage operators to intervene and exercise personal agency in the improvement of production outcomes. This creates a conflict that requires operator judgement: How does one differentiate between unnecessary tampering and needful intervention? A further difficulty is that operators apply tacit knowledge to such judgements. There is a need to determine where in a given production process the operators are applying tacit knowledge, and whether this is hindering or aiding quality outcomes. The work involved the conjoint application of systems engineering, statistics, and knowledge management principles, in the context of a case study. Systems engineering was used to create a functional model of a real plant. Actual plant data were analysed with the statistical methods of ANOVA, feature selection, and link analysis. This identified the variables to which the output quality was most sensitive. These key variables were mapped back to the functional model. Fieldwork was then directed to those areas to prospect for operator judgement activities. A natural conversational approach was used to determine where and how operators were applying judgement. This contrasts with the interrogative approach of conventional knowledge management. Data are presented for a case study of a meat rendering plant. The results identify specific areas where operators' tacit knowledge and mental models contribute to quality outcomes and untangle the motivations behind their agency. Also evident is how novice and expert operators apply their knowledge differently. Novices were focussed on meeting throughput objectives, and their incomplete understanding of the plant characteristics led them to inadvertently sacrifice quality in the pursuit of productivity in certain situations. 
Operators' responses to the plant are affected by their individual mental models of the plant, which differ between operators and have variable validity. Their behaviour is also affected by differing interpretations of how their personal agency should be applied to the achievement of production objectives. The methodology developed here is an integration of systems engineering, statistical analysis, and knowledge management. It shows how to determine where in a given production process the operator intervention is occurring, how it affects quality outcomes, and what tacit knowledge operators are using. It thereby assists the continuous quality improvement processes in a different way to SPC. A second contribution is the provision of a novel methodology for knowledge management, one that circumvents the usual codification barriers to knowledge management.
Optimality of Gaussian attacks in continuous-variable quantum cryptography.
Navascués, Miguel; Grosshans, Frédéric; Acín, Antonio
2006-11-10
We analyze the asymptotic security of the family of Gaussian modulated quantum key distribution protocols for continuous-variables systems. We prove that the Gaussian unitary attack is optimal for all the considered bounds on the key rate when the first and second momenta of the canonical variables involved are known by the honest parties.
End-to-end communication test on variable length packet structures utilizing AOS testbed
NASA Technical Reports Server (NTRS)
Miller, Warner H.; Sank, V.; Fong, Wai; Miko, J.; Powers, M.; Folk, John; Conaway, B.; Michael, K.; Yeh, Pen-Shu
1994-01-01
This paper describes a communication test, which successfully demonstrated the transfer of losslessly compressed images in an end-to-end system. These compressed images were first formatted into variable-length Consultative Committee for Space Data Systems (CCSDS) packets in the Advanced Orbiting System Testbed (AOST). The CCSDS data structures were transferred from the AOST to the Radio Frequency Simulations Operations Center (RFSOC), via a fiber optic link, where data was then transmitted through the Tracking and Data Relay Satellite System (TDRSS). The received data acquired at the White Sands Complex (WSC) was transferred back to the AOST, where the data was captured and decompressed back to the original images. This paper describes the compression algorithm, the AOST configuration, key flight components, data formats, and the communication link characteristics and test results.
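A minimal sketch of variable-length space-packet framing. The six-octet primary header layout follows the CCSDS Space Packet convention (three words: version/type/secondary-header-flag/APID, sequence flags/count, and a length field holding one less than the data-field octet count), but the APID, flag values, and payload here are arbitrary illustrations, not the testbed's actual formats.

```python
import struct

def make_packet(apid, seq_count, data):
    """Frame variable-length data behind a 6-octet primary header."""
    word1 = (0 << 13) | (0 << 12) | (0 << 11) | (apid & 0x7FF)  # version|type|sec-hdr|APID
    word2 = (0b11 << 14) | (seq_count & 0x3FFF)                 # unsegmented user data
    word3 = len(data) - 1                                       # data length minus one
    return struct.pack(">HHH", word1, word2, word3) + data

def parse_packet(buf):
    """Recover the APID and the variable-length data field."""
    word1, _word2, word3 = struct.unpack(">HHH", buf[:6])
    apid = word1 & 0x7FF
    data_len = word3 + 1
    return apid, buf[6:6 + data_len]
```

The length-minus-one field is what lets each losslessly compressed image occupy exactly as many octets as it needs, rather than a fixed frame size.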
FDTD modelling of induced polarization phenomena in transient electromagnetics
NASA Astrophysics Data System (ADS)
Commer, Michael; Petrov, Peter V.; Newman, Gregory A.
2017-04-01
The finite-difference time-domain scheme is augmented in order to treat the modelling of transient electromagnetic signals containing induced polarization effects from 3-D distributions of polarizable media. Compared to the non-dispersive problem, the discrete dispersive Maxwell system contains costly convolution operators. Key components of our solution for highly digitized model meshes are Debye decomposition and composite memory variables. We use the popular Cole-Cole model of dispersion to describe the frequency-dependent behaviour of electrical conductivity. Its inversely Laplace-transformed Debye decomposition results in a series of time convolutions between the electric field and exponential decay functions, with the latter reflecting each Debye constituent's individual relaxation time. These function types in the discrete-time convolution allow for their substitution by memory variables, annihilating the otherwise prohibitive computing demands. Numerical examples demonstrate the efficiency and practicality of our algorithm.
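The memory-variable substitution works because each Debye branch's exponential kernel turns the time convolution into a one-step recursion, so the full field history never needs to be stored. A scalar sketch of a single branch (first-order update; the coefficients are illustrative, not a fitted Cole-Cole decomposition):

```python
import math

def update_memory(psi, e_field, a, tau, dt):
    """One time step of recursive convolution for the kernel
    k(t) = a*exp(-t/tau):
        psi[n+1] = exp(-dt/tau) * psi[n] + a*dt * E[n]
    replacing the O(N) convolution sum with O(1) work per step."""
    return math.exp(-dt / tau) * psi + a * dt * e_field

# For a constant field E=1 the memory variable settles near a*tau*E,
# the exact value of the continuous convolution integral
psi = 0.0
for _ in range(20_000):
    psi = update_memory(psi, 1.0, a=2.0, tau=0.1, dt=1e-3)
```

Summing one such recursion per Debye constituent reproduces the composite memory variable the abstract describes.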
Charting Multidisciplinary Team External Dynamics Using a Systems Thinking Approach
NASA Technical Reports Server (NTRS)
Barthelemy, Jean-Francois; Waszak, Martin R.; Jones, Kenneth M.; Silcox, Richard J.; Silva, Walter A.; Nowaczyk, Ronald H.
1998-01-01
Using the formalism provided by the Systems Thinking approach, the dynamics present when operating multidisciplinary teams are examined in the context of the NASA Langley Research and Technology Group, an R&D organization organized along functional lines. The paper focuses on external dynamics and examines how an organization creates and nurtures the teams and how it disseminates and retains the lessons and expertise created by the multidisciplinary activities. Key variables are selected and the causal relationships between the variables are identified. Five "stories" are told, each of which touches on a different aspect of the dynamics. The Systems Thinking Approach provides recommendations as to interventions that will facilitate the introduction of multidisciplinary teams and that therefore will increase the likelihood of performing successful multidisciplinary developments. These interventions can be carried out either by individual researchers, line management or program management.
Funkenbusch, Paul D; Rotella, Mario; Ercoli, Carlo
2015-04-01
Laboratory studies of tooth preparation are often performed under a limited range of conditions involving single values for all variables other than the 1 being tested. In contrast, in clinical settings not all variables can be tightly controlled. For example, a new dental rotary cutting instrument may be tested in the laboratory by making a specific cut with a fixed force, but in clinical practice, the instrument must make different cuts with individual dentists applying a range of different forces. Therefore, the broad applicability of laboratory results to diverse clinical conditions is uncertain and the comparison of effects across studies is difficult. The purpose of this study was to examine the effect of 9 process variables on dental cutting in a single experiment, allowing each variable to be robustly tested over a range of values for the other 8 and permitting a direct comparison of the relative importance of each on the cutting process. The effects of 9 key process variables on the efficiency of a simulated dental cutting operation were measured. A fractional factorial experiment was conducted by using a computer-controlled, dedicated testing apparatus to simulate dental cutting procedures and Macor blocks as the cutting substrate. Analysis of Variance (ANOVA) was used to judge the statistical significance (α=.05). Five variables consistently produced large, statistically significant effects (target applied load, cut length, starting rpm, diamond grit size, and cut type), while 4 variables produced relatively small, statistically insignificant effects (number of cooling ports, rotary cutting instrument diameter, disposability, and water flow rate). The control exerted by the dentist, simulated in this study by targeting a specific level of applied force, was the single most important factor affecting cutting efficiency. Cutting efficiency was also significantly affected by factors simulating patient/clinical circumstances as well as hardware choices. 
These results highlight the importance of local clinical conditions (procedure, dentist) in understanding dental cutting procedures and in designing adequate experimental methodologies for future studies. Copyright © 2015 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
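The fractional factorial logic described above (each factor tested across a range of the others) can be sketched with a toy half-fraction design. This is an illustrative reconstruction, not the study's apparatus: the factor names and response coefficients below are hypothetical, chosen only to show how a balanced design separates large effects from small ones.

```python
from itertools import product

# Half-fraction 2^(4-1) design: generate the full 2^3 design in A, B, C,
# then set D = A*B*C (defining relation I = ABCD). Levels coded -1/+1.
runs = [(a, b, c, a * b * c) for a, b, c in product((-1, 1), repeat=3)]

def response(a, b, c, d):
    # Synthetic cutting-efficiency model (hypothetical coefficients):
    # factors A and B matter strongly, C and D are nearly inert.
    return 50 + 8 * a + 5 * b + 0.2 * c - 0.1 * d

def main_effect(runs, ys, idx):
    """Average response at +1 minus average response at -1 for factor idx."""
    hi = [y for r, y in zip(runs, ys) if r[idx] == 1]
    lo = [y for r, y in zip(runs, ys) if r[idx] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

ys = [response(*r) for r in runs]
effects = {name: main_effect(runs, ys, i) for i, name in enumerate("ABCD")}
print(effects)  # A and B dominate; C and D are negligible
```

Because the design is balanced, each main effect is estimated over all level combinations of the other factors, which is exactly the "robustly tested over a range of values for the other 8" property the abstract emphasizes.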
Grassmann phase space theory and the Jaynes-Cummings model
NASA Astrophysics Data System (ADS)
Dalton, B. J.; Garraway, B. M.; Jeffers, J.; Barnett, S. M.
2013-07-01
The Jaynes-Cummings model of a two-level atom in a single mode cavity is of fundamental importance both in quantum optics and in quantum physics generally, involving the interaction of two simple quantum systems—one fermionic system (the TLA), the other bosonic (the cavity mode). Depending on the initial conditions a variety of interesting effects occur, ranging from ongoing oscillations of the atomic population difference at the Rabi frequency when the atom is excited and the cavity is in an n-photon Fock state, to collapses and revivals of these oscillations starting with the atom unexcited and the cavity mode in a coherent state. The observation of revivals for Rydberg atoms in a high-Q microwave cavity is key experimental evidence for quantisation of the EM field. Theoretical treatments of the Jaynes-Cummings model based on expanding the state vector in terms of products of atomic and n-photon states and deriving coupled equations for the amplitudes are a well-known and simple method for determining the effects. In quantum optics however, the behaviour of the bosonic quantum EM field is often treated using phase space methods, where the bosonic mode annihilation and creation operators are represented by c-number phase space variables, with the density operator represented by a distribution function of these variables. Fokker-Planck equations for the distribution function are obtained, and either used directly to determine quantities of experimental interest or used to develop c-number Langevin equations for stochastic versions of the phase space variables from which experimental quantities are obtained as stochastic averages. 
Phase space methods have also been developed to include atomic systems, with the atomic spin operators being represented by c-number phase space variables, and distribution functions involving these variables and those for any bosonic modes being shown to satisfy Fokker-Planck equations from which c-number Langevin equations are often developed. However, atomic spin operators satisfy the standard angular momentum commutation rules rather than the commutation rules for bosonic annihilation and creation operators, and are in fact second order combinations of fermionic annihilation and creation operators. Though phase space methods in which the fermionic operators are represented directly by c-number phase space variables have not been successful, the anti-commutation rules for these operators suggest the possibility of using Grassmann variables—which have similar anti-commutation properties. However, in spite of the seminal work by Cahill and Glauber and a few applications, the use of phase space methods in quantum optics to treat fermionic systems by representing fermionic annihilation and creation operators directly by Grassmann phase space variables is rather rare. This paper shows that phase space methods using a positive P type distribution function involving both c-number variables (for the cavity mode) and Grassmann variables (for the TLA) can be used to treat the Jaynes-Cummings model. Although it is a Grassmann function, the distribution function is equivalent to six c-number functions of the two bosonic variables. Experimental quantities are given as bosonic phase space integrals involving the six functions. A Fokker-Planck equation involving both left and right Grassmann differentiations can be obtained for the distribution function, and is equivalent to six coupled equations for the six c-number functions. 
The approach used involves choosing the canonical form of the (non-unique) positive P distribution function, in which the correspondence rules for the bosonic operators are non-standard and hence the Fokker-Planck equation is also unusual. Initial conditions, such as those above for initially uncorrelated states, are discussed and used to determine the initial distribution function. Transformations to new bosonic variables rotating at the cavity frequency enable the six coupled equations for the new c-number functions, which are also equivalent to the canonical Grassmann distribution function, to be solved analytically, based on an ansatz from an earlier paper by Stenholm. It is then shown that the distribution function is exactly the same as that determined from the well-known solution based on coupled amplitude equations. In quantum-atom optics, theories for many-atom bosonic and fermionic systems are needed. With large atom numbers, treatments must often take into account many quantum modes—especially for fermions. Generalisations of phase space distribution functions of phase space variables for a few modes to phase space distribution functionals of field functions (which represent the field operators, c-number fields for bosons, Grassmann fields for fermions) are now being developed for large systems. For the fermionic case, the treatment of the simple two mode problem represented by the Jaynes-Cummings model is a useful test case for the future development of phase space Grassmann distribution functional methods for fermionic applications in quantum-atom optics.
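The collapse-and-revival behaviour mentioned above follows from the standard resonant Jaynes-Cummings solution: with the atom initially excited and the cavity in a coherent state, the atomic inversion is W(t) = Σ_n p_n cos(2g√(n+1) t), with p_n the Poisson photon-number weights. A minimal numerical sketch (units with ħ = 1, coupling g = 1, mean photon number 20 chosen for illustration):

```python
import math

def inversion(t, nbar=20.0, g=1.0, nmax=120):
    """Atomic inversion W(t) for the resonant JC model: atom initially
    excited, cavity in a coherent state with mean photon number nbar."""
    w = 0.0
    logp = -nbar  # log of the Poisson weight p_0 = exp(-nbar)
    for n in range(nmax):
        p_n = math.exp(logp)
        w += p_n * math.cos(2.0 * g * math.sqrt(n + 1) * t)
        logp += math.log(nbar) - math.log(n + 1)  # advance to p_{n+1}
    return w

# W(0) = 1 (atom excited); the Rabi oscillations then collapse because the
# terms dephase, and revive near t ~ 2*pi*sqrt(nbar)/g when they rephase.
print(inversion(0.0))   # ≈ 1.0
print(inversion(3.0))   # deep in the collapse region: close to 0
```

The coupled-amplitude solution referred to in the abstract yields exactly this sum; the phase-space treatment in the paper reproduces it by a different route.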
Niu, Kunyu; Wu, Jian; Yu, Fang; Guo, Jingli
2016-11-15
This paper aims to develop a construction and operation cost model of wastewater treatment for the paper industry in China and explores the main factors that determine these costs. Previous models mainly involved factors relating to the treatment scale and efficiency of treatment facilities for deriving the cost function. We considered the factors more comprehensively by adding a regional variable to represent the economic development level, a corporate ownership factor to represent the plant characteristics, a subsector variable to capture pollutant characteristics, and a detailed-classification technology variable. We applied a unique data set from a national pollution source census for the model simulation. The major findings include the following: (1) Wastewater treatment costs in the paper industry are determined by scale, technology, degree of treatment, ownership, and regional factors; (2) Wastewater treatment costs show a large decreasing scale effect; (3) The current level of pollutant discharge fees is far lower than the marginal treatment costs for meeting the wastewater discharge standard. Key implications are as follows: (1) Cost characteristics and impact factors should be fully recognized when planning or making policies relating to wastewater treatment projects or technology development; (2) There is potential to reduce treatment costs by centralizing wastewater treatment via industrial parks; (3) Wastewater discharge fee rates should be increased; (4) Energy efficient technology should become the future focus of wastewater treatment.
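The "large decreasing scale effect" in finding (2) is the behaviour of a cost function C = A·Q^β with β < 1, where marginal cost β·A·Q^(β-1) lies below average cost. A hedged sketch with made-up plant data (the numbers are illustrative, not from the census data set) showing how β is recovered by log-log least squares:

```python
import math

# Hypothetical (scale Q, annual cost C) pairs; a power-law cost function
# C = A * Q**beta with beta < 1 captures a decreasing scale effect.
data = [(10, 52.0), (50, 160.0), (100, 250.0), (500, 780.0), (1000, 1220.0)]

xs = [math.log(q) for q, _ in data]
ys = [math.log(c) for _, c in data]
n = len(data)
xbar, ybar = sum(xs) / n, sum(ys) / n
beta = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
       sum((x - xbar) ** 2 for x in xs)
A = math.exp(ybar - beta * xbar)

# beta < 1: doubling plant scale less than doubles total treatment cost,
# and marginal cost beta*A*Q**(beta-1) sits below average cost A*Q**(beta-1).
print(round(beta, 3), round(A, 2))
```

The paper's actual model adds regional, ownership, subsector, and technology regressors; the sketch isolates only the scale term.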
NASA Technical Reports Server (NTRS)
Welch, Gerard E.
2011-01-01
The main rotors of the NASA Large Civil Tilt-Rotor notional vehicle operate over a wide speed range, from 100% at take-off to 54% at cruise. The variable-speed power turbine offers one approach by which to effect this speed variation. Key aero-challenges include high work factors at cruise and wide (40 to 60 deg.) incidence variations in blade and vane rows over the speed range. The turbine design approach must optimize cruise efficiency and minimize off-design penalties at take-off. The accuracy of the off-design incidence loss model is therefore critical to the turbine design. In this effort, 3-D computational analyses are used to assess the variation of turbine efficiency with speed change. The conceptual design of a 4-stage variable-speed power turbine for the Large Civil Tilt-Rotor application is first established at the meanline level. The design of 2-D airfoil sections and resulting 3-D blade and vane rows is documented. Three-dimensional Reynolds Averaged Navier-Stokes computations are used to assess the design and off-design performance of an embedded 1.5-stage portion (Rotor 1, Stator 2, and Rotor 2) of the turbine. The 3-D computational results yield the same efficiency versus speed trends predicted by meanline analyses, supporting the design choice to execute the turbine design at the cruise operating speed.
Robustness of quantum key distribution with discrete and continuous variables to channel noise
NASA Astrophysics Data System (ADS)
Lasota, Mikołaj; Filip, Radim; Usenko, Vladyslav C.
2017-06-01
We study the robustness of quantum key distribution protocols using discrete or continuous variables to the channel noise. We introduce the model of such noise based on coupling of the signal to a thermal reservoir, typical for continuous-variable quantum key distribution, to the discrete-variable case. Then we perform a comparison of the bounds on the tolerable channel noise between these two kinds of protocols using the same noise parametrization, in the case of implementation which is otherwise perfect. The obtained results show that continuous-variable protocols can exhibit similar robustness to the channel noise when the transmittance of the channel is relatively high. However, for strong loss discrete-variable protocols are superior and can overcome even the infinite-squeezing continuous-variable protocol while using limited nonclassical resources. The requirement on the single-photon production probability that a practical photon source would have to fulfill in order to demonstrate such superiority is feasible, thanks to recent rapid development in this field.
Grassmann phase space methods for fermions. I. Mode theory
NASA Astrophysics Data System (ADS)
Dalton, B. J.; Jeffers, J.; Barnett, S. M.
2016-07-01
In both quantum optics and cold atom physics, the behaviour of bosonic photons and atoms is often treated using phase space methods, where mode annihilation and creation operators are represented by c-number phase space variables, with the density operator equivalent to a distribution function of these variables. The anti-commutation rules for fermion annihilation and creation operators suggest the possibility of using anti-commuting Grassmann variables to represent these operators. However, in spite of the seminal work by Cahill and Glauber and a few applications, the use of Grassmann phase space methods in quantum-atom optics to treat fermionic systems is rather rare, though fermion coherent states using Grassmann variables are widely used in particle physics. The theory of Grassmann phase space methods for fermions based on separate modes is developed, showing how the distribution function is defined and used to determine quantum correlation functions, Fock state populations and coherences via Grassmann phase space integrals, how the Fokker-Planck equations are obtained and then converted into equivalent Ito equations for stochastic Grassmann variables. The fermion distribution function is an even Grassmann function, and is unique. The number of c-number Wiener increments involved is 2n², if there are n modes. The situation is somewhat different from the bosonic c-number case, where only 2n Wiener increments are involved; the sign of the drift term in the Ito equation is reversed and the diffusion matrix in the Fokker-Planck equation is anti-symmetric rather than symmetric. The un-normalised B distribution is of particular importance for determining Fock state populations and coherences, and as pointed out by Plimak, Collett and Olsen, the drift vector in its Fokker-Planck equation only depends linearly on the Grassmann variables. 
Using this key feature we show how the Ito stochastic equations can be solved numerically for finite times in terms of c-number stochastic quantities. Averages of products of Grassmann stochastic variables at the initial time are also involved, but these are determined from the initial conditions for the quantum state. The detailed approach to the numerics is outlined, showing that (apart from standard issues in such numerics) numerical calculations for Grassmann phase space theories of fermion systems could be carried out without needing to represent Grassmann phase space variables on the computer, and only involving processes using c-numbers. We compare our approach to that of Plimak, Collett and Olsen and show that the two approaches differ. As a simple test case we apply the B distribution theory and solve the Ito stochastic equations to demonstrate coupling between degenerate Cooper pairs in a four mode fermionic system involving spin conserving interactions between the spin-1/2 fermions, where modes with momenta −k, +k, each associated with spin-up and spin-down states, are involved.
Effects of "D"-Amphetamine and Ethanol on Variable and Repetitive Key-Peck Sequences in Pigeons
ERIC Educational Resources Information Center
Ward, Ryan D.; Bailey, Ericka M.; Odum, Amy L.
2006-01-01
This experiment assessed the effects of "d"-Amphetamine and ethanol on reinforced variable and repetitive key-peck sequences in pigeons. Pigeons responded on two keys under a multiple schedule of Repeat and Vary components. In the Repeat component, completion of a target sequence of right, right, left, left resulted in food. In the Vary component,…
Boahen, Kwabena
2013-01-01
A fundamental question in neuroscience is how neurons perform precise operations despite inherent variability. This question also applies to neuromorphic engineering, where low-power microchips emulate the brain using large populations of diverse silicon neurons. Biological neurons in the auditory pathway display precise spike timing, critical for sound localization and interpretation of complex waveforms such as speech, even though they are a heterogeneous population. Silicon neurons are also heterogeneous, due to a key design constraint in neuromorphic engineering: smaller transistors offer lower power consumption and more neurons per unit area of silicon, but also more variability between transistors and thus between silicon neurons. Utilizing this variability in a neuromorphic model of the auditory brain stem with 1,080 silicon neurons, we found that a low-voltage-activated potassium conductance (gKL) enables precise spike timing via two mechanisms: statically reducing the resting membrane time constant and dynamically suppressing late synaptic inputs. The relative contribution of these two mechanisms is unknown because blocking gKL in vitro eliminates dynamic adaptation but also lengthens the membrane time constant. We replaced gKL with a static leak in silico to recover the short membrane time constant and found that silicon neurons could mimic the spike-time precision of their biological counterparts, but only over a narrow range of stimulus intensities and biophysical parameters. The dynamics of gKL were required for precise spike timing robust to stimulus variation across a heterogeneous population of silicon neurons, thus explaining how neural and neuromorphic systems may perform precise operations despite inherent variability. PMID:23554436
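The two mechanisms attributed to gKL above (a short resting time constant, plus dynamic suppression of late inputs) can be contrasted in a minimal leaky-integrator sketch. All parameters below are hypothetical and chosen only for illustration; the static case uses a fixed leak matched to the same resting conductance, so only the dynamic activation differs.

```python
def late_peak(dynamic_gkl, dt=1e-5):
    """Peak depolarization produced by a 'late' input pulse, with gKL either
    dynamically activated by depolarization or replaced by a static leak."""
    C, g_leak, g_rest = 1e-9, 5e-9, 20e-9   # farads / siemens (hypothetical)
    tau_a = 0.010                           # gKL activation kinetics (s)
    u, a, peak = 0.0, 0.0, 0.0              # u = depolarization from rest (V)
    for i in range(int(0.020 / dt)):
        t = i * dt
        # an early pulse at 0-5 ms, then a "late" pulse at 8-13 ms
        I = 0.3e-9 if (t < 0.005 or 0.008 < t < 0.013) else 0.0
        if dynamic_gkl:
            a_inf = min(1.0, max(0.0, u / 0.010))   # grows with depolarization
            a += dt * (a_inf - a) / tau_a
            g_k = g_rest * (1.0 + 4.0 * a)          # conductance ramps up
        else:
            g_k = g_rest                            # static, voltage-independent
        u += dt * (-(g_leak + g_k) * u + I) / C     # explicit Euler step
        if 0.008 < t < 0.014:
            peak = max(peak, u)
    return peak

print(late_peak(True) < late_peak(False))  # dynamic gKL suppresses the late pulse
```

Since the dynamic conductance is never below the static one and is elevated by the earlier input, the late response is strictly smaller, mirroring the suppression mechanism described in the abstract.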
Method of operating a thermoelectric generator
Reynolds, Michael G; Cowgill, Joshua D
2013-11-05
A method for operating a thermoelectric generator supplying a variable-load component includes commanding the variable-load component to operate at a first output and determining a first load current and a first load voltage to the variable-load component while operating at the commanded first output. The method also includes commanding the variable-load component to operate at a second output and determining a second load current and a second load voltage to the variable-load component while operating at the commanded second output. The method includes calculating a maximum power output of the thermoelectric generator from the determined first load current and voltage and the determined second load current and voltage, and commanding the variable-load component to operate at a third output. The commanded third output is configured to draw the calculated maximum power output from the thermoelectric generator.
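The two-operating-point procedure described in the claim amounts to a Thevenin-style source characterization: two load (current, voltage) pairs determine the internal resistance and open-circuit voltage, from which the maximum transferable power follows. The patent text gives no equations, so the formulas below are the standard circuit-theory reading of that method:

```python
def max_power_from_two_points(i1, v1, i2, v2):
    """Estimate a source's maximum power output from two load points,
    assuming a linear (Thevenin) source model."""
    r_int = (v1 - v2) / (i2 - i1)     # internal resistance (ohms)
    v_oc = v1 + i1 * r_int            # open-circuit voltage (volts)
    return v_oc ** 2 / (4.0 * r_int)  # maximum power transfer (watts)

# A source with Voc = 12 V, R = 2 ohm, measured at two commanded outputs:
print(max_power_from_two_points(1.0, 10.0, 2.0, 8.0))  # 18.0 W
```

The "third output" in the claim would then be the load setting that draws this computed maximum (load resistance equal to the internal resistance).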
Probabilistic Predictions of PM2.5 Using a Novel Ensemble Design for the NAQFC
NASA Astrophysics Data System (ADS)
Kumar, R.; Lee, J. A.; Delle Monache, L.; Alessandrini, S.; Lee, P.
2017-12-01
Poor air quality (AQ) in the U.S. is estimated to cause about 60,000 premature deaths with costs of $100B-$150B annually. To reduce such losses, the National AQ Forecasting Capability (NAQFC) at the National Oceanic and Atmospheric Administration (NOAA) produces forecasts of ozone, particulate matter less than 2.5 μm in diameter (PM2.5), and other pollutants so that advance notice and warning can be issued to help individuals and communities limit the exposure and reduce air pollution-caused health problems. The current NAQFC, based on the U.S. Environmental Protection Agency Community Multi-scale AQ (CMAQ) modeling system, provides only deterministic AQ forecasts and does not quantify the uncertainty associated with the predictions, which could be large due to the chaotic nature of atmosphere and nonlinearity in atmospheric chemistry. This project aims to take NAQFC a step further in the direction of probabilistic AQ prediction by exploring and quantifying the potential value of ensemble predictions of PM2.5, and perturbing three key aspects of PM2.5 modeling: the meteorology, emissions, and CMAQ secondary organic aerosol formulation. This presentation focuses on the impact of meteorological variability, which is represented by three members of NOAA's Short-Range Ensemble Forecast (SREF) system that were down-selected by hierarchical cluster analysis. These three SREF members provide the physics configurations and initial/boundary conditions for the Weather Research and Forecasting (WRF) model runs that generate required output variables for driving CMAQ that are missing in operational SREF output. We conducted WRF runs for Jan, Apr, Jul, and Oct 2016 to capture seasonal changes in meteorology. Estimated emissions of trace gases and aerosols via the Sparse Matrix Operator Kernel Emissions (SMOKE) system were developed using the WRF output. WRF and SMOKE output drive a 3-member CMAQ mini-ensemble of once-daily, 48-h PM2.5 forecasts for the same four months. 
The CMAQ mini-ensemble is evaluated against both observations and the current operational deterministic NAQFC products, and analyzed to assess the impact of meteorological biases on PM2.5 variability. Quantification of the PM2.5 prediction uncertainty will prove a key factor to support cost-effective decision-making while protecting public health.
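A small ensemble like the 3-member one described above is typically turned into a probabilistic product by counting members that exceed a health-relevant PM2.5 threshold, and the resulting probabilities can be verified with a Brier score. The numbers below are invented for illustration, not from the NAQFC runs:

```python
def exceedance_prob(members, threshold):
    """Fraction of ensemble members forecasting PM2.5 above the threshold."""
    return sum(m > threshold for m in members) / len(members)

def brier_score(probs, outcomes):
    """Mean squared error of probability forecasts against 0/1 outcomes."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

# Hypothetical 3-member PM2.5 forecasts (ug/m3) on four days, threshold 35:
forecasts = [[30, 38, 41], [12, 15, 20], [36, 40, 45], [33, 34, 37]]
observed_exceed = [1, 0, 1, 0]   # did observed PM2.5 exceed 35 ug/m3?

probs = [exceedance_prob(f, 35.0) for f in forecasts]
print(probs)                                  # 2/3, 0, 1, 1/3
print(brier_score(probs, observed_exceed))    # lower is better
```

A Brier score of 0 would mean perfectly sharp, correct probabilities; the deterministic NAQFC corresponds to probabilities restricted to 0 or 1.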
NASA Astrophysics Data System (ADS)
McCurdy, David R.; Krivanek, Thomas M.; Roche, Joseph M.; Zinolabedini, Reza
2006-01-01
The concept of a human rated transport vehicle for various near earth missions is evaluated using a liquid hydrogen fueled Bimodal Nuclear Thermal Propulsion (BNTP) approach. In an effort to determine the preliminary sizing and optimal propulsion system configuration, as well as the key operating design points, an initial investigation into the main system level parameters was conducted. This assessment considered not only the performance variables but also the more subjective reliability, operability, and maintainability attributes. The SIZER preliminary sizing tool was used to facilitate rapid modeling of the trade studies, which included tank materials, propulsive versus an aero-capture trajectory, use of artificial gravity, reactor chamber operating pressure and temperature, fuel element scaling, engine thrust rating, engine thrust augmentation by adding oxygen to the flow in the nozzle for supersonic combustion, and the baseline turbopump configuration to address mission redundancy and safety requirements. A high level system perspective was maintained to avoid focusing solely on individual component optimization at the expense of system level performance, operability, and development cost.
Shin, Sangbaie; Park, Yun Sung; Cho, Sunghwan; You, Insang; Kang, In Seok
2018-01-01
Electro-generated chemiluminescence (ECL) has attracted increasing attention as a new platform for light-emitting devices; in particular, the use of mechanically stretchable ECL gels opens up the opportunity to achieve deformable displays. The movements of radical ions under an external electric field include short-range diffusion near the electrodes and long-distance migration between the electrodes. So far, only the diffusion of radical ions has been considered as the operating principle behind ECL. In this study, electrochemical and optical analysis was performed systematically to investigate the role of ion migration in ECL devices. This study reveals that long-distance migration of radical ions can be a key variable in ECL at low frequencies and that this effect depends on the type of ion species and the operating conditions (e.g. voltage and frequency). We also report that the emissions from the two electrodes are not identical, and the emission behaviors are different in the optimal operating conditions for the red, green, and blue ECL emissions. PMID:29732124
HgCdTe APD-based linear-mode photon counting components and ladar receivers
NASA Astrophysics Data System (ADS)
Jack, Michael; Wehner, Justin; Edwards, John; Chapman, George; Hall, Donald N. B.; Jacobson, Shane M.
2011-05-01
Linear mode photon counting (LMPC) provides significant advantages in comparison with Geiger Mode (GM) photon counting, including absence of after-pulsing, nanosecond pulse to pulse temporal resolution and robust operation in the presence of high-density obscurants or variable reflectivity objects. For this reason Raytheon has developed and previously reported on unique linear mode photon counting components and modules based on combining advanced APDs and advanced high gain circuits. By using HgCdTe APDs we enable Poisson number preserving photon counting. Key metrics of photon counting technology are dark count rate and detection probability. In this paper we report on a performance breakthrough resulting from improvement in design, process and readout operation enabling >10x reduction in dark count rate to ~10,000 cps and >10⁴x reduction in surface dark current enabling long 10 ms integration times. Our analysis of key dark current contributors suggests that substantial further reduction in DCR to ~1/sec or less can be achieved by optimizing wavelength, operating voltage and temperature.
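The practical meaning of a ~10,000 cps dark count rate can be checked with Poisson statistics: dark counts are rare within a nanosecond-class range gate but accumulate over a long integration. The gate width below is an assumed illustrative value, not taken from the paper:

```python
import math

def dark_count_prob(dcr_cps, gate_s, k=0):
    """Poisson probability of exactly k dark counts in one gate."""
    mu = dcr_cps * gate_s
    return math.exp(-mu) * mu ** k / math.factorial(k)

dcr = 1.0e4        # ~10,000 counts/s, as reported
gate = 1.0e-9      # an assumed nanosecond-class range gate

p_false = 1.0 - dark_count_prob(dcr, gate, k=0)  # P(>= 1 dark count per gate)
print(p_false)                 # on the order of 1e-5 per gate
print(dcr * 0.010)             # 100.0 expected dark counts over 10 ms
```

This is why the reported dark-current reduction matters for the long 10 ms integrations: the per-gate false-alarm probability is tiny, but the integrated dark-count budget grows linearly with integration time.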
Optimizing Wellfield Operation in a Variable Power Price Regime.
Bauer-Gottwein, Peter; Schneider, Raphael; Davidsen, Claus
2016-01-01
Wellfield management is a multiobjective optimization problem. One important objective has been energy efficiency in terms of minimizing the energy footprint (EFP) of delivered water (MWh/m³). However, power systems in most countries are moving in the direction of deregulated markets and price variability is increasing in many markets because of increased penetration of intermittent renewable power sources. In this context the relevant management objective becomes minimizing the cost of electric energy used for pumping and distribution of groundwater from wells rather than minimizing energy use itself. We estimated EFP of pumped water as a function of wellfield pumping rate (EFP-Q relationship) for a wellfield in Denmark using a coupled well and pipe network model. This EFP-Q relationship was subsequently used in a Stochastic Dynamic Programming (SDP) framework to minimize total cost of operating the combined wellfield-storage-demand system over the course of a 2-year planning period based on a time series of observed price on the Danish power market and a deterministic, time-varying hourly water demand. In the SDP setup, hourly pumping rates are the decision variables. Constraints include storage capacity and hourly water demand fulfilment. The SDP was solved for a baseline situation and for five scenario runs representing different EFP-Q relationships and different maximum wellfield pumping rates. Savings were quantified as differences in total cost between the scenario and a constant-rate pumping benchmark. Minor savings up to 10% were found in the baseline scenario, while the scenario with constant EFP and unlimited pumping rate resulted in savings up to 40%. Key factors determining potential cost savings obtained by flexible wellfield operation under a variable power price regime are the shape of the EFP-Q relationship, the maximum feasible pumping rate and the capacity of available storage facilities. © 2015 The Authors. 
Groundwater published by Wiley Periodicals, Inc. on behalf of National Ground Water Association.
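The dynamic-programming structure described above (hourly pumping decisions, a storage state, demand fulfilment, price-weighted energy cost) can be sketched with a deterministic toy instance. The prices, demand, and constant EFP below are invented for illustration; the actual study uses a stochastic price model and a nonlinear EFP-Q relationship:

```python
import math

prices = [10.0, 50.0, 10.0, 50.0]   # power price per MWh, by hour (toy values)
demand = [1, 1, 1, 1]               # water demand (m3), by hour
cap, q_max, efp = 2, 2, 1.0         # storage cap (m3), max pump rate, MWh/m3

T = len(prices)
INF = math.inf
V = [[INF] * (cap + 1) for _ in range(T + 1)]   # V[t][s]: cost-to-go from state s
V[T] = [0.0] * (cap + 1)                        # no cost after the horizon
for t in range(T - 1, -1, -1):                  # backward induction
    for s in range(cap + 1):
        for q in range(q_max + 1):
            s_next = s + q - demand[t]          # storage balance
            if 0 <= s_next <= cap and V[t + 1][s_next] < INF:
                V[t][s] = min(V[t][s], prices[t] * efp * q + V[t + 1][s_next])

flexible = V[0][0]                                            # optimal policy cost
constant = sum(p * efp * d for p, d in zip(prices, demand))   # constant-rate benchmark
print(flexible, constant)   # 40.0 120.0: pumping shifts into the cheap hours
```

With constant EFP, all savings come from shifting pumping into cheap hours subject to storage capacity, which is exactly the "constant EFP" scenario mechanism reported in the abstract.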
Finite-size analysis of a continuous-variable quantum key distribution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leverrier, Anthony; Grosshans, Frederic; Grangier, Philippe
2010-06-15
The goal of this paper is to extend the framework of finite-size analysis recently developed for quantum key distribution to continuous-variable protocols. We do not solve this problem completely here, and we mainly consider the finite-size effects on the parameter estimation procedure. Despite the fact that some questions are left open, we are able to give an estimation of the secret key rate for protocols which do not contain a postselection procedure. As expected, these results are significantly more pessimistic than those obtained in the asymptotic regime. However, we show that recent continuous-variable protocols are able to provide fully secure secret keys in the finite-size scenario, over distances larger than 50 km.
Maintenance & construction operations user service : an addendum to the ITS program plan
DOT National Transportation Integrated Search
2001-01-26
The Maintenance and Construction Operations User Service describes the need for integrating key activities. Generally, key Maintenance and Construction Operations (MCO) activities include monitoring, operating, maintaining, improving, and managing th...
Unified Least Squares Methods for the Evaluation of Diagnostic Tests With the Gold Standard
Tang, Liansheng Larry; Yuan, Ao; Collins, John; Che, Xuan; Chan, Leighton
2017-01-01
The article proposes a unified least squares method to estimate the receiver operating characteristic (ROC) parameters for continuous and ordinal diagnostic tests, such as cancer biomarkers. The method is based on a linear model framework using the empirically estimated sensitivities and specificities as input “data.” It gives consistent estimates for regression and accuracy parameters when the underlying continuous test results are normally distributed after some monotonic transformation. The key difference between the proposed method and the method of Tang and Zhou lies in the response variable. The response variable in the latter is transformed empirical ROC curves at different thresholds. It takes on many values for continuous test results, but few values for ordinal test results. The limited number of values for the response variable makes it impractical for ordinal data. However, the response variable in the proposed method takes on many more distinct values so that the method yields valid estimates for ordinal data. Extensive simulation studies are conducted to investigate and compare the finite sample performance of the proposed method with an existing method, and the method is then used to analyze 2 real cancer diagnostic examples as an illustration. PMID:28469385
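Under the binormal assumption mentioned above, probit-transformed sensitivity is linear in probit-transformed (1 - specificity), so a least-squares line through threshold-wise (probit(1 - spec), probit(sens)) points recovers the ROC intercept and slope. The sketch below uses exact probabilities from two assumed normal score distributions rather than empirical estimates, so the fit is exact; it illustrates the regression structure, not the paper's full estimator:

```python
from statistics import NormalDist

# Healthy scores ~ N(0, 1), diseased scores ~ N(1.5, 1) (assumed populations).
Phi, probit = NormalDist().cdf, NormalDist().inv_cdf

thresholds = [-0.5, 0.0, 0.5, 1.0, 1.5]
xs = [probit(1.0 - Phi(c)) for c in thresholds]          # probit(1 - specificity)
ys = [probit(1.0 - Phi(c - 1.5)) for c in thresholds]    # probit(sensitivity)

# Closed-form simple least squares for ys = a + b * xs.
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
    sum((x - xbar) ** 2 for x in xs)
a = ybar - b * xbar
print(round(a, 6), round(b, 6))   # 1.5 1.0 for these exact inputs
```

With empirical sensitivities and specificities as the input "data", the same regression yields the consistent estimates the abstract describes, for both continuous and ordinal tests.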
Culture as a variable in neuroscience and clinical neuropsychology: A comprehensive review
Wajman, José Roberto; Bertolucci, Paulo Henrique Ferreira; Mansur, Letícia Lessa; Gauthier, Serge
2015-01-01
Culture is a dynamic system of bidirectional influences among individuals and their environment, including psychological and biological processes, which facilitate adaptation and social interaction. One of the main challenges in clinical neuropsychology involves cognitive, behavioral and functional assessment of people with different sociocultural backgrounds. In this review essay, examining culture from a historical perspective to ethical issues in cross-cultural research, including the latest significant publications, the authors sought to explore the main features related to cultural variables in neuropsychological practice and to debate the challenges found regarding the operational methods currently in use. Literature findings suggest a more comprehensive approach in cognitive and behavioral neuroscience, including an interface between elementary disciplines and applied neuropsychology. Thus, as a basis for discussion on this issue, the authors analyzed key topics related to the study of new trends in sociocultural neuroscience and the application of their concepts from a clinical perspective. PMID:29213964
NASA Astrophysics Data System (ADS)
Roy, Satadru
Traditional approaches to design and optimize a new system, often, use a system-centric objective and do not take into consideration how the operator will use this new system alongside of other existing systems. This "hand-off" between the design of the new system and how the new system operates alongside other systems might lead to a sub-optimal performance with respect to the operator-level objective. In other words, the system that is optimal for its system-level objective might not be best for the system-of-systems level objective of the operator. Among the few available references that describe attempts to address this hand-off, most follow an MDO-motivated subspace decomposition approach of first designing a very good system and then providing this system to the operator, who decides the best way to use this new system along with the existing systems. The motivating example in this dissertation presents one such problem that includes aircraft design, airline operations and revenue management "subspaces". The research here develops an approach that could simultaneously solve these subspaces posed as a monolithic optimization problem. The monolithic approach makes the problem a Mixed Integer/Discrete Non-Linear Programming (MINLP/MDNLP) problem, which is extremely difficult to solve. The presence of expensive, sophisticated engineering analyses further aggravates the problem. To tackle this challenge problem, the work here presents a new optimization framework that simultaneously solves the subspaces to capture the "synergism" in the problem that the previous decomposition approaches may not have exploited, addresses mixed-integer/discrete type design variables in an efficient manner, and accounts for computationally expensive analysis tools. The framework combines concepts from efficient global optimization, Kriging partial least squares, and gradient-based optimization. 
This approach then demonstrates its ability to solve an 11 route airline network problem consisting of 94 decision variables including 33 integer and 61 continuous type variables. This application problem is a representation of an interacting group of systems and provides key challenges to the optimization framework to solve the MINLP problem, as reflected by the presence of a moderate number of integer and continuous type design variables and expensive analysis tool. The result indicates simultaneously solving the subspaces could lead to significant improvement in the fleet-level objective of the airline when compared to the previously developed sequential subspace decomposition approach. In developing the approach to solve the MINLP/MDNLP challenge problem, several test problems provided the ability to explore performance of the framework. While solving these test problems, the framework showed that it could solve other MDNLP problems including categorically discrete variables, indicating that the framework could have broader application than the new aircraft design-fleet allocation-revenue management problem.
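The core claim above, that solving the design and allocation subspaces simultaneously can beat a sequential "design first, then allocate" hand-off, can be illustrated with a toy mixed-integer problem. The objective functions below are invented for illustration; they only reproduce the coupling structure, with a continuous design variable x and an integer allocation variable y:

```python
# Coupled fleet-level (operator) objective: depends on both design and allocation.
def operator_cost(x, y):
    return (x - y) ** 2 + 0.5 * y

# System-centric design objective used by the sequential hand-off.
def design_only_cost(x):
    return (x - 2.0) ** 2

# Sequential: pick x for the design objective alone, then the best integer y.
x_seq = 2.0                                          # argmin of design_only_cost
y_seq = min(range(5), key=lambda y: operator_cost(x_seq, y))
seq = operator_cost(x_seq, y_seq)

# Simultaneous: for each integer y, the best x is x = y (closed form here),
# then pick the y with the lowest fleet-level cost.
sim = min(operator_cost(float(y), y) for y in range(5))

print(seq, sim)   # 1.0 0.0: the monolithic solve exploits the coupling
```

In the dissertation's problem the inner continuous solve is an expensive engineering analysis rather than a closed form, which is why efficient global optimization and Kriging surrogates are brought in, but the source of the improvement is the same coupling.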
Operant Variability: Procedures and Processes
ERIC Educational Resources Information Center
Machado, Armando; Tonneau, Francois
2012-01-01
Barba's (2012) article deftly weaves three main themes in one argument about operant variability. From general theoretical considerations on operant behavior (Catania, 1973), Barba derives methodological guidelines about response differentiation and applies them to the study of operant variability. In the process, he uncovers unnoticed features of…
New methodology to baseline and match AME polysilicon etcher using advanced diagnostic tools
NASA Astrophysics Data System (ADS)
Poppe, James; Shipman, John; Reinhardt, Barbara E.; Roussel, Myriam; Hedgecock, Raymond; Fonda, Arturo
1999-09-01
As process controls tighten in the semiconductor industry, the need to understand the variables that determine system performance becomes more important. For plasma etch systems, process success depends on the control of key parameters such as: vacuum integrity, pressure, gas flows, and RF power. It is imperative to baseline, monitor, and control these variables. This paper presents an overview of the methods and tools used by the Motorola BMC fabrication facility to characterize an Applied Materials polysilicon etcher. Tool performance data obtained from our traditional measurement techniques are limited in their scope and do not provide a complete picture of the ultimate tool performance. Presently the BMC traditional characterization tools provide a snapshot of the static operation of the equipment under test (EUT); however, the dynamic performance cannot be completely evaluated without the aid of specialized diagnostic equipment. To provide a complete system baseline evaluation of the polysilicon etcher, three diagnostic tools were utilized: the Lucas Labs Vacuum Diagnostic System, a Residual Gas Analyzer, and the ENI Voltage/Impedance Probe. The diagnostic methodology used to baseline and match key parameters of qualified production equipment has had an immense impact on other equipment characterization in the facility, and has also reduced cycle time for new equipment introduction.
Grassmann phase space theory and the Jaynes–Cummings model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dalton, B.J., E-mail: bdalton@swin.edu.au; Centre for Atom Optics and Ultrafast Spectroscopy, Swinburne University of Technology, Melbourne, Victoria 3122; Garraway, B.M.
2013-07-15
The Jaynes–Cummings model of a two-level atom in a single mode cavity is of fundamental importance both in quantum optics and in quantum physics generally, involving the interaction of two simple quantum systems—one fermionic system (the TLA), the other bosonic (the cavity mode). Depending on the initial conditions a variety of interesting effects occur, ranging from ongoing oscillations of the atomic population difference at the Rabi frequency when the atom is excited and the cavity is in an n-photon Fock state, to collapses and revivals of these oscillations starting with the atom unexcited and the cavity mode in a coherent state. The observation of revivals for Rydberg atoms in a high-Q microwave cavity is key experimental evidence for quantisation of the EM field. Theoretical treatments of the Jaynes–Cummings model based on expanding the state vector in terms of products of atomic and n-photon states and deriving coupled equations for the amplitudes are a well-known and simple method for determining the effects. In quantum optics however, the behaviour of the bosonic quantum EM field is often treated using phase space methods, where the bosonic mode annihilation and creation operators are represented by c-number phase space variables, with the density operator represented by a distribution function of these variables. Fokker–Planck equations for the distribution function are obtained, and either used directly to determine quantities of experimental interest or used to develop c-number Langevin equations for stochastic versions of the phase space variables from which experimental quantities are obtained as stochastic averages.
Phase space methods have also been developed to include atomic systems, with the atomic spin operators being represented by c-number phase space variables, and distribution functions involving these variables and those for any bosonic modes being shown to satisfy Fokker–Planck equations from which c-number Langevin equations are often developed. However, atomic spin operators satisfy the standard angular momentum commutation rules rather than the commutation rules for bosonic annihilation and creation operators, and are in fact second order combinations of fermionic annihilation and creation operators. Though phase space methods in which the fermionic operators are represented directly by c-number phase space variables have not been successful, the anti-commutation rules for these operators suggest the possibility of using Grassmann variables—which have similar anti-commutation properties. However, in spite of the seminal work by Cahill and Glauber and a few applications, the use of phase space methods in quantum optics to treat fermionic systems by representing fermionic annihilation and creation operators directly by Grassmann phase space variables is rather rare. This paper shows that phase space methods using a positive P type distribution function involving both c-number variables (for the cavity mode) and Grassmann variables (for the TLA) can be used to treat the Jaynes–Cummings model. Although it is a Grassmann function, the distribution function is equivalent to six c-number functions of the two bosonic variables. Experimental quantities are given as bosonic phase space integrals involving the six functions. A Fokker–Planck equation involving both left and right Grassmann differentiations can be obtained for the distribution function, and is equivalent to six coupled equations for the six c-number functions. 
The approach used involves choosing the canonical form of the (non-unique) positive P distribution function, in which the correspondence rules for the bosonic operators are non-standard and hence the Fokker–Planck equation is also unusual. Initial conditions, such as those above for initially uncorrelated states, are discussed and used to determine the initial distribution function. Transformations to new bosonic variables rotating at the cavity frequency enable the six coupled equations for the new c-number functions–that are also equivalent to the canonical Grassmann distribution function–to be solved analytically, based on an ansatz from an earlier paper by Stenholm. It is then shown that the distribution function is exactly the same as that determined from the well-known solution based on coupled amplitude equations. In quantum–atom optics, theories for many-atom bosonic and fermionic systems are needed. With large atom numbers, treatments must often take into account many quantum modes—especially for fermions. Generalisations of phase space distribution functions of phase space variables for a few modes to phase space distribution functionals of field functions (which represent the field operators, c-number fields for bosons, Grassmann fields for fermions) are now being developed for large systems. For the fermionic case, the treatment of the simple two mode problem represented by the Jaynes–Cummings model is a useful test case for the future development of phase space Grassmann distribution functional methods for fermionic applications in quantum–atom optics. -- Highlights: •Novel phase space theory of the Jaynes–Cummings model using Grassmann variables. •Fokker–Planck equations solved analytically. •Results agree with the standard quantum optics treatment. •Grassmann phase space theory applicable to fermion many-body problems.
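The collapses and revivals mentioned above follow directly from the well-known closed form for the atomic inversion in the resonant Jaynes–Cummings model, W(t) = Σ_n p_n cos(2g√(n+1) t), for an initially excited atom and Poissonian photon statistics p_n. A minimal numerical sketch, with illustrative parameter values:

```python
import numpy as np

def atomic_inversion(t, g=1.0, nbar=10.0, nmax=120):
    """W(t) for the resonant Jaynes-Cummings model: atom initially excited,
    cavity in a coherent state with mean photon number nbar."""
    n = np.arange(nmax + 1)
    # log n! via a cumulative sum, for a numerically safe Poisson distribution
    log_fact = np.concatenate([[0.0], np.cumsum(np.log(np.arange(1, nmax + 1)))])
    p = np.exp(n * np.log(nbar) - nbar - log_fact)   # Poisson weights of |alpha>
    rabi = 2.0 * g * np.sqrt(n + 1.0)                # n-dependent Rabi frequencies
    t = np.atleast_1d(np.asarray(t, dtype=float))
    return (p[None, :] * np.cos(np.outer(t, rabi))).sum(axis=1)
```

W starts at 1, collapses on a timescale of order 1/g as the incommensurate Rabi frequencies dephase, and revives near t ≈ 2π√(nbar)/g when they rephase.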
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leverrier, Anthony; Grangier, Philippe; Laboratoire Charles Fabry, Institut d'Optique, CNRS, University Paris-Sud, Campus Polytechnique, RD 128, F-91127 Palaiseau Cedex
2010-06-15
In this article, we give a simple proof of the fact that the optimal collective attacks against continuous-variable quantum key distribution with a Gaussian modulation are Gaussian attacks. Our proof, which makes use of symmetry properties of the protocol in phase space, is particularly relevant for the finite-key analysis of the protocol and therefore for practical applications.
NASA Astrophysics Data System (ADS)
Bartell, Richard J.; Perram, Glen P.; Fiorino, Steven T.; Long, Scott N.; Houle, Marken J.; Rice, Christopher A.; Manning, Zachary P.; Bunch, Dustin W.; Krizo, Matthew J.; Gravley, Liesebet E.
2005-06-01
The Air Force Institute of Technology's Center for Directed Energy has developed a software model, the High Energy Laser End-to-End Operational Simulation (HELEEOS), under the sponsorship of the High Energy Laser Joint Technology Office (JTO), to facilitate worldwide comparisons of the expected performance of a diverse range of weight-constrained high energy laser system types across a broad range of engagement scenarios. HELEEOS has been designed to meet JTO's goals of supporting a broad range of analyses applicable to the operational requirements of all the military services, constraining weapon effectiveness estimates through accurate engineering performance assessments (allowing its use as an investment strategy tool), and establishing trust among military leaders. HELEEOS is anchored to respected wave optics codes, and all significant degradation effects, including thermal blooming and optical turbulence, are represented in the model. The model features operationally oriented performance metrics, e.g. the dwell time required to achieve a prescribed probability of kill (Pk) and the effective range. Key features of HELEEOS include estimation of the level of uncertainty in the calculated Pk and generation of interactive nomographs that allow the user to further explore a desired parameter space. Worldwide analyses are enabled at five wavelengths via recently available databases capturing climatological, seasonal, diurnal, and geographical spatial-temporal variability in atmospheric parameters, including molecular and aerosol absorption and scattering profiles and optical turbulence strength. Examples are provided of the impact of uncertainty in weight-power relationships, coupled with operating condition variability, on performance comparisons between chemical and solid state lasers.
Continuous-variable quantum key distribution protocols over noisy channels.
García-Patrón, Raúl; Cerf, Nicolas J
2009-04-03
A continuous-variable quantum key distribution protocol based on squeezed states and heterodyne detection is introduced and shown to attain higher secret key rates over a noisy line than any other one-way Gaussian protocol. This increased resistance to channel noise can be understood as resulting from purposely adding noise to the signal that is converted into the secret key. This notion of noise-enhanced tolerance to noise also provides a better physical insight into the poorly understood discrepancies between the previously defined families of Gaussian protocols.
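The trade-off described here is commonly quantified by a secret key rate of the Devetak–Winter form (standard notation, assumed rather than taken from the paper):

```latex
K = \beta I_{AB} - \chi_{BE}
```

where $I_{AB}$ is the Alice–Bob mutual information, $\beta \le 1$ the reconciliation efficiency, and $\chi_{BE}$ the Holevo bound on the eavesdropper's information. Deliberately added noise can reduce $\chi_{BE}$ faster than it reduces $\beta I_{AB}$, which is the mechanism behind the noise-enhanced tolerance the abstract describes.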
Nursing workforce policy and the economic crisis: a global overview.
Buchan, James; O'May, Fiona; Dussault, Gilles
2013-09-01
To assess the impact of the global financial crisis on the nursing workforce and identify appropriate policy responses. This article draws from international data sources (Organisation for Economic Co-operation and Development [OECD] and World Health Organization), from national data sources (nursing regulatory authorities), and the literature to provide a context in which to examine trends in labor market and health spending indicators, nurse employment, and nurse migration patterns. A variable impact of the crisis at the country level was shown by different changes in unemployment rates and funding of the health sector. Some evidence was obtained of reductions in nurse staffing in a small number of countries. A significant and variable change in the patterns of nurse migration also was observed. The crisis has had a variable impact; nursing shortages are likely to reappear in some OECD countries. Policy responses will have to take account of the changed economic reality in many countries. This article highlights key trends and issues for the global nursing workforce; it then identifies policy interventions appropriate to the new economic realities in many OECD countries. © 2013 Sigma Theta Tau International.
Overview of Key Results from SDO Extreme ultraviolet Variability Experiment (EVE)
NASA Astrophysics Data System (ADS)
Woods, Tom; Eparvier, Frank; Jones, Andrew; Mason, James; Didkovsky, Leonid; Chamberlin, Phil
2016-10-01
The SDO Extreme ultraviolet Variability Experiment (EVE) includes several channels to observe the solar extreme ultraviolet (EUV) spectral irradiance from 1 to 106 nm. These channels include the Multiple EUV Grating Spectrograph (MEGS) A, B, and P channels from the University of Colorado (CU) and the EUV SpectroPhotometer (ESP) channels from the University of Southern California (USC). The solar EUV spectrum is rich in many different emission lines from the corona, transition region, and chromosphere. The EVE full-disk irradiance spectra are important for studying the solar impacts on Earth's ionosphere and thermosphere and are useful for space weather operations. In addition, the EVE observations, with their high spectral resolution of 0.1 nm and in collaboration with AIA solar EUV images, have proven valuable for studying active region evolution and explosive energy release during flares and coronal eruptions. These SDO measurements have revealed interesting results such as understanding the flare variability over all wavelengths, discovering and classifying different flare phases, using coronal dimming measurements to predict CME properties of mass and velocity, and exploring the role of nano-flares in continual heating of active regions.
Report of the Defense Science Board Task Force on Defense Biometrics
2007-03-01
certificates, crypto variables, and encoded biometric indices. The Department of Defense has invested prestige and resources in its Common Access Card (CAC...in turn, could be used to unlock an otherwise secret key or crypto variable which would support the remote authentication. A new key variable...The PSA for biometrics should commission development of appropriate threat model(s) and assign responsibility for maintaining currency of the model
Experimental design for evaluating WWTP data by linear mass balances.
Le, Quan H; Verheijen, Peter J T; van Loosdrecht, Mark C M; Volcke, Eveline I P
2018-05-15
A stepwise experimental design procedure to obtain reliable data from wastewater treatment plants (WWTPs) was developed. The proposed procedure aims at determining sets of additional measurements (besides available ones) that guarantee the identifiability of key process variables, which means that their value can be calculated from other, measured variables, based on available constraints in the form of linear mass balances. Among all solutions, i.e. all possible sets of additional measurements allowing the identifiability of all key process variables, the optimal solutions were found taking into account two objectives, namely the accuracy of the identified key variables and the cost of additional measurements. The results of this multi-objective optimization problem were represented in a Pareto-optimal front. The presented procedure was applied to a full-scale WWTP. Detailed analysis of the relation between measurements allowed the determination of groups of overlapping mass balances. Adding measured variables could only serve in identifying key variables that appear in the same group of mass balances. Besides, the application of the experimental design procedure to these individual groups significantly reduced the computational effort in evaluating available measurements and planning additional monitoring campaigns. The proposed procedure is straightforward and can be applied to other WWTPs with or without prior data collection. Copyright © 2018 Elsevier Ltd. All rights reserved.
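The identifiability criterion described above, whether a key variable can be calculated from measured ones through linear mass balances, can be sketched with a rank test. The plant and balances below are a hypothetical two-unit example, not the paper's WWTP:

```python
import numpy as np

def identifiable_unmeasured(A, measured):
    """Given linear mass balances A @ x = 0 and the indices of measured
    variables, return which unmeasured variables are identifiable
    (i.e. uniquely computable from the measurements)."""
    n = A.shape[1]
    unmeasured = [j for j in range(n) if j not in measured]
    A_u = A[:, unmeasured]
    r = np.linalg.matrix_rank(A_u)
    result = {}
    for k, j in enumerate(unmeasured):
        e = np.zeros((1, len(unmeasured)))
        e[0, k] = 1.0
        # x_j is identifiable iff the unit vector e_k lies in the row space
        # of A_u, so appending it as a row must not increase the rank.
        result[j] = np.linalg.matrix_rank(np.vstack([A_u, e])) == r
    return result

# Toy plant: flows x0 -> x1 -> x2 with balances x0 - x1 = 0 and x1 - x2 = 0;
# measuring x0 makes both x1 and x2 identifiable.
A = np.array([[1.0, -1.0, 0.0],
              [0.0, 1.0, -1.0]])
print(identifiable_unmeasured(A, measured={0}))   # {1: True, 2: True}
```

Candidate sets of additional measurements can then be scored by how many key variables they render identifiable, exactly the feasibility half of the paper's two-objective search.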
NASA Astrophysics Data System (ADS)
Tysowski, Piotr K.; Ling, Xinhua; Lütkenhaus, Norbert; Mosca, Michele
2018-04-01
Quantum key distribution (QKD) is a means of generating keys between a pair of computing hosts that is theoretically secure against cryptanalysis, even by a quantum computer. Although there is much active research into improving the QKD technology itself, there is still significant work to be done to apply engineering methodology and determine how it can be practically built to scale within an enterprise IT environment. Significant challenges exist in building a practical key management service (KMS) for use in a metropolitan network. QKD is generally a point-to-point technique only and is subject to steep performance constraints. The integration of QKD into enterprise-level computing has been researched, to enable quantum-safe communication. A novel method for constructing a KMS is presented that allows arbitrary computing hosts on one site to establish multiple secure communication sessions with the hosts of another site. A key exchange protocol is proposed where symmetric private keys are granted to hosts while satisfying the scalability needs of an enterprise population of users. The KMS operates within a layered architectural style that is able to interoperate with various underlying QKD implementations. Variable levels of security for the host population are enforced through a policy engine. A network layer provides key generation across a network of nodes connected by quantum links. Scheduling and routing functionality allows quantum key material to be relayed across trusted nodes. Optimizations are performed to match the real-time host demand for key material with the capacity afforded by the infrastructure. The result is a flexible and scalable architecture that is suitable for enterprise use and independent of any specific QKD technology.
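Relaying key material across trusted nodes, as mentioned above, is commonly done by one-time-pad encryption with each hop's QKD link key. A minimal sketch follows; the node names and key sizes are illustrative and this is not the proposed KMS protocol itself:

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    """Bytewise XOR (one-time-pad encryption and decryption)."""
    return bytes(x ^ y for x, y in zip(a, b))

# Hypothetical path A -> N -> B with an independent QKD link key per hop.
key = os.urandom(16)      # end-to-end key material to be delivered
k_an = os.urandom(16)     # QKD key of link A-N
k_nb = os.urandom(16)     # QKD key of link N-B

c1 = xor(key, k_an)       # A sends the key encrypted for trusted node N
at_n = xor(c1, k_an)      # N recovers the key ...
c2 = xor(at_n, k_nb)      # ... and re-encrypts it for B
recovered = xor(c2, k_nb) # B recovers the end-to-end key
```

The scheme is information-theoretically secure on each link but requires every intermediate node to be trusted, which is why the architecture's scheduling and routing layer matters.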
Gu, Jianwei; Pitz, Mike; Breitner, Susanne; Birmili, Wolfram; von Klot, Stephanie; Schneider, Alexandra; Soentgen, Jens; Reller, Armin; Peters, Annette; Cyrys, Josef
2012-10-01
The success of epidemiological studies depends on the use of appropriate exposure variables. The purpose of this study is to extract a relatively small selection of variables characterizing ambient particulate matter from a large measurement data set. The original data set comprised a total of 96 particulate matter variables that have been continuously measured since 2004 at an urban background aerosol monitoring site in the city of Augsburg, Germany. Many of the original variables were derived from measured particle size distribution (PSD) across the particle diameter range 3 nm to 10 μm, including size-segregated particle number concentration, particle length concentration, particle surface concentration and particle mass concentration. The data set was complemented by integral aerosol variables. These variables were measured by independent instruments, including black carbon, sulfate, particle active surface concentration and particle length concentration. It is obvious that such a large number of measured variables cannot be used in health effect analyses simultaneously. The aim of this study is a pre-screening and a selection of the key variables that will be used as input in forthcoming epidemiological studies. In this study, we present two methods of parameter selection and apply them to data from a two-year period from 2007 to 2008. We used the agglomerative hierarchical cluster method to find groups of similar variables. In total, we selected 15 key variables from 9 clusters which are recommended for epidemiological analyses. We also applied a two-dimensional visualization technique called "heatmap" analysis to the Spearman correlation matrix. 12 key variables were selected using this method. Moreover, the positive matrix factorization (PMF) method was applied to the PSD data to characterize the possible particle sources. Correlations between the variables and PMF factors were used to interpret the meaning of the cluster and the heatmap analyses. 
Copyright © 2012 Elsevier B.V. All rights reserved.
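Variable selection of this kind, clustering the Spearman correlation matrix and keeping one representative per cluster, can be sketched as follows. The greedy threshold clustering stands in for the study's agglomerative hierarchical method, and the data are synthetic:

```python
import numpy as np

def spearman_matrix(X):
    """Spearman correlation of the columns of X via rank transformation."""
    ranks = np.argsort(np.argsort(X, axis=0), axis=0).astype(float)
    ranks -= ranks.mean(axis=0)
    cov = ranks.T @ ranks
    d = np.sqrt(np.diag(cov))
    return cov / np.outer(d, d)

def greedy_clusters(rho, threshold=0.8):
    """Group variables whose |rho| with a cluster seed exceeds the threshold;
    each seed then serves as the cluster's key (representative) variable."""
    unassigned = list(range(rho.shape[0]))
    clusters = []
    while unassigned:
        seed = unassigned.pop(0)
        members = [seed] + [j for j in unassigned if abs(rho[seed, j]) >= threshold]
        unassigned = [j for j in unassigned if j not in members]
        clusters.append(members)
    return clusters

# Synthetic example: columns 0 and 1 are near-duplicates, column 2 is independent.
rng = np.random.default_rng(0)
a, b = rng.normal(size=200), rng.normal(size=200)
X = np.column_stack([a, a + 0.01 * rng.normal(size=200), b])
clusters = greedy_clusters(spearman_matrix(X))   # [[0, 1], [2]]
```

Keeping one variable per cluster removes near-redundant exposure metrics before they enter an epidemiological model.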
NASA Astrophysics Data System (ADS)
Vanwalleghem, T.; Román, A.; Giraldez, J. V.
2016-12-01
There is a need for better understanding of the processes influencing soil formation and the resulting distribution of soil properties. Soil properties can exhibit strong spatial variation, even at the small catchment scale. Soil carbon pools in semi-arid, mountainous areas are especially uncertain because bulk density and stoniness are very heterogeneous and rarely measured explicitly. In this study, we explore the spatial variability in key soil properties (soil carbon stocks, stoniness, bulk density and soil depth) as a function of processes shaping the critical zone (weathering, erosion, soil water fluxes and vegetation patterns). We also compare the potential of a geostatistical versus a mechanistic soil formation model (MILESD) for predicting these key soil properties. Soil core samples were collected from 67 locations at 6 depths. Total soil organic carbon stocks were 4.38 kg m-2. Solar radiation proved to be the key variable controlling soil carbon distribution. Stone content was mostly controlled by slope, indicating the importance of erosion. The spatial distribution of bulk density was found to be highly random. Finally, total carbon stocks were predicted using a random forest model whose main covariates were solar radiation and NDVI. The model predicts carbon stocks that are twice as high on north-facing as on south-facing slopes. However, validation showed that these covariates only explained 25% of the variation in the dataset. Apparently, present-day landscape and vegetation properties are not sufficient to fully explain variability in the soil carbon stocks in this complex terrain under natural vegetation. This is attributed to a high spatial variability in bulk density and stoniness, key variables controlling carbon stocks. Similar results were obtained with the mechanistic soil formation model MILESD, suggesting that more complex models might be needed to further explore this high spatial variability.
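A random forest regression of the kind used here can be sketched with scikit-learn. The covariates and the carbon-stock relationship below are synthetic stand-ins for the study's data, chosen only to mimic the reported pattern of lower stocks on sun-exposed slopes:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-in data: carbon stocks fall with solar radiation (sun-exposed,
# south-facing positions) and rise with vegetation cover (NDVI), plus noise.
rng = np.random.default_rng(1)
n = 300
solar = rng.uniform(0.0, 1.0, n)
ndvi = rng.uniform(0.0, 1.0, n)
carbon = 6.0 - 3.0 * solar + 1.5 * ndvi + rng.normal(0.0, 1.0, n)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(np.column_stack([solar, ndvi]), carbon)
importance = dict(zip(["solar", "ndvi"], model.feature_importances_))
```

Feature importances here play the role of the covariate ranking in the study, and out-of-sample validation (e.g. a held-out split) would reveal the unexplained variance the authors attribute to bulk density and stoniness.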
Learning Multisensory Integration and Coordinate Transformation via Density Estimation
Sabes, Philip N.
2013-01-01
Sensory processing in the brain includes three key operations: multisensory integration—the task of combining cues into a single estimate of a common underlying stimulus; coordinate transformations—the change of reference frame for a stimulus (e.g., retinotopic to body-centered) effected through knowledge about an intervening variable (e.g., gaze position); and the incorporation of prior information. Statistically optimal sensory processing requires that each of these operations maintains the correct posterior distribution over the stimulus. Elements of this optimality have been demonstrated in many behavioral contexts in humans and other animals, suggesting that the neural computations are indeed optimal. That the relationships between sensory modalities are complex and plastic further suggests that these computations are learned—but how? We provide a principled answer, by treating the acquisition of these mappings as a case of density estimation, a well-studied problem in machine learning and statistics, in which the distribution of observed data is modeled in terms of a set of fixed parameters and a set of latent variables. In our case, the observed data are unisensory-population activities, the fixed parameters are synaptic connections, and the latent variables are multisensory-population activities. In particular, we train a restricted Boltzmann machine with the biologically plausible contrastive-divergence rule to learn a range of neural computations not previously demonstrated under a single approach: optimal integration; encoding of priors; hierarchical integration of cues; learning when not to integrate; and coordinate transformation. The model makes testable predictions about the nature of multisensory representations. PMID:23637588
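A restricted Boltzmann machine trained with one-step contrastive divergence, the learning rule named in the abstract, can be sketched in a few lines. The toy "two cues driven by one latent stimulus" data set is illustrative only and is not the paper's unisensory-population model:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Minimal Bernoulli RBM trained with one-step contrastive divergence (CD-1)."""
    def __init__(self, n_vis, n_hid, lr=0.1):
        self.W = 0.01 * rng.normal(size=(n_vis, n_hid))
        self.b = np.zeros(n_vis)   # visible biases
        self.c = np.zeros(n_hid)   # hidden biases
        self.lr = lr

    def cd1(self, v0):
        ph0 = sigmoid(v0 @ self.W + self.c)                # positive phase
        h0 = (rng.random(ph0.shape) < ph0).astype(float)
        pv1 = sigmoid(h0 @ self.W.T + self.b)              # one Gibbs step back
        v1 = (rng.random(pv1.shape) < pv1).astype(float)
        ph1 = sigmoid(v1 @ self.W + self.c)
        m = len(v0)
        self.W += self.lr * (v0.T @ ph0 - v1.T @ ph1) / m  # CD-1 updates
        self.b += self.lr * (v0 - v1).mean(axis=0)
        self.c += self.lr * (ph0 - ph1).mean(axis=0)

# Toy data: two 4-unit "cues" (sensory populations) share one binary stimulus.
stim = rng.integers(0, 2, size=(500, 1)).astype(float)
data = np.hstack([np.tile(stim, (1, 4)), np.tile(stim, (1, 4))])
rbm = RBM(n_vis=8, n_hid=4)
for _ in range(1000):
    rbm.cd1(data)
```

After training, the hidden units act as latent variables that capture the common cause of both cues, which is the density-estimation view of multisensory integration the paper develops.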
Geometrization and Generalization of the Kowalevski Top
NASA Astrophysics Data System (ADS)
Dragović, Vladimir
2010-08-01
A new view on the Kowalevski top and the Kowalevski integration procedure is presented. For more than a century, the Kowalevski case of 1889 has attracted the full attention of a wide community as the highlight of the classical theory of integrable systems. Despite hundreds of papers on the subject, the Kowalevski integration is still understood as a magic recipe, an unbelievable sequence of skillful tricks, unexpected identities and smart changes of variables. The novelty of our present approach rests on four observations. The first is that the so-called fundamental Kowalevski equation is an instance of a pencil equation of the theory of conics, which leads us to a new geometric interpretation of the Kowalevski variables w, x 1, x 2 as the pencil parameter and the Darboux coordinates, respectively. The second is the observation of a key algebraic property of the pencil equation, which is followed by the introduction and study of a new class of discriminantly separable polynomials. All steps of the Kowalevski integration procedure are now derived as easy and transparent logical consequences of our theory of discriminantly separable polynomials. The third observation connects the Kowalevski integration and the pencil equation with the theory of multi-valued groups. The Kowalevski change of variables is now recognized as an example of a two-valued group operation and its action. The final observation is the surprising equivalence of the associativity of the two-valued group operation and its action to the n = 3 case of the Great Poncelet Theorem for pencils of conics.
Telestroke network fundamentals.
Meyer, Brett C; Demaerschalk, Bart M
2012-10-01
The objectives of this manuscript are to identify key components of maintaining the logistic and/or operational sustainability of a telestroke network; to identify best practices for the assessment and management of acute stroke when planning for and developing a telestroke network; to show practical steps toward implementing a telestroke solution for optimizing acute stroke care; to incorporate evidence-based practice guidelines and care pathways into a telestroke network; to emphasize technology variables and options; and to propose metrics for determining the performance, outcomes, and quality of a telestroke network. Copyright © 2012 National Stroke Association. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Korenev, V. V.; Savelyev, A. V.; Zhukov, A. E.; Omelchenko, A. V.; Maximov, M. V.
2014-12-01
It is shown in analytical form that carrier capture from the matrix, as well as carrier dynamics in the quantum dots, plays an important role in the double-state lasing phenomenon. In particular, the de-synchronization of hole and electron capture allows one to describe the recently observed quenching of ground-state lasing that takes place in quantum dot lasers operating in the double-state lasing regime at high injection. On the other hand, a detailed analysis of charge carrier dynamics in a single quantum dot enables one to describe the observed light-current characteristics and key temperature dependences.
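Although the paper's two-state model is more elaborate, the interplay of carrier supply and photon emission it builds on can be illustrated with textbook single-mode laser rate equations. All parameter values below are generic placeholders, not taken from the paper:

```python
def steady_state(I, q=1.6e-19, tau_n=1e-9, tau_p=2e-12,
                 g0=1e4, n0=1e8, beta=1e-4, dt=1e-13, steps=200000):
    """Forward-Euler integration of textbook single-mode laser rate equations:
       dN/dt = I/q - N/tau_n - g0*(N - n0)*S        (carriers)
       dS/dt = g0*(N - n0)*S - S/tau_p + beta*N/tau_n  (photons)"""
    N, S = 0.0, 0.0
    for _ in range(steps):
        g = g0 * (N - n0)
        N, S = (N + dt * (I / q - N / tau_n - g * S),
                S + dt * (g * S - S / tau_p + beta * N / tau_n))
    return N, S

# With these placeholder values the threshold current is roughly
# q*(n0 + 1/(g0*tau_p))/tau_n ~ 24 mA; well above it the photon
# number jumps by orders of magnitude while the carriers clamp.
N_hi, S_hi = steady_state(0.050)   # above threshold
N_lo, S_lo = steady_state(0.010)   # below threshold
```

The paper's quenching mechanism amounts to a competition between two such photon populations (ground and excited state) fed by capture rates that de-synchronize at high injection.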
Crew behavior and performance in space analog environments
NASA Technical Reports Server (NTRS)
Kanki, Barbara G.
1992-01-01
The objectives and the current status of the Crew Factors research program conducted at NASA-Ames Research Center are reviewed. The principal objectives of the program are to determine the effects of a broad class of input variables on crew performance and to provide guidance with respect to the design and management of crews assigned to future space missions. A wide range of research environments are utilized, including controlled experimental settings, high fidelity full mission simulator facilities, and fully operational field environments. Key group processes are identified, and preliminary data are presented on the effect of crew size, type, and structure on team performance.
Deterministic nonlinear phase gates induced by a single qubit
NASA Astrophysics Data System (ADS)
Park, Kimin; Marek, Petr; Filip, Radim
2018-05-01
We propose deterministic realizations of nonlinear phase gates by repeating a finite sequence of non-commuting Rabi interactions between a harmonic oscillator and only a single two-level ancillary qubit. We show explicitly that the key nonclassical features of the ideal cubic phase gate and the quartic phase gate are generated in the harmonic oscillator faithfully by our method. We numerically analyzed the performance of our scheme under realistic imperfections of the oscillator and the two-level system. The methodology is extended further to higher-order nonlinear phase gates. This theoretical proposal completes the set of operations required for continuous-variable quantum computation.
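For reference, the cubic and quartic phase gates referred to are conventionally defined as (standard notation, not reproduced from the paper):

```latex
U_3(\chi) = \exp\!\left(i \chi\, \hat{x}^{3}\right), \qquad
U_4(\chi') = \exp\!\left(i \chi'\, \hat{x}^{4}\right)
```

where $\hat{x}$ is the oscillator's position quadrature. Gates of cubic or higher order are the non-Gaussian elements that, together with Gaussian operations, complete a universal continuous-variable gate set; the proposal approximates them with repeated qubit-oscillator Rabi interactions.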
Patel, Darshan C; Lyu, Yaqi Fara; Gandarilla, Jorge; Doherty, Steve
2018-04-03
In-process sampling and analysis is an important aspect of monitoring kinetic profiles and impurity formation or rejection, both in development and during commercial manufacturing. In pharmaceutical process development, the technology of choice for a substantial portion of this analysis is high-performance liquid chromatography (HPLC). Traditionally, the sample extraction and preparation for reaction characterization have been performed manually. This can be time consuming, laborious, and impractical for long processes. Depending on the complexity of the sample preparation, there can be variability introduced by different analysts, and in some cases, the integrity of the sample can be compromised during handling. While there are commercial instruments available for on-line monitoring with HPLC, they lack capabilities in many key areas. Some do not provide integration of the sampling and analysis, while others afford limited flexibility in sample preparation. The current offerings provide a limited number of unit operations available for sample processing and no option for workflow customizability. This work describes development of a microfluidic automated program (MAP) which fully automates the sample extraction, manipulation, and on-line LC analysis. The flexible system is controlled using an intuitive Microsoft Excel based user interface. The autonomous system is capable of unattended reaction monitoring that allows flexible unit operations and workflow customization to enable complex operations and on-line sample preparation. The automated system is shown to offer advantages over manual approaches in key areas while providing consistent and reproducible in-process data. Copyright © 2017 Elsevier B.V. All rights reserved.
Pragmatic analysis of the electric submerged arc furnace continuum
NASA Astrophysics Data System (ADS)
Karalis, K.; Karkalos, N.; Antipas, G. S. E.; Xenidis, A.
2017-09-01
A transient mathematical model was developed for the description of fluid flow, heat transfer and electromagnetic phenomena involved in the production of ferronickel in electric arc furnaces. The key operating variables considered were the thermal and electrical conductivity of the slag and the shape, immersion depth and applied electric potential of the electrodes. It was established that the principal stimuli of the velocities in the slag bath were the electric potential and immersion depth of the electrodes and the thermal and electrical conductivities of the slag. Additionally, it was determined that, under the set of operating conditions examined, the maximum slag temperature ranged between 1756 and 1825 K, which is in accordance with industrial measurements. Moreover, it was affirmed that contributions to slag stirring due to Lorentz forces and momentum forces due to the release of carbon monoxide bubbles from the electrode surface were negligible.
Concepts for Multi-Speed Rotorcraft Drive System - Status of Design and Testing at NASA GRC
NASA Technical Reports Server (NTRS)
Stevens, Mark A.; Lewicki, David G.; Handschuh, Robert F.
2015-01-01
In several studies and ongoing developments for advanced rotorcraft, the need for variable, multi-speed-capable rotors has been raised. Speed changes of up to 50 percent have been proposed for future rotorcraft to improve vehicle performance. A rotor speed change during operation not only requires a rotor that can perform effectively over the operating speed/load range, but also requires a propulsion system possessing these same capabilities. A study was completed investigating possible drive-system arrangements that can accommodate up to a 50-percent speed change. Key drivers were identified, from which simplicity and weight were judged as central. This paper presents the current status of two gear-train concepts coupled with the first of two clutch types developed and tested thus far, with focus on design lessons learned and areas requiring development. Also, a third concept is presented: a dual-input planetary differential, as leveraged from a simple planetary with fixed carrier.
Stronger steerability criterion for more uncertain continuous-variable systems
NASA Astrophysics Data System (ADS)
Chowdhury, Priyanka; Pramanik, Tanumoy; Majumdar, A. S.
2015-10-01
We derive a fine-grained uncertainty relation for the measurement of two incompatible observables on a single quantum system of continuous variables, and show that continuous-variable systems are more uncertain than discrete-variable systems. Using the derived fine-grained uncertainty relation, we formulate a stronger steering criterion that is able to reveal the steerability of NOON states, which has hitherto not been possible using other criteria. We further obtain a monogamy relation for our steering inequality, which leads to an in-principle improved lower bound on the secret key rate of a one-sided device-independent quantum key distribution protocol for continuous variables.
Villaveces, Andrés; Peck, Michael; Faraklas, Iris; Hsu-Chang, Naiwei; Joe, Victor; Wibbenmeyer, Lucy
2014-01-01
Detailed information on the cause of burns is necessary to construct effective prevention programs. The International Classification of External Causes of Injury (ICECI) is a data collection tool that allows comprehensive categorization of multiple facets of injury events. The objective of this study was to conduct a process evaluation of software designed to improve the ease of use of the ICECI so as to identify key additional variables useful for understanding the occurrence of burn injuries, and compare this software with existing data-collection practices conducted for burn injuries. The authors completed a process evaluation of the implementation and ease of use of the software in six U.S. burn centers. They also collected preliminary burn injury data and compared them with existing variables reported to the American Burn Association's National Burn Repository (NBR). The authors accomplished their goals of 1) creating a data-collection tool for the ICECI, which can be linked to existing operational programs of the NBR, 2) training registrars in the use of this tool, 3) establishing quality-control mechanisms for ensuring accuracy and reliability, 4) incorporating ICECI data entry into the weekly routine of the burn registrar, and 5) demonstrating the quality differences between data collected using this tool and the NBR. Using this or similar tools with the ICECI structure or key selected variables can improve the quantity and quality of data on burn injuries in the United States and elsewhere and thus can be more useful in informing prevention strategies.
Chen, Xiaodong Phoenix; Sullivan, Amy M; Alseidi, Adnan; Kwakye, Gifty; Smink, Douglas S
Providing resident autonomy in the operating room (OR) is one of the major challenges for surgical educators today. The purpose of this study was to explore what approaches expert surgical teachers use to assess residents' readiness for autonomy in the OR. We particularly focused on the assessments that experts make prior to conducting the surgical time-out. We conducted semistructured in-depth interviews with expert surgical teachers from March 2016 to September 2016. Purposeful sampling and snowball sampling were applied to identify and recruit expert surgical teachers from general surgery residency programs across the United States to represent a range of clinical subspecialties. All interviews were audio-recorded, deidentified, and transcribed. We applied the Framework Method of content analysis, discussed and reached final consensus on the themes. We interviewed 15 expert teachers from 9 institutions. The majority (13/15) were Program or Associate Program Directors; 47% (7/15) primarily performed complex surgical operations (e.g., endocrine surgery). Five themes regarding how expert surgical teachers determine residents' readiness for OR autonomy before the surgical time-out emerged. These included 3 domains of evidence elicited about the resident (resident characteristics, medical knowledge, and beyond the current OR case), 1 variable relating to attending characteristics, and 1 variable composed of contextual factors. Experts obtained one or more examples of evidence, and adjusted residents' initial autonomy using factors from the attending variable and the context variable. Expert surgical teachers' assessments of residents' readiness for OR autonomy included 5 key components. Better understanding these inputs can contribute to both faculty and resident development, enabling increased resident autonomy and preparation for independent practice. Copyright © 2017 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Wu, Xiao Dong; Chen, Feng; Wu, Xiang Hua; Guo, Ying
2017-02-01
Continuous-variable quantum key distribution (CVQKD) can provide higher detection efficiency than discrete-variable quantum key distribution (DVQKD). In this paper, we demonstrate a controllable CVQKD with the entangled source in the middle, in contrast to traditional point-to-point CVQKD, where the entanglement source is usually created by one honest party and the Gaussian noise added on the reference partner of the reconciliation is uncontrollable. In order to control the additive noise that originates in the middle and resist the effect of a malicious eavesdropper, we propose a controllable CVQKD protocol that performs a tunable linear optics cloning machine (LOCM) at one participant's side, say Alice's. Simulation results show that we can achieve the optimal secret key rates by selecting the parameters of the tuned LOCM in the derived regions.
Framework for a U.S. Geological Survey Hydrologic Climate-Response Program in Maine
Hodgkins, Glenn A.; Lent, Robert M.; Dudley, Robert W.; Schalk, Charles W.
2009-01-01
This report presents a framework for a U.S. Geological Survey (USGS) hydrologic climate-response program designed to provide early warning of changes in the seasonal water cycle of Maine. Climate-related hydrologic changes on Maine's rivers and lakes in the winter and spring during the last century are well documented, and several river and lake variables have been shown to be sensitive to air-temperature changes. Monitoring of relevant hydrologic data would provide important baseline information against which future climate change can be measured. The framework of the hydrologic climate-response program presented here consists of four major parts: (1) identifying homogeneous climate-response regions; (2) identifying hydrologic components and key variables of those components that would be included in a hydrologic climate-response data network - as an example, streamflow has been identified as a primary component, with a key variable of streamflow being winter-spring streamflow timing; the data network would be created by maintaining existing USGS data-collection stations and establishing new ones to fill data gaps; (3) regularly updating historical trends of hydrologic data network variables; and (4) establishing basins for process-based studies. Components proposed for inclusion in the hydrologic climate-response data network have at least one key variable for which substantial historical data are available. The proposed components are streamflow, lake ice, river ice, snowpack, and groundwater. The proposed key variables of each component have extensive historical data at multiple sites and are expected to be responsive to climate change in the next few decades. These variables are also important for human water use and (or) ecosystem function. Maine would be divided into seven climate-response regions that follow major river-basin boundaries (basins subdivided to hydrologic units with 8-digit codes or larger) and have relatively homogeneous climates. 
Key hydrologic variables within each climate-response region would be analyzed regularly to maintain up-to-date analyses of year-to-year variability, decadal variability, and longer term trends. Finally, one basin in each climate-response region would be identified for process-based hydrologic and ecological studies.
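The regular trend updates described above can be sketched with an ordinary least-squares slope over an annual series. A minimal example; the variable names and the synthetic streamflow-timing series below are illustrative, not data from the USGS network:

```python
def linear_trend(years, values):
    """Ordinary least-squares slope of values against years.

    Returns the slope in (units of value) per year, a simple way to
    track long-term change in an annual hydrologic series.
    """
    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(values) / n
    sxx = sum((x - mean_x) ** 2 for x in years)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, values))
    return sxy / sxx

# Illustrative series: winter-spring streamflow timing (day of year)
# arriving 0.5 day earlier per year on average.
years = list(range(1950, 2000))
timing = [120 - 0.5 * (y - 1950) for y in years]
slope = linear_trend(years, timing)
print(round(slope, 2))  # → -0.5
```

In practice a robust estimator (e.g., a rank-based trend test) is often preferred for hydrologic series, but the least-squares slope conveys the idea of regularly re-fitting long-term change.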
An examination of loads and responses of a wind turbine undergoing variable-speed operation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wright, A.D.; Buhl, M.L. Jr.; Bir, G.S.
1996-11-01
The National Renewable Energy Laboratory has recently developed the ability to predict turbine loads and responses for machines undergoing variable-speed operation. The wind industry has debated the potential benefits of operating wind turbines at variable speeds for some time. Turbine system dynamic responses (structural response, resonance, and component interactions) are an important consideration for variable-speed operation of wind turbines. The authors have implemented simple variable-speed control algorithms for both the FAST and ADAMS dynamics codes. The control algorithm is a simple one, allowing the turbine to track the optimum power coefficient (C_p). The objective of this paper is to show turbine loads and responses for a particular two-bladed, teetering-hub, downwind turbine undergoing variable-speed operation. The authors examined the response of the machine to various turbulent wind inflow conditions. In addition, they compare the structural responses under fixed-speed and variable-speed operation. For this paper, they restrict their comparisons to those wind-speed ranges for which limiting power by some additional control strategy (blade pitch or aileron control, for example) is not necessary. The objective here is to develop a basic understanding of the differences in loads and responses between fixed-speed and variable-speed operation of this wind turbine configuration.
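A common realization of such a C_p-tracking law (not necessarily the exact algorithm implemented in FAST/ADAMS here) commands generator torque proportional to the square of rotor speed, with the gain set by rotor geometry and the optimal tip-speed ratio. A hedged sketch; all numeric parameter values are illustrative:

```python
import math

def cp_tracking_torque(omega, rho=1.225, radius=23.0, cp_max=0.45, lam_opt=7.0):
    """Demand torque T = k * omega^2 that holds the rotor near its
    optimal tip-speed ratio, keeping the power coefficient near Cp_max.

    k = 0.5 * rho * pi * R^5 * Cp_max / lambda_opt^3
    omega is rotor speed in rad/s; rho is air density (kg/m^3) and
    radius the rotor radius (m). Parameter values are illustrative.
    """
    k = 0.5 * rho * math.pi * radius**5 * cp_max / lam_opt**3
    return k * omega**2

# Doubling rotor speed quadruples the demanded torque.
t1 = cp_tracking_torque(1.0)
t2 = cp_tracking_torque(2.0)
print(t2 / t1)  # → 4.0
```

Because the gain depends only on rotor constants, the controller needs no wind-speed measurement: in steady wind the rotor settles at the speed where aerodynamic torque equals the demanded torque, which is the optimal tip-speed ratio.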
Notes from the field: the economic value chain in disease management organizations.
Fetterolf, Donald
2006-12-01
The disease management (DM) "value chain" is composed of a linear series of steps that include operational milestones in the development of knowledge, each stage evolving from the preceding one. As an adaptation of Michael Porter's "value chain" model, the process flow in DM moves along the following path: (1) data/information technology, (2) information generation, (3) analysis, (4) assessment/recommendations, (5) actionable customer plan, and (6) program assessment/reassessment. Each of these stages is managed as a major line of product operations within a DM company or health plan. Metrics around each of the key production variables create benchmark milestones, ongoing management insight into program effectiveness, and potential drivers for activity-based cost accounting pricing models. The value chain process must remain robust from early entry of data and information into the system, through the final presentation and recommendations for our clients if the program is to be effective. For individuals involved in the evaluation or review of DM programs, this framework is an excellent method to visualize the key components and sequence in the process. The value chain model is an excellent way to establish the value of a formal DM program and to create a consultancy relationship with a client involved in purchasing these complex services.
Interresponse Time Structures in Variable-Ratio and Variable-Interval Schedules
ERIC Educational Resources Information Center
Bowers, Matthew T.; Hill, Jade; Palya, William L.
2008-01-01
The interresponse-time structures of pigeon key pecking were examined under variable-ratio, variable-interval, and variable-interval plus linear feedback schedules. Whereas the variable-ratio and variable-interval plus linear feedback schedules generally resulted in a distinct group of short interresponse times and a broad distribution of longer…
Discrete and continuous variables for measurement-device-independent quantum cryptography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Feihu; Curty, Marcos; Qi, Bing
2015-11-16
In a recent Article in Nature Photonics, Pirandola et al. [1] claim that the achievable secret key rates of discrete-variable (DV) measurement-device-independent (MDI) quantum key distribution (QKD) (refs 2,3) are “typically very low, unsuitable for the demands of a metropolitan network” and introduce a continuous-variable (CV) MDI QKD protocol capable of providing key rates which, they claim, are “three orders of magnitude higher” than those of DV MDI QKD. We believe, however, that the claims regarding low key rates of DV MDI QKD made by Pirandola et al. [1] are too pessimistic. Here we show that the secret key rate of DV MDI QKD with commercially available high-efficiency single-photon detectors (SPDs) (for example, see http://www.photonspot.com/detectors and http://www.singlequantum.com) and good system alignment is typically rather high, and thus highly suitable not only for long-distance communication but also for metropolitan networks.
Furrer, F; Franz, T; Berta, M; Leverrier, A; Scholz, V B; Tomamichel, M; Werner, R F
2012-09-07
We provide a security analysis for continuous variable quantum key distribution protocols based on the transmission of two-mode squeezed vacuum states measured via homodyne detection. We employ a version of the entropic uncertainty relation for smooth entropies to give a lower bound on the number of secret bits which can be extracted from a finite number of runs of the protocol. This bound is valid under general coherent attacks, and gives rise to keys which are composably secure. For comparison, we also give a lower bound valid under the assumption of collective attacks. For both scenarios, we find positive key rates using experimental parameters reachable today.
Quantum key distillation from Gaussian states by Gaussian operations.
Navascués, M; Bae, J; Cirac, J I; Lewenstein, M; Sanpera, A; Acín, A
2005-01-14
We study the secrecy properties of Gaussian states under Gaussian operations. Although such operations are useless for quantum distillation, we prove that it is possible to distill a secret key secure against any attack from sufficiently entangled Gaussian states with nonpositive partial transposition. Moreover, all such states allow for key distillation, when Eve is assumed to perform finite-size coherent attacks before the reconciliation process.
Comparison between variable and constant rotor speed operation on WINDMEL-II
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sasamoto, Akira; Matsumiya, Hikaru; Kawamura, Shunji
1996-10-01
For wind turbine rotor-speed control, variable-speed operation is believed to have advantages over constant-speed operation from the viewpoint of both aerodynamics and mechanics. However, there has been no experimental study showing the differences. In this report, the authors aim to clarify the differences in shaft torque using experimental data from a new wind turbine system capable of both variable- and constant-speed operation. The experimental data show that shaft torque under variable-speed operation is lower than under constant-speed operation.
Variability as a Subject Matter in a Science of Behavior: Reply to Commentaries
ERIC Educational Resources Information Center
Barba, Lourenco de Souza
2012-01-01
In his article, the author claimed that studies of operant variability that use a lag-"n" or threshold procedure and measure the obtained variability through the change in U value fail to provide direct evidence that variability is an operant dimension of behavior. To do so, he adopted Catania's (1973) concept of the operant, which takes the…
NASA Astrophysics Data System (ADS)
Mader, Julien; Rubio, Anna; Asensio Igoa, Jose Luis; Corgnati, Lorenzo; Mantovani, Carlo; Griffa, Annalisa; Gorringe, Patrick; Alba, Marco; Novellino, Antonio
2017-04-01
High-frequency radar (HFR) is a land-based remote-sensing instrument offering unique insight into coastal ocean variability by providing synoptic, high-frequency, high-resolution data at the ocean-atmosphere interface. HFRs have become invaluable tools in operational oceanography for measuring surface currents, waves, and winds, with direct applications in different sectors and an unprecedented potential for integrated management of the coastal zone. Furthering the use of HFRs within the Copernicus Marine Environment Monitoring Service (CMEMS) is becoming crucial to the improved management of several related key issues such as Marine Safety, Marine Resources, Coastal & Marine Environment, and Weather, Climate & Seasonal Forecast. In this context, the INCREASE (Innovation and Networking for the integration of Coastal Radars into European mArine SErvices) project aims to set out the developments needed for the integration of the existing European HFR operational systems into the CMEMS, following five main objectives: (i) define and implement a common data and metadata model for HFR real-time data; (ii) provide HFR quality-controlled real-time surface currents and key derived products; (iii) set the basis for the management of historical data and methodologies for advanced delayed-mode quality-control techniques; (iv) advance the use of HFR data for improving CMEMS numerical modelling systems; and (v) enable an HFR European operational node to ensure the link with operational CMEMS. In cooperation with other ongoing initiatives (such as the EuroGOOS HFR Task Team and the European project JERICO-NEXT), INCREASE has already set up the data-management infrastructure to make near-real-time data from 30 systems in Europe discoverable and accessible. This paper presents the achieved results and available products and features.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Houssainy, Sammy; Janbozorgi, Mohammad; Kavehpour, Pirouz
Compressed Air Energy Storage (CAES) can potentially allow renewable energy sources to meet electricity demands as reliably as coal-fired power plants. However, conventional CAES systems rely on the combustion of natural gas, require large storage volumes, and operate at high pressures, which bring inherent problems such as high costs, strict geological siting requirements, and the production of greenhouse gas emissions. A novel, patented hybrid thermal-compressed air energy storage (HT-CAES) design is presented which allows a portion of the available energy, from the grid or renewable sources, to operate a compressor, and the remainder to be converted and stored in the form of heat, through joule heating, in a sensible thermal storage medium. The HT-CAES design includes a turbocharger unit that provides supplementary mass flow rate alongside the air storage. The hybrid design and the addition of a turbocharger mitigate the shortcomings of conventional CAES systems and their derivatives by eliminating combustion emissions and reducing storage volumes, operating pressures, and costs. Storage efficiency and cost are the two key factors which, upon integration with renewable energies, would allow those sources to operate as independent forms of sustainable energy. The potential of the HT-CAES design is illustrated through a thermodynamic optimization study, which outlines key variables that have a major impact on the performance and economics of the storage system. The optimization analysis quantifies the required distribution of energy between thermal and compressed-air energy storage for maximum efficiency and for minimum cost. This study provides a roundtrip energy and exergy efficiency map of the storage system and illustrates a trade-off that exists between its capital cost and performance.
Hurricane intensification along United States coast suppressed during active hurricane periods
NASA Astrophysics Data System (ADS)
Kossin, James P.
2017-01-01
The North Atlantic ocean/atmosphere environment exhibits pronounced interdecadal variability that is known to strongly modulate Atlantic hurricane activity. Variability in sea surface temperature (SST) is correlated with hurricane variability through its relationship with the genesis and thermodynamic potential intensity of hurricanes. Another key factor that governs the genesis and intensity of hurricanes is ambient environmental vertical wind shear (VWS). Warmer SSTs generally correlate with more frequent genesis and greater potential intensity, while VWS inhibits genesis and prevents any hurricanes that do form from reaching their potential intensity. When averaged over the main hurricane-development region in the Atlantic, SST and VWS co-vary inversely, so that the two factors act in concert to either enhance or inhibit basin-wide hurricane activity. Here I show, however, that conditions conducive to greater basin-wide Atlantic hurricane activity occur together with conditions for more probable weakening of hurricanes near the United States coast. Thus, the VWS and SST form a protective barrier along the United States coast during periods of heightened basin-wide hurricane activity. Conversely, during the most-recent period of basin-wide quiescence, hurricanes (and particularly major hurricanes) near the United States coast, although substantially less frequent, exhibited much greater variability in their rate of intensification, and were much more likely to intensify rapidly. Such heightened variability poses greater challenges to operational forecasting and, consequently, greater coastal risk during hurricane events.
Clark, Renee M; Besterfield-Sacre, Mary E
2009-03-01
We take a novel approach to analyzing hazardous materials transportation risk in this research. Previous studies analyzed this risk from an operations research (OR) or quantitative risk assessment (QRA) perspective by minimizing or calculating risk along a transport route. Further, even though the majority of incidents occur when containers are unloaded, the research has not focused on transportation-related activities, including container loading and unloading. In this work, we developed a decision model of a hazardous materials release during unloading using actual data and an exploratory data modeling approach. Previous studies have had a theoretical perspective in terms of identifying and advancing the key variables related to this risk, and there has not been a focus on probability and statistics-based approaches for doing this. Our decision model empirically identifies the critical variables using an exploratory methodology for a large, highly categorical database involving latent class analysis (LCA), loglinear modeling, and Bayesian networking. Our model identified the most influential variables and countermeasures for two consequences of a hazmat incident, dollar loss and release quantity, and is one of the first models to do this. The most influential variables were found to be related to the failure of the container. In addition to analyzing hazmat risk, our methodology can be used to develop data-driven models for strategic decision making in other domains involving risk.
Physically Active Adults: An Analysis of the Key Variables That Keep Them Moving
ERIC Educational Resources Information Center
Downs, Andrew
2016-01-01
Background: A large proportion of adults are insufficiently physically active, and researchers have yet to determine the factors that enable individuals to maintain adequate levels of physical activity throughout adulthood. Purpose: This study sought to identify the key variables linked with consistent physical activity in adulthood as elucidated…
Education and Success: A Case Study of the Thai Public Service.
ERIC Educational Resources Information Center
Fry, Gerald W.
1980-01-01
Studied is the bureaucracy in Thailand, and access to and promotion within the system--or the "degree of openness" in the Thai public service. The key dependent variable is occupational attainment. Key intervening variables include educational attainment, total job experience, sex, and regional remoteness of early schooling. (KC)
Worku, Yohannes; Muchie, Mammo
2012-01-01
Objective. The objective was to investigate factors that affect the efficient management of solid waste produced by commercial businesses operating in the city of Pretoria, South Africa. Methods. Data were gathered from 1,034 businesses. Efficiency in solid waste management was assessed using a structural time-based model designed for evaluating efficiency as a function of the length of time required to manage waste. Data analysis was performed using statistical procedures such as frequency tables, Pearson's chi-square tests of association, and binary logistic regression analysis. Odds ratios estimated from logistic regression analysis were used to identify key factors that affect efficiency in the proper disposal of waste. Results. The study showed that 857 of the 1,034 businesses selected for the study (83%) were found to be sufficiently efficient with regard to the proper collection and disposal of solid waste. Based on odds ratios estimated from binary logistic regression analysis, efficiency in the proper management of solid waste was significantly influenced by 4 predictor variables. These 4 influential predictor variables were lack of adherence to waste management regulations, wrong perception, failure to provide customers with enough trash cans, and operation of businesses by employed managers, in decreasing order of importance. PMID:23209483
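The odds ratios described above are exponentiated logistic-regression coefficients. A minimal sketch of that step; the coefficient values below are made up for illustration (only the predictor names mirror the abstract, the numbers do not come from the study):

```python
import math

def odds_ratios(coefficients):
    """Exponentiate logistic-regression coefficients into odds ratios,
    then rank predictors by influence (distance of the OR from 1)."""
    ors = {name: math.exp(beta) for name, beta in coefficients.items()}
    return sorted(ors.items(), key=lambda kv: abs(kv[1] - 1.0), reverse=True)

# Hypothetical fitted coefficients for the four predictors named in the study.
coefs = {
    "non_adherence_to_regulations": 1.6,
    "wrong_perception": 1.2,
    "too_few_trash_cans": 0.9,
    "employed_manager": 0.5,
}
for name, or_ in odds_ratios(coefs):
    print(name, round(or_, 2))
```

An odds ratio above 1 means the factor raises the odds of the outcome; ranking by |OR − 1| gives the "decreasing order of importance" reported in such analyses.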
Enzyme reactor design under thermal inactivation.
Illanes, Andrés; Wilson, Lorena
2003-01-01
Temperature is a very relevant variable for any bioprocess. Temperature optimization of bioreactor operation is a key aspect for process economics. This is especially true for enzyme-catalyzed processes, because enzymes are complex, unstable catalysts whose technological potential relies on their operational stability. Enzyme reactor design is presented with a special emphasis on the effect of thermal inactivation. Enzyme thermal inactivation is a very complex process from a mechanistic point of view. However, for the purpose of enzyme reactor design, it has been oversimplified frequently, considering one-stage first-order kinetics of inactivation and data gathered under nonreactive conditions that poorly represent the actual conditions within the reactor. More complex mechanisms are frequent, especially in the case of immobilized enzymes, and most important is the effect of catalytic modulators (substrates and products) on enzyme stability under operation conditions. This review focuses primarily on reactor design and operation under modulated thermal inactivation. It also presents a scheme for bioreactor temperature optimization, based on validated temperature-explicit functions for all the kinetic and inactivation parameters involved. More conventional enzyme reactor design is presented merely as a background for the purpose of highlighting the need for a deeper insight into enzyme inactivation for proper bioreactor design.
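The one-stage first-order inactivation model that the review calls an oversimplification still illustrates how catalysis and enzyme decay couple in a reactor. A minimal batch-reactor sketch with Michaelis-Menten kinetics; all parameter values are illustrative, not from the review:

```python
def batch_reactor(s0=100.0, e0=1.0, kcat=10.0, km=20.0, kd=0.05,
                  dt=0.01, t_end=10.0):
    """Euler integration of a batch reactor in which the enzyme decays
    by first-order thermal inactivation (de/dt = -kd * e) while
    converting substrate via Michaelis-Menten kinetics
    (ds/dt = -kcat * e * s / (km + s))."""
    s, e, t = s0, e0, 0.0
    while t < t_end:
        rate = kcat * e * s / (km + s)   # Michaelis-Menten rate
        s = max(s - rate * dt, 0.0)      # substrate consumed
        e *= (1.0 - kd * dt)             # first-order inactivation
        t += dt
    return s, e

s_final, e_final = batch_reactor()
print(s_final < 100.0, e_final < 1.0)  # → True True
```

Making kd temperature-dependent (e.g., Arrhenius-type) and sweeping temperature is the kind of optimization the review advocates: a hotter reactor converts faster but loses enzyme sooner.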
A 24 km fiber-based discretely signaled continuous variable quantum key distribution system.
Dinh Xuan, Quyen; Zhang, Zheshen; Voss, Paul L
2009-12-21
We report a continuous variable key distribution system that achieves a final secure key rate of 3.45 kilobits/s over a distance of 24.2 km of optical fiber. The protocol uses discrete signaling and post-selection to improve reconciliation speed and quantifies security by means of quantum state tomography. Polarization multiplexing and a frequency translation scheme permit transmission of a continuous wave local oscillator and suppression of noise from guided acoustic wave Brillouin scattering by more than 27 dB.
Coherent attacking continuous-variable quantum key distribution with entanglement in the middle
NASA Astrophysics Data System (ADS)
Zhang, Zhaoyuan; Shi, Ronghua; Zeng, Guihua; Guo, Ying
2018-06-01
We suggest an approach to coherent attacks on continuous-variable quantum key distribution (CVQKD) with an untrusted entangled source in the middle. The coherent attack strategy can be performed on both links of the quantum system, enabling the eavesdropper to exploit the entanglement correlation to steal more information from the proposed scheme. Numerical simulation results show the performance of the attacked CVQKD system in terms of the derived secret key rate, with the controllable parameters chosen to maximize the stolen information.
Learning abstract visual concepts via probabilistic program induction in a Language of Thought.
Overlan, Matthew C; Jacobs, Robert A; Piantadosi, Steven T
2017-11-01
The ability to learn abstract concepts is a powerful component of human cognition. It has been argued that variable binding is the key element enabling this ability, but the computational aspects of variable binding remain poorly understood. Here, we address this shortcoming by formalizing the Hierarchical Language of Thought (HLOT) model of rule learning. Given a set of data items, the model uses Bayesian inference to infer a probability distribution over stochastic programs that implement variable binding. Because the model makes use of symbolic variables as well as Bayesian inference and programs with stochastic primitives, it combines many of the advantages of both symbolic and statistical approaches to cognitive modeling. To evaluate the model, we conducted an experiment in which human subjects viewed training items and then judged which test items belong to the same concept as the training items. We found that the HLOT model provides a close match to human generalization patterns, significantly outperforming two variants of the Generalized Context Model, one variant based on string similarity and the other based on visual similarity using features from a deep convolutional neural network. Additional results suggest that variable binding happens automatically, implying that binding operations do not add complexity to peoples' hypothesized rules. Overall, this work demonstrates that a cognitive model combining symbolic variables with Bayesian inference and stochastic program primitives provides a new perspective for understanding people's patterns of generalization. Copyright © 2017 Elsevier B.V. All rights reserved.
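The core inference step, Bayesian updating of a distribution over candidate rules, can be sketched with toy hypotheses standing in for actual HLOT programs (the two concepts below are invented for illustration):

```python
def posterior(priors, likelihoods, data):
    """Bayes rule over a discrete hypothesis space.
    priors: {hypothesis: P(h)};
    likelihoods: {hypothesis: fn(item) -> P(item | h)}."""
    unnorm = {}
    for h, p in priors.items():
        like = 1.0
        for item in data:
            like *= likelihoods[h](item)   # i.i.d. likelihood of the data
        unnorm[h] = p * like
    z = sum(unnorm.values())
    return {h: v / z for h, v in unnorm.items()}

# Toy concepts over 2-character strings from a 4-letter alphabet:
# "doubled" generates only repeated characters (4 strings);
# "any_pair" generates any of the 16 strings uniformly.
priors = {"doubled": 0.5, "any_pair": 0.5}
likelihoods = {
    "doubled": lambda s: 0.25 if s[0] == s[1] else 0.0,
    "any_pair": lambda s: 1.0 / 16.0,
}
post = posterior(priors, likelihoods, ["aa", "bb", "cc"])
print(post["doubled"] > post["any_pair"])  # → True
```

The narrower hypothesis wins because it assigns higher likelihood to the consistent data, the "size principle" that also drives generalization in richer program-induction models.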
Simple Sensitivity Analysis for Orion GNC
NASA Technical Reports Server (NTRS)
Pressburger, Tom; Hoelscher, Brian; Martin, Rodney; Sricharan, Kumar
2013-01-01
The performance of Orion flight software, especially its GNC software, is being analyzed by running Monte Carlo simulations of Orion spacecraft flights. The simulated performance is analyzed for conformance with flight requirements, expressed as performance constraints. Flight requirements include guidance (e.g., touchdown distance from target) and control (e.g., control saturation) as well as performance (e.g., heat load constraints). The Monte Carlo simulations disperse hundreds of simulation input variables, for everything from mass properties to date of launch. We describe in this paper a sensitivity analysis tool (Critical Factors Tool or CFT) developed to find the input variables or pairs of variables which by themselves significantly influence satisfaction of requirements or significantly affect key performance metrics (e.g., touchdown distance from target). Knowing these factors can inform robustness analysis, can inform where engineering resources are most needed, and could even affect operations. The contributions of this paper include the introduction of novel sensitivity measures, such as estimating success probability, and a technique for determining whether pairs of factors are interacting dependently or independently. The tool found that input variables such as moments, mass, thrust dispersions, and date of launch were significant factors for the success of various requirements. Examples are shown in this paper as well as a summary and physics discussion of EFT-1 driving factors that the tool found.
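One of the CFT's sensitivity measures, estimating success probability as a function of a single dispersed input, can be sketched as follows (a minimal illustration with synthetic data; the variable names and binning scheme are our own, not the tool's):

```python
import random

random.seed(1)

def estimate_success_by_bin(xs, successes, n_bins=5):
    """Estimate P(success | x in bin) by partitioning the dispersed input's range."""
    lo, hi = min(xs), max(xs)
    width = (hi - lo) / n_bins or 1.0
    hits = [0] * n_bins
    counts = [0] * n_bins
    for x, ok in zip(xs, successes):
        b = min(int((x - lo) / width), n_bins - 1)
        counts[b] += 1
        hits[b] += ok
    return [h / c if c else None for h, c in zip(hits, counts)]

# Toy Monte Carlo: the requirement fails more often at high values of this input.
xs = [random.uniform(0.0, 1.0) for _ in range(20000)]
successes = [1 if random.random() > 0.8 * x else 0 for x in xs]

probs = estimate_success_by_bin(xs, successes)
# A strong trend in P(success) across bins marks the input as a critical factor.
assert probs[0] > probs[-1]
```

An input whose conditional success probability is flat across bins is, by this measure, not a driving factor; a strong monotone trend like the one above flags it for robustness work.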
25 MHz clock continuous-variable quantum key distribution system over 50 km fiber channel
Wang, Chao; Huang, Duan; Huang, Peng; Lin, Dakai; Peng, Jinye; Zeng, Guihua
2015-01-01
In this paper, a practical continuous-variable quantum key distribution system is developed and run under real-world conditions with a 25 MHz clock rate. To reach a high rate, we have employed a homodyne detector with bandwidth up to 300 MHz and an optimal high-efficiency error reconciliation algorithm with processing speed up to 25 Mbps. To optimize the stability of the system, several key techniques are developed, including a novel phase compensation algorithm, a polarization feedback algorithm, and a related stability method for the modulators. In practice, our system has been tested for more than 12 hours with a final secret key rate of 52 kbps over a 50 km transmission distance, which is the highest rate so far at such a distance. Our system may pave the way for practical broadband secure quantum communication with continuous variables under commercial conditions. PMID:26419413
Teleportation-based continuous variable quantum cryptography
NASA Astrophysics Data System (ADS)
Luiz, F. S.; Rigolin, Gustavo
2017-03-01
We present a continuous variable (CV) quantum key distribution (QKD) scheme based on the CV quantum teleportation of coherent states that yields a raw secret key made up of discrete variables for both Alice and Bob. This protocol preserves the efficient detection schemes of current CV technology (no single-photon detection techniques) and, at the same time, has efficient error correction and privacy amplification schemes due to the binary modulation of the key. We show that for a certain type of incoherent attack, it is secure for almost any value of the transmittance of the optical line used by Alice to share entangled two-mode squeezed states with Bob (no 3 dB or 50% loss limitation characteristic of beam splitting attacks). The present CVQKD protocol works deterministically (no postselection needed) with efficient direct reconciliation techniques (no reverse reconciliation) in order to generate a secure key and beyond the 50% loss case at the incoherent attack level.
The SECOQC quantum key distribution network in Vienna
NASA Astrophysics Data System (ADS)
Peev, M.; Pacher, C.; Alléaume, R.; Barreiro, C.; Bouda, J.; Boxleitner, W.; Debuisschert, T.; Diamanti, E.; Dianati, M.; Dynes, J. F.; Fasel, S.; Fossier, S.; Fürst, M.; Gautier, J.-D.; Gay, O.; Gisin, N.; Grangier, P.; Happe, A.; Hasani, Y.; Hentschel, M.; Hübel, H.; Humer, G.; Länger, T.; Legré, M.; Lieger, R.; Lodewyck, J.; Lorünser, T.; Lütkenhaus, N.; Marhold, A.; Matyus, T.; Maurhart, O.; Monat, L.; Nauerth, S.; Page, J.-B.; Poppe, A.; Querasser, E.; Ribordy, G.; Robyr, S.; Salvail, L.; Sharpe, A. W.; Shields, A. J.; Stucki, D.; Suda, M.; Tamas, C.; Themel, T.; Thew, R. T.; Thoma, Y.; Treiber, A.; Trinkler, P.; Tualle-Brouri, R.; Vannel, F.; Walenta, N.; Weier, H.; Weinfurter, H.; Wimberger, I.; Yuan, Z. L.; Zbinden, H.; Zeilinger, A.
2009-07-01
In this paper, we present the quantum key distribution (QKD) network designed and implemented by the European project SEcure COmmunication based on Quantum Cryptography (SECOQC) (2004-2008), unifying the efforts of 41 research and industrial organizations. The paper summarizes the SECOQC approach to QKD networks with a focus on the trusted repeater paradigm. It discusses the architecture and functionality of the SECOQC trusted repeater prototype, which was put into operation in Vienna in 2008 and publicly demonstrated in the framework of a SECOQC QKD conference held from October 8 to 10, 2008. The demonstration involved one-time pad encrypted telephone communication, a secure (AES encryption protected) video-conference with all deployed nodes and a number of rerouting experiments, highlighting basic mechanisms of the SECOQC network functionality. The paper gives an overview of the eight point-to-point network links in the prototype and their underlying technology: three plug and play systems by id Quantique, a one way weak pulse system from Toshiba Research in the UK, a coherent one-way system by GAP Optique with the participation of id Quantique and the AIT Austrian Institute of Technology (formerly ARC, Austrian Research Centers GmbH; following a restructuring initiative, ARC now operates as AIT Austrian Institute of Technology GmbH), an entangled photons system by the University of Vienna and the AIT, a continuous-variables system by Centre National de la Recherche Scientifique (CNRS) and THALES Research and Technology with the participation of Université Libre de Bruxelles, and a free space link by the Ludwig Maximilians University in Munich connecting two nodes situated in adjacent buildings (line of sight 80 m). The average link length is between 20 and 30 km, the longest link being 83 km.
The paper presents the architecture and functionality of the principal networking agent—the SECOQC node module, which enables the authentic classical communication required for key distillation, manages the generated key material, determines a communication path between any destinations in the network, and realizes end-to-end secure transport of key material between these destinations. The paper also illustrates the operation of the network in a number of typical exploitation regimes and gives an initial estimate of the network transmission capacity, defined as the maximum amount of key that can be exchanged, or alternatively the amount of information that can be transmitted with information theoretic security, between two arbitrary nodes.
Reinforcement and Induction of Operant Variability
ERIC Educational Resources Information Center
Neuringer, Allen
2012-01-01
The target paper by Barba (2012) raises issues that were the focus of the author's first two publications on operant variability. The author will describe the main findings in those papers and then discuss Barba's specific arguments. Barba has argued against the operant nature of variability. (Contains 2 figures.)
Eroglu, Duygu Yilmaz; Ozmutlu, H Cenk
2014-01-01
We developed mixed integer programming (MIP) models and hybrid genetic-local search algorithms for the scheduling problem of unrelated parallel machines with job sequence- and machine-dependent setup times and with the job splitting property. The first contribution of this paper is to introduce novel algorithms which perform splitting and scheduling simultaneously with a variable number of subjobs. We propose a simple chromosome structure constituted by random key numbers for the hybrid genetic-local search algorithm (GAspLA). Random key numbers are used frequently in genetic algorithms, but they create additional difficulty when hybrid factors in local search are implemented. We developed algorithms that adapt the results of local search into the genetic algorithm with a minimum relocation operation of the genes' random key numbers. This is the second contribution of the paper. The third contribution is three new MIP models which perform splitting and scheduling simultaneously. The fourth contribution is the implementation of GAspLAMIP. This implementation lets us verify the optimality of GAspLA for the studied combinations. The proposed methods are tested on a set of problems taken from the literature, and the results validate the effectiveness of the proposed algorithms.
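The random-key chromosome at the heart of GAspLA can be sketched in a few lines (a generic illustration of the random-key encoding, not the authors' code): sorting the keys turns any vector of floats into a valid job permutation, which is what makes crossover and local-search moves safe.

```python
import random

def decode(random_keys):
    """Random-key decoding: each gene is a float in [0, 1); sorting job indices
    by their keys yields a job order, so any key vector is a feasible schedule."""
    return sorted(range(len(random_keys)), key=lambda j: random_keys[j])

random.seed(7)
chromosome = [random.random() for _ in range(5)]
order = decode(chromosome)
assert sorted(order) == [0, 1, 2, 3, 4]  # always a valid permutation
```

Because every real-valued chromosome decodes to a feasible permutation, a local-search improvement can be written back into the population simply by rearranging the affected genes' keys, which is the "minimum relocation" idea the abstract describes.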
Cost analysis of open radical cystectomy versus robot-assisted radical cystectomy.
Bansal, Sukhchain S; Dogra, Tara; Smith, Peter W; Amran, Maisarah; Auluck, Ishna; Bhambra, Maninder; Sura, Manraj S; Rowe, Edward; Koupparis, Anthony
2018-03-01
To perform a cost analysis comparing the cost of robot-assisted radical cystectomy (RARC) with open RC (ORC) in a UK tertiary referral centre and to identify the key cost drivers. Data on hospital length of stay (LOS), operative time (OT), transfusion rate, and volume and complication rate were obtained from a prospectively updated institutional database for patients undergoing RARC or ORC. A cost decision tree model was created. Sensitivity analysis was performed to find key drivers of overall cost and to find breakeven points with ORC. Monte Carlo analysis was performed to quantify the variability in the dataset. One RARC procedure costs £12 449.87, or £12 106.12 if the robot was donated via charitable funds. In comparison, one ORC procedure costs £10 474.54. RARC is 18.9% more expensive than ORC. The key cost drivers were OT, LOS, and the number of cases performed per annum. High ongoing equipment costs remain a large barrier to the cost of RARC falling. However, minimal improvements in patient quality of life would be required to offset this difference. © 2017 The Authors BJU International © 2017 BJU International Published by John Wiley & Sons Ltd.
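The abstract's point that annual case volume drives the breakeven against ORC can be illustrated with a toy amortisation model (the fixed/variable split below is hypothetical; the paper reports only total per-case costs):

```python
def cost_per_case(variable_cost, annual_fixed_cost, cases_per_year):
    """Per-case cost when fixed equipment costs are amortised over annual volume."""
    return variable_cost + annual_fixed_cost / cases_per_year

# Hypothetical split (the paper reports totals, not this breakdown):
orc_cost = 10474.54        # reported per-case cost of open surgery (GBP)
rarc_variable = 9000.0     # assumed per-case consumables and theatre time
rarc_fixed = 140000.0      # assumed annual robot lease/maintenance

# Scan annual volumes for the breakeven point against open surgery.
breakeven = next(n for n in range(1, 1000)
                 if cost_per_case(rarc_variable, rarc_fixed, n) <= orc_cost)
print(breakeven)  # 95 cases per year under these assumed numbers
```

The qualitative conclusion matches the abstract: with high fixed equipment costs, only a sufficiently large annual caseload brings the per-case cost of RARC down toward ORC.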
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hagans, K.G.; Clough, R.E.
2000-04-25
An optical key system comprises a battery-operated optical key and an isolated lock that derives both its operating power and unlock signals from the correct optical key. A light emitting diode or laser diode is included within the optical key and is connected to transmit a bit-serial password. The key user physically enters either the code-to-transmit directly, or an index to a pseudorandom number code, in the key. Such person identification numbers can be permanent or ephemeral. When a send button is pressed, the key transmits a beam of light modulated with the password information. The modulated beam of light is received by a corresponding optical lock with a photovoltaic cell that produces enough power from the beam of light to operate password-screening digital logic. In one application, an acceptable password allows a two-watt laser diode to pump ignition and timing information over a fiberoptic cable into a sealed engine compartment. The receipt of a good password allows the fuel pump, spark, and starter systems to each operate. Therefore, bypassing the lock mechanism, as is now routine among automobile thieves, is pointless because the engine is so thoroughly disabled.
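The bit-serial password exchange described above can be sketched as follows (a simulation of the protocol logic only; the function names and ASCII framing are our assumptions, and the optical layer is abstracted away):

```python
def to_bits(password: str):
    """Serialise a password to the bit stream that would modulate the key's diode."""
    return [int(b) for ch in password.encode("ascii") for b in format(ch, "08b")]

def from_bits(bits):
    """Reassemble 8-bit frames back into the transmitted password."""
    chars = [int("".join(map(str, bits[i:i + 8])), 2) for i in range(0, len(bits), 8)]
    return bytes(chars).decode("ascii")

def lock_accepts(received_bits, stored_password: str) -> bool:
    """The lock's password screen: only a matching bit-serial code enables power-up."""
    try:
        return from_bits(received_bits) == stored_password
    except (ValueError, UnicodeDecodeError):
        return False  # garbled or truncated transmissions are rejected

beam = to_bits("7F2A")          # modulated onto the light beam
assert lock_accepts(beam, "7F2A")
assert not lock_accepts(to_bits("0000"), "7F2A")
```

In the patented system the same comparison would be done by the photovoltaic-powered logic in the lock, with the accepted password gating power to the fuel pump, spark, and starter circuits.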
The Mauna Kea Weather Center: Custom Atmospheric Forecasting Support for Mauna Kea
NASA Astrophysics Data System (ADS)
Businger, Steven
2011-03-01
The success of operations at Mauna Kea Observatories is strongly influenced by weather conditions. The Mauna Kea Weather Center, an interdisciplinary research program, was established in 1999 to develop and provide custom weather support for Mauna Kea Observatories. The operational forecasting goals of the program are to facilitate the best possible use of favorable atmospheric conditions for scientific benefit and to ensure operational safety. During persistent clear periods, astronomical observing quality varies substantially due to changes in the vertical profiles of temperature, wind, moisture, and turbulence. Cloud and storm systems occasionally cause adverse or even hazardous conditions. A dedicated, daily, real-time mesoscale numerical modeling effort provides crucial forecast guidance in both cases. Several key atmospheric variables are forecast with sufficient skill to be of operational and scientific benefit to the telescopes on Mauna Kea. Summit temperature forecasts allow mirrors to be set to the ambient temperature to reduce image distortion. Precipitable water forecasts allow infrared observations to be prioritized according to atmospheric opacity. Forecasts of adverse and hazardous conditions protect the safety of personnel and allow for scheduling of maintenance when observing is impaired by cloud. The research component of the project continues to improve the accuracy and content of the forecasts. In particular, case studies have resulted in operational forecasts of astronomical observing quality, or seeing.
The Copernicus Climate Change Service (C3S): A European Answer to Climate Change
NASA Astrophysics Data System (ADS)
Thepaut, Jean-Noel
2016-04-01
Copernicus is the European Commission's flagship Earth observation programme that delivers freely accessible operational data and information services. ECMWF has been entrusted to operate two key parts of the Copernicus programme, which will bring a consistent standard to the measurement, forecasting and prediction of atmospheric conditions and climate change: • The Copernicus Atmosphere Monitoring Service, CAMS, provides daily forecasts detailing the composition of the atmosphere from the ground up to the stratosphere. • The Copernicus Climate Change Service (C3S) (in development) will routinely monitor and analyse more than 20 essential climate variables to build a global picture of our climate, from the past to the future, as well as developing customisable climate indicators for relevant economic sectors, such as energy, water management, agriculture, insurance, health…. C3S has now taken off and a number of proof-of-concept sectoral climate services have been initiated. This paper will focus on the description and expected outcome of these proof-of-concept activities as well as the definition of a roadmap towards a fully operational European Climate Change Service.
NASA Technical Reports Server (NTRS)
Vascik, Parker D.; Jung, Jaewoo
2016-01-01
An economic impact market analysis was conducted for 16 leading sectors of commercial Unmanned Aerial System (UAS) applications predicted to be enabled by 2020 through the NASA UAS Traffic Management (UTM) program. Subject matter experts from seven industries were interviewed to validate concept of operations (ConOps) and market adoption assumptions for each sector. The market analysis was used to estimate direct economic impacts for each sector including serviceable addressable market, capital investment, revenue recovery potential, and operations cost savings. The resultant economic picture distinguishes the agricultural, pipeline and railroad inspection, construction, and maritime sectors of the nascent commercial UAS industry as providing the highest potential economic value in the United States. Sensitivity studies characterized the variability of select UAS sectors' economic value with respect to key regulatory or UTM ConOps requirements such as weight, altitude, and flight over populated area constraints. Takeaways from the analysis inform the validation of UTM requirements, technologies and timetables from a commercial market need and value viewpoint. This work concluded in August 2015 and reflects the state of the UAS industry and market projections at that time.
Radiation effect on rocket engine performance
NASA Technical Reports Server (NTRS)
Chiu, Huei-Huang
1988-01-01
The effects of radiation on the performance of modern rocket propulsion systems operating at high pressure and temperature were recognized as a key issue in the design and operation of various liquid rocket engines of the current and future generations. Critical problem areas of radiation coupled with combustion of bipropellants are assessed and accounted for in the formulation of a universal scaling law incorporated with a radiation-enhanced vaporization combustion model. Numerical algorithms are developed and the pertaining data of the Variable Thrust Engine (VTE) and Space Shuttle Main Engine (SSME) are used to conduct parametric sensitivity studies to predict the principal intercoupling effects of radiation. The analysis reveals that low enthalpy engines, such as the VTE, are vulnerable to a substantial performance setback due to radiative loss, whereas the performance of high enthalpy engines, such as the SSME, is hardly affected over a broad range of engine operation. Additionally, combustion enhancement by radiative heating of the propellant has a significant impact for propellants with high absorptivity. Finally, the areas of research related to radiation phenomena in bipropellant engines are identified.
Preliminary Design of the Low Speed Propulsion Air Intake of the LAPCAT-MR2 Aircraft
NASA Astrophysics Data System (ADS)
Meerts, C.; Steelant, J.; Hendrick, P.
2011-08-01
A supersonic air intake has been designed for the low speed propulsion system of the LAPCAT-MR2 aircraft. The development has been based on the XB-70 aircraft air intake, which achieves extremely high performance over a wide operating range through the combined use of variable geometry and porous wall suction for boundary layer control. The design of the LAPCAT-MR2 intake has been carried out through CFD simulations using the DLR TAU-Code (perfect gas model - Menter SST turbulence model). First, a new boundary condition for porous wall suction modelling has been implemented and validated in the DLR TAU-Code (perfect gas model). Standard test cases have shown surprisingly good agreement with both theoretical predictions and experimental results. Based upon this validation, XB-70 air intake performance has been assessed through CFD simulations over the subsonic, transonic and supersonic operation regions and compared to available flight data. A new simulation strategy was deployed to avoid numerical instabilities when initiating the flow in both transonic and supersonic operation modes. First, the flow must be initiated with a far field Mach number higher than the target flight Mach number. Additionally, the inlet backpressure may only be increased to its target value once the oblique shock pattern downstream of the intake compression ramps is converged. Simulations using this strategy have shown excellent agreement with in-flight measurements for both total pressure recovery ratio and variable geometry schedule prediction. The demarcation between stable and unstable operation could be well reproduced. Finally, a modified version of the XB-70 air intake has been integrated in the elliptical intake of the LAPCAT vehicle. Operation of this intake in the LAPCAT-MR2 environment is under evaluation using the same simulation strategy as the one developed for the XB-70. Performance is assessed at several key operation points to evaluate the viability of this design.
This information will allow in a next phase to better quantify the operation of the aerojet engines from take-off till the switch-over flight Mach number for the dual mode ramjet.
Machine learning for real time remote detection
NASA Astrophysics Data System (ADS)
Labbé, Benjamin; Fournier, Jérôme; Henaff, Gilles; Bascle, Bénédicte; Canu, Stéphane
2010-10-01
Infrared systems are key to providing enhanced capability to military forces, such as automatic control of threats and prevention of air, naval and ground attacks. Key requirements for such a system to produce operational benefits are real-time processing as well as high efficiency in terms of detection and false alarm rate. These are serious issues since the system must deal with a large number of objects and categories to be recognized (small vehicles, armored vehicles, planes, buildings, etc.). Statistical learning based algorithms are promising candidates to meet these requirements when using selected discriminant features and real-time implementation. This paper proposes a new decision architecture benefiting from recent advances in machine learning by using an effective method for level set estimation. While building the decision function, the proposed approach performs variable selection based on a discriminative criterion. Moreover, the use of level sets makes it possible to manage rejection of unknown or ambiguous objects, thus preserving the false alarm rate. Experimental evidence reported on real-world infrared images demonstrates the validity of our approach.
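The level-set rejection idea can be sketched with one-dimensional class scores (a toy stand-in for the paper's learned decision functions; the classes, densities and threshold are invented for illustration):

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Class-conditional score; the paper learns these, we just posit Gaussians."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def classify_with_rejection(x, classes, level):
    """Assign the best-scoring class only inside its density level set;
    otherwise reject the object as unknown/ambiguous (controls false alarms)."""
    scores = {name: gaussian_pdf(x, mu, sigma) for name, (mu, sigma) in classes.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= level else "reject"

classes = {"vehicle": (0.0, 1.0), "building": (6.0, 1.0)}
assert classify_with_rejection(0.1, classes, level=0.05) == "vehicle"
assert classify_with_rejection(3.0, classes, level=0.05) == "reject"
```

Points falling outside every class's level set are rejected rather than forced into a category, which is how a level-set classifier keeps the false alarm rate from being inflated by unknown objects.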
Feng, Shi-Jin; Cao, Ben-Yi; Xie, Hai-Jian
2017-10-01
Leachate recirculation in municipal solid waste (MSW) landfills operated as bioreactors offers significant economic and environmental benefits. Combined drainage blanket (DB)-horizontal trench (HT) systems can be an alternative to single conventional recirculation approaches and can have competitive advantages. The key objectives of this study are to investigate combined DB-HT systems, to analyze the effects of applying the two recirculation systems on leachate migration in landfills, and to estimate some key design parameters (e.g., the steady-state flow rate, the influence width, and the cumulative leachate volume). It was determined that an effective recirculation model should consist of a moderate horizontal trench injection pressure head and supplementary leachate recirculated through the drainage blanket, with the objective of increasing the horizontal unsaturated hydraulic conductivity and thereby allowing more leachate to flow from the horizontal trench system in the horizontal direction. In addition, design charts for engineering application were established using a dimensionless variable formulation.
Bonaretti, Serena; Vilayphiou, Nicolas; Chan, Caroline Mai; Yu, Andrew; Nishiyama, Kyle; Liu, Danmei; Boutroy, Stephanie; Ghasem-Zadeh, Ali; Boyd, Steven K.; Chapurlat, Roland; McKay, Heather; Shane, Elizabeth; Bouxsein, Mary L.; Black, Dennis M.; Majumdar, Sharmila; Orwoll, Eric S.; Lang, Thomas F.; Khosla, Sundeep; Burghardt, Andrew J.
2017-01-01
Introduction HR-pQCT is increasingly used to assess bone quality, fracture risk and anti-fracture interventions. The contribution of the operator has not been adequately accounted for in measurement precision. Operators acquire a 2D projection (“scout view image”) and define the region to be scanned by positioning a “reference line” on a standard anatomical landmark. In this study, we (i) evaluated the contribution of positioning variability to in vivo measurement precision, (ii) measured intra- and inter-operator positioning variability, and (iii) tested if custom training software led to superior reproducibility in new operators compared to experienced operators. Methods To evaluate the operator's contribution to in vivo measurement precision, we compared precision errors calculated in 64 co-registered and non-co-registered scan-rescan images. To quantify operator variability, we developed software that simulates the positioning process of the scanner's software. Eight experienced operators positioned reference lines on scout view images designed to test intra- and inter-operator reproducibility. Finally, we developed modules for training and evaluation of reference line positioning. We enrolled 6 new operators to participate in a common training, followed by the same reproducibility experiments performed by the experienced group. Results In vivo precision errors were up to three-fold greater (Tt.BMD and Ct.Th) when variability in scan positioning was included. Inter-operator precision errors were significantly greater than short-term intra-operator precision (p<0.001). New trained operators achieved comparable intra-operator reproducibility to experienced operators, and lower inter-operator reproducibility (p<0.001). Precision errors were significantly greater for the radius than for the tibia. Conclusion Operator reference line positioning contributes significantly to in vivo measurement precision and is significantly greater for multi-operator datasets.
Inter-operator variability can be significantly reduced using a systematic training platform, now available online (http://webapps.radiology.ucsf.edu/refline/). PMID:27475931
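Short-term precision errors of the kind reported here are conventionally summarised as a root-mean-square coefficient of variation (RMS-CV%) over scan-rescan pairs; a minimal sketch (the Tt.BMD values are invented for illustration, not study data):

```python
import math

def rms_cv_percent(repeat_measurements):
    """Short-term precision error: RMS of per-subject CVs over scan-rescan pairs."""
    cvs = []
    for values in repeat_measurements:
        mean = sum(values) / len(values)
        var = sum((v - mean) ** 2 for v in values) / (len(values) - 1)
        cvs.append(math.sqrt(var) / mean)
    return 100.0 * math.sqrt(sum(cv ** 2 for cv in cvs) / len(cvs))

# Toy scan-rescan Tt.BMD values (mg HA/cm^3) for three subjects:
data = [(310.0, 314.0), (282.0, 281.0), (250.0, 255.0)]
print(round(rms_cv_percent(data), 2))  # prints 0.97
```

Comparing this statistic between co-registered and non-co-registered rescans, as the study does, isolates how much of the precision error is attributable to operator positioning rather than the measurement itself.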
Towards Reliable Evaluation of Anomaly-Based Intrusion Detection Performance
NASA Technical Reports Server (NTRS)
Viswanathan, Arun
2012-01-01
This report describes the results of research into the effects of environment-induced noise on the evaluation process for anomaly detectors in the cyber security domain. This research was conducted during a 10-week summer internship program from the 19th of August, 2012 to the 23rd of August, 2012 at the Jet Propulsion Laboratory in Pasadena, California. The research performed lies within the larger context of the Los Angeles Department of Water and Power (LADWP) Smart Grid cyber security project, a Department of Energy (DoE) funded effort involving the Jet Propulsion Laboratory, California Institute of Technology and the University of Southern California/Information Sciences Institute. The results of the present effort constitute an important contribution towards building more rigorous evaluation paradigms for anomaly-based intrusion detectors in complex cyber physical systems such as the Smart Grid. Anomaly detection is a key strategy for cyber intrusion detection; it operates by identifying deviations from profiles of nominal behavior and is thus conceptually appealing for detecting "novel" attacks. Evaluating the performance of such a detector requires assessing: (a) how well it captures the model of nominal behavior, and (b) how well it detects attacks (deviations from normality). Current evaluation methods produce results that give insufficient insight into the operation of a detector, inevitably resulting in a poor characterization of a detector's performance. In this work, we first describe a preliminary taxonomy of key evaluation constructs that are necessary for establishing rigor in the evaluation regime of an anomaly detector. We then focus on clarifying the impact of the operational environment on the manifestation of attacks in monitored data. We show how dynamic and evolving environments can introduce high variability into the data stream, perturbing detector performance.
Prior research has focused on understanding the impact of this variability in training data for anomaly detectors, but has ignored variability in the attack signal that will necessarily affect the evaluation results for such detectors. We posit that current evaluation strategies implicitly assume that attacks always manifest in a stable manner; we show that this assumption is wrong. We describe a simple experiment to demonstrate the effects of environmental noise on the manifestation of attacks in data and introduce the notion of attack manifestation stability. Finally, we argue that conclusions about detector performance will be unreliable and incomplete if the stability of attack manifestation is not accounted for in the evaluation strategy.
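The effect of environmental noise on attack manifestation can be demonstrated with a toy z-score detector (our construction for illustration; the report's detectors and data are far richer):

```python
import random
import statistics

def attack_zscore(noise_sigma, attack_magnitude=5.0, seed=0):
    """Inject an identical attack into backgrounds of different noisiness and
    report how strongly it stands out from the nominal profile (z-score)."""
    rng = random.Random(seed)
    series = [rng.gauss(0.0, noise_sigma) for _ in range(500)]
    series[250] += attack_magnitude  # the attack signal itself never changes
    mu = statistics.mean(series)
    sd = statistics.pstdev(series)
    return abs(series[250] - mu) / sd

# Environment noise alone decides how clearly the attack manifests:
quiet, noisy = attack_zscore(noise_sigma=0.5), attack_zscore(noise_sigma=5.0)
assert quiet > noisy  # same attack, far less stable manifestation in noisy data
```

The attack injected into both series is byte-for-byte identical; only the environment differs. That is the instability the report argues evaluation strategies must account for instead of assuming attacks always manifest the same way.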
High-efficiency Gaussian key reconciliation in continuous variable quantum key distribution
NASA Astrophysics Data System (ADS)
Bai, ZengLiang; Wang, XuYang; Yang, ShenShen; Li, YongMin
2016-01-01
Efficient reconciliation is a crucial step in continuous variable quantum key distribution. The progressive-edge-growth (PEG) algorithm is an efficient method to construct relatively short block length low-density parity-check (LDPC) codes. The quasi-cyclic construction method can extend short block length codes and further eliminate the shortest cycle. In this paper, by combining the PEG algorithm and the quasi-cyclic construction method, we design long block length irregular LDPC codes with high error-correcting capacity. Based on these LDPC codes, we achieve high-efficiency Gaussian key reconciliation with slice reconciliation based on multilevel coding/multistage decoding with an efficiency of 93.7%.
High-efficiency reconciliation for continuous variable quantum key distribution
NASA Astrophysics Data System (ADS)
Bai, Zengliang; Yang, Shenshen; Li, Yongmin
2017-04-01
Quantum key distribution (QKD) is the most mature application of quantum information technology. Information reconciliation is a crucial step in QKD and significantly affects the final secret key rates shared between the two legitimate parties. We analyze and compare various construction methods of low-density parity-check (LDPC) codes and design high-performance irregular LDPC codes with a block length of 10^6. Starting from these good codes and exploiting the slice reconciliation technique based on multilevel coding and multistage decoding, we realize high-efficiency Gaussian key reconciliation with efficiency higher than 95% for signal-to-noise ratios above 1. Our demonstrated method can be readily applied in continuous variable QKD.
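The reconciliation efficiency quoted in these abstracts is the ratio of the code rate to the Shannon capacity of the Gaussian channel; a small sketch with illustrative numbers (the code rate below is chosen to hit beta = 0.95, not taken from the paper):

```python
import math

def shannon_capacity(snr):
    """Capacity of the Gaussian channel in bits per symbol: 0.5 * log2(1 + SNR)."""
    return 0.5 * math.log2(1.0 + snr)

def reconciliation_efficiency(code_rate, snr):
    """beta = R / C: the fraction of the mutual information the LDPC code extracts."""
    return code_rate / shannon_capacity(snr)

snr = 1.0
print(round(shannon_capacity(snr), 3))                  # 0.5 bit/symbol at SNR = 1
print(round(reconciliation_efficiency(0.475, snr), 2))  # beta = 0.95
```

Because the final secret key rate scales with beta times the mutual information minus the eavesdropper's information, pushing beta from 93.7% toward 95% and beyond translates directly into higher key rates and longer reachable distances.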
Multi-time Scale Coordination of Distributed Energy Resources in Isolated Power Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mayhorn, Ebony; Xie, Le; Butler-Purry, Karen
2016-03-31
In isolated power systems, including microgrids, distributed assets, such as renewable energy resources (e.g. wind, solar) and energy storage, can be actively coordinated to reduce dependency on fossil fuel generation. The key challenge of such coordination arises from significant uncertainty and variability occurring at small time scales associated with increased penetration of renewables. Specifically, the problem is ensuring economic and efficient utilization of DERs while also meeting operational objectives such as adequate frequency performance. One possible solution is to reduce the time step at which tertiary controls are implemented and to ensure feedback and look-ahead capability are incorporated to handle variability and uncertainty. However, reducing the time step of tertiary controls necessitates investigating time-scale coupling with primary controls so as not to exacerbate system stability issues. In this paper, an optimal coordination (OC) strategy, which considers multiple time scales, is proposed for isolated microgrid systems with a mix of DERs. This coordination strategy is based on an online moving horizon optimization approach. The effectiveness of the strategy was evaluated in terms of economics, technical performance, and computation time by varying key parameters that significantly impact performance. The illustrative example with realistic scenarios on a simulated isolated microgrid test system suggests that the proposed approach is generalizable towards designing multi-time scale optimal coordination strategies for isolated power systems.
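The moving-horizon idea can be caricatured in a few lines: at each step the controller looks a few steps ahead in the net-load forecast and biases a storage device accordingly. This is a deliberately simplified sketch with invented numbers and a greedy rule, not the paper's optimization model:

```python
# Toy receding-horizon coordination of a battery in an isolated microgrid.
# All numbers and the greedy dispatch rule are illustrative assumptions.

def dispatch(net_load, soc, cap, p_max, horizon=4):
    """At each step, look `horizon` steps ahead and bias the battery toward
    charging before forecast deficits and discharging during them."""
    schedule = []
    for t in range(len(net_load)):
        ahead = net_load[t:t + horizon]
        forecast = sum(ahead) / len(ahead)   # mean forecast net load
        if net_load[t] > 0:                  # deficit now: discharge
            p = min(p_max, net_load[t], soc)
        elif forecast > 0:                   # surplus now, deficit ahead: charge
            p = -min(p_max, -net_load[t], cap - soc)
        else:
            p = 0.0
        soc -= p                             # discharging reduces state of charge
        schedule.append(p)
    return schedule, soc

# Net load = demand minus renewables (positive means a deficit).
plan, final_soc = dispatch([-2.0, -1.0, 3.0, 2.0], soc=1.0, cap=5.0, p_max=2.0)
```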
The GCOS Reference Upper-Air Network (GRUAN)
NASA Astrophysics Data System (ADS)
Vömel, H.; Berger, F. H.; Immler, F. J.; Seidel, D.; Thorne, P.
2009-04-01
While the global upper-air observing network has provided useful observations for operational weather forecasting for decades, its measurements lack the accuracy and long-term continuity needed for understanding climate change. Consequently, the scientific community faces uncertainty on such key issues as the trends of temperature in the upper troposphere and stratosphere or the variability and trends of stratospheric water vapour. To address these shortcomings, and to ensure that future climate records will be more useful than the records to date, the Global Climate Observing System (GCOS) program initiated the GCOS Reference Upper Air Network (GRUAN). GRUAN will be a network of about 30-40 observatories with a representative sampling of geographic regions and surface types. These stations will provide upper-air reference observations of the essential climate variables, i.e. temperature, geopotential, humidity, wind, radiation and cloud properties using specialized radiosondes and complementary remote sensing profiling instrumentation. Long-term stability, quality assurance / quality control, and a detailed assessment of measurement uncertainties will be the key aspects of GRUAN observations. The network will not be globally complete but will serve to constrain and adjust data from more spatially comprehensive global observing systems including satellites and the current radiosonde networks. This paper outlines the scientific rationale for GRUAN, its role in the Global Earth Observation System of Systems, network requirements and likely instrumentation, management structure, current status and future plans.
NASA Astrophysics Data System (ADS)
Heckmann, G.; Route, G.
2009-12-01
The National Oceanic and Atmospheric Administration (NOAA), Department of Defense (DoD), and National Aeronautics and Space Administration (NASA) are jointly acquiring the next-generation weather and environmental satellite system; the National Polar-orbiting Operational Environmental Satellite System (NPOESS). NPOESS replaces the current Polar-orbiting Operational Environmental Satellites (POES) managed by NOAA and the Defense Meteorological Satellite Program (DMSP) managed by the DoD. The NPOESS satellites carry a suite of sensors that collect meteorological, oceanographic, climatological, and solar-geophysical observations of the earth, atmosphere, and space. The ground data processing segment for NPOESS is the Interface Data Processing Segment (IDPS), developed by Raytheon Intelligence and Information Systems. The IDPS processes NPOESS satellite data to provide environmental data products (aka, Environmental Data Records or EDRs) to NOAA and DoD processing centers operated by the United States government. The IDPS will process EDRs beginning with the NPOESS Preparatory Project (NPP) and continuing through the lifetime of the NPOESS system. IDPS also provides the software and requirements for the Field Terminal Segment (FTS). NPOESS provides support to deployed field terminals by providing mission data in the Low Rate and High Rate downlinks (LRD/HRD), mission support data needed to generate EDRs and decryption keys needed to decrypt mission data during Selective data Encryption (SDE). Mission support data consists of globally relevant data, geographically constrained data, and two line element sets. NPOESS provides these mission support data via the Internet accessible Mission Support Data Server and HRD/LRD downlinks. This presentation will illustrate and describe the NPOESS capabilities in support of Field Terminal users. 
This discussion will include the mission support data available to Field Terminal users; the content of the direct broadcast HRD and LRD downlinks, identifying the differences between them, including the variability of the LRD downlink; and NPOESS management and distribution of decryption keys to approved field terminals using a Public Key Infrastructure (PKI), the AES standard with 256-bit encryption, and elliptic curve cryptography.
Space Suit Portable Life Support System Test Bed (PLSS 1.0) Development and Testing
NASA Technical Reports Server (NTRS)
Watts, Carly; Campbell, Colin; Vogel, Matthew; Conger, Bruce
2012-01-01
A multi-year effort has been carried out at NASA-JSC to develop an advanced extra-vehicular activity Portable Life Support System (PLSS) design intended to further the current state of the art by increasing operational flexibility, reducing consumables, and increasing robustness. Previous efforts have focused on modeling and analyzing the advanced PLSS architecture, as well as developing key enabling technologies. Like the current International Space Station Extra-vehicular Mobility Unit PLSS, the advanced PLSS comprises three subsystems required to sustain the crew during extra-vehicular activity including the Thermal, Ventilation, and Oxygen Subsystems. This multi-year effort has culminated in the construction and operation of PLSS 1.0, a test bed that simulates full functionality of the advanced PLSS design. PLSS 1.0 integrates commercial off the shelf hardware with prototype technology development components, including the primary and secondary oxygen regulators, Ventilation Subsystem fan, Rapid Cycle Amine swingbed carbon dioxide and water vapor removal device, and Spacesuit Water Membrane Evaporator heat rejection device. The overall PLSS 1.0 test objective was to demonstrate the capability of the Advanced PLSS to provide key life support functions including suit pressure regulation, carbon dioxide and water vapor removal, thermal control and contingency purge operations. Supplying oxygen was not one of the specific life support functions because the PLSS 1.0 test was not oxygen rated. Nitrogen was used for the working gas. Additional test objectives were to confirm PLSS technology development components performance within an integrated test bed, identify unexpected system level interactions, and map the PLSS 1.0 performance with respect to key variables such as crewmember metabolic rate and suit pressure. 
Successful PLSS 1.0 testing completed 168 test points over 44 days of testing and produced a large database of test results that characterize system level and component performance. With the exception of several minor anomalies, the PLSS 1.0 test rig performed as expected; furthermore, many system responses trended in accordance with pre-test predictions.
Choice with frequently changing food rates and food ratios.
Baum, William M; Davison, Michael
2014-03-01
In studies of operant choice, when one schedule of a concurrent pair is varied while the other is held constant, the constancy of the constant schedule may exert discriminative control over performance. In our earlier experiments, schedules varied reciprocally across components within sessions, so that while food ratio varied food rate remained constant. In the present experiment, we held one variable-interval (VI) schedule constant while varying the concurrent VI schedule within sessions. We studied five conditions, each with a different constant left VI schedule. On the right key, seven different VI schedules were presented in seven different unsignaled components. We analyzed performances at several different time scales. At the longest time scale, across conditions, behavior ratios varied with food ratios as would be expected from the generalized matching law. At shorter time scales, effects due to holding the left VI constant became more and more apparent, the shorter the time scale. In choice relations across components, preference for the left key leveled off as the right key became leaner. Interfood choice approximated strict matching for the varied right key, whereas interfood choice hardly varied at all for the constant left key. At the shortest time scale, visit patterns differed for the left and right keys. Much evidence indicated the development of a fix-and-sample pattern. In sum, the procedural difference made a large difference to performance, except for choice at the longest time scale and the fix-and-sample pattern at the shortest time scale. © Society for the Experimental Analysis of Behavior.
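The generalized matching law referred to above has a compact algebraic form: log(B1/B2) = s·log(r1/r2) + log b, with sensitivity s and bias b. A minimal sketch (parameter values are illustrative, not the study's estimates):

```python
# Generalized matching law: B1/B2 = b * (r1/r2)**s
# s = sensitivity to the food ratio, b = bias. Values below are illustrative.

def behavior_ratio(food_ratio, sensitivity=0.8, bias=1.0):
    """Predicted response ratio B1/B2 for a given obtained food ratio r1/r2."""
    return bias * food_ratio ** sensitivity

# Strict matching (s = 1, b = 1) reproduces the food ratio exactly:
print(behavior_ratio(4.0, sensitivity=1.0))  # 4.0
# Undermatching (s < 1) pulls preference toward indifference:
print(round(behavior_ratio(4.0), 2))         # 3.03
```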
Composable security proof for continuous-variable quantum key distribution with coherent states.
Leverrier, Anthony
2015-02-20
We give the first composable security proof for continuous-variable quantum key distribution with coherent states against collective attacks. Crucially, in the limit of large blocks the secret key rate converges to the usual value computed from the Holevo bound. Combining our proof with either the de Finetti theorem or the postselection technique then shows the security of the protocol against general attacks, thereby confirming the long-standing conjecture that Gaussian attacks are optimal asymptotically in the composable security framework. We expect that our parameter estimation procedure, which does not rely on any assumption about the quantum state being measured, will find applications elsewhere, for instance, for the reliable quantification of continuous-variable entanglement in finite-size settings.
Identifying and Modeling Dynamic Preference Evolution in Multipurpose Water Resources Systems
NASA Astrophysics Data System (ADS)
Mason, E.; Giuliani, M.; Castelletti, A.; Amigoni, F.
2018-04-01
Multipurpose water systems are usually operated on a tradeoff of conflicting operating objectives. Under steady-state climatic and socioeconomic conditions, such a tradeoff is supposed to represent a fair and/or efficient preference. Extreme variability in the external forcing, however, might affect a water operator's risk aversion and force a change in his/her preference. Properly accounting for these shifts is key to any rigorous retrospective assessment of the operator's behavior, and to building descriptive models for projecting the future evolution of the system. In this study, we explore how the selection of different preferences is linked to variations in the external forcing. We argue that preference selection evolves according to recent, extreme variations in system performance: underperforming in one of the objectives pushes the preference toward the harmed objective. To test this assumption, we developed a rational procedure to simulate the operator's preference selection. We map this selection onto a multilateral negotiation, where multiple virtual agents independently optimize different objectives. The agents periodically negotiate a compromise policy for the operation of the system. Agents' attitudes in each negotiation step are determined by the recent system performance as measured by the specific objective each agent maximizes. We then propose a numerical model of preference dynamics that implements a concept from cognitive psychology, the availability bias. We test our modeling framework on a synthetic lake operated for flood control and water supply. Results show that our model successfully captures the operator's preference selection and its dynamic evolution driven by extreme wet and dry situations.
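One way to caricature the availability-bias preference update described above is a weight vector nudged toward recently harmed objectives. Everything below (the update rule, the rate constant, the deficit values) is an assumption for illustration, not the paper's model:

```python
# Toy preference-dynamics update: recent poor performance on an objective
# shifts the operator's weight toward that objective.

def update_weights(weights, deficits, rate=0.5):
    """Shift preference weights toward under-performing objectives.
    deficits[i] >= 0 measures recent harm to objective i."""
    raw = [w + rate * d for w, d in zip(weights, deficits)]
    total = sum(raw)
    return [r / total for r in raw]   # renormalize to a valid preference

# Flood control (index 0) was recently harmed; water supply (index 1) was not.
w = update_weights([0.5, 0.5], deficits=[1.0, 0.0])
```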
78 FR 79061 - Noise Exposure Map Notice; Key West International Airport, Key West, FL
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-27
..., Flight Track Utilization by Aircraft Category for East Flow Operations; Table 4-3, Flight Track Utilization by Aircraft Category for West Flow Operations; Table 4-4, 2013 Air Carrier Flight Operations; Table 4-5, 2013 Commuter and Air Taxi Flight Operations; Table 4-6, 2013 Average Daily Engine Run-Up...
An empirical comparison of key statistical attributes among potential ICU quality indicators.
Brown, Sydney E S; Ratcliffe, Sarah J; Halpern, Scott D
2014-08-01
Good quality indicators should have face validity, relevance to patients, and be able to be measured reliably. Beyond these general requirements, good quality indicators should also have certain statistical properties, including sufficient variability to identify poor performers, relative insensitivity to severity adjustment, and the ability to capture what providers do rather than patients' characteristics. We assessed the performance of candidate indicators of ICU quality on these criteria. Indicators included ICU readmission, mortality, several length of stay outcomes, and the processes of venous-thromboembolism and stress ulcer prophylaxis provision. Retrospective cohort study. One hundred thirty-eight U.S. ICUs from 2001-2008 in the Project IMPACT database. Two hundred sixty-eight thousand eight hundred twenty-four patients discharged from U.S. ICUs. None. We assessed indicators' (1) variability across ICU-years; (2) degree of influence by patient vs. ICU and hospital characteristics using the Omega statistic; (3) sensitivity to severity adjustment by comparing the area under the receiver operating characteristic curve (AUC) between models including vs. excluding patient variables, and (4) correlation between risk adjusted quality indicators using a Spearman correlation. Large ranges of among-ICU variability were noted for all quality indicators, particularly for prolonged length of stay (4.7-71.3%) and the proportion of patients discharged home (30.6-82.0%), and ICU and hospital characteristics outweighed patient characteristics for stress ulcer prophylaxis (ω, 0.43; 95% CI, 0.34-0.54), venous thromboembolism prophylaxis (ω, 0.57; 95% CI, 0.53-0.61), and ICU readmissions (ω, 0.69; 95% CI, 0.52-0.90). 
Mortality measures were the most sensitive to severity adjustment (area under the receiver operating characteristic curve % difference, 29.6%); process measures were the least sensitive (area under the receiver operating characteristic curve % differences: venous thromboembolism prophylaxis, 3.4%; stress ulcer prophylaxis, 2.1%). None of the 10 indicators was clearly and consistently correlated with a majority of the other nine indicators. No indicator performed optimally across assessments. Future research should seek to define and operationalize quality in a way that is relevant to both patients and providers.
Chen, Yi; Huang, Weina; Peng, Bei
2014-01-01
Because of the demands for sustainable and renewable energy, fuel cells have become increasingly popular, particularly the polymer electrolyte fuel cell (PEFC). Among the various components, the cathode plays a key role in the operation of a PEFC. In this study, a quantitative dual-layer cathode model was proposed for determining the optimal parameters that minimize the over-potential difference η and improve the efficiency using a newly developed bat swarm algorithm with a variable population embedded in the computational intelligence-aided design. The simulation results were in agreement with previously reported results, suggesting that the proposed technique has potential applications for automating and optimizing the design of PEFCs.
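For readers unfamiliar with the bat-algorithm family, the sketch below shows a minimal fixed-population variant minimizing a sphere function; the paper's variable-population mechanism and the PEFC cathode objective are not reproduced here:

```python
import random

# Minimal bat-algorithm sketch (fixed population, sphere test function).
def bat_optimize(f, dim=2, n=20, iters=200, lo=-5.0, hi=5.0, seed=1):
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    best = min(pos, key=f)[:]
    for _ in range(iters):
        for i in range(n):
            freq = rng.random()                       # random pulse frequency
            for d in range(dim):
                vel[i][d] += (pos[i][d] - best[d]) * freq
                pos[i][d] = min(hi, max(lo, pos[i][d] - vel[i][d]))
            # local random walk around the current best solution
            cand = [min(hi, max(lo, b + 0.01 * rng.gauss(0.0, 1.0))) for b in best]
            if f(cand) < f(best):
                best = cand
            if f(pos[i]) < f(best):
                best = pos[i][:]
    return best, f(best)

sphere = lambda x: sum(v * v for v in x)
best, val = bat_optimize(sphere)
```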
Operational satellites and the global monitoring of snow and ice
NASA Technical Reports Server (NTRS)
Walsh, John E.
1991-01-01
The altitudinal dependence of the global warming projected by global climate models is at least partially attributable to the albedo-temperature feedback involving snow and ice, which must be regarded as key variables in the monitoring for global change. Statistical analyses of data from IR and microwave sensors monitoring the areal coverage and extent of sea ice have led to mixed conclusions about recent trends of hemisphere sea ice coverage. Seasonal snow cover has been mapped for over 20 years by NOAA/NESDIS on the basis of imagery from a variety of satellite sensors. Multichannel passive microwave data show some promise for the routine monitoring of snow depth over unforested land areas.
NASA Astrophysics Data System (ADS)
Doha, E.; Bhrawy, A.
2006-06-01
It is well known that spectral methods (tau, Galerkin, collocation) have a condition number of O(N^4) (N is the number of retained modes of polynomial approximations). This paper presents some efficient spectral algorithms, which have a condition number of O(N^2), based on the Jacobi–Galerkin methods of second-order elliptic equations in one and two space variables. The key to the efficiency of these algorithms is to construct appropriate base functions, which lead to systems with specially structured matrices that can be efficiently inverted. The complexities of the algorithms are a small multiple of N^(d+1) operations for a d-dimensional domain with (N-1)^d unknowns, while the convergence rates of the algorithms are exponential for smooth solutions.
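The ill-conditioning the abstract refers to can be checked numerically. The sketch below builds a standard Chebyshev collocation second-derivative operator (not the paper's Jacobi-Galerkin bases) and confirms that its condition number grows rapidly with N:

```python
import numpy as np

def cheb(N):
    """Trefethen-style Chebyshev differentiation matrix on N+1 points."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))   # off-diagonal entries
    return D - np.diag(D.sum(axis=1)), x              # fix the diagonal

def interior_cond(N):
    D, _ = cheb(N)
    D2 = (D @ D)[1:-1, 1:-1]   # impose homogeneous Dirichlet conditions
    return np.linalg.cond(D2)

# Condition number grows rapidly as the resolution N increases:
print(interior_cond(16) < interior_cond(32) < interior_cond(64))  # True
```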
Di Stefano, Danilo Alessio; Arosio, Paolo; Piattelli, Adriano; Perrotti, Vittoria; Iezzi, Giovanna
2015-02-01
Bone density at the implant placement site is a key factor in obtaining primary stability of the fixture, which, in turn, is a prognostic factor for osseointegration and the long-term success of an implant-supported rehabilitation. Recently, an implant motor with a bone density measurement probe has been introduced. The aim of the present study was to test whether the bone density values registered by the implant motor are objective, i.e., independent of the operator performing the measurement. A total of 3704 bone density measurements, performed by means of the implant motor, were registered by 39 operators at different implant sites during routine activity. Bone density measurements were grouped according to their distribution across the jaws. Specifically, four different areas were distinguished: a pre-antral (between teeth from the first right maxillary premolar to the first left maxillary premolar) and a sub-antral (more distal) zone in the maxilla, and an interforaminal (between and including teeth from the first left mandibular premolar to the first right mandibular premolar) and a retroforaminal (more distal) zone in the mandible. A statistical comparison was performed to check the inter-operator variability of the collected data. The device produced consistent and operator-independent bone density values at each tooth position, showing reliable bone density measurement. The implant motor proved to be a helpful tool for properly planning implant placement and loading, irrespective of the operator using it.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Jong Suk; Chen, Jun; Garcia, Humberto E.
2016-06-17
An RO (reverse osmosis) desalination plant is proposed as an effective FLR (flexible load resource) to be integrated into HES (hybrid energy systems) to support various types of ancillary services to the electric grid under variable operating conditions. To study the dynamic (transient) behavior of such a system, among the various unit operations within the HES, special attention is given here to the detailed dynamic modeling and control design of the RO desalination process with a spiral-wound membrane module. The model incorporates key physical phenomena, which have been investigated individually, into a dynamic integrated model framework. In particular, the solution-diffusion model modified with the concentration polarization theory is applied to predict RO performance over a large range of operating conditions. Simulation results involving several case studies suggest that an RO desalination plant, acting as a FLR, can provide operational flexibility to participate in energy management at the utility scale by dynamically optimizing the use of excess electrical energy. Here, the incorporation of an additional commodity (fresh water) produced by a FLR allows a broader range of HES operations for maximizing overall system performance and profitability. For the purpose of assessing the incorporation of health assessment into process operations, an online condition monitoring approach for RO membrane fouling supervision is addressed in the case study presented.
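The solution-diffusion model mentioned above reduces, in its simplest steady-state form, to a water flux proportional to the net driving pressure. The constants and the van 't Hoff osmotic-pressure estimate below are illustrative assumptions; concentration polarization and the full dynamic model are omitted:

```python
# Minimal solution-diffusion sketch for RO water flux (illustrative constants).

def osmotic_pressure(c_molar, T=298.15, i=2):
    """van 't Hoff estimate, pi = i*c*R*T, in bar (R = 0.08314 L*bar/(mol*K))."""
    return i * c_molar * 0.08314 * T

def water_flux(dp_bar, feed_c, permeate_c, A=3.0):
    """Jw = A * (dP - d_pi); A in L/(m^2 h bar), pressures in bar."""
    d_pi = osmotic_pressure(feed_c) - osmotic_pressure(permeate_c)
    return A * (dp_bar - d_pi)

# Seawater-like feed (~0.6 M NaCl) at 55 bar applied pressure:
flux = water_flux(55.0, 0.6, 0.005)   # L/(m^2 h), illustrative
```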
Discovering new variable stars at Key Stage 3
NASA Astrophysics Data System (ADS)
Chubb, Katy; Hood, Rosie; Wilson, Thomas; Holdship, Jonathan; Hutton, Sarah
2017-05-01
Details of the London pilot of the ‘Discovery Project’ are presented, in which university-based astronomers passed on real, applied astronomical knowledge to a group of selected secondary school pupils. It was aimed at students in Key Stage 3, giving them the chance to be involved in real astronomical research at an early stage of their education, to become the official discoverer of a new variable star, and to be listed in the International Variable Star Index database (The International Variable Star Index, Version 1.1, American Association of Variable Star Observers (AAVSO), 2016, http://aavso.org/vsx), all while learning and practising research-level skills. Future plans are discussed.
Quantum Watermarking Scheme Based on INEQR
NASA Astrophysics Data System (ADS)
Zhou, Ri-Gui; Zhou, Yang; Zhu, Changming; Wei, Lai; Zhang, Xiafen; Ian, Hou
2018-04-01
Quantum watermarking technology protects copyright by embedding an invisible quantum signal in quantum multimedia data. In this paper, a watermarking scheme based on INEQR is presented. First, the watermark image is expanded to match the size of the carrier image. Second, swap and XOR operations are applied to the processed pixels; since there is only one bit per pixel, the XOR operation achieves the effect of a simple encryption. Third, both the watermark embedding and extraction operations are described, using the key image, the swap operation and the LSB algorithm. When the embedding is performed, the binary key image is changed, indicating that the watermark has been embedded. Conversely, to extract the watermark, the key's state must be detected: when the key's state is |1>, the extraction operation is carried out. Finally, to validate the proposed scheme, both the peak signal-to-noise ratio (PSNR) and the security of the scheme are analyzed.
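A classical analogue of the embedding step may help fix ideas: XOR the binary watermark with a key and write the result into the carrier's least significant bits. This sketch does not simulate the INEQR quantum representation itself:

```python
# Classical LSB + XOR watermarking sketch (illustrative, not the quantum scheme).

def embed(carrier, watermark, key):
    """Clear each pixel's LSB and store the key-encrypted watermark bit."""
    return [(p & ~1) | (w ^ k) for p, w, k in zip(carrier, watermark, key)]

def extract(stego, key):
    """Recover watermark bits: (LSB XOR key) undoes the encryption."""
    return [(p & 1) ^ k for p, k in zip(stego, key)]

carrier = [200, 13, 64, 255]   # 8-bit grayscale pixels
watermark = [1, 0, 1, 1]       # binary watermark image, flattened
key = [0, 1, 1, 0]             # binary key image (simple encryption)

stego = embed(carrier, watermark, key)
```

Each pixel changes by at most one gray level, which is what keeps the watermark visually imperceptible.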
Datta, Manoshi S; Almada, Amalia A; Baumgartner, Mark F; Mincer, Tracy J; Tarrant, Ann M; Polz, Martin F
2018-06-06
Copepods harbor diverse bacterial communities, which collectively carry out key biogeochemical transformations in the ocean. However, bulk copepod sampling averages over the variability in their associated bacterial communities, thereby limiting our understanding of the nature and specificity of copepod-bacteria associations. Here, we characterize the bacterial communities associated with nearly 200 individual Calanus finmarchicus copepods transitioning from active growth to diapause. We find that all individual copepods sampled share a small set of "core" operational taxonomic units (OTUs), a subset of which have also been found associated with other marine copepod species in different geographic locations. However, most OTUs are patchily distributed across individual copepods, thereby driving community differences across individuals. Among patchily distributed OTUs, we identified groups of OTUs correlated with common ecological drivers. For instance, a group of OTUs positively correlated with recent copepod feeding served to differentiate largely active growing copepods from those entering diapause. Together, our results underscore the power of individual-level sampling for understanding host-microbiome relationships.
Common cause evaluations in applied risk analysis of nuclear power plants. [PWR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taniguchi, T.; Ligon, D.; Stamatelatos, M.
1983-04-01
Qualitative and quantitative approaches were developed for the evaluation of common cause failures (CCFs) in nuclear power plants and were applied to the analysis of the auxiliary feedwater systems of several pressurized water reactors (PWRs). Key CCF variables were identified through a survey of experts in the field and a review of failure experience in operating PWRs. These variables were classified into categories of high, medium, and low defense against a CCF. Based on the results, a checklist was developed for analyzing CCFs of systems. Several known techniques for quantifying CCFs were also reviewed. The information provided valuable insights in the development of a new model for estimating CCF probabilities, which is an extension of and improvement over the Beta Factor method. As applied to the analysis of the PWR auxiliary feedwater systems, the method yielded much more realistic values than the original Beta Factor method for a one-out-of-three system.
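The Beta Factor method mentioned above splits a component's total failure rate into independent and common-cause parts via a single parameter beta. A minimal sketch with illustrative numbers, not the study's data:

```python
# Beta-factor sketch: lambda_total = lambda_independent + lambda_common_cause,
# with the common-cause share set by beta. Numbers are illustrative.

def ccf_split(total_rate, beta):
    """Return (independent_rate, common_cause_rate)."""
    return total_rate * (1.0 - beta), total_rate * beta

def one_of_three_unavailability(total_rate, beta, mission_time):
    """Crude 1-of-3 system failure probability: either all three trains fail
    independently, or a single common-cause event disables every train."""
    lam_i, lam_c = ccf_split(total_rate, beta)
    p_i = lam_i * mission_time   # rare-event approximation per train
    p_c = lam_c * mission_time
    return p_i ** 3 + p_c

# With beta > 0 the common-cause term dominates a redundant system:
p = one_of_three_unavailability(total_rate=1e-3, beta=0.1, mission_time=24.0)
```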
Characterizing Heterogeneity in Infiltration Rates During Managed Aquifer Recharge.
Mawer, Chloe; Parsekian, Andrew; Pidlisecky, Adam; Knight, Rosemary
2016-11-01
Infiltration rate is the key parameter that describes how water moves from the surface into a groundwater aquifer during managed aquifer recharge (MAR). Characterization of infiltration rate heterogeneity in space and time is valuable information for MAR system operation. In this study, we utilized fiber optic distributed temperature sensing (FO-DTS) observations and the phase shift of the diurnal temperature signal between two vertically co-located fiber optic cables to characterize infiltration rate spatially and temporally in a MAR basin. The FO-DTS measurements revealed spatial heterogeneity of infiltration rate: approximately 78% of the recharge water infiltrated through 50% of the pond bottom on average. We also introduced a metric for quantifying how the infiltration rate in a recharge pond changes over time, which enables FO-DTS to be used as a method for monitoring MAR and informing maintenance decisions. By monitoring this metric, we found high spatial variability in how rapidly infiltration rate changed during the test period. We attributed this variability to biological pore clogging and found a relationship between high initial infiltration rate and the most rapid pore clogging. We found a strong relationship (R² = 0.8) between observed maximum infiltration rates and electrical resistivity measurements from electrical resistivity tomography data taken in the same basin when dry. This result shows that the combined acquisition of DTS and ERT data can improve the design and operation of a MAR pond significantly by providing the critical information needed about spatial variability in parameters controlling infiltration rates. © 2016, National Ground Water Association.
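The phase-shift idea can be illustrated with synthetic signals: the lag that maximizes the cross-correlation between the two sensors' diurnal temperature records, divided into the sensor separation, gives a thermal-front velocity. The signals and the 0.5 m separation below are synthetic assumptions, not FO-DTS data:

```python
import math

def phase_lag_hours(upper, lower, dt_hours, max_lag):
    """Lag (hours) maximizing the circular cross-correlation of two series."""
    n = len(upper)
    def score(lag):
        return sum(upper[i] * lower[(i + lag) % n] for i in range(n))
    best = max(range(max_lag), key=score)
    return best * dt_hours

dt = 0.25                                  # 15-minute samples
t = [i * dt for i in range(4 * 24 * 4)]    # four days of data
upper = [math.sin(2 * math.pi * ti / 24.0) for ti in t]
lower = [math.sin(2 * math.pi * (ti - 3.0) / 24.0) for ti in t]  # 3 h delayed

lag = phase_lag_hours(upper, lower, dt, max_lag=48)  # search up to 12 h
velocity = 0.5 / lag                       # 0.5 m sensor separation -> m/h
```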
Denawaka, Chamila J; Fowlis, Ian A; Dean, John R
2014-04-18
An evaluation of static headspace-multicapillary column-gas chromatography-ion mobility spectrometry (SHS-MCC-GC-IMS) has been undertaken to assess its applicability for the determination of 32 volatile compounds (VCs). The key experimental variables of sample incubation time and temperature have been evaluated alongside the MCC-GC variables of column polarity, syringe temperature, injection temperature, injection volume, column temperature and carrier gas flow rate, coupled with the IMS variables of temperature and drift gas flow rate. This evaluation resulted in six sets of experimental variables being required to separate the 32 VCs. The optimum experimental variables for SHS-MCC-GC-IMS, including the retention time and drift time operating parameters, were determined; to normalise the operating parameters, the relative drift time and normalised reduced ion mobility for each VC were determined. In addition, a full theoretical explanation is provided on the formation of the monomer, dimer and trimer of a VC. Optimum operating conditions and calibration data for each VC were obtained alongside limit of detection (LOD) and limit of quantitation (LOQ) values. Typical detection limits ranged from 0.1 ng for bis(methylthio)methane, ethylbutanoate and (E)-2-nonenal to 472 ng for isovaleric acid, with correlation coefficient (R²) data ranging from 0.9793 (for the dimer of octanal) through to 0.9990 (for isobutyric acid). Finally, the developed protocols were applied to the analysis of malodour in sock samples. Initial work involved spiking an inert matrix and sock samples with appropriate concentrations of eight VCs. The average recovery from the inert matrix was 101±18% (n=8), while recoveries from the sock samples were lower, that is, 54±30% (n=8) for sock type 1 and 78±24% (n=6) for sock type 2. Finally, SHS-MCC-GC-IMS was applied to sock malodour in a field trial based on 11 volunteers (mixed gender) over a 3-week period.
By applying the SHS-MCC-GC-IMS database, four VCs were identified and quantified: ammonia, dimethyl disulphide, dimethyl trisulphide and butyric acid. A link was identified between the presence of high ammonia and dimethyl disulphide concentrations and a high malodour odour grading, that is, ≥ 6. Statistical analysis did not find any correlation between the occurrence of dimethyl disulphide and participant gender. Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.
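LOD and LOQ figures of the kind quoted above are conventionally derived from a calibration line (LOD = 3.3·s/m and LOQ = 10·s/m, with s the residual standard deviation and m the slope). A sketch with fabricated calibration data, not the paper's calibration sets:

```python
# Calibration-based detection limits from a least-squares line (illustrative).

def fit_line(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    m = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return m, my - m * mx

def detection_limits(x, y):
    """Return (LOD, LOQ) in concentration units: 3.3*s/m and 10*s/m."""
    m, b = fit_line(x, y)
    resid = [yi - (m * xi + b) for xi, yi in zip(x, y)]
    s = (sum(r * r for r in resid) / (len(x) - 2)) ** 0.5
    return 3.3 * s / m, 10.0 * s / m

x = [0.0, 1.0, 2.0, 4.0, 8.0]   # spiked amount, ng (fabricated)
y = [0.1, 2.2, 3.9, 8.1, 15.9]  # instrument response (fabricated)
lod, loq = detection_limits(x, y)
```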
Roberts, Harry W; Myerscough, James; Borsci, Simone; Ni, Melody; O'Brart, David P S
2017-11-24
To provide a quantitative assessment of cataract theatre lists focusing on productivity and staffing levels/tasks using time and motion studies. National Health Service (NHS) cataract theatre lists were prospectively observed in five different institutions (four NHS hospitals and one private hospital). The individual tasks and timings of every member of staff were recorded. Multiple linear regression analyses were performed to investigate possible associations between individual timings and tasks. 140 operations were studied over 18 theatre sessions. The median number of scheduled cataract operations was 7 (range: 5-14). The average duration of an operation was 10.3 min (SD 4.11 min). The average time to complete one case including patient turnaround was 19.97 min (SD 8.77 min). The proportion of the surgeons' time occupied on total duties or operating ranged from 65.2% to 76.1% and from 42.4% to 56.7%, respectively. The correlation of surgical time to patient time in theatre was R²=0.95. A multiple linear regression model found a significant association (F(3,111)=32.86, P<0.001), with R²=0.47, between the duration of one operation and the number of allied healthcare professionals (AHPs), the number of AHP key tasks and the time taken by the AHPs to perform these key tasks. Significant variability in the number of cases performed and the efficiency of patient flow was found between different institutions. Time and motion studies identified requirements for high-volume models and factors relating to performance. Supporting the surgeon with sufficient AHPs, and with tasks performed by AHPs, could improve surgical efficiency up to approximately double the productivity of conventional theatre models. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Structural identifiability of cyclic graphical models of biological networks with latent variables.
Wang, Yulin; Lu, Na; Miao, Hongyu
2016-06-13
Graphical models have long been used to describe biological networks for a variety of important tasks such as the determination of key biological parameters, and the structure of a graphical model ultimately determines whether such unknown parameters can be unambiguously obtained from experimental observations (i.e., the identifiability problem). Limited by resources or technical capacities, complex biological networks are usually only partially observed in experiments, which thus introduces latent variables into the corresponding graphical models. A number of previous studies have tackled the parameter identifiability problem for graphical models such as linear structural equation models (SEMs) with or without latent variables. However, the limited resolution and efficiency of existing approaches calls for further development of novel structural identifiability analysis algorithms. An efficient structural identifiability analysis algorithm is developed in this study for a broad range of network structures. The proposed method adopts Wright's path-coefficient method to generate identifiability equations in the form of symbolic polynomials, and then converts these symbolic equations to binary matrices (called identifiability matrices). Several matrix operations are introduced for identifiability matrix reduction with system equivalency maintained. Based on the reduced identifiability matrices, the structural identifiability of each parameter is determined. A number of benchmark models are used to verify the validity of the proposed approach. Finally, the network module for influenza A virus replication is employed as a real example to illustrate the application of the proposed approach in practice. The proposed approach can deal with cyclic networks with latent variables. The key advantage is that it intentionally avoids symbolic computation and is thus highly efficient.
Also, this method is capable of determining the identifiability of each single parameter and is thus of higher resolution in comparison with many existing approaches. Overall, this study provides a basis for systematic examination and refinement of graphical models of biological networks from the identifiability point of view, and it has a significant potential to be extended to more complex network structures or high-dimensional systems.
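The binary identifiability-matrix idea can be sketched with a toy reduction: an equation whose row contains a single unknown parameter pins that parameter down, its column is then cleared as "known", and the sweep repeats until nothing new resolves. This is a minimal Python illustration on invented data; the paper's actual matrix operations are considerably richer.

```python
def identifiable_parameters(M):
    """M: list of rows (equations); M[i][j] == 1 iff parameter j
    appears in identifiability equation i.  A row with a single
    remaining 1 pins down that parameter; its column is cleared
    and the sweep repeats until no new parameter resolves."""
    M = [row[:] for row in M]
    known = set()
    changed = True
    while changed:
        changed = False
        for row in M:
            cols = [j for j, v in enumerate(row) if v]
            if len(cols) == 1 and cols[0] not in known:
                known.add(cols[0])
                for r in M:          # parameter now known everywhere
                    r[cols[0]] = 0
                changed = True
    return sorted(known)

# Three equations in three path coefficients:
# eq0 involves only p0; eq1 couples p0 and p1; eq2 couples p1 and p2.
M = [[1, 0, 0],
     [1, 1, 0],
     [0, 1, 1]]
print(identifiable_parameters(M))  # [0, 1, 2] -- all resolve in cascade
```

With a fully coupled system such as `[[1, 1], [1, 1]]` no row ever isolates a parameter, so the function returns an empty list, mirroring structural non-identifiability.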
Variable Order and Distributed Order Fractional Operators
NASA Technical Reports Server (NTRS)
Lorenzo, Carl F.; Hartley, Tom T.
2002-01-01
Many physical processes appear to exhibit fractional order behavior that may vary with time or space. The continuum of order in the fractional calculus allows the order of the fractional operator to be considered as a variable. This paper develops the concept of variable and distributed order fractional operators. Definitions based on the Riemann-Liouville definitions are introduced and the behavior of the operators is studied. Several time domain definitions that assign different arguments to the order q in the Riemann-Liouville definition are introduced. For each of these definitions various characteristics are determined. These include: time invariance of the operator, operator initialization, physical realization, linearity, operational transforms, and memory characteristics of the defining kernels. A measure (m2) for memory retentiveness of the order history is introduced. A generalized linear argument for the order q allows the concept of "tailored" variable order fractional operators whose memory may be chosen for a particular application. Memory retentiveness (m2) and order dynamic behavior are investigated and applications are shown. The concept of distributed order operators, where the order of the time based operator depends on an additional independent (spatial) variable, is also forwarded. Several definitions and their Laplace transforms are developed, analysis methods with these operators are demonstrated, and examples shown. Finally, operators of multivariable and distributed order are defined and their various applications are outlined.
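To make the "different arguments to the order q" distinction concrete, a hedged illustration (written for the fractional integral only) starts from the constant-order Riemann-Liouville form

{}_0 D_t^{-q} f(t) = \frac{1}{\Gamma(q)} \int_0^t (t-\tau)^{q-1} f(\tau)\, d\tau, \qquad q > 0.

Two time-domain variable-order variants then differ only in where the order is evaluated:

{}_0 D_t^{-q(t)} f(t) = \frac{1}{\Gamma(q(t))} \int_0^t (t-\tau)^{q(t)-1} f(\tau)\, d\tau,

{}_0 D_t^{-q(\tau)} f(t) = \int_0^t \frac{(t-\tau)^{q(\tau)-1}}{\Gamma(q(\tau))} f(\tau)\, d\tau.

The q(t) form forgets past orders at each instant, while the q(τ) form weights each past input by the order in effect when it occurred and hence retains memory of the order history, the behavior the memory measure m2 quantifies.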
Monitoring household waste recycling centres performance using mean bin weight analyses.
Maynard, Sarah; Cherrett, Tom; Waterson, Ben
2009-02-01
This paper describes a modelling approach used to investigate the significance of key factors (vehicle type, compaction type, site design, temporal effects) in influencing the variability in observed nett amenity bin weights produced by household waste recycling centres (HWRCs). This new method can help to quickly identify sites that are producing significantly lighter bins, enabling detailed back-end analyses to be efficiently targeted and best practice in HWRC operation identified. Tested on weigh ticket data from nine HWRCs across West Sussex, UK, the model suggests that compaction technique, vehicle type, month and site design explained 76% of the variability in the observed nett amenity weights. For each factor, a weighting coefficient was calculated to generate a predicted nett weight for each bin transaction, and three sites were subsequently identified as having similar characteristics but significantly different mean nett bin weights. Waste and site audits were then conducted at the three sites to determine the possible sources of the remaining variability. Significant differences were identified in the proportions of contained waste (bagged), wood, and dry recyclables entering the amenity waste stream, particularly at one site where significantly less contaminated waste and dry recyclables were observed.
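The factor-weighting step can be sketched simply: each factor level gets a weighting coefficient (the deviation of its mean nett weight from the grand mean), and a predicted nett weight for a transaction is the grand mean plus the relevant coefficients. A Python sketch on invented weigh-ticket data; the paper fits a proper regression model, and this additive-effects version only illustrates the idea.

```python
from statistics import mean

# Toy weigh tickets: (compaction, vehicle, month, nett_kg); values illustrative.
tickets = [
    ("static",   "RCV",  "Jan", 5200.0),
    ("static",   "skip", "Jan", 4800.0),
    ("portable", "RCV",  "Feb", 6100.0),
    ("portable", "skip", "Feb", 5700.0),
    ("static",   "skip", "Feb", 4750.0),
    ("portable", "RCV",  "Jan", 6150.0),
]

grand = mean(t[3] for t in tickets)

def effects(idx):
    """Weighting coefficient for each level of factor `idx`:
    deviation of the level's mean nett weight from the grand mean."""
    levels = {}
    for t in tickets:
        levels.setdefault(t[idx], []).append(t[3])
    return {lvl: mean(w) - grand for lvl, w in levels.items()}

comp_eff, veh_eff, month_eff = effects(0), effects(1), effects(2)

def predict(comp, veh, month):
    return grand + comp_eff[comp] + veh_eff[veh] + month_eff[month]

print(round(predict("portable", "RCV", "Feb"), 1))
```

A site whose observed mean falls well below such predictions is the kind of outlier the paper flags for waste and site audits.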
An improvement of drought monitoring through the use of a multivariate magnitude index
NASA Astrophysics Data System (ADS)
Real-Rangel, R. A.; Alcocer-Yamanaka, V. H.; Pedrozo-Acuña, A.; Breña-Naranjo, J. A.; Ocón-Gutiérrez, A. R.
2017-12-01
In drought monitoring activities it is widely acknowledged that the severity of an event is determined in relation to monthly values of univariate indices of one or more hydrological variables. Normally, these indices are estimated using temporal windows from 1 to 12 months or more to aggregate the effects of deficits in the variable of interest. However, the use of these temporal windows may lead to a misperception of both the drought event's intensity and the timing of its occurrence. In this context, this work presents the implementation of a trivariate drought magnitude index, considering key hydrological variables (e.g., precipitation, soil moisture and runoff) within the framework of the Multivariate Standardized Drought Index (MSDI). Despite the popularity and simplicity of the concept of drought magnitude for standardized drought indices, its implementation in drought monitoring and early warning systems has not been reported. This approach has been tested for operational purposes in the recently launched Multivariate Drought Monitor of Mexico (MOSEMM), and the results show that the inclusion of a magnitude index facilitates drought detection and thus improves the decision-making process for emergency managers.
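The MSDI idea is to map the joint cumulative probability of the hydrological variables through the inverse standard-normal CDF, so that joint deficits in all variables yield strongly negative index values. A toy Python version using the empirical Gringorten plotting position over a pooled sample; operational systems fit the joint probabilities per calendar month and on much longer records.

```python
from statistics import NormalDist

def msdi(precip, soil, runoff):
    """Empirical trivariate Multivariate Standardized Drought Index.

    The joint cumulative probability of each (p, s, r) triple is taken
    with the Gringorten plotting position and mapped through the
    inverse standard-normal CDF.  A toy version of the MSDI idea;
    operational monitors fit the margins per calendar month.
    """
    n = len(precip)
    out = []
    for i in range(n):
        m = sum(precip[j] <= precip[i] and soil[j] <= soil[i]
                and runoff[j] <= runoff[i] for j in range(n))
        p = (m - 0.44) / (n + 0.12)
        out.append(NormalDist().inv_cdf(p))
    return out

# Illustrative monthly values; month index 4 is dry in all three variables.
precip = [80, 55, 30, 90, 20, 60]
soil   = [0.30, 0.22, 0.15, 0.35, 0.12, 0.25]
runoff = [12, 8, 4, 15, 3, 9]
vals = msdi(precip, soil, runoff)
print(min(vals))   # the jointly driest month gets the most negative index
```

Because the index is driven by the joint probability, a month that is only moderately dry in each single variable can still score as severe, which is exactly the detection advantage the abstract reports.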
Descatha, Alexis; Roquelaure, Yves; Evanoff, Bradley; Niedhammer, Isabelle; Chastang, Jean François; Mariot, Camille; Ha, Catherine; Imbernon, Ellen; Goldberg, Marcel; Leclerc, Annette
2007-01-01
Objective Questionnaires for assessment of biomechanical exposure are frequently used in surveillance programs, though few studies have evaluated which key questions are needed. We sought to reduce the number of variables on a surveillance questionnaire by identifying which variables best summarized biomechanical exposure in a survey of the French working population. Methods We used data from the 2002–2003 French experimental network of Upper-limb work-related musculoskeletal disorders (UWMSD), performed on 2685 subjects, with 37 variables assessing biomechanical exposures available (divided into four ordinal categories, according to task frequency or duration). Principal Component Analysis (PCA) with orthogonal rotation was performed on these variables. Variables closely associated with factors issued from PCA were retained, except those highly correlated with another variable (rho>0.70). In order to study the relevance of the final list of variables, correlations between a score based on retained variables (PCA score) and the exposure score suggested by the SALTSA group were calculated. The associations between the PCA score and the prevalence of UWMSD were also studied. In a final step, we added back to the list a few variables not retained by PCA, because of their established recognition as risk factors. Results According to the results of the PCA, seven interpretable factors were identified: posture exposures, repetitiveness, handling of heavy loads, distal biomechanical exposures, computer use, forklift operator specific task, and recovery time. Twenty variables strongly correlated with the factors obtained from PCA were retained. The PCA score was strongly correlated both with the SALTSA score and with UWMSD prevalence (p<0.0001). In the final step, six variables were reintegrated.
Conclusion Twenty-six variables out of 37 were efficiently selected according to their ability to summarize major biomechanical constraints in a working population, with an approach combining statistical analyses and existing knowledge. PMID:17476519
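The redundancy-pruning rule (drop a candidate variable that is highly correlated, rho > 0.70, with one already retained) can be sketched in Python on invented questionnaire scores. The full method also uses the PCA loadings to rank candidates, which is omitted here; variable names and data are illustrative.

```python
from statistics import mean, pstdev

def corr(x, y):
    """Pearson correlation; the paper's rho may be rank-based."""
    mx, my = mean(x), mean(y)
    cov = mean((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (pstdev(x) * pstdev(y))

def prune(columns, names, cutoff=0.70):
    """Keep a variable only if it is not highly correlated with one
    already retained -- the redundancy-pruning step applied after PCA."""
    kept = []
    for i, name in enumerate(names):
        if all(abs(corr(columns[i], columns[j])) <= cutoff
               for j, n in enumerate(names) if n in kept):
            kept.append(name)
    return kept

# Toy exposure scores (0-3 frequency categories) for four questionnaire items;
# the two wrist items are deliberately near-duplicates (rho ~ 0.96).
cols = [
    [0, 1, 2, 3, 3, 2, 1, 0],     # wrist_flexion
    [0, 1, 2, 3, 2, 2, 1, 0],     # wrist_extension
    [3, 0, 2, 1, 0, 3, 1, 2],     # heavy_lifting
    [1, 1, 2, 2, 0, 3, 1, 2],     # repetitiveness
]
names = ["wrist_flexion", "wrist_extension", "heavy_lifting", "repetitiveness"]
print(prune(cols, names))
```

Here `wrist_extension` is dropped as redundant with `wrist_flexion`, while the other two items survive, mirroring how 37 variables were trimmed to a shorter surveillance list.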
Present and future free-space quantum key distribution
NASA Astrophysics Data System (ADS)
Nordholt, Jane E.; Hughes, Richard J.; Morgan, George L.; Peterson, C. Glen; Wipf, Christopher C.
2002-04-01
Free-space quantum key distribution (QKD), more popularly known as quantum cryptography, uses single-photon free-space optical communications to distribute the secret keys required for secure communications. At Los Alamos National Laboratory we have demonstrated a fully automated system that is capable of operation at any time of day over a horizontal range of several kilometers. This has proven the technology is capable of operation from a spacecraft to the ground, opening up the possibility of QKD between any group of users anywhere on Earth. This system, the prototyping of a new system for use on a spacecraft, and the techniques required for world-wide quantum key distribution will be described. The operational parameters and performance of a system designed to operate between low earth orbit (LEO) and the ground will also be discussed.
Secure quantum key distribution using continuous variables of single photons.
Zhang, Lijian; Silberhorn, Christine; Walmsley, Ian A
2008-03-21
We analyze the distribution of secure keys using quantum cryptography based on the continuous variable degree of freedom of entangled photon pairs. We derive the information capacity of a scheme based on the spatial entanglement of photons from a realistic source, and show that the standard measures of security known for quadrature-based continuous variable quantum cryptography (CV-QKD) are inadequate. A specific simple eavesdropping attack is analyzed to illuminate how secret information may be distilled well beyond the bounds of the usual CV-QKD measures.
Field demonstration of a continuous-variable quantum key distribution network.
Huang, Duan; Huang, Peng; Li, Huasheng; Wang, Tao; Zhou, Yingming; Zeng, Guihua
2016-08-01
We report on what we believe is the first field implementation of a continuous-variable quantum key distribution (CV-QKD) network with point-to-point configuration. Four QKD nodes are deployed on standard communication infrastructures connected with commercial telecom optical fiber. Reliable key exchange is achieved in the wavelength-division-multiplexing CV-QKD network. The impact of a complex and volatile field environment on the excess noise is investigated, since controlling and reducing excess noise is arguably the major issue limiting distance and secure key rate. We confirm the applicability and verify the maturity of the CV-QKD network in a metropolitan area, thus paving the way for a next-generation global secure communication network.
Continuous-variable quantum key distribution with 1 Mbps secure key rate.
Huang, Duan; Lin, Dakai; Wang, Chao; Liu, Weiqi; Fang, Shuanghong; Peng, Jinye; Huang, Peng; Zeng, Guihua
2015-06-29
We report the first continuous-variable quantum key distribution (CVQKD) experiment to achieve a 1 Mbps secure key rate over 25 km of standard telecom fiber in a coarse wavelength-division multiplexing (CWDM) environment. The result is achieved with two major technological advances: the use of a 1 GHz shot-noise-limited homodyne detector and the implementation of a 50 MHz clock system. The excess noise due to noise photons from the local oscillator and classical data channels in CWDM is controlled effectively. We note that the experimental verification of high-bit-rate CVQKD in the multiplexing environment is a significant step toward large-scale deployment in fiber networks.
NASA Astrophysics Data System (ADS)
Blackstock, J. M.; Covington, M. D.; Williams, S. G. W.; Myre, J. M.; Rodriguez, J.
2017-12-01
Variability in CO2 fluxes within Earth's Critical Zone occurs over a wide range of timescales. Resolving this variability and its drivers requires high-temporal-resolution monitoring of CO2 in both soil and aquatic environments. High-cost (> 1,000 USD) gas analyzers and data loggers present cost barriers for investigations with limited budgets, particularly if high spatial resolution is desired. To overcome high costs, we developed an Arduino-based CO2 measuring platform (i.e. gas analyzer and data logger). The platform was deployed at multiple sites within the Critical Zone overlying the Springfield Plateau aquifer in Northwest Arkansas, USA. The CO2 gas analyzer used in this study was a relatively low-cost SenseAir K30. The analyzer's optical housing was covered by a PTFE semi-permeable membrane allowing for gas exchange between the analyzer and environment. Total approximate cost of the monitoring platform was 200 USD (2% detection limit) to 300 USD (10% detection limit) depending on the K30 model used. For testing purposes, we deployed the Arduino-based platform alongside a commercial monitoring platform. CO2 concentration time series were nearly identical. Notably, CO2 cycles at the surface water site, which operated from January to April 2017, displayed a systematic increase in daily CO2 amplitude. Preliminary interpretation suggests a seasonal increase in stream metabolic function. Other interpretations of the observed cyclical and event-based behavior are beyond the scope of this study; however, the presented method provides an accurate near-hourly characterization of CO2 variability. The new platform has been shown to be operational for several months, and we infer reliable operation for much longer deployments (> 1 year) given adequate environmental protection and power supply. Considering cost savings, this platform is an attractive option for continuous, accurate, low-power, and low-cost CO2 monitoring for remote locations, globally.
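Reading the K30 over its serial interface reduces to sending a short read command and parsing a 7-byte response frame. A hedged Python sketch of the parsing step, with the frame layout as in commonly published K30 UART examples; a real deployment should also verify the trailing Modbus CRC-16 rather than only checking the byte count.

```python
def parse_k30(frame: bytes) -> int:
    """Parse a 7-byte K30 UART read-CO2 response into ppm.

    Frame layout assumed from the sensor's Modbus-style protocol:
    [addr, cmd, byte-count, CO2 high byte, CO2 low byte, crc, crc].
    Production code should also verify the trailing Modbus CRC-16.
    """
    if len(frame) != 7 or frame[2] != 0x02:
        raise ValueError("unexpected K30 response frame")
    return (frame[3] << 8) | frame[4]

# Example frame carrying a reading of 0x019F = 415 ppm (CRC bytes dummied).
frame = bytes([0xFE, 0x44, 0x02, 0x01, 0x9F, 0x00, 0x00])
print(parse_k30(frame))  # 415
```

On the Arduino side the same arithmetic applies; logging the returned ppm with a timestamp to an SD card completes the low-cost data-logger role described above.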
Slump, J; Ferguson, P C; Wunder, J S; Griffin, A M; Hoekstra, H J; Liu, X; Hofer, S O P; O'Neill, A C
2017-06-01
Flap reconstruction plays an essential role in the management of soft tissue sarcoma, facilitating wide resection while maximizing preservation of function. The addition of reconstruction increases the complexity of the surgery, and identification of patients who are at high risk for post-operative complications is an important part of the preoperative assessment. This study examines predictors of complications in these patients. 294 patients undergoing flap reconstruction following sarcoma resection were evaluated. Data on patient, tumour and treatment variables as well as post-operative complications were collected. Bivariate and multivariate regression analysis was performed to identify independent predictors of complications. Analysis of synergistic interaction between key patient and tumour risk factors was subsequently performed. A history of cerebrovascular events and a history of cardiac disease were found to be the strongest independent predictors of post-operative complications (OR 14.84, p = 0.003 and OR 5.71, p = 0.001, respectively). Further strong independent tumour and treatment-related predictors were high grade tumours (OR 1.91, p = 0.038) and the need for additional reconstructive procedures (OR 2.78, p = 0.001). Obesity had significant synergistic interaction with tumour resection diameter (RERI 1.1, SI 1.99, p = 0.02) and high tumour grade (RERI 0.86, SI 1.5, p = 0.01). Comorbidities showed significant synergistic interaction with large tumour resections (RERI 0.91, SI 1.83, p = 0.02). Patient, tumour and treatment-related variables contribute to complications following flap reconstruction of sarcoma defects. This study highlights the importance of considering the combined effect of multiple risk factors when evaluating and counselling patients, as significant synergistic interaction between variables can further increase the risk of complications.
Copyright © 2017 Elsevier Ltd, BASO ~ The Association for Cancer Surgery, and the European Society of Surgical Oncology. All rights reserved.
NASA Astrophysics Data System (ADS)
Libera, A.; de Barros, F.; Riva, M.; Guadagnini, A.
2016-12-01
Managing contaminated groundwater systems is an arduous task for multiple reasons. First, subsurface hydraulic properties are heterogeneous and the high costs associated with site characterization lead to data scarcity (therefore, model predictions are uncertain). Second, it is common for water agencies to schedule groundwater extraction through a temporal sequence of pumping rates to maximize the benefits to anthropogenic activities and minimize the environmental footprint of the withdrawal operations. The temporal variability in pumping rates and aquifer heterogeneity affect dilution rates of contaminant plumes and chemical concentration breakthrough curves (BTCs) at the well. While contaminant transport under steady-state pumping is widely studied, the manner in which a given time-varying pumping schedule affects contaminant plume behavior has been tackled only marginally. At the same time, most studies focus on the impact of Gaussian random hydraulic conductivity (K) fields on transport. Here, we systematically analyze the influence of the random space function (RSF) model characterizing K, in the presence of distinct pumping operations, on the uncertainty of the concentration BTC at the operating well. We juxtapose Monte Carlo based numerical results associated with two models: (a) a recently proposed Generalized Sub-Gaussian model which allows capturing non-Gaussian statistical scaling features of RSFs such as hydraulic conductivity, and (b) the commonly used Gaussian field approximation. Our novel results include an appraisal of the coupled effect of (a) the model employed to depict the random spatial variability of K and (b) the transient flow regime, as induced by a temporally varying pumping schedule, on the concentration BTC at the operating well. We systematically quantify the sensitivity of the uncertainty in the contaminant BTC to the RSF model adopted for K (non-Gaussian or Gaussian) in the presence of diverse well pumping schedules.
Our results help determine the conditions under which either of these two key factors prevails over the other.
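The Gaussian branch of such a Monte Carlo comparison starts from sampling a correlated log-conductivity field. A minimal 1-D Python stand-in via Cholesky factorization of an exponential covariance (grid size and correlation length are illustrative; the Generalized Sub-Gaussian model additionally modulates the Gaussian field with a random subordinator, which is not shown here):

```python
import math, random

def gaussian_logK_field(n, lam, sigma=1.0, seed=1):
    """Sample log-conductivity Y = ln K on a 1-D grid of n cells with
    exponential covariance sigma^2 * exp(-|d|/lam), via Cholesky.
    A minimal stand-in for the Gaussian branch of the comparison."""
    C = [[sigma**2 * math.exp(-abs(i - j) / lam) for j in range(n)]
         for i in range(n)]
    # Cholesky factorisation C = L L^T
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = math.sqrt(C[i][i] - s) if i == j else (C[i][j] - s) / L[j][j]
    rng = random.Random(seed)
    z = [rng.gauss(0, 1) for _ in range(n)]
    # Correlated sample: Y = L z
    return [sum(L[i][k] * z[k] for k in range(i + 1)) for i in range(n)]

Y = gaussian_logK_field(50, lam=10.0)
print(len(Y))
```

Each Monte Carlo realization of such a field would then feed a flow-and-transport solve under the chosen pumping schedule, and the ensemble of BTCs at the well quantifies the prediction uncertainty discussed above.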
Operational Assessment of Apollo Lunar Surface Extravehicular Activity
NASA Technical Reports Server (NTRS)
Miller, Matthew James; Claybrook, Austin; Greenlund, Suraj; Marquez, Jessica J.; Feigh, Karen M.
2017-01-01
Quantifying the operational variability of extravehicular activity (EVA) execution is critical to help design and build future support systems to enable astronauts to monitor and manage operations in deep-space, where ground support operators will no longer be able to react instantly and manage execution deviations due to the significant communication latency. This study quantifies the operational variability exhibited during Apollo 14-17 lunar surface EVA operations to better understand the challenges and natural tendencies of timeline execution and life support system performance involved in surface operations. Each EVA (11 in total) is individually summarized as well as aggregated to provide descriptive trends exhibited throughout the Apollo missions. This work extends previous EVA task analyses by calculating deviations between planned and as-performed timelines as well as examining metabolic rate and consumables usage throughout the execution of each EVA. The intent of this work is to convey the natural variability of EVA operations and to provide operational context for coping with the variability inherent to EVA execution as a means to support future concepts of operations.
3D facial landmarks: Inter-operator variability of manual annotation
2014-01-01
Background Manual annotation of landmarks is a known source of variance, which exists in all fields of medical imaging and influences the accuracy and interpretation of results. However, the variability of human facial landmarks is only sparsely addressed in the current literature, as opposed to, e.g., the research fields of orthodontics and cephalometrics. We present a full facial 3D annotation procedure and a sparse set of manually annotated landmarks, in an effort to reduce operator time and minimize the variance. Method Facial scans from 36 voluntary unrelated blood donors from the Danish Blood Donor Study were randomly chosen. Six operators twice manually annotated 73 anatomical and pseudo-landmarks, using a three-step scheme producing a dense point correspondence map. We analyzed both the intra- and inter-operator variability, using mixed-model ANOVA. We then compared four sparse sets of landmarks in order to construct a dense correspondence map of the 3D scans with a minimum point variance. Results The anatomical landmarks of the eye were associated with the lowest variance, particularly the centers of the pupils, whereas points on the jaw and eyebrows had the highest variation. Intra-operator effects and portrait-related variables showed only marginal variability. Using a sparse set of landmarks (n=14) that captures the whole face, the dense point mean variance was reduced from 1.92 to 0.54 mm. Conclusion The inter-operator variability was primarily associated with particular landmarks, where more loosely defined landmarks had the highest variability. The variables embedded in the portrait and the reliability of a trained operator had only marginal influence on the variability. Further, using 14 of the annotated landmarks we were able to reduce the variability and create a dense correspondence mesh capturing all facial features. PMID:25306436
Method for encryption and transmission of digital keying data
Mniszewski, Susan M.; Springer, Edward A.; Brenner, David P.
1988-01-01
A method for the encryption, transmission, and subsequent decryption of digital keying data. The method utilizes the Data Encryption Standard and is implemented by means of a pair of apparatus, each of which is selectable to operate as either a master unit or a remote unit. Each unit contains a set of key encryption keys which are indexed by a common indexing system. The master unit operates upon command from the remote unit to generate a data encryption key and encrypt the data encryption key using a preselected key encryption key. The encrypted data encryption key and an index designator are then downloaded to the remote unit, where the data encryption key is decrypted for subsequent use in the encryption and transmission of data. Downloading of the encrypted data encryption key enables frequent change of keys without requiring manual entry or storage of keys at the remote unit.
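The download flow can be sketched end to end in a few lines. DES itself is replaced here by a toy XOR keystream so the example stays self-contained and runnable; the table contents, sizes, and names are illustrative, not the patent's actual formats.

```python
import os, hashlib

def toy_encrypt(key: bytes, data: bytes) -> bytes:
    """XOR keystream stand-in for DES -- illustrative only."""
    stream = hashlib.sha256(key).digest()
    return bytes(d ^ stream[i % 32] for i, d in enumerate(data))

toy_decrypt = toy_encrypt  # XOR is its own inverse

# Both units hold the same indexed table of key-encryption keys (KEKs).
kek_table = {0: b"KEK-zero-0000000", 1: b"KEK-one-11111111"}

# Master: on request, generate a fresh data-encryption key (DEK),
# wrap it under a chosen KEK, and download (index, wrapped DEK).
index = 1
dek = os.urandom(8)                      # DES-sized session key
download = (index, toy_encrypt(kek_table[index], dek))

# Remote: look up the KEK by the index designator and unwrap the DEK.
idx, wrapped = download
recovered = toy_decrypt(kek_table[idx], wrapped)
print(recovered == dek)  # True
```

Because only the index and the wrapped key cross the link, the DEK can be rotated frequently without any manual key entry at the remote unit, which is the core claim of the method.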
Parallel Aircraft Trajectory Optimization with Analytic Derivatives
NASA Technical Reports Server (NTRS)
Falck, Robert D.; Gray, Justin S.; Naylor, Bret
2016-01-01
Trajectory optimization is an integral component for the design of aerospace vehicles, but emerging aircraft technologies have introduced new demands on trajectory analysis that current tools are not well suited to address. Designing aircraft with technologies such as hybrid electric propulsion and morphing wings requires consideration of the operational behavior as well as the physical design characteristics of the aircraft. The addition of operational variables can dramatically increase the number of design variables, which motivates the use of gradient-based optimization with analytic derivatives to solve the larger optimization problems. In this work we develop an aircraft trajectory analysis tool using a Legendre-Gauss-Lobatto based collocation scheme, providing analytic derivatives via the OpenMDAO multidisciplinary optimization framework. This collocation method uses an implicit time integration scheme that provides a high degree of sparsity and thus several potential options for parallelization. The performance of the new implementation was investigated via a series of single and multi-trajectory optimizations using a combination of parallel computing and constraint aggregation. The computational performance results show that in order to take full advantage of the sparsity in the problem it is vital to parallelize both the non-linear analysis evaluations and the derivative computations themselves. The constraint aggregation results showed a significant numerical challenge due to difficulty in achieving tight convergence tolerances. Overall, the results demonstrate the value of applying analytic derivatives to trajectory optimization problems and lay the foundation for future application of this collocation based method to the design of aircraft where operational scheduling of technologies is key to achieving good performance.
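The core of any direct collocation scheme is the defect constraint the optimizer drives to zero at each segment. A Python sketch using the trapezoidal rule as a simpler stand-in for the Legendre-Gauss-Lobatto scheme of the paper (the structure of one defect per segment, and hence the sparsity, is the same idea):

```python
def trapezoid_defects(f, x, u, t):
    """Defect constraints for direct collocation on a fixed time grid:
    zeta_i = x[i+1] - x[i] - (h/2)*(f(x[i],u[i]) + f(x[i+1],u[i+1])).
    Trapezoidal rule here as a simpler stand-in for the
    Legendre-Gauss-Lobatto scheme; an optimizer drives all
    defects to zero to enforce the dynamics."""
    defects = []
    for i in range(len(t) - 1):
        h = t[i + 1] - t[i]
        defects.append(x[i + 1] - x[i]
                       - 0.5 * h * (f(x[i], u[i]) + f(x[i + 1], u[i + 1])))
    return defects

# Single state xdot = u with u = 1: the exact trajectory x = t
# satisfies every defect.
f = lambda x, u: u
t = [0.0, 0.5, 1.0, 1.5]
x = list(t)                 # candidate trajectory x(t) = t
u = [1.0] * len(t)
print(trapezoid_defects(f, x, u, t))  # [0.0, 0.0, 0.0]
```

Because each defect touches only neighboring nodes, the constraint Jacobian is sparse, which is what makes the parallelization options discussed above attractive.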
Operating Comfort Prediction Model of Human-Machine Interface Layout for Cabin Based on GEP.
Deng, Li; Wang, Guohua; Chen, Bo
2015-01-01
In view of the evaluation and decision-making problem of human-machine interface layout design for cabins, an operating comfort prediction model is proposed based on GEP (Gene Expression Programming), using operating comfort to evaluate layout schemes. Joint angles are used to describe the operating posture of the upper limb and are taken as independent variables to establish the comfort model of operating posture. Factor analysis is adopted to decrease the variable dimension; the model's input variables are reduced from 16 joint angles to 4 comfort impact factors, and the output variable is the operating comfort score. The Chinese virtual human body model is built with CATIA software, which is used to simulate and evaluate the operators' operating comfort. With 22 groups of evaluation data as training and validation samples, the GEP algorithm is used to obtain the best fitting function between the joint angles and operating comfort; operating comfort can then be predicted quantitatively. The operating comfort prediction results for the human-machine interface layout of a driller control room show that the GEP-based operating comfort prediction model is fast and efficient, has good predictive performance, and can improve design efficiency.
Operant Variability: A Conceptual Analysis
ERIC Educational Resources Information Center
Barba, Lourenco de Souza
2012-01-01
Some researchers claim that variability is an operant dimension of behavior. The present paper reviews the concept of operant behavior and emphasizes that differentiation is the behavioral process that demonstrates an operant relation. Differentiation is conceived as change in the overlap between two probability distributions: the distribution of…
Enhancing Heart-Beat-Based Security for mHealth Applications.
Seepers, Robert M; Strydis, Christos; Sourdis, Ioannis; De Zeeuw, Chris I
2017-01-01
In heart-beat-based security, a security key is derived from the time difference between consecutive heart beats (the inter-pulse interval, IPI), which may, subsequently, be used to enable secure communication. While heart-beat-based security holds promise in mobile health (mHealth) applications, there currently exists no work that provides a detailed characterization of the delivered security in a real system. In this paper, we evaluate the strength of IPI-based security keys in the context of entity authentication. We investigate several aspects that should be considered in practice, including subjects with reduced heart-rate variability (HRV), different sensor-sampling frequencies, intersensor variability (i.e., how accurate each entity may measure heart beats) as well as average and worst-case-authentication time. Contrary to the current state of the art, our evaluation demonstrates that authentication using multiple, less-entropic keys may actually increase the key strength by reducing the effects of intersensor variability. Moreover, we find that the maximal key strength of a 60-bit key varies between 29.2 bits and only 5.7 bits, depending on the subject's HRV. To improve security, we introduce the inter-multi-pulse interval (ImPI), a novel method of extracting entropy from the heart by considering the time difference between nonconsecutive heart beats. Given the same authentication time, using the ImPI for key generation increases key strength by up to 3.4 × (+19.2 bits) for subjects with limited HRV, at the cost of an extended key-generation time of 4.8 × (+45 s).
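The baseline IPI scheme can be sketched as quantizing each interval and keeping its least-significant bits as key material. A Python illustration on invented R-R intervals; the paper's ImPI variant would instead difference non-consecutive beats before quantization, and real systems add error-correction/reconciliation between the two sensors, both omitted here.

```python
def ipi_key_bits(ipis_ms, bits_per_ipi=4):
    """Derive key bits from inter-pulse intervals (ms) by keeping the
    least-significant bits of each quantized interval.  The low bits
    carry most of the heart-rate-variability entropy; subjects with
    reduced HRV yield correspondingly weaker keys."""
    key = ""
    for ipi in ipis_ms:
        q = int(round(ipi)) & ((1 << bits_per_ipi) - 1)
        key += format(q, f"0{bits_per_ipi}b")
    return key

ipis = [812.3, 790.7, 845.1, 801.4]   # illustrative R-R intervals
print(ipi_key_bits(ipis))
```

Concatenating 4 bits per beat means a 60-bit key needs 15 beats, which is why key-generation time trades off directly against key strength in the authentication results above.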
Maintenance-free operation of WDM quantum key distribution system through a field fiber over 30 days
NASA Astrophysics Data System (ADS)
Yoshino, Ken-ichiro; Ochi, Takao; Fujiwara, Mikio; Sasaki, Masahide; Tajima, Akio
2013-12-01
Maintenance-free wavelength-division-multiplexed quantum key distribution was achieved for 30 days over a 22-km field fiber. Using polarization-independent interferometers and stabilization techniques, we attained a quantum bit error rate as low as 1.70% and a key rate as high as 229.8 kbps, setting a record of 595.6 Gbits of total secure key accumulated over the uninterrupted operation period.
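As a quick consistency check, the reported total secure key follows directly from the sustained key rate and the uninterrupted duration:

```python
# Consistency check on the reported figures: key rate x duration
rate_bps = 229.8e3                  # 229.8 kbps sustained key rate
duration_s = 30 * 24 * 3600         # 30 days of uninterrupted operation
total_gbits = rate_bps * duration_s / 1e9
# total_gbits is about 595.6, matching the reported 595.6 Gbits
```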
Daylight operation of a free space, entanglement-based quantum key distribution system
NASA Astrophysics Data System (ADS)
Peloso, Matthew P.; Gerhardt, Ilja; Ho, Caleb; Lamas-Linares, Antía; Kurtsiefer, Christian
2009-04-01
Many quantum key distribution (QKD) implementations using a free space transmission path are restricted to operation at night time in order to distinguish the signal photons used for a secure key establishment from the background light. Here, we present a lean entanglement-based QKD system overcoming that limitation. By implementing spectral, spatial and temporal filtering techniques, we establish a secure key continuously over several days under varying light and weather conditions.
Parametric Studies of Flow Separation using Air Injection
NASA Technical Reports Server (NTRS)
Zhang, Wei
2004-01-01
Boundary layer separation causes the airfoil to stall and therefore imposes dramatic performance degradation on the airfoil. In recent years, flow separation control has been one of the active research areas in aerodynamics because of its promising performance improvements for lifting devices. Active flow separation control techniques include steady and unsteady air injection as well as suction on the airfoil surface. This paper focuses on steady and unsteady air injection on the airfoil. Although wind tunnel experiments revealed performance improvements on the airfoil using injection techniques, the details of how key variables such as air injection slot geometry and air injection angle impact the effectiveness of flow separation control via air injection have not been studied. A parametric study of both steady and unsteady air injection active flow control will be the main objective for this summer. For steady injection, the key variables include the slot geometry, orientation, spacing, air injection velocity, and injection angle. For unsteady injection, the injection frequency will also be investigated. Key metrics such as lift coefficient, drag coefficient, total pressure loss, and total injection mass will be used to measure the effectiveness of the control technique. A design of experiments using the Box-Behnken design is set up to determine how each of the variables affects each of the key metrics. Design of experiments is used so that the number of experimental runs is kept to a minimum while still predicting which variables are the key contributors to the responses. The experiments will then be conducted in the 1 ft by 1 ft wind tunnel according to the design-of-experiments settings.
The data obtained from the experiments will be imported into JMP, a statistical software package, to generate sets of response surface equations that represent the statistical empirical model for each of the metrics as a function of the key variables. Next, variables such as the slot geometry can be optimized using the built-in optimizer within JMP. Finally, wind tunnel testing will be conducted using the optimized slot geometry and other key variables to verify the empirical statistical model. The long-term goal for this effort is to assess the impacts of active flow control using air injection at the system level, as one of the task plans included in NASA's URETI program with the Georgia Institute of Technology.
Environmental Variability in the Florida Keys: Impacts on Coral Reef Resilience and Health
NASA Astrophysics Data System (ADS)
Soto, I. M.; Muller-Karger, F. E.
2005-12-01
Environmental variability contributes to both mass mortality and resilience in tropical coral reef communities. We assess variations in sea surface temperature (SST) and ocean color in the Florida Keys using satellite imagery, and provide insight into how this variability is associated with locations of resilient coral communities (those unaffected by or able to recover from major events). The project tests the hypothesis that areas with historically low environmental variability promote lower levels of coral reef resilience. Time series of SST from the Advanced Very High Resolution Radiometer (AVHRR) sensors and ocean color derived quantities (e.g., turbidity and chlorophyll) from the Sea-viewing Wide Field of View Sensor (SeaWiFS) are being constructed over the entire Florida Keys region for a period of twelve and nine years, respectively. These data will be compared with historical coral cover data derived from Landsat imagery (1984-2002). Improved understanding of the causes of coral reef decline or resilience will help protect and manage these natural treasures.
Qiu, Menglong; Wang, Qi; Li, Fangbai; Chen, Junjian; Yang, Guoyi; Liu, Liming
2016-01-01
A customized logistic-based cellular automata (CA) model was developed to simulate changes in heavy metal contamination (HMC) in farmland soils of Dongguan, a manufacturing center in Southern China, and to discover the relationship between HMC and related explanatory variables (continuous and categorical). The model was calibrated through the simulation and validation of HMC in 2012. Thereafter, the model was implemented for the scenario simulation of development alternatives for HMC in 2022. The HMC in 2002 and 2012 was determined through soil tests and cokriging. Continuous variables were divided into two groups by odds ratios. Positive variables (odds ratios >1) included the Nemerow synthetic pollution index in 2002, linear drainage density, distance from the city center, distance from the railway, slope, and secondary industrial output per unit of land. Negative variables (odds ratios <1) included elevation, distance from the road, distance from the key polluting enterprises, distance from the town center, soil pH, and distance from bodies of water. Categorical variables, including soil type, parent material type, organic content grade, and land use type, also significantly influenced HMC according to Wald statistics. The relative operating characteristic and kappa coefficients were 0.91 and 0.64, respectively, which proved the validity and accuracy of the model. The scenario simulation shows that the government should not only implement stricter environmental regulation but also strengthen the remediation of the current polluted area to effectively mitigate HMC.
Concurrently adjusting interrelated control parameters to achieve optimal engine performance
Jiang, Li; Lee, Donghoon; Yilmaz, Hakan; Stefanopoulou, Anna
2015-12-01
Methods and systems for real-time engine control optimization are provided. A value of an engine performance variable is determined, a value of a first operating condition and a value of a second operating condition of a vehicle engine are detected, and initial values for a first engine control parameter and a second engine control parameter are determined based on the detected first operating condition and the detected second operating condition. The initial values for the first engine control parameter and the second engine control parameter are adjusted based on the determined value of the engine performance variable to cause the engine performance variable to approach a target engine performance variable. In order to cause the engine performance variable to approach the target engine performance variable, adjusting the initial value for the first engine control parameter necessitates a corresponding adjustment of the initial value for the second engine control parameter.
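The coupled adjustment described above can be sketched as a single feedback step in which moving the first parameter toward the performance target forces a compensating move of the second. The gain, the linear coupling, and the parameter values are hypothetical, not taken from the patent.

```python
def adjust_parameters(p1, p2, perf, target, gain=0.5, coupling=-0.8):
    """One optimization step: nudge the first control parameter toward the
    target engine performance variable and apply the corresponding
    (interrelated) correction to the second parameter."""
    error = target - perf          # engine performance variable vs. target
    dp1 = gain * error             # adjustment of the first control parameter
    dp2 = coupling * dp1           # necessitated adjustment of the second
    return p1 + dp1, p2 + dp2

# e.g. two interrelated timing parameters adjusted toward a performance target
new_p1, new_p2 = adjust_parameters(p1=10.0, p2=5.0, perf=0.9, target=1.0)
```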
NASA Astrophysics Data System (ADS)
Cordova, Martin; Serio, Andrew; Meza, Francisco; Arriagada, Gustavo; Swett, Hector; Ball, Jesse; Collins, Paul; Masuda, Neal; Fuentes, Javier
2016-07-01
In 2014 Gemini Observatory started the base facility operations (BFO) project. The project's goal was to provide the ability to operate the two Gemini telescopes from their base facilities (respectively Hilo, HI at Gemini North, and La Serena, Chile at Gemini South). BFO was identified as a key project for Gemini's transition program, as it created an opportunity to reduce operational costs. In November 2015, the Gemini North telescope started operating from the base facility in Hilo, Hawaii. In order to provide the remote operator the tools to work from the base, many of the activities that were normally performed by the night staff at the summit were replaced with new systems and tools. This paper describes some of the key systems and tools implemented for environmental monitoring, and the design used in the implementation at the Gemini North telescope.
Analysis of the impact of safeguards criteria
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mullen, M.F.; Reardon, P.T.
As part of the US Program of Technical Assistance to IAEA Safeguards, the Pacific Northwest Laboratory (PNL) was asked to assist in developing and demonstrating a model for assessing the impact of setting criteria for the application of IAEA safeguards. This report presents the results of PNL's work on the task. The report is in three parts. The first explains the technical approach and methodology. The second contains an example application of the methodology. The third presents the conclusions of the study. PNL used the model and computer programs developed as part of Task C.5 (Estimation of Inspection Efforts) of the Program of Technical Assistance. The example application of the methodology involves low-enriched uranium conversion and fuel fabrication facilities. The effects of variations in seven parameters are considered: false alarm probability, goal probability of detection, detection goal quantity, the plant operator's measurement capability, the inspector's variables measurement capability, the inspector's attributes measurement capability, and annual plant throughput. Among the key results and conclusions of the analysis are the following: the variables with the greatest impact on the probability of detection are the inspector's measurement capability, the goal quantity, and the throughput; the variables with the greatest impact on inspection costs are the throughput, the goal quantity, and the goal probability of detection; there are important interactions between variables, that is, the effects of a given variable often depend on the level or value of some other variable, and with the methodology used in this study these interactions can be quantitatively analyzed; and reasonably good approximate prediction equations can be developed using the methodology described here.
Resistance to Change and Preference for Variable versus Fixed Response Sequences
ERIC Educational Resources Information Center
Arantes, Joana; Berg, Mark E.; Le, Dien; Grace, Randolph C.
2012-01-01
In Experiment 1, 4 pigeons were trained on a multiple chain schedule in which the initial link was a variable-interval (VI) 20-s schedule signalled by a red or green center key, and terminal links required four responses made to the left (L) and/or right (R) keys. In the REPEAT component, signalled by red keylights, only LRLR terminal-link…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-03
... via the U.S. Postal Service to Naval Facilities Engineering Command Southeast, NAS Key West Air... the project Web site ( http://www.keywesteis.com ). All statements, oral or written, submitted during... Engineering Command Southeast, NAS Key West Air Operations EIS Project Manager, P.O. Box 30, Building 903, NAS...
Summary of Key Operating Statistics: Data Collected from the 2009 Annual Institutional Report
ERIC Educational Resources Information Center
Accrediting Council for Independent Colleges and Schools, 2010
2010-01-01
The Accrediting Council for Independent Colleges and Schools (ACICS) provides the Summary of Key Operating Statistics (KOS) as an annual review of the performance and key measurements of the more than 800 private post-secondary institutions we accredit. This edition of the KOS contains information based on the 2009 Annual Institutional Reports…
Method and apparatus for executing an asynchronous clutch-to-clutch shift in a hybrid transmission
Demirovic, Besim; Gupta, Pinaki; Kaminsky, Lawrence A.; Naqvi, Ali K.; Heap, Anthony H.; Sah, Jy-Jen F.
2014-08-12
A hybrid transmission includes first and second electric machines. A method for operating the hybrid transmission in response to a command to execute a shift from an initial continuously variable mode to a target continuously variable mode includes increasing torque of an oncoming clutch associated with operating in the target continuously variable mode and correspondingly decreasing a torque of an off-going clutch associated with operating in the initial continuously variable mode. Upon deactivation of the off-going clutch, torque outputs of the first and second electric machines and the torque of the oncoming clutch are controlled to synchronize the oncoming clutch. Upon synchronization of the oncoming clutch, the torque for the oncoming clutch is increased and the transmission is operated in the target continuously variable mode.
Risius, Debbie; Milligan, Alexandra; Berns, Jason; Brown, Nicola; Scurr, Joanna
2017-05-01
To assess the effectiveness of breast support, previous studies monitored breast kinematics and kinetics, subjective feedback, muscle activity (EMG), ground reaction forces (GRFs) and physiological measures in isolation. Comparing these variables within one study will establish the key performance variables that distinguish between breast supports during activities such as running. This study investigates the effects of changes in breast support on biomechanical, physiological and subjective measures during running. Ten females (34D) ran for 10 min in high and low breast supports, and for 2 min bare breasted (2.8 m·s⁻¹). Breast and body kinematics, EMG, expired air and heart rate were recorded. GRFs were recorded during 10 m overground runs (2.8 m·s⁻¹) and subjective feedback was obtained after each condition. Of the 62 variables measured, 22 kinematic and subjective variables were influenced by changes in breast support. Willingness to exercise, time lag and superior-inferior breast velocity were most affected. GRFs, EMG and physiological variables were unaffected by breast-support changes during running. Breast displacement reduction, although previously advocated, was not the most sensitive variable to breast-support changes during running. Instead, breast support products should be assessed using a battery of performance indicators, including the key kinematic and subjective variables identified here.
NASA Astrophysics Data System (ADS)
Arnold, Jeffrey; Clark, Martyn; Gutmann, Ethan; Wood, Andy; Nijssen, Bart; Rasmussen, Roy
2016-04-01
The United States Army Corps of Engineers (USACE) has had primary responsibility for multi-purpose water resource operations on most of the major river systems in the U.S. for more than 200 years. In that time, the USACE projects and programs making up those operations have proved mostly robust against the range of natural climate variability encountered over their operating life spans. However, in some watersheds and for some variables, climate change now is known to be shifting the hydroclimatic baseline around which that natural variability occurs and changing the range of that variability as well. This makes historical stationarity an inappropriate basis for assessing continued project operations under climate-changed futures. That means new hydroclimatic projections are required at multiple scales to inform decisions about specific threats and impacts, and for possible adaptation responses to limit water-resource vulnerabilities and enhance operational resilience. However, projections of possible future hydroclimatologies have myriad complex uncertainties that require explicit guidance for interpreting and using them to inform those decisions about climate vulnerabilities and resilience. Moreover, many of these uncertainties overlap and interact. Recent work, for example, has shown the importance of assessing the uncertainties from multiple sources including: global model structure [Meehl et al., 2005; Knutti and Sedlacek, 2013]; internal climate variability [Deser et al., 2012; Kay et al., 2014]; climate downscaling methods [Gutmann et al., 2012; Mearns et al., 2013]; and hydrologic models [Addor et al., 2014; Vano et al., 2014; Mendoza et al., 2015]. Revealing, reducing, and representing these uncertainties is essential for defining the plausible quantitative climate change narratives required to inform water-resource decision-making. 
And to be useful, such quantitative narratives, or storylines, of climate change threats and hydrologic impacts must sample from the full range of uncertainties associated with all parts of the simulation chain, from global climate models with simulations of natural climate variability, through regional climate downscaling, and on to modeling of affected hydrologic processes and downstream water resources impacts. This talk will present part of the work underway now both to reveal and reduce some important uncertainties and to develop explicit guidance for future generation of quantitative hydroclimatic storylines. Topics will include: 1- model structural and parameter-set limitations of some methods widely used to quantify climate impacts to hydrologic processes [Gutmann et al., 2014; Newman et al., 2015]; 2- development and evaluation of new, spatially consistent, U.S. national-scale climate downscaling and hydrologic simulation capabilities directly relevant at the multiple scales of water-resource decision-making [Newman et al., 2015; Mizukami et al., 2015; Gutmann et al., 2016]; and 3- development and evaluation of advanced streamflow forecasting methods to reduce and represent integrated uncertainties in a tractable way [Wood et al., 2014; Wood et al., 2015]. A key focus will be areas where climatologic and hydrologic science is currently under-developed to inform decisions - or is perhaps wrongly scaled or misapplied in practice - indicating the need for additional fundamental science and interpretation.
NASA Astrophysics Data System (ADS)
Biset, S.; Nieto Deglioumini, L.; Basualdo, M.; Garcia, V. M.; Serra, M.
The aim of this work is to investigate a good preliminary plantwide control structure for the process of hydrogen production from bioethanol for use in a proton exchange membrane (PEM) fuel cell, accounting for steady-state information only. The objective is to keep the process at its optimal operating point, that is, performing energy integration to achieve maximum efficiency. Ethanol, produced from renewable feedstocks, feeds a fuel processor comprising steam reforming followed by high- and low-temperature shift reactors and preferential oxidation, which is coupled to a polymeric fuel cell. Applying steady-state simulation techniques and using thermodynamic models, the performance of the complete system with two different control structures has been evaluated for the most typical perturbations. A sensitivity analysis of the key process variables, together with the rigorous operability requirements of the fuel cell, is taken into account in defining an acceptable plantwide control structure. This is the first work showing an alternative control structure applied to this kind of process.
Concepts for Multi-Speed Rotorcraft Drive System - Status of Design and Testing at NASA GRC
NASA Technical Reports Server (NTRS)
Stevens, Mark A.; Lewicki, David G.; Handschuh, Robert F.
2015-01-01
In several studies and ongoing developments for advanced rotorcraft, the need for variable/multi-speed-capable rotors has been raised. Speed changes of up to 50 percent have been proposed for future rotorcraft to improve vehicle performance. A rotor speed change during operation not only requires a rotor that can perform effectively over the operating speed/load range, but also requires a propulsion system possessing these same capabilities. A study was completed investigating possible drive system arrangements that can accommodate up to a 50 percent speed change. Key drivers were identified, of which simplicity and weight were judged central. This paper presents the current status of two gear train concepts, coupled with the first of two clutch types developed and tested thus far, with a focus on design lessons learned and areas requiring development. Also presented is a third concept, a dual-input planetary differential leveraged from a simple planetary with a fixed carrier.
Employing ISRU Models to Improve Hardware Design
NASA Technical Reports Server (NTRS)
Linne, Diane L.
2010-01-01
An analytical model for hydrogen reduction of regolith was used to investigate the effects of several key variables on the energy and mass performance of reactors for a lunar in-situ resource utilization oxygen production plant. Reactor geometry, reaction time, number of reactors, heat recuperation, heat loss, and operating pressure were all studied to guide hardware designers who are developing future prototype reactors. The effects of heat recuperation, where the incoming regolith is pre-heated by the hot spent regolith before transfer, were also investigated for the first time. In general, longer reaction times per batch provide a lower overall energy, but also result in larger and heavier reactors. Three reactors with long heat-up times result in energy requirements similar to those of a two-reactor system with all other parameters the same. Three reactors with heat recuperation result in energy reductions of 20 to 40 percent compared to a three-reactor system with no heat recuperation. Increasing operating pressure can provide energy reductions similar to heat recuperation for the same reaction times.
User's guide to the Fault Inferring Nonlinear Detection System (FINDS) computer program
NASA Technical Reports Server (NTRS)
Caglayan, A. K.; Godiwala, P. M.; Satz, H. S.
1988-01-01
Described are the operation and internal structure of the computer program FINDS (Fault Inferring Nonlinear Detection System). The FINDS algorithm is designed to provide reliable estimates for aircraft position, velocity, attitude, and horizontal winds to be used for guidance and control laws in the presence of possible failures in the avionics sensors. The FINDS algorithm was developed with the use of a digital simulation of a commercial transport aircraft and tested with flight recorded data. The algorithm was then modified to meet the size constraints and real-time execution requirements on a flight computer. For the real-time operation, a multi-rate implementation of the FINDS algorithm has been partitioned to execute on a dual parallel processor configuration: one based on the translational dynamics and the other on the rotational kinematics. The report presents an overview of the FINDS algorithm, the implemented equations, the flow charts for the key subprograms, the input and output files, program variable indexing convention, subprogram descriptions, and the common block descriptions used in the program.
Lee, Fook Choon; Rangaiah, Gade Pandu; Ray, Ajay Kumar
2007-10-15
The bulk of the penicillin produced is used as raw material for semi-synthetic penicillins (such as amoxicillin and ampicillin) and semi-synthetic cephalosporins (such as cephalexin and cefadroxil). In the present paper, an industrial penicillin V bioreactor train is optimized for multiple objectives simultaneously. An industrial train, comprising a bank of identical bioreactors, is run semi-continuously in a synchronous fashion. The fermentation taking place in a bioreactor is modeled using a morphologically structured mechanism. For multi-objective optimization with two and three objectives, the elitist non-dominated sorting genetic algorithm (NSGA-II) is chosen. Instead of a single optimum as in traditional optimization, a wide range of optimal design and operating conditions, depicting trade-offs among key performance indicators such as batch cycle time, yield, profit, and penicillin concentration, is successfully obtained. The effects of design and operating variables on the optimal solutions are discussed in detail. Copyright 2007 Wiley Periodicals, Inc.
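The Pareto (non-dominated) filtering at the heart of NSGA-II-style multi-objective optimization can be sketched compactly. The candidate operating points below are hypothetical; each is a pair (batch cycle time, negated yield), so that both objectives are minimized.

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly
    better in at least one (all objectives minimized)."""
    return (all(x <= y for x, y in zip(a, b)) and
            any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Keep only the non-dominated points (the trade-off set)."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# hypothetical (cycle time in h, -yield) candidates from a bioreactor model
candidates = [(100, -0.80), (120, -0.90), (110, -0.85), (130, -0.88)]
front = pareto_front(candidates)
```

NSGA-II repeatedly applies this sorting (plus crowding-distance selection and genetic operators) to evolve the whole front at once, which is why the paper obtains a range of optimal conditions rather than a single optimum.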
Heavy equipment maintenance wastes and environmental management in the mining industry.
Guerin, Turlough F
2002-10-01
Maintenance wastes, if not managed properly, represent significant environmental issues for mining operations. Petroleum hydrocarbon liquid wastes were studied at an Australian site and a review of the literature and technology vendors was carried out to identify oil/water separation technologies. Treatment technologies and practices for managing oily wastewater, used across the broader mining industry in the Asia-Pacific region, were also identified. Key findings from the study were: (1) primary treatment is required to remove grease oil contamination and to protect secondary oily wastewater treatment systems from being overloaded; (2) selection of an effective secondary treatment system is dependent on influent oil droplet size and concentration, suspended solids concentration, flow rates (and their variability), environmental conditions, maintenance schedules and effectiveness, treatment targets and costs; and (3) oily wastewater treatment systems, based on mechanical separation, are favoured over those that are chemically based, as they simplify operational requirements. Source reduction, through housekeeping, equipment and reagent modifications, and segregation and/or consolidation of hydrocarbon waste streams, minimizes treatment costs, safety and environmental impact.
An investigation on wireless sensors for asset management and health monitoring of civil structures
NASA Astrophysics Data System (ADS)
Furkan, Mustafa; Mao, Qiang; Mazzotti, Matteo; DeVitis, John; Sumitro, S. Paul; Faridazar, Fred; Aktan, A. Emin; Moon, Franklin; Bartoli, Ivan
2016-04-01
Application of wireless sensors and sensor networks for Structural Health Monitoring has been investigated for a long time. Key limitations for practical use are energy requirements, connectivity, and integration with existing systems. Current sensors and sensor networks mainly rely on wired connectivity for communication and external power source for energy. This paper presents a suite of wireless sensors that are low-cost, maintenance free, rugged, and have long service life. The majority of the sensors considered were designed by transforming existing, proven, and robust wired sensors into wireless units. In this study, the wireless sensors were tested in laboratory conditions for calibration and evaluation along with wired sensors. The experimental results were also compared to theoretical results. The tests mostly show satisfactory performance of the wireless units. This work is part of a broader Federal Highway Administration sponsored project intended to ultimately validate a wireless sensing system on a real, operating structure to account for all the uncertainties, environmental conditions and operational variability that are encountered in the field.
Key-Generation Algorithms for Linear Piece In Hand Matrix Method
NASA Astrophysics Data System (ADS)
Tadaki, Kohtaro; Tsujii, Shigeo
The linear Piece In Hand (PH, for short) matrix method with random variables was proposed in our former work. It is a general prescription applicable to any type of multivariate public-key cryptosystem (MPKC) for the purpose of enhancing its security. Actually, we showed, in an experimental manner, that the linear PH matrix method with random variables can certainly enhance the security of HFE against the Gröbner basis attack, where HFE is one of the major variants of multivariate public-key cryptosystems. In 1998 Patarin, Goubin, and Courtois introduced the plus method as a general prescription which aims to enhance the security of any given MPKC, just like the linear PH matrix method with random variables. In this paper we prove the equivalence between the plus method and the primitive linear PH matrix method, which was introduced in our previous work to explain the notion of the PH matrix method in an illustrative manner, not for practical use in enhancing the security of any given MPKC. Based on this equivalence, we show that the linear PH matrix method with random variables has a substantial advantage over the plus method with respect to security enhancement. In the linear PH matrix method with random variables, three matrices, including the PH matrix, play a central role in the secret key and public key. In this paper, we clarify how to generate these matrices and present two probabilistic polynomial-time algorithms for doing so. In particular, the second one has a concise form, and is obtained as a byproduct of the proof of the equivalence between the plus method and the primitive linear PH matrix method.
Simple Sensitivity Analysis for Orion Guidance Navigation and Control
NASA Technical Reports Server (NTRS)
Pressburger, Tom; Hoelscher, Brian; Martin, Rodney; Sricharan, Kumar
2013-01-01
The performance of Orion flight software, especially its GNC software, is being analyzed by running Monte Carlo simulations of Orion spacecraft flights. The simulated performance is analyzed for conformance with flight requirements, expressed as performance constraints. Flight requirements include guidance (e.g., touchdown distance from target) and control (e.g., control saturation) as well as performance (e.g., heat load constraints). The Monte Carlo simulations disperse hundreds of simulation input variables, covering everything from mass properties to date of launch. We describe in this paper a sensitivity analysis tool ("Critical Factors Tool" or CFT) developed to find the input variables, or pairs of variables, which by themselves significantly influence satisfaction of requirements or significantly affect key performance metrics (e.g., touchdown distance from target). Knowing these factors can inform robustness analysis, can indicate where engineering resources are most needed, and could even affect operations. The contributions of this paper include the introduction of novel sensitivity measures, such as estimating success probability, and a technique for determining whether pairs of factors are interacting dependently or independently. The tool found that input variables such as moments, mass, thrust dispersions, and date of launch were significant factors for the success of various requirements. Examples are shown in this paper, as well as a summary and physics discussion of the EFT-1 driving factors that the tool found.
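One of the sensitivity measures mentioned, estimating success probability conditioned on an input variable, can be sketched as follows. The dispersed input, the pass/fail rule, and the threshold are synthetic stand-ins for Monte Carlo simulation output, not the CFT's actual data or algorithm.

```python
import random

random.seed(0)
# synthetic Monte Carlo runs: one dispersed input per run, and a pass/fail
# outcome that (by construction here) tends to fail when the input is high
samples = [random.uniform(0.0, 1.0) for _ in range(1000)]
results = [(x, x + random.gauss(0.0, 0.2) < 0.8) for x in samples]

# split the runs at the median of the input and compare success rates
median = sorted(x for x, _ in results)[len(results) // 2]
low  = [ok for x, ok in results if x <= median]
high = [ok for x, ok in results if x >  median]
p_low, p_high = sum(low) / len(low), sum(high) / len(high)
# a large gap |p_low - p_high| flags the input as a critical factor
```

Extending the same conditioning to pairs of inputs is what lets a tool like this distinguish independently acting factors from dependently interacting ones.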
Daoud, Salima; Chakroun-Feki, Nozha; Sellami, Afifa; Ammar-Keskes, Leila; Rebai, Tarek
2016-01-01
Semen analysis is a key part of male infertility investigation. The necessity of quality management implementation in the andrology laboratory has been recognized in order to ensure the reliability of its results. The aim of this study was to evaluate intra- and inter-individual variability in the assessment of semen parameters in our laboratory through a quality control programme. Four participants from the laboratory with different experience levels have participated in this study. Semen samples of varying quality were assessed for sperm motility, concentration and morphology and the results were used to evaluate inter-participant variability. In addition, replicates of each semen sample were analyzed to determine intra-individual variability for semen parameters analysis. The average values of inter-participant coefficients of variation for sperm motility, concentration and morphology were 12.8%, 19.8% and 48.9% respectively. The mean intra-participant coefficients of variation were, respectively, 6.9%, 12.3% and 42.7% for sperm motility, concentration and morphology. Despite some random errors of under- or overestimation, the overall results remained within the limits of acceptability for all participants. Sperm morphology assessment was particularly influenced by the participant's level of experience. The present data emphasize the need for appropriate training of the laboratory staff and for regular participation in internal quality control programmes in order to improve the reliability of laboratory results.
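The variability figures quoted above are coefficients of variation (CV = standard deviation / mean, expressed in percent). A minimal computation, on hypothetical motility readings from four participants assessing the same sample:

```python
import statistics

# hypothetical sperm motility readings (%) from four participants
readings = [52.0, 58.0, 49.0, 61.0]
cv = 100.0 * statistics.stdev(readings) / statistics.mean(readings)
# cv is about 10%, on the order of the reported inter-participant
# motility CV of 12.8%
```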
NASA Technical Reports Server (NTRS)
Reichle, Rolf H.; De Lannoy, Gabrielle J. M.; Forman, Barton A.; Draper, Clara S.; Liu, Qing
2013-01-01
A land data assimilation system (LDAS) can merge satellite observations (or retrievals) of land surface hydrological conditions, including soil moisture, snow, and terrestrial water storage (TWS), into a numerical model of land surface processes. In theory, the output from such a system is superior to estimates based on the observations or the model alone, thereby enhancing our ability to understand, monitor, and predict key elements of the terrestrial water cycle. In practice, however, satellite observations do not correspond directly to the water cycle variables of interest. The present paper addresses various aspects of this seeming mismatch using examples drawn from recent research with the ensemble-based NASA GEOS-5 LDAS. These aspects include (1) the assimilation of coarse-scale observations into higher-resolution land surface models, (2) the partitioning of satellite observations (such as TWS retrievals) into their constituent water cycle components, (3) the forward modeling of microwave brightness temperatures over land for radiance-based soil moisture and snow assimilation, and (4) the selection of the most relevant types of observations for the analysis of a specific water cycle variable that is not observed (such as root zone soil moisture). The solution to these challenges involves the careful construction of an observation operator that maps from the land surface model variables of interest to the space of the assimilated observations.
PARALLEL PERTURBATION MODEL FOR CYCLE TO CYCLE VARIABILITY PPM4CCV
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ameen, Muhsin Mohammed; Som, Sibendu
This code is a Fortran 90 implementation of the parallel perturbation model for computing cyclic variability in spark ignition (SI) engines. Cycle-to-cycle variability (CCV) is known to be detrimental to SI engine operation, resulting in partial burn and knock and in an overall reduction in engine reliability. Numerical prediction of CCV in SI engines is extremely challenging for two key reasons: (i) high-fidelity methods such as large eddy simulation (LES) are required to accurately capture the in-cylinder turbulent flow field, and (ii) CCV is experienced over long timescales, so the simulations need to be performed for hundreds of consecutive cycles. In the new technique, the strategy is to perform multiple parallel simulations, each of which encompasses 2-3 cycles, by effectively perturbing simulation parameters such as the initial and boundary conditions. The PPM4CCV code is a pre-processing code and can be coupled with any engine CFD code. PPM4CCV was coupled with the Converge CFD code, and a tenfold speedup over conventional multi-cycle LES was demonstrated in predicting the CCV for a motored engine. The model has also recently been applied to fired engines, including port fuel injected (PFI) and direct injection spark ignition engines, and the preliminary results are very encouraging.
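The core pre-processing idea, perturbing nominal initial/boundary conditions to spawn many short parallel cases, can be sketched as follows. The parameter names and the 1% perturbation level are illustrative assumptions, not PPM4CCV's actual inputs:

```python
import random

def perturbed_cases(base, rel_sigma, n_cases, seed=42):
    """Generate n_cases perturbed parameter sets around nominal values.

    Mimics the PPM4CCV strategy of launching many short parallel
    simulations, each with slightly perturbed initial/boundary
    conditions, instead of one long consecutive-cycle LES.
    """
    rng = random.Random(seed)
    return [{k: v * (1.0 + rng.gauss(0.0, rel_sigma)) for k, v in base.items()}
            for _ in range(n_cases)]

# Hypothetical nominal conditions; each case would seed one 2-3 cycle LES run.
nominal = {"intake_pressure_kpa": 95.0, "swirl_ratio": 1.3, "wall_temp_k": 400.0}
ensemble = perturbed_cases(nominal, rel_sigma=0.01, n_cases=8)
print(len(ensemble))   # → 8
```

The statistics of the quantity of interest (e.g., peak pressure) across the ensemble then stand in for the cycle-to-cycle statistics of a single long run.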
Enhanced Biennial Variability in the Pacific due to Atlantic Capacitor Effect after the Early 1990s
NASA Astrophysics Data System (ADS)
WANG, L.; Yu, J. Y.; Paek, H.
2016-12-01
The El Niño-Southern Oscillation (ENSO) and Pacific subtropical highs (PSHs) have major impacts on social and ecological systems through their influences on severe natural hazards, including tropical storms, coastal erosion, droughts and floods. The ability to forecast ENSO and PSHs requires an understanding of the underlying physical mechanisms that drive their variability. Here we present an Atlantic capacitor effect mechanism to suggest the Atlantic as a key pacemaker of the biennial variability in the Pacific, including ENSO and PSHs, in recent decades, whereas the pacemaker was previously thought to lie mainly within the Pacific or Indian Oceans. The "charging" (i.e., ENSO imprinting the North Tropical Atlantic (NTA) sea surface temperature (SST) via an atmospheric bridge mechanism) and "discharging" (i.e., the NTA SST triggering the following ENSO via a subtropical teleconnection mechanism) processes work alternately, generating the biennial rhythmic changes in the Pacific. After the early 1990s, the positive phase of the Atlantic Multidecadal Oscillation and global warming provide more favorable background states over the NTA that enable the Atlantic capacitor effect to operate more efficiently, giving rise to enhanced biennial variability in the Pacific, which may increase the occurrence frequency of severe natural hazard events. The results highlight the increasingly important role of Atlantic-Pacific coupling as a pacemaker of the ENSO cycle in recent decades.
LI, ZHILIN; JI, HAIFENG; CHEN, XIAOHONG
2016-01-01
A new augmented method is proposed for elliptic interface problems with a piecewise variable coefficient that has a finite jump across a smooth interface. The main motivation is to obtain not only a second order accurate solution but also a second order accurate gradient from each side of the interface. The key to the new method is to introduce the jump in the normal derivative of the solution as an augmented variable and to re-write the interface problem as a new PDE consisting of a leading Laplacian operator plus lower order derivative terms near the interface. In this way, the leading second order derivative jump relations are independent of the jump in the coefficient, which appears only in the lower order terms after the scaling. An upwind type discretization is used for the finite difference discretization at the irregular grid points near or on the interface so that the resulting coefficient matrix is an M-matrix. A multigrid solver is used to solve the linear system of equations, and the GMRES iterative method is used to solve for the augmented variable. Second order convergence for the solution and the gradient from each side of the interface is also proved in this paper. Numerical examples for general elliptic interface problems confirm the theoretical analysis and the efficiency of the new method. PMID:28983130
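As a toy analogue of such an interface problem, the 1D equation (beta u')' = 0 with a piecewise-constant coefficient jump can be solved by a conservative finite-difference scheme with edge coefficients; because the exact solution is piecewise linear, the scheme reproduces it to machine precision when the interface falls on a grid node. This is a simplified 1D illustration, not the paper's 2D augmented immersed interface method:

```python
def solve_interface_1d(beta1, beta2, alpha, n):
    """Solve (beta u')' = 0 on (0,1), u(0)=0, u(1)=1, with a coefficient
    jump at x = alpha, using a conservative finite-difference scheme with
    edge (cell-midpoint) coefficients and a Thomas tridiagonal solve."""
    h = 1.0 / n
    beta_edge = [beta1 if (i + 0.5) * h < alpha else beta2 for i in range(n)]
    # Tridiagonal system for interior unknowns u_1..u_{n-1}.
    a = [-beta_edge[i] for i in range(1, n - 1)]                  # sub-diagonal
    b = [beta_edge[i] + beta_edge[i + 1] for i in range(n - 1)]   # diagonal
    c = [-beta_edge[i + 1] for i in range(n - 2)]                 # super-diagonal
    d = [0.0] * (n - 1)
    d[-1] = beta_edge[n - 1] * 1.0        # boundary contribution from u(1) = 1
    for i in range(1, n - 1):             # forward elimination
        w = a[i - 1] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    u = [0.0] * (n - 1)
    u[-1] = d[-1] / b[-1]
    for i in range(n - 3, -1, -1):        # back substitution
        u[i] = (d[i] - c[i] * u[i + 1]) / b[i]
    return [0.0] + u + [1.0]

def exact(x, beta1, beta2, alpha):
    """Piecewise-linear exact solution: the flux q = beta*u' is constant."""
    q = 1.0 / (alpha / beta1 + (1 - alpha) / beta2)
    return q * x / beta1 if x <= alpha else q * alpha / beta1 + q * (x - alpha) / beta2

u = solve_interface_1d(2.0, 10.0, 0.5, 40)
err = max(abs(u[i] - exact(i / 40, 2.0, 10.0, 0.5)) for i in range(41))
print(err < 1e-10)   # → True
```

In 2D the interface cuts grid cells arbitrarily, which is exactly why the paper needs the augmented-variable machinery rather than simple edge averaging.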
Natural gas treating with molecular sieves. Pt. 2. Regeneration, economics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harris, T.B.
1972-08-01
Regeneration considerations are often the key to successful and economical application of molecular sieves for natural gas sweetening. In effect, molecular sieves remove the sulfur compounds from the feed stream and concentrate them into a smaller regeneration gas stream. Because a molecular sieve natural gas sweetener concentrates the hydrogen sulfide from the feed stream in a smaller regeneration gas stream, the sulfur-rich gas must be subsequently treated or disposed of. Molecular sieve sweeteners afford a high degree of flexibility in operating rates. They have a very high turndown ratio, limited only by the use of product gas for regeneration, which can be utilized to full advantage with a control system that provides variable cycle times. Tabular data provide a range of design conditions for existing molecular sieve natural gas sweeteners. Actual operating experience has shown that, in most cases, the following economic advantages can be realized: (1) investment cost is competitive with alternate forms of gas treating; (2) operating costs of molecular sieve units are generally lower; (3) the value of carbon dioxide left in natural gas can lead to a considerable operating credit; and (4) the incremental costs of expansion to an existing plant are normally much less. (24 refs.)
An Integrated Assessment of Location-Dependent Scaling for Microalgae Biofuel Production Facilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coleman, Andre M.; Abodeely, Jared; Skaggs, Richard
Successful development of a large-scale microalgae-based biofuels industry requires comprehensive analysis and understanding of the feedstock supply chain—from facility siting/design through processing/upgrading of the feedstock to a fuel product. The evolution from pilot-scale production facilities to energy-scale operations presents many multi-disciplinary challenges, including a sustainable supply of water and nutrients, operational and infrastructure logistics, and economic competitiveness with petroleum-based fuels. These challenges are addressed in part by applying the Integrated Assessment Framework (IAF)—an integrated multi-scale modeling, analysis, and data management suite—to address key issues in developing and operating an open-pond facility by analyzing how variability and uncertainty in space and time affect algal feedstock production rates, and determining the site-specific "optimum" facility scale to minimize capital and operational expenses. This approach explicitly and systematically assesses the interdependence of biofuel production potential, associated resource requirements, and production system design trade-offs. The IAF was applied to a set of sites previously identified as having the potential to cumulatively produce 5 billion gallons/year in the southeastern U.S., and results indicate costs can be reduced by selecting the most effective processing technology pathway and scaling downstream processing capabilities to fit site-specific growing conditions, available resources, and algal strains.
Bustos, Alejandro; Rubio, Higinio; Castejón, Cristina; García-Prada, Juan Carlos
2018-03-06
Efficient maintenance is a key consideration in railway transport systems, especially in high-speed trains, in order to avoid accidents with catastrophic consequences. In this sense, having a method that allows for the early detection of defects in critical elements, such as the bogie mechanical components, is crucial for increasing the availability of rolling stock and reducing maintenance costs. The main contribution of this work is the proposal of a methodology that, based on classical signal processing techniques, provides a set of parameters for the fast identification of the operating state of a critical mechanical system. With this methodology, the vibratory behaviour of a very complex mechanical system is characterised through variable inputs, which will allow for the detection of possible changes in the mechanical elements. This methodology is applied to a real high-speed train in commercial service, with the aim of studying the vibratory behaviour of the train (specifically, the bogie) before and after a maintenance operation. The results obtained with this methodology demonstrated the usefulness of the new procedure and revealed reductions of between 15% and 45% in the spectral power of selected Intrinsic Mode Functions (IMFs) after the maintenance operation.
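The reported reductions in spectral power can be illustrated with a simple band-power comparison: integrating the periodogram of a vibration signal over a frequency band before and after maintenance. The signal, frequencies, and amplitudes below are synthetic, and a raw signal stands in for the paper's IMFs:

```python
import cmath
import math

def band_power(signal, fs, f_lo, f_hi):
    """Power in [f_lo, f_hi] Hz from a naive DFT (O(N^2), fine for a demo).

    A stand-in for "spectral power of selected IMFs": here we simply
    integrate the periodogram of a raw signal over the chosen band.
    """
    n = len(signal)
    power = 0.0
    for k in range(n // 2):
        if f_lo <= k * fs / n <= f_hi:
            x = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))
            power += abs(x) ** 2 / n ** 2
    return power

fs, n = 1000.0, 1000
t = [i / fs for i in range(n)]
before = [1.0 * math.sin(2 * math.pi * 120 * ti) for ti in t]   # hypothetical defect tone
after = [0.7 * math.sin(2 * math.pi * 120 * ti) for ti in t]    # same tone after maintenance
drop = 1.0 - band_power(after, fs, 100, 140) / band_power(before, fs, 100, 140)
print(round(drop, 2))   # → 0.51
```

Power scales with the square of amplitude, so a 30% amplitude reduction shows up as a 51% drop in band power; the paper's 15-45% figures refer to IMF power, not raw-signal power.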
EMD-Based Methodology for the Identification of a High-Speed Train Running in a Gear Operating State
García-Prada, Juan Carlos
2018-01-01
Efficient maintenance is a key consideration in railway transport systems, especially in high-speed trains, in order to avoid accidents with catastrophic consequences. In this sense, having a method that allows for the early detection of defects in critical elements, such as the bogie mechanical components, is crucial for increasing the availability of rolling stock and reducing maintenance costs. The main contribution of this work is the proposal of a methodology that, based on classical signal processing techniques, provides a set of parameters for the fast identification of the operating state of a critical mechanical system. With this methodology, the vibratory behaviour of a very complex mechanical system is characterised through variable inputs, which will allow for the detection of possible changes in the mechanical elements. This methodology is applied to a real high-speed train in commercial service, with the aim of studying the vibratory behaviour of the train (specifically, the bogie) before and after a maintenance operation. The results obtained with this methodology demonstrated the usefulness of the new procedure and revealed reductions of between 15% and 45% in the spectral power of selected Intrinsic Mode Functions (IMFs) after the maintenance operation. PMID:29509690
NASA Astrophysics Data System (ADS)
Ashe, E.; Toth, L. T.; Cheng, H.; Edwards, R. L.; Richey, J. N.
2016-12-01
The oceanic passage between the Florida Keys and Cuba, known as the Straits of Florida, provides a critical connection between the tropics and northern Atlantic. Changes in the character of water masses transported through this region may ultimately have important impacts on high-latitude climate variability. Although recent studies have documented significant changes in the density of regional surface waters over millennial timescales, little is known about the contribution of local- to regional-scale changes in circulation to surface-water variability. Local variability in the radiocarbon age, ΔR, of surface waters can be used to trace changes in local water-column mixing and/or changes in regional source water over a variety of spatial and temporal scales. We reconstructed "snapshots" of ΔR variability across the Florida Keys reef tract during the last 10,000 years by dating 68 unaltered corals collected from Holocene reef cores with both U-series and radiocarbon techniques. We combined the snapshots of ΔR into a semi-empirical model to develop a robust statistical reconstruction of millennial-scale variability in ΔR on the Florida Keys reef tract. Our model demonstrates that ΔR varied significantly during the Holocene, with relatively high values during the early Holocene and around 3000 years BP and relatively low values around 7000 years BP and at present. We compare the trends in ΔR to existing paleoceanographic reconstructions to evaluate the relative contribution of local upwelling versus changes in source water to the region as a whole in driving local radiocarbon variability, and discuss the importance of these results to our understanding of regional-scale oceanographic and climatic variability during the Holocene. We also discuss the implications of our results for radiocarbon dating of marine samples from south Florida and present a model of ΔR versus 14C age that can be used to improve the accuracy of radiocarbon calibrations from this region.
Experimental study on discretely modulated continuous-variable quantum key distribution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shen Yong; Zou Hongxin; Chen Pingxing
2010-08-15
We present a discretely modulated continuous-variable quantum key distribution system in free space using strong coherent states. The amplitude noise in the laser source is suppressed to the shot-noise limit by using a mode cleaner combined with a frequency shift technique. It is also proven that the phase noise in the source has no impact on the final secret key rate. In order to increase the encoding rate, we use broadband homodyne detectors and the no-switching protocol. In a realistic model, we establish a secret key rate of 46.8 kbits/s against collective attacks at an encoding rate of 10 MHz for a 90% channel loss when the modulation variance is optimal.
Long-distance continuous-variable quantum key distribution with a Gaussian modulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jouguet, Paul; SeQureNet, 23 avenue d'Italie, F-75013 Paris; Kunz-Jacques, Sebastien
2011-12-15
We designed high-efficiency error correcting codes allowing us to extract an errorless secret key in a continuous-variable quantum key distribution (CVQKD) protocol using a Gaussian modulation of coherent states and homodyne detection. These codes are available for a wide range of signal-to-noise ratios on an additive white Gaussian noise channel with a binary modulation and can be combined with a multidimensional reconciliation method proven secure against arbitrary collective attacks. This improved reconciliation procedure considerably extends the secure range of CVQKD with a Gaussian modulation, giving a secret key rate of about 10^-3 bit per pulse at a distance of 120 km for reasonable physical parameters.
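The role of reconciliation efficiency in such a key rate can be sketched with the standard bookkeeping K = beta * I_AB - chi_BE, where I_AB is the Alice-Bob mutual information and chi_BE bounds Eve's information. The sketch below supplies chi_BE as a placeholder number; computing it from the channel covariance matrix is the hard part and is not reproduced here, and the numbers are illustrative, not the paper's 120 km parameters:

```python
import math

def homodyne_mutual_info(snr):
    """Shannon mutual information (bits/pulse) of a Gaussian channel
    read out by homodyne detection: I_AB = 0.5 * log2(1 + SNR)."""
    return 0.5 * math.log2(1.0 + snr)

def secret_key_fraction(snr, beta, chi_be):
    """Simplified Devetak-Winter-style rate: K = beta * I_AB - chi_BE.
    beta is the reconciliation efficiency; chi_be (Eve's Holevo bound)
    is taken as a given placeholder in this sketch."""
    return beta * homodyne_mutual_info(snr) - chi_be

# Illustrative numbers: at low SNR, a few percent of beta decides K > 0 or not.
print(round(secret_key_fraction(snr=1.0, beta=0.95, chi_be=0.40), 3))   # → 0.075
```

This is why the paper's high-efficiency codes matter: at long distance the SNR is tiny, and the margin beta * I_AB - chi_BE vanishes unless beta stays close to 1.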
NASA Astrophysics Data System (ADS)
Swami, D.; Parthasarathy, D.; Dave, P.
2016-12-01
A key objective of the ongoing research is to understand the risk and vulnerability of agriculture and farming communities with respect to multiple climate change attributes, particularly monsoon variability and hydrology, such as groundwater availability. Climate variability has always been a feature affecting Indian agriculture, but the nature and characteristics of this variability are not well understood. Indian monsoon patterns are highly variable, and most studies focus on larger domains such as Central India or the western coast (Ghosh et al., 2009), but district-level analysis is missing; i.e., the linkage between agriculture and climate variables at finer scales has not been investigated comprehensively. For example, the Eastern Vidarbha region in Maharashtra is considered one of the most agriculturally sensitive regions in India, where every year a large number of farmers commit suicide. The main reasons for the large number of suicides are climate-related stressors such as droughts, hail storms, and monsoon variability, aggravated by poor socio-economic conditions. The present study explores the areas in the Vidarbha region of Maharashtra where farmers and crop productivity, specifically of cotton and sorghum, are highly vulnerable to monsoon variability and to hydrological and socio-economic variables, which are further modelled to determine the factors contributing most to crop and farmer vulnerability. The analysis of primary and secondary data will aid decision making regarding field operations, such as the timing of sowing, harvesting and irrigation, by optimizing the cropping pattern with respect to climatic, hydrological and socio-economic variables. It also suggests adaptation strategies to farmers regarding cropping and water harvesting practices, optimized dates and timings for harvesting and sowing, and the water and nutrient requirements of particular crops in each specific region.
The primary and secondary analyses presented here can be highly beneficial to farmers and policy makers when formulating agricultural policies related to climate change.
NASA Astrophysics Data System (ADS)
Zurita-Milla, R.; Laurent, V. C. E.; van Gijsel, J. A. E.
2015-12-01
Monitoring biophysical and biochemical vegetation variables in space and time is key to understand the earth system. Operational approaches using remote sensing imagery rely on the inversion of radiative transfer models, which describe the interactions between light and vegetation canopies. The inversion required to estimate vegetation variables is, however, an ill-posed problem because of variable compensation effects that can cause different combinations of soil and canopy variables to yield extremely similar spectral responses. In this contribution, we present a novel approach to visualise the ill-posed problem using self-organizing maps (SOM), which are a type of unsupervised neural network. The approach is demonstrated with simulations for Sentinel-2 data (13 bands) made with the Soil-Leaf-Canopy (SLC) radiative transfer model. A look-up table of 100,000 entries was built by randomly sampling 14 SLC model input variables between their minimum and maximum allowed values while using both a dark and a bright soil. The Sentinel-2 spectral simulations were used to train a SOM of 200 × 125 neurons. The training projected similar spectral signatures onto either the same, or contiguous, neuron(s). Tracing back the inputs that generated each spectral signature, we created a 200 × 125 map for each of the SLC variables. The lack of spatial patterns and the variability in these maps indicate ill-posed situations, where similar spectral signatures correspond to different canopy variables. For Sentinel-2, our results showed that leaf area index, crown cover and leaf chlorophyll, water and brown pigment content are less confused in the inversion than variables with noisier maps like fraction of brown canopy area, leaf dry matter content and the PROSPECT mesophyll parameter. 
This study supports both educational and ongoing research activities on inversion algorithms and may be useful for evaluating the uncertainties of retrieved canopy biophysical and biochemical state variables.
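The SOM machinery itself can be sketched at toy scale. The following minimal from-scratch self-organizing map (far smaller than the 200 × 125 grid over 13-band Sentinel-2 spectra used in the study) shows the basic property being exploited: distinct inputs project to distinct map regions, so input variables can be traced back neuron by neuron. All sizes and data are illustrative:

```python
import math
import random

def train_som(data, rows, cols, iters=2000, seed=0):
    """Minimal SOM: decaying learning rate and shrinking Gaussian
    neighborhood around the best-matching unit (BMU)."""
    rng = random.Random(seed)
    dim = len(data[0])
    w = [[[rng.random() for _ in range(dim)] for _ in range(cols)]
         for _ in range(rows)]
    for t in range(iters):
        lr = 0.5 * (1.0 - t / iters)
        sigma = max(0.5, (rows / 2) * (1.0 - t / iters))
        x = rng.choice(data)
        bi, bj = winner(w, x)
        for i in range(rows):
            for j in range(cols):
                h = math.exp(-((i - bi) ** 2 + (j - bj) ** 2) / (2 * sigma ** 2))
                for k in range(dim):
                    w[i][j][k] += lr * h * (x[k] - w[i][j][k])
    return w

def winner(w, x):
    """Best-matching unit: the neuron whose weight vector is closest to x."""
    best, bij = float("inf"), (0, 0)
    for i, row in enumerate(w):
        for j, wv in enumerate(row):
            d = sum((a - b) ** 2 for a, b in zip(wv, x))
            if d < best:
                best, bij = d, (i, j)
    return bij

# Two well-separated synthetic "spectral" clusters land in different map regions.
rng = random.Random(1)
cluster_a = [[rng.gauss(0.2, 0.02) for _ in range(4)] for _ in range(50)]
cluster_b = [[rng.gauss(0.8, 0.02) for _ in range(4)] for _ in range(50)]
w = train_som(cluster_a + cluster_b, rows=6, cols=6)
print(winner(w, cluster_a[0]) != winner(w, cluster_b[0]))
```

In the paper's usage, each neuron is then colored by the model-input values of the signatures it attracted; noisy, patternless maps flag variables that the inversion confuses.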
Mining key elements for severe convection prediction based on CNN
NASA Astrophysics Data System (ADS)
Liu, Ming; Pan, Ning; Zhang, Changan; Sha, Hongzhou; Zhang, Bolei; Liu, Liang; Zhang, Meng
2017-04-01
Severe convective weather is a class of weather disasters accompanied by heavy rainfall, gusty winds, hail, etc. Along with recent developments in remote sensing and numerical modeling, high-volume, long-term observational and modeling data have accumulated, capturing massive numbers of severe convective events over particular areas and time periods. With those high-volume, high-variety weather data, most existing studies and methods carry out dynamical analysis, cause analysis, rule discovery, and prediction enhancement by utilizing the governing equations of fluid dynamics and thermodynamics. In this study, a key-element mining method is proposed for severe convection prediction based on convolutional neural networks (CNN). It aims to identify the key areas and key elements from huge amounts of historical weather data, including conventional measurements, weather radar and satellite data, as well as numerical modeling and/or reanalysis data. In this manner, the machine-learning based method could help human forecasters in their decision-making on operational forecasts of severe convective weather by extracting key information from real-time and historical weather big data. The method first utilizes computer vision technology to complete the preprocessing of the meteorological variables. Then, it utilizes information such as radar maps and expert knowledge to annotate all images automatically. Finally, using a CNN model, it can analyze and evaluate each weather element (e.g., particular variables, patterns, features, etc.), identify the key areas of those critical weather elements, and help forecasters quickly screen out the key elements from huge amounts of observation data under current weather conditions.
Based on rich weather measurement and model data (up to 10 years) over Fujian province in China, where severe convective weather is very active during the summer months, experimental tests were conducted with the new machine-learning method via CNN models. Based on the analysis of those experimental results and case studies, the proposed method has the following benefits for severe convection prediction: (1) it helps forecasters narrow down the scope of analysis and saves lead time for high-impact severe convection; (2) it processes huge amounts of weather big data with machine learning methods rather than relying on traditional theory and knowledge alone, providing a new way to explore and quantify severe convective weather; and (3) it provides machine-learning based end-to-end analysis and processing with considerable scalability in data volume, accomplishing the analysis without human intervention.
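The basic feature-extraction operation a CNN stacks, discrete 2D convolution over a gridded field, can be sketched as follows. The toy "reflectivity" grid and the Sobel kernel are illustrative only; they are not the paper's architecture or data:

```python
def conv2d(image, kernel):
    """Valid-mode 2D convolution (cross-correlation, as in CNNs): the
    core operation used to extract spatial features from gridded
    weather fields such as radar reflectivity."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(image[i + a][j + b] * kernel[a][b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out

# Toy reflectivity field with a sharp "convective cell" edge at column 3.
field = [[0, 0, 0, 9, 9, 9] for _ in range(4)]
sobel_x = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal-gradient kernel
edges = conv2d(field, sobel_x)
print(max(max(r) for r in edges))   # → 36, strongest response at the cell boundary
```

A trained CNN learns such kernels from annotated images instead of using hand-chosen ones, which is how the method localizes the "key areas" described above.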
Continuous-variable quantum key distribution with Gaussian source noise
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shen Yujie; Peng Xiang; Yang Jian
2011-05-15
Source noise affects the security of continuous-variable quantum key distribution (CV QKD) and is difficult to analyze. We propose a model to characterize Gaussian source noise through introducing a neutral party (Fred) who induces the noise with a general unitary transformation. Without knowing Fred's exact state, we derive the security bounds for both reverse and direct reconciliations and show that the bound for reverse reconciliation is tight.
NASA Technical Reports Server (NTRS)
Gallo, C.; Kasuba, R.; Pintz, A.; Spring, J.
1986-01-01
The dynamic analysis of a horizontal axis fixed pitch wind turbine generator (WTG) rated at 56 kW is discussed. A mechanical Continuously Variable Transmission (CVT) was incorporated in the drive train to provide variable speed operation capability. One goal of the dynamic analysis was to determine if variable speed operation, by means of a mechanical CVT, is capable of capturing the transient power in the WTG/wind environment. Another goal was to determine the extent of power regulation possible with CVT operation.
Ozmutlu, H. Cenk
2014-01-01
We developed mixed integer programming (MIP) models and hybrid genetic-local search algorithms for the scheduling problem of unrelated parallel machines with job-sequence- and machine-dependent setup times and with the job splitting property. The first contribution of this paper is the introduction of novel algorithms which perform splitting and scheduling simultaneously with a variable number of subjobs. We propose a simple chromosome structure constituted by random key numbers in the hybrid genetic-local search algorithm (GAspLA). Random key numbers are used frequently in genetic algorithms, but they create additional difficulty when hybrid factors in local search are implemented. We developed algorithms that adapt the results of local search into the genetic algorithm with minimal relocation of the genes' random key numbers. This is the second contribution of the paper. The third contribution is three new MIP models which perform splitting and scheduling simultaneously. The fourth contribution is the implementation of GAspLAMIP, which lets us verify the optimality of GAspLA for the studied combinations. The proposed methods are tested on a set of problems taken from the literature, and the results validate the effectiveness of the proposed algorithms. PMID:24977204
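The random-key representation mentioned above can be sketched: each gene is a real number, and a schedule is recovered by sorting jobs by their keys, so any crossover or mutation of real vectors still decodes to a feasible sequence. GAspLA's job-splitting and machine-assignment layers are not reproduced in this sketch:

```python
def decode_random_keys(keys):
    """Decode a random-key chromosome into a job sequence: jobs are
    ordered by their key values, so every real-valued chromosome is a
    valid permutation. This is the classic random-key decoding step."""
    return sorted(range(len(keys)), key=lambda j: keys[j])

# A 5-job chromosome; the smallest key is scheduled first.
chromosome = [0.62, 0.11, 0.85, 0.40, 0.09]
print(decode_random_keys(chromosome))   # → [4, 1, 3, 0, 2]
```

The difficulty the paper addresses follows directly from this encoding: when a local search reorders the decoded schedule, the keys must be relocated so the chromosome still decodes to the improved sequence, ideally with as few gene moves as possible.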
Water clarity in the Florida Keys, USA, as observed from space (1984-2002)
NASA Astrophysics Data System (ADS)
Palandro, D. A.; Hu, C.; Andrefouet, S.; Muller-Karger, F. E.; Hallock, P.
2007-12-01
Landsat TM and ETM+ satellite data were used to derive the diffuse attenuation coefficient (Kd, m-1), a measure of water clarity, for 29 sites throughout the Florida Keys Reef Tract. A total of 28 individual Landsat images between 1984 and 2002 were used, with imagery gathered every two years for spring seasons and every six years for fall seasons. Useful information was obtained from Landsat bands 1 (blue) and 2 (green), except when sites were covered by clouds or showed turbid water. Landsat band 3 (red) provided no consistent data due to the high absorption of red light by water. Because image sampling represented only one or two samples per year on specific days, and because water turbidity may change over short time scales, it was not possible to assess temporal trends at the sites with the Landsat data. Kd values in band 1 were higher in the spring (mean spring = 0.034 m-1, mean fall = 0.031 m-1) and in band 2 were higher in the fall (mean spring = 0.056 m-1, mean fall = 0.058 m-1), but the differences were not statistically significant. Spatial variability was high between sites and between regions (Upper, Middle and Lower Keys), with band 1 ranging from 0.019 m-1 to 0.060 m-1 and band 2 from 0.036 m-1 to 0.076 m-1. The highest Kd values were found in the Upper Keys, followed by the Middle Keys and Lower Keys, respectively. This result must be taken in context, however: two Middle Keys sites were found to be inconsistent due to high turbidity, which obscured the benthos and violated the algorithm's assumption of a visible seafloor. If all Middle Keys data were valid, it is likely that this region would have the highest Kd values for both bands. The Landsat-derived Kd values, and their inherent variability, may be influenced by the dominant water mass associated with each Florida Keys region, as well as by localized oceanic variables. 
The methodology used here may be applied to other reef areas and used with satellites that offer higher temporal resolution to assess temporal change and variability.
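For intuition, the diffuse attenuation coefficient Kd describes the exponential decay of light with depth; a minimal two-depth estimate is sketched below. This is a simplified in-water analogue, not the paper's bottom-reflectance Landsat algorithm, and the numbers are synthetic:

```python
import math

def kd_from_two_depths(e1, z1, e2, z2):
    """Diffuse attenuation coefficient (m^-1) from downwelling irradiance
    measured at two depths, via Beer-Lambert decay E(z) = E(0)*exp(-Kd*z)."""
    return math.log(e1 / e2) / (z2 - z1)

# Synthetic profile with Kd = 0.034 m^-1 (a typical Upper Keys band-1 value).
e0, kd_true = 100.0, 0.034
e_5m = e0 * math.exp(-kd_true * 5.0)
e_15m = e0 * math.exp(-kd_true * 15.0)
print(round(kd_from_two_depths(e_5m, 5.0, e_15m, 15.0), 3))   # → 0.034
```

The satellite method inverts the analogous relation for bottom-reflected radiance over known water depths, which is why a visible seafloor is a hard requirement of the algorithm.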
NASA Technical Reports Server (NTRS)
Buden, D.
1991-01-01
Topics dealing with nuclear safety are addressed which include the following: general safety requirements; safety design requirements; terrestrial safety; SP-100 Flight System key safety requirements; potential mission accidents and hazards; key safety features; ground operations; launch operations; flight operations; disposal; safety concerns; licensing; the nuclear engine for rocket vehicle application (NERVA) design philosophy; the NERVA flight safety program; and the NERVA safety plan.
14 CFR 437.31 - Verification of operating area containment and key flight-safety event limitations.
Code of Federal Regulations, 2010 CFR
2010-01-01
...(a) to contain its reusable suborbital rocket's instantaneous impact point within an operating area... limits on the ability of the reusable suborbital rocket to leave the operating area; or (2) Abort... requirements of § 437.59 to conduct any key flight-safety event so that the reusable suborbital rocket's...
14 CFR 437.31 - Verification of operating area containment and key flight-safety event limitations.
Code of Federal Regulations, 2013 CFR
2013-01-01
...(a) to contain its reusable suborbital rocket's instantaneous impact point within an operating area... limits on the ability of the reusable suborbital rocket to leave the operating area; or (2) Abort... requirements of § 437.59 to conduct any key flight-safety event so that the reusable suborbital rocket's...
14 CFR 437.31 - Verification of operating area containment and key flight-safety event limitations.
Code of Federal Regulations, 2012 CFR
2012-01-01
...(a) to contain its reusable suborbital rocket's instantaneous impact point within an operating area... limits on the ability of the reusable suborbital rocket to leave the operating area; or (2) Abort... requirements of § 437.59 to conduct any key flight-safety event so that the reusable suborbital rocket's...
14 CFR 437.31 - Verification of operating area containment and key flight-safety event limitations.
Code of Federal Regulations, 2011 CFR
2011-01-01
...(a) to contain its reusable suborbital rocket's instantaneous impact point within an operating area... limits on the ability of the reusable suborbital rocket to leave the operating area; or (2) Abort... requirements of § 437.59 to conduct any key flight-safety event so that the reusable suborbital rocket's...
14 CFR 437.31 - Verification of operating area containment and key flight-safety event limitations.
Code of Federal Regulations, 2014 CFR
2014-01-01
...(a) to contain its reusable suborbital rocket's instantaneous impact point within an operating area... limits on the ability of the reusable suborbital rocket to leave the operating area; or (2) Abort... requirements of § 437.59 to conduct any key flight-safety event so that the reusable suborbital rocket's...
Skill assessment of the coupled physical-biogeochemical operational Mediterranean Forecasting System
NASA Astrophysics Data System (ADS)
Cossarini, Gianpiero; Clementi, Emanuela; Salon, Stefano; Grandi, Alessandro; Bolzon, Giorgio; Solidoro, Cosimo
2016-04-01
The Mediterranean Monitoring and Forecasting Centre (Med-MFC) is one of the regional production centres of the European Marine Environment Monitoring Service (CMEMS-Copernicus). Med-MFC operatively manages a suite of numerical model systems (3DVAR-NEMO-WW3 and 3DVAR-OGSTM-BFM) that provides gridded datasets of physical and biogeochemical variables for the Mediterranean marine environment with a horizontal resolution of about 6.5 km. At the present stage, the operational Med-MFC produces ten-day forecast: daily for physical parameters and bi-weekly for biogeochemical variables. The validation of the coupled model system and the estimate of the accuracy of model products are key issues to ensure reliable information to the users and the downstream services. Product quality activities at Med-MFC consist of two levels of validation and skill analysis procedures. Pre-operational qualification activities focus on testing the improvement of the quality of a new release of the model system and relays on past simulation and historical data. Then, near real time (NRT) validation activities aim at the routinely and on-line skill assessment of the model forecast and relays on the NRT available observations. Med-MFC validation framework uses both independent (i.e. Bio-Argo float data, in-situ mooring and vessel data of oxygen, nutrients and chlorophyll, moored buoys, tide-gauges and ADCP of temperature, salinity, sea level and velocity) and semi-independent data (i.e. data already used for assimilation, such as satellite chlorophyll, Satellite SLA and SST and in situ vertical profiles of temperature and salinity from XBT, Argo and Gliders) We give evidence that different variables (e.g. CMEMS-products) can be validated at different levels (i.e. at the forecast level or at the level of model consistency) and at different spatial and temporal scales. 
The fundamental physical parameters temperature, salinity and sea level are routinely validated on a daily, weekly and quarterly basis at regional and sub-regional scales and along specific vertical layers (temperature and salinity), while velocity fields are validated daily against in-situ coastal moorings. Since velocity skill cannot be accurately assessed through coastal measurements at the current model horizontal resolution (~6.5 km), new validation metrics and procedures are under investigation. Chlorophyll is the only biogeochemical variable that can be validated routinely at the temporal and spatial scale of the weekly forecast, while nutrient and oxygen predictions can be validated locally or at sub-basin and seasonal scales. For the other biogeochemical variables (i.e. primary production, carbonate system variables), only the accuracy of the average dynamics and model consistency can be evaluated. Finally, we discuss the limiting factors of the present validation framework, and the quality and extension of the observing system that would be needed to improve the reliability of the physical and biogeochemical Mediterranean forecast services.
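At its core, the routine skill assessment described above reduces to comparing forecast fields against observations with simple statistics such as bias and RMSE. A minimal sketch (the data values are illustrative, not Med-MFC products):

```python
import math

def bias(forecast, obs):
    """Mean error (forecast minus observation)."""
    return sum(f - o for f, o in zip(forecast, obs)) / len(obs)

def rmse(forecast, obs):
    """Root-mean-square error of the forecast against observations."""
    return math.sqrt(sum((f - o) ** 2 for f, o in zip(forecast, obs)) / len(obs))

# Hypothetical SST forecast vs. mooring observations (deg C)
fc = [18.2, 18.5, 19.0, 19.4]
ob = [18.0, 18.6, 18.8, 19.5]
print(round(bias(fc, ob), 3), round(rmse(fc, ob), 3))
```

In an NRT setting these statistics would be accumulated per variable, per sub-region and per vertical layer, as the abstract describes.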
24 CFR 3280.105 - Exit facilities; exterior doors.
Code of Federal Regulations, 2013 CFR
2013-04-01
... opening. (3) Each swinging exterior door other than screen or storm doors shall have a key-operated lock... the use of a key for operation from the inside. (4) All exterior doors, including storm and screen...
Packet communications in satellites with multiple-beam antennas and signal processing
NASA Technical Reports Server (NTRS)
Davies, R.; Chethik, F.; Penick, M.
1980-01-01
A communication satellite with a multiple-beam antenna and onboard signal processing is considered for use in a 'message-switched' data relay system. The signal processor may incorporate demodulation, routing, storage, and remodulation of the data. A system user model is established and key functional elements for the signal processing are identified. With the throughput and delay requirements as the controlled variables, the hardware complexity, operational discipline, occupied bandwidth, and overall user end-to-end cost are estimated for (1) random-access packet switching and (2) reservation-access packet switching. Other aspects of this network (e.g., the adaptability to channel-switched traffic requirements) are examined. For the given requirements and constraints, the reservation system appears to be the most attractive protocol.
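For context on the random-access versus reservation-access trade, the classic throughput limit of slotted-ALOHA-style random access can be sketched as follows. This is the standard textbook result, not a formula taken from this paper:

```python
import math

def slotted_aloha_throughput(g):
    """Expected successful packets per slot at offered load g (Poisson
    arrivals): S = G * exp(-G), peaking at 1/e ~ 0.368 when G = 1."""
    return g * math.exp(-g)

# Reservation access trades this low channel utilization for extra
# round-trip delay spent booking slots.
peak = slotted_aloha_throughput(1.0)
```

The low peak utilization of random access is one reason a reservation protocol can win once traffic and delay requirements are fixed, consistent with the paper's conclusion.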
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Na; Wu, Yu-Ping; Min, Hao
A radio-frequency (RF) source designed for cold atom experiments is presented. The source uses an AD9858, a direct digital synthesizer (DDS), to generate the sine wave directly, up to 400 MHz, with sub-Hz resolution. An amplitude control circuit consisting of a wideband variable-gain amplifier and a high-speed digital-to-analog converter is integrated into the source, capable of 70 dB off-isolation and 4 ns on-off keying. A field-programmable gate array is used to implement a versatile frequency and amplitude co-sweep logic. Owing to the modular design, the RF sources have been used in many cold atom experiments to generate various complicated RF sequences, enriching the operation schemes of cold atoms in ways that cannot be achieved with standard RF source instruments.
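The sub-Hz resolution quoted above follows directly from the DDS tuning-word arithmetic. A sketch of the standard relation for a 32-bit DDS such as the AD9858 (the 1 GHz system clock below is an assumed value, not stated in the abstract):

```python
def ftw(f_out_hz, f_sysclk_hz=1.0e9, bits=32):
    """DDS frequency tuning word: f_out = FTW * f_sysclk / 2**bits."""
    return round(f_out_hz / f_sysclk_hz * 2**bits)

step_hz = 1.0e9 / 2**32   # ~0.23 Hz frequency step, i.e. sub-Hz resolution
word = ftw(80e6)          # tuning word for a hypothetical 80 MHz output
```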
Analysis of aircraft tires via semianalytic finite elements
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.; Kim, Kyun O.; Tanner, John A.
1990-01-01
A computational procedure is presented for the geometrically nonlinear analysis of aircraft tires. The tire was modeled by using a two-dimensional laminated anisotropic shell theory with the effects of variation in material and geometric parameters included. The four key elements of the procedure are: (1) semianalytic finite elements in which the shell variables are represented by Fourier series in the circumferential direction and piecewise polynomials in the meridional direction; (2) a mixed formulation with the fundamental unknowns consisting of strain parameters, stress-resultant parameters, and generalized displacements; (3) multilevel operator splitting to effect successive simplifications, and to uncouple the equations associated with different Fourier harmonics; and (4) multilevel iterative procedures and reduction techniques to generate the response of the shell.
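Element (1), the Fourier representation in the circumferential direction, can be sketched for a single symmetric shell variable as a truncated cosine series (the coefficients below are hypothetical):

```python
import math

def shell_variable(theta, coeffs):
    """Truncated Fourier cosine series in the circumferential angle theta.
    In the semianalytic formulation, each harmonic n uncouples from the
    others, which is what permits the multilevel operator splitting."""
    return sum(a * math.cos(n * theta) for n, a in enumerate(coeffs))

w0 = shell_variable(0.0, [1.0, 0.5, 0.25])   # sum of coefficients at theta = 0
```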
Usability testing and requirements derivation for EMU-compatible electrical connectors
NASA Technical Reports Server (NTRS)
Reaux, Ray A.; Griffin, Thomas J.; Lewis, Ruthan
1989-01-01
On-orbit servicing of payloads is simplified when a spacecraft has been designed for serviceability. A key design criterion for a serviceable spacecraft is standardization of electrical connectors. This paper investigates the effects of extravehicular mobility unit (EMU) glove size, connector size, and connector type on the usability of electrical connectors. An experiment was conducted exploring participants' ability to mate and demate connectors in an evacuated glovebox. Independent variables were two EMU glove sizes, five connector size groups, and seven connector types. Significant differences in performance times and heart rate changes during mate and demate operations were found. Subjective assessments of connectors were collected from participants with a usability questionnaire. The data were used to derive design recommendations for a NASA-recommended EMU-compatible electrical connector.
Mass and power modeling of communication satellites
NASA Technical Reports Server (NTRS)
Price, Kent M.; Pidgeon, David; Tsao, Alex
1991-01-01
Analytic estimating relationships for the mass and power requirements of major satellite subsystems are described. The model for each subsystem is keyed to the performance drivers and system requirements that influence its selection and use. Guidelines are also given for choosing among alternative technologies, which account for other significant variables such as cost, risk, schedule, operations, heritage, and life requirements. These models are intended for application to first-order systems analyses, where resources do not warrant detailed development of a communications system scenario. Given this ground rule, the models are simplified to 'smoothed' representations of reality. Therefore, the user is cautioned that cost, schedule, and risk may be significantly impacted where interpolations are sufficiently different from existing hardware as to warrant development of new devices.
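An analytic estimating relationship of this kind typically takes a power-law form regressed against historical hardware. A sketch with placeholder coefficients (the values are illustrative, not the paper's):

```python
def subsystem_mass_kg(driver_power_w, a=0.15, b=0.9):
    """Hypothetical mass estimating relationship: mass = a * P**b,
    where a and b would be fit to existing satellite subsystem data
    and P is the performance driver (here, power in watts)."""
    return a * driver_power_w ** b

m = subsystem_mass_kg(1000.0)   # first-order estimate for a 1 kW driver
```

Such 'smoothed' curves are exactly why the paper cautions against extrapolating far from the hardware used to fit them.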
Chen, Yi; Huang, Weina; Peng, Bei
2014-01-01
Because of the demands for sustainable and renewable energy, fuel cells have become increasingly popular, particularly the polymer electrolyte fuel cell (PEFC). Among the various components, the cathode plays a key role in the operation of a PEFC. In this study, a quantitative dual-layer cathode model was proposed for determining the optimal parameters that minimize the over-potential difference and improve the efficiency using a newly developed bat swarm algorithm with a variable population embedded in the computational intelligence-aided design. The simulation results were in agreement with previously reported results, suggesting that the proposed technique has potential applications for automating and optimizing the design of PEFCs. PMID:25490761
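A minimal fixed-population bat-style search illustrates the optimization idea behind the design procedure; the paper's variable-population variant and the electrochemical over-potential objective are not reproduced here, and every value below is illustrative:

```python
import random

def bat_search(f, lo, hi, n=15, iters=200, seed=1):
    """Minimal bat-algorithm-style minimizer: bats move toward the current
    best with a random pulse frequency, plus a local random walk around the
    best solution. (Sketch only; not the variable-population variant.)"""
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n)]
    vs = [0.0] * n
    best = min(xs, key=f)
    for _ in range(iters):
        for i in range(n):
            freq = rng.random()                       # pulse frequency in [0, 1]
            vs[i] += (xs[i] - best) * freq
            cand = xs[i] - vs[i]                      # global move toward best
            if rng.random() < 0.5:
                cand = best + 0.05 * rng.gauss(0, 1)  # local walk near best
            cand = min(max(cand, lo), hi)
            if f(cand) < f(xs[i]):                    # greedy acceptance
                xs[i] = cand
            if f(cand) < f(best):
                best = cand
    return best

# Toy objective standing in for the over-potential difference to be minimized
x_opt = bat_search(lambda x: (x - 2.0) ** 2, -5.0, 5.0)
```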
McLean, A P; Blampied, N M
1995-01-01
Behavioral momentum theory relates resistance to change of responding in a multiple-schedule component to the total reinforcement obtained in that component, regardless of how the reinforcers are produced. Four pigeons responded in a series of multiple-schedule conditions in which a variable-interval 40-s schedule arranged reinforcers for pecking in one component and a variable-interval 360-s schedule arranged them in the other. In addition, responses on a second key were reinforced according to variable-interval schedules that were equal in the two components. In different parts of the experiment, responding was disrupted by changing the rate of reinforcement on the second key or by delivering response-independent food during a blackout separating the two components. Consistent with momentum theory, responding on the first key in Part 1 changed more in the component with the lower reinforcement total when it was disrupted by changes in the rate of reinforcement on the second key. However, responding on the second key changed more in the component with the higher reinforcement total. In Parts 2 and 3, responding was disrupted with free food presented during intercomponent blackouts, with extinction (Part 2) or variable-interval 80-s reinforcement (Part 3) arranged on the second key. Here, resistance to change was greater for the component with greater overall reinforcement. Failures of momentum theory to predict short-term differences in resistance to change occurred with disruptors that caused greater change between steady states for the richer component. Consistency of effects across disruptors may yet be found if short-term effects of disruptors are assessed relative to the extent of change observed after prolonged exposure.
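The quantitative core of behavioral momentum theory, against which these resistance-to-change results are tested, is usually written as follows (the standard formulation from the momentum literature, not this paper's notation):

```latex
% Proportional change in response rate B_x under a disruptor of force x,
% relative to baseline rate B_0, in a component with reinforcement rate r:
\log\!\left(\frac{B_x}{B_0}\right) = \frac{-x}{r^{\,b}}
% b > 0 is the sensitivity of resistance to reinforcement rate, so richer
% components (larger r) change proportionally less under the same disruption.
```

The experiment's anomaly is that some disruptors produced larger short-term changes in the richer component, contrary to the sign of this relation.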
NASA Technical Reports Server (NTRS)
Huber, W. C.
1986-01-01
Voice synthesizer tells what key is about to be depressed. Verbal feedback useful for blind operators or where dim light prevents sighted operator from seeing keyboard. Also used where operator is busy observing other things while keying data into control system. Used as training aid for touch typing, and to train blind operators to use both standard and braille keyboards. Concept adapted to such equipment as typewriters, computers, calculators, telephones, cash registers, and on/off controls.
Military Review: Operation Desert Shield/Desert Storm
1991-09-01
... areas of responsibility and objectives to influence the outcome of the battle ... where he could best interact with key operational commanders ... the air component commander for air support, the Saudi Joint Forces Command ... Key to successful operational command was the interaction of the two command ... available news people in the field ... Vietnam ... the operational security/troop safety problem ...
Johannes Breidenbach; Clara Antón-Fernández; Hans Petersson; Ronald E. McRoberts; Rasmus Astrup
2014-01-01
National Forest Inventories (NFIs) provide estimates of forest parameters for national and regional scales. Many key variables of interest, such as biomass and timber volume, cannot be measured directly in the field. Instead, models are used to predict those variables from measurements of other field variables. Therefore, the uncertainty or variability of NFI estimates...
Maltese, Antonino; Capodici, Fulvio; Ciraolo, Giuseppe; La Loggia, Goffredo
2015-03-19
Knowledge of soil water content plays a key role in water management efforts to improve irrigation efficiency. Among the indirect estimation methods of soil water content via Earth Observation data is the triangle method, used to analyze optical and thermal features because these are primarily controlled by water content within the near-surface evaporation layer and root zone in bare and vegetated soils. Although the soil-vegetation-atmosphere transfer theory describes the ongoing processes, theoretical models reveal limits for operational use. When applying simplified empirical formulations, meteorological forcing could be replaced with alternative variables when the above-canopy temperature is unknown, to mitigate the effects of calibration inaccuracies or to account for the temporal admittance of the soil. However, if applied over a limited area, a characterization of both dry and wet edges could not be properly achieved; thus, a multi-temporal analysis can be exploited to include outer extremes in soil water content. A diachronic empirical approach introduces the need to assume a constancy of other meteorological forcing variables that control thermal features. Airborne images were acquired on a Sicilian vineyard during most of an entire irrigation period (fruit-set to ripening stages, vintage 2008), during which in situ soil water content was measured to set up the triangle method. Within this framework, we tested the triangle method by employing alternative thermal forcing. The results were inaccurate when air temperature at airborne acquisition was employed. Sonic and aerodynamic air temperatures confirmed and partially explained the limits of simultaneous meteorological forcing, and the use of proxy variables improved model accuracy. The analysis indicates that high spatial resolution does not necessarily imply higher accuracies.
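A common simplified form of the triangle method interpolates a pixel's surface temperature between the wet and dry edges of the NDVI/LST scatter. A sketch with hypothetical edge fits (this is the standard TVDI formulation, not necessarily the exact variant calibrated in the study):

```python
def tvdi(lst, ndvi, dry_edge, wet_edge):
    """Temperature-Vegetation Dryness Index: the position of a pixel's
    land-surface temperature between the wet and dry edges of the triangle.
    Each edge is a linear fit lst = a + b * ndvi estimated from the scatter;
    0 corresponds to the wet (moist) limit and 1 to the dry limit."""
    t_dry = dry_edge[0] + dry_edge[1] * ndvi
    t_wet = wet_edge[0] + wet_edge[1] * ndvi
    return (lst - t_wet) / (t_dry - t_wet)

# Hypothetical pixel and edges (temperatures in kelvin)
idx = tvdi(lst=310.0, ndvi=0.4, dry_edge=(320.0, -10.0), wet_edge=(295.0, 5.0))
```

The multi-temporal analysis discussed above would pool scenes so that the fitted edges actually span the extremes of soil water content.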
Carvajal, Guido; Roser, David J; Sisson, Scott A; Keegan, Alexandra; Khan, Stuart J
2017-02-01
Chlorine disinfection of biologically treated wastewater is practiced in many locations prior to environmental discharge or beneficial reuse. The effectiveness of chlorine disinfection processes may be influenced by several factors, such as pH, temperature, ionic strength, organic carbon concentration, and suspended solids. We investigated the use of Bayesian multilayer perceptron (BMLP) models as efficient and practical tools for compiling and analysing free chlorine and monochloramine virus disinfection performance as a multivariate problem. Corresponding to their relative susceptibility, Adenovirus 2 was used to assess disinfection by monochloramine and Coxsackievirus B5 was used for free chlorine. A BMLP model was constructed to relate key disinfection conditions (CT, pH, turbidity) to observed log reduction values (LRVs) for these viruses at constant temperature. The models proved valuable for incorporating uncertainty into the chlor(am)ination performance estimates and for interpolating between operating conditions. Various types of queries could be performed with this model, including identification of the target CT for a particular combination of LRV, pH and turbidity. Similarly, it was possible to derive achievable LRVs for combinations of CT, pH and turbidity. These queries yielded probability density functions for the target variable, reflecting the uncertainty in the model parameters and the variability of the input variables. Disinfection efficacy was strongly affected by pH and, to a lesser extent, by turbidity for both disinfectants. Non-linear relationships were observed between pH and target CT and between turbidity and target CT, with compound effects on target CT also evident. This work demonstrates that BMLP models can considerably improve the resolution and understanding of the multivariate relationships between operational parameters and disinfection outcomes for wastewater treatment.
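For orientation, the simplest disinfection model linking CT to log reduction is first-order (Chick-Watson); the BMLP in the study effectively replaces this single coefficient with a learned multivariate relation over CT, pH and turbidity. The coefficient below is purely illustrative:

```python
def lrv_first_order(c_mg_per_l, t_min, k=0.1):
    """Chick-Watson-style log reduction value: LRV = k * C * t.
    The rate constant k (per mg*min/L) is a hypothetical placeholder;
    in practice it varies with pH, turbidity and the virus-disinfectant
    pair, which is exactly what the BMLP captures."""
    return k * c_mg_per_l * t_min

lrv = lrv_first_order(5.0, 10.0)   # CT = 50 mg*min/L
```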
Holmén, Britt A; Qu, Yingge
2004-04-15
The relationships between transient vehicle operation and ultrafine particle emissions are not well-known, especially for low-emission alternative bus technologies such as compressed natural gas (CNG) and diesel buses equipped with particulate filters/traps (TRAP). In this study, real-time particle number concentrations measured on a nominal 5 s average basis using an electrical low pressure impactor (ELPI) for these two bus technologies are compared to that of a baseline catalyst-equipped diesel bus operated on ultralow sulfur fuel (BASE) using dynamometer testing. Particle emissions were consistently 2 orders of magnitude lower for the CNG and TRAP compared to BASE on all driving cycles. Time-resolved total particle numbers were examined in terms of sampling factors identified as affecting the ability of ELPI to quantify the particulate matter number emissions for low-emitting vehicles such as CNG and TRAP as a function of vehicle driving mode. Key factors were instrument sensitivity and dilution ratio, alignment of particle and vehicle operating data, sampling train background particles, and cycle-to-cycle variability due to vehicle, engine, after-treatment, or driver behavior. In-cycle variability on the central business district (CBD) cycle was highest for the TRAP configuration, but this could not be attributed to the ELPI sensitivity issues observed for TRAP-IDLE measurements. Elevated TRAP emissions coincided with low exhaust temperature, suggesting on-road real-world particulate filter performance can be evaluated by monitoring exhaust temperature. Nonunique particle emission maps indicate that measures other than vehicle speed and acceleration are necessary to model disaggregated real-time particle emissions. Further testing on a wide variety of test cycles is needed to evaluate the relative importance of the time history of vehicle operation and the hysteresis of the sampling train/dilution tunnel on ultrafine particle emissions. 
Future studies should monitor particle emissions with high-resolution real-time instruments and account for the operating regime of the vehicle using time-series analysis to develop predictive number emissions models.
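Two of the sampling factors flagged above, dilution ratio and sampling-train background, enter the number-emission calculation directly. A sketch with illustrative values:

```python
def raw_exhaust_concentration(measured_cm3, dilution_ratio, background_cm3=0.0):
    """Back-calculate tailpipe particle number concentration from the diluted
    ELPI measurement, after subtracting sampling-train background particles.
    For low-emitting vehicles (CNG, TRAP), the background term and the
    instrument's sensitivity floor dominate the uncertainty."""
    return (measured_cm3 - background_cm3) * dilution_ratio

c = raw_exhaust_concentration(1.0e4, 20.0, background_cm3=1.0e2)
```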
NASA Astrophysics Data System (ADS)
Schmidt, S. R.; Whitney, M. M.
2016-02-01
There are many challenges to data collection in the ocean sciences. Monetary and time constraints frequently limit the temporal and spatial coverage of the data being collected. Aside from the initial cost of equipment and materials, the work hours required for continued deployment and operation are also a key concern (particularly for measurements made at sea). While the need for precise scientific instruments can be a barrier to effective public participation, the general public's access to coastal waters provides opportunities to overcome these challenges. The key, we found, is to incorporate public participation into the research plan from the outset. We present a program designed to understand the temporal and spatial variability of water properties in western Long Island Sound. During the spring and summer of 2015, temperature, salinity, pressure, and weather data were collected with the help and participation of local businesses and organizations in Norwalk and Westport, CT. Shipboard observations were collected on vessels operated by the U.S. Coast Guard Auxiliary, Harbor Watch at Earthplace, and the Norwalk Seaport Association using scientific apparatus designed at UCONN. Data at fixed locations were collected at several sites through cooperation with the Norwalk Maritime Aquarium, the Saugatuck Rowing Club, Rowayton Market, and the Norwalk Seaport Association. This effort resulted in the successful collection of several months' worth of data covering all parts of the tidal cycle and different river flow regimes. The organization and operation of this volunteer-based network, the resultant dataset, and the possible application of the methods to other locations will be discussed.
Coastal vulnerability assessment using Fuzzy Logic and Bayesian Belief Network approaches
NASA Astrophysics Data System (ADS)
Valentini, Emiliana; Nguyen Xuan, Alessandra; Filipponi, Federico; Taramelli, Andrea
2017-04-01
Natural hazards such as sea surge are threatening low-lying coastal plains. In order to deal with disturbances, a deeper understanding of the benefits deriving from ecosystem services assessment, management and planning can contribute to enhancing the resilience of coastal systems. In this frame, assessing current and future vulnerability is a key concern of many Systems of Systems (SOS: social, ecological, institutional) that deal with several challenges, such as the definition of Essential Variables (EVs) able to synthesize the required information, the assignment of weights to each considered variable, and the selection of a method for combining the relevant variables. It is widely recognized that ecosystems contribute to human wellbeing, and their conservation therefore increases resilience capacities and could play a key role in reducing climate-related risk and thus physical and economic losses. A way to fully exploit ecosystems' potential, i.e. their so-called ecopotential (see the H2020 EU funded project "ECOPOTENTIAL"), is Ecosystem-based Adaptation (EbA): the use of ecosystem services as part of an adaptation strategy. In order to provide insight into the ecosystem services that regulate surge and the variables that influence them, and to make the best use of available data and information (EO products, in situ data and modelling), we propose a multi-component surge vulnerability assessment focusing on coastal sandy dunes as natural barriers. The aim is to combine eco-geomorphological and socio-economic variables with the hazard component on the basis of two approaches: 1) Fuzzy Logic; 2) Bayesian Belief Networks (BBN). The Fuzzy Logic approach is very useful for obtaining spatialized information, and it can easily combine variables coming from different sources.
It provides information on vulnerability moving along-shore and across-shore (beach-dune transect), highlighting the variability of vulnerability conditions in the spatial dimension. According to the results using fuzzy operators, the analysis's greatest weakness is its limited capacity to represent the relations among the considered variables. The BBN approach, based on the definition of conditional probabilities, has allowed determining the trend of vulnerability distributions along-shore, highlighting which parts of the coast are most likely to have higher or lower vulnerability than others. In the BBN analysis, the greatest weakness emerges in the case of arbitrary definition of conditional probabilities (i.e. when there is a lack of information on past hazardous events), because it is then not possible to derive the individual contribution of each variable. In conclusion, the two approaches could be used together to strengthen the multiple components of vulnerability assessment: the BBN as a preliminary assessment providing a coarse description of the vulnerability distribution, and Fuzzy Logic as an extended assessment providing more spatially explicit information.
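The fuzzy-operator combination mentioned above typically reduces to min/max operations over membership degrees in [0, 1]. A toy sketch (the variable names and membership values are hypothetical, not from the study):

```python
def fuzzy_and(*memberships):
    """Intersection of fuzzy membership degrees (minimum operator)."""
    return min(memberships)

def fuzzy_or(*memberships):
    """Union of fuzzy membership degrees (maximum operator)."""
    return max(memberships)

# Hypothetical membership degrees for one shoreline cell
dune_degraded, exposure_high, assets_exposed = 0.7, 0.4, 0.9
vulnerability = fuzzy_or(fuzzy_and(dune_degraded, exposure_high), assets_exposed)
```

The min/max pair is the weakness noted above: it propagates a single dominant membership and cannot express interactions among variables, which is where the BBN's conditional probabilities complement it.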
Radinger, Johannes; Wolter, Christian; Kail, Jochem
2015-01-01
Habitat suitability and the distinct mobility of species depict fundamental keys for explaining and understanding the distribution of river fishes. In recent years, comprehensive data on river hydromorphology has been mapped at spatial scales down to 100 m, potentially serving high resolution species-habitat models, e.g., for fish. However, the relative importance of specific hydromorphological and in-stream habitat variables and their spatial scales of influence is poorly understood. Applying boosted regression trees, we developed species-habitat models for 13 fish species in a sand-bed lowland river based on river morphological and in-stream habitat data. First, we calculated mean values for the predictor variables in five distance classes (from the sampling site up to 4000 m up- and downstream) to identify the spatial scale that best predicts the presence of fish species. Second, we compared the suitability of measured variables and assessment scores related to natural reference conditions. Third, we identified variables which best explained the presence of fish species. The mean model quality (AUC = 0.78, area under the receiver operating characteristic curve) significantly increased when information on the habitat conditions up- and downstream of a sampling site (maximum AUC at 2500 m distance class, +0.049) and topological variables (e.g., stream order) were included (AUC = +0.014). Both measured and assessed variables were similarly well suited to predict species’ presence. Stream order variables and measured cross section features (e.g., width, depth, velocity) were best-suited predictors. In addition, measured channel-bed characteristics (e.g., substrate types) and assessed longitudinal channel features (e.g., naturalness of river planform) were also good predictors. 
These findings demonstrate (i) the applicability of high resolution river morphological and instream-habitat data (measured and assessed variables) to predict fish presence, (ii) the importance of considering habitat at spatial scales larger than the sampling site, and (iii) that the importance of (river morphological) habitat characteristics differs depending on the spatial scale. PMID:26569119
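The AUC values reported above have a direct probabilistic reading that can be computed without plotting an ROC curve. A sketch with hypothetical scores:

```python
def auc(presence_scores, absence_scores):
    """Area under the ROC curve via the Mann-Whitney statistic: the
    probability that a randomly drawn presence site outscores a randomly
    drawn absence site (ties count one half)."""
    wins = sum((p > a) + 0.5 * (p == a)
               for p in presence_scores for a in absence_scores)
    return wins / (len(presence_scores) * len(absence_scores))

# Hypothetical model outputs at presence and absence sites
score = auc([0.9, 0.8, 0.6], [0.7, 0.3, 0.2])
```

An AUC of 0.78, as reported for the mean model, means a presence site outscores an absence site about 78% of the time.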
Comparative study of air-conditioning energy use of four office buildings in China and USA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Xin; Yan, Da; An, Jingjing
Energy use in buildings has great variability. In order to design and operate low-energy buildings, as well as to establish building energy codes, standards and effective energy policy, it is crucial to understand and quantify the key factors influencing building energy performance. This study investigates air-conditioning (AC) energy use of four office buildings in four locations: Beijing, Taiwan, Hong Kong, and Berkeley. Building simulation was employed to quantify the influences of key factors, including climate, building envelope and occupant behavior. Through simulation of various combinations of the three influencing elements, it is found that climate can lead to AC cooling consumption differences of almost two times, while occupant behavior resulted in the greatest differences (of up to three times) in AC cooling consumption. The influence of occupant behavior on AC energy consumption is not homogeneous. Under similar climates, when the occupant behavior in the building differed, the optimized building envelope design also differed. In conclusion, the optimal building envelope should be determined according to the climate as well as the occupants who use the building.
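The factorial simulation logic above (vary one factor while holding the others fixed, then compare consumption ratios) can be sketched on toy data; all values and factor levels below are hypothetical:

```python
# Hypothetical annual AC cooling results (kWh/m2) for every combination
# of climate and occupant behavior
cooling_kwh = {
    ("mild", "frugal"): 10.0, ("mild", "intensive"): 30.0,
    ("hot", "frugal"): 20.0, ("hot", "intensive"): 60.0,
}

def influence(factor):
    """Largest max/min consumption ratio when only the given factor varies."""
    idx = 0 if factor == "climate" else 1
    ratios = []
    for fixed in {k[1 - idx] for k in cooling_kwh}:
        vals = [v for k, v in cooling_kwh.items() if k[1 - idx] == fixed]
        ratios.append(max(vals) / min(vals))
    return max(ratios)
```

On this toy grid, climate alone spans a factor of two and behavior a factor of three, mirroring the relative magnitudes the study reports.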
Comparative study of air-conditioning energy use of four office buildings in China and USA
Zhou, Xin; Yan, Da; An, Jingjing; ...
2018-04-05
Energy use in buildings has great variability. In order to design and operate low-energy buildings, as well as to establish building energy codes, standards and effective energy policy, it is crucial to understand and quantify the key factors influencing building energy performance. This study investigates air-conditioning (AC) energy use of four office buildings in four locations: Beijing, Taiwan, Hong Kong, and Berkeley. Building simulation was employed to quantify the influences of key factors, including climate, building envelope and occupant behavior. Through simulation of various combinations of the three influencing elements, it is found that climate can lead to AC cooling consumption differences of almost two times, while occupant behavior resulted in the greatest differences (of up to three times) in AC cooling consumption. The influence of occupant behavior on AC energy consumption is not homogeneous. Under similar climates, when the occupant behavior in the building differed, the optimized building envelope design also differed. In conclusion, the optimal building envelope should be determined according to the climate as well as the occupants who use the building.
Gordon, G T; McCann, B P
2015-01-01
This paper describes the basis of a stakeholder-based sustainable optimisation indicator (SOI) system to be developed for small-to-medium sized activated sludge (AS) wastewater treatment plants (WwTPs) in the Republic of Ireland (ROI). Key technical publications relating to best practice plant operation, performance audits and optimisation, and indicator and benchmarking systems for wastewater services are identified. Optimisation studies were developed at a number of Irish AS WwTPs and key findings are presented. A national AS WwTP manager/operator survey was carried out to verify the applied operational findings and identify the key operator stakeholder requirements for this proposed SOI system. It was found that most plants require more consistent operational data-based decision-making, monitoring and communication structures to facilitate optimised, sustainable and continuous performance improvement. The applied optimisation and stakeholder consultation phases form the basis of the proposed stakeholder-based SOI system. This system will allow for continuous monitoring and rating of plant performance, facilitate optimised operation and encourage the prioritisation of performance improvement through tracking key operational metrics. Plant optimisation has become a major focus due to the transfer of all ROI water services to a national water utility from individual local authorities and the implementation of the EU Water Framework Directive.
Zeroing in on Number and Operations, Grades 7-8: Key Ideas and Common Misconceptions
ERIC Educational Resources Information Center
Collins, Anne; Dacey, Linda
2010-01-01
"The Zeroing in on Number and Operations" series, which aligns with the Common Core State Standards and the NCTM Standards and Focal Points, features easy-to-use tools for teaching key concepts in number and operations and for addressing common misconceptions. Sharing the insights they've gained in decades of mathematics teaching and research,…
Zeroing in on Number and Operations, Grades 3-4: Key Ideas and Common Misconceptions
ERIC Educational Resources Information Center
Dacey, Linda; Collins, Anne
2010-01-01
"The Zeroing in on Number and Operations" series, which aligns with the Common Core State Standards and the NCTM Standards and Focal Points, features easy-to-use tools for teaching key concepts in number and operations and for addressing common misconceptions. Sharing the insights they've gained in decades of mathematics teaching and research,…
Zeroing in on Number and Operations, Grades 5-6: Key Ideas and Common Misconceptions
ERIC Educational Resources Information Center
Collins, Anne; Dacey, Linda
2010-01-01
"The Zeroing in on Number and Operations" series, which aligns with the Common Core State Standards and the NCTM Standards and Focal Points, features easy-to-use tools for teaching key concepts in number and operations and for addressing common misconceptions. Sharing the insights they've gained through decades of mathematics teaching and research,…
Argo workstation: a key component of operational oceanography
NASA Astrophysics Data System (ADS)
Dong, Mingmei; Xu, Shanshan; Miao, Qingsheng; Yue, Xinyang; Lu, Jiawei; Yang, Yang
2018-02-01
Operational oceanography requires quantity, quality, and availability of data sets, together with timely and effective data products. Without a steady and robust operational system behind it, operational oceanography cannot proceed far. In this paper we describe an integrated platform named the Argo Workstation. It operates as a data processing and management system capable of data collection, automatic data quality control, visual data checking, statistical data search, and data service. Once set up, the Argo Workstation delivers high-quality global Argo data to users every day, in a timely and effective manner. It has not only played a key role in operational oceanography but also serves as an example for other operational systems.
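The automatic quality-control step can be illustrated with the simplest of the standard Argo real-time tests, a gross range check. The two-level flag values (1 = good, 4 = bad) follow the Argo QC convention, but the limits and function name below are illustrative, not the workstation's actual implementation:

```python
def range_check(profile, limits):
    """Gross range check: flag each value in a profile against fixed
    climatological limits. Flag 1 = good, 4 = bad (Argo QC convention);
    the limits themselves are illustrative."""
    lo, hi = limits
    return [1 if lo <= v <= hi else 4 for v in profile]

# Example: sea temperatures checked against a (-2.5, 40.0) degC window
flags = range_check([10.0, 45.0, -3.0], (-2.5, 40.0))  # -> [1, 4, 4]
```

A production system chains many such tests (spike, gradient, stuck-value) and keeps the worst flag per value.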
A Novel Image Encryption Algorithm Based on DNA Subsequence Operation
Zhang, Qiang; Xue, Xianglian; Wei, Xiaopeng
2012-01-01
We present a novel image encryption algorithm based on DNA subsequence operations. Unlike traditional DNA encryption methods, our algorithm does not rely on complex biological operations; it simply combines DNA subsequence operations (such as elongation, truncation, and deletion) with the logistic chaotic map to scramble both the locations and the values of the image's pixels. The experimental results and security analysis show that the proposed algorithm is easy to implement, achieves a good encryption effect, has a large secret-key space and strong sensitivity to the secret key, and is able to resist exhaustive and statistical attacks. PMID:23093912
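The logistic-map keystream at the heart of such schemes can be sketched as follows. The parameter values (x0 = 0.3456, r = 3.99) and the single-map design are illustrative assumptions, not the paper's exact algorithm, which additionally applies DNA subsequence operations:

```python
import numpy as np

def logistic_sequence(x0, r, n):
    """Iterate the logistic map x -> r*x*(1-x) to get n chaotic values."""
    seq = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        seq[i] = x
    return seq

def encrypt(img, x0=0.3456, r=3.99):
    """Scramble pixel locations with a chaotic permutation, then mask
    pixel values with a chaotic keystream. The key is (x0, r)."""
    flat = img.flatten()
    chaos = logistic_sequence(x0, r, flat.size)
    perm = np.argsort(chaos)                    # location scrambling
    keystream = (chaos * 256).astype(np.uint8)  # value scrambling
    return (flat[perm] ^ keystream).reshape(img.shape)

def decrypt(cipher, x0=0.3456, r=3.99):
    """Regenerate the permutation and keystream from the key, undo the
    XOR mask, then invert the permutation."""
    chaos = logistic_sequence(x0, r, cipher.size)
    perm = np.argsort(chaos)
    keystream = (chaos * 256).astype(np.uint8)
    flat = cipher.flatten() ^ keystream
    inv = np.empty_like(perm)
    inv[perm] = np.arange(perm.size)
    return flat[inv].reshape(cipher.shape)
```

The key sensitivity the abstract mentions comes from chaotic divergence: a tiny change in x0 yields a completely different permutation and keystream.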
NASA Astrophysics Data System (ADS)
Reid, M. D.
2000-12-01
Correlations of the type discussed by EPR in their original 1935 paradox for continuous variables exist for the quadrature phase amplitudes of two spatially separated fields. These correlations were first experimentally reported in 1992. We propose to use such EPR beams in quantum cryptography, to transmit messages with high efficiency in such a way that the receiver and sender may later determine whether eavesdropping has occurred. The merit of the new proposal is the possibility of transmitting a reasonably secure yet predetermined key. This would allow relay of a cryptographic key over long distances in the presence of lossy channels.
Robust shot-noise measurement for continuous-variable quantum key distribution
NASA Astrophysics Data System (ADS)
Kunz-Jacques, Sébastien; Jouguet, Paul
2015-02-01
We study a practical method to measure the shot noise in real time in continuous-variable quantum key distribution systems. The amount of secret key that can be extracted from the raw statistics depends strongly on this quantity since it affects in particular the computation of the excess noise (i.e., noise in excess of the shot noise) added by an eavesdropper on the quantum channel. Some powerful quantum hacking attacks relying on faking the estimated value of the shot noise to hide an intercept and resend strategy were proposed. Here, we provide experimental evidence that our method can defeat the saturation attack and the wavelength attack.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blandino, Rémi; Etesse, Jean; Grangier, Philippe
2014-12-04
We show that the maximum transmission distance of continuous-variable quantum key distribution in presence of a Gaussian noisy lossy channel can be arbitrarily increased using a heralded noiseless linear amplifier. We explicitly consider a protocol using amplitude and phase modulated coherent states with reverse reconciliation. Assuming that the secret key rate drops to zero for a line transmittance T{sub lim}, we find that a noiseless amplifier with amplitude gain g can improve this value to T{sub lim}/g{sup 2}, corresponding to an increase in distance proportional to log g. We also show that the tolerance against noise is increased.
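The quoted scaling can be made concrete with a little arithmetic. Assuming a standard telecom fiber loss of 0.2 dB/km (an assumption, not stated in the abstract), transmittance is T = 10^(-alpha*d/10), so raising the limiting transmittance from T_lim to T_lim/g^2 extends the range by 20*log10(g)/alpha, independent of T_lim:

```python
import math

ALPHA_DB_PER_KM = 0.2  # typical telecom fiber loss (assumed)

def max_distance(t_lim):
    """Distance at which fiber transmittance T = 10**(-alpha*d/10)
    drops to the limiting value t_lim."""
    return -10.0 * math.log10(t_lim) / ALPHA_DB_PER_KM

def max_distance_with_nla(t_lim, g):
    """A noiseless linear amplifier of amplitude gain g improves the
    limiting transmittance to t_lim / g**2."""
    return max_distance(t_lim / g**2)

# g = 2 buys 20*log10(2)/alpha ~= 30.1 km regardless of t_lim
```

This is exactly the "increase in distance proportional to log g" stated above.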
Gaussian-modulated coherent-state measurement-device-independent quantum key distribution
NASA Astrophysics Data System (ADS)
Ma, Xiang-Chun; Sun, Shi-Hai; Jiang, Mu-Sheng; Gui, Ming; Liang, Lin-Mei
2014-04-01
Measurement-device-independent quantum key distribution (MDI-QKD), leaving the detection procedure to the third partner and thus being immune to all detector side-channel attacks, is very promising for the construction of high-security quantum information networks. We propose a scheme to implement MDI-QKD, but with continuous variables instead of discrete ones, i.e., with the source of Gaussian-modulated coherent states, based on the principle of continuous-variable entanglement swapping. This protocol not only can be implemented with current telecom components but also has high key rates compared to its discrete counterpart; thus it will be highly compatible with quantum networks.
Collective attacks and unconditional security in continuous variable quantum key distribution.
Grosshans, Frédéric
2005-01-21
We present here an information theoretic study of Gaussian collective attacks on the continuous variable key distribution protocols based on Gaussian modulation of coherent states. These attacks, overlooked in previous security studies, give a finite advantage to the eavesdropper in the experimentally relevant lossy channel, but are not powerful enough to reduce the range of the reverse reconciliation protocols. Secret key rates are given for the ideal case where Bob performs optimal collective measurements, as well as for the realistic cases where he performs homodyne or heterodyne measurements. We also apply the generic security proof of Christandl et al. to obtain unconditionally secure rates for these protocols.
Trust Threshold Based Public Key Management in Mobile Ad Hoc Networks
2016-03-05
should operate in a self-organized way. Capkun et al. [15] proposed a certificate-based self-organized public key management for MANETs by removing...period allows a node that started with ignorance to interact with other nodes, the...does not reach T_th. Table 2: Attack behavior for operations. Operation Attack...section, we discuss the core operations of CTPKM as illustrated by Fig. 1. Each mobile entity is able to communicate with other entities using public
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krad, Ibrahim; Ibanez, Eduardo; Ela, Erik
2015-10-19
The recent increased interest in utilizing variable generation (VG) resources such as wind and solar in power systems has motivated investigations into new operating procedures. Although these resources provide desirable value to a system (e.g., no fuel costs or emissions), interconnecting them poses unique challenges. Their variable, non-controllable nature in particular requires significant attention, because it directly results in increased power system variability and uncertainty. One way to handle this is via new operating reserve schemes. Operating reserves provide upward and downward generation and ramping capacity to counteract uncertainty and variability prior to their realization. For instance, uncertainty and variability in real-time dispatch can be accounted for in the hour-ahead unit commitment. New operating reserve methodologies that specifically account for the increased variability and uncertainty caused by VG are currently being investigated and developed by academia and industry. This paper examines one method inspired by the new operating reserve product being proposed by the California Independent System Operator. The method is based on examining the potential ramping requirements at any given time and enforcing those requirements via a reserve demand curve in the market-clearing optimization as an additional ancillary service product.
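A minimal sketch of the ramping-requirement idea: size the upward/downward reserve from a percentile of historical net-load ramps over the dispatch horizon. This is a simplified stand-in, not CAISO's actual flexible-ramping formulation; the coverage level and horizon are illustrative parameters:

```python
import numpy as np

def ramp_reserve_requirement(net_load, horizon=1, coverage=0.95):
    """Size upward/downward ramping reserve from the empirical
    distribution of historical net-load ramps over the dispatch
    horizon. Returns (up_mw, down_mw), with down_mw <= 0."""
    ramps = net_load[horizon:] - net_load[:-horizon]
    up = np.percentile(ramps, 100.0 * coverage)
    down = np.percentile(ramps, 100.0 * (1.0 - coverage))
    return max(up, 0.0), min(down, 0.0)
```

In a market-clearing optimization, a requirement like this would enter through a reserve demand curve rather than as a hard constraint.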
Solar Thermal Upper Stage Cryogen System Engineering Checkout Test
NASA Technical Reports Server (NTRS)
Olsen, A. D.; Cady, E. C.; Jenkins, D. S.
1999-01-01
The Solar Thermal Upper Stage technology (STUSTD) program is a solar thermal propulsion technology program cooperatively sponsored by a Boeing-led team and by NASA MSFC. A key element of its technology program is development of a liquid hydrogen (LH2) storage and supply system which employs multi-layer insulation, liquid acquisition devices, active and passive thermodynamic vent systems, and variable 40W tank heaters to reliably provide near-constant-pressure H2 to a solar thermal engine in the low gravity of space operation. The LH2 storage and supply system is designed to operate as a passive, pressure-fed supply system at a constant pressure of about 45 psia. During operation of the solar thermal engine over a small portion of the orbit, the LH2 storage and supply system propulsively vents through the engine at a controlled flowrate. During the long coast portion of the orbit, the LH2 tank is locked up (unvented). Thus, all of the vented H2 flow is used in the engine for thrust and none is wastefully vented overboard. The key to managing the tank pressure, and therefore the H2 flow to the engine, is to balance the energy flowing into the LH2 tank through the MLI and from the tank heaters against the energy carried out of the tank by the vented H2 flow. A moderate-scale (71 cu ft) LH2 storage and supply system was installed and insulated at the NASA MSFC Test Area 300. The operation of the system is described in this paper. The test program for the LH2 system consisted of two parts: 1) a series of engineering tests to characterize the performance of the various components in the system; and 2) a 30-day simulation of a complete LEO and GEO transfer mission. This paper describes the results of the engineering tests, and correlates these results with analytical models used to design future advanced Solar Orbit Transfer Vehicles.
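At steady state the energy balance described above reduces to a one-line calculation: the vent (engine feed) flow must carry away the heat entering through the MLI and from the tank heaters. A hedged sketch, using an approximate latent heat of vaporization for hydrogen (~446 kJ/kg, an assumed round figure):

```python
H2_LATENT_HEAT_J_PER_KG = 446_000.0  # approx. latent heat of LH2 vaporization

def vent_flow_for_heat_balance(heat_leak_w, heater_power_w=0.0):
    """Steady-state vent (engine feed) mass flow, in kg/s, that carries
    away the energy entering the tank through the MLI and from the tank
    heaters, holding tank pressure constant."""
    return (heat_leak_w + heater_power_w) / H2_LATENT_HEAT_J_PER_KG
```

In the actual system the heaters are modulated so that this balance delivers the H2 flow the engine needs during the thrust arc.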
Reduced nonlinear prognostic model construction from high-dimensional data
NASA Astrophysics Data System (ADS)
Gavrilov, Andrey; Mukhin, Dmitry; Loskutov, Evgeny; Feigin, Alexander
2017-04-01
Construction of a data-driven model of an evolution operator using universal approximating functions can only be statistically justified when the dimension of its phase space is small enough, especially in the case of short time series. At the same time, in many applications the measured data is high-dimensional, e.g. space-distributed and multivariate in climate science. Therefore it is necessary to use efficient dimensionality-reduction methods which are also able to capture the key dynamical properties of the system from observed data. To address this problem we present a Bayesian approach to evolution operator construction which incorporates two key reduction steps. First, the data is decomposed into a set of empirical modes, such as standard empirical orthogonal functions or the recently suggested nonlinear dynamical modes (NDMs) [1], and the reduced space of the corresponding principal components (PCs) is obtained. Then, a model of the evolution operator for the PCs is constructed which maps a number of past states to the current state. The second step is to reduce this time-extended space in the past using appropriate decomposition methods. Such a reduction allows us to capture only the most significant spatio-temporal couplings. The functional form of the evolution operator includes separate linear, nonlinear (based on artificial neural networks) and stochastic terms. Explicit separation of the linear term from the nonlinear one allows us to interpret the degree of nonlinearity more easily, as well as to deal better with smooth PCs which can naturally occur in decompositions like NDM, as they provide a time-scale separation. Results of applying the proposed method to climate data are demonstrated and discussed. The study is supported by the Government of the Russian Federation (agreement #14.Z50.31.0033 with the Institute of Applied Physics of RAS). 1. Mukhin, D., Gavrilov, A., Feigin, A., Loskutov, E., & Kurths, J. (2015). 
Principal nonlinear dynamical modes of climate variability. Scientific Reports, 5, 15510. http://doi.org/10.1038/srep15510
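The first reduction step (EOF/PC decomposition followed by fitting an evolution operator on the PCs) can be sketched as below. For brevity this keeps only a linear one-step operator fitted by least squares, dropping the nonlinear (neural network) and stochastic terms of the full model and the Bayesian treatment:

```python
import numpy as np

def fit_reduced_evolution_operator(X, n_modes):
    """Project data X (time x space) onto its n_modes leading empirical
    orthogonal functions, then fit a linear one-step evolution operator
    A on the principal components, so that pcs[t+1] ~ pcs[t] @ A."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    modes = Vt[:n_modes]                 # leading EOFs (rows)
    pcs = Xc @ modes.T                   # (time, n_modes) PCs
    A, *_ = np.linalg.lstsq(pcs[:-1], pcs[1:], rcond=None)
    return A, modes
```

The second reduction step of the paper would further compress a stack of several past PC states before the regression; here the operator uses a single past state.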
A summary and integration of research concerning single pilot IFR operational problems
NASA Technical Reports Server (NTRS)
Chapman, G. C.
1983-01-01
A review of seven research studies pertaining to Single Pilot IFR (SPIFR) operations was performed. Two studies were based on questionnaire surveys; two on National Transportation Safety Board (NTSB) reports; two on Aviation Safety Reporting System (ASRS) incident reports; and one used event analysis and statistics to forecast problems. The results obtained in each study were extracted and integrated. Results were synthesized and key issues pertaining to SPIFR operational problems were identified. Research recommended by the studies to address these issues is catalogued under each key issue.
Recent Enhancements in NOAA's JPSS Land Product Suite and Key Operational Applications
NASA Astrophysics Data System (ADS)
Csiszar, I. A.; Yu, Y.; Zhan, X.; Vargas, M.; Ek, M. B.; Zheng, W.; Wu, Y.; Smirnova, T. G.; Benjamin, S.; Ahmadov, R.; James, E.; Grell, G. A.
2017-12-01
A suite of operational land products has been produced as part of NOAA's Joint Polar Satellite System (JPSS) program to support a wide range of operational applications in environmental monitoring, prediction, disaster management and mitigation, and decision support. The Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi National Polar-orbiting Partnership (NPP) and the operational JPSS satellite series forms the basis of six fundamental and multiple additional added-value environmental data records (EDRs). A major recent improvement in the land-based VIIRS EDRs has been the development of global gridded products, providing a format and science content suitable for ingest into NOAA's operational land surface and coupled numerical weather prediction models. VIIRS near-real-time Green Vegetation Fraction is now being tested for full operational use, while land surface temperature and albedo are under testing and evaluation. The operational 750m VIIRS active fire product, including fire radiative power, is used to support emission modeling and air quality applications. Testing and evaluation of the improved 375m VIIRS active fire product for operational NOAA implementation are also underway. Added-value and emerging VIIRS land products include vegetation health, phenology, near-real-time surface type and surface condition change, and other biogeophysical variables. As part of the JPSS program, a global soil moisture data product has also been generated from the Advanced Microwave Scanning Radiometer 2 (AMSR2) sensor on the GCOM-W1 (Global Change Observation Mission - Water 1) satellite since July 2012. This product is included in the blended NESDIS Soil Moisture Operational Products System, providing soil moisture data as a critical input for land surface modeling.
Formal Analysis of Key Integrity in PKCS#11
NASA Astrophysics Data System (ADS)
Falcone, Andrea; Focardi, Riccardo
PKCS#11 is a standard API to cryptographic devices such as smart cards, hardware security modules and USB crypto-tokens. Though widely adopted, this API has been shown to be prone to attacks in which a malicious user gains access to the sensitive keys stored in the devices. In 2008, Delaune, Kremer and Steel proposed a model to formally reason about this kind of attack. We extend this model to also describe flaws that are based on integrity violations of the stored keys. In particular, we consider scenarios in which a malicious overwriting of keys might fool honest users into using the attacker's own keys while performing sensitive operations. We further enrich the model with a trusted key mechanism ensuring that only controlled, non-tampered keys are used in cryptographic operations, and we show how this modified API prevents the above mentioned key-replacement attacks.
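The trusted-key idea (refuse to use a key whose integrity cannot be verified) can be illustrated with a generic fingerprint check. The HMAC construction and function names below are illustrative stand-ins, not part of the PKCS#11 API or of the paper's formal model:

```python
import hashlib
import hmac

def key_fingerprint(key_bytes, device_secret):
    """Bind a stored key's value to a MAC under a device-held secret,
    standing in for a 'trusted' attribute on the token."""
    return hmac.new(device_secret, key_bytes, hashlib.sha256).digest()

def use_key_if_trusted(key_bytes, fingerprint, device_secret, operation):
    """Perform a sensitive operation only if the key's fingerprint still
    verifies, so a silently overwritten key is refused."""
    expected = key_fingerprint(key_bytes, device_secret)
    if not hmac.compare_digest(expected, fingerprint):
        raise PermissionError("untrusted or tampered key")
    return operation(key_bytes)
```

The point mirrors the paper's mechanism: an attacker who overwrites a key object cannot also forge its trust evidence, so honest users never wrap or encrypt under the attacker's key.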
Combining Variables, Controlling Variables, and Proportions: Is There a Psychological Link?
ERIC Educational Resources Information Center
Lawson, Anton E.
1979-01-01
Investigated the degree of relationship among the performance of 28 seventh grade students on the following three formal operations tasks: chemical combinations, bending rods, and balance beam. Results show that task performance ranged widely from early concrete operational to fully operational. (HM)
Determinants of choice for pigeons and humans on concurrent-chains schedules of reinforcement.
Belke, T W; Pierce, W D; Powell, R A
1989-09-01
Concurrent-chains schedules of reinforcement were arranged for humans and pigeons. Responses of humans were reinforced with tokens exchangeable for money, and key pecks of 4 birds were reinforced with food. Variable-interval 30-s and 40-s schedules operated in the terminal links of the chains. Condition 1 exposed subjects to variable-interval 90-s and variable-interval 30-s initial links, respectively. Conditions 2 and 3 arranged equal initial-link schedules of 40 s or 120 s. Experimental conditions tested the descriptive adequacy of five equations: reinforcement density, delay reduction, modified delay reduction, matching and maximization. Results based on choice proportions and switch rates during the initial links showed that pigeons behaved in accord with delay-reduction models, whereas humans maximized overall rate of reinforcement. As discussed by Logue and associates in self-control research, different types of reinforcement may affect sensitivity to delay differentially. Pigeons' responses were reinforced with food, a reinforcer that is consumable upon presentation. Humans' responses were reinforced with money, a reinforcer exchanged for consumable reinforcers after it was earned. Reinforcers that are immediately consumed may generate high sensitivity to delay and behavior described as delay reduction. Reinforcers with longer times to consumption may generate low sensitivity to delay and behavior that maximizes overall payoff.
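The two classes of model can be compared numerically. The sketch below computes predicted choice proportions under Fantino's delay-reduction hypothesis and under simple matching to relative terminal-link immediacy; the example values (VI 30-s/VI 40-s terminal links, equal VI 120-s initial links, hence T of roughly 95 s) follow the schedules above, but the formulas are textbook forms rather than the exact equations tested in the paper:

```python
def delay_reduction_choice(t1, t2, T):
    """Fantino's delay-reduction hypothesis: preference for alternative 1
    is proportional to the reduction in expected time to reinforcement
    signalled by its terminal link (durations t1, t2; T = mean total
    time to reinforcement from initial-link onset)."""
    return (T - t1) / ((T - t1) + (T - t2))

def immediacy_matching_choice(t1, t2):
    """Simple matching to the relative immediacy (1/delay) of the
    terminal links."""
    return (1.0 / t1) / (1.0 / t1 + 1.0 / t2)

# VI 30-s vs VI 40-s terminal links, equal VI 120-s initial links:
# two concurrent VI 120-s schedules yield terminal-link entry in ~60 s,
# so T ~ 60 + (30 + 40)/2 = 95 s.
```

With these values delay reduction predicts about 0.54 preference for the VI 30-s chain, while immediacy matching predicts about 0.57; real data discriminate between such closely spaced predictions only with many sessions.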
NASA Astrophysics Data System (ADS)
Masih, Ilyas; Ahmad, Mobin-ud-Din; Uhlenbrook, Stefan; Turral, Hugh; Karimi, Poolad
This study provides a comprehensive spatio-temporal assessment of the surface water resources of the semi-arid Karkheh basin, Iran, and consequently enables decision makers to work towards sustainable water development in that region. The analysis is based on the examination of statistical parameters, flow duration characteristics, base flow separation and trend analysis, for which data from seven key gauging stations were used for the period 1961-2001. Additionally, basin-level water accounting was carried out for the water year 1993-94. The study shows that observed daily, monthly and annual streamflows are highly variable in space and time within the basin. The streamflows have not changed significantly at the annual scale, but a few months show significant trends, most notably a decline during May and June and an increase during December and March. The major causes were related to changes in climate, land use and reservoir operations. The study concludes that the water allocations to different sectors were lower than the totally available resources during the study period. However, given the high variability of streamflows, changes in climate and land use, and ongoing water resources development planning, it will be extremely difficult to meet the demands of all sectors in the future, particularly during dry years.
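Base flow separation of the kind used here is often done with a recursive digital filter. A minimal sketch of a single forward pass of the Lyne-Hollick filter follows; alpha = 0.925 is a conventional default, and the paper's exact separation method is not specified in the abstract:

```python
import numpy as np

def lyne_hollick_baseflow(q, alpha=0.925):
    """Single forward pass of the Lyne-Hollick recursive digital filter.
    q: streamflow series; returns the baseflow component, constrained
    so that 0 <= baseflow <= streamflow."""
    quickflow = np.zeros_like(q, dtype=float)
    for i in range(1, len(q)):
        quickflow[i] = (alpha * quickflow[i - 1]
                        + 0.5 * (1.0 + alpha) * (q[i] - q[i - 1]))
        quickflow[i] = min(max(quickflow[i], 0.0), q[i])
    return q - quickflow
```

Practical applications usually run the filter in three passes (forward, backward, forward) to smooth the separation.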
Kumar, Arvind; Rai, Lal Chand
2017-07-01
Soil quality is an important factor maintained by the inhabiting microorganisms. Soil physicochemical characteristics determine the indigenous microbial population, and rice provides food security to a major portion of the world's population. Therefore, this study aimed to assess the impact of physicochemical variables on bacterial community composition and diversity in conventional paddy fields, which could reflect a real picture of the bacterial communities operating in the paddy agro-ecosystem. To fulfill this objective, soil physicochemical characterization and bacterial community composition and diversity analysis were carried out using the culture-independent PCR-DGGE method on twenty soils distributed across eight districts. Bacterial communities were grouped into three clusters based on UPGMA cluster analysis of the DGGE banding pattern. The linkage of measured physicochemical variables with bacterial community composition was analyzed by canonical correspondence analysis (CCA). The CCA ordination biplot results were similar to the UPGMA cluster analysis. High levels of species-environment correlation (0.989 and 0.959) were observed, and the largest proportion of species data variability was explained by total organic carbon (TOC), available nitrogen, total nitrogen and pH. Thus, the results suggest that TOC and nitrogen are key regulators of bacterial community composition in conventional paddy fields. Further, high diversity indices and evenness values demonstrated heterogeneity and co-abundance of the bacterial communities.
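The diversity and evenness indices mentioned can be computed directly from band (taxon) abundances. A minimal sketch using the Shannon index and Pielou's evenness, common choices for DGGE profiles (the abstract does not name the paper's exact indices):

```python
import math

def shannon_diversity(counts):
    """Shannon index H' = -sum(p_i * ln p_i) over nonzero abundances."""
    total = sum(counts)
    ps = [c / total for c in counts if c > 0]
    return -sum(p * math.log(p) for p in ps)

def pielou_evenness(counts):
    """Pielou's evenness J' = H' / ln(S), S = number of observed taxa;
    J' = 1 means all taxa are equally abundant."""
    s = sum(1 for c in counts if c > 0)
    return shannon_diversity(counts) / math.log(s)
```

High values of both, as reported above, indicate many taxa at comparable abundances rather than one dominant taxon.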
Meléndez, María José; Báez, José Carlos; Serna-Quintero, José Miguel; Camiñas, Juan Antonio; Fernández, Ignacio de Loyola; Real, Raimundo; Macías, David
2017-01-01
Chondrichthyes, which include Elasmobranchii (sharks and batoids) and Holocephali (chimaeras), are a relatively small group in the Mediterranean Sea (89 species) that plays a key role in the ecosystems where they are found. At present, many species of this group are threatened as a result of anthropogenic effects, including fishing activity. Knowledge of the spatial distribution of these species is of great importance for understanding their ecological role and for the efficient management of their populations, particularly if affected by fisheries. This study aims to analyze the spatial patterns of the distribution of Chondrichthyes species richness in the Mediterranean Sea. Information provided by the studied countries was used to model geographical and ecological variables affecting Chondrichthyes species richness. The species were distributed in 16 Operational Geographical Units (OGUs), derived from the Geographical Sub-Areas (GSA) adopted by the General Fisheries Commission for the Mediterranean (GFCM). Regression analyses with species richness as the target variable were fitted with a set of environmental and geographical variables; the model that best explains Chondrichthyes richness links it to the distance to the Strait of Gibraltar and the number of taxonomic families of bony fishes. This suggests that both historical and ecological factors affect the current distribution of Chondrichthyes within the Mediterranean Sea.
Folguera-Blasco, Núria; Cuyàs, Elisabet; Menéndez, Javier A; Alarcón, Tomás
2018-03-01
Understanding the control of epigenetic regulation is key to explain and modify the aging process. Because histone-modifying enzymes are sensitive to shifts in availability of cofactors (e.g. metabolites), cellular epigenetic states may be tied to changing conditions associated with cofactor variability. The aim of this study is to analyse the relationships between cofactor fluctuations, epigenetic landscapes, and cell state transitions. Using Approximate Bayesian Computation, we generate an ensemble of epigenetic regulation (ER) systems whose heterogeneity reflects variability in cofactor pools used by histone modifiers. The heterogeneity of epigenetic metabolites, which operates as regulator of the kinetic parameters promoting/preventing histone modifications, stochastically drives phenotypic variability. The ensemble of ER configurations reveals the occurrence of distinct epi-states within the ensemble. Whereas resilient states maintain large epigenetic barriers refractory to reprogramming cellular identity, plastic states lower these barriers, and increase the sensitivity to reprogramming. Moreover, fine-tuning of cofactor levels redirects plastic epigenetic states to re-enter epigenetic resilience, and vice versa. Our ensemble model agrees with a model of metabolism-responsive loss of epigenetic resilience as a cellular aging mechanism. Our findings support the notion that cellular aging, and its reversal, might result from stochastic translation of metabolic inputs into resilient/plastic cell states via ER systems.
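The Approximate Bayesian Computation step can be sketched as a plain rejection sampler: draw parameters from the prior, simulate, and keep only draws whose output is close to the observation. All names below are generic placeholders, not the authors' implementation, which generates an ensemble of epigenetic-regulation systems this way:

```python
import random

def abc_rejection(observed, simulate, prior_sample, distance, eps, n_draws):
    """ABC by rejection: draw parameters from the prior, simulate, and
    keep draws whose simulated output lies within eps of the observation."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample()
        if distance(simulate(theta), observed) <= eps:
            accepted.append(theta)
    return accepted
```

The accepted parameter sets approximate the posterior; in the paper's setting, each accepted draw corresponds to one plausible configuration of cofactor-dependent kinetic parameters.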
Zare, Mohsen; Sagot, Jean-Claude; Roquelaure, Yves
2018-05-17
Industrial companies show a tendency to eliminate variations in operator strategies, particularly following implementation of lean principles. Companies assume that when operators perform the same prescribed tasks, they execute them in the same manner (completing the same gestures and being exposed to the same risk factors). They attempt to achieve better product quality by standardizing work and reducing operational leeway. However, operators adjust and modify the way they perform tasks to balance their abilities against the requirements of the job. This study aims to investigate the variability of exposure to physical risk factors within and between operators executing the same prescribed tasks. The Ergonomic Standard method was used to evaluate two workstations. Seven operators were observed thirty times across repeated cycles at those workstations. The results revealed variability of exposure to risk factors between and within operators in the repeated execution of the same tasks. Individual characteristics and operators' strategies might generate this variability of exposure, which may be an opportunity to reduce the risk of work-related musculoskeletal disorders (WR-MSDs). However, operators' strategies may sometimes cause overexposure to risk factors; operators most often adopt such strategies to accomplish their tasks while reducing the workload.
NASA Astrophysics Data System (ADS)
Heslop, Emma; Aguiar, Eva; Mourre, Baptiste; Juza, Mélanie; Escudier, Romain; Tintoré, Joaquín
2017-04-01
The Ibiza Channel plays an important role in the circulation of the Western Mediterranean Sea: it governs the north/south exchange of different water masses that are known to affect regional ecosystems, and it is influenced by variability in the different drivers that affect the sub-basins to the north (N) and south (S), making it a complex system. In this study we use a multi-platform approach to resolve the key drivers of this variability and gain insight into the inter-connection between the N and S of the Western Mediterranean Sea through this choke point. The 6-year glider time series from the quasi-continuous glider endurance line monitoring of the Ibiza Channel, undertaken by SOCIB (the Balearic Islands Coastal Ocean Observing and Forecasting System), is used as the base from which to identify key sub-seasonal to inter-annual patterns and shifts in water mass properties and transport volumes. The glider data indicate the following key components in the variability of the N/S flow of different water masses through the channel: regional winter mode water production, changes in intermediate water mass properties, northward flows of a fresher water mass, and the basin-scale circulation. To resolve the drivers of these components of variability, the strength of combining datasets from different sources (glider, modelling, altimetry and moorings) is harnessed. To the north, atmospheric forcing in the Gulf of Lions is a dominant driver, while to the south the mesoscale circulation patterns of the Atlantic Jet and Alboran gyres dominate the variability but do not appear to influence the fresher inflows. Evidence of a connection between the northern and southern sub-basins is nevertheless indicated. The study highlights the importance of sub-seasonal variability and the scale of rapid change possible in the Mediterranean, as well as the benefits of leveraging high-resolution glider datasets within a multi-platform and modelling study.
Basic principles of variable speed drives
NASA Technical Reports Server (NTRS)
Loewenthal, S. H.
1973-01-01
The principles that govern variable-speed drive operation are discussed as a basis for successful drive application. The fundamental factors of torque, speed ratio, and power as they relate to drive selection are discussed. The basic types of variable-speed drives, their operating characteristics and their applications are also presented.
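The torque-speed-power relationship underlying drive selection is P = tau * omega; in an ideal (lossless) drive, power is conserved through the speed ratio, so torque scales inversely with speed. A small illustration (the function names and unit choices are ours):

```python
import math

def power_kw(torque_nm, speed_rpm):
    """Mechanical power P = torque * angular speed."""
    omega = speed_rpm * 2.0 * math.pi / 60.0   # rad/s
    return torque_nm * omega / 1000.0

def output_torque(input_torque_nm, speed_ratio, efficiency=1.0):
    """Ideal drive: power in = power out, so torque scales with the
    speed ratio (input_speed / output_speed)."""
    return input_torque_nm * speed_ratio * efficiency
```

Halving the output speed of a lossless drive doubles the available output torque at the same power, which is the basic trade-off behind drive selection.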
NASA Astrophysics Data System (ADS)
Mukkavilli, S. K.; Kay, M. J.; Taylor, R.; Prasad, A. A.; Troccoli, A.
2014-12-01
The Australian Solar Energy Forecasting System (ASEFS) project requires forecasting timeframes which range from nowcasting to long-term forecasts (minutes to two years). As concentrating solar power (CSP) plant operators are one of the key stakeholders in the national energy market, research and development enhancements for direct normal irradiance (DNI) forecasts are a major subtask. This project involves comparing different radiative scheme codes to improve day-ahead DNI forecasts on the national supercomputing infrastructure running mesoscale simulations on the Weather Research & Forecasting (WRF) model. ASEFS also requires aerosol data fusion for improving accurate representation of spatio-temporally variable atmospheric aerosols to reduce DNI bias error in clear sky conditions over southern Queensland & New South Wales, where solar power is vulnerable to uncertainties from frequent aerosol radiative events such as bush fires and desert dust. Initial results from thirteen years of the Bureau of Meteorology's (BOM) deseasonalised DNI and MODIS NASA-Terra aerosol optical depth (AOD) anomalies demonstrated strong negative correlations in north and southeast Australia, along with strong variability in AOD (~0.03-0.05). Radiative transfer schemes, DNI and AOD anomaly correlations will be discussed for the population and transmission-grid-centric regions where current and planned CSP plants dispatch electricity to capture peak prices in the market. Aerosol and solar irradiance datasets include satellite and ground based assimilations from the national BOM, regional aerosol researchers and agencies. The presentation will provide an overview of this ASEFS project task on WRF and results to date. The overall goal of this ASEFS subtask is to develop a hybrid numerical weather prediction (NWP) and statistical/machine learning multi-model ensemble strategy that meets future operational requirements of CSP plant operators.
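The anomaly-correlation computation described (deseasonalised DNI versus AOD anomalies) boils down to removing the climatological cycle and correlating the residuals. A minimal sketch, assuming a monthly series and simple monthly-mean climatology removal (both assumptions on our part):

```python
import numpy as np

def deseasonalize(series, period=12):
    """Remove the mean seasonal cycle (e.g. monthly climatology) to
    leave anomalies."""
    series = np.asarray(series, dtype=float)
    clim = np.array([series[m::period].mean() for m in range(period)])
    reps = -(-len(series) // period)               # ceil division
    return series - np.tile(clim, reps)[:len(series)]

def anomaly_correlation(a, b):
    """Pearson correlation between two anomaly series."""
    return np.corrcoef(a, b)[0, 1]
```

A strongly negative result of `anomaly_correlation(dni_anom, aod_anom)` corresponds to the relationship reported above: more aerosol, less direct-beam irradiance.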
A one dimensional moving bed biofilm reactor model for nitrification of municipal wastewaters.
Barry, Ugo; Choubert, Jean-Marc; Canler, Jean-Pierre; Pétrimaux, Olivier; Héduit, Alain; Lessard, Paul
2017-08-01
This work presents a one-dimensional model of a moving bed biofilm reactor (MBBR) process designed for the removal of nitrogen from raw wastewaters. A comprehensive experimental strategy was deployed at a semi-industrial pilot-scale plant fed with a municipal wastewater, operated at 10-12 °C and surface loading rates of 1-2 g filtered COD/m2·d and 0.4-0.55 g NH4-N/m2·d. Data were collected on influent/effluent composition and on measurements of key variables and parameters (biofilm mass and maximal thickness, thickness of the limit liquid layer, maximal nitrification rate, oxygen mass transfer coefficient). Based on the time-course variations in these variables, the MBBR model was calibrated at two time-scales and magnitudes of dynamic conditions, i.e., short-term (4 days) calibration under dynamic conditions and long-term (33 days) calibration, and for three types of carriers. A set of parameters suitable for these conditions was proposed, and the calibrated parameter set is able to simulate the time-course change of nitrogen forms in the effluent of the MBBR tanks under the tested operating conditions. Parameters linked to diffusion had a strong influence on how robustly the model reproduces time-course changes in effluent quality. The model was then used to optimize the operation of the MBBR layout. It was shown that the main optimization track consists of limiting the aeration supply without changing the overall performance of the process. Further work should investigate the influence of hydrodynamic conditions on the thickness of the limit liquid layer and the "apparent" diffusion coefficient in the biofilm parameters.
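The diffusion-limited behaviour central to such models can be illustrated with the simplest 1-D biofilm balance: steady diffusion with first-order uptake, D*c'' = k*c, zero flux at the substratum and a fixed concentration at the liquid interface. This is a toy core for illustration, not the calibrated multi-component MBBR model of the paper:

```python
import numpy as np

def biofilm_profile(c_surface, D, k, L, n=201):
    """Steady 1-D substrate profile in a biofilm of thickness L with
    first-order uptake: D*c'' = k*c, zero flux at the substratum (x=0)
    and c = c_surface at the liquid interface (x=L). Finite-difference
    solve; the analytic solution is
    c(x) = c_surface * cosh(x*sqrt(k/D)) / cosh(L*sqrt(k/D))."""
    h = L / (n - 1)
    A = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(1, n - 1):
        A[i, i - 1] = A[i, i + 1] = D / h**2
        A[i, i] = -2.0 * D / h**2 - k
    A[0, 0] = -2.0 * D / h**2 - k   # zero flux via ghost node c[-1] = c[1]
    A[0, 1] = 2.0 * D / h**2
    A[n - 1, n - 1] = 1.0           # Dirichlet condition at the interface
    b[n - 1] = c_surface
    return np.linalg.solve(A, b)
```

The steeper the decay of the profile into the biofilm, the more the process is diffusion-limited, which is why the diffusion-linked parameters dominated the calibration above.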
Understanding and Controlling Sialylation in a CHO Fc-Fusion Process
Lewis, Amanda M.; Croughan, William D.; Aranibar, Nelly; Lee, Alison G.; Warrack, Bethanne; Abu-Absi, Nicholas R.; Patel, Rutva; Drew, Barry; Borys, Michael C.; Reily, Michael D.; Li, Zheng Jian
2016-01-01
A Chinese hamster ovary (CHO) bioprocess, where the product is a sialylated Fc-fusion protein, was operated at pilot and manufacturing scale, and significant variation in sialylation level was observed. In order to more tightly control glycosylation profiles, we sought to identify the cause of the variability. Untargeted metabolomics and transcriptomics methods were applied to select samples from the large-scale runs. Lower sialylation was correlated with elevated mannose levels, a shift in glucose metabolism, and an increased oxidative stress response. Using a 5-L scale model operated with a reduced dissolved oxygen set point, we were able to reproduce the phenotypic profiles observed at manufacturing scale, including lower sialylation, higher lactate, and lower ammonia levels. Targeted transcriptomics and metabolomics confirmed that reduced oxygen levels resulted in increased mannose levels, a shift towards glycolysis, and an increased oxidative stress response similar to the manufacturing scale. Finally, we propose a biological mechanism linking large-scale operation and sialylation variation. Oxidative stress results from gas transfer limitations at large scale and the presence of oxygen dead-zones, inducing upregulation of glycolysis and mannose biosynthesis, and downregulation of hexosamine biosynthesis and acetyl-CoA formation. The lower flux through the hexosamine pathway and reduced intracellular pools of acetyl-CoA led to reduced formation of N-acetylglucosamine and N-acetylneuraminic acid, both key building blocks of N-glycan structures. This study reports for the first time a link between oxidative stress and mammalian protein sialylation. In this study, process, analytical, metabolomic, and transcriptomic data at manufacturing, pilot, and laboratory scales were taken together to develop a systems-level understanding of the process and identify oxygen limitation as the root cause of the glycosylation variability. PMID:27310468
Geospatial Data Integration for Assessing Landslide Hazard on Engineered Slopes
NASA Astrophysics Data System (ADS)
Miller, P. E.; Mills, J. P.; Barr, S. L.; Birkinshaw, S. J.
2012-07-01
Road and rail networks are essential components of national infrastructures, underpinning the economy, and facilitating the mobility of goods and the human workforce. Earthwork slopes such as cuttings and embankments are primary components, and their reliability is of fundamental importance. However, instability and failure can occur, through processes such as landslides. Monitoring the condition of earthworks is a costly and continuous process for network operators, and currently, geospatial data is largely underutilised. The research presented here addresses this by combining airborne laser scanning and multispectral aerial imagery to develop a methodology for assessing landslide hazard. This is based on the extraction of key slope stability variables from the remotely sensed data. The methodology is implemented through numerical modelling, which is parameterised with the slope stability information, simulated climate conditions, and geotechnical properties. This allows determination of slope stability (expressed through the factor of safety) for a range of simulated scenarios. Regression analysis is then performed in order to develop a functional model relating slope stability to the input variables. The remotely sensed raster datasets are robustly re-sampled to two-dimensional cross-sections to facilitate meaningful interpretation of slope behaviour and mapping of landslide hazard. Results are stored in a geodatabase for spatial analysis within a GIS environment. For a test site located in England, UK, results have shown the utility of the approach in deriving practical hazard assessment information. Outcomes were compared to the network operator's hazard grading data, and show general agreement. The utility of the slope information was also assessed with respect to auto-population of slope geometry, and found to deliver significant improvements over the network operator's existing field-based approaches.
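The factor of safety mentioned above is produced by the authors' numerical model. As a minimal stand-in, the classical infinite-slope limit-equilibrium expression shows how the key slope variables (geometry, soil strength, pore pressure) combine into a stability index; all parameter values below are illustrative and are not taken from the study.

```python
import math

def infinite_slope_fs(c, phi_deg, gamma, z, beta_deg, u=0.0):
    """Infinite-slope factor of safety (Mohr-Coulomb strength).
    c: effective cohesion (kPa); phi_deg: friction angle (deg);
    gamma: soil unit weight (kN/m^3); z: slip-surface depth (m);
    beta_deg: slope angle (deg); u: pore-water pressure (kPa)."""
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    driving = gamma * z * math.sin(beta) * math.cos(beta)          # shear stress
    resisting = c + (gamma * z * math.cos(beta) ** 2 - u) * math.tan(phi)
    return resisting / driving

# Illustrative earthwork slope: stable when dry, unstable once
# rainfall raises the pore pressure (FS dropping below 1).
dry = infinite_slope_fs(c=5.0, phi_deg=30, gamma=19.0, z=2.0, beta_deg=35)
wet = infinite_slope_fs(c=5.0, phi_deg=30, gamma=19.0, z=2.0, beta_deg=35, u=10.0)
```

In the study's framework, a regression surface is fitted over many such FS evaluations under simulated climate scenarios; this closed form only illustrates the dependency structure.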
Rivers and Floodplains as Key Components of Global Terrestrial Water Storage Variability
NASA Astrophysics Data System (ADS)
Getirana, Augusto; Kumar, Sujay; Girotto, Manuela; Rodell, Matthew
2017-10-01
This study quantifies the contribution of rivers and floodplains to terrestrial water storage (TWS) variability. We use state-of-the-art models to simulate land surface processes and river dynamics and to separate TWS into its main components. Based on a proposed impact index, we show that surface water storage (SWS) contributes 8% of TWS variability globally, but that contribution differs widely among climate zones. Changes in SWS are a principal component of TWS variability in the tropics, where major rivers flow over arid regions and at high latitudes. SWS accounts for 22-27% of TWS variability in both the Amazon and Nile Basins. Changes in SWS are negligible in the Western U.S., Northern Africa, Middle East, and central Asia. Based on comparisons with Gravity Recovery and Climate Experiment-based TWS, we conclude that accounting for SWS improves simulated TWS in most of South America, Africa, and Southern Asia, confirming that SWS is a key component of TWS variability.
Secret Key Crypto Implementations
NASA Astrophysics Data System (ADS)
Bertoni, Guido Marco; Melzani, Filippo
This chapter presents the algorithm selected in 2001 as the Advanced Encryption Standard. This algorithm is the basis for implementing security and privacy based on symmetric-key solutions in almost all new applications. Secret-key algorithms are used in combination with modes of operation to provide different security properties. The most widely used modes of operation are presented in this chapter. Finally, an overview of the different techniques of software and hardware implementation is given.
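To illustrate why a block cipher needs a mode of operation, the sketch below contrasts ECB and CBC using a keyed-XOR stand-in for the AES block transform (Python's standard library has no AES, and the toy cipher is of course not secure): identical plaintext blocks produce identical ciphertext blocks in ECB, but not under CBC chaining.

```python
BLOCK = 16  # AES block size in bytes

def toy_encrypt_block(block, key):
    """Stand-in for the AES block transform: a keyed XOR (NOT secure)."""
    return bytes(b ^ k for b, k in zip(block, key))

def ecb_encrypt(data, key):
    """Electronic codebook: each block encrypted independently."""
    return b"".join(toy_encrypt_block(data[i:i + BLOCK], key)
                    for i in range(0, len(data), BLOCK))

def cbc_encrypt(data, key, iv):
    """Cipher block chaining: XOR each block with the previous ciphertext."""
    out, prev = [], iv
    for i in range(0, len(data), BLOCK):
        block = bytes(a ^ b for a, b in zip(data[i:i + BLOCK], prev))
        prev = toy_encrypt_block(block, key)
        out.append(prev)
    return b"".join(out)

key = bytes(range(16))
iv = bytes(16)                    # fixed IV for the demo only; CBC needs a fresh random IV
msg = b"A" * 16 + b"A" * 16      # two identical plaintext blocks

ecb = ecb_encrypt(msg, key)      # both ciphertext blocks are identical (pattern leak)
cbc = cbc_encrypt(msg, key, iv)  # chaining makes the second block differ
```

The same structural contrast holds with the real AES block transform; ECB's pattern leakage is why modes such as CBC, CTR, and GCM are preferred in practice.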
Zador, Zsolt; Huang, Wendy; Sperrin, Matthew; Lawton, Michael T
2018-06-01
Following the International Subarachnoid Aneurysm Trial (ISAT), evolving treatment modalities for acute aneurysmal subarachnoid hemorrhage (aSAH) have changed the case mix of patients undergoing urgent surgical clipping. To update our knowledge of outcome predictors, we analyzed admission parameters in a pure surgical series using variable importance ranking and machine learning. We reviewed a single surgeon's case series of 226 patients suffering from aSAH treated with urgent surgical clipping. Predictions were made using logistic regression models, and predictive performance was assessed using areas under the receiver operating characteristic curve (AUC). We established variable importance ranking using partial Nagelkerke R² scores. Probabilistic associations between variables were depicted using Bayesian networks, a method of machine learning. Importance ranking showed that World Federation of Neurosurgical Societies (WFNS) grade and age were the most influential outcome prognosticators. Inclusion of only these 2 predictors was sufficient to maintain model performance compared to when all variables were considered (AUC = 0.8222, 95% confidence interval (CI): 0.7646-0.88 vs 0.8218, 95% CI: 0.7616-0.8821, respectively, DeLong's P = .992). Bayesian networks showed that age and WFNS grade were associated with several variables, such as laboratory results and cardiorespiratory parameters. Our study is the first to report early outcomes and formal predictor importance ranking following aSAH in a post-ISAT surgical case series. Models showed good predictive power with fewer relevant predictors than in similar-size series. Bayesian networks proved to be a powerful tool for visualizing the widespread association of the 2 key predictors with admission variables, explaining their importance and demonstrating the potential for hypothesis generation.
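The AUC values reported above can be computed without constructing the curve itself, since the AUC equals the probability that a randomly chosen positive case outranks a randomly chosen negative one (the Mann-Whitney statistic). The scores and labels below are invented for illustration and are not the study's data.

```python
def auc(scores, labels):
    """AUC as P(score_pos > score_neg) + 0.5 * P(tie) over all pos/neg pairs."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical predicted probabilities from a 2-predictor model
# (e.g. WFNS grade and age) with binary outcomes:
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.35, 0.3, 0.2]
labels = [1,   1,   0,   1,   0,   1,    0,   0]
result = auc(scores, labels)   # 13 of 16 pos/neg pairs correctly ordered
```

Confidence intervals such as those quoted in the abstract are then obtained by resampling (e.g., bootstrap) or DeLong's variance estimate on this statistic.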
Thermoelectric power generator for variable thermal power source
Bell, Lon E; Crane, Douglas Todd
2015-04-14
Traditional power generation systems using thermoelectric power generators are designed to operate most efficiently under a single operating condition. The present invention provides a power generation system in which the characteristics of the thermoelectrics, the flow of the thermal power, and the operational characteristics of the power generator are monitored and controlled such that higher operating efficiencies and/or higher output powers can be maintained with variable thermal power input. Such a system is particularly beneficial with variable thermal power sources, such as those recovering power from the waste heat in the exhaust of combustion engines.
Acadia National Park ITS field operational test : key informant interviews
DOT National Transportation Integrated Search
2003-03-01
This document reflects the ideas and opinions of a group of key informants and stakeholders involved in the Field Operational Test of ITS components in and around Acadia National Park from 1999 through 2002. The stakeholders were involved in the plan...
Integrated vehicle-based safety systems : heavy-truck field operational test key findings report.
DOT National Transportation Integrated Search
2010-08-01
This document presents key findings from the heavy-truck field operational test conducted as : part of the Integrated Vehicle-Based Safety Systems program. These findings are the result of : analyses performed by the University of Michigan Transporta...
Integrated vehicle-based safety systems light-vehicle field operational test key findings report.
DOT National Transportation Integrated Search
2011-01-01
This document presents key findings from the light-vehicle field operational test conducted as part of the Integrated Vehicle-Based Safety Systems program. These findings are the result of analyses performed by the University of Michigan Transportati...
NASA Astrophysics Data System (ADS)
Changjiang, Xu; Dongdong, Zhang
2018-06-01
As the impacts of climate change and human activities intensify, variability may occur in a river's annual runoff as well as in its flood and low-water characteristics. To understand the characteristics of variability in hydrological series, diagnosis and identification must be conducted, i.e., whether there was variability and when the variability began to occur. In this paper, the mainstream of the Yangtze River was taken as the object of study. A model was established to simulate the impounding and operation of upstream cascade reservoirs so as to obtain the runoff of downstream hydrological control stations after regulation by upstream reservoirs in different level years. The Range of Variability Approach was used to analyze the impact of the operation of upstream reservoirs on downstream variability. The results indicated that the overall hydrologic alterations of the Yichang hydrological station in the 2010 level year, the 2015 level year and the forward level year were 68.4, 72.5 and 74.3% respectively, corresponding to high alteration in all three level years. The runoff series of mainstream hydrological stations presented variability to different degrees: the runoff series of the four hydrological stations, including Xiangjiaba, Gaochang and Wulong, showed high alteration in all three level years, while the runoff series of the Beibei hydrological station showed medium alteration in the 2010 level year and high alteration in the 2015 level year and the forward level year. The study of the impact of the operation of cascade reservoirs in the Upper Yangtze River on the hydrological variability of the mainstream has important practical significance for the sustainable utilization of water resources, disaster prevention and mitigation, the safe and efficient operation and management of water conservancy projects, and the stable development of the economy and society.
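A minimal sketch of the Range of Variability Approach used above: for a single hydrologic indicator, the RVA target range is commonly taken as the 25th-75th percentile of the pre-impact years, and the degree of alteration compares the observed and expected counts of post-impact years falling inside that range. The <33% / 33-67% / >67% grading follows the standard RVA literature (Richter et al.); the flow values below are invented.

```python
from statistics import quantiles

def hydrologic_alteration(pre, post):
    """Degree of hydrologic alteration (%) for one indicator.
    Target range = interquartile range of pre-impact years, so half of
    the post-impact years are expected to fall inside it by construction."""
    q1, _, q3 = quantiles(pre, n=4)          # 25th and 75th percentiles
    observed = sum(1 for x in post if q1 <= x <= q3)
    expected = 0.5 * len(post)
    d = abs(observed - expected) / expected * 100
    grade = "low" if d < 33 else "medium" if d <= 67 else "high"
    return d, grade

# Hypothetical annual runoff (arbitrary units): pre-dam vs regulated years
pre  = [820, 900, 760, 1010, 880, 950, 790, 870, 930, 840, 980, 800]
post = [600, 650, 700, 620, 590, 710, 640, 660]   # flows shifted below the target range

d, grade = hydrologic_alteration(pre, post)
```

The full RVA repeats this for the 32 Indicators of Hydrologic Alteration and summarizes them into an overall alteration, as the percentages quoted in the abstract do.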
On-line identification of fermentation processes for ethanol production.
Câmara, M M; Soares, R M; Feital, T; Naomi, P; Oki, S; Thevelein, J M; Amaral, M; Pinto, J C
2017-07-01
A strategy for monitoring fermentation processes, specifically simultaneous saccharification and fermentation (SSF) of corn mash, was developed. The strategy covers the development and use of a first-principles, semi-mechanistic and unstructured process model based on the major kinetic phenomena, along with mass and energy balances. The model was then used as a reference model within an identification procedure capable of running on-line. The on-line identification procedure consists of updating the reference model through the estimation of corrective parameters for certain reaction rates using the most recent process measurements. The strategy makes use of standard laboratory measurements for sugar quantification and in situ temperature and liquid level data. The model, along with the on-line identification procedure, has been tested against real industrial data and has been able to accurately predict the main variables of operational interest, i.e., state variables and their dynamics, and key process indicators. The results demonstrate that the strategy is capable of monitoring, in real time, this complex industrial biomass fermentation. This new tool provides strong support for decision-making and opens a new range of opportunities for industrial optimization.
Lumped Multi-Bubble Analysis of Injection Cooling System for Storage of Cryogenic Liquids
NASA Astrophysics Data System (ADS)
Saha, Pritam; Sandilya, Pavitra
2017-12-01
Storage of cryogenic liquids is a critical issue in many cryogenic applications. Subcooling of the liquid by bubbling a gas has been suggested to extend the storage period by reducing the boil-off loss. Liquid evaporation into the gas may cause liquid subcooling by extracting the latent heat of vaporization from the liquid. The present study aims at studying the factors affecting the liquid subcooling during gas injection. A lumped parameter model is presented to capture the effects of bubble dynamics (coalescence, breakup, deformation etc.) on the heat and mass transport between the gas and the liquid. The liquid subcooling has been estimated as a function of the key operating variables such as gas flow rate and gas injection temperature. Numerical results have been found to predict the change in the liquid temperature drop reasonably well when compared with the previously reported experimental results. This modelling approach can therefore be used in gauging the significance of various process variables on the liquid subcooling by injection cooling, as well as in designing and rating an injection cooling system.
Data Analysis & Statistical Methods for Command File Errors
NASA Technical Reports Server (NTRS)
Meshkat, Leila; Waggoner, Bruce; Bryant, Larry
2014-01-01
This paper explains current work on modeling for managing the risk of command file errors. It is focused on analyzing actual data from a JPL spaceflight mission to build models for evaluating and predicting error rates as a function of several key variables. We constructed a rich dataset by considering the number of errors and the number of files radiated, including the number of commands and blocks in each file, as well as subjective estimates of workload and operational novelty. We assessed these data using different curve-fitting and distribution-fitting techniques, such as multiple regression analysis and maximum likelihood estimation, to see how much of the variability in the error rates can be explained by them. We also used goodness-of-fit testing strategies and principal component analysis to further assess our data. Finally, we constructed a model of expected error rates based on what these statistics bore out as the critical drivers of the error rate. This model allows project management to evaluate the error rate against a theoretically expected rate as well as anticipate future error rates.
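As one ingredient of the curve fitting described above, a closed-form ordinary least squares fit of error rate against a single candidate driver such as workload can be sketched as follows; the weekly numbers are invented for illustration, not the mission data.

```python
from statistics import mean

def ols(x, y):
    """Ordinary least squares for y = a + b*x (closed-form normal equations)."""
    mx, my = mean(x), mean(y)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return a, b

# Hypothetical weekly data: subjective workload score vs command-file error rate
workload   = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
error_rate = [0.8, 1.1, 1.9, 2.2, 2.8, 3.3]

a, b = ols(workload, error_rate)   # positive slope: rate rises with workload
```

The multiple-regression and maximum-likelihood analyses in the paper generalize this to several predictors and to count-data distributions, which is where goodness-of-fit testing becomes necessary.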
Cobas, M; Sanromán, M A; Pazos, M
2014-05-01
This study focused on the treatment of leather industry effluents by biosorption using Fucus vesiculosus as a low-cost adsorbent. These effluents have a yellowish-brown color and a high concentration of Cr(VI). The biosorption process was therefore optimized using response surface methodology based on a Box-Behnken design, operating with a simulated leather effluent obtained by mixing a Cr(VI) solution and four leather dyes. The key variables selected were initial solution pH, biomass dosage, and CaCl2 concentration in the pretreatment stage. The statistical analysis shows that pH has a negligible effect, with biomass dosage and CaCl2 concentration being the most significant variables. At optimal conditions, 98% Cr(VI) and 88% dye removal can be achieved. The Freundlich isotherm fitted the obtained equilibrium data better than the Temkin, Langmuir or D-R models for all studied systems. In addition, the use of the final biosorbent as a support-substrate for the growth of the enzyme-producing fungus Pleurotus ostreatus was also demonstrated.
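A Box-Behnken design of the kind used above is easy to generate in coded units: every pair of factors is run at the four ±1 combinations while the remaining factors stay at their centre level, plus replicate centre points. The three factors below match the abstract's key variables (pH, biomass dosage, CaCl2 concentration); the choice of three centre points is an assumption, since the abstract does not state the run count.

```python
from itertools import combinations, product

def box_behnken(k, center_points=3):
    """Box-Behnken design in coded units (-1, 0, +1) for k factors."""
    runs = []
    for i, j in combinations(range(k), 2):      # each pair of factors
        for li, lj in product((-1, 1), repeat=2):
            run = [0] * k                        # remaining factors at centre
            run[i], run[j] = li, lj
            runs.append(run)
    runs.extend([[0] * k for _ in range(center_points)])
    return runs

# Coded design for the three variables: pH, biomass dosage, CaCl2 concentration
design = box_behnken(3)   # 12 edge-midpoint runs + 3 centre runs = 15 runs
```

Each coded row is then mapped to physical factor levels, and a quadratic response surface is fitted to the measured removal efficiencies.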
Renewable Electricity Futures: Exploration of a U.S. Grid with 80% Renewable Electricity
NASA Astrophysics Data System (ADS)
Mai, Trieu
2013-04-01
Renewable Electricity Futures is an initial investigation of the extent to which renewable energy supply can meet the electricity demands of the contiguous United States over the next several decades. This study explores the implications and challenges of very high renewable electricity generation levels: from 30% up to 90% (focusing on 80%) of all U.S. electricity generation from renewable technologies in 2050. At such high levels of renewable electricity penetration, the unique characteristics of some renewable resources, specifically geographical distribution and variability and uncertainty in output, pose challenges to the operability of the nation's electric system. The study focuses on key technical implications of this environment from a national perspective, exploring whether the U.S. power system can supply electricity to meet customer demand on an hourly basis with high levels of renewable electricity, including variable wind and solar generation. The study also identifies some of the potential economic, environmental, and social implications of deploying and integrating high levels of renewable electricity in the U.S. The full report and associated supporting information is available at: http://www.nrel.gov/analysis/refutures/.
Multi-party quantum key agreement with five-qubit brown states
NASA Astrophysics Data System (ADS)
Cai, Tao; Jiang, Min; Cao, Gang
2018-05-01
In this paper, we propose a multi-party quantum key agreement protocol with five-qubit brown states and single-qubit measurements. Our multi-party protocol ensures that each participant contributes equally to the agreement key. Each party performs three single-qubit unitary operations on three qubits of each brown state. Finally, by measuring the brown states and decoding the measurement results, all participants can negotiate a shared secret key without any exchange of classical bits between them. Our security analysis demonstrates that the protocol can resist both outsider and participant attacks. Compared with other schemes, it also possesses higher information efficiency. In terms of physical operation, it requires only single-qubit measurements, which lowers the hardware requirements for participants and offers better operating flexibility.
A conceptual framework of outcomes for caregivers of assistive technology users.
Demers, Louise; Fuhrer, Marcus J; Jutai, Jeffrey; Lenker, James; Depa, Malgorzata; De Ruyter, Frank
2009-08-01
To develop and validate the content of a conceptual framework concerning outcomes for caregivers whose recipients are assistive technology users. The study was designed in four stages. First, a list of potential key variables relevant to the caregivers of assistive technology users was generated from a review of the existing literature and semistructured interviews with caregivers. Second, the variables were analyzed, regrouped, and partitioned, using a conceptual mapping approach. Third, the key areas were anchored in a general stress model of caregiving. Finally, the judgments of rehabilitation experts were used to evaluate the conceptual framework. An important result of this study is the identification of a complex set of variables that need to be considered when examining the experience of caregivers of assistive technology users. Stressors, such as types of assistance, number of tasks, and physical effort, are predominant contributors to caregiver outcomes along with caregivers' personal resources acting as mediating factors (intervening variables) and assistive technology acting as a key moderating factor (effect modifier variable). Recipients' use of assistive technology can enhance caregivers' well being because of its potential for alleviating a number of stressors associated with caregiving. Viewed as a whole, this work demonstrates that the assistive technology experience of caregivers has many facets that merit the attention of outcomes researchers.
The HPT Value Proposition in the Larger Improvement Arena.
ERIC Educational Resources Information Center
Wallace, Guy W.
2003-01-01
Discussion of human performance technology (HPT) emphasizes the key variable, which is the human variable. Highlights include the Ishikawa Diagram; human performance as one variable of process performance; collaborating with other improvement approaches; value propositions; and benefits to stakeholders, including real return on investments. (LRW)
40 CFR 63.1207 - What are the performance testing requirements?
Code of Federal Regulations, 2010 CFR
2010-07-01
... operating conditions that are most likely to reflect daily maximum operating variability, similar to a... operating variability, similar to a dioxin/furan compliance test; (B) You have not changed the design or... document the temperature location measurement in the comprehensive performance test plan, as required by...
Thurlow, W R
1980-01-01
Experiments with keyboard arrangements of letters show that simple alphabetic letter-key sequences with 4 to 5 letters in a row lead to most rapid visual search performance. Such arrangements can be used on keyboards operated by the index finger of one hand. Arrangement of letters in words offers a promising alternative because these arrangements can be readily memorized and can result in small interletter distances on the keyboard for frequently occurring letter sequences. Experiments on operation of keyboards show that a space or shift key operated by the left hand (which also holds the communication device) results in faster keyboard operation than when space or shift keys on the front of the keyboard (operated by right hand) are used. Special problems of the deaf-blind are discussed. Keyboard arrangements are investigated, and matching tactual codes are suggested.
Quantum Public Key Cryptosystem Based on Bell States
NASA Astrophysics Data System (ADS)
Wu, WanQing; Cai, QingYu; Zhang, HuanGuo; Liang, XiaoYan
2017-11-01
Classical public key cryptosystems (PKC), such as RSA, ElGamal and ECC, are no longer secure against quantum algorithms, and quantum cryptography has become a novel research topic. In this paper we present a quantum asymmetric cryptosystem, i.e., a quantum public key cryptosystem (QPKC), based on Bell states. In particular, in the proposed QPKC the public key is given by the first n particles of Bell states and generalized Pauli operations. The corresponding secret key is the last n particles of Bell states and the inverse of the generalized Pauli operations. The proposed QPKC encrypts the message using the public key and decrypts the ciphertext using the private key. By Holevo's theorem, we prove the security of the secret key and of the messages during QPKC.
Implementation of continuous-variable quantum key distribution with discrete modulation
NASA Astrophysics Data System (ADS)
Hirano, Takuya; Ichikawa, Tsubasa; Matsubara, Takuto; Ono, Motoharu; Oguri, Yusuke; Namiki, Ryo; Kasai, Kenta; Matsumoto, Ryutaroh; Tsurumaru, Toyohiro
2017-06-01
We have developed a continuous-variable quantum key distribution (CV-QKD) system that employs discrete quadrature-amplitude modulation and homodyne detection of coherent states of light. We experimentally demonstrated automated secure key generation with a rate of 50 kbps when a quantum channel is a 10 km optical fibre. The CV-QKD system utilises a four-state and post-selection protocol and generates a secure key against the entangling cloner attack. We used a pulsed light source of 1550 nm wavelength with a repetition rate of 10 MHz. A commercially available balanced receiver is used to realise shot-noise-limited pulsed homodyne detection. We used a non-binary LDPC code for error correction (reverse reconciliation) and the Toeplitz matrix multiplication for privacy amplification. A graphical processing unit card is used to accelerate the software-based post-processing.
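The Toeplitz-matrix privacy amplification step mentioned above amounts to a binary matrix-vector product over GF(2), with the Toeplitz matrix defined by a single random bit string of length m + n - 1. The key lengths and indexing convention below are illustrative; a real system hashes much longer blocks and derives the output length m from the estimated information leaked to an eavesdropper.

```python
import random

def toeplitz_hash(bits, diag, m):
    """Privacy amplification with a binary Toeplitz matrix over GF(2).
    bits: reconciled key (list of 0/1 bits, length n)
    diag: m + n - 1 random bits defining the matrix, T[i][j] = diag[i - j + n - 1]
    m:    output key length (m < n compresses the key)."""
    n = len(bits)
    assert len(diag) == m + n - 1
    out = []
    for i in range(m):
        acc = 0
        for j in range(n):
            acc ^= diag[i - j + n - 1] & bits[j]   # GF(2) dot product
        out.append(acc)
    return out

random.seed(7)
n, m = 16, 8
key  = [random.randint(0, 1) for _ in range(n)]          # reconciled raw key
seed = [random.randint(0, 1) for _ in range(m + n - 1)]  # public random seed
short = toeplitz_hash(key, seed, m)                      # amplified secret key
```

Toeplitz hashing is used here because the family is 2-universal and the structure allows FFT-based evaluation at scale; the hash is also linear over GF(2), so hashing the XOR of two keys equals the XOR of their hashes.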
Pelagic Habitat Analysis Module (PHAM) for GIS Based Fisheries Decision Support
NASA Technical Reports Server (NTRS)
Kiefer, D. A.; Armstrong, Edward M.; Harrison, D. P.; Hinton, M. G.; Kohin, S.; Snyder, S.; O'Brien, F. J.
2011-01-01
We have assembled a system that integrates satellite and model output with fisheries data, and we have developed tools that allow analysis of the interaction between species and key environmental variables. We have demonstrated the capacity to accurately map the habitat of the thresher sharks Alopias vulpinus and A. pelagicus. Their seasonal migration along the California Current is at least partly driven by the seasonal migration of sardine, a key prey of the sharks.
High performance reconciliation for continuous-variable quantum key distribution with LDPC code
NASA Astrophysics Data System (ADS)
Lin, Dakai; Huang, Duan; Huang, Peng; Peng, Jinye; Zeng, Guihua
2015-03-01
Reconciliation is a significant procedure in a continuous-variable quantum key distribution (CV-QKD) system. It is employed to extract a secure secret key from the string resulting from transmission through the quantum channel between two users. However, the efficiency and speed of previous reconciliation algorithms are low. These problems limit the secure communication distance and the secure key rate of CV-QKD systems. In this paper, we propose a high-speed reconciliation algorithm employing a well-structured decoding scheme based on low-density parity-check (LDPC) codes. The complexity of the proposed algorithm is considerably reduced. By using a graphics processing unit (GPU) device, our method may reach a reconciliation speed of 25 Mb/s for a CV-QKD system, which is currently the highest level and paves the way to high-speed CV-QKD.
Modelling Tradeoffs Evolution in Multipurpose Water Systems Operation in Response to Extreme Events
NASA Astrophysics Data System (ADS)
Mason, E.; Gazzotti, P.; Amigoni, F.; Giuliani, M.; Castelletti, A.
2015-12-01
Multipurpose water resource systems are usually operated on a tradeoff of the operating objectives which, under steady-state climatic and socio-economic boundary conditions, is supposed to ensure a fair and/or efficient balance among the conflicting interests. Extreme variability in the system's drivers might affect operators' risk aversion and force a change in the tradeoff. Properly accounting for these shifts is key to any rigorous retrospective assessment of operators' behavior and the associated system performance. In this study, we explore how the selection of different optimal tradeoffs among the operating objectives is linked to variations in the boundary conditions, such as a drifting rainfall season or remarkable changes in crop and energy prices. We argue that tradeoff selection is driven by recent, extreme variations in system performance: underperforming on one operating objective's target value should push the tradeoff toward the disadvantaged objective. To test this assumption, we developed a rational procedure to simulate the operators' tradeoff selection process. We map the selection onto a multilateral negotiation process, where multiple virtual agents optimize different operating objectives. The agents periodically negotiate a compromise on the operating policy. Each agent's rigidity in a negotiation round is determined by the recent system performance on the specific objective it represents. The negotiation follows a set-based egocentric monotonic concession protocol: at each negotiation step an agent incrementally adds options to the set of its acceptable compromises and (possibly) accepts lower and lower satisfying policies until an agreement is achieved. We apply this reiterated negotiation framework to the regulated Lake Como, Italy, simulating the lake dam operation and its recurrent updates over the last 50 years.
The operation aims to balance shoreline flood prevention against irrigation deficit control in the downstream irrigated areas. The results of our simulated negotiations accurately capture the operator's risk-aversion changes as driven by extreme wet and dry situations, and reproduce the observed release data well.
The preferable keypad layout for ease of pressing small cell phone keys with the thumb.
Muraki, Satoshi; Okabe, Keiichi; Abe, Tetsuji; Sai, Akishige
2010-12-01
The present study investigated the effect of keypad layout on the ease of operating small cell phones with the thumb in one-handed operations by young and elderly male and female participants. Eighteen young participants (9 males and 9 females) and 12 elderly participants (6 males and 6 females) operated 9 different keypads modeled after commercially available cordless handsets. Keypads designed by using the L9 orthogonal array differed in vertical pitch (V-Pitch: 7, 8, 9 mm) between keys, horizontal pitch (H-Pitch: 10, 11, 12 mm) between keys, the margin below the bottom row of keys (B-Margin: 5, 13, 21 mm), and phone body width (P-Width: 38, 41, 44 mm). Results concerning subjective overall usability showed the lowest scores for a V-Pitch of 7 mm and a B-Margin of 5 mm in most groups. However, for the female participants, with shorter thumbs, the increase in V-pitch did not improve operability. In the elderly participants, miskeying frequently occurred at dial keys of specific numbers. These findings suggest that the preferable keypad layout differs between different age groups and between male and female participants.
Variability in hand-arm vibration during grinding operations.
Liljelind, Ingrid; Wahlström, Jens; Nilsson, Leif; Toomingas, Allan; Burström, Lage
2011-04-01
Measurements of exposure to vibrations from hand-held tools are often conducted on a single occasion. However, repeated measurements may be crucial for estimating the actual dose with good precision. In addition, knowledge of determinants of exposure could be used to improve working conditions. The aim of this study was to assess hand-arm vibration (HAV) exposure during different grinding operations, in order to obtain estimates of the variance components and to evaluate the effect of work postures. Ten experienced operators used two compressed air-driven angle grinders of the same make in a simulated work task at a workplace. One part of the study consisted of using a grinder while assuming two different working postures: at a standard work bench (low) and on a wall with arms elevated and the work area adjusted to each operator's height (high). The workers repeated the task three times. In another part of the study, investigating the wheel wear, for each grinder, the operators used two new grinding wheels and with each wheel the operator performed two consecutive 1-min grinding tasks. Both grinding tasks were conducted on weld puddles of mild steel on a piece of mild steel. Measurements were taken according to ISO-standard 5349 [the equivalent hand-arm-weighted acceleration (m s(-2)) averaged over 1 min]. Mixed- and random-effects models were used to investigate the influence of the fixed variables and to estimate variance components. The equivalent hand-arm-weighted acceleration assessed when the task was performed on the bench and at the wall was 3.2 and 3.3 m s(-2), respectively. In the mixed-effects model, work posture was not a significant variable. The variables 'operator' and 'grinder' together explained only 12% of the exposure variability and 'grinding wheel' explained 47%; the residual variability of 41% remained unexplained. 
When the effect of grinding wheel wear was investigated in the random-effects model, 37% of the variability was associated with the wheel while minimal variability was associated with the operator or the grinder and 37% was unexplained. The interaction effect of grinder and operator explained 18% of the variability. In the wheel wear test, the equivalent hand-arm-weighted accelerations for Grinder 1 during the first and second grinding minutes were 3.4 and 2.9 m s^(-2), respectively, and for Grinder 2, they were 3.1 and 2.9 m s^(-2), respectively. For Grinder 1, the equivalent hand-arm-weighted acceleration during the first grinding minute was significantly higher (P = 0.04) than during the second minute. Work posture during grinding operations does not appear to affect the level of HAV. Grinding wheels explained much of the variability in this study, but almost 40% of the variance remained unexplained. The considerable variability in the equivalent hand-arm-weighted acceleration has an impact on the risk assessment at both the group and the individual level.
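The variance-component decomposition described above can be illustrated with a one-way random-effects estimator based on expected mean squares. This is a minimal sketch; the data, group sizes, and single random factor (wheel) are hypothetical stand-ins, and the study itself used richer mixed- and random-effects models with several factors.

```python
# Illustrative variance-component estimation for repeated vibration
# measurements, loosely following the study's random-effects idea.
# All data values and the single-factor structure are hypothetical.

def variance_components(groups):
    """One-way random-effects ANOVA estimator (expected mean squares).

    groups: list of lists, each inner list holding repeated
    hand-arm-weighted accelerations (m s^(-2)) for one grinding wheel,
    balanced design assumed. Returns (between-wheel variance,
    within-wheel/residual variance).
    """
    k = len(groups)                      # number of wheels
    n = len(groups[0])                   # replicates per wheel
    grand = sum(sum(g) for g in groups) / (k * n)
    means = [sum(g) / n for g in groups]
    # Mean squares between and within groups
    msb = n * sum((m - grand) ** 2 for m in means) / (k - 1)
    msw = sum((x - m) ** 2
              for g, m in zip(groups, means) for x in g) / (k * (n - 1))
    sigma2_within = msw
    sigma2_between = max((msb - msw) / n, 0.0)   # truncate at zero
    return sigma2_between, sigma2_within

# Hypothetical accelerations for three wheels, four replicates each
wheels = [[3.4, 3.5, 3.3, 3.6], [2.9, 3.0, 2.8, 2.9], [3.2, 3.1, 3.3, 3.2]]
sb, sw = variance_components(wheels)
share_wheel = sb / (sb + sw)   # fraction of variability due to the wheel
```

With these made-up numbers most of the variability falls on the wheel, mirroring the qualitative finding that 'grinding wheel' dominated the explained variance.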
NASA Astrophysics Data System (ADS)
von Ruette, J.; Lehmann, P.; Or, D.
2014-10-01
The occurrence of shallow landslides is often associated with intense and prolonged rainfall events, where infiltrating water reduces soil strength and may lead to abrupt mass release. Despite a general understanding of the role of rainfall water in slope stability, the prediction of rainfall-induced landslides remains a challenge due to natural heterogeneity that affects hydrologic loading patterns and to largely unobservable internal progressive failures. An often overlooked and potentially important factor is the effect of spatial and temporal rainfall variability on landslide triggering, which is often obscured by coarse information (e.g., hourly radar data at a spatial resolution of a few kilometers). To quantify the potential effects of rainfall variability on failure dynamics, spatial patterns, and landslide numbers and volumes, we employed a physically based "Catchment-scale Hydromechanical Landslide Triggering" (CHLT) model for a study area where a summer storm in 2002 triggered 51 shallow landslides. In numerical experiments based on the CHLT model, we applied the measured rainfall amount of 53 mm in different artificial spatiotemporal rainfall patterns, resulting in between 30 and 100 landslides and total released soil volumes between 3000 and 60,000 m3 for the various scenarios. Results indicate that low-intensity rainfall below the soil's infiltration capacity resulted in the largest mechanical perturbation. This study illustrates how small-scale rainfall variability, often missed by present operational rainfall data, may play a key role in shaping landslide patterns.
On the Past, Present, and Future of Eastern Boundary Upwelling Systems
NASA Astrophysics Data System (ADS)
Bograd, S. J.; Black, B.; Garcia-Reyes, M.; Rykaczewski, R. R.; Thompson, S. A.; Turley, B. D.; van der Sleen, P.; Sydeman, W. J.
2016-12-01
Coastal upwelling in Eastern Boundary Upwelling Systems (EBUS) drives high productivity and marine biodiversity and supports lucrative commercial fishing operations. There is thus significant interest in understanding the mechanisms underlying variations in the upwelling process, its drivers, and potential changes under global warming. Here we review recent results from a combination of regional and global observations, reanalysis products, and climate model projections that describe variability in coastal upwelling in EBUS. Key findings include: (1) interannual variability in California Current upwelling occurs in two orthogonal seasonal modes: a winter/early spring mode dominated by interannual variability and a summer mode dominated by a long-term increasing trend; (2) there is substantial coherence in year-to-year variability between this winter/spring upwelling mode and upper-trophic-level demographic processes, including fish growth rates (rockfish and salmon) and seabird phenology, breeding success, and survival; (3) a meta-analysis of the existing literature suggests consistency with the Bakun (1990) hypothesis that rising global greenhouse-gas concentrations would result in upwelling-favorable wind intensification; however, (4) an ensemble of coupled global ocean-atmosphere models finds limited evidence for intensification of upwelling-favorable winds over the 21st century, although summertime winds near the poleward boundaries of climatological upwelling zones are projected to intensify. We also review a new comparative research program between the California and Benguela Upwelling Systems, including efforts to understand patterns of change and variation between climate, upwelling, fish, and seabirds.
Numerical Prediction of CCV in a PFI Engine using a Parallel LES Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ameen, Muhsin M; Mirzaeian, Mohsen; Millo, Federico
Cycle-to-cycle variability (CCV) is detrimental to IC engine operation and can lead to partial burn, misfire, and knock. Predicting CCV numerically is extremely challenging for two key reasons. First, high-fidelity methods such as large eddy simulation (LES) are required to accurately resolve the in-cylinder turbulent flowfield both spatially and temporally. Second, CCV is experienced over long timescales, and hence the simulations need to be performed for hundreds of consecutive cycles. Ameen et al. (Int. J. Eng. Res., 2017) developed a parallel perturbation model (PPM) approach to dissociate this long-timescale problem into several shorter-timescale problems. The strategy is to perform multiple single-cycle simulations in parallel by effectively perturbing the initial velocity field based on the intensity of the in-cylinder turbulence. This strategy was demonstrated for a motored engine, and it was shown that the mean and variance of the in-cylinder flowfield were captured reasonably well by this approach. In the present study, the PPM approach is extended to simulate the CCV in a fired port-fuel-injected (PFI) SI engine. Two operating conditions are considered: a medium-CCV case corresponding to 2500 rpm and 16 bar BMEP, and a low-CCV case corresponding to 4000 rpm and 12 bar BMEP. The predictions from this approach are shown to be similar to those of consecutive LES cycles. Both the consecutive and PPM LES cycles are observed to under-predict the variability in the early stage of combustion. The parallel approach slightly underpredicts the cyclic variability at all stages of combustion compared to the consecutive LES cycles. However, it is shown that the parallel approach is able to predict the coefficient of variation (COV) of the in-cylinder pressure and burn-rate-related parameters with sufficient accuracy, and is also able to predict the qualitative trends in CCV with changing operating conditions.
The convergence of the statistics predicted by the PPM approach with respect to the number of consecutive cycles required for each parallel simulation is also investigated. It is shown that this new approach is able to give accurate predictions of the CCV in fired engines in less than one-tenth of the time required for the conventional approach of simulating consecutive engine cycles.
Historic range of variability for upland vegetation in the Medicine Bow National Forest, Wyoming
Gregory K. Dillon; Dennis H. Knight; Carolyn B. Meyer
2005-01-01
An approach for synthesizing the results of ecological research pertinent to land management is the analysis of the historic range of variability (HRV) for key ecosystem variables that are affected by management activities. This report provides an HRV analysis for the upland vegetation of the Medicine Bow National Forest in southeastern Wyoming. The variables include...
Historic range of variability for upland vegetation in the Bighorn National Forest, Wyoming
Carolyn B. Meyer; Dennis H. Knight; Gregory K. Dillon
2005-01-01
An approach for synthesizing the results of ecological research pertinent to land management is the analysis of the historic range of variability (HRV) for key ecosystem variables that are affected by management activities. This report provides an HRV analysis for the upland vegetation of the Bighorn National Forest in northcentral Wyoming. The variables include live...
Operational Impacts of Operating Reserve Demand Curves on Production Cost and Reliability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krad, Ibrahim; Ibanez, Eduardo; Ela, Erik
The electric power industry landscape is continually evolving. As emerging technologies such as wind, solar, electric vehicles, and energy storage systems become more cost-effective and present in the system, traditional power system operating strategies will need to be reevaluated. The presence of wind and solar generation (commonly referred to as variable generation) may result in an increase in the variability and uncertainty of the net load profile. One mechanism to mitigate this is to schedule and dispatch additional operating reserves. These operating reserves aim to ensure that there is enough capacity online in the system to account for the increased variability and uncertainty occurring at finer temporal resolutions. A new operating reserve strategy, referred to as flexibility reserve, has been introduced in some regions. A similar implementation is explored in this paper, and its implications on power system operations are analyzed.
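One simple way to picture a flexibility-reserve requirement is to size it from the distribution of historical net-load ramps. The sketch below is an illustrative assumption only: the percentile level, the data, and the sizing rule are hypothetical and do not reproduce the paper's actual operating reserve demand curve.

```python
# Hypothetical sketch: size a flexibility reserve from the distribution
# of historical upward net-load ramps (load minus wind/solar output).
# The 95th-percentile rule is an illustrative assumption, not the
# paper's formulation.

def flexibility_reserve(net_load, level=0.95):
    """Return a reserve requirement (MW) covering `level` of the
    observed upward ramps between consecutive intervals."""
    ramps = sorted(b - a for a, b in zip(net_load, net_load[1:]) if b > a)
    if not ramps:
        return 0.0
    idx = min(int(level * len(ramps)), len(ramps) - 1)
    return ramps[idx]

# Hypothetical hourly net load (MW) over part of a day
net_load = [900, 950, 1040, 1100, 1080, 1150, 1260, 1300,
            1280, 1240, 1200, 1230, 1310, 1400, 1380, 1350]
requirement = flexibility_reserve(net_load)
```

A finer temporal resolution (e.g., 5-minute intervals) would capture the sub-hourly variability the abstract emphasizes; the rule itself is unchanged.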
Sources of Sex Discrimination in Educational Systems: A Conceptual Model
ERIC Educational Resources Information Center
Kutner, Nancy G.; Brogan, Donna
1976-01-01
A conceptual model is presented relating numerous variables contributing to sexism in American education. Discrimination is viewed as intervening between two sets of interrelated independent variables and the dependent variable of sex inequalities in educational attainment. Sex-role orientation changes are the key to significant change in the…
Relative Reinforcer Rates and Magnitudes Do Not Control Concurrent Choice Independently
ERIC Educational Resources Information Center
Elliffe, Douglas; Davison, Michael; Landon, Jason
2008-01-01
One assumption of the matching approach to choice is that different independent variables control choice independently of each other. We tested this assumption for reinforcer rate and magnitude in an extensive parametric experiment. Five pigeons responded for food reinforcement on switching-key concurrent variable-interval variable-interval…
Security of continuous-variable quantum key distribution against general attacks.
Leverrier, Anthony; García-Patrón, Raúl; Renner, Renato; Cerf, Nicolas J
2013-01-18
We prove the security of Gaussian continuous-variable quantum key distribution with coherent states against arbitrary attacks in the finite-size regime. In contrast to previously known proofs of principle (based on the de Finetti theorem), our result is applicable in the practically relevant finite-size regime. This is achieved using a novel proof approach, which exploits phase-space symmetries of the protocols as well as the postselection technique introduced by Christandl, Koenig, and Renner [Phys. Rev. Lett. 102, 020504 (2009)].
Framework for assessing key variable dependencies in loose-abrasive grinding and polishing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taylor, J.S.; Aikens, D.M.; Brown, N.J.
1995-12-01
This memo describes a framework for identifying all key variables that determine the figuring performance of loose-abrasive lapping and polishing machines. This framework is intended as a tool for prioritizing R&D issues, assessing the completeness of process models and experimental data, and for providing a mechanism to identify any assumptions in analytical models or experimental procedures. Future plans for preparing analytical models or performing experiments can refer to this framework in establishing the context of the work.
Ultrasonic determination of recrystallization
NASA Technical Reports Server (NTRS)
Generazio, E. R.
1986-01-01
Ultrasonic attenuation was measured for cold worked Nickel 200 samples annealed at increasing temperatures. Localized dislocation density variations, crystalline order, and volume percent of recrystallized phase were determined over the anneal temperature range using transmission electron microscopy, X-ray diffraction, and metallurgy. The exponent of the frequency dependence of the attenuation was found to be a key variable relating ultrasonic attenuation to the thermal kinetics of the recrystallization process. Identification of this key variable allows for the ultrasonic determination of onset, degree, and completion of recrystallization.
Huben, Neil; Hussein, Ahmed; May, Paul; Whittum, Michelle; Kraswowki, Collin; Ahmed, Youssef; Jing, Zhe; Khan, Hijab; Kim, Hyung; Schwaab, Thomas; Underwood III, Willie; Kauffman, Eric; Mohler, James L; Guru, Khurshid A
2018-04-10
To develop a methodology for predicting operative times for robot-assisted radical prostatectomy (RARP) using preoperative patient, disease, procedural and surgeon variables to facilitate operating room (OR) scheduling. The model included preoperative metrics: BMI, ASA score, clinical stage, National Comprehensive Cancer Network (NCCN) risk, prostate weight, nerve-sparing status, extent and laterality of lymph node dissection, and operating surgeon (6 surgeons were included in the study). A binary decision tree was fit using a conditional inference tree method to predict operative times. The variables most associated with operative time were determined using permutation tests. The data were split at the value of the variable that yields the largest difference in mean surgical time across the split. This process was repeated recursively on the resultant data. 1709 RARPs were included. The variable most strongly associated with operative time was the surgeon (surgeons 2 and 4 were on average 102 minutes shorter than surgeons 1, 3, 5, and 6; p<0.001). Among surgeons 2 and 4, BMI had the strongest association with surgical time (p<0.001). Among patients operated on by surgeons 1, 3, 5 and 6, RARP time was again most strongly associated with the surgeon performing RARP. Surgeons 1, 3, and 6 were on average 76 minutes faster than surgeon 5 (p<0.001). The regression tree output, in the form of box plots, showed the median and range of operative time according to patient, disease, procedural and surgeon metrics. We developed a methodology that can predict operative times for RARP based on patient, disease and surgeon variables. This methodology can be utilized for quality control, facilitate OR scheduling and maximize OR efficiency.
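The recursive splitting described above can be sketched with a toy CART-style regression tree. Note the hedges: the study used conditional inference trees with permutation tests, which this minimal version does not implement, and the surgeon/BMI data below are invented for illustration.

```python
# Toy regression-tree sketch of the recursive-splitting idea:
# split cases at the predictor value giving the largest reduction in
# operative-time variance, then recurse. Hypothetical data; not the
# paper's conditional inference tree (no permutation tests here).

def best_split(rows, times):
    """Find (SSE, feature index, threshold) minimizing summed squared error."""
    def sse(vals):
        if not vals:
            return 0.0
        m = sum(vals) / len(vals)
        return sum((v - m) ** 2 for v in vals)
    best = None
    for j in range(len(rows[0])):
        for t in sorted({r[j] for r in rows}):
            left = [y for r, y in zip(rows, times) if r[j] <= t]
            right = [y for r, y in zip(rows, times) if r[j] > t]
            if not left or not right:
                continue
            score = sse(left) + sse(right)
            if best is None or score < best[0]:
                best = (score, j, t)
    return best

def build_tree(rows, times, depth=2):
    """Return a nested dict tree; leaves hold the mean operative time."""
    if depth == 0 or len(set(times)) == 1:
        return sum(times) / len(times)
    split = best_split(rows, times)
    if split is None:
        return sum(times) / len(times)
    _, j, t = split
    li = [i for i, r in enumerate(rows) if r[j] <= t]
    ri = [i for i, r in enumerate(rows) if r[j] > t]
    return {"feature": j, "thresh": t,
            "left": build_tree([rows[i] for i in li],
                               [times[i] for i in li], depth - 1),
            "right": build_tree([rows[i] for i in ri],
                                [times[i] for i in ri], depth - 1)}

def predict(node, row):
    while isinstance(node, dict):
        node = node["left"] if row[node["feature"]] <= node["thresh"] else node["right"]
    return node

# Hypothetical cases: [surgeon id, BMI]; operative times in minutes
X = [[2, 24], [2, 35], [4, 28], [4, 40], [1, 26], [3, 31], [5, 29], [5, 38]]
y = [150, 190, 160, 200, 260, 255, 300, 320]
tree = build_tree(X, y)
```

Each leaf's mean plays the role of the predicted operative time for that patient/surgeon stratum, analogous to the box-plot summaries in the abstract.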
Heliocentric interplanetary low thrust trajectory optimization program, supplement 1, part 2
NASA Technical Reports Server (NTRS)
Mann, F. I.; Horsewood, J. L.
1978-01-01
The improvements made to the HILTOP electric propulsion trajectory computer program are described. A more realistic propulsion system model was implemented in which various thrust subsystem efficiencies and the specific impulse are modeled as variable functions of the power available to the propulsion system. The number of operating thrusters is staged, and the beam voltage is selected from a set of five (or fewer) constant voltages, based upon the application of variational calculus. The constant beam voltages may be optimized individually or collectively. The propulsion system logic is activated by a single program input key in such a manner as to preserve the HILTOP logic. An analysis describing these features, a complete description of program input quantities, and sample cases of computer output illustrating the program capabilities are presented.
Multiple and variable speed electrical generator systems for large wind turbines
NASA Technical Reports Server (NTRS)
Andersen, T. S.; Hughes, P. S.; Kirschbaum, H. S.; Mutone, G. A.
1982-01-01
A cost-effective method to achieve increased wind turbine generator energy conversion and other operational benefits through variable-speed operation is presented. Earlier studies of multiple- and variable-speed generators in wind turbines were extended for evaluation in the context of a specific large-sized conceptual design. System design and simulation have defined the costs and performance benefits which can be expected from both two-speed and variable-speed configurations.
Anaerobic treatment of Tequila vinasses in a CSTR-type digester.
Méndez-Acosta, Hugo Oscar; Snell-Castro, Raúl; Alcaraz-González, Víctor; González-Alvarez, Víctor; Pelayo-Ortiz, Carlos
2010-06-01
Tequila industries in general produce great volumes of effluents with high pollutant loads, which are discharged (untreated or partially treated) into natural receivers, thus causing severe environmental problems. In this contribution, we propose an integrated system as a first step to comply with the Mexican ecological norms and stabilize the anaerobic treatment of Tequila vinasses. The main design criteria were simple and easy operation, reduced operating time and associated (maintenance) costs, an integrated and compact design, and minimal cost of set-up, start-up, monitoring, and control. The system is composed of a fully instrumented and automated lab-scale CSTR-type digester and on-line measuring devices for key variables (pH, temperature, flow rates, etc.), which are used along with off-line readings of chemical oxygen demand (COD), biogas composition, alkalinity, and volatile fatty acids to guarantee the operational stability of the anaerobic digestion process. The system performance was evaluated for 200 days, and the experimental results show that even under the influence of load disturbances, it is possible to achieve a COD reduction of 85% in the start-up phase and up to 95% during the normal operation phase, while producing a biogas with a methane composition greater than 65%. It is also shown that in order to maintain an efficient treatment, the buffering capacity (given by the alkalinity ratio, alpha = intermediate alkalinity/total alkalinity) must be closely monitored.
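The alkalinity-ratio check mentioned above is simple enough to sketch as a monitoring routine. The warning threshold of 0.3 used below is a common rule of thumb assumed for illustration; it is not a value reported by this study, and the readings are hypothetical.

```python
# Minimal sketch of the stability check described above: monitor the
# alkalinity ratio alpha = intermediate alkalinity / total alkalinity.
# The 0.3 threshold is an assumed rule of thumb, not this study's value.

def alkalinity_ratio(intermediate_alk, total_alk):
    """alpha = IA/TA; rising values signal accumulating volatile
    fatty acids and a risk of digester acidification."""
    if total_alk <= 0:
        raise ValueError("total alkalinity must be positive")
    return intermediate_alk / total_alk

def digester_stable(intermediate_alk, total_alk, threshold=0.3):
    """Flag whether the buffering capacity is still adequate."""
    return alkalinity_ratio(intermediate_alk, total_alk) <= threshold

# Hypothetical daily readings (meq/L)
ok = digester_stable(12.0, 50.0)       # alpha = 0.24, within threshold
warn = digester_stable(20.0, 50.0)     # alpha = 0.40, above threshold
```

In an automated setup like the one described, such a check could run on each off-line alkalinity reading and trigger a reduction in organic loading when the ratio drifts upward.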
Segmentation and learning in the quantitative analysis of microscopy images
NASA Astrophysics Data System (ADS)
Ruggiero, Christy; Ross, Amy; Porter, Reid
2015-02-01
In material science and bio-medical domains the quantity and quality of microscopy images is rapidly increasing and there is a great need to automatically detect, delineate and quantify particles, grains, cells, neurons and other functional "objects" within these images. These are challenging problems for image processing because of the variability in object appearance that inevitably arises in real world image acquisition and analysis. One of the most promising (and practical) ways to address these challenges is interactive image segmentation. These algorithms are designed to incorporate input from a human operator to tailor the segmentation method to the image at hand. Interactive image segmentation is now a key tool in a wide range of applications in microscopy and elsewhere. Historically, interactive image segmentation algorithms have tailored segmentation on an image-by-image basis, and information derived from operator input is not transferred between images. But recently there has been increasing interest to use machine learning in segmentation to provide interactive tools that accumulate and learn from the operator input over longer periods of time. These new learning algorithms reduce the need for operator input over time, and can potentially provide a more dynamic balance between customization and automation for different applications. This paper reviews the state of the art in this area, provides a unified view of these algorithms, and compares the segmentation performance of various design choices.
An integrated assessment of location-dependent scaling for microalgae biofuel production facilities
Coleman, André M.; Abodeely, Jared M.; Skaggs, Richard L.; ...
2014-06-19
Successful development of a large-scale microalgae-based biofuels industry requires comprehensive analysis and understanding of the feedstock supply chain—from facility siting and design through processing and upgrading of the feedstock to a fuel product. The evolution from pilot-scale production facilities to energy-scale operations presents many multi-disciplinary challenges, including a sustainable supply of water and nutrients, operational and infrastructure logistics, and economic competitiveness with petroleum-based fuels. These challenges are partially addressed by applying the Integrated Assessment Framework (IAF) – an integrated multi-scale modeling, analysis, and data management suite – to address key issues in developing and operating an open-pond microalgae production facility. This is done by analyzing how variability and uncertainty over space and through time affect feedstock production rates, and determining the site-specific “optimum” facility scale to minimize capital and operational expenses. This approach explicitly and systematically assesses the interdependence of biofuel production potential, associated resource requirements, and production system design trade-offs. To provide a baseline analysis, the IAF was applied in this paper to a set of sites in the southeastern U.S. with the potential to cumulatively produce 5 billion gallons per year. Finally, the results indicate costs can be reduced by scaling downstream processing capabilities to fit site-specific growing conditions, available and economically viable resources, and specific microalgal strains.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martinez-Gonzalez, Jesus S.; Ade, Brian J.; Bowman, Stephen M.
2015-01-01
Simulation of boiling water reactor (BWR) fuel depletion poses a challenge for nuclide inventory validation and nuclear criticality safety analyses. This challenge is due to the complex operating conditions and assembly design heterogeneities that characterize these nuclear systems. Fuel depletion simulations and in-cask criticality calculations are affected by (1) completeness of design information, (2) variability of operating conditions needed for modeling purposes, and (3) possible modeling choices. These effects must be identified, quantified, and ranked according to their significance. This paper presents an investigation of BWR fuel depletion using a complete set of actual design specifications and detailed operational data available for five operating cycles of the Swedish BWR Forsmark 3 reactor. The data includes detailed axial profiles of power, burnup, and void fraction in a very fine temporal mesh for a GE14 (10×10) fuel assembly. The specifications of this case can be used to assess the impacts of different modeling choices on inventory prediction and in-cask criticality, specifically regarding the key parameters that drive inventory and reactivity throughout fuel burnup. This study focused on the effects of the fidelity with which power history and void fraction distributions are modeled. The corresponding sensitivity of the reactivity in storage configurations is assessed, and the impacts of modeling choices on decay heat and inventory are addressed.
Modeling of carbonate reservoir variable secondary pore space based on CT images
NASA Astrophysics Data System (ADS)
Nie, X.; Nie, S.; Zhang, J.; Zhang, C.; Zhang, Z.
2017-12-01
Digital core technology has brought convenience, and X-ray CT scanning is one of the most common ways to obtain 3D digital cores. However, it provides only the original information of the samples actually scanned, and the porosity of the scanned cores cannot be modified. For numerical rock-physics simulations, a series of cores with variable porosities is needed to determine the relationship between physical properties and porosity. In carbonate rocks, the secondary pore space, including dissolution pores, caves, and natural fractures, is the key reservoir space, which makes the study of carbonate secondary porosity very important. To vary the porosity within a single scanned sample, several mathematical methods, consistent with the physical and chemical properties of carbonate rocks, are chosen to simulate the variation of the secondary pore space. We use the erosion and dilation operations of mathematical morphology to simulate pore-space changes in dissolution pores and caves. We also use a Fractional Brownian Motion model to generate natural fractures with different widths and angles in the digital cores, simulating fractured carbonate rocks. Morphological opening and closing operations are used to simulate the distribution of fluid in the pore space. The established 3D digital core models, with different secondary porosities and water saturation states, can be used in numerical simulations of the physical properties of carbonate reservoir rocks.
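The erosion/dilation pore-space editing described above can be sketched on a binary pore map. This is a 2-D toy with a 3x3 structuring element and a hypothetical slice; real use would apply 3-D operators (e.g., from an image-processing library) to the full CT volume.

```python
# Sketch of the morphological pore-space editing described above:
# dilate or erode a binary pore map to raise or lower porosity.
# The 6x6 grid stands in for one CT slice; values are hypothetical.

def dilate(grid):
    """3x3 binary dilation: a cell becomes pore (1) if any
    8-neighbour (or itself) is pore."""
    h, w = len(grid), len(grid[0])
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            out[i][j] = int(any(
                grid[ii][jj]
                for ii in range(max(i - 1, 0), min(i + 2, h))
                for jj in range(max(j - 1, 0), min(j + 2, w))))
    return out

def erode(grid):
    """3x3 binary erosion: a cell stays pore only if its whole
    3x3 neighbourhood is pore (image border treated as solid)."""
    h, w = len(grid), len(grid[0])
    out = [[0] * w for _ in range(h)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i][j] = int(all(
                grid[ii][jj]
                for ii in range(i - 1, i + 2)
                for jj in range(j - 1, j + 2)))
    return out

def porosity(grid):
    """Fraction of pore cells in the map."""
    return sum(map(sum, grid)) / (len(grid) * len(grid[0]))

# Hypothetical slice: 1 = pore, 0 = mineral matrix
slice_ = [[0, 0, 0, 0, 0, 0],
          [0, 1, 1, 1, 0, 0],
          [0, 1, 1, 1, 0, 0],
          [0, 1, 1, 1, 0, 0],
          [0, 0, 0, 0, 0, 0],
          [0, 0, 0, 0, 0, 0]]
```

Repeated dilation enlarges dissolution pores (raising porosity), repeated erosion shrinks them, and an opening (erosion then dilation) or closing (dilation then erosion) reshapes the pore space as used for the fluid-distribution step.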
RTJ-303: Variable geometry, oblique wing supersonic aircraft
NASA Technical Reports Server (NTRS)
Antaran, Albert; Belete, Hailu; Dryzmkowski, Mark; Higgins, James; Klenk, Alan; Rienecker, Lisa
1992-01-01
This document is a preliminary design of a High Speed Civil Transport (HSCT) named the RTJ-303. It is a 300 passenger, Mach 1.6 transport with a range of 5000 nautical miles. It features four mixed-flow turbofan engines and a variable geometry oblique wing, with conventional tail-aft control surfaces. The preliminary cost analysis for a production of 300 aircraft shows that flyaway cost would be 183 million dollars (1992) per aircraft. The aircraft uses standard jet fuel and requires no special materials to handle aerodynamic heating in flight because the stagnation temperatures are approximately 130 degrees Fahrenheit in the supersonic cruise condition. It should be stressed that this aircraft could be built with today's technology and does not rely on vague and uncertain assumptions of technology advances. Included in this report are sections discussing the details of the preliminary design sequence, including the mission to be performed; operational and performance constraints; the aircraft configuration and the tradeoffs of the final choice; wing design; a detailed fuselage design; empennage design, sizing of tail geometry, and selection of control surfaces; propulsion system/inlet choice and position on the aircraft; landing gear design, including tire selection, tip-over criterion, pavement loading, and retraction kinematics; structures design, including load determination and materials selection; aircraft performance; stability and handling qualities; systems layout, including location of key components; operations requirements and maintenance characteristics; a preliminary cost analysis; and conclusions regarding the design and recommendations for further study.
Bar-On, Elhanan; Abargel, Avi; Peleg, Kobi; Kreiss, Yitshak
2013-10-01
To propose strategies and recommendations for future planning and deployment of field hospitals after earthquakes by comparing the experience of 4 field hospitals deployed by The Israel Defense Forces (IDF) Medical Corps in Armenia, Turkey, India and Haiti. Quantitative data regarding the earthquakes were collected from published sources; data regarding hospital activity were collected from IDF records; and qualitative information was obtained from structured interviews with key figures involved in the missions. The hospitals started operating between 89 and 262 hours after the earthquakes. Their sizes ranged from 25 to 72 beds, and their personnel numbered between 34 and 100. The number of patients treated varied from 1111 to 2400. The proportion of earthquake-related diagnoses ranged from 28% to 67% (P < .001), with hospitalization rates between 3% and 66% (P < .001) and surgical rates from 1% to 24% (P < .001). In spite of characteristic scenarios and injury patterns after earthquakes, patient caseload and treatment requirements varied widely. The variables affecting the patient profile most significantly were time until deployment, total number of injured, availability of adjacent medical facilities, and possibility of evacuation from the disaster area. When deploying a field hospital in the early phase after an earthquake, a wide variability in patient caseload should be anticipated. Customization is difficult due to the paucity of information. Therefore, early deployment necessitates full logistic self-sufficiency and operational versatility. Also, collaboration with local and international medical teams can greatly enhance treatment capabilities.
NASA Astrophysics Data System (ADS)
Takaya, Yuhei; Hirahara, Shoji; Yasuda, Tamaki; Matsueda, Satoko; Toyoda, Takahiro; Fujii, Yosuke; Sugimoto, Hiroyuki; Matsukawa, Chihiro; Ishikawa, Ichiro; Mori, Hirotoshi; Nagasawa, Ryoji; Kubo, Yutaro; Adachi, Noriyuki; Yamanaka, Goro; Kuragano, Tsurane; Shimpo, Akihiko; Maeda, Shuhei; Ose, Tomoaki
2018-02-01
This paper describes the Japan Meteorological Agency/Meteorological Research Institute-Coupled Prediction System version 2 (JMA/MRI-CPS2), which was put into operation in June 2015 for the purpose of performing seasonal predictions. JMA/MRI-CPS2 has various upgrades from its predecessor, JMA/MRI-CPS1, including improved resolution and physics in its atmospheric and oceanic components, introduction of an interactive sea-ice model and realistic initialization of its land component. Verification of extensive re-forecasts covering a 30-year period (1981-2010) demonstrates that JMA/MRI-CPS2 possesses improved seasonal predictive skills for both atmospheric and oceanic interannual variability as well as key coupled variability such as the El Niño-Southern Oscillation (ENSO). For ENSO prediction, the new system better represents the forecast uncertainty and transition/duration of ENSO phases. Our analysis suggests that the enhanced predictive skills are attributable to incremental improvements resulting from all of the changes, as is apparent in the beneficial effects of sea-ice coupling and land initialization on 2-m temperature predictions. JMA/MRI-CPS2 is capable of reasonably representing the seasonal cycle and secular trends of sea ice. The sea-ice coupling remarkably enhances the predictive capability for the Arctic 2-m temperature, indicating the importance of this factor, particularly for seasonal predictions in the Arctic region.
NASA Astrophysics Data System (ADS)
Guo, Ying; Li, Renjie; Liao, Qin; Zhou, Jian; Huang, Duan
2018-02-01
Discrete modulation has been proven beneficial to improving the performance of continuous-variable quantum key distribution (CVQKD) in long-distance transmission. In this paper, we suggest a scheme to improve the maximal secret key rate of discretely modulated eight-state CVQKD using an optical amplifier (OA), at a slight cost in transmission distance. In the proposed scheme, an optical amplifier is exploited to compensate for the imperfection of Bob's apparatus, so that the secret key rate of the eight-state protocol is enhanced. Specifically, we investigate two types of optical amplifiers, the phase-insensitive amplifier (PIA) and the phase-sensitive amplifier (PSA), and obtain approximately equivalent performance improvements for the eight-state CVQKD system with either amplifier. Numerical simulation shows that the proposed scheme can well improve the secret key rate of eight-state CVQKD in both the asymptotic limit and the finite-size regime. We also show that the proposed scheme can achieve relatively high-rate transmission in a long-distance communication system.
NASA Astrophysics Data System (ADS)
Lupo, Cosmo; Ottaviani, Carlo; Papanastasiou, Panagiotis; Pirandola, Stefano
2018-06-01
One crucial step in any quantum key distribution (QKD) scheme is parameter estimation. In a typical QKD protocol the users have to sacrifice part of their raw data to estimate the parameters of the communication channel as, for example, the error rate. This introduces a trade-off between the secret key rate and the accuracy of parameter estimation in the finite-size regime. Here we show that continuous-variable QKD is not subject to this constraint as the whole raw keys can be used for both parameter estimation and secret key generation, without compromising the security. First, we show that this property holds for measurement-device-independent (MDI) protocols, as a consequence of the fact that in a MDI protocol the correlations between Alice and Bob are postselected by the measurement performed by an untrusted relay. This result is then extended beyond the MDI framework by exploiting the fact that MDI protocols can simulate device-dependent one-way QKD with arbitrarily high precision.
NASA Astrophysics Data System (ADS)
Zhao, Yijia; Zhang, Yichen; Xu, Bingjie; Yu, Song; Guo, Hong
2018-04-01
Improving the performance of continuous-variable quantum key distribution protocols by postselection has recently been proposed and verified. In continuous-variable measurement-device-independent quantum key distribution (CV-MDI QKD) protocols, the measurement results are obtained from an untrusted third party, Charlie, and there has been no effective method of improving CV-MDI QKD by postselection with untrusted measurement. We propose a method to improve the performance of the coherent-state CV-MDI QKD protocol by virtual photon subtraction via non-Gaussian postselection. The non-Gaussian postselection of transmitted data is equivalent to an ideal photon subtraction on the two-mode squeezed vacuum state, which enhances the performance of CV-MDI QKD. In the CV-MDI QKD protocol with non-Gaussian postselection, the two users select their own data independently. We demonstrate that the optimal performance of the modified CV-MDI QKD protocol is obtained when the transmitted data are selected by Alice alone. By setting appropriate parameters for the virtual photon subtraction, the secret key rate and tolerable excess noise are both improved at long transmission distances. The method provides an effective optimization scheme for the application of CV-MDI QKD protocols.
Using Natural Language to Enhance Mission Effectiveness
NASA Technical Reports Server (NTRS)
Trujillo, Anna C.; Meszaros, Erica
2016-01-01
The availability of highly capable, yet relatively cheap, unmanned aerial vehicles (UAVs) is opening up new areas of use for hobbyists and for professional-related activities. The driving function of this research is allowing a non-UAV pilot, an operator, to define and manage a mission. This paper describes the preliminary usability measures of an interface that allows an operator to define the mission using speech inputs. An experiment was conducted to begin assessing the efficacy and user acceptance of using voice commands to define a multi-UAV mission and to provide high-level vehicle control commands such as "takeoff." The primary independent variable was input type - voice or mouse. The primary dependent variables were the correctness of the mission parameter inputs and the time needed to make all inputs. Other dependent variables included NASA-TLX workload ratings and subjective ratings on a final questionnaire. The experiment required each subject to fill in an online form containing the information a package dispatcher would need to deliver packages. For each run, subjects typed in a simple numeric code for the package. They then defined the initial starting position, the delivery location, and the return location using either pull-down menus or voice input; voice input was accomplished using CMU Sphinx4-5prealpha for speech recognition. They then entered the length of the package; these were the optional fields. The subject had the system "Calculate Trajectory" and then "Takeoff" once the trajectory was calculated. Later, the subject used "Land" to finish the run. After each block of voice or mouse input runs, subjects completed a NASA-TLX. At the conclusion of all runs, subjects completed a questionnaire about their experience inputting the mission parameters and starting and stopping the mission using mouse and voice input. In general, the usability of voice commands is acceptable.
With a relatively well-defined and simple vocabulary, the operator can input the vast majority of the mission parameters using simple, intuitive voice commands. However, voice input may be more applicable to initial mission specification rather than for critical commands such as the need to land immediately due to time and feedback constraints. It would also be convenient to retrieve relevant mission information using voice input. Therefore, further on-going research is looking at using intent from operator utterances to provide the relevant mission information to the operator. The information displayed will be inferred from the operator's utterances just before key phrases are spoken. Linguistic analysis of the context of verbal communication provides insight into the intended meaning of commonly heard phrases such as "What's it doing now?" Analyzing the semantic sphere surrounding these common phrases enables us to predict the operator's intent and supply the operator's desired information to the interface. This paper also describes preliminary investigations into the generation of the semantic space of UAV operation and the success at providing information to the interface based on the operator's utterances.
Method for assessing motor insulation on operating motors
Kueck, John D.; Otaduy, Pedro J.
1997-01-01
A method for monitoring the condition of electrical-motor-driven devices. The method is achieved by monitoring electrical variables associated with the functioning of an operating motor, applying these electrical variables to a three-phase equivalent circuit, and determining non-symmetrical faults in the operating motor based upon symmetrical-components analysis techniques.
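The symmetrical-components step can be sketched with the standard Fortescue transform (a textbook decomposition rather than the patent's specific implementation; the phasor values in the example are hypothetical):

```python
import numpy as np

def symmetrical_components(ia, ib, ic):
    """Fortescue decomposition of three phase-current phasors into
    zero-, positive-, and negative-sequence components."""
    a = np.exp(2j * np.pi / 3)  # 120-degree rotation operator
    i0 = (ia + ib + ic) / 3
    i1 = (ia + a * ib + a**2 * ic) / 3
    i2 = (ia + a**2 * ib + a * ic) / 3
    return i0, i1, i2

# A perfectly balanced motor: only the positive sequence survives.
a = np.exp(2j * np.pi / 3)
i0, i1, i2 = symmetrical_components(10 + 0j, 10 * a**2, 10 * a)
```

For a healthy, balanced motor the zero- and negative-sequence currents vanish, so a persistent negative-sequence component is the signature of a non-symmetrical fault.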
ERIC Educational Resources Information Center
Fischer, Richard B.
1986-01-01
Defines key terms and discusses things to consider when setting fees for a continuing education program. These include (1) the organization's philosophy and mission, (2) certain key variables, (3) pricing strategy options, and (4) the test of reasonableness. (CH)
Portable Computer Keyboard For Use With One Hand
NASA Technical Reports Server (NTRS)
Friedman, Gary L.
1992-01-01
Data-entry device held in one hand and operated with five fingers. Contains seven keys. Letters, numbers, punctuation, and cursor commands keyed into computer by pressing keys in various combinations. Device called "data egg" used where standard typewriter keyboard unusable or unavailable. Contains microprocessor and 32-Kbyte memory. Captures text and transmits it to computer. Concept extended to computer mouse. Especially useful to handicapped or bedridden people who find it difficult or impossible to operate standard keyboards.
Sippel, Lauren M; Roy, Alicia M; Southwick, Steven M; Fichtenholtz, Harlan M
2016-09-01
Theories of posttraumatic stress disorder (PTSD) implicate emotional processes, including difficulties utilizing adaptive emotion regulation strategies, as critical to the etiology and maintenance of PTSD. Operation Iraqi Freedom, Operation Enduring Freedom, and Operation New Dawn (OIF/OEF/OND) veterans report high levels of combat exposure and PTSD. We aimed to extend findings suggesting that emotion regulation difficulties are a function of PTSD, rather than combat trauma exposure or common comorbidities, to OIF/OEF/OND veterans, in order to inform models of PTSD risk and recovery that can be applied to returning veterans. We tested differences in emotion regulation, measured with the Difficulties in Emotion Regulation Scale and Emotion Regulation Questionnaire, among trauma-exposed veterans with (n = 24) or without PTSD (n = 22) and healthy civilian comparison participants (n = 27) using multivariate analyses of covariance, adjusting for major depressive disorder, anxiety disorders, and demographic variables (age, sex, and ethnicity). Veterans with PTSD reported more use of expressive suppression and more difficulties with emotion regulation than veterans without PTSD and healthy comparison participants. Groups did not differ on cognitive reappraisal. Findings suggest the key role of PTSD above and beyond trauma exposure, depression, and anxiety in specific aspects of emotion dysregulation among OIF/OEF/OND veterans. Interventions that help veterans expand and diversify their emotion regulation skills may serve as helpful adjunctive treatments for PTSD among OIF/OEF/OND veterans.
Kumar, Rajesh; Nguyen, Elizabeth A; Roth, Lindsey A; Oh, Sam S; Gignoux, Christopher R.; Huntsman, Scott; Eng, Celeste; Moreno-Estrada, Andres; Sandoval, Karla; Peñaloza-Espinosa, Rosenda; López-López, Marisol; Avila, Pedro C.; Farber, Harold J.; Tcheurekdjian, Haig; Rodriguez-Cintron, William; Rodriguez-Santana, Jose R; Serebrisky, Denise; Thyne, Shannon M.; Williams, L. Keoki; Winkler, Cheryl; Bustamante, Carlos D.; Pérez-Stable, Eliseo J.; Borrell, Luisa N.; Burchard, Esteban G
2013-01-01
Background: Atopy varies by ethnicity, even within Latino groups. This variation may be due to environmental, socio-cultural, or genetic factors. Objective: To examine risk factors for atopy within a nationwide study of U.S. Latino children with and without asthma. Methods: Aeroallergen skin test response was analyzed in 1830 US Latino subjects. Key determinants of atopy included: country/region of origin, generation in the U.S., acculturation, genetic ancestry, and site to which individuals migrated. Serial multivariate zero-inflated negative binomial regressions, stratified by asthma status, examined the association of each key determinant variable with the number of positive skin tests. In addition, the independent effect of each key variable was determined by including all key variables in the final models. Results: In baseline analyses, African ancestry was associated with 3 times as many positive skin tests in participants with asthma (95% CI: 1.62–5.57) and 3.26 times as many positive skin tests in control participants (95% CI: 1.02–10.39). Generation and recruitment site were also associated with atopy in crude models. In final models adjusted for key variables, Puerto Rican [exp(β) (95% CI): 1.31 (1.02–1.69)] and mixed-ethnicity [exp(β) (95% CI): 1.27 (1.03–1.56)] asthmatics had a greater probability of positive skin tests compared to Mexican asthmatics. Ancestry associations were abrogated by recruitment site, but not region of origin. Conclusions: Puerto Rican ethnicity and mixed origin were associated with degree of atopy among U.S. Latino children with asthma. African ancestry was not associated with degree of atopy after adjusting for recruitment site. Local environmental variation, represented by site, was associated with degree of sensitization. PMID:23684070
NASA Astrophysics Data System (ADS)
Richoux, Nicole B.; Allan, E. Louise; Froneman, P. William
2016-04-01
The caridean shrimp Nauticaris marionis is an ecologically important species in the benthic community around the sub-Antarctic Prince Edward Islands (PEI) as it represents a key prey item for a variety of top predators breeding on the islands. We hypothesized that the diet of N. marionis shifts during its development, and that spatial variability in food availability results in differentiation in the diet signatures of specimens collected from various locations of the shelf waters around the PEI. Specimens were collected from nine stations (depth range 70 to 240 m) around the PEI at inter-island shelf (from west to east: upstream, between and downstream) and nearshore regions during austral autumn 2009. Stable isotope and fatty acid data both revealed spatial and developmental variations in the shrimp diet. Nearshore shrimp were more 13C-enriched than those from the inter-island region, suggesting increased kelp detritus entered the food web in the nearshore regions. The shrimp showed increases in δ13C and δ15N signatures (and trophic position) with an increase in body size, resulting in distinctions between size classes that reflected shifts in their trophic niche through development. The fatty acid profiles similarly indicated distinctions in diet with increased shrimp size (in the deep regions), and spatial variability was evident in relation to region and depth. All shrimp contained large proportions of polyunsaturated and essential fatty acids, indicating that the quality of food consumed was similar between regions despite the diet variability. Our results provide new dietary information about a key species operating near the base of the food web at the highly productive PEI, and show that there were no areas of enhanced nutrition available to the shrimp. As such, there was no nutritional advantage to shrimp inhabiting any specific region around the PEI.
NASA Technical Reports Server (NTRS)
Brandli, A. E.; Eckelkamp, R. E.; Kelly, C. M.; Mccandless, W.; Rue, D. L.
1990-01-01
The objective of an operations management system is to provide an orderly and efficient method to operate and maintain aerospace vehicles. Concepts for an operations management system are described, and the key technologies that will be required to bring this capability to fruition are highlighted. Without this automation and decision-aiding capability, the growing complexity of avionics will result in an unmanageable workload for the operator, ultimately threatening mission success or survivability of the aircraft or space system. The key technologies include expert-system application to operational tasks such as replanning, equipment diagnostics and checkout, global system management, and advanced man-machine interfaces. The economical development of operations management systems, which are largely software, will require advancements in other technological areas such as software engineering and computer hardware.
Tracer gauge: An automated dye dilution gauging system for ice‐affected streams
Clow, David W.; Fleming, Andrea C.
2008-01-01
In‐stream flow protection programs require accurate, real‐time streamflow data to aid in the protection of aquatic ecosystems during winter base flow periods. In cold regions, however, winter streamflow often can only be estimated because in‐channel ice causes variable backwater conditions and alters the stage‐discharge relation. In this study, an automated dye dilution gauging system, a tracer gauge, was developed for measuring discharge in ice‐affected streams. Rhodamine WT is injected into the stream at a constant rate, and downstream concentrations are measured with a submersible fluorometer. Data loggers control system operations, monitor key variables, and perform discharge calculations. Comparison of discharge from the tracer gauge and from a Cipoletti weir during periods of extensive ice cover indicated that the root‐mean‐square error of the tracer gauge was 0.029 m3 s−1, or 6.3% of average discharge for the study period. The tracer gauge system can provide much more accurate data than is currently available for streams that are strongly ice affected and, thus, could substantially improve management of in‐stream flow protection programs during winter in cold regions. Care must be taken, however, to test for the validity of key assumptions, including complete mixing and conservative behavior of dye, no changes in storage, and no gains or losses of water to or from the stream along the study reach. These assumptions may be tested by measuring flow‐weighted dye concentrations across the stream, performing dye mass balance analyses, and evaluating breakthrough curve behavior.
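The discharge calculation performed by the data loggers follows the standard constant-rate-injection tracer-dilution mass balance (the formula is standard for this technique; the numbers below are hypothetical, not values from the study):

```python
def dilution_discharge(q_inj, c_inj, c_meas, c_bg=0.0):
    """Stream discharge (m3/s) from constant-rate tracer injection.

    Mass balance: q_inj*c_inj + Q*c_bg = (Q + q_inj)*c_meas, solved for Q.
    q_inj  : injection rate of the tracer solution (m3/s)
    c_inj  : tracer concentration of the injectate (ug/L)
    c_meas : plateau tracer concentration measured downstream (ug/L)
    c_bg   : background tracer concentration in the stream (ug/L)
    """
    return q_inj * (c_inj - c_meas) / (c_meas - c_bg)

# Hypothetical example: 10 mL/s of 1e6 ug/L Rhodamine WT diluted to a
# 21.74 ug/L downstream plateau implies roughly 0.46 m3/s of streamflow.
q = dilution_discharge(q_inj=1.0e-5, c_inj=1.0e6, c_meas=21.74)
```

As the abstract cautions, the computed discharge is only trustworthy once complete mixing and conservative dye behavior have been verified over the study reach.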
Utilization of Integrated Assessment Modeling for determining geologic CO2 storage security
NASA Astrophysics Data System (ADS)
Pawar, R.
2017-12-01
Geologic storage of carbon dioxide (CO2) has been extensively studied as a potential technology to mitigate the atmospheric concentration of CO2. Multiple international research and development efforts, large-scale demonstrations, and commercial projects are helping advance the technology. One of the critical areas of active investigation is the prediction of long-term CO2 storage security and risks. A quantitative methodology for predicting a storage site's long-term performance is critical for making the key decisions necessary for successful deployment of commercial-scale projects, which will require quantitative assessments of potential long-term liabilities. These predictions are challenging given that they require simulating CO2 and in-situ fluid movements, and their interactions, through the primary storage reservoir, potential leakage pathways (such as wellbores, faults, etc.), and shallow resources such as groundwater aquifers. They also need to take into account the inherent variability and uncertainties at geologic sites. This talk will provide an overview of an approach based on integrated assessment modeling (IAM) to predict the long-term performance of a geologic storage site, including the storage reservoir, potential leakage pathways, and shallow groundwater aquifers. The approach utilizes reduced order models (ROMs) that capture the complex physical and chemical interactions resulting from CO2 movement yet are computationally extremely efficient. The applicability of the approach will be demonstrated through examples focused on key storage security questions: What is the probability of leakage of CO2 from a storage reservoir? How does storage security vary for different geologic environments and operational conditions? How do site parameter variability and uncertainties affect storage security?
Guidelines for the operation of variable message signs on state highways
DOT National Transportation Integrated Search
2004-06-01
A variable message sign (VMS) is a traffic control device whose message can be changed manually, electrically, mechanically, or electromechanically to provide motorists with information about traffic congestion, traffic crashes, maintenance operation...
Quench Crack Behavior of Nickel-base Disk Superalloys
NASA Technical Reports Server (NTRS)
Gayda, John; Kantzos, Pete; Miller, Jason
2002-01-01
There is a need to increase the temperature capability of superalloy turbine disks to allow higher operating temperatures in advanced aircraft engines. When modifying the processing and chemistry of disk alloys to achieve this capability, it is important to preserve the ability to use rapid cooling during supersolvus heat treatments to achieve coarse grain, fine gamma prime microstructures. An important step in this effort is an understanding of the key variables controlling the cracking tendencies of nickel-base disk alloys during quenching from supersolvus heat treatments. The objective of this study was to investigate the quench cracking tendencies of several advanced disk superalloys during simulated heat treatments. Miniature disk specimens were rapidly quenched after solution heat treatments. The responses and failure modes were compared and related to the quench cracking tendencies of actual disk forgings. Cracking along grain boundaries was generally observed to be operative. For the alloys examined in this study, the solution temperature, not alloy chemistry, was found to be the primary factor controlling quench cracking. Alloys with high solvus temperatures showed a greater tendency for quench cracking.
Properties of the Flight Model Gas Electron Multiplier for the GEMS Mission
NASA Technical Reports Server (NTRS)
Takeuchi, Yoko; Kitaguchi, Takao; Hayato, Asami; Tamagawa, Toru; Iwakiri, Wataru; Asami, Fumi; Yoshikawa, Akifumi; Kaneko, Kenta; Enoto, Teruaki; Black, Kevin;
2014-01-01
We present the gain properties of the gas electron multiplier (GEM) foil in pure dimethyl ether (DME) at 190 Torr. The GEM is one of the micro-pattern gas detectors, and it is adopted as a key part of the X-ray polarimeter for the GEMS mission. The X-ray polarimeter is a time projection chamber operating in pure DME gas at 190 Torr. We describe experimental results of (1) the maximum gain the GEM can achieve without any discharges, (2) the linearity of the energy scale for the GEM operation, and (3) the two-dimensional gain variation of the active area. First, our experiment with 6.4 keV X-ray irradiation of the whole GEM area demonstrates that the maximum effective gain is 2 × 10^4 with an applied voltage of 580 V. Second, the measured energy scale is linear among three energies of 4.5, 6.4, and 8.0 keV. Third, the two-dimensional gain mapping test derives a standard deviation of the gain variability of 7% across the active area.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lovett, S.; Berruti, F.; Behie, L.A.
1997-11-01
Viable operating conditions were identified experimentally for maximizing the production of high-value products such as ethylene, propylene, styrene, and benzene, from the ultrapyrolysis of waste plastics. Using both a batch microreactor and a pilot-plant-sized reactor, the key operating variables considered were pyrolysis temperature, product reaction time, and quench time. In the microreactor experiments, polystyrene (PS), a significant component of waste plastics, was pyrolyzed at temperatures ranging from 800 to 965 C, with total reaction times ranging from 500 to 1,000 ms. At a temperature of 965 C and 500 ms, the yields of styrene plus benzene were greater than 95 wt %. In the pilot-plant experiments, the recently patented internally circulating fluidized bed (ICFB) reactor (Milne et al., US Patent Number 5,370,789, 1994b) was used to ultrapyrolyze low-density polyethylene (LDPE) in addition to LDPE (5% by weight)/heavy oil mixtures at a residence time of 600 ms. Both experiments produced light olefin yields greater than 55 wt % at temperatures above 830 C.
Dynamic Simulation of a Periodic 10 K Sorption Cryocooler
NASA Technical Reports Server (NTRS)
Bhandari, P.; Rodriguez, J.; Bard, S.; Wade, L.
1994-01-01
A transient thermal simulation model has been developed to simulate the dynamic performance of a multiple-stage 10 K sorption cryocooler for spacecraft sensor cooling applications that require periodic quick-cooldown (under 2 minutes), negligible vibration, low power consumption, and long life (5 to 10 years). The model was specifically designed to represent the Brilliant Eyes Ten-Kelvin Sorption Cryocooler Experiment (BETSCE), but it can be adapted to represent other sorption cryocooler systems as well. The model simulates the heat transfer, mass transfer, and thermodynamic processes in the cryostat and the sorbent beds for the entire refrigeration cycle, and includes the transient effects of variable hydrogen supply pressures due to expansion and overflow of hydrogen during the cooldown operation. The paper describes model limitations and simplifying assumptions, with estimates of errors induced by them, and presents comparisons of performance predictions with ground experiments. An important benefit of the model is its ability to predict performance sensitivities to variations of key design and operational parameters. The insights thus obtained are expected to lead to higher efficiencies and lower weights for future designs.
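The transient heat-transfer bookkeeping in such a model can be illustrated with a single lumped thermal node integrated by explicit Euler (a generic sketch, not the BETSCE model; all parameter values are hypothetical):

```python
def lumped_node_step(T, T_env, C, UA, Q_in, dt):
    """One explicit-Euler step of a lumped thermal node obeying
    C * dT/dt = Q_in - UA * (T - T_env)."""
    return T + dt * (Q_in - UA * (T - T_env)) / C

# Hypothetical example: a 50 J/K node cooling from 300 K toward a 10 K sink
# through a 2 W/K conductance, with no internal heat input.
T = 300.0
for _ in range(100000):
    T = lumped_node_step(T, T_env=10.0, C=50.0, UA=2.0, Q_in=0.0, dt=0.01)
```

A real sorption-cooler model couples many such nodes with mass transfer and sorption thermodynamics, but each node is updated in essentially this way; the time step must stay well below C/UA for explicit integration to remain stable.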
Evolution of the Generic Lock System at Jefferson Lab
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brian Bevins; Yves Roblin
2003-10-13
The Generic Lock system is a software framework that allows highly flexible feedback control of large distributed systems. It allows system operators to implement new feedback loops between arbitrary process variables quickly and with no disturbance to the underlying control system. Several different types of feedback loops are provided and more are being added. This paper describes the further evolution of the system since it was first presented at ICALEPCS 2001 and reports on two years of successful use in accelerator operations. The framework has been enhanced in several key ways. Multiple-input, multiple-output (MIMO) lock types have been added for accelerator orbit and energy stabilization. The general-purpose Proportional-Integral-Derivative (PID) locks can now be tuned automatically. The generic lock server now makes use of the Proxy IOC (PIOC) developed at Jefferson Lab to allow the locks to be monitored from any EPICS Channel Access aware client. (Previously clients had to be Cdev aware.) The dependency on the Qt XML parser has been replaced with the freely available Xerces DOM parser from the Apache project.
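The behavior of a general-purpose PID lock of this kind can be sketched in a few lines (a minimal discrete-time illustration against a toy first-order plant; the gains and plant response are hypothetical, not Jefferson Lab values):

```python
class PidLock:
    """Minimal discrete-time PID loop locking a process variable to a setpoint."""
    def __init__(self, kp, ki, kd, setpoint, dt):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint, self.dt = setpoint, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measurement):
        error = self.setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Lock a toy plant: each actuator output nudges the process variable.
lock = PidLock(kp=0.5, ki=0.2, kd=0.0, setpoint=1.0, dt=0.1)
pv = 0.0
for _ in range(500):
    pv += lock.update(pv) * 0.1
```

The integral term is what drives the steady-state error to zero; auto-tuning, as added to the Generic Lock system, amounts to choosing kp, ki, and kd from the measured plant response.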
Quantum key distribution using basis encoding of Gaussian-modulated coherent states
NASA Astrophysics Data System (ADS)
Huang, Peng; Huang, Jingzheng; Zhang, Zheshen; Zeng, Guihua
2018-04-01
Continuous-variable quantum key distribution (CVQKD) has been demonstrated to be viable for practical secure quantum cryptography. However, its performance is strongly restricted by the channel excess noise and the reconciliation efficiency. In this paper, we present a quantum key distribution (QKD) protocol that encodes the secret keys on the random choices between two measurement bases: the conjugate quadratures X and P. The employed encoding method can dramatically weaken the effects of channel excess noise and reconciliation efficiency on the performance of the QKD protocol. Consequently, the proposed scheme can tolerate much higher excess noise and reach a much longer secure transmission distance, even at lower reconciliation efficiency. The proposal can work as an alternative that significantly strengthens the performance of the known Gaussian-modulated CVQKD protocol and serves as a multiplier for practical secure quantum cryptography with continuous variables.
NASA Astrophysics Data System (ADS)
Zhang, Hang; Mao, Yu; Huang, Duan; Li, Jiawei; Zhang, Ling; Guo, Ying
2018-05-01
We introduce a reliable scheme for continuous-variable quantum key distribution (CV-QKD) using orthogonal frequency division multiplexing (OFDM). As a spectrally efficient multiplexing technique, OFDM allows a large number of closely spaced orthogonal subcarrier signals to carry data on several parallel data streams or channels. We place emphasis on the modulator impairments that would inevitably arise in the OFDM system and analyze how these impairments affect the OFDM-based CV-QKD system. Moreover, we evaluate the security in the asymptotic limit and compare against the Pirandola-Laurenza-Ottaviani-Banchi upper bound. Results indicate that although imperfect modulation brings about a slight decrease in the secret key bit rate of each subcarrier, the multiplexing technique combined with CV-QKD yields a desirable improvement in the total secret key bit rate, raising it by about an order of magnitude.
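The orthogonal-subcarrier multiplexing at the heart of OFDM reduces to an IFFT at the transmitter and an FFT at the receiver; a minimal sketch over an ideal (noiseless, distortion-free) channel, with the subcarrier count chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(7)
n_sub = 64                                  # number of orthogonal subcarriers (arbitrary)
# One random QPSK symbol per subcarrier.
bits = rng.integers(0, 2, size=(n_sub, 2))
symbols = (2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)

tx = np.fft.ifft(symbols)   # multiplex: one OFDM time-domain block
rx = np.fft.fft(tx)         # demultiplex at the receiver
```

Subcarrier orthogonality means the FFT separates the parallel streams exactly; modulator impairments of the kind analyzed in the paper would appear as distortions of the time-domain block before the receiver FFT.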
NASA Astrophysics Data System (ADS)
Jiang, Xue-Qin; Huang, Peng; Huang, Duan; Lin, Dakai; Zeng, Guihua
2017-02-01
Achieving information-theoretic security with practical complexity is of great interest for the postprocessing procedure of continuous-variable quantum key distribution. In this paper, we propose a reconciliation scheme based on punctured low-density parity-check (LDPC) codes. Compared to the well-known multidimensional reconciliation scheme, the present scheme has lower time complexity. In particular, when the chosen punctured LDPC code achieves the Shannon capacity, the proposed reconciliation scheme can remove the information that has been leaked to an eavesdropper in the quantum transmission phase. Therefore, no information is leaked to the eavesdropper after the reconciliation stage, which means the privacy amplification algorithm of the postprocessing procedure is no longer needed after the reconciliation process. These features lead to a higher secret key rate, optimal performance, and practical availability for the quantum key distribution scheme involved.
NASA Astrophysics Data System (ADS)
Wang, Tao; Huang, Peng; Zhou, Yingming; Liu, Weiqi; Zeng, Guihua
2018-01-01
In a practical continuous-variable quantum key distribution (CVQKD) system, real-time shot-noise measurement (RTSNM) is an essential procedure for preventing an eavesdropper from exploiting practical security loopholes. However, the performance of this procedure itself has not been analyzed under real-world conditions. We therefore characterize the practical performance of RTSNM and investigate its effects on the CVQKD system. In particular, due to the finite-size effect, the shot-noise measurement at the receiver's side may decrease the precision of parameter estimation and consequently result in a tighter security bound. To mitigate that, we optimize the block size for RTSNM under the ensemble-size limitation to maximize the secure key rate. Moreover, the effect of the finite dynamics of the amplitude modulator in this scheme is studied and a mitigation method is proposed. Our work characterizes the practical performance of RTSNM and provides the real secret key rate achievable under it.
NASA Astrophysics Data System (ADS)
Lupo, Cosmo; Ottaviani, Carlo; Papanastasiou, Panagiotis; Pirandola, Stefano
2018-05-01
We present a rigorous security analysis of continuous-variable measurement-device-independent quantum key distribution (CV MDI QKD) in a finite-size scenario. The security proof is obtained in two steps: by first assessing the security against collective Gaussian attacks, and then extending to the most general class of coherent attacks via the Gaussian de Finetti reduction. Our result combines recent state-of-the-art security proofs for CV QKD with findings about min-entropy calculus and parameter estimation. In doing so, we improve the finite-size estimate of the secret key rate. Our conclusions confirm that CV MDI protocols allow for high rates on the metropolitan scale, and may achieve a nonzero secret key rate against the most general class of coherent attacks after 10^7-10^9 quantum signal transmissions, depending on loss and noise, and on the required level of security.
Quantum key distribution using continuous-variable non-Gaussian states
NASA Astrophysics Data System (ADS)
Borelli, L. F. M.; Aguiar, L. S.; Roversi, J. A.; Vidiella-Barranco, A.
2016-02-01
In this work, we present a quantum key distribution protocol using continuous-variable non-Gaussian states, homodyne detection, and post-selection. The employed signal states are photon-added-then-subtracted coherent states (PASCS), in which one photon is added and subsequently one photon is subtracted from the field. We analyze the performance of our protocol, compared with a coherent-state-based protocol, for two different attacks that could be carried out by the eavesdropper (Eve). We calculate the secret key rate for transmission over a lossy line under a superior-channel (beam-splitter) attack, and we show that the secret key generation rate may be increased by using the non-Gaussian PASCS rather than coherent states. We also consider the simultaneous quadrature measurement (intercept-resend) attack, and we show that the efficiency of Eve's attack is substantially reduced if PASCS are used as signal states.
The experimental studies of operating modes of a diesel-generator set at variable speed
NASA Astrophysics Data System (ADS)
Obukhov, S. G.; Plotnikov, I. A.; Surkov, M. A.; Sumarokova, L. P.
2017-02-01
A diesel generator set working at variable speed to save fuel is studied. The results of experimental studies of the operating modes of an autonomous diesel generator set are presented. Areas for regulating operating modes are determined. It is demonstrated that the transfer of the diesel generator set to variable speed of the diesel engine makes it possible to improve the energy efficiency of the autonomous generator source, as well as the environmental and ergonomic performance of the equipment as compared with general industrial analogues.
NASA Astrophysics Data System (ADS)
Vanwalleghem, T.; Román, A.; Peña, A.; Laguna, A.; Giráldez, J. V.
2017-12-01
There is a need to better understand the processes influencing soil formation and the resulting distribution of soil properties in the critical zone. Soil properties can exhibit strong spatial variation, even at the small catchment scale. Soil carbon pools in semi-arid, mountainous areas in particular are highly uncertain, because bulk density and stoniness are very heterogeneous and rarely measured explicitly. In this study, we explore the spatial variability in key soil properties (soil carbon stocks, stoniness, bulk density and soil depth) as a function of the processes shaping the critical zone (weathering, erosion, soil water fluxes and vegetation patterns). We also compare the potential of traditional digital soil mapping versus a mechanistic soil formation model (MILESD) for predicting these key soil properties. Soil core samples were collected from 67 locations at 6 depths. Total soil organic carbon stocks were 4.38 kg m-2. Solar radiation proved to be the key variable controlling soil carbon distribution. Stone content was mostly controlled by slope, indicating the importance of erosion. The spatial distribution of bulk density was found to be highly random. Finally, total carbon stocks were predicted using a random forest model whose main covariates were solar radiation and NDVI. The model predicts carbon stocks that are twice as high on north-facing as on south-facing slopes. However, validation showed that these covariates only explained 25% of the variation in the dataset. Apparently, present-day landscape and vegetation properties are not sufficient to fully explain the variability in soil carbon stocks in this complex terrain under natural vegetation. This is attributed to high spatial variability in bulk density and stoniness, key variables controlling carbon stocks. Similar results were obtained with the mechanistic soil formation model MILESD, suggesting that more complex models might be needed to further explore this high spatial variability.
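The mapping step described above — a random forest predicting carbon stocks from solar radiation and NDVI — can be sketched as follows. The data here are synthetic stand-ins (the real covariates explained only about 25% of the variance), and the coefficients generating them are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic analogue of the study's covariates: carbon stocks decline
# with solar radiation (south-facing slopes) and rise with NDVI.
rng = np.random.default_rng(2)
n = 400
solar = rng.uniform(0.5, 1.5, n)    # relative annual radiation (invented units)
ndvi = rng.uniform(0.1, 0.8, n)
carbon = 6 - 2.5 * solar + 3 * ndvi + 0.5 * rng.standard_normal(n)

X = np.column_stack([solar, ndvi])
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, carbon)
print(model.score(X, carbon))  # in-sample R^2; real predictive skill was far lower
```

On synthetic data with a strong built-in signal the in-sample fit is high; the abstract's point is precisely that real terrain (heterogeneous bulk density and stoniness) does not behave this cleanly.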
NASA Astrophysics Data System (ADS)
Cheng, Jilin; Zhang, Lihua; Zhang, Rentian; Gong, Yi; Zhu, Honggeng; Deng, Dongsheng; Feng, Xuesong; Qiu, Jinxian
2010-06-01
A dynamic programming model for optimizing the operation of a variable speed pumping system, aiming at minimum power consumption, was proposed to achieve economic operation. The No. 4 Jiangdu Pumping Station, a source pumping station in China's Eastern Route of the South-to-North Water Diversion Project, is taken as a study case. Since the sump water level of the Jiangdu Pumping Station is affected by the tide of the Yangtze River, the daily-average head of the pumping system varies over the year from 3.8 m to 7.8 m, and the tide level difference within one day reaches up to 1.2 m. Comparisons of operation electricity cost between optimized variable speed and fixed speed operation of the pumping system were made. When the full load operation mode is adopted, whether or not peak-valley electricity prices are considered, the benefits of variable speed operation cannot compensate for the energy consumption of the VFD. When the pumping system operates at part load and peak-valley electricity prices are considered, the pumping system should cease operation or lower its rotational speed in peak load hours, since the electricity price is much higher, and conversely should raise its rotational speed in valley load hours to pump more water. The computed results show that if the pumping system operates at 80% or 60% load, the energy cost of pumping a specified volume of water is reduced by 14.01% and 26.69% on average through optimal variable speed operation, and the investment in the VFD will be paid back in 2 or 3 years. However, if the pumping system operates at 80% or 60% load and the energy cost is calculated at a non peak-valley electricity price, the payback period lengthens to up to 18 years. In China's South-to-North Water Diversion Project, when market operation and peak-valley electricity prices take effect for supplying water and regulating water levels in regulation reservoirs such as Hongzehu Lake and Luomahu Lake, the economic operation of water-diversion pumping stations will be vital, and the adoption of VFDs to achieve optimal operation may be a good choice.
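The scheduling logic in this abstract — pump harder in valley-price hours, back off in peak hours, while still delivering the required volume — can be sketched as a small dynamic program. The tariffs, flow rates and power draws below are invented for illustration, not the Jiangdu station's figures.

```python
# Toy dynamic program: pick one pump speed level per hour so that a
# required volume is delivered at minimum electricity cost under a
# peak/valley tariff. All numbers are illustrative placeholders.

def schedule(prices, flows, powers, target):
    # state: volume pumped so far -> (best cost, chosen levels)
    best = {0: (0.0, [])}
    for p in prices:
        nxt = {}
        for vol, (cost, seq) in best.items():
            for lvl, (q, kw) in enumerate(zip(flows, powers)):
                v, c = vol + q, cost + kw * p
                if v not in nxt or c < nxt[v][0]:
                    nxt[v] = (c, seq + [lvl])
        best = nxt
    feasible = {v: cs for v, cs in best.items() if v >= target}
    return min(feasible.values(), key=lambda cs: cs[0])

prices = [1.0, 1.0, 0.3, 0.3]   # two peak hours, then two valley hours
flows = [0, 50, 100]            # m3/h at each speed level (idle/half/full)
powers = [0, 30, 80]            # kW at each speed level
cost, plan = schedule(prices, flows, powers, target=200)
print(cost, plan)  # cheapest plan idles in peak hours, runs full in valley hours
```

The optimizer reproduces the abstract's rule of thumb: with peak-valley prices, ceasing operation in peak hours and running at higher speed in valley hours minimizes the cost per unit of water.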
Mathematical models of water application for a variable rate irrigating hill-seeder
USDA-ARS's Scientific Manuscript database
A variable rate irrigating hill-seeder can adjust water application automatically according to the difference in soil moisture content in the field to alleviate drought and save water. Two key problems to realize variable rate water application are how to determine the right amount of water for the ...
Rosen, G D
2006-06-01
Meta-analysis is a vague descriptor used to encompass very diverse methods of data collection and analysis, ranging from simple averages to more complex statistical methods. Holo-analysis is a fully comprehensive statistical analysis of all available data and all available variables in a specified topic, with results expressed in a holistic factual empirical model. The objectives and applications of holo-analysis include software production for prediction of responses with confidence limits, translation of research conditions to praxis (field) circumstances, exposure of key missing variables, discovery of theoretically unpredictable variables and interactions, and planning of future research. Holo-analyses are cited as examples of the effects of exogenous phytases on broiler feed intake and live weight gain, which account for 70% of variation in responses in terms of 20 highly significant chronological, dietary, environmental, genetic, managemental, and nutrient variables. Even better future accountancy of variation will be facilitated if and when authors of papers routinely provide key data for currently neglected variables, such as temperatures, complete feed formulations, and mortalities.
Method for assessing motor insulation on operating motors
Kueck, J.D.; Otaduy, P.J.
1997-03-18
A method for monitoring the condition of electrical-motor-driven devices is disclosed. The method is achieved by monitoring electrical variables associated with the functioning of an operating motor, applying these electrical variables to a three phase equivalent circuit and determining non-symmetrical faults in the operating motor based upon symmetrical components analysis techniques. 15 figs.
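The symmetrical components analysis this method relies on is the standard Fortescue transform: three phase currents decompose into zero, positive and negative sequence components, and a significant negative-sequence component flags a non-symmetrical fault. The sketch below uses an invented balanced current set; it illustrates the transform, not the patent's full monitoring procedure.

```python
import cmath

# 120-degree rotation operator used by the Fortescue transform
a = cmath.exp(2j * cmath.pi / 3)

def symmetrical_components(ia, ib, ic):
    """Decompose three phase currents into (zero, positive, negative)
    sequence components. A balanced machine has i0 ~ i2 ~ 0."""
    i0 = (ia + ib + ic) / 3
    i1 = (ia + a * ib + a * a * ic) / 3       # positive sequence
    i2 = (ia + a * a * ib + a * ic) / 3       # negative sequence
    return i0, i1, i2

# Balanced set: phases 120 degrees apart -> negative sequence vanishes
ia, ib, ic = 1 + 0j, a ** 2, a
i0, i1, i2 = symmetrical_components(ia, ib, ic)
print(abs(i0), abs(i1), abs(i2))  # ~0, 1, ~0
```

In a monitoring loop, a persistent rise of |i2|/|i1| above a small threshold would indicate an asymmetry such as a winding fault.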
Code of Federal Regulations, 2010 CFR
2010-04-01
25 CFR 502.14 (Indians): Key employee. Key employee means: (a) A person who performs one or more of the following functions... gaming operation. (d) Any other person designated by the tribe as a key employee. [57 FR 12392, Apr. 9...]
Samuel Chan; Paul Anderson; John Cissel; Larry Lateen; Charley Thompson
2004-01-01
A large-scale operational study has been undertaken to investigate variable density management in conjunction with riparian buffers as a means to accelerate development of late-seral habitat, facilitate rare species management, and maintain riparian functions in 40-70 year-old headwater forests in western Oregon, USA. Upland variable retention treatments include...
Increasing Vocal Variability in Children with Autism Using a Lag Schedule of Reinforcement
ERIC Educational Resources Information Center
Esch, John W.; Esch, Barbara E.; Love, Jessa R.
2009-01-01
Variability has been demonstrated to be an operant dimension of behavior (Neuringer, 2002; Page & Neuringer, 1985). Recently, lag schedules have been used to demonstrate operant variability of verbal behavior in persons with a diagnosis of autism (e.g., Lee, McComas, & Jawor, 2002). The current study evaluated the effects of a Lag 1 schedule on…
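The Lag 1 contingency used in this study reinforces a response only when it differs from the immediately preceding response. The sketch below shows the schedule logic only; the response labels and the handling of the first response are illustrative choices, not the study's procedure.

```python
# Lag 1 schedule sketch: a response meets the reinforcement criterion
# only if it differs from the immediately preceding response.

def lag1_reinforced(responses):
    """Return one flag per response: True if it meets the Lag 1 criterion.
    Here the first response (no predecessor) is not reinforced."""
    flags = []
    prev = None
    for r in responses:
        flags.append(prev is not None and r != prev)
        prev = r
    return flags

print(lag1_reinforced(["ba", "ba", "da", "ga", "ga"]))
# repeats of the previous vocalization are not reinforced
```

Raising the lag (Lag 2, Lag 3, ...) would require a response to differ from the last two, three, ... responses, demanding progressively more variability.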
Subwavelength grating enabled on-chip ultra-compact optical true time delay line
Wang, Junjia; Ashrafi, Reza; Adams, Rhys; Glesk, Ivan; Gasulla, Ivana; Capmany, José; Chen, Lawrence R.
2016-01-01
An optical true time delay line (OTTDL) is a basic photonic building block that enables many microwave photonic and optical processing operations. The conventional design for an integrated OTTDL that is based on spatial diversity uses a length-variable waveguide array to create the optical time delays, which can introduce complexities in the integrated circuit design. Here we report the first ever demonstration of an integrated index-variable OTTDL that exploits spatial diversity in an equal length waveguide array. The approach uses subwavelength grating waveguides in silicon-on-insulator (SOI), which enables the realization of OTTDLs having a simple geometry and that occupy a compact chip area. Moreover, compared to conventional wavelength-variable delay lines with a few THz operation bandwidth, our index-variable OTTDL has an extremely broad operation bandwidth practically exceeding several tens of THz, which supports operation for various input optical signals with broad ranges of central wavelength and bandwidth. PMID:27457024
Electric Motors Maintenance Planning From Its Operating Variables
NASA Astrophysics Data System (ADS)
Rodrigues, Francisco; Fonseca, Inácio; Farinha, José Torres; Ferreira, Luís; Galar, Diego
2017-09-01
Maintenance planning seeks to maximize the availability of equipment and, consequently, to increase companies' competitiveness by increasing production time. This paper presents maintenance planning based on operating variables (number of hours worked, duty cycles, number of revolutions) to maximize the operational availability of electric motors. The operating variables are read and sampled on predetermined sampling cycles, and the data are subsequently analysed with time-series algorithms, with the aim of issuing work orders before the variables reach their limit values. This approach is supported by tools and technologies such as logical applications that provide a graphical user interface for access to relevant information about the physical asset, including an HMI (Human Machine Interface), control and supervision through SCADA (Supervisory Control And Data Acquisition), and the communication protocols among the different logical applications.
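The core idea — sample an operating variable on a fixed cycle, fit a trend, and raise a work order before the variable is forecast to cross its maintenance limit — can be sketched with a least-squares line. The limit and the sampled hours below are invented, and a simple linear trend stands in for the paper's time-series algorithms.

```python
# Trend-based work-order trigger: fit a line through (cycle, cumulative
# hours run) and find the cycle at which the trend reaches the limit.

def cycle_at_limit(samples, limit):
    """Least-squares line through (index, value); return the (fractional)
    cycle index at which the fitted trend reaches the maintenance limit."""
    n = len(samples)
    xs = range(n)
    xbar = sum(xs) / n
    ybar = sum(samples) / n
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, samples)) / \
            sum((x - xbar) ** 2 for x in xs)
    return (limit - ybar) / slope + xbar

samples = [100, 210, 290, 405, 500]   # cumulative motor hours per sampling cycle
when = cycle_at_limit(samples, limit=1000)
print(round(when, 1))  # issue the work order before this cycle
```

A production system would replace the linear fit with a proper forecasting model and add a safety margin, but the trigger logic is the same.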
Pamboo, Jaya; Hans, Manoj-Kumar; Kumaraswamy, Bangalore-Niranjan; Chander, Subhash; Bhaskaran, Sajeev
2014-12-01
The study had twin objectives: to assess the incidence of flare-ups (a severe problem requiring an unscheduled visit and treatment) among patients who received endodontic treatment in the Department of Conservative Dentistry and Endodontics at Vyas Dental College and Hospital, Jodhpur during a period of one year, and also to examine the correlation with pre-operative and operative variables. Data were collected from 1023 teeth from 916 patients who had received endodontic treatment over a 12-month period. Information was obtained for each patient treated, including pulp and peri-radicular diagnosis for the tooth, presence of pre-operative pain, type of medication being used, type of instrumentation technique used and number of appointments needed to complete the root canal treatment. The results showed an incidence of 2.35% for flare-ups from 1023 endodontically treated teeth. Statistical analysis was done using the chi-square test. Flare-ups were found to be affected significantly by patient gender, presence of radiolucent lesions, patients taking pre-operative analgesic or anti-inflammatory drugs, and type of instrumentation technique. In contrast, there was no correlation between flare-ups and age, different arch/tooth groups, or single versus multiple visit endodontics. Key words: Anti-inflammatory, flare-ups, instrumentation, prospective.
Assessing the ability of operational snow models to predict snowmelt runoff extremes (Invited)
NASA Astrophysics Data System (ADS)
Wood, A. W.; Restrepo, P. J.; Clark, M. P.
2013-12-01
In the western US, the snow accumulation and melt cycle of winter and spring plays a critical role in the region's water management strategies. Consequently, the ability to predict snowmelt runoff at time scales from days to seasons is a key input for decisions in reservoir management, whether for avoiding flood hazards or supporting environmental flows through the scheduling of releases in spring, or for allocating releases for multi-state water distribution in dry seasons of the year (using reservoir systems to provide an invaluable buffer for many sectors against drought). Runoff forecasts thus have important benefits at both wet and dry extremes of the climatological spectrum. The importance of predicting the snow cycle motivates an assessment of the strengths and weaknesses of the US's central operational snow model, SNOW17, in contrast to process-modeling alternatives, as they relate to simulating observed snowmelt variability and extremes. To this end, we use a flexible modeling approach that enables an investigation of different choices in model structure, including model physics, parameterization and degree of spatiotemporal discretization. We draw from examples of recent extreme events in western US watersheds and an overall assessment of retrospective model performance to identify fruitful avenues for advancing the modeling basis for the operational prediction of snow-related runoff extremes.
Augmented Adaptive Control of a Wind Turbine in the Presence of Structural Modes
NASA Technical Reports Server (NTRS)
Frost, Susan A.; Balas, Mark J.; Wright, Alan D.
2010-01-01
Wind turbines operate in highly turbulent environments resulting in aerodynamic loads that can easily excite turbine structural modes, potentially causing component fatigue and failure. Two key technology drivers for turbine manufacturers are increasing turbine up time and reducing maintenance costs. Since the trend in wind turbine design is towards larger, more flexible turbines with lower frequency structural modes, manufacturers will want to develop methods to operate in the presence of these modes. Accurate models of the dynamic characteristics of new wind turbines are often not available due to the complexity and expense of the modeling task, making wind turbines ideally suited to adaptive control. In this paper, we develop theory for adaptive control with rejection of disturbances in the presence of modes that inhibit the controller. We use this method to design an adaptive collective pitch controller for a high-fidelity simulation of a utility-scale, variable-speed wind turbine operating in Region 3. The objective of the adaptive pitch controller is to regulate generator speed, accommodate wind gusts, and reduce the interference of certain structural modes in feedback. The control objective is accomplished by collectively pitching the turbine blades. The adaptive pitch controller for Region 3 is compared in simulations with a baseline classical Proportional Integrator (PI) collective pitch controller.
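The baseline against which the adaptive controller is compared — a PI collective pitch loop regulating generator speed — can be sketched with an invented first-order plant. The gains, plant constants and wind-torque surplus below are placeholders, not the paper's simulation values.

```python
# Toy discrete-time PI collective pitch loop: pitching the blades sheds
# aerodynamic torque, driving generator speed back to its rated value.

def simulate(kp, ki, steps=300, dt=0.05, target=1.0):
    speed, integ = 1.2, 0.0          # start above rated speed (per-unit)
    for _ in range(steps):
        err = speed - target
        integ += err * dt
        pitch = kp * err + ki * integ           # more pitch -> less torque
        # invented plant: wind torque surplus 0.3, pitch and damping terms
        speed += dt * (0.3 - 0.5 * pitch - 0.1 * (speed - 1.0))
    return speed

final = simulate(kp=2.0, ki=1.0)
print(final)  # settles near the rated speed of 1.0
```

The integral term is what absorbs the steady wind-torque surplus; the adaptive scheme in the paper additionally rejects disturbances and avoids exciting the low-frequency structural modes that such a fixed-gain loop can interact with.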
Planning Coverage Campaigns for Mission Design and Analysis: CLASP for DESDynI
NASA Technical Reports Server (NTRS)
Knight, Russell L.; McLaren, David A.; Hu, Steven
2013-01-01
Mission design and analysis presents challenges in that almost all variables are in constant flux, yet the goal is to achieve an acceptable level of performance against a concept of operations, which might also be in flux. To increase responsiveness, automated planning tools are used that allow for the continual modification of spacecraft, ground system, staffing, and concept of operations, while returning metrics that are important to mission evaluation, such as area covered, peak memory usage, and peak data throughput. This approach was applied to the DESDynI mission design using the CLASP planning system, but since this adaptation, many techniques have changed under the hood for CLASP, and the DESDynI mission concept has undergone drastic changes. The software produces mission evaluation products, such as memory high-water marks and coverage percentages, given a mission design in the form of coverage targets, concept of operations, spacecraft parameters, and orbital parameters. It tries to overcome the lack of fidelity and timeliness of mission requirements coverage analysis during mission design. Previous techniques primarily used Excel in an ad hoc fashion to approximate key factors in mission performance, often falling victim to the overgeneralizations necessary in such an adaptation. The new program allows designers to faithfully represent their mission designs quickly, and to get more accurate results just as quickly.
NASA Astrophysics Data System (ADS)
Bogena, Heye R.; Huisman, Johan A.; Rosenbaum, Ulrike; Weuthen, Ansgar; Vereecken, Harry
2010-05-01
Soil water content plays a key role in partitioning water and energy fluxes and controlling the pattern of groundwater recharge. Despite the importance of soil water content, it is not yet measured in an operational way at larger scales. The aim of this paper is to present the potential of real-time monitoring for the analysis of soil moisture patterns at the catchment scale using the recently developed wireless sensor network SoilNet [1], [2]. SoilNet is designed to measure soil moisture, salinity and temperature in several depths (e.g. 5, 20 and 50 cm). Recently, a small forest catchment Wüstebach (~27 ha) has been instrumented with 150 sensor nodes and more than 1200 soil sensors in the framework of the Transregio32 and the Helmholtz initiative TERENO (Terrestrial Environmental Observatories). From August to November 2009, more than 6 million soil moisture measurements have been performed. We will present first results from a statistical and geostatistical analysis of the data. The observed spatial variability of soil moisture corresponds well with the 800-m scale variability described in [3]. The very low scattering of the standard deviation versus mean soil moisture plots indicates that sensor network data shows less artificial soil moisture variations than soil moisture data originated from measurement campaigns. The variograms showed more or less the same nugget effect, which indicates that the sum of the sub-scale variability and the measurement error is rather time-invariant. Wet situations showed smaller spatial variability, which is attributed to saturated soil water content, which poses an upper limit and is typically not strongly variable in headwater catchments with relatively homogeneous soil. The spatiotemporal variability in soil moisture at 50 cm depth was significantly lower than at 5 and 20 cm. This finding indicates that the considerable variability of the top soil is buffered deeper in the soil due to lateral and vertical water fluxes. 
Topographic features showed the strongest correlation with soil moisture during dry periods, indicating that the control of topography on the soil moisture pattern depends on the soil water status. Interpolation using the external drift kriging method demonstrated that the high sampling density allows capturing the key patterns of soil moisture variation in the Wüstebach catchment. References: [1] Bogena, H.R., J.A. Huisman, C. Oberdörster, H. Vereecken (2007): Evaluation of a low-cost soil water content sensor for wireless network applications. Journal of Hydrology: 344, 32- 42. [2] Rosenbaum, U., Huisman, J.A., Weuthen, A., Vereecken, H. and Bogena, H.R. (2010): Quantification of sensor-to-sensor variability of the ECH2O EC-5, TE and 5TE sensors in dielectric liquids. Accepted for publication in Vadose Zone Journal (09/2009). [3] Famiglietti J.S., D. Ryu, A. A. Berg, M. Rodell and T. J. Jackson (2008), Field observations of soil moisture variability across scales, Water Resour. Res. 44, W01423, doi:10.1029/2006WR005804.
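The geostatistical quantities this abstract discusses (variograms, nugget effect) can be sketched with a toy empirical semivariogram: half the mean squared difference between sample pairs, binned by separation distance. The 1-D coordinates, the sinusoidal "soil moisture" field and the noise level below are all synthetic.

```python
import numpy as np

def semivariogram(x, z, bins):
    """gamma(h) = 0.5 * mean (z_i - z_j)^2 over pairs whose separation
    falls in each lag bin. The small-lag value reflects the nugget."""
    d = np.abs(x[:, None] - x[None, :])
    g = 0.5 * (z[:, None] - z[None, :]) ** 2
    iu = np.triu_indices(len(x), k=1)          # each pair once
    d, g = d[iu], g[iu]
    which = np.digitize(d, bins)
    return np.array([g[which == k].mean() for k in range(1, len(bins))])

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 100, 200))          # sensor positions (m)
z = np.sin(x / 15) + 0.1 * rng.standard_normal(200)   # pattern + sensor noise
gamma = semivariogram(x, z, bins=np.array([0.0, 5, 10, 20, 40]))
print(gamma)  # rises with lag; the first bin approximates the nugget
```

A roughly constant small-lag value across measurement dates, as reported for the Wüstebach data, suggests that sub-scale variability plus measurement error is time-invariant.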
Multi-party Quantum Key Agreement without Entanglement
NASA Astrophysics Data System (ADS)
Cai, Bin-Bin; Guo, Gong-De; Lin, Song
2017-04-01
A new efficient quantum key agreement protocol without entanglement is proposed. In this protocol, each user encodes his secret key into the traveling particles by performing one of four rotation operations that one cannot perfectly distinguish. In the end, all users can simultaneously obtain the final shared key. The security of the presented protocol against some common attacks is discussed. It is shown that this protocol can effectively protect the privacy of each user and satisfy the requirement of fairness in theory. Moreover, the quantum carriers and the encoding operations used in the protocol can be achieved in realistic physical devices. Therefore, the presented protocol is feasible with current technology.
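The encoding idea above — each user applies one of four rotation operations that cannot be perfectly distinguished — can be illustrated with single-qubit rotations: the rotated states are pairwise non-orthogonal, so no measurement separates them with certainty. The angle set below is chosen for illustration; the protocol defines its own operations.

```python
import numpy as np

def rotation(theta):
    """Real 2x2 rotation acting on a qubit in the computational basis."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

ket0 = np.array([1.0, 0.0])
angles = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]   # illustrative choice
states = [rotation(t) @ ket0 for t in angles]

# |<psi_i|psi_j>| between neighbouring encodings is nonzero, so the
# four encoded states cannot be perfectly distinguished.
overlaps = [abs(states[i] @ states[i + 1]) for i in range(3)]
print(overlaps)  # each equals cos(pi/4) ~ 0.707
```

Non-orthogonality is what protects each user's rotation choice: an outside party (or a dishonest participant) cannot read off the applied operation from a single traveling particle.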
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cai, Ming; Deng, Yi
2015-02-06
El Niño-Southern Oscillation (ENSO) and Annular Modes (AMs) represent respectively the most important modes of low frequency variability in the tropical and extratropical circulations. The future projection of the ENSO and AM variability, however, remains highly uncertain with the state-of-the-art coupled general circulation models. A comprehensive understanding of the factors responsible for the inter-model discrepancies in projecting future changes in the ENSO and AM variability, in terms of multiple feedback processes involved, has yet to be achieved. The proposed research aims to identify sources of such uncertainty and establish a set of process-resolving quantitative evaluations of the existing predictions of the future ENSO and AM variability. The proposed process-resolving evaluations are based on a feedback analysis method formulated in Lu and Cai (2009), which is capable of partitioning 3D temperature anomalies/perturbations into components linked to 1) radiation-related thermodynamic processes such as cloud and water vapor feedbacks, 2) local dynamical processes including convection and turbulent/diffusive energy transfer and 3) non-local dynamical processes such as the horizontal energy transport in the oceans and atmosphere. Taking advantage of the high-resolution, multi-model ensemble products from the Coupled Model Intercomparison Project Phase 5 (CMIP5) soon to be available at the Lawrence Livermore National Lab, we will conduct a process-resolving decomposition of the global three-dimensional (3D) temperature (including SST) response to the ENSO and AM variability in the preindustrial, historical and future climate simulated by these models.
Specific research tasks include 1) identifying the model-observation discrepancies in the global temperature response to ENSO and AM variability and attributing such discrepancies to specific feedback processes, 2) delineating the influence of anthropogenic radiative forcing on the key feedback processes operating on ENSO and AM variability and quantifying their relative contributions to the changes in the temperature anomalies associated with different phases of ENSO and AMs, and 3) investigating the linkages between model feedback processes that lead to inter-model differences in time-mean temperature projection and model feedback processes that cause inter-model differences in the simulated ENSO and AM temperature response. Through a thorough model-observation and inter-model comparison of the multiple energetic processes associated with ENSO and AM variability, the proposed research serves to identify key uncertainties in model representation of ENSO and AM variability, and investigate how the model uncertainty in predicting time-mean response is related to the uncertainty in predicting response of the low-frequency modes. The proposal is thus a direct response to the first topical area of the solicitation: Interaction of Climate Change and Low Frequency Modes of Natural Climate Variability. It ultimately supports the accomplishment of the BER climate science activity Long Term Measure (LTM): "Deliver improved scientific data and models about the potential response of the Earth's climate and terrestrial biosphere to increased greenhouse gas levels for policy makers to determine safe levels of greenhouse gases in the atmosphere."
NASA Astrophysics Data System (ADS)
Aguiar, Eva; Mourre, Baptiste; Heslop, Emma; Juza, Mélanie; Escudier, Romain; Tintoré, Joaquín
2017-04-01
This study focuses on the validation of the high resolution Western Mediterranean Operational model (WMOP) developed at SOCIB, the Balearic Islands Coastal Observing and Forecasting System. The Mediterranean Sea is often seen as a small scale ocean laboratory where energetic eddies, fronts and circulation features have important ecological consequences. The Medclic project is a program between "La Caixa" Foundation and SOCIB which aims at characterizing and forecasting the "oceanic weather" in the Western Mediterranean Sea, specifically investigating the interactions between the general circulation and mesoscale processes. We use a WMOP 2009-2015 free run hindcast simulation and available observational datasets (altimetry, moorings and gliders) to both assess the numerical simulation and investigate the ocean variability. WMOP has a 2-km spatial resolution and uses CMEMS Mediterranean products as initial and boundary conditions, with surface forcing from the high-resolution Spanish Meteorological Agency model HIRLAM. Different aspects of the spatial and temporal variability in the model are validated from local to regional and basin scales: (1) the principal axis of variability of the surface circulation using altimetry and moorings along the Iberian coast, (2) the inter-annual changes of the surface flows incorporating also glider data, (3) the propagation of mesoscale eddies formed in the Algerian sub-basin using altimetry, and (4) the statistical properties of eddies (number, rotation, size) applying an eddy tracker detection method in the Western Mediterranean Sea. With these key points evaluated in the model, EOF analysis of sea surface height maps is used to investigate spatial patterns of variability associated with eddies, gyres and the basin-scale circulation, and so gain insight into the interconnections between sub-basins, as well as the interactions between physical processes at different scales.
Hacker, Kathryn P; Seto, Karen C; Costa, Federico; Corburn, Jason; Reis, Mitermayer G; Ko, Albert I; Diuk-Wasser, Maria A
2013-10-20
The expansion of urban slums is a key challenge for public and social policy in the 21st century. The heterogeneous and dynamic nature of slum communities limits the use of rigid slum definitions. A systematic and flexible approach to characterize, delineate and model urban slum structure at an operational resolution is essential to plan, deploy, and monitor interventions at the local and national level. We modeled the multi-dimensional structure of urban slums in the city of Salvador, a city of 3 million inhabitants in Brazil, by integrating census-derived socioeconomic variables and remotely-sensed land cover variables. We assessed the correlation between the two sets of variables using canonical correlation analysis, identified land cover proxies for the socioeconomic variables, and produced an integrated map of deprivation in Salvador at 30 m × 30 m resolution. The canonical analysis identified three significant ordination axes that described the structure of Salvador census tracts according to land cover and socioeconomic features. The first canonical axis captured a gradient from crowded, low-income communities with corrugated roof housing to higher-income communities. The second canonical axis discriminated among socioeconomic variables characterizing the most marginalized census tracts, those without access to sanitation or piped water. The third canonical axis accounted for the least amount of variation, but discriminated between high-income areas with white-painted or tiled roofs from lower-income areas. Our approach captures the socioeconomic and land cover heterogeneity within and between slum settlements and identifies the most marginalized communities in a large, complex urban setting. These findings indicate that changes in the canonical scores for slum areas can be used to track their evolution and to monitor the impact of development programs such as slum upgrading.
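The canonical correlation analysis used above to relate census-derived socioeconomic variables to remotely-sensed land cover variables can be sketched from first principles: whiten each variable block, then take the singular values of the cross-covariance between the whitened blocks. The two synthetic variable sets below share one invented latent factor; they are stand-ins, not the Salvador data.

```python
import numpy as np

def cca_correlations(X, Y):
    """Canonical correlations between two variable blocks, largest first,
    via SVD of the whitened cross-covariance."""
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    def whiten(A):
        U, _, _ = np.linalg.svd(A, full_matrices=False)
        return U   # orthonormal basis of the block's column space
    Qx, Qy = whiten(X), whiten(Y)
    s = np.linalg.svd(Qx.T @ Qy, compute_uv=False)
    return np.clip(s, 0.0, 1.0)

rng = np.random.default_rng(1)
latent = rng.standard_normal((500, 1))                 # shared deprivation axis
X = latent @ rng.standard_normal((1, 3)) + 0.5 * rng.standard_normal((500, 3))
Y = latent @ rng.standard_normal((1, 4)) + 0.5 * rng.standard_normal((500, 4))
rho = cca_correlations(X, Y)
print(rho)  # the first canonical correlation dominates the rest
```

In the study's setting, each significant canonical axis plays the role of the shared latent factor here, ordering census tracts jointly along socioeconomic and land cover gradients.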
Pickersgill, C H; Marr, C M; Reid, S W
2001-01-01
A quantitative investigation of the variation that can occur during ultrasonography of the equine superficial digital flexor tendon (SDFT) was undertaken. The aim was to use an objective measure, tendon cross-sectional area (CSA), to quantify the variability occurring during ultrasonographic assessment of the equine SDFT. The effects of 3 variables on the CSA measurements were determined: 1) image acquisition operator (IAc): two different operators undertaking the ultrasonographic examination; 2) image analysis operator (IAn): two different operators calculating CSA values from previously stored images; and 3) analytical equipment used during CSA measurement (IEq): 2 different sets of equipment used during calculation of CSA values. CSA measurements thus served as the comparative variable for 3 potential sources of variation: interoperator during image acquisition, interoperator during CSA measurement, and intraoperator when using different analytical equipment. Two operators obtained transverse ultrasonographic images from the forelimb SDFTs of 16 National Hunt (NH) Thoroughbred (TB) racehorses, each undertaking analysis of their own and the other operator's images. One operator analysed their images using 2 sets of equipment. There was no statistically significant difference in the results obtained when different operators undertook image acquisition (P>0.05). At all but the most distal level, there was no significant difference when different equipment was used during analysis (P>0.05). A significant difference (P<0.01) was found when different operators undertook image analysis, one operator consistently returning larger measurements. Different operators undertaking different stages of an examination can therefore introduce significant variability.
To reduce confounding during ultrasonographic investigations involving multiple persons, one operator should undertake image analysis, although different operators may undertake image acquisition.
Variable-Period Undulators For Synchrotron Radiation
Shenoy, Gopal; Lewellen, John; Shu, Deming; Vinokurov, Nikolai
2005-02-22
A new and improved undulator design is provided that enables a variable period length for the production of synchrotron radiation from both medium-energy and high-energy storage rings. The variable period length is achieved using a staggered array of pole pieces made up of high-permeability material, permanent magnet material, or an electromagnetic structure. The pole pieces are separated by a space of variable width; the sum of this space and the pole width therefore defines the period of the undulator. Features and advantages of the invention include broad photon energy tunability, constant-power operation and constant-brilliance operation.
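The abstract defines the period as pole width plus the variable-width space. Combined with the standard planar-undulator resonance relation (a textbook formula, not stated in the patent abstract), a short sketch shows how varying the period tunes the photon energy; beam energy and deflection parameter K below are illustrative choices, not values from the patent.

```python
ELECTRON_REST_MEV = 0.511   # electron rest energy, MeV
HC_EV_NM = 1239.842         # h*c in eV*nm

def undulator_period_mm(pole_width_mm, space_mm):
    """Per the abstract: period = pole width + variable-width space."""
    return pole_width_mm + space_mm

def photon_energy_ev(period_mm, beam_energy_gev, K=1.0, harmonic=1):
    """On-axis odd-harmonic photon energy of a planar undulator:
    lambda_n = (lambda_u / (2 n gamma^2)) * (1 + K^2 / 2)
    """
    gamma = beam_energy_gev * 1e3 / ELECTRON_REST_MEV
    lam_u_nm = period_mm * 1e6  # mm -> nm
    lam_nm = lam_u_nm * (1.0 + K ** 2 / 2.0) / (2.0 * harmonic * gamma ** 2)
    return HC_EV_NM / lam_nm
```

Widening the adjustable space lengthens the period and lowers the first-harmonic photon energy, which is the tunability mechanism the abstract describes.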
Applications of variable speed control for contending with recurrent highway congestion.
DOT National Transportation Integrated Search
2014-07-01
This research project developed vital operational guidelines for design of a variable speed limit (VSL) system and its integrated operations with ramp metering control in contending with recurrent highway congestion. The developed guidelines can serv...
Krad, Ibrahim; Gao, David Wenzhong; Ela, Erik; ...
2017-06-07
The electric power industry landscape is continually evolving. As emerging technologies such as wind and solar generating systems become more cost effective, traditional power system operating strategies will need to be re-evaluated. The presence of wind and solar generation (commonly referred to as variable generation or VG) can increase variability and uncertainty in the net-load profile. One mechanism to mitigate this issue is to schedule and dispatch additional operating reserves. These operating reserves aim to ensure that there is enough capacity online in the system to account for the increased variability and uncertainty occurring at finer temporal resolutions. A new operating reserve strategy, referred to as flexibility reserve, has been introduced in some regions. A similar implementation is explored in this study, and its implications for power system operations are analyzed. Results show that flexibility reserve products can improve economic metrics, particularly in significantly reducing the number of scarcity pricing events, with minimal impacts on reliability metrics and production costs. However, production costs can increase through additional VG curtailment: including the flexible ramping product commits excess thermal capacity that must remain online at the expense of VG output.
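One common way to size such a reserve, used here as an illustrative assumption rather than the paper's actual formulation, is from percentiles of the historical net-load ramp distribution; the net-load series below is synthetic.

```python
import numpy as np

rng = np.random.default_rng(7)
# Synthetic 5-minute net load (load minus wind/solar output), in MW.
t = np.arange(288)  # one day of 5-minute intervals
net_load = 1000 + 150 * np.sin(2 * np.pi * t / 288) + rng.normal(0, 20, 288)

ramps = np.diff(net_load)  # MW change per 5-minute interval

# Size up/down flexibility reserve to cover, e.g., 95% of observed ramps.
flex_up = np.percentile(ramps, 95)
flex_down = -np.percentile(ramps, 5)
```

A higher coverage percentile reduces scarcity events at the cost of holding more thermal capacity online, which is the economics/curtailment trade-off the study quantifies.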
NASA Astrophysics Data System (ADS)
Isobe, Takanori; Kitahara, Tadayuki; Fukutani, Kazuhiko; Shimada, Ryuichi
Variable frequency induction heating has great potential for industrial heating applications due to the possibility of achieving heating distribution control; however, large-scale induction heating with variable frequency has not yet been introduced for practical use. This paper proposes a high frequency soft-switching inverter for induction heating that can achieve variable frequency operation. One challenge of variable frequency induction heating is the increased ratings required of the power electronics. This paper indicates that the current-source-type dc-link configuration and soft-switching characteristics of the proposed inverter make it possible to build a large-scale system with variable frequency capability. A 90-kVA, 150-1000 Hz variable frequency experimental power supply for steel strip induction heating was developed. Experiments confirmed the feasibility of variable frequency induction heating with the proposed converter and the advantages of variable frequency operation.
Transceivers and receivers for quantum key distribution and methods pertaining thereto
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeRose, Christopher; Sarovar, Mohan; Soh, Daniel B.S.
Various technologies for performing continuous-variable (CV) and discrete-variable (DV) quantum key distribution (QKD) with integrated electro-optical circuits are described herein. An integrated DV-QKD system uses Mach-Zehnder modulators to modulate a polarization of photons at a transmitter and select a photon polarization measurement basis at a receiver. An integrated CV-QKD system uses wavelength division multiplexing to send and receive amplitude-modulated and phase-modulated optical signals with a local oscillator signal while maintaining phase coherence between the modulated signals and the local oscillator signal.
Unconditional optimality of Gaussian attacks against continuous-variable quantum key distribution.
García-Patrón, Raúl; Cerf, Nicolas J
2006-11-10
A fully general approach to the security analysis of continuous-variable quantum key distribution (CV-QKD) is presented. Provided that the quantum channel is estimated via the covariance matrix of the quadratures, Gaussian attacks are shown to be optimal against all collective eavesdropping strategies. The proof is made strikingly simple by combining a physical model of measurement, an entanglement-based description of CV-QKD, and a recent powerful result on the extremality of Gaussian states [M. M. Wolf, Phys. Rev. Lett. 96, 080502 (2006)10.1103/PhysRevLett.96.080502].
Continuous-variable quantum-key-distribution protocols with a non-Gaussian modulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leverrier, Anthony; Grangier, Philippe; Laboratoire Charles Fabry, Institut d'Optique, CNRS, Univ. Paris-Sud, Campus Polytechnique, RD 128, F-91127 Palaiseau Cedex
2011-04-15
In this paper, we consider continuous-variable quantum-key-distribution (QKD) protocols which use non-Gaussian modulations. These specific modulation schemes are compatible with very efficient error-correction procedures, hence allowing the protocols to outperform previous protocols in terms of achievable range. In their simplest implementation, these protocols are secure for any linear quantum channels (hence against Gaussian attacks). We also show how the use of decoy states makes the protocols secure against arbitrary collective attacks, which implies their unconditional security in the asymptotic limit.
Ultrasonic attenuation measurements determine onset, degree, and completion of recrystallization
NASA Technical Reports Server (NTRS)
Generazio, E. R.
1988-01-01
Ultrasonic attenuation was measured for cold worked Nickel 200 samples annealed at increasing temperatures. Localized dislocation density variations, crystalline order and volume percent of recrystallized phase were determined over the anneal temperature range using transmission electron microscopy, X-ray diffraction, and metallurgy. The exponent of the frequency dependence of the attenuation was found to be a key variable relating ultrasonic attenuation to the thermal kinetics of the recrystallization process. Identification of this key variable allows for the ultrasonic determination of onset, degree, and completion of recrystallization.
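The key variable identified above is the exponent of the attenuation's frequency dependence, i.e. n in a power law of the form alpha ≈ a·f^n. A minimal sketch of extracting that exponent from attenuation-versus-frequency data by a log-log linear fit (the values used below are synthetic, not the Nickel 200 measurements):

```python
import numpy as np

def attenuation_exponent(freqs_mhz, alphas_db_cm):
    """Fit alpha = a * f**n in log-log space; return (a, n).

    n is the frequency-dependence exponent linked in the abstract to the
    thermal kinetics of recrystallization.
    """
    logf = np.log(np.asarray(freqs_mhz, dtype=float))
    loga = np.log(np.asarray(alphas_db_cm, dtype=float))
    n, log_a = np.polyfit(logf, loga, 1)  # slope = exponent n
    return np.exp(log_a), n
```

Tracking how the fitted n shifts across anneal temperatures is then a proxy for onset, degree, and completion of recrystallization.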
DOE Office of Scientific and Technical Information (OSTI.GOV)
Letlow, K.; Lopreato, S.C.; Meriwether, M.
The institutional aspect of the study attempts to identify possible effects of geothermal research, development, and utilization on the area and its inhabitants in three chapters. Chapters I and II address key socio-economic and demographic variables. The initial chapter provides an overview of the area where the resource is located. Major data are presented that can be used to establish a baseline description of the region for comparison over time and to delineate crucial areas for future study with regard to geothermal development. The chapter highlights some of the variables that reflect the cultural nature of the Gulf Coast, its social characteristics, labor force, and services in an attempt to delineate possible problems with and barriers to the development of geothermal energy in the region. The following chapter focuses on the local impacts of geothermal wells and power-generating facilities using data on such variables as size and nature of construction and operating crews. Data are summarized for the areas studied. A flow chart is utilized to describe research that is needed in order to exploit the resource as quickly and effectively as possible. Areas of interface among various parts of the research that will include exchange of data between the social-cultural group and the institutional, legal, environmental, and resource utilization groups are identified. (MCW)
Serna-Quintero, José Miguel; Camiñas, Juan Antonio; Fernández, Ignacio de Loyola; Real, Raimundo; Macías, David
2017-01-01
Chondrichthyes, which include Elasmobranchii (sharks and batoids) and Holocephali (chimaeras), are a relatively small group in the Mediterranean Sea (89 species) that plays a key role in the ecosystems where it occurs. At present, many species of this group are threatened as a result of anthropogenic effects, including fishing activity. Knowledge of the spatial distribution of these species is of great importance for understanding their ecological role and for the efficient management of their populations, particularly if affected by fisheries. This study aims to analyze the spatial patterns of the distribution of Chondrichthyes species richness in the Mediterranean Sea. Information provided by the studied countries was used to model geographical and ecological variables affecting Chondrichthyes species richness. The species were distributed in 16 Operational Geographical Units (OGUs), derived from the Geographical Sub-Areas (GSA) adopted by the General Fisheries Commission for the Mediterranean (GFCM). Regression analyses with species richness as the target variable were fitted with a set of environmental and geographical variables; the model linking Chondrichthyes richness to distance from the Strait of Gibraltar and to the number of taxonomic families of bony fishes explained richness best. This suggests that both historical and ecological factors affect the current distribution of Chondrichthyes within the Mediterranean Sea. PMID:28406963
Axial and Centrifugal Compressor Mean Line Flow Analysis Method
NASA Technical Reports Server (NTRS)
Veres, Joseph P.
2009-01-01
This paper describes a method to estimate key aerodynamic parameters of single and multistage axial and centrifugal compressors. This mean-line compressor code COMDES provides the capability of sizing single and multistage compressors quickly during the conceptual design process. Based on the compressible fluid flow equations and the Euler equation, the code can estimate rotor inlet and exit blade angles when run in the design mode. The design point rotor efficiency and stator losses are inputs to the code, and are modeled at off design. When run in the off-design analysis mode, it can be used to generate performance maps based on simple models for losses due to rotor incidence and inlet guide vane reset angle. The code can provide an improved understanding of basic aerodynamic parameters such as diffusion factor, loading levels and incidence, when matching multistage compressor blade rows at design and at part-speed operation. Rotor loading levels and relative velocity ratio are correlated to the onset of compressor surge. NASA Stage 37 and the three-stage NASA 74-A axial compressors were analyzed and the results compared to test data. The code has been used to generate the performance map for the NASA 76-B three-stage axial compressor featuring variable geometry. The compressor stages were aerodynamically matched at off-design speeds by adjusting the variable inlet guide vane and variable stator geometry angles to control the rotor diffusion factor and incidence angles.
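The COMDES code itself is not reproduced here, but the diffusion factor it uses to correlate loading with surge onset has a standard definition (Lieblein's), which can be sketched directly; the numerical inputs below are illustrative, not values from the NASA test cases.

```python
def diffusion_factor(v1, v2, dv_theta, solidity):
    """Lieblein diffusion factor for a compressor blade row.

    D = 1 - V2/V1 + |dV_theta| / (2 * sigma * V1)

    v1, v2     : inlet and exit relative velocities
    dv_theta   : change in tangential (swirl) velocity across the row
    solidity   : blade chord / spacing (sigma)
    """
    return 1.0 - v2 / v1 + abs(dv_theta) / (2.0 * solidity * v1)
```

In a mean-line matching loop like the one described, the stator and inlet guide vane reset angles would be iterated until D for each rotor stays below a chosen loading limit at the off-design speed.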
Medical Logistics Lessons Observed During Operations Enduring Freedom and Iraqi Freedom.
Dole, Mark J; Kissane, Jonathan M
2016-01-01
Medical Logistics (MEDLOG) is a function of the Army's integrated System for Health that provides the medical products and specialized logistics services required to deliver health protection and care under all operational conditions. In unified land operations, MEDLOG is an inherent function of Health Service Support (HSS), which also includes casualty care and medical evacuation. This paper focuses on a few key lessons observed during Operations Enduring Freedom and Iraqi Freedom with direct implications for the support of HSS in future operations as envisioned in the Army Operating Concept and the Joint Concept for Health Services. It also examines a few key enablers that helped mitigate these challenges that are not yet fully acknowledged in Army Medical Department doctrine, policy, and planning.
Pathan, Sameer A; Bhutta, Zain A; Moinudheen, Jibin; Jenkins, Dominic; Silva, Ashwin D; Sharma, Yogdutt; Saleh, Warda A; Khudabakhsh, Zeenat; Irfan, Furqan B; Thomas, Stephen H
2016-01-01
Background: Standard Emergency Department (ED) operations goals include minimization of the time interval (tMD) between patients' initial ED presentation and initial physician evaluation. This study assessed factors known (or suspected) to influence tMD with a two-step goal. The first step was generation of a multivariate model identifying parameters associated with prolongation of tMD at a single study center. The second step was the use of a study center-specific multivariate tMD model as a basis for predictive marginal probability analysis; the marginal model allowed for prediction of the degree of ED operations benefit that would be effected with specific ED operations improvements. Methods: The study was conducted using one month (May 2015) of data obtained from an ED administrative database (EDAD) in an urban academic tertiary ED with an annual census of approximately 500,000; during the study month, the ED saw 39,593 cases. The EDAD data were used to generate a multivariate linear regression model assessing the various demographic and operational covariates' effects on the dependent variable tMD. Predictive marginal probability analysis was used to calculate the relative contributions of key covariates as well as demonstrate the likely tMD impact of modifying those covariates with operational improvements. Analyses were conducted with Stata 14MP, with significance defined at p < 0.05 and confidence intervals (CIs) reported at the 95% level. Results: In an acceptable linear regression model that accounted for just over half of the overall variance in tMD (adjusted r² = 0.51), important contributors to tMD included shift census (p = 0.008), shift time of day (p = 0.002), and physician coverage (p = 0.004). These strong associations remained even after adjusting for each other and other covariates.
Marginal predictive probability analysis was used to predict the overall tMD impact (improvement from 50 to 43 minutes, p < 0.001) of consistent staffing with 22 physicians. Conclusions: The analysis identified expected variables contributing to tMD with regression demonstrating significance and effect magnitude of alterations in covariates including patient census, shift time of day, and number of physicians. Marginal analysis provided operationally useful demonstration of the need to adjust physician coverage numbers, prompting changes at the study ED. The methods used in this analysis may prove useful in other EDs wishing to analyze operations information with the goal of predicting which interventions may have the most benefit.
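The regression-plus-margins workflow can be sketched as follows. All variable names, coefficients, and data below are invented for illustration (the real EDAD covariates and effect sizes differ); the marginal step holds one covariate fixed at a policy value for every observed case and averages the model's predictions, as in the 22-physician analysis above.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500  # synthetic ED visits, not the EDAD data

# Hypothetical covariates: shift census, night-shift indicator, physicians on duty.
census = rng.poisson(60, n).astype(float)
night = rng.integers(0, 2, n).astype(float)
docs = rng.integers(16, 25, n).astype(float)
# Synthetic door-to-physician time (minutes) with an assumed negative doc effect.
t_md = 10 + 0.6 * census + 8 * night - 1.5 * docs + rng.normal(0, 5, n)

# Ordinary least squares fit of the multivariate linear model.
X = np.column_stack([np.ones(n), census, night, docs])
beta, *_ = np.linalg.lstsq(X, t_md, rcond=None)

# Predictive margin: hold physician coverage at 22 for every observed
# case, leave the other covariates as observed, and average predictions.
X22 = X.copy()
X22[:, 3] = 22.0
margin_22 = (X22 @ beta).mean()
```

Comparing `margin_22` against the margin at the current staffing level gives the predicted operational benefit of the staffing change, which is the kind of estimate the study used to justify adjusting physician coverage.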
An Inexpensive Device for Teaching Public Key Encryption
ERIC Educational Resources Information Center
Pendegraft, Norman
2009-01-01
An inexpensive device to assist in teaching the main ideas of public key encryption is described, along with its use in class to illustrate the operation of public key encryption. It illustrates that there are two keys, and is particularly useful for illustrating that privacy is achieved by using the public key. Initial data from in-class use seem to…
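The abstract does not describe the device's mechanics, but the two-key idea it teaches is the textbook RSA asymmetry, which can be shown in a few lines with deliberately tiny, insecure primes (the classic 61/53 classroom example):

```python
# Toy RSA with tiny primes -- a classroom illustration only, never secure.
p, q = 61, 53
n = p * q                  # modulus, published
phi = (p - 1) * (q - 1)
e = 17                     # public exponent, coprime with phi; published
d = pow(e, -1, phi)        # private exponent, kept secret (Python 3.8+)

message = 65
ciphertext = pow(message, e, n)   # anyone holding (e, n) can encrypt
recovered = pow(ciphertext, d, n) # only the holder of d can decrypt
```

The point the device makes, that privacy comes from encrypting with the *public* key, corresponds to the fact that `ciphertext` reveals nothing usable without `d`.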
Biometrics based key management of double random phase encoding scheme using error control codes
NASA Astrophysics Data System (ADS)
Saini, Nirmala; Sinha, Aloka
2013-08-01
In this paper, an optical security system has been proposed in which key of the double random phase encoding technique is linked to the biometrics of the user to make it user specific. The error in recognition due to the biometric variation is corrected by encoding the key using the BCH code. A user specific shuffling key is used to increase the separation between genuine and impostor Hamming distance distribution. This shuffling key is then further secured using the RSA public key encryption to enhance the security of the system. XOR operation is performed between the encoded key and the feature vector obtained from the biometrics. The RSA encoded shuffling key and the data obtained from the XOR operation are stored into a token. The main advantage of the present technique is that the key retrieval is possible only in the simultaneous presence of the token and the biometrics of the user which not only authenticates the presence of the original input but also secures the key of the system. Computational experiments showed the effectiveness of the proposed technique for key retrieval in the decryption process by using the live biometrics of the user.
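The encode-then-XOR key binding described above (a fuzzy-commitment-style construction) can be sketched compactly. In this sketch a simple repetition code stands in for the BCH code, random bit vectors stand in for biometric feature vectors, and the shuffling/RSA layers are omitted; it shows only how error-correction coding absorbs biometric variation during key retrieval.

```python
import numpy as np

REP = 5  # repetition factor; a toy stand-in for the BCH code in the paper

def encode(key_bits):
    """Repeat each key bit REP times (toy error-correcting code)."""
    return np.repeat(key_bits, REP)

def decode(codeword):
    """Majority vote per REP-bit group to correct biometric noise."""
    return (codeword.reshape(-1, REP).sum(axis=1) > REP // 2).astype(int)

def enroll(key_bits, biometric_bits):
    """Lock the key: XOR the encoded key with the user's feature vector."""
    return encode(key_bits) ^ biometric_bits

def retrieve(locked, fresh_biometric_bits):
    """Unlock with a fresh, possibly noisy, biometric reading."""
    return decode(locked ^ fresh_biometric_bits)
```

Because XOR cancels the enrollment biometric, a fresh reading differing in a few bits leaves only a sparse error pattern on the codeword, which the code corrects; a reading from a different user leaves a dense error pattern and retrieval fails, which is the genuine/impostor separation the Hamming-distance analysis measures.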
ERIC Educational Resources Information Center
Reuter, Katherine E.; LeBlanc, Judith M.
Two groups of five preschool children were trained to press a key for marbles for four sessions of variable ratio reinforcement (VR6). Subsequently, response decrement for the groups was compared during conditions of fixed and variable differential reinforcement of other behavior (DRO and VDRO). Fixed DRO was more effective for decreasing response…
ERIC Educational Resources Information Center
Saxon, David; Barkham, Michael
2012-01-01
Objective: To investigate the size of therapist effects using multilevel modeling (MLM), to compare the outcomes of therapists identified as above and below average, and to consider how key variables--in particular patient severity and risk and therapist caseload--contribute to therapist variability and outcomes. Method: We used a large…
Variables Affecting Secondary School Students' Willingness to Eat Genetically Modified Food Crops
ERIC Educational Resources Information Center
Maes, Jasmien; Bourgonjon, Jeroen; Gheysen, Godelieve; Valcke, Martin
2018-01-01
A large-scale cross-sectional study (N = 4002) was set up to determine Flemish secondary school students' willingness to eat genetically modified food (WTE) and to link students' WTE to previously identified key variables from research on the acceptance of genetic modification (GM). These variables include subjective and objective knowledge about…
On FIA Variables For Ecological Use
David C. Chojnacky
2001-01-01
The Forest Inventory and Analysis (FIA) program collects or calculates over 300 variables for its national network of permanent forest plots. However, considerable ecological analysis can be done with only a few key variables. Two examples--Mexican spotted owl habitat in New Mexico and down deadwood in Maine--are used to illustrate the potential of FIA data for...