Experimental design for evaluating WWTP data by linear mass balances.
Le, Quan H; Verheijen, Peter J T; van Loosdrecht, Mark C M; Volcke, Eveline I P
2018-05-15
A stepwise experimental design procedure to obtain reliable data from wastewater treatment plants (WWTPs) was developed. The proposed procedure aims at determining sets of additional measurements (besides available ones) that guarantee the identifiability of key process variables, which means that their value can be calculated from other, measured variables, based on available constraints in the form of linear mass balances. Among all solutions, i.e. all possible sets of additional measurements allowing the identifiability of all key process variables, the optimal solutions were found taking into account two objectives, namely the accuracy of the identified key variables and the cost of additional measurements. The results of this multi-objective optimization problem were represented in a Pareto-optimal front. The presented procedure was applied to a full-scale WWTP. Detailed analysis of the relation between measurements allowed the determination of groups of overlapping mass balances. Adding measured variables could only serve in identifying key variables that appear in the same group of mass balances. Besides, the application of the experimental design procedure to these individual groups significantly reduced the computational effort in evaluating available measurements and planning additional monitoring campaigns. The proposed procedure is straightforward and can be applied to other WWTPs with or without prior data collection. Copyright © 2018 Elsevier Ltd. All rights reserved.
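The following is a minimal sketch, not the authors' implementation: a rank-based test of whether a key variable is identifiable from a set of measured variables under linear mass balances, followed by a simple Pareto filter over candidate measurement sets. The balance matrix, sensor costs, and the use of set size as a stand-in for the paper's accuracy objective are all illustrative assumptions.

```python
# Sketch (assumed example): identifiability under linear mass balances A @ x = 0
# and Pareto selection of additional measurement sets. A, costs, and the second
# objective (set size instead of accuracy) are placeholders.
import itertools
import numpy as np

A = np.array([[1.0, -1.0, -1.0, 0.0],     # e.g. flow split: x0 = x1 + x2
              [0.0,  1.0,  0.0, -1.0]])   # e.g. x1 = x3

def identifiable(A, measured, key):
    """Key variable (column index) is identifiable if its unit vector lies in the
    row space of the balance matrix restricted to the unmeasured columns."""
    if key in measured:
        return True
    unmeasured = [j for j in range(A.shape[1]) if j not in measured]
    A_u = A[:, unmeasured]
    e = np.zeros((1, len(unmeasured)))
    e[0, unmeasured.index(key)] = 1.0
    return np.linalg.matrix_rank(np.vstack([A_u, e])) == np.linalg.matrix_rank(A_u)

cost = {1: 2.0, 2: 1.0, 3: 3.0}            # hypothetical sensor costs per variable
solutions = []
for r in range(1, 4):
    for subset in itertools.combinations(cost, r):
        if identifiable(A, set(subset), key=0):
            solutions.append((sum(cost[j] for j in subset), len(subset), subset))
# Keep non-dominated (Pareto-optimal) measurement sets on (cost, set size).
pareto = [s for s in solutions
          if not any(o[0] <= s[0] and o[1] <= s[1] and (o[0] < s[0] or o[1] < s[1])
                     for o in solutions)]
print(pareto)
```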
Parametric Studies of Flow Separation using Air Injection
NASA Technical Reports Server (NTRS)
Zhang, Wei
2004-01-01
Boundary layer separation causes the airfoil to stall and therefore imposes dramatic performance degradation on the airfoil. In recent years, flow separation control has been one of the active research areas in the field of aerodynamics due to its promising performance improvements for lifting devices. Active flow separation control techniques include steady and unsteady air injection as well as suction on the airfoil surface. This paper focuses on steady and unsteady air injection on the airfoil. Although wind tunnel experiments have revealed performance improvements on the airfoil using injection techniques, the details of how key variables such as the air injection slot geometry and air injection angle affect the effectiveness of flow separation control via air injection have not been studied. A parametric study of both steady and unsteady air injection active flow control will be the main objective for this summer. For steady injection, the key variables include the slot geometry, orientation, spacing, air injection velocity, and injection angle. For unsteady injection, the injection frequency will also be investigated. Key metrics such as lift coefficient, drag coefficient, total pressure loss, and total injection mass will be used to measure the effectiveness of the control technique. A design of experiments using the Box-Behnken design is set up in order to determine how each of the variables affects each of the key metrics. Design of experiments is used so that the number of experimental runs is kept to a minimum while still being able to predict which variables are the key contributors to the responses. The experiments will then be conducted in the 1 ft by 1 ft wind tunnel according to the design-of-experiments settings. The data obtained from the experiments will be imported into JMP, a statistical software package, to generate sets of response surface equations that represent the statistical empirical model for each of the metrics as a function of the key variables. Next, variables such as the slot geometry can be optimized using the built-in optimizer within JMP. Finally, wind tunnel testing will be conducted using the optimized slot geometry and other key variables to verify the empirical statistical model. The long-term goal of this effort is to assess the impacts of active flow control using air injection at the system level, as one of the tasks included in NASA's URETI program with the Georgia Institute of Technology.
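As an illustration of the design-of-experiments step described above, the sketch below builds a standard three-factor Box-Behnken design in coded units and fits a quadratic response surface by ordinary least squares. It is not the study's JMP workflow; the factor labels and the response data are placeholders.

```python
# Sketch (assumed example): 3-factor Box-Behnken design and quadratic
# response-surface fit. Factor names and the response values are placeholders.
import numpy as np

# Standard 3-factor Box-Behnken: 12 edge-midpoint runs + 3 center points.
edges = [(-1, -1, 0), (1, -1, 0), (-1, 1, 0), (1, 1, 0),
         (-1, 0, -1), (1, 0, -1), (-1, 0, 1), (1, 0, 1),
         (0, -1, -1), (0, 1, -1), (0, -1, 1), (0, 1, 1)]
design = np.array(edges + [(0, 0, 0)] * 3, dtype=float)  # columns: slot width, angle, velocity

def quadratic_terms(x):
    a, b, c = x
    return [1, a, b, c, a*b, a*c, b*c, a*a, b*b, c*c]

X = np.array([quadratic_terms(row) for row in design])
y = np.random.default_rng(0).normal(size=len(design))   # placeholder lift-coefficient data
coef, *_ = np.linalg.lstsq(X, y, rcond=None)            # response-surface coefficients
print(dict(zip(["1", "A", "B", "C", "AB", "AC", "BC", "A^2", "B^2", "C^2"], coef.round(3))))
```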
NASA Technical Reports Server (NTRS)
Sullivan, T. J.; Parker, D. E.
1979-01-01
A design technology study was performed to identify a high speed, multistage, variable geometry fan configuration capable of achieving wide flow modulation with near optimum efficiency at the important operating condition. A parametric screening study of the front and rear block fans was conducted in which the influence of major fan design features on weight and efficiency was determined. Key design parameters were varied systematically to determine the fan configuration most suited for a double bypass, variable cycle engine. Two and three stage fans were considered for the front block. A single stage, core driven fan was studied for the rear block. Variable geometry concepts were evaluated to provide near optimum off design performance. A detailed aerodynamic design and a preliminary mechanical design were carried out for the selected fan configuration. Performance predictions were made for the front and rear block fans.
High-efficiency Gaussian key reconciliation in continuous variable quantum key distribution
NASA Astrophysics Data System (ADS)
Bai, ZengLiang; Wang, XuYang; Yang, ShenShen; Li, YongMin
2016-01-01
Efficient reconciliation is a crucial step in continuous variable quantum key distribution. The progressive-edge-growth (PEG) algorithm is an efficient method to construct relatively short block length low-density parity-check (LDPC) codes. The quasi-cyclic construction method can extend short block length codes and further eliminate the shortest cycle. In this paper, by combining the PEG algorithm and the quasi-cyclic construction method, we design long block length irregular LDPC codes with high error-correcting capacity. Based on these LDPC codes, we achieve high-efficiency Gaussian key reconciliation with slice reconciliation based on multilevel coding/multistage decoding, with an efficiency of 93.7%.
High-efficiency reconciliation for continuous variable quantum key distribution
NASA Astrophysics Data System (ADS)
Bai, Zengliang; Yang, Shenshen; Li, Yongmin
2017-04-01
Quantum key distribution (QKD) is the most mature application of quantum information technology. Information reconciliation is a crucial step in QKD and significantly affects the final secret key rates shared between two legitimate parties. We analyze and compare various construction methods of low-density parity-check (LDPC) codes and design high-performance irregular LDPC codes with a block length of 10^6. Starting from these good codes and exploiting the slice reconciliation technique based on multilevel coding and multistage decoding, we realize high-efficiency Gaussian key reconciliation with efficiency higher than 95% for signal-to-noise ratios above 1. Our demonstrated method can be readily applied in continuous variable QKD.
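For context, the reconciliation efficiency quoted in abstracts like the two above is usually defined as the extracted rate divided by the AWGN channel capacity at the operating signal-to-noise ratio. The sketch below is illustrative only and does not reproduce the authors' LDPC constructions.

```python
# Sketch (illustrative): reconciliation efficiency beta = R / C(SNR), where R is
# the practically extracted rate and C is the AWGN channel capacity.
import math

def awgn_capacity(snr):
    return 0.5 * math.log2(1.0 + snr)

def efficiency(extracted_rate, snr):
    return extracted_rate / awgn_capacity(snr)

snr = 1.0                                  # signal-to-noise ratio of 1 (0 dB)
print(awgn_capacity(snr))                  # 0.5 bit per symbol
print(efficiency(0.5 * 0.95, snr))         # extracting 95% of capacity -> beta = 0.95
```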
A conceptual framework for the domain of evidence-based design.
Ulrich, Roger S; Berry, Leonard L; Quan, Xiaobo; Parish, Janet Turner
2010-01-01
The physical facilities in which healthcare services are performed play an important role in the healing process. Evidence-based design in healthcare is a developing field of study that holds great promise for benefiting key stakeholders: patients, families, physicians, and nurses, as well as other healthcare staff and organizations. In this paper, the authors present and discuss a conceptual framework intended to capture the current domain of evidence-based design in healthcare. In this framework, the built environment is represented by nine design variable categories: audio environment, visual environment, safety enhancement, wayfinding system, sustainability, patient room, family support spaces, staff support spaces, and physician support spaces. Furthermore, a series of matrices is presented that indicates knowledge gaps concerning the relationship between specific healthcare facility design variable categories and participant and organizational outcomes. From this analysis, the authors identify fertile research opportunities from the perspectives of key stakeholders.
Designing a better weather display
NASA Astrophysics Data System (ADS)
Ware, Colin; Plumlee, Matthew
2012-01-01
The variables most commonly displayed on weather maps are atmospheric pressure, wind speed and direction, and surface temperature. But they are usually shown separately, not together on a single map. As a design exercise, we set the goal of finding out if it is possible to show all three variables (two 2D scalar fields and a 2D vector field) simultaneously such that values can be accurately read using keys for all variables, a reasonable level of detail is shown, and important meteorological features stand out clearly. Our solution involves employing three perceptual "channels", a color channel, a texture channel, and a motion channel in order to perceptually separate the variables and make them independently readable. We conducted an experiment to evaluate our new design both against a conventional solution, and against a glyph-based solution. The evaluation tested the abilities of novice subjects both to read values using a key, and to see meteorological patterns in the data. Our new scheme was superior especially in the representation of wind patterns using the motion channel, and it also performed well enough in the representation of pressure using the texture channel to suggest it as a viable design alternative.
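The sketch below is a static analogue of the multivariate weather map described above, using synthetic data: temperature as a color field, pressure as contour lines, and wind as arrows. It does not reproduce the paper's texture and motion channels, which cannot be shown in a still figure, and the wind field derived from pressure is purely illustrative.

```python
# Sketch (assumed example): one map combining a scalar color field (temperature),
# contour lines (pressure), and a vector field (wind) on synthetic data.
import numpy as np
import matplotlib.pyplot as plt

x, y = np.meshgrid(np.linspace(0, 10, 40), np.linspace(0, 10, 40))
temperature = 15 + 10 * np.sin(x / 3) * np.cos(y / 4)
pressure = 1013 + 8 * np.cos(x / 2) + 5 * np.sin(y / 3)
dpdy, dpdx = np.gradient(pressure)
u, v = -dpdy, dpdx                       # geostrophic-style wind, illustrative only

fig, ax = plt.subplots()
pm = ax.pcolormesh(x, y, temperature, shading="auto", cmap="coolwarm")
cs = ax.contour(x, y, pressure, colors="k", linewidths=0.7)
ax.clabel(cs, fontsize=7)
ax.quiver(x[::4, ::4], y[::4, ::4], u[::4, ::4], v[::4, ::4])
fig.colorbar(pm, ax=ax, label="temperature (deg C)")
plt.show()
```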
Field test of classical symmetric encryption with continuous variables quantum key distribution.
Jouguet, Paul; Kunz-Jacques, Sébastien; Debuisschert, Thierry; Fossier, Simon; Diamanti, Eleni; Alléaume, Romain; Tualle-Brouri, Rosa; Grangier, Philippe; Leverrier, Anthony; Pache, Philippe; Painchault, Philippe
2012-06-18
We report on the design and performance of a point-to-point classical symmetric encryption link with fast key renewal provided by a Continuous Variable Quantum Key Distribution (CVQKD) system. Our system was operational and able to encrypt point-to-point communications during more than six months, from the end of July 2010 until the beginning of February 2011. This field test was the first demonstration of the reliability of a CVQKD system over a long period of time in a server room environment. This strengthens the potential of CVQKD for information technology security infrastructure deployments.
Earth Observatory Satellite system definition study. Report no. 3: Design/cost tradeoff studies
NASA Technical Reports Server (NTRS)
1974-01-01
The key issues in the Earth Observatory Satellite (EOS) program which are subject to configuration study and tradeoff are identified. The issue of a combined operational and research and development program is considered. It is stated that cost and spacecraft weight are the key design variables and design options are proposed in terms of these parameters. A cost analysis of the EOS program is provided. Diagrams of the satellite configuration and subsystem components are included.
Using Learning Analytics to Characterize Student Experimentation Strategies in Engineering Design
ERIC Educational Resources Information Center
Vieira, Camilo; Goldstein, Molly Hathaway; Purzer, Senay; Magana, Alejandra J.
2016-01-01
Engineering design is a complex process both for students to participate in and for instructors to assess. Informed designers use the key strategy of conducting experiments as they test ideas to inform next steps. Conversely, beginning designers experiment less, often with confounding variables. These behaviours are not easy to assess in…
Long-distance continuous-variable quantum key distribution with a Gaussian modulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jouguet, Paul; SeQureNet, 23 avenue d'Italie, F-75013 Paris; Kunz-Jacques, Sebastien
2011-12-15
We designed high-efficiency error correcting codes allowing us to extract an errorless secret key in a continuous-variable quantum key distribution (CVQKD) protocol using a Gaussian modulation of coherent states and a homodyne detection. These codes are available for a wide range of signal-to-noise ratios on an additive white Gaussian noise channel with a binary modulation and can be combined with a multidimensional reconciliation method proven secure against arbitrary collective attacks. This improved reconciliation procedure considerably extends the secure range of a CVQKD with a Gaussian modulation, giving a secret key rate of about 10^-3 bit per pulse at a distance of 120 km for reasonable physical parameters.
Multidisciplinary optimization of controlled space structures with global sensitivity equations
NASA Technical Reports Server (NTRS)
Padula, Sharon L.; James, Benjamin B.; Graves, Philip C.; Woodard, Stanley E.
1991-01-01
A new method for the preliminary design of controlled space structures is presented. The method coordinates standard finite element structural analysis, multivariable controls, and nonlinear programming codes and allows simultaneous optimization of the structures and control systems of a spacecraft. Global sensitivity equations are a key feature of this method. The preliminary design of a generic geostationary platform is used to demonstrate the multidisciplinary optimization method. Fifteen design variables are used to optimize truss member sizes and feedback gain values. The goal is to reduce the total mass of the structure and the vibration control system while satisfying constraints on vibration decay rate. Incorporating the nonnegligible mass of actuators causes an essential coupling between structural design variables and control design variables. The solution of the demonstration problem is an important step toward a comprehensive preliminary design capability for structures and control systems. Use of global sensitivity equations helps solve optimization problems that have a large number of design variables and a high degree of coupling between disciplines.
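A minimal sketch of the global sensitivity equations mentioned above, assuming the standard two-discipline form Y1 = f1(X, Y2), Y2 = f2(X, Y1): the coupled total derivatives dY/dX are obtained by solving one linear system assembled from discipline-level partial derivatives. The numerical values below are placeholders, not the paper's spacecraft model.

```python
# Sketch (assumed example): global sensitivity equations for two coupled
# disciplines. The partial derivatives are placeholders that would normally come
# from finite differencing each discipline alone.
import numpy as np

dF1_dY2 = np.array([[0.3]])   # structural response sensitivity to control outputs
dF2_dY1 = np.array([[0.5]])   # control response sensitivity to structural sizes
dF1_dX  = np.array([[1.2]])   # partials with respect to a shared design variable
dF2_dX  = np.array([[-0.4]])

# Assemble and solve the GSE linear system for total derivatives dY/dX.
n1, n2 = dF1_dY2.shape[0], dF2_dY1.shape[0]
lhs = np.block([[np.eye(n1), -dF1_dY2],
                [-dF2_dY1, np.eye(n2)]])
rhs = np.vstack([dF1_dX, dF2_dX])
dY_dX = np.linalg.solve(lhs, rhs)
print(dY_dX)   # total (coupled) sensitivities of Y1 and Y2 to X
```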
Reasoning about Shape as a Pattern in Variability
ERIC Educational Resources Information Center
Bakker, Arthur
2004-01-01
This paper examines ways in which coherent reasoning about key concepts such as variability, sampling, data, and distribution can be developed as part of statistics education. Instructional activities that could support such reasoning were developed through design research conducted with students in grades 7 and 8. Results are reported from a…
Tiered Pricing: Implications for Library Collections
ERIC Educational Resources Information Center
Hahn, Karla
2005-01-01
In recent years an increasing number of publishers have adopted tiered pricing of journals. The design and implications of tiered-pricing models, however, are poorly understood. Tiered pricing can be modeled using several variables. A survey of current tiered-pricing models documents the range of key variables used. A sensitivity analysis…
PHYSICAL AND OPTICAL PROPERTIES OF STEAM-EXPLODED LASER-PRINTED PAPER
Laser-printed paper was pulped by the steam-explosion process. A full-factorial experimental design was applied to determine the effects of key operating variables on the properties of steam-exploded pulp. The variables were addition level for pulping chemicals (NaOH and/or Na2SO...
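As an illustration of the full-factorial analysis described above, the sketch below builds a two-level full-factorial design and estimates main effects as the difference between mean responses at the high and low levels. The second and third factor names and all response values are assumptions, not the study's data.

```python
# Sketch (assumed example): two-level full-factorial design and main-effect
# estimates. Two factor names and the responses are placeholders.
import itertools
import numpy as np

factors = ["NaOH level", "steam temperature", "retention time"]   # last two assumed
runs = np.array(list(itertools.product([-1, 1], repeat=len(factors))), dtype=float)
response = np.random.default_rng(1).normal(size=len(runs))        # placeholder pulp properties

# Main effect of each factor: mean response at +1 minus mean response at -1.
for j, name in enumerate(factors):
    effect = response[runs[:, j] == 1].mean() - response[runs[:, j] == -1].mean()
    print(f"{name}: {effect:+.3f}")
```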
Analysis of a Preloaded Bolted Joint in a Ceramic Composite Combustor
NASA Technical Reports Server (NTRS)
Hissam, D. Andy; Bower, Mark V.
2003-01-01
This paper presents the detailed analysis of a preloaded bolted joint incorporating ceramic materials. The objective of this analysis is to determine the suitability of a joint design for a ceramic combustor. The analysis addresses critical factors in bolted joint design including preload, preload uncertainty, and load factor. The relationship between key joint variables is also investigated. The analysis is based on four key design criteria, each addressing an anticipated failure mode. The criteria are defined in terms of margin of safety, which must be greater than zero for the design criteria to be satisfied. Since the proposed joint has positive margins of safety, the design criteria are satisfied. Therefore, the joint design is acceptable.
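The sketch below shows generic, textbook-style bolted-joint relations of the kind the abstract refers to: the bolt load as preload plus a stiffness-weighted share of the external load scaled by a load factor, and a margin of safety that must stay positive. It is not the paper's ceramic-joint analysis; all numbers and the load-sharing factor are hypothetical.

```python
# Sketch (generic relations, hypothetical numbers): bolt load and margin of safety.
def bolt_load(preload, external_load, load_factor, stiffness_ratio):
    """Axial bolt load: preload plus the portion of the external load carried by
    the bolt (stiffness_ratio), amplified by the design load factor."""
    return preload + load_factor * stiffness_ratio * external_load

def margin_of_safety(allowable, applied, factor_of_safety):
    """Positive margin means the design criterion is satisfied."""
    return allowable / (factor_of_safety * applied) - 1.0

P = bolt_load(preload=8000.0, external_load=2500.0, load_factor=1.4, stiffness_ratio=0.25)
print(P, margin_of_safety(allowable=16000.0, applied=P, factor_of_safety=1.25))
```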
Quasi-experimental study designs series-paper 9: collecting data from quasi-experimental studies.
Aloe, Ariel M; Becker, Betsy Jane; Duvendack, Maren; Valentine, Jeffrey C; Shemilt, Ian; Waddington, Hugh
2017-09-01
To identify variables that must be coded when synthesizing primary studies that use quasi-experimental designs. All quasi-experimental (QE) designs. When designing a systematic review of QE studies, potential sources of heterogeneity, both theory-based and methodological, must be identified. We outline key components of inclusion criteria for syntheses of quasi-experimental studies. We provide recommendations for coding content-relevant and methodological variables and outline the distinction between bivariate effect sizes and partial (i.e., adjusted) effect sizes. The designs used and the controls used are viewed as of greatest importance. Potential sources of bias and confounding are also addressed. Careful consideration must be given to inclusion criteria and the coding of theoretical and methodological variables during the design phase of a synthesis of quasi-experimental studies. The success of the meta-regression analysis relies on the data available to the meta-analyst. Omission of critical moderator variables (i.e., effect modifiers) will undermine the conclusions of a meta-analysis. Copyright © 2017 Elsevier Inc. All rights reserved.
Ma, Jian; Bai, Bing; Wang, Liu-Jun; Tong, Cun-Zhu; Jin, Ge; Zhang, Jun; Pan, Jian-Wei
2016-09-20
InGaAs/InP single-photon avalanche diodes (SPADs) are widely used in practical applications requiring near-infrared photon counting such as quantum key distribution (QKD). Photon detection efficiency and dark count rate are the intrinsic parameters of InGaAs/InP SPADs, due to the fact that their performances cannot be improved using different quenching electronics given the same operation conditions. After modeling these parameters and developing a simulation platform for InGaAs/InP SPADs, we investigate the semiconductor structure design and optimization. The parameters of photon detection efficiency and dark count rate highly depend on the variables of absorption layer thickness, multiplication layer thickness, excess bias voltage, and temperature. By evaluating the decoy-state QKD performance, the variables for SPAD design and operation can be globally optimized. Such optimization from the perspective of specific applications can provide an effective approach to design high-performance InGaAs/InP SPADs.
Applying quality by design (QbD) concept for fabrication of chitosan coated nanoliposomes.
Pandey, Abhijeet P; Karande, Kiran P; Sonawane, Raju O; Deshmukh, Prashant K
2014-03-01
In the present investigation, a quality by design (QbD) strategy was successfully applied to the fabrication of chitosan-coated nanoliposomes (CH-NLPs) encapsulating a hydrophilic drug. The effects of the processing variables on the particle size, encapsulation efficiency (%EE) and coating efficiency (%CE) of CH-NLPs (prepared using a modified ethanol injection method) were investigated. The concentrations of lipid, cholesterol, drug and chitosan; stirring speed, sonication time; organic:aqueous phase ratio; and temperature were identified as the key factors after risk analysis for conducting a screening design study. A separate study was designed to investigate the robustness of the predicted design space. The particle size, %EE and %CE of the optimized CH-NLPs were 111.3 nm, 33.4% and 35.2%, respectively. The observed responses were in accordance with the predicted response, which confirms the suitability and robustness of the design space for CH-NLP formulation. In conclusion, optimization of the selected key variables will help minimize the problems related to size, %EE and %CE that are generally encountered when scaling up processes for NLP formulations. The robustness of the design space will help minimize both intra-batch and inter-batch variations, which are quite common in the pharmaceutical industry.
Distributed Space Mission Design for Earth Observation Using Model-Based Performance Evaluation
NASA Technical Reports Server (NTRS)
Nag, Sreeja; LeMoigne-Stewart, Jacqueline; Cervantes, Ben; DeWeck, Oliver
2015-01-01
Distributed Space Missions (DSMs) are gaining momentum in their application to earth observation missions owing to their unique ability to increase observation sampling in multiple dimensions. DSM design is a complex problem with many design variables, multiple objectives determining performance and cost, and emergent, often unexpected, behaviors. There are very few open-access tools available to explore the tradespace of variables, minimize cost and maximize performance for pre-defined science goals, and thereby select the optimal design. This paper presents a software tool that can generate multiple DSM architectures based on pre-defined design variable ranges and size those architectures in terms of predefined science and cost metrics. The tool will help a user select Pareto-optimal DSM designs based on design-of-experiments techniques. The tool will be applied to some earth observation examples to demonstrate its applicability in making some key decisions between different performance metrics and cost metrics early in the design lifecycle.
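A minimal sketch of the tradespace step described above: enumerate architectures from design-variable ranges, score each with a performance metric and a cost metric, and keep only the non-dominated designs. The variable ranges and both metric models are hypothetical placeholders, not the tool's actual models.

```python
# Sketch (assumed example): enumerate DSM architectures and keep the Pareto set
# on (performance to maximize, cost to minimize). All models are placeholders.
import itertools

n_satellites = [1, 2, 4, 8]
altitudes_km = [500, 700, 900]

def performance(n, alt):                    # placeholder coverage/revisit metric
    return n * (alt / 500.0) ** 0.5

def cost(n, alt):                           # placeholder cost metric
    return 50.0 * n + 0.05 * alt

archs = [(n, alt, performance(n, alt), cost(n, alt))
         for n, alt in itertools.product(n_satellites, altitudes_km)]
pareto = [a for a in archs
          if not any(b[2] >= a[2] and b[3] <= a[3] and (b[2] > a[2] or b[3] < a[3])
                     for b in archs)]
for n, alt, perf, c in sorted(pareto):
    print(f"{n} sats @ {alt} km  performance {perf:5.2f}  cost {c:6.1f}")
```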
Reverse design and characteristic study of multi-range HMCVT
NASA Astrophysics Data System (ADS)
Zhu, Zhen; Chen, Long; Zeng, Falin
2017-09-01
The reduction of fuel consumption and the increase of transmission efficiency are key problems for agricultural machinery. Many promising technologies, such as hydro-mechanical continuously variable transmissions (HMCVT), are the focus of research and investment, but there is little technical documentation that describes the design principle and presents the design parameters. This paper presents the design approach and a characteristic study of an HMCVT, in order to find a scheme suitable for high-horsepower tractors. The kinematics and dynamics of a high-horsepower tractor were analyzed and, according to the characteristic parameters, a hydro-mechanical continuously variable transmission was designed. Comparison of the experimental and theoretical curves of the stepless speed regulation of the transmission illustrates the rationality of the design scheme.
ERIC Educational Resources Information Center
Nosofsky, Robert M.; Donkin, Chris
2016-01-01
We report an experiment designed to provide a qualitative contrast between knowledge-limited versions of mixed-state and variable-resources (VR) models of visual change detection. The key data pattern is that observers often respond "same" on big-change trials, while simultaneously being able to discriminate between same and small-change…
Emirates Mars Mission (EMM) Overview
NASA Astrophysics Data System (ADS)
Sharaf, Omran; Amiri, Sarah; AlMheiri, Suhail; Alrais, Adnan; Wali, Mohammad; AlShamsi, Zakareyya; AlQasim, Ibrahim; AlHarmoodi, Khuloud; AlTeneiji, Nour; Almatroushi, Hessa; AlShamsi, Maryam; AlAwadhi, Mohsen; McGrath, Michael; Withnell, Pete; Ferrington, Nicolas; Reed, Heather; Landin, Brett; Ryan, Sean; Pramann, Brian
2017-04-01
United Arab Emirates (UAE) entered the space exploration race with the announcement of the Emirates Mars Mission (EMM), the first Arab Islamic mission to another planet, in 2014. Through this mission, UAE is to send an unmanned probe, called the Hope probe, to be launched in summer 2020 and reach Mars by 2021 to coincide with UAE's 50th anniversary. Through a sequence of subsequent maneuvers, the spacecraft will enter a large science orbit that has a periapsis altitude of 20,000 km, an apoapsis altitude of 43,000 km, and an inclination of 25 degrees. The mission is designed to (1) characterize the state of the Martian lower atmosphere on global scales and its geographic, diurnal and seasonal variability, (2) correlate rates of thermal and photochemical atmospheric escape with conditions in the collisional Martian atmosphere, and (3) characterize the spatial structure and variability of key constituents in the Martian exosphere. These objectives will be met by four investigations of diurnal variability on sub-seasonal timescales: (1) determining the three-dimensional thermal state of the lower atmosphere, (2) determining the geographic and diurnal distribution of key constituents in the lower atmosphere, (3) determining the abundance and spatial variability of key neutral species in the thermosphere, and (4) determining the three-dimensional structure and variability of key species in the exosphere. EMM will collect this information about Martian atmospheric circulation and connections through a combination of three distinct instruments that image Mars at visible, thermal infrared and ultraviolet wavelengths: the Emirates eXploration Imager (EXI), the Emirates Mars InfraRed Spectrometer (EMIRS), and the EMM Mars Ultraviolet Spectrometer (EMUS). EMM has passed its Mission Concept Review (MCR), System Requirements Review (SRR), System Design Review (SDR), and Preliminary Design Review (PDR) phases. The mission is led by Emiratis from the Mohammed Bin Rashid Space Centre, Dubai, UAE, and it will expand the nation's human capital through knowledge transfer programs set up with international partners from the University of Colorado Laboratory for Atmospheric and Space Physics (LASP), the University of California Berkeley Space Sciences Lab (SSL), and the Arizona State University (ASU) School of Earth and Space Exploration.
Design Considerations for a New Terminal Area Arrival Scheduler
NASA Technical Reports Server (NTRS)
Thipphavong, Jane; Mulfinger, Daniel
2010-01-01
Design of a terminal area arrival scheduler depends on the interrelationship between throughput, delay, and controller intervention. The main contribution of this paper is an analysis of this interdependence for several stochastic behaviors of expected system performance distributions in the aircraft's time of arrival at the meter fix and runway. Results of this analysis serve to guide the scheduler design choices for key control variables. Two types of variables are analyzed: separation buffers and terminal delay margins. The choice of these decision variables was tested using sensitivity analysis. Analysis suggests that it is best to set the separation buffer at the meter fix to its minimum and adjust the runway buffer to attain the desired system performance. Delay margin was found to have the least effect. These results help characterize the variables most influential in the scheduling operations of terminal area arrivals.
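A minimal Monte Carlo sketch of the buffer trade-off discussed above, under an assumed toy model rather than the paper's scheduler: arrival-time errors are drawn from a normal distribution, and a separation buffer added to the minimum runway spacing trades throughput against the probability of a separation violation. All distributions and numbers are illustrative.

```python
# Sketch (assumed toy model): throughput vs. separation-violation probability as
# a function of the runway separation buffer.
import numpy as np

rng = np.random.default_rng(2)
min_sep = 90.0            # seconds, required spacing at the runway
arrival_sigma = 20.0      # seconds, std of each aircraft's arrival-time error

for buffer in (0.0, 10.0, 20.0, 30.0):
    scheduled_gap = min_sep + buffer
    # Error of the gap between consecutive aircraft = difference of two errors.
    gap_error = rng.normal(0.0, arrival_sigma * np.sqrt(2), size=100_000)
    violations = np.mean(scheduled_gap + gap_error < min_sep)
    throughput = 3600.0 / scheduled_gap
    print(f"buffer {buffer:4.0f}s  throughput {throughput:5.1f} ac/h  "
          f"P(separation violated) {violations:.3f}")
```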
Mixed coherent states in coupled chaotic systems: Design of secure wireless communication
NASA Astrophysics Data System (ADS)
Vigneshwaran, M.; Dana, S. K.; Padmanaban, E.
2016-12-01
A general coupling design is proposed to realize a mixed coherent (MC) state: coexistence of complete synchronization, antisynchronization, and amplitude death in different pairs of similar state variables of the coupled chaotic system. The stability of the coupled system is ensured by a Lyapunov function, and a scaling of each variable is also separately taken care of. When heterogeneity in the form of a parameter mismatch is introduced in the coupled system, the coupling function helps retain its coherence and displays global stability with a renewed scaling factor. The robust synchronization features facilitated by an MC state enable the design of a dual modulation scheme, binary phase shift keying (BPSK) and parameter mismatch shift keying (PMSK), for secure data transmission. Two classes of decoders (coherent and noncoherent) are discussed; the noncoherent decoder shows better performance than the coherent decoder, and a noncoherent demodulator is mostly preferred in biological implant applications. Both modulation schemes are demonstrated numerically using the Lorenz oscillator, and the BPSK scheme is demonstrated experimentally using radio signals.
Selimkhanov, Jangir; Thompson, W. Clayton; Guo, Juen; Hall, Kevin D.; Musante, Cynthia J.
2017-01-01
The design of well-powered in vivo preclinical studies is a key element in building knowledge of disease physiology for the purpose of identifying and effectively testing potential anti-obesity drug targets. However, as a result of the complexity of the obese phenotype, there is limited understanding of the variability within and between study animals of macroscopic endpoints such as food intake and body composition. This, combined with limitations inherent in the measurement of certain endpoints, presents challenges to study design that can have significant consequences for an anti-obesity program. Here, we analyze a large, longitudinal study of mouse food intake and body composition during diet perturbation to quantify the variability and interaction of key metabolic endpoints. To demonstrate how conclusions can change as a function of study size, we show that a simulated pre-clinical study properly powered for one endpoint may lead to false conclusions based on secondary endpoints. We then propose guidelines for endpoint selection and study size estimation under different conditions to facilitate proper power calculation for a more successful in vivo study design. PMID:28392555
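Related to the power considerations above, the sketch below applies the standard two-sample normal-approximation formula for the per-group sample size needed to detect a difference delta in an endpoint with standard deviation sigma. It is not the paper's simulation-based analysis, and the example numbers are illustrative.

```python
# Sketch (standard formula, illustrative numbers): per-group sample size for a
# two-sided test at significance alpha and power 1 - beta.
from scipy.stats import norm

def n_per_group(delta, sigma, alpha=0.05, power=0.8):
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 2 * (z * sigma / delta) ** 2

print(n_per_group(delta=0.5, sigma=1.0))   # about 63 animals per group
```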
Shelate, Pragna; Dave, Divyang
2016-01-01
The objective of this work was design, characterization, and optimization of controlled drug delivery system containing antibiotic drug/s. Osmotic drug delivery system was chosen as controlled drug delivery system. The porous osmotic pump tablets were designed using Plackett-Burman and Box-Behnken factorial design to find out the best formulation. For screening of three categories of polymers, six independent variables were chosen for Plackett-Burman design. Osmotic agent sodium chloride and microcrystalline cellulose, pore forming agent sodium lauryl sulphate and sucrose, and coating agent ethyl cellulose and cellulose acetate were chosen as independent variables. Optimization of osmotic tablets was done by Box-Behnken design by selecting three independent variables. Osmotic agent sodium chloride, pore forming agent sodium lauryl sulphate, and coating agent cellulose acetate were chosen as independent variables. The result of Plackett-Burman and Box-Behnken design and ANOVA studies revealed that osmotic agent and pore former had significant effect on the drug release up to 12 hr. The observed independent variables were found to be very close to predicted values of most satisfactory formulation which demonstrates the feasibility of the optimization procedure in successful development of porous osmotic pump tablets containing antibiotic drug/s by using sodium chloride, sodium lauryl sulphate, and cellulose acetate as key excipients. PMID:27610247
NASA Astrophysics Data System (ADS)
Juszczyk, Michał; Leśniak, Agnieszka; Zima, Krzysztof
2013-06-01
Conceptual cost estimation is important for construction projects. Either underestimation or overestimation of the cost of raising a building may lead to failure of a project. In this paper the authors present the application of multicriteria comparative analysis (MCA) to select the factors influencing the cost of raising residential buildings. The aim of the analysis is to indicate key factors useful in conceptual cost estimation in the early design stage. The key factors are investigated on the basis of elementary information about the function, form and structure of the building, and the primary assumptions of the technological and organizational solutions applied in the construction process. These factors are considered as variables of a model whose aim is to make conceptual cost estimation fast and satisfactorily accurate. The analysis included three steps: preliminary research, choice of a set of potential variables, and reduction of this set to select the final set of variables. Multicriteria comparative analysis is applied to solve the problem. The analysis made it possible to select a group of factors, defined well enough at the conceptual stage of the design process, to be used as the describing variables of the model.
Understanding traffic variations by vehicle classifications
DOT National Transportation Integrated Search
1998-08-01
To provide a better understanding of how short-duration truck volume counts can be used to accurately estimate the key variables needed for design, planning, and operational analyses, the Long-Term Pavement Performance (LTPP) program recently complet...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-18
..., revisions to existing Study visits, and the initiation of methodological substudies. The NCS Vanguard Study... design of the Main Study of the National Children's Study. Background: The National Children's Study is a... questionnaire containing key variables and designed to collect core data at every study visit contact from the...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-30
... Information Collection: The purpose of the proposed methodological study is to continue the Vanguard phase of... design of the Main Study of the National Children's Study. Background The National Children's Study is a... questionnaire containing key variables and designed to collect core data at every study visit contact from the...
Design of Linear Quadratic Regulators and Kalman Filters
NASA Technical Reports Server (NTRS)
Lehtinen, B.; Geyser, L.
1986-01-01
AESOP solves problems associated with design of controls and state estimators for linear time-invariant systems. Systems considered are modeled in state-variable form by set of linear differential and algebraic equations with constant coefficients. Two key problems solved by AESOP are linear quadratic regulator (LQR) design problem and steady-state Kalman filter design problem. AESOP is interactive. User solves design problems and analyzes solutions in single interactive session. Both numerical and graphical information available to user during the session.
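The sketch below shows the standard continuous-time LQR computation that a tool of this kind performs, using SciPy's algebraic Riccati equation solver on a toy double-integrator plant. It is a generic illustration of the LQR design problem, not AESOP itself, and the weighting matrices are arbitrary.

```python
# Sketch (standard LQR equations, toy plant): gain from the continuous-time
# algebraic Riccati equation.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])     # state weighting
R = np.array([[1.0]])        # control weighting

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)          # optimal feedback u = -K x
print(K, np.linalg.eigvals(A - B @ K))   # closed-loop eigenvalues in the left half plane
```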
Framework for a U.S. Geological Survey Hydrologic Climate-Response Program in Maine
Hodgkins, Glenn A.; Lent, Robert M.; Dudley, Robert W.; Schalk, Charles W.
2009-01-01
This report presents a framework for a U.S. Geological Survey (USGS) hydrologic climate-response program designed to provide early warning of changes in the seasonal water cycle of Maine. Climate-related hydrologic changes on Maine's rivers and lakes in the winter and spring during the last century are well documented, and several river and lake variables have been shown to be sensitive to air-temperature changes. Monitoring of relevant hydrologic data would provide important baseline information against which future climate change can be measured. The framework of the hydrologic climate-response program presented here consists of four major parts: (1) identifying homogeneous climate-response regions; (2) identifying hydrologic components and key variables of those components that would be included in a hydrologic climate-response data network - as an example, streamflow has been identified as a primary component, with a key variable of streamflow being winter-spring streamflow timing; the data network would be created by maintaining existing USGS data-collection stations and establishing new ones to fill data gaps; (3) regularly updating historical trends of hydrologic data network variables; and (4) establishing basins for process-based studies. Components proposed for inclusion in the hydrologic climate-response data network have at least one key variable for which substantial historical data are available. The proposed components are streamflow, lake ice, river ice, snowpack, and groundwater. The proposed key variables of each component have extensive historical data at multiple sites and are expected to be responsive to climate change in the next few decades. These variables are also important for human water use and (or) ecosystem function. Maine would be divided into seven climate-response regions that follow major river-basin boundaries (basins subdivided to hydrologic units with 8-digit codes or larger) and have relatively homogeneous climates. Key hydrologic variables within each climate-response region would be analyzed regularly to maintain up-to-date analyses of year-to-year variability, decadal variability, and longer term trends. Finally, one basin in each climate-response region would be identified for process-based hydrologic and ecological studies.
A conceptual framework of outcomes for caregivers of assistive technology users.
Demers, Louise; Fuhrer, Marcus J; Jutai, Jeffrey; Lenker, James; Depa, Malgorzata; De Ruyter, Frank
2009-08-01
To develop and validate the content of a conceptual framework concerning outcomes for caregivers whose recipients are assistive technology users. The study was designed in four stages. First, a list of potential key variables relevant to the caregivers of assistive technology users was generated from a review of the existing literature and semistructured interviews with caregivers. Second, the variables were analyzed, regrouped, and partitioned, using a conceptual mapping approach. Third, the key areas were anchored in a general stress model of caregiving. Finally, the judgments of rehabilitation experts were used to evaluate the conceptual framework. An important result of this study is the identification of a complex set of variables that need to be considered when examining the experience of caregivers of assistive technology users. Stressors, such as types of assistance, number of tasks, and physical effort, are predominant contributors to caregiver outcomes along with caregivers' personal resources acting as mediating factors (intervening variables) and assistive technology acting as a key moderating factor (effect modifier variable). Recipients' use of assistive technology can enhance caregivers' well being because of its potential for alleviating a number of stressors associated with caregiving. Viewed as a whole, this work demonstrates that the assistive technology experience of caregivers has many facets that merit the attention of outcomes researchers.
Effective color design for displays
NASA Astrophysics Data System (ADS)
MacDonald, Lindsay W.
2002-06-01
Visual communication is a key aspect of human-computer interaction, which contributes to the satisfaction of user and application needs. For effective design of presentations on computer displays, color should be used in conjunction with the other visual variables. The general needs of graphic user interfaces are discussed, followed by five specific tasks with differing criteria for display color specification - advertising, text, information, visualization and imaging.
Learning CAD at University through Summaries of the Rules of Design Intent
ERIC Educational Resources Information Center
Barbero, Basilio Ramos; Pedrosa, Carlos Melgosa; Samperio, Raúl Zamora
2017-01-01
The ease with which 3D CAD models may be modified and reused are two key aspects that improve the design-intent variable and that can significantly shorten the development timelines of a product. A set of rules are gathered from various authors that take different 3D modelling strategies into account. These rules are then applied to CAD…
Selimkhanov, J; Thompson, W C; Guo, J; Hall, K D; Musante, C J
2017-08-01
The design of well-powered in vivo preclinical studies is a key element in building the knowledge of disease physiology for the purpose of identifying and effectively testing potential antiobesity drug targets. However, as a result of the complexity of the obese phenotype, there is limited understanding of the variability within and between study animals of macroscopic end points such as food intake and body composition. This, combined with limitations inherent in the measurement of certain end points, presents challenges to study design that can have significant consequences for an antiobesity program. Here, we analyze a large, longitudinal study of mouse food intake and body composition during diet perturbation to quantify the variability and interaction of the key metabolic end points. To demonstrate how conclusions can change as a function of study size, we show that a simulated preclinical study properly powered for one end point may lead to false conclusions based on secondary end points. We then propose the guidelines for end point selection and study size estimation under different conditions to facilitate proper power calculation for a more successful in vivo study design.
NASA Technical Reports Server (NTRS)
Meyn, Larry A.
2018-01-01
One of the goals of NASA's Revolutionary Vertical Lift Technology Project (RVLT) is to provide validated tools for multidisciplinary design, analysis and optimization (MDAO) of vertical lift vehicles. As part of this effort, the software package, RotorCraft Optimization Tools (RCOTOOLS), is being developed to facilitate incorporating key rotorcraft conceptual design codes into optimizations using the OpenMDAO multi-disciplinary optimization framework written in Python. RCOTOOLS, also written in Python, currently supports the incorporation of the NASA Design and Analysis of RotorCraft (NDARC) vehicle sizing tool and the Comprehensive Analytical Model of Rotorcraft Aerodynamics and Dynamics II (CAMRAD II) analysis tool into OpenMDAO-driven optimizations. Both of these tools use detailed, file-based inputs and outputs, so RCOTOOLS provides software wrappers to update input files with new design variable values, execute these codes and then extract specific response variable values from the file outputs. These wrappers are designed to be flexible and easy to use. RCOTOOLS also provides several utilities to aid in optimization model development, including Graphical User Interface (GUI) tools for browsing input and output files in order to identify text strings that are used to identify specific variables as optimization input and response variables. This paper provides an overview of RCOTOOLS and its use
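The sketch below illustrates the general file-based wrapper pattern described above: substitute design-variable values into a template input file, run an external analysis executable, and extract a response value from its text output. It is not RCOTOOLS itself; the file names, tag syntax, output label, and executable are all hypothetical.

```python
# Sketch (generic pattern, hypothetical names): file-based wrapper around an
# external analysis code for use inside an optimization loop.
import re
import subprocess
from pathlib import Path

def run_case(design_vars, template="case.template", infile="case.inp",
             command=("analysis_code", "case.inp"), outfile="case.out"):
    text = Path(template).read_text()
    for name, value in design_vars.items():            # replace tags like {{rotor_radius}}
        text = text.replace("{{" + name + "}}", repr(value))
    Path(infile).write_text(text)
    subprocess.run(command, check=True)                 # execute the external code
    out = Path(outfile).read_text()
    match = re.search(r"GROSS WEIGHT\s*=\s*([-+Ee0-9.]+)", out)   # assumed output label
    return float(match.group(1))

# Example usage (placeholder variable names):
# weight = run_case({"rotor_radius": 7.5, "disk_loading": 10.0})
```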
Sensitivity analysis of navy aviation readiness based sparing model
2017-09-01
variability. (See Figure 4, "Research design flowchart".) Figure 4 lays out the four steps of the methodology, starting in the upper left-hand... as a function of changes in key inputs. We develop NAVARM Experimental Designs (NED), a computational tool created by applying a state-of-the-art... experimental design to the NAVARM model. Statistical analysis of the resulting data identifies the most influential cost factors. Those are, in order of
A Review of United States Air Force and Department of Defense Aerospace Propulsion Needs
2006-01-01
Acronym list (fragment): evolved expendable launch vehicle; EHF, extremely high frequency; EMA, electromechanical actuator; EMDP, engine model derivative program; EMTVA... condition. A key aspect of the model was which of the two methods was used: parameters of the system or propulsion variables produced in the design... models for turbopump analysis and design. In addition, the skills required to design a high-performance turbopump are very specialized and must be
Economic evaluation in chronic pain: a systematic review and de novo flexible economic model.
Sullivan, W; Hirst, M; Beard, S; Gladwell, D; Fagnani, F; López Bastida, J; Phillips, C; Dunlop, W C N
2016-07-01
There is unmet need in patients suffering from chronic pain, yet innovation may be impeded by the difficulty of justifying economic value in a field beset by data limitations and methodological variability. A systematic review was conducted to identify and summarise the key areas of variability and limitations in modelling approaches in the economic evaluation of treatments for chronic pain. The results of the literature review were then used to support the development of a fully flexible open-source economic model structure, designed to test structural and data assumptions and act as a reference for future modelling practice. The key model design themes identified from the systematic review included: time horizon; titration and stabilisation; number of treatment lines; choice/ordering of treatment; and the impact of parameter uncertainty (given reliance on expert opinion). Exploratory analyses using the model to compare a hypothetical novel therapy versus morphine as first-line treatments showed cost-effectiveness results to be sensitive to structural and data assumptions. Assumptions about the treatment pathway and choice of time horizon were key model drivers. Our results suggest structural model design and data assumptions may have driven previous cost-effectiveness results and ultimately decisions based on economic value. We therefore conclude that it is vital that future economic models in chronic pain are designed to be fully transparent and hope our open-source code is useful in order to aspire to a common approach to modelling pain that includes robust sensitivity analyses to test structural and parameter uncertainty.
Phase-noise limitations in continuous-variable quantum key distribution with homodyne detection
NASA Astrophysics Data System (ADS)
Corvaja, Roberto
2017-02-01
In continuous-variable quantum key distribution with coherent states, the advantage of performing the detection using standard telecom components is counterbalanced by the lack of a stable phase reference in homodyne detection, due to the complexity of optical phase-locking circuits and to the unavoidable phase noise of lasers, which introduces a degradation of the achievable secure key rate. Pilot-assisted phase-noise estimation and postdetection compensation techniques are used to implement a protocol with coherent states where a local laser is employed and is not locked to the received signal, but a postdetection phase correction is applied. Here the reduction of the secure key rate caused by the laser phase noise is analytically evaluated for both individual and collective attacks, and a pilot-assisted phase estimation scheme is proposed, outlining the tradeoff in the system design between phase noise and spectral efficiency. The optimal modulation variance as a function of the amount of phase noise is derived.
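The sketch below is an assumed toy model, not the paper's derivation: it simulates how a small residual phase error inflates the noise of a Gaussian-modulated quadrature and compares the empirical excess noise with the small-angle estimate V_A * sigma_phi^2. Channel loss and detector noise are ignored.

```python
# Sketch (assumed toy model): excess noise from residual phase error in a
# Gaussian-modulated quadrature.
import numpy as np

rng = np.random.default_rng(3)
V_A = 4.0                   # modulation variance (shot-noise units)
sigma_phi = 0.05            # residual phase error after pilot-assisted estimation (rad)

n = 1_000_000
x = rng.normal(0.0, np.sqrt(V_A), n)
p = rng.normal(0.0, np.sqrt(V_A), n)
theta = rng.normal(0.0, sigma_phi, n)
x_received = x * np.cos(theta) + p * np.sin(theta)

excess = np.var(x_received - x)
print(excess, V_A * sigma_phi ** 2)   # empirical vs. small-angle estimate
```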
Poitras, Julien; Chauny, Jean-Marc; Lévesque, Jean-Frédéric; Ouimet, Mathieu; Dupuis, Gilles; Tanguay, Alain; Simard-Racine, Geneviève
2015-01-01
Introduction Health services research generates useful knowledge. Promotion of implementation of this knowledge in medical practice is essential. Prior to initiation of a major study on rural emergency departments (EDs), we deployed two knowledge transfer strategies designed to generate interest and engagement from potential knowledge users. The objective of this paper was to review: 1) a combined project launch and media press release strategy, and 2) a pre-study survey designed to survey potential knowledge users' opinions on the proposed study variables. Materials and Methods We evaluated the impact of the project launch (presentation at two conferences hosted by key stakeholders) and media press release via a survey of participants/stakeholders and by calculating the number of media interview requests and reports generated. We used a pre-study survey to collect potential key stakeholders' opinions on the study variables. Results Twenty-one of Quebec's 26 rural EDs participated in the pre-study survey (81% participation rate). The press release about the study generated 51 press articles and 20 media requests for interviews, and contributed to public awareness of a major rural research initiative. In the pre-study survey, thirteen participants (46%) mentioned prior knowledge of the research project. Results from the pre-study survey revealed that all of the potential study variables were considered to be relevant for inclusion in the research project. Respondents also proposed additional variables of interest, including factors promoting retention of human resources. Conclusions The present study demonstrated the potential utility of a two-pronged knowledge transfer strategy, including a combined formal launch and press release, and a pre-study survey designed to ensure that the included variables were of interest to participants and stakeholders. PMID:25849328
Parry, Jason; Harrington, Edward; Rees, Gareth D; McNab, Rod; Smith, Anthony J
2008-02-01
Design and construct a tooth-brushing simulator incorporating control of brushing variables including brushing force, speed and temperature, thereby facilitating greater understanding of their importance in toothpaste abrasion testing methodologies. A thermostable orbital shaker was selected as a base unit and 16- and 24-specimen brushing rigs were constructed to fit inside, consisting of: a square bath partitioned horizontally to provide brushing channels, specimen holders for 25 mm diameter mounted specimens to fit the brushing channels and individually weighted brushing arms, able to support four toothbrush holders suspended over the brushing channels. Brush head holders consisted of individually weighted blocks of Delrin, or PTFE onto which toothbrush heads were fixed. Investigating effects of key design criteria involved measuring abrasion depths of polished human enamel and dentine. The brushing simulator demonstrated good reproducibility of abrasion on enamel and dentine across consecutive brushing procedures. Varying brushing parameters had a significant impact on wear results: increased brushing force demonstrated a trend towards increased wear, with increased reproducibility for greater abrasion levels, highlighting the importance of achieving sufficient wear to optimise accuracy; increasing brushing temperature demonstrated increased enamel abrasion for silica and calcium carbonate systems, which may be related to slurry viscosities and particle suspension; varying brushing speed showed a small effect on abrasion of enamel at lower brushing speed, which may indicate the importance of maintenance of the abrasive in suspension. Adjusting key brushing variables significantly affected wear behaviour. The brushing simulator design provides a valuable model system for in vitro assessment of toothpaste abrasivity and the influence of variables in a controlled manner. Control of these variables will allow more reproducible study of in vitro tooth wear processes.
Linear Modeling of Rotorcraft for Stability Analysis and Preliminary Design
1993-09-01
[MATLAB script fragment: displays the control matrix Bmat (control inputs: lateral cyclic, pedal), prints the eigenvalues of the uncoupled longitudinal plant, and configures the matrix variables V, Amat, Bmat, Rcoup, Flataug, Glataug, Rlataug, Plataug, Rlonaug, and Plonaug.]
Experimental study on all-fiber-based unidimensional continuous-variable quantum key distribution
NASA Astrophysics Data System (ADS)
Wang, Xuyang; Liu, Wenyuan; Wang, Pu; Li, Yongmin
2017-06-01
We experimentally demonstrated an all-fiber-based unidimensional continuous-variable quantum key distribution (CV QKD) protocol and analyzed its security under collective attack in realistic conditions. A pulsed balanced homodyne detector, which could not be accessed by eavesdroppers, with phase-insensitive efficiency and electronic noise, was considered. Furthermore, a modulation method and an improved relative phase-locking technique with one amplitude modulator and one phase modulator were designed. The relative phase could be locked precisely with a standard deviation of 0.5° and a mean of almost zero. Secret key bit rates of 5.4 kbps and 700 bps were achieved for transmission fiber lengths of 30 and 50 km, respectively. The protocol, which simplified the CV QKD system and reduced the cost, displayed a performance comparable to that of a symmetrical counterpart under realistic conditions. It is expected that the developed protocol can facilitate the practical application of the CV QKD.
Contracting to improve your revenue cycle performance.
Welter, Terri L; Semko, George A; Miller, Tony; Lauer, Roberta
2007-09-01
The following key drivers of commercial contract variability can have a material effect on your hospital's revenue cycle: Claim form variance. Benefit design. Contract complexity. Coding variance. Medical necessity. Precertification/authorization. Claim adjudication/appeal requirements. Additional documentation requirements. Timeliness of payment. Third-party payer activity.
Understanding the dynamic effects of returning patients toward emergency department density
NASA Astrophysics Data System (ADS)
Ahmad, Norazura; Zulkepli, Jafri; Ramli, Razamin; Ghani, Noraida Abdul; Teo, Aik Howe
2017-11-01
This paper presents the development of a dynamic hypothesis for the effect of returning patients to the emergency department (ED). A logical tree from the Theory of Constraint known as Current Reality Tree was used to identify the key variables. Then, a hypothetical framework portraying the interrelated variables and its influencing relationships was developed using causal loop diagrams (CLD). The conceptual framework was designed as the basis for the development of a system dynamics model.
Liu, Huolong; Galbraith, S C; Ricart, Brendon; Stanton, Courtney; Smith-Goettler, Brandye; Verdi, Luke; O'Connor, Thomas; Lee, Sau; Yoon, Seongkyu
2017-06-15
In this study, the influence of key process variables (screw speed, throughput and liquid-to-solid (L/S) ratio) of a continuous twin screw wet granulation (TSWG) process was investigated using a central composite face-centered (CCF) experimental design method. Regression models were developed to predict the process responses (motor torque, granule residence time), granule properties (size distribution, volume average diameter, yield, relative width, flowability) and tablet properties (tensile strength). The effects of the three key process variables were analyzed via contour and interaction plots. The experimental results demonstrated that all the process responses, granule properties and tablet properties are influenced by changing the screw speed, throughput and L/S ratio. The TSWG process was optimized to produce granules with a specific volume average diameter of 150 μm and a yield of 95% based on the developed regression models. A design space (DS) was built, based on a volume average granule diameter between 90 and 200 μm and a granule yield larger than 75%, with a failure probability analysis using Monte Carlo simulations. Validation experiments successfully validated the robustness and accuracy of the DS generated using the CCF experimental design in optimizing a continuous TSWG process. Copyright © 2017 Elsevier B.V. All rights reserved.
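A minimal sketch of the design-space step described above, under stated assumptions: placeholder regression models for granule diameter and yield (not the study's fitted models), residual noise added by Monte Carlo sampling, and the acceptance window from the abstract (diameter 90-200 um, yield above 75%) used to compute a failure probability at each process setting.

```python
# Sketch (assumed models): Monte Carlo failure probability over a grid of two
# process variables. The regression coefficients and noise levels are placeholders.
import numpy as np

rng = np.random.default_rng(4)

def diameter_model(ls_ratio, screw_speed):        # hypothetical fitted regression (um)
    return 60.0 + 450.0 * ls_ratio - 0.05 * screw_speed

def yield_model(ls_ratio, screw_speed):           # hypothetical fitted regression (%)
    return 55.0 + 120.0 * ls_ratio + 0.02 * screw_speed

for ls in (0.15, 0.20, 0.25, 0.30):
    for speed in (400, 600, 800):
        d = diameter_model(ls, speed) + rng.normal(0, 10.0, 10_000)   # residual noise
        y = yield_model(ls, speed) + rng.normal(0, 3.0, 10_000)
        p_fail = np.mean(~((d >= 90) & (d <= 200) & (y > 75)))
        print(f"L/S {ls:.2f}  speed {speed} rpm  P(fail) {p_fail:.3f}")
```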
Managerial process improvement: a lean approach to eliminating medication delivery.
Hussain, Aftab; Stewart, LaShonda M; Rivers, Patrick A; Munchus, George
2015-01-01
Statistical evidence shows that medication errors are a major cause of injuries and a concern for all health care organizations. Despite all the efforts to improve the quality of care, the lack of understanding and the inability of management to design a robust system that will strategically target those factors is a major cause of distress. The paper aims to discuss these issues. Achieving optimum organizational performance requires two key variables: work process factors and human performance factors. The approach is that healthcare administrators must take both variables into account when designing a strategy to reduce medication errors. However, strategies that will combat such phenomena require that managers and administrators understand the key factors that are causing medication delivery errors. The authors recommend that healthcare organizations implement the Toyota Production System (TPS) combined with human performance improvement (HPI) methodologies to eliminate medication delivery errors in hospitals. Despite all the efforts to improve the quality of care, there continues to be a lack of understanding and an inability of management to design a robust system that will strategically target those factors associated with medication errors. This paper proposes a solution to an ambiguous workflow process using the TPS combined with the HPI system.
Improved specificity of TALE-based genome editing using an expanded RVD repertoire.
Miller, Jeffrey C; Zhang, Lei; Xia, Danny F; Campo, John J; Ankoudinova, Irina V; Guschin, Dmitry Y; Babiarz, Joshua E; Meng, Xiangdong; Hinkley, Sarah J; Lam, Stephen C; Paschon, David E; Vincent, Anna I; Dulay, Gladys P; Barlow, Kyle A; Shivak, David A; Leung, Elo; Kim, Jinwon D; Amora, Rainier; Urnov, Fyodor D; Gregory, Philip D; Rebar, Edward J
2015-05-01
Transcription activator-like effector (TALE) proteins have gained broad appeal as a platform for targeted DNA recognition, largely owing to their simple rules for design. These rules relate the base specified by a single TALE repeat to the identity of two key residues (the repeat variable diresidue, or RVD) and enable design for new sequence targets via modular shuffling of these units. A key limitation of these rules is that their simplicity precludes options for improving designs that are insufficiently active or specific. Here we address this limitation by developing an expanded set of RVDs and applying them to improve the performance of previously described TALEs. As an extreme example, total conversion of a TALE nuclease to new RVDs substantially reduced off-target cleavage in cellular studies. By providing new RVDs and design strategies, these studies establish options for developing improved TALEs for broader application across medicine and biotechnology.
Sensitivity study of Space Station Freedom operations cost and selected user resources
NASA Technical Reports Server (NTRS)
Accola, Anne; Fincannon, H. J.; Williams, Gregory J.; Meier, R. Timothy
1990-01-01
The results of sensitivity studies performed to estimate probable ranges for four key Space Station parameters using the Space Station Freedom's Model for Estimating Space Station Operations Cost (MESSOC) are discussed. The variables examined are grouped into five main categories: logistics, crew, design, space transportation system, and training. The modification of these variables implies programmatic decisions in areas such as orbital replacement unit (ORU) design, investment in repair capabilities, and crew operations policies. The model utilizes a wide range of algorithms and an extensive trial logistics data base to represent Space Station operations. The trial logistics data base consists largely of a collection of the ORUs that comprise the mature station, and their characteristics based on current engineering understanding of the Space Station. A nondimensional approach is used to examine the relative importance of variables on parameters.
The art of spacecraft design: A multidisciplinary challenge
NASA Technical Reports Server (NTRS)
Abdi, F.; Ide, H.; Levine, M.; Austel, L.
1989-01-01
Actual design turn-around time has become shorter due to the use of optimization techniques introduced into the design process. What, how and when to use these optimization techniques may be the key factor for future aircraft engineering operations. Another important aspect of this technique is that complex physical phenomena can be modeled by a simple mathematical equation. The new powerful multilevel methodology reduces time-consuming analysis significantly while maintaining the coupling effects. This simultaneous analysis method stems from the implicit function theorem and system sensitivity derivatives of input variables. Use of the Taylor series expansion and finite differencing technique for sensitivity derivatives in each discipline makes this approach unique for screening dominant variables from nondominant variables. In this study, the current Computational Fluid Dynamics (CFD) aerodynamic and sensitivity derivative/optimization techniques are applied to a simple cone-type forebody of a high-speed vehicle configuration to understand basic aerodynamic/structure interaction in a hypersonic flight condition.
NASA Astrophysics Data System (ADS)
Phan, Duoc T.; Lim, James B. P.; Sha, Wei; Siew, Calvin Y. M.; Tanyimboh, Tiku T.; Issa, Honar K.; Mohammad, Fouad A.
2013-04-01
Cold-formed steel portal frames are a popular form of construction for low-rise commercial, light industrial and agricultural buildings with spans of up to 20 m. In this article, a real-coded genetic algorithm is described that is used to minimize the cost of the main frame of such buildings. The key decision variables considered in this proposed algorithm consist of both the spacing and pitch of the frame as continuous variables, as well as the discrete section sizes. A routine performing the structural analysis and frame design of cold-formed steel sections is embedded into the genetic algorithm. The results show that the real-coded genetic algorithm effectively handles the mixture of design variables, with high robustness and consistency in achieving the optimum solution. All wind load combinations according to the Australian code are considered in this research. Results for frames with knee braces are also included, for which the optimization achieved even larger savings in cost.
Integrated Projectile Systems Synthesis Model (IPSSM)
1976-08-01
[Abstract garbled in extraction. Recoverable index terms: lethal area effectiveness, batch mode, interior ballistics, trajectory calculations, weapon system modeling. Recoverable contents entries: Interior Ballistics (AR), Terminal Effectiveness Calculations (LA), 6-D Trajectory (TR), Recoil Mechanism Design (RM), Sabot Design (SD), Exterior Ballistics Program (AR) with key variable input, and Terminal Effectiveness Program (LA) tables.]
New Discrete Fibonacci Charge Pump Design, Evaluation and Measurement
NASA Astrophysics Data System (ADS)
Matoušek, David; Hospodka, Jiří; Šubrt, Ondřej
2017-06-01
This paper focuses on the practical aspects of the realisation of Dickson and Fibonacci charge pumps. The standard Dickson charge pump circuit solution and a new Fibonacci charge pump implementation are compared. Both charge pumps were designed, evaluated by LTspice XVII simulations, and realised in discrete form on a printed circuit board (PCB). Finally, key parameters such as the output voltage, efficiency and rise time, as well as the effects of variable power supply and clock frequency, were measured.
NASA Technical Reports Server (NTRS)
Welch, Gerard E.
2011-01-01
The main rotors of the NASA Large Civil Tilt-Rotor notional vehicle operate over a wide speed range, from 100% at take-off to 54% at cruise. The variable-speed power turbine offers one approach by which to effect this speed variation. Key aero-challenges include high work factors at cruise and wide (40 to 60 deg.) incidence variations in blade and vane rows over the speed range. The turbine design approach must optimize cruise efficiency and minimize off-design penalties at take-off. The accuracy of the off-design incidence loss model is therefore critical to the turbine design. In this effort, 3-D computational analyses are used to assess the variation of turbine efficiency with speed change. The conceptual design of a 4-stage variable-speed power turbine for the Large Civil Tilt-Rotor application is first established at the meanline level. The design of 2-D airfoil sections and resulting 3-D blade and vane rows is documented. Three-dimensional Reynolds-Averaged Navier-Stokes computations are used to assess the design and off-design performance of an embedded 1.5-stage portion of the turbine (Rotor 1, Stator 2, and Rotor 2). The 3-D computational results yield the same efficiency versus speed trends predicted by meanline analyses, supporting the design choice to execute the turbine design at the cruise operating speed.
A Requirements-Driven Optimization Method for Acoustic Treatment Design
NASA Technical Reports Server (NTRS)
Berton, Jeffrey J.
2016-01-01
Acoustic treatment designers have long been able to target specific noise sources inside turbofan engines. Facesheet porosity and cavity depth are key design variables of perforate-over-honeycomb liners that determine levels of noise suppression as well as the frequencies at which suppression occurs. Layers of these structures can be combined to create a robust attenuation spectrum that covers a wide range of frequencies. Looking to the future, rapidly-emerging additive manufacturing technologies are enabling new liners with multiple degrees of freedom, and new adaptive liners with variable impedance are showing promise. More than ever, there is greater flexibility and freedom in liner design. Subject to practical considerations, liner design variables may be manipulated to achieve a target attenuation spectrum. But characteristics of the ideal attenuation spectrum can be difficult to know. Many multidisciplinary system effects govern how engine noise sources contribute to community noise. Given a hardwall fan noise source to be suppressed, and using an analytical certification noise model to compute a community noise measure of merit, the optimal attenuation spectrum can be derived using multidisciplinary systems analysis methods. The subject of this paper is an analytical method that derives the ideal target attenuation spectrum that minimizes noise perceived by observers on the ground.
Modeling of laser transmission contour welding process using FEA and DoE
NASA Astrophysics Data System (ADS)
Acherjee, Bappa; Kuar, Arunanshu S.; Mitra, Souren; Misra, Dipten
2012-07-01
In this research, a systematic investigation on laser transmission contour welding process is carried out using finite element analysis (FEA) and design of experiments (DoE) techniques. First of all, a three-dimensional thermal model is developed to simulate the laser transmission contour welding process with a moving heat source. The commercial finite element code ANSYS® multi-physics is used to obtain the numerical results by implementing a volumetric Gaussian heat source, and combined convection-radiation boundary conditions. Design of experiments together with regression analysis is then employed to plan the experiments and to develop mathematical models based on simulation results. Four key process parameters, namely power, welding speed, beam diameter, and carbon black content in absorbing polymer, are considered as independent variables, while maximum temperature at weld interface, weld width, and weld depths in transparent and absorbing polymers are considered as dependent variables. Sensitivity analysis is performed to determine how different values of an independent variable affect a particular dependent variable.
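The abstract above describes fitting regression models to FEA-generated responses and then performing a sensitivity analysis over the process parameters. A minimal Python sketch of that generic DoE-plus-regression step follows; the data set, model form and effect sizes are synthetic stand-ins, not the study's simulation results.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative stand-in for the FEA-generated data set: four coded factors
# (power, welding speed, beam diameter, carbon-black content) and one
# response (maximum weld-interface temperature). Values are synthetic.
rng = np.random.default_rng(1)
n = 30
df = pd.DataFrame(rng.uniform(-1, 1, size=(n, 4)),
                  columns=["power", "speed", "beam_d", "cb"])
df["t_max"] = (900 + 120*df.power - 80*df.speed - 60*df.beam_d
               + 40*df.cb + 25*df.power*df.cb + rng.normal(0, 10, n))

# Polynomial regression model of the kind fitted to simulation results
# (one interaction and one quadratic term shown for brevity).
model = smf.ols("t_max ~ power + speed + beam_d + cb + power:cb "
                "+ I(power**2)", data=df).fit()
print(model.params)

# One-at-a-time sensitivity: predicted response change over the full coded
# range of each factor with the other factors held at their centre values.
for factor in ["power", "speed", "beam_d", "cb"]:
    lo = df.assign(**{factor: -1}).mean().to_frame().T
    hi = df.assign(**{factor: +1}).mean().to_frame().T
    delta = model.predict(hi)[0] - model.predict(lo)[0]
    print(f"{factor}: predicted t_max change over full range = {delta:.1f}")
```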
How Task Features Impact Evidence from Assessments Embedded in Simulations and Games
ERIC Educational Resources Information Center
Almond, Russell G.; Kim, Yoon Jeon; Velasquez, Gertrudes; Shute, Valerie J.
2014-01-01
One of the key ideas of evidence-centered assessment design (ECD) is that task features can be deliberately manipulated to change the psychometric properties of items. ECD identifies a number of roles that task-feature variables can play, including determining the focus of evidence, guiding form creation, determining item difficulty and…
Variability of furrow infiltration and irrigation performance in a macroporous soil
USDA-ARS?s Scientific Manuscript database
The study of spatial and temporal variations of infiltration in furrows is essential for the design and management of surface irrigation. A key difficulty in quantifying the process is that infiltration is dependent on the depth of flow, which varies along a furrow and with time. An additional diffi...
ERIC Educational Resources Information Center
Nyaanga, Solomon G.
2012-01-01
This research investigates the impact of telecommuting intensity (hours worked/week from home) on worker perceived outcomes such as job satisfaction, productivity, organizational commitment. Data was collected and analyzed from a large U.S. Federal Department. The conceptual research model and design include three key mediating variables, one…
ERIC Educational Resources Information Center
Tynan, Joshua J.; Somers, Cheryl L.; Gleason, Jamie H.; Markman, Barry S.; Yoon, Jina
2015-01-01
With Bronfenbrenner's (1977) ecological theory and other multifactor models (e.g. Pianta, 1999; Prinstein, Boergers, & Spirito, 2001) underlying this study design, the purpose was to examine, simultaneously, key variables in multiple life contexts (microsystem, mesosystem, exosystem levels) for their individual and combined roles in predicting…
USDA-ARS?s Scientific Manuscript database
HMW glutenin subunits are the most important determinants of wheat (Triticum aestivum L.) bread-making quality, and subunit composition explains a large percentage of the variability observed between genotypes. Experiments were designed to elevate expression of a key native HMW glutenin subunit (1D...
The Promise of Virtual Teams: Identifying Key Factors in Effectiveness and Failure
ERIC Educational Resources Information Center
Horwitz, Frank M.; Bravington, Desmond; Silvis, Ulrik
2006-01-01
Purpose: The aim of the investigation is to identify enabling and disenabling factors in the development and operation of virtual teams; to evaluate the importance of factors such as team development, cross-cultural variables, leadership, communication and social cohesion as contributors to virtual team effectiveness. Design/methodology/approach:…
Modeling of crude oil biodegradation using two phase partitioning bioreactor.
Fakhru'l-Razi, A; Peyda, Mazyar; Ab Karim Ghani, Wan Azlina Wan; Abidin, Zurina Zainal; Zakaria, Mohamad Pauzi; Moeini, Hassan
2014-01-01
In this work, crude oil biodegradation has been optimized in a solid-liquid two phase partitioning bioreactor (TPPB) by applying a response surface methodology-based D-optimal design. Three key factors, including phase ratio, substrate concentration in the solid organic phase, and sodium chloride concentration in the aqueous phase, were taken as independent variables, while the efficiency of the biodegradation of absorbed crude oil on polymer beads was considered the dependent variable. Commercial thermoplastic polyurethane (Desmopan®) was used as the solid phase in the TPPB. The designed experiments were carried out batchwise using a mixed acclimatized bacterial consortium. Optimum combinations of key factors with a statistically significant cubic model were used to maximize biodegradation in the TPPB. The validity of the model was successfully verified by the good agreement between the model-predicted and experimental results. When applying the optimum parameters, gas chromatography-mass spectrometry showed a significant reduction in n-alkanes and low molecular weight polycyclic aromatic hydrocarbons. This consequently highlights the practical applicability of TPPB in crude oil biodegradation. © 2014 American Institute of Chemical Engineers.
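The record above optimizes a fitted response surface over a constrained factor region. The short Python sketch below shows that final optimization step under stated assumptions: the quadratic surrogate and its coefficients are illustrative placeholders, not the D-optimal model reported in the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical fitted response surface for biodegradation efficiency (%) as a
# function of coded phase ratio (x1), substrate loading (x2) and NaCl (x3).
# Coefficients are placeholders standing in for the paper's fitted model.
def efficiency(x):
    x1, x2, x3 = x
    return (70 + 8*x1 + 5*x2 - 6*x3
            - 4*x1**2 - 3*x2**2 - 5*x3**2 + 2*x1*x2)

# Maximize within the coded design region [-1, 1]^3.
res = minimize(lambda x: -efficiency(x), x0=np.zeros(3), bounds=[(-1, 1)]*3)
print("optimal coded settings:", np.round(res.x, 2))
print("predicted efficiency:", round(efficiency(res.x), 1), "%")
```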
Trauma systems and the costs of trauma care.
Goldfarb, M G; Bazzoli, G J; Coffey, R M
1996-01-01
OBJECTIVE. This study examines the cost of providing trauma services in trauma centers organized by publicly administered trauma systems, compared to hospitals not part of a formal trauma system. DATA SOURCES AND STUDY SETTING. Secondary administrative discharge abstracts for a national sample of severely injured trauma patients in 44 trauma centers and 60 matched control hospitals for the year 1987 were used. STUDY DESIGN. Retrospective univariate and multivariate analyses were conducted to examine the impact of formal trauma systems and trauma center designation on the costs of treating trauma patients. Key dependent variables included length of stay, charge per day per patient, and charge per hospital stay. Key impact variables were type of trauma system and level of trauma designation. Control variables included patient, hospital, and community characteristics. DATA COLLECTION/EXTRACTION METHODS. Data were selected for hospitals based on (1) a large national hospital discharge database, the Hospital Cost and Utilization Project, 1980-1987 (HCUP-2) and (2) a special survey of trauma systems and trauma designation undertaken by the Hospital Research and Educational Trust of the American Hospital Association. PRINCIPAL FINDINGS. The results show that publicly designated Level I trauma centers, which are the focal point of most trauma systems, have the highest charge per case, the highest average charge per day, and similar or longer average lengths of stay than other hospitals. These findings persist after controlling for patient injury and health status, and for demographic characteristics and hospital and community characteristics. CONCLUSIONS. Prior research shows that severely injured trauma patients have greater chances of survival when treated in specialized trauma centers. However, findings here should be of concern to the many states developing trauma systems since the high costs of Level I centers support limiting the number of centers designated at this level and/or reconsidering the requirements placed on these centers. PMID:8617611
Sharmin, Moushumi; Raij, Andrew; Epstien, David; Nahum-Shani, Inbal; Beck, J Gayle; Vhaduri, Sudip; Preston, Kenzie; Kumar, Santosh
2015-09-01
We investigate needs, challenges, and opportunities in visualizing time-series sensor data on stress to inform the design of just-in-time adaptive interventions (JITAIs). We identify seven key challenges: massive volume and variety of data, complexity in identifying stressors, scalability of space, multifaceted relationship between stress and time, a need for representation at multiple granularities, interperson variability, and limited understanding of JITAI design requirements due to its novelty. We propose four new visualizations based on one million minutes of sensor data (n=70). We evaluate our visualizations with stress researchers (n=6) to gain first insights into its usability and usefulness in JITAI design. Our results indicate that spatio-temporal visualizations help identify and explain between- and within-person variability in stress patterns and contextual visualizations enable decisions regarding the timing, content, and modality of intervention. Interestingly, a granular representation is considered informative but noise-prone; an abstract representation is the preferred starting point for designing JITAIs.
NASA Astrophysics Data System (ADS)
Xie, Cailang; Guo, Ying; Liao, Qin; Zhao, Wei; Huang, Duan; Zhang, Ling; Zeng, Guihua
2018-03-01
How to narrow the gap in security between theory and practice has been a notoriously urgent problem in quantum cryptography. Here, we analyze and provide experimental evidence of the clock jitter effect on a practical continuous-variable quantum key distribution (CV-QKD) system. Clock jitter is a random noise that is always present in the clock synchronization of a practical CV-QKD system; it may compromise system security through its impact on data sampling and parameter estimation. In particular, the practical security of CV-QKD with different levels of clock jitter against collective attack is analyzed theoretically for different repetition frequencies; the numerical simulations indicate that clock jitter has more impact in a high-speed scenario. Furthermore, a simplified experiment is designed to investigate the influence of the clock jitter.
Ashengroph, Morahem; Nahvi, Iraj; Amini, Jahanshir
2013-01-01
For all industrial processes, modelling, optimisation and control are key to enhancing productivity and ensuring product quality. In the current study, the optimization of process parameters for improving the conversion of isoeugenol to vanillin by Psychrobacter sp. CSW4 was investigated by means of the Taguchi approach and Box-Behnken statistical design under resting cell conditions. The Taguchi design was employed for screening the significant variables in the bioconversion medium. Sequentially, Box-Behnken design experiments under Response Surface Methodology (RSM) were used for further optimization. Four factors (isoeugenol, NaCl, biomass and Tween 80 initial concentrations), which have significant effects on vanillin yield, were selected from ten variables by the Taguchi experimental design. With the regression coefficient analysis in the Box-Behnken design, a relationship between vanillin production and the four significant variables was obtained, and the optimum levels of the four variables were as follows: initial isoeugenol concentration 6.5 g/L, initial Tween 80 concentration 0.89 g/L, initial NaCl concentration 113.2 g/L and initial biomass concentration 6.27 g/L. Under these optimized conditions, the maximum predicted concentration of vanillin was 2.25 g/L. These optimized values of the factors were validated in a triplicate shaking flask study, and an average of 2.19 g/L vanillin, corresponding to a molar yield of 36.3%, was obtained after a 24 h bioconversion. The present work is the first to report the application of Taguchi design and response surface methodology for optimizing bioconversion of isoeugenol into vanillin under resting cell conditions.
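Because the study above builds a four-factor Box-Behnken design after Taguchi screening, a minimal Python sketch of how such a design matrix can be constructed in coded units may be useful. This is a generic construction (edge midpoints plus centre points), not the authors' software; the four-factor case gives the standard 27-run layout.

```python
import itertools
import numpy as np

def box_behnken(k, n_center=3):
    """Minimal Box-Behnken design in coded units for k factors: all +/-1
    combinations for each pair of factors with remaining factors at 0,
    plus centre-point replicates."""
    runs = []
    for i, j in itertools.combinations(range(k), 2):
        for a, b in itertools.product((-1, 1), repeat=2):
            row = [0]*k
            row[i], row[j] = a, b
            runs.append(row)
    runs += [[0]*k]*n_center
    return np.array(runs, dtype=float)

# Four significant factors retained after screening (coded levels):
# isoeugenol, Tween 80, NaCl and biomass concentrations.
design = box_behnken(4)
print(design.shape)   # (27, 4): 24 edge-midpoint runs + 3 centre points
```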
Axial and Centrifugal Compressor Mean Line Flow Analysis Method
NASA Technical Reports Server (NTRS)
Veres, Joseph P.
2009-01-01
This paper describes a method to estimate key aerodynamic parameters of single and multistage axial and centrifugal compressors. This mean-line compressor code, COMDES, provides the capability of sizing single and multistage compressors quickly during the conceptual design process. Based on the compressible fluid flow equations and the Euler equation, the code can estimate rotor inlet and exit blade angles when run in the design mode. The design-point rotor efficiency and stator losses are inputs to the code and are modeled at off-design conditions. When run in the off-design analysis mode, it can be used to generate performance maps based on simple models for losses due to rotor incidence and inlet guide vane reset angle. The code can provide an improved understanding of basic aerodynamic parameters such as diffusion factor, loading levels and incidence when matching multistage compressor blade rows at design and at part-speed operation. Rotor loading levels and relative velocity ratio are correlated to the onset of compressor surge. The NASA Stage 37 and the three-stage NASA 74-A axial compressors were analyzed and the results compared to test data. The code has been used to generate the performance map for the NASA 76-B three-stage axial compressor featuring variable geometry. The compressor stages were aerodynamically matched at off-design speeds by adjusting the variable inlet guide vane and variable stator geometry angles to control the rotor diffusion factor and incidence angles.
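Since the abstract above rests on the Euler turbomachinery equation, a brief Python sketch of the Euler work relation that underlies a mean-line sizing pass is shown below. All numerical values are illustrative assumptions, not COMDES inputs or outputs.

```python
# Euler work for a single compressor rotor: dh0 = U2*Cu2 - U1*Cu1.
# The stage pressure ratio then follows from an assumed isentropic
# efficiency. All numbers below are illustrative placeholders.
cp, gamma = 1004.5, 1.4           # J/(kg K) and ratio of specific heats, air
T01 = 288.15                      # inlet stagnation temperature, K
U1, U2 = 250.0, 300.0             # blade speeds at inlet/exit radii, m/s
Cu1, Cu2 = 0.0, 180.0             # tangential flow velocities, m/s
eta_stage = 0.88                  # assumed design-point efficiency

dh0 = U2*Cu2 - U1*Cu1             # specific work, J/kg
dT0 = dh0 / cp
PR = (1.0 + eta_stage*dT0/T01) ** (gamma/(gamma - 1.0))

print(f"specific work      = {dh0/1000:.1f} kJ/kg")
print(f"stage press. ratio = {PR:.3f}")
print(f"stage loading dh0/U2^2 = {dh0/U2**2:.3f}")
```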
Market Evolution: Wholesale Electricity Market Design for 21st Century Power Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cochran, Jaquelin; Miller, Mackay; Milligan, Michael
2013-10-01
Demand for affordable, reliable, domestically sourced, and low-carbon electricity is on the rise. This growing demand is driven in part by evolving public policy priorities, especially reducing the health and environmental impacts of electricity service and expanding energy access to under-served customers. Consequently, variable renewable energy resources comprise an increasing share of electricity generation globally. At the same time, new opportunities for addressing the variability of renewables are being strengthened through advances in smart grids, communications, and technologies that enable dispatchable demand response and distributed generation to extend to the mass market. A key challenge of merging these opportunities is market design, that is, determining how to create incentives and compensate providers justly for attributes and performance that ensure a reliable and secure grid, in a context that fully realizes the potential of a broad array of sources of flexibility in both the wholesale power and retail markets. This report reviews the suite of wholesale power market designs in use and under consideration to ensure adequacy, security, and flexibility in a landscape of significant variable renewable energy. It also examines considerations needed to ensure that wholesale market designs are inclusive of emerging technologies, such as demand response, distributed generation, and storage.
The fuelbed: a key element of the Fuel Characteristic Classification System.
Cynthia L. Riccardi; Roger D. Ottmar; David V. Sandberg; Anne Andreu; Ella Elman; Karen Kopper; Jennifer Long
2007-01-01
Wildland fuelbed characteristics are temporally and spatially complex and can vary widely across regions. To capture this variability, we designed the Fuel Characteristic Classification System (FCCS), a national system to create fuelbeds and classify those fuelbeds for their capacity to support fire and consume fuels. This paper describes the structure of the fuelbeds...
Preliminary low-level waste feed definition guidance - LLW pretreatment interface
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shade, J.W.; Connor, J.M.; Hendrickson, D.W.
1995-02-01
The document describes limits for key constituents in the LLW feed, and the bases for these limits. The potential variability in the stream is then estimated and compared to the limits. Approaches for accommodating uncertainty in feed inventory, processing strategies, and process design (melter and disposal system) are discussed. Finally, regulatory constraints are briefly addressed.
Presenting the Straddle Lemma in an Introductory Real Analysis Course
ERIC Educational Resources Information Center
Soares, A.; dos Santos, A. L.
2017-01-01
In this article, we revisit the concept of strong differentiability of real functions of one variable, underlying the concept of differentiability. Our discussion is guided by the Straddle Lemma, which plays a key role in this context. The proofs of the results presented are designed to meet a young audience in mathematics, typical of students in…
Knowledge Discovery for Transonic Regional-Jet Wing through Multidisciplinary Design Exploration
NASA Astrophysics Data System (ADS)
Chiba, Kazuhisa; Obayashi, Shigeru; Morino, Hiroyuki
Data mining is an important facet of solving multi-objective optimization problems, because it is an effective way to discover design knowledge in the large data sets that such problems generate. In the present study, data mining was performed for a large-scale, real-world multidisciplinary design optimization (MDO) to provide knowledge regarding the design space. The MDO among aerodynamics, structures, and aeroelasticity of a regional-jet wing was carried out using high-fidelity evaluation models with the adaptive range multi-objective genetic algorithm. As a result, nine non-dominated solutions were generated and used for tradeoff analysis among the three objectives. All solutions evaluated during the evolution were analyzed for tradeoffs and the influence of design variables using a self-organizing map to extract key features of the design space. Although the MDO results showed inverted gull-wings as non-dominated solutions, one of the key features found by data mining was the non-gull wing geometry. When this knowledge was applied to one optimum solution, the resulting design was found to have better performance compared with the original geometry designed in the conventional manner.
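The record above mines an archive of evaluated designs with a self-organizing map (SOM). A minimal Python sketch of that analysis pattern follows, assuming the third-party minisom package; the design archive, map size and training settings are illustrative stand-ins, not the study's data or configuration.

```python
import numpy as np
from minisom import MiniSom   # third-party package; pip install minisom

# Stand-in for the archive of evaluated wing designs: rows are designs,
# columns are design variables plus objectives (all values synthetic).
rng = np.random.default_rng(2)
designs = rng.uniform(0, 1, size=(500, 8))

# Normalize, then train a small self-organizing map so that neighbouring
# map nodes correspond to similar regions of the design space.
data = (designs - designs.mean(0)) / designs.std(0)
som = MiniSom(10, 10, data.shape[1], sigma=1.5, learning_rate=0.5,
              random_seed=0)
som.train_random(data, num_iteration=5000)

# Colour each map node by the mean value of one objective (last column)
# of the designs it wins; this is how key features are read off a SOM.
objective = data[:, -1]
node_mean = {}
for x, y_val in zip(data, objective):
    node_mean.setdefault(som.winner(x), []).append(y_val)
for node, vals in sorted(node_mean.items()):
    print(node, round(float(np.mean(vals)), 2))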
Hotspot-Centric De Novo Design of Protein Binders
Fleishman, Sarel J.; Corn, Jacob E.; Strauch, Eva-Maria; Whitehead, Timothy A.; Karanicolas, John; Baker, David
2014-01-01
Protein–protein interactions play critical roles in biology, and computational design of interactions could be useful in a range of applications. We describe in detail a general approach to de novo design of protein interactions based on computed, energetically optimized interaction hotspots, which was recently used to produce high-affinity binders of influenza hemagglutinin. We present several alternative approaches to identify and build the key hotspot interactions within both core secondary structural elements and variable loop regions and evaluate the method's performance in natural-interface recapitulation. We show that the method generates binding surfaces that are more conformationally restricted than previous design methods, reducing opportunities for off-target interactions. PMID:21945116
Message Variability and Heterogeneity: A Core Challenge for Communication Research
Slater, Michael D.; Peter, Jochen; Valkenberg, Patti
2015-01-01
Messages are central to human social experience, and pose key conceptual and methodological challenges in the study of communication. In response to these challenges, we outline a systematic approach to conceptualizing, operationalizing, and analyzing messages. At the conceptual level, we distinguish between two core aspects of messages: message variability (the defined and operationalized features of messages) and message heterogeneity (the undefined and unmeasured features of messages), and suggest preferred approaches to defining message variables. At the operational level, we identify message sampling, selection, and research design strategies responsive to issues of message variability and heterogeneity in experimental and survey research. At the analytical level, we highlight effective techniques to deal with message variability and heterogeneity. We conclude with seven recommendations to increase rigor in the study of communication through appropriately addressing the challenges presented by messages. PMID:26681816
Overview of Variable-Speed Power-Turbine Research
NASA Technical Reports Server (NTRS)
Welch, Gerard E.
2011-01-01
The vertical take-off and landing (VTOL) and high-speed cruise capability of the NASA Large Civil Tilt-Rotor (LCTR) notional vehicle is envisaged to enable increased throughput in the national airspace. A key challenge of the LCTR is the requirement to vary the main rotor speeds from 100% at take-off to near 50% at cruise as required to minimize mission fuel burn. The variable-speed power-turbine (VSPT), driving a fixed gear-ratio transmission, provides one approach for effecting this wide speed variation. The key aerodynamic and rotordynamic challenges of the VSPT were described in the FAP Conference presentation. The challenges include maintaining high turbine efficiency at high work factor, wide (60 deg.) incidence variation in all blade rows due to the speed variation, and operation at low Reynolds numbers (with transitional flow). The PT-shaft of the VSPT must be designed for safe operation in the wide speed range required, and therefore poses challenges associated with rotordynamics. These technical challenges drive research activities underway at NASA. An overview of the NASA SRW VSPT research activities was provided. These activities included conceptual and preliminary aero and mechanical (rotordynamics) design of the VSPT for the LCTR application, experimental and computational research supporting the development of incidence-tolerant blading, and steps toward component-level testing of a variable-speed power-turbine of relevance to the LCTR application.
Identifying Key Features of Effective Active Learning: The Effects of Writing and Peer Discussion
Pangle, Wiline M.; Wyatt, Kevin H.; Powell, Karli N.; Sherwood, Rachel E.
2014-01-01
We investigated some of the key features of effective active learning by comparing the outcomes of three different methods of implementing active-learning exercises in a majors introductory biology course. Students completed activities in one of three treatments: discussion, writing, and discussion + writing. Treatments were rotated weekly between three sections taught by three different instructors in a full factorial design. The data set was analyzed by generalized linear mixed-effect models with three independent variables: student aptitude, treatment, and instructor, and three dependent (assessment) variables: change in score on pre- and postactivity clicker questions, and coding scores on in-class writing and exam essays. All independent variables had significant effects on student performance for at least one of the dependent variables. Students with higher aptitude scored higher on all assessments. Student scores were higher on exam essay questions when the activity was implemented with a writing component compared with peer discussion only. There was a significant effect of instructor, with instructors showing different degrees of effectiveness with active-learning techniques. We suggest that individual writing should be implemented as part of active learning whenever possible and that instructors may need training and practice to become effective with active learning. PMID:25185230
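The study above analyzes assessment outcomes with generalized linear mixed-effect models that include aptitude, treatment and instructor. A small Python sketch of that kind of model fit is shown below, using statsmodels; the data are synthetic and the Gaussian mixed model with instructor as the grouping factor is a simplified stand-in, not the authors' exact specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the course data: exam-essay scores modelled with
# student aptitude and treatment as fixed effects and instructor as a
# random grouping factor.
rng = np.random.default_rng(3)
n = 300
df = pd.DataFrame({
    "aptitude": rng.normal(0, 1, n),
    "treatment": rng.choice(["discussion", "writing", "both"], n),
    "instructor": rng.choice(["A", "B", "C"], n),
})
instructor_effect = df["instructor"].map({"A": 0.0, "B": 0.4, "C": -0.3})
treat_effect = df["treatment"].map({"discussion": 0.0, "writing": 0.5, "both": 0.6})
df["essay_score"] = (3 + 0.8*df["aptitude"] + treat_effect
                     + instructor_effect + rng.normal(0, 1, n))

model = smf.mixedlm("essay_score ~ aptitude + treatment", df,
                    groups=df["instructor"]).fit()
print(model.summary())
```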
Aerodynamic design on high-speed trains
NASA Astrophysics Data System (ADS)
Ding, San-San; Li, Qiang; Tian, Ai-Qin; Du, Jian; Liu, Jia-Li
2016-04-01
Compared with the traditional train, the operational speed of the high-speed train has largely improved, and the dynamic environment of the train has changed from one of mechanical domination to one of aerodynamic domination. The aerodynamic problem has become the key technological challenge of high-speed trains and significantly affects the economy, environment, safety, and comfort. In this paper, the relationships among the aerodynamic design principle, aerodynamic performance indexes, and design variables are first studied, and the research methods of train aerodynamics are proposed, including numerical simulation, a reduced-scale test, and a full-scale test. Technological schemes of train aerodynamics involve the optimization design of the streamlined head and the smooth design of the body surface. Optimization design of the streamlined head includes conception design, project design, numerical simulation, and a reduced-scale test. Smooth design of the body surface is mainly used for the key parts, such as electric-current collecting system, wheel truck compartment, and windshield. The aerodynamic design method established in this paper has been successfully applied to various high-speed trains (CRH380A, CRH380AM, CRH6, CRH2G, and the Standard electric multiple unit (EMU)) that have met expected design objectives. The research results can provide an effective guideline for the aerodynamic design of high-speed trains.
Low-cost housing design and provision: A case study of Kenya
NASA Astrophysics Data System (ADS)
Kabo, Felichism W.
Shelter is as basic a human need as food and water. Today, many people in Third World countries live in sub-standard housing, or lack shelter altogether. Prior research addresses either one of two housing dimensions: broader provision processes, or specific aspects of design. This dissertation is an effort at addressing both dimensions, the underlying premise being that their inter-connectedness demands an integrative approach. More specifically, this dissertation is a combined strategy case study of housing design and provision in Kenya, a sub-Saharan African country with serious shelter problems. A majority of Kenya's urban population lives in slums or squatter settlements. This dissertation covers four major areas of housing design and provision in Kenya: building materials, user preferences for building materials and housing designs, interior layouts, and the organizational context of the housing sector. These four areas are theoretically unified by Canter's (1977) model of place. Each of the first three areas (housing design) relates to one or more of the three domains in the model. The fourth area (housing provision) pertains to the model's context and framework. The technical building materials research reveals the feasibility of making low-cost materials (soil-cements) with satisfactory engineering performance. The research in preference for building materials reveals that the two independent variables, soil and mix, have a significant effect on potential users' ratings. The housing preference study reveals that of the four independent variables, design and type had a significant effect on potential users' ratings, while materials and construction method did not have a significant effect. The interior layout studies reveal important associations between spatial configurations and a key space (the kitchen), and between configuration and conceptualizations of living, cooking, and sleeping spaces. The findings from the studies of preferences and interior layouts are then synthesized in the development of a low-cost housing prototype. Lastly, analysis of the organizational context reveals notable links between nominal housing-related responsibilities, and the potential power and influence of key organizations. The potential effects of the spatial context on housing organizations are also explored. Later, the key organizations are restructured to address shortcomings identified at the organizational and sectoral levels.
Maternal responses to child frustration and requests for help in dyads with fragile X syndrome.
Wheeler, A C; Hatton, D; Holloway, V T; Sideris, J; Neebe, E C; Roberts, J E; Reznick, J S
2010-06-01
Variability in behaviour displayed by children with fragile X syndrome (FXS) may be partially attributable to environmental factors such as maternal responsivity. The purpose of this study was to explore variables associated with maternal behaviour during a task designed to elicit frustration in their children with FXS. Forty-six mother-child dyads, in which the child had full-mutation FXS, were observed in their homes during a task designed to elicit frustration in the child. Each child was given a wrong set of keys and asked to open a box to retrieve a desired toy. Mothers were provided with the correct set of keys and instructed to intervene when they perceived their child was getting too frustrated. Child-expressed frustration and requests for help and maternal behaviours (comforting, negative control, and encouraging/directing) were observed and coded. Maternal variables (e.g. depression, stress, education levels), child variables (e.g. autistic behaviours, age, medication use) and child behaviours (frustration, requests for help) were explored as predictors of maternal behaviour. Almost all mothers intervened to help their children and most used encouraging/directing behaviours, whereas very few used comforting or negative control. Child age and child behaviours during the frustrating event were significant predictors of encouraging/directing behaviours in the mothers. Children whose mothers reported higher depressive symptomology used fewer requests for help, and mothers of children with more autistic behaviours used more negative control. The results of this study suggest that child age and immediate behaviours are more strongly related to maternal responsivity than maternal traits such as depression and stress.
Adaptive data rate SSMA system for personal and mobile satellite communications
NASA Technical Reports Server (NTRS)
Ikegami, Tetsushi; Takahashi, Takashi; Arakaki, Yoshiya; Wakana, Hiromitsu
1995-01-01
An adaptive data rate SSMA (spread spectrum multiple access) system is proposed for mobile and personal multimedia satellite communications without the aid of system control earth stations. This system has a constant occupied bandwidth and has variable data rates and processing gains to mitigate communication link impairments such as fading, rain attenuation and interference, as well as to handle variable data rates on demand. Proof-of-concept hardware for a 6 MHz bandwidth transponder is developed; it uses offset QPSK (quadrature phase shift keying) and MSK (minimum shift keying) for direct-sequence spread spectrum modulation and handles data rates of 4 to 64 kbps. The RS422 data interface, low-rate voice and H.261 video codecs are installed. The receiver is designed with a coherent matched-filter technique to achieve fast code acquisition, AFC (automatic frequency control) and coherent detection with minimum hardware losses in a single matched-filter circuit. This receiver structure facilitates variable data rates on demand during a call. This paper outlines the proposed system and the performance of the prototype equipment.
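The key trade described above, constant occupied bandwidth with data rate and processing gain varied inversely, can be illustrated with a short Python sketch. The chip rate below is an assumed value chosen for illustration, not a parameter from the paper.

```python
import math

# In a constant-bandwidth direct-sequence system the chip rate is fixed,
# so lowering the user data rate raises the processing gain and hardens
# the link against fading, rain attenuation and interference.
chip_rate = 2.5e6     # chips/s, assumed for a roughly 6 MHz channel
for data_rate in (4e3, 8e3, 16e3, 32e3, 64e3):
    pg = chip_rate / data_rate
    print(f"{data_rate/1e3:5.0f} kbps -> processing gain "
          f"{pg:7.1f} ({10*math.log10(pg):.1f} dB)")
```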
A Mindful Approach to Teaching Emotional Intelligence to Undergraduate Students Online and in Person
ERIC Educational Resources Information Center
Cotler, Jami L.; DiTursi, Dan; Goldstein, Ira; Yates, Jeff; DelBelso, Deb
2017-01-01
In this paper we examine whether emotional intelligence (EI) can be taught online and, if so, what key variables influence the successful implementation of this online learning model. Using a 3 x 2 factorial quasi-experimental design, this mixed-methods study found that a team-based learning environment using a blended teaching approach, supported…
ERIC Educational Resources Information Center
Loreman, Tim; Sharma, Umesh; Forlin, Chris
2013-01-01
This paper reports the results of an international study examining pre-service teacher reports of teaching self-efficacy for inclusive education; principally focusing on the explanatory relationship between a scale designed to measure teaching self-efficacy in this area and key demographic variables within Canada, Australia, Hong Kong, and…
Funkenbusch, Paul D; Rotella, Mario; Ercoli, Carlo
2015-04-01
Laboratory studies of tooth preparation are often performed under a limited range of conditions involving single values for all variables other than the one being tested. In contrast, in clinical settings not all variables can be tightly controlled. For example, a new dental rotary cutting instrument may be tested in the laboratory by making a specific cut with a fixed force, but in clinical practice the instrument must make different cuts with individual dentists applying a range of different forces. Therefore, the broad applicability of laboratory results to diverse clinical conditions is uncertain, and the comparison of effects across studies is difficult. The purpose of this study was to examine the effect of 9 process variables on dental cutting in a single experiment, allowing each variable to be robustly tested over a range of values for the other 8 and permitting a direct comparison of the relative importance of each on the cutting process. The effects of 9 key process variables on the efficiency of a simulated dental cutting operation were measured. A fractional factorial experiment was conducted by using a computer-controlled, dedicated testing apparatus to simulate dental cutting procedures and Macor blocks as the cutting substrate. Analysis of variance (ANOVA) was used to judge the statistical significance (α=.05). Five variables consistently produced large, statistically significant effects (target applied load, cut length, starting rpm, diamond grit size, and cut type), while 4 variables produced relatively small, statistically insignificant effects (number of cooling ports, rotary cutting instrument diameter, disposability, and water flow rate). The control exerted by the dentist, simulated in this study by targeting a specific level of applied force, was the single most important factor affecting cutting efficiency. Cutting efficiency was also significantly affected by factors simulating patient/clinical circumstances as well as hardware choices. These results highlight the importance of local clinical conditions (procedure, dentist) in understanding dental cutting procedures and in designing adequate experimental methodologies for future studies. Copyright © 2015 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
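The screening analysis described above, a two-level factorial experiment followed by ANOVA to separate large from negligible effects, can be sketched generically in Python. The factor names mirror the abstract, but the design, effect sizes and response values below are synthetic placeholders, not the study's measurements.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Synthetic stand-in for the cutting experiment: two-level coded factors and
# a cutting-efficiency response, with effect sizes chosen so that load, cut
# length and grit size dominate, as a screening ANOVA would reveal.
rng = np.random.default_rng(4)
n = 64
factors = ["load", "cut_length", "rpm", "grit", "cut_type",
           "ports", "diameter", "disposable", "water_flow"]
df = pd.DataFrame(rng.choice([-1, 1], size=(n, len(factors))), columns=factors)
df["efficiency"] = (10 + 3.0*df.load + 2.0*df.cut_length + 1.5*df.grit
                    + 1.0*df.rpm + 0.8*df.cut_type
                    + rng.normal(0, 1.0, n))

# Main-effects ANOVA at alpha = 0.05 separates large from negligible effects.
model = smf.ols("efficiency ~ " + " + ".join(factors), data=df).fit()
print(anova_lm(model, typ=2).round(3))
```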
NASA Astrophysics Data System (ADS)
McKnight, G. P.; Henry, C. P.
2008-03-01
Morphing or reconfigurable structures potentially allow for previously unattainable vehicle performance by permitting several optimized structures to be achieved using a single platform. The key to enabling this technology in applications such as aircraft wings, nozzles, and control surfaces is new engineered materials that can achieve the necessary deformations while limiting losses in parasitic actuation mass and structural efficiency (stiffness/weight). These materials should exhibit precise control of deformation properties and provide high stiffness when exercised through large deformations. In this work, we build upon previous efforts in segmented-reinforcement variable stiffness composites employing shape memory polymers to create prototype hybrid composite materials that combine the benefits of cellular materials with those of discontinuous-reinforcement composites. These composites help overcome two key challenges for shearing wing skins: resistance to out-of-plane buckling from actuation-induced shear deformation, and resistance to membrane deflections resulting from distributed aerodynamic pressure loading. We designed, fabricated, and tested composite materials intended for shear deformation and addressed out-of-plane deflections in variable-area wing skins. Our designs are based on the kinematic engineering of reinforcement platelets such that the desired microstructural kinematics is achieved through prescribed boundary conditions. We achieve this kinematic control by etching sheets of metallic reinforcement into regular patterns of platelets and connecting ligaments. This kinematic engineering allows optimization of material properties for a known deformation pathway. We use mechanical analysis and full-field photogrammetry to relate local-scale kinematics and strains to global deformations for both axial tension loading and shear loading with a pinned-diamond type fixture. The Poisson ratio of the kinematically engineered composite is ~3x higher than that of prototypical orthotropic variable stiffness composites. This design allows us to create composite materials that have high stiffness in the cold state below the SMP Tg (4-14 GPa) and yet achieve large composite shear strains (5-20%) in the hot state (above the SMP Tg).
NASA Astrophysics Data System (ADS)
Reynerson, Charles Martin
This research has been performed to create concept design and economic feasibility data for space business parks. A space business park is a commercially run multi-use space station facility designed for use by a wide variety of customers. Both space hardware and crew are considered as revenue producing payloads. Examples of commercial markets may include biological and materials research, processing, and production, space tourism habitats, and satellite maintenance and resupply depots. This research develops a design methodology and an analytical tool to create feasible preliminary design information for space business parks. The design tool is validated against a number of real facility designs. Appropriate model variables are adjusted to ensure that statistical approximations are valid for subsequent analyses. The tool is used to analyze the effect of various payload requirements on the size, weight and power of the facility. The approach for the analytical tool was to input potential payloads as simple requirements, such as volume, weight, power, crew size, and endurance. In creating the theory, basic principles are used and combined with parametric estimation of data when necessary. Key system parameters are identified for overall system design. Typical ranges for these key parameters are identified based on real human spaceflight systems. To connect the economics to design, a life-cycle cost model is created based upon facility mass. This rough cost model estimates potential return on investments, initial investment requirements and number of years to return on the initial investment. Example cases are analyzed for both performance and cost driven requirements for space hotels, microgravity processing facilities, and multi-use facilities. In combining both engineering and economic models, a design-to-cost methodology is created for more accurately estimating the commercial viability for multiple space business park markets.
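The record above describes a mass-driven life-cycle cost model used to estimate initial investment and years to return on investment. A minimal Python sketch of such a parametric model is given below; all cost rates, masses and revenue figures are illustrative assumptions, not values from the research.

```python
# Minimal mass-driven cost sketch: facility cost scales with dry mass,
# launch cost with the same mass, and annual revenue comes from leased
# payload capacity. All rates below are illustrative guesses.
def payback_years(dry_mass_kg, annual_revenue, annual_ops_cost,
                  hardware_cost_per_kg=150_000.0,
                  launch_cost_per_kg=20_000.0):
    initial_investment = dry_mass_kg * (hardware_cost_per_kg + launch_cost_per_kg)
    net_annual = annual_revenue - annual_ops_cost
    if net_annual <= 0:
        return float("inf"), initial_investment
    return initial_investment / net_annual, initial_investment

years, capex = payback_years(dry_mass_kg=80_000,
                             annual_revenue=2.5e9,
                             annual_ops_cost=1.1e9)
print(f"initial investment: ${capex/1e9:.1f}B, payback in {years:.1f} years")
```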
NASA Technical Reports Server (NTRS)
Murthy, T. Sreekanta
1988-01-01
Several key issues involved in the application of formal optimization techniques to helicopter airframe structures for vibration reduction are addressed. Considerations which are important in the optimization of real airframe structures are discussed, including those necessary to establish a relevant set of design variables, constraints and objectives appropriate to the conceptual, preliminary and detailed design phases and to the ground and flight test phases of airframe design. A methodology is suggested for optimization of airframes in the various phases of design. Optimization formulations that are unique to helicopter airframes are described and expressions for vibration-related functions are derived. Using a recently developed computer code, the optimization of a Bell AH-1G helicopter airframe is demonstrated.
Multi-Criterion Preliminary Design of a Tetrahedral Truss Platform
NASA Technical Reports Server (NTRS)
Wu, K. Chauncey
1995-01-01
An efficient method is presented for multi-criterion preliminary design and demonstrated for a tetrahedral truss platform. The present method requires minimal analysis effort and permits rapid estimation of optimized truss behavior for preliminary design. A 14-m-diameter, 3-ring truss platform represents a candidate reflector support structure for space-based science spacecraft. The truss members are divided into 9 groups by truss ring and position. Design variables are the cross-sectional area of all members in a group, and are either 1, 3 or 5 times the minimum member area. Non-structural mass represents the node and joint hardware used to assemble the truss structure. Taguchi methods are used to efficiently identify key points in the set of Pareto-optimal truss designs. Key points identified using Taguchi methods are the maximum frequency, minimum mass, and maximum frequency-to-mass ratio truss designs. Low-order polynomial curve fits through these points are used to approximate the behavior of the full set of Pareto-optimal designs. The resulting Pareto-optimal design curve is used to predict frequency and mass for optimized trusses. Performance improvements are plotted in frequency-mass (criterion) space and compared to results for uniform trusses. Application of constraints to frequency and mass and sensitivity to constraint variation are demonstrated.
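The preliminary-design method above identifies key Pareto-optimal truss designs and fits low-order polynomials through them to predict optimized behavior. A small Python sketch of that pattern, extracting a frequency-versus-mass Pareto set from candidate designs and fitting a quadratic through it, follows; the candidate designs and the trend are synthetic placeholders, not the truss data.

```python
import numpy as np

# Synthetic truss candidates: each design has a mass and a fundamental
# frequency (stand-ins for the member-area combinations in the study).
rng = np.random.default_rng(5)
mass = rng.uniform(100, 600, 400)                      # kg
freq = 0.05*mass**0.5 + rng.normal(0, 0.1, 400)        # Hz, loose trend

# Pareto-optimal set: maximize frequency while minimizing mass.
order = np.argsort(mass)
pareto_idx, best_f = [], -np.inf
for i in order:
    if freq[i] > best_f:
        pareto_idx.append(i)
        best_f = freq[i]
pareto_idx = np.array(pareto_idx)

# Low-order polynomial fit through the Pareto designs, used to predict
# optimized frequency for a given mass budget.
coeffs = np.polyfit(mass[pareto_idx], freq[pareto_idx], deg=2)
predict = np.poly1d(coeffs)
print("predicted optimized frequency at 300 kg:", round(predict(300.0), 3), "Hz")
```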
Matched Comparison Group Design Standards in Systematic Reviews of Early Childhood Interventions.
Thomas, Jaime; Avellar, Sarah A; Deke, John; Gleason, Philip
2017-06-01
Systematic reviews assess the quality of research on program effectiveness to help decision makers faced with many intervention options. Study quality standards specify criteria that studies must meet, including accounting for baseline differences between intervention and comparison groups. We explore two issues related to systematic review standards: covariate choice and choice of estimation method. To help systematic reviews develop/refine quality standards and support researchers in using nonexperimental designs to estimate program effects, we address two questions: (1) How well do variables that systematic reviews typically require studies to account for explain variation in key child and family outcomes? (2) What methods should studies use to account for preexisting differences between intervention and comparison groups? We examined correlations between baseline characteristics and key outcomes using Early Childhood Longitudinal Study-Birth Cohort data to address Question 1. For Question 2, we used simulations to compare two methods-matching and regression adjustment-to account for preexisting differences between intervention and comparison groups. A broad range of potential baseline variables explained relatively little of the variation in child and family outcomes. This suggests the potential for bias even after accounting for these variables, highlighting the need for systematic reviews to provide appropriate cautions about interpreting the results of moderately rated, nonexperimental studies. Our simulations showed that regression adjustment can yield unbiased estimates if all relevant covariates are used, even when the model is misspecified, and preexisting differences between the intervention and the comparison groups exist.
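The simulation comparison described above, regression adjustment versus ignoring a covariate that drives both selection and outcomes, can be illustrated with a tiny Python simulation. The data-generating process, effect size and sample sizes below are illustrative assumptions, not the authors' simulation design.

```python
import numpy as np
import statsmodels.api as sm

# Tiny simulation in the spirit of the comparison: when a covariate that
# drives both selection and outcome is observed, regression adjustment
# recovers the true effect; when it is omitted, the estimate is biased.
rng = np.random.default_rng(6)
true_effect, n, reps = 0.30, 2000, 200
est_adjusted, est_omitted = [], []
for _ in range(reps):
    x = rng.normal(0, 1, n)                       # baseline covariate
    p_treat = 1/(1 + np.exp(-1.2*x))              # selection depends on x
    t = rng.binomial(1, p_treat)
    y = true_effect*t + 0.8*x + rng.normal(0, 1, n)
    full = sm.OLS(y, sm.add_constant(np.column_stack([t, x]))).fit()
    naive = sm.OLS(y, sm.add_constant(t)).fit()
    est_adjusted.append(full.params[1])
    est_omitted.append(naive.params[1])

print("bias with covariate adjusted:", round(np.mean(est_adjusted) - true_effect, 3))
print("bias with covariate omitted: ", round(np.mean(est_omitted) - true_effect, 3))
```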
NASA Astrophysics Data System (ADS)
Holburn, E. R.; Bledsoe, B. P.; Poff, N. L.; Cuhaciyan, C. O.
2005-05-01
Using over 300 R/EMAP sites in OR and WA, we examine the relative explanatory power of watershed, valley, and reach scale descriptors in modeling variation in benthic macroinvertebrate indices. Innovative metrics describing flow regime, geomorphic processes, and hydrologic-distance weighted watershed and valley characteristics are used in multiple regression and regression tree modeling to predict EPT richness, % EPT, EPT/C, and % Plecoptera. A nested design using seven ecoregions is employed to evaluate the influence of geographic scale and environmental heterogeneity on the explanatory power of individual and combined scales. Regression tree models are constructed to explain variability while identifying threshold responses and interactions. Cross-validated models demonstrate differences in the explanatory power associated with single-scale and multi-scale models as environmental heterogeneity is varied. Models explaining the greatest variability in biological indices result from multi-scale combinations of physical descriptors. Results also indicate that substantial variation in benthic macroinvertebrate response can be explained with process-based watershed and valley scale metrics derived exclusively from common geospatial data. This study outlines a general framework for identifying key processes driving macroinvertebrate assemblages across a range of scales and establishing the geographic extent at which various levels of physical description best explain biological variability. Such information can guide process-based stratification to avoid spurious comparison of dissimilar stream types in bioassessments and ensure that key environmental gradients are adequately represented in sampling designs.
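The record above relies on cross-validated regression tree models to capture threshold responses in benthic macroinvertebrate indices. A brief Python sketch of that modelling step with scikit-learn follows; the predictor matrix and the threshold effect are synthetic stand-ins, not the R/EMAP data.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the multi-scale predictor matrix (watershed, valley
# and reach descriptors) and an EPT-richness response with a threshold-like
# dependence on the first predictor.
rng = np.random.default_rng(7)
X = rng.uniform(0, 1, size=(300, 6))
y = 20 - 12*(X[:, 0] > 0.6) + 5*X[:, 1] + rng.normal(0, 1.5, 300)

tree = DecisionTreeRegressor(max_depth=3, min_samples_leaf=20, random_state=0)
scores = cross_val_score(tree, X, y, cv=5, scoring="r2")
print("cross-validated R^2:", scores.round(2), "mean:", round(scores.mean(), 2))

# Fitting on all data exposes the split thresholds the trees are used to
# identify (e.g. a change point in the dominant predictor).
tree.fit(X, y)
print("first split feature/threshold:",
      tree.tree_.feature[0], round(float(tree.tree_.threshold[0]), 2))
```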
Programmable rate modem utilizing digital signal processing techniques
NASA Technical Reports Server (NTRS)
Bunya, George K.; Wallace, Robert L.
1989-01-01
The engineering development study to follow was written to address the need for a Programmable Rate Digital Satellite Modem capable of supporting both burst and continuous transmission modes with either binary phase shift keying (BPSK) or quadrature phase shift keying (QPSK) modulation. The preferred implementation technique is an all digital one which utilizes as much digital signal processing (DSP) as possible. Here design tradeoffs in each portion of the modulator and demodulator subsystem are outlined, and viable circuit approaches which are easily repeatable, have low implementation losses and have low production costs are identified. The research involved for this study was divided into nine technical papers, each addressing a significant region of concern in a variable rate modem design. Trivial portions and basic support logic designs surrounding the nine major modem blocks were omitted. In brief, the nine topic areas were: (1) Transmit Data Filtering; (2) Transmit Clock Generation; (3) Carrier Synthesizer; (4) Receive AGC; (5) Receive Data Filtering; (6) RF Oscillator Phase Noise; (7) Receive Carrier Selectivity; (8) Carrier Recovery; and (9) Timing Recovery.
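Because the modem study above centers on BPSK and QPSK modulation implemented digitally, a short Python sketch of the basic bit-to-symbol mappings is included for reference. A Gray-coded, unit-energy QPSK constellation is assumed; this illustrates the mapping only, not the modem's DSP chain.

```python
import numpy as np

# Bit-to-symbol mapping for the two modulations the modem supports.
def bpsk(bits):
    return 2.0*np.asarray(bits) - 1.0                 # 0 -> -1, 1 -> +1

def qpsk(bits):
    b = np.asarray(bits).reshape(-1, 2)               # pairs of bits
    i = 2.0*b[:, 0] - 1.0
    q = 2.0*b[:, 1] - 1.0
    return (i + 1j*q) / np.sqrt(2.0)                  # unit-energy symbols

bits = np.random.default_rng(8).integers(0, 2, 16)
print("BPSK:", bpsk(bits[:4]))
print("QPSK:", np.round(qpsk(bits[:8]), 3))
```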
Impacting load control of floating supported friction plate and its experimental verification
NASA Astrophysics Data System (ADS)
Ning, Keyan; Wang, Yu; Huang, Dingchuan; Yin, Lei
2017-05-01
Friction plates are key components in automobile transmission systems. Unfortunately, owing to tough working conditions such as high impact loads and high temperatures, fracture and plastic deformation are easily observed in friction plates. In order to reduce the impact load and increase the impact resistance and life span of the friction plate, this paper presents a variable-damping design method and structure: by punching holes at key positions of the friction plate and filling them with damping materials, the impact load of the floating-support friction plate can be controlled. Simulation is applied to study the effect of the position and number of damping holes on tooth root stress. Furthermore, a physical test was designed and conducted to validate the correctness and effectiveness of the proposed method. Test results show that the impact load of the new structure is reduced by 40% and its fatigue life is 4.7 times larger. The new structure provides a new way to design floating-support friction plates.
Vera-Villarroel, Pablo; Contreras, Daniela; Lillo, Sebastián; Beyle, Christian; Segovia, Ariel; Rojo, Natalia; Moreno, Sandra; Oyarzo, Francisco
2016-01-01
The perception of colour and its subjective effects are key issues in designing safe and enjoyable bike lanes. This paper addresses the relationship between the colours of bike lane interventions (in particular, pavement painting and intersection design) and the subjective evaluation of liking, visual saliency, and perceived safety related to such an intervention. Utilising images of three real bike lane intersections modified by software to change their colour (five colours in total), this study recruited 538 participants to assess their perception of all fifteen colour-design combinations. A multivariate analysis of covariance (MANCOVA) with the Bonferroni post hoc test was performed to assess the effect of the main conditions (colour and design) on the dependent variables (liking towards the intervention, level of visual saliency of the intersection, and perceived safety of the bike lane). The results showed that the colour red was most positively associated with the outcome variables, followed by yellow and blue. Additionally, it was observed that the effect of colour greatly outweighs the effect of design, suggesting that the right choice and use of colour would increase the effectiveness of bike-lane pavement interventions. Limitations and future directions are discussed.
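A sketch of the multivariate analysis described, using statsmodels' MANOVA with the covariate added to the formula (a MANCOVA-style model); the synthetic data, factor levels, and effect sizes are placeholders.

```python
# Hedged sketch: MANOVA/MANCOVA-style test of colour and design effects on the
# three rated outcomes. All data below are synthetic placeholders.
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(0)
n = 538
ratings = pd.DataFrame({
    "colour": rng.choice(["red", "yellow", "blue", "green", "grey"], size=n),
    "design": rng.choice(["A", "B", "C"], size=n),
    "age": rng.integers(18, 65, size=n),
})
base = (ratings["colour"] == "red").astype(float)      # built-in "red" advantage
ratings["liking"] = 3 + base + rng.normal(0, 1, n)
ratings["saliency"] = 3 + 0.8 * base + rng.normal(0, 1, n)
ratings["perceived_safety"] = 3 + 0.6 * base + rng.normal(0, 1, n)

fit = MANOVA.from_formula(
    "liking + saliency + perceived_safety ~ C(colour) * C(design) + age", data=ratings)
print(fit.mv_test())                                   # Wilks' lambda, Pillai's trace, etc.
```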
B.Z. Yang; R.D. Seale; R. Shmulsky; J. Dahlen; X. Wang
2017-01-01
The identification of strength-reducing characteristics that impact modulus of rupture (MOR) is a key differentiation between lumber grades. Because global design values for MOR are at the fifth percentile level and in-grade lumber can be highly variable, it is important that nondestructive evaluation technology be used to better discern the potential wood strength. In...
Looking at Biotech’s Crystal Ball
KOBER, SCOTT
2005-01-01
Biotech’s course through 2005 will hinge on five key variables: use of electronic medical records to speed data collection; clarifying sources of investment capital; keeping clinicians current; reshaping benefit designs to cope with new and costly products; and, above all, defining and proving the value of biotech products. Here’s a look at how these issues could play out in the year ahead. PMID:23390402
ERIC Educational Resources Information Center
Dotson, Wesley H.
2010-01-01
The purpose of the present study was to identify components of an optional mock exam review session (e.g. requiring students to write answers, providing students grading keys for questions) responsible for improvements in student performance on application-based short-essay exams in an undergraduate behavior modification course. Both…
Designing Wind and Solar Power Purchase Agreements to Support Grid Integration
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Neill, Barbara; Chernyakhovskiy, Ilya
Power purchase agreements (PPAs) represent one of many institutional tools that power systems can use to improve grid services from variable renewable energy (VRE) generators. This fact sheet introduces the concept of PPAs for VRE generators and provides a brief summary of key PPA components that can facilitate VRE generators to enhance grid stability and serve as a source of power system flexibility.
Measuring cardiac waste: the premier cardiac waste measures.
Lowe, Timothy J; Partovian, Chohreh; Kroch, Eugene; Martin, John; Bankowitz, Richard
2014-01-01
The authors developed 8 measures of waste associated with cardiac procedures to assist hospitals in comparing their performance with peer facilities. Measure selection was based on review of the research literature, clinical guidelines, and consultation with key stakeholders. Development and validation used the data from 261 hospitals in a split-sample design. Measures were risk adjusted using Premier's CareScience methodologies or mean peer value based on Medicare Severity Diagnosis-Related Group assignment. High variability was found in resource utilization across facilities. Validation of the measures using item-to-total correlations (range = 0.27-0.78), Cronbach α (.88), and Spearman rank correlation (0.92) showed high reliability and discriminatory power. Because of the level of variability observed among hospitals, this study suggests that there is opportunity for facilities to design successful waste reduction programs targeting cardiac-device procedures.
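The validation statistics named above can be computed directly; the sketch below shows item-to-total correlations, Cronbach's alpha, and a Spearman rank correlation between scores built from two halves of the measure set, using random placeholder data rather than the hospital sample.

```python
# Hedged sketch of the reliability statistics cited in the abstract.
import numpy as np
from scipy.stats import spearmanr

def cronbach_alpha(items):
    """items: (n_hospitals, n_measures) array of waste scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def item_total_correlations(items):
    total = items.sum(axis=1)
    # correlate each item with the total excluding that item
    return [np.corrcoef(items[:, j], total - items[:, j])[0, 1]
            for j in range(items.shape[1])]

rng = np.random.default_rng(0)
scores = rng.normal(size=(261, 8))               # placeholder for the 8 waste measures
rho, p = spearmanr(scores[:, :4].mean(axis=1), scores[:, 4:].mean(axis=1))
print(cronbach_alpha(scores))
print(item_total_correlations(scores))
print(rho)                                       # rank agreement between the two halves
```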
High-rate measurement-device-independent quantum cryptography
NASA Astrophysics Data System (ADS)
Pirandola, Stefano; Ottaviani, Carlo; Spedalieri, Gaetana; Weedbrook, Christian; Braunstein, Samuel L.; Lloyd, Seth; Gehring, Tobias; Jacobsen, Christian S.; Andersen, Ulrik L.
2015-06-01
Quantum cryptography achieves a formidable task—the remote distribution of secret keys by exploiting the fundamental laws of physics. Quantum cryptography is now headed towards solving the practical problem of constructing scalable and secure quantum networks. A significant step in this direction has been the introduction of measurement-device independence, where the secret key between two parties is established by the measurement of an untrusted relay. Unfortunately, although qubit-implemented protocols can reach long distances, their key rates are typically very low, unsuitable for the demands of a metropolitan network. Here we show, theoretically and experimentally, that a solution can come from the use of continuous-variable systems. We design a coherent-state network protocol able to achieve remarkably high key rates at metropolitan distances, in fact three orders of magnitude higher than those currently achieved. Our protocol could be employed to build high-rate quantum networks where devices securely connect to nearby access points or proxy servers.
The unusual suspect: Land use is a key predictor of biodiversity patterns in the Iberian Peninsula
NASA Astrophysics Data System (ADS)
Martins, Inês Santos; Proença, Vânia; Pereira, Henrique Miguel
2014-11-01
Although land use change is a key driver of biodiversity change, related variables such as habitat area and habitat heterogeneity are seldom considered in modeling approaches at larger extents. To address this knowledge gap we tested the contribution of land use related variables to models describing richness patterns of amphibians, reptiles and passerines in the Iberian Peninsula. We analyzed the relationship between species richness and habitat heterogeneity at two spatial resolutions (i.e., 10 km × 10 km and 50 km × 50 km). Using both ordinary least square and simultaneous autoregressive models, we assessed the relative importance of land use variables, climate variables and topographic variables. We also compare the species-area relationship with a multi-habitat model, the countryside species-area relationship, to assess the role of the area of different types of habitats on species diversity across scales. The association between habitat heterogeneity and species richness varied with the taxa and spatial resolution. A positive relationship was detected for all taxa at a grain size of 10 km × 10 km, but only passerines responded at a grain size of 50 km × 50 km. Species richness patterns were well described by abiotic predictors, but habitat predictors also explained a considerable portion of the variation. Moreover, species richness patterns were better described by a multi-habitat species-area model, incorporating land use variables, than by the classic power model, which only includes area as the single explanatory variable. Our results suggest that the role of land use in shaping species richness patterns goes beyond the local scale and persists at larger spatial scales. These findings call for the need of integrating land use variables in models designed to assess species richness response to large scale environmental changes.
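As a toy illustration of this comparison, the sketch below fits the classic log-log power species-area model against a simple multi-habitat alternative in which the areas of individual land-use classes enter as separate predictors; the data are synthetic and the multi-habitat model is a simplified stand-in for the countryside species-area relationship, not the authors' formulation.

```python
# Hedged sketch: classic power-law SAR (log S = log c + z log A) versus a simple
# multi-habitat regression on the areas of separate land-use classes. Synthetic data.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
total_area = rng.uniform(10, 2500, size=200)
habitat = rng.dirichlet([2, 2, 1], size=200) * total_area[:, None]   # 3 land-use classes
richness = (0.9 * habitat[:, 0] + 0.5 * habitat[:, 1] + 0.1 * habitat[:, 2]) ** 0.25
log_s = np.log(richness)

classic = LinearRegression().fit(np.log(total_area)[:, None], log_s)
multi = LinearRegression().fit(np.log(habitat + 1.0), log_s)
print("classic power SAR  R^2:", classic.score(np.log(total_area)[:, None], log_s))
print("multi-habitat model R^2:", multi.score(np.log(habitat + 1.0), log_s))
```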
Multi-rate DPSK optical transceivers for free-space applications
NASA Astrophysics Data System (ADS)
Caplan, D. O.; Carney, J. J.; Fitzgerald, J. J.; Gaschits, I.; Kaminsky, R.; Lund, G.; Hamilton, S. A.; Magliocco, R. J.; Murphy, R. J.; Rao, H. G.; Spellmeyer, N. W.; Wang, J. P.
2014-03-01
We describe a flexible high-sensitivity laser communication transceiver design that can significantly benefit performance and cost of NASA's satellite-based Laser Communications Relay Demonstration. Optical communications using differential phase shift keying, widely deployed for use in long-haul fiber-optic networks, is well known for its superior sensitivity and link performance over on-off keying, while maintaining a relatively straightforward design. However, unlike fiber-optic links, free-space applications often require operation over a wide dynamic range of power due to variations in link distance and channel conditions, which can include rapid kHz-class fading when operating through the turbulent atmosphere. Here we discuss the implementation of a robust, near-quantum-limited multi-rate DPSK transceiver with co-located transmitter and receiver subsystems that can operate efficiently over the highly variable free-space channel. Key performance features are presented for the master oscillator power amplifier (MOPA) based TX, including a wavelength-stabilized master laser, high-extinction-ratio burst-mode modulator, and 0.5 W single-polarization power amplifier, as well as for the low-noise optically preamplified DPSK receiver and built-in test capabilities.
NASA Astrophysics Data System (ADS)
Guo, Danlu; Westra, Seth; Maier, Holger R.
2017-11-01
Scenario-neutral approaches are being used increasingly for assessing the potential impact of climate change on water resource systems, as these approaches allow the performance of these systems to be evaluated independently of climate change projections. However, practical implementations of these approaches are still scarce, with a key limitation being the difficulty of generating a range of plausible future time series of hydro-meteorological data. In this study we apply a recently developed inverse stochastic generation approach to support the scenario-neutral analysis, and thus identify the key hydro-meteorological variables to which the system is most sensitive. The stochastic generator simulates synthetic hydro-meteorological time series that represent plausible future changes in (1) the average, extremes and seasonal patterns of rainfall; and (2) the average values of temperature (Ta), relative humidity (RH) and wind speed (uz) as variables that drive PET. These hydro-meteorological time series are then fed through a conceptual rainfall-runoff model to simulate the potential changes in runoff as a function of changes in the hydro-meteorological variables, and runoff sensitivity is assessed with both correlation and Sobol' sensitivity analyses. The method was applied to a case study catchment in South Australia, and the results showed that the most important hydro-meteorological attributes for runoff were winter rainfall followed by the annual average rainfall, while the PET-related meteorological variables had comparatively little impact. The high importance of winter rainfall can be related to the winter-dominated nature of both the rainfall and runoff regimes in this catchment. The approach illustrated in this study can greatly enhance our understanding of the key hydro-meteorological attributes and processes that are likely to drive catchment runoff under a changing climate, thus enabling the design of tailored climate impact assessments to specific water resource systems.
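A rough sketch of the sensitivity step, assuming the SALib package: Saltelli sampling over perturbations of a few hydro-meteorological attributes, a placeholder surrogate in place of the conceptual rainfall-runoff model, and Sobol' first- and total-order indices.

```python
# Hedged sketch: Sobol' sensitivity of simulated runoff to hydro-meteorological
# perturbations. The "rainfall_runoff" surrogate is a placeholder, not the study's model.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 4,
    "names": ["annual_rain_change", "winter_rain_change", "temp_change", "rh_change"],
    "bounds": [[-0.2, 0.2], [-0.3, 0.3], [0.0, 3.0], [-0.1, 0.1]],
}

def rainfall_runoff(x):                        # toy surrogate for the conceptual model
    annual, winter, temp, rh = x
    return 100 * (1 + annual) * (1 + 1.5 * winter) - 2 * temp + 5 * rh

X = saltelli.sample(problem, 1024)
Y = np.apply_along_axis(rainfall_runoff, 1, X)
Si = sobol.analyze(problem, Y)
print(dict(zip(problem["names"], Si["S1"])))   # first-order indices
print(dict(zip(problem["names"], Si["ST"])))   # total-order indices
```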
NASA Technical Reports Server (NTRS)
Prasthofer, W. P.
1974-01-01
The key to optimization of design where there are a large number of variables, all of which may not be known precisely, lies in the mathematical tool of dynamic programming developed by Bellman. This methodology can lead to optimized solutions to the design of critical systems in a minimum amount of time, even when there are a great number of acceptable configurations to be considered. To demonstrate the usefulness of dynamic programming, an analytical method is developed for evaluating the relationship among existing numerous connector designs to find the optimum configuration. The data utilized in the study were generated from 900 flanges designed for six subsystems of the S-1B stage of the Saturn 1B space carrier vehicle.
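To make the multistage idea concrete, the sketch below solves a made-up Bellman-style recursion: choose one connector option per subsystem so as to minimize total cost within a shared weight budget. The candidate options and budget are illustrative, not the Saturn 1B flange data.

```python
# Hedged sketch of a Bellman-style dynamic program over subsystem stages.
from functools import lru_cache

# candidates[stage] = list of (weight, cost) connector options for that subsystem (made up)
candidates = [
    [(2, 5.0), (3, 4.0), (4, 3.5)],
    [(1, 6.0), (2, 4.5)],
    [(3, 7.0), (4, 5.0), (5, 4.0)],
]
WEIGHT_BUDGET = 9

@lru_cache(maxsize=None)
def best_cost(stage, budget):
    """Minimum achievable cost for subsystems stage..end given the remaining budget."""
    if stage == len(candidates):
        return 0.0
    feasible = [cost + best_cost(stage + 1, budget - w)
                for w, cost in candidates[stage] if w <= budget]
    return min(feasible) if feasible else float("inf")

print(best_cost(0, WEIGHT_BUDGET))
```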
NASA Astrophysics Data System (ADS)
Yang, Can; Ma, Cheng; Hu, Linxi; He, Guangqiang
2018-06-01
We present a hierarchical modulation coherent communication protocol, which simultaneously achieves classical optical communication and continuous-variable quantum key distribution. Our hierarchical modulation scheme consists of a quadrature phase-shifting keying modulation for classical communication and a four-state discrete modulation for continuous-variable quantum key distribution. The simulation results based on practical parameters show that it is feasible to transmit both quantum information and classical information on a single carrier. We obtained a secure key rate of 10^{-3} bits/pulse to 10^{-1} bits/pulse within 40 kilometers, and in the meantime the maximum bit error rate for classical information is about 10^{-7}. Because continuous-variable quantum key distribution protocol is compatible with standard telecommunication technology, we think our hierarchical modulation scheme can be used to upgrade the digital communication systems to extend system function in the future.
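A toy sketch of the hierarchical constellation idea only: a large-amplitude QPSK point for the classical bits with a small four-state displacement for the quantum layer. The amplitudes are arbitrary illustrative numbers, not the protocol's parameters.

```python
# Hedged toy: hierarchical constellation = coarse QPSK (classical) + fine 4-state offset.
import numpy as np

A_CLASSICAL = 10.0      # coarse QPSK amplitude (classical layer, illustrative)
A_QUANTUM = 0.1         # fine four-state displacement (quantum layer, illustrative)

qpsk_layer = A_CLASSICAL * np.exp(1j * (np.pi / 4 + np.pi / 2 * np.arange(4)))
quantum_layer = A_QUANTUM * np.exp(1j * (np.pi / 4 + np.pi / 2 * np.arange(4)))

# every transmitted amplitude is one coarse point plus one fine offset
constellation = (qpsk_layer[:, None] + quantum_layer[None, :]).ravel()
print(constellation.round(3))
```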
Continuous variable quantum key distribution with modulated entangled states.
Madsen, Lars S; Usenko, Vladyslav C; Lassen, Mikael; Filip, Radim; Andersen, Ulrik L
2012-01-01
Quantum key distribution enables two remote parties to grow a shared key, which they can use for unconditionally secure communication over a certain distance. The maximal distance depends on the loss and the excess noise of the connecting quantum channel. Several quantum key distribution schemes based on coherent states and continuous variable measurements are resilient to high loss in the channel, but are strongly affected by small amounts of channel excess noise. Here we propose and experimentally address a continuous variable quantum key distribution protocol that uses modulated fragile entangled states of light to greatly enhance the robustness to channel noise. We experimentally demonstrate that the resulting quantum key distribution protocol can tolerate more noise than the benchmark set by the ideal continuous variable coherent state protocol. Our scheme represents a very promising avenue for extending the distance for which secure communication is possible.
Andrea Watts; Brooke Penaluna; Jason Dunham
2016-01-01
Land use and climate change are two key factors with the potential to affect stream conditions and fish habitat. Since the 1950s, Washington and Oregon have required forest practices designed to mitigate the effects of timber harvest on streams and fish. Yet questions remain about the extent to which these practices are effective. Add in the effects of climate change…
Presenting the Straddle Lemma in an introductory Real Analysis course
NASA Astrophysics Data System (ADS)
Soares, A.; Santos, A. L. dos
2017-04-01
In this article, we revisit the concept of strong differentiability of real functions of one variable, underlying the concept of differentiability. Our discussion is guided by the Straddle Lemma, which plays a key role in this context. The proofs of the results presented are designed to meet a young audience in mathematics, typical of students in a first course of Real Analysis or an honors-level Calculus course.
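For reference, one common formulation of the Straddle Lemma is given below; the wording and notation may differ slightly from the article's.

```latex
% One common formulation of the Straddle Lemma (for reference; notation may differ
% from the article's). Assumes an amsthm-style lemma environment and amssymb.
\begin{lemma}[Straddle Lemma]
Let $f\colon [a,b]\to\mathbb{R}$ be differentiable at $x\in[a,b]$. Then for every
$\varepsilon>0$ there exists $\delta>0$ such that
\[
  \lvert f(v)-f(u)-f'(x)\,(v-u)\rvert \le \varepsilon\,(v-u)
\]
whenever $u,v\in[a,b]$, $u\le x\le v$, and $0<v-u<\delta$.
\end{lemma}
```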
Body shape analyses of large persons in South Korea.
Park, Woojin; Park, Sungjoon
2013-01-01
Despite the prevalence of obesity and overweight, anthropometric characteristics of large individuals have not been extensively studied. This study investigated body shapes of large persons (Broca index ≥ 20, BMI ≥ 25 or WHR>1.0) using stature-normalised body dimensions data from the latest South Korean anthropometric survey. For each sex, a factor analysis was performed on the anthropometric data set to identify the key factors that explain the shape variability; and then, a cluster analysis was conducted on the factor scores data to determine a set of representative body types. The body types were labelled in terms of their distinct shape characteristics and their relative frequencies were computed for each of the four age groups considered: the 10s, 20s-30s, 40s-50s and 60s. The study findings may facilitate creating artefacts that anthropometrically accommodate large individuals, developing digital human models of large persons and designing future ergonomics studies on largeness. This study investigated body shapes of large persons using anthropometric data from South Korea. For each sex, multivariate statistical analyses were conducted to identify the key factors of the body shape variability and determine the representative body types. The study findings may facilitate designing artefacts that anthropometrically accommodate large persons.
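The two-step analysis described, factor analysis on stature-normalised dimensions followed by clustering of the factor scores into body types, can be sketched as follows; the data, the number of factors, and the number of clusters are placeholders.

```python
# Hedged sketch: factor analysis then clustering on factor scores. Synthetic data.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
dims = rng.normal(size=(500, 30))           # stature-normalised body dimensions (placeholder)

fa = FactorAnalysis(n_components=4, random_state=0)
scores = fa.fit_transform(dims)             # key body-shape factors

km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(scores)
for k in range(5):
    print(f"body type {k}: n = {np.sum(km.labels_ == k)}")
```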
Satellite Telemetry and Long-Range Bat Movements
Smith, Craig S.; Epstein, Jonathan H.; Breed, Andrew C.; Plowright, Raina K.; Olival, Kevin J.; de Jong, Carol; Daszak, Peter; Field, Hume E.
2011-01-01
Background Understanding the long-distance movement of bats has direct relevance to studies of population dynamics, ecology, disease emergence, and conservation. Methodology/Principal Findings We developed and trialed several collar and platform terminal transmitter (PTT) combinations on both free-living and captive fruit bats (Family Pteropodidae: Genus Pteropus). We examined transmitter weight, size, profile and comfort as key determinants of maximized transmitter activity. We then tested the importance of bat-related variables (species size/weight, roosting habitat and behavior) and environmental variables (day-length, rainfall pattern) in determining optimal collar/PTT configuration. We compared battery- and solar-powered PTT performance in various field situations, and found the latter more successful in maintaining voltage on species that roosted higher in the tree canopy, and at lower density, than those that roost more densely and lower in trees. Finally, we trialed transmitter accuracy, and found that actual distance errors and Argos location class error estimates were in broad agreement. Conclusions/Significance We conclude that no single collar or transmitter design is optimal for all bat species, and that species size/weight, species ecology and study objectives are key design considerations. Our study provides a strategy for collar and platform choice that will be applicable to a larger number of bat species as transmitter size and weight continue to decrease in the future. PMID:21358823
NASA Astrophysics Data System (ADS)
Lin, Zhuosheng; Yu, Simin; Lü, Jinhu
2017-06-01
In this paper, a novel approach for constructing one-way hash function based on 8D hyperchaotic map is presented. First, two nominal matrices both with constant and variable parameters are adopted for designing 8D discrete-time hyperchaotic systems, respectively. Then each input plaintext message block is transformed into 8 × 8 matrix following the order of left to right and top to bottom, which is used as a control matrix for the switch of the nominal matrix elements both with the constant parameters and with the variable parameters. Through this switching control, a new nominal matrix mixed with the constant and variable parameters is obtained for the 8D hyperchaotic map. Finally, the hash function is constructed with the multiple low 8-bit hyperchaotic system iterative outputs after being rounded down, and its secure analysis results are also given, validating the feasibility and reliability of the proposed approach. Compared with the existing schemes, the main feature of the proposed method is that it has a large number of key parameters with avalanche effect, resulting in the difficulty for estimating or predicting key parameters via various attacks.
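The sketch below is a heavily simplified toy of the general construction only, using a one-dimensional logistic map in place of the paper's 8D hyperchaotic system: message bytes switch the map parameter, and rounded-down low bits of the iterates form the digest.

```python
# Toy, NOT the paper's scheme: message-dependent parameter switching of a logistic
# map, with low 8 bits of rounded-down iterates collected as the digest.
def chaotic_toy_hash(message: bytes, digest_len: int = 32) -> bytes:
    x, out = 0.5, bytearray()
    r_base = 3.99                              # keep the map in its chaotic regime
    data = message or b"\x00"
    i = 0
    while len(out) < digest_len:
        byte = data[i % len(data)]
        r = r_base + (byte / 255.0) * 0.009    # message-dependent parameter switch
        for _ in range(16):                    # iterate to mix
            x = r * x * (1.0 - x)
        out.append(int(x * 256) & 0xFF)        # low 8 bits of the rounded-down output
        i += 1
    return bytes(out)

print(chaotic_toy_hash(b"hello").hex())
```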
Application of neural networks and sensitivity analysis to improved prediction of trauma survival.
Hunter, A; Kennedy, L; Henry, J; Ferguson, I
2000-05-01
The performance of trauma departments is widely audited by applying predictive models that assess probability of survival, and examining the rate of unexpected survivals and deaths. Although the TRISS methodology, a logistic regression modelling technique, is still the de facto standard, it is known that neural network models perform better. A key issue when applying neural network models is the selection of input variables. This paper proposes a novel form of sensitivity analysis, which is simpler to apply than existing techniques, and can be used for both numeric and nominal input variables. The technique is applied to the audit survival problem, and used to analyse the TRISS variables. The conclusions discuss the implications for the design of further improved scoring schemes and predictive models.
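As a generic stand-in for the idea of ranking network inputs (not the paper's specific sensitivity technique), the sketch below trains a small network on synthetic TRISS-like variables and computes permutation importances.

```python
# Hedged sketch: ordinary permutation importance for a survival-probability network.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))                      # e.g. RTS, ISS, age (placeholders)
y = (X @ np.array([1.5, -2.0, -0.5]) + rng.normal(size=1000)) > 0

net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, y)
imp = permutation_importance(net, X, y, n_repeats=20, random_state=0)
for name, mean in zip(["RTS", "ISS", "age"], imp.importances_mean):
    print(f"{name}: importance = {mean:.3f}")
```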
Design sensitivity analysis of rotorcraft airframe structures for vibration reduction
NASA Technical Reports Server (NTRS)
Murthy, T. Sreekanta
1987-01-01
Optimization of rotorcraft structures for vibration reduction was studied. The objective of this study is to develop practical computational procedures for structural optimization of airframes subject to steady-state vibration response constraints. One of the key elements of any such computational procedure is design sensitivity analysis. A method for design sensitivity analysis of airframes under vibration response constraints is presented. The mathematical formulation of the method and its implementation as a new solution sequence in MSC/NASTRAN are described. The results of the application of the method to a simple finite element stick model of the AH-1G helicopter airframe are presented and discussed. Selection of design variables that are most likely to bring about changes in the response at specified locations in the airframe is based on consideration of forced response strain energy. Sensitivity coefficients are determined for the selected design variable set. Constraints on the natural frequencies are also included in addition to the constraints on the steady-state response. Sensitivity coefficients for these constraints are determined. Results of the analysis and insights gained in applying the method to the airframe model are discussed. The general nature of future work to be conducted is described.
Exposure Knowledge and Perception of Wireless Communication Technologies.
Freudenstein, Frederik; Correia, Luis M; Oliveira, Carla; Sebastião, Daniel; Wiedemann, Peter M
2015-11-06
The presented survey investigates risk and exposure perceptions of radio frequency electromagnetic fields (RF EMF) associated with base stations, mobile phones and other sources, the key issue being the interaction between both sets of perceptions. The study is based on a cross-sectional design, and conducted with an online sample of 838 citizens from Portugal. The results indicate that respondents' intuitive exposure perception differs from the actual exposure levels. Furthermore, exposure and risk perceptions are found to be highly correlated. Respondents' beliefs about exposure factors, which might influence possible health risks, is appropriate. A regression analysis between exposure characteristics, as predictor variables, and RF EMF risk perception, as the response variable, indicates that people seem to use simple heuristics to form their perceptions. What is bigger, more frequent and longer lasting is seen as riskier. Moreover, the quality of exposure knowledge is not an indicator for amplified EMF risk perception. These findings show that exposure perception is key to future risk communication.
Chen, Yi; Huang, Weina; Peng, Bei
2014-01-01
Because of the demands for sustainable and renewable energy, fuel cells have become increasingly popular, particularly the polymer electrolyte fuel cell (PEFC). Among the various components, the cathode plays a key role in the operation of a PEFC. In this study, a quantitative dual-layer cathode model was proposed for determining the optimal parameters that minimize the over-potential difference η and improve the efficiency using a newly developed bat swarm algorithm with a variable population embedded in the computational intelligence-aided design. The simulation results were in agreement with previously reported results, suggesting that the proposed technique has potential applications for automating and optimizing the design of PEFCs.
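A minimal sketch of a basic bat algorithm, with a fixed population rather than the paper's variable-population variant, minimizing a placeholder over-potential function of two cathode parameters; the objective and bounds are illustrative only.

```python
# Hedged sketch: basic bat algorithm minimizing a placeholder objective.
import numpy as np

def overpotential(x):                          # placeholder objective, not a PEFC model
    return (x[0] - 0.3) ** 2 + 2 * (x[1] - 0.7) ** 2

rng = np.random.default_rng(0)
n_bats, dim, iters = 20, 2, 200
lo, hi = np.zeros(dim), np.ones(dim)
pos = rng.uniform(lo, hi, size=(n_bats, dim))
vel = np.zeros((n_bats, dim))
fit = np.apply_along_axis(overpotential, 1, pos)
best = pos[fit.argmin()].copy()
loudness, pulse_rate = 0.9, 0.5

for _ in range(iters):
    freq = rng.uniform(0, 2, size=(n_bats, 1))           # echolocation frequency
    vel += (pos - best) * freq
    cand = np.clip(pos + vel, lo, hi)
    local = rng.random(n_bats) > pulse_rate               # local random walk around best
    cand[local] = np.clip(best + 0.01 * rng.normal(size=(local.sum(), dim)), lo, hi)
    cand_fit = np.apply_along_axis(overpotential, 1, cand)
    accept = (cand_fit < fit) & (rng.random(n_bats) < loudness)
    pos[accept], fit[accept] = cand[accept], cand_fit[accept]
    best = pos[fit.argmin()].copy()

print("best parameters:", best, "over-potential:", fit.min())
```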
NASA Astrophysics Data System (ADS)
Kiryutin, Alexey S.; Pravdivtsev, Andrey N.; Ivanov, Konstantin L.; Grishin, Yuri A.; Vieth, Hans-Martin; Yurkovskaya, Alexandra V.
2016-02-01
A device for performing fast magnetic field-cycling NMR experiments is described. A key feature of this setup is that it combines fast switching of the external magnetic field and high-resolution NMR detection. The field-cycling method is based on precise mechanical positioning of the NMR probe with the mounted sample in the inhomogeneous fringe field of the spectrometer magnet. The device enables field variation over several decades (from 100 μT up to 7 T) within less than 0.3 s; progress in NMR probe design provides NMR linewidths of about 10-3 ppm. The experimental method is very versatile and enables site-specific studies of spin relaxation (NMRD, LLSs) and spin hyperpolarization (DNP, CIDNP, and SABRE) at variable magnetic field and at variable temperature. Experimental examples of such studies are demonstrated; advantages of the experimental method are described and existing challenges in the field are outlined.
Numerical Simulation of the RTA Combustion Rig
NASA Technical Reports Server (NTRS)
Davoudzadeh, Farhad; Buehrle, Robert; Liu, Nan-Suey; Winslow, Ralph
2005-01-01
The Revolutionary Turbine Accelerator (RTA)/Turbine Based Combined Cycle (TBCC) project is investigating turbine-based propulsion systems for access to space. NASA Glenn Research Center and GE Aircraft Engines (GEAE) planned to develop a ground demonstrator engine for validation testing. The demonstrator (RTA-1) is a variable cycle, turbofan ramjet designed to transition from an augmented turbofan to a ramjet that produces the thrust required to accelerate the vehicle from Sea Level Static (SLS) to Mach 4. The RTA-1 is designed to accommodate a large variation in bypass ratios from sea level static to Mach 4 conditions. Key components of this engine are new, such as a nickel alloy fan, advanced trapped vortex combustor, a Variable Area Bypass Injector (VABI), radial flameholders, and multiple fueling zones. A means to mitigate risks to the RTA development program was the use of extensive component rig tests and computational fluid dynamics (CFD) analysis.
Zoning method for environmental engineering geological patterns in underground coal mining areas.
Liu, Shiliang; Li, Wenping; Wang, Qiqing
2018-09-01
Environmental engineering geological patterns (EEGPs) are used to express the trend and intensity of eco-geological environment caused by mining in underground coal mining areas, a complex process controlled by multiple factors. A new zoning method for EEGPs was developed based on the variable-weight theory (VWT), where the weights of factors vary with their value. The method was applied to the Yushenfu mining area, Shaanxi, China. First, the mechanism of the EEGPs caused by mining was elucidated, and four types of EEGPs were proposed. Subsequently, 13 key control factors were selected from mining conditions, lithosphere, hydrosphere, ecosphere, and climatic conditions; their thematic maps were constructed using ArcGIS software and remote-sensing technologies. Then, a stimulation-punishment variable-weight model derived from the partition of basic evaluation unit of study area, construction of partition state-variable-weight vector, and determination of variable-weight interval was built to calculate the variable weights of each factor. On this basis, a zoning mathematical model of EEGPs was established, and the zoning results were analyzed. For comparison, the traditional constant-weight theory (CWT) was also applied to divide the EEGPs. Finally, the zoning results obtained using VWT and CWT were compared. The verification of field investigation indicates that VWT is more accurate and reliable than CWT. The zoning results are consistent with the actual situations and the key of planning design for the rational development of coal resources and protection of eco-geological environment. Copyright © 2018 Elsevier B.V. All rights reserved.
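The contrast between constant and variable weighting can be sketched as follows; the idea shown, raising ("punishing") a factor's weight when its normalized value falls into an unfavourable interval and then renormalizing, is a loose illustration, not the paper's 13-factor stimulation-punishment model.

```python
# Hedged sketch: constant-weight versus state-variable-weight aggregation.
import numpy as np

base_w = np.array([0.4, 0.35, 0.25])          # constant weights for 3 factors (illustrative)
x = np.array([0.15, 0.80, 0.55])              # normalized factor values for one evaluation unit

def state_variable_weights(values, weights, punish_below=0.3, boost=2.0):
    # raise the weight of any factor in the punishment interval, then renormalize
    adj = np.where(values < punish_below, weights * boost, weights)
    return adj / adj.sum()

constant_score = float(np.dot(base_w, x))
vw = state_variable_weights(x, base_w)
variable_score = float(np.dot(vw, x))
print(constant_score, variable_score, vw)
```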
Where do the Field Plots Belong? A Multiple-Constraint Sampling Design for the BigFoot Project
NASA Astrophysics Data System (ADS)
Kennedy, R. E.; Cohen, W. B.; Kirschbaum, A. A.; Gower, S. T.
2002-12-01
A key component of a MODIS validation project is effective characterization of biophysical measures on the ground. Fine-grain ecological field measurements must be placed strategically to capture variability at the scale of the MODIS imagery. Here we describe the BigFoot project's revised sampling scheme, designed to simultaneously meet three important goals: capture landscape variability, avoid spatial autocorrelation between field plots, and minimize time and expense of field sampling. A stochastic process places plots in clumped constellations to reduce field sampling costs, while minimizing spatial autocorrelation. This stochastic process is repeated, creating several hundred realizations of plot constellations. Each constellation is scored and ranked according to its ability to match landscape variability in several Landsat-based spectral indices, and its ability to minimize field sampling costs. We show how this approach has recently been used to place sample plots at the BigFoot project's two newest study areas, one in a desert system and one in a tundra system. We also contrast this sampling approach to that already used at the four prior BigFoot project sites.
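A rough sketch of the constellation-generation-and-scoring idea, with a synthetic raster standing in for the Landsat-based indices: clumped candidate layouts are generated, scored on how well the sampled values match landscape variability plus a travel-cost proxy, and ranked.

```python
# Hedged sketch: stochastic clumped plot constellations scored on variability match
# and a travel-cost proxy. Raster and scoring weights are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
landscape = rng.gamma(2.0, 1.0, size=(100, 100))      # stand-in for a Landsat index

def make_constellation(n_clumps=5, plots_per_clump=5, spread=3.0):
    centers = rng.uniform(0, 100, size=(n_clumps, 2))
    plots = centers[:, None, :] + rng.normal(0, spread, size=(n_clumps, plots_per_clump, 2))
    return np.clip(plots.reshape(-1, 2), 0, 99)

def score(plots):
    vals = landscape[plots[:, 0].astype(int), plots[:, 1].astype(int)]
    variability_match = abs(vals.std() - landscape.std())     # smaller is better
    travel_cost = sum(np.ptp(plots[i:i + 5], axis=0).sum() for i in range(0, len(plots), 5))
    return variability_match + 0.01 * travel_cost

candidates = [make_constellation() for _ in range(300)]
best = min(candidates, key=score)
print("best constellation score:", score(best))
```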
Reliability Assessment Approach for Stirling Convertors and Generators
NASA Technical Reports Server (NTRS)
Shah, Ashwin R.; Schreiber, Jeffrey G.; Zampino, Edward; Best, Timothy
2004-01-01
Stirling power conversion is being considered for use in a Radioisotope Power System for deep-space science missions because it offers a multifold increase in the conversion efficiency of heat to electric power. Quantifying the reliability of a Radioisotope Power System that utilizes Stirling power conversion technology is important in developing and demonstrating the capability for long-term success. A description of the Stirling power convertor is provided, along with a discussion about some of the key components. Ongoing efforts to understand component life, design variables at the component and system levels, related sources, and the nature of uncertainties is discussed. The requirement for reliability also is discussed, and some of the critical areas of concern are identified. A section on the objectives of the performance model development and a computation of reliability is included to highlight the goals of this effort. Also, a viable physics-based reliability plan to model the design-level variable uncertainties at the component and system levels is outlined, and potential benefits are elucidated. The plan involves the interaction of different disciplines, maintaining the physical and probabilistic correlations at all the levels, and a verification process based on rational short-term tests. In addition, both top-down and bottom-up coherency were maintained to follow the physics-based design process and mission requirements. The outlined reliability assessment approach provides guidelines to improve the design and identifies governing variables to achieve high reliability in the Stirling Radioisotope Generator design.
Mitchell, D A; von Meien, O F
2000-04-20
Zymotis bioreactors for solid-state fermentation (SSF) are packed-bed bioreactors with internal cooling plates. This design has potential to overcome the problem of heat removal, which is one of the main challenges in SSF. In ordinary packed-bed bioreactors, which lack internal plates, large axial temperature gradients arise, leading to poor microbial growth in the end of the bed near the air outlet. The Zymotis design is suitable for SSF processes in which the substrate bed must be maintained static, but little is known about how to design and operate Zymotis bioreactors. We use a two-dimensional heat transfer model, describing the growth of Aspergillus niger on a starchy substrate, to provide guidelines for the optimum design and operation of Zymotis bioreactors. As for ordinary packed-beds, the superficial velocity of the process air is a key variable. However, the Zymotis design introduces other important variables, namely, the spacing between the internal cooling plates and the temperature of the cooling water. High productivities can be achieved at large scale, but only if small spacings between the cooling plates are used, and if the cooling water temperature is varied during the fermentation in response to bed temperatures. Copyright 2000 John Wiley & Sons, Inc.
State legislative staff influence in health policy making.
Weissert, C S; Weissert, W G
2000-12-01
State legislative staff may influence health policy by gathering intelligence, setting the agenda, and shaping the legislative proposals. But they may also be stymied in their roles by such institutional constraints as hiring practices and by turnover in committee leadership in the legislature. The intervening variable of trust between legislators and their support staff is also key to understanding influence and helps explain how staff-legislator relationships play an important role in designing state health policy. This study of legislative fiscal and health policy committee staff uses data from interviews with key actors in five states to model the factors important in explaining variation in the influence of committee staff on health policy.
Overview of Wholesale Electricity Markets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Milligan, Michael; Bloom, Aaron P; Cochran, Jaquelin M
This chapter provides a comprehensive review of four key electricity markets: energy markets (day-ahead and real-time markets); ancillary service markets; financial transmission rights markets; capacity markets. It also discusses how the outcomes of each of these markets may be impacted by the introduction of high penetrations of variable generation. Furthermore, the chapter examines considerations needed to ensure that wholesale market designs are inclusive of emerging technologies, such as demand response, distributed generation, and distributed storage.
ERIC Educational Resources Information Center
Al-Shahrani, Amer A. S.
Saudi Arabia has not ignored the fact that education is the key to a nation's progress. It considers education as a first priority and is trying to catch up with more advanced nations in this area. One way Saudi Arabia has planned to reduce the risk of complete reliance on the oil revenues is by improving education. This study was designed to…
Structural safety evaluation of Gerber Arch Dam
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barrie, R.E.
1995-12-31
Gerber Dam, a variable radius arch structure, has experienced seepage and extensive freeze-thaw damage since its construction. A construction key was found cracked at its crest. A finite element investigation was made to evaluate the safety of the arch structure. Design methods and assumptions are evaluated. Historical performance is used in the evaluation. Stress levels, patterns, and distributions were evaluated for loads the structure has experienced to determine behavior contributing to seepage and cracking.
Agarabi, Cyrus D; Schiel, John E; Lute, Scott C; Chavez, Brittany K; Boyne, Michael T; Brorson, Kurt A; Khan, Mansoora; Read, Erik K
2015-06-01
Consistent high-quality antibody yield is a key goal for cell culture bioprocessing. This endpoint is typically achieved in commercial settings through product and process engineering of bioreactor parameters during development. When the process is complex and not optimized, small changes in composition and control may yield a finished product of less desirable quality. Therefore, changes proposed to currently validated processes usually require justification and are reported to the US FDA for approval. Recently, design-of-experiments-based approaches have been explored to rapidly and efficiently achieve this goal of optimized yield with a better understanding of product and process variables that affect a product's critical quality attributes. Here, we present a laboratory-scale model culture where we apply a Plackett-Burman screening design to parallel cultures to study the main effects of 11 process variables. This exercise allowed us to determine the relative importance of these variables and identify the most important factors to be further optimized in order to control both desirable and undesirable glycan profiles. We found engineering changes relating to culture temperature and nonessential amino acid supplementation significantly impacted glycan profiles associated with fucosylation, β-galactosylation, and sialylation. All of these are important for monoclonal antibody product quality. © 2015 Wiley Periodicals, Inc. and the American Pharmacists Association.
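A sketch of how such a screen might be generated, assuming the pyDOE2 package; the eleven factor names and coded levels are placeholders, not the study's actual set points.

```python
# Hedged sketch: a 12-run Plackett-Burman screening design for 11 culture variables.
import pandas as pd
from pyDOE2 import pbdesign

factors = ["temperature", "pH", "DO", "agitation", "feed_rate", "glucose",
           "glutamine", "NEAA_supplement", "osmolality", "seed_density", "pCO2"]
design = pbdesign(len(factors))             # coded levels (-1 / +1)
runs = pd.DataFrame(design, columns=factors)
print(runs.shape)
print(runs.head())
```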
Aircraft Conceptual Design and Risk Analysis Using Physics-Based Noise Prediction
NASA Technical Reports Server (NTRS)
Olson, Erik D.; Mavris, Dimitri N.
2006-01-01
An approach was developed which allows for design studies of commercial aircraft using physics-based noise analysis methods while retaining the ability to perform the rapid trade-off and risk analysis studies needed at the conceptual design stage. A prototype integrated analysis process was created for computing the total aircraft EPNL at the Federal Aviation Regulations Part 36 certification measurement locations using physics-based methods for fan rotor-stator interaction tones and jet mixing noise. The methodology was then used in combination with design of experiments to create response surface equations (RSEs) for the engine and aircraft performance metrics, geometric constraints and take-off and landing noise levels. In addition, Monte Carlo analysis was used to assess the expected variability of the metrics under the influence of uncertainty, and to determine how the variability is affected by the choice of engine cycle. Finally, the RSEs were used to conduct a series of proof-of-concept conceptual-level design studies demonstrating the utility of the approach. The study found that a key advantage to using physics-based analysis during conceptual design lies in the ability to assess the benefits of new technologies as a function of the design to which they are applied. The greatest difficulty in implementing physics-based analysis proved to be the generation of design geometry at a sufficient level of detail for high-fidelity analysis.
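The response-surface-plus-Monte-Carlo step can be sketched as follows with synthetic data: fit a quadratic response surface equation to design-of-experiments results for a noise metric, then propagate input uncertainty through it.

```python
# Hedged sketch: quadratic response surface (RSE) plus Monte Carlo uncertainty propagation.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(60, 3))            # coded design variables (placeholders)
epnl = 90 + 3 * X[:, 0] - 2 * X[:, 1] + 1.5 * X[:, 0] * X[:, 2] + rng.normal(0, 0.2, 60)

rse = make_pipeline(PolynomialFeatures(degree=2), LinearRegression()).fit(X, epnl)

samples = rng.normal(0, 0.2, size=(10000, 3))   # uncertainty around the nominal design
mc = rse.predict(samples)
print(f"EPNL mean = {mc.mean():.2f}, 95th percentile = {np.percentile(mc, 95):.2f}")
```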
Gehring, Tobias; Händchen, Vitus; Duhme, Jörg; Furrer, Fabian; Franz, Torsten; Pacher, Christoph; Werner, Reinhard F; Schnabel, Roman
2015-10-30
Secret communication over public channels is one of the central pillars of a modern information society. Using quantum key distribution this is achieved without relying on the hardness of mathematical problems, which might be compromised by improved algorithms or by future quantum computers. State-of-the-art quantum key distribution requires composable security against coherent attacks for a finite number of distributed quantum states as well as robustness against implementation side channels. Here we present an implementation of continuous-variable quantum key distribution satisfying these requirements. Our implementation is based on the distribution of continuous-variable Einstein-Podolsky-Rosen entangled light. It is one-sided device independent, which means the security of the generated key is independent of any memoryfree attacks on the remote detector. Since continuous-variable encoding is compatible with conventional optical communication technology, our work is a step towards practical implementations of quantum key distribution with state-of-the-art security based solely on telecom components.
Murphy, Shannon M E; Hough, Douglas E; Sylvia, Martha L; Dunbar, Linda J; Frick, Kevin D
2018-02-08
To illustrate the impact of key quasi-experimental design elements on cost savings measurement for population health management (PHM) programs. Population health management program records and Medicaid claims and enrollment data from December 2011 through March 2016. The study uses a difference-in-difference design to compare changes in cost and utilization outcomes between program participants and propensity score-matched nonparticipants. Comparisons of measured savings are made based on (1) stable versus dynamic population enrollment and (2) all eligible versus enrolled-only participant definitions. Options for the operationalization of time are also discussed. Individual-level Medicaid administrative and claims data and PHM program records are used to match study groups on baseline risk factors and assess changes in costs and utilization. Savings estimates are statistically similar but smaller in magnitude when eliminating variability based on duration of population enrollment and when evaluating program impact on the entire target population. Measurement in calendar time, when possible, simplifies interpretability. Program evaluation design elements, including population stability and participant definitions, can influence the estimated magnitude of program savings for the payer and should be considered carefully. Time specifications can also affect interpretability and usefulness. © Health Research and Educational Trust.
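A minimal sketch of the difference-in-difference estimate on a matched panel: regress cost on participation, period, and their interaction, clustering errors by person; the columns and the built-in effect size are hypothetical.

```python
# Hedged sketch: difference-in-difference on a matched two-period panel. Synthetic data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
panel = pd.DataFrame({
    "person_id": np.repeat(np.arange(n // 2), 2),
    "participant": np.repeat(rng.integers(0, 2, n // 2), 2),
    "post": np.tile([0, 1], n // 2),
})
panel["cost"] = (500 - 60 * panel["participant"] * panel["post"]
                 + rng.normal(0, 50, n))             # built-in "savings" of about $60 PMPM

did = smf.ols("cost ~ participant * post", data=panel).fit(
    cov_type="cluster", cov_kwds={"groups": panel["person_id"]})
print(did.params["participant:post"])                # difference-in-difference estimate
```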
Defining process design space for monoclonal antibody cell culture.
Abu-Absi, Susan Fugett; Yang, LiYing; Thompson, Patrick; Jiang, Canping; Kandula, Sunitha; Schilling, Bernhard; Shukla, Abhinav A
2010-08-15
The concept of design space has been taking root as a foundation of in-process control strategies for biopharmaceutical manufacturing processes. During mapping of the process design space, the multidimensional combination of operational variables is studied to quantify the impact on process performance in terms of productivity and product quality. An efficient methodology to map the design space for a monoclonal antibody cell culture process is described. A failure modes and effects analysis (FMEA) was used as the basis for the process characterization exercise. This was followed by an integrated study of the inoculum stage of the process which includes progressive shake flask and seed bioreactor steps. The operating conditions for the seed bioreactor were studied in an integrated fashion with the production bioreactor using a two stage design of experiments (DOE) methodology to enable optimization of operating conditions. A two level Resolution IV design was followed by a central composite design (CCD). These experiments enabled identification of the edge of failure and classification of the operational parameters as non-key, key or critical. In addition, the models generated from the data provide further insight into balancing productivity of the cell culture process with product quality considerations. Finally, process and product-related impurity clearance was evaluated by studies linking the upstream process with downstream purification. Production bioreactor parameters that directly influence antibody charge variants and glycosylation in CHO systems were identified.
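The two-stage DOE could be set up along the following lines, assuming the pyDOE2 package (a choice of convenience, not the authors' software): a Resolution IV two-level screen followed by a face-centred central composite design in the surviving factors.

```python
# Hedged sketch: two-level screen followed by a central composite design.
from pyDOE2 import fracfact, ccdesign

screen = fracfact("a b c abc")                 # 2^(4-1) Resolution IV two-level screen
ccd = ccdesign(3, center=(4, 4), face="ccf")   # face-centred CCD in three retained factors
print(screen.shape, ccd.shape)
```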
Long-distance continuous-variable quantum key distribution by controlling excess noise
NASA Astrophysics Data System (ADS)
Huang, Duan; Huang, Peng; Lin, Dakai; Zeng, Guihua
2016-01-01
Quantum cryptography founded on the laws of physics could revolutionize the way in which communication information is protected. Significant progresses in long-distance quantum key distribution based on discrete variables have led to the secure quantum communication in real-world conditions being available. However, the alternative approach implemented with continuous variables has not yet reached the secure distance beyond 100 km. Here, we overcome the previous range limitation by controlling system excess noise and report such a long distance continuous-variable quantum key distribution experiment. Our result paves the road to the large-scale secure quantum communication with continuous variables and serves as a stepping stone in the quest for quantum network.
NASA Astrophysics Data System (ADS)
Srinivasan, Vasudevan
Air plasma spray is inherently complex due to the deviation from equilibrium conditions, three dimensional nature, multitude of interrelated (controllable) parameters and (uncontrollable) variables involved, and stochastic variability at different stages. The resultant coatings are complex due to the layered high defect density microstructure. Despite the widespread use and commercial success for decades in earthmoving, automotive, aerospace and power generation industries, plasma spray has not been completely understood and prime reliance for critical applications such as thermal barrier coatings on gas turbines are yet to be accomplished. This dissertation is aimed at understanding the in-flight particle state of the plasma spray process towards designing coatings and achieving coating reliability with the aid of noncontact in-flight particle and spray stream sensors. Key issues such as the phenomena of optimum particle injection and the definition of spray stream using particle state are investigated. Few strategies to modify the microstructure and properties of Yttria Stabilized Zirconia coatings are examined systematically using the framework of process maps. An approach to design process window based on design relevant coating properties is presented. Options to control the process for enhanced reproducibility and reliability are examined and the resultant variability is evaluated systematically at the different stages in the process. The 3D variability due to the difference in plasma characteristics has been critically examined by investigating splats collected from the entire spray footprint.
Obesity and the community food environment: a systematic review.
Holsten, Joanna E
2009-03-01
To examine the relationship between obesity and the community and/or consumer food environment. A comprehensive literature search of multiple databases was conducted and seven studies were identified for review. Studies were selected if they measured BMI and environmental variables related to food outlets. Environmental variables included the geographic arrangement of food stores or restaurants in communities and consumer conditions such as food price and availability within each outlet. The study designs, methods, limitations and results related to obesity and the food environment were reviewed, and implications for future research were synthesized. The reviewed studies used cross-sectional designs to examine the community food environment defined as the number per capita, proximity or density of food outlets. Most studies indirectly identified food outlets through large databases. The studies varied substantially in sample populations, outcome variables, units of measurement and data analysis. Two studies did not find any significant association between obesity rates and community food environment variables. Five studies found significant results. Many of the studies were subject to limitations that may have mitigated the validity of the results. Research examining obesity and the community or consumer food environment is at an early stage. The most pertinent gaps include primary data at the individual level, direct measures of the environment, studies examining the consumer environment and study designs involving a time sequence. Future research should directly measure multiple levels of the food environment and key confounders at the individual level.
EPA announced the availability of the final report, Uncertainty and Variability in Physiologically-Based Pharmacokinetic (PBPK) Models: Key Issues and Case Studies. This report summarizes some of the recent progress in characterizing uncertainty and variability in physi...
Bio-inspired online variable recruitment control of fluidic artificial muscles
NASA Astrophysics Data System (ADS)
Jenkins, Tyler E.; Chapman, Edward M.; Bryant, Matthew
2016-12-01
This paper details the creation of a hybrid variable recruitment control scheme for fluidic artificial muscle (FAM) actuators with an emphasis on maximizing system efficiency and switching control performance. Variable recruitment is the process of altering a system’s active number of actuators, allowing operation in distinct force regimes. Previously, FAM variable recruitment was only quantified with offline, manual valve switching; this study addresses the creation and characterization of novel, on-line FAM switching control algorithms. The bio-inspired algorithms are implemented in conjunction with a PID and model-based controller, and applied to a simulated plant model. Variable recruitment transition effects and chatter rejection are explored via a sensitivity analysis, allowing a system designer to weigh tradeoffs in actuator modeling, algorithm choice, and necessary hardware. Variable recruitment is further developed through simulation of a robotic arm tracking a variety of spline position inputs, requiring several levels of actuator recruitment. Switching controller performance is quantified and compared with baseline systems lacking variable recruitment. The work extends current variable recruitment knowledge by creating novel online variable recruitment control schemes, and exploring how online actuator recruitment affects system efficiency and control performance. Key topics associated with implementing a variable recruitment scheme, including the effects of modeling inaccuracies, hardware considerations, and switching transition concerns are also addressed.
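A minimal sketch of an online recruitment supervisor of the general kind discussed: a PID loop computes a force command while a hysteretic rule decides how many FAMs to recruit; the gains, thresholds, and absence of a plant model are illustrative choices, not the paper's controllers.

```python
# Hedged sketch: PID command plus hysteretic variable-recruitment switching rule.
class RecruitmentController:
    def __init__(self, kp=50.0, ki=5.0, kd=2.0, max_units=3,
                 force_per_unit=100.0, hysteresis=10.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.max_units, self.force_per_unit, self.hysteresis = max_units, force_per_unit, hysteresis
        self.units = 1
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measurement, dt):
        err = setpoint - measurement
        self.integral += err * dt
        deriv = (err - self.prev_err) / dt
        self.prev_err = err
        force_cmd = self.kp * err + self.ki * self.integral + self.kd * deriv
        # hysteretic recruitment: add a unit when demand exceeds current capacity,
        # drop one only when demand falls well below the next-lower capacity
        cap = self.units * self.force_per_unit
        if abs(force_cmd) > cap and self.units < self.max_units:
            self.units += 1
        elif abs(force_cmd) < cap - self.force_per_unit - self.hysteresis and self.units > 1:
            self.units -= 1
        return force_cmd, self.units

ctrl = RecruitmentController()
print(ctrl.update(setpoint=0.5, measurement=0.0, dt=0.01))
```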
Considering sex as a biological variable in preclinical research.
Miller, Leah R; Marks, Cheryl; Becker, Jill B; Hurn, Patricia D; Chen, Wei-Jung; Woodruff, Teresa; McCarthy, Margaret M; Sohrabji, Farida; Schiebinger, Londa; Wetherington, Cora Lee; Makris, Susan; Arnold, Arthur P; Einstein, Gillian; Miller, Virginia M; Sandberg, Kathryn; Maier, Susan; Cornelison, Terri L; Clayton, Janine A
2017-01-01
In June 2015, the National Institutes of Health (NIH) released a Guide notice (NOT-OD-15-102) that highlighted the expectation of the NIH that the possible role of sex as a biologic variable be factored into research design, analyses, and reporting of vertebrate animal and human studies. Anticipating these guidelines, the NIH Office of Research on Women's Health, in October 2014, convened key stakeholders to discuss methods and techniques for integrating sex as a biologic variable in preclinical research. The workshop focused on practical methods, experimental design, and approaches to statistical analyses in the use of both male and female animals, cells, and tissues in preclinical research. Workshop participants also considered gender as a modifier of biology. This article builds on the workshop and is meant as a guide to preclinical investigators as they consider methods and techniques for inclusion of both sexes in preclinical research and is not intended to prescribe exhaustive/specific approaches for compliance with the new NIH policy.-Miller, L. R., Marks, C., Becker, J. B., Hurn, P. D., Chen, W.-J., Woodruff, T., McCarthy, M. M., Sohrabji, F., Schiebinger, L., Wetherington, C. L., Makris, S., Arnold, A. P., Einstein, G., Miller, V. M., Sandberg, K., Maier, S., Cornelison, T. L., Clayton, J. A. Considering sex as a biological variable in preclinical research. © FASEB.
Tang, Weiming; Yang, Haitao; Mahapatra, Tanmay; Huan, Xiping; Yan, Hongjing; Li, Jianjun; Fu, Gengfeng; Zhao, Jinkou; Detels, Roger
2013-01-01
Background: Respondent-driven sampling (RDS) has been well recognized as a method for sampling most hard-to-reach populations, such as commercial sex workers, drug users and men who have sex with men. However, the feasibility of this sampling strategy in terms of recruiting a diverse spectrum of these hidden populations has not yet been well understood in developing countries. Methods: In a cross-sectional study in Nanjing city of Jiangsu province, China, 430 MSM, including 9 seeds, were recruited using RDS over a 14-week study period. Information regarding socio-demographic characteristics and sexual risk behavior was collected, and testing was done for HIV and syphilis. Duration, completion, participant characteristics and the equilibrium of key factors were used for assessing the feasibility of RDS. Homophily of key variables, socio-demographic distribution and social network size were used as indicators of diversity. Results: In the study sample, adjusted HIV and syphilis prevalence were 6.6% and 14.6%, respectively. The majority (96.3%) of the participants were recruited by members of their own social network. Although there was a tendency for recruitment within the same self-identified group (homosexuals recruited 60.0% homosexuals), considerable cross-group recruitment (bisexuals recruited 52.3% homosexuals) was also seen. Homophily of the self-identified sexual orientations was 0.111 for homosexuals. Upon completion of the recruitment process, participant characteristics and the equilibrium of key factors indicated that RDS was feasible for sampling MSM in Nanjing. Participants recruited by RDS were found to be diverse after assessing the homophily of key variables in successive waves of recruitment, the proportion of characteristics after reaching equilibrium, and the social network size. The observed design effects were nearly the same as or even better than the theoretical design effect of 2. Conclusion: RDS was found to be an efficient and feasible sampling method for recruiting a diverse sample of MSM in a reasonable time. PMID:24244280
Predesign study for a modern 4-bladed rotor for RSRA
NASA Technical Reports Server (NTRS)
Davis, S. J.
1981-01-01
The feasibility of providing a modern four-bladed rotor for flight research testing on the Rotor Systems Research Aircraft (RSRA) was evaluated. The capabilities of a state-of-the-art rotor system and the contributions of key design parameters to these capabilities were investigated. Three candidate rotors were examined: the UH-60A BLACK HAWK rotor with and without root extenders and the H-3 composite blade rotor. It was concluded that the technical/cost objectives could best be accomplished using the basic BLACK HAWK rotor (i.e., without root extenders). Further, the availability of three existing sets of blade tips of varying design, together with a demonstrated capability for altering airfoil geometry, should provide early research information on important design variables at reduced cost.
Usability testing and requirements derivation for EMU-compatible electrical connectors
NASA Technical Reports Server (NTRS)
Reaux, Ray A.; Griffin, Thomas J.; Lewis, Ruthan
1989-01-01
On-orbit servicing of payloads is simplified when a spacecraft has been designed for serviceability. A key design criterion for a serviceable spacecraft is standardization of electrical connectors. This paper investigates the effects of extravehicular mobility unit (EMU) glove size, connector size, and connector type on the usability of electrical connectors. An experiment was conducted exploring participants' ability to mate and demate connectors in an evacuated glovebox. Independent variables were two EMU glove sizes, five connector size groups, and seven connector types. Significant differences in performance times and heart rate changes during mate and demate operations were found. Subjective assessments of connectors were collected from participants with a usability questionnaire. The data were used to derive design recommendations for a NASA-recommended EMU-compatible electrical connector.
Chen, Yi; Huang, Weina; Peng, Bei
2014-01-01
Because of the demands for sustainable and renewable energy, fuel cells have become increasingly popular, particularly the polymer electrolyte fuel cell (PEFC). Among the various components, the cathode plays a key role in the operation of a PEFC. In this study, a quantitative dual-layer cathode model was proposed for determining the optimal parameters that minimize the over-potential difference and improve the efficiency using a newly developed bat swarm algorithm with a variable population embedded in the computational intelligence-aided design. The simulation results were in agreement with previously reported results, suggesting that the proposed technique has potential applications for automating and optimizing the design of PEFCs. PMID:25490761
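For readers unfamiliar with this class of metaheuristics, a minimal Python sketch of a standard bat algorithm (not the variable-population variant developed in the paper) is given below; the objective function, bounds and parameter values are illustrative placeholders, not the cathode model.

import numpy as np

def bat_algorithm(objective, bounds, n_bats=20, n_iter=200,
                  f_min=0.0, f_max=2.0, alpha=0.9, gamma=0.9, seed=0):
    # Minimal bat algorithm: each bat has a position, velocity, pulse
    # frequency, loudness A and pulse emission rate r (Yang-style update rules).
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = lo.size
    x = rng.uniform(lo, hi, size=(n_bats, dim))
    v = np.zeros((n_bats, dim))
    A = np.ones(n_bats)              # loudness, decreases on accepted moves
    r0 = rng.uniform(0, 1, n_bats)   # initial pulse rates
    r = np.zeros(n_bats)
    fit = np.array([objective(xi) for xi in x])
    best = x[fit.argmin()].copy()
    for t in range(1, n_iter + 1):
        freq = f_min + (f_max - f_min) * rng.uniform(0, 1, n_bats)
        v += (x - best) * freq[:, None]
        x_new = np.clip(x + v, lo, hi)
        # local random walk around the current best solution
        walk = rng.random(n_bats) > r
        x_new[walk] = np.clip(best + 0.01 * A.mean() *
                              rng.normal(size=(walk.sum(), dim)), lo, hi)
        fit_new = np.array([objective(xi) for xi in x_new])
        accept = (fit_new < fit) & (rng.random(n_bats) < A)
        x[accept], fit[accept] = x_new[accept], fit_new[accept]
        A[accept] *= alpha                                   # quieter
        r[accept] = r0[accept] * (1 - np.exp(-gamma * t))    # more frequent pulses
        best = x[fit.argmin()].copy()
    return best, fit.min()

# toy usage: minimise a quadratic stand-in for the over-potential objective
best, val = bat_algorithm(lambda p: np.sum((p - 0.3) ** 2), bounds=[(0, 1)] * 4)

The variable-population extension reported above would modify this skeleton by adding or removing bats between iterations rather than keeping the swarm size fixed.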
Two-key concurrent responding: response-reinforcement dependencies and blackouts
Herbert, Emily W.
1970-01-01
Two-key concurrent responding was maintained for three pigeons by a single variable-interval 1-minute schedule of reinforcement in conjunction with a random number generator that assigned feeder operations between keys with equal probability. The duration of blackouts was varied between keys when each response initiated a blackout, and grain arranged by the variable-interval schedule was automatically presented after a blackout (Exp. I). In Exp. II, every key peck, except for those that produced grain, initiated a blackout, and grain was dependent upon a response following a blackout. For each pigeon in Exp. I and for one pigeon in Exp. II, the relative frequency of responding on a key approximated, i.e., matched, the relative reciprocal of the duration of the blackout interval on that key. In a third experiment, blackouts scheduled on a variable-interval schedule were of equal duration on the two keys. For one key, grain automatically followed each blackout; for the other key, grain was dependent upon a response and never followed a blackout. The relative frequency of responding on the former key, i.e., the delay key, better approximated the negative exponential function obtained by Chung (1965) than the matching function predicted by Chung and Herrnstein (1967). PMID:16811458
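The matching result described for Exp. I can be stated compactly. With L and R as generic labels for the two keys, B_i the response frequency on key i and d_i the blackout duration scheduled on that key, the reported relation corresponds to

\frac{B_L}{B_L + B_R} \;\approx\; \frac{1/d_L}{1/d_L + 1/d_R}

This is simply a restatement of the abstract's wording, not the authors' fitted equation.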
Shuttle Ku-band communications/radar technical concepts
NASA Technical Reports Server (NTRS)
Griffin, J. W.; Kelley, J. S.; Steiner, A. W.; Vang, H. A.; Zrubek, W. E.; Huth, G. K.
1985-01-01
Technical data on the Shuttle Orbiter Ku-band communications/radar system are presented. The more challenging aspects of the system design and development are emphasized. The technical problems encountered and the advancements made in solving them are discussed. The radar functions are presented first. Requirements and design/implementation approaches are discussed. Advanced features are explained, including Doppler measurement, frequency diversity, multiple pulse repetition frequencies and pulse widths, and multiple modes. The communications functions that are presented include advances made because of the requirements for multiple communications modes. Spread spectrum, quadrature phase shift keying (QPSK), variable bit rates, and other advanced techniques are discussed. Performance results and conclusions reached are outlined.
NASA Technical Reports Server (NTRS)
Jefferies, K. S.; Tew, R. C.
1974-01-01
A digital computer study was made of reactor thermal transients during startup of the Brayton power conversion loop of a 60-kWe reactor Brayton power system. A startup procedure requiring the least Brayton system complication was tried first; this procedure caused violations of design limits on key reactor variables. Several modifications of this procedure were then found which caused no design limit violations. These modifications involved: (1) using a slower rate of increase in gas flow; (2) increasing the initial reactor power level to make the reactor respond faster; and (3) appropriate reactor control drum manipulation during the startup transient.
Shiyuan Zhong; Lejiang Yu; Julie A. Winkler; Ying Tang; Warren E. Heilman; Xiandi Bian
2017-01-01
Understanding the impacts of climate change on frost-free seasons is key to designing effective adaptation strategies for ecosystem management and agricultural production. This study examines the potential changes in the frost-free season length between historical (1971–2000) and future (2041–2070) periods over the contiguous USA with a focus on spatial variability and...
Vera-Villarroel, Pablo; Contreras, Daniela; Lillo, Sebastián; Segovia, Ariel; Rojo, Natalia; Moreno, Sandra; Oyarzo, Francisco
2016-01-01
The perception of colour and its subjective effects are key issues in designing safe and enjoyable bike lanes. This paper addresses the relationship between the colours of bike lane interventions (in particular, pavement painting and intersection design) and the subjective evaluation of liking, visual saliency, and perceived safety related to such an intervention. Utilising images of three real bike lane intersections modified by software to display each of five colours, this study recruited 538 participants to assess their perception of all fifteen colour-design combinations. A multivariate analysis of covariance (MANCOVA) with the Bonferroni post hoc test was performed to assess the effect of the main conditions (colour and design) on the dependent variables (liking towards the intervention, level of visual saliency of the intersection, and perceived safety of the bike lane). The results showed that the colour red was more positively associated with the outcome variables, followed by yellow and blue. Additionally, it was observed that the effect of colour widely outweighs the effect of design, suggesting that the right choice and use of colour would increase the effectiveness of bike-lane pavement interventions. Limitations and future directions are discussed. PMID:27548562
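As an illustration only, a MANCOVA of this kind can be run with statsmodels' multivariate linear model; the data file, column names and covariate below are hypothetical, not the study's dataset.

import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# hypothetical file: one row per participant/stimulus rating
df = pd.read_csv("bike_lane_ratings.csv")   # columns: liking, saliency, safety, colour, design, age

# multivariate test of colour and design effects with a covariate
m = MANOVA.from_formula(
    "liking + saliency + safety ~ C(colour) + C(design) + age", data=df)
print(m.mv_test())

Bonferroni-corrected post hoc comparisons would then be applied to the univariate follow-up tests for each dependent variable.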
Estimation of Quasi-Stiffness of the Human Knee in the Stance Phase of Walking
Shamaei, Kamran; Sawicki, Gregory S.; Dollar, Aaron M.
2013-01-01
Biomechanical data characterizing the quasi-stiffness of lower-limb joints during human locomotion are limited. Understanding joint stiffness is critical for evaluating gait function and designing devices such as prostheses and orthoses intended to emulate biological properties of human legs. The knee joint moment-angle relationship is approximately linear in the flexion and extension stages of stance, exhibiting nearly constant stiffnesses, known as the quasi-stiffnesses of each stage. Using a generalized inverse dynamics analysis approach, we identified the key independent variables needed to predict knee quasi-stiffness during walking, including gait speed, knee excursion, and subject height and weight. Then, based on the identified key variables, we used experimental walking data for 136 conditions (speeds of 0.75–2.63 m/s) across 14 subjects to obtain best-fit linear regressions for a set of general models, which were further simplified for the optimal gait speed. We found R2 > 86% for the most general models of knee quasi-stiffness for the flexion and extension stages of stance. With only subject height and weight, we could predict knee quasi-stiffness for the preferred walking speed with an average error of 9%, with only one outlier. These results provide a useful framework and foundation for selecting subject-specific stiffness for prosthetic and exoskeletal devices designed to emulate biological knee function during walking. PMID:23533662
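A hedged sketch of the reduced, height-and-weight-only regression described above follows; the file, column names and predicted values are hypothetical, and the coefficients come from the user's own data rather than from the paper.

import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

# hypothetical dataset: one row per subject/trial with measured quasi-stiffness
df = pd.read_csv("knee_stance_data.csv")   # columns: height_m, weight_kg, k_flexion_Nm_per_rad
X = df[["height_m", "weight_kg"]].to_numpy()
y = df["k_flexion_Nm_per_rad"].to_numpy()

model = LinearRegression().fit(X, y)
print("coefficients:", model.coef_, "intercept:", model.intercept_, "R^2:", model.score(X, y))

# predict the flexion-stage quasi-stiffness for a new subject (values illustrative)
print(model.predict(np.array([[1.75, 70.0]])))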
Statistics 101 for Radiologists.
Anvari, Arash; Halpern, Elkan F; Samir, Anthony E
2015-10-01
Diagnostic tests have wide clinical applications, including screening, diagnosis, measuring treatment effect, and determining prognosis. Interpreting diagnostic test results requires an understanding of key statistical concepts used to evaluate test efficacy. This review explains descriptive statistics and discusses probability, including mutually exclusive and independent events and conditional probability. In the inferential statistics section, a statistical perspective on study design is provided, together with an explanation of how to select appropriate statistical tests. Key concepts in recruiting study samples are discussed, including representativeness and random sampling. Variable types are defined, including predictor, outcome, and covariate variables, and the relationship of these variables to one another. In the hypothesis testing section, we explain how to determine if observed differences between groups are likely to be due to chance. We explain type I and II errors, statistical significance, and study power, followed by an explanation of effect sizes and how confidence intervals can be used to generalize observed effect sizes to the larger population. Statistical tests are explained in four categories: t tests and analysis of variance, proportion analysis tests, nonparametric tests, and regression techniques. We discuss sensitivity, specificity, accuracy, receiver operating characteristic analysis, and likelihood ratios. Measures of reliability and agreement, including κ statistics, intraclass correlation coefficients, and Bland-Altman graphs and analysis, are introduced. © RSNA, 2015.
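To make the diagnostic-test metrics concrete, the following Python sketch computes sensitivity, specificity, accuracy, a positive likelihood ratio and the area under the ROC curve for a small, made-up set of test scores (values are illustrative only).

import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

# hypothetical data: 1 = disease present, scores from an imaging test
y_true  = np.array([1, 1, 1, 0, 0, 0, 1, 0, 0, 1])
y_score = np.array([0.9, 0.8, 0.4, 0.3, 0.2, 0.6, 0.7, 0.1, 0.5, 0.85])
y_pred  = (y_score >= 0.5).astype(int)        # dichotomise at an assumed cut-off

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
accuracy    = (tp + tn) / (tp + tn + fp + fn)
lr_positive = sensitivity / (1 - specificity)  # positive likelihood ratio
auc = roc_auc_score(y_true, y_score)
print(sensitivity, specificity, accuracy, lr_positive, auc)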
Adult Lifespan Cognitive Variability in the Cross-Sectional Cam-CAN Cohort
Green, Emma; Shafto, Meredith A.; Matthews, Fiona E.; White, Simon R.
2015-01-01
This study examines variability across the age span in cognitive performance in a cross-sectional, population-based, adult lifespan cohort from the Cambridge Centre for Ageing and Neuroscience (Cam-CAN) study (n = 2680). A key question we highlight is whether measures that are designed to detect age-related cognitive pathology may not be sensitive to, or reflective of, individual variability among younger adults. We present three issues that contribute to the debate for and against age-related increases in variability. Firstly, measures of central tendency and measures of variability need to be formally defined. Secondly, in addition to the commonly addressed location-confounding (adjusting for covariates), there may exist changes in measures of variability due to confounder sub-groups. Finally, increases in spread may be a result of floor or ceiling effects, where the measure is not sensitive enough at all ages. From the Cam-CAN study, a large population-based dataset, we demonstrate the existence of variability-confounding for the immediate episodic memory task and show that increasing variance with age in our general cognitive measures is driven by a ceiling effect in younger age groups. PMID:26690191
Using a Functional Simulation of Crisis Management to Test the C2 Agility Model Parameters on Key Performance Variables
2013-06-01
Agility can be conceptualized at a number of different levels; for instance at the team...
Leverrier, Anthony; Grangier, Philippe
2009-05-08
We present a continuous-variable quantum key distribution protocol combining a discrete modulation and reverse reconciliation. This protocol is proven unconditionally secure and allows the distribution of secret keys over long distances, thanks to a reverse reconciliation scheme efficient at very low signal-to-noise ratio.
Design tradeoffs in long-term research for stream salamanders
Brand, Adrianne B.; Grant, Evan H. Campbell
2017-01-01
Long-term research programs can benefit from early and periodic evaluation of their ability to meet stated objectives. In particular, consideration of the spatial allocation of effort is key. We sampled 4 species of stream salamanders intensively for 2 years (2010–2011) in the Chesapeake and Ohio Canal National Historical Park, Maryland, USA to evaluate alternative distributions of sampling locations within stream networks, and then evaluated via simulation the ability of multiple survey designs to detect declines in occupancy and to estimate dynamic parameters (colonization, extinction) over 5 years for 2 species. We expected that fine-scale microhabitat variables (e.g., cobble, detritus) would be the strongest determinants of occupancy for each of the 4 species; however, we found greater support for all species for models including variables describing position within the stream network, stream size, or stream microhabitat. A monitoring design focused on headwater sections had greater power to detect changes in occupancy and the dynamic parameters in each of 3 scenarios for the dusky salamander (Desmognathus fuscus) and red salamander (Pseudotriton ruber). Results for transect length were more variable, but across all species and scenarios, 25-m transects are most suitable as a balance between maximizing detection probability and describing colonization and extinction. These results inform sampling design and provide a general framework for setting appropriate goals, effort, and duration in the initial planning stages of research programs on stream salamanders in the eastern United States.
Optimality of Gaussian attacks in continuous-variable quantum cryptography.
Navascués, Miguel; Grosshans, Frédéric; Acín, Antonio
2006-11-10
We analyze the asymptotic security of the family of Gaussian modulated quantum key distribution protocols for continuous-variable systems. We prove that the Gaussian unitary attack is optimal for all the considered bounds on the key rate when the first and second moments of the canonical variables involved are known by the honest parties.
NASA Technical Reports Server (NTRS)
Mavris, Dimitri; Osburg, Jan
2005-01-01
An important enabler of the new national Vision for Space Exploration is the ability to rapidly and efficiently develop optimized concepts for the manifold future space missions that this effort calls for. The design of such complex systems requires a tight integration of all the engineering disciplines involved, in an environment that fosters interaction and collaboration. The research performed under this grant explored areas where the space systems design process can be enhanced: by integrating risk models into the early stages of the design process, and by including rapid-turnaround variable-fidelity tools for key disciplines. Enabling early assessment of mission risk will allow designers to perform trades between risk and design performance during the initial design space exploration. Entry into planetary atmospheres will require an increased emphasis on the critical disciplines of aero- and thermodynamics. This necessitates pulling forward entry, descent and landing (EDL) disciplinary expertise into the early stage of the design process. Radiation can have a large potential impact on overall mission designs, in particular for the planned nuclear-powered robotic missions under Project Prometheus and for long-duration manned missions to the Moon, Mars and beyond under Project Constellation. This requires that radiation and associated risks and hazards be assessed and mitigated at the earliest stages of the design process. Hence, RPS is another discipline needed to enhance the engineering competencies of conceptual design teams. Researchers collaborated closely with NASA experts in those disciplines, and in overall space systems design, at Langley Research Center and at the Jet Propulsion Laboratory. This report documents the results of this initial effort.
On the Biomimetic Design of Agile-Robot Legs
Garcia, Elena; Arevalo, Juan Carlos; Muñoz, Gustavo; Gonzalez-de-Santos, Pablo
2011-01-01
The development of functional legged robots has encountered its limits in human-made actuation technology. This paper describes research on the biomimetic design of legs for agile quadrupeds. A biomimetic leg concept that extracts key principles from horse legs which are responsible for the agile and powerful locomotion of these animals is presented. The proposed biomimetic leg model defines the effective leg length, leg kinematics, limb mass distribution, actuator power, and elastic energy recovery as determinants of agile locomotion, and values for these five key elements are given. The transfer of the extracted principles to technological instantiations is analyzed in detail, considering the availability of current materials, structures and actuators. A real leg prototype has been developed following the biomimetic leg concept proposed. The actuation system is based on the hybrid use of series elasticity and magneto-rheological dampers which provides variable compliance for natural motion. From the experimental evaluation of this prototype, conclusions on the current technological barriers to achieve real functional legged robots to walk dynamically in agile locomotion are presented. PMID:22247667
Feng, Shi-Jin; Cao, Ben-Yi; Xie, Hai-Jian
2017-10-01
Leachate recirculation in municipal solid waste (MSW) landfills operated as bioreactors offers significant economic and environmental benefits. Combined drainage blanket (DB)-horizontal trench (HT) systems can be an alternative to single conventional recirculation approaches and can have competitive advantages. The key objectives of this study are to investigate combined drainage blanket-horizontal trench systems, to analyze the effects of applying the two recirculation systems on leachate migration in landfills, and to estimate some key design parameters (e.g., the steady-state flow rate, the influence width, and the cumulative leachate volume). It was determined that an effective recirculation model should consist of a moderate horizontal trench injection pressure head and supplementary leachate recirculated through the drainage blanket, with the objective of increasing the horizontal unsaturated hydraulic conductivity and thereby allowing more leachate to flow from the horizontal trench system in a horizontal direction. In addition, design charts for engineering application were established using a dimensionless variable formulation.
Cheng, Xianfu; Lin, Yuqun
2014-01-01
The performance of the suspension system is one of the most important factors in vehicle design. For the double wishbone suspension system, conventional deterministic optimization does not consider any deviations of design parameters, so design sensitivity analysis and robust optimization design are proposed. In this study, the design parameters of the robust optimization are the positions of the key points, and the random factors are the uncertainties in manufacturing. A simplified model of the double wishbone suspension is established in the software ADAMS. Sensitivity analysis is utilized to determine the main design variables. Then, the simulation experiment is arranged and a Latin hypercube design is adopted to find the initial points. The Kriging model is employed for fitting the mean and variance of the quality characteristics according to the simulation results. Further, a particle swarm optimization (PSO) method is applied, and a tradeoff between the mean and deviation of performance is made to solve the robust optimization problem of the double wishbone suspension system.
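A minimal sketch of the sampling-plus-surrogate step (Latin hypercube sampling followed by a Kriging fit) is given below; the multibody simulation is replaced by a placeholder function, and the dimensions, bounds and sample size are assumptions rather than values from the study.

import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def run_suspension_sim(x):
    # placeholder for the multibody (e.g., ADAMS) run that returns a quality
    # characteristic for one set of coded key-point coordinates
    return float(np.sum(np.sin(3 * x)) + 0.05 * np.random.randn())

dim, n_samples = 4, 40                      # e.g., 4 key-point coordinates
sampler = qmc.LatinHypercube(d=dim, seed=1)
X = qmc.scale(sampler.random(n_samples), l_bounds=[-1] * dim, u_bounds=[1] * dim)
y = np.array([run_suspension_sim(x) for x in X])

# Kriging surrogate (Gaussian process with an RBF kernel)
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
gp.fit(X, y)
mean, std = gp.predict(np.zeros((1, dim)), return_std=True)  # prediction and uncertainty at a trial design

A PSO loop would then search the surrogate (rather than the expensive simulation) for a design that balances the predicted mean against the predicted variance.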
Optimisation study of a vehicle bumper subsystem with fuzzy parameters
NASA Astrophysics Data System (ADS)
Farkas, L.; Moens, D.; Donders, S.; Vandepitte, D.
2012-10-01
This paper deals with the design and optimisation for crashworthiness of a vehicle bumper subsystem, which is a key scenario for vehicle component design. Automotive manufacturers and suppliers have to find optimal design solutions for such subsystems that comply with the conflicting requirements of the regulatory bodies regarding functional performance (safety and repairability) and regarding environmental impact (mass). For the bumper design challenge, an integrated methodology for multi-attribute design engineering of mechanical structures is set up. The integrated process captures the various tasks that are usually performed manually, thereby facilitating automated design iterations for optimisation. Subsequently, an optimisation process is applied that takes the effect of parametric uncertainties into account, such that the system-level possibility of failure is acceptable. This optimisation process is referred to as possibility-based design optimisation and integrates the fuzzy FE analysis applied for the uncertainty treatment in crash simulations. This process is the counterpart of the reliability-based design optimisation used in a probabilistic context with statistically defined parameters (variabilities).
Monitoring household waste recycling centres performance using mean bin weight analyses.
Maynard, Sarah; Cherrett, Tom; Waterson, Ben
2009-02-01
This paper describes a modelling approach used to investigate the significance of key factors (vehicle type, compaction type, site design, temporal effects) in influencing the variability in observed nett amenity bin weights produced by household waste recycling centres (HWRCs). This new method can help to quickly identify sites that are producing significantly lighter bins, enabling detailed back-end analyses to be efficiently targeted and best practice in HWRC operation identified. Tested on weigh ticket data from nine HWRCs across West Sussex, UK, the model suggests that compaction technique, vehicle type, month and site design explained 76% of the variability in the observed nett amenity weights. For each factor, a weighting coefficient was calculated to generate a predicted nett weight for each bin transaction, and three sites were subsequently identified as having similar characteristics but significantly different mean nett bin weights. Waste and site audits were then conducted at the three sites to try to determine the possible sources of the remaining variability. Significant differences were identified in the proportions of contained waste (bagged), wood, and dry recyclables entering the amenity waste stream, particularly at one site where significantly less contaminated waste and dry recyclables were observed.
Process Variables and Design of Experiments in Liposome and Nanoliposome Research.
Zoghi, Alaleh; Khosravi-Darani, Kianoush; Omri, Abdelwahab
2018-01-01
Liposomes, vesicles consisting of one or more phospholipid bilayers, are microcarriers used in numerous scientific disciplines. During the last decade, nanostructured liposomes, or nanoliposomes, have been utilized in biomedical investigations due to their unique characteristics, including nanoscale size, sustained release, biocompatibility, and biodegradability. The extensive literature covering the field of liposomology is an indication of increasing interest and applications in many areas, especially as carriers of active substances in nanomedicine, agriculture, food technology, and cosmetics. Nanoliposome application as drug carriers has resulted in more effective treatment of diseases such as cancers, atherosclerosis, infectious diseases and ocular disorders. In this communication, we introduce commonly used methods for the preparation of liposomes, point out therapeutic reports on liposomes, and explain the common process variables in liposome encapsulation. We also review different screening methods and full and fractional factorial designs that assess the impact of independent variables on the end-user response in certain applications. We review such key factors as encapsulation efficiency, loading capacity, particles' biological, structural and physicochemical properties, and lipid composition in an effort to provide a comprehensive guide for liposomologists in different fields of interest. Copyright © Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
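As a simple illustration of the factorial designs mentioned above, the following sketch enumerates a full 2^3 design and its half-fraction for three hypothetical process variables (the factor names and the defining relation C = AB are assumptions made for illustration).

from itertools import product

factors = ["lipid_conc", "sonication_time", "hydration_temp"]   # hypothetical variables
levels = [-1, +1]                                                # coded low/high

full = list(product(levels, repeat=len(factors)))                # full 2^3 design: 8 runs
# half-fraction 2^(3-1): defining relation I = ABC, so keep runs where a*b*c == +1
half = [run for run in full if run[0] * run[1] * run[2] == 1]    # 4 runs

for run in half:
    print(dict(zip(factors, run)))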
Loescher, Henry; Ayres, Edward; Duffy, Paul; Luo, Hongyan; Brunke, Max
2014-01-01
Soils are highly variable at many spatial scales, which makes designing studies to accurately estimate the mean value of soil properties across space challenging. The spatial correlation structure is critical to develop robust sampling strategies (e.g., sample size and sample spacing). Current guidelines for designing studies recommend conducting preliminary investigation(s) to characterize this structure, but are rarely followed and sampling designs are often defined by logistics rather than quantitative considerations. The spatial variability of soils was assessed across ∼1 ha at 60 sites. Sites were chosen to represent key US ecosystems as part of a scaling strategy deployed by the National Ecological Observatory Network. We measured soil temperature (Ts) and water content (SWC) because these properties mediate biological/biogeochemical processes below- and above-ground, and quantified spatial variability using semivariograms to estimate spatial correlation. We developed quantitative guidelines to inform sample size and sample spacing for future soil studies, e.g., 20 samples were sufficient to measure Ts to within 10% of the mean with 90% confidence at every temperate and sub-tropical site during the growing season, whereas an order of magnitude more samples were needed to meet this accuracy at some high-latitude sites. SWC was significantly more variable than Ts at most sites, resulting in at least 10× more SWC samples needed to meet the same accuracy requirement. Previous studies investigated the relationship between the mean and variability (i.e., sill) of SWC across space at individual sites across time and have often (but not always) observed the variance or standard deviation peaking at intermediate values of SWC and decreasing at low and high SWC. Finally, we quantified how far apart samples must be spaced to be statistically independent. Semivariance structures from 10 of the 12-dominant soil orders across the US were estimated, advancing our continental-scale understanding of soil behavior. PMID:24465377
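A minimal sketch of the empirical (Matheron) semivariogram estimation underlying such an analysis is given below; the coordinates and soil-water-content values are synthetic stand-ins, not NEON data.

import numpy as np

def empirical_semivariogram(coords, values, bin_edges):
    # classical estimator: gamma(h) = 0.5 * mean squared difference over all
    # point pairs whose separation distance falls in each lag bin
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sq = (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)           # count each pair once
    dists, sqdiff = d[iu], sq[iu]
    centers, gamma = [], []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        mask = (dists >= lo) & (dists < hi)
        if mask.any():
            centers.append(0.5 * (lo + hi))
            gamma.append(0.5 * sqdiff[mask].mean())
    return np.array(centers), np.array(gamma)

# hypothetical soil-water-content survey over ~1 ha
rng = np.random.default_rng(0)
coords = rng.uniform(0, 100, size=(60, 2))            # x, y in metres
swc = 0.25 + 0.02 * rng.standard_normal(60)
lags, gamma = empirical_semivariogram(coords, swc, np.arange(0, 110, 10))

Fitting a variogram model to (lags, gamma) yields the sill and range used to choose sample size and spacing.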
RTJ-303: Variable geometry, oblique wing supersonic aircraft
NASA Technical Reports Server (NTRS)
Antaran, Albert; Belete, Hailu; Dryzmkowski, Mark; Higgins, James; Klenk, Alan; Rienecker, Lisa
1992-01-01
This document is a preliminary design of a High Speed Civil Transport (HSCT) named the RTJ-303. It is a 300-passenger, Mach 1.6 transport with a range of 5000 nautical miles. It features four mixed-flow turbofan engines and a variable geometry oblique wing, with conventional tail-aft control surfaces. The preliminary cost analysis for a production of 300 aircraft shows that flyaway cost would be 183 million dollars (1992) per aircraft. The aircraft uses standard jet fuel and requires no special materials to handle aerodynamic heating in flight because the stagnation temperatures are approximately 130 degrees Fahrenheit in the supersonic cruise condition. It should be stressed that this aircraft could be built with today's technology and does not rely on vague and uncertain assumptions of technology advances. Included in this report are sections discussing the details of the preliminary design sequence, including: the mission to be performed; operational and performance constraints; the aircraft configuration and the tradeoffs of the final choice; wing design; a detailed fuselage design; empennage design, sizing of tail geometry, and selection of control surfaces; propulsion system/inlet choice and their position on the aircraft; landing gear design, including tire selection, tip-over criterion, pavement loading, and retraction kinematics; structures design, including load determination and materials selection; aircraft performance; stability and handling qualities; systems layout, including location of key components; operations requirements and maintenance characteristics; a preliminary cost analysis; and conclusions regarding the design and recommendations for further study.
A fibre-coupled UHV-compatible variable angle reflection-absorption UV/visible spectrometer
NASA Astrophysics Data System (ADS)
Stubbing, J. W.; Salter, T. L.; Brown, W. A.; Taj, S.; McCoustra, M. R. S.
2018-05-01
We present a novel UV/visible reflection-absorption spectrometer for determining the refractive index, n, and thickness, d, of ice films. Knowledge of the refractive index of these films is of particular relevance to the astrochemical community, where it can be used to model radiative transfer and spectra of various regions of space. In order to make these models more accurate, values of n need to be recorded under astronomically relevant conditions, that is, under ultra-high vacuum (UHV) and cryogenic cooling. Several design considerations were taken into account to allow UHV compatibility combined with ease of use. The key design feature is a stainless steel rhombus coupled to an external linear drive (z-shift), allowing a variable reflection geometry to be achieved, which is necessary for our analysis. Test data for amorphous benzene ice are presented as a proof of concept; the film thickness, d, was found to vary linearly with surface exposure, and a value for n of 1.43 ± 0.07 was determined.
Souza, Michele Caroline de; Chaves, Raquel Nichele de; Dos Santos, Fernanda Karina; Gomes, Thayse Natacha Queiroz Ferreira; Santos, Daniel Vilhena E; Borges, Alessandra Silva; Pereira, Sara Isabel Sampaio; Forjaz, Cláudia Lúcia de Moraes; Eisenmann, Joey; Maia, José António Ribeiro
2017-02-01
Studies of child and adolescent growth, development, performance and health that address the multiple interactions among this complex set of variables are not common in Portuguese-speaking countries. The aim of this paper is to address the key ideas, methodology and design of the Oporto Growth, Health and Performance Study (OGHPS). The OGHPS is a multidisciplinary mixed-longitudinal study whose main purpose is to examine the multiple interactions among biological, environmental and lifestyle indicators that affect growth, development, health and performance of Portuguese adolescents aged 10-18 years. This study briefly presents baseline results for growth, physical fitness and lifestyle behaviours for those participating in the cross-sectional sample (n ≈ 8000). Approximately 30% were over-fat or obese. On average, boys were more physically fit and active than girls. Few adolescents meet the guidelines for sleep duration (≈10%) and eating habits (16.2-24.8%), while 76-85% meet the recommended levels of moderate-to-vigorous physical activity. The OGHPS has an innovative approach due to its mixed-longitudinal design and the broad array of variables. Furthermore, subsequent analyses of the longitudinal data will enable a detailed exploration of important factors affecting the growth trajectories of health and performance variables and will also help to identify some of the most opportune times for interventions in terms of health behaviours.
Norms and nurse management of conflicts: keys to understanding nurse-physician collaboration.
Keenan, G M; Cooke, R; Hillis, S L
1998-02-01
In this cross-sectional study, registered nurses from 36 emergency rooms completed an abridged version of the Organizational Culture Inventory (Cooke & Lafferty, 1989) and responded to nine hypothetical conflict vignettes. Stepwise regressions were performed with nurse conflict style intentions as dependent variables and 10 independent variables (three sets of norms, five measures of conflict styles expected to be used by the physician, gender, and education). Nurses' expectations for physicians to collaborate and strong constructive and aggressive norms were found to explain a moderate amount of variance (32%) in nurses' intentions to collaborate in conflicts conducive to nurse-physician collaboration. The findings of this study provide support for the proposed theoretical framework and can be used to design interventions that promote nurse-physician collaboration.
NASA Astrophysics Data System (ADS)
Zhou, Jian; Guo, Ying
2017-02-01
A continuous-variable measurement-device-independent (CV-MDI) multipartite quantum communication protocol is designed to realize multipartite communication based on GHZ state analysis using Gaussian coherent states. It can remove detector side attacks, as the multi-mode measurement is blindly performed in a suitable black box. The entanglement-based CV-MDI multipartite communication scheme and the equivalent prepare-and-measure scheme are proposed to analyze the security and guide experiment, respectively. General eavesdropping and coherent attacks are considered for the security analysis. Subsequently, all the attacks are ascribed to a coherent attack against imperfect links. The asymptotic key rate of the asymmetric configuration is also derived, with numeric simulations illustrating the performance of the proposed protocol.
Accounting for substitution and spatial heterogeneity in a labelled choice experiment.
Lizin, S; Brouwer, R; Liekens, I; Broeckx, S
2016-10-01
Many environmental valuation studies using stated preference techniques are single-site studies that ignore essential spatial aspects, including possible substitution effects. In this paper, substitution effects are captured explicitly in the design of a labelled choice experiment and in the inclusion of different distance variables in the choice model specification. We test the effect of spatial heterogeneity on welfare estimates and transfer errors for minor and major river restoration works, and the transferability of river-specific utility functions, accounting for key variables such as site visitation, spatial clustering and income. River-specific utility functions appear to be transferable, resulting in low transfer errors. However, ignoring spatial heterogeneity increases transfer errors. Copyright © 2016 Elsevier Ltd. All rights reserved.
An Energy Absorber for the International Space Station
NASA Technical Reports Server (NTRS)
Wilkes, Bob; Laurence, Lora
2000-01-01
The energy absorber described herein is similar in size and shape to an automotive shock absorber, requiring a constant, high load to compress over the stroke, and self-resetting with a small load. The differences in these loads over the stroke represent the energy absorbed by the device, which is dissipated as friction. This paper describes the evolution of the energy absorber, presents the results of testing performed, and shows the sensitivity of this device to several key design variables.
New Variable Porosity Flow Diverter (VPOD) Stent Design for Treatment of Cerebrovascular Aneurysms
Ionita, Ciprian; Baier, Robert; Rudin, Stephen
2012-01-01
The use of flow-diverting stents for intracranial aneurysm repair has been an area of active research in recent years. While current commercial flow-diverting stents rely on a dense mesh of braided coils for flow diversion, our group has been developing a method to selectively occlude the aneurysm neck without endangering nearby perforator vessels. In this paper, we present a new method of fabricating the low-porosity patch, a key element of such asymmetric vascular stents (AVS). PMID:22254507
NASA Astrophysics Data System (ADS)
Branciforte, R.; Weiss, S. B.; Schaefer, N.
2008-12-01
Climate change threatens California's vast and unique biodiversity. The Bay Area Upland Habitat Goals project is a comprehensive regional biodiversity assessment of the nine counties surrounding San Francisco Bay and is designing conservation land networks that will serve to protect, manage, and restore that biodiversity. Conservation goals for vegetation, rare plants, mammals, birds, fish, amphibians, reptiles, and invertebrates are set, and those goals are met using the optimization algorithm MARXAN. Climate change issues are being considered in the assessment and network design in several ways. The high spatial variability at mesoclimatic and topoclimatic scales in California creates high local biodiversity and provides some degree of local resiliency to macroclimatic change. Mesoclimatic variability from 800 m scale PRISM climatic norms is used to assess "mesoclimate spaces" in distinct mountain ranges, so that high mesoclimatic variability, especially local extremes that likely support range limits of species and potential climatic refugia, can be captured in the network. Quantitative measures of network resiliency to climate change include the spatial range of key temperature and precipitation variables within planning units. Topoclimatic variability provides a finer-grained spatial patterning. Downscaling to the topoclimatic scale (10-50 m) includes modeling solar radiation across DEMs for predicting maximum temperature differentials, and topographic position indices for modeling minimum temperature differentials. PRISM data are also used to differentiate grasslands into distinct warm and cool types. The overall conservation strategy includes local and regional connectivity so that range shifts can be accommodated.
Verification of models for ballistic movement time and endpoint variability.
Lin, Ray F; Drury, Colin G
2013-01-01
A hand control movement is composed of several ballistic movements. The time required to perform a ballistic movement and its endpoint variability are two important properties in developing movement models. The purpose of this study was to test potential models for predicting these two properties. Twelve participants performed ballistic movements of specific amplitudes using a drawing tablet. The measured data on movement time and endpoint variability were then used to verify the models. This study was successful, with Hoffmann and Gan's movement time model (Hoffmann, 1981; Gan and Hoffmann, 1988) predicting more than 90.7% of data variance for 84 individual measurements. A new, theoretically developed ballistic movement variability model proved to be better than Howarth, Beggs, and Bowden's (1971) model, predicting on average 84.8% of stopping-variable error and 88.3% of aiming-variable error. These two validated models will help build solid theoretical movement models and evaluate input devices. This article provides better models for predicting end accuracy and movement time of ballistic movements, which are desirable in rapid aiming tasks such as keying in numbers on a smart phone. The models allow better design of aiming tasks, for example button sizes on mobile phones for different user populations.
Robustness of quantum key distribution with discrete and continuous variables to channel noise
NASA Astrophysics Data System (ADS)
Lasota, Mikołaj; Filip, Radim; Usenko, Vladyslav C.
2017-06-01
We study the robustness of quantum key distribution protocols using discrete or continuous variables to channel noise. We introduce a model of such noise, based on coupling of the signal to a thermal reservoir (typical for continuous-variable quantum key distribution), to the discrete-variable case. Then we compare the bounds on the tolerable channel noise between these two kinds of protocols using the same noise parametrization, in the case of an implementation which is otherwise perfect. The obtained results show that continuous-variable protocols can exhibit similar robustness to channel noise when the transmittance of the channel is relatively high. However, for strong loss, discrete-variable protocols are superior and can overcome even the infinite-squeezing continuous-variable protocol while using limited nonclassical resources. The requirement on the probability of single-photon production which would have to be fulfilled by a practical source of photons in order to demonstrate such superiority is feasible thanks to the recent rapid development in this field.
NASA Technical Reports Server (NTRS)
Wilson, Emily L.; DiGregorio, A. J.; Riot, Vincent J.; Ammons, Mark S.; Bruner, WIlliam W.; Carter, Darrell; Mao, Jianping; Ramanathan, Anand; Strahan, Susan E.; Oman, Luke D.;
2017-01-01
We present a design for a 4 U (20 cm × 20 cm × 10 cm) occultation-viewing laser heterodyne radiometer (LHR), designed for deployment on a 6 U CubeSat, that measures methane (CH4), carbon dioxide (CO2) and water vapor (H2O) in the limb. The LHR design collects sunlight that has undergone absorption by the trace gases and mixes it with a distributed feedback (DFB) laser centered at 1640 nm that scans across CO2, CH4, and H2O absorption features. Upper-troposphere/lower-stratosphere measurements of these gases provide key inputs to stratospheric circulation models: measuring stratospheric circulation and its variability is essential for projecting how climate change will affect stratospheric ozone.
VCE testbed program planning and definition study
NASA Technical Reports Server (NTRS)
Westmoreland, J. S.; Godston, J.
1978-01-01
The flight definition of the Variable Stream Control Engine (VSCE) was updated to reflect design improvements in the two key components: (1) the low emissions duct burner, and (2) the coannular exhaust nozzle. The testbed design was defined and plans for the overall program were formulated. The effect of these improvements was evaluated for performance, emissions, noise, weight, and length. For experimental large scale testing of the duct burner and coannular nozzle, a design definition of the VCE testbed configuration was made. This included selecting the core engine, determining instrumentation requirements, and selecting the test facilities, in addition to defining control system and assembly requirements. Plans for a comprehensive test program to demonstrate the duct burner and nozzle technologies were formulated. The plans include both aeroacoustic and emissions testing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Husain, Tausif; Hasan, Iftekhar; Sozer, Yilmaz
This paper presents the design considerations in cogging torque minimization in two types of transverse flux machines. The machines have a double stator-single rotor configuration with flux-concentrating ferrite magnets. One of the machines has pole windings across each leg of an E-core stator. The other machine has quasi-U-shaped stator cores and a ring winding. The flux in the stator back iron is transverse in both machines. Different methods of cogging torque minimization are investigated. Key methods of cogging torque minimization are identified and used as design variables for optimization using a design of experiments (DOE) based on the Taguchi method. A three-level DOE is performed to reach an optimum solution with a minimum number of simulations. Finite element analysis is used to study the different effects. Two prototypes are being fabricated for experimental verification.
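A hedged sketch of a three-level Taguchi-style DOE is shown below, using the standard L9(3^4) orthogonal array and a smaller-the-better signal-to-noise ratio; the finite-element evaluation is stubbed with a placeholder function, and the factor assignment is illustrative rather than the authors' design.

import numpy as np

# standard L9(3^4) orthogonal array: 9 runs, 4 factors, 3 levels (coded 0, 1, 2)
L9 = np.array([
    [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
    [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
    [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
])

def fe_cogging_torque(run):
    # placeholder for the finite-element evaluation of peak cogging torque (Nm)
    return 1.0 + 0.1 * np.sum(run) + 0.05 * np.random.rand()

# smaller-the-better S/N ratio: SN = -10 log10(mean(y^2)) over repeated evaluations
sn = []
for run in L9:
    y = np.array([fe_cogging_torque(run) for _ in range(3)])
    sn.append(-10 * np.log10(np.mean(y ** 2)))
sn = np.array(sn)

# main effect of each factor level on S/N; pick the level with the highest mean S/N
best_levels = [int(np.argmax([sn[L9[:, f] == lvl].mean() for lvl in range(3)]))
               for f in range(4)]
print("suggested factor levels:", best_levels)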
Effects of "D"-Amphetamine and Ethanol on Variable and Repetitive Key-Peck Sequences in Pigeons
ERIC Educational Resources Information Center
Ward, Ryan D.; Bailey, Ericka M.; Odum, Amy L.
2006-01-01
This experiment assessed the effects of "d"-Amphetamine and ethanol on reinforced variable and repetitive key-peck sequences in pigeons. Pigeons responded on two keys under a multiple schedule of Repeat and Vary components. In the Repeat component, completion of a target sequence of right, right, left, left resulted in food. In the Vary component,…
Waleckx, Etienne; Gourbière, Sébastien; Dumonteil, Eric
2015-01-01
Chagas disease prevention remains mostly based on triatomine vector control to reduce or eliminate house infestation with these bugs. The level of adaptation of triatomines to human housing is a key part of vector competence and needs to be precisely evaluated to allow for the design of effective vector control strategies. In this review, we examine how the domiciliation/intrusion level of different triatomine species/populations has been defined and measured and discuss how these concepts may be improved for a better understanding of their ecology and evolution, as well as for the design of more effective control strategies against a large variety of triatomine species. We suggest that a major limitation of current criteria for classifying triatomines into sylvatic, intrusive, domiciliary and domestic species is that these are essentially qualitative and do not rely on quantitative variables measuring population sustainability and fitness in their different habitats. However, such assessments may be derived from further analysis and modelling of field data. Such approaches can shed new light on the domiciliation process of triatomines and may represent a key tool for decision-making and the design of vector control interventions. PMID:25993504
Cabrera, Manuel; Machín, Leandro; Arrúa, Alejandra; Antúnez, Lucía; Curutchet, María Rosa; Giménez, Ana; Ares, Gastón
2017-12-01
Warnings are a new directive front-of-pack (FOP) nutrition labelling scheme that highlights products with high content of key nutrients. The design of warnings influences their ability to catch consumers' attention and to clearly communicate their intended meaning, which are key determinants of their effectiveness. The aim of the present work was to evaluate the influence of design features of warnings as a FOP nutrition labelling scheme on perceived healthfulness and attentional capture. Five studies with a total of 496 people were carried out. In the first study, the association of colour and perceived healthfulness was evaluated in an online survey in which participants had to rate their perceived healthfulness of eight colours. In the second study, the influence of colour, shape and textual information on perceived healthfulness was evaluated using choice-conjoint analysis. The third study focused on implicit associations between two design features (shape and colour) on perceived healthfulness. The fourth and fifth studies used visual search to evaluate the influence of colour, size and position of the warnings on attentional capture. Perceived healthfulness was significantly influenced by shape, colour and textual information. Colour was the variable with the largest contribution to perceived healthfulness. Colour, size and position of the warnings on the labels affected attentional capture. Results from the experiments provide recommendations for the design of warnings to identify products with unfavourable nutrient profile.
Post-cracking characteristics of high performance fiber reinforced cementitious composites
NASA Astrophysics Data System (ADS)
Suwannakarn, Supat W.
The application of high performance fiber reinforced cement composites (HPFRCC) in structural systems depends primarily on the material's tensile response, which is a direct function of fiber and matrix characteristics, the bond between them, and the fiber content or volume fraction. The objective of this dissertation is to evaluate and model the post-cracking behavior of HPFRCC. In particular, it focuses on the influential parameters controlling tensile behavior and the variability associated with them. The key parameters considered include: the stress and strain at first cracking, the stress and strain at maximum post-cracking, the shape of the stress-strain or stress-elongation response, the multiple cracking process, the shape of the resistance curve after crack localization, the energy associated with the multiple cracking process, and the stress versus crack opening response of a single crack. Both steel fibers and polymeric fibers, perceived to have the greatest potential for current commercial applications, are considered. The main variables covered include fiber type (Torex, Hooked, PVA, and Spectra) and fiber volume fraction (ranging from 0.75% to 2.0%). An extensive experimental program was carried out using direct tensile tests and stress versus crack opening displacement tests on notched tensile prisms. The key experimental results were analyzed and modeled using simple prediction equations which, combined with a composite mechanics approach, allowed for predicting schematic simplified stress-strain and stress-displacement response curves for use in structural modeling. The experimental data show that specimens reinforced with Torex fibers perform best, followed by Hooked and Spectra fibers, then PVA fibers. Significant variability in key parameters was observed throughout, suggesting that variability must be studied further. The new information obtained can be used as input for material models for finite element analysis and can provide greater confidence in using HPFRC composites in structural applications. It also provides a good foundation to integrate these composites into conventional structural analysis and design.
Azmal, Mohammad; Sari, Ali Akbari; Foroushani, Abbas Rahimi; Ahmadi, Batoul
2016-06-01
Patient and public involvement is engaging patients, providers, community representatives, and the public in healthcare planning and decision-making. The purpose of this study was to develop a model for the application of patient and public involvement in decision making in the Iranian healthcare system. A mixed qualitative-quantitative approach was used to develop a conceptual model. Thirty-three key informants were purposively recruited in the qualitative stage, and 420 people (patients and their companions) were included in a protocol study that was implemented in five steps: 1) Identifying antecedents, consequences, and variables associated with patient and public involvement in healthcare decision making through a comprehensive literature review; 2) Determining the main variables in the context of Iran's health system using conceptual framework analysis; 3) Prioritizing and weighting variables by Shannon entropy; 4) Designing and validating a tool for patient and public involvement in healthcare decision making; and 5) Providing a conceptual model of patient and public involvement in planning and developing healthcare using structural equation modeling. We used various software programs, including SPSS (17), MAXQDA (10), Excel, and LISREL. Content analysis, Shannon entropy, and descriptive and analytic statistics were used to analyze the data. In this study, seven antecedent variables, five dimensions of involvement, and six consequences were identified. These variables were used to design a valid tool. A logical model was derived that explained the relationships between the antecedent and consequent variables and the dimensions of patient and public involvement. Given the specific context of the political, social, and innovative environments in Iran, it was necessary to design a model that would be compatible with these features. The model can improve the quality of care, promote patient and public satisfaction with healthcare, and legitimize the representation of the people served. It can provide a practical guide for managers and policy makers to involve people in making the decisions that influence their lives.
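As an illustration of the Shannon entropy weighting step (step 3), a short sketch of the standard entropy weight method is given below; the score matrix is randomly generated and the variable count is hypothetical.

import numpy as np

def entropy_weights(X):
    # X: (n_respondents, m_variables) matrix of non-negative scores
    X = np.asarray(X, dtype=float)
    n, m = X.shape
    P = X / X.sum(axis=0, keepdims=True)              # column-wise proportions
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(P > 0, P * np.log(P), 0.0)
    e = -plogp.sum(axis=0) / np.log(n)                # Shannon entropy per variable
    d = 1.0 - e                                       # degree of diversification
    return d / d.sum()                                # normalised weights

# hypothetical survey scores for five involvement variables from 420 respondents
scores = np.random.default_rng(1).integers(1, 6, size=(420, 5))
print(entropy_weights(scores))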
The Theory of Planned Behaviour and dietary patterns: A systematic review and meta-analysis.
McDermott, M S; Oliver, M; Simnadis, T; Beck, E J; Coltman, T; Iverson, D; Caputi, P; Sharma, R
2015-12-01
Promoting adherence to healthy dietary patterns is a critical public health issue. Models of behaviour, such as the Theory of Planned Behaviour (TPB) allow programme designers to identify antecedents of dietary patterns and design effective interventions. The primary aim of this study was to examine the association between TPB variables and dietary patterns. A systematic literature search was conducted to identify relevant studies. Random-effects meta-analysis was used to calculate average correlations. Meta-regression was used to test the impact of moderator variables. In total, 22 reports met the inclusion criteria. Attitudes had the strongest association with intention (r+=0.61) followed by perceived behavioural control (PBC, r+=0.46) and subjective norm (r+=0.35). The association between intention and behaviour was r+=0.47, and between PBC and behaviour r+=0.32. Moderator analyses revealed that younger participants had stronger PBC-behaviour associations than older participants had, and studies recording participants' perceptions of behaviour reported significantly higher intention-behaviour associations than did those using less subjective measures. TPB variables were found to have medium to large associations with both intention and behaviour that were robust to the influence of key moderators. Recommendations for future research include further examination of the moderation of TPB variables by age and gender and the use of more valid measures of eating behaviour. Copyright © 2015 Elsevier Inc. All rights reserved.
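A minimal sketch of pooling correlations with Fisher's z transform and a DerSimonian-Laird random-effects model, the kind of computation behind the r+ values above, is given below; the example correlations and sample sizes are invented, not the review's data.

import numpy as np

def random_effects_pooled_r(r, n):
    # Fisher z transform; within-study variance of z is 1/(n-3)
    r, n = np.asarray(r, float), np.asarray(n, float)
    z = np.arctanh(r)
    v = 1.0 / (n - 3.0)
    w = 1.0 / v                                   # fixed-effect weights
    z_fe = np.sum(w * z) / np.sum(w)
    q = np.sum(w * (z - z_fe) ** 2)               # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(z) - 1)) / c)       # DerSimonian-Laird between-study variance
    w_re = 1.0 / (v + tau2)
    z_re = np.sum(w_re * z) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    ci = np.tanh([z_re - 1.96 * se, z_re + 1.96 * se])
    return np.tanh(z_re), ci                      # pooled r and 95% CI

# hypothetical attitude-intention correlations from individual studies
r_plus, ci = random_effects_pooled_r(r=[0.55, 0.68, 0.59, 0.63], n=[120, 240, 95, 310])
print(r_plus, ci)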
NASA Astrophysics Data System (ADS)
Alberti, Fabrizio; Santiago, Sergio; Roccabruna, Mattia; Luque, Salvador; Gonzalez-Aguilar, Jose; Crema, Luigi; Romero, Manuel
2016-05-01
Volumetric absorbers constitute one of the key elements in order to achieve high thermal conversion efficiencies in concentrating solar power plants. Regardless of the working fluid or thermodynamic cycle employed, design trends towards higher absorber output temperatures are widespread, which lead to the general need of components of high solar absorptance, high conduction within the receiver material, high internal convection, low radiative and convective heat losses and high mechanical durability. In this context, the use of advanced manufacturing techniques, such as selective laser melting, has allowed for the fabrication of intricate geometries that are capable of fulfilling the previous requirements. This paper presents a parametric design and analysis of the optical performance of volumetric absorbers of variable porosity conducted by means of detailed numerical ray tracing simulations. Sections of variable macroscopic porosity along the absorber depth were constructed by the fractal growth of single-cell structures. Measures of performance analyzed include optical reflection losses from the absorber front and rear faces, penetration of radiation inside the absorber volume, and radiation absorption as a function of absorber depth. The effects of engineering design parameters such as absorber length and wall thickness, material reflectance and porosity distribution on the optical performance of absorbers are discussed, and general design guidelines are given.
Evaluation of vertical profiles to design continuous descent approach procedure
NASA Astrophysics Data System (ADS)
Pradeep, Priyank
The current research focuses on the predictability, variability and operational feasibility aspects of the Continuous Descent Approach (CDA), which is among the key concepts of the Next Generation Air Transportation System (NextGen). The idle-thrust CDA is a fuel-economical, noise and emission abatement procedure, but it requires increased separation to accommodate variability and uncertainties in the vertical and speed profiles of arriving aircraft. Although considerable research has been devoted to estimating the potential benefits of CDA, only a few studies have addressed its predictability, variability and operational feasibility. The analytical equations derived in this research using flight dynamics and the Base of Aircraft Data (BADA) Total Energy Model (TEM) give insight into the dependency of the CDA vertical profile on various factors such as wind speed and gradient, weight, aircraft type and configuration, thrust settings, atmospheric factors (deviation from ISA (DISA), pressure and density of the air) and the descent speed profile. Application of the derived equations to the idle-thrust CDA reveals the sensitivity of its vertical profile to multiple factors. This suggests that a fixed geometric flight path angle (FPA) CDA has a higher degree of predictability and less variability, at the cost of non-idle, low-thrust engine settings; with an optimized design, however, this penalty can be largely minimized. CDA simulations were performed using the Future ATM Concept Evaluation Tool (FACET) based on radar-track and aircraft-type (BADA) data from real air traffic to some of the busiest airports in the USA (ATL, SFO and the New York Metroplex (JFK, EWR and LGA)). Statistical analysis of the CDA vertical profiles shows that 1) the mean geometric FPAs derived from the simulated vertical profiles are consistently shallower than the 3° glideslope angle and 2) there is a high level of variability in the vertical profiles of idle-thrust CDA even in the absence of uncertainties in external factors. Analysis from an operational feasibility perspective suggests that two key features of the performance-based Flight Management System (FMS), i.e. required time of arrival (RTA) and the geometric descent path, would help reduce the unpredictability of the arrival time and vertical profile of aircraft guided by the FMS coupled with autopilot (AP) and autothrottle (AT). The statistical analysis of the CDA vertical profiles also suggests that, for procedure design, window-type ('AT or above' and 'AT or below') altitude and FPA constraints are more realistic and useful than the obsolete 'AT'-type altitude constraint.
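As a rough illustration of how the Total Energy Model ties thrust, drag and speed changes to the descent geometry, the sketch below solves (T − D)·V = m·g·(dh/dt) + m·V·(dV/dt) for the flight path angle of a single idle-thrust descent point. All numerical values and the constant-drag assumption are illustrative, not BADA coefficients or results from this study.

```python
import numpy as np

def tem_fpa(thrust, drag, mass, v_tas, dv_dt, g=9.80665):
    """Flight path angle [rad] from the Total Energy Model:
    (T - D) * V = m * g * dh/dt + m * V * dV/dt, with dh/dt = V * sin(gamma)."""
    sin_gamma = ((thrust - drag) * v_tas - mass * v_tas * dv_dt) / (mass * g * v_tas)
    return np.arcsin(np.clip(sin_gamma, -1.0, 1.0))

# Assumed values for one point of an idle-thrust descent segment
mass = 60000.0        # kg
v_tas = 130.0         # m/s true airspeed
thrust_idle = 4000.0  # N, residual idle thrust (assumed)
drag = 40000.0        # N, total drag (assumed)
dv_dt = -0.15         # m/s^2, slow deceleration

gamma = tem_fpa(thrust_idle, drag, mass, v_tas, dv_dt)
print(f"geometric FPA ≈ {np.degrees(gamma):.2f} deg")
```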
Howey, Meghan C L; Palace, Michael W; McMichael, Crystal H
2016-07-05
Building monuments was one way that past societies reconfigured their landscapes in response to shifting social and ecological factors. Understanding the connections between those factors and monument construction is critical, especially when multiple types of monuments were constructed across the same landscape. Geospatial technologies enable past cultural activities and environmental variables to be examined together at large scales. Many geospatial modeling approaches, however, are not designed for presence-only (occurrence) data, which can be limiting given that many archaeological site records are presence only. We use maximum entropy modeling (MaxEnt), which works with presence-only data, to predict the distribution of monuments across large landscapes, and we analyze MaxEnt output to quantify the contributions of spatioenvironmental variables to predicted distributions. We apply our approach to co-occurring Late Precontact (ca. A.D. 1000-1600) monuments in Michigan: (i) mounds and (ii) earthwork enclosures. Many of these features have been destroyed by modern development, and therefore, we conducted archival research to develop our monument occurrence database. We modeled each monument type separately using the same input variables. Analyzing variable contribution to MaxEnt output, we show that mound and enclosure landscape suitability was driven by contrasting variables. Proximity to inland lakes was key to mound placement, and proximity to rivers was key to sacred enclosures. This juxtaposition suggests that mounds met local needs for resource procurement success, whereas enclosures filled broader regional needs for intergroup exchange and shared ritual. Our study shows how MaxEnt can be used to develop sophisticated models of past cultural processes, including monument building, with imperfect, limited, presence-only data.
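MaxEnt itself is typically run through dedicated software, but its presence-only logic can be approximated by contrasting occurrence points with randomly drawn background points. The sketch below, with synthetic data and assumed covariate names (dist_lakes, dist_rivers, elevation), uses a scikit-learn logistic regression as a stand-in and permutation importance as a rough analogue of MaxEnt's variable-contribution output.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Synthetic spatio-environmental predictors (assumed names and distributions)
n_presence, n_background = 120, 2000
X_bg = rng.uniform(0, 1, size=(n_background, 3))
X_pr = np.column_stack([rng.beta(2, 5, n_presence),        # presences skewed toward lakes
                        rng.uniform(0, 1, n_presence),
                        rng.normal(0.5, 0.15, n_presence)])
X = np.vstack([X_pr, X_bg])
y = np.r_[np.ones(n_presence), np.zeros(n_background)]     # presence vs background

# Presence-vs-background logistic regression as a rough MaxEnt stand-in
model = LogisticRegression(max_iter=1000).fit(X, y)

# Permutation importance as a proxy for variable contribution
imp = permutation_importance(model, X, y, n_repeats=20, random_state=0)
for name, score in zip(["dist_lakes", "dist_rivers", "elevation"], imp.importances_mean):
    print(f"{name}: {score:.3f}")
```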
A Significant Role for Renewables in a Low-Carbon Energy Economy?
NASA Astrophysics Data System (ADS)
Newmark, R. L.
2015-12-01
Renewables currently make up a small (but growing) fraction of total U.S. electricity generation. In some regions, renewable growth has resulted in instantaneous penetration levels of wind and solar in excess of 60% of demand. With decreasing costs, abundant resource potential and low carbon emissions and water requirements, wind and solar are increasingly becoming attractive new generation options. However, factors such as resource variability and geographic distribution of prime resources raise questions regarding the extent to which our power system can rely on variable generation resources. Here, we describe scenario analyses designed to tackle engineering and economic challenges associated with variable generation, along with insights derived from research results. These analyses demonstrate the operability of high renewable systems and quantify some of the engineering challenges (and solutions) associated with maintaining reliability. Key questions addressed include the operational and economic impacts of increasing levels of variable generation on the U.S. power system. Since reliability and economic efficiency are measured across a variety of time frames, and with a variety of metrics, a suite of tools addressing different system impacts are used to understand how new resources affect incumbent resources and operational practices. We summarize a range of modeled scenarios, focusing on ones with 80% RE in the United States and >30% variable wind and solar in the East and the West. We also summarize the environmental impacts and benefits estimated for these and similar scenarios. Results provide key insights to inform the technical, operational and regulatory evolution of the U.S. power system. This work is extended internationally through the 21st Century Power Partnership's collaborations on power system transformation, with active collaboration in Canada, Mexico, India, China and South Africa, among others.
Finite-size analysis of a continuous-variable quantum key distribution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leverrier, Anthony; Grosshans, Frederic; Grangier, Philippe
2010-06-15
The goal of this paper is to extend the framework of finite-size analysis recently developed for quantum key distribution to continuous-variable protocols. We do not solve this problem completely here, and we mainly consider the finite-size effects on the parameter estimation procedure. Despite the fact that some questions are left open, we are able to give an estimation of the secret key rate for protocols which do not contain a postselection procedure. As expected, these results are significantly more pessimistic than those obtained in the asymptotic regime. However, we show that recent continuous-variable protocols are able to provide fully secure secret keys in the finite-size scenario, over distances larger than 50 km.
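For orientation, a commonly used finite-size expression subtracts from the asymptotic rate a correction Δ(n) that shrinks as the block length n grows. The sketch below uses placeholder values for the reconciliation efficiency, mutual information and Holevo bound (which in practice come from parameter estimation) and one common form of Δ(n); it is meant only to show how the penalty scales, not to reproduce the paper's security analysis.

```python
import numpy as np

def finite_size_rate(beta, I_AB, chi_BE, n, N, eps_bar=1e-10):
    """Rough finite-size secret key rate (bits/symbol):
    K = (n/N) * (beta * I_AB - chi_BE - Delta(n)),
    with Delta(n) ≈ 7 * sqrt(log2(2/eps_bar) / n) (a commonly used form)."""
    delta = 7.0 * np.sqrt(np.log2(2.0 / eps_bar) / n)
    return (n / N) * (beta * I_AB - chi_BE - delta)

# Placeholder (assumed) values; half of the symbols are used for parameter estimation
beta, I_AB, chi_BE = 0.95, 0.50, 0.30
for N in [1e6, 1e8, 1e10]:
    n = N / 2
    print(f"N = {N:.0e}: K ≈ {finite_size_rate(beta, I_AB, chi_BE, n, N):.4f} bit/symbol")
```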
Dynamic design of ecological monitoring networks for non-Gaussian spatio-temporal data
Wikle, C.K.; Royle, J. Andrew
2005-01-01
Many ecological processes exhibit spatial structure that changes over time in a coherent, dynamical fashion. This dynamical component is often ignored in the design of spatial monitoring networks. Furthermore, ecological variables related to processes such as habitat are often non-Gaussian (e.g. Poisson or log-normal). We demonstrate that a simulation-based design approach can be used in settings where the data distribution is from a spatio-temporal exponential family. The key random component in the conditional mean function from this distribution is then a spatio-temporal dynamic process. Given the computational burden of estimating the expected utility of various designs in this setting, we utilize an extended Kalman filter approximation to facilitate implementation. The approach is motivated by, and demonstrated on, the problem of selecting sampling locations to estimate July brood counts in the prairie pothole region of the U.S.
Optimizing hydraulic fracture design in the diatomite formation, Lost Hills Field
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelson, D.G.; Klins, M.A.; Manrique, J.F.
1996-12-31
Since 1988, over 1.3 billion pounds of proppant have been placed in the Lost Hills Field of Kern County, California, in over 2700 hydraulic fracture treatments involving investments of about $150 million. In 1995, systematic reevaluation of the standard, field trial-based fracture design began. Reservoir, geomechanical, and hydraulic fracture characterization; production and fracture modeling; sensitivity analysis; and field test results were integrated to optimize designs with regard to proppant volume, proppant ramps, and perforating strategy. The results support a reduction in proppant volume from 2500 to 1700 lb/ft which will save about $50,000 per well, totalling over $3 million per year. Vertical coverage was found to be a key component of fracture quality which could be optimized by eliminating perforations from lower stress intervals, reducing the total number of perforations, and reducing peak slurry loading from 16 to 12 ppa. A relationship between variations in lithology, pore pressure, and stress was observed. Point-source perforating strategies were investigated and variable multiple fracture behavior was observed. The discussed approach has application in areas where stresses are variable; pay zones are thick; hydraulic fracture design is based primarily on empirical, trial-and-error field test results; and effective, robust predictive models involving real-data feedback have not been incorporated into the design improvement process.
Antoniades, Dermot; Douglas, Marianne S V; Michelutti, Neal; Smol, John P
2014-08-01
Ecotones are key areas for the detection of global change because many are predicted to move with shifts in climate. Prince of Wales Island, in the Canadian Arctic Archipelago, spans the transition between mid- to high-Arctic ecoregions. We analyzed limnological variables and recent diatom assemblages from its lakes and ponds to determine if assemblages reflected this ecotone. Limnological gradients were short, and water chemistry explained 20.0% of diatom variance in a redundancy analysis (RDA), driven primarily by dissolved organic carbon, Ca and SO4 . Most taxa were small, benthic forms; key taxa such as planktonic Cyclotella species were restricted to the warmer, southern portion of the study area, while benthic Staurosirella were associated with larger, ice-dominated lakes. Nonetheless, there were no significant changes in diatom assemblages across the mid- to high-Arctic ecoregion boundary. We combined our data set with one from nearby Cornwallis Island to expand the study area and lengthen its environmental gradients. Within this expanded data set, 40.6% of the diatom variance was explained by a combination of water chemistry and geographic variables, and significant relationships were revealed between diatom distributions and key limnological variables, including pH, specific conductivity, and chl-a. Using principal coordinates analysis, we estimated community turnover with latitude and applied piecewise linear regression to determine diatom ecotone positions. A pronounced transition was present between Prince of Wales Island and the colder, more northerly Cornwallis Island. These data will be important in detecting any future northward ecotone movement in response to predicted Arctic climate warming in this highly sensitive region. © 2014 Phycological Society of America.
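The breakpoint of a two-segment (piecewise linear) regression of community turnover on latitude can be located by a simple grid search over candidate breakpoints, which is the essence of the ecotone-positioning step described above. The sketch below uses synthetic data and an assumed latitude range.

```python
import numpy as np

def fit_breakpoint(x, y, candidates):
    """Grid search for the breakpoint minimizing SSE of a two-segment linear fit."""
    best_b, best_sse = None, np.inf
    for b in candidates:
        # Basis: intercept, x, and a hinge term max(0, x - b)
        X = np.column_stack([np.ones_like(x), x, np.maximum(0.0, x - b)])
        coef = np.linalg.lstsq(X, y, rcond=None)[0]
        sse = np.sum((y - X @ coef) ** 2)
        if sse < best_sse:
            best_b, best_sse = b, sse
    return best_b, best_sse

rng = np.random.default_rng(1)
latitude = np.sort(rng.uniform(72.0, 76.0, 60))      # assumed latitude range
turnover = np.where(latitude < 74.2, 0.2, 1.0) + 0.3 * rng.normal(size=60)
bp, sse = fit_breakpoint(latitude, turnover, np.linspace(72.5, 75.5, 61))
print(f"estimated ecotone breakpoint ≈ {bp:.2f} deg N (SSE = {sse:.2f})")
```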
Exercise Dose in Clinical Practice
Wasfy, Meagan; Baggish, Aaron L.
2016-01-01
There is wide variability in the physical activity patterns of the patients in contemporary clinical cardiovascular practice. This review is designed to address the impact of exercise dose on key cardiovascular risk factors and on mortality. We begin by examining the body of literature that supports a dose-response relationship between exercise and cardiovascular disease risk factors including plasma lipids, hypertension, diabetes mellitus, and obesity. We next explore the relationship between exercise dose and mortality by reviewing the relevant epidemiological literature underlying current physical activity guideline recommendations. We then expand this discussion to critically examine recent data pertaining to the impact of exercise dose at the lowest and highest ends of the spectrum. Finally, we provide a framework for how the key concepts of exercise dose can be integrated into clinical practice. PMID:27267537
Exercise Dose in Clinical Practice.
Wasfy, Meagan M; Baggish, Aaron L
2016-06-07
There is wide variability in the physical activity patterns of the patients in contemporary clinical cardiovascular practice. This review is designed to address the impact of exercise dose on key cardiovascular risk factors and on mortality. We begin by examining the body of literature that supports a dose-response relationship between exercise and cardiovascular disease risk factors, including plasma lipids, hypertension, diabetes mellitus, and obesity. We next explore the relationship between exercise dose and mortality by reviewing the relevant epidemiological literature underlying current physical activity guideline recommendations. We then expand this discussion to critically examine recent data pertaining to the impact of exercise dose at the lowest and highest ends of the spectrum. Finally, we provide a framework for how the key concepts of exercise dose can be integrated into clinical practice. © 2016 American Heart Association, Inc.
Jiang, Yuan; Liese, Eric; Zitney, Stephen E.; ...
2018-02-25
This paper presents a baseline design and optimization approach developed in Aspen Custom Modeler (ACM) for microtube shell-and-tube exchangers (MSTEs) used for high- and low-temperature recuperation in a 10 MWe indirect supercritical carbon dioxide (sCO2) recompression closed Brayton cycle (RCBC). The MSTE-type recuperators are designed using one-dimensional models with thermal-hydraulic correlations appropriate for sCO2 and property models that capture the considerable nonlinear changes in CO2 properties near the critical and pseudo-critical points. Using the successive quadratic programming (SQP) algorithm in ACM, optimal recuperator designs are obtained for either custom or industry-standard microtubes considering constraints based on current advanced manufacturing techniques. The three decision variables are the number of tubes, tube pitch-to-diameter ratio, and tube diameter. Five different objective functions based on different key design measures are considered: minimization of total heat transfer area, heat exchanger volume, metal weight, thermal residence time, and maximization of compactness. Sensitivity studies indicate that the constraint on the maximum number of tubes per shell does affect the number of parallel heat exchanger trains but not the tube selection, total number of tubes, tube length and other key design measures in the final optimal design when considering industry-standard tubes. In this study, the optimally designed high- and low-temperature recuperators have 47,000 3/32 inch tubes and 63,000 1/16 inch tubes, respectively. In addition, sensitivities to the design temperature approach and maximum allowable pressure drop are studied, since these specifications significantly impact the optimal design of the recuperators as well as the thermal efficiency and the economic performance of the entire sCO2 Brayton cycle.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Yuan; Liese, Eric; Zitney, Stephen E.
This paper presents a baseline design and optimization approach developed in Aspen Custom Modeler (ACM) for microtube shell-and-tube exchangers (MSTEs) used for high- and low-temperature recuperation in a 10 MWe indirect supercritical carbon dioxide (sCO2) recompression closed Brayton cycle (RCBC). The MSTE-type recuperators are designed using one-dimensional models with thermal-hydraulic correlations appropriate for sCO2 and property models that capture the considerable nonlinear changes in CO2 properties near the critical and pseudo-critical points. Using the successive quadratic programming (SQP) algorithm in ACM, optimal recuperator designs are obtained for either custom or industry-standard microtubes considering constraints based on current advanced manufacturing techniques. The three decision variables are the number of tubes, tube pitch-to-diameter ratio, and tube diameter. Five different objective functions based on different key design measures are considered: minimization of total heat transfer area, heat exchanger volume, metal weight, thermal residence time, and maximization of compactness. Sensitivity studies indicate that the constraint on the maximum number of tubes per shell does affect the number of parallel heat exchanger trains but not the tube selection, total number of tubes, tube length and other key design measures in the final optimal design when considering industry-standard tubes. In this study, the optimally designed high- and low-temperature recuperators have 47,000 3/32 inch tubes and 63,000 1/16 inch tubes, respectively. In addition, sensitivities to the design temperature approach and maximum allowable pressure drop are studied, since these specifications significantly impact the optimal design of the recuperators as well as the thermal efficiency and the economic performance of the entire sCO2 Brayton cycle.
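The constrained sizing problem described in this record can be prototyped with a generic SQP solver. The sketch below (SciPy's SLSQP, a deliberately simplified area and pressure-drop model, and made-up constants) minimizes heat-transfer area over scaled tube count, pitch-to-diameter ratio and tube diameter; it is not the ACM model or the correlations used in the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative (assumed) constants, not values from the paper
Q, dT = 5.0e6, 10.0          # duty [W] and mean temperature difference [K]
m_dot, rho = 50.0, 700.0     # tube-side flow [kg/s] and sCO2-like density [kg/m^3]
c_U = 900.0                  # W/m^2-K at 1 m/s; U assumed to scale as v**0.8

def metrics(x):
    n, pr, d = x[0] * 1e4, x[1], x[2] * 1e-3        # tubes, pitch ratio, diameter [m]
    v = m_dot / (rho * n * np.pi * d**2 / 4.0)      # tube-side velocity
    U = c_U * v**0.8                                # crude convective scaling
    A = Q / (U * dT)                                # required heat-transfer area
    L = A / (n * np.pi * d)                         # tube length
    dp = 0.02 * (L / d) * 0.5 * rho * v**2          # Darcy-type pressure drop, f = 0.02
    shell = pr * d * np.sqrt(n)                     # rough shell diameter
    return A, L, dp, shell

cons = [{"type": "ineq", "fun": lambda x: 2.0e5 - metrics(x)[2]},   # dp <= 2 bar
        {"type": "ineq", "fun": lambda x: 1.0 - metrics(x)[3]}]     # shell diameter <= 1 m
res = minimize(lambda x: metrics(x)[0], x0=[2.0, 1.5, 2.0], method="SLSQP",
               bounds=[(0.1, 10.0), (1.2, 2.0), (1.0, 5.0)], constraints=cons)

n, pr, d = res.x[0] * 1e4, res.x[1], res.x[2]
A, L, dp, shell = metrics(res.x)
print(f"tubes ≈ {n:.0f}, pitch ratio ≈ {pr:.2f}, d ≈ {d:.2f} mm")
print(f"area ≈ {A:.1f} m^2, length ≈ {L:.2f} m, dp ≈ {dp/1e5:.2f} bar")
```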
Tunno, Brett J; Dalton, Rebecca; Michanowicz, Drew R; Shmool, Jessie L C; Kinnee, Ellen; Tripathy, Sheila; Cambal, Leah; Clougherty, Jane E
2016-01-01
Health effects of fine particulate matter (PM2.5) vary by chemical composition, and composition can help to identify key PM2.5 sources across urban areas. Further, this intra-urban spatial variation in concentrations and composition may vary with meteorological conditions (e.g., mixing height). Accordingly, we hypothesized that spatial sampling during atmospheric inversions would help to better identify localized source effects, and reveal more distinct spatial patterns in key constituents. We designed a 2-year monitoring campaign to capture fine-scale intra-urban variability in PM2.5 composition across Pittsburgh, PA, and compared both spatial patterns and source effects during “frequent inversion” hours vs 24-h weeklong averages. Using spatially distributed programmable monitors, and a geographic information systems (GIS)-based design, we collected PM2.5 samples across 37 sampling locations per year to capture variation in local pollution sources (e.g., proximity to industry, traffic density) and terrain (e.g., elevation). We used inductively coupled plasma mass spectrometry (ICP-MS) to determine elemental composition, and unconstrained factor analysis to identify source suites by sampling scheme and season. We examined spatial patterning in source factors using land use regression (LUR), wherein GIS-based source indicators served to corroborate factor interpretations. Under both summer sampling regimes, and for winter inversion-focused sampling, we identified six source factors, characterized by tracers associated with brake and tire wear, steel-making, soil and road dust, coal, diesel exhaust, and vehicular emissions. For winter 24-h samples, four factors suggested traffic/fuel oil, traffic emissions, coal/industry, and steel-making sources. In LURs, as hypothesized, GIS-based source terms better explained spatial variability in inversion-focused samples, including a greater contribution from roadway, steel, and coal-related sources. Factor analysis produced source-related constituent suites under both sampling designs, though factors were more distinct under inversion-focused sampling. PMID:26507005
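Positive matrix factorization decomposes a samples-by-elements concentration matrix into non-negative source contributions and profiles. Without PMF's uncertainty weighting, plain non-negative matrix factorization gives the flavor of that step; the sketch below uses synthetic concentrations and an assumed tracer list.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(2)
elements = ["Fe", "Mn", "Zn", "Cu", "Ba", "S"]          # assumed tracer set

# Synthetic source profiles (rows) and site contributions for 37 sites
true_profiles = rng.gamma(2.0, 1.0, size=(3, len(elements)))
contributions = rng.gamma(2.0, 1.0, size=(37, 3))
X = contributions @ true_profiles + rng.uniform(0, 0.05, size=(37, len(elements)))

# Factor the concentration matrix into non-negative contributions and profiles
model = NMF(n_components=3, init="nndsvda", max_iter=2000, random_state=0)
G = model.fit_transform(X)      # site-by-factor contributions
F = model.components_           # factor-by-element profiles

for k, profile in enumerate(F):
    top = [elements[i] for i in np.argsort(profile)[::-1][:3]]
    print(f"factor {k + 1}: dominant tracers {top}, mean contribution {G[:, k].mean():.2f}")
```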
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leverrier, Anthony; Grangier, Philippe; Laboratoire Charles Fabry, Institut d'Optique, CNRS, University Paris-Sud, Campus Polytechnique, RD 128, F-91127 Palaiseau Cedex
2010-06-15
In this article, we give a simple proof of the fact that the optimal collective attacks against continuous-variable quantum key distribution with a Gaussian modulation are Gaussian attacks. Our proof, which makes use of symmetry properties of the protocol in phase space, is particularly relevant for the finite-key analysis of the protocol and therefore for practical applications.
Identifying key features of effective active learning: the effects of writing and peer discussion.
Linton, Debra L; Pangle, Wiline M; Wyatt, Kevin H; Powell, Karli N; Sherwood, Rachel E
2014-01-01
We investigated some of the key features of effective active learning by comparing the outcomes of three different methods of implementing active-learning exercises in a majors introductory biology course. Students completed activities in one of three treatments: discussion, writing, and discussion + writing. Treatments were rotated weekly between three sections taught by three different instructors in a full factorial design. The data set was analyzed by generalized linear mixed-effect models with three independent variables: student aptitude, treatment, and instructor, and three dependent (assessment) variables: change in score on pre- and postactivity clicker questions, and coding scores on in-class writing and exam essays. All independent variables had significant effects on student performance for at least one of the dependent variables. Students with higher aptitude scored higher on all assessments. Student scores were higher on exam essay questions when the activity was implemented with a writing component compared with peer discussion only. There was a significant effect of instructor, with instructors showing different degrees of effectiveness with active-learning techniques. We suggest that individual writing should be implemented as part of active learning whenever possible and that instructors may need training and practice to become effective with active learning. © 2014 D. L. Linton et al. CBE—Life Sciences Education © 2014 The American Society for Cell Biology. This article is distributed by The American Society for Cell Biology under license from the author(s). It is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).
Integrating Decision Making and Mental Health Interventions Research: Research Directions
Wills, Celia E.; Holmes-Rovner, Margaret
2006-01-01
The importance of incorporating patient and provider decision-making processes is in the forefront of the National Institute of Mental Health (NIMH) agenda for improving mental health interventions and services. Key concepts in patient decision making are highlighted within a simplified model of patient decision making that links patient-level/“micro” variables to services-level/“macro” variables via the decision-making process that is a target for interventions. The prospective agenda for incorporating decision-making concepts in mental health research includes (a) improved measures for characterizing decision-making processes that are matched to study populations, complexity, and types of decision making; (b) testing decision aids in effectiveness research for diverse populations and clinical settings; and (c) improving the understanding and incorporation of preference concepts in enhanced intervention designs. PMID:16724158
NASA Astrophysics Data System (ADS)
Chaibub Neto, Elias
2016-11-01
Clinical trials traditionally employ blinding as a design mechanism to reduce the influence of placebo effects. In practice, however, it can be difficult or impossible to blind study participants and unblinded trials are common in medical research. Here we show how instrumental variables can be used to quantify and disentangle treatment and placebo effects in randomized clinical trials comparing control and active treatments in the presence of confounders. The key idea is to use randomization to separately manipulate treatment assignment and psychological encouragement conversations/interactions that increase the participants’ desire for improved symptoms. The proposed approach is able to improve the estimation of treatment effects in blinded studies and, most importantly, opens the doors to account for placebo effects in unblinded trials.
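A minimal two-stage least squares sketch shows how a separately randomized encouragement can serve as an instrument for the (confounded) expectation pathway while the randomized treatment effect is estimated directly. The data, effect sizes and variable names below are synthetic assumptions used only to illustrate the estimator.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000

# Two independent randomizations: active treatment and psychological encouragement
treat = rng.integers(0, 2, n)            # randomized treatment assignment
encour = rng.integers(0, 2, n)           # randomized encouragement (the instrument)

# Unobserved optimism confounds expectation and outcome
optimism = rng.normal(size=n)
expect = 0.5 * encour + 0.6 * optimism + rng.normal(scale=0.5, size=n)   # desire for improvement
y = 1.0 * treat + 0.8 * expect + 0.7 * optimism + rng.normal(size=n)     # true effects: 1.0 and 0.8

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Naive OLS of outcome on treatment and expectation is biased by optimism
X_naive = np.column_stack([np.ones(n), treat, expect])
print("naive coefficients (treat, expect):", ols(X_naive, y)[1:])

# 2SLS: first stage predicts expectation from the instruments, second stage uses the fit
Z = np.column_stack([np.ones(n), treat, encour])
expect_hat = Z @ ols(Z, expect)
X_iv = np.column_stack([np.ones(n), treat, expect_hat])
print("2SLS coefficients (treat, expect):", ols(X_iv, y)[1:])
```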
Effect of signal jitter on the spectrum of rotor impulsive noise
NASA Technical Reports Server (NTRS)
Brooks, Thomas F.
1987-01-01
The effect of randomness or jitter of the acoustic waveform on the spectrum of rotor impulsive noise is studied because of its importance for data interpretation. An acoustic waveform train is modelled representing rotor impulsive noise. The amplitude, shape, and period between occurrences of individual pulses are allowed to be randomized assuming normal probability distributions. Results, in terms of the standard deviations of the variable quantities, are given for the autospectrum as well as special processed spectra designed to separate harmonic and broadband rotor noise components. Consideration is given to the effect of accuracy in triggering or keying to a rotor one per revolution signal. An example is given showing the resultant spectral smearing at the high frequencies due to the pulse signal period variability.
Effect of signal jitter on the spectrum of rotor impulsive noise
NASA Technical Reports Server (NTRS)
Brooks, Thomas F.
1988-01-01
The effect of randomness or jitter of the acoustic waveform on the spectrum of rotor impulsive noise is studied because of its importance for data interpretation. An acoustic waveform train is modeled representing rotor impulsive noise. The amplitude, shape, and period between occurrences of individual pulses are allowed to be randomized assuming normal probability distributions. Results, in terms of the standard deviations of the variable quantities, are given for the autospectrum as well as special processed spectra designed to separate harmonic and broadband rotor noise components. Consideration is given to the effect of accuracy in triggering or keying to a rotor one per revolution signal. An example is given showing the resultant spectral smearing at the high frequencies due to the pulse signal period variability.
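The high-frequency spectral smearing described in these two reports can be reproduced with a short simulation: build an impulse train whose inter-pulse periods carry normally distributed jitter and compare its averaged spectrum with that of a perfectly periodic train. The sample rate, pulse rate and jitter level below are assumed for illustration.

```python
import numpy as np

fs, T, f_bpf = 20000.0, 2.0, 40.0      # sample rate [Hz], record length [s], nominal pulse rate [Hz]
n = int(fs * T)

def pulse_train(period_jitter_std, rng):
    """Impulse train with normally distributed jitter on the period between pulses."""
    x = np.zeros(n)
    t = 0.0
    while t < T:
        x[int(t * fs) % n] = 1.0
        t += (1.0 / f_bpf) * (1.0 + period_jitter_std * rng.normal())
    return x

def avg_spectrum(jitter, n_avg=50, seed=0):
    rng = np.random.default_rng(seed)
    spectra = [np.abs(np.fft.rfft(pulse_train(jitter, rng)))**2 for _ in range(n_avg)]
    return np.mean(spectra, axis=0)

freqs = np.fft.rfftfreq(n, d=1.0 / fs)
clean = avg_spectrum(0.0)
jittered = avg_spectrum(0.02)           # 2% standard deviation on the pulse period

# Harmonics stay sharp at low order but wash into broadband at high frequency
for f0 in [f_bpf, 10 * f_bpf, 50 * f_bpf]:
    k = np.argmin(np.abs(freqs - f0))
    print(f"{f0:5.0f} Hz: clean {clean[k]:.1f}  jittered {jittered[k]:.1f}")
```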
Monitoring biological diversity: strategies, tools, limitations, and challenges
Beever, E.A.
2006-01-01
Monitoring is an assessment of the spatial and temporal variability in one or more ecosystem properties, and is an essential component of adaptive management. Monitoring can help determine whether mandated environmental standards are being met and can provide an early-warning system of ecological change. Development of a strategy for monitoring biological diversity will likely be most successful when based upon clearly articulated goals and objectives and may be enhanced by including several key steps in the process. Ideally, monitoring of biological diversity will measure not only composition, but also structure and function at the spatial and temporal scales of interest. Although biodiversity monitoring has several key limitations as well as numerous theoretical and practical challenges, many tools and strategies are available to address or overcome such challenges; I summarize several of these. Due to the diversity of spatio-temporal scales and comprehensiveness encompassed by existing definitions of biological diversity, an effective monitoring design will reflect the desired sampling domain of interest and its key stressors, available funding, legal requirements, and organizational goals.
Continuous Variable Quantum Key Distribution Using Polarized Coherent States
NASA Astrophysics Data System (ADS)
Vidiella-Barranco, A.; Borelli, L. F. M.
We discuss a continuous-variable method of quantum key distribution employing strongly polarized coherent states of light. The key encoding is performed using the variables known as Stokes parameters, rather than the field quadratures. Their quantum counterparts, the Stokes operators Ŝi (i=1,2,3), constitute a set of non-commuting operators, with the precision of simultaneous measurements of a pair of them limited by an uncertainty-like relation. Alice transmits a conveniently modulated two-mode coherent state, and Bob randomly measures one of the Stokes parameters of the incoming beam. After performing reconciliation and privacy amplification procedures, it is possible to distill a secret common key. We also consider a non-ideal situation, in which coherent states with thermal noise, instead of pure coherent states, are used for encoding.
Continuous-variable quantum key distribution protocols over noisy channels.
García-Patrón, Raúl; Cerf, Nicolas J
2009-04-03
A continuous-variable quantum key distribution protocol based on squeezed states and heterodyne detection is introduced and shown to attain higher secret key rates over a noisy line than any other one-way Gaussian protocol. This increased resistance to channel noise can be understood as resulting from purposely adding noise to the signal that is converted into the secret key. This notion of noise-enhanced tolerance to noise also provides a better physical insight into the poorly understood discrepancies between the previously defined families of Gaussian protocols.
NASA Astrophysics Data System (ADS)
Whitehead, James Joshua
The analysis documented herein provides an integrated approach for the conduct of optimization under uncertainty (OUU) using Monte Carlo Simulation (MCS) techniques coupled with response surface-based methods for characterization of mixture-dependent variables. This novel methodology provides an innovative means of conducting optimization studies under uncertainty in propulsion system design. Analytic inputs are based upon empirical regression rate information obtained from design of experiments (DOE) mixture studies utilizing a mixed oxidizer hybrid rocket concept. Hybrid fuel regression rate was selected as the target response variable for optimization under uncertainty, with maximization of regression rate chosen as the driving objective. Characteristic operational conditions and propellant mixture compositions from experimental efforts conducted during previous foundational work were combined with elemental uncertainty estimates as input variables. Response surfaces for mixture-dependent variables and their associated uncertainty levels were developed using quadratic response equations incorporating single and two-factor interactions. These analysis inputs, response surface equations and associated uncertainty contributions were applied to a probabilistic MCS to develop dispersed regression rates as a function of operational and mixture input conditions within design space. Illustrative case scenarios were developed and assessed using this analytic approach including fully and partially constrained operational condition sets over all of design mixture space. In addition, optimization sets were performed across an operationally representative region in operational space and across all investigated mixture combinations. These scenarios were selected as representative examples relevant to propulsion system optimization, particularly for hybrid and solid rocket platforms. Ternary diagrams, including contour and surface plots, were developed and utilized to aid in visualization. The concept of Expanded-Durov diagrams was also adopted and adapted to this study to aid in visualization of uncertainty bounds. Regions of maximum regression rate and associated uncertainties were determined for each set of case scenarios. Application of response surface methodology coupled with probabilistic-based MCS allowed for flexible and comprehensive interrogation of mixture and operating design space during optimization cases. Analyses were also conducted to assess sensitivity of uncertainty to variations in key elemental uncertainty estimates. The methodology developed during this research provides an innovative optimization tool for future propulsion design efforts.
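A compact version of the workflow described above (a quadratic response surface for the regression rate, a second surface for its uncertainty, and Monte Carlo sampling to pick the mixture with the best dispersed performance) can be sketched as follows. The surfaces, coefficients and mixture variables are illustrative assumptions, not the experimental results.

```python
import numpy as np

rng = np.random.default_rng(4)

def regression_rate(x1, x2):
    """Assumed quadratic response surface for fuel regression rate [mm/s]
    in two normalized mixture fractions (single and two-factor interaction terms)."""
    return 1.2 + 0.8 * x1 + 0.5 * x2 - 0.6 * x1**2 - 0.4 * x2**2 + 0.3 * x1 * x2

def rate_uncertainty(x1, x2):
    """Assumed response surface for the standard deviation of the rate."""
    return 0.05 + 0.10 * x1 * x2

# Candidate mixtures on a grid of the two mixture variables
grid = np.linspace(0.0, 1.0, 21)
best = None
for x1 in grid:
    for x2 in grid:
        # Monte Carlo dispersion of the predicted rate at this mixture
        samples = rng.normal(regression_rate(x1, x2), rate_uncertainty(x1, x2), size=2000)
        score = np.percentile(samples, 5)     # optimize a conservative (5th percentile) rate
        if best is None or score > best[0]:
            best = (score, x1, x2, samples.mean(), samples.std())

score, x1, x2, mean, std = best
print(f"best mixture: x1={x1:.2f}, x2={x2:.2f}; mean rate {mean:.2f} ± {std:.2f} mm/s "
      f"(5th percentile {score:.2f})")
```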
Report of the Defense Science Board Task Force on Defense Biometrics
2007-03-01
certificates, crypto variables, and encoded biometric indices. The Department of Defense has invested prestige and resources in its Common Access Card (CAC...in turn, could be used to unlock an otherwise secret key or crypto variable which would support the remote authentication. A new key variable...The PSA for biometrics should commission development of appropriate threat model(s) and assign responsibility for maintaining currency of the model
Cogging Torque Minimization in Transverse Flux Machines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Husain, Tausif; Hasan, Iftekhar; Sozer, Yilmaz
2017-02-16
This paper presents the design considerations in cogging torque minimization in two types of transverse flux machines. The machines have a double stator-single rotor configuration with flux concentrating ferrite magnets. One of the machines has pole windings across each leg of an E-Core stator. Another machine has quasi-U-shaped stator cores and a ring winding. The flux in the stator back iron is transverse in both machines. Different methods of cogging torque minimization are investigated. Key methods of cogging torque minimization are identified and used as design variables for optimization using a design of experiments (DOE) based on the Taguchi method. A three-level DOE is performed to reach an optimum solution with minimum simulations. Finite element analysis is used to study the different effects. Two prototypes are being fabricated for experimental verification.
Benchmarking child and adolescent mental health organizations.
Brann, Peter; Walter, Garry; Coombs, Tim
2011-04-01
This paper describes aspects of the child and adolescent benchmarking forums that were part of the National Mental Health Benchmarking Project (NMHBP). These forums enabled participating child and adolescent mental health organizations to benchmark themselves against each other, with a view to understanding variability in performance against a range of key performance indicators (KPIs). Six child and adolescent mental health organizations took part in the NMHBP. Representatives from these organizations attended eight benchmarking forums at which they documented their performance against relevant KPIs. They also undertook two special projects designed to help them understand the variation in performance on given KPIs. There was considerable inter-organization variability on many of the KPIs. Even within organizations, there was often substantial variability over time. The variability in indicator data raised many questions for participants. This challenged participants to better understand and describe their local processes, prompted them to collect additional data, and stimulated them to make organizational comparisons. These activities fed into a process of reflection about their performance. Benchmarking has the potential to illuminate intra- and inter-organizational performance in the child and adolescent context.
NASA Technical Reports Server (NTRS)
Welch, Gerand E.
2010-01-01
The main rotors of the NASA Large Civil Tilt-Rotor notional vehicle operate over a wide speed range (100% at take-off to 54% at cruise). The variable-speed power turbine, when coupled to a fixed-gear-ratio transmission, offers one approach to accomplish this speed variation. The key aero-challenges of the variable-speed power turbine are related to high work factors at cruise, where the power turbine operates at 54% of take-off speed, wide incidence variations into the vane, blade, and exit-guide-vane rows associated with the power-turbine speed change, and the impact of low aft-stage Reynolds number (transitional flow) at 28 kft cruise. Meanline and 2-D Reynolds-Averaged Navier-Stokes analyses are used to characterize the variable-speed power-turbine aerodynamic challenges and to outline a conceptual design approach that accounts for multi-point operation. Identified technical challenges associated with the aerodynamics of high work factor, incidence-tolerant blading, and low Reynolds numbers pose research needs outlined in the paper.
Gu, Jianwei; Pitz, Mike; Breitner, Susanne; Birmili, Wolfram; von Klot, Stephanie; Schneider, Alexandra; Soentgen, Jens; Reller, Armin; Peters, Annette; Cyrys, Josef
2012-10-01
The success of epidemiological studies depends on the use of appropriate exposure variables. The purpose of this study is to extract a relatively small selection of variables characterizing ambient particulate matter from a large measurement data set. The original data set comprised a total of 96 particulate matter variables that have been continuously measured since 2004 at an urban background aerosol monitoring site in the city of Augsburg, Germany. Many of the original variables were derived from measured particle size distribution (PSD) across the particle diameter range 3 nm to 10 μm, including size-segregated particle number concentration, particle length concentration, particle surface concentration and particle mass concentration. The data set was complemented by integral aerosol variables. These variables were measured by independent instruments, including black carbon, sulfate, particle active surface concentration and particle length concentration. It is obvious that such a large number of measured variables cannot be used in health effect analyses simultaneously. The aim of this study is a pre-screening and a selection of the key variables that will be used as input in forthcoming epidemiological studies. In this study, we present two methods of parameter selection and apply them to data from a two-year period from 2007 to 2008. We used the agglomerative hierarchical cluster method to find groups of similar variables. In total, we selected 15 key variables from 9 clusters which are recommended for epidemiological analyses. We also applied a two-dimensional visualization technique called "heatmap" analysis to the Spearman correlation matrix. 12 key variables were selected using this method. Moreover, the positive matrix factorization (PMF) method was applied to the PSD data to characterize the possible particle sources. Correlations between the variables and PMF factors were used to interpret the meaning of the cluster and the heatmap analyses. Copyright © 2012 Elsevier B.V. All rights reserved.
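Agglomerative clustering of particle-metric variables on a correlation-based distance, followed by picking one representative per cluster, can be reproduced in a few lines. The sketch below uses synthetic time series and assumed variable names rather than the Augsburg data.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(5)
names = ["N_3-10nm", "N_10-100nm", "N_0.1-1um", "PM2.5", "BC", "SO4", "surface", "length"]

# Synthetic hourly data with two correlated groups of variables
base1, base2 = rng.normal(size=(2, 2000))
data = np.column_stack([base1 + 0.3 * rng.normal(size=2000) for _ in range(4)] +
                       [base2 + 0.3 * rng.normal(size=2000) for _ in range(4)])

# Distance = 1 - |correlation| between variables (Pearson used here for brevity)
corr = np.corrcoef(data, rowvar=False)
dist = squareform(1.0 - np.abs(corr), checks=False)
Z = linkage(dist, method="average")
clusters = fcluster(Z, t=2, criterion="maxclust")

for c in np.unique(clusters):
    members = [name for name, k in zip(names, clusters) if k == c]
    print(f"cluster {c}: {members} -> key variable: {members[0]}")
```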
Geiser, Christian; Keller, Brian T.; Lockhart, Ginger; Eid, Michael; Cole, David A.; Koch, Tobias
2014-01-01
Researchers analyzing longitudinal data often want to find out whether the process they study is characterized by (1) short-term state variability, (2) long-term trait change, or (3) a combination of state variability and trait change. Classical latent state-trait (LST) models are designed to measure reversible state variability around a fixed set-point or trait, whereas latent growth curve (LGC) models focus on long-lasting and often irreversible trait changes. In the present paper, we contrast LST and LGC models from the perspective of measurement invariance (MI) testing. We show that establishing a pure state-variability process requires (a) the inclusion of a mean structure and (b) establishing strong factorial invariance in LST analyses. Analytical derivations and simulations demonstrate that LST models with non-invariant parameters can mask the fact that a trait-change or hybrid process has generated the data. Furthermore, the inappropriate application of LST models to trait change or hybrid data can lead to bias in the estimates of consistency and occasion-specificity, which are typically of key interest in LST analyses. Four tips for the proper application of LST models are provided. PMID:24652650
Chen, Yantian; Bloemen, Veerle; Impens, Saartje; Moesen, Maarten; Luyten, Frank P; Schrooten, Jan
2011-12-01
Cell seeding into scaffolds plays a crucial role in the development of efficient bone tissue engineering constructs. Hence, it becomes imperative to identify the key factors that quantitatively predict reproducible and efficient seeding protocols. In this study, the optimization of a cell seeding process was investigated using design of experiments (DOE) statistical methods. Five seeding factors (cell type, scaffold type, seeding volume, seeding density, and seeding time) were selected and investigated by means of two response parameters, critically related to the cell seeding process: cell seeding efficiency (CSE) and cell-specific viability (CSV). In addition, cell spatial distribution (CSD) was analyzed by Live/Dead staining assays. Analysis identified a number of statistically significant main factor effects and interactions. Among the five seeding factors, only seeding volume and seeding time significantly affected CSE and CSV. Also, cell and scaffold type were involved in the interactions with other seeding factors. Within the investigated ranges, optimal conditions in terms of CSV and CSD were obtained when seeding cells in a regular scaffold with an excess of medium. The results of this case study contribute to a better understanding and definition of optimal process parameters for cell seeding. A DOE strategy can identify and optimize critical process variables to reduce the variability and assists in determining which variables should be carefully controlled during good manufacturing practice production to enable a clinically relevant implant.
Sanchez-Johnsen, Lisa; Craven, Meredith; Nava, Magdalena; Alonso, Angelica; Dykema-Engblade, Amanda; Rademaker, Alfred; Xie, Hui
2017-08-01
Overweight and obesity are associated with significant health problems and rates of obesity are high among Latino men. This paper describes the design, rationale and participant characteristics of the key demographic variables assessed in an NIH-funded study (R21-CA143636) addressing culture and several obesity-related variables (diet, physical activity, and body image) among Mexican and Puerto Rican men using a community-based participatory research framework. Participants completed objective measures (height, weight, body fat, hip, waist), a health and culture interview, a diet questionnaire, and used an accelerometer to measure their level of physical activity. A total of 203 participants completed the measures and the health and culture interview and 193 completed all study components. Puerto Ricans were older than Mexicans (p < .0001) and there were significant differences in marital status (p < .05), country of birth (p < .05), smoking (p < .05) and work status (p < .001). There were no significant differences in religion, education, health insurance, Body Mass Index, body fat, hip and waist measurements, and the language preference of the interview. Results have implications for the development of a future intervention that incorporates the role of cultural factors into a community participatory obesity intervention for Latino men.
NASA Astrophysics Data System (ADS)
Vanwalleghem, T.; Román, A.; Giraldez, J. V.
2016-12-01
There is a need for better understanding the processes influencing soil formation and the resulting distribution of soil properties. Soil properties can exhibit strong spatial variation, even at the small catchment scale. Especially soil carbon pools in semi-arid, mountainous areas are highly uncertain because bulk density and stoniness are very heterogeneous and rarely measured explicitly. In this study, we explore the spatial variability in key soil properties (soil carbon stocks, stoniness, bulk density and soil depth) as a function of processes shaping the critical zone (weathering, erosion, soil water fluxes and vegetation patterns). We also compare the potential of a geostatistical versus a mechanistic soil formation model (MILESD) for predicting these key soil properties. Soil core samples were collected from 67 locations at 6 depths. Total soil organic carbon stocks were 4.38 kg m-2. Solar radiation proved to be the key variable controlling soil carbon distribution. Stone content was mostly controlled by slope, indicating the importance of erosion. Spatial distribution of bulk density was found to be highly random. Finally, total carbon stocks were predicted using a random forest model whose main covariates were solar radiation and NDVI. The model predicts carbon stocks that are double as high on north versus south-facing slopes. However, validation showed that these covariates only explained 25% of the variation in the dataset. Apparently, present-day landscape and vegetation properties are not sufficient to fully explain variability in the soil carbon stocks in this complex terrain under natural vegetation. This is attributed to a high spatial variability in bulk density and stoniness, key variables controlling carbon stocks. Similar results were obtained with the mechanistic soil formation model MILESD, suggesting that more complex models might be needed to further explore this high spatial variability.
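The random-forest step relating carbon stocks to terrain and vegetation covariates can be sketched with scikit-learn; the covariate names, sample size and noise level below are assumptions chosen to mimic the situation of a weakly explained response.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
n = 67                                            # number of soil profiles (assumed)
covariates = ["solar_radiation", "NDVI", "slope", "elevation"]
X = rng.uniform(0, 1, size=(n, len(covariates)))

# Synthetic carbon stocks driven mostly by radiation and NDVI, plus strong noise
y = 4.0 - 2.0 * X[:, 0] + 1.5 * X[:, 1] + 1.2 * rng.normal(size=n)

rf = RandomForestRegressor(n_estimators=500, random_state=0)
r2 = cross_val_score(rf, X, y, cv=5, scoring="r2").mean()
rf.fit(X, y)

print(f"cross-validated R^2 ≈ {r2:.2f}")          # low R^2 expected with noisy, small samples
for name, imp in sorted(zip(covariates, rf.feature_importances_), key=lambda t: -t[1]):
    print(f"{name}: {imp:.2f}")
```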
The BACnet Campus Challenge - Part 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Masica, Ken; Tom, Steve
Here, the BACnet protocol was designed to achieve interoperability among building automation vendors and evolve over time to include new functionality as well as support new communication technologies such as the Ethernet and IP protocols as they became prevalent and economical in the market place. For large multi-building, multi-vendor campus environments, standardizing on the BACnet protocol as an implementation strategy can be a key component in meeting the challenge of an interoperable, flexible, and scalable building automation system. The interoperability of BACnet is especially important when large campuses with legacy equipment have DDC upgrades to facilities performed over different time frames and use different contractors that install equipment from different vendors under the guidance of different campus HVAC project managers. In these circumstances, BACnet can serve as a common foundation for interoperability when potential variability exists in approaches to the design-build process by numerous parties over time. Likewise, BACnet support for a range of networking protocols and technologies can be a key strategy for achieving flexible and scalable automation systems as campuses and enterprises expand networking infrastructures using standard interoperable protocols like IP and Ethernet.
Comparative study of air-conditioning energy use of four office buildings in China and USA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Xin; Yan, Da; An, Jingjing
Energy use in buildings has great variability. In order to design and operate low energy buildings as well as to establish building energy codes and standards and effective energy policy, it is crucial to understand and quantify key factors influencing building energy performance. Here, this study investigates air-conditioning (AC) energy use of four office buildings in four locations: Beijing, Taiwan, Hong Kong, and Berkeley. Building simulation was employed to quantify the influences of key factors, including climate, building envelope and occupant behavior. Through simulation of various combinations of the three influencing elements, it is found that climate can lead to AC cooling consumption differences by almost two times, while occupant behavior resulted in the greatest differences (of up to three times) in AC cooling consumption. The influence of occupant behavior on AC energy consumption is not homogeneous. Under similar climates, when the occupant behavior in the building differed, the optimized building envelope design also differed. In conclusion, the optimal building envelope should be determined according to the climate as well as the occupants who use the building.
Comparative study of air-conditioning energy use of four office buildings in China and USA
Zhou, Xin; Yan, Da; An, Jingjing; ...
2018-04-05
Energy use in buildings has great variability. In order to design and operate low energy buildings as well as to establish building energy codes and standards and effective energy policy, it is crucial to understand and quantify key factors influencing building energy performance. Here, this study investigates air-conditioning (AC) energy use of four office buildings in four locations: Beijing, Taiwan, Hong Kong, and Berkeley. Building simulation was employed to quantify the influences of key factors, including climate, building envelope and occupant behavior. Through simulation of various combinations of the three influencing elements, it is found that climate can lead to AC cooling consumption differences by almost two times, while occupant behavior resulted in the greatest differences (of up to three times) in AC cooling consumption. The influence of occupant behavior on AC energy consumption is not homogeneous. Under similar climates, when the occupant behavior in the building differed, the optimized building envelope design also differed. In conclusion, the optimal building envelope should be determined according to the climate as well as the occupants who use the building.
The BACnet Campus Challenge - Part 1
Masica, Ken; Tom, Steve
2015-12-01
Here, the BACnet protocol was designed to achieve interoperability among building automation vendors and evolve over time to include new functionality as well as support new communication technologies such as the Ethernet and IP protocols as they became prevalent and economical in the market place. For large multi-building, multi-vendor campus environments, standardizing on the BACnet protocol as an implementation strategy can be a key component in meeting the challenge of an interoperable, flexible, and scalable building automation system. The interoperability of BACnet is especially important when large campuses with legacy equipment have DDC upgrades to facilities performed over different time frames and use different contractors that install equipment from different vendors under the guidance of different campus HVAC project managers. In these circumstances, BACnet can serve as a common foundation for interoperability when potential variability exists in approaches to the design-build process by numerous parties over time. Likewise, BACnet support for a range of networking protocols and technologies can be a key strategy for achieving flexible and scalable automation systems as campuses and enterprises expand networking infrastructures using standard interoperable protocols like IP and Ethernet.
The impact of price policy on demand for alcohol in rural India.
Subramanian, Arjunan; Kumar, Parmod
2017-10-01
Whether raising the price of addictive goods can reduce its burden is widely debated in many countries, largely due to lack of appropriate data and robust methods. Three key concerns frequently raised in the literature are: unobserved heterogeneity; omitted variables; identification problem. Addressing these concerns, using robust instrument and employing unique individual-level panel data from Indian Punjab, this paper investigates two related propositions (i) will increase in alcohol price reduce its burden (ii) since greater incomes raise the costs of inebriation, will higher incomes affect consumption of alcohol negatively. Distinct from previous studies, the key variable of interest is the budget share of alcohol that allows studying the burden of alcohol consumption on drinker's and also on other family members. Results presented show that an increase in alcohol price is likely to be regressive, especially on the bottom quartile, with a rise in the budget share of alcohol given budget constraint. This outcome is robust to different econometric specifications. Preliminary explorations suggest that higher per capita income increases the odds of quitting drinking. Results reported have wider implications for the effective design of addiction related health policies. Copyright © 2017 Elsevier Ltd. All rights reserved.
Understanding local-scale drivers of biodiversity outcomes in terrestrial protected areas.
Barnes, Megan D; Craigie, Ian D; Dudley, Nigel; Hockings, Marc
2017-07-01
Conservation relies heavily on protected areas (PAs) maintaining their key biodiversity features to meet global biodiversity conservation goals. However, PAs have had variable success, with many failing to fully maintain their biodiversity features. The current literature concerning what drives variability in PA performance is rapidly expanding but unclear, sometimes contradictory, and spread across multiple disciplines. A clear understanding of the drivers of successful biodiversity conservation in PAs is necessary to make them fully effective. Here, we conduct a comprehensive assessment of the current state of knowledge concerning the drivers of biological outcomes within PAs, focusing on those that can be addressed at local scales. We evaluate evidence in support of potential drivers to identify those that enable more successful outcomes and those that impede success and provide a synthetic review. Interactions are discussed where they are known, and we highlight gaps in understanding. We find that elements of PA design, management, and local and national governance challenges, species and system ecology, and sociopolitical context can all influence outcomes. Adjusting PA management to focus on actions and policies that influence the key drivers identified here could improve global biodiversity outcomes. © 2016 New York Academy of Sciences.
Flight Mechanics of the Entry, Descent and Landing of the ExoMars Mission
NASA Technical Reports Server (NTRS)
HayaRamos, Rodrigo; Boneti, Davide
2007-01-01
ExoMars is ESA's current mission to planet Mars. A high-mobility rover and a fixed station will be deployed on the surface of Mars. This paper addresses the flight mechanics of the Entry, Descent and Landing (EDL) phases used for the mission analysis and design of the Baseline and back-up scenarios of the mission. The EDL concept is based on a ballistic entry, followed by a descent under parachutes and inflatable devices (airbags) for landing. The mission analysis and design is driven by the flexibility in terms of landing site, arrival dates and the very stringent requirement in terms of landing accuracy. The challenging requirements currently imposed on the mission call for innovative analysis and design techniques to support system design trade-offs and to cope with the variability in entry conditions. The concept of the Global Entry Corridor has been conceived, designed, implemented and successfully validated as a key tool to provide a global picture of the mission capabilities in terms of landing site reachability.
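Ballistic-entry trajectories of the kind that underlie such a corridor analysis can be propagated from the standard planar point-mass entry equations. The vehicle, atmosphere and entry-state values in the sketch below are rough Mars-like assumptions, not ExoMars design numbers.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed Mars-like constants and vehicle parameters (illustrative only)
g0, R_mars = 3.71, 3389.5e3            # surface gravity [m/s^2], planet radius [m]
rho0, H = 0.020, 11100.0               # surface density [kg/m^3], scale height [m]
m, S, Cd = 600.0, 5.0, 1.5             # entry mass, reference area, drag coefficient

def entry_eom(t, y):
    """Planar, non-lifting point-mass entry dynamics: state = (V, gamma, h)."""
    v, gamma, h = y
    r = R_mars + h
    g = g0 * (R_mars / r) ** 2
    rho = rho0 * np.exp(-h / H)
    drag = 0.5 * rho * v**2 * S * Cd
    dv = -drag / m - g * np.sin(gamma)
    dgamma = (v / r - g / v) * np.cos(gamma)
    dh = v * np.sin(gamma)
    return [dv, dgamma, dh]

# Entry interface state (assumed): 5.8 km/s, -14 deg flight path angle, 120 km altitude
y0 = [5800.0, np.radians(-14.0), 120e3]
stop = lambda t, y: y[2] - 10e3                    # stop near a parachute-deployment altitude
stop.terminal = True
sol = solve_ivp(entry_eom, (0.0, 600.0), y0, events=stop, max_step=0.5)

v_end, fpa_end, h_end = sol.y[:, -1]
print(f"at h = {h_end/1e3:.1f} km: V = {v_end:.0f} m/s, FPA = {np.degrees(fpa_end):.1f} deg "
      f"after {sol.t[-1]:.0f} s")
```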
NIMBUS: A Near-Infrared Multi-Band Ultraprecise Spectroimager for SOFIA
NASA Technical Reports Server (NTRS)
McElwain, Michael W.; Mandell, Avi; Woodgate, Bruce E.; Spiegel, David S.; Madhusudhan, Nikku; Amatucci, Edward; Blake, Cullen; Budinoff, Jason; Burgasser, Adam; Burrows, Adam;
2012-01-01
We present a new and innovative near-infrared multi-band ultraprecise spectroimager (NIMBUS) for SOFIA. This instrument will enable many exciting observations in the new age of precision astronomy. This optical design splits the beam into 8 separate spectral bandpasses, centered around key molecular bands from 1 to 4 microns. Each spectral channel has a wide field of view for simultaneous observations of a reference star that can decorrelate time-variable atmospheric and optical assembly effects, allowing the instrument to achieve ultraprecise photometry for a wide variety of astrophysical sources.
McDonald, Suzanne; Quinn, Francis; Vieira, Rute; O'Brien, Nicola; White, Martin; Johnston, Derek W; Sniehotta, Falko F
2017-12-01
n-of-1 studies test hypotheses within individuals based on repeated measurement of variables within the individual over time. Intra-individual effects may differ from those found in between-participant studies. Using examples from a systematic review of n-of-1 studies in health behaviour research, this article provides a state of the art overview of the use of n-of-1 methods, organised according to key methodological considerations related to n-of-1 design and analysis, and describes future challenges and opportunities. A comprehensive search strategy (PROSPERO:CRD42014007258) was used to identify articles published between 2000 and 2016, reporting observational or interventional n-of-1 studies with health behaviour outcomes. Thirty-nine articles were identified which reported on n-of-1 observational designs and a range of n-of-1 interventional designs, including AB, ABA, ABABA, alternating treatments, n-of-1 randomised controlled trial, multiple baseline and changing criterion designs. Behaviours measured included treatment adherence, physical activity, drug/alcohol use, sleep, smoking and eating behaviour. Descriptive, visual or statistical analyses were used. We identify scope and opportunities for using n-of-1 methods to answer key questions in health behaviour research. n-of-1 methods provide the tools needed to help advance theoretical knowledge and personalise/tailor health behaviour interventions to individuals.
Wide range operation of advanced low NOx aircraft gas turbine combustors
NASA Technical Reports Server (NTRS)
Roberts, P. B.; Fiorito, R. J.; Butze, H. F.
1978-01-01
The paper summarizes the results of an experimental test rig program designed to define and demonstrate techniques that would allow the jet-induced circulation and vortex air blast combustors to operate stably with acceptable emissions at simulated engine idle without compromising the low NOx emissions under the high-altitude supersonic cruise condition. The discussion focuses on the test results of the key combustor modifications for both the simulated engine idle and cruise conditions. Several range-augmentation techniques are demonstrated that allow the lean-reaction premixed aircraft gas turbine combustor to operate with low NOx emissions at engine cruise and acceptable CO and UHC levels at engine idle. These techniques involve several combinations, including variable geometry and fuel switching designs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Na; Wu, Yu-Ping; Min, Hao
A radio-frequency (RF) source designed for cold atom experiments is presented. The source uses AD9858, a direct digital synthesizer, to generate the sine wave directly, up to 400 MHz, with sub-Hz resolution. An amplitude control circuit consisting of a wideband variable gain amplifier and a high-speed digital-to-analog converter is integrated into the source, capable of 70 dB off isolation and 4 ns on-off keying. A field programmable gate array is used to implement a versatile frequency and amplitude co-sweep logic. Owing to the modular design, the RF sources have been used in many cold atom experiments to generate various complicated RF sequences, enriching the operation schemes of cold atoms, which cannot be done by standard RF source instruments.
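For context on how such a direct digital synthesizer is driven, the sketch below computes a frequency tuning word for a target output frequency. It assumes the AD9858's 32-bit phase accumulator and a nominal 1 GHz system clock; the clock value, function names and the 80 MHz example are illustrative assumptions, not details taken from the abstract above.

```python
# Minimal sketch: computing a DDS frequency tuning word (FTW).
# Assumes a 32-bit phase accumulator (as on the AD9858) and a nominal
# 1 GHz system clock; both values are assumptions for illustration only.

SYSCLK_HZ = 1_000_000_000  # assumed system clock
ACC_BITS = 32              # width of the phase accumulator

def frequency_tuning_word(f_out_hz: float) -> int:
    """Return the integer tuning word that yields approximately f_out_hz."""
    return round(f_out_hz * (1 << ACC_BITS) / SYSCLK_HZ)

def actual_frequency(ftw: int) -> float:
    """Frequency actually synthesized for a given tuning word."""
    return ftw * SYSCLK_HZ / (1 << ACC_BITS)

if __name__ == "__main__":
    ftw = frequency_tuning_word(80e6)      # e.g. an 80 MHz drive tone
    print(ftw, actual_frequency(ftw))      # sub-Hz frequency resolution
```

The sub-Hz resolution quoted in the abstract follows directly from the accumulator width: one least-significant bit of the tuning word corresponds to SYSCLK/2^32, a fraction of a hertz for clocks in the hundreds of MHz to GHz range.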
Stronger steerability criterion for more uncertain continuous-variable systems
NASA Astrophysics Data System (ADS)
Chowdhury, Priyanka; Pramanik, Tanumoy; Majumdar, A. S.
2015-10-01
We derive a fine-grained uncertainty relation for the measurement of two incompatible observables on a single quantum system of continuous variables, and show that continuous-variable systems are more uncertain than discrete-variable systems. Using the derived fine-grained uncertainty relation, we formulate a stronger steering criterion that is able to reveal the steerability of NOON states that has hitherto not been possible using other criteria. We further obtain a monogamy relation for our steering inequality which leads to an, in principle, improved lower bound on the secret key rate of a one-sided device independent quantum key distribution protocol for continuous variables.
Access to health care and community social capital.
Hendryx, Michael S; Ahern, Melissa M; Lovrich, Nicholas P; McCurdy, Arthur H
2002-02-01
To test the hypothesis that variation in reported access to health care is positively related to the level of social capital present in a community. The 1996 Household Survey of the Community Tracking Study, drawn from 22 metropolitan statistical areas across the United States (n = 19,672). Additional data for the 22 communities are from a 1996 multicity broadcast media marketing database, including key social capital indicators, the 1997 National Profile of Local Health Departments survey, and Interstudy, American Hospital Association, and American Medical Association sources. The design is cross-sectional. Self-reported access to care problems is the dependent variable. Independent variables include individual sociodemographic variables, community-level health sector variables, and social capital variables. Data are merged from the various sources and weighted to be population representative and are analyzed using hierarchical categorical modeling. Persons who live in metropolitan statistical areas featuring higher levels of social capital report fewer problems accessing health care. A higher HMO penetration rate in a metropolitan statistical area was also associated with fewer access problems. Other health sector variables were not related to health care access. The results observed for 22 major U.S. cities are consistent with the hypothesis that community social capital enables better access to care, perhaps through improving community accountability mechanisms.
Automated evaluation of AIMS images: an approach to minimize evaluation variability
NASA Astrophysics Data System (ADS)
Dürr, Arndt C.; Arndt, Martin; Fiebig, Jan; Weiss, Samuel
2006-05-01
Defect disposition and qualification with stepper-simulating AIMS tools on advanced masks of the 90nm node and below is key to matching the customer's expectations for "defect free" masks, i.e. masks containing only non-printing design variations. The recently available AIMS tools allow for a large degree of automated measurements, enhancing the throughput of masks and hence reducing cycle time - up to 50 images can be recorded per hour. However, this amount of data still has to be evaluated by hand, which is not only time-consuming but also error prone and exhibits a variability depending on the person doing the evaluation, which adds to the tool-intrinsic variability and decreases the reliability of the evaluation. In this paper we present the results of a MATLAB-based algorithm which automatically evaluates AIMS images. We investigate its capabilities regarding throughput, reliability and matching with handmade evaluation for a large variety of dark and clear defects and discuss the limitations of an automated AIMS evaluation algorithm.
NASA Astrophysics Data System (ADS)
Wu, Xiao Dong; Chen, Feng; Wu, Xiang Hua; Guo, Ying
2017-02-01
Continuous-variable quantum key distribution (CVQKD) can provide higher detection efficiency than discrete-variable quantum key distribution (DVQKD). In this paper, we demonstrate a controllable CVQKD with the entangled source in the middle, in contrast to the traditional point-to-point CVQKD where the entanglement source is usually created by one honest party and the Gaussian noise added on the reference partner of the reconciliation is uncontrollable. In order to harmonize the additive noise that originates in the middle and to resist the effect of a malicious eavesdropper, we propose a controllable CVQKD protocol by performing a tunable linear optics cloning machine (LOCM) at one participant's side, say Alice. Simulation results show that we can achieve the optimal secret key rates by selecting the parameters of the tuned LOCM in the derived regions.
Quantum hacking of two-way continuous-variable quantum key distribution using Trojan-horse attack
NASA Astrophysics Data System (ADS)
Ma, Hong-Xin; Bao, Wan-Su; Li, Hong-Wei; Chou, Chun
2016-08-01
We present a Trojan-horse attack on the practical two-way continuous-variable quantum key distribution system. Our attack mainly exploits an imperfection of the practical system, namely that the modulator has a redundancy of modulation pulse-width, which leaves a loophole for the eavesdropper to insert a Trojan-horse pulse. Utilizing the unique characteristic of two-way continuous-variable quantum key distribution that Alice only performs a modulation operation on the received mode without any measurement, this attack allows the eavesdropper to render all of the final keys shared between the legitimate parties insecure without being detected. After analyzing the feasibility of the attack, the corresponding countermeasures are put forward. Project supported by the National Basic Research Program of China (Grant No. 2013CB338002) and the National Natural Science Foundation of China (Grant Nos. 11304397 and 61505261).
Framework for Multidisciplinary Analysis, Design, and Optimization with High-Fidelity Analysis Tools
NASA Technical Reports Server (NTRS)
Orr, Stanley A.; Narducci, Robert P.
2009-01-01
A plan is presented for the development of a high fidelity multidisciplinary optimization process for rotorcraft. The plan formulates individual disciplinary design problems, identifies practical high-fidelity tools and processes that can be incorporated in an automated optimization environment, and establishes statements of the multidisciplinary design problem including objectives, constraints, design variables, and cross-disciplinary dependencies. Five key disciplinary areas are selected in the development plan. These are rotor aerodynamics, rotor structures and dynamics, fuselage aerodynamics, fuselage structures, and propulsion / drive system. Flying qualities and noise are included as ancillary areas. Consistency across engineering disciplines is maintained with a central geometry engine that supports all multidisciplinary analysis. The multidisciplinary optimization process targets the preliminary design cycle where gross elements of the helicopter have been defined. These might include number of rotors and rotor configuration (tandem, coaxial, etc.). It is at this stage that sufficient configuration information is defined to perform high-fidelity analysis. At the same time there is enough design freedom to influence a design. The rotorcraft multidisciplinary optimization tool is built and substantiated throughout its development cycle in a staged approach by incorporating disciplines sequentially.
Interresponse Time Structures in Variable-Ratio and Variable-Interval Schedules
ERIC Educational Resources Information Center
Bowers, Matthew T.; Hill, Jade; Palya, William L.
2008-01-01
The interresponse-time structures of pigeon key pecking were examined under variable-ratio, variable-interval, and variable-interval plus linear feedback schedules. Whereas the variable-ratio and variable-interval plus linear feedback schedules generally resulted in a distinct group of short interresponse times and a broad distribution of longer…
NASA Technical Reports Server (NTRS)
Cruse, T. A.
1987-01-01
The objective is the development of several modular structural analysis packages capable of predicting the probabilistic response distribution for key structural variables such as maximum stress, natural frequencies, transient response, etc. The structural analysis packages are to include stochastic modeling of loads, material properties, geometry (tolerances), and boundary conditions. The solution is to be in terms of the cumulative probability of exceedance distribution (CDF) and confidence bounds. Two methods of probability modeling are to be included as well as three types of structural models - probabilistic finite-element method (PFEM); probabilistic approximate analysis methods (PAAM); and probabilistic boundary element methods (PBEM). The purpose in doing probabilistic structural analysis is to provide the designer with a more realistic ability to assess the importance of uncertainty in the response of a high performance structure. Probabilistic Structural Analysis Method (PSAM) tools will estimate structural safety and reliability, while providing the engineer with information on the confidence that should be given to the predicted behavior. Perhaps most critically, the PSAM results will directly provide information on the sensitivity of the design response to those variables which are seen to be uncertain.
NASA Technical Reports Server (NTRS)
Cruse, T. A.; Burnside, O. H.; Wu, Y.-T.; Polch, E. Z.; Dias, J. B.
1988-01-01
The objective is the development of several modular structural analysis packages capable of predicting the probabilistic response distribution for key structural variables such as maximum stress, natural frequencies, transient response, etc. The structural analysis packages are to include stochastic modeling of loads, material properties, geometry (tolerances), and boundary conditions. The solution is to be in terms of the cumulative probability of exceedance distribution (CDF) and confidence bounds. Two methods of probability modeling are to be included as well as three types of structural models - probabilistic finite-element method (PFEM); probabilistic approximate analysis methods (PAAM); and probabilistic boundary element methods (PBEM). The purpose in doing probabilistic structural analysis is to provide the designer with a more realistic ability to assess the importance of uncertainty in the response of a high performance structure. Probabilistic Structural Analysis Method (PSAM) tools will estimate structural safety and reliability, while providing the engineer with information on the confidence that should be given to the predicted behavior. Perhaps most critically, the PSAM results will directly provide information on the sensitivity of the design response to those variables which are seen to be uncertain.
Discrete and continuous variables for measurement-device-independent quantum cryptography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Feihu; Curty, Marcos; Qi, Bing
In a recent Article in Nature Photonics, Pirandola et al.1 claim that the achievable secret key rates of discrete-variable (DV) measurement-device-independent (MDI) quantum key distribution (QKD) (refs 2,3) are “typically very low, unsuitable for the demands of a metropolitan network” and introduce a continuous-variable (CV) MDI QKD protocol capable of providing key rates which, they claim, are “three orders of magnitude higher” than those of DV MDI QKD. We believe, however, that the claims regarding low key rates of DV MDI QKD made by Pirandola et al.1 are too pessimistic. Here in this paper, we show that the secret key rate of DV MDI QKD with commercially available high-efficiency single-photon detectors (SPDs) (for example, see http://www.photonspot.com/detectors and http://www.singlequantum.com) and good system alignment is typically rather high and thus highly suitable for not only long-distance communication but also metropolitan networks.
Discrete and continuous variables for measurement-device-independent quantum cryptography
Xu, Feihu; Curty, Marcos; Qi, Bing; ...
2015-11-16
In a recent Article in Nature Photonics, Pirandola et al.1 claim that the achievable secret key rates of discrete-variable (DV) measurement-device-independent (MDI) quantum key distribution (QKD) (refs 2,3) are “typically very low, unsuitable for the demands of a metropolitan network” and introduce a continuous-variable (CV) MDI QKD protocol capable of providing key rates which, they claim, are “three orders of magnitude higher” than those of DV MDI QKD. We believe, however, that the claims regarding low key rates of DV MDI QKD made by Pirandola et al.1 are too pessimistic. Here in this paper, we show that the secret key rate of DV MDI QKD with commercially available high-efficiency single-photon detectors (SPDs) (for example, see http://www.photonspot.com/detectors and http://www.singlequantum.com) and good system alignment is typically rather high and thus highly suitable for not only long-distance communication but also metropolitan networks.
Infrared Space Observatory (ISO) Key Project: the Birth and Death of Planets
NASA Technical Reports Server (NTRS)
Stencel, Robert E.; Creech-Eakman, Michelle; Fajardo-Acosta, Sergio; Backman, Dana
1999-01-01
This program was designed to continue to analyze observations of stars thought to be forming protoplanets, using the European Space Agency's Infrared Space Observatory, ISO, as one of NASA Key Projects with ISO. A particular class of Infrared Astronomy Satellite (IRAS) discovered stars, known after the prototype, Vega, are principal targets for these observations aimed at examining the evidence for processes involved in forming, or failing to form, planetary systems around other stars. In addition, this program continued to provide partial support for related science in the WIRE, SOFIA and Space Infrared Telescope Facility (SIRTF) projects, plus approved ISO supplementary time observations under programs MCREE1 29 and VEGADMAP. Their goals include time dependent changes in SWS spectra of Long Period Variable stars and PHOT P32 mapping experiments of recognized protoplanetary disk candidate stars.
Furrer, F; Franz, T; Berta, M; Leverrier, A; Scholz, V B; Tomamichel, M; Werner, R F
2012-09-07
We provide a security analysis for continuous variable quantum key distribution protocols based on the transmission of two-mode squeezed vacuum states measured via homodyne detection. We employ a version of the entropic uncertainty relation for smooth entropies to give a lower bound on the number of secret bits which can be extracted from a finite number of runs of the protocol. This bound is valid under general coherent attacks, and gives rise to keys which are composably secure. For comparison, we also give a lower bound valid under the assumption of collective attacks. For both scenarios, we find positive key rates using experimental parameters reachable today.
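For orientation only, the general structure of such a finite-key bound is sketched below in its standard form: a leftover-hashing estimate of the extractable key length driven by a smooth min-entropy, which is in turn lower-bounded through an entropic uncertainty relation for the two homodyne quadratures. The symbols and the complementarity constant q are generic placeholders; this is not the exact expression derived in the paper above.

```latex
% Schematic form only; not the paper's exact finite-key bound.
\ell \;\gtrsim\; H_{\min}^{\varepsilon}\!\left(X^{n}\mid E\right)
      \;-\; \mathrm{leak}_{\mathrm{EC}}
      \;-\; O\!\left(\log\tfrac{1}{\varepsilon}\right),
\qquad
H_{\min}^{\varepsilon}\!\left(X^{n}\mid E\right)
  \;+\; H_{\max}^{\varepsilon}\!\left(P^{n}\mid B\right) \;\geq\; n\,q ,
```

where ℓ is the extractable secure key length, leak_EC the information disclosed during error correction, and q a complementarity constant set by the pair of quadrature measurements.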
Phillips, K A; Morrison, K R; Andersen, R; Aday, L A
1998-01-01
OBJECTIVE: The behavioral model of utilization, developed by Andersen, Aday, and others, is one of the most frequently used frameworks for analyzing the factors that are associated with patient utilization of healthcare services. However, the use of the model for examining the context within which utilization occurs-the role of the environment and provider-related factors-has been largely neglected. OBJECTIVE: To conduct a systematic review and analysis to determine if studies of medical care utilization that have used the behavioral model during the last 20 years have included environmental and provider-related variables and the methods used to analyze these variables. We discuss barriers to the use of these contextual variables and potential solutions. DATA SOURCES: The Social Science Citation Index and Science Citation Index. We included all articles from 1975-1995 that cited any of three key articles on the behavioral model, that included all articles that were empirical analyses and studies of formal medical care utilization, and articles that specifically stated their use of the behavioral model (n = 139). STUDY DESIGN: Design was a systematic literature review. DATA ANALYSIS: We used a structured review process to code articles on whether they included contextual variables: (1) environmental variables (characteristics of the healthcare delivery system, external environment, and community-level enabling factors); and (2) provider-related variables (patient factors that may be influenced by providers and provider characteristics that interact with patient characteristics to influence utilization). We also examined the methods used in studies that included contextual variables. PRINCIPAL FINDINGS: Forty-five percent of the studies included environmental variables and 51 percent included provider-related variables. Few studies examined specific measures of the healthcare system or provider characteristics or used methods other than simple regression analysis with hierarchical entry of variables. Only 14 percent of studies analyzed the context of healthcare by including both environmental and provider-related variables as well as using relevant methods. CONCLUSIONS: By assessing whether and how contextual variables are used, we are able to highlight the contributions made by studies using these approaches, to identify variables and methods that have been relatively underused, and to suggest solutions to barriers in using contextual variables. PMID:9685123
Case studies of conservation plans that incorporate geodiversity.
Anderson, M G; Comer, P J; Beier, P; Lawler, J J; Schloss, C A; Buttrick, S; Albano, C M; Faith, D P
2015-06-01
Geodiversity has been used as a surrogate for biodiversity when species locations are unknown, and this utility can be extended to situations where species locations are in flux. Recently, scientists have designed conservation networks that aim to explicitly represent the range of geophysical environments, identifying a network of physical stages that could sustain biodiversity while allowing for change in species composition in response to climate change. Because there is no standard approach to designing such networks, we compiled 8 case studies illustrating a variety of ways scientists have approached the challenge. These studies show how geodiversity has been partitioned and used to develop site portfolios and connectivity designs; how geodiversity-based portfolios compare with those derived from species and communities; and how the selection and combination of variables influences the results. Collectively, they suggest 4 key steps when using geodiversity to augment traditional biodiversity-based conservation planning: create land units from species-relevant variables combined in an ecologically meaningful way; represent land units in a logical spatial configuration and integrate with species locations when possible; apply selection criteria to individual sites to ensure they are appropriate for conservation; and develop connectivity among sites to maintain movements and processes. With these considerations, conservationists can design more effective site portfolios to ensure the lasting conservation of biodiversity under a changing climate. © 2015 Society for Conservation Biology.
The Intertwining of Enterprise Strategy and Requirements
NASA Astrophysics Data System (ADS)
Loucopoulos, Pericles; Garfield, Joy
Requirements Engineering techniques need to focus not only on the target technical system, as has traditionally been the case, but also on the interplay between business and system functionality. Whether a business wishes to exploit advances in technology to achieve new strategic objectives or to organise work in innovative ways, the process of Requirements Engineering could and should present opportunities for modelling and evaluating the potential impact that technology can bring about to the enterprise. This chapter discusses a co-designing process that offers opportunities of change to both the business and its underlying technical systems, in a synergistic manner. In these design situations some of the most challenging projects involve multiple stakeholders from different participating organisations, subcontractors, divisions, etc. who may have a diversity of expertise, come from different organisational cultures and often have competing goals. Stakeholders are faced with many different alternative future ‘worlds’, each one demanding a possibly different development strategy. There are acute questions about the potential structure of the new business system and how key variables in this structure could impact on the dynamics of the system. This chapter presents a framework which enables the evaluation of requirements through (a) system dynamics modelling, (b) ontology modelling, (c) scenario modelling and (d) rationale modelling. System dynamics modelling is used to define the behaviour of an enterprise system in terms of four perspectives. Ontology modelling is used to formally define invariant components of the physical and social world within the enterprise domain. Scenario modelling is used to identify critical variables and, by quantitatively analysing the effects of these variables through simulation, to better understand the dynamic behaviour of the possible future structures. Rationale modelling is used to assist collaborative discussions when considering either ontology models or scenarios for change, developing maps which chart the assumptions and reasoning behind key decisions during the requirements process.
Imsirovic, Jasmin; Derricks, Kelsey; Buczek-Thomas, Jo Ann; Rich, Celeste B; Nugent, Matthew A; Suki, Béla
2013-01-01
A broad range of cells are subjected to irregular time varying mechanical stimuli within the body, particularly in the respiratory and circulatory systems. Mechanical stretch is an important factor in determining cell function; however, the effects of variable stretch remain unexplored. In order to investigate the effects of variable stretch, we designed, built and tested a uniaxial stretching device that can stretch three-dimensional tissue constructs while varying the strain amplitude from cycle to cycle. The device is the first to apply variable stretching signals to cells in tissues or three dimensional tissue constructs. Following device validation, we applied 20% uniaxial strain to Gelfoam samples seeded with neonatal rat lung fibroblasts with different levels of variability (0%, 25%, 50% and 75%). RT-PCR was then performed to measure the effects of variable stretch on key molecules involved in cell-matrix interactions including: collagen 1α, lysyl oxidase, α-actin, β1 integrin, β3 integrin, syndecan-4, and vascular endothelial growth factor-A. Adding variability to the stretching signal upregulated, downregulated or had no effect on mRNA production depending on the molecule and the amount of variability. In particular, syndecan-4 showed a statistically significant peak at 25% variability, suggesting that an optimal variability of strain may exist for production of this molecule. We conclude that cycle-by-cycle variability in strain influences the expression of molecules related to cell-matrix interactions and hence may be used to selectively tune the composition of tissue constructs.
Physically Active Adults: An Analysis of the Key Variables That Keep Them Moving
ERIC Educational Resources Information Center
Downs, Andrew
2016-01-01
Background: A large proportion of adults are insufficiently physically active, and researchers have yet to determine the factors that enable individuals to maintain adequate levels of physical activity throughout adulthood. Purpose: This study sought to identify the key variables linked with consistent physical activity in adulthood as elucidated…
Education and Success: A Case Study of the Thai Public Service.
ERIC Educational Resources Information Center
Fry, Gerald W.
1980-01-01
Studied is the bureaucracy in Thailand, and access to and promotion within the system--or the "degree of openness" in the Thai public service. The key dependent variable is occupational attainment. Some key intervening variables include educational attainment, total job experience, sex, and regional remoteness of early schooling. (KC)
Computational methods for efficient structural reliability and reliability sensitivity analysis
NASA Technical Reports Server (NTRS)
Wu, Y.-T.
1993-01-01
This paper presents recent developments in efficient structural reliability analysis methods. The paper proposes an efficient, adaptive importance sampling (AIS) method that can be used to compute reliability and reliability sensitivities. The AIS approach uses a sampling density that is proportional to the joint PDF of the random variables. Starting from an initial approximate failure domain, sampling proceeds adaptively and incrementally with the goal of reaching a sampling domain that is slightly greater than the failure domain to minimize over-sampling in the safe region. Several reliability sensitivity coefficients are proposed that can be computed directly and easily from the above AIS-based failure points. These probability sensitivities can be used for identifying key random variables and for adjusting design to achieve reliability-based objectives. The proposed AIS methodology is demonstrated using a turbine blade reliability analysis problem.
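As a minimal illustration of the importance-sampling idea underlying the AIS method, the sketch below estimates a small failure probability by sampling from a density shifted toward the failure region and reweighting by the ratio of the original to the sampling density. The limit-state function, the fixed (non-adaptive) sampling density and the sample size are illustrative assumptions rather than the paper's implementation.

```python
# Minimal sketch of importance sampling for a failure probability.
# The limit-state function g(x) and the shifted sampling density are
# illustrative assumptions; the paper's AIS scheme adapts the sampling
# domain incrementally, which is not reproduced here.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def g(x):
    """Limit state: failure when g(x) <= 0 (illustrative)."""
    return 6.0 - x[:, 0] - x[:, 1]

# Original random variables: two independent standard normals.
f = stats.multivariate_normal(mean=[0.0, 0.0])
# Sampling density shifted toward the (approximate) failure region.
h = stats.multivariate_normal(mean=[3.0, 3.0])

n = 20_000
x = h.rvs(size=n, random_state=rng)
w = f.pdf(x) / h.pdf(x)                  # importance weights
indicator = (g(x) <= 0.0).astype(float)  # 1 inside the failure domain

pf = np.mean(indicator * w)              # failure probability estimate
print(f"estimated Pf = {pf:.3e}")
```

Sensitivities with respect to distribution parameters can then be formed from the same weighted failure samples, which is the attraction of sampling-based methods for reliability-based design.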
Integration of nanoscale memristor synapses in neuromorphic computing architectures
NASA Astrophysics Data System (ADS)
Indiveri, Giacomo; Linares-Barranco, Bernabé; Legenstein, Robert; Deligeorgis, George; Prodromakis, Themistoklis
2013-09-01
Conventional neuro-computing architectures and artificial neural networks have often been developed with no or loose connections to neuroscience. As a consequence, they have largely ignored key features of biological neural processing systems, such as their extremely low-power consumption features or their ability to carry out robust and efficient computation using massively parallel arrays of limited precision, highly variable, and unreliable components. Recent developments in nano-technologies are making available extremely compact and low power, but also variable and unreliable solid-state devices that can potentially extend the offerings of available CMOS technologies. In particular, memristors are regarded as a promising solution for modeling key features of biological synapses due to their nanoscale dimensions, their capacity to store multiple bits of information per element and the low energy required to write distinct states. In this paper, we first review the neuro- and neuromorphic computing approaches that can best exploit the properties of memristive nanoscale devices, and then propose a novel hybrid memristor-CMOS neuromorphic circuit which represents a radical departure from conventional neuro-computing approaches, as it uses memristors to directly emulate the biophysics and temporal dynamics of real synapses. We point out the differences between the use of memristors in conventional neuro-computing architectures and the hybrid memristor-CMOS circuit proposed, and argue how this circuit represents an ideal building block for implementing brain-inspired probabilistic computing paradigms that are robust to variability and fault tolerant by design.
A 24 km fiber-based discretely signaled continuous variable quantum key distribution system.
Dinh Xuan, Quyen; Zhang, Zheshen; Voss, Paul L
2009-12-21
We report a continuous variable key distribution system that achieves a final secure key rate of 3.45 kilobits/s over a distance of 24.2 km of optical fiber. The protocol uses discrete signaling and post-selection to improve reconciliation speed and quantifies security by means of quantum state tomography. Polarization multiplexing and a frequency translation scheme permit transmission of a continuous wave local oscillator and suppression of noise from guided acoustic wave Brillouin scattering by more than 27 dB.
Coherent attacking continuous-variable quantum key distribution with entanglement in the middle
NASA Astrophysics Data System (ADS)
Zhang, Zhaoyuan; Shi, Ronghua; Zeng, Guihua; Guo, Ying
2018-06-01
We suggest an approach to the coherent attack on continuous-variable quantum key distribution (CVQKD) with an untrusted entangled source in the middle. The coherent attack strategy can be performed on the double links of the quantum system, enabling the eavesdropper to steal more information from the proposed scheme using the entanglement correlation. Numerical simulation results show the improved performance of the attacked CVQKD system in terms of the derived secret key rate with the controllable parameters maximizing the stolen information.
25 MHz clock continuous-variable quantum key distribution system over 50 km fiber channel
Wang, Chao; Huang, Duan; Huang, Peng; Lin, Dakai; Peng, Jinye; Zeng, Guihua
2015-01-01
In this paper, a practical continuous-variable quantum key distribution system is developed and it runs in the real-world conditions with 25 MHz clock rate. To reach high-rate, we have employed a homodyne detector with maximal bandwidth to 300 MHz and an optimal high-efficiency error reconciliation algorithm with processing speed up to 25 Mbps. To optimize the stability of the system, several key techniques are developed, which include a novel phase compensation algorithm, a polarization feedback algorithm, and related stability method on the modulators. Practically, our system is tested for more than 12 hours with a final secret key rate of 52 kbps over 50 km transmission distance, which is the highest rate so far in such distance. Our system may pave the road for practical broadband secure quantum communication with continuous variables in the commercial conditions. PMID:26419413
25 MHz clock continuous-variable quantum key distribution system over 50 km fiber channel.
Wang, Chao; Huang, Duan; Huang, Peng; Lin, Dakai; Peng, Jinye; Zeng, Guihua
2015-09-30
In this paper, a practical continuous-variable quantum key distribution system is developed and it runs in the real-world conditions with 25 MHz clock rate. To reach high-rate, we have employed a homodyne detector with maximal bandwidth to 300 MHz and an optimal high-efficiency error reconciliation algorithm with processing speed up to 25 Mbps. To optimize the stability of the system, several key techniques are developed, which include a novel phase compensation algorithm, a polarization feedback algorithm, and related stability method on the modulators. Practically, our system is tested for more than 12 hours with a final secret key rate of 52 kbps over 50 km transmission distance, which is the highest rate so far in such distance. Our system may pave the road for practical broadband secure quantum communication with continuous variables in the commercial conditions.
Teleportation-based continuous variable quantum cryptography
NASA Astrophysics Data System (ADS)
Luiz, F. S.; Rigolin, Gustavo
2017-03-01
We present a continuous variable (CV) quantum key distribution (QKD) scheme based on the CV quantum teleportation of coherent states that yields a raw secret key made up of discrete variables for both Alice and Bob. This protocol preserves the efficient detection schemes of current CV technology (no single-photon detection techniques) and, at the same time, has efficient error correction and privacy amplification schemes due to the binary modulation of the key. We show that for a certain type of incoherent attack, it is secure for almost any value of the transmittance of the optical line used by Alice to share entangled two-mode squeezed states with Bob (no 3 dB or 50% loss limitation characteristic of beam splitting attacks). The present CVQKD protocol works deterministically (no postselection needed) with efficient direct reconciliation techniques (no reverse reconciliation) in order to generate a secure key and beyond the 50% loss case at the incoherent attack level.
Electricity Market Manipulation: How Behavioral Modeling Can Help Market Design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gallo, Giulia
The question of how to best design electricity markets to integrate variable and uncertain renewable energy resources is becoming increasingly important as more renewable energy is added to electric power systems. Current markets were designed based on a set of assumptions that are not always valid in scenarios of high penetrations of renewables. In a future where renewables might have a larger impact on market mechanisms as well as financial outcomes, there is a need for modeling tools and power system modeling software that can provide policy makers and industry actors with more realistic representations of wholesale markets. One option includes using agent-based modeling frameworks. This paper discusses how key elements of current and future wholesale power markets can be modeled using an agent-based approach and how this approach may become a useful paradigm that researchers can employ when studying and planning for power systems of the future.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Hongyi; Li, Yang; Zeng, Danielle
Process integration and optimization is the key enabler of the Integrated Computational Materials Engineering (ICME) of carbon fiber composites. In this paper, automated workflows are developed for two types of composites: Sheet Molding Compounds (SMC) short fiber composites, and multi-layer unidirectional (UD) composites. For SMC, the proposed workflow integrates material processing simulation, microstructure representative volume element (RVE) models, material property prediction and structure preformation simulation to enable multiscale, multidisciplinary analysis and design. Processing parameters, microstructure parameters and vehicle subframe geometry parameters are defined as the design variables; the stiffness and weight of the structure are defined as the responses. For multi-layer UD structure, this work focuses on the discussion of different design representation methods and their impacts on the optimization performance. Challenges in ICME process integration and optimization are also summarized and highlighted. Two case studies are conducted to demonstrate the integrated process and its application in optimization.
NASA Astrophysics Data System (ADS)
Wilson, Emily L.; DiGregorio, A. J.; Riot, Vincent J.; Ammons, Mark S.; Bruner, William W.; Carter, Darrell; Mao, Jianping; Ramanathan, Anand; Strahan, Susan E.; Oman, Luke D.; Hoffman, Christine; Garner, Richard M.
2017-03-01
We present a design for a 4 U (20 cm × 20 cm × 10 cm) occultation-viewing laser heterodyne radiometer (LHR) that measures methane (CH4), carbon dioxide (CO2) and water vapor (H2O) in the limb that is designed for deployment on a 6 U CubeSat. The LHR design collects sunlight that has undergone absorption by the trace gas and mixes it with a distributive feedback (DFB) laser centered at 1640 nm that scans across CO2, CH4, and H2O absorption features. Upper troposphere/lower stratosphere measurements of these gases provide key inputs to stratospheric circulation models: measuring stratospheric circulation and its variability is essential for projecting how climate change will affect stratospheric ozone.
Predicting Baseline for Analysis of Electricity Pricing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, T.; Lee, D.; Choi, J.
2016-05-03
To understand the impact of a new pricing structure on residential electricity demands, we need a baseline model that captures every factor other than the new price. The standard baseline is a randomized control group; however, a good control group is hard to design. This motivates us to develop data-driven approaches. We explored many techniques and designed a strategy, named LTAP, that could predict the hourly usage years ahead. The key challenge in this process is that the daily cycle of electricity demand peaks a few hours after the temperature reaches its peak. Existing methods rely on the lagged variables of recent past usages to enforce this daily cycle. These methods have trouble making predictions years ahead. LTAP avoids this trouble by assuming the daily usage profile is determined by temperature and other factors. In a comparison against a well-designed control group, LTAP is found to produce accurate predictions.
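A minimal sketch of a temperature-driven hourly baseline in the spirit described, using hour-of-day indicators and temperature features but no lagged-usage variables, is given below. The feature set, model choice and synthetic data are assumptions for illustration and do not reproduce the actual LTAP strategy.

```python
# Illustrative temperature-driven baseline: predict hourly usage from
# temperature and hour-of-day, with no lagged-usage features, so the
# fitted model can be applied years ahead. A sketch only, not LTAP.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Synthetic training data: hour of day, outdoor temperature, usage (kWh).
hours = np.tile(np.arange(24), 365)
temp = 15 + 10 * np.sin(2 * np.pi * (hours - 15) / 24) + rng.normal(0, 2, hours.size)
usage = (1.0 + 0.05 * np.maximum(temp - 18, 0) ** 2
         + 0.3 * np.sin(2 * np.pi * (hours - 19) / 24)
         + rng.normal(0, 0.1, hours.size))

# Hour-of-day indicators capture the shape of the daily cycle (including
# its lag behind the temperature peak); temperature terms capture weather
# sensitivity. No past-usage variables are used.
X = np.column_stack([
    np.eye(24)[hours],            # hour-of-day indicators
    temp,                         # current temperature
    np.maximum(temp - 18, 0),     # cooling-degree term
])
model = LinearRegression().fit(X, usage)
print("R^2 on training data:", round(model.score(X, usage), 3))
```

Because every predictor is known (or forecastable) years in advance, such a baseline can be evaluated against a control group long after the training period, which is the comparison described above.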
Statistical Analyses of Femur Parameters for Designing Anatomical Plates.
Wang, Lin; He, Kunjin; Chen, Zhengming
2016-01-01
Femur parameters are key prerequisites for scientifically designing anatomical plates. Meanwhile, individual differences in femurs present a challenge to design well-fitting anatomical plates. Therefore, to design anatomical plates more scientifically, analyses of femur parameters with statistical methods were performed in this study. The specific steps were as follows. First, taking eight anatomical femur parameters as variables, 100 femur samples were classified into three classes with factor analysis and Q-type cluster analysis. Second, based on the mean parameter values of the three classes of femurs, three sizes of average anatomical plates corresponding to the three classes of femurs were designed. Finally, based on Bayes discriminant analysis, a new femur could be assigned to the proper class. Thereafter, the average anatomical plate suitable for that new femur was selected from the three available sizes of plates. Experimental results showed that the classification of femurs was quite reasonable based on the anatomical aspects of the femurs. For instance, three sizes of condylar buttress plates were designed. Meanwhile, 20 new femurs are judged to which classes the femurs belong. Thereafter, suitable condylar buttress plates were determined and selected.
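The three statistical steps can be illustrated on synthetic data as in the sketch below; scikit-learn's KMeans and LinearDiscriminantAnalysis are used as stand-ins for the Q-type cluster analysis and Bayes discriminant analysis of the study, and the eight anatomical parameters are simulated rather than measured.

```python
# Sketch of the three-step workflow on synthetic data: factor analysis,
# clustering into three classes, then discriminant-based assignment of a
# new sample. KMeans and LinearDiscriminantAnalysis are stand-ins for the
# Q-type cluster analysis and Bayes discriminant analysis of the paper.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))   # 100 femurs x 8 anatomical parameters (synthetic)

# Step 1: factor analysis to reduce the eight parameters to a few factors.
factors = FactorAnalysis(n_components=3, random_state=0).fit_transform(X)

# Step 2: cluster the femurs into three classes in factor space.
classes = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(factors)

# Step 3: fit a discriminant model so a new femur can be assigned a class
# (and hence matched to one of the three average plate sizes).
clf = LinearDiscriminantAnalysis().fit(X, classes)
new_femur = rng.normal(size=(1, 8))
print("assigned class:", clf.predict(new_femur)[0])
```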
NASA Astrophysics Data System (ADS)
Ganguli, R.
2002-11-01
An aeroelastic analysis based on finite elements in space and time is used to model the helicopter rotor in forward flight. The rotor blade is represented as an elastic cantilever beam undergoing flap and lag bending, elastic torsion and axial deformations. The objective of the improved design is to reduce vibratory loads at the rotor hub that are the main source of helicopter vibration. Constraints are imposed on aeroelastic stability, and move limits are imposed on the blade elastic stiffness design variables. Using the aeroelastic analysis, response surface approximations are constructed for the objective function (vibratory hub loads). It is found that second order polynomial response surfaces constructed using the central composite design of the theory of design of experiments adequately represents the aeroelastic model in the vicinity of the baseline design. Optimization results show a reduction in the objective function of about 30 per cent. A key accomplishment of this paper is the decoupling of the analysis problem and the optimization problems using response surface methods, which should encourage the use of optimization methods by the helicopter industry.
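A minimal sketch of the response-surface step, fitting a second-order polynomial to sampled design points and then optimizing the cheap surrogate instead of the expensive analysis, is given below. The toy objective stands in for the aeroelastic prediction of vibratory hub loads, and the random sampling plan stands in for a central composite design.

```python
# Sketch: fit a second-order polynomial response surface to sampled design
# points and minimize it. The toy objective stands in for the expensive
# aeroelastic analysis; design points would come from a central composite
# design in practice rather than random sampling.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def expensive_objective(x):
    """Stand-in for the vibratory hub-load metric (illustrative)."""
    return (x[0] - 0.3) ** 2 + 2.0 * (x[1] + 0.2) ** 2 + 0.1 * x[0] * x[1]

# Sample design points around the baseline design (here the origin).
X = rng.uniform(-1, 1, size=(30, 2))
y = np.array([expensive_objective(x) for x in X])

surface = make_pipeline(PolynomialFeatures(degree=2), LinearRegression()).fit(X, y)

# Optimize on the cheap quadratic surrogate, with move limits as bounds.
res = minimize(lambda x: surface.predict(x.reshape(1, -1))[0],
               x0=np.zeros(2), bounds=[(-1, 1), (-1, 1)])
print("surrogate optimum:", res.x)
```

Decoupling the analysis from the optimization in this way means the expensive simulation is only called to build (and later verify) the surrogate, which is the benefit highlighted in the abstract above.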
Scrivani, Peter V; Erb, Hollis N
2013-01-01
High quality clinical research is essential for advancing knowledge in the areas of veterinary radiology and radiation oncology. Types of clinical research studies may include experimental studies, method-comparison studies, and patient-based studies. Experimental studies explore issues relative to pathophysiology, patient safety, and treatment efficacy. Method-comparison studies evaluate agreement between techniques or between observers. Patient-based studies investigate naturally acquired disease and focus on questions asked in clinical practice that relate to individuals or populations (e.g., risk, accuracy, or prognosis). Careful preplanning and study design are essential in order to achieve valid results. A key point to planning studies is ensuring that the design is tailored to the study objectives. Good design includes a comprehensive literature review, asking suitable questions, selecting the proper sample population, collecting the appropriate data, performing the correct statistical analyses, and drawing conclusions supported by the available evidence. Most study designs are classified by whether they are experimental or observational, longitudinal or cross-sectional, and prospective or retrospective. Additional features (e.g., controlled, randomized, or blinded) may be described that address bias. Two related challenging aspects of study design are defining an important research question and selecting an appropriate sample population. The sample population should represent the target population as much as possible. Furthermore, when comparing groups, it is important that the groups are as alike to each other as possible except for the variables of interest. Medical images are well suited for clinical research because imaging signs are categorical or numerical variables that might be predictors or outcomes of diseases or treatments. © 2013 Veterinary Radiology & Ultrasound.
Guzmán Q, J. Antonio; Cordero, Roberto A.
2016-01-01
Background and Aims Plant design refers to the construction of the plant body or its constituent parts in terms of form and function. Although neighbourhood structure is recognized as a factor that limits plant survival and species coexistence, its relative importance in plant design is not well understood. We conducted field research to analyse how the surrounding environment of neighbourhood structure and related effects on light availability are associated with changes in plant design in two understorey plants (Palicourea padifolia and Psychotria elata) within two successional stages of a cloud forest in Costa Rica. Methods Features of plant neighbourhood physical structure and light availability, estimated using hemispherical photographs, were used as variables that reflect the surrounding environment. Measures of plant biomechanics, allometry, branching and plant slenderness were used as functional plant attributes that reflect plant design. We propose a framework using a partial least squares path model and used it to test this association. Key Results The multidimensional response of plant design of these species suggests that decreases in the height-based factor of safety and increases in mechanical load and developmental stability are influenced by increases in maximum height of neighbours and a distance-dependence interference index more than neighbourhood plant density or neighbour aggregation. Changes in plant branching and slenderness are associated positively with light availability and negatively with canopy cover. Conclusions Although it has been proposed that plant design varies according to plant density and light availability, we found that neighbour size and distance-dependence interference are associated with changes in biomechanics, allometry and branching, and they must be considered as key factors that contribute to the adaptation and coexistence of these plants in this highly diverse forest community. PMID:27245635
Booth, Amy R; Norman, Paul; Harris, Peter R; Goyder, Elizabeth
2014-02-01
The study sought to (1) explain intentions to get tested for chlamydia regularly in a group of young people living in deprived areas using the theory of planned behaviour (TPB); and (2) test whether self-identity explained additional variance in testing intentions. A cross-sectional design was used for this study. Participants (N = 278, 53% male; M = 17.05 years) living in deprived areas of a UK city were recruited from a vocational education setting. Participants completed a self-administered questionnaire, including measures of attitude, injunctive subjective norm, descriptive norm, perceived behavioural control, self-identity, intention and past behaviour in relation to getting tested for chlamydia regularly. The TPB explained 43% of the variance in chlamydia testing intentions with all variables emerging as significant predictors. However, self-identity explained additional variance in intentions (ΔR(2) = .22) and emerged as the strongest predictor, even when controlling for past behaviour. The study identified the key determinants of intention to get tested for chlamydia regularly in a sample of young people living in areas of increased deprivation: a hard-to-reach, high-risk population. The findings indicate the key variables to target in interventions to promote motivation to get tested for chlamydia regularly in equivalent samples, amongst which self-identity is critical. What is already known on this subject? Young people living in deprived areas have been identified as an at-risk group for chlamydia. Qualitative research has identified several themes in relation to factors affecting the uptake of chlamydia testing, which fit well with the constructs of the Theory of Planned Behaviour (TPB). Identity concerns have also been identified as playing an important part in young people's chlamydia testing decisions. What does this study add? TPB explained 43% of the variance in chlamydia testing intentions and all variables were significant predictors. Self-identity explained additional 22% of the variance in intentions and emerged as the strongest predictor. Indicates key variables to target in interventions to promote regular chlamydia testing in deprived young people. © 2013 The British Psychological Society.
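The hierarchical-regression logic described (TPB predictors entered first, self-identity added in a second step, with the increment in explained variance recorded) can be sketched on synthetic data as below; the variable names and data are illustrative only.

```python
# Sketch of the hierarchical regression logic: TPB predictors first, then
# self-identity added in a second step, with the change in R^2 recorded.
# All data are synthetic; variable names are illustrative only.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 278
attitude, norm, pbc, identity = rng.normal(size=(4, n))
intention = (0.4 * attitude + 0.3 * norm + 0.2 * pbc
             + 0.5 * identity + rng.normal(0, 1, n))

tpb = np.column_stack([attitude, norm, pbc])
r2_tpb = LinearRegression().fit(tpb, intention).score(tpb, intention)

full = np.column_stack([attitude, norm, pbc, identity])
r2_full = LinearRegression().fit(full, intention).score(full, intention)

print(f"R^2 (TPB only) = {r2_tpb:.2f}, "
      f"delta R^2 for self-identity = {r2_full - r2_tpb:.2f}")
```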
Composable security proof for continuous-variable quantum key distribution with coherent States.
Leverrier, Anthony
2015-02-20
We give the first composable security proof for continuous-variable quantum key distribution with coherent states against collective attacks. Crucially, in the limit of large blocks the secret key rate converges to the usual value computed from the Holevo bound. Combining our proof with either the de Finetti theorem or the postselection technique then shows the security of the protocol against general attacks, thereby confirming the long-standing conjecture that Gaussian attacks are optimal asymptotically in the composable security framework. We expect that our parameter estimation procedure, which does not rely on any assumption about the quantum state being measured, will find applications elsewhere, for instance, for the reliable quantification of continuous-variable entanglement in finite-size settings.
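For context, the "usual value computed from the Holevo bound" mentioned above is, schematically, the asymptotic collective-attack rate reproduced below; the reconciliation efficiency β is included here for completeness and is not part of the paper's statement.

```latex
% Schematic asymptotic rate against collective attacks (context only):
K_{\mathrm{asympt}} \;=\; \beta\, I(A\!:\!B) \;-\; \chi(E\!:\!B),
```

where I(A:B) is the classical mutual information between Alice's and Bob's data, χ(E:B) is the Holevo bound on the eavesdropper's information about the reconciliated data, and β ≤ 1 is the reconciliation efficiency.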
NASA Astrophysics Data System (ADS)
Roy, Satadru
Traditional approaches to design and optimize a new system, often, use a system-centric objective and do not take into consideration how the operator will use this new system alongside of other existing systems. This "hand-off" between the design of the new system and how the new system operates alongside other systems might lead to a sub-optimal performance with respect to the operator-level objective. In other words, the system that is optimal for its system-level objective might not be best for the system-of-systems level objective of the operator. Among the few available references that describe attempts to address this hand-off, most follow an MDO-motivated subspace decomposition approach of first designing a very good system and then provide this system to the operator who decides the best way to use this new system along with the existing systems. The motivating example in this dissertation presents one such similar problem that includes aircraft design, airline operations and revenue management "subspaces". The research here develops an approach that could simultaneously solve these subspaces posed as a monolithic optimization problem. The monolithic approach makes the problem a Mixed Integer/Discrete Non-Linear Programming (MINLP/MDNLP) problem, which are extremely difficult to solve. The presence of expensive, sophisticated engineering analyses further aggravate the problem. To tackle this challenge problem, the work here presents a new optimization framework that simultaneously solves the subspaces to capture the "synergism" in the problem that the previous decomposition approaches may not have exploited, addresses mixed-integer/discrete type design variables in an efficient manner, and accounts for computationally expensive analysis tools. The framework combines concepts from efficient global optimization, Kriging partial least squares, and gradient-based optimization. This approach then demonstrates its ability to solve an 11 route airline network problem consisting of 94 decision variables including 33 integer and 61 continuous type variables. This application problem is a representation of an interacting group of systems and provides key challenges to the optimization framework to solve the MINLP problem, as reflected by the presence of a moderate number of integer and continuous type design variables and expensive analysis tool. The result indicates simultaneously solving the subspaces could lead to significant improvement in the fleet-level objective of the airline when compared to the previously developed sequential subspace decomposition approach. In developing the approach to solve the MINLP/MDNLP challenge problem, several test problems provided the ability to explore performance of the framework. While solving these test problems, the framework showed that it could solve other MDNLP problems including categorically discrete variables, indicating that the framework could have broader application than the new aircraft design-fleet allocation-revenue management problem.
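As general background for the efficient-global-optimization ingredient mentioned above, the sketch below evaluates the standard expected-improvement acquisition from a surrogate's predictive mean and standard deviation; it is a textbook formula, not the dissertation's framework, and the example numbers are arbitrary.

```python
# Standard expected-improvement (EI) acquisition used in efficient global
# optimization: given a surrogate's predictive mean and standard deviation
# at candidate points, EI scores how much each point is expected to improve
# on the best objective value found so far. General background only.
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best):
    """EI for minimization, elementwise over candidate points."""
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    with np.errstate(divide="ignore", invalid="ignore"):
        z = (f_best - mu) / sigma
        ei = (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
    return np.where(sigma > 0, ei, 0.0)

# Example: three candidates scored against the incumbent best value 1.0.
print(expected_improvement(mu=[0.8, 1.0, 1.5], sigma=[0.2, 0.3, 0.1], f_best=1.0))
```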
NASA Astrophysics Data System (ADS)
Ault, T. R.; Cole, J. E.; St. George, S.
2012-11-01
We assess the magnitude of decadal to multidecadal (D2M) variability in Coupled Model Intercomparison Project phase 5 (CMIP5) simulations that will be used to understand, and plan for, climate change as part of the Intergovernmental Panel on Climate Change's 5th Assessment Report. Model performance on D2M timescales is evaluated using metrics designed to characterize the relative and absolute magnitude of variability at these frequencies. In observational data, we find that between 10% and 35% of the total variance occurs on D2M timescales. Regions characterized by the high end of this range include Africa, Australia, western North America, and the Amazon region of South America. In these areas D2M fluctuations are especially prominent and linked to prolonged drought. D2M fluctuations account for considerably less of the total variance (between 5% and 15%) in the CMIP5 archive of historical (1850-2005) simulations. The discrepancy between observation- and model-based estimates of D2M prominence reflects two features of the CMIP5 archive. First, interannual components of variability are generally too energetic. Second, decadal components are too weak in several key regions. Our findings imply that projections of the future lack sufficient decadal variability, presenting a limited view of prolonged drought and pluvial risk.
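One simple way to estimate such a variance fraction is sketched below: low-pass filter an annual series with a decadal running mean and compare the filtered variance with the total. The 10-year window and the running-mean filter are assumptions made for illustration and are not necessarily the metrics used in the study.

```python
# Sketch: estimate the fraction of variance on decadal-to-multidecadal
# (D2M) timescales by low-pass filtering an annual series with a 10-year
# running mean and comparing variances. Filter choice is an assumption.
import numpy as np

def d2m_variance_fraction(annual_series, window=10):
    x = np.asarray(annual_series, float)
    x = x - x.mean()
    kernel = np.ones(window) / window
    lowpass = np.convolve(x, kernel, mode="valid")   # D2M component
    return lowpass.var() / x.var()

# Example with a synthetic series mixing interannual noise and a slow cycle.
rng = np.random.default_rng(0)
t = np.arange(150)
series = np.sin(2 * np.pi * t / 40) + rng.normal(0, 1.0, t.size)
print(f"D2M variance fraction: {d2m_variance_fraction(series):.2f}")
```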
Yaghoobpour Tari, Shima; Wachowicz, Keith; Gino Fallone, B
2017-04-21
A prototype rotating hybrid magnetic resonance imaging system and linac has been developed to allow for simultaneous imaging and radiation delivery parallel to B 0 . However, the design of a compact magnet capable of rotation in a small vault with sufficient patient access and a typical clinical source-to-axis distance (SAD) is challenging. This work presents a novel superconducting magnet design as a proof of concept that allows for a reduced SAD and ample patient access by moving the superconducting coils to the side of the yoke. The yoke and pole-plate structures are shaped to direct the magnetic flux appropriately. The outer surface of the pole plate is optimized subject to the minimization of a cost function, which evaluates the uniformity of the magnetic field over an ellipsoid. The magnetic field calculations required in this work are performed with the 3D finite element method software package Opera-3D. Each tentative design strategy is virtually modeled in this software package, which is externally controlled by MATLAB, with its key geometries defined as variables. The optimization variables are the thickness of the pole plate at control points distributed over the pole plate surface. A novel design concept as a superconducting non-axial magnet is introduced, which could create a large uniform B 0 magnetic field with fewer geometric restriction. This non-axial 0.5 T superconducting magnet has a moderately reduced SAD of 123 cm and a vertical patient opening of 68 cm. This work is presented as a proof of principle to investigate the feasibility of a non-axial magnet with the coils located around the yoke, and the results encourage future design optimizations to maximize the benefits of this non-axial design.
Discovering new variable stars at Key Stage 3
NASA Astrophysics Data System (ADS)
Chubb, Katy; Hood, Rosie; Wilson, Thomas; Holdship, Jonathan; Hutton, Sarah
2017-05-01
Details of the London pilot of the ‘Discovery Project’ are presented, in which university-based astronomers were given the chance to pass on some real and applied knowledge of astronomy to a group of selected secondary school pupils. It was aimed at students in Key Stage 3, giving them the chance to be involved in real astronomical research at an early stage of their education, to become the official discoverer of a new variable star, and to be listed in the International Variable Star Index database (The International Variable Star Index, Version 1.1, American Association of Variable Star Observers (AAVSO), 2016, http://aavso.org/vsx), all while learning and practising research-level skills. Future plans are discussed.
Thermohaline circulation at three key sections in the North Atlantic over 1985-2002
NASA Astrophysics Data System (ADS)
Marsh, Robert; de Cuevas, Beverly A.; Coward, Andrew C.; Bryden, Harry L.; Álvarez, Marta
2005-05-01
Efforts are presently underway to monitor the Thermohaline Circulation (THC) in the North Atlantic. A measuring strategy has been designed to monitor both the Meridional Overturning Circulation (MOC) in the subtropics and dense outflows at higher latitudes. To provide a historical context for these new observations, we diagnose an eddy-permitting ocean model simulation of the period 1985-2002. We present time series of the THC, MOC and heat transport, at key hydrographic sections in the subtropics, the northeast Atlantic and the Labrador Sea. The simulated THC compares well with observations. We find considerable variability in the THC on each section, most strikingly in the Labrador Sea during the early 1990s, consistent with observed changes. Overturning in the northeast Atlantic declines by ~20% over the 1990s, coincident with an increase in the subtropics. We speculate that MOC weakening may soon be detected in the subtropics, if the decline continues in mid-latitudes.
Designing insurance to promote use of childhood obesity prevention services.
Rask, Kimberly J; Gazmararian, Julie A; Kohler, Susan S; Hawley, Jonathan N; Bogard, Jenny; Brown, Victoria A
2013-01-01
Childhood obesity is a recognized public health crisis. This paper reviews the lessons learned from a voluntary initiative to expand insurance coverage for childhood obesity prevention and treatment services in the United States. In-depth telephone interviews were conducted with key informants from 16 participating health plans and employers in 2010-11. Key informants reported difficulty ensuring that both providers and families were aware of the available services. Participating health plans and employers are beginning new tactics including removing enrollment requirements, piloting enhanced outreach to selected physician practices, and educating providers on effective care coordination and use of obesity-specific billing codes through professional organizations. The voluntary initiative successfully increased private health insurance coverage for obesity services, but the interviews described variability in implementation with both best practices and barriers identified. Increasing utilization of obesity-related health services in the long term will require both family- and provider-focused interventions in partnership with improved health insurance coverage.
Vishnevskaia, M S; Pavlov, A V; Dziubenko, E A; Dziubenko, N I; Potokina, E K
2014-04-01
Based on legume genome synteny, the nucleotide sequence of the Srlk gene, whose key role in the response to salt stress was demonstrated in the model species Medicago truncatula, was identified in the major forage and green-manure crop alfalfa (Medicago sativa). In twelve alfalfa samples originating from regions with contrasting growing conditions, 19 SNPs were revealed in the Srlk gene. For two nonsynonymous SNPs, molecular markers were designed that could be further used to analyze the association between Srlk gene nucleotide polymorphism and the variability in salt stress tolerance among alfalfa cultivars.
NASA Astrophysics Data System (ADS)
Zou, Ding; Djordjevic, Ivan B.
2016-02-01
Forward error correction (FEC) is one of the key technologies enabling the next-generation high-speed fiber optical communications. In this paper, we propose a rate-adaptive scheme using a class of generalized low-density parity-check (GLDPC) codes with a Hamming code as the local code. We show that with the proposed unified GLDPC decoder architecture, variable net coding gains (NCGs) can be achieved with no error floor at a BER down to 10^-15, making it a viable solution in the next-generation high-speed fiber optical communications.
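For context only: GLDPC constructions of the kind described replace simple parity checks with a stronger local code. The sketch below shows syndrome decoding for a Hamming(7,4) component code as a minimal illustration of that local-code step; it is not the authors' rate-adaptive GLDPC decoder.

```python
# Syndrome decoding of a Hamming(7,4) local code (single-error correction).
import numpy as np

# Parity-check matrix whose j-th column is the binary representation of j (1..7), LSB in row 0.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def hamming74_correct(r):
    """Correct at most one bit error in a 7-bit received word (array of 0/1)."""
    syndrome = H @ r % 2
    pos = int("".join(map(str, syndrome[::-1])), 2)   # error position, 0 means no error detected
    if pos:
        r = r.copy()
        r[pos - 1] ^= 1
    return r

codeword = np.array([1, 0, 1, 1, 0, 1, 0])            # a valid codeword under H
received = codeword.copy(); received[4] ^= 1           # flip one bit
print("corrected matches codeword:",
      np.array_equal(hamming74_correct(received), codeword))
```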
Evaluation of nursing practice: process and critique.
Braunstein, M S
1998-01-01
This article describes the difficulties in conducting clinical trials to evaluate nursing practice models. Suggestions are offered for strengthening the process. A clinical trial of a nursing practice model based on a synthesis of Aristotelian theory with Rogers' science is described. The rationale for decisions regarding the research procedures used is presented. Methodological limitations of the study design and the specifications of the practice model are examined. It is concluded that clear specification of theoretical relationships within a practice model and clear identification of key intervening variables will enable researchers to better connect the treatment with the outcome.
Gioia, Deborah; Brekke, John S
2003-01-01
Employment is an important outcome for individuals with schizophrenia and the Americans with Disabilities Act (ADA) is a key structural variable designed to favorably influence work. Little is known about how individuals understand and utilize ADA rights. The purpose of this mixed method study was to elicit understanding of the knowledge and use of ADA provisions from 20 persons with schizophrenia who returned to work. Three distinct groups emerged. Group differences suggest that use of ADA provisions may be dependent on individual need and comfort with ADA opportunity.
Wang, Hongye; McIntosh, Anthony R; Kovacevic, Natasa; Karachalios, Maria; Protzner, Andrea B
2016-07-01
Recent empirical work suggests that, during healthy aging, the variability of network dynamics changes during task performance. Such variability appears to reflect the spontaneous formation and dissolution of different functional networks. We sought to extend these observations into resting-state dynamics. We recorded EEG in young, middle-aged, and older adults during a "rest-task-rest" design and investigated if aging modifies the interaction between resting-state activity and external stimulus-induced activity. Using multiscale entropy as our measure of variability, we found that, with increasing age, resting-state dynamics shifts from distributed to more local neural processing, especially at posterior sources. In the young group, resting-state dynamics also changed from pre- to post-task, where fine-scale entropy increased in task-positive regions and coarse-scale entropy increased in the posterior cingulate, a key region associated with the default mode network. Lastly, pre- and post-task resting-state dynamics were linked to performance on the intervening task for all age groups, but this relationship became weaker with increasing age. Our results suggest that age-related changes in resting-state dynamics occur across different spatial and temporal scales and have consequences for information processing capacity.
Oh, Ching Mien; Guo, Qiyun; Wan Sia Heng, Paul; Chan, Lai Wah
2014-07-01
In any manufacturing process, the success of producing an end product with the desired properties and yield depends on a range of factors that include the equipment, process and formulation variables. It is the interest of manufacturers and researchers to understand each manufacturing process better and ascertain the effects of various manufacturing-associated factors on the properties of the end product. Unless the manufacturing process is well understood, it would be difficult to set realistic limits for the process variables and raw material specifications to ensure consistently high-quality and reproducible end products. Over the years, spray congealing has been used to produce particulates by the food and pharmaceutical industries. The latter have used this technology to develop specialized drug delivery systems. In this review, basic principles as well as advantages and disadvantages of the spray congealing process will be covered. Recent developments in spray congealing equipment, process variables and formulation variables such as the matrix material, encapsulated material and additives will also be discussed. Innovative equipment designs and formulations for spray congealing have emerged. Judicious choice of atomizers, polymers and additives is the key to achieve the desired properties of the microparticles for drug delivery.
A methodology for rapid vehicle scaling and configuration space exploration
NASA Astrophysics Data System (ADS)
Balaba, Davis
2009-12-01
The Configuration-space Exploration and Scaling Methodology (CESM) entails the representation of component or sub-system geometries as matrices of points in 3D space. These typically large matrices are reduced using minimal convex sets or convex hulls. This reduction leads to significant gains in collision detection speed at minimal approximation expense. (The Gilbert-Johnson-Keerthi algorithm [79] is used for collision detection purposes in this methodology.) Once the components are laid out, their collective convex hull (from here on out referred to as the super-hull) is used to approximate the inner mold line of the minimum enclosing envelope of the vehicle concept. A sectional slicing algorithm is used to extract the sectional dimensions of this envelope. An offset is added to these dimensions in order to come up with the sectional fuselage dimensions. Once the lift and control surfaces are added, vehicle level objective functions can be evaluated and compared to other designs. The size of the design space, coupled with the fact that some key constraints such as the number of collisions are discontinuous, dictates that a domain-spanning optimization routine be used. Also, as this is a conceptual design tool, the goal is to provide the designer with a diverse baseline geometry space from which to choose. For these reasons, a domain-spanning algorithm with counter-measures against speciation and genetic drift is the recommended optimization approach. The Non-dominated Sorting Genetic Algorithm (NSGA-II) [60] is shown to work well for the proof of concept study. There are two major reasons why the need to evaluate higher fidelity, custom geometric scaling laws became a part of this body of work. First of all, historical-data based regressions become implicitly unreliable when the vehicle concept in question is designed around a disruptive technology. Second, it was shown that simpler approaches such as photographic scaling can result in highly suboptimal concepts even for very small scaling factors. Yet good scaling information is critical to the success of any conceptual design process. In the CESM methodology, it is assumed that the new technology has matured enough to permit the prediction of the scaling behavior of the various subsystems in response to requirement changes. Updated subsystem geometry data is generated by applying the new requirement settings to the affected subsystems. All collisions are then eliminated using the NSGA-II algorithm. This is done while minimizing the adverse impact on the vehicle packing density. Once all collisions are eliminated, the vehicle geometry is reconstructed and system level data such as fuselage volume can be harvested. This process is repeated for all requirement settings. Dimensional analysis and regression can be carried out using this data and all other pertinent metrics in the manner described by Mendez [124] and Segel [173]. The dominant parameters for each response show up in the dimensionally consistent groups that form the independent variables. More importantly the impact of changes in any of these variables on system level dependent variables can be easily and rapidly evaluated. In this way, the conceptual design process can be accelerated without sacrificing analysis accuracy. Scaling laws for take-off gross weight and fuselage volume as functions of fuel cell specific power and power density for a notional General Aviation vehicle are derived for the proof of concept.
CESM enables the designer to maintain design freedom by portably carrying multiple designs deeper into the design process. Also since CESM is a bottom-up approach, all proposed baseline concepts are implicitly volumetrically feasible. System level geometry parameters become fall-outs as opposed to inputs. This is a critical attribute as, without the benefit of experience, a designer would be hard pressed to set the appropriate ranges for such parameters for a vehicle built around a disruptive technology. Furthermore, scaling laws generated from custom data for each concept are subject to less design noise than say, regression based approaches. Through these laws, key physics-based characteristics of vehicle subsystems such as energy density can be mapped onto key system level metrics such as fuselage volume or take-off gross weight. These laws can then substitute some historical-data based analyses thereby improving the fidelity of the analyses and reducing design time. (Abstract shortened by UMI.)
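A hedged sketch of the point-cloud reduction step described above, using SciPy's convex hull; the GJK collision query cited in the abstract is not reimplemented here, and a cheap axis-aligned bounding-box overlap test stands in for it. All geometry is synthetic.

```python
# Reduce component point clouds to convex-hull vertices, then do a coarse overlap pre-check.
import numpy as np
from scipy.spatial import ConvexHull

def hull_vertices(points):
    """Return only the convex-hull vertices of an (N, 3) point cloud."""
    return points[ConvexHull(points).vertices]

def boxes_overlap(a, b):
    """Cheap axis-aligned bounding-box check, a stand-in for a full GJK-style query."""
    return bool(np.all(a.min(0) <= b.max(0)) and np.all(b.min(0) <= a.max(0)))

comp_a = hull_vertices(np.random.default_rng(4).normal(size=(500, 3)))
comp_b = hull_vertices(np.random.default_rng(5).normal(loc=3.0, size=(500, 3)))
print(len(comp_a), "hull vertices; boxes overlap:", boxes_overlap(comp_a, comp_b))
```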
The EarthCARE multi spectral imager thermal infrared optical unit
NASA Astrophysics Data System (ADS)
Chang, M. P. J. L.; Woods, D.; Baister, Guy; Lobb, Dan; Wood, Trevor
2017-11-01
The EarthCARE satellite mission objective is the observation of clouds and aerosols from low Earth orbit. The key spatial-context-providing instrument within the payload suite of 4 instruments is the Multi-Spectral Imager (MSI), previously described in [1]. The MSI is intended to provide information on the horizontal variability of the atmospheric conditions and to identify e.g. cloud type, textures, and temperature. It will form Earth images at 500m ground sample distance (GSD) over a swath width of 150km; it will image Earth in 7 spectral bands: one visible, one near-IR, two short-wave IR and three thermal IR. The instrument will be comprised of two key parts: • a visible-NIR-SWIR (VNS) optical unit radiometrically calibrated using a sun illuminated quasivolume diffuser and shutter system • a thermal IR (TIR) optical unit radiometrically calibrated using cold space and an internal black-body. This paper, the first of a sequence of two, will provide an overview of the MSI and cover in more detail the critical performance parameters and the detailed design of the MSI TIR optical unit. The TIR concept is to provide pushbroom imaging of its 3 bands through spectral separation from a common aperture. The result is an efficient, well controlled optical design without the need for multiple focal plane arrays. The designed focal plane houses an area array detector and will meet a challenging set of requirements, including radiometric resolution, accuracy, distortion and MTF.
Airport Noise Tech Challenge Overview
NASA Technical Reports Server (NTRS)
Bridges, James
2011-01-01
The Supersonics Project, operating under NASA Aeronautics Mission Directorate's Fundamental Aero Program, has been organized around the Technical Challenges that have historically precluded commercial supersonic flight. One of these Challenges is making aircraft that are capable of such high aerodynamic performance quiet enough around airports that they will not be objectionable. It is recognized that a successful civilian supersonic aircraft will be a system where many new technologies will come together, and for this to happen not only will new low noise propulsion concepts be required, but new engineering tools that predict the noise of the aircraft as these technologies are combined and compromised with the rest of the aircraft design. These are the two main objectives of the Airport Noise Tech Challenge. As a Project in the Fundamental Aero Program, we work at a relatively low level of technology readiness. However, we have high level milestones which force us to integrate our efforts to impact systems-level activities. To keep the low-level work tied to delivering engineering tools and low-noise concepts, we have structured our milestones around development of the concepts and organized our activities around developing and applying our engineering tools to these concepts. The final deliverables in these milestones are noise prediction modules validated against the best embodiment of each concept. These will then be used in cross-disciplinary exercises to demonstrate the viability of aircraft designs to meet all the Technical Challenges. Some of the concepts being developed are shown: Fan Flow Diverters, Multi-jet Shielding, High-Aspect Ratio Embedded Nozzles, Plasma Actuated Instability Manipulation, Highly Variable Cycle Mixer-Ejectors, and Inverted Velocity Profiles. These concepts are being developed for reduced jet noise along with the design tools which describe how they perform when used in various aircraft configurations. Several key upcoming events are highlighted, including tests of the Highly Variable Cycle Mixer-Ejectors, and Inverted Velocity Profiles. Other key events are milestones to be delivered within the next calendar year.
Zador, Zsolt; Sperrin, Matthew; King, Andrew T
2016-01-01
Traumatic brain injury remains a global health problem. Understanding the relative importance of outcome predictors helps optimize our treatment strategies by informing assessment protocols, clinical decisions and trial designs. In this study we establish importance ranking for outcome predictors based on receiver operating indices to identify key predictors of outcome and create simple predictive models. We then explore the associations between key outcome predictors using Bayesian networks to gain further insight into predictor importance. We analyzed the corticosteroid randomization after significant head injury (CRASH) trial database of 10008 patients and included patients for whom demographics, injury characteristics, computed tomography (CT) findings and Glasgow Coma Scale (GCS) were recorded (a total of 13 predictors, which would be available to clinicians within a few hours following the injury, in 6945 patients). Predictions of clinical outcome (death or severe disability at 6 months) were performed using logistic regression models with 5-fold cross validation. Predictive performance was measured using standardized partial area (pAUC) under the receiver operating curve (ROC) and we used the DeLong test for comparisons. Variable importance ranking was based on pAUC targeted at specificity (pAUCSP) and sensitivity (pAUCSE) intervals of 90-100%. Probabilistic associations were depicted using Bayesian networks. Complete AUC analysis showed very good predictive power (AUC = 0.8237, 95% CI: 0.8138-0.8336) for the complete model. Specificity-focused importance ranking highlighted age, pupillary and motor responses, obliteration of basal cisterns/3rd ventricle and midline shift. Interestingly, when targeting model sensitivity, the highest-ranking variables were age, severe extracranial injury, verbal response, hematoma on CT and motor response. Simplified models, which included only these key predictors, had similar performance (pAUCSP = 0.6523, 95% CI: 0.6402-0.6641 and pAUCSE = 0.6332, 95% CI: 0.62-0.6477) compared to the complete models (pAUCSP = 0.6664, 95% CI: 0.6543-0.679, pAUCSE = 0.6436, 95% CI: 0.6289-0.6585, DeLong p-values 0.1165 and 0.3448 respectively). Bayesian networks showed that the predictors that did not feature in the simplified models were associated with those that did. We demonstrate that importance-based variable selection allows simplified predictive models to be created while maintaining prediction accuracy. Variable selection targeting specificity confirmed key components of clinical assessment in TBI, whereas sensitivity-based ranking suggested extracranial injury as one of the important predictors. These results help refine our approach to head injury assessment, decision-making and outcome prediction targeted at model sensitivity and specificity. Bayesian networks proved to be a comprehensive tool for depicting probabilistic associations for key predictors, giving insight into why the simplified model maintained accuracy.
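The sketch below illustrates one way to reproduce a specificity-focused partial-AUC analysis of the kind described above with scikit-learn (standardized pAUC over FPR ≤ 0.1, i.e. specificity 90-100%). The features and outcomes are synthetic placeholders, not the CRASH variables.

```python
# Cross-validated logistic regression scored by standardized partial AUC in the high-specificity region.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 5))                      # stand-ins for age, pupillary response, etc.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n) > 0).astype(int)

probs = cross_val_predict(LogisticRegression(max_iter=1000), X, y,
                          cv=5, method="predict_proba")[:, 1]

# max_fpr=0.1 restricts the AUC to FPR <= 10% (specificity 90-100%) with McClish standardization.
pauc_sp = roc_auc_score(y, probs, max_fpr=0.1)
print(f"pAUC (specificity 90-100%): {pauc_sp:.3f}")
```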
Improving hospital weekend handover: a user-centered, standardised approach.
Mehra, Avi; Henein, Christin
2014-01-01
Clinical handover remains one of the most perilous procedures in medicine (1). Weekend handover has emerged as a key area of concern, with high variability in handover processes across hospitals (1,2,4,5-10). Studying weekend handover processes within medicine at an acute teaching hospital revealed huge variability in documented content and structure. A total of 12 different pro formas were in use by the medical day-team to hand over to the weekend team on-call. A Likert-scale survey of doctors revealed that 93% felt the current handover system needed improvement, with 71% stating that it did not ensure patient safety (Chi-squared, p-value <0.001, n=32). Semi-structured interviews of doctors identified common themes including "a lack of consistency in approach", "poor standardization" and "high variability". Seeking to address concerns about standardization, a standardized handover pro forma was developed using Royal College of Physicians (RCP) guidelines (2), with direct end-user input. Results following implementation revealed a considerable improvement in documented ceiling of care, urgency of task and team member assignment, with 100% uptake of the new pro forma at both 4-week and 6-month post-implementation analyses. 88% of doctors surveyed perceived that the new pro forma improved patient safety (p<0.01, n=25), with 62% highlighting that it allowed doctors to work more efficiently. Results also revealed that 44% felt further improvements were needed and highlighted electronic solutions and handover training as the main priorities. Handover briefing was subsequently incorporated into junior doctor induction, and education modules were delivered, with good feedback. Following collaboration with key stakeholders and with end-user input, integrated electronic handover software was designed and funding secured. The software is currently under final development. Introducing a standardized handover pro forma can be an effective initial step in improving weekend handover. Handover education and end-user involvement are key in improving the process. Electronic handover solutions have been shown to significantly increase the quality of handover and are worth considering (9, 10).
Thompson, Wanda M; Berry, Diane; Hu, Jie
2013-05-01
To feasibility-test a 12-week church-based physical activity intervention that was culturally sensitive, age- and gender-specific, and directed at changing the attitudes of Black adolescent girls toward being more physically active. A one-group pre- and posttest design was used. A convenience sample of Black adolescent girls between the ages of 12 and 18 (n = 41). A 60-min, 12-week church-based program that included interactive educational sessions followed by a high-energy dance aerobics class was used. Data were collected on biophysical measures. Surveys were used to assess the following variables: attitudes, enjoyment, self-efficacy, intention, social and family support, and PA levels. Paired t-tests and repeated measures ANOVA revealed no significant changes in key variables. Positive changes were noted in the odds ratios for attitudes, self-efficacy, and intention. Body mass index, metabolic equivalent tasks, and fitness showed positive trends from pre- to post-intervention. Family support was significantly correlated with physical activity level (p < .01). The study showed that physical activity programs in Black churches aimed at Black adolescent girls are feasible. Participants evaluated the intervention very favorably. Family support may be a key factor in increasing physical activity levels in Black adolescent girls. © 2012 Wiley Periodicals, Inc.
Time assignment system and its performance aboard the Hitomi satellite
NASA Astrophysics Data System (ADS)
Terada, Yukikatsu; Yamaguchi, Sunao; Sugimoto, Shigenobu; Inoue, Taku; Nakaya, Souhei; Murakami, Maika; Yabe, Seiya; Oshimizu, Kenya; Ogawa, Mina; Dotani, Tadayasu; Ishisaki, Yoshitaka; Mizushima, Kazuyo; Kominato, Takashi; Mine, Hiroaki; Hihara, Hiroki; Iwase, Kaori; Kouzu, Tomomi; Tashiro, Makoto S.; Natsukari, Chikara; Ozaki, Masanobu; Kokubun, Motohide; Takahashi, Tadayuki; Kawakami, Satoko; Kasahara, Masaru; Kumagai, Susumu; Angelini, Lorella; Witthoeft, Michael
2018-01-01
Fast timing capability in x-ray observation of astrophysical objects is one of the key properties for the ASTRO-H (Hitomi) mission. Absolute timing accuracies of 350 or 35 μs are required to achieve nominal scientific goals or to study fast variabilities of specific sources. The satellite carries a GPS receiver to obtain accurate time information, which is distributed from the central onboard computer through the large and complex SpaceWire network. The details of the time system on the hardware and software design are described. In the distribution of the time information, the propagation delays and jitters affect the timing accuracy. Six other items identified within the timing system will also contribute to absolute time error. These error items have been measured and checked on ground to ensure the time error budgets meet the mission requirements. The overall timing performance in combination with hardware performance, software algorithm, and the orbital determination accuracies, etc. under nominal conditions satisfies the mission requirements of 35 μs. This work demonstrates key points for space-use instruments in hardware and software designs and calibration measurements for fine timing accuracy on the order of microseconds for midsized satellites using the SpaceWire (IEEE1355) network.
Sung, Peng-Cheng
2014-01-01
This study examined the effects of glovebox gloves for 11 females on maximum grip and key pinch strength and on contact forces generated from simulated tasks with a roller, a pair of tweezers and a crescent wrench. The independent variables were gloves fabricated of butyl, CSM/hypalon and neoprene materials; two glove thicknesses; and layers of gloves worn, including single, double and triple gloving. CSM/hypalon and butyl gloves produced greater grip strength than the neoprene gloves. CSM/hypalon gloves also lowered contact forces for the roller and wrench tasks. Single gloving and thin gloves improved hand strength performance. However, triple layers lowered contact forces for all tasks. Based on the evaluation results, selection and design recommendations of gloves for the three hand tools were provided to minimise the effects on hand strength and optimise protection of the palmar hand in glovebox environments. To improve safety and health in glovebox environments where glove usage is a necessity, this study provides recommendations for the selection and design of glovebox gloves for three hand tools (a roller, a pair of tweezers and a crescent wrench) based on the results of the experiments.
Parisi Kern, Andrea; Ferreira Dias, Michele; Piva Kulakowski, Marlova; Paulo Gomes, Luciana
2015-05-01
Reducing construction waste is becoming a key environmental issue in the construction industry. The quantification of waste generation rates in the construction sector is an invaluable management tool in supporting mitigation actions. However, the quantification of waste can be a difficult process because of the specific characteristics and the wide range of materials used in different construction projects. Large variations are observed in the methods used to predict the amount of waste generated because of the range of variables involved in construction processes and the different contexts in which these methods are employed. This paper proposes a statistical model to determine the amount of waste generated in the construction of high-rise buildings by assessing the influence of the design process and production system, often mentioned as the major culprits behind the generation of waste in construction. Multiple regression was used to conduct a case study based on multiple sources of data from eighteen residential buildings. The resulting statistical model relates the dependent variable (the amount of waste generated) to independent variables associated with the design and the production system used. The best regression model obtained from the sample data resulted in an adjusted R² value of 0.694, which means that it explains approximately 69% of the variation in waste generation in similar constructions. Most independent variables showed a low determination coefficient when assessed in isolation, which emphasizes the importance of assessing their joint influence on the response (dependent) variable. Copyright © 2015 Elsevier Ltd. All rights reserved.
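A minimal sketch of the kind of multiple regression reported above, assuming hypothetical design-process and production-system variables and synthetic data; statsmodels reports the adjusted R² directly.

```python
# Multiple regression of waste generation on design/production variables, with adjusted R^2.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 18                                   # eighteen buildings, mirroring the case study size
design_score = rng.uniform(0, 1, n)      # hypothetical design-process indicator
prod_system = rng.integers(0, 2, n)      # hypothetical production-system dummy variable
waste = 10 + 5 * design_score + 3 * prod_system + rng.normal(0, 1, n)

X = sm.add_constant(np.column_stack([design_score, prod_system]))
model = sm.OLS(waste, X).fit()
print(f"adjusted R^2 = {model.rsquared_adj:.3f}")
```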
Run-of-river power plants in Alpine regions: Whither optimal capacity?
NASA Astrophysics Data System (ADS)
Lazzaro, G.; Botter, G.
2015-07-01
Although run-of-river hydropower represents a key source of renewable energy, it is not free from stresses on river ecosystems and human well-being. This is especially true in Alpine regions, where the outflow of a plant is placed several kilometers downstream of the intake, inducing the depletion of river reaches of considerable length. Here multiobjective optimization is used in the design of the capacity of run-of-river plants to identify optimal trade-offs between two contrasting objectives: the maximization of profitability and the minimization of the hydrologic disturbance between the intake and the outflow. The latter is evaluated considering different flow metrics: mean discharge, temporal autocorrelation, and streamflow variability. Efficient and Pareto-optimal plant sizes are devised for two representative case studies belonging to the Piave river (Italy). Our results show that the optimal design capacity is strongly affected by the flow regime at the plant intake. In persistent regimes with reduced flow variability, the optimal trade-off between economic exploitation and hydrologic disturbance is obtained for a narrow range of capacities appreciably smaller than the economic optimum. In erratic regimes characterized by enhanced flow variability, instead, the Pareto front is discontinuous and multiple trade-offs can be identified, which imply either smaller or larger plants compared to the economic optimum. In particular, large capacities reduce the impact of the plant on the streamflow variability at seasonal and interannual time scales. Multiobjective analysis could provide a clue for the development of policy actions based on the evaluation of the environmental footprint of run-of-river plants.
Animal escapology I: theoretical issues and emerging trends in escape trajectories
Domenici, Paolo; Blagburn, Jonathan M.; Bacon, Jonathan P.
2011-01-01
Escape responses are used by many animal species as their main defence against predator attacks. Escape success is determined by a number of variables; important among these are the directionality (the percentage of responses directed away from the threat) and the escape trajectories (ETs) measured relative to the threat. Although logic would suggest that animals should always turn away from a predator, work on various species shows that these away responses occur only approximately 50–90% of the time. A small proportion of towards responses may introduce some unpredictability and may be an adaptive feature of the escape system. Similar issues apply to ETs. Theoretically, an optimal ET can be modelled on the geometry of predator–prey encounters. However, unpredictability (and hence high variability) in trajectories may be necessary for preventing predators from learning a simple escape pattern. This review discusses the emerging trends in escape trajectories, as well as the key modulating factors, such as the surroundings and body design. The main ET patterns identified are: (1) high ET variability within a limited angular sector (mainly 90–180 deg away from the threat; this variability is in some cases based on multiple peaks of ETs), (2) ETs that allow sensory tracking of the threat and (3) ETs towards a shelter. These characteristic features are observed across various taxa and, therefore, their expression may be mainly related to taxon-independent animal design features and to the environmental context in which prey live – for example whether the immediate surroundings of the prey provide potential refuges. PMID:21753039
Secure quantum key distribution using continuous variables of single photons.
Zhang, Lijian; Silberhorn, Christine; Walmsley, Ian A
2008-03-21
We analyze the distribution of secure keys using quantum cryptography based on the continuous variable degree of freedom of entangled photon pairs. We derive the information capacity of a scheme based on the spatial entanglement of photons from a realistic source, and show that the standard measures of security known for quadrature-based continuous variable quantum cryptography (CV-QKD) are inadequate. A specific simple eavesdropping attack is analyzed to illuminate how secret information may be distilled well beyond the bounds of the usual CV-QKD measures.
NASA Technical Reports Server (NTRS)
Paul, D. L.
1975-01-01
A low speed test program was conducted in a 9- by 15-foot V/STOL wind tunnel to investigate internal performance characteristics and determine key design features required for an inlet to meet the demanding operational conditions of the QCSEE application. Four models each having a design average throat Mach number of 0.79 were tested over a range of incidence angle, throat Mach number, and freestream velocity. Principal design variable was internal lip diameter ratio. Stable, efficient inlet performance was found to be feasible at and beyond the 50 deg incidence angle required by the QCSEE application at its 41.2 m/sec (80 knot) nominal takeoff velocity, through suitably designed inlet lip and diffuser components. Forebody design was found to significantly impact flow stability via nose curvature. Measured inlet wall pressures were used to select a location for the inlet throat Mach number control's static pressure port that properly balanced the conflicting demands of relative insensitivity to flow incidence and sufficiently high response to changes in engine flow demand.
Castellucci, H I; Arezes, P M; Molenbroek, J F M; de Bruin, R; Viviani, C
2017-01-01
The purpose of this study was to determine, using a systematic review, whether the design and/or dimensions of school furniture affect students' physical responses and/or their performance. Of the reviewed studies, 64% presented positive results, i.e. proven effects; 24% presented negative effects or no change/effect; and the remaining 12% showed an unclear effect. The compatibility between school furniture dimensions and students' anthropometric characteristics was identified as a key factor for improving some students' physical responses. Design characteristics such as high furniture, sit-stand furniture, and tilt tables and seats also present positive effects. Finally, we concluded that further research should be conducted exploring various aspects of those variables, particularly focusing on more objective measures complemented by controlled and prospective designs. Practitioner Summary: A systematic review of the literature presents a clearly positive effect of school furniture dimensions on students' performance and physical responses. Similar results appeared when school furniture design was tested. However, studying the effects of design and dimensions together produced an unclear positive effect.
Kleis, Sebastian; Rueckmann, Max; Schaeffer, Christian G
2017-04-15
In this Letter, we propose a novel implementation of continuous-variable quantum key distribution that operates with a real local oscillator placed at the receiver site. In addition, pulsing of the continuous-wave laser sources is not required, leading to an extraordinarily practical and secure setup. It is suitable for arbitrary schemes based on modulated coherent states and heterodyne detection. The results shown include transmission experiments, as well as an excess noise analysis applying a discrete 8-state phase modulation. Achievable key rates under collective attacks are estimated. The results demonstrate the high potential of the approach to achieve high secret key rates at relatively low effort and cost.
Field demonstration of a continuous-variable quantum key distribution network.
Huang, Duan; Huang, Peng; Li, Huasheng; Wang, Tao; Zhou, Yingming; Zeng, Guihua
2016-08-01
We report on what we believe is the first field implementation of a continuous-variable quantum key distribution (CV-QKD) network with point-to-point configuration. Four QKD nodes are deployed on standard communication infrastructures connected with commercial telecom optical fiber. Reliable key exchange is achieved in the wavelength-division-multiplexing CV-QKD network. The impact of a complex and volatile field environment on the excess noise is investigated, since excess noise controlling and reduction is arguably the major issue pertaining to distance and the secure key rate. We confirm the applicability and verify the maturity of the CV-QKD network in a metropolitan area, thus paving the way for a next-generation global secure communication network.
Continuous-variable quantum key distribution with 1 Mbps secure key rate.
Huang, Duan; Lin, Dakai; Wang, Chao; Liu, Weiqi; Fang, Shuanghong; Peng, Jinye; Huang, Peng; Zeng, Guihua
2015-06-29
We report the first continuous-variable quantum key distribution (CVQKD) experiment to enable the creation of a 1 Mbps secure key rate over 25 km of standard telecom fiber in a coarse wavelength-division multiplexing (CWDM) environment. The result is achieved with two major technological advances: the use of a 1 GHz shot-noise-limited homodyne detector and the implementation of a 50 MHz clock system. The excess noise due to noise photons from the local oscillator and classical data channels in CWDM is controlled effectively. We note that the experimental verification of high-bit-rate CVQKD in the multiplexing environment is a significant step closer toward large-scale deployment in fiber networks.
Transcriptional bursting is intrinsically caused by interplay between RNA polymerases on DNA
NASA Astrophysics Data System (ADS)
Fujita, Keisuke; Iwaki, Mitsuhiro; Yanagida, Toshio
2016-12-01
Cell-to-cell variability plays a critical role in cellular responses and decision-making in a population, and transcriptional bursting has been broadly studied by experimental and theoretical approaches as the potential source of cell-to-cell variability. Although molecular mechanisms of transcriptional bursting have been proposed, there is little consensus. An unsolved key question is whether transcriptional bursting is intertwined with many transcriptional regulatory factors or is an intrinsic characteristic of RNA polymerase on DNA. Here we design an in vitro single-molecule measurement system to analyse the kinetics of transcriptional bursting. The results indicate that transcriptional bursting is caused by interplay between RNA polymerases on DNA. The kinetics of in vitro transcriptional bursting is quantitatively consistent with the gene-nonspecific kinetics previously observed in noisy gene expression in vivo. Our kinetic analysis based on a cellular automaton model confirms that arrest and rescue by trailing RNA polymerase intrinsically causes transcriptional bursting.
NASA Astrophysics Data System (ADS)
Bartlett, M. S.; Parolari, A. J.; McDonnell, J. J.; Porporato, A.
2017-07-01
Though Ogden et al. list several shortcomings of the original SCS-CN method, fit for purpose is a key consideration in hydrological modelling, as shown by the adoption of SCS-CN method in many design standards. The theoretical framework of Bartlett et al. [2016a] reveals a family of semidistributed models, of which the SCS-CN method is just one member. Other members include event-based versions of the Variable Infiltration Capacity (VIC) model and TOPMODEL. This general model allows us to move beyond the limitations of the original SCS-CN method under different rainfall-runoff mechanisms and distributions for soil and rainfall variability. Future research should link this general model approach to different hydrogeographic settings, in line with the call for action proposed by Ogden et al.
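For reference, the original SCS-CN runoff relation under discussion can be written in its standard textbook form (this is background, not a result of Bartlett et al.):

\[
Q = \frac{(P - I_a)^2}{P - I_a + S} \quad (P > I_a;\ Q = 0 \text{ otherwise}), \qquad
I_a = \lambda S \ (\text{commonly } \lambda = 0.2), \qquad
S = \frac{25400}{CN} - 254 \ \text{mm},
\]

where P is event rainfall, Q is direct runoff, S is the potential maximum retention, and CN is the curve number.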
Magai, Carol; Consedine, Nathan S; Adjei, Brenda A; Hershman, Dawn; Neugut, Alfred
2008-12-01
Despite lower incidence, African American women are at increased risk of dying from breast cancer relative to their European American counterparts. Although there are key differences in both screening behavior and tumor characteristics, an additional part of this mortality difference may lie in the fact that African American women receive suboptimal adjuvant chemotherapy and may receive suboptimal hormonal therapy, therapies that are known to increase survival. The authors consider ethnic differences in the psychosocial factors that have been shown to relate to poor screening adherence and consider how they may influence adherence to breast cancer adjuvant treatment, and thus the receipt of suboptimal adjuvant chemo- or hormonal therapy. To this end, they review ethnic differences in cognitive, emotional, and social network variables. Psychosocial variables should be included in research designed to understand cancer disparities as well as in interventions that can be tailored to culturally diverse populations to improve treatment adherence.
Using health education theories to explain behavior change: a cross-country analysis. 2000-2001.
Murray-Johnson, Lisa; Witte, Kim; Boulay, Marc; Figueroa, Maria Elena; Storey, Douglas; Tweedie, Ian
Scholars within the fields of public health, health education, health promotion, and health communication look to specific theories to explain health behavior change. The purpose of this article is to critically compare four health theories and key variables within them with regard to behavior change in the area of reproductive health. Using cross-country analyses of Ghana, Nepal, and Nicaragua (data sets provided by the Center for Communication Programs, Johns Hopkins University), the authors looked at the Health Belief Model, Theory of Reasoned Action, Extended Parallel Process Model, and Social Cognitive Theory for these two defined objectives. Results show that all four theories provide an excellent fit to the data, but that certain variables within them may have particular value for understanding specific aspects of behavior change. Recommendations for the selection of theories to use as guidelines in the design and evaluation of reproductive health programs are provided.
Early acquisition of gender agreement in the Spanish noun phrase: starting small.
Mariscal, Sonia
2009-01-01
Nativist and constructivist accounts differ in their characterization of children's knowledge of grammatical categories. In this paper we present research on the process of acquisition of a particular grammatical system, gender agreement in the Spanish noun phrase, in children under three years of age. The design of the longitudinal study employed presents some variations in relation to classical studies. The aim was to obtain a large corpus of NP data which would allow different types of analysis of the children's productions to be carried out. Intra-individual variability in early NP types was analyzed and measured, and an elicitation task for adjectives was used. Results show that the acquisition of NP and gender agreement is a complex process which advances as the children gradually integrate different pieces of evidence: phonological, distributional and functional. The reduction of variability as the grammatical process advances is a key feature for its explanation.
Emergency strategy optimization for the environmental control system in manned spacecraft
NASA Astrophysics Data System (ADS)
Li, Guoxiang; Pang, Liping; Liu, Meng; Fang, Yufeng; Zhang, Helin
2018-02-01
It is very important for a manned environmental control system (ECS) to be able to reconfigure its operation strategy in emergency conditions. In this article, a multi-objective optimization is established to design the optimal emergency strategy for an ECS in an insufficient power supply condition. The maximum ECS lifetime and the minimum power consumption are chosen as the optimization objectives. Some adjustable key variables are chosen as the optimization variables, which finally represent the reconfigured emergency strategy. The non-dominated sorting genetic algorithm-II is adopted to solve this multi-objective optimization problem. Optimization processes are conducted at four different carbon dioxide partial pressure control levels. The study results show that the Pareto-optimal frontiers obtained from this multi-objective optimization can represent the relationship between the lifetime and the power consumption of the ECS. Hence, the preferred emergency operation strategy can be recommended for situations when there is suddenly insufficient power.
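A minimal sketch of the Pareto-dominance test underlying fronts like the one reported above, for the two objectives named (maximize lifetime, minimize power consumption); the candidate designs are random placeholders and NSGA-II itself is not reimplemented.

```python
# Extract the non-dominated (Pareto-optimal) set for maximize-lifetime / minimize-power objectives.
import numpy as np

def pareto_front(lifetime, power):
    """Keep design i unless some j has >= lifetime and <= power, with one strict inequality."""
    keep = []
    for i in range(len(lifetime)):
        dominated = any(
            lifetime[j] >= lifetime[i] and power[j] <= power[i]
            and (lifetime[j] > lifetime[i] or power[j] < power[i])
            for j in range(len(lifetime))
        )
        if not dominated:
            keep.append(i)
    return keep

rng = np.random.default_rng(6)
life, pwr = rng.uniform(10, 100, 50), rng.uniform(1, 5, 50)   # hypothetical candidate strategies
print("Pareto-optimal designs:", pareto_front(life, pwr))
```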
Estimation of Quasi-Stiffness and Propulsive Work of the Human Ankle in the Stance Phase of Walking
Shamaei, Kamran; Sawicki, Gregory S.; Dollar, Aaron M.
2013-01-01
Characterizing the quasi-stiffness and work of lower extremity joints is critical for evaluating human locomotion and designing assistive devices such as prostheses and orthoses intended to emulate the biological behavior of human legs. This work aims to establish statistical models that allow us to predict the ankle quasi-stiffness and net mechanical work for adults walking on level ground. During the stance phase of walking, the ankle joint propels the body through three distinctive phases of nearly constant stiffness known as the quasi-stiffness of each phase. Using a generic equation for the ankle moment obtained through an inverse dynamics analysis, we identify key independent parameters needed to predict ankle quasi-stiffness and propulsive work and also the functional form of each correlation. These parameters include gait speed, ankle excursion, and subject height and weight. Based on the identified form of the correlation and key variables, we applied linear regression on experimental walking data for 216 gait trials across 26 subjects (speeds from 0.75–2.63 m/s) to obtain statistical models of varying complexity. The most general forms of the statistical models include all the key parameters and have an R2 of 75% to 81% in the prediction of the ankle quasi-stiffnesses and propulsive work. The most specific models include only subject height and weight and could predict the ankle quasi-stiffnesses and work for optimal walking speed with average error of 13% to 30%. We discuss how these models provide a useful framework and foundation for designing subject- and gait-specific prosthetic and exoskeletal devices designed to emulate biological ankle function during level ground walking. PMID:23555839
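The sketch below shows the general structure of fitting such a statistical model by least squares; the predictor columns follow the key parameters named above (gait speed, ankle excursion, subject height and weight), but the data and coefficients are synthetic stand-ins, not the authors' gait dataset or exact functional form.

```python
# Least-squares fit of a quasi-stiffness model on hypothetical gait-trial data.
import numpy as np

rng = np.random.default_rng(2)
n = 216                                           # 216 trials across 26 subjects, as in the study
speed = rng.uniform(0.75, 2.63, n)                # gait speed (m/s)
excursion = rng.uniform(0.1, 0.4, n)              # ankle excursion (rad)
height = rng.uniform(1.5, 1.95, n)                # subject height (m)
weight = rng.uniform(50, 100, n)                  # subject weight (kg)
k_ankle = 3 + 2*speed + 10*excursion + 0.05*weight + rng.normal(0, 0.5, n)  # synthetic response

X = np.column_stack([np.ones(n), speed, excursion, height, weight])
coef, *_ = np.linalg.lstsq(X, k_ankle, rcond=None)
resid = k_ankle - X @ coef
r2 = 1 - resid.var() / k_ankle.var()
print("coefficients:", np.round(coef, 3), " R^2 =", round(r2, 3))
```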
Parallel Aircraft Trajectory Optimization with Analytic Derivatives
NASA Technical Reports Server (NTRS)
Falck, Robert D.; Gray, Justin S.; Naylor, Bret
2016-01-01
Trajectory optimization is an integral component of the design of aerospace vehicles, but emerging aircraft technologies have introduced new demands on trajectory analysis that current tools are not well suited to address. Designing aircraft with technologies such as hybrid electric propulsion and morphing wings requires consideration of the operational behavior as well as the physical design characteristics of the aircraft. The addition of operational variables can dramatically increase the number of design variables, which motivates the use of gradient-based optimization with analytic derivatives to solve the larger optimization problems. In this work we develop an aircraft trajectory analysis tool using a Legendre-Gauss-Lobatto based collocation scheme, providing analytic derivatives via the OpenMDAO multidisciplinary optimization framework. This collocation method uses an implicit time integration scheme that provides a high degree of sparsity and thus several potential options for parallelization. The performance of the new implementation was investigated via a series of single and multi-trajectory optimizations using a combination of parallel computing and constraint aggregation. The computational performance results show that in order to take full advantage of the sparsity in the problem it is vital to parallelize both the non-linear analysis evaluations and the derivative computations themselves. The constraint aggregation results showed a significant numerical challenge due to difficulty in achieving tight convergence tolerances. Overall, the results demonstrate the value of applying analytic derivatives to trajectory optimization problems and lay the foundation for future application of this collocation-based method to the design of aircraft where operational scheduling of technologies is key to achieving good performance.
Enhancing Heart-Beat-Based Security for mHealth Applications.
Seepers, Robert M; Strydis, Christos; Sourdis, Ioannis; De Zeeuw, Chris I
2017-01-01
In heart-beat-based security, a security key is derived from the time difference between consecutive heart beats (the inter-pulse interval, IPI), which may, subsequently, be used to enable secure communication. While heart-beat-based security holds promise in mobile health (mHealth) applications, there currently exists no work that provides a detailed characterization of the delivered security in a real system. In this paper, we evaluate the strength of IPI-based security keys in the context of entity authentication. We investigate several aspects that should be considered in practice, including subjects with reduced heart-rate variability (HRV), different sensor-sampling frequencies, intersensor variability (i.e., how accurate each entity may measure heart beats) as well as average and worst-case-authentication time. Contrary to the current state of the art, our evaluation demonstrates that authentication using multiple, less-entropic keys may actually increase the key strength by reducing the effects of intersensor variability. Moreover, we find that the maximal key strength of a 60-bit key varies between 29.2 bits and only 5.7 bits, depending on the subject's HRV. To improve security, we introduce the inter-multi-pulse interval (ImPI), a novel method of extracting entropy from the heart by considering the time difference between nonconsecutive heart beats. Given the same authentication time, using the ImPI for key generation increases key strength by up to 3.4 × (+19.2 bits) for subjects with limited HRV, at the cost of an extended key-generation time of 4.8 × (+45 s).
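As a rough illustration of IPI-based key generation, the sketch below takes a few low-order bits from each quantized inter-pulse interval, a construction commonly used in this literature; it is not necessarily the exact scheme (nor the ImPI variant) evaluated in the paper above, and the beat times are synthetic.

```python
# Derive key bits from inter-pulse intervals (IPIs) by keeping low-order bits of each interval.
import numpy as np

def ipi_key_bits(beat_times_s, bits_per_ipi=4):
    """Quantize IPIs to milliseconds and keep the least-significant bits of each."""
    ipis_ms = np.round(np.diff(beat_times_s) * 1000).astype(int)
    key = []
    for ipi in ipis_ms:
        key.extend((ipi >> b) & 1 for b in range(bits_per_ipi))   # LSB-first bits
    return key

beats = np.cumsum(np.random.default_rng(3).normal(0.85, 0.05, 20))  # ~70 bpm synthetic beat times
print("".join(map(str, ipi_key_bits(beats)))[:32])
```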
Bajorek, Beata; Lemay, Kate; Magin, Parker; Roberts, Christopher; Krass, Ines; Armour, Carol
2017-06-01
In the management of hypertension, blood pressure (BP) monitoring and medication use are key strategies, but they are dependent on patients' motivation to practice self-care. To gauge patients' approaches to monitoring their blood pressure, as well as explore their attitudes toward, and actions relating to, high blood pressure readings, as the key components of their self-management of hypertension. This qualitative study, comprising individual telephone interviews, involved patients attending community pharmacies in Sydney (Australia). Patients' perspectives were elicited using a purpose-designed, semi-structured interview guide. The verbal responses were audio-recorded, transcribed verbatim, and thematically analysed. Three key themes arose: (1) approaches to monitoring blood pressure, (2) attitudes to variability in BP, (3) responses to high BP readings. Many patients self-regulated the frequency of monitoring based on perceived need and/or opportunity. Most were indifferent toward their readings, regarding BP fluctuations as 'normal'. When a high BP was detected, the action taken was highly variable, with no clear action plans in place. Several patients recognised a high BP to be a consequence of not taking their antihypertensive medication, triggering the resumption of short-term adherence to their preferred management strategy, i.e., self-medication with antihypertensives (i.e., restarting their medication) and/or self-management via lifestyle strategies. This study highlights patients' inappropriate self-management of hypertension. Misperceptions about hypertension, e.g., accepting BP fluctuations as normal, can produce indifferent attitudes as well as influence patients' self-management actions. This lack of insight undermines long-term adherence to antihypertensive therapy.
Biomarker-Guided Non-Adaptive Trial Designs in Phase II and Phase III: A Methodological Review
Antoniou, Miranta; Kolamunnage-Dona, Ruwanthi; Jorgensen, Andrea L.
2017-01-01
Biomarker-guided treatment is a rapidly developing area of medicine, where treatment choice is personalised according to one or more of an individual’s biomarker measurements. A number of biomarker-guided trial designs have been proposed in the past decade, including both adaptive and non-adaptive trial designs which test the effectiveness of a biomarker-guided approach to treatment with the aim of improving patient health. A better understanding of them is needed as challenges occur both in terms of trial design and analysis. We have undertaken a comprehensive literature review based on an in-depth search strategy with a view to providing the research community with clarity in definition, methodology and terminology of the various biomarker-guided trial designs (both adaptive and non-adaptive designs) from a total of 211 included papers. In the present paper, we focus on non-adaptive biomarker-guided trial designs for which we have identified five distinct main types mentioned in 100 papers. We have graphically displayed each non-adaptive trial design and provided an in-depth overview of their key characteristics. Substantial variability has been observed in terms of how trial designs are described and particularly in the terminology used by different authors. Our comprehensive review provides guidance for those designing biomarker-guided trials. PMID:28125057
A probabilistic approach to aircraft design emphasizing stability and control uncertainties
NASA Astrophysics Data System (ADS)
Delaurentis, Daniel Andrew
In order to address identified deficiencies in current approaches to aerospace systems design, a new method has been developed. This new method for design is based on the premise that design is a decision-making activity, and that deterministic analysis and synthesis can lead to poor or misguided decision making. This is due to a lack of disciplinary knowledge of sufficient fidelity about the product, to the presence of uncertainty at multiple levels of the aircraft design hierarchy, and to a failure to focus on overall affordability metrics as measures of goodness. Design solutions are desired which are robust to uncertainty and are based on the maximum knowledge possible. The new method represents advances in the two following general areas. 1. Design models and uncertainty. The research performed completes a transition from a deterministic design representation to a probabilistic one through a modeling of design uncertainty at multiple levels of the aircraft design hierarchy, including: (1) Consistent, traceable uncertainty classification and representation; (2) Concise mathematical statement of the Probabilistic Robust Design problem; (3) Variants of the Cumulative Distribution Functions (CDFs) as decision functions for Robust Design; (4) Probabilistic Sensitivities which identify the most influential sources of variability. 2. Multidisciplinary analysis and design. Imbedded in the probabilistic methodology is a new approach for multidisciplinary design analysis and optimization (MDA/O), employing disciplinary analysis approximations formed through statistical experimentation and regression. These approximation models are a function of design variables common to the system level as well as other disciplines. For aircraft, it is proposed that synthesis/sizing is the proper avenue for integrating multiple disciplines. Research hypotheses are translated into a structured method, which is subsequently tested for validity. Specifically, the implementation involves the study of the relaxed static stability technology for a supersonic commercial transport aircraft. The probabilistic robust design method is exercised, resulting in a series of robust design solutions based on different interpretations of "robustness". Insightful results are obtained, and the ability of the method to expose trends in the design space is noted as a key advantage.
Environmental Variability in the Florida Keys: Impacts on Coral Reef Resilience and Health
NASA Astrophysics Data System (ADS)
Soto, I. M.; Muller-Karger, F. E.
2005-12-01
Environmental variability contributes to both mass mortality and resilience in tropical coral reef communities. We assess variations in sea surface temperature (SST) and ocean color in the Florida Keys using satellite imagery, and provide insight into how this variability is associated with locations of resilient coral communities (those unaffected by or able to recover from major events). The project tests the hypothesis that areas with historically low environmental variability promote lower levels of coral reef resilience. Time series of SST from the Advanced Very High Resolution Radiometer (AVHRR) sensors and ocean color derived quantities (e.g., turbidity and chlorophyll) from the Sea-viewing Wide Field of View Sensor (SeaWiFS) are being constructed over the entire Florida Keys region for a period of twelve and nine years, respectively. These data will be compared with historical coral cover data derived from Landsat imagery (1984-2002). Improved understanding of the causes of coral reef decline or resilience will help protect and manage these natural treasures.
NASA Technical Reports Server (NTRS)
Vanderplaats, Garrett; Townsend, James C. (Technical Monitor)
2002-01-01
The purpose of this research under the NASA Small Business Innovative Research program was to develop algorithms and associated software to solve very large nonlinear, constrained optimization tasks. Key issues included efficiency, reliability, memory, and gradient calculation requirements. This report describes the general optimization problem, ten candidate methods, and detailed evaluations of four candidates. The algorithm chosen for final development is a modern recreation of a 1960s external penalty function method that uses very limited computer memory and computational time. Although of lower efficiency, the new method can solve problems orders of magnitude larger than current methods. The resulting BIGDOT software has been demonstrated on problems with 50,000 variables and about 50,000 active constraints. For unconstrained optimization, it has solved a problem in excess of 135,000 variables. The method includes a technique for solving discrete variable problems that finds a "good" design, although a theoretical optimum cannot be guaranteed. It is very scalable in that the number of function and gradient evaluations does not change significantly with increased problem size. Test cases are provided to demonstrate the efficiency and reliability of the methods and software.
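As a rough illustration of the exterior penalty idea behind this class of methods, the sketch below minimizes a toy constrained problem by repeatedly solving an unconstrained subproblem with an increasing penalty weight; the objective, constraint, and penalty schedule are illustrative assumptions, not the BIGDOT formulation.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative problem (not from the report): minimize a quadratic objective
# subject to one inequality constraint g(x) <= 0.
def f(x):
    return (x[0] - 3.0) ** 2 + (x[1] + 1.0) ** 2

def g(x):
    return 3.0 - x[0] - x[1]            # feasible when x0 + x1 >= 3

def penalized(x, r):
    # Exterior penalty: only constraint violations are penalized, scaled by r.
    return f(x) + r * max(0.0, g(x)) ** 2

x = np.array([0.0, 0.0])
for r in [1.0, 10.0, 100.0, 1000.0]:    # gradually tighten the penalty
    res = minimize(lambda z: penalized(z, r), x, method="BFGS")
    x = res.x                           # warm-start the next subproblem
print("approximate constrained optimum:", x)   # close to (3.5, -0.5)
```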
Quality of care and investment in property, plant, and equipment in hospitals.
Levitt, S W
1994-01-01
OBJECTIVE. This study explores the relationship between quality of care and investment in property, plant, and equipment (PPE) in hospitals. DATA SOURCES. Hospitals' investment in PPE was derived from audited financial statements for the fiscal years 1984-1989. Peer Review Organization (PRO) Generic Quality Screen (GQS) reviews and confirmed failures between April 1989 and September 1990 were obtained from the Massachusetts PRO. STUDY DESIGN. Weighted least squares regression models used PRO GQS confirmed failure rates as the dependent variable, and investment in PPE as the key explanatory variable. DATA EXTRACTION. Investment in PPE was standardized, summed by the hospital over the six years, and divided by the hospital's average number of beds in that period. The number of PRO reviewed cases with one or more GQS confirmed failures was divided by the total number of cases reviewed to create confirmed failure rates. PRINCIPAL FINDINGS. Investment in PPE in Massachusetts hospitals is correlated with GQS confirmed failure rates. CONCLUSIONS. A financial variable, investment in PPE, predicts certain dimensions of quality of care in hospitals. PMID:8113054
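A minimal sketch of the kind of weighted least squares fit described in the study design, with hypothetical hospital-level data and an assumed weighting by the number of reviewed cases:

```python
import numpy as np

# Hypothetical data: standardized PPE investment per bed and GQS confirmed-failure
# rates; weighting by cases reviewed is an illustrative assumption.
ppe_per_bed = np.array([12.1, 8.4, 15.0, 6.2, 9.9])
failure_rate = np.array([0.08, 0.12, 0.05, 0.15, 0.10])
cases_reviewed = np.array([420, 310, 500, 150, 275])

X = np.column_stack([np.ones_like(ppe_per_bed), ppe_per_bed])  # intercept + slope
W = np.diag(cases_reviewed.astype(float))                      # observation weights

# Weighted least squares: beta = (X'WX)^{-1} X'Wy
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ failure_rate)
print("intercept, slope:", beta)
```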
Volpe, Ellen M.; Hardie, Thomas L.; Cerulli, Catherine
2013-01-01
Objective To explore the associations among dating violence (DV), aggression, relationship power, and depressive symptoms. Design A secondary analysis of a cross-sectional survey. Setting An urban, school-based health center, October 2009 through May 2009. Participants Low-income adolescent girls (n = 155), ages 14–18. Methods Descriptive and bivariate analyses were conducted to illustrate patterns and associations among variables. Key variables included depressive symptoms, DV victimization and aggression, and relationship power. We used mediation analyses to determine the direct and indirect effects among variables. Results Both DV victimization and aggression were reported frequently. Furthermore, DV victimization had a significant direct effect on depression and an indirect effect through relationship power. Depressive symptoms and relationship power were associated with DV aggression. Although relationship power did have a significant inverse effect on depressive symptoms, it was not through DV aggression. Conclusions Complex associations remain between mental health and DV; however, relationship power partially accounts for DV victimization's effect on depressive symptoms. Depressive symptoms are associated with DV victimization and aggression; therefore, nurses should address relationship power in clinical and community interventions. PMID:22697267
De Guio, François; Jouvent, Eric; Biessels, Geert Jan; Black, Sandra E; Brayne, Carol; Chen, Christopher; Cordonnier, Charlotte; De Leeuw, Frank-Eric; Dichgans, Martin; Doubal, Fergus; Duering, Marco; Dufouil, Carole; Duzel, Emrah; Fazekas, Franz; Hachinski, Vladimir; Ikram, M Arfan; Linn, Jennifer; Matthews, Paul M; Mazoyer, Bernard; Mok, Vincent; Norrving, Bo; O’Brien, John T; Pantoni, Leonardo; Ropele, Stefan; Sachdev, Perminder; Schmidt, Reinhold; Seshadri, Sudha; Smith, Eric E; Sposato, Luciano A; Stephan, Blossom; Swartz, Richard H; Tzourio, Christophe; van Buchem, Mark; van der Lugt, Aad; van Oostenbrugge, Robert; Vernooij, Meike W; Viswanathan, Anand; Werring, David; Wollenweber, Frank; Wardlaw, Joanna M
2016-01-01
Brain imaging is essential for the diagnosis and characterization of cerebral small vessel disease. Several magnetic resonance imaging markers have therefore emerged, providing new information on the diagnosis, progression, and mechanisms of small vessel disease. Yet, the reproducibility of these small vessel disease markers has received little attention despite being widely used in cross-sectional and longitudinal studies. This review focuses on the main small vessel disease-related markers on magnetic resonance imaging including: white matter hyperintensities, lacunes, dilated perivascular spaces, microbleeds, and brain volume. The aim is to summarize, for each marker, what is currently known about: (1) its reproducibility in studies with a scan–rescan procedure either in single or multicenter settings; (2) the acquisition-related sources of variability; and, (3) the techniques used to minimize this variability. Based on the results, we discuss technical and other challenges that need to be overcome in order for these markers to be reliably used as outcome measures in future clinical trials. We also highlight the key points that need to be considered when designing multicenter magnetic resonance imaging studies of small vessel disease. PMID:27170700
Health communication cards as a tool for behaviour change.
Matteson, Carrie L; Merth, Thomas D N; Finegood, Diane T
2014-01-01
Individuals seeking healthcare treatment in the context of obesity often experience difficulty engaging in discussions around their health and face challenges finding consensus with practitioners on care plans that best suit their lives. The complex set of biological, social, and environmental variables that have contributed to the higher prevalence of obesity is well illustrated in the Foresight obesity system map. Effectively understanding and addressing key variables for each individual has proven to be difficult, with clinicians facing barriers and limited resources to help address patients' unique needs. However, productive discussions inspired by patient-centered care may be particularly effective in promoting behaviour change. Tools based on systems science that facilitate patient-centered care and help identify behaviour change priorities have not been developed to help treat adult obesity. This project created and pilot-tested a card-based clinical communication tool designed to help facilitate conversations with individuals engaged in health behaviour change. The health communication cards were designed to help direct conversation between patients and healthcare providers toward issues relevant to the individual. Use of the cards to facilitate patient-driven conversations in clinical care may help to streamline conversations, set realistic care plan goals, and improve long-term rates of compliance.
NASA Astrophysics Data System (ADS)
Goienetxea Uriarte, A.; Ruiz Zúñiga, E.; Urenda Moris, M.; Ng, A. H. C.
2015-05-01
Discrete Event Simulation (DES) is nowadays widely used to support decision makers in system analysis and improvement. However, the use of simulation for improving stochastic logistic processes is not common among healthcare providers. Improving healthcare systems entails dealing with trade-offs among optimal solutions that take multiple variables and objectives into consideration. Complementing DES with Multi-Objective Optimization (SMO) creates a superior basis for finding these solutions and, in consequence, facilitates the decision-making process. This paper presents how SMO has been applied for system improvement analysis in a Swedish Emergency Department (ED). A significant number of input variables, constraints and objectives were considered when defining the optimization problem. As a result of the project, the decision makers were provided with a range of optimal solutions which considerably reduce the length of stay and waiting times for ED patients. SMO has proved to be an appropriate technique to support healthcare system design and improvement processes. A key factor for the success of this project has been the involvement and engagement of the stakeholders during the whole process.
Fields, Dail; Roman, Paul M; Blum, Terry C
2012-01-01
Objective To examine the relationships among general management systems, patient-focused total quality management/continuous process improvement (TQM/CPI) processes, resource availability, and multiple dimensions of substance use disorder (SUD) treatment. Data Sources/Study Setting Data are from a nationally representative sample of 221 SUD treatment centers through the National Treatment Center Study (NTCS). Study Design The design was a cross-sectional field study using latent variable structural equation models. The key variables are management practices, TQM/continuous quality improvement (CQI) practices, resource availability, and treatment center performance. Data Collection Interviews and questionnaires provided data from treatment center administrative directors and clinical directors in 2007–2008. Principal Findings Patient-focused TQM/CQI practices fully mediated the relationship between internal management practices and performance. The effects of TQM/CQI on performance are significantly larger for treatment centers with higher levels of staff per patient. Conclusions Internal management practices may create a setting that supports implementation of specific patient-focused practices and protocols inherent to TQM/CQI processes. However, the positive effects of internal management practices on treatment center performance occur through use of specific patient-focused TQM/CPI practices and have more impact when greater amounts of supporting resources are present. PMID:22098342
Design and Application of Sensors for Chemical Cytometry.
Vickerman, Brianna M; Anttila, Matthew M; Petersen, Brae V; Allbritton, Nancy L; Lawrence, David S
2018-02-08
The bulk cell population response to a stimulus, be it a growth factor or a cytotoxic agent, neglects the cell-to-cell variability that can serve as a friend or as a foe in human biology. Biochemical variations among closely related cells furnish the basis for the adaptability of the immune system but also act as the root cause of resistance to chemotherapy by tumors. Consequently, the ability to probe for the presence of key biochemical variables at the single-cell level is now recognized to be of significant biological and biomedical impact. Chemical cytometry has emerged as an ultrasensitive single-cell platform with the flexibility to measure an array of cellular components, ranging from metabolite concentrations to enzyme activities. We briefly review the various chemical cytometry strategies, including recent advances in reporter design, probe and metabolite separation, and detection instrumentation. We also describe strategies for improving intracellular delivery, biochemical specificity, metabolic stability, and detection sensitivity of probes. Recent applications of these strategies to small molecules, lipids, proteins, and other analytes are discussed. Finally, we assess the current scope and limitations of chemical cytometry and discuss areas for future development to meet the needs of single-cell research.
Resistance to Change and Preference for Variable versus Fixed Response Sequences
ERIC Educational Resources Information Center
Arantes, Joana; Berg, Mark E.; Le, Dien; Grace, Randolph C.
2012-01-01
In Experiment 1, 4 pigeons were trained on a multiple chain schedule in which the initial link was a variable-interval (VI) 20-s schedule signalled by a red or green center key, and terminal links required four responses made to the left (L) and/or right (R) keys. In the REPEAT component, signalled by red keylights, only LRLR terminal-link…
Comparative Model Evaluation Studies of Biogenic Trace Gas Fluxes in Tropical Forests
NASA Technical Reports Server (NTRS)
Potter, C. S.; Peterson, David L. (Technical Monitor)
1997-01-01
Simulation modeling can play a number of important roles in large-scale ecosystem studies, including synthesis of patterns and changes in carbon and nutrient cycling dynamics, scaling up to regional estimates, and formulation of testable hypotheses for process studies. Recent comparative studies have shown that ecosystem models of soil trace gas exchange with the atmosphere are evolving into several distinct simulation approaches. Different levels of detail exist among process models in the treatment of physical controls on ecosystem nutrient fluxes and organic substrate transformations leading to gas emissions. These differences arise in part from distinct objectives of scaling and extrapolation. Parameter requirements for initialization, scaling, boundary conditions, and time-series drivers therefore vary among ecosystem simulation models, such that the design of field experiments for integration with modeling should consider a consolidated series of measurements that will satisfy most of the various model requirements. For example, variables that provide information on soil moisture holding capacity, moisture retention characteristics, potential evapotranspiration and drainage rates, and rooting depth appear to be of the first order in model evaluation trials for tropical moist forest ecosystems. The amount and nutrient content of labile organic matter in the soil, based on accurate plant production estimates, are also key parameters that determine emission model response. Based on comparative model results, it is possible to construct a preliminary evaluation matrix along categories of key diagnostic parameters and temporal domains. Nevertheless, as large-scale studies are planned, it is notable that few existing models are designed to simulate transient states of ecosystem change, a feature which will be essential for assessing anthropogenic disturbance effects on regional gas budgets and the effects of long-term climate variability on biosphere-atmosphere exchange.
Khurana, Atika; Romer, Dan; Betancourt, Laura M.; Brodsky, Nancy L.; Giannetta, Joan M.; Hurt, Hallam
2012-01-01
Aims 1) To evaluate the role of pre-existing weakness in working memory ability (WM) as a risk factor for early alcohol use as mediated by different forms of impulsivity. 2) To assess the adverse effects of progressive alcohol use on variations in WM over time. Design, Setting and Participants A community sample of 358 adolescents [48% males, mean age at baseline = 11.4 ± 0.87 years] from a longitudinal cohort design, assessed annually over four consecutive years with less than 6% attrition. Measurements Repeated assessments were conducted for the following key variables: WM (based on performance on four separate tasks), frequency of alcohol use (AU), and three forms of impulsivity, namely sensation seeking (SS), acting-without-thinking (AWT) and delay discounting (DD). Latent growth curve modeling procedures were used to identify individual trajectories of change for all key variables. Findings Weakness in WM (at baseline) significantly predicted both concurrent alcohol use and increased frequency of use over the four waves (p < .05). This effect was entirely mediated by two forms of impulsivity, AWT and DD, both of which were characterized by underlying weakness in WM. No individual variation was observed in the slopes of WM, which suggests that individual variations in alcohol use were not associated with changes in WM in our early adolescent sample. Conclusions Early adolescent alcohol use may be a consequence of (pre-existing) weaknesses in working memory (WM) rather than a cause of it. Efforts to reduce early alcohol use should consider the distinct roles of different impulsivity dimensions, in addition to WM, as potential targets of intervention. PMID:23033972
Villaveces, Andrés; Peck, Michael; Faraklas, Iris; Hsu-Chang, Naiwei; Joe, Victor; Wibbenmeyer, Lucy
2014-01-01
Detailed information on the cause of burns is necessary to construct effective prevention programs. The International Classification of External Causes of Injury (ICECI) is a data collection tool that allows comprehensive categorization of multiple facets of injury events. The objective of this study was to conduct a process evaluation of software designed to improve the ease of use of the ICECI so as to identify key additional variables useful for understanding the occurrence of burn injuries, and compare this software with existing data-collection practices conducted for burn injuries. The authors completed a process evaluation of the implementation and ease of use of the software in six U.S. burn centers. They also collected preliminary burn injury data and compared them with existing variables reported to the American Burn Association's National Burn Repository (NBR). The authors accomplished their goals of 1) creating a data-collection tool for the ICECI, which can be linked to existing operational programs of the NBR, 2) training registrars in the use of this tool, 3) establishing quality-control mechanisms for ensuring accuracy and reliability, 4) incorporating ICECI data entry into the weekly routine of the burn registrar, and 5) demonstrating the quality differences between data collected using this tool and the NBR. Using this or similar tools with the ICECI structure or key selected variables can improve the quantity and quality of data on burn injuries in the United States and elsewhere and thus can be more useful in informing prevention strategies.
Herber, Oliver R; Jones, Martyn C; Smith, Karen; Johnston, Derek W
2012-12-01
This research protocol describes and justifies a study to assess patients' cardiac-related beliefs (i.e. illness representations, knowledge/misconceptions, cardiac treatment beliefs), motivation and mood over time to predict non-attendance at a cardiac rehabilitation programme by measuring weekly/monthly changes in these key variables. Heart disease is the UK's leading cause of death. Evidence from meta-analyses suggests that cardiac rehabilitation facilitates recovery following acute cardiac events. However, 30-60% of patients do not attend cardiac rehabilitation. There is some evidence from questionnaire studies that a range of potentially modifiable psychological variables including patients' cardiac-related beliefs, motivation and mood may influence attendance. Mixed-methods. In this study, during 2012-2013, electronic diary data will be gathered weekly/monthly from 240 patients with acute coronary syndrome from discharge from hospital until completion of the cardiac rehabilitation programme. This will identify changes and interactions between key variables over time and their power to predict non-attendance at cardiac rehabilitation. Data will be analysed to examine the relationship between patients' illness perceptions, cardiac treatment beliefs, knowledge/misconceptions, mood and non-attendance of the cardiac rehabilitation programme. The qualitative component (face-to-face interviews) seeks to explore why patients decide not to attend, not complete or complete the cardiac rehabilitation programme. The identification of robust predictors of (non-)attendance is important for the design and delivery of interventions aimed at optimizing cardiac rehabilitation uptake. Funding for the study was granted in February 2011 by the Scottish Government Chief Scientist Office (CZH/4/650). © 2012 Blackwell Publishing Ltd.
Estimating under-five mortality in space and time in a developing world context.
Wakefield, Jon; Fuglstad, Geir-Arne; Riebler, Andrea; Godwin, Jessica; Wilson, Katie; Clark, Samuel J
2018-01-01
Accurate estimates of the under-five mortality rate in a developing world context are a key barometer of the health of a nation. This paper describes a new model to analyze survey data on mortality in this context. We are interested in both spatial and temporal description, that is, in estimating the under-five mortality rate across regions and years and in investigating the association between the under-five mortality rate and spatially varying covariate surfaces. We illustrate the methodology by producing yearly estimates for subnational areas in Kenya over the period 1980-2014 using data from the Demographic and Health Surveys, which use stratified cluster sampling. We use a binomial likelihood with fixed effects for the urban/rural strata and random effects for the clustering to account for the complex survey design. Smoothing is carried out using Bayesian hierarchical models with continuous spatial and temporally discrete components. A key component of the model is an offset to adjust for bias due to the effects of HIV epidemics. Substantively, there has been a sharp decline in Kenya in the under-five mortality rate in the period 1980-2014, but large variability in estimated subnational rates remains. A priority for future research is understanding this variability. In exploratory work, we examine whether a variety of spatial covariate surfaces can explain the variability in the under-five mortality rate. Temperature, precipitation, a measure of malaria infection prevalence, and a measure of nearness to cities were candidates for inclusion in the covariate model, but the interplay between space, time, and covariates is complex.
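A schematic statement of the kind of likelihood described above is given below; the notation and the exact form of the random-effects and spatio-temporal terms are illustrative assumptions rather than the authors' precise specification.

```latex
% Cluster-level deaths y_c out of n_c births, with urban/rural strata fixed
% effects, cluster random effects, and a smooth space-time term (illustrative):
\begin{align*}
  y_{c} &\sim \mathrm{Binomial}(n_{c},\, p_{c}),\\
  \operatorname{logit}(p_{c}) &= \beta_{\mathrm{strat}(c)} + b_{c} + S(\mathbf{s}_{c}, t_{c}) + \mathrm{offset}_{\mathrm{HIV}},\\
  b_{c} &\sim \mathcal{N}(0,\, \sigma_{b}^{2}),
\end{align*}
% where S(s, t) is continuous in space and discrete in time, and the offset
% adjusts for bias induced by the HIV epidemic.
```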
Risius, Debbie; Milligan, Alexandra; Berns, Jason; Brown, Nicola; Scurr, Joanna
2017-05-01
To assess the effectiveness of breast support, previous studies monitored breast kinematics and kinetics, subjective feedback, muscle activity (EMG), ground reaction forces (GRFs) and physiological measures in isolation. Comparing these variables within one study will establish the key performance variables that distinguish between breast supports during activities such as running. This study investigates the effects of changes in breast support on biomechanical, physiological and subjective measures during running. Ten females (34D) ran for 10 min in high and low breast supports, and for 2 min bare breasted (2.8 m·s⁻¹). Breast and body kinematics, EMG, expired air and heart rate were recorded. GRFs were recorded during 10 m overground runs (2.8 m·s⁻¹) and subjective feedback was obtained after each condition. Of the 62 variables measured, 22 kinematic and subjective variables were influenced by changes in breast support. Willingness to exercise, time lag and superior-inferior breast velocity were most affected. GRFs, EMG and physiological variables were unaffected by breast support changes during running. Breast displacement reduction, although previously advocated, was not the most sensitive variable to breast support changes during running. Instead, breast support products should be assessed using a battery of performance indicators, including the key kinematic and subjective variables identified here.
NASA Astrophysics Data System (ADS)
Bui, Francis Minhthang; Hatzinakos, Dimitrios
2007-12-01
As electronic communications become more prevalent, mobile and universal, the threats of data compromises also accordingly loom larger. In the context of a body sensor network (BSN), which permits pervasive monitoring of potentially sensitive medical data, security and privacy concerns are particularly important. It is a challenge to implement traditional security infrastructures in these types of lightweight networks since they are by design limited in both computational and communication resources. Biometrics has emerged as a key enabling technology for secure communications in BSNs. In this work, we present two complementary approaches which exploit physiological signals to address security issues: (1) a resource-efficient key management system for generating and distributing cryptographic keys to constituent sensors in a BSN; (2) a novel data scrambling method, based on interpolation and random sampling, that is envisioned as a potential alternative to conventional symmetric encryption algorithms for certain types of data. The former targets the resource constraints in BSNs, while the latter addresses the fuzzy variability of biometric signals, which has largely precluded the direct application of conventional encryption. Using electrocardiogram (ECG) signals as biometrics, the resulting computer simulations demonstrate the feasibility and efficacy of these methods for delivering secure communications in BSNs.
Breaking the trade-off between efficiency and service.
Frei, Frances X
2006-11-01
For manufacturers, customers are the open wallets at the end of the supply chain. But for most service businesses, they are key inputs to the production process. Customers introduce tremendous variability to that process, but they also complain about any lack of consistency and don't care about the company's profit agenda. Managing customer-introduced variability, the author argues, is a central challenge for service companies. The first step is to diagnose which type of variability is causing mischief: Customers may arrive at different times, request different kinds of service, possess different capabilities, make varying degrees of effort, and have different personal preferences. Should companies accommodate variability or reduce it? Accommodation often involves asking employees to compensate for the variations among customers--a potentially costly solution. Reduction often means offering a limited menu of options, which may drive customers away. Some companies have learned to deal with customer-introduced variability without damaging either their operating environments or customers' service experiences. Starbucks, for example, handles capability variability among its customers by teaching them the correct ordering protocol. Dell deals with arrival and request variability in its high-end server business by outsourcing customer service while staying in close touch with customers to discuss their needs and assess their experiences with third-party providers. The effective management of variability often requires a company to influence customers' behavior. Managers attempting that kind of intervention can follow a three-step process: diagnosing the behavioral problem, designing an operating role for customers that creates new value for both parties, and testing and refining approaches for influencing behavior.
Matsushima, Kazuhide; Peng, Monica; Velasco, Carlos; Schaefer, Eric; Diaz-Arrastia, Ramon; Frankel, Heidi
2012-04-01
Significant glycemic excursions (so-called glucose variability) affect the outcome of generic critically ill patients but have not been well studied in patients with traumatic brain injury (TBI). The purpose of this study was to evaluate the impact of glucose variability on the long-term functional outcome of patients with TBI. A noncomputerized tight glucose control protocol was used in our intensivist-model surgical intensive care unit. The relationship between glucose variability and long-term (a median of 6 months after injury) functional outcome, defined by the extended Glasgow Outcome Scale (GOSE), was analyzed using ordinal logistic regression models. Glucose variability was defined by the SD and the percentage of excursion (POE) from the preset glucose range. A total of 109 patients with TBI under tight glucose control had long-term GOSE evaluated. In univariable analysis, there was a significant association between lower GOSE score and higher mean glucose, higher SD, POE more than 60, POE 80 to 150, and a single episode of glucose less than 60 mg/dL, but not POE 80 to 110. After adjusting for possible confounding variables in multivariable ordinal logistic regression models, higher SD, POE more than 60, POE 80 to 150, and a single episode of glucose less than 60 mg/dL were significantly associated with lower GOSE score. Glucose variability was significantly associated with poorer long-term functional outcome in patients with TBI as measured by the GOSE score. Well-designed protocols to minimize glucose variability may be key in improving long-term functional outcome. Copyright © 2012 Elsevier Inc. All rights reserved.
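The two variability summaries used here can be computed directly from a glucose time series; in the sketch below the readings and the 80-150 mg/dL band are hypothetical, and the POE definition (share of readings outside the preset range) is an interpretation of the abstract.

```python
import numpy as np

glucose = np.array([92, 145, 170, 110, 78, 155, 200, 130])  # mg/dL, hypothetical readings

sd = glucose.std(ddof=1)              # glucose variability as the sample standard deviation

def poe(values, low, high):
    # Percentage of excursion: share of readings falling outside the preset range.
    outside = (values < low) | (values > high)
    return 100.0 * outside.mean()

print(f"SD = {sd:.1f} mg/dL")
print(f"POE 80-150 = {poe(glucose, 80, 150):.1f} %")
print("any episode < 60 mg/dL:", bool((glucose < 60).any()))
```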
Key-Generation Algorithms for Linear Piece In Hand Matrix Method
NASA Astrophysics Data System (ADS)
Tadaki, Kohtaro; Tsujii, Shigeo
The linear Piece In Hand (PH, for short) matrix method with random variables was proposed in our former work. It is a general prescription which can be applicable to any type of multivariate public-key cryptosystems for the purpose of enhancing their security. Actually, we showed, in an experimental manner, that the linear PH matrix method with random variables can certainly enhance the security of HFE against the Gröbner basis attack, where HFE is one of the major variants of multivariate public-key cryptosystems. In 1998 Patarin, Goubin, and Courtois introduced the plus method as a general prescription which aims to enhance the security of any given MPKC, just like the linear PH matrix method with random variables. In this paper we prove the equivalence between the plus method and the primitive linear PH matrix method, which is introduced by our previous work to explain the notion of the PH matrix method in general in an illustrative manner and not for a practical use to enhance the security of any given MPKC. Based on this equivalence, we show that the linear PH matrix method with random variables has the substantial advantage over the plus method with respect to the security enhancement. In the linear PH matrix method with random variables, the three matrices, including the PH matrix, play a central role in the secret-key and public-key. In this paper, we clarify how to generate these matrices and thus present two probabilistic polynomial-time algorithms to generate these matrices. In particular, the second one has a concise form, and is obtained as a byproduct of the proof of the equivalence between the plus method and the primitive linear PH matrix method.
Safety performance functions incorporating design consistency variables.
Montella, Alfonso; Imbriani, Lella Liana
2015-01-01
Highway design which ensures that successive elements are coordinated in such a way as to produce harmonious and homogeneous driver performances along the road is considered consistent and safe. On the other hand, an alignment which requires drivers to handle high speed gradients and does not meet drivers' expectancy is considered inconsistent and produces higher crash frequency. To increase the usefulness and the reliability of existing safety performance functions and contribute to solving inconsistencies of existing highways as well as inconsistencies arising in the design phase, we developed safety performance functions for rural motorways that incorporate design consistency measures. Since the design consistency variables were used only for curves, two different sets of models were fitted for tangents and curves. Models for the following crash characteristics were fitted: total, single-vehicle run-off-the-road, other single-vehicle, multi-vehicle, daytime, nighttime, non-rainy weather, rainy weather, dry pavement, wet pavement, property damage only, slight injury, and severe injury (including fatal). The design consistency parameters in this study are based on operating speed models developed through an instrumented vehicle equipped with continuous GPS speed tracking from a field experiment conducted on the same motorway where the safety performance functions were fitted (motorway A16 in Italy). Study results show that geometric design consistency has a significant effect on safety of rural motorways. Previous studies on the relationship between geometric design consistency and crash frequency focused on two-lane rural highways since these highways have higher crash rates and are generally characterized by considerable inconsistencies. Our study clearly highlights that the achievement of proper geometric design consistency is a key design element also on motorways because of the safety consequences of design inconsistencies. The design consistency measures which are significant explanatory variables of the safety performance functions developed in this study are: (1) consistency in driving dynamics, i.e., difference between side friction assumed with respect to the design speed and side friction demanded at the 85th percentile speed; (2) operating speed consistency, i.e., absolute value of the 85th percentile speed reduction through successive elements of the road; (3) inertial speed consistency, i.e., difference between the operating speed in the curve and the average operating speed along the 5 km preceding the beginning of the curve; and (4) length of tangent preceding the curve (only for run-off-the-road crashes). Copyright © 2014 Elsevier Ltd. All rights reserved.
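A minimal sketch of how the speed-based consistency measures could be computed for a single curve; the point-mass side-friction relation f = V^2/(127 R) - e is a standard textbook formula, and all numbers below are illustrative rather than values from the A16 study.

```python
# Hypothetical curve data (km/h, m, dimensionless superelevation).
R_curve = 600.0            # curve radius, m
e_super = 0.06             # superelevation rate
v_design = 120.0           # design speed, km/h
v85_curve = 132.0          # 85th-percentile operating speed in the curve, km/h
v85_tangent = 141.0        # 85th-percentile speed on the preceding element, km/h
v85_5km_avg = 136.0        # average operating speed over the preceding 5 km, km/h

def side_friction(v_kmh, radius_m, e):
    # Point-mass side-friction demand: f = V^2 / (127 R) - e.
    return v_kmh ** 2 / (127.0 * radius_m) - e

# (1) Driving-dynamics consistency: friction at design speed vs demanded at V85.
f_consistency = side_friction(v_design, R_curve, e_super) - side_friction(v85_curve, R_curve, e_super)
# (2) Operating speed consistency: |V85 reduction| between successive elements.
speed_consistency = abs(v85_tangent - v85_curve)
# (3) Inertial speed consistency: curve V85 minus the 5 km upstream average.
inertial_consistency = v85_curve - v85_5km_avg

print(f"friction margin:           {f_consistency:+.3f}")
print(f"operating speed reduction: {speed_consistency:.1f} km/h")
print(f"inertial speed difference: {inertial_consistency:+.1f} km/h")
```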
Model Based Optimal Sensor Network Design for Condition Monitoring in an IGCC Plant
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kumar, Rajeeva; Kumar, Aditya; Dai, Dan
2012-12-31
This report summarizes the achievements and final results of this program. The objective of this program is to develop a general model-based sensor network design methodology and tools to address key issues in the design of an optimal sensor network configuration: the type, location, and number of sensors used in a network for online condition monitoring. In particular, the focus of this work is to develop software tools for optimal sensor placement (OSP) and use these tools to design optimal sensor network configurations for online condition monitoring of gasifier refractory wear and radiant syngas cooler (RSC) fouling. The methodology developed will be applicable to sensing system design for online condition monitoring in a broad range of applications. The overall approach consists of (i) defining condition monitoring requirements in terms of OSP and mapping these requirements into mathematical terms for the OSP algorithm, (ii) analyzing trade-offs of alternative OSP algorithms, down-selecting the most relevant ones and developing them for IGCC applications, (iii) enhancing the gasifier and RSC models as required by the OSP algorithms, and (iv) applying the developed OSP algorithm to design the optimal sensor network required for condition monitoring of IGCC gasifier refractory and RSC fouling. Two key requirements for OSP for condition monitoring are the desired precision for the monitoring variables (e.g., refractory wear) and the reliability of the proposed sensor network in the presence of expected sensor failures. The OSP problem is naturally posed within a Kalman filtering approach as an integer programming problem where the key requirements of precision and reliability are imposed as constraints. The optimization is performed over the overall network cost. Based on an extensive literature survey, two formulations were identified as being relevant to OSP for condition monitoring: one based on an LMI formulation and the other on a standard INLP formulation. Various algorithms to solve these two formulations were developed and validated. For a given OSP problem, the computational efficiency largely depends on the “size” of the problem. Initially, a simplified 1-D gasifier model assuming axial and azimuthal symmetry was used to test the various OSP algorithms. Finally, these algorithms were used to design the optimal sensor network for condition monitoring of IGCC gasifier refractory wear and RSC fouling. The sensor types and locations obtained as solutions to the OSP problem were validated using a model-based sensing approach. The OSP algorithm has been developed in a modular form and packaged as a software tool for OSP design in which a designer can explore various OSP design algorithms in a user-friendly way. The OSP software tool is implemented in Matlab/Simulink© in-house. The tool also uses a few optimization routines that are freely available on the World Wide Web. In addition, a modular Extended Kalman Filter (EKF) block has been developed in Matlab/Simulink©, which can be used for model-based sensing of important process variables that are not directly measured, by combining the online sensors with model-based estimation once the hardware sensors and their locations have been finalized. The OSP algorithm details and the results of applying these algorithms to obtain optimal sensor locations for condition monitoring of gasifier refractory wear and the RSC fouling profile are summarized in this final report.
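The flavour of posing OSP as a constrained selection problem within a Kalman filtering framework can be illustrated with a brute-force toy example (not the report's LMI or INLP algorithms): each candidate sensor subset is scored by its steady-state estimation error covariance, and the cheapest subset meeting a precision constraint is kept. The system matrices, costs, and precision target are assumptions for illustration.

```python
import numpy as np
from itertools import combinations

# Toy 2-state process model (not the gasifier/RSC models from the report).
A = np.array([[0.95, 0.10],
              [0.00, 0.90]])
Q = 0.01 * np.eye(2)                       # process noise covariance

# Candidate sensors: each row measures a combination of the states.
H_all = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [1.0, 1.0]])
R_all = np.array([0.05, 0.02, 0.10])       # measurement noise variances
cost = np.array([1.0, 3.0, 1.5])           # relative sensor costs (assumed)

def steady_state_posterior_cov(idx):
    """Iterate the Kalman prediction/update to the steady-state error covariance."""
    H = H_all[list(idx)]
    R = np.diag(R_all[list(idx)])
    P = np.eye(2)
    for _ in range(500):
        P_pred = A @ P @ A.T + Q
        K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
        P = (np.eye(2) - K @ H) @ P_pred
    return P

precision_target = 0.05                    # required trace of the error covariance
best = None
for k in range(1, len(cost) + 1):
    for subset in combinations(range(len(cost)), k):
        if np.trace(steady_state_posterior_cov(subset)) <= precision_target:
            c = cost[list(subset)].sum()
            if best is None or c < best[1]:
                best = (subset, c)
print("cheapest feasible sensor set:", best)
```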
NASA Astrophysics Data System (ADS)
Keum, Jongho; Coulibaly, Paulin
2017-07-01
Adequate and accurate hydrologic information from optimal hydrometric networks is an essential part of effective water resources management. Although the key hydrologic processes in the water cycle are interconnected, hydrometric networks (e.g., streamflow, precipitation, groundwater level) have been routinely designed individually. A decision support framework is proposed for integrated design of multivariable hydrometric networks. The proposed method is applied to design optimal precipitation and streamflow networks simultaneously. The epsilon-dominance hierarchical Bayesian optimization algorithm was combined with Shannon entropy of information theory to design and evaluate hydrometric networks. Specifically, the joint entropy from the combined networks was maximized to provide the most information, and the total correlation was minimized to reduce redundant information. To further optimize the efficiency between the networks, they were designed by maximizing the conditional entropy of the streamflow network given the information of the precipitation network. Compared to the traditional individual variable design approach, the integrated multivariable design method was able to determine more efficient optimal networks by avoiding the redundant stations. Additionally, four quantization cases were compared to evaluate their effects on the entropy calculations and the determination of the optimal networks. The evaluation results indicate that the quantization methods should be selected after careful consideration for each design problem since the station rankings and the optimal networks can change accordingly.
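A minimal sketch of the entropy quantities involved, computed from hypothetical quantized station records; the scalarized selection rule at the end is a stand-in for the paper's multi-objective search, not its actual algorithm.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)

# Hypothetical quantized records (e.g., monthly values binned into 4 classes)
# at five candidate stations; rows are time steps, columns are stations.
data = rng.integers(0, 4, size=(500, 5))

def joint_entropy(cols):
    # Empirical joint entropy (bits) of the selected stations.
    _, counts = np.unique(data[:, cols], axis=0, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def total_correlation(cols):
    # Redundancy: sum of marginal entropies minus the joint entropy.
    return sum(joint_entropy([c]) for c in cols) - joint_entropy(list(cols))

# Pick the pair of stations with the most information and the least redundancy
# (a scalarized stand-in for the multi-objective search).
best = max(combinations(range(5), 2),
           key=lambda s: joint_entropy(list(s)) - total_correlation(s))
print("selected stations:", best)
```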
Generalized Alvarez lens for correction of laser aberrations
DOE Office of Scientific and Technical Information (OSTI.GOV)
LaFortune, K N
2004-12-02
The Alvarez lens (US Patent No. 3,305,294 [1]) is a compact aberration corrector. The original design emphasized in the patent consists of a pair of adjacent optical elements that provide a variable focus. A lens system with a variable effective focal length is nothing new. Such systems are widely used in cameras, for example. It is the compactness and simplicity of operation that are the key advantages of the Alvarez lens. All of the complexity is folded into the design and fabrication of the optical elements. As mentioned in the Alvarez patent [1] and elaborated upon in Palusinski et al. [2], if one is willing to fold even more complexity into the optical elements, it is possible to correct higher-order aberrations as well. There is no theoretical limit to the number or degree of wavefront distortions that can be corrected. The only limitation is that there must be a fixed relative magnitude of the aberrations. Independent correction of each component of the higher-order aberrations cannot be performed without additional elements and degrees of freedom [3]. Under some circumstances, coupling may be observed between different aberrations. This can be mitigated with the appropriate choice of design parameters. New methods are available today that increase the practicality of making higher-order aberration correctors [4,5,6].
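The variable-focus behaviour follows from the cubic surface profiles used in the classic Alvarez design; a minimal sketch of the standard argument (notation assumed for illustration):

```latex
% Each element carries a complementary cubic thickness profile:
\[
  t_{\pm}(x, y) \;=\; \pm A\!\left(\frac{x^{3}}{3} + x y^{2}\right).
\]
% Shearing the pair by +\delta and -\delta along x gives the net thickness
\[
  t(x, y) \;=\; t_{+}(x+\delta, y) + t_{-}(x-\delta, y)
          \;=\; 2 A \delta \left(x^{2} + y^{2}\right) + \tfrac{2}{3} A \delta^{3},
\]
% a paraboloidal term whose optical power scales linearly with the displacement
% \delta, plus a constant piston term; higher-order surface terms generalize
% this to other aberrations in fixed relative proportion.
```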
NASA Astrophysics Data System (ADS)
Ashe, E.; Toth, L. T.; Cheng, H.; Edwards, R. L.; Richey, J. N.
2016-12-01
The oceanic passage between the Florida Keys and Cuba, known as the Straits of Florida, provides a critical connection between the tropics and northern Atlantic. Changes in the character of water masses transported through this region may ultimately have important impacts on high-latitude climate variability. Although recent studies have documented significant changes in the density of regional surface waters over millennial timescales, little is known about the contribution of local- to regional-scale changes in circulation to surface-water variability. Local variability in the radiocarbon age, ΔR, of surface waters can be used to trace changes in local water-column mixing and/or changes in regional source water over a variety of spatial and temporal scales. We reconstructed "snapshots" of ΔR variability across the Florida Keys reef tract during the last 10,000 years by dating 68 unaltered corals collected from Holocene reef cores with both U-series and radiocarbon techniques. We combined the snapshots of ΔR into a semi-empirical model to develop a robust statistical reconstruction of millennial-scale variability in ΔR on the Florida Keys reef tract. Our model demonstrates that ΔR varied significantly during the Holocene, with relatively high values during the early Holocene and around 3000 years BP and relatively low values around 7000 years BP and at present. We compare the trends in ΔR to existing paleoceanographic reconstructions to evaluate the relative contribution of local upwelling versus changes in source water to the region as a whole in driving local radiocarbon variability, and discuss the importance of these results to our understanding of regional-scale oceanographic and climatic variability during the Holocene. We also discuss the implications of our results for radiocarbon dating of marine samples from south Florida and present a model of ΔR versus 14C age that can be used to improve the accuracy of radiocarbon calibrations from this region.
Experimental study on discretely modulated continuous-variable quantum key distribution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shen Yong; Zou Hongxin; Chen Pingxing
2010-08-15
We present a discretely modulated continuous-variable quantum key distribution system in free space by using strong coherent states. The amplitude noise in the laser source is suppressed to the shot-noise limit by using a mode cleaner combined with a frequency shift technique. Also, it is proven that the phase noise in the source has no impact on the final secret key rate. In order to increase the encoding rate, we use broadband homodyne detectors and the no-switching protocol. In a realistic model, we establish a secret key rate of 46.8 kbits/s against collective attacks at an encoding rate of 10 MHz for a 90% channel loss when the modulation variance is optimal.
General approach and scope. [rotor blade design optimization
NASA Technical Reports Server (NTRS)
Adelman, Howard M.; Mantay, Wayne R.
1989-01-01
This paper describes a joint activity involving NASA and Army researchers at the NASA Langley Research Center to develop optimization procedures aimed at improving the rotor blade design process by integrating appropriate disciplines and accounting for all of the important interactions among the disciplines. The disciplines involved include rotor aerodynamics, rotor dynamics, rotor structures, airframe dynamics, and acoustics. The work is focused on combining these five key disciplines in an optimization procedure capable of designing a rotor system to satisfy multidisciplinary design requirements. Fundamental to the plan is a three-phased approach. In phase 1, the disciplines of blade dynamics, blade aerodynamics, and blade structure will be closely coupled, while acoustics and airframe dynamics will be decoupled and be accounted for as effective constraints on the design for the first three disciplines. In phase 2, acoustics is to be integrated with the first three disciplines. Finally, in phase 3, airframe dynamics will be fully integrated with the other four disciplines. This paper deals with details of the phase 1 approach and includes details of the optimization formulation, design variables, constraints, and objective function, as well as details of discipline interactions, analysis methods, and methods for validating the procedure.
Enzyme reactor design under thermal inactivation.
Illanes, Andrés; Wilson, Lorena
2003-01-01
Temperature is a very relevant variable for any bioprocess. Temperature optimization of bioreactor operation is a key aspect for process economics. This is especially true for enzyme-catalyzed processes, because enzymes are complex, unstable catalysts whose technological potential relies on their operational stability. Enzyme reactor design is presented with a special emphasis on the effect of thermal inactivation. Enzyme thermal inactivation is a very complex process from a mechanistic point of view. However, for the purpose of enzyme reactor design, it has been oversimplified frequently, considering one-stage first-order kinetics of inactivation and data gathered under nonreactive conditions that poorly represent the actual conditions within the reactor. More complex mechanisms are frequent, especially in the case of immobilized enzymes, and most important is the effect of catalytic modulators (substrates and products) on enzyme stability under operation conditions. This review focuses primarily on reactor design and operation under modulated thermal inactivation. It also presents a scheme for bioreactor temperature optimization, based on validated temperature-explicit functions for all the kinetic and inactivation parameters involved. More conventional enzyme reactor design is presented merely as a background for the purpose of highlighting the need for a deeper insight into enzyme inactivation for proper bioreactor design.
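The frequently used oversimplification mentioned above, one-stage first-order inactivation with an Arrhenius temperature dependence, can be sketched as follows; the rate constant, reference temperature, and activation energy are illustrative values, not recommendations.

```python
import numpy as np

R = 8.314          # J/(mol*K), gas constant

def first_order_inactivation(T_kelvin, hours, kd_ref=0.02, T_ref=323.15, Ea=180e3):
    """Residual enzyme activity e/e0 after 'hours' of operation at temperature T,
    using one-stage first-order inactivation with an Arrhenius rate constant.
    kd_ref (1/h at T_ref) and Ea are illustrative parameter values."""
    kd = kd_ref * np.exp(-Ea / R * (1.0 / T_kelvin - 1.0 / T_ref))
    return np.exp(-kd * hours)

for T_c in (45, 50, 55):
    e = first_order_inactivation(T_c + 273.15, hours=100.0)
    print(f"{T_c} C: residual activity after 100 h = {e:.2f}")
```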
Continuous-variable quantum key distribution with Gaussian source noise
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shen Yujie; Peng Xiang; Yang Jian
2011-05-15
Source noise affects the security of continuous-variable quantum key distribution (CV QKD) and is difficult to analyze. We propose a model to characterize Gaussian source noise through introducing a neutral party (Fred) who induces the noise with a general unitary transformation. Without knowing Fred's exact state, we derive the security bounds for both reverse and direct reconciliations and show that the bound for reverse reconciliation is tight.
Water clarity in the Florida Keys, USA, as observed from space (1984-2002)
NASA Astrophysics Data System (ADS)
Palandro, D. A.; Hu, C.; Andrefouet, S.; Muller-Karger, F. E.; Hallock, P.
2007-12-01
Landsat TM and ETM+ satellite data were used to derive the diffuse attenuation coefficient (Kd, m⁻¹), a measure of water clarity, for 29 sites throughout the Florida Keys Reef Tract. A total of 28 individual Landsat images between 1984 and 2002 were used, with imagery gathered every two years for spring seasons and every six years for fall seasons. Useful information was obtained from Landsat bands 1 (blue) and 2 (green), except when sites were covered by clouds or showed turbid water. Landsat band 3 (red) provided no consistent data due to the high absorption of red light by water. Because image sampling represented only one or two samples per year on specific days, and because water turbidity may change over short time scales, it was not possible to assess temporal trends at the sites with the Landsat data. Kd values in band 1 were higher in the spring (mean spring = 0.034 m⁻¹, mean fall = 0.031 m⁻¹) and band 2 values were higher in the fall (mean spring = 0.056 m⁻¹, mean fall = 0.058 m⁻¹), but the differences were not statistically significant. Spatial variability was high between sites and between regions (Upper, Middle and Lower Keys), with band 1 ranges of 0.019-0.060 m⁻¹ and band 2 ranges of 0.036-0.076 m⁻¹. The highest Kd values were found in the Upper Keys, followed by the Middle Keys and Lower Keys, respectively. This result must be taken in context, however: two Middle Keys sites were found to be inconsistent due to high turbidity, which obscured the benthos and violated our assumption of a visible seafloor, on which the algorithm depends. If all Middle Keys data were valid, it is likely that this region would have the highest Kd values for both bands. The Landsat-derived Kd values, and their inherent variability, may be influenced by the dominant water mass associated with each Florida Keys region, as well as by localized oceanic variables. The methodology used here may be applied to other reef areas and used with satellites that offer higher temporal resolution to assess temporal change and variability.
McLean, A P; Blampied, N M
1995-01-01
Behavioral momentum theory relates resistance to change of responding in a multiple-schedule component to the total reinforcement obtained in that component, regardless of how the reinforcers are produced. Four pigeons responded in a series of multiple-schedule conditions in which a variable-interval 40-s schedule arranged reinforcers for pecking in one component and a variable-interval 360-s schedule arranged them in the other. In addition, responses on a second key were reinforced according to variable-interval schedules that were equal in the two components. In different parts of the experiment, responding was disrupted by changing the rate of reinforcement on the second key or by delivering response-independent food during a blackout separating the two components. Consistent with momentum theory, responding on the first key in Part 1 changed more in the component with the lower reinforcement total when it was disrupted by changes in the rate of reinforcement on the second key. However, responding on the second key changed more in the component with the higher reinforcement total. In Parts 2 and 3, responding was disrupted with free food presented during intercomponent blackouts, with extinction (Part 2) or variable-interval 80-s reinforcement (Part 3) arranged on the second key. Here, resistance to change was greater for the component with greater overall reinforcement. Failures of momentum theory to predict short-term differences in resistance to change occurred with disruptors that caused greater change between steady states for the richer component. Consistency of effects across disruptors may yet be found if short-term effects of disruptors are assessed relative to the extent of change observed after prolonged exposure.
NASA Astrophysics Data System (ADS)
Moffitt, Blake Almy
Unmanned Aerial Vehicles (UAVs) are the most dynamic growth sector of the aerospace industry today. The need to provide persistent intelligence, surveillance, and reconnaissance for military operations is driving the planned acquisition of over 5,000 UAVs over the next five years. The most pressing need is for quiet, small UAVs with endurance beyond what is capable with advanced batteries or small internal combustion propulsion systems. Fuel cell systems demonstrate high efficiency, high specific energy, low noise, low temperature operation, modularity, and rapid refuelability making them a promising enabler of the small, quiet, and persistent UAVs that military planners are seeking. Despite the perceived benefits, the actual near-term performance of fuel cell powered UAVs is unknown. Until the auto industry began spending billions of dollars in research, fuel cell systems were too heavy for useful flight applications. However, the last decade has seen rapid development with fuel cell gravimetric and volumetric power density nearly doubling every 2--3 years. As a result, a few design studies and demonstrator aircraft have appeared, but overall the design methodology and vehicles are still in their infancy. The design of fuel cell aircraft poses many challenges. Fuel cells differ fundamentally from combustion based propulsion in how they generate power and interact with other aircraft subsystems. As a result, traditional multidisciplinary analysis (MDA) codes are inappropriate. Building new MDAs is difficult since fuel cells are rapidly changing in design, and various competitive architectures exist for balance of plant, hydrogen storage, and all electric aircraft subsystems. In addition, fuel cell design and performance data is closely protected which makes validation difficult and uncertainty significant. Finally, low specific power and high volumes compared to traditional combustion based propulsion result in more highly constrained design spaces that are problematic for design space exploration. To begin addressing the current gaps in fuel cell aircraft development, a methodology has been developed to explore and characterize the near-term performance of fuel cell powered UAVs. The first step of the methodology is the development of a valid MDA. This is accomplished by using propagated uncertainty estimates to guide the decomposition of a MDA into key contributing analyses (CAs) that can be individually refined and validated to increase the overall accuracy of the MDA. To assist in MDA development, a flexible framework for simultaneously solving the CAs is specified. This enables the MDA to be easily adapted to changes in technology and the changes in data that occur throughout a design process. Various CAs that model a polymer electrolyte membrane fuel cell (PEMFC) UAV are developed, validated, and shown to be in agreement with hardware-in-the-loop simulations of a fully developed fuel cell propulsion system. After creating a valid MDA, the final step of the methodology is the synthesis of the MDA with an uncertainty propagation analysis, an optimization routine, and a chance constrained problem formulation. This synthesis allows an efficient calculation of the probabilistic constraint boundaries and Pareto frontiers that will govern the design space and influence design decisions relating to optimization and uncertainty mitigation. A key element of the methodology is uncertainty propagation. 
The methodology uses Systems Sensitivity Analysis (SSA) to estimate the uncertainty of key performance metrics due to uncertainties in design variables and uncertainties in the accuracy of the CAs. A summary of SSA is given, along with key rules for properly decomposing an MDA for use with SSA. Verification of SSA uncertainty estimates via Monte Carlo simulations is provided for both an example problem and a detailed MDA of a fuel cell UAV. Implementation of the methodology was performed on a small fuel cell UAV designed to carry a 2.2 kg payload with 24 hours of endurance. Uncertainty distributions for both the design variables and the CAs were estimated based on experimental results and were found to dominate the design space. To reduce uncertainty and test the flexibility of the MDA framework, CAs were replaced with either empirical or semi-empirical relationships during the optimization process. The final design was validated via a hardware-in-the-loop simulation. Finally, the fuel cell UAV probabilistic design space was studied. A graphical representation of the design space was generated and the optima due to deterministic and probabilistic constraints were identified. The methodology was used to identify Pareto frontiers of the design space, which were shown on contour plots of the design space. Unanticipated discontinuities of the Pareto fronts were observed as different constraints became active, providing useful information on which to base design and development decisions.
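As a rough illustration of the verification step described above (not the dissertation's actual contributing analyses), the sketch below propagates two assumed input uncertainties through a toy endurance metric, first with a first-order sensitivity estimate and then with Monte Carlo sampling; the functional form, names, and numbers are placeholders.

```python
# Minimal sketch: compare a first-order (sensitivity-based) uncertainty estimate
# with Monte Carlo sampling for a toy "endurance" metric of two uncertain inputs.
import numpy as np

def endurance(specific_energy_wh_kg, efficiency):
    # Hypothetical analysis: endurance grows with specific energy and efficiency.
    return 0.05 * specific_energy_wh_kg * efficiency

x0 = np.array([800.0, 0.45])     # nominal inputs [Wh/kg, -] (assumed)
sigma = np.array([80.0, 0.03])   # assumed input standard deviations

# First-order propagation: var(f) ~ sum_i (df/dx_i * sigma_i)^2 via central differences.
eps = 1e-6 * x0
grad = np.array([
    (endurance(x0[0] + eps[0], x0[1]) - endurance(x0[0] - eps[0], x0[1])) / (2 * eps[0]),
    (endurance(x0[0], x0[1] + eps[1]) - endurance(x0[0], x0[1] - eps[1])) / (2 * eps[1]),
])
sigma_linear = np.sqrt(np.sum((grad * sigma) ** 2))

# Monte Carlo verification with independent Gaussian inputs.
rng = np.random.default_rng(0)
samples = rng.normal(x0, sigma, size=(100_000, 2))
sigma_mc = endurance(samples[:, 0], samples[:, 1]).std()

print(f"linearized sigma = {sigma_linear:.3f} h, Monte Carlo sigma = {sigma_mc:.3f} h")
```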
Risk and Protective Factors of Internet Addiction: A Meta-Analysis of Empirical Studies in Korea
Koo, Hoon Jung
2014-01-01
Purpose A meta-analysis of empirical studies performed in Korea was conducted to systematically investigate the associations between the indices of Internet addiction (IA) and psychosocial variables. Materials and Methods Systematic literature searches were carried out using the Korean Studies Information Service System, Research Information Sharing Service, Science Direct, Google Scholar, and references in review articles. The key words were Internet addiction, (Internet) game addiction, and pathological, problematic, and excessive Internet use. Only original research papers using Korean samples published from 1999 to 2012 and officially reviewed by peers were included for analysis. Ninety-five studies meeting the inclusion criteria were identified. Results The magnitude of the overall effect size of the intrapersonal variables associated with Internet addiction was significantly higher than that of interpersonal variables. Specifically, IA demonstrated a medium to strong association with "escape from self" and "self-identity" as self-related variables. "Attention problem", "self-control", and "emotional regulation" as control- and regulation-related variables; "addiction and absorption traits" as temperament variables; "anger" and "aggression" as emotion and mood variables; and "negative stress coping" as coping variables were also associated with comparably large effect sizes. Contrary to our expectation, the magnitudes of the correlations between relational ability and quality, parental relationships and family functionality, and IA were found to be small. The strength of the association between IA and the risk and protective factors was found to be higher in younger age groups. Conclusion The findings highlight a need for closer examination of psychosocial factors, especially intrapersonal variables, when assessing high-risk individuals and designing intervention strategies for both general IA and Internet game addiction. PMID:25323910
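For readers unfamiliar with how correlational effect sizes are aggregated, a standard fixed-effect pooling of correlations uses Fisher's z transform; this is given only as background, since the abstract does not state which meta-analytic model the authors applied:

\[
z_i = \operatorname{arctanh}(r_i), \qquad
\bar{z} = \frac{\sum_i (n_i - 3)\, z_i}{\sum_i (n_i - 3)}, \qquad
\bar{r} = \tanh(\bar{z}),
\]

where \(r_i\) and \(n_i\) are the correlation and sample size of study \(i\).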
Mouriño, Viviana; Cattalini, Juan Pablo; Boccaccini, Aldo R.
2012-01-01
This article provides an overview of the application of metallic ions in the fields of regenerative medicine and tissue engineering, focusing on their therapeutic applications and the need to design strategies for controlling the release of loaded ions from biomaterial scaffolds. A detailed summary of relevant metallic ions with potential use in tissue engineering approaches is presented. Remaining challenges in the field and directions for future research are also highlighted, with a focus on the key variables that need to be taken into account when considering the controlled release of metallic ions in tissue engineering therapeutics. PMID:22158843
NASA Technical Reports Server (NTRS)
Chang, Sin-Chung
1995-01-01
A new numerical framework for solving conservation laws is being developed. This new framework differs substantially in both concept and methodology from the well-established methods, i.e., finite difference, finite volume, finite element, and spectral methods. It is conceptually simple and designed to overcome several key limitations of the above traditional methods. A two-level scheme for solving the convection-diffusion equation is constructed and used to illuminate the major differences between the present method and those previously mentioned. This explicit scheme, referred to as the a-mu scheme, has two independent marching variables.
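The model equation behind the scheme's name is presumably the one-dimensional convection-diffusion equation with constant convection speed \(a\) and diffusion coefficient \(\mu\),

\[
\frac{\partial u}{\partial t} + a\,\frac{\partial u}{\partial x} = \mu\,\frac{\partial^{2} u}{\partial x^{2}}, \qquad \mu \ge 0,
\]

with the two independent marching variables commonly taken to be \(u\) and its spatial derivative \(u_x\) at each mesh point (an inference from the description above, not a statement from the abstract).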
Clay-mediated reactions of HCN oligomers - The effect of the oxidation state of the clay
NASA Technical Reports Server (NTRS)
Ferris, J. P.; Alwis, K. W.; Edelson, E. H.; Mount, N.; Hagan, W. J., Jr.
1981-01-01
Montmorillonite clays which contain Fe(III) inhibit the oligomerization of aqueous solutions of HCN. The inhibitory effect is due to the rapid oxidation of diaminomaleonitrile, a key intermediate in HCN oligomerization, by the Fe(III) incorporated into the aluminosilicate lattice of the clay. The Fe(III) oxidizes diaminomaleonitrile to diiminosuccinonitrile, a compound which is rapidly hydrolyzed to HCN and oxalic acid derivatives. Diaminomaleonitrile is not oxidized when Fe(III) in the montmorillonite is reduced with hydrazine. The oxidation state of the clay is an important variable in experiments designed to simulate clay catalysis on the primitive earth.
Crew behavior and performance in space analog environments
NASA Technical Reports Server (NTRS)
Kanki, Barbara G.
1992-01-01
The objectives and the current status of the Crew Factors research program conducted at NASA-Ames Research Center are reviewed. The principal objectives of the program are to determine the effects of a broad class of input variables on crew performance and to provide guidance with respect to the design and management of crews assigned to future space missions. A wide range of research environments are utilized, including controlled experimental settings, high fidelity full mission simulator facilities, and fully operational field environments. Key group processes are identified, and preliminary data are presented on the effect of crew size, type, and structure on team performance.
Small Interactive Image Processing System (SMIPS) system description
NASA Technical Reports Server (NTRS)
Moik, J. G.
1973-01-01
The Small Interactive Image Processing System (SMIPS) operates under control of the IBM-OS/MVT operating system and uses an IBM-2250 model 1 display unit as an interactive graphics device. The input language, in the form of character strings or attentions from keys and light pen, is interpreted and causes processing of built-in image processing functions as well as execution of a variable number of application programs kept on a private disk file. A description of design considerations is given, and the characteristics, structure, and logic flow of SMIPS are summarized. Data management and graphic programming techniques used for the interactive manipulation and display of digital pictures are also discussed.
Watson, Roger
2015-04-01
This article describes the basic tenets of quantitative research. The concepts of dependent and independent variables are addressed and the concept of measurement and its associated issues, such as error, reliability and validity, are explored. Experiments and surveys – the principal research designs in quantitative research – are described and key features explained. The importance of the double-blind randomised controlled trial is emphasised, alongside the importance of longitudinal surveys, as opposed to cross-sectional surveys. Essential features of data storage are covered, with an emphasis on safe, anonymous storage. Finally, the article explores the analysis of quantitative data, considering what may be analysed and the main uses of statistics in analysis.
Koenig, Kristie Patten; Buckley-Reen, Anne; Garg, Satvika
2012-01-01
Occupational therapists use school-based yoga programs, but these interventions typically lack manualization and evidence from well-designed studies. Using an experimental pretest-posttest control group design, we examined the effectiveness of the Get Ready to Learn (GRTL) classroom yoga program among children with autism spectrum disorders (ASD). The intervention group received the manualized yoga program daily for 16 wk, and the control group engaged in their standard morning routine. We assessed challenging behaviors with standardized measures and behavior coding before and after intervention. We completed a between-groups analysis of variance to assess differences in gain scores on the dependent variables. Students in the GRTL program showed significant decreases (p < .05) in teacher ratings of maladaptive behavior, as measured with the Aberrant Behavior Checklist, compared with the control participants. This study demonstrates that use of daily classroomwide yoga interventions has a significant impact on key classroom behaviors among children with ASD. Copyright © 2012 by the American Occupational Therapy Association, Inc.
Xu, Hongyi; Li, Yang; Zeng, Danielle
2017-01-02
Process integration and optimization is the key enabler of the Integrated Computational Materials Engineering (ICME) of carbon fiber composites. In this paper, automated workflows are developed for two types of composites: Sheet Molding Compound (SMC) short fiber composites and multi-layer unidirectional (UD) composites. For SMC, the proposed workflow integrates material processing simulation, microstructure representative volume element (RVE) models, material property prediction and structural performance simulation to enable multiscale, multidisciplinary analysis and design. Processing parameters, microstructure parameters and vehicle subframe geometry parameters are defined as the design variables; the stiffness and weight of the structure are defined as the responses. For the multi-layer UD structure, this work focuses on the discussion of different design representation methods and their impacts on the optimization performance. Challenges in ICME process integration and optimization are also summarized and highlighted. Two case studies are conducted to demonstrate the integrated process and its application in optimization.
SSUSI-Lite: a far-ultraviolet hyper-spectral imager for space weather remote sensing
NASA Astrophysics Data System (ADS)
Ogorzalek, Bernard; Osterman, Steven; Carlsson, Uno; Grey, Matthew; Hicks, John; Hourani, Ramsey; Kerem, Samuel; Marcotte, Kathryn; Parker, Charles; Paxton, Larry J.
2015-09-01
SSUSI-Lite is a far-ultraviolet (115-180 nm) hyperspectral imager for monitoring space weather. The SSUSI and GUVI sensors, its predecessors, have demonstrated their value as space weather monitors. SSUSI-Lite is a refresh of the Special Sensor Ultraviolet Spectrographic Imager (SSUSI) design that has flown on the Defense Meteorological Satellite Program (DMSP) spacecraft F16 through F19. The refresh updates the 25-year-old design and ensures that the next generation of SSUSI/GUVI sensors can be accommodated on any number of potential platforms. SSUSI-Lite maintains the same optical layout as SSUSI, includes updates to key functional elements, and reduces the sensor volume, mass, and power requirements. SSUSI-Lite contains an improved scanner design that results in precise mirror pointing and allows for variable scan profiles. The detector electronics have been redesigned to employ all-digital pulse processing. The largest decrease in volume, mass, and power has been obtained by consolidating all control and power electronics into one data processing unit.
Design and test of three active flutter suppression controllers
NASA Technical Reports Server (NTRS)
Christhilf, David M.; Waszak, Martin R.; Adams, William M.; Srinathkumar, S.; Mukhopadhyay, Vivek
1991-01-01
Three flutter suppression control law design techniques are presented. Each uses multiple control surfaces and/or sensors. The first uses linear combinations of several accelerometer signals together with dynamic compensation to synthesize the modal rate of the critical mode for feedback to distributed control surfaces. The second uses traditional tools (pole/zero loci and Nyquist diagrams) to develop a good understanding of the flutter mechanism and produce a controller with minimal complexity and good robustness to plant uncertainty. The third starts with a minimum energy Linear Quadratic Gaussian controller, applies controller order reduction, and then modifies weight and noise covariance matrices to improve multi-variable robustness. The resulting designs were implemented digitally and tested subsonically on the Active Flexible Wing (AFW) wind tunnel model. Test results presented here include plant characteristics, maximum attained closed-loop dynamic pressure, and Root Mean Square control surface activity. A key result is that simultaneous symmetric and antisymmetric flutter suppression was achieved by the second control law, with a 24 percent increase in attainable dynamic pressure.
Combined control-structure optimization
NASA Technical Reports Server (NTRS)
Salama, M.; Milman, M.; Bruno, R.; Scheid, R.; Gibson, S.
1989-01-01
An approach for combined control-structure optimization keyed to enhancing early design trade-offs is outlined and illustrated by numerical examples. The approach employs a homotopic strategy and appears to be effective for generating families of designs that can be used in these early trade studies. Analytical results were obtained for classes of structure/control objectives with linear quadratic Gaussian (LQG) and linear quadratic regulator (LQR) costs. For these, researchers demonstrated that global optima can be computed for small values of the homotopy parameter. Conditions for local optima along the homotopy path were also given. Details of two numerical examples employing the LQR control cost were given showing variations of the optimal design variables along the homotopy path. The results of the second example suggest that introducing a second homotopy parameter relating the two parts of the control index in the LQG/LQR formulation might serve to enlarge the family of Pareto optima, but its effect on modifying the optimal structural shapes may be analogous to the original parameter lambda.
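As an illustration of the kind of composite index such a homotopic strategy can trace (the exact formulation in the paper may differ), a single parameter \(\lambda\) can blend a structural cost with an LQR control cost:

\[
J(\lambda) = (1 - \lambda)\, J_{\text{struct}} + \lambda \int_{0}^{\infty} \left( x^{\mathsf T} Q x + u^{\mathsf T} R u \right) dt, \qquad 0 \le \lambda \le 1,
\]

so that sweeping \(\lambda\) from 0 to 1 generates the family of designs used in early trade studies.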
NASA Technical Reports Server (NTRS)
Mantay, Wayne R.; Adelman, Howard M.
1990-01-01
This paper describes a joint NASA/Army research activity at the Langley Research Center to develop optimization procedures aimed at improving the rotor blade design process by integrating appropriate disciplines and accounting for important interactions among the disciplines. The activity is being guided by a Steering Committee made up of key NASA and Army researchers and managers. The paper describes the optimization formulation in terms of the objective function, design variables, and constraints. The analysis aspects are discussed, and the interdisciplinary interactions are defined in terms of the information that must be transferred among disciplinary analyses as well as the trade-offs between disciplines in determining the details of the design. At this writing, some significant progress has been made. Results given in the paper represent accomplishments in rotor aerodynamic performance optimization for minimum horsepower, rotor dynamic optimization for vibration reduction, approximate analysis of frequencies and mode shapes, rotor structural optimization for minimum weight, and integrated aerodynamic load/dynamics optimization for minimum vibration and weight.
Back-support large laser mirror unit: mounting modeling and analysis
NASA Astrophysics Data System (ADS)
Wang, Hui; Zhang, Zheng; Long, Kai; Liu, Tianye; Li, Jun; Liu, Changchun; Xiong, Zhao; Yuan, Xiaodong
2018-01-01
In high-power laser systems, the surface wavefront of large optics is closely linked to their structural design and mounting method. The back-support transport mirror design is presently being investigated in China's high-power laser system as a means of holding the optical component firmly while minimizing the distortion of its reflecting surface. We have proposed a comprehensive analytical framework integrating numerical modeling and precise metrology for evaluating the mirror's mounting performance, treating the surface distortion as a key decision variable. The combination of numerical simulation and field tests demonstrates that the comprehensive analytical framework provides a detailed and accurate approach to evaluate the performance of the transport mirror. It is also verified that the back-support transport mirror is effectively compatible with state-of-the-art optical quality specifications. This study will pave the way for future research to solidify the design of back-support large laser optics in China's next generation inertial confinement fusion facility.
Johannes Breidenbach; Clara Antón-Fernández; Hans Petersson; Ronald E. McRoberts; Rasmus Astrup
2014-01-01
National Forest Inventories (NFIs) provide estimates of forest parameters for national and regional scales. Many key variables of interest, such as biomass and timber volume, cannot be measured directly in the field. Instead, models are used to predict those variables from measurements of other field variables. Therefore, the uncertainty or variability of NFI estimates...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Houssainy, Sammy; Janbozorgi, Mohammad; Kavehpour, Pirouz
Compressed Air Energy Storage (CAES) can potentially allow renewable energy sources to meet electricity demands as reliably as coal-fired power plants. However, conventional CAES systems rely on the combustion of natural gas, require large storage volumes, and operate at high pressures, which possess inherent problems such as high costs, strict geological locations, and the production of greenhouse gas emissions. A novel and patented hybrid thermal-compressed air energy storage (HT-CAES) design is presented which allows a portion of the available energy, from the grid or renewable sources, to operate a compressor and the remainder to be converted and stored in the form of heat, through joule heating in a sensible thermal storage medium. The HT-CAES design includes a turbocharger unit that provides supplementary mass flow rate alongside the air storage. The hybrid design and the addition of a turbocharger have the beneficial effect of mitigating the shortcomings of conventional CAES systems and its derivatives by eliminating combustion emissions and reducing storage volumes, operating pressures, and costs. Storage efficiency and cost are the two key factors which, upon integration with renewable energies, would allow the sources to operate as independent forms of sustainable energy. The potential of the HT-CAES design is illustrated through a thermodynamic optimization study, which outlines key variables that have a major impact on the performance and economics of the storage system. The optimization analysis quantifies the required distribution of energy between thermal and compressed air energy storage, for maximum efficiency and for minimum cost. This study provides a roundtrip energy and exergy efficiency map of the storage system and illustrates a trade-off that exists between its capital cost and performance.
Microfine coal firing results from a retrofit gas/oil-designed industrial boiler
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patel, R.; Borio, R.W.; Liljedahl, G.
1995-12-31
The development of a High Efficiency Advanced Coal Combustor (HEACC) has been in progress since 1987 at the ABB Power Plant Laboratories. The initial work on this concept produced an advanced coal firing system that was capable of firing both water-based and dry pulverized coal in an industrial boiler environment. Economics may one day dictate that it makes sense to replace oil or natural gas with coal in boilers that were originally designed to burn these fuels. The objective of the current program is to demonstrate the technical and economic feasibility of retrofitting a gas/oil designed boiler to burn micronized coal. In support of this overall objective, the following specific areas were targeted: a coal handling/preparation system that can meet the technical requirements for retrofitting microfine coal on a boiler designed for burning oil or natural gas; maintaining boiler thermal performance in accordance with specifications when burning oil or natural gas; maintaining NOx emissions at or below 0.6 lb/MBtu; achieving combustion efficiencies of 98% or higher; and calculating economic payback periods as a function of key variables. The overall program has consisted of five major tasks: (1) a review of current state-of-the-art coal firing system components; (2) design and experimental testing of a prototype HEACC burner; (3) installation and testing of a HEACC system in a commercial retrofit application; (4) economic evaluation of the HEACC concept for retrofit applications; and (5) long-term demonstration under commercial user demand conditions. This paper will summarize the latest key experimental results (Task 3) and the economic evaluation (Task 4) of the HEACC concept for retrofit applications. 28 figs., 6 tabs.
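As a hedged illustration of the payback-period calculation named in the task list (the program's actual economic model and numbers are not given in this summary), a minimal estimate divides the retrofit capital cost by the net annual savings from the gas/coal price differential; every value below is a placeholder.

```python
# Illustrative sketch only: simple payback for a gas/oil-to-micronized-coal
# boiler retrofit, driven by the fuel-price differential. All numbers invented.
def simple_payback_years(retrofit_cost_usd,
                         heat_input_mbtu_per_yr,
                         gas_price_usd_per_mbtu,
                         coal_price_usd_per_mbtu,
                         extra_om_usd_per_yr):
    """Payback = capital cost / net annual savings."""
    fuel_savings = heat_input_mbtu_per_yr * (gas_price_usd_per_mbtu - coal_price_usd_per_mbtu)
    net_savings = fuel_savings - extra_om_usd_per_yr
    return float("inf") if net_savings <= 0 else retrofit_cost_usd / net_savings

# Example: 2,000,000 USD retrofit, 200,000 MBtu/yr heat input,
# 4 USD/MBtu gas vs. 2 USD/MBtu coal, 150,000 USD/yr added O&M -> 8.0 years.
print(f"{simple_payback_years(2.0e6, 2.0e5, 4.0, 2.0, 1.5e5):.1f} years")
```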
Kiloampere, Variable-Temperature, Critical-Current Measurements of High-Field Superconductors
Goodrich, LF; Cheggour, N; Stauffer, TC; Filla, BJ; Lu, XF
2013-01-01
We review variable-temperature, transport critical-current (Ic) measurements made on commercial superconductors over a range of critical currents from less than 0.1 A to about 1 kA. We have developed and used a number of systems to make these measurements over the last 15 years. Two exemplary variable-temperature systems with coil sample geometries will be described: a probe that is only variable-temperature and a probe that is variable-temperature and variable-strain. The most significant challenge for these measurements is temperature stability, since large amounts of heat can be generated by the flow of high current through the resistive sample fixture. Therefore, a significant portion of this review is focused on the reduction of temperature errors to less than ±0.05 K in such measurements. A key feature of our system is a pre-regulator that converts a flow of liquid helium to gas and heats the gas to a temperature close to the target sample temperature. The pre-regulator is not in close proximity to the sample and it is controlled independently of the sample temperature. This allows us to independently control the total cooling power, and thereby fine tune the sample cooling power at any sample temperature. The same general temperature-control philosophy is used in all of our variable-temperature systems, but the addition of another variable, such as strain, forces compromises in design and results in some differences in operation and protocol. These aspects are analyzed to assess the extent to which the protocols for our systems might be generalized to other systems at other laboratories. Our approach to variable-temperature measurements is also placed in the general context of measurement-system design, and the perceived advantages and disadvantages of design choices are presented. To verify the accuracy of the variable-temperature measurements, we compared critical-current values obtained on a specimen immersed in liquid helium (“liquid” or Ic liq) at 5 K to those measured on the same specimen in flowing helium gas (“gas” or Ic gas) at the same temperature. These comparisons indicate the temperature control is effective over the superconducting wire length between the voltage taps, and this condition is valid for all types of sample investigated, including Nb-Ti, Nb3Sn, and MgB2 wires. The liquid/gas comparisons are used to study the variable-temperature measurement protocol that was necessary to obtain the “correct” critical current, which was assumed to be the Ic liq. We also calibrated the magnetoresistance effect of resistive thermometers for temperatures from 4 K to 35 K and magnetic fields from 0 T to 16 T. This calibration reduces systematic errors in the variable-temperature data, but it does not affect the liquid/gas comparison since the same thermometers are used in both cases. PMID:26401435
Human embryo culture media comparisons.
Pool, Thomas B; Schoolfield, John; Han, David
2012-01-01
Every program of assisted reproduction strives to maximize pregnancy outcomes from in vitro fertilization (IVF), and selecting an embryo culture medium, or medium pair, consistent with high success rates is key to this process. The common approach is to replace an existing medium with a new one of interest in the overall culture system and then perform enough cycles of IVF to see if a difference is noted, both in laboratory measures of embryo quality and in pregnancy. This approach may allow a laboratory to select one medium over another, but the outcomes are only relevant to that program, given that there are well over 200 other variables that may influence the results in an IVF cycle. A study design is suggested that allows a more global application of IVF results, with culture medium composition as the single variable. To perform a study of this design, the center must have a patient caseload appropriate to meet study entrance criteria, success rates high enough to reveal a difference if one exists, and a strong program of quality assurance and control in both the laboratory and clinic. Sibling oocytes are randomized to two study arms and embryos are evaluated on day 3 for quality grades. Inter- and intra-observer variability are evaluated by kappa statistics, and statistical power and study size estimates are performed to bring discriminatory capability to the study. Finally, the complications associated with extending such a study to include blastocyst production on day 5 or 6 are enumerated.
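The kappa statistic mentioned for inter- and intra-observer agreement can be computed from a confusion matrix of embryo quality grades; the sketch below uses an invented matrix purely for illustration, not study data.

```python
# Minimal Cohen's kappa from a grade-by-grade confusion matrix (invented counts).
import numpy as np

def cohens_kappa(confusion):
    confusion = np.asarray(confusion, dtype=float)
    n = confusion.sum()
    p_observed = np.trace(confusion) / n
    p_expected = np.sum(confusion.sum(axis=0) * confusion.sum(axis=1)) / n**2
    return (p_observed - p_expected) / (1.0 - p_expected)

# Rows: grades assigned by observer 1; columns: grades assigned by observer 2.
counts = [[30,  5,  1],
          [ 4, 25,  6],
          [ 1,  3, 25]]
print(f"kappa = {cohens_kappa(counts):.2f}")   # ~0.70 for these counts
```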
Psychological Predictors of Outcomes with Lumbar Spinal Fusion: A Systematic Literature Review.
Wilhelm, Mark; Reiman, Michael; Goode, Adam; Richardson, William; Brown, Christopher; Vaughn, Daniel; Cook, Chad
2017-04-01
To review the predictive/risk psychological factors at baseline that are associated with a favourable (or non-favourable) outcome following lumbar spinal fusion (LSF). A computer-assisted literature search of PubMed, CINAHL Complete and EMBASE was conducted for studies published between January 1, 1990 and October 1, 2014, with controlled vocabulary and key words related to LSF, degenerative lumbar spine diagnoses and appropriate terms for predictive variables. Each study was required to be a retrospective or prospective design that involved LSF (all forms). Quality assessment was conducted with the Quality In Prognosis Studies tool. A study protocol was registered with the International Prospective Register of Systematic Reviews (PROSPERO# CRD42014008728). The majority of the eight accepted studies were observational, prospective cohorts (n = 6). High levels of baseline depression and lower SF-36 Mental Component Scores (MCS), reflecting lower mental health-related quality of life, were associated with non-favourable outcomes. Two studies were rated as high quality, five were moderate and one study had low quality. At present, there are a number of psychological variables that are associated with a poorer outcome with LSF. Higher levels of depression and lower scores on the SF-36 MCS are the most commonly implicated. However, based on the results of the studies using single-arm designs, there is not enough evidence to determine which psychological variables are influential in predicting outcomes for LSF. Copyright © 2015 John Wiley & Sons, Ltd.
Lu, Yi; Cai, Hui; Bosch, Sheila J
2017-07-01
This study examined how the spatial characteristics of patient beds, which are influenced by patient room design and nursing unit configuration, affect patients' perceptions about privacy. In the hospital setting, most patients expect a certain degree of privacy but also understand that their caregivers need appropriate access to them in order to provide high-quality care. Even veteran healthcare designers may struggle to create just the right balance between privacy and accessibility. A paper-based survey was conducted with 159 participants in Hong Kong, of whom 72 (45.3%) had been hospitalized and 87 (54.7%) had not, to document their selection of high-privacy beds, given simplified plans of eight nursing units. Two types of information, comprising six variables, were examined for each bed: (1) room-level variables, specifically the number of beds per room and area per bed, and (2) relational variables, including walking distance, directional change, integration, and control. The results demonstrate that when asked to identify high-privacy beds, participants selected beds in patient rooms with fewer beds per room, a larger area per bed, and a longer walking distance to the care team workstation. Interestingly, the participants who had been hospitalized also chose beds with a visual connection to the care team workstation as being high in privacy. The participants with hospitalization experience may be willing to accept a bed with reduced visual privacy, perhaps out of a concern for safety.
CFRP variable curvature mirror used for realizing non-moving-element optical zoom imaging
NASA Astrophysics Data System (ADS)
Zhao, Hui; Fan, Xuewu; Pang, Zhihai; Ren, Guorui; Wang, Wei; Xie, Yongjie; Ma, Zhen; Du, Yunfei; Su, Yu; Wei, Jingxuan
2014-12-01
In recent years, how to eliminate moving elements while realizing optical zoom imaging has received much attention. Compared with conventional optical zooming techniques, removing moving elements brings many benefits, such as reductions in weight, volume, and power consumption. The key to implementing non-moving-element optical zooming lies in the design of the variable curvature mirror (VCM). In order to obtain a sufficiently large optical magnification, the VCM should be capable of generating a large variation of sagitta. Hence, the mirror material should not be brittle; in other words, its ultimate strength should be high enough to ensure that the mirror surface is not broken during large curvature variations. In addition, the material should have a relatively low Young's modulus, so that less force is required to generate a given deformation. Among available mirror materials such as SiC and Zerodur, CFRP (carbon fiber reinforced polymer) satisfies all these requirements, and much related research has proven this. In this paper, a CFRP VCM is designed, fabricated and tested. With a diameter of 100 mm, a thickness of 2 mm and an initial curvature radius of 1740 mm, this component can change its curvature radius from 1705 mm to 1760 mm, which corresponds to a sagitta variation of nearly 23 μm. The work reported further demonstrates the suitability of CFRP for constructing variable curvature mirrors capable of generating a large sagitta variation.
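The reported figures are mutually consistent with the spherical sagitta formula: for a clear aperture of diameter D = 100 mm,

\[
s(R) = R - \sqrt{R^{2} - (D/2)^{2}}, \qquad
s(1705\ \text{mm}) \approx 0.733\ \text{mm}, \qquad
s(1760\ \text{mm}) \approx 0.710\ \text{mm},
\]

so the curvature-radius range of 1705-1760 mm indeed corresponds to a sagitta change of about 23 μm.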
An initiative in multidisciplinary optimization of rotorcraft
NASA Technical Reports Server (NTRS)
Adelman, Howard M.; Mantay, Wayne R.
1989-01-01
Described is a joint NASA/Army initiative at the Langley Research Center to develop optimization procedures aimed at improving the rotor blade design process by integrating appropriate disciplines and accounting for important interactions among the disciplines. The activity is being guided by a Steering Committee made up of key NASA and Army researchers and managers. The committee, which has been named IRASC (Integrated Rotorcraft Analysis Steering Committee), has defined two principal foci for the activity: a white paper which sets forth the goals and plans of the effort; and a rotor design project which will validate the basic constituents, as well as the overall design methodology for multidisciplinary optimization. The optimization formulation is described in terms of the objective function, design variables, and constraints. Additionally, some of the analysis aspects are discussed and an initial attempt at defining the interdisciplinary couplings is summarized. At this writing, some significant progress has been made, principally in the areas of single discipline optimization. Results are given which represent accomplishments in rotor aerodynamic performance optimization for minimum hover horsepower, rotor dynamic optimization for vibration reduction, and rotor structural optimization for minimum weight.
An initiative in multidisciplinary optimization of rotorcraft
NASA Technical Reports Server (NTRS)
Adelman, Howard M.; Mantay, Wayne R.
1988-01-01
Described is a joint NASA/Army initiative at the Langley Research Center to develop optimization procedures aimed at improving the rotor blade design process by integrating appropriate disciplines and accounting for important interactions among the disciplines. The activity is being guided by a Steering Committee made up of key NASA and Army researchers and managers. The committee, which has been named IRASC (Integrated Rotorcraft Analysis Steering Committee), has defined two principal foci for the activity: a white paper which sets forth the goals and plans of the effort; and a rotor design project which will validate the basic constituents, as well as the overall design methodology for multidisciplinary optimization. The paper describes the optimization formulation in terms of the objective function, design variables, and constraints. Additionally, some of the analysis aspects are discussed and an initial attempt at defining the interdisciplinary couplings is summarized. At this writing, some significant progress has been made, principally in the areas of single discipline optimization. Results are given which represent accomplishments in rotor aerodynamic performance optimization for minimum hover horsepower, rotor dynamic optimization for vibration reduction, and rotor structural optimization for minimum weight.
Basic principles of stability.
Egan, William; Schofield, Timothy
2009-11-01
An understanding of the principles of degradation, as well as the statistical tools for measuring product stability, is essential to management of product quality. Key to this is management of vaccine potency. Vaccine shelf life is best managed through determination of a minimum potency release requirement, which helps assure adequate potency throughout expiry. Statistical tools such as least squares regression analysis should be employed to model potency decay. The use of such tools provides an incentive to properly design vaccine stability studies, while holding stability measurements to specification presents a disincentive for collecting valuable data. The laws of kinetics, such as Arrhenius behavior, help practitioners design effective accelerated stability programs, which can be utilized to manage stability after a process change. Design of stability studies should be carefully considered, with an eye to minimizing the variability of the stability parameter. In the case of measuring the degradation rate, testing at the beginning and the end of the study improves the precision of this estimate. Additional design considerations such as bracketing and matrixing improve the efficiency of stability evaluation of vaccines.
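The Arrhenius behavior referred to above relates a degradation rate constant \(k\) to absolute temperature \(T\), which is what allows accelerated-stability data to be extrapolated to storage conditions (the kinetic order and parameters are product-specific):

\[
k(T) = A \exp\!\left(-\frac{E_a}{RT}\right),
\qquad
\ln\frac{k(T_1)}{k(T_2)} = \frac{E_a}{R}\left(\frac{1}{T_2} - \frac{1}{T_1}\right),
\]

where \(E_a\) is the activation energy and \(R\) the gas constant.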
Supply chain network design problem for a new market opportunity in an agile manufacturing system
NASA Astrophysics Data System (ADS)
Babazadeh, Reza; Razmi, Jafar; Ghodsi, Reza
2012-08-01
The characteristics of today's competitive environment, such as the speed with which products are designed, manufactured, and distributed, and the need for higher responsiveness and lower operational cost, are forcing companies to search for innovative ways to do business. The concept of agile manufacturing has been proposed in response to these challenges for companies. This paper addresses strategic- and tactical-level decisions in agile supply chain network design. An efficient mixed-integer linear programming model is developed that is able to consider the key characteristics of an agile supply chain, such as direct shipments, outsourcing, different transportation modes, discounts, alliances (process and information integration) between opened facilities, and the maximum waiting time of customers for deliveries. In addition, in the proposed model, facility capacities are treated as decision variables, whereas they are often assumed to be fixed. Computational results illustrate that the proposed model can be applied as a powerful tool in agile supply chain network design as well as in the integration of strategic decisions with tactical decisions.
Technology-design-manufacturing co-optimization for advanced mobile SoCs
NASA Astrophysics Data System (ADS)
Yang, Da; Gan, Chock; Chidambaram, P. R.; Nallapadi, Giri; Zhu, John; Song, S. C.; Xu, Jeff; Yeap, Geoffrey
2014-03-01
How to maintain Moore's Law scaling beyond the 193 nm immersion lithography resolution limit is the key question the semiconductor industry needs to answer in the near future. Process complexity will undoubtedly increase for the 14 nm node and beyond, which brings both challenges and opportunities for technology development. A vertically integrated design-technology-manufacturing co-optimization flow is desired to better address the complicated issues new process changes bring. In recent years smart mobile wireless devices have been the fastest growing consumer electronics market. Advanced mobile devices such as smartphones are complex systems with the overriding objective of providing the best user-experience value by harnessing all the technology innovations. The most critical system drivers are better system performance/power efficiency, cost effectiveness, and smaller form factors, which, in turn, drive the need for system design and solutions with More-than-Moore innovations. Mobile systems-on-chip (SoCs) have become the leading driver for semiconductor technology definition and manufacturing. Here we highlight how the co-optimization strategy influenced architecture, device/circuit, process technology and package, in the face of growing process cost/complexity and variability as well as design rule restrictions.
Outsourcing decision factors in publicly owned electric utilities
NASA Astrophysics Data System (ADS)
Gonzales, James Edward
Purpose. The outsourcing of services in publicly owned electric utilities has generated some controversy. The purpose of this study was to explore this controversy by investigating the relationships between eight key independent variables and a dependent variable, "manager perceptions of overall value of outsourced services." The intent was to provide data so that utilities could make better decisions regarding outsourcing efforts. Theoretical framework. Decision theory was used as the framework for analyzing variables and alternatives used to support the outsourcing decision-making process. By reviewing these eight variables and the projected outputs and outcomes, a more predictive and potentially successful outsourcing effort can be realized. Methodology. A survey was distributed to a sample of 323 publicly owned electric utilities randomly selected from a population of 2,020 in the United States. The data were analyzed using statistical techniques including chi-square, lambda, and Spearman's rank correlation coefficient, together with hypothesis tests of rank correlation, to test for relationships among the variables. Findings. Relationships among the eight key variables and perceptions of the overall value of outsourced services were generally weak. The notable exception was with the driving force (reason) for outsourcing decisions, where the relationship was strongly positive. Conclusions and recommendations. The data in support of the research questions suggest that seven of the eight key variables may be weakly predictive of perceptions of the overall value of outsourced services. However, the primary driving force for outsourcing was strongly predictive. The data also suggest that many of the sampled utilities did not formally address these variables and alternatives, and therefore may not be achieving maximal results. Further studies utilizing customer perceptions rather than those of outsourcing service managers are recommended. In addition, it is recommended that a smaller sample population be analyzed after identifying one or more champions to ensure cooperation and legitimacy of data. Finally, this study supports the position that a manager's ability to identify and understand the relationships between these eight key variables and desired outcomes and outputs may contribute to more successful outsourcing operations.
Social capital and trust in providers.
Ahern, Melissa M; Hendryx, Michael S
2003-10-01
Trust in providers has been in decline in recent decades. This study attempts to identify sources of trust in characteristics of health care systems and the wider community. The design is cross-sectional. Data are from (1) the 1996 Household Survey of the Community Tracking Study, drawn from 24 Metropolitan Statistical Areas; (2) a 1996 multi-city broadcast media marketing database including key social capital indicators; (3) Interstudy; (4) the American Hospital Association; and (5) the American Medical Association. Independent variables include individual socio-demographic variables, HMO enrollment, community-level health sector variables, and social capital. The dependent variable is self-reported trust in physicians. Data are merged from the various sources and analyzed using SUDAAN. Subjects include adults in the Household Survey who responded to the items on trust in physicians (N=17,653). Trust in physicians is independently predicted by community social capital (p<0.001). Trust is also negatively related to HMO enrollment and to many individual characteristics. The effect of HMOs is not uniform across all communities. Social capital plays a role in how health care is perceived by citizens, and how health care is delivered by providers. Efforts to build trust and collaboration in a community may improve trust in physicians, health care quality, access, and preserve local health care control.
Zhang, Zheshen; Voss, Paul L
2009-07-06
We propose a continuous variable based quantum key distribution protocol that makes use of discretely signaled coherent light and reverse error reconciliation. We present a rigorous security proof against collective attacks with realistic lossy, noisy quantum channels, imperfect detector efficiency, and detector electronic noise. This protocol is promising for convenient, high-speed operation at link distances up to 50 km with the use of post-selection.
Improvement of two-way continuous-variable quantum key distribution with virtual photon subtraction
NASA Astrophysics Data System (ADS)
Zhao, Yijia; Zhang, Yichen; Li, Zhengyu; Yu, Song; Guo, Hong
2017-08-01
We propose a method to improve the performance of the two-way continuous-variable quantum key distribution protocol by virtual photon subtraction. The virtual photon subtraction, implemented via non-Gaussian post-selection, not only enhances the entanglement of the two-mode squeezed vacuum state but also has advantages in simplifying the physical operation and improving efficiency. In the two-way protocol, virtual photon subtraction can be applied to the two sources independently. Numerical simulations show that the optimal performance of the renovated two-way protocol is obtained with photon subtraction used only by Alice. The transmission distance and tolerable excess noise are improved by using virtual photon subtraction with appropriate parameters. Moreover, the tolerable excess noise maintains a high value as the distance increases, so that the robustness of the two-way continuous-variable quantum key distribution system is significantly improved, especially at long transmission distances.
NASA Technical Reports Server (NTRS)
1995-01-01
The design of a High-Speed Civil Transport (HSCT) air-breathing propulsion system for multimission, variable-cycle operations was successfully optimized through a soft coupling of the engine performance analyzer NASA Engine Performance Program (NEPP) to the multidisciplinary optimization tool COMETBOARDS, which was developed at the NASA Lewis Research Center. The design optimization of this engine was cast as a nonlinear optimization problem, with engine thrust as the merit function and the bypass ratios, r-values of fans, fuel flow, and other factors as important active design variables. Constraints were specified on factors including the maximum speed of the compressors, the positive surge margins for the compressors with specified safety factors, the discharge temperature, the pressure ratios, and the mixer extreme Mach number. Solving the problem by using the most reliable optimization algorithm available in COMETBOARDS would provide feasible optimum results only for a portion of the aircraft flight regime because of the large number of mission points (defined by altitudes, Mach numbers, flow rates, and other factors), diverse constraint types, and overall poor conditioning of the design space. Only the cascade optimization strategy of COMETBOARDS, which was devised especially for difficult multidisciplinary applications, could successfully solve a number of engine design problems for their flight regimes. Furthermore, the cascade strategy converged to the same global optimum solution even when it was initiated from different design points. Multiple optimizers in a specified sequence, pseudorandom damping, and reduction of the design space distortion via a global scaling scheme are some of the key features of the cascade strategy. A COMETBOARDS solution for an HSCT engine (Mach-2.4 mixed-flow turbofan), along with its configuration, is shown. The optimum thrust is normalized with respect to NEPP results. COMETBOARDS added value in the design optimization of the HSCT engine.
NASA Astrophysics Data System (ADS)
Reid, M. D.
2000-12-01
Correlations of the type discussed by EPR in their original 1935 paradox for continuous variables exist for the quadrature phase amplitudes of two spatially separated fields. These correlations were first experimentally reported in 1992. We propose to use such EPR beams in quantum cryptography, to transmit with high efficiency messages in such a way that the receiver and sender may later determine whether eavesdropping has occurred. The merit of the new proposal is in the possibility of transmitting a reasonably secure yet predetermined key. This would allow relay of a cryptographic key over long distances in the presence of lossy channels.
Robust shot-noise measurement for continuous-variable quantum key distribution
NASA Astrophysics Data System (ADS)
Kunz-Jacques, Sébastien; Jouguet, Paul
2015-02-01
We study a practical method to measure the shot noise in real time in continuous-variable quantum key distribution systems. The amount of secret key that can be extracted from the raw statistics depends strongly on this quantity since it affects in particular the computation of the excess noise (i.e., noise in excess of the shot noise) added by an eavesdropper on the quantum channel. Some powerful quantum hacking attacks relying on faking the estimated value of the shot noise to hide an intercept and resend strategy were proposed. Here, we provide experimental evidence that our method can defeat the saturation attack and the wavelength attack.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blandino, Rémi; Etesse, Jean; Grangier, Philippe
2014-12-04
We show that the maximum transmission distance of continuous-variable quantum key distribution in the presence of a Gaussian noisy lossy channel can be arbitrarily increased using a heralded noiseless linear amplifier. We explicitly consider a protocol using amplitude- and phase-modulated coherent states with reverse reconciliation. Assuming that the secret key rate drops to zero for a line transmittance T_lim, we find that a noiseless amplifier with amplitude gain g can improve this value to T_lim/g^2, corresponding to an increase in distance proportional to log g. We also show that the tolerance against noise is increased.
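The stated "increase in distance proportional to log g" follows from the usual exponential fiber-loss model; assuming a loss coefficient α in dB/km (not specified in this summary),

\[
T = 10^{-\alpha d/10}
\;\Rightarrow\;
d_{\max} = -\frac{10}{\alpha}\log_{10} T_{\mathrm{lim}},
\qquad
T_{\mathrm{lim}} \to \frac{T_{\mathrm{lim}}}{g^{2}}
\;\Rightarrow\;
\Delta d = \frac{20}{\alpha}\log_{10} g,
\]

e.g. about 30 km of extra reach for g = 2 if α = 0.2 dB/km is assumed.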
Gaussian-modulated coherent-state measurement-device-independent quantum key distribution
NASA Astrophysics Data System (ADS)
Ma, Xiang-Chun; Sun, Shi-Hai; Jiang, Mu-Sheng; Gui, Ming; Liang, Lin-Mei
2014-04-01
Measurement-device-independent quantum key distribution (MDI-QKD), leaving the detection procedure to the third partner and thus being immune to all detector side-channel attacks, is very promising for the construction of high-security quantum information networks. We propose a scheme to implement MDI-QKD, but with continuous variables instead of discrete ones, i.e., with the source of Gaussian-modulated coherent states, based on the principle of continuous-variable entanglement swapping. This protocol not only can be implemented with current telecom components but also has high key rates compared to its discrete counterpart; thus it will be highly compatible with quantum networks.
High performance frame synchronization for continuous variable quantum key distribution systems.
Lin, Dakai; Huang, Peng; Huang, Duan; Wang, Chao; Peng, Jinye; Zeng, Guihua
2015-08-24
In a practical continuous-variable quantum key distribution (CVQKD) system, synchronization is of significant importance, as it is hardly possible to extract secret keys from unsynchronized strings. In this paper, we propose a high-performance frame synchronization method for CVQKD systems which is capable of operating under low signal-to-noise ratios (SNRs) and is compatible with the random phase shift induced by the quantum channel. A practical implementation of this method with low complexity is presented and its performance is analysed. By adjusting the length of the synchronization frame, this method can work well over a large range of SNR values, which paves the way for longer-distance CVQKD.
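The abstract does not detail the authors' algorithm, but the generic idea behind frame synchronization at low SNR can be illustrated by sliding a publicly known synchronization frame over the received sequence and picking the correlation peak. The sketch below shows only that generic idea, with invented signal levels and lengths, and does not model the random phase rotation the paper addresses.

```python
# Hedged illustration (not the authors' method): locate a known sync frame in a
# noisy received quadrature sequence by sliding cross-correlation.
import numpy as np

rng = np.random.default_rng(1)
sync_frame = rng.choice([-1.0, 1.0], size=512)       # public sync pattern
payload = rng.normal(size=4096)                      # stand-in for key data
true_offset = 1500
tx = np.concatenate([payload[:true_offset], sync_frame, payload[true_offset:]])
rx = 0.5 * tx + rng.normal(size=tx.size)             # attenuated, noisy channel

# Slide the known frame over the received sequence and pick the correlation peak.
corr = np.correlate(rx, sync_frame, mode="valid")
estimated_offset = int(np.argmax(np.abs(corr)))
print(estimated_offset, estimated_offset == true_offset)
```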
Collective attacks and unconditional security in continuous variable quantum key distribution.
Grosshans, Frédéric
2005-01-21
We present here an information theoretic study of Gaussian collective attacks on the continuous variable key distribution protocols based on Gaussian modulation of coherent states. These attacks, overlooked in previous security studies, give a finite advantage to the eavesdropper in the experimentally relevant lossy channel, but are not powerful enough to reduce the range of the reverse reconciliation protocols. Secret key rates are given for the ideal case where Bob performs optimal collective measurements, as well as for the realistic cases where he performs homodyne or heterodyne measurements. We also apply the generic security proof of Christandl et al. to obtain unconditionally secure rates for these protocols.
Design optimization for cost and quality: The robust design approach
NASA Technical Reports Server (NTRS)
Unal, Resit
1990-01-01
Designing reliable, low cost, and operable space systems has become the key to future space operations. Designing high quality space systems at low cost is an economic and technological challenge to the designer. A systematic and efficient way to meet this challenge is a new method of design optimization for performance, quality, and cost, called Robust Design. Robust Design is an approach for design optimization. It consists of: making system performance insensitive to material and subsystem variation, thus allowing the use of less costly materials and components; making designs less sensitive to the variations in the operating environment, thus improving reliability and reducing operating costs; and using a new structured development process so that engineering time is used most productively. The objective in Robust Design is to select the best combination of controllable design parameters so that the system is most robust to uncontrollable noise factors. The robust design methodology uses a mathematical tool called an orthogonal array, from design of experiments theory, to study a large number of decision variables with a relatively small number of experiments. Robust design also uses a statistical measure of performance, called a signal-to-noise ratio, from electrical control theory, to evaluate the level of performance and the effect of noise factors. The purpose is to investigate the Robust Design methodology for improving quality and cost, demonstrate its application by the use of an example, and suggest its use as an integral part of the space system design process.
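The signal-to-noise ratios used in robust design are typically the Taguchi forms; which of these the report applies to a given metric is not stated here. For n replicate observations \(y_1,\dots,y_n\) with mean \(\bar{y}\) and variance \(s^2\),

\[
\text{larger-the-better: } SN = -10\log_{10}\!\left(\frac{1}{n}\sum_{i=1}^{n}\frac{1}{y_i^{2}}\right),\qquad
\text{smaller-the-better: } SN = -10\log_{10}\!\left(\frac{1}{n}\sum_{i=1}^{n} y_i^{2}\right),\qquad
\text{nominal-the-best: } SN = 10\log_{10}\!\left(\frac{\bar{y}^{2}}{s^{2}}\right).
\]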
Sah, Jay P.; Ross, Michael S.; Snyder, James R.; Ogurcak, Danielle E.
2010-01-01
In fire-dependent forests, managers are interested in predicting the consequences of prescribed burning on postfire tree mortality. We examined the effects of prescribed fire on tree mortality in Florida Keys pine forests, using a factorial design with understory type, season, and year of burn as factors. We also used logistic regression to model the effects of burn season, fire severity, and tree dimensions on individual tree mortality. Despite limited statistical power due to problems in carrying out the full suite of planned experimental burns, associations with tree and fire variables were observed. Post-fire pine tree mortality was negatively correlated with tree size and positively correlated with char height and percent crown scorch. Unlike post-fire mortality, tree mortality associated with storm surge from Hurricane Wilma was greater in the large size classes. Due to their influence on population structure and fuel dynamics, the size-selective mortality patterns following fire and storm surge have practical importance for using fire as a management tool in Florida Keys pinelands in the future, particularly when the threats to their continued existence from tropical storms and sea level rise are expected to increase.
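A logistic model of the form described would look like the following, where the covariate list paraphrases the abstract (tree size, char height, percent crown scorch, burn season) and the exact coding is an assumption rather than the authors' specification:

\[
\operatorname{logit}\!\bigl(p_{\text{mortality}}\bigr)
= \beta_{0} + \beta_{1}\,(\text{tree size}) + \beta_{2}\,(\text{char height}) + \beta_{3}\,(\text{crown scorch}) + \beta_{4}\,(\text{burn season}),
\]

with the reported associations corresponding to \(\beta_{1} < 0\) and \(\beta_{2}, \beta_{3} > 0\).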
Multi-time Scale Coordination of Distributed Energy Resources in Isolated Power Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mayhorn, Ebony; Xie, Le; Butler-Purry, Karen
2016-03-31
In isolated power systems, including microgrids, distributed assets, such as renewable energy resources (e.g. wind, solar) and energy storage, can be actively coordinated to reduce dependency on fossil fuel generation. The key challenge of such coordination arises from significant uncertainty and variability occurring at small time scales associated with increased penetration of renewables. Specifically, the problem is to ensure economic and efficient utilization of DERs while also meeting operational objectives such as adequate frequency performance. One possible solution is to reduce the time step at which tertiary controls are implemented and to ensure feedback and look-ahead capability are incorporated to handle variability and uncertainty. However, reducing the time step of tertiary controls necessitates investigating time-scale coupling with primary controls so as not to exacerbate system stability issues. In this paper, an optimal coordination (OC) strategy, which considers multiple time scales, is proposed for isolated microgrid systems with a mix of DERs. This coordination strategy is based on an online moving horizon optimization approach. The effectiveness of the strategy was evaluated in terms of economics, technical performance, and computation time by varying key parameters that significantly impact performance. The illustrative example with realistic scenarios on a simulated isolated microgrid test system suggests that the proposed approach is generalizable towards designing multi-time scale optimal coordination strategies for isolated power systems.
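To make the moving-horizon idea concrete, the sketch below solves a small look-ahead dispatch at every step and applies only the first decision. The forecast, limits, costs, and the linear-program formulation are placeholders, not the paper's test system or objective.

```python
# Hedged sketch of a receding-horizon dispatch loop: at each step a small LP
# schedules storage over a short window to minimize fossil generation, and only
# the first decision is applied. All numbers are invented.
import numpy as np
from scipy.optimize import linprog

horizon, dt = 6, 1.0                       # look-ahead steps, hours per step
net_load = np.array([2.0, 3.0, 4.0, 1.0, 0.5, 2.5, 3.5, 1.5, 1.0, 2.0])  # load - renewables (kW)
p_max, e_max, soc = 2.0, 4.0, 2.0          # storage power limit, energy limit, initial energy (kWh)

for t in range(len(net_load) - horizon):
    w = net_load[t:t + horizon]
    # Decision vector: [p_dis(0..H-1), p_gen(0..H-1)]; minimize total fossil generation.
    c = np.concatenate([np.zeros(horizon), np.ones(horizon)])
    # Power balance: storage discharge plus fossil generation must cover the net load.
    A_ub = np.hstack([-np.eye(horizon), -np.eye(horizon)])
    b_ub = -w
    # Cumulative discharged energy cannot exceed the energy currently in storage.
    A_ub = np.vstack([A_ub, np.hstack([np.tril(np.ones((horizon, horizon))) * dt,
                                       np.zeros((horizon, horizon))])])
    b_ub = np.concatenate([b_ub, np.full(horizon, soc)])
    bounds = [(-p_max, p_max)] * horizon + [(0, None)] * horizon
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    p_dis = res.x[0]                        # apply only the first storage decision
    soc = min(e_max, soc - p_dis * dt)
    print(f"t={t}: discharge={p_dis:+.2f} kW, fossil={max(w[0] - p_dis, 0):.2f} kW, soc={soc:.2f} kWh")
```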
Native Peoples-Native Homelands Climate Change Workshop: Lessons Learned
NASA Technical Reports Server (NTRS)
Maynard, Nancy G.
2003-01-01
The Native Peoples-Native Homelands Climate Change Workshop was held October 28 through November 1, 1998, as part of a series of workshops being held around the U.S. to improve the understanding of the potential consequences of climate variability and change for the Nation. This workshop was specifically designed by Native Peoples to examine the impacts of climate change and extreme weather variability on Native Peoples and Native Homelands from an indigenous cultural and spiritual perspective and to develop recommendations as well as identify potential response actions. The workshop brought together interested Native Peoples, representatives of Tribal governments, traditional elders, Tribal leaders, natural resource managers, Tribal College faculty and students, and climate scientists from government agencies and universities. It is clear that Tribal colleges and universities play a unique and critical role in the success of these emerging partnerships for decision-making, in addition to their important education function for both Native and non-Native communities, such as serving as a culturally appropriate vehicle for access, analysis, control, and protection of indigenous cultural and intellectual property. During the discussions between scientists and policy-makers from both Native and non-Native communities, a number of important lessons emerged which are key to building more effective partnerships between Native and non-Native communities for collaboration and decision-making for a more sustainable future. This talk summarizes the key issues, recommendations, and lessons learned during this workshop.
NASA Astrophysics Data System (ADS)
Heslop, Emma; Aguiar, Eva; Mourre, Baptiste; Juza, Mélanie; Escudier, Romain; Tintoré, Joaquín
2017-04-01
The Ibiza Channel plays an important role in the circulation of the Western Mediterranean Sea: it governs the north/south exchange of different water masses that are known to affect regional ecosystems and is influenced by variability in the different drivers that affect the sub-basins to the north (N) and south (S), making it a complex system. In this study we use a multi-platform approach to resolve the key drivers of this variability and gain insight into the inter-connection between the N and S of the Western Mediterranean Sea through this choke point. The 6-year glider time series from the quasi-continuous glider endurance line monitoring of the Ibiza Channel, undertaken by SOCIB (Balearic Coastal Ocean Observing and Forecasting System), is used as the base from which to identify key sub-seasonal to inter-annual patterns and shifts in water mass properties and transport volumes. The glider data indicate the following key components in the variability of the N/S flow of different water masses through the channel: regional winter mode water production, changes in intermediate water mass properties, northward flows of a fresher water mass, and the basin-scale circulation. To resolve the drivers of these components of variability, the strength of combining datasets from different sources (glider, modelling, altimetry and moorings) is harnessed. To the north, atmospheric forcing in the Gulf of Lions is a dominant driver, while to the south the mesoscale circulation patterns of the Atlantic Jet and Alboran gyres dominate the variability but do not appear to influence the fresher inflows. Evidence of a connection between the northern and southern sub-basins is nevertheless indicated. The study highlights the importance of sub-seasonal variability and the scale of rapid change possible in the Mediterranean, as well as the benefits of leveraging high-resolution glider datasets within a multi-platform and modelling study.
Corcoran, Jennifer M.; Knight, Joseph F.; Gallant, Alisa L.
2013-01-01
Wetland mapping at the landscape scale using remotely sensed data requires both affordable data and an efficient, accurate classification method. Random forest classification offers several advantages over traditional land cover classification techniques, including a bootstrapping technique to generate robust estimations of outliers in the training data, as well as the capability of measuring classification confidence. Though the random forest classifier can generate complex decision trees with a multitude of input data and still not run a high risk of overfitting, there is a great need to reduce computational and operational costs by including only key input data sets without sacrificing a significant level of accuracy. Our main questions for this study site in Northern Minnesota were: (1) how do classification accuracy and confidence of mapping wetlands compare using different remote sensing platforms and sets of input data; (2) what are the key input variables for accurate differentiation of upland, water, and wetlands, including wetland type; and (3) which datasets and seasonal imagery yield the best accuracy for wetland classification. Our results show the key input variables include terrain (elevation and curvature) and soils descriptors (hydric), along with an assortment of remotely sensed data collected in the spring (satellite visible, near infrared, and thermal bands; satellite normalized vegetation index and Tasseled Cap greenness and wetness; and horizontal-horizontal (HH) and horizontal-vertical (HV) polarization using L-band satellite radar). We undertook this exploratory analysis to inform decisions by natural resource managers charged with monitoring wetland ecosystems and to aid in designing a system for consistent operational mapping of wetlands across landscapes similar to those found in Northern Minnesota.
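As an illustration of the classification approach described above, the hedged sketch below trains a random forest on stacked terrain, soils, and spring-imagery predictors and reports out-of-bag accuracy, variable importances, and per-sample classification confidence. The predictor names and synthetic data are placeholders, not the study's actual layers.

# Hedged sketch of a random-forest wetland classification workflow with
# out-of-bag error, variable importance, and per-sample confidence.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.normal(300, 20, n),    # elevation (m)
    rng.normal(0, 1, n),       # curvature
    rng.integers(0, 2, n),     # hydric soil flag
    rng.normal(0.4, 0.2, n),   # spring vegetation index
    rng.normal(-12, 3, n),     # L-band HH backscatter (dB)
])
y = rng.integers(0, 3, n)      # 0 = upland, 1 = water, 2 = wetland (toy labels)

rf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0)
rf.fit(X, y)

print("OOB accuracy:", round(rf.oob_score_, 3))
for name, imp in zip(["elev", "curv", "hydric", "NDVI", "HH"], rf.feature_importances_):
    print(f"{name:>6}: {imp:.3f}")              # key-variable ranking
conf = rf.predict_proba(X).max(axis=1)          # classification confidence per sample
print("mean confidence:", round(conf.mean(), 3))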
Kopanz, Julia; Lichtenegger, Katharina M; Sendlhofer, Gerald; Semlitsch, Barbara; Cuder, Gerald; Pak, Andreas; Pieber, Thomas R; Tax, Christa; Brunner, Gernot; Plank, Johannes
2018-02-09
Insulin charts represent a key component in the inpatient glycemic management process. The aim was to evaluate the quality of structure, documentation, and treatment of diabetic inpatient care to design a new standardized insulin chart for a large university hospital setting. Historically grown blank insulin charts in use at 39 general wards were collected and evaluated for quality structure features. Documentation and treatment quality were evaluated in a consecutive snapshot audit of filled-in charts. The primary end point was the percentage of charts with any medication error. Overall, 20 different blank insulin charts with variable designs and significant structural deficits were identified. A medication error occurred in 55% of the 102 audited filled-in insulin charts, consisting of prescription and management errors in 48% and 16%, respectively. Charts of insulin-treated patients had more medication errors relative to patients treated with oral medication (P < 0.01). Chart design supported neither clinical authorization of individual insulin prescriptions (10%), nor insulin administration confirmed by nurses' signature (25%), nor treatment of hypoglycemia (0%), which resulted in reduced documentation and treatment quality in clinical practice (7%, 30%, and 25%, respectively). A multitude of charts with variable design characteristics and structural deficits were in use across the inpatient wards. More than half of the inpatients had a chart displaying a medication error. Lack of structure quality features of the charts had an impact on documentation and treatment quality. Based on identified deficits and international standards, a new insulin chart was developed to overcome these quality hurdles.
Rivers and Floodplains as Key Components of Global Terrestrial Water Storage Variability
NASA Astrophysics Data System (ADS)
Getirana, Augusto; Kumar, Sujay; Girotto, Manuela; Rodell, Matthew
2017-10-01
This study quantifies the contribution of rivers and floodplains to terrestrial water storage (TWS) variability. We use state-of-the-art models to simulate land surface processes and river dynamics and to separate TWS into its main components. Based on a proposed impact index, we show that surface water storage (SWS) contributes 8% of TWS variability globally, but that contribution differs widely among climate zones. Changes in SWS are a principal component of TWS variability in the tropics, where major rivers flow over arid regions and at high latitudes. SWS accounts for 22-27% of TWS variability in both the Amazon and Nile Basins. Changes in SWS are negligible in the Western U.S., Northern Africa, Middle East, and central Asia. Based on comparisons with Gravity Recovery and Climate Experiment-based TWS, we conclude that accounting for SWS improves simulated TWS in most of South America, Africa, and Southern Asia, confirming that SWS is a key component of TWS variability.
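A minimal sketch of the kind of calculation behind such a contribution estimate is given below: the share of total TWS variability attributable to the SWS component is computed as a ratio of temporal standard deviations on synthetic monthly anomalies. The exact impact index proposed in the study may be defined differently; this is only an illustrative stand-in.

# Illustrative stand-in for a storage-component "impact" on TWS variability,
# using synthetic monthly anomalies (mm of equivalent water height).
import numpy as np

rng = np.random.default_rng(1)
months = 240
sws = 30 * np.sin(np.linspace(0, 20 * np.pi, months)) + rng.normal(0, 5, months)
other = 100 * np.sin(np.linspace(0, 20 * np.pi, months) + 0.5) + rng.normal(0, 20, months)
tws = sws + other

impact = sws.std() / tws.std()     # ratio of temporal variability
print(f"SWS share of TWS variability: {100 * impact:.1f}%")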
Optimization of a GO2/GH2 Swirl Coaxial Injector Element
NASA Technical Reports Server (NTRS)
Tucker, P. Kevin; Shyy, Wei; Vaidyanathan, Rajkumar
1999-01-01
An injector optimization methodology, method i, is used to investigate optimal design points for a gaseous oxygen/gaseous hydrogen (GO2/GH2) swirl coaxial injector element. The element is optimized in terms of design variables such as fuel pressure drop, DELTA P(sub f), oxidizer pressure drop, DELTA P(sub o), combustor length, L(sub comb), and full cone swirl angle, theta, for a given mixture ratio and chamber pressure. Dependent variables such as energy release efficiency, ERE, wall heat flux, Q(sub w), injector heat flux, Q(sub inj), relative combustor weight, W(sub rel), and relative injector cost, C(sub rel), are calculated and then correlated with the design variables. An empirical design methodology is used to generate these responses for 180 combinations of input variables. Method i is then used to generate response surfaces for each dependent variable. Desirability functions based on dependent variable constraints are created and used to facilitate development of composite response surfaces representing some, or all, of the five dependent variables in terms of the input variables. Two examples illustrating the utility and flexibility of method i are discussed in detail. First, joint response surfaces are constructed by sequentially adding dependent variables. Optimum designs are identified after addition of each variable and the effect each variable has on the design is shown. This stepwise demonstration also highlights the importance of including variables such as weight and cost early in the design process. Secondly, using the composite response surface that includes all five dependent variables, unequal weights are assigned to emphasize certain variables relative to others. Here, method i is used to enable objective trade studies on design issues such as component life and thrust to weight ratio.
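The response-surface step named above can be sketched as a quadratic polynomial fit of a dependent variable over the design variables. The example below uses synthetic data and two normalized design variables as assumptions; it only shows the mechanics of fitting and evaluating such a surface, not the paper's empirical correlations.

# Hedged sketch: fit a quadratic response surface for one dependent variable
# (e.g. ERE) over two design variables, then evaluate it at a candidate point.
import numpy as np

rng = np.random.default_rng(2)
dpf = rng.uniform(0.1, 0.4, 180)       # normalized fuel pressure drop (assumed range)
dpo = rng.uniform(0.1, 0.4, 180)       # normalized oxidizer pressure drop (assumed range)
ere = 0.95 - 2.0 * (dpf - 0.25) ** 2 - 1.5 * (dpo - 0.3) ** 2 + rng.normal(0, 0.005, 180)

# Quadratic surface: ERE ~ b0 + b1*dpf + b2*dpo + b3*dpf^2 + b4*dpo^2 + b5*dpf*dpo
A = np.column_stack([np.ones_like(dpf), dpf, dpo, dpf**2, dpo**2, dpf * dpo])
coef, *_ = np.linalg.lstsq(A, ere, rcond=None)
print("response-surface coefficients:", np.round(coef, 3))

x = np.array([1, 0.25, 0.30, 0.25**2, 0.30**2, 0.25 * 0.30])
print("predicted ERE at (0.25, 0.30):", round(float(x @ coef), 4))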
On the Asymptotic Relative Efficiency of Planned Missingness Designs.
Rhemtulla, Mijke; Savalei, Victoria; Little, Todd D
2016-03-01
In planned missingness (PM) designs, certain data are set a priori to be missing. PM designs can increase validity and reduce cost; however, little is known about the loss of efficiency that accompanies these designs. The present paper compares PM designs to reduced sample (RN) designs that have the same total number of data points concentrated in fewer participants. In 4 studies, we consider models for both observed and latent variables, designs that do or do not include an "X set" of variables with complete data, and a full range of between- and within-set correlation values. All results are obtained using asymptotic relative efficiency formulas, and thus no data are generated; this novel approach allows us to examine whether PM designs have theoretical advantages over RN designs removing the impact of sampling error. Our primary findings are that (a) in manifest variable regression models, estimates of regression coefficients have much lower relative efficiency in PM designs as compared to RN designs, (b) relative efficiency of factor correlation or latent regression coefficient estimates is maximized when the indicators of each latent variable come from different sets, and (c) the addition of an X set improves efficiency in manifest variable regression models only for the parameters that directly involve the X-set variables, but it substantially improves efficiency of most parameters in latent variable models. We conclude that PM designs can be beneficial when the model of interest is a latent variable model; recommendations are made for how to optimize such a design.
32 CFR 2001.26 - Automatic declassification exemption markings.
Code of Federal Regulations, 2010 CFR
2010-07-01
... human intelligence source, or key design concepts of weapons of mass destruction, the revised... or a human intelligence source, or key design concepts of weapons of mass destruction, are exempt... key design concepts of weapons of mass destruction, the marking shall be “50X2-WMD.” (3) In...
NASA Astrophysics Data System (ADS)
Eimori, Takahisa; Anami, Kenji; Yoshimatsu, Norifumi; Hasebe, Tetsuya; Murakami, Kazuaki
2014-01-01
A comprehensive design optimization methodology using intuitive nondimensional parameters of inversion-level and saturation-level is proposed, especially for ultralow-power, low-voltage, and high-performance analog circuits with mixed strong, moderate, and weak inversion metal-oxide-semiconductor transistor (MOST) operations. This methodology is based on the synthesized charge-based MOST model composed of Enz-Krummenacher-Vittoz (EKV) basic concepts and advanced-compact-model (ACM) physics-based equations. The key concept of this methodology is that all circuit and system characteristics are described as some multivariate functions of inversion-level parameters, where the inversion level is used as an independent variable representative of each MOST. The analog circuit design starts from the first step of inversion-level design using universal characteristics expressed by circuit currents and inversion-level parameters without process-dependent parameters, followed by the second step of foundry-process-dependent design and the last step of verification using saturation-level criteria. This methodology also paves the way to an intuitive and comprehensive design approach for many kinds of analog circuit specifications by optimization using inversion-level log-scale diagrams and saturation-level criteria. In this paper, we introduce an example of our design methodology for a two-stage Miller amplifier.
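To make the inversion-level idea concrete, the sketch below computes the transconductance efficiency gm/ID directly from an assumed inversion coefficient using a widely used EKV-style interpolation. The slope factor and thermal voltage are placeholder values, and the exact expressions in the synthesized EKV/ACM model may differ.

# Sketch: inversion coefficient IC as the independent design variable,
# with gm/ID derived from a common EKV-style interpolation (an assumption,
# not necessarily the authors' exact formulation).
import numpy as np

def gm_over_id(ic, n=1.3, ut=0.0258):
    """gm/ID (1/V) from inversion coefficient IC; n = slope factor, UT = kT/q."""
    return 1.0 / (n * ut * (0.5 + np.sqrt(0.25 + ic)))

for ic in (0.01, 0.1, 1.0, 10.0, 100.0):   # weak -> moderate -> strong inversion
    print(f"IC = {ic:6.2f}  gm/ID = {gm_over_id(ic):5.1f} 1/V")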
Importance of joint efforts for balanced process of designing and education
NASA Astrophysics Data System (ADS)
Mayorova, V. I.; Bannova, O. K.; Kristiansen, T.-H.; Igritsky, V. A.
2015-06-01
This paper discusses the importance of a strategic planning and design process when developing long-term space exploration missions, both robotic and manned. The discussion begins with a review of current and/or traditional international perspectives on space development at the American, Russian and European space agencies. Some analogies and comparisons will be drawn from an analysis of several international student collaborative programs: summer international workshops at the Bauman Moscow State Technical University, the International European Summer Space School "Future Space Technologies and Experiments in Space", and the summer school at Stuttgart University in Germany. The paper will focus on the optimization of design and planning processes for successful space exploration missions and will highlight the importance of the following: understanding connectivity between different levels of human beings and machinery; a simultaneous mission planning approach; reflections and correlations between disciplines involved in planning and executing space exploration missions; and knowledge gained from different disciplines and through cross-applying and re-applying design approaches between various space-related fields of study and research. The conclusions will summarize the benefits and complications of applying a balanced design approach at all levels of the design process. Analysis of successes and failures of organizational efforts in space endeavors is used as a methodological approach to identify key questions to be researched, as they often cause many planning and design processing problems.
Planning Coverage Campaigns for Mission Design and Analysis: CLASP for DESDynI
NASA Technical Reports Server (NTRS)
Knight, Russell L.; McLaren, David A.; Hu, Steven
2013-01-01
Mission design and analysis presents challenges in that almost all variables are in constant flux, yet the goal is to achieve an acceptable level of performance against a concept of operations, which might also be in flux. To increase responsiveness, automated planning tools are used that allow for the continual modification of spacecraft, ground system, staffing, and concept of operations, while returning metrics that are important to mission evaluation, such as area covered, peak memory usage, and peak data throughput. This approach was applied to the DESDynI mission design using the CLASP planning system, but since this adaptation, many techniques have changed under the hood for CLASP, and the DESDynI mission concept has undergone drastic changes. The software produces mission evaluation products, such as memory high-water marks and coverage percentages, given a mission design in the form of coverage targets, concept of operations, spacecraft parameters, and orbital parameters. It tries to overcome the lack of fidelity and timeliness of mission requirements coverage analysis during mission design. Previous techniques primarily use Excel in an ad hoc fashion to approximate key factors in mission performance, often falling victim to overgeneralizations necessary in such an adaptation. The new program allows designers to faithfully represent their mission designs quickly, and get more accurate results just as quickly.
Schell, Lawrence M.; Ravenscroft, Julia; Cole, Maxine; Jacobs, Agnes; Newman, Joan
2005-01-01
In this article we describe a research partnership between the Akwesasne Mohawk Nation and scientists at the University at Albany, State University of New York, initiated to address community and scientific concerns regarding environmental contamination and its health consequences (thyroid hormone function, social adjustment, and school functioning). The investigation focuses on cultural inputs into health disparities. It employs a risk-focusing model of biocultural interaction: behaviors expressing cultural identity and values allocate or focus risk, in this instance the risk of toxicant exposure, which alters health status through the effects of toxicants. As culturally based behaviors and activities fulfill a key role in the model, accurate assessment of subtle cultural and behavioral variables is required and best accomplished through integration of local expert knowledge from the community. As a partnership project, the investigation recognizes the cultural and socioeconomic impacts of research in small communities beyond the production of scientific knowledge. The components of sustainable partnerships are discussed, including strategies that helped promote equity between the partners such as hiring community members as key personnel, integrating local expertise into research design, and developing a local Community Outreach and Education Program. Although challenges arose during the design and implementation of the research project, a collaborative approach has benefited the community and facilitated research. PMID:16330372
Landscape Planning of Schoolyards
NASA Astrophysics Data System (ADS)
Kopeva, A.; Khrapko, O.; Ivanova, O.
2017-11-01
The optimal landscape architecture planning of schoolyards allows for creation of favorable conditions for children personal development and physical fitness. The key principles of schoolyard landscape planning, same as for other areas intended for children, are as follows: establishment of a favorable microclimate, safety, aesthetic and educational environment. Green spaces play an essential role in this respect as they are essential to sanitary, hygienic, structural, and spatial planning performing decorative, artistic, cognitive, and educational functions in these areas. Various types of landscape plantings are used in school areas: borders, lawns, beds, vines, ornamental arrangements, and various potted plants. Children’s safety is the key principle when selecting a landscape design type and the plants’ range. Any allergenic, poisonous, thorny, strong-smelling or life-threatening plants are excluded. Plants on school grounds can serve as visual aids for studies. Drought-resistant, attractive, colorful, abundantly blooming plants with variable leaf texture are preferred. Ornamental trees and shrubs as well as perennials and annuals provide a broad plant range for school grounds.
Millennial Climatic Fluctuations Are Key to the Structure of Last Glacial Ecosystems
Huntley, Brian; Allen, Judy R. M.; Collingham, Yvonne C.; Hickler, Thomas; Lister, Adrian M.; Singarayer, Joy; Stuart, Anthony J.; Sykes, Martin T.; Valdes, Paul J.
2013-01-01
Whereas fossil evidence indicates extensive treeless vegetation and diverse grazing megafauna in Europe and northern Asia during the last glacial, experiments combining vegetation models and climate models have to-date simulated widespread persistence of trees. Resolving this conflict is key to understanding both last glacial ecosystems and extinction of most of the mega-herbivores. Using a dynamic vegetation model (DVM) we explored the implications of the differing climatic conditions generated by a general circulation model (GCM) in “normal” and “hosing” experiments. Whilst the former approximate interstadial conditions, the latter, designed to mimic Heinrich Events, approximate stadial conditions. The “hosing” experiments gave simulated European vegetation much closer in composition to that inferred from fossil evidence than did the “normal” experiments. Given the short duration of interstadials, and the rate at which forest cover expanded during the late-glacial and early Holocene, our results demonstrate the importance of millennial variability in determining the character of last glacial ecosystems. PMID:23613985
Tasker, Fiona; Newbery, Nina; Burr, Bill; Goddard, Andrew F
2014-04-01
There is currently considerable concern about the attractiveness of hospital medicine as a career, and experiences in core medical training (CMT) are a key determinant of whether trainees continue in the medical specialties. Little is understood about the quality and impact of the current CMT programme, and this survey was designed to assess this. Three key themes emerged. Firstly, the demands of providing service have led to considerable loss of training opportunities, particularly in outpatients and formal teaching sessions. Trainees spend a lot of this service time doing menial tasks and over 90% report that service takes up 80-100% of their time. Secondly, clinical and educational supervision is variable, with trainees sometimes getting little consultant feedback on their clinical performance. Finally, 44% of trainees report that CMT has not prepared them to be a medical registrar and many trainees are put off acute medical specialties by their experiences in CMT.
Hassani-Pak, Keywan; Rawlings, Christopher
2017-06-13
Genetics and "omics" studies designed to uncover genotype to phenotype relationships often identify large numbers of potential candidate genes, among which the causal genes are hidden. Scientists generally lack the time and technical expertise to review all relevant information available from the literature, from key model species and from a potentially wide range of related biological databases in a variety of data formats with variable quality and coverage. Computational tools are needed for the integration and evaluation of heterogeneous information in order to prioritise candidate genes and components of interaction networks that, if perturbed through potential interventions, have a positive impact on the biological outcome in the whole organism without producing negative side effects. Here we review several bioinformatics tools and databases that play an important role in biological knowledge discovery and candidate gene prioritization. We conclude with several key challenges that need to be addressed in order to facilitate biological knowledge discovery in the future.
Simulation Studies of Satellite Laser CO2 Mission Concepts
NASA Technical Reports Server (NTRS)
Kawa, Stephan Randy; Mao, J.; Abshire, J. B.; Collatz, G. J.; Sun X.; Weaver, C. J.
2011-01-01
Results of mission simulation studies are presented for a laser-based atmospheric CO2 sounder. The simulations are based on real-time carbon cycle process modeling and data analysis. The mission concept corresponds to ASCENDS as recommended by the US National Academy of Sciences Decadal Survey. Compared to passive sensors, active (lidar) sensing of CO2 from space has several potentially significant advantages that hold promise to advance CO2 measurement capability in the next decade. Although the precision and accuracy requirements remain at unprecedented levels of stringency, analysis of possible instrument technology indicates that such sensors are more than feasible. Radiative transfer model calculations, an instrument model with representative errors, and a simple retrieval approach complete the cycle from "nature" run to "pseudodata" CO2. Several mission and instrument configuration options are examined, and the sensitivity to key design variables is shown. Examples are also shown of how the resulting pseudo-measurements might be used to address key carbon cycle science questions.
Simulating Navigation with Virtual 3d Geovisualizations - a Focus on Memory Related Factors
NASA Astrophysics Data System (ADS)
Lokka, I.; Çöltekin, A.
2016-06-01
The use of virtual environments (VE) for navigation-related studies, such as spatial cognition and path retrieval, has been widely adopted in cognitive psychology and related fields. What motivates the use of VEs for such studies is that, as opposed to the real world, we can control for the confounding variables in simulated VEs. When simulating a geographic environment as a virtual world with the intention to train navigational memory in humans, an effective and efficient visual design is important to facilitate the amount of recall. However, it is not yet clear what amount of information should be included in such visual designs intended to facilitate remembering: there can be too little or too much of it. Besides the amount of information or level of detail, the types of visual features ('elements' in a visual scene) that should be included in the representations to create memorable scenes and paths must be defined. We analyzed the literature in cognitive psychology, geovisualization and information visualization, and identified the key factors for studying and evaluating geovisualization designs for their function to support and strengthen human navigational memory. The key factors we identified are: i) the individual abilities and age of the users, ii) the level of realism (LOR) included in the representations and iii) the context in which the navigation is performed, thus specific tasks within a case scenario. Here we present a concise literature review and our conceptual development for follow-up experiments.
Concepts for Multi-Speed Rotorcraft Drive System - Status of Design and Testing at NASA GRC
NASA Technical Reports Server (NTRS)
Stevens, Mark A.; Lewicki, David G.; Handschuh, Robert F.
2015-01-01
In several studies and on-going developments for advanced rotorcraft, the need for variable multi-speed capable rotors has been raised. Speed changes of up to 50% have been proposed for future rotorcraft to improve vehicle performance. A rotor speed change during operation not only requires a rotor that can perform effectively over the operating speed/load range, but also requires a propulsion system possessing these same capabilities. A study was completed investigating possible drive system arrangements that can accommodate up to a 50% speed change. Key drivers were identified, of which simplicity and weight were judged as central. This paper presents the current status of two gear train concepts coupled with the first of two clutch types developed and tested thus far, with a focus on design lessons learned and areas requiring development. Also, a third concept is presented: a dual-input planetary differential leveraged from a simple planetary with fixed carrier.
Measures of precision for dissimilarity-based multivariate analysis of ecological communities
Anderson, Marti J; Santana-Garcon, Julia
2015-01-01
Ecological studies require key decisions regarding the appropriate size and number of sampling units. No methods currently exist to measure precision for multivariate assemblage data when dissimilarity-based analyses are intended to follow. Here, we propose a pseudo multivariate dissimilarity-based standard error (MultSE) as a useful quantity for assessing sample-size adequacy in studies of ecological communities. Based on sums of squared dissimilarities, MultSE measures variability in the position of the centroid in the space of a chosen dissimilarity measure under repeated sampling for a given sample size. We describe a novel double resampling method to quantify uncertainty in MultSE values with increasing sample size. For more complex designs, values of MultSE can be calculated from the pseudo residual mean square of a PERMANOVA model, with the double resampling done within appropriate cells in the design. R code functions for implementing these techniques, along with ecological examples, are provided. PMID:25438826
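The quantity described above can be sketched numerically: for a single group of samples, a pseudo multivariate standard error is obtained from the sum of squared inter-sample dissimilarities. The code below uses toy abundance data and the Bray-Curtis measure; the exact estimator and resampling scheme in the paper may differ in detail.

# Hedged sketch of a pseudo multivariate standard error for one group of
# n sampling units, from sums of squared dissimilarities.
import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(3)
abund = rng.poisson(5, size=(20, 30))              # 20 samples x 30 species (toy data)
d = squareform(pdist(abund, metric="braycurtis"))  # chosen dissimilarity measure

def mult_se(dmat):
    n = dmat.shape[0]
    ss = np.sum(np.triu(dmat, 1) ** 2) / n         # sum of squared dissimilarities / n
    v = ss / (n - 1)                               # pseudo multivariate variance
    return np.sqrt(v / n)

print("MultSE for n = 20:", round(mult_se(d), 4))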
Jamshidian, Farid; Hubbard, Alan E; Jewell, Nicholas P
2014-06-01
There is a rich literature on the role of placebos in experimental design and evaluation of therapeutic agents or interventions. The importance of masking participants, investigators and evaluators to treatment assignment (treatment or placebo) has long been stressed as a key feature of a successful trial design. Nevertheless, there is considerable variability in the technical definition of the placebo effect and the impact of treatment assignments being unmasked. We suggest a formal concept of a 'perception effect' and define unmasking and placebo effects in the context of randomised trials. We employ modern tools from causal inference to derive semi-parametric estimators of such effects. The methods are illustrated on a motivating example from a recent pain trial where the occurrence of treatment-related side effects acts as a proxy for unmasking. © The Author(s) 2011 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.
Relating design and environmental variables to reliability
NASA Astrophysics Data System (ADS)
Kolarik, William J.; Landers, Thomas L.
The combination of space application and nuclear power source demands high reliability hardware. The possibilities of failure, either an inability to provide power or a catastrophic accident, must be minimized. Nuclear power experiences on the ground have led to highly sophisticated probabilistic risk assessment procedures, most of which require quantitative information to adequately assess such risks. In the area of hardware risk analysis, reliability information plays a key role. One of the lessons learned from the Three Mile Island experience is that thorough analyses of critical components are essential. Nuclear grade equipment shows some reliability advantages over commercial equipment; however, no statistically significant difference has been found. A recent study pertaining to spacecraft electronics reliability examined some 2500 malfunctions on more than 300 aircraft. The study classified the equipment failures into seven general categories. Design deficiencies and lack of environmental protection accounted for about half of all failures. Within each class, limited reliability modeling was performed using a Weibull failure model.
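For reference, the Weibull failure model mentioned above expresses reliability as R(t) = exp(-(t/eta)^beta). The short example below evaluates reliability and hazard rate for assumed shape and scale parameters; the values are placeholders, not fitted to the cited failure data.

# Small illustration of the two-parameter Weibull model: reliability R(t)
# and hazard rate h(t) for assumed shape (beta) and scale (eta).
import numpy as np

def weibull_reliability(t, beta, eta):
    return np.exp(-(t / eta) ** beta)

def weibull_hazard(t, beta, eta):
    return (beta / eta) * (t / eta) ** (beta - 1)

beta, eta = 1.5, 5000.0        # shape > 1 implies wear-out behaviour; eta in hours (assumed)
for t in (1000, 2500, 5000):
    print(f"t = {t:5d} h  R = {weibull_reliability(t, beta, eta):.3f}"
          f"  h = {weibull_hazard(t, beta, eta):.2e} /h")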
Gravity and thermal deformation of large primary mirror in space telescope
NASA Astrophysics Data System (ADS)
Wang, Xin; Jiang, Shouwang; Wan, Jinlong; Shu, Rong
2016-10-01
Integrating mechanical FEA with optical estimation is essential to simulate the gravity deformation of a large primary mirror and the thermal deformation of the optical structure under static temperature offsets or temperature gradients. We present the results of FEA analysis, data processing, and image performance evaluation. Three kinds of support structure for the large primary mirror (central holding, edge glue fixation, and back support) are designed and compared to minimize gravity deformation. Different mirror materials (Zerodur and SiC) are chosen and analyzed to limit thermal gradient distortion. The simulation accuracy depends on the FEA mesh quality, the load definition of the structure, and the error in fitting discrete data to a smooth surface. A primary mirror with 1 m diameter is designed as an example. The structural material matched to the mirror, the central supporting structure, and the key aspects of the FEA simulation are optimized for space application.
Leung, Gabriel M.; Yu, Philip L. H.; Wong, Irene O. L.; Johnston, Janice M.; Tin, Keith Y. K.
2003-01-01
Objective: Given the slow adoption of medical informatics in Hong Kong and Asia, we sought to understand the contributory barriers and potential incentives associated with information technology implementation. Design and Measurements: A representative sample of 949 doctors (response rate = 77.0%) was asked through a postal survey to rank a list of nine barriers associated with clinical computerization according to self-perceived importance. They ranked seven incentives or catalysts that may influence computerization. We generated mean rank scores and used multidimensional preference analysis to explore key explanatory dimensions of these variables. A hierarchical cluster analysis was performed to identify homogenous subgroups of respondents. We further determined the relationships between the sets of barriers and incentives/catalysts collectively using canonical correlation. Results: Time costs, lack of technical support and large capital investments were the biggest barriers to computerization, whereas improved office efficiency and better-quality care were ranked highest as potential incentives to computerize. Cost vs. noncost, physician-related vs. patient-related, and monetary vs. nonmonetary factors were the key dimensions explaining the barrier variables. Similarly, within-practice vs external and “push” vs “pull” factors accounted for the incentive variables. Four clusters were identified for barriers and three for incentives/catalysts. Canonical correlation revealed that respondents who were concerned with the costs of computerization also perceived financial incentives and government regulation to be important incentives/catalysts toward computerization. Those who found the potential interference with communication important also believed that the promise of improved care from computerization to be a significant incentive. Conclusion: This study provided evidence regarding common barriers associated with clinical computerization. Our findings also identified possible incentive strategies that may be employed to accelerate uptake of computer systems. PMID:12595409
Mortier, Séverine Thérèse F C; Van Bockstal, Pieter-Jan; Corver, Jos; Nopens, Ingmar; Gernaey, Krist V; De Beer, Thomas
2016-06-01
Large molecules, such as biopharmaceuticals, are considered the key driver of growth for the pharmaceutical industry. Freeze-drying is the preferred way to stabilise these products when needed. However, it is an expensive, inefficient, time- and energy-consuming process. During freeze-drying, there are only two main process variables to be set, i.e. the shelf temperature and the chamber pressure, however preferably in a dynamic way. This manuscript focuses on the essential use of uncertainty analysis for the determination and experimental verification of the dynamic primary drying Design Space for pharmaceutical freeze-drying. Traditionally, the chamber pressure and shelf temperature are kept constant during primary drying, leading to suboptimal process conditions. In this paper it is demonstrated how a mechanistic model of the primary drying step gives the opportunity to determine the optimal dynamic values for both process variables during processing, resulting in a dynamic Design Space with a well-known risk of failure. This allows running the primary drying process step as time-efficiently as possible, thereby guaranteeing that the temperature at the sublimation front does not exceed the collapse temperature. The Design Space is the multidimensional combination and interaction of input variables and process parameters leading to the expected product specifications with a controlled (i.e., high) probability. Therefore, inclusion of parameter uncertainty is an essential part in the definition of the Design Space, although it is often neglected. To quantitatively assess the inherent uncertainty on the parameters of the mechanistic model, an uncertainty analysis was performed to establish the borders of the dynamic Design Space, i.e. a time-varying shelf temperature and chamber pressure, associated with a specific risk of failure. A risk of failure acceptance level of 0.01%, i.e. a 'zero-failure' situation, results in an increased primary drying process time compared to the deterministic dynamic Design Space; however, the risk of failure is under control. Experimental verification revealed that only a risk of failure acceptance level of 0.01% yielded a guaranteed zero-defect quality end-product. The computed process settings with a risk of failure acceptance level of 0.01% resulted in a decrease of more than half of the primary drying time in comparison with a regular, conservative cycle with fixed settings. Copyright © 2016. Published by Elsevier B.V.
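The uncertainty-analysis logic described above (accept an operating point only if the probability of exceeding the collapse temperature stays below the chosen risk-of-failure level) can be sketched with a Monte Carlo loop. The product-temperature model below is a deliberately crude placeholder, not the mechanistic sublimation model used in the study; only the propagation-and-acceptance structure is illustrated.

# Hedged sketch of risk-of-failure screening: propagate uncertain parameters
# through a toy product-temperature surrogate and keep only shelf-temperature
# settings whose simulated failure probability stays below the acceptance level.
import numpy as np

rng = np.random.default_rng(4)
T_collapse = -32.0                      # collapse temperature (degC), assumed
risk_level = 1e-4                       # 0.01% risk-of-failure acceptance level

def product_temp(T_shelf, Kv, Rp):
    # Toy surrogate: warmer shelves, better heat transfer, and higher dried-layer
    # resistance all push the sublimation-front temperature up.
    return -45.0 + 0.5 * T_shelf + 20.0 * Kv + 2.0 * Rp

def risk_of_failure(T_shelf, n=200_000):
    Kv = rng.normal(0.30, 0.03, n)      # vial heat transfer coefficient (uncertain)
    Rp = rng.normal(2.0, 0.3, n)        # dried product resistance (uncertain)
    return np.mean(product_temp(T_shelf, Kv, Rp) > T_collapse)

for T_shelf in (-5.0, 0.0, 5.0, 10.0):
    r = risk_of_failure(T_shelf)
    ok = "inside" if r <= risk_level else "outside"
    print(f"shelf {T_shelf:+5.1f} degC  risk {r:.2e}  -> {ok} the Design Space border")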
The HPT Value Proposition in the Larger Improvement Arena.
ERIC Educational Resources Information Center
Wallace, Guy W.
2003-01-01
Discussion of human performance technology (HPT) emphasizes the key variable, which is the human variable. Highlights include the Ishikawa Diagram; human performance as one variable of process performance; collaborating with other improvement approaches; value propositions; and benefits to stakeholders, including real return on investments. (LRW)
NASA Astrophysics Data System (ADS)
Hollingsworth, Peter Michael
The drive toward robust systems design, especially with respect to system affordability throughout the system life-cycle, has led to the development of several advanced design methods. While these methods have been extremely successful in satisfying the needs for which they have been developed, they inherently leave a critical area unaddressed. None of them fully considers the effect of requirements on the selection of solution systems. The goal of all current design methodologies is to bring knowledge forward in the design process to the regions where more design freedom is available and design changes cost less. Therefore, it seems reasonable to consider the point in the design process where the greatest restrictions are placed on the final design, the point at which the system-level requirements are set. Historically, the requirements have been treated as something handed down from above. However, neither the customer nor the solution provider completely understands all of the options available in the broader requirements space. If a method were developed that provided the ability to understand the full scope of the requirements space, it would allow for a better comparison of potential solution systems with respect to both the current and potential future requirements. The key to a requirements-conscious method is to treat requirements differently from the traditional approach. The method proposed herein is known as Requirements Controlled Design (RCD). By treating the requirements as a set of variables that control the behavior of the system, instead of variables that only define the response of the system, it is possible to determine a priori what portions of the requirements space any given system is capable of satisfying. Additionally, it should be possible to identify which systems can satisfy a given set of requirements and the locations where a small change in one or more requirements poses a significant risk to a design program. This thesis puts forth the theory and methodology to enable RCD, and details and validates a specific method called the Modified Strength Pareto Evolutionary Algorithm (MSPEA).
Sampling in ecology and evolution - bridging the gap between theory and practice
Albert, C.H.; Yoccoz, N.G.; Edwards, T.C.; Graham, C.H.; Zimmermann, N.E.; Thuiller, W.
2010-01-01
Sampling is a key issue for answering most ecological and evolutionary questions. The importance of developing a rigorous sampling design tailored to specific questions has already been discussed in the ecological and sampling literature and has provided useful tools and recommendations to sample and analyse ecological data. However, sampling issues are often difficult to overcome in ecological studies due to apparent inconsistencies between theory and practice, often leading to the implementation of simplified sampling designs that suffer from unknown biases. Moreover, we believe that classical sampling principles which are based on estimation of means and variances are insufficient to fully address many ecological questions that rely on estimating relationships between a response and a set of predictor variables over time and space. Our objective is thus to highlight the importance of selecting an appropriate sampling space and an appropriate sampling design. We also emphasize the importance of using prior knowledge of the study system to estimate models or complex parameters and thus better understand ecological patterns and processes generating these patterns. Using a semi-virtual simulation study as an illustration we reveal how the selection of the space (e.g. geographic, climatic), in which the sampling is designed, influences the patterns that can be ultimately detected. We also demonstrate the inefficiency of common sampling designs to reveal response curves between ecological variables and climatic gradients. Further, we show that response-surface methodology, which has rarely been used in ecology, is much more efficient than more traditional methods. Finally, we discuss the use of prior knowledge, simulation studies and model-based designs in defining appropriate sampling designs. We conclude by a call for development of methods to unbiasedly estimate nonlinear ecologically relevant parameters, in order to make inferences while fulfilling requirements of both sampling theory and field work logistics. © 2010 The Authors.
Implementation of continuous-variable quantum key distribution with discrete modulation
NASA Astrophysics Data System (ADS)
Hirano, Takuya; Ichikawa, Tsubasa; Matsubara, Takuto; Ono, Motoharu; Oguri, Yusuke; Namiki, Ryo; Kasai, Kenta; Matsumoto, Ryutaroh; Tsurumaru, Toyohiro
2017-06-01
We have developed a continuous-variable quantum key distribution (CV-QKD) system that employs discrete quadrature-amplitude modulation and homodyne detection of coherent states of light. We experimentally demonstrated automated secure key generation with a rate of 50 kbps when a quantum channel is a 10 km optical fibre. The CV-QKD system utilises a four-state and post-selection protocol and generates a secure key against the entangling cloner attack. We used a pulsed light source of 1550 nm wavelength with a repetition rate of 10 MHz. A commercially available balanced receiver is used to realise shot-noise-limited pulsed homodyne detection. We used a non-binary LDPC code for error correction (reverse reconciliation) and the Toeplitz matrix multiplication for privacy amplification. A graphical processing unit card is used to accelerate the software-based post-processing.
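The privacy-amplification step mentioned above (Toeplitz matrix multiplication) can be sketched in a few lines: the error-corrected string is hashed with a random binary Toeplitz matrix to produce the final secret key. Block lengths and the compression ratio below are toy assumptions; the deployed system uses far longer blocks and GPU-accelerated post-processing.

# Minimal sketch of Toeplitz-hashing privacy amplification over GF(2).
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(5)
n_in, n_out = 64, 24                      # reconciled bits in, secret bits out (toy sizes)

corrected_key = rng.integers(0, 2, n_in)  # shared string after reconciliation
# A random n_out x n_in Toeplitz matrix is defined by its first column and row.
first_col = rng.integers(0, 2, n_out)
first_row = np.concatenate(([first_col[0]], rng.integers(0, 2, n_in - 1)))
T = toeplitz(first_col, first_row)

secret_key = (T @ corrected_key) % 2      # universal hashing step
print("secret key:", "".join(map(str, secret_key)))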
Pelagic Habitat Analysis Module (PHAM) for GIS Based Fisheries Decision Support
NASA Technical Reports Server (NTRS)
Kiefer, D. A.; Armstrong, Edward M.; Harrison, D. P.; Hinton, M. G.; Kohin, S.; Snyder, S.; O'Brien, F. J.
2011-01-01
We have assembled a system that integrates satellite and model output with fisheries data, and we have developed tools that allow analysis of the interaction between species and key environmental variables. We have demonstrated the capacity to accurately map habitat of thresher sharks (Alopias vulpinus and A. pelagicus). Their seasonal migration along the California Current is at least partly driven by the seasonal migration of sardine, key prey of the sharks.
High performance reconciliation for continuous-variable quantum key distribution with LDPC code
NASA Astrophysics Data System (ADS)
Lin, Dakai; Huang, Duan; Huang, Peng; Peng, Jinye; Zeng, Guihua
2015-03-01
Reconciliation is a significant procedure in a continuous-variable quantum key distribution (CV-QKD) system. It is employed to extract a secure secret key from the resulting string shared through the quantum channel between two users. However, the efficiency and the speed of previous reconciliation algorithms are low. These problems limit the secure communication distance and the secure key rate of CV-QKD systems. In this paper, we propose a high-speed reconciliation algorithm employing a well-structured decoding scheme based on a low density parity-check (LDPC) code. The complexity of the proposed algorithm is considerably reduced. By using a graphics processing unit (GPU) device, our method may reach a reconciliation speed of 25 Mb/s for a CV-QKD system, which is currently the highest level and paves the way to high-speed CV-QKD.
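A minimal sketch of the syndrome exchange at the heart of LDPC-based reconciliation is shown below: one party discloses the syndrome of its quantised string, and the other compares it against its own. The sparse parity-check matrix, block sizes, and error pattern are toy assumptions, and the belief-propagation decoding that would actually correct the discrepancies is omitted.

# Hedged sketch of the syndrome step of reverse reconciliation.
import numpy as np

rng = np.random.default_rng(6)
n, m = 96, 48                                # code length and number of checks (toy sizes)
H = (rng.random((m, n)) < 0.06).astype(int)  # sparse random parity-check matrix (toy)

bob = rng.integers(0, 2, n)                  # Bob's sliced/quantised string
alice = bob.copy()
alice[rng.choice(n, 3, replace=False)] ^= 1  # channel noise: a few flipped bits

syndrome_bob = H @ bob % 2                   # disclosed over the classical channel
syndrome_alice = H @ alice % 2
mismatch = int(np.sum(syndrome_bob != syndrome_alice))
print(f"{mismatch} of {m} parity checks disagree -> decoder would correct Alice's string")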
NASA Astrophysics Data System (ADS)
Pavese, Christian; Tibaldi, Carlo; Larsen, Torben J.; Kim, Taeseong; Thomsen, Kenneth
2016-09-01
The aim is to provide a fast and reliable approach to estimate ultimate blade loads for a multidisciplinary design optimization (MDO) framework. For blade design purposes, the standards require a large amount of computationally expensive simulations, which cannot be efficiently run at each cost-function evaluation of an MDO process. This work describes a method that allows integrating the calculation of the blade load envelopes inside an MDO loop. Ultimate blade load envelopes are calculated for a baseline design and a design obtained after an iteration of an MDO. These envelopes are computed for a full standard design load basis (DLB) and a deterministic reduced DLB. Ultimate loads extracted from the two DLBs for each of the two blade designs are compared and analyzed. Although the reduced DLB supplies ultimate loads of different magnitude, the shapes of the estimated envelopes are similar to those computed using the full DLB. This observation is used to propose a scheme that is computationally cheap, and that can be integrated inside an MDO framework, providing a sufficiently reliable estimation of the blade ultimate loading. The latter aspect is of key importance when design variables implementing passive control methodologies are included in the formulation of the optimization problem. An MDO of a 10 MW wind turbine blade is presented as an applied case study to show the efficacy of the reduced DLB concept.
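The envelope comparison described above can be sketched as follows: stack simultaneous blade-root moment pairs from every load case in a DLB and keep their convex hull as the ultimate load envelope, then repeat with a reduced subset of cases. The synthetic time series and the choice of channels below are assumptions for illustration only.

# Sketch: ultimate (Mx, My) load envelope as the convex hull of simultaneous
# moment pairs collected across load cases, for a full and a reduced DLB.
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(7)

def load_case(amp_x, amp_y, n=6000):
    t = np.linspace(0, 600, n)
    mx = amp_x * np.sin(0.8 * t) + rng.normal(0, 0.1 * amp_x, n)        # flapwise moment
    my = amp_y * np.sin(0.8 * t + 1.0) + rng.normal(0, 0.1 * amp_y, n)  # edgewise moment
    return np.column_stack([mx, my])

full_dlb = [load_case(a, b) for a in (8, 10, 12) for b in (3, 4, 5)]    # many cases
reduced_dlb = full_dlb[::3]                                             # deterministic subset

def envelope(cases):
    pts = np.vstack(cases)
    return pts[ConvexHull(pts).vertices]     # extreme simultaneous load pairs

print("full-DLB envelope points:   ", len(envelope(full_dlb)))
print("reduced-DLB envelope points:", len(envelope(reduced_dlb)))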
Yu, Xingyue; Cabooter, Deirdre; Dewil, Raf
2018-05-24
This study aims at investigating the efficiency and kinetics of 2,4-DCP degradation via advanced reduction processes (ARP). Using UV light as the activation method, the highest degradation efficiency of 2,4-DCP was obtained when using sulphite as a reducing agent. The highest degradation efficiency was observed under alkaline conditions (pH = 10.0), for high sulphite dosage and UV intensity, and low 2,4-DCP concentration. For all process conditions, first-order reaction rate kinetics were applicable. A quadratic polynomial equation fitted by a Box-Behnken design was used as a statistical model and proved to be precise and reliable in describing the significance of the different process variables. The analysis of variance demonstrated that the experimental results were in good agreement with the predicted model (R² = 0.9343), and solution pH, sulphite dose and UV intensity were found to be key process variables in the sulphite/UV ARP. Consequently, the present study provides a promising approach for the efficient degradation of 2,4-DCP with fast degradation kinetics. Copyright © 2018 Elsevier B.V. All rights reserved.
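A small worked example of the first-order kinetics reported above: with C = C0*exp(-k*t), the rate constant follows from a linear fit of ln(C/C0) against time. The concentration values below are synthetic placeholders, not the measured 2,4-DCP data.

# Estimate a first-order rate constant k and half-life from concentration data.
import numpy as np

t = np.array([0, 10, 20, 30, 45, 60.0])                  # minutes
c = np.array([50.0, 38.1, 29.3, 22.4, 15.1, 10.2])       # mg/L of 2,4-DCP (toy values)

k = -np.polyfit(t, np.log(c / c[0]), 1)[0]               # slope of ln(C/C0) vs t
half_life = np.log(2) / k
print(f"k = {k:.4f} 1/min, half-life = {half_life:.1f} min")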
Efficient SRAM yield optimization with mixture surrogate modeling
NASA Astrophysics Data System (ADS)
Zhongjian, Jiang; Zuochang, Ye; Yan, Wang
2016-12-01
Largely repeated cells such as SRAM cells usually require an extremely low failure rate to ensure a moderate chip yield. Though fast Monte Carlo methods such as importance sampling and its variants can be used for yield estimation, they are still very expensive if one needs to perform optimization based on such estimations. Typically, yield calculation requires a large number of SPICE simulations, and this circuit simulation accounts for the largest proportion of the time spent in yield calculation. In this paper, a new method is proposed to address this issue. The key idea is to establish an efficient mixture surrogate model. The surrogate model is based on the design variables and process variables. The model is constructed from SPICE simulations of a certain number of sample points, which are used to train the mixture surrogate model with the lasso algorithm. Experimental results show that the proposed model is able to calculate the yield accurately and brings significant speed-ups to the calculation of the failure rate. Based on the model, we developed a further accelerated algorithm to enhance the speed of the yield calculation. It is suitable for high-dimensional process variables and multi-performance applications.
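The surrogate idea sketched above can be illustrated as follows: train a cheap regression model on a limited number of simulator samples, then estimate the failure rate by running Monte Carlo on the surrogate instead of the simulator. A lasso-regularised polynomial model and a synthetic performance function stand in for the paper's mixture surrogate and SPICE metric, so all names and numbers below are assumptions.

# Hedged sketch: surrogate trained on a few "simulator" samples, then cheap
# Monte Carlo failure-rate estimation on the surrogate.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(8)

def sim_metric(x):
    # Placeholder for a SPICE-measured performance (e.g. a read margin).
    return 0.25 - 0.08 * x[:, 0] + 0.05 * x[:, 1] - 0.04 * x[:, 0] * x[:, 2]

X_train = rng.normal(0, 1, (400, 3))            # process/design variable samples
y_train = sim_metric(X_train) + rng.normal(0, 0.005, 400)

poly = PolynomialFeatures(degree=2, include_bias=False)
model = Lasso(alpha=1e-4, max_iter=10_000).fit(poly.fit_transform(X_train), y_train)

X_mc = rng.normal(0, 1, (1_000_000, 3))         # cheap once the surrogate is trained
fail = model.predict(poly.transform(X_mc)) < 0.0
print(f"estimated failure rate: {fail.mean():.2e}")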
NASA Astrophysics Data System (ADS)
Pfeiffer, Hans
1999-12-01
Projection reduction exposure with variable axis immersion lenses (PREVAIL) represents the high throughput e-beam projection approach to next generation lithography (NGL), which IBM is pursuing in cooperation with Nikon Corporation as an alliance partner. This paper discusses the challenges and accomplishments of the PREVAIL project. The supreme challenge facing all e-beam lithography approaches has been and still is throughput. Since the throughput of e-beam projection systems is severely limited by the available optical field size, the key to success is the ability to overcome this limitation. The PREVAIL technique overcomes field-limiting off-axis aberrations through the use of variable axis lenses, which electronically shift the optical axis simultaneously with the deflected beam, so that the beam effectively remains on axis. The resist images obtained with the proof-of-concept (POC) system demonstrate that PREVAIL effectively eliminates off-axis aberrations affecting both the resolution and placement accuracy of pixels. As part of the POC system a high emittance gun has been developed to provide uniform illumination of the patterned subfield, and to fill the large numerical aperture projection optics designed to significantly reduce beam blur caused by Coulomb interaction.
Potential distribution of the viral haemorrhagic septicaemia virus in the Great Lakes region
Escobar, Luis E.; Kurath, Gael; Escobar-Dodero, Joaquim; Craft, Meggan E.; Phelps, Nicholas B.D.
2017-01-01
Viral haemorrhagic septicaemia virus (VHSV) genotype IVb has been responsible for large-scale fish mortality events in the Great Lakes of North America. Anticipating the areas of potential VHSV occurrence is key to designing epidemiological surveillance and disease prevention strategies in the Great Lakes basin. We explored the environmental features that could shape the distribution of VHSV, based on remote sensing and climate data via ecological niche modelling. Variables included temperature measured during the day and night, precipitation, vegetation, bathymetry, solar radiation and topographic wetness. VHSV occurrences were obtained from available reports of virus confirmation in laboratory facilities. We fit a Maxent model using VHSV-IVb reports and environmental variables under different parameterizations to identify the best model to determine potential VHSV occurrence based on environmental suitability. VHSV reports were generated from both passive and active surveillance. VHSV occurrences were most abundant near shore sites. We were, however, able to capture the environmental signature of VHSV based on the environmental variables employed in our model, allowing us to identify patterns of VHSV potential occurrence. Our findings suggest that VHSV is not at an ecological equilibrium and more areas could be affected, including areas not in close geographic proximity to past VHSV reports.
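As a rough illustration of the presence/background niche-modelling workflow described above, the sketch below trains a logistic-regression classifier on presence points versus background points drawn from synthetic environmental layers and scores new sites by relative suitability. Logistic regression is used here only as a simplified stand-in for Maxent, and none of the layers or values correspond to the study's data.

# Simplified presence/background suitability model (stand-in for Maxent).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(9)

def env(n):   # day temperature, precipitation, bathymetry (toy layers)
    return np.column_stack([rng.normal(12, 4, n), rng.normal(800, 200, n), rng.normal(-30, 15, n)])

presence = env(150) + np.array([3.0, 50.0, 10.0])    # occurrences skewed to warmer, shallower sites
background = env(5000)                               # background points across the basin

X = np.vstack([presence, background])
y = np.r_[np.ones(len(presence)), np.zeros(len(background))]
model = LogisticRegression(max_iter=1000).fit(X, y)

suitability = model.predict_proba(env(10))[:, 1]     # relative environmental suitability
print(np.round(suitability, 3))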
Alam, Md Sabir; Garg, Arun; Pottoo, Faheem Hyder; Saifullah, Mohammad Khalid; Tareq, Abu Izneid; Manzoor, Ovais; Mohsin, Mohd; Javed, Md Noushad
2017-11-01
Due to the unique inherent catalytic characteristics of gold nanoparticles of different size, shape and surface functionalization, their potential applications are being explored in various fields such as drug delivery, biosensors, diagnosis and theranostics. However, the conventional process for synthesis of these metallic nanoparticles utilizes toxic reagents as reducing agents, an additional capping agent for stability, as well as surface functionalization for drug delivery purposes. Hence, in this work the suitability of gum ghatti for reducing, capping and surface functionalization during the synthesis of stable gold nanoparticles was duly explored. The role and impact of key process variables, i.e. the volume of chloroauric acid solution, the volume of gum solution and the temperature, at their respective three levels, as well as the mechanism of formation of the optimized gold nanoparticles, were also investigated using a Box-Behnken design. These novel optimized gold nanoparticles were further characterized by UV spectrophotometry for their surface plasmon resonance (SPR) at around 530 nm, and by dynamic light scattering (DLS) for hydrodynamic size (112.5 nm), PDI (0.222) and zeta potential (-21.3 mV), while transmission electron microscopy (TEM) further revealed the spherical surface geometry of these nanoparticles. Copyright © 2017 Elsevier B.V. All rights reserved.
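For reference, a three-factor Box-Behnken design of the kind described above (e.g. chloroauric acid volume, gum solution volume, temperature) consists of the twelve edge-midpoint runs of the cube plus replicated centre points. The sketch below writes the design out in coded units; the actual factor levels and number of centre points used in the study are not reproduced here.

# Generate a three-factor Box-Behnken design in coded units (-1, 0, +1).
from itertools import combinations, product

factors = ["HAuCl4 volume", "gum solution volume", "temperature"]
runs = []
for i, j in combinations(range(3), 2):          # each pair of factors at +/-1...
    for a, b in product((-1, 1), repeat=2):     # ...with the remaining factor at 0
        run = [0, 0, 0]
        run[i], run[j] = a, b
        runs.append(run)
runs += [[0, 0, 0]] * 3                         # replicated centre points (assumed: 3)

print("run  " + "  ".join(f"{f:>20}" for f in factors))
for k, run in enumerate(runs, 1):
    print(f"{k:3d}  " + "  ".join(f"{v:>20d}" for v in run))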
NASA Technical Reports Server (NTRS)
Nguyen, Nhan; Kaul, Upender; Lebofsky, Sonia; Ting, Eric; Chaparro, Daniel; Urnes, James
2015-01-01
This paper summarizes the recent development of an adaptive aeroelastic wing shaping control technology called variable camber continuous trailing edge flap (VCCTEF). As wing flexibility increases, aeroelastic interactions with aerodynamic forces and moments become an increasingly important consideration in aircraft design and aerodynamic performance. Furthermore, aeroelastic interactions with flight dynamics can result in issues with vehicle stability and control. The initial VCCTEF concept was developed in 2010 by NASA under a NASA Innovation Fund study entitled "Elastically Shaped Future Air Vehicle Concept," which showed that highly flexible wing aerodynamic surfaces can be elastically shaped in-flight by active control of wing twist and bending deflection in order to optimize the spanwise lift distribution for drag reduction. A collaboration between NASA and Boeing Research & Technology was subsequently funded by NASA from 2012 to 2014 to further develop the VCCTEF concept. This paper summarizes some of the key research areas conducted by NASA during the collaboration with Boeing Research and Technology. These research areas include VCCTEF design concepts, aerodynamic analysis of VCCTEF camber shapes, aerodynamic optimization of lift distribution for drag minimization, wind tunnel test results for cruise and high-lift configurations, flutter analysis and suppression control of flexible wing aircraft, and multi-objective flight control for adaptive aeroelastic wing shaping control.
NASA Astrophysics Data System (ADS)
Russo, Luigi; Sorrentino, Marco; Polverino, Pierpaolo; Pianese, Cesare
2017-06-01
This work focuses on the development of a fast PEMFC impedance model, built starting from both physical and geometrical variables. Buckingham's π theorem is proposed to define non-dimensional parameters that allow suitably describing the relationships linking the physical variables involved in the process under study to the fundamental dimensions. This approach is a useful solution for problems whose first-principles models are unknown, difficult to build, or computationally unfeasible. The key contributions of the proposed similarity theory-based modelling approach are presented and discussed. The major advantage resides in its straightforward online applicability, thanks to a very low computational burden, while preserving a good level of accuracy. This makes the model suitable for several purposes, such as design, control, diagnostics, state of health monitoring and prognostics. Experimental data, collected in different operating conditions, have been analysed to demonstrate the capability of the model to reproduce PEMFC impedance at different loads and temperatures. This results in a reduction of the experimental effort for the FCS lab characterization. Moreover, the possibility of using the model for scaling-up purposes, reproducing the full-stack impedance from the single-cell one, is highlighted, thus supporting FC design and development from lab scale to commercial system scale.
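The mechanics behind Buckingham's π theorem can be shown compactly: the dimensionless groups are null-space vectors of the matrix of dimensional exponents. The worked example below uses the classic pendulum variables rather than the PEMFC variable set, which is not listed in the abstract; it is meant only to show how such groups can be derived programmatically.

# Dimensionless groups as null-space vectors of the dimensional-exponent matrix.
from sympy import Matrix

variables = ["t", "l", "g", "m"]        # period, length, gravity, mass (illustrative example)
# Rows: exponents of M, L, T for each variable (columns in the same order).
dim = Matrix([
    [0, 0, 0, 1],    # M
    [0, 1, 1, 0],    # L
    [1, 0, -2, 0],   # T
])

for vec in dim.nullspace():
    group = " * ".join(f"{v}^{e}" for v, e in zip(variables, vec) if e != 0)
    print("dimensionless group:", group)    # expect t^2 * g / l (up to a power)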
Källhammer, Jan-Erik; Smith, Kip
2012-08-01
We investigated five contextual variables that we hypothesized would influence driver acceptance of alerts to pedestrians issued by a night vision active safety system to inform the specification of the system's alerting strategies. Driver acceptance of automotive active safety systems is a key factor to promote their use and implies a need to assess factors influencing driver acceptance. In a field operational test, 10 drivers drove instrumented vehicles equipped with a preproduction night vision system with pedestrian detection software. In a follow-up experiment, the 10 drivers and 25 additional volunteers without experience with the system watched 57 clips with pedestrian encounters gathered during the field operational test. They rated the acceptance of an alert to each pedestrian encounter. Levels of rating concordance were significant between drivers who experienced the encounters and participants who did not. Two contextual variables, pedestrian location and motion, were found to influence ratings. Alerts were more accepted when pedestrians were close to or moving toward the vehicle's path. The study demonstrates the utility of using subjective driver acceptance ratings to inform the design of active safety systems and to leverage expensive field operational test data within the confines of the laboratory. The design of alerting strategies for active safety systems needs to heed the driver's contextual sensitivity to issued alerts.
Approach for environmental baseline water sampling
Smith, K.S.
2011-01-01
Samples collected during the exploration phase of mining represent baseline conditions at the site. As such, they can be very important in forecasting potential environmental impacts should mining proceed, and can become measurements against which future changes are compared. Constituents in stream water draining mined and mineralized areas tend to be geochemically, spatially, and temporally variable, which presents challenges in collecting both exploration and baseline water-quality samples. Because short-term (daily) variations can complicate long-term trends, it is important to consider recent findings concerning geochemical variability of stream-water constituents at short-term timescales in designing sampling plans. Also, adequate water-quality information is key to forecasting potential ecological impacts from mining. Therefore, it is useful to collect baseline water samples adequate for geochemical and toxicological modeling. This requires complete chemical analyses of dissolved constituents that include major and minor chemical elements as well as physicochemical properties (including pH, specific conductance, dissolved oxygen) and dissolved organic carbon. Applying chemical-equilibrium and appropriate toxicological models to water-quality information leads to an understanding of the speciation, transport, sequestration, bioavailability, and aquatic toxicity of potential contaminants. Insights gained from geochemical and toxicological modeling of water-quality data can be used to design appropriate mitigation and for economic planning for future mining activities.
Optimization of a GO2/GH2 Impinging Injector Element
NASA Technical Reports Server (NTRS)
Tucker, P. Kevin; Shyy, Wei; Vaidyanathan, Rajkumar
2001-01-01
An injector optimization methodology, method i, is used to investigate optimal design points for a gaseous oxygen/gaseous hydrogen (GO2/GH2) impinging injector element. The unlike impinging element, a fuel-oxidizer-fuel (F-O-F) triplet, is optimized in terms of design variables such as fuel pressure drop ΔPf, oxidizer pressure drop ΔPo, combustor length Lcomb, and impingement half-angle α, for a given mixture ratio and chamber pressure. Dependent variables such as energy release efficiency ERE, wall heat flux Qw, injector heat flux Qinj, relative combustor weight Wrel, and relative injector cost Crel are calculated and then correlated with the design variables. An empirical design methodology is used to generate these responses for 163 combinations of input variables. Method i is then used to generate response surfaces for each dependent variable. Desirability functions based on dependent variable constraints are created and used to facilitate development of composite response surfaces representing some, or all, of the five dependent variables in terms of the input variables. Three examples illustrating the utility and flexibility of method i are discussed in detail. First, joint response surfaces are constructed by sequentially adding dependent variables. Optimum designs are identified after addition of each variable and the effect each variable has on the design is shown. This stepwise demonstration also highlights the importance of including variables such as weight and cost early in the design process. Secondly, using the composite response surface which includes all five dependent variables, unequal weights are assigned to emphasize certain variables relative to others. Here, method i is used to enable objective trade studies on design issues such as component life and thrust-to-weight ratio. Finally, specific variable weights are further increased to illustrate the high marginal cost of realizing the last increment of injector performance and thruster weight.
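As a hedged illustration of the desirability-function step described above, the following Python sketch combines two response surfaces into a weighted composite desirability and searches for its optimum. The response functions, bounds, and coefficients are invented placeholders, not values from the study; only the Derringer-Suich-style construction is what the example is meant to show.

```python
import numpy as np

# Hypothetical quadratic response surfaces in two normalized design variables
# (fuel and oxidizer pressure drops); coefficients are illustrative only.
def ere(dp_f, dp_o):           # energy release efficiency, to be maximized
    return 0.90 + 0.05 * dp_f + 0.03 * dp_o - 0.02 * dp_f**2

def q_wall(dp_f, dp_o):        # wall heat flux, to be minimized
    return 1.0 + 0.4 * dp_o + 0.1 * dp_f * dp_o

def desirability_max(y, lo, hi):
    """Desirability for a 'larger is better' response, clipped to [0, 1]."""
    return np.clip((y - lo) / (hi - lo), 0.0, 1.0)

def desirability_min(y, lo, hi):
    """Desirability for a 'smaller is better' response, clipped to [0, 1]."""
    return np.clip((hi - y) / (hi - lo), 0.0, 1.0)

def composite(dp_f, dp_o, w_ere=1.0, w_q=1.0):
    """Weighted geometric mean of the individual desirabilities."""
    d1 = desirability_max(ere(dp_f, dp_o), 0.85, 1.00)
    d2 = desirability_min(q_wall(dp_f, dp_o), 0.5, 2.0)
    w = w_ere + w_q
    return (d1**w_ere * d2**w_q) ** (1.0 / w)

# Grid search over normalized design variables to locate the optimum.
grid = np.linspace(0.0, 1.0, 51)
scores = [(composite(a, b), a, b) for a in grid for b in grid]
print("best composite desirability %.3f at dp_f=%.2f, dp_o=%.2f" % max(scores))
```

Changing the weights w_ere and w_q mimics the trade studies mentioned in the abstract, where certain responses are emphasized relative to others.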
Historic range of variability for upland vegetation in the Medicine Bow National Forest, Wyoming
Gregory K. Dillon; Dennis H. Knight; Carolyn B. Meyer
2005-01-01
An approach for synthesizing the results of ecological research pertinent to land management is the analysis of the historic range of variability (HRV) for key ecosystem variables that are affected by management activities. This report provides an HRV analysis for the upland vegetation of the Medicine Bow National Forest in southeastern Wyoming. The variables include...
Historic range of variability for upland vegetation in the Bighorn National Forest, Wyoming
Carolyn B. Meyer; Dennis H. Knight; Gregory K. Dillon
2005-01-01
An approach for synthesizing the results of ecological research pertinent to land management is the analysis of the historic range of variability (HRV) for key ecosystem variables that are affected by management activities. This report provides an HRV analysis for the upland vegetation of the Bighorn National Forest in northcentral Wyoming. The variables include live...
Sources of Sex Discrimination in Educational Systems: A Conceptual Model
ERIC Educational Resources Information Center
Kutner, Nancy G.; Brogan, Donna
1976-01-01
A conceptual model is presented relating numerous variables contributing to sexism in American education. Discrimination is viewed as intervening between two sets of interrelated independent variables and the dependent variable of sex inequalities in educational attainment. Sex-role orientation changes are the key to significant change in the…
Relative Reinforcer Rates and Magnitudes Do Not Control Concurrent Choice Independently
ERIC Educational Resources Information Center
Elliffe, Douglas; Davison, Michael; Landon, Jason
2008-01-01
One assumption of the matching approach to choice is that different independent variables control choice independently of each other. We tested this assumption for reinforcer rate and magnitude in an extensive parametric experiment. Five pigeons responded for food reinforcement on switching-key concurrent variable-interval variable-interval…
Virtual Levels and Role Models: N-Level Structural Equations Model of Reciprocal Ratings Data.
Mehta, Paras D
2018-01-01
A general latent variable modeling framework called n-Level Structural Equations Modeling (NL-SEM) for dependent data structures is introduced. NL-SEM is applicable to a wide range of complex multilevel data structures (e.g., cross-classified, switching membership, etc.). Reciprocal dyadic ratings obtained in a round-robin design involve a complex set of dependencies that cannot be modeled within the Multilevel Modeling (MLM) or Structural Equations Modeling (SEM) frameworks. The Social Relations Model (SRM) for round-robin data is used as an example to illustrate key aspects of the NL-SEM framework. NL-SEM introduces novel constructs such as 'virtual levels' that allow a natural specification of latent variable SRMs. An empirical application of an explanatory SRM for personality using xxM, a software package implementing NL-SEM, is presented. Results show that person perceptions are an integral aspect of personality. Methodological implications of NL-SEM for the analyses of an emerging class of contextual- and relational-SEMs are discussed.
Psychosocial mediators of the impact of acculturation on adolescent substance abuse.
Saint-Jean, Gilbert; Martinez, Carlos A; Crandall, Lee A
2008-04-01
To identify and evaluate socio-psychological factors that are associated with differences in substance abuse prevalence between non-acculturated and acculturated Florida youth, we employed t-tests and logistic regression to analyze self-reported data from 63,000 middle and high school student participants in the 2004 Florida Youth Substance Abuse Survey. Questionnaire items covered socio-demographics; tobacco, alcohol, and illicit substance use; and perceptions and attitudes toward drug use. The outcome variable was past-30-day use of "any illicit drug." The key independent variable was language used at home (English/another language). The covariates were 32 socio-psychological factors that are considered risk and protective factors for adolescent drug abuse. Findings support the growing body of evidence suggesting that acculturation status is a strong predictor of substance use among adolescents. This effect may be mediated principally through the family and peer/individual psychosocial domains. The findings may have important implications for the design and implementation of drug prevention programs targeting teenagers.
A design for a sustained assessment of climate forcings and feedbacks on land use land cover change
Loveland, Thomas; Mahmood, Rezaul
2014-01-01
Land use and land cover change (LULCC) significantly influences the climate system. Hence, to prepare the nation for future climate change and variability, a sustained assessment of LULCC and its climatic impacts needs to be undertaken. To address this objective, not only do we need to determine contemporary trends in land use and land cover that affect, or are affected by, weather and climate but also identify sectors and regions that are most affected by weather and climate variability. Moreover, it is critical that we recognize land cover and regions that are most vulnerable to climate change and how end-use practices are adapting to climate change. This paper identifies a series of steps that need to be undertaken to address these key items. In addition, national-scale institutional capabilities are identified and discussed. Included in the discussions are challenges and opportunities for collaboration among these institutions for a sustained assessment.
Simplified Physics Based Models Research Topical Report on Task #2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mishra, Srikanta; Ganesh, Priya
We present a simplified-physics based approach, in which only the most important physical processes are modeled, to develop and validate simplified predictive models of CO2 sequestration in deep saline formations. The system of interest is a single vertical well injecting supercritical CO2 into a 2-D layered reservoir-caprock system with variable layer permeabilities. We use a set of well-designed full-physics compositional simulations to understand key processes and parameters affecting pressure propagation and buoyant plume migration. Based on these simulations, we have developed correlations for dimensionless injectivity as a function of the slope of the fractional-flow curve, the variance of layer permeability values, and the nature of the vertical permeability arrangement. The same variables, along with a modified gravity number, can be used to develop a correlation for the total storage efficiency within the CO2 plume footprint. Similar correlations are also developed to predict the average pressure within the injection reservoir and the pressure buildup within the caprock.
Knapp, Sandra; Sagona, Eva; Carbonell, Anna K.Z.; Chiarini, Franco
2017-01-01
Abstract The Solanum elaeagnifolium clade (Elaeagnifolium clade) contains five species of small, often rhizomatous, shrubs from deserts and dry forests in North and South America. Members of the clade were previously classified in sections Leprophora, Nycterium and Lathyrocarpum, and were not thought to be closely related. The group is sister to the species-rich monophyletic Old World clade of spiny solanums. The species of the group have an amphitropical distribution, with three species in Mexico and the southwestern United States and three species in Argentina. Solanum elaeagnifolium occurs in both North and South America, and is a noxious invasive weed in dry areas worldwide. Members of the group are highly variable morphologically, and this variability has led to much synonymy, particularly in the widespread S. elaeagnifolium. We here review the taxonomic history, morphology, relationships and ecology of these species and provide keys for their identification, descriptions, full synonymy (including designations of lectotypes) and nomenclatural notes. Illustrations, distribution maps and preliminary conservation assessments are provided for all species. PMID:29033654
Explaining implementation behaviour of the National Incident Management System (NIMS).
Jensen, Jessica; Youngs, George
2015-04-01
This paper explains the perceived implementation behaviour of counties in the United States with respect to the National Incident Management System (NIMS). The system represents a massive and historic policy mandate designed to restructure, standardise and thereby unify the efforts of a wide variety of emergency management entities. Specifically, this study examined variables identified in the NIMS and policy literature that might influence the behavioural intentions and actual behaviour of counties. It found that three key factors limit or promote how counties intend to implement NIMS and how they actually implement the system: policy characteristics related to NIMS, implementer views and a measure of local capacity. One additional variable-inter-organisational characteristics-was found to influence only actual behaviour. This study's findings suggest that the purpose underlying NIMS may not be fulfilled and confirm what disaster research has long suggested: the potential for standardisation in emergency management is limited. © 2015 The Author(s). Disasters © Overseas Development Institute, 2015.
Safety modeling of urban arterials in Shanghai, China.
Wang, Xuesong; Fan, Tianxiang; Chen, Ming; Deng, Bing; Wu, Bing; Tremont, Paul
2015-10-01
Traffic safety on urban arterials is influenced by several key variables including geometric design features, land use, traffic volume, and travel speeds. This paper is an exploratory study of the relationship of these variables to safety. It uses a comparatively new method of measuring speeds by extracting GPS data from taxis operating on Shanghai's urban network. This GPS derived speed data, hereafter called Floating Car Data (FCD) was used to calculate average speeds during peak and off-peak hours, and was acquired from samples of 15,000+ taxis traveling on 176 segments over 18 major arterials in central Shanghai. Geometric design features of these arterials and surrounding land use characteristics were obtained by field investigation, and crash data was obtained from police reports. Bayesian inference using four different models, Poisson-lognormal (PLN), PLN with Maximum Likelihood priors (PLN-ML), hierarchical PLN (HPLN), and HPLN with Maximum Likelihood priors (HPLN-ML), was used to estimate crash frequencies. Results showed the HPLN-ML models had the best goodness-of-fit and efficiency, and models with ML priors yielded estimates with the lowest standard errors. Crash frequencies increased with increases in traffic volume. Higher average speeds were associated with higher crash frequencies during peak periods, but not during off-peak periods. Several geometric design features including average segment length of arterial, number of lanes, presence of non-motorized lanes, number of access points, and commercial land use, were positively related to crash frequencies. Copyright © 2015 Elsevier Ltd. All rights reserved.
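For readers unfamiliar with the model family used above, the following Python sketch simulates the basic structure of a Poisson-lognormal crash-frequency model, in which the log of the expected crash count is a linear function of segment covariates plus a lognormally distributed heterogeneity term. The covariates, coefficients, and noise level are assumed placeholders for illustration, not estimates from the Shanghai study, and the sketch omits the hierarchical and Bayesian estimation layers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative segment-level covariates (placeholders, not the study's data).
n = 176
log_volume = rng.normal(9.0, 0.5, n)    # log traffic volume
peak_speed = rng.normal(35.0, 6.0, n)   # average peak-period speed (km/h)
access_pts = rng.poisson(4, n)          # number of access points

# Poisson-lognormal structure: log(mu) = X*beta + eps, eps ~ Normal(0, sigma).
beta = np.array([-6.0, 0.8, 0.02, 0.05])   # assumed coefficients
sigma = 0.4                                # extra-Poisson heterogeneity
X = np.column_stack([np.ones(n), log_volume, peak_speed, access_pts])
eps = rng.normal(0.0, sigma, n)
mu = np.exp(X @ beta + eps)
crashes = rng.poisson(mu)

print("mean simulated crash frequency per segment:", round(crashes.mean(), 2))
```

In the study itself these coefficients are estimated by Bayesian inference rather than assumed, but the data-generating structure sketched here is what the PLN and HPLN models share.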
Security of continuous-variable quantum key distribution against general attacks.
Leverrier, Anthony; García-Patrón, Raúl; Renner, Renato; Cerf, Nicolas J
2013-01-18
We prove the security of Gaussian continuous-variable quantum key distribution with coherent states against arbitrary attacks in the finite-size regime. In contrast to previously known proofs of principle (based on the de Finetti theorem), our result is applicable in the practically relevant finite-size regime. This is achieved using a novel proof approach, which exploits phase-space symmetries of the protocols as well as the postselection technique introduced by Christandl, Koenig, and Renner [Phys. Rev. Lett. 102, 020504 (2009)].
Continuous variable quantum cryptography: beating the 3 dB loss limit.
Silberhorn, Ch; Ralph, T C; Lütkenhaus, N; Leuchs, G
2002-10-14
We demonstrate that secure quantum key distribution systems based on continuous variable implementations can operate beyond the apparent 3 dB loss limit that is implied by the beam splitting attack. The loss limit was established for standard minimum uncertainty states such as coherent states. We show that, by an appropriate postselection mechanism, we can enter a region where Eve's knowledge on Alice's key falls behind the information shared between Alice and Bob, even in the presence of substantial losses.
Framework for assessing key variable dependencies in loose-abrasive grinding and polishing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taylor, J.S.; Aikens, D.M.; Brown, N.J.
1995-12-01
This memo describes a framework for identifying all key variables that determine the figuring performance of loose-abrasive lapping and polishing machines. This framework is intended as a tool for prioritizing R&D issues, assessing the completeness of process models and experimental data, and for providing a mechanism to identify any assumptions in analytical models or experimental procedures. Future plans for preparing analytical models or performing experiments can refer to this framework in establishing the context of the work.
Ultrasonic determination of recrystallization
NASA Technical Reports Server (NTRS)
Generazio, E. R.
1986-01-01
Ultrasonic attenuation was measured for cold worked Nickel 200 samples annealed at increasing temperatures. Localized dislocation density variations, crystalline order and volume percent of recrystallized phase were determined over the anneal temperature range using transmission electron microscopy, X-ray diffraction, and metallurgy. The exponent of the frequency dependence of the attenuation was found to be a key variable relating ultrasonic attenuation to the thermal kinetics of the recrystallization process. Identification of this key variable allows for the ultrasonic determination of onset, degree, and completion of recrystallization.
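The key variable named above, the exponent of the frequency dependence of attenuation, is typically obtained by fitting a power law to attenuation-versus-frequency data. The following Python sketch shows one way to extract that exponent with a nonlinear least-squares fit; the frequencies, attenuation values, and the true exponent of 1.8 are synthetic, not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic attenuation-versus-frequency data (illustrative only).
freq_mhz = np.array([5.0, 10.0, 15.0, 20.0, 25.0])
noise = 1.0 + 0.03 * np.random.default_rng(1).normal(size=5)
atten_db_cm = 0.02 * freq_mhz**1.8 * noise

def power_law(f, a, n):
    """Attenuation modeled as a power law of frequency: alpha = a * f**n."""
    return a * f**n

(a_hat, n_hat), _ = curve_fit(power_law, freq_mhz, atten_db_cm, p0=(0.01, 2.0))
print(f"fitted frequency exponent n = {n_hat:.2f}")
```

Tracking how the fitted exponent changes with anneal temperature is, in spirit, how the onset and completion of recrystallization would be inferred.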
NASA Astrophysics Data System (ADS)
Guo, Ying; Li, Renjie; Liao, Qin; Zhou, Jian; Huang, Duan
2018-02-01
Discrete modulation is proven to be beneficial to improving the performance of continuous-variable quantum key distribution (CVQKD) in long-distance transmission. In this paper, we suggest a scheme to improve the maximal generated secret key rate of discretely modulated eight-state CVQKD using an optical amplifier (OA), at a slight cost in transmission distance. In the proposed scheme, an optical amplifier is exploited to compensate for the imperfection of Bob's apparatus, so that the generated secret key rate of the eight-state protocol is enhanced. Specifically, we investigate two types of optical amplifiers, the phase-insensitive amplifier (PIA) and the phase-sensitive amplifier (PSA), and thereby obtain approximately equivalent performance improvements for the eight-state CVQKD system when applying these two different amplifiers. Numerical simulation shows that the proposed scheme can improve the generated secret key rate of eight-state CVQKD in both the asymptotic limit and the finite-size regime. We also show that the proposed scheme can achieve relatively high-rate transmission in long-distance communication systems.
NASA Astrophysics Data System (ADS)
Lupo, Cosmo; Ottaviani, Carlo; Papanastasiou, Panagiotis; Pirandola, Stefano
2018-06-01
One crucial step in any quantum key distribution (QKD) scheme is parameter estimation. In a typical QKD protocol the users have to sacrifice part of their raw data to estimate the parameters of the communication channel as, for example, the error rate. This introduces a trade-off between the secret key rate and the accuracy of parameter estimation in the finite-size regime. Here we show that continuous-variable QKD is not subject to this constraint as the whole raw keys can be used for both parameter estimation and secret key generation, without compromising the security. First, we show that this property holds for measurement-device-independent (MDI) protocols, as a consequence of the fact that in a MDI protocol the correlations between Alice and Bob are postselected by the measurement performed by an untrusted relay. This result is then extended beyond the MDI framework by exploiting the fact that MDI protocols can simulate device-dependent one-way QKD with arbitrarily high precision.
NASA Astrophysics Data System (ADS)
Zhao, Yijia; Zhang, Yichen; Xu, Bingjie; Yu, Song; Guo, Hong
2018-04-01
The method of improving the performance of continuous-variable quantum key distribution protocols by postselection has been recently proposed and verified. In continuous-variable measurement-device-independent quantum key distribution (CV-MDI QKD) protocols, the measurement results are obtained from an untrusted third party, Charlie. There is still no effective method of improving CV-MDI QKD by postselection with untrusted measurement. We propose a method to improve the performance of the coherent-state CV-MDI QKD protocol by virtual photon subtraction via non-Gaussian postselection. The non-Gaussian postselection of transmitted data is equivalent to an ideal photon subtraction on the two-mode squeezed vacuum state, which is favorable for enhancing the performance of CV-MDI QKD. In the CV-MDI QKD protocol with non-Gaussian postselection, the two users select their own data independently. We demonstrate that the optimal performance of the renovated CV-MDI QKD protocol is obtained with the transmitted data selected only by Alice. By setting appropriate parameters for the virtual photon subtraction, the secret key rate and tolerable excess noise are both improved at long transmission distances. The method provides an effective optimization scheme for the application of CV-MDI QKD protocols.
ERIC Educational Resources Information Center
Fischer, Richard B.
1986-01-01
Defines key terms and discusses things to consider when setting fees for a continuing education program. These include (1) the organization's philosophy and mission, (2) certain key variables, (3) pricing strategy options, and (4) the test of reasonableness. (CH)
Seaworthy Quantum Key Distribution Design and Validation (SEAKEY)
2016-03-10
We have continued work calculating the key rates achievable parametrically with receiver performance. In addition, we describe the initial designs ...
A Key Pre-Distribution Scheme Based on µ-PBIBD for Enhancing Resilience in Wireless Sensor Networks.
Yuan, Qi; Ma, Chunguang; Yu, Haitao; Bian, Xuefen
2018-05-12
Many key pre-distribution (KPD) schemes based on combinatorial design have been proposed for secure communication in wireless sensor networks (WSNs). Due to the complexity of constructing the combinatorial design, it is infeasible to generate key rings using the corresponding combinatorial design in large-scale deployments of WSNs. In this paper, we present a definition of a new combinatorial design, termed “µ-partially balanced incomplete block design (µ-PBIBD)”, which is a refinement of the partially balanced incomplete block design (PBIBD), and then describe a 2-D construction of µ-PBIBD which is mapped to KPD in WSNs. Our approach has a simple construction that provides strong key connectivity but poor network resilience. To improve the network resilience of KPD based on 2-D µ-PBIBD, we propose a KPD scheme based on 3-D Ex-µ-PBIBD, which is a construction of µ-PBIBD extended from 2-D space to 3-D space. The Ex-µ-PBIBD KPD scheme improves network scalability and resilience while maintaining better key connectivity. Theoretical analysis and comparison with related schemes show that the key pre-distribution scheme based on Ex-µ-PBIBD provides high network resilience and better key scalability, while achieving a trade-off between network resilience and network connectivity.
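As background for the key pre-distribution idea, the following Python sketch shows the generic KPD mechanism: each node is pre-loaded with a key ring drawn from a common pool, and two nodes can secure a link only if their rings intersect. This is a simple random-assignment illustration, not the µ-PBIBD or Ex-µ-PBIBD construction proposed in the paper; all pool, ring, and network sizes are arbitrary.

```python
import random

def assign_key_rings(num_nodes, pool_size, ring_size, seed=0):
    """Generic key pre-distribution: give each node a random key ring."""
    rng = random.Random(seed)
    pool = list(range(pool_size))
    return [set(rng.sample(pool, ring_size)) for _ in range(num_nodes)]

def shared_keys(ring_a, ring_b):
    """Keys two nodes can use to secure their link (empty set = no direct link)."""
    return ring_a & ring_b

rings = assign_key_rings(num_nodes=50, pool_size=200, ring_size=20)
connected = sum(1 for i in range(50) for j in range(i + 1, 50)
                if shared_keys(rings[i], rings[j]))
print("directly securable links:", connected, "of", 50 * 49 // 2)
```

Combinatorial designs such as PBIBDs replace the random sampling above with structured key rings, which is what gives the deterministic connectivity and resilience properties analyzed in the paper.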
A Key Pre-Distribution Scheme Based on µ-PBIBD for Enhancing Resilience in Wireless Sensor Networks
Yuan, Qi; Ma, Chunguang; Yu, Haitao; Bian, Xuefen
2018-01-01
Many key pre-distribution (KPD) schemes based on combinatorial design have been proposed for secure communication in wireless sensor networks (WSNs). Due to the complexity of constructing the combinatorial design, it is infeasible to generate key rings using the corresponding combinatorial design in large-scale deployments of WSNs. In this paper, we present a definition of a new combinatorial design, termed “µ-partially balanced incomplete block design (µ-PBIBD)”, which is a refinement of the partially balanced incomplete block design (PBIBD), and then describe a 2-D construction of µ-PBIBD which is mapped to KPD in WSNs. Our approach has a simple construction that provides strong key connectivity but poor network resilience. To improve the network resilience of KPD based on 2-D µ-PBIBD, we propose a KPD scheme based on 3-D Ex-µ-PBIBD, which is a construction of µ-PBIBD extended from 2-D space to 3-D space. The Ex-µ-PBIBD KPD scheme improves network scalability and resilience while maintaining better key connectivity. Theoretical analysis and comparison with related schemes show that the key pre-distribution scheme based on Ex-µ-PBIBD provides high network resilience and better key scalability, while achieving a trade-off between network resilience and network connectivity. PMID:29757244
Becker, Betsy Jane; Aloe, Ariel M; Duvendack, Maren; Stanley, T D; Valentine, Jeffrey C; Fretheim, Atle; Tugwell, Peter
2017-09-01
To outline issues of importance to analytic approaches to the synthesis of quasi-experiments (QEs) and to provide a statistical model for use in analysis. We drew on studies of statistics, epidemiology, and social-science methodology to outline methods for synthesis of QE studies. The design and conduct of QEs, effect sizes from QEs, and moderator variables for the analysis of those effect sizes were discussed. Biases, confounding, design complexities, and comparisons across designs offer serious challenges to syntheses of QEs. Key components of meta-analyses of QEs were identified, including the aspects of QE study design to be coded and analyzed. Of utmost importance are the design and statistical controls implemented in the QEs. Such controls and any potential sources of bias and confounding must be modeled in analyses, along with aspects of the interventions and populations studied. Because of such controls, effect sizes from QEs are more complex than those from randomized experiments. A statistical meta-regression model that incorporates important features of the QEs under review was presented. Meta-analyses of QEs provide particular challenges, but thorough coding of intervention characteristics and study methods, along with careful analysis, should allow for sound inferences. Copyright © 2017 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shah, Monisha; Valenzuela, Jose Maria; Mora, Hector Alejandro Beltran
Countries around the world are in various stages of reforming and restructuring their power systems to better meet development needs and decarbonization commitments. Changes in technology, business models, societal needs, and environmental goals are increasing pressure on countries to consider improvements to their power systems. This report addresses key issues associated with clean restructuring--the transition from traditional, vertically integrated utilities to competitive wholesale markets that rely increasingly on variable renewable electricity sources, demand response, and other clean energy options. The report also includes case studies from Mexico, Denmark, and Germany to provide real-world examples of clean restructuring from different perspectives.
Hayward, R. David; Krause, Neal
2014-01-01
The use of longitudinal designs in the field of religion and health makes it important to understand how attrition bias may affect findings in this area. This study examines attrition in a 4-wave, 8-year study of older adults. Attrition resulted in a sample biased towards more educated and more religiously-involved individuals. Conditional linear growth curve models found that trajectories of change for some variables differed among attrition categories. Ineligibles had worsening depression, declining control, and declining attendance. Mortality was associated with worsening religious coping styles. Refusers experienced worsening depression. Nevertheless, there was no evidence of bias in the key religion and health results. PMID:25257794
Hayward, R David; Krause, Neal
2016-02-01
The use of longitudinal designs in the field of religion and health makes it important to understand how attrition bias may affect findings in this area. This study examines attrition in a 4-wave, 8-year study of older adults. Attrition resulted in a sample biased toward more educated and more religiously involved individuals. Conditional linear growth curve models found that trajectories of change for some variables differed among attrition categories. Ineligibles had worsening depression, declining control, and declining attendance. Mortality was associated with worsening religious coping styles. Refusers experienced worsening depression. Nevertheless, there was no evidence of bias in the key religion and health results.
Carpenter, John; Dickinson, Claire
2016-01-01
A key underlying assumption of interprofessional education (IPE) is that if the professions are brought together they have the opportunity to learn about each other and dispel the negative stereotypes which are presumed to hamper interprofessional collaboration in practice. This article explores the application of contact theory in IPE with reference to eight evaluation studies (1995-2012) which adopted this theoretical perspective. It proposes that educators should pay explicit attention to an intergroup perspective in designing IPE programmes and specifically to the "contact variables" identified by social psychologists studying intergroup encounters. This would increase the chances of the planned contact having a positive effect on attitude change.
Mahoney, Jeannette; Verghese, Joe
2014-01-01
Background. The relationship between executive functions (EF) and gait speed is well established. However, with the exception of dual tasking, the key components of EF that predict differences in gait performance have not been determined. Therefore, the current study was designed to determine whether processing speed, conflict resolution, and intraindividual variability in EF predicted variance in gait performance in single- and dual-task conditions. Methods. Participants were 234 nondemented older adults (mean age 76.48 years; 55% women) enrolled in a community-based cohort study. Gait speed was assessed using an instrumented walkway during single- and dual-task conditions. The flanker task was used to assess EF. Results. Results from the linear mixed effects model showed that (a) dual-task interference caused a significant dual-task cost in gait speed (estimate = 35.99; 95% CI = 33.19–38.80) and (b) of the cognitive predictors, only intraindividual variability was associated with gait speed (estimate = −.606; 95% CI = −1.11 to −.10). In unadjusted analyses, the three EF measures were related to gait speed in single- and dual-task conditions. However, in fully adjusted linear regression analysis, only intraindividual variability predicted performance differences in gait speed during dual tasking (B = −.901; 95% CI = −1.557 to −.245). Conclusion. Among the three EF measures assessed, intraindividual variability but not speed of processing or conflict resolution predicted performance differences in gait speed. PMID:24285744
Effect of Escitalopram on Hot Flash Interference: A Randomized, Controlled Trial
Carpenter, Janet S.; Guthrie, Katherine A.; Larson, Joseph C.; Freeman, Ellen W.; Joffe, Hadine; Reed, Susan D.; Ensrud, Kristine E.; LaCroix, Andrea Z.
2012-01-01
Objectives To estimate the effect of escitalopram 10–20 mg/day versus placebo for reducing hot flash interference in daily life and to understand correlates and predictors of reductions in hot flash interference, a key measure of quality of life. Design Multi-site, randomized, double-blind, placebo-controlled clinical trial. Patients 205 midlife women (46% African-American) who met criteria participated. Setting MsFLASH clinical sites in Boston, Indianapolis, Oakland, and Philadelphia. Intervention After baseline, women were randomized to 1 pill of escitalopram 10 mg/day (n=104) or placebo (n=101) with follow-up at 4 and 8 weeks. At week 4, those not achieving 50% fewer hot flashes were increased to 2 pills daily (20 mg/day or 2 placebo pills). Main outcome measures The Hot Flash Related Daily Interference Scale; correlates were variables from hot flash diaries; predictors were baseline demographics, clinical variables, depression, anxiety, sleep quality, and hot flashes. Results Escitalopram significantly reduced hot flash interference by 6.0 points more than placebo at week 4 and by 3.4 points more at week 8 (p=0.012). Reductions in hot flash interference correlated with changes in hot flash diary variables. However, baseline variables did not significantly predict reductions in hot flash interference. Conclusions Escitalopram 10–20 mg/day for 8 weeks improves women's quality of life, and this benefit did not vary by demographic, clinical, mood, sleep, or hot flash variables. PMID:22480818
Kumar, Rajesh; Nguyen, Elizabeth A; Roth, Lindsey A; Oh, Sam S; Gignoux, Christopher R.; Huntsman, Scott; Eng, Celeste; Moreno-Estrada, Andres; Sandoval, Karla; Peñaloza-Espinosa, Rosenda; López-López, Marisol; Avila, Pedro C.; Farber, Harold J.; Tcheurekdjian, Haig; Rodriguez-Cintron, William; Rodriguez-Santana, Jose R; Serebrisky, Denise; Thyne, Shannon M.; Williams, L. Keoki; Winkler, Cheryl; Bustamante, Carlos D.; Pérez-Stable, Eliseo J.; Borrell, Luisa N.; Burchard, Esteban G
2013-01-01
Background Atopy varies by ethnicity even within Latino groups. This variation may be due to environmental, socio-cultural or genetic factors. Objective To examine risk factors for atopy within a nationwide study of U.S. Latino children with and without asthma. Methods Aeroallergen skin test response was analyzed in 1830 US Latino subjects. Key determinants of atopy included: country/region of origin, generation in the U.S., acculturation, genetic ancestry and the site to which individuals migrated. Serial multivariate zero-inflated negative binomial regressions, stratified by asthma status, examined the association of each key determinant variable with the number of positive skin tests. In addition, the independent effect of each key variable was determined by including all key variables in the final models. Results In baseline analyses, African ancestry was associated with 3 times as many positive skin tests in participants with asthma (95% CI:1.62–5.57) and 3.26 times as many positive skin tests in control participants (95% CI: 1.02–10.39). Generation and recruitment site were also associated with atopy in crude models. In final models adjusted for key variables, Puerto Rican [exp(β) (95%CI): 1.31(1.02–1.69)] and mixed ethnicity [exp(β) (95%CI): 1.27(1.03–1.56)] asthmatics had a greater probability of positive skin tests compared to Mexican asthmatics. Ancestry associations were abrogated by recruitment site, but not region of origin. Conclusions Puerto Rican ethnicity and mixed origin were associated with degree of atopy within U.S. Latino children with asthma. African ancestry was not associated with degree of atopy after adjusting for recruitment site. Local environment variation, represented by site, was associated with degree of sensitization. PMID:23684070
NASA Astrophysics Data System (ADS)
Ho, M. W.; Devineni, N.; Cook, E. R.; Lall, U.
2017-12-01
As populations and associated economic activity in the US evolve, regional demands for water likewise change. For regions dependent on surface water, dams and reservoirs are critical to storing and managing releases of water and regulating the temporal and spatial availability of water in order to meet these demands. Storage capacities typically range from seasonal storage in the east to multi-annual and decadal-scale storage in the drier west. However, most dams in the US were designed with limited knowledge regarding the range, frequency, and persistence of hydroclimatic extremes. Demands for water supplied by these dams have likewise changed. Furthermore, many dams in the US are now reaching or have already exceeded their economic design life. The converging issues of aging dams, improved knowledge of hydroclimatic variability, and evolving demands for dam services result in a pressing need to evaluate existing reservoir capacities with respect to contemporary water demands, long term hydroclimatic variability, and service reliability into the future. Such an effort is possible given the recent development of two datasets that respectively address hydroclimatic variability in the conterminous United States over the past 555 years and human water demand related water stress over the same region. The first data set is a paleoclimate reconstruction of streamflow variability across the CONUS region based on a tree-ring informed reconstruction of the Palmer Drought Severity Index. This streamflow reconstruction suggested that wet spells with shorter dry spells were a key feature of 20th century streamflow compared with the preceding 450 years. The second data set is an annual cumulative drought index that is a measure of water balance based on water supplied through precipitation and water demands based on evaporative demands, agricultural, urban, and industrial demands. This index identified urban and regional hotspots that were particularly dependent on water transfers and vulnerable to persistent drought risk. These data sets are used in conjunction with the national inventory of dams to assess the current capacity of dams to meet water demands considering variability in streamflow over the past 555 years. A case study in the North-East US is presented.
NASA Technical Reports Server (NTRS)
Hopkins, Dale A.
1998-01-01
A key challenge in designing the new High Speed Civil Transport (HSCT) aircraft is determining a good match between the airframe and engine. Multidisciplinary design optimization can be used to solve the problem by adjusting parameters of both the engine and the airframe. Earlier, an example problem was presented of an HSCT aircraft with four mixed-flow turbofan engines and a baseline mission to carry 305 passengers 5000 nautical miles at a cruise speed of Mach 2.4. The problem was solved by coupling NASA Lewis Research Center's design optimization testbed (COMETBOARDS) with NASA Langley Research Center's Flight Optimization System (FLOPS). The computing time expended in solving the problem was substantial, and the instability of the FLOPS analyzer at certain design points caused difficulties. In an attempt to alleviate both of these limitations, we explored the use of two approximation concepts in the design optimization process. The two concepts, which are based on neural network and linear regression approximation, provide the reanalysis capability and design sensitivity analysis information required for the optimization process. The HSCT aircraft optimization problem was solved by using three alternate approaches; that is, the original FLOPS analyzer and two approximate (derived) analyzers. The approximate analyzers were calibrated and used in three different ranges of the design variables; narrow (interpolated), standard, and wide (extrapolated).
Field Research and Parametric Analysis in a Medical-Surgical Unit.
Nanda, Upali; Pati, Sipra; Nejati, Adeleh
2015-01-01
To study the workplace in a medical-surgical (med-surg) unit and to identify suboptimal environmental conditions that can be improved in the current unit and avoided in future design, through rapidly deployed field research and timely simulation. Literature emphasizes the importance of the healthcare workplace and the effect on patient outcomes. What is lacking are studies conducted on-site and used for immediate application in design to assess and improve workplace conditions. A rapidly deployed field research and simulation study was conducted in a 40-bed med-surg unit of a large healthcare system as part of the process of designing a new medical tower. Online surveys, systematic behavioral observations, semi-structured interviews, sound studies, and advanced spatial analysis through parametric modeling were conducted. The following created challenges for patient monitoring, care coordination, and management: (1) waste and variability in walking, (2) limited point-of-use access to supplies, (3) large distances traveled for minor tasks, and (4) low visibility and connectivity. The corridor is used as a workspace/communication hub. There is a distinct difference in beginning of day and night shift patterns and between walking "distance" and walking "sequence." There is a tendency for nurses to multitask, but a simulation exercise shows that for key tasks like medication delivery, multitasking may not always reduce walking distances. Co-location of medications, supplies, and nourishment; accommodation for work on wheels; and spatial and technological connectivity between care team and patients should be considered while designing a med-surg unit. Understanding the key activity sequences helps determine the proximity of spaces in relationship to patient rooms and each other. © The Author(s) 2015.
Multidisciplinary optimization of a controlled space structure using 150 design variables
NASA Technical Reports Server (NTRS)
James, Benjamin B.
1993-01-01
A controls-structures interaction design method is presented. The method coordinates standard finite-element structural analysis, multivariable controls, and nonlinear programming codes and allows simultaneous optimization of the structure and control system of a spacecraft. Global sensitivity equations are used to account for coupling between the disciplines. Use of global sensitivity equations helps solve optimization problems that have a large number of design variables and a high degree of coupling between disciplines. The preliminary design of a generic geostationary platform is used to demonstrate the multidisciplinary optimization method. Design problems using 15, 63, and 150 design variables to optimize truss member sizes and feedback gain values are solved and the results are presented. The goal is to reduce the total mass of the structure and the vibration control system while satisfying constraints on vibration decay rate. Incorporation of the nonnegligible mass of actuators causes an essential coupling between structural design variables and control design variables.
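To illustrate the global sensitivity equations mentioned above, the following Python sketch assembles local partial sensitivities from two coupled disciplines (structures and controls) and solves the resulting linear system for the total derivatives with respect to a single design variable. All numerical values are invented for illustration; a real application would use many design variables and vector-valued discipline outputs.

```python
import numpy as np

# Local partial sensitivities from each discipline (illustrative values):
# structural output y1 depends on design variable x and control output y2;
# control output y2 depends on x and structural output y1.
df1_dx, df1_dy2 = 0.8, 0.3    # structural analysis partials
df2_dx, df2_dy1 = -0.5, 0.2   # controls analysis partials

# Global sensitivity equations: account for the coupling by solving
# [[1, -df1/dy2], [-df2/dy1, 1]] * [dy1/dx, dy2/dx]^T = [df1/dx, df2/dx]^T.
A = np.array([[1.0, -df1_dy2],
              [-df2_dy1, 1.0]])
b = np.array([df1_dx, df2_dx])
dy_dx = np.linalg.solve(A, b)
print("dy1/dx =", round(dy_dx[0], 3), " dy2/dx =", round(dy_dx[1], 3))
```

Because only local partials are needed from each discipline, this is what allows the optimization to scale to the 150 coupled design variables described in the abstract.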
Information-Theoretic Metrics for Visualizing Gene-Environment Interactions
Chanda, Pritam; Zhang, Aidong; Brazeau, Daniel; Sucheston, Lara; Freudenheim, Jo L.; Ambrosone, Christine; Ramanathan, Murali
2007-01-01
The purpose of our work was to develop heuristics for visualizing and interpreting gene-environment interactions (GEIs) and to assess the dependence of candidate visualization metrics on biological and study-design factors. Two information-theoretic metrics, the k-way interaction information (KWII) and the total correlation information (TCI), were investigated. The effectiveness of the KWII and TCI to detect GEIs in a diverse range of simulated data sets and a Crohn disease data set was assessed. The sensitivity of the KWII and TCI spectra to biological and study-design variables was determined. Head-to-head comparisons with the relevance-chain, multifactor dimensionality reduction, and the pedigree disequilibrium test (PDT) methods were obtained. The KWII and TCI spectra, which are graphical summaries of the KWII and TCI for each subset of environmental and genotype variables, were found to detect each known GEI in the simulated data sets. The patterns in the KWII and TCI spectra were informative for factors such as case-control misassignment, locus heterogeneity, allele frequencies, and linkage disequilibrium. The KWII and TCI spectra were found to have excellent sensitivity for identifying the key disease-associated genetic variations in the Crohn disease data set. In head-to-head comparisons with the relevance-chain, multifactor dimensionality reduction, and PDT methods, the results from visual interpretation of the KWII and TCI spectra performed satisfactorily. The KWII and TCI are promising metrics for visualizing GEIs. They are capable of detecting interactions among numerous single-nucleotide polymorphisms and environmental variables for a diverse range of GEI models. PMID:17924337
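The two metrics above have standard entropy-based definitions: the total correlation (TCI) of a variable set is the sum of the individual entropies minus the joint entropy, and the k-way interaction information (KWII) is an alternating sum of joint entropies over all subsets. The Python sketch below computes both from discrete data; the toy SNP/environment/phenotype data are generated on the fly and are not from the Crohn disease study.

```python
import numpy as np
from itertools import combinations

def entropy(data, cols):
    """Empirical joint entropy (bits) of the selected discrete columns."""
    if not cols:
        return 0.0
    _, counts = np.unique(data[:, cols], axis=0, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def kwii(data, cols):
    """k-way interaction information: -sum over subsets T of (-1)^(k-|T|) H(T)."""
    k = len(cols)
    total = 0.0
    for r in range(k + 1):
        for sub in combinations(cols, r):
            total += (-1) ** (k - r) * entropy(data, list(sub))
    return -total

def tci(data, cols):
    """Total correlation information: sum of marginal entropies minus joint entropy."""
    return sum(entropy(data, [c]) for c in cols) - entropy(data, list(cols))

# Toy data: two SNPs, one environment variable, one phenotype (all discrete).
rng = np.random.default_rng(2)
snp1 = rng.integers(0, 3, 500)
snp2 = rng.integers(0, 3, 500)
env = rng.integers(0, 2, 500)
pheno = ((snp1 > 1) & (env == 1)).astype(int)   # interaction by construction
data = np.column_stack([snp1, snp2, env, pheno])

print("KWII(snp1, env, pheno) =", round(kwii(data, [0, 2, 3]), 3))
print("TCI(snp1, snp2, env, pheno) =", round(tci(data, [0, 1, 2, 3]), 3))
```

For two variables the KWII reduces to the ordinary mutual information, which is a useful sanity check when implementing the spectra described in the abstract.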
Quantum key distribution using basis encoding of Gaussian-modulated coherent states
NASA Astrophysics Data System (ADS)
Huang, Peng; Huang, Jingzheng; Zhang, Zheshen; Zeng, Guihua
2018-04-01
The continuous-variable quantum key distribution (CVQKD) has been demonstrated to be available in practical secure quantum cryptography. However, its performance is restricted strongly by the channel excess noise and the reconciliation efficiency. In this paper, we present a quantum key distribution (QKD) protocol by encoding the secret keys on the random choices of two measurement bases: the conjugate quadratures X and P . The employed encoding method can dramatically weaken the effects of channel excess noise and reconciliation efficiency on the performance of the QKD protocol. Subsequently, the proposed scheme exhibits the capability to tolerate much higher excess noise and enables us to reach a much longer secure transmission distance even at lower reconciliation efficiency. The proposal can work alternatively to strengthen significantly the performance of the known Gaussian-modulated CVQKD protocol and serve as a multiplier for practical secure quantum cryptography with continuous variables.
NASA Astrophysics Data System (ADS)
Zhang, Hang; Mao, Yu; Huang, Duan; Li, Jiawei; Zhang, Ling; Guo, Ying
2018-05-01
We introduce a reliable scheme for continuous-variable quantum key distribution (CV-QKD) by using orthogonal frequency division multiplexing (OFDM). As a spectrally efficient multiplexing technique, OFDM allows a large number of closely spaced orthogonal subcarrier signals used to carry data on several parallel data streams or channels. We place emphasis on modulator impairments which would inevitably arise in the OFDM system and analyze how these impairments affect the OFDM-based CV-QKD system. Moreover, we also evaluate the security in the asymptotic limit and the Pirandola-Laurenza-Ottaviani-Banchi upper bound. Results indicate that although the emergence of imperfect modulation would bring about a slight decrease in the secret key bit rate of each subcarrier, the multiplexing technique combined with CV-QKD results in a desirable improvement on the total secret key bit rate which can raise the numerical value about an order of magnitude.
NASA Astrophysics Data System (ADS)
Jiang, Xue-Qin; Huang, Peng; Huang, Duan; Lin, Dakai; Zeng, Guihua
2017-02-01
Achieving information theoretic security with practical complexity is of great interest to continuous-variable quantum key distribution in the postprocessing procedure. In this paper, we propose a reconciliation scheme based on the punctured low-density parity-check (LDPC) codes. Compared to the well-known multidimensional reconciliation scheme, the present scheme has lower time complexity. Especially when the chosen punctured LDPC code achieves the Shannon capacity, the proposed reconciliation scheme can remove the information that has been leaked to an eavesdropper in the quantum transmission phase. Therefore, there is no information leaked to the eavesdropper after the reconciliation stage. This indicates that the privacy amplification algorithm of the postprocessing procedure is no more needed after the reconciliation process. These features lead to a higher secret key rate, optimal performance, and availability for the involved quantum key distribution scheme.
NASA Astrophysics Data System (ADS)
Wang, Tao; Huang, Peng; Zhou, Yingming; Liu, Weiqi; Zeng, Guihua
2018-01-01
In a practical continuous-variable quantum key distribution (CVQKD) system, real-time shot-noise measurement (RTSNM) is an essential procedure for preventing the eavesdropper exploiting the practical security loopholes. However, the performance of this procedure itself is not analyzed under the real-world condition. Therefore, we indicate the RTSNM practical performance and investigate its effects on the CVQKD system. In particular, due to the finite-size effect, the shot-noise measurement at the receiver's side may decrease the precision of parameter estimation and consequently result in a tight security bound. To mitigate that, we optimize the block size for RTSNM under the ensemble size limitation to maximize the secure key rate. Moreover, the effect of finite dynamics of amplitude modulator in this scheme is studied and its mitigation method is also proposed. Our work indicates the practical performance of RTSNM and provides the real secret key rate under it.
NASA Astrophysics Data System (ADS)
Lupo, Cosmo; Ottaviani, Carlo; Papanastasiou, Panagiotis; Pirandola, Stefano
2018-05-01
We present a rigorous security analysis of continuous-variable measurement-device-independent quantum key distribution (CV MDI QKD) in a finite-size scenario. The security proof is obtained in two steps: by first assessing the security against collective Gaussian attacks, and then extending to the most general class of coherent attacks via the Gaussian de Finetti reduction. Our result combines recent state-of-the-art security proofs for CV QKD with findings about min-entropy calculus and parameter estimation. In doing so, we improve the finite-size estimate of the secret key rate. Our conclusions confirm that CV MDI protocols allow for high rates on the metropolitan scale, and may achieve a nonzero secret key rate against the most general class of coherent attacks after 10^7-10^9 quantum signal transmissions, depending on loss and noise, and on the required level of security.
Quantum key distribution using continuous-variable non-Gaussian states
NASA Astrophysics Data System (ADS)
Borelli, L. F. M.; Aguiar, L. S.; Roversi, J. A.; Vidiella-Barranco, A.
2016-02-01
In this work, we present a quantum key distribution protocol using continuous-variable non-Gaussian states, homodyne detection and post-selection. The employed signal states are the photon added then subtracted coherent states (PASCS) in which one photon is added and subsequently one photon is subtracted from the field. We analyze the performance of our protocol, compared with a coherent state-based protocol, for two different attacks that could be carried out by the eavesdropper (Eve). We calculate the secret key rate transmission in a lossy line for a superior channel (beam-splitter) attack, and we show that we may increase the secret key generation rate by using the non-Gaussian PASCS rather than coherent states. We also consider the simultaneous quadrature measurement (intercept-resend) attack, and we show that the efficiency of Eve's attack is substantially reduced if PASCS are used as signal states.
Seaworthy Quantum Key Distribution Design and Validation (SEAKEY)
2015-05-27
NASA Astrophysics Data System (ADS)
Vanwalleghem, T.; Román, A.; Peña, A.; Laguna, A.; Giráldez, J. V.
2017-12-01
There is a need for better understanding the processes influencing soil formation and the resulting distribution of soil properties in the critical zone. Soil properties can exhibit strong spatial variation, even at the small catchment scale. Especially soil carbon pools in semi-arid, mountainous areas are highly uncertain because bulk density and stoniness are very heterogeneous and rarely measured explicitly. In this study, we explore the spatial variability in key soil properties (soil carbon stocks, stoniness, bulk density and soil depth) as a function of processes shaping the critical zone (weathering, erosion, soil water fluxes and vegetation patterns). We also compare the potential of traditional digital soil mapping versus a mechanistic soil formation model (MILESD) for predicting these key soil properties. Soil core samples were collected from 67 locations at 6 depths. Total soil organic carbon stocks were 4.38 kg m-2. Solar radiation proved to be the key variable controlling soil carbon distribution. Stone content was mostly controlled by slope, indicating the importance of erosion. Spatial distribution of bulk density was found to be highly random. Finally, total carbon stocks were predicted using a random forest model whose main covariates were solar radiation and NDVI. The model predicts carbon stocks that are double as high on north versus south-facing slopes. However, validation showed that these covariates only explained 25% of the variation in the dataset. Apparently, present-day landscape and vegetation properties are not sufficient to fully explain variability in the soil carbon stocks in this complex terrain under natural vegetation. This is attributed to a high spatial variability in bulk density and stoniness, key variables controlling carbon stocks. Similar results were obtained with the mechanistic soil formation model MILESD, suggesting that more complex models might be needed to further explore this high spatial variability.
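The random forest step described above can be sketched in a few lines of Python with scikit-learn. The covariates (solar radiation, NDVI, slope) mirror those named in the abstract, but the data below are synthetic stand-ins with an assumed relationship, so the cross-validated R^2 and importances are illustrative only.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)

# Synthetic stand-ins for the 67 sampling locations (illustrative values only).
n = 67
solar_radiation = rng.uniform(0.5, 1.5, n)   # relative annual insolation
ndvi = rng.uniform(0.2, 0.8, n)
slope = rng.uniform(5, 40, n)                # degrees
carbon_stock = (6.0 - 2.0 * solar_radiation + 3.0 * ndvi
                + rng.normal(0, 1.0, n))     # assumed relationship + noise

X = np.column_stack([solar_radiation, ndvi, slope])
rf = RandomForestRegressor(n_estimators=500, random_state=0)
scores = cross_val_score(rf, X, carbon_stock, cv=5, scoring="r2")
print("cross-validated R^2:", round(scores.mean(), 2))

rf.fit(X, carbon_stock)
for name, imp in zip(["solar_radiation", "ndvi", "slope"], rf.feature_importances_):
    print(f"{name}: importance {imp:.2f}")
```

The low explained variance reported in the abstract (about 25%) corresponds to a situation where the residual noise term dominates the covariate signal, which is easy to reproduce by increasing the noise standard deviation in the synthetic data.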
Mathematical models of water application for a variable rate irrigating hill-seeder
USDA-ARS?s Scientific Manuscript database
A variable rate irrigating hill-seeder can adjust water application automatically according to the difference in soil moisture content in the field to alleviate drought and save water. Two key problems to realize variable rate water application are how to determine the right amount of water for the ...
Mathematic models of water application for a variable rate irrigating hill-seeder
USDA-ARS?s Scientific Manuscript database
A variable rate irrigating hill-seeder can adjust water application automatically according to the difference in soil moisture content in the field to alleviate drought and save water. Two key problems to realize variable rate water application are how to determine the right amount of water for the ...
Design approaches to experimental mediation
Pirlott, Angela G.; MacKinnon, David P.
2016-01-01
Identifying causal mechanisms has become a cornerstone of experimental social psychology, and editors in top social psychology journals champion the use of mediation methods, particularly innovative ones when possible (e.g. Halberstadt, 2010, Smith, 2012). Commonly, studies in experimental social psychology randomly assign participants to levels of the independent variable and measure the mediating and dependent variables, and the mediator is assumed to causally affect the dependent variable. However, participants are not randomly assigned to levels of the mediating variable(s), i.e., the relationship between the mediating and dependent variables is correlational. Although researchers likely know that correlational studies pose a risk of confounding, this problem seems forgotten when thinking about experimental designs randomly assigning participants to levels of the independent variable and measuring the mediator (i.e., “measurement-of-mediation” designs). Experimentally manipulating the mediator provides an approach to solving these problems, yet these methods contain their own set of challenges (e.g., Bullock, Green, & Ha, 2010). We describe types of experimental manipulations targeting the mediator (manipulations demonstrating a causal effect of the mediator on the dependent variable and manipulations targeting the strength of the causal effect of the mediator) and types of experimental designs (double randomization, concurrent double randomization, and parallel), provide published examples of the designs, and discuss the strengths and challenges of each design. Therefore, the goals of this paper include providing a practical guide to manipulation-of-mediator designs in light of their challenges and encouraging researchers to use more rigorous approaches to mediation because manipulation-of-mediator designs strengthen the ability to infer causality of the mediating variable on the dependent variable. PMID:27570259
Rosen, G D
2006-06-01
Meta-analysis is a vague descriptor used to encompass very diverse methods of data collection and analysis, ranging from simple averages to more complex statistical methods. Holo-analysis is a fully comprehensive statistical analysis of all available data and all available variables in a specified topic, with results expressed in a holistic factual empirical model. The objectives and applications of holo-analysis include software production for prediction of responses with confidence limits, translation of research conditions to praxis (field) circumstances, exposure of key missing variables, discovery of theoretically unpredictable variables and interactions, and planning future research. Holo-analyses are cited as examples of the effects on broiler feed intake and live weight gain of exogenous phytases, which account for 70% of variation in responses in terms of 20 highly significant chronological, dietary, environmental, genetic, managemental, and nutrient variables. Even better future accountancy of variation will be facilitated if and when authors of papers routinely provide key data for currently neglected variables, such as temperatures, complete feed formulations, and mortalities.
Multivariate analysis of sludge disintegration by microwave-hydrogen peroxide pretreatment process.
Ya-Wei, Wang; Cheng-Min, Gui; Xiao-Tang, Ni; Mei-Xue, Chen; Yuan-Song, Wei
2015-01-01
Microwave irradiation (with H2O2) has been shown to offer considerable advantages owing to its flexible control, low overall cost, and resulting higher soluble chemical oxygen demand (SCOD); accordingly, the method has been proposed recently as a means of improving sludge disintegration. However, the key factor controlling this sludge pretreatment process, pH, has received insufficient attention to date. To address this, the response surface approach (central composite design) was applied to evaluate the effects of total suspended solids (TSS, 2-20 g/L), pH (4-10), and H2O2 dosage (0-2 w/w) and their interactions on 16 response variables (e.g., SCOD released, pH, H2O2 remaining). The results demonstrated that all three factors affect sludge disintegration significantly, and no pronounced interactions between response variables were observed during disintegration, except for three variables (TCOD, TSS remaining, and H2O2 remaining). Quadratic predictive models were constructed for all 16 response variables (R²: 0.871-0.991). Taking SCOD as an example, the model and coefficients derived above were able to predict the performance of microwave pretreatment (enhanced by H2O2 and pH adjustment) from previously published studies. The predictive models developed were able to optimize the treatment process for multiple disintegration objectives. Copyright © 2014 Elsevier B.V. All rights reserved.
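A minimal sketch of fitting a second-order (quadratic) response surface of the type used above; the factor ranges follow the abstract, but the run values and the synthetic SCOD response are illustrative assumptions rather than the study's data.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n_runs = 20
tss = rng.uniform(2, 20, n_runs)    # total suspended solids, g/L
ph = rng.uniform(4, 10, n_runs)
h2o2 = rng.uniform(0, 2, n_runs)    # H2O2 dosage, w/w
X = np.column_stack([tss, ph, h2o2])
# Synthetic SCOD response with curvature (illustration only)
scod = 50 + 8 * tss + 12 * ph - 0.6 * ph**2 + 30 * h2o2 - 5 * h2o2**2 + rng.normal(0, 5, n_runs)

quad = PolynomialFeatures(degree=2, include_bias=False)  # linear, interaction, and squared terms
model = LinearRegression().fit(quad.fit_transform(X), scod)
print("R^2 of quadratic model:", model.score(quad.transform(X), scod))
```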
A residue-specific shift in stability and amyloidogenicity of antibody variable domains.
Nokwe, Cardine N; Zacharias, Martin; Yagi, Hisashi; Hora, Manuel; Reif, Bernd; Goto, Yuji; Buchner, Johannes
2014-09-26
Variable (V) domains of antibodies are essential for antigen recognition by our adaptive immune system. However, some variants of the light chain V domains (VL) form pathogenic amyloid fibrils in patients. It is so far unclear which residues play a key role in governing these processes. Here, we show that the conserved residue 2 of VL domains is crucial for controlling its thermodynamic stability and fibril formation. Hydrophobic side chains at position 2 stabilize the domain, whereas charged residues destabilize and lead to amyloid fibril formation. NMR experiments identified several segments within the core of the VL domain to be affected by changes in residue 2. Furthermore, molecular dynamic simulations showed that hydrophobic side chains at position 2 remain buried in a hydrophobic pocket, and charged side chains show a high flexibility. This results in a predicted difference in the dissociation free energy of ∼10 kJ mol(-1), which is in excellent agreement with our experimental values. Interestingly, this switch point is found only in VL domains of the κ family and not in VLλ or in VH domains, despite a highly similar domain architecture. Our results reveal novel insight into the architecture of variable domains and the prerequisites for formation of amyloid fibrils. This might also contribute to the rational design of stable variable antibody domains. © 2014 by The American Society for Biochemistry and Molecular Biology, Inc.
Bon, C; Toutain, P L; Concordet, D; Gehring, R; Martin-Jimenez, T; Smith, J; Pelligand, L; Martinez, M; Whittem, T; Riviere, J E; Mochel, J P
2018-04-01
A common feature of human and veterinary pharmacokinetics is the importance of identifying and quantifying the key determinants of between-patient variability in drug disposition and effects. Some of these attributes are already well known to the field of human pharmacology, such as bodyweight, age, or sex, while others are more specific to veterinary medicine, such as species, breed, and social behavior. Identification of these attributes has the potential to allow a better and more tailored use of therapeutic drugs both in companion and food-producing animals. Nonlinear mixed-effects (NLME) models have been purposely designed to characterize the sources of variability in drug disposition and response. The NLME approach can be used to explore the impact of population-associated variables on the relationship between drug administration, systemic exposure, and the levels of drug residues in tissues. The latter, while different from the method used by the US Food and Drug Administration for setting official withdrawal times (WT), can also be beneficial for estimating WT of approved animal drug products when used in an extralabel manner. Finally, NLME can also prove useful to optimize dosing schedules, or to analyze sparse data collected in situations where intensive blood collection is technically challenging, as in small animal species presenting limited blood volume such as poultry and fish. © 2017 John Wiley & Sons Ltd.
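The abstract describes nonlinear mixed-effects (NLME) modelling; as a simplified, runnable stand-in, the sketch below fits a linear mixed model of log drug clearance with bodyweight as a fixed effect and animal as a random grouping factor. All data, variable names, and coefficients are synthetic assumptions, and a full NLME analysis would use a nonlinear structural (e.g., compartmental) model instead.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
animals = np.repeat(np.arange(30), 4)                 # 30 animals, 4 occasions each
bw = np.repeat(rng.uniform(10, 40, 30), 4)            # hypothetical bodyweight, kg
animal_effect = np.repeat(rng.normal(0, 0.2, 30), 4)  # between-animal variability
log_cl = 0.5 + 0.75 * np.log(bw) + animal_effect + rng.normal(0, 0.1, len(bw))

df = pd.DataFrame({"log_cl": log_cl, "log_bw": np.log(bw), "animal": animals})
# Fixed effect of log bodyweight, random intercept per animal
fit = smf.mixedlm("log_cl ~ log_bw", df, groups=df["animal"]).fit()
print(fit.summary())
```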
Grid sensitivity capability for large scale structures
NASA Technical Reports Server (NTRS)
Nagendra, Gopal K.; Wallerstein, David V.
1989-01-01
The considerations and the resultant approach used to implement design sensitivity capability for grids into a large scale, general purpose finite element system (MSC/NASTRAN) are presented. The design variables are grid perturbations with a rather general linking capability. Moreover, shape and sizing variables may be linked together. The design is general enough to facilitate geometric modeling techniques for generating design variable linking schemes in an easy and straightforward manner. Test cases have been run and validated by comparison with the overall finite difference method. The linking of a design sensitivity capability for shape variables in MSC/NASTRAN with an optimizer would give designers a powerful, automated tool to carry out practical optimization design of real life, complicated structures.
An Optimization-Based Approach to Injector Element Design
NASA Technical Reports Server (NTRS)
Tucker, P. Kevin; Shyy, Wei; Vaidyanathan, Rajkumar; Turner, Jim (Technical Monitor)
2000-01-01
An injector optimization methodology, method i, is used to investigate optimal design points for gaseous oxygen/gaseous hydrogen (GO2/GH2) injector elements. A swirl coaxial element and an unlike impinging element (a fuel-oxidizer-fuel triplet) are used to facilitate the study. The elements are optimized in terms of design variables such as fuel pressure drop, ΔPf, oxidizer pressure drop, ΔPo, combustor length, Lcomb, and full cone swirl angle, θ (for the swirl element), or impingement half-angle, α (for the impinging element), at a given mixture ratio and chamber pressure. Dependent variables such as energy release efficiency, ERE, wall heat flux, Qw, injector heat flux, Qinj, relative combustor weight, Wrel, and relative injector cost, Crel, are calculated and then correlated with the design variables. An empirical design methodology is used to generate these responses for both element types. Method i is then used to generate response surfaces for each dependent variable for both types of elements. Desirability functions based on dependent variable constraints are created and used to facilitate development of composite response surfaces representing the five dependent variables in terms of the input variables. Three examples illustrating the utility and flexibility of method i are discussed in detail for each element type. First, joint response surfaces are constructed by sequentially adding dependent variables. Optimum designs are identified after addition of each variable and the effect each variable has on the element design is illustrated. This stepwise demonstration also highlights the importance of including variables such as weight and cost early in the design process. Second, using the composite response surface that includes all five dependent variables, unequal weights are assigned to emphasize certain variables relative to others. Here, method i is used to enable objective trade studies on design issues such as component life and thrust-to-weight ratio. Finally, combining results from both elements to simulate a trade study, thrust-to-weight trends are illustrated and examined in detail.
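A minimal sketch of combining several predicted responses into a single weighted composite desirability (geometric mean), in the spirit of the joint response surfaces above. The response names, limits, target directions, and weights are illustrative assumptions, not the paper's values.

```python
import numpy as np

def desirability_max(y, low, high):
    """Larger-is-better desirability, linear between low (d=0) and high (d=1)."""
    return np.clip((y - low) / (high - low), 0.0, 1.0)

def desirability_min(y, low, high):
    """Smaller-is-better desirability, linear between high (d=0) and low (d=1)."""
    return np.clip((high - y) / (high - low), 0.0, 1.0)

# Example predicted responses at one candidate injector design point (hypothetical)
ere, q_wall, w_rel = 0.97, 35.0, 1.10
d = np.array([
    desirability_max(ere, 0.90, 0.99),     # maximize energy release efficiency
    desirability_min(q_wall, 20.0, 60.0),  # minimize wall heat flux
    desirability_min(w_rel, 0.8, 1.5),     # minimize relative combustor weight
])
weights = np.array([2.0, 1.0, 1.0])        # unequal weights emphasize ERE
composite = np.prod(d ** weights) ** (1.0 / weights.sum())
print("composite desirability:", composite)
```

An optimizer would then search the design variables for the point that maximizes this composite value.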
Advanced Stirling Convertor Heater Head Durability and Reliability Quantification
NASA Technical Reports Server (NTRS)
Krause, David L.; Shah, Ashwin R.; Korovaichuk, Igor; Kalluri, Sreeramesh
2008-01-01
The National Aeronautics and Space Administration (NASA) has identified the high efficiency Advanced Stirling Radioisotope Generator (ASRG) as a candidate power source for long duration Science missions, such as lunar applications, Mars rovers, and deep space missions, that require reliable design lifetimes of up to 17 years. Resistance to creep deformation of the MarM-247 heater head (HH), a structurally critical component of the ASRG Advanced Stirling Convertor (ASC), under high temperatures (up to 850 C) is a key design driver for durability. Inherent uncertainties in the creep behavior of the thin-walled HH and the variations in the wall thickness, control temperature, and working gas pressure need to be accounted for in the life and reliability prediction. Due to the availability of very limited test data, assuring life and reliability of the HH is a challenging task. The NASA Glenn Research Center (GRC) has adopted an integrated approach combining available uniaxial MarM-247 material behavior testing, HH benchmark testing and advanced analysis in order to demonstrate the integrity, life and reliability of the HH under expected mission conditions. The proposed paper describes analytical aspects of the deterministic and probabilistic approaches and results. The deterministic approach involves development of the creep constitutive model for the MarM-247 (akin to the Oak Ridge National Laboratory master curve model used previously for Inconel 718 (Special Metals Corporation)) and nonlinear finite element analysis to predict the mean life. The probabilistic approach includes evaluation of the effect of design variable uncertainties in material creep behavior, geometry and operating conditions on life and reliability for the expected life. The sensitivity of the uncertainties in the design variables on the HH reliability is also quantified, and guidelines to improve reliability are discussed.
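A minimal sketch of a Monte Carlo reliability estimate in the spirit of the probabilistic approach above: assumed uncertainties in wall thickness, temperature, and pressure are propagated through a surrogate creep-life relation. The life model, distributions, and parameter values are purely illustrative assumptions, not the GRC creep model.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000
thickness = rng.normal(1.0, 0.03, n)   # normalized wall thickness
temp = rng.normal(850.0, 5.0, n)       # control temperature, C
pressure = rng.normal(1.0, 0.05, n)    # normalized working-gas pressure

# Surrogate life model: life drops with hoop stress (~ pressure/thickness) and temperature
life_years = 40.0 * (thickness / pressure) ** 4 * np.exp(-(temp - 850.0) / 25.0)
reliability = np.mean(life_years >= 17.0)  # 17-year design-life requirement
print(f"estimated reliability over 17 years: {reliability:.4f}")
```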
Roux, Emmanuel; Gaborit, Pascal; Romaña, Christine A; Girod, Romain; Dessay, Nadine; Dusfour, Isabelle
2013-12-01
Sampling design is a key issue when establishing species inventories and characterizing habitats within highly heterogeneous landscapes. Sampling efforts in such environments may be constrained and many field studies only rely on subjective and/or qualitative approaches to design collection strategy. The region of Cacao, in French Guiana, provides an excellent study site to understand the presence and abundance of Anopheles mosquitoes, their species dynamics and the transmission risk of malaria across various environments. We propose an objective methodology to define a stratified sampling design. Following thorough environmental characterization, a factorial analysis of mixed groups allows the data to be reduced and non-collinear principal components to be identified while balancing the influences of the different environmental factors. Such components defined new variables which could then be used in a robust k-means clustering procedure. Then, we identified five clusters that corresponded to our sampling strata and selected sampling sites in each stratum. We validated our method by comparing the species overlap of entomological collections from selected sites and the environmental similarities of the same sites. The Morisita index was significantly correlated (Pearson linear correlation) with environmental similarity based on i) the balanced environmental variable groups considered jointly (p = 0.001) and ii) land cover/use (p-value < 0.001). The Jaccard index was significantly correlated with land cover/use-based environmental similarity (p-value = 0.001). The results validate our sampling approach. Land cover/use maps (based on high spatial resolution satellite images) were shown to be particularly useful when studying the presence, density and diversity of Anopheles mosquitoes at local scales and in very heterogeneous landscapes.
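A minimal sketch of the stratification idea: cluster sites on already-reduced environmental components with k-means, then compare two sites' species counts with an abundance-based overlap index (the Morisita-Horn variant is used here). All data are synthetic placeholders, and the exact index form used in the study may differ.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
env_components = rng.normal(size=(120, 3))  # e.g. factors from a mixed-data factor analysis
strata = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(env_components)
print("sites per stratum:", np.bincount(strata))

def morisita_horn(x, y):
    """Morisita-Horn overlap between two species-abundance vectors."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    X, Y = x.sum(), y.sum()
    return 2.0 * np.sum(x * y) / ((np.sum(x**2) / X**2 + np.sum(y**2) / Y**2) * X * Y)

site_a = [12, 3, 0, 5, 1]  # hypothetical Anopheles counts per species
site_b = [10, 1, 2, 4, 0]
print("Morisita-Horn overlap:", morisita_horn(site_a, site_b))
```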
Transient Seepage for Levee Engineering Analyses
NASA Astrophysics Data System (ADS)
Tracy, F. T.
2017-12-01
Historically, steady-state seepage analyses have been a key tool for designing levees by practicing engineers. However, with the advances in computer modeling, transient seepage analysis has become a potentially viable tool. A complication is that the levees usually have partially saturated flow, and this is significantly more complicated in transient flow. This poster illustrates four elements of our research in partially saturated flow relating to the use of transient seepage for levee design: (1) a comparison of results from SEEP2D, SEEP/W, and SLIDE for a generic levee cross section common to the southeastern United States; (2) the results of a sensitivity study of varying saturated hydraulic conductivity, the volumetric water content function (as represented by van Genuchten), and volumetric compressibility; (3) a comparison of when soils do and do not exhibit hysteresis, and (4) a description of proper and improper use of transient seepage in levee design. The variables considered for the sensitivity and hysteresis studies are pore pressure beneath the confining layer at the toe, the flow rate through the levee system, and a levee saturation coefficient varying between 0 and 1. Getting results for SEEP2D, SEEP/W, and SLIDE to match proved more difficult than expected. After some effort, the results matched reasonably well. Differences in results were caused by various factors, including bugs, different finite element meshes, different numerical formulations of the system of nonlinear equations to be solved, and differences in convergence criteria. Varying volumetric compressibility affected the above test variables the most. The levee saturation coefficient was most affected by the use of hysteresis. The improper use of pore pressures from a transient finite element seepage solution imported into a slope stability computation was found to be the most grievous mistake in using transient seepage in the design of levees.
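A minimal sketch of the van Genuchten water-retention function used to describe partially saturated soils in sensitivity studies like the one above; the parameter values are illustrative assumptions, not calibrated levee properties.

```python
import numpy as np

def van_genuchten_theta(h, theta_r, theta_s, alpha, n):
    """Volumetric water content as a function of suction head h (positive, in meters)."""
    m = 1.0 - 1.0 / n
    se = (1.0 + (alpha * np.abs(h)) ** n) ** (-m)  # effective saturation
    return theta_r + (theta_s - theta_r) * se

h = np.linspace(0.01, 10.0, 5)  # suction heads, m
print(van_genuchten_theta(h, theta_r=0.05, theta_s=0.40, alpha=2.0, n=1.6))
```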
NASA Astrophysics Data System (ADS)
Zeng, Xiaohua; Li, Guanghan; Yin, Guodong; Song, Dafeng; Li, Sheng; Yang, Nannan
2018-02-01
Equipping a hydraulic hub-motor auxiliary system (HHMAS), which mainly consists of a hydraulic variable pump, a hydraulic hub-motor, a hydraulic valve block and hydraulic accumulators, with part-time all-wheel-drive functions improves the power performance and fuel economy of heavy commercial vehicles. The coordinated control problem that occurs when HHMAS operates in the auxiliary drive mode is addressed in this paper; solving this problem is key to realizing the full benefit of HHMAS. To achieve a reasonable distribution of the engine power between mechanical and hydraulic paths, a nonlinear control scheme based on model predictive control (MPC) is investigated. First, a nonlinear model of HHMAS with vehicle dynamics and tire slip characteristics is built, and a simplified controller-design-oriented model is derived. Then, a steady-state feedforward + dynamic MPC feedback controller (FMPC) is designed to calculate the control input sequence of engine torque and hydraulic variable pump displacement. Finally, the controller is tested in the MATLAB/Simulink and AMESim co-simulation platform and the hardware-in-the-loop experiment platform, and its performance is compared with that of the existing proportional-integral-derivative controller and the feedforward controller under the same conditions. Simulation results show that the designed FMPC has the best performance, and control performance can be guaranteed in a real-time environment. Compared with the tracking control error of the feedforward controller, that of the designed FMPC is decreased by 85%, and the traction efficiency performance is improved by 23% under a low-friction-surface condition. Moreover, under common road conditions for heavy commercial vehicles, the traction force increases by up to 13.4-15.6%.
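A minimal sketch of a receding-horizon (MPC-style) tracking controller for a toy linear system; the real HHMAS controller is nonlinear and constrained, so this only illustrates the predict-optimize-apply-first-input pattern. The system matrices, horizon, and weights are arbitrary assumptions.

```python
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 0.9]])  # toy discrete-time dynamics
B = np.array([[0.0], [0.1]])
N = 20                                   # prediction horizon
r = np.array([1.0, 0.0])                 # state reference
lam = 0.01                               # input-effort weight

def mpc_step(x):
    # Stacked prediction X = F x + G U, then solve a least-squares tracking problem.
    F = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
    G = np.zeros((2 * N, N))
    for k in range(N):
        for j in range(k + 1):
            G[2 * k:2 * k + 2, j] = (np.linalg.matrix_power(A, k - j) @ B).ravel()
    R = np.tile(r, N)
    # minimize ||F x + G U - R||^2 + lam ||U||^2
    H = G.T @ G + lam * np.eye(N)
    U = np.linalg.solve(H, G.T @ (R - F @ x))
    return U[0]                          # apply only the first input (receding horizon)

x = np.array([0.0, 0.0])
for _ in range(50):
    u = mpc_step(x)
    x = A @ x + B.ravel() * u
print("final state:", x)
```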
Integrating Variable Renewable Energy into the Grid: Key Issues, Greening the Grid (Spanish Version)
DOE Office of Scientific and Technical Information (OSTI.GOV)
This is the Spanish version of 'Greening the Grid - Integrating Variable Renewable Energy into the Grid: Key Issues'. To foster sustainable, low-emission development, many countries are establishing ambitious renewable energy targets for their electricity supply. Because solar and wind tend to be more variable and uncertain than conventional sources, meeting these targets will involve changes to power system planning and operations. Grid integration is the practice of developing efficient ways to deliver variable renewable energy (VRE) to the grid. Good integration methods maximize the cost-effectiveness of incorporating VRE into the power system while maintaining or increasing system stability and reliability. When considering grid integration, policy makers, regulators, and system operators consider a variety of issues, which can be organized into four broad topics: New Renewable Energy Generation, New Transmission, Increased System Flexibility, and Planning for a High RE Future.
Fast Synthesis of Gibbsite Nanoplates and Process Optimization using Box-Behnken Experimental Design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Xin; Zhang, Xianwen; Graham, Trent R.
Developing the ability to synthesize compositionally and morphologically well-defined gibbsite particles at the nanoscale with high yield is an ongoing need that has not yet achieved the level of rational design. Here we report optimization of a clean inorganic synthesis route based on statistical experimental design examining the influence of Al(OH)3 gel precursor concentration, pH, and aging time at temperature. At 80 °C, the optimum synthesis conditions of gel concentration at 0.5 M, pH at 9.2, and time at 72 h maximized the reaction yield up to ~87%. The resulting gibbsite product is composed of highly uniform euhedral hexagonal nanoplates within a basal plane diameter range of 200-400 nm. The independent roles of key system variables in the growth mechanism are considered. On the basis of these optimized experimental conditions, the synthesis procedure, which is both cost-effective and environmentally friendly, has the potential for mass production scale-up of high quality gibbsite material for various fundamental research and industrial applications.
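A minimal sketch of generating a three-factor Box-Behnken design in coded units and mapping it onto the factors discussed above (gel concentration, pH, aging time); the factor ranges and number of center points are illustrative assumptions, not the exact design used in the paper.

```python
import itertools
import numpy as np

# 12 edge-midpoint runs (+/-1 on two factors, 0 on the third) plus 3 center points
runs = []
for i, j in itertools.combinations(range(3), 2):
    for a, b in itertools.product((-1, 1), repeat=2):
        point = [0, 0, 0]
        point[i], point[j] = a, b
        runs.append(point)
runs += [[0, 0, 0]] * 3
coded = np.array(runs, dtype=float)

# Map coded levels to assumed physical factor ranges: [low, high]
ranges = np.array([[0.25, 1.0],     # gel concentration, M
                   [7.0, 11.0],     # pH
                   [24.0, 120.0]])  # aging time, h
center = ranges.mean(axis=1)
half = np.diff(ranges, axis=1).ravel() / 2.0
design = center + coded * half      # one row per experimental run
print(design)
```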
Determination of the key parameters affecting historic communications satellite trends
NASA Technical Reports Server (NTRS)
Namkoong, D.
1984-01-01
Data representing 13 series of commercial communications satellites procured between 1968 and 1982 were analyzed to determine the factors that have contributed to the general reduction over time of the per circuit cost of communications satellites. The model by which the data were analyzed was derived from a general telecommunications application and modified to be more directly applicable to communications satellites. In this model satellite mass, bandwidth-years, and technological change were the variable parameters. A linear, least squares, multiple regression routine was used to obtain the measure of significance of the model. Correlation was measured by the coefficient of determination (R²) and the t-statistic. The results showed that no correlation could be established with satellite mass. Bandwidth-years, however, did show a significant correlation. Technological change in the bandwidth-years case was a significant factor in the model. This analysis and the conclusions derived are based on mature technologies, i.e., satellite designs that are evolutions of earlier designs rather than the first of a new generation. The findings, therefore, are appropriate to future satellites only if they are a continuation of design evolution.
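A minimal sketch of the regression setup described above: ordinary least squares of log per-circuit cost on satellite mass, bandwidth-years, and a time (technology) trend, reporting R² and t-statistics. All data are synthetic placeholders and the variable construction is an assumption.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 13                             # 13 satellite series in the study
mass = rng.uniform(500, 2000, n)   # kg (hypothetical)
bandwidth_years = rng.uniform(100, 4000, n)
year = rng.uniform(1968, 1982, n)
log_cost = 10 - 0.0004 * bandwidth_years - 0.05 * (year - 1968) + rng.normal(0, 0.3, n)

X = sm.add_constant(np.column_stack([mass, bandwidth_years, year - 1968]))
fit = sm.OLS(log_cost, X).fit()
print("R^2:", fit.rsquared)
print("t-statistics:", fit.tvalues)
```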
Dynamic Simulation of a Periodic 10 K Sorption Cryocooler
NASA Technical Reports Server (NTRS)
Bhandari, P.; Rodriguez, J.; Bard, S.; Wade, L.
1994-01-01
A transient thermal simulation model has been developed to simulate the dynamic performance of a multiple-stage 10 K sorption cryocooler for spacecraft sensor cooling applications that require periodic quick-cooldown (under 2 minutes), negligible vibration, low power consumption, and long life (5 to 10 years). The model was specifically designed to represent the Brilliant Eyes Ten-Kelvin Sorption Cryocooler Experiment (BETSCE), but it can be adapted to represent other sorption cryocooler systems as well. The model simulates the heat transfer, mass transfer, and thermodynamic processes in the cryostat and the sorbent beds for the entire refrigeration cycle, and includes the transient effects of variable hydrogen supply pressures due to expansion and overflow of hydrogen during the cooldown operation. The paper describes model limitations and simplifying assumptions, with estimates of errors induced by them, and presents comparisons of performance predictions with ground experiments. An important benefit of the model is its ability to predict performance sensitivities to variations of key design and operational parameters. The insights thus obtained are expected to lead to higher efficiencies and lower weights for future designs.
Characterization of Swirl-Venturi Lean Direct Injection Designs for Aviation Gas-Turbine Combustion
NASA Technical Reports Server (NTRS)
Heath, Christopher M.
2013-01-01
Injector geometry, physical mixing, chemical processes, and engine cycle conditions together govern performance, operability and emission characteristics of aviation gas-turbine combustion systems. The present investigation explores swirl-venturi lean direct injection combustor fundamentals, characterizing the influence of key geometric injector parameters on reacting flow physics and emission production trends. In this computational study, a design space exploration was performed using a parameterized swirl-venturi lean direct injector model. From the parametric geometry, 20 three-element lean direct injection combustor sectors were produced and simulated using steady-state, Reynolds-averaged Navier-Stokes reacting computations. Species concentrations were solved directly using a reduced 18-step reaction mechanism for Jet-A. Turbulence closure was obtained using a nonlinear k-ε model. Results demonstrate sensitivities of the geometric perturbations on axially averaged flow field responses. Output variables include axial velocity, turbulent kinetic energy, static temperature, fuel patternation and minor species mass fractions. Significant trends have been reduced to surrogate model approximations, intended to guide future injector design trade studies and advance aviation gas-turbine combustion research.
FY2017 Updates to the SAS4A/SASSYS-1 Safety Analysis Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fanning, T. H.
The SAS4A/SASSYS-1 safety analysis software is used to perform deterministic analysis of anticipated events as well as design-basis and beyond-design-basis accidents for advanced fast reactors. It plays a central role in the analysis of U.S. DOE conceptual designs, proposed test and demonstration reactors, and in domestic and international collaborations. This report summarizes the code development activities that have taken place during FY2017. Extensions to the void and cladding reactivity feedback models have been implemented, and Control System capabilities have been improved through a new virtual data acquisition system for plant state variables and an additional Block Signal for a variable lag compensator to represent reactivity feedback for novel shutdown devices. Current code development and maintenance needs are also summarized in three key areas: software quality assurance, modeling improvements, and maintenance of related tools. With ongoing support, SAS4A/SASSYS-1 can continue to fulfill its growing role in fast reactor safety analysis and help solidify DOE’s leadership role in fast reactor safety both domestically and in international collaborations.
Design principles for electrolytes and interfaces for stable lithium-metal batteries
NASA Astrophysics Data System (ADS)
Tikekar, Mukul D.; Choudhury, Snehashis; Tu, Zhengyuan; Archer, Lynden A.
2016-09-01
The future of electrochemical energy storage hinges on the advancement of science and technology that enables rechargeable batteries that utilize reactive metals as anodes. With specific capacity more than ten times that of the LiC6 anode used in present-day lithium-ion batteries, cells based on Li-metal anodes are of particular interest. Effective strategies for stabilizing the anode in such cells are now understood to be a requirement for progress on exceptional storage technologies, including Li-S and Li-O2 batteries. Multiple challenges—parasitic reactions of Li-metal with liquid electrolytes, unstable and dendritic electrodeposition, and dendrite-induced short circuits—derailed early efforts to commercialize such lithium-metal batteries. Here we consider approaches for rationally designing electrolytes and Li-metal/electrolyte interfaces for stable, dendrite-free operation of lithium-metal batteries. On the basis of fundamental understanding of the failure modes of reactive metal anodes, we discuss the key variables that govern the stability of electrodeposition at the Li anode and propose a universal framework for designing stable electrolytes and interfaces for lithium-metal batteries.
Using near infrared spectroscopy and heart rate variability to detect mental overload.
Durantin, G; Gagnon, J-F; Tremblay, S; Dehais, F
2014-02-01
Mental workload is a key factor influencing the occurrence of human error, especially during piloting and remotely operated vehicle (ROV) operations, where safety depends on the ability of pilots to act appropriately. In particular, excessively high or low mental workload can lead operators to neglect critical information. The objective of the present study is to investigate the potential of functional near infrared spectroscopy (fNIRS) - a non-invasive method of measuring prefrontal cortex activity - in combination with measurements of heart rate variability (HRV), to predict mental workload during a simulated piloting task, with particular regard to task engagement and disengagement. Twelve volunteers performed a computer-based piloting task in which they were asked to follow a dynamic target with their aircraft, a task designed to replicate key cognitive demands associated with real life ROV operating tasks. In order to cover a wide range of mental workload levels, task difficulty was manipulated in terms of processing load and difficulty of control - two critical sources of workload associated with piloting and remotely operating a vehicle. Results show that both fNIRS and HRV are sensitive to different levels of mental workload; notably, lower prefrontal activation and a lower LF/HF ratio at the highest level of difficulty suggest that these measures are suitable for mental overload detection. Moreover, these latter measurements point toward the existence of a quadratic model of mental workload. Copyright © 2013 Elsevier B.V. All rights reserved.
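A minimal sketch of the HRV side of such an analysis: estimating the LF/HF ratio from RR intervals by resampling the tachogram and integrating Welch power-spectral-density bands. The RR series is synthetic and the band limits follow common HRV conventions, not necessarily this study's processing pipeline.

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(6)
# Synthetic RR intervals (s) with a slow oscillation plus noise
rr = 0.8 + 0.05 * np.sin(2 * np.pi * 0.1 * np.arange(300)) + rng.normal(0, 0.02, 300)

t = np.cumsum(rr)                      # beat times
fs = 4.0                               # resampling frequency, Hz
t_even = np.arange(t[0], t[-1], 1.0 / fs)
rr_even = np.interp(t_even, t, rr)     # evenly sampled tachogram

f, pxx = welch(rr_even - rr_even.mean(), fs=fs, nperseg=256)
lf_band = (f >= 0.04) & (f < 0.15)
hf_band = (f >= 0.15) & (f < 0.40)
lf = np.trapz(pxx[lf_band], f[lf_band])
hf = np.trapz(pxx[hf_band], f[hf_band])
print("LF/HF ratio:", lf / hf)
```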
O'Hara, Blythe J; Gale, Joanne; McGill, Bronwyn; Bauman, Adrian; Hebden, Lana; Allman-Farinelli, Margaret; Maxwell, Michelle; Phongsavan, Philayrath
2017-11-01
This study investigated whether participants in a 6-month telephone-based coaching program who set physical activity, nutrition, and weight loss goals had better outcomes in these domains. Quasi-experimental design. The Australian Get Healthy Information and Coaching Service (GHS), a free population-wide telephone health-coaching service that includes goal setting as a key component of its coaching program. Consenting GHS coaching participants who had completed coaching between February 2009 and December 2012 (n = 4108). At baseline, participants select a goal for the coaching program, and sociodemographic variables are collected. Self-reported weight, height, waist circumference, physical activity, and nutrition-related behaviors are assessed at baseline and 6 months. Descriptive analysis was performed on key sociodemographic variables, and the relationship between goal type and change in health outcomes was assessed using a series of linear mixed models that modeled change from baseline to 6 months. Participants who set goals related to weight management and physical activity achieved better results in these areas than those who set alternate goals, losing 1.5 kg more weight and 0.9 cm more in waist circumference, and increasing walking by 40 minutes more per week, respectively. There was no difference in food-related outcomes for those who set nutrition-related goals. Goal setting for weight management and increasing physical activity in the overweight and obese population, undertaken in a telephone-based coaching program, can be effective.
Seascape models reveal places to focus coastal fisheries management.
Stamoulis, Kostantinos A; Delevaux, Jade M S; Williams, Ivor D; Poti, Matthew; Lecky, Joey; Costa, Bryan; Kendall, Matthew S; Pittman, Simon J; Donovan, Mary K; Wedding, Lisa M; Friedlander, Alan M
2018-06-01
To design effective marine reserves and support fisheries, more information on fishing patterns and impacts for targeted species is needed, as well as better understanding of their key habitats. However, fishing impacts vary geographically and are difficult to disentangle from other factors that influence targeted fish distributions. We developed a set of fishing effort and habitat layers at high resolution and employed machine learning techniques to create regional-scale seascape models and predictive maps of biomass and body length of targeted reef fishes for the main Hawaiian Islands. Spatial patterns of fishing effort were shown to be highly variable and seascape models indicated a low threshold beyond which targeted fish assemblages were severely impacted. Topographic complexity, exposure, depth, and wave power were identified as key habitat variables that influenced targeted fish distributions and defined productive habitats for reef fisheries. High targeted reef fish biomass and body length were found in areas not easily accessed by humans, while model predictions when fishing effort was set to zero showed these high values to be more widely dispersed among suitable habitats. By comparing current targeted fish distributions with those predicted when fishing effort was removed, areas with high recovery potential on each island were revealed, with average biomass recovery of 517% and mean body length increases of 59% on Oahu, the most heavily fished island. Spatial protection of these areas would aid recovery of nearshore coral reef fisheries. © 2018 by the Ecological Society of America.
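A minimal sketch of the "predict with fishing effort set to zero" idea above, using a random forest on synthetic habitat and effort covariates and differencing the counterfactual and observed predictions to map recovery potential. Variable names, ranges, and the biomass model are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(7)
n = 1000
complexity = rng.uniform(0, 1, n)    # topographic complexity (synthetic)
depth = rng.uniform(1, 30, n)        # m
effort = rng.exponential(1.0, n)     # fishing effort (synthetic)
biomass = 80 * complexity * np.exp(-0.05 * depth) * np.exp(-0.8 * effort) + rng.normal(0, 2, n)

X = np.column_stack([complexity, depth, effort])
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, biomass)

X_nofishing = X.copy()
X_nofishing[:, 2] = 0.0              # counterfactual: remove fishing effort
recovery = rf.predict(X_nofishing) - rf.predict(X)
print("mean predicted biomass gain without fishing:", recovery.mean())
```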
Temporal variation of velocity and turbulence characteristics at a tidal energy site
NASA Astrophysics Data System (ADS)
Gunawan, B.; Neary, V. S.; Colby, J.
2013-12-01
This study examines the temporal variability, frequency, direction and magnitude of the mean current, turbulence, hydrodynamic force and tidal power availability at a proposed tidal energy site in a tidal channel located in the East River, NY, USA. The channel has a width of 190 m, a mean water level of 9.8 m and a mean tidal range of 1.3 m. A two-month velocity measurement was conducted at the design hub height of a tidal turbine using an acoustic Doppler velocimeter (ADV). The site has semi-diurnal tidal characteristics, with a tidal current pattern resembling a sinusoidal function. The five-minute mean currents at the site varied between 0 and 2.4 m s-1. Flood current magnitudes were typically higher than the ebb current magnitudes, which skewed the tidal energy production towards the flood period. The effect of small-scale turbulence on the computed velocity, hydrodynamic load and power density timeseries was investigated. Excluding the small-scale turbulence may lead to a significant underestimation of the mean and the maximum values of the analyzed variable. Comparison of hydrodynamic conditions with other tidal energy sites indicates that the key parameters for tidal energy site development are likely to be site-specific, which highlights the need to develop a classification system for tidal energy sites. Such a classification system would enable a direct comparison of key parameters between potential project locations and ultimately help investors in the decision making process. (Figure: turbulence intensity versus mean current magnitude.)
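A minimal sketch of the velocity-based site metrics discussed above: five-minute mean current, turbulence intensity, and kinetic power density from a synthetic ADV record. The sampling rate, mean speed, and fluctuation level are assumptions; the comparison also illustrates why excluding turbulent fluctuations underestimates the available power.

```python
import numpy as np

rng = np.random.default_rng(8)
fs = 8.0                                   # sampling rate, Hz (assumed)
u = 1.8 + 0.15 * rng.standard_normal(int(5 * 60 * fs))  # 5 min of streamwise velocity, m/s

rho = 1025.0                               # seawater density, kg/m^3
u_mean = u.mean()
ti = u.std(ddof=1) / u_mean                # turbulence intensity
p_with_turbulence = 0.5 * rho * np.mean(u**3)  # mean of cubed instantaneous speed
p_mean_flow_only = 0.5 * rho * u_mean**3       # cube of the mean speed
print(f"mean current {u_mean:.2f} m/s, TI {ti:.3f}")
print(f"power density with/without turbulence: {p_with_turbulence:.0f} / {p_mean_flow_only:.0f} W/m^2")
```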
Sah, Jay P.; Ross, Michael S.; Snyder, James R.; ...
2010-01-01
In fire-dependent forests, managers are interested in predicting the consequences of prescribed burning on post-fire tree mortality. We examined the effects of prescribed fire on tree mortality in Florida Keys pine forests, using a factorial design with understory type, season, and year of burn as factors. We also used logistic regression to model the effects of burn season, fire severity, and tree dimensions on individual tree mortality. Despite limited statistical power due to problems in carrying out the full suite of planned experimental burns, associations with tree and fire variables were observed. Post-fire pine tree mortality was negatively correlated with tree size and positively correlated with char height and percent crown scorch. Unlike post-fire mortality, tree mortality associated with storm surge from Hurricane Wilma was greater in the large size classes. Due to their influence on population structure and fuel dynamics, the size-selective mortality patterns following fire and storm surge have practical importance for using fire as a management tool in Florida Keys pinelands in the future, particularly when the threats to their continued existence from tropical storms and sea level rise are expected to increase.
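A minimal sketch of the kind of individual-tree mortality model described above: logistic regression of post-fire mortality on tree size, char height, and crown scorch. All data are synthetic, and the variable names and coefficients are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(9)
n = 400
dbh = rng.uniform(5, 40, n)          # stem diameter, cm
char_height = rng.uniform(0, 4, n)   # m
crown_scorch = rng.uniform(0, 100, n)  # percent

# Synthetic mortality: larger trees survive more; fire-severity proxies increase mortality
logit = -1.0 - 0.08 * dbh + 0.6 * char_height + 0.03 * crown_scorch
died = rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))

X = np.column_stack([dbh, char_height, crown_scorch])
model = LogisticRegression(max_iter=1000).fit(X, died)
print("coefficients (dbh, char height, crown scorch):", model.coef_.ravel())
```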
Transceivers and receivers for quantum key distribution and methods pertaining thereto
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeRose, Christopher; Sarovar, Mohan; Soh, Daniel B.S.
Various technologies for performing continuous-variable (CV) and discrete-variable (DV) quantum key distribution (QKD) with integrated electro-optical circuits are described herein. An integrated DV-QKD system uses Mach-Zehnder modulators to modulate a polarization of photons at a transmitter and select a photon polarization measurement basis at a receiver. An integrated CV-QKD system uses wavelength division multiplexing to send and receive amplitude-modulated and phase-modulated optical signals with a local oscillator signal while maintaining phase coherence between the modulated signals and the local oscillator signal.
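A minimal sketch of the basis-choice-and-sift step at the heart of discrete-variable QKD with polarization encoding (a BB84-style simulation). It is an idealized, lossless, noise-free toy model and is not a description of the integrated hardware above.

```python
import numpy as np

rng = np.random.default_rng(10)
n = 10_000
alice_bits = rng.integers(0, 2, n)
alice_basis = rng.integers(0, 2, n)   # 0: rectilinear, 1: diagonal
bob_basis = rng.integers(0, 2, n)

# With matching bases Bob recovers Alice's bit; otherwise his outcome is random.
random_outcomes = rng.integers(0, 2, n)
bob_bits = np.where(alice_basis == bob_basis, alice_bits, random_outcomes)

keep = alice_basis == bob_basis       # basis reconciliation (sifting)
sifted_alice, sifted_bob = alice_bits[keep], bob_bits[keep]
qber = np.mean(sifted_alice != sifted_bob)
print(f"sifted key length: {keep.sum()} of {n}, QBER: {qber:.3f}")
```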
Unconditional optimality of Gaussian attacks against continuous-variable quantum key distribution.
García-Patrón, Raúl; Cerf, Nicolas J
2006-11-10
A fully general approach to the security analysis of continuous-variable quantum key distribution (CV-QKD) is presented. Provided that the quantum channel is estimated via the covariance matrix of the quadratures, Gaussian attacks are shown to be optimal against all collective eavesdropping strategies. The proof is made strikingly simple by combining a physical model of measurement, an entanglement-based description of CV-QKD, and a recent powerful result on the extremality of Gaussian states [M. M. Wolf, Phys. Rev. Lett. 96, 080502 (2006), doi:10.1103/PhysRevLett.96.080502].
Continuous-variable quantum-key-distribution protocols with a non-Gaussian modulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leverrier, Anthony; Grangier, Philippe; Laboratoire Charles Fabry, Institut d'Optique, CNRS, Univ. Paris-Sud, Campus Polytechnique, RD 128, F-91127 Palaiseau Cedex
2011-04-15
In this paper, we consider continuous-variable quantum-key-distribution (QKD) protocols which use non-Gaussian modulations. These specific modulation schemes are compatible with very efficient error-correction procedures, hence allowing the protocols to outperform previous protocols in terms of achievable range. In their simplest implementation, these protocols are secure for any linear quantum channels (hence against Gaussian attacks). We also show how the use of decoy states makes the protocols secure against arbitrary collective attacks, which implies their unconditional security in the asymptotic limit.