Shoukri, Mohamed M; Elkum, Nasser; Walter, Stephen D
2006-01-01
Background In this paper we propose the use of the within-subject coefficient of variation as an index of a measurement's reliability. For continuous variables, based on its maximum likelihood estimate, we derive a variance-stabilizing transformation and discuss confidence interval construction within the framework of a one-way random effects model. We investigate sample size requirements for the within-subject coefficient of variation for continuous and binary variables. Methods We investigate the validity of the approximate normal confidence interval by Monte Carlo simulations. In designing a reliability study, a crucial issue is the balance between the number of subjects to be recruited and the number of repeated measurements per subject. We discuss efficiency of estimation and cost considerations for the optimal allocation of the sample resources. The approach is illustrated by an example on Magnetic Resonance Imaging (MRI). We also discuss the issue of sample size estimation for dichotomous responses with two examples. Results For the continuous variable, we found that the variance-stabilizing transformation improves the asymptotic coverage probabilities of the confidence interval on the within-subject coefficient of variation. The maximum likelihood estimation and sample size estimation based on a pre-specified width of the confidence interval are novel contributions to the literature for the binary variable. Conclusion Using the sample size formulas, we hope to help clinical epidemiologists and practicing statisticians to efficiently design reliability studies using the within-subject coefficient of variation, whether the variable of interest is continuous or binary. PMID:16686943
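The within-subject coefficient of variation described above can be estimated from a balanced one-way random-effects layout. Below is a minimal moment-based sketch in Python (an illustration of the quantity, not the authors' exact maximum-likelihood estimator); the data layout and function name are assumptions.

```python
import numpy as np

def within_subject_cv(measurements):
    """Moment-based estimate of the within-subject coefficient of variation.

    measurements: 2-D array, rows = subjects, columns = repeated measurements
    (balanced design). The within-subject variance is the pooled error
    variance of the one-way random-effects model (the ANOVA mean square
    error), and the denominator is the grand mean.
    """
    x = np.asarray(measurements, dtype=float)
    n, k = x.shape
    subject_means = x.mean(axis=1, keepdims=True)
    sigma2_e = ((x - subject_means) ** 2).sum() / (n * (k - 1))
    return np.sqrt(sigma2_e) / x.mean()

# Example: 10 subjects, 3 repeated readings each (synthetic data).
rng = np.random.default_rng(0)
data = rng.normal(100, 10, size=(10, 1)) + rng.normal(0, 5, size=(10, 3))
print(within_subject_cv(data))  # roughly 5/100 = 0.05
```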
Design study of a continuously variable roller cone traction CVT for electric vehicles
NASA Technical Reports Server (NTRS)
Mccoin, D. K.; Walker, R. D.
1980-01-01
Continuously variable ratio transmissions (CVT) featuring cone and roller traction elements and computerized controls are studied. The CVT meets or exceeds all requirements set forth in the design criteria. Further, a scalability analysis indicates the basic concept is applicable to lower and higher power units, with upward scaling for increased power being more readily accomplished.
Genetic-evolution-based optimization methods for engineering design
NASA Technical Reports Server (NTRS)
Rao, S. S.; Pan, T. S.; Dhingra, A. K.; Venkayya, V. B.; Kumar, V.
1990-01-01
This paper presents the applicability of a biological model, based on genetic evolution, for engineering design optimization. Algorithms embodying the ideas of reproduction, crossover, and mutation are developed and applied to solve different types of structural optimization problems. Both continuous and discrete variable optimization problems are solved. A two-bay truss for maximum fundamental frequency is considered to demonstrate the continuous variable case. The selection of locations of actuators in an actively controlled structure, for minimum energy dissipation, is considered to illustrate the discrete variable case.
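As a concrete illustration of the reproduction/crossover/mutation loop this abstract describes, here is a toy real-coded genetic algorithm for the continuous-variable case. It is a hedged sketch: the operator choices (binary tournament, arithmetic crossover, uniform-reset mutation) are assumptions, not the authors' exact scheme.

```python
import random

def ga_minimize(obj, n_vars, lo, hi, pop_size=40, gens=200,
                p_cross=0.8, p_mut=0.1):
    """Toy real-coded genetic algorithm: reproduction (binary tournament),
    crossover (arithmetic blend), and mutation (uniform reset)."""
    pop = [[random.uniform(lo, hi) for _ in range(n_vars)]
           for _ in range(pop_size)]
    for _ in range(gens):
        nxt = []
        while len(nxt) < pop_size:
            # Reproduction: each parent wins a binary tournament.
            p1 = min(random.sample(pop, 2), key=obj)
            p2 = min(random.sample(pop, 2), key=obj)
            # Crossover: arithmetic blend of the two parents.
            if random.random() < p_cross:
                w = random.random()
                child = [w * a + (1 - w) * b for a, b in zip(p1, p2)]
            else:
                child = p1[:]
            # Mutation: occasionally reset a gene within its bounds.
            child = [random.uniform(lo, hi) if random.random() < p_mut else g
                     for g in child]
            nxt.append(child)
        pop = nxt
    return min(pop, key=obj)

# Example: minimize a shifted sphere function in five variables.
best = ga_minimize(lambda x: sum((g - 1.0) ** 2 for g in x), 5, -5.0, 5.0)
```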
NASA Technical Reports Server (NTRS)
Krishnamurthy, T.; Romero, V. J.
2002-01-01
The usefulness of piecewise polynomials with C1 and C2 derivative continuity for response surface construction is examined. A Moving Least Squares (MLS) method is developed and compared with four other interpolation methods, including kriging. First, the selected methods are applied and compared with one another in a two-design-variable problem with a known theoretical response function. Next, the methods are tested in a four-design-variable problem from a reliability-based design application. In general, the piecewise polynomial methods with higher-order derivative continuity produce less error in the response prediction. The MLS method was found to be superior for response surface construction among the methods evaluated.
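For readers unfamiliar with MLS, the idea is to refit a low-order polynomial at every query point with distance-decaying weights, which is what makes the resulting surface smooth. The following minimal sketch uses a linear basis and Gaussian weights with bandwidth h; those are assumed choices, not necessarily the paper's.

```python
import numpy as np

def mls_predict(x_query, X, y, h=0.5):
    """Moving Least Squares sketch: at each query point, fit a linear
    polynomial by weighted least squares, with Gaussian weights that decay
    with distance from the query point."""
    X, y = np.asarray(X, float), np.asarray(y, float)
    xq = np.asarray(x_query, float)
    w = np.exp(-np.sum((X - xq) ** 2, axis=1) / h ** 2)
    A = np.hstack([np.ones((len(X), 1)), X])   # basis [1, x1, ..., xd]
    sw = np.sqrt(w)[:, None]                   # sqrt weights for least squares
    coef, *_ = np.linalg.lstsq(sw * A, sw[:, 0] * y, rcond=None)
    return np.concatenate(([1.0], xq)) @ coef

# Example: two design variables, known response z = x1^2 + x2.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(50, 2))
y = X[:, 0] ** 2 + X[:, 1]
print(mls_predict([0.2, 0.3], X, y))   # approximately 0.34
```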
NASA Technical Reports Server (NTRS)
Burghart, J. H.; Donoghue, J. F.
1980-01-01
The design and evaluation of a control system for a sedan with a heat engine and a continuously variable transmission is considered, in an effort to minimize fuel consumption and achieve satisfactory dynamic response of vehicle variables as the vehicle is driven over a standard driving cycle. Even though the vehicle system was highly nonlinear, attention was restricted to linear control algorithms which could be easily understood and implemented, and whose performance could be demonstrated by simulation. Simulation results also revealed that the vehicle could exhibit unexpected dynamic behavior which must be taken into account in any control system design.
NASA Astrophysics Data System (ADS)
Cheung, Wai Ming; Liao, Wei-Hsin
2013-04-01
The use of magnetorheological (MR) fluids in vehicles has been gaining popularity recently due to their controllable nature, which gives automotive designers more dimensions of freedom in functional designs. However, not much attention has been paid to applying them to bicycles. This paper studies the feasibility of applying MR fluids in different dynamic parts of a bicycle, such as the transmission and braking systems. An MR continuously variable transmission (CVT) and a power-generator-assisted braking system were designed and analyzed. Both prototypes were fabricated and tested to evaluate their performance. Experimental results showed that the proposed designs are promising for use in bicycles.
Reverse design and characteristic study of multi-range HMCVT
NASA Astrophysics Data System (ADS)
Zhu, Zhen; Chen, Long; Zeng, Falin
2017-09-01
Reducing fuel consumption and increasing transmission efficiency are key problems for agricultural machinery. Many promising technologies, such as hydromechanical continuously variable transmissions (HMCVT), are the focus of research and investment, but there is little technical documentation that describes the design principle and presents the design parameters. This paper presents the design idea and a characteristic study of the HMCVT, in order to find a scheme suitable for high-horsepower tractors. The kinematics and dynamics of a high-horsepower tractor were analyzed, and a hydro-mechanical continuously variable transmission was designed according to the characteristic parameters. Comparison of the experimental and theoretical stepless speed regulation curves of the transmission illustrates the rationality of the design scheme.
Discrete-continuous variable structural synthesis using dual methods
NASA Technical Reports Server (NTRS)
Schmit, L. A.; Fleury, C.
1980-01-01
Approximation concepts and dual methods are extended to solve structural synthesis problems involving a mix of discrete and continuous sizing type of design variables. Pure discrete and pure continuous variable problems can be handled as special cases. The basic mathematical programming statement of the structural synthesis problem is converted into a sequence of explicit approximate primal problems of separable form. These problems are solved by constructing continuous explicit dual functions, which are maximized subject to simple nonnegativity constraints on the dual variables. A newly devised gradient projection type of algorithm called DUAL 1, which includes special features for handling dual function gradient discontinuities that arise from the discrete primal variables, is used to find the solution of each dual problem. Computational implementation is accomplished by incorporating the DUAL 1 algorithm into the ACCESS 3 program as a new optimizer option. The power of the method set forth is demonstrated by presenting numerical results for several example problems, including a pure discrete variable treatment of a metallic swept wing and a mixed discrete-continuous variable solution for a thin delta wing with fiber composite skins.
NASA Technical Reports Server (NTRS)
Gallo, C.; Kasuba, R.; Pintz, A.; Spring, J.
1986-01-01
The dynamic analysis of a horizontal axis fixed pitch wind turbine generator (WTG) rated at 56 kW is discussed. A mechanical Continuously Variable Transmission (CVT) was incorporated in the drive train to provide variable speed operation capability. One goal of the dynamic analysis was to determine if variable speed operation, by means of a mechanical CVT, is capable of capturing the transient power in the WTG/wind environment. Another goal was to determine the extent of power regulation possible with CVT operation.
Optimal design of compact spur gear reductions
NASA Technical Reports Server (NTRS)
Savage, M.; Lattime, S. B.; Kimmel, J. A.; Coe, H. H.
1992-01-01
The optimal design of compact spur gear reductions includes the selection of bearing and shaft proportions in addition to gear mesh parameters. Designs for single mesh spur gear reductions are based on optimization of system life, system volume, and system weight including gears, support shafts, and the four bearings. The overall optimization allows component properties to interact, yielding the best composite design. A modified feasible directions search algorithm directs the optimization through a continuous design space. Interpolated polynomials expand the discrete bearing properties and proportions into continuous variables for optimization. After finding the continuous optimum, the designer can analyze near optimal designs for comparison and selection. Design examples show the influence of the bearings on the optimal configurations.
An Algorithm for Integrated Subsystem Embodiment and System Synthesis
NASA Technical Reports Server (NTRS)
Lewis, Kemper
1997-01-01
Consider the statement, 'A system has two coupled subsystems, one of which dominates the design process. Each subsystem consists of discrete and continuous variables, and is solved using sequential analysis and solution.' To address this type of statement in the design of complex systems, three steps are required, namely, the embodiment of the statement in terms of entities on a computer, the mathematical formulation of subsystem models, and the resulting solution and system synthesis. In complex system decomposition, the subsystems are not isolated, self-supporting entities. Information such as constraints, goals, and design variables may be shared between entities. But many times in engineering problems, full communication and cooperation does not exist, information is incomplete, or one subsystem may dominate the design. Additionally, these engineering problems give rise to mathematical models involving nonlinear functions of both discrete and continuous design variables. In this dissertation an algorithm is developed to handle these types of scenarios for the domain-independent integration of subsystem embodiment, coordination, and system synthesis using constructs from Decision-Based Design, Game Theory, and Multidisciplinary Design Optimization. Implementation of the concept in this dissertation involves testing of the hypotheses using example problems and a motivating case study involving the design of a subsonic passenger aircraft.
Universal Quantum Computing with Arbitrary Continuous-Variable Encoding.
Lau, Hoi-Kwan; Plenio, Martin B
2016-09-02
Implementing a qubit quantum computer in continuous-variable systems conventionally requires the engineering of specific interactions according to the encoding basis states. In this work, we present a unified formalism to conduct universal quantum computation with a fixed set of operations but arbitrary encoding. By storing a qubit in the parity of two or four qumodes, all computing processes can be implemented by basis state preparations, continuous-variable exponential-swap operations, and swap tests. Our formalism inherits the advantages that the quantum information is decoupled from collective noise, and logical qubits with different encodings can be brought to interact without decoding. We also propose a possible implementation of the required operations by using interactions that are available in a variety of continuous-variable systems. Our work separates the "hardware" problem of engineering quantum-computing-universal interactions, from the "software" problem of designing encodings for specific purposes. The development of quantum computer architecture could hence be simplified.
Continuously variable transmission: Assessment of applicability to advanced electric vehicles
NASA Technical Reports Server (NTRS)
Loewenthal, S. H.; Parker, R. J.
1981-01-01
A brief historical account of the evolution of continuously variable transmissions (CVT) for automotive use is given. The CVT concepts which are potentially suitable for application with electric and hybrid vehicles are discussed. The arrangement and function of several CVT concepts are cited along with their current developmental status. The results of preliminary design studies conducted on four CVT concepts for use in advanced electric vehicles are discussed.
Taipale-Kovalainen, Krista; Karttunen, Anssi-Pekka; Ketolainen, Jarkko; Korhonen, Ossi
2018-03-30
The objective of this study was to devise robust and stable continuous manufacturing process settings by exploring the design space, after an investigation of the lubrication-based parameters influencing the continuous direct compression tableting of high-dose paracetamol tablets. Experimental design was used to generate a structured study plan which involved 19 runs. The formulation variables studied were the type of lubricant (magnesium stearate or stearic acid) and its concentration (0.5, 1.0 and 1.5%). Process variables were total production feed rate (5, 10.5 and 16 kg/h), mixer speed (500, 850 and 1200 rpm), and mixer inlet port for lubricant (A or B). The continuous direct compression tableting line consisted of loss-in-weight feeders, a continuous mixer and a tablet press. The Quality Target Product Profile (QTPP) was defined for the final product as the flowability of powder blends (2.5 s), tablet strength (147 N), dissolution in 2.5 min (90%) and ejection force (425 N). A design space was identified which fulfilled all the requirements of the QTPP. The type and concentration of lubricant exerted the greatest influence on the design space. For example, stearic acid increased the tablet strength. Interestingly, the studied process parameters had only a very minor effect on the quality of the final product and the design space. It is concluded that the continuous direct compression tableting process itself is insensitive to these process parameters and can cope with changes in lubrication, whereas formulation parameters exert a major influence on the end-product quality. Copyright © 2017 Elsevier B.V. All rights reserved.
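For reference, a face-centered central composite design of the kind named above can be generated in a few lines of Python. A CCF in three continuous factors with five center runs gives 8 + 6 + 5 = 19 runs, one plausible reading of the abstract's 19-run plan (the categorical factors would be handled separately); this is a generic sketch in coded units, not the authors' actual run sheet.

```python
from itertools import product

import numpy as np

def ccf_design(k, n_center=5):
    """Face-centered central composite (CCF) design in coded units (-1, 0, +1):
    2^k factorial corners, 2k axial points on the cube faces, and replicated
    center points."""
    corners = np.array(list(product([-1.0, 1.0], repeat=k)))
    axial = np.vstack([sign * np.eye(k)[i]
                       for i in range(k) for sign in (+1.0, -1.0)])
    center = np.zeros((n_center, k))
    return np.vstack([corners, axial, center])

# Three continuous factors (e.g., feed rate, mixer speed, lubricant
# concentration) with five center runs: 8 + 6 + 5 = 19 runs.
print(ccf_design(3).shape)  # (19, 3)
```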
Design studies of continuously variable transmissions for electric vehicles
NASA Technical Reports Server (NTRS)
Parker, R. J.; Loewenthal, S. H.; Fischer, G. K.
1981-01-01
Preliminary design studies were performed on four continuously variable transmission (CVT) concepts for use with a flywheel equipped electric vehicle of 1700 kg gross weight. Requirements of the CVT's were a maximum torque of 450 N-m (330 lb-ft), a maximum output power of 75 kW (100 hp), and a flywheel speed range of 28,000 to 14,000 rpm. Efficiency, size, weight, cost, reliability, maintainability, and controls were evaluated for each of the four concepts, which included a steel V-belt type, a flat rubber belt type, a toroidal traction type, and a cone roller traction type. All CVT's exhibited relatively high calculated efficiencies (68 percent to 97 percent) over a broad range of vehicle operating conditions. Estimated weight and size of these transmissions were comparable to or less than equivalent automatic transmissions. The design of each concept was carried through the design layout stage.
ERIC Educational Resources Information Center
Shobo, Yetty; Wong, Jen D.; Bell, Angie
2014-01-01
Regression discontinuity (RD), an "as good as randomized" research design, has become increasingly prominent in education research in recent years; the design brings eligible quasi-experimental designs as close as possible to experimental designs by using a stated threshold on a continuous baseline variable to assign individuals to a…
1981-08-01
...(1) spanloader designs with thick wings, and winglets for transport-category aircraft; and (2) swept-forward wings, variable-camber wings with direct-lift control, canards, and blended-wing concepts for fighters. Because efficient transonic performance continues to be an important design requirement...
Optimal placement of actuators and sensors in control augmented structural optimization
NASA Technical Reports Server (NTRS)
Sepulveda, A. E.; Schmit, L. A., Jr.
1990-01-01
A control-augmented structural synthesis methodology is presented in which actuator and sensor placement is treated in terms of (0,1) variables. Structural member sizes and control variables are treated simultaneously as design variables. A multiobjective utopian approach is used to obtain a compromise solution for inherently conflicting objective functions such as structural mass, control effort and number of actuators. Constraints are imposed on transient displacements, natural frequencies, actuator forces and dynamic stability as well as controllability and observability of the system. The combinatorial aspects of the mixed-(0,1) continuous variable design optimization problem are made tractable by combining approximation concepts with branch and bound techniques. Some numerical results for example problems are presented to illustrate the efficacy of the design procedure set forth.
Advanced continuously variable transmissions for electric and hybrid vehicles
NASA Technical Reports Server (NTRS)
Loewenthal, S. H.
1980-01-01
A brief survey of past and present continuously variable transmissions (CVT) which are potentially suitable for application with electric and hybrid vehicles is presented. Discussion of general transmission requirements and benefits attainable with a CVT for electric vehicle use is given. The arrangement and function of several specific CVT concepts are cited along with their current development status. Lastly, the results of preliminary design studies conducted under a NASA contract for DOE on four CVT concepts for use in advanced electric vehicles are reviewed.
Design and calibration of the carousel wind tunnel
NASA Technical Reports Server (NTRS)
Leach, R. N.; Greeley, Ronald; Iversen, James D.; White, Bruce R.; Marshall, John R.
1987-01-01
In the study of planetary aeolian processes the effect of gravity is not readily modeled. Gravity appears in the equations of particle motion along with interparticle forces, but the two terms are not separable. A wind tunnel that permitted variable gravity would allow separation of the forces and aid greatly in understanding planetary aeolian processes. The design of the Carousel Wind Tunnel (CWT) allows for a long flow distance in a small tunnel, since the test section is a continuous circuit, and allows for a variable pseudo-gravity. A prototype was built and calibrated to gain some understanding of the characteristics of the design, and the results are presented.
NASA Astrophysics Data System (ADS)
Tatchyn, Roman
1992-01-01
Insertion devices that are tuned by electrical period variation are particularly suited for the design of flexible polarized-light sources [R. Tatchyn, J. Appl. Phys. 65, 4107 (1989); R. Tatchyn and T. Cremer, IEEE Trans. Mag. 26, 3102 (1990)]. Important advantages vis-a-vis mechanical or hybrid variable field designs include: (1) significantly more rapid modulation of both polarization and energy, (2) an inherently larger set of polarization modulation capabilities and (3) polarization/energy modulation at continuously optimized values of K. In this paper we outline some of the general considerations that enter into the design of hysteresis-free variable-period/polarizing undulator structures and present the parameters of a recently-completed prototype design capable of generating intense levels of UV/VUV photon flux on SPEAR running at 3 GeV.
Multi-Objective Optimization of a Turbofan for an Advanced, Single-Aisle Transport
NASA Technical Reports Server (NTRS)
Berton, Jeffrey J.; Guynn, Mark D.
2012-01-01
Considerable interest surrounds the design of the next generation of single-aisle commercial transports in the Boeing 737 and Airbus A320 class. Aircraft designers will depend on advanced, next-generation turbofan engines to power these airplanes. The focus of this study is to apply single- and multi-objective optimization algorithms to the conceptual design of ultrahigh bypass turbofan engines for this class of aircraft, using NASA's Subsonic Fixed Wing Project metrics as multidisciplinary objectives for optimization. The independent design variables investigated include three continuous variables: sea level static thrust, wing reference area, and aerodynamic design point fan pressure ratio, and four discrete variables: overall pressure ratio, fan drive system architecture (i.e., direct- or gear-driven), bypass nozzle architecture (i.e., fixed- or variable geometry), and the high- and low-pressure compressor work split. Ramp weight, fuel burn, noise, and emissions are the parameters treated as dependent objective functions. These optimized solutions provide insight to the ultrahigh bypass engine design process and provide information to NASA program management to help guide its technology development efforts.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Byers, Conleigh; Levin, Todd; Botterud, Audun
A review of capacity markets in the United States in the context of increasing levels of variable renewable energy finds substantial differences with respect to incentives for operational performance, methods to calculate qualifying capacity for variable renewable energy and energy storage, and demand curves for capacity. The review also reveals large differences in historical capacity market clearing prices. The authors conclude that electricity market design must continue to evolve to achieve cost-effective policies for resource adequacy.
Design and preliminary results of a fuel flexible industrial gas turbine combustor
NASA Technical Reports Server (NTRS)
Novick, A. S.; Troth, D. L.; Yacobucci, H. G.
1981-01-01
The design characteristics are presented of a fuel-tolerant, variable-geometry, staged-air combustor using regenerative/convective cooling. The rich/quench/lean variable geometry combustor is designed to achieve low NO(x) emissions from fuels containing fuel-bound nitrogen. The physical size of the combustor was calculated for a can-annular combustion system with associated operating conditions for the Allison 570-K engine. Preliminary test results indicate that the concept has the potential to meet emission requirements at maximum continuous power operation. However, airflow sealing and improved fuel/air mixing are necessary to meet Department of Energy program goals.
Structural optimization via a design space hierarchy
NASA Technical Reports Server (NTRS)
Vanderplaats, G. N.
1976-01-01
Mathematical programming techniques provide a general approach to automated structural design. An iterative method is proposed in which design is treated as a hierarchy of subproblems, one being locally constrained and the other being locally unconstrained. It is assumed that the design space is locally convex in the case of good initial designs and that the objective and constraint functions are continuous, with continuous first derivatives. A general design algorithm is outlined for finding a move direction which will decrease the value of the objective function while maintaining a feasible design. The case of one-dimensional search in a two-variable design space is discussed. Possible applications are discussed. A major feature of the proposed algorithm is its application to problems which are inherently ill-conditioned, such as design of structures for optimum geometry.
Noninvasive health condition monitoring device for workers at high altitudes conditions.
Aqueveque, Pablo; Gutierrez, Cristopher; Saavedra, Francisco; Pino, Esteban J
2016-08-01
This work presents the design and implementation of a continuous monitoring device to track the health state of workers, for instance miners, at high altitudes. The extreme ambient conditions are harmful to people's health; therefore continuous monitoring of the workers' vital signs is necessary. The developed system measures physiological variables: electrocardiogram (ECG), respiratory activity and body temperature (BT), and ambient variables: ambient temperature (AT) and relative humidity (RH). The noninvasive sensors are incorporated in a t-shirt to deliver a functional device with maximum comfort to the users. The device is able to continuously calculate heart rate (HR) and respiration rate (RR), and establishes wireless data transmission to a central monitoring station.
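As a toy illustration of the HR calculation such a device performs (not the authors' firmware; real monitors use adaptive schemes such as Pan-Tompkins), R-peaks can be detected with a simple threshold and the heart rate taken from the mean R-R interval:

```python
import numpy as np

def heart_rate_bpm(ecg, fs, rel_threshold=0.6):
    """Crude R-peak detection: local maxima above a fixed fraction of the
    signal maximum; heart rate = 60 / (mean R-R interval in seconds).
    ecg: 1-D signal, fs: sampling rate in Hz."""
    x = np.asarray(ecg, dtype=float)
    thr = rel_threshold * x.max()
    peaks = [i for i in range(1, len(x) - 1)
             if x[i] > thr and x[i] >= x[i - 1] and x[i] > x[i + 1]]
    rr = np.diff(peaks) / fs          # needs at least two detected beats
    return 60.0 / rr.mean()
```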
Regression dilution bias: tools for correction methods and sample size calculation.
Berglund, Lars
2012-08-01
Random errors in measurement of a risk factor will introduce downward bias of an estimated association to a disease or a disease marker. This phenomenon is called regression dilution bias. A bias correction may be made with data from a validity study or a reliability study. In this article we give a non-technical description of designs of reliability studies with emphasis on selection of individuals for a repeated measurement, assumptions of measurement error models, and correction methods for the slope in a simple linear regression model where the dependent variable is a continuous variable. Also, we describe situations where correction for regression dilution bias is not appropriate. The methods are illustrated with the association between insulin sensitivity measured with the euglycaemic insulin clamp technique and fasting insulin, where measurement of the latter variable carries noticeable random error. We provide software tools for estimation of a corrected slope in a simple linear regression model assuming data for a continuous dependent variable and a continuous risk factor from a main study and an additional measurement of the risk factor in a reliability study. Also, we supply programs for estimation of the number of individuals needed in the reliability study and for choice of its design. Our conclusion is that correction for regression dilution bias is seldom applied in epidemiological studies. This may cause important effects of risk factors with large measurement errors to be neglected.
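The slope correction the article describes can be sketched in a few lines under the classical measurement error model. The function below is an illustration (names and data layout are assumptions, not the authors' supplied software): it divides the observed slope by a reliability ratio estimated from the repeated measurements of the reliability study.

```python
import numpy as np

def corrected_slope(slope_obs, x_main, x_reliability):
    """Correct a simple linear regression slope for regression dilution
    under the classical measurement error model.

    slope_obs: slope estimated from the main study.
    x_main: risk factor measurements from the main study (1-D).
    x_reliability: repeated measurements from the reliability study
                   (2-D: rows = individuals, columns = replicates).
    The reliability ratio lambda attenuates the slope, so the corrected
    slope is slope_obs / lambda.
    """
    xr = np.asarray(x_reliability, dtype=float)
    n, k = xr.shape
    # Error variance: pooled within-individual variance from the replicates.
    sigma2_e = ((xr - xr.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
    var_obs = np.var(x_main, ddof=1)
    lam = (var_obs - sigma2_e) / var_obs   # reliability ratio in (0, 1]
    return slope_obs / lam
```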
40 CFR 64.3 - Monitoring design criteria.
Code of Federal Regulations, 2010 CFR
2010-07-01
... that are adequate to ensure the continuing validity of the data. The owner or operator shall consider... and control device operational variability, the reliability and latitude built into the control...
Liu, Huolong; Galbraith, S C; Ricart, Brendon; Stanton, Courtney; Smith-Goettler, Brandye; Verdi, Luke; O'Connor, Thomas; Lee, Sau; Yoon, Seongkyu
2017-06-15
In this study, the influence of key process variables (screw speed, throughput and liquid to solid (L/S) ratio) of a continuous twin screw wet granulation (TSWG) was investigated using a central composite face-centered (CCF) experimental design method. Regression models were developed to predict the process responses (motor torque, granule residence time), granule properties (size distribution, volume average diameter, yield, relative width, flowability) and tablet properties (tensile strength). The effects of the three key process variables were analyzed via contour and interaction plots. The experimental results have demonstrated that all the process responses, granule properties and tablet properties are influenced by changing the screw speed, throughput and L/S ratio. The TSWG process was optimized to produce granules with a specific volume average diameter of 150 μm and a yield of 95% based on the developed regression models. A design space (DS) was built based on a volume average granule diameter between 90 and 200 μm and a granule yield larger than 75%, with a failure probability analysis using Monte Carlo simulations. Validation experiments successfully validated the robustness and accuracy of the DS generated using the CCF experimental design in optimizing a continuous TSWG process. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Thompson, Kyle Bonner
An algorithm is described to efficiently compute aerothermodynamic design sensitivities using a decoupled variable set. In a conventional approach to computing design sensitivities for reacting flows, the species continuity equations are fully coupled to the conservation laws for momentum and energy. In this algorithm, the species continuity equations are solved separately from the mixture continuity, momentum, and total energy equations. This decoupling simplifies the implicit system, so that the flow solver can be made significantly more efficient, with very little penalty on overall scheme robustness. Most importantly, the computational cost of the point implicit relaxation is shown to scale linearly with the number of species for the decoupled system, whereas the fully coupled approach scales quadratically. Also, the decoupled method significantly reduces the cost in wall time and memory in comparison to the fully coupled approach. This decoupled approach for computing design sensitivities with the adjoint system is demonstrated for inviscid flow in chemical non-equilibrium around a re-entry vehicle with a retro-firing annular nozzle. The sensitivities of the surface temperature and mass flow rate through the nozzle plenum are computed with respect to plenum conditions and verified against sensitivities computed using a complex-variable finite-difference approach. The decoupled scheme significantly reduces the computational time and memory required to complete the optimization, making this an attractive method for high-fidelity design of hypersonic vehicles.
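The verification technique this abstract names, the complex-variable finite-difference (complex-step) approach, is compact enough to sketch; this is a generic illustration, not the author's flow-solver code:

```python
import numpy as np

def complex_step_derivative(f, x, h=1e-30):
    """Complex-step ('complex-variable finite-difference') derivative:
    f'(x) ~= Im(f(x + i*h)) / h. There is no subtractive cancellation, so h
    can be made tiny and the sensitivity is accurate to machine precision,
    which is why it makes a good reference for verifying adjoint gradients."""
    return f(x + 1j * h).imag / h

print(complex_step_derivative(np.exp, 1.0))  # 2.718281828459045...
```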
Quantum anonymous voting with unweighted continuous-variable graph states
NASA Astrophysics Data System (ADS)
Guo, Ying; Feng, Yanyan; Zeng, Guihua
2016-08-01
Motivated by the revealing topological structures of the continuous-variable graph state (CVGS), we investigate the design of a quantum voting scheme, which has serious advantages over conventional ones in terms of efficiency and graphicness. Three phases are included, i.e., the preparing phase, the voting phase and the counting phase, together with three parties, i.e., the voters, the tallyman and the ballot agency. Two major voting operations are performed on the yielded CVGS in the voting process, namely the local rotation transformation and the displacement operation. The voting information is carried by the CVGS established beforehand, whose persistent entanglement is deployed to keep the privacy of votes and the anonymity of legal voters. For practical applications, two CVGS-based quantum ballots, i.e., comparative ballot and anonymous survey, are specially designed, followed by extended ballot schemes for binary-valued and multi-valued ballots under some constraints on the voting design. Security is ensured by the entanglement of the CVGS, the voting operations and the laws of quantum mechanics. The proposed schemes can be implemented using standard off-the-shelf components, in contrast to discrete-variable quantum voting schemes, owing to the characteristics of CV-based quantum cryptography.
ERIC Educational Resources Information Center
Rowold, Jens; Schilling, Jan
2006-01-01
Purpose: Within the framework of learning in organizations, the concept of career-related continuous learning (CRCL) has gained increasing attention from the research community. The purpose of the present study is to explore the combined effect of job- and career-related variables on formal CRCL activities. Design/methodology/approach: The study…
Field test of classical symmetric encryption with continuous variables quantum key distribution.
Jouguet, Paul; Kunz-Jacques, Sébastien; Debuisschert, Thierry; Fossier, Simon; Diamanti, Eleni; Alléaume, Romain; Tualle-Brouri, Rosa; Grangier, Philippe; Leverrier, Anthony; Pache, Philippe; Painchault, Philippe
2012-06-18
We report on the design and performance of a point-to-point classical symmetric encryption link with fast key renewal provided by a Continuous Variable Quantum Key Distribution (CVQKD) system. Our system was operational and able to encrypt point-to-point communications during more than six months, from the end of July 2010 until the beginning of February 2011. This field test was the first demonstration of the reliability of a CVQKD system over a long period of time in a server room environment. This strengthens the potential of CVQKD for information technology security infrastructure deployments.
The History of the CONCAM Project: All Sky Monitors in the Digital Age
NASA Astrophysics Data System (ADS)
Nemiroff, Robert; Shamir, Lior; Pereira, Wellesley
2018-01-01
The CONtinuous CAMera (CONCAM) project, which ran from 2000 to (about) 2008, consisted of real-time, Internet-connected, fisheye cameras located at major astronomical observatories. At its peak, eleven CONCAMs around the globe monitored most of the night sky, most of the time. Initially designed to search for transients and stellar variability, CONCAMs gained early notoriety as cloud monitors. As such, CONCAMs made -- and their successors continue to make -- ground-based astronomy more efficient. The original, compact, fisheye-observatory-in-a-suitcase design underwent several iterations, starting with CONCAM0 and ending with the version dubbed CONCAM3. Although the CONCAM project itself concluded after centralized funding diminished, more locally-operated, commercially-designed, CONCAM-like devices operate today than ever before. It has even been shown that modern smartphones can operate in a CONCAM-like mode. It is speculated that the reinstatement of better global coordination of current wide-angle sky monitors could lead to better variability monitoring of the brightest stars and transients.
Recent experience in simultaneous control-structure optimization
NASA Technical Reports Server (NTRS)
Salama, M.; Ramaker, R.; Milman, M.
1989-01-01
To show the feasibility of simultaneous optimization as a design procedure, low-order problems were used in conjunction with simple control formulations. The numerical results indicate that simultaneous optimization is not only feasible, but also advantageous. Such advantages come at the expense of introducing complexities beyond those encountered in structure optimization alone or control optimization alone: the design parameter space is larger, the optimization may combine continuous and combinatoric variables, and the combined objective function may be nonconvex. Future extensions to large-order problems, more complex objective functions and constraints, and more sophisticated control formulations will require further research to ensure that the additional complexities do not outweigh the advantages of simultaneous optimization. Some areas requiring more efficient tools than are currently available include multiobjective criteria and nonconvex optimization. Efficient techniques also need to be developed to deal with optimization over combinatoric and continuous variables, and with truncation issues for structure and control parameters of both the model space and the design space.
Sensitivity analysis for axis rotation diagrid structural systems according to brace angle changes
NASA Astrophysics Data System (ADS)
Yang, Jae-Kwang; Li, Long-Yang; Park, Sung-Soo
2017-10-01
General regular-shaped diagrid structures can express diverse shapes because braces are installed along the exterior faces of the structures and the structures have no columns. However, since irregular-shaped structures have diverse variables, studies assessing the behaviors resulting from these variables are continuously required to supplement the imperfections related to them. In the present study, the material elastic modulus and yield strength were selected as the strength variables applied to diagrid structural systems in the form of Twisters, among the irregular-shaped buildings classified by Vollers, that affect the structural design of these systems. The purpose of this study is to conduct a sensitivity analysis of axial-rotation diagrid structural systems according to changes in brace angles, in order to identify the design variables that have relatively larger effects and the tendencies of the sensitivity of the structures according to changes in brace angles and axial rotation angles.
Using the entire history in the analysis of nested case cohort samples.
Rivera, C L; Lumley, T
2016-08-15
Countermatching designs can provide more efficient estimates than simple matching or case-cohort designs in certain situations, such as when good surrogate variables for an exposure of interest are available. We extend pseudolikelihood estimation for the Cox model under countermatching designs to models where time-varying covariates are considered. We also implement pseudolikelihood with calibrated weights to improve efficiency in nested case-control designs in the presence of time-varying variables. A simulation study is carried out which considers four different scenarios: a binary time-dependent variable, a continuous time-dependent variable, and each case with interactions. Simulation results show that pseudolikelihood with calibrated weights under countermatching offers large gains in efficiency compared to case-cohort designs. Pseudolikelihood with calibrated weights yielded more efficient estimators than plain pseudolikelihood estimators. Additionally, estimators were more efficient under countermatching than under case-cohort sampling for the situations considered. The methods are illustrated using the Colorado Plateau uranium miners cohort. Furthermore, we present a general method to generate survival times with time-varying covariates. Copyright © 2016 John Wiley & Sons, Ltd.
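The abstract's closing point, a general method to generate survival times with time-varying covariates, is commonly implemented by inverting the cumulative hazard. The sketch below assumes a Cox model with a piecewise-constant covariate; all names and the exact scheme are illustrative assumptions, not necessarily the paper's method.

```python
import numpy as np

def survival_time(base_hazard, beta, breakpoints, z_values, rng):
    """Draw one survival time from a Cox model with a piecewise-constant
    time-varying covariate by inverting the cumulative hazard.

    breakpoints: increasing times [t0=0, t1, ..., t_{m-1}];
    z_values: covariate values, one per interval [t_i, t_{i+1}) plus a final
              value that holds after the last breakpoint (length m).
    """
    target = -np.log(rng.uniform())            # target cumulative hazard
    acc = 0.0
    for start, end, z in zip(breakpoints[:-1], breakpoints[1:], z_values):
        rate = base_hazard * np.exp(beta * z)
        if acc + rate * (end - start) >= target:
            return start + (target - acc) / rate
        acc += rate * (end - start)
    rate = base_hazard * np.exp(beta * z_values[-1])   # after last breakpoint
    return breakpoints[-1] + (target - acc) / rate

rng = np.random.default_rng(1)
print(survival_time(0.1, 0.7, [0.0, 2.0, 5.0], [0, 1, 0], rng))
```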
Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-30
... Information Collection: The purpose of the proposed methodological study is to continue the Vanguard phase of... design of the Main Study of the National Children's Study. Background The National Children's Study is a... questionnaire containing key variables and designed to collect core data at every study visit contact from the...
An Algorithm for the Mixed Transportation Network Design Problem
Liu, Xinyu; Chen, Qun
2016-01-01
This paper proposes an optimization algorithm, the dimension-down iterative algorithm (DDIA), for solving a mixed transportation network design problem (MNDP), which is generally expressed as a mathematical program with equilibrium constraints (MPEC). The upper level of the MNDP aims to optimize the network performance via both the expansion of the existing links and the addition of new candidate links, whereas the lower level is a traditional Wardrop user equilibrium (UE) problem. The idea of the proposed solution algorithm (DDIA) is to reduce the dimensions of the problem. A group of variables (discrete/continuous) is fixed to optimize another group of variables (continuous/discrete) alternately; the problem is thus transformed into solving a series of CNDPs (continuous network design problems) and DNDPs (discrete network design problems) repeatedly until it converges to the optimal solution. The advantage of the proposed algorithm is that its solution process is very simple and easy to apply. Numerical examples show that for the MNDP without a budget constraint, the optimal solution can be found within a few iterations with DDIA. For the MNDP with a budget constraint, however, the result depends on the selection of initial values, which leads to different optimal solutions (i.e., different local optimal solutions). Some thoughts are given on how to derive meaningful initial values, such as by considering the budgets of new and reconstruction projects separately. PMID:27626803
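The alternating structure of DDIA can be summarized in a schematic skeleton. Everything below is a hypothetical stand-in (solve_cndp, solve_dndp and objective are assumed callables, each of which would embed a user-equilibrium assignment), intended only to show the control flow, not the paper's implementation:

```python
def ddia(y0, z0, solve_cndp, solve_dndp, objective, max_iter=50, tol=1e-6):
    """Skeleton of the dimension-down iterative algorithm (DDIA): alternately
    fix the discrete new-link decisions z and solve a continuous network
    design problem (CNDP) for the expansions y, then fix y and solve a
    discrete network design problem (DNDP) for z, until the upper-level
    objective stops improving."""
    y, z = y0, z0
    best = objective(y, z)
    for _ in range(max_iter):
        y = solve_cndp(z)          # continuous subproblem with z fixed
        z = solve_dndp(y)          # discrete subproblem with y fixed
        val = objective(y, z)
        if abs(best - val) < tol:
            break
        best = val
    return y, z
```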
Combinatorial Multiobjective Optimization Using Genetic Algorithms
NASA Technical Reports Server (NTRS)
Crossley, William A.; Martin, Eric T.
2002-01-01
The research proposed in this document investigated multiobjective optimization approaches based upon the Genetic Algorithm (GA). Several versions of the GA have been adopted for multiobjective design, but, prior to this research, there had not been significant comparisons of the most popular strategies. The research effort first generalized the two-branch tournament genetic algorithm into an N-branch genetic algorithm, then the N-branch GA was compared with a version of the popular Multi-Objective Genetic Algorithm (MOGA). Because the genetic algorithm is well suited to combinatorial (mixed discrete/continuous) optimization problems, the GA can be used in the conceptual phase of design to combine selection (discrete variable) and sizing (continuous variable) tasks. Using a multiobjective formulation for the design of a 50-passenger aircraft to meet the competing objectives of minimizing takeoff gross weight and minimizing trip time, the GA generated a range of tradeoff designs that illustrate which aircraft features change from a low-weight, slow trip-time aircraft design to a heavy-weight, short trip-time aircraft design. Given the objective formulation and analysis methods used, the results of this study identify where turboprop-powered aircraft and turbofan-powered aircraft become more desirable for the 50 seat passenger application. This aircraft design application also begins to suggest how a combinatorial multiobjective optimization technique could be used to assist in the design of morphing aircraft.
Project Level Performance Database for Rigid Pavements in Texas, II
DOT National Transportation Integrated Search
2011-08-01
Over the years, the Texas Department of Transportation (TxDOT) has built a number of CRCP (continuously reinforced : concrete pavement) experimental sections to investigate the effects of design, materials, and construction variables on CRCP : struct...
CHAMP - Camera, Handlens, and Microscope Probe
NASA Technical Reports Server (NTRS)
Mungas, G. S.; Beegle, L. W.; Boynton, J.; Sepulveda, C. A.; Balzer, M. A.; Sobel, H. R.; Fisher, T. A.; Deans, M.; Lee, P.
2005-01-01
CHAMP (Camera, Handlens And Microscope Probe) is a novel field microscope capable of color imaging with continuously variable spatial resolution, from infinity imaging down to diffraction-limited microscopy (3 micron/pixel). As an arm-mounted imager, CHAMP supports stereo-imaging with variable baselines, can continuously image targets at increasing magnification during an arm approach, can provide precision range-finding estimates to targets, and can accommodate microscopic imaging of rough surfaces through an image filtering process called z-stacking. Currently designed with a filter wheel holding 4 different filters, so that color and black-and-white images can be obtained over the entire field of view, future designs will increase the number of filter positions to include 8 different filters. Finally, CHAMP incorporates controlled white and UV illumination so that images can be obtained regardless of sun position, and any potentially fluorescent species can be identified so the most astrobiologically interesting samples can be selected.
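The z-stacking step mentioned above (focus stacking for rough surfaces) can be illustrated with a per-pixel sharpness rule. This sketch assumes a grayscale image stack and a smoothed squared-Laplacian sharpness measure; it is not CHAMP's actual pipeline.

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def z_stack(images, window=9):
    """Focus stacking ('z-stacking'): for every pixel, keep the value from
    the slice whose local sharpness (locally averaged squared Laplacian)
    is highest. images: array-like of shape (n_slices, H, W), grayscale."""
    stack = np.asarray(images, dtype=float)
    sharpness = np.stack([uniform_filter(laplace(im) ** 2, size=window)
                          for im in stack])
    best = np.argmax(sharpness, axis=0)          # sharpest slice per pixel
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]
```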
High-efficiency Gaussian key reconciliation in continuous variable quantum key distribution
NASA Astrophysics Data System (ADS)
Bai, ZengLiang; Wang, XuYang; Yang, ShenShen; Li, YongMin
2016-01-01
Efficient reconciliation is a crucial step in continuous variable quantum key distribution. The progressive-edge-growth (PEG) algorithm is an efficient method to construct relatively short block length low-density parity-check (LDPC) codes. The quasi-cyclic construction method can extend short block length codes and further eliminate the shortest cycle. In this paper, by combining the PEG algorithm and the quasi-cyclic construction method, we design long block length irregular LDPC codes with high error-correcting capacity. Based on these LDPC codes, we achieve high-efficiency Gaussian key reconciliation with slice reconciliation based on multilevel coding/multistage decoding, with an efficiency of 93.7%.
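One common definition of reconciliation efficiency is the ratio of the achieved rate to the Gaussian channel capacity; the accounting for slice reconciliation is more involved, so treat this as a simplified sketch with illustrative numbers rather than the paper's exact figure of merit:

```python
import math

def reconciliation_efficiency(code_rate, snr):
    """Efficiency beta = R / C, with C = 0.5 * log2(1 + SNR) bits per symbol,
    the capacity of the Gaussian channel. beta = 1 is the ideal limit."""
    return code_rate / (0.5 * math.log2(1 + snr))

# At SNR = 1 the capacity is 0.5 bit/symbol, so an effective rate of
# 0.46850 bit/symbol would correspond to the quoted 93.7% efficiency.
print(reconciliation_efficiency(0.46850, 1.0))  # 0.937
```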
High-efficiency reconciliation for continuous variable quantum key distribution
NASA Astrophysics Data System (ADS)
Bai, Zengliang; Yang, Shenshen; Li, Yongmin
2017-04-01
Quantum key distribution (QKD) is the most mature application of quantum information technology. Information reconciliation is a crucial step in QKD and significantly affects the final secret key rates shared between the two legitimate parties. We analyze and compare various construction methods of low-density parity-check (LDPC) codes and design high-performance irregular LDPC codes with a block length of 10^6. Starting from these good codes and exploiting the slice reconciliation technique based on multilevel coding and multistage decoding, we realize high-efficiency Gaussian key reconciliation with efficiency higher than 95% for signal-to-noise ratios above 1. Our demonstrated method can be readily applied in continuous variable QKD.
Universal quantum computation with temporal-mode bilayer square lattices
NASA Astrophysics Data System (ADS)
Alexander, Rafael N.; Yokoyama, Shota; Furusawa, Akira; Menicucci, Nicolas C.
2018-03-01
We propose an experimental design for universal continuous-variable quantum computation that incorporates recent innovations in linear-optics-based continuous-variable cluster state generation and cubic-phase gate teleportation. The first ingredient is a protocol for generating the bilayer-square-lattice cluster state (a universal resource state) with temporal modes of light. With this state, measurement-based implementation of Gaussian unitary gates requires only homodyne detection. Second, we describe a measurement device that implements an adaptive cubic-phase gate, up to a random phase-space displacement. It requires a two-step sequence of homodyne measurements and consumes a (non-Gaussian) cubic-phase state.
Static and Dynamic Aeroelastic Tailoring With Variable Camber Control
NASA Technical Reports Server (NTRS)
Stanford, Bret K.
2016-01-01
This paper examines the use of a Variable Camber Continuous Trailing Edge Flap (VCCTEF) system for aeroservoelastic optimization of a transport wingbox. The quasisteady and unsteady motions of the flap system are utilized as design variables, along with patch-level structural variables, towards minimizing wingbox weight via maneuver load alleviation and active flutter suppression. The resulting system is, in general, very successful at removing structural weight in a feasible manner. Limitations to this success are imposed by including load cases where the VCCTEF system is not active (open-loop) in the optimization process, and also by including actuator operating cost constraints.
Fractional Programming for Communication Systems—Part II: Uplink Scheduling via Matching
NASA Astrophysics Data System (ADS)
Shen, Kaiming; Yu, Wei
2018-05-01
This two-part paper develops novel methodologies for using fractional programming (FP) techniques to design and optimize communication systems. Part I of this paper proposes a new quadratic transform for FP and treats its application for continuous optimization problems. In this Part II of the paper, we study discrete problems, such as those involving user scheduling, which are considerably more difficult to solve. Unlike the continuous problems, discrete or mixed discrete-continuous problems normally cannot be recast as convex problems. In contrast to the common heuristic of relaxing the discrete variables, this work reformulates the original problem in an FP form amenable to distributed combinatorial optimization. The paper illustrates this methodology by tackling the important and challenging problem of uplink coordinated multi-cell user scheduling in wireless cellular systems. Uplink scheduling is more challenging than downlink scheduling, because uplink user scheduling decisions significantly affect the interference pattern in nearby cells. Further, the discrete scheduling variable needs to be optimized jointly with continuous variables such as transmit power levels and beamformers. The main idea of the proposed FP approach is to decouple the interaction among the interfering links, thereby permitting a distributed and joint optimization of the discrete and continuous variables with provable convergence. The paper shows that the well-known weighted minimum mean-square-error (WMMSE) algorithm can also be derived from a particular use of FP; but our proposed FP-based method significantly outperforms WMMSE when discrete user scheduling variables are involved, both in terms of run-time efficiency and optimization results.
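For context, the quadratic transform introduced in Part I (on which this part builds) replaces each ratio with a concave surrogate. Stated in generic notation (my notation, with A_m(x) >= 0 and B_m(x) > 0, not copied from the paper):

```latex
\max_{x}\; \sum_{m=1}^{M} \frac{A_m(x)}{B_m(x)}
\quad\Longleftrightarrow\quad
\max_{x,\,y}\; \sum_{m=1}^{M} \Bigl( 2\, y_m \sqrt{A_m(x)} - y_m^{2}\, B_m(x) \Bigr),
\qquad
y_m^{\star} = \frac{\sqrt{A_m(x)}}{B_m(x)} .
```

Alternating between updating y in closed form and optimizing x (with the scheduling variables handled combinatorially) is what permits the distributed joint optimization described above.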
Lunkenheimer, Erika; Lichtwarck-Aschoff, Anna; Hollenstein, Tom; Kemp, Christine J.; Granic, Isabela
2016-01-01
Objective Parent-child coercive cycles have been associated with both rigidity and inconsistency in parenting behavior. To explain these mixed findings, we examined real-time variability in maternal responses to children's off-task behavior to determine whether this common trigger of the coercive cycle (responding to child misbehavior) is associated with rigidity or inconsistency in parenting. We also examined the effects of risk factors for coercion (maternal hostility, maternal depressive symptoms, child externalizing problems, and dyadic negativity) on patterns of parenting. Design Mother-child dyads (N = 96; M child age = 41 months) completed a difficult puzzle task, and observations were coded continuously for parent (e.g., directive, teaching) and child behavior (e.g., on-task, off-task). Results Multilevel continuous-time survival analyses revealed that parenting behavior is less variable when children are off-task. However, when risk factors are higher, a different profile emerges. Combined maternal and child risk is associated with markedly lower variability in parenting behavior overall (i.e., rigidity) paired with shifts towards higher variability specifically when children are off-task (i.e., inconsistency). Dyadic negativity (i.e., episodes when children are off-task and parents engage in negative behavior) are also associated with higher parenting variability. Conclusions Risk factors confer rigidity in parenting overall, but in moments when higher-risk parents must respond to child misbehavior, their parenting becomes more variable, suggesting inconsistency and ineffectiveness. This context-dependent shift in parenting behavior may help explain prior mixed findings and offer new directions for family interventions designed to reduce coercive processes. PMID:28190978
ERIC Educational Resources Information Center
Deke, John; Dragoset, Lisa
2015-01-01
Does receipt of School Improvement Grants (SIG) funding to implement a school intervention model have an impact on outcomes for low-performing schools? This study answers this question using a regression discontinuity design (RDD) that exploits cutoff values on the continuous variables used to define SIG eligibility tiers, comparing outcomes in…
International Behavior Analysis: The Operationalization Task
1976-02-15
This report constitutes the first technical report of year two of the International Behavior Analysis (IBA) Project, which is designed ... builders continued to generate "grand designs," but none of them attempted to specify variable areas in a comprehensive fashion. As a result, specific... 1970; Meckstroth, 1975). The distinction between "most similar systems" and "most different systems" designs presents the fundamental choice that a
Design optimization of continuous partially prestressed concrete beams
NASA Astrophysics Data System (ADS)
Al-Gahtani, A. S.; Al-Saadoun, S. S.; Abul-Feilat, E. A.
1995-04-01
An effective formulation for the optimum design of two-span continuous partially prestressed concrete beams is described in this paper. Variable prestressing forces along the tendon profile, which may be jacked from one end or both ends with flexibility in the overlapping range and location, and the induced secondary effects are considered. The imposed constraints are on flexural stresses, ultimate flexural strength, cracking moment, ultimate shear strength, reinforcement limits, cross-section dimensions, and cable profile geometries. These constraints are formulated in accordance with ACI (American Concrete Institute) code provisions. The capabilities of the program to solve several engineering problems are presented.
NASA/USRA advanced design program activity, 1991-1992
NASA Technical Reports Server (NTRS)
Dorrity, J. Lewis; Patel, Suneer
1992-01-01
The School of Textile and Fiber Engineering continued to pursue design projects with the Mechanical Engineering School, giving the students an outstanding opportunity to interact with students from another discipline. Four problems were defined which had aspects that could reasonably be assigned to an interdisciplinary team. The design problems are described. The projects included lunar preform manufacturing, dust control for Enabler, an industrial sewing machine variable speed controller, the Enabler operation station, and a design for producing fiberglass fabric in a lunar environment.
Lazic, Stanley E
2008-07-21
Analysis of variance (ANOVA) is a common statistical technique in physiological research, and often one or more of the independent/predictor variables, such as dose, time, or age, can be treated as a continuous rather than a categorical variable during analysis - even if subjects were randomly assigned to treatment groups. While this is not common, there are a number of advantages to such an approach, including greater statistical power due to increased precision, a simpler and more informative interpretation of the results, greater parsimony, and the possibility of transforming the predictor variable. An example is given from an experiment where rats were randomly assigned to receive either 0, 60, 180, or 240 mg/L of fluoxetine in their drinking water, with performance on the forced swim test as the outcome measure. Dose was treated as either a categorical or continuous variable during analysis, with the latter analysis leading to a more powerful test (p = 0.021 vs. p = 0.159). This will be true in general, and the reasons for this are discussed. There are many advantages to treating variables as continuous numeric variables if the data allow this, and this should be employed more often in experimental biology. Failure to use the optimal analysis runs the risk of missing significant effects or relationships.
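The two analyses this abstract contrasts can be sketched with statsmodels; the data below are simulated for illustration (a hypothetical re-creation, not the paper's dataset):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative data: immobility declines weakly with fluoxetine dose.
rng = np.random.default_rng(1)
dose = np.repeat([0, 60, 180, 240], 10)
immobility = 120 - 0.05 * dose + rng.normal(0, 15, size=dose.size)
df = pd.DataFrame({"dose": dose, "immobility": immobility})

# Dose as a categorical factor: the classic one-way ANOVA (3 df for dose).
anova_fit = smf.ols("immobility ~ C(dose)", data=df).fit()

# Dose as a continuous predictor: a 1-df test for linear trend, which is
# more powerful when the dose-response relationship is roughly monotonic.
trend_fit = smf.ols("immobility ~ dose", data=df).fit()
print(anova_fit.f_pvalue, trend_fit.f_pvalue)
```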
Design, analysis and test verification of advanced encapsulation systems
NASA Technical Reports Server (NTRS)
Garcia, A., III
1983-01-01
A preliminary reduced variable master was constructed for pressure loading. A study of cell thickness versus cell stress was completed. Work is continuing on encapsulation of qualification modules. A 4 ft x 4 ft 'credit card' construction laminate was made.
Research on damping properties optimization of variable-stiffness plate
NASA Astrophysics Data System (ADS)
Wen-kai, QI; Xian-tao, YIN; Cheng, SHEN
2016-09-01
This paper investigates the damping optimization design of variable-stiffness composite laminated plates, in which fibre paths can be continuously curved and fibre angles differ from region to region. First, a damping prediction model is developed based on the modal dissipative energy principle and verified by comparison with modal testing results. Then, instead of fibre angles, the element stiffness and damping matrices are taken as the design variables on the basis of a novel Discrete Material Optimization (DMO) formulation, greatly reducing the computation time. Finally, the modal damping capacity of arbitrary order is optimized using the MMA (Method of Moving Asymptotes), with a mode tracking technique employed to follow changes in the mode shapes. The convergence of the interpolation function, the first-order specific damping capacity (SDC) optimization results, and the variation of the mode shapes for different penalty factors are discussed. The results show that the damping properties of the variable-stiffness plate can be increased by 50%-70% after optimization.
Gong, Yan-Xiao; Zhang, ShengLi; Xu, P; Zhu, S N
2016-03-21
We propose to generate a single-mode-squeezing two-mode squeezed vacuum state via a single χ(2) nonlinear photonic crystal. The state is favorable for existing Gaussian entanglement distillation schemes, since local squeezing operations can enhance the final entanglement and the success probability. The crystal is designed to enable three concurrent quasi-phase-matched parametric down-conversions, and hence obviates the need for auxiliary on-line local squeezing operations on both sides. The compact source opens up a way for continuous-variable quantum technologies and could find more potential applications in future large-scale quantum networks.
1981-03-01
Abstract (continued): ...resolved by allowing the person who had spoken least up to that time to speak first. A second independent variable studied was the use of informational prompts consisting of periodic computer displays of... beneficial in a team which interacts repeatedly over an extended period of time. In the study to be presented here, a specially designed electronic...
Linear Modeling of Rotorcraft for Stability Analysis and Preliminary Design
1993-09-01
[MATLAB script fragment: an interactive menu that displays the control matrix Bmat (inputs: lateral cyclic, pedal) and the eigenvalues of the uncoupled longitudinal plant; workspace variables include V, Amat, Bmat, Rcoup, Flataug, Glataug, Rlataug, Plataug, Rlonaug, and Plonaug.]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, C
2009-11-12
In FY09 they will (1) complete the implementation, verification, calibration, and sensitivity and scalability analysis of the in-cell virus replication model; (2) complete the design of the cell culture (cell-to-cell infection) model; (3) continue the research, design, and development of their bioinformatics tools: the Web-based structure-alignment-based sequence variability tool and the functional annotation of the genome database; (4) collaborate with the University of California at San Francisco on areas of common interest; and (5) submit journal articles that describe the in-cell model with simulations and the bioinformatics approaches to evaluation of genome variability and fitness.
NASA Technical Reports Server (NTRS)
Urnes, James, Sr.; Nguyen, Nhan; Ippolito, Corey; Totah, Joseph; Trinh, Khanh; Ting, Eric
2013-01-01
Boeing and NASA are conducting a joint study program to design a wing flap system that will provide mission-adaptive lift and drag performance for future transport aircraft having light-weight, flexible wings. This Variable Camber Continuous Trailing Edge Flap (VCCTEF) system offers a lighter-weight lift control system with two performance objectives: (1) an efficient high-lift capability for take-off and landing, and (2) reduction in cruise drag through control of the twist shape of the flexible wing. During cruise, this control system will command varying flap settings along the span of the wing in order to establish the optimum wing twist for the current gross weight and cruise flight condition, and will continue to adjust the wing twist as the aircraft changes gross weight and cruise conditions over each mission segment. The design weight of the flap control system is being minimized through use of light-weight shape memory alloy (SMA) actuation augmented with electric actuators. The VCCTEF program is developing better lift and drag performance for flexible-wing transports, with the further benefits of lighter-weight actuation and less drag from the variable camber shape of the flap.
NASA Astrophysics Data System (ADS)
Shao, Zhongshi; Pi, Dechang; Shao, Weishi
2017-11-01
This article proposes an extended continuous estimation of distribution algorithm (ECEDA) to solve the permutation flow-shop scheduling problem (PFSP). In ECEDA, to make a continuous estimation of distribution algorithm (EDA) suitable for the PFSP, the largest order value rule is applied to convert continuous vectors to discrete job permutations. A probabilistic model based on a mixed Gaussian and Cauchy distribution is built to maintain the exploration ability of the EDA. Two effective local search methods, i.e. revolver-based variable neighbourhood search and Hénon chaotic-based local search, are designed and incorporated into the EDA to enhance the local exploitation. The parameters of the proposed ECEDA are calibrated by means of a design of experiments approach. Simulation results and comparisons based on some benchmark instances show the efficiency of the proposed algorithm for solving the PFSP.
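As a concrete illustration of the decoding step named above, the following Python fragment sketches one common reading of the largest-order-value rule (a hypothetical stand-in for the authors' exact implementation): jobs are ranked by decreasing magnitude of the sampled continuous components.

```python
import numpy as np

def lov_decode(x):
    """Largest order value: the index of the largest component is scheduled
    first, the next largest second, and so on."""
    return np.argsort(-x)

x = np.array([0.31, 1.72, -0.40, 0.95, 0.12])  # one individual sampled from the EDA
print(lov_decode(x))                            # -> [1 3 0 4 2], a job permutation
```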
Hu, Qinglei
2007-10-01
This paper presents a dual-stage control system design method for the flexible spacecraft attitude maneuvering control by use of on-off thrusters and active vibration control by input shaper. In this design approach, attitude control system and vibration suppression were designed separately using lower order model. As a stepping stone, an integral variable structure controller with the assumption of knowing the upper bounds of the mismatched lumped perturbation has been designed which ensures exponential convergence of attitude angle and angular velocity in the presence of bounded uncertainty/disturbances. To reconstruct estimates of the system states for use in a full information variable structure control law, an asymptotic variable structure observer is also employed. In addition, the thruster output is modulated in pulse-width pulse-frequency so that the output profile is similar to the continuous control histories. For actively suppressing the induced vibration, the input shaping technique is used to modify the existing command so that less vibration will be caused by the command itself, which only requires information about the vibration frequency and damping of the closed-loop system. The rationale behind this hybrid control scheme is that the integral variable structure controller can achieve good precision pointing, even in the presence of uncertainties/disturbances, whereas the shaped input attenuator is applied to actively suppress the undesirable vibrations excited by the rapid maneuvers. Simulation results for the spacecraft model show precise attitude control and vibration suppression.
ERIC Educational Resources Information Center
Stanford Univ., CA. School Mathematics Study Group.
This manual was designed for use with the fourth of five texts in the Secondary School Advanced Mathematics (SSAM) series. Developed for students who have completed the Secondary School Mathematics (SSM) program and wish to continue their studies in mathematics, this series is designed to review, strengthen, and fill gaps in the material covered…
NASA Astrophysics Data System (ADS)
Phan, Duoc T.; Lim, James B. P.; Sha, Wei; Siew, Calvin Y. M.; Tanyimboh, Tiku T.; Issa, Honar K.; Mohammad, Fouad A.
2013-04-01
Cold-formed steel portal frames are a popular form of construction for low-rise commercial, light industrial and agricultural buildings with spans of up to 20 m. In this article, a real-coded genetic algorithm is described that is used to minimize the cost of the main frame of such buildings. The key decision variables considered in this proposed algorithm consist of both the spacing and pitch of the frame, as continuous variables, and the discrete section sizes. A routine performing the structural analysis and frame design for cold-formed steel sections is embedded into the genetic algorithm. The results show that the real-coded genetic algorithm handles the mixture of design variables effectively, with high robustness and consistency in achieving the optimum solution. All wind load combinations according to the Australian code are considered in this research. Results for frames with knee braces are also included, for which the optimization achieved even larger savings in cost.
Smooth conditional distribution function and quantiles under random censorship.
Leconte, Eve; Poiraud-Casanova, Sandrine; Thomas-Agnan, Christine
2002-09-01
We consider a nonparametric random design regression model in which the response variable is possibly right censored. The aim of this paper is to estimate the conditional distribution function and the conditional alpha-quantile of the response variable. We restrict attention to the case where the response variable as well as the explanatory variable are unidimensional and continuous. We propose and discuss two classes of estimators which are smooth with respect to the response variable as well as to the covariate. Some simulations demonstrate that the new methods have better mean square error performances than the generalized Kaplan-Meier estimator introduced by Beran (1981) and considered in the literature by Dabrowska (1989, 1992) and Gonzalez-Manteiga and Cadarso-Suarez (1994).
CHAMP (Camera, Handlens, and Microscope Probe)
NASA Technical Reports Server (NTRS)
Mungas, Greg S.; Boynton, John E.; Balzer, Mark A.; Beegle, Luther; Sobel, Harold R.; Fisher, Ted; Klein, Dan; Deans, Matthew; Lee, Pascal; Sepulveda, Cesar A.
2005-01-01
CHAMP (Camera, Handlens And Microscope Probe) is a novel field microscope capable of color imaging with continuously variable spatial resolution from infinity imaging down to diffraction-limited microscopy (3 micron/pixel). As a robotic arm-mounted imager, CHAMP supports stereo imaging with variable baselines, can continuously image targets at increasing magnification during an arm approach, can provide precision rangefinding estimates to targets, and can accommodate microscopic imaging of rough surfaces through an image filtering process called z-stacking. CHAMP was originally developed through the Mars Instrument Development Program (MIDP) in support of robotic field investigations, but may also find application in new areas such as robotic in-orbit servicing and maintenance operations associated with spacecraft and human operations. We overview CHAMP's instrument performance and basic design considerations below.
NASA Astrophysics Data System (ADS)
Zhou, Jian; Guo, Ying
2017-02-01
A continuous-variable measurement-device-independent (CV-MDI) multipartite quantum communication protocol is designed to realize multipartite communication based on GHZ state analysis using Gaussian coherent states. It can remove detector side attacks, as the multi-mode measurement is performed blindly in a suitable black box. The entanglement-based CV-MDI multipartite communication scheme and the equivalent prepare-and-measure scheme are proposed to analyze the security and to guide experiment, respectively. General eavesdropping and coherent attacks are considered in the security analysis. Subsequently, all the attacks are ascribed to a coherent attack against imperfect links. The asymptotic key rate of the asymmetric configuration is also derived, with numerical simulations illustrating the performance of the proposed protocol.
Long-distance continuous-variable quantum key distribution with a Gaussian modulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jouguet, Paul; SeQureNet, 23 avenue d'Italie, F-75013 Paris; Kunz-Jacques, Sebastien
2011-12-15
We designed high-efficiency error correcting codes allowing us to extract an errorless secret key in a continuous-variable quantum key distribution (CVQKD) protocol using a Gaussian modulation of coherent states and a homodyne detection. These codes are available for a wide range of signal-to-noise ratios on an additive white Gaussian noise channel with a binary modulation and can be combined with a multidimensional reconciliation method proven secure against arbitrary collective attacks. This improved reconciliation procedure considerably extends the secure range of a CVQKD with a Gaussian modulation, giving a secret key rate of about 10^(-3) bit per pulse at a distance of 120 km for reasonable physical parameters.
Moyé, Lemuel A; Lai, Dejian; Jing, Kaiyan; Baraniuk, Mary Sarah; Kwak, Minjung; Penn, Marc S; Wu, Colon O
2011-01-01
The assumptions that anchor large clinical trials are rooted in smaller, Phase II studies. In addition to specifying the target population, intervention delivery, and patient follow-up duration, physician-scientists who design these Phase II studies must select the appropriate response variables (endpoints). However, endpoint measures can be problematic. If the endpoint assesses the change in a continuous measure over time, then the occurrence of an intervening significant clinical event (SCE), such as death, can preclude the follow-up measurement. In addition, the ideal continuous endpoint measurement may be contraindicated in a fraction of the study patients, requiring a less precise substitute in this subset of participants. A score function based on the U-statistic can address both issues: (1) intercurrent SCEs, and (2) response variable ascertainments that use measurements of different precision. The scoring statistic is easy to apply, clinically relevant, and provides flexibility for the investigators' prospective design decisions. Sample size and power formulations for this statistic are provided as functions of clinical event rates and effect size estimates that are easy for investigators to identify and discuss. Examples are provided from current cardiovascular cell therapy research.
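The pairwise logic behind such a score can be pictured with the following Python sketch, a generic illustration of the U-statistic idea rather than the authors' exact statistic: each treated/control pair is compared first on the significant clinical event, and only then on the continuous change.

```python
# Minimal sketch (not the authors' exact statistic): a pairwise score in
# which the SCE dominates, and the continuous endpoint breaks the tie.
def pair_score(a, b):
    """a, b: (had_sce, change) tuples; +1 if a fares better than b,
    -1 if worse, 0 if the pair cannot be ordered."""
    a_sce, a_chg = a
    b_sce, b_chg = b
    if a_sce != b_sce:                 # the clinical event dominates
        return -1 if a_sce else 1
    if a_sce:                          # both had the event: tie
        return 0
    if a_chg == b_chg:
        return 0
    return 1 if a_chg > b_chg else -1  # larger improvement wins

def u_score(treated, control):
    """Sum of pairwise scores over all treated x control pairs."""
    return sum(pair_score(t, c) for t in treated for c in control)

treated = [(False, 5.2), (True, None), (False, 3.1)]   # invented data
control = [(False, 1.0), (False, 4.0)]
print(u_score(treated, control))
```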
Design and optimisation of novel configurations of stormwater constructed wetlands
NASA Astrophysics Data System (ADS)
Kiiza, Christopher
2017-04-01
Constructed wetlands (CWs) are recognised as a cost-effective technology for wastewater treatment. CWs have been deployed, and could be retrofitted into existing urban drainage systems, to prevent surface water pollution, attenuate floods and act as sources of reusable water. However, numerous criteria exist for the design configuration and operation of CWs. The aim of the study was to examine the effects of design and operational variables on the performance of CWs. To achieve this, 8 novel designs of vertical-flow CWs were continuously operated and monitored weekly for 2 years. Pollutant removal efficiency in each CW unit was evaluated from physico-chemical analyses of influent and effluent water samples. Hybrid optimised multi-layer perceptron artificial neural networks (MLP ANNs) were applied to simulate treatment efficiency in the CWs. Subsequently, predictive and analytical models were developed for each design unit. Results show the models have sound generalisation abilities, with various design configurations and operational variables influencing the performance of CWs. Although some design configurations attained faster and higher removal efficiencies than others, all 8 CW designs produced effluents permissible for discharge into watercourses with strict regulatory standards.
USDA-ARS?s Scientific Manuscript database
Changes in climate and extreme weather have already occurred and are increasing challenges for agriculture nationally and globally, and many of these impacts will continue into the future. This technical bulletin contains information and resources designed to help agricultural producers, service pro...
Survey Design Recommendations.
ERIC Educational Resources Information Center
Fisher, William P., Jr.
2000-01-01
Presents 17 rules of thumb to create surveys that are likely to provide data of high enough quality to meet the requirements for measurement specified in a probabilistic conjoint measurement model. Use of these steps should allow the survey to be joined with others measuring the same variable to ensure continued equating with a single reference…
Broadband non-polarizing terahertz beam splitters with variable split ratio
NASA Astrophysics Data System (ADS)
Wei, Minggui; Xu, Quan; Wang, Qiu; Zhang, Xueqian; Li, Yanfeng; Gu, Jianqiang; Tian, Zhen; Zhang, Xixiang; Han, Jiaguang; Zhang, Weili
2017-08-01
Seeking effective terahertz functional devices has always aroused extensive attention. Of particular interest is the terahertz beam splitter. Here, we have proposed, designed, manufactured, and tested a broadband non-polarizing terahertz beam splitter with a variable split ratio based on an all-dielectric metasurface. The metasurface was created by patterning a dielectric surface with an N-step phase gradient and etching to a depth of a few hundred micrometers. A conversion efficiency as high as 81% under normal incidence at 0.7 THz was achieved. Meanwhile, such a splitter works well over a broad frequency range. The split ratio of the proposed design can be continuously tuned by simply shifting the metasurface, and the angle of emergence can also be easily adjusted by choosing the step of the phase gradient. The proposed design is non-polarizing, and its performance is maintained under different polarizations.
The Causal Effects of Father Absence
McLanahan, Sara; Tach, Laura; Schneider, Daniel
2014-01-01
The literature on father absence is frequently criticized for its use of cross-sectional data and methods that fail to take account of possible omitted variable bias and reverse causality. We review studies that have responded to this critique by employing a variety of innovative research designs to identify the causal effect of father absence, including studies using lagged dependent variable models, growth curve models, individual fixed effects models, sibling fixed effects models, natural experiments, and propensity score matching models. Our assessment is that studies using more rigorous designs continue to find negative effects of father absence on offspring well-being, although the magnitude of these effects is smaller than what is found using traditional cross-sectional designs. The evidence is strongest and most consistent for outcomes such as high school graduation, children’s social-emotional adjustment, and adult mental health. PMID:24489431
Automated divertor target design by adjoint shape sensitivity analysis and a one-shot method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dekeyser, W., E-mail: Wouter.Dekeyser@kuleuven.be; Reiter, D.; Baelmans, M.
As magnetic confinement fusion progresses towards the development of first reactor-scale devices, computational tokamak divertor design is a topic of high priority. Presently, edge plasma codes are used in a forward approach, where magnetic field and divertor geometry are manually adjusted to meet design requirements. Due to the complex edge plasma flows and large number of design variables, this method is computationally very demanding. On the other hand, efficient optimization-based design strategies have been developed in computational aerodynamics and fluid mechanics. Such an optimization approach to divertor target shape design is elaborated in the present paper. A general formulation of the design problems is given, and conditions characterizing the optimal designs are formulated. Using a continuous adjoint framework, design sensitivities can be computed at a cost of only two edge plasma simulations, independent of the number of design variables. Furthermore, by using a one-shot method the entire optimization problem can be solved at an equivalent cost of only a few forward simulations. The methodology is applied to target shape design for uniform power load, in simplified edge plasma geometry.
Design, fabrication and systems integration of a satellite tracked, free-drifting ocean data buoy
NASA Technical Reports Server (NTRS)
Wallace, J. W.; Cox, J. W.
1976-01-01
Engineering details are presented of a small free-drifting buoy configuration designed for use in the study of continental shelf water circulation patterns in the Chesapeake Bight of the Western North Atlantic Ocean. The buoy incorporated French instrumentation and was interrogated by the French EOLE satellite to provide position and four channels of temperature data. The buoy design included a variable-depth drogue and a power supply sufficient for six weeks of continuous operation. Proof tests of the configuration indicated an adequate design, and subsequent field experiments verified the proper functioning of the system.
The Challenge of Reproducibility and Accuracy in Nutrition Research: Resources and Pitfalls1234
Kuszak, Adam J; Williamson, John S; Hopp, D Craig; Betz, Joseph M
2016-01-01
Inconsistent and contradictory results from nutrition studies conducted by different investigators continue to emerge, in part because of the inherent variability of natural products, as well as the unknown and therefore uncontrolled variables in study populations and experimental designs. Given these challenges inherent in nutrition research, it is critical for the progress of the field that researchers strive to minimize variability within studies and enhance comparability between studies by optimizing the characterization, control, and reporting of products, reagents, and model systems used, as well as the rigor and reporting of experimental designs, protocols, and data analysis. Here we describe some recent developments relevant to research on plant-derived products used in nutrition research, highlight some resources for optimizing the characterization and reporting of research using these products, and describe some of the pitfalls that may be avoided by adherence to these recommendations. PMID:26980822
Introduction to the use of regression models in epidemiology.
Bender, Ralf
2009-01-01
Regression modeling is one of the most important statistical techniques used in analytical epidemiology. By means of regression models, the effect of one or several explanatory variables (e.g., exposures, subject characteristics, risk factors) on a response variable such as mortality or cancer can be investigated. From multiple regression models, adjusted effect estimates can be obtained that take the effect of potential confounders into account. Regression methods can be applied in all epidemiologic study designs, so they represent a universal tool for data analysis in epidemiology. Different kinds of regression models have been developed depending on the measurement scale of the response variable and the study design. The most important methods are linear regression for continuous outcomes, logistic regression for binary outcomes, Cox regression for time-to-event data, and Poisson regression for frequencies and rates. This chapter provides a nontechnical introduction to these regression models with illustrative examples from cancer research.
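A hedged sketch of the four model families on a simulated data set follows (Python, statsmodels; all variable names, coefficients, and data are invented for illustration).

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({"age": rng.uniform(30, 70, n),
                   "exposed": rng.integers(0, 2, n)})
df["bp"] = 100 + 0.5 * df.age + 5 * df.exposed + rng.normal(0, 10, n)
df["cancer"] = rng.binomial(1, 0.15 + 0.10 * df.exposed)
df["count"] = rng.poisson(2 + df.exposed)
df["time"] = rng.exponential(10, n)
df["event"] = rng.integers(0, 2, n)

linear = smf.ols("bp ~ age + exposed", data=df).fit()           # continuous outcome
logistic = smf.logit("cancer ~ age + exposed", data=df).fit(disp=False)  # binary
poisson = smf.glm("count ~ age + exposed", data=df,
                  family=sm.families.Poisson()).fit()           # frequencies/rates
cox = smf.phreg("time ~ age + exposed", data=df,
                status=df["event"].values).fit()                # time-to-event

print(np.exp(logistic.params["exposed"]))   # adjusted odds ratio for exposure
```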
Paz-Pascual, Carmen; Pinedo, Isabel Artieta; Grandes, Gonzalo; de Gamboa, Gurutze Remiro Fernandez; Hermosilla, Itziar Odriozola; de la Hera, Amaia Bacigalupe; Gordon, Janire Payo; Garcia, Guadalupe Manzano; de Pedro, Magdalena Ureta
2008-01-01
Background Antenatal education (AE) started more than 30 years ago with the purpose of decreasing pain during childbirth. Epidural anaesthesia has achieved this objective, and the value of AE is therefore currently questioned. This article describes the protocol and process of a study designed to assess AE results today. Methods/Design A prospective study was designed in which a cohort of 616 nulliparous pregnant women attending midwife offices of the Basque Health Service were followed for 13 months. Three exposure groups were considered based on the number of AE sessions attended: (a) women attending no session, (b) women attending 1 to 4, and (c) women attending 5 or more sessions. Sociodemographic, personality, and outcome variables related to childbirth and breastfeeding were measured. It was expected that 40% of pregnant women would not have participated in any AE session; however, 93% had attended at least one session. This low exposure variability decreased the statistical power of the study relative to that initially planned. Despite this, there was a greater than 80% power for detecting as significant differences between exposure groups of, for instance, 10% in continuation of breastfeeding at one and a half months and in visits for false labour. Women attending more sessions had a higher mean age and educational level, and belonged to a higher socioeconomic group (p < 0.01). Follow-up was completed in 99% of participants. Discussion Adequate prior estimation of variability in the exposure under study is essential for designing cohort studies. Sociodemographic characteristics may play a confounding role in studies assessing AE and should be controlled for in design and analyses. Quality control during the study process and continued collaboration from both public system midwives and eligible pregnant women resulted in a negligible loss rate. PMID:18435856
An inverter/controller subsystem optimized for photovoltaic applications
NASA Technical Reports Server (NTRS)
Pickrell, R. L.; Osullivan, G.; Merrill, W. C.
1978-01-01
Conversion of solar array dc power to ac power stimulated the specification, design, and simulation testing of an inverter/controller subsystem tailored to the photovoltaic power source characteristics. Optimization of the inverter/controller design is discussed as part of an overall photovoltaic power system designed for maximum energy extraction from the solar array. The special design requirements for the inverter/controller include: a power system controller (PSC) to continuously hold the solar array operating point at the maximum power level under variable solar insolation and cell temperatures; and an inverter designed for high efficiency at rated load and low losses at light loadings to conserve energy.
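The maximum-power tracking idea can be illustrated with a perturb-and-observe loop, a modern textbook sketch in Python rather than the 1978 subsystem's actual control law; the PV curve below is a toy model.

```python
import math

def pv_power(v, v_oc=40.0, i_sc=8.0):
    """Toy PV array model: current collapses exponentially near open circuit."""
    i = i_sc * (1.0 - math.exp((v - v_oc) / 3.0))
    return max(v * i, 0.0)

def perturb_and_observe(v=20.0, dv=0.5, steps=200):
    p_prev = pv_power(v)
    for _ in range(steps):
        v += dv
        p = pv_power(v)
        if p < p_prev:        # power fell, so reverse the perturbation
            dv = -dv
        p_prev = p
    return v, p_prev

print(perturb_and_observe())   # settles (oscillating) near the maximum-power point
```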
Design of a WSN for the Sampling of Environmental Variability in Complex Terrain
Martín-Tardío, Miguel A.; Felicísimo, Ángel M.
2014-01-01
In-situ environmental parameter measurements using sensor systems connected to a wireless network have become widespread, but the problem of monitoring large and mountainous areas by means of a wireless sensor network (WSN) is not well resolved. The main reasons for this are: (1) the environmental variability distribution is unknown in the field; (2) without this knowledge, a huge number of sensors would be necessary to ensure the complete coverage of the environmental variability and (3) WSN design requirements, for example, effective connectivity (intervisibility), limiting distances and controlled redundancy, are usually solved by trial and error. Using temperature as the target environmental variable, we propose: (1) a method to determine the homogeneous environmental classes to be sampled using the digital elevation model (DEM) and geometric simulations and (2) a procedure to determine an effective WSN design in complex terrain in terms of the number of sensors, redundancy, cost and spatial distribution. The proposed methodology, based on geographic information systems and binary integer programming can be easily adapted to a wide range of applications that need exhaustive and continuous environmental monitoring with high spatial resolution. The results show that the WSN design is perfectly suited to the topography and the technical specifications of the sensors, and provides a complete coverage of the environmental variability in terms of Sun exposure. However these results still need be validated in the field and the proposed procedure must be refined. PMID:25412218
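The selection step can be posed as a set-covering binary integer program. The sketch below (Python/SciPy) uses an invented coverage matrix and costs in place of the paper's GIS-derived inputs: choose the cheapest subset of candidate sites so that every environmental class is covered by at least one sensor.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

cover = np.array([[1, 0, 1, 0, 0],    # rows: environmental classes (assumed)
                  [0, 1, 1, 0, 1],    # cols: candidate sensor sites
                  [1, 0, 0, 1, 0],
                  [0, 0, 1, 1, 1]])
cost = np.array([3.0, 2.0, 4.0, 2.5, 1.5])   # per-site cost (assumed)

res = milp(c=cost,
           constraints=LinearConstraint(cover, lb=1),  # every class covered
           integrality=np.ones(5),                     # integer variables
           bounds=Bounds(0, 1))                        # ... restricted to 0/1
print(res.x)   # 0/1 vector of selected sites (here sites 0 and 4)
```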
NASA Astrophysics Data System (ADS)
Hester, Michael Wayne
Nanotechnology offers significant opportunities for solving existing engineering problems as well as breakthroughs in new fields of science and technology. In order to fully realize the benefits of such initiatives, nanomanufacturing methods must be developed to integrate the enabling constructs into the commercial mainstream. Even though significant advances have been made, widespread industrialization in many areas remains limited. Manufacturing methods, therefore, must continually be developed to bridge gaps between nanoscience discovery and commercialization. A promising top-down nanomanufacturing technology yet to receive full industrialization is equal channel angular pressing, a process transforming metallic materials into nanostructured or ultra-fine grained materials with significantly improved performance characteristics. To bridge the gap between process potential and actual manufacturing output, a prototype top-down nanomanufacturing system identified as indexing equal channel angular pressing (IX-ECAP) was developed. The unit was designed to capitalize on opportunities for transforming spent or scrap engineering elements into key engineering commodities. A manufacturing system was constructed to impose severe plastic deformation via simple shear in an equal channel angular pressing die on 1100 and 4043 aluminum welding rods. 1/4-fraction factorial split-plot experiments assessed the significance of five predictors on the response, microhardness, for the 4043 alloy. Predictor variables included temperature, number of passes, pressing speed, back pressure, and vibration. Main effects were studied employing a resolution III design, and multiple linear regression was used for model development. Initial studies were performed using continuous processing, followed by contingency designs involving discrete variable-length work pieces. IX-ECAP offered a viable solution in severe plastic deformation processing, and discrete variable-length work piece pressing proved very successful. With three passes through the system, the processed 4043 material experienced an 88.88% increase in microhardness, a 203.4% increase in converted yield strength, and a 98.5% reduction in theoretical final grain size to 103 nanometers using the Hall-Petch relation. The process factor, number of passes, was statistically significant at the 95% confidence level, whereas temperature was significant at the 90% confidence level. Limitations of system components precluded completion of the studies involving continuous pressing. Proposed system redesigns, however, will ensure mainstream commercialization of continuous-length work piece processing.
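The grain-size inference named at the end rests on the Hall-Petch relation, sigma_y = sigma_0 + k * d^(-1/2). A worked sketch with illustrative aluminium constants (not the study's fitted values):

```python
# Hall-Petch back-calculation: d = (k / (sigma_y - sigma_0))**2.
sigma_0 = 20.0     # MPa, lattice friction stress (illustrative)
k = 0.068          # MPa*m**0.5, Hall-Petch coefficient for Al (illustrative)
sigma_y = 220.0    # MPa, strength converted from microhardness (invented)

d = (k / (sigma_y - sigma_0)) ** 2         # theoretical grain size in metres
print(f"theoretical grain size ~ {d * 1e9:.0f} nm")   # about 116 nm here
```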
Stochastic search in structural optimization - Genetic algorithms and simulated annealing
NASA Technical Reports Server (NTRS)
Hajela, Prabhat
1993-01-01
An account is given of illustrative applications of genetic algorithms and simulated annealing methods in structural optimization. The advantages of such stochastic search methods over traditional mathematical programming strategies are emphasized; it is noted that these methods offer a significantly higher probability of locating the global optimum in a multimodal design space. Both genetic-search and simulated annealing can be effectively used in problems with a mix of continuous, discrete, and integer design variables.
Multisite Assessment of Nursing Continuing Education Learning Needs Using an Electronic Tool.
Winslow, Susan; Jackson, Stephanie; Cook, Lesley; Reed, Joanne Williams; Blakeney, Keshia; Zimbro, Kathie; Parker, Cindy
2016-02-01
A continued education needs assessment and associated education plan are required for organizations on the journey for American Nurses Credentialing Center Magnet® designation. Leveraging technology to support the assessment and analysis of continuing education needs was a new venture for a 12-hospital regional health system. The purpose of this performance improvement project was to design and conduct an enhanced process to increase the efficiency and effectiveness of gathering data on nurses' preferences and increase nurse satisfaction with the learner assessment portion of the process. Educators trialed the use of a standardized approach via an electronic survey tool to replace the highly variable processes previously used. Educators were able to view a graphical summary of responses by category and setting, which substantially decreased analysis and action planning time for education implementation plans at the system, site, or setting level. Based on these findings, specific continuing education action plans were drafted for each category and classification of nurses.
Bell-Curve Based Evolutionary Strategies for Structural Optimization
NASA Technical Reports Server (NTRS)
Kincaid, Rex K.
2001-01-01
Evolutionary methods are exceedingly popular with practitioners of many fields; more so than perhaps any optimization tool in existence. Historically, Genetic Algorithms (GAs) led the way in practitioner popularity. However, in the last ten years Evolutionary Strategies (ESs) and Evolutionary Programs (EPs) have gained a significant foothold. One partial explanation for this shift is the interest in using GAs to solve continuous optimization problems. The typical GA relies upon a cumbersome binary representation of the design variables. An ES or EP, however, works directly with the real-valued design variables. For detailed references on evolutionary methods in general, and on ES or EP in particular, see Back and Dasgupta and Michalewicz. We call our evolutionary algorithm BCB (bell curve based) since it is based upon two normal distributions.
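A minimal sketch of the general ES mechanics referred to above, with Gaussian ("bell curve") mutation on real-valued variables and (mu+lambda) selection; the BCB algorithm's specific two-distribution scheme is not reproduced here, and the objective and parameters are arbitrary.

```python
import numpy as np

def sphere(x):                        # toy objective to minimize
    return float(np.sum(x ** 2))

rng = np.random.default_rng(3)
mu, lam, sigma, dims = 5, 20, 0.3, 4
pop = rng.normal(0, 2, (mu, dims))    # initial real-valued parents

for _ in range(100):
    parents = pop[rng.integers(0, mu, lam)]
    kids = parents + rng.normal(0, sigma, (lam, dims))   # Gaussian mutation
    both = np.vstack([pop, kids])
    both = both[np.argsort([sphere(x) for x in both])]
    pop = both[:mu]                   # keep the mu best: (mu+lambda) selection

print(sphere(pop[0]))                 # approaches 0
```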
Continuity of care in mental health: understanding and measuring a complex phenomenon.
Burns, T; Catty, J; White, S; Clement, S; Ellis, G; Jones, I R; Lissouba, P; McLaren, S; Rose, D; Wykes, T
2009-02-01
Continuity of care is considered by patients and clinicians an essential feature of good quality care in long-term disorders, yet there is general agreement that it is a complex concept. Most policies emphasize it and encourage systems to promote it. Despite this, there is no accepted definition or measure against which to test policies or interventions designed to improve continuity. We aimed to operationalize a multi-axial model of continuity of care and to use factor analysis to determine its validity for severe mental illness. A multi-axial model of continuity of care comprising eight facets was operationalized for quantitative data collection from mental health service users using 32 variables. Of these variables, 22 were subsequently entered into a factor analysis as independent components, using data from a clinical population considered to require long-term consistent care. Factor analysis produced seven independent continuity factors accounting for 62.5% of the total variance. These factors, Experience and Relationship, Regularity, Meeting Needs, Consolidation, Managed Transitions, Care Coordination and Supported Living, were close but not identical to the original theoretical model. We confirmed that continuity of care is multi-factorial. Our seven factors are intuitively meaningful and appear to work in mental health. These factors should be used as a starting-point in research into the determinants and outcomes of continuity of care in long-term disorders.
Procurement and Retention of Navy Physicians. Report No. CNS 1030.
ERIC Educational Resources Information Center
Devine, Eugene J.
This study is designed to provide a better understanding of the Navy's health-care system and the impact of a draft-free system in attracting an adequate number of physicians. The medical scholarship, proposed variable incentive, and present continuation pay scales are evaluated from the standpoint of financial attractiveness to the physician…
Willecke, N; Szepes, A; Wunderlich, M; Remon, J P; Vervaet, C; De Beer, T
2017-04-30
The overall objective of this work is to understand how excipient characteristics influence the process and product performance for a continuous twin-screw wet granulation process. The knowledge gained through this study is intended to be used for a Quality by Design (QbD)-based formulation design approach and formulation optimization. A total of 9 preferred fillers and 9 preferred binders were selected for this study. The selected fillers and binders were extensively characterized regarding their physico-chemical and solid state properties using 21 material characterization techniques. Subsequently, principal component analysis (PCA) was performed on the data sets of filler and binder characteristics in order to reduce the variety of single characteristics to a limited number of overarching properties. Four principal components (PC) explained 98.4% of the overall variability in the fillers data set, while three principal components explained 93.4% of the overall variability in the data set of binders. Both PCA models allowed in-depth evaluation of similarities and differences in the excipient properties.
Bauer, Seth R.; Salem, Charbel; Connor, Michael J.; Groszek, Joseph; Taylor, Maria E.; Wei, Peilin; Tolwani, Ashita J.
2012-01-01
Summary Background and objectives: Current recommendations for piperacillin-tazobactam dosing in patients receiving continuous renal replacement therapy originate from studies with relatively few patients and lower continuous renal replacement therapy doses than commonly used today. This study measured the pharmacokinetic and pharmacodynamic characteristics of piperacillin-tazobactam in patients treated with continuous renal replacement therapy using contemporary equipment and prescriptions. Design, setting, participants, & measurements: A multicenter prospective observational study in the intensive care units of two academic medical centers was performed, enrolling patients with AKI or ESRD receiving piperacillin-tazobactam while being treated with continuous renal replacement therapy. Pregnant women, children, and patients with end stage liver disease were excluded from enrollment. Plasma and continuous renal replacement therapy effluent samples were analyzed for piperacillin and tazobactam levels using HPLC. Pharmacokinetic and pharmacodynamic parameters were calculated using standard equations. Multivariate analyses were used to examine the association of patient and continuous renal replacement therapy characteristics with piperacillin pharmacokinetic parameters. Results: Forty-two of fifty-five subjects enrolled had complete sampling. Volume of distribution (median=0.38 L/kg, interquartile range=0.20 L/kg) and elimination rate constants (median=0.104 h−1, interquartile range=0.052 h−1) were highly variable, and clinical parameters could explain only a small fraction of the large variability in pharmacokinetic parameters. Probability of target attainment for piperacillin was 83% for total drug but only 77% when the unbound fraction was considered. Conclusions: There is significant patient-to-patient variability in pharmacokinetic/pharmacodynamic parameters in patients receiving continuous renal replacement therapy. Many patients did not achieve pharmacodynamic targets, suggesting that therapeutic drug monitoring might optimize therapy. PMID:22282479
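For orientation, the reported parameters come from standard one-compartment equations; a sketch with invented concentrations (not study data) follows.

```python
import math

c1, t1 = 60.0, 1.0     # mg/L at 1 h after the end of infusion (invented)
c2, t2 = 25.0, 7.0     # mg/L at 7 h (invented)

k_e = math.log(c1 / c2) / (t2 - t1)    # elimination rate constant, 1/h
half_life = math.log(2) / k_e          # h
c0 = c1 * math.exp(k_e * t1)           # concentration back-extrapolated to t = 0
dose, weight = 4000.0, 80.0            # mg piperacillin, kg body weight (invented)
v_d = dose / c0 / weight               # volume of distribution, L/kg

print(f"k_e={k_e:.3f}/h, t1/2={half_life:.1f} h, Vd={v_d:.2f} L/kg")
```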
Discrete event simulation tool for analysis of qualitative models of continuous processing systems
NASA Technical Reports Server (NTRS)
Malin, Jane T. (Inventor); Basham, Bryan D. (Inventor); Harris, Richard A. (Inventor)
1990-01-01
An artificial intelligence design and qualitative modeling tool is disclosed for creating computer models and simulating continuous activities, functions, and/or behavior using developed discrete event techniques. Conveniently, the tool is organized in four modules: library design module, model construction module, simulation module, and experimentation and analysis. The library design module supports the building of library knowledge including component classes and elements pertinent to a particular domain of continuous activities, functions, and behavior being modeled. The continuous behavior is defined discretely with respect to invocation statements, effect statements, and time delays. The functionality of the components is defined in terms of variable cluster instances, independent processes, and modes, further defined in terms of mode transition processes and mode dependent processes. Model construction utilizes the hierarchy of libraries and connects them with appropriate relations. The simulation executes a specialized initialization routine and executes events in a manner that includes selective inherency of characteristics through a time and event schema until the event queue in the simulator is emptied. The experimentation and analysis module supports analysis through the generation of appropriate log files and graphics developments and includes the ability of log file comparisons.
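The execution model can be pictured with a minimal event-queue sketch in Python: a generic discrete-event core in which effects are scheduled after time delays and executed until the queue empties, not the patented tool's implementation.

```python
import heapq

class Simulator:
    def __init__(self):
        self.clock = 0.0
        self.queue = []                       # heap of (time, seq, effect)
        self._seq = 0                         # tie-breaker for equal times

    def invoke(self, delay, effect):
        """Schedule an effect statement to fire after a time delay."""
        heapq.heappush(self.queue, (self.clock + delay, self._seq, effect))
        self._seq += 1

    def run(self):
        while self.queue:                     # run until the event queue empties
            self.clock, _, effect = heapq.heappop(self.queue)
            effect(self)

def valve_opens(sim):
    print(f"t={sim.clock:.1f}: valve open")
    sim.invoke(2.5, tank_full)                # an effect schedules a follow-up

def tank_full(sim):
    print(f"t={sim.clock:.1f}: tank full")

sim = Simulator()
sim.invoke(1.0, valve_opens)
sim.run()
```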
Stacking-sequence optimization for buckling of laminated plates by integer programming
NASA Technical Reports Server (NTRS)
Haftka, Raphael T.; Walsh, Joanne L.
1991-01-01
Integer-programming formulations for the design of symmetric and balanced laminated plates under biaxial compression are presented. Both maximization of buckling load for a given total thickness and the minimization of total thickness subject to a buckling constraint are formulated. The design variables that define the stacking sequence of the laminate are zero-one integers. It is shown that the formulation results in a linear optimization problem that can be solved on readily available software. This is in contrast to the continuous case, where the design variables are the thicknesses of layers with specified ply orientations, and the optimization problem is nonlinear. Constraints on the stacking sequence such as a limit on the number of contiguous plies of the same orientation and limits on in-plane stiffnesses are easily accommodated. Examples are presented for graphite-epoxy plates under uniaxial and biaxial compression using a commercial software package based on the branch-and-bound algorithm.
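The zero-one structure can be illustrated with a toy model in Python, using the pulp MILP library as a hypothetical stand-in for the commercial branch-and-bound package; the position weights, orientation coefficients, and contiguity limit below are invented, not the paper's laminate mathematics.

```python
import pulp

N = 8                                   # ply positions in one laminate half
ORIENTS = [0, 45, 90]
w = [(k + 1) ** 2 for k in range(N)]    # outer plies weigh more in bending (assumed)
c = {0: 1.0, 45: 1.6, 90: 1.2}          # orientation contribution (assumed)
MAX_CONTIG = 4                          # contiguous same-orientation ply limit

prob = pulp.LpProblem("stacking_sequence", pulp.LpMaximize)
x = pulp.LpVariable.dicts("x", (range(N), ORIENTS), cat="Binary")

# Linear buckling-load proxy over the 0-1 stacking variables.
prob += pulp.lpSum(w[k] * c[o] * x[k][o] for k in range(N) for o in ORIENTS)
for k in range(N):                      # exactly one orientation per ply
    prob += pulp.lpSum(x[k][o] for o in ORIENTS) == 1
for o in ORIENTS:                       # no MAX_CONTIG+1 contiguous plies alike
    for k in range(N - MAX_CONTIG):
        prob += pulp.lpSum(x[j][o] for j in range(k, k + MAX_CONTIG + 1)) <= MAX_CONTIG

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([o for k in range(N) for o in ORIENTS if x[k][o].value() > 0.5])
```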
NASA Technical Reports Server (NTRS)
Hague, D. S.; Woodbury, N. W.
1975-01-01
The Mars system is a tool for rapid prediction of aircraft or engine characteristics based on correlation-regression analysis of past designs stored in the data bases. An example of output obtained from the MARS system, which involves derivation of an expression for gross weight of subsonic transport aircraft in terms of nine independent variables is given. The need is illustrated for careful selection of correlation variables and for continual review of the resulting estimation equations. For Vol. 1, see N76-10089.
Tang, Yang; Cook, Thomas D; Kisbu-Sakarya, Yasemin
2018-03-01
In the "sharp" regression discontinuity design (RD), all units scoring on one side of a designated score on an assignment variable receive treatment, whereas those scoring on the other side become controls. Thus the continuous assignment variable and binary treatment indicator are measured on the same scale. Because each must be in the impact model, the resulting multi-collinearity reduces the efficiency of the RD design. However, untreated comparison data can be added along the assignment variable, and a comparative regression discontinuity design (CRD) is then created. When the untreated data come from a non-equivalent comparison group, we call this CRD-CG. Assuming linear functional forms, we show that power in CRD-CG is (a) greater than in basic RD; (b) less sensitive to the location of the cutoff and the distribution of the assignment variable; and that (c) fewer treated units are needed in the basic RD component within the CRD-CG so that savings can result from having fewer treated cases. The theory we develop is used to make numerical predictions about the efficiency of basic RD and CRD-CG relative to each other and to a randomized control trial. Data from the National Head Start Impact study are used to test these predictions. The obtained estimates are closer to the predicted parameters for CRD-CG than for basic RD and are generally quite close to the parameter predictions, supporting the emerging argument that CRD should be the design of choice in many applications for which basic RD is now used. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
NASA Astrophysics Data System (ADS)
Roy, Satadru
Traditional approaches to design and optimize a new system, often, use a system-centric objective and do not take into consideration how the operator will use this new system alongside of other existing systems. This "hand-off" between the design of the new system and how the new system operates alongside other systems might lead to a sub-optimal performance with respect to the operator-level objective. In other words, the system that is optimal for its system-level objective might not be best for the system-of-systems level objective of the operator. Among the few available references that describe attempts to address this hand-off, most follow an MDO-motivated subspace decomposition approach of first designing a very good system and then provide this system to the operator who decides the best way to use this new system along with the existing systems. The motivating example in this dissertation presents one such similar problem that includes aircraft design, airline operations and revenue management "subspaces". The research here develops an approach that could simultaneously solve these subspaces posed as a monolithic optimization problem. The monolithic approach makes the problem a Mixed Integer/Discrete Non-Linear Programming (MINLP/MDNLP) problem, which are extremely difficult to solve. The presence of expensive, sophisticated engineering analyses further aggravate the problem. To tackle this challenge problem, the work here presents a new optimization framework that simultaneously solves the subspaces to capture the "synergism" in the problem that the previous decomposition approaches may not have exploited, addresses mixed-integer/discrete type design variables in an efficient manner, and accounts for computationally expensive analysis tools. The framework combines concepts from efficient global optimization, Kriging partial least squares, and gradient-based optimization. This approach then demonstrates its ability to solve an 11 route airline network problem consisting of 94 decision variables including 33 integer and 61 continuous type variables. This application problem is a representation of an interacting group of systems and provides key challenges to the optimization framework to solve the MINLP problem, as reflected by the presence of a moderate number of integer and continuous type design variables and expensive analysis tool. The result indicates simultaneously solving the subspaces could lead to significant improvement in the fleet-level objective of the airline when compared to the previously developed sequential subspace decomposition approach. In developing the approach to solve the MINLP/MDNLP challenge problem, several test problems provided the ability to explore performance of the framework. While solving these test problems, the framework showed that it could solve other MDNLP problems including categorically discrete variables, indicating that the framework could have broader application than the new aircraft design-fleet allocation-revenue management problem.
Maximum life spiral bevel reduction design
NASA Technical Reports Server (NTRS)
Savage, M.; Prasanna, M. G.; Coe, H. H.
1992-01-01
Optimization is applied to the design of a spiral bevel gear reduction for maximum life at a given size. A modified feasible directions search algorithm permits a wide variety of inequality constraints and exact design requirements to be met with low sensitivity to initial values. Gear tooth bending strength and minimum contact ratio under load are included in the active constraints. The optimal design of the spiral bevel gear reduction includes the selection of bearing and shaft proportions in addition to gear mesh parameters. System life is maximized subject to a fixed back-cone distance of the spiral bevel gear set for a specified speed ratio, shaft angle, input torque, and power. Significant parameters in the design are: the spiral angle, the pressure angle, the numbers of teeth on the pinion and gear, and the location and size of the four support bearings. Interpolated polynomials expand the discrete bearing properties and proportions into continuous variables for gradient optimization. After finding the continuous optimum, a designer can analyze near optimal designs for comparison and selection. Design examples show the influence of the bearing lives on the gear parameters in the optimal configurations. For a fixed back-cone distance, optimal designs with larger shaft angles have larger service lives.
Heralded processes on continuous-variable spaces as quantum maps
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ferreyrol, Franck; Spagnolo, Nicolò; Blandino, Rémi
2014-12-04
Heralding processes, which only work when a measurement on part of the system gives the desired result, are particularly interesting for continuous variables. They permit the non-Gaussian transformations that are necessary for several continuous-variable quantum information tasks. However, while maps and quantum process tomography are commonly used to describe quantum transformations in discrete-variable spaces, they are much rarer in the continuous-variable domain. Also, no convenient tool for representing maps in a way better adapted to the particularities of continuous variables has yet been explored. In this paper we try to fill this gap by presenting such a tool.
Attitude estimation of earth orbiting satellites by decomposed linear recursive filters
NASA Technical Reports Server (NTRS)
Kou, S. R.
1975-01-01
Attitude estimation of earth orbiting satellites (including the Large Space Telescope) subjected to environmental disturbances and noise was investigated. Modern control and estimation theory is used as a tool to design an efficient estimator for attitude estimation. Decomposed linear recursive filters for both continuous-time systems and discrete-time systems are derived. With this accurate estimation of spacecraft attitude, a state variable feedback controller may be designed to satisfy high system performance requirements.
Kosmides, Victoria S.; Hochberg, Marc C.
1984-01-01
This report describes the development, design specifications, features and implementation of a data base management system (DBMS) for clinical and epidemiologic studies in SLE. The DBMS is multidimensional with arrays formulated across patients, studies and variables. The major impact of this DBMS has been to increase the efficiency of managing and analyzing vast amounts of clinical and laboratory data and, as a result, to allow for continued growth in research productivity in areas related to SLE.
Online continuing medical education (CME) for GPs: does it work? A systematic review.
Thepwongsa, Isaraporn; Kirby, Catherine N; Schattner, Peter; Piterman, Leon
2014-10-01
Numerous studies have assessed the effectiveness of online continuing medical education (CME) designed to improve healthcare professionals' care of patients. The effects of online educational interventions targeted at general practitioners (GPs), however, have not been systematically reviewed. A computer search was conducted through seven databases for studies assessing changes in GPs' knowledge and practice, or patient outcomes, following an online educational intervention. Eleven studies met the eligibility criteria. Most studies (8/11, 72.7%) found a significant improvement in at least one of the following outcomes: satisfaction, knowledge or practice change. There was little evidence for the impact of online CME on patient outcomes. Variability in study design, characteristics of online interventions and outcome measures limited conclusions on the effects of online CME. Online CME could improve GP satisfaction, knowledge and practices, but there are very few well-designed studies that focus on this delivery method of GP education.
Joseph, Jeffrey I; Torjman, Marc C; Strasma, Paul J
2015-07-01
Hyperglycemia, hypoglycemia, and glycemic variability have been associated with increased morbidity, mortality, length of stay, and cost in a variety of critical care and non-critical care patient populations in the hospital. The results from prospective randomized clinical trials designed to determine the risks and benefits of intensive insulin therapy and tight glycemic control have been confusing and, at times, conflicting. The limitations of point-of-care blood glucose (BG) monitoring in the hospital highlight the great clinical need for an automated real-time continuous glucose monitoring system (CGMS) that can accurately measure the concentration of glucose every few minutes. Automation and standardization of the glucose measurement process have the potential to significantly improve BG control, clinical outcome, safety and cost.
Qubit-Programmable Operations on Quantum Light Fields
Barbieri, Marco; Spagnolo, Nicolò; Ferreyrol, Franck; Blandino, Rémi; Smith, Brian J.; Tualle-Brouri, Rosa
2015-01-01
Engineering quantum operations is a crucial capability needed for developing quantum technologies and designing new fundamental physics tests. Here we propose a scheme for realising a controlled operation acting on a travelling continuous-variable quantum field, whose functioning is determined by a discrete input qubit. This opens a new avenue for exploiting advantages of both information encoding approaches. Furthermore, this approach allows for the program itself to be in a superposition of operations, and as a result it can be used within a quantum processor, where coherences must be maintained. Our study can find interest not only in general quantum state engineering and information protocols, but also details an interface between different physical platforms. Potential applications can be found in linking optical qubits to optical systems for which coupling is best described in terms of their continuous variables, such as optomechanical devices. PMID:26468614
NASA Astrophysics Data System (ADS)
Xie, Cailang; Guo, Ying; Liao, Qin; Zhao, Wei; Huang, Duan; Zhang, Ling; Zeng, Guihua
2018-03-01
How to narrow the gap of security between theory and practice has been a notoriously urgent problem in quantum cryptography. Here, we analyze and provide experimental evidence of the clock jitter effect on the practical continuous-variable quantum key distribution (CV-QKD) system. The clock jitter is a random noise which exists permanently in the clock synchronization in the practical CV-QKD system, it may compromise the system security because of its impact on data sampling and parameters estimation. In particular, the practical security of CV-QKD with different clock jitter against collective attack is analyzed theoretically based on different repetition frequencies, the numerical simulations indicate that the clock jitter has more impact on a high-speed scenario. Furthermore, a simplified experiment is designed to investigate the influence of the clock jitter.
The Galileo scan platform pointing control system - A modern control theoretic viewpoint
NASA Technical Reports Server (NTRS)
Sevaston, G. E.; Macala, G. A.; Man, G. K.
1985-01-01
The current Galileo scan platform pointing control system (SPPCS) is described, and ways in which modern control concepts could serve to enhance it are considered. Of particular interest are: the multivariable design model and overall control system architecture, command input filtering, feedback compensator and command input design, stability robustness constraints for both continuous-time and sampled-data control systems, and digital implementation of the control system. The proposed approach leads to the design of a system that is similar to the current Galileo SPPCS configuration but promises a more systematic design process.
Maria K. Janowiak; Daniel D. Dostie; Michael A. Wilson; Michael J. Kucera; R. Howard Skinner; Jerry L. Hatfield; David Hollinger; Christopher W. Swanston
2016-01-01
Changes in climate and extreme weather are already increasing challenges for agriculture nationally and globally, and many of these impacts will continue into the future. This technical bulletin contains information and resources designed to help agricultural producers, service providers, and educators in the Midwest and Northeast regions of the United States integrate...
Variables to Consider in Planning Research for Effective Instruction: A Conceptual Framework.
ERIC Educational Resources Information Center
Uprichard, A. Edward
In this paper the belief is stated that researchers need to develop some type of conceptual frame for improving continuity of studies and specificity of treatment. This paper describes such a conceptual frame and its implications for research. The paper states that the framework was designed to help researchers identify, classify, and/or quantify…
Meteorological Measurement Guide
1992-01-01
measurements by inverting the equation for acoustic propagation through air. Uncertainties in this inversion, because of variability of atmospheric... shields can produce highly accurate relative air temperature measurements suitable for temperature gradient calculation. Well-designed radiation shields... measurement, clear-air profiling, and weather echo interpretations. The atmosphere is in a continuous state of change as patches of air with different
An Integrated Approach for Conducting a Behavioral Systems Analysis
ERIC Educational Resources Information Center
Diener, Lori H.; McGee, Heather M.; Miguel, Caio F.
2009-01-01
The aim of this paper is to illustrate how to conduct a Behavioral Systems Analysis (BSA) to aid in the design of targeted performance improvement interventions. BSA is a continuous process of analyzing the right variables to the right extent to aid in planning and managing performance at the organization, process, and job levels. BSA helps to…
Dual methods and approximation concepts in structural synthesis
NASA Technical Reports Server (NTRS)
Fleury, C.; Schmit, L. A., Jr.
1980-01-01
Approximation concepts and dual method algorithms are combined to create a method for minimum weight design of structural systems. Approximation concepts convert the basic mathematical programming statement of the structural synthesis problem into a sequence of explicit primal problems of separable form. These problems are solved by constructing explicit dual functions, which are maximized subject to nonnegativity constraints on the dual variables. It is shown that the joining together of approximation concepts and dual methods can be viewed as a generalized optimality criteria approach. The dual method is successfully extended to deal with pure discrete and mixed continuous-discrete design variable problems. The power of the method presented is illustrated with numerical results for example problems, including a metallic swept wing and a thin delta wing with fiber composite skins.
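For readers unfamiliar with the dual step, the following minimal sketch (our own toy separable problem with one reciprocal constraint, not the paper's structural model) builds the explicit dual function and maximizes it by projected gradient ascent subject to nonnegativity of the dual variable:

    import numpy as np

    # Toy separable primal: minimize sum(w_i * x_i) subject to sum(s_i / x_i) <= b,
    # x_i > 0. For a fixed dual variable lam, the inner minimization is explicit,
    #     x_i(lam) = sqrt(lam * s_i / w_i),
    # and the dual function is maximized over lam >= 0 by projected gradient ascent.
    w = np.array([1.0, 2.0, 1.5])    # objective coefficients (illustrative)
    s = np.array([4.0, 1.0, 2.0])    # constraint coefficients (illustrative)
    b = 3.0

    def x_of(lam):
        return np.sqrt(max(lam, 1e-12) * s / w)

    lam, step = 1.0, 0.5
    for _ in range(200):
        grad = np.sum(s / x_of(lam)) - b       # d(dual)/d(lam) = constraint residual
        lam = max(0.0, lam + step * grad)      # ascend, then project onto lam >= 0
    x = x_of(lam)
    print("lambda* =", round(lam, 3), "x* =", np.round(x, 3),
          "constraint value =", round(float(np.sum(s / x)), 3))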
Bell-Curve Based Evolutionary Strategies for Structural Optimization
NASA Technical Reports Server (NTRS)
Kincaid, Rex K.
2000-01-01
Evolutionary methods are exceedingly popular with practitioners of many fields; more so than perhaps any optimization tool in existence. Historically, Genetic Algorithms (GAs) led the way in practitioner popularity (Reeves 1997). However, in the last ten years Evolutionary Strategies (ESs) and Evolutionary Programs (EPs) have gained a significant foothold (Glover 1998). One partial explanation for this shift is the interest in using GAs to solve continuous optimization problems. The typical GA relies upon a cumbersome binary representation of the design variables. An ES or EP, however, works directly with the real-valued design variables. For detailed references on evolutionary methods in general, and ES or EP in particular, see Back (1996) and Dasgupta and Michalewicz (1997). We call our evolutionary algorithm BCB (bell curve based) since it is based upon two normal distributions.
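As a rough illustration, here is a minimal (mu, lambda)-style evolution strategy operating directly on real-valued design variables with Gaussian ("bell curve") mutations; the sphere objective and all settings are illustrative stand-ins, not the BCB algorithm itself:

    import numpy as np

    rng = np.random.default_rng(0)

    def objective(x):
        return np.sum(x**2)   # stand-in for a structural objective

    mu, lam, sigma, dim = 5, 20, 0.3, 4
    pop = rng.normal(0.0, 1.0, size=(mu, dim))
    for gen in range(100):
        parents = pop[rng.integers(0, mu, size=lam)]
        children = parents + rng.normal(0.0, sigma, size=(lam, dim))  # Gaussian mutation
        scores = np.array([objective(c) for c in children])
        pop = children[np.argsort(scores)[:mu]]                       # truncation selection
    print("best objective:", objective(pop[0]))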
A discrete decentralized variable structure robotic controller
NASA Technical Reports Server (NTRS)
Tumeh, Zuheir S.
1989-01-01
A decentralized trajectory controller for robotic manipulators is designed and tested using a multiprocessor architecture and a PUMA 560 robot arm. The controller is made up of a nominal model-based component and a correction component based on a variable structure suction control approach. The second control component is designed using bounds on the difference between the used and actual values of the model parameters. Since the continuous manipulator system is digitally controlled along a trajectory, a discretized equivalent model of the manipulator is used to derive the controller. The motivation for decentralized control is that the derived algorithms can be executed in parallel using a distributed, relatively inexpensive, architecture where each joint is assigned a microprocessor. Nonlinear interaction and coupling between joints is treated as a disturbance torque that is estimated and compensated for.
Optimal Synthesis of Compliant Mechanisms using Subdivision and Commercial FEA (DETC2004-57497)
NASA Technical Reports Server (NTRS)
Hull, Patrick V.; Canfield, Stephen
2004-01-01
The field of distributed-compliance mechanisms has seen significant work in developing suitable topology optimization tools for their design. These optimal design tools have grown out of the techniques of structural optimization. This paper will build on the previous work in topology optimization and compliant mechanism design by proposing an alternative design space parameterization through control points and adding another step to the process, that of subdivision. The control points allow a specific design to be represented as a solid model during the optimization process. The process of subdivision creates an additional number of control points that help smooth the surface (for example a C2 continuous surface depending on the method of subdivision chosen) creating a manufacturable design free of some traditional numerical instabilities. Note that these additional control points do not add to the number of design parameters. This alternative parameterization and description as a solid model effectively and completely separates the design variables from the analysis variables during the optimization procedure. The motivation behind this work is to create an automated design tool from task definition to functional prototype created on a CNC or rapid-prototype machine. This paper will describe the proposed compliant mechanism design process and will demonstrate the procedure on several examples common in the literature.
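To make the subdivision step concrete, here is a minimal control-point subdivision sketch using Chaikin corner cutting (which smooths toward a C1 limit curve; a cubic B-spline scheme would give the C2 continuity mentioned above); the control polygon is an invented example:

    import numpy as np

    # One Chaikin step inserts new control points at the 1/4 and 3/4 positions of
    # each edge; repeated application smooths the polygon while the design is
    # still fully described by the original coarse control points.
    def chaikin(points, steps=3):
        pts = np.asarray(points, dtype=float)
        for _ in range(steps):
            q = 0.75 * pts[:-1] + 0.25 * pts[1:]
            r = 0.25 * pts[:-1] + 0.75 * pts[1:]
            refined = np.empty((2 * len(q), pts.shape[1]))
            refined[0::2], refined[1::2] = q, r
            pts = refined
        return pts

    coarse = [[0, 0], [1, 2], [3, 2], [4, 0]]   # illustrative control polygon
    print(chaikin(coarse).shape)                # (18, 2) after three refinement steps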
Huang, Bo; Kuan, Pei Fen
2014-11-01
Delayed dose-limiting toxicities (DLTs), i.e., those occurring beyond the first cycle of treatment, are a challenge for phase I trials. The time-to-event continual reassessment method (TITE-CRM) is a Bayesian dose-finding design to address the issue of long observation time and early patient drop-out. It uses a weighted binomial likelihood with weights assigned to observations by the unknown time-to-toxicity distribution, and is open to accrual continually. To avoid dosing at overly toxic levels while retaining accuracy and efficiency for DLT evaluation that involves multiple cycles, we propose an adaptive weight function that incorporates cyclical data of the experimental treatment with parameters updated continually. This provides a reasonable estimate of the time-to-toxicity distribution by accounting for inter-cycle variability and maintains the statistical properties of consistency and coherence. A case study of a First-in-Human trial in cancer for an experimental biologic is presented using the proposed design. Design calibrations for the clinical and statistical parameters are conducted to ensure good operating characteristics. Simulation results show that the proposed TITE-CRM design with adaptive weight function yields significantly shorter trial duration, does not expose patients to additional risk, is competitive against the existing weighting methods, and possesses some desirable properties. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
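For context, a minimal sketch of the weighted binomial likelihood at the core of TITE-CRM, using the standard empiric dose-toxicity model and simple follow-up-fraction weights rather than the adaptive cyclical weight proposed in the paper; all numbers are illustrative:

    import numpy as np

    # TITE-CRM core: empiric model p(d, theta) = skeleton[d] ** exp(theta); a
    # pending patient with weight w contributes (1 - w * p), an observed DLT
    # contributes p. The posterior over theta is computed on a grid.
    skeleton = np.array([0.05, 0.10, 0.20, 0.35])   # prior toxicity guesses per dose
    dose_idx = np.array([0, 0, 1, 1, 2])            # doses given so far
    tox      = np.array([0, 0, 0, 1, 0])            # observed DLT indicators
    weight   = np.array([1.0, 1.0, 1.0, 1.0, 0.4])  # fraction of follow-up completed

    theta_grid = np.linspace(-3, 3, 601)
    prior = np.exp(-theta_grid**2 / (2 * 1.34**2))  # N(0, 1.34^2) prior, unnormalized

    def log_lik(theta):
        p = skeleton[dose_idx] ** np.exp(theta)
        return np.sum(tox * np.log(p) + (1 - tox) * np.log(1 - weight * p))

    post = prior * np.exp([log_lik(t) for t in theta_grid])
    post /= np.trapz(post, theta_grid)
    theta_hat = np.trapz(theta_grid * post, theta_grid)   # posterior mean of theta
    p_hat = skeleton ** np.exp(theta_hat)                 # updated dose-toxicity curve
    print("estimated toxicity probabilities:", np.round(p_hat, 3))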
Cahyadi, Christine; Heng, Paul Wan Sia; Chan, Lai Wah
2011-03-01
The aim of this study was to identify and optimize the critical process parameters of the newly developed Supercell quasi-continuous coater for optimal tablet coat quality. Design of experiments, aided by multivariate analysis techniques, was used to quantify the effects of various coating process conditions and their interactions on the quality of film-coated tablets. The process parameters varied included batch size, inlet temperature, atomizing pressure, plenum pressure, spray rate and coating level. An initial screening stage was carried out using a 2^(6-1) fractional factorial design of resolution IV. Following these preliminary experiments, an optimization study was carried out using the Box-Behnken design. Main response variables measured included drug-loading efficiency, coat thickness variation, and the extent of tablet damage. Apparent optimum conditions were determined by using response surface plots. The process parameters exerted various effects on the different response variables. Hence, trade-offs between individual optima were necessary to obtain the best compromised set of conditions. The adequacy of the optimized process conditions in meeting the combined goals for all responses was indicated by the composite desirability value. By using response surface methodology and optimization, coating conditions which produced coated tablets of high drug-loading efficiency, low incidences of tablet damage and low coat thickness variation were defined. Optimal conditions were found to vary over a large spectrum when different responses were considered. Changes in processing parameters across the design space did not result in drastic changes to coat quality, thereby demonstrating robustness in the Supercell coating process. © 2010 American Association of Pharmaceutical Scientists
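A minimal sketch of the response-surface step: fitting the quadratic model family underlying Box-Behnken designs by ordinary least squares in two coded factors and locating a stationary point; the factor names and data are invented for illustration:

    import numpy as np

    # Quadratic response surface in coded factors x1 (e.g., spray rate) and
    # x2 (e.g., inlet temperature):
    #   y = b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2
    rng = np.random.default_rng(1)
    x1, x2 = rng.uniform(-1, 1, 30), rng.uniform(-1, 1, 30)
    y = 90 + 4*x1 - 3*x2 - 2*x1*x2 - 5*x1**2 - 1.5*x2**2 + rng.normal(0, 0.5, 30)

    X = np.column_stack([np.ones_like(x1), x1, x2, x1*x2, x1**2, x2**2])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    print("fitted coefficients:", np.round(beta, 2))

    # Stationary point of the fitted surface (candidate optimum): grad = 0.
    A = np.array([[2*beta[4], beta[3]], [beta[3], 2*beta[5]]])
    x_star = np.linalg.solve(A, -beta[1:3])
    print("stationary point (coded units):", np.round(x_star, 2))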
Continuous-variable quantum homomorphic signature
NASA Astrophysics Data System (ADS)
Li, Ke; Shang, Tao; Liu, Jian-wei
2017-10-01
Quantum cryptography is believed to be unconditionally secure because its security is ensured by physical laws rather than computational complexity. According to its spectral characteristics, quantum information can be classified into two categories, namely discrete variables and continuous variables. Continuous-variable quantum protocols have gained much attention for their ability to transmit more information at lower cost. To verify the identities of different data sources in a quantum network, we propose a continuous-variable quantum homomorphic signature scheme. It is based on continuous-variable entanglement swapping and provides additive and subtractive homomorphism. Security analysis shows the proposed scheme is secure against replay, forgery and repudiation. Even under nonideal conditions, it supports effective verification within a certain verification threshold.
A Novel Multiobjective Evolutionary Algorithm Based on Regression Analysis
Song, Zhiming; Wang, Maocai; Dai, Guangming; Vasile, Massimiliano
2015-01-01
As is known, the Pareto set of a continuous multiobjective optimization problem with m objective functions is a piecewise continuous (m − 1)-dimensional manifold in the decision space under some mild conditions. How to utilize this regularity to design multiobjective optimization algorithms has become a research focus. In this paper, based on this regularity, a model-based multiobjective evolutionary algorithm with regression analysis (MMEA-RA) is put forward to solve continuous multiobjective optimization problems with variable linkages. In the algorithm, the optimization problem is modelled as a promising area in the decision space by a probability distribution, the centroid of which is an (m − 1)-dimensional piecewise continuous manifold. The least squares method is used to construct such a model. A selection strategy based on nondominated sorting is used to choose the individuals for the next generation. The new algorithm is tested and compared with NSGA-II and RM-MEDA. The results show that MMEA-RA outperforms RM-MEDA and NSGA-II on the test instances with variable linkages. At the same time, MMEA-RA has higher efficiency than the other two algorithms. A few shortcomings of MMEA-RA have also been identified and discussed in this paper. PMID:25874246
Ebbeling, Cara B; Wadden, Thomas A; Ludwig, David S
2011-01-01
Background: The circumstances under which the glycemic index (GI) and glycemic load (GL) are derived do not reflect real-world eating behavior. Thus, the ecologic validity of these constructs is incompletely known. Objective: This study examined the relation of dietary intake to glycemic response when foods are consumed under free-living conditions. Design: Participants were 26 overweight or obese adults with type 2 diabetes who participated in a randomized trial of lifestyle modification. The current study includes baseline data, before initiation of the intervention. Participants wore a continuous glucose monitor and simultaneously kept a food diary for 3 d. The dietary variables included GI, GL, and intakes of energy, fat, protein, carbohydrate, sugars, and fiber. The glycemic response variables included AUC, mean and SD of continuous glucose monitoring (CGM) values, percentage of CGM values in euglycemic and hyperglycemic ranges, and mean amplitude of glycemic excursions. Relations between daily dietary intake and glycemic outcomes were examined. Results: Data were available from 41 d of monitoring. Partial correlations, controlled for energy intake, indicated that GI or GL was significantly associated with each glycemic response outcome. In multivariate analyses, dietary GI accounted for 10% to 18% of the variance in each glycemic variable, independent of energy and carbohydrate intakes (P < 0.01). Conclusions: The data support the ecologic validity of the GI and GL constructs in free-living obese adults with type 2 diabetes. GI was the strongest and most consistent independent predictor of glycemic stability and variability. PMID:22071699
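A minimal sketch of the partial-correlation analysis described above, correlating a dietary variable with a glycemic outcome while controlling for energy intake; all data are synthetic:

    import numpy as np
    from scipy import stats

    # Partial correlation of x (e.g., dietary GI) with y (e.g., glucose AUC),
    # controlling for z (energy intake): correlate the residuals of x ~ z and y ~ z.
    rng = np.random.default_rng(2)
    z = rng.normal(2000, 300, 41)                 # energy intake, kcal (synthetic)
    x = 55 + 0.005 * z + rng.normal(0, 3, 41)     # glycemic index (synthetic)
    y = 100 + 0.8 * x + 0.01 * z + rng.normal(0, 4, 41)

    def residuals(v, z):
        slope, intercept, *_ = stats.linregress(z, v)
        return v - (intercept + slope * z)

    r, p = stats.pearsonr(residuals(x, z), residuals(y, z))
    print(f"partial r = {r:.2f}, p = {p:.4f}")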
Design and Modeling of a Variable Heat Rejection Radiator
NASA Technical Reports Server (NTRS)
Miller, Jennifer R.; Birur, Gajanana C.; Ganapathi, Gani B.; Sunada, Eric T.; Berisford, Daniel F.; Stephan, Ryan
2011-01-01
Variable Heat Rejection Radiator technology is needed for future NASA human-rated and robotic missions. The primary objective is to enable a single-loop architecture for human-rated missions: (1) radiators are typically sized for the maximum heat load in the warmest continuous environment, resulting in a large panel area; (2) a large radiator area leaves the fluid susceptible to freezing at low load in a cold environment, and typically results in a two-loop system; (3) a dual-loop architecture is approximately 18% heavier than a single-loop architecture (based on Orion thermal control system mass); and (4) a single-loop architecture requires adaptability to varying environments and heat loads.
A Sequential Shifting Algorithm for Variable Rotor Speed Control
NASA Technical Reports Server (NTRS)
Litt, Jonathan S.; Edwards, Jason M.; DeCastro, Jonathan A.
2007-01-01
A proof of concept of a continuously variable rotor speed control methodology for rotorcraft is described. Variable rotor speed is desirable for several reasons including improved maneuverability, agility, and noise reduction. However, it has been difficult to implement because turboshaft engines are designed to operate within a narrow speed band, and a reliable drive train that can provide continuous power over a wide speed range does not exist. The new methodology proposed here is a sequential shifting control for twin-engine rotorcraft that coordinates the disengagement and engagement of the two turboshaft engines in such a way that the rotor speed may vary over a wide range, but the engines remain within their prescribed speed bands and provide continuous torque to the rotor; two multi-speed gearboxes facilitate the wide rotor speed variation. The shifting process begins when one engine slows down and disengages from the transmission by way of a standard freewheeling clutch mechanism; the other engine continues to apply torque to the rotor. Once one engine disengages, its gear shifts, the multi-speed gearbox output shaft speed resynchronizes and it re-engages. This process is then repeated with the other engine. By tailoring the sequential shifting, the rotor may perform large, rapid speed changes smoothly, as demonstrated in several examples. The emphasis of this effort is on the coordination and control aspects for proof of concept. The engines, rotor, and transmission are all simplified linear models, integrated to capture the basic dynamics of the problem.
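A toy sketch of the shift-sequencing idea, in which each engine in turn freewheels, shifts to the gearbox ratio best matching the target rotor speed, resynchronizes, and re-engages; the states, ratios, and speeds here are our own simplification, not the paper's control law:

    # Each engine shifts one at a time, so the rotor always receives torque
    # from the other engine. All names and numbers are illustrative inventions.
    def sequential_shift(target_rotor_speed, engines):
        for e in engines:
            best = min(e["ratios"],
                       key=lambda r: abs(r * e["engine_speed"] - target_rotor_speed))
            if best != e["ratio"]:
                e["engaged"] = False   # freewheel clutch disengages the engine
                e["ratio"] = best      # multi-speed gearbox shifts while unloaded
                e["engaged"] = True    # output shaft resynchronizes and re-engages
        return [e["ratio"] for e in engines]

    engines = [{"ratio": 1.0, "ratios": [0.6, 0.8, 1.0],
                "engine_speed": 100.0, "engaged": True} for _ in range(2)]
    print(sequential_shift(75.0, engines))   # -> [0.8, 0.8]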
Fields, Dail; Roman, Paul M; Blum, Terry C
2012-01-01
Objective To examine the relationships among general management systems, patient-focused quality management/continuous quality improvement (TQM/CQI) processes, resource availability, and multiple dimensions of substance use disorder (SUD) treatment. Data Sources/Study Setting Data are from a nationally representative sample of 221 SUD treatment centers through the National Treatment Center Study (NTCS). Study Design The design was a cross-sectional field study using latent variable structural equation models. The key variables are management practices, TQM/CQI practices, resource availability, and treatment center performance. Data Collection Interviews and questionnaires provided data from treatment center administrative directors and clinical directors in 2007–2008. Principal Findings Patient-focused TQM/CQI practices fully mediated the relationship between internal management practices and performance. The effects of TQM/CQI on performance are significantly larger for treatment centers with higher levels of staff per patient. Conclusions Internal management practices may create a setting that supports implementation of specific patient-focused practices and protocols inherent to TQM/CQI processes. However, the positive effects of internal management practices on treatment center performance occur through use of specific patient-focused TQM/CQI practices and have more impact when greater amounts of supporting resources are present. PMID:22098342
A multi-stage drop-the-losers design for multi-arm clinical trials.
Wason, James; Stallard, Nigel; Bowden, Jack; Jennison, Christopher
2017-02-01
Multi-arm multi-stage trials can improve the efficiency of the drug development process when multiple new treatments are available for testing. A group-sequential approach can be used in order to design multi-arm multi-stage trials, using an extension to Dunnett's multiple-testing procedure. The actual sample size used in such a trial is a random variable that has high variability. This can cause problems when applying for funding as the cost will also be generally highly variable. This motivates a type of design that provides the efficiency advantages of a group-sequential multi-arm multi-stage design, but has a fixed sample size. One such design is the two-stage drop-the-losers design, in which a number of experimental treatments, and a control treatment, are assessed at a prescheduled interim analysis. The best-performing experimental treatment and the control treatment then continue to a second stage. In this paper, we discuss extending this design to have more than two stages, which is shown to considerably reduce the sample size required. We also compare the resulting sample size requirements to the sample size distribution of analogous group-sequential multi-arm multi-stage designs. The sample size required for a multi-stage drop-the-losers design is usually higher than, but close to, the median sample size of a group-sequential multi-arm multi-stage trial. In many practical scenarios, the disadvantage of a slight loss in average efficiency would be overcome by the huge advantage of a fixed sample size. We assess the impact of delay between recruitment and assessment as well as unknown variance on the drop-the-losers designs.
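A Monte Carlo sketch of a multi-stage drop-the-losers trial with one experimental arm dropped per stage and a naive final comparison against control; the effect sizes, stage sizes, and critical value are illustrative:

    import numpy as np

    rng = np.random.default_rng(3)

    def one_trial(effects, n=20, stages=3, crit=2.0):
        arms = list(range(len(effects)))          # surviving experimental arms
        sums = np.zeros(len(effects)); counts = np.zeros(len(effects))
        c_sum = c_n = 0.0
        for _ in range(stages):
            for a in arms:
                sums[a] += rng.normal(effects[a], 1.0, n).sum()
                counts[a] += n
            c_sum += rng.normal(0.0, 1.0, n).sum(); c_n += n
            if len(arms) > 1:                     # drop the current loser
                arms.remove(min(arms, key=lambda a: sums[a] / counts[a]))
        a = arms[0]
        z = (sums[a] / counts[a] - c_sum / c_n) / np.sqrt(1 / counts[a] + 1 / c_n)
        return z > crit                           # naive final test (ignores selection)

    effects = [0.0, 0.2, 0.5]                     # true experimental arm effects
    power = np.mean([one_trial(effects) for _ in range(2000)])
    print("empirical power:", power)              # note: total sample size is fixed by design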
Anonymous voting for multi-dimensional CV quantum system
NASA Astrophysics Data System (ADS)
Rong-Hua, Shi; Yi, Xiao; Jin-Jing, Shi; Ying, Guo; Moon-Ho, Lee
2016-06-01
We investigate the design of anonymous voting protocols, a CV-based binary-valued ballot and a CV-based multi-valued ballot with continuous variables (CV), in a multi-dimensional quantum cryptosystem to ensure the security of the voting procedure and data privacy. Quantum entangled states are employed in the continuous-variable quantum system to carry the voting information and assist information transmission; this takes advantage of GHZ-like states to improve the utilization of quantum states by decreasing the number of required quantum states. It provides a potential approach to achieving efficient quantum anonymous voting with high transmission security, especially in large-scale votes. Project supported by the National Natural Science Foundation of China (Grant Nos. 61272495, 61379153, and 61401519), the Research Fund for the Doctoral Program of Higher Education of China (Grant No. 20130162110012), and the MEST-NRF of Korea (Grant No. 2012-002521).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, J.F.; Wever, D.M.
1981-07-01
Three processes developed by Pittsburgh Energy Technology Center (PETC), Ledgemont Laboratories, and Ames Laboratories for the oxydesulfurization of coal were evaluated in continuous processing equipment designed, built, and/or adapted for the purpose at the DOE-owned Multi-Use Fuels and Energy Processes Test Plant (MEP) located at TRW's Capistrano Test Site in California. The three processes differed primarily in the chemical additives (none, sodium carbonate, or ammonia) fed to the 20% to 40% coal/water slurries, and in the oxygen content of the feed gas stream. Temperature, pressure, residence time, flow rates, slurry concentration and stirrer speed were the other primary independent variables. The amount of organic sulfur removed, total sulfur removed and the Btu recovery were the primary dependent variables. Evaluation of the data presented was not part of the test effort.
Experimental study on all-fiber-based unidimensional continuous-variable quantum key distribution
NASA Astrophysics Data System (ADS)
Wang, Xuyang; Liu, Wenyuan; Wang, Pu; Li, Yongmin
2017-06-01
We experimentally demonstrated an all-fiber-based unidimensional continuous-variable quantum key distribution (CV QKD) protocol and analyzed its security under collective attack in realistic conditions. A pulsed balanced homodyne detector, which could not be accessed by eavesdroppers, with phase-insensitive efficiency and electronic noise, was considered. Furthermore, a modulation method and an improved relative phase-locking technique with one amplitude modulator and one phase modulator were designed. The relative phase could be locked precisely with a standard deviation of 0.5° and a mean of almost zero. Secret key bit rates of 5.4 kbps and 700 bps were achieved for transmission fiber lengths of 30 and 50 km, respectively. The protocol, which simplified the CV QKD system and reduced the cost, displayed a performance comparable to that of a symmetrical counterpart under realistic conditions. It is expected that the developed protocol can facilitate the practical application of the CV QKD.
Wilkes, Donald F.; Purvis, James W.; Miller, A. Keith
1997-01-01
An infinitely variable transmission is capable of operating between a maximum speed in one direction and a minimum speed in an opposite direction, including a zero output angular velocity, while being supplied with energy at a constant angular velocity. Input energy is divided between a first power path carrying an orbital set of elements and a second path that includes a variable speed adjustment mechanism. The second power path also connects with the orbital set of elements in such a way as to vary the rate of angular rotation thereof. The combined effects of power from the first and second power paths are combined and delivered to an output element by the orbital element set. The transmission can be designed to operate over a preselected ratio of forward to reverse output speeds.
Development of a composite tailoring procedure for airplane wing
NASA Technical Reports Server (NTRS)
Chattopadhyay, Aditi; Zhang, Sen
1995-01-01
The development of a composite wing box section using a higher-order theory is proposed for accurate and efficient estimation of both static and dynamic responses. The theory includes the effect of through-the-thickness transverse shear deformations, which are important in laminated composites and are ignored in the classical approach. The box beam analysis is integrated with an aeroelastic analysis to investigate the effect of composite tailoring using a formal design optimization technique. A hybrid optimization procedure is proposed for addressing both continuous and discrete design variables.
Singh, Ravendra; Ierapetritou, Marianthi; Ramachandran, Rohit
2013-11-01
The next generation of QbD-based pharmaceutical products will be manufactured through continuous processing. This will allow the integration of online/inline monitoring tools, coupled with an efficient, advanced model-based feedback control system, to achieve precise control of process variables, so that the predefined product quality can be achieved consistently. The direct compaction process considered in this study is highly interactive and involves time delays for a number of process variables due to sensor placements, process equipment dimensions, and the flow characteristics of the solid material. A simple feedback regulatory control system (e.g., PI(D)) by itself may not be sufficient to achieve the tight process control that is mandated by regulatory authorities. The process presented herein comprises coupled dynamics involving slow and fast responses, indicating the need for a hybrid control scheme such as a combined MPC-PID scheme. In this manuscript, an efficient system-wide hybrid control strategy for an integrated continuous pharmaceutical tablet manufacturing process via direct compaction has been designed. The designed control system is a hybrid MPC-PID scheme. An effective controller parameter tuning strategy involving an ITAE method coupled with an optimization strategy has been used for tuning both MPC and PID parameters. The designed hybrid control system has been implemented in a first-principles model-based flowsheet simulated in gPROMS (Process Systems Enterprise). Results demonstrate enhanced performance of critical quality attributes (CQAs) under the hybrid control scheme compared to PID-only or MPC-only schemes, illustrating the potential of a hybrid control scheme in improving pharmaceutical manufacturing operations. Copyright © 2013 Elsevier B.V. All rights reserved.
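As a minimal illustration of the regulatory layer, here is a discrete PID loop on a first-order plant with transport delay, accumulating the ITAE cost used for tuning; the plant, delay, and gains are invented stand-ins for the tablet-press dynamics:

    from collections import deque

    # Discrete PID on a first-order plant with a 5-step sensor transport delay;
    # the ITAE integral (time-weighted absolute error) is the tuning objective.
    class PID:
        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral, self.prev_err = 0.0, 0.0

        def update(self, setpoint, measurement):
            err = setpoint - measurement
            self.integral += err * self.dt
            deriv = (err - self.prev_err) / self.dt
            self.prev_err = err
            return self.kp * err + self.ki * self.integral + self.kd * deriv

    dt, delay_steps = 1.0, 5
    pid = PID(kp=0.5, ki=0.1, kd=0.0, dt=dt)
    y, delayed = 0.0, deque([0.0] * delay_steps)   # sensor sees old outputs
    itae = 0.0
    for k in range(200):
        u = pid.update(1.0, delayed.popleft())     # controller acts on delayed measurement
        y += dt * (-0.2 * y + 0.2 * u)             # first-order plant step
        delayed.append(y)
        itae += (k * dt) * abs(1.0 - y)            # ITAE cost used for tuning
    print(f"final y = {y:.3f}, ITAE = {itae:.1f}")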
Advanced propulsion system for hybrid vehicles
NASA Technical Reports Server (NTRS)
Norrup, L. V.; Lintz, A. T.
1980-01-01
A number of hybrid propulsion systems were evaluated for application in several different vehicle sizes. A conceptual design was prepared for the most promising configuration. Various system configurations were parametrically evaluated and compared, design tradeoffs performed, and a conceptual design produced. Fifteen vehicle/propulsion system concepts were parametrically evaluated to select two systems and one vehicle for detailed design tradeoff studies. A single hybrid propulsion system concept and vehicle (a five-passenger family sedan) were selected for optimization based on the results of the tradeoff studies. The final propulsion system consists of a 65 kW spark-ignition heat engine, a mechanical continuously variable traction transmission, a 20 kW permanent magnet axial-gap traction motor, a variable frequency inverter, a 386 kg lead-acid improved state-of-the-art battery, and a transaxle. The system was configured with a parallel power path between the heat engine and battery. It has two automatic operational modes: electric mode and heat engine mode. Power is always shared between the heat engine and battery during acceleration periods. In both modes, regenerative braking energy is absorbed by the battery.
Influence of Patient Characteristics on Success of Ambulatory Blood Pressure Monitoring
Fravel, Michelle A.; Ernst, Michael E.; Weber, Cynthia A.; Dawson, Jeffrey D.; Carter, Barry L.; Bergus, George R.
2014-01-01
Study Objective To examine the influence of specific patient characteristics on the success of ambulatory blood pressure monitoring (ABPM). Design Retrospective analysis. Setting University-affiliated family care center. Patients Five hundred thirty patients (mean age 52.7 yrs, range 14–90 yrs) who were undergoing ABPM between January 1, 2001, and July 1, 2007. Measurement and Main Results Specific patient characteristics were identified through an electronic medical record review and then examined for association with ABPM session success rate. These patient characteristics included age, sex, weight, height, body mass index (BMI), occupation, clinic blood pressure, travel distance to clinic, and presence of diabetes mellitus or renal disease. The percentage of valid readings obtained during an ABPM session was analyzed continuously (0–100%), whereas overall session success was analyzed dichotomously (0–79% or 80–100%). Univariate and multivariate regression analyses were performed to examine the influence of patient characteristics on the percentage of valid readings and the overall likelihood of achieving a successful session. In the 530 patients, the average percentage of valid readings was 90%, and a successful ABPM session (≥ 80% valid readings) was obtained in 84.7% (449 patients). A diagnosis of diabetes was found to negatively predict ABPM session success (continuous variable analysis, p=0.019; dichotomous variable analysis, odds ratio [OR] 0.45, 95% confidence interval [CI] 0.23–0.87, p=0.019), as did renal disease (continuous variable analysis, p=0.006; dichotomous variable analysis, OR 0.39, 95% CI 0.17–0.90, p=0.027) and increasing BMI (continuous variable analysis, p<0.001; dichotomous variable analysis, OR 0.78, 95% CI 0.65–0.93, p=0.005). Renal disease and BMI remained significant predictors in adjusted analyses. Conclusion For most patients, ABPM was successful; however, elevated BMI and renal disease were associated with less complete ABPM session results. Adaptation and individualization of the ABPM process may be necessary to improve results in these patients. PMID:18956994
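A minimal sketch of the dichotomous analysis described above: logistic regression of session success (at least 80% valid readings) on BMI, reporting the odds ratio per BMI unit; the data are synthetic:

    import numpy as np
    import statsmodels.api as sm

    # Synthetic cohort: success probability declines with BMI (true model invented).
    rng = np.random.default_rng(4)
    bmi = rng.normal(29, 5, 530)
    logit_p = 3.0 - 0.08 * (bmi - 29)
    success = rng.random(530) < 1 / (1 + np.exp(-logit_p))

    X = sm.add_constant(bmi)
    fit = sm.Logit(success.astype(float), X).fit(disp=0)
    or_bmi = np.exp(fit.params[1])            # odds ratio per BMI unit
    ci = np.exp(fit.conf_int()[1])            # 95% CI on the odds-ratio scale
    print(f"OR per BMI unit = {or_bmi:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")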
Implementation of in-line infrared monitor in full-scale anaerobic digestion process.
Spanjers, H; Bouvier, J C; Steenweg, P; Bisschops, I; van Gils, W; Versprille, B
2006-01-01
During start up but also during normal operation, anaerobic reactor systems should be run and monitored carefully to secure trouble-free operation, because the process is vulnerable to disturbances such as temporary overloading, biomass wash out and influent toxicity. The present method of monitoring is usually by manual sampling and subsequent laboratory analysis. Data collection, processing and feedback to system operation is manual and ad hoc, and involves high-level operator skills and attention. As a result, systems tend to be designed at relatively conservative design loading rates resulting in significant over-sizing of reactors and thus increased systems cost. It is therefore desirable to have on-line and continuous access to performance data on influent and effluent quality. Relevant variables to indicate process performance include VFA, COD, alkalinity, sulphate, and, if aerobic post-treatment is considered, total nitrogen, ammonia and nitrate. Recently, mid-IR spectrometry was demonstrated on a pilot scale to be suitable for in-line simultaneous measurement of these variables. This paper describes a full-scale application of the technique to test its ability to monitor continuously and without human intervention the above variables simultaneously in two process streams. For VFA, COD, sulphate, ammonium and TKN good agreement was obtained between in-line and manual measurements. During a period of six months the in-line measurements had to be interrupted several times because of clogging. It appeared that the sample pre-treatment unit was not able to cope with high solids concentrations all the time.
Vega-Garzon, Lina Patricia; Gomez-Miranda, Ingry Natalia; Peñuela, Gustavo A
2018-05-01
Response Surface Methodology was used to optimize the operating variables of a multi-frequency ultrasound reactor, using BP-3 as a model compound. The response variable was the Triclosan degradation percentage after 10 min of sonication. Frequencies of 574, 856 and 1134 kHz were used. The effects of power density, pulse time (PT), silent time (ST) and the PT/ST ratio were also analyzed. 2^2 and 2^3 experimental designs were used for screening purposes, and a central composite design was used for optimization. An optimum value of 79.2% was obtained for a frequency of 574 kHz, a power density of 200 W/L, and a PT/ST ratio of 10. The significant variables were frequency and power level; the first has an optimum value after which degradation decreases, while power density had a strong positive effect over the whole operational range. PT, ST, and the PT/ST ratio were not significant variables, although it was shown that pulsed-mode ultrasound gives better degradation rates than continuous-mode ultrasound, the effect being less significant at higher power levels. Copyright © 2017. Published by Elsevier B.V.
Ryberg, Karen R.; Vecchia, Aldo V.
2006-01-01
This report presents the results of a study conducted by the U.S. Geological Survey, in cooperation with the North Dakota State Water Commission, the Devils Lake Basin Joint Water Resource Board, and the Red River Joint Water Resource District, to analyze historical water-quality trends in three dissolved major ions, three nutrients, and one dissolved trace element for eight stations in the Devils Lake Basin in North Dakota and to develop an efficient sampling design to monitor the future trends. A multiple-regression model was used to detect and remove streamflow-related variability in constituent concentrations. To separate the natural variability in concentration as a result of variability in streamflow from the variability in concentration as a result of other factors, the base-10 logarithm of daily streamflow was divided into four components-a 5-year streamflow anomaly, an annual streamflow anomaly, a seasonal streamflow anomaly, and a daily streamflow anomaly. The constituent concentrations then were adjusted for streamflow-related variability by removing the 5-year, annual, seasonal, and daily variability. Constituents used for the water-quality trend analysis were evaluated for a step trend to examine the effect of Channel A on water quality in the basin and a linear trend to detect gradual changes with time from January 1980 through September 2003. The fitted upward linear trends for dissolved calcium concentrations during 1980-2003 for two stations were significant. The fitted step trends for dissolved sulfate concentrations for three stations were positive and similar in magnitude. Of the three upward trends, one was significant. The fitted step trends for dissolved chloride concentrations were positive but insignificant. The fitted linear trends for the upstream stations were small and insignificant, but three of the downward trends that occurred during 1980-2003 for the remaining stations were significant. The fitted upward linear trends for dissolved nitrite plus nitrate as nitrogen concentrations during 1987-2003 for two stations were significant. However, concentrations during recent years appear to be lower than those for the 1970s and early 1980s but higher than those for the late 1980s and early 1990s. The fitted downward linear trend for dissolved ammonia concentrations for one station was significant. The fitted linear trends for total phosphorus concentrations for two stations were significant. Upward trends for total phosphorus concentrations occurred from the late 1980s to 2003 for most stations, but a small and insignificant downward trend occurred for one station. Continued monitoring will be needed to determine if the recent trend toward higher dissolved nitrite plus nitrate as nitrogen and total phosphorus concentrations continues in the future. For continued monitoring of water-quality trends in the upper Devils Lake Basin, an efficient sampling design consists of five major-ion, nutrient, and trace-element samples per year at three existing stream stations and at three existing lake stations. This sampling design requires the collection of 15 stream samples and 15 lake samples per year rather than 16 stream samples and 20 lake samples per year as in the 1992-2003 program. Thus, the design would result in a program that is less costly and more efficient than the 1992-2003 program but that still would provide the data needed to monitor water-quality trends in the Devils Lake Basin.
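A minimal sketch of the streamflow decomposition described above, splitting log10 daily flow into 5-year, annual, seasonal, and daily anomalies with centered moving averages; the window choices and the synthetic series are our own assumptions:

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(5)
    idx = pd.date_range("1980-01-01", "2003-09-30", freq="D")
    logq = pd.Series(
        1.0 + 0.5 * np.sin(2 * np.pi * idx.dayofyear / 365.25)
        + rng.normal(0, 0.2, len(idx)),
        index=idx,
    )

    # Successive centered moving averages peel off slower variability, leaving
    # a daily remainder; each anomaly is what the next-faster scale adds.
    five_yr = logq.rolling(365 * 5, center=True, min_periods=365).mean()
    annual  = logq.rolling(365, center=True, min_periods=90).mean() - five_yr
    season  = logq.rolling(30, center=True, min_periods=10).mean() - five_yr - annual
    daily   = logq - five_yr - annual - season
    print(pd.concat({"5yr": five_yr, "annual": annual,
                     "seasonal": season, "daily": daily}, axis=1).describe())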
Mutilating Data and Discarding Variance: The Dangers of Dichotomizing Continuous Variables.
ERIC Educational Resources Information Center
Kroff, Michael W.
This paper reviews issues involved in converting continuous variables to nominal variables to be used in the OVA techniques. The literature dealing with the dangers of dichotomizing continuous variables is reviewed. First, the assumptions invoked by OVA analyses are reviewed in addition to concerns regarding the loss of variance and a reduction in…
Randomized trial of intermittent or continuous amnioinfusion for variable decelerations.
Rinehart, B K; Terrone, D A; Barrow, J H; Isler, C M; Barrilleaux, P S; Roberts, W E
2000-10-01
To determine whether continuous or intermittent bolus amnioinfusion is more effective in relieving variable decelerations. Patients with repetitive variable decelerations were randomized to an intermittent bolus or continuous amnioinfusion. The intermittent bolus infusion group received boluses of 500 mL of normal saline, each over 30 minutes, with boluses repeated if variable decelerations recurred. The continuous infusion group received a bolus infusion of 500 mL of normal saline over 30 minutes and then 3 mL per minute until delivery occurred. The ability of the amnioinfusion to abolish variable decelerations was analyzed, as were maternal demographic and pregnancy outcome variables. Power analysis indicated that 64 patients would be required. Thirty-five patients were randomized to intermittent infusion and 30 to continuous infusion. There were no differences between groups in terms of maternal demographics, gestational age, delivery mode, neonatal outcome, median time to resolution of variable decelerations, or the number of times variable decelerations recurred. The median volume infused in the intermittent infusion group (500 mL) was significantly less than that in the continuous infusion group (905 mL, P =.003). Intermittent bolus amnioinfusion is as effective as continuous infusion in relieving variable decelerations in labor. Further investigation is necessary to determine whether either of these techniques is associated with increased occurrence of rare complications such as cord prolapse or uterine rupture.
NASA Technical Reports Server (NTRS)
Deal, J. H.
1975-01-01
One approach to the problem of simplifying complex nonlinear filtering algorithms is the use of stratified probability approximations, in which the continuous probability density functions of certain random variables are represented by discrete mass approximations. This technique is developed in this paper and used to simplify the filtering algorithms for the optimum receiver for signals corrupted by both additive and multiplicative noise.
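A minimal sketch of the idea for a standard normal density: replace the continuous density with point masses at the conditional means of equally probable strata, so expectations reduce to finite sums; the number of strata is illustrative:

    import numpy as np
    from scipy import stats

    n_strata = 8
    probs = np.linspace(0.0, 1.0, n_strata + 1)
    edges = stats.norm.ppf(np.clip(probs, 1e-9, 1 - 1e-9))   # stratum boundaries
    mass = np.diff(stats.norm.cdf(edges))                    # ~1/n_strata each
    mass /= mass.sum()
    # For the normal, E[X | a < X < b] = (pdf(a) - pdf(b)) / (cdf(b) - cdf(a)).
    points = (stats.norm.pdf(edges[:-1]) - stats.norm.pdf(edges[1:])) / mass

    g = lambda x: x**2                                       # any expectation E[g(X)]
    print("E[X^2] ~", np.sum(mass * g(points)),
          "(exact: 1; the approximation improves with more strata)")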
Continuously-Variable Positive-Mesh Power Transmission
NASA Technical Reports Server (NTRS)
Johnson, J. L.
1982-01-01
A proposed transmission with a continuously-variable speed ratio couples two mechanical trigonometric-function generators. The transmission is expected to handle higher loads than conventional variable-pulley drives; unlike a variable pulley, it maintains positive traction through the entire drive train, with no reliance on friction to transmit power. It is able to vary speed continuously through zero and into reverse. Possible applications exist in instrumentation where drive-train slippage cannot be tolerated.
Continuous variables logic via coupled automata using a DNAzyme cascade with feedback.
Lilienthal, S; Klein, M; Orbach, R; Willner, I; Remacle, F; Levine, R D
2017-03-01
The concentration of molecules can be changed by chemical reactions and thereby offers a continuous readout. Yet computer architecture is cast in textbooks in terms of binary-valued, Boolean variables. To enable reactive chemical systems to compute, we show how, using the Cox interpretation of probability theory, one can transcribe the equations of chemical kinetics as a sequence of coupled logic gates operating on continuous variables. It is discussed how the distinct chemical identity of a molecule allows us to create a common language for chemical kinetics and Boolean logic. Specifically, the logic AND operation is shown to be equivalent to a bimolecular process, while the logic XOR operation represents chemical processes that take place concurrently. The values of the rate constants enter the logic scheme as inputs. By designing a reaction scheme with feedback, we endow the logic gates with a built-in memory, because their output then depends on the input and also on the present state of the system. Technically, such a logic machine is an automaton. We report an experimental realization of three such coupled automata using a DNAzyme multilayer signaling cascade. A simple model verifies analytically that our experimental scheme provides an integrator generating a power series that is third order in time. The model identifies two parameters that govern the kinetics and shows how the initial concentrations of the substrates are the coefficients in the power series.
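A minimal kinetics-as-logic sketch: integrating the rate equation of a bimolecular step A + B -> C shows the continuous analogue of Boolean AND (output appears only when both inputs are present); the rate constant and concentrations are illustrative:

    from scipy.integrate import solve_ivp

    k = 0.5   # rate constant: a tunable "input" of the kinetic gate

    def rhs(t, y):
        a, b, c = y
        rate = k * a * b              # bimolecular step A + B -> C
        return [-rate, -rate, rate]

    # C accumulates only when both A and B start nonzero: a continuous AND.
    for a0, b0 in [(1.0, 1.0), (1.0, 0.0), (0.0, 1.0), (0.0, 0.0)]:
        sol = solve_ivp(rhs, (0.0, 20.0), [a0, b0, 0.0], rtol=1e-8)
        print(f"A0={a0}, B0={b0} -> final C = {sol.y[2, -1]:.2f}")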
Continuous-variable controlled-Z gate using an atomic ensemble
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang Mingfeng; Jiang Nianquan; Jin Qingli
2011-06-15
The continuous-variable controlled-Z gate is a canonical two-mode gate for universal continuous-variable quantum computation. It is considered one of the most fundamental continuous-variable quantum gates. Here we present a scheme for realizing a continuous-variable controlled-Z gate between two optical beams using an atomic ensemble. The gate is performed by simply sending the two beams, propagating in two orthogonal directions, twice through a spin-squeezed atomic medium. Its fidelity can run up to one if the input atomic state is infinitely squeezed. Considering the noise effects due to atomic decoherence and light losses, we show that the observed fidelities of the scheme are still quite high within presently available techniques.
An inverter/controller subsystem optimized for photovoltaic applications
NASA Technical Reports Server (NTRS)
Pickrell, R. L.; Merrill, W. C.; Osullivan, G.
1978-01-01
Conversion of solar array dc power to ac power stimulated the specification, design, and simulation testing of an inverter/controller subsystem tailored to the photovoltaic power source characteristics. This paper discusses the optimization of the inverter/controller design as part of an overall Photovoltaic Power System (PPS) designed for maximum energy extraction from the solar array. The special design requirements for the inverter/controller include: (1) a power system controller (PSC) to control continuously the solar array operating point at the maximum power level based on variable solar insolation and cell temperatures; and (2) an inverter designed for high efficiency at rated load and low losses at light loadings to conserve energy. It must be capable of operating connected to the utility line at a level set by an external controller (PSC).
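A minimal sketch of one way a power system controller can track the array's maximum power point, the classic perturb-and-observe loop; the toy PV curve and step size are our own assumptions, not the paper's controller:

    # Perturb the operating voltage; if power dropped, reverse direction.
    def array_power(v, insolation=1.0):
        # Toy PV curve: current falls off steeply near open-circuit voltage (~40 V).
        i = insolation * max(0.0, 5.0 * (1.0 - (v / 40.0) ** 9))
        return v * i

    v, dv = 20.0, 0.5
    p_prev = array_power(v)
    for _ in range(100):
        v += dv
        p = array_power(v)
        if p < p_prev:      # power dropped: reverse the perturbation direction
            dv = -dv
        p_prev = p
    print(f"operating point ~{v:.1f} V, {p_prev:.1f} W")   # settles near ~31 V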
Testing of a Loop Heat Pipe Subjected to Variable Accelerating Forces
NASA Technical Reports Server (NTRS)
Ku, Jentung; Ottenstein, Laura; Kaya, Tarik; Rogers, Paul; Hoff, Craig
2000-01-01
This paper presents, in viewgraph form, the results for a loop heat pipe that was subjected to variable accelerating forces. The topics include: 1) Summary of LHP (Loop Heat Pipe) Design Parameters; 2) Picture of the LHP; 3) Schematic of Test Setup; 4) Test Configurations; 5) Test Profiles; 6) Overview of Test Results; 7) Start-up; 8) Typical Start-up without Temperature Overshoot; 9) Start-up with a Large Temperature Overshoot; 10) LHP Operation Under Stationary Condition; 11) LHP Operation Under Continuous Acceleration; 12) LHP Operation Under Periodic Acceleration; 13) Effects of Acceleration on Temperature Oscillation and Hysteresis; 14) Temperature Oscillation/Hysteresis vs Spin Rate; and 15) Summary.
Schelvis, Roosmarijn M C; Oude Hengel, Karen M; Burdorf, Alex; Blatter, Birgitte M; Strijk, Jorien E; van der Beek, Allard J
2015-09-01
Occupational health researchers regularly conduct evaluative intervention research for which a randomized controlled trial (RCT) may not be the most appropriate design (eg, effects of policy measures, organizational interventions on work schedules). This article demonstrates the appropriateness of alternative designs for the evaluation of occupational health interventions, which permit causal inferences, formulated along two study design approaches: experimental (stepped-wedge) and observational (propensity scores, instrumental variables, multiple baseline design, interrupted time series, difference-in-difference, and regression discontinuity). For each design, the unique characteristics are presented including the advantages and disadvantages compared to the RCT, illustrated by empirical examples in occupational health. This overview shows that several appropriate alternatives for the RCT design are feasible and available, which may provide sufficiently strong evidence to guide decisions on implementation of interventions in workplaces. Researchers are encouraged to continue exploring these designs and thus contribute to evidence-based occupational health.
NASA Astrophysics Data System (ADS)
Chasalevris, Athanasios; Dohnal, Fadi
2015-02-01
The idea of a journal bearing with variable geometry was developed previously, and its principles of operation were investigated, giving very optimistic theoretical results for the vibration quenching of simple and more complicated rotor-bearing systems during the passage through the first critical speed. The journal bearing with variable geometry is presented in this paper in its final form, together with the detailed design procedure. The bearing was constructed for application in a simple real rotor-bearing system that already exists as an experimental facility. The paper presents details of the manufactured prototype bearing as an experimental continuation of previous works that simulated the operating principle of this journal bearing. The design parameters are discussed thoroughly, with numerical simulation of the fluid film pressure in dependency of the variable fluid film thickness under the operating conditions. The implementation of the variable-geometry bearing in an experimental rotor-bearing system is outlined, and various measurements highlight the efficiency of the proposed bearing element in vibration quenching during the passage through resonance. The inspiration for the idea is the fact that altering the fluid film stiffness and damping characteristics during the passage through resonance quenches vibration; this alteration is achieved by introducing an additional fluid film thickness through the passive displacement of the lower half-bearing part.
An integrated optimum design approach for high speed prop rotors
NASA Technical Reports Server (NTRS)
Chattopadhyay, Aditi; Mccarthy, Thomas R.
1995-01-01
The objective is to develop an optimization procedure for high-speed and civil tilt-rotors by coupling all of the necessary disciplines within a closed-loop optimization procedure. Both simplified and comprehensive analysis codes are used for the aerodynamic analyses. The structural properties are calculated using in-house developed algorithms for both isotropic and composite box beam sections. There are four major objectives of this study. (1) Aerodynamic optimization: The effects of blade aerodynamic characteristics on cruise and hover performance of prop-rotor aircraft are investigated using the classical blade element momentum approach with corrections for the high lift capability of rotors/propellers. (2) Coupled aerodynamic/structures optimization: A multilevel hybrid optimization technique is developed for the design of prop-rotor aircraft. The design problem is decomposed into a level for improved aerodynamics with continuous design variables and a level with discrete variables to investigate composite tailoring. The aerodynamic analysis is based on that developed in objective 1 and the structural analysis is performed using an in-house code which models a composite box beam. The results are compared to both a reference rotor and the optimum rotor found in the purely aerodynamic formulation. (3) Multipoint optimization: The multilevel optimization procedure of objective 2 is extended to a multipoint design problem. Hover, cruise, and take-off are the three flight conditions simultaneously maximized. (4) Coupled rotor/wing optimization: Using the comprehensive rotary wing code CAMRAD, an optimization procedure is developed for the coupled rotor/wing performance in high speed tilt-rotor aircraft. The developed procedure contains design variables which define the rotor and wing planforms.
Stronger steerability criterion for more uncertain continuous-variable systems
NASA Astrophysics Data System (ADS)
Chowdhury, Priyanka; Pramanik, Tanumoy; Majumdar, A. S.
2015-10-01
We derive a fine-grained uncertainty relation for the measurement of two incompatible observables on a single quantum system of continuous variables, and show that continuous-variable systems are more uncertain than discrete-variable systems. Using the derived fine-grained uncertainty relation, we formulate a stronger steering criterion that is able to reveal the steerability of NOON states that has hitherto not been possible using other criteria. We further obtain a monogamy relation for our steering inequality which leads to an, in principle, improved lower bound on the secret key rate of a one-sided device independent quantum key distribution protocol for continuous variables.
Robustness of quantum key distribution with discrete and continuous variables to channel noise
NASA Astrophysics Data System (ADS)
Lasota, Mikołaj; Filip, Radim; Usenko, Vladyslav C.
2017-06-01
We study the robustness of quantum key distribution protocols using discrete or continuous variables to channel noise. We introduce a model of such noise, based on coupling of the signal to a thermal reservoir (typical for continuous-variable quantum key distribution), to the discrete-variable case. We then compare the bounds on tolerable channel noise between the two kinds of protocols using the same noise parametrization, assuming an implementation that is otherwise perfect. The results show that continuous-variable protocols can exhibit similar robustness to channel noise when the transmittance of the channel is relatively high. However, for strong loss, discrete-variable protocols are superior and can overcome even the infinite-squeezing continuous-variable protocol while using limited nonclassical resources. The requirement on the probability of single-photon production that a practical photon source would have to fulfill to demonstrate this superiority is feasible, thanks to the recent rapid development in this field.
NASA Astrophysics Data System (ADS)
Sadegh, M.; Moftakhari, H.; AghaKouchak, A.
2017-12-01
Many natural hazards are driven by multiple forcing variables, and concurrent or consecutive extreme events significantly increase the risk of infrastructure/system failure. It is common practice to use a univariate analysis, based upon a perceived ruling driver, to estimate design quantiles and/or return periods of extreme events. A multivariate analysis, however, permits modeling the simultaneous occurrence of multiple forcing variables. In this presentation, we introduce the Multi-hazard Assessment and Scenario Toolbox (MhAST), which comprehensively analyzes marginal and joint probability distributions of natural hazards. MhAST also offers a wide range of return-period and design-level scenarios and their likelihoods. The contribution of this study is four-fold: 1. comprehensive analysis of the marginal and joint probability of multiple drivers through 17 continuous distributions and 26 copulas; 2. multiple scenario analysis of concurrent extremes based upon the most likely joint occurrence, one ruling variable, and weighted random sampling of joint occurrences with similar exceedance probabilities; 3. weighted average scenario analysis based on an expected event; and 4. uncertainty analysis of the most likely joint occurrence scenario using a Bayesian framework.
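A minimal sketch of the marginal-plus-copula construction: two synthetic drivers with assumed marginals coupled by a Gaussian copula, comparing joint exceedance against the independence assumption; the distributions, parameters, and 1% level are illustrative:

    import numpy as np
    from scipy import stats

    # Gaussian copula (rho = 0.6) with gamma and Gumbel marginals for two
    # hypothetical drivers, e.g., river discharge and coastal water level.
    rng = np.random.default_rng(6)
    u = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], 5000)
    discharge = stats.gamma(3, scale=100).ppf(stats.norm.cdf(u[:, 0]))
    sea_level = stats.gumbel_r(1.0, 0.3).ppf(stats.norm.cdf(u[:, 1]))

    q_d = np.quantile(discharge, 0.99)   # univariate 1%-exceedance design levels
    q_s = np.quantile(sea_level, 0.99)
    joint = np.mean((discharge > q_d) & (sea_level > q_s))
    print(f"P(both exceed their 1% levels) = {joint:.4f} "
          f"(vs {0.01 * 0.01:.4f} if treated as independent)")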
The use of structural analysis to develop antecedent-based interventions for students with autism.
Stichter, Janine P; Randolph, Jena K; Kay, Denise; Gage, Nicholas
2009-06-01
Evidence continues to maintain that the use of antecedent variables (i.e., instructional practices and environmental characteristics) increases prosocial and adaptive behaviors of students with disabilities (e.g., Kern et al. in J Appl Behav Anal 27(1):7-19, 1994; Stichter et al. in Behav Disord 30:401-418, 2005). This study extends the literature by systematically utilizing practitioner-implemented structural analyses within school settings to determine antecedent variables affecting the prosocial behavior of students with autism. Optimal antecedents were combined into intervention packages and assessed utilizing a multiple baseline design across settings. All three students demonstrated improvement across all three settings. Rates of engagement and social interaction were obtained from classroom peers to serve as benchmark data. Findings indicate that practitioners can implement structural analyses and design corresponding interventions for students with ASD within educational settings.
Conceptual design study of an Improved Gas Turbine (IGT) powertrain
NASA Technical Reports Server (NTRS)
Johnson, R. A.
1979-01-01
Design concepts for an improved automotive gas turbine powertrain are discussed. Twenty percent fuel economy improvement (over 1976), competitive costs (initial and life cycle), high reliability/life, low emissions, and noise/safety compliance were among the factors considered. The powertrain selected consists of a two shaft gas turbine engine with variable geometry aerodynamic components and a single disk rotating regenerator. The regenerator disk, gasifier turbine rotor, and several hot section flowpath parts are ceramic. The powertrain utilizes a conventional automatic transmission. The closest competitor was a single shaft turbine engine matched to a continuously variable transmission (CVT). Both candidate powertrain systems were found to be similar in many respects; however, the CVT represented a significant increase in development cost, technical risk, and production start-up costs over the conventional automatic transmission. Installation of the gas turbine powertrain was investigated for a transverse mounted, front wheel drive vehicle.
Frye, Victoria; Henny, Kirk; Bonner, Sebastian; Williams, Kim; Bond, Keosha T; Hoover, Donald R; Lucy, Debbie; Greene, Emily; Koblin, Beryl A
2013-01-01
In the United States, heterosexual transmission is the second leading cause of HIV/AIDS, and two-thirds of all heterosexually acquired cases diagnosed between 2005 and 2008 occurred among African-Americans. Few HIV prevention interventions have been designed specifically for African-American heterosexual men not seeking clinical treatment. Here we report results of a single-arm intervention trial of a theory-based HIV prevention intervention designed to increase condom use, reduce concurrent partnering and increase HIV testing among heterosexually active African-American men living in high HIV prevalence areas of New York City. We tested our hypothesis using McNemar discordant pairs exact test for binary variables and paired t-tests for continuous variables. We observed statistically significant declines in mean number of total and new female partners, unprotected sex partners, and partner concurrency in both primary and nonprimary sex partnerships between baseline and 3 months postintervention.
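The two hypothesis tests named in the abstract are standard and simple to reproduce; the sketch below runs both with SciPy on made-up numbers (the discordant-pair counts and partner counts are illustrative, not the trial's data).

```python
from scipy import stats

# McNemar's exact test uses only the discordant pairs: b participants
# positive at baseline but not at follow-up, c the reverse.
b, c = 21, 6  # illustrative counts
mcnemar_p = stats.binomtest(b, b + c, p=0.5).pvalue
print(f"McNemar exact p = {mcnemar_p:.4f}")

# Paired t-test for a continuous outcome, e.g., number of female partners.
baseline = [4, 2, 5, 3, 6, 2, 4, 3]
followup = [2, 1, 3, 3, 4, 1, 2, 2]
t_stat, p_val = stats.ttest_rel(baseline, followup)
print(f"paired t = {t_stat:.2f}, p = {p_val:.4f}")
```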
Multidisciplinary Optimization of a Transport Aircraft Wing using Particle Swarm Optimization
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw; Venter, Gerhard
2002-01-01
The purpose of this paper is to demonstrate the application of particle swarm optimization to a realistic multidisciplinary optimization test problem. The paper's new contributions to multidisciplinary optimization are the application of a new algorithm for dealing with the unique challenges associated with multidisciplinary optimization problems, and recommendations as to the utility of the algorithm in future multidisciplinary optimization applications. The selected example is a bi-level optimization problem that exhibits severe numerical noise and has a combination of continuous and truly discrete design variables. The use of traditional gradient-based optimization algorithms is thus not practical. The numerical results presented indicate that the particle swarm optimization algorithm is able to reliably find the optimum design for the problem presented here. The algorithm is capable of dealing with the unique challenges posed by multidisciplinary optimization as well as the numerical noise and truly discrete variables present in the current example problem.
Correlates of Injury-forced Work Reduction for Massage Therapists and Bodywork Practitioners.
Blau, Gary; Monos, Christopher; Boyer, Ed; Davis, Kathleen; Flanagan, Richard; Lopez, Andrea; Tatum, Donna S
2013-01-01
Injury-forced work reduction (IFWR) has been acknowledged as an all-too-common occurrence for massage therapists and bodywork practitioners (M & Bs). However, little prior research has specifically investigated demographic, work attitude, and perceptual correlates of IFWR among M & Bs. The objective was to test two hypotheses, H1 and H2. H1 is that the accumulated cost variables set (e.g., accumulated costs, continuing education costs) will account for a significant amount of IFWR variance beyond the control/demographic (e.g., social desirability response bias, gender, years in practice, highest education level) and work attitude/perception (e.g., job satisfaction, affective occupation commitment, occupation identification, limited occupation alternatives) variables sets. H2 is that the two exhaustion variables (i.e., physical exhaustion, work exhaustion) set will account for significant IFWR variance beyond the control/demographic, work attitude/perception, and accumulated cost variables sets. An online survey sample of 2,079 complete-data M & Bs was collected. Stepwise regression analysis was used to test the study hypotheses. The research design first controlled for the control/demographic (Step 1) and work attitude/perception (Step 2) variables sets, before testing the successive incremental impact of two further variable sets, accumulated costs (Step 3) and exhaustion variables (Step 4), in explaining IFWR. Results supported both study hypotheses: the accumulated cost variables set (H1) and the exhaustion variables set (H2) each significantly explained IFWR after the control/demographic and work attitude/perception variables sets. The most important correlate for explaining IFWR was higher physical exhaustion, but work exhaustion was also significant. It is not just physical "wear and tear", but also "mental fatigue", that can lead to IFWR for M & Bs. Being female, having more years in practice, and having higher continuing education costs were also significant correlates of IFWR. Lower overall levels of work exhaustion, physical exhaustion, and IFWR were found in the present sample. However, since both types of exhaustion significantly and positively impact IFWR, taking sufficient time between massages and, if possible, varying one's massage technique to replenish one's physical and mental energy seem important. Failure to take required continuing education units, due to high costs, also increases the risk for IFWR. Study limitations and future research issues are discussed.
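The hierarchical (blockwise) regression logic described above is easy to illustrate with ordinary least squares: enter each variable set in turn and report the incremental R-squared. The sketch below uses synthetic blocks; the variable names are stand-ins for the study's sets, not its data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Illustrative predictor blocks (stand-ins for the study's variable sets).
demog = rng.normal(size=(n, 3))       # Step 1: control/demographic
attitude = rng.normal(size=(n, 4))    # Step 2: work attitude/perception
costs = rng.normal(size=(n, 2))       # Step 3: accumulated costs
exhaustion = rng.normal(size=(n, 2))  # Step 4: physical/work exhaustion
ifwr = (0.5 * exhaustion[:, 0] + 0.3 * costs[:, 0]
        + rng.normal(size=n))         # synthetic outcome

def r_squared(X, y):
    """R^2 of an OLS fit with intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

blocks = [demog, attitude, costs, exhaustion]
X, prev = None, 0.0
for step, block in enumerate(blocks, start=1):
    X = block if X is None else np.column_stack([X, block])
    r2 = r_squared(X, ifwr)
    print(f"Step {step}: R^2 = {r2:.3f}, increment = {r2 - prev:.3f}")
    prev = r2
```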
NASA Astrophysics Data System (ADS)
Brunner, Manuela Irene; Seibert, Jan; Favre, Anne-Catherine
2018-02-01
Traditional design flood estimation approaches have focused on peak discharges and have often neglected other hydrograph characteristics such as hydrograph volume and shape. Synthetic design hydrograph estimation procedures overcome this deficiency by jointly considering peak discharge, hydrograph volume, and shape. Such procedures have recently been extended to allow for the consideration of process variability within a catchment by a flood-type specific construction of design hydrographs. However, they depend on observed runoff time series and are not directly applicable in ungauged catchments where such series are not available. To obtain reliable flood estimates, there is a need for an approach that allows for the consideration of process variability in the construction of synthetic design hydrographs in ungauged catchments. In this study, we therefore propose an approach that combines a bivariate index flood approach with event-type specific synthetic design hydrograph construction. First, regions of similar flood reactivity are delineated and a classification rule that enables the assignment of ungauged catchments to one of these reactivity regions is established. Second, event-type specific synthetic design hydrographs are constructed using the pooled data divided by event type from the corresponding reactivity region in a bivariate index flood procedure. The approach was tested and validated on a dataset of 163 Swiss catchments. The results indicated that 1) random forest is a suitable classification model for the assignment of an ungauged catchment to one of the reactivity regions, 2) the combination of a bivariate index flood approach and event-type specific synthetic design hydrograph construction enables the consideration of event types in ungauged catchments, and 3) the use of probabilistic class memberships in regional synthetic design hydrograph construction helps to alleviate the problem of misclassification. Event-type specific synthetic design hydrograph sets enable the inclusion of process variability into design flood estimation and can be used as a compromise between single best estimate synthetic design hydrographs and continuous simulation studies.
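As a sketch of the classification step only, assuming scikit-learn and synthetic catchment descriptors: a random forest is fit on gauged catchments, and for an ungauged catchment the probabilistic class memberships, rather than a hard label, are retrieved, in line with the third finding above.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
# Stand-in catchment descriptors (e.g., area, elevation, mean rainfall)
# and reactivity-region labels for 163 gauged catchments.
X = rng.normal(size=(163, 5))
y = rng.integers(0, 3, size=163)  # three hypothetical reactivity regions

clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)

# For an ungauged catchment, keep the full membership probabilities so
# misclassification is down-weighted rather than committed to.
x_new = rng.normal(size=(1, 5))
probs = clf.predict_proba(x_new)[0]
print("region membership probabilities:", np.round(probs, 2))
# A regional design hydrograph could then be a probability-weighted
# combination of the region-specific estimates.
```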
Method and apparatus for executing an asynchronous clutch-to-clutch shift in a hybrid transmission
Demirovic, Besim; Gupta, Pinaki; Kaminsky, Lawrence A.; Naqvi, Ali K.; Heap, Anthony H.; Sah, Jy-Jen F.
2014-08-12
A hybrid transmission includes first and second electric machines. A method for operating the hybrid transmission in response to a command to execute a shift from an initial continuously variable mode to a target continuously variable mode includes increasing torque of an oncoming clutch associated with operating in the target continuously variable mode and correspondingly decreasing a torque of an off-going clutch associated with operating in the initial continuously variable mode. Upon deactivation of the off-going clutch, torque outputs of the first and second electric machines and the torque of the oncoming clutch are controlled to synchronize the oncoming clutch. Upon synchronization of the oncoming clutch, the torque for the oncoming clutch is increased and the transmission is operated in the target continuously variable mode.
Kopp, Blaine S.; Nielsen, Martha; Glisic, Dejan; Neckles, Hilary A.
2009-01-01
This report documents results of pilot tests of a protocol for monitoring estuarine nutrient enrichment for the Vital Signs Monitoring Program of the National Park Service Northeast Coastal and Barrier Network. Data collected from four parks during protocol development in 2003-06 are presented: Gateway National Recreation Area, Colonial National Historic Park, Fire Island National Seashore, and Assateague Island National Seashore. The monitoring approach incorporates several spatial and temporal designs to address questions at a hierarchy of scales. Indicators of estuarine response to nutrient enrichment were sampled using a probability design within park estuaries during a late-summer index period. Monitoring variables consisted of dissolved-oxygen concentration, chlorophyll a concentration, water temperature, salinity, attenuation of downwelling photosynthetically available radiation (PAR), and turbidity. The statistical sampling design allowed the condition of unsampled locations to be inferred from the distribution of data from a set of randomly positioned "probability" stations. A subset of sampling stations was sampled repeatedly during the index period, and stations were not rerandomized in subsequent years. These "trend stations" allowed us to examine temporal variability within the index period, and to improve the sensitivity of the monitoring protocol to detecting change through time. Additionally, one index site in each park was equipped for continuous monitoring throughout the index period. Thus, the protocol includes elements of probabilistic and targeted spatial sampling, and the temporal intensity ranges from snapshot assessments to continuous monitoring.
The whole earth telescope - A new astronomical instrument
NASA Technical Reports Server (NTRS)
Nather, R. E.; Winget, D. E.; Clemens, J. C.; Hansen, C. J.; Hine, B. P.
1990-01-01
A new multimirror ground-based telescope for time-series photometry of rapid variable stars, designed to minimize or eliminate gaps in the brightness record caused by the rotation of the earth, is described. A sequence of existing telescopes distributed in longitude, coordinated from a single control center, is used to measure designated target stars so long as they are in darkness. Data are returned by electronic mail to the control center, where they are analyzed in real time. This instrument is the first to provide data of continuity and quality that permit true high-resolution power spectroscopy of pulsating white dwarf stars.
Real-time control systems: feedback, scheduling and robustness
NASA Astrophysics Data System (ADS)
Simon, Daniel; Seuret, Alexandre; Sename, Olivier
2017-08-01
The efficient control of real-time distributed systems, where continuous components are governed through digital devices and communication networks, needs a careful examination of the constraints arising from the different involved domains inside co-design approaches. Thanks to the robustness of feedback control, both new control methodologies and slackened real-time scheduling schemes are proposed beyond the frontiers between these traditionally separated fields. A methodology to design robust aperiodic controllers is provided, where the sampling interval is considered as a control variable of the system. Promising experimental results are provided to show the feasibility and robustness of the approach.
Kinematic Methods of Designing Free Form Shells
NASA Astrophysics Data System (ADS)
Korotkiy, V. A.; Khmarova, L. I.
2017-11-01
The geometrical shell model is formed in light of specified requirements expressed through surface parameters. The shell is modelled using the kinematic method, according to which the shell is formed as a continuous one-parameter set of curves. The authors offer a kinematic method based on the use of second-order curves with a variable eccentricity as a form-making element. Additional guiding ruled surfaces are used to control the designed surface form. The authors developed a software application that plots a second-order curve specified by a random set of five coplanar points and tangents.
ERIC Educational Resources Information Center
Stanford Univ., CA. School Mathematics Study Group.
This text is the fourth of five in the Secondary School Advanced Mathematics (SSAM) series which was designed to meet the needs of students who have completed the Secondary School Mathematics (SSM) program, and wish to continue their study of mathematics. This text begins with a brief discussion of quadratic equations which motivates the…
NASA Technical Reports Server (NTRS)
Nguyen, Nhan; Kaul, Upender; Lebofsky, Sonia; Ting, Eric; Chaparro, Daniel; Urnes, James
2015-01-01
This paper summarizes the recent development of an adaptive aeroelastic wing shaping control technology called variable camber continuous trailing edge flap (VCCTEF). As wing flexibility increases, aeroelastic interactions with aerodynamic forces and moments become an increasingly important consideration in aircraft design and aerodynamic performance. Furthermore, aeroelastic interactions with flight dynamics can result in issues with vehicle stability and control. The initial VCCTEF concept was developed in 2010 by NASA under a NASA Innovation Fund study entitled "Elastically Shaped Future Air Vehicle Concept," which showed that highly flexible wing aerodynamic surfaces can be elastically shaped in-flight by active control of wing twist and bending deflection in order to optimize the spanwise lift distribution for drag reduction. A collaboration between NASA and Boeing Research & Technology was subsequently funded by NASA from 2012 to 2014 to further develop the VCCTEF concept. This paper summarizes some of the key research areas conducted by NASA during the collaboration with Boeing Research and Technology. These research areas include VCCTEF design concepts, aerodynamic analysis of VCCTEF camber shapes, aerodynamic optimization of lift distribution for drag minimization, wind tunnel test results for cruise and high-lift configurations, flutter analysis and suppression control of flexible wing aircraft, and multi-objective flight control for adaptive aeroelastic wing shaping control.
NASA Astrophysics Data System (ADS)
Ye, Hong-Ling; Wang, Wei-Wei; Chen, Ning; Sui, Yun-Kang
2017-10-01
The purpose of the present work is to study the buckling problem with plate/shell topology optimization of orthotropic material. A model of buckling topology optimization is established based on the independent, continuous, and mapping method, which considers structural mass as the objective and buckling critical loads as constraints. Firstly, the composite exponential function (CEF) and power function (PF) are introduced as filter functions to recognize the element mass, the element stiffness matrix, and the element geometric stiffness matrix. The filter functions of the orthotropic material stiffness are deduced. These filter functions are then applied in the differential-equation formulation of the buckling topology optimization to analyze the design sensitivity. Furthermore, the buckling constraints are approximately expressed as explicit functions with respect to the design variables based on the first-order Taylor expansion. The objective function is standardized based on the second-order Taylor expansion. Therefore, the optimization model is translated into a quadratic program. Finally, the dual sequence quadratic programming (DSQP) algorithm and the globally convergent method of moving asymptotes algorithm, each with the two different filter functions (CEF and PF), are applied to solve the optimal model. Three numerical examples show that DSQP&CEF has the best performance in view of structural mass and discreteness.
Design and control of a variable geometry turbofan with an independently modulated third stream
NASA Astrophysics Data System (ADS)
Simmons, Ronald J.
Emerging 21st century military missions task engines to deliver the fuel efficiency of a high bypass turbofan while retaining the ability to produce the high specific thrust of a low bypass turbofan. This study explores the possibility of satisfying such competing demands by adding a second independently modulated bypass stream to the basic turbofan architecture. This third stream can be used for a variety of purposes, including providing a cool heat sink for dissipating aircraft heat loads, cooling turbine cooling air, and providing a readily available stream of constant-pressure-ratio air for lift augmentation. Furthermore, by modulating airflow to the second and third streams, it is possible to continuously match the engine's airflow demand to the inlet's airflow supply, thereby reducing spillage and increasing propulsive efficiency. This research begins with a historical perspective of variable cycle engines and shows a logical progression to the proposed architectures. A novel method for investigating optimal performance is then presented, which determines the most favorable on-design variable geometry settings, the most beneficial moment to terminate flow holding, and an optimal scheduling of variable features for fuel-efficient off-design operation. Mission analysis conducted across the three candidate missions verifies that these three-stream variable cycles can deliver fuel savings in excess of 30% relative to a year-2000 reference turbofan. This research concludes by evaluating the relative impact of each variable technology on the performance of adaptive engine architectures. The most promising technologies include modulated turbine cooling air, variable high pressure turbine inlet area, and variable third stream nozzle throat area. With just these few features it is possible to obtain nearly optimal performance, including 90% or more of the potential fuel savings, with far fewer variable features than are available in the study engine. It is abundantly clear that three-stream variable architectures can significantly outperform existing two-stream turbofans in both fuel efficiency and at the vehicle system level, with only a modest increase in complexity and weight. Such engine architectures should be strongly considered for future military applications.
Long-distance continuous-variable quantum key distribution by controlling excess noise
NASA Astrophysics Data System (ADS)
Huang, Duan; Huang, Peng; Lin, Dakai; Zeng, Guihua
2016-01-01
Quantum cryptography founded on the laws of physics could revolutionize the way in which communication information is protected. Significant progress in long-distance quantum key distribution based on discrete variables has made secure quantum communication in real-world conditions available. However, the alternative approach implemented with continuous variables has not yet reached secure distances beyond 100 km. Here, we overcome the previous range limitation by controlling system excess noise and report such a long-distance continuous-variable quantum key distribution experiment. Our result paves the road to large-scale secure quantum communication with continuous variables and serves as a stepping stone in the quest for a quantum network. PMID:26758727
NASA Technical Reports Server (NTRS)
Venter, Gerhard; Sobieszczanski-Sobieski, Jaroslaw
2002-01-01
The purpose of this paper is to show how the search algorithm known as particle swarm optimization performs. Here, particle swarm optimization is applied to structural design problems, but the method has a much wider range of possible applications. The paper's new contributions are improvements to the particle swarm optimization algorithm and conclusions and recommendations as to the utility of the algorithm. Results of numerical experiments for both continuous and discrete applications are presented in the paper. The results indicate that the particle swarm optimization algorithm does locate the constrained minimum design in continuous applications with very good precision, albeit at a much higher computational cost than that of a typical gradient-based optimizer. However, the true potential of particle swarm optimization is primarily in applications with discrete and/or discontinuous functions and variables. Additionally, particle swarm optimization has the potential of efficient computation with very large numbers of concurrently operating processors.
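For readers unfamiliar with the method, a minimal global-best particle swarm optimizer looks roughly like the sketch below; the inertia and acceleration constants are common textbook defaults, not this paper's improved variant. Because the update uses no gradients, noise and discontinuities in the objective are tolerated.

```python
import numpy as np

def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize f over box bounds with a basic global-best particle swarm."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.apply_along_axis(f, 1, x)
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # Velocity update: inertia + pull toward personal and global bests.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        fx = np.apply_along_axis(f, 1, x)
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

# Noisy, non-smooth test function: gradient-based methods struggle here.
rosen_noisy = lambda z: ((1 - z[0]) ** 2 + 100 * (z[1] - z[0] ** 2) ** 2
                         + 0.01 * abs(np.sin(50 * z[0])))
best, fbest = pso(rosen_noisy, bounds=[(-2, 2), (-1, 3)])
print(best, fbest)
```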
ERIC Educational Resources Information Center
Bauer, Daniel J.; Curran, Patrick J.
2004-01-01
Structural equation mixture modeling (SEMM) integrates continuous and discrete latent variable models. Drawing on prior research on the relationships between continuous and discrete latent variable models, the authors identify 3 conditions that may lead to the estimation of spurious latent classes in SEMM: misspecification of the structural model,…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Araujo, Jose Adroalado de
1974-05-15
The paper deals with the continuous precipitation of ammonium diuranate (ADU) with a high degree of chemical purity from uranyl nitrate solutions, using 1.2 and 2.4 ammonium hydroxide solutions and gaseous NH{sub 3} as precipitating agents. The precipitations were carried out in a continuous procedure with one and two stages. The variables studied were the NH{sub 4}OH solution concentration, the ADU precipitation curve, the flow rate of reactants, the temperature of the precipitation, the pH of the suspended ADU, and the ammonium diuranate filterability. The experimental work performed and the data obtained permitted the design of a chemical reactor for continuous nuclear-grade ADU precipitation at the Chemical Engineering Department of the Atomic Energy Institute of Sao Paulo.
Why continuous simulation? The role of antecedent moisture in design flood estimation
NASA Astrophysics Data System (ADS)
Pathiraja, S.; Westra, S.; Sharma, A.
2012-06-01
Continuous simulation for design flood estimation is increasingly becoming a viable alternative to traditional event-based methods. The advantage of continuous simulation approaches is that the catchment moisture state prior to the flood-producing rainfall event is implicitly incorporated within the modeling framework, provided the model has been calibrated and validated to produce reasonable simulations. This contrasts with event-based models in which both information about the expected sequence of rainfall and evaporation preceding the flood-producing rainfall event, as well as catchment storage and infiltration properties, are commonly pooled together into a single set of "loss" parameters which require adjustment through the process of calibration. To identify the importance of accounting for antecedent moisture in flood modeling, this paper uses a continuous rainfall-runoff model calibrated to 45 catchments in the Murray-Darling Basin in Australia. Flood peaks derived using the historical daily rainfall record are compared with those derived using resampled daily rainfall, for which the sequencing of wet and dry days preceding the heavy rainfall event is removed. The analysis shows that there is a consistent underestimation of the design flood events when antecedent moisture is not properly simulated, which can be as much as 30% when only 1 or 2 days of antecedent rainfall are considered, compared to 5% when this is extended to 60 days of prior rainfall. These results show that, in general, it is necessary to consider both short-term memory in rainfall associated with synoptic scale dependence, as well as longer-term memory at seasonal or longer time scale variability in order to obtain accurate design flood estimates.
Violation of Bell's Inequality Using Continuous Variable Measurements
NASA Astrophysics Data System (ADS)
Thearle, Oliver; Janousek, Jiri; Armstrong, Seiji; Hosseini, Sara; Schünemann Mraz, Melanie; Assad, Syed; Symul, Thomas; James, Matthew R.; Huntington, Elanor; Ralph, Timothy C.; Lam, Ping Koy
2018-01-01
A Bell inequality is a fundamental test to rule out local hidden variable model descriptions of correlations between two physically separated systems. There have been a number of experiments in which a Bell inequality has been violated using discrete-variable systems. We demonstrate a violation of Bell's inequality using continuous variable quadrature measurements. By creating a four-mode entangled state with homodyne detection, we recorded a clear violation with a Bell value of B = 2.31 ± 0.02. This opens new possibilities for using continuous variable states for device-independent quantum protocols.
Phase-noise limitations in continuous-variable quantum key distribution with homodyne detection
NASA Astrophysics Data System (ADS)
Corvaja, Roberto
2017-02-01
In continuous-variable quantum key distribution with coherent states, the advantage of performing the detection using standard telecom components is counterbalanced by the lack of a stable phase reference in homodyne detection, due to the complexity of optical phase-locking circuits and to the unavoidable phase noise of lasers, which introduces a degradation of the achievable secure key rate. Pilot-assisted phase-noise estimation and postdetection compensation techniques are used to implement a protocol with coherent states where a local laser is employed that is not locked to the received signal, and a postdetection phase correction is applied instead. Here the reduction of the secure key rate caused by the laser phase noise is analytically evaluated for both individual and collective attacks, and a scheme of pilot-assisted phase estimation is proposed, outlining the trade-off in the system design between phase noise and spectral efficiency. The optimal modulation variance as a function of the amount of phase noise is derived.
Oğuz, Yüksel; Güney, İrfan; Çalık, Hüseyin
2013-01-01
The control strategy and design of an AC/DC/AC IGBT-PWM power converter for PMSG-based variable-speed wind energy conversion systems (VSWECS) operating in grid/load-connected mode are presented. VSWECS consists of a PMSG connected to an AC-DC IGBT-based PWM rectifier and a DC/AC IGBT-based PWM inverter with an LCL filter. In VSWECS, the AC/DC/AC power converter is employed to convert the variable frequency, variable speed generator output to the fixed frequency, fixed voltage grid. The DC/AC power conversion is carried out using an adaptive neurofuzzy controlled inverter located at the output of the controlled AC/DC IGBT-based PWM rectifier. This study focuses on the dynamic performance and power quality of the proposed power converter connected to the grid/load by the output LCL filter. Dynamic modeling and control of the VSWECS with the proposed power converter is performed using MATLAB/Simulink. Simulation results show that the output voltage, power, and frequency of VSWECS reach desirable operating values in a very short time. In addition, when the PMSG-based VSWECS operates continuously with a 4.5 kHz switching frequency, the THD of the voltage at the load terminal is 0.00672%. PMID:24453905
Temporal variability of the Atlantic meridional overturning circulation at 26.5 degrees N.
Cunningham, Stuart A; Kanzow, Torsten; Rayner, Darren; Baringer, Molly O; Johns, William E; Marotzke, Jochem; Longworth, Hannah R; Grant, Elizabeth M; Hirschi, Joël J-M; Beal, Lisa M; Meinen, Christopher S; Bryden, Harry L
2007-08-17
The vigor of Atlantic meridional overturning circulation (MOC) is thought to be vulnerable to global warming, but its short-term temporal variability is unknown, so changes inferred from sparse observations on the decadal time scale of recent climate change are uncertain. We combine continuous measurements of the MOC (beginning in 2004) using the purposefully designed transatlantic Rapid Climate Change array of moored instruments deployed along 26.5 degrees N, with time series of Gulf Stream transport and surface-layer Ekman transport, to quantify its intra-annual variability. The year-long average overturning is 18.7 +/- 5.6 sverdrups (Sv) (range: 4.0 to 34.9 Sv, where 1 Sv = a flow of ocean water of 10^6 cubic meters per second). Interannual changes in the overturning can be monitored with a resolution of 1.5 Sv.
Maximum-entropy probability distributions under Lp-norm constraints
NASA Technical Reports Server (NTRS)
Dolinar, S.
1991-01-01
Continuous probability density functions and discrete probability mass functions are tabulated which maximize the differential entropy or absolute entropy, respectively, among all probability distributions with a given Lp norm (i.e., a given pth absolute moment when p is a finite integer) and unconstrained or constrained value set. Expressions for the maximum entropy are evaluated as functions of the Lp norm. The most interesting results are obtained and plotted for unconstrained (real valued) continuous random variables and for integer valued discrete random variables. The maximum entropy expressions are obtained in closed form for unconstrained continuous random variables, and in this case there is a simple straight line relationship between the maximum differential entropy and the logarithm of the Lp norm. Corresponding expressions for arbitrary discrete and constrained continuous random variables are given parametrically; closed form expressions are available only for special cases. However, simpler alternative bounds on the maximum entropy of integer valued discrete random variables are obtained by applying the differential entropy results to continuous random variables which approximate the integer valued random variables in a natural manner. All the results are presented in an integrated framework that includes continuous and discrete random variables, constraints on the permissible value set, and all possible values of p. Understanding such as this is useful in evaluating the performance of data compression schemes.
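The straight-line relationship for unconstrained continuous random variables can be checked numerically: the maximum-entropy density with a fixed pth absolute moment is the generalized normal, for which E|X|^p = scale^p / p under SciPy's gennorm parameterization (an assumption worth verifying against the docs). The differential entropy minus the log of the Lp norm should then be constant across scales.

```python
import numpy as np
from scipy import stats

p = 3.0  # order of the Lp norm; the gennorm shape parameter equals p
for scale in [0.5, 1.0, 2.0, 4.0]:
    h = stats.gennorm.entropy(p, scale=scale)  # differential entropy
    # E|X|^p = scale^p / p for gennorm, so the Lp norm is:
    lp = scale * p ** (-1.0 / p)
    print(f"scale={scale:4.1f}  h={h:.4f}  h - ln(Lp) = {h - np.log(lp):.4f}")
# The last column is constant: the maximum differential entropy is linear
# in the logarithm of the Lp norm, with unit slope.
```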
NASA Astrophysics Data System (ADS)
Henderson, Charles; Dancy, Melissa; Niewiadomska-Bugaj, Magdalena
2012-12-01
During the fall of 2008 a web survey, designed to collect information about pedagogical knowledge and practices, was completed by a representative sample of 722 physics faculty across the United States (50.3% response rate). This paper presents partial results to describe how 20 potential predictor variables correlate with faculty knowledge about and use of research-based instructional strategies (RBIS). The innovation-decision process was conceived of in terms of four stages: knowledge versus no knowledge, trial versus no trial, continuation versus discontinuation, and high versus low use. The largest losses occur at the continuation stage, with approximately 1/3 of faculty discontinuing use of all RBIS after trying one or more of these strategies. Nine of the predictor variables were statistically significant for at least one of these stages when controlling for other variables. Knowledge and/or use of RBIS are significantly correlated with reading teaching-related journals, attending talks and workshops related to teaching, attending the physics and astronomy new faculty workshop, having an interest in using more RBIS, being female, being satisfied with meeting instructional goals, and having a permanent, full-time position. The types of variables that are significant at each stage vary substantially. These results suggest that common dissemination strategies are good at creating knowledge about RBIS and motivation to try a RBIS, but more work is needed to support faculty during implementation and continued use of RBIS. Also, contrary to common assumptions, faculty age, institutional type, and percentage of job related to teaching were not found to be barriers to knowledge or use at any stage. High research productivity and large class sizes were not found to be barriers to use of at least some RBIS.
Design and Testing of a Variable Pressure Regulator for the Constellation Space Suit
NASA Technical Reports Server (NTRS)
Gill, Larry; Campbell, Colin
2008-01-01
The next generation space suit requires additional capabilities for controlling and adjusting internal pressure beyond those of previous suit designs. Next generation suit pressures will range from slight pressure, for astronaut prebreathe comfort, to hyperbaric pressure levels for emergency medical treatment. Carleton was awarded a contract in 2008 to design and build a proof-of-concept benchtop demonstrator regulator having five setpoints which are selectable using input electronic signaling. Although the basic regulator architecture is very similar to the existing SOP regulator used in the current EMU, the major difference is the electrical selectability of multiple setpoints rather than the mechanical On/Off feature found on the SOP regulator. The concept regulator employs a linear actuator and stepper motor combination to provide variable compression to a custom-designed main regulator spring. This concept allows for continuously adjustable outlet pressures from 8.2 psid (maximum) down to a "firm" zero, thus effectively allowing it to serve as a shutoff valve. This paper details the regulator design and presents test results on regulation bandwidth, command set point accuracy, slew rate, and regulation stability, particularly when the set point is being slewed. Projections for a flight configuration version are also offered for performance, architectural layout, and weight.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elder, J.C.; Littlefield, L.G.; Tillery, M.I.
1978-06-01
A preliminary design of a prototype particulate stack sampler (PPSS) has been prepared, and development of several components is under way. The objective of this Environmental Protection Agency (EPA)-sponsored program is to develop and demonstrate a prototype sampler with capabilities similar to EPA Method 5 apparatus but without some of the more troublesome aspects. Features of the new design include higher sampling flow; display (on demand) of all variables and periodic calculation of percent isokinetic, sample volume, and stack velocity; automatic control of probe and filter heaters; stainless steel surfaces in contact with the sample stream; single-point particle size separation in the probe nozzle; null-probe capability in the nozzle; and lower weight in the components of the sampling train. Design considerations will limit use of the PPSS to stack gas temperatures under approximately 300 °C, which will exclude sampling some high-temperature stacks such as incinerators. Although the need for filter weighing has not been eliminated in the new design, introduction of a variable-slit virtual impactor nozzle may eliminate the need for mass analysis of particles washed from the probe. Component development has shown some promise for continuous humidity measurement by an in-line wet-bulb, dry-bulb psychrometer.
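Percent isokinetic, one of the quantities the PPSS computes periodically, reduces in its simplest form to the ratio of gas velocity in the nozzle to gas velocity in the stack. The sketch below uses that simplified definition only; the full Method 5 calculation also corrects for moisture, temperature, and pressure, which is omitted here.

```python
import math

def percent_isokinetic(sample_flow_m3s, nozzle_diam_m, stack_velocity_ms):
    """Simplified percent isokinetic: nozzle gas velocity over stack gas
    velocity, times 100. 100% means the probe samples isokinetically."""
    nozzle_area = math.pi * (nozzle_diam_m / 2.0) ** 2
    return 100.0 * (sample_flow_m3s / nozzle_area) / stack_velocity_ms

# Example: 0.0005 m^3/s drawn through a 6 mm nozzle in a 15 m/s stack.
print(f"{percent_isokinetic(0.0005, 0.006, 15.0):.0f}% isokinetic")
```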
The intergenerational transmission of conduct problems.
Raudino, Alessandra; Fergusson, David M; Woodward, Lianne J; Horwood, L John
2013-03-01
Drawing on prospective longitudinal data, this paper examines the intergenerational transmission of childhood conduct problems in a sample of 209 parents and their 331 biological offspring studied as part of the Christchurch Health and Developmental Study. The aims were to estimate the association between parental and offspring conduct problems and to examine the extent to which this association could be explained by (a) confounding social/family factors from the parent's childhood and (b) intervening factors reflecting parental behaviours and family functioning. The same item set was used to assess childhood conduct problems in parents and offspring. Two approaches to data analysis (generalised estimating equation regression methods and latent variable structural equation modelling) were used to examine possible explanations of the intergenerational continuity in behaviour. Regression analysis suggested that there was moderate intergenerational continuity (r = 0.23, p < 0.001) between parental and offspring conduct problems. This continuity was not explained by confounding factors but was partially mediated by parenting behaviours, particularly parental over-reactivity. Latent variable modelling designed to take account of non-observed common genetic and environmental factors underlying the continuities in problem behaviours across generations also suggested that parenting behaviour played a role in mediating the intergenerational transmission of conduct problems. There is clear evidence of intergenerational continuity in conduct problems. In part this association reflects a causal chain process in which parental conduct problems are associated (directly or indirectly) with impaired parenting behaviours that in turn influence risks of conduct problems in offspring.
Designing flexible engineering systems utilizing embedded architecture options
NASA Astrophysics Data System (ADS)
Pierce, Jeff G.
This dissertation develops and applies an integrated framework for embedding flexibility in an engineered system architecture. Systems are constantly faced with unpredictability in the operational environment, threats from competing systems, obsolescence of technology, and general uncertainty in future system demands. Current systems engineering and risk management practices have focused almost exclusively on mitigating or preventing the negative consequences of uncertainty. This research recognizes that high uncertainty also presents an opportunity to design systems that can flexibly respond to changing requirements and capture additional value throughout the design life. However, there does not yet exist a formalized approach to designing appropriately flexible systems. This research develops a three-stage integrated flexibility framework based on the concept of architecture options embedded in the system design. Stage One defines an eight-step systems engineering process to identify candidate architecture options. This process encapsulates the operational uncertainty through scenario development, traces new functional requirements to the affected design variables, and clusters the variables most sensitive to change. The resulting clusters can generate insight into the most promising regions in the architecture in which to embed flexibility in the form of architecture options. Stage Two develops a quantitative option valuation technique, grounded in real options theory, which is able to value embedded architecture options that exhibit variable expiration behavior. Stage Three proposes a portfolio optimization algorithm, for both discrete and continuous options, to select the optimal subset of architecture options, subject to budget and risk constraints. Finally, the feasibility, extensibility, and limitations of the framework are assessed by its application to a reconnaissance satellite system development problem. Detailed technical data, performance models, and cost estimates were compiled for the Tactical Imaging Constellation Architecture Study and leveraged to complete a realistic proof of concept.
Using PAT to accelerate the transition to continuous API manufacturing.
Gouveia, Francisca F; Rahbek, Jesper P; Mortensen, Asmus R; Pedersen, Mette T; Felizardo, Pedro M; Bro, Rasmus; Mealy, Michael J
2017-01-01
Significant improvements can be realized by converting conventional batch processes into continuous ones. The main drivers include reduction of cost and waste, increased safety, and simpler scale-up and tech transfer activities. Re-designing the process layout offers the opportunity to incorporate a set of process analytical technologies (PAT) embraced in the Quality-by-Design (QbD) framework. These tools are used for process state estimation, providing enhanced understanding of the underlying variability in the process impacting quality and yield. This work describes a road map for identifying the best technology to speed up the development of continuous processes while providing the basis for developing analytical methods for monitoring and controlling the continuous full-scale reaction. The suitability of in-line Raman, FT-infrared (FT-IR), and near-infrared (NIR) spectroscopy for real-time process monitoring was investigated in the production of 1-bromo-2-iodobenzene. The synthesis consists of three consecutive reaction steps including the formation of an unstable diazonium salt intermediate, which is critical to secure high yield and avoid formation of by-products. All spectroscopic methods were able to capture critical information related to the accumulation of the intermediate with very similar accuracy. NIR spectroscopy proved to be satisfactory in terms of performance, ease of installation, full-scale transferability, and stability under very adverse process conditions. As such, in-line NIR was selected to monitor the continuous full-scale production. The quantitative method was developed against theoretical concentration values of the intermediate, since representative sampling for off-line reference analysis cannot be achieved. The rapid and reliable analytical system made it possible to speed up the design of the continuous process and to better understand the manufacturing requirements needed to ensure optimal yield and avoid unreacted raw materials and by-products in the continuous reactor effluent.
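Quantitative NIR methods of this kind are commonly built with partial least squares (PLS) regression. The sketch below is a generic stand-in rather than the paper's calibration: the spectra are synthetic, and the "theoretical" concentrations simply scale a Gaussian absorbance band.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(7)
# Synthetic stand-in for NIR spectra: 60 spectra x 200 wavelengths whose
# absorbance tracks a theoretical intermediate concentration profile.
conc = np.linspace(0.0, 1.0, 60)  # theoretical concentration values
band = np.exp(-0.5 * ((np.arange(200) - 120) / 8.0) ** 2)
spectra = np.outer(conc, band) + 0.02 * rng.normal(size=(60, 200))

pls = PLSRegression(n_components=3).fit(spectra, conc)
pred = np.asarray(pls.predict(spectra)).ravel()
rmse = np.sqrt(np.mean((pred - conc) ** 2))
print(f"calibration RMSE = {rmse:.4f}")
```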
A Geometrical Framework for Covariance Matrices of Continuous and Categorical Variables
ERIC Educational Resources Information Center
Vernizzi, Graziano; Nakai, Miki
2015-01-01
It is well known that a categorical random variable can be represented geometrically by a simplex. Accordingly, several measures of association between categorical variables have been proposed and discussed in the literature. Moreover, the standard definitions of covariance and correlation coefficient for continuous random variables have been…
Design of discrete and continuous super-resolving Toraldo pupils in the microwave range.
Olmi, Luca; Bolli, Pietro; Mugnai, Daniela
2018-03-20
The concept of super-resolution refers to various methods for improving the angular resolution of an optical imaging system beyond the classical diffraction limit. In optical microscopy, several techniques have been successfully developed with the aim of narrowing the central lobe of the illumination point spread function. In astronomy, however, no similar techniques can be used. A feasible method to design antennas and telescopes with angular resolution better than the diffraction limit consists of using variable transmittance pupils. In particular, discrete binary phase masks (0 or π) with finite phase-jump positions, known as Toraldo pupils (TPs), have the advantage of being easy to fabricate but offer relatively little flexibility in terms of achieving specific trade-offs between design parameters, such as the angular width of the main lobe and the intensity of sidelobes. In this paper, we show that a complex transmittance filter (equivalent to a continuous TP, i.e., consisting of infinitely narrow concentric rings) can more easily achieve the desired trade-off between design parameters. We also show how the super-resolution effect can be generated with both amplitude- and phase-only masks and confirm the expected performance with electromagnetic numerical simulations in the microwave range.
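For a discrete, ring-structured pupil the focal-plane amplitude is a sum of closed-form Bessel terms, which makes the behavior easy to explore numerically. The sketch below assumes illustrative ring radii; the optimized designs of the paper are not reproduced.

```python
import numpy as np
from scipy.special import j1

def ring_psf(edges, trans, v):
    """PSF, normalized to its on-axis value, of a circularly symmetric
    pupil built from concentric rings. edges: ring boundaries from 0 to 1
    (units of the aperture radius); trans: transmittance of each ring
    (+1/-1 for a binary-phase Toraldo pupil); v: radial focal-plane
    coordinate (v = 0 on axis)."""
    v = np.atleast_1d(v).astype(float)
    amp = np.zeros(v.shape, dtype=complex)
    amp0 = 0.0 + 0.0j
    for t, r0, r1 in zip(trans, edges[:-1], edges[1:]):
        area_term = 0.5 * (r1 ** 2 - r0 ** 2)
        amp0 += t * area_term
        nz = v != 0
        # Integral of J0(v r) r dr over [r0, r1] equals [r J1(v r) / v].
        amp[nz] += t * (r1 * j1(v[nz] * r1) - r0 * j1(v[nz] * r0)) / v[nz]
        amp[~nz] += t * area_term  # limit as v -> 0
    return np.abs(amp / amp0) ** 2

v = np.linspace(0.0, 10.0, 400)
airy = ring_psf([0.0, 1.0], [1.0], v)                     # open aperture
toraldo = ring_psf([0.0, 0.45, 0.7, 1.0], [1, -1, 1], v)  # assumed radii
# Comparing the two profiles exposes the trade-off between main-lobe width
# and sidelobe level; actual super-resolving designs tune the radii.
```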
ERIC Educational Resources Information Center
Wall, Melanie M.; Guo, Jia; Amemiya, Yasuo
2012-01-01
Mixture factor analysis is examined as a means of flexibly estimating nonnormally distributed continuous latent factors in the presence of both continuous and dichotomous observed variables. A simulation study compares mixture factor analysis with normal maximum likelihood (ML) latent factor modeling. Different results emerge for continuous versus…
Internal-Performance Evaluation of Two Fixed-Divergent-Shroud Ejectors
NASA Technical Reports Server (NTRS)
Mihaloew, James R.
1960-01-01
Ejectors designed for use in a Mach 2.2 aircraft were evaluated over a range of representative primary pressure ratios and ejector corrected weight-flow ratios. Basic thrust and pumping characteristics are discussed in terms of an assumed engine operating schedule to illustrate the variation of performance with Mach number. The two designs differed about 16 percent in the shroud longitudinal spacing ratio. For corrected ejector weight-flow ratios up to 0.10, the performance of the fixed-shroud ejector designs is comparable with that of a similar continuously variable ejector except at conditions corresponding to acceleration with afterburning from Mach 0.4 to 1.2. In this region, the ejector thrust ratio decreased to a minimum of 0.96.
NASA Technical Reports Server (NTRS)
Howlett, R. A.
1975-01-01
A continuation of the NASA/P&WA study to evaluate various types of propulsion systems for advanced commercial supersonic transports has resulted in the identification of two very promising engine concepts. They are the Variable Stream Control Engine, which provides independent temperature and velocity control for two coannular exhaust streams, and a derivative of this engine, a Variable Cycle Engine that employs a rear flow-inverter valve to vary the bypass ratio of the cycle. Both concepts are based on advanced engine technology and have the potential for significant improvements in jet noise, exhaust emissions, and economic characteristics relative to current technology supersonic engines. Extensive research and technology programs are required in several critical areas that are unique to these supersonic Variable Cycle Engines to realize these potential improvements. Parametric cycle and integration studies of conventional and Variable Cycle Engines are reviewed, features of the two most promising engine concepts are described, and critical technology requirements and required programs are summarized.
Enhanced Requirements for Assessment in a Competency-Based, Time-Variable Medical Education System.
Gruppen, Larry D; Ten Cate, Olle; Lingard, Lorelei A; Teunissen, Pim W; Kogan, Jennifer R
2018-03-01
Competency-based, time-variable medical education has reshaped the perceptions and practices of teachers, curriculum designers, faculty developers, clinician educators, and program administrators. This increasingly popular approach highlights the fact that learning among different individuals varies in duration, foundation, and goal. Time variability places particular demands on the assessment data that are so necessary for making decisions about learner progress. These decisions may be formative (e.g., feedback for improvement) or summative (e.g., decisions about advancing a student). This article identifies challenges to collecting assessment data and to making assessment decisions in a time-variable system. These challenges include managing assessment data, defining and making valid assessment decisions, innovating in assessment, and modeling the considerable complexity of assessment in real-world settings and richly interconnected social systems. There are hopeful signs of creativity in assessment both from researchers and practitioners, but the transition from a traditional to a competency-based medical education system will likely continue to create much controversy and offer opportunities for originality and innovation in assessment.
How Robust Is Linear Regression with Dummy Variables?
ERIC Educational Resources Information Center
Blankmeyer, Eric
2006-01-01
Researchers in education and the social sciences make extensive use of linear regression models in which the dependent variable is continuous-valued while the explanatory variables are a combination of continuous-valued regressors and dummy variables. The dummies partition the sample into groups, some of which may contain only a few observations.…
Continuation Power Flow with Variable-Step Variable-Order Nonlinear Predictor
NASA Astrophysics Data System (ADS)
Kojima, Takayuki; Mori, Hiroyuki
This paper proposes a new continuation power flow calculation method for drawing a P-V curve in power systems. The continuation power flow calculation successively evaluates power flow solutions as a specified parameter of the power flow calculation is changed. In recent years, power system operators have become quite concerned with voltage instability due to the appearance of deregulated and competitive power markets. The continuation power flow calculation plays an important role in understanding load characteristics in the sense of static voltage instability. In this paper, a new continuation power flow with a variable-step variable-order (VSVO) nonlinear predictor is proposed. The proposed method evaluates optimal predicted points conforming to the features of P-V curves. The proposed method is successfully applied to the IEEE 118-bus and IEEE 300-bus systems.
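The nonlinear-predictor idea can be sketched as polynomial extrapolation through the last few computed solutions; the paper's actual rule for selecting the step and order is not reproduced here.

```python
import numpy as np

def predict_next(lambdas, voltages, order, step):
    """Extrapolate the next P-V curve point from the last (order + 1)
    computed solutions by polynomial extrapolation in the continuation
    parameter lambda (the loading level)."""
    k = order + 1
    coeffs = np.polyfit(lambdas[-k:], voltages[-k:], order)
    lam_next = lambdas[-1] + step
    return lam_next, np.polyval(coeffs, lam_next)

# Illustrative computed solutions approaching the nose of a P-V curve:
lams = np.array([0.0, 0.2, 0.4, 0.6])
volts = np.array([1.00, 0.98, 0.94, 0.87])
for order in (1, 2, 3):  # a VSVO scheme would pick order and step adaptively
    lam, v_pred = predict_next(lams, volts, order, step=0.1)
    print(f"order {order}: predicted V({lam:.1f}) = {v_pred:.4f}")
# The corrector (a power flow solve with an added constraint) would then
# refine each predicted point.
```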
Dijkstra, Arie; Zuidema, Rixt; Vos, Diederick; van Kalken, Marike
2014-09-13
The Allen Carr training (ACt) is a popular one-session smoking cessation group training that is provided by licensed organizations with permission to use the Allen Carr method. However, few data are available on the effectiveness of the training. In a quasi-experimental design, the effects on abstinence of the existing practice of providing the ACt to smokers (n = 124) in companies were compared to changes in abstinence in a cohort of similar smokers in the general population (n = 161). To increase comparability of the smokers in both conditions, smokers in the control condition were matched at the group level on baseline characteristics (fourteen variables) to the smokers in the ACt condition. The main outcome measure was self-reported continuous abstinence after 13 months, which was validated using a CO measurement in the ACt condition. Logistic regression analyses showed that when baseline characteristics were comparable, significantly more responding smokers were continuously abstinent in the ACt condition compared to the control condition, Exp(B) = 6.52 (41.1% and 9.6%, respectively). The all-cases analysis was also significant, Exp(B) = 5.09 (31.5% and 8.3%, respectively). Smokers following the ACt in their company were about 6 times more likely to be abstinent, assessed after 13 months, compared to similar smokers in the general population. Although smokers in both conditions did not differ significantly on 14 variables that might be related to cessation success, the quasi-experimental design allows no definite conclusion about the effectiveness of the ACt. Still, these data support the provision of the ACt in companies.
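The reported effect size can be sanity-checked from the responder abstinence rates alone; the remaining gap reflects that Exp(B) comes from a logistic regression rather than this raw, unadjusted cross-tabulation.

```python
# Unadjusted odds ratio from the reported responder abstinence rates.
p_act, p_ctrl = 0.411, 0.096
odds_ratio = (p_act / (1 - p_act)) / (p_ctrl / (1 - p_ctrl))
print(f"unadjusted odds ratio = {odds_ratio:.2f}")  # ~6.6 vs Exp(B) = 6.52
```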
Coordinating the effects of multiple variables: a skill fundamental to scientific thinking.
Kuhn, Deanna; Pease, Maria; Wirkala, Clarice
2009-07-01
The skill of predicting outcomes based on simultaneous effects of multiple factors was examined. Over five sessions, 91 sixth graders engaged in this task either individually or in pairs, either preceded or followed by six sessions on the more widely studied inquiry task that requires designing and interpreting experiments to identify individual effects. Final assessment, while indicating a high level of mastery on the inquiry task, showed progress but continuing conceptual challenges on the multivariable prediction task, having to do with understanding of variables, variable levels, and consistency of a variable's operation across occasions. Task order had a significant but limited effect, and social collaboration conferred only a temporary benefit that disappeared in a final individual assessment. In a follow-up study, the lack of effect of social collaboration was confirmed, as was that of feedback on incorrect answers. Although fundamental to science, the concept that variables operate jointly and, under equivalent conditions, consistently across occasions is one that children appear to acquire only gradually and, therefore, one that cannot be assumed to be in place.
Does glycemic variability impact mood and quality of life?
Penckofer, Sue; Quinn, Lauretta; Byrn, Mary; Ferrans, Carol; Miller, Michael; Strange, Poul
2012-04-01
Diabetes is a chronic condition that significantly impacts quality of life. Poor glycemic control is associated with more diabetes complications, depression, and worse quality of life. The impact of glycemic variability on mood and quality of life has not been studied. A descriptive exploratory design was used. Twenty-three women with type 2 diabetes wore a continuous glucose monitoring system for 72 h and completed a series of questionnaires. Measurements included (1) glycemic control shown by glycated hemoglobin and 24-h mean glucose, (2) glycemic variability shown by 24-h SD of the glucose readings, continuous overall net glycemic action (CONGA), and Fourier statistical models to generate smoothed curves to assess rate of change defined as "energy," and (3) mood (depression, anxiety, anger) and quality of life by questionnaires. Women with diabetes and co-morbid depression had higher anxiety, more anger, and lower quality of life than those without depression. Certain glycemic variability measures were associated with mood and quality of life. The 24-h SD of the glucose readings and the CONGA measures were significantly associated with health-related quality of life after adjusting for age and weight. Fourier models indicated that certain energy components were significantly associated with depression, trait anxiety, and overall quality of life. Finally, subjects with higher trait anxiety tended to have steeper glucose excursions. Data suggest that greater glycemic variability may be associated with lower quality of life and negative moods. Implications include replication of the study in a larger sample for the assessment of blood glucose fluctuations as they impact mood and quality of life.
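The two dispersion measures used above are straightforward to compute from a continuous glucose monitoring trace. A sketch on synthetic 5-minute data follows; CONGA-n is taken here as the standard deviation of differences between observations n hours apart, which is one common formulation.

```python
import numpy as np

def glucose_sd(glucose):
    """24-h SD: overall standard deviation of the glucose readings."""
    return np.std(glucose, ddof=1)

def conga(glucose, n_hours, interval_min=5):
    """CONGA-n: SD of differences between each reading and the reading
    n hours earlier (Continuous Overall Net Glycemic Action)."""
    lag = int(n_hours * 60 / interval_min)
    diffs = glucose[lag:] - glucose[:-lag]
    return np.std(diffs, ddof=1)

rng = np.random.default_rng(3)
# Synthetic 72-h CGM trace at 5-min intervals (mg/dL), not study data.
t = np.arange(72 * 12)
cgm = 130 + 35 * np.sin(2 * np.pi * t / (24 * 12)) + rng.normal(0, 12, t.size)
print(f"SD = {glucose_sd(cgm):.1f} mg/dL, CONGA-1 = {conga(cgm, 1):.1f} mg/dL")
```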
Theoretical and experimental investigations of thermal conditions of household biogas plant
NASA Astrophysics Data System (ADS)
Zhelykh, Vasil; Furdas, Yura; Dzeryn, Oleksandra
2016-06-01
The construction of a domestic continuous bioreactor is proposed. Thermal modes of a household biogas plant were modeled using graph theory, and a correction factor accounting for the influence of the relevant variables on its value was determined. A system of balance equations for the desired thermal conditions in the bioreactor is presented. Graphical and analytical tools are provided that can be applied in the design of domestic biogas plants for organic waste recycling.
SAINT: A combined simulation language for modeling man-machine systems
NASA Technical Reports Server (NTRS)
Seifert, D. J.
1979-01-01
SAINT (Systems Analysis of Integrated Networks of Tasks) is a network modeling and simulation technique for the design and analysis of complex man-machine systems. SAINT provides the conceptual framework for representing systems that consist of discrete task elements, continuous state variables, and interactions between them. It also provides a mechanism for combining human performance models and dynamic system behaviors in a single modeling structure. The SAINT technique is described and its applications are discussed.
Correlation and simple linear regression.
Eberly, Lynn E
2007-01-01
This chapter highlights important steps in using correlation and simple linear regression to address scientific questions about the association of two continuous variables with each other. These steps include estimation and inference, assessing model fit, the connection between regression and ANOVA, and study design. Examples in microbiology are used throughout. This chapter provides a framework that is helpful in understanding more complex statistical techniques, such as multiple linear regression, linear mixed effects models, logistic regression, and proportional hazards regression.
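The two core computations the chapter describes, Pearson correlation and an ordinary least-squares fit, can be illustrated in a few lines (a sketch with made-up data, not the chapter's own microbiology examples):

```python
import numpy as np
from scipy import stats

# hypothetical paired measurements of two continuous variables
x = np.array([1.0, 2.1, 2.9, 4.2, 5.1, 6.0])
y = np.array([2.3, 4.1, 6.2, 8.0, 9.8, 12.1])

r, p_corr = stats.pearsonr(x, y)   # correlation and its p-value
fit = stats.linregress(x, y)       # slope, intercept, rvalue, pvalue, stderr
print(f"r = {r:.3f} (p = {p_corr:.4f})")
print(f"y = {fit.intercept:.2f} + {fit.slope:.2f} x, R^2 = {fit.rvalue**2:.3f}")
```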
Axial force and efficiency tests of fixed center variable speed belt drive
NASA Technical Reports Server (NTRS)
Bents, D. J.
1981-01-01
An investigation of how the axial force varies with the centerline force at different speed ratios, speeds, and loads, and how the drive's transmission efficiency is affected by these related forces is described. The tests, intended to provide a preliminary performance and controls characterization for a variable speed belt drive continuously variable transmission (CVT), consisted of the design and construction of an experimental test rig geometrically similar to the CVT, and operation of that rig at selected speed ratios and power levels. Data are presented which show: how axial forces exerted on the driver and driven sheaves vary with the centerline force at constant values of speed ratio, speed, and output power; how the transmission efficiency varies with centerline force and how it is also a function of the V belt coefficient; and the axial forces on both sheaves as normalized functions of the traction coefficient.
Candela, L.; Olea, R.A.; Custodio, E.
1988-01-01
Groundwater quality observation networks are examples of discontinuous sampling of variables presenting spatial continuity and highly skewed frequency distributions. Anywhere in the aquifer, lognormal kriging provides estimates of the variable being sampled and a standard error of the estimate. The average and the maximum standard error within the network can be used to dynamically improve the network sampling efficiency or to find a design able to assure a given reliability level. The approach does not require the formulation of any physical model for the aquifer or any actual sampling of hypothetical configurations. A case study is presented using the network monitoring salt water intrusion into the Llobregat delta confined aquifer, Barcelona, Spain. The variable used to trace the intrusion, chloride concentration, exhibits sudden changes within short distances, which makes the standard error fairly insensitive to changes in sampling pattern and to substantial fluctuations in the number of wells. © 1988.
Inoue, K; Ochi, H; Taketsuka, M; Saito, H; Sakurai, K; Ichihashi, N; Iwatsuki, K; Kokubo, S
2008-05-01
A systematic analysis was carried out by using response surface methodology to create a quantitative model of the synergistic effects of conditions in a continuous freezer [mix flow rate (L/h), overrun (%), cylinder pressure (kPa), drawing temperature (°C), and dasher speed (rpm)] on the principal constituent parameters of ice cream [rate of fat destabilization (%), mean air cell diameter (μm), and mean ice crystal diameter (μm)]. A central composite face-centered design was used for this study. Thirty-one combinations of the 5 above-mentioned freezer conditions were designed (including replicates at the center point), and ice cream samples were manufactured and examined in a continuous freezer under the selected conditions. The responses were the 3 variables given above. A quadratic model was constructed, with the freezer conditions as the independent variables and the ice cream characteristics as the dependent variables. The coefficients of determination (R²) were greater than 0.9 for all 3 responses, but Q², the index used here for the capability of the model for predicting future observed values of the responses, was negative for both the mean ice crystal diameter and the mean air cell diameter. Therefore, pruned models were constructed by removing terms that had contributed little to the prediction in the original model and by refitting the regression model. It was demonstrated that these pruned models provided good fits to the data in terms of R², Q², and ANOVA. The effects of freezer conditions were expressed quantitatively in terms of the 3 responses. The drawing temperature (°C) was found to have a greater effect on ice cream characteristics than any of the other factors.
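A quadratic response-surface fit of the kind described, with R² as the fit index, can be sketched as follows. The factor settings and response are hypothetical, and Q² is approximated here by leave-one-out cross-validation, one common way to estimate predictive capability, rather than by the dedicated RSM software the study used.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.pipeline import make_pipeline

# hypothetical design matrix: rows = runs, columns = coded freezer conditions
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(31, 5))
y = 2 + 3 * X[:, 3] + X[:, 3] ** 2 + rng.normal(0, 0.3, 31)  # response

model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
model.fit(X, y)
r2 = model.score(X, y)

# Q2 from leave-one-out predictions: 1 - PRESS / total sum of squares
y_loo = cross_val_predict(model, X, y, cv=LeaveOneOut())
q2 = 1 - np.sum((y - y_loo) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"R2 = {r2:.3f}, Q2 = {q2:.3f}")
```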
Quasi-continuous stochastic simulation framework for flood modelling
NASA Astrophysics Data System (ADS)
Moustakis, Yiannis; Kossieris, Panagiotis; Tsoukalas, Ioannis; Efstratiadis, Andreas
2017-04-01
Typically, flood modelling in the context of everyday engineering practice is addressed through event-based deterministic tools, e.g., the well-known SCS-CN method. A major shortcoming of such approaches is that they ignore uncertainty, which is associated with the variability of soil moisture conditions and the variability of rainfall during the storm event. In event-based modelling, the sole expression of uncertainty is the return period of the design storm, which is assumed to represent the acceptable risk of all output quantities (flood volume, peak discharge, etc.). On the other hand, the varying antecedent soil moisture conditions across the basin are represented by means of scenarios (e.g., the three AMC types by SCS), while the temporal distribution of rainfall is represented through standard deterministic patterns (e.g., the alternating blocks method). In order to address these major inconsistencies, while preserving the simplicity and parsimony of the SCS-CN method, we have developed a quasi-continuous stochastic simulation approach comprising the following steps: (1) generation of synthetic daily rainfall time series; (2) update of the potential maximum soil moisture retention, on the basis of accumulated five-day rainfall; (3) estimation of daily runoff through the SCS-CN formula, using as inputs the daily rainfall and the updated value of soil moisture retention; (4) selection of extreme events and application of the standard SCS-CN procedure for each specific event, on the basis of the synthetic rainfall. This scheme requires the use of two stochastic modelling components, namely the CastaliaR model, for the generation of synthetic daily data, and the HyetosMinute model, for the disaggregation of daily rainfall to finer temporal scales. Outcomes of this approach are a large number of synthetic flood events, allowing the design variables to be expressed in statistical terms and the flood risk to be properly evaluated.
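Steps (2) and (3) rest on the standard SCS-CN relations, S = 25400/CN − 254 (in mm) and Q = (P − 0.2S)²/(P + 0.8S) for P > 0.2S, else Q = 0. The sketch below is a minimal illustration of the quasi-continuous loop with a synthetic rainfall series; the curve numbers and five-day AMC thresholds are illustrative, and the snippet stands in for, rather than reproduces, the CastaliaR/HyetosMinute machinery.

```python
import numpy as np

def storage(cn):
    """Potential maximum retention S (mm) from a curve number."""
    return 25400.0 / cn - 254.0

rng = np.random.default_rng(42)
# synthetic daily rainfall (mm): ~70% dry days, gamma-distributed wet days
rain = np.where(rng.random(365) < 0.7, 0.0, rng.gamma(2.0, 8.0, 365))

cn_avg, cn_dry, cn_wet = 75.0, 63.0, 88.0   # illustrative AMC II / I / III values
runoff = np.zeros_like(rain)
for t in range(len(rain)):
    antecedent = rain[max(0, t - 5):t].sum()   # accumulated five-day rainfall
    cn = cn_wet if antecedent > 53 else cn_dry if antecedent < 13 else cn_avg
    s = storage(cn)
    ia = 0.2 * s                               # initial abstraction
    if rain[t] > ia:
        runoff[t] = (rain[t] - ia) ** 2 / (rain[t] + 0.8 * s)

print(f"annual runoff: {runoff.sum():.0f} mm from {rain.sum():.0f} mm of rain")
```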
Ionospheric Variability as Observed by the CTECS and CORISS Sensors
NASA Astrophysics Data System (ADS)
Bishop, R. L.; Redding, M.; Straus, P. R.
2013-12-01
The Compact Total Electron Content Sensor (CTECS) is a GPS radio occultation instrument designed for cubesat platforms that utilizes a COTS receiver, modified firmware, and a custom-designed antenna. CTECS was placed on the Pico Satellite Solar Cell Testbed 2 (PSSC2) nanosat that was installed on the Space Shuttle Atlantis (STS-135). PSSC2 was successfully released from the shuttle on 20 July 2011 near 380 km altitude. Because of attitude control and power issues, only 13.5 hours of data were collected during its approximately 5-month mission life. The C/NOFS Occultation Receiver for Ionospheric Sensing and Specification (CORISS) GPS radio occultation sensor on the C/NOFS satellite has collected data nearly continuously from May 2008 to June 2013. Both CTECS and CORISS obtain Total Electron Content and scintillation data. In this presentation, the CTECS data are first validated against CORISS and available ground-based observations. Then, combining the CTECS and CORISS data, low- and mid-latitude ionospheric variability, including scintillation events, is presented.
NASA Astrophysics Data System (ADS)
Brander, K. M.; Dickson, R. R.; Edwards, M.
2003-08-01
The Continuous Plankton Recorder (CPR) survey was conceived from the outset as a programme of applied research designed to assist the fishing industry. Its survival and continuing vigour after 70 years is a testament to its utility, which has been achieved in spite of great changes in our understanding of the marine environment and in our concerns over how to manage it. The CPR has been superseded in several respects by other technologies, such as acoustics and remote sensing, but it continues to provide unrivalled seasonal and geographic information about a wide range of zooplankton and phytoplankton taxa. The value of this coverage increases with time and provides the basis for placing recent observations into the context of long-term, large-scale variability and thus suggesting what the causes are likely to be. Information from the CPR is used extensively in judging environmental impacts and producing quality status reports (QSR); it has shown the distributions of fish stocks, which had not previously been exploited; it has pointed to the extent of ungrazed phytoplankton production in the North Atlantic, which was a vital element in establishing the importance of carbon sequestration by phytoplankton. The CPR continues to be the principal source of large-scale, long-term information about the plankton ecosystem of the North Atlantic. It has recently provided extensive information about the biodiversity of the plankton and about the distribution of introduced species. It serves as a valuable example for the design of future monitoring of the marine environment and it has been essential to the design and implementation of most North Atlantic plankton research.
2D Inviscid and Viscous Inverse Design Using Continuous Adjoint and Lax-Wendroff Formulation
NASA Astrophysics Data System (ADS)
Proctor, Camron Lisle
The continuous adjoint (CA) technique for optimization and/or inverse design of aerodynamic components has seen nearly 30 years of documented success in academia. The benefits of using CA versus a direct sensitivity analysis are shown repeatedly in the literature. However, the use of CA in industry is relatively unheard of. The sparseness of industry contributions to the field may be attributed to the tediousness of the derivation and/or to difficulties in implementation due to the lack of well-documented adjoint numerical methods. The focus of this work has been to thoroughly document the techniques required to build a two-dimensional CA inverse-design tool. To this end, this work begins with a short background on computational fluid dynamics (CFD) and the use of optimization tools in conjunction with CFD tools to solve aerodynamic optimization problems. A thorough derivation of the continuous adjoint equations and the accompanying gradient calculations for inviscid and viscous constraining equations follows the introduction. Next, the numerical techniques used for solving the partial differential equations (PDEs) governing the flow equations and the adjoint equations are described, and numerical techniques for the supplementary equations are discussed briefly. Subsequently, a verification of the efficacy of the inverse-design tool for the inviscid adjoint equations, as well as possible numerical implementation pitfalls, is discussed. The NACA0012 airfoil is used as the initial airfoil with the NACA16009 surface pressure distribution as the design target, and vice versa. Using a Savitzky-Golay gradient filter, convergence (defined as a cost function < 1E-5) is reached in approximately 220 design iterations using 121 design variables. The inviscid inverse-design results are followed by a discussion of the viscous inverse-design results and the techniques used to further the convergence of the optimizer. Limiting the step size in a line-search optimization is shown to slightly decrease the final cost function at significant computational cost. A gradient damping technique is presented and shown to increase the convergence rate of the optimization in viscous problems at a negligible increase in computational cost, but it is insufficient to converge the solution. Systematically including adjacent surface vertices in the perturbation of a design variable, itself a surface vertex, is shown to affect the convergence capability of the viscous optimizer. Finally, a comparison of using inviscid adjoint equations, as opposed to viscous adjoint equations, on viscous flow is presented; the inviscid adjoint paired with viscous flow is found to reduce the cost function further than the viscous adjoint for the presented problem.
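The gradient filtering step mentioned above can be reproduced with a standard Savitzky-Golay filter; the sketch below smooths a noisy surface-sensitivity distribution before it would be handed to the line search (window length and polynomial order are illustrative, not the thesis values):

```python
import numpy as np
from scipy.signal import savgol_filter

# hypothetical adjoint gradient over 121 surface design variables, with noise
x = np.linspace(0.0, 1.0, 121)
gradient = np.sin(2 * np.pi * x) + np.random.default_rng(0).normal(0, 0.2, 121)

# Savitzky-Golay smoothing: fits a local cubic in an 11-point moving window
smoothed = savgol_filter(gradient, window_length=11, polyorder=3)
```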
Knowledge modeling tool for evidence-based design.
Durmisevic, Sanja; Ciftcioglu, Ozer
2010-01-01
The aim of this study is to take evidence-based design (EBD) to the next level by activating available knowledge, integrating new knowledge, and combining them for more efficient use by the planning and design community. This article outlines a framework for a performance-based measurement tool that can provide the necessary decision support during the design or evaluation of a healthcare environment by estimating the overall design performance of multiple variables. New knowledge in EBD adds continuously to complexity (the "information explosion"), and it becomes impossible to consider all aspects (design features) at the same time, much less their impact on final building performance. How can existing knowledge and the information explosion in healthcare, specifically the domain of EBD, be rendered manageable? Is it feasible to create a computational model that considers many design features and deals with them in an integrated way, rather than one at a time? The evidence found is structured and readied for computation through a "fuzzification" process. The weights are calculated using an analytic hierarchy process. Actual knowledge modeling is accomplished through a fuzzy neural tree structure. The impact of all inputs on the outcome (in this case, patient recovery) is calculated using sensitivity analysis. Finally, the added value of the model is discussed using a hypothetical case study of a patient room. The proposed model can deal with the complexities of various aspects and the relationships among variables in a coordinated way, allowing existing and new pieces of evidence to be integrated in a knowledge tree structure that facilitates understanding of the effects of various design interventions on overall design performance.
Loya, Fred; Novakovic-Agopian, Tatjana; Binder, Deborah; Rossi, Annemarie; Rome, Scott; Murphy, Michelle; Chen, Anthony J-W
2017-01-01
Primary Objective. To investigate the long-term use and perceived benefit(s) of strategies included in Goal-Oriented Attentional Self-Regulation (GOALS) training (Novakovic-Agopian et al., 2011) by individuals with acquired brain injury (ABI) and chronic executive dysfunction. Research Design. Longitudinal follow-up of training. Methods and Procedures. Sixteen participants with chronic ABI participated in structured telephone interviews 20 months (range 11 to 31 months) following completion of GOALS training. Participants responded to questions regarding the range of strategies they continued to utilize, perceived benefit(s) of strategy use, situations in which strategy use was found helpful, and functional changes attributed to training. Results. Nearly all participants (94%) reported continued use of at least one trained strategy in their daily lives, with 75% of participants also reporting improved functioning resulting from training. However, there was considerable variability with respect to the specific strategies individuals found helpful as well as the perceived impact of training on overall functioning. Conclusions. GOALS training shows promising long-term benefits for individuals in the chronic phase of brain injury. Identifying individual- and injury-level factors that account for variability in continued strategy use and the perceived long-term benefits of training will help with ongoing intervention development.
The Relation of Finite Element and Finite Difference Methods
NASA Technical Reports Server (NTRS)
Vinokur, M.
1976-01-01
Finite element and finite difference methods are examined in order to bring out their relationship. It is shown that both methods use two types of discrete representations of continuous functions. They differ in that finite difference methods emphasize the discretization of the independent variables, while finite element methods emphasize the discretization of the dependent variables (referred to as functional approximations). An important point is that finite element methods use global piecewise functional approximations, while finite difference methods normally use local functional approximations. A general conclusion is that finite element methods are best designed to handle complex boundaries, while finite difference methods are superior for complex equations. It is also shown that finite volume difference methods possess many of the advantages attributed to finite element methods.
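The close relationship the paper describes is easy to see in one dimension: for −u″ = f on a uniform mesh, linear finite elements and central differences produce the same tridiagonal operator, differing only in how the load is treated (a minimal sketch, not drawn from the paper itself):

```python
import numpy as np

n, h = 5, 1.0 / 6.0   # interior nodes and mesh spacing
tridiag = (np.diag(2 * np.ones(n))
           + np.diag(-np.ones(n - 1), 1)
           + np.diag(-np.ones(n - 1), -1))

A_fd = tridiag / h**2   # finite difference: discretizes the independent variable
A_fe = tridiag / h      # linear-element stiffness: discretizes the dependent variable

# identical up to the factor of h that the FE load vector absorbs
print(np.allclose(A_fd * h, A_fe))   # True
```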
Assessment of Caregiver Inventory for Rett Syndrome
Lane, Jane B.; Salter, Amber R.; Jones, Nancy E.; Cutter, Gary; Horrigan, Joseph; Skinner, Steve A.; Kaufmann, Walter E.; Glaze, Daniel G.; Neul, Jeffrey L.; Percy, Alan K.
2017-01-01
Rett syndrome (RTT) requires total caregiver attention and leads to potential difficulties throughout life. The Caregiver Burden Inventory, designed for Alzheimer disease, was modified into the RTT Caregiver Inventory Assessment (RTT CIA). Reliability and face, construct, and concurrent validity were assessed in caregivers of individuals with RTT. Chi-square or Fisher's exact tests for categorical variables and t-tests or Wilcoxon two-sample tests for continuous variables were utilized. The survey was completed by 198 caregivers; 70 caregivers completed the follow-up assessment. Exploratory factor analysis revealed good agreement for Physical Burden, Emotional Burden, and Social Burden. Internal reliability was high (Cronbach's alpha: 0.898). The RTT CIA represents a reliable and valid measure, providing a needed metric of caregiver burden in this disorder. PMID:28132121
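The internal-reliability figure cited is Cronbach's alpha, which can be computed directly from a respondents-by-items matrix (a generic sketch with simulated data, not the RTT CIA items):

```python
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array, rows = respondents, columns = scale items."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=(198, 1))                       # shared burden factor
responses = latent + rng.normal(0, 0.5, size=(198, 10))  # 10 correlated items
print(round(cronbach_alpha(responses), 3))               # high alpha expected
```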
Method for curing polymers using variable-frequency microwave heating
Lauf, R.J.; Bible, D.W.; Paulauskas, F.L.
1998-02-24
A method for curing polymers incorporating a variable frequency microwave furnace system designed to allow modulation of the frequency of the microwaves introduced into a furnace cavity is disclosed. By varying the frequency of the microwave signal, non-uniformities within the cavity are minimized, thereby achieving a more uniform cure throughout the workpiece. A directional coupler is provided for detecting the direction of a signal and further directing the signal depending on the detected direction. A first power meter is provided for measuring the power delivered to the microwave furnace. A second power meter detects the magnitude of reflected power. The furnace cavity may be adapted to be used to cure materials defining a continuous sheet or which require compressive forces during curing. 15 figs.
NASA Astrophysics Data System (ADS)
Liu, Gaoyu; Lu, Kun; Zou, Donglin; Xie, Zhongliang; Rao, Zhushi; Ta, Na
2017-07-01
The control of the longitudinal pulsating force and the vibration it generates is very important for improving the stealth performance of a submarine. Magnetorheological elastomer (MRE) is a kind of intelligent composite material whose mechanical properties can be continuously, rapidly, and reversibly controlled by an external magnetic field. It can be used as a variable-stiffness component in the design of a semi-active dynamic vibration absorber (SDVA), which is one of the effective means of longitudinal vibration control. In this paper, an SDVA is designed based on the MRE's magnetically induced variable-stiffness characteristic. First, a mechanical model of the propulsion shaft system with the SDVA is proposed, theoretically discussed, and numerically validated. Then, the mechanical performance of the MRE under different magnetic fields is tested. In addition, the magnetic circuit and the overall structure of the SDVA are designed. Furthermore, electromagnetic and thermodynamic simulations are carried out to support the structural design. The frequency-shift property of the SDVA is found through dynamic simulations and validated by a frequency-shift experiment. Lastly, the vibration absorption capacity of the SDVA is investigated. The results show that the magnetorheological effect of the MRE and the frequency shift of the SDVA are obvious, and that the SDVA has acceptable vibration absorption capacity.
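The frequency-shift property follows from the absorber's undamped natural frequency, f = (1/2π)√(k/m): raising the MRE stiffness with the applied field retunes the absorber. A minimal sketch with hypothetical mass and stiffness values, not the paper's design data:

```python
import numpy as np

def natural_frequency(k, m):
    """Undamped natural frequency (Hz) of a mass-stiffness absorber."""
    return np.sqrt(k / m) / (2 * np.pi)

m = 5.0                                     # absorber mass, kg (hypothetical)
k_field = np.array([2.0e5, 2.6e5, 3.2e5])   # MRE stiffness vs. field, N/m (hypothetical)
print(natural_frequency(k_field, m))        # tuning frequency rises with the field
```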
Testing quantum contextuality of continuous-variable states
DOE Office of Scientific and Technical Information (OSTI.GOV)
McKeown, Gerard; Paternostro, Mauro; Paris, Matteo G. A.
2011-06-15
We investigate the violation of noncontextuality by a class of continuous-variable states, including variations of entangled coherent states and a two-mode continuous superposition of coherent states. We generalize the Kochen-Specker (KS) inequality discussed by Cabello [A. Cabello, Phys. Rev. Lett. 101, 210401 (2008)] by using effective bidimensional observables implemented through physical operations acting on continuous-variable states, in a way similar to an approach to the falsification of Bell-Clauser-Horne-Shimony-Holt inequalities put forward recently. We test for state-independent violation of KS inequalities under variable degrees of state entanglement and mixedness. We then demonstrate theoretically the violation of a KS inequality for any two-mode state by using pseudospin observables and a generalized quasiprobability function.
Lilienthal, S.; Klein, M.; Orbach, R.; Willner, I.; Remacle, F.
2017-01-01
The concentration of molecules can be changed by chemical reactions and thereby offer a continuous readout. Yet computer architecture is cast in textbooks in terms of binary valued, Boolean variables. To enable reactive chemical systems to compute we show how, using the Cox interpretation of probability theory, one can transcribe the equations of chemical kinetics as a sequence of coupled logic gates operating on continuous variables. It is discussed how the distinct chemical identity of a molecule allows us to create a common language for chemical kinetics and Boolean logic. Specifically, the logic AND operation is shown to be equivalent to a bimolecular process. The logic XOR operation represents chemical processes that take place concurrently. The values of the rate constants enter the logic scheme as inputs. By designing a reaction scheme with a feedback we endow the logic gates with a built in memory because their output then depends on the input and also on the present state of the system. Technically such a logic machine is an automaton. We report an experimental realization of three such coupled automata using a DNAzyme multilayer signaling cascade. A simple model verifies analytically that our experimental scheme provides an integrator generating a power series that is third order in time. The model identifies two parameters that govern the kinetics and shows how the initial concentrations of the substrates are the coefficients in the power series. PMID:28507669
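The gate correspondences can be caricatured numerically: with species presence encoded as normalized concentrations in [0, 1], a bimolecular rate term behaves as AND (appreciable only when both reactants are present), while two mutually exclusive concurrent channels sum to an XOR-like response. A toy sketch under those assumptions, not the DNAzyme chemistry itself:

```python
def gate_and(a, b, k=1.0):
    # bimolecular process: rate = k[A][B], nonzero only if both species present
    return k * a * b

def gate_xor(a, b):
    # two concurrent channels, each quenched by the other input
    return a * (1.0 - b) + b * (1.0 - a)

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, gate_and(a, b), gate_xor(a, b))  # reproduces the truth tables
```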
Hansen, John P
2003-01-01
Healthcare quality improvement professionals need to understand and use inferential statistics to interpret sample data from their organizations. In quality improvement and healthcare research studies, all the data from a population are often not available, so investigators take samples and make inferences about the population by using inferential statistics. This three-part series will give readers an understanding of the concepts of inferential statistics as well as the specific tools for calculating confidence intervals for samples of data. This article, Part 1, presents basic information about data, including a classification system that describes the four major types of variables: continuous quantitative variable, discrete quantitative variable, ordinal categorical variable (including the binomial variable), and nominal categorical variable. A histogram is a graph that displays the frequency distribution for a continuous variable. The article also demonstrates how to calculate the mean, median, standard deviation, and variance for a continuous variable.
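The summary statistics introduced in this first part are one-liners in practice; the sketch below computes them for a hypothetical continuous variable, along with the histogram counts that would be plotted:

```python
import numpy as np

values = np.array([4.2, 5.1, 4.8, 6.0, 5.5, 4.9, 5.3, 5.8])  # hypothetical data

print("mean:", values.mean())
print("median:", np.median(values))
print("variance:", values.var(ddof=1))   # sample variance (n - 1 denominator)
print("sd:", values.std(ddof=1))

counts, edges = np.histogram(values, bins=4)   # frequency distribution
print(counts, edges)
```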
Mathematical models of continuous flow electrophoresis: Electrophoresis technology
NASA Technical Reports Server (NTRS)
Saville, Dudley A.
1986-01-01
Two aspects of continuous flow electrophoresis were studied: (1) the structure of the flow field in continuous flow devices; and (2) the electrokinetic properties of suspended particles relevant to electrophoretic separations. Mathematical models were developed to describe flow structure and stability, with particular emphasis on effects due to buoyancy. To describe the fractionation of an arbitrary particulate sample by continuous flow electrophoresis, a general mathematical model was constructed. In this model, chamber dimensions, field strength, buffer composition, and other design variables can be altered at will to study their effects on resolution and throughput. All these mathematical models were implemented on a digital computer and the codes are available for general use. Experimental and theoretical work with particulate samples probed how particle mobility is related to buffer composition. It was found that ions on the surface of small particles are mobile, contrary to the widely accepted view. This influences particle mobility and suspension conductivity. A novel technique was used to measure the mobility of particles in concentrated suspensions.
Virtual continuity of measurable functions and its applications
NASA Astrophysics Data System (ADS)
Vershik, A. M.; Zatitskii, P. B.; Petrov, F. V.
2014-12-01
A classical theorem of Luzin states that a measurable function of one real variable is `almost' continuous. For measurable functions of several variables the analogous statement (continuity on a product of sets having almost full measure) does not hold in general. The search for a correct analogue of Luzin's theorem leads to a notion of virtually continuous functions of several variables. This apparently new notion implicitly appears in the statements of embedding theorems and trace theorems for Sobolev spaces. In fact it reveals the nature of such theorems as statements about virtual continuity. The authors' results imply that under the conditions of Sobolev theorems there is a well-defined integration of a function with respect to a wide class of singular measures, including measures concentrated on submanifolds. The notion of virtual continuity is also used for the classification of measurable functions of several variables and in some questions on dynamical systems, the theory of polymorphisms, and bistochastic measures. In this paper the necessary definitions and properties of admissible metrics are recalled, several definitions of virtual continuity are given, and some applications are discussed. Bibliography: 24 titles.
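For reference, the classical one-variable statement reads as follows (standard formulation; λ denotes Lebesgue measure):

```latex
\textbf{Luzin's theorem.} Let $f\colon [a,b]\to\mathbb{R}$ be measurable and let
$\varepsilon>0$. Then there exists a compact set $K\subset[a,b]$ with
$\lambda([a,b]\setminus K)<\varepsilon$ such that the restriction $f|_{K}$ is
continuous.
```

It is this statement whose naive product-set analogue fails in several variables, motivating the notion of virtual continuity discussed above.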
A novel approach to modeling atmospheric convection
NASA Astrophysics Data System (ADS)
Goodman, A.
2016-12-01
The inadequate representation of clouds continues to be a large source of uncertainty in the projections from global climate models (GCMs). With continuous advances in computational power, however, the ability for GCMs to explicitly resolve cumulus convection will soon be realized. For this purpose, Jung and Arakawa (2008) proposed the Vector Vorticity Model (VVM), in which vorticity is the predicted variable instead of momentum. This has the advantage of eliminating the pressure gradient force within the framework of an anelastic system. However, the VVM was designed for use on a planar quadrilateral grid, making it unsuitable for implementation in global models discretized on the sphere. Here we have proposed a modification to the VVM where instead the curl of the horizontal vorticity is the primary predicted variable. This allows us to maintain the benefits of the original VVM while working within the constraints of a non-quadrilateral mesh. We found that our proposed model produced results from a warm bubble simulation that were consistent with the VVM. Further improvements that can be made to the VVM are also discussed.
Evaluation of vertical profiles to design continuous descent approach procedure
NASA Astrophysics Data System (ADS)
Pradeep, Priyank
The current research focuses on the predictability, variability, and operational feasibility aspects of the Continuous Descent Approach (CDA), which is among the key concepts of the Next Generation Air Transportation System (NextGen). The idle-thrust CDA is a fuel-economical, noise and emission abatement procedure, but it requires increased separation to accommodate variability and uncertainties in the vertical and speed profiles of arriving aircraft. Although a considerable amount of research has been devoted to estimating the potential benefits of the CDA, only a few studies have addressed its predictability, variability, and operational feasibility. The analytical equations derived in this research using flight dynamics and the Base of Aircraft Data (BADA) Total Energy Model (TEM) give insight into the dependency of the vertical profile of a CDA on various factors such as wind speed and gradient, weight, aircraft type and configuration, thrust settings, atmospheric factors (deviation from ISA (DISA), pressure, and density of the air), and the descent speed profile. Applying the derived equations to the idle-thrust CDA gives insight into the sensitivity of its vertical profile to multiple factors. This suggests that a fixed geometric flight path angle (FPA) CDA has a higher degree of predictability and less variability, at the cost of non-idle, low-thrust engine settings; with an optimized design, this penalty can be largely minimized. CDA simulations were performed using the Future ATM Concepts Evaluation Tool (FACET) based on radar-track and aircraft-type data (BADA) of real air traffic to some of the busiest airports in the USA (ATL, SFO, and the New York Metroplex (JFK, EWR, and LGA)). Statistical analysis of the vertical profiles shows that (1) mean geometric FPAs derived from the various simulated vertical profiles are consistently shallower than the 3° glideslope angle, and (2) there is a high level of variability in the vertical profiles of the idle-thrust CDA even in the absence of uncertainties in external factors. Analysis from an operational feasibility perspective suggests that two key features of the performance-based Flight Management System (FMS), required time of arrival (RTA) and a geometric descent path, would help reduce the unpredictability associated with the arrival time and vertical profile of aircraft guided by the FMS coupled with autopilot (AP) and autothrottle (AT). The statistical analysis also suggests that, for procedure design, window-type, 'AT or above', and 'AT or below' altitude and FPA constraints are more realistic and useful than the obsolete 'AT'-type altitude constraint.
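The BADA Total Energy Model underlying these derivations equates thrust-minus-drag work to the rate of change of potential plus kinetic energy; in the usual notation:

```latex
(T - D)\, V_{TAS} \;=\; m\,g\,\frac{dh}{dt} \;+\; m\,V_{TAS}\,\frac{dV_{TAS}}{dt}
```

where T is thrust, D drag, m aircraft mass, h altitude, and V_TAS true airspeed. For an idle-thrust CDA the left-hand side is small and negative, which is one way to see why the realized FPA is so sensitive to wind, weight, and the descent speed profile.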
Fuel economy screening study of advanced automotive gas turbine engines
NASA Technical Reports Server (NTRS)
Klann, J. L.
1980-01-01
Fuel economy potentials were calculated and compared among ten turbomachinery configurations. All gas turbine engines were evaluated with a continuously variable transmission in a 1978 compact car. A reference fuel economy was calculated for the car with its conventional spark ignition piston engine and three speed automatic transmission. Two promising engine/transmission combinations, using gasoline, had 55 to 60 percent gains over the reference fuel economy. Fuel economy sensitivities to engine design parameter changes were also calculated for these two combinations.
Simulation of an enzyme-based glucose sensor
NASA Astrophysics Data System (ADS)
Sha, Xianzheng; Jablecki, Michael; Gough, David A.
2001-09-01
An important biosensor application is the continuous monitoring of blood or tissue fluid glucose concentration in people with diabetes. Our research focuses on the development of a glucose sensor based on potentiostatic oxygen electrodes and immobilized glucose oxidase for long-term application as an implant in tissues. As the sensor signal depends on many design variables, a trial-and-error approach to sensor optimization can be time-consuming. Here, the properties of an implantable glucose sensor are optimized by a systematic computational simulation approach.
Turbine Engine Control Synthesis. Volume 1. Optimal Controller Synthesis and Demonstration
1975-03-01
Nomenclature (continued). Symbols and descriptions:
M: Matrix (of Table 12)
M: Mach number
N: Rotational speed, rpm
N': Nonlinear rotational speed, rpm
P: Power lever…
P: Pressure, N/m²; lbf/ft²
PLA: Power lever angle
PR = PT3/PT2: Pressure ratio
P: Power, ft-lbf/sec
Q: Matrix (of Table 30)
R: Universal gas constant, 53…
Subscripts:
i: function index, i = 1, 2, 3, …
in: Inlet
n: Stage number designation
out: Outlet
p: Variable associated with particle
s: Static condition
se: Static condition…
Augmented Computer Mouse Would Measure Applied Force
NASA Technical Reports Server (NTRS)
Li, Larry C. H.
1993-01-01
Proposed computer mouse measures force of contact applied by user. Adds another dimension to the two-dimensional position-measuring capability of a conventional computer mouse; the force measurement is designed to represent any desired continuously variable function of time and position, such as control force, acceleration, velocity, or position along an axis perpendicular to the computer video display. The proposed mouse enhances the sense of realism and intuition in the interaction between operator and computer. Useful in such applications as three-dimensional computer graphics, computer games, and mathematical modeling of dynamics.
NASA Technical Reports Server (NTRS)
Giles, G. L.; Rogers, J. L., Jr.
1982-01-01
The implementation includes a generalized method for specifying element cross-sectional dimensions as design variables that can be used in analytically calculating derivatives of output quantities from static stress, vibration, and buckling analyses for both membrane and bending elements. Limited sample results for static displacements and stresses are presented to indicate the advantages of analytically calculating response derivatives compared to finite difference methods. Continuing developments to implement these procedures into an enhanced version of the system are also discussed.
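The advantage mentioned, analytic derivatives versus finite differencing, is easy to illustrate on a single cross-sectional design variable (a generic sketch, not the system's code): the analytic derivative is exact and costs one evaluation, while a central difference requires two extra analyses and a step-size choice.

```python
P = 1000.0                       # applied load (hypothetical)

def stress(area):                # output quantity from a "static stress analysis"
    return P / area

def dstress_analytic(area):      # exact derivative w.r.t. the design variable
    return -P / area**2

def dstress_fd(area, h=1e-4):    # central finite difference: two extra analyses
    return (stress(area + h) - stress(area - h)) / (2 * h)

a = 2.0
print(dstress_analytic(a), dstress_fd(a))   # agree closely; FD accuracy depends on h
```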
Experimental Observations of Vortex Ring Interaction with the Fluid Adjacent to a Surface.
1983-10-01
… The water enters the inlet tank from a distribution manifold pipe and rises vertically through a 15 cm thick plastic sponge. The flow then passes… Parts exposed to water are made from PVC plastic to resist corrosion. The generator was designed to have interchangeable parts which allow the generation of vortex rings over a range of characteristics. The motor speed is continuously variable up to a speed of 7400 rpm. Cams with stroke lengths of 0.64…
Quality evaluation on an e-learning system in continuing professional education of nurses.
Lin, I-Chun; Chien, Yu-Mei; Chang, I-Chiu
2006-01-01
Maintaining high quality in Web-based learning is a powerful means of increasing the overall efficiency and effectiveness of distance learning. Many studies have evaluated Web-based learning, but seldom from an information systems (IS) perspective. This study applied the well-known IS Success Model in measuring the quality of a Web-based learning system, using a Web-based questionnaire for data collection. One hundred fifty-four nurses participated in the survey. Based on confirmatory factor analysis, the variables of the research model were found fit for measuring the quality of a Web-based learning system. As Web-based education continues to grow worldwide, the results of this study may assist the system adopter (hospital executives), the learner (nurses), and the system designers in making reasonable and informed judgments with regard to the quality of a Web-based learning system in continuing professional education.
A new polytopic approach for the unknown input functional observer design
NASA Astrophysics Data System (ADS)
Bezzaoucha, Souad; Voos, Holger; Darouach, Mohamed
2018-03-01
In this paper, a constructive procedure to design Functional Unknown Input Observers for nonlinear continuous-time systems is proposed under the Polytopic Takagi-Sugeno framework. An equivalent representation for the nonlinear model is achieved using the sector nonlinearity transformation. Applying Lyapunov theory with a prescribed disturbance-attenuation level, linear matrix inequality (LMI) conditions are deduced and solved for feasibility to obtain the observer design matrices. To cope with the effect of unknown inputs, the classical approach of decoupling the unknown input, as in the linear case, is used. Both algebraic and solver-based solutions are proposed (relaxed conditions). Necessary and sufficient conditions for the existence of the functional polytopic observer are given. For both approaches, the general and particular cases (measurable premise variables, full state estimation with full- and reduced-order cases) are considered, and it is shown that the proposed conditions correspond to those presented for the standard linear case. To illustrate the proposed theoretical results, detailed numerical simulations are presented for a quadrotor aerial robot landing and a wastewater treatment plant. Both systems are highly nonlinear and are represented in a T-S polytopic form with unmeasurable premise variables and unknown inputs.
Managerial process improvement: a lean approach to eliminating medication delivery errors.
Hussain, Aftab; Stewart, LaShonda M; Rivers, Patrick A; Munchus, George
2015-01-01
Statistical evidence shows that medication errors are a major cause of injuries that concern all health care organizations. Despite all the efforts to improve the quality of care, the lack of understanding and the inability of management to design a robust system that will strategically target those factors is a major cause of distress. The paper aims to discuss these issues. Achieving optimum organizational performance requires two key variables: work process factors and human performance factors. The approach is that healthcare administrators must take into account both variables in designing a strategy to reduce medication errors. However, strategies that will combat such phenomena require that managers and administrators understand the key factors causing medication delivery errors. The authors recommend that healthcare organizations implement the Toyota Production System (TPS) combined with human performance improvement (HPI) methodologies to eliminate medication delivery errors in hospitals. Despite all the efforts to improve the quality of care, there continues to be a lack of understanding and an inability of management to design a robust system that will strategically target those factors associated with medication errors. This paper proposes a solution to an ambiguous workflow process using the TPS combined with the HPI system.
A Bayesian Semiparametric Latent Variable Model for Mixed Responses
ERIC Educational Resources Information Center
Fahrmeir, Ludwig; Raach, Alexander
2007-01-01
In this paper we introduce a latent variable model (LVM) for mixed ordinal and continuous responses, where covariate effects on the continuous latent variables are modelled through a flexible semiparametric Gaussian regression model. We extend existing LVMs with the usual linear covariate effects by including nonparametric components for nonlinear…
Design, experimentation, and modeling of a novel continuous biodrying process
NASA Astrophysics Data System (ADS)
Navaee-Ardeh, Shahram
Massive production of sludge in the pulp and paper industry has made effective sludge management an increasingly critical issue for the industry, due to high landfill and transportation costs and complex regulatory frameworks for options such as sludge landspreading and composting. Sludge dewatering challenges are exacerbated at many mills due to improved in-plant fiber recovery coupled with increased production of secondary sludge, leading to a mixed sludge with a high proportion of biological matter which is difficult to dewater. In this thesis, a novel continuous biodrying reactor was designed and developed for drying pulp and paper mixed sludge to an economic dry solids level, so that the dried sludge can be economically and safely combusted in a biomass boiler for energy recovery. In all experimental runs the economic dry solids level was achieved, proving the process successful. In the biodrying process, in addition to the forced aeration, the drying rates are enhanced by biological heat generated through the microbial activity of mesophilic and thermophilic microorganisms naturally present in the porous matrix of mixed sludge. This makes the biodrying process more attractive than conventional drying techniques because the reactor is self-heating. The reactor is divided into four nominal compartments and the mixed sludge dries as it moves downward in the reactor. The residence times were 4-8 days, which is 2-3 times shorter than the residence times achieved in a batch biodrying reactor previously studied by our research group for mixed sludge drying. A process variable analysis was performed to determine the key variable(s) in the continuous biodrying reactor. Several variables were investigated, namely: type of biomass feed, pH of biomass, nutrition level (C/N ratio), residence time, recycle ratio of biodried sludge, and the outlet relative humidity profile along the reactor height. The key variables identified in the continuous biodrying reactor were the type of biomass feed and the outlet relative humidity profile. The biomass feed is mill-specific, and since a single mill was studied here, the nutrition level of the biomass feed was found adequate for the microbial activity; hence the type of biomass was treated as a fixed parameter. The influence of the outlet relative humidity profile on the overall performance and the complexity index of the continuous biodrying reactor was then investigated. The best biodrying efficiency was achieved with an outlet relative humidity profile that controls the removal of unbound water at the wet-bulb temperature in the 1st and 2nd compartments of the reactor, and the removal of bound water at the dry-bulb temperature in the 3rd and 4th compartments. Through a systematic modeling approach, a 2-D model was developed to describe the transport phenomena in the continuous biodrying reactor. The results of the 2-D model were in satisfactory agreement with the experimental data. It was found that about 30% w/w of the total water removal (drying rate) takes place in the 1st and 2nd compartments, mainly under a convection-dominated mechanism, whereas about 70% w/w of the total water removal takes place in the 3rd and 4th compartments, where a bioheat-diffusion-dominated mechanism controls the transport phenomena. The 2-D model was found to be a more appropriate tool for estimating the total water removal rate (drying rate) in the continuous biodrying reactor than the 1-D model.
A dimensionless analysis was performed on the 2-D model, establishing preliminary criteria for the scale-up of the continuous biodrying process. Finally, a techno-economic assessment revealed great potential for the implementation of the biodrying process in Canadian pulp and paper mills. The techno-economic results were compared to other competing drying technologies, showing that the continuous biodrying process yields significant economic benefits and has great potential to address the current industrial problems associated with sludge management.
NASA Astrophysics Data System (ADS)
Gan, L.; Yang, F.; Shi, Y. F.; He, H. L.
2017-11-01
Many applications, such as the rapidly developing electric vehicle, demand to know how much continuous and instantaneous power a battery can provide. Given their large-scale application, lithium-ion batteries are taken as the research object here. Experiments were designed to obtain the lithium-ion battery parameters and ensure the relevance and reliability of the estimation. To evaluate the continuous and instantaneous load capability of a battery, called its state-of-function (SOF), this paper proposes a fuzzy logic algorithm based on battery state-of-charge (SOC), state-of-health (SOH), and C-rate parameters. Simulation and experimental results indicate that the proposed approach is suitable for battery SOF estimation.
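A minimal version of such a fuzzy SOF estimator can be sketched with triangular and shoulder memberships plus weighted-rule defuzzification; the membership breakpoints and the rule table below are illustrative assumptions, not the paper's calibrated values.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership peaking at b (a < b < c)."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def ramp_up(x, a, b):
    return float(np.clip((x - a) / (b - a), 0.0, 1.0))

def ramp_down(x, a, b):
    return float(np.clip((b - x) / (b - a), 0.0, 1.0))

def sof(soc, soh, c_rate):
    # membership grades (breakpoints are illustrative assumptions)
    soc_low, soc_mid, soc_high = (ramp_down(soc, 0.2, 0.5),
                                  tri(soc, 0.2, 0.5, 0.8),
                                  ramp_up(soc, 0.5, 0.8))
    soh_weak, soh_good = ramp_down(soh, 0.6, 0.9), ramp_up(soh, 0.6, 0.9)
    rate_soft, rate_hard = ramp_down(c_rate, 0.5, 2.0), ramp_up(c_rate, 0.5, 2.0)

    # rule firing strengths -> SOF consequents (0 = no load capability, 1 = full)
    rules = [
        (min(soc_high, soh_good, rate_soft), 1.0),
        (min(soc_mid, soh_good, rate_soft), 0.7),
        (min(soc_mid, soh_weak, rate_hard), 0.3),
        (min(soc_low, soh_weak, rate_hard), 0.0),
    ]
    w = sum(r for r, _ in rules)
    return sum(r * out for r, out in rules) / w if w > 0 else 0.0

print(round(sof(soc=0.8, soh=0.95, c_rate=0.5), 2))  # near-full capability
```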
An ABS control logic based on wheel force measurement
NASA Astrophysics Data System (ADS)
Capra, D.; Galvagno, E.; Ondrak, V.; van Leeuwen, B.; Vigliani, A.
2012-12-01
The paper presents an anti-lock braking system (ABS) control logic based on the measurement of the longitudinal forces at the hub bearings. The availability of force information allows the design of a logic that does not rely on the estimation of the tyre-road friction coefficient, since it continuously tries to exploit the maximum longitudinal tyre force. The logic is designed by means of computer simulation and then tested on a specific hardware-in-the-loop test bench: the experimental results confirm that measured wheel force can lead to a significant improvement in ABS performance in terms of stopping distance, also on roads with a variable friction coefficient.
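Because the longitudinal force is measured rather than estimated, the logic can hill-climb toward the tyre force peak directly. A highly simplified perturb-and-observe sketch of that idea, not the authors' controller:

```python
def abs_pressure_update(f_now, f_prev, dp_prev, dp_mag=0.1):
    """One update of the brake-pressure change (bar), driven by the
    measured longitudinal wheel force (N)."""
    direction = 1.0 if dp_prev >= 0 else -1.0
    if f_now < f_prev:          # force dropped: the force peak was crossed,
        direction = -direction  # so reverse the pressure change
    return dp_mag * direction

# at each control cycle (hypothetical usage):
#   dp = abs_pressure_update(fx_measured, fx_last, dp)
#   p_brake += dp
```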
NASA Astrophysics Data System (ADS)
Blume, T.; Hassler, S. K.; Weiler, M.
2017-12-01
Hydrological science still struggles with the fact that while we wish for spatially continuous images or movies of state variables and fluxes at the landscape scale, most of our direct measurements are point measurements. To date, regional measurements resolving landscape-scale patterns can only be obtained by remote sensing methods, with the common drawbacks that they capture only conditions near the earth's surface and that temporal resolution is generally low. However, distributed monitoring networks at the landscape scale provide the opportunity for detailed and time-continuous pattern exploration. Even though measurements are spatially discontinuous, the large number of sampling points and experimental setups specifically designed for the purpose of landscape pattern investigation open up new avenues of regional hydrological analysis. The CAOS hydrological observatory in Luxembourg offers a unique setup to investigate questions of temporal stability, pattern evolution, and persistence of certain states. The experimental setup consists of 45 sensor clusters covering three different geologies, two land use classes, five different landscape positions, and contrasting aspects. At each of these sensor clusters, three soil moisture/soil temperature profiles, basic climate variables, sapflow, shallow groundwater, and stream water levels have been measured continuously for the past 4 years. We will focus on characteristic landscape patterns of various hydrological state variables and fluxes, studying their temporal stability on the one hand and the dependence of patterns on hydrological states (e.g., wet vs. dry) on the other. This is extended to time-continuous pattern analysis based on time series of spatial rank correlation coefficients. Analyses focus on the absolute values of soil moisture, soil temperature, groundwater levels, and sapflow, but also investigate the spatial patterns of the daily changes of these variables. The analysis aims at identifying hydrologic signatures of the processes or landscape characteristics acting as major controls. While groundwater, soil water, and transpiration are closely linked by the water cycle, they are controlled by different processes, and we expect this to be reflected in interlinked but not necessarily congruent patterns and responses.
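The time-continuous pattern analysis mentioned, tracking spatial rank correlations between snapshots, reduces to a Spearman coefficient per time step. A sketch with a hypothetical clusters-by-days matrix of soil moisture, not the CAOS data:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
pattern = rng.normal(size=45)                          # persistent spatial pattern, 45 clusters
sm = pattern[:, None] + rng.normal(0, 0.5, (45, 365))  # daily soil moisture snapshots

reference = sm[:, 0]
rho = np.array([spearmanr(reference, sm[:, t]).correlation
                for t in range(sm.shape[1])])
# high, stable rho over time indicates a temporally stable spatial pattern
print(rho.mean().round(2))
```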
NASA Astrophysics Data System (ADS)
Sleeter, B. M.; Daniel, C.; Frid, L.; Fortin, M. J.
2016-12-01
State-and-transition simulation models (STSMs) provide a general approach for incorporating uncertainty into forecasts of landscape change. Using a Monte Carlo approach, STSMs generate spatially-explicit projections of the state of a landscape based upon probabilistic transitions defined between states. While STSMs are based on the basic principles of Markov chains, they have additional properties that make them applicable to a wide range of questions and types of landscapes. A current limitation of STSMs is that they are only able to track the fate of discrete state variables, such as land use/land cover (LULC) classes. There are some landscape modelling questions, however, for which continuous state variables - for example carbon biomass - are also required. Here we present a new approach for integrating continuous state variables into spatially-explicit STSMs. Specifically we allow any number of continuous state variables to be defined for each spatial cell in our simulations; the value of each continuous variable is then simulated forward in discrete time as a stochastic process based upon defined rates of change between variables. These rates can be defined as a function of the realized states and transitions of each cell in the STSM, thus providing a connection between the continuous variables and the dynamics of the landscape. We demonstrate this new approach by (1) developing a simple IPCC Tier 3 compliant model of ecosystem carbon biomass, where the continuous state variables are defined as terrestrial carbon biomass pools and the rates of change as carbon fluxes between pools, and (2) integrating this carbon model with an existing LULC change model for the state of Hawaii, USA.
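A stripped-down version of the proposed coupling, discrete per-cell state transitions plus a continuous carbon pool whose flux rates depend on the realized state, might look as follows (all states, transition probabilities, and flux rates are hypothetical, not the Hawaii model's values):

```python
import numpy as np

rng = np.random.default_rng(7)
FOREST, GRASS = 0, 1
P = np.array([[0.98, 0.02],          # annual transition probabilities
              [0.05, 0.95]])
growth = {FOREST: 2.0, GRASS: 0.5}   # carbon uptake (t C/ha/yr), hypothetical
loss_on_change = 0.3                 # fraction of biomass lost on a transition

n_cells, years = 1000, 50
state = np.zeros(n_cells, dtype=int)   # start all cells as forest
carbon = np.full(n_cells, 50.0)        # continuous state variable (t C/ha)

for _ in range(years):
    new_state = np.array([rng.choice(2, p=P[s]) for s in state])
    changed = new_state != state
    carbon[changed] *= 1 - loss_on_change          # transition-driven flux
    carbon += np.vectorize(growth.get)(new_state)  # state-dependent growth
    state = new_state

print(f"mean biomass after {years} yr: {carbon.mean():.1f} t C/ha")
```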
Advances in Residential Design Related to the Influence of Geomagnetism.
Glaria, Francisco; Arnedo, Israel; Sánchez-Ostiz, Ana
2018-02-23
Since the origin of the Modern Movement, there has been a basic commitment to improving housing conditions and the well-being of occupants, especially given the prediction that 2/3 of humanity will reside in cities by 2050. Moreover, a compact city model with tall buildings and urban densification at this scale will be generated. Continuous constructive and technological advances have developed solid foundations for safety, energy efficiency, habitability, and sustainability in housing design. However, studies on improving the quality of life in these areas continue to be a challenge for architects and engineers. This paper seeks to contribute health-related information to the study of residential design, specifically the influence of the geomagnetic field on its occupants. After compiling information on the effects of geomagnetic fields from different medical studies over 23 years, a case study of a 16-story high-rise building is presented, with the goal of proposing architectural design recommendations for long-term occupation of the same place. The purpose of the present work is three-fold: first, to characterize the geomagnetic field variability of buildings; second, to identify the causes and possible related mechanisms; and third, to define architectural criteria on the arrangement of uses and constructive elements for housing.
Automation design and crew coordination
NASA Technical Reports Server (NTRS)
Segal, Leon D.
1993-01-01
Advances in technology have greatly impacted the appearance of the modern aircraft cockpit, where once one would see rows upon rows of instruments. The introduction of automation has greatly altered the demands on the pilots and the dynamics of aircrew task performance. While engineers and designers continue to implement the latest technological innovations in the cockpit, claiming higher reliability and decreased workload, a large percentage of aircraft accidents are still attributed to human error. Rather than being the main instigators of accidents, operators tend to be the inheritors of system defects created by poor design, incorrect installation, faulty maintenance, and bad management decisions. This paper looks at some of the variables that need to be considered if we are to eliminate at least one of these inheritances: poor design. Specifically, this paper describes the first part of a comprehensive study aimed at identifying the effects of automation on crew coordination.
Does Glycemic Variability Impact Mood and Quality of Life?
Quinn, Lauretta; Byrn, Mary; Ferrans, Carol; Miller, Michael; Strange, Poul
2012-01-01
Abstract Background Diabetes is a chronic condition that significantly impacts quality of life. Poor glycemic control is associated with more diabetes complications, depression, and worse quality of life. The impact of glycemic variability on mood and quality of life has not been studied. Methods A descriptive exploratory design was used. Twenty-three women with type 2 diabetes wore a continuous glucose monitoring system for 72 h and completed a series of questionnaires. Measurements included (1) glycemic control shown by glycated hemoglobin and 24-h mean glucose, (2) glycemic variability shown by 24-h SD of the glucose readings, continuous overall net glycemic action (CONGA), and Fourier statistical models to generate smoothed curves to assess rate of change defined as “energy,” and (3) mood (depression, anxiety, anger) and quality of life by questionnaires. Results Women with diabetes and co-morbid depression had higher anxiety, more anger, and lower quality of life than those without depression. Certain glycemic variability measures were associated with mood and quality of life. The 24-h SD of the glucose readings and the CONGA measures were significantly associated with health-related quality of life after adjusting for age and weight. Fourier models indicated that certain energy components were significantly associated with depression, trait anxiety, and overall quality of life. Finally, subjects with higher trait anxiety tended to have steeper glucose excursions. Conclusions Data suggest that greater glycemic variability may be associated with lower quality of life and negative moods. Implications include replication of the study in a larger sample for the assessment of blood glucose fluctuations as they impact mood and quality of life. PMID:22324383
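The CONGA metric named above is conventionally computed as the standard deviation of differences between each glucose reading and the reading taken n hours earlier. A minimal Python sketch on synthetic continuous-glucose-monitoring data (all values are illustrative, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic 72-h CGM trace sampled every 5 min (illustrative, not study data)
t = np.arange(0, 72 * 60, 5)                                   # time in minutes
glucose = 140 + 30 * np.sin(2 * np.pi * t / (24 * 60)) + rng.normal(0, 10, t.size)

# 24-h SD of the readings (first day only)
sd_24h = glucose[t < 24 * 60].std(ddof=1)

def conga(g, n_hours, step_min=5):
    """CONGA(n): SD of differences between each reading and the
    reading taken n hours earlier (assumed conventional definition)."""
    lag = int(n_hours * 60 / step_min)
    return (g[lag:] - g[:-lag]).std(ddof=1)

print(f"24-h SD = {sd_24h:.1f} mg/dL, CONGA-1 = {conga(glucose, 1):.1f} mg/dL")
```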
Sacco, Ralph L.; Khatri, Minesh; Rundek, Tatjana; Xu, Qiang; Gardener, Hannah; Boden-Albala, Bernadette; Di Tullio, Marco R.; Homma, Shunichi; Elkind, Mitchell SV; Paik, Myunghee C
2010-01-01
Objective To improve global vascular risk prediction with behavioral and anthropometric factors. Background Few cardiovascular risk models are designed to predict the global vascular risk of MI, stroke, or vascular death in multi-ethnic individuals, and existing schemes do not fully include behavioral risk factors. Methods A randomly derived, population-based, prospective cohort of 2737 community participants free of stroke and coronary artery disease were followed annually for a median of 9.0 years in the Northern Manhattan Study (mean age 69 years; 63.2% women; 52.7% Hispanic, 24.9% African-American, 19.9% white). A global vascular risk score (GVRS) predictive of stroke, myocardial infarction, or vascular death was developed by adding variables to the traditional Framingham cardiovascular variables based on the likelihood ratio criterion. Model utility was assessed through receiver operating characteristics, calibration, and effect on reclassification of subjects. Results Variables which significantly added to the traditional Framingham profile included waist circumference, alcohol consumption, and physical activity. Continuous measures for blood pressure and fasting blood sugar were used instead of hypertension and diabetes. Ten-year event-free probabilities were 0.95 for the first quartile of GVRS, 0.89 for the second quartile, 0.79 for the third quartile, and 0.56 for the fourth quartile. The addition of behavioral factors in our model improved prediction of 10-year event rates compared to a model restricted to the traditional variables. Conclusion A global vascular risk score that combines traditional, behavioral, and anthropometric risk factors, uses continuous variables for physiological parameters, and is applicable to non-white subjects could improve primary prevention strategies. PMID:19958966
McCarthy, Bridie; O'Donovan, Moira; Twomey, Angela
2008-02-01
Despite wide agreement about the importance of effective communication in nursing, there is continuing evidence of the need for nurses to improve their communication skills. Consequently, there is a growing demand for more therapeutic and person-centred communication courses. Studies on communication education reveal considerable variability in the design and operationalisation of these programmes. Additionally, the literature highlights that nurse educators are continually challenged with developing and implementing these programmes. Communication skills are generally taught in years one and two of undergraduate nursing degree programmes, a stage when students have minimal contact with patients and clients. We suggest that a communication skills module should be included in the final year of all undergraduate nursing programmes. With an array of clinical experiences to draw from, final year nursing students are better placed to apply the skills of effective communication in practice. In this paper, we present the design, implementation and evaluation of an advanced communication skills module undertaken by fourth year undergraduate nursing students completing a Bachelor of Science (BSc) nursing degree programme at one university in the Republic of Ireland.
General implementation of arbitrary nonlinear quadrature phase gates
NASA Astrophysics Data System (ADS)
Marek, Petr; Filip, Radim; Ogawa, Hisashi; Sakaguchi, Atsushi; Takeda, Shuntaro; Yoshikawa, Jun-ichi; Furusawa, Akira
2018-02-01
We propose a general methodology for deterministic single-mode quantum interactions that nonlinearly modify a single quadrature variable of a continuous-variable system. The methodology is based on linear coupling of the system to ancillary systems that are subsequently measured by quadrature detectors. The nonlinear interaction is obtained by using the data from the quadrature detection for dynamical manipulation of the coupling parameters. This measurement-induced methodology enables direct realization of arbitrary nonlinear quadrature interactions without the need to construct them from the lowest-order gates. Such nonlinear interactions are crucial for more practical and efficient manipulation of continuous quadrature variables as well as of qubits encoded in continuous-variable systems.
Optimality of Gaussian attacks in continuous-variable quantum cryptography.
Navascués, Miguel; Grosshans, Frédéric; Acín, Antonio
2006-11-10
We analyze the asymptotic security of the family of Gaussian-modulated quantum key distribution protocols for continuous-variable systems. We prove that the Gaussian unitary attack is optimal for all the considered bounds on the key rate when the first and second moments of the canonical variables involved are known by the honest parties.
A Primer on Logistic Regression.
ERIC Educational Resources Information Center
Woldbeck, Tanya
This paper introduces logistic regression as a viable alternative when the researcher is faced with variables that are not continuous. If one is to use simple regression, the dependent variable must be measured on a continuous scale. In the behavioral sciences, it may not always be appropriate or possible to have a measured dependent variable on a…
Continuous-variable quantum network coding for coherent states
NASA Astrophysics Data System (ADS)
Shang, Tao; Li, Ke; Liu, Jian-wei
2017-04-01
As far as the spectral characteristic of quantum information is concerned, the existing quantum network coding schemes can be regarded as discrete-variable quantum network coding schemes. Considering the practical advantage of continuous variables, in this paper we explore two feasible continuous-variable quantum network coding (CVQNC) schemes. Basic operations and CVQNC schemes are both provided. The first scheme is based on Gaussian cloning and ADD/SUB operators and can transmit two coherent states across the network with a fidelity of 1/2, while the second scheme utilizes continuous-variable quantum teleportation and can transmit two coherent states perfectly. By encoding classical information on quantum states, quantum network coding schemes can be utilized to transmit classical information. Scheme analysis shows that, compared with the discrete-variable paradigms, the proposed CVQNC schemes provide better network throughput from the viewpoint of classical information transmission. By modulating the amplitude and phase quadratures of coherent states with classical characters, the first scheme and the second scheme can transmit 4 log2 N and 2 log2 N bits of information in a single network use, respectively.
Kuluski, Kerry; Bechsgaard, Gitte; Ridgway, Jennifer; Katz, Joel
2016-01-01
Introduction. The purpose of this study was to evaluate a specialized yoga intervention for inpatients in a rehabilitation and complex continuing care hospital. Design. Single-cohort repeated measures design. Methods. Participants (N = 10) admitted to a rehabilitation and complex continuing care hospital were recruited to participate in a 50–60 min Hatha Yoga class (modified for wheelchair users/seated position) once a week for eight weeks, with assigned homework practice. Questionnaires on pain (pain, pain interference, and pain catastrophizing), psychological variables (depression, anxiety, and experiences with injustice), mindfulness, self-compassion, and spiritual well-being were collected at three intervals: pre-, mid-, and post-intervention. Results. Repeated measures ANOVAs revealed a significant main effect of time indicating improvements over the course of the yoga program on the (1) anxiety subscale of the Hospital Anxiety and Depression Scale, F(2,18) = 4.74, p < .05, and ηp² = .35, (2) Self-Compassion Scale-Short Form, F(2,18) = 3.71, p < .05, and ηp² = .29, and (3) Magnification subscale of the Pain Catastrophizing Scale, F(2,18) = 3.66, p < .05, and ηp² = .29. Discussion. The results suggest that an 8-week Hatha Yoga program improves pain-related factors and psychological experiences in individuals admitted to a rehabilitation and complex continuing care hospital. PMID:28115969
Improving the use of environmental diversity as a surrogate for species representation.
Albuquerque, Fabio; Beier, Paul
2018-01-01
The continuous p-median approach to environmental diversity (ED) is a reliable way to identify sites that efficiently represent species. A recently developed maximum dispersion (maxdisp) approach to ED is computationally simpler, does not require the user to reduce environmental space to two dimensions, and performed better than continuous p-median for datasets of South African animals. We tested whether maxdisp performs as well as continuous p-median for 12 datasets that included plants and other continents, and whether particular types of environmental variables produced consistently better models of ED. We selected 12 species inventories and atlases to span a broad range of taxa (plants, birds, mammals, reptiles, and amphibians), spatial extents, and resolutions. For each dataset, we used continuous p-median ED and maxdisp ED in combination with five sets of environmental variables (five combinations of temperature, precipitation, insolation, NDVI, and topographic variables) to select environmentally diverse sites. We used the species accumulation index (SAI) to evaluate the efficiency of ED in representing species for each approach and set of environmental variables. Maxdisp ED represented species better than continuous p-median ED in five of 12 biodiversity datasets, and about the same for the other seven biodiversity datasets. Efficiency of ED also varied with type of variables used to define environmental space, but no particular combination of variables consistently performed best. We conclude that maxdisp ED performs at least as well as continuous p-median ED, and has the advantage of faster and simpler computation. Surprisingly, using all 38 environmental variables was not consistently better than using subsets of variables, nor did any subset emerge as consistently best or worst; further work is needed to identify the best variables to define environmental space. Results can help ecologists and conservationists select sites for species representation and assist in conservation planning.
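The maxdisp idea admits a simple greedy implementation: repeatedly pick the site farthest, in standardized environmental space, from all sites already selected. A minimal sketch of that strategy (illustrative only, not the authors' code):

```python
import numpy as np

def maxdisp_select(env, k, seed=0):
    """Greedy max-dispersion site selection in environmental space.

    env: (n_sites, n_vars) array of environmental variables
    k:   number of sites to select
    """
    rng = np.random.default_rng(seed)
    # Standardize so every environmental variable carries equal weight
    z = (env - env.mean(axis=0)) / env.std(axis=0)
    chosen = [int(rng.integers(len(z)))]          # arbitrary starting site
    for _ in range(k - 1):
        zc = z[chosen]                            # coordinates of chosen sites
        # Distance from every site to its nearest already-chosen site
        d = np.linalg.norm(z[:, None, :] - zc[None, :, :], axis=2).min(axis=1)
        chosen.append(int(np.argmax(d)))          # most environmentally distinct
    return chosen

sites = np.random.default_rng(2).normal(size=(500, 5))   # 500 sites, 5 variables
print(maxdisp_select(sites, 10))
```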
Kirby, James B.; Bollen, Kenneth A.
2009-01-01
Structural Equation Modeling with latent variables (SEM) is a powerful tool for social and behavioral scientists, combining many of the strengths of psychometrics and econometrics into a single framework. The most common estimator for SEM is the full-information maximum likelihood estimator (ML), but there is continuing interest in limited information estimators because of their distributional robustness and their greater resistance to structural specification errors. However, the literature discussing model fit for limited information estimators for latent variable models is sparse compared to that for full information estimators. We address this shortcoming by providing several specification tests based on the 2SLS estimator for latent variable structural equation models developed by Bollen (1996). We explain how these tests can be used to not only identify a misspecified model, but to help diagnose the source of misspecification within a model. We present and discuss results from a Monte Carlo experiment designed to evaluate the finite sample properties of these tests. Our findings suggest that the 2SLS tests successfully identify most misspecified models, even those with modest misspecification, and that they provide researchers with information that can help diagnose the source of misspecification. PMID:20419054
[Clinical research IV. Relevancy of the statistical test chosen].
Talavera, Juan O; Rivas-Ruiz, Rodolfo
2011-01-01
When we look at the difference between two therapies or the association of a risk factor or prognostic indicator with its outcome, we need to evaluate the accuracy of the result. This assessment is based on a judgment that uses information about the study design and the statistical management of the information. This paper specifically addresses the relevance of the statistical test selected. Statistical tests are chosen mainly on two characteristics: the objective of the study and the type of variables. The objective can be divided into three groups of tests: a) those in which you want to show differences between groups or within a group before and after a maneuver, b) those that seek to show the relationship (correlation) between variables, and c) those that aim to predict an outcome. The types of variables are divided into two: quantitative (continuous and discontinuous) and qualitative (ordinal and dichotomous). For example, if we seek to demonstrate differences in age (a quantitative variable) among patients with systemic lupus erythematosus (SLE) with and without neurological disease (two groups), the appropriate test is the Student t test for independent samples. But if the comparison is about the frequency of females (a binomial variable), then the appropriate statistical test is the χ² test.
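To make the pairing of objective and variable type concrete, the sketch below (synthetic numbers, not the SLE data from the paper) runs the two tests named in the example with scipy:

```python
# Hedged sketch: synthetic data illustrating the two test choices described
# above (not the paper's own data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Quantitative outcome, two independent groups -> Student t test
age_with_neuro = rng.normal(42, 8, size=30)      # SLE with neurological disease
age_without_neuro = rng.normal(38, 8, size=30)   # SLE without neurological disease
t, p_t = stats.ttest_ind(age_with_neuro, age_without_neuro)
print(f"t = {t:.2f}, p = {p_t:.3f}")

# Binomial outcome (frequency of females) in two groups -> chi-square
table = np.array([[24, 6],    # group 1: females, males
                  [18, 12]])  # group 2: females, males
chi2, p_c, dof, _ = stats.chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p_c:.3f}")
```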
Control Law Design for Propofol Infusion to Regulate Depth of Hypnosis: A Nonlinear Control Strategy
Khaqan, Ali; Bilal, Muhammad; Ilyas, Muhammad; Ijaz, Bilal; Ali Riaz, Raja
2016-01-01
Maintaining the depth of hypnosis (DOH) during surgery is one of the major objectives of anesthesia infusion system. Continuous administration of Propofol infusion during surgical procedures is essential but increases the undue load of an anesthetist in operating room working in a multitasking setup. Manual and target controlled infusion (TCI) systems are not good at handling instabilities like blood pressure changes and heart rate variability arising due to interpatient variability. Patient safety, large interindividual variability, and less postoperative effects are the main factors to motivate automation in anesthesia. The idea of automated system for Propofol infusion excites the control engineers to come up with a more sophisticated and safe system that handles optimum delivery of drug during surgery and avoids postoperative effects. In contrast to most of the investigations with linear control strategies, the originality of this research work lies in employing a nonlinear control technique, backstepping, to track the desired hypnosis level of patients during surgery. This effort is envisioned to unleash the true capabilities of this nonlinear control technique for anesthesia systems used today in biomedical field. The working of the designed controller is studied on the real dataset of five patients undergoing surgery. The controller tracks the desired hypnosis level within the acceptable range for surgery. PMID:27293475
Number versus Continuous Quantity in Numerosity Judgments by Fish
ERIC Educational Resources Information Center
Agrillo, Christian; Piffer, Laura; Bisazza, Angelo
2011-01-01
In quantity discrimination tasks, adults, infants and animals have been sometimes observed to process number only after all continuous variables, such as area or density, have been controlled for. This has been taken as evidence that processing number may be more cognitively demanding than processing continuous variables. We tested this hypothesis…
Preliminary design of a supersonic cruise aircraft high-pressure turbine
NASA Technical Reports Server (NTRS)
Aceto, L. D.; Calderbank, J. C.
1983-01-01
Development of the supersonic cruise aircraft engine continued in this National Aeronautics and Space Administration (NASA) sponsored Pratt and Whitney program for the Preliminary Design of an Advanced High-Pressure Turbine. Airfoil cooling concepts and the technology required to implement these concepts received particular emphasis. Previous supersonic cruise aircraft mission studies were reviewed and the Variable Stream Control Engine (VSCE) was chosen as the candidate for the preliminary turbine design. The design was evaluated for the supersonic cruise mission. The advanced technology generated from these designs showed benefits in both supersonic and subsonic cruise applications. The preliminary design incorporates advanced single crystal materials, thermal barrier coatings, and oxidation-resistant coatings for both the vane and blade. The 1990 technology vane and blade designs have a cooled turbine efficiency of 92.3 percent, 8.05 percent Wae cooling, and a 10,000 hour life. An alternate design with 1986 technology has 91.9 percent efficiency and 12.43 percent Wae cooling at the same life. To achieve these performance and life results, technology programs must be pursued to provide the 1990s technology assumed for this study.
Miskell, Georgia; Salmond, Jennifer A; Williams, David E
2018-04-01
Portable low-cost instruments have been validated and used to measure ambient nitrogen dioxide (NO2) at multiple sites over a small urban area with 20-min time resolution. We use these results combined with land use regression (LUR) and rank correlation methods to explore the effects of traffic, urban design features, and local meteorology and atmospheric chemistry on small-scale spatio-temporal variations. We measured NO2 at 45 sites around the downtown area of Vancouver, BC, in spring 2016, and constructed four different models: i) a model based on averaging concentrations observed at each site over the whole measurement period, and separate temporal models for ii) morning, iii) midday, and iv) afternoon. Redesign of the temporal models using the average model predictors as constants gave three 'hybrid' models that used both spatial and temporal variables. These accounted for approximately 50% of the total variation with a mean absolute error of ±5 ppb. Ranking sites by concentration and by change in concentration across the day showed a shift of high NO2 concentrations across the central city from morning to afternoon. Locations could be identified in which NO2 concentration was determined by the geography of the site, and others in which the concentration changed markedly from morning to afternoon, indicating the importance of temporal controls. Rank correlation results complemented LUR in identifying significant urban design variables that impacted NO2 concentration. High variability across a relatively small space was partially described by predictor variables related to traffic (bus stop density, speed limits, traffic counts, distance to traffic lights), atmospheric chemistry (ozone, dew point), and environment (land use, trees). A high-density network recording continuously would be needed to fully capture local variations. Copyright © 2017 Elsevier B.V. All rights reserved.
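At its core, a land use regression of this kind is an ordinary least-squares fit of site concentrations on site predictors. A minimal sketch with invented predictor names and values (bus-stop density, speed limit, traffic, tree cover), not the study's dataset:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(3)
n = 45  # one row per monitoring site

# Hypothetical predictors standing in for the study's traffic/urban variables
X = np.column_stack([
    rng.poisson(4, n),        # bus_stop_density
    rng.integers(30, 60, n),  # speed_limit
    rng.normal(0, 1, n),      # log_traffic_count (standardized)
    rng.normal(0, 1, n),      # tree_cover (standardized)
])
no2 = 15 + 2.0 * X[:, 0] + 0.2 * X[:, 1] + 4.0 * X[:, 2] + rng.normal(0, 5, n)

lur = LinearRegression().fit(X, no2)
pred = lur.predict(X)
print(f"R^2 = {lur.score(X, no2):.2f}, MAE = {mean_absolute_error(no2, pred):.1f} ppb")
```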
Quantum simulation of quantum field theory using continuous variables
Marshall, Kevin; Pooser, Raphael C.; Siopsis, George; ...
2015-12-14
Much progress has been made in the field of quantum computing using continuous variables over the last couple of years. This includes the generation of extremely large entangled cluster states (10,000 modes, in fact) as well as a fault tolerant architecture. This has led to the point that continuous-variable quantum computing can indeed be thought of as a viable alternative for universal quantum computing. With that in mind, we present a new algorithm for continuous-variable quantum computers which gives an exponential speedup over the best known classical methods. Specifically, this relates to efficiently calculating the scattering amplitudes in scalar bosonic quantum field theory, a problem that is known to be hard using a classical computer. Thus, we give an experimental implementation based on cluster states that is feasible with today's technology.
NASA Astrophysics Data System (ADS)
Ravanbakhsh, Ali; Franchini, Sebastián
2012-10-01
In recent years, there has been continuing interest in the participation of university research groups in space technology studies by means of their own microsatellites. The involvement in such projects has some inherent challenges, such as limited budget and facilities. Also, because the main objective of these projects is educational, there are usually uncertainties regarding their in-orbit mission and scientific payloads at the early phases of the project. On the other hand, there are predetermined limitations on their mass and volume budgets, since most of them are launched as auxiliary payloads, which reduces the launch cost considerably. The satellite structure subsystem is the one most affected by the launcher constraints. This can affect different aspects, including dimensions, strength, and frequency requirements. In this paper, the main focus is on developing a structural design sizing tool containing not only the primary structure properties as variables but also system-level variables such as payload mass budget and satellite total mass and dimensions. This approach enables the design team to obtain better insight into the design in an extended design envelope. The structural design sizing tool is based on analytical structural design formulas and appropriate assumptions, including both static and dynamic models of the satellite. Finally, a Genetic Algorithm (GA) multiobjective optimization is applied to the design space. The result is a Pareto-optimal set based on two objectives, minimum satellite total mass and maximum payload mass budget, which gives useful insight to the design team at the early phases of the design.
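The final step of such a two-objective optimization reduces to extracting the non-dominated set. A generic sketch (illustrative mass numbers, not the paper's design data) that filters candidate designs for minimum total mass and maximum payload budget:

```python
import numpy as np

def pareto_front(total_mass, payload_mass):
    """Return indices of non-dominated designs:
    minimize total_mass, maximize payload_mass."""
    idx = []
    for i in range(len(total_mass)):
        # Design i is dominated if some design is at least as good on both
        # objectives and strictly better on at least one.
        dominated = np.any(
            (total_mass <= total_mass[i]) & (payload_mass >= payload_mass[i]) &
            ((total_mass < total_mass[i]) | (payload_mass > payload_mass[i]))
        )
        if not dominated:
            idx.append(i)
    return idx

rng = np.random.default_rng(4)
m_total = rng.uniform(20, 50, 200)                  # kg, candidate satellite masses
m_payload = 0.3 * m_total + rng.normal(0, 2, 200)   # kg, payload budgets
front = pareto_front(m_total, m_payload)
print(f"{len(front)} Pareto-optimal designs out of 200")
```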
Life extending control: An interdisciplinary engineering thrust
NASA Technical Reports Server (NTRS)
Lorenzo, Carl F.; Merrill, Walter C.
1991-01-01
The concept of Life Extending Control (LEC) is introduced. Possible extensions to the cyclic damage prediction approach are presented based on the identification of a model from elementary forms. Several candidate elementary forms are presented. These extensions will result in a continuous or differential form of the damage prediction model. Two possible approaches to the LEC based on the existing cyclic damage prediction method, the measured variables LEC and the estimated variables LEC, are defined. Here, damage estimates or measurements would be used directly in the LEC. A simple hydraulic actuator driven position control system example is used to illustrate the main ideas behind LEC. Results from a simple hydraulic actuator example demonstrate that overall system performance (dynamic plus life) can be maximized by accounting for component damage in the control design.
Method for curing polymers using variable-frequency microwave heating
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lauf, R.J.; Bible, D.W.; Paulauskas, F.L.
1998-02-24
A method for curing polymers incorporating a variable frequency microwave furnace system designed to allow modulation of the frequency of the microwaves introduced into a furnace cavity is disclosed. By varying the frequency of the microwave signal, non-uniformities within the cavity are minimized, thereby achieving a more uniform cure throughout the workpiece. A directional coupler is provided for detecting the direction of a signal and further directing the signal depending on the detected direction. A first power meter is provided for measuring the power delivered to the microwave furnace. A second power meter detects the magnitude of reflected power. The furnace cavity may be adapted to be used to cure materials defining a continuous sheet or which require compressive forces during curing. 15 figs.
Method for curing polymers using variable-frequency microwave heating
Lauf, Robert J.; Bible, Don W.; Paulauskas, Felix L.
1998-01-01
A method for curing polymers (11) incorporating a variable frequency microwave furnace system (10) designed to allow modulation of the frequency of the microwaves introduced into a furnace cavity (34). By varying the frequency of the microwave signal, non-uniformities within the cavity (34) are minimized, thereby achieving a more uniform cure throughout the workpiece (36). A directional coupler (24) is provided for detecting the direction of a signal and further directing the signal depending on the detected direction. A first power meter (30) is provided for measuring the power delivered to the microwave furnace (32). A second power meter (26) detects the magnitude of reflected power. The furnace cavity (34) may be adapted to be used to cure materials defining a continuous sheet or which require compressive forces during curing.
Resonance: The science behind the art of sonic drilling
NASA Astrophysics Data System (ADS)
Lucon, Peter Andrew
The research presented in this dissertation quantifies the system dynamics and the influence of control variables of a sonic drill system. The investigation began with work funded by the Department of Energy under a Small Business Innovative Research Phase I grant (DE-FG02-06ER84618) to investigate the feasibility of using sonic drills to drill micro well holes to depths of 1500 feet. The Department of Energy funding enabled feasibility testing using a 750 hp sonic drill owned by Jeffery Barrow, owner of Water Development Co. During the initial feasibility testing, data were measured and recorded at the sonic drill head while the drill penetrated to a depth of 120 feet. To demonstrate feasibility, the system had to be well understood to show that testing of a larger sonic drill could simulate the results of drilling a micro well hole of 2.5 inch diameter. A first-order model of the system was developed that produced counter-intuitive findings supporting the feasibility of using this method to drill micro well holes to 1500 feet with sonic drills. Although funding was not continued, the project work went on, expanding the sonic drill models by solving the governing boundary value problem analytically and by applying finite difference and finite element methods to determine the significance of the control variables that affect the sonic drill. Using a design of experiments approach and commercially available software, the significance of the variables to the effectiveness of the drill system was determined. From the significant variables, as well as the real-world testing, a control system schematic for a sonic drill was derived and is patent pending. The control system includes sensors, actuators, programmable logic controllers, and a human machine interface. It was determined that the control system should treat the resonant mode and the weight on the bit as the two primary control variables. The sonic drill can also be controlled using feedback from sensors mounted on the sonic drill head, the driver for the sonic drill located above ground.
Strilka, Richard J; Stull, Mamie C; Clemens, Michael S; McCaver, Stewart C; Armen, Scott B
2016-01-27
The critically ill can have persistent dysglycemia during the "subacute" recovery phase of their illness because of altered gene expression; it is also not uncommon for these patients to receive continuous enteral nutrition during this time. The optimal short-acting subcutaneous insulin therapy that should be used in this clinical scenario, however, is unknown. Our aim was to conduct a qualitative numerical study of the glucose-insulin dynamics within this patient population to answer the above question. This analysis may help clinicians design a relevant clinical trial. Eight virtual patients with stress hyperglycemia were simulated by means of a mathematical model. Each virtual patient had a different combination of insulin resistance and insulin deficiency that defined their unique stress hyperglycemia state; the rate of gluconeogenesis was also doubled. The patients received 25 injections of subcutaneous regular or Lispro insulin (0-6 U) with 3 rates of continuous nutrition. The main outcome measurements were the change in mean glucose concentration, the change in glucose variability, and hypoglycemic episodes. These end points were interpreted by how the ultradian oscillations of glucose concentration were affected by each insulin preparation. Subcutaneous regular insulin lowered both mean glucose concentrations and glucose variability in a linear fashion. No hypoglycemic episodes were noted. Although subcutaneous Lispro insulin lowered mean glucose concentrations, glucose variability increased in a nonlinear fashion. In patients with high insulin resistance and nutrition at goal, "rebound hyperglycemia" was noted after the insulin analog was rapidly metabolized. When the nutritional source was removed, hypoglycemia tended to occur at higher Lispro insulin doses. Finally, patients with severe insulin resistance seemed the most sensitive to insulin concentration changes. Subcutaneous regular insulin consistently lowered mean glucose concentrations and glucose variability; its linear dose-response curve rendered the preparation better suited for a sliding-scale protocol. The longer duration of action of subcutaneous regular insulin resulted in better glycemic-control metrics for patients who were continuously postprandial. Clinical trials are needed to examine whether these numerical results represent the glucose-insulin dynamics that occur in intensive care units; if present, their clinical effects should be evaluated.
Ciarleglio, Maria M; Arendt, Christopher D; Peduzzi, Peter N
2016-06-01
When designing studies that have a continuous outcome as the primary endpoint, the hypothesized effect size (Δ), that is, the hypothesized difference in means (δ) relative to the assumed variability of the endpoint (σ), plays an important role in sample size and power calculations. Point estimates for δ and σ are often calculated using historical data. However, the uncertainty in these estimates is rarely addressed. This article presents a hybrid classical and Bayesian procedure that formally integrates prior information on the distributions of δ and σ into the study's power calculation. Conditional expected power, which averages the traditional power curve using the prior distributions of δ and σ as the averaging weight, is used, and the value of Δ is found that equates the prespecified frequentist power (1 − β) and the conditional expected power of the trial. This hypothesized effect size is then used in traditional sample size calculations when determining sample size for the study. The value of Δ found using this method may be expressed as a function of the prior means of δ and σ, μ_δ and μ_σ, and their prior standard deviations, σ_δ and σ_σ. We show that the "naïve" estimate of the effect size, that is, the ratio of prior means, should be down-weighted to account for the variability in the parameters. An example is presented for designing a placebo-controlled clinical trial testing the antidepressant effect of alprazolam as monotherapy for major depression. Through this method, we are able to formally integrate prior information on the uncertainty and variability of both the treatment effect and the common standard deviation into the design of the study while maintaining a frequentist framework for the final analysis. Solving for the effect size which the study has a high probability of correctly detecting based on the available prior information on the difference δ and the standard deviation σ provides a valuable, substantiated estimate that can form the basis for discussion about the study's feasibility during the design phase. © The Author(s) 2016.
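A numerical sketch of the hybrid procedure, using a two-sample z-approximation for classical power and illustrative normal priors for δ and σ (all numbers invented, not the paper's):

```python
import numpy as np
from scipy import stats
from scipy.optimize import brentq

rng = np.random.default_rng(1)
alpha, n_per_arm = 0.05, 50
z_crit = stats.norm.ppf(1 - alpha / 2)

# Illustrative priors on the mean difference delta and the common SD sigma
mu_d, sd_d = 3.0, 1.0      # prior for delta
mu_s, sd_s = 6.0, 1.5      # prior for sigma

def classical_power(delta, sigma, n):
    # Two-sample z-approximation to the power of a two-sided test
    ncp = delta / (sigma * np.sqrt(2.0 / n))
    return stats.norm.sf(z_crit - ncp) + stats.norm.cdf(-z_crit - ncp)

# Conditional expected power: average classical power over the priors
d = rng.normal(mu_d, sd_d, 100_000)
s = np.abs(rng.normal(mu_s, sd_s, 100_000))   # crude truncation at zero
cep = classical_power(d, s, n_per_arm).mean()
print(f"conditional expected power ~ {cep:.3f}")

# 'Naive' effect size vs. the down-weighted one: find the Delta whose
# classical power at sigma = 1 equals the conditional expected power above.
effect = brentq(lambda D: classical_power(D, 1.0, n_per_arm) - cep, 1e-6, 5)
print(f"naive Delta = {mu_d / mu_s:.3f}, down-weighted Delta = {effect:.3f}")
```

Running this shows the solved-for Δ falling below the naive ratio μ_δ/μ_σ, which is exactly the down-weighting effect the abstract describes.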
Nursing intellectual capital theory: operationalization and empirical validation of concepts.
Covell, Christine L; Sidani, Souraya
2013-08-01
To present the operationalization of concepts in the nursing intellectual capital theory and the results of a methodological study aimed at empirically validating the concepts. The nursing intellectual capital theory proposes that the stocks of nursing knowledge in an organization are embedded in two concepts, nursing human capital and nursing structural capital. The theory also proposes that two concepts in the work environment, nurse staffing and employer support for nursing continuing professional development, influence nursing human capital. A cross-sectional design. A systematic three-step process was used to operationalize the concepts of the theory. In 2008, data were collected for 147 inpatient units from administrative departments and unit managers in 6 Canadian hospitals. Exploratory factor analyses were conducted to determine whether the indicator variables accurately reflect their respective concepts. The proposed indicator variables collectively measured the nurse staffing concept. Three indicators were retained to construct the 'nursing human capital: clinical expertise and experience' concept. The nursing structural capital and employer support for nursing continuing professional development concepts were not validated empirically. The nurse staffing and 'nursing human capital: clinical expertise and experience' concepts will be brought forward for further model testing. Refinement of some of the indicator variables of the concepts is indicated. Additional research is required with different sources of data to confirm the findings. © 2012 Blackwell Publishing Ltd.
Li, Yongqiang; Abbaspour, Mohammadreza R; Grootendorst, Paul V; Rauth, Andrew M; Wu, Xiao Yu
2015-08-01
This study was performed to optimize the formulation of polymer-lipid hybrid nanoparticles (PLN) for the delivery of an ionic water-soluble drug, verapamil hydrochloride (VRP), and to investigate the roles of formulation factors. Modeling and optimization were conducted based on a spherical central composite design. Three formulation factors, i.e., the weight ratio of drug to lipid (X1) and the concentrations of Tween 80 (X2) and Pluronic F68 (X3), were chosen as independent variables. Drug loading efficiency (Y1) and mean particle size (Y2) of PLN were selected as dependent variables. The predictive performance of artificial neural networks (ANN) and response surface methodology (RSM) was compared. As ANN was found to exhibit better recognition and generalization capability than RSM, multi-objective optimization of PLN was then conducted based upon the validated ANN models and continuous genetic algorithms (GA). The optimal PLN possessed a high drug loading efficiency (92.4%, w/w) and a small mean particle size (~100 nm). The predicted response variables matched well with the observed results. The three formulation factors exhibited different effects on the properties of PLN. ANN in coordination with continuous GA represents an effective and efficient approach to optimizing the PLN formulation of VRP with desired properties. Copyright © 2015 Elsevier B.V. All rights reserved.
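For reference, the coded design points of a spherical central composite design in three factors can be generated in a few lines; the sketch below is generic (the axial distance √3 places the star points on the factorial sphere), not the authors' exact run table:

```python
import itertools
import numpy as np

def spherical_ccd(n_factors=3, n_center=6):
    """Coded design points for a spherical central composite design."""
    factorial = np.array(list(itertools.product([-1, 1], repeat=n_factors)))
    alpha = np.sqrt(n_factors)  # axial distance placing star points on the sphere
    axial = np.vstack([alpha * np.eye(n_factors), -alpha * np.eye(n_factors)])
    center = np.zeros((n_center, n_factors))
    return np.vstack([factorial, axial, center])

pts = spherical_ccd()
print(pts.shape)  # (8 factorial + 6 axial + 6 center, 3) = (20, 3)
```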
Continuous direct compression as manufacturing platform for sustained release tablets.
Van Snick, B; Holman, J; Cunningham, C; Kumar, A; Vercruysse, J; De Beer, T; Remon, J P; Vervaet, C
2017-03-15
This study presents a framework for process and product development on a continuous direct compression manufacturing platform. A challenging sustained release formulation with a high content of a poorly flowing, low-density drug was selected. Two HPMC grades were evaluated as matrix former: standard Methocel CR and directly compressible Methocel DC2. The feeding behavior of each formulation component was investigated by deriving feed factor profiles. The maximum feed factor was used to estimate the drive command and depended strongly upon the density of the material. Furthermore, the shape of the feed factor profile allowed definition of a customized refill regime for each material. Inline NIR spectroscopy was used to estimate the residence time distribution (RTD) in the mixer and monitor blend uniformity. Tablet content and weight variability were determined as additional measures of mixing performance. For Methocel CR, the best axial mixing (i.e. feeder fluctuation dampening) was achieved when an impeller with a high number of radial mixing blades operated at low speed. However, the variability in tablet weight and content uniformity deteriorated under this condition. One can therefore conclude that balancing axial mixing with tablet quality is critical for Methocel CR. Reformulating with the directly compressible Methocel DC2 as matrix former, however, vastly improved tablet quality. Furthermore, both process and product were significantly more robust to changes in process and design variables. This observation underpins the importance of flowability during continuous blending and die-filling. At the compaction stage, blends with Methocel CR showed better tabletability, driven by a higher compressibility as the smaller CR particles have a higher bonding area. However, tablets of similar strength were achieved using Methocel DC2 by targeting equal porosity. Compaction pressure impacted tablet properties and dissolution. Hence, controlling thickness during continuous manufacturing of sustained release tablets was crucial to ensure reproducible dissolution. Copyright © 2017 Elsevier B.V. All rights reserved.
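RTD metrics of the kind referred to above follow directly from a measured tracer response at the mixer exit; a generic sketch on a synthetic pulse (not the study's NIR data):

```python
import numpy as np

t = np.linspace(0.0, 120.0, 601)                  # time, s
dt = t[1] - t[0]
c = np.exp(-(t - 30.0) ** 2 / (2 * 8.0 ** 2))     # synthetic tracer pulse response

E = c / (c.sum() * dt)                            # normalized RTD, E(t)
t_mean = (t * E).sum() * dt                       # mean residence time
sigma2 = ((t - t_mean) ** 2 * E).sum() * dt       # variance (spread) of the RTD
print(f"mean residence time = {t_mean:.1f} s, spread = {np.sqrt(sigma2):.1f} s")
```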
NASA Astrophysics Data System (ADS)
Baukal, Charles E.; Ausburn, Lynna J.
2017-05-01
Continuing engineering education (CEE) is important to ensure engineers maintain proficiency over the life of their careers. However, relatively few studies have examined designing effective training for working engineers. Research has indicated that both learner instructional preferences and prior knowledge can impact the learning process, but it has not established if these factors are interrelated. The study reported here considered relationships of prior knowledge and three aspects of learning preferences of working engineers at a manufacturing company: learning strategy choices, verbal-visual cognitive styles, and multimedia preferences. Prior knowledge was not found to be significantly related to engineers' learning preferences, indicating independence of effects of these variables on learning. The study also examined relationships of this finding to the Multimedia Cone of Abstraction and implications for its use as an instructional design tool for CEE.
NASA Astrophysics Data System (ADS)
Shevelev, M.; Aryshev, A.; Terunuma, N.; Urakawa, J.
2017-10-01
The interest in producing ultrashort electron bunches has risen sharply among scientists working on the design of high-gradient wakefield accelerators. One attractive approach to generating electron bunches is to illuminate a photocathode with a train of femtosecond laser pulses. In this paper we describe the design and testing of a laser system for an rf gun based on commercial titanium-sapphire laser technology. The technology allows the production of four femtosecond laser pulses with a continuously variable pulse delay. We also use the designed system to demonstrate the experimental generation of an electron microbunch train obtained by illuminating a cesium-telluride semiconductor photocathode. We use conventional diagnostics to characterize the electron microbunches produced and confirm that it may be possible to control the main parameter of an electron microbunch train.
Ordinary Least Squares Estimation of Parameters in Exploratory Factor Analysis with Ordinal Data
ERIC Educational Resources Information Center
Lee, Chun-Ting; Zhang, Guangjian; Edwards, Michael C.
2012-01-01
Exploratory factor analysis (EFA) is often conducted with ordinal data (e.g., items with 5-point responses) in the social and behavioral sciences. These ordinal variables are often treated as if they were continuous in practice. An alternative strategy is to assume that a normally distributed continuous variable underlies each ordinal variable.…
NASA Astrophysics Data System (ADS)
Graves, Catherine E.; Dávila, Noraica; Merced-Grafals, Emmanuelle J.; Lam, Si-Ty; Strachan, John Paul; Williams, R. Stanley
2017-03-01
Applications of memristor devices are quickly moving beyond computer memory to areas of analog and neuromorphic computation. These applications require the design of devices with characteristics different from binary memory, such as a large tunable range of conductance. A complete understanding of the conduction mechanisms and their corresponding state variable(s) is crucial for optimizing performance and designs in these applications. Here we present measurements of low-bias I-V characteristics of 6 states in a Ta/tantalum-oxide (TaOx)/Pt memristor spanning over 2 orders of magnitude in conductance and temperatures from 100 K to 500 K. Our measurements show that the 300 K device conduction is dominated by a temperature-insensitive current that varies with the non-volatile memristor state, with an additional leakage contribution from a thermally activated current channel that is nearly independent of the memristor state. We interpret these results with a parallel conduction model of Mott hopping and Schottky emission channels, fitting the voltage- and temperature-dependent experimental data for all memristor states with only two free parameters. The memristor conductance is linearly correlated with N, the density of electrons near E_F participating in the Mott hopping conduction, revealing N to be the dominant state variable for low-bias conduction in this system. Finally, we show that the Mott hopping sites can be ascribed to oxygen vacancies, where the local oxygen vacancy density responsible for critical hopping pathways controls the memristor conductance.
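One way to express the interpretation above is a two-channel current model: a state-dependent, temperature-insensitive hopping term proportional to N, in parallel with a state-independent, thermally activated Schottky-like term. The sketch below uses textbook functional forms and invented parameter values (not the paper's fitted parameters):

```python
import numpy as np

k_B = 8.617e-5  # Boltzmann constant, eV/K

def memristor_current(V, T, N, phi=0.4, c_hop=1e-9, a_schottky=1e-6, beta=0.05):
    """Toy parallel-conduction model (illustrative parameters, not fitted values):
    a state-dependent hopping channel plus a state-independent,
    thermally activated Schottky emission channel."""
    i_hop = c_hop * N * V                          # hopping current ~ density N
    i_schottky = (a_schottky * T**2 *
                  np.exp(-(phi - beta * np.sqrt(np.abs(V))) / (k_B * T)) *
                  np.sign(V))
    return i_hop + i_schottky

# Hopping dominates at low T; the activated leakage grows toward 500 K
for T in (100, 300, 500):
    print(T, memristor_current(0.1, T, N=1e4))
```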
Extragalactic Science With Kepler
NASA Astrophysics Data System (ADS)
Fanelli, Michael N.; Marcum, P.
2012-01-01
Although designed as an exoplanet and stellar astrophysics experiment, the Kepler mission provides a unique capability to explore the essentially unknown photometric stability of galactic systems at millimag levels using Kepler's blend of high precision and continuous monitoring. Time series observations of galaxies are sensitive to both quasi-continuous variability, driven by accretion activity from embedded active nuclei, and random, episodic events, such as supernovae. In general, galaxies lacking active nuclei are not expected to be variable with the timescales and amplitudes observed in stellar sources and are free of source motions that affect stars (e.g., parallax). These sources can serve as a population of quiescent, non-variable sources, which may be used to quantify the photometric stability and noise characteristics of the Kepler photometer. A factor limiting galaxy monitoring in the Kepler FOV is the overall lack of detailed quantitative information for the galaxy population. Despite these limitations, a significant number of galaxies are being observed, forming the Kepler Galaxy Archive. Observed sources total approximately 100, 250, and 700 in Cycles 1-3 (Cycle 3 began in June 2011). In this poster we interpret the properties of a set of 20 galaxies monitored during quarters 4 through 8, their associated light curves, photometric and astrometric precision and potential variability. We describe data analysis issues relevant to extended sources and available software tools. In addition, we detail ongoing surveys that are providing new photometric and morphological information for galaxies over the entire field. These new datasets will both aid the interpretation of the time series, and improve source selection, e.g., help identify candidate AGNs and starburst systems, for further monitoring.
Validation of public health competencies and impact variables for low- and middle-income countries.
Zwanikken, Prisca Ac; Alexander, Lucy; Huong, Nguyen Thanh; Qian, Xu; Valladares, Laura Magana; Mohamed, Nazar A; Ying, Xiao Hua; Gonzalez-Robledo, Maria Cecilia; Linh, Le Cu; Wadidi, Marwa Se Abuzaid; Tahir, Hanan; Neupane, Sunisha; Scherpbier, Albert
2014-01-20
The number of Master of Public Health (MPH) programmes in low- and middle-income countries (LMICs) is increasing, but questions have been raised regarding the relevance of their outcomes and impacts on context. Although processes for validating public health competencies have taken place in recent years in many high-income countries, validation in LMICs is needed. Furthermore, impact variables of MPH programmes in the workplace and in society have not been developed. A set of public health competencies and impact variables in the workplace and in society was designed using the competencies and learning objectives of six participating institutions offering MPH programmes in or for LMICs, and the set of competencies of the Council on Linkages Between Academia and Public Health Practice as a reference. The resulting competencies and impact variables differ from those of the Council on Linkages in scope and emphasis on social determinants of health, context specificity and intersectoral competencies. A modified Delphi method was used in this study to validate the public health competencies and impact variables; experts and MPH alumni from China, Vietnam, South Africa, Sudan, Mexico and the Netherlands reviewed them and made recommendations. The competencies and variables were validated across two Delphi rounds, first with public health experts (N = 31) from the six countries, then with MPH alumni (N = 30). After the first expert round, competencies and impact variables were refined based on the quantitative results and qualitative comments. Both rounds showed high consensus, more so for the competencies than the impact variables. The response rate was 100%. This is the first time that public health competencies have been validated in LMICs across continents. It is also the first time that impact variables of MPH programmes have been proposed and validated in LMICs across continents. The high degree of consensus between experts and alumni suggests that these public health competencies and impact variables can be used to design and evaluate MPH programmes, as well as for individual and team assessment and continuous professional development in LMICs.
Secure quantum key distribution using continuous variables of single photons.
Zhang, Lijian; Silberhorn, Christine; Walmsley, Ian A
2008-03-21
We analyze the distribution of secure keys using quantum cryptography based on the continuous variable degree of freedom of entangled photon pairs. We derive the information capacity of a scheme based on the spatial entanglement of photons from a realistic source, and show that the standard measures of security known for quadrature-based continuous variable quantum cryptography (CV-QKD) are inadequate. A specific simple eavesdropping attack is analyzed to illuminate how secret information may be distilled well beyond the bounds of the usual CV-QKD measures.
Hauk, Olaf; Davis, Matthew H; Pulvermüller, Friedemann
2008-09-01
Psycholinguistic research has documented a range of variables that influence visual word recognition performance. Many of these variables are highly intercorrelated. Most previous studies have used factorial designs, which do not exploit the full range of values available for continuous variables, and are prone to skewed stimulus selection as well as to effects of the baseline (e.g. when contrasting words with pseudowords). In our study, we used a parametric approach to study the effects of several psycholinguistic variables on brain activation. We focussed on the variable word frequency, which has been used in numerous previous behavioural, electrophysiological and neuroimaging studies, in order to investigate the neuronal network underlying visual word processing. Furthermore, we investigated the variable orthographic typicality as well as a combined variable for word length and orthographic neighbourhood size (N), for which neuroimaging results are still either scarce or inconsistent. Data were analysed using multiple linear regression analysis of event-related fMRI data acquired from 21 subjects in a silent reading paradigm. The frequency variable correlated negatively with activation in left fusiform gyrus, bilateral inferior frontal gyri and bilateral insulae, indicating that word frequency can affect multiple aspects of word processing. N correlated positively with brain activity in left and right middle temporal gyri as well as right inferior frontal gyrus. Thus, our analysis revealed multiple distinct brain areas involved in visual word processing within one data set.
A nano continuous variable transmission system from nanotubes
NASA Astrophysics Data System (ADS)
Cai, Kun; Shi, Jiao; Xie, Yi Min; Qin, Qing H.
2018-02-01
A nano continuous variable transmission (nano-CVT) system is proposed by means of carbon nanotubes (CNTs). The dynamic behavior of the CNT-based nanosystem is assessed using molecular dynamics simulations. The system contains a rotary CNT-motor and a CNT-bearing. The tube axes of the nanomotor and the rotor in the bearing are laid in parallel, and the distance between them is known as the eccentricity of the rotor, which has a diameter of d. By changing the eccentricity (e) of the rotor from 0 to d, some interesting rotation transmission phenomena are discovered, which can be exploited to design various nanodevices. These include the failure of rotation transmission (i.e. the rotor has no rotation) when e ≥ d at an extremely low temperature, or when the edges of the two tubes are orthogonal at their intersections under any condition. This hints that the state of the nanosystem can be used as an on/off switch or breaker. For a system with e = d and a high temperature, the rotor rotates in the reverse direction of the motor, meaning that the output signal (rotation) is the reverse of the input signal. When the eccentricity is changed continuously from 0 to d, the output signal gradually decreases from a positive value to a negative value; as a result, a nano-CVT system is obtained.
Kilinc, Deniz; Demir, Alper
2017-08-01
The brain is extremely energy efficient and remarkably robust in what it does despite the considerable variability and noise caused by the stochastic mechanisms in neurons and synapses. Computational modeling is a powerful tool that can help us gain insight into this important aspect of brain mechanisms. A deep understanding, together with computational design tools, can help in developing robust neuromorphic electronic circuits and hybrid neuroelectronic systems. In this paper, we present a general modeling framework for biological neuronal circuits that systematically captures the nonstationary stochastic behavior of ion channels and synaptic processes. In this framework, fine-grained, discrete-state, continuous-time Markov chain models of both ion channels and synaptic processes are treated in a unified manner. Our modeling framework features a mechanism for the automatic generation of the corresponding coarse-grained, continuous-state, continuous-time stochastic differential equation models for neuronal variability and noise. Furthermore, we repurpose non-Monte Carlo noise analysis techniques, which were previously developed for analog electronic circuits, for the stochastic characterization of neuronal circuits in both the time and frequency domains. We verify that the fast non-Monte Carlo analysis methods produce results with the same accuracy as computationally expensive Monte Carlo simulations. We have implemented the proposed techniques in a prototype simulator, where both biological neuronal and analog electronic circuits can be simulated together in a coupled manner.
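As a point of reference, the coarse-graining from a discrete-state Markov chain to a stochastic differential equation is often written in the chemical-Langevin form sketched below; the notation is generic and assumed here, not taken from the paper.

\[
dX(t) = \sum_{j} \nu_j\, a_j\bigl(X(t)\bigr)\, dt + \sum_{j} \nu_j \sqrt{a_j\bigl(X(t)\bigr)}\; dW_j(t)
\]

Here X collects the continuous state variables (e.g., fractions of open channels), the a_j are the propensities of the discrete transitions, the \nu_j are the corresponding state increments, and the W_j are independent Wiener processes; the diffusion term carries the channel and synaptic noise that deterministic rate equations discard.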
NASA Astrophysics Data System (ADS)
Gunduz, Mustafa Emre
Many government agencies and corporations around the world have found the unique capabilities of rotorcraft indispensable. Incorporating such capabilities into rotorcraft design poses extra challenges because it is a complicated multidisciplinary process. The concept of applying several disciplines to the design and optimization processes may not be new, but it does not currently seem to be widely accepted in industry. The reason for this might be the lack of well-known tools for realizing a complete multidisciplinary design and analysis of a product. This study proposes a method that enables engineers in some design disciplines to perform a fairly detailed analysis and optimization of a design using commercially available software as well as codes developed at Georgia Tech. The ultimate goal is that, once the system is set up properly, the CAD model of the design, including all subsystems, is automatically updated as soon as a new part or assembly is added to the design, or whenever an analysis or optimization modifies the geometry. Designers and engineers are then involved only in checking the latest design for errors or in adding and removing features. Such a design process takes dramatically less time to complete; therefore, it should reduce development time and costs. The optimization method is demonstrated on an existing helicopter rotor originally designed in the 1960s. The rotor is already an effective design with novel features, but applying the optimization principles together with high-speed computing resulted in an even better design. The objective function to be minimized is related to the vibrations of the rotor system under gusty wind conditions. The design parameters are all continuous variables. Optimization is performed in a number of steps. First, the design variables most crucial to the objective function are identified. With these variables, the Latin Hypercube Sampling method is used to probe the design space, which contains several local minima and maxima (a sampling sketch follows below). After analysis of numerous samples, an optimum configuration that is more stable than the initial design is reached. The above process requires several software tools: CATIA as the CAD tool, ANSYS as the FEA tool, VABS for obtaining the cross-sectional structural properties, and DYMORE for the frequency and dynamic analysis of the rotor. MATLAB codes are also employed to generate input files and read output files of DYMORE. All these tools are connected using ModelCenter.
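A minimal sketch of the Latin Hypercube step described above, written with SciPy's quasi-Monte Carlo module; the variable count and bounds are illustrative assumptions, not the thesis's actual rotor parameters.

```python
# Draw a space-filling sample of candidate rotor designs with Latin
# Hypercube Sampling, then scale to physical variable bounds.
import numpy as np
from scipy.stats import qmc

n_vars = 5                           # hypothetical number of design variables
sampler = qmc.LatinHypercube(d=n_vars, seed=0)
unit_sample = sampler.random(n=200)  # 200 points in the unit hypercube

lower = np.array([0.1, 0.1, 0.5, 1.0, 0.0])  # illustrative bounds
upper = np.array([0.9, 0.4, 2.0, 5.0, 1.0])
designs = qmc.scale(unit_sample, lower, upper)

# Each row is one candidate design to be evaluated by the external
# analysis chain (CAD -> FEA -> dynamics) before selecting an optimum.
print(designs.shape)  # (200, 5)
```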
Design study of toroidal traction CVT for electric vehicles
NASA Technical Reports Server (NTRS)
Raynard, A. E.; Kraus, J.; Bell, D. D.
1980-01-01
The development, evaluation, and optimization of a preliminary design concept for a continuously variable transmission (CVT) to couple the high-speed output shaft of an energy storage flywheel to the drive train of an electric vehicle is discussed. An existing computer simulation program was modified and used to compare the performance of five CVT design configurations. Based on this analysis, a dual-cavity full-toroidal drive with regenerative gearing is selected for the CVT design configuration. Three areas are identified that will require some technological development: the ratio control system, the traction fluid properties, and evaluation of the traction contact performance. Finally, the suitability of the selected CVT design concept for alternate electric and hybrid vehicle applications and alternate vehicle sizes and maximum output torques is determined. In all cases the toroidal traction drive design concept is applicable to the vehicle system. The regenerative gearing could be eliminated in the electric powered vehicle because of the reduced ratio range requirements. In other cases the CVT with regenerative gearing would meet the design requirements after appropriate adjustments in size and reduction gearing ratio.
Semi-Supervised Learning of Lift Optimization of Multi-Element Three-Segment Variable Camber Airfoil
NASA Technical Reports Server (NTRS)
Kaul, Upender K.; Nguyen, Nhan T.
2017-01-01
This chapter describes a new intelligent platform for learning optimal designs of morphing wings based on Variable Camber Continuous Trailing Edge Flaps (VCCTEF) in conjunction with a leading edge flap called the Variable Camber Krueger (VCK). The new platform consists of a Computational Fluid Dynamics (CFD) methodology coupled with a semi-supervised learning methodology. The CFD component of the intelligent platform comprises a full Navier-Stokes solution capability (NASA OVERFLOW solver with Spalart-Allmaras turbulence model) that computes flow over a tri-element inboard NASA Generic Transport Model (GTM) wing section. Various VCCTEF/VCK settings and configurations were considered to explore optimal design for high-lift flight during take-off and landing. Determining the globally optimal design of such a system would require an extremely large set of CFD simulations, which is not feasible in practice. To alleviate this problem, recourse was taken to a semi-supervised learning (SSL) methodology, which is based on manifold regularization techniques. A reasonable space of CFD solutions was populated and then the SSL methodology was used to fit this manifold in its entirety, including the gaps in the manifold where no CFD solutions were available. The SSL methodology, in conjunction with an elastodynamic solver (FiDDLE), was demonstrated in an earlier study involving structural health monitoring. These CFD-SSL methodologies define the new intelligent platform that forms the basis for our search for optimal design of wings. Although the present platform can be used in various other design and operational problems in engineering, this chapter focuses on the high-lift study of the VCK-VCCTEF system. The top few candidate design configurations were identified by solving the CFD problem in a small subset of the design space. The SSL component was trained on the design space and was then used in a predictive mode to populate a selected set of test points outside of the given design space. The new design test space thus populated was evaluated with the CFD component by determining the error between the SSL predictions and the true (CFD) solutions, which was found to be small. This demonstrates the proposed CFD-SSL methodologies for isolating the best design of the VCK-VCCTEF system, and it holds promise for quantitatively identifying best designs of flight systems in general.
Pineda, David A.; Lopera, Francisco; Puerta, Isabel C.; Trujillo-Orrego, Natalia; Aguirre-Acevedo, Daniel C.; Hincapié-Henao, Liliana; Arango, Clara P.; Acosta, Maria T.; Holzinger, Sandra I.; Palacio, Juan David; Pineda-Alvarez, Daniel E.; Velez, Jorge I.; Martinez, Ariel F.; Lewis, John E.
2014-01-01
Endophenotypes are neurobiological markers cosegregating and associated with illness. These biomarkers represent a promising strategy to dissect the biological causes of ADHD. This study was aimed at contrasting the genetics of neuropsychological tasks for intelligence, attention, memory, visual-motor skills, and executive function in children from multigenerational and extended pedigrees that cluster ADHD in a genetic isolate. In a sample of 288 children and adolescents, 194 (67.4%) ADHD affected and 94 (32.6%) unaffected, a battery of neuropsychological tests was utilized to assess the association between genetic transmission and the ADHD phenotype. We found significant differences between affected and unaffected children in the WISC block design, PIQ and FSIQ, continuous vigilance, and visual-motor skills, and these variables exhibited significant heritability. Given the association between these neuropsychological variables and ADHD, and the high genetic component underlying their transmission in the studied pedigrees, we suggest that these variables be considered as potential cognitive endophenotypes suitable as quantitative trait loci (QTLs) in future studies of linkage and association. PMID:21779842
Analyst-to-Analyst Variability in Simulation-Based Prediction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Glickman, Matthew R.; Romero, Vicente J.
This report describes findings from the culminating experiment of the LDRD project entitled, "Analyst-to-Analyst Variability in Simulation-Based Prediction". For this experiment, volunteer participants solving a given test problem in engineering and statistics were interviewed at different points in their solution process. These interviews are used to trace differing solutions to differing solution processes, and differing processes to differences in reasoning, assumptions, and judgments. The issue that the experiment was designed to illuminate -- our paucity of understanding of the ways in which humans themselves have an impact on predictions derived from complex computational simulations -- is a challenging and open one. Although solution of the test problem by analyst participants in this experiment has taken much more time than originally anticipated, and is continuing past the end of this LDRD, this project has provided a rare opportunity to explore analyst-to-analyst variability in significant depth, from which we derive evidence-based insights to guide further explorations in this important area.
Realtime Multichannel System for Beat to Beat QT Interval Variability
NASA Technical Reports Server (NTRS)
Starc, Vito; Schlegel, Todd T.
2006-01-01
The measurement of beat-to-beat QT interval variability (QTV) shows clinical promise for identifying several types of cardiac pathology. However, until now, there has been no device capable of displaying, in real time on a beat-to-beat basis, changes in QTV in all 12 conventional leads in a continuously monitored patient. While several software programs have been designed to analyze QTV, heretofore such programs have all involved only a few channels (at most) and/or have required laborious user interaction or offline calculations and postprocessing, limiting their clinical utility. This paper describes a PC-based ECG software program that, in real time, acquires, analyzes and displays QTV and also PQ interval variability (PQV) in each of the eight independent channels that constitute the 12-lead conventional ECG. The system also processes certain related signals that are derived from singular value decomposition and that help to reduce the overall effects of noise on the real-time QTV and PQV results.
NASA Astrophysics Data System (ADS)
Reuter, Bryan; Oliver, Todd; Lee, M. K.; Moser, Robert
2017-11-01
We present an algorithm for a Direct Numerical Simulation of the variable-density Navier-Stokes equations based on the velocity-vorticity approach introduced by Kim, Moin, and Moser (1987). In the current work, a Helmholtz decomposition of the momentum is performed. Evolution equations for the curl and the Laplacian of the divergence-free portion are formulated by manipulation of the momentum equations and the curl-free portion is reconstructed by enforcing continuity. The solution is expanded in Fourier bases in the homogeneous directions and B-Spline bases in the inhomogeneous directions. Discrete equations are obtained through a mixed Fourier-Galerkin and collocation weighted residual method. The scheme is designed such that the numerical solution conserves mass locally and globally by ensuring the discrete divergence projection is exact through the use of higher order splines in the inhomogeneous directions. The formulation is tested on multiple variable-density flow problems.
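For orientation, the decomposition described above can be sketched as follows; the notation is assumed here rather than taken from the abstract.

\[
\rho\mathbf{u} = \mathbf{m}_{\mathrm{sol}} + \nabla\phi, \qquad \nabla\cdot\mathbf{m}_{\mathrm{sol}} = 0, \qquad \frac{\partial\rho}{\partial t} + \nabla^{2}\phi = 0
\]

The evolution equations are carried by the curl and the Laplacian of the divergence-free momentum \mathbf{m}_{\mathrm{sol}}, while the last relation (mass continuity applied to \rho\mathbf{u}) reconstructs the curl-free part \nabla\phi at each step.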
NASA Astrophysics Data System (ADS)
Williams-Rossi, Dara
Despite the positive outcomes for inquiry-based science education and recommendations from national and state standards, many teachers continue to rely upon more traditional methods of instruction. This causal-comparative study was designed to determine the effects of the Inquiry Institute, a professional development program that is intended to strengthen science teachers' pedagogical knowledge and provide practice with inquiry methods based on a constructivist approach. This study provides an understanding of the cause-and-effect relationship within three levels of the independent variable (length of participation in the Inquiry Institute: zero, three, or six days) to determine whether or not the three groups differ on the dependent variables (beliefs, implementation, and barriers). Quantitative data were collected with the Science Inquiry Survey, a researcher-developed instrument designed to also ascertain qualitative information with the use of open-ended survey items. One-way ANOVAs were applied to the data to test for a significant difference in the means of the three groups. The findings of this study indicate that lengthier professional development in the Inquiry Institute holds the most benefits for the participants.
The LCLS variable-energy hard X-ray single-shot spectrometer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rich, David; Zhu, Diling; Turner, James
2016-01-01
The engineering design, implementation, operation and performance of the new variable-energy hard X-ray single-shot spectrometer (HXSSS) for the LCLS free-electron laser (FEL) are reported. The HXSSS system is based on a cylindrically bent Si thin crystal for dispersing the incident polychromatic FEL beam. A spatially resolved detector system consisting of a Ce:YAG X-ray scintillator screen, an optical imaging system and a low-noise pixelated optical camera is used to record the spectrograph. The HXSSS provides single-shot spectrum measurements for users whose experiments depend critically on knowledge of the self-amplified spontaneous emission FEL spectrum. It also helps accelerator physicists in their continuing studies and optimization of self-seeding, various improved lasing mechanisms, and FEL performance. The designed operating energy range of the HXSSS is from 4 to 20 keV, with a spectral range on the order of 2% or larger and a spectral resolution of 2 × 10^-5 or better. Those performance goals have all been achieved during the commissioning of the HXSSS.
Addressing drug adherence using an operations management model.
Nunlee, Martin; Bones, Michelle
2014-01-01
OBJECTIVE To provide a model that enables health systems and pharmacy benefit managers to provide medications reliably and test for reliability and validity in the analysis of adherence to drug therapy of chronic disease. SUMMARY The quantifiable model described here can be used in conjunction with behavioral designs of drug adherence assessments. The model identifies variables that can be reproduced and expanded across the management of chronic diseases with drug therapy. By creating a reorder point system for reordering medications, the model uses a methodology commonly seen in operations research. The design includes a safety stock of medication and current supply of medication, which increases the likelihood that patients will have a continuous supply of medications, thereby positively affecting adherence by removing barriers. CONCLUSION This method identifies an adherence model that quantifies variables related to recommendations from health care providers; it can assist health care and service delivery systems in making decisions that influence adherence based on the expected order cycle days and the expected daily quantity of medication administered. This model addresses the possession of medication as a barrier to adherence.
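The reorder-point logic described in the summary reduces to a short calculation; the sketch below is a hypothetical illustration with invented parameter names, not the authors' implementation.

```python
# Reorder-point model for medication supply: trigger a refill when
# on-hand supply falls to expected use over the lead time plus a
# safety stock. All names and values here are illustrative.

def reorder_point(daily_units: float, lead_time_days: float,
                  safety_stock_units: float) -> float:
    """Inventory level at which a refill order should be placed."""
    return daily_units * lead_time_days + safety_stock_units

def needs_reorder(on_hand_units: float, daily_units: float,
                  lead_time_days: float, safety_stock_units: float) -> bool:
    return on_hand_units <= reorder_point(
        daily_units, lead_time_days, safety_stock_units)

# Example: 1 tablet/day, 7-day refill lead time, 5-tablet safety stock
# -> reorder once 12 or fewer tablets remain, so the patient never
# runs out and possession is no longer a barrier to adherence.
print(needs_reorder(on_hand_units=10, daily_units=1,
                    lead_time_days=7, safety_stock_units=5))  # True
```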
The role of space techniques in the understanding of solar variability
NASA Astrophysics Data System (ADS)
Bonnet, R. M.
1981-12-01
The advantages of using space for solar observations are discussed, and include avoidance of atmospheric effects, continuous observations by satellites, and the possibilities of solar studies from other planets or from above the ecliptic plane. Space-based viewing has allowed energy spectra studies from 310 nm down to the gamma-ray range, although radiation damage has often reduced instrument precision. Hands-on calibration on the Shuttle or the Salyut space station is seen as ameliorating the problem. Solar seismology, the design of a solar probe, solar magnetic measurement, and X-ray observations of coronal holes are outlined; the Solar Polar Mission is designed to carry UV, X-ray, and gamma-ray measuring equipment. X-ray points (XRP), discovered from magnetic measurements on board Skylab, were found to vary 180 deg out of phase with respect to the sunspot number. Features and origins of the UV spectra are reviewed, and the necessity for precise measurement of the absolute intensity of the chromosphere is stressed as the means of understanding solar variability.
High Temperature Variable Conductance Heat Pipes for Radioisotope Stirling Systems
NASA Technical Reports Server (NTRS)
Tarau, Calin; Walker, Kara L.; Anderson, William G.
2009-01-01
In a Stirling radioisotope system, heat must continually be removed from the GPHS modules to maintain the GPHS modules and surrounding insulation at acceptable temperatures. Normally, the Stirling convertor provides this cooling. If the Stirling convertor stops in the current system, the insulation is designed to spoil, preventing damage to the GPHS but also ending the mission. An alkali-metal Variable Conductance Heat Pipe (VCHP) is under development to allow multiple stops and restarts of the Stirling convertor. The status of the ongoing effort in developing this technology is presented in this paper. An earlier, preliminary design had a radiator outside the Advanced Stirling Radioisotope Generator (ASRG) casing, used NaK as the working fluid, and had the reservoir located on the cold side adapter flange. The revised design has an internal radiator inside the casing, with the reservoir embedded inside the insulation. This new design offers several advantages: in addition to reducing the overall size and mass of the VCHP, it is simpler, more compact, and easier to assemble with the ASRG. Also, because the entire VCHP now remains at elevated temperature, the working fluid can be changed from a binary compound (NaK) to a single compound (Na), whose properties allow higher performance and further mass reduction of the system. Preliminary design and analysis show an acceptable peak temperature of the ASRG case of 140 C, while the heat losses caused by the addition of the VCHP are 1.8 W.
Analysis and Design of High-Order Parallel Resonant Converters
NASA Astrophysics Data System (ADS)
Batarseh, Issa Eid
1990-01-01
In this thesis, a special state variable transformation technique has been derived for the analysis of high order dc-to-dc resonant converters. Converters comprised of high order resonant tanks have the advantage of utilizing the parasitic elements by making them part of the resonant tank. A new set of state variables is defined in order to make use of two-dimensional state-plane diagrams in the analysis of high order converters. Such a method has been successfully used for the analysis of the conventional Parallel Resonant Converters (PRC). Consequently, two-dimensional state-plane diagrams are used to analyze the steady-state response for third and fourth order PRCs when these converters are operated in the continuous conduction mode. Based on this analysis, a set of control characteristic curves for the LCC-, LLC- and LLCC-type PRC are presented from which various converter design parameters are obtained. Various design curves for component value selections and device ratings are given. This analysis of high order resonant converters shows that the addition of the reactive components to the resonant tank results in converters with better performance characteristics when compared with the conventional second order PRC. A complete design procedure along with design examples for 2nd, 3rd and 4th order converters is presented. Practical power supply units, normally used for computer applications, were built and tested by using the LCC-, LLC- and LLCC-type commutation schemes. In addition, computer simulation results are presented for these converters in order to verify the theoretical results.
Multi-scale Material Parameter Identification Using LS-DYNA® and LS-OPT®
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stander, Nielen; Basudhar, Anirban; Basu, Ushnish
2015-06-15
Ever-tightening regulations on fuel economy and carbon emissions demand continual innovation in finding ways for reducing vehicle mass. Classical methods for computational mass reduction include sizing, shape and topology optimization. One of the few remaining options for weight reduction can be found in materials engineering and material design optimization. Apart from considering different types of materials by adding material diversity, an appealing option in automotive design is to engineer steel alloys for the purpose of reducing thickness while retaining sufficient strength and ductility required for durability and safety. Such a project was proposed and is currently being executed under the auspices of the United States Automotive Materials Partnership (USAMP) funded by the Department of Energy. Under this program, new steel alloys (Third Generation Advanced High Strength Steel or 3GAHSS) are being designed, tested and integrated with the remaining design variables of a benchmark vehicle Finite Element model. In this project the principal phases identified are (i) material identification, (ii) formability optimization and (iii) multi-disciplinary vehicle optimization. This paper serves as an introduction to the LS-OPT methodology and therefore mainly focuses on the first phase, namely an approach to integrate material identification using material models of different length scales. For this purpose, a multi-scale material identification strategy, consisting of a Crystal Plasticity (CP) material model and a Homogenized State Variable (SV) model, is discussed and demonstrated. The paper concludes with proposals for integrating the multi-scale methodology into the overall vehicle design.
Space fabrication demonstration system. [beam builder and induction fastening
NASA Technical Reports Server (NTRS)
1980-01-01
The development effort on the composite beam cap fabricator was completed within cost and close to abbreviated goals. The design and analysis of flight weight primary and secondary beam builder structures proceeded satisfactorily but remains curtailed until further funding is made available to complete the work. The induction fastening effort remains within cost and schedule constraints. Tests of the LARC prototype induction welder are continuing in an instrumented test stand comprising a Dumore drill press (air over oil feed for variable applied loads) and a dynamometer to measure actual welding loads. Continued testing shows that the interface screening must be well impregnated with resin to ensure proper flow when bonding graphite-acrylic lap shear samples. Specimens prepared from 0.030 inch thick graphite-polyethersulfone are also available for future induction fastening evaluation.
NASA Technical Reports Server (NTRS)
ChangDiaz, Franklin R.; Squire, J. P.; Ilin, A. V.; Jacobson, V. T.; Glover, T. W.; Baity, F. W.; Carter, M. D.; Goulding, R. H.; Breizman, B. N.
1999-01-01
Experimental and theoretical studies on the Variable Specific Impulse Magnetoplasma Rocket (VASIMR) have continued through a NASA-led collaborative program involving several research groups. In the experimental area, performance characterization of the VASIMR helicon plasma source has been obtained over a portion of the parameter space, with helium and hydrogen propellant. Densities of 10^18 to 10^19 per cubic meter and temperatures of 5 eV were measured at a moderate degree of ionization in two separate experimental devices. Helicon design improvement and optimization will be discussed. Experiments with the ion cyclotron resonance heating (ICRH) subsection have begun and preliminary results will be discussed. The theoretical picture and integrated numerical simulations continue to be refined to account for the main physics elements of the VASIMR, including RF absorption and particle acceleration with subsequent detachment in the magnetic nozzle.
Development of Multiobjective Optimization Techniques for Sonic Boom Minimization
NASA Technical Reports Server (NTRS)
Chattopadhyay, Aditi; Rajadas, John Narayan; Pagaldipti, Naryanan S.
1996-01-01
A discrete, semi-analytical sensitivity analysis procedure has been developed for calculating aerodynamic design sensitivities. The sensitivities of the flow variables and the grid coordinates are numerically calculated using direct differentiation of the respective discretized governing equations. The sensitivity analysis techniques are adapted within a parabolized Navier-Stokes equations solver. Aerodynamic design sensitivities for high speed wing-body configurations are calculated using the semi-analytical sensitivity analysis procedures. Representative results obtained compare well with those obtained using the finite difference approach and establish the computational efficiency and accuracy of the semi-analytical procedures. Multidisciplinary design optimization procedures have been developed for aerospace applications, namely gas turbine blades and high speed wing-body configurations. In complex applications, the coupled optimization problems are decomposed into sublevels using multilevel decomposition techniques. In cases with multiple objective functions, formal multiobjective formulations such as the Kreisselmeier-Steinhauser function approach and the modified global criteria approach have been used. Nonlinear programming techniques for continuous design variables and a hybrid optimization technique, based on a simulated annealing algorithm, for discrete design variables have been used for solving the optimization problems. The optimization procedure for gas turbine blades improves the aerodynamic and heat transfer characteristics of the blades. The two-dimensional, blade-to-blade aerodynamic analysis is performed using a panel code. The blade heat transfer analysis is performed using an in-house developed finite element procedure. The optimization procedure yields blade shapes with significantly improved velocity and temperature distributions. The multidisciplinary design optimization procedures for high speed wing-body configurations simultaneously improve the aerodynamic, sonic boom and structural characteristics of the aircraft. The flow solution is obtained using a comprehensive parabolized Navier-Stokes solver. Sonic boom analysis is performed using an extrapolation procedure. The aircraft wing load-carrying member is modeled as either an isotropic or a composite box beam. The isotropic box beam is analyzed using thin wall theory. The composite box beam is analyzed using a finite element procedure. The developed optimization procedures yield significant improvements in all the performance criteria and provide interesting design trade-offs. The semi-analytical sensitivity analysis techniques offer significant computational savings and allow the use of comprehensive analysis procedures within design optimization studies.
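For reference, the Kreisselmeier-Steinhauser aggregation mentioned above is usually written in the standard form below; the notation (m functions F_i, draw-down factor \rho) is generic and assumed here.

\[
F_{KS}(\mathbf{x}) = F_{\max}(\mathbf{x}) + \frac{1}{\rho}\,\ln\!\left[\sum_{i=1}^{m} e^{\rho\left(F_i(\mathbf{x}) - F_{\max}(\mathbf{x})\right)}\right]
\]

The result is a single smooth envelope function that conservatively bounds the maximum of the m competing objectives or constraints, with larger \rho tightening the envelope at the cost of worse numerical conditioning.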
NASA Astrophysics Data System (ADS)
Kaskhedikar, Apoorva Prakash
According to the U.S. Energy Information Administration, commercial buildings represent about 40% of the United States' energy consumption, of which office buildings consume a major portion. Gauging the extent to which an individual building consumes energy in excess of its peers is the first step in initiating energy efficiency improvement. Energy benchmarking offers an initial building energy performance assessment without rigorous evaluation. Energy benchmarking tools based on the Commercial Buildings Energy Consumption Survey (CBECS) database are investigated in this thesis. This study proposes a new benchmarking methodology based on decision trees, where a relationship between the energy use intensities (EUI) and building parameters (continuous and categorical) is developed for different building types. This methodology was applied to the medium office and school building types contained in the CBECS database. The Random Forest technique was used to find the most influential parameters that impact building energy use intensities. Subsequently, significant correlations were identified between EUIs and CBECS variables. Other than floor area, some of the important variables were number of workers, location, number of PCs and main cooling equipment. The coefficient of variation was used to evaluate the effectiveness of the new model. The customization technique proposed in this thesis was compared with another benchmarking model that is widely used by building owners and designers, namely the ENERGY STAR Portfolio Manager. This tool relies on standard linear regression methods, which can only handle continuous variables. The proposed model uses a data-mining technique and was found to perform slightly better than the Portfolio Manager. The broader impact of the new benchmarking methodology is that it allows for identifying important categorical variables, and then incorporating them in a local, rather than global, model framework for EUI pertinent to the building type. The ability to identify and rank the important variables is of great importance in practical implementation of benchmarking tools which rely on query-based building and HVAC variable filters specified by the user.
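The variable-screening step described above can be sketched with scikit-learn's random forest; the file name and CBECS-style column names below are illustrative assumptions and do not reproduce the thesis's actual preprocessing.

```python
# Rank CBECS-style predictors of energy use intensity (EUI) by
# impurity-based importance from a random forest regressor.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

df = pd.read_csv("cbecs_offices.csv")  # hypothetical extract
y = df["EUI"]                          # energy use intensity target
X = pd.get_dummies(                    # one-hot the categorical fields
    df[["SQFT", "NWKER", "PCNUM", "CENDIV", "COOLTYPE"]],
    columns=["CENDIV", "COOLTYPE"])

model = RandomForestRegressor(n_estimators=500, random_state=0)
model.fit(X, y)

importances = pd.Series(model.feature_importances_, index=X.columns)
print(importances.sort_values(ascending=False).head(10))
```

Handling categorical fields through encoding is precisely what the tree-based approach affords over the linear-regression benchmarking it is compared against.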
NASA Astrophysics Data System (ADS)
Teh, R. Y.; Reid, M. D.
2014-12-01
Following previous work, we distinguish between genuine N-partite entanglement and full N-partite inseparability. Accordingly, we derive criteria to detect genuine multipartite entanglement using continuous-variable (position and momentum) measurements. Our criteria are similar to, but distinct from, those based on the van Loock-Furusawa inequalities, which detect full N-partite inseparability. We explain how the criteria can be used to detect the genuine N-partite entanglement of continuous-variable states generated from squeezed and vacuum state inputs, including the continuous-variable Greenberger-Horne-Zeilinger state, with explicit predictions for up to N = 9. This makes our work accessible to experiment. For N = 3, we also present criteria for tripartite Einstein-Podolsky-Rosen (EPR) steering. These criteria provide a means to demonstrate a genuine three-party EPR paradox, in which any single party is steerable by the remaining two parties.
NASA Astrophysics Data System (ADS)
Yang, Can; Ma, Cheng; Hu, Linxi; He, Guangqiang
2018-06-01
We present a hierarchical modulation coherent communication protocol, which simultaneously achieves classical optical communication and continuous-variable quantum key distribution. Our hierarchical modulation scheme consists of a quadrature phase-shift keying modulation for classical communication and a four-state discrete modulation for continuous-variable quantum key distribution. The simulation results based on practical parameters show that it is feasible to transmit both quantum information and classical information on a single carrier. We obtained a secure key rate of 10^{-3} to 10^{-1} bits/pulse within 40 kilometers, while the maximum bit error rate for classical information is about 10^{-7}. Because the continuous-variable quantum key distribution protocol is compatible with standard telecommunication technology, we think our hierarchical modulation scheme can be used to upgrade digital communication systems and extend their functionality in the future.
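One way to picture such a two-layer constellation is a coarse QPSK symbol carrying the classical bits with a small four-state displacement carrying the key modulation; the toy sketch below is our illustration of that idea, with invented amplitudes, and is not the authors' parameter set.

```python
# Toy hierarchical constellation: coarse QPSK layer (classical data)
# plus a small four-state displacement (CV-QKD layer) on one carrier.
import numpy as np

A_CLASSICAL = 10.0   # coarse-layer amplitude (illustrative)
A_QUANTUM = 0.1      # fine-layer displacement (illustrative)

phases = np.pi / 4 + np.pi / 2 * np.arange(4)
qpsk = A_CLASSICAL * np.exp(1j * phases)   # 4 coarse points
fine = A_QUANTUM * np.exp(1j * phases)     # 4 fine displacements

# Each transmitted amplitude is one coarse point plus one fine
# displacement: the receiver decodes the coarse layer as classical
# bits and uses the residual fine layer for key generation.
composite = np.array([[q + s for s in fine] for q in qpsk])
print(composite.shape)  # (4, 4) composite coherent-state amplitudes
```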
Continuous-variable protocol for oblivious transfer in the noisy-storage model.
Furrer, Fabian; Gehring, Tobias; Schaffner, Christian; Pacher, Christoph; Schnabel, Roman; Wehner, Stephanie
2018-04-13
Cryptographic protocols are the backbone of our information society. This includes two-party protocols which offer protection against distrustful players. Such protocols can be built from a basic primitive called oblivious transfer. We present and experimentally demonstrate here a quantum protocol for oblivious transfer for optical continuous-variable systems, and prove its security in the noisy-storage model. This model allows us to establish security by sending more quantum signals than an attacker can reliably store during the protocol. The security proof is based on uncertainty relations which we derive for continuous-variable systems, that differ from the ones used in quantum key distribution. We experimentally demonstrate in a proof-of-principle experiment the proposed oblivious transfer protocol for various channel losses by using entangled two-mode squeezed states measured with balanced homodyne detection. Our work enables the implementation of arbitrary two-party quantum cryptographic protocols with continuous-variable communication systems.
Continuous variable quantum key distribution with modulated entangled states.
Madsen, Lars S; Usenko, Vladyslav C; Lassen, Mikael; Filip, Radim; Andersen, Ulrik L
2012-01-01
Quantum key distribution enables two remote parties to grow a shared key, which they can use for unconditionally secure communication over a certain distance. The maximal distance depends on the loss and the excess noise of the connecting quantum channel. Several quantum key distribution schemes based on coherent states and continuous variable measurements are resilient to high loss in the channel, but are strongly affected by small amounts of channel excess noise. Here we propose and experimentally address a continuous variable quantum key distribution protocol that uses modulated fragile entangled states of light to greatly enhance the robustness to channel noise. We experimentally demonstrate that the resulting quantum key distribution protocol can tolerate more noise than the benchmark set by the ideal continuous variable coherent state protocol. Our scheme represents a very promising avenue for extending the distance for which secure communication is possible.
NASA Astrophysics Data System (ADS)
Das, Siddhartha; Siopsis, George; Weedbrook, Christian
2018-02-01
With the significant advancement in quantum computation during the past couple of decades, the exploration of machine-learning subroutines using quantum strategies has become increasingly popular. Gaussian process regression is a widely used technique in supervised classical machine learning. Here we introduce an algorithm for Gaussian process regression using continuous-variable quantum systems that can be realized with technology based on photonic quantum computers, under certain assumptions regarding the distribution of data and the availability of efficient quantum access. Our algorithm shows that by using a continuous-variable quantum computer a dramatic speedup in computing Gaussian process regression can be achieved, i.e., the possibility of exponentially reducing the computation time. Furthermore, our results also include a continuous-variable quantum-assisted singular value decomposition method for nonsparse low-rank matrices, which forms an important subroutine in our Gaussian process regression algorithm.
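For context, the classical computation that such algorithms aim to accelerate is the standard GPR prediction below (generic textbook notation, assumed here): given the kernel matrix K over n training inputs, noise variance \sigma_n^2, targets \mathbf{y}, and test-point kernel vector \mathbf{k}_*,

\[
\bar{f}_* = \mathbf{k}_*^{\top}\left(K + \sigma_n^{2} I\right)^{-1}\mathbf{y}, \qquad \operatorname{var}(f_*) = k(x_*, x_*) - \mathbf{k}_*^{\top}\left(K + \sigma_n^{2} I\right)^{-1}\mathbf{k}_*
\]

The n-by-n linear solve is the classical bottleneck, which is why a quantum matrix-inversion or singular-value-decomposition subroutine translates directly into a speedup for the whole regression.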
NASA Technical Reports Server (NTRS)
Elsner, R. F.; O'Dell, S. L.; Ramsey, B. D.; Weisskopf, M. C.
2011-01-01
We describe a mathematical formalism for determining the mirror shell nodal positions and detector tilts that optimize the spatial resolution averaged over a field-of-view for a nested x-ray telescope, assuming known mirror segment surface prescriptions and known detector focal surface. The results are expressed in terms of ensemble averages over variable combinations of the ray positions and wave vectors in the flat focal plane intersecting the optical axis at the nominal on-axis focus, which can be determined by Monte-Carlo ray traces of the individual mirror shells. This work is part of our continuing efforts to provide analytical tools to aid in the design process for wide-field survey x-ray astronomy missions.
NASA Astrophysics Data System (ADS)
Cao, Jingchen; Peng, Songang; Liu, Wei; Wu, Quantan; Li, Ling; Geng, Di; Yang, Guanhua; Ji, Zhouyu; Lu, Nianduan; Liu, Ming
2018-02-01
We present a continuous surface-potential-based compact model for molybdenum disulfide (MoS2) field effect transistors based on the multiple trapping release theory and the variable-range hopping theory. We also built contact resistance and velocity saturation models based on the analytical surface potential. This model is verified with experimental data and is able to accurately predict the temperature dependent behavior of the MoS2 field effect transistor. Our compact model is coded in Verilog-A, which can be implemented in a computer-aided design environment. Finally, we carried out an active matrix display simulation, which suggested that the proposed model can be successfully applied to circuit design.
Continuous engineering of nano-cocrystals for medical and energetic applications.
Spitzer, D; Risse, B; Schnell, F; Pichot, V; Klaumünzer, M; Schaefer, M R
2014-10-10
Cocrystals, solid mixtures of different molecules on the molecular scale, are supposed to be tailor-made materials with improved employability compared to their pristine individual components in domains such as medicine and explosives. In medicine, cocrystals are obtained by crystallization of active pharmaceutical ingredients with precisely chosen coformers to design medicaments that demonstrate enhanced stability, high solubility, and therefore high bioavailability and optimized drug uptake. Nanoscaling may further advance these characteristics compared to their micron-sized counterparts, because of the larger surface-to-volume ratio of nanoparticles. In the field of energetic materials, cocrystals offer the opportunity to design smart explosives, combining high reactivity with significantly reduced sensitivity, nowadays essential for safe manipulation and handling. Furthermore, cocrystals are used in ferroelectrics, non-linear material response and electronic organics. However, state-of-the-art batch processes produce low volumes of cocrystals of variable quality and have so far produced only micron-sized cocrystals, no nano-cocrystals. Here we demonstrate the continuous preparation of pharmaceutical and energetic micro- and nano-cocrystals using the Spray Flash Evaporation process. Our laboratory-scale pilot plant continuously prepared up to 8 grams per hour of Caffeine/Oxalic acid 2:1, Caffeine/Glutaric acid 1:1, TNT/CL-20 1:1 and HMX/CL-20 1:2 nano- and submicron-sized cocrystals.
Jabbour, Richard J; Shun-Shin, Matthew J; Finegold, Judith A; Afzal Sohaib, S M; Cook, Christopher; Nijjer, Sukhjinder S; Whinnett, Zachary I; Manisty, Charlotte H; Brugada, Josep; Francis, Darrel P
2015-01-06
Biventricular pacing (CRT) shows clear benefits in heart failure with wide QRS, but results in narrow QRS have appeared conflicting. We tested the hypothesis that study design might have influenced findings. We identified all reports of CRT-P/D therapy in subjects with narrow QRS reporting effects on continuous physiological variables. Twelve studies (2074 patients) met these criteria. Studies were stratified by the presence of bias-resistance features: a randomized control arm (rather than a single arm) and blinded outcome measurement. Change in each endpoint was quantified using a standardized effect size (Cohen's d). We conducted separate meta-analyses for each variable in turn, stratified by trial quality. In non-randomized, non-blinded studies, the majority of variables (10 of 12, 83%) showed significant improvement, ranging from a standardized mean effect size of +1.57 (95% CI +0.43 to +2.7) for ejection fraction to +2.87 (+1.78 to +3.95) for NYHA class. In the randomized, non-blinded study, only 3 out of 6 variables (50%) showed improvement. For the randomized blinded studies, 0 out of 9 variables (0%) showed benefit, ranging from -0.04 (-0.31 to +0.22) for ejection fraction to -0.1 (-0.73 to +0.53) for the 6-minute walk test. Differences in degrees of resistance to bias, rather than choice of endpoint, explain the variation between studies of CRT in narrow-QRS heart failure addressing physiological variables. When bias-resistance features are implemented, it becomes clear that these patients do not improve in any tested physiological variable. Guidance from studies without careful planning to resist bias may be far less useful than commonly perceived. © 2015 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley Blackwell.
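For reference, the standardized effect size used above is conventionally computed as below; this is the textbook two-group form, and the meta-analysis's exact variance weighting is not reproduced here.

\[
d = \frac{\bar{x}_1 - \bar{x}_2}{s_{\mathrm{pooled}}}, \qquad s_{\mathrm{pooled}} = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}}
\]

Expressing every endpoint in pooled standard-deviation units is what allows ejection fraction, NYHA class, and walk distance to be compared on one scale across studies.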
Variability, constraints, and creativity. Shedding light on Claude Monet.
Stokes, P D
2001-04-01
Recent experimental research suggests two things. The first is that along with learning how to do something, people also learn how variably or differently to continue doing it. The second is that high variability is maintained by constraining, that is, precluding, a currently successful, often repetitive solution to a problem. In this view, Claude Monet's habitually high level of variability in painting was acquired during his childhood and early apprenticeship and was maintained throughout his adult career by a continuous series of task constraints imposed by the artist on his own work. For Monet, variability was rewarded and rewarding.
On the dynamic rounding-off in analogue and RF optimal circuit sizing
NASA Astrophysics Data System (ADS)
Kotti, Mouna; Fakhfakh, Mourad; Fino, Maria Helena
2014-04-01
Frequently used approaches to solve discrete multivariable optimisation problems consist of computing solutions using a continuous optimisation technique. Then, using heuristics, the variables are rounded off to their nearest available discrete values to obtain a discrete solution. Indeed, in many engineering problems, and particularly in analogue circuit design, component values, such as the geometric dimensions of the transistors, the number of fingers in an integrated capacitor or the number of turns in an integrated inductor, cannot be chosen arbitrarily since they have to obey some technology sizing constraints. However, rounding off the variable values a posteriori can lead to infeasible solutions (solutions that are located too close to the feasible-region boundary) or to degraded results (expulsion from the neighbourhood of a 'sharp' optimum), depending on how the added perturbation affects the solution. Discrete optimisation techniques, such as the dynamic rounding-off (DRO) technique, are therefore needed to overcome this situation. In this paper, we deal with an improvement of the DRO technique: we propose a particle swarm optimisation (PSO)-based DRO technique, and we show, via analogue and RF examples, the necessity of implementing such a routine inside continuous optimisation algorithms.
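A minimal sketch of the dynamic rounding-off idea as we read it: variables are snapped to the discrete grid one at a time, with the remaining continuous variables re-optimized after each snap rather than all being rounded at once. The helper names and the re-optimization callback are our assumptions, not the paper's PSO implementation.

```python
# Dynamic rounding-off skeleton: round one variable per pass and let
# the continuous optimizer adapt the rest before the next rounding.
import numpy as np

def nearest_discrete(value, grid):
    """Snap a value to the closest entry of its legal discrete grid."""
    grid = np.asarray(grid, dtype=float)
    return float(grid[np.argmin(np.abs(grid - value))])

def dynamic_round(x_cont, grids, reoptimize):
    """grids[i]: legal values for variable i.
    reoptimize(x, fixed): hypothetical callback that re-runs the
    continuous search (e.g. PSO) with the indices in `fixed` held at
    their discrete values, returning the adapted vector."""
    x = np.array(x_cont, dtype=float)
    fixed = {}
    for i in range(len(x)):
        fixed[i] = nearest_discrete(x[i], grids[i])
        x[i] = fixed[i]
        x = reoptimize(x, fixed)   # adapt still-continuous variables
    return x
```

Rounding incrementally keeps each perturbation small, which is exactly what guards against expulsion from the neighbourhood of a sharp optimum or a step across the feasibility boundary.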
Fields, Dail; Roman, Paul M; Blum, Terry C
2012-06-01
To examine the relationships among general management systems, patient-focused quality management/continuous quality improvement (TQM/CQI) processes, resource availability, and multiple dimensions of substance use disorder (SUD) treatment. Data are from a nationally representative sample of 221 SUD treatment centers through the National Treatment Center Study (NTCS). The design was a cross-sectional field study using latent variable structural equation models. The key variables are management practices, TQM/CQI practices, resource availability, and treatment center performance. Interviews and questionnaires provided data from treatment center administrative directors and clinical directors in 2007-2008. Patient-focused TQM/CQI practices fully mediated the relationship between internal management practices and performance. The effects of TQM/CQI on performance are significantly larger for treatment centers with higher levels of staff per patient. Internal management practices may create a setting that supports implementation of specific patient-focused practices and protocols inherent to TQM/CQI processes. However, the positive effects of internal management practices on treatment center performance occur through use of specific patient-focused TQM/CQI practices and have more impact when greater amounts of supporting resources are present. © Health Research and Educational Trust.
NASA Technical Reports Server (NTRS)
DeSmidt, Hans A.; Smith, Edward C.; Bill, Robert C.; Wang, Kon-Well
2013-01-01
This project develops comprehensive modeling and simulation tools for analysis of variable rotor speed helicopter propulsion system dynamics. The Comprehensive Variable-Speed Rotorcraft Propulsion Modeling (CVSRPM) tool developed in this research is used to investigate coupled rotor/engine/fuel control/gearbox/shaft/clutch/flight control system dynamic interactions for several variable rotor speed mission scenarios. In this investigation, a prototypical two-speed Dual-Clutch Transmission (DCT) is proposed and designed to achieve 50 percent rotor speed variation. The comprehensive modeling tool developed in this study is utilized to analyze the two-speed shift response of both a conventional single rotor helicopter and a tiltrotor drive system. In the tiltrotor system, both a Parallel Shift Control (PSC) strategy and a Sequential Shift Control (SSC) strategy for constant and variable forward speed mission profiles are analyzed. Under the PSC strategy, selecting clutch shift-rate results in a design tradeoff between transient engine surge margins and clutch frictional power dissipation. In the case of SSC, clutch power dissipation is drastically reduced in exchange for the necessity to disengage one engine at a time which requires a multi-DCT drive system topology. In addition to comprehensive simulations, several sections are dedicated to detailed analysis of driveline subsystem components under variable speed operation. In particular an aeroelastic simulation of a stiff in-plane rotor using nonlinear quasi-steady blade element theory was conducted to investigate variable speed rotor dynamics. It was found that 2/rev and 4/rev flap and lag vibrations were significant during resonance crossings with 4/rev lagwise loads being directly transferred into drive-system torque disturbances. To capture the clutch engagement dynamics, a nonlinear stick-slip clutch torque model is developed. Also, a transient gas-turbine engine model based on first principles mean-line compressor and turbine approximations is developed. Finally an analysis of high frequency gear dynamics including the effect of tooth mesh stiffness variation under variable speed operation is conducted including experimental validation. Through exploring the interactions between the various subsystems, this investigation provides important insights into the continuing development of variable-speed rotorcraft propulsion systems.
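The stick-slip clutch torque model mentioned above is commonly written as a two-regime Coulomb friction law; the generic form below is our sketch of that idea, not the paper's exact formulation (F_N: normal force, R_eff: effective friction radius, \mu_s and \mu_k: static and kinetic friction coefficients, \Delta\omega: slip speed).

\[
T_{\mathrm{clutch}} =
\begin{cases}
\mu_k F_N R_{\mathrm{eff}}\,\operatorname{sgn}(\Delta\omega), & \Delta\omega \neq 0 \quad \text{(slipping)}\\
\min\left(|T_{\mathrm{load}}|,\; \mu_s F_N R_{\mathrm{eff}}\right)\operatorname{sgn}(T_{\mathrm{load}}), & \Delta\omega = 0 \quad \text{(stuck)}
\end{cases}
\]

During a shift, modulating F_N sets the slipping torque and hence the engine-side load; the frictional power dissipated in the clutch is the product of slipping torque and slip speed, which is the quantity traded against transient surge margin under the PSC strategy.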
Waring, W S; Rhee, J Y; Bateman, D N; Leggett, G E; Jamie, H
2008-11-01
Antidepressant overdose may be associated with significant cardiotoxicity, and recent data have shown that acute toxic effects are associated with impaired heart rate variability. This study was designed to examine the feasibility of non-invasive heart rate variability recording in patients who present to hospital after deliberate antidepressant ingestion. This was a prospective study of 72 consecutive patients attending the Emergency Department after deliberate antidepressant overdose and 72 age-matched patients who ingested paracetamol, as a control group. Single time-point continuous electrocardiographic recordings were used to allow spectral analyses of heart rate variability in the low-frequency (LF) and high-frequency (HF) domains. The LF:HF ratio was used to represent overall sympathovagal cardiac activity. Antidepressant overdose was associated with reduced overall heart rate variability: 1329 vs. 2018 ms^2 (P = 0.0239 by Mann-Whitney test). Variability in the LF domain was higher (64.8 vs. 49.8, P = 0.0006), whereas that in the HF domain was lower (24.3 vs. 36.4, P = 0.0001), and the LF:HF ratio was higher in the antidepressant group (2.4 vs. 1.2, P = 0.0003). Antidepressant overdose is associated with impaired heart rate variability in a pattern consistent with excess cardiac sympathetic activity. Further work is required to establish the significance of these findings and to explore whether the impairment of heart rate variability may be used to predict the development of arrhythmia in this patient group.
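The LF and HF powers reported above are typically obtained by integrating a power spectral density over fixed frequency bands; the sketch below uses Welch's method with the conventional band limits (LF: 0.04-0.15 Hz, HF: 0.15-0.40 Hz) and assumes an evenly resampled RR series, since the study's exact preprocessing is not described in the abstract.

```python
# LF/HF spectral powers from an evenly resampled RR-interval series.
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

def lf_hf(rr_ms, fs=4.0):
    """rr_ms: RR intervals (ms) resampled to an even grid at fs Hz."""
    f, pxx = welch(rr_ms - np.mean(rr_ms), fs=fs, nperseg=256)

    def band_power(lo, hi):
        mask = (f >= lo) & (f < hi)
        return trapezoid(pxx[mask], f[mask])

    lf = band_power(0.04, 0.15)   # sympathetic + vagal contributions
    hf = band_power(0.15, 0.40)   # predominantly vagal
    return lf, hf, lf / hf        # LF:HF as a sympathovagal index
```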
Generalized Processing Tree Models: Jointly Modeling Discrete and Continuous Variables.
Heck, Daniel W; Erdfelder, Edgar; Kieslich, Pascal J
2018-05-24
Multinomial processing tree models assume that discrete cognitive states determine observed response frequencies. Generalized processing tree (GPT) models extend this conceptual framework to continuous variables such as response times, process-tracing measures, or neurophysiological variables. GPT models assume finite-mixture distributions, with weights determined by a processing tree structure, and continuous components modeled by parameterized distributions such as Gaussians with separate or shared parameters across states. We discuss identifiability, parameter estimation, model testing, a modeling syntax, and the improved precision of GPT estimates. Finally, a GPT version of the feature comparison model of semantic categorization is applied to computer-mouse trajectories.
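In generic notation (assumed here, not the authors'), the GPT likelihood for an observed continuous variable y is a finite mixture whose weights come from the processing tree:

\[
f(y \mid \boldsymbol{\theta}, \boldsymbol{\eta}) = \sum_{s=1}^{S} P_s(\boldsymbol{\theta})\, g(y \mid \eta_s), \qquad \sum_{s=1}^{S} P_s(\boldsymbol{\theta}) = 1
\]

where each P_s(\boldsymbol{\theta}) is a product of tree-link probabilities along the branches reaching latent state s, and g is a parameterized component density (e.g., a Gaussian with state-specific or shared mean and variance). Restricting g to discrete response categories recovers the ordinary multinomial processing tree model.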
Equivalence between entanglement and the optimal fidelity of continuous variable teleportation.
Adesso, Gerardo; Illuminati, Fabrizio
2005-10-07
We devise the optimal form of Gaussian resource states enabling continuous-variable teleportation with maximal fidelity. We show that a nonclassical optimal fidelity of N-user teleportation networks is necessary and sufficient for N-party entangled Gaussian resources, yielding an estimator of multipartite entanglement. The entanglement of teleportation is equivalent to the entanglement of formation in a two-user protocol, and to the localizable entanglement in a multiuser one. Finally, we show that the continuous-variable tangle, quantifying entanglement sharing in three-mode Gaussian states, is defined operationally in terms of the optimal fidelity of a tripartite teleportation network.
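For orientation, the standard benchmark against which such fidelities are judged is the unit-gain Braunstein-Kimble protocol with a two-mode squeezed vacuum resource of squeezing parameter r; the coherent-state teleportation fidelity in that case is a textbook result, not the paper's optimized expression:

```latex
% Unit-gain teleportation of a coherent state with a two-mode squeezed
% vacuum resource of squeezing parameter r:
F = \frac{1}{1 + e^{-2r}},
% which exceeds the classical bound F = 1/2 for any r > 0 and tends
% to 1 as r \to \infty.
```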
NASA Astrophysics Data System (ADS)
Yoshikawa, Jun-ichi; Yokoyama, Shota; Kaji, Toshiyuki; Sornphiphatphong, Chanond; Shiozawa, Yu; Makino, Kenzo; Furusawa, Akira
2016-09-01
In recent quantum optical continuous-variable experiments, the number of fully inseparable light modes has drastically increased by the introduction of a multiplexing scheme either in the time domain or in the frequency domain. Here, modifying the time-domain multiplexing experiment reported in the work of Yokoyama et al. [Nat. Photonics 7, 982 (2013)], we demonstrate the successive generation of fully inseparable light modes for more than one million modes. The resulting multi-mode state is useful as a dual-rail continuous variable cluster state. We circumvent the previous problem of optical phase drifts, which had limited the number of fully inseparable light modes to around ten thousand, by continuous feedback control of the optical system.
NASA Astrophysics Data System (ADS)
Downing, B. D.; Pellerin, B. A.; Bergamaschi, B. A.; Saraceno, J.
2011-12-01
Studying controls on geochemical processes in rivers and streams is difficult because concentration and composition often change rapidly in response to physical and biological forcings. Understanding biogeochemical dynamics in rivers will improve current understanding of the role of watershed sources in carbon cycling, river and stream ecology, and loads to estuaries and oceans. Continuous measurements of dissolved organic carbon (DOC), nitrate (NO3-) and soluble reactive phosphate (SRP) concentrations are now possible, along with some information about DOC composition. In situ sensors designed to measure these constituents provide high frequency, real-time data that can elucidate hydrologic and biogeochemical controls that are difficult to detect using more traditional sampling approaches. Here we present a coupled approach, using in situ optical instrumentation with discharge measurements to provide quantitative estimates of constituent loads, to investigate C, NO3- and SRP sources and processing in the Sacramento River, CA, USA. Continuous measurement of DOC concentration was conducted over the course of an entire year using a miniature in situ fluorometer (Turner Designs Cyclops) designed to measure chromophoric dissolved organic matter fluorescence (FDOM). Nitrate was measured concurrently using a Satlantic SUNA, and phosphate was measured using a WET Labs Cycle-P instrument for a two-week period in July 2011. Continuous measurements from these instruments, paired with continuous measurement of physical water quality variables such as temperature, pH, specific conductance, dissolved oxygen, and turbidity, were used to investigate the physical and chemical dynamics of DOC, NO3- and SRP over varying time scales. Deploying these instruments at pre-existing USGS discharge gages allowed for the calculation of instantaneous and integrated constituent fluxes, as well as filling in gaps in our understanding of biogeochemical processes and transport. Results from the study show that diurnal, event-driven and seasonal changes are key to calculating accurate watershed fluxes and detecting transient sources of DOC, NO3- and SRP.
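The load computation underlying such flux estimates is essentially a unit conversion applied to paired sensor and gage data; a generic sketch (not the study's exact integration scheme) follows.

```python
def constituent_load_kg_per_day(conc_mg_per_l, discharge_m3_per_s):
    """Instantaneous constituent load from paired sensor and gage data.

    1 mg/L = 1 g/m^3, so load [g/s] = C [g/m^3] * Q [m^3/s]; multiplying
    by 86400 s/day and dividing by 1000 g/kg yields kg/day.
    """
    return conc_mg_per_l * discharge_m3_per_s * 86400.0 / 1000.0

# Example: DOC at 3.2 mg/L with a discharge of 450 m^3/s
doc_load = constituent_load_kg_per_day(3.2, 450.0)   # ~124,416 kg/day
```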
Experiences and recommendations in deploying a real-time, water quality monitoring system
NASA Astrophysics Data System (ADS)
O'Flynn, B.; Regan, F.; Lawlor, A.; Wallace, J.; Torres, J.; O'Mathuna, C.
2010-12-01
Monitoring of water quality at a river basin level to meet the requirements of the Water Framework Directive (WFD) using conventional sampling and laboratory-based techniques poses a significant financial burden. Wireless sensing systems offer the potential to reduce these costs considerably, as well as provide more useful, continuous monitoring capabilities by giving an accurate picture of changing environmental and water quality conditions in real time. It is unlikely that traditional spot/grab sampling will provide a reasonable estimate of the true maximum and/or mean concentration for a particular physicochemical variable in a water body with marked temporal variability. Persistent fluctuations are likely to be detected only through continuous measurements, which are also capable of capturing sporadic concentration peaks. Thus, in situ sensors capable of continuous sampling of the parameters required under the WFD would provide more up-to-date information, cut monitoring costs and give better coverage of long-term trends in pollutant concentrations. DEPLOY is a technology demonstration project, begun in August 2008 with planning, station selection and design, that aims to show how state-of-the-art technology can be implemented for cost-effective, continuous and real-time monitoring of a river catchment. The DEPLOY project is seen as an important building block in the realization of a wide area autonomous network of sensors capable of monitoring the spatial and temporal distribution of important water quality and environmental target parameters. The demonstration sites are based in the River Lee, which flows through Ireland's second largest city, Cork, and were designed to include monitoring stations in five zones considered typical of significant river systems; these monitor water quality parameters such as pH, temperature, depth, conductivity, turbidity and dissolved oxygen. Over one million data points have been collected since the multi-sensor system was deployed in May 2009. Extreme meteorological events occurred during the deployment period, and the collection of real-time water quality data, together with the knowledge, experience and recommendations gained for future deployments, is discussed.
Curve Number Application in Continuous Runoff Models: An Exercise in Futility?
NASA Astrophysics Data System (ADS)
Lamont, S. J.; Eli, R. N.
2006-12-01
The suitability of applying the NRCS (Natural Resource Conservation Service) Curve Number (CN) to continuous runoff prediction is examined by studying the dependence of CN on several hydrologic variables in the context of a complex nonlinear hydrologic model. The continuous watershed model Hydrologic Simulation Program-FORTRAN (HSPF) was employed, using a simple theoretical watershed, in two numerical procedures designed to investigate the influence of soil type, soil depth, storm depth, storm distribution, and initial abstraction ratio value on the calculated CN value. This study stems from a concurrent project involving the design of a hydrologic modeling system to support the Cumulative Hydrologic Impact Assessments (CHIA) of over 230 coal-mined watersheds throughout West Virginia. Because of the large number of watersheds and the limited availability of data necessary for HSPF calibration, it was initially proposed that predetermined CN values be used as a surrogate for those HSPF parameters controlling direct runoff. A soil physics model was developed to relate CN values to those HSPF parameters governing soil moisture content and infiltration behavior, with the remaining HSPF parameters being adopted from previous calibrations on real watersheds. A numerical procedure was then adopted to back-calculate CN values from the theoretical watershed using antecedent moisture conditions equivalent to the NRCS Antecedent Runoff Condition (ARC) II. This procedure used the direct runoff produced from a cyclic synthetic storm event time series input to HSPF. A second numerical method of CN determination, using real time series rainfall data, was used to provide a comparison to those CN values determined using the synthetic storm event time series. The calculated CN values resulting from both numerical methods demonstrated a nonlinear dependence on all of the computational variables listed above. It was concluded that the use of the Curve Number as a surrogate for the selected subset of HSPF parameters could not be justified. These results suggest that use of the Curve Number in other complex continuous time series hydrologic models may not be appropriate, given the limitations inherent in the definition of the NRCS CN method.
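For reference, the NRCS Curve Number method at the center of this analysis computes direct runoff from storm rainfall as:

```latex
% NRCS Curve Number runoff equation (depths in inches):
Q = \frac{(P - I_a)^2}{P - I_a + S} \quad (P > I_a), \qquad
I_a = \lambda S, \qquad S = \frac{1000}{\mathrm{CN}} - 10,
% where Q is direct runoff, P is storm rainfall, S is the potential
% maximum retention, and \lambda (classically 0.2) is the initial
% abstraction ratio varied in the study.
```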
Towards 10 meV resolution: The design of an ultrahigh resolution soft X-ray RIXS spectrometer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dvorak, Joseph; Jarrige, Ignace; Bisogni, Valentina
2016-11-10
Here we present the optical design of the Centurion soft X-ray resonant inelastic X-ray scattering (RIXS) spectrometer to be located on the SIX beamline at NSLS-II. The spectrometer is designed to reach a resolving power of 100 000 at 1000 eV at its best resolution. It is also designed to have continuously variable 2θ motion over a range of 112° using a custom triple rotating flange. We have analyzed several possible spectrometer designs capable of reaching the target resolution. After careful analysis, we have adopted a Hettrick-Underwood spectrometer design, with an additional plane mirror to maintain a fixed direction for the outgoing beam. The spectrometer can cancel defocus and coma aberrations at all energies, has an erect focal plane, and minimizes mechanical motions of the detector. When the beamline resolution is accounted for, the net spectral resolution will be 14 meV at 1000 eV. Lastly, this will open up many low energy excitations to study and will expand greatly the power of soft X-ray RIXS.
Soft X-ray variability over the present minimum of solar activity as observed by SphinX
NASA Astrophysics Data System (ADS)
Gburek, S.; Siarkowski, M.; Kepa, A.; Sylwester, J.; Kowalinski, M.; Bakala, J.; Podgorski, P.; Kordylewski, Z.; Plocieniak, S.; Sylwester, B.; Trzebinski, W.; Kuzin, S.
2011-04-01
Solar Photometer in X-rays (SphinX) is an instrument designed to observe the Sun in X-rays in the energy range 0.85-15.00 keV. SphinX is incorporated within the Russian TESIS X and EUV telescope complex aboard the CORONAS-Photon satellite, which was launched on January 30, 2009 at 13:30 UT from the Plesetsk Cosmodrome, northern Russia. Since February 2009, SphinX has been measuring solar X-ray radiation nearly continuously. The principle of SphinX operation and the content of the instrument data archives are described. Issues related to the dissemination of SphinX calibration and data, the locations of repository mirrors, and the types of data and metadata are discussed. Variability of the soft X-ray solar flux is studied using data collected by SphinX over the entire mission duration.
ERIC Educational Resources Information Center
Kondakci, Yasar; Zayim, Merve; Beycioglu, Kadir; Sincar, Mehmet; Ugurlu, Celal T
2016-01-01
This study aims at building a theoretical base for continuous change in education and using this base to test the mediating roles of two key contextual variables, knowledge sharing and trust, in the relationship between the distributed leadership perceptions and continuous change behaviours of teachers. Data were collected from 687 public school…
Shape Optimization and Modular Discretization for the Development of a Morphing Wingtip
NASA Astrophysics Data System (ADS)
Morley, Joshua
Better knowledge in the areas of aerodynamics and optimization has allowed designers to develop efficient wingtip structures in recent years. However, the requirements faced by wingtip devices can be considerably different amongst an aircraft's flight regimes. Traditional static wingtip devices are then a compromise between conflicting requirements, resulting in less than optimal performance within each regime. Alternatively, a morphing wingtip can reconfigure, leading to improved performance over a range of dissimilar flight conditions. Developed within this thesis is a modular morphing wingtip concept that centers on the use of variable geometry truss mechanisms to permit morphing. A conceptual design framework is established to aid in the development of the concept. The framework uses a metaheuristic optimization procedure to determine optimal continuous wingtip configurations. The configurations are then discretized for the modular concept. The functionality of the framework is demonstrated through a design study on a hypothetical wing/winglet within the thesis.
Optimization of a GO2/GH2 Swirl Coaxial Injector Element
NASA Technical Reports Server (NTRS)
Tucker, P. Kevin; Shyy, Wei; Vaidyanathan, Rajkumar
1999-01-01
An injector optimization methodology, method i, is used to investigate optimal design points for a gaseous oxygen/gaseous hydrogen (GO2/GH2) swirl coaxial injector element. The element is optimized in terms of design variables such as fuel pressure drop, ΔP_f, oxidizer pressure drop, ΔP_o, combustor length, L_comb, and full cone swirl angle, θ, for a given mixture ratio and chamber pressure. Dependent variables such as energy release efficiency, ERE, wall heat flux, Q_w, injector heat flux, Q_inj, relative combustor weight, W_rel, and relative injector cost, C_rel, are calculated and then correlated with the design variables. An empirical design methodology is used to generate these responses for 180 combinations of input variables. Method i is then used to generate response surfaces for each dependent variable. Desirability functions based on dependent variable constraints are created and used to facilitate development of composite response surfaces representing some, or all, of the five dependent variables in terms of the input variables. Two examples illustrating the utility and flexibility of method i are discussed in detail. First, joint response surfaces are constructed by sequentially adding dependent variables. Optimum designs are identified after addition of each variable and the effect each variable has on the design is shown. This stepwise demonstration also highlights the importance of including variables such as weight and cost early in the design process. Secondly, using the composite response surface that includes all five dependent variables, unequal weights are assigned to emphasize certain variables relative to others. Here, method i is used to enable objective trade studies on design issues such as component life and thrust to weight ratio.
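The weighted combination of dependent variables described above is commonly built from Derringer-Suich-style desirability functions; the sketch below shows that generic construction, which is assumed here for illustration and is not necessarily the exact form used in the report.

```python
import numpy as np

def desirability_smaller_is_better(y, y_best, y_worst, s=1.0):
    """Derringer-Suich-style desirability for a response to be minimized:
    1 at or below y_best, 0 at or above y_worst, power-law in between."""
    d = np.clip((y_worst - y) / (y_worst - y_best), 0.0, 1.0)
    return d ** s

def composite_desirability(d_values, weights=None):
    """Weighted geometric mean of the individual desirabilities."""
    d = np.asarray(d_values, dtype=float)
    w = np.ones_like(d) if weights is None else np.asarray(weights, dtype=float)
    return float(np.exp(np.sum(w * np.log(d + 1e-12)) / np.sum(w)))

# Example: combine desirabilities for ERE, Q_w, Q_inj, W_rel, C_rel with
# extra weight on weight and cost, mirroring the unequal-weight trade study.
print(composite_desirability([0.9, 0.7, 0.8, 0.6, 0.75], weights=[1, 1, 1, 2, 2]))
```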
A Non-Stationary Approach for Estimating Future Hydroclimatic Extremes Using Monte-Carlo Simulation
NASA Astrophysics Data System (ADS)
Byun, K.; Hamlet, A. F.
2017-12-01
There is substantial evidence that observed hydrologic extremes (e.g. floods, extreme stormwater events, and low flows) are changing and that climate change will continue to alter the probability distributions of hydrologic extremes over time. These non-stationary risks imply that conventional approaches for designing hydrologic infrastructure (or making other climate-sensitive decisions) based on retrospective analysis and stationary statistics will become increasingly problematic through time. To develop a framework for assessing risks in a non-stationary environment, our study develops a new approach using a super ensemble of simulated hydrologic extremes based on Monte Carlo (MC) methods. Specifically, using statistically downscaled future GCM projections from the CMIP5 archive (using the Hybrid Delta (HD) method), we extract daily precipitation (P) and temperature (T) at 1/16 degree resolution based on a group of moving 30-yr windows within a given design lifespan (e.g. 10, 25, 50-yr). Using these T and P scenarios we simulate daily streamflow using the Variable Infiltration Capacity (VIC) model for each year of the design lifespan and fit a Generalized Extreme Value (GEV) probability distribution to the simulated annual extremes. MC experiments are then used to construct a random series of 10,000 realizations of the design lifespan, estimating annual extremes using the unique GEV parameters estimated for each individual year of the design lifespan. Our preliminary results for two watersheds in the Midwest show that there are considerable differences in the extreme values for a given percentile between the conventional and non-stationary MC approaches. Design standards based on our non-stationary approach are also directly dependent on the design lifespan of infrastructure, a sensitivity which is notably absent from conventional approaches based on retrospective analysis. The experimental approach can be applied to a wide range of hydroclimatic variables of interest.
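A minimal sketch of the Monte Carlo step described above is given below, assuming per-year GEV parameters have already been fitted (e.g., to simulated annual maxima under a moving climate window); the function name, seed, and 99th-percentile design choice are illustrative, not the study's.

```python
import numpy as np
from scipy.stats import genextreme

def nonstationary_design_extreme(gev_params_by_year, n_real=10000, pct=99):
    """Monte Carlo sketch of a non-stationary design extreme.

    gev_params_by_year : list of (shape, loc, scale) tuples, one per year
    of the design lifespan, following scipy's shape-parameter convention.
    Each realization draws one annual extreme per year from that year's
    GEV; the design value is a percentile of the lifespan maxima.
    """
    rng = np.random.default_rng(42)
    maxima = np.empty(n_real)
    for i in range(n_real):
        annual = [genextreme.rvs(c, loc=loc, scale=scale, random_state=rng)
                  for (c, loc, scale) in gev_params_by_year]
        maxima[i] = max(annual)
    return np.percentile(maxima, pct)
```

Because each year draws from its own distribution, lengthening the design lifespan changes the answer directly, which is the lifespan sensitivity noted in the abstract.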
On the Asymptotic Relative Efficiency of Planned Missingness Designs.
Rhemtulla, Mijke; Savalei, Victoria; Little, Todd D
2016-03-01
In planned missingness (PM) designs, certain data are set a priori to be missing. PM designs can increase validity and reduce cost; however, little is known about the loss of efficiency that accompanies these designs. The present paper compares PM designs to reduced sample (RN) designs that have the same total number of data points concentrated in fewer participants. In 4 studies, we consider models for both observed and latent variables, designs that do or do not include an "X set" of variables with complete data, and a full range of between- and within-set correlation values. All results are obtained using asymptotic relative efficiency formulas, and thus no data are generated; this novel approach allows us to examine whether PM designs have theoretical advantages over RN designs removing the impact of sampling error. Our primary findings are that (a) in manifest variable regression models, estimates of regression coefficients have much lower relative efficiency in PM designs as compared to RN designs, (b) relative efficiency of factor correlation or latent regression coefficient estimates is maximized when the indicators of each latent variable come from different sets, and (c) the addition of an X set improves efficiency in manifest variable regression models only for the parameters that directly involve the X-set variables, but it substantially improves efficiency of most parameters in latent variable models. We conclude that PM designs can be beneficial when the model of interest is a latent variable model; recommendations are made for how to optimize such a design.
Thomas, Patricia E; LeFlore, Judy
2013-01-01
Infants born prematurely with respiratory distress syndrome are at high risk for complications from mechanical ventilation. Strategies are needed to minimize their days on the ventilator. The purpose of this study was to compare extubation success rates in infants treated with 2 different types of continuous positive airway pressure devices. A retrospective cohort study design was used. Data were retrieved from electronic medical records for patients in a large, metropolitan, level III neonatal intensive care unit. A sample of 194 premature infants with respiratory distress syndrome was selected, 124 of whom were treated with nasal intermittent positive pressure ventilation and 70 with bi-level variable flow nasal continuous positive airway pressure (bi-level nasal continuous positive airway pressure). Infants in both groups had high extubation success rates (79% of nasal intermittent positive pressure ventilation group and 77% of bi-level nasal continuous positive airway pressure group). Although infants in the bi-level nasal continuous positive airway pressure group were extubated sooner, there was no difference in duration of oxygen therapy between the 2 groups. Promoting early extubation and extubation success is a vital strategy to reduce complications of mechanical ventilation that adversely affect premature infants with respiratory distress syndrome.
Integrative modeling and novel particle swarm-based optimal design of wind farms
NASA Astrophysics Data System (ADS)
Chowdhury, Souma
To meet the energy needs of the future, while seeking to decrease our carbon footprint, a greater penetration of sustainable energy resources such as wind energy is necessary. However, a consistent growth of wind energy (especially in the wake of unfortunate policy changes and reported under-performance of existing projects) calls for a paradigm shift in wind power generation technologies. This dissertation develops a comprehensive methodology to explore, analyze and define the interactions between the key elements of wind farm development, and establish the foundation for designing high-performing wind farms. The primary contribution of this research is the effective quantification of the complex combined influence of wind turbine features, turbine placement, farm-land configuration, nameplate capacity, and wind resource variations on the energy output of the wind farm. A new Particle Swarm Optimization (PSO) algorithm, uniquely capable of preserving population diversity while addressing discrete variables, is also developed to provide powerful solutions towards optimizing wind farm configurations. In conventional wind farm design, the major elements that influence the farm performance are often addressed individually. The failure to fully capture the critical interactions among these factors introduces important inaccuracies in the projected farm performance and leads to suboptimal wind farm planning. In this dissertation, we develop the Unrestricted Wind Farm Layout Optimization (UWFLO) methodology to model and optimize the performance of wind farms. The UWFLO method obviates traditional assumptions regarding (i) turbine placement, (ii) turbine-wind flow interactions, (iii) variation of wind conditions, and (iv) types of turbines (single/multiple) to be installed. The allowance of multiple turbines, which demands complex modeling, is rare in the existing literature. The UWFLO method also significantly advances the state of the art in wind farm optimization by allowing simultaneous optimization of the type and the location of the turbines. Layout optimization (using UWFLO) of a hypothetical 25-turbine commercial-scale wind farm provides a remarkable 4.4% increase in capacity factor compared to a conventional array layout. A further 2% increase in capacity factor is accomplished when the types of turbines are also optimally selected. The scope of turbine selection and placement however depends on the land configuration and the nameplate capacity of the farm. Such dependencies are not clearly defined in the existing literature. We develop response surface-based models, which implicitly employ UWFLO, to quantify and analyze the roles of these other crucial design factors in optimal wind farm planning. The wind pattern at a site can vary significantly from year to year, which is not adequately captured by conventional wind distribution models. The resulting ill-predictability of the annual distribution of wind conditions introduces significant uncertainties in the estimated energy output of the wind farm. A new method is developed to characterize these wind resource uncertainties and model the propagation of these uncertainties into the estimated farm output. The overall wind pattern/regime also varies from one region to another, which demands turbines with capabilities uniquely suited for different wind regimes. Using the UWFLO method, we model the performance potential of currently available turbines for different wind regimes, and quantify their feature-based expected market suitability. 
Such models can initiate an understanding of the product variation that current turbine manufacturers should pursue to adequately satisfy the needs of the naturally diverse wind energy market. The wind farm design problems formulated in this dissertation involve highly multimodal objective and constraint functions and a large number of continuous and discrete variables. An effective modification of the PSO algorithm is developed to address such challenging problems. Continuous search, as in conventional PSO, is implemented as the primary search strategy; discrete variables are then updated using a nearest-allowed-discrete-point criterion. Premature stagnation of particles due to loss of population diversity is one of the primary drawbacks of the basic PSO dynamics. A new measure of population diversity is formulated, which, unlike existing metrics, captures both the overall spread and the distribution of particles in the variable space. This diversity metric is then used to apply (i) an adaptive repulsion away from the best global solution in the case of continuous variables, and (ii) a stochastic update of the discrete variables. The new PSO algorithm provides competitive performance compared to a popular genetic algorithm when applied to solve a comprehensive set of 98 mixed-integer nonlinear programming problems.
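The mixed-variable handling described above can be sketched compactly. The following illustrative update applies standard continuous PSO dynamics to all dimensions and then snaps discrete dimensions to the nearest allowed value; the dissertation's diversity-based repulsion and stochastic discrete update are omitted for brevity, so this is a simplified stand-in rather than the actual algorithm.

```python
import numpy as np

def pso_step(x, v, pbest, gbest, discrete_sets,
             w=0.7, c1=1.5, c2=1.5, rng=None):
    """One velocity/position update of a mixed-variable PSO (sketch).

    x, v, pbest : (n_particles, n_dims) arrays; gbest : (n_dims,) array.
    discrete_sets : dict mapping a dimension index to an array of its
    allowed values. Continuous dynamics drive all dimensions; discrete
    dimensions are then snapped to the nearest allowed value.
    """
    rng = rng or np.random.default_rng()
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    for dim, allowed in discrete_sets.items():
        allowed = np.asarray(allowed, dtype=float)
        idx = np.abs(allowed[None, :] - x[:, dim][:, None]).argmin(axis=1)
        x[:, dim] = allowed[idx]        # nearest-allowed-discrete-point snap
    return x, v
```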
Smart optical writing head design for laser-based manufacturing
NASA Astrophysics Data System (ADS)
Amin, M. Junaid; Riza, Nabeel A.
2014-03-01
Proposed is a smart optical writing head design suitable for high-precision industrial laser-based machining and manufacturing applications. The design uses an Electronically Controlled Variable Focus Lens (ECVFL), which enables the smallest achievable writing-head spot sizes for axial target distances reaching 8 meters. A proof-of-concept experiment is conducted using a visible-wavelength laser with a collimated beam coupled to beam conditioning optics, which include an electromagnetically actuated deformable-membrane liquid ECVFL cascaded with a bias convex lens of fixed focal length. Electronic tuning and control of the ECVFL keeps the laser writing head far-field spot beam radius under 1 mm, as demonstrated over a target range of 20 cm to 800 cm. Applications for the proposed writing head design, which can accommodate both continuous-wave and pulsed sources, include laser machining, high-precision industrial molding of components, as well as materials processing requiring material-sensitive optical power density control.
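The cascade of the tunable lens with a fixed bias lens can be reasoned about with the standard two-thin-lens combination formula (a textbook relation, not taken from the paper): for focal lengths f_ECVFL and f_bias separated by distance d,

```latex
\frac{1}{f_{\mathrm{eff}}} = \frac{1}{f_{\mathrm{ECVFL}}}
  + \frac{1}{f_{\mathrm{bias}}}
  - \frac{d}{f_{\mathrm{ECVFL}}\, f_{\mathrm{bias}}},
% so electronically sweeping f_ECVFL sweeps the effective focus, which
% is how the spot can be kept small over a wide range of target distances.
```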
Seven Pervasive Statistical Flaws in Cognitive Training Interventions
Moreau, David; Kirk, Ian J.; Waldie, Karen E.
2016-01-01
The prospect of enhancing cognition is undoubtedly among the most exciting research questions currently bridging psychology, neuroscience, and evidence-based medicine. Yet, convincing claims in this line of work stem from designs that are prone to several shortcomings, thus threatening the credibility of training-induced cognitive enhancement. Here, we present seven pervasive statistical flaws in intervention designs: (i) lack of power; (ii) sampling error; (iii) continuous variable splits; (iv) erroneous interpretations of correlated gain scores; (v) single transfer assessments; (vi) multiple comparisons; and (vii) publication bias. Each flaw is illustrated with a Monte Carlo simulation to present its underlying mechanisms, gauge its magnitude, and discuss potential remedies. Although not restricted to training studies, these flaws are typically exacerbated in such designs, due to ubiquitous practices in data collection or data analysis. The article reviews these practices, so as to avoid common pitfalls when designing or analyzing an intervention. More generally, it is also intended as a reference for anyone interested in evaluating claims of cognitive enhancement. PMID:27148010
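As an example of the kind of Monte Carlo demonstration the paper describes, the sketch below gauges flaw (iii), continuous variable splits, by comparing the rejection rate of a correlation test on the intact predictor with a t-test after a median split; the sample size, effect size, and simulation count are arbitrary illustrative choices, not the paper's settings.

```python
import numpy as np
from scipy import stats

def median_split_power(n=60, r=0.3, n_sim=5000, alpha=0.05, seed=1):
    """Monte Carlo sketch of flaw (iii): dichotomizing a continuous
    predictor. Compares rejection rates of a correlation test on the
    intact predictor versus a t-test after a median split."""
    rng = np.random.default_rng(seed)
    hits_cont = hits_split = 0
    for _ in range(n_sim):
        x = rng.standard_normal(n)
        y = r * x + np.sqrt(1.0 - r**2) * rng.standard_normal(n)
        if stats.pearsonr(x, y)[1] < alpha:
            hits_cont += 1
        grp = x > np.median(x)          # the split discards information
        if stats.ttest_ind(y[grp], y[~grp])[1] < alpha:
            hits_split += 1
    return hits_cont / n_sim, hits_split / n_sim
```

Running this typically shows a visibly lower rejection rate for the median-split analysis, which is the power loss the flaw refers to.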
Lind, Marcus; Polonsky, William; Hirsch, Irl B; Heise, Tim; Bolinder, Jan; Dahlqvist, Sofia; Pehrsson, Nils-Gunnar; Moström, Peter
2016-05-01
The majority of individuals with type 1 diabetes today have glucose levels exceeding guidelines. The primary aim of this study was to evaluate whether continuous glucose monitoring (CGM), using the Dexcom G4 stand-alone system, improves glycemic control in adults with type 1 diabetes treated with multiple daily insulin injections (MDI). Individuals with type 1 diabetes and inadequate glycemic control (HbA1c ≥ 7.5% = 58 mmol/mol) treated with MDI were randomized in a cross-over design to the Dexcom G4 versus conventional care for 6 months followed by a 4-month wash-out period. Masked CGM was performed before randomization, during conventional treatment, and during the wash-out period to evaluate effects on hypoglycemia, hyperglycemia, and glycemic variability. Questionnaires were used to evaluate diabetes treatment satisfaction, fear of hypoglycemia, hypoglycemia confidence, diabetes-related distress, overall well-being, and physical activity during the different phases of the trial. The primary endpoint was the difference in HbA1c at the end of each treatment phase. A total of 205 patients were screened, of whom 161 were randomized between February and December 2014. Study completion is anticipated in April 2016. It is expected that the results of this study will establish whether using the Dexcom G4 stand-alone system in individuals with type 1 diabetes treated with MDI improves glycemic control, reduces hypoglycemia, and influences quality-of-life indicators and glycemic variability. © 2016 Diabetes Technology Society.
Concurrent generation of multivariate mixed data with variables of dissimilar types.
Amatya, Anup; Demirtas, Hakan
2016-01-01
Data sets originating from a wide range of research studies are composed of multiple variables that are correlated and of dissimilar types, primarily count, binary/ordinal and continuous attributes. The present paper builds on previous work on multivariate data generation and develops a framework for generating multivariate mixed data with a pre-specified correlation matrix. The generated data consist of components that are marginally count, binary, ordinal and continuous, where the count and continuous variables follow the generalized Poisson and normal distributions, respectively. The use of the generalized Poisson distribution provides a flexible mechanism which allows the under- and over-dispersed count variables generally encountered in practice. A step-by-step algorithm is provided and its performance is evaluated using simulated and real-data scenarios.
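A simplified NORTA-style sketch of the generation idea follows, with two loud simplifications relative to the paper: an ordinary Poisson stands in for the generalized Poisson margin, and the latent normal correlation matrix is used as-is rather than adjusted to hit a pre-specified target matrix.

```python
import numpy as np
from scipy import stats

def mixed_data(n, latent_corr, lam=3.0, p_bin=0.4, seed=0):
    """NORTA-style sketch: draw correlated normals, then transform the
    margins into count, binary, and continuous components. Simplified
    stand-in, not the paper's algorithm (see lead-in caveats)."""
    rng = np.random.default_rng(seed)
    z = rng.multivariate_normal(np.zeros(3), latent_corr, size=n)
    u = stats.norm.cdf(z)                                # uniform margins
    count = stats.poisson.ppf(u[:, 0], mu=lam)           # count margin
    binary = (u[:, 1] < p_bin).astype(int)               # binary margin
    cont = stats.norm.ppf(u[:, 2], loc=10.0, scale=2.0)  # continuous margin
    return np.column_stack([count, binary, cont])
```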
NASA Astrophysics Data System (ADS)
Zhongjie, Y.; Schafer, K. V.; Slater, L. D.; Varner, R. K.; Amante, J.; Comas, X.; Reeve, A. S.; Alcivar, W.; Gonzalez, D.
2012-12-01
Northern peatlands are an important source of methane (CH4) release to the atmosphere, estimated at between 20 and 50 Tg/yr. Recent work on CH4 emissions from peatlands has demonstrated that ebullition can be a more important emission pathway than previously assumed. However, accurate quantification of the atmospheric CH4 burden due to ebullition is still very limited because ebullition exhibits high spatiotemporal variability, such that sudden episodic events are difficult to capture and quantify with existing experimental methods. We have initiated a novel measurement program to better quantify the spatiotemporal variability in CH4 flux in peatlands, and to examine potential effects of vegetation and environmental factors (e.g., atmospheric pressure and water table) on these releases. A flow-through system was designed, consisting of a closed static chamber and a fast methane analyzer (FMA; LI-COR model 7700), and has been employed at both the field and laboratory scales. The CH4 concentration in the air flowing through the chamber is continuously measured by the analyzer and used to reconstruct continuous CH4 emission fluxes. The high sampling rate of the FMA makes it sensitive to both ebullition and diffusion of gaseous CH4, capturing short duration, episodic ebullition fluxes. Non-steady static chamber measurements were also conducted to cross-validate the continuous measurements. Results acquired during summer 2011 show that episodic ebullition occurred more frequently at the pool site, where previous studies indicate extensive wood layers at depth and the vegetation was a mix of Sphagnum and wooded heath. A 3-day period of continuous measurements captured the passage of Tropical Storm Irene, during which short-term episodic releases of CH4, ranging from 113 to 202 mg CH4/m2/d, were observed at the time of lowest atmospheric pressure, providing new evidence that atmospheric pressure is an important factor controlling CH4 ebullition from peatlands. While traditional techniques such as static chamber measurements can only occasionally detect the occurrence of ebullition, continuous measurement using a flow-through system can resolve the spatiotemporal complexity of episodic CH4 ebullition events. These continuous CH4 measurements provide new insights into the timing of CH4 ebullition from peatlands to the atmosphere as climate changes, and into the role of environmental variables in regulating these CH4 releases.
Power calculator for instrumental variable analysis in pharmacoepidemiology
Walker, Venexia M; Davies, Neil M; Windmeijer, Frank; Burgess, Stephen; Martin, Richard M
2017-01-01
Background Instrumental variable analysis, for example with physicians’ prescribing preferences as an instrument for medications issued in primary care, is an increasingly popular method in the field of pharmacoepidemiology. Existing power calculators for studies using instrumental variable analysis, such as Mendelian randomization power calculators, do not allow for the structure of research questions in this field. This is because the analysis in pharmacoepidemiology will typically have stronger instruments and detect larger causal effects than in other fields. Consequently, there is a need for dedicated power calculators for pharmacoepidemiological research. Methods and Results The formula for calculating the power of a study using instrumental variable analysis in the context of pharmacoepidemiology is derived and then validated by a simulation study. The formula is applicable for studies using a single binary instrument to analyse the causal effect of a binary exposure on a continuous outcome. An online calculator, as well as packages in both R and Stata, are provided for the implementation of the formula by others. Conclusions The statistical power of instrumental variable analysis in pharmacoepidemiological studies to detect a clinically meaningful treatment effect is an important consideration. Research questions in this field have distinct structures that must be accounted for when calculating power. The formula presented differs from existing instrumental variable power formulae due to its parametrization, which is designed specifically for ease of use by pharmacoepidemiologists. PMID:28575313
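A hedged sketch of such a calculation is shown below. It uses the textbook asymptotic variance of a single-instrument IV estimator, which is a generic approximation and not the paper's exact parametrization; all numbers in the example call are illustrative.

```python
import numpy as np
from scipy.stats import norm

def iv_power(n, beta, sd_x, sd_resid, rho_xz, alpha=0.05):
    """Approximate power of a single-instrument IV analysis, using the
    textbook asymptotic variance of the IV estimator,
        Var(beta_hat) ~ sd_resid^2 / (n * Var(X) * rho_xz^2),
    where rho_xz is the instrument-exposure correlation. Generic
    approximation only; see lead-in caveats."""
    se = sd_resid / (np.sqrt(n) * sd_x * rho_xz)
    z_crit = norm.ppf(1.0 - alpha / 2.0)
    return float(norm.cdf(abs(beta) / se - z_crit))

# Example: n = 5000, effect 0.2, binary exposure (sd 0.5), strong instrument
print(iv_power(5000, beta=0.2, sd_x=0.5, sd_resid=1.0, rho_xz=0.5))  # ~0.94
```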
Computer Simulations and Literature Survey of Continuously Variable Transmissions for Use in Buses
DOT National Transportation Integrated Search
1981-12-01
Numerous studies have been conducted on the concept of flywheel energy storage for buses. Flywheel systems require a continuously variable transmission (CVT) of some type to transmit power between the flywheel and the drive wheels. However, a CVT can...
Radiation oncology career decision variables for graduating trainees seeking positions in 2003-2004
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilson, Lynn D.; Flynn, Daniel F.; Haffty, Bruce G.
2005-06-01
Purpose: Radiation oncology trainees must consider an array of variables when deciding upon an academic or private practice career path. This prospective study of the 2004 graduating radiation oncology trainees evaluates such variables and provides additional descriptive data. Methods: A survey that included 15 questions (one subjective, eleven categorical, and three continuous variables) was mailed to the 144 graduating radiation oncology trainees in United States programs in January of 2004. Questions were designed to gather information regarding factors that may have influenced career path choices. The responses were anonymous, and no identifying information was sought. Survey data were collated and analyzed for differences in both categorical and continuous variables as they related to the choice of an academic or private practice career path. Results: Sixty-seven (47%) of the surveys were returned. Forty-five percent of respondents indicated pursuit of an academic career. All respondents participated in research during training, with 73% participating in research publication authorship. Post graduate year 3 was the median year in which the career path was chosen, and 20% thought that a fellowship position was 'perhaps' necessary to secure an academic position. Thirty percent of the respondents revealed that the timing of the American Board of Radiology examination influenced their career path decision. Eighteen variables were offered within the survey as possibly influencing career path choice, and the top five identified by those seeking an academic path were: (1) colleagues, (2) clinical research, (3) teaching, (4) geography, and (5) support staff. For those seeking private practice, the top choices were: (1) lifestyle, (2) practice environment, (3) patient care, (4) geography, and (5) colleagues. Female gender (p = 0.064), oral meeting presentation (p = 0.053), and international meeting presentation (p = 0.066) were the variables most significantly associated with pursuing an academic career path. The following variables were ranked significantly differently in hierarchy (p < 0.05) by those seeking an academic versus a private practice path with respect to having influence on the career decision: lifestyle, income, case-mix, autonomy, ability to sub-specialize, basic research, clinical research, teaching, patient care, board structure, practice environment, and mentoring. Conclusion: These data offer descriptive information regarding variables that lead to radiation oncology trainee career path decisions. Such information may be of use in the modification of training programs to meet future personnel and programmatic needs within the specialty.
Gehring, Tobias; Händchen, Vitus; Duhme, Jörg; Furrer, Fabian; Franz, Torsten; Pacher, Christoph; Werner, Reinhard F; Schnabel, Roman
2015-10-30
Secret communication over public channels is one of the central pillars of a modern information society. Using quantum key distribution this is achieved without relying on the hardness of mathematical problems, which might be compromised by improved algorithms or by future quantum computers. State-of-the-art quantum key distribution requires composable security against coherent attacks for a finite number of distributed quantum states as well as robustness against implementation side channels. Here we present an implementation of continuous-variable quantum key distribution satisfying these requirements. Our implementation is based on the distribution of continuous-variable Einstein-Podolsky-Rosen entangled light. It is one-sided device independent, which means the security of the generated key is independent of any memory-free attacks on the remote detector. Since continuous-variable encoding is compatible with conventional optical communication technology, our work is a step towards practical implementations of quantum key distribution with state-of-the-art security based solely on telecom components.
Vukich, M; Schulman, A H; Giordani, T; Natali, L; Kalendar, R; Cavallini, A
2009-10-01
The inter-retrotransposon amplified polymorphism (IRAP) protocol was applied for the first time within the genus Helianthus to assess intraspecific variability based on retrotransposon sequences among 36 wild accessions and 26 cultivars of Helianthus annuus L., and interspecific variability among 39 species of Helianthus. Two groups of LTRs, one belonging to a Copia-like retroelement and the other to a putative retrotransposon of unknown nature (SURE), were isolated and sequenced, and primers were designed to obtain IRAP fingerprints. The number of polymorphic bands in H. annuus wild accessions is as high as in Helianthus species. If we assume that a polymorphic band can be related to a retrotransposon insertion, this result suggests that retrotransposon activity continued after Helianthus speciation. Calculation of similarity indices from binary matrices (Shannon's and Jaccard's indices) shows that variability is reduced among domesticated H. annuus. On the contrary, similarity indices among Helianthus species were as large as those observed among wild H. annuus accessions, probably related to their scattered geographic distribution. Principal component analysis of IRAP fingerprints allows the distinction between perennial and annual Helianthus species, especially where the SURE element is concerned.
1986-04-11
NASA 834, an F-14 Navy Tomcat, seen here in flight, was used at Dryden in 1986 and 1987 in a program known as the Variable-Sweep Transition Flight Experiment (VSTFE). This program explored laminar flow on variable sweep aircraft at high subsonic speeds. An F-14 aircraft was chosen as the carrier vehicle for the VSTFE program primarily because of its variable-sweep capability, Mach and Reynolds number capability, availability, and favorable wing pressure distribution. The variable sweep outer-panels of the F-14 aircraft were modified with natural laminar flow gloves to provide not only smooth surfaces but also airfoils that can produce a wide range of pressure distributions for which transition location can be determined at various flight conditions and sweep angles. Glove I, seen here installed on the upper surface of the left wing, was a "cleanup" or smoothing of the basic F-14 wing, while Glove II was designed to provide specific pressure distributions at Mach 0.7. Laminar flow research continued at Dryden with a research program on the NASA 848 F-16XL, a laminar flow experiment involving a wing-mounted panel with millions of tiny laser cut holes drawing off turbulent boundary layer air with a suction pump.
Postpartum fatigue in the active-duty military woman.
Rychnovsky, Jacqueline D
2007-01-01
(a) To describe fatigue levels in military active-duty women, (b) to describe the relationship among selected predictor variables of fatigue, and (c) to examine the relationship between predictor variables, fatigue levels, and performance (as measured by functional status) after childbirth. Based on the Theory of Unpleasant Symptoms, a longitudinal, prospective design was used. A large military medical facility in the southwest United States. A convenience sample of 109 military active-duty women. Postpartum fatigue. Women were found to be moderately fatigued across time, with no change in fatigue levels from 2 to 6 weeks after delivery. All variables correlated with fatigue during hospitalization and at 2 weeks after delivery, and depression, anxiety, maternal sleep, and functional status correlated with fatigue at 6 weeks after delivery. Regression analyses indicated that maternal anxiety predicted fatigue at 6 weeks after delivery. Over half the women had not regained full functional status when they returned to work, and 40% still displayed symptoms of postpartum depression and anxiety. Military women continue to experience postpartum fatigue when they return to the workplace. Future research is needed to examine issues surrounding fatigue and its associated variables during the first year after delivery.
Regression Discontinuity for Causal Effect Estimation in Epidemiology.
Oldenburg, Catherine E; Moscoe, Ellen; Bärnighausen, Till
Regression discontinuity analyses can generate estimates of the causal effects of an exposure when a continuously measured variable is used to assign the exposure to individuals based on a threshold rule. Individuals just above the threshold are expected to be similar in their distribution of measured and unmeasured baseline covariates to individuals just below the threshold, resulting in exchangeability. At the threshold, exchangeability is guaranteed if there is random variation in the continuous assignment variable, e.g., due to random measurement error. Under exchangeability, causal effects can be identified at the threshold. The regression discontinuity intention-to-treat (RD-ITT) effect on an outcome can be estimated as the difference in the outcome between individuals just above (or below) versus just below (or above) the threshold. This effect is analogous to the ITT effect in a randomized controlled trial. Instrumental variable methods can be used to estimate the effect of the exposure itself, utilizing the threshold as the instrument. We review the recent epidemiologic literature reporting regression discontinuity studies and find that while regression discontinuity designs are beginning to be utilized in a variety of applications in epidemiology, they are still relatively rare, and analytic and reporting practices vary. Regression discontinuity has the potential to greatly contribute to the evidence base in epidemiology, in particular on the real-life and long-term effects and side-effects of medical treatments that are provided based on threshold rules, such as treatments for low birth weight, hypertension or diabetes.
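A minimal sketch of the RD-ITT estimate described above, via local linear regression within a bandwidth of the threshold, follows; the uniform kernel and the unstated bandwidth choice are simplifications relative to practice.

```python
import numpy as np

def rd_itt(y, running, cutoff, bandwidth):
    """Local linear regression discontinuity ITT estimate (sketch).

    Within +/- bandwidth of the cutoff, regress the outcome on an
    above-threshold indicator, the centered running variable, and their
    interaction; the indicator's coefficient is the RD-ITT effect."""
    r = running - cutoff
    keep = np.abs(r) <= bandwidth
    y, r = y[keep], r[keep]
    above = (r >= 0).astype(float)       # threshold rule assigns exposure
    X = np.column_stack([np.ones_like(r), above, r, above * r])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef[1]                       # jump in outcome at the cutoff
```

Dividing this ITT jump by the corresponding jump in actual exposure receipt gives the instrumental variable estimate the abstract mentions.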
Posterior composite restoration update: focus on factors influencing form and function
Bohaty, Brenda S; Ye, Qiang; Misra, Anil; Sene, Fabio; Spencer, Paulette
2013-01-01
Restoring posterior teeth with resin-based composite materials continues to gain popularity among clinicians, and the demand for such aesthetic restorations is increasing. Indeed, the most common aesthetic alternative to dental amalgam is resin composite. Moderate to large posterior composite restorations, however, have higher failure rates, more recurrent caries, and increased frequency of replacement. Investigators across the globe are researching new materials and techniques that will improve the clinical performance, handling characteristics, and mechanical and physical properties of composite resin restorative materials. Despite such attention, large to moderate posterior composite restorations continue to have a clinical lifetime that is approximately one-half that of the dental amalgam. While there are numerous recommendations regarding preparation design, restoration placement, and polymerization technique, current research indicates that restoration longevity depends on several variables that may be difficult for the dentist to control. These variables include the patient’s caries risk, tooth position, patient habits, number of restored surfaces, the quality of the tooth–restoration bond, and the ability of the restorative material to produce a sealed tooth–restoration interface. Although clinicians tend to focus on tooth form when evaluating the success and failure of posterior composite restorations, the emphasis must remain on advancing our understanding of the clinical variables that impact the formation of a durable seal at the restoration–tooth interface. This paper presents an update of existing technology and underscores the mechanisms that negatively impact the durability of posterior composite restorations in permanent teeth. PMID:23750102
Impact damage resistance of composite fuselage structure, part 2
NASA Technical Reports Server (NTRS)
Dost, Ernest F.; Finn, Scott R.; Murphy, Daniel P.; Huisken, Amy B.
1993-01-01
The strength of laminated composite materials may be significantly reduced by foreign object impact induced damage. An understanding of the damage state is required in order to predict the behavior of structure under operational loads or to optimize the structural configuration. Types of damage typically induced in laminated materials during an impact event include transverse matrix cracking, delamination, and/or fiber breakage. The details of the damage state and its influence on structural behavior depend on the location of the impact. Damage in the skin may act as a soft inclusion or affect panel stability, while damage occurring over a stiffener may include debonding of the stiffener flange from the skin. An experiment to characterize impact damage resistance of fuselage structure as a function of structural configuration and impact threat was performed. A wide range of variables associated with aircraft fuselage structure, such as material type and stiffener geometry (termed intrinsic variables), and variables related to the operating environment, such as impactor mass and diameter (termed extrinsic variables), were studied using a statistically based design-of-experiments technique. The experimental design resulted in thirty-two different 3-stiffener panels. These configured panels were impacted in various locations with a number of impactor configurations, weights, and energies. Results from an examination of midbay skin impacts and hail-simulation impacts are documented. The current discussion is a continuation of that work, with a focus on nondiscrete characterization of the midbay hail-simulation impacts and discrete characterization of impact damage for impacts over the stiffener.
Probabilistic Structural Analysis Methods (PSAM) for select space propulsion system components
NASA Technical Reports Server (NTRS)
1991-01-01
This annual report summarizes the work completed during the third year of technical effort on the referenced contract. Principal developments continue to focus on the Probabilistic Finite Element Method (PFEM), which has been under development for three years. Essentially all of the linear capabilities within the PFEM code are in place. Major progress was achieved in the application and verification phase. An EXPERT module architecture was designed and partially implemented. EXPERT is a user interface module which incorporates an expert system shell for the implementation of a rule-based interface utilizing the experience and expertise of the user community. The Fast Probability Integration (FPI) algorithm continues to demonstrate outstanding performance characteristics for the integration of probability density functions for multiple variables. Additionally, an enhanced Monte Carlo simulation algorithm was developed and demonstrated for a variety of numerical strategies.
Infrared Space Observatory (ISO) Key Project: the Birth and Death of Planets
NASA Technical Reports Server (NTRS)
Stencel, Robert E.; Creech-Eakman, Michelle; Fajardo-Acosta, Sergio; Backman, Dana
1999-01-01
This program was designed to continue the analysis of observations of stars thought to be forming protoplanets, using the European Space Agency's Infrared Space Observatory (ISO), as one of NASA's Key Projects with ISO. A particular class of Infrared Astronomy Satellite (IRAS)-discovered stars, known after the prototype Vega, are the principal targets for these observations, aimed at examining the evidence for processes involved in forming, or failing to form, planetary systems around other stars. In addition, this program continued to provide partial support for related science in the WIRE, SOFIA and Space Infrared Telescope Facility (SIRTF) projects, plus approved ISO supplementary time observations under programs MCREE1 29 and VEGADMAP. Their goals include time-dependent changes in SWS spectra of Long Period Variable stars and PHOT P32 mapping experiments of recognized protoplanetary disk candidate stars.
Cardiopulmonary disease in newborns: a study in continuing medical education.
Weinberg, A D; McNamara, D G; Christiansen, C H; Taylor, F M; Armitage, M
1979-03-01
A film emphasizing the importance of tachypnea as an early manifestation of congenital heart disease was shown to physicians and nurses at 27 hospitals as part of their regular continuing medical education activities. To evaluate the effects of the program, investigators developed a pretest-posttest design which included a nonequivalent control group. Pretest and posttest data were obtained through chart audit of referrals from subjects in experimental and control groups. Dependent variables used to test the hypothesis included the age at which infants were referred and the age at which tachypnea was noted. Analysis of the data yielded significant gain scores for the experimental group, while changes in the control group were not significant. The findings indicate that a need-oriented educational program can have a measurable impact on improving the quality of patient care.
NASA Astrophysics Data System (ADS)
Buono, D.; Nocerino, G.; Solimeno, S.; Porzio, A.
2014-07-01
Entanglement, one of the most intriguing aspects of quantum mechanics, manifests itself in different features of quantum states. For this reason, different criteria can be used for verifying entanglement. In this paper we review some of the entanglement criteria formulated for continuous-variable states and link them to peculiar aspects of the original debate on the famous Einstein-Podolsky-Rosen (EPR) paradox. We also provide a useful expression for evaluating Bell-type non-locality on Gaussian states. Finally, we present the experimental measurement of a particular realization of the Bell operator over continuous-variable entangled states produced by sub-threshold type-II optical parametric oscillators (OPOs).
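One widely used criterion of this family, stated here for reference with the convention [x̂, p̂] = i (the Duan-Giedke-Cirac-Zoller inequality), bounds every separable two-mode state by

```latex
\langle \Delta(\hat{x}_1 - \hat{x}_2)^2 \rangle
  + \langle \Delta(\hat{p}_1 + \hat{p}_2)^2 \rangle \;\ge\; 2,
% so a measured total variance below 2 certifies entanglement of
% precisely the EPR type debated in the original paradox.
```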
Practical limitation for continuous-variable quantum cryptography using coherent states.
Namiki, Ryo; Hirano, Takuya
2004-03-19
In this Letter, first, we investigate the security of a continuous-variable quantum cryptographic scheme with a postselection process against an individual beam-splitting attack. It is shown that the scheme can be secure in the presence of transmission loss owing to the postselection. Second, we provide a loss limit for continuous-variable quantum cryptography using coherent states, taking into account excess Gaussian noise on the quadrature distribution. Since the excess noise is reduced by the loss mechanism, a realistic intercept-resend attack which makes a Gaussian mixture of coherent states gives a loss limit in the presence of any excess Gaussian noise.
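For orientation, a generic beam-splitter loss model shows how transmittance and excess noise jointly set the received quadrature variance. This is a textbook relation in shot-noise units (vacuum = 1), not the Letter's specific security bound:

```python
def quadrature_variance_after_loss(v_in, eta, excess):
    # Variance (shot-noise units, vacuum = 1) of a quadrature after a
    # beam-splitter channel of transmittance eta, with excess noise
    # referred to the channel input.
    return eta * (v_in + excess) + (1.0 - eta)

# Coherent state (v_in = 1) through 3 dB of loss with 10% excess noise:
eta = 10 ** (-3.0 / 10.0)
print(quadrature_variance_after_loss(1.0, eta, 0.1))   # ~1.05
```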
NASA Astrophysics Data System (ADS)
Takeda, Shuntaro; Furusawa, Akira
2017-09-01
We propose a scalable scheme for optical quantum computing using measurement-induced continuous-variable quantum gates in a loop-based architecture. Here, time-bin-encoded quantum information in a single spatial mode is deterministically processed in a nested loop by an electrically programmable gate sequence. This architecture can process any input state and an arbitrary number of modes with almost minimum resources, and offers a universal gate set for both qubits and continuous variables. Furthermore, quantum computing can be performed fault tolerantly by a known scheme for encoding a qubit in an infinite-dimensional Hilbert space of a single light mode.
NASA Astrophysics Data System (ADS)
Jinxia, Feng; Zhenju, Wan; Yuanji, Li; Kuanshou, Zhang
2018-01-01
Continuous-variable quantum entanglement at a telecommunication wavelength of 1550 nm is experimentally generated using a single nondegenerate optical parametric amplifier based on a type-II periodically poled KTiOPO4 crystal. The triple resonance of the nondegenerate optical parametric amplifier is adjusted by tuning the crystal temperature and tilting the orientation of the crystal in the optical cavity. Einstein-Podolsky-Rosen-entangled beams with quantum correlations of 8.3 dB for both the amplitude and phase quadratures are experimentally generated. This system can be used for continuous-variable fibre-based quantum communication.
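As a quick check on the stated figure, a correlation level quoted in dB below shot noise converts to a variance ratio as follows:

```python
def db_below_shot_noise_to_variance(db):
    # Variance ratio relative to the shot-noise level (shot noise = 1).
    return 10 ** (-db / 10.0)

# An 8.3 dB quantum correlation corresponds to a correlation variance of
# roughly 15% of the shot-noise level:
print(db_below_shot_noise_to_variance(8.3))   # ~0.148
```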
Broadband continuous-variable entanglement source using a chirped poling nonlinear crystal
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, J. S.; Sun, L.; Yu, X. Q.
2010-01-15
An aperiodically poled nonlinear crystal can be used as a broadband continuous-variable entanglement source and has strong stability under perturbations. We study the conversion dynamics of the sum-frequency generation and the quantum correlation of the two pump fields in a chirped-structure nonlinear crystal using the quantum stochastic method. The results show that there exists a frequency window for the pumps in which the two optical fields can perform efficient upconversion. The two pump fields are demonstrated to be entangled within the window, and the chirped-structure crystal can be used as a continuous-variable entanglement source with a broad response bandwidth.
Optimization of a GO2/GH2 Impinging Injector Element
NASA Technical Reports Server (NTRS)
Tucker, P. Kevin; Shyy, Wei; Vaidyanathan, Rajkumar
2001-01-01
An injector optimization methodology, method i, is used to investigate optimal design points for a gaseous oxygen/gaseous hydrogen (GO2/GH2) impinging injector element. The unlike impinging element, a fuel-oxidizer-fuel (F-O-F) triplet, is optimized in terms of design variables such as fuel pressure drop, (Delta)P(sub f), oxidizer pressure drop, (Delta)P(sub o), combustor length, L(sub comb), and impingement half-angle, alpha, for a given mixture ratio and chamber pressure. Dependent variables such as energy release efficiency, ERE, wall heat flux, Q(sub w), injector heat flux, Q(sub inj), relative combustor weight, W(sub rel), and relative injector cost, C(sub rel), are calculated and then correlated with the design variables. An empirical design methodology is used to generate these responses for 163 combinations of input variables. Method i is then used to generate response surfaces for each dependent variable. Desirability functions based on dependent variable constraints are created and used to facilitate development of composite response surfaces representing some, or all, of the five dependent variables in terms of the input variables. Three examples illustrating the utility and flexibility of method i are discussed in detail. First, joint response surfaces are constructed by sequentially adding dependent variables. Optimum designs are identified after the addition of each variable, and the effect each variable has on the design is shown. This stepwise demonstration also highlights the importance of including variables such as weight and cost early in the design process. Second, using the composite response surface which includes all five dependent variables, unequal weights are assigned to emphasize certain variables relative to others. Here, method i is used to enable objective trade studies on design issues such as component life and thrust-to-weight ratio. Finally, specific variable weights are further increased to illustrate the high marginal cost of realizing the last increment of injector performance and thruster weight.
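For concreteness, the desirability-function step can be sketched in the Derringer-Suich style; the mapping bounds, weights, and values below are hypothetical, not the study's:

```python
import numpy as np

def desirability_larger_is_better(y, y_min, y_max, s=1.0):
    # Map a response to [0, 1]: 0 at or below y_min, 1 at or above y_max.
    d = ((y - y_min) / (y_max - y_min)) ** s
    return float(np.clip(d, 0.0, 1.0))

def composite_desirability(ds, weights=None):
    # Weighted geometric mean of individual desirabilities (all must be > 0).
    ds = np.asarray(ds, dtype=float)
    w = np.ones_like(ds) if weights is None else np.asarray(weights, float)
    return float(np.exp(np.sum(w * np.log(ds)) / np.sum(w)))

# e.g. combine a performance desirability (weighted 2x) with a cost one:
d_perf = desirability_larger_is_better(0.97, y_min=0.90, y_max=0.99)
print(composite_desirability([d_perf, 0.6], weights=[2.0, 1.0]))
```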
NASA Astrophysics Data System (ADS)
Hameer, Sameer
Rotorcraft transmission design is limited by empirical weight trends that are proportional to the power/torque raised to the two-thirds power, coupled with the relative inexperience industry has with applying variable-speed transmissions to heavy-lift helicopters of the order of 100,000 lb gross weight and 30,000 installed horsepower. The advanced rotorcraft transmission program objectives are to reduce transmission weight by at least 25%, reduce sound pressure levels by at least 10 dB, achieve a 5000-hr mean time between removals, and incorporate split-torque technology in rotorcraft drivetrains of the future. The major obstacle in rotorcraft drivetrain design is the selection, design, and optimization of a variable-speed transmission that achieves a 50% reduction in rotor speed and handles high torque with lightweight gears, as opposed to a two-speed transmission, which has inherent structural problems and is highly unreliable owing to its traction-type transmission and complex clutch and brake system. This thesis selects a nontraction pericyclic continuously variable transmission (P-CVT) as the best approach for a single-main-rotor heavy-lift helicopter. The objective is to target and overcome the above-mentioned obstacle in drivetrain design. Overcoming this obstacle advances the state of the art of drivetrain design over the planetary and split-torque transmissions currently used in helicopters. The goals of the optimization process were to decrease weight and noise and to increase efficiency, safety, and reliability. The objective function minimized weight, and the major constraint is the tooth bending stress of the facegears. The most important parameters of the optimization process are weight, maintainability, and reliability, which are cross-functionally related to each other and to the torques and operating speeds. The analysis of the split-torque-type P-CVT achieved weight reductions of 42.5% and 40.7% over planetary and split-torque transmissions, respectively. In addition, a 19.5 dB sound pressure level reduction was achieved using active gear struts, and a fabricated steel truss-like housing provided higher maintainability and reliability, lower cost, and lower weight than the cast magnesium housing currently employed in helicopters. The static finite element analysis of the split-torque-type P-CVT, both 2-D and 3-D, yielded stresses below the allowable bending stress of the material; the goal of the finite element analysis is to verify that the designed product meets its functional requirements. The safety assessment of the split-torque-type P-CVT yielded a 99% probability of mission success based on a Monte Carlo simulation using stochastic Petri net analysis and a failure hazard analysis. This was followed by an FTA/RBD analysis, which yielded an overall system failure rate of 140.35 failures per million hours, and a preliminary certification plan and certification timeline were developed. The use of spherical facegears and pericyclic kinematics advances the state of the art in drivetrain design, primarily in the reduction of weight and noise coupled with high safety, reliability, and efficiency.
Nimphius, Sophia; McGuigan, Michael R; Suchomel, Timothy J; Newton, Robert U
2016-06-01
This study assessed the reliability of discrete ground reaction force (GRF) variables over multiple pitching trials, investigated the relationships between discrete GRF variables and pitch velocity (PV), and assessed the variability of the "force signature," or continuous force-time curve, during the pitching motion of windmill softball pitchers. The intraclass correlation coefficient (ICC) for all discrete variables was high (0.86-0.99) while the coefficient of variation (CV) was low (1.4-5.2%). Two discrete variables were significantly correlated with PV: second vertical peak force (r(5)=0.81, p=0.03) and time between peak forces (r(5)=-0.79; p=0.03). High ICCs and low CVs support the reliability of discrete GRF and PV variables over multiple trials, and the significant correlations indicate a relationship between the ability to produce force, and the timing of that force production, and PV. The mean of all pitchers' curve-average standard deviation of their continuous force-time curves demonstrated low variability (CV=4.4%), indicating a repeatable and identifiable "force signature" pattern during this motion. As such, the continuous force-time curve, in addition to discrete GRF variables, should be examined in future research as a potential method to monitor or explain changes in pitching performance. Copyright © 2016 Elsevier B.V. All rights reserved.
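As an illustration, a one-way random-effects ICC and within-subject CV of the kind reported can be computed from an n-subjects-by-k-trials array. The formulas below are the standard one-way ANOVA estimators; the synthetic data are an assumption:

```python
import numpy as np

def one_way_icc_and_cv(data):
    """ICC(1,1) and within-subject CV (%) for an (n subjects x k trials)
    array, via the one-way random-effects ANOVA mean squares."""
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    ms_between = k * np.var(data.mean(axis=1), ddof=1)   # between-subjects MS
    ms_within = np.mean(np.var(data, axis=1, ddof=1))    # within-subject MS
    icc = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
    cv = 100.0 * np.sqrt(ms_within) / data.mean()
    return icc, cv

rng = np.random.default_rng(2)   # 7 pitchers x 5 trials of a toy GRF variable
trials = rng.normal(1000, 50, size=(7, 1)) + rng.normal(0, 20, size=(7, 5))
print(one_way_icc_and_cv(trials))
```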
Synthetic optimization of air turbine for dental handpieces.
Shi, Z Y; Dong, T
2014-01-01
A synthetic optimization of the Pelton air turbine in dental handpieces, simultaneously considering power output, compressed-air consumption and rotation speed, is implemented by employing a standard design procedure and variable limits drawn from dental practice. The Pareto optimal solution sets acquired by using the Normalized Normal Constraint method are mainly comprised of two piecewise continuous parts. On the Pareto frontier, the supply air stagnation pressure stays at the lower boundary of the design space; the rotation speed is a constant value within the range recommended in the literature; the blade tip clearance is insensitive to the objectives; and the nozzle radius increases with power output and compressed-air mass flow rate, while the remaining geometric dimensions show the opposite trend within their respective "pieces".
Mixed Transportation Network Design under a Sustainable Development Perspective
Qin, Jin; Ni, Ling-lin; Shi, Feng
2013-01-01
A mixed transportation network design problem considering sustainable development was studied in this paper. Based on the discretization of continuous link-grade decision variables, a bilevel programming model was proposed to describe the problem, in which sustainability factors, including vehicle exhaust emissions, land-use scale, link load, and financial budget, are considered. The objective of the model is to minimize the total amount of resources exploited under the premise of meeting all the construction goals. A heuristic algorithm, which combined the simulated annealing and path-based gradient projection algorithm, was developed to solve the model. The numerical example shows that the transportation network optimized with the method above not only significantly alleviates the congestion on the link, but also reduces vehicle exhaust emissions within the network by up to 41.56%. PMID:23476142
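As an aside, the upper-level search described (simulated annealing over discretized link-grade decisions, with a lower-level assignment providing costs) can be sketched at the algorithmic level. The placeholder cost function below stands in for the bilevel evaluation and is purely hypothetical:

```python
import math, random

def upper_level_cost(grades):
    # Placeholder for the bilevel objective: in the real model, evaluating a
    # grade vector requires solving the lower-level user-equilibrium problem.
    return sum((g - 2) ** 2 + 0.1 * g for g in grades)

def simulated_annealing(n_links=10, grades=range(5), t0=10.0,
                        cooling=0.995, steps=5000, seed=3):
    rng = random.Random(seed)
    x = [rng.choice(list(grades)) for _ in range(n_links)]
    cost = upper_level_cost(x)
    best, best_cost, t = list(x), cost, t0
    for _ in range(steps):
        y = list(x)
        y[rng.randrange(n_links)] = rng.choice(list(grades))  # perturb one link
        dc = upper_level_cost(y) - cost
        if dc < 0 or rng.random() < math.exp(-dc / t):        # Metropolis rule
            x, cost = y, cost + dc
            if cost < best_cost:
                best, best_cost = list(x), cost
        t *= cooling
    return best, best_cost

print(simulated_annealing())
```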
ACCESS 3. Approximation concepts code for efficient structural synthesis: User's guide
NASA Technical Reports Server (NTRS)
Fleury, C.; Schmit, L. A., Jr.
1980-01-01
A user's guide is presented for ACCESS-3, a research-oriented program which combines dual methods and a collection of approximation concepts to achieve excellent efficiency in structural synthesis. The finite element method is used for structural analysis, and dual algorithms of mathematical programming are applied in the design optimization procedure. This program retains all of the ACCESS-2 capabilities, and the data preparation formats are fully compatible. Four distinct optimizer options were added: an interior point penalty function method (NEWSUMT); a second-order primal projection method (PRIMAL2); a second-order Newton-type dual method (DUAL2); and a first-order gradient projection-type dual method (DUAL1). A pure discrete and mixed continuous-discrete design variable capability, and zero-order approximation of the stress constraints, are also included.
Safety performance functions incorporating design consistency variables.
Montella, Alfonso; Imbriani, Lella Liana
2015-01-01
Highway design which ensures that successive elements are coordinated in such a way as to produce harmonious and homogeneous driver performances along the road is considered consistent and safe. On the other hand, an alignment which requires drivers to handle high speed gradients and does not meet drivers' expectancy is considered inconsistent and produces higher crash frequency. To increase the usefulness and the reliability of existing safety performance functions and contribute to solve inconsistencies of existing highways as well as inconsistencies arising in the design phase, we developed safety performance functions for rural motorways that incorporate design consistency measures. Since the design consistency variables were used only for curves, two different sets of models were fitted for tangents and curves. Models for the following crash characteristics were fitted: total, single-vehicle run-off-the-road, other single vehicle, multi vehicle, daytime, nighttime, non-rainy weather, rainy weather, dry pavement, wet pavement, property damage only, slight injury, and severe injury (including fatal). The design consistency parameters in this study are based on operating speed models developed through an instrumented vehicle equipped with a GPS continuous speed tracking from a field experiment conducted on the same motorway where the safety performance functions were fitted (motorway A16 in Italy). Study results show that geometric design consistency has a significant effect on safety of rural motorways. Previous studies on the relationship between geometric design consistency and crash frequency focused on two-lane rural highways since these highways have the higher crash rates and are generally characterized by considerable inconsistencies. Our study clearly highlights that the achievement of proper geometric design consistency is a key design element also on motorways because of the safety consequences of design inconsistencies. The design consistency measures which are significant explanatory variables of the safety performance functions developed in this study are: (1) consistency in driving dynamics, i.e., difference between side friction assumed with respect to the design speed and side friction demanded at the 85th percentile speed; (2) operating speed consistency, i.e., absolute value of the 85th percentile speed reduction through successive elements of the road; (3) inertial speed consistency, i.e., difference between the operating speed in the curve and the average operating speed along the 5 km preceding the beginning of the curve; and (4) length of tangent preceding the curve (only for run-off-the-road crashes). Copyright © 2014 Elsevier Ltd. All rights reserved.
Direct handling of equality constraints in multilevel optimization
NASA Technical Reports Server (NTRS)
Renaud, John E.; Gabriele, Gary A.
1990-01-01
In recent years there have been several hierarchic multilevel optimization algorithms proposed and implemented in design studies. Equality constraints are often imposed between levels in these multilevel optimizations to maintain system and subsystem variable continuity. Equality constraints of this nature will be referred to as coupling equality constraints. In many implementation studies these coupling equality constraints have been handled indirectly. This indirect handling has been accomplished using the coupling equality constraints' explicit functional relations to eliminate design variables (generally at the subsystem level), with the resulting optimization taking place in a reduced design space. In one multilevel optimization study where the coupling equality constraints were handled directly, the researchers encountered numerical difficulties which prevented their multilevel optimization from reaching the same minimum found in conventional single level solutions. The researchers did not explain the exact nature of the numerical difficulties other than to associate them with the direct handling of the coupling equality constraints. The coupling equality constraints are handled directly, by employing the Generalized Reduced Gradient (GRG) method as the optimizer within a multilevel linear decomposition scheme based on the Sobieski hierarchic algorithm. Two engineering design examples are solved using this approach. The results show that the direct handling of coupling equality constraints in a multilevel optimization does not introduce any problems when the GRG method is employed as the internal optimizer. The optimums achieved are comparable to those achieved in single level solutions and in multilevel studies where the equality constraints have been handled indirectly.
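To illustrate direct handling of a coupling equality constraint, a minimal sketch follows, using SciPy's SLSQP as a stand-in optimizer (the GRG method itself is not available in SciPy); the toy objective and coupling relation are assumptions:

```python
from scipy.optimize import minimize

# Toy two-level-style problem: a system variable z and a subsystem variable y
# are tied by a coupling equality constraint h = 0 that the optimizer handles
# directly, rather than eliminating y to search a reduced design space.
def objective(v):
    z, y = v
    return (z - 1.0) ** 2 + (y - 2.0) ** 2

coupling = {"type": "eq", "fun": lambda v: v[1] - 2.0 * v[0]}  # y = 2 z

res = minimize(objective, x0=[0.0, 0.0], method="SLSQP", constraints=[coupling])
print(res.x)   # optimum on the coupling manifold, ~[1.0, 2.0]
```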
Adaptive Optics for the Thirty Meter Telescope
NASA Astrophysics Data System (ADS)
Ellerbroek, Brent
2013-12-01
This paper provides an overview of the progress made since the last AO4ELT conference towards developing the first-light AO architecture for the Thirty Meter Telescope (TMT). The Preliminary Design of the facility AO system NFIRAOS has been concluded by the Herzberg Institute of Astrophysics. Work on the client Infrared Imaging Spectrograph (IRIS) has progressed in parallel, including a successful Conceptual Design Review and prototyping of On-Instrument WFS (OIWFS) hardware. Progress on the design for the Laser Guide Star Facility (LGSF) continues at the Institute of Optics and Electronics in Chengdu, China, including the final acceptance of the Conceptual Design and modest revisions for the updated TMT telescope structure. Design and prototyping activities continue for lasers, wavefront sensing detectors, detector readout electronics, real-time control (RTC) processors, and deformable mirrors (DMs) with their associated drive electronics. Highlights include development of a prototype sum frequency guide star laser at the Technical Institute of Physics and Chemistry (Beijing); fabrication/test of prototype natural- and laser-guide star wavefront sensor CCDs for NFIRAOS by MIT Lincoln Laboratory and W.M. Keck Observatory; a trade study of RTC control algorithms and processors, with prototyping of GPU and FPGA architectures by TMT and the Dominion Radio Astrophysical Observatory; and fabrication/test of a 6x60 actuator DM prototype by CILAS. Work with the University of British Columbia LIDAR is continuing, in collaboration with ESO, to measure the spatial/temporal variability of the sodium layer and characterize the sodium coupling efficiency of several guide star laser systems. AO performance budgets have been further detailed. Modeling topics receiving particular attention include performance vs. computational cost tradeoffs for RTC algorithms; optimizing performance of the tip/tilt, plate scale, and sodium focus tracking loops controlled by the NGS on-instrument wavefront sensors, sky coverage, PSF reconstruction for LGS MCAO, and precision astrometry for the galactic center and other observations.
Fleury, Marie-Josée; Grenier, Guy; Bamvita, Jean-Marie
2018-02-01
This study aimed to identify variables associated with quality of life (QoL) and mediating variables among 338 service users with mental disorders in Quebec (Canada). Data were collected using nine standardized questionnaires and participant medical records. Quality of life was assessed with the Satisfaction with Life Domains Scale. Independent variables were organized into a six-block conceptual framework. Using structural equation modeling, associated and mediating variables related to QoL were identified. Lower seriousness of needs was the strongest variable associated with QoL, followed by recovery, greater service continuity, gender (male), adequacy of help received, not living alone, absence of substance use or mood disorders, and higher functional status, in that order. Recovery was the single mediating variable linking lower seriousness of needs, higher service continuity, and reduced alcohol use with QoL. Findings suggest that greater service continuity creates favorable conditions for recovery, reducing seriousness of needs and increasing QoL among service users. Lack of recovery-oriented services may affect QoL among alcohol users, as substance use disorders were associated directly and negatively with QoL. Decision makers and mental health professionals should promote service continuity, and closer collaboration between primary care and specialized services, while supporting recovery-oriented services that encourage service user involvement in their treatment and follow-up. Community-based organizations should aim to reduce the seriousness of needs particularly for female service users and those living alone.
Design study of flat belt CVT for electric vehicles
NASA Technical Reports Server (NTRS)
Kumm, E. L.
1980-01-01
A continuously variable transmission (CVT) was studied, using a novel flat belt pulley arrangement which couples the high speed output shaft of an energy storage flywheel to the drive train of an electric vehicle. A specific CVT arrangement was recommended and its components were selected and sized, based on the design requirements of a 1700 KG vehicle. A design layout was prepared and engineering calculations made of component efficiencies and operating life. The transmission efficiency was calculated to be significantly over 90% with the expected vehicle operation. A design consistent with automotive practice for low future production costs was considered, together with maintainability. The technology advancements required to develop the flat belt CVT were identified and an estimate was made of how the size of the flat belt CVT scales to larger and smaller design output torques. The suitability of the flat belt CVT for alternate application to an electric vehicle powered by an electric motor without flywheel and to a hybrid electric vehicle powered by an electric motor with an internal combustion engine was studied.
Talbot, H Keipp; Nian, Hui; Chen, Qingxia; Zhu, Yuwei; Edwards, Kathryn M; Griffin, Marie R
2016-04-04
Previous influenza vaccine effectiveness studies have been criticized for their failure to control for frailty. This study was designed to determine whether the test-negative study design overcomes this bias. Adults ≥ 50 years of age with respiratory symptoms were enrolled from November 2006 through May 2012 during the influenza season (excluding the 2009-2010 H1N1 pandemic season) to perform yearly test-negative influenza vaccine effectiveness studies in Nashville, TN. At enrollment, nasal and throat swab samples were obtained and tested for influenza by RT-PCR. Frailty was calculated using a modified Rockwood Index that included 60 variables ascertained in a retrospective chart review, giving a score of 0 to 1. Subjects were divided into three strata: non-frail (≤ 0.08), pre-frail (> 0.08 to < 0.25), and frail (≥ 0.25). Vaccine effectiveness was calculated using the formula [1 - adjusted odds ratio (OR)] × 100%. Adjusted ORs for individual years and all years combined were estimated by penalized multivariable logistic regression. Of 1023 hospitalized adults enrolled, 866 (84.7%) participants had complete immunization status, molecular influenza testing, and covariates to calculate frailty. There were 83 influenza-positive cases and 783 test-negative controls overall, who were 74% white, 25% black, and 59% female. The median frailty index was 0.167 (interquartile range: 0.117, 0.267). The frailty index was 0.167 (0.100, 0.233) for the influenza-positive cases compared to 0.183 (0.133, 0.267) for the influenza-negative controls (p = 0.07). Vaccine effectiveness estimates were 55.2% (95%CI: 30.5, 74.2), 60.4% (95%CI: 29.5, 74.4), and 54.3% (95%CI: 28.8, 74.0) without the frailty variable, with frailty included as a continuous variable, and with frailty included as a categorical variable, respectively. Using this test-negative study design to assess vaccine effectiveness, our measure of frailty was not a significant confounder, as inclusion of this measure did not significantly change vaccine effectiveness estimates. Copyright © 2016 Elsevier Ltd. All rights reserved.
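To illustrate the quoted formula VE = [1 - OR] × 100%, a minimal sketch with hypothetical 2x2 counts (the study itself estimated adjusted ORs by penalized multivariable logistic regression with covariates):

```python
# Hypothetical counts for a test-negative design:
a, b = 30, 400   # vaccinated: influenza-positive cases, test-negative controls
c, d = 53, 383   # unvaccinated: cases, controls

odds_ratio = (a / b) / (c / d)          # crude OR of vaccination, cases vs controls
ve = (1.0 - odds_ratio) * 100.0         # vaccine effectiveness, percent
print(f"OR = {odds_ratio:.2f}, VE = {ve:.1f}%")
```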
Stochastic satisficing account of confidence in uncertain value-based decisions
Bahrami, Bahador; Keramati, Mehdi
2018-01-01
Every day we make choices under uncertainty; choosing what route to work or which queue in a supermarket to take, for example. It is unclear how outcome variance, e.g. uncertainty about waiting time in a queue, affects decisions and confidence when outcome is stochastic and continuous. How does one evaluate and choose between an option with unreliable but high expected reward, and an option with more certain but lower expected reward? Here we used an experimental design where two choices’ payoffs took continuous values, to examine the effect of outcome variance on decision and confidence. We found that our participants’ probability of choosing the good (high expected reward) option decreased when the good or the bad options’ payoffs were more variable. Their confidence ratings were affected by outcome variability, but only when choosing the good option. Unlike perceptual detection tasks, confidence ratings correlated only weakly with decisions’ time, but correlated with the consistency of trial-by-trial choices. Inspired by the satisficing heuristic, we propose a “stochastic satisficing” (SSAT) model for evaluating options with continuous uncertain outcomes. In this model, options are evaluated by their probability of exceeding an acceptability threshold, and confidence reports scale with the chosen option’s thus-defined satisficing probability. Participants’ decisions were best explained by an expected reward model, while the SSAT model provided the best prediction of decision confidence. We further tested and verified the predictions of this model in a second experiment. Our model and experimental results generalize the models of metacognition from perceptual detection tasks to continuous-value based decisions. Finally, we discuss how the stochastic satisficing account of decision confidence serves psychological and social purposes associated with the evaluation, communication and justification of decision-making. PMID:29621325
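A minimal sketch of the SSAT evaluation rule described, assuming Gaussian payoff distributions and an arbitrary acceptability threshold:

```python
from scipy.stats import norm

def satisficing_probability(mean, sd, threshold):
    # SSAT-style evaluation: probability that a Gaussian payoff exceeds
    # the acceptability threshold; confidence is assumed to scale with it.
    return norm.sf(threshold, loc=mean, scale=sd)

# Reliable-but-modest option vs. risky-but-rich option, threshold = 50:
print(satisficing_probability(55.0, 5.0, 50.0))    # ~0.84
print(satisficing_probability(65.0, 30.0, 50.0))   # ~0.69: higher expected
# reward, yet lower satisficing probability, hence lower modeled confidence
```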
Kicking the Camel: Adolescent Smoking Behaviors after Two Years.
ERIC Educational Resources Information Center
Shillington, Audrey M.; Clapp, John D.
2000-01-01
Public Health Model was used to examine relationships between smoking severity (never smokers, former smokers, continued smokers) and host and environmental variables. Results indicate former smokers are more like never smokers on most risk and protective variables. Final analyses indicated continued smokers are more likely to be Non-Black and…
ERIC Educational Resources Information Center
Soria, Krista M.; Thomas-Card, Traci
2014-01-01
In this study, we explored whether college students' motivations for participating in community service were associated with their perceptions that service enhanced their desire to continue participating in community-focused activities after graduation, after statistically controlling for demographic variables and other variables of interest.…
GY SAMPLING THEORY AND GEOSTATISTICS: ALTERNATE MODELS OF VARIABILITY IN CONTINUOUS MEDIA
In the sampling theory developed by Pierre Gy, sample variability is modeled as the sum of a set of seven discrete error components. The variogram used in geostatisties provides an alternate model in which several of Gy's error components are combined in a continuous mode...
Multi-stage continuous (chemostat) culture fermentation (MCCF) with variable fermentor volumes was carried out to study utilizing glucose and xylose for ethanol production by means of mixed sugar fermentation (MSF). Variable fermentor volumes were used to enable enhanced sugar u...
Binary full adder, made of fusion gates, in a subexcitable Belousov-Zhabotinsky system
NASA Astrophysics Data System (ADS)
Adamatzky, Andrew
2015-09-01
In an excitable thin-layer Belousov-Zhabotinsky (BZ) medium, a localized perturbation leads to the formation of omnidirectional target or spiral waves of excitation. A subexcitable BZ medium responds to asymmetric local perturbation by producing traveling localized excitation wave-fragments, distant relatives of dissipative solitons. The size and life span of an excitation wave-fragment depend on the illumination level of the medium. Under the right conditions the wave-fragments conserve their shape and velocity vectors for extended time periods. I interpret the wave-fragments as values of Boolean variables. When two or more wave-fragments collide, they annihilate or merge into a new wave-fragment. States of the logic variables, represented by the wave-fragments, are changed as a result of collisions between the wave-fragments; thus, a logical gate is implemented. Several theoretical designs and experimental laboratory implementations of Boolean logic gates have been proposed in the past, but little has been done to cascade the gates into binary arithmetic circuits. I propose a unique design of a binary one-bit full adder based on a fusion gate. A fusion gate is a two-input three-output logical device which calculates the conjunction of the input variables and the conjunction of one input variable with the negation of another input variable. The gate is made of three channels: two channels cross each other at an angle, and a third channel starts at the junction. The channels contain a BZ medium. When two excitation wave-fragments traveling towards each other along the input channels collide at the junction, they merge into a single wave-front traveling along the third channel. If there is just one wave-front in an input channel, the front continues its propagation undisturbed. I make a one-bit full adder by cascading two fusion gates and show how to cascade the adder blocks into a many-bit full adder. I evaluate the feasibility of my designs by simulating the evolution of excitation in the gates and adders using numerical integration of the Oregonator equations.
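As an aside, the gate's logic can be checked at the Boolean level. In the sketch below, the assignment of the three outputs (x AND NOT y on x's continuing channel, x AND y on the junction channel, y AND NOT x on y's continuing channel) and the two-gate cascade with OR-merges are assumptions consistent with the abstract's description, not the paper's exact wiring:

```python
def fusion(x, y):
    # Boolean abstraction of the fusion gate: colliding fragments merge into
    # the junction channel (x AND y); a lone fragment passes through its own
    # channel undisturbed (x AND NOT y, y AND NOT x).
    x, y = bool(x), bool(y)
    return x and not y, x and y, y and not x

def full_adder(a, b, cin):
    # One-bit full adder from two cascaded fusion gates plus OR-merges:
    # sum = a XOR b XOR cin, carry = (a AND b) OR ((a XOR b) AND cin).
    a_nb, ab, b_na = fusion(a, b)
    axb = a_nb or b_na                        # a XOR b from pass-through outputs
    s_ncin, s_cin, cin_ns = fusion(axb, cin)
    return (s_ncin or cin_ns, ab or s_cin)    # (sum, carry)

# Exhaustive check against integer addition:
for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            s, c = full_adder(a, b, cin)
            assert (s, c) == (bool((a + b + cin) % 2), bool((a + b + cin) // 2))
print("full adder verified")
```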
Nanomedicines for HIV therapy.
Siccardi, Marco; Martin, Philip; McDonald, Tom O; Liptrott, Neill J; Giardiello, Marco; Rannard, Steve; Owen, Andrew
2013-02-01
Heterogeneity in response to HIV treatments has been attributed to several causes including variability in pharmacokinetic exposure. Nanomedicine applications have a variety of advantages compared with traditional formulations, such as the potential to increase bioavailability and specifically target the site of action. Our group is focusing on the development of nanoformulations using a closed-loop design process in which nanoparticle optimization (disposition, activity and safety) is a continuous process based on experimental pharmacological data from in vitro and in vivo models. Solid drug nanoparticles, polymer-based drug-delivery carriers as well as nanoemulsions are nanomedicine options with potential application to improve antiretroviral deployment.
Development of a high temperature capacitive pressure transducer
NASA Technical Reports Server (NTRS)
Egger, R. L.
1977-01-01
High temperature pressure transducers capable of continuous operation while exposed to 650 °C were developed and evaluated over a full-scale differential pressure range of ±69 kPa. The design of the pressure transducers was based on the use of a diaphragm to respond to pressure, variable capacitive elements arranged to operate as a differential capacitor to measure diaphragm response, and on the use of fused silica for the diaphragm and its supporting assembly. The uncertainty associated with measuring ±69 kPa pressures between 20 °C and 650 °C was less than ±6%.
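For orientation, the readout principle of a differential capacitor pair can be sketched with ideal parallel plates (the dimensions below are hypothetical). A diaphragm displacement that narrows one gap and widens the other gives a normalized capacitance difference that is exactly linear in the displacement for this idealization:

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def differential_readout(area, gap, delta):
    # Two ideal plate capacitors sharing a moving diaphragm: one gap shrinks
    # by delta, the other grows by delta.
    c1 = EPS0 * area / (gap - delta)
    c2 = EPS0 * area / (gap + delta)
    return (c1 - c2) / (c1 + c2)   # algebraically equals delta / gap

print(differential_readout(area=1e-4, gap=25e-6, delta=1e-6))   # 0.04
```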
Robust control of a parallel hybrid drivetrain with a CVT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mayer, T.; Schroeder, D.
1996-09-01
In this paper the design of a robust control system for a parallel hybrid drivetrain is presented. The drivetrain is based on a continuously variable transmission (CVT) and is therefore a highly nonlinear multiple-input multiple-output (MIMO) system. Input-output linearization offers the possibility of linearizing and decoupling the system. Since, for example, the vehicle mass varies with the load and the efficiency of the gearbox depends strongly on the actual operating point, an exact linearization of the plant will mostly fail. Therefore a robust control algorithm based on sliding mode is used to control the drivetrain.
NEON: High Frequency Monitoring Network for Watershed-Scale Processes and Aquatic Ecology
NASA Astrophysics Data System (ADS)
Vance, J. M.; Fitzgerald, M.; Parker, S. M.; Roehm, C. L.; Goodman, K. J.; Bohall, C.; Utz, R.
2014-12-01
Networked high frequency hydrologic and water quality measurements needed to investigate physical and biogeochemical processes at the watershed scale and create robust models are limited and lacking standardization. Determining the drivers and mechanisms of ecological changes in aquatic systems in response to natural and anthropogenic pressures is challenging due to the large amounts of terrestrial, aquatic, atmospheric, biological, chemical, and physical data it requires at varied spatiotemporal scales. The National Ecological Observatory Network (NEON) is a continental-scale infrastructure project designed to provide data to address the impacts of climate change, land-use, and invasive species on ecosystem structure and function. Using a combination of standardized continuous in situ measurements and observational sampling, the NEON Aquatic array will produce over 200 data products across its spatially-distributed field sites for 30 years to facilitate spatiotemporal analysis of the drivers of ecosystem change. Three NEON sites in Alabama were chosen to address linkages between watershed-scale processes and ecosystem changes along an eco-hydrological gradient within the Tombigbee River Basin. The NEON Aquatic design, once deployed, will include continuous measurements of surface water physical, chemical, and biological parameters, groundwater level, temperature and conductivity and local meteorology. Observational sampling will include bathymetry, water chemistry and isotopes, and a suite of organismal sampling from microbes to macroinvertebrates to vertebrates. NEON deployed a buoy to measure the temperature profile of the Black Warrior River from July - November, 2013 to determine the spatiotemporal variability across the water column from a daily to seasonal scale. In July 2014 a series of water quality profiles were performed to assess the contribution of physical and biogeochemical drivers over a diurnal cycle. Additional river transects were performed across our site reach to capture the spatial variability of surface water parameters. Our preliminary data show differing response times to precipitation events and diurnal processes informing our infrastructure designs and sampling protocols aimed at providing data to address the eco-hydrological gradient.
Sadala, S P; Patre, B M
2018-03-01
The 2-degree-of-freedom (DOF) helicopter system is a typical higher-order, multi-variable, nonlinear and strongly coupled control system. The helicopter dynamics also include parametric uncertainties and are subject to unknown external disturbances. Such a complicated system requires a sophisticated control algorithm that can handle these difficulties. This paper presents a new robust control algorithm that combines two continuous control techniques, the composite nonlinear feedback (CNF) and super-twisting control (STC) methods. In the existing integral sliding mode (ISM) based CNF control law, the discontinuous term exhibits chattering, which is undesirable for many practical applications. As the continuity of the well-known STC reduces chattering in the system, the proposed strategy is advantageous over the current ISM-based CNF control law, which has a discontinuous term. Two controllers with an integral sliding surface are designed to control the pitch and yaw angles of the 2-DOF helicopter. The adequacy of this combination is demonstrated through analysis, simulation and experimental results on a 2-DOF helicopter setup. The results demonstrate the good performance of the proposed controller with regard to stabilization, reference tracking without overshoot under actuator saturation, and robustness to bounded matched disturbances. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
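A minimal simulation of the super-twisting law used as one ingredient here, on toy first-order sliding dynamics; the gains, disturbance, and plant are assumptions, and the CNF part is not modeled:

```python
import numpy as np

def super_twisting(k1=2.7, k2=3.5, dt=1e-4, t_end=5.0):
    # Toy loop for s_dot = u + d(t) with a bounded matched disturbance; the
    # gains are sized for the assumed disturbance-derivative bound (~pi).
    s, v = 1.0, 0.0                                   # sliding variable, integral term
    for i in range(int(t_end / dt)):
        t = i * dt
        d = 0.5 * np.sin(2.0 * np.pi * t)             # disturbance
        u = -k1 * np.sqrt(abs(s)) * np.sign(s) + v    # continuous STC law
        v += -k2 * np.sign(s) * dt                    # integral of -k2*sign(s)
        s += (u + d) * dt
    return s

print(abs(super_twisting()))   # |s| ends in a small neighborhood of zero
```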
Performance of a newly designed continuous soot monitoring system (COSMOS).
Miyazaki, Yuzo; Kondo, Yutaka; Sahu, Lokesh K; Imaru, Junichi; Fukushima, Nobuhiko; Kano, Minoru
2008-10-01
We designed a continuous soot monitoring system (COSMOS) for fully automated, high-sensitivity, continuous measurement of light absorption by black carbon (BC) aerosols. The instrument monitors changes in transmittance across an automatically advancing quartz fiber filter tape using an LED at a 565 nm wavelength. To achieve measurements with high sensitivity and a lower detectable light absorption coefficient, COSMOS uses a double-convex lens and optical bundle pipes to maintain high light intensity, and signal data are obtained at 1000 Hz. In addition, the sampling flow rate and optical unit temperature are actively controlled. The inlet line for COSMOS is heated to 400 degrees C to effectively volatilize non-refractory aerosol components that are internally mixed with BC. In its current form, COSMOS provides BC light absorption measurements with a detection limit of 0.45 Mm(-1) (0.045 microg m(-3) for soot) for 10 min. The unit-to-unit variability is estimated to be within +/- 1%, demonstrating its high reproducibility. The absorption coefficients determined by COSMOS agreed with those by a particle soot absorption photometer (PSAP) to within 1% (r2 = 0.97). The precision (+/- 0.60 Mm(-1)) for 10 min integrated data was better than that of PSAP and an aethalometer under our operating conditions. These results showed that COSMOS achieved both an improved detection limit and higher precision for filter-based light absorption measurements of BC compared to the existing methods.
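As a consistency check on the stated detection limits, the mass absorption cross-section implied by pairing them follows directly:

```python
# An absorption-coefficient detection limit of 0.45 Mm^-1 paired with a soot
# detection limit of 0.045 ug m^-3 implies a mass absorption cross-section
# of 10 m^2 g^-1 at 565 nm.
b_abs = 0.45e-6        # m^-1   (0.45 Mm^-1)
m_bc = 0.045e-6        # g m^-3 (0.045 microg m^-3)
print(b_abs / m_bc)    # 10.0 m^2 g^-1
```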
Multidisciplinary optimization of a controlled space structure using 150 design variables
NASA Technical Reports Server (NTRS)
James, Benjamin B.
1993-01-01
A controls-structures interaction design method is presented. The method coordinates standard finite-element structural analysis, multivariable controls, and nonlinear programming codes and allows simultaneous optimization of the structure and control system of a spacecraft. Global sensitivity equations are used to account for coupling between the disciplines. Use of global sensitivity equations helps solve optimization problems that have a large number of design variables and a high degree of coupling between disciplines. The preliminary design of a generic geostationary platform is used to demonstrate the multidisciplinary optimization method. Design problems using 15, 63, and 150 design variables to optimize truss member sizes and feedback gain values are solved and the results are presented. The goal is to reduce the total mass of the structure and the vibration control system while satisfying constraints on vibration decay rate. Incorporation of the nonnegligible mass of actuators causes an essential coupling between structural design variables and control design variables.
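A minimal numerical sketch of global sensitivity equations for two coupled disciplines; the toy functions and Jacobians are assumptions (the actual application couples finite-element structural and control analyses):

```python
import numpy as np

# Coupled outputs: Y1 = f1(x, Y2) = x + 0.5*Y2, Y2 = f2(x, Y1) = 2*x + 0.25*Y1.
# The total derivatives dY/dx solve (I - J) dY/dx = dF/dx, where J holds the
# partials of each discipline's output with respect to the other's, at fixed x.
J = np.array([[0.0, 0.50],
              [0.25, 0.0]])
dF_dx = np.array([1.0, 2.0])          # partials with respect to the design variable
dY_dx = np.linalg.solve(np.eye(2) - J, dF_dx)
print(dY_dx)   # total sensitivities accounting for inter-discipline coupling
```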
Diurnal and seasonal variability of outdoor radon concentration in the area of the NRPI Prague.
Jilek, K; Slezákova, M; Thomas, J
2014-07-01
In autumn 2010, an outdoor measuring station for measurement of atmospheric radon, gamma equivalent dose rate in the range of 100 nSv h(-1) to 1 Sv h(-1), and relevant meteorological parameters such as thermal air gradient, relative air humidity, wind speed and direction, and solar radiation intensity was built in the area of the National Radiation Protection Institute (v.v.i.). The station was designed to be independent of the electrical network and enables on-line wireless transfer of all data. Following an introduction to the station, illustrations of its measurement properties and the results of measured diurnal and seasonal variability of atmospheric radon, based on a year of continuous measurement using a high-volume scintillation cell at a height of 2.5 m above the ground, are presented. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Computing in the presence of soft bit errors. [caused by single event upset on spacecraft
NASA Technical Reports Server (NTRS)
Rasmussen, R. D.
1984-01-01
It is shown that single-event upsets (SEUs) due to cosmic rays are a significant source of single-bit errors in spacecraft computers. The physical mechanism of SEU, electron-hole generation by means of linear energy transfer (LET), is discussed with reference to the results of a study of the environmental effects on computer systems of the Galileo spacecraft. Techniques for making software more tolerant of cosmic ray effects are considered, including: reducing the number of registers used by the software; continuity testing of variables; redundant execution of major procedures for error detection; and encoding state variables to detect single-bit changes. Attention is also given to design modifications which may reduce the cosmic ray exposure of on-board hardware. These modifications include: shielding components operating in LEO; removing low-power Schottky parts; and the use of CMOS diodes. The SEU parameters of different electronic components are listed in a table.
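One of the listed software measures, encoding state variables to detect single-bit changes, can be sketched with a simple parity code; this is illustrative only, and flight software would likely use stronger codes:

```python
def encode(state):
    # Append a parity bit so any single-bit flip in the stored word is detectable.
    parity = bin(state).count("1") & 1
    return (state << 1) | parity

def decode(word):
    state, parity = word >> 1, word & 1
    if (bin(state).count("1") & 1) != parity:
        raise ValueError("single-bit upset detected; re-fetch or recompute")
    return state

word = encode(0b10110010)
assert decode(word) == 0b10110010
flipped = word ^ (1 << 4)          # flip one stored bit, as an SEU would
try:
    decode(flipped)
except ValueError as err:
    print(err)
```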
Effect of mycoprotein on blood lipids.
Turnbull, W H; Leeds, A R; Edwards, G D
1990-10-01
This metabolic study was designed to investigate the effects of mycoprotein on blood lipids. Mycoprotein is a food produced by continuous fermentation of Fusarium graminearum (Schwabe) on a carbohydrate substrate. Two groups of subjects with slightly raised cholesterol concentrations took part in the 3-wk study. The experimental group was fed mycoprotein in place of meat and the control diet contained meat. There was no change in plasma cholesterol in the control group but there was a 13% reduction in the mycoprotein group. Low-density lipoprotein (LDL) increased in the control group by 12% and decreased by 9% in the mycoprotein group. High-density lipoprotein (HDL) decreased by 11% in the control group but increased by 12% in the mycoprotein group. In each case the group ANOVA differences between variables were statistically significant. It is clear from these results that lipid variables are advantageously altered by mycoprotein consumption.
QKD Via a Quantum Wavelength Router Using Spatial Soliton
NASA Astrophysics Data System (ADS)
Kouhnavard, M.; Amiri, I. S.; Afroozeh, A.; Jalil, M. A.; Ali, J.; Yupapin, P. P.
2011-05-01
A system for continuous-variable quantum key distribution (QKD) via a wavelength router is proposed. The Kerr-type light in the nonlinear microring resonator (NMRR) induces chaotic behavior. In the proposed system, chaotic signals are generated by an optical soliton or Gaussian pulse within a NMRR system. Parameters such as the input power, the MRR radii and the coupling coefficients can be varied and play an important role in determining the results, in which continuous signals are generated spreading over the spectrum. Large-bandwidth optical soliton signals are generated by the input pulse propagating within the MRRs, which are allowed to form a continuous wavelength or frequency range with large tunable channel capacity. The continuous-variable QKD is formed by using the localized spatial soliton pulses via a quantum router and networks. The selected optical spatial pulse can be used to perform secure communication over the network. Here the entangled photons generated by chaotic signals have been analyzed. Continuous entangled photons are generated by using a polarization control unit incorporated into the MRRs, as required to provide the continuous-variable QKD. The results show that such a system for simultaneous continuous-variable quantum cryptography can be used in mobile telephone handsets and networks. In this study, frequency bands of 500 MHz and 2.0 GHz and wavelengths of 775 nm, 2,325 nm and 1.55 μm can be obtained for QKD use with input optical soliton and Gaussian beams, respectively.
Efficiently enforcing artisanal fisheries to protect estuarine biodiversity.
Duarte de Paula Costa, Micheli; Mills, Morena; Richardson, Anthony J; Fuller, Richard A; Muelbert, José H; Possingham, Hugh P
2018-06-26
Artisanal fisheries support millions of livelihoods worldwide, yet ineffective enforcement can allow for continued environmental degradation due to overexploitation. Here, we use spatial planning to design an enforcement strategy for a pre-existing spatial closure for artisanal fisheries considering climate variability, existing seasonal fishing closures, representative conservation targets and enforcement costs. We calculated enforcement cost in three ways, based on different assumptions about who could be responsible for monitoring the fishery. We applied this approach in the Patos Lagoon estuary (Brazil), where we found three important results. First, spatial priorities for enforcement were similar under different climate scenarios. Second, we found that the cost and percentage of area enforced varied among scenarios tested by the conservation planning analysis, with only a modest increase in budget needed to incorporate climate variability. Third, we found that spatial priorities for enforcement depend on whether enforcement is carried out by a central authority or by the community itself. Here, we demonstrated a method that can be used to efficiently design enforcement plans, resulting in the conservation of biodiversity and estuarine resources. Also, cost of enforcement can be potentially reduced when fishers are empowered to enforce management within their fishing grounds. © 2018 by the Ecological Society of America.
Design approaches to experimental mediation
Pirlott, Angela G.; MacKinnon, David P.
2016-01-01
Identifying causal mechanisms has become a cornerstone of experimental social psychology, and editors in top social psychology journals champion the use of mediation methods, particularly innovative ones when possible (e.g. Halberstadt, 2010, Smith, 2012). Commonly, studies in experimental social psychology randomly assign participants to levels of the independent variable and measure the mediating and dependent variables, and the mediator is assumed to causally affect the dependent variable. However, participants are not randomly assigned to levels of the mediating variable(s), i.e., the relationship between the mediating and dependent variables is correlational. Although researchers likely know that correlational studies pose a risk of confounding, this problem seems forgotten when thinking about experimental designs randomly assigning participants to levels of the independent variable and measuring the mediator (i.e., “measurement-of-mediation” designs). Experimentally manipulating the mediator provides an approach to solving these problems, yet these methods contain their own set of challenges (e.g., Bullock, Green, & Ha, 2010). We describe types of experimental manipulations targeting the mediator (manipulations demonstrating a causal effect of the mediator on the dependent variable and manipulations targeting the strength of the causal effect of the mediator) and types of experimental designs (double randomization, concurrent double randomization, and parallel), provide published examples of the designs, and discuss the strengths and challenges of each design. Therefore, the goals of this paper include providing a practical guide to manipulation-of-mediator designs in light of their challenges and encouraging researchers to use more rigorous approaches to mediation because manipulation-of-mediator designs strengthen the ability to infer causality of the mediating variable on the dependent variable. PMID:27570259
Kraus, Tamara E.C.; Bergamaschi, Brian A.; Downing, Bryan D.
2017-07-11
Executive Summary: This report is the first in a series of three reports that provide information about high-frequency (HF) nutrient and biogeochemical monitoring in the Sacramento–San Joaquin Delta of northern California (Delta). This first report provides an introduction to the reasons for and fundamental concepts behind collecting HF measurements, and describes the benefits associated with a real-time, continuous, HF, multi-parameter water quality monitoring station network that is co-located with flow stations. It then provides examples of how HF nutrient measurements have improved our understanding of nutrient sources and cycling in aquatic systems worldwide, followed by specific examples from the Delta. These examples describe the ways in which HF instrumentation may be used for both fixed-station and spatial assessments. The overall intent of this document is to describe how HF measurements currently (2017) are being used in the Delta to examine the relationship between nutrient concentrations, nutrient cycling, and aquatic habitat conditions. The second report in the series (Downing and others, 2017) summarizes information about HF nutrient and associated biogeochemical monitoring in the northern Delta. The report synthesizes data available from the nutrient and water quality monitoring network currently operated by the U.S. Geological Survey in this ecologically important region of the Delta. In the report, we present and discuss the available data at various timescales: first, at the monthly, seasonal, and inter-annual timescales; and, second, for comparison, at the tidal and event (for example, storms, reservoir releases, phytoplankton blooms) timescales. As expected, we determined that there is substantial variability in nitrate concentrations at short timescales within hours, but also significant variability at longer timescales such as months or years. This multi-scale, high variability affects calculation of fluxes and loads, indicating that HF monitoring is necessary for understanding and assessing flux-based processes and outcomes in tidal environments such as the Delta. The third report in the series (Bergamaschi and others, 2017) provides information about how to design HF nutrient and biogeochemical monitoring for assessment of nutrient inputs and dynamics in the Delta. The report provides background, principles, and considerations for designing an HF nutrient-monitoring network for the Sacramento–San Joaquin Delta to address high-priority nutrient-management questions. The report starts with high-priority management questions to be addressed, continues with questions and considerations that place demands and constraints on network design, discusses the principles applicable to network design, and concludes with the presentation of three example nutrient-monitoring network designs for the Delta. For the three example networks, we assess how they would address high-priority questions identified by the Delta Regional Monitoring Program (Delta Regional Monitoring Program Technical Advisory Committee, 2015).
ERIC Educational Resources Information Center
Bollen, Kenneth A.; Maydeu-Olivares, Albert
2007-01-01
This paper presents a new polychoric instrumental variable (PIV) estimator to use in structural equation models (SEMs) with categorical observed variables. The PIV estimator is a generalization of Bollen's (Psychometrika 61:109-121, 1996) 2SLS/IV estimator for continuous variables to categorical endogenous variables. We derive the PIV estimator…
Rhythm Patterns Interaction - Synchronization Behavior for Human-Robot Joint Action
Mörtl, Alexander; Lorenz, Tamara; Hirche, Sandra
2014-01-01
Interactive behavior among humans is governed by the dynamics of movement synchronization in a variety of repetitive tasks. This requires the interaction partners to perform, for example, rhythmic limb swinging or even goal-directed arm movements. Inspired by that essential feature of human interaction, we present a novel concept and design methodology to synthesize goal-directed synchronization behavior for robotic agents in repetitive joint action tasks. The agents' tasks are described by closed movement trajectories and interpreted as limit cycles, for which instantaneous phase variables are derived based on oscillator theory. Events segmenting the trajectories into multiple primitives are introduced as anchoring points for enhanced synchronization modes. Utilizing both continuous phases and discrete events in a unifying view, we design a continuous dynamical process synchronizing the derived modes. Inverse to the derivation of phases, we also address the generation of goal-directed movements from the behavioral dynamics. The developed concept is implemented on an anthropomorphic robot. For evaluation of the concept, an experiment is designed and conducted in which the robot performs a prototypical pick-and-place task jointly with human partners. The effectiveness of the designed behavior is evidenced by objective measures of phase and event synchronization. Feedback gathered from the participants of our exploratory study suggests a subjectively pleasant sense of interaction created by the interactive behavior. The results highlight potential applications of the synchronization concept both in motor coordination among robotic agents and in enhanced social interaction between humanoid agents and humans. PMID:24752212
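As a toy illustration of the phase-coupling idea (this is not the authors' controller; the coupling gain K and the natural frequencies are invented), two Kuramoto-style phase oscillators can be driven toward phase locking by a sinusoidal coupling term:

```python
# Minimal sketch: two phase oscillators with different natural frequencies,
# coupled Kuramoto-style, illustrating how a coupling term pulls two
# rhythmic processes toward a locked phase difference.
import numpy as np

def simulate(K=1.5, w1=2.0, w2=2.3, dt=1e-3, T=20.0):
    """Integrate dphi_i/dt = w_i +/- K*sin(phi_j - phi_i) with Euler steps."""
    n = int(T / dt)
    phi = np.zeros((n, 2))
    phi[0] = [0.0, np.pi / 2]              # arbitrary initial phases
    for k in range(n - 1):
        d = phi[k, 1] - phi[k, 0]
        phi[k + 1, 0] = phi[k, 0] + dt * (w1 + K * np.sin(d))
        phi[k + 1, 1] = phi[k, 1] + dt * (w2 - K * np.sin(d))
    return phi

phi = simulate()
# For K large enough relative to |w2 - w1|, the difference settles to a constant.
print("final phase difference:", (phi[-1, 1] - phi[-1, 0]) % (2 * np.pi))
```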
Streamflow characteristics and trends in New Jersey, water years 1897-2003
Watson, Kara M.; Reiser, Robert G.; Nieswand, Steven P.; Schopp, Robert D.
2005-01-01
Streamflow statistics were computed for 111 continuous-record streamflow-gaging stations with 20 or more years of continuous record and for 500 low-flow partial-record stations, including 66 gaging stations with less than 20 years of continuous record. Daily mean streamflow data from water year 1897 through water year 2001 were used for the computations at the gaging stations. (The water year is the 12-month period, October 1 through September 30, designated by the calendar year in which it ends). The characteristics presented for the long-term continuous-record stations are daily streamflow, harmonic mean flow, flow frequency, daily flow durations, trend analysis, and streamflow variability. Low-flow statistics for gaging stations with less than 20 years of record and for partial-record stations were estimated by correlating base-flow measurements with daily mean flows at long-term (more than 20 years) continuous-record stations. Instantaneous streamflow measurements through water year 2003 were used to estimate low-flow statistics at the partial-record stations. The characteristics presented for partial-record stations are mean annual flow; harmonic mean flow; and annual and winter low-flow frequency. The annual 1-, 7-, and 30-day low- and high-flow data sets were tested for trends. The results of trend tests for high flows indicate relations between upward trends for high flows and stream regulation, and high flows and development in the basin. The relation between development and low-flow trends does not appear to be as strong as for development and high-flow trends. Monthly, seasonal, and annual precipitation data for selected long-term meteorological stations also were tested for trends to analyze the effects of climate. A significant upward trend in precipitation in northern New Jersey, Climate Division 1 was identified. For Climate Division 2, no general increase in average precipitation was observed. Trend test results indicate that high flows at undeveloped, unregulated sites have not been affected by the increase in average precipitation. The ratio of instantaneous peak flow to 3-day mean flow, ratios of flow duration, ratios of high-flow/low-flow frequency, and coefficient of variation were used to define streamflow variability. Streamflow variability was significantly greater among the group of gaging stations located outside the Coastal Plain than among the group of gaging stations located in the Coastal Plain.
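For readers who want to reproduce statistics of this kind, the sketch below is a hedged illustration on synthetic data (the series and variable names are ours, not the report's): it computes the annual 7-day low flow by water year and the coefficient of variation used as a streamflow-variability index.

```python
# Hedged sketch of two statistics named in the abstract, on synthetic data:
# the annual 7-day low flow and the coefficient of variation of daily flow.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
days = pd.date_range("1990-10-01", "2000-09-30", freq="D")
q = pd.Series(np.exp(rng.normal(3.0, 0.6, len(days))), index=days)  # daily mean flow

# Water year: Oct 1 through Sep 30, labeled by the calendar year in which it ends.
water_year = q.index.year + (q.index.month >= 10)

low7 = q.rolling(7).mean().groupby(water_year).min()  # annual 7-day low flow
cv = q.std() / q.mean()                               # variability index
print(low7.head())
print(f"coefficient of variation = {cv:.2f}")
```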
Landguth, Erin L.; Fedy, Bradley C.; Oyler-McCance, Sara J.; Garey, Andrew L.; Emel, Sarah L.; Mumma, Matthew; Wagner, Helene H.; Fortin, Marie-Josée; Cushman, Samuel A.
2012-01-01
The influence of study design on the ability to detect the effects of landscape pattern on gene flow is one of the most pressing methodological gaps in landscape genetic research. To investigate the effect of study design on landscape genetics inference, we used a spatially-explicit, individual-based program to simulate gene flow in a spatially continuous population inhabiting a landscape with gradual spatial changes in resistance to movement. We simulated a wide range of combinations of number of loci, number of alleles per locus and number of individuals sampled from the population. We assessed how these three aspects of study design influenced the statistical power to successfully identify the generating process among competing hypotheses of isolation-by-distance, isolation-by-barrier, and isolation-by-landscape resistance using a causal modelling approach with partial Mantel tests. We modelled the statistical power to identify the generating process as a response surface for equilibrium and non-equilibrium conditions after introduction of isolation-by-landscape resistance. All three variables (loci, alleles and sampled individuals) affect the power of causal modelling, but to different degrees. Stronger partial Mantel r correlations between landscape distances and genetic distances were found when more loci were used and when loci were more variable, which makes comparisons of effect size between studies difficult. Number of individuals did not affect the accuracy through mean equilibrium partial Mantel r, but larger samples decreased the uncertainty (increasing the precision) of equilibrium partial Mantel r estimates. We conclude that amplifying more (and more variable) loci is likely to increase the power of landscape genetic inferences more than increasing number of individuals.
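The causal-modelling approach relies on Mantel-type permutation tests between distance matrices. As a hedged sketch of the permutation logic only (the study used partial Mantel tests, which additionally partial out a third matrix), a simple Mantel test can be written as:

```python
# Illustrative simple Mantel test by permutation on toy distance matrices.
import numpy as np

def mantel(D1, D2, n_perm=999, seed=1):
    """Correlate upper triangles of two distance matrices; permute D2's rows/cols."""
    iu = np.triu_indices_from(D1, k=1)
    r_obs = np.corrcoef(D1[iu], D2[iu])[0, 1]
    rng = np.random.default_rng(seed)
    count = 0
    for _ in range(n_perm):
        p = rng.permutation(D1.shape[0])
        r = np.corrcoef(D1[iu], D2[p][:, p][iu])[0, 1]
        count += r >= r_obs
    return r_obs, (count + 1) / (n_perm + 1)   # correlation and permutation p-value

# Toy "landscape" and "genetic" distances with a shared signal
X = np.random.default_rng(0).random((20, 2))
D_geo = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
noise = np.random.default_rng(2).normal(0, 0.1, D_geo.shape)
D_gen = D_geo + (noise + noise.T) / 2
print(mantel(D_geo, D_gen))
```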
Secondary outcome analysis for data from an outcome-dependent sampling design.
Pan, Yinghao; Cai, Jianwen; Longnecker, Matthew P; Zhou, Haibo
2018-04-22
Outcome-dependent sampling (ODS) is a cost-effective way to conduct a study. For a study with a continuous primary outcome, an ODS scheme can be implemented where the expensive exposure is only measured on a simple random sample plus supplemental samples selected from the two tails of the primary outcome variable. With the tremendous cost invested in collecting the primary exposure information, investigators often would like to use the available data to study the relationship between a secondary outcome and the obtained exposure variable. This is referred to as secondary analysis. Secondary analysis in ODS designs can be tricky, as the ODS sample is not a random sample from the general population. In this article, we use inverse probability weighted and augmented inverse probability weighted estimating equations to analyze the secondary outcome for data obtained from the ODS design. We do not make any parametric assumptions on the primary and secondary outcomes and only specify the form of the regression mean models, thus allowing an arbitrary error distribution. Our approach is robust to second- and higher-order moment misspecification. It also leads to more precise estimates of the parameters by effectively using all the available participants. Through simulation studies, we show that the proposed estimator is consistent and asymptotically normal. Data from the Collaborative Perinatal Project are analyzed to illustrate our method. Copyright © 2018 John Wiley & Sons, Ltd.
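A hedged sketch of the basic inverse-probability-weighting idea follows (it is not the authors' augmented estimator, and all parameter values are invented): each ODS participant is weighted by the reciprocal of their selection probability, and a weighted estimating equation is solved for the secondary regression.

```python
# IPW sketch for secondary analysis under a toy ODS design.
import numpy as np

rng = np.random.default_rng(3)
n = 5000
x = rng.normal(size=n)                    # expensive exposure
y1 = x + rng.normal(size=n)               # primary outcome
y2 = 0.5 * x + rng.normal(size=n)         # secondary outcome (true slope 0.5)

# ODS: oversample the tails of the primary outcome
tails = (y1 < np.quantile(y1, 0.1)) | (y1 > np.quantile(y1, 0.9))
pi = np.where(tails, 0.9, 0.1)            # known selection probabilities
sel = rng.random(n) < pi

# Weighted least-squares estimating equation for the secondary model
w = 1.0 / pi[sel]
X = np.column_stack([np.ones(sel.sum()), x[sel]])
beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y2[sel]))
print("IPW estimate of secondary slope (truth 0.5):", beta[1])
```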
Dawes, Aaron J; Louie, Rachel; Nguyen, David K; Maggard-Gibbons, Melinda; Parikh, Punam; Ettner, Susan L; Ko, Clifford Y; Zingmond, David S
2014-01-01
Objective: To examine the effect of Medicaid enrollment on the diagnosis, treatment, and survival of six surgically relevant cancers among poor and underserved Californians. Data Sources: California Cancer Registry (CCR), California's Patient Discharge Database (PDD), and state Medicaid enrollment files between 2002 and 2008. Study Design: We linked clinical and administrative records to differentiate patients continuously enrolled in Medicaid from those receiving coverage at the time of their cancer diagnosis. We developed multivariate logistic regression models to predict death within 1 year for each cancer after controlling for sociodemographic and clinical variables. Data Collection/Extraction Methods: All incident cases of six cancers (colon, esophageal, lung, pancreas, stomach, and ovarian) were identified from CCR. CCR records were linked to hospitalizations (PDD) and monthly Medicaid enrollment. Principal Findings: Continuous enrollment in Medicaid for at least 6 months prior to diagnosis improves survival in three surgically relevant cancers. Discontinuous Medicaid patients have higher stage tumors, undergo fewer definitive operations, and are more likely to die even after risk adjustment. Conclusions: Expansion of continuous insurance coverage under the Affordable Care Act is likely to improve both access and clinical outcomes for cancer patients in California. PMID:25256223
Teleportation of Two-Mode Quantum State of Continuous Variables
NASA Astrophysics Data System (ADS)
Song, Tong-Qiang
2004-03-01
Using two Einstein-Podolsky-Rosen pair eigenstates |η> as quantum channels, we study the teleportation of a two-mode quantum state of continuous variables. The project was supported by the Natural Science Foundation of Zhejiang Province of China and the Open Foundation of the Laboratory of High-Intensity Optics, Shanghai Institute of Optics and Fine Mechanics.
Determination of continuous variable entanglement by purity measurements.
Adesso, Gerardo; Serafini, Alessio; Illuminati, Fabrizio
2004-02-27
We classify the entanglement of two-mode Gaussian states according to their degree of total and partial mixedness. We derive exact bounds that determine maximally and minimally entangled states for fixed global and marginal purities. This characterization allows for an experimentally reliable estimate of continuous variable entanglement based on measurements of purity.
Leverrier, Anthony; Grangier, Philippe
2009-05-08
We present a continuous-variable quantum key distribution protocol combining a discrete modulation and reverse reconciliation. This protocol is proven unconditionally secure and allows the distribution of secret keys over long distances, thanks to a reverse reconciliation scheme efficient at very low signal-to-noise ratio.
Quantum error correction of continuous-variable states against Gaussian noise
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ralph, T. C.
2011-08-15
We describe a continuous-variable error correction protocol that can correct the Gaussian noise induced by linear loss on Gaussian states. The protocol can be implemented using linear optics and photon counting. We explore the theoretical bounds of the protocol as well as the expected performance given current knowledge and technology.
Common pitfalls in statistical analysis: Linear regression analysis
Aggarwal, Rakesh; Ranganathan, Priya
2017-01-01
In a previous article in this series, we explained correlation analysis which describes the strength of relationship between two continuous variables. In this article, we deal with linear regression analysis which predicts the value of one continuous variable from another. We also discuss the assumptions and pitfalls associated with this analysis. PMID:28447022
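As a minimal companion to the article's topic, the sketch below fits a simple linear regression of one continuous variable on another by ordinary least squares (synthetic data; names are illustrative). Checking the residuals for linearity, constant variance, and normality guards against the pitfalls such articles discuss.

```python
# Simple linear regression: predict one continuous variable from another.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(50, 10, 200)                  # predictor
y = 2.0 + 0.8 * x + rng.normal(0, 5, 200)   # outcome with noise

slope, intercept = np.polyfit(x, y, 1)       # ordinary least squares fit
resid = y - (intercept + slope * x)
print(f"y = {intercept:.2f} + {slope:.2f}x, residual SD = {resid.std(ddof=2):.2f}")
```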
Grid sensitivity capability for large scale structures
NASA Technical Reports Server (NTRS)
Nagendra, Gopal K.; Wallerstein, David V.
1989-01-01
The considerations and the resultant approach used to implement design sensitivity capability for grids into a large scale, general purpose finite element system (MSC/NASTRAN) are presented. The design variables are grid perturbations with a rather general linking capability. Moreover, shape and sizing variables may be linked together. The design is general enough to facilitate geometric modeling techniques for generating design variable linking schemes in an easy and straightforward manner. Test cases have been run and validated by comparison with the overall finite difference method. The linking of a design sensitivity capability for shape variables in MSC/NASTRAN with an optimizer would give designers a powerful, automated tool to carry out practical optimization design of real life, complicated structures.
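The finite-difference validation mentioned above can be illustrated with a toy response (this is only a sketch of the checking idea, not a NASTRAN model): analytic design sensitivities are compared against central finite differences.

```python
# Validate analytic sensitivities against overall central finite differences.
import numpy as np

def response(x):
    """Stand-in structural response (e.g., a displacement) of two design variables."""
    return x[0] ** 2 * x[1] + np.sin(x[1])

def analytic_grad(x):
    """Hand-derived sensitivities of the toy response."""
    return np.array([2 * x[0] * x[1], x[0] ** 2 + np.cos(x[1])])

x0, h = np.array([1.2, 0.7]), 1e-6
fd = np.array([(response(x0 + h * e) - response(x0 - h * e)) / (2 * h)
               for e in np.eye(2)])
print("analytic:", analytic_grad(x0), "finite difference:", fd)
```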
An Optimization-Based Approach to Injector Element Design
NASA Technical Reports Server (NTRS)
Tucker, P. Kevin; Shyy, Wei; Vaidyanathan, Rajkumar; Turner, Jim (Technical Monitor)
2000-01-01
An injector optimization methodology, method i, is used to investigate optimal design points for gaseous oxygen/gaseous hydrogen (GO2/GH2) injector elements. A swirl coaxial element and an unlike impinging element (a fuel-oxidizer-fuel triplet) are used to facilitate the study. The elements are optimized in terms of design variables such as fuel pressure drop, ΔPf; oxidizer pressure drop, ΔPo; combustor length, Lcomb; and full cone swirl angle, θ (for the swirl element), or impingement half-angle, α (for the impinging element), at a given mixture ratio and chamber pressure. Dependent variables such as energy release efficiency, ERE; wall heat flux, Qw; injector heat flux, Qinj; relative combustor weight, Wrel; and relative injector cost, Crel, are calculated and then correlated with the design variables. An empirical design methodology is used to generate these responses for both element types. Method i is then used to generate response surfaces for each dependent variable for both types of elements. Desirability functions based on dependent variable constraints are created and used to facilitate development of composite response surfaces representing the five dependent variables in terms of the input variables. Three examples illustrating the utility and flexibility of method i are discussed in detail for each element type. First, joint response surfaces are constructed by sequentially adding dependent variables. Optimum designs are identified after addition of each variable, and the effect each variable has on the element design is illustrated. This stepwise demonstration also highlights the importance of including variables such as weight and cost early in the design process. Second, using the composite response surface that includes all five dependent variables, unequal weights are assigned to emphasize certain variables relative to others. Here, method i is used to enable objective trade studies on design issues such as component life and thrust-to-weight ratio. Finally, combining results from both elements to simulate a trade study, thrust-to-weight trends are illustrated and examined in detail.
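A hedged sketch of the desirability-function idea follows (a Derringer-Suich-style mapping; the response values, bounds, and weights are invented, not the paper's models): each response is mapped to [0, 1] and the maps are combined into a composite score by a weighted geometric mean.

```python
# Composite desirability for several smaller-is-better responses.
import numpy as np

def smaller_is_better(y, lo, hi, weight=1.0):
    """Desirability 1 at y <= lo, 0 at y >= hi, power-law in between."""
    d = np.clip((hi - y) / (hi - lo), 0.0, 1.0)
    return d ** weight

# Toy responses at one candidate design point: wall heat flux, weight, cost
Qw, Wrel, Crel = 30.0, 120.0, 1.4
d = [smaller_is_better(Qw, 10, 60),
     smaller_is_better(Wrel, 80, 200, weight=2.0),   # emphasize weight
     smaller_is_better(Crel, 1.0, 2.0)]
D = np.prod(d) ** (1 / len(d))                        # geometric mean
print(f"composite desirability D = {D:.3f}")          # 1 is ideal, 0 unacceptable
```

A geometric (rather than arithmetic) mean is the usual choice here because one fully unacceptable response drives the composite to zero.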
Regression Analysis with Dummy Variables: Use and Interpretation.
ERIC Educational Resources Information Center
Hinkle, Dennis E.; Oliver, J. Dale
1986-01-01
Multiple regression analysis (MRA) may be used when both continuous and categorical variables are included as independent research variables. The use of MRA with categorical variables involves dummy coding, that is, assigning zeros and ones to levels of categorical variables. Caution is urged in results interpretation. (Author/CH)
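A minimal illustration of dummy coding follows (toy data): a three-level categorical variable becomes two 0/1 columns, with one level held out as the reference, so the fitted coefficients read as offsets from that reference level.

```python
# Dummy coding a categorical predictor for regression.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "score": [72, 85, 90, 65, 78, 88],
    "group": ["A", "B", "C", "A", "B", "C"],
})
dummies = pd.get_dummies(df["group"], prefix="group", drop_first=True)  # A is reference
X = np.column_stack([np.ones(len(df)), dummies.to_numpy(dtype=float)])
beta, *_ = np.linalg.lstsq(X, df["score"].to_numpy(dtype=float), rcond=None)
# beta[0] is the mean of reference level A; beta[1:] are the B and C offsets.
print(dict(zip(["intercept(A)", "B-A", "C-A"], beta.round(2))))
```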
Nutrient movement in a 104-year old soil fertility experiment
USDA-ARS?s Scientific Manuscript database
Alabama’s “Cullars Rotation” experiment (circa 1911) is the oldest, continuous soil fertility experiment in the southern U.S. Treatments include 5 K variables, P variables, S variables, soil pH variables and micronutrient variables in 14 treatments involving a 3-yr rotation of (1) cotton-winter legu...
NASA Astrophysics Data System (ADS)
Su, Yung-Chao; Wu, Shin-Tza
2017-09-01
We study theoretically the teleportation of a controlled-phase (cz) gate through measurement-based quantum-information processing for continuous-variable systems. We examine the degree of entanglement in the output modes of the teleported cz-gate for two classes of resource states: the canonical cluster states that are constructed via direct implementations of two-mode squeezing operations and the linear-optical version of cluster states which are built from linear-optical networks of beam splitters and phase shifters. In order to reduce the excess noise arising from finite-squeezed resource states, teleportation through resource states with different multirail designs will be considered and the enhancement of entanglement in the teleported cz gates will be analyzed. For multirail cluster with an arbitrary number of rails, we obtain analytical expressions for the entanglement in the output modes and analyze in detail the results for both classes of resource states. At the same time, we also show that for uniformly squeezed clusters the multirail noise reduction can be optimized when the excess noise is allocated uniformly to the rails. To facilitate the analysis, we develop a trick with manipulations of quadrature operators that can reveal rather efficiently the measurement sequence and corrective operations needed for the measurement-based gate teleportation, which will also be explained in detail.
Sirois, Fuschia M; Salamonsen, Anita; Kristoffersen, Agnete E
2016-02-24
Research on continued CAM use has been largely atheoretical and has not considered the broader range of psychological and behavioral factors that may be involved. The purpose of this study was to test a new conceptual model of commitment to CAM use that implicates utilitarian (trust in CAM) and symbolic (perceived fit with CAM) values in psychological and behavioral dimensions of CAM commitment. A student sample of CAM consumers (N = 159) completed a survey about their CAM use, CAM-related values, intentions for future CAM use, CAM word-of-mouth behavior, and perceptions of being an ongoing CAM consumer. Analysis revealed that the utilitarian, symbolic, and CAM commitment variables were significantly related, with r's ranging from .54 to .73. A series of hierarchical regression analyses controlling for relevant demographic variables found that the utilitarian and symbolic values uniquely accounted for a significant and substantial proportion of the variance in each of the three CAM commitment indicators (R(2) from .37 to .57). The findings provide preliminary support for the new model, which posits that CAM commitment is a multi-dimensional psychological state with behavioral indicators. Further research with large-scale samples and longitudinal designs is warranted to understand the potential value of the new model.
Nabe-Nielsen, Kirsten; Garde, Anne Helene; Aust, Birgit; Diderichsen, Finn
2012-01-01
This quasi-experimental study investigated how an intervention aiming at increasing eldercare workers' influence on their working hours affected the flexibility, variability, regularity and predictability of the working hours. We used baseline (n = 296) and follow-up (n = 274) questionnaire data and interviews with intervention-group participants (n = 32). The work units in the intervention group designed their own intervention comprising either implementation of computerised self-scheduling (subgroup A), collection of information about the employees' work-time preferences by questionnaires (subgroup B), or discussion of working hours (subgroup C). Only computerised self-scheduling changed the working hours and the way they were planned. These changes implied more flexible but less regular working hours and an experience of less predictability and less continuity in the care of clients and in the co-operation with colleagues. In subgroup B and C, the participants ended up discussing the potential consequences of more work-time influence without actually implementing any changes. Employee work-time influence may buffer the adverse effects of shift work. However, our intervention study suggested that while increasing the individual flexibility, increasing work-time influence may also result in decreased regularity of the working hours and less continuity in the care of clients and co-operation with colleagues.
NASA Technical Reports Server (NTRS)
Sullivan, T. J.; Parker, D. E.
1979-01-01
A design technology study was performed to identify a high speed, multistage, variable geometry fan configuration capable of achieving wide flow modulation with near optimum efficiency at the important operating condition. A parametric screening study of the front and rear block fans was conducted in which the influence of major fan design features on weight and efficiency was determined. Key design parameters were varied systematically to determine the fan configuration most suited for a double bypass, variable cycle engine. Two and three stage fans were considered for the front block. A single stage, core driven fan was studied for the rear block. Variable geometry concepts were evaluated to provide near optimum off-design performance. A detailed aerodynamic design and a preliminary mechanical design were carried out for the selected fan configuration. Performance predictions were made for the front and rear block fans.
NASA Astrophysics Data System (ADS)
Beach, A. L., III; Early, A. B.; Chen, G.; Parker, L.
2014-12-01
NASA has conducted airborne tropospheric chemistry studies for about three decades. These field campaigns have generated a great wealth of observations, which are characterized by a wide range of trace gases and aerosol properties. The airborne observational data have often been used in assessment and validation of models and satellite instruments. The ASDC Toolset for Airborne Data (TAD) is being designed to meet the user community needs for manipulating aircraft data for scientific research on climate change and air quality relevant issues. Given the sheer volume of data variables across field campaigns and instruments reporting data on different time scales, this data is often difficult and time-intensive for researchers to analyze. The TAD web application is designed to provide an intuitive user interface (UI) to facilitate quick and efficient discovery from a vast number of airborne variables and data. Users are given the option to search based on high-level parameter groups, individual common names, mission and platform, as well as date ranges. Experienced users can immediately filter by keyword using the global search option. Once the user has chosen their required variables, they are given the option to either request PI data files based on their search criteria or create merged data, i.e. geo-located data from one or more measurement PIs. The purpose of the merged data feature is to allow users to compare data from one flight, as not all data from each flight is taken on the same time scale. Time bases can be continuous or based on the time base from one of the measurement time scales and intervals. After an order is submitted and processed, an ASDC email is sent to the user with a link for data download. The TAD user interface design, application architecture, and proposed future enhancements will be presented.
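The merge feature described above can be illustrated with a hedged sketch (the column names and the nearest-time rule are ours, not TAD's): observations reported on two different time bases are aligned to a common base by nearest-time matching within a tolerance.

```python
# Align two instruments reporting on different time bases to a common base.
import pandas as pd

fast = pd.DataFrame({"time_s": [0.0, 0.5, 1.0, 1.5, 2.0],
                     "o3_ppb": [31, 32, 30, 29, 33]})    # 0.5 s cadence
slow = pd.DataFrame({"time_s": [0.0, 1.0, 2.0],
                     "co_ppb": [95, 98, 97]})            # 1 s cadence

# Nearest-time merge; rows farther than 0.5 s from a slow record get NaN.
merged = pd.merge_asof(fast, slow, on="time_s", direction="nearest", tolerance=0.5)
print(merged)
```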
Insoo Kim; Bhagat, Yusuf A
2016-08-01
The standard in noninvasive blood pressure (BP) measurement is an inflatable cuff device based on the oscillometric method, which poses several practical challenges for continuous BP monitoring. Here, we present a novel ultra-wide band RF Doppler radar sensor as a next-generation mobile interface for characterizing fluid flow speeds and, ultimately, for measuring cuffless blood flow in the human wrist. The system takes advantage of 7.1~10.5 GHz ultra-wide band signals, which can reduce transceiver complexity and power consumption overhead. Moreover, results obtained from hardware development, antenna design and human wrist modeling, and subsequent phantom development are reported. Our comprehensive lab bench system setup with a peristaltic pump was capable of characterizing various speed flow components during a linear velocity sweep of 5~62 cm/s. The sensor holds potential for providing estimates of heart rate and blood pressure.
A digital signal processing system for coherent laser radar
NASA Technical Reports Server (NTRS)
Hampton, Diana M.; Jones, William D.; Rothermel, Jeffry
1991-01-01
A data processing system for use with continuous-wave lidar is described in terms of its configuration and performance during the second survey mission of NASA's Global Backscatter Experiment. The system is designed to estimate a complete lidar spectrum in real time, record the data from two lidars, and monitor variables related to the lidar operating environment. The PC-based system includes a transient capture board, a digital-signal processing (DSP) board, and a low-speed data-acquisition board. Both unprocessed and processed lidar spectrum data are monitored in real time, and the results are compared to those of a previous non-DSP-based system. Because the DSP-based system is digital, it is slower than the surface-acoustic-wave signal processor and collects 2500 spectra/s. However, the DSP-based system provides complete data sets at two wavelengths from the continuous-wave lidars.
Pritikin, Joshua N; Brick, Timothy R; Neale, Michael C
2018-04-01
A novel method for the maximum likelihood estimation of structural equation models (SEM) with both ordinal and continuous indicators is introduced using a flexible multivariate probit model for the ordinal indicators. A full information approach ensures unbiased estimates for data missing at random. Exceeding the capability of prior methods, up to 13 ordinal variables can be included before integration time increases beyond 1 s per row. The method relies on the axiom of conditional probability to split apart the distribution of continuous and ordinal variables. Due to the symmetry of the axiom, two similar methods are available. A simulation study provides evidence that the two similar approaches offer equal accuracy. A further simulation is used to develop a heuristic to automatically select the most computationally efficient approach. Joint ordinal continuous SEM is implemented in OpenMx, free and open-source software.
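In symbols (notation ours, not the paper's), the conditional-probability split admits two factorizations of the joint likelihood of the continuous indicators y_c and the ordinal indicators y_o:

```latex
% Two equivalent factorizations of the joint likelihood (notation ours):
\mathcal{L}(\theta)
  \;=\; f(y_c;\theta)\,\Pr(y_o \mid y_c;\theta)
  \;=\; \Pr(y_o;\theta)\, f(y_c \mid y_o;\theta)
% Only the multivariate-probit (ordinal) factor requires numerical
% integration, so a heuristic can pick whichever factorization makes
% that integral cheaper, consistent with the abstract's two methods.
```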
Kumar, Gautam; Kothare, Mayuresh V
2013-12-01
We derive conditions for continuous differentiability of inter-spike intervals (ISIs) of spiking neurons with respect to parameters (decision variables) of an external stimulating input current that drives a recurrent network of synaptically connected neurons. The dynamical behavior of individual neurons is represented by a class of discontinuous single-neuron models. We report here that ISIs of neurons in the network are continuously differentiable with respect to decision variables if (1) a continuously differentiable trajectory of the membrane potential exists between consecutive action potentials with respect to time and decision variables and (2) the partial derivative of the membrane potential of spiking neurons with respect to time is not equal to the partial derivative of their firing threshold with respect to time at the time of action potentials. Our theoretical results are supported by showing fulfillment of these conditions for a class of known bidimensional spiking neuron models.
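Condition (2) can be restated compactly; the notation below is ours, with membrane potential V(t, α), decision variables α, firing threshold θ(t), and spike time t_k defined by V(t_k, α) = θ(t_k):

```latex
% Transversality at the spike time (notation ours):
\left.\frac{\partial V(t,\alpha)}{\partial t}\right|_{t=t_k}
\;\neq\;
\left.\frac{\partial \theta(t)}{\partial t}\right|_{t=t_k}
% By the implicit function theorem, V(t_k,\alpha)=\theta(t_k) then defines
% t_k(\alpha) as a continuously differentiable function of \alpha, so the
% ISI t_{k+1}(\alpha)-t_k(\alpha) inherits this smoothness.
```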
François, Clément; Tanasescu, Adrian; Lamy, François-Xavier; Despiegel, Nicolas; Falissard, Bruno; Chalem, Ylana; Lançon, Christophe; Llorca, Pierre-Michel; Saragoussi, Delphine; Verpillat, Patrice; Wade, Alan G; Zighed, Djamel A
2017-01-01
Background and objective: Automated healthcare databases (AHDB) are an important data source for real-life drug and healthcare use. In the field of depression, the lack of detailed clinical data requires the use of binary proxies with important limitations. The study objective was to create a Depressive Health State Index (DHSI) as a continuous health state measure for depressed patients using available data in an AHDB. Methods: The study was based on a historical cohort design using the UK Clinical Practice Research Datalink (CPRD). Depressive episodes (depression diagnosis with an antidepressant prescription) were used to create the DHSI through 6 successive steps: (1) defining the study design; (2) identifying constituent parameters; (3) assigning relative weights to the parameters; (4) ranking based on the presence of parameters; (5) standardizing the rank of the DHSI; (6) developing a regression model to derive the DHSI in any other sample. Results: The DHSI ranged from 0 (worst) to 100 (best health state), comprising 29 parameters. The proportion of depressive episodes with a remission proxy increased with DHSI quartiles. Conclusion: A continuous outcome for depressed patients treated by antidepressants was created in an AHDB using several different variables and allowed more granularity than currently used proxies.
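A hedged sketch of steps (3) through (5) follows (the weights and parameter indicators are simulated for illustration; the actual DHSI parameters come from CPRD data): binary parameters are weighted, episodes are ranked by their weighted sum, and the ranks are standardized to a 0-100 scale with 100 as the best health state.

```python
# Index construction sketch: weight, rank, and standardize to 0-100.
import numpy as np

rng = np.random.default_rng(7)
n, p = 1000, 29                            # episodes, constituent parameters
params = rng.random((n, p)) < 0.3          # presence/absence of each parameter
weights = rng.random(p)                    # relative severity weights (step 3)

raw = params @ weights                     # weighted burden per episode (step 4)
ranks = raw.argsort().argsort()            # rank episodes by burden
dhsi = 100 * (1 - ranks / (n - 1))         # step 5: 100 = best health state
print("DHSI range:", dhsi.min(), dhsi.max())
```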
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bahman Habibzadeh
2010-01-31
The project began under a cooperative agreement between Mack Trucks, Inc. and the Department of Energy starting September 1, 2005. The major objective of the four-year project is to demonstrate a 10% efficiency gain by operating a Volvo 13-litre heavy-duty diesel engine at a constant or narrow speed and coupled to a continuously variable transmission. The simulation work on the constant-speed engine started on October 1st. The initial simulations are aimed at giving a basic engine model for the VTEC vehicle simulations. Compressor and turbine maps are based upon existing maps and/or qualified, realistic estimations. The reference engine is an MD 13 US07 475 hp. Phase I was completed in May 2006, which determined that an increase in fuel efficiency for the engine of 10.5% over the OICA cycle, and 8.2% over a road cycle, was possible. The net increase in fuel efficiency would be 5% when coupled to a CVT and operated over simulated highway conditions. In Phase II an economic analysis was performed on the engine with turbocompound (TC) and a continuously variable transmission (CVT). The system was analyzed to determine the payback time needed for the added cost of the TC and CVT system. The analysis was performed by considering two different production scenarios of 10,000 and 60,000 units annually. The cost estimate includes the turbocharger, the turbocompound unit, the interstage duct diffuser and installation details, the modifications necessary on the engine, and the CVT. Even with the cheapest fuel and the lowest improvement, the payback time is only slightly more than 12 months. A gear train is necessary between the engine crankshaft and the turbocompound unit. This is considered to be relatively straightforward with no design problems.
A 24 km fiber-based discretely signaled continuous variable quantum key distribution system.
Dinh Xuan, Quyen; Zhang, Zheshen; Voss, Paul L
2009-12-21
We report a continuous variable key distribution system that achieves a final secure key rate of 3.45 kilobits/s over a distance of 24.2 km of optical fiber. The protocol uses discrete signaling and post-selection to improve reconciliation speed and quantifies security by means of quantum state tomography. Polarization multiplexing and a frequency translation scheme permit transmission of a continuous wave local oscillator and suppression of noise from guided acoustic wave Brillouin scattering by more than 27 dB.
Design of RF MEMS switches without pull-in instability
NASA Astrophysics Data System (ADS)
Proctor, W. Cyrus; Richards, Gregory P.; Shen, Chongyi; Skorczewski, Tyler; Wang, Min; Zhang, Jingyan; Zhong, Peng; Massad, Jordan E.; Smith, Ralph
2010-04-01
Micro-electro-mechanical systems (MEMS) switches for radio-frequency (RF) signals have certain advantages over solid-state switches, such as lower insertion loss, higher isolation, and lower static power dissipation. Mechanical dynamics can be a determining factor for the reliability of RF MEMS. The RF MEMS ohmic switch discussed in this paper consists of a plate suspended over an actuation pad by four double-cantilever springs. Closing the switch with a simple step actuation voltage typically causes the plate to rebound from its electrical contacts. The rebound interrupts the signal continuity and degrades the performance, reliability, and durability of the switch. The switching dynamics are complicated by a nonlinear, electrostatic pull-in instability that causes high accelerations. Slow actuation and tailored voltage control signals can mitigate switch bouncing and the effects of the pull-in instability; however, slow switching speed and overly complex input signals can significantly penalize overall system-level performance. Examination of a balanced and optimized alternative switching solution is sought. A step toward one solution is to consider a pull-in-free switch design. In this paper, we determine how simple RC-circuit drive signals and particular structural properties influence the mechanical dynamics of an RF MEMS switch designed without a pull-in instability. The approach is to develop a validated modeling capability and subsequently study switch behavior for variable drive signals and switch design parameters. In support of project development, specifiable design parameters and constraints are provided, along with transient data of RF MEMS switches from laser Doppler velocimetry for model validation tasks. Analysis showed that an RF MEMS switch could feasibly be designed with a single pulse waveform and no pull-in instability and achieve results comparable to previous waveform designs. The switch design could reliably close in a timely manner, with small contact velocity, usually with little to no rebound even when considering manufacturing variability.
Reducing Design Risk Using Robust Design Methods: A Dual Response Surface Approach
NASA Technical Reports Server (NTRS)
Unal, Resit; Yeniay, Ozgur; Lepsch, Roger A. (Technical Monitor)
2003-01-01
Space transportation system conceptual design is a multidisciplinary process containing a considerable element of risk. Risk here is defined as the variability in the estimated (output) performance characteristic of interest resulting from the uncertainties in the values of several disciplinary design and/or operational parameters. Uncertainties from one discipline (and/or subsystem) may propagate to another through linking parameters, and the final system output may have a significant accumulation of risk. This variability can result in significant deviations from the expected performance. Therefore, an estimate of variability (which is called design risk in this study), together with the expected performance characteristic value (e.g., mean empty weight), is necessary for multidisciplinary optimization for a robust design. Robust design in this study is defined as a solution that minimizes variability subject to a constraint on mean performance characteristics. Even though multidisciplinary design optimization has gained wide attention and applications, the treatment of uncertainties to quantify and analyze design risk has received little attention. This research effort explores the dual response surface approach to quantify variability (risk) in critical performance characteristics (such as weight) during conceptual design.
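A hedged sketch of the dual response surface idea follows (a toy one-variable model, not the study's weight model): one quadratic surface is fitted to the mean response and another to its log variance, and the design search minimizes predicted variance subject to a constraint on the predicted mean.

```python
# Dual response surfaces: fit mean and log-variance, then constrained search.
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 60)                          # one design variable
reps = np.array([10 + 3*xi + 2*xi**2 + rng.normal(0, 0.5 + 0.6*abs(xi), 8)
                 for xi in x])                      # 8 replicates per design point

basis = np.column_stack([np.ones_like(x), x, x**2])
b_mean, *_ = np.linalg.lstsq(basis, reps.mean(axis=1), rcond=None)
b_lvar, *_ = np.linalg.lstsq(basis, np.log(reps.var(axis=1, ddof=1)), rcond=None)

grid = np.linspace(-1, 1, 201)
G = np.column_stack([np.ones_like(grid), grid, grid**2])
mean_hat, var_hat = G @ b_mean, np.exp(G @ b_lvar)
feasible = mean_hat <= 12.0                          # constraint on mean response
best = grid[feasible][np.argmin(var_hat[feasible])]  # robust (minimum-variance) design
print("robust design point:", round(float(best), 3))
```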
Martinussen, Torben; Vansteelandt, Stijn; Tchetgen Tchetgen, Eric J; Zucker, David M
2017-12-01
The use of instrumental variables for estimating the effect of an exposure on an outcome is popular in econometrics, and increasingly so in epidemiology. This increasing popularity may be attributed to the natural occurrence of instrumental variables in observational studies that incorporate elements of randomization, either by design or by nature (e.g., random inheritance of genes). Instrumental variables estimation of exposure effects is well established for continuous outcomes and to some extent for binary outcomes. It is, however, largely lacking for time-to-event outcomes because of complications due to censoring and survivorship bias. In this article, we make a novel proposal under a class of structural cumulative survival models which parameterize time-varying effects of a point exposure directly on the scale of the survival function; these models are essentially equivalent with a semi-parametric variant of the instrumental variables additive hazards model. We propose a class of recursive instrumental variable estimators for these exposure effects, and derive their large sample properties along with inferential tools. We examine the performance of the proposed method in simulation studies and illustrate it in a Mendelian randomization study to evaluate the effect of diabetes on mortality using data from the Health and Retirement Study. We further use the proposed method to investigate potential benefit from breast cancer screening on subsequent breast cancer mortality based on the HIP-study. © 2017, The International Biometric Society.
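For the continuous-outcome case the abstract calls well established, the basic instrumental-variables logic can be sketched as follows (toy simulation; the Wald ratio shown is the simplest IV estimator, not the paper's survival estimator):

```python
# IV vs. naive OLS under unmeasured confounding, continuous outcome.
import numpy as np

rng = np.random.default_rng(11)
n = 10000
z = rng.integers(0, 2, n).astype(float)   # instrument (e.g., randomly inherited allele)
u = rng.normal(size=n)                    # unmeasured confounder
x = 0.8 * z + u + rng.normal(size=n)      # exposure
y = 1.5 * x + u + rng.normal(size=n)      # outcome; OLS is biased by u

beta_iv = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]   # Wald/IV estimator
beta_ols = np.cov(x, y)[0, 1] / np.var(x)
print(f"IV: {beta_iv:.2f} (truth 1.5), OLS: {beta_ols:.2f} (confounded)")
```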
Kusurkar, R A; Ten Cate, Th J; van Asperen, M; Croiset, G
2011-01-01
Motivation in learning behaviour and education is well researched in general education, but less so in medical education. This review addresses two research questions in the light of the Self-Determination Theory (SDT) of motivation: 'How has the literature studied motivation as either an independent or dependent variable?' and 'How is motivation useful in predicting and understanding processes and outcomes in medical education?' A literature search performed using the PubMed, PsycINFO and ERIC databases resulted in 460 articles. The inclusion criteria were empirical research, specific measurement of motivation, and qualitative research studies with well-designed methodology; only studies related to medical students/school were included. Findings of 56 articles were included in the review. Motivation as an independent variable appears to affect learning and study behaviour, academic performance, choice of medicine and specialty within medicine, and intention to continue medical study. Motivation as a dependent variable appears to be affected by age, gender, ethnicity, socioeconomic status, personality, year of medical curriculum, and teacher and peer support, none of which can be manipulated by medical educators. Motivation is also affected by factors that can be influenced, among which are autonomy, competence, and relatedness, which have been described as the basic psychological needs important for intrinsic motivation according to SDT. Motivation is an independent variable in medical education influencing important outcomes and is also a dependent variable influenced by autonomy, competence, and relatedness. This review finds some evidence in support of the validity of SDT in medical education.
Barnwell-Ménard, Jean-Louis; Li, Qing; Cohen, Alan A
2015-03-15
The loss of signal associated with categorizing a continuous variable is well known, and previous studies have demonstrated that this can lead to an inflation of Type-I error when the categorized variable is a confounder in a regression analysis estimating the effect of an exposure on an outcome. However, it is not known how the Type-I error may vary under different circumstances, including logistic versus linear regression, different distributions of the confounder, and different categorization methods. Here, we analytically quantified the effect of categorization and then performed a series of 9600 Monte Carlo simulations to estimate the Type-I error inflation associated with categorization of a confounder under different regression scenarios. We show that Type-I error is unacceptably high (>10% in most scenarios and often 100%). The only exception was when the variable categorized was a continuous mixture proxy for a genuinely dichotomous latent variable, where both the continuous proxy and the categorized variable are error-ridden proxies for the dichotomous latent variable. As expected, error inflation was also higher with larger sample size, fewer categories, and stronger associations between the confounder and the exposure or outcome. We provide online tools that can help researchers estimate the potential error inflation and understand how serious a problem this is. Copyright © 2014 John Wiley & Sons, Ltd.
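The core simulation idea can be sketched as follows (a minimal version with invented effect sizes, not the paper's 9600-scenario design): a continuous confounder is dichotomized before adjustment, the null exposure effect is tested, and the empirical rejection rate reveals the Type-I error inflation.

```python
# Monte Carlo: Type-I error inflation from categorizing a continuous confounder.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
rejections, n, sims = 0, 2000, 500
for _ in range(sims):
    c = rng.normal(size=n)                       # continuous confounder
    x = 0.7 * c + rng.normal(size=n)             # exposure (no true effect on y)
    y = 0.7 * c + rng.normal(size=n)             # outcome depends only on c
    c_cat = (c > 0).astype(float)                # categorization discards signal
    X = np.column_stack([np.ones(n), x, c_cat])  # adjust for the *categorized* c
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    se = np.sqrt(resid @ resid / (n - 3) * np.linalg.inv(X.T @ X)[1, 1])
    rejections += abs(beta[1] / se) > stats.norm.ppf(0.975)
print("empirical Type-I error:", rejections / sims)   # nominal level is 0.05
```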
Battery Energy Storage Systems to Mitigate the Variability of Photovoltaic Power Generation
NASA Astrophysics Data System (ADS)
Gurganus, Heath Alan
Methods of generating renewable energy such as through solar photovoltaic (PV) cells and wind turbines offer great promise in terms of a reduced carbon footprint and overall impact on the environment. However, these methods also share the attribute of being highly stochastic, meaning they are variable in such a way that is difficult to forecast with sufficient accuracy. While solar power currently constitutes a small amount of generating potential in most regions, the cost of photovoltaics continues to decline and a trend has emerged to build larger PV plants than was once feasible. This has brought the matter of increased variability to the forefront of research in the industry. Energy storage has been proposed as a means of mitigating this increased variability --- and thus reducing the need to utilize traditional spinning reserves --- as well as offering auxiliary grid services such as peak-shifting and frequency control. This thesis addresses the feasibility of using electrochemical storage methods (i.e. batteries) to decrease the ramp rates of PV power plants. By building a simulation of a grid-connected PV array and a typical Battery Energy Storage System (BESS) in the NetLogo simulation environment, I have created a parameterized tool that can be tailored to describe almost any potential PV setup. This thesis describes the design and function of this model, and makes a case for the accuracy of its measurements by comparing its simulated output to that of well-documented real world sites. Finally, a set of recommendations for the design and operational parameters of such a system are then put forth based on the results of several experiments performed using this model.
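A hedged sketch of ramp-rate limiting with a battery follows (all parameters are invented, and the model ignores efficiency losses, unlike the NetLogo simulation described above): the battery absorbs or supplies the difference between raw PV output and a ramp-limited grid injection, subject to its state-of-charge limits.

```python
# Ramp-rate limiting of a noisy PV power series with a simple battery model.
import numpy as np

rng = np.random.default_rng(5)
pv = np.clip(np.cumsum(rng.normal(0, 5, 600)) + 500, 0, None)  # PV power, kW
max_ramp = 2.0            # kW per time step allowed at the grid connection
cap = 200.0               # battery energy capacity, in kW-steps
soc, grid = cap / 2, [pv[0]]

for p in pv[1:]:
    target = np.clip(p, grid[-1] - max_ramp, grid[-1] + max_ramp)
    flow = p - target                      # + charges, - discharges the battery
    flow = np.clip(flow, -soc, cap - soc)  # respect state-of-charge limits
    soc += flow
    grid.append(p - flow)                  # ramp may be exceeded if battery saturates

print("max raw ramp:", np.abs(np.diff(pv)).max(),
      "max smoothed ramp:", np.abs(np.diff(grid)).max())
```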
Hybrid Methods in Quantum Information
NASA Astrophysics Data System (ADS)
Marshall, Kevin
Today, the potential power of quantum information processing comes as no surprise to physicist or science-fiction writer alike. However, the grand promises of this field remain unrealized, despite significant strides forward, due to the inherent difficulties of manipulating quantum systems. Simply put, it turns out that it is incredibly difficult to interact, in a controllable way, with the quantum realm when we seem to live our day to day lives in a classical world. In an effort to solve this challenge, people are exploring a variety of different physical platforms, each with their strengths and weaknesses, in hopes of developing new experimental methods that one day might allow us to control a quantum system. One path forward rests in combining different quantum systems in novel ways to exploit the benefits of different systems while circumventing their respective weaknesses. In particular, quantum systems come in two different flavours: either discrete-variable systems or continuous-variable ones. The field of hybrid quantum information seeks to combine these systems, in clever ways, to help overcome the challenges blocking the path between what is theoretically possible and what is achievable in a laboratory. In this thesis we explore four topics in the context of hybrid methods in quantum information, in an effort to contribute to the resolution of existing challenges and to stimulate new avenues of research. First, we explore the manipulation of a continuous-variable quantum system consisting of phonons in a linear chain of trapped ions where we use the discretized internal levels to mediate interactions. Using our proposed interaction we are able to implement, for example, the acoustic equivalent of a beam splitter with modest experimental resources. Next we propose an experimentally feasible implementation of the cubic phase gate, a primitive non-Gaussian gate required for universal continuous-variable quantum computation, based off sequential photon subtraction. We then discuss the notion of embedding a finite dimensional state into a continuous-variable system, and propose a method of performing quantum computations on encrypted continuous-variable states. This protocol allows for a client, of limited quantum ability, to outsource a computation while hiding their information. Next, we discuss the possibility of performing universal quantum computation on discrete-variable logical states encoded in mixed continuous-variable quantum states. Finally, we present an account of open problems related to our results, and possible future avenues of research.
Optimisation of Fabric Reinforced Polymer Composites Using a Variant of Genetic Algorithm
NASA Astrophysics Data System (ADS)
Axinte, Andrei; Taranu, Nicolae; Bejan, Liliana; Hudisteanu, Iuliana
2017-12-01
Fabric reinforced polymeric composites are high performance materials with a rather complex fabric geometry. Therefore, modelling this type of material is a cumbersome task, especially when efficient use is targeted. One of the most important issues of the design process is the optimisation of the individual laminae and of the laminated structure as a whole. In order to do that, a parametric model of the material has been defined, emphasising the many geometric variables that need to be correlated in the complex process of optimisation. The input parameters involved in this work include the widths or heights of the tows and the laminate stacking sequence, which are discrete variables, and the gaps between adjacent tows and the height of the neat matrix, which are continuous variables. This work is one of the first attempts at using a Genetic Algorithm (GA) to optimise the geometrical parameters of satin reinforced multi-layer composites. Given the mixed type of the input parameters involved, an original software package called SOMGA (Satin Optimisation with a Modified Genetic Algorithm) has been conceived and utilised in this work. The main goal is to find the best possible solution to the problem of designing a composite material which is able to withstand a given set of external, in-plane loads. The optimisation process has been performed using a fitness function which can analyse and compare the mechanical behaviour of different fabric reinforced composites, the results being correlated with the ultimate strains, which demonstrates the efficiency of the composite structure.
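A hedged sketch of a genetic algorithm over mixed variables follows (toy fitness function, not SOMGA): discrete genes mutate by resampling from their allowed set, continuous genes by Gaussian perturbation, with elitist truncation selection and gene-wise uniform crossover.

```python
# Mixed discrete/continuous GA sketch with a toy two-gene objective.
import numpy as np

rng = np.random.default_rng(9)
WIDTHS = np.array([1.0, 1.5, 2.0, 2.5])        # allowed discrete values (illustrative)

def fitness(pop):
    w, gap = pop[:, 0], pop[:, 1]              # discrete gene, continuous gene
    return -(w - 1.5) ** 2 - (gap - 0.3) ** 2  # toy objective, optimum at (1.5, 0.3)

pop = np.column_stack([rng.choice(WIDTHS, 40), rng.uniform(0, 1, 40)])
for gen in range(100):
    elite = pop[np.argsort(fitness(pop))[-10:]]           # truncation selection
    parents = elite[rng.integers(0, 10, (40, 2))]         # (40 pairs, 2 genes each)
    mask = rng.random((40, 2)) < 0.5
    pop = np.where(mask, parents[:, 0], parents[:, 1])    # uniform crossover
    mut = rng.random(40) < 0.2
    pop[mut, 0] = rng.choice(WIDTHS, mut.sum())           # discrete mutation
    pop[mut, 1] = np.clip(pop[mut, 1] + rng.normal(0, 0.05, mut.sum()), 0, 1)

print("best individual:", pop[np.argmax(fitness(pop))])
```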
Modeling the Dynamics of Task Allocation and Specialization in Honeybee Societies
NASA Astrophysics Data System (ADS)
Hoogendoorn, Mark; Schut, Martijn C.; Treur, Jan
The concept of organization has been studied in sciences such as social science and economics, but recently also in artificial intelligence [Furtado 2005, Giorgini 2004, and McCallum 2005]. With the desire to analyze and design more complex systems consisting of larger numbers of agents (e.g., in nature, society, or software), the need arises for a concept of higher abstraction than the concept agent. To this end, organizational modeling is becoming a practiced stage in the analysis and design of multi-agent systems, hereby taking into consideration the environment of the organization. An environment can have a high degree of variability which might require organizations to adapt to the environment's dynamics, to ensure a continuous proper functioning of the organization. Hence, such change processes are a crucial function of the organization and should be part of the organizational model.
Designing healthcare information technology to catalyse change in clinical care.
Lester, William T; Zai, Adrian H; Grant, Richard W; Chueh, Henry C
2008-01-01
The gap between best practice and actual patient care continues to be a pervasive problem in our healthcare system. Efforts to improve on this knowledge-performance gap have included computerised disease management programs designed to improve guideline adherence. However, current computerised reminder and decision support interventions directed at changing physician behaviour have had only a limited and variable effect on clinical outcomes. Further, immediate pay-for-performance financial pressures on institutions have created an environment where disease management systems are often created under duress, appended to existing clinical systems and poorly integrated into the existing workflow, potentially limiting their real-world effectiveness. The authors present a review of disease management as well as a conceptual framework to guide the development of more effective health information technology (HIT) tools for translating clinical information into clinical action.
GASP- General Aviation Synthesis Program. Volume 1: Main program. Part 1: Theoretical development
NASA Technical Reports Server (NTRS)
Hague, D.
1978-01-01
The General Aviation Synthesis Program (GASP) performs tasks generally associated with aircraft preliminary design and allows an analyst the capability of performing parametric studies in a rapid manner. GASP emphasizes small fixed-wing aircraft employing propulsion systems varying from a single piston engine with a fixed-pitch propeller through twin turboprop/turbofan powered business or transport type aircraft. The program, which may be operated from a computer terminal in either the batch or interactive graphic mode, is comprised of modules representing the various technical disciplines, integrated into a computational flow which ensures that the interacting effects of design variables are continuously accounted for in the aircraft sizing procedure. The model is a useful tool for comparing configurations, assessing aircraft performance and economics, performing tradeoff and sensitivity studies, and assessing the impact of advanced technologies on aircraft performance and economics.
[An Introduction to Methods for Evaluating Health Care Technology].
Lee, Ting-Ting
2015-06-01
The rapid and continual advance of healthcare technology makes ensuring that this technology is used effectively to achieve its original goals a critical issue. This paper presents three methods that may be applied by healthcare professionals in the evaluation of healthcare technology. These methods include: the perception/experiences of users, user work-pattern changes, and chart review or data mining. The first method includes two categories: using interviews to explore the user experience and using theory-based questionnaire surveys. The second method applies work sampling to observe the work pattern changes of users. The last method conducts chart reviews or data mining to analyze the designated variables. In conclusion, while evaluative feedback may be used to improve the design and development of healthcare technology applications, the informatics competency and informatics literacy of users may be further explored in future research.
Huang, Yi-Shao; Liu, Wel-Ping; Wu, Min; Wang, Zheng-Wu
2014-09-01
This paper presents a novel observer-based decentralized hybrid adaptive fuzzy control scheme for a class of large-scale continuous-time multiple-input multiple-output (MIMO) uncertain nonlinear systems whose state variables are unmeasurable. The scheme integrates fuzzy logic systems, state observers, and strictly positive real conditions to deal with three issues in the control of a large-scale MIMO uncertain nonlinear system: algorithm design, controller singularity, and transient response. Then, the design of the hybrid adaptive fuzzy controller is extended to address a general large-scale uncertain nonlinear system. It is shown that the resultant closed-loop large-scale system keeps asymptotically stable and the tracking error converges to zero. The better characteristics of our scheme are demonstrated by simulations. Copyright © 2014. Published by Elsevier Ltd.
Effect analysis of design variables on the disc in a double-eccentric butterfly valve.
Kang, Sangmo; Kim, Da-Eun; Kim, Kuk-Kyeom; Kim, Jun-Oh
2014-01-01
We have performed a shape optimization of the disc in an industrial double-eccentric butterfly valve using the effect analysis of design variables to enhance the valve performance. For the optimization, we select three performance quantities such as pressure drop, maximum stress, and mass (weight) as the responses and three dimensions regarding the disc shape as the design variables. Subsequently, we compose a layout of orthogonal array (L16) by performing numerical simulations on the flow and structure using a commercial package, ANSYS v13.0, and then make an effect analysis of the design variables on the responses using the design of experiments. Finally, we formulate a multiobjective function consisting of the three responses and then propose an optimal combination of the design variables to maximize the valve performance. Simulation results show that the disc thickness makes the most significant effect on the performance and the optimal design provides better performance than the initial design.
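A hedged sketch of the multiobjective combination follows (the response values and weights are invented, not the simulation results): each response is scaled to [0, 1] across the runs and combined with importance weights, with smaller scores preferred throughout.

```python
# Weighted multiobjective score over an orthogonal-array results table.
import numpy as np

# Rows = runs; columns = (pressure drop, max stress, mass); toy values
# standing in for 4 of the 16 L16 runs.
Y = np.array([[12.1, 310.0, 4.2],
              [11.4, 335.0, 3.9],
              [13.0, 298.0, 4.5],
              [10.8, 350.0, 3.7]])
w = np.array([0.4, 0.3, 0.3])            # relative importance of the responses

norm = (Y - Y.min(axis=0)) / (Y.max(axis=0) - Y.min(axis=0))  # scale to [0, 1]
score = norm @ w                          # smaller is better for every response
print("best run:", int(np.argmin(score)), "scores:", score.round(3))
```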
Development of Multi-slice Analytical Tool to Support BIM-based Design Process
NASA Astrophysics Data System (ADS)
Atmodiwirjo, P.; Johanes, M.; Yatmo, Y. A.
2017-03-01
This paper describes the on-going development of a computational tool to analyse architectural and interior space based on a multi-slice representation approach that is integrated with Building Information Modelling (BIM). Architectural and interior space is experienced as a dynamic entity, with spatial properties that may vary from one part of a space to another; therefore, the representation of space through standard architectural drawings is sometimes not sufficient. The representation of space as a series of slices, each with certain properties, becomes important, so that the different characteristics in each part of the space can inform the design process. The analytical tool is developed for use as a stand-alone application that utilises data exported from a generic BIM modelling tool. The tool would be useful in assisting design development processes that apply BIM, particularly for the design of architectural and interior spaces that are experienced as continuous spaces. The tool allows the identification of how spatial properties change dynamically throughout the space and allows the prediction of potential design problems. Integrating the multi-slice analytical tool in a BIM-based design process could thereby assist architects to generate better designs and to avoid unnecessary costs that are often caused by failure to identify problems during design development stages.
Robson, Andrew; Robson, Fiona
2015-01-01
To identify the combination of variables that explain nurses' continuation intention in the UK National Health Service. This setting permitted the replication of an Australian private-sector study. The study provides understanding of the issues that affect nurse retention in a sector where employee attrition is a key challenge, further exacerbated by an ageing workforce. A quantitative study based on a self-completion survey questionnaire completed in 2010. Nurses employed in two UK National Health Service Foundation Trusts were surveyed and assessed using seven work-related constructs and various demographics including age generation. Through correlation, multiple regression and stepwise regression analysis, the potential combined effect of various explanatory variables on continuation intention was assessed, across the entire nursing cohort and in three age-generation groups. Three variables act in combination to explain continuation intention: work-family conflict, work attachment and importance of work to the individual. This combination of significant explanatory variables was consistent across the three generations of nursing employee. Work attachment was identified as the strongest marginal predictor of continuation intention. Work orientation has a greater impact on continuation intention than employer-directed interventions such as leader-member exchange, teamwork and autonomy. UK nurses are homogeneous across the three age generations in what explains continuation intention, with the significant explanatory measures being narrower in focus and more concentrated on the individual. This suggests that differentiated approaches to retention should perhaps not be pursued in this sectoral context. © 2014 John Wiley & Sons Ltd.
26 CFR 1.467-5 - Section 467 rental agreements with variable interest.
Code of Federal Regulations, 2010 CFR
2010-04-01
Internal Revenue Service, Department of the Treasury (continued); Income Tax (continued); Income Taxes; Taxable Year for Which Deductions Taken; § 1.467-5 Section 467 rental agreements with variable interest. …
Satisfying the Einstein-Podolsky-Rosen criterion with massive particles
NASA Astrophysics Data System (ADS)
Peise, J.; Kruse, I.; Lange, K.; Lücke, B.; Pezzè, L.; Arlt, J.; Ertmer, W.; Hammerer, K.; Santos, L.; Smerzi, A.; Klempt, C.
2016-03-01
In 1935, Einstein, Podolsky and Rosen (EPR) questioned the completeness of quantum mechanics by devising a quantum state of two massive particles with maximally correlated space and momentum coordinates. The EPR criterion qualifies such continuous-variable entangled states, as shown successfully with light fields. Here, we report on the production of massive particles which meet the EPR criterion for continuous phase/amplitude variables. The created quantum state of ultracold atoms shows an EPR parameter of 0.18(3), which is 2.4 standard deviations below the threshold of 1/4. Our state presents a resource for tests of quantum nonlocality with massive particles and a wide variety of applications in the field of continuous-variable quantum information and metrology.
Testing for entanglement with periodic coarse graining
NASA Astrophysics Data System (ADS)
Tasca, D. S.; Rudnicki, Łukasz; Aspden, R. S.; Padgett, M. J.; Souto Ribeiro, P. H.; Walborn, S. P.
2018-04-01
Continuous-variable systems find valuable applications in quantum information processing. To deal with an infinite-dimensional Hilbert space, one in general has to handle large numbers of discretized measurements in tasks such as entanglement detection. Here we employ the continuous transverse spatial variables of photon pairs to experimentally demonstrate entanglement criteria based on a periodic structure of coarse-grained measurements. The periodization of the measurements allows an efficient evaluation of entanglement using spatial masks acting as mode analyzers over the entire transverse field distribution of the photons, without the need to reconstruct the probability densities of the conjugate continuous variables. Our experimental results demonstrate the utility of the derived criteria with a success rate in entanglement detection of ~60% relative to 7344 studied cases.
Composable security proof for continuous-variable quantum key distribution with coherent states.
Leverrier, Anthony
2015-02-20
We give the first composable security proof for continuous-variable quantum key distribution with coherent states against collective attacks. Crucially, in the limit of large blocks the secret key rate converges to the usual value computed from the Holevo bound. Combining our proof with either the de Finetti theorem or the postselection technique then shows the security of the protocol against general attacks, thereby confirming the long-standing conjecture that Gaussian attacks are optimal asymptotically in the composable security framework. We expect that our parameter estimation procedure, which does not rely on any assumption about the quantum state being measured, will find applications elsewhere, for instance, for the reliable quantification of continuous-variable entanglement in finite-size settings.
Finite-size analysis of a continuous-variable quantum key distribution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leverrier, Anthony; Grosshans, Frederic; Grangier, Philippe
2010-06-15
The goal of this paper is to extend the framework of finite-size analysis recently developed for quantum key distribution to continuous-variable protocols. We do not solve this problem completely here, and we mainly consider the finite-size effects on the parameter estimation procedure. Despite the fact that some questions are left open, we are able to give an estimation of the secret key rate for protocols which do not contain a postselection procedure. As expected, these results are significantly more pessimistic than those obtained in the asymptotic regime. However, we show that recent continuous-variable protocols are able to provide fully secure secret keys in the finite-size scenario, over distances larger than 50 km.
NASA Astrophysics Data System (ADS)
Zheng, Y.; Chen, J.
2018-06-01
Variable stiffness composite structures take full advantage of the design freedom of composites: the enlarged design space enables better structural performance. Through optimal design of a variable stiffness cylinder, its buckling capacity can be increased compared with a constant stiffness counterpart. In this paper, variable stiffness composite cylinders sustaining combined loadings are considered, and the optimization is conducted with a multi-objective optimization method. The results indicate that the loading capacity of the variable stiffness cylinder is increased significantly compared with the constant stiffness design, especially when an inhomogeneous loading is considered.
Toni, Tina; Tidor, Bruce
2013-01-01
Biological systems are inherently variable, with their dynamics influenced by intrinsic and extrinsic sources. These systems are often only partially characterized, with large uncertainties about specific sources of extrinsic variability and biochemical properties. Moreover, it is not yet well understood how different sources of variability combine and affect biological systems in concert. To successfully design biomedical therapies or synthetic circuits with robust performance, it is crucial to account for uncertainty and effects of variability. Here we introduce an efficient modeling and simulation framework to study systems that are simultaneously subject to multiple sources of variability, and apply it to make design decisions on small genetic networks that play a role of basic design elements of synthetic circuits. Specifically, the framework was used to explore the effect of transcriptional and post-transcriptional autoregulation on fluctuations in protein expression in simple genetic networks. We found that autoregulation could either suppress or increase the output variability, depending on specific noise sources and network parameters. We showed that transcriptional autoregulation was more successful than post-transcriptional in suppressing variability across a wide range of intrinsic and extrinsic magnitudes and sources. We derived the following design principles to guide the design of circuits that best suppress variability: (i) high protein cooperativity and low miRNA cooperativity, (ii) imperfect complementarity between miRNA and mRNA was preferred to perfect complementarity, and (iii) correlated expression of mRNA and miRNA – for example, on the same transcript – was best for suppression of protein variability. Results further showed that correlations in kinetic parameters between cells affected the ability to suppress variability, and that variability in transient states did not necessarily follow the same principles as variability in the steady state. Our model and findings provide a general framework to guide design principles in synthetic biology. PMID:23555205
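A minimal stochastic-simulation sketch in the spirit of the framework above (not the authors' model): a birth-death protein model with optional Hill-type transcriptional negative autoregulation, compared at matched means to show how feedback can suppress the coefficient of variation. All rate parameters are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

def gillespie_protein(t_end, k_on, gamma, hill_K=None, n=2):
    """Birth-death protein model; if hill_K is set, production is
    repressed by the protein itself (transcriptional autoregulation)."""
    t, p, ps = 0.0, 0, [0]
    while t < t_end:
        prod = k_on / (1 + (p / hill_K) ** n) if hill_K else k_on
        deg = gamma * p
        total = prod + deg
        t += rng.exponential(1 / total)
        p += 1 if rng.random() < prod / total else -1
        ps.append(p)
    return np.array(ps)

# Parameters chosen so both variants settle near the same mean (~30);
# event-sampled (unweighted) statistics are adequate for illustration.
settings = {"plain":   dict(k_on=3.0,  hill_K=None),
            "autoreg": dict(k_on=10.0, hill_K=20.0)}
for name, kw in settings.items():
    p = gillespie_protein(5000, gamma=0.1, **kw)
    tail = p[len(p) // 2:]                       # discard transient
    print(name, f"mean={tail.mean():.1f} CV={tail.std() / tail.mean():.2f}")
```

With these (invented) parameters the autoregulated variant shows a visibly smaller coefficient of variation at a comparable mean, the qualitative behaviour the abstract attributes to transcriptional feedback.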
Xu, Dan; King, Kevin F; Liang, Zhi-Pei
2007-10-01
A new class of spiral trajectories called variable slew-rate spirals is proposed. The governing differential equations for a variable slew-rate spiral are derived, and both numeric and analytic solutions to the equations are given. The primary application of variable slew-rate spirals is peak B1 amplitude reduction in 2D RF pulse design. The reduction of peak B1 amplitude is achieved by changing the gradient slew-rate profile, and gradient amplitude and slew-rate constraints are inherently satisfied by the design of variable slew-rate spiral gradient waveforms. A design example of 2D RF pulses is given, which shows that under the same hardware constraints the RF pulse using a properly chosen variable slew-rate spiral trajectory can be much shorter than that using a conventional constant slew-rate spiral trajectory, thus having greater immunity to resonance frequency offsets.
NASA Astrophysics Data System (ADS)
Houmat, A.
2018-02-01
The optimal lay-up design for the maximum fundamental frequency of variable stiffness laminated composite plates is investigated using a layer-wise optimization technique. The design variables are two fibre orientation angles per ply. Thin plate theory is used in conjunction with a p-element to calculate the fundamental frequencies of symmetrically and antisymmetrically laminated composite plates. Comparisons with existing optimal solutions for constant stiffness symmetrically laminated composite plates show excellent agreement. It is observed that the maximum fundamental frequency can be increased considerably using variable stiffness design as compared to constant stiffness design. In addition, optimal lay-ups for the maximum fundamental frequency of variable stiffness symmetrically and antisymmetrically laminated composite plates with different aspect ratios and various combinations of free, simply supported and clamped edge conditions are presented. These should prove a useful benchmark for optimal lay-ups of variable stiffness laminated composite plates.
Gurrera, Ronald J.; Karel, Michele J.; Azar, Armin R.; Moye, Jennifer
2013-01-01
OBJECTIVES The capacity of older adults to make health care decisions is often impaired in dementia and has been linked to performance on specific neuropsychological tasks. Within-person across-test neuropsychological performance variability has been shown to predict future dementia. This study examined the relationship of within-person across-test neuropsychological performance variability to a current construct of treatment decision (consent) capacity. DESIGN Participants completed a neuropsychological test battery and a standardized capacity assessment. Standard scores were used to compute mean neuropsychological performance and within-person across-test variability. SETTING Assessments were performed in the participant’s preferred location (e.g., outpatient clinic office, senior center, or home). PARTICIPANTS Participants were recruited from the community with fliers and advertisements, and consisted of men (N=79) and women (N=80) with (N=83) or without (N=76) significant cognitive impairment. MEASUREMENTS Participants completed the MacArthur Competence Assessment Tool - Treatment (MacCAT-T) and 11 neuropsychological tests commonly used in the cognitive assessment of older individuals. RESULTS Neuropsychological performance and within-person variability were independently associated with continuous and dichotomous measures of capacity, and within-person neuropsychological variability was significantly associated with within-person decisional ability variability. Prevalence of incapacity was greater than expected in participants with and without significant cognitive impairment when decisional abilities were considered separately. CONCLUSIONS These findings are consistent with an emerging construct of consent capacity in which discrete decisional abilities are differentially associated with cognitive processes, and indicate that the sensitivity and accuracy of consent capacity assessments can be improved by evaluating decisional abilities separately. PMID:23831178
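Within-person across-test variability as used above is simply the dispersion of one participant's standardized scores across the battery; a minimal sketch with made-up scores:

```python
import numpy as np

# Illustrative: rows = participants, columns = standard scores on the
# 11 neuropsychological tests; the values are invented.
scores = np.array([
    [52, 48, 55, 50, 47, 53, 49, 51, 46, 54, 50],
    [60, 35, 58, 42, 65, 38, 55, 40, 62, 45, 57],
])

mean_perf = scores.mean(axis=1)       # within-person mean performance
iiv = scores.std(axis=1, ddof=1)      # within-person across-test variability
print(mean_perf, iiv)  # second person: similar mean, much higher variability
```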
Non-neural BOLD variability in block and event-related paradigms.
Kannurpatti, Sridhar S; Motes, Michael A; Rypma, Bart; Biswal, Bharat B
2011-01-01
Block and event-related stimulus designs are typically used in fMRI studies depending on the importance of detection power or estimation efficiency. The extent of vascular contribution to variability in the block and event-related fMRI-BOLD response is not known. Using scaling, the extent of vascular variability in the fMRI-BOLD response during block and event-related design tasks was investigated. Blood oxygen level-dependent (BOLD) contrast data from healthy volunteers performing a block design motor task and an event-related memory task requiring performance of a motor response were analyzed from the regions of interest (ROIs) surrounding the primary and supplementary motor cortices. Average BOLD signal change was significantly larger during the block design compared to the event-related design. In each subject, BOLD signal change across voxels in the ROIs had higher variation during the block design task compared to the event-related design task. Scaling using the resting state fluctuation of amplitude (RSFA) and breath-hold (BH), which minimizes BOLD variation of vascular origin, reduced the within-subject BOLD variability in every subject during both tasks but significantly reduced BOLD variability across subjects only during the block design task. The strong non-neural source of intra- and intersubject variability of the BOLD response during the block design compared to the event-related task indicates that study designs optimizing for statistical power through enhancement of the BOLD contrast (e.g., block designs) can be affected by enhancement of non-neural sources of BOLD variability. Copyright © 2011. Published by Elsevier Inc.
26 CFR 1.801-7 - Variable annuities.
Code of Federal Regulations, 2013 CFR
2013-04-01
Income Taxes (continued); Life Insurance Companies; § 1.801-7 Variable annuities. (a) In general. (1) … variable annuity contract vary with the insurance company's investment experience with respect to such …. Accordingly, a company issuing variable annuity contracts shall qualify as a life insurance company for …
Smith, Leah M; Lévesque, Linda E; Kaufman, Jay S; Strumpf, Erin C
2017-06-01
The regression discontinuity design (RDD) is a quasi-experimental approach used to avoid confounding bias in the assessment of new policies and interventions. It is applied specifically in situations where individuals are assigned to a policy/intervention based on whether they are above or below a pre-specified cut-off on a continuously measured variable, such as birth date, income or weight. The strength of the design is that, provided individuals do not manipulate the value of this variable, assignment to the policy/intervention is considered as good as random for individuals close to the cut-off. Despite its popularity in fields like economics, the RDD remains relatively unknown in epidemiology where its application could be tremendously useful. In this paper, we provide a practical introduction to the RDD for health researchers, describe four empirically testable assumptions of the design and offer strategies that can be used to assess whether these assumptions are met in a given study. For illustrative purposes, we implement these strategies to assess whether the RDD is appropriate for a study of the impact of human papillomavirus vaccination on cervical dysplasia. We found that, whereas the assumptions of the RDD were generally satisfied in our study context, birth timing had the potential to confound our effect estimate in an unexpected way and therefore needed to be taken into account in the analysis. Our findings underscore the importance of assessing the validity of the assumptions of this design, testing them when possible and making adjustments as necessary to support valid causal inference. © The Author 2016. Published by Oxford University Press on behalf of the International Epidemiological Association
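A minimal sketch of the RDD estimation idea on simulated data: fit a local linear regression on each side of the cutoff and read the treatment effect off the jump. The bandwidth and data-generating values are illustrative, not from the vaccination study:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Simulated example: assignment determined by a running variable
# (e.g., birth date) centred at the cutoff 0.
x = rng.uniform(-1, 1, 2000)
treated = (x >= 0).astype(float)                 # assignment rule
y = 0.5 * x + 0.3 * treated + rng.normal(0, 0.2, x.size)  # true effect = 0.3

# Local linear regression within a bandwidth, with slopes allowed
# to differ on each side of the cutoff.
h = 0.25
m = np.abs(x) <= h
X = sm.add_constant(np.column_stack([treated[m], x[m], treated[m] * x[m]]))
fit = sm.OLS(y[m], X).fit()
print(fit.params[1])      # estimated jump at the cutoff, close to 0.3
```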
Eklund, J
1997-10-01
This paper reviews the literature comparing the fields of ergonomics and quality, mainly in an industrial context, including mutual influences, similarities and differences. Relationships between ergonomics and the factors work conditions, product design, ISO 9000, continuous improvements and TQM are reviewed in relation to the consequence, application, and process domains. The definitions of ergonomics and quality overlap substantially. Quality deficiencies, human errors and ergonomics problems often have the same cause, which in many cases can be traced to the design of work, workplace and environment, e.g. noise, light, postures, loads, pace and work content. In addition, the possibility of performing to a high standard at work is an important prerequisite for satisfaction and well-being. Contradictions between the two fields have been identified in the view of concepts such as standardization, reduction of variability and copying of best practice, requiring further research. The field of quality would gain by incorporating ergonomics knowledge, especially in the areas of work design and human capability, since these factors are decisive for human performance and therefore also for the performance of the systems involved. The field of ergonomics, on the other hand, would benefit from developing a stronger emphasis on methodologies and structures for improvement processes, including a clearer link with leadership and company strategies. Just as important is a further development of practicable participative ergonomics methods and tools for use at workplaces by the workers themselves, in order to integrate the top-down and the bottom-up processes and achieve better impact. Using participative processes for problem-solving and continuous improvement that focus on ergonomics and quality jointly has great potential for improving working conditions and quality results simultaneously, satisfying most of the interested parties.
Programmable rate modem utilizing digital signal processing techniques
NASA Technical Reports Server (NTRS)
Bunya, George K.; Wallace, Robert L.
1989-01-01
The engineering development study to follow was written to address the need for a Programmable Rate Digital Satellite Modem capable of supporting both burst and continuous transmission modes with either binary phase shift keying (BPSK) or quadrature phase shift keying (QPSK) modulation. The preferred implementation technique is an all digital one which utilizes as much digital signal processing (DSP) as possible. Here design tradeoffs in each portion of the modulator and demodulator subsystem are outlined, and viable circuit approaches which are easily repeatable, have low implementation losses and have low production costs are identified. The research involved for this study was divided into nine technical papers, each addressing a significant region of concern in a variable rate modem design. Trivial portions and basic support logic designs surrounding the nine major modem blocks were omitted. In brief, the nine topic areas were: (1) Transmit Data Filtering; (2) Transmit Clock Generation; (3) Carrier Synthesizer; (4) Receive AGC; (5) Receive Data Filtering; (6) RF Oscillator Phase Noise; (7) Receive Carrier Selectivity; (8) Carrier Recovery; and (9) Timing Recovery.
Design study of steel V-Belt CVT for electric vehicles
NASA Technical Reports Server (NTRS)
Swain, J. C.; Klausing, T. A.; Wilcox, J. P.
1980-01-01
A continuously variable transmission (CVT) design layout was completed. The intended application was for coupling the flywheel to the driveline of a flywheel battery hybrid electric vehicle. The requirements were that the CVT accommodate flywheel speeds from 14,000 to 28,000 rpm and driveline speeds of 850 to 5000 rpm without slipping. Below 850 rpm a slipping clutch was used between the CVT and the driveline. The CVT was required to accommodate 330 ft-lb maximum torque and 100 hp maximum transient. The weighted average power was 22 hp, the maximum allowable full range shift time was 2 seconds and the required life was 2600 hours. The resulting design utilized two steel V-belts in series to accommodate the required wide speed ratio. The size of the CVT, including the slipping clutch, was 20.6 inches long, 9.8 inches high and 13.8 inches wide. The estimated weight was 155 lb. An overall potential efficiency of 95 percent was projected for the average power condition.
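The arithmetic implied by the quoted speed ranges shows why two belts in series were needed; a short worked calculation (the per-stage square-root split is a common design heuristic, not a figure from the report):

```python
fly_min, fly_max = 14_000, 28_000   # flywheel speed range, rpm
drv_min, drv_max = 850, 5_000       # driveline speed range, rpm

ratio_max = fly_max / drv_min       # ~32.9:1 overall reduction needed
ratio_min = fly_min / drv_max       # 2.8:1
span = ratio_max / ratio_min        # ~11.8:1 ratio spread

# Two variable-ratio stages in series multiply their spans, so each
# stage only has to cover the square root of the overall spread.
per_stage = span ** 0.5             # ~3.4 per belt, a practical range
print(ratio_min, ratio_max, span, per_stage)
```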
NASA Technical Reports Server (NTRS)
Chaparro, Daniel; Fujiwara, Gustavo E. C.; Ting, Eric; Nguyen, Nhan
2016-01-01
The need to rapidly scan large design spaces during conceptual design calls for computationally inexpensive tools such as the vortex lattice method (VLM). Although some VLM tools, such as Vorview have been extended to model fully-supersonic flow, VLM solutions are typically limited to inviscid, subcritical flow regimes. Many transport aircraft operate at transonic speeds, which limits the applicability of VLM for such applications. This paper presents a novel approach to correct three-dimensional VLM through coupling of two-dimensional transonic small disturbance (TSD) solutions along the span of an aircraft wing in order to accurately predict transonic aerodynamic loading and wave drag for transport aircraft. The approach is extended to predict flow separation and capture the attenuation of aerodynamic forces due to boundary layer viscosity by coupling the TSD solver with an integral boundary layer (IBL) model. The modeling framework is applied to the NASA General Transport Model (GTM) integrated with a novel control surface known as the Variable Camber Continuous Trailing Edge Flap (VCCTEF).
Two-dimensional computer simulation of EMVJ and grating solar cells under AMO illumination
NASA Technical Reports Server (NTRS)
Gray, J. L.; Schwartz, R. J.
1984-01-01
A computer program, SCAP2D (Solar Cell Analysis Program in 2-Dimensions), is used to evaluate the Etched Multiple Vertical Junction (EMVJ) and grating solar cells. The aim is to demonstrate how SCAP2D can be used to evaluate cell designs. The cell designs studied are by no means optimal designs. The SCAP2D program solves the three coupled, nonlinear partial differential equations, Poisson's Equation and the hole and electron continuity equations, simultaneously in two-dimensions using finite differences to discretize the equations and Newton's Method to linearize them. The variables solved for are the electrostatic potential and the hole and electron concentrations. Each linear system of equations is solved directly by Gaussian Elimination. Convergence of the Newton Iteration is assumed when the largest correction to the electrostatic potential or hole or electron quasi-potential is less than some predetermined error. A typical problem involves 2000 nodes with a Jacobi matrix of order 6000 and a bandwidth of 243.
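A self-contained sketch of the linearize-and-eliminate loop described above, applied to a toy 1D nonlinear boundary-value problem rather than the semiconductor equations themselves; the convergence test on the largest correction mirrors the criterion quoted in the abstract:

```python
import numpy as np

def newton(F, J, u0, tol=1e-10, max_iter=50):
    """Solve F(u) = 0 by Newton's method with direct linear solves."""
    u = u0.copy()
    for _ in range(max_iter):
        du = np.linalg.solve(J(u), -F(u))   # Gaussian-elimination step
        u += du
        if np.max(np.abs(du)) < tol:        # largest correction small enough
            return u
    raise RuntimeError("Newton iteration did not converge")

# Toy nonlinear system: discretized u'' = sinh(u), u(0)=0, u(1)=1.
n = 51
h = 1.0 / (n - 1)
u0 = np.linspace(0.0, 1.0, n)

def F(u):
    r = np.zeros_like(u)
    r[0], r[-1] = u[0], u[-1] - 1.0
    r[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / h**2 - np.sinh(u[1:-1])
    return r

def J(u):
    A = np.zeros((n, n))
    A[0, 0] = A[-1, -1] = 1.0
    for i in range(1, n - 1):
        A[i, i - 1] = A[i, i + 1] = 1 / h**2
        A[i, i] = -2 / h**2 - np.cosh(u[i])
    return A

print(newton(F, J, u0)[n // 2])   # solution value at the midpoint
```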
Impact of an inquiry unit on grade 4 students' science learning
NASA Astrophysics Data System (ADS)
Di Mauro, María Florencia; Furman, Melina
2016-09-01
This paper concerns the identification of teaching strategies that enhance the development of 4th grade students' experimental design skills at a public primary school in Argentina. Students' performance in the design of relevant experiments was evaluated before and after an eight-week intervention compared to a control group, as well as the persistence of this learning after eight months. The study involved a quasi-experimental longitudinal study with pre-test/post-test/delayed post-test measures, complemented with semi-structured interviews with randomly selected students. Our findings showed improvement in the experimental design skills as well as its sustainability among students working with the inquiry-based sequence. After the intervention, students were able to establish valid comparisons, propose pertinent designs and identify variables that should remain constant. Contrarily, students in the control group showed no improvement and continued to solve the posed problems based on prior beliefs. In summary, this paper shows evidence that implementing inquiry-based units involving problems set in cross-domain everyday situations that combine independent student work with teacher guidance significantly improves the development of scientific skills in real classroom contexts.
NASA Astrophysics Data System (ADS)
Hanan, Lu; Qiushi, Li; Shaobin, Li
2016-12-01
This paper presents an integrated optimization design method in which uniform design, response surface methodology and genetic algorithm are used in combination. In detail, uniform design is used to select the experimental sampling points in the experimental domain, and the system performance is evaluated by means of computational fluid dynamics to construct a database. After that, response surface methodology is employed to generate a surrogate mathematical model relating the optimization objective and the design variables. Subsequently, a genetic algorithm is applied to the surrogate model to acquire the optimal solution subject to the constraints. The method has been applied to the optimization design of an axisymmetric diverging duct, dealing with three design variables including one qualitative variable and two quantitative variables. The method performs well in improving the duct aerodynamic performance and can also be applied to wider fields of mechanical design, serving as a useful tool for engineering designers by reducing design time and computational cost.
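A compact sketch of the sampling / response-surface / genetic-algorithm pipeline, using only the two quantitative variables and a stand-in objective in place of the CFD evaluation; population sizes, rates and the objective itself are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for the CFD evaluation of the duct (the true objective is
# unknown here); two quantitative design variables in [0, 1]^2.
def cfd_performance(x):
    return -((x[0] - 0.3) ** 2 + (x[1] - 0.7) ** 2) + 0.05 * x[0] * x[1]

# 1) sampling points (random stand-in for the uniform design)
X = rng.uniform(0, 1, (30, 2))
y = np.array([cfd_performance(x) for x in X])

# 2) quadratic response surface fitted by least squares
def features(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
beta, *_ = np.linalg.lstsq(features(X), y, rcond=None)
surrogate = lambda x: features(np.atleast_2d(x)) @ beta

# 3) tiny genetic algorithm on the surrogate
pop = rng.uniform(0, 1, (40, 2))
for _ in range(60):
    fit = surrogate(pop).ravel()
    parents = pop[np.argsort(fit)[-20:]]                    # selection
    kids = (parents[rng.integers(0, 20, 40)] +
            parents[rng.integers(0, 20, 40)]) / 2           # crossover
    kids += rng.normal(0, 0.02, kids.shape)                 # mutation
    pop = np.clip(kids, 0, 1)
print(pop[np.argmax(surrogate(pop).ravel())])  # close to the true optimum
```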
Abend, Nicholas S.; Dlugos, Dennis J.; Hahn, Cecil D.; Hirsch, Lawrence J.; Herman, Susan T.
2010-01-01
Background Continuous EEG monitoring (cEEG) of critically ill patients is frequently utilized to detect non-convulsive seizures (NCS) and status epilepticus (NCSE). The indications for cEEG, as well as when and how to treat NCS, remain unclear. We aimed to describe the current practice of cEEG in critically ill patients to define areas of uncertainty that could aid in designing future research. Methods We conducted an international survey of neurologists focused on cEEG utilization and NCS management. Results Three-hundred and thirty physicians completed the survey. 83% use cEEG at least once per month and 86% manage NCS at least five times per year. The use of cEEG in patients with altered mental status was common (69%), with higher use if the patient had a prior convulsion (89%) or abnormal eye movements (85%). Most respondents would continue cEEG for 24 h. If NCS or NCSE is identified, the most common anticonvulsants administered were phenytoin/fosphenytoin, lorazepam, or levetiracetam, with slightly more use of levetiracetam for NCS than NCSE. Conclusions Continuous EEG monitoring (cEEG) is commonly employed in critically ill patients to detect NCS and NCSE. However, there is substantial variability in current practice related to cEEG indications and duration and to management of NCS and NCSE. The fact that such variability exists in the management of this common clinical problem suggests that further prospective study is needed. Multiple points of uncertainty are identified that require investigation. PMID:20198513
Health status: does it predict choice in further education?
Koivusilta, L; Rimpelä, A; Rimpelä, M
1995-01-01
STUDY OBJECTIVE--To study the significance of a young person's health to his or her choice of further education at age 16. DESIGN--A cross sectional population survey SETTING--The whole of Finland. PARTICIPANTS--A representative sample of 2977 Finnish 16 year olds. The response rate was 83%. MEASUREMENTS AND MAIN RESULTS--The three outcome variables reflected successive steps on the way to educational success: school attendance after the completion of compulsory schooling, the type of school, and school achievement for those at school. Continuing their education and choosing upper secondary school were most typical of young people from upper social classes. Female gender and living with both parents increased the probability of choosing to go on to upper secondary school. Over and above these background variables, some health factors had additional explanatory power. Continuing their education, attending upper secondary schools, and good achievement were typical of those who considered their health to be good. Chronically ill adolescents were more likely to continue their education than the healthy ones. CONCLUSIONS--School imposes great demands on young people, thus revealing differences in personal health resources. Adaptation to the norms of a society in which education is highly valued is related to satisfying health status. In a welfare state that offers equal educational opportunities for everyone, however, chronically ill adolescents can add to their resources for coping through schooling. Health related selection thus works differently for various indicators of health and in various kinds of societies. Social class differences in health in the future may be more dependent on personally experienced health problems than on medically diagnosed diseases. PMID:7798039
Variable Conductance Heat Pipes for Radioisotope Stirling Systems
NASA Astrophysics Data System (ADS)
Anderson, William G.; Tarau, Calin
2008-01-01
In a Stirling radioisotope system, heat must continually be removed from the GPHS modules to maintain the GPHS modules and surrounding insulation at acceptable temperatures. Normally, the Stirling convertor provides this cooling. If the Stirling engine stops in the current system, the insulation is designed to spoil, preventing damage to the GPHS but also ending the mission. An alkali-metal Variable Conductance Heat Pipe (VCHP) was designed to allow multiple stops and restarts of the Stirling engine. A VCHP was designed for the Advanced Stirling Radioisotope Generator, with an 850 °C heater head temperature. The VCHP turns on with a ΔT of 30 °C, which is high enough not to risk standard ASRG operation but low enough to save most heater head life. This VCHP has a low mass and low thermal losses for normal operation. In addition to the design, a proof-of-concept NaK VCHP was fabricated and tested. While NaK is normally not used in heat pipes, it has an advantage in that it is liquid at the reservoir operating temperature, while Na or K alone would freeze. The VCHP had two condensers, one simulating the heater head and the other simulating the radiator. The experiments successfully demonstrated operation with the simulated heater head condenser off and on, while allowing the reservoir temperature to vary over 40 to 120 °C, the maximum range expected. In agreement with previous NaK heat pipe tests, the evaporator ΔT was roughly 70 °C, due to distillation of the NaK in the evaporator.
Factors controlling stream water nitrate and phosphorus loads during precipitation events
NASA Astrophysics Data System (ADS)
Rozemeijer, J. C.; van der Velde, Y.; van Geer, F. G.; de Rooij, G. H.; Broers, H. P.; Bierkens, M. F. P.
2009-04-01
Pollution of surface waters in densely populated areas with intensive land use is a serious threat to their ecological, industrial and recreational utilization. European and national manure policies and several regional and local pilot projects aim at reducing pollution loads to surface waters. For the evaluation of measures, water authorities and environmental research institutes are putting a lot of effort into monitoring surface water quality. For regional surface water quality monitoring, the measurement locations are usually situated in the downstream part of the catchment to represent a larger area. The monitoring frequency is usually low (e.g. monthly), due to the high costs of sampling and analysis. As a consequence, human induced trends in nutrient loads and concentrations in these monitoring data are often concealed by the large variability of surface water quality caused by meteorological variations. Because natural surface water quality variability is poorly understood, large uncertainties occur in the estimates of (trends in) nutrient loads or average concentrations. This study aims at uncertainty reduction in the estimates of mean concentrations and loads of N and P from regional monitoring data. For this purpose, we related continuous N and P records of stream water to variations in precipitation, discharge, groundwater level and tube drain discharge. A specially designed multi-scale experimental setup was installed in an agricultural lowland catchment in The Netherlands. At the catchment outlet, continuous measurements of water quality and discharge were performed from July 2007 to January 2009. At an experimental field within the catchment, continuous measurements of precipitation, groundwater levels and tube drain discharges were collected. Twenty significant rainfall events with a variety of antecedent conditions, durations and intensities were selected for analysis. Simple and multiple regression analysis was used to identify relations between the continuous N and P records and characteristics of the dynamics of discharge, precipitation, groundwater level and tube drain discharge. From this study, we conclude that generally available and easy-to-measure explanatory data (such as continuous records of discharge, precipitation and groundwater level) can reduce uncertainty in estimates of N and P loads and mean concentrations. However, for capturing the observed short load pulses of P, continuous or discharge-proportional sampling is needed.
Shuttle Debris Impact Tool Assessment Using the Modern Design of Experiments
NASA Technical Reports Server (NTRS)
DeLoach, R.; Rayos, E. M.; Campbell, C. H.; Rickman, S. L.
2006-01-01
Computational tools have been developed to estimate thermal and mechanical reentry loads experienced by the Space Shuttle Orbiter as the result of cavities in the Thermal Protection System (TPS). Such cavities can be caused by impact from ice or insulating foam debris shed from the External Tank (ET) on liftoff. The reentry loads depend on cavity geometry and certain Shuttle state variables, among other factors. Certain simplifying assumptions have been made in the tool development about the cavity geometry variables. For example, the cavities are all modeled as "shoeboxes", with rectangular cross-sections and planar walls. So an actual cavity is typically approximated with an idealized cavity described in terms of its length, width, and depth, as well as its entry angle, exit angle, and side angles (assumed to be the same for both sides). As part of a comprehensive assessment of the uncertainty in reentry loads estimated by the debris impact assessment tools, an effort has been initiated to quantify the component of the uncertainty that is due to imperfect geometry specifications for the debris impact cavities. The approach is to compute predicted loads for a set of geometry factor combinations sufficient to develop polynomial approximations to the complex, nonparametric underlying computational models. Such polynomial models are continuous and feature estimable, continuous derivatives, conditions that facilitate the propagation of independent variable errors. As an additional benefit, once the polynomial models have been developed, they require fewer computational resources to execute than the underlying finite element and computational fluid dynamics codes, and can generate reentry loads estimates in significantly less time. This provides a practical screening capability, in which a large number of debris impact cavities can be quickly classified either as harmless, or subject to additional analysis with the more comprehensive underlying computational tools. The polynomial models also provide useful insights into the sensitivity of reentry loads to various cavity geometry variables, and reveal complex interactions among those variables that indicate how the sensitivity of one variable depends on the level of one or more other variables. For example, the effect of cavity length on certain reentry loads depends on the depth of the cavity. Such interactions are clearly displayed in the polynomial response models.
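The interaction behaviour described in the last sentences can be made concrete with a two-variable polynomial containing a length-depth cross term; the coefficients below are invented for illustration, not taken from the Shuttle tools:

```python
# Illustrative polynomial response model for one reentry load as a
# function of cavity length L and depth D (hypothetical coefficients).
b0, bL, bD, bLD = 1.0, 0.20, 0.50, 0.35

def load(L, D):
    return b0 + bL * L + bD * D + bLD * L * D

# The cross term makes the sensitivity to length depend on depth:
# d(load)/dL = bL + bLD * D, so deeper cavities amplify the length effect.
for D in (0.2, 1.0):
    print(f"depth={D}: d(load)/d(length) = {bL + bLD * D:.2f}")
```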
Variable selection in discrete survival models including heterogeneity.
Groll, Andreas; Tutz, Gerhard
2017-04-01
Several variable selection procedures are available for continuous time-to-event data. However, if time is measured in a discrete way, and therefore many ties occur, models for continuous time are inadequate. We propose penalized likelihood methods that perform efficient variable selection in discrete survival modeling with explicit modeling of the heterogeneity in the population. The method is based on a combination of ridge and lasso type penalties that are tailored to the case of discrete survival. The performance is studied in simulation studies and in an application to the birth of the first child.
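A generic stand-in for the approach (not the authors' tailored estimator, and omitting their heterogeneity/frailty term): expand discrete survival data into person-period form and fit an elastic-net-penalized logistic regression, which combines ridge and lasso type penalties:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

# Toy discrete survival data: n subjects, p covariates, event time in 1..T.
n, T, p = 300, 6, 8
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, -1.0, 0, 0, 0, 0, 0, 0])   # only 2 relevant
hazard = 1 / (1 + np.exp(-(X @ beta_true - 2)))       # per-period hazard
times = np.array([min((t for t in range(1, T + 1)
                       if rng.random() < h), default=T) for h in hazard])

# Person-period expansion: one row per subject and period at risk; for
# simplicity, subjects with no event by T are coded as events in period T.
rows, events, period = [], [], []
for i in range(n):
    for t in range(1, times[i] + 1):
        rows.append(X[i])
        period.append(t)
        events.append(1 if t == times[i] else 0)
Xpp = np.column_stack([np.eye(T)[np.array(period) - 1], np.array(rows)])

# Elastic-net logistic regression; period dummies serve as the baseline
# hazard, so no global intercept is fitted.
fit = LogisticRegression(penalty="elasticnet", solver="saga",
                         l1_ratio=0.5, C=0.5, max_iter=5000,
                         fit_intercept=False).fit(Xpp, events)
print(fit.coef_.ravel()[T:])   # covariate effects; irrelevant ones shrink to ~0
```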
NASA Astrophysics Data System (ADS)
Adesso, Gerardo; Serafini, Alessio; Illuminati, Fabrizio
2006-03-01
We present a complete analysis of the multipartite entanglement of three-mode Gaussian states of continuous-variable systems. We derive standard forms which characterize the covariance matrix of pure and mixed three-mode Gaussian states up to local unitary operations, showing that the local entropies of pure Gaussian states are bound to fulfill a relationship which is stricter than the general Araki-Lieb inequality. Quantum correlations can be quantified by a proper convex roof extension of the squared logarithmic negativity, the continuous-variable tangle, or contangle. We review and elucidate in detail the proof that in multimode Gaussian states the contangle satisfies a monogamy inequality constraint [G. Adesso and F. Illuminati, New J. Phys. 8, 15 (2006)]. The residual contangle, emerging from the monogamy inequality, is an entanglement monotone under Gaussian local operations and classical communication and defines a measure of genuine tripartite entanglement. We determine the analytical expression of the residual contangle for arbitrary pure three-mode Gaussian states and study in detail the distribution of quantum correlations in such states. This analysis yields that pure, symmetric states allow for a promiscuous entanglement sharing, having both maximum tripartite entanglement and maximum couplewise entanglement between any pair of modes. We thus name these states GHZ/W states of continuous-variable systems because they are simultaneous continuous-variable counterparts of both the GHZ and the W states of three qubits. We finally consider the effect of decoherence on three-mode Gaussian states, studying the decay of the residual contangle. The GHZ/W states are shown to be maximally robust against losses and thermal noise.
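For reference, the monogamy constraint and the residual contangle mentioned above can be transcribed as follows (with $E_\tau$ the contangle and, for the genuine tripartite measure, a minimum taken over the choice of probe mode $i$):

```latex
E_\tau^{\,i|(jk)} \;\ge\; E_\tau^{\,i|j} + E_\tau^{\,i|k},
\qquad
E_\tau^{\mathrm{res}} \;=\; \min_{i}\Big[\, E_\tau^{\,i|(jk)} - E_\tau^{\,i|j} - E_\tau^{\,i|k} \,\Big].
```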
Structural Optimization of a Force Balance Using a Computational Experiment Design
NASA Technical Reports Server (NTRS)
Parker, P. A.; DeLoach, R.
2002-01-01
This paper proposes a new approach to force balance structural optimization featuring a computational experiment design. Currently, this multi-dimensional design process requires the designer to perform a simplification by executing parameter studies on a small subset of design variables. This one-factor-at-a-time approach varies a single variable while holding all others at a constant level. Consequently, subtle interactions among the design variables, which can be exploited to achieve the design objectives, are undetected. The proposed method combines Modern Design of Experiments techniques to direct the exploration of the multi-dimensional design space, and a finite element analysis code to generate the experimental data. To efficiently search for an optimum combination of design variables and minimize the computational resources, a sequential design strategy was employed. Experimental results from the optimization of a non-traditional force balance measurement section are presented. An approach to overcome the unique problems associated with the simultaneous optimization of multiple response criteria is described. A quantitative single-point design procedure that reflects the designer's subjective impression of the relative importance of various design objectives, and a graphical multi-response optimization procedure that provides further insights into available tradeoffs among competing design objectives, are illustrated. The proposed method enhances the intuition and experience of the designer by providing new perspectives on the relationships between the design variables and the competing design objectives, thereby establishing a systematic foundation for advancements in structural design.
Gaudez, C; Gilles, M A; Savin, J
2016-03-01
For several years, increasing numbers of studies have highlighted the existence of movement variability. Before that, it was neglected in movement analysis, and it is still almost completely ignored in workstation design. This article reviews motor control theories and factors influencing movement execution, and indicates how intrinsic movement variability is part of task completion. These background clarifications should help ergonomists and workstation designers to gain a better understanding of these concepts, which can then be used to improve design tools. We also question which techniques--kinematics, kinetics or muscular activity--and descriptors are most appropriate for describing intrinsic movement variability and for integration into design tools. In this way, simulations generated by designers for workstation design should be closer to the real movements performed by workers. This review emphasises the complexity of identifying, describing and processing intrinsic movement variability in occupational activities. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.
Flora, David B.; LaBrish, Cathy; Chalmers, R. Philip
2011-01-01
We provide a basic review of the data screening and assumption testing issues relevant to exploratory and confirmatory factor analysis along with practical advice for conducting analyses that are sensitive to these concerns. Historically, factor analysis was developed for explaining the relationships among many continuous test scores, which led to the expression of the common factor model as a multivariate linear regression model with observed, continuous variables serving as dependent variables, and unobserved factors as the independent, explanatory variables. Thus, we begin our paper with a review of the assumptions for the common factor model and data screening issues as they pertain to the factor analysis of continuous observed variables. In particular, we describe how principles from regression diagnostics also apply to factor analysis. Next, because modern applications of factor analysis frequently involve the analysis of the individual items from a single test or questionnaire, an important focus of this paper is the factor analysis of items. Although the traditional linear factor model is well-suited to the analysis of continuously distributed variables, commonly used item types, including Likert-type items, almost always produce dichotomous or ordered categorical variables. We describe how relationships among such items are often not well described by product-moment correlations, which has clear ramifications for the traditional linear factor analysis. An alternative, non-linear factor analysis using polychoric correlations has become more readily available to applied researchers and thus more popular. Consequently, we also review the assumptions and data-screening issues involved in this method. Throughout the paper, we demonstrate these procedures using an historic data set of nine cognitive ability variables. PMID:22403561
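The attenuation problem described above is easy to demonstrate: coarsening two correlated latent normal variables into Likert-type categories visibly shrinks their product-moment correlation (the polychoric correlation is designed to recover the latent value from the cross-tabulation). A minimal sketch with invented thresholds:

```python
import numpy as np

rng = np.random.default_rng(4)

# Two continuous latent responses with true correlation 0.7.
n, rho = 20_000, 0.7
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)

# Coarsen each into a 4-category Likert-type item via thresholds.
cuts = [-1.0, 0.0, 1.0]
items = np.digitize(z, cuts)

r_latent = np.corrcoef(z.T)[0, 1]      # close to 0.70
r_items = np.corrcoef(items.T)[0, 1]   # noticeably attenuated
print(r_latent, r_items)
```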
NASA Astrophysics Data System (ADS)
Bieniek, A.; Graba, M.; Prażnowski, K.
2016-09-01
The paper presents results of research on the effect of the control signal frequency on the course of selected operating parameters of a continuously variable transmission (CVT). The study used a Fuji Hyper M6 gear with an electro-hydraulic control system and proprietary control and data acquisition software developed in the LabVIEW environment.
DOT National Transportation Integrated Search
1999-06-01
This report is a paper study of the fuel economy benefits on the Environmental Protection Agency (EPA) City and Highway Cycles of using a continuously variable transmission (CVT) in a 3625 lb (1644 kg) car and compact light truck. The baseline vehicl...
Extremal entanglement and mixedness in continuous variable systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adesso, Gerardo; Serafini, Alessio; Illuminati, Fabrizio
2004-08-01
We investigate the relationship between mixedness and entanglement for Gaussian states of continuous variable systems. We introduce generalized entropies based on Schatten p norms to quantify the mixedness of a state and derive their explicit expressions in terms of symplectic spectra. We compare the hierarchies of mixedness provided by such measures with the one provided by the purity (defined as tr ρ² for the state ρ) for generic n-mode states. We then review the analysis proving the existence of both maximally and minimally entangled states at given global and marginal purities, with the entanglement quantified by the logarithmic negativity. Based on these results, we extend such an analysis to generalized entropies, introducing and fully characterizing maximally and minimally entangled states for given global and local generalized entropies. We compare the different roles played by the purity and by the generalized p entropies in quantifying the entanglement and the mixedness of continuous variable systems. We introduce the concept of average logarithmic negativity, showing that it allows a reliable quantitative estimate of continuous variable entanglement by direct measurements of global and marginal generalized p entropies.
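In the notation of the abstract, the purity and the Schatten-p generalized entropies, together with their standard evaluation on the symplectic spectrum $\{\nu_k\}$ of an n-mode Gaussian state, read (a transcription of textbook formulas, not of the paper's derivation):

```latex
\mu(\rho) = \operatorname{tr}\rho^{2} = \prod_{k=1}^{n}\frac{1}{\nu_k},
\qquad
S_p(\rho) = \frac{1-\operatorname{tr}\rho^{p}}{p-1}, \quad p>1,
\qquad
\operatorname{tr}\rho^{p} = \prod_{k=1}^{n}\frac{2^{p}}{(\nu_k+1)^{p}-(\nu_k-1)^{p}}.
```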
NASA Technical Reports Server (NTRS)
Kamman, J. H.; Hall, C. L.
1975-01-01
Two inlet performance tests and one inlet/airframe drag test were conducted in 1969 at the NASA-Ames Research Center. The basic inlet system was two-dimensional, three ramp (overhead), external compression, with variable capture area. The data from these tests were analyzed to show the effects of selected design variables on the performance of this type of inlet system. The inlet design variables investigated include inlet bleed, bypass, operating mass flow ratio, inlet geometry, and variable capture area.
Correlation and agreement: overview and clarification of competing concepts and measures.
Liu, Jinyuan; Tang, Wan; Chen, Guanqin; Lu, Yin; Feng, Changyong; Tu, Xin M
2016-04-25
Agreement and correlation are widely-used concepts that assess the association between variables. Although similar and related, they represent completely different notions of association. Assessing agreement between variables assumes that the variables measure the same construct, while correlation of variables can be assessed for variables that measure completely different constructs. This conceptual difference requires the use of different statistical methods, and when assessing agreement or correlation, the statistical method may vary depending on the distribution of the data and the interest of the investigator. For example, the Pearson correlation, a popular measure of correlation between continuous variables, is only informative when applied to variables that have linear relationships; it may be non-informative or even misleading when applied to variables that are not linearly related. Likewise, the intraclass correlation, a popular measure of agreement between continuous variables, may not provide sufficient information for investigators if the nature of poor agreement is of interest. This report reviews the concepts of agreement and correlation and discusses differences in the application of several commonly used measures.
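The distinction is easy to see numerically: two raters who measure the same construct with a systematic offset correlate almost perfectly yet agree poorly. A minimal sketch using the standard one-way ANOVA estimator of the intraclass correlation (data invented):

```python
import numpy as np

rng = np.random.default_rng(5)

# Two raters measuring the same construct; rater 2 has a systematic bias.
truth = rng.normal(50, 10, 200)
r1 = truth + rng.normal(0, 2, 200)
r2 = truth + 8 + rng.normal(0, 2, 200)   # same construct, shifted by +8

pearson = np.corrcoef(r1, r2)[0, 1]      # high: strong linear association

# One-way ICC(1,1) for k = 2 raters: (MSB - MSW) / (MSB + MSW).
pairs = np.column_stack([r1, r2])
ms_between = 2 * pairs.mean(axis=1).var(ddof=1)
ms_within = ((pairs - pairs.mean(axis=1, keepdims=True)) ** 2).sum() / len(pairs)
icc = (ms_between - ms_within) / (ms_between + ms_within)

print(f"Pearson r = {pearson:.2f}, ICC = {icc:.2f}")  # ICC clearly lower
```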
Exploring persistence in science in CEGEP: Toward a motivational model
NASA Astrophysics Data System (ADS)
Simon, Rebecca A.
There is currently a shortage of science teachers in North America and continually decreasing rates of enrollment in science programs. Science continues to be the academic domain that sees the highest attrition rates, particularly for women. The purpose of the present study was to examine male and female students' experiences in mathematics and science courses during a crucial time in their academic development in an attempt to explain the high attrition rates in science between the last year of high school and the first year of CEGEP (junior college). In line with self-determination theory (Deci & Ryan, 1985), as well as achievement-goal theory (Pintrich & Schunk, 1996) and research on academic emotions, the study examined the relation between a set of motivational variables (i.e., perceptions of autonomy-support, self-efficacy, achievement goals, and intrinsic motivation), affect, achievement, and persistence. A secondary objective was to test a motivational model of student persistence in science using structural equation modeling (SEM). The sample consisted of 603 male and 706 female students from four English-language CEGEPs in the greater Montreal area. Just prior to beginning CEGEP, participants completed a questionnaire that asked about the learning environment in high school mathematics and science classes as well as student characteristics including sources of motivation, personal achievement goals, and feelings of competence. All students expressed an initial interest in pursuing a career in science by enrolling in optional advanced mathematics and science courses during high school. Multivariate analysis of variance was used to examine differences among male and female students across the variables measured. Structural equation modeling was used to test the validity of a questionnaire designed specifically to gather information about CEGEP students' experiences with mathematics and science, and to evaluate the fit of a model designed to reflect the interactions between the different variables. Students' experiences during high school have an impact on their decisions to pursue or abandon their path toward an eventual science career. Classroom experiences and student characteristics interact to influence their performance and affect, which in turn influence their decisions. Implications for promoting persistence in science are discussed.
Planning Coverage Campaigns for Mission Design and Analysis: CLASP for DESDynI
NASA Technical Reports Server (NTRS)
Knight, Russell L.; McLaren, David A.; Hu, Steven
2013-01-01
Mission design and analysis presents challenges in that almost all variables are in constant flux, yet the goal is to achieve an acceptable level of performance against a concept of operations, which might also be in flux. To increase responsiveness, automated planning tools are used that allow for the continual modification of spacecraft, ground system, staffing, and concept of operations, while returning metrics that are important to mission evaluation, such as area covered, peak memory usage, and peak data throughput. This approach was applied to the DESDynI mission design using the CLASP planning system, but since this adaptation, many techniques have changed under the hood for CLASP, and the DESDynI mission concept has undergone drastic changes. The software produces mission evaluation products, such as memory high-water marks and coverage percentages, given a mission design in the form of coverage targets, concept of operations, spacecraft parameters, and orbital parameters. It aims to overcome the lack of fidelity and timeliness of mission requirements coverage analysis during mission design. Previous techniques primarily used Excel in an ad hoc fashion to approximate key factors in mission performance, often falling victim to the overgeneralizations necessary in such an adaptation. The new program allows designers to faithfully represent their mission designs quickly, and get more accurate results just as quickly.
NASA Technical Reports Server (NTRS)
Moses, P. L.; Bouchard, K. A.; Vause, R. F.; Pinckney, S. Z.; Ferlemann, S. M.; Leonard, C. P.; Taylor, L. W., III; Robinson, J. S.; Martin, J. G.; Petley, D. H.
1999-01-01
Airbreathing launch vehicles continue to be a subject of great interest in the space access community. In particular, horizontal takeoff and horizontal landing vehicles are attractive with their airplane-like benefits and flexibility for future space launch requirements. The most promising of these concepts involve airframe-integrated propulsion systems, in which the external undersurface of the vehicle forms part of the propulsion flowpath. Combining airframe and engine functions in this manner involves all of the design disciplines interacting at once. Design and optimization of these configurations is a particularly difficult activity, requiring a multi-discipline process to analytically resolve the numerous interactions among the design variables. This paper describes the design and optimization of one configuration in this vehicle class, a lifting body with turbine-based low-speed propulsion. The integration of propulsion and airframe is addressed from both aero-propulsive and mechanical perspectives. This paper primarily focuses on the design details of the preferred configuration and the analyses performed to assess its performance. The integration of both low-speed and high-speed propulsion is covered. Structural and mechanical designs are described along with the materials and technologies used. Propellant and systems packaging are shown and the mission-sized vehicle weights are disclosed.
Neuman systems model-based research: an integrative review project.
Fawcett, J; Giangrande, S K
2001-07-01
The project integrated Neuman systems model-based research literature. Two hundred published studies were located. This article is limited to the 59 full journal articles and 3 book chapters identified. A total of 37% focused on prevention interventions; 21% on perception of stressors; and 10% on stressor reactions. Only 50% of the reports explicitly linked the model with the study variables, and 61% did not include conclusions regarding model utility or credibility. No programs of research were identified. Academic courses and continuing education workshops are needed to help researchers design programs of Neuman systems model-based research and better explicate linkages between the model and the research.
Aeroelastic Wing Shaping Control Subject to Actuation Constraints.
NASA Technical Reports Server (NTRS)
Swei, Sean Shan-Min; Nguyen, Nhan
2014-01-01
This paper considers the control of a coupled aeroelastic aircraft model configured with the Variable Camber Continuous Trailing Edge Flap (VCCTEF) system. The relative deflection between two adjacent flaps is constrained, and this actuation constraint is accounted for when designing an effective control law for suppressing the wing vibration. A simple tuned-mass-damper mechanism with two attached masses is used as an example to demonstrate the effectiveness of vibration suppression with confined motion of the tuned masses. In this paper, a dynamic-inversion-based pseudo-control hedging (PCH) and bounded control approach is investigated and, for illustration, applied to the NASA Generic Transport Model (GTM) configured with the VCCTEF system.
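To make the pseudo-control hedging idea concrete, here is a minimal hedged sketch on a generic first-order plant (not the paper's GTM/VCCTEF controller; the gains, limits, and command are hypothetical). The reference model is slowed by exactly the pseudo-control the saturated actuator cannot deliver, so the tracking error does not wind up against the constraint:

```python
# Minimal pseudo-control hedging (PCH) sketch for xdot = u with |u| <= u_max.
import numpy as np

u_max, dt, kp = 1.0, 0.01, 2.0
x, x_ref = 0.0, 0.0
for _ in range(1000):
    cmd = 5.0                                # hypothetical step command
    v = kp * (x_ref - x)                     # desired pseudo-control
    u = np.clip(v, -u_max, u_max)            # actuation constraint
    v_hedge = v - u                          # pseudo-control the actuator can't give
    x_ref += dt * (kp * (cmd - x_ref) - v_hedge)  # hedged reference model
    x += dt * u                              # plant: simple integrator
print(x, x_ref)
```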
NASA Technical Reports Server (NTRS)
Coogan, J. J.
1986-01-01
Modifications were designed for the B-737-100 Research Aircraft autobrake system hardware of the Advanced Transport Operating Systems (ATOPS) Program at Langley Research Center. These modifications will allow the on-board flight control computer to control the aircraft deceleration after landing to a continuously variable level for the purpose of executing automatic high-speed turnoffs from the runway. A breadboard version of the proposed modifications was built and tested in simulated stopping conditions. Test results for various aircraft weights, turnoff speeds, winds, and runway conditions show that the turnoff speeds are generally achieved with errors of less than 1 ft/sec.
Aqua's First 10 Years: An Overview
NASA Technical Reports Server (NTRS)
Parkinson, Claire L.
2012-01-01
NASA's Aqua spacecraft was launched at 2:55 a.m. on May 4, 2002, from Vandenberg Air Force Base in California, into a near-polar, sun-synchronous orbit at an altitude of 705 km. Aqua carries six Earth-observing instruments to collect data on water in all its forms (liquid, vapor, and solid) and on a wide variety of additional Earth system variables (Parkinson 2003). The design lifetime for Aqua's prime mission was 6 years, and Aqua is now well into its extended mission, approaching 10 years of successful operations. The Aqua data have been used for hundreds of scientific studies and continue to be used for scientific discovery and numerous practical applications.
NASA Technical Reports Server (NTRS)
Giles, G. L.; Rogers, J. L., Jr.
1982-01-01
The methodology used to implement structural sensitivity calculations into a major, general-purpose finite-element analysis system (SPAR) is described. This implementation includes a generalized method for specifying element cross-sectional dimensions as design variables that can be used in analytically calculating derivatives of output quantities from static stress, vibration, and buckling analyses for both membrane and bending elements. Limited sample results for static displacements and stresses are presented to indicate the advantages of analytically calculating response derivatives compared to finite difference methods. Continuing developments to implement these procedures into an enhanced version of SPAR are also discussed.
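As a hedged illustration of why analytic derivatives are attractive (a one-element toy problem, not SPAR itself): for an axial bar with tip displacement u = FL/(EA), the sensitivity to the cross-sectional area design variable is available in closed form and can be checked against a finite difference, which is step-size sensitive:

```python
# Analytic vs finite-difference sensitivity of u = F*L/(E*A) w.r.t. area A.
F, L, E, A = 1.0e4, 2.0, 7.0e10, 1.0e-4   # hypothetical load, length, modulus, area

def u(a):
    return F * L / (E * a)

analytic = -F * L / (E * A**2)            # exact derivative du/dA
h = 1.0e-10
finite_diff = (u(A + h) - u(A)) / h       # forward difference approximation
print(analytic, finite_diff)
```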
Some applications of uncertainty relations in quantum information
NASA Astrophysics Data System (ADS)
Majumdar, A. S.; Pramanik, T.
2016-08-01
We discuss some applications of various versions of uncertainty relations for both discrete and continuous variables in the context of quantum information theory. The Heisenberg uncertainty relation enables demonstration of the Einstein, Podolsky and Rosen (EPR) paradox. Entropic uncertainty relations (EURs) are used to reveal quantum steering for non-Gaussian continuous variable states. EURs for discrete variables are studied in the context of quantum memory where fine-graining yields the optimum lower bound of uncertainty. The fine-grained uncertainty relation is used to obtain connections between uncertainty and the nonlocality of retrieval games for bipartite and tripartite systems. The Robertson-Schrödinger (RS) uncertainty relation is applied for distinguishing pure and mixed states of discrete variables.
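For reference, the Robertson-Schrödinger relation invoked here has the standard textbook form (stated generally, not in the paper's notation): for observables A and B,

```latex
\sigma_A^2 \, \sigma_B^2 \;\ge\;
\left| \tfrac{1}{2}\langle \{A,B\} \rangle - \langle A\rangle\langle B\rangle \right|^2
+ \left| \tfrac{1}{2i}\langle [A,B] \rangle \right|^2 .
```

Dropping the first (anticommutator) term recovers the familiar Heisenberg-Robertson bound, sigma_A sigma_B >= |<[A,B]>|/2.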
Current Directions in Mediation Analysis
MacKinnon, David P.; Fairchild, Amanda J.
2010-01-01
Mediating variables continue to play an important role in psychological theory and research. A mediating variable transmits the effect of an antecedent variable on to a dependent variable, thereby providing more detailed understanding of relations among variables. Methods to assess mediation have been an active area of research for the last two decades. This paper describes the current state of methods to investigate mediating variables. PMID:20157637
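The single-mediator model referred to above is conventionally written as two regressions (standard notation, not specific to this paper):

```latex
M = i_1 + aX + e_1, \qquad
Y = i_2 + c'X + bM + e_2 ,
```

where the indirect (mediated) effect is the product ab and the total effect decomposes as c = c' + ab.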
Saving Material with Systematic Process Designs
NASA Astrophysics Data System (ADS)
Kerausch, M.
2011-08-01
Global competition is forcing the stamping industry to further increase quality, to shorten time-to-market and to reduce total cost. Continuous balancing between these classical time-cost-quality targets throughout the product development cycle is required to ensure future economic success. In today's industrial practice, die layout standards are typically assumed to implicitly ensure the balancing of company-specific time-cost-quality targets. Although die layout standards are a very successful approach, there are two methodical disadvantages. First, the capabilities for tool design have to be continuously adapted to technological innovations, e.g. to take advantage of the full forming capability of new materials. Secondly, the great variety of die design aspects has to be reduced to a generic rule or guideline, e.g. binder shape, draw-in conditions or the use of drawbeads. Therefore, it is important not to overlook cost or quality opportunities when applying die design standards. This paper describes a systematic workflow with a focus on minimizing material consumption. The starting point of the investigation is a full process plan for a typical structural part, with all requirements defined according to a predefined set of die design standards with industrial relevance. In a first step, binder and addendum geometry is systematically checked for material-saving potential. In a second step, blank shape and draw-in are adjusted to meet thinning, wrinkling and springback targets for a minimum-blank solution. Finally, the identified die layout is validated with respect to production robustness versus splits, wrinkles and springback. For all three steps the applied methodology is based on finite element simulation combined with stochastic variation of the input variables. With the proposed workflow a well-balanced (time-cost-quality) production process assuring minimal material consumption can be achieved.
Design elements of a hybrid geothermal system via financial optimization
NASA Astrophysics Data System (ADS)
Henault, Benjamin
The choice of design parameters for a hybrid geothermal system is usually based on current practices or questionable assumptions. In fact, the main purpose of a hybrid geothermal system is to maximize the energy savings associated with heating and cooling requirements while minimizing the costs of operation and installation. This thesis presents a strategy to maximize the net present value of a hybrid geothermal system. This objective is expressed by a series of equations that lead to a global objective function. The algorithm converges iteratively to an optimal solution using an optimization method: the conjugate gradient combined with a combinatorial method. The objective function makes use of a simulation algorithm that predicts the fluid temperature of the hybrid geothermal system on an hourly basis. The optimization method thus iteratively selects six design variables, of continuous and integer type, affecting project costs and energy savings. These variables include the limit temperature at the entry of the heat pumps (geothermal side), the number of heat pumps, the number of geothermal wells, and the distances in X and Y between the geothermal wells. Generally, these variables have a direct impact on the cost of the installation, the entering water temperature at the heat pumps, the cost of equipment, the thermal interference between boreholes, the total capacity of the geothermal system, the system performance, etc. The arrangement of geothermal wells is variable and often irregular, depending on the number of boreholes selected by the algorithm. Removal or addition of one or more boreholes is guided by a predefined order dictated by the designer. This irregular-arrangement feature represents an innovation in the field and is necessary for the operation of the algorithm: it ensures continuity in the number of boreholes, allowing the use of the conjugate gradient method. The proposed method provides as outputs the net present value of the optimal solution, the positions of the vertical boreholes, the number of installed heat pumps, the limits of entering water temperature at the heat pumps, and the energy consumption of the hybrid geothermal system. To demonstrate the added value of this design method, two case studies are analyzed, one for a commercial building and one for a residential building. The two studies lead to the following conclusions: the net present value of hybrid geothermal systems can be significantly improved by the choice of the right specifications; the economic value of a geothermal project is strongly influenced by the number of heat pumps and by the number of geothermal wells or the temperature limit in heating mode; the choice of design parameters should always be driven by an objective function and not by the designer; and peak demand charges favor hybrid geothermal systems with a higher capacity. Finally, to validate its operation, this new design method is compared to the standard sizing method in common use. Designing the hybrid geothermal system according to the standard sizing method, to meet 70% of the heating peak, gives a 20-year net present value of -61,500 for the residential project, versus 43,700 for the commercial hybrid geothermal system. Using the new design method presented in this thesis, the net present values of the projects are 162,000 and 179,000, respectively. The use of this algorithm is beneficial because it significantly increases the net present value of projects.
The research presented in this thesis makes it possible to optimize the financial performance of hybrid geothermal systems. The proposed method will allow industry stakeholders to increase the profitability of their projects associated with low-temperature geothermal energy.
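As a hedged toy sketch of the continuous part of such an optimization (a made-up stand-in objective, not the thesis' hourly simulation; the integer variables would be handled by the outer combinatorial loop described above), a conjugate-gradient search over a negative-NPV surrogate might look like this:

```python
# Conjugate-gradient step on a toy negative-NPV surrogate (hypothetical
# coefficients; the real objective comes from an hourly fluid simulation).
import numpy as np
from scipy.optimize import minimize

def neg_npv(x):
    t_limit, n_wells = x                 # toy continuous design variables
    capex = 15000.0 * n_wells            # hypothetical installation cost
    savings = 9000.0 * np.log1p(n_wells) - 50.0 * (t_limit - 2.0) ** 2
    return -(-capex + 20.0 * savings)    # minimize negative 20-year NPV

res = minimize(neg_npv, x0=[4.0, 10.0], method="CG")
print(res.x, -res.fun)                   # optimal variables and NPV
```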
Gapped two-body Hamiltonian for continuous-variable quantum computation.
Aolita, Leandro; Roncaglia, Augusto J; Ferraro, Alessandro; Acín, Antonio
2011-03-04
We introduce a family of Hamiltonian systems for measurement-based quantum computation with continuous variables. The Hamiltonians (i) are quadratic, and therefore two body, (ii) are of short range, (iii) are frustration-free, and (iv) possess a constant energy gap proportional to the squared inverse of the squeezing. Their ground states are the celebrated Gaussian graph states, which are universal resources for quantum computation in the limit of infinite squeezing. These Hamiltonians constitute the basic ingredient for the adiabatic preparation of graph states and thus open new venues for the physical realization of continuous-variable quantum computing beyond the standard optical approaches. We characterize the correlations in these systems at thermal equilibrium. In particular, we prove that the correlations across any multipartition are contained exactly in its boundary, automatically yielding a correlation area law.
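In standard continuous-variable graph-state notation (a textbook form consistent with, but not necessarily identical to, the paper's normalization), such frustration-free quadratic parent Hamiltonians are built from the graph nullifiers:

```latex
H \;=\; \sum_{i} \left( \hat{p}_i - \sum_{j \in N(i)} \hat{q}_j \right)^{2} ,
```

whose ground state is annihilated by every nullifier and approaches the ideal Gaussian graph state in the limit of infinite squeezing.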
One-step generation of continuous-variable quadripartite cluster states in a circuit QED system
NASA Astrophysics Data System (ADS)
Yang, Zhi-peng; Li, Zhen; Ma, Sheng-li; Li, Fu-li
2017-07-01
We propose a dissipative scheme for one-step generation of continuous-variable quadripartite cluster states in a circuit QED setup consisting of four superconducting coplanar waveguide resonators and a gap-tunable superconducting flux qubit. With external driving fields to adjust the desired qubit-resonator and resonator-resonator interactions, we show that continuous-variable quadripartite cluster states of the four resonators can be generated with the assistance of energy relaxation of the qubit. By comparison with the previous proposals, the distinct advantage of our scheme is that only one step of quantum operation is needed to realize the quantum state engineering. This makes our scheme simpler and more feasible in experiment. Our result may have useful application for implementing quantum computation in solid-state circuit QED systems.
Sample size calculations for the design of cluster randomized trials: A summary of methodology.
Gao, Fei; Earnest, Arul; Matchar, David B; Campbell, Michael J; Machin, David
2015-05-01
Cluster randomized trial designs are growing in popularity in, for example, cardiovascular medicine and other clinical research areas, and parallel statistical developments concerned with the design and analysis of these trials have been stimulated. Nevertheless, reviews suggest that design issues associated with cluster randomized trials are often poorly appreciated, and there remain inadequacies in, for example, describing how the trial size is determined and how the associated results are presented. In this paper, our aim is to provide pragmatic guidance for researchers on methods of calculating sample sizes. We focus attention on designs with the primary purpose of comparing two interventions with respect to continuous, binary, ordered categorical, incidence rate, and time-to-event outcome variables. Issues of aggregate and non-aggregate cluster trials, adjustment for variation in cluster size, and the effect size are detailed. The problem of establishing the anticipated magnitude of between- and within-cluster variation, to enable planning values of the intra-cluster correlation coefficient and the coefficient of variation, is also described. Illustrative examples of calculations of trial sizes for each endpoint type are included. Copyright © 2015 Elsevier Inc. All rights reserved.
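A worked detail behind such calculations (standard formulas from the cluster-trial methodology literature, summarized here rather than quoted from the paper): an individually randomized sample size n is inflated by the design effect, which with variable cluster sizes also involves the coefficient of variation of cluster size,

```latex
n_{\text{cluster}} = n \left[ 1 + (\bar{m} - 1)\,\rho \right], \qquad
\text{DEFF}_{\text{unequal}} = 1 + \left[ (\mathrm{CV}^2 + 1)\,\bar{m} - 1 \right] \rho ,
```

where \bar{m} is the mean cluster size, \rho the intra-cluster correlation coefficient, and CV the coefficient of variation of the cluster sizes.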
Performance optimization for rotors in hover and axial flight
NASA Technical Reports Server (NTRS)
Quackenbush, T. R.; Wachspress, D. A.; Kaufman, A. E.; Bliss, D. B.
1989-01-01
Performance optimization for rotors in hover and axial flight is a topic of continuing importance to rotorcraft designers. The aim of this Phase 1 effort has been to demonstrate that a linear optimization algorithm could be coupled to an existing influence coefficient hover performance code. This code, dubbed EHPIC (Evaluation of Hover Performance using Influence Coefficients), uses a quasi-linear wake relaxation to solve for the rotor performance. The coupling was accomplished by expanding the matrix of linearized influence coefficients in EHPIC to accommodate design variables and by deriving new coefficients for the linearized equations governing perturbations in power and thrust. These coefficients formed the input to a linear optimization analysis, which used the flow tangency conditions on the blade and in the wake to impose equality constraints on the expanded system of equations; user-specified inequality constraints were also employed to bound the changes in the design. It was found that this locally linearized analysis could be invoked to predict a design change that would produce a reduction in the power required by the rotor at constant thrust. Thus, an efficient search for improved versions of the baseline design can be carried out while retaining the accuracy inherent in a free-wake/lifting-surface performance analysis.
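The locally linearized design step described above can be pictured as a small linear program. The sketch below is a hedged stand-in (the derivative values are hypothetical, not EHPIC's influence coefficients): minimize the predicted power change subject to a zero net thrust change and user-specified bounds on the design perturbations:

```python
# A locally linearized design step as a linear program (toy coefficients).
import numpy as np
from scipy.optimize import linprog

dP = np.array([0.8, -0.3, 0.5])       # hypothetical d(power)/d(x_i)
dT = np.array([[0.2, 0.1, -0.4]])     # hypothetical d(thrust)/d(x_i)
res = linprog(c=dP, A_eq=dT, b_eq=[0.0],
              bounds=[(-0.05, 0.05)] * 3, method="highs")
print(res.x, res.fun)                 # design step and predicted power change
```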
Tan, Chuen Seng; Støer, Nathalie C; Chen, Ying; Andersson, Marielle; Ning, Yilin; Wee, Hwee-Lin; Khoo, Eric Yin Hao; Tai, E-Shyong; Kao, Shih Ling; Reilly, Marie
2017-01-01
The control of confounding is an area of extensive epidemiological research, especially in the field of causal inference for observational studies. Matched cohort and case-control study designs are commonly implemented to control for confounding effects without specifying the functional form of the relationship between the outcome and confounders. This paper extends the commonly used regression models in matched designs for binary and survival outcomes (i.e. conditional logistic and stratified Cox proportional hazards) to studies of continuous outcomes through a novel interpretation and application of logit-based regression models from the econometrics and marketing research literature. We compare the performance of the maximum likelihood estimators using simulated data and propose a heuristic argument for obtaining the residuals for model diagnostics. We illustrate our proposed approach with two real data applications. Our simulation studies demonstrate that our stratification approach is robust to model misspecification and that the distribution of the estimated residuals provides a useful diagnostic when the strata are of moderate size. In our applications to real data, we demonstrate that parity and menopausal status are associated with percent mammographic density, and that the mean level and variability of inpatient blood glucose readings vary between medical and surgical wards within a national tertiary hospital. Our work highlights how the same class of regression models, available in most statistical software, can be used to adjust for confounding in the study of binary, time-to-event and continuous outcomes.
Random field theory to interpret the spatial variability of lacustrine soils
NASA Astrophysics Data System (ADS)
Russo, Savino; Vessia, Giovanna
2015-04-01
Lacustrine soils are Quaternary soils, dating from the Pleistocene to the Holocene, generated in low-energy depositional environments and characterized by mixtures of clays, sands and silts with alternations of finer and coarser grain-size layers. They are often encountered at shallow depth, filling tectonic or erosional basins several tens of meters deep, typically located in internal Apennine areas. The lacustrine deposits are often locally interbedded with detritic soils resulting from the failure of surrounding reliefs. Their heterogeneous lithology is associated with high spatial variability of physical and mechanical properties along both horizontal and vertical directions. The deterministic approach is still commonly adopted for the mechanical characterization of these heterogeneous soils, where undisturbed sampling is practically not feasible (if the incoherent fraction is prevalent) or not spatially representative (if the cohesive fraction prevails). The deterministic approach consists of performing in situ tests, like Standard Penetration Tests (SPT) or Cone Penetration Tests (CPT), and deriving design parameters through "expert judgment" interpretation of the measured profiles. The readings of tip and lateral resistance (Rp and RL, respectively) are almost continuous but imply highly variable soil classifications according to Schmertmann (1978). Thus, neglecting the spatial variability cannot be the best strategy for estimating spatially representative values of the physical and mechanical parameters of lacustrine soils to be used in engineering applications. Hereafter, a method to draw the spatial variability structure of the aforementioned measured profiles is presented. It is based on the theory of Random Fields (Vanmarcke 1984) applied to vertical readings of Rp measures from mechanical CPTs. The proposed method relies on regression analysis, by which the spatial mean trend and the fluctuations about this trend are derived. Moreover, the scale of fluctuation is calculated to measure the maximum length beyond which profiles of measures are independent. The spatial mean trend can be used to identify "quasi-homogeneous" soil layers, where the standard deviation and the scale of fluctuation can be calculated. In this study, five Rp profiles performed in the lacustrine deposits of the upper Pescara River Valley have been analyzed. There, silty clay deposits with thickness ranging from a few meters to about 60 m, locally rich in sands and peats, are investigated. Vertical trends of the Rp profiles have been derived to be converted into mean trends of the design parameters. Furthermore, the variability structure derived from the Rp readings can be propagated to the design parameters to calculate the "characteristic values" requested by the European building codes. References: Schmertmann J.H. 1978. Guidelines for Cone Penetration Test, Performance and Design. Report No. FHWA-TS-78-209, U.S. Department of Transportation, Washington, D.C., pp. 145. Vanmarcke E.H. 1984. Random Fields: Analysis and Synthesis. Cambridge (USA): MIT Press.
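The regression-plus-fluctuation workflow described above can be sketched in a few lines. This is a hedged illustration on synthetic Rp data (not the Pescara Valley profiles), and the zero-crossing integration used here is only one of several common estimators of the scale of fluctuation:

```python
# Hedged sketch on synthetic data: spatial mean trend, residual fluctuations,
# and a scale-of-fluctuation estimate from the residual autocorrelation.
import numpy as np

rng = np.random.default_rng(0)
z = np.arange(0.0, 20.0, 0.02)                       # depth (m), 2 cm spacing
rp = 2.0 + 0.15 * z + rng.normal(0.0, 0.4, z.size)   # synthetic tip resistance

coef = np.polyfit(z, rp, 1)                          # linear spatial mean trend
resid = rp - np.polyval(coef, z)                     # fluctuations about the trend

acf = np.correlate(resid, resid, "full")[resid.size - 1:]
acf /= acf[0]                                        # normalized autocorrelation
dz = z[1] - z[0]
lag0 = np.argmax(acf < 0)                            # first zero crossing of ACF
theta = 2.0 * dz * acf[:lag0].sum()                  # scale of fluctuation (m)
print(coef, theta)
```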
Clustering and variable selection in the presence of mixed variable types and missing data.
Storlie, C B; Myers, S M; Katusic, S K; Weaver, A L; Voigt, R G; Croarkin, P E; Stoeckel, R E; Port, J D
2018-05-17
We consider the problem of model-based clustering in the presence of many correlated, mixed continuous, and discrete variables, some of which may have missing values. Discrete variables are treated with a latent continuous variable approach, and the Dirichlet process is used to construct a mixture model with an unknown number of components. Variable selection is also performed to identify the variables that are most influential for determining cluster membership. The work is motivated by the need to cluster patients thought to potentially have autism spectrum disorder on the basis of many cognitive and/or behavioral test scores. There are a modest number of patients (486) in the data set along with many (55) test score variables (many of which are discrete valued and/or missing). The goal of the work is to (1) cluster these patients into similar groups to help identify those with similar clinical presentation and (2) identify a sparse subset of tests that inform the clusters in order to eliminate unnecessary testing. The proposed approach compares very favorably with other methods via simulation of problems of this type. The results of the autism spectrum disorder analysis suggested 3 clusters to be most likely, while only 4 test scores had high (>0.5) posterior probability of being informative. This will result in much more efficient and informative testing. The need to cluster observations on the basis of many correlated, continuous/discrete variables with missing values is a common problem in the health sciences as well as in many other disciplines. Copyright © 2018 John Wiley & Sons, Ltd.
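A hedged, off-the-shelf stand-in for the Dirichlet-process mixture component (scikit-learn's truncated variational version, run on toy continuous scores; the paper's latent-variable treatment of discrete and missing values is not reproduced here):

```python
# Truncated Dirichlet-process Gaussian mixture on toy test-score data.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (100, 5)),   # two synthetic patient groups
               rng.normal(3, 1, (120, 5))])

dpgmm = BayesianGaussianMixture(
    n_components=10,                          # truncation level, not the answer
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
).fit(X)
print(np.unique(dpgmm.predict(X)))            # effective clusters actually used
```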
Patil, Hemlata; Feng, Xin; Ye, Xingyou; Majumdar, Soumyajit; Repka, Michael A
2015-01-01
This contribution describes a continuous process for the production of solid lipid nanoparticles (SLN) as drug-carrier systems via hot-melt extrusion (HME). To date, HME technology has not been used for the manufacture of SLN. Generally, SLN are prepared in a batch process, which is time consuming and may result in variability of end-product quality attributes. In this study, using Quality by Design (QbD) principles, we achieved continuous production of SLN by combining two processes: HME technology for melt-emulsification and high-pressure homogenization (HPH) for size reduction. Fenofibrate (FBT), a poorly water-soluble model drug, was incorporated into SLN using the HME-HPH method. The developed novel platform demonstrated better process control and size reduction compared to the conventional hot-homogenization (batch) process. Varying the process parameters enabled the production of SLN below 200 nm. The dissolution profile of the FBT SLN prepared by the novel HME-HPH method was faster than that of crude FBT and a marketed micronized FBT formulation. At the end of a 5-h in vitro dissolution study, the SLN formulation had released 92-93% of the drug, whereas drug release was approximately 65% and 45% for the marketed micronized formulation and the crude drug, respectively. Pharmacokinetic study results also demonstrated a statistically significant increase in Cmax, Tmax, and AUC0-24 h for drug absorption from the SLN formulations as compared to the crude drug and the marketed micronized formulation. In summary, the present study demonstrated the potential use of hot-melt extrusion technology for continuous and large-scale production of SLN.
40 CFR 130.9 - Designation and de-designation.
Code of Federal Regulations, 2014 CFR
2014-07-01
... Section 130.9 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS WATER... continue water quality planning activities within the designated boundaries. (c) Impact of de-designation... responsibility for continued water quality planning and oversight of implementation within the area. (d...
Pawłowicz, Urszula; Wasilewska, Anna; Olański, Witold; Stefanowicz, Marta
2013-09-01
Poisoning among children and youths in the northeastern part of Poland accounted for 25% of the total number of patients admitted to the Hospital Emergency Department of the Paediatric University Hospital of Białystok. We hypothesise that the epidemiology of admitted paediatric poisoning patients is related to an increase in the intake of 'designer drugs' (mainly amphetamine- and ecstasy-like psychostimulants, hallucinogens and synthetic cannabinoids ('spice')), which became popular in our country 5 years ago. A retrospective chart review was conducted of the medical records of 489 patients admitted due to poisoning over a 5-year period (2006-2010). The data included: age, sex, place of residence, nature of the substance, causes of poisoning, former use of psychoactive stimulants, accompanying self-mutilation and injuries, and length of hospitalisation. Categorical variables were expressed as percentages, and continuous variables as mean and SD. The data were collected in a Microsoft Excel database. Statistical analysis was performed using the Statistical Package for the Social Sciences (SPSS). Out of 2176 hospitalised children, 489 were admitted because of poisoning. Of these, 244 (49.9%) were hospitalised due to intoxication by alcohol. Only eight children used designer drugs. The mean age of the patients was 12.86±5.04 years, and 52.4% were male. Poisoning was intentional in 75.5% and accidental in 24.5% of cases. The appearance of 'designer drugs' had no significant impact on the number and epidemiology of poisonings in our group.
Variable Conductance Heat Pipes for Radioisotope Stirling Systems
NASA Technical Reports Server (NTRS)
Anderson, William G.; Tarau, Calin
2008-01-01
In a Stirling radioisotope system, heat must continually be removed from the GPHS modules to maintain the GPHS modules and the surrounding insulation at acceptable temperatures. Normally, the Stirling convertor provides this cooling. If the Stirling engine stops in the current system, the insulation is designed to spoil, preventing damage to the GPHS but also ending the mission. An alkali-metal Variable Conductance Heat Pipe (VCHP) was designed to allow multiple stops and restarts of the Stirling engine. The VCHP turns on with a delta T of 30 C, which is high enough not to risk interfering with standard ASRG operation but low enough to preserve most of the heater head life. This VCHP has low mass and low thermal losses during normal operation. In addition to the design, a proof-of-concept NaK VCHP was fabricated and tested. While NaK is not normally used in heat pipes, it has the advantage of being liquid at the reservoir operating temperature, while Na or K alone would freeze. The VCHP had two condensers, one simulating the heater head and the other simulating the radiator. The experiments successfully demonstrated operation with the simulated heater head condenser off and on, while allowing the reservoir temperature to vary over 40 to 120 C, the maximum range expected. In agreement with previous NaK heat pipe tests, the evaporator delta T was roughly 70 C, due to distillation of the NaK in the evaporator.
Kushida, Clete A.; Nichols, Deborah A.; Holmes, Tyson H.; Quan, Stuart F.; Walsh, James K.; Gottlieb, Daniel J.; Simon, Richard D.; Guilleminault, Christian; White, David P.; Goodwin, James L.; Schweitzer, Paula K.; Leary, Eileen B.; Hyde, Pamela R.; Hirshkowitz, Max; Green, Sylvan; McEvoy, Linda K.; Chan, Cynthia; Gevins, Alan; Kay, Gary G.; Bloch, Daniel A.; Crabtree, Tami; Dement, William C.
2012-01-01
Study Objective: To determine the neurocognitive effects of continuous positive airway pressure (CPAP) therapy on patients with obstructive sleep apnea (OSA). Design, Setting, and Participants: The Apnea Positive Pressure Long-term Efficacy Study (APPLES) was a 6-month, randomized, double-blind, 2-arm, sham-controlled, multicenter trial conducted at 5 U.S. university, hospital, or private practices. Of 1,516 participants enrolled, 1,105 were randomized, and 1,098 participants diagnosed with OSA contributed to the analysis of the primary outcome measures. Intervention: Active or sham CPAP. Measurements: Three neurocognitive variables, each representing a neurocognitive domain: Pathfinder Number Test-Total Time (attention and psychomotor function [A/P]), Buschke Selective Reminding Test-Sum Recall (learning and memory [L/M]), and Sustained Working Memory Test-Overall Mid-Day Score (executive and frontal-lobe function [E/F]). Results: The primary neurocognitive analyses showed a difference between groups for only the E/F variable at the 2 month CPAP visit, but no difference at the 6 month CPAP visit or for the A/P or L/M variables at either the 2 or 6 month visits. When stratified by measures of OSA severity (AHI or oxygen saturation parameters), the primary E/F variable and one secondary E/F neurocognitive variable revealed transient differences between study arms for those with the most severe OSA. Participants in the active CPAP group had a significantly greater ability to remain awake whether measured subjectively by the Epworth Sleepiness Scale or objectively by the maintenance of wakefulness test. Conclusions: CPAP treatment improved both subjectively and objectively measured sleepiness, especially in individuals with severe OSA (AHI > 30). CPAP use resulted in mild, transient improvement in the most sensitive measures of executive and frontal-lobe function for those with severe disease, which suggests the existence of a complex OSA-neurocognitive relationship. Clinical Trial Information: Registered at clinicaltrials.gov. Identifier: NCT00051363. Citation: Kushida CA; Nichols DA; Holmes TH; Quan SF; Walsh JK; Gottlieb DJ; Simon RD; Guilleminault C; White DP; Goodwin JL; Schweitzer PK; Leary EB; Hyde PR; Hirshkowitz M; Green S; McEvoy LK; Chan C; Gevins A; Kay GG; Bloch DA; Crabtree T; Dement WC. Effects of continuous positive airway pressure on neurocognitive function in obstructive sleep apnea patients: the Apnea Positive Pressure Long-term Efficacy Study (APPLES). SLEEP 2012;35(12):1593-1602. PMID:23204602
ERIC Educational Resources Information Center
Rhemtulla, Mijke; Brosseau-Liard, Patricia E.; Savalei, Victoria
2012-01-01
A simulation study compared the performance of robust normal theory maximum likelihood (ML) and robust categorical least squares (cat-LS) methodology for estimating confirmatory factor analysis models with ordinal variables. Data were generated from 2 models with 2-7 categories, 4 sample sizes, 2 latent distributions, and 5 patterns of category…
Potumarthi, Ravichandra; Subhakar, Ch; Pavani, A; Jetty, Annapurna
2008-04-01
The calcium-alginate immobilization method for the production of alkaline protease by Bacillus licheniformis NCIM-2042 was optimized statistically. Four variables (sodium-alginate concentration, calcium chloride concentration, inoculum size, and agitation speed) were optimized by a 2^4 full factorial central composite design, with subsequent analysis and model validation by a second-order regression equation. Eleven carbon, 11 organic nitrogen, and seven inorganic nitrogen sources were screened by a two-level Plackett-Burman design for maximum alkaline protease production under the optimized immobilization conditions. The optimal levels of the four variables were found to be: Na-alginate, 2.78%; CaCl2, 2.15%; inoculum size, 8.10%; and agitation, 139 rpm. Glucose, soybean meal, and ammonium sulfate resulted in maximum protease production of 644 U/ml, 720 U/ml, and 806 U/ml when screened as carbon, organic nitrogen, and inorganic nitrogen sources, respectively, under the optimized immobilization conditions. Repeated fed-batch operation, using the optimized immobilization conditions, allowed continuous operation for 12 cycles without disintegration of the beads. Cross-sectional scanning electron microscope images showed the growth pattern of B. licheniformis in the Ca-alginate immobilized beads.
Virupakshappa, Praveen Kumar Siddalingappa; Mishra, Gaurav; Mehkri, Mohammed Ameenuddin
2016-01-01
The present paper describes a process optimization study for crude oil degradation, a continuation of our earlier work on the hydrocarbon degradation of the isolate Stenotrophomonas rhizophila (PM-1), GenBank accession number KX082814. Response Surface Methodology with a Box-Behnken Design was used to optimize the process, wherein temperature, pH, salinity, and inoculum size (at three levels each) were the independent variables, and the Total Petroleum Hydrocarbon, Biological Oxygen Demand, and Chemical Oxygen Demand of crude oil and PAHs were the dependent (response) variables. The statistical analysis, via ANOVA, showed a coefficient of determination R^2 of 0.7678 with a statistically significant P value of 0.0163, fitting a second-order quadratic regression model for crude oil removal. The predicted optimum parameters, namely temperature, pH, salinity, and inoculum size, were found to be 32.5°C, 9, 12.5, and 12.5 mL, respectively. At this optimum condition, the observed and predicted PAH and crude oil removal in validation experiments were found to be 71.82% and 79.53%, respectively. The % TPH results correlate with the GC/MS studies, BOD, COD, and TPC. The validation of the numerical optimization was done through GC/MS studies and the % removal of crude oil. PMID:28116165
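The second-order model at the heart of such an RSM analysis is easy to reproduce on toy data. This is a hedged sketch with made-up coded factors (not the study's measurements), using scikit-learn as a stand-in for dedicated RSM software:

```python
# Fitting the second-order quadratic response-surface model on toy data.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (27, 4))    # coded temperature, pH, salinity, inoculum
y = 70 - 5 * (X ** 2).sum(axis=1) + X[:, 0] * X[:, 1] + rng.normal(0, 1, 27)

quad = PolynomialFeatures(degree=2, include_bias=False)
model = LinearRegression().fit(quad.fit_transform(X), y)
print(model.score(quad.transform(X), y))   # R^2 of the quadratic fit
```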
The High Energy Telescope on EXIST: Hunting High Red-shift GRBs and Other Exotic Transients
NASA Astrophysics Data System (ADS)
Hong, JaeSub; Grindlay, J.; Allen, B.; Skinner, G. K.; Finger, M. H.; Jernigan, J. G.; EXIST Team
2009-01-01
The current baseline design of the High Energy Telescope (HET) on EXIST will localize high red-shift Gamma-Ray Bursts (GRBs) and other exotic transients rapidly (<10 sec) and accurately (<17") in order to allow rapid (<1-2 min) follow-up with onboard optical/IR imaging and spectroscopy. HET employs coded-aperture imaging with a 5.5 m^2 CZT detector and a large hybrid tungsten mask (see also Skinner et al. in this meeting). The wide energy band coverage (5-600 keV) is optimal for capturing these transients and highly obscured AGNs. The continuous scan with the wide field of view (45 deg radius at 25% coding fraction) increases the chance of capturing rare, elusive events such as soft Gamma-ray repeaters and tidal disruption events of stars by dormant supermassive black holes. Sweeping nearly the entire sky every two orbits (3 hours) will also establish a finely sampled long-term history of the X-ray variability of many X-ray sources, opening up a new time domain for variability studies. In light of the new EXIST design concept, we review the observing strategy to maximize the science return and report the latest development of the CZT detectors for HET.
Satellite Telemetry and Long-Range Bat Movements
Smith, Craig S.; Epstein, Jonathan H.; Breed, Andrew C.; Plowright, Raina K.; Olival, Kevin J.; de Jong, Carol; Daszak, Peter; Field, Hume E.
2011-01-01
Background Understanding the long-distance movement of bats has direct relevance to studies of population dynamics, ecology, disease emergence, and conservation. Methodology/Principal Findings We developed and trialed several collar and platform terminal transmitter (PTT) combinations on both free-living and captive fruit bats (Family Pteropodidae: Genus Pteropus). We examined transmitter weight, size, profile and comfort as key determinants of maximized transmitter activity. We then tested the importance of bat-related variables (species size/weight, roosting habitat and behavior) and environmental variables (day-length, rainfall pattern) in determining optimal collar/PTT configuration. We compared battery- and solar-powered PTT performance in various field situations, and found the latter more successful in maintaining voltage on species that roosted higher in the tree canopy, and at lower density, than those that roost more densely and lower in trees. Finally, we trialed transmitter accuracy, and found that actual distance errors and Argos location class error estimates were in broad agreement. Conclusions/Significance We conclude that no single collar or transmitter design is optimal for all bat species, and that species size/weight, species ecology and study objectives are key design considerations. Our study provides a strategy for collar and platform choice that will be applicable to a larger number of bat species as transmitter size and weight continue to decrease in the future. PMID:21358823
NASA Astrophysics Data System (ADS)
Shah, Rajesh C.; Shah, Rajiv B.
2017-12-01
Based on the Shliomis ferrofluid flow model (SFFM) and the continuity equation for both the film and the porous region, a modified Reynolds equation for the lubrication of circular squeeze film bearings is derived, considering the effects of an oblique, radially variable magnetic field (VMF), slip velocity at the film-porous interface, and rotation of both discs. The squeeze film bearings consist of a circular porous upper disc of various shapes (exponential, secant, mirror image of secant, and parallel) and a circular impermeable flat lower disc. The validity of Darcy's law is assumed in the porous region. The SFFM is important because it includes the effects of the rotations of the carrier liquid as well as of the magnetic particles. The VMF is used because of its advantage of generating the maximum field at the required active contact area of the bearing design. The effect of porosity is included because of its advantageous property of self-lubrication. Using the modified Reynolds equation, a general pressure equation is derived and an expression for the dimensionless load-carrying capacity is obtained. Using this expression, results for the different bearing designs (corresponding to the different shapes of the upper disc) are computed and compared under variation of the different parameters.
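For orientation, the classical axisymmetric squeeze-film Reynolds equation that such modified equations generalize reads (standard lubrication-theory form; the paper's version adds the ferrofluid, slip, rotation, and porous-media terms):

```latex
\frac{1}{r}\frac{\partial}{\partial r}\!\left( r\, h^{3}\, \frac{\partial p}{\partial r} \right)
= 12\,\mu\, \frac{\partial h}{\partial t} ,
```

with film thickness h(r,t), fluid viscosity \mu, and film pressure p(r,t).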
Liu, Jiaying; Zhao, Siman; Chen, Xi; Falk, Emily; Albarracín, Dolores
2017-10-01
Although the influence of peers on adolescent smoking should vary depending on social dynamics, there is a lack of understanding of which elements are most crucial and how these dynamics unfold for smoking initiation and continuation across areas of the world. The present meta-analysis included 75 studies yielding 237 effect sizes that examined associations between peers' smoking and adolescents' smoking initiation and continuation with longitudinal designs across 16 countries. Mixed-effects models with robust variance estimates were used to calculate weighted-mean odds ratios. This work showed that having peers who smoke is associated with about twice the odds of adolescents beginning (mean OR = 1.96, 95% confidence interval [CI] [1.76, 2.19]) and continuing to smoke (mean OR = 1.78, 95% CI [1.55, 2.05]). Moderator analyses revealed that (a) smoking initiation was more positively correlated with peers' smoking when the interpersonal closeness between adolescents and their peers was higher (vs. lower); and (b) both smoking initiation and continuation were more positively correlated with peers' smoking when samples were from collectivistic (vs. individualistic) cultures. Thus, both individual- and population-level dynamics play a critical role in the strength of peer influence. Accounting for cultural variables may be especially important given the effects on both initiation and continuation. Implications for theory, research, and antismoking intervention strategies are discussed. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
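For readers who want the mechanics of such pooling, here is a hedged simplification (DerSimonian-Laird inverse-variance weighting on made-up numbers; the paper itself used mixed-effects models with robust variance estimates, which this does not reproduce):

```python
# Random-effects pooling of log odds ratios (DerSimonian-Laird, toy data).
import numpy as np

log_or = np.log([1.8, 2.3, 1.5, 2.0])      # hypothetical study odds ratios
var = np.array([0.04, 0.09, 0.06, 0.05])   # their sampling variances

w = 1.0 / var                              # fixed-effect weights
fixed = np.sum(w * log_or) / w.sum()
Q = np.sum(w * (log_or - fixed) ** 2)      # heterogeneity statistic
tau2 = max(0.0, (Q - (len(w) - 1)) / (w.sum() - np.sum(w**2) / w.sum()))

w_re = 1.0 / (var + tau2)                  # random-effects weights
pooled = np.sum(w_re * log_or) / w_re.sum()
se = np.sqrt(1.0 / w_re.sum())
print(np.exp(pooled),                      # pooled OR with 95% CI
      np.exp(pooled - 1.96 * se), np.exp(pooled + 1.96 * se))
```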