Sample records for state variable technique

  1. The application of the Routh approximation method to turbofan engine models

    NASA Technical Reports Server (NTRS)

    Merrill, W. C.

    1977-01-01

    The Routh approximation technique is applied in the frequency domain to a 16th-order state variable turbofan engine model. The results obtained motivate the extension of the frequency domain formulation of the Routh method to the time domain to handle the state variable formulation directly. The time domain formulation is derived, and a characterization, which specifies all possible Routh similarity transformations, is given. The characterization is computed by the solution of two eigenvalue-eigenvector problems. The application of the time domain Routh technique to the state variable engine model is described, and some results are given.

  2. A special protection scheme utilizing trajectory sensitivity analysis in power transmission

    NASA Astrophysics Data System (ADS)

    Suriyamongkol, Dan

    In recent years, new measurement techniques have provided opportunities to improve the observability, control, and protection of the North American power system. This dissertation discusses the formulation and design of a special protection scheme based on a novel utilization of trajectory sensitivity techniques, with inputs consisting of system state variables and parameters. Trajectory sensitivity analysis (TSA) has been used in previous publications as a method for power system security and stability assessment, and the mathematical formulation of TSA lends itself well to some of the time domain power system simulation techniques. Existing special protection schemes often have limited sets of goals and control actions. The proposed scheme aims to maintain stability while using as many control actions as possible. The approach uses TSA in a novel way: the sensitivities of system state variables with respect to state parameter variations determine the state parameter controls required to achieve the desired state variable movements. The initial application assumes full system observability, and practical considerations are discussed.

  3. Dimmable electronic ballasts by variable power density modulation technique

    NASA Astrophysics Data System (ADS)

    Borekci, Selim; Kesler, Selami

    2014-11-01

    Dimming is commonly accomplished by switching-frequency and pulse-density modulation techniques or by a variable inductor. In this study, a variable power density modulation (VPDM) control technique is proposed for dimming applications. A fluorescent lamp is operated in several states to meet the desired lamp power within a modulation period. The proposed technique has the same advantages that magnetic dimming topologies have, while also permitting a unique and flexible control scheme. A prototype dimmable electronic ballast was built and experiments were conducted with it. As a result, a 36 W T8 fluorescent lamp can be driven at a desired lamp power chosen from several alternatives without modulating the switching frequency.

  4. VO2 and VCO2 variabilities through indirect calorimetry instrumentation.

    PubMed

    Cadena-Méndez, Miguel; Escalante-Ramírez, Boris; Azpiroz-Leehan, Joaquín; Infante-Vázquez, Oscar

    2013-01-01

    The aim of this paper is to understand how to measure VO2 and VCO2 variabilities in indirect calorimetry (IC), since we believe they can explain the high variation in resting energy expenditure (REE) estimation. We propose that variabilities should be measured separately from the VO2 and VCO2 averages to understand technological differences among metabolic monitors when they estimate the REE. To test this hypothesis, the mixing chamber (MC) and breath-by-breath (BbB) techniques were used to measure the VO2 and VCO2 averages and their variabilities. Variances and power spectrum energies in the 0-0.5 Hz band were measured to establish differences between the techniques in steady and non-steady state. A hybrid calorimeter implementing both IC techniques was used to study 15 volunteers who underwent the clino-orthostatic maneuver in order to produce the two physiological stages. The results showed that inter-individual VO2 and VCO2 variabilities measured as variances were negligible using the MC, while variabilities measured as spectral energies using the BbB increased by 71% and 56% (p < 0.05), respectively. Additionally, the energy analysis showed an unexpected cyclic rhythm at 0.025 Hz only during the orthostatic stage, which is new physiological information not reported previously. The VO2 and VCO2 inter-individual averages increased by 63% and 39% with the MC (p < 0.05) and by 32% and 40% with the BbB (p < 0.1), respectively, without noticeable statistical differences between techniques. The conclusions are: (a) metabolic monitors should simultaneously include the MC and BbB techniques to correctly interpret the effect of steady- or non-steady-state variabilities on the REE estimation; (b) the MC is the appropriate technique for computing averages, since it behaves as a low-pass filter that minimizes variances; (c) the BbB is the ideal technique for measuring the variabilities, since it can work as a high-pass filter to generate discrete time series suitable for spectral analysis; and (d) the new physiological information in the VO2 and VCO2 variabilities can help explain why metabolic monitors with dissimilar IC techniques give different results in the REE estimation.
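
    As a concrete illustration of the two variability measures named above, the following Python sketch computes the variance of a breath-by-breath VO2 series and its spectral energy in the 0-0.5 Hz band; the function name, the uniform resampling rate, and the use of Welch's method are assumptions, not the paper's implementation.

    ```python
    # Minimal sketch: variance and 0-0.5 Hz spectral energy of a VO2 series.
    # Assumes the series has been uniformly resampled (here at fs Hz).
    import numpy as np
    from scipy.signal import welch
    from scipy.integrate import trapezoid

    def variability_measures(vo2, fs=1.0, band=(0.0, 0.5)):
        """Return (variance, band-limited spectral energy) of the series."""
        vo2 = np.asarray(vo2, dtype=float)
        vo2 = vo2 - vo2.mean()                 # keep only the fluctuations
        variance = vo2.var()
        freqs, psd = welch(vo2, fs=fs, nperseg=min(256, len(vo2)))
        mask = (freqs >= band[0]) & (freqs <= band[1])
        energy = trapezoid(psd[mask], freqs[mask])  # integrate PSD over the band
        return variance, energy
    ```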

  5. Applied Routh approximation

    NASA Technical Reports Server (NTRS)

    Merrill, W. C.

    1978-01-01

    The Routh approximation technique for reducing the complexity of system models was applied in the frequency domain to a 16th-order state variable model of the F100 engine and to a 43rd-order transfer function model of a launch vehicle boost pump pressure regulator. The results motivate extending the frequency domain formulation of the Routh method to the time domain in order to handle the state variable formulation directly. The time domain formulation was derived, and a characterization that specifies all possible Routh similarity transformations was given. The characterization was computed by solving two eigenvalue-eigenvector problems. The application of the time domain Routh technique to the state variable engine model is described, and some results are given. Additional computational problems are discussed, including an optimization procedure that can improve the approximation accuracy by taking advantage of the transformation characterization.
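
    The Routh similarity-transformation machinery summarized above is too involved for a short sketch; as a stand-in, the following Python fragment reduces a stable state-variable model by plain modal truncation (keeping the k slowest eigenmodes). This is explicitly not the Routh method, only an illustration of order reduction on a model of the same state-variable form; all names are illustrative.

    ```python
    # Modal truncation of dx/dt = Ax + Bu, y = Cx: keep the k slowest modes.
    # NOT the Routh approximation; a generic stand-in for order reduction.
    import numpy as np

    def modal_truncation(A, B, C, k):
        eigvals, V = np.linalg.eig(A)
        order = np.argsort(np.abs(eigvals.real))   # slowest-decaying modes first
        Vk = V[:, order[:k]]                       # keep k modal directions
        W = np.linalg.pinv(Vk)                     # left projector onto those modes
        # Results are real provided complex modes are kept in conjugate pairs.
        return (W @ A @ Vk).real, (W @ B).real, (C @ Vk).real
    ```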

  6. Theory and design of variable conductance heat pipes

    NASA Technical Reports Server (NTRS)

    Marcus, B. D.

    1972-01-01

    A comprehensive review and analysis of all aspects of heat pipe technology pertinent to the design of self-controlled, variable conductance devices for spacecraft thermal control is presented. Subjects considered include hydrostatics, hydrodynamics, heat transfer into and out of the pipe, fluid selection, materials compatibility and variable conductance control techniques. The report includes a selected bibliography of pertinent literature, analytical formulations of various models and theories describing variable conductance heat pipe behavior, and the results of numerous experiments on the steady state and transient performance of gas controlled variable conductance heat pipes. Also included is a discussion of VCHP design techniques.

  7. On the primary variable switching technique for simulating unsaturated-saturated flows

    NASA Astrophysics Data System (ADS)

    Diersch, H.-J. G.; Perrochet, P.

    Primary variable switching appears as a promising numerical technique for variably saturated flows. While the standard pressure-based form of the Richards equation can suffer from poor mass balance accuracy, the mixed form with its improved conservative properties can possess convergence difficulties for dry initial conditions. On the other hand, variable switching can overcome most of the stated numerical problems. The paper deals with variable switching for finite elements in two and three dimensions. The technique is incorporated in both an adaptive error-controlled predictor-corrector one-step Newton (PCOSN) iteration strategy and a target-based full Newton (TBFN) iteration scheme. Both schemes provide different behaviors with respect to accuracy and solution effort. Additionally, a simplified upstream weighting technique is used. Compared with conventional approaches the primary variable switching technique represents a fast and robust strategy for unsaturated problems with dry initial conditions. The impact of the primary variable switching technique is studied over a wide range of mostly 2D and partly difficult-to-solve problems (infiltration, drainage, perched water table, capillary barrier), where comparable results are available. It is shown that the TBFN iteration is an effective but error-prone procedure. TBFN sacrifices temporal accuracy in favor of accelerated convergence if aggressive time step sizes are chosen.

  8. Use of Machine Learning Techniques for Identification of Robust Teleconnections to East African Rainfall Variability

    NASA Technical Reports Server (NTRS)

    Roberts, J. Brent; Robertson, F. R.; Funk, C.

    2014-01-01

    Hidden Markov models can be used to investigate the structure of subseasonal variability. East African short-rain variability has connections to large-scale tropical variability: intraseasonal variations associated with the MJO are connected with the appearance of "wet" and "dry" states, and ENSO/IOZM SST and circulation anomalies are apparent during years of anomalous residence time in the subseasonal "wet" state. Similar results were found in previous studies, but here they can be interpreted with respect to variations of the subseasonal wet and dry modes, revealing underlying connections between the MJO, IOZM, and ENSO with respect to East African rainfall.

  9. Discrete optimal control approach to a four-dimensional guidance problem near terminal areas

    NASA Technical Reports Server (NTRS)

    Nagarajan, N.

    1974-01-01

    Description of a computer-oriented technique to generate the necessary control inputs to guide an aircraft in a given time from a given initial state to a prescribed final state subject to the constraints on airspeed, acceleration, and pitch and bank angles of the aircraft. A discrete-time mathematical model requiring five state variables and three control variables is obtained, assuming steady wind and zero sideslip. The guidance problem is posed as a discrete nonlinear optimal control problem with a cost functional of Bolza form. A solution technique for the control problem is investigated, and numerical examples are presented. It is believed that this approach should prove to be useful in automated air traffic control schemes near large terminal areas.

  10. Surface Irregularity Factor as a Parameter to Evaluate the Fatigue Damage State of CFRP

    PubMed Central

    Zuluaga-Ramírez, Pablo; Frövel, Malte; Belenguer, Tomás; Salazar, Félix

    2015-01-01

    This work presents an optical non-contact technique to evaluate the fatigue damage state of CFRP structures by measuring the irregularity factor of the surface. This factor includes information about surface topology and can be measured easily in the field by techniques such as optical profilometry. The surface irregularity factor has been correlated with stiffness degradation, which is a well-accepted parameter for the evaluation of the fatigue damage state of composite materials. Constant amplitude fatigue loads (CAL) and realistic variable amplitude loads (VAL), representative of real in-flight conditions, have been applied to "dog bone" shaped tensile specimens. It has been shown that the measurement of the surface irregularity parameters can be applied to evaluate the damage state of a structure, and that it is independent of the type of fatigue load that has caused the damage. As a result, this measurement technique is applicable to a wide range of inspections of composite material structures, from pressurized tanks with constant amplitude loads, to variable amplitude loaded aeronautical structures such as wings and empennages, up to automotive and other industrial applications. PMID:28793655

  11. Reducing Projection Calculation in Quantum Teleportation by Virtue of the IWOP Technique and Schmidt Decomposition of |η〉 State

    NASA Astrophysics Data System (ADS)

    Fan, Hong-Yi; Fan, Yue

    2002-01-01

    By virtue of the technique of integration within an ordered product of operators and the Schmidt decomposition of the entangled state |η〉, we reduce the general projection calculation in the theory of quantum teleportation to as simple a form as possible and present a general formalism for teleporting quantum states of continuous variables. The project was supported by the National Natural Science Foundation of China and the Educational Ministry Foundation of China.

  12. Identification of solid state fermentation degree with FT-NIR spectroscopy: Comparison of wavelength variable selection methods of CARS and SCARS.

    PubMed

    Jiang, Hui; Zhang, Hang; Chen, Quansheng; Mei, Congli; Liu, Guohai

    2015-01-01

    The use of wavelength variable selection before partial least squares discriminant analysis (PLS-DA) for qualitative identification of solid state fermentation degree by the FT-NIR spectroscopy technique was investigated in this study. Two wavelength variable selection methods, competitive adaptive reweighted sampling (CARS) and stability competitive adaptive reweighted sampling (SCARS), were employed to select the important wavelengths. PLS-DA was applied to calibrate identification models using the wavelength variables selected by CARS and SCARS for identification of solid state fermentation degree. Experimental results showed that the numbers of wavelength variables selected by CARS and SCARS were 58 and 47, respectively, out of the 1557 original wavelength variables. Compared with the results of full-spectrum PLS-DA, both wavelength variable selection methods enhanced the performance of the identification models. Meanwhile, compared with the CARS-PLS-DA model, the SCARS-PLS-DA model achieved better results, with an identification rate of 91.43% in the validation process. The overall results sufficiently demonstrate that a PLS-DA model constructed using wavelength variables selected by a proper wavelength variable selection method can identify solid state fermentation degree more accurately. Copyright © 2015 Elsevier B.V. All rights reserved.

  13. Identification of solid state fermentation degree with FT-NIR spectroscopy: Comparison of wavelength variable selection methods of CARS and SCARS

    NASA Astrophysics Data System (ADS)

    Jiang, Hui; Zhang, Hang; Chen, Quansheng; Mei, Congli; Liu, Guohai

    2015-10-01

    The use of wavelength variable selection before partial least squares discriminant analysis (PLS-DA) for qualitative identification of solid state fermentation degree by the FT-NIR spectroscopy technique was investigated in this study. Two wavelength variable selection methods, competitive adaptive reweighted sampling (CARS) and stability competitive adaptive reweighted sampling (SCARS), were employed to select the important wavelengths. PLS-DA was applied to calibrate identification models using the wavelength variables selected by CARS and SCARS for identification of solid state fermentation degree. Experimental results showed that the numbers of wavelength variables selected by CARS and SCARS were 58 and 47, respectively, out of the 1557 original wavelength variables. Compared with the results of full-spectrum PLS-DA, both wavelength variable selection methods enhanced the performance of the identification models. Meanwhile, compared with the CARS-PLS-DA model, the SCARS-PLS-DA model achieved better results, with an identification rate of 91.43% in the validation process. The overall results sufficiently demonstrate that a PLS-DA model constructed using wavelength variables selected by a proper wavelength variable selection method can identify solid state fermentation degree more accurately.
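
    The CARS/SCARS selection step itself is beyond a short example, but the downstream PLS-DA classification on an already-selected wavelength subset (as in the two records above) can be sketched as follows; scikit-learn's PLSRegression fitted to one-hot class targets is a common way to realize PLS-DA, and all array names here are assumptions.

    ```python
    # Sketch of PLS-DA on a pre-selected wavelength subset (selection not shown).
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    def plsda_predict(X_train, y_train, X_test, selected, n_components=10):
        """selected: indices of wavelengths chosen by e.g. CARS or SCARS."""
        Xtr, Xte = X_train[:, selected], X_test[:, selected]
        classes = np.unique(y_train)
        Y = (y_train[:, None] == classes[None, :]).astype(float)  # one-hot targets
        pls = PLSRegression(n_components=n_components).fit(Xtr, Y)
        return classes[np.argmax(pls.predict(Xte), axis=1)]       # predicted degree
    ```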

  14. In-situ technique for checking the calibration of platinum resistance thermometers

    NASA Technical Reports Server (NTRS)

    Daryabeigi, Kamran; Dillon-Townes, Lawrence A.

    1987-01-01

    The applicability of the self-heating technique for checking the calibration of platinum resistance thermometers located inside wind tunnels was investigated. This technique is based on a steady state measurement of resistance increase versus joule heating. This method was found to be undesirable, mainly because of the fluctuations of flow variables during any wind tunnel testing.

  15. Teleportation-based continuous variable quantum cryptography

    NASA Astrophysics Data System (ADS)

    Luiz, F. S.; Rigolin, Gustavo

    2017-03-01

    We present a continuous variable (CV) quantum key distribution (QKD) scheme based on the CV quantum teleportation of coherent states that yields a raw secret key made up of discrete variables for both Alice and Bob. This protocol preserves the efficient detection schemes of current CV technology (no single-photon detection techniques) and, at the same time, has efficient error correction and privacy amplification schemes due to the binary modulation of the key. We show that for a certain type of incoherent attack, it is secure for almost any value of the transmittance of the optical line used by Alice to share entangled two-mode squeezed states with Bob (no 3 dB or 50% loss limitation characteristic of beam-splitting attacks). The present CVQKD protocol works deterministically (no postselection needed), with efficient direct reconciliation techniques (no reverse reconciliation), to generate a secure key even beyond the 50% loss case at the incoherent-attack level.

  16. Kalman Filtering with Inequality Constraints for Turbofan Engine Health Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Dan; Simon, Donald L.

    2003-01-01

    Kalman filters are often used to estimate the state variables of a dynamic system. However, in the application of Kalman filters some known signal information is often either ignored or dealt with heuristically. For instance, state variable constraints (which may be based on physical considerations) are often neglected because they do not fit easily into the structure of the Kalman filter. This paper develops two analytic methods of incorporating state variable inequality constraints in the Kalman filter. The first method is a general technique of using hard constraints to enforce inequalities on the state variable estimates. The resultant filter is a combination of a standard Kalman filter and a quadratic programming problem. The second method uses soft constraints to estimate state variables that are known to vary slowly with time. (Soft constraints are constraints that are required to be approximately satisfied rather than exactly satisfied.) The incorporation of state variable constraints increases the computational effort of the filter but significantly improves its estimation accuracy. The improvement is proven theoretically and shown via simulation results. The use of the algorithm is demonstrated on a linearized simulation of a turbofan engine to estimate health parameters. The turbofan engine model contains 16 state variables, 12 measurements, and 8 component health parameters. It is shown that the new algorithms provide improved performance in this example over unconstrained Kalman filtering.
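
    A minimal sketch of the hard-constraint idea described above, assuming simple box bounds on the state: run a standard Kalman measurement update, then project the estimate onto the constraint set. For box constraints with a diagonal weighting the projection reduces to a clip; the paper's general linear-inequality case solves a quadratic program instead, and it also treats the covariance more carefully than this fragment does.

    ```python
    # One constrained Kalman update: standard update, then project onto lo<=x<=hi.
    import numpy as np

    def kf_update_box(x, P, z, H, R, lo, hi):
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
        x = x + K @ (z - H @ x)                 # unconstrained estimate
        P = (np.eye(len(x)) - K @ H) @ P
        return np.clip(x, lo, hi), P            # enforce the inequality constraints
    ```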

  17. Digital pre-compensation techniques enabling high-capacity bandwidth variable transponders

    NASA Astrophysics Data System (ADS)

    Napoli, Antonio; Berenguer, Pablo Wilke; Rahman, Talha; Khanna, Ginni; Mezghanni, Mahdi M.; Gardian, Lennart; Riccardi, Emilio; Piat, Anna Chiadò; Calabrò, Stefano; Dris, Stefanos; Richter, André; Fischer, Johannes Karl; Sommerkorn-Krombholz, Bernd; Spinnler, Bernhard

    2018-02-01

    Digital pre-compensation techniques are among the enablers for cost-efficient high-capacity transponders. In this paper we describe various methods to mitigate the impairments introduced by state-of-the-art components within modern optical transceivers. Numerical and experimental results validate their performance and benefits.

  18. State-of-the-Art Sensor Technology in Spain: Invasive and Non-Invasive Techniques for Monitoring Respiratory Variables

    PubMed Central

    Domingo, Christian; Blanch, Lluis; Murias, Gaston; Luján, Manel

    2010-01-01

    The interest in measuring physiological parameters (especially arterial blood gases) has grown progressively, in parallel to the development of new technologies. Physiological parameters were first measured invasively and at discrete time points; however, it was clearly desirable to measure them continuously and non-invasively. The development of intensive care units promoted the use of ventilators via oral intubation, and mechanical respiratory variables were progressively studied. Later, the knowledge gained in the hospital was applied to out-of-hospital management. In the present paper we review the invasive and non-invasive techniques for monitoring respiratory variables. PMID:22399898

  19. State-of-the-art sensor technology in Spain: invasive and non-invasive techniques for monitoring respiratory variables.

    PubMed

    Domingo, Christian; Blanch, Lluis; Murias, Gaston; Luján, Manel

    2010-01-01

    The interest in measuring physiological parameters (especially arterial blood gases) has grown progressively, in parallel to the development of new technologies. Physiological parameters were first measured invasively and at discrete time points; however, it was clearly desirable to measure them continuously and non-invasively. The development of intensive care units promoted the use of ventilators via oral intubation, and mechanical respiratory variables were progressively studied. Later, the knowledge gained in the hospital was applied to out-of-hospital management. In the present paper we review the invasive and non-invasive techniques for monitoring respiratory variables.

  20. Surface atrial frequency analysis in patients with atrial fibrillation: a tool for evaluating the effects of intervention.

    PubMed

    Raine, Dan; Langley, Philip; Murray, Alan; Dunuwille, Asunga; Bourke, John P

    2004-09-01

    The aims of this study were to evaluate (1) principal component analysis as a technique for extracting the atrial signal waveform from the standard 12-lead ECG and (2) its ability to distinguish changes in atrial fibrillation (AF) frequency parameters over time and in response to pharmacologic manipulation using drugs with different effects on atrial electrophysiology. Twenty patients with persistent AF were studied. Continuous 12-lead Holter ECGs were recorded for 60 minutes, first in the drug-free state. The mean and variability of atrial waveform frequency were measured using an automated computer technique, which extracted the atrial signal by principal component analysis and identified the main frequency component using Fourier analysis. Patients were then allotted sequentially to receive 1 of 4 drugs intravenously (amiodarone, flecainide, sotalol, or metoprolol), and the changes induced in the mean and variability of atrial waveform frequency were measured. The mean and variability of atrial waveform frequency did not differ within patients between the two 30-minute sections of the drug-free state. As hypothesized, significant changes in the mean and variability of atrial waveform frequency were detected after manipulation with amiodarone (mean: 5.77 vs 4.86 Hz; variability: 0.55 vs 0.31 Hz), flecainide (mean: 5.33 vs 4.72 Hz; variability: 0.71 vs 0.31 Hz), and sotalol (mean: 5.94 vs 4.90 Hz; variability: 0.73 vs 0.40 Hz), but not with metoprolol (mean: 5.41 vs 5.17 Hz; variability: 0.81 vs 0.82 Hz). A technique for continuously analyzing the atrial frequency characteristics of AF from the surface ECG has been developed and validated.
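
    The two computational steps described, PCA extraction of the atrial component followed by Fourier identification of its main frequency, can be sketched as below; the lead layout, sampling rate, prior suppression of ventricular activity, and the 3-12 Hz atrial search band are assumptions.

    ```python
    # Sketch: dominant atrial frequency from a multi-lead ECG segment.
    import numpy as np

    def atrial_peak_frequency(ecg, fs, band=(3.0, 12.0)):
        """ecg: (n_samples, n_leads) array, ventricular activity suppressed."""
        X = ecg - ecg.mean(axis=0)
        _, _, Vt = np.linalg.svd(X, full_matrices=False)  # PCA via SVD
        atrial = X @ Vt[0]                                # first principal component
        spec = np.abs(np.fft.rfft(atrial)) ** 2
        freqs = np.fft.rfftfreq(len(atrial), d=1.0 / fs)
        mask = (freqs >= band[0]) & (freqs <= band[1])
        return freqs[mask][np.argmax(spec[mask])]         # main frequency component
    ```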

  1. Near-optimal, asymptotic tracking in control problems involving state-variable inequality constraints

    NASA Technical Reports Server (NTRS)

    Markopoulos, N.; Calise, A. J.

    1993-01-01

    The class of all piecewise time-continuous controllers tracking a given hypersurface in the state space of a dynamical system can be split by the present transformation technique into two disjoint classes; while the first of these contains all controllers which track the hypersurface in finite time, the second contains all controllers that track the hypersurface asymptotically. On this basis, a reformulation is presented for optimal control problems involving state-variable inequality constraints. If the state constraint is regarded as 'soft', there may exist controllers which are asymptotic, two-sided, and able to yield the optimal value of the performance index.

  2. Composable security proof for continuous-variable quantum key distribution with coherent states.

    PubMed

    Leverrier, Anthony

    2015-02-20

    We give the first composable security proof for continuous-variable quantum key distribution with coherent states against collective attacks. Crucially, in the limit of large blocks the secret key rate converges to the usual value computed from the Holevo bound. Combining our proof with either the de Finetti theorem or the postselection technique then shows the security of the protocol against general attacks, thereby confirming the long-standing conjecture that Gaussian attacks are optimal asymptotically in the composable security framework. We expect that our parameter estimation procedure, which does not rely on any assumption about the quantum state being measured, will find applications elsewhere, for instance, for the reliable quantification of continuous-variable entanglement in finite-size settings.

  3. An improved switching converter model using discrete and average techniques

    NASA Technical Reports Server (NTRS)

    Shortt, D. J.; Lee, F. C.

    1982-01-01

    The nonlinear modeling and analysis of dc-dc converters has been done by averaging and discrete-sampling techniques. The averaging technique is simple, but inaccurate as the modulation frequencies approach the theoretical limit of one-half the switching frequency. The discrete technique is accurate even at high frequencies, but is very complex and cumbersome. An improved model is developed by combining the aforementioned techniques. This new model is easy to implement in circuit and state variable forms and is accurate to the theoretical limit.
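
    For reference, the classical averaging step that this paper improves on can be written in a few lines: a converter that spends a fraction d of each switching period in topology (A1, B1) and the remainder in (A2, B2) is replaced by a duty-weighted linear model. This is only the baseline averaged model, not the paper's combined discrete/average model; names are illustrative.

    ```python
    # Baseline state-space averaging of a two-topology dc-dc converter.
    import numpy as np

    def averaged_model(A1, B1, A2, B2, d):
        """d: duty ratio (fraction of the period spent in topology 1)."""
        A = d * np.asarray(A1) + (1.0 - d) * np.asarray(A2)
        B = d * np.asarray(B1) + (1.0 - d) * np.asarray(B2)
        return A, B
    ```

    As the abstract notes, this averaged model loses accuracy as modulation frequencies approach half the switching frequency, which is what motivates combining it with the discrete-sampling correction.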

  4. Investigating the social behavioral dynamics and differentiation of skill in a martial arts technique.

    PubMed

    Caron, Robert R; Coey, Charles A; Dhaim, Ashley N; Schmidt, R C

    2017-08-01

    Coordinating interpersonal motor activity is crucial in martial arts, where managing spatiotemporal parameters is emphasized to produce effective techniques. Modeling arm movements in an Aikido technique as coupled oscillators, we investigated whether more-skilled participants would adapt to the perturbation of weighted arms in different and predictable ways compared to less-skilled participants. Thirty-four participants, ranging from complete novices to veterans of more than twenty years, were asked to perform an Aikido exercise with a repeated attack and response, resulting in a period of steady-state coordination followed by a takedown. We used mean relative phase and its variability to measure the steady-state dynamics of both the inter- and intrapersonal coordination. Our findings suggest that the interpersonal coordination of less-skilled participants is disrupted in highly predictable ways based on oscillatory dynamics; however, more-skilled participants overcome these natural dynamics to maintain critical performance variables. Interestingly, the more-skilled participants exhibited more variability in their intrapersonal dynamics while meeting these interpersonal demands. This work lends insight into the development of skill in competitive social motor activities. Copyright © 2017 Elsevier B.V. All rights reserved.
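
    The steady-state coordination measures named here, mean relative phase and its variability, are standard circular statistics; a sketch, under the assumption that instantaneous phases have already been extracted (e.g., via the Hilbert transform):

    ```python
    # Circular mean and variability of the relative phase between two movements.
    import numpy as np

    def relative_phase_stats(phase_a, phase_b):
        """phase_a, phase_b: instantaneous phase series in radians."""
        z = np.exp(1j * (phase_a - phase_b))
        mean_vec = z.mean()
        mean_phase = np.angle(mean_vec)         # mean relative phase
        variability = 1.0 - np.abs(mean_vec)    # 0 = phase-locked, 1 = unrelated
        return mean_phase, variability
    ```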

  5. Performance Monitoring Of A Computer Numerically Controlled (CNC) Lathe Using Pattern Recognition Techniques

    NASA Astrophysics Data System (ADS)

    Daneshmend, L. K.; Pak, H. A.

    1984-02-01

    On-line monitoring of the cutting process in CNC lathe is desirable to ensure unattended fault-free operation in an automated environment. The state of the cutting tool is one of the most important parameters which characterises the cutting process. Direct monitoring of the cutting tool or workpiece is not feasible during machining. However several variables related to the state of the tool can be measured on-line. A novel monitoring technique is presented which uses cutting torque as the variable for on-line monitoring. A classifier is designed on the basis of the empirical relationship between cutting torque and flank wear. The empirical model required by the on-line classifier is established during an automated training cycle using machine vision for off-line direct inspection of the tool.

  6. The Performance of A Sampled Data Delay Lock Loop Implemented with a Kalman Loop Filter.

    DTIC Science & Technology

    1980-01-01

    ...technique for analysis is computer simulation. Other techniques include state variable techniques and z-transform methods. Since the Kalman filter is linear... [Figure 2: block diagram of the sampled data delay lock loop (SDDLL). Figure 3: sampled error voltage (Es) as a function of...] ...from a sum of two components. The first component is the previous filtered estimate advanced one step forward by the state transition matrix.

  7. Exaggerated heart rate oscillations during two meditation techniques.

    PubMed

    Peng, C K; Mietus, J E; Liu, Y; Khalsa, G; Douglas, P S; Benson, H; Goldberger, A L

    1999-07-31

    We report extremely prominent heart rate oscillations associated with slow breathing during specific traditional forms of Chinese Chi and Kundalini Yoga meditation techniques in healthy young adults. We applied both spectral analysis and a novel analytic technique based on the Hilbert transform to quantify these heart rate dynamics. The amplitude of these oscillations during meditation was significantly greater than in the pre-meditation control state and also in three non-meditation control groups: i) elite athletes during sleep, ii) healthy young adults during metronomic breathing, and iii) healthy young adults during spontaneous nocturnal breathing. This finding, along with the marked variability of the beat-to-beat heart rate dynamics during such profound meditative states, challenges the notion of meditation as only an autonomically quiescent state.
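
    The Hilbert-transform step mentioned in the abstract amounts to taking the analytic signal of the detrended heart rate series and reading off its amplitude envelope; a sketch, with uniform resampling of the beat series assumed:

    ```python
    # Instantaneous amplitude of heart rate oscillations via the Hilbert transform.
    import numpy as np
    from scipy.signal import hilbert

    def oscillation_amplitude(heart_rate):
        """heart_rate: uniformly resampled heart rate series (e.g., bpm)."""
        hr = np.asarray(heart_rate, dtype=float)
        analytic = hilbert(hr - hr.mean())      # analytic signal
        return np.abs(analytic)                 # amplitude envelope over time
    ```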

  8. Multivariate localization methods for ensemble Kalman filtering

    NASA Astrophysics Data System (ADS)

    Roh, S.; Jun, M.; Szunyogh, I.; Genton, M. G.

    2015-05-01

    In ensemble Kalman filtering (EnKF), the small number of ensemble members that is feasible to use in a practical data assimilation application leads to sampling variability of the estimates of the background error covariances. The standard approach to reducing the effects of this sampling variability, which has also been found to be highly efficient in improving the performance of EnKF, is the localization of the estimates of the covariances. One family of localization techniques is based on taking the Schur (entry-wise) product of the ensemble-based sample covariance matrix and a correlation matrix whose entries are obtained by the discretization of a distance-dependent correlation function. While the proper definition of the localization function for a single state variable has been extensively investigated, a rigorous definition of the localization function for multiple state variables has seldom been considered. This paper introduces two strategies for the construction of localization functions for multiple state variables. The proposed localization functions are tested in experiments that assimilate simulated observations into the bivariate Lorenz 95 model.
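
    For the single-state-variable case that this paper generalizes, Schur-product localization looks as follows; a Gaussian taper stands in here for the usual compactly supported Gaspari-Cohn function, and a 1-D grid is assumed.

    ```python
    # Schur-product localization of an ensemble sample covariance (1-D grid).
    import numpy as np

    def localized_covariance(ensemble, coords, length_scale):
        """ensemble: (n_members, n_state); coords: (n_state,) grid positions."""
        P = np.cov(ensemble, rowvar=False)                # sample covariance
        dist = np.abs(coords[:, None] - coords[None, :])  # pairwise distances
        rho = np.exp(-0.5 * (dist / length_scale) ** 2)   # distance-dependent taper
        return P * rho                                    # Schur (entry-wise) product
    ```

    The paper's question is precisely how to construct the taper matrix consistently when several state variables share the grid.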

  9. Variable Torque Prescription: State of Art.

    PubMed Central

    Lacarbonara, Mariano; Accivile, Ettore; Abed, Maria R.; Dinoi, Maria Teresa; Monaco, Annalisa; Marzo, Giuseppe; Capogreco, Mario

    2015-01-01

    The variable prescription is widely described from the clinical standpoint: clinical practice is the result of the evolution of the state of the art, an aspect that is less considered in the literature. The state of the art is the key to understanding not only how we reached where we are, but also how to manage torque properly, focusing on the technical and biomechanical purposes that led to the change of torque values over time. The aim of this study is to update clinicians on the aspects that affect torque from the biomechanical standpoint, helping them understand how to manage it by following the "timeline changes" in the different techniques, so that the Variable Prescription Orthodontic (VPO) would be a suitable tool in every clinical case. PMID:25674173

  10. An approach to online network monitoring using clustered patterns

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Jinoh; Sim, Alex; Suh, Sang C.

    Network traffic monitoring is a core element in network operations and management for various purposes such as anomaly detection, change detection, and fault/failure detection. In this study, we introduce a new approach to online monitoring using a pattern-based representation of the network traffic. Unlike the past online techniques limited to a single variable to summarize (e.g., sketch), the focus of this study is on capturing the network state from the multivariate attributes under consideration. To this end, we employ clustering with its benefit of the aggregation of multidimensional variables. The clustered result represents the state of the network with regard to the monitored variables, which can also be compared with the previously observed patterns visually and quantitatively. Finally, we demonstrate the proposed method with two popular use cases, one for estimating state changes and the other for identifying anomalous states, to confirm its feasibility.

  11. An approach to online network monitoring using clustered patterns

    DOE PAGES

    Kim, Jinoh; Sim, Alex; Suh, Sang C.; ...

    2017-03-13

    Network traffic monitoring is a core element in network operations and management for various purposes such as anomaly detection, change detection, and fault/failure detection. In this study, we introduce a new approach to online monitoring using a pattern-based representation of the network traffic. Unlike the past online techniques limited to a single variable to summarize (e.g., sketch), the focus of this study is on capturing the network state from the multivariate attributes under consideration. To this end, we employ clustering with its benefit of the aggregation of multidimensional variables. The clustered result represents the state of the network with regard to the monitored variables, which can also be compared with the previously observed patterns visually and quantitatively. Finally, we demonstrate the proposed method with two popular use cases, one for estimating state changes and the other for identifying anomalous states, to confirm its feasibility.
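
    The pattern-based monitoring idea in the two records above can be sketched in a few lines: cluster multivariate traffic summaries into "network states", then flag a new window that lies far from every learned centroid. The feature set, cluster count, and distance threshold are assumptions.

    ```python
    # Cluster past traffic windows into patterns; flag windows far from all of them.
    import numpy as np
    from sklearn.cluster import KMeans

    def fit_patterns(windows, k=8):
        """windows: (n_windows, n_features) multivariate traffic summaries."""
        return KMeans(n_clusters=k, n_init=10, random_state=0).fit(windows)

    def is_anomalous(model, window, threshold):
        dists = np.linalg.norm(model.cluster_centers_ - window, axis=1)
        return dists.min() > threshold          # unlike every known network state
    ```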

  12. Portal Vein Embolization: State-of-the-Art Technique and Options to Improve Liver Hypertrophy.

    PubMed

    Huang, Steven Y; Aloia, Thomas A

    2017-12-01

    Portal vein embolization (PVE) is associated with a high technical and clinical success rate for induction of future liver remnant hypertrophy prior to surgical resection. The degree of hypertrophy is variable and depends on multiple factors, including technical aspects of the procedure and underlying chronic liver disease. For patients with insufficient liver volume following PVE, adjunctive techniques, such as intra-portal administration of stem cells, dietary supplementation, transarterial embolization, and hepatic vein embolization, are available. Our purpose is to review the state-of-the-art technique associated with high-quality PVE and to discuss options to improve hypertrophy of the future liver remnant.

  13. A Comparison of Two Above-Ground Biomass Estimation Techniques Integrating Satellite-Based Remotely Sensed Data and Ground Data for Tropical and Semiarid Forests in Puerto Rico

    EPA Science Inventory

    Two above-ground forest biomass estimation techniques were evaluated for the United States Territory of Puerto Rico using predictor variables acquired from satellite based remotely sensed data and ground data from the U.S. Department of Agriculture Forest Inventory Analysis (FIA)...

  14. A multitasking finite state architecture for computer control of an electric powertrain

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burba, J.C.

    1984-01-01

    Finite state techniques provide a common design language between the control engineer and the computer engineer for event driven computer control systems. They simplify communication and provide a highly maintainable control system understandable by both. This paper describes the development of a control system for an electric vehicle powertrain utilizing finite state concepts. The basics of finite state automata are provided as a framework to discuss a unique multitasking software architecture developed for this application. The architecture employs conventional time-sliced techniques with task scheduling controlled by a finite state machine representation of the control strategy of the powertrain. The complexities of excitation variable sampling in this environment are also considered.

  15. Localization of one-photon state in space and Einstein-Podolsky-Rosen paradox in spontaneous parametric down conversion

    NASA Technical Reports Server (NTRS)

    Penin, A. N.; Reutova, T. A.; Sergienko, A. V.

    1992-01-01

    An experiment on one-photon state localization in space using a correlation technique in Spontaneous Parametric Down Conversion (SPDC) process is discussed. Results of measurements demonstrate an idea of the Einstein-Podolsky-Rosen (EPR) paradox for coordinate and momentum variables of photon states. Results of the experiment can be explained with the help of an advanced wave technique. The experiment is based on the idea that two-photon states of optical electromagnetic fields arising in the nonlinear process of the spontaneous parametric down conversion (spontaneous parametric light scattering) can be explained by quantum mechanical theory with the help of a single wave function.

  16. Localization of one-photon state in space and Einstein-Podolsky-Rosen paradox in spontaneous parametric down conversion

    NASA Astrophysics Data System (ADS)

    Penin, A. N.; Reutova, T. A.; Sergienko, A. V.

    1992-02-01

    An experiment on one-photon state localization in space using a correlation technique in Spontaneous Parametric Down Conversion (SPDC) process is discussed. Results of measurements demonstrate an idea of the Einstein-Podolsky-Rosen (EPR) paradox for coordinate and momentum variables of photon states. Results of the experiment can be explained with the help of an advanced wave technique. The experiment is based on the idea that two-photon states of optical electromagnetic fields arising in the nonlinear process of the spontaneous parametric down conversion (spontaneous parametric light scattering) can be explained by quantum mechanical theory with the help of a single wave function.

  17. Integrated research in constitutive modelling at elevated temperatures, part 1

    NASA Technical Reports Server (NTRS)

    Haisler, W. E.; Allen, D. H.

    1986-01-01

    Topics covered include: numerical integration techniques; thermodynamics and internal state variables; experimental lab development; comparison of models at room temperature; comparison of models at elevated temperature; and integrated software development.

  18. Variable Selection Strategies for Small-area Estimation Using FIA Plots and Remotely Sensed Data

    Treesearch

    Andrew Lister; Rachel Riemann; James Westfall; Mike Hoppus

    2005-01-01

    The USDA Forest Service's Forest Inventory and Analysis (FIA) unit maintains a network of tens of thousands of georeferenced forest inventory plots distributed across the United States. Data collected on these plots include direct measurements of tree diameter and height and other variables. We present a technique by which FIA plot data and coregistered...

  19. Variable-rate optical communication through the turbulent atmosphere. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Levitt, B. K.

    1971-01-01

    It was demonstrated that the data transmitter can extract real-time channel state information by processing the field received when a pilot tone is sent from the data receiver to the data transmitter. Based on these channel measurements, optimal variable-rate techniques were derived and significant improvements in system performance were obtained, particularly at low bit error rates.

  20. Estimating the concrete compressive strength using hard clustering and fuzzy clustering based regression techniques.

    PubMed

    Nagwani, Naresh Kumar; Deo, Shirish V

    2014-01-01

    Understanding the compressive strength of concrete is important for activities like construction arrangement, prestressing operations, proportioning new mixtures, and quality assurance. Regression techniques are most widely used for prediction tasks, where the relationship between the independent variables and the dependent (prediction) variable is identified. The accuracy of regression techniques for prediction can be improved if clustering is used along with regression, since clustering ensures more accurate curve fitting between the dependent and independent variables. In this work a cluster regression technique is applied to estimating the compressive strength of concrete, and a novel state of the art is proposed for predicting concrete compressive strength. The objective of this work is to demonstrate that clustering along with regression ensures lower prediction errors for estimating the concrete compressive strength. The proposed technique consists of two major stages: in the first stage, clustering is used to group concrete data with similar characteristics, and in the second stage regression techniques are applied over these clusters (groups) to predict the compressive strength from individual clusters. It is found from experiments that clustering along with regression techniques gives minimum errors for predicting the compressive strength of concrete; also, the fuzzy clustering algorithm C-means performs better than the K-means algorithm.

  1. Estimating the Concrete Compressive Strength Using Hard Clustering and Fuzzy Clustering Based Regression Techniques

    PubMed Central

    Nagwani, Naresh Kumar; Deo, Shirish V.

    2014-01-01

    Understanding the compressive strength of concrete is important for activities like construction arrangement, prestressing operations, proportioning new mixtures, and quality assurance. Regression techniques are most widely used for prediction tasks, where the relationship between the independent variables and the dependent (prediction) variable is identified. The accuracy of regression techniques for prediction can be improved if clustering is used along with regression, since clustering ensures more accurate curve fitting between the dependent and independent variables. In this work a cluster regression technique is applied to estimating the compressive strength of concrete, and a novel state of the art is proposed for predicting concrete compressive strength. The objective of this work is to demonstrate that clustering along with regression ensures lower prediction errors for estimating the concrete compressive strength. The proposed technique consists of two major stages: in the first stage, clustering is used to group concrete data with similar characteristics, and in the second stage regression techniques are applied over these clusters (groups) to predict the compressive strength from individual clusters. It is found from experiments that clustering along with regression techniques gives minimum errors for predicting the compressive strength of concrete; also, the fuzzy clustering algorithm C-means performs better than the K-means algorithm. PMID:25374939
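
    A sketch of the two-stage cluster-regression scheme described in the two records above, using K-means plus one linear regression per cluster (the papers also evaluate fuzzy C-means, not shown); all names are illustrative.

    ```python
    # Stage 1: cluster the mixtures; stage 2: one regression model per cluster.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.linear_model import LinearRegression

    class ClusterRegressor:
        def __init__(self, n_clusters=3):
            self.km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)

        def fit(self, X, y):
            labels = self.km.fit_predict(X)
            self.models = {c: LinearRegression().fit(X[labels == c], y[labels == c])
                           for c in np.unique(labels)}
            return self

        def predict(self, X):
            labels = self.km.predict(X)          # nearest learned cluster
            return np.array([self.models[c].predict(x[None, :])[0]
                             for c, x in zip(labels, X)])
    ```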

  2. Efficient Approaches for Propagating Hydrologic Forcing Uncertainty: High-Resolution Applications Over the Western United States

    NASA Astrophysics Data System (ADS)

    Hobbs, J.; Turmon, M.; David, C. H.; Reager, J. T., II; Famiglietti, J. S.

    2017-12-01

    NASA's Western States Water Mission (WSWM) combines remote sensing of the terrestrial water cycle with hydrological models to provide high-resolution state estimates for multiple variables. The effort includes both land surface and river routing models that are subject to several sources of uncertainty, including errors in the model forcing and model structural uncertainty. Computational and storage constraints prohibit extensive ensemble simulations, so this work outlines efficient but flexible approaches for estimating and reporting uncertainty. Calibrated by remote sensing and in situ data where available, we illustrate the application of these techniques in producing state estimates with associated uncertainties at kilometer-scale resolution for key variables such as soil moisture, groundwater, and streamflow.

  3. Simplified Phased-Mission System Analysis for Systems with Independent Component Repairs

    NASA Technical Reports Server (NTRS)

    Somani, Arun K.

    1996-01-01

    Accurate analysis of system reliability requires accounting for all major variations in the system's operation. Most reliability analyses assume that the system configuration, success criteria, and component behavior remain the same; however, multiple phases are natural. We present a new computationally efficient technique for the analysis of phased-mission systems in which the operational states of a system can be described by combinations of component states (such as fault trees or assertions). Moreover, individual components may be repaired, if failed, as part of system operation, but repairs are independent of the system state. For repairable systems, Markov analysis techniques are used, but they suffer from state space explosion, which limits the size of system that can be analyzed and is computationally expensive. We avoid the state space explosion. Phase algebra is used to account for the effects of variable configurations, repairs, and success criteria from phase to phase. Our technique yields exact (as opposed to approximate) results. We demonstrate our technique by means of several examples and present numerical results to show the effects of phases and repairs on system reliability/availability.

  4. Management options for songbirds using the oak shelterwood-burn technique in upland forests of the Southeastern United States

    Treesearch

    J. Drew Lanham; Patrick D. Keyser; Patrick H. Brose; David H. Van Lear

    2002-01-01

    The shelterwood-burn technique is a novel method for regenerating oak-dominated stands on some upland sites while simultaneously minimizing undesirable hardwood intrusion with prescribed fire. Management options available within an oak shelterwood-burn regime will create variably structured habitats that may potentially harbor avian communities of mature forest and...

  5. Objective techniques for psychological assessment

    NASA Technical Reports Server (NTRS)

    Wortz, E.; Hendrickson, W.; Ross, T.

    1973-01-01

    A literature review and a pilot study are used to develop psychological assessment techniques for determining objectively the major aspects of the psychological state of an astronaut. Relationships between various performance and psychophysiological variables and between those aspects of attention necessary to engage successfully in various functions are considered in developing a paradigm to be used for collecting data in manned isolation chamber experiments.

  6. Status of the Usage of Active Learning and Teaching Method and Techniques by Social Studies Teachers

    ERIC Educational Resources Information Center

    Akman, Özkan

    2016-01-01

    The purpose of this study was to determine the active learning and teaching methods and techniques which are employed by the social studies teachers working in state schools of Turkey. This usage status was assessed using different variables. This was a case study, wherein the research was limited to 241 social studies teachers. These teachers…

  7. Event triggered state estimation techniques for power systems with integrated variable energy resources.

    PubMed

    Francy, Reshma C; Farid, Amro M; Youcef-Toumi, Kamal

    2015-05-01

    For many decades, state estimation (SE) has been a critical technology for the energy management systems utilized by power system operators. Over time, it has become a mature technology that provides an accurate representation of system state under fairly stable and well understood system operation. The integration of variable energy resources (VERs) such as wind and solar generation, however, introduces new fast frequency dynamics and uncertainties into the system. Furthermore, such renewable energy is often integrated into the distribution system, thus requiring real-time monitoring all the way to the periphery of the power grid topology and not just the (central) transmission system. The conventional solution is twofold: solve the SE problem (1) at a faster rate in accordance with the newly added VER dynamics and (2) for the entire power grid topology including the transmission and distribution systems. Such an approach results in exponentially growing problem sets which need to be solved at faster rates. This work seeks to address these two simultaneous requirements and builds upon two recent SE methods which incorporate event-triggering, such that the state estimator is only called in the case of considerable novelty in the evolution of the system state. The first method incorporates only event-triggering, while the second adds the concept of tracking. Both SE methods are demonstrated on the standard IEEE 14-bus system and the results are observed at a specific bus for two different scenarios: (1) a spike in the wind power injection and (2) ramp events with higher variability. Relative to traditional state estimation, the numerical case studies showed that the proposed methods can result in computational time reductions of 90%. These results were supported by a theoretical discussion of the computational complexity of three SE techniques. The work concludes that the proposed SE techniques offer practical improvements to the computational complexity of classical state estimation. In this way, state estimation can continue to support the control actions necessary to mitigate the imbalances resulting from the uncertainties in renewables. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
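
    The event-triggering idea is easy to state in code: propagate cheaply every step, and invoke the full estimator only when the measurement innovation signals "considerable novelty". The propagate/update helpers and the threshold below are assumptions, not the paper's algorithms.

    ```python
    # Event-triggered estimation: full update only when the innovation is large.
    import numpy as np

    def event_triggered_step(x, P, z, H, propagate, update, threshold):
        x, P = propagate(x, P)                  # cheap model-based prediction
        if np.linalg.norm(z - H @ x) > threshold:
            x, P = update(x, P, z)              # expensive state-estimation solve
        return x, P
    ```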

  8. A Transformation Approach to Optimal Control Problems with Bounded State Variables

    NASA Technical Reports Server (NTRS)

    Hanafy, Lawrence Hanafy

    1971-01-01

    A technique is described and utilized in the study of solutions to various general problems in optimal control theory, which are converted into Lagrange problems in the calculus of variations. This is accomplished by mapping certain properties in Euclidean space onto closed control and state regions. Nonlinear control problems with a unit m-cube as control region and unit n-cube as state region are considered.
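
    One simple instance of such a mapping, assuming the unit cube is the target region (the report's specific transformation may differ): tanh carries an unconstrained variable onto (-1, 1) component-wise, so bounds on states or controls are satisfied automatically.

    ```python
    # Map unconstrained variables onto the open unit cube and back.
    import numpy as np

    def to_unit_cube(v):
        return np.tanh(v)                       # each component lands in (-1, 1)

    def from_unit_cube(x, eps=1e-12):
        return np.arctanh(np.clip(x, -1 + eps, 1 - eps))  # numerically safe inverse
    ```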

  9. Multivariate localization methods for ensemble Kalman filtering

    NASA Astrophysics Data System (ADS)

    Roh, S.; Jun, M.; Szunyogh, I.; Genton, M. G.

    2015-12-01

    In ensemble Kalman filtering (EnKF), the small number of ensemble members that is feasible to use in a practical data assimilation application leads to sampling variability of the estimates of the background error covariances. The standard approach to reducing the effects of this sampling variability, which has also been found to be highly efficient in improving the performance of EnKF, is the localization of the estimates of the covariances. One family of localization techniques is based on taking the Schur (element-wise) product of the ensemble-based sample covariance matrix and a correlation matrix whose entries are obtained by the discretization of a distance-dependent correlation function. While the proper definition of the localization function for a single state variable has been extensively investigated, a rigorous definition of the localization function for multiple state variables that exist at the same locations has seldom been considered. This paper introduces two strategies for the construction of localization functions for multiple state variables. The proposed localization functions are tested in experiments that assimilate simulated observations into the bivariate Lorenz 95 model.

  10. (Bayesian) Inference for X-ray Timing

    NASA Astrophysics Data System (ADS)

    Huppenkothen, Daniela

    2016-07-01

    Fourier techniques have been incredibly successful in describing the variability of X-ray binaries (XRBs) and Active Galactic Nuclei (AGN). The detection and characterization of both broadband noise components and quasi-periodic oscillations, as well as their behavior in the context of spectral changes during XRB outbursts, has become an important tool for studying the physical processes of accretion and ejection in these systems. In this talk, I will review state-of-the-art techniques for characterizing variability in compact objects and show how these methods help us understand the causes of the observed variability and how we may use it to probe fundamental physics. Despite numerous successes, however, it has also become clear that many scientific questions cannot be answered with traditional timing methods alone. I will therefore also present recent advances in modeling variability with generative models, some in the time domain such as CARMA, and discuss where these methods might lead us in the future.
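
    The most basic of the Fourier tools referred to here is the Leahy-normalized periodogram of an evenly binned light curve, in which a quasi-periodic oscillation appears as a broadened peak; a sketch, with the binning assumed already done:

    ```python
    # Leahy-normalized periodogram of a binned X-ray light curve.
    import numpy as np

    def leahy_periodogram(counts, dt):
        """counts: photon counts per time bin of width dt (seconds)."""
        power = 2.0 * np.abs(np.fft.rfft(counts)) ** 2 / counts.sum()
        freqs = np.fft.rfftfreq(len(counts), d=dt)
        return freqs[1:], power[1:]             # drop the zero-frequency term
    ```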

  11. Optical Sensing of the Fatigue Damage State of CFRP under Realistic Aeronautical Load Sequences

    PubMed Central

    Zuluaga-Ramírez, Pablo; Arconada, Álvaro; Frövel, Malte; Belenguer, Tomás; Salazar, Félix

    2015-01-01

    We present an optical sensing methodology to estimate the fatigue damage state of structures made of carbon fiber reinforced polymer (CFRP) by measuring variations in surface roughness. Variable amplitude loads (VAL) representative of realistic loads during aeronautical missions of fighter aircraft (FALSTAFF) have been applied to coupons until failure. Stiffness degradation and surface roughness variations were measured during the life of the coupons, obtaining a Pearson correlation of 0.75 between both variables. The data were compared with a previous study for constant amplitude load (CAL), obtaining similar results. The conclusions suggest that surface roughness measured in strategic zones is a useful technique for structural health monitoring of CFRP structures, and that it is independent of the type of load applied. Surface roughness can be measured in the field by optical techniques such as speckle, confocal profilometry, and interferometry, among others. PMID:25760056

  12. Space shuttle propulsion parameter estimation using optimal estimation techniques

    NASA Technical Reports Server (NTRS)

    1983-01-01

    The first twelve system state variables are presented with the necessary mathematical developments for incorporating them into the filter/smoother algorithm. Other state variables, i.e., aerodynamic coefficients can be easily incorporated into the estimation algorithm, representing uncertain parameters, but for initial checkout purposes are treated as known quantities. An approach for incorporating the NASA propulsion predictive model results into the optimal estimation algorithm was identified. This approach utilizes numerical derivatives and nominal predictions within the algorithm with global iterations of the algorithm. The iterative process is terminated when the quality of the estimates provided no longer significantly improves.

  13. An automated procedure for calculating system matrices from perturbation data generated by an EAI Pacer 100 hybrid computer system

    NASA Technical Reports Server (NTRS)

    Milner, E. J.; Krosel, S. M.

    1977-01-01

    Techniques are presented for determining the elements of the A, B, C, and D state variable matrices for systems simulated on an EAI Pacer 100 hybrid computer. An automated procedure systematically generates disturbance data necessary to linearize the simulation model and stores these data on a floppy disk. A separate digital program verifies this data, calculates the elements of the system matrices, and prints these matrices appropriately labeled. The partial derivatives forming the elements of the state variable matrices are approximated by finite difference calculations.
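
    A minimal sketch of the finite-difference linearization, assuming a generic continuous-time model x' = f(x, u), y = g(x, u) in place of the hybrid-computer simulation:

      import numpy as np

      def linearize(f, g, x0, u0, eps=1e-5):
          # approximate A = df/dx, B = df/du, C = dg/dx, D = dg/du by
          # central finite differences about the operating point (x0, u0)
          def jac(fun, wrt_x):
              z0 = x0 if wrt_x else u0
              J = np.zeros((len(fun(x0, u0)), len(z0)))
              for j in range(len(z0)):
                  dz = np.zeros_like(z0); dz[j] = eps
                  if wrt_x:
                      J[:, j] = (fun(x0 + dz, u0) - fun(x0 - dz, u0)) / (2 * eps)
                  else:
                      J[:, j] = (fun(x0, u0 + dz) - fun(x0, u0 - dz)) / (2 * eps)
              return J
          return jac(f, True), jac(f, False), jac(g, True), jac(g, False)

      # demo: damped oscillator x' = [x2, -x1 - 0.5*x2 + u]
      f = lambda x, u: np.array([x[1], -x[0] - 0.5 * x[1] + u[0]])
      g = lambda x, u: np.array([x[0]])
      A, B, C, D = linearize(f, g, np.zeros(2), np.zeros(1))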

  14. Efficient continuous-variable state tomography using Padua points

    NASA Astrophysics Data System (ADS)

    Landon-Cardinal, Olivier; Govia, Luke C. G.; Clerk, Aashish A.

    Further development of quantum technologies calls for efficient characterization methods for quantum systems. While recent work has focused on discrete systems of qubits, much remains to be done for continuous-variable systems such as a microwave mode in a cavity. We introduce a novel technique to reconstruct the full Husimi Q or Wigner function from measurements performed at the Padua points in phase space, the optimal sampling points for interpolation in 2D. Our technique not only reduces the number of experimental measurements but, remarkably, also allows for the direct estimation of any density matrix element in the Fock basis, including off-diagonal elements. OLC acknowledges financial support from NSERC.

  15. Collaborative Technologies and their Effect on Operator Workload in BMC2 Domains

    DTIC Science & Technology

    2007-06-01

    Vogel, & Luck, 1998). Like brain measurement techniques, measures of heart rate variability (HRV) have been used extensively as a physiological...et al., 2004; Helton, Dember, Warm, & Matthews, 1999; Matthews, et al., 1999; Szalma et al., 2004). Both state and trait anxiety were assessed using...the STAI (State-Trait Anxiety Inventory, Spielberger et al., 1983). Each dimension of the STAI (i.e., state and trait) consists of a 20-item

  16. Fourier decomposition pulmonary MRI using a variable flip angle balanced steady-state free precession technique.

    PubMed

    Corteville, D M R; Kjørstad, Å; Henzler, T; Zöllner, F G; Schad, L R

    2015-05-01

    Fourier decomposition (FD) is a noninvasive method for assessing ventilation- and perfusion-related information in the lungs. However, the technique has a low signal-to-noise ratio (SNR) in the lung parenchyma. We present an approach to increase the SNR in both morphological and functional images. The data used to create functional FD images are usually acquired with a standard balanced steady-state free precession (bSSFP) sequence, in which the possible range of the flip angle is restricted by specific absorption rate (SAR) limitations. A variable flip angle approach can therefore be used as an optimization. This was validated using measurements from a phantom and six healthy volunteers. The SNR in both the morphological and functional FD images was increased by 32%, while the SAR restrictions were kept unchanged. Furthermore, due to the higher SNR, the effective resolution of the functional images was visibly increased. The variable flip angle approach did not introduce any new transient artifacts, and blurring artifacts were minimized. Both a gain in SNR and an effective resolution gain in functional lung images can be obtained using the FD method in conjunction with a variable flip angle optimized bSSFP sequence. © 2014 Wiley Periodicals, Inc.

  17. Hydrologic Remote Sensing and Land Surface Data Assimilation.

    PubMed

    Moradkhani, Hamid

    2008-05-06

    Accurate, reliable and skillful forecasting of key environmental variables such as soil moisture and snow is of paramount importance due to their strong influence on many water resources applications, including flood control, agricultural production and effective water resources management, which collectively control the behavior of the climate system. Soil moisture is a key state variable in land surface-atmosphere interactions, affecting surface energy fluxes, runoff and the radiation balance. Snow processes also have a large influence on land-atmosphere energy exchanges due to snow's high albedo, low thermal conductivity and considerable spatial and temporal variability, resulting in dramatic changes in surface and ground temperature. Measurement of these two variables is possible through a variety of ground-based and remote sensing procedures. Remote sensing holds particular promise for soil moisture and snow measurements, which have considerable spatial and temporal variability. Merging these measurements with hydrologic model outputs in a systematic and effective way improves land surface model prediction, and data assimilation provides a mechanism to combine the two sources of estimation. Much success has been attained in recent years in assimilating data from passive microwave sensors into models. This paper provides an overview of remote sensing measurement techniques for soil moisture and snow and describes advances in data assimilation through ensemble filtering, mainly the ensemble Kalman filter (EnKF) and the particle filter (PF), for improving model predictions and reducing the uncertainties involved in the prediction process. It is believed that the PF provides a complete representation of the probability distribution of the state variables of interest (according to sequential Bayes' law) and could be a strong alternative to the EnKF, which is subject to limitations including its linear updating rule and the assumption of jointly normally distributed errors in state variables and observations.
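
    A minimal sketch of the EnKF analysis step mentioned above (the "linear updating rule"), assuming a linear observation operator H and the perturbed-observations variant:

      import numpy as np

      def enkf_update(X, y, H, R, rng):
          # X: forecast ensemble (n_state, n_ens); y: observation vector;
          # H: observation operator; R: observation-error covariance
          n_ens = X.shape[1]
          A = X - X.mean(axis=1, keepdims=True)            # ensemble anomalies
          Pf = A @ A.T / (n_ens - 1)                       # sample forecast covariance
          K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)   # Kalman gain
          Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n_ens).T
          return X + K @ (Y - H @ X)                       # analysis ensemble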

  18. Regime Behavior in Paleo-Reconstructed Streamflow: Attributions to Atmospheric Dynamics, Synoptic Circulation and Large-Scale Climate Teleconnection Patterns

    NASA Astrophysics Data System (ADS)

    Ravindranath, A.; Devineni, N.

    2017-12-01

    Studies have shown that streamflow behavior and dynamics have a significant link with climate and climate variability. Patterns of persistent regime behavior from extended streamflow records in many watersheds justify investigating large-scale climate mechanisms as potential drivers of hydrologic regime behavior and streamflow variability. Understanding such streamflow-climate relationships is crucial to forecasting/simulation systems and the planning and management of water resources. In this study, hidden Markov models are used with reconstructed streamflow to detect regime-like behaviors - the hidden states - and state transition phenomena. Individual extreme events and their spatial variability across the basin are then verified with the identified states. Wavelet analysis is performed to examine the signals over time in the streamflow records. Joint analyses of the climatic data in the 20th century and the identified states are undertaken to better understand the hydroclimatic connections within the basin as well as important teleconnections that influence water supply. Compositing techniques are used to identify atmospheric circulation patterns associated with identified states of streamflow. The grouping of such synoptic patterns and their frequency are then examined. Sliding time-window correlation analysis and cross-wavelet spectral analysis are performed to establish the synchronicity of basin flows to the identified synoptic and teleconnection patterns. The Missouri River Basin (MRB) is examined in this study, both as a means of better understanding the synoptic climate controls in this important watershed and as a case study for the techniques developed here. Initial wavelet analyses of reconstructed streamflow at major gauges in the MRB show multidecadal cycles in regime behavior.
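
    A minimal sketch of the hidden-state detection step, assuming the third-party hmmlearn package and a hypothetical file of reconstructed annual flows:

      import numpy as np
      from hmmlearn.hmm import GaussianHMM  # assumed third-party package

      flows = np.loadtxt("mrb_reconstructed_flow.txt")   # hypothetical file, one value per year
      X = np.log(flows).reshape(-1, 1)                   # log flows, shape (n_years, 1)

      model = GaussianHMM(n_components=2, covariance_type="full", n_iter=200)
      model.fit(X)                   # EM estimation of regime means, variances, transitions
      states = model.predict(X)      # Viterbi path: wet/dry regime sequence
      print(model.transmat_)         # regime persistence / transition probabilities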

  19. DUAL STATE-PARAMETER UPDATING SCHEME ON A CONCEPTUAL HYDROLOGIC MODEL USING SEQUENTIAL MONTE CARLO FILTERS

    NASA Astrophysics Data System (ADS)

    Noh, Seong Jin; Tachikawa, Yasuto; Shiiba, Michiharu; Kim, Sunmin

    Applications of data assimilation techniques have been widely used to improve upon the predictability of hydrologic modeling. Among various data assimilation techniques, sequential Monte Carlo (SMC) filters, known as "particle filters" provide the capability to handle non-linear and non-Gaussian state-space models. This paper proposes a dual state-parameter updating scheme (DUS) based on SMC methods to estimate both state and parameter variables of a hydrologic model. We introduce a kernel smoothing method for the robust estimation of uncertain model parameters in the DUS. The applicability of the dual updating scheme is illustrated using the implementation of the storage function model on a middle-sized Japanese catchment. We also compare performance results of DUS combined with various SMC methods, such as SIR, ASIR and RPF.
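
    The abstract does not spell out the kernel smoothing step; one common scheme (in the style of Liu and West) shrinks each particle's parameter vector toward the ensemble mean before adding jitter, with a discount factor delta in (0, 1]:

      theta_i ~ N( a*theta_i + (1 - a)*theta_bar,  h^2 * V ),
      a = (3*delta - 1) / (2*delta),   h^2 = 1 - a^2,

    where theta_bar and V are the Monte Carlo mean and variance of the parameter particles; the shrinkage preserves the ensemble mean and variance while avoiding particle degeneracy.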

  20. Discovery and Monitoring of a New Black Hole Candidate XTE J1752-223 with RXTE: RMS Spectrum Evolution, BH Mass and the Source Distance

    NASA Technical Reports Server (NTRS)

    Shaposhinikov, Nikolai; Markwardt, Craig; Swank, Jean; Krimm, Hans

    2010-01-01

    We report on the discovery and monitoring observations of a new galactic black hole candidate XTE J1752-223 by the Rossi X-ray Timing Explorer (RXTE). The new source appeared on the X-ray sky on October 21, 2009 and was active for almost 8 months. Phenomenologically, the source exhibited the low-hard/high-soft spectral state bi-modality and the variability evolution during the state transition that matches the standard behavior expected from a stellar-mass black hole binary. We model the energy spectrum throughout the outburst using a generic Comptonization model, assuming that part of the input soft radiation, in the form of a black body spectrum, gets reprocessed in the Comptonizing medium. We follow the evolution of the fractional root-mean-square (RMS) variability in the RXTE/PCA energy band with the source spectral state and conclude that broadband variability is strongly correlated with the source hardness (or Comptonized fraction). We follow changes in the energy distribution of the RMS variability during the low-hard state and the state transition and find further evidence that the variable emission is strongly concentrated in the power-law spectral component. We discuss the implications of our results for the Comptonization regimes during different spectral states. Correlations of spectral and variability properties provide measurements of the BH mass and the distance to the source. The spectral-timing correlation scaling technique applied to the RXTE observations during the hard-to-soft state transition indicates a BH mass in XTE J1752-223 between 8 and 11 solar masses and a distance to the source of about 3.5 kiloparsecs.

  1. Solution of the finite Milne problem in stochastic media with RVT Technique

    NASA Astrophysics Data System (ADS)

    Slama, Howida; El-Bedwhey, Nabila A.; El-Depsy, Alia; Selim, Mustafa M.

    2017-12-01

    This paper presents the solution to the Milne problem in the steady state with an isotropic scattering phase function. The properties of the medium are considered stochastic, with Gaussian or exponential distributions, and hence the problem is treated as a stochastic integro-differential equation. To obtain explicit forms for the radiant energy density, linear extrapolation distance, reflectivity and transmissivity in the deterministic case, the problem is solved using the Pomraning-Eddington method. The obtained solution is found to depend on the optical space variable and the thickness of the medium, which are considered random variables. The random variable transformation (RVT) technique is used to find the first probability density function (1-PDF) of the solution process. Then the stochastic linear extrapolation distance, reflectivity and transmissivity are calculated. For illustration, numerical results and conclusions are provided.
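
    For a monotone transformation Y = g(X) of a random input X with known density f_X, the RVT technique gives the first probability density function of the output as

      f_Y(y) = f_X(g^{-1}(y)) * |d g^{-1}(y) / dy|,

    so that, once the deterministic Pomraning-Eddington solution expresses each output quantity as a function of the random optical variables, the output densities follow by substitution.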

  2. Divergence-free approach for obtaining decompositions of quantum-optical processes

    NASA Astrophysics Data System (ADS)

    Sabapathy, K. K.; Ivan, J. S.; García-Patrón, R.; Simon, R.

    2018-02-01

    Operator-sum representations of quantum channels can be obtained by applying the channel to one subsystem of a maximally entangled state and deploying the channel-state isomorphism. However, for continuous-variable systems, such schemes contain natural divergences since the maximally entangled state is ill-defined. We introduce a method that avoids such divergences by utilizing finitely entangled (squeezed) states and then taking the limit of arbitrarily large squeezing. Using this method, we derive an operator-sum representation for all single-mode bosonic Gaussian channels where a unique feature is that both quantum-limited and noisy channels are treated on an equal footing. This technique facilitates a proof that the rank-1 Kraus decomposition for Gaussian channels at their respective entanglement-breaking thresholds, obtained in the overcomplete coherent-state basis, is unique. The methods could have applications to the simulation of continuous-variable channels.
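
    Concretely, the finitely entangled resource replacing the ill-defined maximally entangled state is the two-mode squeezed vacuum, written in the Fock basis as

      |psi_lambda> = sqrt(1 - lambda^2) * sum_{n=0..inf} lambda^n |n>|n>,   lambda = tanh r,

    with the usual channel-state (Choi) correspondence recovered in the limit of arbitrarily large squeezing, r -> infinity (lambda -> 1).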

  3. Evaluating physical habitat and water chemistry data from statewide stream monitoring programs to establish least-impacted conditions in Washington State

    USGS Publications Warehouse

    Wilmoth, Siri K.; Irvine, Kathryn M.; Larson, Chad

    2015-01-01

    Various GIS-generated land-use predictor variables, physical habitat metrics, and water chemistry variables from 75 reference streams and 351 randomly sampled sites throughout Washington State were evaluated for effectiveness at discriminating reference from random sites within level III ecoregions. A combination of multivariate clustering and ordination techniques was used. We describe average observed conditions for a subset of predictor variables and propose statistical criteria for establishing reference conditions for stream habitat in Washington. Using these criteria, we determined whether any of the random sites met expectations for reference condition and whether any of the established reference sites failed to meet expectations for reference condition. Establishing these criteria will set a benchmark against which future data will be compared.

  4. Impact damage resistance of composite fuselage structure, part 1

    NASA Technical Reports Server (NTRS)

    Dost, E. F.; Avery, W. B.; Ilcewicz, L. B.; Grande, D. H.; Coxon, B. R.

    1992-01-01

    The impact damage resistance of laminated composite transport aircraft fuselage structures was studied experimentally. A statistically based designed experiment was used to examine numerous material, laminate, structural, and extrinsic (e.g., impactor type) variables. The relative importance and quantitative measure of the effect of each variable and variable interactions on responses including impactor dynamic response, visibility, and internal damage state were determined. The study utilized 32 three-stiffener panels, each with a unique combination of material type, material forms, and structural geometry. Two manufacturing techniques, tow placement and tape lamination, were used to build panels representative of potential fuselage crown, keel, and lower side-panel designs. Various combinations of impactor variables representing various foreign-object-impact threats to the aircraft were examined. Impacts performed at different structural locations within each panel (e.g., skin midbay, stiffener attaching flange, etc.) were considered separate parallel experiments. The relationship between input variables, measured damage states, and structural response to this damage are presented including recommendations for materials and impact test methods for fuselage structure.

  5. A review on reflective remote sensing and data assimilation techniques for enhanced agroecosystem modeling

    NASA Astrophysics Data System (ADS)

    Dorigo, W. A.; Zurita-Milla, R.; de Wit, A. J. W.; Brazile, J.; Singh, R.; Schaepman, M. E.

    2007-05-01

    During the last 50 years, the management of agroecosystems has been undergoing major changes to meet the growing demand for food, timber, fibre and fuel. As a result of this intensified use, the ecological status of many agroecosystems has severely deteriorated. Modeling the behavior of agroecosystems is, therefore, of great help since it allows the definition of management strategies that maximize (crop) production while minimizing the environmental impacts. Remote sensing can support such modeling by offering information on the spatial and temporal variation of important canopy state variables which would be very difficult to obtain otherwise. In this paper, we present an overview of different methods that can be used to derive biophysical and biochemical canopy state variables from optical remote sensing data in the VNIR-SWIR regions. The overview is based on an extensive literature review where both statistical-empirical and physically based methods are discussed. Subsequently, the prevailing techniques of assimilating remote sensing data into agroecosystem models are outlined. The increasing complexity of data assimilation methods and of models describing agroecosystem functioning has significantly increased computational demands. For this reason, we include a short section on the potential of parallel processing to deal with the complex and computationally intensive algorithms described in the preceding sections. The studied literature reveals that many valuable techniques have been developed both for retrieving canopy state variables from reflective remote sensing data and for assimilating the retrieved variables into agroecosystem models. However, for agroecosystem modeling and remote sensing data assimilation to be commonly employed on a global operational basis, emphasis will have to be put on bridging the mismatch between data availability and accuracy on one hand, and model and user requirements on the other. This could be achieved by integrating imagery with different spatial, temporal, spectral, and angular resolutions, and the fusion of optical data with data of different origin, such as LIDAR and radar/microwave.

  6. Newtonian nudging for a Richards equation-based distributed hydrological model

    NASA Astrophysics Data System (ADS)

    Paniconi, Claudio; Marrocu, Marino; Putti, Mario; Verbunt, Mark

    The objective of data assimilation is to provide physically consistent estimates of spatially distributed environmental variables. In this study a relatively simple data assimilation method has been implemented in a relatively complex hydrological model. The data assimilation technique is Newtonian relaxation or nudging, in which model variables are driven towards observations by a forcing term added to the model equations. The forcing term is proportional to the difference between simulation and observation (relaxation component) and contains four-dimensional weighting functions that can incorporate prior knowledge about the spatial and temporal variability and characteristic scales of the state variable(s) being assimilated. The numerical model couples a three-dimensional finite element Richards equation solver for variably saturated porous media and a finite difference diffusion wave approximation based on digital elevation data for surface water dynamics. We describe the implementation of the data assimilation algorithm for the coupled model and report on the numerical and hydrological performance of the resulting assimilation scheme. Nudging is shown to be successful in improving the hydrological simulation results, and it introduces little computational cost, in terms of CPU and other numerical aspects of the model's behavior, in some cases even improving numerical performance compared to model runs without nudging. We also examine the sensitivity of the model to nudging term parameters including the spatio-temporal influence coefficients in the weighting functions. Overall the nudging algorithm is quite flexible, for instance in dealing with concurrent observation datasets, gridded or scattered data, and different state variables, and the implementation presented here can be readily extended to any of these features not already incorporated. Moreover the nudging code and tests can serve as a basis for implementation of more sophisticated data assimilation techniques in a Richards equation-based hydrological model.
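
    Schematically, nudging augments the evolution equation of an assimilated state variable psi with a relaxation term:

      d(psi)/dt = F(psi, x, t) + G * W(x, t) * (psi_obs - psi),

    where F is the original model operator, G the nudging strength, W the four-dimensional weighting function, and psi_obs the observed value toward which the model is driven.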

  7. A Review of New Surgical and Endoscopic Therapies for Gastroesophageal Reflux Disease.

    PubMed

    Ganz, Robert A

    2016-07-01

    Treatment of gastroesophageal reflux disease in the United States today is binary, with the majority of patients with gastroesophageal reflux disease being treated with antisecretory medications and a minority of patients, typically those with volume regurgitation, undergoing Nissen fundoplication. However, there has been increasing dissatisfaction with proton pump inhibitor therapy among a significant number of patients with gastroesophageal reflux disease owing to cost, side effects, and refractory symptoms, and there has been a general reluctance to undergo surgical fundoplication due to its attendant side-effect profile. As a result, a therapy gap exists for many patients with gastroesophageal reflux disease. Alternative techniques are available for these gap patients, including 2 endoscopic fundoplication techniques, an endoscopic radiofrequency energy delivery technique, and 2 minimally invasive surgical procedures. These alternative techniques have been extensively evaluated; however, there are limitations to published studies, including arbitrary definitions of success, variable efficacy measurements, deficient reporting tools, inconsistent study designs, inconsistent lengths of follow-up postintervention, and lack of comparison data across techniques. Although all of the techniques appear to be safe, the endoscopic techniques lack demonstrable reflux control and show variable symptom improvement and variable decreases in proton pump inhibitor use. The surgical techniques are more robust, with evidence for adequate reflux control, symptom improvement, and decreased proton pump inhibitor use; however, these techniques are more difficult to perform and are more intrusive. Additionally, these alternative techniques have only been studied in patients with relatively normal anatomy. The field of gastroesophageal reflux disease treatment is in need of consistent definitions of efficacy, standardized study design and outcome measurements, and improved reporting tools before the role of these techniques can be fully ascertained.

  8. Aircraft model prototypes which have specified handling-quality time histories

    NASA Technical Reports Server (NTRS)

    Johnson, S. H.

    1976-01-01

    Several techniques for obtaining linear constant-coefficient airplane models from specified handling-quality time histories are discussed. One technique, the pseudodata method, solves the basic problem, yields specified eigenvalues, and accommodates state-variable transfer-function zero suppression. The method is fully illustrated for a fourth-order stability-axis small-motion model with three lateral handling-quality time histories specified. The FORTRAN program which obtains and verifies the model is included and fully documented.

  9. Survey of Radiographic Requirements and Techniques.

    ERIC Educational Resources Information Center

    Farman, Allan G.; Shawkat, Abdul H.

    1981-01-01

    A survey of dental schools revealed little standardization of student requirements for dental radiography in the United States. There was a high degree of variability as to what constituted a full radiographic survey, which has implications concerning the maximum limits to patient exposure to radiation. (Author/MLW)

  10. Electrical characterizations of MIS structures based on variable-gap n(p)-HgCdTe grown by MBE on Si(0 1 3) substrates

    NASA Astrophysics Data System (ADS)

    Voitsekhovskii, A. V.; Nesmelov, S. N.; Dzyadukh, S. M.; Varavin, V. S.; Dvoretskii, S. A.; Mikhailov, N. N.; Yakushev, M. V.; Sidorov, G. Yu.

    2017-12-01

    Metal-insulator-semiconductor (MIS) structures based on n(p)-Hg1-xCdxTe (x = 0.22-0.40) with near-surface variable-gap layers were grown by the molecular-beam epitaxy (MBE) technique on Si (0 1 3) substrates. Electrical properties of the MIS structures were investigated experimentally at various temperatures (9-77 K) and directions of voltage sweep. The "narrow swing" technique was used to determine the spectra of fast surface states while excluding hysteresis effects. It is established that the density of fast surface states at the MCT/Al2O3 interface does not exceed 3 × 10^10 eV^-1 cm^-2 at its minimum. For MIS structures based on n-MCT/Si(0 1 3), the differential resistance of the space-charge region in strong inversion mode in the temperature range 50-90 K is limited by Shockley-Read-Hall generation in the space-charge region.

  11. Gaussian entanglement revisited

    NASA Astrophysics Data System (ADS)

    Lami, Ludovico; Serafini, Alessio; Adesso, Gerardo

    2018-02-01

    We present a novel approach to the separability problem for Gaussian quantum states of bosonic continuous variable systems. We derive a simplified necessary and sufficient separability criterion for arbitrary Gaussian states of m versus n modes, which relies on convex optimisation over marginal covariance matrices on one subsystem only. We further revisit the currently known results stating the equivalence between separability and positive partial transposition (PPT) for specific classes of Gaussian states. Using techniques based on matrix analysis, such as Schur complements and matrix means, we then provide a unified treatment and compact proofs of all these results. In particular, we recover the PPT-separability equivalence for: (i) Gaussian states of 1 versus n modes; and (ii) isotropic Gaussian states. In passing, we also retrieve (iii) the recently established equivalence between separability of a Gaussian state and its complete Gaussian extendability. Our techniques are then applied to progress beyond the state of the art. We prove that: (iv) Gaussian states that are invariant under partial transposition are necessarily separable; (v) the PPT criterion is necessary and sufficient for separability for Gaussian states of m versus n modes that are symmetric under the exchange of any two modes belonging to one of the parties; and (vi) Gaussian states which remain PPT under passive optical operations cannot be entangled by them either. This is not a foregone conclusion per se (since Gaussian bound entangled states do exist) and settles a question that had been left unanswered in the existing literature on the subject. This paper, enjoyable by both the quantum optics and the matrix analysis communities, overall delivers technical and conceptual advances which are likely to be useful for further applications in continuous variable quantum information theory, beyond the separability problem.

  12. A technique for estimating time of concentration and storage coefficient values for Illinois streams

    USGS Publications Warehouse

    Graf, Julia B.; Garklavs, George; Oberg, Kevin A.

    1982-01-01

    Values of the unit hydrograph parameters time of concentration (TC) and storage coefficient (R) can be estimated for streams in Illinois by a two-step technique developed from data for 98 gaged basins in the State. The sum of TC and R is related to stream length (L) and main channel slope (S) by the relation (TC + R)_e = 35.2 L^0.39 S^-0.78. The variable R/(TC + R) is not significantly correlated with drainage area, slope, or length, but does exhibit a regional trend. Regional values of R/(TC + R) are used with the computed values of (TC + R)_e to solve for estimated values of time of concentration (TC_e) and storage coefficient (R_e). The use of the variable R/(TC + R) is thought to account for variations in unit hydrograph parameters caused by physiographic variables such as basin topography, flood-plain development, and basin storage characteristics. (USGS)
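
    A minimal sketch of the two-step estimation, assuming L in miles, S in feet per mile, and TC and R in hours (typical conventions for such USGS relations; the units are an assumption here):

      def tc_and_r(L, S, regional_ratio):
          # step 1: the report's relation (TC + R)_e = 35.2 L^0.39 S^-0.78
          total = 35.2 * L**0.39 * S**-0.78
          # step 2: split the sum using the regional value of R / (TC + R)
          R = regional_ratio * total
          return total - R, R     # (TC_e, R_e)

      # hypothetical basin: L = 10, S = 8, regional R/(TC + R) = 0.7
      print(tc_and_r(10.0, 8.0, 0.7))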

  13. Multifactor valuation models of energy futures and options on futures

    NASA Astrophysics Data System (ADS)

    Bertus, Mark J.

    The intent of this dissertation is to investigate continuous-time pricing models for commodity derivative contracts that incorporate mean reversion. The motivation for pricing commodity futures and options on futures contracts leads to improved practical risk management techniques in markets where uncertainty is increasing. In the dissertation, closed-form solutions for futures contracts are developed for mean-reverting one-factor, two-factor, and three-factor Brownian motions. These solutions are obtained through risk-neutral pricing methods that yield tractable expressions for futures prices, which are linear in the state variables, hence making them attractive for estimation. These functions, however, are expressed in terms of latent variables (i.e., spot prices, convenience yield), which complicates the estimation of the futures pricing equation. To address this complication, a discussion of dynamic factor analysis is given. This procedure handles the latent variables using a Kalman filter, and illustrations show how the technique may be used for the analysis. In addition to the futures contracts, closed-form solutions for two option models are obtained. Solutions to the one- and two-factor models are tailored solutions of the Black-Scholes pricing model. Furthermore, since these contracts are written on the futures contracts, they too are influenced by the same underlying parameters of the state variables used to price the futures contracts. The analysis concludes with an investigation of commodity futures options that incorporate random discrete jumps.
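
    As a reference point for the class of models described, a generic one-factor mean-reverting specification for the log spot price X_t = ln S_t is the Ornstein-Uhlenbeck process

      dX_t = kappa * (mu - X_t) dt + sigma dW_t,

    under which risk-neutral log futures prices are affine in the state variable, ln F(t, T) = a(T - t) + b(T - t) X_t, consistent with the linear, estimation-friendly form noted above.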

  14. Climate downscaling effects on predictive ecological models: a case study for threatened and endangered vertebrates in the southeastern United States

    USGS Publications Warehouse

    Bucklin, David N.; Watling, James I.; Speroterra, Carolina; Brandt, Laura A.; Mazzotti, Frank J.; Romañach, Stephanie S.

    2013-01-01

    High-resolution (downscaled) projections of future climate conditions are critical inputs to a wide variety of ecological and socioeconomic models and are created using numerous different approaches. Here, we conduct a sensitivity analysis of spatial predictions from climate envelope models for threatened and endangered vertebrates in the southeastern United States to determine whether two different downscaling approaches (with and without the use of a regional climate model) affect climate envelope model predictions when all other sources of variation are held constant. We found that prediction maps differed spatially between downscaling approaches and that the variation attributable to downscaling technique was comparable to variation between maps generated using different general circulation models (GCMs). Precipitation variables tended to show greater discrepancies between downscaling techniques than temperature variables, and for one GCM, there was evidence that more poorly resolved precipitation variables contributed relatively more to model uncertainty than more well-resolved variables. Our work suggests that ecological modelers requiring high-resolution climate projections should carefully consider the type of downscaling applied to the climate projections prior to their use in predictive ecological modeling. The uncertainty associated with alternative downscaling methods may rival that of other, more widely appreciated sources of variation, such as the general circulation model or emissions scenario with which future climate projections are created.

  15. Combination of process and vibration data for improved condition monitoring of industrial systems working under variable operating conditions

    NASA Astrophysics Data System (ADS)

    Ruiz-Cárcel, C.; Jaramillo, V. H.; Mba, D.; Ottewill, J. R.; Cao, Y.

    2016-01-01

    The detection and diagnosis of faults in industrial processes is a very active field of research due to the reduction in maintenance costs achieved by the implementation of process monitoring algorithms such as Principal Component Analysis, Partial Least Squares or more recently Canonical Variate Analysis (CVA). Typically the condition of rotating machinery is monitored separately using vibration analysis or other specific techniques. Conventional vibration-based condition monitoring techniques are based on the tracking of key features observed in the measured signal. Typically steady-state loading conditions are required to ensure consistency between measurements. In this paper, a technique based on merging process and vibration data is proposed with the objective of improving the detection of mechanical faults in industrial systems working under variable operating conditions. The capabilities of CVA for detection and diagnosis of faults were tested using experimental data acquired from a compressor test rig where different process faults were introduced. Results suggest that the combination of process and vibration data can effectively improve the detectability of mechanical faults in systems working under variable operating conditions.
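
    A minimal sketch of a CVA-style monitoring statistic, assuming a combined feature matrix Y (rows = samples, columns = process and vibration channels); this illustrates the general approach, not the authors' exact algorithm:

      import numpy as np

      def cva_t2(Y, lag=5, n_cv=3):
          # stack past/future windows, compute canonical correlations by SVD,
          # and score each sample with a T^2-style health indicator
          def inv_sqrt(M):
              w, V = np.linalg.eigh(M)
              return V @ np.diag(1.0 / np.sqrt(np.maximum(w, 1e-12))) @ V.T
          n = len(Y)
          P = np.hstack([Y[lag - 1 - i : n - lag - i] for i in range(lag)])   # past
          F = np.hstack([Y[lag + i : n - lag + i + 1] for i in range(lag)])   # future
          P = P - P.mean(axis=0); F = F - F.mean(axis=0)
          N = len(P)
          Spp, Sff, Spf = P.T @ P / (N - 1), F.T @ F / (N - 1), P.T @ F / (N - 1)
          U, s, Vt = np.linalg.svd(inv_sqrt(Spp) @ Spf @ inv_sqrt(Sff))
          Z = P @ inv_sqrt(Spp) @ U[:, :n_cv]    # retained canonical variates
          return (Z ** 2).sum(axis=1)            # large values flag abnormal samples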

  16. Continuous-variable controlled-Z gate using an atomic ensemble

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang Mingfeng; Jiang Nianquan; Jin Qingli

    2011-06-15

    The continuous-variable controlled-Z gate is a canonical two-mode gate for universal continuous-variable quantum computation. It is considered as one of the most fundamental continuous-variable quantum gates. Here we present a scheme for realizing a continuous-variable controlled-Z gate between two optical beams using an atomic ensemble. The gate is performed by simply sending the two beams, propagating in two orthogonal directions, twice through a spin-squeezed atomic medium. Its fidelity can run up to one if the input atomic state is infinitely squeezed. Considering the noise effects due to atomic decoherence and light losses, we show that the observed fidelities of the scheme are still quite high within presently available techniques.

  17. Distillation of mixed-state continuous-variable entanglement by photon subtraction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang Shengli; Loock, Peter van

    2010-12-15

    We present a detailed theoretical analysis for the distillation of one copy of a mixed two-mode continuous-variable entangled state using beam splitters and coherent photon-detection techniques, including conventional on-off detectors and photon-number-resolving detectors. The initial Gaussian mixed-entangled states are generated by transmitting a two-mode squeezed state through a lossy bosonic channel, corresponding to the primary source of errors in current approaches to optical quantum communication. We provide explicit formulas to calculate the entanglement in terms of logarithmic negativity before and after distillation, including losses in the channel and the photon detection, and show that one-copy distillation is still possible even for losses near the typical fiber channel attenuation length. A lower bound for the transmission coefficient of the photon-subtraction beam splitter is derived, representing the minimal value that still allows the entanglement to be enhanced.

  18. Testing Web Applications with Mutation Analysis

    ERIC Educational Resources Information Center

    Praphamontripong, Upsorn

    2017-01-01

    Web application software uses new technologies that have novel methods for integration and state maintenance that amount to new control flow mechanisms and new variable scoping. While modern web development technologies enhance the capabilities of web applications, they introduce challenges that current testing techniques do not adequately test…

  19. Test systems for measuring ocular parameters and visual function in mice.

    PubMed

    Schaeffel, Frank

    2008-05-01

    New techniques are described to measure refractive state, pupil responses, corneal curvature, ocular dimensions and spatial vision in mice. These variables are important for studies on myopia development in mice, but they are also valuable for phenotyping mouse mutants and for pharmacological studies.

  20. Techniques for estimating flood-peak discharges from urban basins in Missouri

    USGS Publications Warehouse

    Becker, L.D.

    1986-01-01

    Techniques are defined for estimating the magnitude and frequency of future flood peak discharges of rainfall-induced runoff from small urban basins in Missouri. These techniques were developed from an initial analysis of flood records of 96 gaged sites in Missouri and adjacent states. Final regression equations are based on a balanced, representative sampling of 37 gaged sites in Missouri. This sample included 9 statewide urban study sites, 18 urban sites in St. Louis County, and 10 predominantly rural sites statewide. Short-term records were extended on the basis of long-term climatic records and use of a rainfall-runoff model. Linear least-squares regression analyses were used with log-transformed variables to relate flood magnitudes of selected recurrence intervals (dependent variables) to selected drainage basin indexes (independent variables). For gaged urban study sites within the State, the flood peak estimates are from the frequency curves defined from the synthesized long-term discharge records. Flood frequency estimates are made for ungaged sites by using regression equations that require determination of the drainage basin size and either the percentage of impervious area or a basin development factor. Alternative sets of equations are given for the 2-, 5-, 10-, 25-, 50-, and 100-yr recurrence interval floods. The average standard errors of estimate range from about 33% for the 2-yr flood to 26% for the 100-yr flood. The techniques for estimation are applicable to flood flows that are not significantly affected by storage caused by manmade activities. Flood peak discharge estimating equations are considered applicable for sites on basins draining approximately 0.25 to 40 sq mi. (Author's abstract)
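
    A minimal sketch of the log-transformed least-squares step, assuming invented example data and one plausible equation form (drainage area A and basin development factor BDF as explanatory variables):

      import numpy as np

      # hypothetical gaged-site data: area (sq mi), BDF, observed 100-yr peak (cfs)
      A = np.array([0.5, 2.1, 6.3, 14.0, 38.0])
      BDF = np.array([3.0, 7.0, 9.0, 5.0, 11.0])
      Q100 = np.array([420.0, 1350.0, 2900.0, 4100.0, 9800.0])

      # fit log10(Q) = b0 + b1*log10(A) + b2*BDF by linear least squares
      X = np.column_stack([np.ones_like(A), np.log10(A), BDF])
      b, *_ = np.linalg.lstsq(X, np.log10(Q100), rcond=None)
      print(b)   # fitted coefficients; Q = 10**b0 * A**b1 * 10**(b2*BDF)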

  1. Gaussian-based techniques for quantum propagation from the time-dependent variational principle: Formulation in terms of trajectories of coupled classical and quantum variables

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shalashilin, Dmitrii V.; Burghardt, Irene

    2008-08-28

    In this article, two coherent-state based methods of quantum propagation, namely, coupled coherent states (CCS) and Gaussian-based multiconfiguration time-dependent Hartree (G-MCTDH), are put on the same formal footing, using a derivation from a variational principle in Lagrangian form. By this approach, oscillations of the classical-like Gaussian parameters and oscillations of the quantum amplitudes are formally treated in an identical fashion. We also suggest a new approach denoted here as coupled coherent states trajectories (CCST), which completes the family of Gaussian-based methods. Using the same formalism for all related techniques allows their systematization and a straightforward comparison of their mathematical structure and cost.

  2. Caroline Furness and the Evolution of Visual Variable Star Observing

    NASA Astrophysics Data System (ADS)

    Larsen, Kristine

    2017-01-01

    An Introduction to the Study of Variable Stars by Dr. Caroline Ellen Furness (1869-1936), Director of the Vassar College Observatory, was published in October 1915. Issued in honor of the fiftieth anniversary of the founding of Vassar College, the work was meant to fill a void in the literature, namely as both an introduction to the topic of variable stars as well as a manual explaining how they should be observed and the resulting data analyzed. It was judged to be one of the hundred best books written by an American woman in the last hundred years at the 1933 World's Fair in Chicago. The book covers the relevant history of and background on types of variable stars, star charts, catalogs, and the magnitude scale, then describes observing techniques, including visual, photographic, and photoelectric photometry. The work finishes with a discussion of light curves and patterns of variability, with a special emphasis on eclipsing binaries and long period variables. Furness's work is therefore a valuable snapshot of the state of astronomical knowledge, technology, and observing techniques from a century ago. Furness's book and its reception in the scientific community are analyzed, and parallels with (and departures from) the current advice given by the AAVSO to beginning variable star observers today are highlighted.

  3. Revisiting Caroline Furness's An Introduction to the Study of Variable Stars on its Centenary (Poster abstract)

    NASA Astrophysics Data System (ADS)

    Larsen, K.

    2016-06-01

    (Abstract only) A century and one month ago (October 1915) Dr. Caroline Ellen Furness (1869-1936), Director of the Vassar College Observatory, published An Introduction to the Study of Variable Stars. Issued in honor of the fiftieth anniversary of the founding of Vassar College, the work was meant to fill a void in the literature, namely as both an introduction to the topic of variable stars and as a manual explaining how they should be observed and the resulting data analyzed. It was judged to be one of the hundred best books written by an American woman in the last hundred years at the 1933 World's Fair in Chicago. The book covers the relevant history of and background on types of variable stars, star charts, catalogs, and the magnitude scale, then describes observing techniques, including visual, photographic, and photoelectric photometry. The work finishes with a discussion of light curves and patterns of variability, with a special emphasis on eclipsing binaries and long period variables. Furness's work is a valuable snapshot of the state of astronomical knowledge, technology, and observing techniques from a century ago. This presentation will analyze both Furness's book and its reception in the scientific community, and draw parallels to current advice given to beginning variable star observers.

  4. SAINT: A combined simulation language for modeling man-machine systems

    NASA Technical Reports Server (NTRS)

    Seifert, D. J.

    1979-01-01

    SAINT (Systems Analysis of Integrated Networks of Tasks) is a network modeling and simulation technique for the design and analysis of complex man-machine systems. SAINT provides the conceptual framework for representing systems that consist of discrete task elements, continuous state variables, and interactions between them. It also provides a mechanism for combining human performance models and dynamic system behaviors in a single modeling structure. The SAINT technique is described and applications of SAINT are discussed.

  5. Infrared spectroscopy and upconversion luminescence behaviour of erbium doped yttrium (III) oxide phosphor

    NASA Astrophysics Data System (ADS)

    Dubey, Vikas; Tiwari, Ratnesh; Tamrakar, Raunak Kumar; Rathore, Gajendra Singh; Sharma, Chitrakant; Tiwari, Neha

    2014-11-01

    The paper reports the upconversion luminescence behaviour and infrared spectroscopic pattern of erbium-doped yttrium (III) oxide phosphor. The sample was synthesized by the solid state reaction method with variable concentrations of erbium (0.5-2.5 mol%); the conventional solid state method is eco-friendly and suitable for large-scale production. The prepared sample was characterized by the X-ray diffraction (XRD) technique. Structural analysis by XRD shows a cubic structure at all erbium concentrations, and no impurity phases were found as the Er3+ concentration was increased. Particle size was calculated by Scherrer's formula and varies from 67 nm to 120 nm. The surface morphology of the prepared phosphor was determined by the field emission gun scanning electron microscopy (FEGSEM) technique; it shows good connectivity between grains as well as some agglomerate formation in the sample. Functional group analysis was done by Fourier transform infrared (FTIR) analysis, which confirmed the formation of the Y2O3:Er3+ phosphor. The results indicate that Y2O3:Er3+ phosphors might have high upconversion efficiency because of their low vibrational energy. Under 980 nm laser excitation the sample shows intense green emission at 555 nm and orange emission at 590 nm; the green upconversion emission arises from the 2H11/2 → 4I15/2 and 4S3/2 → 4I15/2 transitions. Excited state absorption and energy transfer are discussed as possible upconversion mechanisms. The near-infrared luminescence spectra were also recorded. The upconversion luminescence intensity increases with erbium concentration up to 2 mol%, after which it decreases due to concentration quenching. Chromaticity coordinates of the emission peaks are evaluated by the Commission Internationale de l'Eclairage (CIE) technique; the dominant peak of the PL spectra corresponds to intense green emission, so the prepared phosphor may be useful for green light emitting diode (GLED) applications.
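
    For reference, Scherrer's formula used for the particle-size estimate is

      D = K * lambda / (beta * cos(theta)),

    where D is the crystallite size, K ≈ 0.9 is a shape factor, lambda the X-ray wavelength, beta the full width at half maximum of the diffraction peak in radians, and theta the Bragg angle.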

  6. Adjustment of prior constraints for an improved crop monitoring with the Earth Observation Land Data Assimilation System (EO-LDAS)

    NASA Astrophysics Data System (ADS)

    Truckenbrodt, Sina C.; Gómez-Dans, José; Stelmaszczuk-Górska, Martyna A.; Chernetskiy, Maxim; Schmullius, Christiane C.

    2017-04-01

    Throughout the past decades various satellite sensors have been launched that record reflectance in the optical domain and facilitate comprehensive monitoring of the vegetation-covered land surface from space. The interaction of photons with the canopy, leaves and soil that determines the spectrum of reflected sunlight can be simulated with radiative transfer models (RTMs). The inversion of RTMs permits the derivation of state variables such as leaf area index (LAI) and leaf chlorophyll content from top-of-canopy reflectance. Space-borne data are, however, insufficient for an unambiguous derivation of state variables and additional constraints are required to resolve this ill-posed problem. Data assimilation techniques permit the conflation of various information with due allowance for associated uncertainties. The Earth Observation Land Data Assimilation System (EO-LDAS) integrates RTMs into a dynamic process model that describes the temporal evolution of state variables. In addition, prior information is included to further constrain the inversion and enhance the state variable derivation. In previous studies on EO-LDAS, prior information was represented by temporally constant values for all investigated state variables, while information about their phenological evolution was neglected. Here, we examine to what extent the implementation of prior information reflecting the phenological variability improves the performance of EO-LDAS with respect to the monitoring of crops on the agricultural Gebesee test site (Central Germany). Various routines for the generation of prior information are tested. This involves the usage of data on state variables that was acquired in previous years as well as the application of phenological models. The performance of EO-LDAS with the newly implemented prior information is tested based on medium resolution satellite imagery (e.g., RapidEye REIS, Sentinel-2 MSI, Landsat-7 ETM+ and Landsat-8 OLI). The predicted state variables are validated against in situ data from the Gebesee test site that were acquired with a weekly to fortnightly resolution throughout the growing seasons of 2010, 2013, 2014 and 2016. Furthermore, the results are compared with the outcome of using constant values as prior information. In this presentation, the EO-LDAS scheme and results obtained from different prior information are presented.

  7. Analysis of control system responses for aircraft stability and efficient numerical techniques for real-time simulations

    NASA Astrophysics Data System (ADS)

    Stroe, Gabriela; Andrei, Irina-Carmen; Frunzulica, Florin

    2017-01-01

    The objectives of this paper are the study and implementation of both aerodynamic and propulsion models as linear interpolations using look-up tables in a database. The aerodynamic and propulsion dependencies on the state and control variables have been described by analytic polynomial models. Some simplifying hypotheses were made in the development of the nonlinear aircraft simulations; the choice of a particular technique depends on the desired accuracy of the solution and the computational effort to be expended. Each nonlinear simulation includes the full nonlinear dynamics of the bare airframe, with a scaled direct connection from pilot inputs to control surface deflections to provide adequate pilot control. The engine power dynamic response was modeled with an additional state equation as a first-order lag in the actual power level, with the commanded power level computed as a function of throttle position. The number of control inputs and engine power states varied depending on the number of control surfaces and aircraft engines. The set of coupled, nonlinear, first-order ordinary differential equations that comprise the simulation model can be represented by a vector differential equation. A linear time-invariant (LTI) system representing the aircraft dynamics for small perturbations about a reference trim condition is given by the state and output equations presented. The gradients are obtained numerically by perturbing each state and control input independently and recording the changes in the trimmed state and output equations, using the numerical technique of central finite differences. For a reference trim condition of straight and level flight, linearization results in two decoupled sets of linear, constant-coefficient differential equations for longitudinal and lateral/directional motion; the linearization is valid for small perturbations about the reference trim condition. Experimental aerodynamic and thrust data are used to model the applied aerodynamic and propulsion forces and moments for arbitrary states and controls. There is no closed-form solution to such problems, so the equations must be solved using numerical integration. Techniques for solving this initial value problem for ordinary differential equations are employed to obtain approximate solutions at discrete points along the aircraft state trajectory.
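
    In the central-finite-difference step described above, each Jacobian entry is approximated by perturbing one state or control variable at a time about the trim point (x_0, u_0):

      A_ij = [f_i(x_0 + delta*e_j, u_0) - f_i(x_0 - delta*e_j, u_0)] / (2*delta),
      B_ij = [f_i(x_0, u_0 + delta*e_j) - f_i(x_0, u_0 - delta*e_j)] / (2*delta),

    where e_j is the j-th unit vector and delta a small perturbation size, with analogous expressions built from the output equation y = g(x, u) for the C and D matrices.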

  8. New and Improved GLDAS and NLDAS Data Sets and Data Services at HDISC/NASA

    NASA Technical Reports Server (NTRS)

    Rui, Hualan; Beaudoing, Hiroko Kato; Mocko, David M.; Rodell, Matthew; Teng, William L.; Vollmer. Bruce

    2010-01-01

    Terrestrial hydrological variables are important in global hydrology, climate, and carbon cycle studies. Generating global fields of these variables, however, is still a challenge. The goal of a land data assimilation system (LDAS) is to ingest satellite- and ground-based observational data products, using advanced land surface modeling and data assimilation techniques, in order to generate optimal fields of land surface states and fluxes and, thereby, facilitate hydrology and climate modeling, research, and forecasting.

  9. Security of continuous-variable quantum key distribution against general attacks.

    PubMed

    Leverrier, Anthony; García-Patrón, Raúl; Renner, Renato; Cerf, Nicolas J

    2013-01-18

    We prove the security of Gaussian continuous-variable quantum key distribution with coherent states against arbitrary attacks in the finite-size regime. In contrast to previously known proofs of principle (based on the de Finetti theorem), our result is applicable in the practically relevant finite-size regime. This is achieved using a novel proof approach, which exploits phase-space symmetries of the protocols as well as the postselection technique introduced by Christandl, Koenig, and Renner [Phys. Rev. Lett. 102, 020504 (2009)].

  11. Deterministic quantum teleportation of photonic quantum bits by a hybrid technique.

    PubMed

    Takeda, Shuntaro; Mizuta, Takahiro; Fuwa, Maria; van Loock, Peter; Furusawa, Akira

    2013-08-15

    Quantum teleportation allows for the transfer of arbitrary unknown quantum states from a sender to a spatially distant receiver, provided that the two parties share an entangled state and can communicate classically. It is the essence of many sophisticated protocols for quantum communication and computation. Photons are an optimal choice for carrying information in the form of 'flying qubits', but the teleportation of photonic quantum bits (qubits) has been limited by experimental inefficiencies and restrictions. Main disadvantages include the fundamentally probabilistic nature of linear-optics Bell measurements, as well as the need either to destroy the teleported qubit or attenuate the input qubit when the detectors do not resolve photon numbers. Here we experimentally realize fully deterministic quantum teleportation of photonic qubits without post-selection. The key step is to make use of a hybrid technique involving continuous-variable teleportation of a discrete-variable, photonic qubit. When the receiver's feedforward gain is optimally tuned, the continuous-variable teleporter acts as a pure loss channel, and the input dual-rail-encoded qubit, based on a single photon, represents a quantum error detection code against photon loss and hence remains completely intact for most teleportation events. This allows for a faithful qubit transfer even with imperfect continuous-variable entangled states: for four qubits the overall transfer fidelities range from 0.79 to 0.82 and all of them exceed the classical limit of teleportation. Furthermore, even for a relatively low level of the entanglement, qubits are teleported much more efficiently than in previous experiments, albeit post-selectively (taking into account only the qubit subspaces), and with a fidelity comparable to the previously reported values.

  12. Determining the Spatial and Seasonal Variability in OM/OC Ratios across the U.S. Using Multiple Regression

    EPA Science Inventory

    Data from the Interagency Monitoring of Protected Visual Environments (IMPROVE) network are used to estimate organic mass to organic carbon (OM/OC) ratios across the United States by extending previously published multiple regression techniques. Our new methodology addresses com...

  13. A Model-Based Approach to Inventory Stratification

    Treesearch

    Ronald E. McRoberts

    2006-01-01

    Forest inventory programs report estimates of forest variables for areas of interest ranging in size from municipalities to counties to States and Provinces. Classified satellite imagery has been shown to be an effective source of ancillary data that, when used with stratified estimation techniques, contributes to increased precision with little corresponding increase...

  14. Economic and Demographic Factors Impacting Placement of Students with Autism

    ERIC Educational Resources Information Center

    Kurth, Jennifer A.; Mastergeorge, Ann M.; Paschall, Katherine

    2016-01-01

    Educational placement of students with autism is often associated with child factors, such as IQ and communication skills. However, variability in placement patterns across states suggests that other factors are at play. This study used hierarchical cluster analysis techniques to identify demographic, economic, and educational covariates…

  15. Space Shuttle propulsion parameter estimation using optimal estimation techniques

    NASA Technical Reports Server (NTRS)

    1983-01-01

    The fifth monthly progress report includes corrections and additions to the previously submitted reports. The addition of the SRB propellant thickness as a state variable is included with the associated partial derivatives. During this reporting period, preliminary results of the estimation program checkout were presented to NASA technical personnel.

  16. Experimental determination of Grüneisen gamma for two dissimilar materials (PEEK and Al 5083) via the shock-reverberation technique

    NASA Astrophysics Data System (ADS)

    Roberts, Andrew; Appleby-Thomas, Gareth; Hazell, Paul

    2011-06-01

    Following multiple loading events the resultant shock state of a material will lie away from the principal Hugoniot. Prediction of such states requires knowledge of a material's equation of state. The material-specific variable Grüneisen gamma (Γ) defines the shape of "off-Hugoniot" points in energy-volume-pressure space. Experimentally, the shock-reverberation technique (based on the principle of impedance matching) has previously allowed estimation of the first-order Grüneisen gamma term (Γ1) for a silicone elastomer. Here, this approach was employed to calculate Γ1 for two dissimilar materials, polyether ether ketone (PEEK) and the armour-grade aluminium alloy 5083 (H32), thereby allowing discussion of the limitations of this technique in the context of plate-impact experiments employing manganin stress gauges. Finally, the experimentally determined values for Γ1 were further refined by comparison between experimental records and numerical simulations carried out using the commercial code ANSYS Autodyn®.

  17. Monitoring the Variability of the Supermassive Black Hole at the Galactic Center

    NASA Astrophysics Data System (ADS)

    Chen, Zhuo; Do, Tuan; Witzel, Gunther; Ghez, Andrea; Schödel, Rainer; Gallego, Laly; Sitarski, Breann; Lu, Jessica; Becklin, Eric; Dehghanfar, Arezu; Gautam, Abhimat; Hees, Aurelien; Jia, Siyao; Matthews, Keith; Morris, Mark

    2018-01-01

    The variability of the supermassive black hole at the center of the Galaxy, Sgr A*, has been widely studied over the years in a variety of wavelengths. However, near-infrared studies of the variability of Sgr A* only began in 2003 with the then new technique of Adaptive Optics (AO), as speckle shift-and-add data did not reach sufficient depth to detect Sgr A* (K < 16). We apply our new speckle holography approach to the analysis of data obtained between 1995 and 2005 with the speckle imaging technique (reaching K < 17) to re-examine the variability of Sgr A* in an effort to explore the Sgr A* accretion flow over a time baseline of 20 years. We find that the average magnitude of Sgr A* from 1995 to 2005 (K = 16.49 +/- 0.086) agrees very well with the average AO magnitude from 2005-2007 (Kp = 16.3). Our detections of Sgr A* are the first reported prior to 2002. In particular, a significant increase of power in the PSD between the main correlation timescale of ~300 min and 20 years can be excluded. This renders 300 min the dominant timescale and sets the variability state of Sgr A* since 1995 apart from the states discussed in the context of the X-ray echoes in the surrounding molecular clouds (for which extended bright periods of several years are required). Finally, we note that the 2001 periapse passage of the extended, dusty object G1, a source similar to G2, had no apparent effect on the emissivity of the accretion flow onto Sgr A*.

  18. NLSCIDNT user's guide: maximum likelihood parameter identification computer program with nonlinear rotorcraft model

    NASA Technical Reports Server (NTRS)

    1979-01-01

    A nonlinear, maximum likelihood, parameter identification computer program (NLSCIDNT) is described which evaluates rotorcraft stability and control coefficients from flight test data. The optimal estimates of the parameters (stability and control coefficients) are determined (identified) by minimizing the negative log likelihood cost function. The minimization technique is the Levenberg-Marquardt method, which behaves like the steepest descent method when it is far from the minimum and behaves like the modified Newton-Raphson method when it is nearer the minimum. Twenty-one states and 40 measurement variables are modeled, and any subset may be selected. States which are not integrated may be fixed at an input value, or time history data may be substituted for the state in the equations of motion. Any aerodynamic coefficient may be expressed as a nonlinear polynomial function of selected 'expansion variables'.
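
    Since the abstract turns on the Levenberg-Marquardt interpolation between steepest descent and Newton-like steps, a minimal damped-least-squares sketch may help; the exponential test problem, damping schedule, and all values below are illustrative assumptions, not NLSCIDNT's implementation:

```python
import numpy as np

def levenberg_marquardt(residual, jac, theta, n_iter=50, lam=1e-2):
    """Minimal LM sketch: damped Gauss-Newton steps on a residual vector.

    Large lam gives steepest-descent-like steps far from the minimum; small
    lam gives Newton-like steps near it, as described in the abstract.
    """
    for _ in range(n_iter):
        r, J = residual(theta), jac(theta)
        A = J.T @ J + lam * np.eye(len(theta))
        step = np.linalg.solve(A, -J.T @ r)
        trial = theta + step
        if np.sum(residual(trial) ** 2) < np.sum(r ** 2):
            theta, lam = trial, lam * 0.5   # accept step, trust the model more
        else:
            lam *= 2.0                      # reject step, damp more heavily
    return theta

# Fit y = a * exp(b * t) to synthetic data (for Gaussian noise, minimizing the
# sum of squared residuals minimizes the negative log likelihood).
t = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(-1.3 * t)
res = lambda p: p[0] * np.exp(p[1] * t) - y
jac = lambda p: np.stack([np.exp(p[1] * t), p[0] * t * np.exp(p[1] * t)], axis=1)
print(levenberg_marquardt(res, jac, np.array([1.0, 0.0])))  # approx. [2.0, -1.3]
```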

  19. Exploring the Connection Between Sampling Problems in Bayesian Inference and Statistical Mechanics

    NASA Technical Reports Server (NTRS)

    Pohorille, Andrew

    2006-01-01

    The Bayesian and statistical mechanical communities often share the same objective in their work - estimating and integrating probability distribution functions (pdfs) describing stochastic systems, models or processes. Frequently, these pdfs are complex functions of random variables exhibiting multiple, well separated local minima. Conventional strategies for sampling such pdfs are inefficient, sometimes leading to an apparent non-ergodic behavior. Several recently developed techniques for handling this problem have been successfully applied in statistical mechanics. In the multicanonical and Wang-Landau Monte Carlo (MC) methods, the correct pdfs are recovered from uniform sampling of the parameter space by iteratively establishing proper weighting factors connecting these distributions. Trivial generalizations allow for sampling from any chosen pdf. The closely related transition matrix method relies on estimating transition probabilities between different states. All these methods proved to generate estimates of pdfs with high statistical accuracy. In another MC technique, parallel tempering, several random walks, each corresponding to a different value of a parameter (e.g. "temperature"), are generated and occasionally exchanged using the Metropolis criterion. This method can be considered as a statistically correct version of simulated annealing. An alternative approach is to represent the set of independent variables as a Hamiltonian system. Considerable progress has been made in understanding how to ensure that the system obeys the equipartition theorem or, equivalently, that coupling between the variables is correctly described. Then a host of techniques developed for dynamical systems can be used. Among them, probably the most powerful is the Adaptive Biasing Force method, in which thermodynamic integration and biased sampling are combined to yield very efficient estimates of pdfs. The third class of methods deals with transitions between states described by rate constants. These problems are isomorphic with chemical kinetics problems. Recently, several efficient techniques for this purpose have been developed based on the approach originally proposed by Gillespie. Although the utility of the techniques mentioned above for Bayesian problems has not been determined, further research along these lines is warranted.
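
    Of the methods surveyed, parallel tempering is compact enough to sketch; the bimodal target, temperature ladder, and swap schedule below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_pdf(x):
    # Bimodal target: two well-separated unit-variance Gaussians.
    return np.logaddexp(-0.5 * (x - 4.0) ** 2, -0.5 * (x + 4.0) ** 2)

temps = [1.0, 2.0, 4.0, 8.0]          # one walker per "temperature"
x = np.zeros(len(temps))
samples = []
for step in range(20000):
    # Metropolis move within each chain, sampling pdf**(1/T).
    for i, T in enumerate(temps):
        prop = x[i] + rng.normal(0.0, 1.0)
        if np.log(rng.random()) < (log_pdf(prop) - log_pdf(x[i])) / T:
            x[i] = prop
    # Occasionally propose swapping neighboring chains (Metropolis criterion).
    if step % 10 == 0:
        i = rng.integers(len(temps) - 1)
        delta = (1 / temps[i] - 1 / temps[i + 1]) * (log_pdf(x[i + 1]) - log_pdf(x[i]))
        if np.log(rng.random()) < delta:
            x[i], x[i + 1] = x[i + 1], x[i]
    samples.append(x[0])
print(np.mean(np.array(samples) > 0.0))  # near 0.5 once both modes are visited
```

    A single chain at the base temperature would rarely cross between modes this far apart; the hot chains explore freely and the swaps carry that mobility down, which is the point made above.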

  20. Application of Large-Scale Database-Based Online Modeling to Plant State Long-Term Estimation

    NASA Astrophysics Data System (ADS)

    Ogawa, Masatoshi; Ogai, Harutoshi

    Recently, attention has been drawn to local modeling techniques based on a new idea called “Just-In-Time (JIT) modeling”. To apply “JIT modeling” online to large databases, “Large-scale database-based Online Modeling (LOM)” has been proposed. LOM is a technique that makes the retrieval of neighboring data more efficient by using both “stepwise selection” and quantization. In order to predict the long-term state of the plant without using future data of manipulated variables, an Extended Sequential Prediction method of LOM (ESP-LOM) has been proposed. In this paper, the LOM and the ESP-LOM are introduced.

  1. Locating CVBEM collocation points for steady state heat transfer problems

    USGS Publications Warehouse

    Hromadka, T.V.

    1985-01-01

    The Complex Variable Boundary Element Method or CVBEM provides a highly accurate means of developing numerical solutions to steady state two-dimensional heat transfer problems. The numerical approach exactly solves the Laplace equation and satisfies the boundary conditions at specified points on the boundary by means of collocation. The accuracy of the approximation depends upon the nodal point distribution specified by the numerical analyst. In order to develop subsequent, refined approximation functions, four techniques for selecting additional collocation points are presented. The techniques are compared as to the governing theory, representation of the error of approximation on the problem boundary, the computational costs, and the ease of use by the numerical analyst. © 1985.

  2. Quantitative Tomography for Continuous Variable Quantum Systems

    NASA Astrophysics Data System (ADS)

    Landon-Cardinal, Olivier; Govia, Luke C. G.; Clerk, Aashish A.

    2018-03-01

    We present a continuous variable tomography scheme that reconstructs the Husimi Q function (Wigner function) by Lagrange interpolation, using measurements of the Q function (Wigner function) at the Padua points, conjectured to be optimal sampling points for two dimensional reconstruction. Our approach drastically reduces the number of measurements required compared to using equidistant points on a regular grid, although reanalysis of such experiments is possible. The reconstruction algorithm produces a reconstructed function with exponentially decreasing error and quasilinear runtime in the number of Padua points. Moreover, using the interpolating polynomial of the Q function, we present a technique to directly estimate the density matrix elements of the continuous variable state, with only a linear propagation of input measurement error. Furthermore, we derive a state-independent analytical bound on this error, such that our estimate of the density matrix is accompanied by a measure of its uncertainty.

  3. Remote creation of hybrid entanglement between particle-like and wave-like optical qubits

    NASA Astrophysics Data System (ADS)

    Morin, Olivier; Huang, Kun; Liu, Jianli; Le Jeannic, Hanna; Fabre, Claude; Laurat, Julien

    2014-07-01

    The wave-particle duality of light has led to two different encodings for optical quantum information processing. Several approaches have emerged based either on particle-like discrete-variable states (that is, finite-dimensional quantum systems) or on wave-like continuous-variable states (that is, infinite-dimensional systems). Here, we demonstrate the generation of entanglement between optical qubits of these different types, located at distant places and connected by a lossy channel. Such hybrid entanglement, which is a key resource for a variety of recently proposed schemes, including quantum cryptography and computing, enables information to be converted from one Hilbert space to the other via teleportation and therefore the connection of remote quantum processors based upon different encodings. Beyond its fundamental significance for the exploration of entanglement and its possible instantiations, our optical circuit holds promise for implementations of heterogeneous networks, where discrete- and continuous-variable operations and techniques can be efficiently combined.

  4. A Method to Achieve High Fidelity in Internet-Distributed Hardware-in-the-Loop Simulation

    DTIC Science & Technology

    2012-08-01

    2008. [17] M. Compere, J. Goodell, M. Simon, W. Smith, and M. Brudnak, "Robust control techniques enabling duty cycle experiments utilizing a 6-DOF ... 01-3077, 2006. [18] J. Goodell, M. Compere, M. Simon, W. Smith, R. Wright, and M. Brudnak, "Robust control techniques for state tracking in the presence of variable time delays," SAE Technical Paper, 2006-01-1163, 2006. [19] M. Brudnak, M. Pozolo, V. Paul, S. Mohammad, W. Smith, M. Compere, J ...

  5. Remote sensing using MIMO systems

    DOEpatents

    Bikhazi, Nicolas; Young, William F; Nguyen, Hung D

    2015-04-28

    A technique for sensing a moving object within a physical environment using a MIMO communication link includes generating a channel matrix based upon channel state information of the MIMO communication link. The physical environment operates as a communication medium through which communication signals of the MIMO communication link propagate between a transmitter and a receiver. A spatial information variable is generated for the MIMO communication link based on the channel matrix. The spatial information variable includes spatial information about the moving object within the physical environment. A signature for the moving object is generated based on values of the spatial information variable accumulated over time. The moving object is identified based upon the signature.
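
    The patent text leaves the "spatial information variable" abstract; one plausible stand-in (an assumption, not the patented computation) is the dominant singular vector of the channel matrix, tracked over time:

```python
import numpy as np

rng = np.random.default_rng(1)

def spatial_variable(H):
    """A candidate spatial information variable: the dominant right singular
    vector of the channel matrix, which shifts as objects in the environment
    perturb the multipath structure."""
    _, _, Vh = np.linalg.svd(H)
    return Vh[0]

# Accumulate a signature over time from a sequence of channel estimates.
H_series = [rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
            for _ in range(50)]
signature = np.array([spatial_variable(H) for H in H_series])
print(signature.shape)  # (50, 4): per-snapshot spatial vectors over time
```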

  6. Pseudo-steady-state non-Gaussian Einstein-Podolsky-Rosen steering of massive particles in pumped and damped Bose-Hubbard dimers

    NASA Astrophysics Data System (ADS)

    Olsen, M. K.

    2017-02-01

    We propose and analyze a pumped and damped Bose-Hubbard dimer as a source of continuous-variable Einstein-Podolsky-Rosen (EPR) steering with non-Gaussian statistics. We use the approximate truncated Wigner and the exact positive-P representations to calculate and compare the predictions for intensities, second-order quantum correlations, and third- and fourth-order cumulants. We find agreement for intensities and the products of inferred quadrature variances, which indicate that states demonstrating the EPR paradox are present. We find clear signals of non-Gaussianity in the quantum states of the modes from both the approximate and exact techniques, with quantitative differences in their predictions. Our proposed experimental configuration is extrapolated from current experimental techniques and adds another apparatus to the current toolbox of quantum atom optics.

  7. Stator and Rotor Flux Based Deadbeat Direct Torque Control of Induction Machines

    NASA Technical Reports Server (NTRS)

    Kenny, Barbara H.; Lorenz, Robert D.

    2001-01-01

    A new, deadbeat type of direct torque control is proposed, analyzed, and experimentally verified in this paper. The control is based on stator and rotor flux as state variables. This choice of state variables allows a graphical representation which is transparent and insightful. The graphical solution shows the effects of realistic considerations such as voltage and current limits. A position and speed sensorless implementation of the control, based on the self-sensing signal injection technique, is also demonstrated experimentally for low speed operation. The paper first develops the new, deadbeat DTC methodology and graphical representation of the new algorithm. It then evaluates feasibility via simulation and experimentally demonstrates performance of the new method with a laboratory prototype including the sensorless methods.
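
    The deadbeat idea is easiest to see in its generic discrete-time form (a sketch of the concept only; the paper's machine model, flux state variables, and inverter constraints are not reproduced, and the matrices below are invented):

```python
import numpy as np

# Generic deadbeat idea on a discrete-time linear model x[k+1] = A x[k] + B u[k]:
# solve for the input that drives the state to its reference in one sample.
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[0.5, 0.0],
              [0.0, 0.4]])

def deadbeat_input(x, x_ref):
    """Input u such that A @ x + B @ u == x_ref (B assumed invertible)."""
    return np.linalg.solve(B, x_ref - A @ x)

x = np.array([1.0, -1.0])
x_ref = np.array([0.2, 0.3])   # e.g. a commanded stator/rotor flux pair
u = deadbeat_input(x, x_ref)
print(A @ x + B @ u)           # equals x_ref after a single step
```

    In the machine-control setting the computed input is a stator voltage vector, and the graphical solution discussed above is what handles the voltage and current limits that this unconstrained inverse ignores.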

  8. Stator and Rotor Flux Based Deadbeat Direct Torque Control of Induction Machines

    NASA Technical Reports Server (NTRS)

    Kenny, Barbara H.; Lorenz, Robert D.

    2003-01-01

    A new, deadbeat type of direct torque control is proposed, analyzed and experimentally verified in this paper. The control is based on stator and rotor flux as state variables. This choice of state variables allows a graphical representation which is transparent and insightful. The graphical solution shows the effects of realistic considerations such as voltage and current limits. A position and speed sensorless implementation of the control, based on the self-sensing signal injection technique, is also demonstrated experimentally for low speed operation. The paper first develops the new, deadbeat DTC methodology and graphical representation of the new algorithm. It then evaluates feasibility via simulation and experimentally demonstrates performance of the new method with a laboratory prototype including the sensorless methods.

  9. Stator and Rotor Flux Based Deadbeat Direct Torque Control of Induction Machines. Revision 1

    NASA Technical Reports Server (NTRS)

    Kenny, Barbara H.; Lorenz, Robert D.

    2002-01-01

    A new, deadbeat type of direct torque control is proposed, analyzed, and experimentally verified in this paper. The control is based on stator and rotor flux as state variables. This choice of state variables allows a graphical representation which is transparent and insightful. The graphical solution shows the effects of realistic considerations such as voltage and current limits. A position and speed sensorless implementation of the control, based on the self-sensing signal injection technique, is also demonstrated experimentally for low speed operation. The paper first develops the new, deadbeat DTC methodology and graphical representation of the new algorithm. It then evaluates feasibility via simulation and experimentally demonstrates performance of the new method with a laboratory prototype including the sensorless methods.

  10. Long-distance continuous-variable quantum key distribution using non-Gaussian state-discrimination detection

    NASA Astrophysics Data System (ADS)

    Liao, Qin; Guo, Ying; Huang, Duan; Huang, Peng; Zeng, Guihua

    2018-02-01

    We propose a long-distance continuous-variable quantum key distribution (CVQKD) with a four-state protocol using non-Gaussian state-discrimination detection. A photon subtraction operation, deployed at the transmitter, is used for splitting the signal required for generating the non-Gaussian operation to lengthen the maximum transmission distance of the CVQKD. An improved state-discrimination detector, which can be regarded as an optimized quantum measurement that allows the discrimination of nonorthogonal coherent states beyond the standard quantum limit, is then applied at the receiver to codetermine the measurement result with the conventional coherent detector. By exploiting the multiplexing technique, the resulting signals can be simultaneously transmitted through an untrusted quantum channel, and subsequently sent to the state-discrimination detector and coherent detector, respectively. Security analysis shows that the proposed scheme can lengthen the maximum transmission distance up to hundreds of kilometers. Furthermore, by taking the finite-size effect and composable security into account we obtain the tightest bound of the secure distance, which is more practical than that obtained in the asymptotic limit.

  11. Comparing five modelling techniques for predicting forest characteristics

    Treesearch

    Gretchen G. Moisen; Tracey S. Frescino

    2002-01-01

    Broad-scale maps of forest characteristics are needed throughout the United States for a wide variety of forest land management applications. Inexpensive maps can be produced by modelling forest class and structure variables collected in nationwide forest inventories as functions of satellite-based information. But little work has been directed at comparing modelling...

  12. SEASONAL VARIATIONS OF NITRIC OXIDE FLUX FROM AGRICULTURAL SOILS IN THE SOUTHEAST UNITED STATES

    EPA Science Inventory

    Fluxes of nitric oxide (NO) were measured from the summer of 1994 to the spring of 1995 from an intensively managed agricultural soil using a dynamic flow through chamber technique in order to study the seasonal variability in the emissions of NO. The measurements were made on a ...

  13. Supervision of dynamic systems: Monitoring, decision-making and control

    NASA Technical Reports Server (NTRS)

    White, T. N.

    1982-01-01

    Effects of task variables on the performance of the human supervisor by means of modelling techniques are discussed. The task variables considered are: the dynamics of the system, the task to be performed, the environmental disturbances and the observation noise. A relationship between task variables and parameters of a supervisory model is assumed. The model consists of three parts: (1) The observer part is thought to be a full order optimal observer, (2) the decision-making part is stated as a set of decision rules, and (3) the controller part is given by a control law. The observer part generates, on the basis of the system output and the control actions, an estimate of the state of the system and its associated variance. The outputs of the observer part are then used by the decision-making part to determine the instants in time of the observation actions on the one hand and the control actions on the other. The controller part makes use of the estimated state to derive the amplitude(s) of the control action(s).

  14. Markov state modeling of sliding friction

    NASA Astrophysics Data System (ADS)

    Pellegrini, F.; Landes, François P.; Laio, A.; Prestipino, S.; Tosatti, E.

    2016-11-01

    Markov state modeling (MSM) has recently emerged as one of the key techniques for the discovery of collective variables and the analysis of rare events in molecular simulations. In particular in biochemistry this approach is successfully exploited to find the metastable states of complex systems and their evolution in thermal equilibrium, including rare events, such as a protein undergoing folding. The physics of sliding friction and its atomistic simulations under external forces constitute a nonequilibrium field where relevant variables are in principle unknown and where a proper theory describing violent and rare events such as stick-slip is still lacking. Here we show that MSM can be extended to the study of nonequilibrium phenomena and in particular friction. The approach is benchmarked on the Frenkel-Kontorova model, used here as a test system whose properties are well established. We demonstrate that the method allows the least prejudiced identification of a minimal basis of natural microscopic variables necessary for the description of the forced dynamics of sliding, through their probabilistic evolution. The steps necessary for the application to realistic frictional systems are highlighted.
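
    The core MSM computation, estimating a row-stochastic transition matrix from a discretized trajectory and reading relaxation timescales off its eigenvalues, can be sketched briefly (the toy stick-slip-like three-state trajectory is invented for the example):

```python
import numpy as np

def msm_transition_matrix(traj, n_states, lag=1):
    """Row-stochastic transition matrix estimated from a discrete trajectory."""
    C = np.zeros((n_states, n_states))
    for a, b in zip(traj[:-lag], traj[lag:]):
        C[a, b] += 1.0
    return C / np.maximum(C.sum(axis=1, keepdims=True), 1.0)

# Toy 3-state trajectory with two 'sticking' states and a rarer 'slip' state.
rng = np.random.default_rng(2)
P_true = {0: [0.98, 0.01, 0.01], 1: [0.02, 0.96, 0.02], 2: [0.05, 0.05, 0.90]}
traj = [0]
for _ in range(5000):
    traj.append(rng.choice(3, p=P_true[int(traj[-1])]))
T = msm_transition_matrix(traj, 3)
evals = np.sort(np.abs(np.linalg.eigvals(T)))[::-1]
print(T.round(2))
print((-1.0 / np.log(evals[1:])).round(1))  # implied relaxation timescales
```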

  15. An entropy-variables-based formulation of residual distribution schemes for non-equilibrium flows

    NASA Astrophysics Data System (ADS)

    Garicano-Mena, Jesús; Lani, Andrea; Degrez, Gérard

    2018-06-01

    In this paper we present an extension of Residual Distribution techniques for the simulation of compressible flows in non-equilibrium conditions. The latter are modeled by means of a state-of-the-art multi-species and two-temperature model. An entropy-based variable transformation that symmetrizes the projected advective Jacobian for such a thermophysical model is introduced. Moreover, the transformed advection Jacobian matrix presents a block diagonal structure, with mass-species and electronic-vibrational energy being completely decoupled from the momentum and total energy sub-system. The advantageous structure of the transformed advective Jacobian can be exploited by contour-integration-based Residual Distribution techniques: established schemes that operate on dense matrices can be substituted by the same scheme operating on the momentum-energy subsystem matrix and repeated application of a scalar scheme to the mass-species and electronic-vibrational energy terms. Finally, the performance gain of the symmetrizing-variables formulation is quantified on a selection of representative test cases, ranging from subsonic to hypersonic, in inviscid or viscous conditions.

  16. Forecasting conditional climate-change using a hybrid approach

    USGS Publications Warehouse

    Esfahani, Akbar Akbari; Friedel, Michael J.

    2014-01-01

    A novel approach is proposed to forecast the likelihood of climate-change across spatial landscape gradients. This hybrid approach involves reconstructing past precipitation and temperature using the self-organizing map technique; determining quantile trends in the climate-change variables by quantile regression modeling; and computing conditional forecasts of climate-change variables based on self-similarity in quantile trends using the fractionally differenced auto-regressive integrated moving average technique. The proposed modeling approach is applied to states (Arizona, California, Colorado, Nevada, New Mexico, and Utah) in the southwestern U.S., where conditional forecasts of climate-change variables are evaluated against recent (2012) observations, evaluated at a future time period (2030), and evaluated as future trends (2009–2059). These results have broad economic, political, and social implications because they quantify uncertainty in climate-change forecasts affecting various sectors of society. Another benefit of the proposed hybrid approach is that it can be extended to any spatiotemporal scale provided self-similarity exists.
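
    The quantile-trend step can be illustrated with a small pinball-loss fit (a generic sketch; the authors use quantile regression inside a larger SOM/ARFIMA pipeline, and the synthetic record below is invented):

```python
import numpy as np
from scipy.optimize import minimize

def quantile_trend(t, y, q):
    """Intercept and slope of the q-th quantile trend line via pinball loss."""
    def pinball(params):
        a, b = params
        r = y - (a + b * t)
        return np.mean(np.maximum(q * r, (q - 1.0) * r))
    return minimize(pinball, x0=[np.median(y), 0.0], method="Nelder-Mead").x

rng = np.random.default_rng(5)
t = np.arange(60.0)                          # years of a reconstructed record
y = 0.02 * t + rng.normal(0.0, 0.5, t.size)  # e.g. a temperature anomaly
for q in (0.1, 0.5, 0.9):
    print(q, quantile_trend(t, y, q).round(3))  # trends across quantiles
```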

  17. Selection of a Geostatistical Method to Interpolate Soil Properties of the State Crop Testing Fields using Attributes of a Digital Terrain Model

    NASA Astrophysics Data System (ADS)

    Sahabiev, I. A.; Ryazanov, S. S.; Kolcova, T. G.; Grigoryan, B. R.

    2018-03-01

    The three most common techniques to interpolate soil properties at a field scale—ordinary kriging (OK), regression kriging with multiple linear regression drift model (RK + MLR), and regression kriging with principal component regression drift model (RK + PCR)—were examined. The results of the performed study were compiled into an algorithm of choosing the most appropriate soil mapping technique. Relief attributes were used as the auxiliary variables. When spatial dependence of a target variable was strong, the OK method showed more accurate interpolation results, and the inclusion of the auxiliary data resulted in an insignificant improvement in prediction accuracy. According to the algorithm, the RK + PCR method effectively eliminates multicollinearity of explanatory variables. However, if the number of predictors is less than ten, the probability of multicollinearity is reduced, and application of the PCR becomes irrational. In that case, the multiple linear regression should be used instead.
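
    The paper's decision logic can be paraphrased as a tiny helper; the 0.25 nugget-to-sill threshold for "strong" spatial dependence is a common rule of thumb assumed here, and the ten-predictor cutoff is taken from the abstract:

```python
def choose_interpolator(nugget_to_sill, n_predictors):
    """Heuristic paraphrase of the algorithm described in the abstract."""
    if nugget_to_sill < 0.25:  # strong spatial dependence: auxiliary data adds little
        return "ordinary kriging (OK)"
    if n_predictors < 10:      # multicollinearity unlikely, so PCR is not warranted
        return "regression kriging + multiple linear regression (RK + MLR)"
    return "regression kriging + principal component regression (RK + PCR)"

print(choose_interpolator(0.10, 6))   # -> OK
print(choose_interpolator(0.60, 15))  # -> RK + PCR
```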

  18. Continuous Variable Cluster State Generation over the Optical Spatial Mode Comb

    DOE PAGES

    Pooser, Raphael C.; Jing, Jietai

    2014-10-20

    One-way quantum computing uses single-qubit projective measurements performed on a cluster state (a highly entangled state of multiple qubits) in order to enact quantum gates. The model is promising due to its potential scalability; the cluster state may be produced at the beginning of the computation and operated on over time. Continuous variables (CV) offer another potential benefit in the form of deterministic entanglement generation. This determinism can lead to robust cluster states and scalable quantum computation. Recent demonstrations of CV cluster states have made great strides on the path to scalability utilizing either time or frequency multiplexing in optical parametric oscillators (OPO) both above and below threshold. The techniques relied on a combination of entangling operators and beam splitter transformations. Here we show that an analogous transformation exists for amplifiers with Gaussian input states operating on multiple spatial modes. By judicious selection of local oscillators (LOs), the spatial mode distribution is analogous to the optical frequency comb consisting of axial modes in an OPO cavity. We outline an experimental system that generates cluster states across the spatial frequency comb which can also scale the amount of quantum noise reduction to potentially larger than in other systems.

  19. Improvement of Storm Forecasts Using Gridded Bayesian Linear Regression for Northeast United States

    NASA Astrophysics Data System (ADS)

    Yang, J.; Astitha, M.; Schwartz, C. S.

    2017-12-01

    Bayesian linear regression (BLR) is a post-processing technique in which regression coefficients are derived and used to correct raw forecasts based on pairs of observation-model values. This study presents the development and application of a gridded Bayesian linear regression (GBLR) as a new post-processing technique to improve numerical weather prediction (NWP) of rain and wind storm forecasts over the northeast United States. Ten controlled variables produced from ten ensemble members of the National Center for Atmospheric Research (NCAR) real-time prediction system are used for a GBLR model. In the GBLR framework, leave-one-storm-out cross-validation is utilized to study the performances of the post-processing technique in a database composed of 92 storms. To estimate the regression coefficients of the GBLR, optimization procedures that minimize the systematic and random error of predicted atmospheric variables (wind speed, precipitation, etc.) are implemented for the modeled-observed pairs of training storms. The regression coefficients calculated for meteorological stations of the National Weather Service are interpolated back to the model domain. An analysis of forecast improvements based on error reductions during the storms will demonstrate the value of the GBLR approach. This presentation will also illustrate how the variances are optimized for the training partition in GBLR and discuss the verification strategy for grid points where no observations are available. The new post-processing technique is successful in improving wind speed and precipitation storm forecasts using past event-based data and has the potential to be implemented in real-time.
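
    The Bayesian linear regression at the heart of the scheme has a standard closed form worth recording (a generic sketch with a Gaussian prior; the forecast/observation pairs and prior scale below are invented, and the gridding/interpolation of coefficients is omitted):

```python
import numpy as np

def blr_coefficients(X, y, alpha=1.0, sigma2=1.0):
    """Posterior mean and covariance of regression coefficients under a
    Gaussian prior N(0, alpha**-1 I) and observation noise variance sigma2."""
    A = X.T @ X / sigma2 + alpha * np.eye(X.shape[1])
    cov = np.linalg.inv(A)
    mean = cov @ X.T @ y / sigma2
    return mean, cov

# Correct raw wind-speed forecasts against verifying observations.
raw = np.array([6.1, 7.8, 5.2, 9.4, 8.0])      # model forecasts at one site
obs = np.array([5.0, 6.9, 4.6, 8.1, 7.2])      # matching observations
X = np.column_stack([np.ones_like(raw), raw])  # intercept + raw forecast
coef, _ = blr_coefficients(X, obs)
print(coef)  # corrected forecast = coef[0] + coef[1] * raw_forecast
```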

  20. An improved state-parameter analysis of ecosystem models using data assimilation

    USGS Publications Warehouse

    Chen, M.; Liu, S.; Tieszen, L.L.; Hollinger, D.Y.

    2008-01-01

    Much of the effort spent in developing data assimilation methods for carbon dynamics analysis has focused on estimating optimal values for either model parameters or state variables. The main weakness of estimating parameter values alone (i.e., without considering state variables) is that all errors from input, output, and model structure are attributed to model parameter uncertainties. On the other hand, the accuracy of estimating state variables may be lowered if the temporal evolution of parameter values is not incorporated. This research develops a smoothed ensemble Kalman filter (SEnKF) by combining the ensemble Kalman filter with a kernel smoothing technique. SEnKF has the following characteristics: (1) it estimates model states and parameters simultaneously by concatenating unknown parameters and state variables into a joint state vector; (2) it mitigates dramatic, sudden changes of parameter values in the parameter sampling and evolution process, and controls the narrowing of parameter variance (which results in filter divergence) by adjusting the smoothing factor in the kernel smoothing algorithm; (3) it assimilates data into the model recursively and thus detects possible time variation of parameters; and (4) it properly addresses the various sources of uncertainty stemming from input, output and parameter uncertainties. The SEnKF is tested by assimilating observed fluxes of carbon dioxide and environmental driving factor data from an AmeriFlux forest station located near Howland, Maine, USA, into a partition eddy flux model. Our analysis demonstrates that model parameters, such as light use efficiency, respiration coefficients, minimum and optimum temperatures for photosynthetic activity, and others, are highly constrained by eddy flux data at daily-to-seasonal time scales. The SEnKF stabilizes parameter values quickly regardless of the initial values of the parameters. Potential ecosystem light use efficiency demonstrates a strong seasonality. Results show that the simultaneous parameter estimation procedure significantly improves model predictions. Results also show that the SEnKF can dramatically reduce the variance in state variables stemming from the uncertainty of parameters and driving variables. The SEnKF is a robust and effective algorithm in evaluating and developing ecosystem models and in improving the understanding and quantification of carbon cycle parameters and processes. © 2008 Elsevier B.V.
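
    A minimal sketch of the joint state-parameter analysis step may clarify the idea (the one-parameter/one-state toy problem, the shrinkage-only stand-in for the paper's kernel smoothing, and all numbers are assumptions, not the authors' SEnKF):

```python
import numpy as np

rng = np.random.default_rng(3)

def enkf_update(Z, y, H, obs_var, a=0.95):
    """One analysis step on a joint ensemble Z (rows: [parameter, state]).

    a : smoothing factor shrinking parameters toward their ensemble mean,
        a crude stand-in for kernel smoothing, which counteracts variance
        collapse and sudden parameter jumps.
    """
    n_par = 1                                    # first row holds the parameter
    Z = Z.copy()
    zbar = Z[:n_par].mean(axis=1, keepdims=True)
    Z[:n_par] = a * Z[:n_par] + (1.0 - a) * zbar
    P = np.cov(Z)                                # joint ensemble covariance
    K = P @ H.T / (H @ P @ H.T + obs_var)        # Kalman gain (scalar obs)
    perturbed = y + rng.normal(0.0, np.sqrt(obs_var), Z.shape[1])
    return Z + K[:, None] * (perturbed - H @ Z)

# Joint vector [k, x]: a decay-rate parameter k and a carbon-stock state x.
Z = np.vstack([rng.normal(0.3, 0.1, 100),    # parameter ensemble
               rng.normal(10.0, 2.0, 100)])  # state ensemble
H = np.array([0.0, 1.0])                     # only the state is observed
Z = enkf_update(Z, y=9.2, H=H, obs_var=0.25)
print(Z.mean(axis=1))                        # updated parameter and state
```

    In a full cycle the forecast step propagates the joint ensemble through the ecosystem model, which builds the state-parameter correlations the gain exploits to update the parameters.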

  1. Influence of the optimization methods on neural state estimation quality of the drive system with elasticity.

    PubMed

    Orlowska-Kowalska, Teresa; Kaminski, Marcin

    2014-01-01

    The paper deals with the implementation of optimized neural networks (NNs) for state variable estimation of the drive system with an elastic joint. The signals estimated by NNs are used in the control structure with a state-space controller and additional feedbacks from the shaft torque and the load speed. High estimation quality is very important for the correct operation of a closed-loop system. The precision of state variables estimation depends on the generalization properties of NNs. A short review of optimization methods of the NN is presented. Two techniques typical for regularization and pruning methods are described and tested in detail: the Bayesian regularization and the Optimal Brain Damage methods. Simulation results show good precision of both optimized neural estimators for a wide range of changes of the load speed and the load torque, not only for nominal but also changed parameters of the drive system. The simulation results are verified in a laboratory setup.
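
    Of the two optimization methods tested, Optimal Brain Damage reduces to a simple saliency computation, s_i = H_ii w_i^2 / 2; the finite-difference Hessian diagonal and toy quadratic loss below are illustrative assumptions:

```python
import numpy as np

def obd_saliencies(loss, w, eps=1e-4):
    """Optimal Brain Damage saliency s_i = H_ii * w_i**2 / 2, with the
    Hessian diagonal estimated by central finite differences."""
    s = np.empty_like(w)
    f0 = loss(w)
    for i in range(len(w)):
        e = np.zeros_like(w)
        e[i] = eps
        h_ii = (loss(w + e) - 2.0 * f0 + loss(w - e)) / eps ** 2
        s[i] = h_ii * w[i] ** 2 / 2.0
    return s

# Toy quadratic 'network loss'; prune the weight whose removal hurts least.
Q = np.diag([10.0, 1.0, 0.1])
loss = lambda w: 0.5 * w @ Q @ w
w = np.array([0.5, 0.5, 0.5])
print(np.argmin(obd_saliencies(loss, w)))  # index OBD would prune first
```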

  2. Intercomparison of Downscaling Methods on Hydrological Impact for Earth System Model of NE United States

    NASA Astrophysics Data System (ADS)

    Yang, P.; Fekete, B. M.; Rosenzweig, B.; Lengyel, F.; Vorosmarty, C. J.

    2012-12-01

    Atmospheric dynamics are essential inputs to Regional-scale Earth System Models (RESMs). Variables including surface air temperature, total precipitation, solar radiation, wind speed and humidity must be downscaled from coarse-resolution, global General Circulation Models (GCMs) to the high temporal and spatial resolution required for regional modeling. However, this downscaling procedure can be challenging due to the need to correct for bias from the GCM and to capture the spatiotemporal heterogeneity of the regional dynamics. In this study, the results obtained using several downscaling techniques and observational datasets were compared for a RESM of the Northeast Corridor of the United States. Previous efforts have enhanced GCM outputs through bias correction using novel techniques. For example, the Potsdam Institute for Climate Impact Research developed a series of bias-corrected GCMs for the next generation of climate change scenarios (Schiermeier, 2012; Moss et al., 2010). Techniques to better represent the heterogeneity of climate variables have also been improved using statistical approaches (Maurer, 2008; Abatzoglou, 2011). For this study, four downscaling approaches to transform bias-corrected HADGEM2-ES Model output (daily at 0.5 x 0.5 degree) to the 3' x 3' (longitude x latitude) daily and monthly resolution required for the Northeast RESM were compared: 1) bilinear interpolation; 2) daily bias-corrected spatial downscaling (D-BCSD) with gridded meteorological datasets (developed by Abatzoglou, 2011); 3) monthly bias-corrected spatial disaggregation (M-BCSD) with CRU (Climatic Research Unit) data; and 4) dynamic downscaling based on the Weather Research and Forecasting (WRF) model. Spatio-temporal analysis of the variability in precipitation was conducted over the study domain. Validation of the variables from the different downscaling methods against observational datasets was carried out to assess the downscaled climate model outputs. The effects of using the different approaches to downscale atmospheric variables (specifically air temperature and precipitation) for use as inputs to the Water Balance Model (WBMPlus; Vorosmarty et al., 1998; Wisser et al., 2008) for simulation of daily discharge and monthly stream flow in the Northeast US for a 100-year period in the 21st century were also assessed. Statistical techniques, especially monthly bias-corrected spatial disaggregation (M-BCSD), showed a potential advantage over the other methods for daily discharge and monthly stream flow simulation. However, dynamic downscaling will provide important complements to the statistical approaches tested.

  3. A coherent discrete variable representation method on a sphere

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Hua -Gen

    Here, the coherent discrete variable representation (ZDVR) has been extended for constructing a multidimensional potential-optimized DVR basis on a sphere. In order to deal with the non-constant Jacobian in spherical angles, two direct product primitive basis methods are proposed so that the original ZDVR technique can be properly implemented. The method has been demonstrated by computing the lowest states of a two dimensional (2D) vibrational model. Results show that the extended ZDVR method gives accurate eigenvalues and exponential convergence with increasing ZDVR basis size.

  4. A coherent discrete variable representation method on a sphere

    DOE PAGES

    Yu, Hua -Gen

    2017-09-05

    Here, the coherent discrete variable representation (ZDVR) has been extended for constructing a multidimensional potential-optimized DVR basis on a sphere. In order to deal with the non-constant Jacobian in spherical angles, two direct product primitive basis methods are proposed so that the original ZDVR technique can be properly implemented. The method has been demonstrated by computing the lowest states of a two dimensional (2D) vibrational model. Results show that the extended ZDVR method gives accurate eigenvalues and exponential convergence with increasing ZDVR basis size.

  5. A comparison of two above-ground biomass estimation techniques integrating satellite-based remotely sensed data and ground data for tropical and semiarid forests in Puerto Rico

    NASA Astrophysics Data System (ADS)

    Iiames, J. S.; Riegel, J.; Lunetta, R.

    2013-12-01

    Two above-ground forest biomass estimation techniques were evaluated for the United States Territory of Puerto Rico using predictor variables acquired from satellite based remotely sensed data and ground data from the U.S. Department of Agriculture Forest Inventory Analysis (FIA) program. The U.S. Environmental Protection Agency (EPA) estimated above-ground forest biomass implementing methodology first posited by the Woods Hole Research Center developed for the conterminous United States (National Biomass and Carbon Dataset [NBCD2000]). For EPA's effort, spatial predictor layers for above-ground biomass estimation included derived products from the U.S. Geologic Survey (USGS) National Land Cover Dataset 2001 (NLCD) (landcover and canopy density), the USGS Gap Analysis Program (forest type classification), the USGS National Elevation Dataset, and the NASA Shuttle Radar Topography Mission (tree heights). In contrast, the U.S. Forest Service (USFS) biomass product integrated FIA ground-based data with a suite of geospatial predictor variables including: (1) the Moderate Resolution Imaging Spectrometer (MODIS)-derived image composites and percent tree cover; (2) NLCD land cover proportions; (3) topographic variables; (4) monthly and annual climate parameters; and (5) other ancillary variables. Correlations between both data sets were made at variable watershed scales to test level of agreement. Notice: This work is done in support of EPA's Sustainable Healthy Communities Research Program. The U.S. EPA funded and conducted the research described in this paper. Although this work was reviewed by the EPA and has been approved for publication, it may not necessarily reflect official Agency policy. Mention of any trade names or commercial products does not constitute endorsement or recommendation for use.

  6. Studying Degradation in Lithium-Ion Batteries by Depth Profiling with Lithium-Nuclear Reaction Analysis

    NASA Astrophysics Data System (ADS)

    Schulz, Adam

    Lithium ion batteries (LIBs) are secondary (rechargeable) energy storage devices that lose the ability to store charge, or degrade, with time. This charge capacity loss stems from unwanted reactions such as the continual growth of the solid electrolyte interphase (SEI) layer on the negative carbonaceous electrode. Parasitic reactions consume mobile lithium, the byproducts of which deposit as the SEI layer. Introducing various electrolyte additives reduces the rate of SEI growth, and coatings on the positive electrode lead to improved calendar lifetimes of LIBs. There has been substantial work both electrochemically monitoring and computationally modeling the development of the SEI layer. Additionally, a plethora of spectroscopic techniques have been employed in an attempt to characterize the components of the SEI layer. Despite lithium being the charge carrier in LIBs, depth profiles of lithium in the SEI are few. Moreover, accurate depth profiles relating capacity loss to lithium in the SEI are virtually non-existent. Better quantification of immobilized lithium would lead to improved understanding of the mechanisms of capacity loss and allow for computational and electrochemical models dependent on true material states. A method by which to prepare low-variability, high-energy-density electrochemical cells for depth profiling with the non-destructive technique lithium nuclear reaction analysis (Li-NRA) is presented here. Due to the unique and largely non-destructive nature of Li-NRA, we are able to perform repeated measurements on the same sample and evaluate the variability of the technique. By using low-variability electrochemical cells along with this precise spectroscopic technique, we are able to confidently report trends of lithium concentration while controlling variables such as charge state, age and electrolyte composition. Conversion of gamma intensity versus beam energy, rendered by NRA, to Li concentration as a function of depth requires calibration and modeling of the nuclear stopping power of the substrate (electrode material). A methodology to accurately convert characteristic gamma intensity versus beam energy raw data to Li % as a function of depth is presented. Depth profiles are performed on the electrodes of commercial LIBs charged to different states of charge and aged to different states of health. In-lab created Li-ion cells are prepared with different electrolytes and then depth profiled by Li-NRA. It was found that lithium accumulates within the solid electrolyte interphase (SEI) layer with the square root of time, consistent with previous reports. When vinylene carbonate (VC) is introduced to the electrolyte, lithium accumulates at a greatly reduced rate as compared to cells containing ethylene carbonate (EC). Additionally, lithium concentration within the positive electrode surface was observed to decrease linearly with time independent of the electrolyte tested. Future experiments to complete the work, and the underpinnings of a materials-based capacity-loss model, are proposed.

  7. Postselection technique for quantum channels with applications to quantum cryptography.

    PubMed

    Christandl, Matthias; König, Robert; Renner, Renato

    2009-01-16

    We propose a general method for studying properties of quantum channels acting on an n-partite system, whose action is invariant under permutations of the subsystems. Our main result is that, in order to prove that a certain property holds for an arbitrary input, it is sufficient to consider the case where the input is a particular de Finetti-type state, i.e., a state which consists of n identical and independent copies of an (unknown) state on a single subsystem. Our technique can be applied to the analysis of information-theoretic problems. For example, in quantum cryptography, we get a simple proof for the fact that security of a discrete-variable quantum key distribution protocol against collective attacks implies security of the protocol against the most general attacks. The resulting security bounds are tighter than previously known bounds obtained with help of the exponential de Finetti theorem.

  8. An Empirical Study of Chronic Diseases in the United States: A Visual Analytics Approach to Public Health

    PubMed Central

    Raghupathi, Wullianallur; Raghupathi, Viju

    2018-01-01

    In this research we explore the current state of chronic diseases in the United States, using data from the Centers for Disease Control and Prevention and applying visualization and descriptive analytics techniques. Five main categories of variables are studied, namely chronic disease conditions, behavioral health, mental health, demographics, and overarching conditions. These are analyzed in the context of regions and states within the U.S. to discover possible correlations between variables in several categories. There are widespread variations in the prevalence of diverse chronic diseases, the number of hospitalizations for specific diseases, and the diagnosis and mortality rates for different states. Identifying such correlations is fundamental to developing insights that will help in the creation of targeted management, mitigation, and preventive policies, ultimately minimizing the risks and costs of chronic diseases. As the population ages and individuals suffer from multiple conditions, or comorbidity, it is imperative that the various stakeholders, including the government, non-governmental organizations (NGOs), policy makers, health providers, and society as a whole, address these adverse effects in a timely and efficient manner. PMID:29494555

  9. Non-Destructive Techniques Based on Eddy Current Testing

    PubMed Central

    García-Martín, Javier; Gómez-Gil, Jaime; Vázquez-Sánchez, Ernesto

    2011-01-01

    Non-destructive techniques are used widely in the metal industry in order to control the quality of materials. Eddy current testing is one of the most extensively used non-destructive techniques for inspecting electrically conductive materials at very high speeds that does not require any contact between the test piece and the sensor. This paper includes an overview of the fundamentals and main variables of eddy current testing. It also describes the state-of-the-art sensors and modern techniques such as multi-frequency and pulsed systems. Recent advances in complex models towards solving crack-sensor interaction, developments in instrumentation due to advances in electronic devices, and the evolution of data processing suggest that eddy current testing systems will be increasingly used in the future. PMID:22163754

  10. Non-destructive techniques based on eddy current testing.

    PubMed

    García-Martín, Javier; Gómez-Gil, Jaime; Vázquez-Sánchez, Ernesto

    2011-01-01

    Non-destructive techniques are used widely in the metal industry in order to control the quality of materials. Eddy current testing is one of the most extensively used non-destructive techniques for inspecting electrically conductive materials at very high speeds that does not require any contact between the test piece and the sensor. This paper includes an overview of the fundamentals and main variables of eddy current testing. It also describes the state-of-the-art sensors and modern techniques such as multi-frequency and pulsed systems. Recent advances in complex models towards solving crack-sensor interaction, developments in instrumentation due to advances in electronic devices, and the evolution of data processing suggest that eddy current testing systems will be increasingly used in the future.

  11. Climate-growth relationships for largemouth bass (Micropterus salmoides) across three southeastern USA states

    Treesearch

    Andrew L. Rypel

    2009-01-01

    The role of climate variability in the ecology of freshwater fishes is of increasing interest. However, there are relatively few tools available for examining how freshwater fish populations respond to climate variations. Here, I apply tree-ring techniques to incremental growth patterns in largemouth bass (Micropterus salmoides Lacepède) otoliths to explore...

  12. Transient Response of a Second Order System Using State Variables.

    ERIC Educational Resources Information Center

    LePage, Wilbur R.

    This programmed booklet is designed for the engineering student who is familiar with the techniques of integral calculus and electrical networks. The booklet teaches how to determine the current and voltages across a resistor, inductor, and capacitor after the switch in a network has been closed. This is a classical problem in engineering, the…

  13. Commande de vol non lineaire d'un drone a voilure fixe par la methode du backstepping

    NASA Astrophysics Data System (ADS)

    Finoki, Edouard

    This thesis describes the design of a non-linear controller for a UAV using the backstepping method. It is a fixed-wing UAV, the NexSTAR ARF from Hobbico. The aim is to find the expressions of the aileron, elevator, and rudder deflections needed to command the flight path angle, the heading angle and the sideslip angle. Controlling the flight path angle allows steady, climbing or descending flight; controlling the heading angle allows the heading to be chosen; and nulling the sideslip angle allows an efficient flight. A good control technique has to ensure the stability of the system and provide optimal performance. Backstepping interlaces the choice of a Lyapunov function with the design of feedback control. This control technique works with the true non-linear model without any approximation. The procedure is to transform intermediate state variables into virtual inputs which will control other state variables. Advantages of this technique are its recursivity, its minimal control effort, and its cascaded structure that allows dividing a high-order system into several simpler lower-order systems. To design this non-linear controller, a non-linear model of the UAV was used. The equations of motion are very accurate, and the aerodynamic coefficients result from interpolations between several variables essential in flight. The controller has been implemented in Matlab/Simulink and FlightGear.
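
    Backstepping is easiest to see on the textbook two-state strict-feedback system rather than the full aircraft model (the double integrator, gains, and time step below are invented for illustration):

```python
import numpy as np

# Backstepping on the strict-feedback system x1' = x2, x2' = u.
# Step 1: treat x2 as a virtual input and pick x2_des = -k1*x1 (V1 = x1**2/2).
# Step 2: drive z = x2 - x2_des to zero with the real input u.
k1, k2 = 2.0, 2.0

def backstepping_u(x1, x2):
    z = x2 + k1 * x1                  # deviation from the virtual control
    x2_des_dot = -k1 * x2             # time derivative of the virtual control
    return -x1 - k2 * z + x2_des_dot  # makes V = x1**2/2 + z**2/2 decrease

x = np.array([1.0, 0.0])
dt = 0.01
for _ in range(1000):
    u = backstepping_u(*x)
    x = x + dt * np.array([x[1], u])
print(np.round(x, 4))  # state driven to the origin
```

    With this choice of u the Lyapunov derivative is V' = -k1*x1**2 - k2*z**2 < 0, which is the recursive stability guarantee the abstract refers to.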

  14. Left Atrial Appendage Closure for Stroke Prevention: Devices, Techniques, and Efficacy.

    PubMed

    Iskandar, Sandia; Vacek, James; Lavu, Madhav; Lakkireddy, Dhanunjaya

    2016-05-01

    Left atrial appendage closure can be performed either surgically or percutaneously. Surgical approaches include direct suture, excision and suture, stapling, and clipping. Percutaneous approaches include endocardial, epicardial, and hybrid endocardial-epicardial techniques. Left atrial appendage anatomy is highly variable and complex; therefore, preprocedural imaging is crucial to determine device selection and sizing, which contribute to procedural success and reduction of complications. Currently, the WATCHMAN is the only device that is approved for left atrial appendage closure in the United States. Copyright © 2016 Elsevier Inc. All rights reserved.

  15. Risk factors for difficult peripheral venous cannulation in hospitalised patients. Protocol for a multicentre case-control study in 48 units of eight public hospitals in Spain.

    PubMed

    Rodriguez-Calero, Miguel Angel; Fernandez-Fernandez, Ismael; Molero-Ballester, Luis Javier; Matamalas-Massanet, Catalina; Moreno-Mejias, Luis; de Pedro-Gomez, Joan Ernest; Blanco-Mavillard, Ian; Morales-Asencio, Jose Miguel

    2018-02-08

    Patients with difficult venous access experience undesirable effects during healthcare, such as delayed diagnosis and initiation of treatment, stress and pain related to the technique, and reduced satisfaction. This study aims to identify risk factors with which to model the appearance of difficulty in achieving peripheral venous puncture in hospital care. Case-control study. We will include adult patients requiring peripheral venous cannulation in eight public hospitals, excluding those in emergency situations and women in childbirth or during puerperium. The nurse who performs the technique will record variables related to the intervention in an anonymised register. Subsequently, a researcher will extract the health variables from the patient's medical history. Patients who present one of the following conditions will be assigned to the case group: two or more failed punctures, need for puncture support, need for central access after failure to achieve peripheral access, or decision to reject the technique. The control group will be obtained from records of patients who do not meet the above conditions. A minimum sample size of 2070 patients has been set: 207 cases and 1863 controls. A descriptive analysis will be made of the distribution of the phenomenon. The variables hypothesised to be risk factors for the appearance of difficult venous cannulation will be studied using a logistic regression model. The study was funded in January 2017 and obtained ethical approval from the Research Ethics Committee of the Balearic Islands. Informed consent will be obtained prior to data collection. Results will be published in a peer-reviewed scientific journal. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  16. Navigating complex sample analysis using national survey data.

    PubMed

    Saylor, Jennifer; Friedmann, Erika; Lee, Hyeon Joo

    2012-01-01

    The National Center for Health Statistics conducts the National Health and Nutrition Examination Survey and other national surveys with probability-based complex sample designs. Goals of national surveys are to provide valid data for the population of the United States. Analyses of data from population surveys present unique challenges in the research process but are valuable avenues to study the health of the United States population. The aim of this study was to demonstrate the importance of using complex data analysis techniques for data obtained with complex multistage sampling design and provide an example of analysis using the SPSS Complex Samples procedure. Illustration of challenges and solutions specific to secondary data analysis of national databases are described using the National Health and Nutrition Examination Survey as the exemplar. Oversampling of small or sensitive groups provides necessary estimates of variability within small groups. Use of weights without complex samples accurately estimates population means and frequency from the sample after accounting for over- or undersampling of specific groups. Weighting alone leads to inappropriate population estimates of variability, because they are computed as if the measures were from the entire population rather than a sample in the data set. The SPSS Complex Samples procedure allows inclusion of all sampling design elements, stratification, clusters, and weights. Use of national data sets allows use of extensive, expensive, and well-documented survey data for exploratory questions but limits analysis to those variables included in the data set. The large sample permits examination of multiple predictors and interactive relationships. Merging data files, availability of data in several waves of surveys, and complex sampling are techniques used to provide a representative sample but present unique challenges. In sophisticated data analysis techniques, use of these data is optimized.

  17. An approximate Riemann solver for thermal and chemical nonequilibrium flows

    NASA Technical Reports Server (NTRS)

    Prabhu, Ramadas K.

    1994-01-01

    Among the many methods available for the determination of inviscid fluxes across a surface of discontinuity, the flux-difference-splitting technique that employs Roe-averaged variables has been used extensively by the CFD community because of its simplicity and its ability to capture shocks exactly. This method, originally developed for perfect gas flows, has since been extended to equilibrium as well as nonequilibrium flows. Determination of the Roe-averaged variables for the case of a perfect gas flow is a simple task; however, for thermal and chemical nonequilibrium flows, some of the variables are not uniquely defined. Methods available in the literature to determine these variables seem to lack sound bases. The present paper describes a simple, yet accurate, method to determine all the variables for nonequilibrium flows in the Roe-average state. The basis for this method is the requirement that the Roe-averaged variables form a consistent set of thermodynamic variables. The present method satisfies the requirement that the square of the speed of sound be positive.
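    For reference, the classical Roe averages for a perfect gas, which the paper generalizes to nonequilibrium flows; a small sketch in which gamma (the ratio of specific heats) and the sample states are assumed values:

    ```python
    # Classical Roe averages for a perfect gas: sqrt-density-weighted averages of
    # velocity and total enthalpy, with the sound speed recovered from them.
    import numpy as np

    def roe_average(rho_l, u_l, h_l, rho_r, u_r, h_r, gamma=1.4):
        """Return Roe-averaged density, velocity, total enthalpy, sound speed."""
        wl, wr = np.sqrt(rho_l), np.sqrt(rho_r)      # sqrt-density weights
        rho = wl * wr                                 # Roe-averaged density
        u = (wl * u_l + wr * u_r) / (wl + wr)         # velocity
        h = (wl * h_l + wr * h_r) / (wl + wr)         # total enthalpy
        a2 = (gamma - 1.0) * (h - 0.5 * u * u)        # squared speed of sound
        assert a2 > 0.0, "consistency requires a positive squared sound speed"
        return rho, u, h, np.sqrt(a2)

    print(roe_average(1.0, 0.0, 2.5, 0.125, 0.0, 2.0))
    ```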

  18. Noise in Neuronal and Electronic Circuits: A General Modeling Framework and Non-Monte Carlo Simulation Techniques.

    PubMed

    Kilinc, Deniz; Demir, Alper

    2017-08-01

    The brain is extremely energy efficient and remarkably robust in what it does despite the considerable variability and noise caused by the stochastic mechanisms in neurons and synapses. Computational modeling is a powerful tool that can help us gain insight into this important aspect of brain mechanism. A deep understanding and computational design tools can help develop robust neuromorphic electronic circuits and hybrid neuroelectronic systems. In this paper, we present a general modeling framework for biological neuronal circuits that systematically captures the nonstationary stochastic behavior of ion channels and synaptic processes. In this framework, fine-grained, discrete-state, continuous-time Markov chain models of both ion channels and synaptic processes are treated in a unified manner. Our modeling framework features a mechanism for the automatic generation of the corresponding coarse-grained, continuous-state, continuous-time stochastic differential equation models for neuronal variability and noise. Furthermore, we repurpose non-Monte Carlo noise analysis techniques, which were previously developed for analog electronic circuits, for the stochastic characterization of neuronal circuits both in time and frequency domain. We verify that the fast non-Monte Carlo analysis methods produce results with the same accuracy as computationally expensive Monte Carlo simulations. We have implemented the proposed techniques in a prototype simulator, where both biological neuronal and analog electronic circuits can be simulated together in a coupled manner.
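    A toy illustration of the coarse-graining described: a population of two-state (closed/open) channels whose discrete-state Markov dynamics are replaced by the corresponding chemical-Langevin stochastic differential equation. Rates and counts are made up:

    ```python
    # Diffusion (SDE) approximation of a two-state ion-channel population, the
    # kind of coarse-grained model the framework generates automatically.
    import numpy as np

    rng = np.random.default_rng(2)
    N, alpha, beta = 1000, 5.0, 10.0   # channels, opening/closing rates (1/s)
    dt, T = 1e-4, 0.5
    steps = int(T / dt)

    n = N * alpha / (alpha + beta)     # start near the steady-state open count
    trace = np.empty(steps)
    for k in range(steps):
        open_rate = alpha * (N - n)    # closed -> open propensity
        close_rate = beta * n          # open -> closed propensity
        drift = open_rate - close_rate
        diffusion = np.sqrt(open_rate + close_rate)   # chemical Langevin noise
        n += drift * dt + diffusion * np.sqrt(dt) * rng.standard_normal()
        n = min(max(n, 0.0), N)        # keep the count physical
        trace[k] = n

    print(f"mean open fraction {trace.mean()/N:.3f} "
          f"(deterministic {alpha/(alpha+beta):.3f})")
    ```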

  19. [Association between social deprivation and causes of mortality among elderly residents in the city of Recife, Pernambuco State, Brazil].

    PubMed

    Silva, Vanessa de Lima; Leal, Márcia Carréra Campos; Marino, Jacira Guiro; Marques, Ana Paula de Oliveira

    2008-05-01

    This paper aims to analyze mortality among elderly residents in the city of Recife, Pernambuco State, Brazil, and its association with social deprivation (hardship) in the year 2000. An ecological study was performed, and 94 neighborhoods and 5 social strata were analyzed. The independent variable consisted of a composite social deprivation indicator, obtained for each neighborhood and calculated through a scoring technique based on census variables: water supply, sewage, illiteracy, and head-of-household's years of schooling and income. The dependent variables were: mortality rate in individuals > 60 years of age and cause-specific mortality rates. The association was calculated by means of the Pearson correlation coefficient, linear regression, and mortality odds between social deprivation strata formed by grouping of neighborhoods according to the indicator's quintiles. The data show a statistically significant positive correlation between social deprivation and mortality in the elderly from pneumonia, protein-energy malnutrition, tuberculosis, diarrhea/gastroenteritis, and traffic accidents, and a negative correlation with deaths from bronchopulmonary and breast cancers.
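    A small sketch of the indicator construction and correlation step, with synthetic stand-ins for the census variables and the 94 neighborhood mortality rates:

    ```python
    # Composite deprivation score from standardized census variables, correlated
    # against neighborhood mortality rates. All values are synthetic.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    n = 94                             # neighborhoods
    census = rng.random((n, 4))        # e.g., water, sewage, illiteracy, income
    z = (census - census.mean(axis=0)) / census.std(axis=0, ddof=1)
    deprivation = z.mean(axis=1)       # equal-weight composite indicator

    mortality = 10 + 2.5 * deprivation + rng.normal(0, 1, n)   # synthetic rates
    r, p = stats.pearsonr(deprivation, mortality)
    print(f"Pearson r = {r:.2f}, p = {p:.1e}")

    # Quintile strata of the indicator, as used for mortality odds comparisons.
    quintile = np.digitize(deprivation, np.quantile(deprivation, [.2, .4, .6, .8]))
    print("stratum sizes:", np.bincount(quintile))
    ```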

  20. Identification of phreatophytic groundwater dependent ecosystems using geospatial technologies

    NASA Astrophysics Data System (ADS)

    Perez Hoyos, Isabel Cristina

    The protection of groundwater dependent ecosystems (GDEs) is increasingly being recognized as an essential aspect for the sustainable management and allocation of water resources. Ecosystem services are crucial for human well-being and for a variety of flora and fauna. However, the conservation of GDEs is only possible if knowledge about their location and extent is available. Several studies have focused on the identification of GDEs at specific locations using ground-based measurements. However, recent progress in technologies such as remote sensing and their integration with geographic information systems (GIS) has provided alternative ways to map GDEs at much larger spatial extents. This study is concerned with the discovery of patterns in geospatial data sets using data mining techniques for mapping phreatophytic GDEs in the United States at 1 km spatial resolution. A methodology to identify the probability that an ecosystem is groundwater dependent is developed. Probabilities are obtained by modeling the relationship between the known locations of GDEs and the main factors influencing groundwater dependency, namely water table depth (WTD) and aridity index (AI). A methodology is proposed to predict WTD at 1 km spatial resolution using relevant geospatial data sets calibrated with WTD observations. An ensemble learning algorithm called random forest (RF) is used in order to model the distribution of groundwater in three study areas: Nevada, California, and Washington, as well as in the entire United States. RF regression performance is compared with a single regression tree (RT). The comparison is based on contrasting training error, true prediction error, and variable importance estimates of both methods. Additionally, remote sensing variables are omitted from the process of fitting the RF model to the data to evaluate the deterioration in the model performance when these variables are not used as an input. Research results suggest that although the prediction accuracy of a single RT is reduced in comparison with RFs, single trees can still be used to understand the interactions that might be taking place between predictor variables and the response variable. Regarding RF, there is great potential in using the power of an ensemble of trees for prediction of WTD. The superior capability of RF to accurately map water table position in Nevada, California, and Washington demonstrates that this technique can be applied at scales larger than regional levels. It is also shown that the removal of remote sensing variables from the RF training process degrades the performance of the model. Using the predicted WTD, the probability that an ecosystem is groundwater dependent (GDE probability) is estimated at 1 km spatial resolution. The modeling technique is evaluated in the state of Nevada, USA, to develop a systematic approach for the identification of GDEs and is then applied to the entire United States. The modeling approach selected for the development of the GDE probability map results from a comparison of the performance of classification trees (CT) and classification forests (CF). Predictive performance evaluation for the selection of the most accurate model is achieved using a threshold-independent technique, and the prediction accuracy of both models is assessed in greater detail using threshold-dependent measures. 
The resulting GDE probability map can potentially be used for the definition of conservation areas since it can be translated into a binary classification map with two classes: GDE and NON-GDE. These maps are created by selecting a probability threshold. It is demonstrated that the choice of this threshold has dramatic effects on deterministic model performance measures.
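    A scikit-learn sketch of the RF-versus-single-RT comparison and the variable-importance readout; the synthetic features stand in for the geospatial predictors (e.g., aridity index, elevation, remote sensing layers):

    ```python
    # Random forest vs. a single regression tree for a WTD-style regression,
    # plus the feature importances used to judge the value of each predictor.
    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeRegressor

    X, y = make_regression(n_samples=2000, n_features=8, noise=10, random_state=0)

    rt = DecisionTreeRegressor(random_state=0)
    rf = RandomForestRegressor(n_estimators=300, random_state=0)

    for name, model in [("single RT", rt), ("RF", rf)]:
        cv = cross_val_score(model, X, y, cv=5, scoring="r2")
        print(f"{name}: cross-validated R2 = {cv.mean():.3f}")

    # Variable importance, analogous to asking how much performance degrades
    # when, say, the remote sensing inputs are withheld from training.
    rf.fit(X, y)
    print("importances:", np.round(rf.feature_importances_, 3))
    ```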

  1. A longitudinal study of mortality and air pollution for São Paulo, Brazil.

    PubMed

    Botter, Denise A; Jørgensen, Bent; Peres, Antonieta A Q

    2002-09-01

    We study the effects of various air-pollution variables on the daily death counts for people over 65 years in São Paulo, Brazil, from 1991 to 1993, controlling for meteorological variables. We use a state space model where the air-pollution variables enter via the latent process, and the meteorological variables via the observation equation. The latent process represents the potential mortality due to air pollution, and is estimated by Kalman filter techniques. The effect of air pollution on mortality is found to be a function of the variation in the sulphur dioxide level for the previous 3 days, whereas the other air-pollution variables (total suspended particulates, nitrogen dioxide, carbon monoxide, ozone) are not significant when sulphur dioxide is in the equation. There are significant effects of humidity and up to lag 3 of temperature, and a significant seasonal variation.
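    The filtering mechanics can be illustrated with a minimal linear-Gaussian Kalman filter; this is only a sketch of the machinery, not the paper's actual mortality model (which involves daily death counts and several meteorological covariates), and all series below are synthetic:

    ```python
    # Univariate Kalman filter for a latent state driven by a pollution covariate:
    # x_t = a*x_{t-1} + b*u_t + w_t (latent), y_t = x_t + v_t (observed).
    import numpy as np

    def kalman_filter(y, u, a=0.9, b=0.05, q=0.1, r=1.0):
        x, p = 0.0, 1.0                    # state mean and variance
        est = np.empty(len(y))
        for t in range(len(y)):
            x, p = a * x + b * u[t], a * a * p + q      # predict
            k = p / (p + r)                              # Kalman gain
            x, p = x + k * (y[t] - x), (1 - k) * p       # update
            est[t] = x
        return est

    rng = np.random.default_rng(4)
    so2 = rng.gamma(2.0, 10.0, 365)                      # synthetic SO2 series
    latent = np.cumsum(rng.normal(0, 0.1, 365)) + 0.05 * so2
    deaths = latent + rng.normal(0, 1.0, 365)            # observed proxy
    print(kalman_filter(deaths, so2)[:5])
    ```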

  2. Functionality of empirical model-based predictive analytics for the early detection of hemodynamic instability.

    PubMed

    Summers, Richard L; Pipke, Matt; Wegerich, Stephan; Conkright, Gary; Isom, Kristen C

    2014-01-01

    Background. Monitoring cardiovascular hemodynamics in the modern clinical setting is a major challenge. Increasing amounts of physiologic data must be analyzed and interpreted in the context of the individual patient's pathology and inherent biologic variability. Certain data-driven analytical methods are currently being explored for smart monitoring of data streams from patients as a first-tier automated detection system for clinical deterioration. As a prelude to human clinical trials, an empirical multivariate machine learning method called Similarity-Based Modeling ("SBM") was tested in an in silico experiment using data generated with the aid of a detailed computer simulator of human physiology (Quantitative Circulatory Physiology or "QCP"), which contains complex control systems with realistic integrated feedback loops. Methods. SBM is a kernel-based, multivariate machine learning method that uses monitored clinical information to generate an empirical model of a patient's physiologic state. This platform allows for the use of predictive analytic techniques to identify early changes in a patient's condition that are indicative of a state of deterioration or instability. The integrity of the technique was tested through an in silico experiment using QCP in which the output of computer simulations of a slowly evolving cardiac tamponade resulted in a progressive state of cardiovascular decompensation. Simulator outputs for the variables under consideration were generated at a 2-min data rate (0.083 Hz), with the tamponade introduced at a point 420 minutes into the simulation sequence. The functionality of the SBM predictive analytics methodology to identify clinical deterioration was compared to the thresholds used by conventional monitoring methods. Results. The SBM modeling method was found to closely track the normal physiologic variation as simulated by QCP. With the slow development of the tamponade, the SBM model estimates are seen to diverge from the simulated biosignals in the early stages of physiologic deterioration, while the variables are still within normal ranges. Thus, the SBM system was found to identify pathophysiologic conditions in a timeframe that would not have been detected in a usual clinical monitoring scenario. Conclusion. In this study the functionality of a multivariate machine learning predictive methodology that incorporates commonly monitored clinical information was tested using a computer model of human physiology. SBM and predictive analytics were able to differentiate a state of decompensation while the monitored variables were still within normal clinical ranges. This finding suggests that SBM could provide early identification of clinical deterioration using predictive analytic techniques. Keywords: predictive analytics, hemodynamics, monitoring.
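    SBM itself is proprietary, but the kernel-based idea can be caricatured in a few lines: predict each multivariate sample as a similarity-weighted blend of healthy reference exemplars and watch the residual grow as the state drifts. Everything below is a simplified, synthetic sketch, not the vendor's algorithm:

    ```python
    # Schematic similarity-based estimator: the model's "expected" state is a
    # kernel-weighted blend of training exemplars; drift inflates the residual.
    import numpy as np

    def similarity_estimate(x, exemplars, h=1.0):
        d = np.linalg.norm(exemplars - x, axis=1)
        w = np.exp(-(d / h) ** 2)                 # Gaussian similarity kernel
        return w @ exemplars / w.sum()            # blended expected state

    rng = np.random.default_rng(5)
    reference = rng.normal(0, 1, (200, 4))        # healthy-state training matrix

    healthy = rng.normal(0, 1, 4)
    drifting = healthy + np.array([0.0, 0.8, -0.6, 0.4])   # subtle deterioration

    for name, x in [("healthy", healthy), ("drifting", drifting)]:
        residual = np.linalg.norm(x - similarity_estimate(x, reference))
        print(f"{name}: residual = {residual:.2f}")   # larger -> less similar
    ```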

  3. Modelling Viscoelastic Behaviour of Polymer by A Mixed Velocity, Displacement Formulation - Numerical and Experimental Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pham, VT.; Silva, L.; Digonnet, H.

    2011-05-04

    The objective of this work is to model the viscoelastic behaviour of polymers from the solid state to the liquid state. With this objective, we perform experimental tensile tests and compare with simulation results. The chosen polymer is a PMMA whose behaviour depends on its temperature. The computational simulation is based on the Navier-Stokes equations, where we propose a mixed finite element method with a P1+/P1 interpolation using displacement (or velocity) and pressure as principal variables. The implemented technique uses a mesh composed of triangles (2D) or tetrahedra (3D). The goal of this approach is to model the viscoelastic behaviour of polymers through a fluid-structure coupling technique with a multiphase approach.

  4. Calibrationless parallel magnetic resonance imaging: a joint sparsity model.

    PubMed

    Majumdar, Angshul; Chaudhury, Kunal Narayan; Ward, Rabab

    2013-12-05

    State-of-the-art parallel MRI techniques either explicitly or implicitly require certain parameters to be estimated, e.g., the sensitivity map for SENSE and SMASH, and interpolation weights for GRAPPA and SPIRiT. Thus all these techniques are sensitive to the calibration (parameter estimation) stage. In this work, we have proposed a parallel MRI technique that does not require any calibration but yields reconstruction results that are on par with (or even better than) state-of-the-art methods in parallel MRI. Our proposed method requires solving non-convex analysis and synthesis prior joint-sparsity problems. This work also derives the algorithms for solving them. Experimental validation was carried out on two datasets: an eight-channel brain and an eight-channel Shepp-Logan phantom. Two sampling methods were used: variable-density random sampling and non-Cartesian radial sampling. For the brain data an acceleration factor of 4 was used, and for the phantom an acceleration factor of 6 was used. The reconstruction results were quantitatively evaluated based on the normalised mean squared error between the reconstructed image and the original. The qualitative evaluation was based on the actual reconstructed images. We compared our work with four state-of-the-art parallel imaging techniques: two calibrated methods (CS SENSE and l1SPIRiT) and two calibration-free techniques (Distributed CS and SAKE). Our method yields better reconstruction results than all of them.
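    Joint sparsity across coils is commonly enforced through an l2,1 penalty whose proximal step shrinks whole rows of the coefficient matrix (one row per coefficient, shared across channels); a minimal sketch of that operator, not the authors' full non-convex solver:

    ```python
    # Row-wise soft-thresholding: the proximal operator of tau * sum_i ||X[i,:]||_2,
    # the standard building block for joint (row) sparsity across channels.
    import numpy as np

    def prox_l21(X, tau):
        norms = np.linalg.norm(X, axis=1, keepdims=True)
        scale = np.maximum(0.0, 1.0 - tau / np.maximum(norms, 1e-12))
        return scale * X          # rows shared by all coils shrink together

    coeffs = np.array([[3.0, 4.0],      # strong row: kept, shrunk toward zero
                       [0.1, 0.2]])     # weak row: zeroed across all channels
    print(prox_l21(coeffs, tau=1.0))
    ```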

  5. Predicting In-State Workforce Retention After Graduate Medical Education Training.

    PubMed

    Koehler, Tracy J; Goodfellow, Jaclyn; Davis, Alan T; Spybrook, Jessaca; vanSchagen, John E; Schuh, Lori

    2017-02-01

    There is a paucity of literature when it comes to identifying predictors of in-state retention of graduate medical education (GME) graduates, such as the demographic and educational characteristics of these physicians. The purpose was to use demographic and educational predictors to identify graduates from a single Michigan GME sponsoring institution, who are also likely to practice medicine in Michigan post-GME training. We included all residents and fellows who graduated between 2000 and 2014 from 1 of 18 GME programs at a Michigan-based sponsoring institution. Predictor variables identified by logistic regression with cross-validation were used to create a scoring tool to determine the likelihood of a GME graduate to practice medicine in the same state post-GME training. A 6-variable model, which included 714 observations, was identified. The predictor variables were birth state, program type (primary care versus non-primary care), undergraduate degree location, medical school location, state in which GME training was completed, and marital status. The positive likelihood ratio (+LR) for the scoring tool was 5.31, while the negative likelihood ratio (-LR) was 0.46, with an accuracy of 74%. The +LR indicates that the scoring tool was useful in predicting whether graduates who trained in a Michigan-based GME sponsoring institution were likely to practice medicine in Michigan following training. Other institutions could use these techniques to identify key information that could help pinpoint matriculating residents/fellows likely to practice medicine within the state in which they completed their training.
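    The reported likelihood ratios follow directly from sensitivity and specificity; a small helper reproducing the arithmetic, with an assumed sensitivity/specificity pair chosen only to land near the published +LR and -LR:

    ```python
    # Likelihood ratios from a diagnostic scoring tool's sensitivity/specificity.
    def likelihood_ratios(sens, spec):
        pos_lr = sens / (1.0 - spec)      # +LR: how much a positive score raises
        neg_lr = (1.0 - sens) / spec      # the odds; -LR: how much a negative lowers
        return pos_lr, neg_lr

    # Assumed pair roughly consistent with +LR ~ 5.3 and -LR ~ 0.46; these are
    # illustrative numbers, not values taken from the paper.
    pos_lr, neg_lr = likelihood_ratios(sens=0.60, spec=0.887)
    print(f"+LR = {pos_lr:.2f}, -LR = {neg_lr:.2f}")
    ```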

  6. The Past, Present and Future of Geodemographic Research in the United States and United Kingdom

    PubMed Central

    Singleton, Alexander D.; Spielman, Seth E.

    2014-01-01

    This article presents an extensive comparative review of the emergence and application of geodemographics in both the United States and United Kingdom, situating them as an extension of earlier empirically driven models of urban socio-spatial structure. The empirical and theoretical basis for this generalization technique is also considered. Findings demonstrate critical differences in both the application and development of geodemographics between the United States and United Kingdom resulting from their diverging histories, variable data economies, and availability of academic or free classifications. Finally, current methodological research is reviewed, linking this discussion prospectively to the changing spatial data economy in both the United States and United Kingdom. PMID:25484455

  7. Study on the variable cycle engine modeling techniques based on the component method

    NASA Astrophysics Data System (ADS)

    Zhang, Lihua; Xue, Hui; Bao, Yuhai; Li, Jijun; Yan, Lan

    2016-01-01

    Based on the structural platform of the gas turbine engine, the components of a variable cycle engine were simulated using the component method. A mathematical model of nonlinear equations corresponding to each component of the gas turbine engine was established. The nonlinear equations were solved in Matlab using a Newton-Raphson steady-state algorithm, and the performance of the engine components was calculated. The numerical simulation results showed that the model built can describe the basic performance of the gas turbine engine, which verifies the validity of the model.
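    A generic sketch of the Newton-Raphson iteration used for this kind of component matching, with a toy two-equation residual standing in for the engine's flow and work compatibility equations:

    ```python
    # Newton-Raphson with a forward-difference Jacobian, driving the component
    # matching residuals to zero. The residual system here is a toy stand-in.
    import numpy as np

    def newton(f, x0, tol=1e-10, max_iter=50):
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            r = f(x)
            if np.linalg.norm(r) < tol:
                return x
            eps = 1e-7                    # numerical Jacobian, column by column
            J = np.column_stack([
                (f(x + eps * e) - r) / eps for e in np.eye(len(x))
            ])
            x = x - np.linalg.solve(J, r)
        raise RuntimeError("Newton-Raphson did not converge")

    # Toy residuals standing in for mass-flow and work balance equations.
    f = lambda x: np.array([x[0]**2 + x[1] - 3.0, x[0] - x[1]**2 + 1.0])
    print(newton(f, [1.0, 1.0]))
    ```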

  8. Compatibility check of measured aircraft responses using kinematic equations and extended Kalman filter

    NASA Technical Reports Server (NTRS)

    Klein, V.; Schiess, J. R.

    1977-01-01

    An extended Kalman filter smoother and a fixed point smoother were used for estimation of the state variables in the six degree of freedom kinematic equations relating measured aircraft responses and for estimation of unknown constant bias and scale factor errors in measured data. The computing algorithm includes an analysis of residuals which can improve the filter performance and provide estimates of measurement noise characteristics for some aircraft output variables. The technique developed was demonstrated using simulated and real flight test data. Improved accuracy of measured data was obtained when the data were corrected for estimated bias errors.

  9. Estimating tree crown widths for the primary Acadian species in Maine

    Treesearch

    Matthew B. Russell; Aaron R. Weiskittel

    2012-01-01

    In this analysis, data for seven conifer and eight hardwood species were gathered from across the state of Maine for estimating tree crown widths. Maximum and largest crown width equations were developed using tree diameter at breast height as the primary predicting variable. Quantile regression techniques were used to estimate the maximum crown width and a constrained...
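    A sketch of the quantile-regression step with statsmodels' QuantReg: fitting an upper quantile of crown width against diameter at breast height (dbh) approximates a maximum-crown-width boundary. The data are synthetic:

    ```python
    # Quantile regression of crown width on dbh: the 95th percentile tracks the
    # "maximum crown width" boundary, the median the typical relationship.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(6)
    dbh = rng.uniform(5, 60, 500)                        # cm
    crown = 0.9 + 0.25 * dbh + rng.gamma(2.0, 0.6, 500)  # m, right-skewed spread

    X = sm.add_constant(dbh)
    fit95 = sm.QuantReg(crown, X).fit(q=0.95)   # ~maximum crown width
    fit50 = sm.QuantReg(crown, X).fit(q=0.50)   # median relationship
    print("95th percentile:", fit95.params)
    print("median:         ", fit50.params)
    ```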

  10. Variable Geometry Aircraft Pylon Structure and Related Operation Techniques

    NASA Technical Reports Server (NTRS)

    Shah, Parthiv N. (Inventor)

    2014-01-01

    An aircraft control structure can be utilized for purposes of drag management, noise control, or aircraft flight maneuvering. The control structure includes a high pressure engine nozzle, such as a bypass nozzle or a core nozzle of a turbofan engine. The nozzle exhausts a high pressure fluid stream, which can be swirled using a deployable swirl vane architecture. The control structure also includes a variable geometry pylon configured to be coupled between the nozzle and the aircraft. The variable geometry pylon has a moveable pylon section that can be deployed into a deflected state to maintain or alter a swirling fluid stream (when the swirl vane architecture is deployed) for drag management purposes, or to assist in the performance of aircraft flight maneuvers.

  11. Impact of multicollinearity on small sample hydrologic regression models

    NASA Astrophysics Data System (ADS)

    Kroll, Charles N.; Song, Peter

    2013-06-01

    Often hydrologic regression models are developed with ordinary least squares (OLS) procedures. The use of OLS with highly correlated explanatory variables produces multicollinearity, which creates highly sensitive parameter estimators with inflated variances and improper model selection. It is not clear how to best address multicollinearity in hydrologic regression models. Here a Monte Carlo simulation is developed to compare four techniques to address multicollinearity: OLS, OLS with variance inflation factor screening (VIF), principal component regression (PCR), and partial least squares regression (PLS). The performance of these four techniques was observed for varying sample sizes, correlation coefficients between the explanatory variables, and model error variances consistent with hydrologic regional regression models. The negative effects of multicollinearity are magnified at smaller sample sizes, higher correlations between the variables, and larger model error variances (smaller R2). The Monte Carlo simulation indicates that if the true model is known, multicollinearity is present, and the estimation and statistical testing of regression parameters are of interest, then PCR or PLS should be employed. If the model is unknown, or if the interest is solely in model predictions, it is recommended that OLS be employed since using more complicated techniques did not produce any improvement in model performance. A leave-one-out cross-validation case study was also performed using low-streamflow data sets from the eastern United States. Results indicate that OLS with stepwise selection generally produces models across study regions with varying levels of multicollinearity that are as good as biased regression techniques such as PCR and PLS.
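    A compact sketch of three of the compared treatments (VIF screening, PCR, PLS) on deliberately collinear synthetic predictors at a small, hydrology-like sample size:

    ```python
    # VIF diagnosis plus PCR and PLS fits on collinear predictors (synthetic).
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline
    from statsmodels.stats.outliers_influence import variance_inflation_factor

    rng = np.random.default_rng(7)
    n = 30                                    # small hydrologic sample
    z = rng.normal(size=n)
    X = np.column_stack([z + 0.1 * rng.normal(size=n) for _ in range(3)])
    y = X @ np.array([1.0, 0.5, -0.2]) + rng.normal(scale=2.0, size=n)

    vifs = [variance_inflation_factor(X, i) for i in range(X.shape[1])]
    print("VIFs:", np.round(vifs, 1))         # large values flag multicollinearity

    pcr = make_pipeline(PCA(n_components=1), LinearRegression()).fit(X, y)
    pls = PLSRegression(n_components=1).fit(X, y)
    print("PCR R2:", round(pcr.score(X, y), 3),
          "PLS R2:", round(pls.score(X, y), 3))
    ```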

  12. Hybrid Discrete-Continuous Markov Decision Processes

    NASA Technical Reports Server (NTRS)

    Feng, Zhengzhu; Dearden, Richard; Meuleau, Nicholas; Washington, Rich

    2003-01-01

    This paper proposes a Markov decision process (MDP) model that features both discrete and continuous state variables. We extend previous work by Boyan and Littman on the mono-dimensional time-dependent MDP to multiple dimensions. We present the principle of lazy discretization, and piecewise constant and linear approximations of the model. Having to deal with several continuous dimensions raises several new problems that require new solutions. In the (piecewise) linear case, we use techniques from partially observable MDPs (POMDPs) to represent value functions as sets of linear functions attached to different partitions of the state space.

  13. Dynamic Programming for Structured Continuous Markov Decision Problems

    NASA Technical Reports Server (NTRS)

    Dearden, Richard; Meuleau, Nicholas; Washington, Richard; Feng, Zhengzhu

    2004-01-01

    We describe an approach for exploiting structure in Markov Decision Processes with continuous state variables. At each step of the dynamic programming, the state space is dynamically partitioned into regions where the value function is the same throughout the region. We first describe the algorithm for piecewise constant representations. We then extend it to piecewise linear representations, using techniques from POMDPs to represent and reason about linear surfaces efficiently. We show that for complex, structured problems, our approach exploits the natural structure so that optimal solutions can be computed efficiently.

  14. Variable horizon in a peridynamic medium

    DOE PAGES

    Silling, Stewart A.; Littlewood, David J.; Seleson, Pablo

    2015-12-10

    Here, a notion of material homogeneity is proposed for peridynamic bodies with variable horizon but constant bulk properties. A relation is derived that scales the force state according to the position-dependent horizon while keeping the bulk properties unchanged. Using this scaling relation, if the horizon depends on position, artifacts called ghost forces may arise in a body under a homogeneous deformation. These artifacts depend on the second derivative of the horizon and can be reduced by employing a modified equilibrium equation using a new quantity called the partial stress. Bodies with piecewise constant horizon can be modeled without ghost forces by using a simpler technique called a splice. As a limiting case of zero horizon, both the partial stress and splice techniques can be used to achieve local-nonlocal coupling. Computational examples, including dynamic fracture in a one-dimensional model with local-nonlocal coupling, illustrate the methods.

  15. Dimension reduction techniques for the integrative analysis of multi-omics data

    PubMed Central

    Zeleznik, Oana A.; Thallinger, Gerhard G.; Kuster, Bernhard; Gholami, Amin M.

    2016-01-01

    State-of-the-art next-generation sequencing, transcriptomics, proteomics and other high-throughput ‘omics' technologies enable the efficient generation of large experimental data sets. These data may yield unprecedented knowledge about molecular pathways in cells and their role in disease. Dimension reduction approaches have been widely used in exploratory analysis of single omics data sets. This review will focus on dimension reduction approaches for simultaneous exploratory analyses of multiple data sets. These methods extract the linear relationships that best explain the correlated structure across data sets, the variability both within and between variables (or observations) and may highlight data issues such as batch effects or outliers. We explore dimension reduction techniques as one of the emerging approaches for data integration, and how these can be applied to increase our understanding of biological systems in normal physiological function and disease. PMID:26969681

  16. Inter- and Intra-method Variability of VS Profiles and VS30 at ARRA-funded Sites

    NASA Astrophysics Data System (ADS)

    Yong, A.; Boatwright, J.; Martin, A. J.

    2015-12-01

    The 2009 American Recovery and Reinvestment Act (ARRA) funded geophysical site characterizations at 191 seismographic stations in California and in the central and eastern United States. Shallow boreholes were considered cost- and environmentally-prohibitive, thus non-invasive methods (passive and active surface- and body-wave techniques) were used at these stations. The drawback, however, is that these techniques measure seismic properties indirectly and introduce more uncertainty than borehole methods. The principal methods applied were Array Microtremor (AM), Multi-channel Analysis of Surface Waves (MASW; Rayleigh and Love waves), Spectral Analysis of Surface Waves (SASW), Refraction Microtremor (ReMi), and P- and S-wave refraction tomography. Depending on the apparent geologic or seismic complexity of the site, field crews applied one or a combination of these methods to estimate the shear-wave velocity (VS) profile and calculate VS30, the time-averaged VS to a depth of 30 meters. We study the inter- and intra-method variability of VS and VS30 at each seismographic station where combinations of techniques were applied. For each site, we find both types of variability in VS30 remain insignificant (5-10% difference) despite substantial variability observed in the VS profiles. We also find that reliable VS profiles are best developed using a combination of techniques, e.g., surface-wave VS profiles correlated against P-wave tomography to constrain variables (Poisson's ratio and density) that are key depth-dependent parameters used in modeling VS profiles. The most reliable results are based on surface- or body-wave profiles correlated against independent observations such as material properties inferred from outcropping geology nearby. For example, mapped geology describes station CI.LJR as a hard rock site (VS30 > 760 m/s). However, decomposed rock outcrops were found nearby and support the estimated VS30 of 303 m/s derived from the MASW (Love wave) profile.
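    VS30 itself is a simple time-average: 30 m divided by the shear-wave travel time through the top 30 m. A sketch of the calculation for an illustrative three-layer profile:

    ```python
    # VS30 = 30 m / total vertical shear-wave travel time through the top 30 m.
    import numpy as np

    def vs30(thicknesses_m, velocities_mps):
        """Time-averaged shear-wave velocity over the top 30 m."""
        h = np.asarray(thicknesses_m, dtype=float)
        v = np.asarray(velocities_mps, dtype=float)
        assert abs(h.sum() - 30.0) < 1e-9, "layers must total 30 m"
        return 30.0 / np.sum(h / v)

    # Illustrative profile: slow weathered layer over progressively firmer material.
    print(f"VS30 = {vs30([5, 10, 15], [180, 350, 600]):.0f} m/s")
    ```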

  17. Flux-Based Deadbeat Control of Induction-Motor Torque

    NASA Technical Reports Server (NTRS)

    Kenny, Barbara H.; Lorenz, Robert D.

    2003-01-01

    An improved method and prior methods of deadbeat direct torque control involve the use of pulse-width modulation (PWM) of applied voltages. The prior methods are based on the use of stator flux and stator current as state variables, leading to mathematical solutions of control equations in forms that do not lend themselves to clear visualization of solution spaces. In contrast, the use of rotor and stator fluxes as the state variables in the present improved method lends itself to graphical representations that aid in understanding possible solutions under various operating conditions. In addition, the present improved method incorporates the superposition of high-frequency carrier signals for use in a motor-self-sensing technique for estimating the rotor shaft angle at any speed (including low or even zero speed) without need for additional shaft-angle-measuring sensors.

  18. Adaptive neural network output feedback control for stochastic nonlinear systems with unknown dead-zone and unmodeled dynamics.

    PubMed

    Tong, Shaocheng; Wang, Tong; Li, Yongming; Zhang, Huaguang

    2014-06-01

    This paper discusses the problem of adaptive neural network output feedback control for a class of stochastic nonlinear strict-feedback systems. The systems concerned have certain characteristics, such as unknown nonlinear uncertainties, unknown dead-zones and unmodeled dynamics, and lack direct measurements of the state variables. In this paper, neural networks (NNs) are employed to approximate the unknown nonlinear uncertainties, and the dead-zone is represented as a time-varying system with a bounded disturbance. An NN state observer is designed to estimate the unmeasured states. Based on both the backstepping design technique and a stochastic small-gain theorem, a robust adaptive NN output feedback control scheme is developed. It is proved that all the variables involved in the closed-loop system are input-state-practically stable in probability, and also have robustness to the unmodeled dynamics. Meanwhile, the observer errors and the output of the system can be regulated to a small neighborhood of the origin by selecting appropriate design parameters. Simulation examples are also provided to illustrate the effectiveness of the proposed approach.

  19. Dominant root locus in state estimator design for material flow processes: A case study of hot strip rolling.

    PubMed

    Fišer, Jaromír; Zítek, Pavel; Skopec, Pavel; Knobloch, Jan; Vyhlídal, Tomáš

    2017-05-01

    The purpose of the paper is to achieve a constrained estimation of process state variables using the anisochronic state observer tuned by the dominant root locus technique. The anisochronic state observer is based on the state-space time delay model of the process. Moreover, the process model is identified not only as delayed but also as non-linear. This model is developed to describe a material flow process. The root locus technique combined with the magnitude optimum method is utilized to investigate the estimation process. The resulting dominant root location serves as a measure of estimation performance: the higher the dominant (natural) frequency achievable in the leftmost position of the complex plane, the better the performance and robustness obtained. Also the model-based observer control methodology for material flow processes is provided by means of the separation principle. For demonstration purposes, the computer-based anisochronic state observer is applied to the strip temperature estimation in the hot strip finishing mill composed of seven stands. This application was the original motivation for the presented research. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  20. Procedures for Obtaining and Analyzing Writing Samples of School-Age Children and Adolescents.

    PubMed

    Price, Johanna R; Jackson, Sandra C

    2015-10-01

    Many students' writing skills are below grade-level expectations, and students with oral language difficulties are at particular risk for writing difficulties. Speech-language pathologists' (SLPs') expertise in language applies to both the oral and written modalities, yet evidence suggests that SLPs' confidence regarding writing assessment is low. Writing samples are a clinically useful, criterion-referenced assessment technique that is relevant to helping students satisfy writing-related requirements of the Common Core State Standards (National Governors Association Center for Best Practices and Council of Chief State School Officers, 2010a). This article provides recommendations for obtaining and analyzing students' writing samples. In this tutorial, the authors provide a comprehensive literature review of methods regarding (a) collection of writing samples from narrative, expository (informational/explanatory), and persuasive (argument) genres; (b) variables of writing performance that are useful to assess; and (c) manual and computer-aided techniques for analyzing writing samples. The authors relate their findings to expectations for writing skills expressed in the Common Core State Standards (National Governors Association Center for Best Practices & Council of Chief State School Officers, 2010a). SLPs can readily implement many techniques for obtaining and analyzing writing samples. The information in this article provides SLPs with recommendations for the use of writing samples and may help increase SLPs' confidence regarding written language assessment.

  1. Timing analysis by model checking

    NASA Technical Reports Server (NTRS)

    Naydich, Dimitri; Guaspari, David

    2000-01-01

    The safety of modern avionics relies on high integrity software that can be verified to meet hard real-time requirements. The limits of verification technology therefore determine acceptable engineering practice. To simplify verification problems, safety-critical systems are commonly implemented under the severe constraints of a cyclic executive, which make design an expensive trial-and-error process highly intolerant of change. Important advances in analysis techniques, such as rate monotonic analysis (RMA), have provided a theoretical and practical basis for easing these onerous restrictions. But RMA and its kindred have two limitations: they apply only to verifying the requirement of schedulability (that tasks meet their deadlines) and they cannot be applied to many common programming paradigms. We address both these limitations by applying model checking, a technique with successful industrial applications in hardware design. Model checking algorithms analyze finite state machines, either by explicit state enumeration or by symbolic manipulation. Since quantitative timing properties involve a potentially unbounded state variable (a clock), our first problem is to construct a finite approximation that is conservative for the properties being analyzed: if the approximation satisfies the properties of interest, so does the infinite model. To reduce the potential for state space explosion we must further optimize this finite model. Experiments with some simple optimizations have yielded a hundred-fold efficiency improvement over published techniques.

  2. Bi-Frequency Modulated Quasi-Resonant Converters: Theory and Applications

    NASA Astrophysics Data System (ADS)

    Zhang, Yuefeng

    1995-01-01

    To avoid the variable frequency operation of quasi-resonant converters, many soft-switching PWM converters have been proposed; all of them require an auxiliary switch, which increases the cost and complexity of the power supply system. In this thesis, a new kind of technique for quasi-resonant converters has been proposed, called the bi-frequency modulation (BFM) technique. By operating the quasi-resonant converters at two switching frequencies, this technique enables quasi-resonant converters to achieve soft-switching, at fixed switching frequencies, without an auxiliary switch. The steady-state analysis of four commonly used quasi-resonant converters, namely the ZVS buck, ZCS buck, ZVS boost, and ZCS boost converters, has been presented. Using the concepts of equivalent sources, equivalent sinks, and resonant tanks, the large signal models of these four quasi-resonant converters were developed. Based on these models, the steady-state control characteristics of the BFM ZVS buck, BFM ZCS buck, BFM ZVS boost, and BFM ZCS boost converters have been derived. The functional block and design considerations of the bi-frequency controller were presented, and one of the implementations of the bi-frequency controller was given. A complete design example has been presented. Both computer simulations and experimental results have verified that the bi-frequency modulated quasi-resonant converters can achieve soft-switching, at fixed switching frequencies, without an auxiliary switch. One application of the bi-frequency modulation technique is EMI reduction. The basic principle of using the BFM technique for EMI reduction was introduced. Based on the spectral analysis, the EMI performances of the PWM, variable-frequency, and bi-frequency modulated control signals were evaluated, and the BFM control signals show the lowest EMI emission. The bi-frequency modulation technique has also been applied to power factor correction. A BFM zero-current switching boost converter has been designed for power factor correction, and the simulation results show that the power factor has been improved.

  3. Comparison of manual and automatic techniques for substriatal segmentation in 11C-raclopride high-resolution PET studies.

    PubMed

    Johansson, Jarkko; Alakurtti, Kati; Joutsa, Juho; Tohka, Jussi; Ruotsalainen, Ulla; Rinne, Juha O

    2016-10-01

    The striatum is the primary target in regional 11C-raclopride-PET studies, and despite its small volume, it contains several functional and anatomical subregions. The outcome of a quantitative dopamine receptor study using 11C-raclopride-PET depends heavily on the quality of the region-of-interest (ROI) definition of these subregions. The aim of this study was to evaluate subregional analysis techniques because new approaches have emerged, but have not yet been compared directly. In this paper, we compared manual ROI delineation with several automatic methods. The automatic methods used either direct clustering of the PET image or individualization of chosen brain atlases on the basis of MRI or PET image normalization. State-of-the-art normalization methods and atlases were applied, including those provided in the FreeSurfer, Statistical Parametric Mapping 8, and FSL software packages. Evaluation of the automatic methods was based on voxel-wise congruity with the manual delineations and the test-retest variability and reliability of the outcome measures using data from seven healthy male participants who were scanned twice with 11C-raclopride-PET on the same day. The results show that both manual and automatic methods can be used to define striatal subregions. Although most of the methods performed well with respect to the test-retest variability and reliability of binding potential, the smallest average test-retest variability and SEM were obtained using a connectivity-based atlas and PET normalization (test-retest variability=4.5%, SEM=0.17). The current state-of-the-art automatic ROI methods can be considered good alternatives for subjective and laborious manual segmentation in 11C-raclopride-PET studies.

  4. Analysis of landscape fragmentation in the Peloncillo Mountains in relation to wildfire, prescribed burning, and cattle grazing

    Treesearch

    John Rogan; Kelley O' Neal; Stephen Yool

    2005-01-01

    This paper examined the application of state-of-the-art remote sensing image enhancement and classification techniques for mapping land cover change in the Peloncillo Mountains of Arizona and New Mexico. Spectrally enhanced images acquired August 1985, 1991, 1996, and 2000 were combined with environmental variables such as slope and aspect to map land cover...

  5. Solid state VRX CT detector

    NASA Astrophysics Data System (ADS)

    DiBianca, Frank A.; Melnyk, Roman; Sambari, Aniket; Jordan, Lawrence M.; Laughter, Joseph S.; Zou, Ping

    2000-04-01

    A technique called Variable-Resolution X-ray (VRX) detection that greatly increases the spatial resolution in computed tomography (CT) and digital radiography (DR) is presented. The technique is based on a principle called 'projective compression' that allows the resolution element of a CT detector to scale with the subject or field size. For very large (40 - 50 cm) field sizes, resolution exceeding 2 cy/mm is possible and for very small fields, microscopy is attainable with resolution exceeding 100 cy/mm. Preliminary results from a 576-channel solid-state detector are presented. The detector has a dual-arm geometry and is comprised of CdWO4 scintillator crystals arranged in 24 modules of 24 channels/module. The scintillators are 0.85 mm wide and placed on 1 mm centers. Measurements of signal level, MTF and SNR, all versus detector angle, are presented.

  6. A neural fuzzy controller learning by fuzzy error propagation

    NASA Technical Reports Server (NTRS)

    Nauck, Detlef; Kruse, Rudolf

    1992-01-01

    In this paper, we describe a procedure to integrate techniques for the adaptation of membership functions in a linguistic variable based fuzzy control environment by using neural network learning principles. This is an extension to our work. We solve this problem by defining a fuzzy error that is propagated back through the architecture of our fuzzy controller. According to this fuzzy error and the strength of its antecedent each fuzzy rule determines its amount of error. Depending on the current state of the controlled system and the control action derived from the conclusion, each rule tunes the membership functions of its antecedent and its conclusion. By this we get an unsupervised learning technique that enables a fuzzy controller to adapt to a control task by knowing just about the global state and the fuzzy error.

  7. Review and classification of variability analysis techniques with clinical applications.

    PubMed

    Bravi, Andrea; Longtin, André; Seely, Andrew J E

    2011-10-10

    Analysis of patterns of variation of time-series, termed variability analysis, represents a rapidly evolving discipline with increasing applications in different fields of science. In medicine and in particular critical care, efforts have focussed on evaluating the clinical utility of variability. However, the growth and complexity of techniques applicable to this field have made interpretation and understanding of variability more challenging. Our objective is to provide an updated review of variability analysis techniques suitable for clinical applications. We review more than 70 variability techniques, providing for each technique a brief description of the underlying theory and assumptions, together with a summary of clinical applications. We propose a revised classification for the domains of variability techniques, which include statistical, geometric, energetic, informational, and invariant. We discuss the process of calculation, often necessitating a mathematical transform of the time-series. Our aims are to summarize a broad literature, promote a shared vocabulary that would improve the exchange of ideas, and the analyses of the results between different studies. We conclude with challenges for the evolving science of variability analysis.

  9. Integrator Windup Protection-Techniques and a STOVL Aircraft Engine Controller Application

    NASA Technical Reports Server (NTRS)

    KrishnaKumar, K.; Narayanaswamy, S.

    1997-01-01

    Integrators are included in the feedback loop of a control system to eliminate the steady state errors in the commanded variables. The integrator windup problem arises if the control actuators encounter operational limits before the steady state errors are driven to zero by the integrator. The typical effects of windup are large system oscillations, high steady state error, and a delayed system response following the windup. In this study, methods to prevent integrator windup are examined to provide Integrator Windup Protection (IWP) for an engine controller of a Short Take-Off and Vertical Landing (STOVL) aircraft. A unified performance index is defined to optimize the performance of the Conventional Anti-Windup (CAW) and the Modified Anti-Windup (MAW) methods. A modified Genetic Algorithm search procedure with stochastic parameter encoding is implemented to obtain the optimal parameters of the CAW scheme. The advantages and drawbacks of the CAW and MAW techniques are discussed and recommendations are made for the choice of the IWP scheme, given some characteristics of the system.
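    A minimal sketch of the conventional back-calculation anti-windup idea on a PI loop with actuator saturation; the gains, limits, and first-order plant below are illustrative stand-ins, not the STOVL controller's values:

    ```python
    # PI controller with back-calculation anti-windup: when the actuator
    # saturates, the integrator is bled toward consistency with the saturated
    # output instead of winding up.
    import numpy as np

    def pi_step(err, integ, dt, kp=2.0, ki=1.0, kb=5.0, u_min=-1.0, u_max=1.0):
        u_unsat = kp * err + ki * integ
        u = np.clip(u_unsat, u_min, u_max)
        integ += (err + kb * (u - u_unsat)) * dt   # back-calculation correction
        return u, integ

    integ, y = 0.0, 0.0
    for _ in range(200):                    # crude first-order plant: y' = u - y
        u, integ = pi_step(1.0 - y, integ, dt=0.01)
        y += (u - y) * 0.01
    print(f"output after 2 s: {y:.3f}")
    ```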

  10. Computation of Steady-State Probability Distributions in Stochastic Models of Cellular Networks

    PubMed Central

    Hallen, Mark; Li, Bochong; Tanouchi, Yu; Tan, Cheemeng; West, Mike; You, Lingchong

    2011-01-01

    Cellular processes are “noisy”. In each cell, concentrations of molecules are subject to random fluctuations due to the small numbers of these molecules and to environmental perturbations. While noise varies with time, it is often measured at steady state, for example by flow cytometry. When interrogating aspects of a cellular network by such steady-state measurements of network components, a key need is to develop efficient methods to simulate and compute these distributions. We describe innovations in stochastic modeling coupled with approaches to this computational challenge: first, an approach to modeling intrinsic noise via solution of the chemical master equation, and second, a convolution technique to account for contributions of extrinsic noise. We show how these techniques can be combined in a streamlined procedure for evaluation of different sources of variability in a biochemical network. Evaluation and illustrations are given in analysis of two well-characterized synthetic gene circuits, as well as a signaling network underlying the mammalian cell cycle entry. PMID:22022252
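    A minimal sketch of the chemical-master-equation route for a birth-death (production/degradation) network: build the truncated generator matrix and solve pi Q = 0 for the steady-state distribution. The rates and truncation are illustrative:

    ```python
    # Steady state of the chemical master equation for a birth-death process:
    # production at rate k, degradation at rate g per molecule, states 0..N.
    import numpy as np
    from scipy.linalg import null_space

    k, g, N = 20.0, 1.0, 80          # birth rate, death rate, truncation
    Q = np.zeros((N + 1, N + 1))
    for n in range(N + 1):
        if n < N:
            Q[n, n + 1] = k          # birth: n -> n+1
        if n > 0:
            Q[n, n - 1] = g * n      # death: n -> n-1
        Q[n, n] = -Q[n].sum()        # rows of a generator sum to zero

    pi = null_space(Q.T)[:, 0]       # stationary vector solves pi Q = 0
    pi = np.abs(pi) / np.abs(pi).sum()
    print(f"mean copy number {pi @ np.arange(N + 1):.2f} (Poisson mean {k/g})")
    ```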

  11. Simulating ensembles of source water quality using a K-nearest neighbor resampling approach.

    PubMed

    Towler, Erin; Rajagopalan, Balaji; Seidel, Chad; Summers, R Scott

    2009-03-01

    Climatological, geological, and water management factors can cause significant variability in surface water quality. As drinking water quality standards become more stringent, the ability to quantify the variability of source water quality becomes more important for decision-making and planning in water treatment for regulatory compliance. However, paucity of long-term water quality data makes it challenging to apply traditional simulation techniques. To overcome this limitation, we have developed and applied a robust nonparametric K-nearest neighbor (K-nn) bootstrap approach utilizing the United States Environmental Protection Agency's Information Collection Rule (ICR) data. In this technique, first an appropriate "feature vector" is formed from the best available explanatory variables. The nearest neighbors to the feature vector are identified from the ICR data and are resampled using a weight function. Repetition of this results in water quality ensembles, and consequently the distribution and the quantification of the variability. The main strengths of the approach are its flexibility, simplicity, and the ability to use a large amount of spatial data with limited temporal extent to provide water quality ensembles for any given location. We demonstrate this approach by applying it to simulate monthly ensembles of total organic carbon for two utilities in the U.S. with very different watersheds and to alkalinity and bromide at two other U.S. utilities.
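    A sketch of the K-nn resampling step; the decreasing 1/i neighbor weights are a common kernel choice and an assumption here, as are the feature variables, so treat this as a schematic rather than the paper's exact procedure:

    ```python
    # K-nearest-neighbor bootstrap: resample the values of the k records whose
    # feature vectors are closest to the query, with a decreasing 1/i kernel.
    import numpy as np

    def knn_resample(feature, features, values, k=10, n_draws=1000, seed=8):
        rng = np.random.default_rng(seed)
        dist = np.linalg.norm(features - feature, axis=1)
        order = np.argsort(dist)[:k]              # indices of k nearest neighbors
        w = 1.0 / np.arange(1, k + 1)
        w /= w.sum()                               # normalized 1/i weights
        return rng.choice(values[order], size=n_draws, p=w)

    rng = np.random.default_rng(9)
    features = rng.random((500, 3))    # e.g., flow, season, alkalinity (synthetic)
    toc = 2.0 + 3.0 * features[:, 0] + rng.normal(0, 0.3, 500)   # mg/L
    ensemble = knn_resample(np.array([0.7, 0.4, 0.5]), features, toc)
    print(f"TOC ensemble: median {np.median(ensemble):.2f} mg/L, "
          f"90% band {np.percentile(ensemble, [5, 95]).round(2)}")
    ```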

  12. An improved switching converter model. Ph.D. Thesis. Final Report

    NASA Technical Reports Server (NTRS)

    Shortt, D. J.

    1982-01-01

    The nonlinear modeling and analysis of dc-dc converters in the continuous mode and discontinuous mode were performed using averaging and discrete sampling techniques. A model was developed by combining these two techniques. This model, the discrete average model, accurately predicts the envelope of the output voltage and is easy to implement in circuit and state variable forms. The proposed model is shown to be dependent on the type of duty cycle control. The proper selection of the power stage model, between average and discrete average, is largely a function of the error processor in the feedback loop. The accuracy of the measurement data taken by a conventional technique is affected by the conditions under which the data are collected.

  13. Can Protection Motivation Theory explain farmers' adaptation to climate change/variability decision making in the Gambia?

    NASA Astrophysics Data System (ADS)

    Bagagnan, A. R.

    2016-12-01

    In The Gambia, changes in the climate pattern have affected and continue to affect the agriculture sector, calling for effective adaptation policies. The present study aimed to explain farmers' adoption of climate change adaptation measures through protection motivation theory in the Central River Region of The Gambia. Primary data were collected in all eight communities of the study area. A transect walk was conducted first, followed by a survey of 283 informants. The perception variables referred to the past 20 years, while the stated implementation addressed current adaptation practices. Results showed that, on the one hand, most of the perception variables, such as severity, ability to withstand, and internal barriers, are significantly correlated with protection motivation; on the other hand, protection motivation and stated implementation of water conservation techniques are strongly correlated. Structural equation modeling confirms the mediating role of protection motivation between farmers' stated implementation and their perception of climate variability. A decrease in soil water storage capacity, degradation of the quality of the soil surface structure, and a decrease in the length of the growing season are factors that motivate farmers to implement an adaptation measure. The cost of implementation and farmers' vulnerability are factors that prevent farmers from implementing an adaptation measure; the cost of implementation is the main barrier to farmers' protection motivation. The study therefore suggests that farmers' awareness of climate change/variability should be increased through farmer field schools and awareness campaigns, that farmers' resilience should be improved, and that adaptation measures should be made accessible to farmers through loan facilities and subsidies.

  14. Application of fluorescence spectroscopy for on-line bioprocess monitoring and control

    NASA Astrophysics Data System (ADS)

    Boehl, Daniela; Solle, D.; Toussaint, Hans J.; Menge, M.; Renemann, G.; Lindemann, Carsten; Hitzmann, Bernd; Scheper, Thomas-Helmut

    2001-02-01

    Modern bioprocess control requires fast data acquisition and in-time evaluation of bioprocess variables. On-line fluorescence spectroscopy for data acquisition and the use of chemometric methods accomplish these requirements. The presented investigations were performed with fluorescence spectrometers covering wide ranges of excitation and emission wavelengths. By detecting several biogenic fluorophores (amino acids, coenzymes and vitamins), a large amount of information about the state of the bioprocess is obtained. For the evaluation of the process variables, partial least squares regression is used. This technique was applied to several bioprocesses: the production of ergotamine by Claviceps purpurea, the production of t-PA (tissue plasminogen activator) by animal cells, and brewing processes. The main point of monitoring the brewing processes was to determine the process variables cell count and extract concentration.

  15. Single step optimization of manipulator maneuvers with variable structure control

    NASA Technical Reports Server (NTRS)

    Chen, N.; Dwyer, T. A. W., III

    1987-01-01

    One step ahead optimization has been recently proposed for spacecraft attitude maneuvers as well as for robot manipulator maneuvers. Such a technique yields a discrete time control algorithm implementable as a sequence of state-dependent, quadratic programming problems for acceleration optimization. Its sensitivity to model accuracy, for the required inversion of the system dynamics, is shown in this paper to be alleviated by a fast variable structure control correction, acting between the sampling intervals of the slow one step ahead discrete time acceleration command generation algorithm. The slow and fast looping concept chosen follows that recently proposed for optimal aiming strategies with variable structure control. Accelerations required by the VSC correction are reserved during the slow one step ahead command generation so that the ability to overshoot the sliding surface is guaranteed.

  16. Extending existing structural identifiability analysis methods to mixed-effects models.

    PubMed

    Janzén, David L I; Jirstrand, Mats; Chappell, Michael J; Evans, Neil D

    2018-01-01

    The concept of structural identifiability for state-space models is expanded to cover mixed-effects state-space models. Two methods applicable for the analytical study of the structural identifiability of mixed-effects models are presented. The two methods are based on previously established techniques for non-mixed-effects models; namely the Taylor series expansion and the input-output form approach. By generating an exhaustive summary, and by assuming an infinite number of subjects, functions of random variables can be derived which in turn determine the distribution of the system's observation function(s). By considering the uniqueness of the analytical statistical moments of the derived functions of the random variables, the structural identifiability of the corresponding mixed-effects model can be determined. The two methods are applied to a set of examples of mixed-effects models to illustrate how they work in practice. Copyright © 2017 Elsevier Inc. All rights reserved.

  17. Multimodel Kalman filtering for adaptive nonuniformity correction in infrared sensors.

    PubMed

    Pezoa, Jorge E; Hayat, Majeed M; Torres, Sergio N; Rahman, Md Saifur

    2006-06-01

    We present an adaptive technique for the estimation of nonuniformity parameters of infrared focal-plane arrays that is robust with respect to changes and uncertainties in scene and sensor characteristics. The proposed algorithm is based on using a bank of Kalman filters in parallel. Each filter independently estimates state variables comprising the gain and the bias matrices of the sensor, according to its own dynamic-model parameters. The supervising component of the algorithm then generates the final estimates of the state variables by forming a weighted superposition of all the estimates rendered by each Kalman filter. The weights are computed and updated iteratively, according to the a posteriori-likelihood principle. The performance of the estimator and its ability to compensate for fixed-pattern noise is tested using both simulated and real data obtained from two cameras operating in the mid- and long-wave infrared regime.
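
    A minimal sketch of the filter-bank idea described above is given below, written for a single pixel with a scalar gain-bias model; the dynamics, noise levels and synthetic data are illustrative assumptions, not the parameters of the paper.

```python
import numpy as np

# Minimal multimodel Kalman filter (MMAE) sketch for one pixel, assuming
# a scalar model y_k = gain*T_k + bias + noise with random-walk state
# x = [gain, bias]; noise levels and data are illustrative assumptions.
class KF:
    def __init__(self, q, r):
        self.x = np.zeros(2)           # state estimate: [gain, bias]
        self.P = np.eye(2)             # state covariance
        self.Q = q * np.eye(2)         # model-specific process noise
        self.R = r                     # model-specific measurement noise

    def step(self, y, T):
        H = np.array([T, 1.0])         # observation row: y ~ gain*T + bias
        self.P = self.P + self.Q       # predict (identity dynamics)
        S = H @ self.P @ H + self.R    # innovation variance
        K = self.P @ H / S             # Kalman gain
        e = y - H @ self.x             # innovation
        self.x = self.x + K * e
        self.P = self.P - np.outer(K, H @ self.P)
        # Gaussian likelihood of the innovation, used for the weights
        return np.exp(-0.5 * e * e / S) / np.sqrt(2.0 * np.pi * S)

bank = [KF(q, r) for q in (1e-4, 1e-2) for r in (0.01, 0.1)]
w = np.ones(len(bank)) / len(bank)     # prior model weights

rng = np.random.default_rng(0)
for _ in range(200):                   # synthetic irradiance and readout
    T = rng.uniform(0.2, 1.0)
    y = 1.2 * T + 0.3 + 0.1 * rng.standard_normal()
    like = np.array([f.step(y, T) for f in bank])
    w = w * like + 1e-300              # a posteriori weight update
    w /= w.sum()
    x_hat = sum(wi * f.x for wi, f in zip(w, bank))  # weighted estimate

print("estimated [gain, bias]:", np.round(x_hat, 3))
```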

  18. Regression Tree-Based Methodology for Customizing Building Energy Benchmarks to Individual Commercial Buildings

    NASA Astrophysics Data System (ADS)

    Kaskhedikar, Apoorva Prakash

    According to the U.S. Energy Information Administration, commercial buildings represent about 40% of the United States' energy consumption, of which office buildings consume a major portion. Gauging the extent to which an individual building consumes energy in excess of its peers is the first step in initiating energy efficiency improvement. Energy benchmarking offers an initial building energy performance assessment without rigorous evaluation. Energy benchmarking tools based on the Commercial Buildings Energy Consumption Survey (CBECS) database are investigated in this thesis. This study proposes a new benchmarking methodology based on decision trees, where a relationship between the energy use intensities (EUI) and building parameters (continuous and categorical) is developed for different building types. This methodology was applied to the medium office and school building types contained in the CBECS database. The Random Forest technique was used to find the most influential parameters that impact building energy use intensities, and significant correlations between EUIs and CBECS variables were then identified, as in the sketch below. Other than floor area, some of the important variables were number of workers, location, number of PCs and main cooling equipment. The coefficient of variation was used to evaluate the effectiveness of the new model. The customization technique proposed in this thesis was compared with another benchmarking model that is widely used by building owners and designers, namely ENERGY STAR's Portfolio Manager. This tool relies on standard linear regression methods, which are only able to handle continuous variables. The proposed model uses data mining techniques and was found to perform slightly better than the Portfolio Manager. The broader impact of the new benchmarking methodology is that it allows important categorical variables to be identified and then incorporated in a local, as opposed to global, model framework for the EUI pertinent to the building type. The ability to identify and rank the important variables is of great importance in the practical implementation of benchmarking tools, which rely on query-based building and HVAC variable filters specified by the user.
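
    The importance-ranking step lends itself to a short sketch. The following assumes a CBECS-like table with illustrative column names (not the survey's actual variable codes) and uses a random forest's impurity-based importances as a stand-in for the thesis's procedure.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Hedged sketch of the importance-ranking step on a CBECS-like table;
# the column names are illustrative, not the survey's variable codes.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "floor_area":   rng.lognormal(9.0, 1.0, n),
    "n_workers":    rng.poisson(40, n),
    "n_pcs":        rng.poisson(35, n),
    "climate_zone": rng.integers(1, 6, n),      # categorical, integer-coded
})
# synthetic EUI loosely driven by occupancy and equipment density
df["eui"] = (30 + 0.8 * df.n_workers + 0.5 * df.n_pcs
             + 5 * df.climate_zone + rng.normal(0, 10, n))

X, y = df.drop(columns="eui"), df["eui"]
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)

# rank parameters by impurity-based importance before building benchmarks
for name, imp in sorted(zip(X.columns, rf.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name:12s} {imp:.3f}")
```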

  19. Variability of hand tremor in rest and in posture--a pilot study.

    PubMed

    Rahimi, Fariborz; Bee, Carina; South, Angela; Debicki, Derek; Jog, Mandar

    2011-01-01

    Previous studies have demonstrated variability in the frequency and amplitude of tremor between subjects and between trials in both healthy individuals and those with disease states. However, to date, few studies have examined the composition of tremor. Efficacy of treatment for tremor using techniques such as Botulinum neurotoxin type A (BoNT A) injection may benefit from a better understanding of tremor variability, but more importantly, tremor composition. In the present study, we evaluated tremor variability and composition in 8 participants with either essential tremor or Parkinson disease tremor using kinematic recording methods. Our preliminary findings suggest that while individual patients may have more intra-trial and intra-task variability, overall, task effect was significant only for amplitude of tremor. Composition of tremor varied among patients, and the data suggest that tremor composition is complex, involving multiple muscle groups. These results may support the value of kinematic assessment methods and the improved understanding of tremor composition in the management of tremor.

  20. Optimal Stabilization of Social Welfare under Small Variation of Operating Condition with Bifurcation Analysis

    NASA Astrophysics Data System (ADS)

    Chanda, Sandip; De, Abhinandan

    2016-12-01

    A social welfare optimization technique is proposed in this paper, built on a state-space model and bifurcation analysis, to offer a substantial stability margin even in the most adverse states of power system networks. The restoration of the power market's dynamic price equilibrium is negotiated by forming the Jacobian of the sensitivity matrix to regulate the state variables, standardizing the quality of the solution under the worst possible contingencies of the network, even with the co-option of intermittent renewable energy sources. The model has been tested on the IEEE 30-bus system, and particle swarm optimization assisted the fusion of the proposed model and methodology.

  1. Faithful test of nonlocal realism with entangled coherent states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Chang-Woo; Jeong, Hyunseok; Paternostro, Mauro

    2011-02-15

    We investigate the violation of Leggett's inequality for nonlocal realism using entangled coherent states and various types of local measurements. We prove mathematically the relation between the violation of the Clauser-Horne-Shimony-Holt form of Bell's inequality and Leggett's one when tested by the same resources. For Leggett inequalities, we generalize the nonlocal realistic bound to systems in Hilbert spaces larger than bidimensional ones and introduce an optimization technique that allows one to achieve larger degrees of violation by adjusting the local measurement settings. Our work describes the steps that should be performed to produce a self-consistent generalization of Leggett's original arguments to continuous-variable states.

  2. Using object-oriented analysis techniques to support system testing

    NASA Astrophysics Data System (ADS)

    Zucconi, Lin

    1990-03-01

    Testing of real-time control systems can be greatly facilitated by use of object-oriented and structured analysis modeling techniques. This report describes a project where behavior, process and information models built for a real-time control system were used to augment and aid traditional system testing. The modeling techniques used were an adaptation of the Ward/Mellor method for real-time systems analysis and design (Ward85) for object-oriented development. The models were used to simulate system behavior by means of hand execution of the behavior or state model and the associated process (data and control flow) and information (data) models. The information model, which uses an extended entity-relationship modeling technique, is used to identify application domain objects and their attributes (instance variables). The behavioral model uses state-transition diagrams to describe the state-dependent behavior of the object. The process model uses a transformation schema to describe the operations performed on or by the object. Together, these models provide a means of analyzing and specifying a system in terms of the static and dynamic properties of the objects which it manipulates. The various models were used to simultaneously capture knowledge about both the objects in the application domain and the system implementation. Models were constructed, verified against the software as-built and validated through informal reviews with the developer. These models were then hand-executed.

  3. Measurement of process variables in solid-state fermentation of wheat straw using FT-NIR spectroscopy and synergy interval PLS algorithm

    NASA Astrophysics Data System (ADS)

    Jiang, Hui; Liu, Guohai; Mei, Congli; Yu, Shuang; Xiao, Xiahong; Ding, Yuhan

    2012-11-01

    The feasibility of rapid determination of the process variables (i.e. pH and moisture content) in solid-state fermentation (SSF) of wheat straw using Fourier transform near infrared (FT-NIR) spectroscopy was studied. The synergy interval partial least squares (siPLS) algorithm was implemented to calibrate the regression model. The number of PLS factors and the number of subintervals were optimized simultaneously by cross-validation. The performance of the prediction model was evaluated according to the root mean square error of cross-validation (RMSECV), the root mean square error of prediction (RMSEP) and the correlation coefficient (R). The measurement results of the optimal model were obtained as follows: RMSECV = 0.0776, Rc = 0.9777, RMSEP = 0.0963, and Rp = 0.9686 for the pH model; RMSECV = 1.3544% w/w, Rc = 0.8871, RMSEP = 1.4946% w/w, and Rp = 0.8684 for the moisture content model. Finally, compared with classic PLS and iPLS models, the siPLS model showed superior performance. The overall results demonstrate that FT-NIR spectroscopy combined with the siPLS algorithm can be used to measure process variables in solid-state fermentation of wheat straw, and that NIR spectroscopy has the potential to be utilized in the SSF industry.
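
    A compact sketch of the siPLS selection loop is given below, under the assumption of synthetic spectra standing in for the paper's FT-NIR data: the wavelength axis is split into subintervals, PLS models are fit on joined interval pairs, and the pair with the lowest RMSECV is kept.

```python
import numpy as np
from itertools import combinations
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# Hedged siPLS sketch: synthetic data stand in for FT-NIR spectra, and
# the number of PLS factors is fixed here rather than cross-validated.
rng = np.random.default_rng(1)
X = rng.standard_normal((60, 200))            # 60 samples x 200 wavelengths
y = X[:, 40:60].mean(axis=1) + 0.05 * rng.standard_normal(60)  # e.g. pH

n_intervals = 10
bounds = np.array_split(np.arange(X.shape[1]), n_intervals)

best = (np.inf, None)
for pair in combinations(range(n_intervals), 2):      # synergy of 2 intervals
    cols = np.concatenate([bounds[i] for i in pair])
    pls = PLSRegression(n_components=3)
    y_cv = cross_val_predict(pls, X[:, cols], y, cv=5).ravel()
    rmsecv = np.sqrt(np.mean((y - y_cv) ** 2))
    if rmsecv < best[0]:
        best = (rmsecv, pair)

print(f"best interval pair {best[1]}, RMSECV = {best[0]:.4f}")
```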

  4. Guiding the study of brain dynamics by using first-person data: Synchrony patterns correlate with ongoing conscious states during a simple visual task

    PubMed Central

    Lutz, Antoine; Lachaux, Jean-Philippe; Martinerie, Jacques; Varela, Francisco J.

    2002-01-01

    Even during well-calibrated cognitive tasks, successive brain responses to repeated identical stimulations are highly variable. The source of this variability is believed to reside mainly in fluctuations of the subject's cognitive “context” defined by his/her attentive state, spontaneous thought process, strategy to carry out the task, and so on … As these factors are hard to manipulate precisely, they are usually not controlled, and the variability is discarded by averaging techniques. We combined first-person data and the analysis of neural processes to reduce such noise. We presented the subjects with a three-dimensional illusion and recorded their electrical brain activity and their own report about their cognitive context. Trials were clustered according to these first-person data, and separate dynamical analyses were conducted for each cluster. We found that (i) characteristic patterns of endogenous synchrony appeared in frontal electrodes before stimulation. These patterns depended on the degree of preparation and the immediacy of perception as verbally reported. (ii) These patterns were stable for several recordings. (iii) Preparatory states modulate both the behavioral performance and the evoked and induced synchronous patterns that follow. (iv) These results indicated that first-person data can be used to detect and interpret neural processes.

  5. Low-thrust trajectory analysis for the geosynchronous mission

    NASA Technical Reports Server (NTRS)

    Jasper, T. P.

    1973-01-01

    Methodology employed in development of a computer program designed to analyze optimal low-thrust trajectories is described, and application of the program to a Solar Electric Propulsion Stage (SEPS) geosynchronous mission is discussed. To avoid the zero inclination and eccentricity singularities which plague many small-force perturbation techniques, a special set of state variables (equinoctial) is used. Adjoint equations are derived for the minimum time problem and are also free from the singularities. Solutions to the state and adjoint equations are obtained by both orbit averaging and precision numerical integration; an evaluation of these approaches is made.
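
    For reference, the equinoctial element set mentioned above can be sketched as follows; this is the standard prograde definition, not code from the SEPS program itself, and the sample orbit is hypothetical.

```python
import numpy as np

# Hedged sketch of the equinoctial elements used to avoid the zero
# eccentricity/inclination singularities; standard prograde definition.
def kepler_to_equinoctial(a, e, i, raan, argp, M):
    """a: semi-major axis, e: eccentricity, i: inclination [rad],
    raan: RAAN [rad], argp: argument of perigee [rad], M: mean anomaly."""
    h = e * np.sin(argp + raan)       # replaces e and argp
    k = e * np.cos(argp + raan)
    p = np.tan(i / 2) * np.sin(raan)  # replaces i and raan
    q = np.tan(i / 2) * np.cos(raan)
    L = M + argp + raan               # mean longitude
    return a, h, k, p, q, L

# A near-circular, near-equatorial orbit: well defined here even though
# argp and raan individually are ill-conditioned in the Keplerian set.
print(kepler_to_equinoctial(42164e3, 1e-6, 1e-6, 0.3, 0.2, 1.0))
```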

  6. Uniting statistical and individual-based approaches for animal movement modelling.

    PubMed

    Latombe, Guillaume; Parrott, Lael; Basille, Mathieu; Fortin, Daniel

    2014-01-01

    The dynamic nature of their internal states and the environment directly shape animals' spatial behaviours and give rise to emergent properties at broader scales in natural systems. However, integrating these dynamic features into habitat selection studies remains challenging, due to practically impossible field work to access internal states and the inability of current statistical models to produce dynamic outputs. To address these issues, we developed a robust method, which combines statistical and individual-based modelling. Using a statistical technique for forward modelling of the IBM has the advantage of being faster for parameterization than a pure inverse modelling technique and allows for robust selection of parameters. Using GPS locations from caribou monitored in Québec, caribou movements were modelled based on generative mechanisms accounting for dynamic variables at a low level of emergence. These variables were accessed by replicating real individuals' movements in parallel sub-models, and movement parameters were then empirically parameterized using Step Selection Functions. The final IBM model was validated using both k-fold cross-validation and emergent patterns validation and was tested for two different scenarios, with varying hardwood encroachment. Our results highlighted a functional response in habitat selection, which suggests that our method was able to capture the complexity of the natural system, and adequately provided projections on future possible states of the system in response to different management plans. This is especially relevant for testing the long-term impact of scenarios corresponding to environmental configurations that have yet to be observed in real systems.

  7. Uniting Statistical and Individual-Based Approaches for Animal Movement Modelling

    PubMed Central

    Latombe, Guillaume; Parrott, Lael; Basille, Mathieu; Fortin, Daniel

    2014-01-01

    The dynamic nature of their internal states and the environment directly shape animals' spatial behaviours and give rise to emergent properties at broader scales in natural systems. However, integrating these dynamic features into habitat selection studies remains challenging, due to practically impossible field work to access internal states and the inability of current statistical models to produce dynamic outputs. To address these issues, we developed a robust method, which combines statistical and individual-based modelling. Using a statistical technique for forward modelling of the IBM has the advantage of being faster for parameterization than a pure inverse modelling technique and allows for robust selection of parameters. Using GPS locations from caribou monitored in Québec, caribou movements were modelled based on generative mechanisms accounting for dynamic variables at a low level of emergence. These variables were accessed by replicating real individuals' movements in parallel sub-models, and movement parameters were then empirically parameterized using Step Selection Functions. The final IBM model was validated using both k-fold cross-validation and emergent patterns validation and was tested for two different scenarios, with varying hardwood encroachment. Our results highlighted a functional response in habitat selection, which suggests that our method was able to capture the complexity of the natural system, and adequately provided projections on future possible states of the system in response to different management plans. This is especially relevant for testing the long-term impact of scenarios corresponding to environmental configurations that have yet to be observed in real systems.

  8. Least-rattling feedback from strong time-scale separation

    NASA Astrophysics Data System (ADS)

    Chvykov, Pavel; England, Jeremy

    2018-03-01

    In most interacting many-body systems associated with some "emergent phenomena," we can identify subgroups of degrees of freedom that relax on dramatically different time scales. Time-scale separation of this kind is particularly helpful in nonequilibrium systems where only the fast variables are subjected to external driving; in such a case, it may be shown through elimination of fast variables that the slow coordinates effectively experience a thermal bath of spatially varying temperature. In this paper, we investigate how such a temperature landscape arises according to how the slow variables affect the character of the driven quasisteady state reached by the fast variables. Brownian motion in the presence of spatial temperature gradients is known to lead to the accumulation of probability density in low-temperature regions. Here, we focus on the implications of attraction to low effective temperature for the long-term evolution of slow variables. After quantitatively deriving the temperature landscape for a general class of overdamped systems using a path-integral technique, we then illustrate in a simple dynamical system how the attraction to low effective temperature has a fine-tuning effect on the slow variable, selecting configurations that bring about exceptionally low force fluctuation in the fast-variable steady state. We furthermore demonstrate that a particularly strong effect of this kind can take place when the slow variable is tuned to bring about orderly, integrable motion in the fast dynamics that avoids thermalizing energy absorbed from the drive. We thus point to a potentially general feedback mechanism in multi-time-scale active systems, that leads to the exploration of slow variable space, as if in search of fine tuning for a "least-rattling" response in the fast coordinates.

  9. Ab Initio Studies of Shock-Induced Chemical Reactions of Inter-Metallics

    NASA Astrophysics Data System (ADS)

    Zaharieva, Roussislava; Hanagud, Sathya

    2009-06-01

    Shock-induced and shock-assisted chemical reactions of intermetallic mixtures are studied by many researchers, using both experimental and theoretical techniques. The theoretical studies are primarily at continuum scales. The model frameworks include mixture theories and meso-scale models of grains of porous mixtures. The reaction models vary from equilibrium thermodynamic models to several non-equilibrium thermodynamic models. The shock effects are primarily studied using appropriate conservation equations and numerical techniques to integrate the equations. All these models require material constants from experiments and estimates of transition states. Thus, the objective of this paper is to present studies based on ab initio techniques. The ab initio studies, to date, use ab initio molecular dynamics. This paper presents a study that uses shock pressures and associated temperatures as starting variables. The intermetallic mixtures are modeled as slabs, and the required shock stresses are created by straining the lattice. Then, ab initio binding energy calculations are used to examine the stability of the reactions. Binding energies are obtained for different strain components superimposed on uniform compression and finite temperatures. Then, vibrational frequencies and nudged elastic band techniques are used to study reactivity and transition states. Examples include Ni and Al.

  10. Multi-Level Reduced Order Modeling Equipped with Probabilistic Error Bounds

    NASA Astrophysics Data System (ADS)

    Abdo, Mohammad Gamal Mohammad Mostafa

    This thesis develops robust reduced order modeling (ROM) techniques to achieve the efficiency needed to render feasible the use of high-fidelity tools for routine engineering analyses. Markedly different from state-of-the-art ROM techniques, our work focuses only on techniques that can quantify the credibility of the reduction, measured by reduction errors that are upper-bounded over the envisaged range of ROM application. Our objective is two-fold. First, further developments of ROM techniques are proposed for cases in which conventional ROM techniques are too taxing to be computationally practical. This is achieved via a multi-level ROM methodology designed to take advantage of the multi-scale modeling strategy typically employed for computationally taxing models, such as those associated with the modeling of nuclear reactor behavior. Second, the discrepancies between the original model and ROM predictions over the full range of application conditions are upper-bounded in a probabilistic sense with high probability. ROM techniques may be classified into two broad categories: surrogate construction techniques and dimensionality reduction techniques, with the latter being the primary focus of this work. We focus on dimensionality reduction because it offers a rigorous approach by which reduction errors can be quantified via upper bounds that are met in a probabilistic sense. Surrogate techniques typically rely on fitting a parametric model form to the original model at a number of training points, with the residual of the fit taken as a measure of the prediction accuracy of the surrogate. This approach, however, does not generally guarantee that the surrogate model predictions at points not included in the training process will be bounded by the error estimated from the fitting residual. Dimensionality reduction techniques employ a different philosophy, wherein randomized snapshots of the model variables, such as the model parameters, responses, or state variables, are projected onto lower-dimensional subspaces, referred to as "active subspaces", which are selected to capture a user-defined portion of the snapshot variations. Once determined, applying the ROM involves constraining the variables to the active subspaces. In doing so, the contribution from the variables' discarded components can be estimated using a fundamental theorem from random matrix theory which has its roots in Dixon's theory, developed in 1983. This theory was initially presented for linear matrix operators; the thesis extends the theorem's results to allow reduction of general smooth nonlinear operators. The result is an approach by which the adequacy of a given active subspace, determined using a given set of snapshots generated either with the full high-fidelity model or with lower-fidelity models, can be assessed, giving the analyst insight into the type of snapshots required to reach a reduction that satisfies user-defined tolerance limits on the reduction errors. Reactor physics calculations are employed as a test bed for the proposed developments. The focus is on reducing the effective dimensionality of the various data streams, such as the cross-section data and the neutron flux. The developed methods are applied to representative assembly-level calculations, where the sizes of the cross-section and flux spaces are typically large, as required by downstream core calculations, in order to capture the broad range of conditions expected during reactor operation.
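
    The snapshot-projection step can be sketched briefly. The toy model below is illustrative (not a reactor physics code): snapshots are decomposed by SVD, the active subspace is truncated at a user-defined energy tolerance, and the discarded energy serves as a crude proxy for the reduction error.

```python
import numpy as np

# Hedged sketch of snapshot-based dimensionality reduction with a toy
# high-dimensional state that really varies in only a few directions.
rng = np.random.default_rng(0)

def model_state(theta):
    """Toy 500-dimensional state driven by a low-dimensional latent."""
    basis = np.outer(np.sin(np.linspace(0, 3, 500)), [1, 0, 0]) + \
            np.outer(np.cos(np.linspace(0, 5, 500)), [0, 1, 0])
    return basis @ theta[:3] + 1e-3 * rng.standard_normal(500)

snapshots = np.column_stack([model_state(rng.standard_normal(10))
                             for _ in range(100)])

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999)) + 1      # user-defined tolerance
Ur = U[:, :r]                                    # active subspace basis

x = model_state(rng.standard_normal(10))
x_rom = Ur @ (Ur.T @ x)                          # constrained to subspace
print(f"rank {r}, relative error "
      f"{np.linalg.norm(x - x_rom) / np.linalg.norm(x):.2e}")
```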

  11. Biostatistics Series Module 10: Brief Overview of Multivariate Methods.

    PubMed

    Hazra, Avijit; Gogtay, Nithya

    2017-01-01

    Multivariate analysis refers to statistical techniques that simultaneously look at three or more variables in relation to the subjects under investigation with the aim of identifying or clarifying the relationships between them. These techniques have been broadly classified as dependence techniques, which explore the relationship between one or more dependent variables and their independent predictors, and interdependence techniques, which make no such distinction but treat all variables equally in a search for underlying relationships. Multiple linear regression models a situation where a single numerical dependent variable is to be predicted from multiple numerical independent variables. Logistic regression is used when the outcome variable is dichotomous in nature. The log-linear technique models count type of data and can be used to analyze cross-tabulations where more than two variables are included. Analysis of covariance is an extension of analysis of variance (ANOVA), in which an additional independent variable of interest, the covariate, is brought into the analysis. It tries to examine whether a difference persists after "controlling" for the effect of the covariate that can impact the numerical dependent variable of interest. Multivariate analysis of variance (MANOVA) is a multivariate extension of ANOVA used when multiple numerical dependent variables have to be incorporated in the analysis. Interdependence techniques are more commonly applied to psychometrics, social sciences and market research. Exploratory factor analysis and principal component analysis are related techniques that seek to extract, from a larger number of metric variables, a smaller number of composite factors or components, which are linearly related to the original variables. Cluster analysis aims to identify, in a large number of cases, relatively homogeneous groups called clusters, without prior information about the groups. The calculation-intensive nature of multivariate analysis has so far precluded most researchers from using these techniques routinely. The situation is now changing with the wider availability and increasing sophistication of statistical software, and researchers should no longer shy away from exploring the applications of multivariate methods to real-life data sets.

  12. Micropropagation systems of Feijoa (Acca sellowiana (O. Berg) Burret).

    PubMed

    Guerra, Miguel Pedro; Cangahuala-Inocente, Gabriela Claudia; Vesco, Lirio Luiz Dal; Pescador, Rosete; Caprestano, Clarissa Alves

    2013-01-01

    Acca sellowiana (O. Berg) Burret, syn. Feijoa sellowiana (Myrtaceae), is a semi-woody fruit species native to southern Brazil, Uruguay, and Argentina; its edible fruits are tasty. The naturally occurring populations in Santa Catarina State show high variability in fruit size, color, and other features. A breeding program launched in 1990 resulted in the release of four Brazilian commercial varieties. The conventional clonal propagation methods of this species, such as cutting and grafting, have shown low efficiency. Therefore, tissue culture techniques were developed for mass propagation. This chapter describes several protocols based on organogenesis and somatic embryogenesis. Additional techniques, including synthetic seed technology and the temporary immersion system, are also described.

  13. State of the art in marketing hospital foodservice departments.

    PubMed

    Pickens, C W; Shanklin, C W

    1985-11-01

    The purposes of this study were to identify the state of the art relative to the utilization of marketing techniques within hospital foodservice departments throughout the United States and to determine whether any relationships existed between the degree of utilization of marketing techniques and selected demographic characteristics of the foodservice administrators and/or operations. A validated questionnaire was mailed to 600 randomly selected hospital foodservice administrators requesting information related to marketing in their facilities. Forty-five percent of the questionnaires were returned and analyzed for frequency of response and significant relationships between variables. Chi-square was used for nominal data and Spearman rho for ranked data. Approximately 73% of the foodservice administrators stated that marketing was extremely important in the success of a hospital foodservice department. Respondents (79%) further indicated that marketing had become more important in their departments in the past 2 years. Departmental records, professional journals, foodservice suppliers, observation, and surveys were the sources most often used to obtain marketing data, a responsibility generally assumed by the foodservice director (86.2%). Merchandising, public relations, and word-of-mouth reputation were regarded as the most important aspects of marketing. Increased sales, participation, good will, departmental recognition, and employee satisfaction were used most frequently to evaluate the success of implemented marketing techniques. Marketing audits as a means of evaluating the success of marketing were used to a limited extent by the respondents.

  14. Composite pulses for interferometry in a thermal cold atom cloud

    NASA Astrophysics Data System (ADS)

    Dunning, Alexander; Gregory, Rachel; Bateman, James; Cooper, Nathan; Himsworth, Matthew; Jones, Jonathan A.; Freegarde, Tim

    2014-09-01

    Atom interferometric sensors and quantum information processors must maintain coherence while the evolving quantum wave function is split, transformed, and recombined, but suffer from experimental inhomogeneities and uncertainties in the speeds and paths of these operations. Several error-correction techniques have been proposed to isolate the variable of interest. Here we apply composite pulse methods to velocity-sensitive Raman state manipulation in a freely expanding thermal atom cloud. We compare several established pulse sequences, and follow the state evolution within them. The agreement between measurements and simple predictions shows the underlying coherence of the atom ensemble, and the inversion infidelity in a ∼80 μK atom cloud is halved. Composite pulse techniques, especially if tailored for atom interferometric applications, should allow greater interferometer areas, larger atomic samples, and longer interaction times, and hence improve the sensitivity of quantum technologies from inertial sensing and clocks to quantum information processors and tests of fundamental physics.

  15. An Advanced Framework for Improving Situational Awareness in Electric Power Grid Operation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yousu; Huang, Zhenyu; Zhou, Ning

    With the deployment of new smart grid technologies and the penetration of renewable energy in power systems, significant uncertainty and variability is being introduced into power grid operation. Traditionally, the Energy Management System (EMS) operates the power grid in a deterministic mode, and thus will not be sufficient for the future control center in a stochastic environment with faster dynamics. One of the main challenges is to improve situational awareness. This paper reviews the current status of power grid operation and presents a vision of improving wide-area situational awareness for a future control center. An advanced framework, consisting of parallel state estimation, state prediction, parallel contingency selection, parallel contingency analysis, and advanced visual analytics, is proposed to provide capabilities needed for better decision support by utilizing high performance computing (HPC) techniques and advanced visual analytic techniques. Research results are presented to support the proposed vision and framework.

  16. Automatic assessment of voice quality according to the GRBAS scale.

    PubMed

    Sáenz-Lechón, Nicolás; Godino-Llorente, Juan I; Osma-Ruiz, Víctor; Blanco-Velasco, Manuel; Cruz-Roldán, Fernando

    2006-01-01

    Nowadays, the most widespread techniques for measuring voice quality are based on perceptual evaluation by well-trained professionals. The GRBAS scale is a widely used method for perceptual evaluation of voice quality; it is in common use in Japan, and there is increasing interest in both Europe and the United States. However, this technique requires well-trained experts, is based on the evaluator's expertise, and depends heavily on the evaluator's own psycho-physical state. Furthermore, great variability is observed between the assessments of different evaluators. Therefore, an objective method to provide such a measurement of voice quality would be very valuable. In this paper, the automatic assessment of voice quality is addressed by means of short-term Mel cepstral parameters (MFCC) and learning vector quantization (LVQ) in a pattern recognition stage. Results show that this approach performs acceptably for this purpose, with accuracy around 65% at best.
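
    The pattern-recognition stage can be sketched with a minimal LVQ1 classifier; the "MFCC-like" features below are synthetic stand-ins (real MFCCs would come from a speech toolbox), and only two classes are used instead of the four GRBAS levels.

```python
import numpy as np

# Hedged LVQ1 sketch on synthetic 12-dimensional "MFCC-like" vectors;
# two mock classes stand in for the GRBAS grade levels.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (100, 12)),
               rng.normal(1.5, 1.0, (100, 12))])
y = np.repeat([0, 1], 100)

# one prototype per class, initialised at the class means
protos = np.array([X[y == c].mean(axis=0) for c in (0, 1)])
labels = np.array([0, 1])

lr = 0.05
for epoch in range(30):
    for i in rng.permutation(len(X)):
        j = np.argmin(np.linalg.norm(protos - X[i], axis=1))  # nearest prototype
        sign = 1.0 if labels[j] == y[i] else -1.0             # attract or repel
        protos[j] += sign * lr * (X[i] - protos[j])
    lr *= 0.95                                                # learning-rate decay

pred = labels[np.argmin(np.linalg.norm(
    protos[None, :, :] - X[:, None, :], axis=2), axis=1)]
print("training accuracy:", (pred == y).mean())
```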

  17. Results of an International Survey on the Investigation and Endovascular Management of Cerebral Vasospasm and Delayed Cerebral Ischemia.

    PubMed

    Hollingworth, Milo; Chen, Peng Roc; Goddard, Antony J P; Coulthard, Alan; Söderman, Michael; Bulsara, Ketan R

    2015-06-01

    Delayed cerebral ischemia (DCI) is a major cause of morbidity and mortality in aneurysmal subarachnoid hemorrhage. Endovascular management of this condition offers new hope in preventing adverse outcomes; however, a uniform standard of practice is lacking owing to a paucity of clinical trials. We conducted an international survey on the use of investigative and endovascular techniques in the treatment of DCI to assess the variability of current practice. Neurovascular neurosurgeons and neuroradiologists were contacted through professional societies from America, the United Kingdom, Europe, and Australasia. Members were invited to complete a 13-item questionnaire regarding screening techniques, first-line and second-line therapies in endovascular intervention, and the role of angioplasty. Answers were compared using χ² testing for nonparametric data. Data from 344 respondents from 32 countries were analyzed: 167 non-United States and 177 U.S. More than half of all clinicians had 10+ years of experience in units with a mixture of higher and lower case volumes. Daily transcranial Doppler ultrasonography was the most commonly used screening technique by both U.S. (70%) and non-U.S. (53%) practitioners. Verapamil was the most common first-line therapy in the United States, whereas nimodipine was most popular in non-U.S. countries. Angioplasty was performed by 83% of non-U.S. and 91% of U.S. clinicians in the treatment of vasospasm; however, more U.S. clinicians reported using angioplasty for distal vasospasm. Treatment practices for DCI vary considerably, with the greatest variability in the choice of agent for intra-arterial therapy. Our data demonstrate the wide variation of approaches in use at present. However, without further clinical trials and development of a uniform standard of best practice, variability in treatment and outcome for DCI is likely to continue.

  18. Econometrics in outcomes research: the use of instrumental variables.

    PubMed

    Newhouse, J P; McClellan, M

    1998-01-01

    We describe an econometric technique, instrumental variables, that can be useful in estimating the effectiveness of clinical treatments in situations where a controlled trial has not been or cannot be done. This technique relies upon the existence of one or more variables that induce substantial variation in the treatment variable but have no direct effect on the outcome variable of interest. We illustrate the use of the technique with an application to aggressive treatment of acute myocardial infarction in the elderly.
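
    A minimal two-stage least squares (2SLS) sketch, the textbook way of exploiting such an instrument, is shown below on synthetic data; the instrument and effect sizes are made up, not taken from the myocardial infarction application.

```python
import numpy as np

# Hedged 2SLS sketch: instrument z shifts treatment t but affects the
# outcome y only through t; the data-generating process is invented.
rng = np.random.default_rng(0)
n = 5000
u = rng.standard_normal(n)                    # unobserved confounder
z = rng.standard_normal(n)                    # instrument
t = (0.8 * z + u + rng.standard_normal(n) > 0).astype(float)  # treatment
y = 1.0 * t + 2.0 * u + rng.standard_normal(n)                # true effect = 1

X = np.column_stack([np.ones(n), t])
Z = np.column_stack([np.ones(n), z])

# naive OLS is biased upward because t is correlated with u
b_ols = np.linalg.lstsq(X, y, rcond=None)[0]

# stage 1: project t on the instrument; stage 2: regress y on fitted t
t_hat = Z @ np.linalg.lstsq(Z, t, rcond=None)[0]
X2 = np.column_stack([np.ones(n), t_hat])
b_iv = np.linalg.lstsq(X2, y, rcond=None)[0]

print(f"OLS effect {b_ols[1]:.2f} (biased), IV effect {b_iv[1]:.2f}")
```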

  19. Estimates of CO2 from fires in the United States: implications for carbon management.

    PubMed

    Wiedinmyer, Christine; Neff, Jason C

    2007-11-01

    Fires emit significant amounts of CO2 to the atmosphere. These emissions, however, are highly variable in both space and time. Additionally, CO2 emissions estimates from fires are very uncertain. The combination of high spatial and temporal variability and substantial uncertainty associated with fire CO2 emissions can be problematic to efforts to develop remote sensing, monitoring, and inverse modeling techniques to quantify carbon fluxes at the continental scale. Policy and carbon management decisions based on atmospheric sampling/modeling techniques must account for the impact of fire CO2 emissions; a task that may prove very difficult for the foreseeable future. This paper addresses the variability of CO2 emissions from fires across the US, how these emissions compare to anthropogenic emissions of CO2 and Net Primary Productivity, and the potential implications for monitoring programs and policy development. Average annual CO2 emissions from fires in the lower 48 (LOWER48) states from 2002-2006 are estimated to be 213 (+/- 50 std. dev.) Tg CO2 yr-1 and 80 (+/- 89 std. dev.) Tg CO2 yr-1 in Alaska. These estimates have significant interannual and spatial variability. Needleleaf forests in the Southeastern US and the Western US are the dominant source regions for US fire CO2 emissions. Very high emission years typically coincide with droughts, and climatic variability is a major driver of the high interannual and spatial variation in fire emissions. The amount of CO2 emitted from fires in the US is equivalent to 4-6% of anthropogenic emissions at the continental scale and, at the state-level, fire emissions of CO2 can, in some cases, exceed annual emissions of CO2 from fossil fuel usage. The CO2 released from fires, overall, is a small fraction of the estimated average annual Net Primary Productivity and, unlike fossil fuel CO2 emissions, the pulsed emissions of CO2 during fires are partially counterbalanced by uptake of CO2 by regrowing vegetation in the decades following fire. Changes in fire severity and frequency can, however, lead to net changes in atmospheric CO2 and the short-term impacts of fire emissions on monitoring, modeling, and carbon management policy are substantial.

  20. Yielding physically-interpretable emulators - A Sparse PCA approach

    NASA Astrophysics Data System (ADS)

    Galelli, S.; Alsahaf, A.; Giuliani, M.; Castelletti, A.

    2015-12-01

    Projection-based techniques, such as Proper Orthogonal Decomposition (POD), are a common approach to surrogate high-fidelity process-based models with lower-order dynamic emulators. With POD, the dimensionality reduction is achieved by using observations, or 'snapshots', generated with the high-fidelity model, to project the entire set of input and state variables of this model onto a smaller set of basis functions that account for most of the variability in the data. While the reduction efficiency and variance control of POD techniques are usually very high, the resulting emulators are structurally highly complex and can hardly be given a physically meaningful interpretation, as each basis is a projection of the entire set of inputs and states. In this work, we propose a novel approach based on Sparse Principal Component Analysis (SPCA) that combines the several assets of POD methods with the potential for ex-post interpretation of the emulator structure. SPCA reduces the number of non-zero coefficients in the basis functions by identifying a sparse matrix of coefficients. While the resulting set of basis functions may retain less variance of the snapshots, the presence of a few non-zero coefficients assists in the interpretation of the underlying physical processes. The SPCA approach is tested on the reduction of a 1D hydro-ecological model (DYRESM-CAEDYM) used to describe the main ecological and hydrodynamic processes in Tono Dam, Japan. An experimental comparison against a standard POD approach shows that SPCA achieves the same accuracy in emulating a given output variable, for the same level of dimensionality reduction, while yielding better insights into the main process dynamics.
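
    The contrast between dense and sparse bases can be sketched with scikit-learn's PCA and SparsePCA on synthetic snapshots (standing in for DYRESM-CAEDYM output): the sparse components concentrate their nonzero loadings on a few variables, which is what permits the physical reading.

```python
import numpy as np
from sklearn.decomposition import PCA, SparsePCA

# Hedged sketch: synthetic snapshots with three disjoint variable blocks,
# loosely mimicking distinct physical processes; not real model output.
rng = np.random.default_rng(0)
n_vars, n_snap = 30, 200
latent = rng.standard_normal((3, n_snap))
mixing = np.zeros((n_vars, 3))
mixing[0:5, 0] = 1.0     # e.g. a temperature block
mixing[10:15, 1] = 1.0   # e.g. an oxygen block
mixing[20:25, 2] = 1.0   # e.g. a chlorophyll block
snaps = (mixing @ latent).T + 0.05 * rng.standard_normal((n_snap, n_vars))

dense = PCA(n_components=3).fit(snaps)
sparse = SparsePCA(n_components=3, alpha=1.0, random_state=0).fit(snaps)

# dense components spread loadings over all variables; sparse ones do not
print("nonzero loadings per dense component:",
      (np.abs(dense.components_) > 1e-3).sum(axis=1))
print("nonzero loadings per sparse component:",
      (sparse.components_ != 0).sum(axis=1))
```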

  1. Confronting weather and climate models with observational data from soil moisture networks over the United States

    PubMed Central

    Dirmeyer, Paul A.; Wu, Jiexia; Norton, Holly E.; Dorigo, Wouter A.; Quiring, Steven M.; Ford, Trenton W.; Santanello, Joseph A.; Bosilovich, Michael G.; Ek, Michael B.; Koster, Randal D.; Balsamo, Gianpaolo; Lawrence, David M.

    2018-01-01

    Four land surface models in uncoupled and coupled configurations are compared to observations of daily soil moisture from 19 networks in the conterminous United States to determine the viability of such comparisons and explore the characteristics of model and observational data. First, observations are analyzed for error characteristics and representation of spatial and temporal variability. Some networks have multiple stations within an area comparable to model grid boxes; for those we find that aggregation of stations before calculation of statistics has little effect on estimates of variance, but soil moisture memory is sensitive to aggregation. Statistics for some networks stand out as unlike those of their neighbors, likely due to differences in instrumentation, calibration and maintenance. Buried sensors appear to have less random error than near-field remote sensing techniques, and heat dissipation sensors show less temporal variability than other types. Model soil moistures are evaluated using three metrics: standard deviation in time, temporal correlation (memory) and spatial correlation (length scale). Models do relatively well in capturing large-scale variability of metrics across climate regimes, but poorly reproduce observed patterns at scales of hundreds of kilometers and smaller. Uncoupled land models do no better than coupled model configurations, nor do reanalyses outperform free-running models. Spatial decorrelation scales are found to be difficult to diagnose. Using data for model validation, calibration or data assimilation from multiple soil moisture networks with different types of sensors and measurement techniques requires great caution. Data from models and observations should be put on the same spatial and temporal scales before comparison.

  2. Confronting Weather and Climate Models with Observational Data from Soil Moisture Networks over the United States

    NASA Technical Reports Server (NTRS)

    Dirmeyer, Paul A.; Wu, Jiexia; Norton, Holly E.; Dorigo, Wouter A.; Quiring, Steven M.; Ford, Trenton W.; Santanello, Joseph A., Jr.; Bosilovich, Michael G.; Ek, Michael B.; Koster, Randal Dean

    2016-01-01

    Four land surface models in uncoupled and coupled configurations are compared to observations of daily soil moisture from 19 networks in the conterminous United States to determine the viability of such comparisons and explore the characteristics of model and observational data. First, observations are analyzed for error characteristics and representation of spatial and temporal variability. Some networks have multiple stations within an area comparable to model grid boxes; for those we find that aggregation of stations before calculation of statistics has little effect on estimates of variance, but soil moisture memory is sensitive to aggregation. Statistics for some networks stand out as unlike those of their neighbors, likely due to differences in instrumentation, calibration and maintenance. Buried sensors appear to have less random error than near-field remote sensing techniques, and heat dissipation sensors show less temporal variability than other types. Model soil moistures are evaluated using three metrics: standard deviation in time, temporal correlation (memory) and spatial correlation (length scale). Models do relatively well in capturing large-scale variability of metrics across climate regimes, but poorly reproduce observed patterns at scales of hundreds of kilometers and smaller. Uncoupled land models do no better than coupled model configurations, nor do reanalyses outperform free-running models. Spatial decorrelation scales are found to be difficult to diagnose. Using data for model validation, calibration or data assimilation from multiple soil moisture networks with different types of sensors and measurement techniques requires great caution. Data from models and observations should be put on the same spatial and temporal scales before comparison.

  3. Comparison of full field and anomaly initialisation for decadal climate prediction: towards an optimal consistency between the ocean and sea-ice anomaly initialisation state

    NASA Astrophysics Data System (ADS)

    Volpi, Danila; Guemas, Virginie; Doblas-Reyes, Francisco J.

    2017-08-01

    Decadal prediction exploits sources of predictability from both the internal variability, through the initialisation of the climate model from observational estimates, and the external radiative forcings. When a model is initialised with the observed state at the initial time step (Full Field Initialisation, FFI), the forecast run drifts towards the biased model climate. Distinguishing between the climate signal to be predicted and the model drift is a challenging task, because the application of a posteriori bias correction carries the risk of removing part of the variability signal. The anomaly initialisation (AI) technique aims at addressing the drift issue by answering the following question: if the model is allowed to start close to its own attractor (i.e. its biased world), but the phase of the simulated variability is constrained toward the contemporaneous observed one at the initialisation time, does the prediction skill improve? The relative merits of the FFI and AI techniques, applied respectively to the ocean component and to the ocean and sea ice components simultaneously in the EC-Earth global coupled model, are assessed. For both strategies the initialised hindcasts show better skill than historical simulations for the ocean heat content and AMOC along the first two forecast years, and for sea ice and PDO along the first forecast year, while for AMO the improvements are statistically significant for the first two forecast years. The AI in the ocean and sea ice components significantly improves the skill of the Arctic sea surface temperature over the FFI.

  4. Confronting weather and climate models with observational data from soil moisture networks over the United States.

    PubMed

    Dirmeyer, Paul A; Wu, Jiexia; Norton, Holly E; Dorigo, Wouter A; Quiring, Steven M; Ford, Trenton W; Santanello, Joseph A; Bosilovich, Michael G; Ek, Michael B; Koster, Randal D; Balsamo, Gianpaolo; Lawrence, David M

    2016-04-01

    Four land surface models in uncoupled and coupled configurations are compared to observations of daily soil moisture from 19 networks in the conterminous United States to determine the viability of such comparisons and explore the characteristics of model and observational data. First, observations are analyzed for error characteristics and representation of spatial and temporal variability. Some networks have multiple stations within an area comparable to model grid boxes; for those we find that aggregation of stations before calculation of statistics has little effect on estimates of variance, but soil moisture memory is sensitive to aggregation. Statistics for some networks stand out as unlike those of their neighbors, likely due to differences in instrumentation, calibration and maintenance. Buried sensors appear to have less random error than near-field remote sensing techniques, and heat dissipation sensors show less temporal variability than other types. Model soil moistures are evaluated using three metrics: standard deviation in time, temporal correlation (memory) and spatial correlation (length scale). Models do relatively well in capturing large-scale variability of metrics across climate regimes, but poorly reproduce observed patterns at scales of hundreds of kilometers and smaller. Uncoupled land models do no better than coupled model configurations, nor do reanalyses outperform free-running models. Spatial decorrelation scales are found to be difficult to diagnose. Using data for model validation, calibration or data assimilation from multiple soil moisture networks with different types of sensors and measurement techniques requires great caution. Data from models and observations should be put on the same spatial and temporal scales before comparison.

  5. Controlling for confounding variables in MS-omics protocol: why modularity matters.

    PubMed

    Smith, Rob; Ventura, Dan; Prince, John T

    2014-09-01

    As the field of bioinformatics research continues to grow, more and more novel techniques are proposed to meet new challenges and to improve upon solutions to long-standing problems. These include data processing techniques and wet lab protocol techniques. Although the literature is consistently thorough in experimental detail and variable-controlling rigor for wet lab protocol techniques, bioinformatics techniques tend to be less described and less controlled. As the validation or rejection of hypotheses rests on the experiment's ability to isolate and measure a variable of interest, we urge the importance of reducing confounding variables in bioinformatics techniques during mass spectrometry experimentation.

  6. Uncertainty Analysis of Historical Hurricane Data

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.

    2007-01-01

    An analysis of variance (ANOVA) study was conducted for historical hurricane data dating back to 1851 that was obtained from the U. S. Department of Commerce National Oceanic and Atmospheric Administration (NOAA). The data set was chosen because it is a large, publicly available collection of information, exhibiting great variability which has made the forecasting of future states, from current and previous states, difficult. The availability of substantial, high-fidelity validation data, however, made for an excellent uncertainty assessment study. Several factors (independent variables) were identified from the data set, which could potentially influence the track and intensity of the storms. The values of these factors, along with the values of responses of interest (dependent variables) were extracted from the data base, and provided to a commercial software package for processing via the ANOVA technique. The primary goal of the study was to document the ANOVA modeling uncertainty and predictive errors in making predictions about hurricane location and intensity 24 to 120 hours beyond known conditions, as reported by the data set. A secondary goal was to expose the ANOVA technique to a broader community within NASA. The independent factors considered to have an influence on the hurricane track included the current and starting longitudes and latitudes (measured in degrees), and current and starting maximum sustained wind speeds (measured in knots), and the storm starting date, its current duration from its first appearance, and the current year fraction of each reading, all measured in years. The year fraction and starting date were included in order to attempt to account for long duration cyclic behaviors, such as seasonal weather patterns, and years in which the sea or atmosphere were unusually warm or cold. The effect of short duration weather patterns and ocean conditions could not be examined with the current data set. The responses analyzed were the storm latitude, longitude and intensity, as recorded in the data set, 24 or 120 hours beyond the current state. Several ANOVA modeling schemes were examined. Two forms of validation were used: 1) comparison with official hurricane prediction performance metrics and 2) cases studies conducted on hurricanes from the 2005 season, which were not included within the model construction and ANOVA assessment. In general, the ANOVA technique did not perform as well as the established official prediction performance metrics published by NOAA; still, the technique did remarkably well in this demonstration with a difficult data set and could probably be made to perform better with more knowledge of hurricane development and dynamics applied to the problem. The technique provides a repeatable prediction process that eliminates the need for judgment in the forecast.
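
    The screening step can be sketched with a standard ANOVA on a mock storm table; the columns mirror the factors listed above, but the data are synthetic rather than the NOAA best-track records.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hedged ANOVA sketch on synthetic storm data; column names are
# illustrative stand-ins for the factors described in the abstract.
rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "cur_lat":   rng.uniform(10, 40, n),       # current latitude [deg]
    "cur_lon":   rng.uniform(-90, -30, n),     # current longitude [deg]
    "cur_wind":  rng.uniform(30, 140, n),      # max sustained wind [kt]
    "year_frac": rng.uniform(0.55, 0.95, n),   # season timing [yr]
})
# response: latitude 24 h ahead, driven mainly by current latitude
df["lat_24h"] = (df.cur_lat + 1.5 + 0.02 * df.cur_wind
                 + rng.normal(0, 0.8, n))

model = ols("lat_24h ~ cur_lat + cur_lon + cur_wind + year_frac", df).fit()
print(sm.stats.anova_lm(model, typ=2))         # which factors matter
print("24 h latitude prediction s.e.:", round(model.resid.std(), 2))
```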

  7. Scaling Up Decision Theoretic Planning to Planetary Rover Problems

    NASA Technical Reports Server (NTRS)

    Meuleau, Nicolas; Dearden, Richard; Washington, Rich

    2004-01-01

    Because of communication limits, planetary rovers must operate autonomously for extended durations. The ability to plan under uncertainty is one of the main components of autonomy. Previous approaches to planning under uncertainty in NASA applications are not able to address the challenges of future missions because of several apparent limits. On the other hand, decision theory provides a solid, principled framework for reasoning about uncertainty and rewards. Unfortunately, there are several obstacles to a direct application of decision-theoretic techniques to the rover domain. This paper focuses on the issues of structure and concurrency, and of continuous state variables. We describe two techniques currently under development that specifically address these issues and allow scaling up decision-theoretic solution techniques to planetary rover planning problems involving a small number of goals.

  8. Facial soft tissue thickness of Brazilian adults.

    PubMed

    Tedeschi-Oliveira, Sílvia Virginia; Melani, Rodolfo Francisco Haltenhoff; de Almeida, Natalie Haddad; de Paiva, Luiz Airton Saavedra

    2009-12-15

    The auxiliary technique known as Facial Reconstruction enables one to reestablish the contours of the soft tissues over the skull, thereby producing a face and increasing the probability of facial recognition. The reliability of this technique depends on the evaluation of the mean soft tissue thickness values observed in a given population. Measurements were taken on autopsied corpses at the "Section of Technical Verification of Deaths" in Guarulhos, São Paulo, Brazil. Thickness was measured manually by puncturing 10 midline craniometric points and 11 bilateral points on a sample of 40 corpses of both sexes, aged between 17 and 90 years, classified by skin color and nutritional state. Average thickness values were higher for males; variations related to nutritional state were proportional to the amount of facial fat, and age was not significant. Compared with studies of other populations, the ethnic variable related to skin color showed differences, indicating the need for a population-specific reference table when applying the Facial Reconstruction technique to skulls of unknown identity.

  9. Effective Interpolation of Incomplete Satellite-Derived Leaf-Area Index Time Series for the Continental United States

    NASA Technical Reports Server (NTRS)

    Jasinski, Michael F.; Borak, Jordan S.

    2008-01-01

    Many earth science modeling applications employ continuous input data fields derived from satellite data. Environmental factors, sensor limitations and algorithmic constraints lead to data products of inherently variable quality. This necessitates interpolation of one form or another in order to produce high quality input fields free of missing data. The present research tests several interpolation techniques as applied to satellite-derived leaf area index, an important quantity in many global climate and ecological models. The study evaluates and applies a variety of interpolation techniques for the Moderate Resolution Imaging Spectroradiometer (MODIS) Leaf-Area Index Product over the time period 2001-2006 for a region containing the conterminous United States. Results indicate that the accuracy of an individual interpolation technique depends upon the underlying land cover. Spatial interpolation provides better results in forested areas, while temporal interpolation performs more effectively over non-forest cover types. Combination of spatial and temporal approaches offers superior interpolative capabilities to any single method, and in fact, generation of continuous data fields requires a hybrid approach such as this.
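
    A hybrid gap-filler of the kind suggested above can be sketched in a few lines, assuming a synthetic LAI cube with missing retrievals: temporal interpolation is tried first along each pixel's series, with a spatial neighbourhood mean as a fallback.

```python
import numpy as np

# Hedged sketch of a temporal-then-spatial gap-filler, loosely following
# the paper's hybrid idea; the LAI cube (time, y, x) is synthetic.
rng = np.random.default_rng(0)
lai = np.clip(3 + 2 * np.sin(np.linspace(0, 2*np.pi, 46))[:, None, None]
              + 0.3 * rng.standard_normal((46, 20, 20)), 0, None)
lai[rng.random(lai.shape) < 0.2] = np.nan        # 20% missing retrievals

filled = lai.copy()
t = np.arange(lai.shape[0])
for iy in range(lai.shape[1]):                   # temporal pass per pixel
    for ix in range(lai.shape[2]):
        series = filled[:, iy, ix]               # view into `filled`
        good = ~np.isnan(series)
        if 1 < good.sum() < len(series):
            series[~good] = np.interp(t[~good], t[good], series[good])

# spatial fallback for anything still missing (e.g. under-sampled pixels)
still = np.isnan(filled)
for ti, iy, ix in zip(*np.where(still)):
    patch = filled[ti, max(iy-1, 0):iy+2, max(ix-1, 0):ix+2]
    filled[ti, iy, ix] = np.nanmean(patch)       # 3x3 neighbourhood mean

print("remaining gaps:", int(np.isnan(filled).sum()))
```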

  10. An Optimal Orthogonal Decomposition Method for Kalman Filter-Based Turbofan Engine Thrust Estimation

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan S.

    2007-01-01

    A new linear point design technique is presented for the determination of tuning parameters that enable the optimal estimation of unmeasured engine outputs, such as thrust. The engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters related to each major engine component. Accurate thrust reconstruction depends on knowledge of these health parameters, but there are usually too few sensors to be able to estimate their values. In this new technique, a set of tuning parameters is determined that accounts for degradation by representing the overall effect of the larger set of health parameters as closely as possible in a least squares sense. The technique takes advantage of the properties of the singular value decomposition of a matrix to generate a tuning parameter vector of low enough dimension that it can be estimated by a Kalman filter. A concise design procedure to generate a tuning vector that specifically takes into account the variables of interest is presented. An example demonstrates the tuning parameters' ability to facilitate matching of both measured and unmeasured engine outputs, as well as state variables. Additional properties of the formulation are shown to lend themselves well to diagnostics.
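
    The SVD step at the heart of the technique can be sketched as follows, with a random influence matrix standing in for the engine model's sensitivity of measured outputs to health parameters.

```python
import numpy as np

# Hedged sketch: G maps many health parameters to few measured outputs;
# its leading right singular vectors give a low-dimensional tuning
# vector that reproduces most of G's effect and is small enough for a
# Kalman filter to estimate. G here is random, not an engine model.
rng = np.random.default_rng(0)
n_meas, n_health = 6, 10
G = rng.standard_normal((n_meas, n_health))   # output sensitivity matrix

U, s, Vt = np.linalg.svd(G, full_matrices=False)
k = 3                                         # tuner dimension <= n_meas
V_k = Vt[:k].T                                # health -> tuner map

# approximate the effect of an arbitrary degradation with k tuners
h = rng.standard_normal(n_health)             # true health-parameter shift
q = V_k.T @ h                                 # least-squares tuning vector
err = np.linalg.norm(G @ h - G @ (V_k @ q)) / np.linalg.norm(G @ h)
print(f"relative output-matching error with {k} tuners: {err:.2%}")
```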

  11. An Optimal Orthogonal Decomposition Method for Kalman Filter-Based Turbofan Engine Thrust Estimation

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan S.

    2005-01-01

    A new linear point design technique is presented for the determination of tuning parameters that enable the optimal estimation of unmeasured engine outputs, such as thrust. The engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters related to each major engine component. Accurate thrust reconstruction depends upon knowledge of these health parameters, but there are usually too few sensors to estimate their values. In this new technique, a set of tuning parameters is determined that accounts for degradation by representing the overall effect of the larger set of health parameters as closely as possible in a least-squares sense. The technique takes advantage of the properties of the singular value decomposition of a matrix to generate a tuning parameter vector of low enough dimension that it can be estimated by a Kalman filter. A concise design procedure to generate a tuning vector that specifically takes into account the variables of interest is presented. An example demonstrates the tuning parameters' ability to facilitate matching of both measured and unmeasured engine outputs, as well as state variables. Additional properties of the formulation are shown to lend themselves well to diagnostics.

  12. Different techniques of multispectral data analysis for vegetation fraction retrieval

    NASA Astrophysics Data System (ADS)

    Kancheva, Rumiana; Georgiev, Georgi

    2012-07-01

    Vegetation monitoring is one of the most important applications of remote sensing technologies. With respect to farmlands, the assessment of crop condition constitutes the basis for monitoring growth, development, and yield processes. Plant condition is defined by a set of biometric variables, such as density, height, biomass amount, and leaf area index. The canopy cover fraction is closely related to these variables and is indicative of the state of the growth process. At the same time, it is a defining factor of the spectral signatures of the soil-vegetation system. That is why spectral mixture decomposition is a primary objective in remotely sensed data processing and interpretation, specifically in agricultural applications. The actual usefulness of the applied methods depends on their prediction reliability. The goal of this paper is to present and compare different techniques for quantitative endmember extraction from the reflectance of soil-crop patterns. These techniques include: linear spectral unmixing, two-dimensional spectra analysis, spectral ratio analysis (vegetation indices), spectral derivative analysis (red edge position), and colorimetric analysis (tristimulus values sum, chromaticity coordinates and dominant wavelength). The objective is to reveal their potential, accuracy and robustness for plant fraction estimation from multispectral data. Regression relationships have been established between crop canopy cover and various spectral estimators.
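
    Of the listed techniques, linear spectral unmixing is the most direct to demonstrate. The sketch below assumes a two-endmember (soil/vegetation) model with made-up four-band reflectance spectra and recovers abundance fractions by non-negative least squares; the paper's own spectra and constraints may differ.

    import numpy as np
    from scipy.optimize import nnls

    # Endmember reflectance spectra (columns): bare soil and green vegetation.
    # Values are illustrative 4-band reflectances, not measured spectra.
    E = np.array([[0.10, 0.04],    # blue
                  [0.14, 0.06],    # green
                  [0.22, 0.05],    # red
                  [0.28, 0.45]])   # near-infrared

    # Synthetic mixed pixel: 65% soil, 35% vegetation, plus sensor noise.
    pixel = 0.65 * E[:, 0] + 0.35 * E[:, 1] + np.random.normal(0, 0.005, 4)

    # Non-negative least squares keeps fractions physically meaningful;
    # normalizing enforces the sum-to-one abundance constraint approximately.
    fractions, _ = nnls(E, pixel)
    fractions /= fractions.sum()
    print(dict(zip(["soil", "vegetation"], fractions.round(3))))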

  13. Techniques for estimating streamflow characteristics in the Eastern and Interior coal provinces of the United States

    USGS Publications Warehouse

    Wetzel, Kim L.; Bettandorff, J.M.

    1986-01-01

    Techniques are presented for estimating various streamflow characteristics, such as peak flows, mean monthly and annual flows, flow durations, and flow volumes, at ungaged sites on unregulated streams in the Eastern Coal region. Streamflow data and basin characteristics for 629 gaging stations were used to develop multiple-linear-regression equations. Separate equations were developed for the Eastern and Interior Coal Provinces. Drainage area is an independent variable common to all equations. Other variables needed, depending on the streamflow characteristic, are mean annual precipitation, mean basin elevation, main channel length, basin storage, main channel slope, and forest cover. A ratio of the observed 50- to 90-percent flow durations was used in the development of relations to estimate low-flow frequencies in the Eastern Coal Province. Relations to estimate low flows in the Interior Coal Province are not presented because the standard errors were greater than 0.7500 log units and were considered to be of poor reliability.
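
    Regional equations of this kind are typically fit as linear models in log space, so the final equation takes the power-law form Q = a * A^b * P^c. A minimal sketch with synthetic data follows; the variables and coefficients are illustrative, not the report's.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 200
    area = 10 ** rng.uniform(0, 3, n)          # drainage area, mi^2
    precip = rng.uniform(35, 55, n)            # mean annual precipitation, in.
    peak = 40 * area**0.75 * precip**0.9 * 10 ** rng.normal(0, 0.1, n)

    # Fit log10(Q) = b0 + b1*log10(A) + b2*log10(P), the usual regional form.
    X = np.column_stack([np.ones(n), np.log10(area), np.log10(precip)])
    coef, *_ = np.linalg.lstsq(X, np.log10(peak), rcond=None)
    b0, b1, b2 = coef
    print(f"Q = {10**b0:.1f} * A^{b1:.2f} * P^{b2:.2f}")

    # Standard error in log units, the reliability measure quoted in the report.
    resid = np.log10(peak) - X @ coef
    print("standard error (log units):", resid.std(ddof=3).round(3))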

  14. Nonlinear zero-sum differential game analysis by singular perturbation methods

    NASA Technical Reports Server (NTRS)

    Shinar, J.; Farber, N.

    1982-01-01

    A class of nonlinear, zero-sum differential games exhibiting time-scale separation properties can be analyzed by singular-perturbation techniques. The merits of such an analysis, leading to an approximate game solution, as well as the 'well-posedness' of the formulation, are discussed. This approach is shown to be attractive for investigating pursuit-evasion problems; the original multidimensional differential game is decomposed into a 'simple pursuit' (free-stream) game and two independent (boundary-layer) optimal-control problems. Using multiple time-scale boundary-layer models results in a pair of uniformly valid zero-order composite feedback strategies. The dependence of suboptimal strategies on relative geometry and own-state measurements is demonstrated by a three-dimensional, constant-speed example. For game analysis with realistic vehicle dynamics, the technique of forced singular perturbations and a variable modeling approach are proposed. The accuracy of the analysis is evaluated by comparison with the numerical solution of a time-optimal, variable-speed 'game of two cars' in the horizontal plane.

  15. Resilience, rapid transitions and regime shifts: fingerprinting the responses of Lake Żabińskie (NE Poland) to climate variability and human disturbance since 1000 AD

    NASA Astrophysics Data System (ADS)

    Tylmann, Wojciech; Hernández-Almeida, Iván; Grosjean, Martin; Gómez Navarro, Juan José; Larocque-Tobler, Isabelle; Bonk, Alicja; Enters, Dirk; Ustrzycka, Alicja; Piotrowska, Natalia; Przybylak, Rajmund; Wacnik, Agnieszka; Witak, Małgorzata

    2016-04-01

    Rapid ecosystem transitions and adverse effects on ecosystem services as responses to combined climate and human impacts are of major concern. Yet few quantitative observational data exist, particularly for ecosystems that have a long history of human intervention. Here, we combine quantitative summer and winter climate reconstructions, climate model simulations and proxies for three major environmental pressures (land use, nutrients and erosion) to explore the system dynamics, resilience, and the role of disturbance regimes in varved eutrophic Lake Żabińskie since AD 1000. Comparison between regional and global climate simulations and quantitative climate reconstructions indicates that the proxy data capture natural forced climate variability, while internal variability appears as the dominant source of climate variability in the climate model simulations during most of the last millennium. Using different multivariate analyses and change point detection techniques, we identify ecosystem changes through time and shifts between rather stable states and highly variable ones, as expressed by the proxies for land use, erosion and productivity in the lake. Prior to AD 1600, the lake ecosystem was characterized by high stability and resilience against the considerable observed natural climate variability. In contrast, lake-ecosystem conditions started to fluctuate at high frequency across a broad range of states after AD 1600. The period AD 1748-1868 represents the phase with the strongest human disturbance of the ecosystem. Analyses of the frequency of change points in the multi-proxy dataset suggest that the last 400 years were highly variable and flickering, with increasing vulnerability of the ecosystem to the combined effects of climate variability and anthropogenic disturbances. This led to significant rapid ecosystem transformations.

  16. Robust high-precision attitude control for flexible spacecraft with improved mixed H2/H∞ control strategy under poles assignment constraint

    NASA Astrophysics Data System (ADS)

    Liu, Chuang; Ye, Dong; Shi, Keke; Sun, Zhaowei

    2017-07-01

    A novel improved mixed H2/H∞ control technique combined with poles assignment theory is presented to achieve attitude stabilization and vibration suppression simultaneously for flexible spacecraft. The flexible spacecraft dynamics are described and transformed into the corresponding state space form. Based on the linear matrix inequality (LMI) scheme and poles assignment theory, the improved mixed H2/H∞ controller does not restrict the two Lyapunov variables involved in the H2 and H∞ performance criteria to be equal, which reduces conservatism compared with the traditional mixed H2/H∞ controller. Moreover, it eliminates the coupling of Lyapunov matrix variables and system matrices by introducing a slack variable that provides additional degrees of freedom. Several simulations are performed to demonstrate the effectiveness and feasibility of the proposed method.

  17. Contrasts in the Sensitivity of Community Calcification to Temporal Saturation State Variability Within Temperate and Tropical Marine Environments

    NASA Astrophysics Data System (ADS)

    Kwiatkowski, L.

    2016-02-01

    Ongoing emissions of carbon dioxide (CO2) and invasion of part of this CO2 into the oceans are projected to lower the calcium carbonate saturation state. As a result, the ability of many marine organisms to calcify may be compromised, with significant impacts on ocean ecosystems throughout the 21st century. In laboratory manipulations, calcifying organisms have exhibited reduced calcification under elevated pCO2 conditions. Consequently, in situ observations of the sensitivity of calcifying communities to natural saturation state variability are increasingly valued, as they incorporate complex species interactions and capture the carbonate chemistry conditions to which communities are acclimatized. Using intensive seawater sampling techniques, we assess the community-level sensitivity of calcification rates to natural temporal variability in the aragonite saturation state (Ωarag) at both a tropical coral reef and a temperate intertidal study site. Both sites experience large daily variation in Ωarag during low tide due to photosynthesis, respiration, and the time at which the sites are isolated from the open ocean. On hourly timescales, we find that community-level rates of calcification have only a weak dependence on variability in Ωarag at the tropical study site. At the temperate study site, although limited Ωarag sensitivity is observed during the day, nighttime community calcification rates are found to be strongly influenced by variability in Ωarag, with greater dissolution rates at lower Ωarag levels. If the short-term sensitivity of community calcification to Ωarag described here is representative of the long-term sensitivity of marine ecosystems to ocean acidification, then one would expect temperate intertidal calcifying communities to be more vulnerable than tropical coral reef calcifying communities. In particular, reductions in net community calcification in the temperate intertidal zone may be predominantly due to the nocturnal impact of ocean acidification.

  18. Stock price forecasting for companies listed on Tehran stock exchange using multivariate adaptive regression splines model and semi-parametric splines technique

    NASA Astrophysics Data System (ADS)

    Rounaghi, Mohammad Mahdi; Abbaszadeh, Mohammad Reza; Arashi, Mohammad

    2015-11-01

    One of the most important topics of interest to investors is stock price changes. Investors whose goals are long term are sensitive to stock prices and their changes, and react to them. In this regard, we used the multivariate adaptive regression splines (MARS) model and a semi-parametric splines technique for predicting stock prices in this study. The MARS model is a nonparametric, adaptive regression method suited to high-dimensional problems with several variables. The semi-parametric technique employed smoothing splines, a nonparametric regression method. We used 40 variables (30 accounting variables and 10 economic variables) for predicting stock price with both the MARS model and the semi-parametric splines technique. After investigating the models, we selected 4 accounting variables (book value per share, predicted earnings per share, P/E ratio and risk) as influential variables for predicting stock price with the MARS model. After fitting the semi-parametric splines technique, only 4 accounting variables (dividends, net EPS, EPS forecast and P/E ratio) were selected as effective in forecasting stock prices.
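
    The smoothing-spline side of the study is easy to sketch with scipy. The example below fits a cubic smoothing spline to a synthetic price series; it illustrates the technique only, since the MARS model requires a dedicated package and the study's variable selection is not reproduced here.

    import numpy as np
    from scipy.interpolate import UnivariateSpline

    rng = np.random.default_rng(7)
    t = np.arange(250, dtype=float)                    # trading days
    price = 100 + 0.05 * t + 5 * np.sin(t / 25) + rng.normal(0, 1.5, t.size)

    # Smoothing spline: `s` trades fidelity against smoothness (chosen here
    # by eye; in practice it would be selected by cross-validation).
    spline = UnivariateSpline(t, price, k=3, s=len(t) * 2.0)
    trend = spline(t)

    # One-step-ahead estimate by evaluating the fitted spline beyond the data.
    print("next-day estimate:", float(spline(t[-1] + 1)))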

  19. Movement variability and skill level of various throwing techniques.

    PubMed

    Wagner, Herbert; Pfusterschmied, Jürgen; Klous, Miriam; von Duvillard, Serge P; Müller, Erich

    2012-02-01

    In team-handball, skilled athletes are able to adapt to different game situations, which may lead to differences in movement variability. Whether movement variability affects the performance of a team-handball throw and is affected by different skill levels or throwing techniques has not yet been demonstrated. Consequently, the aims of the study were to determine differences in performance and movement variability for several throwing techniques, in different phases of the throwing movement, and across different skill levels. Twenty-four team-handball players of different skill levels (n=8) performed 30 throws using various throwing techniques. Upper body kinematics were measured via an 8-camera Vicon motion capture system, and movement variability was calculated. Results indicated an increase in movement variability in the distal joint movements during the acceleration phase. In addition, there was a decrease in movement variability in highly skilled and skilled players in the standing throw with run-up, along with an increase in ball release speed, which was highest when using this throwing technique. We assert that team-handball players had the ability to compensate for an increase in movement variability in the acceleration phase to throw accurately, and that skilled players were able to control the movement, although movement variability decreased in the standing throw with run-up. Copyright © 2011 Elsevier B.V. All rights reserved.

  1. Adaptive Neural Output-Feedback Control for a Class of Nonlower Triangular Nonlinear Systems With Unmodeled Dynamics.

    PubMed

    Wang, Huanqing; Liu, Peter Xiaoping; Li, Shuai; Wang, Ding

    2017-08-29

    This paper presents the development of an adaptive neural controller for a class of nonlinear systems with unmodeled dynamics and immeasurable states. An observer is designed to estimate the system states. The structure consistency of virtual control signals and the variable partition technique are combined to overcome the difficulties appearing in a nonlower triangular form. An adaptive neural output-feedback controller is developed based on the backstepping technique and the universal approximation property of radial basis function (RBF) neural networks. Using Lyapunov stability analysis, the semiglobal uniform ultimate boundedness of all signals within the closed-loop system is guaranteed. The simulation results show that the controlled system converges quickly and that all the signals are bounded. This paper is novel in at least two aspects: 1) an output-feedback control strategy is developed for a class of nonlower triangular nonlinear systems with unmodeled dynamics and 2) the nonlinear disturbances and their bounds are functions of all states, which is a more general form than in existing results.

  2. Application of dynamic programming to control khuzestan water resources system

    USGS Publications Warehouse

    Jamshidi, M.; Heidari, M.

    1977-01-01

    An approximate optimization technique based on discrete dynamic programming, called discrete differential dynamic programming (DDDP), is employed to obtain near-optimal operation policies for a water resources system in the Khuzestan Province of Iran. The technique makes use of an initial nominal state trajectory for each state variable and forms corridors around the trajectories. These corridors represent a set of subdomains of the entire feasible domain. Starting with such a set of nominal state trajectories, improvements in the objective function are sought within the corridors formed around them. This leads to a set of new nominal trajectories upon which more improvements may be sought. Since optimization is confined to a set of subdomains, considerable savings in memory and computer time are achieved over conventional dynamic programming. The Khuzestan water resources system considered in this study is located in southwest Iran and consists of two rivers, three reservoirs, three hydropower plants, and three irrigable areas. Data and cost-benefit functions for the analysis were obtained either from historical records or from similar studies.
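
    The corridor idea at the heart of DDDP can be sketched for a single reservoir: place a few candidate storages around the nominal trajectory, run backward dynamic programming inside that corridor, adopt the improved trajectory, and shrink the corridor. Everything below (inflows, benefit function, bounds) is an illustrative stand-in, not the Khuzestan system's calibrated data.

    import numpy as np

    T, S0 = 12, 50.0
    inflow = np.array([30, 28, 25, 20, 15, 10, 8, 12, 18, 24, 28, 30], float)

    def benefit(release):
        return np.sqrt(max(release, 0.0))   # concave release benefit (illustrative)

    def dddp(nominal, half_width, n_levels=5, iters=20):
        """Refine a nominal storage trajectory by DP inside shrinking corridors."""
        traj = nominal.copy()
        for _ in range(iters):
            # Corridor: a few candidate storages around the nominal at each stage.
            grid = [np.clip(traj[t] + np.linspace(-half_width, half_width, n_levels),
                            0.0, 100.0) for t in range(T + 1)]
            grid[0] = np.array([S0])        # initial storage is fixed
            value = [np.full(len(g), -np.inf) for g in grid]
            value[T][:] = 0.0
            best_next = [np.zeros(len(g), int) for g in grid]
            for t in range(T - 1, -1, -1):
                for i, s in enumerate(grid[t]):
                    for j, s_next in enumerate(grid[t + 1]):
                        release = s + inflow[t] - s_next
                        if release < 0:
                            continue
                        v = benefit(release) + value[t + 1][j]
                        if v > value[t][i]:
                            value[t][i], best_next[t][i] = v, j
            # Recover the improved trajectory and shrink the corridor.
            idx = 0
            for t in range(T):
                traj[t] = grid[t][idx]
                idx = best_next[t][idx]
            traj[T] = grid[T][idx]
            half_width *= 0.7
        return traj

    nominal = np.full(T + 1, S0)            # constant-storage first guess
    print(dddp(nominal, half_width=20.0).round(1))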

  3. Methods and costs of thin-seam mining. Final report, 25 September 1977-24 January 1979. [Thin seam in association with a thick seam

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Finch, T.E.; Fidler, E.L.

    1981-02-01

    This report defines the state of the art (circa 1978) in removing thin coal seams associated with vastly thicker seams found in the surface coal mines of the western United States. New techniques are evaluated and an innovative method and machine are proposed. Western states' resource recovery regulations are addressed and representative mining operations are examined. Thin seam recovery is investigated through its effect on (1) overburden removal, (2) conventional seam extraction methods, and (3) innovative techniques. Equations and graphs are used to accommodate the variable stratigraphic positions in the mining sequence at which thin seams occur. Industrial concern and agency regulations provided the impetus for this study of total resource recovery. The results are a compendium of thin seam removal methods and costs. The work explains how the mining industry recovers thin coal seams in western surface mines, where extremely thick seams naturally hold the most attention. It explains what new developments imply and where to look for new improvements and their probable adaptability.

  4. Legendre spectral-collocation method for solving some types of fractional optimal control problems

    PubMed Central

    Sweilam, Nasser H.; Al-Ajami, Tamer M.

    2014-01-01

    In this paper, the Legendre spectral-collocation method was applied to obtain approximate solutions for some types of fractional optimal control problems (FOCPs). The fractional derivative was described in the Caputo sense. Two different approaches were presented. In the first approach, necessary optimality conditions in terms of the associated Hamiltonian were approximated. In the second approach, the state equation was discretized first using the trapezoidal rule for the numerical integration, followed by the Rayleigh–Ritz method to evaluate both the state and control variables. Illustrative examples were included to demonstrate the validity and applicability of the proposed techniques. PMID:26257937

  5. An iterative technique to stabilize a linear time invariant multivariable system with output feedback

    NASA Technical Reports Server (NTRS)

    Sankaran, V.

    1974-01-01

    An iterative procedure for determining the constant gain matrix that will stabilize a linear constant multivariable system using output feedback is described. The use of this procedure avoids the transformation of variables required in other procedures. For the case in which the product of the output and input vector dimensions is greater than the number of states of the plant, a general solution is given. For the case in which the number of states exceeds the product of the input and output vector dimensions, a least-squares solution, which may not be stable in all cases, is presented. The results are illustrated with examples.

  6. Measurement of process variables in solid-state fermentation of wheat straw using FT-NIR spectroscopy and synergy interval PLS algorithm.

    PubMed

    Jiang, Hui; Liu, Guohai; Mei, Congli; Yu, Shuang; Xiao, Xiahong; Ding, Yuhan

    2012-11-01

    The feasibility of rapid determination of the process variables (i.e. pH and moisture content) in solid-state fermentation (SSF) of wheat straw using Fourier transform near infrared (FT-NIR) spectroscopy was studied. The synergy interval partial least squares (siPLS) algorithm was implemented to calibrate the regression model. The number of PLS factors and the number of subintervals were optimized simultaneously by cross-validation. The performance of the prediction model was evaluated according to the root mean square error of cross-validation (RMSECV), the root mean square error of prediction (RMSEP) and the correlation coefficient (R). The measurement results of the optimal model were as follows: RMSECV=0.0776, R(c)=0.9777, RMSEP=0.0963, and R(p)=0.9686 for the pH model; RMSECV=1.3544% w/w, R(c)=0.8871, RMSEP=1.4946% w/w, and R(p)=0.8684 for the moisture content model. Finally, compared with classic PLS and iPLS models, the siPLS model revealed superior performance. The overall results demonstrate that FT-NIR spectroscopy combined with the siPLS algorithm can be used to measure process variables in solid-state fermentation of wheat straw, and that NIR spectroscopy has the potential to be utilized in the SSF industry. Copyright © 2012 Elsevier B.V. All rights reserved.
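
    A minimal version of siPLS, scoring combinations of spectral subintervals by cross-validated PLS error, can be sketched with scikit-learn. The spectra and target below are synthetic stand-ins for the FT-NIR data.

    import numpy as np
    from itertools import combinations
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(3)
    n, p = 60, 200                                   # samples, spectral variables
    X = rng.standard_normal((n, p)).cumsum(axis=1)   # smooth, spectrum-like data
    y = X[:, 40:60].mean(axis=1) + 0.1 * rng.standard_normal(n)  # pH-like target

    n_intervals, n_combine = 10, 2
    bounds = np.array_split(np.arange(p), n_intervals)

    best = (np.inf, None)
    for combo in combinations(range(n_intervals), n_combine):
        cols = np.concatenate([bounds[i] for i in combo])
        pls = PLSRegression(n_components=3)
        pred = cross_val_predict(pls, X[:, cols], y, cv=5).ravel()
        rmsecv = np.sqrt(np.mean((pred - y) ** 2))
        if rmsecv < best[0]:
            best = (rmsecv, combo)

    print(f"best interval pair {best[1]}, RMSECV = {best[0]:.4f}")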

  7. Software sensors for biomass concentration in a SSC process using artificial neural networks and support vector machine.

    PubMed

    Acuña, Gonzalo; Ramirez, Cristian; Curilem, Millaray

    2014-01-01

    The lack of sensors for some relevant state variables in fermentation processes can be addressed by developing appropriate software sensors. In this work, NARX-ANN, NARMAX-ANN, NARX-SVM and NARMAX-SVM models are compared when acting as software sensors of biomass concentration for a solid substrate cultivation (SSC) process. Results show that NARMAX-SVM outperforms the other models, with an SMAPE index under 9 for 20% amplitude noise. In addition, NARMAX models perform better than NARX models under the same noise conditions because of their better predictive capabilities, as they include prediction errors as inputs. In the case of perturbation of the initial conditions of the autoregressive variable, NARX models exhibited better convergence capabilities. This work also confirms that a difficult-to-measure variable, like biomass concentration, can be estimated on-line from easy-to-measure variables like CO₂ and O₂ using an adequate software sensor based on computational intelligence techniques.
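
    The NARX structure regresses the current biomass on lagged easy-to-measure inputs (CO2, O2) and lagged outputs, and can be sketched with a support vector regressor. This one-step-ahead illustration uses measured past biomass as the autoregressive input; a deployed software sensor would feed back its own estimates. All signals are synthetic.

    import numpy as np
    from sklearn.svm import SVR

    rng = np.random.default_rng(5)
    t = np.linspace(0, 48, 400)                       # cultivation time, h
    co2 = 1 - np.exp(-t / 12) + 0.01 * rng.standard_normal(t.size)
    o2 = np.exp(-t / 12) + 0.01 * rng.standard_normal(t.size)
    biomass = 10 / (1 + np.exp(-(t - 24) / 4))        # "true" unmeasured state

    def narx_features(u1, u2, y, lags=3):
        """Stack lagged exogenous inputs and lagged outputs (NARX structure)."""
        rows = []
        for k in range(lags, len(y)):
            rows.append(np.r_[u1[k-lags:k], u2[k-lags:k], y[k-lags:k]])
        return np.array(rows), y[lags:]

    X, target = narx_features(co2, o2, biomass)
    split = 300
    model = SVR(C=10.0, epsilon=0.01).fit(X[:split], target[:split])
    pred = model.predict(X[split:])
    print("test RMSE:", np.sqrt(np.mean((pred - target[split:]) ** 2)).round(3))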

  8. [The effects of hatha yoga exercises on stress and anxiety levels in mastectomized women].

    PubMed

    Bernardi, Marina Lima Daleprane; Amorim, Maria Helena Costa; Zandonade, Eliana; Santaella, Danilo Forghieri; Barbosa, Juliana de Assis Novais

    2013-12-01

    This article seeks to evaluate the effects of hatha yoga on stress and anxiety levels in mastectomized women. It also investigates the relationship between these levels and the following variables: age; marital status; religion; instruction; profession; smoke addiction; elitism; staging of the disease; and treatment phase. This was a randomized controlled clinical trial with 45 mastectomized women treated at the Ilza Bianco outpatient service of Santa Rita de Cássia Hospital in the Brazilian state of Espírito Santo from March to November 2010. The experimental group participated in 6 individually applied sessions with incentive for ongoing home practice and was re-evaluated after this period, whereas the control group was re-evaluated after a proportional period. For the study of the variables, data were collected by interview and recorded on a form, together with the Anxiety Trait and State Test and the Stress Symptoms and Signs Test. For statistical treatment, the Statistical Package for the Social Sciences was used. The results are statistically significant and show that hatha yoga exercises decrease stress and anxiety in the experimental group. No connection between the confounding variables and anxiety and stress levels was found.

  9. Characterizing Variability of Modular Brain Connectivity with Constrained Principal Component Analysis

    PubMed Central

    Hirayama, Jun-ichiro; Hyvärinen, Aapo; Kiviniemi, Vesa; Kawanabe, Motoaki; Yamashita, Okito

    2016-01-01

    Characterizing the variability of resting-state functional brain connectivity across subjects and/or over time has recently attracted much attention. Principal component analysis (PCA) serves as a fundamental statistical technique for such analyses. However, performing PCA on high-dimensional connectivity matrices yields complicated “eigenconnectivity” patterns, for which systematic interpretation is a challenging issue. Here, we overcome this issue with a novel constrained PCA method for connectivity matrices by extending the idea of the previously proposed orthogonal connectivity factorization method. Our new method, modular connectivity factorization (MCF), explicitly introduces the modularity of brain networks as a parametric constraint on eigenconnectivity matrices. In particular, MCF analyzes the variability in both intra- and inter-module connectivities, simultaneously finding network modules in a principled, data-driven manner. The parametric constraint provides a compact module-based visualization scheme with which the result can be intuitively interpreted. We develop an optimization algorithm to solve the constrained PCA problem and validate our method in simulation studies and with a resting-state functional connectivity MRI dataset of 986 subjects. The results show that the proposed MCF method successfully reveals the underlying modular eigenconnectivity patterns in more general situations and is a promising alternative to existing methods. PMID:28002474
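
    As a baseline for what MCF constrains, plain PCA on vectorized connectivity matrices looks as follows: each subject's symmetric matrix is reduced to its upper triangle, and a principal component can be folded back into an "eigenconnectivity" matrix. This sketch shows only the unconstrained starting point, not the modular factorization itself.

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(11)
    n_subjects, n_rois = 100, 20
    iu = np.triu_indices(n_rois, k=1)          # upper triangle, no diagonal

    # Simulate subject connectivity matrices and vectorize their upper triangles.
    conn = rng.standard_normal((n_subjects, n_rois, n_rois))
    conn = (conn + conn.transpose(0, 2, 1)) / 2          # make symmetric
    X = np.array([c[iu] for c in conn])                  # (subjects, edges)

    pca = PCA(n_components=5).fit(X)

    # Fold a principal component back into matrix form: an "eigenconnectivity".
    eigconn = np.zeros((n_rois, n_rois))
    eigconn[iu] = pca.components_[0]
    eigconn += eigconn.T
    print("explained variance ratios:", pca.explained_variance_ratio_.round(3))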

  10. Graph-cut based discrete-valued image reconstruction.

    PubMed

    Tuysuzoglu, Ahmet; Karl, W Clem; Stojanovic, Ivana; Castañón, David; Ünlü, M Selim

    2015-05-01

    Efficient graph-cut methods have been used with great success for labeling and denoising problems occurring in computer vision. Unfortunately, the presence of linear image mappings has prevented the use of these techniques in most discrete-amplitude image reconstruction problems. In this paper, we develop a graph-cut based framework for the direct solution of discrete amplitude linear image reconstruction problems cast as regularized energy function minimizations. We first analyze the structure of discrete linear inverse problem cost functions to show that the obstacle to the application of graph-cut methods to their solution is the variable mixing caused by the presence of the linear sensing operator. We then propose to use a surrogate energy functional that overcomes the challenges imposed by the sensing operator yet can be utilized efficiently in existing graph-cut frameworks. We use this surrogate energy functional to devise a monotonic iterative algorithm for the solution of discrete valued inverse problems. We first provide experiments using local convolutional operators and show the robustness of the proposed technique to noise and stability to changes in regularization parameter. Then we focus on nonlocal, tomographic examples where we consider limited-angle data problems. We compare our technique with state-of-the-art discrete and continuous image reconstruction techniques. Experiments show that the proposed method outperforms state-of-the-art techniques in challenging scenarios involving discrete valued unknowns.

  11. A technique for pole-zero placement for dual-input control systems. [computer simulation of CH-47 helicopter longitudinal dynamics

    NASA Technical Reports Server (NTRS)

    Reid, G. F.

    1976-01-01

    A technique is presented for determining state variable feedback gains that will place both the poles and zeros of a selected transfer function of a dual-input control system at pre-determined locations in the s-plane. Leverrier's algorithm is used to determine the numerator and denominator coefficients of the closed-loop transfer function as functions of the feedback gains. The values of gain that match these coefficients to those of a pre-selected model are found by solving two systems of linear simultaneous equations. The algorithm has been used in a computer simulation of the CH-47 helicopter to control longitudinal dynamics.
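
    Leverrier's algorithm (the Faddeev-LeVerrier recursion) produces the characteristic-polynomial coefficients that the coefficient-matching step works with. A compact implementation, checked against numpy's np.poly:

    import numpy as np

    def faddeev_leverrier(A):
        """Coefficients of det(sI - A) = s^n + c1 s^(n-1) + ... + cn."""
        n = A.shape[0]
        coeffs = [1.0]
        M = np.zeros_like(A, dtype=float)
        for k in range(1, n + 1):
            M = A @ M + coeffs[-1] * np.eye(n)
            coeffs.append(-np.trace(A @ M) / k)
        return np.array(coeffs)

    A = np.array([[0.0, 1.0], [-2.0, -3.0]])
    print(faddeev_leverrier(A))          # -> [1, 3, 2] for s^2 + 3s + 2
    print(np.poly(A))                    # numpy's reference result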

  12. Nonstationary time series prediction combined with slow feature analysis

    NASA Astrophysics Data System (ADS)

    Wang, G.; Chen, X.

    2015-07-01

    Almost all climate time series have some degree of nonstationarity due to external driving forces perturbing the observed system. Therefore, these external driving forces should be taken into account when reconstructing the climate dynamics. This paper presents a new technique for obtaining the driving forces of a time series from the slow feature analysis (SFA) approach, and then introduces them into a predictive model to predict nonstationary time series. The basic idea of the technique is to consider the driving forces as state variables and to incorporate them into the predictive model. Experiments using a modified logistic time series and winter ozone data from Arosa, Switzerland, were conducted to test the model. The results showed improved prediction skill.
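
    A toy linear variant of SFA conveys the idea: whiten the observed signals, then take the projection whose time derivative has the smallest variance as the estimated slow driving force. The paper's procedure is more elaborate; the signals and time embedding below are illustrative only.

    import numpy as np

    def linear_sfa(x, n_slow=1):
        """Linear slow feature analysis: find projections minimizing the
        variance of the time derivative of whitened, centered signals."""
        x = x - x.mean(axis=0)
        # Whiten the input signals.
        d, E = np.linalg.eigh(np.cov(x, rowvar=False))
        d = np.clip(d, 1e-12, None)          # guard against near-singularity
        W = E @ np.diag(d ** -0.5) @ E.T
        z = x @ W
        # Smallest-eigenvalue directions of the derivative covariance are slowest.
        dz = np.diff(z, axis=0)
        d2, E2 = np.linalg.eigh(np.cov(dz, rowvar=False))
        return z @ E2[:, :n_slow]

    # Fast signal modulated by a slow external driving force.
    t = np.arange(5000)
    driver = np.sin(2 * np.pi * t / 2500)             # slow forcing
    fast = np.sin(0.9 * t) * (1.5 + driver)
    x = np.column_stack([fast, np.roll(fast, 1), np.roll(fast, 2)])  # embedding
    slow_estimate = linear_sfa(x)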

  13. Numerical studies of the Bethe-Salpeter equation for a two-fermion bound state

    NASA Astrophysics Data System (ADS)

    de Paula, W.; Frederico, T.; Salmè, G.; Viviani, M.

    2018-03-01

    Some recent advances on the solution of the Bethe-Salpeter equation (BSE) for a two-fermion bound system directly in Minkowski space are presented. The calculations are based on the expression of the Bethe-Salpeter amplitude in terms of the so-called Nakanishi integral representation and on the light-front projection (i.e. the integration over the light-front variable k^- = k^0 - k^3). The latter technique allows for the analytically exact treatment of the singularities plaguing the two-fermion BSE in Minkowski space. The good agreement observed between our results and those obtained using other existing numerical methods, based on both Minkowski and Euclidean space techniques, fully corroborates our analytical treatment.

  14. Topological characterization versus synchronization for assessing (or not) dynamical equivalence

    NASA Astrophysics Data System (ADS)

    Letellier, Christophe; Mangiarotti, Sylvain; Sendiña-Nadal, Irene; Rössler, Otto E.

    2018-04-01

    Model validation from experimental data is an important and not trivial topic which is too often reduced to a simple visual inspection of the state portrait spanned by the variables of the system. Synchronization was suggested as a possible technique for model validation. By means of a topological analysis, we revisited this concept with the help of an abstract chemical reaction system and data from two electrodissolution experiments conducted by Jack Hudson's group. The fact that it was possible to synchronize topologically different global models led us to conclude that synchronization is not a recommendable technique for model validation. A short historical preamble evokes Jack Hudson's early career in interaction with Otto E. Rössler.

  15. Hindcast of extreme sea states in North Atlantic extratropical storms

    NASA Astrophysics Data System (ADS)

    Ponce de León, Sonia; Guedes Soares, Carlos

    2015-02-01

    This study examines the variability of freak wave parameters around the eye of northern hemisphere extratropical cyclones. The data were obtained from a hindcast performed with the WAve Model (WAM) forced by the wind fields of the Climate Forecast System Reanalysis (CFSR). The hindcast results were validated against wave buoys and satellite altimetry data, showing a good correlation. The variability of different wave parameters was assessed by applying the empirical orthogonal function (EOF) technique to the hindcast data. From the EOF analysis, it can be concluded that the first empirical orthogonal function (V1) accounts for the greatest share of the variability of significant wave height (Hs), peak period (Tp), directional spreading (SPR) and the Benjamin-Feir index (BFI). The share of variance in V1 varies by cyclone and variable: for the 2nd storm and Hs, V1 contains 96% of the variance, while for the 3rd storm and BFI, V1 accounts for only 26%. The spatial patterns of V1 show that the variables are distributed around the cyclones' centres mainly in a lobular fashion.
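
    EOF analysis reduces to a singular value decomposition of the time-by-space anomaly matrix; the squared singular values give each mode's share of variance, the quantity quoted above for V1. A self-contained sketch with a synthetic field:

    import numpy as np

    rng = np.random.default_rng(2)
    n_time, n_grid = 120, 500                 # time steps, flattened grid points

    # Synthetic Hs anomaly field: one dominant spatial pattern plus noise.
    pattern = np.sin(np.linspace(0, np.pi, n_grid))
    amplitude = rng.standard_normal(n_time)
    field = np.outer(amplitude, pattern) + 0.3 * rng.standard_normal((n_time, n_grid))

    # EOFs: SVD of the time-by-space anomaly matrix.
    anom = field - field.mean(axis=0)
    U, s, Vt = np.linalg.svd(anom, full_matrices=False)
    variance_share = s**2 / np.sum(s**2)
    print("share of variance in V1:", variance_share[0].round(3))
    eof1, pc1 = Vt[0], U[:, 0] * s[0]         # spatial pattern and its time series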

  16. Comparison of facility-level methane emission rates from natural gas production well pads in the Marcellus, Denver-Julesburg, and Uintah Basins

    NASA Astrophysics Data System (ADS)

    Omara, M.; Li, X.; Sullivan, M.; Subramanian, R.; Robinson, A. L.; Presto, A. A.

    2015-12-01

    The boom in shale natural gas (NG) production, brought about by advances in horizontal drilling and hydraulic fracturing, has yielded both economic benefits and concerns about environmental and climate impacts. In particular, leakage of methane from the NG supply chain could substantially increase the carbon footprint of NG, diminishing its potential role as a transition fuel between carbon-intensive fossil fuels and renewable energy systems. Recent research has demonstrated significant variability in measured methane emission rates from NG production facilities within a given shale gas basin. This variability often reflects facility-specific differences in NG production capacity, facility age, utilization of emissions capture and control, and/or the level of facility inspection and maintenance. Across NG production basins, these differences in facility-level methane emission rates are likely amplified, especially if significant variability in NG composition and state emissions regulations is present. In this study, we measured methane emission rates from the NG production sector in the Marcellus Shale Basin (Pennsylvania and West Virginia), currently the largest NG production basin in the U.S., and contrast these results with those of the Denver-Julesburg (Colorado) and Uintah (Utah) shale basins. Facility-level methane emission rates were measured at 106 NG production facilities using the dual tracer flux (nitrous oxide and acetylene), Gaussian dispersion simulation, and OTM 33A techniques. The distribution of facility-level average methane emission rates for each NG basin will be discussed, with emphasis on how variability in NG composition (i.e., ethane-to-methane ratios) and state emissions regulations impacts measured methane leak rates. While the focus of this presentation will be on the comparison of methane leak rates among NG basins, the use of three complementary top-down methane measurement techniques provides a unique opportunity to explore the effectiveness of each approach, which will also be discussed.

  17. Aspects on the Physiological and Biochemical Foundations of Neurocritical Care

    PubMed Central

    Nordström, Carl-Henrik; Koskinen, Lars-Owe; Olivecrona, Magnus

    2017-01-01

    Neurocritical care (NCC) is a branch of intensive care medicine characterized by specific physiological and biochemical monitoring techniques necessary for identifying cerebral adverse events and for evaluating specific therapies. Information is primarily obtained from physiological variables related to intracranial pressure (ICP) and cerebral blood flow (CBF) and from physiological and biochemical variables related to cerebral energy metabolism. Non-surgical therapies developed for treating increased ICP are based on knowledge regarding transport of water across the intact and injured blood–brain barrier (BBB) and the regulation of CBF. Brain volume is strictly controlled as the BBB permeability to crystalloids is very low restricting net transport of water across the capillary wall. Cerebral pressure autoregulation prevents changes in intracranial blood volume and intracapillary hydrostatic pressure at variations in arterial blood pressure. Information regarding cerebral oxidative metabolism is obtained from measurements of brain tissue oxygen tension (PbtO2) and biochemical data obtained from intracerebral microdialysis. As interstitial lactate/pyruvate (LP) ratio instantaneously reflects shifts in intracellular cytoplasmatic redox state, it is an important indicator of compromised cerebral oxidative metabolism. The combined information obtained from PbtO2, LP ratio, and the pattern of biochemical variables reveals whether impaired oxidative metabolism is due to insufficient perfusion (ischemia) or mitochondrial dysfunction. Intracerebral microdialysis and PbtO2 give information from a very small volume of tissue. Accordingly, clinical interpretation of the data must be based on information of the probe location in relation to focal brain damage. Attempts to evaluate global cerebral energy state from microdialysis of intraventricular fluid and from the LP ratio of the draining venous blood have recently been presented. To be of clinical relevance, the information from all monitoring techniques should be presented bedside online. Accordingly, in the future, the chemical variables obtained from microdialysis will probably be analyzed by biochemical sensors. PMID:28674514

  18. Mapping ecological states in a complex environment

    NASA Astrophysics Data System (ADS)

    Steele, C. M.; Bestelmeyer, B.; Burkett, L. M.; Ayers, E.; Romig, K.; Slaughter, A.

    2013-12-01

    The vegetation of northern Chihuahuan Desert rangelands is sparse and heterogeneous and, for most of the year, consists of a large proportion of non-photosynthetic material. The soils in this area are spectrally bright and variable in their reflectance properties. Both factors pose challenges to the application of remote sensing for estimating canopy variables (e.g., leaf area index, biomass, percentage canopy cover, primary production). Additionally, with reference to current paradigms of rangeland health assessment, remotely sensed estimates of canopy variables have limited practical use to the rangeland manager if they are not placed in the context of ecological site and ecological state. To address these challenges, we created a multifactor classification system based on the USDA-NRCS ecological site schema and associated state-and-transition models to map ecological states on desert rangelands in southern New Mexico. Applying this system using per-pixel image processing techniques and multispectral, remotely sensed imagery raised other challenges. Per-pixel image classification relies upon the spectral information in each pixel alone; there is no reference to the spatial context of the pixel and its relationship with its neighbors. Ecological state classes may have direct relevance to managers, but the non-unique spectral properties of different ecological state classes in our study area mean that per-pixel classification of multispectral data performs poorly in discriminating between different ecological states. We found that image interpreters who are familiar with the landscape and its associated ecological site descriptions perform better than per-pixel classification techniques in assigning ecological states. However, two important issues affect manual classification methods: subjectivity of interpretation and reproducibility of results. An alternative to per-pixel classification and manual interpretation is object-based image analysis, which provides a platform for classification that more closely resembles human recognition of objects within a remotely sensed image. The analysis presented here compares multiple thematic maps created for test locations on the USDA-ARS Jornada Experimental Range. Three study sites in different pastures, each 300 ha in size, were selected for comparison on the basis of their ecological site type ('Clayey', 'Sandy', and a combination of both) and the degree of complexity of vegetation cover. Thematic maps were produced for each study site using (i) manual interpretation of digital aerial photography (by five independent interpreters); (ii) object-oriented, decision-tree classification of fine and moderate spatial resolution imagery (Quickbird; Landsat Thematic Mapper); and (iii) ground survey. To identify areas of uncertainty, we compared agreement in location, areal extent and class assignation among the 5 independently produced, manually digitized ecological state maps and against the map created from ground survey. Location, areal extent and class assignation of the map produced by object-oriented classification were also assessed with reference to the ground survey map.

  19. The relationship of document and quantitative literacy with learning styles and selected personal variables for aerospace technology students at Indiana State University

    NASA Astrophysics Data System (ADS)

    Martin, Royce Ann

    The purpose of this study was to determine the extent that student scores on a researcher-constructed quantitative and document literacy test, the Aviation Documents Delineator (ADD), were associated with (a) learning styles (imaginative, analytic, common sense, dynamic, and undetermined), as identified by the Learning Type Measure, (b) program curriculum (aerospace administration, professional pilot, both aerospace administration and professional pilot, other, or undeclared), (c) overall cumulative grade point average at Indiana State University, and (d) year in school (freshman, sophomore, junior, or senior). The Aviation Documents Delineator (ADD) was a three-part, 35 question survey that required students to interpret graphs, tables, and maps. Tasks assessed in the ADD included (a) locating, interpreting, and describing specific data displayed in the document, (b) determining data for a specified point on the table through interpolation, (c) comparing data for a string of variables representing one aspect of aircraft performance to another string of variables representing a different aspect of aircraft performance, (d) interpreting the documents to make decisions regarding emergency situations, and (e) performing single and/or sequential mathematical operations on a specified set of data. The Learning Type Measure (LTM) was a 15 item self-report survey developed by Bernice McCarthy (1995) to profile an individual's processing and perception tendencies in order to reveal different individual approaches to learning. The sample used in this study included 143 students enrolled in Aerospace Technology Department courses at Indiana State University in the fall of 1996. The ADD and the LTM were administered to each subject. Data collected in this investigation were analyzed using a stepwise multiple regression analysis technique. Results of the study revealed that the variables, year in school and GPA, were significant predictors of the criterion variables, document, quantitative, and total literacy, when utilizing the ADD. The variables learning style and program of study were found not to be significant predictors of literacy scores on the ADD instrument.

  20. Structural reanalysis via a mixed method. [using Taylor series for accuracy improvement

    NASA Technical Reports Server (NTRS)

    Noor, A. K.; Lowder, H. E.

    1975-01-01

    A study is made of the approximate structural reanalysis technique based on the use of Taylor series expansion of response variables in terms of design variables in conjunction with the mixed method. In addition, comparisons are made with two reanalysis techniques based on the displacement method. These techniques are the Taylor series expansion and the modified reduced basis. It is shown that the use of the reciprocals of the sizing variables as design variables (which is the natural choice in the mixed method) can result in a substantial improvement in the accuracy of the reanalysis technique. Numerical results are presented for a space truss structure.

  1. Effects of visual feedback-induced variability on motor learning of handrim wheelchair propulsion.

    PubMed

    Leving, Marika T; Vegter, Riemer J K; Hartog, Johanneke; Lamoth, Claudine J C; de Groot, Sonja; van der Woude, Lucas H V

    2015-01-01

    It has been suggested that a higher intra-individual variability benefits the motor learning of wheelchair propulsion. The present study evaluated whether feedback-induced variability on wheelchair propulsion technique variables would also enhance the motor learning process. Learning was operationalized as an improvement in mechanical efficiency and propulsion technique, which are thought to be closely related during the learning process. Seventeen participants received visual feedback-based practice (feedback group) and 15 participants received regular practice (natural learning group). Both groups received an equal practice dose of 80 min, over 3 weeks, at 0.24 W/kg at a treadmill speed of 1.11 m/s. To compare the two groups, the pre- and post-tests were performed without feedback. The feedback group received real-time visual feedback on seven propulsion variables with instructions to manipulate the presented variable to achieve the highest possible variability (1st 4-min block) and optimize it in the prescribed direction (2nd 4-min block). To increase motor exploration, the participants were unaware of the exact variable they received feedback on. Energy consumption and the propulsion technique variables, with their respective coefficients of variation, were calculated to evaluate the amount of intra-individual variability. The feedback group, which practiced with higher intra-individual variability, improved the propulsion technique between pre- and post-test to the same extent as the natural learning group. Mechanical efficiency improved between pre- and post-test in the natural learning group but remained unchanged in the feedback group. These results suggest that feedback-induced variability inhibited the improvement in mechanical efficiency. Moreover, since both groups improved propulsion technique but only the natural learning group improved mechanical efficiency, it can be concluded that improvements in mechanical efficiency and propulsion technique do not always appear simultaneously during the motor learning process. Their relationship is most likely modified by other factors, such as the amount of intra-individual variability.

  2. Effects of Visual Feedback-Induced Variability on Motor Learning of Handrim Wheelchair Propulsion

    PubMed Central

    Leving, Marika T.; Vegter, Riemer J. K.; Hartog, Johanneke; Lamoth, Claudine J. C.; de Groot, Sonja; van der Woude, Lucas H. V.

    2015-01-01

    Background It has been suggested that a higher intra-individual variability benefits the motor learning of wheelchair propulsion. The present study evaluated whether feedback-induced variability on wheelchair propulsion technique variables would also enhance the motor learning process. Learning was operationalized as an improvement in mechanical efficiency and propulsion technique, which are thought to be closely related during the learning process. Methods Seventeen participants received visual feedback-based practice (feedback group) and 15 participants received regular practice (natural learning group). Both groups received an equal practice dose of 80 min, over 3 weeks, at 0.24 W/kg at a treadmill speed of 1.11 m/s. To compare the two groups, the pre- and post-tests were performed without feedback. The feedback group received real-time visual feedback on seven propulsion variables with instructions to manipulate the presented variable to achieve the highest possible variability (1st 4-min block) and optimize it in the prescribed direction (2nd 4-min block). To increase motor exploration, the participants were unaware of the exact variable they received feedback on. Energy consumption and the propulsion technique variables, with their respective coefficients of variation, were calculated to evaluate the amount of intra-individual variability. Results The feedback group, which practiced with higher intra-individual variability, improved the propulsion technique between pre- and post-test to the same extent as the natural learning group. Mechanical efficiency improved between pre- and post-test in the natural learning group but remained unchanged in the feedback group. Conclusion These results suggest that feedback-induced variability inhibited the improvement in mechanical efficiency. Moreover, since both groups improved propulsion technique but only the natural learning group improved mechanical efficiency, it can be concluded that improvements in mechanical efficiency and propulsion technique do not always appear simultaneously during the motor learning process. Their relationship is most likely modified by other factors, such as the amount of intra-individual variability. PMID:25992626

  3. Machine learning techniques applied to the determination of road suitability for the transportation of dangerous substances.

    PubMed

    Matías, J M; Taboada, J; Ordóñez, C; Nieto, P G

    2007-08-17

    This article describes a methodology to model the degree of remedial action required to make short stretches of a roadway suitable for dangerous goods transport (DGT), particularly pollutant substances, using different variables associated with the characteristics of each segment. Thirty-one factors determining the impact of an accident on a particular stretch of road were identified and subdivided into two major groups: accident probability factors and accident severity factors. Given the number of factors determining the state of a particular road segment, the only viable statistical methods for implementing the model were machine learning techniques, such as multilayer perceptron networks (MLPs), classification trees (CARTs) and support vector machines (SVMs). The results produced by these techniques on a test sample were more favourable than those produced by traditional discriminant analysis, irrespective of whether dimensionality reduction techniques were applied. The best results were obtained using SVMs specifically adapted to ordinal data. This technique takes advantage of the ordinal information contained in the data without penalising the computational load. Furthermore, the technique permits the estimation of the utility function that is latent in expert knowledge.
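
    A generic scikit-learn comparison of the three model families on stand-in data might look like the sketch below; the ordinal-SVM adaptation that performed best in the study is not part of scikit-learn and is not reproduced here.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.neural_network import MLPClassifier
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.svm import SVC
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Stand-in for the 31 road-segment factors and the remediation class.
    X, y = make_classification(n_samples=400, n_features=31, n_informative=10,
                               n_classes=3, n_clusters_per_class=1, random_state=0)

    models = {
        "MLP": make_pipeline(StandardScaler(),
                             MLPClassifier(max_iter=2000, random_state=0)),
        "CART": DecisionTreeClassifier(random_state=0),
        "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf", random_state=0)),
    }
    for name, model in models.items():
        score = cross_val_score(model, X, y, cv=5).mean()
        print(f"{name}: {score:.3f}")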

  4. Search for Supersymmetry in Hadronic Final States

    NASA Astrophysics Data System (ADS)

    Mulholland, Troy

    We present a search for supersymmetry in purely hadronic final states with large missing transverse momentum using data collected by the CMS detector at the CERN LHC. The data were produced in proton-proton collisions with center-of-mass energy of 13 TeV and correspond to an integrated luminosity of 35.9 fb -1. Data are analyzed with variables defined in terms of jet multiplicity, bottom quark tagged jet multiplicity, the scalar sum of jet transverse momentum, the magnitude of the vector sum of jet transverse momentum, and angular separation between jets and the vector sum of transverse momentum. We perform the search on the data using two analysis techniques: a boosted decision tree trained on simulated data using the above variables as features and a four-dimensional fit with rectangular search regions. In both analyses, standard model background estimations are derived from data-driven techniques and the signal data are separated into exclusive search regions. The observed yields in the search regions agree with background expectations. We derive upper limits on the production cross sections of pairs of gluinos and pairs of top squarks at 95% confidence using simplified models with the lightest supersymmetric particle assumed to be a weakly interacting neutralino. Gluinos as heavy as 1960 GeV and top squarks as heavy as 980 GeV are excluded. The limits significantly extend the exclusions obtained from previous results.

  5. Nationwide summary of US Geological Survey regional regression equations for estimating magnitude and frequency of floods for ungaged sites, 1993

    USGS Publications Warehouse

    Jennings, M.E.; Thomas, W.O.; Riggs, H.C.

    1994-01-01

    For many years, the U.S. Geological Survey (USGS) has been involved in the development of regional regression equations for estimating flood magnitude and frequency at ungaged sites. These regression equations are used to transfer flood characteristics from gaged to ungaged sites through the use of watershed and climatic characteristics as explanatory or predictor variables. Generally these equations have been developed on a statewide or metropolitan-area basis as part of cooperative study programs with specific State Departments of Transportation or specific cities. The USGS, in cooperation with the Federal Highway Administration and the Federal Emergency Management Agency, has compiled all the current (as of September 1993) statewide and metropolitan-area regression equations into a microcomputer program titled the National Flood Frequency Program. This program includes regression equations for estimating flood-peak discharges and techniques for estimating a typical flood hydrograph for a given recurrence-interval peak discharge for unregulated rural and urban watersheds. These techniques should be useful to engineers and hydrologists for planning and design applications. This report summarizes the statewide regression equations for rural watersheds in each State, summarizes the applicable metropolitan-area or statewide regression equations for urban watersheds, describes the National Flood Frequency Program for making these computations, and provides much of the reference information on the extrapolation variables needed to run the program.

  6. Pressure- and buoyancy-driven thermal convection in a rectangular enclosure

    NASA Technical Reports Server (NTRS)

    Spradley, L. W.; Churchill, S. W.

    1975-01-01

    Results are presented for unsteady laminar thermal convection in compressible fluids at various reduced levels of gravity in a rectangular enclosure which is heated on one side and cooled on the opposite side. The results were obtained by solving numerically the equations of conservation for a viscous, compressible, heat-conducting, ideal gas in the presence of a gravitational body force. The formulation differs from the Boussinesq simplification in that the effects of variable density are completely retained. A conservative, explicit, time-dependent, finite-difference technique was used and good agreement was found for the limited cases where direct comparison with previous investigations was possible. The solutions show that the thermally induced motion is acoustic in nature at low levels of gravity and that the unsteady-state rate of heat transfer is thereby greatly enhanced relative to pure conduction. The nonlinear variable density profile skews the streamlines towards the cooler walls but is shown to have little effect on the steady-state isotherms.

  7. A variant of the anomaly initialisation approach for global climate forecast models

    NASA Astrophysics Data System (ADS)

    Volpi, Danila; Guemas, Virginie; Doblas-Reyes, Francisco; Hawkins, Ed; Nichols, Nancy; Carrassi, Alberto

    2014-05-01

    This work presents a refined method of anomaly initialisation (AI) applied to the ocean and sea-ice components of the global climate forecast model EC-Earth, with two particularities: (1) a weight applied to the anomalies, to avoid introducing observed anomalies whose amplitude falls outside the range of the internal variability generated by the model; and (2) AI of the temperature and density ocean state variables instead of temperature and salinity. Results show that these refinements improve the skill over the Arctic region, parts of the North and South Atlantic, parts of the North and South Pacific, and the Mediterranean Sea. In the Tropical Pacific the full-field initialised experiment performs better, probably owing to a displacement of the observed anomalies caused by the AI technique. Furthermore, preliminary results of an anomaly nudging experiment are discussed.
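
    A minimal sketch of the weighting idea, assuming a simple amplitude cap proportional to the model's internal variability (the exact EC-Earth weighting rule is not specified in the abstract):

```python
import numpy as np

def weighted_anomaly_init(obs, obs_clim, model_clim, model_std, k=1.0):
    """Sketch of weighted anomaly initialisation: the observed anomaly is
    damped where it exceeds the model's internal variability, then added
    to the model climatology. The weighting rule is a plausible
    illustration, not the exact EC-Earth formulation."""
    anomaly = obs - obs_clim
    cap = k * model_std                                   # allowed amplitude
    weight = np.minimum(1.0, cap / np.maximum(np.abs(anomaly), 1e-12))
    return model_clim + weight * anomaly
```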

  8. Phase-noise limitations in continuous-variable quantum key distribution with homodyne detection

    NASA Astrophysics Data System (ADS)

    Corvaja, Roberto

    2017-02-01

    In continuous-variable quantum key distribution with coherent states, the advantage of performing the detection with standard telecom components is counterbalanced by the lack of a stable phase reference in homodyne detection, owing to the complexity of optical phase-locking circuits and to the unavoidable phase noise of lasers, which degrades the achievable secure key rate. Pilot-assisted phase-noise estimation and postdetection compensation techniques are used to implement a protocol with coherent states in which a local laser is employed that is not locked to the received signal; instead, a postdetection phase correction is applied. Here the reduction of the secure key rate caused by laser phase noise is analytically evaluated for both individual and collective attacks, and a pilot-assisted phase-estimation scheme is proposed, outlining the trade-off in the system design between phase noise and spectral efficiency. The optimal modulation variance as a function of the amount of phase noise is derived.
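
    The sketch below illustrates the pilot-assisted idea in its simplest form: known pilot symbols interleaved with the data let the receiver track the laser phase drift and counter-rotate the measured quadratures. The random-walk phase model, pilot spacing, and interpolation are illustrative assumptions, not the paper's exact protocol.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical pilot-assisted phase recovery: known pilot symbols are
# interleaved with the data; the receiver estimates the laser phase drift
# from the pilots and counter-rotates the measured quadratures.
n, pilot_every = 1000, 10
tx = rng.normal(size=n) + 1j * rng.normal(size=n)   # Gaussian-modulated coherent states
tx[::pilot_every] = 1.0 + 0.0j                      # known pilots
phase = np.cumsum(rng.normal(0, 0.01, size=n))      # laser phase noise (random walk)
rx = tx * np.exp(1j * phase)

# Estimate the phase at each pilot and interpolate between pilots
pilot_idx = np.arange(0, n, pilot_every)
est = np.interp(np.arange(n), pilot_idx, np.angle(rx[pilot_idx]))
corrected = rx * np.exp(-1j * est)
print("residual phase std:", np.std(phase - est))
```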

  9. Fractional Order Two-Temperature Dual-Phase-Lag Thermoelasticity with Variable Thermal Conductivity

    PubMed Central

    Mallik, Sadek Hossain; Kanoria, M.

    2014-01-01

    A new theory of two-temperature generalized thermoelasticity is constructed in the context of a new consideration of dual-phase-lag heat conduction with fractional orders. The theory is then adopted to study thermoelastic interaction in an isotropic, homogeneous, semi-infinite generalized thermoelastic solid with variable thermal conductivity whose boundary is subjected to thermal and mechanical loading. The basic equations of the problem are written in the form of a vector-matrix differential equation in the Laplace transform domain, which is then solved using a state-space approach. The inversion of the Laplace transforms is computed numerically using a Fourier series expansion technique. Numerical estimates of the quantities of physical interest are obtained and depicted graphically. Comparisons of the thermophysical quantities are shown in figures to illustrate the effects of the variable thermal conductivity, the temperature discrepancy, and the fractional-order parameter. PMID:27419210

  10. Optimal estimation of spatially variable recharge and transmissivity fields under steady-state groundwater flow. Part 1. Theory

    NASA Astrophysics Data System (ADS)

    Graham, Wendy D.; Tankersley, Claude D.

    1994-05-01

    Stochastic methods are used to analyze two-dimensional steady groundwater flow subject to spatially variable recharge and transmissivity. Approximate partial differential equations are developed for the covariances and cross-covariances between the random head, transmissivity and recharge fields. Closed-form solutions of these equations are obtained using Fourier transform techniques. The resulting covariances and cross-covariances can be incorporated into a Bayesian conditioning procedure which provides optimal estimates of the recharge, transmissivity and head fields given available measurements of any or all of these random fields. Results show that head measurements contain valuable information for estimating the random recharge field. However, when recharge is treated as a spatially variable random field, the value of head measurements for estimating the transmissivity field can be reduced considerably. In a companion paper, the method is applied to a case study of the Upper Floridan Aquifer in NE Florida.

  11. Characteristic-based and interface-sharpening algorithm for high-order simulations of immiscible compressible multi-material flows

    NASA Astrophysics Data System (ADS)

    He, Zhiwei; Tian, Baolin; Zhang, Yousheng; Gao, Fujie

    2017-03-01

    The present work focuses on the simulation of immiscible compressible multi-material flows with the Mie-Grüneisen-type equation of state governed by the non-conservative five-equation model [1]. Although low-order single-fluid schemes have already been adopted to provide some feasible results, applying high-order schemes (which introduce relatively little numerical dissipation) to these flows may produce results with severe numerical oscillations. Consequently, attempts to apply interface-sharpening techniques to counteract the progressive smearing of interfaces over long simulation times may increase overshoots and, in some cases, lead to convergence to a non-physical solution. This study proposes a characteristic-based interface-sharpening algorithm for high-order simulations of such flows by deriving a pressure-equilibrium-consistent intermediate state (augmented with approximations of the pressure derivatives) for local characteristic variable reconstruction and by constructing a general framework for interface sharpening. First, by imposing a weak form of the jump condition for the non-conservative five-equation model, we analytically derive an intermediate state with the pressure derivatives treated as additional parameters of the linearization procedure. Based on this intermediate state, any well-established high-order reconstruction technique can be employed to provide the state at each cell edge. Second, by designing another state that differs only in the reconstructed values of the interface function at each cell edge, the advection term in the equation of the interface function is discretized twice using any common algorithm. The difference between the two discretizations is employed consistently for interface compression, yielding a general framework for interface sharpening. Coupled with the fifth-order improved accurate monotonicity-preserving scheme [2] for local characteristic variable reconstruction and the tangent of hyperbola for interface capturing (THINC) scheme [3] for designing the other reconstructed values of the interface function, the present algorithm is examined using some typical tests, with the Mie-Grüneisen-type equation of state used to characterize the materials of interest in both one- and two-dimensional spaces. The results of these tests verify the effectiveness of the present algorithm: essentially non-oscillatory and interface-sharpened results are obtained.

  12. Seasonal predictions of equatorial Atlantic SST in a low-resolution CGCM with surface heat flux correction

    NASA Astrophysics Data System (ADS)

    Dippe, Tina; Greatbatch, Richard; Ding, Hui

    2016-04-01

    The dominant mode of interannual variability in tropical Atlantic sea surface temperatures (SSTs) is the Atlantic Niño or Zonal Mode. Akin to the El Niño-Southern Oscillation in the Pacific sector, it is able to impact the climate both of the adjacent equatorial African continent and of remote regions. Due to heavy biases in the mean-state climate of the equatorial-to-subtropical Atlantic, however, most state-of-the-art coupled global climate models (CGCMs) are unable to realistically simulate equatorial Atlantic variability. In this study, the Kiel Climate Model (KCM) is used to investigate the impact of a simple bias alleviation technique on the predictability of equatorial Atlantic SSTs. Two sets of seasonal forecasting experiments are performed: an experiment using the standard KCM (STD), and an experiment with additional surface heat flux correction (FLX) that efficiently removes the SST bias from the simulations. Initial conditions for both experiments are generated by the KCM run in partially coupled mode, a simple assimilation technique that forces the KCM with observed wind stress anomalies and preserves SST as a fully prognostic variable. Seasonal predictions for both sets of experiments are run four times yearly for 1981-2012. Results: Heat flux correction substantially improves the simulated variability in the initialization runs for boreal summer and fall (June-October). In boreal spring (March-May), however, neither the STD nor the FLX initialization runs are able to capture the observed variability. FLX predictions show no consistent enhancement of skill relative to the STD predictions over the course of the year. The skill of persistence forecasts is hardly beaten by either of the two experiments in any season, limiting the usefulness of the few forecasts that show significant skill. However, FLX forecasts initialized in May recover skill in July and August, the peak season of the Atlantic Niño (anomaly correlation coefficients of about 0.3). Further study is necessary to determine the mechanism that drives this potentially useful recovery.

  13. Development of Spatiotemporal Bias-Correction Techniques for Downscaling GCM Predictions

    NASA Astrophysics Data System (ADS)

    Hwang, S.; Graham, W. D.; Geurink, J.; Adams, A.; Martinez, C. J.

    2010-12-01

    Accurately representing the spatial variability of precipitation is an important factor for predicting watershed response to climatic forcing, particularly in small, low-relief watersheds affected by convective storm systems. Although Global Circulation Models (GCMs) generally preserve spatial relationships between large-scale and local-scale mean precipitation trends, most GCM downscaling techniques focus on preserving only the observed temporal variability on a point-by-point basis, not the spatial patterns of events. Downscaled GCM results (e.g., CMIP3 ensembles) have been widely used to predict hydrologic implications of climate variability and climate change in large snow-dominated river basins in the western United States (Diffenbaugh et al., 2008; Adam et al., 2009). However, fewer applications to smaller rain-driven river basins in the southeastern US (where preserving the spatial variability of rainfall patterns may be more important) have been reported. In this study a new method was developed to bias-correct GCMs to preserve both the long-term temporal mean and variance of the precipitation data and the spatial structure of daily precipitation fields. Forty-year retrospective simulations (1960-1999) from 16 GCMs were collected (IPCC, 2007; WCRP CMIP3 multi-model database: https://esg.llnl.gov:8443/), and the daily precipitation data at coarse resolution (i.e., 280 km) were interpolated to 12 km spatial resolution and bias corrected using gridded observations over the state of Florida (Maurer et al., 2002; Wood et al., 2002; Wood et al., 2004). In this method, spatial random fields were generated that preserved both the observed spatial correlation structure of the historic gridded observations and the spatial mean corresponding to the coarse-scale GCM daily rainfall. The spatiotemporal variability of the spatiotemporally bias-corrected GCMs was evaluated against gridded observations and compared to the original temporally bias-corrected and downscaled CMIP3 data for central Florida. The hydrologic response of two southwest Florida watersheds to the gridded observation data, the original bias-corrected CMIP3 data, and the new spatiotemporally corrected CMIP3 predictions was compared using an integrated surface-subsurface hydrologic model developed by Tampa Bay Water.
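
    For orientation, the sketch below shows quantile mapping, a standard building block of the kind of temporal bias correction discussed above; the paper's additional step of generating spatially correlated random fields is not reproduced here.

```python
import numpy as np

def quantile_map(gcm, obs_ref, gcm_ref):
    """Map each GCM value onto the observed climatology by matching
    empirical quantiles against a reference period. This is a common
    building block of temporal bias correction, shown for illustration;
    the paper's spatial random-field step is more involved."""
    quantiles = np.searchsorted(np.sort(gcm_ref), gcm) / len(gcm_ref)
    quantiles = np.clip(quantiles, 0.0, 1.0 - 1e-9)
    return np.quantile(obs_ref, quantiles)
```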

  14. Investigation of an HMM/ANN hybrid structure in pattern recognition application using cepstral analysis of dysarthric (distorted) speech signals.

    PubMed

    Polur, Prasad D; Miller, Gerald E

    2006-10-01

    Computer speech recognition for individuals with dysarthria, such as patients with cerebral palsy, requires a robust technique that can handle conditions of very high variability and limited training data. In this study, the application of a 10-state ergodic hidden Markov model (HMM)/artificial neural network (ANN) hybrid structure to a dysarthric speech (isolated word) recognition system, intended to act as an assistive tool, was investigated. A small vocabulary spoken by three subjects with cerebral palsy was chosen. The effect of this structure on the recognition rate of the system was investigated by comparing it with an ergodic hidden Markov model as a control. This was done to determine whether the modified technique contributed to enhanced recognition of dysarthric speech. The speech was sampled at 11 kHz. Mel frequency cepstral coefficients were extracted using 15-ms frames and served as training input to the hybrid model. The results demonstrated that the hybrid model structure was quite robust in its ability to handle the large variability and non-conformity of dysarthric speech. The level of variability in input dysarthric speech patterns sometimes limits the reliability of the system. However, its application as a rehabilitation/control tool to assist motor-impaired individuals with dysarthria holds sufficient promise.
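
    A minimal sketch of the described front end, computing MFCCs from speech sampled at 11 kHz with 15-ms frames using the librosa library; the file name and coefficient count are placeholders.

```python
# Sketch of the front end described in the abstract: MFCCs from speech
# sampled at 11 kHz using 15 ms analysis frames. File name is a placeholder.
import librosa

y, sr = librosa.load("dysarthric_word.wav", sr=11000)
frame = int(0.015 * sr)                    # 15 ms -> 165 samples
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13,
                            n_fft=frame, hop_length=frame)
print(mfcc.shape)                          # (13, number of frames)
```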

  15. Revisiting gender, race, and ear differences in peripheral auditory function

    NASA Astrophysics Data System (ADS)

    Boothalingam, Sriram; Klyn, Niall A. M.; Stiepan, Samantha M.; Wilson, Uzma S.; Lee, Jungwha; Siegel, Jonathan H.; Dhar, Sumitrajit

    2018-05-01

    Various measures of auditory function are reported to be superior in females as compared to males, in African American compared to Caucasian individuals, and in right compared to left ears. We re-examined the influence of these subject variables on hearing thresholds and otoacoustic emissions (OAEs) in a sample of 887 human participants between 10 and 68 years of age. Even though the variables of interest here have been examined before, previous attempts have largely been limited to frequencies up to 8 kHz. We used state-of-the-art signal delivery and recording techniques that compensated for individual differences in ear canal acoustics, allowing us to measure hearing thresholds and OAEs up to 20 kHz. The use of these modern calibration and recording techniques provided the motivation for re-examining these commonly studied variables. While controlling for age, noise exposure history, and general health history, we attempted to isolate the effects of gender, race, and ear (left versus right) on hearing thresholds and OAEs. Our results challenge the notion of a right-ear advantage and question the existence of significant gender and race differences in both hearing thresholds and OAE levels. These results suggest that ear canal anatomy and acoustics should be important considerations when evaluating the influence of gender, race, and ear on peripheral auditory function.

  16. Predicting non-stationary algal dynamics following changes in hydrometeorological conditions using data assimilation techniques

    NASA Astrophysics Data System (ADS)

    Kim, S.; Seo, D. J.

    2017-12-01

    When water temperature (TW) increases due to changes in hydrometeorological conditions, the overall ecological conditions in the aquatic system change. These changes can be harmful to human health and potentially fatal to fish habitat. It is therefore important to assess the impacts of thermal disturbances on in-stream processes of water quality variables and to be able to predict the effectiveness of possible actions that may be taken for water quality protection. For skillful prediction of in-stream water quality processes, watershed water quality models must be able to reflect such changes. Most of the currently available models, however, assume static parameters for the biophysiochemical processes and hence are not able to capture the nonstationarities seen in water quality observations. In this work, we assess the performance of the Hydrological Simulation Program-Fortran (HSPF) in predicting algal dynamics following a TW increase. The study area is located in the Republic of Korea, where a waterway change due to weir construction and a drought occurred concurrently around 2012. We use data assimilation (DA) techniques to update model parameters as well as the initial condition of selected state variables for in-stream processes relevant to algal growth. For assessment of model performance and characterization of temporal variability, various goodness-of-fit measures and wavelet analysis are used.

  17. An Ensemble Successive Project Algorithm for Liquor Detection Using Near Infrared Sensor.

    PubMed

    Qu, Fangfang; Ren, Dong; Wang, Jihua; Zhang, Zhong; Lu, Na; Meng, Lei

    2016-01-11

    Spectral analysis based on near infrared (NIR) sensors is a powerful tool for complex information processing and high-precision recognition, and it has been widely applied to quality analysis and online inspection of agricultural products. This paper proposes a new method to address the instability of small sample sizes in the successive projections algorithm (SPA) as well as the lack of association between the selected variables and the analyte. The proposed method is an evaluated bootstrap ensemble SPA method (EBSPA) based on a variable evaluation index (EI) for variable selection, and it is applied to the quantitative prediction of alcohol concentrations in liquor using an NIR sensor. In the experiment, the proposed EBSPA is combined with three kinds of modeling methods to test its performance. In addition, the proposed EBSPA combined with partial least squares is compared with other state-of-the-art variable selection methods. The results show that the proposed method overcomes the defects of SPA and has the best generalization performance and stability. Furthermore, the physical meaning of the selected variables from the near infrared sensor data is clear, which can effectively reduce the number of variables and improve prediction accuracy.

  18. North Atlantic climate variability: The role of the North Atlantic Oscillation

    NASA Astrophysics Data System (ADS)

    Hurrell, James W.; Deser, Clara

    2009-08-01

    Marine ecosystems are undergoing rapid change at local and global scales. To understand these changes, including the relative roles of natural variability and anthropogenic effects, and to predict the future state of marine ecosystems requires quantitative understanding of the physics, biogeochemistry and ecology of oceanic systems at mechanistic levels. Central to this understanding is the role played by dominant patterns or "modes" of atmospheric and oceanic variability, which orchestrate coherent variations in climate over large regions with profound impacts on ecosystems. We review the spatial structure of extratropical climate variability over the Northern Hemisphere and, specifically, focus on modes of climate variability over the extratropical North Atlantic. A leading pattern of weather and climate variability over the Northern Hemisphere is the North Atlantic Oscillation (NAO). The NAO refers to a redistribution of atmospheric mass between the Arctic and the subtropical Atlantic, and swings from one phase to another producing large changes in surface air temperature, winds, storminess and precipitation over the Atlantic as well as the adjacent continents. The NAO also affects the ocean through changes in heat content, gyre circulations, mixed layer depth, salinity, high latitude deep water formation and sea ice cover. Thus, indices of the NAO have become widely used to document and understand how this mode of variability alters the structure and functioning of marine ecosystems. There is no unique way, however, to define the NAO. Several approaches are discussed including both linear (e.g., principal component analysis) and nonlinear (e.g., cluster analysis) techniques. The former, which have been most widely used, assume preferred atmospheric circulation states come in pairs, in which anomalies of opposite polarity have the same spatial structure. In contrast, nonlinear techniques search for recurrent patterns of a specific amplitude and sign. They reveal, for instance, spatial asymmetries between different phases of the NAO that are likely important for ecological studies. It also follows that there is no universally accepted index to describe the temporal evolution of the NAO. Several of the most common measures are presented and compared. All reveal that there is no preferred time scale of variability for the NAO: large changes occur from one winter to the next and from one decade to the next. There is also a large amount of within-season variability in the patterns of atmospheric circulation of the North Atlantic, so that most winters cannot be characterized solely by a canonical NAO structure. A better understanding of how the NAO responds to external forcing, including sea surface temperature changes in the tropics, stratospheric influences, and increasing greenhouse gas concentrations, is crucial to the current debate on climate variability and change.

  19. North Atlantic climate variability: The role of the North Atlantic Oscillation

    NASA Astrophysics Data System (ADS)

    Hurrell, James W.; Deser, Clara

    2010-02-01

    Marine ecosystems are undergoing rapid change at local and global scales. To understand these changes, including the relative roles of natural variability and anthropogenic effects, and to predict the future state of marine ecosystems requires quantitative understanding of the physics, biogeochemistry and ecology of oceanic systems at mechanistic levels. Central to this understanding is the role played by dominant patterns or "modes" of atmospheric and oceanic variability, which orchestrate coherent variations in climate over large regions with profound impacts on ecosystems. We review the spatial structure of extratropical climate variability over the Northern Hemisphere and, specifically, focus on modes of climate variability over the extratropical North Atlantic. A leading pattern of weather and climate variability over the Northern Hemisphere is the North Atlantic Oscillation (NAO). The NAO refers to a redistribution of atmospheric mass between the Arctic and the subtropical Atlantic, and swings from one phase to another producing large changes in surface air temperature, winds, storminess and precipitation over the Atlantic as well as the adjacent continents. The NAO also affects the ocean through changes in heat content, gyre circulations, mixed layer depth, salinity, high latitude deep water formation and sea ice cover. Thus, indices of the NAO have become widely used to document and understand how this mode of variability alters the structure and functioning of marine ecosystems. There is no unique way, however, to define the NAO. Several approaches are discussed including both linear (e.g., principal component analysis) and nonlinear (e.g., cluster analysis) techniques. The former, which have been most widely used, assume preferred atmospheric circulation states come in pairs, in which anomalies of opposite polarity have the same spatial structure. In contrast, nonlinear techniques search for recurrent patterns of a specific amplitude and sign. They reveal, for instance, spatial asymmetries between different phases of the NAO that are likely important for ecological studies. It also follows that there is no universally accepted index to describe the temporal evolution of the NAO. Several of the most common measures are presented and compared. All reveal that there is no preferred time scale of variability for the NAO: large changes occur from one winter to the next and from one decade to the next. There is also a large amount of within-season variability in the patterns of atmospheric circulation of the North Atlantic, so that most winters cannot be characterized solely by a canonical NAO structure. A better understanding of how the NAO responds to external forcing, including sea surface temperature changes in the tropics, stratospheric influences, and increasing greenhouse gas concentrations, is crucial to the current debate on climate variability and change.

  20. Locally optimal control under unknown dynamics with learnt cost function: application to industrial robot positioning

    NASA Astrophysics Data System (ADS)

    Guérin, Joris; Gibaru, Olivier; Thiery, Stéphane; Nyiri, Eric

    2017-01-01

    Recent Reinforcement Learning methods have made it possible to solve difficult, high-dimensional robotic tasks under unknown dynamics using iterative Linear Quadratic Gaussian control theory. These algorithms are based on building a local time-varying linear model of the dynamics from data gathered through interaction with the environment. In such tasks, the cost function is often expressed directly in terms of the state and control variables so that it can be quadratized locally to run the algorithm. If the cost is expressed in terms of other variables, a model is required to compute the cost function from the variables actually manipulated. We propose a method to learn the cost function directly from the data, in the same way as the dynamics. This way, the cost function can be defined in terms of any measurable quantity and thus can be chosen more appropriately for the task to be carried out, and any sensor information can be used to design it. We demonstrate the efficiency of this method by simulating, with the V-REP software, the learning of a Cartesian positioning task on several industrial robots with different characteristics. The robots are controlled in joint space and no model is provided a priori. Our results are compared with another model-free technique, which consists of writing the cost function as a state variable.

  1. Effects of variable practice on the motor learning outcomes in manual wheelchair propulsion.

    PubMed

    Leving, Marika T; Vegter, Riemer J K; de Groot, Sonja; van der Woude, Lucas H V

    2016-11-23

    Handrim wheelchair propulsion is a cyclic skill that needs to be learned during rehabilitation. It has been suggested that more variability in propulsion technique benefits the motor learning process of wheelchair propulsion. The purpose of this study was to determine the influence of variable practice on the motor learning outcomes of wheelchair propulsion in able-bodied participants. Variable practice was introduced in the form of wheelchair basketball practice and wheelchair-skill practice. Motor learning was operationalized as improvements in mechanical efficiency and propulsion technique. Eleven participants in the variable practice group and 12 participants in the control group performed an identical pre-test and post-test. The pre- and post-test were performed in a wheelchair on a motor-driven treadmill (1.11 m/s) at a relative power output of 0.23 W/kg. Energy consumption and the propulsion technique variables, with their respective coefficients of variation, were calculated. Between the pre- and the post-test, the variable practice group received 7 practice sessions. During each practice session, participants performed one hour of variable practice, consisting of five wheelchair-skill tasks and a 30-min wheelchair basketball game. The control group did not receive any practice between the pre- and the post-test. Comparison of the pre- and the post-test showed that the variable practice group significantly improved mechanical efficiency (4.5 ± 0.6% → 5.7 ± 0.7%), in contrast to the control group (4.5 ± 0.6% → 4.4 ± 0.5%) (group × time interaction effect p < 0.001). With regard to propulsion technique, both groups significantly reduced the push frequency and increased the contact angle of the hand with the handrim (within-group time effect). No significant group × time interaction effects were found for propulsion technique. With regard to propulsion variability, the variable practice group increased variability compared to the control group (interaction effect p < 0.001). Compared to the control condition, variable practice resulted in an increase in mechanical efficiency and increased variability. Interestingly, the large relative improvement in mechanical efficiency was concomitant with only moderate improvements in propulsion technique, which were similar in the control group, suggesting that other factors besides propulsion technique contributed to the lower energy expenditure.

  2. New Technique of High-Performance Torque Control Developed for Induction Machines

    NASA Technical Reports Server (NTRS)

    Kenny, Barbara H.

    2003-01-01

    Two forms of high-performance torque control for motor drives have been described in the literature: field orientation control and direct torque control. Field orientation control has been the method of choice for previous NASA electromechanical actuator research efforts with induction motors. Direct torque control has the potential to offer some advantages over field orientation, including ease of implementation and faster response. However, the most common form of direct torque control is not suitable for the high-speed, low-stator-flux-linkage induction machines designed for electromechanical actuators at the presently available sample rates of digital control systems (higher sample rates would be required). In addition, this form of direct torque control is not suitable for the addition of a high-frequency carrier signal necessary for the "self-sensing" (sensorless) position estimation technique. This technique enables low- and zero-speed position-sensorless operation of the machine. Sensorless operation is desirable to reduce the number of necessary feedback signals and transducers, thus improving the reliability and reducing the mass and volume of the system. This research was directed at developing an alternative form of direct torque control known as a "deadbeat," or inverse model, solution. This form uses pulse-width modulation of the voltage applied to the machine, thus reducing the necessary sample and switching frequency for the high-speed NASA motor. In addition, the structure of the deadbeat form allows the addition of the high-frequency carrier signal so that low- and zero-speed sensorless operation is possible. The new deadbeat solution is based on using the stator and rotor flux as state variables. This choice of state variables leads to a simple graphical representation of the solution as the intersection of a constant-torque line with a constant-stator-flux circle. Previous solutions have been expressed only in complex mathematical terms without a method to clearly visualize the solution. The graphical technique allows a more insightful understanding of the operation of the machine under various conditions.
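
    The graphical solution lends itself to a compact computation: with the rotor flux aligned to the d-axis, the commanded operating point is the intersection of the constant-torque line with the constant-stator-flux circle. The sketch below assumes that convention and an illustrative torque constant k, so it is a simplified reading of the geometry rather than the published deadbeat law.

```python
import math

def deadbeat_flux_target(T_cmd, flux_s_mag, flux_r, k):
    """Graphical deadbeat sketch: with the rotor flux on the d-axis,
    torque is taken as proportional to the q-component of stator flux,
    T = k * flux_r * lam_q. The commanded operating point is then the
    intersection of that constant-torque line with the circle
    lam_d**2 + lam_q**2 = flux_s_mag**2. The constant k and the
    alignment convention are illustrative assumptions."""
    lam_q = T_cmd / (k * flux_r)
    if abs(lam_q) > flux_s_mag:
        raise ValueError("commanded torque unreachable at this stator flux")
    lam_d = math.sqrt(flux_s_mag**2 - lam_q**2)
    return lam_d, lam_q

print(deadbeat_flux_target(T_cmd=5.0, flux_s_mag=0.8, flux_r=0.7, k=12.0))
```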

  3. Comparison of seven techniques for typing international epidemic strains of Clostridium difficile: restriction endonuclease analysis, pulsed-field gel electrophoresis, PCR-ribotyping, multilocus sequence typing, multilocus variable-number tandem-repeat analysis, amplified fragment length polymorphism, and surface layer protein A gene sequence typing.

    PubMed

    Killgore, George; Thompson, Angela; Johnson, Stuart; Brazier, Jon; Kuijper, Ed; Pepin, Jacques; Frost, Eric H; Savelkoul, Paul; Nicholson, Brad; van den Berg, Renate J; Kato, Haru; Sambol, Susan P; Zukowski, Walter; Woods, Christopher; Limbago, Brandi; Gerding, Dale N; McDonald, L Clifford

    2008-02-01

    Using 42 isolates contributed by laboratories in Canada, The Netherlands, the United Kingdom, and the United States, we compared the results of analyses done with seven Clostridium difficile typing techniques: multilocus variable-number tandem-repeat analysis (MLVA), amplified fragment length polymorphism (AFLP), surface layer protein A gene sequence typing (slpAST), PCR-ribotyping, restriction endonuclease analysis (REA), multilocus sequence typing (MLST), and pulsed-field gel electrophoresis (PFGE). We assessed the discriminating ability and typeability of each technique as well as the agreement among techniques in grouping isolates by allele profile A (AP-A) through AP-F, which are defined by toxinotype, the presence of the binary toxin gene, and deletion in the tcdC gene. We found that all isolates were typeable by all techniques and that discrimination index scores for the techniques tested ranged from 0.964 to 0.631 in the following order: MLVA, REA, PFGE, slpAST, PCR-ribotyping, MLST, and AFLP. All the techniques were able to distinguish the current epidemic strain of C. difficile (BI/027/NAP1) from other strains. All of the techniques showed multiple types for AP-A (toxinotype 0, binary toxin negative, and no tcdC gene deletion). REA, slpAST, MLST, and PCR-ribotyping all included AP-B (toxinotype III, binary toxin positive, and an 18-bp deletion in tcdC) in a single group that excluded other APs. PFGE, AFLP, and MLVA grouped two, one, and two different non-AP-B isolates, respectively, with their AP-B isolates. All techniques appear to be capable of detecting outbreak strains, but only REA and MLVA showed sufficient discrimination to distinguish strains from different outbreaks.

  4. The Applications of Mindfulness with Students of Secondary School: Results on the Academic Performance, Self-concept and Anxiety

    NASA Astrophysics Data System (ADS)

    Franco, Clemente; Mañas, Israel; Cangas, Adolfo J.; Gallego, José

    The aim of the present research is to verify the impact of a mindfulness programme on the levels of academic performance, self-concept, and anxiety in a group of Year 1 secondary school students. The statistical analyses carried out on the variables studied showed significant differences in favour of the experimental group relative to the control group in all the variables analysed. In the experimental group we observe a significant increase in academic performance, an improvement in all the self-concept dimensions, and a significant decrease in state and trait anxiety. The importance and usefulness of mindfulness techniques in the educational system are discussed.

  5. Usage of machine learning for the separation of electroweak and strong Zγ production at the LHC experiments

    NASA Astrophysics Data System (ADS)

    Petukhov, A. M.; Soldatov, E. Yu

    2017-12-01

    Separation of the electroweak component from the strong component of associated Zγ production at hadron colliders is a very challenging task due to the identical final states of these processes; the only difference is the origin of the two leading jets. Rectangular cuts on jet kinematic variables from the ATLAS/CMS 8 TeV Zγ experimental analyses were improved using machine learning techniques, and new selection variables were also tested. The expected significance of the separation under LHC experimental conditions during the second data-taking period (Run 2) with 120 fb⁻¹ of data reaches more than 5σ. Future experimental observation of electroweak Zγ production can also lead to the observation of physics beyond the Standard Model.

  6. Experimental study on discretely modulated continuous-variable quantum key distribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shen Yong; Zou Hongxin; Chen Pingxing

    2010-08-15

    We present a discretely modulated continuous-variable quantum key distribution system in free space using strong coherent states. The amplitude noise in the laser source is suppressed to the shot-noise limit by using a mode cleaner combined with a frequency-shift technique. It is also proven that the phase noise in the source has no impact on the final secret key rate. In order to increase the encoding rate, we use broadband homodyne detectors and the no-switching protocol. In a realistic model, we establish a secret key rate of 46.8 kbit/s against collective attacks at an encoding rate of 10 MHz for a 90% channel loss when the modulation variance is optimal.

  7. Accounting for substitution and spatial heterogeneity in a labelled choice experiment.

    PubMed

    Lizin, S; Brouwer, R; Liekens, I; Broeckx, S

    2016-10-01

    Many environmental valuation studies using stated preference techniques are single-site studies that ignore essential spatial aspects, including possible substitution effects. In this paper substitution effects are captured explicitly in the design of a labelled choice experiment and through the inclusion of different distance variables in the choice model specification. We test the effect of spatial heterogeneity on welfare estimates and transfer errors for minor and major river restoration works, and the transferability of river-specific utility functions, accounting for key variables such as site visitation, spatial clustering, and income. River-specific utility functions appear to be transferable, resulting in low transfer errors. However, ignoring spatial heterogeneity increases transfer errors.

  8. Non-Inferential Multi-Subject Study of Functional Connectivity during Visual Stimulation.

    PubMed

    Esposito, F; Cirillo, M; Aragri, A; Caranci, F; Cirillo, L; Di Salle, F; Cirillo, S

    2007-01-31

    Independent component analysis (ICA) is a powerful technique for the multivariate, non-inferential, data-driven analysis of functional magnetic resonance imaging (fMRI) datasets. The non-inferential nature of ICA makes it a suitable technique for the study of complex mental states whose temporal evolution would be difficult to describe analytically in terms of classical statistical regressors. Taking advantage of this feature, ICA can extract a number of functional connectivity patterns regardless of the task executed by the subject. The technique is so powerful that functional connectivity patterns can be derived even when the subject is just resting in the scanner, opening the opportunity for functional investigation of the human mind in its basal "default" state, which has been proposed to be altered in several brain disorders. However, one major drawback of ICA is the difficulty of managing its results, which are not represented by a single functional image as in inferential studies. This creates the need to classify ICA results and exacerbates the difficulty of obtaining group "averaged" functional connectivity patterns while preserving the interpretation of individual differences. Addressing subject-level variability within the very framework of "grouping" appears to be a favourable approach towards the clinical evaluation and application of ICA-based methodologies. Here we present a novel strategy for group-level ICA analyses, namely self-organizing group-level ICA (sog-ICA), which is applied to visual activation fMRI data from a block-design experiment repeated on six subjects. We propose sog-ICA as a multi-subject analysis tool for grouping ICA data while assessing the similarity and variability of the fMRI results of individual subject decompositions.
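
    A minimal sketch of a spatial ICA decomposition of an fMRI-like data matrix using scikit-learn's FastICA; the data array and component count are placeholders, and the sog-ICA grouping step itself is not reproduced.

```python
# Sketch of a spatial-ICA decomposition of an fMRI data matrix
# (time points x voxels); the random data and component count are
# stand-ins for a real single-subject dataset.
import numpy as np
from sklearn.decomposition import FastICA

data = np.random.default_rng(2).normal(size=(200, 5000))  # stand-in for fMRI
ica = FastICA(n_components=20, random_state=0)
time_courses = ica.fit_transform(data)       # (200, 20) component time courses
spatial_maps = ica.components_               # (20, 5000) one map per component
```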

  9. A human visual based binarization technique for histological images

    NASA Astrophysics Data System (ADS)

    Shreyas, Kamath K. M.; Rajendran, Rahul; Panetta, Karen; Agaian, Sos

    2017-05-01

    In the field of vision-based systems for object detection and classification, thresholding is a key pre-processing step and a well-known technique for image segmentation. Segmentation of medical images, such as Computed Axial Tomography (CAT), Magnetic Resonance Imaging (MRI), X-ray, phase contrast microscopy, and histological images, presents problems such as high variability in human anatomy and variation across modalities. Recent advances in computer-aided diagnosis of histological images help facilitate the detection and classification of diseases. Since most pathology diagnosis depends on the expertise and ability of the pathologist, there is clearly a need for an automated assessment system. Histological images are stained to specific colors to differentiate each component in the tissue. Segmentation and analysis of such images is problematic, as they present high variability in terms of color and cell clusters. This paper presents an adaptive thresholding technique that aims at segmenting cell structures from Haematoxylin and Eosin stained images. The thresholded result can further be used by pathologists to perform effective diagnosis. The effectiveness of the proposed method is analyzed by visually comparing the results to state-of-the-art thresholding methods such as Otsu, Niblack, Sauvola, Bernsen, and Wolf. Computer simulations demonstrate the efficiency of the proposed method in segmenting critical information.
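
    For comparison purposes, the sketch below applies three of the named baseline methods to a grayscale histology image using scikit-image; the file name and window size are placeholders, and the paper's own adaptive method is not reproduced.

```python
# Classical thresholding baselines named in the abstract, applied to a
# grayscale histology image with scikit-image. File name is a placeholder.
from skimage import io, filters

img = io.imread("he_stained_tissue.png", as_gray=True)
otsu = img > filters.threshold_otsu(img)                         # global
niblack = img > filters.threshold_niblack(img, window_size=25)   # local
sauvola = img > filters.threshold_sauvola(img, window_size=25)   # local
```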

  10. Enhancing SMAP Soil Moisture Retrievals via Superresolution Techniques

    NASA Astrophysics Data System (ADS)

    Beale, K. D.; Ebtehaj, A. M.; Romberg, J. K.; Bras, R. L.

    2017-12-01

    Soil moisture is a key state variable that modulates land-atmosphere interactions, and high-resolution global-scale estimates of it are essential for improved weather forecasting, drought prediction, crop management, and the safety of troop mobility. Currently, NASA's Soil Moisture Active/Passive (SMAP) satellite provides a global picture of soil moisture variability at a resolution of 36 km, which is prohibitive for some hydrologic applications. The goal of this research is to enhance the resolution of SMAP passive microwave retrievals by a factor of 2 to 4 using modern superresolution techniques that rely on knowledge of high-resolution land surface models. In this work, we explore several superresolution techniques, including an empirical dictionary method, a learned dictionary method, and a three-layer convolutional neural network. Using a year of global high-resolution land surface model simulations as a training set, we found that we are able to produce high-resolution soil moisture maps that outperform the original low-resolution observations both qualitatively and quantitatively. In particular, on a patch-by-patch basis we are able to produce high-resolution soil moisture estimates that improve on the original low-resolution patches by on average 6% in terms of mean-squared error and 14% in terms of the structural similarity index.
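
    A minimal sketch of a three-layer convolutional network of the kind mentioned above, in the style of SRCNN; the layer widths, kernel sizes, and patch size are illustrative guesses, not the authors' exact architecture.

```python
# A three-layer convolutional network of the kind mentioned in the abstract
# (SRCNN-style). Layer widths and kernel sizes are illustrative assumptions.
import torch
import torch.nn as nn

class SuperResNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=9, padding=4),  # patch extraction
            nn.ReLU(),
            nn.Conv2d(64, 32, kernel_size=1),            # nonlinear mapping
            nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=5, padding=2),  # reconstruction
        )

    def forward(self, x):          # x: upsampled low-resolution patch
        return self.net(x)

model = SuperResNet()
patch = torch.randn(1, 1, 36, 36)  # hypothetical 36x36 soil moisture patch
print(model(patch).shape)          # torch.Size([1, 1, 36, 36])
```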

  11. The design of a turboshaft speed governor using modern control techniques

    NASA Technical Reports Server (NTRS)

    Delosreyes, G.; Gouchoe, D. R.

    1986-01-01

    The objectives of this program were: to verify the model of off-schedule compressor variable geometry in the T700 turboshaft engine nonlinear model; to evaluate the use of the pseudo-random binary noise (PRBN) technique for obtaining engine frequency response data; and to design a high-performance power turbine speed governor using modern control methods. Reduction of T700 engine test data generated at NASA-Lewis indicated that the off-schedule variable geometry effects were accurately modeled. Analysis also showed that the PRBN technique, combined with the maximum likelihood model identification method, produced a Bode frequency response as accurate as the response obtained from standard sine-wave testing methods. The frequency response verified the accuracy of the linear models consisting of engine partial derivatives used for design. A power turbine governor was designed using the Linear Quadratic Regulator (LQR) method of full-state feedback control. A Kalman filter observer was used to estimate helicopter main rotor blade velocity. Compared to the baseline T700 power turbine speed governor, the LQR governor reduced droop by up to 25 percent for a 490-shaft-horsepower transient in 0.1 s simulating a wind gust, and by up to 85 percent for a 700-shaft-horsepower transient in 0.5 s simulating a large collective pitch angle transient.
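
    A minimal sketch of the LQR design step, computing a full-state feedback gain from a continuous-time algebraic Riccati equation with SciPy; the two-state model and weights are hypothetical stand-ins for the engine partial-derivative model, not T700 data.

```python
# Minimal LQR gain computation of the kind used for the power turbine
# governor; the linear model and weights are hypothetical placeholders.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[-1.0, 0.5], [0.0, -2.0]])   # placeholder engine dynamics
B = np.array([[0.0], [1.0]])               # placeholder fuel-flow input
Q = np.diag([10.0, 1.0])                   # state weighting
R = np.array([[1.0]])                      # control weighting

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)            # full-state feedback u = -K x
print("LQR gain:", K)
```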

  12. A regularized auxiliary particle filtering approach for system state estimation and battery life prediction

    NASA Astrophysics Data System (ADS)

    Liu, Jie; Wang, Wilson; Ma, Fai

    2011-07-01

    System current state estimation (or condition monitoring) and future state prediction (or failure prognostics) constitute the core elements of condition-based maintenance programs. For complex systems whose internal state variables are either inaccessible to sensors or hard to measure under normal operational conditions, inference has to be made from indirect measurements using approaches such as Bayesian learning. In recent years, the auxiliary particle filter (APF) has gained popularity in Bayesian state estimation; the APF technique, however, has some potential limitations in real-world applications. For example, the diversity of the particles may deteriorate when the process noise is small, and the variance of the importance weights could become extremely large when the likelihood varies dramatically over the prior. To tackle these problems, a regularized auxiliary particle filter (RAPF) is developed in this paper for system state estimation and forecasting. This RAPF aims to improve the performance of the APF through two innovative steps: (1) regularize the approximating empirical density and redraw samples from a continuous distribution so as to diversify the particles; and (2) smooth out the rather diffused proposals by a rejection/resampling approach so as to improve the robustness of particle filtering. The effectiveness of the proposed RAPF technique is evaluated through simulations of a nonlinear/non-Gaussian benchmark model for state estimation. It is also implemented for a real application in the remaining useful life (RUL) prediction of lithium-ion batteries.
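
    A minimal sketch of the regularisation idea: after resampling from the weighted empirical distribution, each particle is jittered with a kernel so that samples are effectively redrawn from a continuous density, restoring particle diversity. The Gaussian kernel and fixed bandwidth are simplifying assumptions, not the paper's exact RAPF.

```python
import numpy as np

def regularized_resample(particles, weights, bandwidth=0.1, rng=None):
    """Sketch of the regularisation step described in the abstract:
    resample a 1-D particle set from the weighted empirical distribution,
    then jitter each particle with a Gaussian kernel so the posterior is
    effectively redrawn from a continuous density. Weights are assumed
    normalized; the kernel and bandwidth choices are illustrative."""
    rng = rng or np.random.default_rng()
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    jitter = rng.normal(0.0, bandwidth * particles.std(), size=len(particles))
    return particles[idx] + jitter
```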

  13. Logic programming to predict cell fate patterns and retrodict genotypes in organogenesis.

    PubMed

    Hall, Benjamin A; Jackson, Ethan; Hajnal, Alex; Fisher, Jasmin

    2014-09-06

    Caenorhabditis elegans vulval development is a paradigm system for understanding cell differentiation in the process of organogenesis. Through temporal and spatial controls, the fate pattern of six cells is determined by the competition between the LET-23 and Notch signalling pathways. Modelling cell fate determination in vulval development using state-based models, coupled with formal analysis techniques, has been established as a powerful approach for predicting the outcome of combinations of mutations. However, computing the outcomes of complex and highly concurrent models can become prohibitive. Here, we show how logic programs derived from state machines describing the differentiation of C. elegans vulval precursor cells can increase the speed of prediction by four orders of magnitude relative to previous approaches. Moreover, this increase in speed allows us to infer, or 'retrodict', compatible genomes from cell fate patterns. We exploit this technique to predict highly variable cell fate patterns resulting from dig-1 reduced-function mutations and let-23 mosaics. In addition to the new insights offered, we propose our technique as a platform for aiding the design and analysis of experimental data.

  14. Development of Advanced Methods of Structural and Trajectory Analysis for Transport Aircraft

    NASA Technical Reports Server (NTRS)

    Ardema, Mark D.; Windhorst, Robert; Phillips, James

    1998-01-01

    This paper develops a near-optimal guidance law for generating minimum-fuel, minimum-time, or minimum-cost fixed-range trajectories for supersonic transport aircraft. The approach uses a choice of new state variables along with singular perturbation techniques to time-scale decouple the dynamic equations into multiple equations of single order (second order for the fast dynamics). Application of the maximum principle to each of the decoupled equations, as opposed to application to the original coupled equations, avoids the two-point boundary value problem and transforms the problem from a functional optimization into multiple function optimizations. It is shown that such an approach reproduces well-known aircraft performance results, such as minimizing the Breguet factor for minimum fuel consumption and the energy climb path. Furthermore, the new state variables produce a consistent calculation of the flight path angle along the trajectory, eliminating one of the deficiencies of the traditional energy state approximation. In addition, jumps in the energy climb path are smoothed out by integration of the original dynamic equations at constant load factor. Numerical results for a supersonic transport design show that a pushover dive followed by a pullout at nominal load factors is sufficient to smooth the jump.

  15. Optimization of Supersonic Transport Trajectories

    NASA Technical Reports Server (NTRS)

    Ardema, Mark D.; Windhorst, Robert; Phillips, James

    1998-01-01

    This paper develops a near-optimal guidance law for generating minimum-fuel, minimum-time, or minimum-cost fixed-range trajectories for supersonic transport aircraft. The approach uses a choice of new state variables along with singular perturbation techniques to time-scale decouple the dynamic equations into multiple equations of single order (second order for the fast dynamics). Application of the maximum principle to each of the decoupled equations, as opposed to application to the original coupled equations, avoids the two-point boundary value problem and transforms the problem from a functional optimization into multiple function optimizations. It is shown that such an approach reproduces well-known aircraft performance results, such as minimizing the Breguet factor for minimum fuel consumption and the energy climb path. Furthermore, the new state variables produce a consistent calculation of the flight path angle along the trajectory, eliminating one of the deficiencies of the traditional energy state approximation. In addition, jumps in the energy climb path are smoothed out by integration of the original dynamic equations at constant load factor. Numerical results for a supersonic transport design show that a pushover dive followed by a pullout at nominal load factors is sufficient to smooth the jump.

  16. Solid-state transformation of Fe-rich intermetallic phases in Al–5.0Cu–0.6Mn squeeze cast alloy with variable Fe contents during solution heat treatment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Bo; Zhang, Weiwen

    2015-06-15

    The Al–5.0 wt.% Cu–0.6 wt.% Mn alloys with variable Fe contents were prepared by squeeze casting. Optical microscopy (OM), a deep etching technique, scanning electron microscopy (SEM), X-ray diffraction (XRD) and transmission electron microscopy (TEM) were used to examine the solid-state transformation of Fe-rich intermetallics during solution heat treatment. The results showed that the Chinese-script-like α-Fe, Al₆(FeMn) and needle-like Al₃(FeMn) phases transform to a new Cu-rich β-Fe (Al₇Cu₂(FeMn)) phase during solution heat treatment. The possible reaction and overall transformation kinetics of the solid-state phase transformation for the Fe-rich intermetallics were investigated. Highlights: • The α-Fe, Al₆(FeMn) and Al₃(FeMn) phases change to the β-Fe phase. • Possible reactions of the Fe phases during solution heat treatment are discussed. • The overall fractional transformation rate follows an Avrami curve.

  17. Analysis of heart rate variability signal in meditation using second-order difference plot

    NASA Astrophysics Data System (ADS)

    Goswami, Damodar Prasad; Tibarewala, Dewaki Nandan; Bhattacharya, Dilip Kumar

    2011-06-01

    In this article, heart rate variability signals taken from subjects practising different types of meditation have been investigated to find the underlying similarity among them and how they differ from the non-meditative condition. Four groups of subjects using different meditation techniques are involved. The data were obtained from PhysioNet and also collected with our own ECG machine. For data analysis, the second-order difference plot is applied. Each of the plots obtained from the second-order differences forms a single cluster which is nearly elliptical in shape, except for some outliers. In meditation, the axis of the elliptical cluster rotates anticlockwise relative to the cluster formed from the premeditation data, although the amount of rotation is not of the same extent in every case. This study reveals definite and specific changes in the heart rate variability of the subjects during meditation. All four groups of subjects followed different procedures, but surprisingly the resulting physiological effect is the same to some extent. It indicates that there is some commonness among all the meditative techniques in spite of their apparent dissimilarity, and it may be hoped that each of them leads to the same result as preached by the masters of meditation. The study shows that the meditative state has a completely different physiology and that it can be achieved by any meditation technique we have observed. Possible use of this tool in clinical settings, such as in stress management and in the treatment of hypertension, is also mentioned.
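
    A minimal sketch of the second-order difference plot for an RR-interval series; the input file is a placeholder, and the cluster-orientation analysis itself is not reproduced.

```python
# Second-order difference plot for an RR-interval series: successive
# differences are plotted against the next successive differences,
# forming the cluster whose orientation the study examines.
import numpy as np
import matplotlib.pyplot as plt

rr = np.loadtxt("rr_intervals.txt")   # placeholder file of RR intervals (s)
d = np.diff(rr)
plt.scatter(d[:-1], d[1:], s=4)       # (x_{n+1}-x_n) vs (x_{n+2}-x_{n+1})
plt.xlabel("RR(n+1) - RR(n)")
plt.ylabel("RR(n+2) - RR(n+1)")
plt.axis("equal")
plt.show()
```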

  18. The XMM deep survey in the CDF-S. X. X-ray variability of bright sources

    NASA Astrophysics Data System (ADS)

    Falocco, S.; Paolillo, M.; Comastri, A.; Carrera, F. J.; Ranalli, P.; Iwasawa, K.; Georgantopoulos, I.; Vignali, C.; Gilli, R.

    2017-12-01

    Aims: We aim to study the variability properties of bright hard-X-ray-selected active galactic nuclei (AGN) in the redshift range between 0.3 and 1.6 detected in the Chandra Deep Field South (XMM-CDFS) by a long (~3 Ms) XMM observation. Methods: Taking advantage of the good count statistics in the XMM-CDFS, we search for flux and spectral variability using hardness ratio (HR) techniques. We also investigate the variability of different spectral components (photon index of the power law, column density of the local absorber, and reflection intensity). The spectra were merged in six epochs (defined as adjacent observations) and in high and low flux states to understand whether the flux transitions are accompanied by spectral changes. Results: The flux variability is significant in all the sources investigated. The HRs are in general not as variable as the fluxes, in line with previous results on deep fields. Only one source displays a variable HR, anti-correlated with the flux (source 337). The spectral analysis in the available epochs confirms the steeper-when-brighter trend consistent with Comptonisation models only in this source, at the 99% confidence level. Finding this trend in one out of seven unabsorbed sources is consistent, within the statistical limits, with the 15% of unabsorbed AGN in previous deep surveys. No significant variability in the column densities, nor in the Compton reflection component, has been detected across the epochs considered. The high and low states display in general different normalisations but consistent spectral properties. Conclusions: X-ray flux fluctuations are ubiquitous in AGN, though in some cases the data quality does not allow for their detection. In general, the significant flux variations are not associated with spectral variability: the photon index and column densities are not significantly variable in nine out of the ten AGN over long timescales (from three to six and a half years). Photon index variability is found only in one source (which is steeper when brighter) out of seven unabsorbed AGN. The percentage of spectrally variable objects is consistent, within the limited statistics of the sources studied here, with previous deep samples.
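
    For reference, the hardness ratio used in such analyses is conventionally defined as HR = (H - S)/(H + S) from the counts in a hard and a soft band; the sketch below computes it, with the band definitions left unspecified since the abstract does not give them.

```python
def hardness_ratio(hard_counts, soft_counts):
    """Conventional X-ray hardness ratio HR = (H - S) / (H + S); the exact
    energy-band definitions used in the XMM-CDFS analysis are not
    reproduced here."""
    return (hard_counts - soft_counts) / (hard_counts + soft_counts)

print(hardness_ratio(120, 340))   # negative HR indicates a softer spectrum
```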

  19. A Generic Inner-Loop Control Law Structure for Six-Degree-of-Freedom Conceptual Aircraft Design

    NASA Technical Reports Server (NTRS)

    Cox, Timothy H.; Cotting, M. Christopher

    2005-01-01

    A generic control system framework for both real-time and batch six-degree-of-freedom simulations is presented. This framework uses a simplified dynamic inversion technique to allow for stabilization and control of any type of aircraft at the pilot interface level. The simulation, designed primarily for the real-time simulation environment, also can be run in a batch mode through a simple guidance interface. Direct vehicle-state acceleration feedback is required with the simplified dynamic inversion technique. The estimation of surface effectiveness within real-time simulation timing constraints also is required. The generic framework provides easily modifiable control variables, allowing flexibility in the variables that the pilot commands. A direct control allocation scheme is used to command aircraft effectors. Primary uses for this system include conceptual and preliminary design of aircraft, when vehicle models are rapidly changing and knowledge of vehicle six-degree-of-freedom performance is required. A simulated airbreathing hypersonic vehicle and simulated high-performance fighter aircraft are used to demonstrate the flexibility and utility of the control system.
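
    The flavor of a simplified dynamic inversion law with direct acceleration feedback can be sketched in a few lines: the acceleration error is mapped through an estimated control-effectiveness matrix to effector increments. The pseudo-inverse allocation, signal dimensions, and function names below are illustrative assumptions, not the report's actual control law.

```python
import numpy as np

def simplified_dynamic_inversion(nu_cmd, accel_meas, B, u_prev):
    """One step of a simplified (incremental) dynamic inversion law.

    nu_cmd     : commanded angular accelerations (3,)  - from pilot/guidance
    accel_meas : measured angular accelerations (3,)   - direct acceleration feedback
    B          : estimated control-effectiveness matrix (3 x n_effectors)
    u_prev     : previous effector commands (n_effectors,)

    The effector increment that cancels the acceleration error is obtained
    here from a least-squares (pseudo-inverse) allocation; a weighted
    allocation would play the role of a direct control allocation scheme.
    """
    du = np.linalg.pinv(B) @ (nu_cmd - accel_meas)
    return u_prev + du
```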

  1. Reduced heart rate variability in chronic severe traumatic brain injury: Association with impaired emotional and social functioning, and potential for treatment using biofeedback.

    PubMed

    Francis, Heather M; Fisher, Alana; Rushby, Jacqueline A; McDonald, Skye

    2016-01-01

    Heart rate variability (HRV) may provide an index of capacity for social functioning and may be remediated by HRV biofeedback. Given that reductions in HRV are found following traumatic brain injury (TBI), the present study aimed to determine whether lower HRV in TBI is associated with social function, and whether HRV biofeedback might be a useful remediation technique in this population. Resting state HRV and measures of social and emotional processing were collected in 30 individuals with severe TBI (3-34 years post-injury) and 30 controls. This was followed by a single session of HRV biofeedback. HRV was positively associated with social cognition and empathy, and negatively associated with alexithymia for the TBI group. Both TBI and control groups showed significantly increased HRV on both time-domain (i.e., SDNN, rMSSD) and frequency-domain measures (LF, HF, LF:HF ratio) during biofeedback compared to baseline. These results suggest that decreased HRV is linked to social and emotional function following severe TBI, and may be a novel target for therapy using HRV biofeedback techniques.
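
    The two time-domain measures named above are simple statistics of the normal-to-normal (NN) interval series; a minimal sketch, with units and artifact handling assumed rather than taken from the study:

```python
import numpy as np

def time_domain_hrv(rr_ms):
    """SDNN and rMSSD, the two time-domain HRV measures cited in the study.

    rr_ms: normal-to-normal (NN) intervals in milliseconds.
    """
    rr = np.asarray(rr_ms, float)
    sdnn = rr.std(ddof=1)                       # SD of NN intervals
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))  # RMS of successive differences
    return sdnn, rmssd
```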

  2. Experimental quantum key distribution with source flaws

    NASA Astrophysics Data System (ADS)

    Xu, Feihu; Wei, Kejin; Sajeed, Shihan; Kaiser, Sarah; Sun, Shihai; Tang, Zhiyuan; Qian, Li; Makarov, Vadim; Lo, Hoi-Kwong

    2015-09-01

    Decoy-state quantum key distribution (QKD) is a standard technique in current quantum cryptographic implementations. Unfortunately, existing experiments have two important drawbacks: the state preparation is assumed to be perfect without errors and the employed security proofs do not fully consider the finite-key effects for general attacks. These two drawbacks mean that the security of existing experiments is not guaranteed in practice. Here, we perform an experiment that shows secure QKD with imperfect state preparations over long distances and achieves rigorous finite-key security bounds for decoy-state QKD against coherent attacks in the universally composable framework. We quantify the source flaws experimentally and demonstrate a QKD implementation that is tolerant to channel loss despite the source flaws. Our implementation considers more real-world problems than most previous experiments, and our theory can be applied to general discrete-variable QKD systems. These features constitute a step towards secure QKD with imperfect devices.

  3. Aeroservoelastic modeling and applications using minimum-state approximations of the unsteady aerodynamics

    NASA Technical Reports Server (NTRS)

    Tiffany, Sherwood H.; Karpel, Mordechay

    1989-01-01

    Various control analysis, design, and simulation techniques for aeroelastic applications require the equations of motion to be cast in a linear time-invariant state-space form. Unsteady aerodynamic forces have to be approximated as rational functions of the Laplace variable in order to put them in this framework. For the minimum-state method, the number of augmenting aerodynamic states equals the number of denominator roots in the rational approximation. Results are shown of applying various approximation enhancements (including optimization, frequency-dependent weighting of the tabular data, and constraint selection) with the minimum-state formulation to the active flexible wing wind-tunnel model. The results demonstrate that good models can be developed which have an order of magnitude fewer augmenting aerodynamic equations than traditional approaches. This reduction facilitates the design of lower-order control systems, analysis of control system performance, and near real-time simulation of aeroservoelastic phenomena.
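
    For reference, the minimum-state approximation writes the unsteady aerodynamic matrix in the form below (a sketch in commonly used notation, not reproduced from the paper); the number of augmenting aerodynamic states equals the dimension of the lag-root matrix R regardless of the number of modes, which is the source of the order-of-magnitude reduction noted above.

```latex
% Minimum-state rational-function approximation (\bar{s}: nondimensional
% Laplace variable; R: diagonal matrix of aerodynamic lag roots;
% D, E: low-rank coupling matrices fitted to the tabular data):
\tilde{Q}(\bar{s}) = A_0 + A_1\,\bar{s} + A_2\,\bar{s}^{2}
                   + D\,\bigl(\bar{s}\,I - R\bigr)^{-1} E\,\bar{s}
```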

  4. Extending Quantum Chemistry of Bound States to Electronic Resonances

    NASA Astrophysics Data System (ADS)

    Jagau, Thomas-C.; Bravaya, Ksenia B.; Krylov, Anna I.

    2017-05-01

    Electronic resonances are metastable states with finite lifetime embedded in the ionization or detachment continuum. They are ubiquitous in chemistry, physics, and biology. Resonances play a central role in processes as diverse as DNA radiolysis, plasmonic catalysis, and attosecond spectroscopy. This review describes novel equation-of-motion coupled-cluster (EOM-CC) methods designed to treat resonances and bound states on an equal footing. Built on complex-variable techniques such as complex scaling and complex absorbing potentials that allow resonances to be associated with a single eigenstate of the molecular Hamiltonian rather than several continuum eigenstates, these methods extend electronic-structure tools developed for bound states to electronic resonances. Selected examples emphasize the formal advantages as well as the numerical accuracy of EOM-CC in the treatment of electronic resonances. Connections to experimental observables such as spectra and cross sections, as well as practical aspects of implementing complex-valued approaches, are also discussed.
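
    Two of the complex-variable techniques named above can be summarized compactly; these are the standard textbook forms rather than anything specific to this review.

```latex
% Complex scaling: dilating the coordinates, r -> r e^{i\theta}, turns a
% resonance into a single square-integrable eigenstate with complex energy
E_{\mathrm{res}} = E_R - \tfrac{i}{2}\,\Gamma ,
% where E_R is the resonance position and \Gamma its width (inverse lifetime).
% Complex absorbing potential (CAP): augment the Hamiltonian as
H(\eta) = H - i\eta\,W ,
% with W a real absorbing potential; resonance parameters are recovered from
% the \eta-trajectory of the eigenvalues.
```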

  5. Resting-State Functional Connectivity in Autism Spectrum Disorders: A Review

    PubMed Central

    Hull, Jocelyn V.; Jacokes, Zachary J.; Torgerson, Carinna M.; Irimia, Andrei; Van Horn, John Darrell

    2017-01-01

    Ongoing debate exists within the resting-state functional MRI (fMRI) literature over how intrinsic connectivity is altered in the autistic brain, with reports of general over-connectivity, under-connectivity, and/or a combination of both. Classifying autism using brain connectivity is complicated by the heterogeneous nature of the condition, allowing for the possibility of widely variable connectivity patterns among individuals with the disorder. Further differences in reported results may be attributable to the age and sex of participants included, the design of the resting-state scan, and the analysis technique used to evaluate the data. This review systematically examines the resting-state fMRI autism literature to date and compares studies in an attempt to draw overall conclusions, which remains challenging at present. We also propose future directions for rs-fMRI use to categorize individuals with autism spectrum disorder, to serve as a possible diagnostic tool, and to best utilize data-sharing initiatives. PMID:28101064

  6. Improving national-scale invasion maps: Tamarisk in the western United States

    USGS Publications Warehouse

    Jarnevich, C.S.; Evangelista, P.; Stohlgren, T.J.; Morisette, J.

    2011-01-01

    New invasions, better field data, and novel spatial-modeling techniques often drive the need to revisit previous maps and models of invasive species. Such is the case with the at least 10 species of Tamarix, which are invading riparian systems in the western United States and expanding their range throughout North America. In 2006, we developed a National Tamarisk Map by using a compilation of presence and absence locations with remotely sensed data and statistical modeling techniques. Since the publication of that work, our database of Tamarix distributions has grown significantly. Using the updated database of species occurrence, new predictor variables, and the maximum entropy (Maxent) model, we have revised our potential Tamarix distribution map for the western United States. Distance-to-water was the strongest predictor in the model (58.1%), while mean temperature of the warmest quarter was the second best predictor (18.4%). Model validation, averaged from 25 model iterations, indicated that our analysis had strong predictive performance (AUC = 0.93) and that the extent of Tamarix distributions is much greater than previously thought. The southwestern United States had the greatest suitable habitat, and this result differed from the 2006 model. Our work highlights the utility of iterative modeling for invasive species habitat modeling as new information becomes available. © 2011.

  7. Uncertainty of Videogrammetric Techniques used for Aerodynamic Testing

    NASA Technical Reports Server (NTRS)

    Burner, A. W.; Liu, Tianshu; DeLoach, Richard

    2002-01-01

    The uncertainty of videogrammetric techniques used for the measurement of static aeroelastic wind tunnel model deformation and wind tunnel model pitch angle is discussed. Sensitivity analyses and geometrical considerations of uncertainty are augmented by analyses of experimental data in which videogrammetric angle measurements were taken simultaneously with precision servo accelerometers corrected for dynamics. An analysis of variance (ANOVA) to examine error dependence on angle of attack, on the sensor used (inertial or optical), and on tunnel state variables such as Mach number is presented. Experimental comparisons with a high-accuracy indexing table are presented. Small roll angles are found to introduce a zero-shift in the measured angles. It is shown experimentally that, provided the proper constraints necessary for a solution are met, a single-camera solution can be comparable to a two-camera intersection result. The relative immunity of optical techniques to dynamics is illustrated.

  8. New efficient optimizing techniques for Kalman filters and numerical weather prediction models

    NASA Astrophysics Data System (ADS)

    Famelis, Ioannis; Galanis, George; Liakatas, Aristotelis

    2016-06-01

    The need for accurate local environmental predictions and simulations beyond classical meteorological forecasts has been increasing in recent years due to the great number of applications that are directly or indirectly affected: renewable energy resource assessment, natural hazards early warning systems, and questions on global warming and climate change can be listed among them. Within this framework, the utilization of numerical weather and wave prediction systems in conjunction with advanced statistical techniques that support the elimination of the model bias and the reduction of the error variability may successfully address the above issues. In the present work, new optimization methods are studied and tested in selected areas of Greece where the use of renewable energy sources is of critical importance. The added value of the proposed work lies in the solid mathematical background adopted, making use of Information Geometry and statistical techniques, new versions of Kalman filters, and state-of-the-art numerical analysis tools.
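
    As a minimal illustration of Kalman-filter-based bias elimination, the sketch below tracks a scalar random-walk bias in the forecast error; the cited work uses richer, nonlinear state vectors, so everything here is a simplifying assumption.

```python
import numpy as np

def kf_bias_correction(forecasts, observations, q=1e-4, r=1.0):
    """Minimal scalar Kalman filter that tracks the systematic forecast bias
    b_t (random-walk state) from the error sequence y_t = obs - forecast.

    q, r: process/observation noise variances (tuning parameters).
    Returns the bias-corrected forecasts.
    """
    b, p = 0.0, 1.0              # initial bias estimate and its variance
    corrected = []
    for f, o in zip(forecasts, observations):
        corrected.append(f + b)  # correct with the current bias estimate
        p += q                   # predict (random-walk state)
        k = p / (p + r)          # Kalman gain
        b += k * ((o - f) - b)   # update with the newly observed error
        p *= (1.0 - k)
    return np.array(corrected)
```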

  9. Perceptions of Voice Teachers Regarding Students' Vocal Behaviors During Singing and Speaking.

    PubMed

    Beeman, Shellie A

    2017-01-01

    This study examined voice teachers' perceptions of their instruction of healthy singing and speaking voice techniques. An online, researcher-generated questionnaire based on the McClosky technique was administered to college/university voice teachers listed as members in the 2012-2013 College Music Society directory. A majority of participants believed there to be a relationship between the health of the singing voice and the health of the speaking voice. Participants' perception scores were the most positive for variable MBSi, the monitoring of students' vocal behaviors during singing. Perception scores for variable TVB, the teaching of healthy vocal behaviors, and variable MBSp, the monitoring of students' vocal behaviors while speaking, ranked second and third, respectively. Perception scores for variable TVB were primarily associated with participants' familiarity with voice rehabilitation techniques, gender, and familiarity with the McClosky technique. Perception scores for variable MBSi were primarily associated with participants' familiarity with voice rehabilitation techniques, gender, type of student taught, and instruction of a student with a voice disorder. Perception scores for variable MBSp were correlated with the greatest number of characteristics, including participants' familiarity with voice rehabilitation techniques, familiarity with the McClosky technique, type of student taught, years of teaching experience, and instruction of a student with a voice disorder. Voice teachers are purportedly working with injured voices and attempting to include vocal health in their instruction. Although a voice teacher is not obligated to pursue further rehabilitative training, the current study revealed a positive relationship between familiarity with specific rehabilitation techniques and vocal health. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  10. Nonstationary time series prediction combined with slow feature analysis

    NASA Astrophysics Data System (ADS)

    Wang, G.; Chen, X.

    2015-01-01

    Almost all climate time series have some degree of nonstationarity due to external driving forces perturbing the observed system. Therefore, these external driving forces should be taken into account when reconstructing the climate dynamics. This paper presents a new technique in which the driving force of a time series is first extracted using the Slow Feature Analysis (SFA) approach and then introduced into a predictive model to predict non-stationary time series. In essence, the main idea of the technique is to consider the driving forces as state variables and incorporate them into the prediction model. To test the method, experiments using a modified logistic time series and winter ozone data in Arosa, Switzerland, were conducted. The results showed improved and effective prediction skill.
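
    A linear version of the SFA step is compact enough to sketch: whiten the (possibly time-delay-embedded) signal, then take the direction whose time derivative has the least variance; that projection serves as the estimated driving force. The embedding and the downstream prediction model are omitted, so this follows the paper only loosely.

```python
import numpy as np

def slow_feature(x):
    """Linear SFA: return the slowest-varying linear feature of signal x.

    x: array (T, d), e.g. after time-delay embedding. The slowest feature is
    the direction that, after whitening, minimizes the variance of the time
    derivative: the smallest eigenvector of cov(dz/dt).
    """
    x = x - x.mean(axis=0)
    vals, vecs = np.linalg.eigh(np.cov(x.T))
    W = vecs / np.sqrt(vals)          # whitening matrix, cov(x @ W) = I
    z = x @ W
    dz = np.diff(z, axis=0)
    dvals, dvecs = np.linalg.eigh(np.cov(dz.T))
    w = dvecs[:, 0]                   # smallest derivative-variance direction
    return z @ w                      # slow feature ~ estimated driving force
```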

  11. Communication system modeling

    NASA Technical Reports Server (NTRS)

    Holland, L. D.; Walsh, J. R., Jr.; Wetherington, R. D.

    1971-01-01

    This report presents the results of work on communications systems modeling and covers three different areas of modeling. The first of these deals with the modeling of signals in communication systems in the frequency domain and the calculation of spectra for various modulations. These techniques are applied in determining the frequency spectra produced by a unified carrier system, the down-link portion of the Command and Communications System (CCS). The second modeling area covers the modeling of portions of a communication system on a block basis. A detailed analysis and modeling effort based on control theory is presented along with its application to modeling of the automatic frequency control system of an FM transmitter. A third topic discussed is a method for approximate modeling of stiff systems using state variable techniques.

  12. On the adaptive sliding mode controller for a hyperchaotic fractional-order financial system

    NASA Astrophysics Data System (ADS)

    Hajipour, Ahamad; Hajipour, Mojtaba; Baleanu, Dumitru

    2018-05-01

    This manuscript mainly focuses on the construction, dynamic analysis and control of a new fractional-order financial system. The basic dynamical behaviors of the proposed system are studied such as the equilibrium points and their stability, Lyapunov exponents, bifurcation diagrams, phase portraits of state variables and the intervals of system parameters. It is shown that the system exhibits hyperchaotic behavior for a number of system parameters and fractional-order values. To stabilize the proposed hyperchaotic fractional system with uncertain dynamics and disturbances, an efficient adaptive sliding mode controller technique is developed. Using the proposed technique, two hyperchaotic fractional-order financial systems are also synchronized. Numerical simulations are presented to verify the successful performance of the designed controllers.

  13. Plasticity models of material variability based on uncertainty quantification techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, Reese E.; Rizzi, Francesco; Boyce, Brad

    The advent of fabrication techniques like additive manufacturing has focused attention on the considerable variability of material response due to defects and other micro-structural aspects. This variability motivates the development of an enhanced design methodology that incorporates inherent material variability to provide robust predictions of performance. In this work, we develop plasticity models capable of representing the distribution of mechanical responses observed in experiments, using traditional plasticity models of the mean response and recently developed uncertainty quantification (UQ) techniques. Lastly, we demonstrate that the new method provides predictive realizations that are superior to more traditional ones, and show how these UQ techniques can be used in model selection and in assessing the quality of calibrated physical parameters.

  14. Ideal, nonideal, and no-marker variables: The confirmatory factor analysis (CFA) marker technique works when it matters.

    PubMed

    Williams, Larry J; O'Boyle, Ernest H

    2015-09-01

    A persistent concern in the management and applied psychology literature is the effect of common method variance on observed relations among variables. Recent work (i.e., Richardson, Simmering, & Sturman, 2009) evaluated 3 analytical approaches to controlling for common method variance, including the confirmatory factor analysis (CFA) marker technique. Their findings indicated significant problems with this technique, especially with nonideal marker variables (those with theoretical relations with substantive variables). Based on their simulation results, Richardson et al. concluded that not correcting for method variance provides more accurate estimates than using the CFA marker technique. We reexamined the effects of using marker variables in a simulation study and found the degree of error in estimates of a substantive factor correlation was relatively small in most cases, and much smaller than the error associated with making no correction. Further, in instances in which the error was large, the correlations between the marker and substantive scales were higher than those found in organizational research with marker variables. We conclude that in most practical settings, the CFA marker technique yields parameter estimates close to their true values, and the criticisms made by Richardson et al. are overstated. (c) 2015 APA, all rights reserved.

  15. A FORTRAN technique for correlating a circular environmental variable with a linear physiological variable in the sugar maple.

    PubMed

    Pease, J M; Morselli, M F

    1987-01-01

    This paper deals with a computer program adapted to a statistical method for analyzing an unlimited quantity of binary recorded data of an independent circular variable (e.g. wind direction) and a linear variable (e.g. maple sap flow volume). Circular variables cannot be statistically analyzed with linear methods unless they have been transformed. The program calculates a critical quantity, the acrophase angle (Φ, φ0). The technique is adapted from original mathematics [1] and is written in Fortran 77 for easier conversion between computer networks. Correlation analysis can be performed following the program, or regression, which, because of the circular nature of the independent variable, becomes periodic regression. The technique was tested on a file of approximately 4050 data pairs.
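
    The acrophase computation can be illustrated with a periodic (cosinor-style) regression that fits y = M + A cos(theta - phi) by least squares. The sketch below is in Python rather than Fortran 77 and runs on synthetic data, so it mirrors the idea rather than the published program.

```python
import numpy as np

def acrophase(theta, y):
    """Periodic regression of a linear variable y on a circular variable
    theta (radians): y ~ M + A*cos(theta - phi).

    Returns mesor M, amplitude A, and the acrophase angle phi.
    """
    X = np.column_stack([np.ones_like(theta), np.cos(theta), np.sin(theta)])
    (m, b, c), *_ = np.linalg.lstsq(X, y, rcond=None)
    return m, np.hypot(b, c), np.arctan2(c, b)

# Hypothetical example: wind direction (circular) vs sap-flow volume (linear)
rng = np.random.default_rng(1)
wind = rng.uniform(0, 2 * np.pi, 200)
sap = 5 + 2 * np.cos(wind - np.pi / 3) + 0.3 * rng.standard_normal(200)
M, A, phi = acrophase(wind, sap)
print(f"mesor={M:.2f}, amplitude={A:.2f}, acrophase={np.degrees(phi):.1f} deg")
```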

  16. A Statistical Methodology for Detecting and Monitoring Change in Forest Ecosystems Using Remotely Sensed Imagery

    NASA Astrophysics Data System (ADS)

    Mills, R. T.; Kumar, J.; Hoffman, F. M.; Hargrove, W. W.; Spruce, J.

    2011-12-01

    Variations in vegetation phenology, the annual temporal pattern of leaf growth and senescence, can be a strong indicator of ecological change or disturbance. However, phenology is also strongly influenced by seasonal, interannual, and long-term trends in climate, making identification of changes in forest ecosystems a challenge. Forest ecosystems are vulnerable to extreme weather events, insect and disease attacks, wildfire, harvesting, and other land use change. Normalized difference vegetation index (NDVI), a remotely sensed measure of greenness, provides a proxy for phenology. NDVI for the conterminous United States (CONUS) derived from the Moderate Resolution Spectroradiometer (MODIS) at 250 m resolution was used in this study to develop phenological signatures of ecological regimes called phenoregions. By applying a quantitative data mining technique to the NDVI measurements for every eight days over the entire MODIS record, annual maps of phenoregions were developed. This geospatiotemporal cluster analysis technique employs high performance computing resources, enabling analysis of such very large data sets. This technique produces a prescribed number of prototypical phenological states to which every location belongs in any year. Analysis of the shifts among phenological states yields information about responses to interannual climate variability and, more importantly, changes in ecosystem health due to disturbances. Moreover, a large change in the phenological states occupied by a single location over time indicates a significant disturbance or ecological shift. This methodology has been applied for identification of various forest disturbance events, including wildfire, tree mortality due to Mountain Pine Beetle, and other insect infestation and diseases, as well as extreme events like storms and hurricanes in the U.S. Presented will be results from analysis of phenological state dynamics, along with disturbance and validation data.
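
    The clustering step lends itself to a compact sketch; ordinary mini-batch k-means stands in here for the paper's high-performance geospatiotemporal clustering, and the array shapes and choice of k are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans

def phenoregions(ndvi_series, k=50, seed=0):
    """Sketch of the clustering step: each map cell contributes a feature
    vector of its 8-day NDVI values for one year; k-means yields k
    prototypical phenological states, and each cell/year is labeled with
    its nearest state.

    ndvi_series: array (n_cells, 46) of 8-day NDVI values for one year.
    """
    km = MiniBatchKMeans(n_clusters=k, random_state=seed).fit(ndvi_series)
    return km.labels_, km.cluster_centers_
```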

  17. Geospatial Modeling of Asthma Population in Relation to Air Pollution

    NASA Technical Reports Server (NTRS)

    Kethireddy, Swatantra R.; Tchounwou, Paul B.; Young, John H.; Luvall, Jeffrey C.; Alhamdan, Mohammad

    2013-01-01

    Current observations indicate that asthma is growing every year in the United States, specific reasons for this are not well understood. This study stems from an ongoing research effort to investigate the spatio-temporal behavior of asthma and its relatedness to air pollution. The association between environmental variables such as air quality and asthma related health issues over Mississippi State are investigated using Geographic Information Systems (GIS) tools and applications. Health data concerning asthma obtained from Mississippi State Department of Health (MSDH) for 9-year period of 2003-2011, and data of air pollutant concentrations (PM2.5) collected from USEPA web resources, and are analyzed geospatially to establish the impacts of air quality on human health specifically related to asthma. Disease mapping using geospatial techniques provides valuable insights into the spatial nature, variability, and association of asthma to air pollution. Asthma patient hospitalization data of Mississippi has been analyzed and mapped using quantitative Choropleth techniques in ArcGIS. Patients have been geocoded to their respective zip codes. Potential air pollutant sources of Interstate highways, Industries, and other land use data have been integrated in common geospatial platform to understand their adverse contribution on human health. Existing hospitals and emergency clinics are being injected into analysis to further understand their proximity and easy access to patient locations. At the current level of analysis and understanding, spatial distribution of Asthma is observed in the populations of Zip code regions in gulf coast, along the interstates of south, and in counties of Northeast Mississippi. It is also found that asthma is prevalent in most of the urban population. This GIS based project would be useful to make health risk assessment and provide information support to the administrators and decision makers for establishing satellite clinics in future.

  18. DOING Astronomy Research in High Schools.

    NASA Astrophysics Data System (ADS)

    Nook, M. A.; Williams, D. L.

    2000-12-01

    A collaboration between six science teachers at five central Minnesota high schools and astronomers at St. Cloud State University designed and implemented a program to involve high school students in active observational astronomy research. The emphasis of the program is to engage students and teachers in a research project that allows them to better understand the nature of scientific endeavor. Small, computerized telescopes and CCD cameras make it possible for high schools to develop astronomical research programs where the process of science can be experienced first hand. Each school obtained an 8-inch or 10-inch computerized SCT and a CCD camera or SLR. Astronomers from St. Cloud State University (SCSU) trained the teachers in proper astronomical techniques, as well as helping to establish the goals and objectives of the research projects. Each high school instructor trained students in observing and data reduction techniques and served as the research director for their school's project. Student observations continued throughout the school year concluding in the spring, 2000. A Variable Star Symposium was held May 20, 2000 as a culminating event. Each student involved in the process was invited to attend and give a presentation on the results of their research on variable stars. The symposium included an invited talk by a professional astronomer, and student oral and poster presentations. The research is continuing in all five of the original high schools. Eight additional schools have expressed interest in this program and are becoming involved in developing their research programs. This work is supported by Toyota Motor Sales, USA, Inc. and administered by the National Science Teachers Association through a 1999 Toyota TAPESTRY Grant and by St. Cloud State University and Independent School District 742, St. Cloud, MN.

  19. Tropical Ocean Surface Energy Balance Variability: Linking Weather to Climate Scales

    NASA Technical Reports Server (NTRS)

    Roberts, J. Brent; Clayson, Carol Anne

    2013-01-01

    Radiative and turbulent surface exchanges of heat and moisture across the atmosphere-ocean interface are fundamental components of the Earth's energy and water balance. Characterizing the spatiotemporal variability of these exchanges of heat and moisture is critical to understanding the global water and energy cycle variations, quantifying atmosphere-ocean feedbacks, and improving model predictability. These fluxes are integral components of tropical ocean-atmosphere variability; they can drive ocean mixed layer variations and modify the atmospheric boundary layer properties including moist static stability, thereby influencing larger-scale tropical dynamics. Non-parametric cluster-based classification of atmospheric and ocean surface properties has shown an ability to identify coherent weather regimes, each typically associated with similar properties and processes. Using satellite-based observational radiative and turbulent energy flux products, this study investigates the relationship between these weather states and surface energy processes within the context of tropical climate variability. Investigations of surface energy variations accompanying intraseasonal and interannual tropical variability often use composite-based analyses of the mean quantities of interest. Here, a similar compositing technique is employed, but the focus is on the distribution of the heat and moisture fluxes within their weather regimes. Are the observed changes in surface energy components dominated by changes in the frequency of the weather regimes or by changes in the associated fluxes within those regimes? It is this question that the presented work intends to address. The distribution of the surface heat and moisture fluxes is evaluated for both normal and non-normal states. By examining both phases of the climatic oscillations, the symmetry of the energy and water cycle responses is considered.

  20. Random Assignment of Schools to Groups in the Drug Resistance Strategies Rural Project: Some New Methodological Twists

    PubMed Central

    Pettigrew, Jonathan; Miller-Day, Michelle; Krieger, Janice L.; Zhou, Jiangxiu; Hecht, Michael L.

    2014-01-01

    Random assignment to groups is the foundation for scientifically rigorous clinical trials. But assignment is challenging in group randomized trials when only a few units (schools) are assigned to each condition. In the DRSR project, we assigned 39 rural Pennsylvania and Ohio schools to three conditions (rural, classic, control). But even with 13 schools per condition, achieving pretest equivalence on important variables is not guaranteed. We collected data on six important school-level variables: rurality, number of grades in the school, enrollment per grade, percent white, percent receiving free/assisted lunch, and test scores. Key to our procedure was the inclusion of school-level drug use data, available for a subset of the schools. Also, key was that we handled the partial data with modern missing data techniques. We chose to create one composite stratifying variable based on the seven school-level variables available. Principal components analysis with the seven variables yielded two factors, which were averaged to form the composite inflate-suppress (CIS) score which was the basis of stratification. The CIS score was broken into three strata within each state; schools were assigned at random to the three program conditions from within each stratum, within each state. Results showed that program group membership was unrelated to the CIS score, the two factors making up the CIS score, and the seven items making up the factors. Program group membership was not significantly related to pretest measures of drug use (alcohol, cigarettes, marijuana, chewing tobacco; smallest p>.15), thus verifying that pretest equivalence was achieved. PMID:23722619
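
    A compact sketch of the assignment procedure, with PCA standing in for the paper's principal components step and hypothetical array names; missing-data imputation is assumed to have been done beforehand.

```python
import numpy as np
from sklearn.decomposition import PCA

def assign_schools(X, states, n_conditions=3, seed=42):
    """Composite-score stratified randomization: PCA on the standardized
    school-level variables, average the first two component scores into a
    composite (cf. the paper's CIS score), split into tertiles within each
    state, and randomize to conditions within each stratum.

    X: (n_schools, n_vars) school-level variables; states: (n_schools,) labels.
    """
    rng = np.random.default_rng(seed)
    Z = (X - X.mean(0)) / X.std(0)
    cis = PCA(n_components=2).fit_transform(Z).mean(axis=1)
    assign = np.empty(len(cis), dtype=int)
    for st in np.unique(states):
        idx = np.where(states == st)[0]
        tert = np.digitize(cis[idx], np.quantile(cis[idx], [1 / 3, 2 / 3]))
        for t in range(3):
            members = idx[tert == t]
            conds = np.resize(np.arange(n_conditions), members.size)
            assign[members] = rng.permutation(conds)
    return assign
```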

  1. Locomotor-respiratory coupling patterns and oxygen consumption during walking above and below preferred stride frequency.

    PubMed

    O'Halloran, Joseph; Hamill, Joseph; McDermott, William J; Remelius, Jebb G; Van Emmerik, Richard E A

    2012-03-01

    Locomotor respiratory coupling patterns in humans have been assessed on the basis of the interaction between different physiological and motor subsystems; these interactions have implications for movement economy. A complex and dynamical systems framework may provide more insight than entrainment into the variability and adaptability of these rhythms and their coupling. The purpose of this study was to investigate the relationship between steady state locomotor-respiratory coordination dynamics and oxygen consumption (VO2) of the movement by varying walking stride frequency from preferred. Twelve male participants walked on a treadmill at a self-selected speed. Stride frequency was varied from -20 to +20% of preferred stride frequency (PSF) while respiratory airflow, gas exchange variables, and stride kinematics were recorded. Discrete relative phase and return map techniques were used to evaluate the strength, stability, and variability of both frequency and phase couplings. Analysis of VO2 during steady-state walking showed a U-shaped response (P = 0.002) with a minimum at PSF and PSF - 10%. Locomotor-respiratory frequency coupling strength was not greater (P = 0.375) at PSF than at any other stride frequency condition. The dominant coupling across all conditions was 2:1 with greater occurrences at the lower stride frequencies. Variability in coupling was the greatest during PSF, indicating an exploration of coupling strategies to search for the coupling frequency strategy with the least oxygen consumption. Contrary to the belief that increased strength of frequency coupling would decrease oxygen consumption, these results indicate that it is the increased variability of frequency coupling that results in lower oxygen consumption.

  2. Hybrid state vector methods for structural dynamic and aeroelastic boundary value problems

    NASA Technical Reports Server (NTRS)

    Lehman, L. L.

    1982-01-01

    A computational technique is developed that is suitable for performing preliminary design aeroelastic and structural dynamic analyses of large aspect ratio lifting surfaces. The method proves to be quite general and can be adapted to solving various two point boundary value problems. The solution method, which is applicable to both fixed and rotating wing configurations, is based upon a formulation of the structural equilibrium equations in terms of a hybrid state vector containing generalized force and displacement variables. A mixed variational formulation is presented that conveniently yields a useful form for these state vector differential equations. Solutions to these equations are obtained by employing an integrating matrix method. The application of an integrating matrix provides a discretization of the differential equations that only requires solutions of standard linear matrix systems. It is demonstrated that matrix partitioning can be used to reduce the order of the required solutions. Results are presented for several example problems in structural dynamics and aeroelasticity to verify the technique and to demonstrate its use. These problems examine various types of loading and boundary conditions and include aeroelastic analyses of lifting surfaces constructed from anisotropic composite materials.

  3. Security of Continuous-Variable Quantum Key Distribution via a Gaussian de Finetti Reduction

    NASA Astrophysics Data System (ADS)

    Leverrier, Anthony

    2017-05-01

    Establishing the security of continuous-variable quantum key distribution against general attacks in a realistic finite-size regime is an outstanding open problem in the field of theoretical quantum cryptography if we restrict our attention to protocols that rely on the exchange of coherent states. Indeed, techniques based on the uncertainty principle are not known to work for such protocols, and the usual tools based on de Finetti reductions only provide security for unrealistically large block lengths. We address this problem here by considering a new type of Gaussian de Finetti reduction that exploits the invariance of some continuous-variable protocols under the action of the unitary group U(n) (instead of the symmetric group S_n as in usual de Finetti theorems), and by introducing generalized SU(2,2) coherent states. Crucially, combined with an energy test, this allows us to truncate the Hilbert space globally instead of at the single-mode level as in previous approaches that failed to provide security in realistic conditions. Our reduction shows that it is sufficient to prove the security of these protocols against Gaussian collective attacks in order to obtain security against general attacks, thereby confirming rigorously the widely held belief that Gaussian attacks are indeed optimal against such protocols.

  4. Security of Continuous-Variable Quantum Key Distribution via a Gaussian de Finetti Reduction.

    PubMed

    Leverrier, Anthony

    2017-05-19

    Establishing the security of continuous-variable quantum key distribution against general attacks in a realistic finite-size regime is an outstanding open problem in the field of theoretical quantum cryptography if we restrict our attention to protocols that rely on the exchange of coherent states. Indeed, techniques based on the uncertainty principle are not known to work for such protocols, and the usual tools based on de Finetti reductions only provide security for unrealistically large block lengths. We address this problem here by considering a new type of Gaussian de Finetti reduction that exploits the invariance of some continuous-variable protocols under the action of the unitary group U(n) (instead of the symmetric group S_{n} as in usual de Finetti theorems), and by introducing generalized SU(2,2) coherent states. Crucially, combined with an energy test, this allows us to truncate the Hilbert space globally instead of at the single-mode level as in previous approaches that failed to provide security in realistic conditions. Our reduction shows that it is sufficient to prove the security of these protocols against Gaussian collective attacks in order to obtain security against general attacks, thereby confirming rigorously the widely held belief that Gaussian attacks are indeed optimal against such protocols.

  5. Aerosol Drug Delivery During Noninvasive Positive Pressure Ventilation: Effects of Intersubject Variability and Excipient Enhanced Growth

    PubMed Central

    Walenga, Ross L.; Kaviratna, Anubhav; Hindle, Michael

    2017-01-01

    Background: Nebulized aerosol drug delivery during the administration of noninvasive positive pressure ventilation (NPPV) is commonly implemented. While studies have shown improved patient outcomes for this therapeutic approach, aerosol delivery efficiency is reported to be low with high variability in lung-deposited dose. Excipient enhanced growth (EEG) aerosol delivery is a newly proposed technique that may improve drug delivery efficiency and reduce intersubject aerosol delivery variability when coupled with NPPV. Materials and Methods: A combined approach using in vitro experiments and computational fluid dynamics (CFD) was used to characterize aerosol delivery efficiency during NPPV in two new nasal cavity models that include face mask interfaces. Mesh nebulizer and in-line dry powder inhaler (DPI) sources of conventional and EEG aerosols were both considered. Results: Based on validated steady-state CFD predictions, EEG aerosol delivery improved lung penetration fraction (PF) values by factors ranging from 1.3 to 6.4 compared with conventional-sized aerosols. Furthermore, intersubject variability in lung PF was very high for conventional aerosol sizes (relative differences between subjects in the range of 54.5%–134.3%) and was reduced by an order of magnitude with the EEG approach (relative differences between subjects in the range of 5.5%–17.4%). Realistic in vitro experiments of cyclic NPPV demonstrated similar trends in lung delivery to those observed with the steady-state simulations, but with lower lung delivery efficiencies. Reaching the lung delivery efficiencies reported with the steady-state simulations of 80%–90% will require synchronization of aerosol administration during inspiration and reducing the size of the EEG aerosol delivery unit. Conclusions: The EEG approach enabled high-efficiency lung delivery of aerosols administered during NPPV and reduced intersubject aerosol delivery variability by an order of magnitude. Use of an in-line DPI device that connects to the NPPV mask appears to be a convenient method to rapidly administer an EEG aerosol and synchronize the delivery with inspiration. PMID:28075194

  6. Matrix completion by deep matrix factorization.

    PubMed

    Fan, Jicong; Cheng, Jieyu

    2018-02-01

    Conventional methods of matrix completion are linear methods that are not effective in handling data of nonlinear structures. Recently a few researchers attempted to incorporate nonlinear techniques into matrix completion, but considerable limitations remain. In this paper, a novel method called deep matrix factorization (DMF) is proposed for nonlinear matrix completion. Different from conventional matrix completion methods that are based on linear latent variable models, DMF is based on a nonlinear latent variable model. DMF is formulated as a deep-structure neural network, in which the inputs are the low-dimensional unknown latent variables and the outputs are the partially observed variables. In DMF, the inputs and the parameters of the multilayer neural network are simultaneously optimized to minimize the reconstruction errors for the observed entries. Then the missing entries can be readily recovered by propagating the latent variables to the output layer. DMF is compared with state-of-the-art methods of linear and nonlinear matrix completion in the tasks of toy matrix completion, image inpainting and collaborative filtering. The experimental results verify that DMF is able to provide higher matrix completion accuracy than existing methods and that DMF is applicable to large matrices. Copyright © 2017 Elsevier Ltd. All rights reserved.
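
    The DMF idea, a decoder network whose low-dimensional inputs are themselves free parameters, fits in a few lines of PyTorch. The architecture, optimizer, and hyperparameters below are illustrative assumptions rather than the authors' configuration.

```python
import torch

def deep_matrix_factorization(M, mask, d=5, hidden=64, steps=2000, lr=1e-2):
    """Sketch of DMF: a decoder network maps low-dimensional latent variables
    (one per column, also optimized) to the observed rows; latent codes and
    weights are trained only on observed entries, and the trained decoder
    fills in the missing ones.

    M    : (m, n) float tensor, arbitrary values at unobserved positions
    mask : (m, n) bool tensor, True where M is observed
    """
    m, n = M.shape
    Z = torch.randn(n, d, requires_grad=True)       # latent variables
    net = torch.nn.Sequential(                      # nonlinear decoder
        torch.nn.Linear(d, hidden), torch.nn.Tanh(),
        torch.nn.Linear(hidden, m))
    opt = torch.optim.Adam([Z, *net.parameters()], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        X = net(Z).T                                # reconstructed matrix
        loss = ((X - M)[mask] ** 2).mean()          # observed entries only
        loss.backward()
        opt.step()
    return net(Z).T.detach()                        # completed matrix
```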

  7. Comparison of heart rate variability and pulse rate variability detected with photoplethysmography

    NASA Astrophysics Data System (ADS)

    Rauh, Robert; Limley, Robert; Bauer, Rainer-Dieter; Radespiel-Troger, Martin; Mueck-Weymann, Michael

    2004-08-01

    This study compares ear photoplethysmography (PPG) and electrocardiography (ECG) in providing accurate heart beat intervals for use in calculations of heart rate variability (HRV, from ECG) and pulse rate variability (PRV, from PPG), respectively. Simultaneous measurements were taken from 44 healthy subjects at rest during spontaneous breathing and during forced metronomic breathing (6/min). Under both conditions, highly significant (p < 0.001) correlations (1.0 > r > 0.97) were found between all evaluated common HRV and PRV parameters. However, under both conditions the PRV parameters were higher than HRV. In addition, we calculated the limits of agreement according to Bland and Altman between both techniques and found good agreement (< 10% difference) for heart rate and the standard deviation of normal-to-normal intervals (SDNN), but only moderate (10-20%) or even insufficient (> 20%) agreement for other standard HRV and PRV parameters. Thus, PRV data seem to be acceptable for screening purposes but, at least at the current state of knowledge, not for medical decision making. However, further studies are needed before a more certain determination can be made.
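
    The Bland-Altman limits of agreement used for the comparison are simply the mean difference plus or minus 1.96 standard deviations of the differences; a minimal sketch:

```python
import numpy as np

def bland_altman_limits(hrv_ecg, prv_ppg):
    """Limits of agreement between a parameter computed from ECG (HRV) and
    from PPG (PRV): mean difference +/- 1.96 SD of the differences.
    """
    a, b = np.asarray(hrv_ecg, float), np.asarray(prv_ppg, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```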

  8. Impacts analysis of car following models considering variable vehicular gap policies

    NASA Astrophysics Data System (ADS)

    Xin, Qi; Yang, Nan; Fu, Rui; Yu, Shaowei; Shi, Zhongke

    2018-07-01

    Because of the important role they play in vehicles' adaptive cruise control systems, variable vehicular gap policies were applied to the full velocity difference model (FVDM) to investigate traffic flow properties. In this paper, two new car-following models are put forward by separately incorporating the constant time headway (CTH) policy and the variable time headway (VTH) policy into the optimal velocity function. Through steady-state analysis of the new models, an equivalent optimal velocity function is defined. To determine the linear stability conditions of the new models, we introduce equivalent expressions for the safe vehicular gap and then apply small-amplitude perturbation analysis and long-wave expansion techniques. Additionally, first-order approximate solutions of the new models are derived in the stable region by transforming the models into typical Burgers partial differential equations with the reductive perturbation method. FVDM-based numerical simulations indicate that variable vehicular gap policies with proper parameters directly contribute to improving the stability of traffic flow and avoiding unstable traffic phenomena.
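
    For orientation, the base model and the two gap policies take roughly the following forms; the FVDM equation is standard, while the specific CTH and VTH expressions shown are common forms assumed here for illustration, not the paper's exact parameterization.

```latex
% Full velocity difference model (a: sensitivity, \lambda: velocity-difference
% gain, \Delta x_n: gap to the leader, \Delta v_n: velocity difference):
\dot{v}_n(t) = a\,[\,V(\Delta x_n(t)) - v_n(t)\,] + \lambda\,\Delta v_n(t)
% Constant time headway (CTH) policy: desired gap grows linearly with speed,
s^{*}_{\mathrm{CTH}}(v_n) = s_0 + T\,v_n
% Variable time headway (VTH) policy: the headway itself adapts to the
% relative speed (one common form),
s^{*}_{\mathrm{VTH}}(v_n, \Delta v_n) = s_0 + (t_0 - c\,\Delta v_n)\,v_n
```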

  9. Volume 2: Explicit, multistage upwind schemes for Euler and Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Elmiligui, Alaa; Ash, Robert L.

    1992-01-01

    The objective of this study was to develop a high-resolution, explicit, multi-block numerical algorithm suitable for efficient computation of the three-dimensional, time-dependent Euler and Navier-Stokes equations. The resulting algorithm employs a finite volume approach, using monotonic upstream schemes for conservation laws (MUSCL)-type differencing to obtain state variables at cell interfaces. Variable interpolations were written in the kappa-scheme formulation. Inviscid fluxes were calculated via Roe's flux-difference splitting and van Leer's flux-vector splitting techniques, which are considered state of the art. The viscous terms were discretized using a second-order, central-difference operator. Two classes of explicit time integration have been investigated for solving the compressible inviscid/viscous flow problems: two-stage predictor-corrector schemes and multistage time-stepping schemes. The coefficients of the multistage time-stepping schemes have been modified successfully to achieve better performance with upwind differencing. A technique was developed to optimize the coefficients for good high-frequency damping at relatively high CFL numbers. Local time-stepping, implicit residual smoothing, and a multigrid procedure were added to the explicit time-stepping scheme to accelerate convergence to steady state. The developed algorithm was implemented successfully in a multi-block code, which provides complete topological and geometric flexibility. The only requirement is C0 continuity of the grid across the block interface. The algorithm has been validated on a diverse set of three-dimensional test cases of increasing complexity. The cases studied were: (1) supersonic corner flow; (2) supersonic plume flow; (3) laminar and turbulent flow over a flat plate; (4) transonic flow over an ONERA M6 wing; and (5) unsteady flow of a compressible jet impinging on a ground plane (with and without cross flow). The emphasis of the test cases was validation of the code, assessment of performance, and demonstration of flexibility.
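
    A scalar one-dimensional sketch of the MUSCL kappa-scheme reconstruction with minmod limiting, i.e., the interface-interpolation step described above; boundary treatment and the Roe/van Leer flux evaluation that would follow are omitted.

```python
import numpy as np

def minmod(a, b):
    """Minmod slope limiter."""
    return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def muscl_states(u, kappa=1.0 / 3.0):
    """MUSCL (kappa-scheme) reconstruction of left/right states at the
    interior cell interfaces of a 1-D array of cell averages u.

    kappa = 1/3 gives the nominally third-order variant; minmod limiting
    keeps the reconstruction monotone near discontinuities.
    """
    dm = u[1:-1] - u[:-2]    # backward difference per interior cell
    dp = u[2:] - u[1:-1]     # forward difference per interior cell
    sm = minmod(dm, dp)      # limited slopes
    sp = minmod(dp, dm)
    # extrapolate from each cell to its right face and to its left face
    face_r = u[1:-1] + 0.25 * ((1.0 - kappa) * sm + (1.0 + kappa) * sp)
    face_l = u[1:-1] - 0.25 * ((1.0 - kappa) * sp + (1.0 + kappa) * sm)
    uL, uR = face_r[:-1], face_l[1:]   # pair states across each interface
    return uL, uR
```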

  10. New evidence favoring multilevel decomposition and optimization

    NASA Technical Reports Server (NTRS)

    Padula, Sharon L.; Polignone, Debra A.

    1990-01-01

    The issue of the utility of multilevel decomposition and optimization remains controversial. To date, only the structural optimization community has actively developed and promoted multilevel optimization techniques. However, even this community acknowledges that multilevel optimization is ideally suited for a rather limited set of problems. It is warned that decomposition typically requires eliminating local variables by using global variables and that this in turn causes ill-conditioning of the multilevel optimization by adding equality constraints. The purpose is to suggest a new multilevel optimization technique. This technique uses behavior variables, in addition to design variables and constraints, to decompose the problem. The new technique removes the need for equality constraints, simplifies the decomposition of the design problem, simplifies the programming task, and improves the convergence speed of multilevel optimization compared to conventional optimization.

  11. Phased-mission system analysis using Boolean algebraic methods

    NASA Technical Reports Server (NTRS)

    Somani, Arun K.; Trivedi, Kishor S.

    1993-01-01

    Most reliability analysis techniques and tools assume that a system is used for a mission consisting of a single phase. However, multiple phases are natural in many missions. The failure rates of components, system configuration, and success criteria may vary from phase to phase. In addition, the duration of a phase may be deterministic or random. Recently, several researchers have addressed the problem of reliability analysis of such systems using a variety of methods. A new technique for phased-mission system reliability analysis based on Boolean algebraic methods is described. Our technique is computationally efficient and is applicable to a large class of systems for which the failure criterion in each phase can be expressed as a fault tree (or an equivalent representation). Our technique avoids the state space explosion that commonly plagues Markov chain-based analysis. A phase algebra to account for the effects of variable configurations and success criteria from phase to phase was developed. Our technique yields exact (as opposed to approximate) results. We demonstrate the use of our technique by means of an example and present numerical results to show the effects of mission phases on the system reliability.

  12. Procedures for generation and reduction of linear models of a turbofan engine

    NASA Technical Reports Server (NTRS)

    Seldner, K.; Cwynar, D. S.

    1978-01-01

    A real time hybrid simulation of the Pratt & Whitney F100-PW-100 turbofan engine was used for linear-model generation. The linear models were used to analyze the effect of disturbances about an operating point on the dynamic performance of the engine. A procedure that disturbs, samples, and records the state and control variables was developed. For large systems, such as the F100 engine, the state vector is large and may contain high-frequency information not required for control. Thus, reducing the full-state model to a reduced-order model may be a practical approach to simplifying the control design. A reduction technique was developed to generate reduced-order models. Selected linear and nonlinear output responses to exhaust-nozzle area and main-burner fuel flow disturbances are presented for comparison.

  13. Momentum fractionation on superstrata

    DOE PAGES

    Bena, Iosif; Martinec, Emil; Turton, David; ...

    2016-05-11

    Superstrata are bound states in string theory that carry D1, D5, and momentum charges, and whose supergravity descriptions are parameterized by arbitrary functions of (at least) two variables. In the D1-D5 CFT, typical three-charge states reside in high-degree twisted sectors, and their momentum charge is carried by modes that individually have fractional momentum. Understanding this momentum fractionation holographically is crucial for understanding typical black-hole microstates in this system. We use solution-generating techniques to add momentum to a multi-wound supertube and thereby construct the first examples of asymptotically-flat superstrata. The resulting supergravity solutions are horizonless and smooth up to well-understood orbifold singularities. Upon taking the AdS3 decoupling limit, our solutions are dual to CFT states with momentum fractionation. We give a precise proposal for these dual CFT states. Lastly, our construction establishes the very nontrivial fact that large classes of CFT states with momentum fractionation can be realized in the bulk as smooth horizonless supergravity solutions.

  14. A 16-year time series of 1 km AVHRR satellite data of the conterminous United States and Alaska

    USGS Publications Warehouse

    Eidenshink, Jeff

    2006-01-01

    The U.S. Geological Survey (USGS) has developed a 16-year time series of vegetation condition information for the conterminous United States and Alaska using 1 km Advanced Very High Resolution Radiometer (AVHRR) data. The AVHRR data have been processed using consistent methods that account for radiometric variability due to calibration uncertainty, the effects of the atmosphere on surface radiometric measurements obtained from wide field-of-view observations, and the geometric registration accuracy. The conterminous United States and Alaska data sets have an atmospheric correction for water vapor, ozone, and Rayleigh scattering and include a cloud mask derived using the Clouds from AVHRR (CLAVR) algorithm. In comparison with other AVHRR time series data sets, the conterminous United States and Alaska data are processed using similar techniques. The primary difference is that the conterminous United States and Alaska data are at 1 km resolution, while others are at 8 km resolution. The time series consists of weekly and biweekly maximum normalized difference vegetation index (NDVI) composites.
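
    The NDVI and the maximum-value compositing step can be stated in a few lines; AVHRR channel 1 (red) and channel 2 (near-infrared) are assumed, with cloud-masked pixels set to NaN beforehand.

```python
import numpy as np

def ndvi(red, nir):
    """Normalized difference vegetation index from AVHRR channel 1 (red)
    and channel 2 (near-infrared) reflectances."""
    red, nir = np.asarray(red, float), np.asarray(nir, float)
    return (nir - red) / (nir + red)

def max_value_composite(ndvi_stack):
    """Weekly/biweekly maximum-value composite: for each pixel, keep the
    largest NDVI across the daily scenes in the period, which suppresses
    most cloud and off-nadir contamination.

    ndvi_stack: array (n_days, rows, cols), NaN where cloud-masked."""
    return np.nanmax(ndvi_stack, axis=0)
```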

  15. Electron localization mechanism in the normal state of high-Tc superconductors

    NASA Astrophysics Data System (ADS)

    Yamani, Z.; Akhavan, M.

    The ceramic compounds Gd1-xPrxCu3O7-y (GdPr-123) with 0.0 ≤ x ≤ 1.0 were synthesized by the standard solid-state reaction technique. XRD analysis shows a predominantly single-phase perovskite structure with the orthorhombic Pmmm symmetry. The samples have been examined for superconductivity by measuring electrical resistivity within the temperature range 10-300 K. These measurements show a suppression of superconductivity with increasing x. It is observed that the critical Pr concentration (x_cr) required to suppress superconductivity is about 0.45: the samples with x < 0.45 become superconducting and are metallic in their normal state, while the samples with x ≥ 0.45 do not become superconducting and show semiconducting behavior above 10 K. To interpret the normal-state properties of the samples, the quantum percolation theory based on localized states is applied. A cross-over between variable-range hopping (VRH) and Coulomb gap (CG) mechanisms is observed as a result of decreasing the Pr content.
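
    The two hopping laws between which the cross-over is observed have the standard forms (three-dimensional Mott exponent shown):

```latex
% Mott variable-range hopping (VRH), d = 3:
\rho(T) = \rho_0 \exp\!\left[\left(T_0/T\right)^{1/4}\right]
% Efros-Shklovskii hopping across a Coulomb gap (CG):
\rho(T) = \rho_0 \exp\!\left[\left(T_{\mathrm{CG}}/T\right)^{1/2}\right]
```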

  16. The influence of trait and state rumination on cardiovascular recovery from a negative emotional stressor.

    PubMed

    Key, Brenda L; Campbell, Tavis S; Bacon, Simon L; Gerin, William

    2008-06-01

    The purpose of this study was to evaluate the influence of trait and state rumination on cardiovascular recovery following a negative emotional stressor. Cardiovascular data was collected from 64 undergraduate women during a 10-min baseline period, 5-min emotional recall stress task, and a 15-min recovery period. Trait rumination was assessed using the Stress Reactive Rumination Scale and state rumination was assessed 5 and 10 min after the stressor, using a thought-report technique. Results indicated that trait and state rumination interacted such that low trait ruminators who were ruminating at 10 min after the termination of the stressor had poorer diastolic blood pressure and high-frequency heart rate variability recovery compared to low trait ruminators who were not ruminating. State rumination was not associated with cardiovascular recovery in high trait ruminators. Results suggest that rumination may play a role in the association between stress and hypertension by prolonging cardiovascular activation following stress.

  17. Variability in Humoral Immunity to Measles Vaccine: New Developments

    PubMed Central

    Haralambieva, Iana H.; Kennedy, Richard B.; Ovsyannikova, Inna G.; Whitaker, Jennifer A.; Poland, Gregory A.

    2015-01-01

    Despite the existence of an effective measles vaccine, resurgence in measles cases in the United States and across Europe has occurred, including in individuals vaccinated with two doses of the vaccine. Host genetic factors result in inter-individual variation in measles vaccine-induced antibodies, and play a role in vaccine failure. Studies have identified HLA and non-HLA genetic influences that individually or jointly contribute to the observed variability in the humoral response to vaccination among healthy individuals. In this exciting era, new high-dimensional approaches and techniques including vaccinomics, systems biology, GWAS, epitope prediction and sophisticated bioinformatics/statistical algorithms, provide powerful tools to investigate immune response mechanisms to the measles vaccine. These might predict, on an individual basis, outcomes of acquired immunity post measles vaccination. PMID:26602762

  18. Optimal control of a variable spin speed CMG system for space vehicles. [Control Moment Gyros

    NASA Technical Reports Server (NTRS)

    Liu, T. C.; Chubb, W. B.; Seltzer, S. M.; Thompson, Z.

    1973-01-01

    Many future NASA programs require highly accurate pointing stability. These pointing requirements are well beyond anything attempted to date. This paper suggests a control system capable of meeting these requirements. An optimal control law for the suggested system is specified. However, since no direct method of solution is known for this complicated system, a computational technique using successive approximations is used to develop the required solution. The calculus of variations is applied to estimate changes in the performance index as well as in the inequality constraints on the state variables and terminal conditions. An algorithm is thus obtained by the steepest descent method and/or the conjugate gradient method. Numerical examples are given to show the optimal controls.
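
    The successive-approximation idea can be sketched on a toy problem: roll out the dynamics, penalize the terminal condition, and descend the gradient of the performance index. Everything below (double-integrator plant, cost weights, step size) is invented for illustration and is not the paper's control system:

      import numpy as np

      def cost(u, dt=0.1):
          # Double-integrator rollout with quadratic running cost and a
          # penalty term standing in for the terminal constraint x(T) = 0
          x, J = np.array([1.0, 0.0]), 0.0
          for uk in u:
              J += dt * (x @ x + 0.01 * uk**2)
              x = x + dt * np.array([x[1], uk])
          return J + 10.0 * (x @ x)

      u = np.zeros(50)                      # initial control guess
      eps, step = 1e-5, 0.05
      for _ in range(200):                  # successive approximations
          J0 = cost(u)
          g = np.array([(cost(u + eps * np.eye(50)[k]) - J0) / eps
                        for k in range(50)])
          u -= step * g                     # steepest-descent update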

  19. Development of a variable structure-based fault detection and diagnosis strategy applied to an electromechanical system

    NASA Astrophysics Data System (ADS)

    Gadsden, S. Andrew; Kirubarajan, T.

    2017-05-01

    Signal processing techniques are prevalent in a wide range of fields: control, target tracking, telecommunications, robotics, fault detection and diagnosis, and even stock market analysis, to name a few. Although first introduced in the 1950s, the most popular method used for signal processing and state estimation remains the Kalman filter (KF). The KF offers an optimal solution to the estimation problem under strict assumptions. Since then, a number of other estimation strategies and filters have been introduced to overcome robustness issues, such as the smooth variable structure filter (SVSF). In this paper, properties of the SVSF are explored in an effort to detect and diagnose faults in an electromechanical system. The results are compared with the KF method, and future work is discussed.
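
    For reference, the KF cycle that the comparison is built on is standard and compact; one predict/update step is sketched below (F, H, Q, R are the usual state-transition, measurement, and noise matrices; fault detection schemes commonly monitor the innovation z - Hx for abnormal growth):

      import numpy as np

      def kf_step(x, P, z, F, H, Q, R):
          # Predict
          x = F @ x
          P = F @ P @ F.T + Q
          # Update
          S = H @ P @ H.T + R                    # innovation covariance
          K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
          x = x + K @ (z - H @ x)                # correct with the innovation
          P = (np.eye(len(x)) - K @ H) @ P
          return x, P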

  20. PREDICTING TWO-DIMENSIONAL STEADY-STATE SOIL FREEZING FRONTS USING THE CVBEM.

    USGS Publications Warehouse

    Hromadka, T.V.

    1986-01-01

    The complex variable boundary element method (CVBEM) is used instead of a real variable boundary element method because of the modeling-error evaluation techniques that have been developed for it. Modeling accuracy is evaluated by the model user through the determination of an approximative boundary upon which the CVBEM provides an exact solution. Although inhomogeneity (and anisotropy) can be included in the CVBEM model, the resulting fully populated matrix system quickly becomes large. Therefore, in this paper, the domain is assumed homogeneous and isotropic except for differences in frozen and thawed conduction parameters on either side of the freezing front. The example problems presented were obtained by use of a popular 64K microcomputer (the current version of the program used in this study has the capacity to accommodate 30 nodal points).
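
    The defining CVBEM property - an approximation that is exactly harmonic in the interior, with all error pushed onto the boundary - can be mimicked in a few lines by fitting the real part of a complex polynomial to boundary temperatures. The toy below (geometry, boundary data, and polynomial degree are all invented; this is not Hromadka's formulation) recovers the harmonic field Re(e^z):

      import numpy as np

      theta = np.linspace(0, 2 * np.pi, 40, endpoint=False)
      zb = np.exp(1j * theta)                     # boundary nodes (unit circle)
      Tb = np.exp(zb.real) * np.cos(zb.imag)      # boundary data: Re(e^z)

      # Fit T(z) ~ Re(sum_k (a_k + i b_k) z^k) by least squares on the boundary
      K = 8
      A = np.stack([zb**k for k in range(K)], axis=1)
      M = np.hstack([A.real, -A.imag])
      coef, *_ = np.linalg.lstsq(M, Tb, rcond=None)
      a, b = coef[:K], coef[K:]

      # Interior evaluation is analytic, hence exactly harmonic
      z0 = 0.3 + 0.2j
      T0 = sum(a[k] * (z0**k).real - b[k] * (z0**k).imag for k in range(K))
      print(T0, np.exp(z0.real) * np.cos(z0.imag))  # close agreement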

  1. Application of the Ecosystem Assessment Model to Lake Norman: A cooling lake in North Carolina: Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Porcella, D.B.; Bowie, G.L.; Campbell, C.L.

    The Ecosystem Assessment Model (EAM) of the Cooling Lake Assessment Methodology was applied to the extensive ecological field data collected at Lake Norman, North Carolina by Duke Power Company to evaluate its capability to simulate lake ecosystems and the ecological effects of steam electric power plants. The EAM provided simulations over a five-year verification period that behaved as expected based on a one-year calibration. Major state variables of interest to utilities and regulatory agencies are: temperature, dissolved oxygen, and fish community variables. In qualitative terms, temperature simulation was very accurate, dissolved oxygen simulation was accurate, and fish prediction was reasonably accurate. The need for more accurate fisheries data collected at monthly intervals and non-destructive sampling techniques was identified.

  2. Photoswitchable carbohydrate-based fluorosurfactants as tuneable ice recrystallization inhibitors.

    PubMed

    Adam, Madeleine K; Hu, Yingxue; Poisson, Jessica S; Pottage, Matthew J; Ben, Robert N; Wilkinson, Brendan L

    2017-02-01

    Cryopreservation is an important technique employed for the storage and preservation of biological tissues and cells. The limited effectiveness and significant toxicity of conventionally-used cryoprotectants, such as DMSO, have prompted efforts toward the rational design of less toxic alternatives, including carbohydrate-based surfactants. In this paper, we report the modular synthesis and ice recrystallization inhibition (IRI) activity of a library of variably substituted, carbohydrate-based fluorosurfactants. Carbohydrate-based fluorosurfactants possessed a variable mono- or disaccharide head group appended to a hydrophobic fluoroalkyl-substituted azobenzene tail group. Light-addressable fluorosurfactants displayed weak-to-moderate IRI activity that could be tuned through selection of carbohydrate head group, position of the trifluoroalkyl group on the azobenzene ring, and isomeric state of the azobenzene tail fragment.

  3. State of the Art in Large-Scale Soil Moisture Monitoring

    NASA Technical Reports Server (NTRS)

    Ochsner, Tyson E.; Cosh, Michael Harold; Cuenca, Richard H.; Dorigo, Wouter; Draper, Clara S.; Hagimoto, Yutaka; Kerr, Yan H.; Larson, Kristine M.; Njoku, Eni Gerald; Small, Eric E.

    2013-01-01

    Soil moisture is an essential climate variable influencing land atmosphere interactions, an essential hydrologic variable impacting rainfall runoff processes, an essential ecological variable regulating net ecosystem exchange, and an essential agricultural variable constraining food security. Large-scale soil moisture monitoring has advanced in recent years creating opportunities to transform scientific understanding of soil moisture and related processes. These advances are being driven by researchers from a broad range of disciplines, but this complicates collaboration and communication. For some applications, the science required to utilize large-scale soil moisture data is poorly developed. In this review, we describe the state of the art in large-scale soil moisture monitoring and identify some critical needs for research to optimize the use of increasingly available soil moisture data. We review representative examples of 1) emerging in situ and proximal sensing techniques, 2) dedicated soil moisture remote sensing missions, 3) soil moisture monitoring networks, and 4) applications of large-scale soil moisture measurements. Significant near-term progress seems possible in the use of large-scale soil moisture data for drought monitoring. Assimilation of soil moisture data for meteorological or hydrologic forecasting also shows promise, but significant challenges related to model structures and model errors remain. Little progress has been made yet in the use of large-scale soil moisture observations within the context of ecological or agricultural modeling. Opportunities abound to advance the science and practice of large-scale soil moisture monitoring for the sake of improved Earth system monitoring, modeling, and forecasting.

  4. Effects of practice on tip-of-the-tongue states.

    PubMed

    Smith, S M; Balfour, S P; Brown, J M

    1994-03-01

    Tip-of-the-tongue (TOT) states were examined in relation to acquisition manipulations, using named imaginary animals (TOTimals) as targets. High levels of TOT states were found in three experiments. In the first experiment an increase in the duration of initial exposure to target material improved recall and recognition, and reduced the number of unrecalled items not in TOT states (NTOTs), but did not affect TOT levels. In Experiment 2 practice at writing target names, as compared with only reading them, improved recall performance and decreased TOT levels, but did not reduce NTOTs. Experiment 3 replicated the finding that writing during practice reduced TOT states, but did not reduce NTOTs, and also found that more frequent practice trials increased recall without affecting TOT levels. The results suggest that practice writing target names prevents TOT states by strengthening otherwise deficient phonological connections in memory, a deficiency that can cause TOT states when visual-to-lexical connections give only partial access to a target in memory. The results also demonstrate the usefulness of the TOTimal technique for testing effects of acquisition variables on TOT experiences.

  5. Practical somewhat-secure quantum somewhat-homomorphic encryption with coherent states

    NASA Astrophysics Data System (ADS)

    Tan, Si-Hui; Ouyang, Yingkai; Rohde, Peter P.

    2018-04-01

    We present a scheme for implementing homomorphic encryption on coherent states encoded using phase-shift keys. The encryption operations require only rotations in phase space, which commute with computations in the code space performed via passive linear optics, and with generalized nonlinear phase operations that are polynomials of the photon-number operator in the code space. This encoding scheme can thus be applied to any computation with coherent-state inputs, and the computation proceeds via a combination of passive linear optics and generalized nonlinear phase operations. An example of such a computation is matrix multiplication, whereby a vector representing coherent-state amplitudes is multiplied by a matrix representing a linear optics network, yielding a new vector of coherent-state amplitudes. By finding an orthogonal partitioning of the support of our encoded states, we quantify the security of our scheme via the indistinguishability of the encrypted code words. While we focus on coherent-state encodings, we expect that this phase-key encoding technique could apply to any continuous-variable computation scheme where the phase-shift operator commutes with the computation.
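
    The commutation property at the heart of the scheme is easy to check numerically for the matrix-multiplication example: a global phase-space rotation (the key) passes through any passive linear optics network, here represented by a unitary matrix. The toy below manipulates amplitudes only; it does not model the quantum states or the security argument:

      import numpy as np

      rng = np.random.default_rng(1)
      alpha = rng.normal(size=4) + 1j * rng.normal(size=4)  # coherent amplitudes
      Q, _ = np.linalg.qr(rng.normal(size=(4, 4))
                          + 1j * rng.normal(size=(4, 4)))
      U = Q                                   # passive linear optics network

      theta = rng.uniform(0, 2 * np.pi)       # secret phase-shift key
      enc = np.exp(1j * theta) * alpha        # encrypt: rotate in phase space
      out = U @ enc                           # compute on the ciphertext
      dec = np.exp(-1j * theta) * out         # decrypt

      assert np.allclose(dec, U @ alpha)      # same as computing on plaintext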

  6. Variable Coupling Scheme for High Frequency Electron Spin Resonance Resonators Using Asymmetric Meshes

    PubMed Central

    Tipikin, D. S.; Earle, K. A.; Freed, J. H.

    2010-01-01

    The sensitivity of a high frequency electron spin resonance (ESR) spectrometer depends strongly on the structure used to couple the incident millimeter wave to the sample that generates the ESR signal. Subsequent coupling of the ESR signal to the detection arm of the spectrometer is also a crucial consideration for achieving high spectrometer sensitivity. In previous work, we found that a means for continuously varying the coupling was necessary for attaining high sensitivity reliably and reproducibly. We report here on a novel asymmetric mesh structure that achieves continuously variable coupling by rotating the mesh in its own plane about the millimeter wave transmission line optical axis. We quantify the performance of this device with nitroxide spin-label spectra in both a lossy aqueous solution and a low loss solid state system. These two systems have very different coupling requirements and are representative of the range of coupling achievable with this technique. Lossy systems in particular are a demanding test of the achievable sensitivity and allow us to assess the suitability of this approach for applying high frequency ESR to the study of biological systems at physiological conditions, for example. The variable coupling technique reported on here allows us to readily achieve a factor of ca. 7 improvement in signal to noise at 170 GHz and a factor of ca. 5 at 95 GHz over what has previously been reported for lossy samples. PMID:20458356

  7. Analysis and Design of High-Order Parallel Resonant Converters

    NASA Astrophysics Data System (ADS)

    Batarseh, Issa Eid

    1990-01-01

    In this thesis, a special state variable transformation technique has been derived for the analysis of high-order dc-to-dc resonant converters. Converters comprised of high-order resonant tanks have the advantage of utilizing the parasitic elements by making them part of the resonant tank. A new set of state variables is defined in order to make use of two-dimensional state-plane diagrams in the analysis of high-order converters. Such a method has been successfully used for the analysis of the conventional Parallel Resonant Converter (PRC). Consequently, two-dimensional state-plane diagrams are used to analyze the steady-state response of third- and fourth-order PRCs when these converters are operated in the continuous conduction mode. Based on this analysis, a set of control characteristic curves for the LCC-, LLC- and LLCC-type PRCs is presented, from which various converter design parameters are obtained. Various design curves for component value selection and device ratings are given. This analysis of high-order resonant converters shows that the addition of reactive components to the resonant tank results in converters with better performance characteristics when compared with the conventional second-order PRC. A complete design procedure, along with design examples for 2nd-, 3rd- and 4th-order converters, is presented. Practical power supply units, normally used for computer applications, were built and tested using the LCC-, LLC- and LLCC-type commutation schemes. In addition, computer simulation results are presented for these converters in order to verify the theoretical results.
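
    The state-plane construction rests on the fact that, in normalized variables, an undriven LC resonant tank traces a circle in the plane of capacitor voltage versus normalized inductor current. A minimal numerical check (the normalization and the integrator are our choices, not the thesis's transformation):

      import numpy as np

      dt, n = 1e-3, 10000
      v, j = 1.0, 0.0                  # capacitor voltage, inductor current * Z0
      radius = []
      for _ in range(n):
          v = v + dt * j               # dv/dt = j   (normalized)
          j = j - dt * v               # dj/dt = -v  (semi-implicit Euler)
          radius.append(np.hypot(v, j))

      print(min(radius), max(radius))  # stays ~1: a circle in the state plane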

  8. A Scalable, Parallel Approach for Multi-Point, High-Fidelity Aerostructural Optimization of Aircraft Configurations

    NASA Astrophysics Data System (ADS)

    Kenway, Gaetan K. W.

    This thesis presents new tools and techniques developed to address the challenging problem of high-fidelity aerostructural optimization with respect to large numbers of design variables. A new mesh-movement scheme is developed that is both computationally efficient and sufficiently robust to accommodate large geometric design changes and aerostructural deformations. A fully coupled Newton-Krylov method is presented that accelerates the convergence of aerostructural systems, provides a 20% performance improvement over the traditional nonlinear block Gauss-Seidel approach, and can handle more flexible structures. A coupled adjoint method is used that efficiently computes derivatives for a gradient-based optimization algorithm. The implementation uses only machine-accurate derivative techniques and is verified to yield fully consistent derivatives by comparison against the complex-step method. The fully coupled large-scale adjoint solution method is shown to have 30% better performance than the segregated approach. The parallel scalability of the coupled adjoint technique is demonstrated on an Euler Computational Fluid Dynamics (CFD) model with more than 80 million state variables coupled to a detailed structural finite-element model of the wing with more than 1 million degrees of freedom. Multi-point high-fidelity aerostructural optimizations of a long-range wide-body, transonic transport aircraft configuration are performed using the developed techniques. The aerostructural analysis employs Euler CFD with a 2 million cell mesh and a structural finite element model with 300,000 DOF. Two design optimization problems are solved: one where takeoff gross weight is minimized, and another where fuel burn is minimized. Each optimization uses a multi-point formulation with 5 cruise conditions and 2 maneuver conditions. The optimization problems have 476 design variables, and optimal results are obtained within 36 hours of wall time using 435 processors. The TOGW minimization results in a 4.2% reduction in TOGW with a 6.6% fuel burn reduction, while the fuel burn optimization results in an 11.2% fuel burn reduction with no change to the takeoff gross weight.

  9. Prediction equations of forced oscillation technique: the insidious role of collinearity.

    PubMed

    Narchi, Hassib; AlBlooshi, Afaf

    2018-03-27

    Many studies have reported reference data for forced oscillation technique (FOT) in healthy children. The prediction equations of FOT parameters were derived from a multivariable regression model examining the effect of age, gender, weight and height on each parameter. As many of these variables are likely to be correlated, collinearity might have affected the accuracy of the model, potentially resulting in misleading, erroneous or difficult-to-interpret conclusions. The aim of this work was threefold: to review all FOT publications in children since 2005 and analyze whether collinearity was considered in the construction of the published prediction equations; to compare these prediction equations with our own study; and to analyse, in our own study, how collinearity between the explanatory variables might affect the prediction equations if it was not considered in the model. The results showed that none of the ten reviewed studies stated whether collinearity was checked for. Half of the reports had also included in their equations variables which are physiologically correlated, such as age, weight and height. The predicted resistance varied by up to 28% amongst these studies. In our study, multicollinearity was identified between the explanatory variables initially considered for the regression model (age, weight and height). Ignoring it would have resulted in inaccuracies in the coefficients of the equation, their signs (positive or negative), their 95% confidence intervals, their significance level and the model's goodness of fit. In conclusion, with inaccurately constructed and improperly reported models, understanding the results and reproducing the models for future research might be compromised.
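
    Collinearity of predictors like age, weight and height is exactly what a variance inflation factor (VIF) screen would flag before such prediction equations are fitted. A minimal sketch (our illustration, not the authors' code):

      import numpy as np

      def vif(X):
          # VIF_k = 1 / (1 - R^2_k), where R^2_k comes from regressing
          # column k of the design matrix on the remaining columns.
          X = (X - X.mean(0)) / X.std(0)         # standardize columns
          out = []
          for k in range(X.shape[1]):
              others = np.delete(X, k, axis=1)
              beta, *_ = np.linalg.lstsq(others, X[:, k], rcond=None)
              resid = X[:, k] - others @ beta
              out.append(1.0 / resid.var())      # Var(col k) = 1, so 1 - R^2
          return np.array(out)                   # VIF > ~5-10 flags trouble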

  10. Intraannual variability of tides in the thermosphere from model simulations and in situ satellite observations

    NASA Astrophysics Data System (ADS)

    Häusler, K.; Hagan, M. E.; Forbes, J. M.; Zhang, X.; Doornbos, E.; Bruinsma, S.; Lu, G.

    2015-01-01

    In this paper, we provide insights into limitations imposed by current satellite-based strategies to delineate tidal variability in the thermosphere, as well as the ability of a state-of-the-art model to replicate thermospheric tidal determinations. Toward this end, we conducted a year-long thermosphere-ionosphere-mesosphere-electrodynamics general circulation model (TIME-GCM) simulation for 2009, which is characterized by low solar and geomagnetic activity. In order to account for tropospheric waves and tides propagating upward into the ˜30-400 km model domain, we used 3-hourly MERRA (Modern-Era Retrospective Analysis for Research and Application) reanalysis data. We focus on exospheric tidal temperatures, which are also compared with 72 day mean determinations from combined Challenging Minisatellite Payload (CHAMP) and Gravity Recovery and Climate Experiment (GRACE) satellite observations to assess the model's capability to capture the observed tidal signatures and to quantify the uncertainties associated with the satellite exospheric temperature determination technique. We found strong day-to-day tidal variability in TIME-GCM that is smoothed out when averaged over as few as ten days. TIME-GCM notably overestimates the 72 day mean eastward propagating tides observed by CHAMP/GRACE, while capturing many of the salient features of other tidal components. However, the CHAMP/GRACE tidal determination technique only provides a gross climatological representation, underestimates the majority of the tidal components in the climatological spectrum, and moreover fails to characterize the extreme variability that drives the dynamics and electrodynamics of the ionosphere-thermosphere system. A multisatellite mission that samples at least six local times simultaneously is needed to provide this quantification.

  11. Precision estimate for Odin-OSIRIS limb scatter retrievals

    NASA Astrophysics Data System (ADS)

    Bourassa, A. E.; McLinden, C. A.; Bathgate, A. F.; Elash, B. J.; Degenstein, D. A.

    2012-02-01

    The limb scatter measurements made by the Optical Spectrograph and Infrared Imaging System (OSIRIS) instrument on the Odin spacecraft are used to routinely produce vertically resolved trace gas and aerosol extinction profiles. Version 5 of the ozone and stratospheric aerosol extinction retrievals, which are available for download, are performed using a multiplicative algebraic reconstruction technique (MART). The MART inversion is a type of relaxation method, and as such the covariance of the retrieved state is estimated numerically, which, if done directly, is a computationally heavy task. Here we provide a methodology for the derivation of a numerical estimate of the covariance matrix for the retrieved state using the MART inversion that is sufficiently efficient to perform for each OSIRIS measurement. The resulting precision is compared with the variability in a large set of pairs of OSIRIS measurements that are close in time and space in the tropical stratosphere where the natural atmospheric variability is weak. These results are found to be highly consistent and thus provide confidence in the numerical estimate of the precision in the retrieved profiles.
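
    MART itself is a compact multiplicative update: each measurement rescales the state by the ratio of observed to predicted signal, raised to a relaxed power. A generic sketch for y ~ A x (the relaxation and row normalization are assumptions; the OSIRIS retrieval and its covariance estimation are more involved):

      import numpy as np

      def mart(A, y, n_sweeps=50, lam=1.0):
          # Multiplicative algebraic reconstruction for y ~ A x with x > 0:
          # x_j <- x_j * (y_i / (A x)_i) ** (lam * a_ij), rows scaled so the
          # exponents stay within [0, 1].
          x = np.ones(A.shape[1])
          for _ in range(n_sweeps):
              for i in range(A.shape[0]):
                  pred = A[i] @ x
                  if pred > 0 and y[i] > 0:
                      x *= (y[i] / pred) ** (lam * A[i] / A[i].max())
          return x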

  12. Estimating 1970-99 average annual groundwater recharge in Wisconsin using streamflow data

    USGS Publications Warehouse

    Gebert, Warren A.; Walker, John F.; Kennedy, James L.

    2011-01-01

    Average annual recharge in Wisconsin for the period 1970-99 was estimated using streamflow data from U.S. Geological Survey continuous-record streamflow-gaging stations and partial-record sites. Partial-record sites have discharge measurements collected during low-flow conditions. The average annual base flow of a stream divided by the drainage area is a good approximation of the recharge rate; therefore, once average annual base flow is determined recharge can be calculated. Estimates of recharge for nearly 72 percent of the surface area of the State are provided. The results illustrate substantial spatial variability of recharge across the State, ranging from less than 1 inch to more than 12 inches per year. The average basin size for partial-record sites (50 square miles) was less than the average basin size for the gaging stations (305 square miles). Including results for smaller basins reveals a spatial variability that otherwise would be smoothed out using only estimates for larger basins. An error analysis indicates that the techniques used provide base flow estimates with standard errors ranging from 5.4 to 14 percent.
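
    The base-flow-to-recharge arithmetic reduces to a unit conversion: 1 ft³/s sustained for a year over 1 mi² deposits about 13.57 inches of water. A sketch with a hypothetical basin:

      # 1 cfs/mi^2 for one year: (31,536,000 s * 1 ft^3/s) / 27,878,400 ft^2,
      # converted from feet to inches
      CFS_PER_SQMI_TO_IN_PER_YR = (31_536_000 / 27_878_400) * 12  # ~13.57

      def recharge_in_per_yr(base_flow_cfs, drainage_area_sqmi):
          return base_flow_cfs / drainage_area_sqmi * CFS_PER_SQMI_TO_IN_PER_YR

      # e.g. a hypothetical 50 mi^2 basin averaging 15 ft^3/s of base flow:
      print(round(recharge_in_per_yr(15, 50), 1), "in/yr")   # ~4.1 in/yr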

  13. Characterization of depressive States in bipolar patients using wearable textile technology and instantaneous heart rate variability assessment.

    PubMed

    Valenza, Gaetano; Citi, Luca; Gentili, Claudio; Lanata, Antonio; Scilingo, Enzo Pasquale; Barbieri, Riccardo

    2015-01-01

    The analysis of cognitive and autonomic responses to emotionally relevant stimuli could provide a viable solution for the automatic recognition of different mood states, both in normal and pathological conditions. In this study, we present a methodological application describing a novel system based on wearable textile technology and instantaneous nonlinear heart rate variability assessment, able to characterize the autonomic status of bipolar patients by considering only electrocardiogram recordings. As a proof of this concept, our study presents results obtained from eight bipolar patients during their normal daily activities and being elicited according to a specific emotional protocol through the presentation of emotionally relevant pictures. Linear and nonlinear features were computed using a novel point-process-based nonlinear autoregressive integrative model and compared with traditional algorithmic methods. The estimated indices were used as the input of a multilayer perceptron to discriminate the depressive from the euthymic status. Results show that our system achieves much higher accuracy than the traditional techniques. Moreover, the inclusion of instantaneous higher order spectra features significantly improves the accuracy in successfully recognizing depression from euthymia.

  14. Application of low-dimensional techniques for closed-loop control of turbulent flows

    NASA Astrophysics Data System (ADS)

    Ausseur, Julie

    The groundwork for an advanced closed-loop control of separated shear layer flows is laid out in this document. The experimental testbed for the present investigation is the turbulent flow over a NACA-4412 model airfoil tested in the Syracuse University subsonic wind tunnel at Re=135,000. The specified control objective is to delay separation - or stall - by constantly keeping the flow attached to the surface of the wing. The proper orthogonal decomposition (POD) is shown to be a valuable tool for providing a low-dimensional estimate of the flow state, and the first POD expansion coefficient is proposed as the control variable. Other reduced-order techniques, such as the modified linear and quadratic stochastic measurement methods (mLSM, mQSM), are applied to reduce the complexity of the flow field, and their ability to accurately estimate the flow state from surface pressure measurements alone is examined. A simple proportional feedback control is successfully implemented in real time using these tools, and flow separation is efficiently delayed by over 3 degrees angle of attack. To further improve the quality of the flow state estimate, the implementation of a Kalman filter is foreseen, in which knowledge of the flow dynamics is added to the computation of the control variable to correct for potential measurement errors. To this aim, a reduced-order model (ROM) of the flow is developed using the least-squares method to obtain the coefficients of the POD/Galerkin projection of the Navier-Stokes equations from experimental data. To build the training ensemble needed in this experimental procedure, the spectral mLSM is performed to generate time-resolved series of POD expansion coefficients from which temporal derivatives are computed. This technique, which is applied to independent PIV velocity snapshots and time-resolved surface measurements, is able to retrieve the rational temporal evolution of the flow physics in the entire 2-D measurement area. The quality of the spectral measurements is confirmed by the results from both the linear and quadratic dynamical systems. The preliminary results from the linear ROM strengthen the motivation for future control implementation of a linear Kalman filter in this flow.
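
    In practice the POD step reduces to a singular value decomposition of a mean-subtracted snapshot matrix; a minimal sketch (the array layout is an assumption), in which the first expansion coefficient plays the role of the control variable described above:

      import numpy as np

      def pod(snapshots):
          # snapshots: each column is one flow-field snapshot
          X = snapshots - snapshots.mean(axis=1, keepdims=True)
          U, s, Vt = np.linalg.svd(X, full_matrices=False)
          modes = U                     # orthonormal spatial POD modes
          coeffs = np.diag(s) @ Vt      # row k = expansion coefficient a_k(t)
          return modes, coeffs

      # coeffs[0] is the first POD expansion coefficient a_1(t), the proposed
      # low-dimensional control variable.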

  15. Nonlocal Intracranial Cavity Extraction

    PubMed Central

    Manjón, José V.; Eskildsen, Simon F.; Coupé, Pierrick; Romero, José E.; Collins, D. Louis; Robles, Montserrat

    2014-01-01

    Automatic and accurate methods to estimate normalized regional brain volumes from MRI data are valuable tools which may help to obtain an objective diagnosis and follow-up of many neurological diseases. To estimate such regional brain volumes, the intracranial cavity volume (ICV) is often used for normalization. However, the high variability of brain shape and size due to normal intersubject variability, normal changes occurring over the lifespan, and abnormal changes due to disease makes the ICV estimation problem challenging. In this paper, we present a new approach to perform ICV extraction based on the use of a library of prelabeled brain images to capture the large variability of brain shapes. To this end, an improved nonlocal label fusion scheme based on the BEaST technique is proposed to increase the accuracy of the ICV estimation. The proposed method is compared with recent state-of-the-art methods and the results demonstrate an improved performance both in terms of accuracy and reproducibility while maintaining a reduced computational burden. PMID:25328511

  16. An ANOVA approach for statistical comparisons of brain networks.

    PubMed

    Fraiman, Daniel; Fraiman, Ricardo

    2018-03-16

    The study of brain networks has developed extensively over the last couple of decades. By contrast, techniques for the statistical analysis of these networks are less developed. In this paper, we focus on the statistical comparison of brain networks in a nonparametric framework and discuss the associated detection and identification problems. We tested network differences between groups with an analysis of variance (ANOVA) test we developed specifically for networks. We also propose and analyse the behaviour of a new statistical procedure designed to identify different subnetworks. As an example, we show the application of this tool in resting-state fMRI data obtained from the Human Connectome Project. We identify, among other variables, that the amount of sleep the days before the scan is a relevant variable that must be controlled. Finally, we discuss the potential bias in neuroimaging findings that is generated by some behavioural and brain structure variables. Our method can also be applied to other kind of networks such as protein interaction networks, gene networks or social networks.

  17. Evaluating data-driven causal inference techniques in noisy physical and ecological systems

    NASA Astrophysics Data System (ADS)

    Tennant, C.; Larsen, L.

    2016-12-01

    Causal inference from observational time series challenges traditional approaches for understanding processes and offers exciting opportunities to gain new understanding of complex systems where nonlinearity, delayed forcing, and emergent behavior are common. We present a formal evaluation of the performance of convergent cross-mapping (CCM) and transfer entropy (TE) for data-driven causal inference under real-world conditions. CCM is based on nonlinear state-space reconstruction, and causality is determined by the convergence of prediction skill with an increasing number of observations of the system. TE is the uncertainty reduction based on transition probabilities of a pair of time-lagged variables. With TE, causal inference is based on asymmetry in information flow between the variables. Observational data and numerical simulations from a number of classical physical and ecological systems: atmospheric convection (the Lorenz system), species competition (patch-tournaments), and long-term climate change (Vostok ice core) were used to evaluate the ability of CCM and TE to infer causal-relationships as data series become increasingly corrupted by observational (instrument-driven) or process (model-or -stochastic-driven) noise. While both techniques show promise for causal inference, TE appears to be applicable to a wider range of systems, especially when the data series are of sufficient length to reliably estimate transition probabilities of system components. Both techniques also show a clear effect of observational noise on causal inference. For example, CCM exhibits a negative logarithmic decline in prediction skill as the noise level of the system increases. Changes in TE strongly depend on noise type and which variable the noise was added to. The ability of CCM and TE to detect driving influences suggest that their application to physical and ecological systems could be transformative for understanding driving mechanisms as Earth systems undergo change.
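
    Of the two techniques, TE is the simpler to sketch: with one-step lags it is the entropy reduction H(Y_t+1 | Y_t) - H(Y_t+1 | Y_t, X_t), estimable by binning the series. The plug-in estimator below is a bare-bones illustration (bin count, lag choice, and estimator are all assumptions that matter in practice):

      import numpy as np

      def transfer_entropy(x, y, bins=8):
          # Plug-in estimate of TE from x to y, in bits, with one-step lags
          xb = np.digitize(x, np.histogram_bin_edges(x, bins)[1:-1])
          yb = np.digitize(y, np.histogram_bin_edges(y, bins)[1:-1])
          y1, y0, x0 = yb[1:], yb[:-1], xb[:-1]

          def H(*seqs):
              # Joint Shannon entropy of discrete sequences
              _, c = np.unique(np.stack(seqs, 1), axis=0, return_counts=True)
              p = c / c.sum()
              return -(p * np.log2(p)).sum()

          # H(Y+|Y) - H(Y+|Y,X) expanded into joint entropies
          return H(y1, y0) - H(y0) - H(y1, y0, x0) + H(y0, x0)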

  18. Feedback linearization of singularly perturbed systems based on canonical similarity transformations

    NASA Astrophysics Data System (ADS)

    Kabanov, A. A.

    2018-05-01

    This paper discusses the problem of feedback linearization of a singularly perturbed system in state-dependent coefficient form. The result is based on the introduction of a canonical similarity transformation. The transformation matrix is constructed from separate blocks for the fast and slow parts of the original singularly perturbed system. The transformed singularly perturbed system has a linear canonical form that significantly simplifies the control design problem. The proposed similarity transformation accomplishes linearization of the system without considering a virtual output (as is needed for the normal form method), and the transition from the phase coordinates of the transformed system to the state variables of the original system is simpler. The application of the proposed approach is illustrated through an example.

  19. Finite state modeling of aeroelastic systems

    NASA Technical Reports Server (NTRS)

    Vepa, R.

    1977-01-01

    A general theory of finite state modeling of aerodynamic loads on thin airfoils and lifting surfaces performing completely arbitrary, small, time-dependent motions in an airstream is developed and presented. The nature of the behavior of the unsteady airloads in the frequency domain is explained, using as raw materials any of the unsteady linearized theories that have been mechanized for simple harmonic oscillations. Each desired aerodynamic transfer function is approximated by means of an appropriate Pade approximant, that is, a rational function of finite-degree polynomials in the Laplace transform variable. The modeling technique is applied to several two-dimensional and three-dimensional airfoils. Circular, elliptic, rectangular and tapered planforms are considered as examples. Indicial functions are also obtained for control surfaces for two- and three-dimensional airfoils.
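
    The Pade step can be posed as a linear least-squares problem: sample the aerodynamic transfer function H(s) along the imaginary axis and solve b(s) - H(s) a(s) ~ 0 for polynomial coefficients with a_0 = 1. The routine below is a generic rational fit, not Vepa's specific approximants:

      import numpy as np

      def rational_fit(s, H, n_num=2, n_den=2):
          # Fit H(s) ~ (b_0 + b_1 s + ...) / (1 + a_1 s + ...) by linear
          # least squares on the residual b(s) - H(s) a(s)
          cols = ([s**m for m in range(n_num + 1)]
                  + [-H * s**n for n in range(1, n_den + 1)])
          A = np.stack(cols, axis=1)
          M = np.vstack([A.real, A.imag])          # split complex equations
          rhs = np.concatenate([H.real, H.imag])
          c, *_ = np.linalg.lstsq(M, rhs, rcond=None)
          return c[:n_num + 1], np.concatenate([[1.0], c[n_num + 1:]])

      # e.g. a finite-state (rational) approximation of a pure time lag:
      w = np.linspace(0.1, 5.0, 60)
      b, a = rational_fit(1j * w, np.exp(-0.2j * w))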

  20. A Pilot Study of Heart Rate Variability Biofeedback Therapy in the Treatment of Perinatal Depression on a Specialized Perinatal Psychiatry Inpatient Unit

    PubMed Central

    Beckham, Jenna; Greene, Tammy B.; Meltzer-Brody, Samantha

    2012-01-01

    Purpose Heart rate variability biofeedback (HRVB) therapy may be useful in treating the prominent anxiety features of perinatal depression. We investigated the use of this non-pharmacologic therapy among women hospitalized with severe perinatal depression. Methods Three questionnaires, the State Trait Anxiety Inventory (STAI), Warwick Edinburgh Mental Well-Being Scale (WEMWBS), and Linear Analog Self Assessment (LASA), were administered to fifteen women in a specialized inpatient perinatal psychiatry unit. Participants were also contacted by telephone after discharge to assess continued use of HRVB techniques. Results The use of HRVB was associated with an improvement in all three scales. The greatest improvement (−13.867, p<0.001 and −11.533, p<0.001) was among STAI scores. A majority (81.9%, n=9) of women surveyed by telephone also reported continued frequent use at least once per week, and over half (54.6%, n=6) described the use of HRVB techniques as very or extremely beneficial. Conclusions The use of HRVB was associated with statistically significant improvement on all instrument scores, the greatest of which was STAI scores, and most women reported frequent continued use of HRVB techniques after discharge. These results suggest that HRVB may be particularly beneficial in the treatment of the prominent anxiety features of perinatal depression, both in inpatient and outpatient settings. PMID:23179141

  1. An Experiment in Scientific Program Understanding

    NASA Technical Reports Server (NTRS)

    Stewart, Mark E. M.; Owen, Karl (Technical Monitor)

    2000-01-01

    This paper concerns a procedure that analyzes aspects of the meaning or semantics of scientific and engineering code. This procedure involves taking a user's existing code, adding semantic declarations for some primitive variables, and parsing this annotated code using multiple, independent expert parsers. These semantic parsers encode domain knowledge and recognize formulae in different disciplines including physics, numerical methods, mathematics, and geometry. The parsers will automatically recognize and document some static, semantic concepts and help locate some program semantic errors. Results are shown for three intensively studied codes and seven blind test cases; all test cases are state of the art scientific codes. These techniques may apply to a wider range of scientific codes. If so, the techniques could reduce the time, risk, and effort required to develop and modify scientific codes.

  2. Improved ultrasonic standard reference blocks

    NASA Technical Reports Server (NTRS)

    Eitzen, D. G.; Sushinsky, G. F.; Chwirut, D. J.; Bechtoldt, C. J.; Ruff, A. W.

    1976-01-01

    A program to improve the quality, reproducibility and reliability of nondestructive testing through the development of improved ASTM-type ultrasonic reference standards is described. Reference blocks of aluminum, steel, and titanium alloys are to be considered. Equipment representing the state-of-the-art in laboratory and field ultrasonic equipment was obtained and evaluated. RF and spectral data on ten sets of ultrasonic reference blocks have been taken as part of a task to quantify the variability in response from nominally identical blocks. Techniques for residual stress, preferred orientation, and micro-structural measurements were refined and are applied to a reference block rejected by the manufacturer during fabrication in order to evaluate the effect of metallurgical condition on block response. New fabrication techniques for reference blocks are discussed and ASTM activities are summarized.

  3. Noninvasive evaluation system of fractured bone based on speckle interferometry

    NASA Astrophysics Data System (ADS)

    Yamanada, Shinya; Murata, Shigeru; Tanaka, Yohsuke

    2010-11-01

    This paper presents a noninvasive evaluation system for fractured bone based on speckle interferometry, using a modified evaluation index for higher performance; experiments are carried out to examine its feasibility in evaluating bone fracture healing and the influence of some system parameters on performance. The experimental results show that the presence of a fractured part of bone and the state of bone fracture healing are successfully estimated by observing fine speckle fringes on the object surface. The proposed evaluation index can also successfully express the difference between specimens with and without a cut. Since most system parameters are found not to affect the performance of the present technique, it is expected to be applicable to a variety of patients despite considerable individual variability.

  4. Desires and management preferences of stakeholders regarding feral cats in the Hawaiian islands.

    PubMed

    Lohr, Cheryl A; Lepczyk, Christopher A

    2014-04-01

    Feral cats are abundant in many parts of the world and a source of conservation conflict. Our goal was to clarify the beliefs and desires held by stakeholders regarding feral cat abundance and management. We measured people's desired abundance of feral cats in the Hawaiian Islands and identified an order of preference for 7 feral cat management techniques. In 2011 we disseminated a survey to 5407 Hawaii residents. Approximately 46% of preidentified stakeholders and 20% of random residents responded to the survey (1510 surveys returned). Results from the potential for conflict index revealed a high level of consensus (86.9% of respondents) that feral cat abundance should be decreased. The 3 most common explanatory variables for respondents' stated desires were enjoyment from seeing feral cats (84%), intrinsic value of feral cats (12%), and threat to native fauna (73%). The frequency with which respondents saw cats and the change in the perceived abundance of cats also affected respondents' desired abundance of cats; 41.3% of respondents stated that they saw feral cats daily and 44.7% stated that the cat population had increased in recent years. Other potential environmental impacts of feral cats had little effect on desired abundance. The majority of respondents (78%) supported removing feral cats from the natural environment permanently. Consensus convergence models with data from 1388 respondents who completed the relevant questions showed that live capture and lethal injection was the most preferred technique and trap-neuter-release was the least preferred technique for managing feral cats. However, the acceptability of each technique varied among stakeholders. Our results suggest that the majority of Hawaii's residents would like to see effective management that reduces the abundance of feral or free-roaming cats.

  5. Downscaling GCM Output with Genetic Programming Model

    NASA Astrophysics Data System (ADS)

    Shi, X.; Dibike, Y. B.; Coulibaly, P.

    2004-05-01

    Climate change impact studies on watershed hydrology require reliable data at appropriate spatial and temporal resolution. However, the outputs of the current global climate models (GCMs) cannot be used directly because GCMs do not provide hourly or daily precipitation and temperature reliable enough for hydrological modeling. Nevertheless, we can obtain more reliable data corresponding to future climate scenarios derived from GCM outputs using so-called downscaling techniques. This study applies a Genetic Programming (GP) based technique to downscale daily precipitation and temperature values at the Chute-du-Diable basin of the Saguenay watershed in Canada. In applying the GP downscaling technique, the objective is to find a relationship between the large-scale predictor variables (NCEP data, which provide daily information concerning the observed large-scale state of the atmosphere) and the predictand (meteorological data describing conditions at the site scale). The selection of the most relevant predictor variables is achieved using the Pearson coefficient of determination (R2) between the large-scale predictor variables and the daily meteorological data. In this case, the period 1961-2000 is identified to represent the current climate condition. Of the forty years of data, the first 30 years (1961-1990) are used to calibrate the models while the remaining ten years (1991-2000) are used to validate them. In general, the R2 between the predictor variables and each predictand is very low for precipitation compared with that for maximum and minimum temperature. Moreover, the strength of individual predictors varies for every month and for each GP grammar. Therefore, the most appropriate combination of predictors has to be chosen by examining the output analysis of all twelve months and the different GP grammars. During the calibration of the GP model for precipitation downscaling, in addition to the mean daily precipitation and daily precipitation variability for each month, monthly average dry- and wet-spell lengths are also considered as performance criteria. For Tmax and Tmin, the means and variances of these variables corresponding to each month were considered as performance criteria. The GP downscaling results show satisfactory agreement between the observed daily temperature (Tmax and Tmin) and the simulated temperature. However, the downscaling results for daily precipitation still require some improvement, suggesting further investigation of other grammars. KEY WORDS: Climate change; GP downscaling; GCM.

  6. Climate Change Impact Assessment in Pacific North West Using Copula based Coupling of Temperature and Precipitation variables

    NASA Astrophysics Data System (ADS)

    Qin, Y.; Rana, A.; Moradkhani, H.

    2014-12-01

    The multi downscaled-scenario products allow us to better assess the uncertainty of the changes/variations of precipitation and temperature in the current and future periods. Joint probability distribution functions (PDFs) of the two climatic variables might help us better understand their interdependence, and thus in turn help in assessing the future with confidence. Using the joint distribution of temperature and precipitation is also of significant importance in hydrological applications and climate change studies. In the present study, we have used a multi-model statistically downscaled-scenario ensemble of precipitation and temperature variables from 2 different statistically downscaled climate datasets. The datasets used are downscaled products of 10 Global Climate Models (GCMs) from the CMIP5 daily dataset, namely those from the Bias Correction and Spatial Downscaling (BCSD) technique generated at Portland State University and those from the Multivariate Adaptive Constructed Analogs (MACA) technique generated at the University of Idaho, leading to 2 ensemble time series from 20 GCM products. Thereafter, the ensemble PDFs of both precipitation and temperature are evaluated for summer, winter, and yearly periods for all 10 sub-basins across the Columbia River Basin (CRB). Eventually, a copula is applied to establish the joint distribution of the two variables, enabling users to model their joint behavior with any level of correlation and dependency. Moreover, the probabilistic distribution helps remove the limitations on the marginal distributions of the variables in question. The joint distribution is then used to estimate the change trends of joint precipitation and temperature in the current and future periods, along with the probabilities of a given change. Results indicate varied change trends of the joint distribution at summer, winter, and yearly time scales in all 10 sub-basins. Probabilities of changes, as estimated from the joint precipitation and temperature, will provide useful information and insights for hydrological and climate change predictions.
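
    A minimal Gaussian-copula version of the coupling step: map each marginal to normal scores by ranks, estimate the cross-correlation, and use the fitted copula for joint probabilities (e.g., of simultaneously warm and wet conditions). The copula family and the exceedance example are our assumptions, not necessarily the study's choices:

      import numpy as np
      from scipy import stats

      def fit_gaussian_copula(t, p):
          # Empirical CDF -> normal scores -> dependence parameter rho
          u = stats.rankdata(t) / (len(t) + 1)
          v = stats.rankdata(p) / (len(p) + 1)
          z = np.column_stack([stats.norm.ppf(u), stats.norm.ppf(v)])
          return np.corrcoef(z.T)[0, 1]

      def joint_exceedance(rho, q=0.9, n=100_000, seed=0):
          # P(both variables above their q-quantile) under the fitted copula
          rng = np.random.default_rng(seed)
          z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
          u = stats.norm.cdf(z)
          return np.mean((u[:, 0] > q) & (u[:, 1] > q))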

  7. Preserving subject variability in group fMRI analysis: performance evaluation of GICA vs. IVA

    PubMed Central

    Michael, Andrew M.; Anderson, Mathew; Miller, Robyn L.; Adalı, Tülay; Calhoun, Vince D.

    2014-01-01

    Independent component analysis (ICA) is a widely applied technique to derive functionally connected brain networks from fMRI data. Group ICA (GICA) and Independent Vector Analysis (IVA) are extensions of ICA that enable users to perform group fMRI analyses; however a full comparison of the performance limits of GICA and IVA has not been investigated. Recent interest in resting state fMRI data with potentially higher degree of subject variability makes the evaluation of the above techniques important. In this paper we compare component estimation accuracies of GICA and an improved version of IVA using simulated fMRI datasets. We systematically change the degree of inter-subject spatial variability of components and evaluate estimation accuracy over all spatial maps (SMs) and time courses (TCs) of the decomposition. Our results indicate the following: (1) at low levels of SM variability or when just one SM is varied, both GICA and IVA perform well, (2) at higher levels of SM variability or when more than one SMs are varied, IVA continues to perform well but GICA yields SM estimates that are composites of other SMs with errors in TCs, (3) both GICA and IVA remove spatial correlations of overlapping SMs and introduce artificial correlations in their TCs, (4) if number of SMs is over estimated, IVA continues to perform well but GICA introduces artifacts in the varying and extra SMs with artificial correlations in the TCs of extra components, and (5) in the absence or presence of SMs unique to one subject, GICA produces errors in TCs and IVA estimates are accurate. In summary, our simulation experiments (both simplistic and realistic) and our holistic analyses approach indicate that IVA produces results that are closer to ground truth and thereby better preserves subject variability. The improved version of IVA is now packaged into the GIFT toolbox (http://mialab.mrn.org/software/gift). PMID:25018704

  8. Comparison of tunnel variability between trans-portal and outside-in techniques in ACL reconstruction.

    PubMed

    Sim, Jae-Ang; Kim, Jong-Min; Lee, Sahnghoon; Bae, Ji-Yong; Seon, Jong-Keun

    2017-04-01

    Although trans-portal and outside-in techniques are commonly used for anatomical ACL reconstruction, there is very little information on variability in tunnel placement between the two techniques. A total of 103 patients who received ACL reconstruction using trans-portal (50 patients) and outside-in (53 patients) techniques were included in the study. The ACL tunnel location, tunnel length and graft-femoral tunnel angle were analyzed using 3D CT knee models, and we compared the location and length of the femoral and tibial tunnels and the graft bending angle between the two techniques. The variability of each technique in tunnel location, length and graft tunnel angle, expressed as range values, was also compared. There were no differences in average femoral tunnel depth and height between the two groups. The ranges of femoral tunnel depth and height showed no difference between the two groups (36 and 41% in the trans-portal technique vs. 32 and 41% in the outside-in technique). The average values and ranges of tibial tunnel location also showed similar results in the two groups. The outside-in technique produced a longer femoral tunnel than the trans-portal technique (34.0 vs. 36.8 mm, p = 0.001). The range of femoral tunnel length was also wider in the trans-portal technique than in the outside-in technique. Although the outside-in technique showed a significantly more acute graft bending angle than the trans-portal technique in average values, the trans-portal technique showed wider ranges in graft bending angle than the outside-in technique [ranges 73° (SD 13.6) vs. 53° (SD 10.7), respectively]. Although both trans-portal and outside-in techniques in ACL reconstruction can provide relatively consistent femoral and tibial tunnel locations, the trans-portal technique showed higher variability in femoral tunnel length and graft bending angle than the outside-in technique. Therefore, the outside-in technique in ACL reconstruction is considered an effective method for surgeons to create a more consistent femoral tunnel. Level of evidence: III.

  9. Dynamics and Control of Three-Dimensional Perching Maneuver under Dynamic Stall Influence

    NASA Astrophysics Data System (ADS)

    Feroskhan, Mir Alikhan Bin Mohammad

    Perching is a type of aggressive maneuver performed by species of the class Aves to attain precision point landing with a generally short landing distance. Perching capability is desirable on unmanned aerial vehicles (UAVs) due to its efficient deceleration process that potentially expands the functionality and flight envelope of the aircraft. This dissertation extends the previous works on perching, which are mostly limited to two-dimensional (2D) cases, to the state-of-the-art three-dimensional (3D) variety. This dissertation presents the aerodynamic modeling and optimization framework adopted to generate unprecedented variants of the 3D perching maneuver that include the sideslip perching trajectory, which ameliorates the existing 2D perching concept by eliminating the undesirable undershoot and reliance on gravity. The sideslip perching technique methodically utilizes the lateral and longitudinal drag mechanisms through consecutive phases of yawing and pitching-up motion. Since the perching maneuver involves high rates of change in the angles of attack and large turn rates, the introduction of three internal variables becomes necessary to address the influence of dynamic stall delay on the UAV's transient post-stall behavior. These variables are then integrated into a static nonlinear aerodynamic model, developed using empirical and analytical methods, and into an optimization framework that generates a trajectory of the sideslip perching maneuver, achieving over 70% velocity reduction. An impact study of the dynamic stall influence on the optimal perching trajectories suggests that consideration of dynamic stall delay is essential due to the significant discrepancies in the corresponding control inputs required. A comparative study between 2D and 3D perching is also conducted to examine the different drag mechanisms employed by 2D and 3D perching respectively. 3D perching is presented as a more efficient deceleration technique with respect to spatial costs and initial altitude range. Contraction analysis is shown to be a useful technique in identifying the state variables that are required to be tracked for attaining stability of optimal perching trajectories. Based on the selected tracking variables, two sliding control strategies are proposed and comparatively examined to close the control loop and provide the required robustness and convergence to the optimal perching trajectory in the presence of perturbations and dynamic stall model inaccuracies. This dissertation concludes that the sliding controller with the adaptive gain feature is more effective and essential in providing better tracking performance through illustrations of the corresponding convergence area and at higher intensity of perturbations.

  10. The Effect of State Regulatory Stringency on Nursing Home Quality

    PubMed Central

    Mukamel, Dana B; Weimer, David L; Harrington, Charlene; Spector, William D; Ladd, Heather; Li, Yue

    2012-01-01

    Objective To test the hypothesis that more stringent quality regulations contribute to better quality nursing home care and to assess their cost-effectiveness. Data Sources/Setting Primary and secondary data from all states and U.S. nursing homes between 2005 and 2006. Study Design We estimated seven models, regressing quality measures on the Harrington Regulation Stringency Index and control variables. To account for endogeneity between regulation and quality, we used instrumental variables techniques. Quality was measured by staffing hours by type per case-mix adjusted day, hotel expenditures, and risk-adjusted decline in activities of daily living, high-risk pressure sores, and urinary incontinence. Data Collection All states' licensing and certification offices were surveyed to obtain data about deficiencies. Secondary data included the Minimum Data Set, Medicare Cost Reports, and the Economic Freedom Index. Principal Findings Regulatory stringency was significantly associated with better quality for four of the seven measures studied. The cost-effectiveness for the activities-of-daily-living measure was estimated at about $72,000 (in 2011 dollars) per Quality Adjusted Life Year. Conclusions Quality regulations lead to better quality in nursing homes along some dimensions, but not all. Our estimates of cost-effectiveness suggest that increased regulatory stringency is in the ballpark of other acceptable cost-effective practices. PMID:22946859
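
    Instrumental-variables estimation of this kind is usually implemented as two-stage least squares; a bare-bones sketch (it assumes a constant column is included in X and Z, and does not reproduce the paper's instruments or controls):

      import numpy as np

      def two_stage_least_squares(y, X, Z):
          # Stage 1: project the endogenous regressors onto the instruments
          Xhat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]
          # Stage 2: regress the outcome on the fitted regressors
          beta, *_ = np.linalg.lstsq(Xhat, y, rcond=None)
          return beta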

  11. Reconfigurable quadruple quantum dots in a silicon nanowire transistor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Betz, A. C., E-mail: ab2106@cam.ac.uk; Broström, M.; Gonzalez-Zalba, M. F.

    2016-05-16

    We present a reconfigurable metal-oxide-semiconductor multi-gate transistor that can host a quadruple quantum dot in silicon. The device consists of an industrial quadruple-gate silicon nanowire field-effect transistor. Exploiting the corner effect, we study the versatility of the structure in the single quantum dot and the serial double quantum dot regimes and extract the relevant capacitance parameters. We address the fabrication variability of the quadruple-gate approach which, paired with improved silicon fabrication techniques, makes the corner state quantum dot approach a promising candidate for a scalable quantum information architecture.

  12. NECAP: NASA's Energy-Cost Analysis Program. Part 1: User's manual

    NASA Technical Reports Server (NTRS)

    Henninger, R. H. (Editor)

    1975-01-01

    The NECAP is a sophisticated building design and energy analysis tool which has embodied within it all of the latest ASHRAE state-of-the-art techniques for performing thermal load calculation and energy usage predictions. It is a set of six individual computer programs which include: response factor program, data verification program, thermal load analysis program, variable temperature program, system and equipment simulation program, and owning and operating cost program. Each segment of NECAP is described, and instructions are set forth for preparing the required input data and for interpreting the resulting reports.

  13. Running Technique is an Important Component of Running Economy and Performance

    PubMed Central

    FOLLAND, JONATHAN P.; ALLEN, SAM J.; BLACK, MATTHEW I.; HANDSAKER, JOSEPH C.; FORRESTER, STEPHANIE E.

    2017-01-01

    Despite an intuitive relationship between technique and both running economy (RE) and performance, and the diverse techniques used by runners to achieve forward locomotion, the objective importance of overall technique and the key components therein remain to be elucidated. Purpose: This study aimed to determine the relationship between individual and combined kinematic measures of technique with both RE and performance. Methods: Ninety-seven endurance runners (47 females) of diverse competitive standards performed a discontinuous protocol of incremental treadmill running (4-min stages, 1-km·h⁻¹ increments). Measurements included three-dimensional full-body kinematics, respiratory gases to determine energy cost, and velocity of lactate turn point. Five categories of kinematic measures (vertical oscillation, braking, posture, stride parameters, and lower limb angles) and locomotory energy cost (LEc) were averaged across 10–12 km·h⁻¹ (the highest common velocity < velocity of lactate turn point). Performance was measured as season's best (SB) time converted to a sex-specific z-score. Results: Numerous kinematic variables were correlated with RE and performance (LEc, 19 variables; SB time, 11 variables). Regression analysis found three variables (pelvis vertical oscillation during ground contact normalized to height, minimum knee joint angle during ground contact, and minimum horizontal pelvis velocity) explained 39% of LEc variability. In addition, four variables (minimum horizontal pelvis velocity, shank touchdown angle, duty factor, and trunk forward lean) combined to explain 31% of the variability in performance (SB time). Conclusions: This study provides novel and robust evidence that technique explains a substantial proportion of the variance in RE and performance. We recommend that runners and coaches are attentive to specific aspects of stride parameters and lower limb angles in part to optimize pelvis movement, and ultimately enhance performance. PMID:28263283

  14. Effectiveness of the Touch Math Technique in Teaching Basic Addition to Children with Autism

    ERIC Educational Resources Information Center

    Yikmis, Ahmet

    2016-01-01

    This study aims to reveal whether the touch math technique is effective in teaching basic addition to children with autism. The dependent variable of this study is the children's skills to solve addition problems correctly, whereas teaching with the touch math technique is the independent variable. Among the single-subject research models, a…

  15. Benefits of a holistic breathing technique in patients on hemodialysis.

    PubMed

    Stanley, Ruth; Leither, Thomas W; Sindelir, Cathy

    2011-01-01

    Health-related quality of life and heart rate variability are often depressed in patients on hemodialysis. This pilot program used a simple holistic, self-directed breathing technique designed to improve heart rate variability, with the hypothesis that improving heart rate variability would subsequently enhance health-related quality of life. Patient self-reported benefits included reductions in anxiety, fatigue, insomnia, and pain. Using holistic physiologic techniques may offer a unique and alternative tool for nurses to help increase health-related quality of life in patients on hemodialysis.

  16. Landscape epidemiology and machine learning: A geospatial approach to modeling West Nile virus risk in the United States

    NASA Astrophysics Data System (ADS)

    Young, Sean Gregory

    The complex interactions between human health and the physical landscape and environment have been recognized, if not fully understood, since the ancient Greeks. Landscape epidemiology, sometimes called spatial epidemiology, is a sub-discipline of medical geography that uses environmental conditions as explanatory variables in the study of disease or other health phenomena. This theory suggests that pathogenic organisms (whether germs or larger vector and host species) are subject to environmental conditions that can be observed on the landscape, and by identifying where such organisms are likely to exist, areas at greatest risk of the disease can be derived. Machine learning is a sub-discipline of artificial intelligence that can be used to create predictive models from large and complex datasets. West Nile virus (WNV) is a relatively new infectious disease in the United States, and has a fairly well-understood transmission cycle that is believed to be highly dependent on environmental conditions. This study takes a geospatial approach to the study of WNV risk, using both landscape epidemiology and machine learning techniques. A combination of remotely sensed and in situ variables are used to predict WNV incidence with a correlation coefficient as high as 0.86. A novel method of mitigating the small numbers problem is also tested and ultimately discarded. Finally a consistent spatial pattern of model errors is identified, indicating the chosen variables are capable of predicting WNV disease risk across most of the United States, but are inadequate in the northern Great Plains region of the US.
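    As a rough illustration of the modeling approach described above, the following sketch fits a random-forest regressor to synthetic environmental predictors and reports the predicted-versus-observed correlation; the predictor names and data are hypothetical stand-ins, not the study's dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 500

# Hypothetical environmental predictors per county: NDVI, land surface
# temperature, precipitation, and elevation (stand-ins for the remotely
# sensed and in situ variables described above).
X = rng.uniform(size=(n, 4))
# Synthetic WNV incidence with nonlinear dependence on the predictors.
y = 3 * X[:, 0] * X[:, 1] + np.sin(4 * X[:, 2]) + 0.1 * rng.normal(size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

pred = model.predict(X_te)
r = np.corrcoef(y_te, pred)[0, 1]   # correlation coefficient, as reported above
print(f"predicted-vs-observed correlation: {r:.2f}")
```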

  17. MEASURING LENSING MAGNIFICATION OF QUASARS BY LARGE SCALE STRUCTURE USING THE VARIABILITY-LUMINOSITY RELATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bauer, Anne H.; Seitz, Stella; Jerke, Jonathan

    2011-05-10

    We introduce a technique to measure gravitational lensing magnification using the variability of type I quasars. Quasars' variability amplitudes and luminosities are tightly correlated, on average. Magnification due to gravitational lensing increases the quasars' apparent luminosity, while leaving the variability amplitude unchanged. Therefore, the mean magnification of an ensemble of quasars can be measured through the mean shift in the variability-luminosity relation. As a proof of principle, we use this technique to measure the magnification of quasars spectroscopically identified in the Sloan Digital Sky Survey (SDSS), due to gravitational lensing by galaxy clusters in the SDSS MaxBCG catalog. The Palomar-QUEST Variability Survey, reduced using the DeepSky pipeline, provides variability data for the sources. We measure the average quasar magnification as a function of scaled distance (r/R_200) from the nearest cluster; our measurements are consistent with expectations assuming Navarro-Frenk-White cluster profiles, particularly after accounting for the known uncertainty in the clusters' centers. Variability-based lensing measurements are a valuable complement to shape-based techniques because their systematic errors are very different, and also because the variability measurements are amenable to photometric errors of a few percent and to depths seen in current wide-field surveys. Given the volume of data expected from current and upcoming surveys, this new technique has the potential to be competitive with weak lensing shear measurements of large-scale structure.
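    The sketch below illustrates the core idea on synthetic data: fit the variability-luminosity relation in an unlensed reference sample, then read the mean magnification of a lensed sample off its mean magnitude offset at fixed variability amplitude. The relation and all numbers are toy values, not the SDSS fit.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy variability-luminosity relation: variability amplitude (mag) rises
# linearly with absolute magnitude M (fainter quasars vary more).
def variability(M):
    return 0.1 * (M + 25) + 0.02

M_field = rng.uniform(-27, -23, 5000)            # unlensed reference quasars
A_field = variability(M_field) + 0.01 * rng.normal(size=5000)

mu = 1.05                                        # true mean magnification
dm = -2.5 * np.log10(mu)                         # brightening in magnitudes
M_lensed = rng.uniform(-27, -23, 500) + dm       # lensed sample appears brighter
A_lensed = variability(M_lensed - dm) + 0.01 * rng.normal(size=500)  # A unchanged

# Fit the field relation, then measure the lensed sample's mean magnitude
# offset at fixed variability amplitude.
slope, intercept = np.polyfit(A_field, M_field, 1)
offset = np.mean(M_lensed - (slope * A_lensed + intercept))
print(f"inferred magnification: {10 ** (-0.4 * offset):.3f} (true {mu})")
```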

  18. Climate-informed stochastic hydrological modeling: Incorporating decadal-scale variability using paleoclimate data

    NASA Astrophysics Data System (ADS)

    Henley, B. J.; Thyer, M. A.; Kuczera, G. A.

    2012-12-01

    A hierarchical framework for incorporating modes of climate variability into stochastic simulations of hydrological data is developed, termed the climate-informed multi-time scale stochastic (CIMSS) framework. To characterize long-term variability for the first level of the hierarchy, paleoclimate and instrumental data describing the Interdecadal Pacific Oscillation (IPO) and the Pacific Decadal Oscillation (PDO) are analyzed. A new paleo IPO-PDO time series dating back 440 yrs is produced, combining seven IPO-PDO paleo sources using an objective smoothing procedure to fit low-pass filters to individual records. The paleo data analysis indicates that wet/dry IPO-PDO states have a broad range of run-lengths, with 90% between 3 and 33 yr and a mean of 15 yr. Model selection techniques were used to determine a suitable stochastic model to simulate these run-lengths. The Markov chain model, previously used to simulate oscillating wet/dry climate states, was found to underestimate the probability of wet/dry periods >5 yr, and was rejected in favor of a gamma distribution. For the second level of the hierarchy, a seasonal rainfall model is conditioned on the simulated IPO-PDO state. Application to two high-quality rainfall sites close to water supply reservoirs found that mean seasonal rainfall in the IPO-PDO dry state was 15%-28% lower than the wet state. The model was able to replicate observed statistics such as seasonal and multi-year accumulated rainfall distributions and interannual autocorrelations for the case study sites. In comparison, an annual lag-one autoregressive AR(1) model was unable to adequately capture the observed rainfall distribution within separate IPO-PDO states. Furthermore, analysis of the impact of the CIMSS framework on drought risk analysis found that short-term drought risks conditional on IPO-PDO state were considerably higher than those from the traditional AR(1) model. [Figure: Short-term conditional water supply drought risks for the CIMSS and AR(1) models for the dry IPO-PDO scenario, with a range of initial storage levels expressed as a proportion of the annual demand (yield).]
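    A minimal sketch of the first level of such a hierarchy appears below: wet/dry states with gamma-distributed run lengths (mean 15 yr, as reported above) and seasonal rainfall whose mean is conditioned on the simulated state (dry roughly 20% lower, within the reported range). The distribution parameters are illustrative, not the fitted CIMSS values.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_states(n_years, mean_run=15.0, shape=1.5):
    """Alternate wet/dry climate states with gamma-distributed run lengths
    (shape and scale here are illustrative, not the fitted parameters)."""
    scale = mean_run / shape
    states, wet = [], rng.random() < 0.5
    while len(states) < n_years:
        run = max(1, int(round(rng.gamma(shape, scale))))
        states.extend([wet] * run)
        wet = not wet
    return np.array(states[:n_years])

def simulate_rainfall(states, wet_mean=300.0, dry_factor=0.8, cv=0.3):
    """Seasonal rainfall conditioned on the simulated state: the dry-state
    mean is 20% below the wet-state mean."""
    means = np.where(states, wet_mean, wet_mean * dry_factor)
    return rng.gamma(1 / cv**2, means * cv**2)   # gamma with given mean and CV

states = simulate_states(10_000)
rain = simulate_rainfall(states)
runs = np.diff(np.flatnonzero(np.diff(states.astype(int)) != 0))
print(f"mean run length: {runs.mean():.1f} yr")
print(f"wet/dry rainfall means: {rain[states].mean():.0f} / {rain[~states].mean():.0f}")
```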

  19. Unenhanced respiratory-gated magnetic resonance angiography (MRA) of renal artery in hypertensive patients using true fast imaging with steady-state precession technique compared with contrast-enhanced MRA.

    PubMed

    Zhang, Weisheng; Lin, Jiang; Wang, Shaowu; Lv, Peng; Wang, Lili; Liu, Hao; Chen, Caizhong; Zeng, Mengsu

    2014-01-01

    This study aimed to evaluate the accuracy of "True Fast Imaging with Steady-State Precession" (TrueFISP) MR angiography (MRA) for the diagnosis of renal arterial stenosis (RAS) in hypertensive patients. Twenty-two patients underwent both TrueFISP MRA and contrast-enhanced MRA (CE-MRA) on a 1.5-T MR imager. Volume of main renal arteries, length of maximal visible renal arteries, number of visualized branches, stenotic grade, and subjective quality were compared. Paired 2-tailed Student t tests and Wilcoxon signed rank tests were applied to evaluate the significance of these variables. Volume of main renal arteries, length of maximal visible renal arteries, and number of branches showed no significant difference between the 2 techniques (P > 0.05). The stenotic degree of 10 RAS was graded higher on CE-MRA than on TrueFISP MRA. Qualitative scores for TrueFISP MRA were higher than those for CE-MRA (P < 0.05). TrueFISP MRA is a reliable and accurate method for evaluating RAS.

  20. Auditory steady-state evoked potentials vs. compound action potentials for the measurement of suppression tuning curves in the sedated dog puppy.

    PubMed

    Markessis, Emily; Poncelet, Luc; Colin, Cécile; Hoonhorst, Ingrid; Collet, Grégory; Deltenre, Paul; Moore, Brian C J

    2010-06-01

    Auditory steady-state evoked potential (ASSEP) tuning curves were compared to compound action potential (CAP) tuning curves, both measured at 2 Hz, using sedated beagle puppies. The effect of two types of masker (narrowband noise and sinusoidal) on the tuning curve parameters was assessed. Whatever the masker type, CAP tuning curve parameters were qualitatively and quantitatively similar to the ASSEP ones, with similar inter-subject variability but a greater incidence of upward tip displacement. Whatever the procedure, sinusoidal maskers produced sharper tuning curves than narrowband maskers. Although these differences are unlikely to have significant implications for clinical work, their origin requires further investigation from a fundamental point of view. The same amount of time was needed to record a CAP and an ASSEP 13-point tuning curve. The data further validate the ASSEP technique, which has the advantage of a smaller tendency to produce upward tip shifts than the CAP technique. Moreover, being noninvasive, ASSEP tuning curves can easily be repeated over time in the same subject for clinical and research purposes.

  1. Quantum key distribution using gaussian-modulated coherent states

    NASA Astrophysics Data System (ADS)

    Grosshans, Frédéric; Van Assche, Gilles; Wenger, Jérôme; Brouri, Rosa; Cerf, Nicolas J.; Grangier, Philippe

    2003-01-01

    Quantum continuous variables are being explored as an alternative means to implement quantum key distribution, which is usually based on single photon counting. The former approach is potentially advantageous because it should enable higher key distribution rates. Here we propose and experimentally demonstrate a quantum key distribution protocol based on the transmission of gaussian-modulated coherent states (consisting of laser pulses containing a few hundred photons) and shot-noise-limited homodyne detection; squeezed or entangled beams are not required. Complete secret key extraction is achieved using a reverse reconciliation technique followed by privacy amplification. The reverse reconciliation technique is in principle secure for any value of the line transmission, against gaussian individual attacks based on entanglement and quantum memories. Our table-top experiment yields a net key transmission rate of about 1.7 megabits per second for a loss-free line, and 75 kilobits per second for a line with losses of 3.1 dB. We anticipate that the scheme should remain effective for lines with higher losses, particularly because the present limitations are essentially technical, so that significant margin for improvement is available on both the hardware and software.

  2. Measuring Conformational Dynamics of Single Biomolecules Using Nanoscale Electronic Devices

    NASA Astrophysics Data System (ADS)

    Akhterov, Maxim V.; Choi, Yongki; Sims, Patrick C.; Olsen, Tivoli J.; Gul, O. Tolga; Corso, Brad L.; Weiss, Gregory A.; Collins, Philip G.

    2014-03-01

    Molecular motion can be a rate-limiting step of enzyme catalysis, but motions are typically too quick to resolve with fluorescent single-molecule techniques. Recently, we demonstrated a label-free technique that replaced fluorophores with nano-electronic circuits to monitor protein motions. The solid-state electronic technique used single-walled carbon nanotube (SWNT) transistors to monitor conformational motions of a single molecule of T4 lysozyme while processing its substrate, peptidoglycan. As lysozyme catalyzes the hydrolysis of glycosidic bonds, two protein domains undergo an 8 Å hinge-bending motion that generates an electronic signal in the SWNT transistor. We describe improvements to the system that have extended our temporal resolution to 2 μs. Electronic recordings at this level of detail directly resolve not just transitions between open and closed conformations but also the durations for those transition events. Statistical analysis of many events determines transition timescales characteristic of enzyme activity and shows a high degree of variability within nominally identical chemical events. The high-resolution technique can be readily applied to other complex biomolecules to gain insights into their kinetic parameters and catalytic function.

  3. Effect of yogic colon cleansing (Laghu Sankhaprakshalana Kriya) on pain, spinal flexibility, disability and state anxiety in chronic low back pain

    PubMed Central

    Haldavnekar, Richa Vivek; Tekur, Padmini; Nagarathna, Raghuram; Nagendra, Hongasandra Ramarao

    2014-01-01

    Background: Studies have shown that Integrated Yoga reduces pain, disability, anxiety, and depression and increases spinal flexibility and quality-of-life in chronic low back pain (CLBP) patients. Objective: The objective of this study was to compare the effect of two yoga practices, namely laghu shankha prakshalana (LSP) kriya, a yogic colon-cleansing technique, and back-pain-specific asanas (Back pain special technique [BST]), on pain, disability, spinal flexibility, and state anxiety in patients with CLBP. Materials and Methods: In this randomized control (self as control) study, 40 in-patients (25 males, 15 females) between 25 and 70 years (44.05 ± 13.27) with CLBP were randomly assigned to receive LSP or BST sessions. The measurements were taken immediately before and after each session of either practice (30 min) in the same participant. Randomization was used to decide the day of the session (3rd or 5th day after admission) to ensure random distribution of the carry-over effect of the two practices. Statistical analysis was performed using repeated measures analysis of variance. Results: A significant group × time interaction (P < 0.001) was observed in the 11-point numerical rating scale, spinal flexibility (on a Leighton-type goniometer and the straight leg raise test in both legs), the Oswestry Disability Index, and state anxiety (X1 component of Spielberger's State-Trait Anxiety Inventory). Reduction was significantly better (P < 0.001, between groups) in the LSP group than the BST group on all variables. No adverse effects were reported by any participant. Conclusion: Clearing the bowel by the yoga-based colon-cleansing technique (LSP) is safe and offers an immediate analgesic effect with reduced disability, anxiety, and improved spinal flexibility in patients with CLBP. PMID:25035620

  4. Constructing networks with correlation maximization methods.

    PubMed

    Mellor, Joseph C; Wu, Jie; Delisi, Charles

    2004-01-01

    Problems of inference in systems biology are ideally reduced to formulations which can efficiently represent the features of interest. In the case of predicting gene regulation and pathway networks, an important feature which describes connected genes and proteins is the relationship between active and inactive forms, i.e. between the "on" and "off" states of the components. While not optimal at the limits of resolution, these logical relationships between discrete states can often yield good approximations of the behavior in larger complex systems, where exact representation of measurement relationships may be intractable. We explore techniques for extracting binary state variables from measurement of gene expression, and go on to describe robust measures for statistical significance and information that can be applied to many such types of data. We show how statistical strength and information are equivalent criteria in limiting cases, and demonstrate the application of these measures to simple systems of gene regulation.
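    The sketch below illustrates the kind of pipeline described: threshold two synthetic expression profiles into binary on/off states, then score their dependence with mutual information plus a permutation test for significance. The data and thresholds are invented for illustration and do not reproduce the paper's measures in detail.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(3)
n_samples = 200

# Synthetic expression levels: gene B is induced when gene A is "on".
a_expr = rng.lognormal(mean=0.0, sigma=1.0, size=n_samples)
b_expr = np.where(a_expr > 1.0,
                  rng.lognormal(1.5, 0.3, n_samples),
                  rng.lognormal(0.0, 0.3, n_samples))

# Extract binary on/off state variables by thresholding at the median.
a_state = (a_expr > np.median(a_expr)).astype(int)
b_state = (b_expr > np.median(b_expr)).astype(int)

# Mutual information between the discrete states (in nats), with a simple
# permutation test for statistical significance.
mi = mutual_info_score(a_state, b_state)
null = [mutual_info_score(rng.permutation(a_state), b_state) for _ in range(1000)]
p = np.mean([m >= mi for m in null])
print(f"MI = {mi:.3f} nats, permutation p = {p:.3f}")
```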

  5. Why farmers adopt best management practice in the United States: A meta-analysis of the adoption literature

    USGS Publications Warehouse

    Baumgart-Getz, Adam; Stalker Prokopy, Linda; Floress, Kristin

    2012-01-01

    This meta-analysis of both published and unpublished studies assesses factors believed to influence adoption of agricultural Best Management Practices in the United States. Using an established statistical technique to summarize the adoption literature in the United States, we identified the following variables as having the largest impact on adoption: access to and quality of information, financial capacity, and being connected to agency or local networks of farmers or watershed groups. This study shows that various approaches to data collection affect the results and comparability of adoption studies. In particular, environmental awareness and farmer attitudes have been inconsistently used and measured across the literature. This meta-analysis concludes with suggestions regarding the future direction of adoption studies, along with guidelines for how data should be presented to enhance the adoption of conservation practices and guide research.

  6. Variable Complexity Optimization of Composite Structures

    NASA Technical Reports Server (NTRS)

    Haftka, Raphael T.

    2002-01-01

    The use of several levels of modeling in design has been dubbed variable complexity modeling. The work under the grant focused on developing variable complexity modeling strategies with emphasis on response surface techniques. Applications included design of stiffened composite plates for improved damage tolerance, the use of response surfaces for fitting weights obtained by structural optimization, and design against uncertainty using response surface techniques.

  7. Parametric Study of Variable Emissivity Radiator Surfaces

    NASA Technical Reports Server (NTRS)

    Grob, Lisa M.; Swanson, Theodore D.

    2000-01-01

    The goal of spacecraft thermal design is to accommodate a high-function satellite in a low-weight, low-footprint package. The extreme environments to which the satellite is exposed during its orbit are handled using passive and active control techniques. Heritage passive heat-rejection designs are sized for the hot conditions and augmented for the cold end with heaters. The active heat-rejection designs to date are heavy, expensive, and/or complex. Incorporating an active radiator that is lighter, cheaper, and simpler will allow designers to meet the previously stated goal of spacecraft thermal design. Varying the radiator's surface properties without changing the radiating area (as with a VCHP), or changing the radiator's views (traditional louvers), is the objective of the variable emissivity (vary-e) radiator technologies. A parametric evaluation of the thermal performance of three such technologies is documented in this paper. Comparisons of the Micro-Electromechanical Systems (MEMS), electrochromic, and electrophoretic radiators to conventional radiators, both passive and active, are quantified herein. With some noted limitations, the vary-e radiator surfaces provide significant advantages over traditional radiators and a promising alternative design technique for future spacecraft thermal systems.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maimone, F., E-mail: f.maimone@gsi.de; Tinschert, K.; Endermann, M.

    In order to increase the intensity of the highly charged ions produced by Electron Cyclotron Resonance Ion Sources (ECRISs), techniques like frequency tuning and the afterglow mode have been developed, and in this paper the effect on ion production is shown for the first time when combining both techniques. Recent experimental results proved that the tuning of the operating frequency of the ECRIS is a promising technique to achieve higher ion currents of higher charge states. On the other hand, it is well known that the afterglow mode of ECRIS operation can provide more intense pulsed ion beams in comparison with continuous wave (cw) operation. These two techniques can be combined by pulsing the variable frequency signal driving the traveling wave tube amplifier which provides the high microwave power to the ECRIS. In order to analyze the effect of these two combined techniques on the ion source performance, several experiments were carried out on the pulsed frequency tuned CAPRICE (Compacte source A Plusiers Résonances Ionisantes Cyclotron Electroniques)-type ECRIS. Different waveforms and pulse lengths have been investigated under different settings of the ion source. The results of the pulsed mode have been compared with those of cw operation.

  9. Analysis of intracranial pressure: past, present, and future.

    PubMed

    Di Ieva, Antonio; Schmitz, Erika M; Cusimano, Michael D

    2013-12-01

    The monitoring of intracranial pressure (ICP) is an important tool in medicine for its ability to portray the brain's compliance status. The bedside monitor displays the ICP waveform and intermittent mean values to guide physicians in the management of patients, particularly those having sustained a traumatic brain injury. Researchers in the fields of engineering and physics have investigated various mathematical analysis techniques applicable to the waveform in order to extract additional diagnostic and prognostic information, although they largely remain limited to research applications. The purpose of this review is to present the current techniques used to monitor and interpret ICP and explore the potential of using advanced mathematical techniques to provide information about system perturbations from states of homeostasis. We discuss the limits of each proposed technique and we propose that nonlinear analysis could be a reliable approach to describe ICP signals over time, with the fractal dimension as a potential predictive clinically meaningful biomarker. Our goal is to stimulate translational research that can move modern analysis of ICP using these techniques into widespread practical use, and to investigate the clinical utility of a tool capable of simplifying multiple variables obtained from various sensors.
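    As one concrete example of the nonlinear analysis proposed, the sketch below implements the Higuchi estimator of fractal dimension, a common choice for physiological time series, and applies it to two synthetic signals; it illustrates the general idea and is not code from the review.

```python
import numpy as np

def higuchi_fd(x, kmax=10):
    """Higuchi fractal dimension of a 1-D signal (a common estimator in
    nonlinear physiological signal analysis)."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    log_k, log_L = [], []
    for k in range(1, kmax + 1):
        Lk = []
        for m in range(k):
            idx = np.arange(m, N, k)
            if len(idx) < 2:
                continue
            # Mean absolute increment of the subsampled curve, rescaled
            # to correct for the effective series length.
            L = np.abs(np.diff(x[idx])).sum() * (N - 1) / ((len(idx) - 1) * k)
            Lk.append(L / k)
        log_k.append(np.log(1.0 / k))
        log_L.append(np.log(np.mean(Lk)))
    # The fractal dimension is the slope of log L(k) versus log(1/k).
    return np.polyfit(log_k, log_L, 1)[0]

rng = np.random.default_rng(0)
white = rng.normal(size=2048)          # rough signal, FD near 2
brownian = np.cumsum(white)            # smoother signal, FD near 1.5
print(f"white noise FD: {higuchi_fd(white):.2f}")
print(f"random walk FD: {higuchi_fd(brownian):.2f}")
```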

  10. Dynamic and Transient Performance of Turbofan/Turboshaft Convertible Engine With Variable Inlet Guide Vanes

    NASA Technical Reports Server (NTRS)

    McArdle, Jack G.; Barth, Richard L.; Wenzel, Leon M.; Biesiadny, Thomas J.

    1996-01-01

    A convertible engine called the CEST TF34, using the variable inlet guide vane method of power change, was tested on an outdoor stand at the NASA Lewis Research Center with a waterbrake dynamometer for the shaft load. A new digital electronic system, in conjunction with a modified standard TF34 hydromechanical fuel control, kept engine operation stable and safely within limits. All planned testing was completed successfully. Steady-state performance and acoustic characteristics were reported previously and are referenced. This report presents results of transient and dynamic tests. The transient tests measured engine response to several rapid changes in thrust and torque commands at constant fan (shaft) speed. Limited results from dynamic tests using the pseudorandom binary noise technique are also presented. Performance of the waterbrake dynamometer is discussed in an appendix.

  11. Predicting discharge mortality after acute ischemic stroke using balanced data.

    PubMed

    Ho, King Chung; Speier, William; El-Saden, Suzie; Liebeskind, David S; Saver, Jeffery L; Bui, Alex A T; Arnold, Corey W

    2014-01-01

    Several models have been developed to predict stroke outcomes (e.g., stroke mortality, patient dependence, etc.) in recent decades. However, there is little discussion regarding the problem of between-class imbalance in stroke datasets, which leads to prediction bias and decreased performance. In this paper, we demonstrate the use of the Synthetic Minority Over-sampling Technique to overcome such problems. We also compare state-of-the-art machine learning methods and construct a six-variable support vector machine (SVM) model to predict stroke mortality at discharge. Finally, we discuss how the identification of a reduced feature set allowed us to identify additional cases in our research database for validation testing. Our classifier achieved a c-statistic of 0.865 on the cross-validated dataset, demonstrating good classification performance using a reduced set of variables.
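    A minimal sketch of this pipeline, using the imbalanced-learn package for SMOTE and scikit-learn for the SVM, on a synthetic imbalanced dataset; the six clinical variables themselves are not reproduced here.

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for an imbalanced stroke dataset: 6 predictor
# variables, roughly 5% positive (mortality) class.
X, y = make_classification(n_samples=2000, n_features=6, n_informative=4,
                           weights=[0.95], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Oversample the minority class in the training set only, then fit an SVM.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)
clf = make_pipeline(StandardScaler(), SVC(probability=True, random_state=0))
clf.fit(X_bal, y_bal)

# The c-statistic reported above is the area under the ROC curve.
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"c-statistic on held-out data: {auc:.3f}")
```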

  12. Scan Order in Gibbs Sampling: Models in Which it Matters and Bounds on How Much.

    PubMed

    He, Bryan; De Sa, Christopher; Mitliagkas, Ioannis; Ré, Christopher

    2016-01-01

    Gibbs sampling is a Markov Chain Monte Carlo sampling technique that iteratively samples variables from their conditional distributions. There are two common scan orders for the variables: random scan and systematic scan. Due to the benefits of locality in hardware, systematic scan is commonly used, even though most statistical guarantees are only for random scan. While it has been conjectured that the mixing times of random scan and systematic scan do not differ by more than a logarithmic factor, we show by counterexample that this is not the case, and we prove that the mixing times do not differ by more than a polynomial factor under mild conditions. To prove these relative bounds, we introduce a method of augmenting the state space to study systematic scan using conductance.
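    The two scan orders are easy to see side by side. The sketch below runs both on a bivariate normal target, where each conditional distribution is known in closed form; it illustrates the samplers themselves, not the paper's counterexample construction.

```python
import numpy as np

rng = np.random.default_rng(0)
rho = 0.9          # target: standard bivariate normal with correlation rho

def gibbs(n_steps, scan="systematic"):
    """Gibbs sampler for a standard bivariate normal; each conditional is
    N(rho * other, 1 - rho**2)."""
    x = np.zeros(2)
    out = np.empty((n_steps, 2))
    for t in range(n_steps):
        if scan == "systematic":
            order = (0, 1)                 # fixed sweep over both variables
        else:
            order = (rng.integers(2),)     # random scan: one random site
        for i in order:
            x[i] = rho * x[1 - i] + np.sqrt(1 - rho**2) * rng.normal()
        out[t] = x
    return out

for scan in ("systematic", "random"):
    s = gibbs(50_000, scan)
    print(f"{scan:>10}: corr = {np.corrcoef(s[2000:].T)[0, 1]:.3f} (target {rho})")
```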

  13. Scan Order in Gibbs Sampling: Models in Which it Matters and Bounds on How Much

    PubMed Central

    He, Bryan; De Sa, Christopher; Mitliagkas, Ioannis; Ré, Christopher

    2016-01-01

    Gibbs sampling is a Markov Chain Monte Carlo sampling technique that iteratively samples variables from their conditional distributions. There are two common scan orders for the variables: random scan and systematic scan. Due to the benefits of locality in hardware, systematic scan is commonly used, even though most statistical guarantees are only for random scan. While it has been conjectured that the mixing times of random scan and systematic scan do not differ by more than a logarithmic factor, we show by counterexample that this is not the case, and we prove that the mixing times do not differ by more than a polynomial factor under mild conditions. To prove these relative bounds, we introduce a method of augmenting the state space to study systematic scan using conductance. PMID:28344429

  14. Expert system for testing industrial processes and determining sensor status

    DOEpatents

    Gross, K.C.; Singer, R.M.

    1998-06-02

    A method and system are disclosed for monitoring both an industrial process and a sensor. The method and system include determining a minimum number of sensor pairs needed to test the industrial process as well as the sensor for evaluating the state of operation of both. The technique further includes generating a first and second signal characteristic of an industrial process variable. After obtaining two signals associated with one physical variable, a difference function is obtained by determining the arithmetic difference between the pair of signals over time. A frequency domain transformation is made of the difference function to obtain Fourier modes describing a composite function. A residual function is obtained by subtracting the composite function from the difference function and the residual function (free of nonwhite noise) is analyzed by a statistical probability ratio test. 24 figs.
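    A rough sketch of the signal-processing chain claimed here, on synthetic sensor data: form the difference function between a redundant sensor pair, build a composite from the dominant Fourier modes, and apply a sequential probability ratio test to the residual. Thresholds and signal parameters are illustrative, not the patent's.

```python
import numpy as np

rng = np.random.default_rng(5)
t = np.arange(1024) / 64.0

# Two sensor signals tracking one process variable: shared (correlated)
# process behavior plus independent measurement noise.
process = np.sin(2 * np.pi * 0.5 * t) + 0.2 * np.sin(2 * np.pi * 3.0 * t)
s1 = process + 0.05 * rng.normal(size=t.size)
s2 = process + 0.05 * rng.normal(size=t.size)

# Difference function, then a composite from its dominant Fourier modes.
diff = s1 - s2
spec = np.fft.rfft(diff)
keep = np.argsort(np.abs(spec))[-5:]           # retain the 5 largest modes
composite = np.zeros_like(spec)
composite[keep] = spec[keep]
residual = diff - np.fft.irfft(composite, n=diff.size)

# Gaussian-mean SPRT on the residual: H0 mean 0 versus H1 mean m1.
m1, var = 0.05, residual.var()
llr = np.cumsum((m1 / var) * (residual - m1 / 2))  # cumulative log-likelihood ratio
A, B = np.log(99), np.log(1 / 99)                  # thresholds for alpha = beta = 0.01
if llr.max() > A:
    print("sensor drift detected")
elif llr.min() < B:
    print("no drift: residual consistent with H0")
else:
    print("test still running")
```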

  15. Expert system for testing industrial processes and determining sensor status

    DOEpatents

    Gross, Kenneth C.; Singer, Ralph M.

    1998-01-01

    A method and system for monitoring both an industrial process and a sensor. The method and system include determining a minimum number of sensor pairs needed to test the industrial process as well as the sensor for evaluating the state of operation of both. The technique further includes generating a first and second signal characteristic of an industrial process variable. After obtaining two signals associated with one physical variable, a difference function is obtained by determining the arithmetic difference between the pair of signals over time. A frequency domain transformation is made of the difference function to obtain Fourier modes describing a composite function. A residual function is obtained by subtracting the composite function from the difference function and the residual function (free of nonwhite noise) is analyzed by a statistical probability ratio test.

  16. Behavior sensitivities for control augmented structures

    NASA Technical Reports Server (NTRS)

    Manning, R. A.; Lust, R. V.; Schmit, L. A.

    1987-01-01

    During the past few years it has been recognized that combining passive structural design methods with active control techniques offers the prospect of being able to find substantially improved designs. These developments have stimulated interest in augmenting structural synthesis by adding active control system design variables to those usually considered in structural optimization. An essential step in extending the approximation concepts approach to control augmented structural synthesis is the development of a behavior sensitivity analysis capability for determining rates of change of dynamic response quantities with respect to changes in structural and control system design variables. Behavior sensitivity information is also useful for man-machine interactive design as well as in the context of system identification studies. Behavior sensitivity formulations for both steady state and transient response are presented and the quality of the resulting derivative information is evaluated.

  17. Effect of experimental technique on the determination of strontium distribution coefficients of a surficial sediment from the Idaho National Engineering Laboratory, Idaho

    USGS Publications Warehouse

    Hemming, C.H.; Bunde, R.L.; Liszewski, M.J.; Rosentreter, J.J.; Welhan, J.

    1997-01-01

    The effect of experimental technique on strontium distribution coefficients (Kd's) was determined as part of an investigation of strontium geochemical transport properties of surficial sediment from the Idaho National Engineering Laboratory, Idaho. The investigation was conducted by the U.S. Geological Survey and Idaho State University, in cooperation with the U.S. Department of Energy. Batch experiments were conducted to quantify the effect of different experimental techniques on experimentally derived strontium Kd's at a fixed pH of 8.0. Combinations of three variables were investigated: method of sample agitation (rotating mixer and shaker table), ratio of the mass-of-sediment to the volume-of-reaction-solution (1:2 and 1:20), and method of sediment preparation (crushed and non-crushed). Strontium Kd's ranged from 11 to 23 ml g⁻¹ among all three experimental variables examined and were bimodally grouped around 12 and 21 ml g⁻¹. Among the three experimental variables examined, the mass-to-volume ratio appeared to be the only one that could account for this bimodal distribution. The bimodal distribution of the derived strontium Kd's may occur because the two different mass-to-volume ratios represent different natural systems. The high mass-to-volume ratio of 1:2 models a natural system, such as an aquifer, in which there is an abundance of favorable sorption sites relative to the amount of strontium in solution. The low mass-to-volume ratio of 1:20 models a natural system, such as a stream, in which the relative amount of strontium in solution exceeds the favorable surface sorption site concentration. Except for low mass-to-volume ratios of non-crushed sediment using a rotating mixer, the method of agitation and sediment preparation appears to have little influence on derived strontium Kd's.
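    For reference, the batch distribution coefficient is computed from the initial and equilibrium solution concentrations together with the mass-to-volume ratio, Kd = [(C0 − Ceq)/Ceq]·(V/m). A small sketch with hypothetical strontium numbers (not the measured INEL values):

```python
def batch_kd(c0, c_eq, mass_g, volume_ml):
    """Batch distribution coefficient, ml/g: sorbed concentration per gram
    of sediment divided by the equilibrium solution concentration."""
    sorbed_per_gram = (c0 - c_eq) * volume_ml / mass_g
    return sorbed_per_gram / c_eq

# Illustrative (hypothetical) numbers at the two ratios studied above:
print(f"1:2  ratio: Kd = {batch_kd(1.0, 0.35, 10.0, 20.0):.1f} ml/g")
print(f"1:20 ratio: Kd = {batch_kd(1.0, 0.60, 1.0, 20.0):.1f} ml/g")
```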

  18. T-wave alternans and beat-to-beat variability of repolarization: pathophysiological backgrounds and clinical relevance.

    PubMed

    Floré, Vincent; Willems, Rik

    2012-12-01

    In this review, we focus on temporal variability of cardiac repolarization. This phenomenon has been related to a higher risk for ventricular arrhythmia and is therefore interesting as a marker of sudden cardiac death risk. We review two non-invasive clinical techniques quantifying repolarization variability: T-wave alternans (TWA) and beat-to-beat variability of repolarization (BVR). We discuss their pathophysiological link with ventricular arrhythmia and the current clinical relevance of these techniques.

  19. Statistical Techniques for Analyzing Process or "Similarity" Data in TID Hardness Assurance

    NASA Technical Reports Server (NTRS)

    Ladbury, R.

    2010-01-01

    We investigate techniques for estimating the contributions to TID hardness variability for families of linear bipolar technologies, determining how part-to-part and lot-to-lot variability change for different part types in the process.

  20. Comparing the reliability of a trigonometric technique to goniometry and inclinometry in measuring ankle dorsiflexion.

    PubMed

    Sidaway, Ben; Euloth, Tracey; Caron, Heather; Piskura, Matthew; Clancy, Jessica; Aide, Alyson

    2012-07-01

    The purpose of this study was to compare the reliability of three previously used techniques for the measurement of ankle dorsiflexion ROM (open-chained goniometry, closed-chained goniometry, and inclinometry) to a novel trigonometric technique. Twenty-one physiotherapy students used the four techniques to assess dorsiflexion range of motion in 24 healthy volunteers. All student raters underwent training to establish competence in the four techniques. Raters then measured dorsiflexion with a randomly assigned measuring technique four times over two sessions, one week apart. Data were analyzed using a technique-by-session analysis of variance, with technique measurement variability as the primary index of reliability. Comparisons were also made between the measurements derived from the four techniques and those obtained from a computerized video analysis system. Analysis of the rater measurement variability around the technique means revealed significant differences between techniques, with the least variation found in the trigonometric technique. Significant differences were also found between the technique means, but no differences between sessions were evident. The trigonometric technique produced mean ROMs closest in value to those derived from computer analysis. Application of the trigonometric technique resulted in the least variability in measurement across raters and consequently should be considered for use when changes in dorsiflexion ROM need to be reliably assessed.

  1. Rising Company’s Performance through Leadership Role: Culture, Strategies, and Management System as a Marine State

    NASA Astrophysics Data System (ADS)

    Wardi, Jeni; Yandra, Alexsander

    2018-05-01

    This research examines the direct influence of transformational and transactional leadership on Indonesian companies' performance through company culture, strategy, and management accounting and control systems, in the context of a marine state. The research involves descriptive and inferential designs, and structural equation modeling (SEM) analysis is used to test the model and hypotheses. The population consists of companies registered on the Indonesian stock exchange in 2012; the sampling technique is purposive sampling. Data were obtained from questionnaires distributed to respondents: accounting and finance managers positioned one and two levels below the top management team who communicate directly with top management. The results show that transformational leadership influences company performance directly, but transactional leadership does not. Company culture is not a mediating variable in the indirect influence on performance for either transformational or transactional leadership. The management control system proves to mediate the effect of transactional leadership on performance, but not that of transformational leadership, while the management accounting system proves to mediate the influence of both transformational and transactional leadership. Beyond these mediation effects, company culture, strategy, the management accounting system, and the management control system each directly influence performance.

  2. Physiological coherence in healthy volunteers during laboratory-induced stress and controlled breathing.

    PubMed

    Mejía-Mejía, Elisa; Torres, Robinson; Restrepo, Diana

    2018-06-01

    Physiological coherence has been related with a general sense of well-being and improvements in health and physical, social, and cognitive performance. The aim of this study was to evaluate the relationship between acute stress, controlled breathing, and physiological coherence, and the degree of body systems synchronization during a coherence-generation exercise. Thirty-four university employees were evaluated during a 20-min test consisting of four stages of 5-min duration each, during which basal measurements were obtained (Stage 1), acute stress was induced using validated mental stressors (Stroop test and mental arithmetic task, during Stages 2 and 3, respectively), and coherence states were generated using a controlled breathing technique (Stage 4). Physiological coherence and cardiorespiratory synchronization were assessed during each stage from heart rate variability, pulse transit time, and respiration. Coherence measurements derived from the three analyzed variables increased during controlled respiration. Moreover, signals synchronized during the controlled breathing stage, implying a cardiorespiratory synchronization was achieved by most participants. Hence, physiological coherence and cardiopulmonary synchronization, which could lead to improvements in health and better life quality, can be achieved using slow, controlled breathing exercises. Meanwhile, coherence measured during the basal state and stressful situations did not show relevant differences using heart rate variability and pulse transit time. More studies are needed to evaluate the ability of the coherence ratio to reflect acute stress.

  3. Promoting response variability and stimulus generalization in martial arts training.

    PubMed Central

    Harding, Jay W; Wacker, David P; Berg, Wendy K; Rick, Gary; Lee, John F

    2004-01-01

    The effects of reinforcement and extinction on response variability and stimulus generalization in the punching and kicking techniques of 2 martial arts students were evaluated across drill and sparring conditions. During both conditions, the students were asked to demonstrate different techniques in response to an instructor's punching attack. During baseline, the students received no feedback on their responses in either condition. During the intervention phase, the students received differential reinforcement in the form of instructor feedback for each different punching or kicking technique they performed during a session of the drill condition, but no reinforcement was provided for techniques in the sparring condition. Results showed that both students increased the number of different techniques they performed when reinforcement and extinction procedures were conducted during the drill condition, and that this increase in response variability generalized to the sparring condition. PMID:15293637

  4. A real time Pegasus propulsion system model for VSTOL piloted simulation evaluation

    NASA Technical Reports Server (NTRS)

    Mihaloew, J. R.; Roth, S. P.; Creekmore, R.

    1981-01-01

    A real time propulsion system modeling technique suitable for use in man-in-the-loop simulator studies was developed. This technique provides the system accuracy, stability, and transient response required for integrated aircraft and propulsion control system studies. A Pegasus-Harrier propulsion system was selected as a baseline for developing mathematical modeling and simulation techniques for VSTOL. Initially, static and dynamic propulsion system characteristics were modeled in detail to form a nonlinear aerothermodynamic digital computer simulation of a Pegasus engine. From this high fidelity simulation, a real time propulsion model was formulated by applying a piece-wise linear state variable methodology. A hydromechanical and water injection control system was also simulated. The real time dynamic model includes the detail and flexibility required for the evaluation of critical control parameters and propulsion component limits over a limited flight envelope. The model was programmed for interfacing with a Harrier aircraft simulation. Typical propulsion system simulation results are presented.
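    The piecewise-linear state-variable idea can be sketched compactly: linearized (A, B) matrices stored at a few trim points and interpolated at run time, integrated with a fixed-step update suitable for real-time execution. The one-state model and matrix values below are invented for illustration, not the Pegasus model.

```python
import numpy as np

# A piecewise-linear state-variable model: (A, B) matrices linearized at a
# few power settings and interpolated at run time. Matrices and trim points
# here are made up for illustration.
POWER_PTS = np.array([0.3, 0.6, 0.9])
A_SET = np.array([[[-2.0]], [[-3.0]], [[-4.5]]])   # 1-state spool dynamics
B_SET = np.array([[[2.0]], [[3.0]], [[4.5]]])

def interp_matrices(power):
    """Linearly interpolate the linearized dynamics between trim points."""
    A = np.array([[np.interp(power, POWER_PTS, A_SET[:, 0, 0])]])
    B = np.array([[np.interp(power, POWER_PTS, B_SET[:, 0, 0])]])
    return A, B

def step(x, u, dt=0.01):
    """One real-time update: dx/dt = A(x) x + B(x) u, forward-Euler integrated."""
    A, B = interp_matrices(float(x[0]))
    return x + dt * (A @ x + B @ u)

x = np.array([0.3])                    # normalized spool speed
for _ in range(500):                   # 5 s of a throttle step toward 0.8
    x = step(x, np.array([0.8]))
print(f"spool speed after throttle step: {x[0]:.3f}")
```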

  5. Software Safety Analysis of a Flight Guidance System

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W. (Technical Monitor); Tribble, Alan C.; Miller, Steven P.; Lempia, David L.

    2004-01-01

    This document summarizes the safety analysis performed on a Flight Guidance System (FGS) requirements model. In particular, the safety properties desired of the FGS model are identified and the presence of the safety properties in the model is formally verified. Chapter 1 provides an introduction to the entire project, while Chapter 2 gives a brief overview of the problem domain, the nature of accidents, model based development, and the four-variable model. Chapter 3 outlines the approach. Chapter 4 presents the results of the traditional safety analysis techniques and illustrates how the hazardous conditions associated with the system trace into specific safety properties. Chapter 5 presents the results of the formal methods analysis technique model checking that was used to verify the presence of the safety properties in the requirements model. Finally, Chapter 6 summarizes the main conclusions of the study, first and foremost that model checking is a very effective verification technique to use on discrete models with reasonable state spaces. Additional supporting details are provided in the appendices.

  6. Automated vehicle guidance using discrete reference markers. [road surface steering techniques

    NASA Technical Reports Server (NTRS)

    Johnston, A. R.; Assefi, T.; Lai, J. Y.

    1979-01-01

    Techniques for providing steering control for an automated vehicle using discrete reference markers fixed to the road surface are investigated analytically. Either optical or magnetic approaches can be used for the sensor, which generates a measurement of the lateral offset of the vehicle path at each marker to form the basic data for steering control. Possible mechanizations of sensor and controller are outlined. Techniques for handling certain anomalous conditions, such as a missing marker, or loss of acquisition, and special maneuvers, such as u-turns and switching, are briefly discussed. A general analysis of the vehicle dynamics and the discrete control system is presented using the state variable formulation. Noise in both the sensor measurement and in the steering servo are accounted for. An optimal controller is simulated on a general purpose computer, and the resulting plots of vehicle path are presented. Parameters representing a small multipassenger tram were selected, and the simulation runs show response to an erroneous sensor measurement and acquisition following large initial path errors.
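    A minimal sketch of such a discrete controller, assuming a two-state lateral-offset model updated at each marker and an optimal (LQR) gain obtained by backward Riccati iteration; the parameters are illustrative, not the study's tram values.

```python
import numpy as np

# Discrete-time lateral-offset model between markers: state is [offset,
# heading error]; the control is a steering correction.
dt = 0.5                                # marker spacing / speed (s)
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
Q = np.diag([1.0, 0.1])                 # penalize path offset most
R = np.array([[0.05]])

# Solve the discrete Riccati equation by backward iteration for the LQR gain.
P = Q.copy()
for _ in range(500):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)

rng = np.random.default_rng(2)
x = np.array([1.0, 0.0])                # 1 m initial path error
for _ in range(20):
    z = x[0] + 0.02 * rng.normal()      # noisy lateral-offset measurement
    u = -K @ np.array([z, x[1]])        # feedback on the measured offset
    x = A @ x + (B @ u).ravel()
print(f"offset after 20 markers: {x[0]:+.3f} m")
```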

  7. An examination of variables which influence high school students to enroll in an undergraduate engineering or physical science major

    NASA Astrophysics Data System (ADS)

    Porter, Christopher H.

    The purpose of this study was to examine the variables which influence a high school student to enroll in an engineering discipline versus a physical science discipline. Data were collected using the High School Activities, Characteristics, and Influences Survey, which was administered to students who were freshmen in an engineering or physical science major at an institution in the Southeastern United States. A total of 413 students participated in the survey. Collected data were analyzed using descriptive statistics, two-sample Wilcoxon tests, and binomial logistic regression techniques. A total of 29 variables were deemed significant between the general engineering and physical science students. The 29 significant variables were further analyzed to see which have an independent impact on a student to enroll in an undergraduate engineering program, as opposed to an undergraduate physical science program. Four statistically significant variables were found to have an impact on a student's decision to enroll in an engineering undergraduate program versus a physical science program: father's influence, participation in Project Lead the Way, and the subjects of mathematics and physics. Recommendations for theory, policy, and practice were discussed based on the results of the study. This study presented suggestions for developing ways to attract, educate, and move future engineers into the workforce.

  8. Reconstructing Tropical Southwest Pacific Climate Variability and Mean State Changes at Vanuatu during the Medieval Climate Anomaly using Geochemical Proxies from Corals

    NASA Astrophysics Data System (ADS)

    Lawman, A. E.; Quinn, T. M.; Partin, J. W.; Taylor, F. W.; Thirumalai, K.; WU, C. C.; Shen, C. C.

    2017-12-01

    The Medieval Climate Anomaly (MCA: 950-1250 CE) is identified as a period during the last 2 millennia with Northern Hemisphere surface temperatures similar to the present. However, our understanding of tropical climate variability during the MCA is poorly constrained due to a lack of sub-annually resolved proxy records. We investigate seasonal and interannual variability during the MCA using geochemical records developed from two well-preserved Porites lutea fossilized corals from the tropical southwest Pacific (Tasmaloum, Vanuatu; 15.6°S, 166.9°E). Absolute U/Th dates of 1127.1 ± 2.7 CE and 1105.1 ± 3.0 CE indicate that the selected fossil corals lived during the MCA. We use paired coral Sr/Ca and δ18O measurements to reconstruct sea surface temperature (SST) and the δ18O of seawater (a proxy for salinity). To provide context for the fossil coral records and test whether the mean state and climate variability at Vanuatu during the MCA are similar to the modern climate, our analysis also incorporates two modern coral records from Sabine Bank (15.9°S, 166.0°E) and Malo Channel (15.7°S, 167.2°E), Vanuatu, for comparison. We quantify the uncertainty in our modern and fossil coral SST estimates via replication with multiple, overlapping coral records. Both the modern and fossil corals reproduce their respective mean SST value over their common period of overlap, which is 25 years in both cases. Based on over 100 years of monthly Sr/Ca data from each time period, we find that SSTs at Vanuatu during the MCA are 1.3 ± 0.7°C cooler relative to the modern. We also find that the median amplitude of the annual cycle is 0.8 ± 0.3°C larger during the MCA relative to the modern. Multiple data analysis techniques, including the standard deviation and the difference between the 95th and 5th percentiles of the annual SST cycle estimates, also show that the MCA has greater annual SST variability relative to the modern. Stable isotope data acquisition is ongoing, and when complete we will have a suite of records of paired coral Sr/Ca and δ18O measurements. We will apply similar statistical techniques developed for the Sr/Ca-SST record to also investigate variability in the δ18O of seawater (salinity). Modern salinity variability at Vanuatu arises due to hydrological anomalies associated with the El Niño-Southern Oscillation in the tropical Pacific.

  9. HEATING 7. 1 user's manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Childs, K.W.

    1991-07-01

    HEATING is a FORTRAN program designed to solve steady-state and/or transient heat conduction problems in one-, two-, or three-dimensional Cartesian, cylindrical, or spherical coordinates. A model may include multiple materials, and the thermal conductivity, density, and specific heat of each material may be both time- and temperature-dependent. The thermal conductivity may be anisotropic. Materials may undergo change of phase. Thermal properties of materials may be input or may be extracted from a material properties library. Heat generation rates may be dependent on time, temperature, and position, and boundary temperatures may be time- and position-dependent. The boundary conditions, which may be surface-to-boundary or surface-to-surface, may be specified temperatures or any combination of prescribed heat flux, forced convection, natural convection, and radiation. The boundary condition parameters may be time- and/or temperature-dependent. General graybody radiation problems may be modeled with user-defined factors for radiant exchange. The mesh spacing may be variable along each axis. HEATING is variably dimensioned and utilizes free-form input. Three steady-state solution techniques are available: point-successive-overrelaxation iterative method with extrapolation, direct-solution (for one-dimensional or two-dimensional problems), and conjugate gradient. Transient problems may be solved using one of several finite-difference schemes: Crank-Nicolson implicit, Classical Implicit Procedure (CIP), Classical Explicit Procedure (CEP), or Levy explicit method (which for some circumstances allows a time step greater than the CEP stability criterion). The solution of the system of equations arising from the implicit techniques is accomplished by point-successive-overrelaxation iteration and includes procedures to estimate the optimum acceleration parameter.
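    As an illustration of one of the implicit schemes listed, the sketch below applies Crank-Nicolson time stepping to one-dimensional transient conduction in a constant-property slab; it is a toy problem, not HEATING input or source code.

```python
import numpy as np

# Crank-Nicolson solution of 1-D transient conduction in a slab with
# fixed-temperature boundaries (toy constant-property problem).
L, nx, alpha = 0.1, 51, 1e-5            # slab length (m), nodes, diffusivity (m^2/s)
dx = L / (nx - 1)
dt = 0.5                                # s; Crank-Nicolson is unconditionally stable
r = alpha * dt / dx**2

T = np.full(nx, 20.0)                   # initial temperature (deg C)
T[0], T[-1] = 100.0, 20.0               # fixed boundary temperatures

# Assemble the tridiagonal Crank-Nicolson matrices for the interior nodes.
n = nx - 2
lhs = (1 + r) * np.eye(n) - (r / 2) * (np.eye(n, k=1) + np.eye(n, k=-1))
rhs_mat = (1 - r) * np.eye(n) + (r / 2) * (np.eye(n, k=1) + np.eye(n, k=-1))

for _ in range(1000):                   # march 500 s in time
    rhs = rhs_mat @ T[1:-1]
    rhs[0] += r * T[0]                  # constant-boundary contributions
    rhs[-1] += r * T[-1]                # (r/2 from each side of the CN average)
    T[1:-1] = np.linalg.solve(lhs, rhs)

print(f"mid-plane temperature after 500 s: {T[nx // 2]:.1f} C")
```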

  10. YAlO3:Ce3+ powders: Synthesis, characterization, thermoluminescence and optical studies

    NASA Astrophysics Data System (ADS)

    Parganiha, Yogita; Kaur, Jagjeet; Dubey, Vikas; Shrivastava, Ravi

    2015-09-01

    Yttrium aluminum perovskite (YAP) is a promising high-temperature ceramic material known for its mechanical, structural, and optical properties, and is also regarded as an ideal host material for solid-state lasers and phosphors. In this work, Ce3+-doped YAlO3 phosphors were synthesized by the solid-state reaction method, a technique well suited to large-scale production. The prepared phosphor was characterized by X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FTIR), scanning electron microscopy (SEM), photoluminescence spectra, and thermoluminescence (TL) glow-curve analysis. The starting reagents used for sample preparation were Y2O3, Al2O3, and CeO2, with boric acid as a flux. The 1:1 Y:Al ratio yields the perovskite structure, as confirmed by the XRD study. Photoluminescence excitation and emission spectra of the prepared sample show a prominent broad emission peak at 446 nm (blue emission), indicating that the phosphor can act as a single host for blue light emission and can be used for display applications. Commission Internationale de l'Eclairage (CIE) chromaticity analysis confirms the blue emission (x = 0.148, y = 0.117). TL glow-curve analysis shows a prominent peak at 189 °C for variable UV exposure times; this high-temperature peak indicates greater stability and less fading in the prepared phosphor. Kinetic parameters were evaluated by the peak shape method for variable UV exposure times (5-25 min).

  11. Estimating the magnitude of peak flows for streams in Kentucky for selected recurrence intervals

    USGS Publications Warehouse

    Hodgkins, Glenn A.; Martin, Gary R.

    2003-01-01

    This report gives estimates of, and presents techniques for estimating, the magnitude of peak flows for streams in Kentucky for recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years. A flowchart in this report guides the user to the appropriate estimates and (or) estimating techniques for a site on a specific stream. Estimates of peak flows are given for 222 U.S. Geological Survey streamflow-gaging stations in Kentucky. In the development of the peak-flow estimates at gaging stations, a new generalized skew coefficient was calculated for the State. This single statewide value of 0.011 (with a standard error of prediction of 0.520) is more appropriate for Kentucky than the national skew isoline map in Bulletin 17B of the Interagency Advisory Committee on Water Data. Regression equations are presented for estimating the peak flows on ungaged, unregulated streams in rural drainage basins. The equations were developed by use of generalized-least-squares regression procedures at 187 U.S. Geological Survey gaging stations in Kentucky and 51 stations in surrounding States. Kentucky was divided into seven flood regions. Total drainage area is used in the final regression equations as the sole explanatory variable, except in Regions 1 and 4 where main-channel slope also was used. The smallest average standard errors of prediction were in Region 3 (from -13.1 to +15.0 percent) and the largest average standard errors of prediction were in Region 5 (from -37.6 to +60.3 percent). One section of this report describes techniques for estimating peak flows for ungaged sites on gaged, unregulated streams in rural drainage basins. Another section references two previous U.S. Geological Survey reports for peak-flow estimates on ungaged, unregulated, urban streams. Estimating peak flows at ungaged sites on regulated streams is beyond the scope of this report, because peak flows on regulated streams are dependent upon variable human activities.
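
    The regional equations take a power-law form in the explanatory variable, e.g. Q_T = a * A^b for drainage area A. A minimal illustration of fitting such a form in log space is sketched below with made-up data; note that the report itself used generalized least squares, which additionally weights stations by record length and cross-correlation, whereas ordinary least squares is shown here only to illustrate the model form.

```python
import numpy as np

# Hypothetical drainage areas (mi^2) and 100-year peak flows (ft^3/s):
area = np.array([12.0, 48.0, 95.0, 210.0, 540.0, 1200.0])
q100 = np.array([1800., 5200., 9100., 17000., 36000., 66000.])

# Ordinary least squares in log10 space gives the power-law coefficients.
b, log_a = np.polyfit(np.log10(area), np.log10(q100), 1)
print(f"Q100 = {10**log_a:.1f} * A^{b:.3f}")

# Estimate for an ungaged 300 mi^2 basin:
print(10**log_a * 300.0**b)
```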

  12. Impact damage resistance of composite fuselage structure, part 2

    NASA Technical Reports Server (NTRS)

    Dost, Ernest F.; Finn, Scott R.; Murphy, Daniel P.; Huisken, Amy B.

    1993-01-01

    The strength of laminated composite materials may be significantly reduced by foreign object impact induced damage. An understanding of the damage state is required in order to predict the behavior of structure under operational loads or to optimize the structural configuration. Types of damage typically induced in laminated materials during an impact event include transverse matrix cracking, delamination, and/or fiber breakage. The details of the damage state and its influence on structural behavior depend on the location of the impact. Damage in the skin may act as a soft inclusion or affect panel stability, while damage occurring over a stiffener may include debonding of the stiffener flange from the skin. An experiment to characterize impact damage resistance of fuselage structure as a function of structural configuration and impact threat was performed. A wide range of variables associated with aircraft fuselage structure such as material type and stiffener geometry (termed, intrinsic variables) and variables related to the operating environment such as impactor mass and diameter (termed, extrinsic variables) were studied using a statistically based design-of-experiments technique. The experimental design resulted in thirty-two different 3-stiffener panels. These configured panels were impacted in various locations with a number of impactor configurations, weights, and energies. The results obtained from an examination of impacts in the skin midbay and hail simulation impacts are documented. The current discussion is a continuation of that work with a focus on nondiscrete characterization of the midbay hail simulation impacts and discrete characterization of impact damage for impacts over the stiffener.

  13. Advancing the Use of Passive Sampling in Risk Assessment and Management of Sediments Contaminated with Hydrophobic Organic Chemicals: Results of an International Ex Situ Passive Sampling Interlaboratory Comparison

    PubMed Central

    2018-01-01

    This work presents the results of an international interlaboratory comparison on ex situ passive sampling in sediments. The main objectives were to map the state of the science in passively sampling sediments, identify sources of variability, provide recommendations and practical guidance for standardized passive sampling, and advance the use of passive sampling in regulatory decision making by increasing confidence in the use of the technique. The study was performed by a consortium of 11 laboratories and included experiments with 14 passive sampling formats on 3 sediments for 25 target chemicals (PAHs and PCBs). The resulting overall interlaboratory variability was large (a factor of ∼10), but standardization of methods halved this variability. The remaining variability was primarily due to factors not related to passive sampling itself, i.e., sediment heterogeneity and analytical chemistry. Excluding the latter source of variability, by performing all analyses in one laboratory, showed that passive sampling results can have a high precision and a very low intermethod variability.

  14. Large-Scale Effects of Timber Harvesting on Stream Systems in the Ouachita Mountains, Arkansas, USA

    NASA Astrophysics Data System (ADS)

    Williams, Lance R.; Taylor, Christopher M.; Warren, Melvin L., Jr.; Clingenpeel, J. Alan

    2002-01-01

    Using Basin Area Stream Survey (BASS) data from the United States Forest Service, we evaluated how timber harvesting influenced patterns of variation in physical stream features and regional fish and macroinvertebrate assemblages. Data were collected for three years (1990-1992) from six hydrologically variable streams in the Ouachita Mountains, Arkansas, USA that were paired by management regime within three drainage basins. Specifically, we used multivariate techniques to partition variability in assemblage structure (taxonomic and trophic) that could be explained by timber harvesting, drainage basin differences, year-to-year variability, and their shared variance components. Most of the variation in fish assemblages was explained by drainage basin differences, and both basin and year-of-sampling influenced macroinvertebrate assemblages. All three factors modeled, including interactions between drainage basins and timber harvesting, influenced variability in physical stream features. Interactions between timber harvesting and drainage basins indicated that differences in physical stream features were important in determining the effects of logging within a basin. The lack of a logging effect on the biota contradicts predictions for these small, hydrologically variable streams. We believe this pattern is related to the large scale of this study and the high levels of natural variability in the streams. Alternatively, there may be time-specific effects we were unable to detect with our sampling design and analyses.

  15. Identification of complex metabolic states in critically injured patients using bioinformatic cluster analysis.

    PubMed

    Cohen, Mitchell J; Grossman, Adam D; Morabito, Diane; Knudson, M Margaret; Butte, Atul J; Manley, Geoffrey T

    2010-01-01

    Advances in technology have made extensive monitoring of patient physiology the standard of care in intensive care units (ICUs). While many systems exist to compile these data, there has been no systematic multivariate analysis and categorization across patient physiological data. The sheer volume and complexity of these data make pattern recognition or identification of patient state difficult. Hierarchical cluster analysis allows visualization of high dimensional data and enables pattern recognition and identification of physiologic patient states. We hypothesized that processing of multivariate data using hierarchical clustering techniques would allow identification of otherwise hidden patient physiologic patterns that would be predictive of outcome. Multivariate physiologic and ventilator data were collected continuously using a multimodal bioinformatics system in the surgical ICU at San Francisco General Hospital. These data were incorporated with non-continuous data and stored on a server in the ICU. A hierarchical clustering algorithm grouped each minute of data into 1 of 10 clusters. Clusters were correlated with outcome measures including incidence of infection, multiple organ failure (MOF), and mortality. We identified 10 clusters, which we defined as distinct patient states. While patients transitioned between states, they spent significant amounts of time in each. Clusters were enriched for our outcome measures: 2 of the 10 states were enriched for infection, 6 of 10 were enriched for MOF, and 3 of 10 were enriched for death. Further analysis of correlations between pairs of variables within each cluster reveals significant differences in physiology between clusters. Here we show for the first time the feasibility of clustering physiological measurements to identify clinically relevant patient states after trauma. These results demonstrate that hierarchical clustering techniques can be useful for visualizing complex multivariate data and may provide new insights for the care of critically injured patients.
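
    A minimal sketch of the clustering step described above, using SciPy's agglomerative hierarchical clustering on synthetic stand-in data; the paper's actual feature set, preprocessing, and linkage choices are not specified here, so all of those are assumptions.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# Hypothetical stand-in for the ICU feed: one row per minute, one column per
# physiologic/ventilator variable (e.g., HR, MAP, SpO2, PEEP, FiO2, ...).
X = rng.normal(size=(500, 8))
# Standardize so no single variable dominates the distance metric.
X = (X - X.mean(axis=0)) / X.std(axis=0)

Z = linkage(X, method="ward")                     # hierarchical clustering
states = fcluster(Z, t=10, criterion="maxclust")  # cut tree into 10 "states"

# Each minute now carries a state label that can be cross-tabulated against
# outcomes such as infection, MOF, or mortality.
print(np.bincount(states)[1:])
```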

  16. Integrating environmental gap analysis with spatial conservation prioritization: a case study from Victoria, Australia.

    PubMed

    Sharafi, Seyedeh Mahdieh; Moilanen, Atte; White, Matt; Burgman, Mark

    2012-12-15

    Gap analysis is used to analyse reserve networks and their coverage of biodiversity, thus identifying gaps in biodiversity representation that may be filled by additional conservation measures. Gap analysis has been used to identify priorities for species and habitat types. When it is applied to identify gaps in the coverage of environmental variables, it embodies the assumption that combinations of environmental variables are effective surrogates for biodiversity attributes. The question remains of how to fill gaps in conservation systems efficiently. Conservation prioritization software can identify those areas outside existing conservation areas that contribute to the efficient covering of gaps in biodiversity features. We show how environmental gap analysis can be implemented using high-resolution information about environmental variables and ecosystem condition with the publicly available conservation prioritization software, Zonation. Our method is based on the conversion of combinations of environmental variables into biodiversity features. We also replicated the analysis by using Species Distribution Models (SDMs) as biodiversity features to evaluate the robustness and utility of our environment-based analysis. We apply the technique to a planning case study of the state of Victoria, Australia. Copyright © 2012 Elsevier Ltd. All rights reserved.

  17. Analytic Thermoelectric Couple Modeling: Variable Material Properties and Transient Operation

    NASA Technical Reports Server (NTRS)

    Mackey, Jonathan A.; Sehirlioglu, Alp; Dynys, Fred

    2015-01-01

    To gain a deeper understanding of the operation of a thermoelectric couple, a set of analytic solutions has been derived for a variable-material-property couple and a transient couple. Using an analytic approach, as opposed to commonly used numerical techniques, results in a set of useful design guidelines. These guidelines can serve as starting conditions for further numerical studies, or as design rules for lab-built couples. The analytic modeling considers two cases, accounting for (1) material properties which vary with temperature and (2) transient operation of a couple. The variable-material-property case was handled by means of an asymptotic expansion, which allows insight into the influence of temperature dependence on different material properties. The variable-property work demonstrated the important fact that materials with identical average figures of merit can lead to different conversion efficiencies because of the temperature dependence of the properties. The transient couple was investigated through a Green's function approach; several transient boundary conditions were investigated. The transient work introduces several new design considerations which are not captured by the classic steady-state analysis. The work assists in designing couples for optimal performance and in selecting materials.
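
    For reference, the steady-state maximum conversion efficiency of a couple with an effective (temperature-averaged) figure of merit ZT is given by the classic formula sketched below; the temperatures and ZT value in the example are illustrative only.

```python
def max_efficiency(T_hot, T_cold, zT):
    """Maximum conversion efficiency of a thermoelectric couple with an
    effective (temperature-averaged) figure of merit zT."""
    carnot = (T_hot - T_cold) / T_hot
    m = (1.0 + zT) ** 0.5
    return carnot * (m - 1.0) / (m + T_cold / T_hot)

# Two materials with the same *average* zT can still differ in real
# efficiency; the averaging hides the temperature dependence of the
# properties (the point of the asymptotic-expansion analysis above).
print(max_efficiency(T_hot=800.0, T_cold=400.0, zT=1.0))   # ~0.11
```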

  18. Correspondence Analysis-Theory and Application in Management Accounting Research

    NASA Astrophysics Data System (ADS)

    Duller, Christine

    2010-09-01

    Correspondence analysis is an exploratory data analysis technique used to identify systematic relations between categorical variables. It is related to principal component analysis, and its results provide information on the structure of categorical variables similar to that given by a principal component analysis in the case of metric variables. Classical correspondence analysis handles two categorical variables, whereas multiple correspondence analysis is an extension to more than two variables. After an introductory overview of the idea and its implementation in standard software packages (PASW, SAS, R), an example from recent research is presented, which deals with strategic management accounting in family and non-family enterprises in Austria, where 70% to 80% of all enterprises can be classified as family firms. Although there is a growing body of literature focusing on various management issues in family firms, the state of the art of strategic management accounting in family firms remains an empirically under-researched subject. The relevant literature offers only the (empirically untested) hypothesis that family firms tend to have less formalized management accounting systems than non-family enterprises. A correspondence analysis helps to identify the underlying structure that is responsible for differences in strategic management accounting.
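
    Classical correspondence analysis reduces to a singular value decomposition of the standardized residuals of the contingency table. A self-contained sketch follows, with a hypothetical family-firm cross-tabulation standing in for the study's data.

```python
import numpy as np

def correspondence_analysis(N):
    """Classical (two-way) correspondence analysis of a contingency table N."""
    P = N / N.sum()                       # correspondence matrix
    r, c = P.sum(axis=1), P.sum(axis=0)   # row and column masses
    # Standardized residuals from the independence model:
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
    U, sv, Vt = np.linalg.svd(S, full_matrices=False)
    row_coords = U * sv / np.sqrt(r)[:, None]     # principal row coordinates
    col_coords = Vt.T * sv / np.sqrt(c)[:, None]  # principal column coordinates
    inertia = sv**2                               # variance explained per axis
    return row_coords, col_coords, inertia

# Hypothetical cross-tabulation: firm type (family / non-family) x degree of
# formalization of management accounting (low / medium / high).
N = np.array([[42.0, 30.0, 12.0],
              [15.0, 28.0, 25.0]])
rows, cols, inertia = correspondence_analysis(N)
print(inertia / inertia.sum())   # share of inertia per axis
```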

  19. Measurement of soil carbon oxidation state and oxidative ratio by 13C nuclear magnetic resonance

    USGS Publications Warehouse

    Hockaday, W.C.; Masiello, C.A.; Randerson, J.T.; Smernik, R.J.; Baldock, J.A.; Chadwick, O.A.; Harden, J.W.

    2009-01-01

    The oxidative ratio (OR) of the net ecosystem carbon balance is the ratio of net O2 and CO2 fluxes resulting from photosynthesis, respiration, decomposition, and other lateral and vertical carbon flows. The OR of the terrestrial biosphere must be well characterized to accurately estimate the terrestrial CO2 sink using atmospheric measurements of changing O2 and CO2 levels. To estimate the OR of the terrestrial biosphere, measurements are needed of changes in the OR of aboveground and belowground carbon pools associated with decadal-timescale disturbances (e.g., land use change and fire). The OR of aboveground pools can be measured using conventional approaches including elemental analysis. However, measuring the OR of soil carbon pools is technically challenging, and few soil OR data are available. In this paper we test three solid-state nuclear magnetic resonance (NMR) techniques for measuring soil OR, all based on measurements of the closely related parameter, organic carbon oxidation state (Cox). Two of the three techniques make use of a molecular mixing model which converts NMR spectra into concentrations of a standard suite of biological molecules of known Cox. The third technique assigns Cox values to each peak in the NMR spectrum. We assess the error associated with each technique using pure chemical compounds and plant biomass standards whose Cox and OR values can be directly measured by elemental analyses. The most accurate technique, direct polarization solid-state 13C NMR with the molecular mixing model, agrees with elemental analyses to ±0.036 Cox units (±0.009 OR units). Using this technique, we show a large natural variability in soil Cox and OR values. Soil Cox values have a mean of -0.26 and a range from -0.45 to 0.30, corresponding to OR values of 1.08 ± 0.06 and a range from 0.96 to 1.22. We also estimate the OR of the carbon flux from a boreal forest fire. Analysis of soils from nearby intact soil profiles implies that soil carbon losses associated with the fire had an OR of 1.091 (±0.003). Fire appears to be a major factor driving the soil C pool to higher oxidation states and lower OR values. Episodic fluxes caused by disturbances like fire may have substantially different ORs from ecosystem respiration fluxes and therefore should be better quantified to reduce uncertainties associated with our understanding of the global atmospheric carbon budget. Copyright 2009 by the American Geophysical Union.
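
    For a compound containing only C, H, and O, Cox and OR follow directly from the elemental composition. A minimal sketch of that relationship is given below; published formulations add correction terms for N and S, which are omitted here for brevity.

```python
def cox_and_or(C, H, O):
    """Carbon oxidation state and oxidative ratio of a C/H/O compound.

    C, H, O are the numbers of carbon, hydrogen, and oxygen atoms per
    formula unit. (N and S correction terms are omitted for brevity.)
    """
    cox = (2.0 * O - H) / C    # mean carbon oxidation state
    oxr = 1.0 - cox / 4.0      # moles O2 consumed per mole CO2 released
    return cox, oxr

print(cox_and_or(6, 12, 6))   # glucose:  Cox = 0,  OR = 1
print(cox_and_or(1, 4, 0))    # methane:  Cox = -4, OR = 2
print(cox_and_or(2, 6, 1))    # ethanol:  Cox = -2, OR = 1.5
```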

  20. Statistics anxiety, state anxiety during an examination, and academic achievement.

    PubMed

    Macher, Daniel; Paechter, Manuela; Papousek, Ilona; Ruggeri, Kai; Freudenthaler, H Harald; Arendasy, Martin

    2013-12-01

    A large proportion of students identify statistics courses as the most anxiety-inducing courses in their curriculum. Many students feel impaired by state anxiety during the examination and therefore probably show lower achievement. The study investigates how statistics anxiety, attitudes (e.g., interest, mathematical self-concept), and trait anxiety, as a general disposition to anxiety, influence the experience of anxiety as well as achievement in an examination. Participants were 284 undergraduate psychology students, 225 females and 59 males. Two weeks prior to the examination, participants completed a demographic questionnaire and measures of the STARS, the STAI, self-concept in mathematics, and interest in statistics. At the beginning of the statistics examination, students assessed their present state anxiety with the KUSTA scale. After 25 min, all examination participants gave another assessment of their anxiety at that moment. Students' examination scores were recorded. Structural equation modelling techniques were used to test relationships between the variables in a multivariate context. Statistics anxiety was the only variable related to state anxiety in the examination. Via state anxiety experienced before and during the examination, statistics anxiety had a negative influence on achievement. However, statistics anxiety also had a direct positive influence on achievement. This result may be explained by students' motivational goals in the specific educational setting. The results provide insight into the relationship between students' attitudes, dispositions, experiences of anxiety in the examination, and academic achievement, and give recommendations to instructors on how to support students prior to and during the examination. © 2012 The British Psychological Society.

  1. Extensions of the Johnson-Neyman Technique to Linear Models with Curvilinear Effects: Derivations and Analytical Tools

    ERIC Educational Resources Information Center

    Miller, Jason W.; Stromeyer, William R.; Schwieterman, Matthew A.

    2013-01-01

    The past decade has witnessed renewed interest in the use of the Johnson-Neyman (J-N) technique for calculating the regions of significance for the simple slope of a focal predictor on an outcome variable across the range of a second, continuous independent variable. Although tools have been developed to apply this technique to probe 2- and 3-way…
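
    For the standard interaction model y = b0 + b1*x + b2*z + b3*x*z, the J-N region boundaries are the values of z at which the t statistic of the simple slope b1 + b3*z equals its critical value, which reduces to a quadratic in z. A minimal sketch with hypothetical coefficient estimates follows.

```python
import numpy as np

def jn_boundaries(b1, b3, var_b1, var_b3, cov_b13, t_crit=1.96):
    """Johnson-Neyman boundaries for the simple slope (b1 + b3*z) of a focal
    predictor x in the model y = b0 + b1*x + b2*z + b3*x*z.

    Solves (b1 + b3*z)^2 = t^2 * Var(b1 + b3*z), a quadratic in z.
    """
    a = b3**2 - t_crit**2 * var_b3
    b = 2.0 * (b1 * b3 - t_crit**2 * cov_b13)
    c = b1**2 - t_crit**2 * var_b1
    disc = b**2 - 4.0 * a * c
    if disc < 0:
        return None   # no real boundaries: significant everywhere or nowhere
    roots = [(-b - np.sqrt(disc)) / (2 * a), (-b + np.sqrt(disc)) / (2 * a)]
    return np.sort(roots)

# Hypothetical estimates from a fitted interaction model:
print(jn_boundaries(b1=0.40, b3=-0.25, var_b1=0.04, var_b3=0.01, cov_b13=-0.005))
```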

  2. Rotorcraft system identification techniques for handling qualities and stability and control evaluation

    NASA Technical Reports Server (NTRS)

    Hall, W. E., Jr.; Gupta, N. K.; Hansen, R. S.

    1978-01-01

    An integrated approach to rotorcraft system identification is described. This approach consists of the sequential application of (1) data filtering to estimate the states of the system and sensor errors, (2) model structure estimation to isolate significant model effects, and (3) parameter identification to quantify the coefficients of the model. An input design algorithm is described which can be used to design control inputs that maximize parameter estimation accuracy. Details of each aspect of the rotorcraft identification approach are given. Examples of both simulated and actual flight data processing are given to illustrate each phase of processing. The procedure is shown to provide a means of calibrating sensor errors in flight data, quantifying high-order state variable models from the flight data, and consequently computing related stability and control design models.

  3. Generation of linear dynamic models from a digital nonlinear simulation

    NASA Technical Reports Server (NTRS)

    Daniele, C. J.; Krosel, S. M.

    1979-01-01

    The results and methodology used to derive linear models from a nonlinear simulation are presented. It is shown that averaged positive and negative perturbations in the state variables can reduce numerical errors in finite-difference partial-derivative approximations and, in the control inputs, can better approximate the system response in both directions about the operating point. Both explicit and implicit formulations are addressed. Linear models are derived for the F100 engine, and comparisons of transients are made with the nonlinear simulation. The problem posed by startup transients in the nonlinear simulation when making these comparisons is addressed. Reduction of the linear models is also investigated using the modal and normal techniques. Reduced-order models of the F100 are derived and compared with the full-state models.
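
    The averaged positive/negative perturbation scheme is equivalent to central differencing of the simulation about the operating point. A minimal sketch for a generic state-space system xdot = f(x, u) is shown below, with a toy nonlinear function standing in for the engine simulation.

```python
import numpy as np

def linearize(f, x0, u0, dx=1e-4, du=1e-4):
    """Extract A = df/dx and B = df/du for xdot = f(x, u) about (x0, u0),
    averaging positive and negative perturbations (central differences)."""
    n, m = len(x0), len(u0)
    A, B = np.zeros((n, n)), np.zeros((n, m))
    for j in range(n):
        e = np.zeros(n); e[j] = dx
        A[:, j] = (f(x0 + e, u0) - f(x0 - e, u0)) / (2.0 * dx)
    for j in range(m):
        e = np.zeros(m); e[j] = du
        B[:, j] = (f(x0, u0 + e) - f(x0, u0 - e)) / (2.0 * du)
    return A, B

# Toy nonlinear dynamics standing in for the engine simulation:
f = lambda x, u: np.array([-x[0]**2 + u[0], x[0] * x[1] - 2.0 * x[1]])
A, B = linearize(f, x0=np.array([1.0, 0.5]), u0=np.array([1.0]))
print(A); print(B)
```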

  4. Superconducting fault current-limiter with variable shunt impedance

    DOEpatents

    Llambes, Juan Carlos H; Xiong, Xuming

    2013-11-19

    A superconducting fault current-limiter is provided, including a superconducting element configured to resistively or inductively limit a fault current, and one or more variable-impedance shunts electrically coupled in parallel with the superconducting element. The variable-impedance shunt(s) is configured to present a first impedance during a superconducting state of the superconducting element and a second impedance during a normal resistive state of the superconducting element. The superconducting element transitions from the superconducting state to the normal resistive state responsive to the fault current, and responsive thereto, the variable-impedance shunt(s) transitions from the first to the second impedance. The second impedance of the variable-impedance shunt(s) is a lower impedance than the first impedance, which facilitates current flow through the variable-impedance shunt(s) during a recovery transition of the superconducting element from the normal resistive state to the superconducting state, and thus, facilitates recovery of the superconducting element under load.

  5. Adjoint-Based Design of Rotors Using the Navier-Stokes Equations in a Noninertial Reference Frame

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.; Lee-Rausch, Elizabeth M.; Jones, William T.

    2010-01-01

    Optimization of rotorcraft flowfields using an adjoint method generally requires a time-dependent implementation of the equations. The current study examines an intermediate approach in which a subset of rotor flowfields are cast as steady problems in a noninertial reference frame. This technique permits the use of an existing steady-state adjoint formulation with minor modifications to perform sensitivity analyses. The formulation is valid for isolated rigid rotors in hover or where the freestream velocity is aligned with the axis of rotation. Discrete consistency of the implementation is demonstrated by using comparisons with a complex-variable technique, and a number of single- and multipoint optimizations for the rotorcraft figure of merit function are shown for varying blade collective angles. Design trends are shown to remain consistent as the grid is refined.

  6. Adjoint-Based Design of Rotors using the Navier-Stokes Equations in a Noninertial Reference Frame

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.; Lee-Rausch, Elizabeth M.; Jones, William T.

    2009-01-01

    Optimization of rotorcraft flowfields using an adjoint method generally requires a time-dependent implementation of the equations. The current study examines an intermediate approach in which a subset of rotor flowfields are cast as steady problems in a noninertial reference frame. This technique permits the use of an existing steady-state adjoint formulation with minor modifications to perform sensitivity analyses. The formulation is valid for isolated rigid rotors in hover or where the freestream velocity is aligned with the axis of rotation. Discrete consistency of the implementation is demonstrated using comparisons with a complex-variable technique, and a number of single- and multi-point optimizations for the rotorcraft figure of merit function are shown for varying blade collective angles. Design trends are shown to remain consistent as the grid is refined.

  7. Surface-Water Techniques: On Demand Training Opportunities

    USGS Publications Warehouse

    ,

    2007-01-01

    The U.S. Geological Survey (USGS) has been collecting streamflow information since 1889 using nationally consistent methods. The need for such information was envisioned by John Wesley Powell as a key component for settlement of the arid western United States. Because of Powell's vision the nation now has a rich streamflow data base that can be analyzed with confidence in both space and time. This means that data collected at a stream gaging station in Maine in 1903 can be compared to data collected in 2007 at the same gage in Maine or at a different gage in California. Such comparisons are becoming increasingly important as we work to assess climate variability and anthropogenic effects on streamflow. Training employees in proper and consistent techniques to collect and analyze streamflow data forms a cornerstone for maintaining the integrity of this rich data base.

  8. A LATIN-based model reduction approach for the simulation of cycling damage

    NASA Astrophysics Data System (ADS)

    Bhattacharyya, Mainak; Fau, Amelie; Nackenhorst, Udo; Néron, David; Ladevèze, Pierre

    2017-11-01

    The objective of this article is to introduce a new method, including model order reduction, for the life prediction of structures subjected to cyclic damage. Contrary to classical incremental schemes for damage computation, a non-incremental technique, the LATIN method, is used herein as the solution framework. This approach allows the introduction of a PGD model reduction technique, which leads to a drastic reduction of the computational cost. The proposed framework is exemplified for structures subjected to cyclic loading, where damage is considered to be isotropic and micro-defect closure effects are taken into account. A difficulty in the use of the LATIN method comes from the state laws, which cannot be transformed into linear relations through an internal variable transformation. A specific treatment of this issue is introduced in this work.

  9. A prediction model for lift-fan simulator performance. M.S. Thesis - Cleveland State Univ.

    NASA Technical Reports Server (NTRS)

    Yuska, J. A.

    1972-01-01

    The performance characteristics of a model VTOL lift-fan simulator installed in a two-dimensional wing are presented. The lift-fan simulator consisted of a 15-inch diameter fan driven by a turbine contained in the fan hub. The performance of the lift-fan simulator was measured in two ways: (1) the calculated momentum thrust of the fan and turbine (total thrust loading), and (2) the axial-force measured on a load cell force balance (axial-force loading). Tests were conducted over a wide range of crossflow velocities, corrected tip speeds, and wing angle of attack. A prediction modeling technique was developed to help in analyzing the performance characteristics of lift-fan simulators. A multiple linear regression analysis technique is presented which calculates prediction model equations for the dependent variables.

  10. Spatially explicit modeling of blackbird abundance in the Prairie Pothole Region

    USGS Publications Warehouse

    Forcey, Greg M.; Thogmartin, Wayne E.; Linz, George M.; McKann, Patrick C.; Crimmins, Shawn M.

    2015-01-01

    Knowledge of factors influencing animal abundance is important to wildlife biologists developing management plans. This is especially true for economically important species such as blackbirds (Icteridae), which cause more than $100 million in crop damages annually in the United States. Using data from the North American Breeding Bird Survey, the National Land Cover Dataset, and the National Climatic Data Center, we modeled effects of regional environmental variables on relative abundance of 3 blackbird species (red-winged blackbird,Agelaius phoeniceus; yellow-headed blackbird, Xanthocephalus xanthocephalus; common grackle, Quiscalus quiscula) in the Prairie Pothole Region of the central United States. We evaluated landscape covariates at 3 logarithmically related spatial scales (1,000 ha, 10,000 ha, and 100,000 ha) and modeled weather variables at the 100,000-ha scale. We constructed models a priori using information from published habitat associations. We fit models with WinBUGS using Markov chain Monte Carlo techniques. Both landscape and weather variables contributed strongly to predicting blackbird relative abundance (95% credibility interval did not overlap 0). Variables with the strongest associations with blackbird relative abundance were the percentage of wetland area and precipitation amount from the year before bird surveys were conducted. The influence of spatial scale appeared small—models with the same variables expressed at different scales were often in the best model subset. This large-scale study elucidated regional effects of weather and landscape variables, suggesting that management strategies aimed at reducing damages caused by these species should consider the broader landscape, including weather effects, because such factors may outweigh the influence of localized conditions or site-specific management actions. The regional species distributional models we developed for blackbirds provide a tool for understanding these broader landscape effects and guiding wildlife management practices to areas that are optimally beneficial. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.

  11. Sequential state estimation of nonlinear/non-Gaussian systems with stochastic input for turbine degradation estimation

    NASA Astrophysics Data System (ADS)

    Hanachi, Houman; Liu, Jie; Banerjee, Avisekh; Chen, Ying

    2016-05-01

    Health state estimation of inaccessible components in complex systems necessitates effective state estimation techniques using the observable variables of the system. The task becomes much more complicated when the system is nonlinear/non-Gaussian and receives stochastic input. In this work, a novel sequential state estimation framework is developed, based on a particle filtering (PF) scheme, for state estimation of a general class of nonlinear dynamical systems with stochastic input. Performance of the developed framework is first validated by simulation on a Bivariate Non-stationary Growth Model (BNGM) as a benchmark. In the next step, three years of operating data from an industrial gas turbine engine (GTE) are utilized to verify the effectiveness of the developed framework. A comprehensive thermodynamic model of the GTE is therefore developed to formulate the relation between the observable parameters and the dominant degradation symptoms of the turbine, namely, loss of isentropic efficiency and increase of the mass flow. The results confirm the effectiveness of the developed framework for simultaneous estimation of multiple degradation symptoms in complex systems with noisy measured inputs.
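
    A minimal bootstrap particle filter in the spirit of the framework described above, demonstrated on a simplified variant of the nonstationary growth model benchmark (univariate, and without the usual cosine forcing term); all noise levels are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def particle_filter(y, f, h, q, r, x0, n_part=1000):
    """Bootstrap particle filter for x_k = f(x_{k-1}) + w, y_k = h(x_k) + v."""
    x = x0 + q * rng.normal(size=n_part)       # initial particle cloud
    est = []
    for yk in y:
        x = f(x) + q * rng.normal(size=n_part)            # propagate
        w = np.exp(-0.5 * ((yk - h(x)) / r) ** 2)          # Gaussian likelihood
        w /= w.sum()
        est.append(np.sum(w * x))                          # posterior mean
        x = x[rng.choice(n_part, size=n_part, p=w)]        # resample
    return np.array(est)

# Simplified univariate nonstationary growth model as the test system:
f = lambda x: 0.5 * x + 25 * x / (1 + x**2)
h = lambda x: x**2 / 20.0
x, xs = 0.1, []
for k in range(50):
    x = f(x) + rng.normal()
    xs.append(x)
y = h(np.array(xs)) + 0.5 * rng.normal(size=50)
print(particle_filter(y, f, h, q=1.0, r=0.5, x0=0.1)[:5])
```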

  12. Maintenance and Representation of Mind Wandering during Resting-State fMRI.

    PubMed

    Chou, Ying-Hui; Sundman, Mark; Whitson, Heather E; Gaur, Pooja; Chu, Mei-Lan; Weingarten, Carol P; Madden, David J; Wang, Lihong; Kirste, Imke; Joliot, Marc; Diaz, Michele T; Li, Yi-Ju; Song, Allen W; Chen, Nan-Kuei

    2017-01-12

    Major advances in resting-state functional magnetic resonance imaging (fMRI) techniques in the last two decades have provided a tool to better understand the functional organization of the brain both in health and illness. Despite such developments, characterizing regulation and cerebral representation of mind wandering, which occurs unavoidably during resting-state fMRI scans and may induce variability of the acquired data, remains a work in progress. Here, we demonstrate that a decrease or decoupling in functional connectivity involving the caudate nucleus, insula, medial prefrontal cortex and other domain-specific regions was associated with more sustained mind wandering in particular thought domains during resting-state fMRI. Importantly, our findings suggest that temporal and between-subject variations in functional connectivity of above-mentioned regions might be linked with the continuity of mind wandering. Our study not only provides a preliminary framework for characterizing the maintenance and cerebral representation of different types of mind wandering, but also highlights the importance of taking mind wandering into consideration when studying brain organization with resting-state fMRI in the future.

  13. Detection and attribution of streamflow timing changes to climate change in the Western United States

    USGS Publications Warehouse

    Hidalgo, H.G.; Das, T.; Dettinger, M.D.; Cayan, D.R.; Pierce, D.W.; Barnett, T.P.; Bala, G.; Mirin, A.; Wood, A.W.; Bonfils, Celine; Santer, B.D.; Nozawa, T.

    2009-01-01

    This article applies formal detection and attribution techniques to investigate the nature of observed shifts in the timing of streamflow in the western United States. Previous studies have shown that the snow hydrology of the western United States has changed in the second half of the twentieth century. Such changes manifest themselves in the form of more rain and less snow, in reductions in the snow water contents, and in earlier snowmelt and associated advances in streamflow "center" timing (the day in the "water-year" on average when half the water-year flow at a point has passed). However, with one exception over a more limited domain, no other study has attempted to formally attribute these changes to anthropogenic increases of greenhouse gases in the atmosphere. Using the observations together with a set of global climate model simulations and a hydrologic model (applied to three major hydrological regions of the western United States: the California region, the upper Colorado River basin, and the Columbia River basin), it is found that the observed trends toward earlier "center" timing of snowmelt-driven streamflows in the western United States since 1950 are detectably different from natural variability (significant at the p < 0.05 level). Furthermore, the nonnatural parts of these changes can be attributed confidently to climate changes induced by anthropogenic greenhouse gases, aerosols, ozone, and land use. The signal from the Columbia dominates the analysis, and it is the only basin that showed a detectable signal when the analysis was performed on individual basins. It should be noted that although climate change is an important signal, other climatic processes have also contributed to the hydrologic variability of large basins in the western United States. © 2009 American Meteorological Society.

  14. Multi-year climate variability in the Southwestern United States within a context of a dynamically downscaled twentieth century reanalysis

    NASA Astrophysics Data System (ADS)

    Carrillo, Carlos M.; Castro, Christopher L.; Chang, Hsin-I.; Luong, Thang M.

    2017-12-01

    This investigation evaluates whether there is coherency between warm- and cool-season precipitation at the low-frequency scale that may be responsible for multi-year droughts in the US Southwest. This low-frequency climate variability, at the decadal scale and longer, is studied within the context of a twentieth-century reanalysis (20CR) and its dynamically downscaled version (DD-20CR). A spectral-domain matrix-methods technique (Multiple-Taper-Method Singular Value Decomposition) is applied to these datasets to identify statistically significant spatiotemporal precipitation patterns for the cool (November-April) and warm (July-August) seasons. The low-frequency variability in the 20CR is evaluated by relating global- to continental-scale spatiotemporal variability in moisture flux convergence (MFC) to the occurrence of multiyear droughts and pluvials in Central America, as this region has a demonstrated anti-phase relationship in low-frequency climate variability with northern Mexico and the southwestern US. By using the MFC in lieu of precipitation, this study reveals that the 20CR is able to resolve well the low-frequency, multiyear climate variability. In the context of the DD-20CR, multiyear droughts and pluvials in the southwestern US in the early twentieth century are significantly related to this low-frequency climate variability. The precipitation anomalies at these low-frequency timescales are in phase between the cool and warm seasons, consistent with the concept of dual-season drought suggested in tree-ring studies.

  15. Variability in surface ECG morphology: signal or noise?

    NASA Technical Reports Server (NTRS)

    Smith, J. M.; Rosenbaum, D. S.; Cohen, R. J.

    1988-01-01

    Using data collected from canine models of acute myocardial ischemia, we investigated two issues of major relevance to electrocardiographic signal averaging: ECG epoch alignment, and the spectral characteristics of the beat-to-beat variability in ECG morphology. With initial digitization rates of 1 kHz, an iterative a posteriori matched-filtering alignment scheme, and linear interpolation, we demonstrated that there is sufficient information in the body surface ECG to merit alignment to a precision of 0.1 ms. Applying this technique to align QRS complexes and atrial pacing artifacts independently, we demonstrated that the conduction delay from atrial stimulus to ventricular activation may be so variable as to preclude using atrial pacing as an alignment mechanism, and that this variability in conduction time may be modulated at the frequency of respiration and at a much lower frequency (0.02-0.03 Hz). Using a multidimensional spectral technique, we investigated the beat-to-beat variability in ECG morphology, demonstrating that the frequency spectrum of ECG morphological variation reveals a readily discernible modulation at the frequency of respiration. In addition, this technique detects a subtle beat-to-beat alternation in surface ECG morphology which accompanies transient coronary artery occlusion. We conclude that physiologically important information may be stored in the variability of the surface electrocardiogram, and that this information is lost by conventional averaging techniques.
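
    A minimal sketch of matched-filter epoch alignment with sub-sample refinement, in the spirit of (but not identical to) the iterative scheme described above; the template and beat below are synthetic.

```python
import numpy as np

def align_beat(template, beat):
    """Align one ECG epoch to a template by matched filtering, then refine the
    lag to sub-sample precision with 3-point parabolic interpolation."""
    xc = np.correlate(beat - beat.mean(), template - template.mean(), "full")
    k = int(np.argmax(xc))
    # Parabolic interpolation around the correlation peak:
    y0, y1, y2 = xc[k - 1], xc[k], xc[k + 1]
    frac = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
    return (k - (len(template) - 1)) + frac   # lag in (fractional) samples

# Synthetic 1 kHz "QRS" template and a copy delayed by 3 samples plus noise:
t = np.arange(200) / 1000.0
template = np.exp(-((t - 0.1) / 0.01) ** 2)
beat = np.roll(template, 3) + 0.01 * np.random.default_rng(2).normal(size=200)
print(align_beat(template, beat))   # ~3.0
```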

  16. Approximate techniques of structural reanalysis

    NASA Technical Reports Server (NTRS)

    Noor, A. K.; Lowder, H. E.

    1974-01-01

    A study is made of two approximate techniques for structural reanalysis. These include Taylor series expansions for response variables in terms of design variables and the reduced-basis method. In addition, modifications to these techniques are proposed to overcome some of their major drawbacks. The modifications include a rational approach to the selection of the reduced-basis vectors and the use of Taylor series approximation in an iterative process. For the reduced basis a normalized set of vectors is chosen which consists of the original analyzed design and the first-order sensitivity analysis vectors. The use of the Taylor series approximation as a first (initial) estimate in an iterative process, can lead to significant improvements in accuracy, even with one iteration cycle. Therefore, the range of applicability of the reanalysis technique can be extended. Numerical examples are presented which demonstrate the gain in accuracy obtained by using the proposed modification techniques, for a wide range of variations in the design variables.
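
    A minimal sketch of the first-order Taylor-series reanalysis idea for a static system K(d)u = f, using a toy two-degree-of-freedom stiffness matrix; the reduced-basis method and the iterative refinement proposed above are not reproduced here.

```python
import numpy as np

# Toy stiffness matrix parameterized by a single design variable d:
def K(d):
    return np.array([[2.0 * d, -d], [-d, 2.0 * d]])

f = np.array([1.0, 0.0])
d0, dd = 1.0, 0.3              # analyzed design and proposed design change
u0 = np.linalg.solve(K(d0), f)

# Response sensitivity du/dd via central differences (could also be analytic):
h = 1e-6
du = (np.linalg.solve(K(d0 + h), f) - np.linalg.solve(K(d0 - h), f)) / (2 * h)

u_taylor = u0 + du * dd                    # first-order reanalysis estimate
u_exact = np.linalg.solve(K(d0 + dd), f)   # full reanalysis for comparison
print(u_taylor, u_exact)
```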

  17. Characterization of Shrubland-Atmosphere Interactions through Use of the Eddy Covariance Method, Distributed Footprint Sampling, and Imagery from Unmanned Aerial Vehicles

    NASA Astrophysics Data System (ADS)

    Anderson, C.; Vivoni, E. R.; Pierini, N.; Robles-Morua, A.; Rango, A.; Laliberte, A.; Saripalli, S.

    2012-12-01

    Ecohydrological dynamics can be evaluated from field observations of land-atmosphere states and fluxes, including water, carbon, and energy exchanges measured through the eddy covariance method. In heterogeneous landscapes, the representativeness of these measurements is not well understood due to the variable nature of the sampling footprint and the mixture of underlying herbaceous, shrub, and soil patches. In this study, we integrate new field techniques to understand how ecosystem surface states are related to turbulent fluxes in two different semiarid shrubland settings in the Jornada (New Mexico) and Santa Rita (Arizona) Experimental Ranges. The two sites are characteristic of Chihuahuan (NM) and Sonoran (AZ) Desert mixed-shrub communities resulting from woody plant encroachment into grassland areas. In each study site, we deployed continuous soil moisture and soil temperature profile observations at twenty sites around an eddy covariance tower after local footprint estimation revealed the optimal sensor network design. We then characterized the tower footprint through terrain and vegetation analyses derived at high resolution (<1 m) from imagery obtained from fixed-wing and rotary-wing unmanned aerial vehicles (UAVs). Our analysis focuses on the summertime land-atmosphere states and fluxes during which each ecosystem responded differentially to the North American monsoon. We found that vegetation heterogeneity induces spatial differences in soil moisture and temperature that are important to capture when relating these states to the eddy covariance flux measurements. Spatial distributions of surface states at different depths reveal intricate patterns linked to vegetation cover that vary between the two sites. Furthermore, single site measurements at the tower are insufficient to capture the footprint conditions and their influence on turbulent fluxes. We also discuss techniques for aggregating the surface states based upon the vegetation and soil classifications obtained from the high-resolution aerial imagery. Overall, the integration of the different techniques yielded new insight into the spatiotemporal variation of land surface states and their relation to sensible and latent heat fluxes in two shrubland sites, with the potential application in other ecosystems worldwide.

  18. On the use of internal state variables in thermoviscoplastic constitutive equations

    NASA Technical Reports Server (NTRS)

    Allen, D. H.; Beek, J. M.

    1985-01-01

    The general theory of internal state variables is reviewed with a view toward applying it to inelastic metals used in high-temperature environments. In the process, certain constraints and clarifications are made regarding internal state variables. It is shown that the Helmholtz free energy can be utilized to construct constitutive equations which are appropriate for metallic superalloys. Internal state variables are shown to represent locally averaged measures of dislocation arrangement, dislocation density, and intergranular fracture. The internal state variable model is demonstrated to be a suitable framework for comparison of several currently proposed models for metals and can therefore be used to exhibit history dependence, nonlinearity, and rate as well as temperature sensitivity.

  19. Comparison of Sequential and Variational Data Assimilation

    NASA Astrophysics Data System (ADS)

    Alvarado Montero, Rodolfo; Schwanenberg, Dirk; Weerts, Albrecht

    2017-04-01

    Data assimilation is a valuable tool to improve model state estimates by combining measured observations with model simulations. It has recently gained significant attention due to its potential for using remote sensing products to improve operational hydrological forecasts and for reanalysis purposes. This has been supported by the application of sequential techniques such as the Ensemble Kalman Filter, which require no additional features within the modeling process, i.e., they can use arbitrary black-box models. Alternatively, variational techniques rely on optimization algorithms to minimize a pre-defined objective function. This function describes the trade-off between the amount of noise introduced into the system and the mismatch between simulated and observed variables. While sequential techniques have been commonly applied to hydrological processes, variational techniques are seldom used. We believe this is mainly attributable to the required computation of first-order sensitivities by algorithmic differentiation techniques and related model enhancements, but also to the lack of comparison between the two techniques. We contribute to filling this gap and present the results from the assimilation of streamflow data in two basins located in Germany and Canada. The assimilation introduces noise to precipitation and temperature to produce better initial estimates of an HBV model. The results are computed for a hindcast period and assessed using lead-time performance metrics. The study concludes with a discussion of the main features of each technique and their advantages and disadvantages in hydrological applications.
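
    For reference, a minimal stochastic Ensemble Kalman Filter analysis step of the kind used in the sequential approach (the variational counterpart requires an optimization loop and is not sketched); the state layout and numbers below are illustrative only.

```python
import numpy as np

def enkf_update(X, y, H, obs_err):
    """Stochastic Ensemble Kalman Filter analysis step.

    X: (n_state, n_ens) forecast ensemble; y: observation vector;
    H: linear observation operator; obs_err: observation error std dev.
    """
    rng = np.random.default_rng(3)
    n_ens = X.shape[1]
    R = obs_err**2 * np.eye(len(y))
    Xp = X - X.mean(axis=1, keepdims=True)          # ensemble perturbations
    HXp = H @ Xp
    # Kalman gain with the sample covariance P = Xp Xp^T / (n_ens - 1):
    K = Xp @ HXp.T @ np.linalg.inv(HXp @ HXp.T + (n_ens - 1) * R)
    # Perturbed observations keep the analysis spread statistically correct:
    Y = y[:, None] + obs_err * rng.normal(size=(len(y), n_ens))
    return X + K @ (Y - H @ X)

# Toy example: 3 states (e.g., snow storage, soil moisture, streamflow),
# 50 members, one streamflow observation.
X = np.random.default_rng(4).normal(10.0, 2.0, size=(3, 50))
H = np.array([[0.0, 0.0, 1.0]])
print(enkf_update(X, np.array([9.2]), H, obs_err=0.5).mean(axis=1))
```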

  20. Testing quantum contextuality of continuous-variable states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McKeown, Gerard; Paternostro, Mauro; Paris, Matteo G. A.

    2011-06-15

    We investigate the violation of noncontextuality by a class of continuous-variable states, including variations of entangled coherent states and a two-mode continuous superposition of coherent states. We generalize the Kochen-Specker (KS) inequality discussed by Cabello [A. Cabello, Phys. Rev. Lett. 101, 210401 (2008)] by using effective bidimensional observables implemented through physical operations acting on continuous-variable states, in a way similar to an approach to the falsification of Bell-Clauser-Horne-Shimony-Holt inequalities put forward recently. We test for state-independent violation of KS inequalities under variable degrees of state entanglement and mixedness. We then demonstrate theoretically the violation of a KS inequality for any two-mode state by using pseudospin observables and a generalized quasiprobability function.

  1. A non-parametric postprocessor for bias-correcting multi-model ensemble forecasts of hydrometeorological and hydrologic variables

    NASA Astrophysics Data System (ADS)

    Brown, James; Seo, Dong-Jun

    2010-05-01

    Operational forecasts of hydrometeorological and hydrologic variables often contain large uncertainties, for which ensemble techniques are increasingly used. However, the utility of ensemble forecasts depends on the unbiasedness of the forecast probabilities. We describe a technique for quantifying and removing biases from ensemble forecasts of hydrometeorological and hydrologic variables, intended for use in operational forecasting. The technique makes no a priori assumptions about the distributional form of the variables, which is often unknown or difficult to model parametrically. The aim is to estimate the conditional cumulative distribution function (ccdf) of the observed variable given a (possibly biased) real-time ensemble forecast from one or several forecasting systems (multi-model ensembles). The technique is based on Bayesian optimal linear estimation of indicator variables, and is analogous to indicator cokriging (ICK) in geostatistics. By developing linear estimators for the conditional expectation of the observed variable at many thresholds, ICK provides a discrete approximation of the full ccdf. Since ICK minimizes the conditional error variance of the indicator expectation at each threshold, it effectively minimizes the Continuous Ranked Probability Score (CRPS) when infinitely many thresholds are employed. However, the ensemble members used as predictors in ICK, and other bias-correction techniques, are often highly cross-correlated, both within and between models. Thus, we propose an orthogonal transform of the predictors used in ICK, which is analogous to using their principal components in the linear system of equations. This leads to a well-posed problem in which a minimum number of predictors are used to provide maximum information content in terms of the total variance explained. The technique is used to bias-correct precipitation ensemble forecasts from the NCEP Global Ensemble Forecast System (GEFS), for which independent validation results are presented. Extension to multimodel ensembles from the NCEP GFS and Short Range Ensemble Forecast (SREF) systems is also proposed.
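
    A heavily simplified, single-predictor stand-in for the indicator approach described above, fitted on synthetic forecasts; the actual technique solves a cokriging system over many (orthogonally transformed) ensemble predictors rather than the one-variable regression shown here.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic training data: biased, noisy ensemble forecasts (n days x m
# members) and their verifying observations.
n, m = 2000, 10
truth = rng.gamma(2.0, 5.0, size=n)
ens = truth[:, None] * 1.3 + rng.normal(0.0, 3.0, size=(n, m))

thresholds = np.percentile(truth, [10, 25, 50, 75, 90])

def fit_indicator_weights(ens, truth, thresholds):
    """For each threshold, fit linear weights mapping the raw forecast
    probability to the indicator expectation of the observation."""
    weights = []
    for t in thresholds:
        x = (ens <= t).mean(axis=1)            # forecast P(value <= t)
        A = np.column_stack([np.ones(n), x])
        w, *_ = np.linalg.lstsq(A, (truth <= t).astype(float), rcond=None)
        weights.append(w)
    return np.array(weights)

W = fit_indicator_weights(ens, truth, thresholds)
# Real-time use: map a new ensemble to a bias-corrected discrete ccdf.
new_ens = np.array([12.0, 15.0, 9.0, 14.0, 11.0, 16.0, 13.0, 10.0, 15.0, 12.0])
probs = [w0 + w1 * (new_ens <= t).mean() for (w0, w1), t in zip(W, thresholds)]
print(np.clip(probs, 0.0, 1.0))   # P(obs <= each threshold)
```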

  2. Anesthesia Technique and Outcomes of Mechanical Thrombectomy in Patients With Acute Ischemic Stroke.

    PubMed

    Bekelis, Kimon; Missios, Symeon; MacKenzie, Todd A; Tjoumakaris, Stavropoula; Jabbour, Pascal

    2017-02-01

    The impact of anesthesia technique on the outcomes of mechanical thrombectomy for acute ischemic stroke remains an issue of debate. We investigated the association of general anesthesia with outcomes in patients undergoing mechanical thrombectomy for ischemic stroke. We performed a cohort study involving patients undergoing mechanical thrombectomy for ischemic stroke from 2009 to 2013, who were registered in the New York Statewide Planning and Research Cooperative System database. An instrumental variable (hospital rate of general anesthesia) analysis was used to simulate the effects of randomization and investigate the association of anesthesia technique with case-fatality and length of stay. Among 1174 patients, 441 (37.6%) underwent general anesthesia and 733 (62.4%) underwent conscious sedation. Using an instrumental variable analysis, we identified that general anesthesia was associated with a 6.4% increased case-fatality (95% confidence interval, 1.9%-11.0%) and 8.4 days longer length of stay (95% confidence interval, 2.9-14.0) in comparison to conscious sedation. This corresponded to 15 patients needing to be treated with conscious sedation to prevent 1 death. Our results were robust in sensitivity analysis with mixed effects regression and propensity score-adjusted regression models. Using a comprehensive all-payer cohort of acute ischemic stroke patients undergoing mechanical thrombectomy in New York State, we identified an association of general anesthesia with increased case-fatality and length of stay. These considerations should be taken into account when standardizing acute stroke care. © 2017 American Heart Association, Inc.
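
    A minimal two-stage least squares sketch of the instrumental-variable idea, with the hospital-level treatment rate as the instrument; the data are simulated, and the 0.064 effect size is borrowed from the abstract purely to make the illustration concrete.

```python
import numpy as np

def two_stage_least_squares(y, x, z):
    """Instrumental-variable (2SLS) estimate of the effect of treatment x on
    outcome y, using instrument z (here: hospital rate of general anesthesia).
    """
    Z = np.column_stack([np.ones_like(z), z])
    # Stage 1: predict treatment from the instrument.
    x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    # Stage 2: regress outcome on the predicted treatment.
    X = np.column_stack([np.ones_like(x_hat), x_hat])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

# Simulated cohort with confounding (sicker patients get general anesthesia
# more often AND die more often); z shifts treatment but not outcome directly.
rng = np.random.default_rng(5)
n = 5000
severity = rng.normal(size=n)                      # unobserved confounder
z = rng.uniform(0.2, 0.6, size=n)                  # hospital GA rate
x = (rng.uniform(size=n) < z + 0.2 * severity).astype(float)
y = 0.064 * x + 0.10 * severity + rng.normal(0, 0.1, size=n)
print(two_stage_least_squares(y, x, z))   # ~0.064; naive OLS would be biased
```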

  3. Mechanical Characterization of Bone: State of the Art in Experimental Approaches-What Types of Experiments Do People Do and How Does One Interpret the Results?

    PubMed

    Bailey, Stacyann; Vashishth, Deepak

    2018-06-18

    The mechanical integrity of bone is determined by the direct measurement of bone mechanical properties. This article presents an overview of current, commonly used, and emerging experimental approaches for the mechanical characterization of bone. The key outcome variables of mechanical testing, as well as interpretations of the results in the context of bone structure and biology, are also discussed. Quasi-static tests are the most commonly used for determining the resistance to structural failure by a single load at the organ (whole-bone) level. The resistance to crack initiation or growth, assessed by fracture toughness testing and fatigue loading, offers additional and more direct characterization of tissue material properties. Non-traditional indentation techniques and in situ testing are being increasingly used to probe the material properties of bone ultrastructure. Destructive ex vivo testing or clinical surrogate measures are considered to be the gold standard for estimating fracture risk. The type of mechanical test used for a particular investigation depends on the length scale of interest, where the outcome variables are influenced by the interrelationship between bone structure and composition. Greater sensitivity of mechanical characterization techniques is required to detect changes in bone at the levels modified by aging, disease, and/or pharmaceutical treatment. As such, a number of techniques are now available to aid our understanding of the factors that contribute to fracture risk.

  4. Reproducibility of the exponential rise technique of CO(2) rebreathing for measuring P(v)CO(2) and C(v)CO(2) to non-invasively estimate cardiac output during incremental, maximal treadmill exercise.

    PubMed

    Cade, W Todd; Nabar, Sharmila R; Keyser, Randall E

    2004-05-01

    The purpose of this study was to determine the reproducibility of the indirect Fick method for the measurement of mixed venous carbon dioxide partial pressure (P(v)CO(2)) and venous carbon dioxide content (C(v)CO(2)) for estimation of cardiac output (Q(c)), using the exponential rise method of carbon dioxide rebreathing, during non-steady-state treadmill exercise. Ten healthy participants (eight female and two male) performed three incremental, maximal exercise treadmill tests to exhaustion within 1 week. Non-invasive Q(c) measurements were evaluated at rest, during each 3-min stage, and at peak exercise, across three identical treadmill tests, using the exponential rise technique for measuring mixed venous PCO(2) and CCO(2) and estimating the venous-arterial carbon dioxide content difference (C(v-a)CO(2)). Measurements were divided into measured or estimated variables [heart rate (HR), oxygen consumption (VO(2)), volume of expired carbon dioxide (VCO(2)), end-tidal carbon dioxide (P(ET)CO(2)), arterial carbon dioxide partial pressure (P(a)CO(2)), venous carbon dioxide partial pressure (P(v)CO(2)), and C(v-a)CO(2)] and cardiorespiratory variables derived from the measured variables [Q(c), stroke volume (V(s)), and arteriovenous oxygen difference (C(a-v)O(2))]. In general, the derived cardiorespiratory variables demonstrated acceptable (R=0.61) to high (R>0.80) reproducibility, especially at higher intensities and peak exercise. Measured variables, excluding P(a)CO(2) and C(v-a)CO(2), also demonstrated acceptable (R=0.6 to 0.79) to high reliability. The current study demonstrated acceptable to high reproducibility of the exponential rise indirect Fick method in the measurement of mixed venous PCO(2) and CCO(2) for estimation of Q(c) during incremental treadmill exercise testing, especially at high-intensity and peak exercise.
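
    The indirect Fick computation underlying the technique is simply Qc = VCO2 / (CvCO2 - CaCO2). A minimal sketch with illustrative resting values (not data from the study):

```python
def fick_cardiac_output(vco2_ml_min, cv_co2_ml_dl, ca_co2_ml_dl):
    """Indirect Fick principle: Qc = VCO2 / (CvCO2 - CaCO2).

    vco2_ml_min: CO2 output in mL/min; contents in mL CO2 per dL blood.
    Returns cardiac output in L/min.
    """
    diff_ml_l = (cv_co2_ml_dl - ca_co2_ml_dl) * 10.0   # mL CO2 per L blood
    return vco2_ml_min / diff_ml_l

# Illustrative resting values: VCO2 = 250 mL/min, venous content 52 mL/dL,
# arterial content 48 mL/dL -> Qc ~ 6.25 L/min.
print(fick_cardiac_output(250.0, 52.0, 48.0))
```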

  5. Data-driven Climate Modeling and Prediction

    NASA Astrophysics Data System (ADS)

    Kondrashov, D. A.; Chekroun, M.

    2016-12-01

    Global climate models aim to simulate a broad range of spatio-temporal scales of climate variability with a state vector having many millions of degrees of freedom. On the other hand, while detailed weather prediction out to a few days requires high numerical resolution, it is fairly clear that a major fraction of large-scale climate variability can be predicted in a much lower-dimensional phase space. Low-dimensional models can simulate and predict this fraction of climate variability, provided they are able to account for linear and nonlinear interactions between the modes representing large scales of climate dynamics, as well as their interactions with a much larger number of modes representing fast and small scales. This presentation will highlight several new applications of the Multilayered Stochastic Modeling (MSM) framework [Kondrashov, Chekroun and Ghil, 2015], which has abundantly proven its efficiency in the modeling and real-time forecasting of various climate phenomena. MSM is a data-driven inverse modeling technique that aims to obtain a low-order nonlinear system of prognostic equations driven by stochastic forcing, and estimates both the dynamical operator and the properties of the driving noise from multivariate time series of observations or a high-end model's simulation. MSM leads to a system of stochastic differential equations (SDEs) involving hidden (auxiliary) variables of fast-small scales ranked by layers, which interact with the macroscopic (observed) variables of large-slow scales to model the dynamics of the latter, and thus convey memory effects. New MSM climate applications focus on the development of computationally efficient low-order models by using data-adaptive decomposition methods that convey memory effects by time-embedding techniques, such as Multichannel Singular Spectrum Analysis (M-SSA) [Ghil et al. 2002] and the recently developed Data-Adaptive Harmonic (DAH) decomposition method [Chekroun and Kondrashov, 2016]. In particular, new results on DAH-MSM modeling and prediction of Arctic sea ice, as well as decadal predictions of near-surface Earth temperatures, will be presented.

  6. A stochastic global identification framework for aerospace structures operating under varying flight states

    NASA Astrophysics Data System (ADS)

    Kopsaftopoulos, Fotis; Nardari, Raphael; Li, Yu-Hung; Chang, Fu-Kuo

    2018-01-01

    In this work, a novel data-based stochastic "global" identification framework is introduced for aerospace structures operating under varying flight states and uncertainty. In this context, the term "global" refers to the identification of a model that is capable of representing the structure under any admissible flight state based on data recorded from a sample of these states. The proposed framework is based on stochastic time-series models for representing the structural dynamics and aeroelastic response under multiple flight states, with each state characterized by several variables, such as the airspeed, angle of attack, altitude and temperature, forming a flight state vector. The method's cornerstone lies in the new class of Vector-dependent Functionally Pooled (VFP) models which allow the explicit analytical inclusion of the flight state vector into the model parameters and, hence, system dynamics. This is achieved via the use of functional data pooling techniques for optimally treating - as a single entity - the data records corresponding to the various flight states. In this proof-of-concept study the flight state vector is defined by two variables, namely the airspeed and angle of attack of the vehicle. The experimental evaluation and assessment is based on a prototype bio-inspired self-sensing composite wing that is subjected to a series of wind tunnel experiments under multiple flight states. Distributed micro-sensors in the form of stretchable sensor networks are embedded in the composite layup of the wing in order to provide the sensing capabilities. Experimental data collected from piezoelectric sensors are employed for the identification of a stochastic global VFP model via appropriate parameter estimation and model structure selection methods. The estimated VFP model parameters constitute two-dimensional functions of the flight state vector defined by the airspeed and angle of attack. The identified model is able to successfully represent the wing's aeroelastic response under the admissible flight states via a minimum number of estimated parameters compared to standard identification approaches. The obtained results demonstrate the high accuracy and effectiveness of the proposed global identification framework, thus constituting a first step towards the next generation of "fly-by-feel" aerospace vehicles with state awareness capabilities.
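
    The functional pooling idea can be sketched as follows: an autoregressive model whose coefficients are polynomial functions of the flight state vector k = (airspeed, angle of attack), estimated by a single least-squares problem pooled over all recorded flight states. The basis, orders, and data below are illustrative assumptions, not the paper's actual estimator:

        import numpy as np

        rng = np.random.default_rng(1)

        def basis(k):
            """Bivariate polynomial basis of the flight state k = (V, alpha)."""
            V, a = k
            return np.array([1.0, V, a, V * a, V**2, a**2])

        # Synthetic records y measured under a sample of flight states.
        states = [(V, a) for V in (10.0, 15.0, 20.0) for a in (2.0, 6.0, 10.0)]
        T = 400                                      # samples per record
        records = {}
        for k in states:
            a1, a2 = 0.5 + 0.01 * k[0], -0.3 + 0.02 * k[1]  # state-dependent AR(2)
            y = np.zeros(T)
            for t in range(2, T):
                y[t] = a1 * y[t-1] + a2 * y[t-2] + 0.1 * rng.standard_normal()
            records[k] = y

        # Pooled regression: y_t = sum_i sum_j a_ij f_j(k) y_{t-i} + e_t,
        # treating the records from all flight states as a single entity.
        rows, targets = [], []
        for k, y in records.items():
            f = basis(k)
            for t in range(2, T):
                rows.append(np.concatenate([y[t-1] * f, y[t-2] * f]))
                targets.append(y[t])
        theta = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)[0]

        # AR coefficients at an unseen flight state are smooth functions of k.
        k_new = (17.0, 4.0)
        print("a1:", theta[:6] @ basis(k_new), "a2:", theta[6:] @ basis(k_new))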

  7. Aircraft model prototypes which have specified handling-quality time histories

    NASA Technical Reports Server (NTRS)

    Johnson, S. H.

    1978-01-01

    Several techniques for obtaining linear constant-coefficient airplane models from specified handling-quality time histories are discussed. The pseudodata method solves the basic problem, yields specified eigenvalues, and accommodates state-variable transfer-function zero suppression. The algebraic equations to be solved are bilinear, at worst. The disadvantages are reduced generality and no assurance that the resulting model will be airplane-like in detail. The method is fully illustrated for a fourth-order stability-axis small-motion model with three lateral handling-quality time histories specified. The FORTRAN program which obtains and verifies the model is included and fully documented.

  8. Determining the Compositions of Extraterrestrial Lava Flows

    NASA Technical Reports Server (NTRS)

    Fink, Jonathan H.

    2002-01-01

    The primary purpose of this research project has been to develop techniques that allow the emplacement conditions of volcanic landforms on other planets to be related to attributes that can be remotely detected with available instrumentation. The underlying assumption of our work is that the appearance of a volcano, lava flow, debris avalanche, or exhumed magmatic intrusion can provide clues about the conditions operating when that feature was first emplaced. Magma composition, amount of crustal heat flow, state of tectonic stress, and climatic conditions are among the important variables that can be inferred from the morphology and texture of an igneous body.

  9. A fast non-contact imaging photoplethysmography method using a tissue-like model

    NASA Astrophysics Data System (ADS)

    McDuff, Daniel J.; Blackford, Ethan B.; Estepp, Justin R.; Nishidate, Izumi

    2018-02-01

    Imaging photoplethysmography (iPPG) allows non-contact, concomitant measurement and visualization of peripheral blood flow using just an RGB camera. Most iPPG methods require a window of temporal data and complex computation, which makes real-time measurement and spatial visualization impossible. We present a fast, "window-less", non-contact imaging photoplethysmography method, based on a tissue-like model of the skin, that allows accurate measurement of heart rate and heart rate variability parameters. The error in heart rate estimates is equivalent to that of state-of-the-art techniques, and computation is much faster.

  10. Digital robust active control law synthesis for large order flexible structure using parameter optimization

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, V.

    1988-01-01

    A generic procedure for the parameter optimization of a digital control law for a large-order flexible flight vehicle or large space structure modeled as a sampled data system is presented. A linear quadratic Gaussian type cost function was minimized, while satisfying a set of constraints on the steady-state rms values of selected design responses, using a constrained optimization technique to meet multiple design requirements. Analytical expressions for the gradients of the cost function and the design constraints on mean square responses with respect to the control law design variables are presented.

  11. Dynamical complexity changes during two forms of meditation

    NASA Astrophysics Data System (ADS)

    Li, Jin; Hu, Jing; Zhang, Yinhong; Zhang, Xiaofeng

    2011-06-01

    Detection of dynamical complexity changes in natural and man-made systems has deep scientific and practical meaning. We use the base-scale entropy method to analyze dynamical complexity changes in heart rate variability (HRV) series during specific traditional forms of Chinese Chi and Kundalini Yoga meditation techniques in healthy young adults. The results show that dynamical complexity decreases in meditation states for both forms of meditation. Meanwhile, we detected changes in the probability distribution of m-words during meditation and explained these changes using the probability distribution of the sine function. The base-scale entropy method may be used on a wider range of physiologic signals.
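
    A sketch of the base-scale entropy computation as we read the method: each m-point word is symbolized against its own mean and "base scale" (the root mean square of successive differences within the word), and a Shannon entropy is taken over the word histogram. The exact symbolization rule and the factor a = 1 are assumptions here, not verified constants:

        import numpy as np
        from collections import Counter

        def base_scale_entropy(x, m=4, a=1.0):
            """Base-scale entropy of series x; symbol rule and a=1 are assumptions."""
            words = []
            for i in range(len(x) - m + 1):
                w = x[i:i + m]
                mu = w.mean()
                bs = np.sqrt(np.mean(np.diff(w) ** 2))   # "base scale" of the word
                sym = np.where(w > mu + a * bs, 1,
                      np.where(w > mu, 0,
                      np.where(w > mu - a * bs, 2, 3)))
                words.append(tuple(sym))
            counts = np.array(list(Counter(words).values()), dtype=float)
            p = counts / counts.sum()
            return -(p * np.log2(p)).sum()

        rng = np.random.default_rng(9)
        rr_rest = 0.8 + 0.05 * rng.standard_normal(1000)          # irregular proxy
        rr_meditation = 0.8 + 0.05 * np.sin(np.arange(1000) / 8)  # more regular
        print(f"rest:       {base_scale_entropy(rr_rest):.2f} bits")
        print(f"meditation: {base_scale_entropy(rr_meditation):.2f} bits")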

  12. Electro-impulse de-icing electrodynamic solution by discrete elements

    NASA Technical Reports Server (NTRS)

    Bernhart, W. D.; Schrag, R. L.

    1988-01-01

    This paper describes a technique for analyzing the electrodynamic phenomena associated with electro-impulse deicing. The analysis is done in the time domain and utilizes a discrete element formulation concept expressed in state variable form. Calculated results include coil current, eddy currents in the target (aircraft leading edge skin), pressure distribution on the target, and total force and impulse on the target. Typical results are presented and described. Some comparisons are made between calculated and experimental results, and also between calculated values from other theoretical approaches. Application to the problem of a nonrigid target is treated briefly.

  13. The initiation, propagation, and effect of matrix microcracks in cross-ply and related laminates

    NASA Technical Reports Server (NTRS)

    Nairn, John A.; Hu, Shoufeng; Liu, Siulie; Bark, Jong

    1991-01-01

    Recently, a variational mechanics approach was used to determine the thermoelastic stress state in cracked laminates. Described here is a generalization of the variational mechanics techniques to handle other cross-ply laminates, related laminates, and to account for delaminations emanating from microcrack tips. Microcracking experiments on Hercules 3501-6/AS4 carbon fiber/epoxy laminates show a staggered cracking pattern. These results can be explained by the variational mechanics analysis. The analysis of delaminations emanating from microcrack tips has resulted in predictions about the structural and material variables controlling competition between microcracking and delamination failure modes.

  14. Gelation of Regenerated Fibroin Solution

    NASA Astrophysics Data System (ADS)

    Nagarkar, Shailesh; Lele, Ashish; Chassenieux, Christophe; Nicolai, Taco; Durand, Dominique

    2008-07-01

    Silk fibroin is a high molecular weight multiblock amphiphilic protein known for its ability to form high-strength fibers. It is also biocompatible; silk sutures have been used traditionally for many centuries. Recently, there has been much interest in making silk hydrogels for applications ranging from tissue engineering to controlled delivery. Fibroin gels can be formed from aqueous solutions by changing one or more state variables such as pH, temperature and ionic strength. In this work we present our investigations on the gelation of aqueous fibroin solutions derived from Bombyx mori silk using light scattering, confocal microscopy and rheological techniques.

  15. Bounds on internal state variables in viscoplasticity

    NASA Technical Reports Server (NTRS)

    Freed, Alan D.

    1993-01-01

    A typical viscoplastic model will introduce up to three types of internal state variables in order to properly describe transient material behavior; they are as follows: the back stress, the yield stress, and the drag strength. Different models employ different combinations of these internal variables--their selection and description of evolution being largely dependent on application and material selection. Under steady-state conditions, the internal variables cease to evolve and therefore become related to the external variables (stress and temperature) through simple functional relationships. A physically motivated hypothesis is presented that links the kinetic equation of viscoplasticity with that of creep under steady-state conditions. From this hypothesis one determines how the internal variables relate to one another at steady state, but most importantly, one obtains bounds on the magnitudes of stress and back stress, and on the yield stress and drag strength.

  16. Sensitivity analysis of infectious disease models: methods, advances and their application

    PubMed Central

    Wu, Jianyong; Dhingra, Radhika; Gambhir, Manoj; Remais, Justin V.

    2013-01-01

    Sensitivity analysis (SA) can aid in identifying influential model parameters and optimizing model structure, but infectious disease modelling has yet to adopt advanced SA techniques that are capable of providing considerable insights over traditional methods. We investigate five global SA methods—scatter plots, the Morris and Sobol’ methods, Latin hypercube sampling-partial rank correlation coefficient and the sensitivity heat map method—and detail their relative merits and pitfalls when applied to a microparasite (cholera) and macroparasite (schistosomiasis) transmission model. The methods investigated yielded similar results with respect to identifying influential parameters, but offered specific insights that vary by method. The classical methods differed in their ability to provide information on the quantitative relationship between parameters and model output, particularly over time. The heat map approach provides information about the group sensitivity of all model state variables, and the parameter sensitivity spectrum obtained using this method reveals the sensitivity of all state variables to each parameter over the course of the simulation period, especially valuable for expressing the dynamic sensitivity of a microparasite epidemic model to its parameters. A summary comparison is presented to aid infectious disease modellers in selecting appropriate methods, with the goal of improving model performance and design. PMID:23864497
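
    The Sobol' workflow surveyed above can be sketched with the SALib package (its API as we recall it) applied to a toy SIR epidemic model; the parameter names and bounds are illustrative:

        import numpy as np
        from SALib.sample import saltelli
        from SALib.analyze import sobol

        # Toy SIR model; the output is the epidemic peak prevalence.
        def sir_peak(beta, gamma, i0, days=200, dt=0.1):
            s, i = 1.0 - i0, i0
            peak = i
            for _ in range(int(days / dt)):
                ds = -beta * s * i
                di = beta * s * i - gamma * i
                s, i = s + ds * dt, i + di * dt
                peak = max(peak, i)
            return peak

        problem = {
            "num_vars": 3,
            "names": ["beta", "gamma", "i0"],
            "bounds": [[0.2, 1.0], [0.05, 0.5], [1e-4, 1e-2]],
        }

        X = saltelli.sample(problem, 256)             # N*(2D+2) parameter sets
        Y = np.array([sir_peak(*row) for row in X])
        Si = sobol.analyze(problem, Y)
        print(dict(zip(problem["names"], Si["S1"])))  # first-order indices
        print(dict(zip(problem["names"], Si["ST"])))  # total-order indices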

  17. Inverse estimation of parameters for an estuarine eutrophication model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shen, J.; Kuo, A.Y.

    1996-11-01

    An inverse model of an estuarine eutrophication model with eight state variables is developed. It provides a framework to estimate parameter values of the eutrophication model by assimilation of concentration data of these state variables. The inverse model, using the variational technique in conjunction with a vertical two-dimensional eutrophication model, is general enough to be applicable to aid model calibration. The formulation is illustrated by conducting a series of numerical experiments for the tidal Rappahannock River, a western shore tributary of the Chesapeake Bay. The numerical experiments of short-period model simulations with different hypothetical data sets and long-period model simulations with limited hypothetical data sets demonstrated that the inverse model can be satisfactorily used to estimate parameter values of the eutrophication model. The experiments also showed that the inverse model is useful to address some important questions, such as uniqueness of the parameter estimation and data requirements for model calibration. Because of the complexity of the eutrophication system, degradation of the speed of convergence may occur. Two major factors which cause this degradation are cross effects among parameters and the multiple scales involved in the parameter system.

  18. Implementation of Kalman filter algorithm on models reduced using singular perturbation approximation method and its application to measurement of water level

    NASA Astrophysics Data System (ADS)

    Rachmawati, Vimala; Khusnul Arif, Didik; Adzkiya, Dieky

    2018-03-01

    Systems found in the real world often have a large order. As a result, the mathematical model has many state variables, which increases computation time. In addition, generally not all variables are known, so estimation is needed to determine quantities of the system that cannot be measured directly. In this paper, we discuss model reduction and the estimation of state variables in a river system to measure the water level. Model reduction approximates a system by one of lower order that introduces no significant error and has dynamic behaviour similar to that of the original system. The Singular Perturbation Approximation method is a model reduction method in which all state variables of the equilibrium system are partitioned into fast and slow modes. The Kalman filter algorithm is then used to estimate the state variables of stochastic dynamic systems, where estimates are computed by predicting state variables based on the system dynamics and correcting them with measurement data. Kalman filters are used to estimate state variables in both the original system and the reduced system. We then compare the state estimates and the computation times of the original and reduced systems.
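
    A minimal linear Kalman filter of the kind applied to the (original or reduced) model might look as follows; the system matrices below are illustrative stand-ins, not the paper's river model:

        import numpy as np

        rng = np.random.default_rng(2)

        # Illustrative system: x_{k+1} = A x_k + w_k,  z_k = H x_k + v_k.
        A = np.array([[0.98, 0.05], [0.0, 0.95]])  # e.g. a 2-state reduced model
        H = np.array([[1.0, 0.0]])                 # only the water level is observed
        Q = 1e-4 * np.eye(2)                       # process-noise covariance
        R = np.array([[1e-2]])                     # measurement-noise covariance

        def kalman_step(x, P, z):
            # Predict.
            x = A @ x
            P = A @ P @ A.T + Q
            # Update with measurement z.
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ (z - H @ x)
            P = (np.eye(len(x)) - K @ H) @ P
            return x, P

        # Simulate the "true" system and filter its noisy level measurements.
        x_true = np.array([1.0, 0.5])
        x_est, P = np.zeros(2), np.eye(2)
        for _ in range(100):
            x_true = A @ x_true + rng.multivariate_normal(np.zeros(2), Q)
            z = H @ x_true + rng.multivariate_normal(np.zeros(1), R)
            x_est, P = kalman_step(x_est, P, z)
        print("final estimate:", x_est, "truth:", x_true)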

  19. Integrating Ecosystem Carbon Dynamics into State-and-Transition Simulation Models of Land Use/Land Cover Change

    NASA Astrophysics Data System (ADS)

    Sleeter, B. M.; Daniel, C.; Frid, L.; Fortin, M. J.

    2016-12-01

    State-and-transition simulation models (STSMs) provide a general approach for incorporating uncertainty into forecasts of landscape change. Using a Monte Carlo approach, STSMs generate spatially-explicit projections of the state of a landscape based upon probabilistic transitions defined between states. While STSMs are based on the basic principles of Markov chains, they have additional properties that make them applicable to a wide range of questions and types of landscapes. A current limitation of STSMs is that they are only able to track the fate of discrete state variables, such as land use/land cover (LULC) classes. There are some landscape modelling questions, however, for which continuous state variables - for example carbon biomass - are also required. Here we present a new approach for integrating continuous state variables into spatially-explicit STSMs. Specifically we allow any number of continuous state variables to be defined for each spatial cell in our simulations; the value of each continuous variable is then simulated forward in discrete time as a stochastic process based upon defined rates of change between variables. These rates can be defined as a function of the realized states and transitions of each cell in the STSM, thus providing a connection between the continuous variables and the dynamics of the landscape. We demonstrate this new approach by (1) developing a simple IPCC Tier 3 compliant model of ecosystem carbon biomass, where the continuous state variables are defined as terrestrial carbon biomass pools and the rates of change as carbon fluxes between pools, and (2) integrating this carbon model with an existing LULC change model for the state of Hawaii, USA.
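
    A hedged sketch of the coupling: each cell carries a discrete land-cover state that evolves by Monte Carlo transitions, plus continuous carbon pools whose fluxes depend on the realized state and transitions. All rates and pool sizes are invented for illustration:

        import numpy as np

        rng = np.random.default_rng(3)

        # Annual Markov transition probabilities between discrete LULC states.
        P = {"forest": {"agriculture": 0.01},
             "agriculture": {"forest": 0.005}}

        N_CELLS, YEARS = 1000, 50
        state = ["forest"] * N_CELLS                 # discrete state per cell
        live = np.full(N_CELLS, 120.0)               # continuous pools (tC/ha)
        soil = np.full(N_CELLS, 80.0)

        for _ in range(YEARS):
            for c in range(N_CELLS):
                # Discrete part: Monte Carlo state transition.
                for dest, p in P[state[c]].items():
                    if rng.random() < p:
                        if state[c] == "forest":     # clearing moves live C to soil
                            soil[c] += 0.3 * live[c]
                            live[c] *= 0.1
                        state[c] = dest
                        break
                # Continuous part: state-dependent growth and decay fluxes.
                growth = 2.0 if state[c] == "forest" else 0.5
                live[c] += growth - 0.01 * live[c]
                soil[c] += 0.005 * live[c] - 0.02 * soil[c]

        print(f"mean live C: {live.mean():.1f} tC/ha, mean soil C: {soil.mean():.1f} tC/ha")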

  20. An investigation of constraint-based component-modeling for knowledge representation in computer-aided conceptual design

    NASA Technical Reports Server (NTRS)

    Kolb, Mark A.

    1990-01-01

    Originally, computer programs for engineering design focused on detailed geometric design. Later, computer programs for algorithmically performing the preliminary design of specific well-defined classes of objects became commonplace. However, due to the need for extreme flexibility, it appears unlikely that conventional programming techniques will prove fruitful in developing computer aids for engineering conceptual design. The use of symbolic processing techniques, such as object-oriented programming and constraint propagation, facilitate such flexibility. Object-oriented programming allows programs to be organized around the objects and behavior to be simulated, rather than around fixed sequences of function- and subroutine-calls. Constraint propagation allows declarative statements to be understood as designating multi-directional mathematical relationships among all the variables of an equation, rather than as unidirectional assignments to the variable on the left-hand side of the equation, as in conventional computer programs. The research has concentrated on applying these two techniques to the development of a general-purpose computer aid for engineering conceptual design. Object-oriented programming techniques are utilized to implement a user-extensible database of design components. The mathematical relationships which model both geometry and physics of these components are managed via constraint propagation. In addition to this component-based hierarchy, special-purpose data structures are provided for describing component interactions and supporting state-dependent parameters. In order to investigate the utility of this approach, a number of sample design problems from the field of aerospace engineering were implemented using the prototype design tool, Rubber Airplane. The additional level of organizational structure obtained by representing design knowledge in terms of components is observed to provide greater convenience to the program user, and to result in a database of engineering information which is easier both to maintain and to extend.
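
    Multi-directional constraint propagation can be illustrated with a toy product constraint a * b = c that solves for whichever variable is missing, in the spirit of the declarative relationships described above (this is not the Rubber Airplane implementation):

        class Variable:
            def __init__(self, name, value=None):
                self.name, self.value, self.constraints = name, value, []

            def set(self, value):
                self.value = value
                for c in self.constraints:      # wake every attached constraint
                    c.propagate()

        class Product:
            """Multi-directional constraint a * b = c."""
            def __init__(self, a, b, c):
                self.a, self.b, self.c = a, b, c
                for v in (a, b, c):
                    v.constraints.append(self)

            def propagate(self):
                a, b, c = self.a, self.b, self.c
                if a.value is not None and b.value is not None and c.value is None:
                    c.set(a.value * b.value)
                elif c.value is not None and a.value is not None and b.value is None:
                    b.set(c.value / a.value)
                elif c.value is not None and b.value is not None and a.value is None:
                    a.set(c.value / b.value)

        # Lift = (0.5*rho*V^2) * S * CL, wired as two chained product constraints.
        # Setting any two quantities feeding a constraint determines the third.
        dyn_pressure = Variable("0.5*rho*V^2")
        S = Variable("S")
        qS = Variable("q*S")
        CL = Variable("CL")
        lift = Variable("lift")
        Product(dyn_pressure, S, qS)
        Product(qS, CL, lift)

        dyn_pressure.set(7350.0)    # Pa (assumed flight condition)
        S.set(16.0)                 # m^2
        lift.set(11000.0)           # N; CL is now deduced "backwards"
        print("CL =", CL.value)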

  1. Trace elements in lake sediments measured by the PIXE technique

    NASA Astrophysics Data System (ADS)

    Gatti, Luciana V.; Mozeto, Antônio A.; Artaxo, Paulo

    1999-04-01

    Lakes are ecosystems with a great potential for metal accumulation in sediments due to their depositional characteristics. Total concentration of trace elements was measured on a 50 cm long sediment core from the Infernão Lake, an oxbow lake of the Moji-Guaçu River basin in the state of São Paulo, Brazil. Dating of the core shows sediment layers up to 180 years old. The use of the PIXE technique for elemental analysis avoids the traditional acid digestion procedure common in other techniques. The multielemental characteristic of PIXE allows simultaneous determination of about 20 elements in the sediment samples, such as Al, Si, P, S, Cl, K, Ca, Ti, V, Cr, Mn, Fe, Ni, Cu, Zn, Rb, Sr, Zr, Ba, and Pb. Average values for the elemental composition were found to be similar to the bulk crustal composition. The lake flooding pattern strongly influences the time series of the elemental profiles. Factor analysis of the elemental variability shows five factors. Two of the factors represent the mineralogical matrix; the others represent the organic component, a factor with lead, and another loaded with chromium. The mineralogical component consists of elements such as Fe, Al, V, Ti, Mn, Ni, K, Zr, Sr, Cu and Zn. The variability of Si is explained by two distinct factors because it is influenced by two different sources, aluminum-silicates and quartz, for which the effects of inundation differ. The organic matter is strongly associated with calcium, and also bound with S, Zn, Cu and P. Lead and chromium appear as separate factors, although the evidence for their anthropogenic origin is not clear. The techniques developed for sample preparation and PIXE analysis proved advantageous and provided very good reproducibility and accuracy.

  2. Evaluation of the Williams-type spring wheat model in North Dakota and Minnesota

    NASA Technical Reports Server (NTRS)

    Leduc, S. (Principal Investigator)

    1982-01-01

    The Williams-type model, developed along the lines of earlier models by C.V.D. Williams, uses monthly temperature and precipitation data as well as soil and topographic variables to predict the yield of the spring wheat crop. The models are developed statistically using the regression technique. Eight model characteristics are examined in the evaluation of the model. Evaluation is at the crop reporting district level, the state level, and for the entire region. A ten-year bootstrap test was the basis of the statistical evaluation. Both the accuracy of the modeled yields and the current indication of their reliability could be improved. There is great variability in the bias measured over the districts, but a slight overall positive bias. The model estimates for the east-central crop reporting district in Minnesota are not accurate, and the estimates of yield for 1974 were inaccurate for all of the models.

  3. Airfoil Design and Optimization by the One-Shot Method

    NASA Technical Reports Server (NTRS)

    Kuruvila, G.; Taasan, Shlomo; Salas, M. D.

    1995-01-01

    An efficient numerical approach for the design of optimal aerodynamic shapes is presented in this paper. The objective of any optimization problem is to find the optimum of a cost function subject to a certain state equation (governing equation of the flow field) and certain side constraints. As in classical optimal control methods, the present approach introduces a costate variable (Lagrange multiplier) to evaluate the gradient of the cost function. High efficiency in reaching the optimum solution is achieved by using a multigrid technique and updating the shape in a hierarchical manner such that smooth (low-frequency) changes are done separately from high-frequency changes. Thus, the design variables are changed on a grid where their changes produce nonsmooth (high-frequency) perturbations that can be damped efficiently by the multigrid. The cost of solving the optimization problem is approximately two to three times the cost of the equivalent analysis problem.
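
    The costate idea can be made concrete on a tiny linear stand-in for the flow solver: state K(q)u = f, cost J = 1/2|u - u*|^2, and the gradient dJ/dq = -lambda^T (dK/dq) u obtained from one adjoint solve. The matrices and the plain gradient-descent loop are illustrative; the full one-shot method additionally interleaves multigrid updates, omitted here:

        import numpy as np

        # "Flow solver" stand-in: state equation K(q) u = f, K(q) = K0 + q * K1.
        K0 = np.array([[4.0, -1.0, 0.0],
                       [-1.0, 4.0, -1.0],
                       [0.0, -1.0, 4.0]])
        K1 = np.eye(3)
        f = np.array([1.0, 2.0, 1.0])
        u_target = np.array([0.30, 0.55, 0.30])       # desired "flow" state

        def cost_and_gradient(q):
            K = K0 + q * K1
            u = np.linalg.solve(K, f)                 # state solve
            J = 0.5 * np.sum((u - u_target) ** 2)
            lam = np.linalg.solve(K.T, u - u_target)  # adjoint (costate) solve
            dJdq = -lam @ (K1 @ u)                    # dJ/dq = -lambda^T (dK/dq) u
            return J, dJdq

        q = 0.0
        for _ in range(100):
            J, g = cost_and_gradient(q)
            q -= 2.0 * g                              # fixed step; no line search
            if abs(g) < 1e-10:
                break
        print(f"q* = {q:.4f}, J = {J:.2e}")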

  4. Computing in the presence of soft bit errors. [caused by single event upset on spacecraft

    NASA Technical Reports Server (NTRS)

    Rasmussen, R. D.

    1984-01-01

    It is shown that single-event upsets (SEUs) due to cosmic rays are a significant source of single-bit errors in spacecraft computers. The physical mechanism of SEU, electron-hole generation by means of Linear Energy Transfer (LET), is discussed with reference made to the results of a study of the environmental effects on computer systems of the Galileo spacecraft. Techniques for making software more tolerant of cosmic ray effects are considered, including: reducing the number of registers used by the software; continuity testing of variables; redundant execution of major procedures for error detection; and encoding state variables to detect single-bit changes. Attention is also given to design modifications which may reduce the cosmic ray exposure of on-board hardware. These modifications include: shielding components operating in LEO; removing low-power Schottky parts; and the use of CMOS diodes. The SEU parameters of different electronic components are listed in a table.

  5. Experimental evidence of exciton-plasmon coupling in densely packed dye doped core-shell nanoparticles obtained via microfluidic technique

    NASA Astrophysics Data System (ADS)

    De Luca, A.; Iazzolino, A.; Salmon, J.-B.; Leng, J.; Ravaine, S.; Grigorenko, A. N.; Strangi, G.

    2014-09-01

    The interplay between plasmons and excitons in bulk metamaterials is investigated by performing spectroscopic studies, including variable-angle pump-probe ellipsometry. Gain-functionalized gold nanoparticles have been densely packed through a microfluidic chip, representing a scalable process towards bulk metamaterials based on a self-assembly approach. Chromophores placed at the heart of the plasmonic subunits ensure exciton-plasmon coupling to convey excitation energy to the quasi-static electric field of the plasmon states. The overall complex polarizability of the system, probed by variable-angle spectroscopic ellipsometry, shows a significant modification under optical excitation, as demonstrated by the behavior of the ellipsometric angles Ψ and Δ as a function of suitable excitation fields. The plasmon resonances observed in densely packed gain-functionalized core-shell gold nanoparticles represent a promising step towards enabling a wide range of electromagnetic properties and fascinating applications of plasmonic bulk systems for advanced optical materials.

  6. Evaluation of the adequacy of information from research on infant mortality in Recife, Pernambuco, Brazil.

    PubMed

    Oliveira, Conceição Maria de; Guimarães, Maria José Bezerra; Bonfim, Cristine Vieira do; Frias, Paulo Germano; Antonino, Verônica Cristina Sposito; Guimarães, Aline Luzia Sampaio; Medeiros, Zulma Maria

    2018-03-01

    This study is an evaluation of infant death investigations in Recife, Pernambuco (PE), Brazil. It is a cross-sectional study with 120 variables grouped into six dimensions (prenatal, birth, child care, family characteristics, occurrence of death, and conclusions and recommendations), weighted by a consensus technique. Each investigation was classified as adequate, partially adequate or inadequate according to a composite indicator assessment (ICA). There was dissension on 11 variables (nine in the prenatal dimension, one in labor and birth, and one in conclusions and recommendations). Of the 568 deaths studied, 56.2% had adequate investigations. The occurrence of death was the best-evaluated dimension and prenatal the poorest. The preparation of the ICA enables professionals and managers of child health policies to identify bottlenecks in the investigation of infant deaths for better targeting of actions, contributing to the discussion about surveillance in other cities and states.

  7. Two-factor theory – at the intersection of health care management and patient satisfaction

    PubMed Central

    Bohm, Josef

    2012-01-01

    Using data obtained from the 2004 Joint Canadian/United States Survey of Health, an analytic model using principles derived from Herzberg’s motivational hygiene theory was developed for evaluating patient satisfaction with health care. The analysis sought to determine whether survey variables associated with consumer satisfaction act as Herzberg factors and contribute to survey participants’ self-reported levels of health care satisfaction. To validate the technique, data from the survey were analyzed using logistic regression methods and then compared with results obtained from the two-factor model. The findings indicate a high degree of correlation between the two methods. The two-factor analytical methodology offers advantages due to its ability to identify whether a factor assumes a motivational or hygienic role and assesses the influence of a factor within select populations. Its ease of use makes this methodology well suited for assessment of multidimensional variables. PMID:23055755

  9. Airfoil optimization by the one-shot method

    NASA Technical Reports Server (NTRS)

    Kuruvila, G.; Taasan, Shlomo; Salas, M. D.

    1994-01-01

    An efficient numerical approach for the design of optimal aerodynamic shapes is presented in this paper. The objective of any optimization problem is to find the optimum of a cost function subject to a certain state equation (governing equation of the flow field) and certain side constraints. As in classical optimal control methods, the present approach introduces a costate variable (Lagrange multiplier) to evaluate the gradient of the cost function. High efficiency in reaching the optimum solution is achieved by using a multigrid technique and updating the shape in a hierarchical manner such that smooth (low-frequency) changes are done separately from high-frequency changes. Thus, the design variables are changed on a grid where their changes produce nonsmooth (high-frequency) perturbations that can be damped efficiently by the multigrid. The cost of solving the optimization problem is approximately two to three times the cost of the equivalent analysis problem.

  10. Does Music Therapy Improve Anxiety and Depression in Alzheimer's Patients?

    PubMed

    de la Rubia Ortí, José Enrique; García-Pardo, María Pilar; Iranzo, Carmen Cabañés; Madrigal, José Joaquin Cerón; Castillo, Sandra Sancho; Rochina, Mariano Julián; Gascó, Vicente Javier Prado

    2018-01-01

    To evaluate the effectiveness of a short protocol of music therapy as a tool to reduce stress and improve the emotional state in patients with mild Alzheimer's disease. A sample of 25 patients with mild Alzheimer's received a music therapy session lasting 60 min. Before and after the therapy, patient saliva was collected to quantify the level of salivary cortisol using the Enzyme-Linked ImmunoSorbent Assay (ELISA) immunoassay technique, and a questionnaire was completed to measure anxiety and depression (Hospital Anxiety and Depression Scale). The results show that the application of this therapy lowers the level of stress and significantly decreases depression and anxiety, establishing a linear correlation between the variation of these variables and the variation of cortisol. A short protocol of music therapy can be an alternative therapy for improving emotional variables in Alzheimer's patients.

  11. Advanced Techniques in Pulmonary Function Test Analysis Interpretation and Diagnosis

    PubMed Central

    Gildea, T.J.; Bell, C. William

    1980-01-01

    The Pulmonary Functions Analysis and Diagnostic System is an advanced clinical processing system developed for use at the Pulmonary Division, Department of Medicine at the University of Nebraska Medical Center. The system generates comparative results and diagnostic impressions for a variety of routine and specialized pulmonary function test data. Routine evaluation deals with static lung volumes, breathing mechanics, diffusing capacity, and blood gases, while specialized tests include lung compliance studies, small airways dysfunction studies and dead space to tidal volume ratios. Output includes tabular results of normal vs. observed values, clinical impressions and commentary and, where indicated, a diagnostic impression. A number of pulmonary physiological and state variables are entered or sampled (analog to digital), with periodic status reports generated for the test supervisor. Among the various physiological variables sampled are respiratory frequency, minute ventilation, oxygen consumption, carbon dioxide production, and arterial oxygen saturation.

  12. Spatial partitioning of environmental correlates of avian biodiversity in the conterminous United States

    USGS Publications Warehouse

    O'Connor, R.J.; Jones, M.T.; White, D.; Hunsaker, C.; Loveland, Tom; Jones, Bruce; Preston, E.

    1996-01-01

    Classification and regression tree (CART) analysis was used to create hierarchically organized models of the distribution of bird species richness across the conterminous United States. Species richness data were taken from the Breeding Bird Survey and were related to climatic and land use data. We used a systematic spatial grid of approximately 12,500 hexagons, each approximately 640 square kilometres in area. Within each hexagon land use was characterized by the Loveland et al. land cover classification based on Advanced Very High Resolution Radiometer (AVHRR) data from NOAA polar orbiting meteorological satellites. These data were aggregated to yield fourteen land classes equivalent to an Anderson level II coverage; urban areas were added from the Digital Chart of the World. Each hexagon was characterized by climate data and landscape pattern metrics calculated from the land cover. A CART model then related the variation in species richness across the 1162 hexagons for which bird species richness data were available to the independent variables, yielding an R2-type goodness of fit metric of 47.5% deviance explained. The resulting model recognized eleven groups of hexagons, with species richness within each group determined by unique sequences of hierarchically constrained independent variables. Within the hierarchy, climate data accounted for more variability in the bird data, followed by land cover proportion, and then pattern metrics. The model was then used to predict species richness in all 12,500 hexagons of the conterminous United States yielding a map of the distribution of these eleven classes of bird species richness as determined by the environmental correlates. The potential for using this technique to interface biogeographic theory with the hierarchy theory of ecology is discussed. © 1996 Blackwell Science Ltd.
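
    The CART workflow can be sketched with scikit-learn's regression tree on synthetic hexagon-level predictors; the feature names and the fabricated response below are placeholders for the study's climate, land-cover, and pattern-metric data:

        import numpy as np
        from sklearn.tree import DecisionTreeRegressor, export_text

        rng = np.random.default_rng(4)

        # Synthetic stand-ins for hexagon-level predictors (the study used BBS
        # richness vs. climate, AVHRR land-cover proportions and pattern metrics).
        n = 1162
        mean_temp = rng.uniform(0, 25, n)           # deg C
        annual_precip = rng.uniform(200, 2000, n)   # mm
        forest_prop = rng.uniform(0, 1, n)
        edge_density = rng.uniform(0, 100, n)
        X = np.column_stack([mean_temp, annual_precip, forest_prop, edge_density])

        # Fabricated richness response: climate dominates, then land cover.
        richness = (40 + 1.5 * mean_temp + 0.01 * annual_precip
                    + 15 * forest_prop + 0.05 * edge_density
                    + rng.normal(0, 5, n))

        tree = DecisionTreeRegressor(max_leaf_nodes=11, random_state=0)  # 11 groups
        tree.fit(X, richness)
        print(f"deviance explained (R^2): {tree.score(X, richness):.3f}")
        print(export_text(tree, feature_names=[
            "mean_temp", "annual_precip", "forest_prop", "edge_density"]))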

  13. A single network adaptive critic (SNAC) architecture for optimal control synthesis for a class of nonlinear systems.

    PubMed

    Padhi, Radhakant; Unnikrishnan, Nishant; Wang, Xiaohua; Balakrishnan, S N

    2006-12-01

    Even though dynamic programming offers an optimal control solution in a state feedback form, the method is overwhelmed by computational and storage requirements. Approximate dynamic programming implemented with an Adaptive Critic (AC) neural network structure has evolved as a powerful alternative technique that obviates the need for excessive computations and storage requirements in solving optimal control problems. In this paper, an improvement to the AC architecture, called the "Single Network Adaptive Critic (SNAC)", is presented. This approach is applicable to a wide class of nonlinear systems where the optimal control (stationary) equation can be explicitly expressed in terms of the state and costate variables. The selection of this terminology is guided by the fact that it eliminates the use of one neural network (namely the action network) that is part of a typical dual-network AC setup. As a consequence, the SNAC architecture offers three potential advantages: a simpler architecture, a lower computational load, and elimination of the approximation error associated with the eliminated network. In order to demonstrate these benefits and the control synthesis technique using SNAC, two problems have been solved with the AC and SNAC approaches and their computational performances are compared. One of these problems is a real-life micro-electro-mechanical systems (MEMS) problem, which demonstrates that the SNAC technique is applicable to complex engineering systems.

  14. Latent variable method for automatic adaptation to background states in motor imagery BCI

    NASA Astrophysics Data System (ADS)

    Dagaev, Nikolay; Volkova, Ksenia; Ossadtchi, Alexei

    2018-02-01

    Objective. Brain-computer interface (BCI) systems are known to be vulnerable to variability in the background states of a user. Usually, no detailed information on these states is available even during the training stage. Thus there is a need for a method capable of taking background states into account in an unsupervised way. Approach. We propose a latent variable method based on a probabilistic model with a discrete latent variable. In order to estimate the model’s parameters, we suggest using the expectation maximization algorithm. The proposed method is aimed at assessing characteristics of background states without any corresponding data labeling. In the context of an asynchronous motor imagery paradigm, we applied this method to real data from twelve able-bodied subjects with open/closed eyes serving as background states. Main results. We found that the latent variable method improved classification of target states compared to the baseline method (in seven of twelve subjects). In addition, we found that our method was also capable of background state recognition (in six of twelve subjects). Significance. Without any supervised information on background states, the latent variable method provides a way to improve classification in BCI by taking background states into account at the training stage and then by making decisions on target states weighted by the posterior probabilities of background states at the prediction stage.
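
    The flavor of the approach can be sketched with a Gaussian mixture fitted by EM (scikit-learn) to infer background-state posteriors, which then weight per-state classifiers at prediction time. The paper's actual probabilistic model differs in detail; everything below is an illustrative assumption:

        import numpy as np
        from sklearn.mixture import GaussianMixture
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        rng = np.random.default_rng(5)

        # Synthetic band-power features: a hidden background state (e.g. eyes
        # open/closed) shifts the baseline; the target class shifts other bands.
        n = 600
        background = rng.integers(0, 2, n)          # hidden, never used for training
        target = rng.integers(0, 2, n)              # labeled motor-imagery class
        X = (rng.normal(0, 1, (n, 4))
             + background[:, None] * np.array([2.0, 2.0, 0.0, 0.0])
             + target[:, None] * np.array([0.0, 0.0, 1.5, -1.5]))

        # Unsupervised EM over the background structure (no background labels).
        gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
        post = gmm.predict_proba(X)                 # P(background state | x)

        # One classifier per inferred background state (hard split for simplicity).
        clfs = [LinearDiscriminantAnalysis().fit(X[post[:, k] > 0.5],
                                                 target[post[:, k] > 0.5])
                for k in range(2)]

        def predict(x):
            """Mix per-state decisions, weighted by background-state posteriors."""
            p = gmm.predict_proba(x)
            scores = np.stack([c.predict_proba(x)[:, 1] for c in clfs], axis=1)
            return ((p * scores).sum(axis=1) > 0.5).astype(int)

        print(f"training accuracy: {(predict(X) == target).mean():.2f}")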

  15. Upper-Division Student Difficulties with Separation of Variables

    ERIC Educational Resources Information Center

    Wilcox, Bethany R.; Pollock, Steven J.

    2015-01-01

    Separation of variables can be a powerful technique for solving many of the partial differential equations that arise in physics contexts. Upper-division physics students encounter this technique in multiple topical areas including electrostatics and quantum mechanics. To better understand the difficulties students encounter when utilizing the…

  16. Variable length adjacent partitioning for PTS based PAPR reduction of OFDM signal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ibraheem, Zeyid T.; Rahman, Md. Mijanur; Yaakob, S. N.

    2015-05-15

    Peak-to-average power ratio (PAPR) is a major drawback in OFDM communication. It drives the power amplifier into nonlinear operation, resulting in loss of data integrity. As such, there is a strong motivation to find techniques to reduce PAPR. Partial Transmit Sequence (PTS) is an attractive scheme for this purpose. Judicious partitioning of the OFDM data frame into disjoint subsets is a pivotal component of any PTS scheme. Among the existing partitioning techniques, adjacent partitioning is characterized by an attractive trade-off between cost and performance. With the aim of determining the effects of length variability of adjacent partitions, we performed an investigation into the performance of variable length adjacent partitioning (VL-AP) and fixed length adjacent partitioning in comparison with other partitioning schemes such as pseudorandom partitioning. Simulation results with different modulation and partitioning scenarios showed that fixed length adjacent partitioning had better performance than variable length adjacent partitioning. As expected, simulation results showed a slightly better performance of the pseudorandom partitioning technique compared to fixed and variable length adjacent partitioning schemes. However, as the pseudorandom technique incurs high computational complexity, adjacent partitioning schemes were still seen as favorable candidates for PAPR reduction.
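
    A numpy sketch of PTS with fixed-length adjacent partitioning: the N subcarriers are split into M contiguous blocks, phase factors from {±1, ±j} are searched exhaustively, and the combination minimizing PAPR is kept. Parameters are illustrative:

        import numpy as np
        from itertools import product

        rng = np.random.default_rng(6)

        N, M = 256, 4                       # subcarriers, adjacent partitions
        L = 4                               # oversampling for PAPR measurement
        PHASES = np.array([1, -1, 1j, -1j])

        def papr_db(x):
            p = np.abs(x) ** 2
            return 10 * np.log10(p.max() / p.mean())

        # Random QPSK OFDM symbol in the frequency domain.
        X = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)

        # Adjacent partitioning: M contiguous blocks, each transformed separately
        # (zero-padded IFFT as a simple oversampling stand-in).
        subblocks = []
        for m in range(M):
            Xm = np.zeros(N, dtype=complex)
            Xm[m * N // M:(m + 1) * N // M] = X[m * N // M:(m + 1) * N // M]
            subblocks.append(np.fft.ifft(Xm, L * N))

        # Exhaustive phase search; the first factor is fixed (rotations are redundant).
        best_papr, best_b = np.inf, None
        for tail in product(PHASES, repeat=M - 1):
            b = np.array((1,) + tail)
            p = papr_db(sum(bm * sm for bm, sm in zip(b, subblocks)))
            if p < best_papr:
                best_papr, best_b = p, b

        print(f"original PAPR: {papr_db(sum(subblocks)):.2f} dB")
        print(f"PTS PAPR:      {best_papr:.2f} dB with b = {best_b}")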

  17. Effect of initial conditions of a catchment on seasonal streamflow prediction using ensemble streamflow prediction (ESP) technique for the Rangitata and Waitaki River basins on the South Island of New Zealand

    NASA Astrophysics Data System (ADS)

    Singh, Shailesh Kumar; Zammit, Christian; Hreinsson, Einar; Woods, Ross; Clark, Martyn; Hamlet, Alan

    2013-04-01

    Increased access to water is a key pillar of the New Zealand government's plan for economic growth. Variable climatic conditions, coupled with market drivers and increased demand on water resources, mean that critical decisions are made by water managers on the basis of climate and streamflow forecasts. Because many of these decisions have serious economic implications (e.g., for irrigated agriculture and electricity generation), accurate forecasts of climate and streamflow are of paramount importance. New Zealand currently does not have a centralized, comprehensive, and state-of-the-art system in place for providing operational seasonal to interannual streamflow forecasts to guide water resources management decisions. As a pilot effort, we implement and evaluate an experimental ensemble streamflow forecasting system for the Waitaki and Rangitata River basins on New Zealand's South Island using a hydrologic simulation model (TopNet) and the familiar ensemble streamflow prediction (ESP) paradigm for estimating forecast uncertainty. To provide a comprehensive database for evaluation of the forecasting system, a set of retrospective model states simulated by the hydrologic model on the first day of each month was first archived for 1972-2009. Then, using the hydrologic simulation model, each of these historical model states was paired with the retrospective temperature and precipitation time series from each historical water year to create a database of retrospective hindcasts. Using the resulting database, the relative importance of initial state variables (such as soil moisture and snowpack) as fundamental drivers of forecast uncertainty was evaluated for different seasons and lead times. The analysis indicates that the sensitivity of flow forecasts to initial-condition uncertainty depends on the hydrological regime and the season of the forecast. However, initial conditions do not have a large impact on seasonal flow uncertainties for snow-dominated catchments. Further analysis indicates that this result remains valid when the hindcast database is conditioned by ENSO classification. As a result, hydrological forecasts based on the ESP technique, in which present initial conditions are paired with historical forcing data, appear plausible for New Zealand catchments.
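
    The ESP paradigm itself is compact enough to sketch with a toy bucket model: take one current model state (here, soil storage), drive it with each historical year's forcing, and treat the resulting flow traces as the forecast ensemble. All numbers are synthetic:

        import numpy as np

        rng = np.random.default_rng(7)

        def bucket_model(storage, precip, pet, k=0.05, smax=500.0):
            """Toy daily water-balance model returning daily flows."""
            flows = np.empty(len(precip))
            for t, (p, e) in enumerate(zip(precip, pet)):
                storage = min(smax, storage + p - min(e, storage))  # wet, then dry
                q = k * storage                     # linear-reservoir outflow
                storage -= q
                flows[t] = q
            return flows

        # Historical forcing archive: 30 "years" of daily precip and PET (synthetic).
        years = [(rng.gamma(0.5, 8.0, 90), rng.uniform(1, 4, 90)) for _ in range(30)]

        # ESP: one current initial state, one flow trace per historical forcing year.
        current_storage = 320.0                     # taken from the simulation archive
        ensemble = np.array([bucket_model(current_storage, p, e).sum()
                             for p, e in years])   # seasonal flow volumes (mm)

        lo, med, hi = np.percentile(ensemble, [10, 50, 90])
        print(f"seasonal flow: median {med:.0f} mm, 10-90% range [{lo:.0f}, {hi:.0f}]")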

  18. Direct SQP-methods for solving optimal control problems with delays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goellmann, L.; Bueskens, C.; Maurer, H.

    The maximum principle for optimal control problems with delays leads to a boundary value problem (BVP) which is retarded in the state and advanced in the costate function. Based on shooting techniques, solution methods for this type of BVP have been proposed. In recent years, direct optimization methods have been favored for solving control problems without delays. Direct methods approximate the control and the state over a fixed mesh and solve the resulting NLP-problem with SQP-methods. These methods dispense with the costate function and have been shown to be robust and efficient. In this paper, we propose a direct SQP-method for retarded control problems. In contrast to conventional direct methods, only the control variable is approximated by e.g. spline-functions. The state is computed via a high-order Runge-Kutta type algorithm and does not enter the NLP-problem explicitly through an equation. This approach reduces the number of optimization variables considerably and is implementable even on a PC. Our method is illustrated by the numerical solution of retarded control problems with constraints. In particular, we consider the control of a continuous stirred tank reactor which has previously been solved by dynamic programming. This example illustrates the robustness and efficiency of the proposed method. Open questions concerning sufficient conditions and convergence of discretized NLP-problems are discussed.
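
    A hedged sketch of the direct approach using scipy's SLSQP (an SQP method): only the control is discretized, and the retarded state is integrated by an explicit scheme inside the objective, with the delayed state read from the stored trajectory. The plant, delay, and cost below are illustrative, not the paper's reactor problem:

        import numpy as np
        from scipy.optimize import minimize

        # Retarded plant: x'(t) = -x(t) + 0.5 x(t - tau) + u(t), x = 1 on [-tau, 0].
        T, tau, n = 5.0, 1.0, 100
        dt = T / n
        d = int(round(tau / dt))                    # delay measured in grid steps

        def simulate(u):
            buf = np.concatenate([np.ones(d + 1), np.zeros(n)])  # pre-history + grid
            for k in range(n):
                x, x_del = buf[d + k], buf[k]       # x(t_k) and x(t_k - tau)
                buf[d + k + 1] = x + dt * (-x + 0.5 * x_del + u[k])  # explicit Euler
            return buf[d:]                          # x on [0, T]

        def cost(u):
            x = simulate(u)
            return dt * np.sum(x[:-1] ** 2 + 0.1 * u ** 2)

        res = minimize(cost, np.zeros(n), method="SLSQP",
                       bounds=[(-2.0, 2.0)] * n,    # simple control constraints
                       options={"maxiter": 200})
        print(f"optimal cost: {res.fun:.4f}, converged: {res.success}")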

  19. Physically based modeling in catchment hydrology at 50: Survey and outlook

    NASA Astrophysics Data System (ADS)

    Paniconi, Claudio; Putti, Mario

    2015-09-01

    Integrated, process-based numerical models in hydrology are rapidly evolving, spurred by novel theories in mathematical physics, advances in computational methods, insights from laboratory and field experiments, and the need to better understand and predict the potential impacts of population, land use, and climate change on our water resources. At the catchment scale, these simulation models are commonly based on conservation principles for surface and subsurface water flow and solute transport (e.g., the Richards, shallow water, and advection-dispersion equations), and they require robust numerical techniques for their resolution. Traditional (and still open) challenges in developing reliable and efficient models are associated with heterogeneity and variability in parameters and state variables; nonlinearities and scale effects in process dynamics; and complex or poorly known boundary conditions and initial system states. As catchment modeling enters a highly interdisciplinary era, new challenges arise from the need to maintain physical and numerical consistency in the description of multiple processes that interact over a range of scales and across different compartments of an overall system. This paper first gives an historical overview (past 50 years) of some of the key developments in physically based hydrological modeling, emphasizing how the interplay between theory, experiments, and modeling has contributed to advancing the state of the art. The second part of the paper examines some outstanding problems in integrated catchment modeling from the perspective of recent developments in mathematical and computational science.

  20. Anatomic and Pathologic Variability During Radiotherapy for a Hybrid Active Breath-Hold Gating Technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Glide-Hurst, Carri K.; Gopan, Ellen; Department of Radiation Oncology Wayne State University, Detroit, MI

    2010-07-01

    Purpose: To evaluate intra- and interfraction variability of tumor and lung volume and position using a hybrid active breath-hold gating technique. Methods and Materials: A total of 159 repeat normal inspiration active breath-hold CTs were acquired weekly during radiotherapy for 9 lung cancer patients (12-21 scans per patient). A physician delineated the gross tumor volume (GTV), lungs, and spinal cord on the first breath-hold CT, and contours were propagated semiautomatically. Intra- and interfraction variability of tumor and lung position and volume were evaluated. Tumor centroid and border variability were quantified. Results: On average, intrafraction variability of lung and GTV centroid position was <2.0 mm. Interfraction population variability was 3.6-6.7 mm (systematic) and 3.1-3.9 mm (random) for the GTV centroid and 1.0-3.3 mm (systematic) and 1.5-2.6 mm (random) for the lungs. Tumor volume regressed 44.6% ± 23.2%. Gross tumor volume border variability was patient specific and demonstrated anisotropic shape change in some subjects. Interfraction GTV positional variability was associated with tumor volume regression and contralateral lung volume (p < 0.05). Inter-breath-hold reproducibility was unaffected by time point in the treatment course (p > 0.1). Increases in free-breathing tidal volume were associated with increases in breath-hold ipsilateral lung volume (p < 0.05). Conclusions: The breath-hold technique was reproducible within 2 mm during each fraction. Interfraction variability of GTV position and shape was substantial because of tumor volume and breath-hold lung volume change during therapy. These results support the feasibility of a hybrid breath-hold gating technique and suggest that online image guidance would be beneficial.

  1. Great Basin vegetation response to groundwater fluctuation, climate variability, and previous land cultivation: The application of multitemporal measurements from remote sensing data to regional vegetation dynamics

    NASA Astrophysics Data System (ADS)

    Elmore, Andrew James

    The conversion of large natural basins to managed watersheds for the purpose of providing water to urban centers has had a negative impact on semiarid ecosystems worldwide. We view semiarid plant communities as being adapted to short, regular periods of drought. However, human-induced changes in the water balance often remove these systems from the range of natural variability that has been historically established. This thesis explores vegetation changes over a 13-yr period for Owens Valley, in eastern California. Using remotely sensed measurements of vegetation cover, an extensive vegetation survey, field data and observations, precipitation records, and data on water table depth, I identify the key modes of response of xeric, phreatophytic, and exotic Great Basin plant communities. Three specific advancements were reached as a result of this work. (1) A change classification technique was developed that was used to separate regions of land-cover that were dependent on precipitation from regions dependent on groundwater. This technique utilized Spectral Mixture Analysis of annually acquired Landsat Thematic Mapper remote sensing data to retrieve regional estimates of percent vegetation cover. (2) A threshold response related to depth-to-water dependence was identified for phreatophytic Alkali Meadow communities. Plant communities that were subject to groundwater depths below this threshold exhibited greater invasion by precipitation-sensitive plants. (3) The floristic differences between previously cultivated and uncultivated land were found to account for an increased sensitivity of plant communities to precipitation variability. Through (2) and (3), two human influences (groundwater decline and previous land cultivation) were shown to alter land cover such that the land became more sensitive to precipitation change. Climate change predictions include a component of increased climate variability for the western United States; therefore, these results cast serious doubt on the sustainability of human activities in this region. The results from this work broadly cover topics from remote sensing techniques to the ecology of Great Basin plant communities and are applicable wherever large regions of land are being managed in an era of changing environmental conditions.

  2. Multi-point Adjoint-Based Design of Tilt-Rotors in a Noninertial Reference Frame

    NASA Technical Reports Server (NTRS)

    Jones, William T.; Nielsen, Eric J.; Lee-Rausch, Elizabeth M.; Acree, Cecil W.

    2014-01-01

    Optimization of tilt-rotor systems requires the consideration of performance at multiple design points. In the current study, an adjoint-based optimization of a tilt-rotor blade is considered. The optimization seeks to simultaneously maximize the rotorcraft figure of merit in hover and the propulsive efficiency in airplane-mode for a tilt-rotor system. The design is subject to minimum thrust constraints imposed at each design point. The rotor flowfields at each design point are cast as steady-state problems in a noninertial reference frame. Geometric design variables used in the study to control blade shape include: thickness, camber, twist, and taper represented by as many as 123 separate design variables. Performance weighting of each operational mode is considered in the formulation of the composite objective function, and a build up of increasing geometric degrees of freedom is used to isolate the impact of selected design variables. In all cases considered, the resulting designs successfully increase both the hover figure of merit and the airplane-mode propulsive efficiency for a rotor designed with classical techniques.

  3. A spectro-interferometric view of l Carinae's modulated pulsations

    NASA Astrophysics Data System (ADS)

    Anderson, Richard I.; Mérand, Antoine; Kervella, Pierre; Breitfelder, Joanne; Eyer, Laurent; Gallenne, Alexandre

    Classical Cepheids are radially pulsating stars that enable important tests of stellar evolution and play a crucial role in the calibration of the local Hubble constant. l Carinae is a particularly well-known distance calibrator, being the closest long-period (P ~ 35.5 d) Cepheid and subtending the largest angular diameter. We have carried out an unprecedented observing program to investigate whether recently discovered cycle-to-cycle changes (modulations) of l Carinae's radial velocity (RV) variability are mirrored by its variability in angular size. To this end, we have secured a fully contemporaneous dataset of high-precision RVs and high-precision angular diameters. Here we provide a concise summary of our project and report preliminary results. We confirm the modulated nature of the RV variability and find tentative evidence of cycle-to-cycle differences in l Car's maximal angular diameter. Our analysis is exploring the limits of state-of-the-art instrumentation and reveals additional complexity in the pulsations of Cepheids. If confirmed, our result suggests a previously unknown pulsation cycle dependence of projection factors required for determining Cepheid distances via the Baade-Wesselink technique.

  4. Tourism trends in the world's main destinations before and after the 2008 financial crisis using UNWTO official data.

    PubMed

    Claveria, Oscar; Poluzzi, Alessio

    2016-06-01

    The first decade of the present century has been characterized by several economic shocks, such as the 2008 financial crisis. In this data article we present the annual percentage growth rates of the main tourism indicators in the world's top tourist destinations: the United States, China, France, Spain, Italy, the United Kingdom, Germany, Turkey, Mexico and Austria. We use data from the Compendium of Tourism Statistics provided by the World Tourism Organization (http://www2.unwto.org/content/data-0). It has been demonstrated that the dynamics of growth in the tourism industry pose different challenges to each destination in the previous study "Positioning and clustering of the world's top tourist destinations by means of dimensionality reduction techniques for categorical data" (Claveria and Poluzzi, 2016, [1]). We provide a descriptive analysis of the variables over the period between 2000 and 2010. We complement the analysis by graphing the evolution of the main variables so as to visually represent the co-movements between tourism variables and economic growth.

  5. Finite-element simulation of ground-water flow in the vicinity of Yucca Mountain, Nevada-California

    USGS Publications Warehouse

    Czarnecki, J.B.; Waddell, R.K.

    1984-01-01

    A finite-element model of the groundwater flow system in the vicinity of Yucca Mountain at the Nevada Test Site was developed using parameter estimation techniques. The model simulated steady-state ground-water flow occurring in tuffaceous, volcanic, and carbonate rocks, and alluvial aquifers. Hydraulic gradients in the modeled area range from 0.00001 for carbonate aquifers to 0.19 for barriers in tuffaceous rocks. Three model parameters were used in estimating transmissivity in six zones. Simulated hydraulic-head values range from about 1,200 m near Timber Mountain to about 300 m near Furnace Creek Ranch. Model residuals for simulated versus measured hydraulic heads range from -28.6 to 21.4 m; most are less than ±7 m, indicating an acceptable representation of the hydrologic system by the model. Sensitivity analyses of the model's flux boundary condition variables were performed to assess the effect of varying boundary fluxes on the calculation of estimated model transmissivities. Varying the flux variables representing discharge at Franklin Lake and Furnace Creek Ranch has a greater effect than varying other flux variables. (Author's abstract)

  6. Modeling demand for public transit services in rural areas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Attaluri, P.; Seneviratne, P.N.; Javid, M.

    1997-05-01

    Accurate estimates of demand are critical for planning, designing, and operating public transit systems. Previous research has demonstrated that the expected demand in rural areas is a function of both demographic and transit system variables. Numerous models have been proposed to describe the relationship between the aforementioned variables. However, most of them are site specific and their validity over time and space is not reported or perhaps has not been tested. Moreover, input variables in some cases are extremely difficult to quantify. In this article, the estimation of demand using the generalized linear modeling technique is discussed. Two separate models, one for fixed-route and another for demand-responsive services, are presented. These models, calibrated with data from systems in nine different states, are used to demonstrate the appropriateness and validity of generalized linear models compared to the regression models. They explain over 70% of the variation in expected demand for fixed-route services and 60% of the variation in expected demand for demand-responsive services. It was found that the models are spatially transferable and that data for calibration are easily obtainable.
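
    As a hedged illustration of the generalized linear modeling approach described above, the sketch below fits a Poisson GLM to synthetic ridership data in Python. The predictor names (population served, fare, service hours), the coefficients, and the data are hypothetical assumptions for demonstration only, not variables or estimates from the study.

    ```python
    # Sketch only: Poisson GLM for transit demand on synthetic data.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 50
    X = np.column_stack([
        rng.uniform(1, 20, n),    # population served (thousands) -- hypothetical
        rng.uniform(0.5, 3, n),   # fare (dollars) -- hypothetical
        rng.uniform(2, 12, n),    # service hours per day -- hypothetical
    ])
    lam = np.exp(0.1 * X[:, 0] - 0.3 * X[:, 1] + 0.2 * X[:, 2])
    y = rng.poisson(lam)          # synthetic trip counts

    # log-link Poisson regression, a common GLM choice for count demand
    result = sm.GLM(y, sm.add_constant(X), family=sm.families.Poisson()).fit()
    print(result.summary())
    ```

    A log-link count model of this kind is one plausible reading of "generalized linear modeling" here; the article itself may use a different link or error family.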

  7. Efficient variable time-stepping scheme for intense field-atom interactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cerjan, C.; Kosloff, R.

    1993-03-01

    The recently developed Residuum method [Tal-Ezer, Kosloff, and Cerjan, J. Comput. Phys. 100, 179 (1992)], a Krylov subspace technique with variable time-step integration for the solution of the time-dependent Schroedinger equation, is applied to the frequently used soft Coulomb potential in an intense laser field. This one-dimensional potential has asymptotic Coulomb dependence with a "softened" singularity at the origin; thus it models more realistic phenomena. Two of the more important quantities usually calculated in this idealized system are the photoelectron and harmonic photon generation spectra. These quantities are shown to be sensitive to the choice of a numerical integration scheme: some spectral features are incorrectly calculated or missing altogether. Furthermore, the Residuum method allows much larger grid spacings for equivalent or higher accuracy in addition to the advantages of variable time stepping. Finally, it is demonstrated that enhanced high-order harmonic generation accompanies intense field stabilization and that preparation of the atom in an intermediate Rydberg state leads to stabilization at much lower laser intensity.

  8. Using Multigroup-Multiphase Latent State-Trait Models to Study Treatment-Induced Changes in Intra-Individual State Variability: An Application to Smokers' Affect.

    PubMed

    Geiser, Christian; Griffin, Daniel; Shiffman, Saul

    2016-01-01

    Sometimes, researchers are interested in whether an intervention, experimental manipulation, or other treatment causes changes in intra-individual state variability. The authors show how multigroup-multiphase latent state-trait (MG-MP-LST) models can be used to examine treatment effects with regard to both mean differences and differences in state variability. The approach is illustrated based on a randomized controlled trial in which N = 338 smokers were randomly assigned to nicotine replacement therapy (NRT) vs. placebo prior to quitting smoking. We found that post quitting, smokers in both the NRT and placebo group had significantly reduced intra-individual affect state variability with respect to the affect items calm and content relative to the pre-quitting phase. This reduction in state variability did not differ between the NRT and placebo groups, indicating that quitting smoking may lead to a stabilization of individuals' affect states regardless of whether or not individuals receive NRT.

  9. Using Multigroup-Multiphase Latent State-Trait Models to Study Treatment-Induced Changes in Intra-Individual State Variability: An Application to Smokers' Affect

    PubMed Central

    Geiser, Christian; Griffin, Daniel; Shiffman, Saul

    2016-01-01

    Sometimes, researchers are interested in whether an intervention, experimental manipulation, or other treatment causes changes in intra-individual state variability. The authors show how multigroup-multiphase latent state-trait (MG-MP-LST) models can be used to examine treatment effects with regard to both mean differences and differences in state variability. The approach is illustrated based on a randomized controlled trial in which N = 338 smokers were randomly assigned to nicotine replacement therapy (NRT) vs. placebo prior to quitting smoking. We found that post quitting, smokers in both the NRT and placebo group had significantly reduced intra-individual affect state variability with respect to the affect items calm and content relative to the pre-quitting phase. This reduction in state variability did not differ between the NRT and placebo groups, indicating that quitting smoking may lead to a stabilization of individuals' affect states regardless of whether or not individuals receive NRT. PMID:27499744

  10. Characteristics of temperature rise in variable inductor employing magnetorheological fluid driven by a high-frequency pulsed voltage source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Ho-Young; Kang, In Man, E-mail: imkang@ee.knu.ac.kr; Shon, Chae-Hwa

    2015-05-07

    A variable inductor with magnetorheological (MR) fluid has been successfully applied to power electronics applications; however, its thermal characteristics have not been investigated. To evaluate the performance of the variable inductor with respect to temperature, we measured the characteristics of temperature rise and developed a numerical analysis technique. The characteristics of temperature rise were determined experimentally and verified numerically by adopting a multiphysics analysis technique. In order to accurately estimate the temperature distribution in a variable inductor with an MR fluid-gap, the thermal solver should import the heat source from the electromagnetic solver to solve the eddy current problem. To improve accuracy, the B–H curves of the MR fluid under operating temperature were obtained using the magnetic property measurement system. In addition, the Steinmetz equation was applied to evaluate the core loss in a ferrite core. The predicted temperature rise for a variable inductor showed good agreement with the experimental data and the developed numerical technique can be employed to design a variable inductor with a high-frequency pulsed voltage source.

  11. A new statistical tool for NOAA local climate studies

    NASA Astrophysics Data System (ADS)

    Timofeyeva, M. M.; Meyers, J. C.; Hollingshead, A.

    2011-12-01

    The National Weather Service (NWS) Local Climate Analysis Tool (LCAT) is evolving out of a need to support and enhance the ability of National Oceanic and Atmospheric Administration (NOAA) NWS field offices to efficiently access, manipulate, and interpret local climate data and characterize climate variability and change impacts. LCAT will enable NOAA's staff to conduct regional and local climate studies using state-of-the-art station and reanalysis gridded data and various statistical techniques for climate analysis. The analysis results will be used for climate services to guide local decision makers in weather- and climate-sensitive actions and to deliver information to the general public. LCAT will augment current climate reference materials with information pertinent to the local and regional levels as they apply to diverse variables appropriate to each locality. The main emphasis of LCAT is to enable studies of extreme meteorological and hydrological events such as tornadoes, floods, droughts, and severe storms. LCAT will close a critical gap in NWS local climate services because it will allow climate variables beyond average temperature and total precipitation to be addressed. NWS external partners and government agencies will benefit from LCAT outputs that can be easily incorporated into their own analysis and/or delivery systems. To date, we have identified five requirements for local climate studies: (1) local impacts of climate change; (2) local impacts of climate variability; (3) drought studies; (4) attribution of severe meteorological and hydrological events; and (5) climate studies for water resources. The methodologies for the first three requirements will be included in the first-phase LCAT implementation. The local rate of climate change is defined as the slope of the mean trend estimated from an ensemble of three trend techniques: (1) hinge, (2) Optimal Climate Normals (running mean over optimal time periods), and (3) exponentially weighted moving average; root-mean-squared error is used to select the trend that fits the observations with the least error. Studies of climate variability impacts on local extremes use composite techniques applied to various definitions of local variables, from specified percentiles to critical thresholds. Drought studies combine the visual capabilities of Google Maps with statistical estimates of drought severity indices. The development process will be linked to local office interactions with users to ensure the tool meets their needs and that adequate training is provided. A rigorous, tiered internal peer-review process will be implemented to ensure the studies are scientifically sound before they are published and submitted to the local studies catalog (database) and, eventually, to external sources such as the Climate Portal.
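
    The trend-ensemble idea in this abstract can be sketched compactly. In the toy Python example below, the hinge year, window length, and smoothing constant are illustrative assumptions, not LCAT's actual settings, and the data are synthetic.

    ```python
    # Sketch: fit three trend estimators and keep the one with least RMSE.
    import numpy as np

    def hinge_trend(years, values, hinge_year=1975):
        # flat before the hinge year, linear after (least-squares fit)
        t = np.maximum(years - hinge_year, 0.0)
        A = np.column_stack([np.ones_like(t), t])
        coef, *_ = np.linalg.lstsq(A, values, rcond=None)
        return A @ coef

    def running_mean(values, window=10):
        # crude stand-in for Optimal Climate Normals
        pad = np.pad(values, (window - 1, 0), mode="edge")
        return np.convolve(pad, np.ones(window) / window, mode="valid")

    def ewma(values, alpha=0.2):
        out = np.empty(len(values))
        out[0] = values[0]
        for i in range(1, len(values)):
            out[i] = alpha * values[i] + (1 - alpha) * out[i - 1]
        return out

    years = np.arange(1950, 2011, dtype=float)
    obs = 0.01 * (years - 1950) + np.random.default_rng(1).normal(0, 0.1, years.size)

    fits = {"hinge": hinge_trend(years, obs),
            "running_mean": running_mean(obs),
            "ewma": ewma(obs)}
    rmse = {k: float(np.sqrt(np.mean((obs - v) ** 2))) for k, v in fits.items()}
    print(min(rmse, key=rmse.get), rmse)  # best-fitting trend by RMSE
    ```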

  12. Seasonal Variability Study of the Tropospheric Zenithal Delay in the South America using regional Numerical Weather Prediction model

    NASA Astrophysics Data System (ADS)

    Sapucci, L. F.; Monico, J. G.; Machado, L. T.

    2007-05-01

    In 2010, a new air-traffic navigation and management system, denominated CNS-ATM (Communication Navigation Surveillance - Air Traffic Management), should be running operationally in South America. This new system will employ satellite positioning techniques for air traffic management and control. However, the efficiency of this new system demands knowledge of the behavior of the atmosphere and, consequently, appropriate Zenithal Tropospheric Delay (ZTD) modeling on a regional scale. Prediction of ZTD values from Numerical Weather Prediction (NWP) models, denominated here dynamic modeling, is an alternative for modeling the effects of atmospheric gases on radio-frequency signals in real time. The Brazilian Center for Weather Forecasting and Climate Studies (CPTEC) of the National Institute for Space Research (INPE), jointly with researchers from UNESP (Sao Paulo State University), has operationally generated predictions of ZTD values for the South American continent (available at http:satelite.cptec.inpe.br/htmldocs/ztd/zenithal.htm). The available regional version is obtained using the ETA model (an NWP model with a horizontal resolution of 20 km and 42 vertical levels). The application of NWP permits assessment of the temporal and spatial variation of ZTD values, which is an important characteristic of this technique. The aim of the present paper is to investigate the seasonal variability of ZTD over the South American continent. A variability analysis of the ZTD components [hydrostatic (ZHD) and wet (ZWD)] is also presented, together with a discussion of the main factors that influence this variation in the region. The variation of the hydrostatic component is related to atmospheric pressure oscillations, which are influenced by relief and by the high-pressure centers that prevail over different regions of the South American continent. The oscillation of the wet component is due to temperature and humidity variability, which is also influenced by relief and by synoptic events such as the penetration of cold fronts from the Antarctic into the continent and the occurrence of moisture convergence zones. In South America there are two main convergence zones that have a strong influence on tropospheric variability: the ITCZ (Inter-Tropical Convergence Zone) and the SACZ (South Atlantic Convergence Zone). These convergence zones are characterized by an extensive, nearly stationary band of precipitation and high cloudiness, and the physical processes associated with them have a strong impact on the variability of ZWD values. This work aims to contribute to ZTD modeling over the South American continent using NWP by identifying where and when ZTD values are less predictable, thereby minimizing the error in GNSS positioning applications that use this technique.

  13. Dynamic rupture modeling with laboratory-derived constitutive relations

    USGS Publications Warehouse

    Okubo, P.G.

    1989-01-01

    A laboratory-derived state variable friction constitutive relation is used in the numerical simulation of the dynamic growth of an in-plane or mode II shear crack. According to this formulation, originally presented by J.H. Dieterich, frictional resistance varies with the logarithm of the slip rate and with the logarithm of the frictional state variable as identified by A.L. Ruina. Under conditions of steady sliding, the state variable is proportional to (slip rate)-1. Following suddenly introduced increases in slip rate, the rate and state dependencies combine to produce behavior which resembles slip weakening. When rupture nucleation is artificially forced at fixed rupture velocity, rupture models calculated with the state variable friction in a uniformly distributed initial stress field closely resemble earlier rupture models calculated with a slip weakening fault constitutive relation. Model calculations suggest that dynamic rupture following a state variable friction relation is similar to that following a simpler fault slip weakening law. However, when modeling the full cycle of fault motions, rate-dependent frictional responses included in the state variable formulation are important at low slip rates associated with rupture nucleation. -from Author

  14. Documentation for the State Variables Package for the Groundwater-Management Process of MODFLOW-2005 (GWM-2005)

    USGS Publications Warehouse

    Ahlfeld, David P.; Barlow, Paul M.; Baker, Kristine M.

    2011-01-01

    Many groundwater-management problems are concerned with the control of one or more variables that reflect the state of a groundwater-flow system or a coupled groundwater/surface-water system. These system state variables include the distribution of heads within an aquifer, streamflow rates within a hydraulically connected stream, and flow rates into or out of aquifer storage. This report documents the new State Variables Package for the Groundwater-Management Process of MODFLOW-2005 (GWM-2005). The new package provides a means to explicitly represent heads, streamflows, and changes in aquifer storage as state variables in a GWM-2005 simulation. The availability of these state variables makes it possible to include system state in the objective function and enhances existing capabilities for constructing constraint sets for a groundwater-management formulation. The new package can be used to address groundwater-management problems such as the determination of withdrawal strategies that meet water-supply demands while simultaneously maximizing heads or streamflows, or minimizing changes in aquifer storage. Four sample problems are provided to demonstrate use of the new package for typical groundwater-management applications.

  15. Use of High Resolution Mobile Monitoring Techniques to Assess Near Road Air Quality Variability

    EPA Science Inventory

    This presentation provides a description of the techniques used to develop and conduct effective mobile monitoring studies. It also provides a summary of mobile monitoring assessment studies that have been used to assess near-road concentrations and the variability of pollutant l...

  16. Use of High Resolution Mobile Monitoring Techniques to Assess Near-Road Air Quality Variability

    EPA Science Inventory

    This presentation provides a description of the techniques used to develop and conduct effective mobile monitoring studies. It also provides a summary of mobile monitoring assessment studies that have been used to assess near-road concentrations and the variability of pollutant l...

  17. Promoting Response Variability and Stimulus Generalization in Martial Arts Training

    ERIC Educational Resources Information Center

    Harding, Jay W.; Wacker, David P.; Berg, Wendy K.; Rick, Gary; Lee, John F.

    2004-01-01

    The effects of reinforcement and extinction on response variability and stimulus generalization in the punching and kicking techniques of 2 martial arts students were evaluated across drill and sparring conditions. During both conditions, the students were asked to demonstrate different techniques in response to an instructor's punching attack.…

  18. Respondent Techniques for Reduction of Emotions Limiting School Adjustment: A Quantitative Review and Methodological Critique.

    ERIC Educational Resources Information Center

    Misra, Anjali; Schloss, Patrick J.

    1989-01-01

    The critical analysis of 23 studies using respondent techniques for the reduction of excessive emotional reactions in school children focuses on research design, dependent variables, independent variables, component analysis, and demonstrations of generalization and maintenance. Results indicate widespread methodological flaws that limit the…

  19. Attracting Dynamics of Frontal Cortex Ensembles during Memory-Guided Decision-Making

    PubMed Central

    Seamans, Jeremy K.; Durstewitz, Daniel

    2011-01-01

    A common theoretical view is that attractor-like properties of neuronal dynamics underlie cognitive processing. However, although often proposed theoretically, direct experimental support for the convergence of neural activity to stable population patterns as a signature of attracting states has been sparse so far, especially in higher cortical areas. Combining state space reconstruction theorems and statistical learning techniques, we were able to resolve details of anterior cingulate cortex (ACC) multiple single-unit activity (MSUA) ensemble dynamics during a higher cognitive task which were not accessible previously. The approach worked by constructing high-dimensional state spaces from delays of the original single-unit firing rate variables and the interactions among them, which were then statistically analyzed using kernel methods. We observed cognitive-epoch-specific neural ensemble states in ACC which were stable across many trials (in the sense of being predictive) and depended on behavioral performance. More interestingly, attracting properties of these cognitively defined ensemble states became apparent in high-dimensional expansions of the MSUA spaces due to a proper unfolding of the neural activity flow, with properties common across different animals. These results therefore suggest that ACC networks may process different subcomponents of higher cognitive tasks by transiting among different attracting states. PMID:21625577

  20. Optical transitions in GaNAs quantum wells with variable nitrogen content embedded in AlGaAs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elborg, M., E-mail: ELBORG.Martin@nims.go.jp; Noda, T.; Mano, T.

    2016-06-15

    We investigate the optical transitions of GaN{sub x}As{sub 1−x} quantum wells (QWs) embedded in wider band gap AlGaAs. A combination of absorption and emission spectroscopic techniques is employed to systematically investigate the properties of GaNAs QWs with N concentrations ranging from 0 to 3%. From measurement of the photocurrent spectra, we find that besides the QW ground state and first excited transition, distinct increases in photocurrent generation are observed. Their origin can be explained by N-induced modifications in the density of states at higher energies above the QW ground state. Photoluminescence experiments reveal that the temperature dependence of the peak position changes with N concentration. The characteristic S-shaped dependence for low N concentrations of 0.5% changes with increasing N concentration, where the low-temperature red-shift of the S-shape gradually disappears. This change indicates a gradual transition from an impurity picture, where localized N-induced energy states are present, to an alloying picture, where an impurity band is formed. In the highest-N sample, photoluminescence emission shows remarkable temperature stability. This phenomenon is explained by the interplay of N-induced energy states and QW confined states.

  1. Koopman Invariant Subspaces and Finite Linear Representations of Nonlinear Dynamical Systems for Control.

    PubMed

    Brunton, Steven L; Brunton, Bingni W; Proctor, Joshua L; Kutz, J Nathan

    2016-01-01

    In this work, we explore finite-dimensional linear representations of nonlinear dynamical systems by restricting the Koopman operator to an invariant subspace spanned by specially chosen observable functions. The Koopman operator is an infinite-dimensional linear operator that evolves functions of the state of a dynamical system. Dominant terms in the Koopman expansion are typically computed using dynamic mode decomposition (DMD). DMD uses linear measurements of the state variables, and it has recently been shown that this may be too restrictive for nonlinear systems. Choosing the right nonlinear observable functions to form an invariant subspace where it is possible to obtain linear reduced-order models, especially those that are useful for control, is an open challenge. Here, we investigate the choice of observable functions for Koopman analysis that enable the use of optimal linear control techniques on nonlinear problems. First, to include a cost on the state of the system, as in linear quadratic regulator (LQR) control, it is helpful to include these states in the observable subspace, as in DMD. However, we find that this is only possible when there is a single isolated fixed point, as systems with multiple fixed points or more complicated attractors are not globally topologically conjugate to a finite-dimensional linear system, and cannot be represented by a finite-dimensional linear Koopman subspace that includes the state. We then present a data-driven strategy to identify relevant observable functions for Koopman analysis by leveraging a new algorithm to determine relevant terms in a dynamical system by ℓ1-regularized regression of the data in a nonlinear function space; we also show how this algorithm is related to DMD. Finally, we demonstrate the usefulness of nonlinear observable subspaces in the design of Koopman operator optimal control laws for fully nonlinear systems using techniques from linear optimal control.
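
    The ℓ1-regularized regression step described in the abstract can be sketched on a standard toy system. The candidate library, the Lasso penalty, and the example dynamics below are assumptions chosen for illustration; they are not the authors' implementation.

    ```python
    # Sketch: recover active terms of dx/dt = mu*x, dy/dt = lam*(y - x^2)
    # from data via L1-regularized regression over a nonlinear library.
    import numpy as np
    from sklearn.linear_model import Lasso

    mu, lam, dt, n = -0.05, -1.0, 0.01, 5000
    X = np.empty((n, 2)); X[0] = [2.0, -1.0]
    for k in range(n - 1):                      # forward-Euler simulation
        xk, yk = X[k]
        X[k + 1] = [xk + dt * mu * xk, yk + dt * lam * (yk - xk ** 2)]

    dX = np.gradient(X, dt, axis=0)             # finite-difference derivatives
    x, y = X[:, 0], X[:, 1]
    library = np.column_stack([x, y, x ** 2, x * y, y ** 2])
    names = ["x", "y", "x^2", "xy", "y^2"]

    for i, target in enumerate(["dx/dt", "dy/dt"]):
        coef = Lasso(alpha=1e-4, fit_intercept=False,
                     max_iter=50000).fit(library, dX[:, i]).coef_
        print(target, {nm: round(c, 3) for nm, c in zip(names, coef)
                       if abs(c) > 1e-2})
    ```

    On this toy problem the regression should single out x for dx/dt, and y and x^2 for dy/dt, which is the sense in which sparse regression identifies a Koopman-invariant set of observables.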

  2. Method and system to estimate variables in an integrated gasification combined cycle (IGCC) plant

    DOEpatents

    Kumar, Aditya; Shi, Ruijie; Dokucu, Mustafa

    2013-09-17

    System and method to estimate variables in an integrated gasification combined cycle (IGCC) plant are provided. The system includes a sensor suite to measure respective plant input and output variables. An extended Kalman filter (EKF) receives sensed plant input variables and includes a dynamic model to generate a plurality of plant state estimates and a covariance matrix for the state estimates. A preemptive-constraining processor is configured to preemptively constrain the state estimates and covariance matrix to be free of constraint violations. A measurement-correction processor may be configured to correct constrained state estimates and a constrained covariance matrix based on processing of sensed plant output variables. The measurement-correction processor is coupled to update the dynamic model with corrected state estimates and a corrected covariance matrix. The updated dynamic model may be configured to estimate values for at least one plant variable not originally sensed by the sensor suite.
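
    A generic predict/constrain/correct cycle of the kind the abstract describes can be sketched as follows. The toy dynamics, noise covariances, and the non-negativity constraint are illustrative assumptions; the patent's actual plant model and constraint handling are more elaborate.

    ```python
    # Sketch: one EKF step with preemptive constraining of the state estimate.
    import numpy as np

    def ekf_step(x, P, z, f, h, F, H, Q, R, constrain):
        x_pred = constrain(f(x))              # predict, then preemptively constrain
        P_pred = F @ P @ F.T + Q
        S = H @ P_pred @ H.T + R              # innovation covariance
        K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
        x_new = constrain(x_pred + K @ (z - h(x_pred)))   # measurement correction
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        return x_new, P_new

    # toy demo: a non-negative scalar state with near-identity dynamics
    Q, R = 0.01 * np.eye(1), 0.1 * np.eye(1)
    x, P = np.array([1.0]), np.eye(1)
    for z in ([0.9], [1.1], [0.95]):
        x, P = ekf_step(x, P, np.array(z), lambda s: 0.99 * s, lambda s: s,
                        0.99 * np.eye(1), np.eye(1), Q, R,
                        lambda s: np.maximum(s, 0.0))
    print(x, P)
    ```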

  3. Maintenance Audit through Value Analysis Technique: A Case Study

    NASA Astrophysics Data System (ADS)

    Carnero, M. C.; Delgado, S.

    2008-11-01

    The increase in competitiveness, technological changes, and rising requirements of quality and service have forced a change in the design and application of maintenance, as well as in the way it is considered within managerial strategy. There are numerous maintenance activities that must be carried out in a service company, and as a result maintenance functions are often outsourced as a whole. Nevertheless, delegating this work to specialized personnel does not exempt the company from responsibility, but rather leads to the need for control of each maintenance activity. In order to achieve this control and to evaluate the efficiency and effectiveness of the company, it is essential to carry out an audit that diagnoses the problems that could develop. In this paper a maintenance audit applied to a service company is developed. The methodology applied is based on expert systems. The expert system uses rules combining the SMART weighting technique and value analysis to obtain the weightings between the decision functions and between the alternatives. It applies numerous rules and relations between different variables associated with the specific maintenance functions to obtain the maintenance state by sections and the general maintenance state of the enterprise. The contributions of this paper relate to the development of a maintenance audit in a service enterprise, in which maintenance is generally not considered a strategic subject, and to the integration of decision-making tools such as the SMART weighting technique with value analysis techniques, typical in the design of new products, within rule-based expert systems.

  4. An AO-assisted Variability Study of Four Globular Clusters

    NASA Astrophysics Data System (ADS)

    Salinas, R.; Contreras Ramos, R.; Strader, J.; Hakala, P.; Catelan, M.; Peacock, M. B.; Simunovic, M.

    2016-09-01

    The image-subtraction technique applied to study variable stars in globular clusters represented a leap in the number of new detections, with the drawback that many of these new light curves could not be transformed to magnitudes due to severe crowding. In this paper, we present observations of four Galactic globular clusters, M 2 (NGC 7089), M 10 (NGC 6254), M 80 (NGC 6093), and NGC 1261, taken with the ground-layer adaptive optics module at the SOAR Telescope, SAM. We show that the higher image quality provided by SAM allows for the calibration of the light curves of the great majority of the variables near the cores of these clusters as well as the detection of new variables, even in clusters where image-subtraction searches were already conducted. We report the discovery of 15 new variables in M 2 (12 RR Lyrae stars and 3 SX Phe stars), 12 new variables in M 10 (11 SX Phe and 1 long-period variable), and 1 new W UMa-type variable in NGC 1261. No new detections are found in M 80, but previous uncertain detections are confirmed and the corresponding light curves are calibrated into magnitudes. Additionally, based on the number of detected variables and new Hubble Space Telescope/UVIS photometry, we revisit a previous suggestion that M 80 may be the globular cluster with the richest population of blue stragglers in our Galaxy. Based on observations obtained at the Southern Astrophysical Research (SOAR) telescope, which is a joint project of the Ministério da Ciência, Tecnologia, e Inovação (MCTI) da República Federativa do Brasil, the U.S. National Optical Astronomy Observatory (NOAO), the University of North Carolina at Chapel Hill (UNC), and Michigan State University (MSU).

  5. A variable-gain output feedback control design methodology

    NASA Technical Reports Server (NTRS)

    Halyo, Nesim; Moerder, Daniel D.; Broussard, John R.; Taylor, Deborah B.

    1989-01-01

    A digital control system design technique is developed in which the control system gain matrix varies with the plant operating point parameters. The design technique is obtained by formulating the problem as an optimal stochastic output feedback control law with variable gains. This approach provides a control theory framework within which the operating range of a control law can be significantly extended. Furthermore, the approach avoids the major shortcomings of the conventional gain-scheduling techniques. The optimal variable gain output feedback control problem is solved by embedding the Multi-Configuration Control (MCC) problem, previously solved at ICS. An algorithm to compute the optimal variable gain output feedback control gain matrices is developed. The algorithm is a modified version of the MCC algorithm improved so as to handle the large dimensionality which arises particularly in variable-gain control problems. The design methodology developed is applied to a reconfigurable aircraft control problem. A variable-gain output feedback control problem was formulated to design a flight control law for an AFTI F-16 aircraft which can automatically reconfigure its control strategy to accommodate failures in the horizontal tail control surface. Simulations of the closed-loop reconfigurable system show that the approach produces a control design which can accommodate such failures with relative ease. The technique can be applied to many other problems including sensor failure accommodation, mode switching control laws and super agility.

  6. Variability in Nose-to-Lung Aerosol Delivery

    PubMed Central

    Walenga, Ross L; Tian, Geng; Hindle, Michael; Yelverton, Joshua; Dodson, Kelley; Longest, P. Worth

    2014-01-01

    Nasal delivery of lung targeted pharmaceutical aerosols is ideal for drugs that need to be administered during high flow nasal cannula (HFNC) gas delivery, but, based on previous studies, losses and variability through both the delivery system and nasal cavity are expected to be high. The objective of this study was to assess the variability in aerosol delivery through the nose to the lungs with a nasal cannula interface for conventional and excipient enhanced growth (EEG) delivery techniques. A database of nasal cavity computed tomography (CT) scans was collected and analyzed, from which four models were selected to represent a wide range of adult anatomies, quantified based on the nasal surface area-to-volume ratio (SA/V). Computational fluid dynamics (CFD) methods were validated with existing in vitro data and used to predict aerosol delivery through a streamlined nasal cannula and the four nasal models at a steady state flow rate of 30 L/min. Aerosols considered were solid particles for EEG delivery (initial 0.9 μm and 1.5 μm aerodynamic diameters) and conventional droplets (5 μm) for a control case. Use of the EEG approach was found to reduce depositional losses in the nasal cavity by an order of magnitude and substantially reduce variability. Specifically, for aerosol deposition efficiency in the four geometries, the 95% confidence intervals (CI) for 0.9 and 5 μm aerosols were 2.3-3.1 and 15.5-66.3%, respectively. Simulations showed that the use of EEG as opposed to conventional methods improved the delivered dose of aerosols through the nasopharynx, expressed as penetration fraction (PF), by approximately a factor of four. Variability of PF, expressed by the coefficient of variation (CV), was reduced by a factor of four with EEG delivery compared with the control case. Penetration fraction correlated well with SA/V for larger aerosols, but smaller aerosols showed some dependence on nasopharyngeal exit hydraulic diameter. In conclusion, results indicated that the EEG technique not only improved lung aerosol delivery, but largely eliminated variability in both nasal depositional loss and lung PF in a newly developed set of nasal airway models. PMID:25308992

  7. Variable Temperature Nuclear Magnetic Resonance and Magnetic Resonance Imaging System as a Novel Technique for In Situ Monitoring of Food Phase Transition.

    PubMed

    Song, Yukun; Cheng, Shasha; Wang, Huihui; Zhu, Bei-Wei; Zhou, Dayong; Yang, Peiqiang; Tan, Mingqian

    2018-01-24

    A nuclear magnetic resonance (NMR) and magnetic resonance imaging (MRI) system with a 45 mm variable temperature (VT) sample probe (VT-NMR-MRI) was developed as an innovative technique for in situ monitoring of food phase transition. The system was designed to allow for dual deployment in either a freezing (-37 °C) or high-temperature (150 °C) environment. The major breakthrough of the developed VT-NMR-MRI system is that it is able to measure water states simultaneously and in situ during food processing. The performance of the VT-NMR-MRI system was evaluated by measuring the phase transition for salmon flesh and hen egg samples. The NMR relaxometry results demonstrated that the freezing point of salmon flesh was -8.08 °C and the salmon flesh denaturation temperature was 42.16 °C. The protein denaturation temperature of hen egg was 70.61 °C, and denaturation occurred at 24.12 min. Meanwhile, the use of MRI for the phase transition of food was also investigated to gain internal structural information. All these results showed that the VT-NMR-MRI system provides an effective means for in situ monitoring of phase transition in food processing.

  8. Anisotropy in thermal conductivity of graphite flakes–SiC{sub p}/matrix composites: Implications in heat sinking design for thermal management applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Molina, J.M., E-mail: jmmj@ua.es; Departamento de Física Aplicada, Universidad de Alicante, Ap. 99, E-03080 Alicante; Departamento de Química Inorgánica, Universidad de Alicante, Ap. 99, E-03080 Alicante

    2015-11-15

    Within the frame of heat dissipation for electronics, a very interesting family of anisotropic composite materials, fabricated by liquid infiltration of a matrix into preforms of oriented graphite flakes and SiC particles, has been recently proposed. Aiming to investigate the implications of the inherent anisotropy of these composites on their thermal conductivity, and hence on their potential applications, materials with matrices of Al–12 wt.% Si alloy and epoxy polymer have been fabricated. Samples have been cut at a variable angle with respect to the flakes plane and thermal conductivity has been measured by means of two standard techniques, namely, the steady-state technique and the laser flash method. Experimental results are presented and discussed in terms of current models, from which important technological implications for heat sinking design can be derived. - Highlights: • Anisotropy in thermal conductivity of graphite flakes-based composites is evaluated. • Samples are cut in a direction forming a variable angle with the oriented flakes. • For angles 0° and 90°, thermal conductivity does not depend on sample geometry. • For intermediate angles, thermal conductivity strongly depends on sample geometry. • “Thin” samples must be thicker than 600 μm, “thick” samples must be encapsulated.

  9. Budget goal commitment, clinical managers' use of budget information and performance.

    PubMed

    Macinati, Manuela S; Rizzo, Marco G

    2014-08-01

    Despite the importance placed on accounting as a means to influence performance in public healthcare, there is still a lot to be learned about the role of management accounting in clinical managers' work behavior and its link with organizational performance. The article aims at analyzing the motivational role of budgetary participation and the intervening role of individuals' mental states and behaviors in influencing the relationship between budgetary participation and performance. Drawing on goal-setting theory, the SEM technique was used to test the relationships among variables. The data were collected by a survey conducted in an Italian hospital. The results show that: (i) budgetary participation does not directly influence the use of budget information, but the latter is encouraged by the level of budget goal commitment, which, in turn, is influenced by the positive motivational consequences of participative budgeting; (ii) budget goal commitment does not directly influence performance, but the relationship is mediated by the use of budget information. This study contributes to the health policy and management accounting literature and has significant policy implications. Mainly, the findings show that the introduction of business-like techniques in the healthcare sector can improve performance if attitudinal and behavioral variables are adequately stimulated. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  10. Comment on "An Efficient and Stable Hydrodynamic Model With Novel Source Term Discretization Schemes for Overland Flow and Flood Simulations" by Xilin Xia et al.

    NASA Astrophysics Data System (ADS)

    Lu, Xinhua; Mao, Bing; Dong, Bingjiang

    2018-01-01

    Xia et al. (2017) proposed a novel, fully implicit method for the discretization of the bed friction terms in solving the shallow-water equations. The friction terms contain h^(-7/3) (h denotes water depth), which may become extremely large and introduce machine error as h approaches zero. To address this problem, Xia et al. (2017) introduce auxiliary variables (their equations (37) and (38)) so that h^(-4/3) rather than h^(-7/3) is calculated, and solve a transformed equation (their equation (39)). The introduced auxiliary variables require extra storage. We analyzed the magnitude of the friction terms and found that, on the whole, these terms do not exceed machine floating-point precision; we therefore propose a simple-to-implement technique that splits h^(-7/3) across different parts of the friction terms to avoid introducing machine error. This technique needs no extra storage and no transformed equation, and is thus more efficient for simulations. We also showed that the surface reconstruction method proposed by Xia et al. (2017) may lead to predictions with spurious wiggles because the reconstructed Riemann states may misrepresent the water gravitational effect.
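
    The numerical point is easy to demonstrate. In the Python toy below, a Manning-type friction form is assumed for illustration (it is not necessarily the exact term in Xia et al.), and the depth is deliberately extreme to expose the overflow:

    ```python
    # Sketch: forming h**(-7/3) directly can overflow, while distributing
    # the powers of h across bounded factors (u = q/h) stays finite.
    import numpy as np

    g, n_manning = 9.81, 0.03
    h = np.float64(1e-140)   # extreme near-dry depth, chosen to expose overflow
    q = np.float64(1e-141)   # unit-width discharge, so u = q/h is O(0.1)

    naive = g * n_manning**2 * q * np.abs(q) * h**(-7.0 / 3.0)  # -> inf
    u = q / h                                                   # bounded velocity
    split = g * n_manning**2 * u * np.abs(u) / h**(1.0 / 3.0)   # finite

    print(naive, split)   # same algebra, different machine behavior
    ```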

  11. Mapping Variables.

    ERIC Educational Resources Information Center

    Stone, Mark H.; Wright, Benjamin D.; Stenner, A. Jackson

    1999-01-01

    Describes mapping variables, the principal technique for planning and constructing a test or rating instrument. A variable map is also useful for interpreting results. Provides several maps to show the importance and value of mapping a variable by person and item data. (Author/SLD)

  12. Energy awareness for supercapacitors using Kalman filter state-of-charge tracking

    NASA Astrophysics Data System (ADS)

    Nadeau, Andrew; Hassanalieragh, Moeen; Sharma, Gaurav; Soyata, Tolga

    2015-11-01

    Among energy buffering alternatives, supercapacitors can provide unmatched efficiency and durability. Additionally, the direct relation between a supercapacitor's terminal voltage and stored energy can improve energy awareness. However, a simple capacitive approximation cannot adequately represent the stored energy in a supercapacitor. It is shown that the three branch equivalent circuit model provides more accurate energy awareness. This equivalent circuit uses three capacitances and associated resistances to represent the supercapacitor's internal SOC (state-of-charge). However, the SOC cannot be determined from one observation of the terminal voltage, and must be tracked over time using inexact measurements. We present: 1) a Kalman filtering solution for tracking the SOC; 2) an on-line system identification procedure to efficiently estimate the equivalent circuit's parameters; and 3) experimental validation of both parameter estimation and SOC tracking for 5 F, 10 F, 50 F, and 350 F supercapacitors. Validation is done within the operating range of a solar powered application and the associated power variability due to energy harvesting. The proposed techniques are benchmarked against the simple capacitive model and prior parameter estimation techniques, and provide a 67% reduction in root-mean-square error for predicting usable buffered energy.
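
    The predict/update cycle behind such SOC tracking can be sketched on a single-branch RC approximation; the three-branch model in the paper adds two slower branches but follows the same pattern. The capacitance, resistance, currents, and noise levels below are assumptions, not the paper's identified parameters.

    ```python
    # Sketch: scalar Kalman filter tracking capacitor voltage (an SOC proxy)
    # from noisy terminal-voltage readings during a constant discharge.
    import numpy as np

    C, R = 50.0, 0.05            # capacitance (F), series resistance (ohm) -- assumed
    dt, Q, Rm = 1.0, 1e-6, 1e-4  # time step (s), process and measurement noise
    i_load = 0.5                 # discharge current (A) -- assumed

    rng = np.random.default_rng(2)
    v_true, v_est, P = 2.5, 2.3, 1.0
    for _ in range(200):
        v_true -= (i_load / C) * dt                    # true capacitor dynamics
        z = v_true - R * i_load + rng.normal(0, 0.01)  # noisy terminal voltage
        v_est -= (i_load / C) * dt                     # predict
        P += Q
        K = P / (P + Rm)                               # update with z = v - R*i
        v_est += K * (z - (v_est - R * i_load))
        P *= 1 - K
    print(round(v_true, 4), round(v_est, 4))           # estimate tracks truth
    ```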

  13. Investigating Mesoscale Convective Systems and their Predictability Using Machine Learning

    NASA Astrophysics Data System (ADS)

    Daher, H.; Duffy, D.; Bowen, M. K.

    2016-12-01

    A mesoscale convective system (MCS) is a thunderstorm region that lasts several hours and forms near weather fronts; MCSs can often develop into tornadoes. Here we seek to answer the question of whether these tornadoes are "predictable" by looking for one or more defining characteristics separating MCSs that evolve into tornadoes from those that do not. Using NASA's Modern Era Retrospective-analysis for Research and Applications 2 reanalysis data (M2R12K), we apply several state-of-the-art machine learning techniques to investigate this question. The spatial region examined in this experiment is Tornado Alley in the United States over the peak tornado months. A database containing select variables from M2R12K is created using PostgreSQL. This database is then analyzed using machine learning methods such as Symbolic Aggregate approXimation (SAX) and DBSCAN (an unsupervised density-based data clustering algorithm). The motivation for using these methods is to mathematically define an MCS so that association rule mining techniques can be used to uncover a signal or teleconnection that will help us forecast which MCSs will result in tornadoes, giving society more time to prepare and in turn reducing casualties and destruction.
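
    Of the methods named, the DBSCAN step is easy to sketch. The storm features (CAPE, deep-layer shear, lifespan) and the clustering settings below are hypothetical stand-ins, not the study's actual M2R12K variables:

    ```python
    # Sketch: density-based clustering of synthetic MCS feature vectors;
    # label -1 marks points DBSCAN leaves unclustered (outliers).
    import numpy as np
    from sklearn.cluster import DBSCAN
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(3)
    regime_a = rng.normal([1500, 10, 4], [200, 2, 1.0], size=(40, 3))
    regime_b = rng.normal([3000, 25, 9], [250, 3, 1.5], size=(40, 3))
    outliers = rng.uniform([500, 2, 1], [4000, 35, 14], size=(10, 3))
    X = StandardScaler().fit_transform(np.vstack([regime_a, regime_b, outliers]))

    labels = DBSCAN(eps=0.7, min_samples=5).fit_predict(X)
    print(np.unique(labels, return_counts=True))
    ```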

  14. Web based visualization of large climate data sets

    USGS Publications Warehouse

    Alder, Jay R.; Hostetler, Steven W.

    2015-01-01

    We have implemented the USGS National Climate Change Viewer (NCCV), which is an easy-to-use web application that displays future projections from global climate models over the United States at the state, county and watershed scales. We incorporate the NASA NEX-DCP30 statistically downscaled temperature and precipitation for 30 global climate models being used in the Fifth Assessment Report (AR5) of the Intergovernmental Panel on Climate Change (IPCC), and hydrologic variables we simulated using a simple water-balance model. Our application summarizes very large, complex data sets at scales relevant to resource managers and citizens and makes climate-change projection information accessible to users of varying skill levels. Tens of terabytes of high-resolution climate and water-balance data are distilled to compact binary format summary files that are used in the application. To alleviate slow response times under high loads, we developed a map caching technique that reduces the time it takes to generate maps by several orders of magnitude. The reduced access time scales to >500 concurrent users. We provide code examples that demonstrate key aspects of data processing, data exporting/importing and the caching technique used in the NCCV.
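
    The caching idea generalizes beyond this viewer. The sketch below shows the basic pattern with an in-memory memo; the function name render_map and the key fields are hypothetical stand-ins for whatever the NCCV backend actually does (the authors' published code examples should be preferred for specifics).

    ```python
    # Sketch: render each unique (model, variable, period) map once and
    # serve repeated requests from an in-memory cache.
    from functools import lru_cache

    def render_map(model: str, variable: str, period: str) -> bytes:
        # placeholder for the real pipeline: read a compact binary summary
        # file, color-map the field, encode an image
        return f"{model}:{variable}:{period}".encode()

    @lru_cache(maxsize=4096)
    def cached_map(model: str, variable: str, period: str) -> bytes:
        return render_map(model, variable, period)  # expensive only on a miss

    tile = cached_map("CCSM4", "tasmax", "2050-2074")        # rendered once
    tile_again = cached_map("CCSM4", "tasmax", "2050-2074")  # cache hit
    ```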

  15. Evaluating the performance of a fault detection and diagnostic system for vapor compression equipment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Breuker, M.S.; Braun, J.E.

    This paper presents a detailed evaluation of the performance of a statistical, rule-based fault detection and diagnostic (FDD) technique presented by Rossi and Braun (1997). Steady-state and transient tests were performed on a simple rooftop air conditioner over a range of conditions and fault levels. The steady-state data without faults were used to train models that predict outputs for normal operation. The transient data with faults were used to evaluate FDD performance. The effect of a number of design variables on FDD sensitivity for different faults was evaluated and two prototype systems were specified for more complete evaluation. Good performance was achieved in detecting and diagnosing five faults using only six temperatures (two input and four output) and linear models. The performance improved by about a factor of two when ten measurements (three input and seven output) and higher order models were used. This approach for evaluating and optimizing the performance of the statistical, rule-based FDD technique could be used as a design and evaluation tool when applying this FDD method to other packaged air-conditioning systems. Furthermore, the approach could also be modified to evaluate the performance of other FDD methods.
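
    The statistical, rule-based idea can be caricatured in a few lines: a model trained on fault-free steady-state data predicts the output temperatures, residuals outside a confidence band flag a fault, and the residual sign pattern is matched against diagnostic rules. The fault patterns and thresholds below are invented for illustration, not Rossi and Braun's rule set.

    ```python
    # Sketch: residual-based detection plus sign-pattern diagnosis.
    import numpy as np

    RULES = {"refrigerant_leak": (-1, -1, 1, 1),    # hypothetical patterns
             "condenser_fouling": (1, -1, 1, -1)}   # per output sensor

    def diagnose(residuals, sigma, z=3.0):
        flags = np.where(np.abs(residuals) > z * sigma,
                         np.sign(residuals).astype(int), 0)
        if not flags.any():
            return "normal"
        for fault, pattern in RULES.items():
            if tuple(flags) == pattern:
                return fault
        return "unknown fault"

    sigma = np.array([0.2, 0.2, 0.3, 0.3])   # residual spread from training
    print(diagnose(np.array([-0.9, -0.8, 1.2, 1.1]), sigma))  # refrigerant_leak
    ```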

  16. New insights on the Dronino iron meteorite by double-pulse micro-Laser-Induced Breakdown Spectroscopy

    NASA Astrophysics Data System (ADS)

    Tempesta, Gioacchino; Senesi, Giorgio S.; Manzari, Paola; Agrosì, Giovanna

    2018-06-01

    Two fragments of an iron meteorite shower named Dronino were characterized by a novel technique, Double-Pulse micro-Laser-Induced Breakdown Spectroscopy (DP-μLIBS), combined with an optical microscope. This technique allowed a fast and detailed analysis of the chemical composition of the fragments and permitted determination of their composition, alteration-state differences, and the cooling rate of the meteorite. Qualitative analysis indicated the presence of Fe, Ni and Co in both fragments, whereas the elements Al, Ca, Mg, Si and, for the first time, Li were detected only in one fragment and were related to its post-fall alteration and contamination by weathering processes. Quantitative data obtained using the calibration-free (CF) LIBS method showed good agreement with those obtained by the traditional methods generally applied to meteorite analysis, i.e. Energy-Dispersive Spectroscopy - Scanning Electron Microscopy (EDS-SEM), also performed in this study, and Electron Probe Microanalysis (EMPA) (literature data). The local and coupled variability of Ni and Co (increase of Ni and decrease of Co) determined for the unaltered portions exhibiting plessite texture suggested the occurrence of solid-state diffusion processes under a slow cooling rate for the Dronino meteorite.

  17. Tickling, a Technique for Inducing Positive Affect When Handling Rats.

    PubMed

    Cloutier, Sylvie; LaFollette, Megan R; Gaskill, Brianna N; Panksepp, Jaak; Newberry, Ruth C

    2018-05-08

    Handling small animals such as rats can lead to several adverse effects. These include the fear of humans, resistance to handling, increased injury risk for both the animals and the hands of their handlers, decreased animal welfare, and less valid research data. To minimize negative effects on experimental results and human-animal relationships, research animals are often habituated to being handled. However, the methods of habituation are highly variable and often of limited effectiveness. More potently, it is possible for humans to mimic aspects of the animals' playful rough-and-tumble behavior during handling. When applied to laboratory rats in a systematic manner, this playful handling, referred to as tickling, consistently gives rise to positive behavioral responses. This article provides a detailed description of a standardized rat tickling technique. This method can contribute to future investigations into positive affective states in animals, make it easier to handle rats for common husbandry activities such as cage changing or medical/research procedures such as injection, and be implemented as a source of social enrichment. It is concluded that this method can be used to efficiently and practicably reduce rats' fearfulness of humans and improve their welfare, as well as reliably model positive affective states.

  18. Relative Linkages of Stream Dissolved Oxygen with the Hydroclimatic and Biogeochemical Drivers across the Gulf Coast of U.S.A.

    NASA Astrophysics Data System (ADS)

    Gebreslase, A. K.; Abdul-Aziz, O. I.

    2017-12-01

    The dynamics of coastal stream water quality are influenced by a multitude of interacting environmental drivers. A systematic data analytics approach was employed to determine the relative linkages of stream dissolved oxygen (DO) with hydroclimatic and biogeochemical variables across the Gulf Coast of the U.S.A. Multivariate pattern recognition techniques of PCA and FA, alongside Pearson's correlation matrix, were utilized to examine the interrelation of variables at 36 water quality monitoring stations from the USGS NWIS and EPA STORET databases. Power-law-based partial least squares regression models with a bootstrap Monte Carlo procedure (1000 iterations) were developed to estimate the relative linkages of dissolved oxygen with the hydroclimatic and biogeochemical variables by appropriately resolving multicollinearity (Nash-Sutcliffe efficiency = 0.58-0.94). Based on the dominant drivers, stations were divided into four environmental regimes. Water temperature was the dominant driver of DO in the majority of streams, representing most of the northern part of the Gulf Coast states. However, streams in the southern parts of Texas and Florida showed a dominant pH control on stream DO. Further, streams representing the transition zone between the two environmental regimes showed notable controls of multiple drivers (i.e., water temperature, stream flow, and specific conductance) on stream DO. The data analytics research provided profound insight into the dynamics of stream DO with the hydroclimatic and biogeochemical variables. This knowledge can help water quality managers formulate plans for effective stream water quality and watershed management on the U.S. Gulf Coast. Keywords: Data analytics, coastal streams, relative linkages, dissolved oxygen, environmental regimes, Gulf Coast, United States.
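
    The power-law PLS idea translates to a short sketch: log-transform drivers and response so power laws become linear, fit partial least squares, and bootstrap the coefficients. The driver names, the synthetic exponents, and the confidence procedure below are assumptions, not the study's data or results.

    ```python
    # Sketch: bootstrapped power-law PLS linkages on synthetic stream data.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(5)
    n = 300
    temp = rng.uniform(5, 35, n)           # water temperature (deg C)
    flow = rng.lognormal(2, 0.5, n)        # stream flow
    cond = rng.lognormal(5, 0.3, n)        # specific conductance
    do = 14.6 * temp**-0.35 * flow**0.05 * np.exp(rng.normal(0, 0.05, n))

    X, y = np.log(np.column_stack([temp, flow, cond])), np.log(do)

    boot = []
    for _ in range(1000):                  # bootstrap Monte Carlo resampling
        idx = rng.integers(0, n, n)
        boot.append(PLSRegression(n_components=2)
                    .fit(X[idx], y[idx]).coef_.ravel())
    lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
    print(np.mean(boot, axis=0), lo, hi)   # exponent estimates with 95% CIs
    ```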

  19. Sampling saddle points on a free energy surface

    NASA Astrophysics Data System (ADS)

    Samanta, Amit; Chen, Ming; Yu, Tang-Qing; Tuckerman, Mark; E, Weinan

    2014-04-01

    Many problems in biology, chemistry, and materials science require knowledge of saddle points on free energy surfaces. These saddle points act as transition states and are the bottlenecks for transitions of the system between different metastable states. For simple systems in which the free energy depends on a few variables, the free energy surface can be precomputed, and saddle points can then be found using existing techniques. For complex systems, where the free energy depends on many degrees of freedom, this is not feasible. In this paper, we develop an algorithm for finding the saddle points on a high-dimensional free energy surface "on-the-fly" without requiring a priori knowledge of the free energy function itself. This is done by using the general strategy of the heterogeneous multi-scale method: applying a macro-scale solver, here the gentlest ascent dynamics algorithm, with the needed force and Hessian values computed on-the-fly using a micro-scale model such as molecular dynamics. The algorithm is capable of dealing with problems involving many coarse-grained variables. The utility of the algorithm is illustrated by studying the saddle points associated with (a) the isomerization transition of the alanine dipeptide using two coarse-grained variables, specifically the Ramachandran dihedral angles, and (b) the beta-hairpin structure of the alanine decamer using 20 coarse-grained variables, specifically the full set of Ramachandran angle pairs associated with each residue. For the alanine decamer, we obtain a detailed network showing the connectivity of the minima obtained and the saddle-point structures that connect them, which provides a way to visualize the gross features of the high-dimensional surface.
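
    The gentlest ascent dynamics (GAD) solver at the core of the method can be sketched on an analytic two-dimensional surface standing in for the on-the-fly free energy. The test potential, step size, and iteration count below are assumptions for illustration:

    ```python
    # Sketch of GAD: climb against the gradient along direction n while
    # relaxing elsewhere; n rotates toward the softest Hessian mode.
    #   dp/dt = -(I - 2 n n^T) grad V,   dn/dt = -(I - n n^T) H n
    import numpy as np

    def grad_hess(p):
        x, y = p
        # V(x, y) = (x^2 - 1)^2 + y^2: minima at (+-1, 0), saddle at (0, 0)
        g = np.array([4 * x * (x**2 - 1), 2 * y])
        H = np.array([[12 * x**2 - 4, 0.0], [0.0, 2.0]])
        return g, H

    p = np.array([0.6, 0.3])    # start inside one basin
    n = np.array([1.0, 0.0])    # initial ascent direction
    dt = 1e-3
    for _ in range(20000):
        g, H = grad_hess(p)
        p += dt * (2 * np.dot(n, g) * n - g)        # modified force
        n += dt * (np.dot(n, H @ n) * n - H @ n)    # rotate n toward soft mode
        n /= np.linalg.norm(n)
    print(p)   # converges to the saddle near (0, 0)
    ```

    In the paper's setting the analytic gradient and Hessian of this sketch are replaced by on-the-fly estimates from a micro-scale model such as molecular dynamics.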

  20. Long-term forecasting of meteorological time series using Nonlinear Canonical Correlation Analysis (NLCCA)

    NASA Astrophysics Data System (ADS)

    Woldesellasse, H. T.; Marpu, P. R.; Ouarda, T.

    2016-12-01

    Wind is one of the crucial renewable energy sources expected to bring solutions to the challenges of clean energy and the global issue of climate change. A number of linear and nonlinear multivariate techniques have been used to predict the stochastic character of wind speed. A wind forecast with good accuracy has a positive impact on reducing electricity system cost and is essential for effective grid management. Over the past years, few studies have assessed teleconnections and their possible effects on long-term wind speed variability in the UAE region. In this study, the Nonlinear Canonical Correlation Analysis (NLCCA) method is applied to study the relationship between global climate oscillation indices and meteorological variables, with a major emphasis on the wind speed and wind direction of Abu Dhabi, UAE. The wind dataset was obtained from six ground stations. The first mode of NLCCA is capable of capturing the nonlinear mode of the climate indices at different seasons, showing the symmetry between the warm states and the cool states. The strength of the nonlinear canonical correlation between the two sets of variables varies with the lead/lag time. The performance of the models is assessed by calculating error indices such as the root mean square error (RMSE) and mean absolute error (MAE). The results indicated that NLCCA models provide more accurate information about the nonlinear intrinsic behaviour of the dataset than the linear CCA model in terms of correlation and root mean square error. Key words: Nonlinear Canonical Correlation Analysis (NLCCA), Canonical Correlation Analysis, Neural Network, Climate Indices, wind speed, wind direction
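
    As a point of reference for the comparison reported above, the linear CCA baseline is straightforward to sketch; NLCCA itself replaces the linear projections with neural networks. The synthetic data shapes and correlations below are assumptions, not the Abu Dhabi station data.

    ```python
    # Sketch: leading canonical correlation between two synthetic sets,
    # standing in for climate indices (X) and station wind variables (Y).
    import numpy as np
    from sklearn.cross_decomposition import CCA

    rng = np.random.default_rng(4)
    common = rng.normal(size=(240, 1))             # shared monthly signal
    X = np.hstack([common + 0.5 * rng.normal(size=(240, 1)) for _ in range(4)])
    Y = np.hstack([0.8 * common + 0.6 * rng.normal(size=(240, 1)) for _ in range(6)])

    u, v = CCA(n_components=1).fit(X, Y).transform(X, Y)
    print(np.corrcoef(u[:, 0], v[:, 0])[0, 1])     # leading canonical correlation
    ```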
