NASA Astrophysics Data System (ADS)
Sleeter, B. M.; Daniel, C.; Frid, L.; Fortin, M. J.
2016-12-01
State-and-transition simulation models (STSMs) provide a general approach for incorporating uncertainty into forecasts of landscape change. Using a Monte Carlo approach, STSMs generate spatially-explicit projections of the state of a landscape based upon probabilistic transitions defined between states. While STSMs are based on the basic principles of Markov chains, they have additional properties that make them applicable to a wide range of questions and types of landscapes. A current limitation of STSMs is that they are only able to track the fate of discrete state variables, such as land use/land cover (LULC) classes. There are some landscape modelling questions, however, for which continuous state variables - for example carbon biomass - are also required. Here we present a new approach for integrating continuous state variables into spatially-explicit STSMs. Specifically we allow any number of continuous state variables to be defined for each spatial cell in our simulations; the value of each continuous variable is then simulated forward in discrete time as a stochastic process based upon defined rates of change between variables. These rates can be defined as a function of the realized states and transitions of each cell in the STSM, thus providing a connection between the continuous variables and the dynamics of the landscape. We demonstrate this new approach by (1) developing a simple IPCC Tier 3 compliant model of ecosystem carbon biomass, where the continuous state variables are defined as terrestrial carbon biomass pools and the rates of change as carbon fluxes between pools, and (2) integrating this carbon model with an existing LULC change model for the state of Hawaii, USA.
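The core loop this abstract describes — sample a discrete state transition per cell, then update continuous variables from rates tied to the realized state and transition — can be sketched as follows. The states, probabilities, and flux rates here are invented for illustration, not the Hawaii model's actual parameters.

```python
import random

# Hypothetical STSM cell: a discrete LULC state plus a continuous
# carbon pool; transition probabilities and flux rates are invented.
TRANSITIONS = {                     # state -> (next states, probabilities)
    "forest":      (["forest", "agriculture"], [0.95, 0.05]),
    "agriculture": (["agriculture", "forest"], [0.98, 0.02]),
}
GROWTH = {"forest": 2.0, "agriculture": 0.5}   # assumed tC/ha/yr uptake
CLEARING_LOSS = 0.6        # assumed biomass fraction lost when cleared

def step_cell(state, carbon, rng):
    """One Monte Carlo time step: sample the discrete transition, then
    update the continuous variable as a function of state and transition."""
    nxt = rng.choices(*TRANSITIONS[state])[0]
    if state == "forest" and nxt == "agriculture":
        carbon *= 1.0 - CLEARING_LOSS          # transition-driven flux
    carbon += GROWTH[nxt]                      # state-driven flux
    return nxt, carbon

rng = random.Random(42)
state, carbon = "forest", 100.0
for _ in range(50):                            # 50 annual steps, one cell
    state, carbon = step_cell(state, carbon, rng)
print(state, round(carbon, 1))
```

A full STSM would run many such cells and Monte Carlo replicates; this shows only how a continuous variable rides along with the discrete chain.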
Testing quantum contextuality of continuous-variable states
DOE Office of Scientific and Technical Information (OSTI.GOV)
McKeown, Gerard; Paternostro, Mauro; Paris, Matteo G. A.
2011-06-15
We investigate the violation of noncontextuality by a class of continuous-variable states, including variations of entangled coherent states and a two-mode continuous superposition of coherent states. We generalize the Kochen-Specker (KS) inequality discussed by Cabello [A. Cabello, Phys. Rev. Lett. 101, 210401 (2008)] by using effective bidimensional observables implemented through physical operations acting on continuous-variable states, in a way similar to an approach recently put forward for the falsification of Bell-Clauser-Horne-Shimony-Holt inequalities. We test for state-independent violation of KS inequalities under variable degrees of state entanglement and mixedness. We then demonstrate theoretically the violation of a KS inequality for any two-mode state by using pseudospin observables and a generalized quasiprobability function.
Wu, Huiquan; White, Maury; Khan, Mansoor A
2011-02-28
The aim of this work was to develop an integrated process analytical technology (PAT) approach for dynamic pharmaceutical co-precipitation process characterization and design space development. A dynamic co-precipitation process, driven by gradually introducing water into the ternary system of naproxen-Eudragit L100-alcohol, was monitored in real time in situ via Lasentec FBRM and PVM. A 3D map of count-time-chord length revealed three distinguishable process stages: incubation, transition, and steady state. The effects of high-risk process variables (slurry temperature, stirring rate, and water addition rate) on both the derived co-precipitation process rates and the final chord length distribution were evaluated systematically using a 3^3 full factorial design. Critical process variables were identified via ANOVA for both the transition and steady states. General linear models (GLM) were then used for parameter estimation for each critical variable. Clear trends in the effects of each critical variable during the transition and steady states were found by GLM and were interpreted using fundamental process principles and Nyvlt's transfer model. Neural network models were able to link process variables with response variables at the transition and steady states with R^2 of 0.88-0.98. PVM images evidenced nucleation and crystal growth. Contour plots illustrated the design space via the critical process variables' ranges. This work demonstrated the utility of an integrated PAT approach for QbD development. Published by Elsevier B.V.
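The 3^3 full factorial layout mentioned above is straightforward to enumerate. The factor names and levels below are illustrative assumptions, not the study's actual settings.

```python
from itertools import product

# Sketch of a 3-level, 3-factor full factorial design; levels are
# invented placeholders for the three high-risk process variables.
factors = {
    "slurry_temp_C":     [15, 25, 35],
    "stir_rate_rpm":     [200, 400, 600],
    "water_rate_mL_min": [1, 5, 10],
}
design = [dict(zip(factors, combo)) for combo in product(*factors.values())]
print(len(design))   # 3 levels ^ 3 factors = 27 runs
print(design[0])
```

Each of the 27 runs would then be executed and its derived process rates fed to the ANOVA/GLM analysis.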
State-Space Formulation for Circuit Analysis
ERIC Educational Resources Information Center
Martinez-Marin, T.
2010-01-01
This paper presents a new state-space approach for temporal analysis of electrical circuits. The method systematically obtains the state-space formulation of nondegenerate linear networks without using concepts of topology. It employs nodal/mesh systematic analysis to reduce the number of undesired variables. This approach helps students to…
Mechanistic materials modeling for nuclear fuel performance
Tonks, Michael R.; Andersson, David; Phillpot, Simon R.; ...
2017-03-15
Fuel performance codes are critical tools for the design, certification, and safety analysis of nuclear reactors. However, their ability to predict fuel behavior under abnormal conditions is severely limited by their considerable reliance on empirical materials models correlated to burn-up (a measure of the number of fission events that have occurred, but not a unique measure of the history of the material). In this paper, we propose a different paradigm for fuel performance codes to employ mechanistic materials models that are based on the current state of the evolving microstructure rather than burn-up. In this approach, a series of state variables are stored at material points and define the current state of the microstructure. The evolution of these state variables is defined by mechanistic models that are functions of fuel conditions and other state variables. The material properties of the fuel and cladding are determined from microstructure/property relationships that are functions of the state variables and the current fuel conditions. Multiscale modeling and simulation is being used in conjunction with experimental data to inform the development of these models. Finally, this mechanistic, microstructure-based approach has the potential to provide a more predictive fuel performance capability, but will require a team of researchers to complete the required development and to validate the approach.
ERIC Educational Resources Information Center
Goreham, Gary A.; And Others
Significant social, demographic, and economic changes have occurred in the North Central states since 1960. This document examines structural and policy variables related to distribution of income, during the years 1960-80 in the 397 counties defined as agriculture-dependent in 13 North Central states. Personal income distribution has been…
NASA Astrophysics Data System (ADS)
De Santis, Alberto; Dellepiane, Umberto; Lucidi, Stefano
2012-11-01
In this paper we investigate the estimation problem for a model of commodity prices. This model is a stochastic state-space dynamical model, and the problem unknowns are the state variables and the system parameters. The data are commodity spot prices, since time series of futures contracts are seldom freely available. Both the joint likelihood function of the system (over state variables and parameters) and its marginal likelihood function (with the state variables eliminated) are addressed.
ERIC Educational Resources Information Center
Floyd, Carol Everly
Statewide planning for higher education and the approaches that states take to budgeting and accountability are reviewed in this monograph. Statewide planning involves identifying problems and collecting relevant data, analyzing interrelationships among variables, and choosing the most desirable alternatives to reach objectives. State-level higher…
Quantum information processing in phase space: A modular variables approach
NASA Astrophysics Data System (ADS)
Ketterer, A.; Keller, A.; Walborn, S. P.; Coudreau, T.; Milman, P.
2016-08-01
Binary quantum information can be fault-tolerantly encoded in states defined in infinite-dimensional Hilbert spaces. Such states define a computational basis and permit a perfect equivalence between continuous and discrete universal operations. The drawback of this encoding is that the corresponding logical states are unphysical, being infinitely localized in phase space. We use the modular variables formalism to show that, in a number of protocols relevant for quantum information and for the realization of fundamental tests of quantum mechanics, it is possible to loosen the requirements on the logical subspace without jeopardizing the protocols' usefulness or their successful implementation. Such protocols involve measurements of appropriately chosen modular variables that permit the readout of the encoded discrete quantum information from the corresponding logical states. Finally, we demonstrate the experimental feasibility of our approach by applying it to the transverse degrees of freedom of single photons.
An approach to online network monitoring using clustered patterns
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Jinoh; Sim, Alex; Suh, Sang C.
Network traffic monitoring is a core element in network operations and management for various purposes such as anomaly detection, change detection, and fault/failure detection. In this study, we introduce a new approach to online monitoring using a pattern-based representation of the network traffic. Unlike the past online techniques limited to a single variable to summarize (e.g., sketch), the focus of this study is on capturing the network state from the multivariate attributes under consideration. To this end, we employ clustering with its benefit of the aggregation of multidimensional variables. The clustered result represents the state of the network with regard to the monitored variables, which can also be compared with the previously observed patterns visually and quantitatively. Finally, we demonstrate the proposed method with two popular use cases, one for estimating state changes and the other for identifying anomalous states, to confirm its feasibility.
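A minimal sketch of the clustering step: summarize multivariate traffic samples with k-means and treat the centroid set as the current network pattern. The features (e.g. packet rate, mean flow size) and data are synthetic, and the paper's actual clustering algorithm may differ.

```python
import math
import random

# Plain k-means over synthetic two-feature traffic samples; the sorted
# centroid set stands in for the "pattern" representing network state.
def kmeans(points, k, iters=25):
    centroids = points[:: max(1, len(points) // k)][:k]   # spread initial picks
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda j: math.dist(p, centroids[j]))
            groups[i].append(p)
        centroids = [
            tuple(sum(v) / len(g) for v in zip(*g)) if g else centroids[i]
            for i, g in enumerate(groups)
        ]
    return centroids

rng = random.Random(7)
normal = [(rng.gauss(10, 1), rng.gauss(5, 1)) for _ in range(50)]
attack = [(rng.gauss(50, 2), rng.gauss(40, 2)) for _ in range(50)]
pattern = sorted(kmeans(normal + attack, k=2))
print([tuple(round(v, 1) for v in c) for c in pattern])
```

Comparing the centroid set of the current window against previously stored patterns (visually or by distance) then gives the change/anomaly signal the abstract describes.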
Execution of Multidisciplinary Design Optimization Approaches on Common Test Problems
NASA Technical Reports Server (NTRS)
Balling, R. J.; Wilkinson, C. A.
1997-01-01
A class of synthetic problems for testing multidisciplinary design optimization (MDO) approaches is presented. These test problems are easy to reproduce because all functions are given as closed-form mathematical expressions. They are constructed in such a way that the optimal value of all variables and the objective is unity. The test problems involve three disciplines and allow the user to specify the number of design variables, state variables, coupling functions, design constraints, controlling design constraints, and the strength of coupling. Several MDO approaches were executed on two sample synthetic test problems. These approaches included single-level optimization approaches, collaborative optimization approaches, and concurrent subspace optimization approaches. Execution results are presented, and the robustness and efficiency of these approaches are evaluated for these sample problems.
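The construction can be illustrated with a toy problem of the same flavor: closed-form functions whose design variables and objective all equal one at the optimum. This is an invented example, not one of the paper's actual test problems.

```python
# Invented toy in the spirit of the synthetic MDO problems: built so
# that every variable and the objective are unity at the optimum.
def objective(x):
    return 1.0 + sum((xi - 1.0) ** 2 for xi in x)

def state_eq(x):               # stand-in for a coupling/state function
    return sum(x) / len(x)     # equals 1 when all design variables are 1

# crude gradient descent just to confirm the optimum sits at unity
x = [0.0] * 5
for _ in range(200):
    x = [xi - 0.1 * 2.0 * (xi - 1.0) for xi in x]
print(round(objective(x), 6), round(state_eq(x), 6))
```

Because the optimum is known by construction, any MDO strategy run on such a problem can be scored by how closely it recovers unity in every variable.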
NASA Astrophysics Data System (ADS)
Shnip, A. I.
2018-01-01
Based on the entropy-free thermodynamic approach, a generalized theory of thermodynamic systems with internal variables of state is developed. For the case of nonlinear thermodynamic systems with internal variables of state and linear relaxation, necessary and sufficient conditions are proved for the fulfillment of the second law of thermodynamics in its entropy-free formulation; according to the basic theorem of the theory, these conditions are also necessary and sufficient for the existence of a thermodynamic potential. Moreover, correspondence relations between thermodynamic systems with memory and systems with internal variables of state are established, as well as some useful relations in the state spaces of both types of systems.
Geiser, Christian; Griffin, Daniel; Shiffman, Saul
2016-01-01
Sometimes, researchers are interested in whether an intervention, experimental manipulation, or other treatment causes changes in intra-individual state variability. The authors show how multigroup-multiphase latent state-trait (MG-MP-LST) models can be used to examine treatment effects with regard to both mean differences and differences in state variability. The approach is illustrated based on a randomized controlled trial in which N = 338 smokers were randomly assigned to nicotine replacement therapy (NRT) vs. placebo prior to quitting smoking. We found that post quitting, smokers in both the NRT and placebo group had significantly reduced intra-individual affect state variability with respect to the affect items calm and content relative to the pre-quitting phase. This reduction in state variability did not differ between the NRT and placebo groups, indicating that quitting smoking may lead to a stabilization of individuals' affect states regardless of whether or not individuals receive NRT.
ERIC Educational Resources Information Center
Bergman, Lars R.; Nurmi, Jari-Erik; von Eye, Alexander A.
2012-01-01
I-states-as-objects-analysis (ISOA) is a person-oriented methodology for studying short-term developmental stability and change in patterns of variable values. ISOA is based on longitudinal data with the same set of variables measured at all measurement occasions. A key concept is the "i-state," defined as a person's pattern of variable…
Dynamic Latent Trait Models with Mixed Hidden Markov Structure for Mixed Longitudinal Outcomes.
Zhang, Yue; Berhane, Kiros
2016-01-01
We propose a general Bayesian joint modeling approach to model mixed longitudinal outcomes from the exponential family while taking into account any differential misclassification that may exist among categorical outcomes. Under this framework, outcomes observed without measurement error are related to latent trait variables through generalized linear mixed effect models. The misclassified outcomes are related to the latent class variables, which represent unobserved real states, using mixed hidden Markov models (MHMM). In addition to enabling the estimation of parameters in prevalence, transition, and misclassification probabilities, MHMMs capture cluster-level heterogeneity. A transition modeling structure allows the latent trait and latent class variables to depend on observed predictors at the same time period and also on latent trait and latent class variables at previous time periods for each individual. Simulation studies are conducted to make comparisons with traditional models in order to illustrate the gains from the proposed approach. The new approach is applied to data from the Southern California Children's Health Study (CHS) to jointly model questionnaire-based asthma state and multiple lung function measurements in order to gain better insight about the underlying biological mechanism that governs the inter-relationship between asthma state and lung function development.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meyer, L.; Witzel, G.; Ghez, A. M.
2014-08-10
Continuously time variable sources are often characterized by their power spectral density and flux distribution. These quantities can undergo dramatic changes over time if the underlying physical processes change. However, some changes can be subtle and not distinguishable using standard statistical approaches. Here, we report a methodology that aims to identify distinct but similar states of time variability. We apply this method to the Galactic supermassive black hole, where 2.2 μm flux is observed from a source associated with Sgr A* and where two distinct states have recently been suggested. Our approach is taken from mathematical finance and works with conditional flux density distributions that depend on the previous flux value. The discrete, unobserved (hidden) state variable is modeled as a stochastic process and the transition probabilities are inferred from the flux density time series. Using the most comprehensive data set to date, in which all Keck and a majority of the publicly available Very Large Telescope data have been merged, we show that Sgr A* is sufficiently described by a single intrinsic state. However, the observed flux densities exhibit two states: noise dominated and source dominated. Our methodology reported here will prove extremely useful to assess the effects of the putative gas cloud G2 that is on its way toward the black hole and might create a new state of variability.
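The transition-probability ingredient can be sketched on a toy labeled flux-state sequence. In the paper the state is hidden and inferred from conditional flux-density distributions, so this shows only the counting step under the simplifying assumption that state labels are known.

```python
from collections import Counter

# Toy sequence of flux states; in the real method these labels are
# latent and estimated, not observed.
seq = ["noise"] * 5 + ["source"] * 3 + ["noise"] * 4 + ["source"] * 2

pairs = Counter(zip(seq, seq[1:]))          # count consecutive-state pairs
states = sorted(set(seq))
T = {s: {t: pairs[(s, t)] / sum(pairs[(s, u)] for u in states)
         for t in states} for s in states}  # row-normalized transition matrix
print(round(T["noise"]["noise"], 3))        # 7 of 9 exits from "noise" stay
```

Replacing the hard labels with posterior state probabilities from the hidden-state model turns this counting step into the inference the abstract describes.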
NASA Astrophysics Data System (ADS)
Hobbs, J.; Turmon, M.; David, C. H.; Reager, J. T., II; Famiglietti, J. S.
2017-12-01
NASA's Western States Water Mission (WSWM) combines remote sensing of the terrestrial water cycle with hydrological models to provide high-resolution state estimates for multiple variables. The effort includes both land surface and river routing models that are subject to several sources of uncertainty, including errors in the model forcing and model structural uncertainty. Computational and storage constraints prohibit extensive ensemble simulations, so this work outlines efficient but flexible approaches for estimating and reporting uncertainty. Calibrated by remote sensing and in situ data where available, we illustrate the application of these techniques in producing state estimates with associated uncertainties at kilometer-scale resolution for key variables such as soil moisture, groundwater, and streamflow.
Multilevel resistive information storage and retrieval
Lohn, Andrew; Mickel, Patrick R.
2016-08-09
The present invention relates to resistive random-access memory (RRAM or ReRAM) systems, as well as methods of employing multiple state variables to form degenerate states in such memory systems. The methods herein allow for precise write and read steps to form multiple state variables, and these steps can be performed electrically. Such an approach allows for multilevel, high density memory systems with enhanced information storage capacity and simplified information retrieval.
State Space Model with hidden variables for reconstruction of gene regulatory networks.
Wu, Xi; Li, Peng; Wang, Nan; Gong, Ping; Perkins, Edward J; Deng, Youping; Zhang, Chaoyang
2011-01-01
State Space Model (SSM) is a relatively new approach to inferring gene regulatory networks. It requires less computational time than Dynamic Bayesian Networks (DBN). There are two types of variables in the linear SSM, observed variables and hidden variables. SSM uses an iterative method, namely Expectation-Maximization, to infer regulatory relationships from microarray datasets. The hidden variables cannot be directly observed from experiments. How to determine the number of hidden variables has a significant impact on the accuracy of network inference. In this study, we used SSM to infer gene regulatory networks (GRNs) from synthetic time series datasets, investigated Bayesian Information Criterion (BIC) and Principal Component Analysis (PCA) approaches to determining the number of hidden variables in SSM, and evaluated the performance of SSM in comparison with DBN. True GRNs and synthetic gene expression datasets were generated using GeneNetWeaver. Both DBN and linear SSM were used to infer GRNs from the synthetic datasets. The inferred networks were compared with the true networks. Our results show that inference precision varied with the number of hidden variables. For some regulatory networks, the inference precision of DBN was higher but SSM performed better in other cases. Although the overall performance of the two approaches is comparable, SSM is much faster and capable of inferring much larger networks than DBN. This study provides useful information in handling the hidden variables and improving the inference precision.
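A linear SSM of the kind described, with hidden regulatory activities evolving linearly and observed through noisy expression levels, can be simulated in a few lines. The matrices and dimensions here are invented; fitting them via Expectation-Maximization is the (much harder) inference step.

```python
import random

# Minimal linear state-space simulation: 2 hidden regulatory variables
# drive 3 observed gene expression levels. A and C are invented.
A = [[0.9, 0.1], [0.0, 0.8]]                  # hidden-state dynamics
C = [[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]]      # observation (loading) matrix

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

rng = random.Random(1)
h = [1.0, -1.0]                               # initial hidden activities
ys = []
for _ in range(10):                           # 10 time points
    ys.append([c + rng.gauss(0, 0.05) for c in matvec(C, h)])
    h = matvec(A, h)
print(len(ys), len(ys[0]))
```

EM would treat `ys` as the microarray data and estimate A, C, the noise levels, and the hidden trajectories; the choice of hidden dimension (here 2) is exactly what BIC or PCA is used to decide.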
NASA Astrophysics Data System (ADS)
Moll, Andreas; Stegert, Christoph
2007-01-01
This paper outlines an approach to coupling a structured zooplankton population model, with state variables for eggs, nauplii, two copepodite stages, and adults, adapted to Pseudocalanus elongatus, into the complex marine ecosystem model ECOHAM2, whose 13 state variables resolve the carbon and nitrogen cycles. Different temperature and food scenarios derived from laboratory culture studies were examined to improve the process parameterisation for copepod stage-dependent development processes. To study annual cycles under realistic weather and hydrographic conditions, the coupled ecosystem-zooplankton model is applied to a water column in the northern North Sea. The main ecosystem state variables were validated against observed monthly mean values. Then vertical profiles of selected state variables were compared to the physical forcing to study differences between zooplankton treated as one biomass state variable or partitioned into five population state variables. Simulated generation times are more affected by temperature than by food conditions, except during the spring phytoplankton bloom. Up to six generations within the annual cycle can be discerned in the simulation.
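The stage-structured partitioning can be sketched as a five-pool graduation chain with a temperature-dependent development rate. The rate function and constants below are invented placeholders, not the paper's parameterisation.

```python
# Toy stage-structured copepod module: five pools with temperature-
# dependent graduation and adult egg production. Rates are invented.
STAGES = ["eggs", "nauplii", "copepodites_1", "copepodites_2", "adults"]

def dev_rate(temp_C):
    """Assumed Q10-like development rate (per day)."""
    return 0.05 * 1.8 ** ((temp_C - 10.0) / 10.0)

def step(pools, temp_C, egg_rate=0.1, dt=1.0):
    r = dev_rate(temp_C)
    new = pools[:]
    for i in range(len(pools) - 1):       # adults do not graduate further
        flux = r * pools[i] * dt
        new[i] -= flux
        new[i + 1] += flux
    new[0] += egg_rate * pools[-1] * dt   # adults spawn new eggs
    return new

pools = [100.0, 0.0, 0.0, 0.0, 0.0]
for _ in range(120):                      # 120 days at 12 degC
    pools = step(pools, temp_C=12.0)
print([round(p, 1) for p in pools])
```

In the coupled model each pool would additionally exchange carbon and nitrogen with the other ECOHAM2 state variables (grazing, mortality, excretion), which this sketch omits.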
A State-by-State Analysis of Laws Dealing With Driving Under the Influence of Drugs
DOT National Transportation Integrated Search
2009-12-01
This study reviewed each State statute regarding drug-impaired driving as of December 2008. There is a high degree of variability across the States in the ways they approach drug-impaired driving. Current laws in many States contain provisions ma...
Continuous-variable teleportation of a negative Wigner function
NASA Astrophysics Data System (ADS)
Mišta, Ladislav, Jr.; Filip, Radim; Furusawa, Akira
2010-07-01
Teleportation is a basic primitive for quantum communication and quantum computing. We address the problem of continuous-variable (unconditional and conditional) teleportation of a pure single-photon state and a mixed attenuated single-photon state generally in a nonunity-gain regime. Our figure of merit is the maximum negativity of the Wigner function, which demonstrates a highly nonclassical feature of the teleported state. We find that the negativity of the Wigner function of the single-photon state can be unconditionally teleported for an arbitrarily weak squeezed state used to create the entangled state shared in teleportation. In contrast, for the attenuated single-photon state there is a strict threshold squeezing one has to surpass to successfully teleport the negativity of its Wigner function. The conditional teleportation allows one to approach perfect transmission of the single photon for an arbitrarily low squeezing at a cost of decrease of the success rate. In contrast, for the attenuated single photon state, conditional teleportation cannot overcome the squeezing threshold of the unconditional teleportation and it approaches negativity of the input state only if the squeezing increases simultaneously. However, as soon as the threshold squeezing is surpassed, conditional teleportation still pronouncedly outperforms the unconditional one. The main consequences for quantum communication and quantum computing with continuous variables are discussed.
ERIC Educational Resources Information Center
Spada, Marcantonio M.; Moneta, Giovanni B.
2014-01-01
The objective of this study was to verify the structure of a model of how surface approach to studying is influenced by the trait variables of motivation and metacognition and the state variables of avoidance coping and evaluation anxiety. We extended the model to include: (1) the investigation of the relative contribution of the five…
Discovering Free Energy Basins for Macromolecular Systems via Guided Multiscale Simulation
Sereda, Yuriy V.; Singharoy, Abhishek B.; Jarrold, Martin F.; Ortoleva, Peter J.
2012-01-01
An approach for the automated discovery of low free energy states of macromolecular systems is presented. The method does not involve delineating the entire free energy landscape but proceeds in a sequential free energy minimizing state discovery, i.e., it first discovers one low free energy state and then automatically seeks a distinct neighboring one. These states and the associated ensembles of atomistic configurations are characterized by coarse-grained variables capturing the large-scale structure of the system. A key facet of our approach is the identification of such coarse-grained variables. Evolution of these variables is governed by Langevin dynamics driven by thermal-average forces and mediated by diffusivities, both of which are constructed by an ensemble of short molecular dynamics runs. In the present approach, the thermal-average forces are modified to account for the entropy changes following from our knowledge of the free energy basins already discovered. Such forces guide the system away from the known free energy minima, over free energy barriers, and to a new one. The theory is demonstrated for lactoferrin, known to have multiple energy-minimizing structures. The approach is validated using experimental structures and traditional molecular dynamics. The method can be generalized to enable the interpretation of nanocharacterization data (e.g., ion mobility – mass spectrometry, atomic force microscopy, chemical labeling, and nanopore measurements). PMID:22423635
Space shuttle propulsion parameter estimation using optimal estimation techniques
NASA Technical Reports Server (NTRS)
1983-01-01
The first twelve system state variables are presented with the necessary mathematical developments for incorporating them into the filter/smoother algorithm. Other state variables, i.e., aerodynamic coefficients can be easily incorporated into the estimation algorithm, representing uncertain parameters, but for initial checkout purposes are treated as known quantities. An approach for incorporating the NASA propulsion predictive model results into the optimal estimation algorithm was identified. This approach utilizes numerical derivatives and nominal predictions within the algorithm with global iterations of the algorithm. The iterative process is terminated when the quality of the estimates provided no longer significantly improves.
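The numerical-derivative idea, linearizing a black-box predictive model about the nominal state so it can enter the filter/smoother, can be sketched with central differences. The model function here is a simple stand-in, not the NASA propulsion model.

```python
# Central-difference Jacobian of a black-box state-propagation model,
# as needed to linearize it inside a filter/smoother iteration.
def model(state):                  # placeholder dynamics: x' = x + 0.1 v, v' = 0.98 v
    x, v = state
    return [x + 0.1 * v, 0.98 * v]

def jacobian(f, state, eps=1e-6):
    m = len(f(state))
    J = []
    for i in range(m):
        row = []
        for j in range(len(state)):
            hi = list(state); hi[j] += eps
            lo = list(state); lo[j] -= eps
            row.append((f(hi)[i] - f(lo)[i]) / (2 * eps))
        J.append(row)
    return J

J = jacobian(model, [1.0, 2.0])
print([[round(v, 6) for v in row] for row in J])
```

Re-evaluating this Jacobian at each nominal prediction and iterating the filter globally, as the abstract describes, avoids needing analytic derivatives of the predictive model.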
Latent variable method for automatic adaptation to background states in motor imagery BCI
NASA Astrophysics Data System (ADS)
Dagaev, Nikolay; Volkova, Ksenia; Ossadtchi, Alexei
2018-02-01
Objective. Brain-computer interface (BCI) systems are known to be vulnerable to variability in the background states of a user. Usually, no detailed information on these states is available, even during the training stage. Thus there is a need for a method capable of taking background states into account in an unsupervised way. Approach. We propose a latent variable method that is based on a probabilistic model with a discrete latent variable. In order to estimate the model’s parameters, we suggest using the expectation maximization algorithm. The proposed method is aimed at assessing characteristics of background states without any corresponding data labeling. In the context of an asynchronous motor imagery paradigm, we applied this method to real data from twelve able-bodied subjects, with open/closed eyes serving as background states. Main results. We found that the latent variable method improved classification of target states compared to the baseline method (in seven of twelve subjects). In addition, we found that our method was also capable of background state recognition (in six of twelve subjects). Significance. Without any supervised information on background states, the latent variable method provides a way to improve classification in BCI by taking background states into account at the training stage and then by making decisions on target states weighted by the posterior probabilities of background states at the prediction stage.
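The expectation-maximization machinery with a discrete latent variable can be illustrated on the simplest case, a two-component 1-D Gaussian mixture, where the component index plays the role of the unlabeled background state. The actual BCI model is richer, so this is only a structural sketch.

```python
import math
import random

# Minimal EM for a two-component 1-D Gaussian mixture: E-step computes
# posterior responsibilities of the discrete latent state; M-step
# re-estimates means, spreads, and mixing weights.
def em(data, iters=60):
    mu = [min(data), max(data)]
    sigma, mix = [1.0, 1.0], [0.5, 0.5]
    for _ in range(iters):
        resp = []                                     # E-step
        for x in data:
            w = [mix[k] / (sigma[k] * math.sqrt(2 * math.pi))
                 * math.exp(-0.5 * ((x - mu[k]) / sigma[k]) ** 2)
                 for k in (0, 1)]
            s = sum(w)
            resp.append([wk / s for wk in w])
        for k in (0, 1):                              # M-step
            nk = sum(r[k] for r in resp)
            mix[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            sigma[k] = max(1e-3, math.sqrt(
                sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk))
    return mu, sigma, mix

rng = random.Random(0)
data = [rng.gauss(0, 1) for _ in range(200)] + [rng.gauss(5, 1) for _ in range(200)]
mu, sigma, mix = em(data)
print([round(m, 2) for m in sorted(mu)])
```

The posterior responsibilities computed in the E-step correspond to the background-state probabilities that weight the target-state decisions at prediction time.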
Miller, William H.; Cotton, Stephen J.
2016-08-28
It is pointed out that the classical phase space distribution in action-angle (a-a) variables obtained from a Wigner function depends on how the calculation is carried out: if one computes the standard Wigner function in Cartesian variables (p, x), and then replaces p and x by their expressions in terms of a-a variables, one obtains a different result than if the Wigner function is computed directly in terms of the a-a variables. Furthermore, the latter procedure gives a result more consistent with classical and semiclassical theory - e.g., by incorporating the Bohr-Sommerfeld quantization condition (quantum states defined by integer values of the action variable) as well as the Heisenberg correspondence principle for matrix elements of an operator between such states - and has also been shown to be more accurate when applied to electronically non-adiabatic applications as implemented within the recently developed symmetrical quasi-classical (SQC) Meyer-Miller (MM) approach. Moreover, use of the Wigner function (obtained directly) in a-a variables shows how our standard SQC/MM approach can be used to obtain off-diagonal elements of the electronic density matrix by processing in a different way the same set of trajectories already used (in the SQC/MM methodology) to obtain the diagonal elements.
A State-Variable Approach for Predicting the Time Required for 50% Recrystallization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stout, M.; et al.
2000-08-01
It is important to be able to model the recrystallization kinetics in aluminum alloys during hot deformation. The industrially relevant process of hot rolling is an example where knowing whether or not a material recrystallizes is critical to making a product with the correct properties. Classically, the equations that describe the kinetics of recrystallization predict the time to 50% recrystallization. These equations are largely empirical; they are based on the free energy for recrystallization and a Zener-Hollomon parameter, and have several adjustable exponents to fit the equation to engineering data. We have modified this form of the classical theory, replacing the Zener-Hollomon parameter with a deformation energy increment, a free energy available to drive recrystallization. The advantage of this formulation is that the deformation energy increment is calculated from the previously determined temperature and strain-rate sensitivity of the constitutive response. We modeled the constitutive response of AA5182 aluminum using a state variable approach, in which the value of the state variable is a function of the temperature and strain-rate history of deformation. Thus, the recrystallization kinetics is a function of only the state variable and the free energy for recrystallization. There are no adjustable exponents as in the classical theory. Using this approach combined with engineering recrystallization data we have been able to predict the kinetics of recrystallization in AA5182 as a function of deformation strain rate and temperature.
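The classical empirical form referred to above can be sketched as follows, assuming a commonly used structure t50 = C * strain^(-a) * Z^(-b) * exp(Q_rex / RT) with the Zener-Hollomon parameter Z; every constant below is an illustrative placeholder, not a fitted AA5182 value:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def zener_hollomon(strain_rate, T, Q_def):
    """Temperature-compensated strain rate Z = eps_dot * exp(Q_def / RT)."""
    return strain_rate * math.exp(Q_def / (R * T))

def t50_classical(strain, strain_rate, T,
                  C=1e-8, a=2.0, b=0.8, Q_def=156e3, Q_rex=183e3):
    """Empirical time to 50% recrystallization (all constants illustrative)."""
    Z = zener_hollomon(strain_rate, T, Q_def)
    return C * strain ** (-a) * Z ** (-b) * math.exp(Q_rex / (R * T))
```

The state-variable reformulation in the abstract replaces Z here by a deformation energy increment computed from the constitutive response, removing the adjustable exponents.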
Discrete optimal control approach to a four-dimensional guidance problem near terminal areas
NASA Technical Reports Server (NTRS)
Nagarajan, N.
1974-01-01
Description of a computer-oriented technique to generate the necessary control inputs to guide an aircraft in a given time from a given initial state to a prescribed final state subject to the constraints on airspeed, acceleration, and pitch and bank angles of the aircraft. A discrete-time mathematical model requiring five state variables and three control variables is obtained, assuming steady wind and zero sideslip. The guidance problem is posed as a discrete nonlinear optimal control problem with a cost functional of Bolza form. A solution technique for the control problem is investigated, and numerical examples are presented. It is believed that this approach should prove to be useful in automated air traffic control schemes near large terminal areas.
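In the usual notation of discrete optimal control (symbols assumed here, not taken from the paper), a cost functional of Bolza form combines a terminal term with a running sum over the trajectory:

```latex
J \;=\; \phi(x_N) \;+\; \sum_{k=0}^{N-1} L(x_k, u_k, k),
\qquad x_{k+1} = f(x_k, u_k, k)
```

with x_k the five-dimensional state and u_k the three-dimensional control of the aircraft model, minimized subject to the airspeed, acceleration, pitch, and bank angle constraints.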
Relating Neuronal to Behavioral Performance: Variability of Optomotor Responses in the Blowfly
Rosner, Ronny; Warzecha, Anne-Kathrin
2011-01-01
Behavioral responses of an animal vary even when they are elicited by the same stimulus. This variability is due to stochastic processes within the nervous system and to the changing internal states of the animal. To what extent does the variability of neuronal responses account for the overall variability at the behavioral level? To address this question we evaluate the neuronal variability at the output stage of the blowfly's (Calliphora vicina) visual system by recording from motion-sensitive interneurons mediating head optomotor responses. By means of a simple modelling approach representing the sensory-motor transformation, we predict head movements on the basis of the recorded responses of motion-sensitive neurons and compare the variability of the predicted head movements with that of the observed ones. Large gain changes of optomotor head movements have previously been shown to go along with changes in the animals' activity state. Our modelling approach substantiates that these gain changes are imposed downstream of the motion-sensitive neurons of the visual system. Moreover, since predicted head movements are clearly more reliable than those actually observed, we conclude that substantial variability is introduced downstream of the visual system. PMID:22066014
Choice of Variables and Preconditioning for Time Dependent Problems
NASA Technical Reports Server (NTRS)
Turkel, Eli; Vatsa, Veer N.
2003-01-01
We consider the use of low speed preconditioning for time dependent problems. These are solved using a dual time step approach. We consider the effect of this dual time step on the parameter of the low speed preconditioning. In addition, we compare the use of two sets of variables, conservation and primitive variables, to solve the system. We show the effect of these choices on both the convergence to a steady state and the accuracy of the numerical solutions for low Mach number steady state and time dependent flows.
NASA Astrophysics Data System (ADS)
Mousavi, Seyed Jamshid; Mahdizadeh, Kourosh; Afshar, Abbas
2004-08-01
Application of stochastic dynamic programming (SDP) models to reservoir optimization calls for the discretization of state variables. The discretization of reservoir storage volume, an important state variable, has a pronounced effect on the computational effort. The error caused by storage volume discretization is examined by treating storage as a fuzzy state variable. In this approach, the point-to-point transitions between storage volumes at the beginning and end of each period are replaced by transitions between storage intervals. This is achieved by using fuzzy arithmetic operations with fuzzy numbers: instead of aggregating single-valued crisp numbers, the membership functions of fuzzy numbers are combined. Running a simulation model with optimal release policies derived from fuzzy and non-fuzzy SDP models shows that a fuzzy SDP with a coarse discretization scheme performs as well as a classical SDP having a much finer discretized space. It is believed that this advantage of the fuzzy SDP model is due to the smooth transitions between storage intervals, which benefit from soft boundaries.
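The fuzzy arithmetic ingredient can be sketched with triangular fuzzy numbers, whose addition combines supports and peaks componentwise and whose alpha-cuts give soft storage intervals (the representation and the numbers are illustrative, not from the paper):

```python
def alpha_cut(tfn, alpha):
    """Interval of membership >= alpha for a triangular fuzzy number (a, m, b)."""
    a, m, b = tfn
    return (a + alpha * (m - a), b - alpha * (b - m))

def fuzzy_add(p, q):
    """Standard addition of triangular fuzzy numbers (extension principle)."""
    return tuple(pi + qi for pi, qi in zip(p, q))

# e.g. a storage interval "about 20" combined with an inflow "about 5"
storage = (15.0, 20.0, 25.0)
inflow = (3.0, 5.0, 7.0)
end_storage = fuzzy_add(storage, inflow)
```

Transitions between such soft intervals, rather than between crisp discretized volumes, are what smooths the SDP value iteration described above.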
The equal combination synchronization of a class of chaotic systems with discontinuous output
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luo, Runzi; Zeng, Yanhui
This paper investigates the equal combination synchronization of a class of chaotic systems. It is assumed that only the output state variable of each chaotic system is available and that this output may be a discontinuous state variable. By constructing proper observers, some novel criteria for equal combination synchronization are proposed. The Lorenz chaotic system is taken as an example to demonstrate the efficiency of the proposed approach.
An improved maximum power point tracking method for a photovoltaic system
NASA Astrophysics Data System (ADS)
Ouoba, David; Fakkar, Abderrahim; El Kouari, Youssef; Dkhichi, Fayrouz; Oukarfi, Benyounes
2016-06-01
In this paper, an improved auto-scaling variable step-size Maximum Power Point Tracking (MPPT) method for a photovoltaic (PV) system is proposed. To achieve simultaneously a fast dynamic response and stable steady-state power, a first improvement is made to the step-size scaling function of the duty cycle that controls the converter. An algorithm is then proposed to address the wrong decisions that may be made at an abrupt change of irradiance. The proposed auto-scaling variable step-size approach is compared with other approaches from the literature: classical fixed step-size, variable step-size, and a recent auto-scaling variable step-size MPPT approach. Simulation results obtained with MATLAB/Simulink are presented and discussed for validation.
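The core idea of a variable step-size perturb-and-observe tracker can be sketched as follows; the toy power curve, the scaling constant, and the update rule are illustrative assumptions, not the paper's algorithm:

```python
def pv_power(duty):
    """Toy concave power-vs-duty-cycle curve with its maximum at duty = 0.5."""
    return max(0.0, 1.0 - (duty - 0.5) ** 2)

def mppt_variable_step(p_of_d, d0=0.10, scale=0.05, steps=300):
    """Variable step-size perturb-and-observe: the duty-cycle step is scaled
    by the observed slope |dP/dD|, so it shrinks automatically near the MPP."""
    d_prev, d = d0, d0 + 0.02
    p_prev = p_of_d(d_prev)
    for _ in range(steps):
        p = p_of_d(d)
        dp, dd = p - p_prev, d - d_prev
        step = scale * abs(dp / dd) if dd != 0.0 else 0.0
        direction = 1.0 if dp * dd > 0 else -1.0   # move uphill in power
        d_prev, p_prev = d, p
        d = min(max(d + direction * step, 0.0), 1.0)
    return d

d_mpp = mppt_variable_step(pv_power)
```

Because the step is proportional to the local slope, the update behaves like gradient ascent: large steps far from the maximum power point, vanishing steps at it, which is the fast-transient/stable-steady-state trade-off the abstract targets.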
Predictive value of EEG in postanoxic encephalopathy: A quantitative model-based approach.
Efthymiou, Evdokia; Renzel, Roland; Baumann, Christian R; Poryazova, Rositsa; Imbach, Lukas L
2017-10-01
The majority of comatose patients after cardiac arrest do not regain consciousness due to severe postanoxic encephalopathy. Early and accurate outcome prediction is therefore essential in determining further therapeutic interventions. The electroencephalogram is a standardized and commonly available tool used to estimate prognosis in postanoxic patients. The identification of pathological EEG patterns with poor prognosis, however, relies primarily on visual EEG scoring by experts. We introduced a model-based approach of EEG analysis (state space model) that allows for an objective and quantitative description of spectral EEG variability. We retrospectively analyzed standard EEG recordings in 83 comatose patients after cardiac arrest between 2005 and 2013 in the intensive care unit of the University Hospital Zürich. Neurological outcome was assessed one month after cardiac arrest using the Cerebral Performance Category. For a dynamic and quantitative EEG analysis, we implemented a model-based approach (state space analysis) to quantify EEG background variability independent from visual scoring of EEG epochs. Spectral variability was compared between groups and correlated with clinical outcome parameters and visual EEG patterns. Quantitative assessment of spectral EEG variability (state space velocity) revealed significant differences between patients with poor and good outcome after cardiac arrest: lower mean velocity in temporal electrodes (T4 and T5) was significantly associated with poor prognostic outcome (p<0.005) and correlated with independently identified visual EEG patterns such as generalized periodic discharges (p<0.02). Receiver operating characteristic (ROC) analysis confirmed the predictive value of lower state space velocity for poor clinical outcome after cardiac arrest (AUC 80.8, 70% sensitivity, 15% false positive rate).
Model-based quantitative EEG analysis (state space analysis) provides a novel, complementary marker for prognosis in postanoxic encephalopathy. Copyright © 2017 Elsevier B.V. All rights reserved.
A partial Hamiltonian approach for current value Hamiltonian systems
NASA Astrophysics Data System (ADS)
Naz, R.; Mahomed, F. M.; Chaudhry, Azam
2014-10-01
We develop a partial Hamiltonian framework to obtain reductions and closed-form solutions, via first integrals, of current value Hamiltonian systems of ordinary differential equations (ODEs). The approach is algorithmic and applies to current value Hamiltonians with many state and costate variables, although we apply the method to models with one control, one state and one costate variable to illustrate its effectiveness. Current value Hamiltonian systems arise in economic growth theory and other economic models. We explain our approach with the help of a simple illustrative example and then apply it to two widely used economic growth models: the Ramsey model with a constant relative risk aversion (CRRA) utility function and Cobb-Douglas technology, and a one-sector AK model of endogenous growth. We show that our newly developed systematic approach can be used to deduce results given in the literature and also to find new solutions.
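For orientation, the standard current value Hamiltonian setup with discount rate rho reads (notation assumed here, not taken from the paper):

```latex
H_c(x, u, \lambda) = L(x, u) + \lambda\, f(x, u), \qquad
\dot{x} = \frac{\partial H_c}{\partial \lambda}, \qquad
\dot{\lambda} = \rho\,\lambda - \frac{\partial H_c}{\partial x}
```

together with the maximization condition \(\partial H_c / \partial u = 0\) along an optimal path; it is this state-costate ODE system whose first integrals the partial Hamiltonian framework seeks.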
Coarse-Grained Clustering Dynamics of Heterogeneously Coupled Neurons.
Moon, Sung Joon; Cook, Katherine A; Rajendran, Karthikeyan; Kevrekidis, Ioannis G; Cisternas, Jaime; Laing, Carlo R
2015-12-01
The formation of oscillating phase clusters in a network of identical Hodgkin-Huxley neurons is studied, along with their dynamic behavior. The neurons are synaptically coupled in an all-to-all manner, yet the synaptic coupling characteristic time is heterogeneous across the connections. In a network of N neurons where this heterogeneity is characterized by a prescribed random variable, the oscillatory single-cluster state can transition, through [Formula: see text] (possibly perturbed) period-doubling and subsequent bifurcations, to a variety of multiple-cluster states. The clustering dynamic behavior is computationally studied both at the detailed and the coarse-grained levels, and a numerical approach that can enable studying the coarse-grained dynamics in a network of arbitrarily large size is suggested. Among the cluster states formed, double clusters composed of nearly equal sub-network sizes are seen to be stable; interestingly, the heterogeneity parameter in each of the double-cluster components tends to be consistent with the random variable over the entire network: given a double-cluster state, permuting the dynamical variables of the neurons can lead to a combinatorially large number of different, yet similar "fine" states that appear practically identical at the coarse-grained level. For weak heterogeneity we find that correlations rapidly develop, within each cluster, between the neuron's "identity" (its own value of the heterogeneity parameter) and its dynamical state. For single- and double-cluster states we demonstrate an effective coarse-graining approach that uses the Polynomial Chaos expansion to succinctly describe the dynamics by these quickly established "identity-state" correlations. This coarse-graining approach is utilized, within the equation-free framework, to perform efficient computations of the neuron ensemble dynamics.
REGIONAL LAKE TROPHIC PATTERNS IN THE NORTHEASTERN UNITED STATES: THREE APPROACHES
During the summers of 1991-1994, the Environmental Monitoring and Assessment Program (EMAP) conducted variable probability sampling on 344 lakes throughout the northeastern United States. Trophic state data were analyzed for the Northeast as a whole and for each of its three major...
Constrained Kalman Filtering Via Density Function Truncation for Turbofan Engine Health Estimation
NASA Technical Reports Server (NTRS)
Simon, Dan; Simon, Donald L.
2006-01-01
Kalman filters are often used to estimate the state variables of a dynamic system. However, in the application of Kalman filters some known signal information is often either ignored or dealt with heuristically. For instance, state variable constraints (which may be based on physical considerations) are often neglected because they do not fit easily into the structure of the Kalman filter. This paper develops an analytic method of incorporating state variable inequality constraints in the Kalman filter. The resultant filter truncates the PDF (probability density function) of the Kalman filter estimate at the known constraints and then computes the constrained filter estimate as the mean of the truncated PDF. The incorporation of state variable constraints increases the computational effort of the filter but significantly improves its estimation accuracy. The improvement is demonstrated via simulation results obtained from a turbofan engine model. The turbofan engine model contains 3 state variables, 11 measurements, and 10 component health parameters. It is also shown that the truncated Kalman filter may be a more accurate way of incorporating inequality constraints than other constrained filters (e.g., the projection approach to constrained filtering).
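For a single scalar state with Gaussian estimate N(mu, sigma^2) and an interval constraint [lo, hi], the truncation step described above reduces to the standard truncated-normal moments; a minimal sketch (the turbofan model itself is not reproduced here):

```python
import math

def _pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def _cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def truncate_estimate(mu, sigma, lo, hi):
    """Mean and variance of a Gaussian estimate truncated to [lo, hi];
    the truncated mean becomes the constrained filter estimate."""
    a, b = (lo - mu) / sigma, (hi - mu) / sigma
    z = _cdf(b) - _cdf(a)
    mean = mu + sigma * (_pdf(a) - _pdf(b)) / z
    var = sigma ** 2 * (1.0 + (a * _pdf(a) - b * _pdf(b)) / z
                        - ((_pdf(a) - _pdf(b)) / z) ** 2)
    return mean, var

# unconstrained estimate mu = 0, sigma = 1, with a nonnegativity constraint
m, v = truncate_estimate(0.0, 1.0, 0.0, 100.0)
```

In a full filter, this truncation would be applied after each measurement update, feeding the truncated mean and variance into the next prediction step.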
Sliding mode control for Mars entry based on extended state observer
NASA Astrophysics Data System (ADS)
Lu, Kunfeng; Xia, Yuanqing; Shen, Ganghui; Yu, Chunmei; Zhou, Liuyu; Zhang, Lijun
2017-11-01
This paper addresses a high-precision Mars entry guidance and control approach based on sliding mode control (SMC) and an Extended State Observer (ESO). First, the differential flatness (DF) approach is applied to the dynamic equations of the entry vehicle to represent the state variables more conveniently. Then, the presented SMC law guarantees finite-time convergence of the tracking error and requires no information on the large uncertainties, which are estimated by the ESO; a rigorous proof of tracking error convergence is given. Finally, Monte Carlo simulation results are presented to demonstrate the effectiveness of the suggested approach.
Stramaglia, Sebastiano; Angelini, Leonardo; Wu, Guorong; Cortes, Jesus M; Faes, Luca; Marinazzo, Daniele
2016-12-01
We develop a framework for the analysis of synergy and redundancy in the pattern of information flow between subsystems of a complex network. The presence of redundancy and/or synergy in multivariate time series data makes it difficult to estimate the net flow of information from each driver variable to a given target. We show that, by adopting an unnormalized definition of Granger causality, one can put in evidence redundant multiplets of variables influencing the target by maximizing the total Granger causality to a given target over all the possible partitions of the set of driving variables. Consequently, we introduce a pairwise index of synergy which is zero when two independent sources additively influence the future state of the system, unlike previous definitions of synergy. We report the application of the proposed approach to resting state functional magnetic resonance imaging data from the Human Connectome Project, showing that redundant pairs of regions arise mainly due to spatial contiguity and interhemispheric symmetry, while synergy occurs mainly between nonhomologous pairs of regions in opposite hemispheres. Redundancy and synergy, in healthy resting brains, display characteristic patterns revealed by the proposed approach. The pairwise synergy index introduced here maps the informational character of the system at hand into a weighted complex network: the same approach can be applied to other complex systems whose normal state corresponds to a balance between redundant and synergetic circuits.
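A minimal sketch of an unnormalized Granger causality estimate, taken as the raw drop in residual variance rather than a log-ratio; model order 1 and the synthetic data are illustrative simplifications of the paper's multivariate setting:

```python
import numpy as np

def gc_unnormalized(src, tgt):
    """Unnormalized Granger causality: the drop in residual variance when the
    lagged driver is added to an AR(1) prediction of the target."""
    y = tgt[1:]
    x_restricted = np.column_stack([np.ones(len(y)), tgt[:-1]])
    x_full = np.column_stack([x_restricted, src[:-1]])

    def residual_var(design):
        beta, *_ = np.linalg.lstsq(design, y, rcond=None)
        return np.var(y - design @ beta)

    return residual_var(x_restricted) - residual_var(x_full)

rng = np.random.default_rng(0)
n = 3000
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.4 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.standard_normal()

gc_xy = gc_unnormalized(x, y)   # strong driving, by construction
gc_yx = gc_unnormalized(y, x)   # should be near zero
```

The redundancy analysis in the abstract would then maximize such totals over partitions of the driver set, which this pairwise sketch does not attempt.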
A State-trait Analysis of Alpha Density and Personality Variables in a Normal Population
ERIC Educational Resources Information Center
Degood, Douglas E.; Valle, Ronald S.
1975-01-01
This paper examined the relationship of some selected trait measures of personality with resting samples of alpha density in a normal population and the implications of such data for a state-trait approach to alpha and the experiential states associated with alpha. (Author/RK)
Analyzing Variability in Ebola-Related Controls Applied to Returned Travelers in the United States
Siedner, Mark J.; Stoto, Michael A.
2015-01-01
Public health authorities have adopted entry screening and subsequent restrictions on travelers from Ebola-affected West African countries as a strategy to prevent importation of Ebola virus disease (EVD) cases. We analyzed international, federal, and state policies—principally based on the policy documents themselves and media reports—to evaluate policy variability. We employed means-ends fit analysis to elucidate policy objectives. We found substantial variation in the specific approaches favored by WHO, CDC, and various American states. Several US states impose compulsory quarantine on a broader range of travelers or require more extensive monitoring than recommended by CDC or WHO. Observed differences likely partially resulted from different actors having different policy goals—particularly the federal government having to balance foreign policy objectives less salient to states. Further, some state-level variation appears to be motivated by short-term political goals. We propose recommendations to improve future policies, which include the following: (1) actors should explicitly clarify their objectives, (2) legal authority should be modernized and clarified, and (3) the federal government should consider preempting state approaches that imperil its goals. PMID:26348222
Zhao, Zhibiao
2011-06-01
We address the nonparametric model validation problem for hidden Markov models with partially observable variables and hidden states. We achieve this goal by constructing a nonparametric simultaneous confidence envelope for the transition density function of the observable variables and checking whether the parametric density estimate is contained within this envelope. Our specification test procedure is motivated by a functional connection between the transition density of the observable variables and the Markov transition kernel of the hidden states. Our approach is applicable to continuous time diffusion models, stochastic volatility models, nonlinear time series models, and models with market microstructure noise.
A System-Oriented Approach for the Optimal Control of Process Chains under Stochastic Influences
NASA Astrophysics Data System (ADS)
Senn, Melanie; Schäfer, Julian; Pollak, Jürgen; Link, Norbert
2011-09-01
Process chains in manufacturing consist of multiple connected processes in terms of dynamic systems. The properties of a product passing through such a process chain are influenced by the transformation of each single process. Various methods exist for the control of individual processes, such as classical state controllers from cybernetics or function mapping approaches realized by statistical learning. These controllers ensure that a desired state is obtained at the process end despite variations in the input and disturbances. The interactions between the single processes are thereby neglected, but play an important role in the optimization of the entire process chain. We divide the overall optimization into two phases: (1) the solution of the optimization problem by Dynamic Programming to find the optimal control variable values for each process for any encountered end state of its predecessor, and (2) the application of the optimal control variables at runtime for the detected initial process state. The optimization problem is solved by selecting adequate control variables for each process in the chain backwards, based on predefined quality requirements for the final product. For the demonstration of the proposed concept, we have chosen a process chain from sheet metal manufacturing with simplified transformation functions.
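The backward phase (1) can be sketched as tabular dynamic programming over a discretized state space; the process models, state grid, and cost function below are placeholders, not the sheet metal chain of the paper:

```python
def nearest(states, value):
    """Snap a continuous successor state onto the discretized grid."""
    return min(states, key=lambda s: abs(s - value))

def solve_chain(transitions, states, controls, terminal_cost):
    """Backward dynamic programming over a chain of processes.
    transitions: one function f(state, control) -> next state per process.
    Returns the value table at the chain entry and one policy per process."""
    value = {s: terminal_cost(s) for s in states}
    policies = []
    for f in reversed(transitions):
        new_value, policy = {}, {}
        for s in states:
            best_u = min(controls, key=lambda u: value[nearest(states, f(s, u))])
            policy[s] = best_u
            new_value[s] = value[nearest(states, f(s, best_u))]
        value, policies = new_value, [policy] + policies
    return value, policies

# toy chain: two processes that shift a scalar product property by the control
states = list(range(11))
controls = [-1, 0, 1]
shift = lambda s, u: s + u
value, policies = solve_chain([shift, shift], states, controls,
                              lambda s: abs(s - 5))
```

At runtime, phase (2) amounts to looking up `policies[i][detected_state]` for each process i, which is what makes any encountered predecessor end state admissible.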
Scheidegger, Stephan; Fuchs, Hans U; Zaugg, Kathrin; Bodis, Stephan; Füchslin, Rudolf M
2013-01-01
In order to overcome the limitations of the linear-quadratic model and include synergistic effects of heat and radiation, a novel radiobiological model is proposed. The model is based on a chain of cell populations which are characterized by the number of radiation-induced damages (hits). Cells can shift downward along the chain by collecting hits and upward by a repair process. The repair process is governed by a repair probability which depends upon state variables used for a simplistic description of the impact of heat and radiation upon repair proteins. Based on the parameters used, populations with up to 4-5 hits are relevant for the calculation of survival. The model intuitively describes the mathematical behaviour of apoptotic and nonapoptotic cell death. Linear-quadratic-linear behaviour of the logarithmic cell survival, fractionation, and (with one exception) the dose rate dependencies are described correctly. The model covers the time gap dependence of the synergistic cell killing due to combined application of heat and radiation, but further validation of the proposed approach based on experimental data is needed. However, the model offers a work bench for testing different biological concepts of damage induction, repair, and statistical approaches for calculating the state variables.
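The population chain can be sketched as a system of rate equations integrated in time, with downward flow at the hit rate and upward flow at the repair rate; the rates, chain length, and fixed-rate simplification below are illustrative placeholders, not the paper's fitted parameters:

```python
def evolve_hit_chain(rate_hit, rate_repair, n_max=5, t_end=10.0, dt=0.001):
    """Euler integration of a chain of cell populations indexed by hit count.
    pop[k] is the fraction of cells carrying k hits (illustrative rates)."""
    pop = [1.0] + [0.0] * n_max   # all cells start with zero hits
    for _ in range(int(t_end / dt)):
        flow_down = [rate_hit * pop[k] for k in range(n_max)]        # k -> k+1
        flow_up = [rate_repair * pop[k + 1] for k in range(n_max)]   # k+1 -> k
        new = pop[:]
        for k in range(n_max):
            new[k] += dt * (-flow_down[k] + flow_up[k])
            new[k + 1] += dt * (flow_down[k] - flow_up[k])
        pop = new
    return pop

pop = evolve_hit_chain(1.0, 0.5)
```

In the full model the repair probability is not constant but driven by the heat- and radiation-dependent state variables, which is where the synergy enters.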
State variable theories based on Hart's formulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Korhonen, M.A.; Hannula, S.P.; Li, C.Y.
In this paper a review of the development of a state variable theory for nonelastic deformation is given. The physical and phenomenological basis of the theory and the constitutive equations describing macroplastic, microplastic, anelastic and grain boundary sliding enhanced deformation are presented. The experimental and analytical evaluation of different parameters in the constitutive equations are described in detail followed by a review of the extensive experimental work on different materials. The technological aspects of the state variable approach are highlighted by examples of the simulative and predictive capabilities of the theory. Finally, a discussion of general capabilities, limitations and future developments of the theory and particularly the possible extensions to cover an even wider range of deformation or deformation-related phenomena is presented.
Markov state modeling of sliding friction
NASA Astrophysics Data System (ADS)
Pellegrini, F.; Landes, François P.; Laio, A.; Prestipino, S.; Tosatti, E.
2016-11-01
Markov state modeling (MSM) has recently emerged as one of the key techniques for the discovery of collective variables and the analysis of rare events in molecular simulations. In particular in biochemistry this approach is successfully exploited to find the metastable states of complex systems and their evolution in thermal equilibrium, including rare events, such as a protein undergoing folding. The physics of sliding friction and its atomistic simulations under external forces constitute a nonequilibrium field where relevant variables are in principle unknown and where a proper theory describing violent and rare events such as stick slip is still lacking. Here we show that MSM can be extended to the study of nonequilibrium phenomena and in particular friction. The approach is benchmarked on the Frenkel-Kontorova model, used here as a test system whose properties are well established. We demonstrate that the method allows the least prejudiced identification of a minimal basis of natural microscopic variables necessary for the description of the forced dynamics of sliding, through their probabilistic evolution. The steps necessary for the application to realistic frictional systems are highlighted.
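At its core, MSM estimation starts from a row-stochastic transition matrix counted from a discretized trajectory at a chosen lag time; a minimal sketch (the two-state stick/slip labeling is illustrative):

```python
def transition_matrix(traj, n_states):
    """Row-stochastic Markov transition matrix estimated by counting
    observed transitions in a discretized trajectory (lag time = 1 frame)."""
    counts = [[0] * n_states for _ in range(n_states)]
    for s, s_next in zip(traj, traj[1:]):
        counts[s][s_next] += 1
    matrix = []
    for row in counts:
        total = sum(row)
        matrix.append([c / total if total else 1.0 / n_states for c in row])
    return matrix

# toy "stick" (0) / "slip" (1) state sequence from a sliding simulation
P = transition_matrix([0, 0, 1, 0, 1, 1, 0, 0], 2)
```

The nonequilibrium extension in the abstract rests on the same object: the metastable sets and the collective variables are read off from the spectral properties of this matrix.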
State space approach to mixed boundary value problems.
NASA Technical Reports Server (NTRS)
Chen, C. F.; Chen, M. M.
1973-01-01
A state-space procedure for the formulation and solution of mixed boundary value problems is established. This procedure is a natural extension of the method used in initial value problems; however, certain special theorems and rules must be developed. The scope of the applications of the approach includes beam, arch, and axisymmetric shell problems in structural analysis, boundary layer problems in fluid mechanics, and eigenvalue problems for deformable bodies. Many classical methods in these fields developed by Holzer, Prohl, Myklestad, Thomson, Love-Meissner, and others can be either simplified or unified under new light shed by the state-variable approach. A beam problem is included as an illustration.
Walenga, Ross L.; Kaviratna, Anubhav; Hindle, Michael
2017-01-01
Background: Nebulized aerosol drug delivery during the administration of noninvasive positive pressure ventilation (NPPV) is commonly implemented. While studies have shown improved patient outcomes for this therapeutic approach, aerosol delivery efficiency is reported to be low with high variability in lung-deposited dose. Excipient enhanced growth (EEG) aerosol delivery is a newly proposed technique that may improve drug delivery efficiency and reduce intersubject aerosol delivery variability when coupled with NPPV. Materials and Methods: A combined approach using in vitro experiments and computational fluid dynamics (CFD) was used to characterize aerosol delivery efficiency during NPPV in two new nasal cavity models that include face mask interfaces. Mesh nebulizer and in-line dry powder inhaler (DPI) sources of conventional and EEG aerosols were both considered. Results: Based on validated steady-state CFD predictions, EEG aerosol delivery improved lung penetration fraction (PF) values by factors ranging from 1.3 to 6.4 compared with conventional-sized aerosols. Furthermore, intersubject variability in lung PF was very high for conventional aerosol sizes (relative differences between subjects in the range of 54.5%–134.3%) and was reduced by an order of magnitude with the EEG approach (relative differences between subjects in the range of 5.5%–17.4%). Realistic in vitro experiments of cyclic NPPV demonstrated similar trends in lung delivery to those observed with the steady-state simulations, but with lower lung delivery efficiencies. Reaching the lung delivery efficiencies reported with the steady-state simulations of 80%–90% will require synchronization of aerosol administration during inspiration and reducing the size of the EEG aerosol delivery unit. Conclusions: The EEG approach enabled high-efficiency lung delivery of aerosols administered during NPPV and reduced intersubject aerosol delivery variability by an order of magnitude.
Use of an in-line DPI device that connects to the NPPV mask appears to be a convenient method to rapidly administer an EEG aerosol and synchronize the delivery with inspiration. PMID:28075194
Reinterpreting maximum entropy in ecology: a null hypothesis constrained by ecological mechanism.
O'Dwyer, James P; Rominger, Andrew; Xiao, Xiao
2017-07-01
Simplified mechanistic models in ecology have been criticised for the fact that a good fit to data does not imply the mechanism is true: pattern does not equal process. In parallel, the maximum entropy principle (MaxEnt) has been applied in ecology to make predictions constrained by just a handful of state variables, like total abundance or species richness. But an outstanding question remains: what principle tells us which state variables to constrain? Here we attempt to solve both problems simultaneously, by translating a given set of mechanisms into the state variables to be used in MaxEnt, and then using this MaxEnt theory as a null model against which to compare mechanistic predictions. In particular, we identify the sufficient statistics needed to parametrise a given mechanistic model from data and use them as MaxEnt constraints. Our approach isolates exactly what mechanism is telling us over and above the state variables alone. © 2017 John Wiley & Sons Ltd/CNRS.
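As a minimal illustration of MaxEnt constrained by a single state variable, the distribution over abundances {0, ..., n} with a fixed mean is the discrete exponential family p_i proportional to exp(-lam * i), with the Lagrange multiplier lam found numerically; the constraint value and grid below are illustrative:

```python
import math

def maxent_with_mean(n, mean_target, iters=200):
    """Maximum-entropy distribution on {0, ..., n} with a fixed mean,
    found by bisecting on the single Lagrange multiplier."""
    def mean_of(lam):
        w = [math.exp(-lam * i) for i in range(n + 1)]
        z = sum(w)
        return sum(i * wi for i, wi in enumerate(w)) / z

    lo, hi = -5.0, 5.0            # mean_of is decreasing in lam
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mean_of(mid) > mean_target:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(-lam * i) for i in range(n + 1)]
    z = sum(w)
    return [wi / z for wi in w]

p = maxent_with_mean(10, 3.0)
```

Each additional state variable derived from a mechanistic model's sufficient statistics would contribute one more multiplier of this kind to the constrained distribution.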
Segmenting hospitals for improved management strategy.
Malhotra, N K
1989-09-01
The author presents a conceptual framework for the a priori and clustering-based approaches to segmentation and evaluates them in the context of segmenting institutional health care markets. An empirical study is reported in which the hospital market is segmented on three state-of-being variables. The segmentation approach also takes into account important organizational decision-making variables. The sophisticated Thurstone Case V procedure is employed. Several marketing implications for hospitals, other health care organizations, hospital suppliers, and donor publics are identified.
Continuity-based model interfacing for plant-wide simulation: a general approach.
Volcke, Eveline I P; van Loosdrecht, Mark C M; Vanrolleghem, Peter A
2006-08-01
In plant-wide simulation studies of wastewater treatment facilities, existing models of different origins often need to be coupled. However, as these submodels are likely to contain different state variables, their coupling is not straightforward. The continuity-based interfacing method (CBIM) provides a general framework for constructing model interfaces for models of wastewater systems, taking conservation principles into account. In this contribution, the CBIM approach is applied to study the effect of treating sludge digestion reject water with a SHARON-Anammox process on a plant-wide scale. Separate models were available for the SHARON process and for the Anammox process. The Benchmark simulation model no. 2 (BSM2) is used to simulate the behaviour of the complete WWTP, including sludge digestion. The CBIM approach is followed to develop three different model interfaces. At the same time, the generally applicable CBIM approach was further refined, and particular issues that arise when coupling models in which pH is considered a state variable are pointed out.
MODFLOW/MT3DMS-based simulation of variable-density ground water flow and transport
Langevin, C.D.; Guo, W.
2006-01-01
This paper presents an approach for coupling MODFLOW and MT3DMS for the simulation of variable-density ground water flow. MODFLOW routines were modified to solve a variable-density form of the ground water flow equation in which the density terms are calculated using an equation of state and the simulated MT3DMS solute concentrations. Changes to the MODFLOW and MT3DMS input files were kept to a minimum, and thus existing data files and data files created with most pre- and postprocessors can be used directly with the SEAWAT code. The approach was tested by simulating the Henry problem and two of the saltpool laboratory experiments (low- and high-density cases). For the Henry problem, the simulated results compared well with the steady-state semianalytic solution and also the transient isochlor movement as simulated by a finite-element model. For the saltpool problem, the simulated breakthrough curves compared better with the laboratory measurements for the low-density case than for the high-density case but showed good agreement with the measured salinity isosurfaces for both cases. Results from the test cases presented here indicate that the MODFLOW/MT3DMS approach provides accurate solutions for problems involving variable-density ground water flow and solute transport. © 2006 National Ground Water Association.
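The density coupling described here reduces to two small relations, sketched below under common assumptions: a linear equation of state linking density to solute concentration (the slope value is the usual seawater default and is only illustrative), and the conversion of measured head to equivalent freshwater head, the variable solved for in variable-density flow codes.

```python
def density(conc, rho_fresh=1000.0, slope=0.7143):
    """Linear equation of state: fluid density (kg/m^3) as a function
    of solute concentration (kg/m^3).  slope = 0.7143 is the common
    seawater value, giving rho of about 1025 at C = 35 kg/m^3."""
    return rho_fresh + slope * conc

def freshwater_head(head, conc, z, rho_fresh=1000.0):
    """Convert head measured in water of density rho at elevation z to
    equivalent freshwater head."""
    rho = density(conc, rho_fresh)
    return (rho / rho_fresh) * head - ((rho - rho_fresh) / rho_fresh) * z
```

For fresh water the conversion is the identity; for saline water below datum the equivalent freshwater head exceeds the measured head, which is what drives the buoyancy terms.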
Corteville, D M R; Kjørstad, Å; Henzler, T; Zöllner, F G; Schad, L R
2015-05-01
Fourier decomposition (FD) is a noninvasive method for assessing ventilation and perfusion-related information in the lungs. However, the technique has a low signal-to-noise ratio (SNR) in the lung parenchyma. We present an approach to increase the SNR in both morphological and functional images. The data used to create functional FD images are usually acquired using a standard balanced steady-state free precession (bSSFP) sequence. In the standard sequence, the possible range of the flip angle is restricted due to specific absorption rate (SAR) limitations. Thus, using a variable flip angle approach as an optimization is possible. This was validated using measurements from a phantom and six healthy volunteers. The SNR in both the morphological and functional FD images was increased by 32%, while the SAR restrictions were kept unchanged. Furthermore, due to the higher SNR, the effective resolution of the functional images was increased visibly. The variable flip angle approach did not introduce any new transient artifacts, and blurring artifacts were minimized. Both a gain in SNR and an effective resolution gain in functional lung images can be obtained using the FD method in conjunction with a variable flip angle optimized bSSFP sequence. © 2014 Wiley Periodicals, Inc.
Multivariate localization methods for ensemble Kalman filtering
NASA Astrophysics Data System (ADS)
Roh, S.; Jun, M.; Szunyogh, I.; Genton, M. G.
2015-05-01
In ensemble Kalman filtering (EnKF), the small number of ensemble members that is feasible to use in a practical data assimilation application leads to sampling variability of the estimates of the background error covariances. The standard approach to reducing the effects of this sampling variability, which has also been found to be highly efficient in improving the performance of EnKF, is the localization of the estimates of the covariances. One family of localization techniques is based on taking the Schur (entry-wise) product of the ensemble-based sample covariance matrix and a correlation matrix whose entries are obtained by the discretization of a distance-dependent correlation function. While the proper definition of the localization function for a single state variable has been extensively investigated, a rigorous definition of the localization function for multiple state variables has seldom been considered. This paper introduces two strategies for the construction of localization functions for multiple state variables. The proposed localization functions are tested in experiments in which simulated observations are assimilated into the bivariate Lorenz 95 model.
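The Schur-product localization this abstract builds on can be illustrated with the widely used Gaspari-Cohn fifth-order correlation function; the sketch below is a generic single-variable version (our choice of function, not the paper's multivariate constructions).

```python
import numpy as np

def gaspari_cohn(r):
    """Gaspari-Cohn fifth-order piecewise-rational correlation function;
    r is distance scaled by the localization half-width (support is 2)."""
    r = np.abs(np.asarray(r, dtype=float))
    out = np.zeros_like(r)
    m1 = r <= 1.0
    m2 = (r > 1.0) & (r < 2.0)
    x = r[m1]
    out[m1] = -0.25 * x**5 + 0.5 * x**4 + 0.625 * x**3 - (5 / 3) * x**2 + 1.0
    x = r[m2]
    out[m2] = (x**5) / 12 - 0.5 * x**4 + 0.625 * x**3 + (5 / 3) * x**2 \
        - 5.0 * x + 4.0 - (2 / 3) / x
    return out

def localize(sample_cov, dist, half_width):
    """Schur (entry-wise) product of the ensemble sample covariance with
    the distance-dependent correlation matrix."""
    return sample_cov * gaspari_cohn(dist / half_width)

# Spurious long-range covariance is tapered to zero beyond 2 half-widths
loc_cov = localize(np.array([[4.0, 2.0], [2.0, 9.0]]),
                   dist=np.array([[0.0, 2.0], [2.0, 0.0]]), half_width=1.0)
```

The multivariate question the paper addresses is precisely how to choose such a function jointly for several state variables so that the tapered matrix stays a valid covariance.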
Optical hybrid quantum teleportation and its applications
NASA Astrophysics Data System (ADS)
Takeda, Shuntaro; Okada, Masanori; Furusawa, Akira
2017-08-01
Quantum teleportation, a transfer protocol of quantum states, is the essence of many sophisticated quantum information protocols. There have been two complementary approaches to optical quantum teleportation: discrete variables (DVs) and continuous variables (CVs). However, both approaches have pros and cons. Here we take a "hybrid" approach to overcome the current limitations: CV quantum teleportation of DVs. This approach enabled the first realization of deterministic quantum teleportation of photonic qubits without post-selection. We also applied the hybrid scheme to several experiments, including entanglement swapping between DVs and CVs, conditional CV teleportation of single photons, and CV teleportation of qutrits. We are now aiming at universal, scalable, and fault-tolerant quantum computing based on these hybrid technologies.
Nonparametric model validations for hidden Markov models with applications in financial econometrics
Zhao, Zhibiao
2011-01-01
We address the nonparametric model validation problem for hidden Markov models with partially observable variables and hidden states. We achieve this goal by constructing a nonparametric simultaneous confidence envelope for transition density function of the observable variables and checking whether the parametric density estimate is contained within such an envelope. Our specification test procedure is motivated by a functional connection between the transition density of the observable variables and the Markov transition kernel of the hidden states. Our approach is applicable for continuous time diffusion models, stochastic volatility models, nonlinear time series models, and models with market microstructure noise. PMID:21750601
Synchronization of a Josephson junction array in terms of global variables
NASA Astrophysics Data System (ADS)
Vlasov, Vladimir; Pikovsky, Arkady
2013-08-01
We consider an array of Josephson junctions with a common LCR load. Application of the Watanabe-Strogatz approach [Physica D 74, 197 (1994), doi:10.1016/0167-2789(94)90196-1] allows us to formulate the dynamics of the array via the global variables only. For identical junctions this is a finite set of equations, analysis of which reveals the regions of bistability of the synchronous and asynchronous states. For disordered arrays with distributed parameters of the junctions, the problem is formulated as an integro-differential equation for the global variables; here the stability of the asynchronous states and the properties of the synchrony-asynchrony transition are established numerically.
Squeezed states and Hermite polynomials in a complex variable
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ali, S. Twareque, E-mail: twareque.ali@concordia.ca; Górska, K., E-mail: katarzyna.gorska@ifj.edu.pl; Horzela, A., E-mail: andrzej.horzela@ifj.edu.pl
2014-01-15
Following the lines of the recent paper of J.-P. Gazeau and F. H. Szafraniec [J. Phys. A: Math. Theor. 44, 495201 (2011)], we construct here three types of coherent states, related to the Hermite polynomials in a complex variable which are orthogonal with respect to a non-rotationally invariant measure. We investigate relations between these coherent states and obtain the relationship between them and the squeezed states of quantum optics. We also obtain a second realization of the canonical coherent states in the Bargmann space of analytic functions, in terms of a squeezed basis. All this is done in the flavor of the classical approach of V. Bargmann [Commun. Pure Appl. Math. 14, 187 (1961)].
Control approach development for variable recruitment artificial muscles
NASA Astrophysics Data System (ADS)
Jenkins, Tyler E.; Chapman, Edward M.; Bryant, Matthew
2016-04-01
This study characterizes hybrid control approaches for the variable recruitment of fluidic artificial muscles with double acting (antagonistic) actuation. Fluidic artificial muscle actuators have been explored by researchers due to their natural compliance, high force-to-weight ratio, and low cost of fabrication. Previous studies have attempted to improve system efficiency of the actuators through variable recruitment, i.e. using discrete changes in the number of active actuators. While current variable recruitment research utilizes manual valve switching, this paper details the current development of an online variable recruitment control scheme. By continuously controlling applied pressure and discretely controlling the number of active actuators, operation in the lowest possible recruitment state is ensured and working fluid consumption is minimized. Results provide insight into switching control scheme effects on working fluids, fabrication material choices, actuator modeling, and controller development decisions.
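The hybrid control idea described above, discrete selection of the recruitment state combined with continuous pressure control, can be sketched in a few lines (function and parameter names are hypothetical; the actual controller in the study is more involved).

```python
def recruit(force_demand, n_max, f_max_per_muscle):
    """Pick the lowest recruitment state (fewest active artificial
    muscles) able to meet the force demand, then return that state and
    the continuous pressure fraction each active muscle must deliver.
    Operating in the lowest adequate state minimizes working fluid use."""
    for n in range(1, n_max + 1):
        if n * f_max_per_muscle >= force_demand:
            return n, force_demand / (n * f_max_per_muscle)
    raise ValueError("demand exceeds total actuator capacity")

# 50 N demand, four muscles of 40 N each: two muscles at 62.5% effort
state, pressure_fraction = recruit(50.0, n_max=4, f_max_per_muscle=40.0)
```

An online controller would re-evaluate this selection each control cycle and add hysteresis to avoid chattering between recruitment states; that refinement is omitted here.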
Gapped two-body Hamiltonian for continuous-variable quantum computation.
Aolita, Leandro; Roncaglia, Augusto J; Ferraro, Alessandro; Acín, Antonio
2011-03-04
We introduce a family of Hamiltonian systems for measurement-based quantum computation with continuous variables. The Hamiltonians (i) are quadratic, and therefore two body, (ii) are of short range, (iii) are frustration-free, and (iv) possess a constant energy gap proportional to the squared inverse of the squeezing. Their ground states are the celebrated Gaussian graph states, which are universal resources for quantum computation in the limit of infinite squeezing. These Hamiltonians constitute the basic ingredient for the adiabatic preparation of graph states and thus open new venues for the physical realization of continuous-variable quantum computing beyond the standard optical approaches. We characterize the correlations in these systems at thermal equilibrium. In particular, we prove that the correlations across any multipartition are contained exactly in its boundary, automatically yielding a correlation area law.
Forecasting conditional climate-change using a hybrid approach
Esfahani, Akbar Akbari; Friedel, Michael J.
2014-01-01
A novel approach is proposed to forecast the likelihood of climate-change across spatial landscape gradients. This hybrid approach involves reconstructing past precipitation and temperature using the self-organizing map technique; determining quantile trends in the climate-change variables by quantile regression modeling; and computing conditional forecasts of climate-change variables based on self-similarity in quantile trends using the fractionally differenced auto-regressive integrated moving average technique. The proposed modeling approach is applied to states (Arizona, California, Colorado, Nevada, New Mexico, and Utah) in the southwestern U.S., where conditional forecasts of climate-change variables are evaluated against recent (2012) observations, evaluated at a future time period (2030), and evaluated as future trends (2009–2059). These results have broad economic, political, and social implications because they quantify uncertainty in climate-change forecasts affecting various sectors of society. Another benefit of the proposed hybrid approach is that it can be extended to any spatiotemporal scale, provided that self-similarity exists.
NASA Technical Reports Server (NTRS)
Mavris, Dimitri N.; Bandte, Oliver; Schrage, Daniel P.
1996-01-01
This paper outlines an approach for the determination of economically viable robust design solutions using the High Speed Civil Transport (HSCT) as a case study. Furthermore, the paper states the advantages of a probability-based aircraft design over the traditional point design approach. It also proposes a new methodology called Robust Design Simulation (RDS) which treats customer satisfaction as the ultimate design objective. RDS is based on a probabilistic approach to aerospace systems design, which views the chosen objective as a distribution function introduced by so-called noise or uncertainty variables. Since the designer has no control over these variables, a variability distribution is defined for each one of them. The cumulative effect of all these distributions causes the overall variability of the objective function. For cases where the selected objective function depends heavily on these noise variables, it may be desirable to obtain a design solution that minimizes this dependence. The paper outlines a step-by-step approach on how to achieve such a solution for the HSCT case study and introduces an evaluation criterion which guarantees the highest customer satisfaction. This customer satisfaction is expressed by the probability of achieving objective function values less than a desired target value.
Jernigan, Jan; Barnes, Seraphine Pitt; Shea, Pat; Davis, Rachel; Rutledge, Stephanie
2017-01-01
We provide an overview of the comprehensive evaluation of State Public Health Actions to Prevent and Control Diabetes, Heart Disease, Obesity and Associated Risk Factors and Promote School Health (State Public Health Actions). State Public Health Actions is a program funded by the Centers for Disease Control and Prevention to support the statewide implementation of cross-cutting approaches to promote health and prevent and control chronic diseases. The evaluation addresses the relevance, quality, and impact of the program by using 4 components: a national evaluation, performance measures, state evaluations, and evaluation technical assistance to states. Challenges of the evaluation included assessing the extent to which the program contributed to changes in the outcomes of interest and the variability in the states’ capacity to conduct evaluations and track performance measures. Given the investment in implementing collaborative approaches at both the state and national level, achieving meaningful findings from the evaluation is critical. PMID:29215974
Previous studies have reported that lower-income and minority populations are more likely to live near major roads. This study quantifies associations between socioeconomic status, racial/ethnic variables, and traffic-related exposure metrics for the United States. Using geograph...
Marini, Simone; Trifoglio, Emanuele; Barbarini, Nicola; Sambo, Francesco; Di Camillo, Barbara; Malovini, Alberto; Manfrini, Marco; Cobelli, Claudio; Bellazzi, Riccardo
2015-10-01
The increasing prevalence of diabetes and its related complications is raising the need for effective methods to predict patient evolution and for stratifying cohorts in terms of risk of developing diabetes-related complications. In this paper, we present a novel approach to the simulation of a type 1 diabetes population, based on Dynamic Bayesian Networks, which combines literature knowledge with data mining of a rich longitudinal cohort of type 1 diabetes patients, the DCCT/EDIC study. In particular, in our approach we simulate the patient health state and complications through discretized variables. Two types of models are presented, one entirely learned from the data and the other partially driven by literature-derived knowledge. The whole cohort is simulated for fifteen years, and the simulation error (i.e. for each variable, the percentage of patients predicted in the wrong state) is calculated every year on independent test data. For each variable, the percentage of patients predicted in the wrong state remains below 10% for both models over time. Furthermore, the distributions of real vs. simulated patients greatly overlap. Thus, the proposed models are viable tools to support decision making in type 1 diabetes. Copyright © 2015 Elsevier Inc. All rights reserved.
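The discretized-state simulation described here can be sketched as a toy two-node dynamic Bayesian network: a binary risk-factor node drives the yearly transition probability of a binary complication node. All rates and variable names below are invented for illustration and are not taken from the DCCT/EDIC-based models.

```python
import random

# Hypothetical two-node DBN: glycaemic control (0 = good, 1 = poor)
# conditions the yearly hazard of an absorbing complication state.
P_COMPLICATION = {0: 0.02, 1: 0.10}   # assumed illustrative yearly rates
P_CONTROL_WORSENS = 0.15              # assumed yearly worsening rate

def simulate_patient(years, rng):
    control, complication = 0, 0
    history = []
    for _ in range(years):
        if control == 0 and rng.random() < P_CONTROL_WORSENS:
            control = 1                       # risk factor worsens
        if complication == 0 and rng.random() < P_COMPLICATION[control]:
            complication = 1                  # complication is absorbing
        history.append((control, complication))
    return history

rng = random.Random(42)
cohort = [simulate_patient(15, rng) for _ in range(1000)]
prevalence = sum(h[-1][1] for h in cohort) / len(cohort)
```

A model learned from data would replace the fixed rates with conditional probability tables estimated per variable, but the forward-sampling loop is the same.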
Daniel, Colin J.; Sleeter, Benjamin M.; Frid, Leonardo; Fortin, Marie-Josée
2018-01-01
State-and-transition simulation models (STSMs) provide a general framework for forecasting landscape dynamics, including projections of both vegetation and land-use/land-cover (LULC) change. The STSM method divides a landscape into spatially-referenced cells and then simulates the state of each cell forward in time, as a discrete-time stochastic process using a Monte Carlo approach, in response to any number of possible transitions. A current limitation of the STSM method, however, is that all of the state variables must be discrete. Here we present a new approach for extending a STSM, in order to account for continuous state variables, called a state-and-transition simulation model with stocks and flows (STSM-SF). The STSM-SF method allows for any number of continuous stocks to be defined for every spatial cell in the STSM, along with a suite of continuous flows specifying the rates at which stock levels change over time. The change in the level of each stock is then simulated forward in time, for each spatial cell, as a discrete-time stochastic process. The method differs from the traditional systems dynamics approach to stock-flow modelling in that the stocks and flows can be spatially-explicit, and the flows can be expressed as a function of the STSM states and transitions. We demonstrate the STSM-SF method by integrating a spatially-explicit carbon (C) budget model with a STSM of LULC change for the state of Hawai'i, USA. In this example, continuous stocks are pools of terrestrial C, while the flows are the possible fluxes of C between these pools. Importantly, several of these C fluxes are triggered by corresponding LULC transitions in the STSM.
Model outputs include changes in the spatial and temporal distribution of C pools and fluxes across the landscape in response to projected future changes in LULC over the next 50 years. The new STSM-SF method allows both discrete and continuous state variables to be integrated into a STSM, including interactions between them. With the addition of stocks and flows, STSMs provide a conceptually simple yet powerful approach for characterizing uncertainties in projections of a wide range of questions regarding landscape change.
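The stock-and-flow extension can be sketched with a toy cell-level model (all parameter values below are invented for illustration and are not from the Hawai'i application): each cell carries a discrete LULC state plus a continuous carbon stock, deterministic flows update the stock each year, and a stochastic LULC transition triggers an additional flux.

```python
import random

P_DEVELOP = 0.01            # assumed annual forest -> developed probability
NPP, TURNOVER = 2.0, 0.05   # assumed flows: growth (t C/ha/yr), loss (1/yr)

def step_cell(cell, rng):
    """Advance one cell one year: continuous stock update (flows), then a
    Monte Carlo draw for the discrete transition, which itself triggers a
    flux (here, 75% of biomass is emitted on development)."""
    state, biomass = cell
    if state == "forest":
        biomass += NPP - TURNOVER * biomass
        if rng.random() < P_DEVELOP:
            state, biomass = "developed", 0.25 * biomass
    return (state, biomass)

rng = random.Random(1)
cells = [("forest", 100.0)] * 500           # 500 identical starting cells
for year in range(50):
    cells = [step_cell(c, rng) for c in cells]
frac_developed = sum(s == "developed" for s, _ in cells) / len(cells)
```

Tracking the stock per cell, rather than per landscape, is what makes the carbon outputs spatially explicit and lets LULC transitions drive the fluxes.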
Interregional migration in socialist countries: the case of China.
Wei, Y
1997-03-01
"This paper analyzes changing interregional migration in China and reveals that the recent eastward migration reverses patterns of migration under Mao. It finds that investment variables are more important than the conventional variables of income and job opportunities in determining China's recent interregional migration. It suggests that both state policy and the global force influence interregional migration, challenging the popular view that the socialist state is the only critical determinant. This paper also criticizes Mao's approach to interregional migration and discusses the impact of migration on development." excerpt
Pathmanathan, Pras; Shotwell, Matthew S; Gavaghan, David J; Cordeiro, Jonathan M; Gray, Richard A
2015-01-01
Perhaps the most mature area of multi-scale systems biology is the modelling of the heart. Current models are grounded in over fifty years of research in the development of biophysically detailed models of the electrophysiology (EP) of cardiac cells, but one aspect which is inadequately addressed is the incorporation of uncertainty and physiological variability. Uncertainty quantification (UQ) is the identification and characterisation of the uncertainty in model parameters derived from experimental data, and the computation of the resultant uncertainty in model outputs. It is a necessary tool for establishing the credibility of computational models, and will likely be expected of EP models for future safety-critical clinical applications. The focus of this paper is formal UQ of one major sub-component of cardiac EP models, the steady-state inactivation of the fast sodium current, INa. To better capture average behaviour and quantify variability across cells, we have applied for the first time an 'individual-based' statistical methodology to assess voltage clamp data. Advantages of this approach over a more traditional 'population-averaged' approach are highlighted. The method was used to characterise variability amongst cells isolated from canine epi- and endocardium, and this variability was then 'propagated forward' through a canine model to determine the resultant uncertainty in model predictions at different scales, such as of upstroke velocity and spiral wave dynamics. Statistically significant differences between epi- and endocardial cells (greater half-inactivation and a less steep slope of the steady-state inactivation curve for endocardial cells) were observed, and the forward propagation revealed a lack of robustness of the model to underlying variability, but also surprising robustness to variability at the tissue scale.
Overall, the methodology can be used to: (i) better analyse voltage clamp data; (ii) characterise underlying population variability; (iii) investigate consequences of variability; and (iv) improve the ability to validate a model. To our knowledge this article is the first to quantify population variability in membrane dynamics in this manner, and the first to perform formal UQ for a component of a cardiac model. The approach is likely to find much wider applicability across systems biology as current application domains reach greater levels of maturity. Published by Elsevier Ltd.
Ontology and modeling patterns for state-based behavior representation
NASA Technical Reports Server (NTRS)
Castet, Jean-Francois; Rozek, Matthew L.; Ingham, Michel D.; Rouquette, Nicolas F.; Chung, Seung H.; Kerzhner, Aleksandr A.; Donahue, Kenneth M.; Jenkins, J. Steven; Wagner, David A.; Dvorak, Daniel L.;
2015-01-01
This paper provides an approach to capture state-based behavior of elements, that is, the specification of their state evolution in time, and the interactions amongst them. Elements can be components (e.g., sensors, actuators) or environments, and are characterized by state variables that vary with time. The behaviors of these elements, as well as interactions among them are represented through constraints on state variables. This paper discusses the concepts and relationships introduced in this behavior ontology, and the modeling patterns associated with it. Two example cases are provided to illustrate their usage, as well as to demonstrate the flexibility and scalability of the behavior ontology: a simple flashlight electrical model and a more complex spacecraft model involving instruments, power and data behaviors. Finally, an implementation in a SysML profile is provided.
State funding for local public health: observations from six case studies.
Potter, Margaret A; Fitzpatrick, Tiffany
2007-01-01
The purpose of this study is to describe state funding of local public health within the context of state public health system types. These types are based on administrative relationships, legal structures, and relative proportion of state funding in local public health budgets. We selected six states representing various types and geographic regions. A case study for each state summarized available information and was validated by state public health officials. An analysis of the case studies reveals that the variability of state public health systems--even within a given type--is matched by variability in approaches to funding local public health. Nevertheless, some meaningful associations appear. For example, higher proportions of state funding occur along with higher levels of state oversight and the existence of local service mandates in state law. These associations suggest topics for future research on public health financing in relation to local accountability, local input to state priority-setting, mandated local services, and the absence of state funds for public health services in some local jurisdictions.
Bucklin, David N.; Watling, James I.; Speroterra, Carolina; Brandt, Laura A.; Mazzotti, Frank J.; Romañach, Stephanie S.
2013-01-01
High-resolution (downscaled) projections of future climate conditions are critical inputs to a wide variety of ecological and socioeconomic models and are created using numerous different approaches. Here, we conduct a sensitivity analysis of spatial predictions from climate envelope models for threatened and endangered vertebrates in the southeastern United States to determine whether two different downscaling approaches (with and without the use of a regional climate model) affect climate envelope model predictions when all other sources of variation are held constant. We found that prediction maps differed spatially between downscaling approaches and that the variation attributable to downscaling technique was comparable to variation between maps generated using different general circulation models (GCMs). Precipitation variables tended to show greater discrepancies between downscaling techniques than temperature variables, and for one GCM, there was evidence that more poorly resolved precipitation variables contributed relatively more to model uncertainty than more well-resolved variables. Our work suggests that ecological modelers requiring high-resolution climate projections should carefully consider the type of downscaling applied to the climate projections prior to their use in predictive ecological modeling. The uncertainty associated with alternative downscaling methods may rival that of other, more widely appreciated sources of variation, such as the general circulation model or emissions scenario with which future climate projections are created.
Grassmann phase space methods for fermions. I. Mode theory
NASA Astrophysics Data System (ADS)
Dalton, B. J.; Jeffers, J.; Barnett, S. M.
2016-07-01
In both quantum optics and cold atom physics, the behaviour of bosonic photons and atoms is often treated using phase space methods, where mode annihilation and creation operators are represented by c-number phase space variables, with the density operator equivalent to a distribution function of these variables. The anti-commutation rules for fermion annihilation and creation operators suggest the possibility of using anti-commuting Grassmann variables to represent these operators. However, in spite of the seminal work by Cahill and Glauber and a few applications, the use of Grassmann phase space methods in quantum-atom optics to treat fermionic systems is rather rare, though fermion coherent states using Grassmann variables are widely used in particle physics. The theory of Grassmann phase space methods for fermions based on separate modes is developed, showing how the distribution function is defined and used to determine quantum correlation functions, Fock state populations and coherences via Grassmann phase space integrals, and how the Fokker-Planck equations are obtained and then converted into equivalent Ito equations for stochastic Grassmann variables. The fermion distribution function is an even Grassmann function, and is unique. The number of c-number Wiener increments involved is 2n², if there are n modes. The situation is somewhat different to the bosonic c-number case, where only 2n Wiener increments are involved; the sign of the drift term in the Ito equation is reversed, and the diffusion matrix in the Fokker-Planck equation is anti-symmetric rather than symmetric. The un-normalised B distribution is of particular importance for determining Fock state populations and coherences, and as pointed out by Plimak, Collett and Olsen, the drift vector in its Fokker-Planck equation only depends linearly on the Grassmann variables.
Using this key feature we show how the Ito stochastic equations can be solved numerically for finite times in terms of c-number stochastic quantities. Averages of products of Grassmann stochastic variables at the initial time are also involved, but these are determined from the initial conditions for the quantum state. The detailed approach to the numerics is outlined, showing that (apart from standard issues in such numerics) numerical calculations for Grassmann phase space theories of fermion systems could be carried out without needing to represent Grassmann phase space variables on the computer, and only involving processes using c-numbers. We compare our approach to that of Plimak, Collett and Olsen and show that the two approaches differ. As a simple test case we apply the B distribution theory and solve the Ito stochastic equations to demonstrate coupling between degenerate Cooper pairs in a four-mode fermionic system involving spin-conserving interactions between the spin-1/2 fermions, where modes with momenta −k, +k, each associated with spin-up and spin-down states, are involved.
An approach to evaluate the intra-urban thermal variability in summer using an urban indicator.
Massetti, Luciano; Petralli, Martina; Brandani, Giada; Orlandini, Simone
2014-09-01
Urban planners and managers need tools to evaluate the performance of the present state and future development of cities in terms of comfort and quality of life. In this paper, an approach to analyse the intra-urban summer thermal variability, using an urban planning indicator, is presented. The proportion of buildings and concrete surfaces in a specific buffer area is calculated. In addition, the relationship between urban and temperature indicators is investigated and used to produce thermal maps of the city. This approach is applied to the analysis in Florence (Italy) of the intra-urban variability of two thermal indices (heat index and cooling degree days) used to evaluate impacts on thermal comfort and energy consumption for indoor cooling. Our results suggest that urban planning indicators can describe intra-urban thermal variability in a way that can easily be used by urban planners for evaluating the effects of future urbanization scenarios on human health. Copyright © 2014 Elsevier Ltd. All rights reserved.
Multivariate localization methods for ensemble Kalman filtering
NASA Astrophysics Data System (ADS)
Roh, S.; Jun, M.; Szunyogh, I.; Genton, M. G.
2015-12-01
In ensemble Kalman filtering (EnKF), the small number of ensemble members that is feasible to use in a practical data assimilation application leads to sampling variability of the estimates of the background error covariances. The standard approach to reducing the effects of this sampling variability, which has also been found to be highly efficient in improving the performance of EnKF, is the localization of the estimates of the covariances. One family of localization techniques is based on taking the Schur (element-wise) product of the ensemble-based sample covariance matrix and a correlation matrix whose entries are obtained by the discretization of a distance-dependent correlation function. While the proper definition of the localization function for a single state variable has been extensively investigated, a rigorous definition of the localization function for multiple state variables that exist at the same locations has seldom been considered. This paper introduces two strategies for the construction of localization functions for multiple state variables. The proposed localization functions are tested in experiments in which simulated observations are assimilated into the bivariate Lorenz 95 model.
ERIC Educational Resources Information Center
Delaney, Jennifer A.
2016-01-01
This study considers the relationship between federal academic earmarks and three types of state spending for higher education--general appropriations, capital outlays, and student financial aid. Often referred to as "pork," federal academic earmarks are both controversial and understudied. Using a unique panel dataset from 1990-2003,…
A Comparison of Four Approaches to Account for Method Effects in Latent State-Trait Analyses
ERIC Educational Resources Information Center
Geiser, Christian; Lockhart, Ginger
2012-01-01
Latent state-trait (LST) analysis is frequently applied in psychological research to determine the degree to which observed scores reflect stable person-specific effects, effects of situations and/or person-situation interactions, and random measurement error. Most LST applications use multiple repeatedly measured observed variables as indicators…
Occupancy estimation and modeling with multiple states and state uncertainty
Nichols, J.D.; Hines, J.E.; MacKenzie, D.I.; Seamans, M.E.; Gutierrez, R.J.
2007-01-01
The distribution of a species over space is of central interest in ecology, but species occurrence does not provide all of the information needed to characterize either the well-being of a population or the suitability of occupied habitat. Recent methodological development has focused on drawing inferences about species occurrence in the face of imperfect detection. Here we extend those methods by characterizing occupied locations by some additional state variable (e.g., as producing young or not). Our modeling approach deals with both detection probabilities less than 1 and uncertainty in state classification. We then use the approach with occupancy and reproductive rate data from California Spotted Owls (Strix occidentalis occidentalis) collected in the central Sierra Nevada during the breeding season of 2004 to illustrate the utility of the modeling approach. Estimates of owl reproductive rate were larger than naive estimates, indicating the importance of appropriately accounting for uncertainty in detection and state classification.
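To make the state-uncertainty idea concrete, here is a hedged sketch of the kind of likelihood such multistate occupancy models use: three true states (unoccupied, occupied without young, occupied with young), per-state detection probabilities, and a classification probability delta for whether young are observed given a detection at a site with young. All parameter values and names are hypothetical illustrations, not estimates from the owl study.

```python
from math import prod

# Hypothetical parameters: psi[s] = Pr(site is in true state s)
psi = [0.3, 0.4, 0.3]   # 0 = unoccupied, 1 = occupied no young, 2 = with young
p1 = 0.5                # detection prob at a state-1 site
p2 = 0.6                # detection prob at a state-2 site
delta = 0.7             # Pr(young observed | detection at a state-2 site)

def obs_prob(state, y):
    """Pr(one survey yields observation y | true state).
    y: 0 = no detection, 1 = detected without young, 2 = young observed."""
    if state == 0:
        return 1.0 if y == 0 else 0.0
    if state == 1:
        return {0: 1 - p1, 1: p1, 2: 0.0}[y]
    # State 2: a detection may fail to reveal young (state uncertainty)
    return {0: 1 - p2, 1: p2 * (1 - delta), 2: p2 * delta}[y]

def history_likelihood(history):
    """Marginal likelihood of a detection history, summing over true states."""
    return sum(psi[s] * prod(obs_prob(s, y) for y in history)
               for s in range(3))

L_001 = history_likelihood([0, 0, 1])   # detected once, never with young
L_002 = history_likelihood([0, 0, 2])   # young seen once: state 2 is certain
```

Note how the history [0, 0, 1] keeps contributions from both occupied states, which is exactly why naive classification understates reproductive rate.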
Expansion and retrenchment of the Swedish welfare state: a long-term approach.
Buendía, Luis
2015-01-01
In this article, we will undertake a long-term analysis of the evolution of the Swedish welfare state, seeking to explain that evolution through the use of a systemic approach. That is, our approach will consider the interrelations between economic growth (EG), the sociopolitical institutional framework (IF), and the welfare state (WS)-understood as a set of institutions embracing the labor market and its regulation, the tax system, and the so-called social wage-in order to find the main variables that elucidate its evolution. We will show that the expansive phase of the Swedish welfare state can be explained by the symbiotic relationships developed in the WS-EG-IF interaction, whereas the period of welfare state retrenchment is one result of changes operating within the sociopolitical IF and EG bases. © The Author(s) 2015. Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.
NASA Astrophysics Data System (ADS)
Lei, Hanlun; Xu, Bo; Circi, Christian
2018-05-01
In this work, the single-mode motions around the collinear and triangular libration points in the circular restricted three-body problem are studied. To describe these motions, we adopt an invariant manifold approach, in which a suitable pair of independent variables is taken as modal coordinates and the remaining state variables are expressed as polynomial series in them. Based on the invariant manifold approach, the general procedure for constructing polynomial expansions up to a certain order is outlined. Taking the Earth-Moon system as the example dynamical model, we construct the polynomial expansions up to the tenth order for the single-mode motions around the collinear libration points, and up to order eight and six for the planar and vertical-periodic motions around the triangular libration point, respectively. The polynomial expansions constructed can be used to determine the initial states for the single-mode motions around the equilibrium points. To check their validity, the accuracy of the initial states determined by the polynomial expansions is evaluated.
Optimal savings and the value of population.
Arrow, Kenneth J; Bensoussan, Alain; Feng, Qi; Sethi, Suresh P
2007-11-20
We study a model of economic growth in which an exogenously changing population enters in the objective function under total utilitarianism and into the state dynamics as the labor input to the production function. We consider an arbitrary population growth until it reaches a critical level (resp. saturation level) at which point it starts growing exponentially (resp. it stops growing altogether). This requires population as well as capital as state variables. By letting the population variable serve as the surrogate of time, we are still able to depict the optimal path and its convergence to the long-run equilibrium on a two-dimensional phase diagram. The phase diagram consists of a transient curve that reaches the classical curve associated with a positive exponential growth at the time the population reaches the critical level. In the case of an asymptotic population saturation, we expect the transient curve to approach the equilibrium as the population approaches its saturation level. Finally, we characterize the approaches to the classical curve and to the equilibrium.
NASA Astrophysics Data System (ADS)
Akita, T.; Takaki, R.; Shima, E.
2012-04-01
An adaptive estimation method for a spacecraft thermal mathematical model is presented. The method is based on the ensemble Kalman filter, which can effectively handle the nonlinearities contained in the thermal model. The state-space equations of the thermal mathematical model are derived, where both temperature and uncertain thermal characteristic parameters are considered as state variables. In the method, the thermal characteristic parameters are automatically estimated as outputs of the filtered state variables, whereas in the usual thermal model correlation they are manually identified by experienced engineers using a trial-and-error approach. A numerical experiment on a simple small satellite is provided to verify the effectiveness of the presented method.
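A minimal sketch of the state-augmentation idea, assuming a toy one-node cooling law in place of a real spacecraft thermal model: the unknown parameter is appended to the state vector and updated by a stochastic EnKF from temperature observations alone. The model, noise levels, and the small parameter random walk (a common trick to keep parameter spread alive) are all illustrative assumptions, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy one-node cooling law: dT/dt = -k (T - T_env), with k unknown
k_true, T_env, dt = 0.3, 280.0, 0.1
n_ens, n_steps, obs_sd = 50, 200, 0.5

# Augmented ensemble: each member carries [temperature T, parameter k]
ens = np.column_stack([
    rng.normal(300.0, 5.0, n_ens),   # initial temperature spread
    rng.normal(0.1, 0.05, n_ens),    # deliberately biased parameter guess
])
H = np.array([[1.0, 0.0]])           # only temperature is observed

T = 300.0
for _ in range(n_steps):
    T += dt * (-k_true * (T - T_env))              # "truth" run
    y = T + rng.normal(0.0, obs_sd)                # noisy measurement
    # Forecast: members propagate with their own k; random walk on k
    ens[:, 0] += dt * (-ens[:, 1] * (ens[:, 0] - T_env))
    ens[:, 1] += rng.normal(0.0, 0.005, n_ens)     # keeps parameter spread
    # Analysis: stochastic EnKF update of the full augmented state
    P = np.cov(ens.T)
    K = P @ H.T / (P[0, 0] + obs_sd**2)            # 2x1 Kalman gain
    innov = (y + rng.normal(0.0, obs_sd, n_ens)) - ens[:, 0]
    ens += (K @ innov[None, :]).T

k_est = ens[:, 1].mean()                           # drifts toward k_true
```

The parameter is corrected through its sampled cross-covariance with temperature, which is the essence of estimating thermal characteristics "as outputs of the filtered state variables".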
NASA Astrophysics Data System (ADS)
Medina, H.; Romano, N.; Chirico, G. B.
2012-12-01
We present a dual Kalman Filter (KF) approach for retrieving states and parameters controlling soil water dynamics in a homogeneous soil column by using near-surface state observations. The dual Kalman filter couples a standard KF algorithm for retrieving the states and an unscented KF algorithm for retrieving the parameters. We examine the performance of the dual Kalman filter applied to two alternative state-space formulations of the Richards equation, differentiated by the type of variable employed for representing the states: either the soil water content (θ) or the soil matric pressure head (h). We use a synthetic time series of true states and noise-corrupted observations and a synthetic time series of meteorological forcing. The performance analyses account for the effect of the input parameters, the observation depth and the assimilation frequency, as well as the relationship between the retrieved states and the assimilated variables. We show that the identifiability of the parameters is strongly conditioned by several factors, such as the initial guess of the unknown parameters, the wet or dry range of the retrieved states, the boundary conditions, as well as the form (h-based or θ-based) of the state-space formulation. State retrieval, by contrast, is effective even with a relatively coarse time resolution of the assimilated observations. The accuracy of the retrieved states exhibits limited sensitivity to the observation depth and the assimilation frequency.
A fault isolation method based on the incidence matrix of an augmented system
NASA Astrophysics Data System (ADS)
Chen, Changxiong; Chen, Liping; Ding, Jianwan; Wu, Yizhong
2018-03-01
In this paper, a new approach is proposed for isolating faults and quickly identifying the redundant sensors of a system. By introducing fault signals as additional state variables, an augmented system model is constructed from the original system model, the fault signals and the sensor measurement equations. The structural properties of the augmented system model are provided. From the viewpoint of evaluating fault variables, the calculating correlations of the fault variables in the system can be found, which imply the fault isolation properties of the system. Compared with previous isolation approaches, the highlights of the new approach are that it can quickly find the faults that can be isolated using exclusive residuals and, at the same time, identify the redundant sensors in the system, which is useful for the design of a diagnosis system. The simulation of a four-tank system is reported to validate the proposed method.
How to decompose arbitrary continuous-variable quantum operations.
Sefi, Seckin; van Loock, Peter
2011-10-21
We present a general, systematic, and efficient method for decomposing any given exponential operator of bosonic mode operators, describing an arbitrary multimode Hamiltonian evolution, into a set of universal unitary gates. Although our approach is mainly oriented towards continuous-variable quantum computation, it may be used more generally whenever quantum states are to be transformed deterministically, e.g., in quantum control, discrete-variable quantum computation, or Hamiltonian simulation. We illustrate our scheme by presenting decompositions for various nonlinear Hamiltonians including quartic Kerr interactions. Finally, we conclude with two potential experiments utilizing offline-prepared optical cubic states and homodyne detections, in which quantum information is processed optically or in an atomic memory using quadratic light-atom interactions. © 2011 American Physical Society
A Bayesian approach for convex combination of two Gumbel-Barnett copulas
NASA Astrophysics Data System (ADS)
Fernández, M.; González-López, V. A.
2013-10-01
In this paper, a new Bayesian approach was applied to model the dependence between two variables of interest in public policy: "Gonorrhea Rates per 100,000 Population" and "400% Federal Poverty Level and over", with a small number of paired observations (one pair for each U.S. state). We use a mixture of Gumbel-Barnett copulas, suitable for representing situations with weak and negative dependence, which is the case treated here. The methodology even allows predicting the dependence between the variables from one year to the next, showing whether there was any alteration in the dependence.
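The Gumbel-Barnett copula family mentioned above has a closed-form density, which makes a convex combination straightforward to evaluate. The sketch below implements the mixture density and checks numerically that it integrates to one on the unit square; the weight and theta values are arbitrary illustrations, not the posterior quantities estimated in the paper.

```python
import numpy as np

def gumbel_barnett_density(u, v, theta):
    """Density of the Gumbel-Barnett copula
    C(u,v) = u*v*exp(-theta*ln(u)*ln(v)), theta in (0, 1]."""
    lu, lv = np.log(u), np.log(v)
    return np.exp(-theta * lu * lv) * ((1 - theta * lu) * (1 - theta * lv) - theta)

def mixture_density(u, v, w, theta1, theta2):
    """Convex combination of two Gumbel-Barnett copula densities."""
    return (w * gumbel_barnett_density(u, v, theta1)
            + (1 - w) * gumbel_barnett_density(u, v, theta2))

# Sanity check: the mixture density integrates to ~1 on the unit square
m = 400
g = (np.arange(m) + 0.5) / m          # midpoint grid on (0, 1)
U, V = np.meshgrid(g, g)
Z = mixture_density(U, V, w=0.4, theta1=0.2, theta2=0.9)
integral = Z.mean()                   # midpoint rule on [0,1]^2
```

Because ln u and ln v are negative on (0, 1), the factor (1 - theta ln u)(1 - theta ln v) exceeds 1, so the density is nonnegative for theta in (0, 1], and the family only produces negative or weak dependence, as the abstract notes.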
NASA Astrophysics Data System (ADS)
Norris, P. M.; da Silva, A. M., Jr.
2016-12-01
Norris and da Silva recently published a method to constrain a statistical model of sub-gridcolumn moisture variability using high-resolution satellite cloud data. The method can be used for large-scale model parameter estimation or cloud data assimilation (CDA). The gridcolumn model includes assumed-PDF intra-layer horizontal variability and a copula-based inter-layer correlation model. The observables used are MODIS cloud-top pressure, brightness temperature and cloud optical thickness, but the method should be extensible to direct cloudy radiance assimilation for a small number of channels. The algorithm is a form of Bayesian inference with a Markov chain Monte Carlo (MCMC) approach to characterizing the posterior distribution. This approach is especially useful in cases where the background state is clear but cloudy observations exist. In traditional linearized data assimilation methods, a subsaturated background cannot produce clouds via any infinitesimal equilibrium perturbation, but the Monte Carlo approach is not gradient-based and allows jumps into regions of non-zero cloud probability. In the example provided, the method is able to restore marine stratocumulus near the Californian coast where the background state has a clear swath. The new approach not only significantly reduces mean and standard deviation biases with respect to the assimilated observables, but also improves the simulated rotational-Raman scattering cloud optical centroid pressure against independent (non-assimilated) retrievals from the OMI instrument. One obvious difficulty for the method, and other CDA methods, is the lack of information content in passive cloud observables on cloud vertical structure, beyond cloud-top and thickness, thus necessitating strong dependence on the background vertical moisture structure. It is found that a simple flow-dependent correlation modification due to Riishojgaard is helpful, better honoring inversion structures in the background state.
Legendre spectral-collocation method for solving some types of fractional optimal control problems
Sweilam, Nasser H.; Al-Ajami, Tamer M.
2014-01-01
In this paper, the Legendre spectral-collocation method was applied to obtain approximate solutions for some types of fractional optimal control problems (FOCPs). The fractional derivative was described in the Caputo sense. Two different approaches were presented, in the first approach, necessary optimality conditions in terms of the associated Hamiltonian were approximated. In the second approach, the state equation was discretized first using the trapezoidal rule for the numerical integration followed by the Rayleigh–Ritz method to evaluate both the state and control variables. Illustrative examples were included to demonstrate the validity and applicability of the proposed techniques. PMID:26257937
A Transformation Approach to Optimal Control Problems with Bounded State Variables
NASA Technical Reports Server (NTRS)
Hanafy, Lawrence Hanafy
1971-01-01
A technique is described and utilized in the study of the solutions to various general problems in optimal control theory, which are converted into Lagrange problems in the calculus of variations. This is accomplished by mapping certain properties in Euclidean space onto closed control and state regions. Nonlinear control problems with a unit m-cube as control region and a unit n-cube as state region are considered.
Quantum annealing with parametrically driven nonlinear oscillators
NASA Astrophysics Data System (ADS)
Puri, Shruti
While progress has been made towards building Ising machines to solve hard combinatorial optimization problems, quantum speedups have so far been elusive. Furthermore, protecting annealers against decoherence and achieving long-range connectivity remain important outstanding challenges. With the hope of overcoming these challenges, I introduce a new paradigm for quantum annealing that relies on continuous variable states. Unlike the more conventional approach based on two-level systems, in this approach, quantum information is encoded in two coherent states that are stabilized by parametrically driving a nonlinear resonator. I will show that a fully connected Ising problem can be mapped onto a network of such resonators, and outline an annealing protocol based on adiabatic quantum computing. During the protocol, the resonators in the network evolve from vacuum to coherent states representing the ground state configuration of the encoded problem. In short, the system evolves between two classical states following non-classical dynamics. As will be supported by numerical results, this new annealing paradigm leads to superior noise resilience. Finally, I will discuss a realistic circuit QED realization of an all-to-all connected network of parametrically driven nonlinear resonators. The continuous variable nature of the states in the large Hilbert space of the resonator provides new opportunities for exploring quantum phase transitions and non-stoquastic dynamics during the annealing schedule.
Shryock, Daniel F.; Havrilla, Caroline A.; DeFalco, Lesley; Esque, Todd C.; Custer, Nathan; Wood, Troy E.
2015-01-01
Local adaptation influences plant species’ responses to climate change and their performance in ecological restoration. Fine-scale physiological or phenological adaptations that direct demographic processes may drive intraspecific variability when baseline environmental conditions change. Landscape genomics characterizes adaptive differentiation by identifying environmental drivers of adaptive genetic variability and mapping the associated landscape patterns. We applied such an approach to Sphaeralcea ambigua, an important restoration plant in the arid southwestern United States, by analyzing variation at 153 amplified fragment length polymorphism loci in the context of environmental gradients separating 47 Mojave Desert populations. We identified 37 potentially adaptive loci through a combination of genome scan approaches. We then used a generalized dissimilarity model (GDM) to relate variability in potentially adaptive loci with spatial gradients in temperature, precipitation, and topography. We identified non-linear thresholds in loci frequencies driven by summer maximum temperature and water stress, along with continuous variation corresponding to temperature seasonality. Two GDM-based approaches for mapping predicted patterns of local adaptation are compared. Additionally, we assess uncertainty in spatial interpolations through a novel spatial bootstrapping approach. Our study presents robust, accessible methods for deriving spatially-explicit models of adaptive genetic variability in non-model species that will inform climate change modelling and ecological restoration.
Eng, K.; Milly, P.C.D.; Tasker, Gary D.
2007-01-01
To facilitate estimation of streamflow characteristics at an ungauged site, hydrologists often define a region of influence containing gauged sites hydrologically similar to the estimation site. This region can be defined either in geographic space or in the space of the variables that are used to predict streamflow (predictor variables). These approaches are complementary, and a combination of the two may be superior to either. Here we propose a hybrid region-of-influence (HRoI) regression method that combines the two approaches. The new method was applied with streamflow records from 1,091 gauges in the southeastern United States to estimate the 50-year peak flow (Q50). The HRoI approach yielded lower root-mean-square estimation errors and produced fewer extreme errors than either the predictor-variable or geographic region-of-influence approaches. It is concluded, for Q50 in the study region, that similarity with respect to the basin characteristics considered (area, slope, and annual precipitation) is important, but incomplete, and that the consideration of geographic proximity of stations provides a useful surrogate for characteristics that are not included in the analysis. © 2007 ASCE.
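The abstract does not give the exact combination rule, so the sketch below is only a hypothetical illustration of a hybrid region of influence: distances are computed both in standardized predictor space and in geographic space, rescaled, and blended with a weight w before selecting the nearest gauges. All data, the weight, and the region size are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100
# Gauged sites: geographic coordinates and basin characteristics
xy = rng.uniform(0, 100, (n, 2))             # site coordinates, km
attrs = rng.lognormal(0.0, 1.0, (n, 3))      # e.g. area, slope, precipitation

# Hypothetical ungauged estimation site
xy0 = np.array([50.0, 50.0])
attrs0 = np.array([1.0, 1.2, 0.8])

# Standardize predictor variables, then form both distances
mu, sd = attrs.mean(axis=0), attrs.std(axis=0)
d_pred = np.linalg.norm((attrs - mu) / sd - (attrs0 - mu) / sd, axis=1)
d_geo = np.linalg.norm(xy - xy0, axis=1)

# Hybrid distance: weighted blend of the two rescaled distances
w = 0.5                                      # illustrative weight, not the paper's
d_hyb = w * d_pred / d_pred.max() + (1 - w) * d_geo / d_geo.max()

region = np.argsort(d_hyb)[:20]              # indices of the 20 most similar gauges
```

Regression for Q50 would then be fit only to the gauges in `region`; the weight w controls how much geographic proximity substitutes for unmeasured basin characteristics.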
Sources of Biased Inference in Alcohol and Drug Services Research: An Instrumental Variable Approach
Schmidt, Laura A.; Tam, Tammy W.; Larson, Mary Jo
2012-01-01
Objective: This study examined the potential for biased inference due to endogeneity when using standard approaches for modeling the utilization of alcohol and drug treatment. Method: Results from standard regression analysis were compared with those that controlled for endogeneity using instrumental variables estimation. Comparable models predicted the likelihood of receiving alcohol treatment based on the widely used Aday and Andersen medical care–seeking model. Data were from the National Epidemiologic Survey on Alcohol and Related Conditions and included a representative sample of adults in households and group quarters throughout the contiguous United States. Results: Findings suggested that standard approaches for modeling treatment utilization are prone to bias because of uncontrolled reverse causation and omitted variables. Compared with instrumental variables estimation, standard regression analyses produced downwardly biased estimates of the impact of alcohol problem severity on the likelihood of receiving care. Conclusions: Standard approaches for modeling service utilization are prone to underestimating the true effects of problem severity on service use. Biased inference could lead to inaccurate policy recommendations, for example, by suggesting that people with milder forms of substance use disorder are more likely to receive care than is actually the case. PMID:22152672
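A small simulation makes the endogeneity point concrete: when an omitted confounder drives both the regressor and the outcome, OLS is biased downward, while two-stage least squares with a valid instrument recovers the true coefficient. The data-generating process below is hypothetical and unrelated to the NESARC data used in the study.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20000

# Unobserved confounder raises "problem severity" x but lowers the
# outcome y; z is an instrument that shifts x and affects y only via x.
confound = rng.standard_normal(n)
z = rng.standard_normal(n)
x = 1.0 * z + 1.0 * confound + rng.standard_normal(n)
beta_true = 0.5
y = beta_true * x - 1.0 * confound + rng.standard_normal(n)

# Naive OLS: biased by the omitted confounder
X = np.column_stack([np.ones(n), x])
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0][1]

# Two-stage least squares: regress x on z, then y on the fitted x
Z = np.column_stack([np.ones(n), z])
x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
X2 = np.column_stack([np.ones(n), x_hat])
beta_iv = np.linalg.lstsq(X2, y, rcond=None)[0][1]
```

Here beta_ols lands well below beta_true (the analogue of severity appearing to matter less than it does), while beta_iv is close to 0.5, mirroring the downward bias the authors report for standard regression.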
A special protection scheme utilizing trajectory sensitivity analysis in power transmission
NASA Astrophysics Data System (ADS)
Suriyamongkol, Dan
In recent years, new measurement techniques have provided opportunities to improve the North American power system's observability, control and protection. This dissertation discusses the formulation and design of a special protection scheme based on a novel utilization of trajectory sensitivity techniques, with inputs consisting of system state variables and parameters. Trajectory sensitivity analysis (TSA) has been used in previous publications as a method for power system security and stability assessment, and its mathematical formulation lends itself well to some of the time-domain power system simulation techniques. Existing special protection schemes often have limited sets of goals and control actions. The proposed scheme aims to maintain stability while using as many control actions as possible. The approach here applies TSA in a novel way, using the sensitivities of system state variables with respect to state parameter variations to determine the parameter controls required to achieve the desired state variable movements. The initial application will operate on the assumption that the modeled power system has full system observability; practical considerations will be discussed.
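The core of trajectory sensitivity analysis can be sketched by integrating the variational (sensitivity) equations alongside the system trajectory. The toy system below is a damped pendulum with the damping coefficient as the parameter, not a power system model; the finite-difference comparison simply checks the sensitivity integration.

```python
import numpy as np

def simulate(c, dt=0.001, n=5000):
    """Integrate a damped pendulum and the trajectory sensitivity dx/dc
    (c = damping coefficient) with explicit Euler steps."""
    x = np.array([0.5, 0.0])          # [angle, angular velocity]
    s = np.array([0.0, 0.0])          # sensitivity dx/dc
    for _ in range(n):
        f = np.array([x[1], -np.sin(x[0]) - c * x[1]])
        # Jacobians of f w.r.t. the state and the parameter
        J = np.array([[0.0, 1.0], [-np.cos(x[0]), -c]])
        dfdc = np.array([0.0, -x[1]])
        s = s + dt * (J @ s + dfdc)   # variational (sensitivity) equation
        x = x + dt * f
    return x, s

x_end, s_end = simulate(0.2)

# Finite-difference check of the computed sensitivity
eps = 1e-5
x_pert, _ = simulate(0.2 + eps)
fd = (x_pert - x_end) / eps
```

In a protection scheme, sensitivities like s_end indicate how much a given parameter control would move the state variables, so the required control adjustments can be solved for directly instead of by repeated re-simulation.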
Dynamic Programming for Structured Continuous Markov Decision Problems
NASA Technical Reports Server (NTRS)
Dearden, Richard; Meuleau, Nicholas; Washington, Richard; Feng, Zhengzhu
2004-01-01
We describe an approach for exploiting structure in Markov Decision Processes with continuous state variables. At each step of the dynamic programming, the state space is dynamically partitioned into regions where the value function is the same throughout the region. We first describe the algorithm for piecewise constant representations. We then extend it to piecewise linear representations, using techniques from POMDPs to represent and reason about linear surfaces efficiently. We show that for complex, structured problems, our approach exploits the natural structure so that optimal solutions can be computed efficiently.
NASA Astrophysics Data System (ADS)
López-Estrada, F. R.; Astorga-Zaragoza, C. M.; Theilliol, D.; Ponsart, J. C.; Valencia-Palomo, G.; Torres, L.
2017-12-01
This paper proposes a methodology to design a Takagi-Sugeno (TS) descriptor observer for a class of TS descriptor systems. Unlike the popular approach that considers measurable premise variables, this paper considers the premise variables depending on unmeasurable vectors, e.g. the system states. This consideration covers a large class of nonlinear systems and represents a real challenge for the observer synthesis. Sufficient conditions to guarantee robustness against the unmeasurable premise variables and asymptotic convergence of the TS descriptor observer are obtained based on the H∞ approach together with the Lyapunov method. As a result, the designing conditions are given in terms of linear matrix inequalities (LMIs). In addition, sensor fault detection and isolation are performed by means of a generalised observer bank. Two numerical experiments, an electrical circuit and a rolling disc system, are presented in order to illustrate the effectiveness of the proposed method.
Hallquist, Michael N.; Hwang, Kai; Luna, Beatriz
2013-01-01
Recent resting-state functional connectivity fMRI (RS-fcMRI) research has demonstrated that head motion during fMRI acquisition systematically influences connectivity estimates despite bandpass filtering and nuisance regression, which are intended to reduce such nuisance variability. We provide evidence that the effects of head motion and other nuisance signals are poorly controlled when the fMRI time series are bandpass-filtered but the regressors are unfiltered, resulting in the inadvertent reintroduction of nuisance-related variation into frequencies previously suppressed by the bandpass filter, as well as suboptimal correction for noise signals in the frequencies of interest. This is important because many RS-fcMRI studies, including some focusing on motion-related artifacts, have applied this approach. In two cohorts of individuals (n = 117 and 22) who completed resting-state fMRI scans, we found that the bandpass-regress approach consistently overestimated functional connectivity across the brain, typically on the order of r = .10 – .35, relative to a simultaneous bandpass filtering and nuisance regression approach. Inflated correlations under the bandpass-regress approach were associated with head motion and cardiac artifacts. Furthermore, distance-related differences in the association of head motion and connectivity estimates were much weaker for the simultaneous filtering approach. We recommend that future RS-fcMRI studies ensure that the frequencies of nuisance regressors and fMRI data match prior to nuisance regression, and we advocate a simultaneous bandpass filtering and nuisance regression strategy that better controls nuisance-related variability. PMID:23747457
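The frequency-mismatch problem described above can be demonstrated in a few lines. The sketch below uses a simple FFT-based bandpass (an illustrative stand-in for whatever filter a pipeline uses) and contrasts the two orders of operations: filtering only the data before nuisance regression, versus filtering data and regressors identically before regressing.

```python
import numpy as np

def fft_bandpass(x, tr, lo, hi):
    """Bandpass time series (columns of x) by zeroing FFT bins outside [lo, hi] Hz."""
    freqs = np.fft.rfftfreq(x.shape[0], d=tr)
    X = np.fft.rfft(x, axis=0)
    X[(freqs < lo) | (freqs > hi)] = 0.0
    return np.fft.irfft(X, n=x.shape[0], axis=0)

rng = np.random.default_rng(4)
n_t, tr = 300, 2.0
band = (0.009, 0.08)

motion = rng.standard_normal((n_t, 1))                 # broadband nuisance regressor
signal = np.sin(2 * np.pi * 0.05 * tr * np.arange(n_t))  # in-band "neural" signal
data = signal[:, None] + 2.0 * motion                  # contaminated voxel series

# Problematic order: filter the data, regress the UNfiltered regressor;
# subtracting beta*motion reintroduces out-of-band motion variance
data_f = fft_bandpass(data, tr, *band)
beta_bad = np.linalg.lstsq(motion, data_f, rcond=None)[0]
resid_bad = data_f - motion @ beta_bad

# Recommended: filter regressors and data identically, then regress
motion_f = fft_bandpass(motion, tr, *band)
beta_good = np.linalg.lstsq(motion_f, data_f, rcond=None)[0]
resid_good = data_f - motion_f @ beta_good
```

Measuring the out-of-band power of the two residuals shows the mismatch directly: the bandpass-then-regress residual contains substantial power at frequencies the filter had already suppressed, while the matched-frequency residual stays band-limited.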
Nenov, Valeriy; Bergsneider, Marvin; Glenn, Thomas C.; Vespa, Paul; Martin, Neil
2007-01-01
Impeded by the rigid skull, assessment of physiological variables of the intracranial system is difficult. A hidden state estimation approach is used in the present work to facilitate the estimation of unobserved variables from available clinical measurements including intracranial pressure (ICP) and cerebral blood flow velocity (CBFV). The estimation algorithm is based on a modified nonlinear intracranial mathematical model, whose parameters are first identified in an offline stage using a nonlinear optimization paradigm. Following the offline stage, an online filtering process is performed using a nonlinear Kalman filter (KF)-like state estimator that is equipped with a new way of deriving the Kalman gain satisfying the physiological constraints on the state variables. The proposed method is then validated by comparing different state estimation methods and input/output (I/O) configurations using simulated data. It is also applied to a set of CBFV, ICP and arterial blood pressure (ABP) signal segments from brain injury patients. The results indicated that the proposed constrained nonlinear KF achieved the best performance among the evaluated state estimators and that the state estimator combined with the I/O configuration that has ICP as the measured output can potentially be used to estimate CBFV continuously. Finally, the state estimator combined with the I/O configuration that has both ICP and CBFV as outputs can potentially estimate the lumped cerebral arterial radii, which are not measurable in a typical clinical environment. PMID:17281533
Stochastic analysis of multiphase flow in porous media: II. Numerical simulations
NASA Astrophysics Data System (ADS)
Abin, A.; Kalurachchi, J. J.; Kemblowski, M. W.; Chang, C.-M.
1996-08-01
The first paper (Chang et al., 1995b) of this two-part series described the stochastic analysis using spectral/perturbation approach to analyze steady state two-phase (water and oil) flow in a, liquid-unsaturated, three fluid-phase porous medium. In this paper, the results between the numerical simulations and closed-form expressions obtained using the perturbation approach are compared. We present the solution to the one-dimensional, steady-state oil and water flow equations. The stochastic input processes are the spatially correlated logk where k is the intrinsic permeability and the soil retention parameter, α. These solutions are subsequently used in the numerical simulations to estimate the statistical properties of the key output processes. The comparison between the results of the perturbation analysis and numerical simulations showed a good agreement between the two methods over a wide range of logk variability with three different combinations of input stochastic processes of logk and soil parameter α. The results clearly demonstrated the importance of considering the spatial variability of key subsurface properties under a variety of physical scenarios. The variability of both capillary pressure and saturation is affected by the type of input stochastic process used to represent the spatial variability. The results also demonstrated the applicability of perturbation theory in predicting the system variability and defining effective fluid properties through the ergodic assumption.
Smith, David V.; Utevsky, Amanda V.; Bland, Amy R.; Clement, Nathan; Clithero, John A.; Harsch, Anne E. W.; Carter, R. McKell; Huettel, Scott A.
2014-01-01
A central challenge for neuroscience lies in relating inter-individual variability to the functional properties of specific brain regions. Yet, considerable variability exists in the connectivity patterns between different brain areas, potentially producing reliable group differences. Using sex differences as a motivating example, we examined two separate resting-state datasets comprising a total of 188 human participants. Both datasets were decomposed into resting-state networks (RSNs) using a probabilistic spatial independent components analysis (ICA). We estimated voxelwise functional connectivity with these networks using a dual-regression analysis, which characterizes the participant-level spatiotemporal dynamics of each network while controlling for (via multiple regression) the influence of other networks and sources of variability. We found that males and females exhibit distinct patterns of connectivity with multiple RSNs, including both visual and auditory networks and the right frontal-parietal network. These results replicated across both datasets and were not explained by differences in head motion, data quality, brain volume, cortisol levels, or testosterone levels. Importantly, we also demonstrate that dual-regression functional connectivity is better at detecting inter-individual variability than traditional seed-based functional connectivity approaches. Our findings characterize robust—yet frequently ignored—neural differences between males and females, pointing to the necessity of controlling for sex in neuroscience studies of individual differences. Moreover, our results highlight the importance of employing network-based models to study variability in functional connectivity. PMID:24662574
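Dual regression itself is just two least-squares stages. The sketch below, on synthetic data standing in for one subject's fMRI time series, regresses the data on group spatial maps to obtain subject timecourses, then on those timecourses to obtain subject spatial maps; entering all maps together in one multiple regression is what controls for the other networks. Dimensions and data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)
n_t, n_vox, n_comp = 120, 500, 4

# Hypothetical group-level spatial maps (e.g. from a group ICA)
group_maps = rng.standard_normal((n_vox, n_comp))

# Synthetic subject data built from those maps plus noise
true_tc = rng.standard_normal((n_t, n_comp))
data = true_tc @ group_maps.T + 0.5 * rng.standard_normal((n_t, n_vox))

# Stage 1: spatial regression -> subject-specific network timecourses.
# All maps enter one multiple regression, controlling for the others.
tc = np.linalg.lstsq(group_maps, data.T, rcond=None)[0].T   # (n_t, n_comp)

# Stage 2: temporal regression -> subject-specific spatial maps
subj_maps = np.linalg.lstsq(tc, data, rcond=None)[0]        # (n_comp, n_vox)
```

Group differences (such as the sex differences reported above) are then tested voxelwise on the stage-2 maps across subjects.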
Yielding physically-interpretable emulators - A Sparse PCA approach
NASA Astrophysics Data System (ADS)
Galelli, S.; Alsahaf, A.; Giuliani, M.; Castelletti, A.
2015-12-01
Projection-based techniques, such as Proper Orthogonal Decomposition (POD), are a common approach to surrogating high-fidelity process-based models with lower-order dynamic emulators. With POD, dimensionality reduction is achieved by using observations, or 'snapshots', generated with the high-fidelity model to project the entire set of input and state variables of that model onto a smaller set of basis functions that account for most of the variability in the data. While the reduction efficiency and variance control of POD techniques are usually very high, the resulting emulators are structurally complex and can hardly be given a physically meaningful interpretation, since each basis is a projection of the entire set of inputs and states. In this work, we propose a novel approach based on Sparse Principal Component Analysis (SPCA) that combines the assets of POD methods with the potential for ex-post interpretation of the emulator structure. SPCA reduces the number of non-zero coefficients in the basis functions by identifying a sparse matrix of coefficients. While the resulting set of basis functions may retain less of the variance of the snapshots, the presence of only a few non-zero coefficients assists in the interpretation of the underlying physical processes. The SPCA approach is tested on the reduction of a 1D hydro-ecological model (DYRESM-CAEDYM) used to describe the main ecological and hydrodynamic processes in Tono Dam, Japan. An experimental comparison against a standard POD approach shows that SPCA achieves the same accuracy in emulating a given output variable, for the same level of dimensionality reduction, while yielding better insights into the main process dynamics.
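The contrast between dense POD/PCA bases and sparse, interpretable SPCA bases can be sketched with scikit-learn. This is a toy snapshot matrix with a known two-block structure, not DYRESM-CAEDYM output; the dimensions and penalty are illustrative assumptions:

```python
import numpy as np
from sklearn.decomposition import PCA, SparsePCA

rng = np.random.default_rng(0)
# hypothetical snapshot matrix: 200 snapshots of 10 state variables driven
# by two latent processes, so a low-order basis exists by construction
latent = rng.normal(size=(200, 2))
mixing = np.zeros((2, 10))
mixing[0, :3] = 1.0    # first process loads only on variables 0-2
mixing[1, 5:8] = 1.0   # second process loads only on variables 5-7
X = latent @ mixing + 0.05 * rng.normal(size=(200, 10))

pca = PCA(n_components=2).fit(X)
spca = SparsePCA(n_components=2, alpha=1.0, random_state=0).fit(X)

# an ordinary PCA/POD basis is dense: every variable gets a nonzero weight,
# so the basis mixes all states; SPCA's L1 penalty zeroes most coefficients,
# letting each basis be read off as a small group of physical variables
print("PCA exact zeros: ", int(np.sum(np.abs(pca.components_) < 1e-12)))
print("SPCA exact zeros:", int(np.sum(np.abs(spca.components_) < 1e-12)))
```

The trade-off stated in the abstract appears directly: the sparse basis may capture slightly less snapshot variance, but each component now names a small subset of states, which is what makes ex-post physical interpretation possible.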
A novel approach to piecewise analytic agricultural machinery path reconstruction
NASA Astrophysics Data System (ADS)
Wörz, Sascha; Mederle, Michael; Heizinger, Valentin; Bernhardt, Heinz
2017-12-01
Before machinery operation in fields can be analysed, one must cope with the problem that the GPS signals of receivers mounted on the machines contain measurement noise and are time-discrete, and that the underlying physical system describing the positions, axial and absolute velocities, angular rates and angular orientation of the operating machines over the whole working time is unknown. This research work presents a new three-dimensional mathematical approach that uses kinematic relations based on control variables, namely Euler angular velocities and angles, and a discrete target control problem in which the state control function is given by the sum of squared residuals involving the state and control variables. The resulting physical system yields a noise-free and piecewise analytic representation of the positions, velocities, angular rates and angular orientation. It can be used for further detailed study and analysis of the problem of why agricultural vehicles operate in practice as they do.
A Three-Step Approach To Model Tree Mortality in the State of Georgia
Qingmin Meng; Chris J. Cieszewski; Roger C. Lowe; Michal Zasada
2005-01-01
Tree mortality is one of the most complex phenomena of forest growth and yield. Many types of factors affect tree mortality, which is considered difficult to predict. This study presents a new systematic approach to simulate tree mortality based on the integration of statistical models and geographical information systems. This method begins with variable preselection...
Conveying the Science of Climate Change: Explaining Natural Variability
NASA Astrophysics Data System (ADS)
Chanton, J.
2011-12-01
One of the main problems in climate change education is reconciling the role of humans with natural variability. The climate is always changing, so how can humans have a role in causing change? How do we reconcile and differentiate the anthropogenic effect from natural variability? This talk will offer several approaches that have been successful for the author. First, the context of climate change during the Pleistocene must be addressed. Second, the role of the industrial revolution in significantly altering Pleistocene cycles must be discussed, together with an introduction of the concept of the Anthropocene. Finally, the positive feedbacks between climatic nudging due to increased insolation and greenhouse gas forcing can be likened to a rock rolling down a hill, with no single leading cause. This approach has proven successful in presentations to audiences ranging from undergraduates to state agencies.
Divergence-free approach for obtaining decompositions of quantum-optical processes
NASA Astrophysics Data System (ADS)
Sabapathy, K. K.; Ivan, J. S.; García-Patrón, R.; Simon, R.
2018-02-01
Operator-sum representations of quantum channels can be obtained by applying the channel to one subsystem of a maximally entangled state and deploying the channel-state isomorphism. However, for continuous-variable systems, such schemes contain natural divergences since the maximally entangled state is ill defined. We introduce a method that avoids such divergences by utilizing finitely entangled (squeezed) states and then taking the limit of arbitrarily large squeezing. Using this method, we derive an operator-sum representation for all single-mode bosonic Gaussian channels, where a unique feature is that both quantum-limited and noisy channels are treated on an equal footing. This technique facilitates a proof that the rank-1 Kraus decomposition for Gaussian channels at their respective entanglement-breaking thresholds, obtained in the overcomplete coherent-state basis, is unique. The methods could have applications to the simulation of continuous-variable channels.
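The finitely entangled surrogate used in place of the ill-defined maximally entangled state can be written down explicitly. In standard notation (a textbook form of the two-mode squeezed vacuum, not quoted from the abstract):

```latex
|\psi_r\rangle \;=\; \sqrt{1-\lambda^{2}} \,\sum_{n=0}^{\infty} \lambda^{n}\, |n\rangle|n\rangle,
\qquad \lambda = \tanh r ,
```

which tends, up to normalization, to the unnormalizable maximally entangled state \(\sum_n |n\rangle|n\rangle\) in the limit of arbitrarily large squeezing \(r \to \infty\); Kraus operators derived at finite r are then taken to this limit, avoiding the divergence at every intermediate step.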
Probabilistic models for reactive behaviour in heterogeneous condensed phase media
NASA Astrophysics Data System (ADS)
Baer, M. R.; Gartling, D. K.; DesJardin, P. E.
2012-02-01
This work presents statistically-based models to describe reactive behaviour in heterogeneous energetic materials. Mesoscale effects are incorporated in continuum-level reactive flow descriptions using probability density functions (pdfs) that are associated with thermodynamic and mechanical states. A generalised approach is presented that includes multimaterial behaviour by treating the volume fraction as a random kinematic variable. Model simplifications are then sought to reduce the complexity of the description without compromising the statistical approach. Reactive behaviour is first considered for non-deformable media having a random temperature field as an initial state. A pdf transport relationship is derived and an approximate moment approach is incorporated in finite element analysis to model an example application whereby a heated fragment impacts a reactive heterogeneous material which leads to a delayed cook-off event. Modelling is then extended to include deformation effects associated with shock loading of a heterogeneous medium whereby random variables of strain, strain-rate and temperature are considered. A demonstrative mesoscale simulation of a non-ideal explosive is discussed that illustrates the joint statistical nature of the strain and temperature fields during shock loading to motivate the probabilistic approach. This modelling is derived in a Lagrangian framework that can be incorporated in continuum-level shock physics analysis. Future work will consider particle-based methods for a numerical implementation of this modelling approach.
European Scientific Notes. Volume 37, Number 1.
1983-01-31
The instantaneous sea-state condition can be computed from a special coded data base. Simulators vary widely in their realism, with some producing dynamic color pictures. Trade-offs must be made between the variables of accuracy, practicality, realism, and expense, for questions such as the design of approach channels and the alignment of jetties. Tidal current variables have been played into some of the simulator runs; the system certainly seems to be valid, with smooth dynamics and realism.
Prey-mediated behavioral responses of feeding blue whales in controlled sound exposure experiments.
Friedlaender, A S; Hazen, E L; Goldbogen, J A; Stimpert, A K; Calambokidis, J; Southall, B L
2016-06-01
Behavioral response studies provide significant insights into the nature, magnitude, and consequences of changes in animal behavior in response to some external stimulus. Controlled exposure experiments (CEEs) to study behavioral response have faced challenges in quantifying the importance of and interaction among individual variability, exposure conditions, and environmental covariates. To investigate these complex parameters relative to blue whale behavior and how it may change as a function of certain sounds, we deployed multi-sensor acoustic tags and conducted CEEs using simulated mid-frequency active sonar (MFAS) and pseudo-random noise (PRN) stimuli, while collecting synoptic, quantitative prey measures. In contrast to previous approaches that lacked such prey data, our integrated approach explained substantially more variance in blue whale dive behavioral responses to mid-frequency sounds (r² = 0.725 vs. 0.14 previously). Results demonstrate that deep-feeding whales respond more clearly and strongly to CEEs than those in other behavioral states, but this was only evident with the increased explanatory power provided by incorporating prey density and distribution as contextual covariates. Including contextual variables increases the ability to characterize behavioral variability and empirically strengthens previous findings that deep-feeding blue whales respond significantly to mid-frequency sound exposure. However, our results are only based on a single behavioral state with a limited sample size, and this analytical framework should be applied broadly across behavioral states. The increased capability to describe and account for individual response variability by including environmental variables, such as prey, that drive foraging behavior underscores the importance of integrating these and other relevant contextual parameters in experimental designs. Our results suggest the need to measure and account for the ecological dynamics of predator-prey interactions when studying the effects of anthropogenic disturbance in feeding animals.
Comparative study of flare control laws. [optimal control of b-737 aircraft approach and landing
NASA Technical Reports Server (NTRS)
Nadkarni, A. A.; Breedlove, W. J., Jr.
1979-01-01
A digital 3-D automatic control law was developed to achieve an optimal transition of a B-737 aircraft between various initial glide slope conditions and the desired final touchdown condition. A discrete, time-invariant, optimal, closed-loop control law, presented for a linear regulator problem, was extended to include a system acted upon by a constant disturbance. Two forms of control laws were derived to solve this problem. One method utilized the feedback of appropriately defined integral states, augmented with the original system equations. The second method formulated the problem as a control variable constraint, and the control variables were augmented with the original system. The control-variable-constraint control law yielded better performance than the integral-state feedback control law.
NASA Astrophysics Data System (ADS)
Sivaganesh, G.; Daniel Sweetlin, M.; Arulgnanam, A.
2016-07-01
In this paper, we present a numerical investigation of the robust synchronization phenomenon observed in a unidirectionally coupled, quasiperiodically forced simple nonlinear electronic circuit system exhibiting strange non-chaotic attractors (SNAs) in its dynamics. The SNA obtained in the simple quasiperiodic system is characterized for its SNA behavior. We then study the nature of the synchronized state in unidirectionally coupled SNAs using the master-slave approach. The stability of the synchronized state is studied through the master stability functions (MSFs) obtained for coupling different state variables of the drive and response systems. The property of robust synchronization is analyzed for one type of coupling of the state variables through phase portraits, conditional Lyapunov exponents and the Kaplan-Yorke dimension. The phenomenon of complete synchronization of SNAs via a unidirectional coupling scheme is reported for the first time.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Akarsu, Özgür; Bouhmadi-López, Mariam; Brilenkov, Maxim
We study the late-time evolution of the Universe where dark energy (DE) is presented by a barotropic fluid on top of cold dark matter (CDM). We also take into account the radiation content of the Universe. Here, by the late stage of the evolution we refer to the epoch where CDM is already clustered into inhomogeneously distributed discrete structures (galaxies, groups and clusters of galaxies). Under this condition the mechanical approach is an adequate tool to study the Universe deep inside the cell of uniformity. More precisely, we study scalar perturbations of the FLRW metric due to inhomogeneities of CDM as well as fluctuations of radiation and DE. For an arbitrary equation of state for DE we obtain a system of equations for the scalar perturbations within the mechanical approach. First, in the case of a constant DE equation of state parameter w, we demonstrate that our method singles out the cosmological constant as the only viable dark energy candidate. Then, we apply our approach to variable equation of state parameters in the form of three different linear parametrizations of w, e.g., the Chevallier-Polarski-Linder perfect fluid model. We conclude that all these models are incompatible with the theory of scalar perturbations in the late Universe.
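Of the linear parametrizations of w referred to above, the Chevallier-Polarski-Linder (CPL) form is standard; written in terms of the scale factor a (normalized so that a = 1 today):

```latex
w(a) \;=\; w_0 \;+\; w_a\,(1 - a),
```

so that w = w_0 at the present epoch and w tends to w_0 + w_a in the early Universe (a → 0). The constant-w case is recovered for w_a = 0.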
Uncertainty Quantification in Scale-Dependent Models of Flow in Porous Media: SCALE-DEPENDENT UQ
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tartakovsky, A. M.; Panzeri, M.; Tartakovsky, G. D.
Equations governing flow and transport in heterogeneous porous media are scale-dependent. We demonstrate that it is possible to identify a support scale η*, such that the typically employed approximate formulations of Moment Equations (ME) yield accurate (statistical) moments of a target environmental state variable. Under these circumstances, the ME approach can be used as an alternative to the Monte Carlo (MC) method for Uncertainty Quantification in diverse fields of Earth and environmental sciences. MEs are directly satisfied by the leading moments of the quantities of interest and are defined on the same support scale as the governing stochastic partial differential equations (PDEs). Computable approximations of the otherwise exact MEs can be obtained through perturbation expansion of moments of the state variables in orders of the standard deviation of the random model parameters. As such, their convergence is guaranteed only for standard deviations smaller than one. We demonstrate our approach in the context of steady-state groundwater flow in a porous medium with a spatially random hydraulic conductivity.
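The convergence caveat above (standard deviation smaller than one) can be illustrated with a generic toy moment, not the groundwater moment equations themselves: for a lognormal parameter K = exp(Y) with Y ~ N(0, σ²), the exact mean is exp(σ²/2), while a second-order perturbation expansion in σ gives 1 + σ²/2.

```python
import numpy as np

# compare the exact moment with its second-order perturbation expansion
# for increasing parameter variability; the expansion degrades past sigma ~ 1
for sigma in (0.3, 0.7, 1.5):
    exact = np.exp(sigma**2 / 2)          # exact mean of exp(Y), Y ~ N(0, sigma^2)
    second_order = 1 + sigma**2 / 2       # truncated perturbation series
    rel_err = abs(exact - second_order) / exact
    print(f"sigma={sigma}: exact={exact:.3f} "
          f"2nd-order={second_order:.3f} rel_err={rel_err:.3f}")
```

For σ = 0.3 the relative error is a fraction of a percent, while for σ = 1.5 it exceeds 30%, mirroring why the ME approximations are reliable only for moderate variability of the random model parameters.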
NASA Technical Reports Server (NTRS)
Lee, Jong-Won; Allen, D. H.; Harris, C. E.
1989-01-01
A mathematical model utilizing the internal state variable concept is proposed for predicting the upper bound of the reduced axial stiffnesses in cross-ply laminates with matrix cracks. The axial crack opening displacement is explicitly expressed in terms of the observable axial strain and the undamaged material properties. A crack parameter representing the effect of matrix cracks on the observable axial Young's modulus is calculated for glass/epoxy and graphite/epoxy material systems. The results show that the matrix crack opening displacement and the effective Young's modulus depend not on the crack length, but on its ratio to the crack spacing.
Louis R. Iverson; Anantha M. Prasad; Stephen N. Matthews; Matthew P. Peters
2011-01-01
We present an approach to modeling potential climate-driven changes in habitat for tree and bird species in the eastern United States. First, we took an empirical-statistical modeling approach, using randomForest, with species abundance data from national inventories combined with soil, climate, and landscape variables, to build abundance-based habitat models for 134...
Simulating ensembles of source water quality using a K-nearest neighbor resampling approach.
Towler, Erin; Rajagopalan, Balaji; Seidel, Chad; Summers, R Scott
2009-03-01
Climatological, geological, and water management factors can cause significant variability in surface water quality. As drinking water quality standards become more stringent, the ability to quantify the variability of source water quality becomes more important for decision-making and planning in water treatment for regulatory compliance. However, the paucity of long-term water quality data makes it challenging to apply traditional simulation techniques. To overcome this limitation, we have developed and applied a robust nonparametric K-nearest neighbor (K-nn) bootstrap approach utilizing the United States Environmental Protection Agency's Information Collection Rule (ICR) data. In this technique, an appropriate "feature vector" is first formed from the best available explanatory variables. The nearest neighbors to the feature vector are identified from the ICR data and are resampled using a weight function. Repeating this process yields water quality ensembles, and consequently the distribution and the quantification of the variability. The main strengths of the approach are its flexibility, simplicity, and the ability to use a large amount of spatial data with limited temporal extent to provide water quality ensembles for any given location. We demonstrate this approach by applying it to simulate monthly ensembles of total organic carbon for two utilities in the U.S. with very different watersheds, and of alkalinity and bromide at two other U.S. utilities.
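The resampling step can be sketched as follows. This is a minimal illustration with synthetic data, assuming the rank-based 1/j neighbor weighting common in hydrologic K-nn bootstraps; the function name, feature variables and data are all illustrative, not drawn from the ICR dataset:

```python
import numpy as np

def knn_resample(feature, candidates, values, k=5, n_draws=200, seed=0):
    """Weighted K-nearest-neighbor bootstrap: draw an ensemble of historical
    values whose feature vectors are close to the target feature vector."""
    rng = np.random.default_rng(seed)
    # distance of every candidate record's feature vector to the target
    d = np.linalg.norm(candidates - feature, axis=1)
    nearest = np.argsort(d)[:k]             # indices of the k nearest neighbors
    w = 1.0 / np.arange(1, k + 1)           # closer neighbors get larger weight
    w /= w.sum()
    picks = rng.choice(nearest, size=n_draws, p=w)
    return values[picks]                    # ensemble of resampled values

# toy demonstration: 50 historical records with a 2-component feature vector
rng = np.random.default_rng(1)
feats = rng.uniform(0, 1, size=(50, 2))
toc = 2.0 + 3.0 * feats[:, 0] + rng.normal(0, 0.1, 50)  # e.g. total organic carbon
ens = knn_resample(np.array([0.5, 0.5]), feats, toc)
print(ens.mean(), ens.std())   # ensemble spread quantifies the variability
```

Because the ensemble is resampled from observed records rather than a fitted distribution, no parametric assumption about the water quality variable is needed, which is what makes the method usable with short records.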
Wang, Fugui; Mladenoff, David J; Forrester, Jodi A; Blanco, Juan A; Schelle, Robert M; Peckham, Scott D; Keough, Cindy; Lucash, Melissa S; Gower, Stith T
The effects of forest management on soil carbon (C) and nitrogen (N) dynamics vary by harvest type and species. We simulated long-term effects of bole-only harvesting of aspen (Populus tremuloides) on stand productivity and the interaction of C and N cycles with a multiple-model approach. Five models, Biome-BGC, CENTURY, FORECAST, LANDIS-II with Century-based soil dynamics, and PnET-CN, were run for 350 yr with seven harvesting events on nutrient-poor, sandy soils representing northwestern Wisconsin, United States. Twenty C and N state and flux variables were summarized from the models' outputs and statistically analyzed using ordination and variance analysis methods. The multiple-model averages suggest that bole-only harvest would not significantly affect long-term site productivity of aspen, though declines in soil organic matter and soil N were significant. Along with direct N removal by harvesting, extensive leaching after harvesting, before canopy closure, was another major cause of N depletion. The five models differed notably in their output values for the 20 variables examined, although there were some similarities for certain variables. PnET-CN produced unique results for every variable, and CENTURY showed fewer outliers and temporal patterns similar to the mean of all models. In general, we demonstrated that when there are no site-specific data for fine-scale calibration and evaluation of a single model, the multiple-model approach may be more robust for long-term simulations. In addition, multimodeling may also improve the calibration and evaluation of an individual model.
Cursive Writing: Are Its Last Days Approaching?
ERIC Educational Resources Information Center
Supon, Vi
2009-01-01
Indicators are that technological advances and state-mandated tests, in addition to other variables, are forcing cursive writing to become a casualty of the American educational landscape. It behooves us to examine the historical, practical, and essential aspects relative to cursive writing.
Adjustment of driver behavior to an urban multi-lane roundabout.
DOT National Transportation Integrated Search
2007-05-01
In the summer of 2006, the city of Springfield, Oregon installed the first urban multi-lane roundabout in the state. It was hypothesized that after installation, speed variability on approaches to the intersection would decrease from the values with ...
ERIC Educational Resources Information Center
Hidalgo, Abelardo Castro; Carrasco, Decler Martinez; Alegria, Jorge Alegria; Elevancini, Cecilia Maldonado
2000-01-01
States that since the 1990s, professional technical education has produced profound transformations in the relationship between education and work in Chile. Examines in a study how modalities of bringing students to the world of work have affected students' socio-psychological characteristics in comparison to training received from traditional…
LLRW disposal facility siting approaches: Connecticut's innovative volunteer approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Forcella, D.; Gingerich, R.E.; Holeman, G.R.
1994-12-31
The Connecticut Hazardous Waste Management Service (CHWMS) has embarked on a volunteer approach to siting a LLRW disposal facility in Connecticut. This effort comes after an unsuccessful effort to site a facility using a step-wise, criteria-based site screening process that was a classic example of the decide/announce/defend approach. While some of the specific features of the CHWMS's volunteer process reflect the unique challenge presented by the state's physical characteristics, political structure and recent unsuccessful siting experience, the basic elements of the process are applicable to siting LLRW disposal facilities in many parts of the United States. The CHWMS's volunteer process is structured to reduce the "outrage" dimension of two of the variables that affect the public's perception of risk. The two variables are the degree to which the risk is taken on voluntarily (voluntary risks are accepted more readily than those that are imposed) and the amount of control one has over the risk (risks under individual control are accepted more readily than those under government control). In the volunteer process, the CHWMS will only consider sites that have been voluntarily offered by the community in which they are located, and the CHWMS will share control over the development and operation of the facility with the community. In addition to these elements, which have broad applicability, the CHWMS has tailored the volunteer approach to take advantage of the unique opportunities made possible by the earlier statewide site screening process. Specifically, the approach presents a "win-win" situation for elected officials in many communities if they decide to participate in the process.
Honoré, Peggy A; Simoes, Eduardo J; Moonesinghe, Ramal; Wang, Xueyuan; Brown, Lovetta
2007-01-01
The objectives of this study were to examine associations of casino industry economic development with improvements in community health status and funding for public health services in two counties in the Mississippi Delta Region of the United States. An ecological approach was used to evaluate whether two counties with casino gaming had improved health status and public health funding in comparison with two non-casino counties in the same region with similar social, racial, and ethnic backgrounds. Variables readily available from state health department records were used to develop a logic model for guiding the analytical work. A linear regression model was built using a stepwise approach and hierarchical regression principles, with many dependent variables and a set of fixed and non-fixed independent variables. County-level data for 23 variables over an 11-year period were used. Overall, this study found a lack of association between the presence of a casino and desirable health outcomes or funding for public health services. Changes in the environment were made to promote health by utilizing gaming revenues to build state-of-the-art community health and wellness centers and sports facilities. However, significant increases in funding for local public health services were not found in either of the counties with casinos. These findings are relevant for policy makers when debating economic development strategies. Analyses similar to this should be combined with other routine public health assessments after implementation of development strategies to increase knowledge of health outcome trends and shifts in socioeconomic position that may be expected to accrue from economic development projects.
NASA Astrophysics Data System (ADS)
Coronel-Escamilla, A.; Gómez-Aguilar, J. F.; Torres, L.; Escobar-Jiménez, R. F.; Valtierra-Rodríguez, M.
2017-12-01
In this paper, we propose a state-observer-based approach to synchronize variable-order fractional (VOF) chaotic systems. In particular, this work is focused on complete synchronization with a so-called unidirectional master-slave topology. The master is described by a dynamical system in state-space representation, whereas the slave is described by a state observer. The slave is composed of a copy of the master and a correction term, which in turn is constituted of an estimation error and an appropriate gain that assures synchronization. The differential equations of the VOF chaotic system are described by the Liouville-Caputo and Atangana-Baleanu-Caputo derivatives. Numerical simulations involving the synchronization of Rössler oscillators, Chua's systems and multi-scrolls are studied. The simulations show that different chaotic behaviors can be obtained if different smooth functions defined in the interval (0, 1] are used as the variable order of the fractional derivatives. Furthermore, the simulations show that the VOF chaotic systems can be synchronized.
NASA Technical Reports Server (NTRS)
Scoggins, J. R.; Smith, O. E.
1973-01-01
A tabulation is given of rawinsonde data for NASA's first Atmospheric Variability Experiment (AVE 1), conducted during the period February 19-22, 1964. Methods of data handling and processing, and estimates of error magnitudes, are also given. Data taken on the AVE 1 project in 1964 enabled an analysis of a large sector of the eastern United States on a fine-resolution time scale. This experiment was run in February 1964, and data were collected as a wave developed in the East Gulf on a frontal system which extended through the eastern part of the United States. The primary objective of AVE 1 was to investigate the variability of parameters in space and over time intervals of three hours, and to integrate the results into NASA programs which require this type of information. The results presented are those from one approach, and represent only a portion of the total research effort that can be accomplished.
García-Soriano, Gemma; Rosell-Clari, Vicent; Serrano, Miguel Ángel
2016-05-23
Different variables have been associated with the development/maintenance of contamination-related obsessive-compulsive disorder (OCD), although the relevance of these factors has not been clearly established. The present study aimed to analyze the relevance and specificity of these variables. Forty-five women with high scores on obsessive-compulsive contamination symptoms (n = 16) or checking symptoms (n = 15), or non-clinical scores (n = 14), participated in a behavioral approach/avoidance task (BAT) with a contamination-OCD stimulus. Vulnerability variables and participants' emotional, cognitive, physiological and behavioral responses to the BAT were appraised. Results show that fear of illness was a relevant vulnerability variable specific to contamination participants (p = .001; ηp² = .291). Contamination participants responded with significantly higher subjective disgust (p = .001; ηp² = .269), anxiety (p = .001; ηp² = .297), urge to wash (p < .001; ηp² = .370), threat from emotion (p < .001; ηp² = .338) and contamination severity (p = .002; ηp² = .260) appraisals, and with lower behavioral approach (p = .008; ηp² = .208), than the other two groups. Moreover, contamination participants showed lower heart rate acceleration (p = .046; ηp² = .170) and higher contamination likelihood appraisals (p < .001; ηp² = .342) than the non-clinical group. Urge to wash was predicted by state disgust (R² change = .346) and threat from emotion (R² change = .088). These responses were predicted by general anxiety sensitivity (R² change = .161), disgust propensity (R² change = .255) and fear of illness (R² change = .116), but not by other vulnerability variables such as dysfunctional beliefs about thoughts (Responsibility and Overestimation of threat) or disgust sensitivity. State disgust, threat from disgust, anxiety sensitivity and fear of illness were found to be the most relevant variables in contamination symptoms.
Predicting length of children's psychiatric hospitalizations: an "ecologic" approach.
Mossman, D; Songer, D A; Baker, D G
1991-08-01
This article describes the development and validation of a simple and modestly successful model for predicting inpatient length of stay (LOS) at a state-funded facility providing acute to long-term care for children and adolescents in Ohio. Six variables (diagnostic group, legal status at time of admission, attending physician, age, sex, and county of residence) explained 30% of the variation in log10 LOS in the subgroup used to create the model, and 26% of the log10 LOS variation in the cross-validation subgroup. The model also identified LOS outliers with moderate accuracy (ROC area = 0.68-0.76). The authors attribute the model's success to the inclusion of variables that are correlated with idiosyncratic "ecologic" factors as well as variables related to severity of illness. Future attempts to construct LOS models may adopt similar approaches.
Construction of state-independent proofs for quantum contextuality
NASA Astrophysics Data System (ADS)
Tang, Weidong; Yu, Sixia
2017-12-01
Since the enlightening proofs of quantum contextuality first established by Kochen and Specker, and also by Bell, various simplified proofs have been constructed to exclude noncontextual hidden-variable theories of nature at the microscopic scale. The conflict between noncontextual hidden-variable theories and quantum mechanics is commonly revealed by Kochen-Specker sets of yes-no tests, represented by projectors (or rays), via either logical contradictions or noncontextuality inequalities, in a state-(in)dependent manner. Here we propose a systematic and programmable construction of a state-independent proof from a given set of nonspecific rays in C3 according to their Gram matrix. This approach brings greater convenience to experimental arrangements. Moreover, our proofs in C3 can also be generalized to any higher-dimensional system by a recursive method.
CNES reliability approach for the qualification of MEMS for space
NASA Astrophysics Data System (ADS)
Pressecq, Francis; Lafontan, Xavier; Perez, Guy; Fortea, Jean-Pierre
2001-10-01
This paper describes the reliability approach performed at CNES to evaluate MEMS for space applications. After an introduction and a detailed state of the art on space requirements and on the use of MEMS for space, different approaches for taking MEMS into account in the qualification phases are presented. CNES proposes improvements to these approaches in terms of failure mechanism identification. Our approach is based on a design and test phase deeply linked with a technology study. This workflow is illustrated with an example: the case of a variable capacitance fabricated with the MUMPS process.
Anonymous voting for multi-dimensional CV quantum system
NASA Astrophysics Data System (ADS)
Rong-Hua, Shi; Yi, Xiao; Jin-Jing, Shi; Ying, Guo; Moon-Ho, Lee
2016-06-01
We investigate the design of anonymous voting protocols, a CV-based binary-valued ballot and a CV-based multi-valued ballot, with continuous variables (CV) in a multi-dimensional quantum cryptosystem to ensure the security of the voting procedure and data privacy. Quantum entangled states are employed in the continuous-variable quantum system to carry the voting information and assist information transmission, which takes advantage of GHZ-like states in terms of improving the utilization of quantum states by decreasing the number of required quantum states. It provides a potential approach to achieving efficient quantum anonymous voting with high transmission security, especially in large-scale votes. Project supported by the National Natural Science Foundation of China (Grant Nos. 61272495, 61379153, and 61401519), the Research Fund for the Doctoral Program of Higher Education of China (Grant No. 20130162110012), and the MEST-NRF of Korea (Grant No. 2012-002521).
Distillation of mixed-state continuous-variable entanglement by photon subtraction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang Shengli; Loock, Peter van
2010-12-15
We present a detailed theoretical analysis for the distillation of one copy of a mixed two-mode continuous-variable entangled state using beam splitters and coherent photon-detection techniques, including conventional on-off detectors and photon-number-resolving detectors. The initial Gaussian mixed-entangled states are generated by transmitting a two-mode squeezed state through a lossy bosonic channel, corresponding to the primary source of errors in current approaches to optical quantum communication. We provide explicit formulas to calculate the entanglement in terms of logarithmic negativity before and after distillation, including losses in the channel and the photon detection, and show that one-copy distillation is still possible even for losses near the typical fiber channel attenuation length. A lower bound for the transmission coefficient of the photon-subtraction beam splitter is derived, representing the minimal value that still allows the entanglement to be enhanced.
Martin, J.; Runge, M.C.; Nichols, J.D.; Lubow, B.C.; Kendall, W.L.
2009-01-01
Thresholds and their relevance to conservation have become a major topic of discussion in the ecological literature. Unfortunately, in many cases the lack of a clear conceptual framework for thinking about thresholds may have led to confusion in attempts to apply the concept of thresholds to conservation decisions. Here, we advocate a framework for thinking about thresholds in terms of a structured decision making process. The purpose of this framework is to promote a logical and transparent process for making informed decisions for conservation. Specification of such a framework leads naturally to consideration of definitions and roles of different kinds of thresholds in the process. We distinguish among three categories of thresholds. Ecological thresholds are values of system state variables at which small changes bring about substantial changes in system dynamics. Utility thresholds are components of management objectives (determined by human values) and are values of state or performance variables at which small changes yield substantial changes in the value of the management outcome. Decision thresholds are values of system state variables at which small changes prompt changes in management actions in order to reach specified management objectives. The approach that we present focuses directly on the objectives of management, with the aim of providing decisions that are optimal with respect to those objectives. This approach clearly distinguishes the components of the decision process that are inherently subjective (management objectives, potential management actions) from those that are more objective (system models, estimates of system state). Optimization based on these components then leads to decision matrices specifying optimal actions to be taken at various values of system state variables. Values of state variables separating different actions in such matrices are viewed as decision thresholds.
Utility thresholds are included in the objectives component, and ecological thresholds may be embedded in models projecting consequences of management actions. Decision thresholds are determined by the above-listed components of a structured decision process. These components may themselves vary over time, inducing variation in the decision thresholds inherited from them. These dynamic decision thresholds can then be determined using adaptive management. We provide numerical examples (based on patch occupancy models) of structured decision processes that include all three kinds of thresholds. © 2009 by the Ecological Society of America.
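The idea that a decision threshold is derived from objectives and models, rather than chosen directly, can be made concrete with a toy sketch. The utility functions, the intervention cost, and the occupancy framing below are all made up for illustration:

```python
import numpy as np

# Hypothetical expected utilities of two candidate actions as a function of
# a system state variable (here, patch occupancy on [0, 1]).
occupancy = np.linspace(0.0, 1.0, 101)

def utility_do_nothing(x):
    return x                      # outcome simply tracks current occupancy

def utility_restore(x):
    cost = 0.35                   # assumed fixed cost of intervening
    return 0.5 + 0.5 * x - cost   # intervention lifts low-occupancy patches

# Optimal policy: pick the action with higher expected utility at each
# state value. The decision threshold is the state at which the optimal
# action switches.
best = np.where(utility_restore(occupancy) > utility_do_nothing(occupancy),
                "restore", "do nothing")
threshold = occupancy[np.argmax(best == "do nothing")]
print(round(threshold, 2))
```

With these made-up utilities the switch falls near an occupancy of 0.3; changing the objectives (the utility functions) moves the threshold, which is exactly the point of the structured-decision framing.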
Co-state initialization for the minimum-time low-thrust trajectory optimization
NASA Astrophysics Data System (ADS)
Taheri, Ehsan; Li, Nan I.; Kolmanovsky, Ilya
2017-05-01
This paper presents an approach for co-state initialization, which is a critical step in solving minimum-time low-thrust trajectory optimization problems using indirect optimal control numerical methods. Indirect methods used in determining optimal space trajectories typically result in two-point boundary-value problems and are solved by single- or multiple-shooting numerical methods. Accurate initialization of the co-state variables facilitates the numerical convergence of iterative boundary value problem solvers. In this paper, we propose a method which exploits the trajectory generated by the so-called pseudo-equinoctial and three-dimensional finite Fourier series shape-based methods to estimate the initial values of the co-states. The performance of the approach for two interplanetary rendezvous missions from Earth to Mars and from Earth to asteroid Dionysus is compared against three other approaches which, respectively, exploit random initialization of co-states, the adjoint-control transformation, and a standard genetic algorithm. The results indicate that with our proposed approach the percentage of converged cases is higher for trajectories with a higher number of revolutions, while the computation time is lower. These features are advantageous for broad trajectory search in the preliminary phase of mission designs.
Nonlinear intrinsic variables and state reconstruction in multiscale simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dsilva, Carmeline J., E-mail: cdsilva@princeton.edu; Talmon, Ronen, E-mail: ronen.talmon@yale.edu; Coifman, Ronald R., E-mail: coifman@math.yale.edu
2013-11-14
Finding informative low-dimensional descriptions of high-dimensional simulation data (like the ones arising in molecular dynamics or kinetic Monte Carlo simulations of physical and chemical processes) is crucial to understanding physical phenomena, and can also dramatically assist in accelerating the simulations themselves. In this paper, we discuss and illustrate the use of nonlinear intrinsic variables (NIV) in the mining of high-dimensional multiscale simulation data. In particular, we focus on the way NIV allows us to functionally merge different simulation ensembles, and different partial observations of these ensembles, as well as to infer variables not explicitly measured. The approach relies on certain simple features of the underlying process variability to filter out measurement noise and systematically recover a unique reference coordinate frame. We illustrate the approach through two distinct sets of atomistic simulations: a stochastic simulation of an enzyme reaction network exhibiting both fast and slow time scales, and a molecular dynamics simulation of alanine dipeptide in explicit water.
Nonlinear intrinsic variables and state reconstruction in multiscale simulations
NASA Astrophysics Data System (ADS)
Dsilva, Carmeline J.; Talmon, Ronen; Rabin, Neta; Coifman, Ronald R.; Kevrekidis, Ioannis G.
2013-11-01
Finding informative low-dimensional descriptions of high-dimensional simulation data (like the ones arising in molecular dynamics or kinetic Monte Carlo simulations of physical and chemical processes) is crucial to understanding physical phenomena, and can also dramatically assist in accelerating the simulations themselves. In this paper, we discuss and illustrate the use of nonlinear intrinsic variables (NIV) in the mining of high-dimensional multiscale simulation data. In particular, we focus on the way NIV allows us to functionally merge different simulation ensembles, and different partial observations of these ensembles, as well as to infer variables not explicitly measured. The approach relies on certain simple features of the underlying process variability to filter out measurement noise and systematically recover a unique reference coordinate frame. We illustrate the approach through two distinct sets of atomistic simulations: a stochastic simulation of an enzyme reaction network exhibiting both fast and slow time scales, and a molecular dynamics simulation of alanine dipeptide in explicit water.
Observer-Based Adaptive Fuzzy Tracking Control for Switched Nonlinear Systems With Dead-Zone.
Tong, Shaocheng; Sui, Shuai; Li, Yongming
2015-12-01
In this paper, the problem of adaptive fuzzy output-feedback control is investigated for a class of uncertain switched nonlinear systems in strict-feedback form. The considered switched systems contain unknown nonlinearities, dead-zone, and immeasurable states. Fuzzy logic systems are utilized to approximate the unknown nonlinear functions, and a switched fuzzy state observer is designed to estimate the immeasurable states. By applying the adaptive backstepping design principle and the average dwell time method, an adaptive fuzzy output-feedback tracking control approach is developed. It is proved that the proposed control approach can guarantee that all the variables in the closed-loop system are bounded under a class of switching signals with average dwell time, and also that the system output can track a given reference signal as closely as possible. Simulation results are given to verify the effectiveness of the proposed approach.
The promise of the state space approach to time series analysis for nursing research.
Levy, Janet A; Elser, Heather E; Knobel, Robin B
2012-01-01
Nursing research, particularly related to physiological development, often depends on the collection of time series data. The state space approach to time series analysis has great potential to answer exploratory questions relevant to physiological development but has not been used extensively in nursing. The aim of the study was to introduce the state space approach to time series analysis and demonstrate potential applicability to neonatal monitoring and physiology. We present a set of univariate state space models; each one describing a process that generates a variable of interest over time. Each model is presented algebraically and a realization of the process is presented graphically from simulated data. This is followed by a discussion of how the model has been or may be used in two nursing projects on neonatal physiological development. The defining feature of the state space approach is the decomposition of the series into components that are functions of time; specifically, slowly varying level, faster varying periodic, and irregular components. State space models potentially simulate developmental processes where a phenomenon emerges and disappears before stabilizing, where the periodic component may become more regular with time, or where the developmental trajectory of a phenomenon is irregular. The ultimate contribution of this approach to nursing science will require close collaboration and cross-disciplinary education between nurses and statisticians.
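A minimal simulated realization of the decomposition described above can illustrate the idea: a slowly varying level (modeled here as a random walk), a faster periodic component whose phase drifts so the rhythm is not perfectly regular, and an irregular noise term. All magnitudes and the 24-sample period are assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200

# Slowly varying level: a random walk (local-level component)
level = np.cumsum(0.05 * rng.standard_normal(n))

# Periodic component: a cycle whose phase drifts slightly, mimicking a
# physiological rhythm that is not perfectly regular (nominal period: 24 samples)
phase = np.cumsum(2 * np.pi / 24 + 0.02 * rng.standard_normal(n))
periodic = 0.5 * np.sin(phase)

# Irregular component: white measurement noise
irregular = 0.1 * rng.standard_normal(n)

# Observed series = level + periodic + irregular, as in the state space
# decomposition described above
y = level + periodic + irregular
print(y.shape)
```

Fitting such a model to real monitoring data (rather than simulating it) would estimate the three components from the observed series, e.g. via Kalman filtering.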
Leptospirosis in Rio Grande do Sul, Brazil: An Ecosystem Approach in the Animal-Human Interface
Schneider, Maria Cristina; Najera, Patricia; Pereira, Martha M.; Machado, Gustavo; dos Anjos, Celso B.; Rodrigues, Rogério O.; Cavagni, Gabriela M.; Muñoz-Zanzi, Claudia; Corbellini, Luis G.; Leone, Mariana; Buss, Daniel F.; Aldighieri, Sylvain; Espinal, Marcos A.
2015-01-01
Background: Leptospirosis is an epidemic-prone neglected disease that affects humans and animals, mostly in vulnerable populations. The One Health approach is a recommended strategy to identify drivers of the disease and plan for its prevention and control. In that context, the aim of this study was to analyze the distribution of human cases of leptospirosis in the State of Rio Grande do Sul, Brazil, and to explore possible drivers. Additionally, it sought to provide further evidence to support interventions and to identify hypotheses for new research at the human-animal-ecosystem interface. Methodology and findings: The risk for human infection was described in relation to environmental, socioeconomic, and livestock variables. This ecological study used aggregated data by municipality (all 496). Data were extracted from secondary, publicly available sources. Thematic maps were constructed and univariate analysis performed for all variables. Negative binomial regression was used for multivariable statistical analysis of leptospirosis cases. An annual average of 428 human cases of leptospirosis was reported in the state from 2008 to 2012. The cumulative incidence in rural populations was eight times higher than in urban populations. Variables significantly associated with leptospirosis cases in the final model were: Parana/Paraiba ecoregion (RR: 2.25; CI95%: 2.03–2.49); Neossolo Litolítico soil (RR: 1.93; CI95%: 1.26–2.96); and, to a lesser extent, the production of tobacco (RR: 1.10; CI95%: 1.09–1.11) and rice (RR: 1.003; CI95%: 1.002–1.04). Conclusion: Urban cases were concentrated in the capital and rural cases in a specific ecoregion. The major drivers identified in this study were related to environmental and production processes that are permanent features of the state.
This study contributes to the basic knowledge on leptospirosis distribution and drivers in the state and encourages a comprehensive approach to address the disease in the animal-human-ecosystem interface. PMID:26562157
Leptospirosis in Rio Grande do Sul, Brazil: An Ecosystem Approach in the Animal-Human Interface.
Schneider, Maria Cristina; Najera, Patricia; Pereira, Martha M; Machado, Gustavo; dos Anjos, Celso B; Rodrigues, Rogério O; Cavagni, Gabriela M; Muñoz-Zanzi, Claudia; Corbellini, Luis G; Leone, Mariana; Buss, Daniel F; Aldighieri, Sylvain; Espinal, Marcos A
2015-11-01
Leptospirosis is an epidemic-prone neglected disease that affects humans and animals, mostly in vulnerable populations. The One Health approach is a recommended strategy to identify drivers of the disease and plan for its prevention and control. In that context, the aim of this study was to analyze the distribution of human cases of leptospirosis in the State of Rio Grande do Sul, Brazil, and to explore possible drivers. Additionally, it sought to provide further evidence to support interventions and to identify hypotheses for new research at the human-animal-ecosystem interface. The risk for human infection was described in relation to environmental, socioeconomic, and livestock variables. This ecological study used aggregated data by municipality (all 496). Data were extracted from secondary, publicly available sources. Thematic maps were constructed and univariate analysis performed for all variables. Negative binomial regression was used for multivariable statistical analysis of leptospirosis cases. An annual average of 428 human cases of leptospirosis was reported in the state from 2008 to 2012. The cumulative incidence in rural populations was eight times higher than in urban populations. Variables significantly associated with leptospirosis cases in the final model were: Parana/Paraiba ecoregion (RR: 2.25; CI95%: 2.03-2.49); Neossolo Litolítico soil (RR: 1.93; CI95%: 1.26-2.96); and, to a lesser extent, the production of tobacco (RR: 1.10; CI95%: 1.09-1.11) and rice (RR: 1.003; CI95%: 1.002-1.04). Urban cases were concentrated in the capital and rural cases in a specific ecoregion. The major drivers identified in this study were related to environmental and production processes that are permanent features of the state. This study contributes to the basic knowledge on leptospirosis distribution and drivers in the state and encourages a comprehensive approach to address the disease in the animal-human-ecosystem interface.
Kleis, Sebastian; Rueckmann, Max; Schaeffer, Christian G
2017-04-15
In this Letter, we propose a novel implementation of continuous-variable quantum key distribution that operates with a real local oscillator placed at the receiver site. In addition, pulsing of the continuous wave laser sources is not required, leading to an extraordinarily practical and secure setup. It is suitable for arbitrary schemes based on modulated coherent states and heterodyne detection. The shown results include transmission experiments, as well as an excess noise analysis applying a discrete 8-state phase modulation. Achievable key rates under collective attacks are estimated. The results demonstrate the high potential of the approach to achieve high secret key rates at relatively low effort and cost.
NASA Technical Reports Server (NTRS)
Stouffer, D. C.; Sheh, M. Y.
1988-01-01
A micromechanical model based on crystallographic slip theory was formulated for nickel-base single crystal superalloys. The current equations include both drag stress and back stress state variables to model the local inelastic flow. Specially designed experiments have been conducted to evaluate the effect of back stress in single crystals. The results showed that (1) the back stress is orientation-dependent; and (2) the back stress state variable in the inelastic flow equation is necessary for predicting the anelastic behavior of the material. The model also demonstrated improved fatigue predictive capability. Model predictions and experimental data are presented for the single crystal superalloy René N4 at 982 °C.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, William H., E-mail: millerwh@berkeley.edu; Cotton, Stephen J., E-mail: StephenJCotton47@gmail.com
It is pointed out that the classical phase space distribution in action-angle (a-a) variables obtained from a Wigner function depends on how the calculation is carried out: if one computes the standard Wigner function in Cartesian variables (p, x), and then replaces p and x by their expressions in terms of a-a variables, one obtains a different result than if the Wigner function is computed directly in terms of the a-a variables. Furthermore, the latter procedure gives a result more consistent with classical and semiclassical theory, e.g., by incorporating the Bohr-Sommerfeld quantization condition (quantum states defined by integer values of the action variable) as well as the Heisenberg correspondence principle for matrix elements of an operator between such states, and has also been shown to be more accurate when applied to electronically non-adiabatic applications as implemented within the recently developed symmetrical quasi-classical (SQC) Meyer-Miller (MM) approach. Moreover, use of the Wigner function (obtained directly) in a-a variables shows how our standard SQC/MM approach can be used to obtain off-diagonal elements of the electronic density matrix by processing in a different way the same set of trajectories already used (in the SQC/MM methodology) to obtain the diagonal elements.
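For reference, the standard Cartesian Wigner function that the comparison starts from is the textbook expression (not specific to this paper):

```latex
W(x,p) \;=\; \frac{1}{2\pi\hbar}\int_{-\infty}^{\infty}\mathrm{d}y\;
  e^{-ipy/\hbar}\,
  \psi\!\left(x+\tfrac{y}{2}\right)\psi^{*}\!\left(x-\tfrac{y}{2}\right)
```

The abstract's point is that substituting the classical relations x(I, θ) and p(I, θ) into this W(x, p) generally yields a different distribution than performing the Wigner transform directly with respect to the action-angle pair (I, θ).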
Change rates and prevalence of a dichotomous variable: simulations and applications.
Brinks, Ralph; Landwehr, Sandra
2015-01-01
A common modelling approach in public health and epidemiology divides the population under study into compartments containing persons that share the same status. Here we consider a three-state model with the compartments A, B and Dead. States A and B may be the states of any dichotomous variable, for example, Healthy and Ill, respectively. The transitions between the states are described by change rates, which depend on calendar time and on age. So far, a rigorous mathematical calculation of the prevalence of property B has been difficult, which has limited the use of the model in epidemiology and public health. We develop a partial differential equation (PDE) that simplifies the use of the three-state model. To demonstrate the validity of the PDE, it is applied to two simulation studies, one about a hypothetical chronic disease and one about dementia in Germany. In two further applications, the PDE may provide insights into the smoking behaviour of males in Germany and into knowledge of the ovulatory cycle among Egyptian women.
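The three-state model can be sketched with a forward-Euler simulation of the compartment flows, tracking the prevalence of property B among the living (the quantity the PDE describes). All rate values here are assumed, constant in age and time, purely for illustration:

```python
# Transition rates per year (assumed, constant for this toy example)
i = 0.10   # A -> B    (incidence of property B)
r = 0.02   # B -> A    (remission)
m0 = 0.01  # A -> Dead
m1 = 0.03  # B -> Dead (excess mortality with property B)

dt, steps = 0.01, 5000        # 50 simulated years
A, B = 1.0, 0.0               # everyone starts in state A
prevalence = []

for _ in range(steps):
    dA = (-(i + m0) * A + r * B) * dt    # forward-Euler compartment flows
    dB = (i * A - (r + m1) * B) * dt
    A, B = A + dA, B + dB
    prevalence.append(B / (A + B))       # prevalence of B among the living

print(round(prevalence[-1], 3))
```

With these rates the prevalence rises toward its equilibrium of roughly 0.81, the root of the corresponding steady-state balance equation; age- and time-dependent rates would require the PDE treatment of the paper.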
NASA Astrophysics Data System (ADS)
Sabonis-Chafee, Theresa Marie
The successor states of Armenia, Lithuania and Ukraine arrived at independence facing extraordinary challenges in their energy sectors. Each state was a net importer, heavily dependent on cheap energy supplies, mostly from Russia. Each state also inherited a nuclear power complex over which it had not previously exercised full control. In the time period 1991-1996, each state attempted to impose coherence on the energy sector, selecting a new course for the pieces it had inherited from a much larger, highly integrated energy structure. Each state attempted to craft national energy policies in the midst of severe supply shocks and price shocks. Each state developed institutions to govern its nuclear power sector. The states' challenges were made even greater by the fact that they had few of the political or economic structures necessary for energy management, and sought to create those structures at the same time. This dissertation is a systematic, non-quantitative examination of how each state's energy policies developed during the 1991-1996 time period. The theoretical premise of the analysis (drawn from Statist realism) is that systemic variables (regional climate and energy vulnerability) provide the best explanations for the resulting energy policy decisions. The dependent variable is defined as creation and reform of energy institutions. The independent variables include domestic climate, regional climate, energy vulnerability and transnational assistance. All three states adopted rhetoric and legislation declaring energy a strategic sector. The evidence suggests that two of the states, Armenia and Lithuania, which faced tense regional climates and high levels of energy vulnerability, succeeded in actually treating energy strategically, approaching energy as a matter of national security or "high politics." The third state, Ukraine, failed to do so.
The evidence presented suggests that the systemic variables (regional climate and energy vulnerability) provided a more favorable environment for Ukraine, one in which the state attempted reform of the sector, but not as a concerted national security issue.
From Metaphors to Formalism: A Heuristic Approach to Holistic Assessments of Ecosystem Health.
Fock, Heino O; Kraus, Gerd
2016-01-01
Environmental policies employ metaphoric objectives such as ecosystem health, resilience and sustainable provision of ecosystem services, which influence corresponding sustainability assessments by means of normative settings such as assumptions on system description, indicator selection, aggregation of information and target setting. A heuristic approach is developed for sustainability assessments to avoid ambiguity, and applications to the EU Marine Strategy Framework Directive (MSFD) and OSPAR assessments are presented. For MSFD, nineteen different assessment procedures have been proposed, but at present no agreed assessment procedure is available. The heuristic assessment framework is a functional-holistic approach comprising an ex-ante/ex-post assessment framework with specifically defined normative and systemic dimensions (EAEPNS). The outer normative dimension defines the ex-ante/ex-post framework, of which the latter branch delivers one measure of ecosystem health based on indicators and the former accounts for the multi-dimensional nature of sustainability (social, economic, ecological) in terms of modeling approaches. For MSFD, the ex-ante/ex-post framework replaces the current distinction between assessments based on pressure and state descriptors. The ex-ante and the ex-post branch each comprise an inner normative and a systemic dimension. The inner normative dimension in the ex-post branch considers additive utility models and likelihood functions to standardize variables normalized with Bayesian modeling. Likelihood functions allow precautionary target setting. The ex-post systemic dimension considers a posteriori indicator selection by means of analysis of indicator space to avoid redundant indicator information, as opposed to a priori indicator selection in deconstructive-structural approaches. Indicator information is expressed in terms of ecosystem variability by means of multivariate analysis procedures.
The application to the OSPAR assessment for the southern North Sea showed that, with the 36 selected indicators, 48% of ecosystem variability could be explained. Tools for the ex-ante branch are risk and ecosystem models with the capability to analyze trade-offs, generating model output for each of the pressure chains to allow for a phasing-out of human pressures. The Bayesian measure of ecosystem health is sensitive to trends in environmental features, but robust to ecosystem variability, in line with state space models. The combination of the ex-ante and ex-post branches is essential to evaluate ecosystem resilience and to adopt adaptive management. Based on requirements of the heuristic approach, three possible developments of this concept can be envisioned, i.e. a governance-driven approach built upon participatory processes, a science-driven functional-holistic approach requiring extensive monitoring to analyze complete ecosystem variability, and an approach with emphasis on ex-ante modeling and ex-post assessment of well-studied subsystems.
From Metaphors to Formalism: A Heuristic Approach to Holistic Assessments of Ecosystem Health
Kraus, Gerd
2016-01-01
Environmental policies employ metaphoric objectives such as ecosystem health, resilience and sustainable provision of ecosystem services, which influence corresponding sustainability assessments by means of normative settings such as assumptions on system description, indicator selection, aggregation of information and target setting. A heuristic approach is developed for sustainability assessments to avoid ambiguity, and applications to the EU Marine Strategy Framework Directive (MSFD) and OSPAR assessments are presented. For MSFD, nineteen different assessment procedures have been proposed, but at present no agreed assessment procedure is available. The heuristic assessment framework is a functional-holistic approach comprising an ex-ante/ex-post assessment framework with specifically defined normative and systemic dimensions (EAEPNS). The outer normative dimension defines the ex-ante/ex-post framework, of which the latter branch delivers one measure of ecosystem health based on indicators and the former accounts for the multi-dimensional nature of sustainability (social, economic, ecological) in terms of modeling approaches. For MSFD, the ex-ante/ex-post framework replaces the current distinction between assessments based on pressure and state descriptors. The ex-ante and the ex-post branch each comprise an inner normative and a systemic dimension. The inner normative dimension in the ex-post branch considers additive utility models and likelihood functions to standardize variables normalized with Bayesian modeling. Likelihood functions allow precautionary target setting. The ex-post systemic dimension considers a posteriori indicator selection by means of analysis of indicator space to avoid redundant indicator information, as opposed to a priori indicator selection in deconstructive-structural approaches. Indicator information is expressed in terms of ecosystem variability by means of multivariate analysis procedures.
The application to the OSPAR assessment for the southern North Sea showed that, with the 36 selected indicators, 48% of ecosystem variability could be explained. Tools for the ex-ante branch are risk and ecosystem models with the capability to analyze trade-offs, generating model output for each of the pressure chains to allow for a phasing-out of human pressures. The Bayesian measure of ecosystem health is sensitive to trends in environmental features, but robust to ecosystem variability, in line with state space models. The combination of the ex-ante and ex-post branches is essential to evaluate ecosystem resilience and to adopt adaptive management. Based on requirements of the heuristic approach, three possible developments of this concept can be envisioned, i.e. a governance-driven approach built upon participatory processes, a science-driven functional-holistic approach requiring extensive monitoring to analyze complete ecosystem variability, and an approach with emphasis on ex-ante modeling and ex-post assessment of well-studied subsystems. PMID:27509185
A variable structure approach to robust control of VTOL aircraft
NASA Technical Reports Server (NTRS)
Calise, A. J.; Kramer, F.
1982-01-01
This paper examines the application of variable structure control theory to the design of a flight control system for the AV-8A Harrier in a hover mode. The objective in variable structure design is to confine the motion to a subspace of the total state space. The motion in this subspace is insensitive to system parameter variations and external disturbances that lie in the range space of the control. A switching type of control law results from the design procedure. The control system was designed to track a vector velocity command defined in the body frame. For comparison purposes, a proportional controller was designed using optimal linear regulator theory. Both control designs were first evaluated for transient response performance using a linearized model, then a nonlinear simulation study of a hovering approach to landing was conducted. Wind turbulence was modeled using a 1052 destroyer class air wake model.
A probabilistic bridge safety evaluation against floods.
Liao, Kuo-Wei; Muto, Yasunori; Chen, Wei-Lun; Wu, Bang-Ho
2016-01-01
To further capture the influences of uncertain factors on river bridge safety evaluation, a probabilistic approach is adopted. Because this is a system-level, nonlinear problem, most-probable-point (MPP) based reliability analyses are not suitable. A sampling approach such as a Monte Carlo simulation (MCS) or importance sampling is often adopted. To enhance the efficiency of the sampling approach, this study utilizes Bayesian least squares support vector machines to construct a response surface followed by an MCS, providing a more precise safety index. Although there are several factors impacting the flood-resistant reliability of a bridge, previous experiences and studies show that the reliability of the bridge itself plays a key role. Thus, the goal of this study is to analyze the system reliability of a selected bridge that includes five limit states. The random variables considered here include the water surface elevation, water velocity, local scour depth, soil property and wind load. Because the first three variables are deeply affected by river hydraulics, a probabilistic HEC-RAS-based simulation is performed to capture the uncertainties in those random variables. The accuracy and variation of our solutions are confirmed by a direct MCS to ensure the applicability of the proposed approach. The results of a numerical example indicate that the proposed approach can efficiently provide an accurate bridge safety evaluation with acceptably low variation.
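A bare-bones direct Monte Carlo estimate for a single hypothetical limit state (capacity minus demand, both assumed normal) illustrates the sampling idea; the study's actual evaluation combines five limit states, a Bayesian LS-SVM response surface, and HEC-RAS-driven input distributions, none of which are reproduced here:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1_000_000

# Assumed distributions for one failure mode, e.g. scour: capacity R vs demand S
R = rng.normal(loc=10.0, scale=1.0, size=n)   # resistance (capacity)
S = rng.normal(loc=7.0, scale=1.5, size=n)    # load effect (demand)

g = R - S                      # limit state function: failure when g < 0
pf = np.mean(g < 0)            # Monte Carlo estimate of failure probability

# For two independent normals, the safety index has the closed form
# beta = (mu_R - mu_S) / sqrt(var_R + var_S); estimated here from the samples.
beta = (R.mean() - S.mean()) / np.sqrt(R.var() + S.var())
print(round(pf, 3), round(beta, 2))
```

A response-surface method replaces the expensive limit state evaluation (here trivially cheap) with a fitted surrogate, so the million samples can be drawn against the surrogate instead.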
NASA Astrophysics Data System (ADS)
Arbuszewski, J. A.; Oppo, D.; Huang, K.; Dubois, N.; Galy, V.; Mohtadi, M.; Herbert, T.; Rosenthal, Y.; Linsley, B. K.
2012-12-01
The El Niño-Southern Oscillation (ENSO) is the most prominent mode of tropical Pacific climate variability and has the potential to significantly impact climate in the Indo-Pacific region and globally [1]. In the past, the mean state of the Pacific Ocean has, at times, resembled El Niño or La Niña conditions [2]. Although the dynamical relationships responsible for these changes have been studied through paleoproxy reconstructions and climate modeling, many questions remain. Recent paleoproxy-based studies of tropical Pacific hydrology and surface temperature variability have hypothesized that observed climatological changes over the Holocene are directly linked to ENSO and/or mean state variability, complementing studies that dynamically relate centennial-scale ENSO variability to mean state changes [3-8]. These studies have suggested that mid-Holocene ENSO variability was low and the mean state was more "La Niña"-like [3-6]. In the late Holocene, paleoproxy data have been interpreted as indicating an increase in ENSO variability with a more moderate mean ocean state [3-6]. However, alternative explanations could exist. Here, we test the hypothesis that observed climatological changes in the eastern tropical Pacific are related to mean state or ENSO variability during the Holocene. We focus our study on two sets of cores from the equatorial Pacific, with one located in the Indo-Pacific Warm Pool (BJ803-119 GGC, 117MC, sedimentation rates ~29 cm/kyr) and the other just off the Galapagos in the heart of the Eastern Cold Tongue (KNR195-5 43 GGC, 42MC, sedimentation rates ~20 cm/kyr). The western site lies in the region predicted by models to show the greatest variations in temperature and water column structure in response to mean state changes, while the eastern site lies in the area most prone to changes due to ENSO variability [7]. Together, these sites allow us the best chance to robustly reconstruct ENSO and mean state related changes.
We use a multiproxy approach and consider records from organic (sterol abundances) and inorganic proxies (Mg/Ca and δ18O of three planktonic foraminiferal species, % G. bulloides) to reconstruct zonal tropical Pacific (sub)surface temperature and stratification gradients over the Holocene. A benefit of using this approach is that it enables us to combine the strengths of each individual proxy to derive more robust records. We will compare our records with published paleoproxy and model studies in the Pacific and Indo-Pacific regions. Armed with this information, we aim to better understand mean state changes in the tropical Pacific over the Holocene. [1] Ropelewski, C. F. & Halpert, M. S. Monthly Weather Review 115, 1606-1626 (1987). [2] Collins, M. et al. Nature Geoscience 3, doi:10.1038/ngeo868 (2010). [3] Koutavas, A., Lynch-Stieglitz, J., Marchitto, T. & Sachs, J. Science 297, 226-230 (2002). [4] Moy, C. M., Seltzer, G. O., Rodbell, D. T. & Anderson, D. M. Nature 420, 162-165 (2002). [5] Conroy, J. L., Overpeck, J. T., Cole, J. E., Shanahan, T. M. & Steinitz-Kannan, M. Quaternary Science Reviews 27, 1166-1180 (2008). [6] Makou, M. C., Eglinton, T. I., Oppo, D. W. & Hughen, K. A. Geology 38, 43-46 (2010). [7] Karnauskas, K., Smerdon, J., Seager, R. & Gonzalez-Rouco, J. Journal of Climate, doi:10.1175/JCLI-D-11-00421.1 (2012, in press). [8] Clement, A., Seager, R. & Cane, M. Paleoceanography 14, 441-456 (1999).
Estepp, Justin R.; Christensen, James C.
2015-01-01
The passive brain-computer interface (pBCI) framework has been shown to be a very promising construct for assessing cognitive and affective state in both individuals and teams. There is a growing body of work that focuses on solving the challenges of transitioning pBCI systems from the research laboratory environment to practical, everyday use. An interesting issue is what impact methodological variability may have on the ability to reliably identify (neuro)physiological patterns that are useful for state assessment. This work aimed at quantifying the effects of methodological variability in a pBCI design for detecting changes in cognitive workload. Specific focus was directed toward the effects of replacing electrodes over dual sessions (thus inducing changes in placement, electromechanical properties, and/or impedance between the electrode and skin surface) on the accuracy of several machine learning approaches in a binary classification problem. In investigating these methodological variables, it was determined that the removal and replacement of the electrode suite between sessions does not impact the accuracy of a number of learning approaches when trained on one session and tested on a second. This finding was confirmed by comparing to a control group for which the electrode suite was not replaced between sessions. This result suggests that sensors (both neurological and peripheral) may be removed and replaced over the course of many interactions with a pBCI system without affecting its performance. Future work on multi-session and multi-day pBCI system use should seek to replicate this (lack of) effect between sessions in other tasks, temporal time courses, and data analytic approaches while also focusing on non-stationarity and variable classification performance due to intrinsic factors. PMID:25805963
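The cross-session evaluation can be illustrated with synthetic features and a nearest-class-mean classifier trained on one "session" and tested on another; the data, the drift magnitude, and the classifier are assumptions for illustration, not the study's EEG pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)

def session(shift, n=300, d=8):
    """Synthetic two-class 'workload' features for one recording session;
    `shift` mimics a small sensor offset from removing and replacing the
    electrode suite (purely illustrative)."""
    low  = rng.normal(0.0, 1.0, (n, d)) + shift   # low-workload class
    high = rng.normal(1.5, 1.0, (n, d)) + shift   # high-workload class
    return np.vstack([low, high]), np.repeat([0, 1], n)

X1, y1 = session(shift=0.0)          # day 1: training session
X2, y2 = session(shift=0.2)          # day 2: test, electrodes replaced

# Nearest-class-mean classifier fit on day 1 only.
mu0, mu1 = X1[y1 == 0].mean(0), X1[y1 == 1].mean(0)
pred = (np.linalg.norm(X2 - mu1, axis=1)
        < np.linalg.norm(X2 - mu0, axis=1)).astype(int)
cross_session_acc = np.mean(pred == y2)
```

When the between-session shift is small relative to the class separation, as here, train-on-day-1/test-on-day-2 accuracy stays close to within-session accuracy, which is the pattern the study reports.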
Improved Uncertainty Quantification in Groundwater Flux Estimation Using GRACE
NASA Astrophysics Data System (ADS)
Reager, J. T., II; Rao, P.; Famiglietti, J. S.; Turmon, M.
2015-12-01
Groundwater change is difficult to monitor over large scales. One of the most successful approaches is in the remote sensing of time-variable gravity using NASA Gravity Recovery and Climate Experiment (GRACE) mission data, and successful case studies have created the opportunity to move towards a global groundwater monitoring framework for the world's largest aquifers. To achieve these estimates, several approximations are applied, including those in GRACE processing corrections, the formulation of the formal GRACE errors, destriping and signal recovery, and the numerical model estimation of snow water, surface water and soil moisture storage states used to isolate a groundwater component. A major weakness in these approaches is inconsistency: different studies have used different sources of primary and ancillary data, and may achieve different results based on alternative choices in these approximations. In this study, we present two cases of groundwater change estimation in California and the Colorado River basin, selected for their good data availability and varied climates. We achieve a robust numerical estimate of post-processing uncertainties resulting from land-surface model structural shortcomings and model resolution errors. Groundwater variations should demonstrate less variability than the overlying soil moisture state does, as groundwater has a longer memory of past events due to buffering by infiltration and drainage rate limits. We apply a model ensemble approach in a Bayesian framework constrained by the assumption of decreasing signal variability with depth in the soil column. We also discuss time-variable vs. time-constant errors, across-scale vs. across-model errors, and error spectral content (across scales and across models).
More robust uncertainty quantification for GRACE-based groundwater estimates would take all of these issues into account, allowing for fairer use in management applications and for better integration of GRACE-based measurements with observations from other sources.
Lin, Fu; Leyffer, Sven; Munson, Todd
2016-04-12
We study a two-stage mixed-integer linear program (MILP) with more than 1 million binary variables in the second stage. We develop a two-level approach by constructing a semi-coarse model that coarsens with respect to variables and a coarse model that coarsens with respect to both variables and constraints. We coarsen binary variables by selecting a small number of prespecified on/off profiles. We aggregate constraints by partitioning them into groups and taking a convex combination over each group. With an appropriate choice of coarsened profiles, the semi-coarse model is guaranteed to find a feasible solution of the original problem and hence provides an upper bound on the optimal solution. We show that solving a sequence of coarse models converges to the same upper bound in a provably finite number of steps. This is achieved by adding violated constraints to coarse models until all constraints in the semi-coarse model are satisfied. We demonstrate the effectiveness of our approach in cogeneration for buildings. Here, the coarsened models allow us to obtain good approximate solutions at a fraction of the time required by solving the original problem. Extensive numerical experiments show that the two-level approach scales to large problems that are beyond the capacity of state-of-the-art commercial MILP solvers.
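The constraint-aggregation idea is easy to illustrate: replacing a group of inequalities by a convex combination of them yields a relaxation, so every point feasible for the original group remains feasible for the aggregated constraint (toy random data below, not the cogeneration model):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(6, 4))        # six inequality constraints A x <= b
b = rng.normal(size=6) + 5.0       # generous right-hand sides
lam = np.full(6, 1.0 / 6.0)        # convex-combination weights (sum to 1)

A_coarse = lam @ A                 # one aggregated constraint row
b_coarse = lam @ b

x = 0.1 * rng.normal(size=4)       # a point feasible for the fine model...
assert np.all(A @ x <= b)
assert A_coarse @ x <= b_coarse    # ...is also feasible for the coarse model
```

The converse fails: a coarse-feasible point may violate individual constraints in the group, which is why the method iteratively adds violated fine constraints back until the semi-coarse model is satisfied.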
Development of Advanced Methods of Structural and Trajectory Analysis for Transport Aircraft
NASA Technical Reports Server (NTRS)
Ardema, Mark D.; Windhorst, Robert; Phillips, James
1998-01-01
This paper develops a near-optimal guidance law for generating minimum-fuel, minimum-time, or minimum-cost fixed-range trajectories for supersonic transport aircraft. The approach uses a choice of new state variables along with singular perturbation techniques to time-scale decouple the dynamic equations into multiple equations of single order (second order for the fast dynamics). Application of the maximum principle to each of the decoupled equations, as opposed to application to the original coupled equations, avoids the two-point boundary-value problem and transforms the problem from a single functional optimization into multiple function optimizations. It is shown that such an approach produces well-known aircraft performance results such as minimizing the Breguet factor for minimum fuel consumption and the energy climb path. Furthermore, the new state variables produce a consistent calculation of flight path angle along the trajectory, eliminating one of the deficiencies in the traditional energy state approximation. In addition, jumps in the energy climb path are smoothed out by integration of the original dynamic equations at constant load factor. Numerical results for a supersonic transport design show that a pushover dive followed by a pullout at nominal load factors are sufficient maneuvers to smooth the jump.
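For reference, the Breguet factor mentioned above is the quantity (V/c)(L/D) appearing in the Breguet range equation, which ties cruise range to fuel burned; the numbers below are hypothetical SST-like values, not the paper's design data:

```python
from math import log

# Breguet range equation: R = (V / c) * (L/D) * ln(W0 / W1), where the
# factor (V/c)*(L/D) governs how far the aircraft travels per unit of
# fuel-weight fraction consumed in cruise.
V   = 590.0             # cruise speed, m/s (~Mach 2; assumed)
c   = 2.8e-4            # thrust-specific fuel consumption, 1/s (assumed)
L_D = 9.0               # cruise lift-to-drag ratio (assumed)
W0, W1 = 340e3, 240e3   # start / end of cruise weights, kg (assumed)

R = (V / c) * L_D * log(W0 / W1)   # cruise range in metres
```

With these illustrative numbers the cruise range comes out in the mid-thousands of kilometres, a plausible order of magnitude for a supersonic transport; the guidance law's fast/slow decomposition recovers this kind of cruise condition without solving the full two-point boundary-value problem.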
Optimization of Supersonic Transport Trajectories
NASA Technical Reports Server (NTRS)
Ardema, Mark D.; Windhorst, Robert; Phillips, James
1998-01-01
Quantitative Tomography for Continuous Variable Quantum Systems
NASA Astrophysics Data System (ADS)
Landon-Cardinal, Olivier; Govia, Luke C. G.; Clerk, Aashish A.
2018-03-01
We present a continuous-variable tomography scheme that reconstructs the Husimi Q function (Wigner function) by Lagrange interpolation, using measurements of the Q function (Wigner function) at the Padua points, conjectured to be optimal sampling points for two-dimensional reconstruction. Our approach drastically reduces the number of measurements required compared with using equidistant points on a regular grid, although reanalysis of such experiments is possible. The reconstruction algorithm produces a reconstructed function with exponentially decreasing error and quasilinear runtime in the number of Padua points. Moreover, using the interpolating polynomial of the Q function, we present a technique to directly estimate the density matrix elements of the continuous-variable state, with only a linear propagation of input measurement error. Furthermore, we derive a state-independent analytical bound on this error, such that our estimate of the density matrix is accompanied by a measure of its uncertainty.
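As background, the (first-family) Padua points of degree n can be generated from a Chebyshev-Lobatto product grid by an index-parity rule; this sketch follows the standard construction (the parity convention is what distinguishes the two families):

```python
import numpy as np

def padua_points(n):
    """First-family Padua points of degree n on [-1, 1]^2: the subset of an
    (n+1) x (n+2) Chebyshev-Lobatto product grid whose index sum is even.
    This yields (n+1)(n+2)/2 points, exactly the number of coefficients of
    a degree-n bivariate polynomial."""
    return np.array([(np.cos(j * np.pi / n), np.cos(k * np.pi / (n + 1)))
                     for j in range(n + 1)
                     for k in range(n + 2)
                     if (j + k) % 2 == 0])

pts = padua_points(8)   # 45 sampling points for degree-8 interpolation
```

For degree 8 this gives 45 sample points instead of the 81 of a 9x9 regular grid, which is the kind of measurement saving the tomography scheme exploits.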
Remote creation of hybrid entanglement between particle-like and wave-like optical qubits
NASA Astrophysics Data System (ADS)
Morin, Olivier; Huang, Kun; Liu, Jianli; Le Jeannic, Hanna; Fabre, Claude; Laurat, Julien
2014-07-01
The wave-particle duality of light has led to two different encodings for optical quantum information processing. Several approaches have emerged based either on particle-like discrete-variable states (that is, finite-dimensional quantum systems) or on wave-like continuous-variable states (that is, infinite-dimensional systems). Here, we demonstrate the generation of entanglement between optical qubits of these different types, located at distant places and connected by a lossy channel. Such hybrid entanglement, which is a key resource for a variety of recently proposed schemes, including quantum cryptography and computing, enables information to be converted from one Hilbert space to the other via teleportation and therefore the connection of remote quantum processors based upon different encodings. Beyond its fundamental significance for the exploration of entanglement and its possible instantiations, our optical circuit holds promise for implementations of heterogeneous networks, where discrete- and continuous-variable operations and techniques can be efficiently combined.
NASA Astrophysics Data System (ADS)
Ochoa, C. G.; Tidwell, V. C.
2012-12-01
In the arid southwestern United States, community water management systems have adapted to cope with climate variability and with socio-cultural and economic changes that have occurred since the establishment of these systems more than 300 years ago. In New Mexico, the community-based irrigation systems were established by Spanish settlers and have endured climate variability in the form of low levels of precipitation and have prevailed over important socio-political changes including the transfer of territory between Spain and Mexico, and between Mexico and the United States. Because they inherently integrate land and water use with community involvement, these community-based systems have multiple and complex economic, ecological, and cultural interactions. Current urban population growth and more variable climate conditions are adding pressure to the survival of these systems. We are conducting a multi-disciplinary research project that focuses on characterizing these intrinsically complex human and natural interactions in three community-based irrigation systems in northern New Mexico. We are using a system dynamics approach to integrate different hydrological, ecological, socio-cultural and economic aspects of these three irrigation systems. Coupled with intensive field data collection, we are building a system dynamics model that will enable us to simulate important linkages and interactions between environmental and human elements occurring in each of these water management systems. We will test different climate variability and population growth scenarios, with the expectation that we will be able to identify critical tipping points of these systems. Results from this model can be used to inform policy recommendations relevant to the environment and to urban and agricultural land use planning in the arid southwestern United States.
What can one learn about material structure given a single first-principles calculation?
NASA Astrophysics Data System (ADS)
Rajen, Nicholas; Coh, Sinisa
2018-05-01
We extract a variable X from the electron orbitals Ψnk and energies Enk in the parent high-symmetry structure of a wide range of complex oxides: perovskites, rutiles, pyrochlores, and cristobalites. Even though the calculation is done only in the parent structure, with no distortions, we show that X dictates the material's true ground-state structure. We propose using Wannier functions to extract concealed variables such as X both for material-structure prediction and for high-throughput approaches.
Security of continuous-variable quantum key distribution against general attacks.
Leverrier, Anthony; García-Patrón, Raúl; Renner, Renato; Cerf, Nicolas J
2013-01-18
We prove the security of Gaussian continuous-variable quantum key distribution with coherent states against arbitrary attacks in the finite-size regime. In contrast to previously known proofs of principle (based on the de Finetti theorem), our result is applicable in the practically relevant finite-size regime. This is achieved using a novel proof approach, which exploits phase-space symmetries of the protocols as well as the postselection technique introduced by Christandl, Koenig, and Renner [Phys. Rev. Lett. 102, 020504 (2009)].
Dudley, Robert W.; Hodgkins, Glenn A.; Dickinson, Jesse
2017-01-01
We present a logistic regression approach for forecasting the probability of future groundwater levels declining below, or remaining below, specific groundwater-level thresholds. We tested our approach on 102 groundwater wells in different climatic regions and aquifers of the United States that are part of the U.S. Geological Survey Groundwater Climate Response Network. We evaluated the importance of current groundwater levels, precipitation, streamflow, seasonal variability, the Palmer Drought Severity Index, and atmosphere/ocean indices for developing the logistic regression equations. Several diagnostics of model fit were used to evaluate the regression equations, including testing of autocorrelation of residuals, goodness-of-fit metrics, and bootstrap validation testing. The probabilistic predictions were most successful at wells with high persistence (low month-to-month variability) in their groundwater records and at wells where the groundwater level remained below the defined low threshold for sustained periods (generally three months or longer). The model fit was weakest at wells with strong seasonal variability in levels and with shorter-duration low-threshold events. We identified challenges in deriving probabilistic-forecasting models and possible approaches for addressing those challenges.
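A minimal sketch of the forecasting idea on synthetic data: an AR(1)-plus-seasonality "groundwater level", a low threshold, and a hand-rolled logistic regression of next month's below-threshold indicator on the current level and a seasonal term (the series, coefficients, and gradient-ascent fit are illustrative, not the USGS models):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic monthly levels with strong persistence and seasonality.
T = 600
t = np.arange(T)
level = np.zeros(T)
for i in range(1, T):
    level[i] = (0.9 * level[i - 1]
                + 0.5 * np.sin(2 * np.pi * t[i] / 12)
                + rng.normal(0, 0.3))
thresh = np.quantile(level, 0.2)         # a "low groundwater" threshold

# Predictors: intercept, current level, seasonal term.
# Target: below threshold next month.
X = np.column_stack([np.ones(T - 1), level[:-1],
                     np.sin(2 * np.pi * t[:-1] / 12)])
y = (level[1:] < thresh).astype(float)

# Plain gradient-ascent fit of the logistic-regression coefficients.
w = np.zeros(3)
for _ in range(20000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w += 0.05 * X.T @ (y - p) / len(y)

p = 1.0 / (1.0 + np.exp(-X @ w))
acc = np.mean((p > 0.5) == (y > 0.5))    # in-sample hit rate
```

The high persistence of the synthetic series is what makes the forecast work, mirroring the study's finding that predictions were most successful at wells with low month-to-month variability.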
Task planning with uncertainty for robotic systems. Thesis
NASA Technical Reports Server (NTRS)
Cao, Tiehua
1993-01-01
In a practical robotic system, it is important to represent and plan sequences of operations and to be able to choose an efficient sequence from them for a specific task. During the generation and execution of task plans, different kinds of uncertainty may occur and erroneous states need to be handled to ensure the efficiency and reliability of the system. An approach to task representation, planning, and error recovery for robotic systems is demonstrated. Our approach to task planning is based on an AND/OR net representation, which is then mapped to a Petri net representation of all feasible geometric states and associated feasibility criteria for net transitions. Task decomposition of robotic assembly plans based on this representation is performed on the Petri net, and the inheritance of the properties of liveness, safeness, and reversibility at all levels of decomposition is explored. This approach provides a framework for robust execution of tasks through the properties of traceability and viability. Uncertainty in robotic systems is modeled by local fuzzy variables, fuzzy marking variables, and global fuzzy variables, which are incorporated in fuzzy Petri nets. Analysis of properties and reasoning about uncertainty are investigated using fuzzy reasoning structures built into the net. Two applications of fuzzy Petri nets, robot task sequence planning and sensor-based error recovery, are explored. In the first application, the search space for feasible and complete task sequences with correct precedence relationships is reduced via the use of global fuzzy variables in reasoning about subgoals. In the second application, sensory verification operations are modeled by mutually exclusive transitions to reason about local and global fuzzy variables on-line and automatically select a retry or an alternative error recovery sequence when errors occur.
Task sequencing and task execution with error recovery capability for one and multiple soft components in robotic systems are investigated.
Observing spatio-temporal dynamics of excitable media using reservoir computing
NASA Astrophysics Data System (ADS)
Zimmermann, Roland S.; Parlitz, Ulrich
2018-04-01
We present a dynamical observer for two-dimensional partial differential equation models describing excitable media, where the required cross-prediction from observed time series to unmeasured state variables is provided by Echo State Networks receiving input only from local regions in space. The efficacy of this approach is demonstrated for (noisy) data from a (cubic) Barkley model and the Bueno-Orovio-Cherry-Fenton model describing chaotic electrical wave propagation in cardiac tissue.
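A minimal Echo State Network cross-prediction sketch: the network is trained to output an unmeasured variable v(t) from an observed series u(t). The toy one-dimensional system and all hyperparameters below are assumptions; the paper's networks operate on local spatial patches of two-dimensional fields:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy cross-prediction target: observe u(t), reconstruct the unmeasured
# variable v(t) = 0.8 v(t-1) + u(t-1), a simple fading-memory filter.
T = 2000
u = np.sin(0.1 * np.arange(T)) + 0.1 * rng.normal(size=T)
v = np.zeros(T)
for i in range(1, T):
    v[i] = 0.8 * v[i - 1] + u[i - 1]

N = 100                                   # reservoir size
W_in = rng.uniform(-0.5, 0.5, N)          # input weights (fixed, random)
W = rng.normal(0, 1, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius 0.9

x = np.zeros(N)
states = np.zeros((T, N))
for i in range(T):                        # drive the reservoir with u only
    x = np.tanh(W @ x + W_in * u[i])
    states[i] = x

# Ridge-regression readout: train on one stretch, test on a later one.
S, Vt = states[100:1000], v[100:1000]     # discard a 100-step washout
w_out = np.linalg.solve(S.T @ S + 1e-4 * np.eye(N), S.T @ Vt)
err = np.sqrt(np.mean((states[1000:] @ w_out - v[1000:]) ** 2))
rel = err / np.std(v[1000:])              # normalized test error
```

Only the linear readout is trained; the random recurrent reservoir supplies the memory of past inputs that the cross-prediction needs.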
NASA Technical Reports Server (NTRS)
Hall, W. E., Jr.; Gupta, N. K.; Hansen, R. S.
1978-01-01
An integrated approach to rotorcraft system identification is described. This approach consists of the sequential application of (1) data filtering to estimate states of the system and sensor errors, (2) model structure estimation to isolate significant model effects, and (3) parameter identification to quantify the coefficients of the model. An input design algorithm is described which can be used to design control inputs that maximize parameter estimation accuracy. Details of each aspect of the rotorcraft identification approach are given. Examples of both simulated and actual flight data processing are given to illustrate each phase of processing. The procedure is shown to provide a means of calibrating sensor errors in flight data, quantifying high-order state-variable models from the flight data, and consequently computing related stability and control design models.
A Robust Decision-Making Technique for Water Management under Decadal Scale Climate Variability
NASA Astrophysics Data System (ADS)
Callihan, L.; Zagona, E. A.; Rajagopalan, B.
2013-12-01
Robust decision making, a flexible and dynamic approach to managing water resources in light of deep uncertainties associated with climate variability at inter-annual to decadal time scales, is an analytical framework that detects when a system is in or approaching a vulnerable state. It provides decision makers the opportunity to implement strategies that both address the vulnerabilities and perform well over a wide range of plausible future scenarios. A strategy that performs acceptably over a wide range of possible future states is not likely to be optimal with respect to the actual future state. The degree of success--the ability to avoid vulnerable states and operate efficiently--thus depends on the skill in projecting future states and the ability to select the most efficient strategies to address vulnerabilities. This research develops a robust decision making framework that incorporates new methods of decadal-scale projection with selection of efficient strategies. Previous approaches to water resources planning under inter-annual climate variability, which combine skillful seasonal flow forecasts with climatology for subsequent years, are not skillful for medium-term (i.e., decadal-scale) projections, so decision makers cannot plan adequately to avoid vulnerabilities. We address this need by integrating skillful decadal-scale streamflow projections into the robust decision making framework and making the probability distribution of this projection available to the decision making logic. The range of possible future hydrologic scenarios can be defined using a variety of nonparametric methods. Once defined, an ensemble of decadal flow scenarios is generated from a wavelet-based spectral K-nearest-neighbor resampling approach using historical and paleo-reconstructed data. This method has been shown to generate skillful medium-term projections with a rich variety of natural variability.
The current state of the system in combination with the probability distribution of the projected flow ensembles enables the selection of appropriate decision options. This process is repeated for each year of the planning horizon--resulting in system outcomes that can be evaluated on their performance and resiliency. The research utilizes the RiverSMART suite of software modeling and analysis tools developed under the Bureau of Reclamation's WaterSMART initiative and built around the RiverWare modeling environment. A case study is developed for the Gunnison and Upper Colorado River Basins. The ability to mitigate vulnerability using the framework is gauged by system performance indicators that measure the ability of the system to meet various water demands (i.e. agriculture, environmental flows, hydropower etc.). Options and strategies for addressing vulnerabilities include measures such as conservation, reallocation and adjustments to operational policy. In addition to being able to mitigate vulnerabilities, options and strategies are evaluated based on benefits, costs and reliability. Flow ensembles are also simulated to incorporate mean and variance from climate change projections for the planning horizon and the above robust decision-making framework is applied to evaluate its performance under changing climate.
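The ensemble-generation step can be sketched with a plain K-nearest-neighbor trajectory bootstrap (omitting the wavelet-based spectral machinery and paleo data; the record, weights, and horizon below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)

def knn_resample(flows, start, horizon=10, k=5):
    """KNN trajectory resampling, a simplified version of the generators
    used for decadal flow ensembles: at each step, find the k historical
    years closest to the current flow and sample the flow that followed
    one of them, weighting neighbors inversely by rank."""
    traj = [start]
    for _ in range(horizon):
        dist = np.abs(flows[:-1] - traj[-1])   # distance to each year
        nbrs = np.argsort(dist)[:k]            # k nearest historical years
        w = 1.0 / np.arange(1, k + 1)          # rank-based weights
        pick = rng.choice(nbrs, p=w / w.sum())
        traj.append(flows[pick + 1])           # successor of chosen neighbor
    return np.array(traj)

flows = rng.gamma(4.0, 2.5, size=80)           # synthetic 80-year record
ens = np.array([knn_resample(flows, flows[-1]) for _ in range(200)])
# ens is a 200-member, 10-year-ahead flow ensemble conditioned on the
# current state, the kind of input the decision logic consumes.
```

Because every simulated value is drawn from the historical record, the ensemble preserves the observed flow distribution while still conditioning each step on the current state.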
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shalashilin, Dmitrii V.; Burghardt, Irene
2008-08-28
In this article, two coherent-state based methods of quantum propagation, namely, coupled coherent states (CCS) and Gaussian-based multiconfiguration time-dependent Hartree (G-MCTDH), are put on the same formal footing, using a derivation from a variational principle in Lagrangian form. By this approach, oscillations of the classical-like Gaussian parameters and oscillations of the quantum amplitudes are formally treated in an identical fashion. We also suggest a new approach denoted here as coupled coherent states trajectories (CCST), which completes the family of Gaussian-based methods. Using the same formalism for all related techniques allows their systematization and a straightforward comparison of their mathematical structure and cost.
General methods for sensitivity analysis of equilibrium dynamics in patch occupancy models
Miller, David A.W.
2012-01-01
Sensitivity analysis is a useful tool for the study of ecological models that has many potential applications for patch occupancy modeling. Drawing from the rich foundation of existing methods for Markov chain models, I demonstrate new methods for sensitivity analysis of the equilibrium state dynamics of occupancy models. Estimates from three previous studies are used to illustrate the utility of the sensitivity calculations: a joint occupancy model for a prey species, its predators, and habitat used by both; occurrence dynamics from a well-known metapopulation study of three butterfly species; and Golden Eagle occupancy and reproductive dynamics. I show how to deal efficiently with multistate models and how to calculate sensitivities involving derived state variables and lower-level parameters. In addition, I extend methods to incorporate environmental variation by allowing for spatial and temporal variability in transition probabilities. The approach used here is concise and general and can fully account for environmental variability in transition parameters. The methods can be used to improve inferences in occupancy studies by quantifying the effects of underlying parameters, aiding prediction of future system states, and identifying priorities for sampling effort.
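The simplest instance of such a sensitivity analysis is the two-state occupancy chain with colonization probability γ and extinction probability ε, whose equilibrium occupancy is ψ* = γ/(γ+ε); the derivative of the equilibrium with respect to a transition parameter has a closed form that can be checked numerically (toy parameter values, not estimates from the cited studies):

```python
def equilibrium_occupancy(gamma, eps):
    """Equilibrium of the two-state patch-occupancy Markov chain with
    colonization probability gamma and extinction probability eps."""
    return gamma / (gamma + eps)

gamma, eps = 0.3, 0.1
psi = equilibrium_occupancy(gamma, eps)            # psi* = 0.75

# Analytical sensitivity: d psi*/d gamma = eps / (gamma + eps)^2 ...
sens_exact = eps / (gamma + eps) ** 2
# ... verified against a central finite difference.
h = 1e-6
sens_num = (equilibrium_occupancy(gamma + h, eps)
            - equilibrium_occupancy(gamma - h, eps)) / (2 * h)
```

Multistate models replace this scalar formula with derivatives of the stationary distribution of a larger transition matrix, but the same analytic-versus-numerical check applies.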
Heart-Rate Variability-More than Heart Beats?
Ernst, Gernot
2017-01-01
Heart-rate variability (HRV) is frequently introduced as mirroring imbalances within the autonomic nervous system. Many investigations are based on the paradigm that increased sympathetic tone is associated with decreased parasympathetic tone and vice versa. But HRV is probably more than an indicator of possible disturbances in the autonomic system. Some perturbations trigger not reciprocal but parallel changes of vagal and sympathetic nerve activity. HRV has also been considered a surrogate parameter of the complex interaction between the brain and the cardiovascular system. Systems biology is an interdisciplinary field of study focusing on complex interactions within biological systems such as the cardiovascular system, with the help of computational models and time series analysis, among other tools. Time series are considered surrogates of the particular system, reflecting its robustness or fragility. Increased variability is usually seen as a sign of good health, whereas lowered variability might signify pathological changes. This might explain why lower HRV parameters were related to decreased life expectancy in several studies. Newer integrating theories have been proposed, according to which HRV reflects the state of the brain as much as the state of the heart. The polyvagal theory suggests that the physiological state dictates the range of behavior and psychological experience. Stressful events perpetuate the rhythms of autonomic states and, subsequently, behaviors. According to this theory, reduced variability is not only a surrogate but represents a fundamental homeostatic mechanism in a pathological state. The neurovisceral integration model proposes that cardiac vagal tone, described in HRV among other indices as the HF index, can mirror the functional balance of the neural networks implicated in emotion-cognition interactions. Both recent models represent a more holistic approach to understanding the significance of HRV.
Fernández-Chacón, Albert; Genovart, Meritxell; Álvarez, David; Cano, José M; Ojanguren, Alfredo F; Rodriguez-Muñoz, Rolando; Nicieza, Alfredo G
2015-06-01
In organisms such as fish, where body size is considered an important state variable for the study of their population dynamics, size-specific growth and survival rates can be influenced by local variation in both biotic and abiotic factors, but few studies have evaluated the complex relationships between environmental variability and size-dependent processes. We analysed a 6-year capture-recapture dataset of brown trout (Salmo trutta) collected at 3 neighbouring but heterogeneous mountain streams in northern Spain with the aim of investigating the factors shaping the dynamics of local populations. The influence of body size and water temperature on survival and individual growth was assessed under a multi-state modelling framework, an extension of classical capture-recapture models that considers the state (i.e. body size) of the individual at each capture occasion and allows us to obtain state-specific demographic rates and link them to continuous environmental variables. Individual survival and growth patterns varied over space and time, and evidence of size-dependent survival was found in all but the smallest stream. At this stream, the probability of reaching larger sizes was lower compared to the other wider and deeper streams. Water temperature variables performed better in the modelling of the highest-altitude population, explaining over 99% of the variability in maturation transitions and survival of large fish. The relationships between body size, temperature and fitness components found in this study highlight the utility of multi-state approaches to investigate small-scale demographic processes in heterogeneous environments, and to provide reliable ecological knowledge for management purposes.
What is the Effect of Interannual Hydroclimatic Variability on Water Supply Reservoir Operations?
NASA Astrophysics Data System (ADS)
Galelli, S.; Turner, S. W. D.
2015-12-01
Rather than deriving from a single distribution and uniform persistence structure, hydroclimatic data exhibit significant trends and shifts in their mean, variance, and lagged correlation through time. Consequently, observed and reconstructed streamflow records are often characterized by features of interannual variability, including long-term persistence and prolonged droughts. This study examines the effect of these features on the operating performance of water supply reservoirs. We develop a Stochastic Dynamic Programming (SDP) model that can incorporate a regime-shifting climate variable. We then compare the performance of operating policies—designed with and without the climate variable—to quantify the contribution of interannual variability to standard policy sub-optimality. The approach uses a discrete-time Markov chain to partition the reservoir inflow time series into a small number of 'hidden' climate states. Each state defines a distinct set of inflow transition probability matrices, which are used by the SDP model to condition the release decisions on the reservoir storage, current-period inflow and hidden climate state. The experimental analysis is carried out on 99 hypothetical water supply reservoirs fed from pristine catchments in Australia—all impacted by the Millennium drought. Results show that interannual hydroclimatic variability is a major cause of sub-optimal hedging decisions. The practical import is that conventional optimization methods may misguide operators, particularly in regions susceptible to multi-year droughts.
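A toy version of this setup can be sketched as a finite-horizon stochastic dynamic program. Everything below is hypothetical (grid sizes, inflow distributions, regime-persistence probabilities, and the quadratic deficit penalty); it only illustrates how release decisions can be conditioned jointly on storage and a hidden climate state with state-specific inflow distributions.

```python
import numpy as np

# Hypothetical discretization: 3 storage levels, 2 hidden climate states
S, C, H = 3, 2, 20                   # storage grid size, climate states, horizon
demand = 1
inflow_pmf = np.array([[0.2, 0.8],   # wet regime: inflow 0 w.p. 0.2, 1 w.p. 0.8
                       [0.7, 0.3]])  # dry regime
climate_P = np.array([[0.9, 0.1],    # persistent climate regimes
                      [0.2, 0.8]])

V = np.zeros((S, C))                 # value function over (storage, climate)
policy = np.zeros((S, C), dtype=int)
for t in range(H):                   # backward induction
    V_new = np.full((S, C), -np.inf)
    for s in range(S):
        for c in range(C):
            for r in range(s + 1):                 # release up to current storage
                val = -(demand - r) ** 2           # quadratic deficit penalty
                for q in (0, 1):                   # inflow realization
                    s_next = min(s - r + q, S - 1)
                    for c2 in range(C):
                        val += inflow_pmf[c, q] * climate_P[c, c2] * V[s_next, c2]
                if val > V_new[s, c]:
                    V_new[s, c], policy[s, c] = val, r
    V = V_new
```

Because the wet regime stochastically dominates the dry one (higher inflows, more persistent favorable transitions), the value function is never worse in the wet climate state at the same storage level.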
A Model-Based Approach to Inventory Stratification
Ronald E. McRoberts
2006-01-01
Forest inventory programs report estimates of forest variables for areas of interest ranging in size from municipalities to counties to States and Provinces. Classified satellite imagery has been shown to be an effective source of ancillary data that, when used with stratified estimation techniques, contributes to increased precision with little corresponding increase...
Soil water improvements with the long-term use of a winter rye cover crop
USDA-ARS's Scientific Manuscript database
The Midwestern United States is projected to experience increasing rainfall variability. One approach to mitigate climate impacts is to utilize crop and soil management practices that enhance soil water storage, reducing the risks of flooding as well as drought-induced crop water stress. While some ...
Soil water improvements with the long-term use of a winter rye cover crop
USDA-ARS's Scientific Manuscript database
The Midwestern United States, a region that produces one-third of maize and one-quarter of soybeans globally, is projected to experience increasing rainfall variability with future climate change. One approach to mitigate climate impacts is to utilize crop and soil management practices that enhance ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haas, P M; Selby, D L; Hanley, M J
1983-09-01
This report summarizes results of research sponsored by the US Nuclear Regulatory Commission (NRC) Office of Nuclear Regulatory Research to initiate the use of the Systems Approach to Training in the evaluation of training programs and entry-level qualifications for nuclear power plant (NPP) personnel. Variables (performance shaping factors) of potential importance to personnel selection and training are identified, and research to more rigorously define an operationally useful taxonomy of those variables is recommended. A high-level model of the Systems Approach to Training for use in the nuclear industry, which could serve as a model for NRC evaluation of industry programs, is presented. The model is consistent with current publicly stated NRC policy, with the approach being followed by the Institute for Nuclear Power Operations, and with current training technology. Checklists to be used by NRC evaluators to assess training programs for NPP control-room personnel are proposed which are based on this model.
Dimension reduction techniques for the integrative analysis of multi-omics data
Zeleznik, Oana A.; Thallinger, Gerhard G.; Kuster, Bernhard; Gholami, Amin M.
2016-01-01
State-of-the-art next-generation sequencing, transcriptomics, proteomics and other high-throughput ‘omics' technologies enable the efficient generation of large experimental data sets. These data may yield unprecedented knowledge about molecular pathways in cells and their role in disease. Dimension reduction approaches have been widely used in exploratory analysis of single omics data sets. This review will focus on dimension reduction approaches for simultaneous exploratory analyses of multiple data sets. These methods extract the linear relationships that best explain the correlated structure across data sets, the variability both within and between variables (or observations) and may highlight data issues such as batch effects or outliers. We explore dimension reduction techniques as one of the emerging approaches for data integration, and how these can be applied to increase our understanding of biological systems in normal physiological function and disease. PMID:26969681
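As a concrete example of extracting shared linear structure across two data sets, the sketch below applies a PLS-style SVD of the cross-covariance between two standardized blocks. The synthetic data, planted shared signal, and block sizes are invented for illustration; this is one representative of the family of methods the review covers, not any specific one.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50                                       # samples (e.g., patients)
latent = rng.normal(size=n)                  # shared biological signal
# Two "omics" blocks, each latent signal plus noise (all synthetic)
X = np.outer(latent, rng.normal(size=20)) + 0.5 * rng.normal(size=(n, 20))
Y = np.outer(latent, rng.normal(size=30)) + 0.5 * rng.normal(size=(n, 30))

# Standardize each block, then SVD of the cross-covariance (PLS-SVD flavor)
Xs = (X - X.mean(0)) / X.std(0)
Ys = (Y - Y.mean(0)) / Y.std(0)
U, sv, Vt = np.linalg.svd(Xs.T @ Ys / (n - 1))
tx, ty = Xs @ U[:, 0], Ys @ Vt[0]            # first pair of latent scores
r = np.corrcoef(tx, ty)[0, 1]                # correlation captured by component 1
```

With a genuinely shared signal, the first component pair recovers strongly correlated scores across the two blocks, which is exactly the "correlated structure across data sets" these methods target.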
Life Prediction Issues in Thermal/Environmental Barrier Coatings in Ceramic Matrix Composites
NASA Technical Reports Server (NTRS)
Shah, Ashwin R.; Brewer, David N.; Murthy, Pappu L. N.
2001-01-01
Issues and design requirements for the environmental barrier coating (EBC)/thermal barrier coating (TBC) life that are general and those specific to the NASA Ultra-Efficient Engine Technology (UEET) development program have been described. The current state and trends of research, the methods in vogue for failure analysis, and the long-term behavior and life prediction of EBC/TBC systems are reported. Also, the perceived failure mechanisms, variables, and related uncertainties governing EBC/TBC system life are summarized. A combined heat transfer and structural analysis approach based on oxidation kinetics using the Arrhenius theory is proposed to develop a life prediction model for EBC/TBC systems. A stochastic process-based reliability approach that includes physical variables such as gas pressure, temperature, velocity, moisture content, crack density, oxygen content, etc., is suggested. Benefits of the reliability-based approach are also discussed in the report.
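The Arrhenius-based life idea can be caricatured with parabolic oxide-growth kinetics: the oxidation rate constant follows an Arrhenius law, and coating life is taken as the time for the oxide to reach a critical thickness. All constants below are invented for illustration and are not from the report.

```python
import math

# Illustrative parabolic-oxidation life estimate (every constant hypothetical)
R = 8.314          # gas constant, J/(mol K)
k0 = 1.0e-6        # pre-exponential factor, m^2/s
Q = 250e3          # activation energy, J/mol
h_crit = 5e-6      # critical oxide thickness for coating failure, m

def life_hours(T_kelvin):
    """Time for parabolic oxide growth h^2 = k t to reach h_crit, in hours."""
    k = k0 * math.exp(-Q / (R * T_kelvin))   # Arrhenius rate constant, m^2/s
    t_fail = h_crit ** 2 / k                 # solve h_crit^2 = k * t for t
    return t_fail / 3600.0
```

The exponential temperature dependence is the key qualitative behavior: predicted life drops steeply as surface temperature rises, which is why temperature is a dominant variable in the reliability formulation.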
Song, Chao; Zheng, Shi-Biao; Zhang, Pengfei; Xu, Kai; Zhang, Libo; Guo, Qiujiang; Liu, Wuxin; Xu, Da; Deng, Hui; Huang, Keqiang; Zheng, Dongning; Zhu, Xiaobo; Wang, H
2017-10-20
Geometric phase, associated with holonomy transformation in quantum state space, is an important quantum-mechanical effect. Besides fundamental interest, this effect has practical applications, among which geometric quantum computation is a paradigm, where quantum logic operations are realized through geometric phase manipulation that has some intrinsic noise-resilient advantages and may enable simplified implementation of multi-qubit gates compared to the dynamical approach. Here we report observation of a continuous-variable geometric phase and demonstrate a quantum gate protocol based on this phase in a superconducting circuit, where five qubits are controllably coupled to a resonator. Our geometric approach allows for one-step implementation of n-qubit controlled-phase gates, which represents a remarkable advantage compared to gate decomposition methods, where the number of required steps dramatically increases with n. Following this approach, we realize these gates with n up to 4, verifying the high efficiency of this geometric manipulation for quantum computation.
NASA Astrophysics Data System (ADS)
Wu, Hao; Nüske, Feliks; Paul, Fabian; Klus, Stefan; Koltai, Péter; Noé, Frank
2017-04-01
Markov state models (MSMs) and master equation models are popular approaches to approximate molecular kinetics, equilibria, metastable states, and reaction coordinates in terms of a state space discretization usually obtained by clustering. Recently, a powerful generalization of MSMs has been introduced, the variational approach to conformation dynamics/molecular kinetics (VAC) and its special case the time-lagged independent component analysis (TICA), which allow us to approximate slow collective variables and molecular kinetics by linear combinations of smooth basis functions or order parameters. While it is known how to estimate MSMs from trajectories whose starting points are not sampled from an equilibrium ensemble, this has not yet been the case for TICA and the VAC. Previous estimates from short trajectories have been strongly biased and thus not variationally optimal. Here, we employ the Koopman operator theory and the ideas from dynamic mode decomposition to extend the VAC and TICA to non-equilibrium data. The main insight is that the VAC and TICA provide a coefficient matrix that we call the Koopman model, as it approximates the underlying dynamical (Koopman) operator in conjunction with the basis set used. This Koopman model can be used to compute a stationary vector to reweight the data to equilibrium. From such a Koopman-reweighted sample, equilibrium expectation values and variationally optimal reversible Koopman models can be constructed even with short simulations. The Koopman model can be used to propagate densities, and its eigenvalue decomposition provides estimates of relaxation time scales and slow collective variables for dimension reduction. Koopman models are generalizations of Markov state models, TICA, and the linear VAC and allow molecular kinetics to be described without a cluster discretization.
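The core estimation step, forming a Koopman matrix from instantaneous and time-lagged covariances, can be sketched on synthetic linear dynamics. Here the basis is simply the state itself, so the Koopman matrix should recover the (transposed) dynamics matrix; the full VAC/TICA machinery uses general basis functions plus the equilibrium reweighting described above.

```python
import numpy as np

rng = np.random.default_rng(1)
A = np.array([[0.9, 0.0],
              [0.0, 0.5]])                  # true dynamics: one slow, one fast mode
X = np.zeros((2000, 2))
for t in range(1999):
    X[t + 1] = A @ X[t] + 0.1 * rng.normal(size=2)

# Koopman/DMD estimate from instantaneous and time-lagged data matrices
X0, X1 = X[:-1], X[1:]
C00 = X0.T @ X0 / len(X0)                   # instantaneous covariance
C01 = X0.T @ X1 / len(X0)                   # time-lagged covariance
K = np.linalg.solve(C00, C01)               # Koopman matrix in this basis
# Implied relaxation timescales (in units of the lag time) from the spectrum
timescales = -1.0 / np.log(np.abs(np.linalg.eigvals(K)))
```

For this linear basis the eigenvalues of `K` estimate those of `A` (0.9 and 0.5), so the slow collective variable and its relaxation timescale fall out of the eigendecomposition.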
Optimized tomography of continuous variable systems using excitation counting
NASA Astrophysics Data System (ADS)
Shen, Chao; Heeres, Reinier W.; Reinhold, Philip; Jiang, Luyao; Liu, Yi-Kai; Schoelkopf, Robert J.; Jiang, Liang
2016-11-01
We propose a systematic procedure to optimize quantum state tomography protocols for continuous variable systems based on excitation counting preceded by a displacement operation. Compared with conventional tomography based on Husimi or Wigner function measurement, the excitation counting approach can significantly reduce the number of measurement settings. We investigate both informational completeness and robustness, and provide a bound of reconstruction error involving the condition number of the sensing map. We also identify the measurement settings that optimize this error bound, and demonstrate that the improved reconstruction robustness can lead to an order-of-magnitude reduction of estimation error with given resources. This optimization procedure is general and can incorporate prior information of the unknown state to further simplify the protocol.
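The role of the sensing map's condition number in the reconstruction-error bound can be illustrated with random matrices standing in for the displaced-excitation-counting map. The real map is structured rather than Gaussian; this sketch only shows why over-complete measurement settings tend to improve robustness.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 16                                 # dimension of the (vectorized) unknown state

def sensing_condition(m):
    """Condition number of a random sensing map with m measurement settings.
    A Gaussian matrix is a stand-in for the displacement + counting map."""
    A = rng.normal(size=(m, d))
    return np.linalg.cond(A)

# Informational completeness needs m >= d; extra settings improve conditioning
c_min = sensing_condition(d)           # just-complete set of settings
c_over = sensing_condition(4 * d)      # over-complete set of settings
```

A smaller condition number tightens the error bound, which is the quantity the proposed optimization of measurement settings targets.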
Symmetrical Windowing for Quantum States in Quasi-Classical Trajectory Simulations
NASA Astrophysics Data System (ADS)
Cotton, Stephen Joshua
An approach has been developed for extracting approximate quantum state-to-state information from classical trajectory simulations which "quantizes" symmetrically both the initial and final classical actions associated with the degrees of freedom of interest using quantum number bins (or "window functions") which are significantly narrower than unit-width. This approach thus imposes a more stringent quantization condition on classical trajectory simulations than has been traditionally employed, while doing so in a manner that is time-symmetric and microscopically reversible. To demonstrate this "symmetric quasi-classical" (SQC) approach for a simple real system, collinear H + H2 reactive scattering calculations were performed [S.J. Cotton and W.H. Miller, J. Phys. Chem. A 117, 7190 (2013)] with SQC-quantization applied to the H 2 vibrational degree of freedom (DOF). It was seen that the use of window functions of approximately 1/2-unit width led to calculated reaction probabilities in very good agreement with quantum mechanical results over the threshold energy region, representing a significant improvement over what is obtained using the traditional quasi-classical procedure. The SQC approach was then applied [S.J. Cotton and W.H. Miller, J. Chem. Phys. 139, 234112 (2013)] to the much more interesting and challenging problem of incorporating non-adiabatic effects into what would otherwise be standard classical trajectory simulations. To do this, the classical Meyer-Miller (MM) Hamiltonian was used to model the electronic DOFs, with SQC-quantization applied to the classical "electronic" actions of the MM model---representing the occupations of the electronic states---in order to extract the electronic state population dynamics. 
It was demonstrated that if one ties the zero-point energy (ZPE) of the electronic DOFs to the SQC windowing function's width parameter this very simple SQC/MM approach is capable of quantitatively reproducing quantum mechanical results for a range of standard benchmark models of electronically non-adiabatic processes, including applications where "quantum" coherence effects are significant. Notably, among these benchmarks was the well-studied "spin-boson" model of condensed phase non-adiabatic dynamics, in both its symmetric and asymmetric forms---the latter of which many classical approaches fail to treat successfully. The SQC/MM approach to the treatment of non-adiabatic dynamics was next applied [S.J. Cotton, K. Igumenshchev, and W.H. Miller, J. Chem. Phys., 141, 084104 (2014)] to several recently proposed models of condensed phase electron transfer (ET) processes. For these problems, a flux-side correlation function framework modified for consistency with the SQC approach was developed for the calculation of thermal ET rate constants, and excellent accuracy was seen over wide ranges of non-adiabatic coupling strength and energetic bias/exothermicity. Significantly, the "inverted regime" in thermal rate constants (with increasing bias) known from Marcus Theory was reproduced quantitatively for these models---representing the successful treatment of another regime that classical approaches generally have difficulty in correctly describing. Relatedly, a model of photoinduced proton coupled electron transfer (PCET) was also addressed, and it was shown that the SQC/MM approach could reasonably model the explicit population dynamics of the photoexcited electron donor and acceptor states over the four parameter regimes considered. The potential utility of the SQC/MM technique lies in its stunning simplicity and the ease by which it may readily be incorporated into "ordinary" molecular dynamics (MD) simulations. 
In short, a typical MD simulation may be augmented to take non-adiabatic effects into account simply by introducing an auxiliary pair of classical "electronic" action-angle variables for each energetically viable Born-Oppenheimer surface, and time-evolving these auxiliary variables via Hamilton's equations (using the MM electronic Hamiltonian) in the same manner that the other classical variables---i.e., the coordinates of all the nuclei---are evolved forward in time. In a complex molecular system involving many hundreds or thousands of nuclear DOFs, the propagation of these extra "electronic" variables represents a modest increase in computational effort, and yet, the examples presented herein suggest that in many instances the SQC/MM approach will describe the true non-adiabatic quantum dynamics to a reasonable and useful degree of quantitative accuracy.
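The windowing bookkeeping at the heart of the SQC approach is simple to state: final classical actions are binned into quantum-number windows narrower than unit width, trajectories landing outside every window are discarded, and the surviving counts are renormalized. The sketch below does this for a synthetic action distribution; the distribution and window width of 0.5 are illustrative choices, not results from the thesis.

```python
import numpy as np

rng = np.random.default_rng(3)
gamma = 0.5                               # window width < 1, per the SQC prescription

def sqc_populations(actions, n_max=3):
    """Bin final classical actions into quantum-number windows of width gamma.
    Trajectories outside every window are discarded; surviving counts are
    renormalized -- the 'symmetric windowing' bookkeeping."""
    counts = np.array([np.sum(np.abs(actions - n) < gamma / 2)
                       for n in range(n_max + 1)], dtype=float)
    return counts / counts.sum()

# Synthetic final actions clustered near n = 0 (70%) and n = 1 (30%)
actions = np.concatenate([rng.normal(0.0, 0.1, 700),
                          rng.normal(1.0, 0.1, 300)])
pops = sqc_populations(actions)
```

Because the windows are narrower than a full unit, only trajectories that land decisively near an integer action contribute, which is what imposes the "more stringent quantization condition" described above.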
New insight on intergenerational attachment from a relationship-based analysis.
Bailey, Heidi N; Tarabulsy, George M; Moran, Greg; Pederson, David R; Bento, Sandi
2017-05-01
Research on attachment transmission has focused on variable-centered analyses, where hypotheses are tested by examining linear associations between variables. The purpose of this study was to apply a relationship-centered approach to data analysis, where adult states of mind, maternal sensitivity, and infant attachment were conceived as being three components of a single, intergenerational relationship. These variables were assessed in 90 adolescent and 99 adult mother-infant dyads when infants were 12 months old. Initial variable-centered analyses replicated the frequently observed associations between these three core attachment variables. Relationship-based, latent class analyses then revealed that the most common pattern among young mother dyads featured maternal unresolved trauma, insensitive interactive behavior, and disorganized infant attachment (61%), whereas the most prevalent adult mother dyad relationship pattern involved maternal autonomy, sensitive maternal behavior, and secure infant attachment (59%). Three less prevalent relationship patterns were also observed. Moderation analyses revealed that the adolescent-adult mother distinction differentiated between secure and disorganized intergenerational relationship patterns, whereas experience of traumatic events distinguished between disorganized and avoidant patterns. Finally, socioeconomic status distinguished between avoidant and secure patterns. Results emphasize the value of a relationship-based approach, adding an angle of understanding to the study of attachment transmission.
Quantifying the increasing sensitivity of power systems to climate variability
NASA Astrophysics Data System (ADS)
Bloomfield, H. C.; Brayshaw, D. J.; Shaffrey, L. C.; Coker, P. J.; Thornton, H. E.
2016-12-01
Large quantities of weather-dependent renewable energy generation are expected in power systems under climate change mitigation policies, yet little attention has been given to the impact of long term climate variability. By combining state-of-the-art multi-decadal meteorological records with a parsimonious representation of a power system, this study characterises the impact of year-to-year climate variability on multiple aspects of the power system of Great Britain (including coal, gas and nuclear generation), demonstrating why multi-decadal approaches are necessary. All aspects of the example system are impacted by inter-annual climate variability, with the impacts being most pronounced for baseload generation. The impacts of inter-annual climate variability increase in a 2025 wind-power scenario, with a 4-fold increase in the inter-annual range of operating hours for baseload such as nuclear. The impacts on peak load and peaking-plant are comparably small. Less than 10 years of power supply and demand data are shown to be insufficient for providing robust power system planning guidance. This suggests renewable integration studies—widely used in policy, investment and system design—should adopt a more robust approach to climate characterisation.
Environmental strategies: A case study of systematic evaluation
NASA Astrophysics Data System (ADS)
Sherman, Douglas J.; Garès, Paul A.
1982-09-01
A major problem facing environmental managers is the necessity to effectively evaluate management alternatives. Traditional environmental assessments have emphasized the use of economic analyses. These approaches are often deficient due to the difficulty of assigning dollar values to environmental systems and to social amenities. A more flexible decision-making model has been developed to analyze management options for coping with beach erosion problems at the Sandy Hook Unit of Gateway National Recreation Area in New Jersey. The model comprises decision-making variables formulated from a combination of environmental and management criteria, and it has an accept-reject format in which the management options are analyzed in terms of the variables. Through logical ordering of the insertion of the variables into the model, stepwise elimination of alternatives is possible. A hierarchy of variables is determined by estimating the work required to complete an assessment of the alternatives for each variable. The assessment requiring the least work is performed first so that the more difficult evaluations are limited to fewer alternatives. The application of this approach is illustrated with a case study in which beach protection alternatives were evaluated for the United States National Park Service.
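The accept-reject, cheapest-assessment-first logic can be sketched in a few lines. The options, criteria, and assessment costs below are hypothetical placeholders, not the study's actual variables.

```python
# Hypothetical stepwise accept-reject screening of management alternatives
options = {"seawall", "beach nourishment", "groynes", "retreat"}

# (variable, assessment cost, options that pass the criterion) -- all illustrative
criteria = [
    ("legal feasibility", 1, {"seawall", "beach nourishment", "groynes"}),
    ("environmental impact", 3, {"beach nourishment", "retreat"}),
    ("recreation value", 5, {"beach nourishment", "groynes"}),
]

# Cheapest assessments first, so the costly evaluations face fewer alternatives
for name, cost, passing in sorted(criteria, key=lambda c: c[1]):
    options &= passing                 # accept-reject: drop failing alternatives
    if len(options) <= 1:
        break                          # remaining assessments are unnecessary
```

In this toy run the expensive "recreation value" assessment is never performed, which is the work-saving property the hierarchy of variables is designed to deliver.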
Modeling of aircraft unsteady aerodynamic characteristics. Part 1: Postulated models
NASA Technical Reports Server (NTRS)
Klein, Vladislav; Noderer, Keith D.
1994-01-01
A short theoretical study of aircraft aerodynamic model equations with unsteady effects is presented. The aerodynamic forces and moments are expressed in terms of indicial functions or internal state variables. The first representation leads to aircraft integro-differential equations of motion; the second preserves the state-space form of the model equations. The formulation of unsteady aerodynamics is applied in two examples. The first example deals with a one-degree-of-freedom harmonic motion about one of the aircraft body axes. In the second example, the equations for longitudinal short-period motion are developed. In these examples, only linear aerodynamic terms are considered. The indicial functions are postulated as simple exponentials and the internal state variables are governed by linear, time-invariant, first-order differential equations. It is shown that both approaches to the modeling of unsteady aerodynamics lead to identical models.
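The claimed equivalence of the two formulations can be checked numerically for a single exponential indicial function: Duhamel superposition with A(t) = C_ss(1 - e^{-bt}) and the internal-state ODE x' = -bx + bC_ss u produce the same output. The constants and input signal below are arbitrary illustrative choices.

```python
import numpy as np

b, C_ss, dt = 2.0, 1.5, 1e-3
t = np.arange(0, 5, dt)
u = np.sin(t)                                  # arbitrary input (e.g., angle of attack)

# Indicial formulation: Duhamel superposition y(t) = integral A(t - tau) u'(tau) dtau
A = C_ss * (1 - np.exp(-b * t))                # exponential indicial function, A(0) = 0
du = np.gradient(u, dt)
y_indicial = np.convolve(du, A)[:len(t)] * dt  # u(0) = 0 here, so no step term

# Internal-state formulation: x' = -b x + b C_ss u, y = x (forward Euler)
x = np.zeros_like(t)
for k in range(len(t) - 1):
    x[k + 1] = x[k] + dt * (-b * x[k] + b * C_ss * u[k])
```

Integration by parts of the Duhamel integral reproduces the convolution kernel bC_ss e^{-b(t-tau)} of the first-order ODE, so the two time histories agree to discretization error.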
NASA Astrophysics Data System (ADS)
Liu, Ligang; Fukumoto, Masahiro; Saiki, Sachio; Zhang, Shiyong
2009-12-01
Proportionate adaptive algorithms have been proposed recently to accelerate convergence in the identification of sparse impulse responses. When the excitation signal is colored, especially for speech, proportionate NLMS algorithms converge slowly. The proportionate affine projection algorithm (PAPA) is expected to solve this problem by using more information in the input signals. However, its steady-state performance is limited by the constant step-size parameter. In this article we propose a variable step-size PAPA that cancels the a posteriori estimation error. This yields fast convergence with a large step size when the identification error is large, and then considerably decreases the steady-state misalignment with a small step size after the adaptive filter has converged. Simulation results show that the proposed approach can greatly improve the steady-state misalignment without sacrificing the fast convergence of PAPA.
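A simplified sketch of the two ingredients, proportionate coefficient gains plus a step size that shrinks with the error power, is given below using an NLMS-style update rather than the full affine projection. The sparse system, noise level, and the particular step-size rule are invented for illustration and are not the paper's exact update.

```python
import numpy as np

rng = np.random.default_rng(4)
N, L = 4000, 16
h = np.zeros(L); h[3] = 1.0; h[10] = -0.5     # sparse impulse response (hypothetical)
x = rng.normal(size=N)                        # white excitation for simplicity
d = np.convolve(x, h)[:N] + 1e-3 * rng.normal(size=N)

w = np.zeros(L)
delta, eps = 0.01, 1e-8
for n in range(L, N):
    xn = x[n:n - L:-1]                        # regressor [x_n, ..., x_{n-L+1}]
    e = d[n] - w @ xn                         # a priori error
    # Proportionate gains: larger adaptation for larger coefficients (PNLMS flavor)
    g = np.abs(w) + delta
    g /= g.sum()
    # Variable step size: near 1 while the error is large, shrinking toward 0
    # afterwards -- a crude stand-in for a posteriori error cancellation
    mu = e ** 2 / (e ** 2 + 1e-4)
    w += mu * e * g * xn / (xn @ (g * xn) + eps)

misalignment = np.linalg.norm(w - h) / np.linalg.norm(h)
```

The large initial step gives fast convergence on the sparse taps, and the shrinking step then drives the steady-state misalignment well below what a fixed large step would allow.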
NASA Astrophysics Data System (ADS)
Panu, U. S.; Ng, W.; Rasmussen, P. F.
2009-12-01
The modeling of weather states (i.e., precipitation occurrences) is critical when the historical data are not long enough for the desired analysis. Stochastic models (e.g., Markov chain and Alternating Renewal Process (ARP)) of precipitation occurrence processes generally assume the existence of short-term temporal dependency between neighboring states while implying long-term independency (randomness) of states in precipitation records. Existing temporal-dependent models for the generation of precipitation occurrences are restricted either by fixed-length memory (e.g., the order of a Markov chain model) or by the reigning states in segments (e.g., persistence of homogeneous states within dry/wet-spell lengths of an ARP). The modeling of variable segment lengths and states can be an arduous task, and a flexible modeling approach is required for the preservation of the various segmented patterns of precipitation data series. An innovative Dictionary approach has been developed in the field of genome pattern recognition for the identification of frequently occurring genome segments in DNA sequences. The genome segments delineate biologically meaningful "words" (i.e., segments with specific patterns in a series of discrete states) that can be jointly modeled with variable lengths and states. A meaningful "word", in hydrology, can refer to a segment of precipitation occurrence comprising wet or dry states. Such flexibility provides a unique advantage over traditional stochastic models for the generation of precipitation occurrences. Three stochastic models, namely, the alternating renewal process using the Geometric distribution, the second-order Markov chain model, and the Dictionary approach, have been assessed to evaluate their efficacy for the generation of daily precipitation sequences.
Comparisons involved three guiding principles, namely (i) the ability of the models to preserve short-term temporal dependency in the data through the concepts of autocorrelation, average mutual information, and the Hurst exponent; (ii) the ability of the models to preserve persistence within homogeneous dry/wet weather states through analysis of dry/wet-spell lengths between the observed and generated data; and (iii) the ability to assess the goodness-of-fit of the models through likelihood-based criteria (i.e., AIC and BIC). Thirty years of observed daily precipitation records from 10 Canadian meteorological stations were utilized for comparative analyses of the three models. In general, the Markov chain model performed well. The remaining models were found to be competitive with one another depending upon the scope and purpose of the comparison. Although the Markov chain model has a certain advantage in the generation of daily precipitation occurrences, the structural flexibility offered by the Dictionary approach in modeling the varied segment lengths of heterogeneous weather states provides a distinct and powerful advantage in the generation of precipitation sequences.
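Of the three models compared, the second-order Markov chain is the easiest to sketch: the wet/dry probability for a day depends on the states of the previous two days. The transition probabilities below are hypothetical, not estimated from the Canadian station records.

```python
import numpy as np

rng = np.random.default_rng(5)
# P(wet today | states of previous two days), 0 = dry, 1 = wet (all hypothetical)
p_wet = {(0, 0): 0.1, (0, 1): 0.4, (1, 0): 0.3, (1, 1): 0.7}

def generate(n, state=(0, 0)):
    """Generate n days of wet/dry occurrences from the second-order chain."""
    seq = list(state)
    for _ in range(n - 2):
        w = rng.random() < p_wet[tuple(seq[-2:])]
        seq.append(int(w))
    return np.array(seq)

seq = generate(10000)
wet_frac = seq.mean()    # long-run wet-day fraction implied by the chain
```

For these rates the stationary wet-day fraction works out to 7/31 (about 0.23), and wet spells persist because a wet-wet history carries the highest wet probability; the Dictionary approach generalizes exactly this kind of fixed-memory rule to variable-length segments.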
NASA Technical Reports Server (NTRS)
Alag, Gurbux S.; Gilyard, Glenn B.
1990-01-01
To develop advanced control systems for optimizing aircraft engine performance, unmeasurable output variables must be estimated. The estimation has to be done in an uncertain environment and be adaptable to varying degrees of modeling error and other variations in engine behavior over its operational life cycle. This paper presents an approach to estimating unmeasured output variables by explicitly modeling the effects of off-nominal engine behavior as biases on the measurable output variables. A state variable model accommodating off-nominal behavior is developed for the engine, and Kalman filter concepts are used to estimate the required variables. Results are presented from nonlinear engine simulation studies as well as the application of the estimation algorithm to actual flight data. The formulation presented has a wide range of application since it is not restricted or tailored to the particular application described.
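The bias-accommodation idea, augmenting the state vector with a constant measurement bias and letting a Kalman filter estimate it alongside the plant state, can be sketched for a scalar plant. All dynamics and noise levels below are invented; the paper's engine model is far richer.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 2000
a, q, r = 0.95, 0.05, 0.1              # scalar plant, process/measurement noise
bias = 0.8                             # unknown sensor bias (off-nominal behavior)

# Simulate the true state and biased measurements
x_true = np.zeros(n)
for k in range(n - 1):
    x_true[k + 1] = a * x_true[k] + np.sqrt(q) * rng.normal()
z = x_true + bias + np.sqrt(r) * rng.normal(size=n)

# Augmented model: state = [x, b], with b constant; measurement z = x + b + v
F = np.array([[a, 0.0], [0.0, 1.0]])
Hm = np.array([[1.0, 1.0]])
Q = np.diag([q, 1e-8])                 # tiny bias noise keeps the estimate adaptive
P = np.eye(2) * 10.0
s = np.zeros(2)
for k in range(n):
    s = F @ s                          # predict
    P = F @ P @ F.T + Q
    S = Hm @ P @ Hm.T + r              # update
    K = P @ Hm.T / S
    s = s + (K * (z[k] - Hm @ s)).ravel()
    P = (np.eye(2) - K @ Hm) @ P
bias_hat = s[1]                        # converges toward the true bias
```

Because the plant state is mean-reverting while the bias is constant, the pair is observable from the biased measurement alone, and the filter separates the two, which is the mechanism that lets the estimator accommodate off-nominal engine behavior.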
Affective Computing and the Impact of Gender and Age
Rukavina, Stefanie; Gruss, Sascha; Hoffmann, Holger; Tan, Jun-Wen; Walter, Steffen; Traue, Harald C.
2016-01-01
Affective computing aims at the detection of users’ mental states, in particular, emotions and dispositions during human-computer interactions. Detection can be achieved by measuring multimodal signals, namely, speech, facial expressions and/or psychobiology. Over the past years, one major approach was to identify the best features for each signal using different classification methods. Although this is of high priority, other subject-specific variables should not be neglected. In our study, we analyzed the effect of gender, age, personality and gender roles on the extracted psychobiological features (derived from skin conductance level, facial electromyography and heart rate variability) as well as the influence on the classification results. In an experimental human-computer interaction, five different affective states with picture material from the International Affective Picture System and ULM pictures were induced. A total of 127 subjects participated in the study. Among all potentially influencing variables (gender has been reported to be influential), age was the only variable that correlated significantly with psychobiological responses. In summary, the conducted classification processes resulted in 20% classification accuracy differences according to age and gender, especially when comparing the neutral condition with four other affective states. We suggest taking age and gender specifically into account for future studies in affective computing, as these may lead to an improvement of emotion recognition accuracy. PMID:26939129
NASA Astrophysics Data System (ADS)
Zurita-Milla, R.; Laurent, V. C. E.; van Gijsel, J. A. E.
2015-12-01
Monitoring biophysical and biochemical vegetation variables in space and time is key to understand the earth system. Operational approaches using remote sensing imagery rely on the inversion of radiative transfer models, which describe the interactions between light and vegetation canopies. The inversion required to estimate vegetation variables is, however, an ill-posed problem because of variable compensation effects that can cause different combinations of soil and canopy variables to yield extremely similar spectral responses. In this contribution, we present a novel approach to visualise the ill-posed problem using self-organizing maps (SOM), which are a type of unsupervised neural network. The approach is demonstrated with simulations for Sentinel-2 data (13 bands) made with the Soil-Leaf-Canopy (SLC) radiative transfer model. A look-up table of 100,000 entries was built by randomly sampling 14 SLC model input variables between their minimum and maximum allowed values while using both a dark and a bright soil. The Sentinel-2 spectral simulations were used to train a SOM of 200 × 125 neurons. The training projected similar spectral signatures onto either the same, or contiguous, neuron(s). Tracing back the inputs that generated each spectral signature, we created a 200 × 125 map for each of the SLC variables. The lack of spatial patterns and the variability in these maps indicate ill-posed situations, where similar spectral signatures correspond to different canopy variables. For Sentinel-2, our results showed that leaf area index, crown cover and leaf chlorophyll, water and brown pigment content are less confused in the inversion than variables with noisier maps like fraction of brown canopy area, leaf dry matter content and the PROSPECT mesophyll parameter. 
This study supports both educational and on-going research activities on inversion algorithms and might be useful to evaluate the uncertainties of retrieved canopy biophysical and biochemical state variables.
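The SOM step of the approach can be illustrated with a minimal implementation. The sketch below trains a small map on synthetic two-band "spectra" rather than the 13-band SLC simulations; the grid size, learning-rate schedule, and neighborhood schedule are illustrative assumptions.

```python
import numpy as np

# Minimal self-organizing map on synthetic 2-band "spectra" (an illustrative
# stand-in for the 13-band Sentinel-2 simulations in the abstract).
rng = np.random.default_rng(1)
data = rng.random((2000, 2))           # training spectra, scaled to [0, 1]

rows, cols = 10, 10
W = rng.random((rows, cols, 2))        # one codebook (weight) vector per neuron
grid = np.dstack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"))

for t, x in enumerate(data):
    lr = 0.5 * (1 - t / len(data))                 # decaying learning rate
    sigma = 3.0 * (1 - t / len(data)) + 0.5        # decaying neighborhood width
    # best-matching unit: neuron whose weights are closest to the input
    d = np.linalg.norm(W - x, axis=2)
    bmu = np.unravel_index(np.argmin(d), d.shape)
    # pull the BMU and its grid neighbors toward the input
    gdist2 = np.sum((grid - np.array(bmu)) ** 2, axis=2)
    h = np.exp(-gdist2 / (2 * sigma**2))
    W += lr * h[..., None] * (x - W)

# after training, similar inputs project to the same or contiguous neurons
q_err = np.mean([np.min(np.linalg.norm(W - x, axis=2)) for x in data])
print(q_err)   # mean quantization error, small for a well-trained map
```

Tracing each training sample back to its BMU is what produces the per-variable maps described in the abstract: one then colors each neuron by the input variable that generated the spectra it attracts.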
Alzougool, Basil; Chang, Shanton; Gray, Kathleen
2017-09-01
There has been little research that provides a comprehensive account of the nature and aspects of the information needs of informal carers. The authors have previously developed and validated a framework that accounts for major underlying states of information need. This paper aims to apply this framework to explore whether there are common demographic and socioeconomic characteristics that affect the information needs states of carers. A questionnaire about the information needs states was completed by 198 carers aged over 18. We used statistical methods to look for similarities and differences in respondents' information needs states in terms of the demographic and socioeconomic variables. At least one information needs state varies among carers in terms of seven demographic and socioeconomic variables: the age of the patient(s) that they are caring for; the condition(s) of the patient(s) that they are caring for; the number of patients that they are caring for; their length of time as a carer; their gender; the country that they live in; and the population of the area that they live in. The findings demonstrate the utility of the information needs state framework. We outline some practical implications of the framework.
Altering state policy: interest group effectiveness among state-level advocacy groups.
Hoefer, Richard
2005-07-01
Because social policy making continues to devolve to the state level, social workers should understand how advocacy and policy making occur at that level. Interest groups active in the human services arena were surveyed and data were used to test a model of interest group effectiveness in four states. The independent variables were amount of resources invested, strategy used, relationships with key actors, use of coalitions, and policy positions taken. Results indicate that the model explains low to middling amounts of the variation in group effectiveness. Results also show that the model fits different states to different degrees, indicating that social workers need to approach advocacy in different ways to achieve maximum effectiveness in altering state policy. Implications for altering state policy are provided.
Raghupathi, Wullianallur; Raghupathi, Viju
2018-01-01
In this research we explore the current state of chronic diseases in the United States, using data from the Centers for Disease Control and Prevention and applying visualization and descriptive analytics techniques. Five main categories of variables are studied, namely chronic disease conditions, behavioral health, mental health, demographics, and overarching conditions. These are analyzed in the context of regions and states within the U.S. to discover possible correlations between variables in several categories. There are widespread variations in the prevalence of diverse chronic diseases, the number of hospitalizations for specific diseases, and the diagnosis and mortality rates for different states. Identifying such correlations is fundamental to developing insights that will help in the creation of targeted management, mitigation, and preventive policies, ultimately minimizing the risks and costs of chronic diseases. As the population ages and individuals suffer from multiple conditions, or comorbidity, it is imperative that the various stakeholders, including the government, non-governmental organizations (NGOs), policy makers, health providers, and society as a whole, address these adverse effects in a timely and efficient manner. PMID:29494555
Structure of a viscoplastic theory
NASA Technical Reports Server (NTRS)
Freed, Alan D.
1988-01-01
The general structure of a viscoplastic theory is developed from physical and thermodynamical considerations. The flow equation is of classical form. The dynamic recovery approach is shown to be superior to the hardening function approach for incorporating nonlinear strain hardening into the material response through the evolutionary equation for back stress. A novel approach for introducing isotropic strain hardening into the theory is presented, which results in a useful simplification. In particular, the limiting stress for the kinematic saturation of state (not the drag stress) is the chosen scalar-valued state variable. The resulting simplification is that there is no coupling between dynamic and thermal recovery terms in each evolutionary equation. The derived theory of viscoplasticity has the structure of a two-surface plasticity theory when the response is plasticlike, and the structure of a Bailey-Orowan creep theory when the response is creeplike.
Observer-Based Adaptive Neural Network Control for Nonlinear Systems in Nonstrict-Feedback Form.
Chen, Bing; Zhang, Huaguang; Lin, Chong
2016-01-01
This paper focuses on the problem of adaptive neural network (NN) control for a class of nonlinear nonstrict-feedback systems via output feedback. A novel adaptive NN backstepping output-feedback control approach is first proposed for nonlinear nonstrict-feedback systems. The monotonicity of the system bounding functions and the structural character of radial basis function (RBF) NNs are used to overcome the difficulties that arise from the nonstrict-feedback structure. A state observer is constructed to estimate the unmeasurable state variables. By combining the adaptive backstepping technique with the approximation capability of RBF NNs, an output-feedback adaptive NN controller is designed. It is shown that the proposed controller guarantees semiglobal boundedness of all the signals in the closed-loop systems. Two examples are used to illustrate the effectiveness of the proposed approach.
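The RBF approximation capability that the design relies on can be illustrated in isolation: a Gaussian RBF network fitted by least squares to an unknown smooth nonlinearity (here sin, as a stand-in). This is a sketch of the approximator only, not the paper's adaptive control law; centers, width, and ranges are assumptions.

```python
import numpy as np

# Gaussian RBF network approximating an unknown nonlinearity f(x) = sin(x),
# the kind of approximator the backstepping design relies on (illustrative).
centers = np.linspace(-3, 3, 15)       # fixed centers over the operating range
width = 0.6

def phi(x):
    # regressor vector of Gaussian basis functions for each input point
    return np.exp(-((np.asarray(x)[:, None] - centers) ** 2) / (2 * width**2))

x_train = np.linspace(-3, 3, 200)
y_train = np.sin(x_train)              # stand-in for the unknown dynamics
W, *_ = np.linalg.lstsq(phi(x_train), y_train, rcond=None)

x_test = np.linspace(-2.5, 2.5, 50)
err = np.max(np.abs(phi(x_test) @ W - np.sin(x_test)))
print(err)   # sup-norm approximation error on the test grid
```

In the adaptive setting the weights W are not solved in batch but updated online by an adaptation law; the least-squares fit here only demonstrates that the basis can represent the unknown function well.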
NASA Astrophysics Data System (ADS)
Williams, Caitlin R. S.; Sorrentino, Francesco; Murphy, Thomas E.; Roy, Rajarshi
2013-12-01
We experimentally study the complex dynamics of a unidirectionally coupled ring of four identical optoelectronic oscillators. The coupling between these systems is time-delayed in the experiment and can be varied over a wide range of delays. We observe that as the coupling delay is varied, the system may show different synchronization states, including complete isochronal synchrony, cluster synchrony, and two splay-phase states. We analyze the stability of these solutions through a master stability function approach, which we show can be effectively applied to all the different states observed in the experiment. Our analysis supports the experimentally observed multistability in the system.
[The state of the psychological contract and its relation with employees' psychological health].
Gracia, Francisco Javier; Silla, Inmaculada; Peiró, José María; Fortes-Ferreira, Lina
2006-05-01
In the present paper, the role of the state of the psychological contract in predicting psychological health outcomes is studied in a sample of 385 employees of different Spanish companies. Results indicate that the state of the psychological contract significantly predicts life satisfaction, work-family conflict, and well-being beyond the prediction produced by the content of the psychological contract. In addition, trust and fairness, two dimensions of the state of the psychological contract, together contribute to explaining these psychological health variables, adding predictive value beyond fulfillment of the psychological contract. The results support the approach argued by Guest and colleagues.
Beating the Odds: Trees to Success in Different Countries
ERIC Educational Resources Information Center
Finch, W. Holmes; Marchant, Gregory J.
2017-01-01
A recursive partitioning model approach in the form of classification and regression trees (CART) was used with 2012 PISA data for five countries (Canada, Finland, Germany, Singapore-China, and the United States). The objective of the study was to determine demographic and educational variables that differentiated between low SES students that were…
ERIC Educational Resources Information Center
Pieterse, Alex L.; Lee, Minsun; Fetzer, Alexa
2016-01-01
This study documents various process elements of multicultural training from the perspective of counseling and counseling psychology students within the United States (US). Using a mixed-methods approach, findings indicate that racial group membership is an important variable that differentially impacts White students and students of Color while…
NASA Astrophysics Data System (ADS)
Ficklin, D. L.; Abatzoglou, J. T.
2017-12-01
The spatial variability in the balance between surface runoff (Q) and evapotranspiration (ET) is critical for understanding water availability. The Budyko framework suggests that this balance is solely a function of aridity. Observed deviations from this framework for individual watersheds, however, can vary significantly, resulting in uncertainty in using the Budyko framework in ungauged catchments and under future climate and land use scenarios. Here, we model the spatial variability in the partitioning of precipitation into Q and ET using a set of climatic, physiographic, and vegetation metrics for 211 near-natural watersheds across the contiguous United States (CONUS) within Budyko's framework through the free parameter ω. Using a generalized additive model, we found that precipitation seasonality, the ratio of soil water holding capacity to precipitation, topographic slope, and the fraction of precipitation falling as snow explained 81.2% of the variability in ω. This ω model applied to the Budyko framework explained 97% of the spatial variability in long-term Q for an independent set of near-natural watersheds. The developed ω model was also used to estimate the entire CONUS surface water balance for both contemporary and mid-21st century conditions. The contemporary CONUS surface water balance compared favorably to more sophisticated land-surface modeling efforts. For mid-21st century conditions, the model simulated an increase in the fraction of precipitation used by ET across the CONUS with declines in Q for much of the eastern CONUS and mountainous watersheds across the western US. The Budyko framework using the modeled ω lends itself to an alternative approach for assessing the potential response of catchment water balance to climate change to complement other approaches.
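A common one-parameter implementation of the Budyko curve is Fu's equation; whether the authors used exactly this form is an assumption, but it shows how the free parameter ω controls the partitioning of precipitation into ET and Q.

```python
# Fu's one-parameter form of the Budyko curve (an assumption: the abstract's
# free parameter ω is commonly implemented this way):
#   ET/P = 1 + PET/P - (1 + (PET/P)^ω)^(1/ω)
def evaporative_fraction(aridity, omega):
    """ET/P as a function of the aridity index PET/P and the parameter ω."""
    return 1.0 + aridity - (1.0 + aridity**omega) ** (1.0 / omega)

def runoff_ratio(aridity, omega):
    # Q/P follows as the remainder of the long-term water balance (Q = P - ET)
    return 1.0 - evaporative_fraction(aridity, omega)

# larger ω pushes a catchment closer to its water/energy limits:
# at PET/P = 1, ω = 1.5 → ET/P ≈ 0.413; ω = 2.6 → ≈ 0.694; ω = 4.0 → ≈ 0.811
for omega in (1.5, 2.6, 4.0):
    print(omega, round(evaporative_fraction(1.0, omega), 3))
```

Predicting ω from precipitation seasonality, soil water capacity, slope, and snow fraction, as the abstract describes, then amounts to plugging a fitted ω into these two functions for each watershed.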
Sensitivity Analysis of Hydraulic Head to Locations of Model Boundaries
Lu, Zhiming
2018-01-30
Sensitivity analysis is an important component of many model activities in hydrology. Numerous studies have been conducted in calculating various sensitivities. Most of these sensitivity analyses focus on the sensitivity of state variables (e.g., hydraulic head) to parameters representing medium properties such as hydraulic conductivity, or to prescribed values such as constant head or flux at boundaries, while few studies address the sensitivity of the state variables to the shape parameters or design parameters that control the model domain. Instead, these shape parameters are typically assumed to be known in the model. In this study, based on the flow equation, we derive the equation (and its associated initial and boundary conditions) for the sensitivity of hydraulic head to shape parameters using the continuous sensitivity equation (CSE) approach. These sensitivity equations can be solved numerically in general, or analytically in some simplified cases. Finally, the approach is demonstrated through two examples, and the results compare favorably to those from analytical solutions or numerical finite difference methods with perturbed model domains, while the numerical shortcomings of the finite difference method are avoided.
NASA Astrophysics Data System (ADS)
Basso, B.; Dumont, B.
2015-12-01
A systems approach was implemented to assess the impact of management strategies and climate variability on crop yield, nitrate leaching, and soil organic carbon across the Midwest US at a fine spatial resolution. We used the SALUS model, which is designed to simulate the yield and environmental outcomes of continuous crop rotations under different agronomic management, soils, and weather. We extracted soil parameters from the SSURGO (Soil Survey Geographic) data of nine Midwest states (IA, IL, IN, MI, MN, MO, OH, SD, WI) and weather from NARR (North American Regional Reanalysis). State-specific management itineraries were extracted from USDA-NASS. We present results for different cropping systems (continuous corn, corn-soybean, and extended rotations) under different management practices (no-tillage, cover crops, and residue management). Simulations were conducted under both the baseline climate (1979-2014) and projected climate scenarios (RCP2.5, 6). Results indicated that climate change would likely have a negative impact on corn yields in some areas and a positive impact in others. Soil N and C losses can be reduced with the adoption of conservation practices.
Modularity and the spread of perturbations in complex dynamical systems
NASA Astrophysics Data System (ADS)
Kolchinsky, Artemy; Gates, Alexander J.; Rocha, Luis M.
2015-12-01
We propose a method to decompose dynamical systems based on the idea that modules constrain the spread of perturbations. We find partitions of system variables that maximize "perturbation modularity," defined as the autocovariance of coarse-grained perturbed trajectories. The measure effectively separates the fast intramodular from the slow intermodular dynamics of perturbation spreading (in this respect, it is a generalization of the "Markov stability" method of network community detection). Our approach captures variation of modular organization across different system states, time scales, and in response to different kinds of perturbations: aspects of modularity which are all relevant to real-world dynamical systems. It offers a principled alternative to detecting communities in networks of statistical dependencies between system variables (e.g., "relevance networks" or "functional networks"). Using coupled logistic maps, we demonstrate that the method uncovers hierarchical modular organization planted in a system's coupling matrix. Additionally, in homogeneously coupled map lattices, it identifies the presence of self-organized modularity that depends on the initial state, dynamical parameters, and type of perturbations. Our approach offers a powerful tool for exploring the modular organization of complex dynamical systems.
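The central idea, that modular coupling confines the spread of perturbations, can be demonstrated on a small coupled logistic map system with two planted modules. The coupling matrix, parameters, and the crude divergence measure below are illustrative assumptions; this is not the paper's full perturbation-modularity computation.

```python
import numpy as np

# Two modules of logistic maps, strongly coupled within each module and
# weakly coupled between them; a perturbation injected into module A should
# spread quickly inside A and only slowly into module B.
n = 8                                   # 4 maps per module
C = np.full((n, n), 0.02)               # weak inter-module coupling
C[:4, :4] = C[4:, 4:] = 0.3             # strong intra-module coupling
np.fill_diagonal(C, 0.0)
C /= C.sum(axis=1, keepdims=True)       # row-normalize the coupling weights
eps, r = 0.3, 3.9                       # coupling strength, chaotic regime

def step(x):
    f = r * x * (1 - x)
    return (1 - eps) * f + eps * (C @ f)

rng = np.random.default_rng(2)
x = rng.random(n)
for _ in range(200):                    # settle onto the attractor
    x = step(x)

y = x.copy()
y[0] += 1e-8                            # tiny perturbation inside module A
for _ in range(5):                      # after a few steps, spread is still local
    x, y = step(x), step(y)

d = np.abs(x - y)
print(d[:4].mean(), d[4:].mean())       # intra-module spread >> inter-module
```

The perturbation-modularity measure formalizes this contrast via the autocovariance of coarse-grained perturbed trajectories; the sketch only shows the underlying asymmetry it exploits.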
Thresholds for conservation and management: structured decision making as a conceptual framework
Nichols, James D.; Eaton, Mitchell J.; Martin, Julien; Edited by Guntenspergen, Glenn R.
2014-01-01
changes in system dynamics. They are frequently incorporated into ecological models used to project system responses to management actions. Utility thresholds are components of management objectives and are values of state or performance variables at which small changes yield substantial changes in the value of the management outcome. Decision thresholds are values of system state variables at which small changes prompt changes in management actions in order to reach specified management objectives. Decision thresholds are derived from the other components of the decision process. We advocate a structured decision making (SDM) approach within which the following components are identified: objectives (possibly including utility thresholds), potential actions, models (possibly including ecological thresholds), monitoring program, and a solution algorithm (which produces decision thresholds). Adaptive resource management (ARM) is described as a special case of SDM developed for recurrent decision problems that are characterized by uncertainty. We believe that SDM, in general, and ARM, in particular, provide good approaches to conservation and management. Use of SDM and ARM also clarifies the distinct roles of ecological thresholds, utility thresholds, and decision thresholds in informed decision processes.
NASA Astrophysics Data System (ADS)
Staley, Dennis; Negri, Jacquelyn; Kean, Jason
2016-04-01
Population expansion into fire-prone steeplands has resulted in an increase in post-fire debris-flow risk in the western United States. Logistic regression methods for determining debris-flow likelihood and the calculation of empirical rainfall intensity-duration thresholds for debris-flow initiation represent two common approaches for characterizing hazard and reducing risk. Logistic regression models are currently being used to rapidly assess debris-flow hazard in response to design storms of known intensities (e.g. a 10-year recurrence interval rainstorm). Empirical rainfall intensity-duration thresholds comprise a major component of the United States Geological Survey (USGS) and the National Weather Service (NWS) debris-flow early warning system at a regional scale in southern California. However, these two modeling approaches remain independent, with each approach having limitations that do not allow for synergistic local-scale (e.g. drainage-basin scale) characterization of debris-flow hazard during intense rainfall. The current logistic regression equations consider rainfall a unique independent variable, which prevents the direct calculation of the relation between rainfall intensity and debris-flow likelihood. Regional (e.g. mountain range or physiographic province scale) rainfall intensity-duration thresholds fail to provide insight into the basin-scale variability of post-fire debris-flow hazard and require an extensive database of historical debris-flow occurrence and rainfall characteristics. Here, we present a new approach that combines traditional logistic regression and intensity-duration threshold methodologies. 
This method allows for local characterization of the likelihood that a debris flow will occur at a given rainfall intensity, direct calculation of the rainfall rates that will result in a given likelihood, and calculation of spatially explicit rainfall intensity-duration thresholds for debris-flow generation in recently burned areas. Our approach synthesizes the two methods by incorporating measured rainfall intensity into each model variable (based on measures of topographic steepness, burn severity, and surface properties) within the logistic regression equation. This approach provides a more realistic representation of the relation between rainfall intensity and debris-flow likelihood, as likelihood values asymptotically approach zero as rainfall intensity approaches 0 mm/h and increase with more intense rainfall. Model performance was evaluated by comparing predictions to several existing regional thresholds. The model, based upon training data collected in southern California, USA, accurately predicts rainfall intensity-duration thresholds for other areas in the western United States not included in the original training dataset. In addition, the improved logistic regression model shows promise for emergency planning purposes and real-time, site-specific early warning. With further validation, this model may permit the prediction of spatially explicit intensity-duration thresholds for debris-flow generation in areas where empirically derived regional thresholds do not exist. This improvement would permit the expansion of the early-warning system into other regions susceptible to post-fire debris flows.
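The synthesis described above can be sketched as a logistic model in which each predictor enters multiplied by rainfall intensity I, so that the likelihood shrinks toward a small baseline as I approaches 0 mm/h and the threshold intensity for any target likelihood follows in closed form. The coefficients and predictor values below are hypothetical, not the published USGS model.

```python
import math

# Hedged sketch of the combined model: each predictor X_i enters the logistic
# function multiplied by rainfall intensity I, so p stays near its small
# baseline 1/(1+e^{-b0}) as I -> 0 and grows with I. All coefficients and
# predictor values here are hypothetical.
def debris_flow_likelihood(intensity, steep, burn, soil,
                           b=(-3.6, 0.004, 0.007, 0.002)):
    b0, b1, b2, b3 = b
    eta = b0 + (b1 * steep + b2 * burn + b3 * soil) * intensity
    return 1.0 / (1.0 + math.exp(-eta))

def threshold_intensity(p_target, steep, burn, soil,
                        b=(-3.6, 0.004, 0.007, 0.002)):
    # invert the logistic link: the intensity at which likelihood = p_target
    b0, b1, b2, b3 = b
    logit = math.log(p_target / (1.0 - p_target))
    return (logit - b0) / (b1 * steep + b2 * burn + b3 * soil)

p10 = debris_flow_likelihood(10.0, steep=25, burn=30, soil=15)
i50 = threshold_intensity(0.5, steep=25, burn=30, soil=15)
print(round(p10, 3), round(i50, 1))   # likelihood at 10 mm/h; intensity for p = 0.5
```

Because intensity multiplies every basin-specific predictor, the inversion in `threshold_intensity` yields a different intensity-duration threshold for each drainage basin, which is the local-scale capability the abstract emphasizes.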
Ma, H. -Y.; Chuang, C. C.; Klein, S. A.; ...
2015-11-06
Here, we present an improved procedure for generating initial conditions (ICs) for climate model hindcast experiments with specified sea surface temperature and sea ice. The motivation is to minimize errors in the ICs and thereby enable a better evaluation of atmospheric parameterizations' performance in the hindcast mode. We apply state variables (horizontal velocities, temperature, and specific humidity) from the operational analysis/reanalysis for the atmospheric initial states. Without a data assimilation system, we apply a two-step process to obtain the other variables necessary to initialize both the atmospheric (e.g., aerosols and clouds) and land models (e.g., soil moisture). First, we nudge only the model horizontal velocities towards operational analysis/reanalysis values, given a 6-hour relaxation time scale, to obtain all necessary variables. Compared to the original strategy, in which horizontal velocities, temperature, and specific humidity are nudged, the revised approach produces a better representation of initial aerosol and cloud fields, which are more consistent and closer to observations and the model's preferred climatology. Second, we obtain land ICs from an offline land model simulation forced with observed precipitation, winds, and surface fluxes. This approach produces more realistic soil moisture in the land ICs. With this refined procedure, the simulated precipitation, clouds, radiation, and surface air temperature over land are improved in the Day 2 mean hindcasts. Following this procedure, we propose a "Core" integration suite which provides an easily repeatable test allowing model developers to rapidly assess the impacts of various parameterization changes on the fidelity of modelled cloud-associated processes relative to observations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cotter, Simon L., E-mail: simon.cotter@manchester.ac.uk
2016-10-15
Efficient analysis and simulation of multiscale stochastic systems of chemical kinetics is an ongoing area of research, and is the source of many theoretical and computational challenges. In this paper, we present a significant improvement to the constrained approach, which is a method for computing the effective dynamics of slowly changing quantities in these systems but which does not rely on the quasi-steady-state assumption (QSSA). The QSSA can cause errors in the estimation of effective dynamics for systems where the difference in timescales between the "fast" and "slow" variables is not so pronounced. This new application of the constrained approach allows us to compute the effective generator of the slow variables without the need for expensive stochastic simulations. This is achieved by finding the null space of the generator of the constrained system. For complex systems where this is not possible, or where the constrained subsystem is itself multiscale, the constrained approach can be applied iteratively. This breaks the problem down into finding the solutions to many small eigenvalue problems, which can be efficiently solved using standard methods. Since this methodology does not rely on the quasi-steady-state assumption, the effective dynamics that are approximated are highly accurate and, in the case of systems with only monomolecular reactions, are exact. We demonstrate this with some numerics, and also use the effective generators to sample paths of the slow variables conditioned on their endpoints, a task which would be computationally intractable for the generator of the full system.
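The linear-algebra core of the method, extracting a null space from a generator matrix, can be shown on a toy three-state generator. The actual constrained-system generators in the paper arise from chemical kinetics, so the matrix below is purely illustrative.

```python
import numpy as np

# Toy illustration of the null-space step: the stationary distribution of a
# fast subsystem is the left null vector of its generator Q (rows sum to
# zero in this convention). The generator here is a made-up 3-state example.
Q = np.array([[-2.0,  2.0,  0.0],
              [ 1.0, -3.0,  2.0],
              [ 0.0,  4.0, -4.0]])

# left null vector: pi @ Q = 0, pi >= 0, sum(pi) = 1, found via the
# eigenvector of Q^T with eigenvalue closest to zero
w, v = np.linalg.eig(Q.T)
pi = np.real(v[:, np.argmin(np.abs(w))])
pi = pi / pi.sum()
print(pi)   # stationary distribution (0.25, 0.5, 0.25) for this generator
```

In the constrained approach this null vector, computed for each fixed value of the slow variables, supplies the conditional averages that define the effective slow generator, without any stochastic simulation.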
Fock expansion of multimode pure Gaussian states
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cariolaro, Gianfranco; Pierobon, Gianfranco, E-mail: gianfranco.pierobon@unipd.it
2015-12-15
The Fock expansion of multimode pure Gaussian states is derived starting from their representation as displaced and squeezed multimode vacuum states. The approach is new and appears to be simpler and more general than previous ones starting from the phase-space representation given by the characteristic or Wigner function. The Fock expansion is performed in terms of easily evaluated two-variable Hermite–Kampé de Fériet polynomials. A relatively simple and compact expression for the joint statistical distribution of the photon numbers in the different modes is obtained. In particular, this result enables one to give a simple characterization of separable and entangled states, as shown for two-mode and three-mode Gaussian states.
The application of thermally induced multistable composites to morphing aircraft structures
NASA Astrophysics Data System (ADS)
Mattioni, Filippo; Weaver, Paul M.; Potter, Kevin D.; Friswell, Michael I.
2008-03-01
One approach to morphing aircraft is to use bistable or multistable structures that have two or more stable equilibrium configurations to define a discrete set of shapes for the morphing structure. Moving between these stable states may be achieved using an actuation system or by aerodynamic loads. This paper considers three concepts for morphing aircraft based on multistable structures, namely a variable sweep wing, bistable blended winglets and a variable camber trailing edge. The philosophy behind these concepts is outlined, and simulated and experimental results are given.
Unconditional optimality of Gaussian attacks against continuous-variable quantum key distribution.
García-Patrón, Raúl; Cerf, Nicolas J
2006-11-10
A fully general approach to the security analysis of continuous-variable quantum key distribution (CV-QKD) is presented. Provided that the quantum channel is estimated via the covariance matrix of the quadratures, Gaussian attacks are shown to be optimal against all collective eavesdropping strategies. The proof is made strikingly simple by combining a physical model of measurement, an entanglement-based description of CV-QKD, and a recent powerful result on the extremality of Gaussian states [M. M. Wolf, Phys. Rev. Lett. 96, 080502 (2006)10.1103/PhysRevLett.96.080502].
Bayesian state space models for dynamic genetic network construction across multiple tissues.
Liang, Yulan; Kelemen, Arpad
2016-08-01
Construction of gene-gene interaction networks and potential pathways is a challenging and important problem in genomic research for complex diseases, and estimating the dynamic changes of the temporal correlations and non-stationarity is key in this process. In this paper, we develop dynamic state space models with hierarchical Bayesian settings to tackle this challenge for inferring the dynamic profiles and genetic networks associated with disease treatments. We treat both the stochastic transition matrix and the observation matrix as time-variant and include temporal correlation structures in the covariance matrix estimates in the multivariate Bayesian state space models. The unevenly spaced short time courses with unseen time points are treated as hidden state variables. Hierarchical Bayesian approaches with various prior and hyper-prior models, with Markov chain Monte Carlo and Gibbs sampling algorithms, are used to estimate the model parameters and the hidden state variables. We apply the proposed hierarchical Bayesian state space models to Affymetrix time course data sets from multiple tissues (liver, skeletal muscle, and kidney) following corticosteroid (CS) drug administration. Both simulation and real data analysis results show that the genomic changes over time and gene-gene interactions in response to CS treatment can be well captured by the proposed models. The proposed dynamic hierarchical Bayesian state space modeling approaches could be expanded and applied to other large-scale genomic data, such as next generation sequencing (NGS) data combined with real-time and time-varying electronic health record (EHR) data, for more comprehensive and robust systematic and network-based analysis, in order to transform big biomedical data into predictions and diagnostics for precision medicine and personalized healthcare with better decision making and patient outcomes.
Dynamic modeling and parameter estimation of a radial and loop type distribution system network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jun Qui; Heng Chen; Girgis, A.A.
1993-05-01
This paper presents a new identification approach to three-phase power system modeling and model reduction, treating the power system network as a multi-input, multi-output (MIMO) process. The model estimate can be obtained in discrete-time input-output form, discrete- or continuous-time state-space variable form, or frequency-domain impedance transfer function matrix form. An algorithm for determining the model structure of this MIMO process is described. The effect of measurement noise on the approach is also discussed. The approach has been applied to a sample system, and simulation results are also presented in this paper.
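The discrete-time input-output form of such a model can be illustrated with a generic single-input ARX least-squares fit on synthetic data. This is only a sketch of the identification idea, not the paper's three-phase MIMO algorithm; the model orders and noise level are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic first-order input-output process: y[k] = a*y[k-1] + b*u[k-1] + noise
a_true, b_true = 0.9, 0.5
u = rng.normal(size=200)                  # measured input (persistently exciting)
y = np.zeros(200)
for k in range(1, 200):
    y[k] = a_true * y[k - 1] + b_true * u[k - 1] + 0.01 * rng.normal()

# Regression matrix: each row [y[k-1], u[k-1]] predicts y[k];
# least squares recovers the model parameters from data alone.
Phi = np.column_stack([y[:-1], u[:-1]])
a_est, b_est = np.linalg.lstsq(Phi, y[1:], rcond=None)[0]
```

A MIMO extension would stack one such regression per output channel, with lagged values of all inputs and outputs as regressors.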
NASA Astrophysics Data System (ADS)
Young, Sean Gregory
The complex interactions between human health and the physical landscape and environment have been recognized, if not fully understood, since the ancient Greeks. Landscape epidemiology, sometimes called spatial epidemiology, is a sub-discipline of medical geography that uses environmental conditions as explanatory variables in the study of disease or other health phenomena. This theory suggests that pathogenic organisms (whether germs or larger vector and host species) are subject to environmental conditions that can be observed on the landscape, and by identifying where such organisms are likely to exist, areas at greatest risk of the disease can be derived. Machine learning is a sub-discipline of artificial intelligence that can be used to create predictive models from large and complex datasets. West Nile virus (WNV) is a relatively new infectious disease in the United States, and has a fairly well-understood transmission cycle that is believed to be highly dependent on environmental conditions. This study takes a geospatial approach to the study of WNV risk, using both landscape epidemiology and machine learning techniques. A combination of remotely sensed and in situ variables are used to predict WNV incidence with a correlation coefficient as high as 0.86. A novel method of mitigating the small numbers problem is also tested and ultimately discarded. Finally a consistent spatial pattern of model errors is identified, indicating the chosen variables are capable of predicting WNV disease risk across most of the United States, but are inadequate in the northern Great Plains region of the US.
Willingness-to-pay for schistosomiasis-related health outcomes in Kenya.
Kirigia, J M; Sambo, L G; Kainyu, L H
2000-01-01
Cost-benefit analysis (CBA) provides a framework for identifying, quantifying, and valuing in monetary terms all the important costs and consequences to society of competing disease interventions. Thus, CBA requires that the impacts of schistosomiasis interventions on beneficiaries' health be valued in monetary terms. Economic theory requires the use of the willingness-to-pay (WTP) approach in the valuation of changes in health as a result of intervention. It is the only approach which is consistent with the potential Pareto improvement principle, and hence consistent with CBA. The present study developed a health outcome measure and tested its operational feasibility. Contingent valuations for certain return to normal health from various health states, and for remaining in one's current health state, were elicited through direct interviews of randomly selected rice farmers, teachers, and health personnel in Kenya. The WTP to avoid the risk of advancing to the next more severe state seemed to be higher than the WTP for a return to normal health. Generally, there was a significant difference between the average WTP values of the farmer, teacher, and health personnel populations. The gender and occupation variable coefficients were positive and highly significant in all regressions. The coefficients of the other explanatory variables were generally not statistically significant, indicating that the medical expenses, anxiety cost, loss of earnings, and loss of work time implied in the various health state descriptions did not have a significant effect on respondents' expressed WTP values. The latter finding shows that there is a need for more research to identify the other (besides gender and occupation) determinants of expressed WTP values in Africa. This study has demonstrated that it is possible to elicit coherent WTP values in economically under-developed countries.
Further empirical work is clearly needed to at least address the validity and reliability of the contingent valuation approach and its measurements in Africa.
A particle filter for multi-target tracking in track before detect context
NASA Astrophysics Data System (ADS)
Amrouche, Naima; Khenchaf, Ali; Berkani, Daoud
2016-10-01
The track-before-detect (TBD) approach can be used to track a single target in a highly noisy radar scene, because it makes use of unthresholded observations and incorporates a binary target existence variable into its target state estimation process when implemented as a particle filter (PF). This paper proposes a recursive PF-TBD approach to detect multiple targets at low signal-to-noise ratios (SNRs). The algorithm's successful performance is demonstrated using a simulated two-target example.
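A bootstrap particle filter, the basic machinery that PF-TBD builds on, can be sketched for a simple scalar case. All parameters here are illustrative, and the TBD-specific elements (unthresholded radar measurements, the binary existence variable) are omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter(obs, n_particles=500, q=0.1, r=0.5):
    """Bootstrap PF for a scalar random walk observed in Gaussian noise."""
    particles = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for y in obs:
        particles = particles + rng.normal(0.0, q, n_particles)  # propagate
        w = np.exp(-0.5 * ((y - particles) / r) ** 2)            # likelihood weights
        w /= w.sum()
        idx = rng.choice(n_particles, n_particles, p=w)          # resample
        particles = particles[idx]
        estimates.append(particles.mean())                       # posterior mean
    return np.array(estimates)

# Synthetic truth and noisy observations
true_x = np.cumsum(rng.normal(0, 0.1, 50))
obs = true_x + rng.normal(0, 0.5, 50)
est = particle_filter(obs)
```

A TBD variant would augment each particle with an existence flag and weight particles against the raw sensor intensity map instead of a point measurement.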
A Novel Connectionist Network for Solving Long Time-Lag Prediction Tasks
NASA Astrophysics Data System (ADS)
Johnson, Keith; MacNish, Cara
Traditional Recurrent Neural Networks (RNNs) perform poorly on learning tasks involving long time-lag dependencies. More recent approaches such as LSTM and its variants significantly improve on RNNs' ability to learn this type of problem. We present an alternative approach to encoding temporal dependencies that associates temporal features with nodes rather than state values, where the nodes explicitly encode dependencies over variable time delays. We show promising results comparing the network's performance to LSTM variants on an extended Reber grammar task.
NASA Astrophysics Data System (ADS)
Pandey, Suraj
This study develops a spatial mapping of agro-ecological zones based on an earth observation model using a MODIS regional dataset, as a tool to identify key areas of the cropping system and to target climate change strategies. The tool is applied to the Indo-Gangetic Plains of north India to target domains of bio-physical characteristics and socio-economics with respect to the changing climate in the region. It draws on secondary data for spatially-explicit variables at the state/district level, which serve as indicators of climate variability under a sustainable livelihood approach (natural, social, and human). The study details the methodology used and generates spatial climate risk maps for composite indicators of a livelihood and vulnerability index in the region.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mohrmann, Johannes; Wood, Robert; McGibbon, Jeremy
Marine boundary layer (MBL) aerosol particles affect the climate through their interaction with MBL clouds. Although both MBL clouds and aerosol particles have pronounced seasonal cycles, the factors controlling seasonal variability of MBL aerosol particle concentration are not well-constrained. In this paper an aerosol budget is constructed representing the effects of wet deposition, free-tropospheric entrainment, primary surface sources, and advection on the MBL accumulation mode aerosol number concentration (Na). These terms are further parameterized, and by assuming that on seasonal timescales Na is in steady state, the budget equation is rearranged to form a diagnostic equation for Na based on observable variables. Using data primarily collected in the subtropical northeast Pacific during the MAGIC campaign (Marine ARM (Atmospheric Radiation Measurement) GPCI (GCSS Pacific Cross-section Intercomparison) Investigation of Clouds), estimates of both mean summer and winter Na concentrations are made using the simplified steady-state model and seasonal mean observed variables, and are found to match well with the observed Na. To attribute the modeled difference between summer and winter aerosol concentrations to individual observed variables (e.g. precipitation rate, free-tropospheric aerosol number concentration), a local sensitivity analysis is combined with the seasonal difference in observed variables. This analysis shows that despite wintertime precipitation frequency being lower than summer, the higher winter precipitation rate accounted for approximately 60% of the modeled seasonal difference in Na, which emphasizes the importance of marine stratocumulus precipitation in determining MBL aerosol concentrations on longer time scales.
NASA Astrophysics Data System (ADS)
Mohrmann, Johannes; Wood, Robert; McGibbon, Jeremy; Eastman, Ryan; Luke, Edward
2018-01-01
Marine boundary layer (MBL) aerosol particles affect the climate through their interaction with MBL clouds. Although both MBL clouds and aerosol particles have pronounced seasonal cycles, the factors controlling seasonal variability of MBL aerosol particle concentration are not well constrained. In this paper an aerosol budget is constructed representing the effects of wet deposition, free-tropospheric entrainment, primary surface sources, and advection on the MBL accumulation mode aerosol number concentration (Na). These terms are then parameterized, and by assuming that on seasonal time scales Na is in steady state, the budget equation is rearranged to form a diagnostic equation for Na based on observable variables. Using data primarily collected in the subtropical northeast Pacific during the MAGIC campaign (Marine ARM (Atmospheric Radiation Measurement) GPCI (GCSS Pacific Cross-Section Intercomparison) Investigation of Clouds), estimates of both mean summer and winter Na concentrations are made using the simplified steady state model and seasonal mean observed variables. These are found to match well with the observed Na. To attribute the modeled difference between summer and winter aerosol concentrations to individual observed variables (e.g., precipitation rate and free-tropospheric aerosol number concentration), a local sensitivity analysis is combined with the seasonal difference in observed variables. This analysis shows that despite wintertime precipitation frequency being lower than summer, the higher winter precipitation rate accounted for approximately 60% of the modeled seasonal difference in Na, which emphasizes the importance of marine stratocumulus precipitation in determining MBL aerosol concentrations on longer time scales.
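The steady-state budget idea above can be illustrated with a toy diagnostic. The functional form and every coefficient below are hypothetical placeholders for illustration only, not the study's parameterization or MAGIC-campaign values:

```python
# Toy steady-state aerosol number budget. Sources: entrainment of
# free-tropospheric aerosol (rate w_e, concentration n_ft) and a primary
# surface source (f_surf); sink: wet scavenging proportional to Na and
# the precipitation rate (k_wet * precip * Na).
def steady_state_na(w_e, n_ft, f_surf, k_wet, precip):
    # Setting dNa/dt = 0:
    #   0 = w_e * (n_ft - Na) + f_surf - k_wet * precip * Na
    # and solving for Na gives the diagnostic expression below.
    return (w_e * n_ft + f_surf) / (w_e + k_wet * precip)

# Same sources, doubled precipitation rate in "winter" (all values made up)
summer = steady_state_na(w_e=4e-3, n_ft=250.0, f_surf=0.05, k_wet=2e-3, precip=1.0)
winter = steady_state_na(w_e=4e-3, n_ft=250.0, f_surf=0.05, k_wet=2e-3, precip=2.0)
```

Even this toy version reproduces the qualitative conclusion: a higher precipitation rate in winter lowers the diagnosed steady-state Na, with sensitivities obtainable by perturbing one input at a time.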
Validation of a Self-Administered Computerized System to Detect Cognitive Impairment in Older Adults
Brinkman, Samuel D.; Reese, Robert J.; Norsworthy, Larry A.; Dellaria, Donna K.; Kinkade, Jacob W.; Benge, Jared; Brown, Kimberly; Ratka, Anna; Simpkins, James W.
2015-01-01
There is increasing interest in the development of economical and accurate approaches to identifying persons in the community who have mild, undetected cognitive impairments. Computerized assessment systems have been suggested as a viable approach to identifying these persons. The validity of a computerized assessment system for identification of memory and executive deficits in older individuals was evaluated in the current study. Volunteers (N = 235) completed a 3-hr battery of neuropsychological tests and a computerized cognitive assessment system. Participants were classified as impaired (n = 78) or unimpaired (n = 157) on the basis of the Mini Mental State Exam, Wechsler Memory Scale-III and the Trail Making Test (TMT), Part B. All six variables (three memory variables and three executive variables) derived from the computerized assessment differed significantly between groups in the expected direction. There was also evidence of temporal stability and concurrent validity. Application of computerized assessment systems for clinical practice and for identification of research participants is discussed in this article. PMID:25332303
Optimal control on hybrid ode systems with application to a tick disease model.
Ding, Wandi
2007-10-01
We are considering an optimal control problem for a type of hybrid system involving ordinary differential equations and a discrete time feature. One state variable has dynamics in only one season of the year and has a jump condition to obtain the initial condition for that corresponding season in the next year. The other state variable has continuous dynamics. Given a general objective functional, existence, necessary conditions and uniqueness for an optimal control are established. We apply our approach to a tick-transmitted disease model with age structure in which the tick dynamics changes seasonally while hosts have continuous dynamics. The goal is to maximize disease-free ticks and minimize infected ticks through an optimal control strategy of treatment with acaricide. Numerical examples are given to illustrate the results.
Sub-optimal control of fuzzy linear dynamical systems under granular differentiability concept.
Mazandarani, Mehran; Pariz, Naser
2018-05-01
This paper deals with sub-optimal control of a fuzzy linear dynamical system. The aim is to keep the state variables of the fuzzy linear dynamical system close to zero in an optimal manner. In the fuzzy dynamical system, the fuzzy derivative is considered as the granular derivative; and all the coefficients and initial conditions can be uncertain. The criterion for assessing the optimality is regarded as a granular integral whose integrand is a quadratic function of the state variables and control inputs. Using the relative-distance-measure (RDM) fuzzy interval arithmetic and calculus of variations, the optimal control law is presented as the fuzzy state variables feedback. Since the optimal feedback gains are obtained as fuzzy functions, they need to be defuzzified. This will result in the sub-optimal control law. This paper also sheds light on the restrictions imposed by the approaches which are based on fuzzy standard interval arithmetic (FSIA), and use strongly generalized Hukuhara and generalized Hukuhara differentiability concepts for obtaining the optimal control law. The granular eigenvalues notion is also defined. Using an RLC circuit mathematical model, it is shown that, due to their unnatural behavior in the modeling phenomenon, the FSIA-based approaches may obtain some eigenvalues sets that might be different from the inherent eigenvalues set of the fuzzy dynamical system. This is, however, not the case with the approach proposed in this study. The notions of granular controllability and granular stabilizability of the fuzzy linear dynamical system are also presented in this paper. Moreover, a sub-optimal control for regulating a Boeing 747 in longitudinal direction with uncertain initial conditions and parameters is gained. In addition, an uncertain suspension system of one of the four wheels of a bus is regulated using the sub-optimal control introduced in this paper. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
Quantum teleportation of nonclassical wave packets: An effective multimode theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benichi, Hugo; Takeda, Shuntaro; Lee, Noriyuki
2011-07-15
We develop a simple and efficient theoretical model to understand the quantum properties of broadband continuous variable quantum teleportation. We show that, if stated properly, the problem of multimode teleportation can be simplified to teleportation of a single effective mode that describes the input state temporal characteristic. Using that model, we show how the finite bandwidth of squeezing and external noise in the classical channel affect the output teleported quantum field. We choose an approach that is especially relevant for the case of non-Gaussian nonclassical quantum states and we finally back-test our model with recent experimental results.
McDowell, W.G.; Benson, A.J.; Byers, J.E.
2014-01-01
1. Two dominant drivers of species distributions are climate and habitat, both of which are changing rapidly. Understanding the relative importance of variables that can control distributions is critical, especially for invasive species that may spread rapidly and have strong effects on ecosystems. 2. Here, we examine the relative importance of climate and habitat variables in controlling the distribution of the widespread invasive freshwater clam Corbicula fluminea, and we model its future distribution under a suite of climate scenarios using logistic regression and maximum entropy modelling (MaxEnt). 3. Logistic regression identified climate variables as more important than habitat variables in controlling Corbicula distribution. MaxEnt modelling predicted Corbicula's range expansion westward and northward to occupy half of the contiguous United States. By 2080, Corbicula's potential range will expand 25–32%, with more than half of the continental United States being climatically suitable. 4. Our combination of multiple approaches has revealed the importance of climate over habitat in controlling Corbicula's distribution and validates the climate-only MaxEnt model, which can readily examine the consequences of future climate projections. 5. Given the strong influence of climate variables on Corbicula's distribution, as well as Corbicula's ability to disperse quickly and over long distances, Corbicula is poised to expand into New England and the northern Midwest of the United States. Thus, the direct effects of climate change will probably be compounded by the addition of Corbicula and its own influences on ecosystem function.
A fuzzy decision tree for fault classification.
Zio, Enrico; Baraldi, Piero; Popescu, Irina C
2008-02-01
In plant accident management, the control room operators are required to identify the causes of the accident, based on the different patterns of evolution developing in the monitored process variables. This task is often quite challenging, given the large number of process parameters monitored and the intense emotional states under which it is performed. To aid the operators, various techniques of fault classification have been engineered. An important requirement for their practical application is the physical interpretability of the relationships among the process variables underpinning the fault classification. In this view, the present work propounds a fuzzy approach to fault classification, which relies on fuzzy if-then rules inferred from the clustering of available preclassified signal data, which are then organized in a logical and transparent decision tree structure. The advantages offered by the proposed approach are precisely that a transparent fault classification model is mined out of the signal data and that the underlying physical relationships among the process variables are easily interpretable as linguistic if-then rules that can be explicitly visualized in the decision tree structure. The approach is applied to a case study regarding the classification of simulated faults in the feedwater system of a boiling water reactor.
Wind turbine power tracking using an improved multimodel quadratic approach.
Khezami, Nadhira; Benhadj Braiek, Naceur; Guillaud, Xavier
2010-07-01
In this paper, an improved multimodel optimal quadratic control structure for variable speed, pitch regulated wind turbines (operating at high wind speeds) is proposed in order to integrate high levels of wind power to actively provide a primary reserve for frequency control. On the basis of the nonlinear model of the studied plant, and taking into account the wind speed fluctuations, and the electrical power variation, a multimodel linear description is derived for the wind turbine, and is used for the synthesis of an optimal control law involving a state feedback, an integral action and an output reference model. This new control structure allows a rapid transition of the wind turbine generated power between different desired set values. This electrical power tracking is ensured with a high-performance behavior for all other state variables: turbine and generator rotational speeds and mechanical shaft torque; and smooth and adequate evolution of the control variables. 2010 ISA. Published by Elsevier Ltd. All rights reserved.
Recurrence-plot-based measures of complexity and their application to heart-rate-variability data.
Marwan, Norbert; Wessel, Niels; Meyerfeldt, Udo; Schirdewan, Alexander; Kurths, Jürgen
2002-08-01
The knowledge of transitions between regular, laminar or chaotic behaviors is essential to understand the underlying mechanisms behind complex systems. While several linear approaches are often insufficient to describe such processes, there are several nonlinear methods that, however, require rather long time observations. To overcome these difficulties, we propose measures of complexity based on vertical structures in recurrence plots and apply them to the logistic map as well as to heart-rate-variability data. For the logistic map these measures enable us not only to detect transitions between chaotic and periodic states, but also to identify laminar states, i.e., chaos-chaos transitions. The traditional recurrence quantification analysis fails to detect the latter transitions. Applying our measures to the heart-rate-variability data, we are able to detect and quantify the laminar phases before a life-threatening cardiac arrhythmia occurs thereby facilitating a prediction of such an event. Our findings could be of importance for the therapy of malignant cardiac arrhythmias.
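In the spirit of the vertical-structure measures described above, a minimal recurrence-plot sketch follows. The test signal, threshold, and minimum line length are illustrative choices, not the paper's parameters:

```python
import numpy as np

def recurrence_matrix(x, eps):
    """Binary recurrence plot: R[i, j] = 1 where states i and j are closer than eps."""
    d = np.abs(x[:, None] - x[None, :])   # pairwise distances (scalar time series)
    return (d < eps).astype(int)

def laminarity(R, lmin=2):
    """Fraction of recurrence points forming vertical lines of length >= lmin."""
    total, laminar = 0, 0
    for col in R.T:
        run = 0
        for v in np.append(col, 0):       # trailing 0 flushes the last run
            if v:
                run += 1
            else:
                total += run
                if run >= lmin:
                    laminar += run
                run = 0
    return laminar / total if total else 0.0

x = np.sin(np.linspace(0, 8 * np.pi, 200))  # a regular, laminar-rich signal
R = recurrence_matrix(x, eps=0.1)
lam = laminarity(R)
```

Real applications would embed the series in a higher-dimensional phase space before computing distances; the vertical-line counting itself is unchanged.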
Hargrove, James L; Heinz, Grete; Heinz, Otto
2008-01-01
Background: This study evaluated whether the changes in several anthropometric and functional measures during caloric restriction combined with walking and treadmill exercise would fit a simple model of approach to steady state (a plateau) that can be solved using spreadsheet software (Microsoft Excel®). We hypothesized that transitions in waist girth and several body compartments would fit a simple exponential model that approaches a stable steady state. Methods: The model (an equation) was applied to outcomes reported in the Minnesota starvation experiment using Microsoft Excel's Solver® function to derive rate parameters (k) and projected steady-state values. However, data for most end-points were available only at t = 0, 12 and 24 weeks of caloric restriction. Therefore, we derived 2 new equations that enable model solutions to be calculated from 3 equally spaced data points. Results: For the group of male subjects in the Minnesota study, body mass declined with a first order rate constant of about 0.079 wk^-1. The fractional rate of loss of fat free mass, which includes components that remained almost constant during starvation, was 0.064 wk^-1, compared to a rate of loss of fat mass of 0.103 wk^-1. The rate of loss of abdominal fat, as exemplified by the change in the waist girth, was 0.213 wk^-1. On average, 0.77 kg was lost per cm of waist girth. Other girths showed rates of loss between 0.085 and 0.131 wk^-1. Resting energy expenditure (REE) declined at 0.131 wk^-1. Changes in heart volume, hand strength, work capacity and N excretion showed rates of loss in the same range. The group of 32 subjects was close to steady state or had already reached steady state for the variables under consideration at the end of semi-starvation.
Conclusion: When energy intake is changed to new, relatively constant levels, while physical activity is maintained, changes in several anthropometric and physiological measures can be modeled as an exponential approach to steady state using software that is widely available. The 3 point method for parameter estimation provides a criterion for testing whether change in a variable can be usefully modelled with exponential kinetics within the time range for which data are available. PMID:18840293
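The three-point estimation idea can be made concrete. For y(t) = y_ss + (y0 - y_ss) * exp(-k*t) sampled at three equally spaced times, both k and the steady-state value y_ss have closed-form solutions; the expressions below are the standard algebraic result for this model, written out as a sketch rather than taken verbatim from the paper:

```python
import math

def three_point_fit(y1, y2, y3, dt):
    """Fit y(t) = y_ss + (y0 - y_ss)*exp(-k*t) from three observations spaced dt apart.

    With r = exp(-k*dt), successive differences satisfy (y2 - y3) = r*(y1 - y2),
    which yields k; eliminating r then gives y_ss in closed form.
    """
    y_ss = (y1 * y3 - y2 ** 2) / (y1 + y3 - 2 * y2)   # projected steady state
    k = math.log((y1 - y2) / (y2 - y3)) / dt          # first-order rate constant
    return k, y_ss

# Synthetic check using the paper's reported body-mass rate (0.079 wk^-1),
# with an assumed start of 70 kg approaching a hypothetical 52 kg plateau,
# sampled at t = 0, 12, 24 weeks as in the Minnesota data.
k_true, y_ss_true, y0 = 0.079, 52.0, 70.0
y = [y_ss_true + (y0 - y_ss_true) * math.exp(-k_true * t) for t in (0, 12, 24)]
k_est, y_ss_est = three_point_fit(*y, dt=12)
```

On noise-free data the recovery is exact, which is why three equally spaced points suffice; with measurement noise the formulas still give a quick plausibility check before a full Solver fit.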
NASA Astrophysics Data System (ADS)
Yang, P.; Fekete, B. M.; Rosenzweig, B.; Lengyel, F.; Vorosmarty, C. J.
2012-12-01
Atmospheric dynamics are essential inputs to Regional-scale Earth System Models (RESMs). Variables including surface air temperature, total precipitation, solar radiation, wind speed and humidity must be downscaled from coarse-resolution, global General Circulation Models (GCMs) to the high temporal and spatial resolution required for regional modeling. However, this downscaling procedure can be challenging due to the need to correct for bias from the GCM and to capture the spatiotemporal heterogeneity of the regional dynamics. In this study, the results obtained using several downscaling techniques and observational datasets were compared for a RESM of the Northeast Corridor of the United States. Previous efforts have enhanced GCM model outputs through bias correction using novel techniques. For example, the Potsdam Institute for Climate Impact Research developed a series of bias-corrected GCMs towards the next generation of climate change scenarios (Schiermeier, 2012; Moss et al., 2010). Techniques to better represent the heterogeneity of climate variables have also been improved using statistical approaches (Maurer, 2008; Abatzoglou, 2011). For this study, four downscaling approaches to transform bias-corrected HADGEM2-ES Model output (daily at 0.5 x 0.5 degree) to the 3' x 3' (longitude x latitude) daily and monthly resolution required for the Northeast RESM were compared: 1) bilinear interpolation, 2) daily bias-corrected spatial downscaling (D-BCSD) with gridded meteorological datasets (developed by Abatzoglou, 2011), 3) monthly bias-corrected spatial disaggregation (M-BCSD) with CRU (Climatic Research Unit) data, and 4) dynamic downscaling based on the Weather Research and Forecasting (WRF) model. Spatio-temporal analysis of the variability in precipitation was conducted over the study domain. Validation of the variables from the different downscaling methods against observational datasets was carried out for assessment of the downscaled climate model outputs.
The effects of using the different approaches to downscale atmospheric variables (specifically air temperature and precipitation) for use as inputs to the Water Balance Model (WBMPlus, Vorosmarty et al., 1998; Wisser et al., 2008) for simulation of daily discharge and monthly stream flow in the Northeast US over a 100-year period in the 21st century were also assessed. Statistical techniques, especially monthly bias-corrected spatial disaggregation (M-BCSD), showed a potential advantage over the other methods for daily discharge and monthly stream flow simulation. However, dynamic downscaling will provide important complements to the statistical approaches tested.
A Complex Systems Approach to Causal Discovery in Psychiatry.
Saxe, Glenn N; Statnikov, Alexander; Fenyo, David; Ren, Jiwen; Li, Zhiguo; Prasad, Meera; Wall, Dennis; Bergman, Nora; Briggs, Ernestine C; Aliferis, Constantin
2016-01-01
Conventional research methodologies and data analytic approaches in psychiatric research are unable to reliably infer causal relations without experimental designs, or to make inferences about the functional properties of the complex systems in which psychiatric disorders are embedded. This article describes a series of studies to validate a novel hybrid computational approach, the Complex Systems-Causal Network (CS-CN) method, designed to integrate causal discovery within a complex systems framework for psychiatric research. The CS-CN method was first applied to an existing dataset on psychopathology in 163 children hospitalized with injuries (validation study). Next, it was applied to a much larger dataset of traumatized children (replication study). Finally, the CS-CN method was applied in a controlled experiment using a 'gold standard' dataset for causal discovery and compared with other methods for accurately detecting causal variables (resimulation controlled experiment). The CS-CN method successfully detected a causal network of 111 variables and 167 bivariate relations in the initial validation study. This causal network had well-defined adaptive properties, and a set of variables was found that disproportionately contributed to these properties. Modeling the removal of these variables resulted in significant loss of adaptive properties. The CS-CN method was successfully applied in the replication study and performed better than traditional statistical methods, and similarly to state-of-the-art causal discovery algorithms, in the causal detection experiment. The CS-CN method was validated, replicated, and yielded both novel and previously validated findings related to risk factors and potential treatments of psychiatric disorders. The novel approach yields both fine-grain (micro) and high-level (macro) insights and thus represents a promising approach for complex systems-oriented research in psychiatry.
Kong, Ru; Li, Jingwei; Orban, Csaba; Sabuncu, Mert R; Liu, Hesheng; Schaefer, Alexander; Sun, Nanbo; Zuo, Xi-Nian; Holmes, Avram J; Eickhoff, Simon B; Yeo, B T Thomas
2018-06-06
Resting-state functional magnetic resonance imaging (rs-fMRI) offers the opportunity to delineate individual-specific brain networks. A major question is whether individual-specific network topography (i.e., location and spatial arrangement) is behaviorally relevant. Here, we propose a multi-session hierarchical Bayesian model (MS-HBM) for estimating individual-specific cortical networks and investigate whether individual-specific network topography can predict human behavior. The multiple layers of the MS-HBM explicitly differentiate intra-subject (within-subject) from inter-subject (between-subject) network variability. By ignoring intra-subject variability, previous network mappings might confuse intra-subject variability for inter-subject differences. Compared with other approaches, MS-HBM parcellations generalized better to new rs-fMRI and task-fMRI data from the same subjects. More specifically, MS-HBM parcellations estimated from a single rs-fMRI session (10 min) showed comparable generalizability as parcellations estimated by 2 state-of-the-art methods using 5 sessions (50 min). We also showed that behavioral phenotypes across cognition, personality, and emotion could be predicted by individual-specific network topography with modest accuracy, comparable to previous reports predicting phenotypes based on connectivity strength. Network topography estimated by MS-HBM was more effective for behavioral prediction than network size, as well as network topography estimated by other parcellation approaches. Thus, similar to connectivity strength, individual-specific network topography might also serve as a fingerprint of human behavior.
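The intra- versus inter-subject distinction at the heart of the MS-HBM can be illustrated with a toy variance decomposition. This is a minimal sketch with invented variance components, not the authors' model: the point is only that single-session estimates mix within- and between-subject variability, while pooling sessions (as a hierarchical model does implicitly) shrinks the within-subject part.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_sessions = 50, 4
sigma_between, sigma_within = 1.0, 2.0  # hypothetical variance components

# True subject-level network feature (e.g., a parcel's size or location index)
truth = rng.normal(0.0, sigma_between, n_subjects)
# Session-level measurements add intra-subject (session-to-session) noise
sessions = truth[:, None] + rng.normal(0.0, sigma_within, (n_subjects, n_sessions))

# A single-session estimate mixes intra- and inter-subject variability...
var_single = sessions[:, 0].var(ddof=1)
# ...while averaging sessions reduces the intra-subject contribution
# by a factor of n_sessions.
var_pooled = sessions.mean(axis=1).var(ddof=1)

print(f"single-session variance: {var_single:.2f}")  # approx. sigma_b^2 + sigma_w^2
print(f"pooled-session variance: {var_pooled:.2f}")  # approx. sigma_b^2 + sigma_w^2/4
```

With these numbers, roughly 80% of the apparent between-subject spread in a single session is session noise, which is why confusing the two variance sources matters for individual-specific parcellation.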
NASA Astrophysics Data System (ADS)
Tesser, D.; Hoang, L.; McDonald, K. C.
2017-12-01
Efforts to improve municipal water supply systems increasingly rely on an ability to elucidate variables that drive hydrologic dynamics within large watersheds. However, fundamental model variables such as precipitation, soil moisture, evapotranspiration, and soil freeze/thaw state remain difficult to measure empirically across large, heterogeneous watersheds. Satellite remote sensing presents a method to validate these spatially and temporally dynamic variables as well as better inform the watershed models that monitor the water supply for many of the planet's most populous urban centers. PALSAR 2 L-band, Sentinel 1 C-band, and SMAP L-band scenes covering the Cannonsville branch of the New York City (NYC) water supply watershed were obtained for the period of March 2015 - October 2017. The SAR data provide information on soil moisture, freeze/thaw state, seasonal surface inundation, and variable source areas within the study site. Integrating the remote sensing products with watershed model outputs and ground survey data improves the representation of related processes in the Soil and Water Assessment Tool (SWAT) utilized to monitor the NYC water supply. PALSAR 2 supports accurate mapping of the extent of variable source areas while Sentinel 1 presents a method to model the timing and magnitude of snowmelt runoff events. The SMAP active radar soil moisture product directly validates SWAT outputs at the subbasin level. This blended approach verifies the distribution of soil wetness classes within the watershed that delineate Hydrologic Response Units (HRUs) in the modified SWAT-Hillslope. The research expands the ability to model the NYC water supply source beyond a subset of the watershed while also providing high resolution information across a larger spatial scale. The global availability of these remote sensing products provides a method to capture fundamental hydrology variables in regions where current modeling efforts and in situ data remain limited.
NASA Astrophysics Data System (ADS)
Ogle, K.
2011-12-01
Many plant and ecosystem processes in arid and semiarid systems may be affected by antecedent environmental conditions (e.g., precipitation patterns, soil water availability, temperature) that integrate over past days, weeks, months, seasons, or years. However, the importance of such antecedent exogenous effects relative to conditions occurring at the time of the observed process is relatively unexplored. Even less is known about the potential importance of antecedent endogenous effects that describe the influence of past ecosystem states on the current ecosystem state; e.g., how is current ecosystem productivity related to past productivity patterns? We hypothesize that incorporation of antecedent exogenous and endogenous factors can improve our predictive understanding of many plant and ecosystem processes, especially in arid and semiarid ecosystems. Furthermore, the common approach to quantifying the effects of antecedent (exogenous) variables relies on arbitrary, deterministic definitions of antecedent variables that (1) may not accurately describe the role of antecedent conditions and (2) ignore uncertainty associated with applying deterministic definitions. In this study, we employ a stochastic framework that (1) computes the antecedent variables by estimating the relative importance of conditions experienced each time unit into the past, which also provides insight into potential lag responses, and (2) estimates the effect of antecedent factors on the response variable of interest.
We employ this approach to explore the potential roles of antecedent exogenous and endogenous influences in three settings that illustrate the: (1) importance of antecedent precipitation for net primary productivity in the shortgrass steppe in northern Colorado, (2) dependency of tree growth on antecedent precipitation and past growth states for pinyon growing in western Colorado, and (3) influence of antecedent soil water and prior root status on observed root growth in the Mojave Desert FACE experiment. All three examples suggest that antecedent conditions are critical to predicting different indices of productivity such that the incorporation of antecedent effects explained an additional 20-40% of the variation in the productivity responses. Antecedent endogenous factors were important for understanding tree and root growth, suggesting a potential biological inertia effect that is likely linked to labile carbon storage and allocation strategies. The role of antecedent exogenous (water) variables suggests a lag response whose duration and timing differs according to the time scale of the response variable. In summary, antecedent water availability and past endogenous states appear critical to understanding plant and ecosystem productivity in arid and semiarid systems, and this study describes a stochastic framework for quantifying the potential influence of such antecedent conditions.
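The antecedent-variable construction described above can be sketched as a weighted sum over past conditions. All data and weights below are hypothetical; in the stochastic framework the monthly importance weights are estimated within the model (their shape is what reveals lag responses) rather than fixed a priori as here.

```python
import numpy as np

rng = np.random.default_rng(1)
months_back = 12
precip = rng.gamma(2.0, 20.0, 200)  # hypothetical monthly precipitation record (mm)

# Monthly importance weights, most recent month first. Here they decay
# exponentially; in the paper's framework they are estimated, not assumed.
w = np.exp(-0.3 * np.arange(months_back))
w /= w.sum()  # normalized so the antecedent variable stays on the precipitation scale

def antecedent(t):
    """Weighted sum of the preceding year's precipitation at month t."""
    past = precip[t - months_back:t][::-1]  # most recent month first
    return float(np.dot(w, past))

ante = np.array([antecedent(t) for t in range(months_back, len(precip))])
print(f"{len(ante)} antecedent values, mean {ante.mean():.1f} mm")
```

The antecedent variable then enters a regression for the response (e.g., NPP or ring width) in place of, or alongside, current-month conditions.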
Minorities, the Poor and School Finance Reform. Vol. 9: Summary and Conclusions.
ERIC Educational Resources Information Center
Brischetto, Robert
In this concluding volume of a nine-volume study of the impact of school finance reform on the poor and minorities, the author summarizes the project's methods, variables, findings, and conclusions about reform in the six states of California, Colorado, Florida, Michigan, New Mexico, and Texas. He first discusses the two general approaches to…
John W. Coulston
2011-01-01
Tropospheric ozone occurs at phytotoxic levels in the United States (Lefohn and Pinkerton 1988). Several plant species, including commercially important timber species, are sensitive to elevated ozone levels. Exposure to elevated ozone can cause growth reduction and foliar injury and make trees more susceptible to secondary stressors such as insects and pathogens (...
Climate-driven vital rates do not always mean climate-driven population.
Tavecchia, Giacomo; Tenan, Simone; Pradel, Roger; Igual, José-Manuel; Genovart, Meritxell; Oro, Daniel
2016-12-01
Current climatic changes have increased the need to forecast population responses to climate variability. A common approach to address this question is through models that project current population state using the functional relationship between demographic rates and climatic variables. We argue that this approach can lead to erroneous conclusions when interpopulation dispersal is not considered. We found that immigration can release the population from climate-driven trajectories even when local vital rates are climate dependent. We illustrated this using individual-based data on a trans-equatorial migratory seabird, the Scopoli's shearwater Calonectris diomedea, in which the variation of vital rates has been associated with large-scale climatic indices. We compared the population annual growth rate λi, estimated using local climate-driven parameters, with ρi, a population growth rate directly estimated from individual information that accounts for immigration. While λi varied as a function of climatic variables, reflecting the climate-dependent parameters, ρi did not, indicating that dispersal decouples the relationship between population growth and climate variables from that between climatic variables and vital rates. Our results suggest caution when assessing demographic effects of climatic variability, especially in open populations of very mobile organisms such as fish, marine mammals, bats, or birds. When a population model cannot be validated or is not detailed enough, ignoring immigration might lead to misleading climate-driven projections. © 2016 John Wiley & Sons Ltd.
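The decoupling argument can be illustrated with a deliberately contrived toy simulation (all rates are invented; "movement" here is net immigration minus emigration and may be negative): local vital rates track a climatic index perfectly, yet compensatory movement makes the realized growth rate climate-independent.

```python
import numpy as np

rng = np.random.default_rng(2)
years = 30
climate = rng.normal(0, 1, years)  # standardized climatic index (hypothetical)

# Local, climate-driven vital rates: adult survival and per-capita recruitment
survival = 0.85 + 0.05 * climate
recruitment = 0.25 + 0.10 * climate
lam = survival + recruitment  # climate-driven local growth rate, lambda_t

# Net movement (immigration minus emigration) buffers poor local years,
# so the realized growth rate rho_t is decoupled from the climate signal.
movement = 1.10 - lam + rng.normal(0, 0.01, years)
rho = lam + movement

print(f"corr(lambda, climate) = {np.corrcoef(lam, climate)[0, 1]:.2f}")
print(f"corr(rho, climate)    = {np.corrcoef(rho, climate)[0, 1]:.2f}")
```

A model projecting only lambda would forecast strong climate dependence that the open population never exhibits, which is the caution the abstract raises.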
The Relationship between Anxiety and the Social Judgements of Approachability And Trustworthiness
Willis, Megan L.; Dodd, Helen F.; Palermo, Romina
2013-01-01
The aim of the current study was to examine the relationship between individual differences in anxiety and the social judgements of trustworthiness and approachability. We assessed levels of state and trait anxiety in eighty-two participants who rated the trustworthiness and approachability of a series of unexpressive faces. Higher levels of trait anxiety (controlling for age, sex and state anxiety) were associated with the judgement of faces as less trustworthy. In contrast, there was no significant association between trait anxiety and judgements of approachability. These findings indicate that trait anxiety is a significant predictor of trustworthiness evaluations and illustrate the importance of considering the role of individual differences in the evaluation of trustworthiness. We propose that trait anxiety may be an important variable to control for in future studies assessing the cognitive and neural mechanisms underlying trustworthiness. This is likely to be particularly important for studies involving clinical populations who often experience atypical levels of anxiety. PMID:24098566
Semiconductor-inspired design principles for superconducting quantum computing.
Shim, Yun-Pil; Tahan, Charles
2016-03-17
Superconducting circuits offer tremendous design flexibility in the quantum regime, culminating most recently in the demonstration of few-qubit systems approaching the threshold for fault-tolerant quantum information processing. Competition in the solid state comes from semiconductor qubits, where nature has bestowed some very useful properties which can be utilized for spin qubit-based quantum computing. Here we begin to explore how selective design principles deduced from spin-based systems could be used to advance superconducting qubit science. We take an initial step along this path by proposing an encoded qubit approach realizable with state-of-the-art tunable Josephson junction qubits. Our results show that this design philosophy holds promise, enables microwave-free control, and offers a pathway to future qubit designs with new capabilities, such as higher fidelity or, perhaps, operation at higher temperature. The approach is also especially suited to qubits based on variable super-semi junctions.
Nonparametric variational optimization of reaction coordinates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Banushkina, Polina V.; Krivov, Sergei V., E-mail: s.krivov@leeds.ac.uk
State-of-the-art realistic simulations of complex atomic processes commonly produce trajectories of large size, making the development of automated analysis tools very important. A popular approach aimed at extracting dynamical information consists of projecting these trajectories onto optimally selected reaction coordinates or collective variables. For equilibrium dynamics between any two boundary states, the committor function, also known as the folding probability in protein folding studies, is often considered as the optimal coordinate. To determine it, one selects a functional form with many parameters and trains it on the trajectories using various criteria. A major problem with such an approach is that a poor initial choice of the functional form may lead to sub-optimal results. Here, we describe an approach which allows one to optimize the reaction coordinate without selecting its functional form and thus avoiding this source of error.
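The committor can in principle be estimated nonparametrically from a long equilibrium trajectory by asking, at each visit to a state, whether the trajectory next reaches boundary state B before A. A toy sketch on a symmetric one-dimensional random walk, where gambler's-ruin theory gives the exact answer q(x) = x/B (the walk and all parameters are illustrative, not the paper's method):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy equilibrium trajectory: a symmetric random walk between boundary
# states A = 0 and B = 20.
n_steps, A, B = 100_000, 0, 20
x = np.empty(n_steps, dtype=int)
x[0] = (A + B) // 2
for t in range(1, n_steps):
    x[t] = min(max(x[t - 1] + rng.choice((-1, 1)), A), B)

# For each time step, record whether the trajectory reaches B before A from
# that point on (-1 marks the undecided tail after the last boundary visit).
fate = np.full(n_steps, -1)
current = -1
for t in range(n_steps - 1, -1, -1):
    if x[t] == B:
        current = 1
    elif x[t] == A:
        current = 0
    fate[t] = current

# Nonparametric committor estimate: average fate over every visit to a state.
hits = np.zeros(B + 1)
visits = np.zeros(B + 1)
for t in range(n_steps):
    if fate[t] >= 0:
        visits[x[t]] += 1
        hits[x[t]] += fate[t]
q_hat = np.divide(hits, visits, out=np.zeros(B + 1), where=visits > 0)

print(q_hat[[0, 5, 10, 15, 20]].round(2))  # exact values: 0, 0.25, 0.5, 0.75, 1
```

For high-dimensional trajectories one cannot bin states this way, which is why a coordinate (or, as here, a nonparametric optimization of one) is needed.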
Calcott, Rebecca D.; Berkman, Elliot T.
2014-01-01
In the present studies, we aimed to understand how approach and avoidance states affect attentional flexibility by examining attentional shifts on a trial-by-trial basis. We also examined how a novel construct in this area, task context, might interact with motivation to influence attentional flexibility. Participants completed a modified composite letter task in which the ratio of global to local targets was varied by block, making different levels of attentional focus beneficial to performance on different blocks. Study 1 demonstrated that, in the absence of a motivation manipulation, switch costs were lowest on blocks with an even ratio of global and local trials and were higher on blocks with an uneven ratio. Other participants completed the task while viewing pictures (Studies 2 and 3) and assuming arm positions (Studies 2 and 4) to induce approach, avoidance, and neutral motivational states. Avoidance motivation reduced switch costs in evenly proportioned contexts, whereas approach motivation reduced switch costs in mostly global contexts. Additionally, approach motivation imparted a similar switch cost magnitude across different contexts, whereas avoidance and neutral states led to variable switch costs depending on the context. Subsequent analyses revealed that these effects were driven largely by faster switching to local targets on mostly global blocks in the approach condition. These findings suggest that avoidance facilitates attentional shifts when switches are frequent, whereas approach facilitates responding to rare or unexpected local stimuli. The main implication of these results is that motivation has different effects on attentional shifts depending on the context. PMID:24294866
Sohl, Terry L.; Wimberly, Michael; Radeloff, Volker C.; Theobald, David M.; Sleeter, Benjamin M.
2016-01-01
A variety of land-use and land-cover (LULC) models operating at scales from local to global have been developed in recent years, including a number of models that provide spatially explicit, multi-class LULC projections for the conterminous United States. This diversity of modeling approaches raises the question: how consistent are their projections of future land use? We compared projections from six LULC modeling applications for the United States and assessed quantitative, spatial, and conceptual inconsistencies. Each set of projections provided multiple scenarios covering a period from roughly 2000 to 2050. Given the unique spatial, thematic, and temporal characteristics of each set of projections, individual projections were aggregated to a common set of basic, generalized LULC classes (i.e., cropland, pasture, forest, range, and urban) and summarized at the county level across the conterminous United States. We found very little agreement in projected future LULC trends and patterns among the different models. Variability among scenarios for a given model was generally lower than variability among different models, in terms of both trends in the amounts of basic LULC classes and their projected spatial patterns. Even when different models assessed the same purported scenario, model projections varied substantially. Projections of agricultural trends were often far above the maximum historical amounts, raising concerns about the realism of the projections. Comparisons among models were hindered by major discrepancies in categorical definitions, and suggest a need for standardization of historical LULC data sources. To capture a broader range of uncertainties, ensemble modeling approaches are also recommended. However, the vast inconsistencies among LULC models raise questions about the theoretical and conceptual underpinnings of current modeling approaches. 
Given the substantial effects that land-use change can have on ecological and societal processes, there is a need for improvement in LULC theory and modeling capabilities to improve acceptance and use of regional- to national-scale LULC projections for the United States and elsewhere.
Feeney, Daniel F; Meyer, François G; Noone, Nicholas; Enoka, Roger M
2017-10-01
Motor neurons appear to be activated with a common input signal that modulates the discharge activity of all neurons in the motor nucleus. It has proven difficult for neurophysiologists to quantify the variability in a common input signal, but characterization of such a signal may improve our understanding of how the activation signal varies across motor tasks. Contemporary methods of quantifying the common input to motor neurons rely on compiling discrete action potentials into continuous time series, assuming the motor pool acts as a linear filter, and requiring signals to be of sufficient duration for frequency analysis. We introduce a state-space model in which the discharge activity of motor neurons is modeled as inhomogeneous Poisson processes and propose a method to quantify an abstract latent trajectory that represents the common input received by motor neurons. The approach also approximates the variation in synaptic noise in the common input signal. The model is validated with four data sets: a simulation of 120 motor units, a pair of integrate-and-fire neurons with a Renshaw cell providing inhibitory feedback, the discharge activity of 10 integrate-and-fire neurons, and the discharge times of concurrently active motor units during an isometric voluntary contraction. The simulations revealed that a latent state-space model is able to quantify the trajectory and variability of the common input signal across all four conditions. When compared with the cumulative spike train method of characterizing common input, the state-space approach was more sensitive to the details of the common input current and was less influenced by the duration of the signal. The state-space approach appears to be capable of detecting rather modest changes in common input signals across conditions.
NEW & NOTEWORTHY We propose a state-space model that explicitly delineates a common input signal sent to motor neurons and the physiological noise inherent in synaptic signal transmission. This is the first application of a deterministic state-space model to represent the discharge characteristics of motor units during voluntary contractions. Copyright © 2017 the American Physiological Society.
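A rough sketch of the generative side of such a model, with a crude smoothing-based stand-in for the actual state-space inference (all names, rates, and the recovery step are illustrative, not the authors' implementation): a shared latent drive modulates the Poisson firing of each motor neuron, and the latent trajectory can be recovered up to scale and offset from the pooled counts.

```python
import numpy as np

rng = np.random.default_rng(4)
n_bins, n_neurons, dt = 1000, 10, 0.01  # 10 s of activity in 10 ms bins

# Latent common input: a slow random walk (the "latent trajectory")
z = np.cumsum(rng.normal(0, 0.02, n_bins))

# Each motor neuron fires as an inhomogeneous Poisson process whose rate is
# modulated by the shared latent input; gains differ across neurons.
gains = rng.uniform(0.5, 1.5, n_neurons)
rates = 20.0 * np.exp(np.outer(z, gains))  # spikes/s per neuron
counts = rng.poisson(rates * dt)           # binned spike counts

# Crude latent estimate: log of the smoothed population rate. A state-space
# model would instead run forward filtering on the Poisson likelihood, but
# this recovers the same trajectory up to an affine transform.
kernel = np.ones(25) / 25
pop = np.convolve(counts.sum(axis=1), kernel, mode="same") + 1e-9
z_hat = np.log(pop)

r = np.corrcoef(z[50:-50], z_hat[50:-50])[0, 1]
print(f"correlation between true and recovered latent input: {r:.2f}")
```

The state-space formulation additionally yields credible intervals on the trajectory and an estimate of the synaptic-noise variance, which this smoothing shortcut does not provide.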
Bassani, Diego G; Corsi, Daniel J; Gaffey, Michelle F; Barros, Aluisio J D
2014-01-01
Worse health outcomes including higher morbidity and mortality are most often observed among the poorest fractions of a population. In this paper we present and validate national, regional and state-level distributions of national wealth index scores, for urban and rural populations, derived from household asset data collected in six survey rounds in India between 1992-3 and 2007-8. These new indices and their sub-national distributions allow for comparative analyses of a standardized measure of wealth across time and at various levels of population aggregation in India. Indices were derived through principal components analysis (PCA) performed using standardized variables from a correlation matrix to minimize differences in variance. Valid and simple indices were constructed with the minimum number of assets needed to produce scores with enough variability to allow definition of unique decile cut-off points in each urban and rural area of all states. For all indices, the first PCA components explained between 36% and 43% of the variance in household assets. Using sub-national distributions of national wealth index scores, mean height-for-age z-scores increased from the poorest to the richest wealth quintiles for all surveys, and stunting prevalence was higher among the poorest and lower among the wealthiest. Urban and rural decile cut-off values for India, for the six regions and for the 24 major states revealed large variability in wealth by geographical area and level, and rural wealth score gaps exceeded those observed in urban areas. The large variability in sub-national distributions of national wealth index scores indicates the importance of accounting for such variation when constructing wealth indices and deriving score distribution cut-off points. 
Such an approach allows for proper within-sample economic classification, resulting in scores that are valid indicators of wealth and correlate well with health outcomes, and enables wealth-related analyses at whichever geographical area and level may be most informative for policy-making processes.
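The index construction can be sketched with simulated household data (the asset list, sample size, and loadings are all invented): standardize the binary asset indicators, take the first principal component of their correlation matrix as the wealth score, and cut the score distribution at deciles.

```python
import numpy as np

rng = np.random.default_rng(5)
n_households, n_assets = 500, 8

# Hypothetical binary asset indicators (TV, fridge, vehicle, ...), with
# ownership probability driven by an unobserved wealth level.
wealth = rng.normal(0, 1, n_households)
difficulty = np.linspace(-1.5, 1.5, n_assets)  # cheap -> expensive assets
assets = (wealth[:, None] >
          difficulty + rng.normal(0, 1, (n_households, n_assets))).astype(float)

# Standardize variables, i.e. work from the correlation matrix as in the paper
X = (assets - assets.mean(axis=0)) / assets.std(axis=0)

# Wealth index score = projection onto the first principal component
eigvals, eigvecs = np.linalg.eigh(np.cov(X, rowvar=False))
pc1 = eigvecs[:, -1]              # eigenvector of the largest eigenvalue
scores = X @ pc1
explained = eigvals[-1] / eigvals.sum()

# Decile cut-off points for classifying households within this population
deciles = np.percentile(scores, np.arange(10, 100, 10))
print(f"variance explained by PC1: {explained:.0%}")
print(f"|corr(score, latent wealth)|: {abs(np.corrcoef(scores, wealth)[0, 1]):.2f}")
```

In the study this is done per survey round, and the resulting national score distributions are then summarized separately for urban and rural populations of each state to obtain sub-national cut-offs.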
Methodology for the systems engineering process. Volume 2: Technical parameters
NASA Technical Reports Server (NTRS)
Nelson, J. H.
1972-01-01
A scheme based on starting the logic networks from the development and mission factors that are of primary concern in an aerospace system is described. This approach required identifying the primary states (design, design verification, premission, mission, postmission), identifying the attributes within each state (performance capability, survival, evaluation, operation, etc), and then developing the generic relationships of variables for each branch. To illustrate this concept, a system was used that involved a launch vehicle and payload for an earth orbit mission. Examination showed that this example was sufficient to illustrate the concept. A more complicated mission would follow the same basic approach, but would have more extensive sets of generic trees and more correlation points between branches. It has been shown that in each system state (production, test, and use), a logic could be developed to order and classify the parameters involved in the translation from general requirements to specific requirements for system elements.
NASA Astrophysics Data System (ADS)
Gassara, H.; El Hajjaji, A.; Chaabane, M.
2017-07-01
This paper investigates the problem of observer-based control for two classes of polynomial fuzzy systems with time-varying delay. The first class concerns a special case where the polynomial matrices do not depend on the estimated state variables. The second one is the general case where the polynomial matrices could depend on unmeasurable system states that will be estimated. For the latter case, two design procedures are proposed. The first one gives the polynomial fuzzy controller and observer gains in two steps. In the second procedure, the designed gains are obtained using a single-step approach to overcome the drawback of the two-step procedure. The obtained conditions are presented in terms of sum of squares (SOS), which can be solved via SOSTOOLS and a semi-definite program solver. Illustrative examples show the validity and applicability of the proposed results.
NASA Astrophysics Data System (ADS)
Arsenault, R.; Mai, J.; Latraverse, M.; Tolson, B.
2017-12-01
Probabilistic ensemble forecasts generated by the ensemble streamflow prediction (ESP) methodology are subject to biases due to errors in the hydrological model's initial states. In day-to-day operations, hydrologists must compensate for discrepancies between observed and simulated states such as streamflow. However, in data-scarce regions, little to no information is available to guide the streamflow assimilation process. The manual assimilation process can then lead to more uncertainty due to the numerous options available to the forecaster. Furthermore, the model's mass balance may be compromised and could affect future forecasts. In this study we propose a data-driven approach in which specific variables that may be adjusted during assimilation are defined. The underlying principle was to identify key variables that would be the most appropriate to modify during streamflow assimilation depending on the initial conditions, such as the time period of the assimilation, the snow water equivalent of the snowpack, and meteorological conditions. The variables to adjust were determined by performing an automatic variational data assimilation on individual (or combinations of) model state variables and meteorological forcing. The assimilation aimed to simultaneously optimize (1) the error between the observed and simulated streamflow at the time point where the forecast starts and (2) the bias between medium- to long-term observed and simulated flows, which were simulated by running the model with the observed meteorological data on a hindcast period. The optimal variables were then classified according to the initial conditions at the time the forecast is initiated. The proposed method was evaluated by measuring the average electricity generation of a hydropower complex in Québec, Canada driven by this method.
A test-bed which simulates the real-world assimilation, forecasting, water release optimization and decision-making of a hydropower cascade was developed to assess the performance of each individual process in the reservoir management chain. Here the proposed method was compared to the PF algorithm while keeping all other elements intact. Preliminary results are encouraging in terms of power generation and robustness for the proposed approach.
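The variational idea (adjust a chosen state variable until simulated flow matches the observations) can be sketched with a one-state linear-reservoir model. The model, forcing, noise level, and brute-force search below are all illustrative stand-ins for the operational model and optimizer:

```python
import numpy as np

def simulate(s0, precip, k=0.1):
    """Toy linear reservoir: storage S gains precip and drains as Q = k * S."""
    s, flows = s0, []
    for p in precip:
        s += p
        q = k * s
        s -= q
        flows.append(q)
    return np.array(flows)

rng = np.random.default_rng(6)
precip = rng.gamma(1.5, 2.0, 30)  # hypothetical daily forcing (mm)
# "Observed" flows generated with a true initial storage of 120, plus noise
q_obs = simulate(120.0, precip) + rng.normal(0, 0.2, 30)

# Variational assimilation of a single state variable: choose the initial
# storage that minimizes the observed-vs-simulated streamflow mismatch.
candidates = np.linspace(50.0, 200.0, 301)
errors = [np.sum((simulate(s0, precip) - q_obs) ** 2) for s0 in candidates]
s0_best = candidates[int(np.argmin(errors))]
print(f"recovered initial storage: {s0_best:.1f} (truth was 120.0)")
```

The study does this over combinations of state variables and forcings, then classifies which adjustments work best under which initial conditions (season, snowpack, meteorology).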
Heart-Rate Variability—More than Heart Beats?
Ernst, Gernot
2017-01-01
Heart-rate variability (HRV) is frequently introduced as mirroring imbalances within the autonomous nerve system. Many investigations are based on the paradigm that increased sympathetic tone is associated with decreased parasympathetic tone and vice versa. But HRV is probably more than an indicator for probable disturbances in the autonomous system. Some perturbations trigger not reciprocal, but parallel changes of vagal and sympathetic nerve activity. HRV has also been considered as a surrogate parameter of the complex interaction between brain and cardiovascular system. Systems biology is an inter-disciplinary field of study focusing on complex interactions within biological systems like the cardiovascular system, with the help of computational models and time series analysis, among others. Time series are considered surrogates of the particular system, reflecting robustness or fragility. Increased variability is usually seen as associated with a good health condition, whereas lowered variability might signify pathological changes. This might explain why lower HRV parameters were related to decreased life expectancy in several studies. Newer integrating theories have been proposed. According to them, HRV reflects as much the state of the heart as the state of the brain. The polyvagal theory suggests that the physiological state dictates the range of behavior and psychological experience. Stressful events perpetuate the rhythms of autonomic states, and subsequently, behaviors. Reduced variability will, according to this theory, not only be a surrogate but represent a fundamental homeostasis mechanism in a pathological state. The neurovisceral integration model proposes that cardiac vagal tone, described in HRV among other measures as the HF index, can mirror the functional balance of the neural networks implicated in emotion-cognition interactions. Both recent models represent a more holistic approach to understanding the significance of HRV. PMID:28955705
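Two standard time-domain HRV indices from this literature, SDNN (overall variability) and RMSSD (short-term, largely vagally mediated variability), can be computed directly from an RR-interval series. The synthetic series below (respiratory-like modulation plus beat-to-beat noise) is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)
# Hypothetical RR-interval series (ms): mean heart period 800 ms with a
# respiratory-like oscillation (period ~4 beats) plus beat-to-beat noise.
n = 300
rr = 800 + 40 * np.sin(2 * np.pi * np.arange(n) / 4.0) + rng.normal(0, 10, n)

sdnn = rr.std(ddof=1)                       # SDNN: std of all RR intervals
rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))  # RMSSD: successive differences
mean_hr = 60_000 / rr.mean()                # mean heart rate (beats/min)

print(f"SDNN  = {sdnn:.1f} ms")
print(f"RMSSD = {rmssd:.1f} ms")
print(f"mean HR = {mean_hr:.0f} bpm")
```

Frequency-domain indices such as HF power, referred to in the neurovisceral integration model, are computed from the spectrum of the same (resampled) RR series rather than from these time-domain statistics.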
Ma, Yuanyuan; Dong, Xiaoli; Wang, Yonggang; Xia, Yongyao
2018-03-05
Hydrogen production through water splitting is considered a promising approach for solar energy harvesting. However, the variable and intermittent nature of solar energy and the co-production of H2 and O2 significantly reduce the flexibility of this approach, increasing the costs of its use in practical applications. Herein, using the reversible n-type doping/de-doping reaction of the solid-state polytriphenylamine-based battery electrode, we decouple the H2 and O2 production in acid water electrolysis. In this architecture, the H2 and O2 production occur at different times, which eliminates the issue of gas mixing and adapts to the variable and intermittent nature of solar energy, facilitating the conversion of solar energy to hydrogen (STH). Furthermore, for the first time, we demonstrate a membrane-free solar water splitting through commercial photovoltaics and the decoupled acid water electrolysis, which potentially paves the way for a new approach for solar water splitting. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Novel SPECT Technologies and Approaches in Cardiac Imaging
Slomka, Piotr; Hung, Guang-Uei; Germano, Guido; Berman, Daniel S.
2017-01-01
Recent novel approaches in myocardial perfusion single photon emission CT (SPECT) have been facilitated by new dedicated high-efficiency hardware with solid-state detectors and optimized collimators. New protocols include very low-dose (1 mSv) stress-only, two-position imaging to mitigate attenuation artifacts, and simultaneous dual-isotope imaging. Attenuation correction can be performed by specialized low-dose systems or by previously obtained CT coronary calcium scans. Hybrid protocols using CT angiography have been proposed. Image quality improvements have been demonstrated by novel reconstructions and motion correction. Fast SPECT acquisition facilitates dynamic flow and early function measurements. Image processing algorithms have become automated with virtually unsupervised extraction of quantitative imaging variables. This automation facilitates integration with clinical variables derived by machine learning to predict patient outcome or diagnosis. In this review, we describe new imaging protocols made possible by the new hardware developments. We also discuss several novel software approaches for the quantification and interpretation of myocardial perfusion SPECT scans. PMID:29034066
Variationally Optimized Free-Energy Flooding for Rate Calculation.
McCarty, James; Valsson, Omar; Tiwary, Pratyush; Parrinello, Michele
2015-08-14
We propose a new method to obtain kinetic properties of infrequent events from molecular dynamics simulation. The procedure employs a recently introduced variational approach [Valsson and Parrinello, Phys. Rev. Lett. 113, 090601 (2014)] to construct a bias potential as a function of several collective variables that is designed to flood the associated free energy surface up to a predefined level. The resulting bias potential effectively accelerates transitions between metastable free energy minima while ensuring bias-free transition states, thus allowing accurate kinetic rates to be obtained. We test the method on a few illustrative systems for which we obtain an order of magnitude improvement in efficiency relative to previous approaches and several orders of magnitude relative to unbiased molecular dynamics. We expect an even larger improvement in more complex systems. This and the ability of the variational approach to deal efficiently with a large number of collective variables will greatly enhance the scope of these calculations. This work is a vindication of the potential that the variational principle has if applied in innovative ways.
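The rate recovery rests on hyperdynamics-style time reweighting: provided the bias vanishes at the transition states, each biased time step counts for e^{βV} unbiased steps, so physical time (and hence the rate) is recovered by averaging the exponential of the bias along the run. A sketch with invented bias samples and units:

```python
import numpy as np

rng = np.random.default_rng(8)
beta = 1.0 / 2.5  # 1/kT in (kJ/mol)^-1 at roughly room temperature (illustrative)

# Bias potential values V(s_i) sampled along a biased trajectory, capped at a
# predefined flooding level below the barrier so transition states stay bias-free.
V = np.clip(rng.normal(8.0, 2.0, 100_000), 0.0, 10.0)  # kJ/mol, hypothetical

# Each biased step dt corresponds to exp(beta * V) unbiased steps, so the
# physical elapsed time is the reweighted sum of step durations.
dt = 2e-15  # s, a typical MD time step (illustrative)
t_physical = dt * np.sum(np.exp(beta * V))
alpha = t_physical / (dt * len(V))  # mean acceleration factor

print(f"mean acceleration factor: {alpha:.0f}x")
```

An observed barrier-crossing time in the biased run is then multiplied by this reweighting to give the unbiased first-passage time entering the rate estimate.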
Martyna, Agnieszka; Michalska, Aleksandra; Zadora, Grzegorz
2015-05-01
The problem of determining the common provenance of samples was investigated for an infrared spectra database of polypropylene samples from car body parts and plastic containers, as well as Raman spectra databases of blue solid and metallic automotive paints. The research involved statistical tools such as the likelihood ratio (LR) approach for expressing the evidential value of observed similarities and differences in the recorded spectra. Since LR models can be easily proposed for databases described by only a few variables, the research focused on reducing the dimensionality of spectra characterised by more than a thousand variables. The objective of the studies was to combine chemometric tools that deal easily with multidimensionality with an LR approach. The final variables used for constructing the LR models were derived from the discrete wavelet transform (DWT) as a data dimensionality reduction technique, supported by methods for variance analysis, and corresponded with chemical information, i.e. typical absorption bands for polypropylene and peaks associated with pigments present in the car paints. Univariate and multivariate LR models were proposed, aiming at obtaining more information about the chemical structure of the samples. Their performance was controlled by estimating the levels of false positive and false negative answers and using the empirical cross entropy approach. The results for most of the LR models were satisfactory and enabled solving the stated comparison problems. The results prove that the variables generated from DWT preserve the signal characteristics, providing a sparse representation of the original signal that keeps its shape and relevant chemical information.
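The core compression step can be sketched with a single-level-per-pass Haar transform, the simplest member of the DWT family; the spectrum, wavelet, and number of retained coefficients below are illustrative assumptions, not the paper's actual choices.

```python
# Minimal sketch (pure Python, Haar wavelet standing in for the full DWT of
# the paper) of compressing a spectrum before likelihood-ratio modelling:
# keeping only the largest wavelet coefficients yields a sparse representation
# that preserves the signal's shape.

def haar_step(signal):
    """One level of the Haar DWT: approximation and detail coefficients."""
    a = [(signal[2 * i] + signal[2 * i + 1]) / 2.0 for i in range(len(signal) // 2)]
    d = [(signal[2 * i] - signal[2 * i + 1]) / 2.0 for i in range(len(signal) // 2)]
    return a, d

def compress(signal, levels=3, keep=8):
    """Multi-level Haar DWT; zero all but the `keep` largest-magnitude coefficients."""
    coeffs, a = [], list(signal)
    for _ in range(levels):
        a, d = haar_step(a)
        coeffs.extend(d)
    coeffs.extend(a)
    threshold = sorted((abs(c) for c in coeffs), reverse=True)[keep - 1]
    return [c if abs(c) >= threshold else 0.0 for c in coeffs]

# A synthetic "absorption band" on a flat baseline, 32 points long.
spectrum = [1.0 + (4.0 if 12 <= i < 16 else 0.0) for i in range(32)]
sparse = compress(spectrum)
```

The handful of surviving coefficients would then serve as the low-dimensional input variables for univariate or multivariate LR models.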
A Bayesian state-space approach for damage detection and classification
NASA Astrophysics Data System (ADS)
Dzunic, Zoran; Chen, Justin G.; Mobahi, Hossein; Büyüköztürk, Oral; Fisher, John W.
2017-11-01
The problem of automatic damage detection in civil structures is complex and requires a system that can interpret collected sensor data into meaningful information. We apply our recently developed switching Bayesian model for dependency analysis to the problems of damage detection and classification. The model relies on a state-space approach that accounts for noisy measurement processes and missing data, which also infers the statistical temporal dependency between measurement locations signifying the potential flow of information within the structure. A Gibbs sampling algorithm is used to simultaneously infer the latent states, parameters of the state dynamics, the dependence graph, and any changes in behavior. By employing a fully Bayesian approach, we are able to characterize uncertainty in these variables via their posterior distribution and provide probabilistic estimates of the occurrence of damage or a specific damage scenario. We also implement a single class classification method which is more realistic for most real world situations where training data for a damaged structure is not available. We demonstrate the methodology with experimental test data from a laboratory model structure and accelerometer data from a real world structure during different environmental and excitation conditions.
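The single-class decision rule mentioned above can be sketched very simply: train only on features from the undamaged structure and flag anything far from that baseline. The Gaussian z-score detector below is a deliberately minimal stand-in for the paper's switching Bayesian state-space model; the feature values and threshold are invented for illustration.

```python
import math

# Hedged sketch of single-class damage classification: fit a Gaussian to
# features recorded on the healthy structure, then flag new measurements whose
# z-score exceeds a threshold.  No training data from a damaged structure is
# needed, which matches the paper's motivation for one-class methods.

def fit_healthy(features):
    """Sample mean and standard deviation of the healthy-state features."""
    n = len(features)
    mean = sum(features) / n
    var = sum((x - mean) ** 2 for x in features) / (n - 1)
    return mean, math.sqrt(var)

def is_damaged(x, mean, std, threshold=3.0):
    """Flag observations more than `threshold` standard deviations from healthy."""
    return abs(x - mean) / std > threshold

# Hypothetical feature (e.g. a modal frequency ratio) from the healthy structure.
healthy = [0.98, 1.02, 1.01, 0.99, 1.00, 1.03, 0.97, 1.00]
mu, sigma = fit_healthy(healthy)
```

A fully Bayesian treatment, as in the paper, would replace the hard threshold with a posterior probability of damage.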
On dynamical systems approaches and methods in f ( R ) cosmology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alho, Artur; Carloni, Sante; Uggla, Claes, E-mail: aalho@math.ist.utl.pt, E-mail: sante.carloni@tecnico.ulisboa.pt, E-mail: claes.uggla@kau.se
We discuss dynamical systems approaches and methods applied to flat Robertson-Walker models in f(R)-gravity. We argue that a complete description of the solution space of a model requires a global state space analysis that motivates globally covering state space adapted variables. This is shown explicitly by an illustrative example, f(R) = R + αR², α > 0, for which we introduce new regular dynamical systems on global compactly extended state spaces for the Jordan and Einstein frames. This example also allows us to illustrate several local and global dynamical systems techniques involving, e.g., blow ups of nilpotent fixed points, center manifold analysis, averaging, and use of monotone functions. As a result of applying dynamical systems methods to globally state space adapted dynamical systems formulations, we obtain pictures of the entire solution spaces in both the Jordan and the Einstein frames. This shows, e.g., that due to the domain of the conformal transformation between the Jordan and Einstein frames, not all the solutions in the Jordan frame are completely contained in the Einstein frame. We also make comparisons with previous dynamical systems approaches to f(R) cosmology and discuss their advantages and disadvantages.
Peeking Network States with Clustered Patterns
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Jinoh; Sim, Alex
2015-10-20
Network traffic monitoring has long been a core element for effective network management and security. However, it is still a challenging task with a high degree of complexity for comprehensive analysis when considering multiple variables and ever-increasing traffic volumes to monitor. For example, one of the widely considered approaches is to scrutinize probabilistic distributions, but it poses a scalability concern and multivariate analysis is not generally supported due to the exponential increase of the complexity. In this work, we propose a novel method for network traffic monitoring based on clustering, one of the powerful deep-learning techniques. We show that the new approach enables us to recognize clustered results as patterns representing the network states, which can then be utilized to evaluate "similarity" of network states over time. In addition, we define a new quantitative measure for the similarity between two compared network states observed in different time windows, as a supportive means for intuitive analysis. Finally, we demonstrate the clustering-based network monitoring with public traffic traces, and show that the proposed approach using the clustering method has a great opportunity for feasible, cost-effective network monitoring.
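The pattern-and-similarity idea can be sketched end to end in a few lines: cluster traffic features, describe a time window by its histogram of cluster labels, and compare windows by histogram overlap. Everything below is a toy stand-in (1-D features, a tiny k-means, histogram intersection as the similarity measure); none of these specific choices are claimed to be the paper's definitions.

```python
import random

# Toy sketch of clustering-based network-state monitoring: traffic-feature
# values are clustered, each time window becomes a normalised histogram of
# cluster labels (its "pattern"), and two windows are compared by histogram
# intersection.

def kmeans(points, k, iters=20, seed=0):
    """Plain 1-D k-means; returns the cluster centers."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        labels = [min(range(k), key=lambda c: (p - centers[c]) ** 2) for p in points]
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centers[c] = sum(members) / len(members)
    return centers

def assign(points, centers):
    return [min(range(len(centers)), key=lambda c: (p - centers[c]) ** 2)
            for p in points]

def state_pattern(labels, k):
    """Normalised histogram of cluster labels: the pattern of a time window."""
    return [labels.count(c) / len(labels) for c in range(k)]

def similarity(p, q):
    """Histogram intersection in [0, 1]; 1 means identical network states."""
    return sum(min(a, b) for a, b in zip(p, q))

# Two hypothetical windows of a traffic feature (e.g. flow rates).
window1 = [0.10, 0.20, 0.15, 5.0, 5.2, 4.9]
window2 = [0.12, 0.18, 5.1, 5.0, 5.3, 0.14]
centers = kmeans(window1 + window2, k=2)
p1 = state_pattern(assign(window1, centers), 2)
p2 = state_pattern(assign(window2, centers), 2)
```

Here both windows contain the same mix of low-rate and high-rate traffic, so their patterns coincide and the similarity is maximal.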
Analytic Thermoelectric Couple Modeling: Variable Material Properties and Transient Operation
NASA Technical Reports Server (NTRS)
Mackey, Jonathan A.; Sehirlioglu, Alp; Dynys, Fred
2015-01-01
To gain a deeper understanding of the operation of a thermoelectric couple, a set of analytic solutions has been derived for a variable-material-property couple and a transient couple. Using an analytic approach, as opposed to commonly used numerical techniques, results in a set of useful design guidelines. These guidelines can serve as useful starting conditions for further numerical studies, or as design rules for lab-built couples. The analytic modeling considers two cases, accounting for 1) material properties which vary with temperature and 2) transient operation of a couple. The variable material property case was handled by means of an asymptotic expansion, which allows for insight into the influence of temperature dependence on different material properties. The variable property work demonstrated the important fact that materials with identical average figures of merit can lead to different conversion efficiencies due to temperature dependence of the properties. The transient couple was investigated through a Green's function approach; several transient boundary conditions were investigated. The transient work introduces several new design considerations which are not captured by the classic steady-state analysis. The work helps in designing couples for optimal performance and assists in material selection.
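The point that identical average figures of merit need not give identical efficiency can be illustrated numerically. The sketch below uses a common segmented approximation (compounding the standard maximum-efficiency formula over thin temperature slices) rather than the paper's asymptotic expansion; the ZT profiles and temperature range are invented for the example.

```python
import math

# Illustrative check that two ZT(T) profiles with the same average over
# [300 K, 800 K] can yield different conversion efficiencies.  Each thin
# segment uses the textbook maximum-efficiency expression
#   eta = (dT/Th) * (sqrt(1+ZT) - 1) / (sqrt(1+ZT) + Tc/Th),
# and segment efficiencies are compounded in series.

def segmented_efficiency(zt_of_t, tc=300.0, th=800.0, n=200):
    kept = 1.0                                  # fraction of heat not yet converted
    for i in range(n):
        t1 = tc + (th - tc) * i / n             # cold side of this segment
        t2 = tc + (th - tc) * (i + 1) / n       # hot side of this segment
        s = math.sqrt(1.0 + zt_of_t(0.5 * (t1 + t2)))
        eta_seg = (t2 - t1) / t2 * (s - 1.0) / (s + t1 / t2)
        kept *= 1.0 - eta_seg
    return 1.0 - kept

flat = lambda t: 1.0                            # ZT constant, average 1.0
rising = lambda t: 2.0 * (t - 300.0) / 500.0    # linear in T, average also 1.0

eta_flat = segmented_efficiency(flat)
eta_rising = segmented_efficiency(rising)
```

Both efficiencies stay below the Carnot limit (1 − 300/800 = 0.625), yet they differ even though the two materials share the same average ZT, which is the qualitative conclusion of the variable-property analysis.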
Quantum State Reduction by Matter-Phase-Related Measurements in Optical Lattices
Kozlowski, Wojciech; Caballero-Benitez, Santiago F.; Mekhov, Igor B.
2017-01-01
A many-body atomic system coupled to quantized light is subject to weak measurement. Instead of coupling light to the on-site density, we consider the quantum backaction due to the measurement of matter-phase-related variables such as global phase coherence. We show how this unconventional approach opens up new opportunities to affect system evolution. We demonstrate how this can lead to a new class of final states different from those possible with dissipative state preparation or conventional projective measurements. These states are characterised by a combination of Hamiltonian and measurement properties thus extending the measurement postulate for the case of strong competition with the system’s own evolution. PMID:28225012
Tong, Shaocheng; Wang, Tong; Li, Yongming; Zhang, Huaguang
2014-06-01
This paper discusses the problem of adaptive neural network output feedback control for a class of stochastic nonlinear strict-feedback systems. The concerned systems have certain characteristics, such as unknown nonlinear uncertainties, unknown dead-zones, unmodeled dynamics, and state variables that are not directly measured. In this paper, neural networks (NNs) are employed to approximate the unknown nonlinear uncertainties, and the dead-zone is represented as a time-varying system with a bounded disturbance. An NN state observer is designed to estimate the unmeasured states. Based on both the backstepping design technique and a stochastic small-gain theorem, a robust adaptive NN output feedback control scheme is developed. It is proved that all the variables involved in the closed-loop system are input-state-practically stable in probability, and also have robustness to the unmodeled dynamics. Meanwhile, the observer errors and the output of the system can be regulated to a small neighborhood of the origin by selecting appropriate design parameters. Simulation examples are also provided to illustrate the effectiveness of the proposed approach.
Boettcher, P J; Tixier-Boichard, M; Toro, M A; Simianer, H; Eding, H; Gandini, G; Joost, S; Garcia, D; Colli, L; Ajmone-Marsan, P
2010-05-01
The genetic diversity of the world's livestock populations is decreasing, both within and across breeds. A wide variety of factors has contributed to the loss, replacement or genetic dilution of many local breeds. Genetic variability within the more common commercial breeds has been greatly decreased by selectively intense breeding programmes. Conservation of livestock genetic variability is thus important, especially when considering possible future changes in production environments. The world has more than 7500 livestock breeds and conservation of all of them is not feasible. Therefore, prioritization is needed. The objective of this article is to review the state of the art in approaches for prioritization of breeds for conservation, particularly those approaches that consider molecular genetic information, and to identify any shortcomings that may restrict their application. The Weitzman method was among the first and most well-known approaches for utilization of molecular genetic information in conservation prioritization. This approach balances diversity and extinction probability to yield an objective measure of conservation potential. However, this approach was designed for decision making across species and measures diversity as distinctiveness. For livestock, prioritization will most commonly be performed among breeds within species, so alternatives that measure diversity as co-ancestry (i.e. also within-breed variability) have been proposed. Although these methods are technically sound, their application has generally been limited to research studies; most existing conservation programmes have effectively primarily based decisions on extinction risk. The development of user-friendly software incorporating these approaches may increase their rate of utilization.
Revealing nonclassicality beyond Gaussian states via a single marginal distribution
Park, Jiyong; Lu, Yao; Lee, Jaehak; Shen, Yangchao; Zhang, Kuan; Zhang, Shuaining; Zubairy, Muhammad Suhail; Kim, Kihwan; Nha, Hyunchul
2017-01-01
A standard method to obtain information on a quantum state is to measure marginal distributions along many different axes in phase space, which forms a basis of quantum-state tomography. We theoretically propose and experimentally demonstrate a general framework to manifest nonclassicality by observing a single marginal distribution only, which provides a unique insight into nonclassicality and a practical applicability to various quantum systems. Our approach maps the 1D marginal distribution into a factorized 2D distribution by multiplying the measured distribution or the vacuum-state distribution along an orthogonal axis. The resulting fictitious Wigner function becomes unphysical only for a nonclassical state; thus the negativity of the corresponding density operator provides evidence of nonclassicality. Furthermore, the negativity measured this way yields a lower bound for entanglement potential—a measure of entanglement generated using a nonclassical state with a beam-splitter setting that is a prototypical model to produce continuous-variable (CV) entangled states. Our approach detects both Gaussian and non-Gaussian nonclassical states in a reliable and efficient manner. Remarkably, it works regardless of measurement axis for all non-Gaussian states in finite-dimensional Fock space of any size, also extending to infinite-dimensional states of experimental relevance for CV quantum informatics. We experimentally illustrate the power of our criterion for motional states of a trapped ion, confirming their nonclassicality in a measurement-axis–independent manner. We also address an extension of our approach combined with phase-shift operations, which leads to a stronger test of nonclassicality, that is, detection of genuine non-Gaussianity under a CV measurement. PMID:28077456
Qubit-Programmable Operations on Quantum Light Fields
Barbieri, Marco; Spagnolo, Nicolò; Ferreyrol, Franck; Blandino, Rémi; Smith, Brian J.; Tualle-Brouri, Rosa
2015-01-01
Engineering quantum operations is a crucial capability needed for developing quantum technologies and designing new fundamental physics tests. Here we propose a scheme for realising a controlled operation acting on a travelling continuous-variable quantum field, whose functioning is determined by a discrete input qubit. This opens a new avenue for exploiting advantages of both information encoding approaches. Furthermore, this approach allows for the program itself to be in a superposition of operations, and as a result it can be used within a quantum processor, where coherences must be maintained. Our study can find interest not only in general quantum state engineering and information protocols, but also details an interface between different physical platforms. Potential applications can be found in linking optical qubits to optical systems for which coupling is best described in terms of their continuous variables, such as optomechanical devices. PMID:26468614
NASA Astrophysics Data System (ADS)
Aristoff, Jeffrey M.; Horwood, Joshua T.; Poore, Aubrey B.
2014-01-01
We present a new variable-step Gauss-Legendre implicit-Runge-Kutta-based approach for orbit and uncertainty propagation, VGL-IRK, which includes adaptive step-size error control and which collectively, rather than individually, propagates nearby sigma points or states. The performance of VGL-IRK is compared to a professional (variable-step) implementation of Dormand-Prince 8(7) (DP8) and to a fixed-step, optimally-tuned, implementation of modified Chebyshev-Picard iteration (MCPI). Both nearly-circular and highly-elliptic orbits are considered using high-fidelity gravity models and realistic integration tolerances. VGL-IRK is shown to be up to eleven times faster than DP8 and up to 45 times faster than MCPI (for the same accuracy), in a serial computing environment. Parallelization of VGL-IRK and MCPI is also discussed.
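The implicit-Runge-Kutta core of such a propagator can be sketched with the classical two-stage Gauss-Legendre scheme (order 4), solving the implicit stage equations by fixed-point iteration. This is only the fixed-step building block; VGL-IRK's adaptive error control and collective sigma-point propagation are not reproduced here, and the test problem is an invented scalar ODE rather than an orbit.

```python
import math

# Two-stage Gauss-Legendre implicit Runge-Kutta step (order 4).  The stage
# slopes k satisfy k_i = f(t + c_i h, y + h * sum_j a_ij k_j); for non-stiff
# problems a simple fixed-point iteration converges.

S3 = math.sqrt(3.0)
A = [[0.25, 0.25 - S3 / 6.0], [0.25 + S3 / 6.0, 0.25]]   # Butcher matrix
B = [0.5, 0.5]                                           # weights
C = [0.5 - S3 / 6.0, 0.5 + S3 / 6.0]                     # nodes

def gl2_step(f, t, y, h, iters=50):
    k = [f(t, y), f(t, y)]                      # initial guess for stage slopes
    for _ in range(iters):
        k = [f(t + C[i] * h, y + h * sum(A[i][j] * k[j] for j in range(2)))
             for i in range(2)]
    return y + h * sum(B[i] * k[i] for i in range(2))

def integrate(f, t0, y0, t1, n):
    h, y = (t1 - t0) / n, y0
    for i in range(n):
        y = gl2_step(f, t0 + i * h, y, h)
    return y

# Test problem: y' = -y, y(0) = 1, so y(1) = exp(-1).
approx = integrate(lambda t, y: -y, 0.0, 1.0, 1.0, 10)
```

With only ten steps the fourth-order scheme already matches exp(−1) to better than 10⁻⁶; a production propagator would replace the fixed-point loop with Newton iterations and adapt h from an embedded error estimate.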
Origin choice and petal loss in the flower garden of spiral wave tip trajectories
Gray, Richard A.; Wikswo, John P.; Otani, Niels F.
2009-01-01
Rotating spiral waves have been observed in numerous biological and physical systems. These spiral waves can be stationary, meander, or even degenerate into multiple unstable rotating waves. The spatiotemporal behavior of spiral waves has been extensively quantified by tracking spiral wave tip trajectories. However, the precise methodology of identifying the spiral wave tip and its influence on the specific patterns of behavior remains a largely unexplored topic of research. Here we use a two-state variable FitzHugh–Nagumo model to simulate stationary and meandering spiral waves and examine the spatiotemporal representation of the system’s state variables in both the real (i.e., physical) and state spaces. We show that mapping between these two spaces provides a method to demarcate the spiral wave tip as the center of rotation of the solution to the underlying nonlinear partial differential equations. This approach leads to the simplest tip trajectories by eliminating portions resulting from the rotational component of the spiral wave. PMID:19791998
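The two-state-variable kinetics underlying the simulations can be sketched in isolation, i.e. for a single space-clamped cell rather than the 2-D excitable medium; the parameter values below are the classic textbook choices and are used here purely for illustration.

```python
# Minimal sketch of the two-state-variable FitzHugh-Nagumo kinetics: each grid
# cell of the spatial model carries a fast excitation variable v and a slow
# recovery variable w.  A suprathreshold perturbation from rest produces one
# large excursion (an "action potential") in the (v, w) state space before the
# trajectory returns toward the fixed point.

def fhn_trajectory(v0, w0, a=0.7, b=0.8, eps=0.08, dt=0.01, steps=5000):
    v, w, path = v0, w0, []
    for _ in range(steps):
        dv = v - v ** 3 / 3.0 - w          # fast excitation dynamics
        dw = eps * (v + a - b * w)         # slow recovery dynamics
        v, w = v + dt * dv, w + dt * dw    # forward Euler step
        path.append((v, w))
    return path

# Start above threshold, to the right of the resting state near (-1.2, -0.62).
path = fhn_trajectory(v0=-0.5, w0=-0.6)
vmax = max(v for v, _ in path)
```

In the spatial model the tip of a spiral wave is located by mapping between this (v, w) state space and physical space, which is the construction the paper analyses.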
Which System Variables Carry Robust Early Signs of Upcoming Phase Transition? An Ecological Example.
Negahbani, Ehsan; Steyn-Ross, D Alistair; Steyn-Ross, Moira L; Aguirre, Luis A
2016-01-01
Growth of critical fluctuations prior to catastrophic state transition is generally regarded as a universal phenomenon, providing a valuable early warning signal in dynamical systems. Using an ecological fisheries model of three populations (juvenile prey J, adult prey A and predator P), a recent study has reported silent early warning signals obtained from P and A populations prior to saddle-node (SN) bifurcation, and thus concluded that early warning signals are not universal. By performing a full eigenvalue analysis of the same system we demonstrate that while J and P populations undergo SN bifurcation, A does not jump to a new state, so it is not expected to carry early warning signs. In contrast with the previous study, we capture a significant increase in the noise-induced fluctuations in the P population, but only on close approach to the bifurcation point; it is not clear why the P variance initially shows a decaying trend. Here we resolve this puzzle using observability measures from control theory. By computing the observability coefficient for the system from the recordings of each population considered one at a time, we are able to quantify their ability to describe changing internal dynamics. We demonstrate that precursor fluctuations are best observed using only the J variable, and also P variable if close to transition. Using observability analysis we are able to describe why a poorly observable variable (P) has poor forecasting capabilities although a full eigenvalue analysis shows that this variable undergoes a bifurcation. We conclude that observability analysis provides complementary information to identify the variables carrying early-warning signs about impending state transition.
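The observability computation at the heart of this argument can be sketched for a linear system: build the observability matrix O = [C; CA] and compare the conditioning of OᵀO when different variables are measured. The 2-state system and the ratio-of-eigenvalues coefficient below are illustrative assumptions in the spirit of Aguirre's linear observability measure, not the fisheries model of the paper.

```python
# Sketch of an observability coefficient for a 2-state linear system x' = Ax
# observed through y = c x.  The coefficient is the eigenvalue ratio
# lambda_min/lambda_max of O'O with O = [c; cA]: values near 1 mean the chosen
# variable "sees" the full dynamics well, values near 0 mean poor observability.

def obs_coefficient(A, c):
    # Second row of the observability matrix: the row vector c A.
    cA = [c[0] * A[0][0] + c[1] * A[1][0], c[0] * A[0][1] + c[1] * A[1][1]]
    O = [c, cA]
    # Gram matrix G = O^T O (2x2 symmetric), eigenvalues in closed form.
    g11 = O[0][0] ** 2 + O[1][0] ** 2
    g22 = O[0][1] ** 2 + O[1][1] ** 2
    g12 = O[0][0] * O[0][1] + O[1][0] * O[1][1]
    tr, det = g11 + g22, g11 * g22 - g12 * g12
    disc = (tr * tr / 4.0 - det) ** 0.5
    lam_max, lam_min = tr / 2.0 + disc, tr / 2.0 - disc
    return lam_min / lam_max            # in [0, 1]; 0 means unobservable

# Damped oscillator: x1' = x2, x2' = -x1 - 0.1 x2.
A = [[0.0, 1.0], [-1.0, -0.1]]
delta1 = obs_coefficient(A, [1.0, 0.0])   # observing x1
delta2 = obs_coefficient(A, [0.0, 1.0])   # observing x2
```

For the nonlinear fisheries model the same idea is applied along trajectories, which is how the paper explains why the poorly observable P variable carries weak early-warning signs despite undergoing the bifurcation.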
From nanoelectronics to nano-spintronics.
Wang, Kang L; Ovchinnikov, Igor; Xiu, Faxian; Khitun, Alex; Bao, Ming
2011-01-01
Today's electronics uses electron charge as a state variable for logic and computing operations, often represented as voltage or current. In this representation of the state variable, carriers in electronic devices behave independently, even down to the few- and single-electron cases. As scaling continues to reduce the physical feature size and to increase the functional throughput, the two most outstanding limitations and major challenges, among others, are power dissipation and variability, as identified by the ITRS. This paper presents the case that collective phenomena, e.g., spintronics using appropriate order parameters of magnetic moment as a state variable, may be considered favorably for a new room-temperature information processing paradigm. A comparison between electronics and spintronics in terms of variability and quantum and thermal fluctuations is presented. It shows that the benefits of scalability to smaller sizes in the case of spintronics (nanomagnetics) include a much reduced variability problem compared with today's electronics. In addition, another advantage of using nanomagnets is the possibility of constructing nonvolatile logic, which allows for immense power savings during system standby. However, most devices with magnetic moment use current to drive the devices, and consequently power dissipation is a major issue. We discuss approaches using electric-field control of ferromagnetism in dilute magnetic semiconductors (DMSs) and metallic ferromagnetic materials. With DMSs, the carrier-mediated transition from paramagnetic to ferromagnetic phases makes it possible to have devices that work very much like field-effect transistors, with the added non-volatility afforded by ferromagnetism. We then describe new possibilities for the use of electric fields with metallic materials and devices: spin wave devices with multiferroic materials. We also describe a potential new method of electric-field control of metallic ferromagnetism via field effect on the Thomas-Fermi surface layer.
Scenario Development for the Southwestern United States
NASA Astrophysics Data System (ADS)
Mahmoud, M.; Gupta, H.; Stewart, S.; Liu, Y.; Hartmann, H.; Wagener, T.
2006-12-01
The primary goal of employing a scenario development approach for the U.S. southwest is to inform regional policy by examining future possibilities related to regional vegetation change, water-leasing, and riparian restoration. This approach is necessary due to a lack of existing explicit water resources application of scenarios to the entire southwest region. A formal approach for scenario development is adopted and applied towards water resources issues within the arid and semi-arid regions of the U.S. southwest following five progressive and reiterative phases: scenario definition, scenario construction, scenario analysis, scenario assessment, and risk management. In the scenario definition phase, the inputs of scientists, modelers, and stakeholders were collected in order to define and construct relevant scenarios to the southwest and its water sustainability needs. From stakeholder-driven scenario workshops and breakout sessions, the three main axes of principal change were identified to be climate change, population development patterns, and quality of information monitoring technology. Based on the extreme and varying conditions of these three main axes, eight scenario narratives were drafted to describe the state of each scenario's respective future and the events which led to it. Events and situations are described within each scenario narrative with respect to key variables; variables that are both important to regional water resources (as distinguished by scientists and modelers), and are good tracking and monitoring indicators of change. The current phase consists of scenario construction, where the drafted scenarios are re-presented to regional scientists and modelers to verify that proper key variables are included (or excluded) from the eight narratives. The next step is to construct the data sets necessary to implement the eight scenarios on the respective computational models of modelers investigating vegetation change, water-leasing, and riparian restoration in the southwest.
Steady state estimation of soil organic carbon using satellite-derived canopy leaf area index
Fang, Yilin; Liu, Chongxuan; Huang, Maoyi; ...
2014-12-02
Soil organic carbon (SOC) plays a key role in the global carbon cycle that is important for decadal-to-century climate prediction. Estimation of soil organic carbon stock using model-based methods typically requires spin-up (time-marching transient simulation) of the carbon-nitrogen (CN) models by performing simulations hundreds to thousands of years long until the carbon-nitrogen pools reach dynamic steady state. This has become a bottleneck for global modeling and analysis, especially when testing new physical and/or chemical mechanisms and evaluating parameter sensitivity. Here we report a new numerical approach to estimate global soil carbon stock that can avoid the long-term spin-up of the CN model. The approach uses canopy leaf area index (LAI) from satellite data and takes advantage of a reaction-based biogeochemical module NGBGC (Next Generation BioGeoChemical Module) that was recently developed and incorporated in version 4 of the Community Land Model (CLM4). Although NGBGC uses the same CN mechanisms as used in CLM4CN, it can be easily configured to run prognostic or steady-state simulations. In this approach, monthly LAI from the multi-year Moderate Resolution Imaging Spectroradiometer (MODIS) data was used to calculate potential annual average gross primary production (GPP) and leaf carbon for the period of the atmospheric forcing. The calculated potential annual average GPP and leaf C are then used by NGBGC to calculate the steady-state distributions of carbon and nitrogen in different vegetation and soil pools by solving the steady-state reaction network in NGBGC using the Newton-Raphson method. The new approach was applied at point and global scales and compared with SOC derived from long spin-up by running NGBGC in prognostic mode, and with SOC from the empirical data of the Harmonized World Soil Database (HWSD). The steady-state solution is comparable to the spin-up value when the MODIS LAI is close to the LAI from the spin-up solution, and largely captured the variability of the HWSD SOC across the different dominant plant functional types (PFTs) at global scale. The numerical correlation between the calculated and HWSD SOC was, however, weak at both point and global scales, suggesting that the models used to describe biogeochemical processes in CLM need improvements and/or HWSD needs updating, as suggested by other studies. Besides SOC, the steady-state solution also includes all other state variables simulated by a spin-up run, such as NPP, GPP, total vegetation C, etc., which makes the developed approach a promising tool to efficiently estimate global SOC distribution and to evaluate and compare different aspects simulated by different CN mechanisms in the model.
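The efficiency gain of solving for the steady state directly, rather than spinning the model up, can be shown on a toy pool model. The two-pool structure, rate constants, and input flux below are invented for illustration; they are not the CLM4/NGBGC pools, and the real reaction network is nonlinear and much larger.

```python
# Toy sketch: a two-pool soil-carbon model dC1/dt = I - k1*C1,
# dC2/dt = f*k1*C1 - k2*C2.  Newton-Raphson on the steady-state residual
# reaches the answer immediately, whereas "spin-up" time integration needs
# many steps because the slow pool has a long turnover time (1/k2).

I, K1, K2, F = 100.0, 0.5, 0.01, 0.3   # input flux, decay rates, transfer fraction

def residual(c):
    c1, c2 = c
    return [I - K1 * c1, F * K1 * c1 - K2 * c2]

def newton(c, iters=10):
    for _ in range(iters):
        r = residual(c)
        # Solve J * dc = -r with the constant Jacobian [[-K1, 0], [F*K1, -K2]].
        dc1 = r[0] / K1
        dc2 = (r[1] + F * K1 * dc1) / K2
        c = [c[0] + dc1, c[1] + dc2]
    return c

def spin_up(c, dt=0.1, steps=50000):
    for _ in range(steps):
        r = residual(c)
        c = [c[0] + dt * r[0], c[1] + dt * r[1]]
    return c

steady = newton([1.0, 1.0])   # converges in one iteration (linear model)
spun = spin_up([1.0, 1.0])    # 50000 explicit steps to reach the same state
```

Both routes land on C1 = I/k1 = 200 and C2 = f·k1·C1/k2 = 3000, but the Newton solve replaces thousands of years of simulated spin-up with a handful of iterations, which is the paper's central point.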
A programing system for research and applications in structural optimization
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, J.; Rogers, J. L., Jr.
1981-01-01
The flexibility necessary for such diverse utilizations is achieved by combining, in a modular manner, a state-of-the-art optimization program, a production level structural analysis program, and user supplied and problem dependent interface programs. Standard utility capabilities in modern computer operating systems are used to integrate these programs. This approach results in flexibility of the optimization procedure organization and versatility in the formulation of constraints and design variables. Features shown in numerical examples include: variability of structural layout and overall shape geometry, static strength and stiffness constraints, local buckling failure, and vibration constraints.
Epidemiological overview of HIV/AIDS in pregnant women from a state of northeastern Brazil.
Silva, Claúdia Mendes da; Alves, Regina de Souza; Santos, Tâmyssa Simões Dos; Bragagnollo, Gabriela Rodrigues; Tavares, Clodis Maria; Santos, Amuzza Aylla Pereira Dos
2018-01-01
To learn the epidemiological characteristics of HIV infection in pregnant women. Descriptive study with a quantitative approach. The study population was composed of pregnant women with HIV/AIDS residing in the state of Alagoas. Data were organized into variables and analyzed using measures of central tendency and dispersion, namely the arithmetic mean and standard deviation (X ± S). Between 2007 and 2015, 773 cases of HIV/AIDS were recorded in pregnant women in Alagoas. The studied variables identified that most of these pregnant women were young, had low levels of education and faced socioeconomic vulnerability. It is necessary to include actions aimed at increasing the attention paid to women, since ensuring full care and early diagnosis of HIV are important strategies to promote adequate treatment adherence and reduce vertical transmission.
Scaling cosmology with variable dark-energy equation of state
DOE Office of Scientific and Technical Information (OSTI.GOV)
Castro, David R.; Velten, Hermano; Zimdahl, Winfried, E-mail: drodriguez-ufes@hotmail.com, E-mail: velten@physik.uni-bielefeld.de, E-mail: winfried.zimdahl@pq.cnpq.br
2012-06-01
Interactions between dark matter and dark energy which result in a power-law behavior (with respect to the cosmic scale factor) of the ratio between the energy densities of the dark components (thus generalizing the ΛCDM model) have been considered as an attempt to alleviate the cosmic coincidence problem phenomenologically. We generalize this approach by allowing for a variable equation of state for the dark energy within the CPL parametrization. Based on analytic solutions for the Hubble rate and using the Constitution and Union2 SNIa sets, we present a statistical analysis and classify different interacting and non-interacting models according to the Akaike (AIC) and Bayesian (BIC) information criteria. We do not find noticeable evidence for an alleviation of the coincidence problem with the mentioned type of interaction.
Unfolding dimension and the search for functional markers in the human electroencephalogram
NASA Astrophysics Data System (ADS)
Dünki, Rudolf M.; Schmid, Gary Bruno
1998-02-01
A biparametric approach to dimensional analysis in terms of a so-called "unfolding dimension" is introduced to explore the extent to which the human EEG can be described by stable features characteristic of an individual despite the well-known problems of intraindividual variability. Our analysis comprises an EEG data set recorded from healthy individuals over a time span of 5 years. The outcome is shown to be comparable to advanced linear methods of spectral analysis with regard to intraindividual specificity and stability over time. Such linear methods have not yet proven to be specific to the EEG of different brain states. Thus we have also investigated the specificity of our biparametric approach by comparing the mental states schizophrenic psychosis and remission, i.e., illness versus full recovery. A difference between EEG in psychosis and remission became apparent within recordings taken at rest with eyes closed and no stimulated or requested mental activity. Hence our approach distinguishes these functional brain states even in the absence of an active or intentional stimulus. This sheds a different light upon theories of schizophrenia as an information-processing disturbance of the brain.
Factors influencing crime rates: an econometric analysis approach
NASA Astrophysics Data System (ADS)
Bothos, John M. A.; Thomopoulos, Stelios C. A.
2016-05-01
The scope of the present study is to research the dynamics that determine the commission of crimes in the US society. Our study is part of a model we are developing to understand urban crime dynamics and to enhance citizens' "perception of security" in large urban environments. The main targets of our research are to highlight dependence of crime rates on certain social and economic factors and basic elements of state anticrime policies. In conducting our research, we use as guides previous relevant studies on crime dependence, that have been performed with similar quantitative analyses in mind, regarding the dependence of crime on certain social and economic factors using statistics and econometric modelling. Our first approach consists of conceptual state space dynamic cross-sectional econometric models that incorporate a feedback loop that describes crime as a feedback process. In order to define dynamically the model variables, we use statistical analysis on crime records and on records about social and economic conditions and policing characteristics (like police force and policing results - crime arrests), to determine their influence as independent variables on crime, as the dependent variable of our model. The econometric models we apply in this first approach are an exponential log linear model and a logit model. In a second approach, we try to study the evolvement of violent crime through time in the US, independently as an autonomous social phenomenon, using autoregressive and moving average time-series econometric models. Our findings show that there are certain social and economic characteristics that affect the formation of crime rates in the US, either positively or negatively. Furthermore, the results of our time-series econometric modelling show that violent crime, viewed solely and independently as a social phenomenon, correlates with previous years crime rates and depends on the social and economic environment's conditions during previous years.
De Palma, Rodney; Sörensson, Peder; Verouhis, Dinos; Pernow, John; Saleh, Nawzad
2017-07-27
Clinical outcome following acute myocardial infarction is predicted by final infarct size evaluated in relation to left ventricular myocardium at risk (MaR). Contrast-enhanced steady-state free precession (CE-SSFP) cardiovascular magnetic resonance imaging (CMR) is not widely used for assessing MaR. Evidence of its utility compared to traditional assessment methods and as a surrogate for clinical outcome is needed. Retrospective analysis was performed within a study evaluating post-conditioning during ST-elevation myocardial infarction (STEMI) treated with coronary intervention (n = 78). CE-SSFP post-infarction was compared with angiographic jeopardy methods. Differences and variability between CMR and angiographic methods were evaluated using Bland-Altman analyses. Clinical outcomes were compared to MaR and extent of infarction. MaR assessed by CE-SSFP correlated with both the BARI and APPROACH scores (r = 0.83, p < 0.0001 and r = 0.84, p < 0.0001, respectively). Bias between CE-SSFP and BARI was 1.1% (limits of agreement -11.4 to +9.1); bias between CE-SSFP and APPROACH was 1.2% (limits of agreement -13 to +10.5). Inter-observer variability was 0.56 ± 2.9 for the BARI score, 0.42 ± 2.1 for the APPROACH score, and -1.4 ± 3.1% for CE-SSFP. Intra-observer variability was 0.15 ± 1.85 for the BARI score, 0.19 ± 1.6 for the APPROACH score, and -0.58 ± 2.9% for CE-SSFP. Quantification of MaR with CE-SSFP imaging following STEMI shows high correlation and low bias compared with angiographic scoring, supporting its use as a reliable and practical method to determine myocardial salvage in this patient population. Clinical trial registration information for the parent clinical trial: Karolinska Clinical Trial Registration (2008), unique identifier CT20080014. Registered 4 January 2008.
Reconfigurable quadruple quantum dots in a silicon nanowire transistor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Betz, A. C., E-mail: ab2106@cam.ac.uk; Broström, M.; Gonzalez-Zalba, M. F.
2016-05-16
We present a reconfigurable metal-oxide-semiconductor multi-gate transistor that can host a quadruple quantum dot in silicon. The device consists of an industrial quadruple-gate silicon nanowire field-effect transistor. Exploiting the corner effect, we study the versatility of the structure in the single quantum dot and the serial double quantum dot regimes and extract the relevant capacitance parameters. We address the fabrication variability of the quadruple-gate approach which, paired with improved silicon fabrication techniques, makes the corner state quantum dot approach a promising candidate for a scalable quantum information architecture.
Left Atrial Appendage Closure for Stroke Prevention: Devices, Techniques, and Efficacy.
Iskandar, Sandia; Vacek, James; Lavu, Madhav; Lakkireddy, Dhanunjaya
2016-05-01
Left atrial appendage closure can be performed either surgically or percutaneously. Surgical approaches include direct suture, excision and suture, stapling, and clipping. Percutaneous approaches include endocardial, epicardial, and hybrid endocardial-epicardial techniques. Left atrial appendage anatomy is highly variable and complex; therefore, preprocedural imaging is crucial to determine device selection and sizing, which contribute to procedural success and reduction of complications. Currently, the WATCHMAN is the only device that is approved for left atrial appendage closure in the United States. Copyright © 2016 Elsevier Inc. All rights reserved.
A prospective approach to coastal geography from satellite. [technological forecasting
NASA Technical Reports Server (NTRS)
Munday, J. C., Jr.
1981-01-01
A forecasting protocol termed the "prospective approach" was used to examine probable futures relative to coastal applications of satellite data. Significant variables include the energy situation, the national economy, national Earth satellite programs, and coastal zone research, commercial activity, and regulatory activity. Alternative scenarios for the period until 1986 are presented. Possible responses by state/local remote sensing centers include operational applications for users, input to geo-base information systems (GIS), development of decision-making algorithms using GIS data, and long-term research programs for coastal management using merged satellite and traditional data.
Consensus positive position feedback control for vibration attenuation of smart structures
NASA Astrophysics Data System (ADS)
Omidi, Ehsan; Nima Mahmoodi, S.
2015-04-01
This paper presents a new network-based approach for active vibration control in smart structures. In this approach, a network with known topology connects collocated actuator/sensor elements of the smart structure to one another. Each of these actuators/sensors, i.e., agent or node, is enhanced by a separate multi-mode positive position feedback (PPF) controller. The decentralized PPF controlled agents collaborate with each other in the designed network, under a certain consensus dynamics. The consensus constraint forces neighboring agents to cooperate with each other such that the disagreement between the time-domain actuation of the agents is driven to zero. The controller output of each agent is calculated using state-space variables; hence, optimal state estimators are designed first for the proposed observer-based consensus PPF control. The consensus controller is numerically investigated for a flexible smart structure, i.e., a thin aluminum beam that is clamped at both ends. Results demonstrate that the consensus law successfully imposes synchronization between the independently controlled agents, as the disagreements between the decentralized PPF controller variables converge to zero in a short time. The new consensus PPF controller brings extra robustness to vibration suppression in smart structures, where malfunctions of an agent can be compensated for by referencing the neighboring agents' performance. This is demonstrated in the results by comparing the new controller with the former centralized PPF approach.
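The consensus constraint described in this abstract can be illustrated with a minimal sketch (not the paper's controller): agents on a line graph repeatedly nudge their values toward their neighbors', driving the disagreement among them to zero. The topology, gain, and agent count are assumptions chosen purely for illustration.

```python
import random

def consensus_step(values, gain=0.3):
    """One synchronous update: x_i += gain * sum_j (x_j - x_i) over line-graph neighbors."""
    n = len(values)
    new = []
    for i, x in enumerate(values):
        neighbors = [values[j] for j in (i - 1, i + 1) if 0 <= j < n]
        new.append(x + gain * sum(xj - x for xj in neighbors))
    return new

def disagreement(values):
    """Span between the most disparate agents; consensus drives this to zero."""
    return max(values) - min(values)

random.seed(1)
x = [random.uniform(-1.0, 1.0) for _ in range(5)]
d0 = disagreement(x)
for _ in range(200):
    x = consensus_step(x)
assert disagreement(x) < 1e-3 * d0  # agents have effectively synchronized
```

The gain must keep the update spectrally stable (gain times the largest graph-Laplacian eigenvalue below 2); 0.3 on a 5-node path satisfies this.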
An Integrated Approach to Winds, Jets, and State Transitions
NASA Astrophysics Data System (ADS)
Neilsen, Joseph
2017-09-01
We propose a large multiwavelength campaign (120 ks Chandra HETGS, NuSTAR, INTEGRAL, JVLA/ATCA, Swift, XMM, Gemini) on a black hole transient to study the influence of ionized winds on relativistic jets and state transitions. With a reimagined observing strategy based on new results on integrated RMS variability and a decade of radio/X-ray monitoring, we will search for winds during and after the state transition to test their influence on and track their coevolution with the disk and the jet over the next 2-3 months. Our spectral and timing constraints will provide precise probes of the accretion geometry and accretion/ejection physics.
What is the uncertainty principle of non-relativistic quantum mechanics?
NASA Astrophysics Data System (ADS)
Riggs, Peter J.
2018-05-01
After more than ninety years of discussions over the uncertainty principle, there is still no universal agreement on what the principle states. The Robertson uncertainty relation (incorporating standard deviations) is given as the mathematical expression of the principle in most quantum mechanics textbooks. However, the uncertainty principle is not merely a statement of what any of the several uncertainty relations affirm. It is suggested that a better approach would be to present the uncertainty principle as a statement about the probability distributions of incompatible variables and the resulting restrictions on quantum states.
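For reference, the Robertson uncertainty relation mentioned in this abstract, in its standard form together with its best-known special case:

```latex
\sigma_A \,\sigma_B \;\ge\; \tfrac{1}{2}\,\bigl|\langle[\hat{A},\hat{B}]\rangle\bigr|,
\qquad \text{e.g.} \qquad
\sigma_x \,\sigma_p \;\ge\; \frac{\hbar}{2},
```

where \(\sigma_A\), \(\sigma_B\) are standard deviations in a given state and \([\hat{A},\hat{B}]\) is the commutator of the two observables.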
ERIC Educational Resources Information Center
Baker, Bruce D.; Green, Preston C., III
2009-01-01
The goal of this study is to apply a conventional education cost-function approach for estimating the sensitivity of cost models and predicted education costs to the inclusion of school district level racial composition variables and further to test whether race neutral alternatives sufficiently capture the additional costs associated with school…
ERIC Educational Resources Information Center
Bishop, Malachy; Chan, Fong; Rumrill, Phillip D., Jr.; Frain, Michael P.; Tansey, Timothy N.; Chiu, Chung-Yi; Strauser, David; Umeasiegbu, Veronica I.
2015-01-01
Purpose: To examine demographic, functional, and clinical multiple sclerosis (MS) variables affecting employment status in a national sample of adults with MS in the United States. Method: The sample included 4,142 working-age (20-65 years) Americans with MS (79.1% female) who participated in a national survey. The mean age of participants was…
Scenario studies as a synthetic and integrative research activity for Long-Term Ecological Research
Jonathan R. Thompson; Arnim Wiek; Frederick J. Swanson; Stephen R. Carpenter; Nancy Fresco; Teresa Hollingsworth; Thomas A. Spies; David R. Foster
2012-01-01
Scenario studies have emerged as a powerful approach for synthesizing diverse forms of research and for articulating and evaluating alternative socioecological futures. Unlike predictive modeling, scenarios do not attempt to forecast the precise or probable state of any variable at a given point in the future. Instead, comparisons among a set of contrasting scenarios...
NASA Technical Reports Server (NTRS)
Koster, Randal D.; Salvucci, Guido D.; Rigden, Angela J.; Jung, Martin; Collatz, G. James; Schubert, Siegfried D.
2015-01-01
The spatial pattern across the continental United States of the interannual variance of warm-season water-dependent evapotranspiration, a pattern of relevance to land-atmosphere feedback, cannot be measured directly. Alternative and indirect approaches to estimating the pattern, however, do exist, and given the uncertainty of each, we use several such approaches here. We first quantify the water-dependent evapotranspiration variance pattern inherent in two derived evapotranspiration datasets available from the literature. We then search for the pattern in proxy geophysical variables (air temperature, stream flow, and NDVI) known to have strong ties to evapotranspiration. The variances inherent in all of the different (and mostly independent) data sources show some differences but are generally strongly consistent: they all show a large variance signal down the center of the U.S., with lower variances toward the east and (for the most part) toward the west. The robustness of the pattern across the datasets suggests that it indeed represents the pattern operating in nature. Using Budyko's hydroclimatic framework, we show that the pattern can largely be explained by the relative strength of water and energy controls on evapotranspiration across the continent.
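The water-versus-energy control in Budyko's framework can be sketched with the classical Budyko curve, which gives the evaporative fraction E/P as a function of the aridity index PET/P. This is the textbook functional form, offered as an illustration, not necessarily the exact formulation used in the study.

```python
import math

def budyko_evaporative_fraction(aridity):
    """Classical Budyko curve: E/P as a function of the aridity index PET/P."""
    return math.sqrt(aridity * math.tanh(1.0 / aridity) * (1.0 - math.exp(-aridity)))

# Sweep from humid (energy-limited) to arid (water-limited) climates.
for phi in (0.5, 1.0, 2.0, 4.0):
    ef = budyko_evaporative_fraction(phi)
    assert 0.0 < ef < 1.0   # evapotranspiration bounded by water supply (E < P)
    assert ef < phi         # ...and by energy supply (E < PET)
```

Near phi = 1, neither control dominates; this is the transitional regime where interannual evapotranspiration variance tends to peak, consistent with the central-U.S. signal described above.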
Nims, Robert J; Durney, Krista M; Cigan, Alexander D; Dusséaux, Antoine; Hung, Clark T; Ateshian, Gerard A
2016-02-06
This study presents a damage mechanics framework that employs observable state variables to describe damage in isotropic or anisotropic fibrous tissues. In this mixture theory framework, damage is tracked by the mass fraction of bonds that have broken. Anisotropic damage is subsumed in the assumption that multiple bond species may coexist in a material, each having its own damage behaviour. This approach recovers the classical damage mechanics formulation for isotropic materials, but does not appeal to a tensorial damage measure for anisotropic materials. In contrast with the classical approach, the use of observable state variables for damage allows direct comparison of model predictions to experimental damage measures, such as biochemical assays or Raman spectroscopy. Investigations of damage in discrete fibre distributions demonstrate that the resilience to damage increases with the number of fibre bundles; idealizing fibrous tissues using continuous fibre distribution models precludes the modelling of damage. This damage framework was used to test and validate the hypothesis that growth of cartilage constructs can lead to damage of the synthesized collagen matrix due to excessive swelling caused by synthesized glycosaminoglycans. Therefore, alternative strategies must be implemented in tissue engineering studies to prevent collagen damage during the growth process.
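The idea of an observable damage variable can be sketched in its simplest isotropic form: damage D is the mass fraction of bonds whose failure threshold has been exceeded by the largest strain seen so far, and the effective stress is scaled by (1 - D). The modulus, the Weibull threshold distribution, and the loading program below are all invented for illustration.

```python
import math

E_MOD, SCALE, SHAPE = 1.0, 0.05, 2.0  # illustrative modulus and Weibull parameters

def broken_fraction(max_strain):
    """Fraction of bonds broken: threshold CDF evaluated at the strain-history maximum."""
    return 1.0 - math.exp(-((max_strain / SCALE) ** SHAPE))

strains = [i * 0.01 for i in range(11)]            # monotonic loading to 10% strain
damage = [broken_fraction(e) for e in strains]
stress = [(1.0 - d) * E_MOD * e for d, e in zip(damage, strains)]

assert all(0.0 <= d <= 1.0 for d in damage)
assert all(b >= a for a, b in zip(damage, damage[1:]))  # damage never heals
```

Because D is a bond fraction rather than an abstract internal variable, a prediction like `damage[-1]` can be compared directly against an experimental assay, which is the point the abstract makes.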
A Statistical Approach For Modeling Tropical Cyclones. Synthetic Hurricanes Generator Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pasqualini, Donatella
This manuscript briefly describes a statistical approach to generate synthetic tropical cyclone tracks to be used in risk evaluations. The Synthetic Hurricane Generator (SynHurG) model allows modeling hurricane risk in the United States, supporting decision makers and implementations of adaptation strategies to extreme weather. In the literature there are mainly two approaches to model hurricane hazard for risk prediction: deterministic-statistical approaches, where the storm key physical parameters are calculated using complex physical climate models and the tracks are usually determined statistically from historical data; and statistical approaches, where both variables and tracks are estimated stochastically using historical records. SynHurG falls in the second category, adopting a pure stochastic approach.
The typological approach in child and family psychology: a review of theory, methods, and research.
Mandara, Jelani
2003-06-01
The purpose of this paper was to review the theoretical underpinnings, major concepts, and methods of the typological approach. It was argued that the typological approach offers a systematic, empirically rigorous and reliable way to synthesize the nomothetic variable-centered approach with the idiographic case-centered approach. Recent advances in cluster analysis validation make it a promising method for uncovering natural typologies. This paper also reviewed findings from personality and family studies that have revealed 3 prototypical personalities and parenting styles: Adjusted/Authoritative, Overcontrolled/Authoritarian, and Undercontrolled/Permissive. These prototypes are theorized to be synonymous with attractor basins in psychological state space. The connection between family types and personality structure as well as future directions of typological research were also discussed.
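Cluster analysis of the kind used to uncover typologies can be sketched with a minimal k-means routine. The toy data below (three separated Gaussian groups, loosely standing in for the three prototypes named in the abstract) and the choice k = 3 are purely illustrative.

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means on tuples: alternate assignment and centroid update."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[j].append(p)
        centers = [
            tuple(sum(xs) / len(xs) for xs in zip(*cl)) if cl else centers[j]
            for j, cl in enumerate(clusters)
        ]
    return centers, clusters

rng = random.Random(42)
data = [(rng.gauss(mx, 0.3), rng.gauss(my, 0.3))
        for mx, my in ((0, 0), (5, 0), (0, 5)) for _ in range(30)]
centers, clusters = kmeans(data, 3)
assert len(centers) == 3 and sum(len(c) for c in clusters) == len(data)
```

The validation step the abstract emphasizes corresponds to checking that such clusters replicate across samples and algorithms before interpreting them as natural types.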
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pham, VT.; Silva, L.; Digonnet, H.
2011-05-04
The objective of this work is to model the viscoelastic behaviour of polymer from the solid state to the liquid state. With this objective, we perform experimental tensile tests and compare with simulation results. The chosen polymer is a PMMA whose behaviour depends on its temperature. The computational simulation is based on the Navier-Stokes equations, for which we propose a mixed finite element method with a P1+/P1 interpolation using displacement (or velocity) and pressure as principal variables. The implemented technique uses a mesh composed of triangles (2D) or tetrahedra (3D). The goal of this approach is to model the viscoelastic behaviour of polymers through a fluid-structure coupling technique with a multiphase approach.
Norris, Peter M; da Silva, Arlindo M
2016-07-01
A method is presented to constrain a statistical model of sub-gridcolumn moisture variability using high-resolution satellite cloud data. The method can be used for large-scale model parameter estimation or cloud data assimilation. The gridcolumn model includes assumed probability density function (PDF) intra-layer horizontal variability and a copula-based inter-layer correlation model. The observables used in the current study are Moderate Resolution Imaging Spectroradiometer (MODIS) cloud-top pressure, brightness temperature and cloud optical thickness, but the method should be extensible to direct cloudy radiance assimilation for a small number of channels. The algorithm is a form of Bayesian inference with a Markov chain Monte Carlo (MCMC) approach to characterizing the posterior distribution. This approach is especially useful in cases where the background state is clear but cloudy observations exist. In traditional linearized data assimilation methods, a subsaturated background cannot produce clouds via any infinitesimal equilibrium perturbation, but the Monte Carlo approach is not gradient-based and allows jumps into regions of non-zero cloud probability. The current study uses a skewed-triangle distribution for layer moisture. The article also includes a discussion of the Metropolis and multiple-try Metropolis versions of MCMC.
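The Metropolis step at the heart of the MCMC approach described here can be sketched generically. The target below is a one-dimensional skewed-triangle density (echoing the layer-moisture distribution mentioned), but the mode location, step size, and chain length are all illustrative assumptions, not the article's configuration.

```python
import random

MODE = 0.2  # assumed mode of the skewed-triangle target on [0, 1]

def triangle_pdf(x, c=MODE, lo=0.0, hi=1.0):
    """Triangular density with support [lo, hi] and mode c."""
    if not lo <= x <= hi:
        return 0.0
    if x < c:
        return 2.0 * (x - lo) / ((hi - lo) * (c - lo))
    return 2.0 * (hi - x) / ((hi - lo) * (hi - c))

def metropolis(n, step=0.2, seed=7):
    """Random-walk Metropolis: accept with probability min(1, pi(prop)/pi(x))."""
    rng = random.Random(seed)
    x, out = MODE, []
    for _ in range(n):
        prop = x + rng.uniform(-step, step)
        if rng.random() < triangle_pdf(prop) / triangle_pdf(x):
            x = prop            # out-of-support proposals have density 0: rejected
        out.append(x)
    return out

samples = metropolis(20000)
mean = sum(samples) / len(samples)
assert abs(mean - (0.0 + 1.0 + MODE) / 3.0) < 0.05  # triangular mean = (lo+hi+c)/3
```

Because acceptance depends only on a density ratio, the sampler makes finite jumps rather than gradient steps, which is exactly the property the abstract exploits to reach cloudy states from a clear background.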
A data-driven approach for modeling post-fire debris-flow volumes and their uncertainty
Friedel, Michael J.
2011-01-01
This study demonstrates the novel application of genetic programming to evolve nonlinear post-fire debris-flow volume equations from variables associated with a data-driven conceptual model of the western United States. The search space is constrained using a multi-component objective function that simultaneously minimizes root-mean-squared and unit errors for the evolution of fittest equations. An optimization technique is then used to estimate the limits of nonlinear prediction uncertainty associated with the debris-flow equations. In contrast to a published multiple linear regression three-variable equation, linking basin area with slopes greater than or equal to 30 percent, burn severity characterized as area burned moderate plus high, and total storm rainfall, the data-driven approach discovers many nonlinear and several dimensionally consistent equations that are unbiased and have less prediction uncertainty. Of the nonlinear equations, the best performance (lowest prediction uncertainty) is achieved when using three variables: average basin slope, total burned area, and total storm rainfall. Further reduction in uncertainty is possible for the nonlinear equations when dimensional consistency is not a priority and by subsequently applying a gradient solver to the fittest solutions. The data-driven modeling approach can be applied to nonlinear multivariate problems in all fields of study.
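The evolutionary fitting described here can be caricatured with a mutate-and-select search over the coefficients of one fixed power-law form, ln V = a + b*S + c*ln A + d*ln R, fitted to synthetic data. This is a drastic simplification of genetic programming (which evolves equation structure as well as coefficients); the data, model form, and hyperparameters below are all invented for illustration.

```python
import math
import random

rng = random.Random(3)
TRUE = (1.0, 0.5, 0.8, 1.2)  # hidden coefficients used to synthesize the data

def predict(theta, s, a, r):
    """ln(volume) from slope s, burned area a, storm rainfall r."""
    return theta[0] + theta[1] * s + theta[2] * math.log(a) + theta[3] * math.log(r)

data = []
for _ in range(60):
    s, a, r = rng.uniform(0, 1), rng.uniform(1, 100), rng.uniform(1, 50)
    data.append((s, a, r, predict(TRUE, s, a, r) + rng.gauss(0, 0.1)))

def rmse(theta):
    return math.sqrt(sum((predict(theta, s, a, r) - y) ** 2
                         for s, a, r, y in data) / len(data))

best = (0.0, 0.0, 0.0, 0.0)
for _ in range(3000):                       # mutate-and-select hill climb
    cand = tuple(t + rng.gauss(0, 0.05) for t in best)
    if rmse(cand) < rmse(best):
        best = cand

assert rmse(best) < rmse((0.0, 0.0, 0.0, 0.0))  # the evolved fit beats the start
```

Genetic programming replaces the fixed form with evolving expression trees, which is what lets the study discover dimensionally consistent equations rather than only coefficients.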
Distinct promoter activation mechanisms modulate noise-driven HIV gene expression
NASA Astrophysics Data System (ADS)
Chavali, Arvind K.; Wong, Victor C.; Miller-Jensen, Kathryn
2015-12-01
Latent human immunodeficiency virus (HIV) infections occur when the virus occupies a transcriptionally silent but reversible state, presenting a major obstacle to cure. There is experimental evidence that random fluctuations in gene expression, when coupled to the strong positive feedback encoded by the HIV genetic circuit, act as a ‘molecular switch’ controlling cell fate, i.e., viral replication versus latency. Here, we implemented a stochastic computational modeling approach to explore how different promoter activation mechanisms in the presence of positive feedback would affect noise-driven activation from latency. We modeled the HIV promoter as existing in one, two, or three states that are representative of increasingly complex mechanisms of promoter repression underlying latency. We demonstrate that two-state and three-state models are associated with greater variability in noisy activation behaviors, and we find that Fano factor (defined as variance over mean) proves to be a useful noise metric to compare variability across model structures and parameter values. Finally, we show how three-state promoter models can be used to qualitatively describe complex reactivation phenotypes in response to therapeutic perturbations that we observe experimentally. Ultimately, our analysis suggests that multi-state models more accurately reflect observed heterogeneous reactivation and may be better suited to evaluate how noise affects viral clearance.
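A two-state (telegraph) promoter of the kind modeled here can be simulated with a minimal Gillespie algorithm, using the Fano factor of the mRNA copy number as the noise metric named in the abstract. The rate constants are invented for illustration and are not the article's parameter values.

```python
import random

K_ON, K_OFF, K_TX, K_DEG = 0.1, 0.1, 10.0, 1.0  # assumed rates (per unit time)

def simulate(t_end, seed=11):
    """Gillespie simulation; returns mRNA counts sampled at unit intervals."""
    rng = random.Random(seed)
    t, on, m, samples = 0.0, False, 0, []
    next_sample = 50.0                          # discard a burn-in period
    while t < t_end:
        rates = [K_OFF if on else K_ON,         # promoter toggles state
                 K_TX if on else 0.0,           # transcription only when ON
                 K_DEG * m]                     # first-order mRNA decay
        total = sum(rates)
        t += rng.expovariate(total)
        while next_sample < t and next_sample < t_end:
            samples.append(m)                   # state is constant between events
            next_sample += 1.0
        u = rng.random() * total
        if u < rates[0]:
            on = not on
        elif u < rates[0] + rates[1]:
            m += 1
        else:
            m -= 1
    return samples

samples = simulate(20000.0)
mean = sum(samples) / len(samples)
fano = sum((x - mean) ** 2 for x in samples) / len(samples) / mean
assert fano > 1.0   # slow promoter switching makes expression super-Poissonian
```

A one-state (constitutive) promoter would give Fano near 1; the excess above 1 here is the bursty signature that multi-state models capture and that the Fano factor compares across model structures.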
Higher temporal variability of forest breeding bird communities in fragmented landscapes
Boulinier, T.; Nichols, J.D.; Hines, J.E.; Sauer, J.R.; Flather, C.H.; Pollock, K.H.
1998-01-01
Understanding the relationship between animal community dynamics and landscape structure has become a priority for biodiversity conservation. In particular, predicting the effects of habitat destruction that confine species to networks of small patches is an important prerequisite to conservation plan development. Theoretical models that predict the occurrence of species in fragmented landscapes, and relationships between stability and diversity do exist. However, reliable empirical investigations of the dynamics of biodiversity have been prevented by differences in species detection probabilities among landscapes. Using long-term data sampled at a large spatial scale in conjunction with a capture-recapture approach, we developed estimates of parameters of community changes over a 22-year period for forest breeding birds in selected areas of the eastern United States. We show that forest fragmentation was associated not only with a reduced number of forest bird species, but also with increased temporal variability in the number of species. This higher temporal variability was associated with higher local extinction and turnover rates. These results have major conservation implications. Moreover, the approach used provides a practical tool for the study of the dynamics of biodiversity.
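Correcting species counts for imperfect detection, the core of the capture-recapture approach described, can be sketched with a first-order jackknife richness estimator. This particular estimator and the toy detection histories are illustrative stand-ins, not necessarily the exact estimator used in the study.

```python
def jackknife_richness(detection_histories, n_occasions):
    """First-order jackknife: raw count plus a correction from singleton species.

    detection_histories: per-species tuples of 0/1 detections across occasions.
    """
    observed = sum(1 for h in detection_histories if any(h))
    f1 = sum(1 for h in detection_histories if sum(h) == 1)  # seen exactly once
    return observed + f1 * (n_occasions - 1) / n_occasions

histories = [
    (1, 1, 0), (1, 0, 1), (0, 1, 1),   # readily detected species
    (1, 0, 0), (0, 0, 1),              # species detected exactly once
    (0, 0, 0),                         # present but never detected
]
est = jackknife_richness(histories, 3)
assert est > 5   # the estimate exceeds the raw count of 5 observed species
```

Comparing raw counts across landscapes with different detection probabilities would confound detectability with true richness, which is the bias the abstract says such estimators remove.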
NASA Technical Reports Server (NTRS)
Arnold, Steven M.; Goldberg, Robert K.; Lerch, Bradley A.; Saleeb, Atef F.
2009-01-01
Herein a general, multimechanism, physics-based viscoelastoplastic model is presented in the context of an integrated diagnosis and prognosis methodology which is proposed for structural health monitoring, with particular applicability to gas turbine engine structures. In this methodology, diagnostics and prognostics will be linked through state awareness variable(s). Key technologies which comprise the proposed integrated approach include (1) diagnostic/detection methodology, (2) prognosis/lifing methodology, (3) diagnostic/prognosis linkage, (4) experimental validation, and (5) material data information management system. A specific prognosis lifing methodology, experimental characterization and validation and data information management are the focal point of current activities being pursued within this integrated approach. The prognostic lifing methodology is based on an advanced multimechanism viscoelastoplastic model which accounts for both stiffness and/or strength reduction damage variables. Methods to characterize both the reversible and irreversible portions of the model are discussed. Once the multiscale model is validated the intent is to link it to appropriate diagnostic methods to provide a full-featured structural health monitoring system.
Shanks, O.C.; Sivaganesan, M.; Peed, L.; Kelty, C.A.; Blackwood, A.D.; Greene, M.R.; Noble, R.T.; Bushon, R.N.; Stelzer, E.A.; Kinzelman, J.; Anan'Eva, T.; Sinigalliano, C.; Wanless, D.; Griffith, J.; Cao, Y.; Weisberg, S.; Harwood, V.J.; Staley, C.; Oshima, K.H.; Varma, M.; Haugland, R.A.
2012-01-01
The application of quantitative real-time PCR (qPCR) technologies for the rapid identification of fecal bacteria in environmental waters is being considered for use as a national water quality metric in the United States. The transition from research tool to a standardized protocol requires information on the reproducibility and sources of variation associated with qPCR methodology across laboratories. This study examines interlaboratory variability in the measurement of enterococci and Bacteroidales concentrations from standardized, spiked, and environmental sources of DNA using the Entero1a and GenBac3 qPCR methods, respectively. Comparisons are based on data generated from eight different research facilities. Special attention was placed on the influence of the DNA isolation step and effect of simplex and multiplex amplification approaches on interlaboratory variability. Results suggest that a crude lysate is sufficient for DNA isolation unless environmental samples contain substances that can inhibit qPCR amplification. No appreciable difference was observed between simplex and multiplex amplification approaches. Overall, interlaboratory variability levels remained low (<10% coefficient of variation) regardless of qPCR protocol. © 2011 American Chemical Society.
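The interlaboratory summary statistic quoted above is straightforward to reproduce. A minimal sketch follows; the lab estimates are hypothetical numbers for illustration, not the study's data:

```python
import statistics

def coefficient_of_variation(values):
    """Percent CV: 100 * sample standard deviation / mean."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical log10 copy-number estimates of one spiked sample
# as reported by eight labs (illustrative values only).
lab_estimates = [5.02, 4.98, 5.10, 4.95, 5.05, 5.00, 4.97, 5.08]
cv = coefficient_of_variation(lab_estimates)
```

A CV below 10% would correspond to the low interlaboratory variability reported above.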
Structural Time Series Model for El Niño Prediction
NASA Astrophysics Data System (ADS)
Petrova, Desislava; Koopman, Siem Jan; Ballester, Joan; Rodo, Xavier
2015-04-01
ENSO is a dominant feature of climate variability on inter-annual time scales, destabilizing weather patterns throughout the globe and having far-reaching socio-economic consequences. Not only does it lead to extensive rainfall and flooding in some regions of the world and anomalous droughts in others, thus ruining local agriculture, but it also substantially affects marine ecosystems and the sustained exploitation of marine resources in particular coastal zones, especially the Pacific South American coast. As a result, forecasting of ENSO, and especially of the warm phase of the oscillation (El Niño/EN), has long been a subject of intense research and improvement. The present study explores a novel method for the prediction of the Niño 3.4 index: the advantageous statistical modeling approach of Structural Time Series Analysis, which has not previously been applied to this problem. We have developed such a model using a State Space approach for the unobserved components of the time series. Its distinguishing feature is that observations consist of various components - level, seasonality, cycle, disturbance, and regression variables incorporated as explanatory covariates. These components are aimed at capturing the various modes of variability of the N3.4 time series. They are modeled separately, then combined in a single model for analysis and forecasting. Customary statistical ENSO prediction models essentially use SST, SLP and wind stress in the equatorial Pacific. We introduce new regression variables: subsurface ocean temperature in the western equatorial Pacific, motivated by recent (Ramesh and Murtugudde, 2012) and classical research (Jin, 1997; Wyrtki, 1985) showing that subsurface processes and heat accumulation there are fundamental for the initiation of an El Niño event; and a southern Pacific temperature-difference tracer, the Rossbell dipole, leading EN by about nine months (Ballester, 2011).
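As a minimal sketch of the State Space machinery underlying such a model, consider the local level specification (level plus observation noise only; the cycle, seasonal and regression components described above are omitted), filtered with a scalar Kalman recursion. All parameter values here are illustrative assumptions:

```python
import random

def kalman_local_level(y, q, r, a0=0.0, p0=1e6):
    """Scalar Kalman filter for the local level model:
    y_t = mu_t + eps_t (variance r), mu_t = mu_{t-1} + eta_t (variance q).
    Returns the filtered level estimates."""
    a, p = a0, p0
    filtered = []
    for obs in y:
        p = p + q                  # predict: the level is a random walk
        k = p / (p + r)            # Kalman gain
        a = a + k * (obs - a)      # update with the new observation
        p = (1 - k) * p
        filtered.append(a)
    return filtered

random.seed(0)
# synthetic series: slowly drifting level plus observation noise
level, y = 0.0, []
for _ in range(200):
    level += random.gauss(0, 0.1)
    y.append(level + random.gauss(0, 1.0))
est = kalman_local_level(y, q=0.01, r=1.0)
```

In a full structural model each additional unobserved component (cycle, seasonal, regressors) enlarges the state vector, but the filter recursion keeps the same predict/update form.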
The detection of cheating in multiple choice examinations
NASA Astrophysics Data System (ADS)
Richmond, Peter; Roehner, Bertrand M.
2015-10-01
Cheating in examinations is acknowledged by an increasing number of organizations to be widespread. We examine two different approaches to assess their effectiveness at detecting anomalous results, suggestive of collusion, using data taken from a number of multiple-choice examinations organized by the UK Radio Communication Foundation. Analysis of student pair overlaps of correct answers is shown to give results consistent with more orthodox statistical correlations, for which standard confidence limits, as opposed to the less familiar "Bonferroni method", can be used. A simulation approach is also developed which confirms the interpretation of the empirical approach. Then the variables X_i = (1 − U_i)Y_i + U_i Z form a system of symmetric dependent binary (0, 1; p) variables whose correlation matrix is ρ_ij = r. The proof is easy and given in the paper. Let us add two remarks. • We used the expression "symmetric variables" to reflect the fact that all X_i play the same role. The expression "exchangeable variables" is often used with the same meaning. • The correlation matrix has only positive elements. This is of course imposed by the symmetry condition: ρ_12 < 0 and ρ_23 < 0 would imply ρ_13 > 0, thus violating the symmetry requirement. In the following subsections we will be concerned with the question of uniqueness of the set of X_i generated above. Needless to say, it is useful to know whether the proposition gives the answer or only one among many. More precisely, the problem can be stated as follows.
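The construction X_i = (1 − U_i)Y_i + U_i Z can be checked by simulation. The sketch below assumes the parameterization U_i ~ Bernoulli(√r) with Y_i and Z i.i.d. Bernoulli(p), which yields pairwise correlation r between any two X_i:

```python
import random

def sample_correlated_binaries(n_vars, p, r, rng):
    """One draw of X_1..X_n with X_i = (1 - U_i) * Y_i + U_i * Z,
    where Y_i, Z ~ Bernoulli(p) i.i.d. and U_i ~ Bernoulli(sqrt(r));
    sharing Z when U_i = U_j = 1 induces pairwise correlation r."""
    z = 1 if rng.random() < p else 0
    xs = []
    for _ in range(n_vars):
        y = 1 if rng.random() < p else 0
        u = 1 if rng.random() < r ** 0.5 else 0
        xs.append(z if u else y)
    return xs

def empirical_corr(samples, i, j):
    """Sample correlation of binary coordinates i and j."""
    n = len(samples)
    mi = sum(s[i] for s in samples) / n
    mj = sum(s[j] for s in samples) / n
    cov = sum((s[i] - mi) * (s[j] - mj) for s in samples) / n
    return cov / (mi * (1 - mi) * mj * (1 - mj)) ** 0.5

rng = random.Random(42)
draws = [sample_correlated_binaries(3, p=0.4, r=0.25, rng=rng)
         for _ in range(50000)]
rho = empirical_corr(draws, 0, 1)
```

Since each X_i equals either Y_i or the shared Z, its marginal stays Bernoulli(p), and the mixing probability √r appears squared in the covariance, giving ρ_ij = r.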
NASA Astrophysics Data System (ADS)
Liu, Jie; Wang, Wilson; Ma, Fai
2011-07-01
System current state estimation (or condition monitoring) and future state prediction (or failure prognostics) constitute the core elements of condition-based maintenance programs. For complex systems whose internal state variables are either inaccessible to sensors or hard to measure under normal operational conditions, inference has to be made from indirect measurements using approaches such as Bayesian learning. In recent years, the auxiliary particle filter (APF) has gained popularity in Bayesian state estimation; the APF technique, however, has some potential limitations in real-world applications. For example, the diversity of the particles may deteriorate when the process noise is small, and the variance of the importance weights could become extremely large when the likelihood varies dramatically over the prior. To tackle these problems, a regularized auxiliary particle filter (RAPF) is developed in this paper for system state estimation and forecasting. This RAPF aims to improve the performance of the APF through two innovative steps: (1) regularize the approximating empirical density and redraw samples from a continuous distribution so as to diversify the particles; and (2) smooth out the rather diffused proposals by a rejection/resampling approach so as to improve the robustness of particle filtering. The effectiveness of the proposed RAPF technique is evaluated through simulations of a nonlinear/non-Gaussian benchmark model for state estimation. It is also implemented for a real application in the remaining useful life (RUL) prediction of lithium-ion batteries.
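A stripped-down sketch of the regularization idea in step (1): a bootstrap particle filter for a scalar random-walk model, with a Gaussian kernel jitter after resampling so that particles are redrawn from a continuous density and stay diverse. This illustrates the general principle only, not the authors' RAPF (which additionally applies a rejection/resampling step to the proposals of an auxiliary particle filter):

```python
import math
import random

def regularized_particle_filter(observations, n_particles, process_std,
                                obs_std, bandwidth, rng):
    """Bootstrap particle filter for x_t = x_{t-1} + w_t, y_t = x_t + v_t,
    with a Gaussian kernel-regularization (jitter) step after resampling."""
    particles = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    estimates = []
    for y in observations:
        # propagate through the random-walk process model
        particles = [p + rng.gauss(0.0, process_std) for p in particles]
        # weight by the Gaussian likelihood of the observation
        weights = [math.exp(-0.5 * ((y - p) / obs_std) ** 2) for p in particles]
        total = sum(weights)
        if total == 0.0:
            weights = [1.0 / n_particles] * n_particles
        else:
            weights = [w / total for w in weights]
        estimates.append(sum(w * p for w, p in zip(weights, particles)))
        # multinomial resampling, then regularization jitter
        particles = rng.choices(particles, weights=weights, k=n_particles)
        particles = [p + rng.gauss(0.0, bandwidth) for p in particles]
    return estimates

rng = random.Random(1)
truths, ys, x = [], [], 0.0
for _ in range(100):
    x += rng.gauss(0.0, 0.2)
    truths.append(x)
    ys.append(x + rng.gauss(0.0, 0.5))
est = regularized_particle_filter(ys, 500, 0.2, 0.5, 0.05, rng)
```

Without the jitter, repeated resampling of a small-process-noise model collapses the cloud onto a few identical particles, which is precisely the diversity loss the regularization step addresses.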
Exact solutions for the entropy production rate of several irreversible processes.
Ross, John; Vlad, Marcel O
2005-11-24
We investigate thermal conduction described by Newton's law of cooling and by Fourier's transport equation, and chemical reactions based on mass action kinetics, where we detail a simple example of a reaction mechanism with one intermediate. In these cases we derive exact expressions for the entropy production rate and its differential. We show that at a stationary state the entropy production rate is an extremum if and only if the stationary state is a state of thermodynamic equilibrium. These results are exact and independent of any expansions of the entropy production rate. In the case of thermal conduction we compare our exact approach with the conventional approach based on the expansion of the entropy production rate near equilibrium. If we expand the entropy production rate in a series, keep terms up to the third order in the deviation variables, and then differentiate, we find that the entropy production rate is not an extremum at a nonequilibrium steady state. If there is a strict proportionality between fluxes and forces, then the entropy production rate is an extremum at the stationary state even if the stationary state is far away from equilibrium.
Integration of nanoscale memristor synapses in neuromorphic computing architectures
NASA Astrophysics Data System (ADS)
Indiveri, Giacomo; Linares-Barranco, Bernabé; Legenstein, Robert; Deligeorgis, George; Prodromakis, Themistoklis
2013-09-01
Conventional neuro-computing architectures and artificial neural networks have often been developed with no or loose connections to neuroscience. As a consequence, they have largely ignored key features of biological neural processing systems, such as their extremely low power consumption or their ability to carry out robust and efficient computation using massively parallel arrays of limited-precision, highly variable, and unreliable components. Recent developments in nano-technologies are making available extremely compact and low-power, but also variable and unreliable, solid-state devices that can potentially extend the capabilities of existing CMOS technologies. In particular, memristors are regarded as a promising solution for modeling key features of biological synapses due to their nanoscale dimensions, their capacity to store multiple bits of information per element and the low energy required to write distinct states. In this paper, we first review the neuro- and neuromorphic computing approaches that can best exploit the properties of memristive nanoscale devices, and then propose a novel hybrid memristor-CMOS neuromorphic circuit which represents a radical departure from conventional neuro-computing approaches, as it uses memristors to directly emulate the biophysics and temporal dynamics of real synapses. We point out the differences between the use of memristors in conventional neuro-computing architectures and the hybrid memristor-CMOS circuit proposed, and argue how this circuit represents an ideal building block for implementing brain-inspired probabilistic computing paradigms that are robust to variability and fault tolerant by design.
Integrated stoichiometric, thermodynamic and kinetic modelling of steady state metabolism
Fleming, R.M.T.; Thiele, I.; Provan, G.; Nasheuer, H.P.
2010-01-01
The quantitative analysis of biochemical reactions and metabolites is at the frontier of the biological sciences. The recent availability of high-throughput technology data sets in biology has paved the way for new modelling approaches at various levels of complexity, including the metabolome of a cell or an organism. Understanding the metabolism of single cells and multi-cell organisms will provide the knowledge for the rational design of growth conditions to produce commercially valuable reagents in biotechnology. Here, we demonstrate how equations representing steady state mass conservation, energy conservation, the second law of thermodynamics, and reversible enzyme kinetics can be formulated as a single system of linear equalities and inequalities, in addition to linear equalities on exponential variables. Even though the feasible set is non-convex, the reformulation is exact and amenable to large-scale numerical analysis, a prerequisite for computationally feasible genome scale modelling. Integrating flux, concentration and kinetic variables in a unified constraint-based formulation is aimed at increasing the quantitative predictive capacity of flux balance analysis. Incorporation of experimental and theoretical bounds on thermodynamic and kinetic variables ensures that the predicted steady state fluxes are both thermodynamically and biochemically feasible. The resulting in silico predictions are tested against fluxomic data for central metabolism in E. coli and compare favourably with in silico predictions by flux balance analysis. PMID:20230840
Perendeci, Altinay; Arslan, Sever; Tanyolaç, Abdurrahman; Celebi, Serdar S
2009-10-01
A conceptual neural fuzzy model based on an adaptive-network-based fuzzy inference system (ANFIS) was proposed, using available on-line and off-line operational input variables for a sugar factory anaerobic wastewater treatment plant operating under unsteady-state conditions, to estimate the effluent chemical oxygen demand (COD). The predictive power of the developed model was improved, as a new approach, by adding a phase vector and the recent values of COD over the preceding 5-10 days, longer than the overall retention time of wastewater in the system. A 10-day history of effluent COD with a two-valued phase vector in the input variable matrix including all parameters had the most predictive power. A 7-day history with a two-valued phase vector in a matrix comprised of only on-line variables yielded fairly good estimates. The developed ANFIS model with phase vector and history extension has been able to adequately represent the behavior of the treatment system.
NASA Astrophysics Data System (ADS)
Borga, Marco; Francois, Baptiste; Hingray, Benoit; Zoccatelli, Davide; Creutin, Jean-Dominique; brown, Casey
2016-04-01
Due to their variable and uncontrollable features, integration of Variable Renewable Energies (e.g. solar power, wind power and hydropower, denoted as VRE) into the electricity network implies higher production variability and increased risk of not meeting demand. Two approaches are commonly used for assessing this risk, and especially its evolution in a global change context (i.e. climate and societal changes): top-down and bottom-up approaches. The general idea of a top-down approach is to assess the effects of global change, or of some of its key aspects (e.g., the effects of COP 21, of the deployment of Smart Grids, or of climate change), on these systems with chains of loosely linked simulation models within a predictive framework. The bottom-up approach aims to improve understanding of the dependencies between the vulnerability of regional systems and large-scale phenomena from knowledge gained through detailed exploration of the response to change of the system of interest, which may reveal vulnerability thresholds and tipping points as well as potential opportunities. Brown et al. (2012) defined an analytical framework to merge these two approaches. The objective is to build a set of Climate Response Functions (CRFs) putting in perspective (1) indicators of desired states ("success") and undesired states ("failure") of a system, as defined in collaboration with stakeholders; (2) exhaustive exploration of the effects of uncertain forcings and imperfect system understanding on the response of the system itself to a plausible set of possible changes, implemented with a multi-dimensionally consistent "stress test" algorithm; and (3) a set of "ex post" hydroclimatic and socioeconomic scenarios that provide insight into the differential effectiveness of alternative policies and serve as entry points for the provision of climate information to inform policy evaluation and choice.
We adapted this approach for analyzing a 100% renewable energy system within a region in Northern Italy. The main VRE available in the region are solar and hydropower (with an important fraction of run-of-the-river hydropower). The indicator of success is the well-known 'energy penetration', defined as the percentage of energy demand met by the VRE power generation. The synthetic weather variables used for building the CRFs are obtained by perturbing the observed weather time series with the change-factors method. A large ensemble of future climate scenarios from CMIP5 experiments is further used for assessing these factors for different emission scenarios, climate models and future prediction lead times. Their positioning on the CRFs allows discussing the risk pertaining to VRE penetration in the future. A focus is especially made on the different CRFs obtained from daily to seasonal time scales.
Teresi, Jeanne A.; Jones, Richard N.
2017-01-01
The purpose of this article is to introduce the methods used and challenges confronted by the authors of this two-part series of articles describing the results of analyses of measurement equivalence of the short form scales from the Patient Reported Outcomes Measurement Information System® (PROMIS®). Qualitative and quantitative approaches used to examine differential item functioning (DIF) are reviewed briefly. Qualitative methods focused on generation of DIF hypotheses. The basic quantitative approaches used all rely on a latent variable model, and examine parameters either derived directly from item response theory (IRT) or from structural equation models (SEM). A key methods focus of these articles is to describe state-of-the-art approaches to examination of measurement equivalence in eight domains: physical health, pain, fatigue, sleep, depression, anxiety, cognition, and social function. These articles represent the first time that DIF has been examined systematically in the PROMIS short form measures, particularly among ethnically diverse groups. This is also the first set of analyses to examine the performance of PROMIS short forms in patients with cancer. Latent variable model state-of-the-art methods for examining measurement equivalence are introduced briefly in this paper to orient readers to the approaches adopted in this set of papers. Several methodological challenges underlying (DIF-free) anchor item selection and model assumption violations are presented as a backdrop for the articles in this two-part series on measurement equivalence of PROMIS measures. PMID:28983448
Teresi, Jeanne A; Jones, Richard N
2016-01-01
The purpose of this article is to introduce the methods used and challenges confronted by the authors of this two-part series of articles describing the results of analyses of measurement equivalence of the short form scales from the Patient Reported Outcomes Measurement Information System® (PROMIS®). Qualitative and quantitative approaches used to examine differential item functioning (DIF) are reviewed briefly. Qualitative methods focused on generation of DIF hypotheses. The basic quantitative approaches used all rely on a latent variable model, and examine parameters either derived directly from item response theory (IRT) or from structural equation models (SEM). A key methods focus of these articles is to describe state-of-the-art approaches to examination of measurement equivalence in eight domains: physical health, pain, fatigue, sleep, depression, anxiety, cognition, and social function. These articles represent the first time that DIF has been examined systematically in the PROMIS short form measures, particularly among ethnically diverse groups. This is also the first set of analyses to examine the performance of PROMIS short forms in patients with cancer. Latent variable model state-of-the-art methods for examining measurement equivalence are introduced briefly in this paper to orient readers to the approaches adopted in this set of papers. Several methodological challenges underlying (DIF-free) anchor item selection and model assumption violations are presented as a backdrop for the articles in this two-part series on measurement equivalence of PROMIS measures.
NASA Astrophysics Data System (ADS)
Gan, L.; Yang, F.; Shi, Y. F.; He, H. L.
2017-11-01
Many applications, such as the rapidly developing electric vehicles, demand to know how much continuous and instantaneous power a battery can provide. Given the large-scale application of lithium-ion batteries, lithium-ion batteries are taken as our research object. Many experiments are designed to obtain the lithium-ion battery parameters to ensure the relevance and reliability of the estimation. To evaluate the continuous and instantaneous load capability of a battery, called state-of-function (SOF), this paper proposes a fuzzy logic algorithm based on battery state-of-charge (SOC), state-of-health (SOH) and C-rate parameters. Simulation and experimental results indicate that the proposed approach is suitable for battery SOF estimation.
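The fuzzy-logic idea can be sketched with a toy two-input rule base. The membership shapes, rule outputs, and the omission of the C-rate input are all illustrative assumptions here, not the paper's calibrated system:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def sof_estimate(soc, soh):
    """Toy fuzzy estimate of state-of-function in [0, 1] from SOC and SOH
    (both normalized to [0, 1]); illustrative rule base only."""
    soc_low, soc_high = tri(soc, -0.4, 0.0, 0.6), tri(soc, 0.4, 1.0, 1.4)
    soh_low, soh_high = tri(soh, -0.4, 0.0, 0.6), tri(soh, 0.4, 1.0, 1.4)
    # rule firing strengths (min for AND), each paired with a crisp output
    rules = [
        (min(soc_high, soh_high), 1.0),   # charged & healthy -> high SOF
        (min(soc_high, soh_low), 0.5),
        (min(soc_low, soh_high), 0.4),
        (min(soc_low, soh_low), 0.0),     # depleted & degraded -> low SOF
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules) or 1.0
    return num / den   # weighted-average defuzzification
```

A real implementation would add C-rate as a third input and tune the memberships against the battery experiments described above.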
Entanglement and Wigner Function Negativity of Multimode Non-Gaussian States
NASA Astrophysics Data System (ADS)
Walschaers, Mattia; Fabre, Claude; Parigi, Valentina; Treps, Nicolas
2017-11-01
Non-Gaussian operations are essential to exploit the quantum advantages in optical continuous variable quantum information protocols. We focus on mode-selective photon addition and subtraction as experimentally promising processes to create multimode non-Gaussian states. Our approach is based on correlation functions, as is common in quantum statistical mechanics and condensed matter physics, mixed with quantum optics tools. We formulate an analytical expression of the Wigner function after the subtraction or addition of a single photon, for arbitrarily many modes. It is used to demonstrate entanglement properties specific to non-Gaussian states and also leads to a practical and elegant condition for Wigner function negativity. Finally, we analyze the potential of photon addition and subtraction for an experimentally generated multimode Gaussian state.
Entanglement and Wigner Function Negativity of Multimode Non-Gaussian States.
Walschaers, Mattia; Fabre, Claude; Parigi, Valentina; Treps, Nicolas
2017-11-03
Non-Gaussian operations are essential to exploit the quantum advantages in optical continuous variable quantum information protocols. We focus on mode-selective photon addition and subtraction as experimentally promising processes to create multimode non-Gaussian states. Our approach is based on correlation functions, as is common in quantum statistical mechanics and condensed matter physics, mixed with quantum optics tools. We formulate an analytical expression of the Wigner function after the subtraction or addition of a single photon, for arbitrarily many modes. It is used to demonstrate entanglement properties specific to non-Gaussian states and also leads to a practical and elegant condition for Wigner function negativity. Finally, we analyze the potential of photon addition and subtraction for an experimentally generated multimode Gaussian state.
State estimation applications in aircraft flight-data analysis: A user's manual for SMACK
NASA Technical Reports Server (NTRS)
Bach, Ralph E., Jr.
1991-01-01
The evolution in the use of state estimation is traced for the analysis of aircraft flight data. A unifying mathematical framework for state estimation is reviewed, and several examples are presented that illustrate a general approach for checking instrument accuracy and data consistency, and for estimating variables that are difficult to measure. Recent applications associated with research aircraft flight tests and airline turbulence upsets are described. A computer program for aircraft state estimation is discussed in some detail. This document is intended to serve as a user's manual for the program called SMACK (SMoothing for AirCraft Kinematics). The diversity of the applications described emphasizes the potential advantages in using SMACK for flight-data analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niccoli, G.
The antiperiodic transfer matrices associated with higher spin representations of the rational 6-vertex Yang-Baxter algebra are analyzed by generalizing the approach introduced recently in the framework of Sklyanin's quantum separation of variables (SOV) for cyclic representations, spin-1/2 highest weight representations, and also for spin-1/2 representations of the 6-vertex reflection algebra. This SOV approach allows us to derive exact results that represent complicated tasks for more traditional methods based on the Bethe ansatz and Baxter Q-operator. In particular, we prove both the completeness of the SOV characterization of the transfer matrix spectrum and its simplicity. Then, the derived characterization of local operators by Sklyanin's quantum separate variables and the expression of the scalar products of separate states by determinant formulae allow us to compute the form factors of the local spin operators by single determinant formulae similar to those of the scalar products.
Airfoil Design and Optimization by the One-Shot Method
NASA Technical Reports Server (NTRS)
Kuruvila, G.; Taasan, Shlomo; Salas, M. D.
1995-01-01
An efficient numerical approach for the design of optimal aerodynamic shapes is presented in this paper. The objective of any optimization problem is to find the optimum of a cost function subject to a certain state equation (governing equation of the flow field) and certain side constraints. As in classical optimal control methods, the present approach introduces a costate variable (Lagrange multiplier) to evaluate the gradient of the cost function. High efficiency in reaching the optimum solution is achieved by using a multigrid technique and updating the shape in a hierarchical manner such that smooth (low-frequency) changes are done separately from high-frequency changes. Thus, the design variables are changed on a grid where their changes produce nonsmooth (high-frequency) perturbations that can be damped efficiently by the multigrid. The cost of solving the optimization problem is approximately two to three times the cost of the equivalent analysis problem.
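The costate idea can be illustrated on a scalar toy problem: a state equation R(u, d) = u³ + u − d = 0 with cost J = (u − 1)², where the adjoint (Lagrange multiplier) gradient is checked against finite differences. The equation, cost, and solver below are assumptions for illustration, not the airfoil problem or its multigrid machinery:

```python
def solve_state(d, iters=50):
    """Newton solve of the toy state equation R(u, d) = u**3 + u - d = 0."""
    u = 0.0
    for _ in range(iters):
        u -= (u**3 + u - d) / (3 * u**2 + 1)
    return u

def cost(d):
    u = solve_state(d)
    return (u - 1.0) ** 2

def adjoint_gradient(d):
    """Costate gradient of the cost with respect to the design variable d:
    choose lambda so that dJ/du + lambda * dR/du = 0, then
    dJ/dd = lambda * dR/dd (here dR/dd = -1)."""
    u = solve_state(d)
    dJ_du = 2 * (u - 1.0)
    dR_du = 3 * u**2 + 1
    lam = -dJ_du / dR_du
    return lam * (-1.0)

d0 = 0.5
g_adj = adjoint_gradient(d0)
h = 1e-6
g_fd = (cost(d0 + h) - cost(d0 - h)) / (2 * h)
```

The point of the costate formulation, here as in the paper, is that one extra (adjoint) solve yields the full gradient, regardless of the number of design variables.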
Simulating the dynamic behavior of a vertical axis wind turbine operating in unsteady conditions
NASA Astrophysics Data System (ADS)
Battisti, L.; Benini, E.; Brighenti, A.; Soraperra, G.; Raciti Castelli, M.
2016-09-01
The present work aims at assessing the reliability of a simulation tool capable of computing the unsteady rotational motion and the associated tower oscillations of a variable-speed VAWT immersed in a coherent turbulent wind. As a matter of fact, since the dynamic behaviour of a variable-speed turbine strongly depends on unsteady wind conditions (wind gusts), a steady-state approach cannot accurately capture transient-related phenomena. The simulation platform proposed here is implemented using a lumped-mass approach: the drive train is described by resorting to both the polar inertia and the angular position of the rotating parts, also considering their speed and acceleration, while the rotor aerodynamics are based on steady experimental curves. The ultimate objective of the presented numerical platform is the simulation of transient phenomena, driven by turbulence, occurring during rotor operation, with the aim of supporting the implementation of efficient and robust control algorithms.
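The lumped-mass approach amounts to explicit time integration of J·dω/dt = Q_aero − Q_load. A minimal sketch follows; the parabolic Cp(λ) curve and all numerical values are illustrative assumptions, not the steady experimental curves used in the paper:

```python
def aero_torque(omega, wind, rho=1.225, radius=1.5, height=3.0):
    """Illustrative aerodynamic torque from an assumed parabolic
    Cp(tip-speed ratio) curve peaking at lambda = 3."""
    area = 2 * radius * height
    tsr = omega * radius / max(wind, 0.1)          # tip-speed ratio
    cp = max(0.0, 0.4 - 0.05 * (tsr - 3.0) ** 2)   # power coefficient
    power = 0.5 * rho * area * cp * wind ** 3
    return power / max(omega, 0.1)

def simulate(inertia, load_torque, wind_series, dt=0.01, omega0=6.0):
    """Lumped-mass drive train: J * domega/dt = Q_aero - Q_load,
    integrated with explicit Euler steps."""
    omega, history = omega0, []
    for wind in wind_series:
        domega = (aero_torque(omega, wind) - load_torque) / inertia
        omega = max(omega + domega * dt, 0.0)
        history.append(omega)
    return history

# step gust: 20 s at 8 m/s, then 20 s at 11 m/s
winds = [8.0] * 2000 + [11.0] * 2000
speeds = simulate(inertia=50.0, load_torque=60.0, wind_series=winds)
```

Even this toy model shows why a steady-state treatment is insufficient: the rotor speed responds to the gust over a timescale set by the polar inertia, not instantaneously.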
Airfoil optimization by the one-shot method
NASA Technical Reports Server (NTRS)
Kuruvila, G.; Taasan, Shlomo; Salas, M. D.
1994-01-01
An efficient numerical approach for the design of optimal aerodynamic shapes is presented in this paper. The objective of any optimization problem is to find the optimum of a cost function subject to a certain state equation (governing equation of the flow field) and certain side constraints. As in classical optimal control methods, the present approach introduces a costate variable (Lagrange multiplier) to evaluate the gradient of the cost function. High efficiency in reaching the optimum solution is achieved by using a multigrid technique and updating the shape in a hierarchical manner such that smooth (low-frequency) changes are done separately from high-frequency changes. Thus, the design variables are changed on a grid where their changes produce nonsmooth (high-frequency) perturbations that can be damped efficiently by the multigrid. The cost of solving the optimization problem is approximately two to three times the cost of the equivalent analysis problem.
Clark, M.R.; Gangopadhyay, S.; Hay, L.; Rajagopalan, B.; Wilby, R.
2004-01-01
A number of statistical methods that are used to provide local-scale ensemble forecasts of precipitation and temperature do not contain realistic spatial covariability between neighboring stations or realistic temporal persistence for subsequent forecast lead times. To demonstrate this point, output from a global-scale numerical weather prediction model is used in a stepwise multiple linear regression approach to downscale precipitation and temperature to individual stations located in and around four study basins in the United States. Output from the forecast model is downscaled for lead times up to 14 days. Residuals in the regression equation are modeled stochastically to provide 100 ensemble forecasts. The precipitation and temperature ensembles from this approach have a poor representation of the spatial variability and temporal persistence. The spatial correlations for downscaled output are considerably lower than observed spatial correlations at short forecast lead times (e.g., less than 5 days) when there is high accuracy in the forecasts. At longer forecast lead times, the downscaled spatial correlations are close to zero. Similarly, the observed temporal persistence is only partly present at short forecast lead times. A method is presented for reordering the ensemble output in order to recover the space-time variability in precipitation and temperature fields. In this approach, the ensemble members for a given forecast day are ranked and matched with the rank of precipitation and temperature data from days randomly selected from similar dates in the historical record. The ensembles are then reordered to correspond to the original order of the selection of historical data. Using this approach, the observed intersite correlations, intervariable correlations, and the observed temporal persistence are almost entirely recovered. This reordering methodology also has applications for recovering the space-time variability in modeled streamflow.
© 2004 American Meteorological Society.
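The rank-matching reordering described above can be sketched for a single site and forecast day as follows; `ens` and `hist` are hypothetical ensemble values and historical analogue-day values, chosen only to make the mechanics visible:

```python
def reorder_ensemble(ensemble, historical):
    """Rank-matching reordering (sketch): rearrange the ensemble members
    so that their rank order matches the rank order of a same-length
    sample of historical values, transferring the observed space-time
    rank structure onto the ensemble."""
    if len(ensemble) != len(historical):
        raise ValueError("ensemble and historical sample must match in size")
    sorted_members = sorted(ensemble)
    # indices of the historical sample in increasing order of value
    hist_ranks = sorted(range(len(historical)), key=lambda i: historical[i])
    out = [0.0] * len(ensemble)
    for rank, idx in enumerate(hist_ranks):
        # the member with this rank goes where history had that rank
        out[idx] = sorted_members[rank]
    return out

# toy example: 5 ensemble members, 5 historical analogue days
ens = [3.1, 0.2, 5.7, 1.4, 2.0]
hist = [10.0, 2.0, 7.0, 1.0, 5.0]
shuffled = reorder_ensemble(ens, hist)
```

Applied with the same historical dates at every station and lead time, this reordering recovers intersite and temporal rank correlations while leaving each marginal ensemble unchanged.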
NASA Astrophysics Data System (ADS)
Srinivasan, Vasudevan
Air plasma spray is inherently complex due to the deviation from equilibrium conditions, its three-dimensional nature, the multitude of interrelated (controllable) parameters and (uncontrollable) variables involved, and stochastic variability at different stages. The resultant coatings are complex due to their layered, high-defect-density microstructure. Despite widespread use and commercial success for decades in the earthmoving, automotive, aerospace and power generation industries, plasma spray has not been completely understood, and prime reliance for critical applications such as thermal barrier coatings on gas turbines is yet to be accomplished. This dissertation is aimed at understanding the in-flight particle state of the plasma spray process towards designing coatings and achieving coating reliability with the aid of noncontact in-flight particle and spray stream sensors. Key issues such as the phenomenon of optimum particle injection and the definition of the spray stream using particle state are investigated. A few strategies to modify the microstructure and properties of Yttria Stabilized Zirconia coatings are examined systematically using the framework of process maps. An approach to design a process window based on design-relevant coating properties is presented. Options to control the process for enhanced reproducibility and reliability are examined and the resultant variability is evaluated systematically at the different stages in the process. The 3D variability due to the difference in plasma characteristics has been critically examined by investigating splats collected from the entire spray footprint.
Rawlings, Renata A; Shi, Hang; Yuan, Lo-Hua; Brehm, William; Pop-Busui, Rodica; Nelson, Patrick W
2011-12-01
Several metrics of glucose variability have been proposed to date, but an integrated approach that provides a complete and consistent assessment of glycemic variation is missing. As a consequence, and because of the tedious coding necessary during quantification, most investigators and clinicians have not yet adopted the use of multiple glucose variability metrics to evaluate glycemic variation. We compiled the most extensively used statistical techniques and glucose variability metrics, with adjustable hyper- and hypoglycemic limits and metric parameters, to create a user-friendly Continuous Glucose Monitoring Graphical User Interface for Diabetes Evaluation (CGM-GUIDE©). In addition, we introduce and demonstrate a novel transition density profile that emphasizes the dynamics of transitions between defined glucose states. Our combined dashboard of numerical statistics and graphical plots supports the task of providing an integrated approach to describing glycemic variability. We integrated existing metrics, such as SD, area under the curve, and mean amplitude of glycemic excursion, with novel metrics such as the slopes across critical transitions and the transition density profile to assess the severity and frequency of glucose transitions per day as they move between critical glycemic zones. By presenting the above-mentioned metrics and graphics in a concise aggregate format, CGM-GUIDE provides an easy to use tool to compare quantitative measures of glucose variability. This tool can be used by researchers and clinicians to develop new algorithms of insulin delivery for patients with diabetes and to better explore the link between glucose variability and chronic diabetes complications.
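A minimal sketch of combining a dispersion metric with zone-transition counting, in the spirit of the transition-based metrics described above. The 70/180 mg/dL limits and the summary format are illustrative defaults, not CGM-GUIDE's implementation:

```python
import statistics

def glycemic_zone(glucose, hypo=70.0, hyper=180.0):
    """Classify a glucose reading (mg/dL) into a zone; the limits are
    common clinical defaults and, as in CGM-GUIDE, adjustable."""
    if glucose < hypo:
        return "hypo"
    if glucose > hyper:
        return "hyper"
    return "normal"

def variability_summary(readings):
    """SD of the trace plus counts of transitions between glycemic
    zones -- a simplified nod to the transition density profile."""
    zones = [glycemic_zone(g) for g in readings]
    transitions = {}
    for a, b in zip(zones, zones[1:]):
        if a != b:
            transitions[(a, b)] = transitions.get((a, b), 0) + 1
    return {"sd": statistics.stdev(readings), "transitions": transitions}

# hypothetical CGM trace crossing into hyper- and hypoglycemia
readings = [95, 110, 150, 190, 210, 175, 140, 100, 65, 80, 120]
summary = variability_summary(readings)
```

The full transition density profile would additionally normalize these counts per day and resolve them by the magnitude and slope of each excursion.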
Rawlings, Renata A.; Shi, Hang; Yuan, Lo-Hua; Brehm, William; Pop-Busui, Rodica
2011-01-01
Abstract Background Several metrics of glucose variability have been proposed to date, but an integrated approach that provides a complete and consistent assessment of glycemic variation is missing. As a consequence, and because of the tedious coding necessary during quantification, most investigators and clinicians have not yet adopted the use of multiple glucose variability metrics to evaluate glycemic variation. Methods We compiled the most extensively used statistical techniques and glucose variability metrics, with adjustable hyper- and hypoglycemic limits and metric parameters, to create a user-friendly Continuous Glucose Monitoring Graphical User Interface for Diabetes Evaluation (CGM-GUIDE©). In addition, we introduce and demonstrate a novel transition density profile that emphasizes the dynamics of transitions between defined glucose states. Results Our combined dashboard of numerical statistics and graphical plots supports the task of providing an integrated approach to describing glycemic variability. We integrated existing metrics, such as SD, area under the curve, and mean amplitude of glycemic excursion, with novel metrics such as the slopes across critical transitions and the transition density profile to assess the severity and frequency of glucose transitions per day as they move between critical glycemic zones. Conclusions By presenting the above-mentioned metrics and graphics in a concise aggregate format, CGM-GUIDE provides an easy-to-use tool to compare quantitative measures of glucose variability. This tool can be used by researchers and clinicians to develop new algorithms of insulin delivery for patients with diabetes and to better explore the link between glucose variability and chronic diabetes complications. PMID:21932986
Efficient dual approach to distance metric learning.
Shen, Chunhua; Kim, Junae; Liu, Fayao; Wang, Lei; van den Hengel, Anton
2014-02-01
Distance metric learning is of fundamental interest in machine learning because the employed distance metric can significantly affect the performance of many learning methods. Quadratic Mahalanobis metric learning is a popular approach to the problem, but typically requires solving a semidefinite programming (SDP) problem, which is computationally expensive. The worst-case complexity of solving an SDP problem involving a matrix variable of size D×D with O(D) linear constraints is about O(D^6.5) using interior-point methods, where D is the dimension of the input data. Thus, interior-point methods can only practically solve problems with fewer than a few thousand variables. Because the number of variables is D(D+1)/2, this limits the problems that can practically be solved to around a few hundred dimensions. The complexity of the popular quadratic Mahalanobis metric learning approach thus limits the size of problem to which metric learning can be applied. Here, we propose a significantly more efficient and scalable approach to the metric learning problem based on the Lagrange dual formulation of the problem. The proposed formulation is much simpler to implement, and therefore allows much larger Mahalanobis metric learning problems to be solved. The time complexity of the proposed method is roughly O(D^3), which is significantly lower than that of the SDP approach. Experiments on a variety of data sets demonstrate that the proposed method achieves an accuracy comparable with the state of the art, but is applicable to significantly larger problems. We also show that the proposed method can be applied to solve more general Frobenius norm regularized SDP problems approximately.
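For context, the quadratic Mahalanobis distance being learned, and the variable count that makes the SDP formulation expensive, can be written down directly. A minimal sketch (the helper names are hypothetical, not from the paper):

```python
import numpy as np

def mahalanobis_sq(u, v, M):
    """Squared Mahalanobis distance (u - v)^T M (u - v). M must be
    symmetric positive semidefinite for this to define a (pseudo)metric;
    with M = I it reduces to the squared Euclidean distance."""
    d = np.asarray(u, dtype=float) - np.asarray(v, dtype=float)
    return float(d @ M @ d)

def sdp_variable_count(D):
    """Independent entries of a symmetric D x D matrix: D(D+1)/2, the
    number of scalar variables in the SDP formulation."""
    return D * (D + 1) // 2
```

At D = 1000 this already gives 500500 scalar variables, which is why the O(D^6.5) interior-point cost becomes prohibitive and the O(D^3) dual approach matters.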
Data-Driven Model Uncertainty Estimation in Hydrologic Data Assimilation
NASA Astrophysics Data System (ADS)
Pathiraja, S.; Moradkhani, H.; Marshall, L.; Sharma, A.; Geenens, G.
2018-02-01
The increasing availability of earth observations necessitates mathematical methods to optimally combine such data with hydrologic models. Several algorithms exist for such purposes, under the umbrella of data assimilation (DA). However, DA methods are often applied in a suboptimal fashion for complex real-world problems, due largely to several practical implementation issues. One such issue is error characterization, which is known to be critical for a successful assimilation. Mischaracterized errors lead to suboptimal forecasts, and in the worst case, to degraded estimates even compared to the no assimilation case. Model uncertainty characterization has received little attention relative to other aspects of DA science. Traditional methods rely on subjective, ad hoc tuning factors or parametric distribution assumptions that may not always be applicable. We propose a novel data-driven approach (named SDMU) to model uncertainty characterization for DA studies where (1) the system states are partially observed and (2) minimal prior knowledge of the model error processes is available, except that the errors display state dependence. It includes an approach for estimating the uncertainty in hidden model states, with the end goal of improving predictions of observed variables. The SDMU is therefore suited to DA studies where the observed variables are of primary interest. Its efficacy is demonstrated through a synthetic case study with low-dimensional chaotic dynamics and a real hydrologic experiment for one-day-ahead streamflow forecasting. In both experiments, the proposed method leads to substantial improvements in the hidden states and observed system outputs over a standard method involving perturbation with Gaussian noise.
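The baseline that data-driven uncertainty estimation is compared against, perturbing model forecasts with additive Gaussian noise, can be sketched generically. This is not the authors' code; the `model` callable, the ensemble layout, and `noise_std` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def perturbed_forecast(states, model, noise_std):
    """Propagate each ensemble member through a (hypothetical) `model`
    and add zero-mean Gaussian noise of scale `noise_std`: the standard
    ad hoc treatment of model uncertainty in ensemble data assimilation."""
    return np.array([model(x) + rng.normal(0.0, noise_std, size=x.shape)
                     for x in states])
```

The tuning factor `noise_std` is exactly the kind of subjective choice the SDMU approach aims to replace with state-dependent, data-driven error estimates.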
Characterization of the free-energy landscapes of proteins by NMR-guided metadynamics
Granata, Daniele; Camilloni, Carlo; Vendruscolo, Michele; Laio, Alessandro
2013-01-01
The use of free-energy landscapes rationalizes a wide range of aspects of protein behavior by providing a clear illustration of the different states accessible to these molecules, as well as of their populations and pathways of interconversion. The determination of the free-energy landscapes of proteins by computational methods is, however, very challenging as it requires an extensive sampling of their conformational spaces. We describe here a technique to achieve this goal with relatively limited computational resources by incorporating nuclear magnetic resonance (NMR) chemical shifts as collective variables in metadynamics simulations. As in this approach the chemical shifts are not used as structural restraints, the resulting free-energy landscapes correspond to the force fields used in the simulations. We illustrate this approach in the case of the third Ig-binding domain of protein G from streptococcal bacteria (GB3). Our calculations reveal the existence of a folding intermediate of GB3 with nonnative structural elements. Furthermore, the availability of the free-energy landscape enables the folding mechanism of GB3 to be elucidated by analyzing the conformational ensembles corresponding to the native, intermediate, and unfolded states, as well as the transition states between them. Taken together, these results show that, by incorporating experimental data as collective variables in metadynamics simulations, it is possible to enhance the sampling efficiency by two or more orders of magnitude with respect to standard molecular dynamics simulations, and thus to estimate free-energy differences among the different states of a protein with k_BT accuracy by generating trajectories of just a few microseconds. PMID:23572592
Zhao, Yong Mei; Golden, Aaron; Mar, Jessica C.; Einstein, Francine H.; Greally, John M.
2014-01-01
The mechanism and significance of epigenetic variability in the same cell type between healthy individuals are not clear. Here, we purify human CD34+ hematopoietic stem and progenitor cells (HSPCs) from different individuals and find that there is increased variability of DNA methylation at loci with properties of promoters and enhancers. The variability is especially enriched at candidate enhancers near genes transitioning between silent and expressed states, and encoding proteins with leukocyte differentiation properties. Our findings of increased variability at loci with intermediate DNA methylation values, at candidate “poised” enhancers, and at genes involved in HSPC lineage commitment suggest that CD34+ cell subtype heterogeneity between individuals is a major mechanism for the variability observed. Epigenomic studies performed on cell populations, even when purified, are testing collections of epigenomes, or meta-epigenomes. Our findings show that meta-epigenomic approaches to data analysis can provide insights into cell subpopulation structure. PMID:25327398
Verrot, Lucile; Destouni, Georgia
2015-01-01
Soil moisture influences and is influenced by water, climate, and ecosystem conditions, affecting associated ecosystem services in the landscape. This paper couples snow storage-melting dynamics with an analytical modeling approach to screening basin-scale, long-term soil moisture variability and change in a changing climate. This coupling enables assessment of both spatial differences and temporal changes across a wide range of hydro-climatic conditions. Model application is exemplified for two major Swedish hydrological basins, Norrström and Piteälven. These are located along a steep temperature gradient and have experienced different hydro-climatic changes over the time period of study, 1950-2009. Spatially, average intra-annual variability of soil moisture differs considerably between the basins due to their temperature-related differences in snow dynamics. With regard to temporal change, the long-term average state and intra-annual variability of soil moisture have not changed much, while inter-annual variability has changed considerably in response to hydro-climatic changes experienced so far in each basin.
Extending existing structural identifiability analysis methods to mixed-effects models.
Janzén, David L I; Jirstrand, Mats; Chappell, Michael J; Evans, Neil D
2018-01-01
The concept of structural identifiability for state-space models is expanded to cover mixed-effects state-space models. Two methods applicable for the analytical study of the structural identifiability of mixed-effects models are presented. The two methods are based on previously established techniques for non-mixed-effects models; namely the Taylor series expansion and the input-output form approach. By generating an exhaustive summary, and by assuming an infinite number of subjects, functions of random variables can be derived which in turn determine the distribution of the system's observation function(s). By considering the uniqueness of the analytical statistical moments of the derived functions of the random variables, the structural identifiability of the corresponding mixed-effects model can be determined. The two methods are applied to a set of examples of mixed-effects models to illustrate how they work in practice. Copyright © 2017 Elsevier Inc. All rights reserved.
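The Taylor series method can be illustrated on a toy non-mixed-effects model, where the exhaustive summary is the sequence of output derivatives at t = 0 viewed as functions of the parameters. A sketch with SymPy (the one-compartment model below is a standard textbook example, not one taken from the paper):

```python
import sympy as sp

a, b, x0, t = sp.symbols('a b x0 t', positive=True)

# Toy one-compartment model: dx/dt = -a*x, x(0) = x0, observed y = b*x.
x = x0 * sp.exp(-a * t)
y = b * x

# Exhaustive summary: successive derivatives of the output at t = 0.
summary = [sp.simplify(sp.diff(y, t, k).subs(t, 0)) for k in range(3)]
# summary == [b*x0, -a*b*x0, a**2*b*x0]: the ratio of consecutive
# coefficients recovers a uniquely, while b and x0 enter only through
# the product b*x0 and so are not individually identifiable.
```

In the mixed-effects extension, an analogous summary is built from the statistical moments of such coefficients once the parameters become random variables.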
NASA Astrophysics Data System (ADS)
Grolet, Aurelien; Thouverez, Fabrice
2015-02-01
This paper is devoted to the study of vibration in mechanical systems with geometric nonlinearities. The harmonic balance method is used to derive systems of polynomial equations whose solutions give the frequency components of the possible steady states. Groebner basis methods are used to compute all solutions of these polynomial systems. This approach allows the complete system to be reduced to a single polynomial equation in one variable that drives all solutions of the problem. In addition, in order to decrease the number of variables, we propose to first work on the undamped system and to recover solutions of the damped system using a continuation on the damping parameter. The search for multiple solutions is illustrated on a simple system, where the influence of the number of retained harmonics is studied. Finally, the procedure is applied to a simple cyclic system, and we give a representation of the multiple states versus frequency.
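The reduction of a polynomial system to one univariate equation via a lexicographic Groebner basis can be demonstrated on a toy system of the kind harmonic balance produces. A sketch with SymPy (the equations are illustrative, not taken from the paper):

```python
import sympy as sp

x, y = sp.symbols('x y')

# Two coupled polynomial equations standing in for harmonic balance output.
F = [x**2 + y**2 - 5, x*y - 2]

# A lexicographic Groebner basis triangularizes the system: its last
# element is univariate in y, and all solutions of F then follow from
# its roots by back-substitution.
G = sp.groebner(F, x, y, order='lex')
univariate = G.exprs[-1]           # y**4 - 5*y**2 + 4
roots = sp.solve(univariate, y)    # y in {-2, -1, 1, 2}
```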
Utilizing population variation, vaccination, and systems biology to study human immunology
Tsang, John S.
2016-01-01
The move toward precision medicine has highlighted the importance of understanding biological variability within and across individuals in the human population. In particular, given the prevalent involvement of the immune system in diverse pathologies, an important question is how much and what information about the state of the immune system is required to enable accurate prediction of future health and response to medical interventions. Towards addressing this question, recent studies using vaccination as a model perturbation and systems-biology approaches are beginning to provide a glimpse of how natural population variation together with multiplexed, high-throughput measurement and computational analysis can be used to uncover predictors of immune response quality in humans. Here I discuss recent developments in this emerging field, with emphasis on baseline correlates of vaccination responses, sources of immune-state variability, as well as relevant features of study design, data generation, and computational analysis. PMID:26187853
Socio-ecological Typologies for Understanding Adaptive Capacity of a Region to Natural Disasters
NASA Astrophysics Data System (ADS)
Surendran Nair, S.; Preston, B. L.; King, A. W.; Mei, R.
2015-12-01
It is expected that the frequency and magnitude of extreme climatic events will increase in coming decades, with an anticipated increase in losses from climate hazards. In the Gulf Coast region of the United States, climate hazards and disasters, including hurricanes, droughts, and flooding, are common. However, the capacity to adapt to extreme climatic events varies across the region. This adaptive capacity is linked to the magnitude of the extreme event, the exposed infrastructure, and the socio-economic conditions across the region. This study uses hierarchical clustering to quantitatively integrate regional socioeconomic and biophysical factors and develop socio-ecological typologies (SETs). The biophysical factors include climatic and topographic variables, and the socio-economic variables include human capital, social capital, and man-made resources (infrastructure) of the region. The SET types serve as independent variables in a statistical model of a regional variable of interest. The methodology was applied to the US Gulf States to evaluate the social and biophysical determinants of the regional variation in social vulnerability and economic loss due to climate hazards. The results show that the SETs explain much of the regional variation in social vulnerability, effectively capturing its determinants. In addition, the SETs also explain much of the variability in economic loss to hazards across the region. The approach can thus be used to prioritize adaptation strategies to reduce vulnerability and loss across the region.
Identifying Slow Molecular Motions in Complex Chemical Reactions.
Piccini, GiovanniMaria; Polino, Daniela; Parrinello, Michele
2017-09-07
We have studied the cyclization reaction of deprotonated 4-chloro-1-butanethiol to tetrahydrothiophene by means of well-tempered metadynamics. To properly select the collective variables, we used the recently proposed variational approach to conformational dynamics within the framework of metadynamics. This allowed us to select the appropriate linear combinations from a set of collective variables representing the slow degrees of freedom that best describe the slow modes of the reaction. We performed our calculations at three different temperatures, namely, 300, 350, and 400 K. We show that this choice of collective variables makes the complex free-energy surface of the reaction easy to interpret, through unambiguous identification of the conformers belonging to the reactant and product states that play a fundamental role in the reaction mechanism.
Identification of redundant and synergetic circuits in triplets of electrophysiological data
NASA Astrophysics Data System (ADS)
Erramuzpe, Asier; Ortega, Guillermo J.; Pastor, Jesus; de Sola, Rafael G.; Marinazzo, Daniele; Stramaglia, Sebastiano; Cortes, Jesus M.
2015-12-01
Objective. Neural systems are comprised of interacting units, and relevant information regarding their function or malfunction can be inferred by analyzing the statistical dependencies between the activity of each unit. While correlations and mutual information are commonly used to characterize these dependencies, our objective here is to extend interactions to triplets of variables to better detect and characterize dynamic information transfer. Approach. Our approach relies on the measure of interaction information (II). The sign of II provides information as to the extent to which the interaction of variables in triplets is redundant (R) or synergetic (S). Three variables are said to be redundant when a third variable, say Z, added to a pair of variables (X, Y), diminishes the information shared between X and Y. Similarly, the interaction in the triplet is said to be synergetic when conditioning on Z enhances the information shared between X and Y with respect to the unconditioned state. Here, based on this approach, we calculated the R and S status for triplets of electrophysiological data recorded from drug-resistant patients with mesial temporal lobe epilepsy in order to study the spatial organization and dynamics of R and S close to the epileptogenic zone (the area responsible for seizure propagation). Main results. In terms of spatial organization, our results show that R matched the epileptogenic zone while S was distributed more in the surrounding area. In relation to dynamics, R made the largest contribution to high frequency bands (14-100 Hz), while S was expressed more strongly at lower frequencies (1-7 Hz). Thus, applying II to such clinical data reveals new aspects of epileptogenic structure in terms of the nature (redundancy versus synergy) and dynamics (fast versus slow rhythms) of the interactions. Significance. 
We expect that this methodology, which is robust and simple, can reveal aspects beyond pairwise interactions in networks of interacting units in other settings with multi-recording data sets (and thus not necessarily in epilepsy, the pathology addressed here).
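The interaction information of a triplet can be estimated from samples via entropies of the marginal and joint empirical distributions. A minimal sketch (sign conventions for II differ across the literature; in the convention below, positive values indicate synergy and negative values redundancy, which may be the reverse of the paper's usage):

```python
import math
from collections import Counter

def entropy(samples):
    """Shannon entropy (bits) of the empirical distribution of `samples`."""
    n = len(samples)
    return -sum(c / n * math.log2(c / n) for c in Counter(samples).values())

def interaction_information(xs, ys, zs):
    """II = I(X;Y|Z) - I(X;Y), expanded into joint entropies:
    H(XY) + H(XZ) + H(YZ) - H(X) - H(Y) - H(Z) - H(XYZ)."""
    return (entropy(list(zip(xs, ys))) + entropy(list(zip(xs, zs)))
            + entropy(list(zip(ys, zs))) - entropy(xs) - entropy(ys)
            - entropy(zs) - entropy(list(zip(xs, ys, zs))))
```

For Z = X XOR Y with independent uniform bits, II = +1 bit (purely synergetic); for X = Y = Z, II = -1 bit (purely redundant).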
NASA Astrophysics Data System (ADS)
Golinkoff, Jordan Seth
The accurate estimation of forest attributes at many different spatial scales is a critical problem. Forest landowners may be interested in estimating timber volume, forest biomass, and forest structure to determine their forest's condition and value. Counties and states may be interested in learning about their forests to develop sustainable management plans and policies related to forests, wildlife, and climate change. Countries and consortiums of countries need information about their forests to set global and national targets addressing climate change and deforestation, and to understand the state of their forests at a given point in time. This dissertation approaches these questions from two perspectives. The first perspective uses the process model Biome-BGC, paired with inventory and remote sensing data, to make inferences about a current forest state given known climate and site variables. Using a model of this type, future climate data can also be used to make predictions about future forest states. An example of this work applied to a forest in northern California is presented. The second perspective on estimating forest attributes uses high-resolution aerial imagery paired with light detection and ranging (LiDAR) remote sensing data to develop statistical estimates of forest structure. Two approaches within this perspective are presented: a pixel-based approach and an object-based approach. Both approaches can serve as the platform on which models (either empirical growth-and-yield models or process models) can be run to generate inferences about future forest state and current forest biogeochemical cycling.
Discrete Time McKean–Vlasov Control Problem: A Dynamic Programming Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pham, Huyên, E-mail: pham@math.univ-paris-diderot.fr; Wei, Xiaoli, E-mail: tyswxl@gmail.com
We consider the stochastic optimal control problem of nonlinear mean-field systems in discrete time. We reformulate the problem into a deterministic control problem with marginal distribution as controlled state variable, and prove that dynamic programming principle holds in its general form. We apply our method for solving explicitly the mean-variance portfolio selection and the multivariate linear-quadratic McKean–Vlasov control problem.
The evolution of health care advance planning law and policy.
Sabatino, Charles P
2010-06-01
The legal tools of health care advance planning have substantially changed since their emergence in the mid-1970s. Thirty years of policy development, primarily at the state legislative level addressing surrogate decision making and advance directives, have resulted in a disjointed policy landscape, yet with important points of convergence evolving over time. An understanding of the evolution of advance care planning policy has important implications for policy at both the state and federal levels. This article is a longitudinal statutory and literature review of health care advance planning from its origins to the present. While considerable variability across the states still remains, changes in law and policy over time suggest a gradual paradigm shift from what is described as a "legal transactional approach" to a "communications approach," the most recent extension of which is the emergence of Physician Orders for Life-Sustaining Treatment, or POLST. The communications approach helps translate patients' goals into visible and portable medical orders. States are likely to continue gradually moving away from a legal transactional mode of advance planning toward a communications model, albeit with challenges to authentic and reliable communication that accurately translates patients' wishes into the care they receive. In the meantime, the states and their health care institutions will continue to serve as the primary laboratory for advance care planning policy and practice.
Reverse thrust performance of the QCSEE variable pitch turbofan engine
NASA Technical Reports Server (NTRS)
Samanich, N. E.; Reemsnyder, D. C.; Blodmer, H. E.
1980-01-01
Results of steady-state reverse thrust and forward-to-reverse thrust transient performance tests are presented. The original quiet, clean, short-haul experimental engine four-segment variable fan nozzle was retested in reverse and compared with a continuous, 30 deg half-angle conical exlet. Data indicated that the significantly more stable, higher-pressure-recovery flow with the fixed 30 deg exlet resulted in lower engine vibrations, lower fan blade stress, and approximately a 20 percent improvement in reverse thrust. The objective reverse thrust of 35 percent of takeoff thrust was reached. Thrust response of less than 1.5 sec was achieved for the approach-to-reverse and takeoff-to-reverse thrust transients.
Identifying optimal remotely-sensed variables for ecosystem monitoring in Colorado Plateau drylands
Poitras, Travis; Villarreal, Miguel; Waller, Eric K.; Nauman, Travis; Miller, Mark E.; Duniway, Michael C.
2018-01-01
Water-limited ecosystems often recover slowly following anthropogenic or natural disturbance. Multitemporal remote sensing can be used to monitor ecosystem recovery after disturbance; however, dryland vegetation cover can be challenging to accurately measure due to sparse cover and spectral confusion between soils and non-photosynthetic vegetation. With the goal of optimizing a monitoring approach for identifying both abrupt and gradual vegetation changes, we evaluated the ability of Landsat-derived spectral variables to characterize surface variability of vegetation cover and bare ground across a range of vegetation community types. Using three year composites of Landsat data, we modeled relationships between spectral information and field data collected at monitoring sites near Canyonlands National Park, UT. We also developed multiple regression models to assess improvement over single variables. We found that for all vegetation types, percent cover bare ground could be accurately modeled with single indices that included a combination of red and shortwave infrared bands, while near infrared-based vegetation indices like NDVI worked best for quantifying tree cover and total live vegetation cover in woodlands. We applied four models to characterize the spatial distribution of putative grassland ecological states across our study area, illustrating how this approach can be implemented to guide dryland ecosystem management.
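The near infrared-based index mentioned above, NDVI, is a per-pixel one-liner. A sketch (the Landsat 8 band numbers in the docstring and the rough interpretation notes are contextual assumptions, not values from the study):

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index from near-infrared and red
    surface reflectance (Landsat 8 bands 5 and 4, respectively).
    Bare soil tends toward values near zero, while dense green
    vegetation yields larger positive values (rules of thumb only)."""
    return (nir - red) / (nir + red)
```

In practice the denominator should be guarded against zero (e.g. masked pixels); analogous normalized differences of red and shortwave infrared bands underlie the bare-ground indices the study evaluates.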
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cochran, J.; Bird, L.; Heeter, J.
Many countries -- reflecting very different geographies, markets, and power systems -- are successfully managing high levels of variable renewable energy on the electric grid, including that from wind and solar energy. This study documents the diverse approaches to effective integration of variable renewable energy among six countries -- Australia (South Australia), Denmark, Germany, Ireland, Spain, and the United States (Western region: Colorado and Texas) -- and summarizes policy best practices that energy ministers and other stakeholders can pursue to ensure that electricity markets and power systems can effectively coevolve with increasing penetrations of variable renewable energy. Each country has crafted its own combination of policies, market designs, and system operations to achieve the system reliability and flexibility needed to successfully integrate renewables. Notwithstanding this diversity, the approaches taken by the countries studied all coalesce around five strategic areas: lead public engagement, particularly for new transmission; coordinate and integrate planning; develop rules for market evolution that enable system flexibility; expand access to diverse resources and the geographic footprint of operations; and improve system operations. The ability to maintain a broad ecosystem perspective, to organize and make available the wealth of experiences, and to ensure a clear path from analysis to enactment should be the primary focus going forward.
NASA Astrophysics Data System (ADS)
Yeh, G. T.; Tsai, C. H.
2015-12-01
This paper presents the development of a THMC (thermal-hydrology-mechanics-chemistry) process model in variably saturated media. The governing equations for variably saturated flow and reactive chemical transport are obtained from the mass conservation principle of species transport, supplemented with Darcy's law, the constraint of species concentration, equations of state, and the constitutive K-S-P law (conductivity-degree of saturation-capillary pressure). The thermal transport equation is obtained from the conservation of energy. The geo-mechanical displacement is obtained based on the assumption of equilibrium. Conventionally, these equations have been implicitly coupled via the calculation of secondary variables based on primary variables, and the coupling mechanisms have not been obvious. In this paper, the governing equations are explicitly coupled for all primary variables. The coupling is accomplished via the storage coefficients, transporting velocities, and conduction-dispersion-diffusion coefficient tensor, one set each for every primary variable. With this new system of equations, the coupling mechanisms become clear. Physical interpretations of every term in the coupled equations will be discussed. Examples will be employed to demonstrate the intuitiveness and advantages of this explicit coupling approach. Keywords: Variably Saturated Flow, Thermal Transport, Geo-mechanics, Reactive Transport.
Hanan, Erin J; Tague, Christina; Choate, Janet; Liu, Mingliang; Kolden, Crystal; Adam, Jennifer
2018-03-24
Disturbances such as wildfire, insect outbreaks, and forest clearing, play an important role in regulating carbon, nitrogen, and hydrologic fluxes in terrestrial watersheds. Evaluating how watersheds respond to disturbance requires understanding mechanisms that interact over multiple spatial and temporal scales. Simulation modeling is a powerful tool for bridging these scales; however, model projections are limited by uncertainties in the initial state of plant carbon and nitrogen stores. Watershed models typically use one of two methods to initialize these stores: spin-up to steady state or remote sensing with allometric relationships. Spin-up involves running a model until vegetation reaches equilibrium based on climate. This approach assumes that vegetation across the watershed has reached maturity and is of uniform age, which fails to account for landscape heterogeneity and non-steady-state conditions. By contrast, remote sensing, can provide data for initializing such conditions. However, methods for assimilating remote sensing into model simulations can also be problematic. They often rely on empirical allometric relationships between a single vegetation variable and modeled carbon and nitrogen stores. Because allometric relationships are species- and region-specific, they do not account for the effects of local resource limitation, which can influence carbon allocation (to leaves, stems, roots, etc.). To address this problem, we developed a new initialization approach using the catchment-scale ecohydrologic model RHESSys. The new approach merges the mechanistic stability of spin-up with the spatial fidelity of remote sensing. It uses remote sensing to define spatially explicit targets for one or several vegetation state variables, such as leaf area index, across a watershed. The model then simulates the growth of carbon and nitrogen stores until the defined targets are met for all locations. 
We evaluated this approach in a mixed pine-dominated watershed in central Idaho, and a chaparral-dominated watershed in southern California. In the pine-dominated watershed, model estimates of carbon, nitrogen, and water fluxes varied among methods, while the target-driven method increased correspondence between observed and modeled streamflow. In the chaparral watershed, where vegetation was more homogeneously aged, there were no major differences among methods. Thus, in heterogeneous, disturbance-prone watersheds, the target-driven approach shows potential for improving biogeochemical projections. © 2018 by the Ecological Society of America.
Modeling species occurrence dynamics with multiple states and imperfect detection
MacKenzie, D.I.; Nichols, J.D.; Seamans, M.E.; Gutierrez, R.J.
2009-01-01
Recent extensions of occupancy modeling have focused not only on the distribution of species over space, but also on additional state variables (e.g., reproducing or not, with or without disease organisms, relative abundance categories) that provide extra information about occupied sites. These biologist-driven extensions are characterized by ambiguity in both species presence and correct state classification, caused by imperfect detection. We first show the relationships between independently published approaches to the modeling of multistate occupancy. We then extend the pattern-based modeling to the case of sampling over multiple seasons or years in order to estimate state transition probabilities associated with system dynamics. The methodology and its potential for addressing relevant ecological questions are demonstrated using both maximum likelihood (occupancy and successful reproduction dynamics of California Spotted Owl) and Markov chain Monte Carlo estimation approaches (changes in relative abundance of green frogs in Maryland). Just as multistate capture-recapture modeling has revolutionized the study of individual marked animals, we believe that multistate occupancy modeling will dramatically increase our ability to address interesting questions about ecological processes underlying population-level dynamics. © 2009 by the Ecological Society of America.
Superior arm-movement decoding from cortex with a new, unsupervised-learning algorithm
NASA Astrophysics Data System (ADS)
Makin, Joseph G.; O'Doherty, Joseph E.; Cardoso, Mariana M. B.; Sabes, Philip N.
2018-04-01
Objective. The aim of this work is to improve the state of the art for motor control with a brain-machine interface (BMI). BMIs use neurological recording devices and decoding algorithms to transform brain activity directly into real-time control of a machine, archetypically a robotic arm or a cursor. The standard procedure treats neural activity—vectors of spike counts in small temporal windows—as noisy observations of the kinematic state (position, velocity, acceleration) of the fingertip. Inferring the state from the observations then takes the form of a dynamical filter, typically some variant of the Kalman filter (KF). The KF, however, although fairly robust in practice, is optimal only when the relationships between variables are linear and the noise is Gaussian, conditions usually violated in practice. Approach. To overcome these limitations we introduce a new filter, the ‘recurrent exponential-family harmonium’ (rEFH), that models the spike counts explicitly as Poisson-distributed, and allows for arbitrary nonlinear dynamics and observation models. Furthermore, the model underlying the filter is acquired through unsupervised learning, which allows temporal correlations in spike counts to be explained by latent dynamics that do not necessarily correspond to the kinematic state of the fingertip. Main results. We test the rEFH on offline reconstruction of the kinematics of reaches in the plane. The rEFH outperforms the standard, as well as three other state-of-the-art, decoders, across three monkeys, two different tasks, most kinematic variables, and a range of bin widths, amounts of training data, and numbers of neurons. Significance. Our algorithm establishes a new state of the art for offline decoding of reaches—in particular, for fingertip velocities, the variable used for control in most online decoders.
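The standard decoding pipeline described above (spike counts treated as noisy linear-Gaussian observations of the kinematic state) can be sketched as a plain Kalman filter. The dynamics and observation matrices below are illustrative assumptions for a toy two-dimensional state, not the paper's fitted values:

```python
import numpy as np

def kalman_filter(A, C, Q, R, x0, P0, observations):
    """Causal Kalman filter: estimate the kinematic state x_t from
    observations y_t, assuming x_{t+1} = A x_t + w with w ~ N(0, Q)
    and y_t = C x_t + v with v ~ N(0, R)."""
    x, P = x0, P0
    estimates = []
    for y in observations:
        # predict step: propagate state and covariance through dynamics
        x = A @ x
        P = A @ P @ A.T + Q
        # update step: correct with the innovation, weighted by the Kalman gain
        S = C @ P @ C.T + R
        K = P @ C.T @ np.linalg.inv(S)
        x = x + K @ (y - C @ x)
        P = (np.eye(len(x)) - K @ C) @ P
        estimates.append(x.copy())
    return np.array(estimates)

# toy example: position/velocity state observed through two noisy channels
rng = np.random.default_rng(0)
A = np.array([[1.0, 0.1], [0.0, 1.0]])   # constant-velocity dynamics, dt = 0.1
C = np.eye(2)                            # each channel reflects one state variable
Q = 0.01 * np.eye(2)
R = 0.25 * np.eye(2)
true_x, ys = np.zeros(2), []
for _ in range(50):
    true_x = A @ true_x + rng.multivariate_normal(np.zeros(2), Q)
    ys.append(C @ true_x + rng.multivariate_normal(np.zeros(2), R))
est = kalman_filter(A, C, Q, R, np.zeros(2), np.eye(2), ys)
print(est.shape)
```

The rEFH itself replaces the Gaussian observation model with Poisson spike counts and learns the latent dynamics unsupervised; the sketch above only shows the linear-Gaussian baseline it is compared against.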
Investigation of multidimensional control systems in the state space and wavelet medium
NASA Astrophysics Data System (ADS)
Fedosenkov, D. B.; Simikova, A. A.; Fedosenkov, B. A.
2018-05-01
The notions of “one-dimensional-point” and “multidimensional-point” automatic control systems are introduced. To demonstrate the joint use of approaches based on the concepts of state space and wavelet transforms, a method for optimal control in a state-space medium represented in the form of time-frequency representations (maps) is considered. The computer-aided control system is formed on the basis of the similarity transformation method, which makes it possible to exclude the use of reduced state variable observers. 1D material flow signals formed by primary transducers are converted by means of wavelet transformations into multidimensional concentrated-at-a-point variables in the form of time-frequency distributions of Cohen’s class. An algorithm for synthesizing a stationary controller for feeding processes is given. The conclusion is that forming an optimal control law with time-frequency distributions available improves the quality of transient processes in the feeding subsystems and the mixing unit. The efficiency of the presented method is illustrated by an example of the current registration of material flows in the multi-feeding unit.
Williams, Alwyn; Hunter, Mitchell C.; Kammerer, Melanie; Kane, Daniel A.; Jordan, Nicholas R.; Mortensen, David A.; Smith, Richard G.; Snapp, Sieglinde
2016-01-01
Yield stability is fundamental to global food security in the face of climate change, and better strategies are needed for buffering crop yields against increased weather variability. Regional-scale analyses of yield stability can support robust inferences about buffering strategies for widely-grown staple crops, but have not been accomplished. We present a novel analytical approach, synthesizing 2000–2014 data on weather and soil factors to quantify their impact on county-level maize yield stability in four US states that vary widely in these factors (Illinois, Michigan, Minnesota and Pennsylvania). Yield stability is quantified as both ‘downside risk’ (minimum yield potential, MYP) and ‘volatility’ (temporal yield variability). We show that excessive heat and drought decreased mean yields and yield stability, while higher precipitation increased stability. Soil water holding capacity strongly affected yield volatility in all four states, either directly (Minnesota and Pennsylvania) or indirectly, via its effects on MYP (Illinois and Michigan). We infer that factors contributing to soil water holding capacity can help buffer maize yields against variable weather. Given that soil water holding capacity responds (within limits) to agronomic management, our analysis highlights broadly relevant management strategies for buffering crop yields against climate variability, and informs region-specific strategies. PMID:27560666
Entangled-coherent-state quantum key distribution with entanglement witnessing
NASA Astrophysics Data System (ADS)
Simon, David S.; Jaeger, Gregg; Sergienko, Alexander V.
2014-01-01
An entanglement-witness approach to quantum coherent-state key distribution and a system for its practical implementation are described. In this approach, eavesdropping can be detected by a change in sign of either of two witness functions: an entanglement witness S or an eavesdropping witness W. The effects of loss and eavesdropping on system operation are evaluated as a function of distance. Although the eavesdropping witness W does not directly witness entanglement for the system, its behavior remains related to that of the true entanglement witness S. Furthermore, W is easier to implement experimentally than S. W crosses the axis at a finite distance, in a manner reminiscent of entanglement sudden death. The distance at which this occurs changes measurably when an eavesdropper is present. The distance dependence of the two witnesses due to amplitude reduction and due to increased variance resulting from both ordinary propagation losses and possible eavesdropping activity is provided. Finally, the information content and secure key rate of a continuous variable protocol using this witness approach are given.
An exploratory data analysis of electroencephalograms using the functional boxplots approach
Ngo, Duy; Sun, Ying; Genton, Marc G.; Wu, Jennifer; Srinivasan, Ramesh; Cramer, Steven C.; Ombao, Hernando
2015-01-01
Many model-based methods have been developed over the last several decades for the analysis of electroencephalograms (EEGs) in order to understand electrical neural data. In this work, we propose to use the functional boxplot (FBP) to analyze log periodograms of EEG time series data in the spectral domain. The functional boxplot approach produces a median curve—which is not equivalent to connecting medians obtained from frequency-specific boxplots. In addition, this approach identifies a functional median, summarizes variability, and detects potential outliers. By extending the FBP analysis from one-dimensional curves to surfaces, surface boxplots are also used to explore the variation of the spectral power for the alpha (8–12 Hz) and beta (16–32 Hz) frequency bands across the brain cortical surface. By using rank-based nonparametric tests, we also investigate the stationarity of EEG traces across a single resting-state exam by comparing the spectrum during its early vs. late phases. PMID:26347598
Yang, Limin; Huang, Chengquan; Homer, Collin G.; Wylie, Bruce K.; Coan, Michael
2003-01-01
A wide range of urban ecosystem studies, including urban hydrology, urban climate, land use planning, and resource management, require current and accurate geospatial data of urban impervious surfaces. We developed an approach to quantify urban impervious surfaces as a continuous variable by using multisensor and multisource datasets. Subpixel percent impervious surfaces at 30-m resolution were mapped using a regression tree model. The utility, practicality, and affordability of the proposed method for large-area imperviousness mapping were tested over three spatial scales (Sioux Falls, South Dakota, Richmond, Virginia, and the Chesapeake Bay areas of the United States). Average error of predicted versus actual percent impervious surface ranged from 8.8 to 11.4%, with correlation coefficients from 0.82 to 0.91. The approach is being implemented to map impervious surfaces for the entire United States as one of the major components of the circa 2000 national land cover database.
Technological advances in real-time tracking of cell death
Skommer, Joanna; Darzynkiewicz, Zbigniew; Wlodkowic, Donald
2010-01-01
A cell population can be viewed as a quantum system which, like Schrödinger’s cat, exists as a combination of survival- and death-allowing states. Tracking and understanding cell-to-cell variability in processes of high spatio-temporal complexity such as cell death is at the core of current systems biology approaches. As probabilistic modeling tools attempt to impute information inaccessible by current experimental approaches, advances in technologies for single-cell imaging and omics (proteomics, genomics, metabolomics) should go hand in hand with the computational efforts. Over the last few years we have made exciting technological advances that allow studies of cell death dynamically, in real-time, and with unprecedented accuracy. These approaches are based on innovative fluorescent assays and recombinant proteins, bioelectrical properties of cells, and more recently also on state-of-the-art optical spectroscopy. Here, we review the current status of the most innovative analytical technologies for dynamic tracking of cell death, and address the interdisciplinary promises and future challenges of these methods. PMID:20519963
A genuine nonlinear approach for controller design of a boiler-turbine system.
Yang, Shizhong; Qian, Chunjiang; Du, Haibo
2012-05-01
This paper proposes a genuine nonlinear approach for controller design of a drum-type boiler-turbine system. Based on a second order nonlinear model, a finite-time convergent controller is first designed to drive the states to their setpoints in a finite time. In the case when the state variables are unmeasurable, the system will be regulated using a constant controller or an output feedback controller. An adaptive controller is also designed to stabilize the system since the model parameters may vary under different operating points. The novelty of the proposed controller design approach lies in fully utilizing the system nonlinearities instead of linearizing or canceling them. In addition, the newly developed techniques for finite-time convergent controller are used to guarantee fast convergence of the system. Simulations are conducted under different cases and the results are presented to illustrate the performance of the proposed controllers. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Jokar Arsanjani, Jamal; Helbich, Marco; Kainz, Wolfgang; Darvishi Boloorani, Ali
2013-04-01
This research analyses suburban expansion in the metropolitan area of Tehran, Iran. A hybrid model consisting of a logistic regression model, Markov chain (MC), and cellular automata (CA) was designed to improve the performance of the standard logistic regression model. Environmental and socio-economic variables dealing with urban sprawl were operationalised to create a probability surface of spatiotemporal states of built-up land use for the years 2006, 2016, and 2026. For validation, the model was evaluated by means of relative operating characteristic values for different sets of variables. The approach was calibrated for 2006 by cross-comparing actual and simulated land use maps. The achieved outcomes show a match of 89% between the simulated and actual maps of 2006, which was considered satisfactory to validate the calibration. Thereafter, the calibrated hybrid approach was implemented to predict future land use maps for 2016 and 2026. The simulated maps illustrate a new wave of suburban development in the vicinity of Tehran at the western border of the metropolis during the next decades.
NASA Astrophysics Data System (ADS)
Steenhuis, T. S.; Mendoza, G.; Lyon, S. W.; Gerard Marchant, P.; Walter, M. T.; Schneiderman, E.
2003-04-01
Because the traditional Soil Conservation Service Curve Number (SCS-CN) approach continues to be ubiquitously used in GIS-based water quality models, new application methods are needed that are consistent with variable source area (VSA) hydrological processes in the landscape. We developed within an integrated GIS modeling environment a distributed approach for applying the traditional SCS-CN equation to watersheds where VSA hydrology is a dominant process. Spatial representation of hydrologic processes is important for watershed planning because restricting potentially polluting activities from runoff source areas is fundamental to controlling non-point source pollution. The methodology presented here uses the traditional SCS-CN method to predict runoff volume and spatial extent of saturated areas and uses a topographic index to distribute runoff source areas through watersheds. The resulting distributed CN-VSA method was incorporated in an existing GWLF water quality model and applied to sub-watersheds of the Delaware basin in the Catskill Mountains region of New York State. We found that the distributed CN-VSA approach provided a physically-based method that gives realistic results for watersheds with VSA hydrology.
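The traditional SCS-CN runoff equation that the distributed CN-VSA method builds on is standard; a minimal sketch in metric units, assuming the conventional initial abstraction Ia = 0.2·S:

```python
def scs_cn_runoff(P_mm: float, CN: float) -> float:
    """Runoff depth Q (mm) from storm rainfall P (mm) and curve number CN,
    using the standard SCS-CN equation with initial abstraction Ia = 0.2*S."""
    S = 25400.0 / CN - 254.0      # potential maximum retention (mm)
    Ia = 0.2 * S                  # initial abstraction before runoff begins
    if P_mm <= Ia:
        return 0.0                # all rainfall is abstracted, no runoff
    return (P_mm - Ia) ** 2 / (P_mm - Ia + S)

# e.g. 50 mm of rain on a soil/cover combination with CN = 80
print(round(scs_cn_runoff(50.0, 80.0), 1))  # -> 13.8 mm of runoff
```

The CN-VSA variant described above keeps this relationship for runoff volume but redistributes the source areas across the watershed with a topographic index rather than applying a lumped CN.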
Lipp, Ilona; Murphy, Kevin; Caseras, Xavier; Wise, Richard G
2015-06-01
FMRI BOLD responses to changes in neural activity are influenced by the reactivity of the vasculature. By complementing a task-related BOLD acquisition with a vascular reactivity measure obtained through breath-holding or hypercapnia, this unwanted variance can be statistically reduced in the BOLD responses of interest. Recently, it has been suggested that vascular reactivity can also be estimated using a resting state scan. This study aimed to compare three breath-hold based analysis approaches (block design, sine-cosine regressor and CO2 regressor) and a resting state approach (CO2 regressor) to measure vascular reactivity. We tested BOLD variance explained by the model and repeatability of the measures. Fifteen healthy participants underwent a breath-hold task and a resting state scan with end-tidal CO2 being recorded during both. Vascular reactivity was defined as CO2-related BOLD percent signal change/mmHg change in CO2. Maps and regional vascular reactivity estimates showed high repeatability when the breath-hold task was used. Repeatability and variance explained by the CO2 trace regressor were lower for the resting state data based approach, which resulted in highly variable measures of vascular reactivity. We conclude that breath-hold based vascular reactivity estimations are more repeatable than resting-based estimates, and that there are limitations with replacing breath-hold scans by resting state scans for vascular reactivity assessment. Copyright © 2015. Published by Elsevier Inc.
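The vascular reactivity measure defined above (CO2-related BOLD percent signal change per mmHg change in CO2) amounts to the slope of a regression of the BOLD time course on the end-tidal CO2 trace. A minimal sketch with synthetic data; the signal model and numbers are illustrative, not the study's acquisition parameters:

```python
import numpy as np

def cvr_slope(bold, petco2):
    """Cerebrovascular reactivity: BOLD percent signal change per mmHg CO2,
    estimated as the least-squares slope of %BOLD on end-tidal CO2."""
    pct = 100.0 * (bold - bold.mean()) / bold.mean()      # percent signal change
    X = np.column_stack([petco2, np.ones_like(petco2)])   # slope + intercept
    slope, _ = np.linalg.lstsq(X, pct, rcond=None)[0]
    return slope

# synthetic voxel with a true reactivity of 0.3 %/mmHg plus scanner noise
rng = np.random.default_rng(1)
co2 = 40.0 + 5.0 * np.sin(np.linspace(0, 6 * np.pi, 300))  # breath-hold-like trace
bold = 1000.0 * (1 + 0.003 * (co2 - 40.0)) + rng.normal(0, 0.5, 300)
print(round(cvr_slope(bold, co2), 2))
```

In practice the CO2 regressor is also shifted to account for the transit delay between lung and brain; that step is omitted here for brevity.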
State, Parameter, and Unknown Input Estimation Problems in Active Automotive Safety Applications
NASA Astrophysics Data System (ADS)
Phanomchoeng, Gridsada
A variety of driver assistance systems such as traction control, electronic stability control (ESC), rollover prevention and lane departure avoidance systems are being developed by automotive manufacturers to reduce driver burden, partially automate normal driving operations, and reduce accidents. The effectiveness of these driver assistance systems can be significantly enhanced if the real-time values of several vehicle parameters and state variables, namely tire-road friction coefficient, slip angle, roll angle, and rollover index, can be known. Since there are no inexpensive sensors available to measure these variables, it is necessary to estimate them. However, due to the significant nonlinear dynamics in a vehicle, due to unknown and changing plant parameters, and due to the presence of unknown input disturbances, the design of estimation algorithms for this application is challenging. This dissertation develops a new approach to observer design for nonlinear systems in which the nonlinearity has a globally (or locally) bounded Jacobian. The developed approach utilizes a modified version of the mean value theorem to express the nonlinearity in the estimation error dynamics as a convex combination of known matrices with time varying coefficients. The observer gains are then obtained by solving linear matrix inequalities (LMIs). A number of illustrative examples are presented to show that the developed approach is less conservative and more useful than the standard Lipschitz assumption based nonlinear observer. The developed nonlinear observer is utilized for estimation of slip angle, longitudinal vehicle velocity, and vehicle roll angle. In order to predict and prevent vehicle rollovers in tripped situations, it is necessary to estimate the vertical tire forces in the presence of unknown road disturbance inputs.
An approach to estimate unknown disturbance inputs in nonlinear systems using dynamic model inversion and a modified version of the mean value theorem is presented. The developed theory is used to estimate vertical tire forces and predict tripped rollovers in situations involving road bumps, potholes, and lateral unknown force inputs. To estimate the tire-road friction coefficients at each individual tire of the vehicle, algorithms to estimate longitudinal forces and slip ratios at each tire are proposed. Subsequently, tire-road friction coefficients are obtained using recursive least squares parameter estimators that exploit the relationship between longitudinal force and slip ratio at each tire. The developed approaches are evaluated through simulations with industry standard software, CARSIM, with experimental tests on a Volvo XC90 sport utility vehicle and with experimental tests on a 1/8th scaled vehicle. The simulation and experimental results show that the developed approaches can reliably estimate the vehicle parameters and state variables needed for effective ESC and rollover prevention applications.
Computation of Steady-State Probability Distributions in Stochastic Models of Cellular Networks
Hallen, Mark; Li, Bochong; Tanouchi, Yu; Tan, Cheemeng; West, Mike; You, Lingchong
2011-01-01
Cellular processes are “noisy”. In each cell, concentrations of molecules are subject to random fluctuations due to the small numbers of these molecules and to environmental perturbations. While noise varies with time, it is often measured at steady state, for example by flow cytometry. When interrogating aspects of a cellular network by such steady-state measurements of network components, a key need is to develop efficient methods to simulate and compute these distributions. We describe innovations in stochastic modeling coupled with approaches to this computational challenge: first, an approach to modeling intrinsic noise via solution of the chemical master equation, and second, a convolution technique to account for contributions of extrinsic noise. We show how these techniques can be combined in a streamlined procedure for evaluation of different sources of variability in a biochemical network. Evaluation and illustrations are given in analysis of two well-characterized synthetic gene circuits, as well as a signaling network underlying the mammalian cell cycle entry. PMID:22022252
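For a network small enough to enumerate, the steady-state solution of the chemical master equation is the null vector of the transition-rate matrix. A sketch for the simplest case, a birth-death gene-expression model (production at rate k, first-order degradation at rate gamma); the state-space truncation at N copies is an assumption of the sketch, not of the paper's method:

```python
import numpy as np

def cme_steady_state(k, gamma, N):
    """Stationary distribution of a birth-death chemical master equation:
    0 -> X at rate k, X -> 0 at rate gamma*n, truncated at n = N copies."""
    Q = np.zeros((N + 1, N + 1))          # generator: Q[i, j] = rate i -> j
    for n in range(N + 1):
        if n < N:
            Q[n, n + 1] = k               # birth transition
        if n > 0:
            Q[n, n - 1] = gamma * n       # death transition
        Q[n, n] = -Q[n].sum()             # diagonal makes rows sum to zero
    # the stationary vector p solves p Q = 0; take the eigenvector of Q^T
    # whose eigenvalue is closest to zero, and normalise it to sum to 1
    w, v = np.linalg.eig(Q.T)
    p = np.real(v[:, np.argmin(np.abs(w))])
    return p / p.sum()

# the birth-death steady state is Poisson with mean k/gamma (here 5)
p = cme_steady_state(k=5.0, gamma=1.0, N=60)
print(round(sum(n * p[n] for n in range(len(p))), 2))
```

For larger networks this direct eigen-solve becomes infeasible and the sparse or approximate solvers the abstract alludes to take over; the convolution step for extrinsic noise would then be applied to distributions like `p`.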
Reulen, Holger; Kneib, Thomas
2016-04-01
One important goal in multi-state modelling is to explore information about conditional transition-type-specific hazard rate functions by estimating the effects of explanatory variables. This may be performed using single transition-type-specific models if these covariate effects are assumed to differ across transition types. To investigate whether this assumption holds, or whether one of the effects is equal across several transition types (a cross-transition-type effect), a combined model has to be applied, for instance with the use of a stratified partial likelihood formulation. Here, prior knowledge about the underlying covariate effect mechanisms is often sparse, especially about the ineffectiveness of transition-type-specific or cross-transition-type effects. As a consequence, data-driven variable selection is an important task: a large number of estimable effects has to be taken into account if joint modelling of all transition types is performed. A related but subsequent task is model choice: is an effect satisfactorily estimated assuming linearity, or does the true underlying relationship deviate strongly from linearity? This article introduces component-wise Functional Gradient Descent Boosting (short: boosting) for multi-state models, an approach performing unsupervised variable selection and model choice simultaneously within a single estimation run. We demonstrate that the features and advantages of boosting introduced and illustrated in classical regression scenarios remain present in the transfer to multi-state models. As a consequence, boosting provides an effective means to answer questions about the ineffectiveness and non-linearity of single transition-type-specific or cross-transition-type effects.
Static sampling of dynamic processes - a paradox?
NASA Astrophysics Data System (ADS)
Mälicke, Mirko; Neuper, Malte; Jackisch, Conrad; Hassler, Sibylle; Zehe, Erwin
2017-04-01
Environmental systems monitoring aims, at its core, at the detection of spatio-temporal patterns of processes and system states, which is a pre-requisite for understanding and explaining their baffling heterogeneity. Most observation networks rely on distributed point sampling of states and fluxes of interest, which is combined with proxy-variables from either remote sensing or near surface geophysics. The cardinal question of the appropriate experimental design of such a monitoring network has up to now been answered in many different ways. Suggested approaches range from sampling in a dense regular grid (for instance with the so-called green machine), transects along typical catenas, clustering of several observation sensors in presumed functional units or HRUs, and arrangements of those clusters along presumed lateral flow paths, to, last but not least, a nested, randomized stratified arrangement of sensors or samples. Common to all these approaches is that they provide a rather static spatial sampling, while state variables and their spatial covariance structure dynamically change in time. It is hence of key interest how much of our still incomplete understanding stems from inappropriate sampling and how much needs to be attributed to an inappropriate analysis of spatial data sets. We suggest that it is much more promising to analyze the spatial variability of processes, for instance changes in soil moisture values, than to investigate the spatial variability of soil moisture states themselves. This is because wetting of the soil, reflected in a soil moisture increase, is caused by a totally different meteorological driver - rainfall - than drying of the soil. We hence propose that the rising and the falling limbs of soil moisture time series belong essentially to different ensembles, as they are influenced by different drivers. Positive and negative temporal changes in soil moisture need, hence, to be analyzed separately. We test this idea using the CAOS data set as a benchmark.
Specifically, we expect the covariance structure of the positive temporal changes of soil moisture to be dominated by the spatial structure of rain- and through-fall and saturated hydraulic conductivity. The covariance of temporally decreasing soil moisture during radiation-driven conditions is expected to be dominated by the spatial structure of retention properties and plant transpiration. An analysis of soil moisture changes has, furthermore, the advantage that these changes are free from systematic measurement errors.
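Separating the rising and falling limbs of a soil moisture series before any covariance analysis, as proposed above, is a simple masking operation on the temporal differences. A minimal sketch with a synthetic series (the values are illustrative, not CAOS data):

```python
import numpy as np

def split_limbs(theta):
    """Split a soil moisture time series into positive (wetting) and
    negative (drying) temporal changes, to be analysed as separate ensembles."""
    d = np.diff(theta)
    wetting = d[d > 0]   # increments, driven by rain- and throughfall
    drying = d[d < 0]    # decrements, driven by radiation and transpiration
    return wetting, drying

# synthetic series: two sharp wetting events followed by slow drying
theta = np.array([0.20, 0.35, 0.33, 0.31, 0.30, 0.42, 0.40, 0.38, 0.37])
wet, dry = split_limbs(theta)
print(len(wet), len(dry))
```

The two resulting ensembles can then be fed separately into any variogram or covariance estimator, which is the point of the proposed analysis.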
To Perceive or Not Perceive: The Role of Gamma-band Activity in Signaling Object Percepts
Castelhano, João; Rebola, José; Leitão, Bruno; Rodriguez, Eugenio; Castelo-Branco, Miguel
2013-01-01
The relation of gamma-band synchrony to holistic perception, as regards the effects of sensory processing, high-level perceptual gestalt formation, motor planning, and response, is still controversial. To provide a more direct link to emergent perceptual states we have used holistic EEG/ERP paradigms where the moment of perceptual “discovery” of a global pattern was variable. Using a rapid visual presentation of short-lived Mooney objects we found an increase of gamma-band activity locked to perceptual events. Additional experiments using dynamic Mooney stimuli showed that gamma activity increases well before the report of an emergent holistic percept. To confirm these findings in a data-driven manner we further used a support vector machine classification approach to distinguish between perceptual vs. non-perceptual states, based on time-frequency features. Sensitivity, specificity and accuracy were all above 95%. Modulations in the 30–75 Hz range were larger for perceptual states. Interestingly, phase synchrony was also larger for perceptual states in high frequency bands. By focusing on global gestalt mechanisms instead of local processing we conclude that gamma-band activity and synchrony provide a signature of holistic perceptual states of variable onset, which are separable from sensory and motor processing. PMID:23785494
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelson, Paul
By “Quantitative Empirical Analysis” (QEA) is intended the use of statistical methods to infer, from data that often tend to be of a historical nature, the characteristics of states that correlate with some designated dependent variable (e.g. proliferation of nuclear weapons). QEA is a well-established approach in the social sciences, but is not notably well-known among physical scientists, who tend to think of the social sciences as inherently qualitative. This article attempts to provide a snapshot of research, most of which has evolved over the past decade, involving the application of QEA to issues in which the dependent variable of interest is intended as some measure of nuclear proliferation. Standard practices in QEA are described, especially as they relate to data collection. The QEA approach is compared and contrasted to other quantitative approaches to studying proliferation-related issues, including a “figure of merit” approach that has largely been developed within the DOE complex, and two distinct methodologies termed in a recent US National Academy of Sciences study as “case by case” and “predefined framework.” Sample results from QEA applied to proliferation are indicated, as are doubts about such quantitative approaches. A simplistic decision-theoretic model of the optimal time for the international community to intervene in a possible proliferation scenario is used to illustrate the possibility of synergies between different approaches.
Benefit of Preemptive Pharmacogenetic Information on Clinical Outcome.
Roden, Dan M; Van Driest, Sara L; Mosley, Jonathan D; Wells, Quinn S; Robinson, Jamie R; Denny, Joshua C; Peterson, Josh F
2018-05-01
The development of new knowledge around the genetic determinants of variable drug action has naturally raised the question of how this new knowledge can be used to improve the outcome of drug therapy. Two broad approaches have been taken: a point-of-care approach in which genotyping for specific variant(s) is undertaken at the time of drug prescription, and a preemptive approach in which multiple genetic variants are typed in an individual patient and the information archived for later use when a drug with a "pharmacogenetic story" is prescribed. This review addresses the current state of implementation, the rationale for these approaches, and barriers that must be overcome. Benefits to pharmacogenetic testing are only now being defined and will be discussed. © 2018 American Society for Clinical Pharmacology and Therapeutics.
Low-thrust trajectory analysis for the geosynchronous mission
NASA Technical Reports Server (NTRS)
Jasper, T. P.
1973-01-01
Methodology employed in development of a computer program designed to analyze optimal low-thrust trajectories is described, and application of the program to a Solar Electric Propulsion Stage (SEPS) geosynchronous mission is discussed. To avoid the zero inclination and eccentricity singularities which plague many small-force perturbation techniques, a special set of state variables (equinoctial) is used. Adjoint equations are derived for the minimum time problem and are also free from the singularities. Solutions to the state and adjoint equations are obtained by both orbit averaging and precision numerical integration; an evaluation of these approaches is made.
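The singularity-free equinoctial elements mentioned above replace eccentricity and inclination with components that remain well defined at e = 0 and i = 0. A sketch of the standard conversion from classical Keplerian elements (the function name and the sample orbit are illustrative):

```python
import math

def keplerian_to_equinoctial(a, e, i, raan, argp, M):
    """Convert classical orbital elements (semi-major axis a, eccentricity e,
    inclination i, RAAN, argument of perigee, mean anomaly M; angles in rad)
    to equinoctial elements, which avoid the e = 0 and i = 0 singularities."""
    h = e * math.sin(argp + raan)        # eccentricity vector components
    k = e * math.cos(argp + raan)
    p = math.tan(i / 2.0) * math.sin(raan)   # inclination vector components
    q = math.tan(i / 2.0) * math.cos(raan)
    lam = M + argp + raan                # mean longitude
    return a, h, k, p, q, lam

# a circular, equatorial geosynchronous orbit (e = 0, i = 0) poses no problem here,
# whereas argp and raan individually would be undefined
print(keplerian_to_equinoctial(42164.0, 0.0, 0.0, 0.0, 0.0, 1.0))
```

Because only the sums argp + raan and M + argp + raan appear, the individually undefined angles of circular or equatorial orbits never enter the perturbation equations, which is why these elements suit low-thrust trajectory work.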
Kemp, Andrew H; López, Santiago Rodríguez; Passos, Valeria M A; Bittencourt, Marcio S; Dantas, Eduardo M; Mill, José G; Ribeiro, Antonio L P; Thayer, Julian F; Bensenor, Isabela M; Lotufo, Paulo A
2016-05-01
Research has linked high-frequency heart rate variability (HF-HRV) to cognitive function. The present study adopts a modern path modelling approach to understand potential causal pathways that may underpin this relationship. Here we examine the association between resting-state HF-HRV and executive function in a large sample of civil servants from Brazil (N=8114) recruited for the Brazilian Longitudinal Study of Adult Health (ELSA-Brasil). HF-HRV was calculated from 10-min resting-state electrocardiograms. Executive function was assessed using the trail-making test (version B). Insulin resistance (a marker of type 2 diabetes mellitus) and carotid intima-media thickness (subclinical atherosclerosis) mediated the relationship between HRV and executive function seriatim. A limitation of the present study is its cross-sectional design; therefore, conclusions must be confirmed in longitudinal studies. Nevertheless, findings support the possibility that HRV provides a 'spark' that initiates a cascade of adverse downstream effects that subsequently leads to cognitive impairment. Copyright © 2016 Elsevier B.V. All rights reserved.
Maximum-Entropy Inference with a Programmable Annealer
Chancellor, Nicholas; Szoke, Szilard; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A.
2016-01-01
Optimisation problems typically involve finding the ground state (i.e. the minimum energy configuration) of a cost function with respect to many variables. When the variables are corrupted by noise, the ground state maximises the likelihood that the solution is correct. The maximum entropy solution, on the other hand, takes the form of a Boltzmann distribution over the ground and excited states of the cost function to correct for noise. Here we use a programmable annealer for the information decoding problem, which we simulate as a random Ising model in a field. We show experimentally that finite temperature maximum entropy decoding can give slightly better bit-error rates than the maximum likelihood approach, confirming that useful information can be extracted from the excited states of the annealer. Furthermore we introduce a bit-by-bit analytical method which is agnostic to the specific application and use it to show that the annealer samples from a highly Boltzmann-like distribution. Machines of this kind are therefore candidates for use in a variety of machine learning applications which exploit maximum entropy inference, including language processing and image recognition. PMID:26936311
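The contrast between maximum-likelihood and maximum-entropy decoding can be sketched on a toy Ising model small enough for exact enumeration. The couplings, field values, and decoding rule below are invented for illustration only and are unrelated to the annealer experiments in the paper:

```python
import itertools
import math

import numpy as np

def boltzmann_marginals(J, hfield, beta):
    """Exact finite-temperature spin expectations <s_i> for a small
    Ising model with E(s) = -0.5 * s.J.s - h.s, by brute-force
    enumeration of all 2^n configurations."""
    n = len(hfield)
    Z = 0.0
    mag = np.zeros(n)
    for s in itertools.product([-1, 1], repeat=n):
        s = np.array(s, dtype=float)
        E = -0.5 * s @ J @ s - hfield @ s
        w = math.exp(-beta * E)
        Z += w
        mag += w * s
    return mag / Z

# toy decoding problem: hfield carries the noisy received bits and
# J encodes ferromagnetic correlations along a chain
rng = np.random.default_rng(0)
n = 6
J = np.zeros((n, n))
for i in range(n - 1):
    J[i, i + 1] = J[i + 1, i] = 1.0
hfield = rng.normal(0.5, 1.0, n)   # noisy observations of an all-(+1) word

# maximum-entropy (finite-temperature) decoding: sign of each marginal
decoded = np.sign(boltzmann_marginals(J, hfield, beta=1.0))
print(decoded)
```

Taking the sign of each finite-temperature marginal averages over excited states, which is the maximum-entropy analogue of picking the single ground-state configuration.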
Monitoring for the management of disease risk in animal translocation programmes
Nichols, James D.; Hollmen, Tuula E.; Grand, James B.
2017-01-01
Monitoring is best viewed as a component of some larger programme focused on science or conservation. The value of monitoring is determined by the extent to which it informs the parent process. Animal translocation programmes are typically designed to augment or establish viable animal populations without changing the local community in any detrimental way. Such programmes seek to minimize disease risk to local wild animals, to translocated animals, and in some cases to humans. Disease monitoring can inform translocation decisions by (1) providing information for state-dependent decisions, (2) assessing progress towards programme objectives, and (3) permitting learning in order to make better decisions in the future. Here we discuss specific decisions that can be informed by both pre-release and post-release disease monitoring programmes. We specify state variables and vital rates needed to inform these decisions. We then discuss monitoring data and analytic methods that can be used to estimate these state variables and vital rates. Our discussion is necessarily general, but hopefully provides a basis for tailoring disease monitoring approaches to specific translocation programmes.
Comparison of several maneuvering target tracking models
NASA Astrophysics Data System (ADS)
McIntyre, Gregory A.; Hintz, Kenneth J.
1998-07-01
The tracking of maneuvering targets is complicated by the fact that acceleration is not directly observable or measurable. Additionally, acceleration can be induced by a variety of sources including human input, autonomous guidance, or atmospheric disturbances. The approaches to tracking maneuvering targets can be divided into two categories, both of which assume that the maneuver input command is unknown. One approach is to model the maneuver as a random process. The other approach assumes that the maneuver is not random and that it is either detected or estimated in real time. The random process models generally assume one of two statistical properties, either white noise or autocorrelated noise. The multiple-model approach is generally used with the white noise model, while a zero-mean, exponentially correlated acceleration approach is used with the autocorrelated noise model. The nonrandom approach uses maneuver detection to correct the state estimate or a variable dimension filter to augment the state estimate with an extra state component during a detected maneuver. Another issue in the tracking of maneuvering targets is whether to implement the Kalman filter in polar or Cartesian coordinates. This paper examines and compares several exponentially correlated acceleration approaches in both polar and Cartesian coordinates for accuracy and computational complexity. They include the Singer model in both polar and Cartesian coordinates, the Singer model in polar coordinates converted to Cartesian coordinates, Helferty's third-order rational approximation of the Singer model, and the Bar-Shalom and Fortmann model. This paper shows that these models all provide very accurate position estimates with only minor differences in velocity estimates, and compares the computational complexity of the models.
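The exponentially correlated (Singer) acceleration model can be sketched with its standard one-axis state transition matrix. The update interval and manoeuvre time constant below are illustrative values, not parameters from the paper:

```python
import numpy as np

def singer_F(T, alpha):
    """Singer-model state transition matrix for one axis,
    state = [position, velocity, acceleration].

    The acceleration is a zero-mean, exponentially correlated random
    process with correlation (manoeuvre) time constant 1/alpha; as
    alpha -> 0 this reduces to the constant-acceleration model."""
    eaT = np.exp(-alpha * T)
    return np.array([
        [1.0, T, (alpha * T - 1.0 + eaT) / alpha**2],
        [0.0, 1.0, (1.0 - eaT) / alpha],
        [0.0, 0.0, eaT],
    ])

T, alpha = 0.1, 1.0 / 20.0            # 0.1 s update, 20 s manoeuvre time
F = singer_F(T, alpha)
x = np.array([1000.0, 50.0, 2.0])     # pos [m], vel [m/s], acc [m/s^2]
x_next = F @ x                        # one prediction step
print(x_next)
```

In a full tracker this F would serve as the prediction matrix of a Kalman filter, paired with the matching Singer process-noise covariance.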
Huebner, Thomas; Goernig, Matthias; Schuepbach, Michael; Sanz, Ernst; Pilgram, Roland; Seeck, Andrea; Voss, Andreas
2010-01-01
Background: Electrocardiographic methods still provide the bulk of cardiovascular diagnostics. Cardiac ischemia is associated with typical alterations in cardiac biosignals that have to be measured, analyzed by mathematical algorithms and presented for further clinical diagnostics. The fast growing fields of biomedical engineering and applied sciences are intensely focused on generating new approaches to cardiac biosignal analysis for diagnosis and risk stratification in myocardial ischemia. Objectives: To present and review the state of the art in and new approaches to electrocardiologic methods for non-invasive detection and risk stratification in coronary artery disease (CAD) and myocardial ischemia; secondarily, to explore the future perspectives of these methods. Methods: In follow-up to the Expert Discussion at the 2008 Workshop on "Biosignal Analysis" of the German Society of Biomedical Engineering in Potsdam, Germany, we comprehensively searched the pertinent literature and databases and compiled the results into this review. Then, we categorized the state-of-the-art methods and selected new approaches based on their applications in detection and risk stratification of myocardial ischemia. Finally, we compared the pros and cons of the methods and explored their future potentials for cardiology. Results: Resting ECG, particularly suited for detecting ST-elevation myocardial infarctions, and exercise ECG, for the diagnosis of stable CAD, are state-of-the-art methods. New exercise-free methods for detecting stable CAD include cardiogoniometry (CGM); methods for detecting acute coronary syndrome without ST elevation are Body Surface Potential Mapping, functional imaging and CGM. Heart rate variability and blood pressure variability analyses, microvolt T-wave alternans and signal-averaged ECG mainly serve in detecting and stratifying the risk for lethal arrhythmias in patients with myocardial ischemia or previous myocardial infarctions. 
Telemedicine and ambient-assisted living support the electrocardiological monitoring of at-risk patients. Conclusions: There are many promising methods for the exercise-free, non-invasive detection of CAD and myocardial ischemia in the stable and acute phases. In the coming years, these new methods will help enhance state-of-the-art procedures in routine diagnostics. The future can expect that equally novel methods for risk stratification and telemedicine will transition into clinical routine. PMID:21063467
Bassani, Diego G.; Corsi, Daniel J.; Gaffey, Michelle F.; Barros, Aluisio J. D.
2014-01-01
Background Worse health outcomes including higher morbidity and mortality are most often observed among the poorest fractions of a population. In this paper we present and validate national, regional and state-level distributions of national wealth index scores, for urban and rural populations, derived from household asset data collected in six survey rounds in India between 1992–3 and 2007–8. These new indices and their sub-national distributions allow for comparative analyses of a standardized measure of wealth across time and at various levels of population aggregation in India. Methods Indices were derived through principal components analysis (PCA) performed using standardized variables from a correlation matrix to minimize differences in variance. Valid and simple indices were constructed with the minimum number of assets needed to produce scores with enough variability to allow definition of unique decile cut-off points in each urban and rural area of all states. Results For all indices, the first PCA components explained between 36% and 43% of the variance in household assets. Using sub-national distributions of national wealth index scores, mean height-for-age z-scores increased from the poorest to the richest wealth quintiles for all surveys, and stunting prevalence was higher among the poorest and lower among the wealthiest. Urban and rural decile cut-off values for India, for the six regions and for the 24 major states revealed large variability in wealth by geographical area and level, and rural wealth score gaps exceeded those observed in urban areas. Conclusions The large variability in sub-national distributions of national wealth index scores indicates the importance of accounting for such variation when constructing wealth indices and deriving score distribution cut-off points. 
Such an approach allows for proper within-sample economic classification, resulting in scores that are valid indicators of wealth and correlate well with health outcomes, and enables wealth-related analyses at whichever geographical area and level may be most informative for policy-making processes. PMID:25356667
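The correlation-matrix PCA used to construct such wealth indices can be sketched in a few lines. The synthetic household-asset data and the single latent wealth factor below are assumptions for illustration, not the DHS/NFHS data used in the paper:

```python
import numpy as np

def wealth_index(assets):
    """First-principal-component wealth scores from a household x asset
    0/1 ownership matrix, using standardized variables (equivalent to
    PCA on the correlation matrix). The sign of PC1 is arbitrary."""
    Z = (assets - assets.mean(axis=0)) / assets.std(axis=0)
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    scores = Z @ Vt[0]                 # projection on the first component
    explained = s[0]**2 / np.sum(s**2) # share of variance explained
    return scores, explained

rng = np.random.default_rng(42)
# synthetic asset ownership driven by one latent wealth factor
latent = rng.normal(size=500)
assets = (latent[:, None] + rng.normal(size=(500, 8)) > 0).astype(float)

scores, explained = wealth_index(assets)
# decile cut-off points of the score distribution, as in the paper
deciles = np.percentile(scores, np.arange(10, 100, 10))
print(round(explained, 3), len(deciles))
```

Sub-national distributions would then be obtained by computing such cut points separately for each urban/rural area of each state rather than from the pooled national scores.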
Detection of orbital angular momentum using a photonic integrated circuit.
Rui, Guanghao; Gu, Bing; Cui, Yiping; Zhan, Qiwen
2016-06-20
The orbital angular momentum (OAM) state of photons offers an attractive additional degree of freedom that has found a variety of applications. Measurement of the OAM state, a critical task in these applications, demands photonic integrated devices for improved fidelity, miniaturization, and reconfiguration. Here we report the design of a silicon-integrated OAM receiver that is capable of detecting distinct and variable OAM states. Furthermore, the reconfiguration capability of the detector is achieved by applying voltage to the GeSe film to form gratings with alternate states. The resonant wavelength for an arbitrary OAM state is demonstrated to be tunable in a quasi-linear manner through adjustment of the duty cycle of the gratings. This work provides a viable approach to the realization of a compact integrated OAM detection device with enhanced functionality that may find important applications in optical communications and information processing with OAM states.
NASA Astrophysics Data System (ADS)
Adesso, Gerardo; Giampaolo, Salvatore M.; Illuminati, Fabrizio
2007-10-01
We present a geometric approach to the characterization of separability and entanglement in pure Gaussian states of an arbitrary number of modes. The analysis is performed adapting to continuous variables a formalism based on single subsystem unitary transformations that has been recently introduced to characterize separability and entanglement in pure states of qubits and qutrits [S. M. Giampaolo and F. Illuminati, Phys. Rev. A 76, 042301 (2007)]. In analogy with the finite-dimensional case, we demonstrate that the 1×M bipartite entanglement of a multimode pure Gaussian state can be quantified by the minimum squared Euclidean distance between the state itself and the set of states obtained by transforming it via suitable local symplectic (unitary) operations. This minimum distance, corresponding to a uniquely determined extremal local operation, defines an entanglement monotone equivalent to the entropy of entanglement, and amenable to direct experimental measurement with linear optical schemes.
Lindqvist, R
2006-07-01
Turbidity methods offer possibilities for generating data required for addressing microorganism variability in risk modeling given that the results of these methods correspond to those of viable count methods. The objectives of this study were to identify the best approach for determining growth parameters based on turbidity data and use of a Bioscreen instrument and to characterize variability in growth parameters of 34 Staphylococcus aureus strains of different biotypes isolated from broiler carcasses. Growth parameters were estimated by fitting primary growth models to turbidity growth curves or to detection times of serially diluted cultures either directly or by using an analysis of variance (ANOVA) approach. The maximum specific growth rates in chicken broth at 17 degrees C estimated by time to detection methods were in good agreement with viable count estimates, whereas growth models (exponential and Richards) underestimated growth rates. Time to detection methods were selected for strain characterization. The variation of growth parameters among strains was best described by either the logistic or lognormal distribution, but definitive conclusions require a larger data set. The distribution of the physiological state parameter ranged from 0.01 to 0.92 and was not significantly different from a normal distribution. Strain variability was important, and the coefficient of variation of growth parameters was up to six times larger among strains than within strains. It is suggested to apply a time to detection (ANOVA) approach using turbidity measurements for convenient and accurate estimation of growth parameters. The results emphasize the need to consider implications of strain variability for predictive modeling and risk assessment.
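The time-to-detection idea can be illustrated with a small sketch: for serially diluted cultures growing exponentially, detection time increases linearly with dilution step, and the maximum specific growth rate follows from the slope. All numbers below are invented for illustration and do not reproduce the study's S. aureus data:

```python
import numpy as np

mu_true = 0.35           # h^-1, "true" maximum specific growth rate
N_det = 1e7              # turbidity detection threshold (cells/ml)
N0 = 1e5                 # undiluted inoculum (cells/ml)
D = 10.0                 # dilution factor per step

# exponential growth N(t) = N0 * exp(mu * t) implies the detection
# time grows linearly with dilution step k:
#   t_d(k) = (ln N_det - ln N0 + k * ln D) / mu
k = np.arange(6)
t_d = (np.log(N_det) - np.log(N0) + k * np.log(D)) / mu_true
t_d += np.random.default_rng(1).normal(0, 0.05, k.size)  # measurement noise

# fit detection time against dilution step; slope = ln(D) / mu
slope, intercept = np.polyfit(k, t_d, 1)
mu_est = np.log(D) / slope
print(round(mu_est, 3))
```

The same regression of detection time on log initial density underlies the time-to-detection estimates that the study found to agree well with viable counts.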
NASA Astrophysics Data System (ADS)
Dodson, J. B.; Taylor, P. C.
2016-12-01
The diurnal cycle of convection (CDC) greatly influences the water, radiative, and energy budgets in convectively active regions. For example, previous research on the Amazonian CDC has identified significant monthly covariability between the satellite-observed radiative and precipitation diurnal cycles and multiple reanalysis-derived atmospheric state variables (ASVs) representing convective instability. However, disagreements between retrospective analysis products (reanalyses) over monthly ASV anomalies create significant uncertainty in the resulting covariability. Satellite observations of convective clouds can be used to characterize monthly anomalies in convective activity. CloudSat observes multiple properties of both deep convective cores and the associated anvils, and so is useful as an alternative to the use of reanalyses. CloudSat cannot observe the full diurnal cycle, but it can detect differences between daytime and nighttime convection. Initial efforts to use CloudSat data to characterize convective activity showed that the results are highly dependent on the choice of variable used to characterize the cloud. This is caused by a series of inverse relationships between convective frequency, cloud top height, radar reflectivity vertical profile, and other variables. A single, multi-variable index for convective activity based on CloudSat data may be useful to clarify the results. Principal component analysis (PCA) provides a method to create a multivariable index, where the first principal component (PC1) corresponds with convective instability. The time series of PC1 can then be used as a proxy for monthly variability in convective activity. 
The primary challenge presented involves determining the utility of PCA for creating a robust index for convective activity that accounts for the complex relationships of multiple convective cloud variables, and yields information about the interactions between convection, the convective environment, and radiation beyond the previous single-variable approaches. The choice of variables used to calculate PC1 may influence any results based on PC1, so it is necessary to test the sensitivity of the results to different variable combinations.
Automated EEG sleep staging in the term-age baby using a generative modelling approach
NASA Astrophysics Data System (ADS)
Pillay, Kirubin; Dereymaeker, Anneleen; Jansen, Katrien; Naulaers, Gunnar; Van Huffel, Sabine; De Vos, Maarten
2018-06-01
Objective. We develop a method for automated four-state sleep classification of preterm and term-born babies at term age of 38-40 weeks postmenstrual age (the age since the last menstrual cycle of the mother) using multichannel electroencephalogram (EEG) recordings. At this critical age, EEG differentiates from the broader quiet sleep (QS) and active sleep (AS) stages into four more complex states, and the quality and timing of this differentiation is indicative of the level of brain development. However, existing methods for automated sleep classification remain focussed only on QS and AS classification. Approach. EEG features were calculated from 16 EEG recordings in 30 s epochs, and personalized feature scaling was used to correct for some of the inter-recording variability by standardizing each recording’s feature data using its mean and standard deviation. Hidden Markov models (HMMs) and Gaussian mixture models (GMMs) were trained, with the HMM incorporating knowledge of the sleep state transition probabilities. Performance of the GMM and HMM (with and without scaling) was compared, and Cohen’s kappa agreement calculated between the estimates and clinicians’ visual labels. Main results. For four-state classification, the HMM proved superior to the GMM. With the inclusion of personalized feature scaling, mean kappa (±standard deviation) was 0.62 (±0.16) compared to the GMM value of 0.55 (±0.15). Without feature scaling, kappas for the HMM and GMM dropped to 0.56 (±0.18) and 0.51 (±0.15), respectively. Significance. This is the first study to present a successful method for the automated staging of four states in term-age sleep using multichannel EEG. Results suggested a benefit in incorporating transition information using an HMM, and in correcting for inter-recording variability through personalized feature scaling. 
Determining the timing and quality of these states are indicative of developmental delays in both preterm and term-born babies that may lead to learning problems by school age.
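The two ingredients highlighted in this abstract, personalized feature scaling and HMM decoding with transition probabilities, can be sketched with a hand-written Viterbi pass. The toy two-state example and its probabilities are invented; the actual study used four states and trained GMM emissions:

```python
import numpy as np

def personalized_scale(X):
    """Standardize one recording's feature matrix by its own mean and
    standard deviation, absorbing inter-recording variability."""
    return (X - X.mean(axis=0)) / X.std(axis=0)

def viterbi(log_emit, log_A, log_pi):
    """Most likely state path given per-epoch emission log-likelihoods
    (epochs x states), a log transition matrix, and a log prior."""
    T, S = log_emit.shape
    delta = log_pi + log_emit[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        cand = delta[:, None] + log_A      # cand[i, j]: from i into j
        back[t] = cand.argmax(axis=0)
        delta = cand.max(axis=0) + log_emit[t]
    path = np.zeros(T, dtype=int)
    path[-1] = delta.argmax()
    for t in range(T - 2, -1, -1):
        path[t] = back[t + 1, path[t + 1]]
    return path

# toy 2-state example (e.g. quiet vs active sleep), sticky transitions
log_A = np.log(np.array([[0.9, 0.1], [0.1, 0.9]]))
log_pi = np.log(np.array([0.5, 0.5]))
log_emit = np.log(np.array([[0.9, 0.1], [0.8, 0.2], [0.3, 0.7],
                            [0.2, 0.8], [0.3, 0.7]]))
print(viterbi(log_emit, log_A, log_pi))
```

The sticky transition matrix is what lets the HMM smooth over isolated epochs whose emissions weakly favour the wrong state, the benefit over a per-epoch GMM decision.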
Finite element structural redesign by large admissible perturbations
NASA Technical Reports Server (NTRS)
Bernitsas, Michael M.; Beyko, E.; Rim, C. W.; Alzahabi, B.
1991-01-01
In structural redesign, two structural states are involved; the baseline (known) State S1 with unacceptable performance, and the objective (unknown) State S2 with given performance specifications. The difference between the two states in performance and design variables may be as high as 100 percent or more depending on the scale of the structure. A Perturbation Approach to Redesign (PAR) is presented to relate any two structural states S1 and S2 that are modeled by the same finite element model and represented by different values of the design variables. General perturbation equations are derived expressing implicitly the natural frequencies, dynamic modes, static deflections, static stresses, Euler buckling loads, and buckling modes of the objective S2 in terms of its performance specifications, and S1 data and Finite Element Analysis (FEA) results. Large Admissible Perturbation (LEAP) algorithms are implemented in code RESTRUCT to define the objective S2 incrementally without trial and error by postprocessing FEA results of S1 with no additional FEAs. Systematic numerical applications in redesign of a 10 element 48 degree of freedom (dof) beam, a 104 element 192 dof offshore tower, a 64 element 216 dof plate, and a 144 element 896 dof cylindrical shell show the accuracy, efficiency, and potential of PAR to find an objective state that may differ 100 percent from the baseline design.
Determinants of the Rigor of State Protection Policies for Persons With Dementia in Assisted Living.
Nattinger, Matthew C; Kaskie, Brian
2017-01-01
Continued growth in the number of individuals with dementia residing in assisted living (AL) facilities raises concerns about their safety and protection. However, unlike federally regulated nursing facilities, AL facilities are state-regulated and there is a high degree of variation among policies designed to protect persons with dementia. Despite the important role these protection policies have in shaping the quality of life of persons with dementia residing in AL facilities, little is known about their formation. In this research, we examined the adoption of AL protection policies pertaining to staffing, the physical environment, and the use of chemical restraints. For each protection policy type, we modeled policy rigor using an innovative point-in-time approach, incorporating variables associated with state contextual, institutional, political, and external factors. We found that the rate of state AL protection policy adoptions remained steady over the study period, with staffing policies becoming less rigorous over time. Variables reflecting institutional policy making, including legislative professionalism and bureaucratic oversight, were associated with the rigor of state AL dementia protection policies. As we continue to evaluate the mechanisms contributing to the rigor of AL protection policies, it seems that organized advocacy efforts might expand their role in educating state policy makers about the importance of protecting persons with dementia residing in AL facilities and moving to advance appropriate policies.
A kinetic approach to some quasi-linear laws of macroeconomics
NASA Astrophysics Data System (ADS)
Gligor, M.; Ignat, M.
2002-11-01
Some previous works have presented data on wealth and income distributions in developed countries and have found that the great majority of the population is described by an exponential distribution, which suggests that a kinetic approach could be adequate to describe this empirical evidence. The aim of our paper is to extend this framework by developing a systematic kinetic approach to socio-economic systems and to explain how linear laws, modelling correlations between macroeconomic variables, may arise in this context. First we construct the Boltzmann kinetic equation for an idealised system composed of many individuals (workers, officers, business men, etc.), each of them earning a certain income and spending money on their needs. To each individual is associated a certain time-variable amount of money, which plays the role of his/her phase-space coordinate. In this way the exponential distribution of money in a closed economy is explicitly found. The extension of this result to states near equilibrium gives us the possibility of taking into account the regular increase of the total amount of money, in accordance with modern economic theories. The Kubo-Green-Onsager linear response theory leads us to a set of linear equations between some macroeconomic variables. Finally, the validity of such laws is discussed in relation to time-reversal symmetry and is tested empirically using some macroeconomic time series.
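The exponential money distribution in a closed economy can be reproduced with a classic pairwise-exchange Monte Carlo sketch. The exchange rule below, uniform repartition of a pair's combined money, is one standard choice assumed here for illustration, not the specific kinetic equation of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 2000                          # agents in a closed economy
money = np.ones(N)                # everyone starts with one unit

# random pairwise exchanges: pick two agents and repartition their
# combined money uniformly; total money is conserved exactly
for _ in range(100_000):
    i, j = rng.integers(0, N, size=2)
    if i == j:
        continue
    pot = money[i] + money[j]
    share = rng.random() * pot
    money[i], money[j] = share, pot - share

# the stationary distribution approaches exp(-m/<m>) with <m> = 1,
# whose standard deviation equals its mean
print(round(money.mean(), 6), round(money.std(), 2))
```

A histogram of `money` after equilibration decays exponentially, matching the empirical wealth distributions cited in the abstract for the bulk of the population.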
Variation of parameters using Battin's universal functions
NASA Astrophysics Data System (ADS)
Burton, James R., III; Melton, Robert G.
This paper presents a variation of parameters analysis, suitable for use in situations involving small perturbations to the two-body problem, using Battin's universal functions. Unlike the universal variable formulation, this approach avoids the need to switch among different functional representations if the orbit transitions from elliptical, through parabolic, to hyperbolic state, making it attractive for use in simulating low-thrust trajectories ascending to escape or capturing into orbit.
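The branch-free behaviour described above can be illustrated with the Stumpff functions, which are closely related to Battin's universal functions: a single definition covers elliptic (z > 0), parabolic (z = 0), and hyperbolic (z < 0) motion, with a series expansion near z = 0 for numerical smoothness. This is a generic sketch, not code from the paper:

```python
import math

def stumpff_C(z):
    """Stumpff function C(z) = (1 - cos(sqrt(z)))/z, continued
    analytically through z = 0 and into z < 0 (hyperbolic motion)."""
    if abs(z) < 1e-6:                      # series about z = 0
        return 1/2 - z/24 + z*z/720
    if z > 0:
        return (1 - math.cos(math.sqrt(z))) / z
    return (math.cosh(math.sqrt(-z)) - 1) / (-z)

def stumpff_S(z):
    """Stumpff function S(z) = (sqrt(z) - sin(sqrt(z)))/sqrt(z)^3,
    with the same analytic continuation across z = 0."""
    if abs(z) < 1e-6:
        return 1/6 - z/120 + z*z/5040
    if z > 0:
        sz = math.sqrt(z)
        return (sz - math.sin(sz)) / sz**3
    sz = math.sqrt(-z)
    return (math.sinh(sz) - sz) / sz**3

# continuity across the parabolic boundary: no branch switching needed
print(stumpff_C(-1e-8), stumpff_C(0.0), stumpff_C(1e-8))
```

Because C and S vary smoothly through z = 0, a propagator built on them follows an orbit from elliptic through parabolic to hyperbolic without changing functional representation, which is the property the abstract highlights.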
2D problems of surface growth theory with applications to additive manufacturing
NASA Astrophysics Data System (ADS)
Manzhirov, A. V.; Mikhin, M. N.
2018-04-01
We study 2D problems of surface growth theory of deformable solids and their applications to the analysis of the stress-strain state of AM fabricated products and structures. Statements of the problems are given, and a solution method based on the approaches of the theory of functions of a complex variable is suggested. Computations are carried out for model problems. Qualitative and quantitative results are discussed.
Libia Patricia Peralta Agudelo; Maristela Marangon
2006-01-01
The study is based in the Environmental Protection Area of Guaraqueçaba, located in the Atlantic Forest of the State of Paraná, southern Brazil. EPAs in Brazil allow private ownership, resource extraction, and agriculture according to predefined land use laws. A systems approach was adopted to define the main interacting variables needed to understand the local socio-...
2013-05-22
responsible for conducting Reception, Staging, Onward movement, and Integration of personnel and equipment, the distribution management of supplies...temperature refrigerated containers or leased refrigerated containers on semi-trailers. Class III Distribution As the DoD executive agent, DLA is...quantitative approach involves the investigation of a human or social problem, and tests the theory based on the collection of variables, numerically
Identifying and assessing the substance-exposed infant.
Clark, Lisa; Rohan, Annie
2015-01-01
As the rate of opioid prescription grows, so does fetal exposure to opioids during pregnancy. With increasing fetal exposure to both prescription and nonprescription drugs, there has been a concurrent increase in identification of Neonatal Withdrawal Syndrome (NWS) and adaptation difficulties after birth. In addition, extended use of opioids, barbiturates, and benzodiazepines in neonatal intensive care has resulted in iatrogenic withdrawal syndromes. There is a lack of evidence to support the use of any one specific evaluation strategy to identify NWS. Clinicians caring for infants must use a multimethod approach to diagnosis, including interview and toxicology screening. Signs of NWS are widely variable, and reflect dysfunction in autonomic regulation, state control, and sensory and motor functioning. Several assessment tools have been developed for assessing severity of withdrawal in term neonates. These tools assist in determining need and duration of pharmacologic therapy and help in titration of these therapies. Considerable variability exists in the pharmacologic and nonpharmacologic approaches to affected babies across settings. An evidence-based protocol for identification, evaluation, and management of NWS should be in place in every nursery. This article provides an overview of identification and assessment considerations for providers who care for babies at risk for or who are experiencing alterations in state, behavior, and responses after prenatal or iatrogenic exposure to agents associated with the spectrum of withdrawal.
NASA Astrophysics Data System (ADS)
Stumpp, C.; Nützmann, G.; Maciejewski, S.; Maloszewski, P.
2009-09-01
Summary: In this paper, five model approaches with different physical and mathematical concepts, varying in their complexity and requirements, were applied to identify the transport processes in the unsaturated zone. The applicability of these model approaches was compared and evaluated by investigating two tracer breakthrough curves (bromide, deuterium) in a cropped, free-draining lysimeter experiment under natural atmospheric boundary conditions. The data set consisted of time series of water balance, depth-resolved water contents, pressure heads and resident concentrations measured during 800 days. The tracer transport parameters were determined using a simple stochastic (stream tube model), three lumped parameter (constant water content model, multi-flow dispersion model, variable flow dispersion model) and a transient model approach. All of them were able to fit the tracer breakthrough curves. The identified transport parameters of each model approach were compared. Despite the differing physical and mathematical concepts, the resulting parameters (mean water contents, mean water flux, dispersivities) of the five model approaches were all in the same range. The results indicate that the flow processes are also describable assuming steady-state conditions. Homogeneous matrix flow is dominant, and a small pore volume with enhanced flow velocities near saturation was identified with the variable saturation flow and transport approach. The multi-flow dispersion model also identified preferential flow and additionally suggested a third, less mobile flow component. Due to high fitting accuracy and parameter similarity, all model approaches indicated reliable results.
Identifying bird and reptile vulnerabilities to climate change in the southwestern United States
Hatten, James R.; Giermakowski, J. Tomasz; Holmes, Jennifer A.; Nowak, Erika M.; Johnson, Matthew J.; Ironside, Kirsten E.; van Riper, Charles; Peters, Michael; Truettner, Charles; Cole, Kenneth L.
2016-07-06
Current and future breeding ranges of 15 bird and 16 reptile species were modeled in the Southwestern United States. Rather than taking a broad-scale, vulnerability-assessment approach, we created a species distribution model (SDM) for each focal species incorporating climatic, landscape, and plant variables. Baseline climate (1940–2009) was characterized with Parameter-elevation Regressions on Independent Slopes Model (PRISM) data and future climate with global-circulation-model data under an A1B emission scenario. Climatic variables included monthly and seasonal temperature and precipitation; landscape variables included terrain ruggedness, soil type, and insolation; and plant variables included trees and shrubs commonly associated with a focal species. Not all species-distribution models contained a plant, but if they did, we included a built-in annual migration rate for more accurate plant-range projections in 2039 or 2099. We conducted a group meta-analysis to (1) determine how influential each variable class was when averaged across all species distribution models (birds or reptiles), and (2) identify the correlation among contemporary (2009) habitat fragmentation and biological attributes and future range projections (2039 or 2099). Projected changes in bird and reptile ranges varied widely among species, with one-third of the ranges predicted to expand and two-thirds predicted to contract. A group meta-analysis indicated that climatic variables were the most influential variable class when averaged across all models for both groups, followed by landscape and plant variables (birds), or plant and landscape variables (reptiles), respectively. The second part of the meta-analysis indicated that numerous contemporary habitat-fragmentation (for example, patch isolation) and biological-attribute (for example, clutch size, longevity) variables were significantly correlated with the magnitude of projected range changes for birds and reptiles. 
Patch isolation was a significant trans-specific driver of projected bird and reptile ranges, suggesting that strategic actions should focus on restoration and enhancement of habitat at local and regional scales to promote landscape connectivity and conservation of core areas.
Data-driven discovery of Koopman eigenfunctions using deep learning
NASA Astrophysics Data System (ADS)
Lusch, Bethany; Brunton, Steven L.; Kutz, J. Nathan
2017-11-01
Koopman operator theory transforms any autonomous non-linear dynamical system into an infinite-dimensional linear system. Since linear systems are well-understood, a mapping of non-linear dynamics to linear dynamics provides a powerful approach to understanding and controlling fluid flows. However, finding the correct change of variables remains an open challenge. We present a strategy to discover an approximate mapping using deep learning. Our neural networks find this change of variables, its inverse, and a finite-dimensional linear dynamical system defined on the new variables. Our method is completely data-driven and only requires measurements of the system, i.e. it does not require derivatives or knowledge of the governing equations. We find a minimal set of approximate Koopman eigenfunctions that are sufficient to reconstruct and advance the system to future states. We demonstrate the method on several dynamical systems.
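The "fit linear dynamics directly from snapshot data" idea at the heart of this abstract can be illustrated without neural networks. The sketch below is my illustration, not the authors' method: it uses plain least squares (essentially Dynamic Mode Decomposition) to recover a linear propagator from measurements of a system that is already linear; the paper's contribution is learning the nonlinear change of variables that makes such a linear fit valid for nonlinear systems.

```python
import numpy as np

# Generate snapshots from a hidden linear system (rotation with slight decay).
rng = np.random.default_rng(0)
theta = 0.1
A_true = 0.99 * np.array([[np.cos(theta), -np.sin(theta)],
                          [np.sin(theta),  np.cos(theta)]])
x = rng.standard_normal(2)
snapshots = [x]
for _ in range(50):
    x = A_true @ x
    snapshots.append(x)

X = np.column_stack(snapshots[:-1])   # states at time k
Y = np.column_stack(snapshots[1:])    # states at time k + 1

# Least-squares fit of the linear propagator: A = Y X^+ (DMD without rank
# truncation). Only state measurements are used -- no governing equations.
A_fit = Y @ np.linalg.pinv(X)

# Eigenvalues of the fitted operator (Koopman eigenvalues of the linear
# model) encode the growth/decay and oscillation rates of the dynamics.
eigvals = np.linalg.eigvals(A_fit)
```

For this linear system the fit recovers the true propagator exactly; the deep-learning approach in the paper replaces `X` and `Y` with learned nonlinear coordinates so the same linear regression applies to nonlinear dynamics.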
Mechanics of deformations in terms of scalar variables
NASA Astrophysics Data System (ADS)
Ryabov, Valeriy A.
2017-05-01
A theory of particle and continuum mechanics is developed which allows a treatment of pure deformation in terms of the set of variables "coordinate-momentum-force" instead of the standard treatment in terms of the tensor-valued variables "strain-stress." This approach is quite natural for a microscopic description of an atomic system, in which only pointwise forces caused by the stress act on the atoms, making a body deform. The new concept starts from an affine transformation of spatial to material coordinates in terms of the stretch tensor or its analogs. Thus, three principal stretches and three angles related to their orientation form a set of six scalar variables describing deformation. Instead of the volume-dependent potential used in the standard theory, which requires conditions of equilibrium for surface and body forces acting on a volume element, a potential dependent on the scalar variables is introduced. A consistent introduction of the generalized force associated with this potential becomes possible if a deformed body is considered to be confined on the surface of a torus having six genuine dimensions. Strain, constitutive equations, and other fundamental laws of continuum and particle mechanics may be neatly rewritten in terms of the scalar variables. In giving a new presentation of finite deformation, the new approach provides a full treatment of hyperelasticity, including the anisotropic case. The derived equations of motion generate a new kind of thermodynamical ensemble in terms of constant tension forces. In this ensemble, six internal deformation forces, proportional to the components of the Irving-Kirkwood stress, are controlled by the applied external forces. In the thermodynamic limit, instead of the pressure and volume as state variables, this ensemble employs a deformation force measured in kelvin units and the stretch ratio.
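The three principal stretches that the abstract uses as scalar variables can be extracted numerically from a deformation gradient. The snippet below is a minimal illustration of that standard decomposition (my construction, not the paper's formalism): the singular values of the deformation gradient F are the principal stretches, i.e. the eigenvalues of the right stretch tensor U in the polar decomposition F = R U.

```python
import numpy as np

# A sample deformation gradient: extension along x, compression along y,
# plus a small shear (values chosen arbitrarily for illustration).
F = np.array([[1.2, 0.1, 0.0],
              [0.0, 0.9, 0.0],
              [0.0, 0.0, 1.0]])

# Singular values of F = eigenvalues of U = sqrt(F^T F): the principal stretches.
stretches = np.linalg.svd(F, compute_uv=False)

# The volume change J equals the product of the principal stretches (= det F).
J = np.prod(stretches)
```

The rotation factors discarded by `compute_uv=False` correspond to the three orientation angles that, together with the stretches, make up the six scalar variables of the abstract.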
NASA Astrophysics Data System (ADS)
Martin, Royce Ann
The purpose of this study was to determine the extent to which student scores on a researcher-constructed quantitative and document literacy test, the Aviation Documents Delineator (ADD), were associated with (a) learning styles (imaginative, analytic, common sense, dynamic, and undetermined), as identified by the Learning Type Measure, (b) program curriculum (aerospace administration, professional pilot, both aerospace administration and professional pilot, other, or undeclared), (c) overall cumulative grade point average at Indiana State University, and (d) year in school (freshman, sophomore, junior, or senior). The Aviation Documents Delineator (ADD) was a three-part, 35-question survey that required students to interpret graphs, tables, and maps. Tasks assessed in the ADD included (a) locating, interpreting, and describing specific data displayed in the document, (b) determining data for a specified point on the table through interpolation, (c) comparing data for a string of variables representing one aspect of aircraft performance to another string of variables representing a different aspect of aircraft performance, (d) interpreting the documents to make decisions regarding emergency situations, and (e) performing single and/or sequential mathematical operations on a specified set of data. The Learning Type Measure (LTM) was a 15-item self-report survey developed by Bernice McCarthy (1995) to profile an individual's processing and perception tendencies in order to reveal different individual approaches to learning. The sample used in this study included 143 students enrolled in Aerospace Technology Department courses at Indiana State University in the fall of 1996. The ADD and the LTM were administered to each subject. Data collected in this investigation were analyzed using a stepwise multiple regression analysis technique.
Results of the study revealed that the variables, year in school and GPA, were significant predictors of the criterion variables, document, quantitative, and total literacy, when utilizing the ADD. The variables learning style and program of study were found not to be significant predictors of literacy scores on the ADD instrument.
A multidisciplinary approach to overreaching detection in endurance trained athletes.
Le Meur, Yann; Hausswirth, Christophe; Natta, Françoise; Couturier, Antoine; Bignet, Frank; Vidal, Pierre Paul
2013-02-01
In sport, the high training load required to reach peak performance pushes human adaptation to its limits. In that process, athletes may experience general fatigue and impaired performance and may be identified as overreached (OR). When this state lasts for several months, overtraining syndrome (OT) is diagnosed. Until now, no single variable has been able to detect OR, a requirement for preventing the transition from OR to OT. This encouraged us to investigate OR further using a multivariate approach, including physiological, biomechanical, cognitive, and perceptive monitoring. Twenty-four highly trained triathletes were separated into an overload group and a normo-trained group (NT) during 3 wk of training. Given the decrement in their running performance, 11 triathletes were diagnosed as OR after this period. A discriminant analysis showed that changes in eight parameters measured during a maximal incremental test could explain 98.2% of the OR state (lactatemia, heart rate, biomechanical parameters, and effort perception). Variations in heart rate and lactatemia were the two most discriminating factors. When the multifactorial analysis was restricted to these two variables, the classification score reached 89.5%. Catecholamine and creatine kinase concentrations at rest did not change significantly in either group. Running pattern was preserved, and a cognitive performance decrement was observed only at exhaustion in OR subjects. This study showed that monitoring several variables is required to prevent the transition between NT and OR. It emphasized that an OR index combining heart rate and blood lactate concentration changes after a strenuous training period could be helpful for routinely detecting OR.
Chaves, Luciano Eustáquio; Nascimento, Luiz Fernando Costa; Rizol, Paloma Maria Silva Rocha
2017-01-01
OBJECTIVE: To predict the number of hospitalizations for asthma and pneumonia associated with exposure to air pollutants in the city of São José dos Campos, São Paulo State. METHODS: This is a computational model using fuzzy logic based on Mamdani's inference method. For the fuzzification of the input variables (particulate matter, ozone, sulfur dioxide, and apparent temperature), we considered two membership functions for each variable, with the linguistic labels good and bad. For the output variable (number of hospitalizations for asthma and pneumonia), we considered five membership functions: very low, low, medium, high, and very high. DATASUS was our source for the number of hospitalizations in the year 2007, and the result provided by the model was correlated with the actual hospitalization data at lags of zero to two days. The accuracy of the model was estimated by the ROC curve for each pollutant at those lags. RESULTS: In 2007, 1,710 hospitalizations for pneumonia and asthma were recorded in São José dos Campos, State of São Paulo, with a daily average of 4.9 hospitalizations (SD = 2.9). The model output showed a positive and significant correlation (r = 0.38) with the actual data; model accuracy was higher for sulfur dioxide at lags 0 and 2 and for particulate matter at lag 1. CONCLUSIONS: Fuzzy modeling proved accurate for relating pollutant exposure to hospitalizations for pneumonia and asthma.
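The Mamdani machinery the abstract describes (fuzzification, rule implication, aggregation, defuzzification) fits in a few lines. The sketch below is a deliberately reduced illustration, not the paper's model: one hypothetical input (particulate matter on an assumed 0–50 scale) and one output (daily admissions on an assumed 0–10 scale), two rules, min-implication, max-aggregation, and centroid defuzzification.

```python
import numpy as np

# Output universe: daily hospital admissions (hypothetical 0-10 scale).
y = np.linspace(0.0, 10.0, 1001)
mu_low  = np.clip((5.0 - y) / 5.0, 0.0, 1.0)   # "low admissions" membership
mu_high = np.clip((y - 5.0) / 5.0, 0.0, 1.0)   # "high admissions" membership

def infer(pm):
    """Mamdani inference for one pollutant reading (assumed ug/m3, 0-50)."""
    # Fuzzification: membership grades of the input in "good" and "bad" air.
    good = np.clip((50.0 - pm) / 50.0, 0.0, 1.0)
    bad  = np.clip(pm / 50.0, 0.0, 1.0)
    # Rules: IF good THEN low; IF bad THEN high (min-implication),
    # aggregated with max across rules.
    agg = np.maximum(np.minimum(good, mu_low), np.minimum(bad, mu_high))
    # Centroid defuzzification: crisp predicted admission count.
    return float(np.sum(agg * y) / np.sum(agg))
```

Clean air maps to a low predicted count and polluted air to a high one; the paper's model simply does this with four inputs, five output membership functions, and a full rule base.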
Characterizing the Cloud Decks of Luhman 16AB with Medium-resolution Spectroscopic Monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kellogg, Kendra; Metchev, Stanimir; Heinze, Aren
2017-11-01
We present results from a two-night, R ∼ 4000, 0.9–2.5 μm spectroscopic monitoring campaign of Luhman 16AB (L7.5 + T0.5). We assess the variability amplitude as a function of pressure level in the atmosphere of Luhman 16B, the more variable of the two components. The amplitude decreases monotonically with decreasing pressure, indicating that the source of variability—most likely patchy clouds—lies in the lower atmosphere. An unexpected result is that the strength of the K i absorption is higher in the faint state of Luhman 16B and lower in the bright state. We conclude that either the abundance of K i increases when the clouds roll in, potentially because of additional K i in the cloud itself, or that the temperature–pressure profile changes. We reproduce the change in K i absorption strengths with combinations of spectral templates to represent the bright and the faint variability states. These are dominated by a warmer L8 or L9 component, with a smaller contribution from a cooler T1 or T2 component. The success of this approach argues that the mechanism responsible for brown dwarf variability is also behind the diverse spectral morphology across the L-to-T transition. We further suggest that the L9–T1 part of the sequence represents a narrow but random ordering of effective temperatures and cloud fractions, obscured by the monotonic progression in methane absorption strength.
NASA Astrophysics Data System (ADS)
Tintoré, Joaquín
2017-04-01
The last 20 years of ocean research have allowed a description of the state of the large-scale ocean circulation. However, it is also well known that there is no such thing as a single ocean state: the ocean varies over a wide range of spatial and temporal scales. More recently, in the last 10 years, new monitoring and modelling technologies have emerged, allowing quasi-real-time observation and forecasting of the ocean at regional and local scales. These new technologies are key components of the observing and forecasting systems being progressively implemented in many regional seas and coastal areas of the world's oceans. As a result, new capabilities to characterise the ocean state and, more importantly, its variability at small spatial and temporal scales exist today, in many cases in quasi-real time. Examples of relevance for society include our capability to detect and understand long-term climatic changes, and our capability to better forecast the coastal ocean circulation at temporal scales from sub-seasonal to inter-annual and spatial scales from regional to meso- and submesoscale. The Mediterranean Sea is a well-known laboratory ocean where meso- and submesoscale features can be ideally observed and studied, as shown by key contributions from projects such as Perseus, CMEMS, and Jericonext, among others. The challenge for the next 10 years is the integration of these technologies and multi-platform observing and forecasting systems to (a) monitor the variability at small scales (mesoscale/weeks) in order (b) to resolve the sub-basin/seasonal and inter-annual variability and thereby (c) establish the decadal variability, understand the associated biases, and correct them. In other words, the new observing systems now allow a major change in the focus of ocean observation, from small to large scales.
Recent studies from SOCIB -www.socib.es- have shown the importance of this new small-to-large-scale multi-platform approach in ocean observation. Three examples from the integration capabilities of SOCIB facilities will be presented and discussed. First, the quasi-continuous high-frequency glider monitoring of the Ibiza Channel since 2011, an important biodiversity hot spot and a 'choke' point in the Western Mediterranean circulation, has allowed us to reveal high-frequency variability in the north-south exchanges, with very significant changes (0.8–0.9 Sv) occurring over periods of days to weeks, of the same order as the previously known seasonal cycle. HF radar data and model results have also contributed more recently to better describing and understanding the variability at small scales. Second, the Alborex/Perseus multi-platform experiment (e.g., RV catamaran, 2 gliders, 25 drifters, 3 Argo-type profilers, and satellite data), which focused on submesoscale processes and ecosystem response, was carried out in the Alborán Sea in May 2014. Glider results showed significant chlorophyll subduction in areas adjacent to the steep density front, with patterns related to vertical motion. Initial dynamical interpretations will be presented. Third and finally, I will discuss the key role of the data centre in guaranteeing data interoperability, quality control, availability, and distribution, so that this new approach to ocean observation and forecasting can be truly efficient in responding to key scientific priorities, enhancing technology development, and responding to societal needs.
NASA Astrophysics Data System (ADS)
Henry de Frahan, Marc T.; Varadan, Sreenivas; Johnsen, Eric
2015-01-01
Although the Discontinuous Galerkin (DG) method has seen widespread use for compressible flow problems in a single fluid with constant material properties, it has yet to be implemented in a consistent fashion for compressible multiphase flows with shocks and interfaces. Specifically, it is challenging to design a scheme that meets the following requirements: conservation, high-order accuracy in smooth regions and non-oscillatory behavior at discontinuities (in particular, material interfaces). Following the interface-capturing approach of Abgrall [1], we model flows of multiple fluid components or phases using a single equation of state with variable material properties; discontinuities in these properties correspond to interfaces. To represent compressible phenomena in solids, liquids, and gases, we present our analysis for equations of state belonging to the Mie-Grüneisen family. Within the DG framework, we propose a conservative, high-order accurate, and non-oscillatory limiting procedure, verified with simple multifluid and multiphase problems. We show analytically that two key elements are required to prevent spurious pressure oscillations at interfaces and maintain conservation: (i) the transport equation(s) describing the material properties must be solved in a non-conservative weak form, and (ii) the suitable variables must be limited (density, momentum, pressure, and appropriate properties entering the equation of state), coupled with a consistent reconstruction of the energy. Further, we introduce a physics-based discontinuity sensor to apply limiting in a solution-adaptive fashion. We verify this approach with one- and two-dimensional problems with shocks and interfaces, including high pressure and density ratios, for fluids obeying different equations of state to illustrate the robustness and versatility of the method. The algorithm is implemented on parallel graphics processing units (GPU) to achieve high speedup.
Deterministic modelling and stochastic simulation of biochemical pathways using MATLAB.
Ullah, M; Schmidt, H; Cho, K H; Wolkenhauer, O
2006-03-01
The analysis of complex biochemical networks is conducted in two popular conceptual frameworks for modelling. The deterministic approach requires the solution of ordinary differential equations (ODEs, reaction rate equations) with concentrations as continuous state variables. The stochastic approach involves the simulation of differential-difference equations (chemical master equations, CMEs) with probabilities as variables, generating counts of molecules of the chemical species as realisations of random variables drawn from the probability distribution described by the CMEs. Although numerous tools are available, many of them free, the modelling and simulation environment MATLAB is widely used in the physical and engineering sciences. We describe a collection of MATLAB functions to construct and solve ODEs for deterministic simulation and to implement realisations of CMEs for stochastic simulation using advanced MATLAB coding (Release 14). The program was successfully applied to pathway models from the literature for both cases. The results were compared to implementations using alternative tools for dynamic modelling and simulation of biochemical networks. The aim is to provide a concise set of MATLAB functions that encourages experimentation with systems biology models. All the script files are available from www.sbi.uni-rostock.de/publications_matlab-paper.html.
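The two frameworks the abstract contrasts can be compared on the simplest possible reaction. The sketch below (a toy example of my own, in Python rather than the paper's MATLAB) simulates the degradation reaction X → ∅ with Gillespie's direct method, which generates exact realisations of the CME, and checks the ensemble mean against the deterministic ODE solution x(t) = x₀·exp(−kt).

```python
import math
import random

def gillespie_decay(x0, k, t_end, rng):
    """Exact SSA (Gillespie direct method) for X -> 0; copy number at t_end."""
    t, x = 0.0, x0
    while x > 0:
        t += rng.expovariate(k * x)   # exponential waiting time, propensity k*x
        if t > t_end:
            break
        x -= 1                        # one degradation event fires
    return x

rng = random.Random(42)
x0, k, t_end = 1000, 1.0, 1.0
runs = [gillespie_decay(x0, k, t_end, rng) for _ in range(200)]
mean_ssa = sum(runs) / len(runs)

# Deterministic counterpart: dx/dt = -k x  =>  x(t) = x0 * exp(-k t).
mean_ode = x0 * math.exp(-k * t_end)
```

For this linear reaction the CME mean coincides with the ODE solution, so the Monte Carlo average converges to the deterministic curve; for nonlinear kinetics the two frameworks can genuinely diverge, which is why both are needed.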
Incorporating climate change and morphological uncertainty into coastal change hazard assessments
Baron, Heather M.; Ruggiero, Peter; Wood, Nathan J.; Harris, Erica L.; Allan, Jonathan; Komar, Paul D.; Corcoran, Patrick
2015-01-01
Documented and forecasted trends in rising sea levels and changes in storminess patterns have the potential to increase the frequency, magnitude, and spatial extent of coastal change hazards. To develop realistic adaptation strategies, coastal planners need information about coastal change hazards that recognizes the dynamic temporal and spatial scales of beach morphology, the climate controls on coastal change hazards, and the uncertainties surrounding the drivers and impacts of climate change. We present a probabilistic approach for quantifying and mapping coastal change hazards that incorporates the uncertainty associated with both climate change and morphological variability. To demonstrate the approach, coastal change hazard zones of arbitrary confidence levels are developed for the Tillamook County (State of Oregon, USA) coastline using a suite of simple models and a range of possible climate futures related to wave climate, sea-level rise projections, and the frequency of major El Niño events. Extreme total water levels are more influenced by wave height variability, whereas the magnitude of erosion is more influenced by sea-level rise scenarios. Morphological variability has a stronger influence on the width of coastal hazard zones than the uncertainty associated with the range of climate change scenarios.
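The probabilistic idea in this abstract, sampling climate scenarios and morphology and reading hazard zones off the resulting distribution at a chosen confidence level, can be sketched in a few lines. All numbers and the retreat rule below are hypothetical placeholders of my own (a simplified Bruun-type retreat divided by foreshore slope), not the paper's models or data.

```python
import random

random.seed(1)

def erosion_draw():
    """One Monte Carlo draw of landward shoreline retreat (m), toy model."""
    slr = random.uniform(0.2, 1.0)        # sea-level-rise scenario range (m), assumed
    runup = random.gauss(2.0, 0.5)        # extreme wave runup (m), assumed distribution
    beach_slope = 0.02                    # average foreshore slope, assumed
    # Simplified Bruun-type rule: retreat ~ (water-level rise) / slope,
    # with only the runup excess above its mean adding to the hazard.
    return (slr + max(runup - 2.0, 0.0)) / beach_slope

draws = sorted(erosion_draw() for _ in range(10000))
zone_50 = draws[int(0.50 * len(draws))]   # median hazard-zone width (m)
zone_90 = draws[int(0.90 * len(draws))]   # 90%-confidence hazard-zone width (m)
```

Mapping `zone_90` minus `zone_50` along the coast is exactly the kind of "hazard zone of arbitrary confidence level" the abstract describes, with the spread driven jointly by scenario and morphological uncertainty.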
On the Latent Variable Interpretation in Sum-Product Networks.
Peharz, Robert; Gens, Robert; Pernkopf, Franz; Domingos, Pedro
2017-10-01
One of the central themes in sum-product networks (SPNs) is the interpretation of sum nodes as marginalized latent variables (LVs). This interpretation adds syntactic and semantic structure, enables the application of the EM algorithm, and allows MPE inference to be performed efficiently. In the literature, the LV interpretation was justified by explicitly introducing the indicator variables corresponding to the LVs' states. However, as pointed out in this paper, this approach conflicts with the completeness condition in SPNs and does not fully specify the probabilistic model. We propose a remedy for this problem by modifying the original approach for introducing the LVs, which we call SPN augmentation. We discuss conditional independencies in augmented SPNs, formally establish the probabilistic interpretation of the sum-weights, and give an interpretation of augmented SPNs as Bayesian networks. Based on these results, we find a sound derivation of the EM algorithm for SPNs. Furthermore, the Viterbi-style algorithm for MPE proposed in the literature was never proven to be correct. We show that it is indeed correct when applied to selective SPNs, and in particular when applied to augmented SPNs. Our theoretical results are confirmed in experiments on synthetic data and 103 real-world datasets.
An Algebraic Approach to the Quantization of Constrained Systems: Finite Dimensional Examples.
NASA Astrophysics Data System (ADS)
Tate, Ranjeet Shekhar
1992-01-01
General relativity has two features in particular, which make it difficult to apply to it existing schemes for the quantization of constrained systems. First, there is no background structure in the theory, which could be used, e.g., to regularize constraint operators, to identify a "time" or to define an inner product on physical states. Second, in the Ashtekar formulation of general relativity, which is a promising avenue to quantum gravity, the natural variables for quantization are not canonical; and, classically, there are algebraic identities between them. Existing schemes are usually not concerned with such identities. Thus, from the point of view of canonical quantum gravity, it has become imperative to find a framework for quantization which provides a general prescription to find the physical inner product, and is flexible enough to accommodate non -canonical variables. In this dissertation I present an algebraic formulation of the Dirac approach to the quantization of constrained systems. The Dirac quantization program is augmented by a general principle to find the inner product on physical states. Essentially, the Hermiticity conditions on physical operators determine this inner product. I also clarify the role in quantum theory of possible algebraic identities between the elementary variables. I use this approach to quantize various finite dimensional systems. Some of these models test the new aspects of the algebraic framework. Others bear qualitative similarities to general relativity, and may give some insight into the pitfalls lurking in quantum gravity. The previous quantizations of one such model had many surprising features. When this model is quantized using the algebraic program, there is no longer any unexpected behaviour. I also construct the complete quantum theory for a previously unsolved relativistic cosmology. All these models indicate that the algebraic formulation provides powerful new tools for quantization. 
In (spatially compact) general relativity, the Hamiltonian is constrained to vanish. I present various approaches one can take to obtain an interpretation of the quantum theory of such "dynamically constrained" systems. I apply some of these ideas to the Bianchi I cosmology, and analyze the issue of the initial singularity in quantum theory.
Parental and Infant Gender Factors in Parent-Infant Interaction: State-Space Dynamic Analysis.
Cerezo, M Angeles; Sierra-García, Purificación; Pons-Salvador, Gemma; Trenado, Rosa M
2017-01-01
This study aimed to investigate the influence of parental gender on their interaction with their infants, considering, as well, the role of the infant's gender. The State Space Grid (SSG) method, a graphical tool based on the non-linear dynamic system (NDS) approach was used to analyze the interaction, in Free-Play setting, of 52 infants, aged 6 to 10 months, divided into two groups: half of the infants interacted with their fathers and half with their mothers. There were 50% boys in each group. MANOVA results showed no differential parenting of boys and girls. Additionally, mothers and fathers showed no differences in the Diversity of behavioral dyadic states nor in Predictability. However, differences associated with parent's gender were found in that the paternal dyads were more "active" than the maternal dyads: they were faster in the rates per second of behavioral events and transitions or change of state. In contrast, maternal dyads were more repetitive because, once they visited a certain dyadic state, they tend to be involved in more events. Results showed a significant discriminant function on the parental groups, fathers and mothers. Specifically, the content analyses carried out for the three NDS variables, that previously showed differences between groups, showed particular dyadic behavioral states associated with the rate of Transitions and the Events per Visit ratio. Thus, the transitions involving 'in-out' of 'Child Social Approach neutral - Sensitive Approach neutral' state and the repetitions of events in the dyadic state 'Child Play-Sensitive Approach neutral' distinguished fathers from mothers. The classification of dyads (with fathers and mothers) based on this discriminant function identified 73.10% (19/26) of the father-infant dyads and 88.5% (23/26) of the mother-infant dyads. 
The study of father-infant interaction using the SSG approach offers interesting possibilities because it characterizes and quantifies the actual moment-to-moment flow of parent-infant interactive dynamics. Our findings showed how observational methods applied to natural contexts offer new facets in father vs. mother interactive behavior with their infants that can inform further developments in this field.
Entropy Information of Cardiorespiratory Dynamics in Neonates during Sleep.
Lucchini, Maristella; Pini, Nicolò; Fifer, William P; Burtchen, Nina; Signorini, Maria G
2017-05-01
Sleep is a central activity in human adults and characterizes most of newborn infant life. During sleep, autonomic control acts to modulate heart rate variability (HRV) and respiration. Mechanisms underlying cardiorespiratory interactions in different sleep states have been studied but are not yet fully understood. Signal processing approaches have focused on cardiorespiratory analysis to elucidate this co-regulation. This manuscript proposes to analyze heart rate (HR), respiratory variability, and their interrelationship in newborn infants to characterize cardiorespiratory interactions in different sleep states (active vs. quiet). We are searching for indices that could detect regulation alteration or malfunction, potentially leading to infant distress. We analyzed inter-beat (RR) interval series and respiration in a population of 151 newborns, with a follow-up of 33 of them at 1 month of age. RR interval series were obtained by recognizing peaks of the QRS complex in the electrocardiogram (ECG), corresponding to ventricular depolarization. Univariate time domain, frequency domain, and entropy measures were applied. In addition, Transfer Entropy was considered as a bivariate approach able to quantify the bidirectional information flow from one signal (respiration) to another (the RR series). Results confirm the validity of the proposed approach. Overall, HRV is higher in active sleep, while high frequency (HF) power characterizes quiet sleep more. Entropy analysis provides higher indices for SampEn and Quadratic Sample Entropy (QSE) in quiet sleep. Transfer Entropy values were higher in quiet sleep and point to a major influence of respiration on the RR series. At 1 month of age, time domain parameters show an increase in HR and a decrease in variability. No entropy differences were found across ages. The parameters employed in this study help to quantify the potential for infants to adapt their cardiorespiratory responses as they mature.
Thus, they could be useful as early markers of risk for infant cardiorespiratory vulnerabilities.
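The sample entropy index used above has a compact definition: SampEn(m, r) = −ln(A/B), where B counts pairs of length-m templates matching within tolerance r and A counts the same pairs extended to length m + 1. The sketch below is a standard textbook implementation (not the authors' code), applied to a toy RR series; r is given here as an absolute tolerance, though in practice it is commonly set to 0.2 times the standard deviation of the series.

```python
import math
import random

def sampen(x, m=2, r=0.2):
    """Sample entropy SampEn(m, r) = -ln(A/B) of a sequence x."""
    n = len(x)
    def matches(length):
        # Count template pairs (i, j), i < j, matching within tolerance r.
        # The same i-range is used for both lengths so SampEn(const) == 0.
        c = 0
        for i in range(n - m):
            for j in range(i + 1, n - m):
                if max(abs(x[i + k] - x[j + k]) for k in range(length)) <= r:
                    c += 1
        return c
    b, a = matches(m), matches(m + 1)   # matches of length m and m + 1
    return -math.log(a / b)

# A perfectly regular RR series is maximally predictable (SampEn = 0);
# jittering the intervals raises the entropy.
regular = [0.5] * 200
rng = random.Random(0)
jittered = [0.5 + rng.uniform(-0.3, 0.3) for _ in range(200)]
```

Higher SampEn for the jittered series mirrors the abstract's use of entropy indices to separate sleep states with different degrees of cardiorespiratory regularity.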
Statistical theory on the analytical form of cloud particle size distributions
NASA Astrophysics Data System (ADS)
Wu, Wei; McFarquhar, Greg
2017-11-01
Several analytical forms of cloud particle size distributions (PSDs) have been used in numerical modeling and remote sensing retrieval studies of clouds and precipitation, including exponential, gamma, lognormal, and Weibull distributions. However, there is no satisfying physical explanation as to why certain distribution forms preferentially occur instead of others. Theoretically, the analytical form of a PSD can be derived by directly solving the general dynamic equation, but no analytical solutions have been found yet. Instead of a process-level approach, the use of the principle of maximum entropy (MaxEnt) for determining the analytical form of PSDs from the perspective of the system is examined here. The issue of variability under coordinate transformations that arises when using the Gibbs/Shannon definition of entropy is identified, and the use of the concept of relative entropy to avoid these problems is discussed. Focusing on cloud physics, the four-parameter generalized gamma distribution is proposed as the analytical form of a PSD using the principle of maximum (relative) entropy with assumptions of power-law relations between state variables, scale invariance, and a further constraint on the expectation of one state variable (e.g., bulk water mass).
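The four-parameter generalized gamma PSD has the form n(D) = N₀·D^μ·exp(−λD^γ), with the exponential (μ = 0, γ = 1), gamma (γ = 1), and Weibull (μ = γ − 1) distributions as special cases. As a quick sketch (my own check, not the paper's code), the closed-form normalisation constant N₀ = γ·λ^((μ+1)/γ) / Γ((μ+1)/γ) can be verified numerically; the parameter values below are arbitrary illustrations.

```python
import math

def gen_gamma_pdf(d, mu, lam, gam):
    """n(D) = N0 * D^mu * exp(-lam * D^gam), normalised to unit area on (0, inf)."""
    n0 = gam * lam ** ((mu + 1.0) / gam) / math.gamma((mu + 1.0) / gam)
    return n0 * d ** mu * math.exp(-lam * d ** gam)

def trapezoid(f, a, b, n=100000):
    """Composite trapezoidal rule on [a, b] with n panels."""
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

# Arbitrary shape parameters; the upper limit 60 truncates a negligible tail.
total = trapezoid(lambda d: gen_gamma_pdf(d, mu=2.0, lam=1.5, gam=0.8), 1e-9, 60.0)
```

Setting `mu=0, gam=1` recovers the exponential PSD exactly, which is one way to see how the familiar special cases sit inside the four-parameter family.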
Cosmological perturbations of a perfect fluid and noncommutative variables
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Felice, Antonio; Gerard, Jean-Marc; Suyama, Teruaki
2010-03-15
We describe the linear cosmological perturbations of a perfect fluid at the level of an action, providing thus an alternative to the standard approach based only on the equations of motion. This action is suited not only to perfect fluids with a barotropic equation of state, but also to those for which the pressure depends on two thermodynamical variables. By quantizing the system we find that (1) some perturbation fields exhibit a noncommutativity quite analogous to the one observed for a charged particle moving in a strong magnetic field, (2) local curvature and pressure perturbations cannot be measured simultaneously, and (3) ghosts appear if the null energy condition is violated.
New construction of eigenstates and separation of variables for SU(N) quantum spin chains
NASA Astrophysics Data System (ADS)
Gromov, Nikolay; Levkovich-Maslyuk, Fedor; Sizov, Grigory
2017-09-01
We conjecture a new way to construct eigenstates of integrable XXX quantum spin chains with SU(N) symmetry. The states are built by repeatedly acting on the vacuum with a single operator B_good(u) evaluated at the Bethe roots. Our proposal serves as a compact alternative to the usual nested algebraic Bethe ansatz. Furthermore, the roots of this operator give the separated variables of the model, explicitly generalizing Sklyanin's approach to the SU(N) case. We present many tests of the conjecture and prove it in several special cases. We focus on rational spin chains with the fundamental representation at each site, but expect many of the results to be valid more generally.
Current Status and Challenges of Atmospheric Data Assimilation
NASA Astrophysics Data System (ADS)
Atlas, R. M.; Gelaro, R.
2016-12-01
The issues of modern atmospheric data assimilation are fairly simple to comprehend but difficult to address, involving the combination of literally billions of model variables and tens of millions of observations daily. In addition to traditional meteorological variables such as wind, temperature, pressure, and humidity, model state vectors are being expanded to include explicit representation of precipitation, clouds, aerosols, and atmospheric trace gases. At the same time, model resolutions are approaching single-kilometer scales globally, and new observation types have error characteristics that are increasingly non-Gaussian. This talk describes the current status and challenges of atmospheric data assimilation, including an overview of current methodologies, the difficulty of estimating error statistics, and progress toward coupled earth system analyses.
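At the heart of the "combination of model variables and observations" the abstract mentions is the analysis update common to optimal interpolation and Kalman filtering: x_a = x_b + K(y − Hx_b) with gain K = BHᵀ(HBHᵀ + R)⁻¹. The two-variable example below is a toy sketch with invented numbers, not an operational system, but it shows the key behaviour: the error covariances weight the blend, and the covariance B spreads the observation's influence to unobserved variables.

```python
import numpy as np

x_b = np.array([10.0, 5.0])            # background state (two model variables)
B = np.array([[1.0, 0.5],
              [0.5, 2.0]])             # background-error covariance (assumed)
H = np.array([[1.0, 0.0]])             # observation operator: observe variable 1 only
R = np.array([[0.5]])                  # observation-error covariance (assumed)
y = np.array([12.0])                   # the observation

# Kalman/OI gain and analysis update.
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
x_a = x_b + K @ (y - H @ x_b)
```

The analysis of the observed variable lands between background (10) and observation (12), closer to whichever has the smaller error variance, while the second, unobserved variable is also nudged through the off-diagonal covariance term. Operational systems solve the same algebra with ~10⁹ state variables, which is exactly why the error statistics are so hard to estimate.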
Gaussian entanglement revisited
NASA Astrophysics Data System (ADS)
Lami, Ludovico; Serafini, Alessio; Adesso, Gerardo
2018-02-01
We present a novel approach to the separability problem for Gaussian quantum states of bosonic continuous variable systems. We derive a simplified necessary and sufficient separability criterion for arbitrary Gaussian states of m versus n modes, which relies on convex optimisation over marginal covariance matrices on one subsystem only. We further revisit the currently known results stating the equivalence between separability and positive partial transposition (PPT) for specific classes of Gaussian states. Using techniques based on matrix analysis, such as Schur complements and matrix means, we then provide a unified treatment and compact proofs of all these results. In particular, we recover the PPT-separability equivalence for: (i) Gaussian states of 1 versus n modes; and (ii) isotropic Gaussian states. In passing, we also retrieve (iii) the recently established equivalence between separability of a Gaussian state and its complete Gaussian extendability. Our techniques are then applied to progress beyond the state of the art. We prove that: (iv) Gaussian states that are invariant under partial transposition are necessarily separable; (v) the PPT criterion is necessary and sufficient for separability for Gaussian states of m versus n modes that are symmetric under the exchange of any two modes belonging to one of the parties; and (vi) Gaussian states which remain PPT under passive optical operations cannot be entangled by them either. This is not a foregone conclusion per se (since Gaussian bound entangled states do exist) and settles a question that had been left unanswered in the existing literature on the subject. This paper, enjoyable by both the quantum optics and the matrix analysis communities, overall delivers technical and conceptual advances which are likely to be useful for further applications in continuous variable quantum information theory, beyond the separability problem.
Bieler, Noah S; Tschopp, Jan P; Hünenberger, Philippe H
2015-06-09
An extension of the λ-local-elevation umbrella-sampling (λ-LEUS) scheme [Bieler et al., J. Chem. Theory Comput. 2014, 10, 3006] is proposed to handle the multistate (MS) situation, i.e., the calculation of the relative free energies of multiple physical states based on a single simulation. The key element of the MS-λ-LEUS approach is to use a single coupling variable Λ controlling successive pairwise mutations between the states of interest in a cyclic fashion. The Λ variable is propagated dynamically as an extended-system variable, using a coordinate transformation with plateaus and a memory-based biasing potential as in λ-LEUS. Compared to other available MS schemes (one-step perturbation, enveloping distribution sampling, and conventional λ-dynamics), the proposed method presents a number of important advantages, namely: (i) the physical states are visited explicitly and over finite time periods; (ii) the extent of unphysical space required to ensure transitions is kept minimal and, in particular, one-dimensional; (iii) the setup protocol solely requires the topologies of the physical states; and (iv) the method only requires limited modifications in a simulation code capable of handling two-state mutations. As an initial application, the absolute binding free energies of five alkali cations to three crown ethers in three different solvents are calculated. The results are found to reproduce qualitatively the main experimental trends and, in particular, the experimental selectivity of 18C6 for K(+) in water and methanol, which is interpreted in terms of opposing trends along the cation series between the solvation free energy of the cation and the direct electrostatic interactions within the complex.
Impact from Magnitude-Rupture Length Uncertainty on Seismic Hazard and Risk
NASA Astrophysics Data System (ADS)
Apel, E. V.; Nyst, M.; Kane, D. L.
2015-12-01
In probabilistic seismic hazard and risk assessments, seismic sources are typically divided into two groups: fault sources (to model known faults) and background sources (to model unknown faults). In areas like the Central and Eastern United States and Hawaii, the hazard and risk are driven primarily by background sources. Background sources can be modeled as areas, points, or pseudo-faults. When background sources are modeled as pseudo-faults, magnitude-length or magnitude-area scaling relationships are required to construct these pseudo-faults. However, the uncertainty associated with these relationships is often ignored or discarded in hazard and risk models, particularly when fault sources are the dominant contributor. Conversely, in areas modeled only with background sources, these uncertainties are much more significant. In this study we test the impact of using various relationships, and the resulting epistemic uncertainties, on the seismic hazard and risk in the Central and Eastern United States and Hawaii. It is common to use only one magnitude-length relationship when calculating hazard. However, Stirling et al. (2013) showed that for a given suite of magnitude-rupture length relationships the variability can be quite large. The 2014 US National Seismic Hazard Maps (Petersen et al., 2014) used one magnitude-rupture length relationship (Somerville et al., 2001) in the Central and Eastern United States and did not consider variability in the seismogenic rupture plane width. Here we use a suite of metrics to compare the USGS approach with these variable-uncertainty models to assess (1) the impact on hazard and risk and (2) the epistemic uncertainty associated with the choice of relationship. In areas where the seismic hazard is dominated by larger crustal faults (e.g., New Madrid), the choice of magnitude-rupture length relationship has little impact on the hazard or risk. Away from these regions, however, the choice of relationship is more significant and may approach the size of the uncertainty associated with the ground motion prediction equation suite.
NASA Astrophysics Data System (ADS)
Lebassi-Habtezion, Bereket; Diffenbaugh, Noah S.
2013-10-01
The potential importance of local-scale climate phenomena motivates development of approaches to enable computationally feasible nonhydrostatic climate simulations. To that end, we evaluate the potential viability of nested nonhydrostatic model approaches, using the summer climate of the western United States (WUSA) as a case study. We use the Weather Research and Forecasting (WRF) model to carry out five simulations of summer 2010. This suite allows us to test differences between nonhydrostatic and hydrostatic resolutions, single and multiple nesting approaches, and high- and low-resolution reanalysis boundary conditions. WRF simulations were evaluated against station observations, gridded observations, and reanalysis data over domains that cover the 11 WUSA states at a nonhydrostatic grid spacing of 4 km and hydrostatic grid spacings of 25 km and 50 km. Results show that the nonhydrostatic simulations more accurately resolve the heterogeneity of surface temperature, precipitation, and wind speed features associated with the topography and orography of the WUSA region. In addition, we find that the simulation in which the nonhydrostatic grid is nested directly within the regional reanalysis exhibits the greatest overall agreement with observational data. Results therefore indicate that further development of nonhydrostatic nesting approaches is likely to yield important insights into the response of local-scale climate phenomena to increases in global greenhouse gas concentrations. However, the biases in regional precipitation, atmospheric circulation, and moisture flux identified in a subset of the nonhydrostatic simulations suggest that alternative nonhydrostatic modeling approaches such as superparameterization and variable-resolution global nonhydrostatic modeling will provide important complements to the nested approaches tested here.
Sensitivity Equation Derivation for Transient Heat Transfer Problems
NASA Technical Reports Server (NTRS)
Hou, Gene; Chien, Ta-Cheng; Sheen, Jeenson
2004-01-01
The focus of the paper is on the derivation of sensitivity equations for transient heat transfer problems modeled by different discretization processes. Two examples will be used in this study to facilitate the discussion. The first example is a coupled, transient heat transfer problem that simulates the press molding process in the fabrication of composite laminates. These state equations are discretized into standard h-version finite elements and solved by a multiple-step, predictor-corrector scheme. The sensitivity analysis results based upon the direct and adjoint variable approaches will be presented. The second example is a nonlinear transient heat transfer problem solved by a p-version time-discontinuous Galerkin method. The resulting matrix equation of the state equation is simply of the form Ax = b, representing a single-step, time-marching scheme. A direct differentiation approach will be used to compute the thermal sensitivities of a sample 2D problem.
NASA Technical Reports Server (NTRS)
Jones, D. W.
1971-01-01
The navigation and guidance process for the Jupiter, Saturn, and Uranus planetary encounter phases of the 1977 Grand Tour interior mission was simulated. Reference approach navigation accuracies were defined and the relative information content of the various observation types was evaluated. Reference encounter guidance requirements were defined, sensitivities to assumed simulation model parameters were determined, and the adequacy of the linear estimation theory was assessed. A linear sequential estimator was used to provide an estimate of the augmented state vector, consisting of the six state variables of position and velocity plus the three components of a planet position bias. The guidance process was simulated using a nonspherical model of the execution errors. Computation algorithms which simulate the navigation and guidance process were derived from theory and implemented in two research-oriented computer programs, written in FORTRAN.
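The augmented-state formulation described here is, in essence, sequential linear (Kalman-type) estimation with a bias appended to the state vector. The sketch below illustrates the idea on a one-dimensional toy problem; the dynamics, noise levels, 5-unit bias, and the alternation between a biased and an unbiased observation type are all illustrative assumptions, not values from the study.

```python
import numpy as np

# Toy augmented state: [position, velocity, constant measurement bias].
rng = np.random.default_rng(1)
dt = 1.0
F = np.array([[1.0, dt, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])          # bias modeled as constant
H_biased = np.array([[1.0, 0.0, 1.0]])   # observation corrupted by the bias
H_clean = np.array([[1.0, 0.0, 0.0]])    # independent unbiased observation
R = np.array([[4.0]])                    # measurement noise variance

x_true = np.array([0.0, 1.0, 5.0])       # true bias: 5 units (assumed)
x_hat = np.zeros(3)
P = np.diag([100.0, 100.0, 100.0])       # large initial uncertainty

for k in range(200):
    x_true = F @ x_true
    H = H_biased if k % 2 == 0 else H_clean
    z = H @ x_true + rng.normal(0.0, 2.0, size=1)
    # time update (no process noise in this toy model)
    x_hat = F @ x_hat
    P = F @ P @ F.T
    # sequential measurement update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_hat = x_hat + K @ (z - H @ x_hat)
    P = (np.eye(3) - K @ H) @ P
```

Alternating the two observation types is what makes the bias observable; a single biased measurement stream would leave position and bias entangled.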
Lin, Yen Ting; Chylek, Lily A; Lemons, Nathan W; Hlavacek, William S
2018-06-21
The chemical kinetics of many complex systems can be concisely represented by reaction rules, which can be used to generate reaction events via a kinetic Monte Carlo method that has been termed network-free simulation. Here, we demonstrate accelerated network-free simulation through a novel approach to equation-free computation. In this process, variables are introduced that approximately capture system state. Derivatives of these variables are estimated using short bursts of exact stochastic simulation and finite differencing. The variables are then projected forward in time via a numerical integration scheme, after which a new exact stochastic simulation is initialized and the whole process repeats. The projection step increases efficiency by bypassing the firing of numerous individual reaction events. As we show, the projected variables may be defined as populations of building blocks of chemical species. The maximal number of connected molecules included in these building blocks determines the degree of approximation. Equation-free acceleration of network-free simulation is found to be both accurate and efficient.
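The burst-and-project cycle can be sketched on a toy pure-death process (A -> 0 at rate k per molecule); the rate constant, burst length, and projection horizon below are illustrative assumptions, not values from the paper.

```python
import random

def gillespie_burst(n, k, t_burst, rng):
    """Short burst of exact stochastic simulation (Gillespie SSA)."""
    t = 0.0
    while n > 0:
        dt = rng.expovariate(k * n)   # waiting time to the next death event
        if t + dt > t_burst:
            break
        t += dt
        n -= 1
    return n

def projective_step(n, k, t_burst, t_project, rng):
    """One cycle: exact burst, finite-difference slope, forward projection."""
    n_after = gillespie_burst(int(round(n)), k, t_burst, rng)
    dndt = (n_after - n) / t_burst               # estimated coarse derivative
    return max(0.0, n_after + dndt * t_project)  # bypass individual events

rng = random.Random(42)
n, t = 20000.0, 0.0
k, t_burst, t_project = 0.5, 0.05, 0.15
while t < 2.0 - 1e-9:
    n = projective_step(n, k, t_burst, t_project, rng)
    t += t_burst + t_project
```

Here the coarse variable is simply the total population; in the paper's setting the projected variables are populations of building blocks of chemical species, but the alternation of exact bursts and deterministic projection is the same.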
Assessing Factors Contributing to Cyanobacteria Harmful Algal Blooms in U.S. Lakes
NASA Astrophysics Data System (ADS)
Salls, W. B.; Iiames, J. S., Jr.; Lunetta, R. S.; Mehaffey, M.; Schaeffer, B. A.
2017-12-01
Cyanobacteria Harmful Algal Blooms (CHABs) in inland lakes have emerged as a major threat to water quality from both ecological and public health standpoints. Understanding the factors and processes driving CHAB occurrence is important for managing these systems and ensuring more favorable water quality outcomes. High water temperatures and nutrient loadings are known drivers of CHABs; however, the contribution of landscape variables and their interactions with these drivers remains relatively unstudied at a regional or national scale. This study assesses upstream landscape variables that may contribute to, or obstruct and delay, nutrient loadings to freshwater systems in several hundred inland lakes in the Upper Midwestern and Northeastern United States. We employ multiple linear regression and random forest modeling to determine which variables contribute most strongly to CHAB occurrence. This lakeshed-based approach will rank the impact of each landscape variable on cyanobacteria levels derived from satellite remotely sensed data from the Medium Resolution Imaging Spectrometer (MERIS) sensor for the 2011 bloom season (July-October).
Gadadare, Rahul; Mandpe, Leenata; Pokharkar, Varsha
2015-08-01
The present work was undertaken with the objectives of improving the dissolution velocity and related oral bioavailability, and minimizing the fasted/fed state variability, of repaglinide, a poorly water-soluble anti-diabetic agent, by exploring the principles of nanotechnology. Nanocrystal formulations were prepared by both top-down and bottom-up approaches. These approaches were compared in light of their ability to provide formulation stability in terms of particle size. Soluplus® was used as a stabilizer and Kolliphor™ E-TPGS was used as an oral absorption enhancer. In vitro dissolution profiles were investigated in distilled water and in fasted- and fed-state simulated gastric fluid, and compared with pure repaglinide. In vivo pharmacokinetics was assessed in both the fasted and fed state using Wistar rats. Oral hypoglycemic activity was also assessed in streptozotocin-induced diabetic rats. Nanocrystals TD-A and TD-B showed 19.86- and 25.67-fold increases in saturation solubility, respectively, when compared with pure repaglinide. Almost 10-fold (TD-A) and 15-fold (TD-B) enhancements in the oral bioavailability of the nanocrystals were observed regardless of the fasted/fed state compared to pure repaglinide. Nanocrystal formulations also demonstrated significant (p < 0.001) hypoglycemic activity with faster onset (less than 30 min) and prolonged duration (up to 8 h) compared to pure repaglinide (onset after 60 min; duration up to 4 h).
A mathematical model captures the structure of subjective affect
Mattek, Alison M.; Wolford, George; Whalen, Paul J.
2016-01-01
While it is possible to observe when another person is having an emotional moment, we also derive information about the affective states of others from what they tell us they are feeling. In an effort to distill the complexity of affective experience, psychologists routinely focus on a simplified subset of subjective rating scales (i.e., dimensions) that capture considerable variability in reported affect: reported valence (i.e., how good or bad?) and reported arousal (e.g., how strong is the emotion you are feeling?). Still, existing theoretical approaches address the basic organization and measurement of these affective dimensions differently. Some approaches organize affect around the dimensions of bipolar valence and arousal (e.g., the circumplex model; Russell, 1980), whereas alternative approaches organize affect around the dimensions of unipolar positivity and unipolar negativity (e.g., the bivariate evaluative model; Cacioppo & Berntson, 1994). In this report, we (1) replicate the data structures observed when ratings are collected according to the two approaches described above, and re-interpret these data to suggest that the relationship between each pair of affective dimensions is conditional on valence ambiguity; we then (2) formalize this structure with a mathematical model depicting a valence ambiguity dimension that decreases in range as arousal decreases (a triangle). This model captures variability in affective ratings better than alternative approaches, increasing variance explained from ~60% to over 90% without adding parameters. PMID:28544868
NASA Astrophysics Data System (ADS)
Murawski, Aline; Bürger, Gerd; Vorogushyn, Sergiy; Merz, Bruno
2016-04-01
The use of a weather-pattern-based approach for downscaling coarse, gridded atmospheric data, as usually obtained from the output of general circulation models (GCMs), allows for investigating the impact of anthropogenic greenhouse gas emissions on fluxes and state variables of the hydrological cycle, such as runoff in large river catchments. Here we aim at attributing changes in high flows in the Rhine catchment to anthropogenic climate change. To this end, we run an objective classification scheme (simulated annealing and diversified randomisation - SANDRA, available from the cost733 classification software) on ERA20C reanalysis data and apply the established classification to GCMs from the CMIP5 project. After deriving weather pattern time series from GCM runs using forcing from all greenhouse gases (All-Hist) and using natural greenhouse gas forcing only (Nat-Hist), a weather generator will be employed to obtain climate data time series for the hydrological model. The parameters of the weather pattern classification (i.e., spatial extent, number of patterns, classification variables) need to be selected in a way that allows for good stratification of the meteorological variables that are of interest for the hydrological modelling. We evaluate the skill of the classification in stratifying meteorological data using a multi-variable approach. This allows for estimating the stratification skill for all meteorological variables together, not separately as is usually done in existing similar work. The advantage of the multi-variable approach is to properly account for situations where, e.g., two patterns are associated with similar mean daily temperature, but one pattern is dry while the other is related to considerable amounts of precipitation. Thus, the separation of these two patterns would not be justified when considering temperature only, but is perfectly reasonable when accounting for precipitation as well.
Besides that, the weather patterns derived from reanalysis data should be well represented in the All-Hist GCM runs in terms of, e.g., frequency, seasonality, and persistence. In this contribution we show how to select the most appropriate weather pattern classification and how the classes derived from it are reflected in the GCMs.
Peltola, Tomi; Marttinen, Pekka; Vehtari, Aki
2012-01-01
High-dimensional datasets with large amounts of redundant information are nowadays available for hypothesis-free exploration of scientific questions. A particular case is genome-wide association analysis, where variations in the genome are searched for effects on disease or other traits. Bayesian variable selection has been demonstrated as a possible analysis approach, which can account for the multifactorial nature of the genetic effects in a linear regression model. Yet, the computation presents a challenge and application to large-scale data is not routine. Here, we study aspects of the computation using the Metropolis-Hastings algorithm for variable selection: finite adaptation of the proposal distributions, multistep moves for changing the inclusion state of multiple variables in a single proposal, and multistep move size adaptation. We also experiment with a delayed rejection step for the multistep moves. Results on simulated and real data show an increase in sampling efficiency. We also demonstrate that with application-specific proposals, the approach can overcome a specific mixing problem in real data with 3822 individuals and 1,051,811 single nucleotide polymorphisms, and uncover a variant pair with a synergistic effect on the studied trait. Moreover, we illustrate multimodality in the real dataset related to a restrictive prior distribution on the genetic effect sizes and advocate a more flexible alternative. PMID:23166669
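The multistep inclusion moves can be illustrated with a minimal Metropolis-Hastings sampler over binary inclusion indicators. Everything below is an illustrative stand-in for the authors' genome-scale setup: the data are simulated, and a BIC-style score replaces the log marginal likelihood of the Bayesian model.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 6
X = rng.standard_normal((n, p))
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + 0.5 * rng.standard_normal(n)

def log_score(gamma):
    """BIC-style stand-in for the log marginal likelihood of a model."""
    idx = np.flatnonzero(gamma)
    if idx.size == 0:
        rss = float(y @ y)
    else:
        beta, *_ = np.linalg.lstsq(X[:, idx], y, rcond=None)
        resid = y - X[:, idx] @ beta
        rss = float(resid @ resid)
    return -0.5 * n * np.log(rss / n) - 0.5 * idx.size * np.log(n)

gamma = np.zeros(p, dtype=int)
visits = np.zeros(p)
cur = log_score(gamma)
for _ in range(2000):
    k = rng.integers(1, 3)                 # multistep move: flip 1 or 2 indicators
    flip = rng.choice(p, size=int(k), replace=False)
    prop = gamma.copy()
    prop[flip] ^= 1
    new = log_score(prop)
    if np.log(rng.random()) < new - cur:   # symmetric proposal: plain MH ratio
        gamma, cur = prop, new
    visits += gamma

freq = visits / 2000                       # posterior inclusion frequencies
```

With a strong simulated signal, the chain quickly locks onto the true variables (indices 0 and 2), while noise variables enter only transiently; the paper's adaptive proposals and delayed rejection refine exactly this basic move structure.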
Security proof of continuous-variable quantum key distribution using three coherent states
NASA Astrophysics Data System (ADS)
Brádler, Kamil; Weedbrook, Christian
2018-02-01
We introduce a ternary quantum key distribution (QKD) protocol and asymptotic security proof based on three coherent states and homodyne detection. Previous work had considered the binary case of two coherent states, and here we nontrivially extend this to three. Our motivation is to leverage the practical benefits of both discrete and continuous (Gaussian) encoding schemes, creating a best-of-both-worlds approach; namely, the postprocessing of discrete encodings and the hardware benefits of continuous ones. We present a thorough and detailed security proof in the limit of infinite signal states, which allows us to lower bound the secret key rate. We calculate this in the context of collective eavesdropping attacks and reverse reconciliation postprocessing. Finally, we compare the ternary coherent state protocol to other well-known QKD schemes (and fundamental repeaterless limits) in terms of secret key rates and loss.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miyadera, Takayuki; Imai, Hideki; Graduate School of Science and Engineering, Chuo University, 1-13-27 Kasuga, Bunkyo-ku, Tokyo 112-8551
This paper discusses the no-cloning theorem in a logico-algebraic approach. In this approach, an orthoalgebra is considered as a general structure for propositions in a physical theory. We prove that an orthoalgebra admits a cloning operation if and only if it is a Boolean algebra. That is, only classical theory admits the cloning of states. If unsharp propositions are to be included in the theory, then the notion of an effect algebra is considered. We prove that an atomic Archimedean effect algebra admitting a cloning operation is a Boolean algebra. This paper also presents a partial result, indicating a relation between cloning on effect algebras and hidden variables.
Systematic effects in the HfF+-ion experiment to search for the electron electric dipole moment
NASA Astrophysics Data System (ADS)
Petrov, A. N.
2018-05-01
The energy splittings for the J = 1, F = 3/2, |mF| = 3/2 hyperfine levels of the 3Δ1 electronic state of the 180Hf19F+ ion are calculated as functions of the external variable electric and magnetic fields within two approaches. In the first, the transition to the rotating frame is performed, whereas in the second approach the rotating electromagnetic field is quantized. Calculations are required for understanding possible systematic errors in the experiment to search for the electron electric dipole moment (eEDM) with the 180Hf19F+ ion.
NASA Technical Reports Server (NTRS)
Noah, S. T.; Kim, Y. B.
1991-01-01
A general approach is developed for determining the periodic solutions, and their stability, of nonlinear oscillators with piecewise-smooth characteristics. A modified harmonic balance/Fourier transform procedure is devised for the analysis. The procedure avoids certain numerical differentiation employed previously in determining the periodic solutions, thereby enhancing the reliability and efficiency of the method. Stability of the solutions is determined via perturbations of their state variables. The method is applied to a forced oscillator interacting with a stop of finite stiffness. Flip and fold bifurcations are found to occur, leading to the identification of parameter ranges in which chaotic response occurs.
Single pilot scanning behavior in simulated instrument flight
NASA Technical Reports Server (NTRS)
Pennington, J. E.
1979-01-01
A simulation of tasks associated with single-pilot general aviation flight under instrument flight rules was conducted as a baseline for future research studies on advanced flight controls and avionics. The tasks, ranging from simple climbs and turns to an instrument landing system approach, were flown on a fixed-base simulator. During the simulation, the control inputs, state variables, and the pilot's visual scan pattern, including point of regard, were measured and recorded.
2004-11-01
institutionalized approaches to solving problems, company/client specific mission priorities (for example, State Department vs. Army Reserve and... independent variables that let the user leave a particular step before finishing all the items, and to return at a later time without any data loss. One... Sales, Main Exchange, Miscellaneous Shops, Post Office, Restaurant, and Theater.) Authorized customers served 04 Other criteria provided by the
Robust and Accurate Modeling Approaches for Migraine Per-Patient Prediction from Ambulatory Data
Pagán, Josué; Irene De Orbe, M.; Gago, Ana; Sobrado, Mónica; Risco-Martín, José L.; Vivancos Mora, J.; Moya, José M.; Ayala, José L.
2015-01-01
Migraine is one of the most widespread neurological disorders, and its medical treatment represents a high percentage of the costs of health systems. In some patients, characteristic symptoms that precede the headache appear. However, they are nonspecific, and their prediction horizon is unknown and quite variable; hence, these symptoms are almost useless for prediction, and they cannot be used to advance the intake of drugs so that they take effect and neutralize the pain. To solve this problem, this paper sets up a realistic monitoring scenario where hemodynamic variables from real patients are monitored in ambulatory conditions with a wireless body sensor network (WBSN). The acquired data are used to evaluate the predictive capabilities of several modeling approaches and their robustness against noise and sensor failures. The obtained results encourage the development of per-patient models based on state-space models (N4SID) that are capable of providing average forecast windows of 47 min and a low rate of false positives. PMID:26134103
The 1D Richards' equation in two layered soils: a Filippov approach to treat discontinuities
NASA Astrophysics Data System (ADS)
Berardi, Marco; Difonzo, Fabio; Vurro, Michele; Lopez, Luciano
2018-05-01
The infiltration process into the soil is generally modeled by the Richards' partial differential equation (PDE). In this paper a new approach for modeling the infiltration process through the interface of two different soils is proposed, where the interface is seen as a discontinuity surface defined by suitable state variables. Thus, the original 1D Richards' PDE, enriched by a particular choice of the boundary conditions, is first approximated by means of a time semidiscretization, that is, by means of the transversal method of lines (TMOL). In such a way a sequence of discontinuous initial value problems, described by a sequence of second-order differential systems in the space variable, is derived. Then, Filippov theory for discontinuous dynamical systems may be applied in order to study the relevant dynamics of the problem. The numerical integration of the semidiscretized differential system is performed using a one-step method, which employs an event-driven procedure to locate the discontinuity surface and to adequately change the vector field.
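The event-driven one-step method can be sketched in one dimension: an explicit Euler integrator that detects when a step crosses the discontinuity surface, bisects to land on it, and then continues in the new vector field. The piecewise dynamics and the surface y = 1 below are illustrative assumptions, not the Richards'-equation system itself.

```python
import math

def f(y):
    """Piecewise-smooth right-hand side: y = 1 plays the role of the
    discontinuity surface between the two layers (toy dynamics)."""
    if y < 1.0:
        return 1.0 - 0.5 * y       # layer-1 dynamics (illustrative)
    return 0.25 * (2.0 - y)        # layer-2 dynamics (illustrative)

def step_with_event(y, t, h):
    """One explicit Euler step with event detection: if the step crosses
    the surface, bisect on the step fraction to land on it exactly."""
    y_new = y + h * f(y)
    if (y - 1.0) * (y_new - 1.0) < 0.0:    # surface crossed inside the step
        lo, hi = 0.0, h
        for _ in range(60):                # bisection on the step fraction
            mid = 0.5 * (lo + hi)
            if (y + mid * f(y) - 1.0) * (y - 1.0) > 0.0:
                lo = mid                   # crossing lies beyond mid
            else:
                hi = mid
        return 1.0, t + hi, t + hi         # land on the surface; report event
    return y_new, t + h, None

y, t, h = 0.0, 0.0, 1e-3
t_event = None
while t < 4.0:
    y, t, ev = step_with_event(y, t, h)
    if ev is not None and t_event is None:
        t_event = ev
```

For the chosen layer-1 field the exact crossing time is 2 ln 2, so the located event time gives a direct check on the event procedure; Filippov theory governs what happens when the two fields point toward the surface from both sides, a case this transversal-crossing sketch does not exercise.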
A platform for high-throughput bioenergy production phenotype characterization in single cells
Kelbauskas, Laimonas; Glenn, Honor; Anderson, Clifford; Messner, Jacob; Lee, Kristen B.; Song, Ganquan; Houkal, Jeff; Su, Fengyu; Zhang, Liqiang; Tian, Yanqing; Wang, Hong; Bussey, Kimberly; Johnson, Roger H.; Meldrum, Deirdre R.
2017-01-01
Driven by an increasing number of studies demonstrating its relevance to a broad variety of disease states, the bioenergy production phenotype has been widely characterized at the bulk sample level. Its cell-to-cell variability, a key player associated with cancer cell survival and recurrence, however, remains poorly understood due to ensemble averaging of the current approaches. We present a technology platform for performing oxygen consumption and extracellular acidification measurements of several hundred to 1,000 individual cells per assay, while offering simultaneous analysis of cellular communication effects on the energy production phenotype. The platform comprises two major components: a tandem optical sensor for combined oxygen and pH detection, and a microwell device for isolation and analysis of single and few cells in hermetically sealed sub-nanoliter chambers. Our approach revealed subpopulations of cells with aberrant energy production profiles and enables determination of cellular response variability to electron transfer chain inhibitors and ion uncouplers. PMID:28349963
NASA Astrophysics Data System (ADS)
Xu, Jiuping; Zeng, Ziqiang; Han, Bernard; Lei, Xiao
2013-07-01
This article presents a dynamic programming-based particle swarm optimization (DP-based PSO) algorithm for solving an inventory management problem for large-scale construction projects under a fuzzy random environment. By taking into account the purchasing behaviour and strategy under rules of international bidding, a multi-objective fuzzy random dynamic programming model is constructed. To deal with the uncertainties, a hybrid crisp approach is used to transform fuzzy random parameters into fuzzy variables that are subsequently defuzzified by using an expected value operator with an optimistic-pessimistic index. The iterative nature of the authors' model motivates them to develop a DP-based PSO algorithm. More specifically, their approach treats the state variables as hidden parameters, which in turn eliminates many redundant feasibility checks during initialization and particle updates at each iteration. Results and sensitivity analysis are presented to highlight the performance of the authors' optimization method, which proves very effective compared to the standard PSO algorithm.
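The PSO core of such a hybrid can be sketched as follows; the simple quadratic objective below stands in for the inventory cost model, and the swarm size, inertia, and acceleration coefficients are conventional illustrative choices rather than the authors' settings.

```python
import random

def cost(x):
    """Illustrative stand-in objective (not the authors' inventory model):
    minimized at x = (3, 3, 3, 3)."""
    return sum((xi - 3.0) ** 2 for xi in x)

def pso(dim=4, n_particles=20, iters=200, seed=7):
    rng = random.Random(seed)
    pos = [[rng.uniform(-10, 10) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # per-particle best positions
    pbest_val = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5                   # inertia and acceleration terms
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = cost(pos[i])
            if v < pbest_val[i]:                # update personal and global bests
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

best, best_val = pso()
```

The authors' contribution layers a dynamic-programming structure on top of this update loop, treating state variables as hidden parameters so that feasibility checks need not be repeated for every particle update.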
NASA Technical Reports Server (NTRS)
Mccormac, B. M. (Editor); Seliga, T. A.
1979-01-01
The book contains most of the invited papers and contributions presented at the symposium/workshop on solar-terrestrial influences on weather and climate. Four main issues dominate the activities of the symposium: whether the relationship of solar variability to weather and climate is a fundamental scientific question whose answers may have important implications for long-term weather and climate prediction; the sun-weather relationships; other potential solar influences on weather, including the 11-year sunspot cycle, the 27-day solar rotation, and special solar events such as flares and coronal holes; and the development of practical use of solar variability as a tool for weather and climatic forecasting, other than through empirical approaches. Attention is given to correlation topics; solar influences on global circulation and climate models; lower and upper atmospheric coupling, including electricity; planetary motions and other indirect factors; experimental approaches to sun-weather relationships; and the role of minor atmospheric constituents.
Nonlinear zero-sum differential game analysis by singular perturbation methods
NASA Technical Reports Server (NTRS)
Sinar, J.; Farber, N.
1982-01-01
A class of nonlinear, zero-sum differential games exhibiting time-scale separation properties can be analyzed by singular-perturbation techniques. The merits of such an analysis, leading to an approximate game solution, as well as the 'well-posedness' of the formulation, are discussed. This approach is shown to be attractive for investigating pursuit-evasion problems; the original multidimensional differential game is decomposed into a 'simple pursuit' (free-stream) game and two independent (boundary-layer) optimal-control problems. Using multiple time-scale boundary-layer models results in a pair of uniformly valid zero-order composite feedback strategies. The dependence of suboptimal strategies on relative geometry and own-state measurements is demonstrated by a three-dimensional, constant-speed example. For game analysis with realistic vehicle dynamics, the technique of forced singular perturbations and a variable modeling approach are proposed. Accuracy of the analysis is evaluated by comparison with the numerical solution of a time-optimal, variable-speed 'game of two cars' in the horizontal plane.
A general method to determine the stability of compressible flows
NASA Technical Reports Server (NTRS)
Guenther, R. A.; Chang, I. D.
1982-01-01
Several problems were studied using two completely different approaches. The first method used standard linearized perturbation theory, finding the values of the individual small-disturbance quantities from the equations of motion. These were serially eliminated from the equations of motion to derive a single equation that governs the stability of the fluid dynamic system. These equations could not be reduced unless the steady-state variables depend on only one coordinate. The stability equation, based on one dependent variable, was found and examined to determine the stability of a compressible swirling jet. The second method applied a Lagrangian approach to the problem. Since the equations developed were based on different assumptions, the stability conditions were compared only for the Rayleigh problem of a swirling flow; both approaches reduce to the Rayleigh criterion. The Lagrangian technique allows inclusion of the viscous shear terms, which is not possible with the first method. The same problem was then examined to see what effect shear has on stability.
Purtle, Jonathan; Lê-Scherban, Félice; Shattuck, Paul; Proctor, Enola K; Brownson, Ross C
2017-06-26
A large proportion of the US population has limited access to mental health treatments because insurance providers limit the utilization of mental health services in ways that are more restrictive than for physical health services. Comprehensive state mental health parity legislation (C-SMHPL) is an evidence-based policy intervention that enhances mental health insurance coverage and improves access to care. Implementation of C-SMHPL, however, is limited. State policymakers have the exclusive authority to implement C-SMHPL, but sparse guidance exists to inform the design of strategies to disseminate evidence about C-SMHPL, and more broadly, evidence-based treatments and mental illness, to this audience. The aims of this exploratory audience research study are to (1) characterize US State policymakers' knowledge and attitudes about C-SMHPL and identify individual- and state-level attributes associated with support for C-SMHPL; and (2) integrate quantitative and qualitative data to develop a conceptual framework to disseminate evidence about C-SMHPL, evidence-based treatments, and mental illness to US State policymakers. The study uses a multi-level (policymaker, state), mixed method (QUAN→qual) approach and is guided by Kingdon's Multiple Streams Framework, adapted to incorporate constructs from Aarons' Model of Evidence-Based Implementation in Public Sectors. A multi-modal survey (telephone, post-mail, e-mail) of 600 US State policymakers (500 legislative, 100 administrative) will be conducted and responses will be linked to state-level variables. The survey will span domains such as support for C-SMHPL, knowledge and attitudes about C-SMHPL and evidence-based treatments, mental illness stigma, and research dissemination preferences. State-level variables will measure factors associated with C-SMHPL implementation, such as economic climate and political environment. 
Multi-level regression will determine the relative strength of associations between individual- and state-level variables and policymaker support for C-SMHPL. Informed by survey results, semi-structured interviews will be conducted with approximately 50 US State policymakers to elaborate upon quantitative findings. Then, using a systematic process, quantitative and qualitative data will be integrated and a US State policymaker-focused C-SMHPL dissemination framework will be developed. Study results will provide the foundation for hypothesis-driven, experimental studies testing the effects of different dissemination strategies on state policymakers' support for, and implementation of, evidence-based mental health policy interventions.
Rapid Crop Cover Mapping for the Conterminous United States.
Dahal, Devendra; Wylie, Bruce; Howard, Danny
2018-06-05
Timely crop cover maps with sufficient resolution are important components of various environmental planning and research applications. Through the modification and use of a previously developed crop classification model (CCM), which was originally developed to generate historical annual crop cover maps, we hypothesized that such crop cover maps could be generated rapidly during the growing season. Through a process of incrementally removing weekly and monthly independent variables from the CCM and implementing a 'two-model mapping' approach, we found it viable to generate conterminous United States-wide rapid crop cover maps at a resolution of 250 m for the current year by the month of September. In this approach, we divided the CCM model into one 'crop type model' to handle the classification of nine specific crops and a second, binary model to classify the presence or absence of 'other' crops. Under the two-model mapping approach, the training errors were 0.8% and 1.5% for the crop type and binary model, respectively, while test errors were 5.5% and 6.4%, respectively. With spatial mapping accuracies for annual maps reaching upwards of 70%, this approach demonstrated a strong potential for generating rapid crop cover maps by the 1st of September.
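The division of labor between the two models can be sketched as below. The centroids, feature vectors, and class names are illustrative assumptions, not the authors' actual CCM implementation; a simple nearest-centroid rule stands in for whatever classifier the CCM uses.

```python
import numpy as np

# Model 1: a multi-class "crop type model" for specific crops (two shown here
# instead of the paper's nine). Model 2: a binary model for presence/absence
# of "other" crops. All centroids below are made-up illustrations.

def nearest_centroid(x, centroids):
    """Return the label of the class centroid closest to feature vector x."""
    dists = {label: np.linalg.norm(x - c) for label, c in centroids.items()}
    return min(dists, key=dists.get)

crop_centroids = {"corn": np.array([0.8, 0.2]), "soy": np.array([0.3, 0.7])}
other_centroids = {"other": np.array([0.1, 0.1]), "none": np.array([0.9, 0.9])}

def classify_pixel(x):
    # Binary model first: if an "other" crop is present, report it;
    # otherwise fall back to the specific crop-type model.
    if nearest_centroid(x, other_centroids) == "other":
        return "other"
    return nearest_centroid(x, crop_centroids)

print(classify_pixel(np.array([0.85, 0.3])))  # handled by the crop-type model
```

Whether the binary model or the crop-type model takes precedence per pixel is an assumption here; the paper only states that the two models together cover the nine crops plus the 'other' class.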
Continuous variable quantum cryptography using coherent states.
Grosshans, Frédéric; Grangier, Philippe
2002-02-04
We propose several methods for quantum key distribution (QKD) based on the generation and transmission of random distributions of coherent or squeezed states, and we show that they are secure against individual eavesdropping attacks. These protocols require that the transmission of the optical line between Alice and Bob is larger than 50%, but they do not rely on "sub-shot-noise" features such as squeezing. Their security is a direct consequence of the no-cloning theorem, which limits the signal-to-noise ratio of possible quantum measurements on the transmission line. Our approach can also be used for evaluating various QKD protocols using light with Gaussian statistics.
Gopal, Shruti; Miller, Robyn L; Baum, Stefi A; Calhoun, Vince D
2016-01-01
Identification of functionally connected regions while at rest has been at the forefront of research focusing on understanding interactions between different brain regions. Studies have utilized a variety of approaches, including seed-based as well as data-driven approaches, to identifying such networks. Most such techniques involve differentiating groups based on group mean measures. There has been little work focused on differences in spatial characteristics of resting fMRI data. We present a method to identify between-group differences in the variability in the cluster characteristics of network regions within components estimated via independent vector analysis (IVA). IVA is a blind source separation approach shown to perform well in capturing individual subject variability within a group model. We evaluate performance of the approach using simulations and then apply it to a relatively large schizophrenia data set (82 schizophrenia patients and 89 healthy controls). We postulate that group differences in the intra-network distributional characteristics of resting state network voxel intensities might indirectly capture important distinctions between the brain function of healthy and clinical populations. Results demonstrate that specific areas of the brain, the superior and middle temporal gyri, which are involved in language and recognition of emotions, show greater component-level variance in amplitude weights for schizophrenia patients than healthy controls. Statistically significant correlation between component-level spatial variance and component volume was observed in 19 of the 27 non-artifactual components, implying an evident relationship between the two parameters. Additionally, a greater spread in the distance of the cluster peak of a component from the centroid in schizophrenia patients compared to healthy controls was observed for seven components. 
These results indicate that there is hidden potential in exploring variance and possibly higher-order measures in resting state networks to better understand diseases such as schizophrenia. It furthers comprehension of how spatial characteristics can highlight previously unexplored differences between populations such as schizophrenia patients and healthy controls.
Singh, Satbir; Bajaj, Bijender Kumar
2016-10-02
Cost-effective production of proteases, which are robust enough to function under harsh process conditions, is always sought after due to their wide industrial application spectra. Solid-state production of enzymes using agro-industrial wastes as substrates is an environment-friendly approach, and it has several advantages such as high productivity, cost-effectiveness, being less labor-intensive, and less effluent production, among others. In the current study, different agro-wastes were employed for thermoalkali-stable protease production from Bacillus subtilis K-1 under solid-state fermentation. Agricultural residues such as cotton seed cake supported maximum protease production (728 U ml⁻¹), which was followed by gram husk (714 U ml⁻¹), mustard cake (680 U ml⁻¹), and soybean meal (653 U ml⁻¹). Plackett-Burman design of experiment showed that peptone, moisture content, temperature, phosphates, and inoculum size were the significant variables that influenced the protease production. Furthermore, statistical optimization of three variables, namely peptone, moisture content, and incubation temperature, by response surface methodology resulted in 40% enhanced protease production as compared to that under unoptimized conditions (from initial 728 to 1020 U ml⁻¹). Thus, solid-state fermentation coupled with design of experiment tools represents a cost-effective strategy for production of industrial enzymes.
VeriML: A Dependently-Typed, User-Extensible and Language-Centric Approach to Proof Assistants
2013-01-01
the locally nameless approach [McKinna and Pollack, 1993]. The former two techniques replace all variables by numbers, whereas the locally nameless ... needs to be reasoned about together with shifting. This complicates both the statements and proofs of related lemmas. The locally nameless approach ... the locally nameless approach, we separate free variables from bound variables and use deBruijn indices for bound variables (denoted as bi in Table 3.1)
Utilizing population variation, vaccination, and systems biology to study human immunology.
Tsang, John S
2015-08-01
The move toward precision medicine has highlighted the importance of understanding biological variability within and across individuals in the human population. In particular, given the prevalent involvement of the immune system in diverse pathologies, an important question is how much and what information about the state of the immune system is required to enable accurate prediction of future health and response to medical interventions. Towards addressing this question, recent studies using vaccination as a model perturbation and systems-biology approaches are beginning to provide a glimpse of how natural population variation together with multiplexed, high-throughput measurement and computational analysis can be used to uncover predictors of immune response quality in humans. Here I discuss recent developments in this emerging field, with emphasis on baseline correlates of vaccination responses, sources of immune-state variability, as well as relevant features of study design, data generation, and computational analysis. Copyright © 2015 The Author. Published by Elsevier Ltd. All rights reserved.
A variable capacitance based modeling and power capability predicting method for ultracapacitor
NASA Astrophysics Data System (ADS)
Liu, Chang; Wang, Yujie; Chen, Zonghai; Ling, Qiang
2018-01-01
Accurate modeling and power capability prediction methods for ultracapacitors are of great significance in the management and application of lithium-ion battery/ultracapacitor hybrid energy storage systems. To overcome the simulation error arising from a constant-capacitance model, an improved ultracapacitor model based on variable capacitance is proposed, in which the main capacitance varies with voltage according to a piecewise linear function. A novel state-of-charge calculation approach is developed accordingly. After that, a multi-constraint power capability prediction is developed for the ultracapacitor, in which a Kalman-filter-based state observer is designed to track the ultracapacitor's real-time behavior. Finally, experimental results verify the proposed methods. The accuracy of the proposed model is verified by terminal-voltage simulation results at different temperatures, and the effectiveness of the designed observer is proved under various test conditions. Additionally, power capability prediction results for different time scales and temperatures are compared to study their effects on the ultracapacitor's power capability.
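The core of a variable-capacitance model can be sketched as follows: with C(v) piecewise linear in voltage, the stored charge is the integral of C(v) over voltage, and state of charge follows as a charge ratio. The breakpoints and capacitance values below are made-up illustrations, not fitted ultracapacitor parameters, and the paper's actual SOC definition may differ.

```python
import numpy as np

# Assumed piecewise-linear capacitance curve (illustrative values).
v_break = np.array([0.0, 1.0, 2.0, 2.7])        # voltage breakpoints [V]
c_break = np.array([80.0, 95.0, 110.0, 120.0])  # capacitance there [F]

def capacitance(v):
    """Piecewise-linear capacitance C(v) via linear interpolation."""
    return np.interp(v, v_break, c_break)

def stored_charge(v, n=1000):
    """Charge Q(v) = integral of C(u) du from 0 to v (trapezoidal rule)."""
    u = np.linspace(0.0, v, n)
    c = capacitance(u)
    return float(np.sum((c[:-1] + c[1:]) / 2 * np.diff(u)))

def soc(v, v_max=2.7):
    """State of charge as the fraction of charge held at maximum voltage."""
    return stored_charge(v) / stored_charge(v_max)

print(round(soc(1.35), 3))
```

With a constant-capacitance model, SOC would be simply v / v_max; the voltage-dependent capacitance shifts the curve, which is the simulation error the paper sets out to remove.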
NASA Astrophysics Data System (ADS)
Deco, Gustavo; Martí, Daniel
2007-03-01
The analysis of transitions in stochastic neurodynamical systems is essential to understand the computational principles that underlie those perceptual and cognitive processes involving multistable phenomena, like decision making and bistable perception. To investigate the role of noise in a multistable neurodynamical system described by coupled differential equations, one usually considers numerical simulations, which are time consuming because of the need for sufficiently many trials to capture the statistics of the influence of the fluctuations on that system. An alternative analytical approach involves the derivation of deterministic differential equations for the moments of the distribution of the activity of the neuronal populations. However, the application of the method of moments is restricted by the assumption that the distribution of the state variables of the system takes on a unimodal Gaussian shape. We extend in this paper the classical moments method to the case of bimodal distribution of the state variables, such that a reduced system of deterministic coupled differential equations can be derived for the desired regime of multistability.
Adventures in holistic ecosystem modelling: the cumberland basin ecosystem model
NASA Astrophysics Data System (ADS)
Gordon, D. C.; Keizer, P. D.; Daborn, G. R.; Schwinghamer, P.; Silvert, W. L.
A holistic ecosystem model has been developed for the Cumberland Basin, a turbid macrotidal estuary at the head of Canada's Bay of Fundy. The model was constructed as a group exercise involving several dozen scientists. Philosophy of approach and methods were patterned after the BOEDE Ems-Dollard modelling project. The model is one-dimensional, has 3 compartments and 3 boundaries, and is composed of 3 separate submodels (physical, pelagic and benthic). The 28 biological state variables cover the complete estuarine ecosystem and represent broad functional groups of organisms based on trophic relationships. Although still under development and not yet validated, the model has been verified and has reached the stage where most state variables provide reasonable output. The modelling process has stimulated interdisciplinary discussion, identified important data gaps and produced a quantitative tool which can be used to examine ecological hypotheses and determine critical environmental processes. As a result, Canadian scientists have a much better understanding of the Cumberland Basin ecosystem and are better able to provide competent advice on environmental management.
A variant of the anomaly initialisation approach for global climate forecast models
NASA Astrophysics Data System (ADS)
Volpi, Danila; Guemas, Virginie; Doblas-Reyes, Francisco; Hawkins, Ed; Nichols, Nancy; Carrassi, Alberto
2014-05-01
This work presents a refined method of anomaly initialisation (AI) applied to the ocean and sea ice components of the global climate forecast model EC-Earth, with the following particularities:
- a weight applied to the anomalies, in order to avoid introducing excessively large anomalies recorded in the observed state, whose amplitude does not fit the range of the internal variability generated by the model;
- AI of the temperature and density ocean state variables instead of temperature and salinity.
Results show that these refinements improve the skill over the Arctic region, parts of the North and South Atlantic, parts of the North and South Pacific, and the Mediterranean Sea. In the Tropical Pacific the full-field initialised experiment performs better, probably due to a displacement of the observed anomalies caused by the AI technique. Furthermore, preliminary results of an anomaly nudging experiment are discussed.
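The weighted-anomaly idea can be written in one line: the observed anomaly is scaled before being added to the model climatology, so anomalies exceeding the model's internal variability are damped. The particular weight formula below (a ratio of standard deviations, capped at one) is an assumed illustration, not EC-Earth's actual scheme.

```python
# Minimal sketch of weighted anomaly initialisation (AI).
# obs, obs_clim: observed state and observed climatology
# model_clim: model climatology; sigma_model, sigma_obs: variability scales.

def anomaly_init(obs, obs_clim, model_clim, sigma_model, sigma_obs):
    anomaly = obs - obs_clim
    # Damp the anomaly when observed variability exceeds the model's
    # internal variability; leave it untouched otherwise (assumed rule).
    w = min(1.0, sigma_model / sigma_obs)
    return model_clim + w * anomaly

# An observed anomaly of +1.0 with sigma_model half of sigma_obs is halved:
print(anomaly_init(2.0, 1.0, 0.5, sigma_model=0.5, sigma_obs=1.0))
```

Full-field initialisation corresponds to taking the observed state directly; the AI variant trades that fidelity for consistency with the model's own attractor.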
NASA Astrophysics Data System (ADS)
Kim, Jonghoon; Cho, B. H.
2014-08-01
This paper introduces an innovative approach to analyzing the electrochemical characteristics and diagnosing the state-of-health (SOH) of a Li-ion cell based on the discrete wavelet transform (DWT). In this approach, the DWT is applied as a powerful tool for analyzing the discharging/charging voltage signal (DCVS), with its non-stationary and transient phenomena, of a Li-ion cell. Specifically, DWT-based multi-resolution analysis (MRA) is used to extract information on the electrochemical characteristics in the time and frequency domains simultaneously. Through the MRA, with implementation of the wavelet decomposition, the information on the electrochemical characteristics of a Li-ion cell can be extracted from the DCVS over a wide frequency range. Wavelet decomposition is implemented with the order-3 Daubechies wavelet (dB3) selected as the best wavelet function and scale 5 as the optimal decomposition scale. In particular, the present approach develops these investigations one step further by showing the low- and high-frequency components (approximation component An and detail component Dn, respectively) extracted from various Li-ion cells with different electrochemical characteristics caused by aging. Experimental results show the effectiveness of the DWT-based approach for reliable diagnosis of the SOH of a Li-ion cell.
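One level of such a decomposition can be illustrated with the simplest wavelet. The paper uses the order-3 Daubechies wavelet at decomposition scale 5; the sketch below uses a one-level Haar transform instead, purely to stay dependency-free, and the voltage samples are invented.

```python
import math

def haar_dwt(signal):
    """One-level Haar DWT: split a signal into approximation (low-frequency)
    and detail (high-frequency) coefficients."""
    approx = [(a + b) / math.sqrt(2) for a, b in zip(signal[::2], signal[1::2])]
    detail = [(a - b) / math.sqrt(2) for a, b in zip(signal[::2], signal[1::2])]
    return approx, detail

# A slowly rising "voltage" with one sharp transient: the trend lands in the
# approximation coefficients, the transient in the detail coefficients.
voltage = [3.0, 3.1, 3.2, 3.3, 3.4, 4.4, 3.6, 3.7]
a1, d1 = haar_dwt(voltage)
print(d1)  # the transient appears as one large-magnitude detail coefficient
```

Repeating the split on the approximation coefficients yields the deeper scales (A2, D2, ... up to A5, D5 in the paper's setup).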
Identification of the protein folding transition state from molecular dynamics trajectories
NASA Astrophysics Data System (ADS)
Muff, S.; Caflisch, A.
2009-03-01
The rate of protein folding is governed by the transition state so that a detailed characterization of its structure is essential for understanding the folding process. In vitro experiments have provided a coarse-grained description of the folding transition state ensemble (TSE) of small proteins. Atomistic details could be obtained by molecular dynamics (MD) simulations but it is not straightforward to extract the TSE directly from the MD trajectories, even for small peptides. Here, the structures in the TSE are isolated by the cut-based free-energy profile (cFEP) using the network whose nodes and links are configurations sampled by MD and direct transitions among them, respectively. The cFEP is a barrier-preserving projection that does not require arbitrarily chosen progress variables. First, a simple two-dimensional free-energy surface is used to illustrate the successful determination of the TSE by the cFEP approach and to explain the difficulty in defining boundary conditions of the Markov state model for an entropically stabilized free-energy minimum. The cFEP is then used to extract the TSE of a β-sheet peptide with a complex free-energy surface containing multiple basins and an entropic region. In contrast, Markov state models with boundary conditions defined by projected variables and conventional histogram-based free-energy profiles are not able to identify the TSE of the β-sheet peptide.
A finite element based method for solution of optimal control problems
NASA Technical Reports Server (NTRS)
Bless, Robert R.; Hodges, Dewey H.; Calise, Anthony J.
1989-01-01
A temporal finite element based on a mixed form of the Hamiltonian weak principle is presented for optimal control problems. The mixed form of this principle contains both states and costates as primary variables that are expanded in terms of elemental values and simple shape functions. Unlike other variational approaches to optimal control problems, however, time derivatives of the states and costates do not appear in the governing variational equation. Instead, the only quantities whose time derivatives appear therein are virtual states and virtual costates. Also noteworthy among the characteristics of the finite element formulation is that the costates appear linearly in the algebraic equations that contain them. Thus, the remaining equations can be solved iteratively without initial guesses for the costates; this reduces the size of the problem by about a factor of two. Numerical results are presented herein for an elementary trajectory optimization problem which show very good agreement with the exact solution along with excellent computational efficiency and self-starting capability. The goal is to evaluate the feasibility of this approach for real-time guidance applications. To this end, a simplified two-stage, four-state model for an advanced launch vehicle application is presented which is suitable for finite element solution.
Suitability assessment and mapping of Oyo State, Nigeria, for rice cultivation using GIS
NASA Astrophysics Data System (ADS)
Ayoade, Modupe Alake
2017-08-01
Rice is one of the most preferred food crops in Nigeria. However, local rice production has declined with the oil boom of the 1970s, causing demand to outstrip supply. Rice production can be increased through the integration of Geographic Information Systems (GIS) and crop-land suitability analysis and mapping. Based on the key predictor variables that determine rice yield mentioned in relevant literature, data on rainfall, temperature, relative humidity, slope, and soil of Oyo state were obtained. To develop rice suitability maps for the state, two MCE-GIS techniques, namely the overlay approach and weighted linear combination (WLC) using fuzzy AHP, were used and compared. A Boolean land use map derived from a Landsat image was used in masking out areas currently unavailable for rice production. Both suitability maps were classified into four categories: very suitable, suitable, moderate, and fairly moderate. Although the maps differ slightly, the overlay and WLC (AHP) approaches found most parts of Oyo state (51.79% and 82.9%, respectively) to be moderately suitable for rice production. However, in areas like Eruwa, Oyo, and Shaki, rainfall amount received needs to be supplemented by irrigation for increased rice yield.
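The WLC step reduces to a weighted sum per map cell: each criterion is standardized (e.g. to [0, 1]) and multiplied by an AHP-derived weight. The criteria names, weights, and cell scores below are illustrative assumptions, not the values derived in the study.

```python
# Sketch of weighted linear combination (WLC) suitability scoring.
# AHP-style weights (assumed; must sum to one).
criteria_weights = {"rainfall": 0.35, "temperature": 0.25,
                    "slope": 0.20, "soil": 0.20}

def wlc_score(cell_scores, weights=criteria_weights):
    """Suitability = sum over criteria of weight * standardized score."""
    return sum(weights[k] * cell_scores[k] for k in weights)

# One map cell with assumed standardized criterion scores in [0, 1]:
cell = {"rainfall": 0.8, "temperature": 0.9, "slope": 0.6, "soil": 0.7}
score = wlc_score(cell)
print(round(score, 3))
```

The overlay (Boolean) approach, by contrast, would simply intersect per-criterion masks, which is why the two maps can classify the same cell differently.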
An adaptive observer for on-line tool wear estimation in turning, Part I: Theory
NASA Astrophysics Data System (ADS)
Danai, Kourosh; Ulsoy, A. Galip
1987-04-01
On-line sensing of tool wear has been a long-standing goal of the manufacturing engineering community. In the absence of any reliable on-line tool wear sensors, a new model-based approach for tool wear estimation has been proposed. This approach is an adaptive observer, based on force measurement, which uses both parameter and state estimation techniques. The design of the adaptive observer is based upon a dynamic state model of tool wear in turning. This paper (Part I) presents the model and explains its use as the basis for the adaptive observer design. This model uses flank wear and crater wear as state variables, feed as the input, and the cutting force as the output. The suitability of the model as the basis for adaptive observation is also verified. The implementation of the adaptive observer requires the design of a state observer and a parameter estimator. To obtain the model parameters for tuning the adaptive observer, procedures for linearisation of the non-linear model are specified. The implementation of the adaptive observer in turning and experimental results are presented in a companion paper (Part II).
Making great leaps forward: Accounting for detectability in herpetological field studies
Mazerolle, Marc J.; Bailey, Larissa L.; Kendall, William L.; Royle, J. Andrew; Converse, Sarah J.; Nichols, James D.
2007-01-01
Detecting individuals of amphibian and reptile species can be a daunting task. Detection can be hindered by various factors such as cryptic behavior, color patterns, or observer experience. These factors complicate the estimation of state variables of interest (e.g., abundance, occupancy, species richness) as well as the vital rates that induce changes in these state variables (e.g., survival probabilities for abundance; extinction probabilities for occupancy). Although ad hoc methods (e.g., counts uncorrected for detection, return rates) typically perform poorly in the face of imperfect detection, they continue to be used extensively in various fields, including herpetology. However, formal approaches that estimate and account for the probability of detection, such as capture-mark-recapture (CMR) methods and distance sampling, are available. In this paper, we present classical approaches and recent advances in methods accounting for detectability that are particularly pertinent for herpetological data sets. Through examples, we illustrate the use of several methods, discuss their performance compared to that of ad hoc methods, and suggest available software to perform these analyses. The methods we discuss control for imperfect detection and reduce bias in estimates of demographic parameters such as population size, survival, or, at other levels of biological organization, species occurrence. Among these methods, recently developed approaches that no longer require marked or resighted individuals should be particularly of interest to field herpetologists. We hope that our effort will encourage practitioners to implement some of the estimation methods presented herein instead of relying on ad hoc methods that make more limiting assumptions.
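Why naive counts mislead can be shown with a single identity: if each survey detects an occupied site with probability p, then over K surveys the site is detected at least once with probability p* = 1 - (1 - p)^K. A sketch of the resulting correction follows; it assumes p is known, whereas real occupancy models (the formal approaches above) estimate p and occupancy jointly from the detection histories.

```python
# Naive occupancy = fraction of sites with at least one detection.
# Under imperfect detection this underestimates true occupancy by the
# factor p* = 1 - (1 - p)^K, so dividing by p* corrects the estimate
# (simplified sketch: detection probability p is assumed known).

def corrected_occupancy(naive_occupancy, p, k_surveys):
    p_star = 1.0 - (1.0 - p) ** k_surveys
    return naive_occupancy / p_star

# Example: 40% of sites had a detection, p = 0.3 per visit, 3 visits.
print(round(corrected_occupancy(0.40, 0.3, 3), 3))
```

With p = 0.3 and three visits, p* is only about 0.66, so roughly a third of occupied sites go undetected; this is the bias that return rates and uncorrected counts inherit.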
Wu, Hulin; Xue, Hongqi; Kumar, Arun
2012-06-01
Differential equations are extensively used for modeling dynamics of physical processes in many scientific fields such as engineering, physics, and biomedical sciences. Parameter estimation of differential equation models is a challenging problem because of high computational cost and high-dimensional parameter space. In this article, we propose a novel class of methods for estimating parameters in ordinary differential equation (ODE) models, which is motivated by HIV dynamics modeling. The new methods exploit the form of numerical discretization algorithms for an ODE solver to formulate estimating equations. First, a penalized-spline approach is employed to estimate the state variables, and the estimated state variables are then plugged into a discretization formula of an ODE solver to obtain the ODE parameter estimates via a regression approach. We consider three discretization methods of different order: Euler's method, the trapezoidal rule, and the Runge-Kutta method. A higher-order numerical algorithm reduces numerical error in the approximation of the derivative, which produces a more accurate estimate, but its computational cost is higher. To balance the computational cost and estimation accuracy, we demonstrate, via simulation studies, that the trapezoidal discretization-based estimate is the best and is recommended for practical use. The asymptotic properties of the proposed numerical discretization-based estimators are established. Comparisons between the proposed methods and existing methods show a clear benefit of the proposed methods with regard to the trade-off between computational cost and estimation accuracy. We apply the proposed methods to an HIV study to further illustrate the usefulness of the proposed approaches. © 2012, The International Biometric Society.
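The trapezoidal variant can be sketched on a toy model. For dx/dt = -θx, the trapezoidal rule gives x[i+1] - x[i] = -θ(h/2)(x[i] + x[i+1]), which is linear in θ, so plugging in smoothed state estimates turns parameter estimation into ordinary least squares. Here exact samples of the trajectory stand in for the penalized-spline smoothing step, and the model and θ value are illustrative.

```python
import math

# Toy ODE: dx/dt = -theta * x, with "smoothed" states taken as exact samples.
theta_true, h = 0.5, 0.1
t = [i * h for i in range(50)]
x = [math.exp(-theta_true * ti) for ti in t]  # state estimates

# Trapezoidal discretization: x[i+1] - x[i] = -theta * (h/2) * (x[i] + x[i+1])
# => regress y on z with slope theta, where:
y = [x[i + 1] - x[i] for i in range(len(x) - 1)]
z = [-(h / 2) * (x[i] + x[i + 1]) for i in range(len(x) - 1)]
theta_hat = sum(yi * zi for yi, zi in zip(y, z)) / sum(zi * zi for zi in z)
print(round(theta_hat, 4))
```

An Euler discretization would use -θ h x[i] for z and incur O(h) error per step; the trapezoidal rule's O(h²) accuracy at essentially the same cost is the trade-off the paper quantifies.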
Translational applications of evaluating physiologic variability in human endotoxemia
Scheff, Jeremy D.; Mavroudis, Panteleimon D.; Calvano, Steve E.; Androulakis, Ioannis P.
2012-01-01
Dysregulation of the inflammatory response is a critical component of many clinically challenging disorders such as sepsis. Inflammation is a biological process designed to lead to healing and recovery, ultimately restoring homeostasis; however, the failure to fully achieve those beneficial results can leave a patient in a dangerous persistent inflammatory state. One of the primary challenges in developing novel therapies in this area is that inflammation is comprised of a complex network of interacting pathways. Here, we discuss our approaches towards addressing this problem through computational systems biology, with a particular focus on how the presence of biological rhythms and the disruption of these rhythms in inflammation may be applied in a translational context. By leveraging the information content embedded in physiologic variability, ranging in scale from oscillations in autonomic activity driving short-term heart rate variability (HRV) to circadian rhythms in immunomodulatory hormones, there is significant potential to gain insight into the underlying physiology. PMID:23203205
Fractional Order Two-Temperature Dual-Phase-Lag Thermoelasticity with Variable Thermal Conductivity
Mallik, Sadek Hossain; Kanoria, M.
2014-01-01
A new theory of two-temperature generalized thermoelasticity is constructed in the context of a new consideration of dual-phase-lag heat conduction with fractional orders. The theory is then adopted to study thermoelastic interaction in an isotropic homogeneous semi-infinite generalized thermoelastic solid with variable thermal conductivity whose boundary is subjected to thermal and mechanical loading. The basic equations of the problem have been written in the form of a vector-matrix differential equation in the Laplace transform domain, which is then solved by using a state space approach. The inversion of Laplace transforms is computed numerically using the method of Fourier series expansion technique. The numerical estimates of the quantities of physical interest are obtained and depicted graphically. Some comparisons of the thermophysical quantities are shown in figures to study the effects of the variable thermal conductivity, temperature discrepancy, and the fractional order parameter. PMID:27419210
Zhang, Miaomiao; Wells, William M; Golland, Polina
2016-10-01
Using image-based descriptors to investigate clinical hypotheses and therapeutic implications is challenging due to the notorious "curse of dimensionality" coupled with a small sample size. In this paper, we present a low-dimensional analysis of anatomical shape variability in the space of diffeomorphisms and demonstrate its benefits for clinical studies. To combat the high dimensionality of the deformation descriptors, we develop a probabilistic model of principal geodesic analysis in a bandlimited low-dimensional space that still captures the underlying variability of image data. We demonstrate the performance of our model on a set of 3D brain MRI scans from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. Our model yields a more compact representation of group variation at substantially lower computational cost than models based on the high-dimensional state-of-the-art approaches such as tangent space PCA (TPCA) and probabilistic principal geodesic analysis (PPGA).
Spatial vs. individual variability with inheritance in a stochastic Lotka-Volterra system
NASA Astrophysics Data System (ADS)
Dobramysl, Ulrich; Tauber, Uwe C.
2012-02-01
We investigate a stochastic spatial Lotka-Volterra predator-prey model with randomized interaction rates that are either affixed to the lattice sites (quenched) or specific to individuals in either population, or both. In the latter situation, we include rate inheritance with mutations from the particles' progenitors. Thus we arrive at a simple model for competitive evolution with environmental variability and selection pressure. We employ Monte Carlo simulations in zero and two dimensions to study the time evolution of both species' densities and their interaction rate distributions. The predator and prey concentrations in the ensuing steady states depend crucially on the environmental variability, whereas the temporal evolution of the individualized rate distributions leads to largely neutral optimization. Contrary to, e.g., linear gene expression models, this system does not experience fixation at extreme values. An approximate description of the resulting data is achieved by means of an effective master equation approach for the interaction rate distribution.
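The zero-dimensional (well-mixed) variant of such a Monte Carlo simulation can be sketched in a few lines. The rates below are illustrative constants rather than the paper's quenched or heritable individualized rates, and the update rule is a simple per-individual Bernoulli scheme, not the authors' exact algorithm.

```python
import random

# Well-mixed stochastic Lotka-Volterra sketch: prey reproduce, predators die,
# and predation events convert prey into predators.
random.seed(1)
prey, pred = 500, 100
birth, death, predation = 0.1, 0.1, 0.0005  # assumed illustrative rates

def step(prey, pred):
    births = sum(random.random() < birth for _ in range(prey))
    deaths = sum(random.random() < death for _ in range(pred))
    # Each prey is eaten with probability proportional to predator density.
    eaten = sum(random.random() < predation * pred for _ in range(prey))
    return prey + births - eaten, pred - deaths + eaten

for _ in range(100):
    prey, pred = step(prey, pred)
print(prey, pred)
```

Because populations are integer-valued and finite, such runs exhibit the characteristic noisy predator-prey oscillations, and either species can fluctuate to extinction; individualizing the predation rate per particle, with inheritance and mutation, is the paper's extension of this baseline.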
Continuous-variable quantum probes for structured environments
NASA Astrophysics Data System (ADS)
Bina, Matteo; Grasselli, Federico; Paris, Matteo G. A.
2018-01-01
We address parameter estimation for structured environments and suggest an effective estimation scheme based on continuous-variable quantum probes. In particular, we investigate the use of a single bosonic mode as a probe for Ohmic reservoirs, and obtain the ultimate quantum limits to the precise estimation of their cutoff frequency. We assume the probe prepared in a Gaussian state and determine the optimal working regime, i.e., the conditions for the maximization of the quantum Fisher information in terms of the initial preparation, the reservoir temperature, and the interaction time. Upon investigating the Fisher information of feasible measurements, we arrive at a remarkably simple result: homodyne detection of canonical variables allows one to achieve the ultimate quantum limit to precision under suitable, mild conditions. Finally, upon exploiting a perturbative approach, we find the invariant sweet spots of the (tunable) characteristic frequency of the probe, able to drive the probe towards the optimal working regime.
Applications of MIDAS regression in analysing trends in water quality
NASA Astrophysics Data System (ADS)
Penev, Spiridon; Leonte, Daniela; Lazarov, Zdravetz; Mann, Rob A.
2014-04-01
We discuss novel statistical methods for analysing trends in water quality. Such analysis uses complex data sets of different classes of variables, including water quality, hydrological and meteorological. We analyse the effect of rainfall and flow on trends in water quality utilising a flexible model called Mixed Data Sampling (MIDAS). This model arises because of the mixed frequency in the data collection. Typically, water quality variables are sampled fortnightly, whereas rainfall data are sampled daily. The advantage of using MIDAS regression lies in the flexible and parsimonious modelling of the influence of rain and flow on trends in water quality variables. We discuss the model and its implementation on a data set from the Shoalhaven Supply System and Catchments in the state of New South Wales, Australia. Information criteria indicate that MIDAS modelling improves upon simplistic approaches that do not utilise the mixed-frequency nature of the data.
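The core MIDAS device is a parsimonious lag polynomial that weights the high-frequency regressor before a standard regression; a common choice is the exponential Almon polynomial. The sketch below uses that device on synthetic data; the weight parameters, the 14-day window, and the rainfall series are assumptions for illustration.

```python
import math
import random

random.seed(2)

def almon_weights(theta1, theta2, n):
    """Exponential Almon lag polynomial: n daily weights from 2 parameters."""
    raw = [math.exp(theta1 * j + theta2 * j * j) for j in range(n)]
    total = sum(raw)
    return [w / total for w in raw]

weights = almon_weights(0.1, -0.05, 14)   # illustrative shape parameters
true_beta = 2.0

# Each fortnightly water-quality reading responds to 14 daily rainfall
# values through the weighted (MIDAS) aggregate.
X, y = [], []
for _ in range(300):
    daily_rain = [random.expovariate(1.0) for _ in range(14)]
    x = sum(w * r for w, r in zip(weights, daily_rain))
    X.append(x)
    y.append(true_beta * x + random.gauss(0.0, 0.1))

# Single-regressor least squares on the aggregated series.
mx = sum(X) / len(X)
my = sum(y) / len(y)
beta_hat = (sum((a - mx) * (b - my) for a, b in zip(X, y))
            / sum((a - mx) ** 2 for a in X))
```

In a full MIDAS fit the two Almon parameters are estimated jointly with beta; fixing them here keeps the sketch to ordinary least squares.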
Fuzzy logic based robotic controller
NASA Technical Reports Server (NTRS)
Attia, F.; Upadhyaya, M.
1994-01-01
Existing Proportional-Integral-Derivative (PID) robotic controllers rely on an inverse kinematic model to convert user-specified cartesian trajectory coordinates to joint variables. These joints experience friction, stiction, and gear backlash effects. Due to lack of proper linearization of these effects, modern control theory based on state space methods cannot provide adequate control for robotic systems. In the presence of loads, the dynamic behavior of robotic systems is complex and nonlinear, especially when mathematical models must be evaluated in real time. Fuzzy Logic Control is a fast-emerging alternative to conventional control systems in situations where it may not be feasible to formulate an analytical model of the complex system. Fuzzy logic techniques track a user-defined trajectory without requiring the host computer to explicitly solve the nonlinear inverse kinematic equations. The goal is to provide a rule-based approach, which is closer to human reasoning. The approach used expresses end-point error, location of manipulator joints, and proximity to obstacles as fuzzy variables. The resulting decisions are based upon linguistic and non-numerical information. This paper presents an alternative to the conventional robot controller that is independent of computationally intensive kinematic equations. Computer simulation results of this approach as obtained from software implementation are also discussed.
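A minimal sketch of the rule-based idea for a single fuzzy variable (end-point error): fuzzify with triangular membership functions, fire three linguistic rules, and defuzzify by a weighted average. The membership shapes, the three-rule base, and the singleton consequents are hypothetical simplifications, not the controller described in the paper.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_correction(error):
    """Map end-point error to a joint correction via three linguistic rules."""
    # Fuzzification: degree of membership in each error class.
    neg  = tri(error, -2.0, -1.0, 0.0)
    zero = tri(error, -1.0,  0.0, 1.0)
    pos  = tri(error,  0.0,  1.0, 2.0)
    # Rule base with crisp singleton consequents (for brevity):
    # IF error is NEG THEN move -1; IF ZERO THEN 0; IF POS THEN +1.
    num = neg * (-1.0) + zero * 0.0 + pos * 1.0
    den = neg + zero + pos
    return num / den if den else 0.0   # weighted-average defuzzification
```

A real controller would combine several fuzzy inputs (joint positions, obstacle proximity) in a two-dimensional rule table, but the fuzzify/fire/defuzzify cycle is the same.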
A new polytopic approach for the unknown input functional observer design
NASA Astrophysics Data System (ADS)
Bezzaoucha, Souad; Voos, Holger; Darouach, Mohamed
2018-03-01
In this paper, a constructive procedure to design Functional Unknown Input Observers for nonlinear continuous-time systems is proposed under the Polytopic Takagi-Sugeno framework. An equivalent representation for the nonlinear model is achieved using the sector nonlinearity transformation. Applying the Lyapunov theory and the ℒ2 attenuation, linear matrix inequality conditions are deduced which are solved for feasibility to obtain the observer design matrices. To cope with the effect of unknown inputs, the classical approach of decoupling the unknown input, known from the linear case, is applied. Both algebraic and solver-based solutions are proposed (relaxed conditions). Necessary and sufficient conditions for the existence of the functional polytopic observer are given. For both approaches, the general and particular cases (measurable premise variables, full state estimation with full and reduced order cases) are considered and it is shown that the proposed conditions reduce to those presented for the standard linear case. To illustrate the proposed theoretical results, detailed numerical simulations are presented for a Quadrotor Aerial Robots Landing and a Waste Water Treatment Plant. Both systems are highly nonlinear and represented in a T-S polytopic form with unmeasurable premise variables and unknown inputs.
Information Transfer in the Brain: Insights from a Unified Approach
NASA Astrophysics Data System (ADS)
Marinazzo, Daniele; Wu, Guorong; Pellicoro, Mario; Stramaglia, Sebastiano
Measuring directed interactions in the brain in terms of information flow is a promising approach, mathematically treatable and amenable to encompass several methods. In this chapter we propose some approaches rooted in this framework for the analysis of neuroimaging data. First we will explore how the transfer of information depends on the network structure, showing how for hierarchical networks the information flow pattern is characterized by an exponential distribution of the incoming information and a fat-tailed distribution of the outgoing information, as a signature of the law of diminishing marginal returns. This was reported to be true also for effective connectivity networks from human EEG data. Then we address the problem of partial conditioning to a limited subset of variables, chosen as the most informative ones for the driver node. We then propose a formal expansion of the transfer entropy to identify irreducible sets of variables which provide information for the future state of each assigned target. Multiplets characterized by a large contribution to the expansion are associated with informational circuits present in the system, whose informational character (synergistic or redundant) is given by the sign of the contribution. Applications are reported for EEG and fMRI data.
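The quantity underlying these methods, transfer entropy with history length 1, can be sketched with a plug-in estimator on binary series: TE(X→Y) compares how well Y's next value is predicted with versus without X's past. The coupled toy series and the 0.9 coupling strength below are assumptions for illustration.

```python
import math
import random
from collections import Counter

random.seed(3)

# Binary toy series: Y copies X's previous value 90% of the time, so
# information should flow X -> Y but not Y -> X.
n = 20000
X = [random.randint(0, 1) for _ in range(n)]
Y = [0] * n
for t in range(1, n):
    Y[t] = X[t - 1] if random.random() < 0.9 else random.randint(0, 1)

def transfer_entropy(src, dst):
    """Plug-in TE(src -> dst), history length 1, in bits."""
    triples, pairs = Counter(), Counter()
    dst_pairs, dst_hist = Counter(), Counter()
    for t in range(1, len(dst)):
        triples[(dst[t], dst[t - 1], src[t - 1])] += 1
        pairs[(dst[t - 1], src[t - 1])] += 1
        dst_pairs[(dst[t], dst[t - 1])] += 1
        dst_hist[dst[t - 1]] += 1
    N = len(dst) - 1
    te = 0.0
    for (yt, y1, x1), c in triples.items():
        p_joint = c / N                                  # p(y_t, y_{t-1}, x_{t-1})
        p_cond_full = c / pairs[(y1, x1)]                # p(y_t | y_{t-1}, x_{t-1})
        p_cond_self = dst_pairs[(yt, y1)] / dst_hist[y1]  # p(y_t | y_{t-1})
        te += p_joint * math.log2(p_cond_full / p_cond_self)
    return te

te_xy = transfer_entropy(X, Y)   # should be large (about 0.7 bits here)
te_yx = transfer_entropy(Y, X)   # should be near zero
```

The partial-conditioning and multiplet-expansion methods in the chapter build on this same conditional-probability ratio, with richer conditioning sets.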
NASA Astrophysics Data System (ADS)
Žukovič, Milan; Hristopulos, Dionissios T.
2009-02-01
A current problem of practical significance is how to analyze large, spatially distributed, environmental data sets. The problem is more challenging for variables that follow non-Gaussian distributions. We show by means of numerical simulations that the spatial correlations between variables can be captured by interactions between 'spins'. The spins represent multilevel discretizations of environmental variables with respect to a number of pre-defined thresholds. The spatial dependence between the 'spins' is imposed by means of short-range interactions. We present two approaches, inspired by the Ising and Potts models, that generate conditional simulations of spatially distributed variables from samples with missing data. Currently, the sampling and simulation points are assumed to be at the nodes of a regular grid. The conditional simulations of the 'spin system' are forced to respect locally the sample values and the system statistics globally. The second constraint is enforced by minimizing a cost function representing the deviation between normalized correlation energies of the simulated and the sample distributions. In the approach based on the Nc-state Potts model, each point is assigned to one of Nc classes. The interactions involve all the points simultaneously. In the Ising model approach, a sequential simulation scheme is used: the discretization at each simulation level is binomial (i.e., ± 1). Information propagates from lower to higher levels as the simulation proceeds. We compare the two approaches in terms of their ability to reproduce the target statistics (e.g., the histogram and the variogram of the sample distribution), to predict data at unsampled locations, as well as in terms of their computational complexity. The comparison is based on a non-Gaussian data set (derived from a digital elevation model of the Walker Lake area, Nevada, USA). 
We discuss the impact of relevant simulation parameters, such as the domain size, the number of discretization levels, and the initial conditions.
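A minimal sketch of the Ising-based conditioning idea on a regular grid: Metropolis updates are applied everywhere except at the sampled sites, whose values are respected exactly throughout the simulation. The lattice size, pinned-site layout, and inverse temperature are illustrative; the global cost-function matching of sample statistics described above is omitted.

```python
import math
import random

random.seed(4)

L = 16
# Random initial configuration; a sparse subset of sites plays the role of
# the sample (conditioning) data and is never updated.
grid = [[random.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
pinned = {(i, j) for i in range(0, L, 4) for j in range(0, L, 4)}
sample = {(i, j): grid[i][j] for (i, j) in pinned}

def local_energy(g, i, j):
    """Ferromagnetic nearest-neighbour energy of site (i, j), periodic BCs."""
    s = g[i][j]
    return -s * sum(g[(i + di) % L][(j + dj) % L]
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)))

beta = 1.0  # inverse temperature (illustrative)
for _ in range(200):                 # Metropolis sweeps
    for i in range(L):
        for j in range(L):
            if (i, j) in pinned:
                continue             # conditional simulation: respect the sample
            dE = -2.0 * local_energy(grid, i, j)   # energy change if flipped
            if dE <= 0 or random.random() < math.exp(-beta * dE):
                grid[i][j] *= -1
```

The multilevel (sequential binary) and Nc-state Potts versions replace the ±1 spins with thresholded classes, but the pinned-site Metropolis loop is the common core.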
A photonic link for donor spin qubits in silicon
NASA Astrophysics Data System (ADS)
Simmons, Stephanie
Atomically identical donor spin qubits in silicon offer excellent native quantum properties, which match or outperform many qubit rivals. To scale up such systems it would be advantageous to connect silicon donor spin qubits in a cavity-QED architecture. Many proposals in this direction introduce strong electric dipole interactions to the otherwise largely isolated spin qubit ground state in order to couple to superconducting cavities. Here I present an alternative approach, which uses the built-in strong electric dipole (optical) transitions of singly-ionized double donors in silicon. These donors, such as the chalcogen donors S+, Se+ and Te+, have the same ground-state spin Hamiltonians as shallow donors yet offer mid-gap binding energies and mid-IR optical access to excited orbital states. This photonic link is spin-selective, which could be harnessed to measure and couple donor qubits using photonic cavity-QED. This approach should be robust to device environments with variable strains and electric fields, and will allow for CMOS-compatible, bulk-like, spatially separated donor qubit placement, optical parity measurements, and 4.2 K operation. I will present preliminary data in support of this approach, including 4.2 K optical initialization/readout in Earth's magnetic field, where long T1 and T2 times have been measured.
ERIC Educational Resources Information Center
Moustaki, Irini; Joreskog, Karl G.; Mavridis, Dimitris
2004-01-01
We consider a general type of model for analyzing ordinal variables with covariate effects and 2 approaches for analyzing data for such models, the item response theory (IRT) approach and the PRELIS-LISREL (PLA) approach. We compare these 2 approaches on the basis of 2 examples, 1 involving only covariate effects directly on the ordinal variables…
NASA Astrophysics Data System (ADS)
Hassanabadi, Amir Hossein; Shafiee, Masoud; Puig, Vicenc
2018-01-01
In this paper, sensor fault diagnosis of a singular delayed linear parameter varying (LPV) system is considered. In the considered system, the model matrices depend on some parameters which are real-time measurable. The case of inexact parameter measurements is considered, which is closer to real-world conditions. Fault diagnosis in this system is achieved via fault estimation. For this purpose, an augmented system is created by including sensor faults as additional system states. Then, an unknown input observer (UIO) is designed which estimates both the system states and the faults in the presence of measurement noise, disturbances and uncertainty induced by inexactly measured parameters. The error dynamics and the original system constitute an uncertain system due to inconsistencies between real and measured values of the parameters. The robust estimation of the system states and the faults is then achieved with H∞ performance and formulated as a set of linear matrix inequalities (LMIs). The designed UIO is also applicable to fault diagnosis of singular delayed LPV systems with unmeasurable scheduling variables. The efficiency of the proposed approach is illustrated with an example.
A Probabilistic Approach for Real-Time Volcano Surveillance
NASA Astrophysics Data System (ADS)
Cannavo, F.; Cannata, A.; Cassisi, C.; Di Grazia, G.; Maronno, P.; Montalto, P.; Prestifilippo, M.; Privitera, E.; Gambino, S.; Coltelli, M.
2016-12-01
Continuous evaluation of the state of potentially dangerous volcanoes plays a key role for civil protection purposes. Presently, real-time surveillance of most volcanoes worldwide is essentially delegated to one or more human experts in volcanology, who interpret data coming from different kinds of monitoring networks. Unfortunately, the coupling of highly non-linear and complex volcanic dynamic processes leads to measurable effects that can show a large variety of different behaviors. Moreover, due to intrinsic uncertainties and possible failures in some recorded data, the volcano state needs to be expressed in probabilistic terms, sometimes making fast volcano state assessment impracticable for the personnel on duty in the control rooms. With the aim of aiding the personnel on duty in volcano surveillance, we present a probabilistic graphical model to estimate automatically the ongoing volcano state from all the different kinds of available measurements. The model consists of a Bayesian network able to represent a set of variables and their conditional dependencies via a directed acyclic graph. The model variables are both the measurements and the possible states of the volcano through time. The model output is an estimate of the probability distribution of the feasible volcano states. We tested the model on the Mt. Etna (Italy) case study by considering a long record of multivariate data from 2011 to 2015 and cross-validated it. Results indicate that the proposed model is effective and of great value for decision-making purposes.
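The Bayesian-network update can be sketched in miniature with one state node and two observables assumed conditionally independent given the state; all states, variables, and probabilities below are hypothetical, not the Etna model.

```python
# Prior over the (hidden) volcano state.
prior = {"quiet": 0.70, "unrest": 0.25, "eruptive": 0.05}

# Conditional probability tables P(observation | state); illustrative numbers.
p_tremor = {"quiet":    {"low": 0.8, "high": 0.2},
            "unrest":   {"low": 0.3, "high": 0.7},
            "eruptive": {"low": 0.1, "high": 0.9}}
p_deform = {"quiet":    {"no": 0.9, "yes": 0.1},
            "unrest":   {"no": 0.4, "yes": 0.6},
            "eruptive": {"no": 0.2, "yes": 0.8}}

def posterior(tremor, deform):
    """Bayes update, observations conditionally independent given the state."""
    unnorm = {s: prior[s] * p_tremor[s][tremor] * p_deform[s][deform]
              for s in prior}
    z = sum(unnorm.values())
    return {s: v / z for s, v in unnorm.items()}

# High tremor plus observed deformation should shift mass away from "quiet".
post = posterior("high", "yes")
```

A surveillance network would add many more observables, temporal links between successive states, and learned (rather than hand-set) tables, but each time step reduces to this kind of normalized product.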
NASA Astrophysics Data System (ADS)
Cullen, N. J.; Anderson, B.; Sirguey, P. J.; Stumm, D.; Mackintosh, A.; Conway, J. P.; Horgan, H. J.; Dadic, R.; Fitzsimons, S.; Lorrey, A.
2016-12-01
Recognizing the scarcity of glacier mass balance data in the Southern Hemisphere, a mass balance measurement program was started at Brewster Glacier in 2004. Evolution of the measurement regime over the 11 years of data recorded means there are differences in the spatial density of data obtained. To ensure the temporal integrity of the dataset a new, geostatistical approach has been developed to calculate mass balance. Spatial co-variance between elevation and snow depth has enabled a digital elevation model to be used in a co-kriging approach to develop a snow depth index (SDI). By capturing the observed spatial variability in snow depth, the SDI is a more reliable predictor than elevation and is used to adjust each year of measurements consistently despite variability in sampling spatial density. The SDI also resolves the spatial structure of summer balance better than elevation. Co-kriging is used again to spatially interpolate a derived mean summer balance index using SDI as a co-variate, which yields a spatial predictor for summer balance. A similar approach is also used to create a predictor for annual balance, which allows us to revisit years where summer balance was not obtained. The average glacier-wide surface winter, summer and annual mass balances over the period 2005-2015 are 2484, -2586, and -102 mm w.e., respectively, with changes in summer balance explaining most of the variability in annual balance. On the whole, there is good agreement between our ELA and AAR values and those derived from the end-of-summer snowline (EOSS) program, though discrepancies in some years cannot be fully accounted for. A mass balance map of Brewster Glacier in an equilibrium state, which by definition has a glacier-wide mass balance equal to zero (a balanced-budget), is used to calculate values of ELA (1923 ±25 m) and AAR (0.40) representative of the observational period. 
The relationships between mass balance and ELA/AAR are explored, demonstrating they are mostly linear. On average, the mass balance gradients are found to be equal to 14.5 and 7.4 mm w.e. m-1 in the ablation and accumulation zones, respectively. However, there is considerable variability in the gradients from year to year, as well as variability between different elevation bands. The largest variability in the mass balance gradient is observed in the ablation zone.
Shallow cumuli ensemble statistics for development of a stochastic parameterization
NASA Astrophysics Data System (ADS)
Sakradzija, Mirjana; Seifert, Axel; Heus, Thijs
2014-05-01
According to a conventional deterministic approach to the parameterization of moist convection in numerical atmospheric models, a given large-scale forcing produces a unique response from the unresolved convective processes. This representation leaves out the small-scale variability of convection: as is known from empirical studies of deep and shallow convective cloud ensembles, there is a whole distribution of sub-grid states corresponding to a given large-scale forcing. Moreover, this distribution gets broader with increasing model resolution. This behavior is also consistent with our theoretical understanding of a coarse-grained nonlinear system. We propose an approach to represent the variability of the unresolved shallow-convective states, including the dependence of the spread and shape of the sub-grid state distribution on the model horizontal resolution. Starting from the Gibbs canonical ensemble theory, Craig and Cohen (2006) developed a theory for the fluctuations in a deep convective ensemble. The micro-states of a deep convective cloud ensemble are characterized by the cloud-base mass flux, which, according to the theory, is exponentially distributed (Boltzmann distribution). Following their work, we study the shallow cumulus ensemble statistics and the distribution of the cloud-base mass flux. We employ a Large-Eddy Simulation (LES) model and a cloud tracking algorithm, followed by a conditional sampling of clouds at the cloud-base level, to retrieve information about the individual cloud life cycles and the cloud ensemble as a whole. In the case of a shallow cumulus cloud ensemble, the distribution of micro-states is a generalized exponential distribution. Based on the empirical and theoretical findings, a stochastic model has been developed to simulate the shallow convective cloud ensemble and to test the convective ensemble theory.
The stochastic model simulates a compound random process, with the number of convective elements drawn from a Poisson distribution and cloud properties sub-sampled from a generalized ensemble distribution. We study the role of the different cloud subtypes in a shallow convective ensemble and how the diverse cloud properties and cloud lifetimes affect the system macro-state. To what extent does the cloud-base mass flux distribution deviate from the simple Boltzmann distribution, and how does this affect the results from the stochastic model? Is the memory provided by the finite lifetime of individual clouds of importance for the ensemble statistics? We also test for the minimal information that, given as an input to the stochastic model, is able to reproduce the ensemble mean statistics and the variability in a convective ensemble. An important property of the resulting distribution of the sub-grid convective states is its scale-adaptivity - the smaller the grid size, the broader the compound distribution of the sub-grid states.
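The compound random process can be sketched directly: draw the number of convective elements from a Poisson distribution and sum per-cloud mass fluxes drawn from a (here simple, not generalized) exponential distribution. The rate LAM and mean flux below are illustrative values, not those fitted from the LES.

```python
import math
import random

random.seed(5)

LAM = 80.0        # mean number of convective elements per grid box (illustrative)
MEAN_FLUX = 0.01  # mean cloud-base mass flux per cloud (illustrative units)

def poisson(lam):
    """Poisson draw via Knuth's method; adequate for moderate lam."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

def grid_box_flux():
    """One realization of the compound process: sum of N exponential fluxes."""
    n = poisson(LAM)
    return sum(random.expovariate(1.0 / MEAN_FLUX) for _ in range(n))

draws = [grid_box_flux() for _ in range(2000)]
mean_flux = sum(draws) / len(draws)   # should approach LAM * MEAN_FLUX
```

Scale-adaptivity falls out naturally: shrinking the grid box lowers LAM, which widens the compound distribution relative to its mean.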
Melillo, Paolo; Jovic, Alan; De Luca, Nicola; Pecchia, Leandro
2015-08-01
Accidental falls are a major problem of later life. Different technologies to predict falls have been investigated, but with limited success, mainly because of low specificity due to a high false positive rate. This Letter presents an automatic classifier based on heart rate variability (HRV) analysis with the goal to identify fallers automatically. HRV was used in this study as it is considered a good estimator of autonomic nervous system (ANS) states, which are responsible, among other things, for human balance control. Nominal 24 h electrocardiogram recordings from 168 cardiac patients (age 72 ± 8 years, 60 female), of which 47 were fallers, were investigated. Linear and nonlinear HRV properties were analysed in 30 min excerpts. Different data mining approaches were adopted and their performances were compared with a subject-based receiver operating characteristic analysis. The best performance was achieved by a hybrid algorithm, RUSBoost, integrated with feature selection method based on principal component analysis, which achieved satisfactory specificity and accuracy (80 and 72%, respectively), but low sensitivity (51%). These results suggested that ANS states causing falls could be reliably detected, but also that not all the falls were due to ANS states.
NASA Astrophysics Data System (ADS)
Perez, Santiago; Karakus, Murat; Pellet, Frederic
2017-05-01
The great success and widespread use of impregnated diamond (ID) bits are due to their self-sharpening mechanism, which consists of a constant renewal of diamonds acting at the cutting face as the bit wears out. It is therefore important to keep this mechanism acting throughout the lifespan of the bit. Nonetheless, such a mechanism can be altered by the blunting of the bit, which ultimately leads to a less than optimal drilling performance. For this reason, this paper aims at investigating the applicability of artificial intelligence-based techniques to monitor the tool condition of ID bits, i.e. sharp or blunt, under laboratory conditions. Accordingly, topologically invariant tests are carried out with sharp and blunt bit conditions while recording acoustic emissions (AE) and measuring-while-drilling variables. The combined output of the acoustic emission root-mean-square value (AErms), depth of cut (d), torque (tob) and weight-on-bit (wob) is then utilized to create two approaches to predict the wear state of the bits. One approach is based on the combination of the aforementioned variables and another on the specific energy of drilling. The two approaches are assessed for classification performance with various pattern recognition algorithms, such as simple trees, support vector machines, k-nearest neighbour, boosted trees and artificial neural networks. In general, acceptable pattern recognition rates were obtained; the subset composed of AErms and tob excels due to its high classification rates and fewer input variables.
On the primary variable switching technique for simulating unsaturated-saturated flows
NASA Astrophysics Data System (ADS)
Diersch, H.-J. G.; Perrochet, P.
Primary variable switching appears as a promising numerical technique for variably saturated flows. While the standard pressure-based form of the Richards equation can suffer from poor mass balance accuracy, the mixed form with its improved conservative properties can possess convergence difficulties for dry initial conditions. On the other hand, variable switching can overcome most of the stated numerical problems. The paper deals with variable switching for finite elements in two and three dimensions. The technique is incorporated in both an adaptive error-controlled predictor-corrector one-step Newton (PCOSN) iteration strategy and a target-based full Newton (TBFN) iteration scheme. Both schemes provide different behaviors with respect to accuracy and solution effort. Additionally, a simplified upstream weighting technique is used. Compared with conventional approaches the primary variable switching technique represents a fast and robust strategy for unsaturated problems with dry initial conditions. The impact of the primary variable switching technique is studied over a wide range of mostly 2D and partly difficult-to-solve problems (infiltration, drainage, perched water table, capillary barrier), where comparable results are available. It is shown that the TBFN iteration is an effective but error-prone procedure. TBFN sacrifices temporal accuracy in favor of accelerated convergence if aggressive time step sizes are chosen.
NASA Astrophysics Data System (ADS)
Pierini, Stefano; Gentile, Vittorio; de Ruggiero, Paola; Pietranera, Luca
2017-04-01
The Kuroshio Extension (KE) low-frequency variability (LFV) is analyzed with the satellite altimeter data distributed by AVISO from January 1993 to November 2015 through a new ad hoc composite index [1] that links the mean latitudinal position L of the KE jet and an integrated wavelet amplitude A measuring the high-frequency variability (HFV) of the KE path. This approach allows one to follow the KE evolution as an orbit in the (L,A) plane, as typically done in dynamical systems theory. Three intervals, I1 (1993-1998), I2 (1998-2006) and I3 (2006-November 2015) are separately analyzed also with sea surface height (SSH) maps. In I1 and I3, L and A are mostly anti-correlated and a recharging phase (characterized by a weak convoluted jet experiencing a rapid increase of the HFV) begins when negative SSH anomalies, remotely generated by the Pacific Decadal Oscillation, reach the KE region. On the other hand, in I2 the KE evolution is described by a hysteresis loop: this starts with a weak jet state followed by a recharging phase leading, in turn, to a persistent two-meander state, to its progressive and rapid erosion and, eventually, to the reestablishment of a weak jet state. This loop is found to correspond quite closely to the highly nonlinear intrinsic relaxation oscillation obtained in numerical process studies [1,2]. This supports the hypothesis that the KE LFV may have been controlled, during I2, by an intrinsic oceanic mode of variability. [1] Pierini S., 2015. J. Climate, 28, 5873-5881. [2] Pierini S., 2006. J. Phys. Oceanogr., 36, 1605-1625.
Seasonal Predictability in a Model Atmosphere.
NASA Astrophysics Data System (ADS)
Lin, Hai
2001-07-01
The predictability of atmospheric mean-seasonal conditions in the absence of externally varying forcing is examined. A perfect-model approach is adopted, in which a global T21 three-level quasigeostrophic atmospheric model is integrated over 21 000 days to obtain a reference atmospheric orbit. The model is driven by a time-independent forcing, so that the only source of time variability is the internal dynamics. The forcing is set to perpetual winter conditions in the Northern Hemisphere (NH) and perpetual summer in the Southern Hemisphere. A significant temporal variability in the NH 90-day mean states is observed. The component of that variability associated with the higher-frequency motions, or climate noise, is estimated using a method developed by Madden. In the polar region, and to a lesser extent in the midlatitudes, the temporal variance of the winter means is significantly greater than the climate noise, suggesting some potential predictability in those regions. Forecast experiments are performed to see whether the presence of variance in the 90-day mean states that is in excess of the climate noise leads to some skill in the prediction of these states. Ensemble forecast experiments with nine members starting from slightly different initial conditions are performed for 200 different 90-day means along the reference atmospheric orbit. The serial correlation between the ensemble means and the reference orbit shows that there is skill in the 90-day mean predictions. The skill is concentrated in those regions of the NH that have the largest variance in excess of the climate noise. An EOF analysis shows that nearly all the predictive skill in the seasonal means is associated with one mode of variability with a strong axisymmetric component.
Chudnovsky, Alexandra A; Lee, Hyung Joo; Kostinski, Alex; Kotlov, Tanya; Koutrakis, Petros
2012-09-01
Although ground-level PM2.5 (particulate matter with aerodynamic diameter < 2.5 μm) monitoring sites provide accurate measurements, their spatial coverage within a given region is limited and thus often insufficient for exposure and epidemiological studies. Satellite data expand spatial coverage, enhancing our ability to estimate location- and/or subject-specific exposures to PM2.5. In this study, the authors apply a mixed-effects model approach to aerosol optical depth (AOD) retrievals from the Geostationary Operational Environmental Satellite (GOES) to predict PM2.5 concentrations within the New England area of the United States. With this approach, it is possible to control for the inherent day-to-day variability in the AOD-PM2.5 relationship, which depends on time-varying parameters such as particle optical properties, vertical and diurnal concentration profiles, and ground surface reflectance. The model-predicted PM2.5 mass concentrations are highly correlated with the actual observations, R2 = 0.92. Therefore, adjustment for the daily variability in the AOD-PM2.5 relationship allows obtaining spatially resolved PM2.5 concentration data that can be of great value to future exposure assessment and epidemiological studies. The authors demonstrated how AOD can be used reliably to predict daily PM2.5 mass concentrations, providing determination of their spatial and temporal variability. Promising results are found by adjusting for daily variability in the AOD-PM2.5 relationship, without the need to account for a wide variety of individual additional parameters. This approach has great potential for investigating the associations between subject-specific exposures to PM2.5 and their health effects. The higher-resolution (4 x 4 km) GOES AOD retrievals, compared with the conventional MODerate resolution Imaging Spectroradiometer (MODIS) 10-km product, have the potential to capture PM2.5 variability within the urban domain.
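The benefit of letting the AOD-PM2.5 relationship vary by day can be illustrated crudely by comparing a single pooled regression against day-specific fits, a rough stand-in for the daily random effects of a mixed model; all data below are synthetic and the parameter values are assumptions.

```python
import random

random.seed(6)

# Synthetic data: the AOD-PM2.5 intercept and slope drift from day to day.
days = []
for _ in range(30):
    a = random.gauss(10.0, 3.0)          # day-specific intercept
    b = random.gauss(40.0, 5.0)          # day-specific slope
    obs = [(aod, a + b * aod + random.gauss(0.0, 1.0))
           for aod in (random.uniform(0.05, 0.6) for _ in range(25))]
    days.append(obs)

def ols(pairs):
    """Simple least-squares line; returns (intercept, slope)."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    b = (sum((x - mx) * (y - my) for x, y in pairs)
         / sum((x - mx) ** 2 for x, _ in pairs))
    return my - b * mx, b

def sse(pairs, a, b):
    return sum((y - (a + b * x)) ** 2 for x, y in pairs)

# Pooled fit (one relationship for all days) vs. day-specific fits.
pooled = [p for obs in days for p in obs]
a0, b0 = ols(pooled)
sse_pooled = sum(sse(obs, a0, b0) for obs in days)
sse_daily = sum(sse(obs, *ols(obs)) for obs in days)
```

A true mixed-effects model would shrink the day-specific coefficients toward the population values rather than fit each day independently, but the comparison shows why the daily adjustment raises R2 so sharply.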
Cardiac surgery report cards: comprehensive review and statistical critique.
Shahian, D M; Normand, S L; Torchiana, D F; Lewis, S M; Pastore, J O; Kuntz, R E; Dreyer, P I
2001-12-01
Public report cards and confidential, collaborative peer education represent distinctly different approaches to cardiac surgery quality assessment and improvement. This review discusses the controversies regarding their methodology and relative effectiveness. Report cards have been the more commonly used approach, typically as a result of state legislation. They are based on the presumption that publication of outcomes effectively motivates providers, and that market forces will reward higher quality. Numerous studies have challenged the validity of these hypotheses. Furthermore, although states with report cards have reported significant decreases in risk-adjusted mortality, it is unclear whether this improvement resulted from public disclosure or, rather, from the development of internal quality programs by hospitals. An additional confounding factor is the nationwide decline in heart surgery mortality, including states without quality monitoring. Finally, report cards may engender negative behaviors such as high-risk case avoidance and "gaming" of the reporting system, especially if individual surgeon results are published. The alternative approach, continuous quality improvement, may provide an opportunity to enhance performance and reduce interprovider variability while avoiding the unintended negative consequences of report cards. This collaborative method, which uses exchange visits between programs and determination of best practice, has been highly effective in northern New England and in the Veterans Affairs Administration. However, despite their potential advantages, quality programs based solely on confidential continuous quality improvement do not address the issue of public accountability. For this reason, some states may continue to mandate report cards. 
In such instances, it is imperative that appropriate statistical techniques and report formats are used, and that professional organizations simultaneously implement continuous quality improvement programs. The statistical methodology underlying current report cards is flawed, and does not justify the degree of accuracy presented to the public. All existing risk-adjustment methods have substantial inherent imprecision, and this is compounded when the results of such patient-level models are aggregated and used inappropriately to assess provider performance. Specific problems include sample size differences, clustering of observations, multiple comparisons, and failure to account for the random component of interprovider variability. We advocate the use of hierarchical or multilevel statistical models to address these concerns, as well as report formats that emphasize the statistical uncertainty of the results.
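The hierarchical-model recommendation above can be illustrated with a minimal empirical-Bayes sketch: each provider's observed mortality rate is shrunk toward the overall rate, with low-volume providers shrunk hardest, which is exactly how multilevel models temper the imprecision of small samples. The prior_strength parameter is an illustrative stand-in for the between-provider variance a full model would estimate:

```python
# Sketch: shrinkage of provider-level mortality rates toward the pooled
# rate, weighted by case volume (empirical-Bayes flavor of a multilevel model).

def shrunken_rates(deaths, cases, prior_strength=100.0):
    """deaths[i], cases[i] per provider; returns shrunken mortality rates.
    prior_strength is a hypothetical pseudo-count controlling the pull
    toward the pooled rate."""
    pooled = sum(deaths) / sum(cases)  # overall mortality rate
    return [
        (d + prior_strength * pooled) / (n + prior_strength)
        for d, n in zip(deaths, cases)
    ]
```

A small program with 3 deaths in 50 cases (raw rate 6%) is pulled strongly toward the pooled rate, while a 1000-case program barely moves, which is the behavior report-card formats should convey rather than ranking raw rates.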
NASA Astrophysics Data System (ADS)
Nanus, Leora; Clow, David; Saros, Jasmine; McMurray, Jill; Blett, Tamara; Sickman, James
2017-04-01
High-elevation aquatic ecosystems in Wilderness areas of the western United States are impacted by current and historic atmospheric nitrogen (N) deposition associated with local and regional air pollution. Documented effects include elevated surface water nitrate concentrations, increased algal productivity, and changes in diatom species assemblages. A predictive framework was developed for sensitive high-elevation basins across the western United States at multiple spatial scales, including the Rocky Mountain Region (Rockies), the Greater Yellowstone Area (GYA), and Yosemite (YOSE) and Sequoia & Kings Canyon (SEKI) National Parks. Spatial trends in critical loads of N deposition for nutrient enrichment of aquatic ecosystems were quantified and mapped using a geostatistical approach, with modeled N deposition, topography, vegetation, geology, and climate as potential explanatory variables. Multiple predictive models were created using various combinations of explanatory variables; this approach allowed for better quantification of uncertainty and identification of areas most sensitive to high atmospheric N deposition (> 3 kg N ha-1 yr-1). For multiple spatial scales, the lowest critical load estimates (< 1.5 ± 1 kg N ha-1 yr-1) occurred in high-elevation basins with steep slopes, sparse vegetation, and exposed bedrock and talus. Based on a nitrate threshold of 1 μmol L-1, estimated critical load exceedances (> 1.5 ± 1 kg N ha-1 yr-1) correspond with areas of high N deposition and vary spatially, ranging from less than 20% to over 40% of the study area for the Rockies, GYA, YOSE, and SEKI. These predictive models and maps identify sensitive aquatic ecosystems that may be impacted by excess atmospheric N deposition and can be used to help protect against future anthropogenic disturbance. 
The approach presented here may be transferable to other remote and protected high-elevation ecosystems at multiple spatial scales that are sensitive to adverse effects of pollutant loading in the US and around the world.
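The multiple-model strategy described above, fitting several models on different combinations of explanatory variables and using their spread as an uncertainty measure, can be sketched in a few lines. The univariate linear fits and the two hypothetical predictors (elevation, slope) stand in for the study's geostatistical models:

```python
# Sketch: an ensemble of simple models, each trained on a different
# explanatory variable; the ensemble mean is the prediction and the
# spread (max - min) is a crude uncertainty estimate.

def fit_univariate(xs, ys):
    """Least-squares line through (xs, ys); returns a prediction function."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    b1 = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b0 = (sy - b1 * sx) / n
    return lambda x: b0 + b1 * x

def ensemble_predict(models, inputs):
    """Each model gets its own explanatory variable for the target basin."""
    preds = [m(x) for m, x in zip(models, inputs)]
    mean = sum(preds) / len(preds)
    spread = max(preds) - min(preds)
    return mean, spread
```

Where the models disagree strongly (large spread), the critical-load estimate is flagged as uncertain, which mirrors how the study uses model combinations to quantify uncertainty.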
Damaris: Addressing performance variability in data management for post-petascale simulations
Dorier, Matthieu; Antoniu, Gabriel; Cappello, Franck; ...
2016-10-01
With exascale computing on the horizon, reducing performance variability in data management tasks (storage, visualization, analysis, etc.) is becoming a key challenge in sustaining high performance. Here, this variability significantly impacts the overall application performance at scale and its predictability over time. In this article, we present Damaris, a system that leverages dedicated cores in multicore nodes to offload data management tasks, including I/O, data compression, scheduling of data movements, in situ analysis, and visualization. We evaluate Damaris with the CM1 atmospheric simulation and the Nek5000 computational fluid dynamics simulation on four platforms, including NICS’s Kraken and NCSA’s Blue Waters. Our results show that (1) Damaris fully hides the I/O variability as well as all I/O-related costs, thus making simulation performance predictable; (2) it increases the sustained write throughput by a factor of up to 15 compared with standard I/O approaches; (3) it allows almost perfect scalability of the simulation up to over 9,000 cores, as opposed to state-of-the-art approaches that fail to scale; and (4) it enables a seamless connection to the VisIt visualization software to perform in situ analysis and visualization in a way that impacts neither the performance of the simulation nor its variability. In addition, we extended our implementation of Damaris to also support the use of dedicated nodes and conducted a thorough comparison of the two approaches—dedicated cores and dedicated nodes—for I/O tasks with the aforementioned applications.
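The dedicated-core pattern Damaris uses can be sketched with a queue and a worker thread: the simulation loop hands data off and immediately keeps computing, while the dedicated worker (standing in for a dedicated core) drains the queue and performs the "I/O". This is a toy illustration of the pattern, not Damaris's actual API:

```python
# Sketch: asynchronous I/O offload via a dedicated worker thread.
import queue
import threading

def run_simulation(steps, sink):
    q = queue.Queue()

    def writer():
        # Dedicated worker: drains the queue and "writes" each item.
        while True:
            item = q.get()
            if item is None:          # sentinel: simulation finished
                break
            sink.append(item)         # stand-in for writing to storage

    t = threading.Thread(target=writer)
    t.start()
    for step in range(steps):
        data = step * step            # stand-in for computed field data
        q.put((step, data))           # hand-off; compute loop continues
    q.put(None)
    t.join()

out = []
run_simulation(5, out)
```

Because the compute loop never blocks on the sink, slow or bursty "writes" no longer inject variability into the step time, which is the essence of hiding I/O jitter behind dedicated resources.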
Congdon, Peter
2009-01-30
Estimates of disease prevalence for small areas are increasingly required for the allocation of health funds according to local need. Both individual level and geographic risk factors are likely to be relevant to explaining prevalence variations, and in turn relevant to the procedure for small area prevalence estimation. Prevalence estimates are of particular importance for major chronic illnesses such as cardiovascular disease. A multilevel prevalence model for cardiovascular outcomes is proposed that incorporates both survey information on patient risk factors and the effects of geographic location. The model is applied to derive micro area prevalence estimates, specifically estimates of cardiovascular disease for Zip Code Tabulation Areas in the USA. The model incorporates prevalence differentials by age, sex, ethnicity and educational attainment from the 2005 Behavioral Risk Factor Surveillance System survey. Influences of geographic context are modelled at both county and state level, with the county effects relating to poverty and urbanity. State level influences are modelled using a random effects approach that allows both for spatial correlation and spatial isolates. To assess the importance of geographic variables, three types of model are compared: a model with person level variables only; a model with geographic effects that do not interact with person attributes; and a full model, allowing for state level random effects that differ by ethnicity. There is clear evidence that geographic effects improve statistical fit. Geographic variations in disease prevalence partly reflect the demographic composition of area populations. However, prevalence variations may also show distinct geographic 'contextual' effects. The present study demonstrates by formal modelling methods that improved explanation is obtained by allowing for distinct geographic effects (for counties and states) and for interaction between geographic and person variables. 
Thus an appropriate methodology to estimate prevalence at small area level should include geographic effects as well as person level demographic variables.
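The person-plus-geography logic of the prevalence model can be sketched as synthetic estimation: survey-based prevalence rates by demographic group are weighted by each small area's population composition, and an area-level "contextual" offset is added on top. Groups, rates, and the offset here are hypothetical:

```python
# Sketch: small-area prevalence from group-level rates, area composition,
# and an additive area-level contextual effect.

def area_prevalence(group_rates, area_counts, area_offset=0.0):
    """group_rates: {group: prevalence from survey data};
    area_counts: {group: population in the area};
    area_offset: hypothetical contextual effect (e.g. county poverty).
    Returns expected prevalence for the area."""
    total = sum(area_counts.values())
    demographic = sum(group_rates[g] * n for g, n in area_counts.items()) / total
    return demographic + area_offset
```

Two areas with identical demographics but different offsets get different estimates, which is precisely the "distinct geographic contextual effects" the study argues a compositional model alone would miss.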
Congdon, Peter
2009-01-01
Background Estimates of disease prevalence for small areas are increasingly required for the allocation of health funds according to local need. Both individual level and geographic risk factors are likely to be relevant to explaining prevalence variations, and in turn relevant to the procedure for small area prevalence estimation. Prevalence estimates are of particular importance for major chronic illnesses such as cardiovascular disease. Methods A multilevel prevalence model for cardiovascular outcomes is proposed that incorporates both survey information on patient risk factors and the effects of geographic location. The model is applied to derive micro area prevalence estimates, specifically estimates of cardiovascular disease for Zip Code Tabulation Areas in the USA. The model incorporates prevalence differentials by age, sex, ethnicity and educational attainment from the 2005 Behavioral Risk Factor Surveillance System survey. Influences of geographic context are modelled at both county and state level, with the county effects relating to poverty and urbanity. State level influences are modelled using a random effects approach that allows both for spatial correlation and spatial isolates. Results To assess the importance of geographic variables, three types of model are compared: a model with person level variables only; a model with geographic effects that do not interact with person attributes; and a full model, allowing for state level random effects that differ by ethnicity. There is clear evidence that geographic effects improve statistical fit. Conclusion Geographic variations in disease prevalence partly reflect the demographic composition of area populations. However, prevalence variations may also show distinct geographic 'contextual' effects. 
The present study demonstrates by formal modelling methods that improved explanation is obtained by allowing for distinct geographic effects (for counties and states) and for interaction between geographic and person variables. Thus an appropriate methodology to estimate prevalence at small area level should include geographic effects as well as person level demographic variables. PMID:19183458
Role of slack variables in quasi-Newton methods for constrained optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tapia, R.A.
In constrained optimization the technique of converting an inequality constraint into an equality constraint by the addition of a squared slack variable is well known but rarely used. In choosing an active constraint philosophy over the slack variable approach, researchers quickly justify their choice with the standard criticisms: the slack variable approach increases the dimension of the problem, is numerically unstable, and gives rise to singular systems. It is shown that these criticisms of the slack variable approach need not apply and the two seemingly distinct approaches are actually very closely related. In fact, the squared slack variable formulation can be used to develop a superior and more comprehensive active constraint philosophy.
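The conversion the abstract refers to is mechanical: an inequality constraint g(x) ≤ 0 becomes the equality g(x) + s² = 0 at the cost of one extra variable s per constraint. A minimal sketch:

```python
# Sketch: squared-slack conversion of an inequality constraint.
import math

def to_equality(g):
    """Wrap inequality constraint g(x) <= 0 as h(x, s) = g(x) + s^2 = 0."""
    return lambda x, s: g(x) + s * s

def feasible_slack(g, x):
    """If g(x) <= 0, return s >= 0 satisfying g(x) + s^2 = 0; else None."""
    v = g(x)
    return math.sqrt(-v) if v <= 0 else None
```

Note that any real s makes s² ≥ 0, so h(x, s) = 0 is satisfiable exactly when g(x) ≤ 0; this is why the equality-constrained reformulation has the same feasible set in x as the original inequality.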
Evaluation of variable selection methods for random forests and omics data sets.
Degenhardt, Frauke; Seifert, Stephan; Szymczak, Silke
2017-10-16
Machine learning methods and in particular random forests are promising approaches for prediction based on high dimensional omics data sets. They provide variable importance measures to rank predictors according to their predictive power. If building a prediction model is the main goal of a study, often a minimal set of variables with good prediction performance is selected. However, if the objective is the identification of involved variables to find active networks and pathways, approaches that aim to select all relevant variables should be preferred. We evaluated several variable selection procedures based on simulated data as well as publicly available experimental methylation and gene expression data. Our comparison included the Boruta algorithm, the Vita method, recurrent relative variable importance, a permutation approach and its parametric variant (Altmann) as well as recursive feature elimination (RFE). In our simulation studies, Boruta was the most powerful approach, followed closely by the Vita method. Both approaches demonstrated similar stability in variable selection, while Vita was the most robust approach under a pure null model without any predictor variables related to the outcome. In the analysis of the different experimental data sets, Vita demonstrated slightly better stability in variable selection and was less computationally intensive than Boruta. In conclusion, we recommend the Boruta and Vita approaches for the analysis of high-dimensional data sets. Vita is considerably faster than Boruta and thus more suitable for large data sets, but only Boruta can also be applied in low-dimensional settings. © The Author 2017. Published by Oxford University Press.
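The shadow-feature idea behind Boruta can be sketched compactly: create a scrambled "shadow" copy of each predictor, score real and shadow features with the same importance measure, and keep only real features that beat the best shadow. Two deliberate simplifications here: importance is absolute correlation with the outcome (a stand-in for random-forest importance), and the shadows are fixed cyclic shifts rather than Boruta's random permutations, so the sketch is deterministic:

```python
# Sketch: shadow-feature ("Boruta-style") relevance filtering.

def abs_corr(xs, ys):
    """Absolute Pearson correlation; 0.0 if either input is constant."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return abs(cov / (vx * vy) ** 0.5) if vx and vy else 0.0

def select_relevant(features, ys):
    """features: {name: [values]}. A feature is kept if its importance
    beats the best shadow importance (shadow = cyclic shift of the feature,
    a deterministic stand-in for Boruta's random permutations)."""
    shadow_best = max(
        abs_corr(vals[1:] + vals[:1], ys) for vals in features.values()
    )
    return [name for name, vals in features.items()
            if abs_corr(vals, ys) > shadow_best]
```

The shadow scores calibrate what importance a truly irrelevant variable can achieve by chance; real Boruta iterates this comparison over many forests with statistical testing.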
Nauck, Bernhard
2014-09-01
For explaining cross-cultural differences in fertility behavior, this paper conjoins three complementary approaches: the 'demand'-based economic theory of fertility (ETF), a revised version of the 'supply'-based 'value-of-children' (VOC) approach as a special theory of the general social theory of social production functions, and the framing theory of variable rationality. A comprehensive model is specified that encompasses the variable efficiency of having children for the optimization of physical well-being and of social esteem of (potential) parents; it also accounts for the variable rationality of fertility decisions. The model is tested with a data set that comprises information on VOC and fertility of women within the social settings of 18 areas (People's Republic of China, North and South India, Indonesia, Palestine, Israel, Turkey, Ghana, South Africa, East and West Germany, the Czech Republic, France, Russia, Poland, Estonia, the United States and Jamaica). Latent class analysis is used to establish a measurement model for the costs and benefits of children and to analyze area differences by a two-level multinomial model. Two-level Cox regressions are used to estimate the effects of perceived costs and benefits of children, individual resources and context opportunities, with births of different parity as dependent variables. This simultaneous test in a cross-cultural context goes beyond the current state of fertility research and provides evidence about the cross-cultural validity of the model, the systematic effects of VOC on fertility and the changing rationality of fertility decisions during demographic transition and socio-economic change. Copyright © 2014 Elsevier Ltd. All rights reserved.
Goal-Function Tree Modeling for Systems Engineering and Fault Management
NASA Technical Reports Server (NTRS)
Patterson, Jonathan D.; Johnson, Stephen B.
2013-01-01
The draft NASA Fault Management (FM) Handbook (2012) states that Fault Management (FM) is a "part of systems engineering", and that it "demands a system-level perspective" (NASAHDBK- 1002, 7). What, exactly, is the relationship between systems engineering and FM? To NASA, systems engineering (SE) is "the art and science of developing an operable system capable of meeting requirements within often opposed constraints" (NASA/SP-2007-6105, 3). Systems engineering starts with the elucidation and development of requirements, which set the goals that the system is to achieve. To achieve these goals, the systems engineer typically defines functions, and the functions in turn are the basis for design trades to determine the best means to perform the functions. System Health Management (SHM), by contrast, defines "the capabilities of a system that preserve the system's ability to function as intended" (Johnson et al., 2011, 3). Fault Management, in turn, is the operational subset of SHM, which detects current or future failures, and takes operational measures to prevent or respond to these failures. Failure, in turn, is the "unacceptable performance of intended function." (Johnson 2011, 605) Thus the relationship of SE to FM is that SE defines the functions and the design to perform those functions to meet system goals and requirements, while FM detects the inability to perform those functions and takes action. SHM and FM are in essence "the dark side" of SE. For every function to be performed (SE), there is the possibility that it is not successfully performed (SHM); FM defines the means to operationally detect and respond to this lack of success. We can also describe this in terms of goals: for every goal to be achieved, there is the possibility that it is not achieved; FM defines the means to operationally detect and respond to this inability to achieve the goal. 
This brief description of relationships between SE, SHM, and FM provides hints toward a modeling approach that formally connects the nominal (SE) and off-nominal (SHM and FM) aspects of functions and designs. This paper describes a formal modeling approach to the initial phases of the development process that integrates the nominal and off-nominal perspectives in a model that unites SE goals and functions with the failure to achieve goals and functions (SHM/FM). This methodology and corresponding model, known as a Goal-Function Tree (GFT), provides a means to represent, decompose, and elaborate system goals and functions in a rigorous manner that connects directly to design through use of state variables that translate natural language requirements and goals into logical-physical state language. The state variable-based approach also provides the means to directly connect FM to the design, by specifying the range in which state variables must be controlled to achieve goals, and conversely, the failures that exist if system behavior goes out of range. This in turn allows the systems engineers and SHM/FM engineers to determine which state variables to monitor, and what action(s) to take should the system fail to achieve that goal. In sum, the GFT representation provides a unified approach to early-phase SE and FM development. This representation and methodology has been successfully developed and implemented using Systems Modeling Language (SysML) on the NASA Space Launch System (SLS) Program. It enabled early design trade studies of failure detection coverage to ensure complete detection coverage of all crew-threatening failures. The representation maps directly both to FM algorithm designs, and to failure scenario definitions needed for design analysis and testing. 
The GFT representation provided the basis for mapping of abort triggers into scenarios, both needed for initial, and successful quantitative analyses of abort effectiveness (detection and response to crew-threatening events).
Superconducting fault current-limiter with variable shunt impedance
Llambes, Juan Carlos H; Xiong, Xuming
2013-11-19
A superconducting fault current-limiter is provided, including a superconducting element configured to resistively or inductively limit a fault current, and one or more variable-impedance shunts electrically coupled in parallel with the superconducting element. The variable-impedance shunt(s) is configured to present a first impedance during a superconducting state of the superconducting element and a second impedance during a normal resistive state of the superconducting element. The superconducting element transitions from the superconducting state to the normal resistive state responsive to the fault current, and responsive thereto, the variable-impedance shunt(s) transitions from the first to the second impedance. The second impedance of the variable-impedance shunt(s) is a lower impedance than the first impedance, which facilitates current flow through the variable-impedance shunt(s) during a recovery transition of the superconducting element from the normal resistive state to the superconducting state, and thus, facilitates recovery of the superconducting element under load.
Beckerman, Bernardo S; Jerrett, Michael; Serre, Marc; Martin, Randall V; Lee, Seung-Jae; van Donkelaar, Aaron; Ross, Zev; Su, Jason; Burnett, Richard T
2013-07-02
Airborne fine particulate matter exhibits spatiotemporal variability at multiple scales, which presents challenges to estimating exposures for health effects assessment. Here we created a model to predict ambient particulate matter less than 2.5 μm in aerodynamic diameter (PM2.5) across the contiguous United States to be applied to health effects modeling. We developed a hybrid approach combining a land use regression model (LUR) selected with a machine learning method, and Bayesian Maximum Entropy (BME) interpolation of the LUR space-time residuals. The PM2.5 data set included 104,172 monthly observations at 1464 monitoring locations with approximately 10% of locations reserved for cross-validation. LUR models were based on remote sensing estimates of PM2.5, land use and traffic indicators. Normalized cross-validated R2 values for LUR were 0.63 and 0.11 with and without remote sensing, respectively, suggesting remote sensing is a strong predictor of ground-level concentrations. In the models including the BME interpolation of the residuals, cross-validated R2 values were 0.79 for both configurations; the model without remotely sensed data described more fine-scale variation than the model including remote sensing. Our results suggest that our modeling framework can predict ground-level concentrations of PM2.5 at multiple scales over the contiguous U.S.
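The two-stage hybrid structure, a regression model for the large-scale signal plus spatial interpolation of its residuals at monitoring sites, can be sketched as follows. Inverse-distance weighting stands in for BME, and the covariates and coordinates are synthetic:

```python
# Sketch: LUR-style regression prediction plus interpolation of the
# monitor residuals (IDW here as a simple stand-in for BME).

def idw(points, x, y, power=2.0):
    """Inverse-distance-weighted average of (px, py, value) residual points."""
    num = den = 0.0
    for px, py, v in points:
        d2 = (px - x) ** 2 + (py - y) ** 2
        if d2 == 0:
            return v                  # exactly at a monitor: use its residual
        w = d2 ** (-power / 2.0)
        num += w * v
        den += w
    return num / den

def hybrid_predict(lur, residual_points, covariates, x, y):
    """Regression prediction corrected by interpolated local residuals."""
    return lur(covariates) + idw(residual_points, x, y)
```

The residual surface captures fine-scale variation the regression misses, which is why the paper's cross-validated R2 rises once the residual interpolation stage is added.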
Di, Qian; Rowland, Sebastian; Koutrakis, Petros; Schwartz, Joel
2017-01-01
Ground-level ozone is an important atmospheric oxidant, which exhibits considerable spatial and temporal variability in its concentration level. Existing modeling approaches for ground-level ozone include chemical transport models, land-use regression, Kriging, and data fusion of chemical transport models with monitoring data. Each of these methods has both strengths and weaknesses. Combining those complementary approaches could improve model performance. Meanwhile, satellite-based total column ozone, combined with ozone vertical profile, is another potential input. We propose a hybrid model that integrates the above variables to achieve spatially and temporally resolved exposure assessments for ground-level ozone. We used a neural network for its capacity to model interactions and nonlinearity. Convolutional layers, which use convolution kernels to aggregate nearby information, were added to the neural network to account for spatial and temporal autocorrelation. We trained the model with AQS 8-hour daily maximum ozone in the continental United States from 2000 to 2012 and tested it with left out monitoring sites. Cross-validated R2 on the left out monitoring sites ranged from 0.74 to 0.80 (mean 0.76) for predictions on 1 km×1 km grid cells, which indicates good model performance. Model performance remains good even at low ozone concentrations. The prediction results facilitate epidemiological studies to assess the health effect of ozone in the long term and the short term. PMID:27332675
Helping Water Utilities Grapple with Climate Change
NASA Astrophysics Data System (ADS)
Yates, D.; Gracely, B.; Miller, K.
2008-12-01
The Water Research Foundation (WRF), serving the drinking water industry, and the National Center for Atmospheric Research (NCAR) are collaborating on an effort to develop and implement locally-relevant, structured processes to help water utilities consider the impacts and adaptation options that climate variability and change might have on their water systems. Adopting a case-study approach, the structured process includes 1) a problem definition phase, focused on identifying goals, information needs, utility vulnerabilities and possible adaptation options in the face of climate and hydrologic uncertainty; 2) developing and/or modifying system-specific Integrated Water Resource Management (IWRM) models and conducting sensitivity analysis to identify critical variables; 3) developing probabilistic climate change scenarios focused on exploring uncertainties identified as important in the sensitivity analysis in step 2; and 4) implementing the structured process and examining approaches to decision making under uncertainty. Collaborators include eight drinking water utilities and two state agencies. The utilities are 1) the Inland Empire Utility Agency, CA; 2) the El Dorado Irrigation District, Placerville, CA; 3) the Portland Water Bureau, Portland, OR; 4) Colorado Springs Utilities, Colorado Springs, CO; 5) Cincinnati Water, Cincinnati, OH; 6) the Massachusetts Water Resources Authority (MWRA), Boston, MA; 7) Durham Water, Durham, NC; and 8) Palm Beach County Water (PBCW), Palm Beach, FL. The California Department of Water Resources and the Colorado Water Conservation Board are the collaborating state agencies.
Hedman, Erik; Andersson, Erik; Lekander, Mats; Ljótsson, Brjánn
2015-01-01
Severe health anxiety can be effectively treated with exposure-based Internet-delivered cognitive behavior therapy (ICBT), but information about which factors predict outcome is scarce. Using data from a recently conducted RCT comparing ICBT (n = 79) with Internet-delivered behavioral stress management (IBSM) (n = 79), the present study investigated predictors of treatment outcome. Analyses were conducted using a two-step linear regression approach, and the dependent variable was operationalized both as end-state health anxiety at post-treatment and as baseline-to-post-treatment improvement. A hypothesis-driven approach was used where predictors expected to influence outcome were based on a previous predictor study by our research group. As hypothesized, the results showed that baseline health anxiety and treatment adherence predicted both end-state health anxiety and improvement. In addition, anxiety sensitivity, treatment credibility, and working alliance were significant predictors of health anxiety improvement. Demographic variables, i.e. age, gender, marital status, computer skills, educational level, and having children, had no significant predictive value. We conclude that it is possible to predict a substantial proportion of the outcome variance in ICBT and IBSM for severe health anxiety. The findings of the present study can be of high clinical value as they provide information about factors of importance for outcome in the treatment of severe health anxiety. Copyright © 2014 Elsevier Ltd. All rights reserved.
Monitoring the Variability of the Supermassive Black Hole at the Galactic Center
NASA Astrophysics Data System (ADS)
Chen, Zhuo; Do, Tuan; Witzel, Gunther; Ghez, Andrea; Schödel, Rainer; Gallego, Laly; Sitarski, Breann; Lu, Jessica; Becklin, Eric; Dehghanfar, Arezu; Gautam, Abhimat; Hees, Aurelien; Jia, Siyao; Matthews, Keith; Morris, Mark
2018-01-01
The variability of the supermassive black hole at the center of the Galaxy, Sgr A*, has been widely studied over the years in a variety of wavelengths. However, near-infrared studies of the variability of Sgr A* only began in 2003 with the then new technique Adaptive Optics (AO) as speckle shift-and-add data did not reach sufficient depth to detect Sgr A* (K < 16). We apply our new speckle holography approach to the analysis of data obtained between 1995 and 2005 with the speckle imaging technique (reaching K < 17) to re-examine the variability of Sgr A* in an effort to explore the Sgr A* accretion flow over a time baseline of 20 years. We find that the average magnitude of Sgr A* from 1995 to 2005 (K = 16.49 +/- 0.086) agrees very well with the average AO magnitude from 2005-2007 (Kp = 16.3). Our detections of Sgr A* are the first reported prior to 2002. In particular, a significant increase of power in the PSD between the main correlation timescale of ~300 min and 20 years can be excluded. This renders 300 min the dominant timescale and setting the variability state of Sgr A* in the time since 1995 apart from states discussed in the context of the X-ray echoes in the surrounding molecular clouds (for which extended bright periods of several years are required). Finally, we note that the 2001 periapse passage of the extended, dusty object G1, a source similar to G2, had no apparent effect on the emissivity of the accretion flow onto Sgr A*.
Repeat-until-success cubic phase gate for universal continuous-variable quantum computation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marshall, Kevin; Pooser, Raphael; Siopsis, George
2015-03-24
We report that to achieve universal quantum computation using continuous variables, one needs to jump out of the set of Gaussian operations and have a non-Gaussian element, such as the cubic phase gate. However, such a gate is currently very difficult to implement in practice. Here we introduce an experimentally viable “repeat-until-success” approach to generating the cubic phase gate, which is achieved using sequential photon subtractions and Gaussian operations. Ultimately, we find that our scheme offers benefits in terms of the expected time until success, as well as the fact that we do not require any complex off-line resource state, although we require a primitive quantum memory.
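In its simplest abstraction, the "repeat-until-success" element reduces to repeating a probabilistic step until it succeeds, so the expected number of attempts for per-trial success probability p is 1/p (geometric distribution). The sketch below checks that bookkeeping with a deterministic outcome stream and does not model any quantum optics:

```python
# Sketch: repeat-until-success accounting (geometric-distribution view).

def attempts_until_success(outcomes):
    """Count attempts consumed from an iterable of booleans up to and
    including the first True."""
    for i, ok in enumerate(outcomes, start=1):
        if ok:
            return i
    raise ValueError("no success in outcome stream")

def expected_attempts(p):
    """Expected number of attempts when each trial succeeds with probability p."""
    return 1.0 / p
```

The scheme's advantage in "expected time until success" is then a statement about raising the effective per-attempt success probability p, which directly lowers 1/p.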
Quantum information processing with a travelling wave of light
NASA Astrophysics Data System (ADS)
Serikawa, Takahiro; Shiozawa, Yu; Ogawa, Hisashi; Takanashi, Naoto; Takeda, Shuntaro; Yoshikawa, Jun-ichi; Furusawa, Akira
2018-02-01
We exploit quantum information processing on a traveling wave of light, expecting emancipation from thermal noise, easy coupling to fiber communication, and potentially high operation speed. Although optical memories are technically challenging, we have an alternative approach to apply multi-step operations on traveling light, that is, continuous-variable one-way computation. So far our achievements include generation of a one-million-mode entangled chain in the time domain, mode engineering of nonlinear resource states, and real-time nonlinear feedforward. Although they are implemented with free-space optics, we are also investigating photonic integration and have performed quantum teleportation with a passive linear waveguide chip as a demonstration of entangling, measurement, and feedforward. We also suggest a loop-based architecture as another model of continuous-variable computing.
Defect-phase-dynamics approach to statistical domain-growth problem of clock models
NASA Technical Reports Server (NTRS)
Kawasaki, K.
1985-01-01
The growth of statistical domains in quenched Ising-like p-state clock models with p = 3 or more is investigated theoretically, reformulating the analysis of Ohta et al. (1982) in terms of a phase variable and studying the dynamics of defects introduced into the phase field when the phase variable becomes multivalued. The resulting defect/phase domain-growth equation is applied to the interpretation of Monte Carlo simulations in two dimensions (Kaski and Gunton, 1983; Grest and Srolovitz, 1984), and problems encountered in the analysis of related Potts models are discussed. In the two-dimensional case, the problem is essentially that of a purely dissipative Coulomb gas, with a √t growth law complicated by vertex-pinning effects at small t.
Information Leakage Analysis by Abstract Interpretation
NASA Astrophysics Data System (ADS)
Zanioli, Matteo; Cortesi, Agostino
Protecting the confidentiality of information stored in a computer system or transmitted over a public network is a relevant problem in computer security. The approach of information flow analysis involves performing a static analysis of the program with the aim of proving that there will be no leaks of sensitive information. In this paper we propose a new domain that combines variable dependency analysis, based on propositional formulas, and variables' value analysis, based on polyhedra. The resulting analysis is strictly more accurate than the state-of-the-art abstract-interpretation-based analyses for information leakage detection. Its modular construction allows one to deal with the tradeoff between efficiency and accuracy by tuning the granularity of the abstraction and the complexity of the abstract operators.
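The dependency half of such an analysis can be illustrated with a minimal taint-propagation sketch. The program representation (a list of assignments), the variable names, and the set-based abstraction below are illustrative assumptions, not the paper's actual propositional-formula domain.

```python
def analyze(statements, secrets):
    """Propagate dependency sets through simple assignments.

    `statements` is a list of (target, sources) pairs; `secrets` is the set
    of variables initially holding confidential data. Returns the set of
    variables that transitively depend on some secret.
    """
    deps = {s: {s} for s in secrets}  # var -> set of secrets it depends on
    for target, sources in statements:
        flowing = set()
        for src in sources:
            flowing |= deps.get(src, set())  # union of source dependencies
        deps[target] = flowing
    return {v for v, d in deps.items() if d}

# Example: `low = high + 1` leaks, `out = low` propagates the leak,
# while `clean = pub` stays independent of the secret.
prog = [("low", ["high"]), ("out", ["low"]), ("clean", ["pub"])]
tainted = analyze(prog, {"high"})
```

A value analysis (here, polyhedra) would refine this by ruling out dependencies that cannot actually influence observable values, which is where the accuracy gain over pure dependency tracking comes from.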
An Analytic Approach to Modeling Land-Atmosphere Interaction: 1. Construct and Equilibrium Behavior
NASA Astrophysics Data System (ADS)
Brubaker, Kaye L.; Entekhabi, Dara
1995-03-01
A four-variable land-atmosphere model is developed to investigate the coupled exchanges of water and energy between the land surface and atmosphere and the role of these exchanges in the statistical behavior of continental climates. The land-atmosphere system is substantially simplified and formulated as a set of ordinary differential equations that, with the addition of random noise, are suitable for analysis in the form of the multivariate Itô equation. The model treats the soil layer and the near-surface atmosphere as reservoirs with storage capacities for heat and water. The transfers between these reservoirs are regulated by four states: soil saturation, soil temperature, air specific humidity, and air potential temperature. The atmospheric reservoir is treated as a turbulently mixed boundary layer of fixed depth. Heat and moisture advection, precipitation, and layer-top air entrainment are parameterized. The system is forced externally by solar radiation and the lateral advection of air and water mass. The remaining energy and water mass exchanges are expressed in terms of the state variables. The model development and equilibrium solutions are presented. Although comparisons between observed data and steady-state model results are inexact, the model appears to do a reasonable job of partitioning net radiation into sensible and latent heat flux in appropriate proportions for bare-soil midlatitude summer conditions. Subsequent work will introduce randomness into the forcing terms to investigate the effect of water-energy coupling and land-atmosphere interaction on variability and persistence in the climatic system.
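The structure of such a four-state ODE system can be sketched with a toy forward integration. The linear relaxation dynamics, parameter values, and forcing below are purely illustrative assumptions; the actual model's flux parameterizations are not reproduced here.

```python
import numpy as np

def step(x, forcing, tau, dt):
    """One explicit-Euler step: each state relaxes toward its forcing
    with its own characteristic timescale (a hypothetical dynamics)."""
    return x + dt * (forcing - x) / tau

# States: [soil saturation, soil temp (K), specific humidity (kg/kg),
#          potential temp (K)] -- initial and equilibrium values are made up.
x = np.array([0.3, 290.0, 0.008, 300.0])
forcing = np.array([0.5, 295.0, 0.010, 302.0])
tau = np.array([10.0, 2.0, 1.0, 1.0])  # relaxation timescales (days)

for _ in range(1000):
    x = step(x, forcing, tau, dt=0.1)  # integrate 100 days
```

Adding a noise term to each step would turn this deterministic relaxation into the stochastic (Itô) system the abstract describes, with the equilibrium solution corresponding to the fixed point reached above.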
State estimation and prediction using clustered particle filters.
Lee, Yoonsang; Majda, Andrew J
2016-12-20
Particle filtering is an essential tool to improve uncertain model predictions by incorporating noisy observational data from complex systems with non-Gaussian features. A class of particle filters, clustered particle filters, is introduced for high-dimensional nonlinear systems; it uses relatively few particles compared with the standard particle filter. The clustered particle filter captures non-Gaussian features of the true signal, which are typical in complex nonlinear dynamical systems such as geophysical systems. The method is also robust in the difficult regime of high-quality sparse and infrequent observations. The key features of the clustered particle filter are coarse-grained localization through clustering of the state variables and particle adjustment to stabilize the method; each observation affects only neighboring state variables through clustering, and particles are adjusted to prevent particle collapse due to high-quality observations. The clustered particle filter is tested on the 40-dimensional Lorenz 96 model in several dynamical regimes, including strongly non-Gaussian statistics. The clustered particle filter shows robust skill both in achieving accurate filter results and in capturing non-Gaussian statistics of the true signal. It is further extended to multiscale data assimilation, which provides the large-scale estimation by combining a cheap reduced-order forecast model and mixed observations of the large- and small-scale variables. This approach enables the use of a larger number of particles due to the computational savings in the forecast model. The multiscale clustered particle filter is tested for one-dimensional dispersive wave turbulence using a forecast model with model errors.
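The coarse-grained localization idea can be sketched as follows: each observation reweights particles using only the state variables in its own cluster, so weight degeneracy is confined to low-dimensional blocks. The cluster layout, dimensions, and noise levels below are illustrative assumptions, not the paper's exact algorithm (in particular, the stabilizing particle-adjustment step is omitted).

```python
import numpy as np

rng = np.random.default_rng(0)
n_particles, dim = 100, 8
clusters = [np.arange(0, 4), np.arange(4, 8)]  # two blocks of state variables

particles = rng.normal(size=(n_particles, dim))  # prior ensemble
truth = np.zeros(dim)
obs = truth + rng.normal(scale=0.1, size=dim)    # high-quality observation
obs_var = 0.1 ** 2

estimate = np.empty(dim)
for idx in clusters:
    # Localization: weights depend only on this cluster's variables,
    # so a single observation cannot collapse the global ensemble.
    err = ((particles[:, idx] - obs[idx]) ** 2).sum(axis=1)
    w = np.exp(-0.5 * err / obs_var)
    w /= w.sum()                       # normalized per-cluster weights
    estimate[idx] = w @ particles[:, idx]  # posterior-mean estimate per block
```

With a single global weight over all 8 dimensions, the effective ensemble size would shrink much faster than with two 4-dimensional blocks, which is the motivation for clustering.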